* Re: [dpdk-dev] [PATCH] ethdev: fix ABI breakage in lro code
@ 2015-08-03 8:41 8% ` Thomas Monjalon
2015-08-03 12:53 7% ` Neil Horman
0 siblings, 1 reply; 200+ results
From: Thomas Monjalon @ 2015-08-03 8:41 UTC (permalink / raw)
To: Chao Zhu; +Cc: dev
Chao,
The original need was to check with the ABI checker tool that the ABI
was not broken on POWER BE by this patch.
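A successful build on POWER BE does not by itself prove ABI compatibility. Below is a minimal, hypothetical C sketch (the structs are made up and are not the real rte_eth_dev_data) of the kind of layout change the ABI checker is meant to catch: inserting a member in the middle of an exported struct shifts the offsets of every later member, even though the new sources still compile cleanly.

/*
 * Illustration only: "old_dev_data" and "new_dev_data" are hypothetical.
 * A binary built against the old layout reads the wrong bytes from the
 * new one, although both versions compile without any warning.
 */
#include <stddef.h>
#include <stdio.h>

struct old_dev_data {
    unsigned int nb_rx_queues;
    unsigned int nb_tx_queues;
};

struct new_dev_data {
    unsigned int nb_rx_queues;
    unsigned int lro;           /* field inserted here breaks the ABI */
    unsigned int nb_tx_queues;  /* offset moves from 4 to 8 (4-byte int) */
};

int main(void)
{
    printf("old nb_tx_queues offset: %zu\n",
           offsetof(struct old_dev_data, nb_tx_queues));
    printf("new nb_tx_queues offset: %zu\n",
           offsetof(struct new_dev_data, nb_tx_queues));
    return 0;
}

Running the ABI checker over the old and new shared objects reports exactly this kind of member-offset change, which is why the request is for a checker run rather than only a compile test.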
2015-08-03 11:45, Chao Zhu:
> Confirmed. It can compile on Power8 Big Endian.
> Thank you!
>
> On 2015/8/3 10:39, Chao Zhu wrote:
> >
> > Really sorry for the delay.
> > Originally, I thought the email was asking about the ABI checking tools
> > on Power, which I'm not so familiar with, so it took me some time to
> > find a solution. For Power little endian, the build is OK. I'll give
> > feedback once I have tried the Big Endian compilation.
[...]
> >>>>>> I presume the ABI checker stopped complaining about this with the
> >>>>>> patch, yes?
> >>>>> Hi Neil,
> >>>>>
> >>>>> Yes, I replied about that in the previous thread.
> >>>>>
> >>>> Thank you, I'll ack as soon as Chao confirms it's not a problem on
> >>>> ppc. Neil
^ permalink raw reply [relevance 8%]
* Re: [dpdk-dev] [PATCH] ethdev: fix ABI breakage in lro code
2015-08-03 8:41 8% ` Thomas Monjalon
@ 2015-08-03 12:53 7% ` Neil Horman
0 siblings, 0 replies; 200+ results
From: Neil Horman @ 2015-08-03 12:53 UTC (permalink / raw)
To: Thomas Monjalon; +Cc: dev
On Mon, Aug 03, 2015 at 10:41:47AM +0200, Thomas Monjalon wrote:
> Chao,
> The original need was to check with the ABI checker tool that the ABI
> was not broken on POWER BE by this patch.
>
Yes, it's not compilation that's the concern, it's ABI compatibility.
Neil
>
> 2015-08-03 11:45, Chao Zhu:
> > Confirmed. It can compile on Power8 Big Endian.
> > Thank you!
> >
> > On 2015/8/3 10:39, Chao Zhu wrote:
> > >
> > > Really sorry for the delay.
> > > Originally, I thought the email was asking about the ABI checking tools
> > > on Power, which I'm not so familiar with, so it took me some time to
> > > find a solution. For Power little endian, the build is OK. I'll give
> > > feedback once I have tried the Big Endian compilation.
>
> [...]
>
> > >>>>>> I presume the ABI checker stopped complaining about this with the
> > >>>>>> patch, yes?
> > >>>>> Hi Neil,
> > >>>>>
> > >>>>> Yes, I replied about that in the previous thread.
> > >>>>>
> > >>>> Thank you, I'll ack as soon as Chao confirms it's not a problem on
> > >>>> ppc. Neil
>
>
^ permalink raw reply [relevance 7%]
* [dpdk-dev] [dpdk-announce] release candidate 2.1.0-rc3
@ 2015-08-03 22:47 4% Thomas Monjalon
0 siblings, 0 replies; 200+ results
From: Thomas Monjalon @ 2015-08-03 22:47 UTC (permalink / raw)
To: announce
A new DPDK release candidate is ready for testing:
http://dpdk.org/browse/dpdk/tag/?id=v2.1.0-rc3
We are very close to the major release 2.1.
The next release candidate should mostly include doc updates,
especially patches for the release notes.
Changelog (main changes since 2.1.0-rc2)
- enhancements:
* check mbuf private area alignment
* ixgbe vector support more offload flags
- fixes for:
* build
* ABI
* logs
* socket number without NUMA
* hash crash
* lpm depth small entry
* timer race condition
* ppc timer little endian
* ieee1588 timestamping
* mbuf private size odd
* ixgbe big endian
* ixgbe scalar Rx
* ixgbe vector Rx
* ixgbe stats
* i40evf crash
* fm10k queue disabling
* mlx4 dependency
Please help to check the ABI and the release notes.
Thank you
^ permalink raw reply [relevance 4%]
* Re: [dpdk-dev] [PATCH] doc: announce ABI change for rte_eth_fdir_filter
2015-08-04 8:52 4% ` Mcnamara, John
@ 2015-08-04 8:53 4% ` Mcnamara, John
1 sibling, 0 replies; 200+ results
From: Mcnamara, John @ 2015-08-04 8:53 UTC (permalink / raw)
To: Wu, Jingjing, dev
> -----Original Message-----
> From: dev [mailto:dev-bounces@dpdk.org] On Behalf Of Jingjing Wu
> Sent: Monday, July 20, 2015 8:04 AM
> To: dev@dpdk.org
> Subject: [dpdk-dev] [PATCH] doc: announce ABI change for
> rte_eth_fdir_filter
>
> To fix the FVL flow director issue for SCTP flows, rte_eth_fdir_filter
> needs to be changed to support the SCTP flow key extension. This patch announces
> the ABI deprecation.
>
> Signed-off-by: jingjing.wu <jingjing.wu@intel.com>
Acked-by: John McNamara <john.mcnamara@intel.com>
^ permalink raw reply [relevance 4%]
* Re: [dpdk-dev] [PATCH v2] doc: announce ABI change of rte_eth_fdir_filter, rte_eth_fdir_masks
@ 2015-08-04 8:54 7% ` Mcnamara, John
2015-08-04 8:56 4% ` Mcnamara, John
1 sibling, 0 replies; 200+ results
From: Mcnamara, John @ 2015-08-04 8:54 UTC (permalink / raw)
To: Lu, Wenzhuo, dev
> -----Original Message-----
> From: dev [mailto:dev-bounces@dpdk.org] On Behalf Of Wenzhuo Lu
> Sent: Friday, July 10, 2015 3:24 AM
> To: dev@dpdk.org
> Subject: [dpdk-dev] [PATCH v2] doc: announce ABI change of
> rte_eth_fdir_filter, rte_eth_fdir_masks
>
> The x550 supports 2 new flow director modes, MAC VLAN and Cloud. The MAC
> VLAN mode means the MAC and VLAN are monitored. The Cloud mode is for
> VxLAN and NVGRE, where the tunnel type, TNI/VNI, inner MAC and inner VLAN
> are monitored. So there are a few new lookup fields for these 2 new modes,
> such as MAC, tunnel type and TNI/VNI. We have to change the ABI to support
> these new lookup fields.
>
> v2 changes:
> * Correct the names of the structures.
> * Wrap the words.
> * Add explanation for the new modes.
>
> Signed-off-by: Wenzhuo Lu <wenzhuo.lu@intel.com>
doc: announce ABI change of rte_eth_fdir_filter
^ permalink raw reply [relevance 7%]
* Re: [dpdk-dev] [PATCH] doc: announce ABI change for rte_eth_fdir_filter
@ 2015-08-04 8:52 4% ` Mcnamara, John
2015-08-12 10:38 4% ` Thomas Monjalon
2015-08-04 8:53 4% ` Mcnamara, John
1 sibling, 1 reply; 200+ results
From: Mcnamara, John @ 2015-08-04 8:52 UTC (permalink / raw)
To: Wu, Jingjing, dev
> -----Original Message-----
> From: dev [mailto:dev-bounces@dpdk.org] On Behalf Of Jingjing Wu
> Sent: Monday, July 20, 2015 8:04 AM
> To: dev@dpdk.org
> Subject: [dpdk-dev] [PATCH] doc: announce ABI change for
> rte_eth_fdir_filter
>
> To fix the FVL flow director issue for SCTP flows, rte_eth_fdir_filter
> needs to be changed to support the SCTP flow key extension. This patch announces
> the ABI deprecation.
>
> Signed-off-by: jingjing.wu <jingjing.wu@intel.com>
Acked-by: John McNamara <john.mcnamara@intel.com>
^ permalink raw reply [relevance 4%]
* Re: [dpdk-dev] [PATCH v2] doc: announce ABI change of rte_eth_fdir_filter, rte_eth_fdir_masks
2015-08-04 8:54 7% ` Mcnamara, John
@ 2015-08-04 8:56 4% ` Mcnamara, John
2015-08-12 14:19 4% ` Thomas Monjalon
1 sibling, 1 reply; 200+ results
From: Mcnamara, John @ 2015-08-04 8:56 UTC (permalink / raw)
To: Lu, Wenzhuo, dev
> -----Original Message-----
> From: dev [mailto:dev-bounces@dpdk.org] On Behalf Of Wenzhuo Lu
> Sent: Friday, July 10, 2015 3:24 AM
> To: dev@dpdk.org
> Subject: [dpdk-dev] [PATCH v2] doc: announce ABI change of
> rte_eth_fdir_filter, rte_eth_fdir_masks
>
> The x550 supports 2 new flow director modes, MAC VLAN and Cloud. The MAC
> VLAN mode means the MAC and VLAN are monitored. The Cloud mode is for
> VxLAN and NVGRE, where the tunnel type, TNI/VNI, inner MAC and inner VLAN
> are monitored. So there are a few new lookup fields for these 2 new modes,
> such as MAC, tunnel type and TNI/VNI. We have to change the ABI to support
> these new lookup fields.
>
> v2 changes:
> * Correct the names of the structures.
> * Wrap the words.
> * Add explanation for the new modes.
>
> Signed-off-by: Wenzhuo Lu <wenzhuo.lu@intel.com>
Acked-by: John McNamara <john.mcnamara@intel.com>
^ permalink raw reply [relevance 4%]
* [dpdk-dev] [PATCH] doc: announce ABI change for old flow director APIs removing
@ 2015-08-11 2:12 7% Jingjing Wu
2015-08-11 3:01 4% ` Zhang, Helin
2015-08-11 5:41 4% ` Liu, Jijiang
0 siblings, 2 replies; 200+ results
From: Jingjing Wu @ 2015-08-11 2:12 UTC (permalink / raw)
To: dev
The APIs for flow director filters have been replaced by rte_eth_dev_filter_ctrl
in previous releases. Enic, ixgbe and i40e have been switched to the filter_ctrl
API, so the old APIs are now unused and ready to be removed.
This patch announces the ABI change for removing these APIs.
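For reference, the replacement path is the generic filter control API. The sketch below is not part of this patch; the helper name and the addresses are made up for illustration. It adds a perfect-match IPv4/UDP flow director rule through rte_eth_dev_filter_ctrl() instead of the deprecated rte_eth_dev_fdir_add_perfect_filter(). Field names follow rte_eth_ctrl.h of this release series, and byte-order conventions can vary between PMDs, so check both against the installed headers and driver documentation.

#include <string.h>
#include <rte_ethdev.h>
#include <rte_eth_ctrl.h>
#include <rte_byteorder.h>
#include <rte_ip.h>

/* Illustrative helper: add a perfect-match IPv4/UDP rule steering matching
 * packets to rx_queue, using the filter_ctrl API that replaces the old
 * rte_eth_dev_fdir_* calls listed in the deprecation notice below. */
static int
add_fdir_udp4_rule(uint8_t port_id, uint16_t rx_queue)
{
    struct rte_eth_fdir_filter filter;

    /* Make sure the PMD implements filter_ctrl for flow director. */
    if (rte_eth_dev_filter_supported(port_id, RTE_ETH_FILTER_FDIR) < 0)
        return -1;

    memset(&filter, 0, sizeof(filter));
    filter.soft_id = 1;
    filter.input.flow_type = RTE_ETH_FLOW_NONFRAG_IPV4_UDP;
    filter.input.flow.udp4_flow.ip.src_ip = rte_cpu_to_be_32(IPv4(192, 168, 0, 1));
    filter.input.flow.udp4_flow.ip.dst_ip = rte_cpu_to_be_32(IPv4(192, 168, 0, 2));
    filter.input.flow.udp4_flow.src_port = rte_cpu_to_be_16(1024);
    filter.input.flow.udp4_flow.dst_port = rte_cpu_to_be_16(2048);
    filter.action.rx_queue = rx_queue;
    filter.action.behavior = RTE_ETH_FDIR_ACCEPT;
    filter.action.report_status = RTE_ETH_FDIR_REPORT_ID;

    return rte_eth_dev_filter_ctrl(port_id, RTE_ETH_FILTER_FDIR,
                                   RTE_ETH_FILTER_ADD, &filter);
}

The same call with RTE_ETH_FILTER_DELETE removes the rule, which is the pattern that replaces the whole fdir_* family listed in the notice below.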
Signed-off-by: Jingjing Wu <jingjing.wu@intel.com>
---
doc/guides/rel_notes/deprecation.rst | 36 ++++++++++++++++++++++++++++++++++++
1 file changed, 36 insertions(+)
diff --git a/doc/guides/rel_notes/deprecation.rst b/doc/guides/rel_notes/deprecation.rst
index 5330d3b..b1be38f 100644
--- a/doc/guides/rel_notes/deprecation.rst
+++ b/doc/guides/rel_notes/deprecation.rst
@@ -35,3 +35,39 @@ Deprecation Notices
* The following fields have been deprecated in rte_eth_stats:
imissed, ibadcrc, ibadlen, imcasts, fdirmatch, fdirmiss,
tx_pause_xon, rx_pause_xon, tx_pause_xoff, rx_pause_xoff
+
+* The APIs for flow director filters have been replaced by rte_eth_dev_filter_ctrl.
+ The following old APIs and data structures are deprecated and will be removed
+ with version 2.2. There is no backward compatibility planned from release 2.2.
+ APIs:
+ rte_eth_dev_fdir_add_signature_filter
+ rte_eth_dev_fdir_update_signature_filter
+ rte_eth_dev_fdir_remove_signature_filter
+ rte_eth_dev_fdir_get_infos
+ rte_eth_dev_fdir_add_perfect_filter
+ rte_eth_dev_fdir_update_perfect_filter
+ rte_eth_dev_fdir_remove_perfect_filter
+ rte_eth_dev_fdir_set_masks
+ Data structures:
+ struct rte_fdir_filter
+ struct rte_fdir_masks
+ struct rte_eth_fdir
+ fields in struct eth_dev_ops
+ fdir_add_signature_filter
+ fdir_update_signature_filter
+ fdir_remove_signature_filter
+ fdir_infos_get
+ fdir_add_perfect_filter
+ fdir_update_perfect_filter
+ fdir_remove_perfect_filter
+ fdir_set_masks
+ enum rte_l4type
+ enum rte_iptype
+ fdir_add_signature_filter_t
+ fdir_update_signature_filter_t
+ fdir_remove_signature_filter_t
+ fdir_infos_get_t
+ fdir_add_perfect_filter_t
+ fdir_update_perfect_filter_t
+ fdir_remove_perfect_filter_t
+ fdir_set_masks_t
--
2.4.0
^ permalink raw reply [relevance 7%]
* Re: [dpdk-dev] [PATCH] doc: announce ABI change for old flow director APIs removing
2015-08-11 2:12 7% [dpdk-dev] [PATCH] doc: announce ABI change for old flow director APIs removing Jingjing Wu
@ 2015-08-11 3:01 4% ` Zhang, Helin
2015-08-12 9:02 4% ` Thomas Monjalon
2015-08-11 5:41 4% ` Liu, Jijiang
1 sibling, 1 reply; 200+ results
From: Zhang, Helin @ 2015-08-11 3:01 UTC (permalink / raw)
To: Wu, Jingjing, dev
> -----Original Message-----
> From: Wu, Jingjing
> Sent: Monday, August 10, 2015 7:12 PM
> To: dev@dpdk.org
> Cc: Wu, Jingjing; Zhang, Helin; Liu, Jijiang
> Subject: [PATCH] doc: announce ABI change for old flow director APIs removing
>
> The APIs for flow director filters have been replaced by rte_eth_dev_filter_ctrl in
> previous releases. Enic, ixgbe and i40e have been switched to the filter_ctrl API, so
> the old APIs are now unused and ready to be removed.
> This patch announces the ABI change for removing these APIs.
>
> Signed-off-by: Jingjing Wu <jingjing.wu@intel.com>
Acked-by: Helin Zhang <helin.zhang@intel.com>
^ permalink raw reply [relevance 4%]
* Re: [dpdk-dev] [PATCH] doc: announce ABI change for old flow director APIs removing
2015-08-11 2:12 7% [dpdk-dev] [PATCH] doc: announce ABI change for old flow director APIs removing Jingjing Wu
2015-08-11 3:01 4% ` Zhang, Helin
@ 2015-08-11 5:41 4% ` Liu, Jijiang
1 sibling, 0 replies; 200+ results
From: Liu, Jijiang @ 2015-08-11 5:41 UTC (permalink / raw)
To: Wu, Jingjing, dev
> -----Original Message-----
> From: Wu, Jingjing
> Sent: Tuesday, August 11, 2015 10:12 AM
> To: dev@dpdk.org
> Cc: Wu, Jingjing; Zhang, Helin; Liu, Jijiang
> Subject: [PATCH] doc: announce ABI change for old flow director APIs removing
>
> The APIs for flow director filters have been replaced by rte_eth_dev_filter_ctrl in
> previous releases. Enic, ixgbe and i40e have been switched to the filter_ctrl API, so
> the old APIs are now unused and ready to be removed.
> This patch announces the ABI change for removing these APIs.
>
> Signed-off-by: Jingjing Wu <jingjing.wu@intel.com>
Acked-by: Jijiang Liu <jijiang.liu@intel.com>
^ permalink raw reply [relevance 4%]
* [dpdk-dev] [PATCH] doc: restructured release notes documentation
@ 2015-08-11 11:57 3% John McNamara
0 siblings, 0 replies; 200+ results
From: John McNamara @ 2015-08-11 11:57 UTC (permalink / raw)
To: dev
Restructured the Release Notes documentation into a more useful structure
that is easier to use and to update between releases.
The main changes are:
* Each release version has its own section with New Features,
Resolved Issues, Known Issues and API/ABI Changes.
* Redundant sections have been removed.
* The FAQ section has been moved to a standalone document.
* The Known Issues tables have been changed to definition lists.
Signed-off-by: John McNamara <john.mcnamara@intel.com>
---
doc/guides/faq/faq.rst | 240 +++++
doc/guides/faq/index.rst | 42 +
doc/guides/index.rst | 1 +
doc/guides/rel_notes/deprecation.rst | 7 +-
doc/guides/rel_notes/faq.rst | 228 -----
doc/guides/rel_notes/index.rst | 14 +-
doc/guides/rel_notes/known_issues.rst | 1232 ++++++++++-------------
doc/guides/rel_notes/new_features.rst | 129 ---
doc/guides/rel_notes/rel_description.rst | 145 +--
doc/guides/rel_notes/release_1_8.rst | 64 ++
doc/guides/rel_notes/release_2_0.rst | 133 +++
doc/guides/rel_notes/release_2_1.rst | 69 ++
doc/guides/rel_notes/resolved_issues.rst | 1395 ---------------------------
doc/guides/rel_notes/supported_features.rst | 396 --------
doc/guides/rel_notes/supported_os.rst | 14 +-
doc/guides/rel_notes/updating_apps.rst | 136 ---
16 files changed, 1061 insertions(+), 3184 deletions(-)
create mode 100644 doc/guides/faq/faq.rst
create mode 100644 doc/guides/faq/index.rst
delete mode 100644 doc/guides/rel_notes/faq.rst
delete mode 100644 doc/guides/rel_notes/new_features.rst
create mode 100644 doc/guides/rel_notes/release_1_8.rst
create mode 100644 doc/guides/rel_notes/release_2_0.rst
create mode 100644 doc/guides/rel_notes/release_2_1.rst
delete mode 100644 doc/guides/rel_notes/resolved_issues.rst
delete mode 100644 doc/guides/rel_notes/supported_features.rst
delete mode 100644 doc/guides/rel_notes/updating_apps.rst
diff --git a/doc/guides/faq/faq.rst b/doc/guides/faq/faq.rst
new file mode 100644
index 0000000..7b2394c
--- /dev/null
+++ b/doc/guides/faq/faq.rst
@@ -0,0 +1,240 @@
+.. BSD LICENSE
+ Copyright(c) 2010-2014 Intel Corporation. All rights reserved.
+ All rights reserved.
+
+ Redistribution and use in source and binary forms, with or without
+ modification, are permitted provided that the following conditions
+ are met:
+
+ * Redistributions of source code must retain the above copyright
+ notice, this list of conditions and the following disclaimer.
+ * Redistributions in binary form must reproduce the above copyright
+ notice, this list of conditions and the following disclaimer in
+ the documentation and/or other materials provided with the
+ distribution.
+ * Neither the name of Intel Corporation nor the names of its
+ contributors may be used to endorse or promote products derived
+ from this software without specific prior written permission.
+
+ THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
+ "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
+ LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
+ A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
+ OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
+ SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
+ LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
+ DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
+ THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
+ (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
+ OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
+
+
+What does "EAL: map_all_hugepages(): open failed: Permission denied Cannot init memory" mean?
+---------------------------------------------------------------------------------------------
+
+This is most likely due to the test application not being run with sudo to promote the user to a superuser.
+Alternatively, applications can also be run as regular user.
+For more information, please refer to :doc:`DPDK Getting Started Guide </linux_gsg/index>`.
+
+
+If I want to change the number of TLB Hugepages allocated, how do I remove the original pages allocated?
+--------------------------------------------------------------------------------------------------------
+
+The number of pages allocated can be seen by executing the following command::
+
+ grep Huge /proc/meminfo
+
+Once all the pages are mmapped by an application, they stay that way.
+If you start a test application with less than the maximum, then you have free pages.
+When you stop and restart the test application, it looks to see if the pages are available in the ``/dev/huge`` directory and mmaps them.
+If you look in the directory, you will see ``n`` number of 2M pages files. If you specified 1024, you will see 1024 page files.
+These are then placed in memory segments to get contiguous memory.
+
+If you need to change the number of pages, it is easier to first remove the pages. The tools/setup.sh script provides an option to do this.
+See the "Quick Start Setup Script" section in the :doc:`DPDK Getting Started Guide </linux_gsg/index>` for more information.
+
+
+If I execute "l2fwd -c f -m 64 -n 3 -- -p 3", I get the following output, indicating that there are no socket 0 hugepages to allocate the mbuf and ring structures to?
+----------------------------------------------------------------------------------------------------------------------------------------------------------------------
+
+I have set up a total of 1024 Hugepages (that is, allocated 512 2M pages to each NUMA node).
+
+The -m command line parameter does not guarantee that huge pages will be reserved on specific sockets. Therefore, allocated huge pages may not be on socket 0.
+To request memory to be reserved on a specific socket, please use the --socket-mem command-line parameter instead of -m.
+
+
+I am running a 32-bit DPDK application on a NUMA system, and sometimes the application initializes fine but cannot allocate memory. Why is that happening?
+----------------------------------------------------------------------------------------------------------------------------------------------------------
+
+32-bit applications have limitations in terms of how much virtual memory is available, hence the number of hugepages they are able to allocate is also limited (1 GB per page size).
+If your system has a lot (>1 GB per page size) of hugepage memory, not all of it will be allocated.
+Due to hugepages typically being allocated on a local NUMA node, the hugepages allocation the application gets during the initialization depends on which
+NUMA node it is running on (the EAL does not affinitize cores until much later in the initialization process).
+Sometimes, the Linux OS runs the DPDK application on a core that is located on a different NUMA node from DPDK master core and
+therefore all the hugepages are allocated on the wrong socket.
+
+To avoid this scenario, either lower the amount of hugepage memory available to 1 GB per page size (or less), or run the application with taskset
+affinitizing the application to a would-be master core.
+
+For example, if your EAL coremask is 0xff0, the master core will usually be the first core in the coremask (0x10); this is what you have to supply to taskset::
+
+ taskset 0x10 ./l2fwd -c 0xff0 -n 2
+
+In this way, the hugepages have a greater chance of being allocated to the correct socket.
+Additionally, a ``--socket-mem`` option could be used to ensure the availability of memory for each socket, so that if hugepages were allocated on
+the wrong socket, the application simply will not start.
+
+
+On application startup, there is a lot of EAL information printed. Is there any way to reduce this?
+---------------------------------------------------------------------------------------------------
+
+Yes, each EAL has a configuration file that is located in the /config directory. Within each configuration file, you will find CONFIG_RTE_LOG_LEVEL=8.
+You can change this to a lower value, such as 6 to reduce this printout of debug information. The following is a list of LOG levels that can be found in the rte_log.h file.
+You must remove, then rebuild, the EAL directory for the change to become effective as the configuration file creates the rte_config.h file in the EAL directory.
+
+.. code-block:: c
+
+ #define RTE_LOG_EMERG 1U /* System is unusable. */
+ #define RTE_LOG_ALERT 2U /* Action must be taken immediately. */
+ #define RTE_LOG_CRIT 3U /* Critical conditions. */
+ #define RTE_LOG_ERR 4U /* Error conditions. */
+ #define RTE_LOG_WARNING 5U /* Warning conditions. */
+ #define RTE_LOG_NOTICE 6U /* Normal but significant condition. */
+ #define RTE_LOG_INFO 7U /* Informational. */
+ #define RTE_LOG_DEBUG 8U /* Debug-level messages. */
+
+
+How can I tune my network application to achieve lower latency?
+---------------------------------------------------------------
+
+Traditionally, there is a trade-off between throughput and latency. An application can be tuned to achieve a high throughput,
+but the end-to-end latency of an average packet typically increases as a result.
+Similarly, the application can be tuned to have, on average, a low end-to-end latency at the cost of lower throughput.
+
+To achieve higher throughput, the DPDK attempts to aggregate the cost of processing each packet individually by processing packets in bursts.
+Using the testpmd application as an example, the "burst" size can be set on the command line to a value of 16 (also the default value).
+This allows the application to request 16 packets at a time from the PMD.
+The testpmd application then immediately attempts to transmit all the packets that were received, in this case, all 16 packets.
+The packets are not transmitted until the tail pointer is updated on the corresponding TX queue of the network port.
+This behavior is desirable when tuning for high throughput because the cost of tail pointer updates to both the RX and TX queues
+can be spread across 16 packets, effectively hiding the relatively slow MMIO cost of writing to the PCIe* device.
+
+However, this is not very desirable when tuning for low latency, because the first packet that was received must also wait for the other 15 packets to be received.
+It cannot be transmitted until the other 15 packets have also been processed because the NIC will not know to transmit the packets until the TX tail pointer has been updated,
+which is not done until all 16 packets have been processed for transmission.
+
+To consistently achieve low latency even under heavy system load, the application developer should avoid processing packets in bunches.
+The testpmd application can be configured from the command line to use a burst value of 1.
+This allows a single packet to be processed at a time, providing lower latency, but with the added cost of lower throughput.
+
+
+Without NUMA enabled, my network throughput is low, why?
+--------------------------------------------------------
+
+I have a dual Intel® Xeon® E5645 processors 2.40 GHz with four Intel® 82599 10 Gigabit Ethernet NICs.
+Using eight logical cores on each processor with RSS set to distribute network load from two 10 GbE interfaces to the cores on each processor.
+
+Without NUMA enabled, memory is allocated from both sockets, since memory is interleaved.
+Therefore, each 64B chunk is interleaved across both memory domains.
+
+The first 64B chunk is mapped to node 0, the second 64B chunk is mapped to node 1, the third to node 0, the fourth to node 1.
+If you allocated 256B, you would get memory that looks like this:
+
+.. code-block:: console
+
+ 256B buffer
+ Offset 0x00 - Node 0
+ Offset 0x40 - Node 1
+ Offset 0x80 - Node 0
+ Offset 0xc0 - Node 1
+
+Therefore, packet buffers and descriptor rings are allocated from both memory domains, thus incurring QPI bandwidth accessing the other memory and much higher latency.
+For best performance with NUMA disabled, only one socket should be populated.
+
+
+I am getting errors about not being able to open files. Why?
+------------------------------------------------------------
+
+As the DPDK operates, it opens a lot of files, which can result in reaching the open files limits, which is set using the ulimit command or in the limits.conf file.
+This is especially true when using a large number (>512) of 2 MB huge pages. Please increase the open file limit if your application is not able to open files.
+This can be done either by issuing a ulimit command or editing the limits.conf file. Please consult Linux* manpages for usage information.
+
+
+Does my kernel require patching to run the DPDK?
+------------------------------------------------
+
+Any kernel greater than version 2.6.33 can be used without any patches applied. The following kernels may require patches to provide hugepage support:
+
+* Kernel version 2.6.32 requires the following patches applied:
+
+ * `mm: hugetlb: add hugepage support to pagemap <http://git.kernel.org/cgit/linux/kernel/git/torvalds/linux.git/commit/?id=5dc37642cbce34619e4588a9f0bdad1d2f870956>`_
+
+ * `mm: hugetlb: fix hugepage memory leak in walk_page_range() <http://git.kernel.org/cgit/linux/kernel/git/torvalds/linux.git/commit/?id=d33b9f45bd24a6391bc05e2b5a13c1b5787ca9c2>`_
+
+ * `hugetlb: add nodemask arg to huge page alloc, free and surplus adjust functions <http://git.kernel.org/cgit/linux/kernel/git/torvalds/linux.git/commit/?id=6ae11b278bca1cd41651bae49a8c69de2f6a6262>`_
+ (not mandatory, but recommended on a NUMA system to support per-NUMA node hugepages allocation)
+
+* Kernel version 2.6.31, requires the above patches plus the following:
+
+ * `UIO: Add name attributes for mappings and port regions <http://git.kernel.org/cgit/linux/kernel/git/torvalds/linux.git/commit/?id=8205779114e8f612549d191f8e151526a74ab9f2>`_
+
+
+VF driver for IXGBE devices cannot be initialized
+-------------------------------------------------
+
+Some versions of Linux* IXGBE driver do not assign a random MAC address to VF devices at initialization.
+In this case, this has to be done manually on the VM host, using the following command:
+
+.. code-block:: console
+
+ ip link set <interface> vf <VF function> mac <MAC address>
+
+where <interface> is the interface providing the virtual functions (for example, eth0), <VF function> is the virtual function number (for example, 0),
+and <MAC address> is the desired MAC address.
+
+
+Is it safe to add an entry to the hash table while running?
+------------------------------------------------------------
+Currently the table implementation is not a thread safe implementation and assumes that locking between threads and processes is handled by the user's application.
+This is likely to be supported in future releases.
+
+
+What is the purpose of setting iommu=pt?
+----------------------------------------
+DPDK uses a 1:1 mapping and does not support IOMMU. IOMMU allows for simpler VM physical address translation.
+The second role of IOMMU is to allow protection from unwanted memory access by an unsafe device that has DMA privileges.
+Unfortunately, the protection comes with an extremely high performance cost for high speed NICs.
+
+Setting ``iommu=pt`` disables IOMMU support for the hypervisor.
+
+
+When trying to send packets from an application to itself, meaning smac==dmac, using Intel(R) 82599 VF packets are lost.
+------------------------------------------------------------------------------------------------------------------------
+
+Check on register ``LLE(PFVMTXSSW[n])``, which allows an individual pool to send traffic and have it looped back to itself.
+
+
+Can I split packet RX to use DPDK and have an application's higher order functions continue using Linux* pthread?
+-----------------------------------------------------------------------------------------------------------------
+
+The DPDK's lcore threads are Linux* pthreads bound onto specific cores. Configure the DPDK to do work on the same
+cores and run the application's other work on other cores using the DPDK's "coremask" setting to specify which
+cores it should launch itself on.
+
+
+Is it possible to exchange data between DPDK processes and regular userspace processes via some shared memory or IPC mechanism?
+-------------------------------------------------------------------------------------------------------------------------------
+
+Yes - DPDK processes are regular Linux/BSD processes, and can use all OS provided IPC mechanisms.
+
+
+Can the multiple queues in Intel(R) I350 be used with DPDK?
+-----------------------------------------------------------
+
+I350 has RSS support and 8 queue pairs can be used in RSS mode. It should work with multi-queue DPDK applications using RSS.
+
+
+How can hugepage-backed memory be shared among multiple processes?
+------------------------------------------------------------------
+
+See the Primary and Secondary examples in the multi-process sample application, see :doc:`/sample_app_ug/multi_process`.
diff --git a/doc/guides/faq/index.rst b/doc/guides/faq/index.rst
new file mode 100644
index 0000000..5bc84ac
--- /dev/null
+++ b/doc/guides/faq/index.rst
@@ -0,0 +1,42 @@
+.. BSD LICENSE
+ Copyright(c) 2010-2014 Intel Corporation. All rights reserved.
+ All rights reserved.
+
+ Redistribution and use in source and binary forms, with or without
+ modification, are permitted provided that the following conditions
+ are met:
+
+ * Redistributions of source code must retain the above copyright
+ notice, this list of conditions and the following disclaimer.
+ * Redistributions in binary form must reproduce the above copyright
+ notice, this list of conditions and the following disclaimer in
+ the documentation and/or other materials provided with the
+ distribution.
+ * Neither the name of Intel Corporation nor the names of its
+ contributors may be used to endorse or promote products derived
+ from this software without specific prior written permission.
+
+ THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
+ "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
+ LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
+ A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
+ OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
+ SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
+ LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
+ DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
+ THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
+ (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
+ OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
+
+DPDK FAQ
+========
+
+This document contains some Frequently Asked Questions that arise when working with DPDK.
+
+Contents
+
+.. toctree::
+ :maxdepth: 2
+ :numbered:
+
+ faq
diff --git a/doc/guides/index.rst b/doc/guides/index.rst
index 0a89efd..ebcde22 100644
--- a/doc/guides/index.rst
+++ b/doc/guides/index.rst
@@ -46,3 +46,4 @@ Contents:
testpmd_app_ug/index
rel_notes/index
guidelines/index
+ faq/index
diff --git a/doc/guides/rel_notes/deprecation.rst b/doc/guides/rel_notes/deprecation.rst
index 5330d3b..3db782a 100644
--- a/doc/guides/rel_notes/deprecation.rst
+++ b/doc/guides/rel_notes/deprecation.rst
@@ -1,12 +1,9 @@
-Deprecation
-===========
+ABI and API Deprecation
+=======================
See the :doc:`guidelines document for details of the ABI policy </guidelines/versioning>`.
API and ABI deprecation notices are to be posted here.
-Help to update from a previous release is provided in
-:doc:`another section </rel_notes/updating_apps>`.
-
Deprecation Notices
-------------------
diff --git a/doc/guides/rel_notes/faq.rst b/doc/guides/rel_notes/faq.rst
deleted file mode 100644
index d87230a..0000000
--- a/doc/guides/rel_notes/faq.rst
+++ /dev/null
@@ -1,228 +0,0 @@
-.. BSD LICENSE
- Copyright(c) 2010-2014 Intel Corporation. All rights reserved.
- All rights reserved.
-
- Redistribution and use in source and binary forms, with or without
- modification, are permitted provided that the following conditions
- are met:
-
- * Redistributions of source code must retain the above copyright
- notice, this list of conditions and the following disclaimer.
- * Redistributions in binary form must reproduce the above copyright
- notice, this list of conditions and the following disclaimer in
- the documentation and/or other materials provided with the
- distribution.
- * Neither the name of Intel Corporation nor the names of its
- contributors may be used to endorse or promote products derived
- from this software without specific prior written permission.
-
- THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
- "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
- LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
- A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
- OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
- SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
- LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
- DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
- THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
- (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
- OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
-
-Frequently Asked Questions (FAQ)
-================================
-
-When running the test application, I get “EAL: map_all_hugepages(): open failed: Permission denied Cannot init memory”?
------------------------------------------------------------------------------------------------------------------------
-
-This is most likely due to the test application not being run with sudo to promote the user to a superuser.
-Alternatively, applications can also be run as regular user.
-For more information, please refer to *DPDK Getting Started Guide*.
-
-If I want to change the number of TLB Hugepages allocated, how do I remove the original pages allocated?
---------------------------------------------------------------------------------------------------------
-
-The number of pages allocated can be seen by executing the cat /proc/meminfo|grep Huge command.
-Once all the pages are mmapped by an application, they stay that way.
-If you start a test application with less than the maximum, then you have free pages.
-When you stop and restart the test application, it looks to see if the pages are available in the /dev/huge directory and mmaps them.
-If you look in the directory, you will see n number of 2M pages files. If you specified 1024, you will see 1024 files.
-These are then placed in memory segments to get contiguous memory.
-
-If you need to change the number of pages, it is easier to first remove the pages. The tools/setup.sh script provides an option to do this.
-See the “Quick Start Setup Script” section in the *DPDK Getting Started Guide* for more information.
-
-If I execute “l2fwd -c f -m 64 –n 3 -- -p 3”, I get the following output, indicating that there are no socket 0 hugepages to allocate the mbuf and ring structures to?
------------------------------------------------------------------------------------------------------------------------------------------------------------------------
-
-I have set up a total of 1024 Hugepages (that is, allocated 512 2M pages to each NUMA node).
-
-The -m command line parameter does not guarantee that huge pages will be reserved on specific sockets. Therefore, allocated huge pages may not be on socket 0.
-To request memory to be reserved on a specific socket, please use the --socket-mem command-line parameter instead of -m.
-
-I am running a 32-bit DPDK application on a NUMA system, and sometimes the application initializes fine but cannot allocate memory. Why is that happening?
------------------------------------------------------------------------------------------------------------------------------------------------------------------
-
-32-bit applications have limitations in terms of how much virtual memory is available, hence the number of hugepages they are able to allocate is also limited (1 GB per page size).
-If your system has a lot (>1 GB per page size) of hugepage memory, not all of it will be allocated.
-Due to hugepages typically being allocated on a local NUMA node, the hugepages allocation the application gets during the initialization depends on which
-NUMA node it is running on (the EAL does not affinitize cores until much later in the initialization process).
-Sometimes, the Linux OS runs the DPDK application on a core that is located on a different NUMA node from DPDK master core and
-therefore all the hugepages are allocated on the wrong socket.
-
-To avoid this scenario, either lower the amount of hugepage memory available to 1 GB per page size (or less), or run the application with taskset
-affinitizing the application to a would-be master core.
-For example, if your EAL coremask is 0xff0, the master core will usually be the first core in the coremask (0x10); this is what you have to supply to taskset, for example,
-taskset 0x10 ./l2fwd -c 0xff0 -n 2.
-In this way, the hugepages have a greater chance of being allocated to the correct socket.
-Additionally, a --socket-mem option could be used to ensure the availability of memory for each socket, so that if hugepages were allocated on
-the wrong socket, the application simply will not start.
-
-On application startup, there is a lot of EAL information printed. Is there any way to reduce this?
----------------------------------------------------------------------------------------------------
-
-Yes, each EAL has a configuration file that is located in the /config directory. Within each configuration file, you will find CONFIG_RTE_LOG_LEVEL=8.
-You can change this to a lower value, such as 6 to reduce this printout of debug information. The following is a list of LOG levels that can be found in the rte_log.h file.
-You must remove, then rebuild, the EAL directory for the change to become effective as the configuration file creates the rte_config.h file in the EAL directory.
-
-.. code-block:: c
-
- #define RTE_LOG_EMERG 1U /* System is unusable. */
- #define RTE_LOG_ALERT 2U /* Action must be taken immediately. */
- #define RTE_LOG_CRIT 3U /* Critical conditions. */
- #define RTE_LOG_ERR 4U /* Error conditions. */
- #define RTE_LOG_WARNING 5U /* Warning conditions. */
- #define RTE_LOG_NOTICE 6U /* Normal but significant condition. */
- #define RTE_LOG_INFO 7U /* Informational. */
- #define RTE_LOG_DEBUG 8U /* Debug-level messages. */
-
-How can I tune my network application to achieve lower latency?
----------------------------------------------------------------
-
-Traditionally, there is a trade-off between throughput and latency. An application can be tuned to achieve a high throughput,
-but the end-to-end latency of an average packet typically increases as a result.
-Similarly, the application can be tuned to have, on average, a low end-to-end latency at the cost of lower throughput.
-
-To achieve higher throughput, the DPDK attempts to aggregate the cost of processing each packet individually by processing packets in bursts.
-Using the testpmd application as an example, the “burst” size can be set on the command line to a value of 16 (also the default value).
-This allows the application to request 16 packets at a time from the PMD.
-The testpmd application then immediately attempts to transmit all the packets that were received, in this case, all 16 packets.
-The packets are not transmitted until the tail pointer is updated on the corresponding TX queue of the network port.
-This behavior is desirable when tuning for high throughput because the cost of tail pointer updates to both the RX and TX queues
-can be spread across 16 packets, effectively hiding the relatively slow MMIO cost of writing to the PCIe* device.
-
-However, this is not very desirable when tuning for low latency, because the first packet that was received must also wait for the other 15 packets to be received.
-It cannot be transmitted until the other 15 packets have also been processed because the NIC will not know to transmit the packets until the TX tail pointer has been updated,
-which is not done until all 16 packets have been processed for transmission.
-
-To consistently achieve low latency even under heavy system load, the application developer should avoid processing packets in bunches.
-The testpmd application can be configured from the command line to use a burst value of 1.
-This allows a single packet to be processed at a time, providing lower latency, but with the added cost of lower throughput.
-
-Without NUMA enabled, my network throughput is low, why?
---------------------------------------------------------
-
-I have a dual Intel® Xeon® E5645 processors @2.40 GHz with four Intel® 82599 10 Gigabit Ethernet NICs.
-Using eight logical cores on each processor with RSS set to distribute network load from two 10 GbE interfaces to the cores on each processor.
-
-Without NUMA enabled, memory is allocated from both sockets, since memory is interleaved.
-Therefore, each 64B chunk is interleaved across both memory domains.
-
-The first 64B chunk is mapped to node 0, the second 64B chunk is mapped to node 1, the third to node 0, the fourth to node 1.
-If you allocated 256B, you would get memory that looks like this:
-
-.. code-block:: console
-
- 256B buffer
- Offset 0x00 - Node 0
- Offset 0x40 - Node 1
- Offset 0x80 - Node 0
- Offset 0xc0 - Node 1
-
-Therefore, packet buffers and descriptor rings are allocated from both memory domains, thus incurring QPI bandwidth accessing the other memory and much higher latency.
-For best performance with NUMA disabled, only one socket should be populated.
-
-I am getting errors about not being able to open files. Why?
-------------------------------------------------------------
-
-As the DPDK operates, it opens a lot of files, which can result in reaching the open files limits, which is set using the ulimit command or in the limits.conf file.
-This is especially true when using a large number (>512) of 2 MB huge pages. Please increase the open file limit if your application is not able to open files.
-This can be done either by issuing a ulimit command or editing the limits.conf file. Please consult Linux* manpages for usage information.
-
-Does my kernel require patching to run the DPDK?
--------------------------------------------------------
-
-Any kernel greater than version 2.6.33 can be used without any patches applied. The following kernels may require patches to provide hugepage support:
-
-* kernel version 2.6.32 requires the following patches applied:
-
- * `addhugepage support to pagemap <http://git.kernel.org/cgit/linux/kernel/git/torvalds/linux.git/commit/?id=5dc37642cbce34619e4588a9f0bdad1d2f870956>`_
-
- * `fix hugepage memory leak <http://git.kernel.org/cgit/linux/kernel/git/torvalds/linux.git/commit/?id=d33b9f45bd24a6391bc05e2b5a13c1b5787ca9c2>`_
-
- * `add nodemask arg to huge page alloc <http://git.kernel.org/cgit/linux/kernel/git/torvalds/linux.git/commit/?id=6ae11b278bca1cd41651bae49a8c69de2f6a6262>`_
-
- (not mandatory, but recommended on a NUMA system to support per-NUMA node hugepages allocation)
-
-* kernel version 2.6.31, requires the following patches applied:
-
- * `fix hugepage memory leak <http://git.kernel.org/cgit/linux/kernel/git/torvalds/linux.git/commit/?id=d33b9f45bd24a6391bc05e2b5a13c1b5787ca9c2>`_
-
- * `add hugepage support to pagemap <http://git.kernel.org/cgit/linux/kernel/git/torvalds/linux.git/commit/?id=5dc37642cbce34619e4588a9f0bdad1d2f870956>`_
-
- * `add uio name attributes and port regions <http://git.kernel.org/cgit/linux/kernel/git/torvalds/linux.git/commit/?id=8205779114e8f612549d191f8e151526a74ab9f2>`_
-
- * `add nodemask arg to huge page alloc <http://git.kernel.org/cgit/linux/kernel/git/torvalds/linux.git/commit/?id=6ae11b278bca1cd41651bae49a8c69de2f6a6262>`_
-
- (not mandatory, but recommended on a NUMA system to support per-NUMA node hugepages allocation)
-
-.. note::
-
- Blue text in the lists above are direct links to the patch downloads.
-
-VF driver for IXGBE devices cannot be initialized.
---------------------------------------------------
-
-Some versions of Linux* IXGBE driver do not assign a random MAC address to VF devices at initialization.
-In this case, this has to be done manually on the VM host, using the following command:
-
-.. code-block:: console
-
- ip link set <interface> vf <VF function> mac <MAC address>
-
-where <interface> being the interface providing the virtual functions for example, eth0, <VF function> being the virtual function number, for example 0,
-and <MAC address> being the desired MAC address.
-
-Is it safe to add an entry to the hash table while running?
-------------------------------------------------------------
-Currently the table implementation is not a thread safe implementation and assumes that locking between threads and processes is handled by the user's application.
-This is likely to be supported in future releases.
-
-What is the purpose of setting iommu=pt?
-----------------------------------------
-DPDK uses a 1:1 mapping and does not support IOMMU. IOMMU allows for simpler VM physical address translation.
-The second role of IOMMU is to allow protection from unwanted memory access by an unsafe device that has DMA privileges.
-Unfortunately, the protection comes with an extremely high performance cost for high speed NICs.
-
-iommu=pt disables IOMMU support for the hypervisor.
-
-When trying to send packets from an application to itself, meaning smac==dmac, using Intel(R) 82599 VF packets are lost.
-------------------------------------------------------------------------------------------------------------------------
-Check on register LLE(PFVMTXSSW[n]), which allows an individual pool to send traffic and have it looped back to itself.
-
-Can I split packet RX to use DPDK and have an application's higher order functions continue using Linux* pthread?
------------------------------------------------------------------------------------------------------------------
-The DPDK's lcore threads are Linux* pthreads bound onto specific cores. Configure the DPDK to do work on the same
-cores and run the application's other work on other cores using the DPDK's "coremask" setting to specify which
-cores it should launch itself on.
-
-Is it possible to exchange data between DPDK processes and regular userspace processes via some shared memory or IPC mechanism?
--------------------------------------------------------------------------------------------------------------------------------
-Yes - DPDK processes are regular Linux/BSD processes, and can use all OS provided IPC mechanisms.
-
-Can the multiple queues in Intel(R) I350 be used with DPDK?
------------------------------------------------------------
-I350 has RSS support and 8 queue pairs can be used in RSS mode. It should work with multi-queue DPDK applications using RSS.
-
-How can hugepage-backed memory be shared among multiple processes?
-------------------------------------------------------------------
-See the Primary and Secondary examples in the multi-process sample application.
diff --git a/doc/guides/rel_notes/index.rst b/doc/guides/rel_notes/index.rst
index 9d66cd8..f0f97d1 100644
--- a/doc/guides/rel_notes/index.rst
+++ b/doc/guides/rel_notes/index.rst
@@ -28,10 +28,8 @@
(INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
-Release Notes
-=============
-
-Package Version: 2.0
+DPDK Release Notes
+==================
|today|
@@ -42,11 +40,9 @@ Contents
:numbered:
rel_description
- new_features
- supported_features
+ release_2_1
+ release_2_0
+ release_1_8
supported_os
- updating_apps
known_issues
- resolved_issues
deprecation
- faq
diff --git a/doc/guides/rel_notes/known_issues.rst b/doc/guides/rel_notes/known_issues.rst
index a39c714..b9a52d0 100644
--- a/doc/guides/rel_notes/known_issues.rst
+++ b/doc/guides/rel_notes/known_issues.rst
@@ -28,836 +28,592 @@
(INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
-Known Issues and Limitations
-============================
-This section describes known issues with the DPDK software.
+Known Issues and Limitations in Legacy Releases
+===============================================
+
+This section describes known issues with the DPDK software that aren't covered in the version specific release
+notes sections.
+
Unit Test for Link Bonding may fail at test_tlb_tx_burst()
----------------------------------------------------------
-+--------------------------------+--------------------------------------------------------------------------------------+
-| Title | Unit Test for Link Bonding may fail at test_tlb_tx_burst() |
-| | |
-+================================+======================================================================================+
-| Reference # | IXA00390304 |
-| | |
-+--------------------------------+--------------------------------------------------------------------------------------+
-| Description | Unit tests will fail at test_tlb_tx_burst function with error for uneven distribution|
-| | of packets. |
-| | |
-+--------------------------------+--------------------------------------------------------------------------------------+
-| Implication | Unit test link_bonding_autotest will fail |
-| | |
-| | |
-+--------------------------------+--------------------------------------------------------------------------------------+
-| Resolution/ Workaround | There is no workaround available. |
-| | |
-+--------------------------------+--------------------------------------------------------------------------------------+
-| Affected Environment/ Platform | Fedora 20 |
-| | |
-+--------------------------------+--------------------------------------------------------------------------------------+
-| Driver/Module | Link Bonding |
-| | |
-+--------------------------------+--------------------------------------------------------------------------------------+
+
+**Description**:
+ Unit tests will fail in ``test_tlb_tx_burst()`` function with error for uneven distribution of packets.
+
+**Implication**:
+ Unit test link_bonding_autotest will fail.
+
+**Resolution/Workaround**:
+ There is no workaround available.
+
+**Affected Environment/Platform**:
+ Fedora 20.
+
+**Driver/Module**:
+ Link Bonding.
Pause Frame Forwarding does not work properly on igb
----------------------------------------------------
-+--------------------------------+--------------------------------------------------------------------------------------+
-| Title | Pause Frame forwarding does not work properly on igb |
-| | |
-+================================+======================================================================================+
-| Reference # | IXA00384637 |
-| | |
-+--------------------------------+--------------------------------------------------------------------------------------+
-| Description | For igb devices rte_eth_flow_ctrl_set is not working as expected. |
-| | Pause frames are always forwarded on igb, regardless of the RFCE, MPMCF and DPF |
-| | registers. |
-| | |
-+--------------------------------+--------------------------------------------------------------------------------------+
-| Implication | Pause frames will never be rejected by the host on 1G NICs and they will always be |
-| | forwarded. |
-| | |
-+--------------------------------+--------------------------------------------------------------------------------------+
-| Resolution/ Workaround | There is no workaround available. |
-| | |
-+--------------------------------+--------------------------------------------------------------------------------------+
-| Affected Environment/ Platform | All |
-| | |
-+--------------------------------+--------------------------------------------------------------------------------------+
-| Driver/Module | Poll Mode Driver (PMD) |
-| | |
-+--------------------------------+--------------------------------------------------------------------------------------+
+**Description**:
+ For igb devices rte_eth_flow_ctrl_set does not work as expected.
+ Pause frames are always forwarded on igb, regardless of the ``RFCE``, ``MPMCF`` and ``DPF`` registers.
+
+**Implication**:
+ Pause frames will never be rejected by the host on 1G NICs and they will always be forwarded.
+
+**Resolution/Workaround**:
+ There is no workaround available.
+
+**Affected Environment/Platform**:
+ All.
+
+**Driver/Module**:
+ Poll Mode Driver (PMD).
+
In packets provided by the PMD, some flags are missing
------------------------------------------------------
-+--------------------------------+--------------------------------------------------------------------------------------+
-| Title | In packets provided by the PMD, some flags are missing |
-| | |
-+================================+======================================================================================+
-| Reference # | 3 |
-| | |
-+--------------------------------+--------------------------------------------------------------------------------------+
-| Description | In packets provided by the PMD, some flags are missing. |
-| | The application does not have access to information provided by the hardware |
-| | (packet is broadcast, packet is multicast, packet is IPv4 and so on). |
-| | |
-+--------------------------------+--------------------------------------------------------------------------------------+
-| Implication | The “ol_flags” field in the “rte_mbuf” structure is not correct and should not be |
-| | used. |
-| | |
-+--------------------------------+--------------------------------------------------------------------------------------+
-| Resolution | The application has to parse the Ethernet header itself to get the information, |
-| | which is slower. |
-| | |
-+--------------------------------+--------------------------------------------------------------------------------------+
-| Affected Environment/ Platform | All |
-| | |
-+--------------------------------+--------------------------------------------------------------------------------------+
-| Driver/Module | Poll Mode Driver (PMD) |
-| | |
-+--------------------------------+--------------------------------------------------------------------------------------+
+**Description**:
+ In packets provided by the PMD, some flags are missing.
+ The application does not have access to information provided by the hardware
+ (packet is broadcast, packet is multicast, packet is IPv4 and so on).
+
+**Implication**:
+ The ``ol_flags`` field in the ``rte_mbuf`` structure is not correct and should not be used.
+
+**Resolution/Workaround**:
+ The application has to parse the Ethernet header itself to get the information, which is slower.
+
+**Affected Environment/Platform**:
+ All.
+
+**Driver/Module**:
+ Poll Mode Driver (PMD).
The rte_malloc library is not fully implemented
-----------------------------------------------
-+--------------------------------+--------------------------------------------------------------------------------------+
-| Title | The rte_malloc library is not fully implemented |
-| | |
-+================================+======================================================================================+
-| Reference # | 6 |
-| | |
-+--------------------------------+--------------------------------------------------------------------------------------+
-| Description | The rte_malloc library is not fully implemented. |
-| | |
-+--------------------------------+--------------------------------------------------------------------------------------+
-| Implication | All debugging features of rte_malloc library described in architecture documentation |
-| | are not yet implemented. |
-| | |
-+--------------------------------+--------------------------------------------------------------------------------------+
-| Resolution | No workaround available. |
-| | |
-+--------------------------------+--------------------------------------------------------------------------------------+
-| Affected Environment/ Platform | All |
-| | |
-+--------------------------------+--------------------------------------------------------------------------------------+
-| Driver/Module | rte_malloc |
-| | |
-+--------------------------------+--------------------------------------------------------------------------------------+
+**Description**:
+ The ``rte_malloc`` library is not fully implemented.
+
+**Implication**:
+ The debugging features of the ``rte_malloc`` library described in the architecture documentation are not yet implemented.
+
+**Resolution/Workaround**:
+ No workaround available.
+
+**Affected Environment/Platform**:
+ All.
+
+**Driver/Module**:
+ ``rte_malloc``.
+
HPET reading is slow
--------------------
-+--------------------------------+--------------------------------------------------------------------------------------+
-| Title | HPET reading is slow |
-| | |
-+================================+======================================================================================+
-| Reference # | 7 |
-| | |
-+--------------------------------+--------------------------------------------------------------------------------------+
-| Description | Reading the HPET chip is slow. |
-| | |
-+--------------------------------+--------------------------------------------------------------------------------------+
-| Implication | An application that calls “rte_get_hpet_cycles()” or “rte_timer_manage()” runs |
-| | slower. |
-| | |
-+--------------------------------+--------------------------------------------------------------------------------------+
-| Resolution | The application should not call these functions too often in the main loop. |
-| | An alternative is to use the TSC register through “rte_rdtsc()” which is faster, |
-| | but specific to an lcore and is a cycle reference, not a time reference. |
-| | |
-+--------------------------------+--------------------------------------------------------------------------------------+
-| Affected Environment/ Platform | All |
-| | |
-+--------------------------------+--------------------------------------------------------------------------------------+
-| Driver/Module | Environment Abstraction Layer (EAL) |
-| | |
-+--------------------------------+--------------------------------------------------------------------------------------+
+**Description**:
+ Reading the HPET chip is slow.
+
+**Implication**:
+ An application that calls ``rte_get_hpet_cycles()`` or ``rte_timer_manage()`` runs slower.
+
+**Resolution/Workaround**:
+ The application should not call these functions too often in the main loop.
+ An alternative is to use the TSC register through ``rte_rdtsc()`` which is faster,
+ but specific to an lcore and is a cycle reference, not a time reference.
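+
+ For example, a simple cycle measurement based on the TSC (a sketch only, using the public
+ ``rte_cycles.h`` API; the ``measure_cycles()`` helper is hypothetical) could look like::
+
+    #include <stdint.h>
+    #include <rte_cycles.h>
+
+    /* Return the number of TSC cycles consumed by one call of func(). */
+    static uint64_t
+    measure_cycles(void (*func)(void))
+    {
+            uint64_t start = rte_rdtsc();
+
+            func();
+            return rte_rdtsc() - start;
+    }
+
+ If a time value is needed, the resulting cycle count can be divided by ``rte_get_tsc_hz()``.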
+
+**Affected Environment/Platform**:
+ All.
+
+**Driver/Module**:
+ Environment Abstraction Layer (EAL).
+
HPET timers do not work on the Osage customer reference platform
----------------------------------------------------------------
-+--------------------------------+--------------------------------------------------------------------------------------+
-| Title | HPET timers do not work on the Osage customer reference platform |
-| | |
-+================================+======================================================================================+
-| Reference # | 17 |
-| | |
-+--------------------------------+--------------------------------------------------------------------------------------+
-| Description | HPET timers do not work on the Osage customer reference platform |
-| | which includes an Intel® Xeon® processor 5500 series processor) using the |
-| | released BIOS from Intel. |
-| | |
-+--------------------------------+--------------------------------------------------------------------------------------+
-| Implication | On Osage boards, the implementation of the “rte_delay_us()” function must be changed |
-| | to not use the HPET timer. |
-| | |
-+--------------------------------+--------------------------------------------------------------------------------------+
-| Resolution | This can be addressed by building the system with the “CONFIG_RTE_LIBEAL_USE_HPET=n” |
-| | configuration option or by using the --no-hpet EAL option. |
-| | |
-+--------------------------------+--------------------------------------------------------------------------------------+
-| Affected Environment/ Platform | The Osage customer reference platform. |
-| | |
-| | Other vendor platforms with Intel® Xeon® processor 5500 series processors should |
-| | work correctly, provided the BIOS supports HPET. |
-| | |
-+--------------------------------+--------------------------------------------------------------------------------------+
-| Driver/Module | lib/librte_eal/common/include/rte_cycles.h |
-| | |
-+--------------------------------+--------------------------------------------------------------------------------------+
+**Description**:
+ HPET timers do not work on the Osage customer reference platform (which includes an Intel® Xeon® processor 5500
+ series processor) using the released BIOS from Intel.
+
+**Implication**:
+ On Osage boards, the implementation of the ``rte_delay_us()`` function must be changed to not use the HPET timer.
+
+**Resolution/Workaround**:
+ This can be addressed by building the system with the ``CONFIG_RTE_LIBEAL_USE_HPET=n``
+ configuration option or by using the ``--no-hpet`` EAL option.
+
+**Affected Environment/Platform**:
+ The Osage customer reference platform.
+ Other vendor platforms with Intel® Xeon® processor 5500 series processors should
+ work correctly, provided the BIOS supports HPET.
+
+**Driver/Module**:
+ ``lib/librte_eal/common/include/rte_cycles.h``
+
Not all variants of supported NIC types have been used in testing
-----------------------------------------------------------------
-+--------------------------------+--------------------------------------------------------------------------------------+
-| Title | Not all variants of supported NIC types have been used in testing |
-| | |
-+================================+======================================================================================+
-| Reference # | 28 |
-| | |
-+--------------------------------+--------------------------------------------------------------------------------------+
-| Description | The supported network interface cards can come in a number of variants with |
-| | different device ID's. Not all of these variants have been tested with the Intel® |
-| | DPDK. |
-| | |
-| | The NIC device identifiers used during testing: |
-| | |
-| | * Intel® Ethernet Controller XL710 for 40GbE QSFP+ [8086:1584] |
-| | |
-| | * Intel® Ethernet Controller XL710 for 40GbE QSFP+ [8086:1583] |
-| | |
-| | * Intel® Ethernet Controller X710 for 10GbE SFP+ [8086:1572] |
-| | |
-| | * Intel® 82576 Gigabit Ethernet Controller [8086:10c9] |
-| | |
-| | * Intel® 82576 Quad Copper Gigabit Ethernet Controller [8086:10e8] |
-| | |
-| | * Intel® 82580 Dual Copper Gigabit Ethernet Controller [8086:150e] |
-| | |
-| | * Intel® I350 Quad Copper Gigabit Ethernet Controller [8086:1521] |
-| | |
-| | * Intel® 82599 Dual Fibre 10 Gigabit Ethernet Controller [8086:10fb] |
-| | |
-| | * Intel® Ethernet Server Adapter X520-T2 [8086: 151c] |
-| | |
-| | * Intel® Ethernet Controller X540-T2 [8086:1528] |
-| | |
-| | * Intel® 82574L Gigabit Network Connection [8086:10d3] |
-| | |
-| | * Emulated Intel® 82540EM Gigabit Ethernet Controller [8086:100e] |
-| | |
-| | * Emulated Intel® 82545EM Gigabit Ethernet Controller [8086:100f] |
-| | |
-| | * Intel® Ethernet Server Adapter X520-4 [8086:154a] |
-| | |
-| | * Intel® Ethernet Controller I210 [8086:1533] |
-| | |
-+--------------------------------+--------------------------------------------------------------------------------------+
-| Implication | Risk of issues with untested variants. |
-| | |
-+--------------------------------+--------------------------------------------------------------------------------------+
-| Resolution | Use tested NIC variants. For those supported Ethernet controllers, additional device |
-| | IDs may be added to the software if required. |
-| | |
-+--------------------------------+--------------------------------------------------------------------------------------+
-| Affected Environment/ Platform | All |
-| | |
-+--------------------------------+--------------------------------------------------------------------------------------+
-| Driver/Module | Poll-mode drivers |
-| | |
-+--------------------------------+--------------------------------------------------------------------------------------+
+**Description**:
+ The supported network interface cards can come in a number of variants with different device IDs.
+ Not all of these variants have been tested with the Intel® DPDK.
+
+ The NIC device identifiers used during testing:
+
+ * Intel® Ethernet Controller XL710 for 40GbE QSFP+ [8086:1584]
+ * Intel® Ethernet Controller XL710 for 40GbE QSFP+ [8086:1583]
+ * Intel® Ethernet Controller X710 for 10GbE SFP+ [8086:1572]
+ * Intel® 82576 Gigabit Ethernet Controller [8086:10c9]
+ * Intel® 82576 Quad Copper Gigabit Ethernet Controller [8086:10e8]
+ * Intel® 82580 Dual Copper Gigabit Ethernet Controller [8086:150e]
+ * Intel® I350 Quad Copper Gigabit Ethernet Controller [8086:1521]
+ * Intel® 82599 Dual Fibre 10 Gigabit Ethernet Controller [8086:10fb]
+ * Intel® Ethernet Server Adapter X520-T2 [8086: 151c]
+ * Intel® Ethernet Controller X540-T2 [8086:1528]
+ * Intel® 82574L Gigabit Network Connection [8086:10d3]
+ * Emulated Intel® 82540EM Gigabit Ethernet Controller [8086:100e]
+ * Emulated Intel® 82545EM Gigabit Ethernet Controller [8086:100f]
+ * Intel® Ethernet Server Adapter X520-4 [8086:154a]
+ * Intel® Ethernet Controller I210 [8086:1533]
+
+**Implication**:
+ Risk of issues with untested variants.
+
+**Resolution/Workaround**:
+ Use tested NIC variants. For those supported Ethernet controllers, additional device
+ IDs may be added to the software if required.
+
+**Affected Environment/Platform**:
+ All.
+
+**Driver/Module**:
+ Poll-mode drivers
+
Multi-process sample app requires exact memory mapping
------------------------------------------------------
-+--------------------------------+--------------------------------------------------------------------------------------+
-| Title | Multi-process sample app requires exact memory mapping |
-| | |
-+================================+======================================================================================+
-| Reference # | 30 |
-| | |
-+--------------------------------+--------------------------------------------------------------------------------------+
-| Description | The multi-process example application assumes that |
-| | it is possible to map the hugepage memory to the same virtual addresses in client |
-| | and server applications. Occasionally, very rarely with 64-bit, this does not occur |
-| | and a client application will fail on startup. The Linux |
-| | “address-space layout randomization” security feature can sometimes cause this to |
-| | occur. |
-| | |
-+--------------------------------+--------------------------------------------------------------------------------------+
-| Implication | A multi-process client application fails to initialize. |
-| | |
-+--------------------------------+--------------------------------------------------------------------------------------+
-| Resolution | See the “Multi-process Limitations” section in the Intel® DPDK Programmer’s Guide |
-| | for more information. |
-| | |
-+--------------------------------+--------------------------------------------------------------------------------------+
-| Affected Environment/ Platform | All |
-| | |
-+--------------------------------+--------------------------------------------------------------------------------------+
-| Driver/Module | Multi-process example application |
-| | |
-+--------------------------------+--------------------------------------------------------------------------------------+
-
-Packets are not sent by the 1 GbE/10 GbE SR-IOV driver when the source MAC address is not the MAC address assigned to the VF NIC
---------------------------------------------------------------------------------------------------------------------------------
-
-+--------------------------------+--------------------------------------------------------------------------------------+
-| Title | Packets are not sent by the 1 GbE/10 GbE SR-IOV driver when the source MAC address |
-| | is not the MAC address assigned to the VF NIC |
-| | |
-+================================+======================================================================================+
-| Reference # | IXA00168379 |
-| | |
-+--------------------------------+--------------------------------------------------------------------------------------+
-| Description | The 1 GbE/10 GbE SR-IOV driver can only send packets when the Ethernet header’s |
-| | source MAC address is the same as that of the VF NIC. The reason for this is that |
-| | the Linux “ixgbe” driver module in the host OS has its anti-spoofing feature enabled.|
-| | |
-+--------------------------------+--------------------------------------------------------------------------------------+
-| Implication | Packets sent using the 1 GbE/10 GbE SR-IOV driver must have the source MAC address |
-| | correctly set to that of the VF NIC. Packets with other source address values are |
-| | dropped by the NIC if the application attempts to transmit them. |
-| | |
-+--------------------------------+--------------------------------------------------------------------------------------+
-| Resolution/ Workaround | Configure the Ethernet source address in each packet to match that of the VF NIC. |
-| | |
-+--------------------------------+--------------------------------------------------------------------------------------+
-| Affected Environment/ Platform | All |
-| | |
-+--------------------------------+--------------------------------------------------------------------------------------+
-| Driver/Module | 1 GbE/10 GbE VF Poll Mode Driver (PMD) |
-| | |
-+--------------------------------+--------------------------------------------------------------------------------------+
+**Description**:
+ The multi-process example application assumes that
+ it is possible to map the hugepage memory to the same virtual addresses in client and server applications.
+ Occasionally, very rarely with 64-bit, this does not occur and a client application will fail on startup.
+ The Linux "address-space layout randomization" security feature can sometimes cause this to occur.
+
+**Implication**:
+ A multi-process client application fails to initialize.
+
+**Resolution/Workaround**:
+ See the "Multi-process Limitations" section in the Intel® DPDK Programmer's Guide for more information.
+
+**Affected Environment/Platform**:
+ All.
+
+**Driver/Module**:
+ Multi-process example application
+
+
+Packets are not sent by the 1 GbE/10 GbE SR-IOV driver when the source MAC is not the MAC assigned to the VF NIC
+----------------------------------------------------------------------------------------------------------------
+
+**Description**:
+ The 1 GbE/10 GbE SR-IOV driver can only send packets when the Ethernet header's source MAC address is the same as
+ that of the VF NIC.
+ The reason for this is that the Linux ``ixgbe`` driver module in the host OS has its anti-spoofing feature enabled.
+
+**Implication**:
+ Packets sent using the 1 GbE/10 GbE SR-IOV driver must have the source MAC address correctly set to that of the VF NIC.
+ Packets with other source address values are dropped by the NIC if the application attempts to transmit them.
+
+**Resolution/Workaround**:
+ Configure the Ethernet source address in each packet to match that of the VF NIC.
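+
+ For example, a hypothetical per-packet helper (a sketch only) can copy the MAC address reported by
+ ``rte_eth_macaddr_get()`` into the Ethernet header before transmission::
+
+    #include <rte_ethdev.h>
+    #include <rte_ether.h>
+    #include <rte_mbuf.h>
+
+    /* Overwrite the source MAC of an outgoing frame with the VF's own MAC. */
+    static void
+    fix_src_mac(uint8_t port_id, struct rte_mbuf *m)
+    {
+            struct ether_addr vf_mac;
+            struct ether_hdr *eth = rte_pktmbuf_mtod(m, struct ether_hdr *);
+
+            rte_eth_macaddr_get(port_id, &vf_mac);
+            ether_addr_copy(&vf_mac, &eth->s_addr);
+    }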
+
+**Affected Environment/Platform**:
+ All.
+
+**Driver/Module**:
+ 1 GbE/10 GbE VF Poll Mode Driver (PMD).
+
SR-IOV drivers do not fully implement the rte_ethdev API
--------------------------------------------------------
-+--------------------------------+--------------------------------------------------------------------------------------+
-| Title | SR-IOV drivers do not fully implement the rte_ethdev API |
-| | |
-+================================+======================================================================================+
-| Reference # | 59 |
-| | |
-+--------------------------------+--------------------------------------------------------------------------------------+
-| Description | The SR-IOV drivers only supports the following rte_ethdev API functions: |
-| | |
-| | * rte_eth_dev_configure() |
-| | |
-| | * rte_eth_tx_queue_setup() |
-| | |
-| | * rte_eth_rx_queue_setup() |
-| | |
-| | * rte_eth_dev_info_get() |
-| | |
-| | * rte_eth_dev_start() |
-| | |
-| | * rte_eth_tx_burst() |
-| | |
-| | * rte_eth_rx_burst() |
-| | |
-| | * rte_eth_dev_stop() |
-| | |
-| | * rte_eth_stats_get() |
-| | |
-| | * rte_eth_stats_reset() |
-| | |
-| | * rte_eth_link_get() |
-| | |
-| | * rte_eth_link_get_no_wait() |
-| | |
-+--------------------------------+--------------------------------------------------------------------------------------+
-| Implication | Calling an unsupported function will result in an application error. |
-| | |
-+--------------------------------+--------------------------------------------------------------------------------------+
-| Resolution/ Workaround | Do not use other rte_ethdev API functions in applications that use the SR-IOV |
-| | drivers. |
-| | |
-+--------------------------------+--------------------------------------------------------------------------------------+
-| Affected Environment/ Platform | All |
-| | |
-+--------------------------------+--------------------------------------------------------------------------------------+
-| Driver/Module | VF Poll Mode Driver (PMD) |
-| | |
-+--------------------------------+--------------------------------------------------------------------------------------+
+**Description**:
+ The SR-IOV drivers only support the following rte_ethdev API functions:
+
+ * rte_eth_dev_configure()
+ * rte_eth_tx_queue_setup()
+ * rte_eth_rx_queue_setup()
+ * rte_eth_dev_info_get()
+ * rte_eth_dev_start()
+ * rte_eth_tx_burst()
+ * rte_eth_rx_burst()
+ * rte_eth_dev_stop()
+ * rte_eth_stats_get()
+ * rte_eth_stats_reset()
+ * rte_eth_link_get()
+ * rte_eth_link_get_no_wait()
+
+**Implication**:
+ Calling an unsupported function will result in an application error.
+
+**Resolution/Workaround**:
+ Do not use other rte_ethdev API functions in applications that use the SR-IOV drivers.
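+
+ For example, a minimal VF port bring-up restricted to the supported calls could look like the
+ following sketch (the single queue, the descriptor count and the caller-provided ``pool`` are
+ illustrative only)::
+
+    #include <string.h>
+    #include <rte_ethdev.h>
+    #include <rte_lcore.h>
+
+    /* Configure and start one VF port with one Rx and one Tx queue. */
+    static int
+    vf_port_init(uint8_t port_id, struct rte_mempool *pool)
+    {
+            struct rte_eth_conf conf;
+            int ret;
+
+            memset(&conf, 0, sizeof(conf));
+
+            ret = rte_eth_dev_configure(port_id, 1, 1, &conf);
+            if (ret < 0)
+                    return ret;
+
+            ret = rte_eth_rx_queue_setup(port_id, 0, 128, rte_socket_id(),
+                            NULL, pool);
+            if (ret < 0)
+                    return ret;
+
+            ret = rte_eth_tx_queue_setup(port_id, 0, 128, rte_socket_id(),
+                            NULL);
+            if (ret < 0)
+                    return ret;
+
+            return rte_eth_dev_start(port_id);
+    }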
+
+**Affected Environment/Platform**:
+ All.
+
+**Driver/Module**:
+ VF Poll Mode Driver (PMD).
+
PMD does not work with --no-huge EAL command line parameter
-----------------------------------------------------------
-+--------------------------------+--------------------------------------------------------------------------------------+
-| Title | PMD does not work with --no-huge EAL command line parameter |
-| | |
-+================================+======================================================================================+
-| Reference # | IXA00373461 |
-| | |
-+--------------------------------+--------------------------------------------------------------------------------------+
-| Description | Currently, the DPDK does not store any information about memory allocated by |
-| | malloc() (for example, NUMA node, physical address), hence PMD drivers do not work |
-| | when the --no-huge command line parameter is supplied to EAL. |
-| | |
-+--------------------------------+--------------------------------------------------------------------------------------+
-| Implication | Sending and receiving data with PMD will not work. |
-| | |
-+--------------------------------+--------------------------------------------------------------------------------------+
-| Resolution/ Workaround | Use huge page memory or use VFIO to map devices. |
-| | |
-+--------------------------------+--------------------------------------------------------------------------------------+
-| Affected Environment/ Platform | Systems running the DPDK on Linux |
-| | |
-+--------------------------------+--------------------------------------------------------------------------------------+
-| Driver/Module | Poll Mode Driver (PMD) |
-| | |
-+--------------------------------+--------------------------------------------------------------------------------------+
+**Description**:
+ Currently, the DPDK does not store any information about memory allocated by ``malloc()`` (for example, NUMA node,
+ physical address), hence PMD drivers do not work when the ``--no-huge`` command line parameter is supplied to EAL.
+
+**Implication**:
+ Sending and receiving data with PMD will not work.
+
+**Resolution/Workaround**:
+ Use huge page memory or use VFIO to map devices.
+
+**Affected Environment/Platform**:
+ Systems running the DPDK on Linux
+
+**Driver/Module**:
+ Poll Mode Driver (PMD).
+
Some hardware off-load functions are not supported by the VF Driver
-------------------------------------------------------------------
-+--------------------------------+--------------------------------------------------------------------------------------+
-| Title | Some hardware off-load functions are not supported by the VF Driver |
-| | |
-+================================+======================================================================================+
-| Reference # | IXA00378813 |
-| | |
-+--------------------------------+--------------------------------------------------------------------------------------+
-| Description | Currently, configuration of the following items is not supported by the VF driver: |
-| | |
-| | * IP/UDP/TCP checksum offload |
-| | |
-| | * Jumbo Frame Receipt |
-| | |
-| | * HW Strip CRC |
-| | |
-+--------------------------------+--------------------------------------------------------------------------------------+
-| Implication | Any configuration for these items in the VF register will be ignored. The behavior |
-| | is dependent on the current PF setting. |
-| | |
-+--------------------------------+--------------------------------------------------------------------------------------+
-| Resolution/ Workaround | For the PF (Physical Function) status on which the VF driver depends, there is an |
-| | option item under PMD in the config file. For others, the VF will keep the same |
-| | behavior as PF setting. |
-| | |
-+--------------------------------+--------------------------------------------------------------------------------------+
-| Affected Environment/ Platform | All |
-| | |
-+--------------------------------+--------------------------------------------------------------------------------------+
-| Driver/Module | VF (SR-IOV) Poll Mode Driver (PMD) |
-| | |
-+--------------------------------+--------------------------------------------------------------------------------------+
+**Description**:
+ Currently, configuration of the following items is not supported by the VF driver:
+
+ * IP/UDP/TCP checksum offload
+ * Jumbo Frame Receipt
+ * HW Strip CRC
+
+**Implication**:
+ Any configuration for these items in the VF register will be ignored.
+ The behavior is dependent on the current PF setting.
+
+**Resolution/Workaround**:
+ For the PF (Physical Function) status on which the VF driver depends, there is an option item under PMD in the
+ config file.
+ For others, the VF will keep the same behavior as the PF setting.
+
+**Affected Environment/Platform**:
+ All.
+
+**Driver/Module**:
+ VF (SR-IOV) Poll Mode Driver (PMD).
+
Kernel crash on IGB port unbinding
----------------------------------
-+--------------------------------+--------------------------------------------------------------------------------------+
-| Title | Kernel crash on IGB port unbinding |
-| | |
-+================================+======================================================================================+
-| Reference # | 74 |
-| | |
-+--------------------------------+--------------------------------------------------------------------------------------+
-| Description | Kernel crash may occur |
-| | when unbinding 1G ports from the igb_uio driver, on 2.6.3x kernels such as shipped |
-| | with Fedora 14. |
-| | |
-+--------------------------------+--------------------------------------------------------------------------------------+
-| Implication | Kernel crash occurs. |
-| | |
-+--------------------------------+--------------------------------------------------------------------------------------+
-| Resolution/ Workaround | Use newer kernels or do not unbind ports. |
-| | |
-+--------------------------------+--------------------------------------------------------------------------------------+
-| Affected Environment/ Platform | 2.6.3x kernels such as shipped with Fedora 14 |
-| | |
-+--------------------------------+--------------------------------------------------------------------------------------+
-| Driver/Module | IGB Poll Mode Driver (PMD) |
-| | |
-+--------------------------------+--------------------------------------------------------------------------------------+
+**Description**:
+ Kernel crash may occur when unbinding 1G ports from the igb_uio driver, on 2.6.3x kernels such as shipped
+ with Fedora 14.
+
+**Implication**:
+ Kernel crash occurs.
+
+**Resolution/Workaround**:
+ Use newer kernels or do not unbind ports.
+
+**Affected Environment/Platform**:
+ 2.6.3x kernels such as shipped with Fedora 14
+
+**Driver/Module**:
+ IGB Poll Mode Driver (PMD).
+
Twinpond and Ironpond NICs do not report link status correctly
--------------------------------------------------------------
-+--------------------------------+--------------------------------------------------------------------------------------+
-| Title | Twinpond and Ironpond NICs do not report link status correctly |
-| | |
-+================================+======================================================================================+
-| Reference # | IXA00378800 |
-| | |
-+--------------------------------+--------------------------------------------------------------------------------------+
-| Description | Twin Pond/Iron Pond NICs do not bring the physical link down when shutting down the |
-| | port. |
-| | |
-+--------------------------------+--------------------------------------------------------------------------------------+
-| Implication | The link is reported as up even after issuing "shutdown" command unless the cable is |
-| | physically disconnected. |
-| | |
-+--------------------------------+--------------------------------------------------------------------------------------+
-| Resolution/ Workaround | None. |
-| | |
-+--------------------------------+--------------------------------------------------------------------------------------+
-| Affected Environment/ Platform | Twin Pond and Iron Pond NICs |
-| | |
-+--------------------------------+--------------------------------------------------------------------------------------+
-| Driver/Module | Poll Mode Driver (PMD) |
-| | |
-+--------------------------------+--------------------------------------------------------------------------------------+
+**Description**:
+ Twin Pond/Iron Pond NICs do not bring the physical link down when shutting down the port.
+
+**Implication**:
+ The link is reported as up even after issuing ``shutdown`` command unless the cable is physically disconnected.
+
+**Resolution/Workaround**:
+ None.
+
+**Affected Environment/Platform**:
+ Twin Pond and Iron Pond NICs
+
+**Driver/Module**:
+ Poll Mode Driver (PMD).
+
Discrepancies between statistics reported by different NICs
-----------------------------------------------------------
-+--------------------------------+--------------------------------------------------------------------------------------+
-| Title | Discrepancies between statistics reported by different NICs |
-| | |
-+================================+======================================================================================+
-| Reference # | IXA00378113 |
-| | |
-+--------------------------------+--------------------------------------------------------------------------------------+
-| Description | Gigabit Ethernet devices from Intel include CRC bytes when calculating packet |
-| | reception statistics regardless of hardware CRC stripping state, while 10-Gigabit |
-| | Ethernet devices from Intel do so only when hardware CRC stripping is disabled. |
-| | |
-+--------------------------------+--------------------------------------------------------------------------------------+
-| Implication | There may be a discrepancy in how different NICs display packet reception |
-| | statistics. |
-| | |
-+--------------------------------+--------------------------------------------------------------------------------------+
-| Resolution/ Workaround | None |
-| | |
-+--------------------------------+--------------------------------------------------------------------------------------+
-| Affected Environment/ Platform | All |
-| | |
-+--------------------------------+--------------------------------------------------------------------------------------+
-| Driver/Module | Poll Mode Driver (PMD) |
-| | |
-+--------------------------------+--------------------------------------------------------------------------------------+
+**Description**:
+ Gigabit Ethernet devices from Intel include CRC bytes when calculating packet reception statistics regardless
+ of hardware CRC stripping state, while 10-Gigabit Ethernet devices from Intel do so only when hardware CRC
+ stripping is disabled.
+
+**Implication**:
+ There may be a discrepancy in how different NICs display packet reception statistics.
+
+**Resolution/Workaround**:
+ None
+
+**Affected Environment/Platform**:
+ All.
+
+**Driver/Module**:
+ Poll Mode Driver (PMD).
+
Error reported opening files on DPDK initialization
---------------------------------------------------
+**Description**:
+ On DPDK application startup, errors may be reported when opening files as part of the initialization process.
+ This occurs if a large number of hugepages are used, for example, 500 or more, due to the per-process
+ limit on the number of open files.
+
+**Implication**:
+ The DPDK application may fail to run.
+
+**Resolution/Workaround**:
+ If using 2 MB hugepages, consider switching to a fewer number of 1 GB pages.
+ Alternatively, use the ``ulimit`` command to increase the number of files which can be opened by a process.
+
+**Affected Environment/Platform**:
+ All.
+
+**Driver/Module**:
+ Environment Abstraction Layer (EAL).
-+--------------------------------+--------------------------------------------------------------------------------------+
-| Title | Error reported opening files on DPDK initialization |
-| | |
-+================================+======================================================================================+
-| Reference # | 91 |
-| | |
-+--------------------------------+--------------------------------------------------------------------------------------+
-| Description | On DPDK application startup, errors may be reported when opening files as |
-| | part of the initialization process. This occurs if a large number, for example, 500 |
-| | or more, or if hugepages are used, due to the per-process limit on the number of |
-| | open files. |
-| | |
-+--------------------------------+--------------------------------------------------------------------------------------+
-| Implication | The DPDK application may fail to run. |
-| | |
-+--------------------------------+--------------------------------------------------------------------------------------+
-| Resolution/ Workaround | If using 2 MB hugepages, consider switching to a fewer number of 1 GB pages. |
-| | Alternatively, use the “ulimit” command to increase the number of files which can be |
-| | opened by a process. |
-| | |
-+--------------------------------+--------------------------------------------------------------------------------------+
-| Affected Environment/ Platform | All |
-| | |
-+--------------------------------+--------------------------------------------------------------------------------------+
-| Driver/Module | Environment Abstraction Layer (EAL) |
-| | |
-+--------------------------------+--------------------------------------------------------------------------------------+
Intel® QuickAssist Technology sample application does not work on a 32-bit OS on Shumway
----------------------------------------------------------------------------------------
-+--------------------------------+--------------------------------------------------------------------------------------+
-| Title | Intel® QuickAssist Technology sample applications does not work on a 32- bit OS on |
-| | Shumway |
-| | |
-+================================+======================================================================================+
-| Reference # | 93 |
-| | |
-+--------------------------------+--------------------------------------------------------------------------------------+
-| Description | The Intel® Communications Chipset 89xx Series device does not fully support NUMA on |
-| | a 32-bit OS. Consequently, the sample application cannot work properly on Shumway, |
-| | since it requires NUMA on both nodes. |
-| | |
-+--------------------------------+--------------------------------------------------------------------------------------+
-| Implication | The sample application cannot work in 32-bit mode with emulated NUMA, on |
-| | multi-socket boards. |
-| | |
-+--------------------------------+--------------------------------------------------------------------------------------+
-| Resolution/ Workaround | There is no workaround available. |
-| | |
-+--------------------------------+--------------------------------------------------------------------------------------+
-| Affected Environment/ Platform | Shumway |
-| | |
-+--------------------------------+--------------------------------------------------------------------------------------+
-| Driver/Module | All |
-| | |
-+--------------------------------+--------------------------------------------------------------------------------------+
+**Description**:
+ The Intel® Communications Chipset 89xx Series device does not fully support NUMA on a 32-bit OS.
+ Consequently, the sample application cannot work properly on Shumway, since it requires NUMA on both nodes.
+
+**Implication**:
+ The sample application cannot work in 32-bit mode with emulated NUMA, on multi-socket boards.
+
+**Resolution/Workaround**:
+ There is no workaround available.
+
+**Affected Environment/Platform**:
+ Shumway
+
+**Driver/Module**:
+ All.
+
IEEE1588 support possibly not working with an Intel® Ethernet Controller I210 NIC
---------------------------------------------------------------------------------
-+--------------------------------+--------------------------------------------------------------------------------------+
-| Title | IEEE1588 support may not work with an Intel® Ethernet Controller I210 NIC |
-| | |
-+================================+======================================================================================+
-| Reference # | IXA00380285 |
-| | |
-+--------------------------------+--------------------------------------------------------------------------------------+
-| Description | IEEE1588 support is not working with an Intel® Ethernet Controller I210 NIC. |
-| | |
-+--------------------------------+--------------------------------------------------------------------------------------+
-| Implication | IEEE1588 packets are not forwarded correctly by the Intel® Ethernet Controller I210 |
-| | NIC. |
-| | |
-+--------------------------------+--------------------------------------------------------------------------------------+
-| Resolution/ Workaround | There is no workaround available. |
-| | |
-+--------------------------------+--------------------------------------------------------------------------------------+
-| Affected Environment/ Platform | All |
-| | |
-+--------------------------------+--------------------------------------------------------------------------------------+
-| Driver/Module | IGB Poll Mode Driver |
-| | |
-+--------------------------------+--------------------------------------------------------------------------------------+
+**Description**:
+ IEEE1588 support is not working with an Intel® Ethernet Controller I210 NIC.
+
+**Implication**:
+ IEEE1588 packets are not forwarded correctly by the Intel® Ethernet Controller I210 NIC.
+
+**Resolution/Workaround**:
+ There is no workaround available.
+
+**Affected Environment/Platform**:
+ All.
+
+**Driver/Module**:
+ IGB Poll Mode Driver
+
Differences in how different Intel NICs handle maximum packet length for jumbo frame
------------------------------------------------------------------------------------
-+--------------------------------+--------------------------------------------------------------------------------------+
-| Title | Differences in how different Intel NICs handle maximum packet length for jumbo frame |
-| | |
-+================================+======================================================================================+
-| Reference # | 96 |
-| | |
-+--------------------------------+--------------------------------------------------------------------------------------+
-| Description | 10 Gigabit Ethernet devices from Intel do not take VLAN tags into account when |
-| | calculating packet size while Gigabit Ethernet devices do so for jumbo frames. |
-| | |
-+--------------------------------+--------------------------------------------------------------------------------------+
-| Implication | When receiving packets with VLAN tags, the actual maximum size of useful payload |
-| | that Intel Gigabit Ethernet devices are able to receive is 4 bytes (or 8 bytes in |
-| | the case of packets with extended VLAN tags) less than that of Intel 10 Gigabit |
-| | Ethernet devices. |
-| | |
-+--------------------------------+--------------------------------------------------------------------------------------+
-| Resolution/ Workaround | Increase the configured maximum packet size when using Intel Gigabit Ethernet |
-| | devices. |
-| | |
-+--------------------------------+--------------------------------------------------------------------------------------+
-| Affected Environment/ Platform | All |
-| | |
-+--------------------------------+--------------------------------------------------------------------------------------+
-| Driver/Module | Poll Mode Driver (PMD) |
-| | |
-+--------------------------------+--------------------------------------------------------------------------------------+
-
-Binding PCI devices to igb_uio fails on Linux* kernel 3.9 when more than one device is used
--------------------------------------------------------------------------------------------
-
-+--------------------------------+--------------------------------------------------------------------------------------+
-| Title | Binding PCI devices to igb_uio fails on Linux* kernel 3.9 when more than one device |
-| | is used |
-| | |
-+================================+======================================================================================+
-| Reference # | 97 |
-| | |
-+--------------------------------+--------------------------------------------------------------------------------------+
-| Description | A known bug in the uio driver included in Linux* kernel version 3.9 prevents more |
-| | than one PCI device to be bound to the igb_uio driver. |
-| | |
-+--------------------------------+--------------------------------------------------------------------------------------+
-| Implication | The Poll Mode Driver (PMD) will crash on initialization. |
-| | |
-+--------------------------------+--------------------------------------------------------------------------------------+
-| Resolution/ Workaround | Use earlier or later kernel versions, or apply the following |
-| | `patch |
-| | <https://github.com/torvalds/linux/commit/5ed0505c713805f89473cdc0bbfb5110dfd840cb>`_|
-| | . |
-| | |
-+--------------------------------+--------------------------------------------------------------------------------------+
-| Affected Environment/ Platform | Linux* systems with kernel version 3.9 |
-| | |
-+--------------------------------+--------------------------------------------------------------------------------------+
-| Driver/Module | igb_uio module |
-| | |
-+--------------------------------+--------------------------------------------------------------------------------------+
+**Description**:
+ 10 Gigabit Ethernet devices from Intel do not take VLAN tags into account when calculating packet size,
+ while Gigabit Ethernet devices do so for jumbo frames.
+
+**Implication**:
+ When receiving packets with VLAN tags, the actual maximum size of useful payload that Intel Gigabit Ethernet
+ devices are able to receive is 4 bytes (or 8 bytes in the case of packets with extended VLAN tags) less than
+ that of Intel 10 Gigabit Ethernet devices.
+
+**Resolution/Workaround**:
+ Increase the configured maximum packet size when using Intel Gigabit Ethernet devices.
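+
+ For example, a device configuration along the following lines (a sketch only; the sizes are
+ illustrative) leaves room for the extra VLAN bytes on 1 GbE devices::
+
+    #include <rte_ethdev.h>
+
+    /* Allow up to 8 extra bytes of (extended) VLAN tagging on top of the
+     * desired 9000 byte maximum frame size. */
+    static const struct rte_eth_conf port_conf = {
+            .rxmode = {
+                    .jumbo_frame = 1,
+                    .max_rx_pkt_len = 9000 + 8,
+            },
+    };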
+
+**Affected Environment/Platform**:
+ All.
+
+**Driver/Module**:
+ Poll Mode Driver (PMD).
+
+
+Binding PCI devices to igb_uio fails on Linux kernel 3.9 when more than one device is used
+------------------------------------------------------------------------------------------
+
+**Description**:
+ A known bug in the uio driver included in Linux kernel version 3.9 prevents more than one PCI device to be
+ bound to the igb_uio driver.
+
+**Implication**:
+ The Poll Mode Driver (PMD) will crash on initialization.
+
+**Resolution/Workaround**:
+ Use earlier or later kernel versions, or apply the following
+ `patch <https://github.com/torvalds/linux/commit/5ed0505c713805f89473cdc0bbfb5110dfd840cb>`_.
+
+**Affected Environment/Platform**:
+ Linux systems with kernel version 3.9
+
+**Driver/Module**:
+ igb_uio module
+
GCC might generate Intel® AVX instructions for processors without Intel® AVX support
------------------------------------------------------------------------------------
-+--------------------------------+--------------------------------------------------------------------------------------+
-| Title | Gcc might generate Intel® AVX instructions for processors without Intel® AVX support |
-| | |
-+================================+======================================================================================+
-| Reference # | IXA00382439 |
-| | |
-+--------------------------------+--------------------------------------------------------------------------------------+
-| Description | When compiling Intel® DPDK (and any DPDK app), gcc may generate Intel® AVX |
-| | instructions, even when the processor does not support Intel® AVX. |
-| | |
-+--------------------------------+--------------------------------------------------------------------------------------+
-| Implication | Any DPDK app might crash while starting up. |
-| | |
-+--------------------------------+--------------------------------------------------------------------------------------+
-| Resolution/ Workaround | Either compile using icc or set EXTRA_CFLAGS=’-O3’ prior to compilation. |
-| | |
-+--------------------------------+--------------------------------------------------------------------------------------+
-| Affected Environment/ Platform | Platforms which processor does not support Intel® AVX. |
-| | |
-+--------------------------------+--------------------------------------------------------------------------------------+
-| Driver/Module | Environment Abstraction Layer (EAL) |
-| | |
-+--------------------------------+--------------------------------------------------------------------------------------+
+**Description**:
+ When compiling Intel® DPDK (and any DPDK app), gcc may generate Intel® AVX instructions, even when the
+ processor does not support Intel® AVX.
+
+**Implication**:
+ Any DPDK app might crash while starting up.
+
+**Resolution/Workaround**:
+ Either compile using icc or set ``EXTRA_CFLAGS='-O3'`` prior to compilation.
+
+**Affected Environment/Platform**:
+ Platforms whose processor does not support Intel® AVX.
+
+**Driver/Module**:
+ Environment Abstraction Layer (EAL).
Ethertype filter could receive other packets (non-assigned) in Niantic
----------------------------------------------------------------------
-+--------------------------------+--------------------------------------------------------------------------------------+
-| Title | Ethertype filter could receive other packets (non-assigned) in Niantic |
-| | |
-+================================+======================================================================================+
-| Reference # | IXA00169017 |
-| | |
-+--------------------------------+--------------------------------------------------------------------------------------+
-| Description | On Intel® Ethernet Controller 82599EB: |
-| | |
-| | When Ethertype filter (priority enable) was set, unmatched packets also could be |
-| | received on the assigned queue, such as ARP packets without 802.1q tags or with the |
-| | user priority not equal to set value. |
-| | |
-| | Launch the testpmd by disabling RSS and with multiply queues, then add the ethertype |
-| | filter like: “add_ethertype_filter 0 ethertype 0x0806 priority enable 3 queue 2 |
-| | index 1”, and then start forwarding. |
-| | |
-| | When sending ARP packets without 802.1q tag and with user priority as non-3 by |
-| | tester, all the ARP packets can be received on the assigned queue. |
-| | |
-+--------------------------------+--------------------------------------------------------------------------------------+
-| Implication | The user priority comparing in Ethertype filter cannot work probably. |
-| | It is the NIC's issue due to the response from PAE: “In fact, ETQF.UP is not |
-| | functional, and the information will be added in errata of 82599 and X540.” |
-| | |
-+--------------------------------+--------------------------------------------------------------------------------------+
-| Resolution/ Workaround | None |
-| | |
-+--------------------------------+--------------------------------------------------------------------------------------+
-| Affected Environment/ Platform | All |
-| | |
-+--------------------------------+--------------------------------------------------------------------------------------+
-| Driver/Module | Poll Mode Driver (PMD) |
-| | |
-+--------------------------------+--------------------------------------------------------------------------------------+
+**Description**:
+ On the Intel® Ethernet Controller 82599EB, when an Ethertype filter (priority enable) was set, unmatched packets
+ could also be received on the assigned queue, such as ARP packets without 802.1q tags or with a user priority not
+ equal to the set value.
+ Launch testpmd with RSS disabled and multiple queues, then add the Ethertype filter like the following
+ and start forwarding::
+
+ add_ethertype_filter 0 ethertype 0x0806 priority enable 3 queue 2 index 1
+
+ When sending ARP packets without 802.1q tag and with user priority as non-3 by tester, all the ARP packets can
+ be received on the assigned queue.
+
+**Implication**:
+ The user priority comparison in the Ethertype filter does not work properly.
+ It is a NIC issue due to the following: "In fact, ETQF.UP is not functional, and the information will
+ be added in the errata of 82599 and X540."
+
+**Resolution/Workaround**:
+ None
+
+**Affected Environment/Platform**:
+ All.
+
+**Driver/Module**:
+ Poll Mode Driver (PMD).
+
Cannot set link speed on Intel® 40G Ethernet controller
-------------------------------------------------------
-+--------------------------------+--------------------------------------------------------------------------------------+
-| Title | Cannot set link speed on Intel® 40G Ethernet controller |
-| | |
-+================================+======================================================================================+
-| Reference # | IXA00386379 |
-| | |
-+--------------------------------+--------------------------------------------------------------------------------------+
-| Description | On Intel® 40G Ethernet Controller: |
-| | |
-| | It cannot set the link to specific speed. |
-| | |
-+--------------------------------+--------------------------------------------------------------------------------------+
-| Implication | The link speed cannot be changed forcibly, though it can be configured by |
-| | application. |
-| | |
-+--------------------------------+--------------------------------------------------------------------------------------+
-| Resolution/ Workaround | None |
-| | |
-+--------------------------------+--------------------------------------------------------------------------------------+
-| Affected Environment/ Platform | All |
-| | |
-+--------------------------------+--------------------------------------------------------------------------------------+
-| Driver/Module | Poll Mode Driver (PMD) |
-| | |
-+--------------------------------+--------------------------------------------------------------------------------------+
+**Description**:
+ On the Intel® 40G Ethernet Controller you cannot set the link to a specific speed.
+
+**Implication**:
+ The link speed cannot be changed forcibly, though it can be configured by the application.
+
+**Resolution/Workaround**:
+ None
+
+**Affected Environment/Platform**:
+ All.
+
+**Driver/Module**:
+ Poll Mode Driver (PMD).
+
Stopping the port does not down the link on Intel® 40G Ethernet controller
--------------------------------------------------------------------------
-+--------------------------------+--------------------------------------------------------------------------------------+
-| Title | Stopping the port does not down the link on Intel® 40G Ethernet controller |
-| | |
-+================================+======================================================================================+
-| Reference # | IXA00386380 |
-| | |
-+--------------------------------+--------------------------------------------------------------------------------------+
-| Description | On Intel® 40G Ethernet Controller: |
-| | |
-| | Stopping the port does not really down the port link. |
-| | |
-+--------------------------------+--------------------------------------------------------------------------------------+
-| Implication | The port link will be still up after stopping the port. |
-| | |
-+--------------------------------+--------------------------------------------------------------------------------------+
-| Resolution/ Workaround | None |
-| | |
-+--------------------------------+--------------------------------------------------------------------------------------+
-| Affected Environment/ Platform | All |
-| | |
-+--------------------------------+--------------------------------------------------------------------------------------+
-| Driver/Module | Poll Mode Driver (PMD) |
-| | |
-+--------------------------------+--------------------------------------------------------------------------------------+
-
-Devices bound to igb_uio with VT-d enabled do not work on Linux* kernel 3.15-3.17
----------------------------------------------------------------------------------
+**Description**:
+ On the Intel® 40G Ethernet Controller, stopping the port does not really bring down the port link.
+
+**Implication**:
+ The port link will be still up after stopping the port.
+
+**Resolution/Workaround**:
+ None
+
+**Affected Environment/Platform**:
+ All.
+
+**Driver/Module**:
+ Poll Mode Driver (PMD).
+
+
+Devices bound to igb_uio with VT-d enabled do not work on Linux kernel 3.15-3.17
+--------------------------------------------------------------------------------
+
+**Description**:
+ When VT-d is enabled (``iommu=pt intel_iommu=on``), devices are 1:1 mapped.
+ In the Linux kernel unbinding devices from drivers removes that mapping which result in IOMMU errors.
+ Introduced in Linux `kernel 3.15 commit
+ <https://git.kernel.org/cgit/linux/kernel/git/torvalds/linux.git/commit/drivers/iommu/intel-iommu.c?id=816997d03bca9fabcee65f3481eb0297103eceb7>`_,
+ solved in Linux `kernel 3.18 commit
+ <https://git.kernel.org/cgit/linux/kernel/git/torvalds/linux.git/commit/drivers/iommu/intel-iommu.c?id=1196c2fb0407683c2df92d3d09f9144d42830894>`_.
+
+**Implication**:
+ Devices will not be allowed to access memory, resulting in following kernel errors::
+
+ dmar: DRHD: handling fault status reg 2
+ dmar: DMAR:[DMA Read] Request device [02:00.0] fault addr a0c58000
+ DMAR:[fault reason 02] Present bit in context entry is clear
+
+**Resolution/Workaround**:
+ Use earlier or later kernel versions, or avoid driver binding on boot by blacklisting the driver modules.
+ For example, in the case of ``ixgbe``, we can pass the kernel command-line option ``modprobe.blacklist=ixgbe``.
+ This way we do not need to unbind the device to bind it to igb_uio.
+
+**Affected Environment/Platform**:
+ Linux systems with kernel versions 3.15 to 3.17.
+
+**Driver/Module**:
+ ``igb_uio`` module.
-+--------------------------------+--------------------------------------------------------------------------------------+
-| Title | Devices bound to igb_uio with VT-d enabled do not work on Linux* kernel 3.15-3.17 |
-+================================+======================================================================================+
-| Description | | When VT-d is enabled (iommu=pt intel_iommu=on), devices are 1:1 mapped. |
-| | In the Linux* kernel unbinding devices from drivers removes that mapping which |
-| | result in IOMMU errors. |
-| | | Introduced in Linux `kernel 3.15 commit <https://git.kernel.org/cgit/linux/kernel/ |
-| | git/torvalds/linux.git/commit/drivers/iommu/ |
-| | intel-iommu.c?id=816997d03bca9fabcee65f3481eb0297103eceb7>`_, |
-| | solved in Linux `kernel 3.18 commit <https://git.kernel.org/cgit/linux/kernel/git/ |
-| | torvalds/linux.git/commit/drivers/iommu/ |
-| | intel-iommu.c?id=1196c2fb0407683c2df92d3d09f9144d42830894>`_. |
-+--------------------------------+--------------------------------------------------------------------------------------+
-| Implication | | Devices will not be allowed to access memory, resulting in following kernel errors:|
-| | | ``dmar: DRHD: handling fault status reg 2`` |
-| | | ``dmar: DMAR:[DMA Read] Request device [02:00.0] fault addr a0c58000`` |
-| | | ``DMAR:[fault reason 02] Present bit in context entry is clear`` |
-+--------------------------------+--------------------------------------------------------------------------------------+
-| Resolution/ Workaround | | Use earlier or later kernel versions, or avoid driver binding on boot by |
-| | blacklisting the driver modules. |
-| | | ie. in the case of ixgbe, we can pass the kernel command line option: |
-| | | ``modprobe.blacklist=ixgbe`` |
-| | | This way we do not need to unbind the device to bind it to igb_uio. |
-+--------------------------------+--------------------------------------------------------------------------------------+
-| Affected Environment/ Platform | Linux* systems with kernel versions 3.15 to 3.17 |
-+--------------------------------+--------------------------------------------------------------------------------------+
-| Driver/Module | igb_uio module |
-+--------------------------------+--------------------------------------------------------------------------------------+
VM power manager may not work on systems with more than 64 cores
----------------------------------------------------------------
-+--------------------------------+--------------------------------------------------------------------------------------+
-| Title | VM power manager may not work on systems with more than 64 cores |
-| | |
-+================================+======================================================================================+
-| Description | When using VM power manager on a system with more than 64 cores, |
-| | VM(s) should not use cores 64 or higher. |
-| | |
-+--------------------------------+--------------------------------------------------------------------------------------+
-| Implication | VM power manager should not be used with VM(s) that are using cores 64 or above. |
-| | |
-+--------------------------------+--------------------------------------------------------------------------------------+
-| Resolution/ Workaround | Do not use cores 64 or above. |
-| | |
-+--------------------------------+--------------------------------------------------------------------------------------+
-| Affected Environment/ Platform | Platforms with more than 64 cores. |
-| | |
-+--------------------------------+--------------------------------------------------------------------------------------+
-| Driver/Module | VM power manager application |
-| | |
-+--------------------------------+--------------------------------------------------------------------------------------+
+**Description**:
+ When using VM power manager on a system with more than 64 cores, VM(s) should not use cores 64 or higher.
+
+**Implication**:
+ VM power manager should not be used with VM(s) that are using cores 64 or above.
+
+**Resolution/Workaround**:
+   Do not use cores 64 or above (see the example below).
+
+**Affected Environment/Platform**:
+ Platforms with more than 64 cores.
+
+**Driver/Module**:
+ VM power manager application.
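+
+One possible way to respect this limit with a QEMU/KVM guest is to pin the whole VM to
+host cores below 64 when it is launched, for example (illustrative command only; the
+core list and the remaining QEMU options must be adapted to the actual setup):
+
+.. code-block:: console
+
+   # taskset -c 0-63 qemu-system-x86_64 -enable-kvm -smp 4 -m 4096 [...]
+
+This confines every vCPU thread of the guest to host cores 0-63, so no core above 63
+is used by the VM.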
diff --git a/doc/guides/rel_notes/new_features.rst b/doc/guides/rel_notes/new_features.rst
deleted file mode 100644
index 5b724ab..0000000
--- a/doc/guides/rel_notes/new_features.rst
+++ /dev/null
@@ -1,129 +0,0 @@
-.. BSD LICENSE
- Copyright(c) 2010-2014 Intel Corporation. All rights reserved.
- All rights reserved.
-
- Redistribution and use in source and binary forms, with or without
- modification, are permitted provided that the following conditions
- are met:
-
- * Redistributions of source code must retain the above copyright
- notice, this list of conditions and the following disclaimer.
- * Redistributions in binary form must reproduce the above copyright
- notice, this list of conditions and the following disclaimer in
- the documentation and/or other materials provided with the
- distribution.
- * Neither the name of Intel Corporation nor the names of its
- contributors may be used to endorse or promote products derived
- from this software without specific prior written permission.
-
- THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
- "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
- LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
- A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
- OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
- SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
- LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
- DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
- THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
- (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
- OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
-
-New Features
-============
-* Poll-mode driver support for an early release of the PCIE host interface of the Intel(R) Ethernet Switch FM10000.
-
- * Basic Rx/Tx functions for PF/VF
-
- * Interrupt handling support for PF/VF
-
- * Per queue start/stop functions for PF/VF
-
- * Support Mailbox handling between PF/VF and PF/Switch Manager
-
- * Receive Side Scaling (RSS) for PF/VF
-
- * Scatter receive function for PF/VF
-
- * Reta update/query for PF/VF
-
- * VLAN filter set for PF
-
- * Link status query for PF/VF
-
-.. note:: The software is intended to run on pre-release hardware and may contain unknown or unresolved defects or
- issues related to functionality and performance.
- The poll mode driver is also pre-release and will be updated to a released version post hardware and base driver release.
- Should the official hardware release be made between DPDK releases an updated poll-mode driver will be made available.
-
-* Link Bonding
-
- * Support for adaptive load balancing (mode 6) to the link bonding library.
-
- * Support for registration of link status change callbacks with link bonding devices.
-
- * Support for slaves devices which do not support link status change interrupts in the link bonding library via a link status polling mechanism.
-
-* PCI Hotplug with NULL PMD sample application
-
-* ABI versioning
-
-* x32 ABI
-
-* Non-EAL Thread Support
-
-* Multi-pthread Support
-
-* Re-order Library
-
-* ACL for AVX2
-
-* Architecture Independent CRC Hash
-
-* uio_pci_generic Support
-
-* KNI Optimizations
-
-* Vhost-user support
-
-* Virtio (link, vlan, mac, port IO, perf)
-
-* IXGBE-VF RSS
-
-* RX/TX Callbacks
-
-* Unified Flow Types
-
-* Indirect Attached MBUF Flag
-
-* Use default port configuration in TestPMD
-
-* Tunnel offloading in TestPMD
-
-* Poll Mode Driver - 40 GbE Controllers (librte_pmd_i40e)
-
- * Support for Flow Director
-
- * Support for ethertype filter
-
- * Support RSS in VF
-
- * Support configuring redirection table with different size from 1GbE and 10 GbE
-
- - 128/512 entries of 40GbE PF
-
- - 64 entries of 40GbE VF
-
- * Support configuring hash functions
-
- * Support for VXLAN packet on Intel® 40GbE Controllers
-
-* Packet Distributor Sample Application
-
-* Job Stats library and Sample Application.
-
-* Enhanced Jenkins hash (jhash) library
-
-.. note:: The hash values returned by the new jhash library are different
- from the ones returned by the previous library.
-
-For further features supported in this release, see Chapter 3 Supported Features.
diff --git a/doc/guides/rel_notes/rel_description.rst b/doc/guides/rel_notes/rel_description.rst
index 9b1fb7f..a31d696 100644
--- a/doc/guides/rel_notes/rel_description.rst
+++ b/doc/guides/rel_notes/rel_description.rst
@@ -28,151 +28,14 @@
(INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
+
Description of Release
======================
-These release notes cover the new features,
-fixed bugs and known issues for Data Plane Development Kit (DPDK) release version 2.0.0.
-
-For instructions on compiling and running the release, see the *DPDK Getting Started Guide*.
-
-Using DPDK Upgrade Patches
---------------------------
-
-For minor updates to the main DPDK releases, the software may be made available both as a new full package and as a patch file to be applied to the previously released package.
-In the latter case, the following commands should be used to apply the patch on top of the already-installed package for the previous release:
-
-.. code-block:: console
-
- # cd $RTE_SDK
- # patch –p1 < /path/to/patch/file
-
-Once the patch has been applied cleanly, the DPDK can be recompiled and used as before (described in the *DPDK Getting Started Guide*).
-
-.. note::
-
- If the patch does not apply cleanly, perhaps because of modifications made locally to the software,
- it is recommended to use the full release package for the minor update, instead of using the patch.
-
-Documentation Roadmap
----------------------
-
-The following is a list of DPDK documents in the suggested reading order:
-
-* **Release Notes**
- (this document): Provides release-specific information, including supported features, limitations, fixed issues, known issues and so on.
- Also, provides the answers to frequently asked questions in FAQ format.
-
-* **Getting Started Guide**
- : Describes how to install and configure the DPDK software; designed to get users up and running quickly with the software.
-
-* **FreeBSD* Getting Started Guide**
- : A document describing the use of the DPDK with FreeBSD* has been added in DPDK Release 1.6.0.
- Refer to this guide for installation and configuration instructions to get started using the DPDK with FreeBSD*.
-
-* **Programmer's Guide**
- : Describes:
-
- * The software architecture and how to use it (through examples), specifically in a Linux* application (linuxapp) environment
-
- * The content of the DPDK, the build system (including the commands that can be used in the root DPDK Makefile to build the development kit and an application)
- and guidelines for porting an application
-
- * Optimizations used in the software and those that should be considered for new development
-
- A glossary of terms is also provided.
-
-* **API Reference**
- : Provides detailed information about DPDK functions, data structures and other programming constructs.
-
-* **Sample Applications User Guide**
- : Describes a set of sample applications. Each chapter describes a sample application that showcases specific functionality and provides instructions on how to compile,
- run and use the sample application.
-
- The following sample applications are included:
-
- * Command Line
-
- * Exception Path (into Linux* for packets using the Linux TUN/TAP driver)
-
- * Hello World
-
- * Integration with Intel® QuickAssist Technology
-
- * Link Status Interrupt (Ethernet* Link Status Detection)
-
- * IP Reassembly
-
- * IP Pipeline
-
- * IP Fragmentation
-
- * IPv4 Multicast
-
- * L2 Forwarding (supports virtualized and non-virtualized environments)
-
- * L2 Forwarding IVSHMEM
-
- * L2 Forwarding Jobstats
-
- * L3 Forwarding
-
- * L3 Forwarding with Access Control
-
- * L3 Forwarding with Power Management
-
- * L3 Forwarding in a Virtualized Environment
-
- * Link Bonding
-
- * Link Status Interrupt
-
- * Load Balancing
-
- * Multi-process
-
- * QoS Scheduler + Dropper
-
- * QoS Metering
-
- * Quota & Watermarks
-
- * Timer
-
- * VMDQ and DCB L2 Forwarding
-
- * VMDQ L2 Forwarding
-
- * Userspace vhost
-
- * Userspace vhost switch
-
- * Netmap
-
- * Kernel NIC Interface (KNI)
-
- * VM Power Management
-
- * Distributor
-
- * RX-TX Callbacks
-
- * Skeleton
-
- In addition, there are some other applications that are built when the libraries are created.
- The source for these applications is in the DPDK/app directory and are called:
-
- * test
-
- * testpmd
- Once the libraries are created, they can be found in the build/app directory.
- * The test application provides a variety of specific tests for the various functions in the DPDK.
- * The testpmd application provides a number of different packet throughput tests and examples of features such as
- how to use the Flow Director found in the Intel® 82599 10 Gigabit Ethernet Controller.
- The testpmd application is documented in the *DPDK Testpmd Application Note*.
- The test application is not currently documented.
- However, you should be able to run and use test application with the command line help that is provided in the application.
+This document contains the release notes for Data Plane Development Kit (DPDK) release version 2.0.0 and previous releases.
+
+It lists new features, fixed bugs, API and ABI changes and known issues.
+
+For instructions on compiling and running the release, see the :doc:`DPDK Getting Started Guide </linux_gsg/index>`.
diff --git a/doc/guides/rel_notes/release_1_8.rst b/doc/guides/rel_notes/release_1_8.rst
new file mode 100644
index 0000000..39e6611
--- /dev/null
+++ b/doc/guides/rel_notes/release_1_8.rst
@@ -0,0 +1,64 @@
+.. BSD LICENSE
+ Copyright(c) 2010-2014 Intel Corporation. All rights reserved.
+ All rights reserved.
+
+ Redistribution and use in source and binary forms, with or without
+ modification, are permitted provided that the following conditions
+ are met:
+
+ * Redistributions of source code must retain the above copyright
+ notice, this list of conditions and the following disclaimer.
+ * Redistributions in binary form must reproduce the above copyright
+ notice, this list of conditions and the following disclaimer in
+ the documentation and/or other materials provided with the
+ distribution.
+ * Neither the name of Intel Corporation nor the names of its
+ contributors may be used to endorse or promote products derived
+ from this software without specific prior written permission.
+
+ THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
+ "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
+ LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
+ A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
+ OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
+ SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
+ LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
+ DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
+ THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
+ (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
+ OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
+
+
+DPDK Release 1.8
+================
+
+New Features
+------------
+
+* Link Bonding
+
+ * Support for 802.3ad link aggregation (mode 4) and transmit load balancing (mode 5) in the link bonding library.
+
+ * Support for registration of link status change callbacks with link bonding devices.
+
+ * Support for slave devices which do not support link status change interrupts, via a link status polling mechanism in the link bonding library.
+
+* Poll Mode Driver - 40 GbE Controllers (librte_pmd_i40e)
+
+ * Support for Flow Director
+
+ * Support for ethertype filter
+
+ * Support RSS in VF
+
+ * Support for configuring the redirection table with a different size from 1 GbE and 10 GbE controllers
+
+ - 128/512 entries for the 40 GbE PF
+
+ - 64 entries for the 40 GbE VF
+
+ * Support for configuring hash functions
+
+ * Support for VXLAN packets on Intel® 40 GbE Controllers
+
+* Packet Distributor Sample Application
diff --git a/doc/guides/rel_notes/release_2_0.rst b/doc/guides/rel_notes/release_2_0.rst
new file mode 100644
index 0000000..4341a0c
--- /dev/null
+++ b/doc/guides/rel_notes/release_2_0.rst
@@ -0,0 +1,133 @@
+.. BSD LICENSE
+ Copyright(c) 2010-2014 Intel Corporation. All rights reserved.
+ All rights reserved.
+
+ Redistribution and use in source and binary forms, with or without
+ modification, are permitted provided that the following conditions
+ are met:
+
+ * Redistributions of source code must retain the above copyright
+ notice, this list of conditions and the following disclaimer.
+ * Redistributions in binary form must reproduce the above copyright
+ notice, this list of conditions and the following disclaimer in
+ the documentation and/or other materials provided with the
+ distribution.
+ * Neither the name of Intel Corporation nor the names of its
+ contributors may be used to endorse or promote products derived
+ from this software without specific prior written permission.
+
+ THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
+ "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
+ LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
+ A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
+ OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
+ SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
+ LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
+ DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
+ THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
+ (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
+ OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
+
+
+DPDK Release 2.0
+================
+
+
+New Features
+------------
+
+* Poll-mode driver support for an early release of the PCIE host interface of the Intel(R) Ethernet Switch FM10000.
+
+ * Basic Rx/Tx functions for PF/VF
+
+ * Interrupt handling support for PF/VF
+
+ * Per queue start/stop functions for PF/VF
+
+ * Support Mailbox handling between PF/VF and PF/Switch Manager
+
+ * Receive Side Scaling (RSS) for PF/VF
+
+ * Scatter receive function for PF/VF
+
+ * Reta update/query for PF/VF
+
+ * VLAN filter set for PF
+
+ * Link status query for PF/VF
+
+.. note:: The software is intended to run on pre-release hardware and may contain unknown or unresolved defects or
+ issues related to functionality and performance.
+ The poll mode driver is also pre-release and will be updated to a released version post hardware and base driver release.
+ Should the official hardware release be made between DPDK releases, an updated poll-mode driver will be made available.
+
+* Link Bonding
+
+ * Support for adaptive load balancing (mode 6) in the link bonding library.
+
+ * Support for registration of link status change callbacks with link bonding devices.
+
+ * Support for slave devices which do not support link status change interrupts, via a link status polling mechanism in the link bonding library.
+
+* PCI Hotplug with NULL PMD sample application
+
+* ABI versioning
+
+* x32 ABI
+
+* Non-EAL Thread Support
+
+* Multi-pthread Support
+
+* Re-order Library
+
+* ACL for AVX2
+
+* Architecture Independent CRC Hash
+
+* uio_pci_generic Support
+
+* KNI Optimizations
+
+* Vhost-user support
+
+* Virtio (link, vlan, mac, port IO, perf)
+
+* IXGBE-VF RSS
+
+* RX/TX Callbacks
+
+* Unified Flow Types
+
+* Indirect Attached MBUF Flag
+
+* Use default port configuration in TestPMD
+
+* Tunnel offloading in TestPMD
+
+* Poll Mode Driver - 40 GbE Controllers (librte_pmd_i40e)
+
+ * Support for Flow Director
+
+ * Support for ethertype filter
+
+ * Support RSS in VF
+
+ * Support for configuring the redirection table with a different size from 1 GbE and 10 GbE controllers
+
+ - 128/512 entries for the 40 GbE PF
+
+ - 64 entries for the 40 GbE VF
+
+ * Support for configuring hash functions
+
+ * Support for VXLAN packets on Intel® 40 GbE Controllers
+
+* Packet Distributor Sample Application
+
+* Job Stats library and Sample Application.
+
+* Enhanced Jenkins hash (jhash) library
+
+.. note:: The hash values returned by the new jhash library are different
+ from the ones returned by the previous library.
diff --git a/doc/guides/rel_notes/release_2_1.rst b/doc/guides/rel_notes/release_2_1.rst
new file mode 100644
index 0000000..c39418c
--- /dev/null
+++ b/doc/guides/rel_notes/release_2_1.rst
@@ -0,0 +1,63 @@
+.. BSD LICENSE
+ Copyright(c) 2010-2014 Intel Corporation. All rights reserved.
+ All rights reserved.
+
+ Redistribution and use in source and binary forms, with or without
+ modification, are permitted provided that the following conditions
+ are met:
+
+ * Redistributions of source code must retain the above copyright
+ notice, this list of conditions and the following disclaimer.
+ * Redistributions in binary form must reproduce the above copyright
+ notice, this list of conditions and the following disclaimer in
+ the documentation and/or other materials provided with the
+ distribution.
+ * Neither the name of Intel Corporation nor the names of its
+ contributors may be used to endorse or promote products derived
+ from this software without specific prior written permission.
+
+ THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
+ "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
+ LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
+ A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
+ OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
+ SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
+ LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
+ DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
+ THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
+ (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
+ OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
+
+
+DPDK Release 2.1
+================
+
+
+New Features
+------------
+
+* TODO.
+
+
+Resolved Issues
+---------------
+
+* TODO.
+
+
+Known Issues
+------------
+
+* TODO.
+
+
+API Changes
+-----------
+
+* TODO.
+
+
+ABI Changes
+-----------
+
+* TODO.
diff --git a/doc/guides/rel_notes/resolved_issues.rst b/doc/guides/rel_notes/resolved_issues.rst
deleted file mode 100644
index 8d6bbfa..0000000
--- a/doc/guides/rel_notes/resolved_issues.rst
+++ /dev/null
@@ -1,1395 +0,0 @@
-.. BSD LICENSE
- Copyright(c) 2010-2014 Intel Corporation. All rights reserved.
- All rights reserved.
-
- Redistribution and use in source and binary forms, with or without
- modification, are permitted provided that the following conditions
- are met:
-
- * Redistributions of source code must retain the above copyright
- notice, this list of conditions and the following disclaimer.
- * Redistributions in binary form must reproduce the above copyright
- notice, this list of conditions and the following disclaimer in
- the documentation and/or other materials provided with the
- distribution.
- * Neither the name of Intel Corporation nor the names of its
- contributors may be used to endorse or promote products derived
- from this software without specific prior written permission.
-
- THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
- "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
- LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
- A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
- OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
- SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
- LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
- DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
- THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TOR
- (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
- OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
-
-Resolved Issues
-===============
-
-This section describes previously known issues that have been resolved since release version 1.2.
-
-Running TestPMD with SRIOV in Domain U may cause it to hang when XENVIRT switch is on
--------------------------------------------------------------------------------------
-
-+--------------------------------+--------------------------------------------------------------------------------------+
-| Title | Running TestPMD with SRIOV in Domain U may cause it to hang when XENVIRT switch is on|
-| | |
-+================================+======================================================================================+
-| Reference # | IXA00168949 |
-| | |
-+--------------------------------+--------------------------------------------------------------------------------------+
-| Description | When TestPMD is run with only SRIOV port /testpmd -c f -n 4 -- -i, the following |
-| | error occurs: |
-| | |
-| | PMD: gntalloc: ioctl error |
-| | |
-| | EAL: Error - exiting with code: 1 |
-| | |
-| | Cause: Creation of mbuf pool for socket 0 failed |
-| | |
-| | Then, alternately run SRIOV port and virtIO with testpmd: |
-| | |
-| | testpmd -c f -n 4 -- -i |
-| | |
-| | testpmd -c f -n 4 --use-dev="eth_xenvirt0" -- -i |
-| | |
-+--------------------------------+--------------------------------------------------------------------------------------+
-| Implication | DomU will not be accessible after you repeat this action some times |
-| | |
-+--------------------------------+--------------------------------------------------------------------------------------+
-| Resolution/ Workaround | Run testpmd with a "--total-num-mbufs=N(N<=3500)" |
-| | |
-+--------------------------------+--------------------------------------------------------------------------------------+
-| Affected Environment/ Platform | Fedora 16, 64 bits + Xen hypervisor 4.2.3 + Domain 0 kernel 3.10.0 |
-| | +Domain U kernel 3.6.11 |
-| | |
-+--------------------------------+--------------------------------------------------------------------------------------+
-| Driver/Module | TestPMD Sample Application |
-| | |
-+--------------------------------+--------------------------------------------------------------------------------------+
-
-Vhost-xen cannot detect Domain U application exit on Xen version 4.0.1
-----------------------------------------------------------------------
-
-+--------------------------------+--------------------------------------------------------------------------------------+
-| Title | Vhost-xen cannot detect Domain U application exit on Xen 4.0.1. |
-| | |
-+================================+======================================================================================+
-| Reference # | IXA00168947 |
-| | |
-+--------------------------------+--------------------------------------------------------------------------------------+
-| Description | When using DPDK applications on Xen 4.0.1, e.g. TestPMD Sample Application, |
-| | on killing the application (e.g. killall testpmd) vhost-switch cannot detect |
-| | the domain U exited and does not free the Virtio device. |
-| | |
-+--------------------------------+--------------------------------------------------------------------------------------+
-| Implication | Virtio device not freed after application is killed when using vhost-switch on Xen |
-| | 4.0.1 |
-| | |
-+--------------------------------+--------------------------------------------------------------------------------------+
-| Resolution | Resolved in DPDK 1.8 |
-| | |
-+--------------------------------+--------------------------------------------------------------------------------------+
-| Affected Environment/ Platform | Xen 4.0.1 |
-| | |
-+--------------------------------+--------------------------------------------------------------------------------------+
-| Driver/Module | Vhost-switch |
-| | |
-+--------------------------------+--------------------------------------------------------------------------------------+
-
-Virtio incorrect header length used if MSI-X is disabled by kernel driver
--------------------------------------------------------------------------
-
-+--------------------------------+--------------------------------------------------------------------------------------+
-| Title | Virtio incorrect header length used if MSI-X is disabled by kernel driver or |
-| | if VIRTIO_NET_F_MAC is not negotiated. |
-| | |
-+================================+======================================================================================+
-| Reference # | IXA00384256 |
-| | |
-+--------------------------------+--------------------------------------------------------------------------------------+
-| Description | The Virtio header for host-guest communication is of variable length and |
-| | is dependent on whether MSI-X has been enabled by the kernel driver for the network |
-| | device. |
-| | |
-| | The base header length of 20 bytes will be extended by 4 bytes to accommodate MSI-X |
-| | vectors and the Virtio Network Device header will appear at byte offset 24. |
-| | |
-| | The Userspace Virtio Poll Mode Driver tests the guest feature bits for the presence |
-| | of VIRTIO_PCI_FLAG_MISIX, however this bit field is not part of the Virtio |
-| | specification and resolves to the VIRTIO_NET_F_MAC feature instead. |
-| | |
-+--------------------------------+--------------------------------------------------------------------------------------+
-| Implication | The DPDK kernel driver will enable MSI-X by default, |
-| | however if loaded with "intr_mode=legacy" on a guest with a Virtio Network Device, |
-| | a KVM-Qemu guest may crash with the following error: "virtio-net header not in first |
-| | element". |
-| | |
-| | If VIRTIO_NET_F_MAC feature has not been negotiated, then the Userspace Poll Mode |
-| | Driver will assume that MSI-X has been disabled and will prevent the proper |
-| | functioning of the driver. |
-| | |
-+--------------------------------+--------------------------------------------------------------------------------------+
-| Resolution | Ensure #define VIRTIO_PCI_CONFIG(hw) returns the correct offset (20 or 24 bytes) for |
-| | the devices where in rare cases MSI-X is disabled or VIRTIO_NET_F_MAC has not been |
-| | negotiated. |
-| | |
-+--------------------------------+--------------------------------------------------------------------------------------+
-| Affected Environment/ Platform | Virtio devices where MSI-X is disabled or VIRTIO_NET_F_MAC feature has not been |
-| | negotiated. |
-| | |
-+--------------------------------+--------------------------------------------------------------------------------------+
-| Driver/Module | librte_pmd_virtio |
-| | |
-+--------------------------------+--------------------------------------------------------------------------------------+
-
-Unstable system performance across application executions with 2MB pages
-------------------------------------------------------------------------
-
-+--------------------------------+--------------------------------------------------------------------------------------+
-| Title | Unstable system performance across application executions with 2MB pages |
-| | |
-+================================+======================================================================================+
-| Reference # | IXA00372346 |
-| | |
-+--------------------------------+--------------------------------------------------------------------------------------+
-| Description | The performance of an DPDK application may vary across executions of an |
-| | application due to a varying number of TLB misses depending on the location of |
-| | accessed structures in memory. |
-| | This situation occurs on rare occasions. |
-| | |
-+--------------------------------+--------------------------------------------------------------------------------------+
-| Implication | Occasionally, relatively poor performance of DPDK applications is encountered. |
-| | |
-+--------------------------------+--------------------------------------------------------------------------------------+
-| Resolution/ Workaround | Using 1 GB pages results in lower usage of TLB entries, resolving this issue. |
-| | |
-+--------------------------------+--------------------------------------------------------------------------------------+
-| Affected Environment/ Platform | Systems using 2 MB pages |
-| | |
-+--------------------------------+--------------------------------------------------------------------------------------+
-| Driver/Module | All |
-| | |
-+--------------------------------+--------------------------------------------------------------------------------------+
-
-Link status change not working with MSI interrupts
---------------------------------------------------
-
-+--------------------------------+--------------------------------------------------------------------------------------+
-| Title | Link status change not working with MSI interrupts |
-| | |
-+================================+======================================================================================+
-| Reference # | IXA00378191 |
-| | |
-+--------------------------------+--------------------------------------------------------------------------------------+
-| Description | MSI interrupts are not supported by the PMD. |
-| | |
-+--------------------------------+--------------------------------------------------------------------------------------+
-| Implication | Link status change will only work with legacy or MSI-X interrupts. |
-| | |
-+--------------------------------+--------------------------------------------------------------------------------------+
-| Resolution/ Workaround | The igb_uio driver can now be loaded with either legacy or MSI-X interrupt support. |
-| | However, this configuration is not tested. |
-| | |
-+--------------------------------+--------------------------------------------------------------------------------------+
-| Affected Environment/ Platform | All |
-| | |
-+--------------------------------+--------------------------------------------------------------------------------------+
-| Driver/Module | Poll Mode Driver (PMD) |
-| | |
-+--------------------------------+--------------------------------------------------------------------------------------+
-
-KNI does not provide Ethtool support for all NICs supported by the Poll-Mode Drivers
-------------------------------------------------------------------------------------
-
-+---------------------------------+---------------------------------------------------------------------------------------+
-| Title | KNI does not provide ethtool support for all NICs supported by the Poll Mode Drivers |
-| | |
-+=================================+=======================================================================================+
-| Reference # | IXA00383835 |
-| | |
-+---------------------------------+---------------------------------------------------------------------------------------+
-| Description | To support ethtool functionality using the KNI, the KNI library includes separate |
-| | driver code based off the Linux kernel drivers, because this driver code is separate |
-| | from the poll-mode drivers, the set of supported NICs for these two components may |
-| | differ. |
-| | |
-| | Because of this, in this release, the KNI driver does not provide "ethtool" support |
-| | for the Intel® Ethernet Connection I354 on the Intel Atom Processor C2000 product |
-| | Family SoCs. |
-| | |
-+---------------------------------+---------------------------------------------------------------------------------------+
-| Implication | Ethtool support with KNI will not work for NICs such as the Intel® Ethernet |
-| | Connection I354. Other KNI functionality, such as injecting packets into the Linux |
-| | kernel is unaffected. |
-| | |
-+---------------------------------+---------------------------------------------------------------------------------------+
-| Resolution/Workaround | Updated for Intel® Ethernet Connection I354. |
-| | |
-+---------------------------------+---------------------------------------------------------------------------------------+
-| Affected Environment/Platform | Platforms using the Intel® Ethernet Connection I354 or other NICs unsupported by KNI |
-| | ethtool |
-| | |
-+---------------------------------+---------------------------------------------------------------------------------------+
-| Driver/Module | KNI |
-| | |
-+---------------------------------+---------------------------------------------------------------------------------------+
-
-Linux IPv4 forwarding is not stable with vhost-switch on high packet rate
--------------------------------------------------------------------------
-
-+---------------------------------+---------------------------------------------------------------------------------------+
-| Title | Linux IPv4 forwarding is not stable with vhost-switch on high packet rate. |
-| | |
-+=================================+=======================================================================================+
-| Reference # | IXA00384430 |
-| | |
-+---------------------------------+---------------------------------------------------------------------------------------+
-| Description | Linux IPv4 forwarding is not stable in Guest when Tx traffic is high from traffic |
-| | generator using two virtio devices in VM with 10G in host. |
-| | |
-+---------------------------------+---------------------------------------------------------------------------------------+
-| Implication | Packets cannot be forwarded by user space vhost-switch and Linux IPv4 forwarding if |
-| | the rate of incoming packets is greater than 1 Mpps. |
-| | |
-+---------------------------------+---------------------------------------------------------------------------------------+
-| Resolution/Workaround | N/A |
-| | |
-+---------------------------------+---------------------------------------------------------------------------------------+
-| Affected Environment/Platform | All |
-| | |
-+---------------------------------+---------------------------------------------------------------------------------------+
-| Driver/Module | Sample application |
-| | |
-+---------------------------------+---------------------------------------------------------------------------------------+
-
-PCAP library overwrites mbuf data before data is used
------------------------------------------------------
-
-+---------------------------------+---------------------------------------------------------------------------------------+
-| Title | PCAP library overwrites mbuf data before data is used |
-| | |
-+=================================+=======================================================================================+
-| Reference # | IXA00383976 |
-| | |
-+---------------------------------+---------------------------------------------------------------------------------------+
-| Description | PCAP library allocates 64 mbufs for reading packets from PCAP file, but declares them |
-| | as static and reuses the same mbufs repeatedly rather than handing off to the ring |
-| | for allocation of new mbuf for each read from the PCAP file. |
-| | |
-+---------------------------------+---------------------------------------------------------------------------------------+
-| Implication | In multi-threaded applications ata in the mbuf is overwritten. |
-| | |
-+---------------------------------+---------------------------------------------------------------------------------------+
-| Resolution/Workaround | Fixed in eth_pcap_rx() in rte_eth_pcap.c |
-| | |
-+---------------------------------+---------------------------------------------------------------------------------------+
-| Affected Environment/Platform | All |
-| | |
-+---------------------------------+---------------------------------------------------------------------------------------+
-| Driver/Module | Multi-threaded applications using PCAP library |
-| | |
-+---------------------------------+---------------------------------------------------------------------------------------+
-
-MP Client Example app - flushing part of TX is not working for some ports if set specific port mask with skipped ports
-----------------------------------------------------------------------------------------------------------------------
-
-+---------------------------------+---------------------------------------------------------------------------------------+
-| Title | MP Client Example app - flushing part of TX is not working for some ports if set |
-| | specific port mask with skipped ports |
-| | |
-+=================================+=======================================================================================+
-| Reference # | 52 |
-| | |
-+---------------------------------+---------------------------------------------------------------------------------------+
-| Description | When ports not in a consecutive set, for example, ports other than ports 0, 1 or |
-| | 0,1,2,3 are used with the client-service sample app, when no further packets are |
-| | received by a client, the application may not flush correctly any unsent packets |
-| | already buffered inside it. |
-| | |
-+---------------------------------+---------------------------------------------------------------------------------------+
-| Implication | Not all buffered packets are transmitted if traffic to the clients application is |
-| | stopped. While traffic is continually received for transmission on a port by a |
-| | client, buffer flushing happens normally. |
-| | |
-+---------------------------------+---------------------------------------------------------------------------------------+
-| Resolution/Workaround | Changed line 284 of the client.c file: |
-| | |
-| | from "send_packets(ports);" to "send_packets(ports->id[port]);" |
-| | |
-+---------------------------------+---------------------------------------------------------------------------------------+
-| Affected Environment/Platform | All |
-| | |
-+---------------------------------+---------------------------------------------------------------------------------------+
-| Driver/Module | Client - Server Multi-process Sample application |
-| | |
-+---------------------------------+---------------------------------------------------------------------------------------+
-
-Packet truncation with Intel® I350 Gigabit Ethernet Controller
---------------------------------------------------------------
-
-+---------------------------------+---------------------------------------------------------------------------------------+
-| Title | Packet truncation with Intel I350 Gigabit Ethernet Controller |
-| | |
-+=================================+=======================================================================================+
-| Reference # | IXA00372461 |
-| | |
-+---------------------------------+---------------------------------------------------------------------------------------+
-| Description | The setting of the hw_strip_crc field in the rte_eth_conf structure passed to the |
-| | rte_eth_dev_configure() function is not respected and hardware CRC stripping is |
-| | always enabled. |
-| | If the field is set to 0, then the software also tries to strip the CRC, resulting |
-| | in packet truncation. |
-| | |
-+---------------------------------+---------------------------------------------------------------------------------------+
-| Implication | The last 4 bytes of the packets received will be missing. |
-| | |
-+---------------------------------+---------------------------------------------------------------------------------------+
-| Resolution/Workaround | Fixed an omission in device initialization (setting the STRCRC bit in the DVMOLR |
-| | register) to respect the CRC stripping selection correctly. |
-| | |
-+---------------------------------+---------------------------------------------------------------------------------------+
-| Affected Environment/Platform | Systems using the Intel® I350 Gigabit Ethernet Controller |
-| | |
-+---------------------------------+---------------------------------------------------------------------------------------+
-| Driver/Module | 1 GbE Poll Mode Driver (PMD) |
-| | |
-+---------------------------------+---------------------------------------------------------------------------------------+
-
-Device initialization failure with Intel® Ethernet Server Adapter X520-T2
--------------------------------------------------------------------------
-
-+---------------------------------+---------------------------------------------------------------------------------------+
-| Title | Device initialization failure with Intel® Ethernet Server Adapter X520-T2 |
-| | |
-+=================================+=======================================================================================+
-| Reference # | 55 |
-| | |
-+---------------------------------+---------------------------------------------------------------------------------------+
-| Description | If this device is bound to the Linux kernel IXGBE driver when the DPDK is |
-| | initialized, DPDK is initialized, the device initialization fails with error code -17 |
-| | “IXGBE_ERR_PHY_ADDR_INVALID”. |
-| | |
-+---------------------------------+---------------------------------------------------------------------------------------+
-| Implication | The device is not initialized and cannot be used by an application. |
-| | |
-+---------------------------------+---------------------------------------------------------------------------------------+
-| Resolution/Workaround | Introduced a small delay in device initialization to allow DPDK to always find |
-| | the device. |
-| | |
-+---------------------------------+---------------------------------------------------------------------------------------+
-| Affected Environment/Platform | Systems using the Intel® Ethernet Server Adapter X520-T2 |
-| | |
-+---------------------------------+---------------------------------------------------------------------------------------+
-| Driver/Module | 10 GbE Poll Mode Driver (PMD) |
-| | |
-+---------------------------------+---------------------------------------------------------------------------------------+
-
-DPDK kernel module is incompatible with Linux kernel version 3.3
-----------------------------------------------------------------
-
-+---------------------------------+---------------------------------------------------------------------------------------+
-| Title | DPDK kernel module is incompatible with Linux kernel version 3.3 |
-| | |
-+=================================+=======================================================================================+
-| Reference # | IXA00373232 |
-| | |
-+---------------------------------+---------------------------------------------------------------------------------------+
-| Description | The igb_uio kernel module fails to compile on systems with Linux kernel version 3.3 |
-| | due to API changes in kernel headers |
-| | |
-+---------------------------------+---------------------------------------------------------------------------------------+
-| Implication | The compilation fails and Ethernet controllers fail to initialize without the igb_uio |
-| | module. |
-| | |
-+---------------------------------+---------------------------------------------------------------------------------------+
-| Resolution/Workaround | Kernel functions pci_block_user_cfg_access() / pci_cfg_access_lock() and |
-| | pci_unblock_user_cfg_access() / pci_cfg_access_unlock() are automatically selected at |
-| | compile time as appropriate. |
-| | |
-+---------------------------------+---------------------------------------------------------------------------------------+
-| Affected Environment/Platform | Linux systems using kernel version 3.3 or later |
-| | |
-+---------------------------------+---------------------------------------------------------------------------------------+
-| Driver/Module | UIO module |
-| | |
-+---------------------------------+---------------------------------------------------------------------------------------+
-
-Initialization failure with Intel® Ethernet Controller X540-T2
---------------------------------------------------------------
-
-+---------------------------------+---------------------------------------------------------------------------------------+
-| Title | Initialization failure with Intel® Ethernet Controller X540-T2 |
-| | |
-+=================================+=======================================================================================+
-| Reference # | 57 |
-| | |
-+---------------------------------+---------------------------------------------------------------------------------------+
-| Description | This device causes a failure during initialization when the software tries to read |
-| | the part number from the device EPROM. |
-| | |
-+---------------------------------+---------------------------------------------------------------------------------------+
-| Implication | Device cannot be used. |
-| | |
-+---------------------------------+---------------------------------------------------------------------------------------+
-| Resolution/Workaround | Remove unnecessary check of the PBA number from the device. |
-| | |
-+---------------------------------+---------------------------------------------------------------------------------------+
-| Affected Environment/Platform | Systems using the Intel® Ethernet Controller X540-T2 |
-| | |
-+---------------------------------+---------------------------------------------------------------------------------------+
-| Driver/Module | 10 GbE Poll Mode Driver (PMD) |
-| | |
-+---------------------------------+---------------------------------------------------------------------------------------+
-
-rte_eth_dev_stop() function does not bring down the link for 1 GB NIC ports
----------------------------------------------------------------------------
-
-+---------------------------------+---------------------------------------------------------------------------------------+
-| Title | rte_eth_dev_stop() function does not bring down the link for 1 GB NIC ports |
-| | |
-+=================================+=======================================================================================+
-| Reference # | IXA00373183 |
-| | |
-+---------------------------------+---------------------------------------------------------------------------------------+
-| Description | When the rte_eth_dev_stop() function is used to stop a NIC port, the link is not |
-| | brought down for that port. |
-| | |
-+---------------------------------+---------------------------------------------------------------------------------------+
-| Implication | Links are still reported as up, even though the NIC device has been stopped and |
-| | cannot perform TX or RX operations on that port. |
-| | |
-+---------------------------------+---------------------------------------------------------------------------------------+
-| Resolution | The rte_eth_dev_stop() function now brings down the link when called. |
-| | |
-+---------------------------------+---------------------------------------------------------------------------------------+
-| Affected Environment/Platform | All |
-| | |
-+---------------------------------+---------------------------------------------------------------------------------------+
-| Driver/Module | 1 GbE Poll Mode Driver (PMD) |
-| | |
-+---------------------------------+---------------------------------------------------------------------------------------+
-
-It is not possible to adjust the duplex setting for 1GB NIC ports
------------------------------------------------------------------
-
-+---------------------------------+---------------------------------------------------------------------------------------+
-| Title | It is not possible to adjust the duplex setting for 1 GB NIC ports |
-| | |
-+=================================+=======================================================================================+
-| Reference # | 66 |
-| | |
-+---------------------------------+---------------------------------------------------------------------------------------+
-| Description | The rte_eth_conf structure does not have a parameter that allows a port to be set to |
-| | half-duplex instead of full-duplex mode, therefore, 1 GB NICs cannot be configured |
-| | explicitly to a full- or half-duplex value. |
-| | |
-+---------------------------------+---------------------------------------------------------------------------------------+
-| Implication | 1 GB port duplex capability cannot be set manually. |
-| | |
-+---------------------------------+---------------------------------------------------------------------------------------+
-| Resolution | The PMD now uses a new field added to the rte_eth_conf structure to allow 1 GB ports |
-| | to be configured explicitly as half- or full-duplex. |
-| | |
-+---------------------------------+---------------------------------------------------------------------------------------+
-| Affected Environment/Platform | All |
-| | |
-+---------------------------------+---------------------------------------------------------------------------------------+
-| Driver/Module | 1 GbE Poll Mode Driver (PMD) |
-| | |
-+---------------------------------+---------------------------------------------------------------------------------------+
-
-Calling rte_eth_dev_stop() on a port does not free all the mbufs in use by that port
-------------------------------------------------------------------------------------
-
-+---------------------------------+---------------------------------------------------------------------------------------+
-| Title | Calling rte_eth_dev_stop() on a port does not free all the mbufs in use by that port |
-| | |
-+=================================+=======================================================================================+
-| Reference # | 67 |
-| | |
-+---------------------------------+---------------------------------------------------------------------------------------+
-| Description | The rte_eth_dev_stop() function initially frees all mbufs used by that port’s RX and |
-| | TX rings, but subsequently repopulates the RX ring again later in the function. |
-| | |
-+---------------------------------+---------------------------------------------------------------------------------------+
-| Implication | Not all mbufs used by a port are freed when the port is stopped. |
-| | |
-+---------------------------------+---------------------------------------------------------------------------------------+
-| Resolution | The driver no longer re-populates the RX ring in the rte_eth_dev_stop() function. |
-| | |
-+---------------------------------+---------------------------------------------------------------------------------------+
-| Affected Environment/Platform | All |
-| | |
-+---------------------------------+---------------------------------------------------------------------------------------+
-| Driver/Module | IGB and IXGBE Poll Mode Drivers (PMDs) |
-| | |
-+---------------------------------+---------------------------------------------------------------------------------------+
-
-PMD does not always create rings that are properly aligned in memory
---------------------------------------------------------------------
-
-+---------------------------------+---------------------------------------------------------------------------------------+
-| Title | PMD does not always create rings that are properly aligned in memory |
-| | |
-+=================================+=======================================================================================+
-| Reference # | IXA00373158 |
-| | |
-+---------------------------------+---------------------------------------------------------------------------------------+
-| Description | The NIC hardware used by the PMD requires that the RX and TX rings used must be |
-| | aligned in memory on a 128-byte boundary. The memzone reservation function used |
-| | inside the PMD only guarantees that the rings are aligned on a 64-byte boundary, so |
-| | errors can occur if the rings are not aligned on a 128-byte boundary. |
-| | |
-+---------------------------------+---------------------------------------------------------------------------------------+
-| Implication | Unintended overwriting of memory can occur and PMD behavior may also be effected. |
-| | |
-+---------------------------------+---------------------------------------------------------------------------------------+
-| Resolution | A new rte_memzone_reserve_aligned() API has been added to allow memory reservations |
-| | from hugepage memory at alignments other than 64-bytes. The PMD has been modified so |
-| | that the rings are allocated using this API with minimum alignment of 128-bytes. |
-| | |
-+---------------------------------+---------------------------------------------------------------------------------------+
-| Affected Environment/Platform | All |
-| | |
-+---------------------------------+---------------------------------------------------------------------------------------+
-| Driver/Module | IGB and IXGBE Poll Mode Drivers (PMDs) |
-| | |
-+---------------------------------+---------------------------------------------------------------------------------------+
-
-Checksum offload might not work correctly when mixing VLAN-tagged and ordinary packets
---------------------------------------------------------------------------------------
-
-+---------------------------------+---------------------------------------------------------------------------------------+
-| Title | Checksum offload might not work correctly when mixing VLAN-tagged and ordinary |
-| | packets |
-| | |
-+=================================+=======================================================================================+
-| Reference # | IXA00378372 |
-| | |
-+---------------------------------+---------------------------------------------------------------------------------------+
-| Description | Incorrect handling of protocol header lengths in the PMD driver |
-| | |
-+---------------------------------+---------------------------------------------------------------------------------------+
-| Implication | The checksum for one of the packets may be incorrect. |
-| | |
-+---------------------------------+---------------------------------------------------------------------------------------+
-| Resolution/Workaround | Corrected the offset calculation. |
-| | |
-+---------------------------------+---------------------------------------------------------------------------------------+
-| Affected Environment/Platform | All |
-| | |
-+---------------------------------+---------------------------------------------------------------------------------------+
-| Driver/Module | Poll Mode Driver (PMD) |
-| | |
-+---------------------------------+---------------------------------------------------------------------------------------+
-
-Port not found issue with Intel® 82580 Gigabit Ethernet Controller
-------------------------------------------------------------------
-
-+---------------------------------+---------------------------------------------------------------------------------------+
-| Title | Port not found issue with Intel® 82580 Gigabit Ethernet Controller |
-| | |
-+=================================+=======================================================================================+
-| Reference # | 50 |
-| | |
-+---------------------------------+---------------------------------------------------------------------------------------+
-| Description | After going through multiple driver unbind/bind cycles, an Intel® 82580 |
-| | Ethernet Controller port may no longer be found and initialized by the |
-| | DPDK. |
-| | |
-+---------------------------------+---------------------------------------------------------------------------------------+
-| Implication | The port will be unusable. |
-| | |
-+---------------------------------+---------------------------------------------------------------------------------------+
-| Resolution/Workaround | Issue was not reproducible and therefore no longer considered an issue. |
-| | |
-+---------------------------------+---------------------------------------------------------------------------------------+
-| Affected Environment/Platform | All |
-| | |
-+---------------------------------+---------------------------------------------------------------------------------------+
-| Driver/Module | 1 GbE Poll Mode Driver (PMD) |
-| | |
-+---------------------------------+---------------------------------------------------------------------------------------+
-
-Packet mbufs may be leaked from mempool if rte_eth_dev_start() function fails
------------------------------------------------------------------------------
-
-+---------------------------------+---------------------------------------------------------------------------------------+
-| Title | Packet mbufs may be leaked from mempool if rte_eth_dev_start() function fails |
-| | |
-+=================================+=======================================================================================+
-| Reference # | IXA00373373 |
-| | |
-+---------------------------------+---------------------------------------------------------------------------------------+
-| Description | The rte_eth_dev_start() function allocates mbufs to populate the NIC RX rings. If the |
-| | start function subsequently fails, these mbufs are not freed back to the memory pool |
-| | from which they came. |
-| | |
-+---------------------------------+---------------------------------------------------------------------------------------+
-| Implication | mbufs may be lost to the system if rte_eth_dev_start() fails and the application does |
-| | not terminate. |
-| | |
-+---------------------------------+---------------------------------------------------------------------------------------+
-| Resolution/Workaround | mbufs are correctly deallocated if a call to rte_eth_dev_start() fails. |
-| | |
-+---------------------------------+---------------------------------------------------------------------------------------+
-| Affected Environment/Platform | All |
-| | |
-+---------------------------------+---------------------------------------------------------------------------------------+
-| Driver/Module | Poll Mode Driver (PMD) |
-| | |
-+---------------------------------+---------------------------------------------------------------------------------------+
-
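As a rough illustration of the behaviour this fix addresses, applications are expected to check the return value of rte_eth_dev_start() and handle a failure; the helper name and error handling below are illustrative only, not sample-application code::

    #include <stdio.h>
    #include <stdint.h>
    #include <rte_ethdev.h>

    /* Hypothetical helper: start a port and surface the error instead of
     * continuing. Before the fix, a failed start could leave RX-ring mbufs
     * unreturned to their mempool. */
    static int
    start_port_checked(uint8_t port_id)
    {
            int ret = rte_eth_dev_start(port_id);

            if (ret < 0)
                    printf("rte_eth_dev_start(%u) failed: %d\n", port_id, ret);
            return ret;
    }
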
-Promiscuous mode for 82580 NICs can only be enabled after a call to rte_eth_dev_start for a port
-------------------------------------------------------------------------------------------------
-
-+---------------------------------+---------------------------------------------------------------------------------------+
-| Title | Promiscuous mode for 82580 NICs can only be enabled after a call to rte_eth_dev_start |
-| | for a port |
-| | |
-+=================================+=======================================================================================+
-| Reference # | IXA00373833 |
-| | |
-+---------------------------------+---------------------------------------------------------------------------------------+
-| Description | For 82580-based network ports, the rte_eth_dev_start() function can overwrite the |
-| | setting of the promiscuous mode for the device. |
-| | |
-| | Therefore, the rte_eth_promiscuous_enable() API call should be called after |
-| | rte_eth_dev_start() for these devices. |
-| | |
-+---------------------------------+---------------------------------------------------------------------------------------+
-| Implication | Promiscuous mode can only be enabled if API calls are in a specific order. |
-| | |
-+---------------------------------+---------------------------------------------------------------------------------------+
-| Resolution/Workaround | The NIC now restores most of its configuration after a call to rte_eth_dev_start(). |
-| | |
-+---------------------------------+---------------------------------------------------------------------------------------+
-| Affected Environment/Platform | All |
-| | |
-+---------------------------------+---------------------------------------------------------------------------------------+
-| Driver/Module | Poll Mode Driver (PMD) |
-| | |
-+---------------------------------+---------------------------------------------------------------------------------------+
-
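A minimal sketch of the call ordering that works on 82580-based ports is shown below; it assumes the port has already been configured and its queues set up, and the helper name is illustrative::

    #include <stdint.h>
    #include <rte_ethdev.h>

    /* Enable promiscuous mode only after the port has been started, so the
     * start sequence cannot overwrite the setting on 82580-based ports. */
    static int
    start_port_promisc(uint8_t port_id)
    {
            int ret = rte_eth_dev_start(port_id);

            if (ret < 0)
                    return ret;
            rte_eth_promiscuous_enable(port_id);
            return 0;
    }
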
-Incorrect CPU socket information reported in /proc/cpuinfo can prevent the DPDK from running
---------------------------------------------------------------------------------------------
-
-+---------------------------------+---------------------------------------------------------------------------------------+
-| Title | Incorrect CPU socket information reported in /proc/cpuinfo can prevent the Intel® |
-| | DPDK from running |
-| | |
-+=================================+=======================================================================================+
-| Reference # | 63 |
-| | |
-+---------------------------------+---------------------------------------------------------------------------------------+
-| Description                     | The DPDK uses information supplied by the Linux kernel to determine the              |
-| | hardware properties of the system being used. On rare occasions, information supplied |
-| | by /proc/cpuinfo does not match that reported elsewhere. In some cases, it has been |
-| | observed that the CPU socket numbering given in /proc/cpuinfo is incorrect and this |
-| | can prevent DPDK from operating. |
-| | |
-+---------------------------------+---------------------------------------------------------------------------------------+
-| Implication | The DPDK cannot run on systems where /proc/cpuinfo does not report the correct |
-| | CPU socket topology. |
-| | |
-+---------------------------------+---------------------------------------------------------------------------------------+
-| Resolution/Workaround           | CPU socket information is now read from /sys/devices/system/cpu/cpuN/topology        |
-| | |
-+---------------------------------+---------------------------------------------------------------------------------------+
-| Affected Environment/Platform | All |
-| | |
-+---------------------------------+---------------------------------------------------------------------------------------+
-| Driver/Module | Environment Abstraction Layer (EAL) |
-| | |
-+---------------------------------+---------------------------------------------------------------------------------------+
-
-L3FWD sample application may fail to transmit packets under extreme conditions
-------------------------------------------------------------------------------
-
-+---------------------------------+---------------------------------------------------------------------------------------+
-| Title | L3FWD sample application may fail to transmit packets under extreme conditions |
-| | |
-+=================================+=======================================================================================+
-| Reference # | IXA00372919 |
-| | |
-+---------------------------------+---------------------------------------------------------------------------------------+
-| Description | Under very heavy load, the L3 Forwarding sample application may fail to transmit |
-| | packets due to the system running out of free mbufs. |
-| | |
-+---------------------------------+---------------------------------------------------------------------------------------+
-| Implication | Sending and receiving data with the PMD may fail. |
-| | |
-+---------------------------------+---------------------------------------------------------------------------------------+
-| Resolution/ Workaround | The number of mbufs is now calculated based on application parameters. |
-| | |
-+---------------------------------+---------------------------------------------------------------------------------------+
-| Affected Environment/Platform | All |
-| | |
-+---------------------------------+---------------------------------------------------------------------------------------+
-| Driver/Module | L3 Forwarding sample application |
-| | |
-+---------------------------------+---------------------------------------------------------------------------------------+
-
-L3FWD-VF might lose CRC bytes
------------------------------
-
-+---------------------------------+---------------------------------------------------------------------------------------+
-| Title | L3FWD-VF might lose CRC bytes |
-| | |
-+=================================+=======================================================================================+
-| Reference # | IXA00373424 |
-| | |
-+---------------------------------+---------------------------------------------------------------------------------------+
-| Description | Currently, the CRC stripping configuration does not affect the VF driver. |
-| | |
-+---------------------------------+---------------------------------------------------------------------------------------+
-| Implication | Packets transmitted by the DPDK in the VM may be lacking 4 bytes (packet CRC). |
-| | |
-+---------------------------------+---------------------------------------------------------------------------------------+
-| Resolution/ Workaround | Set “strip_crc” to 1 in the sample applications that use the VF PMD. |
-| | |
-+---------------------------------+---------------------------------------------------------------------------------------+
-| Affected Environment/Platform | All |
-| | |
-+---------------------------------+---------------------------------------------------------------------------------------+
-| Driver/Module | IGB and IXGBE VF Poll Mode Drivers (PMDs) |
-| | |
-+---------------------------------+---------------------------------------------------------------------------------------+
-
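The workaround amounts to requesting CRC stripping in the port configuration used by the affected sample applications. A hedged sketch, assuming the rxmode field names of this release series, is::

    #include <rte_ethdev.h>

    /* Example port configuration for the VF samples: strip the CRC so the
     * driver and application agree on the packet length ("strip_crc = 1"). */
    static const struct rte_eth_conf vf_port_conf = {
            .rxmode = {
                    .hw_strip_crc = 1,
            },
    };
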
-32-bit DPDK sample applications fail when using more than one 1 GB hugepage
-----------------------------------------------------------------------------
-
-+---------------------------------+---------------------------------------------------------------------------------------+
-| Title                           | 32-bit Intel® DPDK sample applications fail when using more than one 1 GB hugepage   |
-| | |
-+=================================+=======================================================================================+
-| Reference # | 31 |
-| | |
-+---------------------------------+---------------------------------------------------------------------------------------+
-| Description | 32-bit applications may have problems when running with multiple 1 GB pages on a |
-| | 64-bit OS. This is due to the limited address space available to 32-bit processes. |
-| | |
-+---------------------------------+---------------------------------------------------------------------------------------+
-| Implication | 32-bit processes need to use either 2 MB pages or have their memory use constrained |
-| | to 1 GB if using 1 GB pages. |
-| | |
-+---------------------------------+---------------------------------------------------------------------------------------+
-| Resolution | EAL now limits virtual memory to 1 GB per page size. |
-| | |
-+---------------------------------+---------------------------------------------------------------------------------------+
-| Affected Environment/Platform | 64-bit systems running 32-bit Intel® DPDK with 1 GB hugepages |
-| | |
-+---------------------------------+---------------------------------------------------------------------------------------+
-| Driver/Module | Environment Abstraction Layer (EAL) |
-| | |
-+---------------------------------+---------------------------------------------------------------------------------------+
-
-l2fwd fails to launch if the NIC is the Intel® 82571EB Gigabit Ethernet Controller
-----------------------------------------------------------------------------------
-
-+---------------------------------+---------------------------------------------------------------------------------------+
-| Title | l2fwd fails to launch if the NIC is the Intel® 82571EB Gigabit Ethernet Controller |
-| | |
-+=================================+=======================================================================================+
-| Reference # | IXA00373340 |
-| | |
-+---------------------------------+---------------------------------------------------------------------------------------+
-| Description                     | The 82571EB NIC can handle only one TX queue per port. The original implementation   |
-|                                 | allowed for a more complex handling of multiple queues per port.                     |
-| | |
-+---------------------------------+---------------------------------------------------------------------------------------+
-| Implication | The l2fwd application fails to launch if the NIC is 82571EB. |
-| | |
-+---------------------------------+---------------------------------------------------------------------------------------+
-| Resolution | l2fwd now uses only one TX queue. |
-| | |
-+---------------------------------+---------------------------------------------------------------------------------------+
-| Affected Environment/Platform | All |
-| | |
-+---------------------------------+---------------------------------------------------------------------------------------+
-| Driver/Module | Sample Application |
-| | |
-+---------------------------------+---------------------------------------------------------------------------------------+
-
-32-bit DPDK applications may fail to initialize on 64-bit OS
-------------------------------------------------------------
-
-+---------------------------------+---------------------------------------------------------------------------------------+
-| Title | 32-bit DPDK applications may fail to initialize on 64-bit OS |
-| | |
-+=================================+=======================================================================================+
-| Reference # | IXA00378513 |
-| | |
-+---------------------------------+---------------------------------------------------------------------------------------+
-| Description | The EAL used a 32-bit pointer to deal with physical addresses. This could create |
-| | problems when the physical address of a hugepage exceeds the 4 GB limit. |
-| | |
-+---------------------------------+---------------------------------------------------------------------------------------+
-| Implication | 32-bit applications may not initialize on a 64-bit OS. |
-| | |
-+---------------------------------+---------------------------------------------------------------------------------------+
-| Resolution/Workaround | The physical address pointer is now 64-bit. |
-| | |
-+---------------------------------+---------------------------------------------------------------------------------------+
-| Affected Environment/Platform | 32-bit applications in a 64-bit Linux* environment |
-| | |
-+---------------------------------+---------------------------------------------------------------------------------------+
-| Driver/Module | Environment Abstraction Layer (EAL) |
-| | |
-+---------------------------------+---------------------------------------------------------------------------------------+
-
-LPM issue when using prefixes > 24
-----------------------------------
-
-+---------------------------------+---------------------------------------------------------------------------------------+
-| Title                           | LPM issue when using prefixes > 24                                                    |
-| | |
-+=================================+=======================================================================================+
-| Reference # | IXA00378395 |
-| | |
-+---------------------------------+---------------------------------------------------------------------------------------+
-| Description | Extended tbl8's are overwritten by multiple lpm rule entries when the depth is |
-| | greater than 24. |
-| | |
-+---------------------------------+---------------------------------------------------------------------------------------+
-| Implication                     | LPM tbl8 entries are removed by additional rules.                                     |
-| | |
-+---------------------------------+---------------------------------------------------------------------------------------+
-| Resolution/ Workaround          | tbl8 entries are now added to a valid group, so the entire table is not invalidated  |
-|                                 | and subsequently overwritten.                                                         |
-| | |
-+---------------------------------+---------------------------------------------------------------------------------------+
-| Affected Environment/Platform | All |
-| | |
-+---------------------------------+---------------------------------------------------------------------------------------+
-| Driver/Module | Sample applications |
-| | |
-+---------------------------------+---------------------------------------------------------------------------------------+
-
-IXGBE PMD hangs on port shutdown when not all packets have been sent
---------------------------------------------------------------------
-
-+---------------------------------+---------------------------------------------------------------------------------------+
-| Title | IXGBE PMD hangs on port shutdown when not all packets have been sent |
-| | |
-+=================================+=======================================================================================+
-| Reference # | IXA00373492 |
-| | |
-+---------------------------------+---------------------------------------------------------------------------------------+
-| Description | When the PMD is forwarding packets, and the link goes down, and port shutdown is |
-| | called, the port cannot shutdown. Instead, it hangs due to the IXGBE driver |
-| | incorrectly performing the port shutdown procedure. |
-| | |
-+---------------------------------+---------------------------------------------------------------------------------------+
-| Implication | The port cannot shutdown and does not come back up until re-initialized. |
-| | |
-+---------------------------------+---------------------------------------------------------------------------------------+
-| Resolution/Workaround | The port shutdown procedure has been rewritten. |
-| | |
-+---------------------------------+---------------------------------------------------------------------------------------+
-| Affected Environment/Platform | All |
-| | |
-+---------------------------------+---------------------------------------------------------------------------------------+
-| Driver/Module | IXGBE Poll Mode Driver (PMD) |
-| | |
-+---------------------------------+---------------------------------------------------------------------------------------+
-
-Config file change can cause build to fail
-------------------------------------------
-
-+---------------------------------+---------------------------------------------------------------------------------------+
-| Title | Config file change can cause build to fail |
-| | |
-+=================================+=======================================================================================+
-| Reference # | IXA00369247 |
-| | |
-+---------------------------------+---------------------------------------------------------------------------------------+
-| Description                     | If a configuration change means that some previously needed DPDK files are no longer |
-|                                 | required, the build will fail. This is because the \*.o file will still              |
-| | exist, and the linker will try to link it. |
-| | |
-+---------------------------------+---------------------------------------------------------------------------------------+
-| Implication | DPDK compilation failure |
-| | |
-+---------------------------------+---------------------------------------------------------------------------------------+
-| Resolution | The Makefile now provides instructions to clean out old kernel module object files. |
-| | |
-+---------------------------------+---------------------------------------------------------------------------------------+
-| Affected Environment/Platform | All |
-| | |
-+---------------------------------+---------------------------------------------------------------------------------------+
-| Driver/Module | Load balance sample application |
-| | |
-+---------------------------------+---------------------------------------------------------------------------------------+
-
-rte_cmdline library should not be used in production code due to limited testing
---------------------------------------------------------------------------------
-
-+---------------------------------+---------------------------------------------------------------------------------------+
-| Title | rte_cmdline library should not be used in production code due to limited testing |
-| | |
-+=================================+=======================================================================================+
-| Reference # | 34 |
-| | |
-+---------------------------------+---------------------------------------------------------------------------------------+
-| Description | The rte_cmdline library provides a command line interface for use in sample |
-| | applications and test applications distributed as part of DPDK. However, it is |
-| | not validated to the same standard as other DPDK libraries. |
-| | |
-+---------------------------------+---------------------------------------------------------------------------------------+
-| Implication | It may contain bugs or errors that could cause issues in production applications. |
-| | |
-+---------------------------------+---------------------------------------------------------------------------------------+
-| Resolution | The rte_cmdline library is now tested correctly. |
-| | |
-+---------------------------------+---------------------------------------------------------------------------------------+
-| Affected Environment/Platform | All |
-| | |
-+---------------------------------+---------------------------------------------------------------------------------------+
-| Driver/Module | rte_cmdline |
-| | |
-+---------------------------------+---------------------------------------------------------------------------------------+
-
-Some \*_INITIALIZER macros are not compatible with C++
-------------------------------------------------------
-
-+---------------------------------+---------------------------------------------------------------------------------------+
-| Title | Some \*_INITIALIZER macros are not compatible with C++ |
-| | |
-+=================================+=======================================================================================+
-| Reference # | IXA00371699 |
-| | |
-+---------------------------------+---------------------------------------------------------------------------------------+
-| Description | These macros do not work with C++ compilers, since they use the C99 method of named |
-| | field initialization. The TOKEN_*_INITIALIZER macros in librte_cmdline have this |
-| | problem. |
-| | |
-+---------------------------------+---------------------------------------------------------------------------------------+
-| Implication | C++ application using these macros will fail to compile. |
-| | |
-+---------------------------------+---------------------------------------------------------------------------------------+
-| Resolution/ Workaround | Macros are now compatible with C++ code. |
-| | |
-+---------------------------------+---------------------------------------------------------------------------------------+
-| Affected Environment/Platform | All |
-| | |
-+---------------------------------+---------------------------------------------------------------------------------------+
-| Driver/Module | rte_timer, rte_cmdline |
-| | |
-+---------------------------------+---------------------------------------------------------------------------------------+
-
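The incompatibility comes from C99 designated initializers, which C++ compilers of that era reject. The illustration below is generic and is not the actual cmdline token initializer definition; it only contrasts the two initializer styles::

    /* Stand-in structure; the real cmdline token types are more complex. */
    struct example_token {
            int type;
            const char *help;
    };

    /* C99 designated initializer: valid C, rejected by most C++ compilers. */
    #define EXAMPLE_TOKEN_INITIALIZER(t, h)     { .type = (t), .help = (h) }

    /* Positional initializer: accepted by both C and C++. */
    #define EXAMPLE_TOKEN_INITIALIZER_CXX(t, h) { (t), (h) }
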
-No traffic through bridge when using exception_path sample application
-----------------------------------------------------------------------
-
-+---------------------------------+---------------------------------------------------------------------------------------+
-| Title | No traffic through bridge when using exception_path sample application |
-| | |
-+=================================+=======================================================================================+
-| Reference # | IXA00168356 |
-| | |
-+---------------------------------+---------------------------------------------------------------------------------------+
-| Description | On some systems, packets are sent from the exception_path to the tap device, but are |
-| | not forwarded by the bridge. |
-| | |
-+---------------------------------+---------------------------------------------------------------------------------------+
-| Implication | The sample application does not work as described in its sample application guide. |
-| | |
-+---------------------------------+---------------------------------------------------------------------------------------+
-| Resolution/Workaround           | If you cannot get packets through the bridge, it might be because IP packet filtering |
-|                                 | rules are enabled by default on the bridge. In that case you can disable it using the |
-| | following: |
-| | |
-|                                 | # for i in /proc/sys/net/bridge/bridge-nf-\*; do echo 0 > $i; done                   |
-| | |
-+---------------------------------+---------------------------------------------------------------------------------------+
-| Affected Environment/Platform | Linux |
-| | |
-+---------------------------------+---------------------------------------------------------------------------------------+
-| Driver/Module | Exception path sample application |
-| | |
-+---------------------------------+---------------------------------------------------------------------------------------+
-
-Segmentation Fault in testpmd after config fails
-------------------------------------------------
-
-+---------------------------------+---------------------------------------------------------------------------------------+
-| Title | Segmentation Fault in testpmd after config fails |
-| | |
-+=================================+=======================================================================================+
-| Reference # | IXA00378638 |
-| | |
-+---------------------------------+---------------------------------------------------------------------------------------+
-| Description | Starting testpmd with a parameter that causes port queue setup to fail, for example, |
-|                                 | setting TX WTHRESH to a non-zero value when tx_rs_thresh is greater than 1, and then |
-|                                 | issuing “port start all”.                                                             |
-| | |
-+---------------------------------+---------------------------------------------------------------------------------------+
-| Implication | Seg fault in testpmd |
-| | |
-+---------------------------------+---------------------------------------------------------------------------------------+
-| Resolution/ Workaround | Testpmd now forces port reconfiguration if the initial configuration failed. |
-| | |
-+---------------------------------+---------------------------------------------------------------------------------------+
-| Affected Environment/Platform | All |
-| | |
-+---------------------------------+---------------------------------------------------------------------------------------+
-| Driver/Module | Testpmd Sample Application |
-| | |
-+---------------------------------+---------------------------------------------------------------------------------------+
-
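For reference, the configuration constraint behind the failure can be sketched as a TX queue setup fragment; the threshold values are examples only and assume the rte_eth_txconf layout of this release series::

    #include <rte_ethdev.h>

    /* On ixgbe, WTHRESH must be 0 whenever tx_rs_thresh is greater than 1;
     * violating this is what made the initial queue setup fail. */
    static const struct rte_eth_txconf example_tx_conf = {
            .tx_thresh = {
                    .pthresh = 36,
                    .hthresh = 0,
                    .wthresh = 0,   /* keep 0 because tx_rs_thresh > 1 */
            },
            .tx_rs_thresh = 32,
    };
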
-Linux kernel pci_cfg_access_lock() API can be prone to deadlock
----------------------------------------------------------------
-
-+---------------------------------+---------------------------------------------------------------------------------------+
-| Title | Linux kernel pci_cfg_access_lock() API can be prone to deadlock |
-| | |
-+=================================+=======================================================================================+
-| Reference # | IXA00373232 |
-| | |
-+---------------------------------+---------------------------------------------------------------------------------------+
-| Description | The kernel APIs used for locking in the igb_uio driver can cause a deadlock in |
-| | certain situations. |
-| | |
-+---------------------------------+---------------------------------------------------------------------------------------+
-| Implication | Unknown at this time; depends on the application. |
-| | |
-+---------------------------------+---------------------------------------------------------------------------------------+
-| Resolution/ Workaround | The igb_uio driver now uses the pci_cfg_access_trylock() function instead of |
-| | pci_cfg_access_lock(). |
-| | |
-+---------------------------------+---------------------------------------------------------------------------------------+
-| Affected Environment/Platform | All |
-| | |
-+---------------------------------+---------------------------------------------------------------------------------------+
-| Driver/Module | IGB UIO Driver |
-| | |
-+---------------------------------+---------------------------------------------------------------------------------------+
-
-When running multi-process applications, “rte_malloc” functions cannot be used in secondary processes
------------------------------------------------------------------------------------------------------
-
-+---------------------------------+---------------------------------------------------------------------------------------+
-| Title | When running multi-process applications, “rte_malloc” functions cannot be used in |
-| | secondary processes |
-| | |
-+=================================+=======================================================================================+
-| Reference # | 35 |
-| | |
-+---------------------------------+---------------------------------------------------------------------------------------+
-| Description | The rte_malloc library provides a set of malloc-type functions that reserve memory |
-| | from hugepage shared memory. Since secondary processes cannot reserve memory directly |
-| | from hugepage memory, rte_malloc functions cannot be used reliably. |
-| | |
-+---------------------------------+---------------------------------------------------------------------------------------+
-| Implication | The librte_malloc functions, for example, rte_malloc(), rte_zmalloc() |
-| | and rte_realloc() cannot be used reliably in secondary processes. |
-| | |
-+---------------------------------+---------------------------------------------------------------------------------------+
-| Resolution/ Workaround | In addition to re-entrancy support, the Intel® DPDK now supports the reservation of |
-|                                 | a memzone from the primary process or secondary processes. This is achieved by putting|
-| | the reservation-related control data structure of the memzone into shared memory. |
-| | Since rte_malloc functions request memory directly from the memzone, the limitation |
-|                                 | for secondary processes no longer applies.                                            |
-| | |
-+---------------------------------+---------------------------------------------------------------------------------------+
-| Affected Environment/Platform | All |
-| | |
-+---------------------------------+---------------------------------------------------------------------------------------+
-| Driver/Module | rte_malloc |
-| | |
-+---------------------------------+---------------------------------------------------------------------------------------+
-
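With the memzone-backed allocator, a secondary process can call the rte_malloc API directly. The snippet below is only a usage sketch; the type tag and alignment are arbitrary::

    #include <stddef.h>
    #include <rte_malloc.h>

    /* Allocate from hugepage-backed shared memory; with the memzone-backed
     * allocator this is also usable from a secondary process. */
    static void *
    alloc_shared_buffer(size_t len)
    {
            return rte_malloc("example_buf", len, 0 /* default alignment */);
    }
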
-Configuring maximum packet length for IGB with VLAN enabled may not take into account the length of VLAN tag
-------------------------------------------------------------------------------------------------------------
-
-+---------------------------------+---------------------------------------------------------------------------------------+
-| Title | Configuring maximum packet length for IGB with VLAN enabled may not take into account |
-| | the length of VLAN tag |
-| | |
-+=================================+=======================================================================================+
-| Reference # | IXA00379880 |
-| | |
-+---------------------------------+---------------------------------------------------------------------------------------+
-| Description | For IGB, the maximum packet length configured may not include the length of the VLAN |
-| | tag even if VLAN is enabled. |
-| | |
-+---------------------------------+---------------------------------------------------------------------------------------+
-| Implication | Packets with a VLAN tag with a size close to the maximum may be dropped. |
-| | |
-+---------------------------------+---------------------------------------------------------------------------------------+
-| Resolution/Workaround | NIC registers are now correctly initialized. |
-| | |
-+---------------------------------+---------------------------------------------------------------------------------------+
-| Affected Environment/Platform | All with IGB NICs |
-| | |
-+---------------------------------+---------------------------------------------------------------------------------------+
-| Driver/Module | IGB Poll Mode Driver (PMD) |
-| | |
-+---------------------------------+---------------------------------------------------------------------------------------+
-
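While the issue was open, a defensive application-side workaround was to include the 4-byte VLAN tag when sizing the maximum RX packet length. The sketch below assumes a 1500-byte MTU and uses the ETHER_HDR_LEN and ETHER_CRC_LEN constants from rte_ether.h; the explicit 4 is the VLAN tag size::

    #include <rte_ethdev.h>
    #include <rte_ether.h>

    /* Size the maximum RX packet length for a 1500-byte MTU plus Ethernet
     * header, CRC and one 4-byte VLAN tag, so tagged frames near the limit
     * are not dropped. */
    static const struct rte_eth_conf igb_port_conf = {
            .rxmode = {
                    .max_rx_pkt_len = 1500 + ETHER_HDR_LEN + ETHER_CRC_LEN + 4,
            },
    };
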
-Intel® I210 Ethernet controller always strips CRC of incoming packets
----------------------------------------------------------------------
-
-+---------------------------------+---------------------------------------------------------------------------------------+
-| Title | Intel® I210 Ethernet controller always strips CRC of incoming packets |
-| | |
-+=================================+=======================================================================================+
-| Reference # | IXA00380265 |
-| | |
-+---------------------------------+---------------------------------------------------------------------------------------+
-| Description | The Intel® I210 Ethernet controller (NIC) removes 4 bytes from the end of the packet |
-| | regardless of whether it was configured to do so or not. |
-| | |
-+---------------------------------+---------------------------------------------------------------------------------------+
-| Implication | Packets will be missing 4 bytes if the NIC is not configured to strip CRC. |
-| | |
-+---------------------------------+---------------------------------------------------------------------------------------+
-| Resolution/ Workaround | NIC registers are now correctly initialized. |
-| | |
-+---------------------------------+---------------------------------------------------------------------------------------+
-| Affected Environment/Platform | All |
-| | |
-+---------------------------------+---------------------------------------------------------------------------------------+
-| Driver/Module | IGB Poll Mode Driver (PMD) |
-| | |
-+---------------------------------+---------------------------------------------------------------------------------------+
-
-EAL can silently reserve less memory than requested
----------------------------------------------------
-
-+---------------------------------+---------------------------------------------------------------------------------------+
-| Title | EAL can silently reserve less memory than requested |
-| | |
-+=================================+=======================================================================================+
-| Reference # | IXA00380689 |
-| | |
-+---------------------------------+---------------------------------------------------------------------------------------+
-| Description | During application initialization, the EAL can silently reserve less memory than |
-| | requested by the user through the -m application option. |
-| | |
-+---------------------------------+---------------------------------------------------------------------------------------+
-| Implication | The application fails to start. |
-| | |
-+---------------------------------+---------------------------------------------------------------------------------------+
-| Resolution | EAL will detect if this condition occurs and will give an appropriate error message |
-| | describing steps to fix the problem. |
-| | |
-+---------------------------------+---------------------------------------------------------------------------------------+
-| Affected Environment/Platform | All |
-| | |
-+---------------------------------+---------------------------------------------------------------------------------------+
-| Driver/Module | Environmental Abstraction Layer (EAL) |
-| | |
-+---------------------------------+---------------------------------------------------------------------------------------+
-
-SSH connectivity with the board may be lost when starting a DPDK application
-----------------------------------------------------------------------------
-
-+---------------------------------+---------------------------------------------------------------------------------------+
-| Title | SSH connectivity with the board may be lost when starting a DPDK application |
-| | |
-+=================================+=======================================================================================+
-| Reference # | 26 |
-| | |
-+---------------------------------+---------------------------------------------------------------------------------------+
-| Description | Currently, the Intel® DPDK takes over all the NICs found on the board that are |
-| | supported by the DPDK. This results in these NICs being removed from the NIC |
-|                                 | set handled by the kernel, which has the side effect of any SSH connection being     |
-| | terminated. See also issue #27. |
-| | |
-+---------------------------------+---------------------------------------------------------------------------------------+
-| Implication | Loss of network connectivity to board. |
-| | |
-+---------------------------------+---------------------------------------------------------------------------------------+
-| Resolution | DPDK now no longer binds ports on startup. Please refer to the Getting Started |
-| | Guide for information on how to bind/unbind ports from DPDK. |
-| | |
-+---------------------------------+---------------------------------------------------------------------------------------+
-| Affected Environment/Platform   | Systems using an Intel® DPDK supported NIC for remote system access                  |
-| | |
-+---------------------------------+---------------------------------------------------------------------------------------+
-| Driver/Module | Environment Abstraction Layer (EAL) |
-| | |
-+---------------------------------+---------------------------------------------------------------------------------------+
-
-Remote network connections lost when running autotests or sample applications
------------------------------------------------------------------------------
-
-+---------------------------------+---------------------------------------------------------------------------------------+
-| Title | Remote network connections lost when running autotests or sample applications |
-| | |
-+=================================+=======================================================================================+
-| Reference # | 27 |
-| | |
-+---------------------------------+---------------------------------------------------------------------------------------+
-| Description | The PCI autotest and sample applications will scan for PCI devices and will remove |
-| | from Linux* control those recognized by it. This may result in the loss of network |
-| | connections to the system. |
-| | |
-+---------------------------------+---------------------------------------------------------------------------------------+
-| Implication | Loss of network connectivity to board when connected remotely. |
-| | |
-+---------------------------------+---------------------------------------------------------------------------------------+
-| Resolution | DPDK now no longer binds ports on startup. |
-| | Please refer to the Getting Started Guide for information on how to bind/unbind ports |
-| | from DPDK. |
-| | |
-+---------------------------------+---------------------------------------------------------------------------------------+
-| Affected Environment/Platform | Systems using a DPDK supported NIC for remote system access |
-| | |
-+---------------------------------+---------------------------------------------------------------------------------------+
-| Driver/Module | Sample applications |
-| | |
-+---------------------------------+---------------------------------------------------------------------------------------+
-
-KNI may not work properly in a multi-process environment
---------------------------------------------------------
-
-+---------------------------------+---------------------------------------------------------------------------------------+
-| Title | KNI may not work properly in a multi-process environment |
-| | |
-+=================================+=======================================================================================+
-| Reference # | IXA00380475 |
-| | |
-+---------------------------------+---------------------------------------------------------------------------------------+
-| Description                     | Some of the network interface operations, such as MTU change or link UP/DOWN, when   |
-|                                 | executed on a KNI interface, might fail in a multi-process environment, although they |
-| | are normally successful in the DPDK single process environment. |
-| | |
-+---------------------------------+---------------------------------------------------------------------------------------+
-| Implication | Some network interface operations on KNI cannot be used in a DPDK |
-| | multi-process environment. |
-| | |
-+---------------------------------+---------------------------------------------------------------------------------------+
-| Resolution | The ifconfig callbacks are now explicitly set in either master or secondary process. |
-| | |
-+---------------------------------+---------------------------------------------------------------------------------------+
-| Affected Environment/Platform | All |
-| | |
-+---------------------------------+---------------------------------------------------------------------------------------+
-| Driver/Module | Kernel Network Interface (KNI) |
-| | |
-+---------------------------------+---------------------------------------------------------------------------------------+
-
-Hash library cannot be used in multi-process applications with multiple binaries
---------------------------------------------------------------------------------
-
-+---------------------------------+---------------------------------------------------------------------------------------+
-| Title | Hash library cannot be used in multi-process applications with multiple binaries |
-| | |
-+=================================+=======================================================================================+
-| Reference # | IXA00168658 |
-| | |
-+---------------------------------+---------------------------------------------------------------------------------------+
-| Description | The hash function used by a given hash-table implementation is referenced in the code |
-| | by way of a function pointer. This means that it cannot work in cases where the hash |
-| | function is at a different location in the code segment in different processes, as is |
-| | the case where a DPDK multi-process application uses a number of different |
-| | binaries, for example, the client-server multi-process example. |
-| | |
-+---------------------------------+---------------------------------------------------------------------------------------+
-| Implication | The Hash library will not work if shared by multiple processes. |
-| | |
-+---------------------------------+---------------------------------------------------------------------------------------+
-| Resolution/Workaround | New API was added for multiprocess scenario. Please refer to DPDK Programmer’s |
-| | Guide for more information. |
-| | |
-+---------------------------------+---------------------------------------------------------------------------------------+
-| Affected Environment/Platform | All |
-| | |
-+---------------------------------+---------------------------------------------------------------------------------------+
-| Driver/Module | librte_hash library |
-| | |
-+---------------------------------+---------------------------------------------------------------------------------------+
-
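The multi-process-safe pattern referred to above is to compute the hash signature by calling a hash function directly (rte_jhash in this sketch) and to pass it through the with_hash variants of the lookup and add functions, so the function pointer stored in the shared table is never dereferenced from a different binary. The table handle, key layout and initial value are placeholders::

    #include <stdint.h>
    #include <rte_hash.h>
    #include <rte_jhash.h>

    /* Look up a key without going through the hash-function pointer stored
     * inside the shared table, keeping different binaries compatible. */
    static int32_t
    mp_safe_lookup(const struct rte_hash *h, const void *key, uint32_t key_len)
    {
            hash_sig_t sig = rte_jhash(key, key_len, 0 /* initial value */);

            return rte_hash_lookup_with_hash(h, key, sig);
    }
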
-Unused hugepage files are not cleared after initialization
-----------------------------------------------------------
-
-+---------------------------------+---------------------------------------------------------------------------------------+
-| Title | Hugepage files are not cleared after initialization |
-| | |
-+=================================+=======================================================================================+
-| Reference # | IXA00383462 |
-| | |
-+---------------------------------+---------------------------------------------------------------------------------------+
-| Description | EAL leaves hugepages allocated at initialization in the hugetlbfs even if they are |
-| | not used. |
-| | |
-+---------------------------------+---------------------------------------------------------------------------------------+
-| Implication | Reserved hugepages are not freed back to the system, preventing other applications |
-| | that use hugepages from running. |
-| | |
-+---------------------------------+---------------------------------------------------------------------------------------+
-| Resolution/Workaround | Reserved and unused hugepages are now freed back to the system. |
-| | |
-+---------------------------------+---------------------------------------------------------------------------------------+
-| Affected Environment/Platform | All |
-| | |
-+---------------------------------+---------------------------------------------------------------------------------------+
-| Driver/Module | EAL |
-| | |
-+---------------------------------+---------------------------------------------------------------------------------------+
-
-Packet reception issues when virtualization is enabled
-------------------------------------------------------
-
-+---------------------------------+---------------------------------------------------------------------------------------+
-| Title | Packet reception issues when virtualization is enabled |
-| | |
-+=================================+=======================================================================================+
-| Reference # | IXA00369908 |
-| | |
-+---------------------------------+---------------------------------------------------------------------------------------+
-| Description                     | Packets are not transmitted or received when VT-d is enabled in the BIOS and Intel   |
-| | IOMMU is used. More recent kernels do not exhibit this issue. |
-| | |
-+---------------------------------+---------------------------------------------------------------------------------------+
-| Implication | An application requiring packet transmission or reception will not function. |
-| | |
-+---------------------------------+---------------------------------------------------------------------------------------+
-| Resolution/Workaround | DPDK Poll Mode Driver now has the ability to map correct physical addresses to |
-| | the device structures. |
-| | |
-+---------------------------------+---------------------------------------------------------------------------------------+
-| Affected Environment/Platform | All |
-| | |
-+---------------------------------+---------------------------------------------------------------------------------------+
-| Driver/Module | Poll mode drivers |
-| | |
-+---------------------------------+---------------------------------------------------------------------------------------+
-
-
-
-Double VLAN does not work on Intel® 40GbE Ethernet controller
--------------------------------------------------------------
-
-+---------------------------------+---------------------------------------------------------------------------------------+
-| Title | Double VLAN does not work on Intel® 40GbE Ethernet controller |
-| | |
-+=================================+=======================================================================================+
-| Reference # | IXA00369908 |
-| | |
-+---------------------------------+---------------------------------------------------------------------------------------+
-| Description | On Intel® 40 GbE Ethernet controller double VLAN does not work. |
-| | This was confirmed as a Firmware issue which will be fixed in later versions of |
-| | firmware. |
-+---------------------------------+---------------------------------------------------------------------------------------+
-| Implication                     | After enabling double VLAN on a port, no packets can be transmitted out on that      |
-|                                 | port.                                                                                 |
-+---------------------------------+---------------------------------------------------------------------------------------+
-| Resolution/Workaround | Resolved in latest release with firmware upgrade. |
-| | |
-| | |
-+---------------------------------+---------------------------------------------------------------------------------------+
-| Affected Environment/Platform | All |
-| | |
-+---------------------------------+---------------------------------------------------------------------------------------+
-| Driver/Module | Poll mode drivers |
-| | |
-+---------------------------------+---------------------------------------------------------------------------------------+
diff --git a/doc/guides/rel_notes/supported_features.rst b/doc/guides/rel_notes/supported_features.rst
deleted file mode 100644
index 1102b66..0000000
--- a/doc/guides/rel_notes/supported_features.rst
+++ /dev/null
@@ -1,396 +0,0 @@
-.. BSD LICENSE
- Copyright(c) 2010-2014 Intel Corporation. All rights reserved.
- All rights reserved.
-
- Redistribution and use in source and binary forms, with or without
- modification, are permitted provided that the following conditions
- are met:
-
- * Redistributions of source code must retain the above copyright
- notice, this list of conditions and the following disclaimer.
- * Redistributions in binary form must reproduce the above copyright
- notice, this list of conditions and the following disclaimer in
- the documentation and/or other materials provided with the
- distribution.
- * Neither the name of Intel Corporation nor the names of its
- contributors may be used to endorse or promote products derived
- from this software without specific prior written permission.
-
- THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
- "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
- LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
- A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
- OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
- SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
- LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
- DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
- THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
- (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
- OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
-
-Supported Features
-==================
-
-* Packet Distributor library for dynamic, single-packet at a time, load balancing
-
-* IP fragmentation and reassembly library
-
-* Support for IPv6 in IP fragmentation and reassembly sample applications
-
-* Support for VFIO for mapping BARs and setting up interrupts
-
-* Link Bonding PMD Library supporting round-robin, active backup, balance (layer 2, layer 2+3, and layer 3+4), broadcast bonding modes,
- 802.3ad link aggregation (mode 4), transmit load balancing (mode 5) and adaptive load balancing (mode 6)
-
-* Support zero copy mode RX/TX in user space vhost sample
-
-* Support multiple queues in virtio-net PMD
-
-* Support for Intel 40GbE Controllers:
-
- * Intel® XL710 40 Gigabit Ethernet Controller
-
- * Intel® X710 40 Gigabit Ethernet Controller
-
-* Support NIC filters in addition to flow director for Intel® 1GbE and 10GbE Controllers
-
-* Virtualization (KVM)
-
- * Userspace vhost switch:
-
- New sample application to support userspace virtio back-end in host and packet switching between guests.
-
-* Virtualization (Xen)
-
- * Support for DPDK application running on Xen Domain0 without hugepages.
-
- * Para-virtualization
-
- Support front-end Poll Mode Driver in guest domain
-
- Support userspace packet switching back-end example in host domain
-
-* FreeBSD* 9.2 support for librte_pmd_e1000, librte_pmd_ixgbe and Virtual Function variants.
- Please refer to the *DPDK for FreeBSD\* Getting Started Guide*.
- Application support has been added for the following:
-
- * multiprocess/symmetric_mp
-
- * multiprocess/simple_mp
-
- * l2fwd
-
- * l3fwd
-
-* Support for sharing data over QEMU IVSHMEM
-
-* Support for Intel® Communications Chipset 8925 to 8955 Series in the DPDK-QAT Sample Application
-
-* New VMXNET3 driver for the paravirtual device presented to a VM by the VMware* ESXi Hypervisor.
-
-* BETA: example support for basic Netmap applications on DPDK
-
-* Support for the wireless KASUMI algorithm in the dpdk_qat sample application
-
-* Hierarchical scheduler implementing 5-level scheduling hierarchy (port, sub-port, pipe, traffic class, queue)
- with 64K leaf nodes (packet queues).
-
-* Packet dropper based on Random Early Detection (RED) congestion control mechanism.
-
-* Traffic Metering based on Single Rate Three Color Marker (srTCM) and Two Rate Three Color Marker (trTCM).
-
-* An API for configuring RSS redirection table on the fly
-
-* An API to support KNI in a multi-process environment
-
-* IPv6 LPM forwarding
-
-* Power management library and sample application using CPU frequency scaling
-
-* IPv4 reassembly sample application
-
-* Quota & Watermarks sample application
-
-* PCIe Multi-BAR Mapping Support
-
-* Support for Physical Functions in Poll Mode Driver for the following devices:
-
- * Intel® 82576 Gigabit Ethernet Controller
-
- * Intel® i350 Gigabit Ethernet Controller
-
- * Intel® 82599 10-Gigabit Ethernet Controller
-
- * Intel® XL710/X710 40-Gigabit Ethernet Controller
-
-* Quality of Service (QoS) Hierarchical Scheduler: Sub-port Traffic Class Oversubscription
-
-* Multi-thread Kernel NIC Interface (KNI) for performance improvement
-
-* Virtualization (KVM)
-
- * Para-virtualization
-
- Support virtio front-end poll mode driver in guest virtual machine
- Support vHost raw socket interface as virtio back-end via KNI
-
- * SR-IOV Switching for the 10G Ethernet Controller
-
- Support Physical Function to start/stop Virtual Function Traffic
-
- Support Traffic Mirroring (Pool, VLAN, Uplink and Downlink)
-
- Support VF multiple MAC addresses (Exact/Hash match), VLAN filtering
-
- Support VF receive mode configuration
-
-* Support VMDq for 1 GbE and 10 GbE NICs
-
-* Extension for the Quality of Service (QoS) sample application to allow statistics polling
-
-* New libpcap-based poll-mode driver, including support for reading from 3rd Party NICs
- using Linux kernel drivers
-
-* New multi-process example using fork() to demonstrate application resiliency and recovery,
- including reattachment to and re-initialization of shared data structures where necessary
-
-* New example (vmdq) to demonstrate VLAN-based packet filtering
-
-* Improved scalability for scheduling large numbers of timers using the rte_timer library
-
-* Support for building the DPDK as a shared library
-
-* Support for Intel® Ethernet Server Bypass Adapter X520-SR2
-
-* Poll Mode Driver support for the Intel® Ethernet Connection I354 on the Intel® Atom™
- Processor C2000 Product Family SoCs
-
-* IPv6 exact match flow classification in the l3fwd sample application
-
-* Support for multiple instances of the Intel® DPDK
-
-* Support for Intel® 82574L Gigabit Ethernet Controller - Intel® Gigabit CT Desktop Adapter
- (previously code named "Hartwell")
-
-* Support for Intel® Ethernet Controller I210 (previously code named "Springville")
-
-* Early access support for the Quad-port Intel® Ethernet Server Adapter X520-4 and X520-DA2
- (code named "Spring Fountain")
-
-* Support for Intel® X710/XL710 40 Gigabit Ethernet Controller (code named "Fortville")
-
-* Core components:
-
- * rte_mempool: allocator for fixed-sized objects
-
- * rte_ring: single- or multi- consumer/producer queue implementation
-
- * rte_timer: implementation of timers
-
- * rte_malloc: malloc-like allocator
-
- * rte_mbuf: network packet buffers, including fragmented buffers
-
- * rte_hash: support for exact-match flow classification in software
-
- * rte_lpm: support for longest prefix match in software for IPv4 and IPv6
-
- * rte_sched: support for QoS scheduling
-
- * rte_meter: support for QoS traffic metering
-
- * rte_power: support for power management
-
- * rte_ip_frag: support for IP fragmentation and reassembly
-
-* Poll Mode Driver - Common (rte_ether)
-
- * VLAN support
-
- * Support for Receive Side Scaling (RSS)
-
- * IEEE1588
-
- * Buffer chaining; Jumbo frames
-
- * TX checksum calculation
-
- * Configuration of promiscuous mode, and multicast packet receive filtering
-
- * L2 Mac address filtering
-
- * Statistics recording
-
-* IGB Poll Mode Driver - 1 GbE Controllers (librte_pmd_e1000)
-
- * Support for Intel® 82576 Gigabit Ethernet Controller (previously code named "Kawela")
-
- * Support for Intel® 82580 Gigabit Ethernet Controller (previously code named "Barton Hills")
-
- * Support for Intel® I350 Gigabit Ethernet Controller (previously code named "Powerville")
-
- * Support for Intel® 82574L Gigabit Ethernet Controller - Intel® Gigabit CT Desktop Adapter
- (previously code named "Hartwell")
-
- * Support for Intel® Ethernet Controller I210 (previously code named "Springville")
-
- * Support for L2 Ethertype filters, SYN filters, 2-tuple filters and Flex filters for 82580 and i350
-
- * Support for L2 Ethertype filters, SYN filters and L3/L4 5-tuple filters for 82576
-
-* Poll Mode Driver - 10 GbE Controllers (librte_pmd_ixgbe)
-
- * Support for Intel® 82599 10 Gigabit Ethernet Controller (previously code named "Niantic")
-
- * Support for Intel® Ethernet Server Adapter X520-T2 (previously code named "Iron Pond")
-
- * Support for Intel® Ethernet Controller X540-T2 (previously code named "Twin Pond")
-
- * Support for Virtual Machine Device Queues (VMDq) and Data Center Bridging (DCB) to divide
- incoming traffic into 128 RX queues. DCB is also supported for transmitting packets.
-
- * Support for auto negotiation down to 1 Gb
-
- * Support for Flow Director
-
- * Support for L2 Ethertype filters, SYN filters and L3/L4 5-tuple filters for 82599EB
-
-* Poll Mode Driver - 40 GbE Controllers (librte_pmd_i40e)
-
- * Support for Intel® XL710 40 Gigabit Ethernet Controller
-
- * Support for Intel® X710 40 Gigabit Ethernet Controller
-
-* Environment Abstraction Layer (librte_eal)
-
- * Multi-process support
-
- * Multi-thread support
-
- * 1 GB and 2 MB page support
-
- * Atomic integer operations
-
- * Querying CPU support of specific features
-
- * High Precision Event Timer support (HPET)
-
- * PCI device enumeration and blacklisting
-
- * Spin locks and R/W locks
-
-* Test PMD application
-
- * Support for PMD driver testing
-
-* Test application
-
- * Support for core component tests
-
-* Sample applications
-
- * Command Line
-
- * Exception Path (into Linux* for packets using the Linux TUN/TAP driver)
-
- * Hello World
-
- * Integration with Intel® Quick Assist Technology drivers 1.0.0, 1.0.1 and 1.1.0 on Intel®
- Communications Chipset 89xx Series C0 and C1 silicon.
-
- * Link Status Interrupt (Ethernet* Link Status Detection
-
- * IPv4 Fragmentation
-
- * IPv4 Multicast
-
- * IPv4 Reassembly
-
- * L2 Forwarding (supports virtualized and non-virtualized environments)
-
- * L2 Forwarding Job Stats
-
- * L3 Forwarding (IPv4 and IPv6)
-
- * L3 Forwarding in a Virtualized Environment
-
- * L3 Forwarding with Power Management
-
- * Bonding mode 6
-
- * QoS Scheduling
-
- * QoS Metering + Dropper
-
- * Quota & Watermarks
-
- * Load Balancing
-
- * Multi-process
-
- * Timer
-
- * VMDQ and DCB L2 Forwarding
-
- * Kernel NIC Interface (with ethtool support)
-
- * Userspace vhost switch
-
-* Interactive command line interface (rte_cmdline)
-
-* Updated 10 GbE Poll Mode Driver (PMD) to the latest BSD code base providing support of newer
- ixgbe 10 GbE devices such as the Intel® X520-T2 server Ethernet adapter
-
-* An API for configuring Ethernet flow control
-
-* Support for interrupt-based Ethernet link status change detection
-
-* Support for SR-IOV functions on the Intel® 82599, Intel® 82576 and Intel® i350 Ethernet
- Controllers in a virtualized environment
-
-* Improvements to SR-IOV switch configurability on the Intel® 82599 Ethernet Controllers in
- a virtualized environment.
-
-* An API for L2 Ethernet Address "whitelist" filtering
-
-* An API for resetting statistics counters
-
-* Support for RX L4 (UDP/TCP/SCTP) checksum validation by NIC
-
-* Support for TX L3 (IPv4/IPv6) and L4 (UDP/TCP/SCTP) checksum calculation offloading
-
-* Support for IPv4 packet fragmentation and reassembly
-
-* Support for zero-copy Multicast
-
-* New APIs to allow the "blacklisting" of specific NIC ports.
-
-* Header files for common protocols (IP, SCTP, TCP, UDP)
-
-* Improved multi-process application support, allowing multiple co-operating DPDK
- processes to access the NIC port queues directly.
-
-* CPU-specific compiler optimization
-
-* Job stats library for load/cpu utilization measurements
-
-* Improvements to the Load Balancing sample application
-
-* The addition of a PAUSE instruction to tight loops for energy-usage and performance improvements
-
-* Updated 10 GbE Transmit architecture incorporating new upstream PCIe* optimizations.
-
-* IPv6 support:
-
- * Support in Flow Director Signature Filters and masks
-
- * RSS support in sample application that use RSS
-
- * Exact match flow classification in the L3 Forwarding sample application
-
- * Support in LPM for IPv6 addresses
-
-* Tunneling packet support:
-
- * Provide the APIs for VXLAN destination UDP port and VXLAN packet filter configuration
- and support VXLAN TX checksum offload on Intel® 40GbE Controllers.
diff --git a/doc/guides/rel_notes/supported_os.rst b/doc/guides/rel_notes/supported_os.rst
index c33f731..7ccddbf 100644
--- a/doc/guides/rel_notes/supported_os.rst
+++ b/doc/guides/rel_notes/supported_os.rst
@@ -31,19 +31,19 @@
Supported Operating Systems
===========================
-The following Linux* distributions were successfully used to generate or run DPDK.
+The following Linux distributions were successfully used to compile or run DPDK.
-* FreeBSD* 10
+* FreeBSD 10
* Fedora release 20
-* Ubuntu* 14.04 LTS
+* Ubuntu 14.04 LTS
-* Wind River* Linux* 6
+* Wind River Linux 6
-* Red Hat* Enterprise Linux 6.5
+* Red Hat Enterprise Linux 6.5
-* SUSE Enterprise Linux* 11 SP3
+* SUSE Enterprise Linux 11 SP3
These distributions may need additional packages that are not installed by default, or a specific kernel.
-Refer to the *DPDK Getting Started Guide* for details.
+Refer to the :doc:`/linux_gsg/index` and :doc:`/freebsd_gsg/index` for details.
diff --git a/doc/guides/rel_notes/updating_apps.rst b/doc/guides/rel_notes/updating_apps.rst
deleted file mode 100644
index b49cb61..0000000
--- a/doc/guides/rel_notes/updating_apps.rst
+++ /dev/null
@@ -1,136 +0,0 @@
-Updating Applications from Previous Versions
-============================================
-
-Although backward compatibility is being maintained across DPDK releases, code written for previous versions of the DPDK
-may require some code updates to benefit from performance and user experience enhancements provided in later DPDK releases.
-
-DPDK 2.0 to DPDK 2.1
---------------------
-
-* The second argument of rte_pktmbuf_pool_init(mempool, opaque) is now a
- pointer to a struct rte_pktmbuf_pool_private instead of a uint16_t
- casted into a pointer. Backward compatibility is preserved when the
- argument was NULL which is the majority of use cases, but not if the
- opaque pointer was not NULL, as it is not technically feasible. In
- this case, the application has to be modified to properly fill a
- rte_pktmbuf_pool_private structure and pass it to
- rte_pktmbuf_pool_init().
-
-* A simpler helper rte_pktmbuf_pool_create() can be used to create a
- packet mbuf pool. The old way using rte_mempool_create() is still
- supported though and is still used for more specific cases.
-
-DPDK 1.7 to DPDK 1.8
---------------------
-
-Note that in DPDK 1.8, the structure of the rte_mbuf has changed considerably from all previous versions.
-It is recommended that users familiarize themselves with the new structure defined in the file rte_mbuf.h in the release package.
-The follow are some common changes that need to be made to code using mbufs, following an update to DPDK 1.8:
-
-* Any references to fields in the pkt or ctrl sub-structures of the mbuf, need to be replaced with references to the field
- directly from the rte_mbuf, i.e. buf->pkt.data_len should be replace by buf->data_len.
-
-* Any direct references to the data field of the mbuf (original buf->pkt.data) should now be replace by the macro rte_pktmbuf_mtod
- to get a computed data address inside the mbuf buffer area.
-
-* Any references to the in_port mbuf field should be replace by references to the port field.
-
-NOTE: The above list is not exhaustive, but only includes the most commonly required changes to code using mbufs.
-
-Intel® DPDK 1.6 to DPDK 1.7
----------------------------
-
-Note the following difference between 1.6 and 1.7:
-
-* The "default" target has been renamed to "native"
-
-Intel® DPDK 1.5 to Intel® DPDK 1.6
-----------------------------------
-
-Note the following difference between 1.5 and 1.6:
-
-* The CONFIG_RTE_EAL _UNBIND_PORTS configuration option, which was deprecated in Intel® DPDK 1.4.x, has been removed in Intel® DPDK 1.6.x.
- Applications using the Intel® DPDK must be explicitly unbound to the igb_uio driver using the dpdk_nic_bind.py script included in the
- Intel® DPDK release and documented in the *Intel® DPDK Getting Started Guide*.
-
-Intel® DPDK 1.4 to Intel® DPDK 1.5
-----------------------------------
-
-Note the following difference between 1.4 and 1.5:
-
-* Starting with version 1.5, the top-level directory created from unzipping the release package will now contain the release version number,
- that is, DPDK-1.5.2/ rather than just DPDK/ .
-
-Intel® DPDK 1.3 to Intel® DPDK 1.4.x
-------------------------------------
-
-Note the following difference between releases 1.3 and 1.4.x:
-
-* In Release 1.4.x, Intel® DPDK applications will no longer unbind the network ports from the Linux* kernel driver when the application initializes.
- Instead, any ports to be used by Intel® DPDK must be unbound from the Linux driver and bound to the igb_uio driver before the application starts.
- This can be done using the pci_unbind.py script included with the Intel® DPDK release and documented in the *Intel® DPDK Getting Started Guide*.
-
- If the port unbinding behavior present in previous Intel® DPDK releases is required, this can be re-enabled using the CONFIG_RTE_EAL_UNBIND_PORTS
- setting in the appropriate Intel® DPDK compile-time configuration file.
-
-* In Release 1.4.x, HPET support is disabled in the Intel® DPDK build configuration files, which means that the existing rte_eal_get_hpet_hz() and
- rte_eal_get_hpet_cycles() APIs are not available by default.
- For applications that require timing APIs, but not the HPET timer specifically, it is recommended that the API calls rte_get_timer_cycles()
- and rte_get_timer_hz() be used instead of the HPET-specific APIs.
- These generic APIs can work with either TSC or HPET time sources, depending on what is requested by an application,
- and on what is available on the system at runtime.
-
- For more details on this and how to re-enable the HPET if it is needed, please consult the *Intel® DPDK Getting Started Guide*.
-
-Intel® DPDK 1.2 to Intel® DPDK 1.3
-----------------------------------
-
-Note the following difference between releases 1.2 and 1.3:
-
-* In release 1.3, the Intel® DPDK supports two different 1 GbE drivers: igb and em.
- Both of them are located in the same library: lib_pmd_e1000.a.
- Therefore, the name of the library to link with for the igb PMD has changed from librte_pmd_igb.a to librte_pmd_e1000.a.
-
-* The rte_common.h macros, RTE_ALIGN, RTE_ALIGN_FLOOR and RTE_ALIGN_CEIL were renamed to, RTE_PTR_ALIGN, RTE_PTR_ALIGN_FLOOR
- and RTE_PTR_ALIGN_CEIL.
- The original macros are still available but they have different behavior.
- Not updating the macros results in strange compilation errors.
-
-* The rte_tailq is now defined statically. The rte_tailq APIs have also been changed from being public to internal use only.
- The old public APIs are maintained for backward compatibility reasons. Details can be found in the *Intel® DPDK API Reference*.
-
-* The method for managing mbufs on the NIC RX rings has been modified to improve performance.
- To allow applications to use the newer, more optimized, code path,
- it is recommended that the rx_free_thresh field in the rte_eth_conf structure,
- which is passed to the Poll Mode Driver when initializing a network port, be set to a value of 32.
-
-Intel® DPDK 1.1 to Intel® DPDK 1.2
-----------------------------------
-
-Note the following difference between release 1.1 and release 1.2:
-
-* The names of the 1G and 10G Ethernet drivers have changed between releases 1.1 and 1.2. While the old driver names still work,
- it is recommended that code be updated to the new names, since the old names are deprecated and may be removed in a future
- release.
-
- The items affected are as follows:
-
- * Any macros referring to RTE_LIBRTE_82576_PMD should be updated to refer to RTE_LIBRTE_IGB_PMD.
-
- * Any macros referring to RTE_LIBRTE_82599_PMD should be updated to refer to RTE_LIBRTE_IXGBE_PMD.
-
- * Any calls to the rte_82576_pmd_init() function should be replaced by calls to rte_igb_pmd_init().
-
- * Any calls to the rte_82599_pmd_init() function should be replaced by calls to rte_ixgbe_pmd_init().
-
-* The method used for managing mbufs on the NIC TX rings for the 10 GbE driver has been modified to improve performance.
- As a result, different parameter values should be passed to the rte_eth_tx_queue_setup() function.
- The recommended default values are to have tx_thresh.tx_wthresh, tx_free_thresh,
- as well as the new parameter tx_rs_thresh (all in the struct rte_eth_txconf datatype) set to zero.
- See the "Configuration of Transmit and Receive Queues" section in the *Intel® DPDK Programmer's Guide* for more details.
-
-.. note::
-
- If the tx_free_thresh field is set to TX_RING_SIZE+1 , as was previously used in some cases to disable free threshold check,
- then an error is generated at port initialization time.
- To avoid this error, configure the TX threshold values as suggested above.
--
1.8.1.4
^ permalink raw reply [relevance 3%]
* [dpdk-dev] [PATCH] doc: add missing API headers
@ 2015-08-11 16:29 4% Thomas Monjalon
0 siblings, 0 replies; 200+ results
From: Thomas Monjalon @ 2015-08-11 16:29 UTC (permalink / raw)
To: John McNamara; +Cc: dev
Some libraries were not included in doxygen documentation.
Other ones were included but not listed in the index.
The malloc library is now included in EAL.
The libraries compat and jobstats are added but not doxygen compliant.
Signed-off-by: Thomas Monjalon <thomas.monjalon@6wind.com>
---
doc/api/doxy-api-index.md | 16 ++++++++++++++--
doc/api/doxy-api.conf | 4 +++-
2 files changed, 17 insertions(+), 3 deletions(-)
diff --git a/doc/api/doxy-api-index.md b/doc/api/doxy-api-index.md
index 6958f8f..72ac3c4 100644
--- a/doc/api/doxy-api-index.md
+++ b/doc/api/doxy-api-index.md
@@ -36,7 +36,9 @@ API {#index}
There are many libraries, so their headers may be grouped by topics:
- **device**:
+ [dev] (@ref rte_dev.h),
[ethdev] (@ref rte_ethdev.h),
+ [ethctrl] (@ref rte_eth_ctrl.h),
[devargs] (@ref rte_devargs.h),
[bond] (@ref rte_eth_bond.h),
[vhost] (@ref rte_virtio_net.h),
@@ -76,12 +78,15 @@ There are many libraries, so their headers may be grouped by topics:
- **layers**:
[ethernet] (@ref rte_ether.h),
+ [ARP] (@ref rte_arp.h),
+ [ICMP] (@ref rte_icmp.h),
[IP] (@ref rte_ip.h),
[SCTP] (@ref rte_sctp.h),
[TCP] (@ref rte_tcp.h),
[UDP] (@ref rte_udp.h),
[frag/reass] (@ref rte_ip_frag.h),
- [LPM route] (@ref rte_lpm.h),
+ [LPM IPv4 route] (@ref rte_lpm.h),
+ [LPM IPv6 route] (@ref rte_lpm6.h),
[ACL] (@ref rte_acl.h)
- **QoS**:
@@ -92,6 +97,7 @@ There are many libraries, so their headers may be grouped by topics:
- **hashes**:
[hash] (@ref rte_hash.h),
[jhash] (@ref rte_jhash.h),
+ [thash] (@ref rte_thash.h),
[FBK hash] (@ref rte_fbk_hash.h),
[CRC hash] (@ref rte_hash_crc.h)
@@ -99,8 +105,10 @@ There are many libraries, so their headers may be grouped by topics:
[mbuf] (@ref rte_mbuf.h),
[ring] (@ref rte_ring.h),
[distributor] (@ref rte_distributor.h),
+ [reorder] (@ref rte_reorder.h),
[tailq] (@ref rte_tailq.h),
- [bitmap] (@ref rte_bitmap.h)
+ [bitmap] (@ref rte_bitmap.h),
+ [ivshmem] (@ref rte_ivshmem.h)
- **packet framework**:
* [port] (@ref rte_port.h):
@@ -122,10 +130,13 @@ There are many libraries, so their headers may be grouped by topics:
- **basic**:
[approx fraction] (@ref rte_approx.h),
[random] (@ref rte_random.h),
+ [config file] (@ref rte_cfgfile.h),
[key/value args] (@ref rte_kvargs.h),
[string] (@ref rte_string_fns.h)
- **debug**:
+ [jobstats] (@ref rte_jobstats.h),
+ [hexdump] (@ref rte_hexdump.h),
[debug] (@ref rte_debug.h),
[log] (@ref rte_log.h),
[warnings] (@ref rte_warnings.h),
@@ -134,4 +145,5 @@ There are many libraries, so their headers may be grouped by topics:
- **misc**:
[EAL config] (@ref rte_eal.h),
[common] (@ref rte_common.h),
+ [ABI compat] (@ref rte_compat.h),
[version] (@ref rte_version.h)
diff --git a/doc/api/doxy-api.conf b/doc/api/doxy-api.conf
index de32af1..cfb4627 100644
--- a/doc/api/doxy-api.conf
+++ b/doc/api/doxy-api.conf
@@ -34,16 +34,18 @@ INPUT = doc/api/doxy-api-index.md \
lib/librte_eal/common/include \
lib/librte_eal/common/include/generic \
lib/librte_acl \
+ lib/librte_cfgfile \
lib/librte_cmdline \
+ lib/librte_compat \
lib/librte_distributor \
lib/librte_ether \
lib/librte_hash \
lib/librte_ip_frag \
+ lib/librte_ivshmem \
lib/librte_jobstats \
lib/librte_kni \
lib/librte_kvargs \
lib/librte_lpm \
- lib/librte_malloc \
lib/librte_mbuf \
lib/librte_mempool \
lib/librte_meter \
--
2.4.2
^ permalink raw reply [relevance 4%]
* [dpdk-dev] [PATCH] doc: simplify release notes cover
@ 2015-08-11 22:58 4% Thomas Monjalon
0 siblings, 0 replies; 200+ results
From: Thomas Monjalon @ 2015-08-11 22:58 UTC (permalink / raw)
To: John McNamara; +Cc: dev
One hierarchical level is enough for this table of content.
Use generated release number.
Signed-off-by: Thomas Monjalon <thomas.monjalon@6wind.com>
---
doc/guides/rel_notes/index.rst | 2 +-
doc/guides/rel_notes/rel_description.rst | 5 ++---
2 files changed, 3 insertions(+), 4 deletions(-)
diff --git a/doc/guides/rel_notes/index.rst b/doc/guides/rel_notes/index.rst
index f0f97d1..d01cbc8 100644
--- a/doc/guides/rel_notes/index.rst
+++ b/doc/guides/rel_notes/index.rst
@@ -36,7 +36,7 @@ DPDK Release Notes
Contents
.. toctree::
- :maxdepth: 2
+ :maxdepth: 1
:numbered:
rel_description
diff --git a/doc/guides/rel_notes/rel_description.rst b/doc/guides/rel_notes/rel_description.rst
index f240db1..994845f 100644
--- a/doc/guides/rel_notes/rel_description.rst
+++ b/doc/guides/rel_notes/rel_description.rst
@@ -32,10 +32,9 @@
Description of Release
======================
-
-This document contains the release notes for Data Plane Development Kit (DPDK) release version 2.0.0 and previous releases.
+This document contains the release notes for Data Plane Development Kit (DPDK)
+release version |release| and previous releases.
It lists new features, fixed bugs, API and ABI changes and known issues.
For instructions on compiling and running the release, see the :ref:`DPDK Getting Started Guide <linux_gsg>`.
-
--
2.4.2
^ permalink raw reply [relevance 4%]
* Re: [dpdk-dev] [PATCH v3] doc: announce abi change for interrupt mode
@ 2015-08-12 8:51 4% ` Thomas Monjalon
0 siblings, 0 replies; 200+ results
From: Thomas Monjalon @ 2015-08-12 8:51 UTC (permalink / raw)
To: Liang, Cunming; +Cc: dev
> > The patch announces the planned ABI changes for interrupt mode.
> >
> > Signed-off-by: Cunming Liang <cunming.liang@intel.com>
> Acked-by: Helin Zhang <helin.zhang@intel.com>
Applied, thanks
^ permalink raw reply [relevance 4%]
* Re: [dpdk-dev] [PATCH] doc: announce ABI change for old flow director APIs removing
2015-08-11 3:01 4% ` Zhang, Helin
@ 2015-08-12 9:02 4% ` Thomas Monjalon
0 siblings, 0 replies; 200+ results
From: Thomas Monjalon @ 2015-08-12 9:02 UTC (permalink / raw)
To: Wu, Jingjing; +Cc: dev
> > APIs for flow director filters has been replaced by rte_eth_dev_filter_ctrl by
> > previous releases. Enic, ixgbe and i40e are switched to support filter_ctrl APIs, so
> > the old APIs are useless, and ready to be removed now.
> > This patch announces the ABI change for these APIs removing.
> >
> > Signed-off-by: Jingjing Wu <jingjing.wu@intel.com>
> Acked-by: Helin Zhang <helin.zhang@intel.com>
Reworded and applied, thanks
^ permalink raw reply [relevance 4%]
* Re: [dpdk-dev] [PATCH] abi: announce abi changes plan for struct rte_eth_fdir_flow_ext
@ 2015-08-12 10:25 5% ` Thomas Monjalon
0 siblings, 0 replies; 200+ results
From: Thomas Monjalon @ 2015-08-12 10:25 UTC (permalink / raw)
To: Jingjing Wu; +Cc: dev
2015-06-17 11:36, Jingjing Wu:
> It announces the planned ABI change to support flow director filtering in VF on v2.2.
>
> Signed-off-by: Jingjing Wu <jingjing.wu@intel.com>
Reworded and applied, thanks
^ permalink raw reply [relevance 5%]
* Re: [dpdk-dev] [PATCH] doc: announce ABI change for rte_eth_fdir_filter
2015-08-04 8:52 4% ` Mcnamara, John
@ 2015-08-12 10:38 4% ` Thomas Monjalon
0 siblings, 0 replies; 200+ results
From: Thomas Monjalon @ 2015-08-12 10:38 UTC (permalink / raw)
To: Wu, Jingjing; +Cc: dev
> > To fix the FVL's flow director issue for SCTP flow, rte_eth_fdir_filter
> > need to be change to support SCTP flow keys extension. Here announce the
> > ABI deprecation.
> >
> > Signed-off-by: jingjing.wu <jingjing.wu@intel.com>
>
> Acked-by: John McNamara <john.mcnamara@intel.com>
Reworded and applied, thanks
^ permalink raw reply [relevance 4%]
* Re: [dpdk-dev] [PATCH v2] doc: announce ABI change of rte_eth_fdir_filter, rte_eth_fdir_masks
2015-08-04 8:56 4% ` Mcnamara, John
@ 2015-08-12 14:19 4% ` Thomas Monjalon
0 siblings, 0 replies; 200+ results
From: Thomas Monjalon @ 2015-08-12 14:19 UTC (permalink / raw)
To: Lu, Wenzhuo; +Cc: dev
> > For x550 supports 2 new flow director modes, MAC VLAN and Cloud. The MAC
> > VLAN mode means the MAC and VLAN are monitored. The Cloud mode is for
> > VxLAN and NVGRE, and the tunnel type, TNI/VNI, inner MAC and inner VLAN
> > are monitored. So, there're a few new lookup fields for these 2 new modes,
> > like MAC, tunnel type, TNI/VNI. We have to change the ABI to support these
> > new lookup fields.
> >
> > v2 changes:
> > * Correct the names of the structures.
> > * Wrap the words.
> > * Add explanation for the new modes.
> >
> > Signed-off-by: Wenzhuo Lu <wenzhuo.lu@intel.com>
>
> Acked-by: John McNamara <john.mcnamara@intel.com>
Applied, thanks
^ permalink raw reply [relevance 4%]
* Re: [dpdk-dev] [PATCH] abi: Announce abi changes plan for vhost-user multiple queues
@ 2015-08-12 14:57 5% ` Thomas Monjalon
0 siblings, 0 replies; 200+ results
From: Thomas Monjalon @ 2015-08-12 14:57 UTC (permalink / raw)
To: Ouyang Changchun; +Cc: dev
> It announces the planned ABI changes for vhost-user multiple queues feature on v2.2.
>
> Signed-off-by: Changchun Ouyang <changchun.ouyang@intel.com>
> ---
> +* The ABI changes are planned for struct virtio_net in order to support vhost-user multiple queues feature. The upcoming release 2.1 will not contain these ABI changes, but release 2.2 will, and no backwards compatibility is planed due to the vhost-user multiple queues feature enabling. Binaries using this library build prior to version 2.2 will require updating and recompilation.
Applied with this rewording:
It should be integrated in release 2.2 without backward compatibility.
It *should* be in next release but we have to wait that the feature is
stable/integrated in Qemu.
^ permalink raw reply [relevance 5%]
* [dpdk-dev] [PATCH] doc: updated release notes for r2.1
@ 2015-08-13 11:04 5% ` John McNamara
0 siblings, 0 replies; 200+ results
From: John McNamara @ 2015-08-13 11:04 UTC (permalink / raw)
To: dev
Added release notes for the DPDK R2.1 release.
Signed-off-by: John McNamara <john.mcnamara@intel.com>
---
doc/guides/rel_notes/release_2_1.rst | 980 ++++++++++++++++++++++++++++++++++-
1 file changed, 970 insertions(+), 10 deletions(-)
diff --git a/doc/guides/rel_notes/release_2_1.rst b/doc/guides/rel_notes/release_2_1.rst
index c39418c..2bcc719 100644
--- a/doc/guides/rel_notes/release_2_1.rst
+++ b/doc/guides/rel_notes/release_2_1.rst
@@ -1,5 +1,5 @@
.. BSD LICENSE
- Copyright(c) 2010-2014 Intel Corporation. All rights reserved.
+ Copyright(c) 2010-2015 Intel Corporation. All rights reserved.
All rights reserved.
Redistribution and use in source and binary forms, with or without
@@ -36,34 +36,994 @@ DPDK Release 2.1
New Features
------------
-* TODO.
+* **Enabled cloning of indirect mbufs.**
+
+ This feature removes a limitation of ``rte_pktmbuf_attach()`` which
+ generated the warning: "mbuf we're attaching to must be direct".
+
+ Now, when attaching to an indirect mbuf it is possible to:
+
+ * Copy all relevant fields (address, length, offload, ...) as before.
+
+ * Get the pointer to the mbuf that embeds the data buffer (direct mbuf),
+ and increase the reference counter.
+
+ When detaching the mbuf, we can now retrieve this direct mbuf as the
+ pointer is determined from the buffer address.
+
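A minimal sketch of the entry above, assuming the long-standing
``rte_pktmbuf_alloc()``/``rte_pktmbuf_attach()`` API and an already-created
mbuf pool; the exact reference-count behaviour should be checked against the
rte_mbuf.h shipped with this release:

    #include <rte_mbuf.h>

    /* Attach a new header mbuf to 'm', which may itself be indirect. */
    static struct rte_mbuf *
    clone_any_mbuf(struct rte_mbuf *m, struct rte_mempool *pool)
    {
        struct rte_mbuf *mi = rte_pktmbuf_alloc(pool);

        if (mi == NULL)
            return NULL;
        /* The direct mbuf owning the data buffer is found from the buffer
         * address and its reference counter is increased. */
        rte_pktmbuf_attach(mi, m);
        return mi;
    }

    /* A later rte_pktmbuf_free(mi) detaches and drops that reference. */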
+
+* **Extended packet type support.**
+
+ In previous releases mbuf packet types were indicated by 6 bits in the
+ ``ol_flags``. This was not enough for some supported NICs. For example i40e
+ hardware can recognize more than 150 packet types. Not being able to
+ identify these additional packet types limits access to hardware offload
+ capabilities.
+
+ So an extended "unified" packet type was added to support all possible
+ PMDs. The 16 bit packet_type in the mbuf structure was changed to 32 bits
+ and used for this purpose.
+
+ To avoid breaking ABI compatibility, the code changes for this feature are
+ enclosed in a ``RTE_NEXT_ABI`` ifdef. This is enabled by default but can be
+ turned off for ABI compatibility with DPDK R2.0.
+
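A sketch of how an application might branch on the unified packet type; the
RTE_PTYPE_* macro names and the older PKT_RX_IPV4_HDR flag are taken from the
unified packet type series and should be verified against the rte_mbuf.h of
this release:

    #include <rte_mbuf.h>

    /* Returns non-zero for IPv4 packets, using the 32-bit packet_type
     * when the RTE_NEXT_ABI layout is enabled. */
    static int
    pkt_is_ipv4(const struct rte_mbuf *m)
    {
    #ifdef RTE_NEXT_ABI
        return (m->packet_type & RTE_PTYPE_L3_MASK) == RTE_PTYPE_L3_IPV4;
    #else
        return (m->ol_flags & PKT_RX_IPV4_HDR) != 0; /* old 6-bit scheme */
    #endif
    }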
+
+* **Reworked memzone to be allocated by malloc and also support freeing.**
+
+ In the memory hierarchy, memsegs are groups of physically contiguous
+ hugepages, memzones are slices of memsegs, and malloc slices memzones
+ into smaller memory chunks.
+
+ This feature modifies ``malloc()`` so it partitions memsegs instead of
+ memzones. Now memzones allocate their memory from the malloc heap.
+
+ Backward compatibility with the API and ABI is maintained.
+
+ This also allows memzones, and any other structure based on memzones, for
+ example mempools, to be freed. Currently only the API for freeing memzones
+ is supported.
+
+
+* **Interrupt mode PMD.**
+
+ This feature introduces a low-latency one-shot RX interrupt into DPDK. It
+ also adds a polling and interrupt mode switch control example.
+
+ DPDK userspace interrupt notification and handling mechanism is based on
+ UIO/VFIO with the following limitations:
+
+ * Per queue RX interrupt events are only allowed in VFIO which supports
+ multiple MSI-X vectors.
+ * In UIO, the RX interrupt shares the same vector with other
+ interrupts. When the RX interrupt and LSC interrupt are both enabled, only
+ the former is available.
+ * RX interrupt is only implemented for the linuxapp target.
+ * The feature is currently only enabled for two PMDs: ixgbe and igb.
+
+
+* **Packet Framework enhancements.**
+
+ Several enhancements were made to the Packet Framework:
+
+ * A new configuration file syntax has been introduced for IP pipeline
+ applications. Parsing of the configuration file is changed.
+ * Implementation of the IP pipeline application is modified to make it more
+ structured and user friendly.
+ * Implementation of the command line interface (CLI) for each pipeline type
+ has been moved to the separate compilation unit. Syntax of pipeline CLI
+ commands has been changed.
+ * Initialization of IP pipeline is modified to match the new parameters
+ structure.
+ * New implementation of pass-through pipeline, firewall pipeline, routing
+ pipeline, and flow classification has been added.
+ * Master pipeline with CLI interface has been added.
+ * Added extended documentation of the IP Pipeline.
+
+
+* **Added API for IEEE1588 timestamping.**
+
+ This feature adds an ethdev API to enable, disable and read IEEE1588/802.1AS
+ PTP timestamps from devices that support it. The following functions were
+ added:
+
+ * ``rte_eth_timesync_enable()``
+ * ``rte_eth_timesync_disable()``
+ * ``rte_eth_timesync_read_rx_timestamp()``
+ * ``rte_eth_timesync_read_tx_timestamp()``
+
+ The "ieee1588" forwarding mode in testpmd was also refactored to demonstrate
+ the new API.
+
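A minimal usage sketch of the new calls, assuming the prototypes take a port
id, a struct timespec and, for the RX variant, a flags argument as described
in the rte_ethdev.h of this release:

    #include <stdio.h>
    #include <time.h>
    #include <rte_ethdev.h>

    static void
    read_ptp_rx_timestamp(uint8_t port_id)
    {
        struct timespec ts;

        rte_eth_timesync_enable(port_id);

        /* ... receive a PTP frame on this port, then: */
        if (rte_eth_timesync_read_rx_timestamp(port_id, &ts, 0) == 0)
            printf("RX timestamp: %ld.%09ld\n",
                   (long)ts.tv_sec, ts.tv_nsec);

        rte_eth_timesync_disable(port_id);
    }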
+
+* **Added multicast address filtering.**
+
+ Added multicast address filtering via a new ethdev function
+ ``set_mc_addr_list()``.
+
+ This overcomes a limitation in previous releases where the receipt of
+ multicast packets on a given port could only be enabled by invoking the
+ ``rte_eth_allmulticast_enable()`` function. This method did not work for VFs
+ in SR-IOV architectures when the host PF driver does not allow these
+ operations on VFs. In such cases, joined multicast addresses had to be added
+ individually to the set of multicast addresses that are filtered by the [VF]
+ port.
+
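As an illustration, a sketch that installs a small multicast address list
through the new ethdev call; the public wrapper name
``rte_eth_dev_set_mc_addr_list()`` and its argument order should be checked
against the rte_ethdev.h of this release:

    #include <rte_common.h>
    #include <rte_ether.h>
    #include <rte_ethdev.h>

    static int
    join_multicast_groups(uint8_t port_id)
    {
        /* 01:00:5e:xx:xx:xx addresses for 224.0.0.1 and 224.0.0.251 */
        static struct ether_addr mc_list[] = {
            { .addr_bytes = { 0x01, 0x00, 0x5e, 0x00, 0x00, 0x01 } },
            { .addr_bytes = { 0x01, 0x00, 0x5e, 0x00, 0x00, 0xfb } },
        };

        return rte_eth_dev_set_mc_addr_list(port_id, mc_list,
                                            RTE_DIM(mc_list));
    }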
+
+* **Added Flow Director extensions.**
+
+ Several Flow Director extensions were added such as:
+
+ * Support for RSS and Flow Director hashes in vector RX.
+ * Added Flow Director for L2 payload.
+
+
+* **Added RSS hash key size query per port.**
+
+ This feature supports querying the RSS hash key size of each port. A new
+ field ``hash_key_size`` has been added in the ``rte_eth_dev_info`` struct
+ for storing hash key size in bytes.
+
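A short sketch of querying the new field; a value of 0 can be treated as "not
reported by the PMD":

    #include <string.h>
    #include <rte_ethdev.h>

    static uint8_t
    rss_hash_key_size(uint8_t port_id)
    {
        struct rte_eth_dev_info dev_info;

        memset(&dev_info, 0, sizeof(dev_info));
        rte_eth_dev_info_get(port_id, &dev_info);
        return dev_info.hash_key_size;  /* in bytes, 0 if not reported */
    }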
+
+* **Added userspace ethtool support.**
+
+ Added userspace ethtool support to provide a familiar interface for
+ applications that manage devices via kernel-space ``ethtool_op`` and
+ ``net_device_op``.
+
+ The initial implementation focuses on operations that can be implemented
+ through existing ``netdev`` APIs. More operations will be supported in later
+ releases.
+
+
+* **Updated the ixgbe base driver.**
+
+ The ixgbe base driver was updated with several changes including the
+ following:
+
+ * Added a new 82599 device id.
+ * Added new X550 PHY ids.
+ * Added SFP+ dual-speed support.
+ * Added wait helper for X550 IOSF accesses.
+ * Added X550em features.
+ * Added X557 PHY LEDs support.
+ * Commands for flow director.
+ * Issue firmware command when resetting X550em.
+
+ See the git log for full details of the ixgbe/base changes.
+
+
+* **Added additional hotplug support.**
+
+ Port hotplug support was added to the following PMDs:
+
+ * e1000/igb.
+ * ixgbe.
+ * i40e.
+ * fm10k.
+ * Ring.
+ * Bonding.
+ * Virtio.
+
+ Port hotplug support was added to BSD.
+
+
+* **Added ixgbe LRO support.**
+
+ Added LRO support for x540 and 82599 devices.
+
+
+* **Added extended statistics for ixgbe.**
+
+ Implemented ``xstats_get()`` and ``xstats_reset()`` in dev_ops for
+ ixgbe to expose detailed error statistics to DPDK applications.
+
+ These will be implemented for other PMDs in later releases.
+
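A sketch of reading the extended statistics through the generic ethdev
wrapper; the ``rte_eth_xstats`` layout (a name string plus a 64-bit value)
and the return-value convention are assumptions to verify against the
rte_ethdev.h of this release:

    #include <stdio.h>
    #include <inttypes.h>
    #include <rte_common.h>
    #include <rte_ethdev.h>

    static void
    dump_xstats(uint8_t port_id)
    {
        struct rte_eth_xstats xstats[256];
        int i, n;

        n = rte_eth_xstats_get(port_id, xstats, RTE_DIM(xstats));
        if (n < 0 || n > (int)RTE_DIM(xstats))
            return;  /* error, or table too small for this port */
        for (i = 0; i < n; i++)
            printf("%s: %" PRIu64 "\n", xstats[i].name, xstats[i].value);
    }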
+
+* **Added proc_info application.**
+
+ Created a new ``proc_info`` application, by refactoring the existing
+ ``dump_cfg`` application, to demonstrate the usage of retrieving statistics,
+ and the new extended statistics (see above), for DPDK interfaces.
+
+
+* **Updated the i40e base driver.**
+
+ The i40e base driver was updated with several changes including the
+ following:
+
+ * Support for building both PF and VF driver together.
+ * Support for CEE DCBX on recent firmware versions.
+ * Replacement of ``i40e_debug_read_register()``.
+ * Rework of ``i40e_hmc_get_object_va``.
+ * Update of shadow RAM read/write functions.
+ * Enhancement of polling NVM semaphore.
+ * Enhancements on adminq init and sending asq command.
+ * Update of get/set LED functions.
+ * Addition of AOC phy types to case statement in get_media_type.
+ * Support for iSCSI capability.
+ * Setting of FLAG_RD when sending driver version to FW.
+
+ See the git log for full details of the i40e/base changes.
+
+
+* **Added support for port mirroring in i40e.**
+
+ Enabled mirror functionality in the i40e driver.
+
+
+* **Added support for i40e double VLAN, QinQ, stripping and insertion.**
+
+ Added support to the i40e driver for offloading double VLAN (QinQ) tags to
+ the mbuf header, and inserting double vlan tags by hardware to the packets
+ to be transmitted. Added a new field ``vlan_tci_outer`` in the ``rte_mbuf``
+ struct, and new flags in ``ol_flags`` to support this feature.
+
+
+
+* **Added fm10k promiscuous mode support.**
+
+ Added support for promiscuous/allmulticast enable and disable in the fm10k PF
+ function. VF is not supported yet.
+
+
+* **Added fm10k jumbo frame support.**
+
+ Added support for jumbo frames of less than 15K in both VF and PF functions in the
+ fm10k pmd.
+
+
+* **Added fm10k mac vlan filtering support.**
+
+ Added support for the fm10k MAC filter, only available in PF. Updated the
+ VLAN filter to add/delete one static entry in the MAC table for each
+ combination of VLAN and MAC address.
+
+
+* **Added support for the Broadcom bnx2x driver.**
+
+ Added support for the Broadcom NetXtreme II bnx2x driver.
+
+
+* **Added support for the Chelsio CXGBE driver.**
+
+ Added support for the CXGBE Poll Mode Driver for the Chelsio Terminator 5
+ series of 10G/40G adapters.
+
+
+* **Enabled VMXNET3 vlan filtering.**
+
+ Added support for the VLAN filter functionality of the VMXNET3 interface.
+
+
+* **Added support for vhost live migration.**
+
+ Added support to allow live migration of vhost. Without this feature, qemu
+ will report the following error: "migrate: Migration disabled: vhost lacks
+ VHOST_F_LOG_ALL feature".
+
+
+* **Added support for pcap jumbo frames.**
+
+ Extended the PCAP PMD to support jumbo frames for RX and TX.
+
+
+* **Added support for the TILE-Gx architecture.**
+
+ Added support for the EZchip TILE-Gx family of SoCs.
+
+
+* **Added hardware memory transactions/lock elision for x86.**
+
+ Added the use of hardware memory transactions (HTM) on fast-path for rwlock
+ and spinlock (a.k.a. lock elision). The methods are implemented for x86
+ using Restricted Transactional Memory instructions (Intel(r) Transactional
+ Synchronization Extensions). The implementation falls back to the normal
+ rwlock if HTM is not available or memory transactions fail. This is not a
+ replacement for all rwlock usages since not all critical sections protected
+ by locks are friendly to HTM. For example, an attempt to perform a HW I/O
+ operation inside a hardware memory transaction always aborts the transaction
+ since the CPU is not able to roll-back should the transaction
+ fail. Therefore, hardware transactional locks are not advised to be used
+ around ``rte_eth_rx_burst()`` and ``rte_eth_tx_burst()`` calls.
+
+
+* **Updated Jenkins Hash function**
+
+ Updated the version of the Jenkins Hash (jhash) function used in DPDK from
+ the 1996 version to the 2006 version. This gives up to 35% better
+ performance, compared to the original one.
+
+ Note, the hashes generated by the updated version differ from the hashes
+ generated by the previous version.
+
+
+* **Added software implementation of the Toeplitz RSS hash**
+
+ Added a software implementation of the Toeplitz hash function used by RSS. It
+ can be used either for packet distribution on a single queue NIC or for
+ simulating RSS computation on a specific NIC (for example after GRE header
+ de-encapsulation).
+
+
+* **Replaced the existing hash library with a Cuckoo hash implementation.**
+
+ Replaced the existing hash library with another approach, using the Cuckoo
+ Hash method to resolve collisions (open addressing). This method pushes
+ items from a full bucket when a new entry must be added to it, storing the
+ evicted entry in an alternative location, using a secondary hash function.
+
+ This gives the user the ability to store more entries when a bucket is full,
+ in comparison with the previous implementation.
+
+ The API has not been changed, although new fields have been added in the
+ ``rte_hash`` structure, which has been changed to internal use only.
+
+ The main change when creating a new table is that the number of entries per
+ bucket is now fixed, so its parameter is ignored now (it is still there to
+ maintain the same parameters structure).
+
+ Also, the maximum burst size in the lookup_burst function has been increased to
+ 64, to improve performance.
+
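A minimal creation sketch; since the API is unchanged, the parameter field
names below follow the existing rte_hash.h and the now-ignored bucket-entries
parameter is simply left out:

    #include <rte_hash.h>
    #include <rte_jhash.h>

    struct flow_key { uint32_t ip_src, ip_dst; uint16_t port_src, port_dst; };

    static struct rte_hash *
    create_flow_table(int socket_id)
    {
        struct rte_hash_parameters params = {
            .name = "flow_table",
            .entries = 1 << 20,              /* total table entries */
            .key_len = sizeof(struct flow_key),
            .hash_func = rte_jhash,
            .hash_func_init_val = 0,
            .socket_id = socket_id,
        };

        return rte_hash_create(&params);
    }

    /* rte_hash_add_key(h, &key) and rte_hash_lookup(h, &key) are unchanged:
     * both return the entry position, or a negative error code. */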
+
+* **Optimized KNI RX burst size computation.**
+
+ Optimized KNI RX burst size computation by avoiding checking how many
+ entries are in ``kni->rx_q`` prior to actually pulling them from the fifo.
+
+
+* **Added KNI multicast.**
+
+ Enabled adding multicast addresses to KNI interfaces by adding an empty
+ callback for ``set_rx_mode`` (typically used for setting up hardware) so
+ that the ioctl succeeds. This is the same thing as the Linux tap interface
+ does.
+
+
+* **Added cmdline polling mode.**
+
+ Added the ability to process console input in the same thread as packet
+ processing by using the ``poll()`` function.
+
+* **Added VXLAN Tunnel End point sample application.**
+
+ Added a Tunnel End point (TEP) sample application that simulates a VXLAN
+ Tunnel Endpoint (VTEP) termination in DPDK. It is used to demonstrate the
+ offload and filtering capabilities of Intel XL710 10/40 GbE NICs for VXLAN
+ packets.
+
+
+* **Enabled combining of the ``-m`` and ``--no-huge`` EAL options.**
+
+ Added option to allow combining of the ``-m`` and ``--no-huge`` EAL command
+ line options.
+
+ This allows user application to run as non-root but with higher memory
+ allocations, and removes a constraint on ``--no-huge`` mode being limited to
+ 64M.
+
Resolved Issues
---------------
-* TODO.
+* **acl: Fix ambiguity between test rules.**
+
+ Some test rules had equal priority for the same category. That could cause
+ an ambiguity in building the trie and test results.
+
+
+* **acl: Fix invalid rule wildness calculation for bitmask field type.**
+
+
+* **acl: Fix matching rule.**
+
+
+* **acl: Fix unneeded trie splitting for subset of rules.**
+
+ When rebuilding a trie for limited rule-set, don't try to split the rule-set
+ even further.
+
+
+* **app/testpmd: Fix crash when port id out of bound.**
+
+ Fixed issues in testpmd where using a port greater than 32 would cause a seg
+ fault.
+
+ Fixes: edab33b1c01d ("app/testpmd: support port hotplug")
+
+
+* **app/testpmd: Fix reply to a multicast ICMP request.**
+
+ Set the IP source and destination addresses in the IP header of the ICMP
+ reply.
+
+
+* **app/testpmd: fix MAC address in ARP reply.**
+
+ Fixed issue where in the ``icmpecho`` forwarding mode, ARP replies from
+ testpmd contain invalid zero-filled MAC addresses.
+
+ Fixes: 31db4d38de72 ("net: change arp header struct declaration")
+
+
+* **app/testpmd: fix default flow control values.**
+
+ Fixes: 422a20a4e62d ("app/testpmd: fix uninitialized flow control variables")
+
+
+* **bonding: Fix crash when stopping inactive slave.**
+
+
+* **bonding: Fix device initialization error handling.**
+
+
+* **bonding: Fix initial link status of slave.**
+
+ On Fortville NIC, link status change interrupt callback was not executed
+ when slave in bonding was (re-)started.
+
+
+* **bonding: Fix socket id for LACP slave.**
+
+ Fixes: 46fb43683679 ("bond: add mode 4")
+
+
+* **bonding: Fix device initialization error handling.**
+
+
+* **cmdline: Fix small memory leak.**
+
+ A function in ``cmdline.c`` had a return that did not free the buf properly.
+
+
+* **config: Enable same drivers options for Linux and BSD.**
+
+ Enabled vector ixgbe and i40e bulk alloc for BSD as it is already done for
+ Linux.
+
+ Fixes: 304caba12643 ("config: fix bsd options")
+ Fixes: 0ff3324da2eb ("ixgbe: rework vector pmd following mbuf changes")
+
+
+* **devargs: Fix crash on failure.**
+
+ This problem occurred when passing an invalid PCI id to the blacklist API in
+ devargs.
+
+
+* **e1000/i40e: Fix descriptor done flag with odd address.**
+
+
+* **e1000/igb: fix ieee1588 timestamping initialization.**
+
+ Fixed issue with e1000 ieee1588 timestamp initialization. On initialization
+ the IEEE1588 functions read the system time to set their timestamp. However,
+ on some 1G NICs, for example, i350, system time is disabled by default and
+ the IEEE1588 timestamp was always 0.
+
+
+* **eal/Linux: Fix irq handling with igb_uio.**
+
+ Fixed an issue where the introduction of ``uio_pci_generic`` broke
+ interrupt handling with igb_uio.
+
+ Fixes: c112df6875a5 ("eal/Linux: toggle interrupt for uio_pci_generic")
+
+
+* **eal/bsd: Fix inappropriate header guards.**
+
+
+* **eal/bsd: Fix virtio on FreeBSD.**
+
+ Closing the ``/dev/io`` fd caused a SIGBUS in inb/outb instructions as the
+ process lost the IOPL privileges once the fd is closed.
+
+ Fixes: 8a312224bcde ("eal/bsd: fix fd leak")
+
+
+* **eal/linux: Fix comments on vfio MSI.**
+
+
+* **eal/linux: Fix numa node detection.**
+
+
+* **eal/linux: Fix socket value for undetermined numa node.**
+
+ Sets zero as the default value of pci device numa_node if the socket could
+ not be determined. This provides the same default value as FreeBSD which has
+ no NUMA support, and makes the return value of ``rte_eth_dev_socket_id()``
+ be consistent with the API description.
+
+
+* **eal/ppc: Fix cpu cycle count for little endian.**
+
+ On IBM POWER8 PPC64 little endian architecture, the definition of tsc union
+ will be different. This fix enables the right output from ``rte_rdtsc()``.
+
+
+* **ethdev: Fix check of threshold for TX freeing.**
+
+ Fixed issue where the parameter to ``tx_free_thresh`` was not consistent
+ between the drivers.
+
+
+* **ethdev: Fix crash if malloc of user callback fails.**
+
+ If ``rte_zmalloc()`` failed in ``rte_eth_dev_callback_register`` then the
+ NULL pointer would be dereferenced.
+
+
+* **ethdev: Fix illegal port access.**
+
+ To obtain a detachable flag, ``pci_drv`` is accessed in
+ ``rte_eth_dev_is_detachable()``. However ``pci_drv`` is only valid if port
+ is enabled. Fixed by checking ``rte_eth_dev_is_valid_port()`` first.
+
+
+* **ethdev: Make tables const.**
+
+
+* **ethdev: Rename and extend the mirror type.**
+
+
+* **examples/distributor: Fix debug macro.**
+
+ The macro to turn on additional debug output when the app was compiled with
+ ``-DDEBUG`` was broken.
+
+ Fixes: 07db4a975094 ("examples/distributor: new sample app")
+
+
+* **examples/kni: Fix crash on exit.**
+
+
+* **examples/vhost: Fix build with debug enabled.**
+
+ Fixes: 72ec8d77ac68 ("examples/vhost: rework duplicated code")
+
+
+* **fm10k: Fix RETA table initialization.**
+
+ The fm10k driver has 128 RETA entries in 32 registers, but it only
+ initialized the first 32 when doing multiple RX queue configurations. This
+ fix initializes all 128 entries.
+
+
+* **fm10k: Fix RX buffer size.**
+
+
+* **fm10k: Fix TX multi-segment frame.**
+
+
+* **fm10k: Fix TX queue cleaning after start error.**
+
+
+* **fm10k: Fix Tx queue cleaning after start error.**
+
+
+* **fm10k: Fix default mac/vlan in switch.**
+
+
+* **fm10k: Fix interrupt fault handling.**
+
+
+* **fm10k: Fix jumbo frame issue.**
+
+
+* **fm10k: Fix mac/vlan filtering.**
+
+
+* **fm10k: Fix maximum VF number.**
+
+
+* **fm10k: Fix maximum queue number for VF.**
+
+ Both PF and VF shared code in function ``fm10k_stats_get()``. The function
+ worked with PF, but had problems with VF since it has fewer queues than PF.
+
+ Fixes: a6061d9e7075 ("fm10k: register PF driver")
+
+
+* **fm10k: Fix queue disabling.**
+
+
+* **fm10k: Fix switch synchronization.**
+
+
+* **i40e/base: Fix error handling of NVM state update.**
+
+
+* **i40e/base: Fix hardware port number for pass-through.**
+
+
+* **i40e/base: Rework virtual address retrieval for lan queue.**
+
+
+* **i40e/base: Update LED blinking.**
+
+
+* **i40e/base: Workaround for PHY type with firmware < 4.4.**
+
+
+* **i40e: Disable setting of PHY configuration.**
+
+
+* **i40e: Fix SCTP flow director.**
+
+
+* **i40e: Fix check of descriptor done flag.**
+
+ Fixes: 4861cde46116 ("i40e: new poll mode driver")
+ Fixes: 05999aab4ca6 ("i40e: add or delete flow director")
+
+
+* **i40e: Fix condition to get VMDQ info.**
+
+
+* **i40e: Fix registers access from big endian CPU.**
+
+
+* **i40evf: Clear command when error occurs.**
+
+
+* **i40evf: Fix RSS with less RX queues than TX queues.**
+
+
+* **i40evf: Fix crash when setup TX queues.**
+
+
+* **i40evf: Fix jumbo frame support.**
+
+
+* **i40evf: Fix offload capability flags.**
+
+ Added checksum offload capability flags which have already been supported
+ for a long time.
+
+
+* **ivshmem: Fix crash in corner case.**
+
+ Fixed issues where depending on the configured segments it was possible to
+ hit a segmentation fault as a result of decrementing an unsigned index with
+ value 0.
+
+
+ Fixes: 40b966a211ab ("ivshmem: library changes for mmaping using ivshmem")
+
+
+* **ixgbe/base: Fix SFP probing.**
+
+
+* **ixgbe/base: Fix TX pending clearing.**
+
+
+* **ixgbe/base: Fix X550 CS4227 address.**
+
+
+* **ixgbe/base: Fix X550 PCIe master disabling.**
+
+
+* **ixgbe/base: Fix X550 check.**
+
+
+* **ixgbe/base: Fix X550 init early return.**
+
+
+* **ixgbe/base: Fix X550 link speed.**
+
+
+* **ixgbe/base: Fix X550em CS4227 speed mode.**
+
+
+* **ixgbe/base: Fix X550em SFP+ link stability.**
+
+
+* **ixgbe/base: Fix X550em UniPHY link configuration.**
+
+
+* **ixgbe/base: Fix X550em flow control for KR backplane.**
+
+
+* **ixgbe/base: Fix X550em flow control to be KR only.**
+
+
+* **ixgbe/base: Fix X550em link setup without SFP.**
+
+
+* **ixgbe/base: Fix X550em mux after MAC reset.**
+
+ Fixes: d2e72774e58c ("ixgbe/base: support X550")
+
+
+* **ixgbe/base: Fix bus type overwrite.**
+
+
+* **ixgbe/base: Fix init handling of X550em link down.**
+
+
+* **ixgbe/base: Fix lan id before first i2c access.**
+
+
+* **ixgbe/base: Fix mac type checks.**
+
+
+* **ixgbe/base: Fix tunneled UDP and TCP frames in flow director.**
+
+
+* **ixgbe: Check mbuf refcnt when clearing a ring.**
+
+ The function to clear the TX ring when a port was being closed, e.g. on exit
+ in testpmd, was not checking the mbuf refcnt before freeing it. Since the
+ function in the vector driver to clear the ring after TX does not setting
+ the pointer to NULL post-free, this caused crashes if mbuf debugging was
+ turned on.
+
+
+* **ixgbe: Fix RX with buffer address not word aligned.**
+
+ Niantic HW expects the Header Buffer Address in the RXD must be word
+ aligned.
+
+
+* **ixgbe: Fix RX with buffer address not word aligned.**
+
+
+* **ixgbe: Fix Rx queue reset.**
+
+ Fix to reset vector related RX queue fields to their initial values.
+
+ Fixes: c95584dc2b18 ("ixgbe: new vectorized functions for Rx/Tx")
+
+
+* **ixgbe: Fix TSO in IPv6.**
+
+ When TSO was used with IPv6, the generated frames were incorrect. The L4
+ frame was OK, but the length field of the IPv6 header was not populated
+ correctly.
+
+
+* **ixgbe: Fix X550 flow director check.**
+
+
+* **ixgbe: Fix check for split packets.**
+
+ The check for split packets to be reassembled in the vector ixgbe PMD was
+ incorrectly only checking the first 16 elements of the array instead of
+ all 32.
+
+ Fixes: cf4b4708a88a ("ixgbe: improve slow-path perf with vector scattered Rx")
+
+
+* **ixgbe: Fix data access on big endian cpu.**
+
+
+* **ixgbe: Fix flow director flexbytes offset.**
+
+
+ Fixes: d54a9888267c ("ixgbe: support flexpayload configuration of flow director")
+
+
+* **ixgbe: Fix number of segments with vector scattered Rx.**
+
+ Fixes: cf4b4708a88a (ixgbe: improve slow-path perf with vector scattered Rx)
+
+
+* **ixgbe: Fix offload config option name.**
+
+ The RX_OLFLAGS option was renamed from DISABLE to ENABLE in the driver code
+ and Linux config. It is now renamed also in the BSD config and
+ documentation.
+
+ Fixes: 359f106a69a9 ("ixgbe: prefer enabling olflags rather than not disabling")
+
+
+* **ixgbe: Fix release queue mbufs.**
+
+ The calculations of what mbufs were valid in the RX and TX queues were
+ incorrect when freeing the mbufs for the vector PMD. This led to crashes due
+ to invalid reference counts when mbuf debugging was turned on, and possibly
+ other more subtle problems (such as mbufs being freed when in use) in other
+ cases.
+
+
+ Fixes: c95584dc2b18 ("ixgbe: new vectorized functions for Rx/Tx")
+
+
+* **ixgbe: Move PMD specific fields out of base driver.**
+
+ Move ``rx_bulk_alloc_allowed`` and ``rx_vec_allowed`` from ``ixgbe_hw`` to
+ ``ixgbe_adapter``.
+
+ Fixes: 01fa1d6215fa ("ixgbe: unify Rx setup")
+
+
+* **ixgbe: Rename TX queue release function.**
+
+
+* **ixgbevf: Fix RX function selection.**
+
+ The logic to select ixgbe the VF RX function is different than the PF.
+
+
+* **ixgbevf: Fix link status for PF up/down events.**
+
+
+* **kni: Fix RX loop limit.**
+
+ The loop processing packets dequeued from rx_q was using the number of packets
+ requested, not how many it actually received.
+
+
+* **kni: Fix ioctl in containers, like Docker.**
+
+
+* **kni: Fix multicast ioctl handling.**
+
+
+* **log: Fix crash after log_history dump.**
+
+
+* **lpm: Fix big endian support.**
+
+
+* **lpm: Fix depth small entry add.**
+
+
+* **mbuf: Fix cloning with private mbuf data.**
+
+ Added a new ``priv_size`` field in the mbuf structure that should be initialized
+ at mbuf pool creation. This field contains the size of the application
+ private data in mbufs.
+
+ Introduced new static inline functions ``rte_mbuf_from_indirect()`` and
+ ``rte_mbuf_to_baddr()`` to replace the existing macros, which take the
+ private size into account when attaching and detaching mbufs.
+
+
+* **mbuf: Fix data room size calculation in pool init.**
+
+ Deduct the mbuf data room size from ``mempool->elt_size`` and ``priv_size``,
+ instead of using a hardcoded value that is not related to the real buffer
+ size.
+
+ To use ``rte_pktmbuf_pool_init()``, the user can either:
+
+ * Give a NULL parameter to rte_pktmbuf_pool_init(): in this case, the
+ private size is assumed to be 0, and the room size is ``mp->elt_size`` -
+ ``sizeof(struct rte_mbuf)``.
+ * Give the ``rte_pktmbuf_pool_private`` filled with appropriate
+ data_room_size and priv_size values.
+
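A sketch of the second option above; the private structure field names
(``mbuf_data_room_size``, ``mbuf_priv_size``) and the rte_mempool_create()
argument order are taken from the headers of this release and should be
verified there:

    #include <rte_mbuf.h>
    #include <rte_mempool.h>

    #define APP_MBUF_PRIV_SIZE  64    /* per-mbuf application area */
    #define APP_MBUF_DATA_SIZE  (2048 + RTE_PKTMBUF_HEADROOM)

    static struct rte_mempool *
    create_pool_with_priv(const char *name, unsigned n, int socket_id)
    {
        struct rte_pktmbuf_pool_private priv = {
            .mbuf_data_room_size = APP_MBUF_DATA_SIZE,
            .mbuf_priv_size = APP_MBUF_PRIV_SIZE,
        };

        /* Element size accounts for the mbuf header, the private area
         * and the data room, matching the description above. */
        return rte_mempool_create(name, n,
                sizeof(struct rte_mbuf) + APP_MBUF_PRIV_SIZE +
                    APP_MBUF_DATA_SIZE,
                256, sizeof(priv),
                rte_pktmbuf_pool_init, &priv,
                rte_pktmbuf_init, NULL,
                socket_id, 0);
    }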
+
+* **mbuf: Fix init when private size is not zero.**
+
+ Allow the user to use the default ``rte_pktmbuf_init()`` function even if
+ the mbuf private size is not 0.
+
+
+* **mempool: Add structure for object headers.**
+
+ Each object stored in mempools are prefixed by a header, allowing for
+ instance to retrieve the mempool pointer from the object. When debug is
+ enabled, a cookie is also added in this header that helps to detect
+ corruptions and double-frees.
+
+ Introduced a structure that materializes the content of this header,
+ and will simplify future patches adding things in this header.
+
+
+* **mempool: Fix pages computation to determine number of objects.**
+
+
+* **mempool: Fix returned value after counting objects.**
+
+ Fixes: 148f963fb532 ("xen: core library changes")
+
+
+* **mlx4: Avoid requesting TX completion events to improve performance.**
+
+ Instead of requesting a completion event for each TX burst, request it on a
+ fixed schedule once every MLX4_PMD_TX_PER_COMP_REQ (currently 64) packets to
+ improve performance.
+
+
+* **mlx4: Fix possible crash on scattered mbuf allocation failure.**
+
+ Fixes issue where failing to allocate a segment, ``mlx4_rx_burst_sp()``
+ could call ``rte_pktmbuf_free()`` on an incomplete scattered mbuf whose next
+ pointer in the last segment is not set.
+
+
+* **mlx4: Fix support for multiple vlan filters.**
+
+ This fixes the "Multiple RX VLAN filters can be configured, but only the
+ first one works" bug.
+
+
+* **pcap: Fix storage of name and type in queues.**
+
+ pcap_rx_queue/pcap_tx_queue should store its own copy of name/type values,
+ not the pointer to temporary allocated space.
+
+
+* **pci: Fix memory leaks and needless increment of map address.**
+
+
+* **pci: Fix uio mapping differences between linux and bsd.**
+
+
+* **port: Fix unaligned access to metadata.**
+
+ Fix RTE_MBUF_METADATA macros to allow for unaligned accesses to meta-data
+ fields.
+
+
+* **ring: Fix return of new port id on creation.**
+
+
+* **timer: Fix race condition.**
+
+ Eliminate problematic race condition in ``rte_timer_manage()`` that can lead
+ to corruption of per-lcore pending-lists (implemented as skip-lists).
+
+
+* **vfio: Fix overflow of BAR region offset and size.**
+
+ Fixes: 90a1633b2347 ("eal/Linux: allow to map BARs with MSI-X tables")
+
+
+* **vhost: Fix enqueue/dequeue to handle chained vring descriptors.**
+
+
+* **vhost: Fix race for connection fd.**
+
+
+* **vhost: Fix virtio freeze due to missed interrupt.**
+
+
+* **virtio: Fix crash if CQ is not negotiated.**
+
+ Fix NULL dereference if virtio control queue is not negotiated.
+
+
+* **virtio: Fix ring size negotiation.**
+
+ Negotiate the virtio ring size. The host may allow for very large rings but
+ the application may only want a smaller ring. Conversely, if the number of
+ descriptors requested exceeds the virtio host queue size, then just silently
+ use the smaller host size.
+
+ This fixes issues with virtio in non-QEMU environments. For example Google
+ Compute Engine allows up to 16K elements in ring.
+
+
+* **vmxnet3: Fix link state handling.**
Known Issues
------------
-* TODO.
+* When running the ``vmdq`` sample or ``vhost`` sample applications with the
+ Intel(R) XL710 (i40e) NIC, the configuration option
+ ``CONFIG_RTE_MAX_QUEUES_PER_PORT`` should be increased from 256 to 1024.
-API Changes
------------
-
-* TODO.
+* VM power manager may not work on systems with more than 64 cores.
API Changes
-----------
-* TODO.
+* The order that user supplied RX and TX callbacks are called in has been
+ changed to the order that they were added (fifo) in line with end-user
+ expectations. The previous calling order was the reverse of this (lifo) and
+ was counter intuitive for users. The actual API is unchanged.
ABI Changes
-----------
-* TODO.
+* The ``rte_hash`` structure has been changed to internal use only.
--
1.8.1.4
^ permalink raw reply [relevance 5%]
* Re: [dpdk-dev] [PATCH] doc: announce ABI change for librte_sched
@ 2015-08-15 8:58 4% ` Thomas Monjalon
0 siblings, 0 replies; 200+ results
From: Thomas Monjalon @ 2015-08-15 8:58 UTC (permalink / raw)
To: Stephen Hemminger; +Cc: dev
> > Signed-off-by: Stephen Hemminger <stephen@networkplumber.org>
> > +* librte_sched (rte_sched.h): The scheduler hierarchy structure
> > + (rte_sched_port_hierarchy) will change to allow for a larger number of subport
> > + entries. The number of available traffic_classes and queues may also change.
> > + The mbuf structure element for sched hierarchy will also change from a single
> > + 32 bit to a 64 bit structure.
> > +
> > +* librte_sched (rte_sched.h): The scheduler statistics structure will change
> > + to allow keeping track of RED actions.
> ACK
Applied, thanks
^ permalink raw reply [relevance 4%]
* Re: [dpdk-dev] [PATCH] doc: announce ABI change for librte_cfgfile
@ 2015-08-15 9:09 4% ` Thomas Monjalon
0 siblings, 0 replies; 200+ results
From: Thomas Monjalon @ 2015-08-15 9:09 UTC (permalink / raw)
To: Dumitrescu, Cristian; +Cc: dev
> > Signed-off-by: Cristian Dumitrescu <cristian.dumitrescu@intel.com>
>
> Acked-by: Jasvinder Singh <jasvinder.singh@intel.com>
Applied, thanks
^ permalink raw reply [relevance 4%]
* Re: [dpdk-dev] [PATCH v2] announce ABI change for librte_table
@ 2015-08-15 21:48 4% ` Thomas Monjalon
0 siblings, 0 replies; 200+ results
From: Thomas Monjalon @ 2015-08-15 21:48 UTC (permalink / raw)
To: Dumitrescu, Cristian; +Cc: dev
> > v2 changes:
> > -changed item on LPM table to add LPM IPv6 -removed item for ACL table
> > and replaced with item on table ops -added item for hash tables
> >
> > Signed-off-by: Cristian Dumitrescu <cristian.dumitrescu@intel.com>
> > ---
>
> Acked-by: Jasvinder Singh <jasvinder.singh@intel.com>
Applied, thanks
^ permalink raw reply [relevance 4%]
* Re: [dpdk-dev] [PATCH] doc: announce ABI change for librte_pipeline
@ 2015-08-15 21:49 4% ` Thomas Monjalon
0 siblings, 0 replies; 200+ results
From: Thomas Monjalon @ 2015-08-15 21:49 UTC (permalink / raw)
To: Dumitrescu, Cristian; +Cc: dev
> > Signed-off-by: Cristian Dumitrescu <cristian.dumitrescu@intel.com>
>
> Acked-by: Jasvinder Singh <jasvinder.singh@intel.com>
Applied, thanks
^ permalink raw reply [relevance 4%]
* [dpdk-dev] [PATCH 2/3] doc: announce removal of kni functions using port id
@ 2015-08-16 22:51 4% ` Thomas Monjalon
2015-08-16 22:51 4% ` [dpdk-dev] [PATCH 3/3] doc: announce ring PMD functions removal Thomas Monjalon
1 sibling, 0 replies; 200+ results
From: Thomas Monjalon @ 2015-08-16 22:51 UTC (permalink / raw)
To: Stephen Hemminger; +Cc: dev
These functions have been marked as deprecated for a long time:
fbf895d44cfe ("kni: identify device by name")
As suggested in this patch, it should be removed:
http://dpdk.org/ml/archives/dev/2015-June/019254.html
Suggested-by: Stephen Hemminger <stephen@networkplumber.org>
Signed-off-by: Thomas Monjalon <thomas.monjalon@6wind.com>
---
doc/guides/rel_notes/deprecation.rst | 3 +++
1 file changed, 3 insertions(+)
diff --git a/doc/guides/rel_notes/deprecation.rst b/doc/guides/rel_notes/deprecation.rst
index a9a12c6..2424c61 100644
--- a/doc/guides/rel_notes/deprecation.rst
+++ b/doc/guides/rel_notes/deprecation.rst
@@ -58,6 +58,9 @@ Deprecation Notices
the tunnel type, TNI/VNI, inner MAC and inner VLAN are monitored.
The release 2.2 will contain these changes without backwards compatibility.
+* librte_kni: Functions based on port id are deprecated for a long time and
+ should be removed (rte_kni_create, rte_kni_get_port_id and rte_kni_info_get).
+
* ABI changes are planned for struct virtio_net in order to support vhost-user
multiple queues feature.
It should be integrated in release 2.2 without backward compatibility.
--
2.4.2
^ permalink raw reply [relevance 4%]
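Editorial sketch of the name-based KNI API that replaces the deprecated port-id based calls (assuming rte_kni_init() has already been run; the interface name "vEth0", the mbuf size and the NULL ops are invented for the example):

#include <stdio.h>
#include <string.h>
#include <rte_kni.h>
#include <rte_mempool.h>

static struct rte_kni *
create_kni_by_name(struct rte_mempool *pktmbuf_pool)
{
	struct rte_kni_conf conf;

	memset(&conf, 0, sizeof(conf));
	snprintf(conf.name, RTE_KNI_NAMESIZE, "vEth0");
	conf.group_id = 0;
	conf.mbuf_size = 2048;

	/* NULL ops: no change_mtu/config_network_if callbacks registered */
	return rte_kni_alloc(pktmbuf_pool, &conf, NULL);
}

static struct rte_kni *
find_kni(void)
{
	/* look the interface up by name instead of by port id */
	return rte_kni_get("vEth0");
}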
* [dpdk-dev] [PATCH 3/3] doc: announce ring PMD functions removal
2015-08-16 22:51 4% ` [dpdk-dev] [PATCH 2/3] doc: announce removal of kni functions using port id Thomas Monjalon
@ 2015-08-16 22:51 4% ` Thomas Monjalon
1 sibling, 0 replies; 200+ results
From: Thomas Monjalon @ 2015-08-16 22:51 UTC (permalink / raw)
To: Stephen Hemminger; +Cc: dev
These functions have been marked as deprecated for a long time:
61934c0956d4 ("ring: convert to use of PMD_REGISTER_DRIVER and fix linking")
As suggested in this patch, it should be removed:
http://dpdk.org/ml/archives/dev/2015-June/019253.html
Suggested-by: Stephen Hemminger <stephen@networkplumber.org>
Signed-off-by: Thomas Monjalon <thomas.monjalon@6wind.com>
---
doc/guides/rel_notes/deprecation.rst | 3 +++
1 file changed, 3 insertions(+)
diff --git a/doc/guides/rel_notes/deprecation.rst b/doc/guides/rel_notes/deprecation.rst
index 2424c61..46a88ca 100644
--- a/doc/guides/rel_notes/deprecation.rst
+++ b/doc/guides/rel_notes/deprecation.rst
@@ -61,6 +61,9 @@ Deprecation Notices
* librte_kni: Functions based on port id are deprecated for a long time and
should be removed (rte_kni_create, rte_kni_get_port_id and rte_kni_info_get).
+* librte_pmd_ring: The deprecated functions rte_eth_ring_pair_create and
+ rte_eth_ring_pair_attach should be removed.
+
* ABI changes are planned for struct virtio_net in order to support vhost-user
multiple queues feature.
It should be integrated in release 2.2 without backward compatibility.
--
2.4.2
^ permalink raw reply [relevance 4%]
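Editorial sketch of the supported replacement: building an ethdev on top of rte_rings with rte_eth_from_rings() instead of the deprecated pair_create/pair_attach helpers. The ring name, size and the single-queue loopback layout are invented for the example:

#include <rte_ring.h>
#include <rte_eth_ring.h>

static int
create_ring_port(unsigned socket_id)
{
	struct rte_ring *rxtx[1];

	rxtx[0] = rte_ring_create("example_ring", 1024, socket_id,
				  RING_F_SP_ENQ | RING_F_SC_DEQ);
	if (rxtx[0] == NULL)
		return -1;

	/* same ring for both directions gives a simple loopback port;
	 * the return value is the new port id, or negative on error */
	return rte_eth_from_rings("ring_port0", rxtx, 1, rxtx, 1, socket_id);
}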
* [dpdk-dev] [PATCH 1/2] doc: announce removal of jhash2 function
@ 2015-08-17 14:39 4% Thomas Monjalon
2015-08-17 14:39 4% ` [dpdk-dev] [PATCH 2/2] doc: announce removal of LPM memory location Thomas Monjalon
0 siblings, 1 reply; 200+ results
From: Thomas Monjalon @ 2015-08-17 14:39 UTC (permalink / raw)
To: John McNamara; +Cc: dev
Fixes: 7530c9eea7d9 ("hash: rename a jhash function")
Signed-off-by: Thomas Monjalon <thomas.monjalon@6wind.com>
---
doc/guides/rel_notes/deprecation.rst | 2 ++
1 file changed, 2 insertions(+)
diff --git a/doc/guides/rel_notes/deprecation.rst b/doc/guides/rel_notes/deprecation.rst
index 46a88ca..bf0ac95 100644
--- a/doc/guides/rel_notes/deprecation.rst
+++ b/doc/guides/rel_notes/deprecation.rst
@@ -23,6 +23,8 @@ Deprecation Notices
* The Macros RTE_HASH_BUCKET_ENTRIES_MAX and RTE_HASH_KEY_LENGTH_MAX are
deprecated and will be removed with version 2.2.
+* The function rte_jhash2 is deprecated and should be removed.
+
* Significant ABI changes are planned for struct rte_mbuf, struct rte_kni_mbuf,
and several ``PKT_RX_`` flags will be removed, to support unified packet type
from release 2.1. Those changes may be enabled in the upcoming release 2.1
--
2.4.2
^ permalink raw reply [relevance 4%]
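Editorial note: the replacement introduced by the rename referenced above is rte_jhash_32b(), which hashes an array of 32-bit words and takes its length in words rather than in bytes. A minimal sketch (the function name hash_ipv6_addr is invented for the example):

#include <stdint.h>
#include <rte_jhash.h>

static uint32_t
hash_ipv6_addr(const uint32_t addr[4])
{
	/* 4 words == 16 bytes of key, seed 0 */
	return rte_jhash_32b(addr, 4, 0);
}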
* [dpdk-dev] [PATCH 2/2] doc: announce removal of LPM memory location
2015-08-17 14:39 4% [dpdk-dev] [PATCH 1/2] doc: announce removal of jhash2 function Thomas Monjalon
@ 2015-08-17 14:39 4% ` Thomas Monjalon
0 siblings, 0 replies; 200+ results
From: Thomas Monjalon @ 2015-08-17 14:39 UTC (permalink / raw)
To: John McNamara; +Cc: dev
This field has been deprecated for a long time and should be removed.
Signed-off-by: Thomas Monjalon <thomas.monjalon@6wind.com>
---
doc/guides/rel_notes/deprecation.rst | 3 +++
1 file changed, 3 insertions(+)
diff --git a/doc/guides/rel_notes/deprecation.rst b/doc/guides/rel_notes/deprecation.rst
index bf0ac95..da17880 100644
--- a/doc/guides/rel_notes/deprecation.rst
+++ b/doc/guides/rel_notes/deprecation.rst
@@ -25,6 +25,9 @@ Deprecation Notices
* The function rte_jhash2 is deprecated and should be removed.
+* The field mem_location of the rte_lpm structure is deprecated and should be
+ removed as well as the macros RTE_LPM_HEAP and RTE_LPM_MEMZONE.
+
* Significant ABI changes are planned for struct rte_mbuf, struct rte_kni_mbuf,
and several ``PKT_RX_`` flags will be removed, to support unified packet type
from release 2.1. Those changes may be enabled in the upcoming release 2.1
--
2.4.2
^ permalink raw reply [relevance 4%]
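Editorial sketch showing that current LPM usage never touches the mem_location field: the table is created on a socket and driven entirely through the API. The table name, rule count and addresses are invented for the example:

#include <rte_ip.h>
#include <rte_lpm.h>

static int
build_lpm(int socket_id)
{
	struct rte_lpm *lpm;
	uint8_t next_hop;

	lpm = rte_lpm_create("example_lpm", socket_id, 256, 0);
	if (lpm == NULL)
		return -1;

	/* 10.0.0.0/24 -> next hop 1 */
	if (rte_lpm_add(lpm, IPv4(10, 0, 0, 0), 24, 1) < 0)
		return -1;

	/* longest prefix match on an address inside the /24 */
	if (rte_lpm_lookup(lpm, IPv4(10, 0, 0, 5), &next_hop) == 0)
		return next_hop;
	return -1;
}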
* Re: [dpdk-dev] [PATCH v6 4/9] ethdev: remove HW specific stats in stats structs
@ 2015-08-17 14:53 0% ` Olivier MATZ
2015-08-19 12:53 0% ` Tahhan, Maryam
0 siblings, 1 reply; 200+ results
From: Olivier MATZ @ 2015-08-17 14:53 UTC (permalink / raw)
To: Maryam Tahhan, dev
Hi Maryam,
On 07/15/2015 03:11 PM, Maryam Tahhan wrote:
> Remove non generic stats in rte_stats_strings and mark the relevant
> fields in struct rte_eth_stats as deprecated.
>
> Signed-off-by: Maryam Tahhan <maryam.tahhan@intel.com>
> ---
> doc/guides/rel_notes/abi.rst | 12 ++++++++++++
> lib/librte_ether/rte_ethdev.c | 9 ---------
> lib/librte_ether/rte_ethdev.h | 30 ++++++++++++++++++++----------
> 3 files changed, 32 insertions(+), 19 deletions(-)
>
> diff --git a/doc/guides/rel_notes/abi.rst b/doc/guides/rel_notes/abi.rst
> index 931e785..d5bf625 100644
> --- a/doc/guides/rel_notes/abi.rst
> +++ b/doc/guides/rel_notes/abi.rst
> @@ -24,3 +24,15 @@ Deprecation Notices
>
> * The Macros RTE_HASH_BUCKET_ENTRIES_MAX and RTE_HASH_KEY_LENGTH_MAX are
> deprecated and will be removed with version 2.2.
> +
> +* The following fields have been deprecated in rte_eth_stats:
> + * uint64_t imissed
> + * uint64_t ibadcrc
> + * uint64_t ibadlen
> + * uint64_t imcasts
> + * uint64_t fdirmatch
> + * uint64_t fdirmiss
> + * uint64_t tx_pause_xon
> + * uint64_t rx_pause_xon
> + * uint64_t tx_pause_xoff
> + * uint64_t rx_pause_xoff
Looking again at this patch, I'm wondering if "imissed" should
be kept instead of being deprecated. I think it could be useful to
differentiate ierrors from imissed, and it's not a hw-specific
statistic. What do you think?
One more comment: it seems these fields are marked as deprecated but
they are still used on other drivers (e1000, i40e, ...).
Regards,
Olivier
> diff --git a/lib/librte_ether/rte_ethdev.c b/lib/librte_ether/rte_ethdev.c
> index 7689328..c8f0e9a 100755
> --- a/lib/librte_ether/rte_ethdev.c
> +++ b/lib/librte_ether/rte_ethdev.c
> @@ -142,17 +142,8 @@ static const struct rte_eth_xstats_name_off rte_stats_strings[] = {
> {"rx_bytes", offsetof(struct rte_eth_stats, ibytes)},
> {"tx_bytes", offsetof(struct rte_eth_stats, obytes)},
> {"tx_errors", offsetof(struct rte_eth_stats, oerrors)},
> - {"rx_missed_errors", offsetof(struct rte_eth_stats, imissed)},
> - {"rx_crc_errors", offsetof(struct rte_eth_stats, ibadcrc)},
> - {"rx_bad_length_errors", offsetof(struct rte_eth_stats, ibadlen)},
> {"rx_errors", offsetof(struct rte_eth_stats, ierrors)},
> {"alloc_rx_buff_failed", offsetof(struct rte_eth_stats, rx_nombuf)},
> - {"fdir_match", offsetof(struct rte_eth_stats, fdirmatch)},
> - {"fdir_miss", offsetof(struct rte_eth_stats, fdirmiss)},
> - {"tx_flow_control_xon", offsetof(struct rte_eth_stats, tx_pause_xon)},
> - {"rx_flow_control_xon", offsetof(struct rte_eth_stats, rx_pause_xon)},
> - {"tx_flow_control_xoff", offsetof(struct rte_eth_stats, tx_pause_xoff)},
> - {"rx_flow_control_xoff", offsetof(struct rte_eth_stats, rx_pause_xoff)},
> };
> #define RTE_NB_STATS (sizeof(rte_stats_strings) / sizeof(rte_stats_strings[0]))
>
> diff --git a/lib/librte_ether/rte_ethdev.h b/lib/librte_ether/rte_ethdev.h
> index d76bbb3..a862027 100644
> --- a/lib/librte_ether/rte_ethdev.h
> +++ b/lib/librte_ether/rte_ethdev.h
> @@ -193,19 +193,29 @@ struct rte_eth_stats {
> uint64_t opackets; /**< Total number of successfully transmitted packets.*/
> uint64_t ibytes; /**< Total number of successfully received bytes. */
> uint64_t obytes; /**< Total number of successfully transmitted bytes. */
> - uint64_t imissed; /**< Total of RX missed packets (e.g full FIFO). */
> - uint64_t ibadcrc; /**< Total of RX packets with CRC error. */
> - uint64_t ibadlen; /**< Total of RX packets with bad length. */
> + /**< Deprecated; Total of RX missed packets (e.g full FIFO). */
> + uint64_t imissed;
> + /**< Deprecated; Total of RX packets with CRC error. */
> + uint64_t ibadcrc;
> + /**< Deprecated; Total of RX packets with bad length. */
> + uint64_t ibadlen;
> uint64_t ierrors; /**< Total number of erroneous received packets. */
> uint64_t oerrors; /**< Total number of failed transmitted packets. */
> - uint64_t imcasts; /**< Total number of multicast received packets. */
> + uint64_t imcasts;
> + /**< Deprecated; Total number of multicast received packets. */
> uint64_t rx_nombuf; /**< Total number of RX mbuf allocation failures. */
> - uint64_t fdirmatch; /**< Total number of RX packets matching a filter. */
> - uint64_t fdirmiss; /**< Total number of RX packets not matching any filter. */
> - uint64_t tx_pause_xon; /**< Total nb. of XON pause frame sent. */
> - uint64_t rx_pause_xon; /**< Total nb. of XON pause frame received. */
> - uint64_t tx_pause_xoff; /**< Total nb. of XOFF pause frame sent. */
> - uint64_t rx_pause_xoff; /**< Total nb. of XOFF pause frame received. */
> + uint64_t fdirmatch;
> + /**< Deprecated; Total number of RX packets matching a filter. */
> + uint64_t fdirmiss;
> + /**< Deprecated; Total number of RX packets not matching any filter. */
> + uint64_t tx_pause_xon;
> + /**< Deprecated; Total nb. of XON pause frame sent. */
> + uint64_t rx_pause_xon;
> + /**< Deprecated; Total nb. of XON pause frame received. */
> + uint64_t tx_pause_xoff;
> + /**< Deprecated; Total nb. of XOFF pause frame sent. */
> + uint64_t rx_pause_xoff;
> + /**< Deprecated; Total nb. of XOFF pause frame received. */
> uint64_t q_ipackets[RTE_ETHDEV_QUEUE_STAT_CNTRS];
> /**< Total number of queue RX packets. */
> uint64_t q_opackets[RTE_ETHDEV_QUEUE_STAT_CNTRS];
>
^ permalink raw reply [relevance 0%]
* Re: [dpdk-dev] [PATCH v6 4/9] ethdev: remove HW specific stats in stats structs
2015-08-17 14:53 0% ` Olivier MATZ
@ 2015-08-19 12:53 0% ` Tahhan, Maryam
0 siblings, 0 replies; 200+ results
From: Tahhan, Maryam @ 2015-08-19 12:53 UTC (permalink / raw)
To: Olivier MATZ, dev
> From: Olivier MATZ [mailto:olivier.matz@6wind.com]
> Sent: Monday, August 17, 2015 3:54 PM
> To: Tahhan, Maryam; dev@dpdk.org
> Subject: Re: [dpdk-dev] [PATCH v6 4/9] ethdev: remove HW specific stats in
> stats structs
>
> Hi Maryam,
>
> On 07/15/2015 03:11 PM, Maryam Tahhan wrote:
> > Remove non generic stats in rte_stats_strings and mark the relevant
> > fields in struct rte_eth_stats as deprecated.
> >
> > Signed-off-by: Maryam Tahhan <maryam.tahhan@intel.com>
> > ---
> > doc/guides/rel_notes/abi.rst | 12 ++++++++++++
> > lib/librte_ether/rte_ethdev.c | 9 ---------
> > lib/librte_ether/rte_ethdev.h | 30 ++++++++++++++++++++----------
> > 3 files changed, 32 insertions(+), 19 deletions(-)
> >
> > diff --git a/doc/guides/rel_notes/abi.rst
> > b/doc/guides/rel_notes/abi.rst index 931e785..d5bf625 100644
> > --- a/doc/guides/rel_notes/abi.rst
> > +++ b/doc/guides/rel_notes/abi.rst
> > @@ -24,3 +24,15 @@ Deprecation Notices
> >
> > * The Macros RTE_HASH_BUCKET_ENTRIES_MAX and
> RTE_HASH_KEY_LENGTH_MAX are
> > deprecated and will be removed with version 2.2.
> > +
> > +* The following fields have been deprecated in rte_eth_stats:
> > + * uint64_t imissed
> > + * uint64_t ibadcrc
> > + * uint64_t ibadlen
> > + * uint64_t imcasts
> > + * uint64_t fdirmatch
> > + * uint64_t fdirmiss
> > + * uint64_t tx_pause_xon
> > + * uint64_t rx_pause_xon
> > + * uint64_t tx_pause_xoff
> > + * uint64_t rx_pause_xoff
>
> Looking again at this patch, I'm wondering if "imissed" should be kept instead
> of being deprecated. I think it could be useful to differentiate ierrors from
> imissed, and it's not a hw-specific statistic. What do you think?
>
> One more comment: it seems these fields are marked as deprecated but they
> are still used on other drivers (e1000, i40e, ...).
>
> Regards,
> Olivier
>
Hi Olivier
I can remove the deprecated status from imissed to keep the distinction between errors and missed packets.
igb and i40e will be updated soon to reflect this. I marked the fields as deprecated to deter their use in the future; existing uses will need to be resolved.
Regards
Maryam
<snip>
^ permalink raw reply [relevance 0%]
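Editorial sketch of the direction discussed in this thread: keep reading the generic counters (including imissed, distinct from ierrors) from rte_eth_stats, and fetch driver-specific counters by name through the extended stats API instead of the deprecated fields. The array size of 64 entries is an arbitrary choice for the example:

#include <inttypes.h>
#include <stdio.h>
#include <rte_ethdev.h>

static void
print_port_stats(uint8_t port_id)
{
	struct rte_eth_stats stats;
	struct rte_eth_xstats xstats[64];
	int i, n;

	rte_eth_stats_get(port_id, &stats);
	printf("rx=%" PRIu64 " missed=%" PRIu64 " errors=%" PRIu64 "\n",
	       stats.ipackets, stats.imissed, stats.ierrors);

	/* HW-specific counters (CRC errors, pause frames, ...) show up
	 * here by name instead of as dedicated rte_eth_stats fields */
	n = rte_eth_xstats_get(port_id, xstats, 64);
	if (n > 64)
		n = 64;
	for (i = 0; i < n; i++)
		printf("%s: %" PRIu64 "\n", xstats[i].name, xstats[i].value);
}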
* [dpdk-dev] [PATCH v2] Move common functions in eal_thread.c
@ 2015-08-19 20:46 2% Ravi Kerur
0 siblings, 0 replies; 200+ results
From: Ravi Kerur @ 2015-08-19 20:46 UTC (permalink / raw)
To: dev
v2:
> Remove the now-unneeded header file eal_private.h from the FreeBSD
eal_thread.c after the code movement.
v1:
Changes include
> Move the functions common to the linuxapp and bsdapp
eal_thread.c into the common/eal_common_thread.c file.
> Rearrange eal_common_thread.c compilation in Makefile
for ABI.
Compiled successfully for the following targets
> x86_64-native-linuxapp-clang
> x86_64-native-linuxapp-gcc
> x86_x32-native-linuxapp-gcc
> i686-native-linuxapp-gcc
> x86_64-native-bsdapp-clang
> x86_64-native-bsdapp-gcc
Tested on
> Ubuntu 14.04, testpmd functionality
> FreeBSD 10.1, testpmd functionality
Signed-off-by: Ravi Kerur <rkerur@gmail.com>
---
lib/librte_eal/bsdapp/eal/Makefile | 3 +-
lib/librte_eal/bsdapp/eal/eal_thread.c | 153 ------------------------------
lib/librte_eal/common/eal_common_thread.c | 147 +++++++++++++++++++++++++++-
lib/librte_eal/linuxapp/eal/Makefile | 3 +-
lib/librte_eal/linuxapp/eal/eal_thread.c | 153 ------------------------------
5 files changed, 150 insertions(+), 309 deletions(-)
diff --git a/lib/librte_eal/bsdapp/eal/Makefile b/lib/librte_eal/bsdapp/eal/Makefile
index a969435..93d76bb 100644
--- a/lib/librte_eal/bsdapp/eal/Makefile
+++ b/lib/librte_eal/bsdapp/eal/Makefile
@@ -51,6 +51,7 @@ SRCS-$(CONFIG_RTE_LIBRTE_EAL_BSDAPP) := eal.c
SRCS-$(CONFIG_RTE_LIBRTE_EAL_BSDAPP) += eal_memory.c
SRCS-$(CONFIG_RTE_LIBRTE_EAL_BSDAPP) += eal_hugepage_info.c
SRCS-$(CONFIG_RTE_LIBRTE_EAL_BSDAPP) += eal_thread.c
+SRCS-$(CONFIG_RTE_LIBRTE_EAL_BSDAPP) += eal_common_thread.c
SRCS-$(CONFIG_RTE_LIBRTE_EAL_BSDAPP) += eal_log.c
SRCS-$(CONFIG_RTE_LIBRTE_EAL_BSDAPP) += eal_pci.c
SRCS-$(CONFIG_RTE_LIBRTE_EAL_BSDAPP) += eal_debug.c
@@ -76,7 +77,6 @@ SRCS-$(CONFIG_RTE_LIBRTE_EAL_BSDAPP) += eal_common_hexdump.c
SRCS-$(CONFIG_RTE_LIBRTE_EAL_BSDAPP) += eal_common_devargs.c
SRCS-$(CONFIG_RTE_LIBRTE_EAL_BSDAPP) += eal_common_dev.c
SRCS-$(CONFIG_RTE_LIBRTE_EAL_BSDAPP) += eal_common_options.c
-SRCS-$(CONFIG_RTE_LIBRTE_EAL_BSDAPP) += eal_common_thread.c
SRCS-$(CONFIG_RTE_LIBRTE_EAL_BSDAPP) += rte_malloc.c
SRCS-$(CONFIG_RTE_LIBRTE_EAL_BSDAPP) += malloc_elem.c
SRCS-$(CONFIG_RTE_LIBRTE_EAL_BSDAPP) += malloc_heap.c
@@ -90,6 +90,7 @@ CFLAGS_eal_common_log.o := -D_GNU_SOURCE
# http://gcc.gnu.org/bugzilla/show_bug.cgi?id=12603
ifeq ($(CONFIG_RTE_TOOLCHAIN_GCC),y)
CFLAGS_eal_thread.o += -Wno-return-type
+CFLAGS_eal_common_thread.o += -Wno-return-type
CFLAGS_eal_hpet.o += -Wno-return-type
endif
diff --git a/lib/librte_eal/bsdapp/eal/eal_thread.c b/lib/librte_eal/bsdapp/eal/eal_thread.c
index 9a03437..4036d21 100644
--- a/lib/librte_eal/bsdapp/eal/eal_thread.c
+++ b/lib/librte_eal/bsdapp/eal/eal_thread.c
@@ -35,163 +35,10 @@
#include <stdio.h>
#include <stdlib.h>
#include <stdint.h>
-#include <unistd.h>
-#include <sched.h>
-#include <pthread_np.h>
-#include <sys/queue.h>
#include <sys/thr.h>
-#include <rte_debug.h>
-#include <rte_atomic.h>
-#include <rte_launch.h>
-#include <rte_log.h>
-#include <rte_memory.h>
-#include <rte_memzone.h>
-#include <rte_per_lcore.h>
-#include <rte_eal.h>
-#include <rte_per_lcore.h>
-#include <rte_lcore.h>
-
-#include "eal_private.h"
#include "eal_thread.h"
-RTE_DEFINE_PER_LCORE(unsigned, _lcore_id) = LCORE_ID_ANY;
-RTE_DEFINE_PER_LCORE(unsigned, _socket_id) = (unsigned)SOCKET_ID_ANY;
-RTE_DEFINE_PER_LCORE(rte_cpuset_t, _cpuset);
-
-/*
- * Send a message to a slave lcore identified by slave_id to call a
- * function f with argument arg. Once the execution is done, the
- * remote lcore switch in FINISHED state.
- */
-int
-rte_eal_remote_launch(int (*f)(void *), void *arg, unsigned slave_id)
-{
- int n;
- char c = 0;
- int m2s = lcore_config[slave_id].pipe_master2slave[1];
- int s2m = lcore_config[slave_id].pipe_slave2master[0];
-
- if (lcore_config[slave_id].state != WAIT)
- return -EBUSY;
-
- lcore_config[slave_id].f = f;
- lcore_config[slave_id].arg = arg;
-
- /* send message */
- n = 0;
- while (n == 0 || (n < 0 && errno == EINTR))
- n = write(m2s, &c, 1);
- if (n < 0)
- rte_panic("cannot write on configuration pipe\n");
-
- /* wait ack */
- do {
- n = read(s2m, &c, 1);
- } while (n < 0 && errno == EINTR);
-
- if (n <= 0)
- rte_panic("cannot read on configuration pipe\n");
-
- return 0;
-}
-
-/* set affinity for current thread */
-static int
-eal_thread_set_affinity(void)
-{
- unsigned lcore_id = rte_lcore_id();
-
- /* acquire system unique id */
- rte_gettid();
-
- /* update EAL thread core affinity */
- return rte_thread_set_affinity(&lcore_config[lcore_id].cpuset);
-}
-
-void eal_thread_init_master(unsigned lcore_id)
-{
- /* set the lcore ID in per-lcore memory area */
- RTE_PER_LCORE(_lcore_id) = lcore_id;
-
- /* set CPU affinity */
- if (eal_thread_set_affinity() < 0)
- rte_panic("cannot set affinity\n");
-}
-
-/* main loop of threads */
-__attribute__((noreturn)) void *
-eal_thread_loop(__attribute__((unused)) void *arg)
-{
- char c;
- int n, ret;
- unsigned lcore_id;
- pthread_t thread_id;
- int m2s, s2m;
- char cpuset[RTE_CPU_AFFINITY_STR_LEN];
-
- thread_id = pthread_self();
-
- /* retrieve our lcore_id from the configuration structure */
- RTE_LCORE_FOREACH_SLAVE(lcore_id) {
- if (thread_id == lcore_config[lcore_id].thread_id)
- break;
- }
- if (lcore_id == RTE_MAX_LCORE)
- rte_panic("cannot retrieve lcore id\n");
-
- m2s = lcore_config[lcore_id].pipe_master2slave[0];
- s2m = lcore_config[lcore_id].pipe_slave2master[1];
-
- /* set the lcore ID in per-lcore memory area */
- RTE_PER_LCORE(_lcore_id) = lcore_id;
-
- /* set CPU affinity */
- if (eal_thread_set_affinity() < 0)
- rte_panic("cannot set affinity\n");
-
- ret = eal_thread_dump_affinity(cpuset, RTE_CPU_AFFINITY_STR_LEN);
-
- RTE_LOG(DEBUG, EAL, "lcore %u is ready (tid=%p;cpuset=[%s%s])\n",
- lcore_id, thread_id, cpuset, ret == 0 ? "" : "...");
-
- /* read on our pipe to get commands */
- while (1) {
- void *fct_arg;
-
- /* wait command */
- do {
- n = read(m2s, &c, 1);
- } while (n < 0 && errno == EINTR);
-
- if (n <= 0)
- rte_panic("cannot read on configuration pipe\n");
-
- lcore_config[lcore_id].state = RUNNING;
-
- /* send ack */
- n = 0;
- while (n == 0 || (n < 0 && errno == EINTR))
- n = write(s2m, &c, 1);
- if (n < 0)
- rte_panic("cannot write on configuration pipe\n");
-
- if (lcore_config[lcore_id].f == NULL)
- rte_panic("NULL function pointer\n");
-
- /* call the function and store the return value */
- fct_arg = lcore_config[lcore_id].arg;
- ret = lcore_config[lcore_id].f(fct_arg);
- lcore_config[lcore_id].ret = ret;
- rte_wmb();
- lcore_config[lcore_id].state = FINISHED;
- }
-
- /* never reached */
- /* pthread_exit(NULL); */
- /* return NULL; */
-}
-
/* require calling thread tid by gettid() */
int rte_sys_gettid(void)
{
diff --git a/lib/librte_eal/common/eal_common_thread.c b/lib/librte_eal/common/eal_common_thread.c
index 2405e93..5e55401 100644
--- a/lib/librte_eal/common/eal_common_thread.c
+++ b/lib/librte_eal/common/eal_common_thread.c
@@ -31,11 +31,12 @@
* OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
*/
+#include <errno.h>
#include <stdio.h>
#include <stdlib.h>
#include <stdint.h>
#include <unistd.h>
-#include <pthread.h>
+#include <sys/queue.h>
#include <sched.h>
#include <assert.h>
#include <string.h>
@@ -43,10 +44,21 @@
#include <rte_lcore.h>
#include <rte_memory.h>
#include <rte_log.h>
+#include <rte_debug.h>
+#include <rte_atomic.h>
+#include <rte_launch.h>
+#include <rte_memzone.h>
+#include <rte_per_lcore.h>
+#include <rte_eal.h>
+#include <rte_per_lcore.h>
+#include "eal_private.h"
#include "eal_thread.h"
RTE_DECLARE_PER_LCORE(unsigned , _socket_id);
+RTE_DEFINE_PER_LCORE(unsigned, _lcore_id) = LCORE_ID_ANY;
+RTE_DEFINE_PER_LCORE(unsigned, _socket_id) = (unsigned)SOCKET_ID_ANY;
+RTE_DEFINE_PER_LCORE(rte_cpuset_t, _cpuset);
unsigned rte_socket_id(void)
{
@@ -155,3 +167,136 @@ exit:
return ret;
}
+
+/*
+ * Send a message to a slave lcore identified by slave_id to call a
+ * function f with argument arg. Once the execution is done, the
+ * remote lcore switch in FINISHED state.
+ */
+int
+rte_eal_remote_launch(int (*f)(void *), void *arg, unsigned slave_id)
+{
+ int n;
+ char c = 0;
+ int m2s = lcore_config[slave_id].pipe_master2slave[1];
+ int s2m = lcore_config[slave_id].pipe_slave2master[0];
+
+ if (lcore_config[slave_id].state != WAIT)
+ return -EBUSY;
+
+ lcore_config[slave_id].f = f;
+ lcore_config[slave_id].arg = arg;
+
+ /* send message */
+ n = 0;
+ while (n == 0 || (n < 0 && errno == EINTR))
+ n = write(m2s, &c, 1);
+ if (n < 0)
+ rte_panic("cannot write on configuration pipe\n");
+
+ /* wait ack */
+ do {
+ n = read(s2m, &c, 1);
+ } while (n < 0 && errno == EINTR);
+
+ if (n <= 0)
+ rte_panic("cannot read on configuration pipe\n");
+
+ return 0;
+}
+
+/* set affinity for current EAL thread */
+static int
+eal_thread_set_affinity(void)
+{
+ unsigned lcore_id = rte_lcore_id();
+
+ /* acquire system unique id */
+ rte_gettid();
+
+ /* update EAL thread core affinity */
+ return rte_thread_set_affinity(&lcore_config[lcore_id].cpuset);
+}
+
+void eal_thread_init_master(unsigned lcore_id)
+{
+ /* set the lcore ID in per-lcore memory area */
+ RTE_PER_LCORE(_lcore_id) = lcore_id;
+
+ /* set CPU affinity */
+ if (eal_thread_set_affinity() < 0)
+ rte_panic("cannot set affinity\n");
+}
+
+/* main loop of threads */
+__attribute__((noreturn)) void *
+eal_thread_loop(__attribute__((unused)) void *arg)
+{
+ char c;
+ int n, ret;
+ unsigned lcore_id;
+ pthread_t thread_id;
+ int m2s, s2m;
+ char cpuset[RTE_CPU_AFFINITY_STR_LEN];
+
+ thread_id = pthread_self();
+
+ /* retrieve our lcore_id from the configuration structure */
+ RTE_LCORE_FOREACH_SLAVE(lcore_id) {
+ if (thread_id == lcore_config[lcore_id].thread_id)
+ break;
+ }
+ if (lcore_id == RTE_MAX_LCORE)
+ rte_panic("cannot retrieve lcore id\n");
+
+ m2s = lcore_config[lcore_id].pipe_master2slave[0];
+ s2m = lcore_config[lcore_id].pipe_slave2master[1];
+
+ /* set the lcore ID in per-lcore memory area */
+ RTE_PER_LCORE(_lcore_id) = lcore_id;
+
+ /* set CPU affinity */
+ if (eal_thread_set_affinity() < 0)
+ rte_panic("cannot set affinity\n");
+
+ ret = eal_thread_dump_affinity(cpuset, RTE_CPU_AFFINITY_STR_LEN);
+
+ RTE_LOG(DEBUG, EAL, "lcore %u is ready (thread=%d;cpuset=[%s%s])\n",
+ lcore_id, rte_gettid(), cpuset, ret == 0 ? "" : "...");
+
+ /* read on our pipe to get commands */
+ while (1) {
+ void *fct_arg;
+
+ /* wait command */
+ do {
+ n = read(m2s, &c, 1);
+ } while (n < 0 && errno == EINTR);
+
+ if (n <= 0)
+ rte_panic("cannot read on configuration pipe\n");
+
+ lcore_config[lcore_id].state = RUNNING;
+
+ /* send ack */
+ n = 0;
+ while (n == 0 || (n < 0 && errno == EINTR))
+ n = write(s2m, &c, 1);
+ if (n < 0)
+ rte_panic("cannot write on configuration pipe\n");
+
+ if (lcore_config[lcore_id].f == NULL)
+ rte_panic("NULL function pointer\n");
+
+ /* call the function and store the return value */
+ fct_arg = lcore_config[lcore_id].arg;
+ ret = lcore_config[lcore_id].f(fct_arg);
+ lcore_config[lcore_id].ret = ret;
+ rte_wmb();
+ lcore_config[lcore_id].state = FINISHED;
+ }
+
+ /* never reached */
+ /* pthread_exit(NULL); */
+ /* return NULL; */
+}
diff --git a/lib/librte_eal/linuxapp/eal/Makefile b/lib/librte_eal/linuxapp/eal/Makefile
index 376d275..79beb90 100644
--- a/lib/librte_eal/linuxapp/eal/Makefile
+++ b/lib/librte_eal/linuxapp/eal/Makefile
@@ -55,6 +55,7 @@ ifeq ($(CONFIG_RTE_LIBRTE_XEN_DOM0),y)
SRCS-$(CONFIG_RTE_LIBRTE_EAL_LINUXAPP) += eal_xen_memory.c
endif
SRCS-$(CONFIG_RTE_LIBRTE_EAL_LINUXAPP) += eal_thread.c
+SRCS-$(CONFIG_RTE_LIBRTE_EAL_LINUXAPP) += eal_common_thread.c
SRCS-$(CONFIG_RTE_LIBRTE_EAL_LINUXAPP) += eal_log.c
SRCS-$(CONFIG_RTE_LIBRTE_EAL_LINUXAPP) += eal_pci.c
SRCS-$(CONFIG_RTE_LIBRTE_EAL_LINUXAPP) += eal_pci_uio.c
@@ -86,7 +87,6 @@ SRCS-$(CONFIG_RTE_LIBRTE_EAL_LINUXAPP) += eal_common_hexdump.c
SRCS-$(CONFIG_RTE_LIBRTE_EAL_LINUXAPP) += eal_common_devargs.c
SRCS-$(CONFIG_RTE_LIBRTE_EAL_LINUXAPP) += eal_common_dev.c
SRCS-$(CONFIG_RTE_LIBRTE_EAL_LINUXAPP) += eal_common_options.c
-SRCS-$(CONFIG_RTE_LIBRTE_EAL_LINUXAPP) += eal_common_thread.c
SRCS-$(CONFIG_RTE_LIBRTE_EAL_LINUXAPP) += rte_malloc.c
SRCS-$(CONFIG_RTE_LIBRTE_EAL_LINUXAPP) += malloc_elem.c
SRCS-$(CONFIG_RTE_LIBRTE_EAL_LINUXAPP) += malloc_heap.c
@@ -110,6 +110,7 @@ CFLAGS_eal_common_lcore.o := -D_GNU_SOURCE
# http://gcc.gnu.org/bugzilla/show_bug.cgi?id=12603
ifeq ($(CONFIG_RTE_TOOLCHAIN_GCC),y)
CFLAGS_eal_thread.o += -Wno-return-type
+CFLAGS_eal_common_thread.o += -Wno-return-type
endif
INC := rte_interrupts.h rte_kni_common.h rte_dom0_common.h
diff --git a/lib/librte_eal/linuxapp/eal/eal_thread.c b/lib/librte_eal/linuxapp/eal/eal_thread.c
index 18bd8e0..413ab0e 100644
--- a/lib/librte_eal/linuxapp/eal/eal_thread.c
+++ b/lib/librte_eal/linuxapp/eal/eal_thread.c
@@ -34,164 +34,11 @@
#include <errno.h>
#include <stdio.h>
#include <stdlib.h>
-#include <stdint.h>
#include <unistd.h>
-#include <pthread.h>
-#include <sched.h>
-#include <sys/queue.h>
#include <sys/syscall.h>
-#include <rte_debug.h>
-#include <rte_atomic.h>
-#include <rte_launch.h>
-#include <rte_log.h>
-#include <rte_memory.h>
-#include <rte_memzone.h>
-#include <rte_per_lcore.h>
-#include <rte_eal.h>
-#include <rte_per_lcore.h>
-#include <rte_lcore.h>
-
-#include "eal_private.h"
#include "eal_thread.h"
-RTE_DEFINE_PER_LCORE(unsigned, _lcore_id) = LCORE_ID_ANY;
-RTE_DEFINE_PER_LCORE(unsigned, _socket_id) = (unsigned)SOCKET_ID_ANY;
-RTE_DEFINE_PER_LCORE(rte_cpuset_t, _cpuset);
-
-/*
- * Send a message to a slave lcore identified by slave_id to call a
- * function f with argument arg. Once the execution is done, the
- * remote lcore switch in FINISHED state.
- */
-int
-rte_eal_remote_launch(int (*f)(void *), void *arg, unsigned slave_id)
-{
- int n;
- char c = 0;
- int m2s = lcore_config[slave_id].pipe_master2slave[1];
- int s2m = lcore_config[slave_id].pipe_slave2master[0];
-
- if (lcore_config[slave_id].state != WAIT)
- return -EBUSY;
-
- lcore_config[slave_id].f = f;
- lcore_config[slave_id].arg = arg;
-
- /* send message */
- n = 0;
- while (n == 0 || (n < 0 && errno == EINTR))
- n = write(m2s, &c, 1);
- if (n < 0)
- rte_panic("cannot write on configuration pipe\n");
-
- /* wait ack */
- do {
- n = read(s2m, &c, 1);
- } while (n < 0 && errno == EINTR);
-
- if (n <= 0)
- rte_panic("cannot read on configuration pipe\n");
-
- return 0;
-}
-
-/* set affinity for current EAL thread */
-static int
-eal_thread_set_affinity(void)
-{
- unsigned lcore_id = rte_lcore_id();
-
- /* acquire system unique id */
- rte_gettid();
-
- /* update EAL thread core affinity */
- return rte_thread_set_affinity(&lcore_config[lcore_id].cpuset);
-}
-
-void eal_thread_init_master(unsigned lcore_id)
-{
- /* set the lcore ID in per-lcore memory area */
- RTE_PER_LCORE(_lcore_id) = lcore_id;
-
- /* set CPU affinity */
- if (eal_thread_set_affinity() < 0)
- rte_panic("cannot set affinity\n");
-}
-
-/* main loop of threads */
-__attribute__((noreturn)) void *
-eal_thread_loop(__attribute__((unused)) void *arg)
-{
- char c;
- int n, ret;
- unsigned lcore_id;
- pthread_t thread_id;
- int m2s, s2m;
- char cpuset[RTE_CPU_AFFINITY_STR_LEN];
-
- thread_id = pthread_self();
-
- /* retrieve our lcore_id from the configuration structure */
- RTE_LCORE_FOREACH_SLAVE(lcore_id) {
- if (thread_id == lcore_config[lcore_id].thread_id)
- break;
- }
- if (lcore_id == RTE_MAX_LCORE)
- rte_panic("cannot retrieve lcore id\n");
-
- m2s = lcore_config[lcore_id].pipe_master2slave[0];
- s2m = lcore_config[lcore_id].pipe_slave2master[1];
-
- /* set the lcore ID in per-lcore memory area */
- RTE_PER_LCORE(_lcore_id) = lcore_id;
-
- /* set CPU affinity */
- if (eal_thread_set_affinity() < 0)
- rte_panic("cannot set affinity\n");
-
- ret = eal_thread_dump_affinity(cpuset, RTE_CPU_AFFINITY_STR_LEN);
-
- RTE_LOG(DEBUG, EAL, "lcore %u is ready (tid=%x;cpuset=[%s%s])\n",
- lcore_id, (int)thread_id, cpuset, ret == 0 ? "" : "...");
-
- /* read on our pipe to get commands */
- while (1) {
- void *fct_arg;
-
- /* wait command */
- do {
- n = read(m2s, &c, 1);
- } while (n < 0 && errno == EINTR);
-
- if (n <= 0)
- rte_panic("cannot read on configuration pipe\n");
-
- lcore_config[lcore_id].state = RUNNING;
-
- /* send ack */
- n = 0;
- while (n == 0 || (n < 0 && errno == EINTR))
- n = write(s2m, &c, 1);
- if (n < 0)
- rte_panic("cannot write on configuration pipe\n");
-
- if (lcore_config[lcore_id].f == NULL)
- rte_panic("NULL function pointer\n");
-
- /* call the function and store the return value */
- fct_arg = lcore_config[lcore_id].arg;
- ret = lcore_config[lcore_id].f(fct_arg);
- lcore_config[lcore_id].ret = ret;
- rte_wmb();
- lcore_config[lcore_id].state = FINISHED;
- }
-
- /* never reached */
- /* pthread_exit(NULL); */
- /* return NULL; */
-}
-
/* require calling thread tid by gettid() */
int rte_sys_gettid(void)
{
--
1.9.1
^ permalink raw reply [relevance 2%]
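Editorial sketch of the launch path exercised by the code moved above: the master lcore writes on the master-to-slave pipe via rte_eal_remote_launch() and the loop in eal_thread_loop() picks the function up on each slave. This mirrors the helloworld sample; lcore_hello is an invented name:

#include <stdio.h>
#include <rte_eal.h>
#include <rte_launch.h>
#include <rte_lcore.h>

static int
lcore_hello(void *arg)
{
	(void)arg;
	printf("hello from lcore %u\n", rte_lcore_id());
	return 0;
}

int
main(int argc, char **argv)
{
	unsigned lcore_id;

	if (rte_eal_init(argc, argv) < 0)
		return -1;

	RTE_LCORE_FOREACH_SLAVE(lcore_id)
		rte_eal_remote_launch(lcore_hello, NULL, lcore_id);

	lcore_hello(NULL);		/* run on the master lcore too */
	rte_eal_mp_wait_lcore();	/* wait until all slaves are FINISHED */
	return 0;
}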
* [dpdk-dev] [PATCH 1/9] ethdev: remove Rx interrupt switch
@ 2015-09-01 21:30 5% ` Thomas Monjalon
2015-09-01 21:30 2% ` [dpdk-dev] [PATCH 2/9] mbuf: remove packet type from offload flags Thomas Monjalon
` (3 subsequent siblings)
4 siblings, 0 replies; 200+ results
From: Thomas Monjalon @ 2015-09-01 21:30 UTC (permalink / raw)
To: dev
The Rx interrupt feature is now part of the standard ABI.
Because of changes in rte_intr_handle and struct rte_eth_conf,
the eal and ethdev library versions are bumped.
Signed-off-by: Thomas Monjalon <thomas.monjalon@6wind.com>
---
doc/guides/rel_notes/deprecation.rst | 4 --
drivers/net/e1000/igb_ethdev.c | 26 -----------
drivers/net/ixgbe/ixgbe_ethdev.c | 40 ----------------
examples/l3fwd-power/main.c | 2 -
lib/librte_eal/bsdapp/eal/Makefile | 2 +-
.../bsdapp/eal/include/exec-env/rte_interrupts.h | 2 -
lib/librte_eal/linuxapp/eal/Makefile | 2 +-
lib/librte_eal/linuxapp/eal/eal_interrupts.c | 53 ----------------------
.../linuxapp/eal/include/exec-env/rte_interrupts.h | 2 -
lib/librte_ether/Makefile | 2 +-
lib/librte_ether/rte_ethdev.c | 40 ----------------
lib/librte_ether/rte_ethdev.h | 4 --
12 files changed, 3 insertions(+), 176 deletions(-)
diff --git a/doc/guides/rel_notes/deprecation.rst b/doc/guides/rel_notes/deprecation.rst
index da17880..991a777 100644
--- a/doc/guides/rel_notes/deprecation.rst
+++ b/doc/guides/rel_notes/deprecation.rst
@@ -13,10 +13,6 @@ Deprecation Notices
There is no backward compatibility planned from release 2.2.
All binaries will need to be rebuilt from release 2.2.
-* ABI changes are planned for struct rte_intr_handle, struct rte_eth_conf
- and struct eth_dev_ops to support interrupt mode feature from release 2.1.
- Those changes may be enabled in the release 2.1 with CONFIG_RTE_NEXT_ABI.
-
* The EAL function rte_eal_pci_close_one is deprecated because renamed to
rte_eal_pci_detach.
diff --git a/drivers/net/e1000/igb_ethdev.c b/drivers/net/e1000/igb_ethdev.c
index c7e6d55..848ef6e 100644
--- a/drivers/net/e1000/igb_ethdev.c
+++ b/drivers/net/e1000/igb_ethdev.c
@@ -106,9 +106,7 @@ static int eth_igb_flow_ctrl_get(struct rte_eth_dev *dev,
static int eth_igb_flow_ctrl_set(struct rte_eth_dev *dev,
struct rte_eth_fc_conf *fc_conf);
static int eth_igb_lsc_interrupt_setup(struct rte_eth_dev *dev);
-#ifdef RTE_NEXT_ABI
static int eth_igb_rxq_interrupt_setup(struct rte_eth_dev *dev);
-#endif
static int eth_igb_interrupt_get_status(struct rte_eth_dev *dev);
static int eth_igb_interrupt_action(struct rte_eth_dev *dev);
static void eth_igb_interrupt_handler(struct rte_intr_handle *handle,
@@ -232,7 +230,6 @@ static int igb_timesync_read_rx_timestamp(struct rte_eth_dev *dev,
uint32_t flags);
static int igb_timesync_read_tx_timestamp(struct rte_eth_dev *dev,
struct timespec *timestamp);
-#ifdef RTE_NEXT_ABI
static int eth_igb_rx_queue_intr_enable(struct rte_eth_dev *dev,
uint16_t queue_id);
static int eth_igb_rx_queue_intr_disable(struct rte_eth_dev *dev,
@@ -241,7 +238,6 @@ static void eth_igb_assign_msix_vector(struct e1000_hw *hw, int8_t direction,
uint8_t queue, uint8_t msix_vector);
static void eth_igb_write_ivar(struct e1000_hw *hw, uint8_t msix_vector,
uint8_t index, uint8_t offset);
-#endif
static void eth_igb_configure_msix_intr(struct rte_eth_dev *dev);
/*
@@ -303,10 +299,8 @@ static const struct eth_dev_ops eth_igb_ops = {
.vlan_tpid_set = eth_igb_vlan_tpid_set,
.vlan_offload_set = eth_igb_vlan_offload_set,
.rx_queue_setup = eth_igb_rx_queue_setup,
-#ifdef RTE_NEXT_ABI
.rx_queue_intr_enable = eth_igb_rx_queue_intr_enable,
.rx_queue_intr_disable = eth_igb_rx_queue_intr_disable,
-#endif
.rx_queue_release = eth_igb_rx_queue_release,
.rx_queue_count = eth_igb_rx_queue_count,
.rx_descriptor_done = eth_igb_rx_descriptor_done,
@@ -893,9 +887,7 @@ eth_igb_start(struct rte_eth_dev *dev)
E1000_DEV_PRIVATE(dev->data->dev_private);
struct rte_intr_handle *intr_handle = &dev->pci_dev->intr_handle;
int ret, mask;
-#ifdef RTE_NEXT_ABI
uint32_t intr_vector = 0;
-#endif
uint32_t ctrl_ext;
PMD_INIT_FUNC_TRACE();
@@ -936,7 +928,6 @@ eth_igb_start(struct rte_eth_dev *dev)
/* configure PF module if SRIOV enabled */
igb_pf_host_configure(dev);
-#ifdef RTE_NEXT_ABI
/* check and configure queue intr-vector mapping */
if (dev->data->dev_conf.intr_conf.rxq != 0)
intr_vector = dev->data->nb_rx_queues;
@@ -954,7 +945,6 @@ eth_igb_start(struct rte_eth_dev *dev)
return -ENOMEM;
}
}
-#endif
/* confiugre msix for rx interrupt */
eth_igb_configure_msix_intr(dev);
@@ -1050,11 +1040,9 @@ eth_igb_start(struct rte_eth_dev *dev)
" no intr multiplex\n");
}
-#ifdef RTE_NEXT_ABI
/* check if rxq interrupt is enabled */
if (dev->data->dev_conf.intr_conf.rxq != 0)
eth_igb_rxq_interrupt_setup(dev);
-#endif
/* enable uio/vfio intr/eventfd mapping */
rte_intr_enable(intr_handle);
@@ -1146,14 +1134,12 @@ eth_igb_stop(struct rte_eth_dev *dev)
}
filter_info->twotuple_mask = 0;
-#ifdef RTE_NEXT_ABI
/* Clean datapath event and queue/vec mapping */
rte_intr_efd_disable(intr_handle);
if (intr_handle->intr_vec != NULL) {
rte_free(intr_handle->intr_vec);
intr_handle->intr_vec = NULL;
}
-#endif
}
static void
@@ -1163,9 +1149,7 @@ eth_igb_close(struct rte_eth_dev *dev)
struct e1000_adapter *adapter =
E1000_DEV_PRIVATE(dev->data->dev_private);
struct rte_eth_link link;
-#ifdef RTE_NEXT_ABI
struct rte_pci_device *pci_dev;
-#endif
eth_igb_stop(dev);
adapter->stopped = 1;
@@ -1185,13 +1169,11 @@ eth_igb_close(struct rte_eth_dev *dev)
igb_dev_free_queues(dev);
-#ifdef RTE_NEXT_ABI
pci_dev = dev->pci_dev;
if (pci_dev->intr_handle.intr_vec) {
rte_free(pci_dev->intr_handle.intr_vec);
pci_dev->intr_handle.intr_vec = NULL;
}
-#endif
memset(&link, 0, sizeof(link));
rte_igb_dev_atomic_write_link_status(dev, &link);
@@ -2017,7 +1999,6 @@ eth_igb_lsc_interrupt_setup(struct rte_eth_dev *dev)
return 0;
}
-#ifdef RTE_NEXT_ABI
/* It clears the interrupt causes and enables the interrupt.
* It will be called once only during nic initialized.
*
@@ -2044,7 +2025,6 @@ static int eth_igb_rxq_interrupt_setup(struct rte_eth_dev *dev)
return 0;
}
-#endif
/*
* It reads ICR and gets interrupt causes, check it and set a bit flag
@@ -4144,7 +4124,6 @@ static struct rte_driver pmd_igbvf_drv = {
.init = rte_igbvf_pmd_init,
};
-#ifdef RTE_NEXT_ABI
static int
eth_igb_rx_queue_intr_disable(struct rte_eth_dev *dev, uint16_t queue_id)
{
@@ -4219,7 +4198,6 @@ eth_igb_assign_msix_vector(struct e1000_hw *hw, int8_t direction,
8 * direction);
}
}
-#endif
/* Sets up the hardware to generate MSI-X interrupts properly
* @hw
@@ -4228,13 +4206,11 @@ eth_igb_assign_msix_vector(struct e1000_hw *hw, int8_t direction,
static void
eth_igb_configure_msix_intr(struct rte_eth_dev *dev)
{
-#ifdef RTE_NEXT_ABI
int queue_id;
uint32_t tmpval, regval, intr_mask;
struct e1000_hw *hw =
E1000_DEV_PRIVATE_TO_HW(dev->data->dev_private);
uint32_t vec = 0;
-#endif
struct rte_intr_handle *intr_handle = &dev->pci_dev->intr_handle;
/* won't configure msix register if no mapping is done
@@ -4243,7 +4219,6 @@ eth_igb_configure_msix_intr(struct rte_eth_dev *dev)
if (!rte_intr_dp_is_en(intr_handle))
return;
-#ifdef RTE_NEXT_ABI
/* set interrupt vector for other causes */
if (hw->mac.type == e1000_82575) {
tmpval = E1000_READ_REG(hw, E1000_CTRL_EXT);
@@ -4299,7 +4274,6 @@ eth_igb_configure_msix_intr(struct rte_eth_dev *dev)
}
E1000_WRITE_FLUSH(hw);
-#endif
}
PMD_REGISTER_DRIVER(pmd_igb_drv);
diff --git a/drivers/net/ixgbe/ixgbe_ethdev.c b/drivers/net/ixgbe/ixgbe_ethdev.c
index b8ee1e9..ec2918c 100644
--- a/drivers/net/ixgbe/ixgbe_ethdev.c
+++ b/drivers/net/ixgbe/ixgbe_ethdev.c
@@ -190,9 +190,7 @@ static int ixgbe_dev_rss_reta_query(struct rte_eth_dev *dev,
uint16_t reta_size);
static void ixgbe_dev_link_status_print(struct rte_eth_dev *dev);
static int ixgbe_dev_lsc_interrupt_setup(struct rte_eth_dev *dev);
-#ifdef RTE_NEXT_ABI
static int ixgbe_dev_rxq_interrupt_setup(struct rte_eth_dev *dev);
-#endif
static int ixgbe_dev_interrupt_get_status(struct rte_eth_dev *dev);
static int ixgbe_dev_interrupt_action(struct rte_eth_dev *dev);
static void ixgbe_dev_interrupt_handler(struct rte_intr_handle *handle,
@@ -227,14 +225,12 @@ static void ixgbevf_vlan_offload_set(struct rte_eth_dev *dev, int mask);
static void ixgbevf_set_vfta_all(struct rte_eth_dev *dev, bool on);
static void ixgbevf_dev_interrupt_handler(struct rte_intr_handle *handle,
void *param);
-#ifdef RTE_NEXT_ABI
static int ixgbevf_dev_rx_queue_intr_enable(struct rte_eth_dev *dev,
uint16_t queue_id);
static int ixgbevf_dev_rx_queue_intr_disable(struct rte_eth_dev *dev,
uint16_t queue_id);
static void ixgbevf_set_ivar_map(struct ixgbe_hw *hw, int8_t direction,
uint8_t queue, uint8_t msix_vector);
-#endif
static void ixgbevf_configure_msix(struct rte_eth_dev *dev);
/* For Eth VMDQ APIs support */
@@ -252,14 +248,12 @@ static int ixgbe_mirror_rule_set(struct rte_eth_dev *dev,
uint8_t rule_id, uint8_t on);
static int ixgbe_mirror_rule_reset(struct rte_eth_dev *dev,
uint8_t rule_id);
-#ifdef RTE_NEXT_ABI
static int ixgbe_dev_rx_queue_intr_enable(struct rte_eth_dev *dev,
uint16_t queue_id);
static int ixgbe_dev_rx_queue_intr_disable(struct rte_eth_dev *dev,
uint16_t queue_id);
static void ixgbe_set_ivar_map(struct ixgbe_hw *hw, int8_t direction,
uint8_t queue, uint8_t msix_vector);
-#endif
static void ixgbe_configure_msix(struct rte_eth_dev *dev);
static int ixgbe_set_queue_rate_limit(struct rte_eth_dev *dev,
@@ -420,10 +414,8 @@ static const struct eth_dev_ops ixgbe_eth_dev_ops = {
.tx_queue_start = ixgbe_dev_tx_queue_start,
.tx_queue_stop = ixgbe_dev_tx_queue_stop,
.rx_queue_setup = ixgbe_dev_rx_queue_setup,
-#ifdef RTE_NEXT_ABI
.rx_queue_intr_enable = ixgbe_dev_rx_queue_intr_enable,
.rx_queue_intr_disable = ixgbe_dev_rx_queue_intr_disable,
-#endif
.rx_queue_release = ixgbe_dev_rx_queue_release,
.rx_queue_count = ixgbe_dev_rx_queue_count,
.rx_descriptor_done = ixgbe_dev_rx_descriptor_done,
@@ -497,10 +489,8 @@ static const struct eth_dev_ops ixgbevf_eth_dev_ops = {
.rx_descriptor_done = ixgbe_dev_rx_descriptor_done,
.tx_queue_setup = ixgbe_dev_tx_queue_setup,
.tx_queue_release = ixgbe_dev_tx_queue_release,
-#ifdef RTE_NEXT_ABI
.rx_queue_intr_enable = ixgbevf_dev_rx_queue_intr_enable,
.rx_queue_intr_disable = ixgbevf_dev_rx_queue_intr_disable,
-#endif
.mac_addr_add = ixgbevf_add_mac_addr,
.mac_addr_remove = ixgbevf_remove_mac_addr,
.set_mc_addr_list = ixgbe_dev_set_mc_addr_list,
@@ -1680,9 +1670,7 @@ ixgbe_dev_start(struct rte_eth_dev *dev)
struct ixgbe_vf_info *vfinfo =
*IXGBE_DEV_PRIVATE_TO_P_VFDATA(dev->data->dev_private);
struct rte_intr_handle *intr_handle = &dev->pci_dev->intr_handle;
-#ifdef RTE_NEXT_ABI
uint32_t intr_vector = 0;
-#endif
int err, link_up = 0, negotiate = 0;
uint32_t speed = 0;
int mask = 0;
@@ -1715,7 +1703,6 @@ ixgbe_dev_start(struct rte_eth_dev *dev)
/* configure PF module if SRIOV enabled */
ixgbe_pf_host_configure(dev);
-#ifdef RTE_NEXT_ABI
/* check and configure queue intr-vector mapping */
if (dev->data->dev_conf.intr_conf.rxq != 0)
intr_vector = dev->data->nb_rx_queues;
@@ -1734,7 +1721,6 @@ ixgbe_dev_start(struct rte_eth_dev *dev)
return -ENOMEM;
}
}
-#endif
/* confiugre msix for sleep until rx interrupt */
ixgbe_configure_msix(dev);
@@ -1827,11 +1813,9 @@ skip_link_setup:
" no intr multiplex\n");
}
-#ifdef RTE_NEXT_ABI
/* check if rxq interrupt is enabled */
if (dev->data->dev_conf.intr_conf.rxq != 0)
ixgbe_dev_rxq_interrupt_setup(dev);
-#endif
/* enable uio/vfio intr/eventfd mapping */
rte_intr_enable(intr_handle);
@@ -1942,14 +1926,12 @@ ixgbe_dev_stop(struct rte_eth_dev *dev)
memset(filter_info->fivetuple_mask, 0,
sizeof(uint32_t) * IXGBE_5TUPLE_ARRAY_SIZE);
-#ifdef RTE_NEXT_ABI
/* Clean datapath event and queue/vec mapping */
rte_intr_efd_disable(intr_handle);
if (intr_handle->intr_vec != NULL) {
rte_free(intr_handle->intr_vec);
intr_handle->intr_vec = NULL;
}
-#endif
}
/*
@@ -2623,7 +2605,6 @@ ixgbe_dev_lsc_interrupt_setup(struct rte_eth_dev *dev)
* - On success, zero.
* - On failure, a negative value.
*/
-#ifdef RTE_NEXT_ABI
static int
ixgbe_dev_rxq_interrupt_setup(struct rte_eth_dev *dev)
{
@@ -2634,7 +2615,6 @@ ixgbe_dev_rxq_interrupt_setup(struct rte_eth_dev *dev)
return 0;
}
-#endif
/*
* It reads ICR and sets flag (IXGBE_EICR_LSC) for the link_update.
@@ -3435,9 +3415,7 @@ ixgbevf_dev_start(struct rte_eth_dev *dev)
{
struct ixgbe_hw *hw =
IXGBE_DEV_PRIVATE_TO_HW(dev->data->dev_private);
-#ifdef RTE_NEXT_ABI
uint32_t intr_vector = 0;
-#endif
struct rte_intr_handle *intr_handle = &dev->pci_dev->intr_handle;
int err, mask = 0;
@@ -3470,7 +3448,6 @@ ixgbevf_dev_start(struct rte_eth_dev *dev)
ixgbevf_dev_rxtx_start(dev);
-#ifdef RTE_NEXT_ABI
/* check and configure queue intr-vector mapping */
if (dev->data->dev_conf.intr_conf.rxq != 0)
intr_vector = dev->data->nb_rx_queues;
@@ -3488,7 +3465,6 @@ ixgbevf_dev_start(struct rte_eth_dev *dev)
return -ENOMEM;
}
}
-#endif
ixgbevf_configure_msix(dev);
if (dev->data->dev_conf.intr_conf.lsc != 0) {
@@ -3534,23 +3510,19 @@ ixgbevf_dev_stop(struct rte_eth_dev *dev)
/* disable intr eventfd mapping */
rte_intr_disable(intr_handle);
-#ifdef RTE_NEXT_ABI
/* Clean datapath event and queue/vec mapping */
rte_intr_efd_disable(intr_handle);
if (intr_handle->intr_vec != NULL) {
rte_free(intr_handle->intr_vec);
intr_handle->intr_vec = NULL;
}
-#endif
}
static void
ixgbevf_dev_close(struct rte_eth_dev *dev)
{
struct ixgbe_hw *hw = IXGBE_DEV_PRIVATE_TO_HW(dev->data->dev_private);
-#ifdef RTE_NEXT_ABI
struct rte_pci_device *pci_dev;
-#endif
PMD_INIT_FUNC_TRACE();
@@ -3563,13 +3535,11 @@ ixgbevf_dev_close(struct rte_eth_dev *dev)
/* reprogram the RAR[0] in case user changed it. */
ixgbe_set_rar(hw, 0, hw->mac.addr, 0, IXGBE_RAH_AV);
-#ifdef RTE_NEXT_ABI
pci_dev = dev->pci_dev;
if (pci_dev->intr_handle.intr_vec) {
rte_free(pci_dev->intr_handle.intr_vec);
pci_dev->intr_handle.intr_vec = NULL;
}
-#endif
}
static void ixgbevf_set_vfta_all(struct rte_eth_dev *dev, bool on)
@@ -4087,7 +4057,6 @@ ixgbe_mirror_rule_reset(struct rte_eth_dev *dev, uint8_t rule_id)
return 0;
}
-#ifdef RTE_NEXT_ABI
static int
ixgbevf_dev_rx_queue_intr_enable(struct rte_eth_dev *dev, uint16_t queue_id)
{
@@ -4240,18 +4209,15 @@ ixgbe_set_ivar_map(struct ixgbe_hw *hw, int8_t direction,
}
}
}
-#endif
static void
ixgbevf_configure_msix(struct rte_eth_dev *dev)
{
struct rte_intr_handle *intr_handle = &dev->pci_dev->intr_handle;
-#ifdef RTE_NEXT_ABI
struct ixgbe_hw *hw =
IXGBE_DEV_PRIVATE_TO_HW(dev->data->dev_private);
uint32_t q_idx;
uint32_t vector_idx = 0;
-#endif
/* won't configure msix register if no mapping is done
* between intr vector and event fd.
@@ -4259,7 +4225,6 @@ ixgbevf_configure_msix(struct rte_eth_dev *dev)
if (!rte_intr_dp_is_en(intr_handle))
return;
-#ifdef RTE_NEXT_ABI
/* Configure all RX queues of VF */
for (q_idx = 0; q_idx < dev->data->nb_rx_queues; q_idx++) {
/* Force all queue use vector 0,
@@ -4271,7 +4236,6 @@ ixgbevf_configure_msix(struct rte_eth_dev *dev)
/* Configure VF Rx queue ivar */
ixgbevf_set_ivar_map(hw, -1, 1, vector_idx);
-#endif
}
/**
@@ -4283,13 +4247,11 @@ static void
ixgbe_configure_msix(struct rte_eth_dev *dev)
{
struct rte_intr_handle *intr_handle = &dev->pci_dev->intr_handle;
-#ifdef RTE_NEXT_ABI
struct ixgbe_hw *hw =
IXGBE_DEV_PRIVATE_TO_HW(dev->data->dev_private);
uint32_t queue_id, vec = 0;
uint32_t mask;
uint32_t gpie;
-#endif
/* won't configure msix register if no mapping is done
* between intr vector and event fd
@@ -4297,7 +4259,6 @@ ixgbe_configure_msix(struct rte_eth_dev *dev)
if (!rte_intr_dp_is_en(intr_handle))
return;
-#ifdef RTE_NEXT_ABI
/* setup GPIE for MSI-x mode */
gpie = IXGBE_READ_REG(hw, IXGBE_GPIE);
gpie |= IXGBE_GPIE_MSIX_MODE | IXGBE_GPIE_PBA_SUPPORT |
@@ -4347,7 +4308,6 @@ ixgbe_configure_msix(struct rte_eth_dev *dev)
IXGBE_EIMS_LSC);
IXGBE_WRITE_REG(hw, IXGBE_EIAC, mask);
-#endif
}
static int ixgbe_set_queue_rate_limit(struct rte_eth_dev *dev,
diff --git a/examples/l3fwd-power/main.c b/examples/l3fwd-power/main.c
index 2f205ea..086f29b 100644
--- a/examples/l3fwd-power/main.c
+++ b/examples/l3fwd-power/main.c
@@ -239,9 +239,7 @@ static struct rte_eth_conf port_conf = {
},
.intr_conf = {
.lsc = 1,
-#ifdef RTE_NEXT_ABI
.rxq = 1,
-#endif
},
};
diff --git a/lib/librte_eal/bsdapp/eal/Makefile b/lib/librte_eal/bsdapp/eal/Makefile
index a969435..a49dcec 100644
--- a/lib/librte_eal/bsdapp/eal/Makefile
+++ b/lib/librte_eal/bsdapp/eal/Makefile
@@ -44,7 +44,7 @@ CFLAGS += $(WERROR_FLAGS) -O3
EXPORT_MAP := rte_eal_version.map
-LIBABIVER := 1
+LIBABIVER := 2
# specific to linuxapp exec-env
SRCS-$(CONFIG_RTE_LIBRTE_EAL_BSDAPP) := eal.c
diff --git a/lib/librte_eal/bsdapp/eal/include/exec-env/rte_interrupts.h b/lib/librte_eal/bsdapp/eal/include/exec-env/rte_interrupts.h
index bffa902..88d4ae1 100644
--- a/lib/librte_eal/bsdapp/eal/include/exec-env/rte_interrupts.h
+++ b/lib/librte_eal/bsdapp/eal/include/exec-env/rte_interrupts.h
@@ -50,11 +50,9 @@ struct rte_intr_handle {
int fd; /**< file descriptor */
int uio_cfg_fd; /**< UIO config file descriptor */
enum rte_intr_handle_type type; /**< handle type */
-#ifdef RTE_NEXT_ABI
int max_intr; /**< max interrupt requested */
uint32_t nb_efd; /**< number of available efds */
int *intr_vec; /**< intr vector number array */
-#endif
};
/**
diff --git a/lib/librte_eal/linuxapp/eal/Makefile b/lib/librte_eal/linuxapp/eal/Makefile
index 376d275..d62196e 100644
--- a/lib/librte_eal/linuxapp/eal/Makefile
+++ b/lib/librte_eal/linuxapp/eal/Makefile
@@ -35,7 +35,7 @@ LIB = librte_eal.a
EXPORT_MAP := rte_eal_version.map
-LIBABIVER := 1
+LIBABIVER := 2
VPATH += $(RTE_SDK)/lib/librte_eal/common
diff --git a/lib/librte_eal/linuxapp/eal/eal_interrupts.c b/lib/librte_eal/linuxapp/eal/eal_interrupts.c
index 3f87875..66e1fe3 100644
--- a/lib/librte_eal/linuxapp/eal/eal_interrupts.c
+++ b/lib/librte_eal/linuxapp/eal/eal_interrupts.c
@@ -290,26 +290,18 @@ vfio_enable_msix(struct rte_intr_handle *intr_handle) {
irq_set = (struct vfio_irq_set *) irq_set_buf;
irq_set->argsz = len;
-#ifdef RTE_NEXT_ABI
if (!intr_handle->max_intr)
intr_handle->max_intr = 1;
else if (intr_handle->max_intr > RTE_MAX_RXTX_INTR_VEC_ID)
intr_handle->max_intr = RTE_MAX_RXTX_INTR_VEC_ID + 1;
irq_set->count = intr_handle->max_intr;
-#else
- irq_set->count = 1;
-#endif
irq_set->flags = VFIO_IRQ_SET_DATA_EVENTFD | VFIO_IRQ_SET_ACTION_TRIGGER;
irq_set->index = VFIO_PCI_MSIX_IRQ_INDEX;
irq_set->start = 0;
fd_ptr = (int *) &irq_set->data;
-#ifdef RTE_NEXT_ABI
memcpy(fd_ptr, intr_handle->efds, sizeof(intr_handle->efds));
fd_ptr[intr_handle->max_intr - 1] = intr_handle->fd;
-#else
- fd_ptr[0] = intr_handle->fd;
-#endif
ret = ioctl(intr_handle->vfio_dev_fd, VFIO_DEVICE_SET_IRQS, irq_set);
@@ -886,7 +878,6 @@ rte_eal_intr_init(void)
return -ret;
}
-#ifdef RTE_NEXT_ABI
static void
eal_intr_proc_rxtx_intr(int fd, const struct rte_intr_handle *intr_handle)
{
@@ -929,7 +920,6 @@ eal_intr_proc_rxtx_intr(int fd, const struct rte_intr_handle *intr_handle)
return;
} while (1);
}
-#endif
static int
eal_epoll_process_event(struct epoll_event *evs, unsigned int n,
@@ -1068,7 +1058,6 @@ rte_epoll_ctl(int epfd, int op, int fd,
return 0;
}
-#ifdef RTE_NEXT_ABI
int
rte_intr_rx_ctl(struct rte_intr_handle *intr_handle, int epfd,
int op, unsigned int vec, void *data)
@@ -1192,45 +1181,3 @@ rte_intr_allow_others(struct rte_intr_handle *intr_handle)
{
return !!(intr_handle->max_intr - intr_handle->nb_efd);
}
-
-#else
-int
-rte_intr_rx_ctl(struct rte_intr_handle *intr_handle,
- int epfd, int op, unsigned int vec, void *data)
-{
- RTE_SET_USED(intr_handle);
- RTE_SET_USED(epfd);
- RTE_SET_USED(op);
- RTE_SET_USED(vec);
- RTE_SET_USED(data);
- return -ENOTSUP;
-}
-
-int
-rte_intr_efd_enable(struct rte_intr_handle *intr_handle, uint32_t nb_efd)
-{
- RTE_SET_USED(intr_handle);
- RTE_SET_USED(nb_efd);
- return 0;
-}
-
-void
-rte_intr_efd_disable(struct rte_intr_handle *intr_handle)
-{
- RTE_SET_USED(intr_handle);
-}
-
-int
-rte_intr_dp_is_en(struct rte_intr_handle *intr_handle)
-{
- RTE_SET_USED(intr_handle);
- return 0;
-}
-
-int
-rte_intr_allow_others(struct rte_intr_handle *intr_handle)
-{
- RTE_SET_USED(intr_handle);
- return 1;
-}
-#endif
diff --git a/lib/librte_eal/linuxapp/eal/include/exec-env/rte_interrupts.h b/lib/librte_eal/linuxapp/eal/include/exec-env/rte_interrupts.h
index b05f4c8..45071b7 100644
--- a/lib/librte_eal/linuxapp/eal/include/exec-env/rte_interrupts.h
+++ b/lib/librte_eal/linuxapp/eal/include/exec-env/rte_interrupts.h
@@ -86,14 +86,12 @@ struct rte_intr_handle {
};
int fd; /**< interrupt event file descriptor */
enum rte_intr_handle_type type; /**< handle type */
-#ifdef RTE_NEXT_ABI
uint32_t max_intr; /**< max interrupt requested */
uint32_t nb_efd; /**< number of available efd(event fd) */
int efds[RTE_MAX_RXTX_INTR_VEC_ID]; /**< intr vectors/efds mapping */
struct rte_epoll_event elist[RTE_MAX_RXTX_INTR_VEC_ID];
/**< intr vector epoll event */
int *intr_vec; /**< intr vector number array */
-#endif
};
#define RTE_EPOLL_PER_THREAD -1 /**< to hint using per thread epfd */
diff --git a/lib/librte_ether/Makefile b/lib/librte_ether/Makefile
index fc45a71..3e81a0e 100644
--- a/lib/librte_ether/Makefile
+++ b/lib/librte_ether/Makefile
@@ -41,7 +41,7 @@ CFLAGS += $(WERROR_FLAGS)
EXPORT_MAP := rte_ether_version.map
-LIBABIVER := 1
+LIBABIVER := 2
SRCS-y += rte_ethdev.c
diff --git a/lib/librte_ether/rte_ethdev.c b/lib/librte_ether/rte_ethdev.c
index 6b2400c..b309309 100644
--- a/lib/librte_ether/rte_ethdev.c
+++ b/lib/librte_ether/rte_ethdev.c
@@ -3033,7 +3033,6 @@ _rte_eth_dev_callback_process(struct rte_eth_dev *dev,
rte_spinlock_unlock(&rte_eth_dev_cb_lock);
}
-#ifdef RTE_NEXT_ABI
int
rte_eth_dev_rx_intr_ctl(uint8_t port_id, int epfd, int op, void *data)
{
@@ -3139,45 +3138,6 @@ rte_eth_dev_rx_intr_disable(uint8_t port_id,
FUNC_PTR_OR_ERR_RET(*dev->dev_ops->rx_queue_intr_disable, -ENOTSUP);
return (*dev->dev_ops->rx_queue_intr_disable)(dev, queue_id);
}
-#else
-int
-rte_eth_dev_rx_intr_enable(uint8_t port_id, uint16_t queue_id)
-{
- RTE_SET_USED(port_id);
- RTE_SET_USED(queue_id);
- return -ENOTSUP;
-}
-
-int
-rte_eth_dev_rx_intr_disable(uint8_t port_id, uint16_t queue_id)
-{
- RTE_SET_USED(port_id);
- RTE_SET_USED(queue_id);
- return -ENOTSUP;
-}
-
-int
-rte_eth_dev_rx_intr_ctl(uint8_t port_id, int epfd, int op, void *data)
-{
- RTE_SET_USED(port_id);
- RTE_SET_USED(epfd);
- RTE_SET_USED(op);
- RTE_SET_USED(data);
- return -1;
-}
-
-int
-rte_eth_dev_rx_intr_ctl_q(uint8_t port_id, uint16_t queue_id,
- int epfd, int op, void *data)
-{
- RTE_SET_USED(port_id);
- RTE_SET_USED(queue_id);
- RTE_SET_USED(epfd);
- RTE_SET_USED(op);
- RTE_SET_USED(data);
- return -1;
-}
-#endif
#ifdef RTE_NIC_BYPASS
int rte_eth_dev_bypass_init(uint8_t port_id)
diff --git a/lib/librte_ether/rte_ethdev.h b/lib/librte_ether/rte_ethdev.h
index 544afe0..fa06554 100644
--- a/lib/librte_ether/rte_ethdev.h
+++ b/lib/librte_ether/rte_ethdev.h
@@ -845,10 +845,8 @@ struct rte_eth_fdir {
struct rte_intr_conf {
/** enable/disable lsc interrupt. 0 (default) - disable, 1 enable */
uint16_t lsc;
-#ifdef RTE_NEXT_ABI
/** enable/disable rxq interrupt. 0 (default) - disable, 1 enable */
uint16_t rxq;
-#endif
};
/**
@@ -1392,12 +1390,10 @@ struct eth_dev_ops {
eth_queue_release_t rx_queue_release;/**< Release RX queue.*/
eth_rx_queue_count_t rx_queue_count; /**< Get Rx queue count. */
eth_rx_descriptor_done_t rx_descriptor_done; /**< Check rxd DD bit */
-#ifdef RTE_NEXT_ABI
/**< Enable Rx queue interrupt. */
eth_rx_enable_intr_t rx_queue_intr_enable;
/**< Disable Rx queue interrupt.*/
eth_rx_disable_intr_t rx_queue_intr_disable;
-#endif
eth_tx_queue_setup_t tx_queue_setup;/**< Set up device TX queue.*/
eth_queue_release_t tx_queue_release;/**< Release TX queue.*/
eth_dev_led_on_t dev_led_on; /**< Turn on LED. */
--
2.5.1
^ permalink raw reply [relevance 5%]
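Editorial sketch of the Rx interrupt API that this patch makes unconditional (the l3fwd-power sample touched above is the reference user). The port/queue numbers, the port<<8|queue encoding of the epoll user data and the lack of error handling are simplifications for the example:

#include <stdint.h>
#include <rte_ethdev.h>
#include <rte_interrupts.h>

static void
sleep_until_rx(uint8_t port_id, uint16_t queue_id)
{
	struct rte_epoll_event event;
	uint32_t data = port_id << 8 | queue_id;

	/* register the queue's interrupt vector in the per-thread epoll */
	rte_eth_dev_rx_intr_ctl_q(port_id, queue_id, RTE_EPOLL_PER_THREAD,
				  RTE_INTR_EVENT_ADD,
				  (void *)(uintptr_t)data);

	/* arm the interrupt, block until a packet (or other event) arrives,
	 * then disarm and return to polling mode */
	rte_eth_dev_rx_intr_enable(port_id, queue_id);
	rte_epoll_wait(RTE_EPOLL_PER_THREAD, &event, 1, -1);
	rte_eth_dev_rx_intr_disable(port_id, queue_id);
}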
* [dpdk-dev] [PATCH 2/9] mbuf: remove packet type from offload flags
2015-09-01 21:30 5% ` [dpdk-dev] [PATCH 1/9] ethdev: remove Rx interrupt switch Thomas Monjalon
@ 2015-09-01 21:30 2% ` Thomas Monjalon
2015-09-01 21:30 11% ` [dpdk-dev] [PATCH 3/9] ethdev: remove SCTP flow entries switch Thomas Monjalon
` (2 subsequent siblings)
4 siblings, 0 replies; 200+ results
From: Thomas Monjalon @ 2015-09-01 21:30 UTC (permalink / raw)
To: dev
The extended unified packet type is now part of the standard ABI.
As the mbuf struct is changed, the mbuf library version is bumped.
Signed-off-by: Thomas Monjalon <thomas.monjalon@6wind.com>
---
app/test-pipeline/pipeline_hash.c | 12 -
app/test-pmd/csumonly.c | 14 -
app/test-pmd/rxonly.c | 16 --
app/test/packet_burst_generator.c | 12 -
doc/guides/rel_notes/deprecation.rst | 5 -
drivers/net/cxgbe/sge.c | 16 --
drivers/net/e1000/igb_rxtx.c | 34 ---
drivers/net/enic/enic_main.c | 25 --
drivers/net/fm10k/fm10k_rxtx.c | 15 --
drivers/net/i40e/i40e_rxtx.c | 293 ---------------------
drivers/net/ixgbe/ixgbe_rxtx.c | 87 ------
drivers/net/ixgbe/ixgbe_rxtx_vec.c | 111 --------
drivers/net/mlx4/mlx4.c | 29 --
drivers/net/vmxnet3/vmxnet3_rxtx.c | 8 -
examples/ip_fragmentation/main.c | 10 -
examples/ip_reassembly/main.c | 10 -
examples/l3fwd-acl/main.c | 27 --
examples/l3fwd-power/main.c | 9 -
examples/l3fwd/main.c | 114 --------
examples/tep_termination/vxlan.c | 5 -
.../linuxapp/eal/include/exec-env/rte_kni_common.h | 6 -
lib/librte_mbuf/Makefile | 2 +-
lib/librte_mbuf/rte_mbuf.c | 10 -
lib/librte_mbuf/rte_mbuf.h | 28 +-
24 files changed, 2 insertions(+), 896 deletions(-)
diff --git a/app/test-pipeline/pipeline_hash.c b/app/test-pipeline/pipeline_hash.c
index aa3f9e5..548615f 100644
--- a/app/test-pipeline/pipeline_hash.c
+++ b/app/test-pipeline/pipeline_hash.c
@@ -459,33 +459,21 @@ app_main_loop_rx_metadata(void) {
signature = RTE_MBUF_METADATA_UINT32_PTR(m, 0);
key = RTE_MBUF_METADATA_UINT8_PTR(m, 32);
-#ifdef RTE_NEXT_ABI
if (RTE_ETH_IS_IPV4_HDR(m->packet_type)) {
-#else
- if (m->ol_flags & PKT_RX_IPV4_HDR) {
-#endif
ip_hdr = (struct ipv4_hdr *)
&m_data[sizeof(struct ether_hdr)];
ip_dst = ip_hdr->dst_addr;
k32 = (uint32_t *) key;
k32[0] = ip_dst & 0xFFFFFF00;
-#ifdef RTE_NEXT_ABI
} else if (RTE_ETH_IS_IPV6_HDR(m->packet_type)) {
-#else
- } else {
-#endif
ipv6_hdr = (struct ipv6_hdr *)
&m_data[sizeof(struct ether_hdr)];
ipv6_dst = ipv6_hdr->dst_addr;
memcpy(key, ipv6_dst, 16);
-#ifdef RTE_NEXT_ABI
} else
continue;
-#else
- }
-#endif
*signature = test_hash(key, 0, 0);
}
diff --git a/app/test-pmd/csumonly.c b/app/test-pmd/csumonly.c
index 1bf3485..e561dde 100644
--- a/app/test-pmd/csumonly.c
+++ b/app/test-pmd/csumonly.c
@@ -202,14 +202,9 @@ parse_ethernet(struct ether_hdr *eth_hdr, struct testpmd_offload_info *info)
/* Parse a vxlan header */
static void
-#ifdef RTE_NEXT_ABI
parse_vxlan(struct udp_hdr *udp_hdr,
struct testpmd_offload_info *info,
uint32_t pkt_type)
-#else
-parse_vxlan(struct udp_hdr *udp_hdr, struct testpmd_offload_info *info,
- uint64_t mbuf_olflags)
-#endif
{
struct ether_hdr *eth_hdr;
@@ -217,12 +212,7 @@ parse_vxlan(struct udp_hdr *udp_hdr, struct testpmd_offload_info *info,
* (rfc7348) or that the rx offload flag is set (i40e only
* currently) */
if (udp_hdr->dst_port != _htons(4789) &&
-#ifdef RTE_NEXT_ABI
RTE_ETH_IS_TUNNEL_PKT(pkt_type) == 0)
-#else
- (mbuf_olflags & (PKT_RX_TUNNEL_IPV4_HDR |
- PKT_RX_TUNNEL_IPV6_HDR)) == 0)
-#endif
return;
info->is_tunnel = 1;
@@ -559,11 +549,7 @@ pkt_burst_checksum_forward(struct fwd_stream *fs)
struct udp_hdr *udp_hdr;
udp_hdr = (struct udp_hdr *)((char *)l3_hdr +
info.l3_len);
-#ifdef RTE_NEXT_ABI
parse_vxlan(udp_hdr, &info, m->packet_type);
-#else
- parse_vxlan(udp_hdr, &info, m->ol_flags);
-#endif
} else if (info.l4_proto == IPPROTO_GRE) {
struct simple_gre_hdr *gre_hdr;
gre_hdr = (struct simple_gre_hdr *)
diff --git a/app/test-pmd/rxonly.c b/app/test-pmd/rxonly.c
index ee7fd8d..14555ab 100644
--- a/app/test-pmd/rxonly.c
+++ b/app/test-pmd/rxonly.c
@@ -91,11 +91,7 @@ pkt_burst_receive(struct fwd_stream *fs)
uint64_t ol_flags;
uint16_t nb_rx;
uint16_t i, packet_type;
-#ifdef RTE_NEXT_ABI
uint16_t is_encapsulation;
-#else
- uint64_t is_encapsulation;
-#endif
#ifdef RTE_TEST_PMD_RECORD_CORE_CYCLES
uint64_t start_tsc;
@@ -138,13 +134,7 @@ pkt_burst_receive(struct fwd_stream *fs)
eth_type = RTE_BE_TO_CPU_16(eth_hdr->ether_type);
ol_flags = mb->ol_flags;
packet_type = mb->packet_type;
-
-#ifdef RTE_NEXT_ABI
is_encapsulation = RTE_ETH_IS_TUNNEL_PKT(packet_type);
-#else
- is_encapsulation = ol_flags & (PKT_RX_TUNNEL_IPV4_HDR |
- PKT_RX_TUNNEL_IPV6_HDR);
-#endif
print_ether_addr(" src=", &eth_hdr->s_addr);
print_ether_addr(" - dst=", &eth_hdr->d_addr);
@@ -171,7 +161,6 @@ pkt_burst_receive(struct fwd_stream *fs)
if (ol_flags & PKT_RX_QINQ_PKT)
printf(" - QinQ VLAN tci=0x%x, VLAN tci outer=0x%x",
mb->vlan_tci, mb->vlan_tci_outer);
-#ifdef RTE_NEXT_ABI
if (mb->packet_type) {
uint32_t ptype;
@@ -341,7 +330,6 @@ pkt_burst_receive(struct fwd_stream *fs)
printf("\n");
} else
printf("Unknown packet type\n");
-#endif /* RTE_NEXT_ABI */
if (is_encapsulation) {
struct ipv4_hdr *ipv4_hdr;
struct ipv6_hdr *ipv6_hdr;
@@ -355,11 +343,7 @@ pkt_burst_receive(struct fwd_stream *fs)
l2_len = sizeof(struct ether_hdr);
/* Do not support ipv4 option field */
-#ifdef RTE_NEXT_ABI
if (RTE_ETH_IS_IPV4_HDR(packet_type)) {
-#else
- if (ol_flags & PKT_RX_TUNNEL_IPV4_HDR) {
-#endif
l3_len = sizeof(struct ipv4_hdr);
ipv4_hdr = rte_pktmbuf_mtod_offset(mb,
struct ipv4_hdr *,
diff --git a/app/test/packet_burst_generator.c b/app/test/packet_burst_generator.c
index d9d808b..a93c3b5 100644
--- a/app/test/packet_burst_generator.c
+++ b/app/test/packet_burst_generator.c
@@ -273,21 +273,9 @@ nomore_mbuf:
if (ipv4) {
pkt->vlan_tci = ETHER_TYPE_IPv4;
pkt->l3_len = sizeof(struct ipv4_hdr);
-#ifndef RTE_NEXT_ABI
- if (vlan_enabled)
- pkt->ol_flags = PKT_RX_IPV4_HDR | PKT_RX_VLAN_PKT;
- else
- pkt->ol_flags = PKT_RX_IPV4_HDR;
-#endif
} else {
pkt->vlan_tci = ETHER_TYPE_IPv6;
pkt->l3_len = sizeof(struct ipv6_hdr);
-#ifndef RTE_NEXT_ABI
- if (vlan_enabled)
- pkt->ol_flags = PKT_RX_IPV6_HDR | PKT_RX_VLAN_PKT;
- else
- pkt->ol_flags = PKT_RX_IPV6_HDR;
-#endif
}
pkts_burst[nb_pkt] = pkt;
diff --git a/doc/guides/rel_notes/deprecation.rst b/doc/guides/rel_notes/deprecation.rst
index 991a777..639ab18 100644
--- a/doc/guides/rel_notes/deprecation.rst
+++ b/doc/guides/rel_notes/deprecation.rst
@@ -24,11 +24,6 @@ Deprecation Notices
* The field mem_location of the rte_lpm structure is deprecated and should be
removed as well as the macros RTE_LPM_HEAP and RTE_LPM_MEMZONE.
-* Significant ABI changes are planned for struct rte_mbuf, struct rte_kni_mbuf,
- and several ``PKT_RX_`` flags will be removed, to support unified packet type
- from release 2.1. Those changes may be enabled in the upcoming release 2.1
- with CONFIG_RTE_NEXT_ABI.
-
* librte_malloc library has been integrated into librte_eal. The 2.1 release
creates a dummy/empty malloc library to fulfill binaries with dynamic linking
dependencies on librte_malloc.so. Such dummy library will not be created from
diff --git a/drivers/net/cxgbe/sge.c b/drivers/net/cxgbe/sge.c
index d570d33..6eb1244 100644
--- a/drivers/net/cxgbe/sge.c
+++ b/drivers/net/cxgbe/sge.c
@@ -1299,22 +1299,14 @@ int t4_ethrx_handler(struct sge_rspq *q, const __be64 *rsp,
mbuf->port = pkt->iff;
if (pkt->l2info & htonl(F_RXF_IP)) {
-#ifdef RTE_NEXT_ABI
mbuf->packet_type = RTE_PTYPE_L3_IPV4;
-#else
- mbuf->ol_flags |= PKT_RX_IPV4_HDR;
-#endif
if (unlikely(!csum_ok))
mbuf->ol_flags |= PKT_RX_IP_CKSUM_BAD;
if ((pkt->l2info & htonl(F_RXF_UDP | F_RXF_TCP)) && !csum_ok)
mbuf->ol_flags |= PKT_RX_L4_CKSUM_BAD;
} else if (pkt->l2info & htonl(F_RXF_IP6)) {
-#ifdef RTE_NEXT_ABI
mbuf->packet_type = RTE_PTYPE_L3_IPV6;
-#else
- mbuf->ol_flags |= PKT_RX_IPV6_HDR;
-#endif
}
mbuf->port = pkt->iff;
@@ -1419,11 +1411,7 @@ static int process_responses(struct sge_rspq *q, int budget,
unmap_rx_buf(&rxq->fl);
if (cpl->l2info & htonl(F_RXF_IP)) {
-#ifdef RTE_NEXT_ABI
pkt->packet_type = RTE_PTYPE_L3_IPV4;
-#else
- pkt->ol_flags |= PKT_RX_IPV4_HDR;
-#endif
if (unlikely(!csum_ok))
pkt->ol_flags |= PKT_RX_IP_CKSUM_BAD;
@@ -1431,11 +1419,7 @@ static int process_responses(struct sge_rspq *q, int budget,
htonl(F_RXF_UDP | F_RXF_TCP)) && !csum_ok)
pkt->ol_flags |= PKT_RX_L4_CKSUM_BAD;
} else if (cpl->l2info & htonl(F_RXF_IP6)) {
-#ifdef RTE_NEXT_ABI
pkt->packet_type = RTE_PTYPE_L3_IPV6;
-#else
- pkt->ol_flags |= PKT_RX_IPV6_HDR;
-#endif
}
if (!rss_hdr->filter_tid && rss_hdr->hash_type) {
diff --git a/drivers/net/e1000/igb_rxtx.c b/drivers/net/e1000/igb_rxtx.c
index b13930e..19905fd 100644
--- a/drivers/net/e1000/igb_rxtx.c
+++ b/drivers/net/e1000/igb_rxtx.c
@@ -590,7 +590,6 @@ eth_igb_xmit_pkts(void *tx_queue, struct rte_mbuf **tx_pkts,
* RX functions
*
**********************************************************************/
-#ifdef RTE_NEXT_ABI
#define IGB_PACKET_TYPE_IPV4 0X01
#define IGB_PACKET_TYPE_IPV4_TCP 0X11
#define IGB_PACKET_TYPE_IPV4_UDP 0X21
@@ -684,35 +683,6 @@ rx_desc_hlen_type_rss_to_pkt_flags(uint32_t hl_tp_rs)
return pkt_flags;
}
-#else /* RTE_NEXT_ABI */
-static inline uint64_t
-rx_desc_hlen_type_rss_to_pkt_flags(uint32_t hl_tp_rs)
-{
- uint64_t pkt_flags;
-
- static uint64_t ip_pkt_types_map[16] = {
- 0, PKT_RX_IPV4_HDR, PKT_RX_IPV4_HDR_EXT, PKT_RX_IPV4_HDR_EXT,
- PKT_RX_IPV6_HDR, 0, 0, 0,
- PKT_RX_IPV6_HDR_EXT, 0, 0, 0,
- PKT_RX_IPV6_HDR_EXT, 0, 0, 0,
- };
-
-#if defined(RTE_LIBRTE_IEEE1588)
- static uint32_t ip_pkt_etqf_map[8] = {
- 0, 0, 0, PKT_RX_IEEE1588_PTP,
- 0, 0, 0, 0,
- };
-
- pkt_flags = (hl_tp_rs & E1000_RXDADV_PKTTYPE_ETQF) ?
- ip_pkt_etqf_map[(hl_tp_rs >> 4) & 0x07] :
- ip_pkt_types_map[(hl_tp_rs >> 4) & 0x0F];
-#else
- pkt_flags = (hl_tp_rs & E1000_RXDADV_PKTTYPE_ETQF) ? 0 :
- ip_pkt_types_map[(hl_tp_rs >> 4) & 0x0F];
-#endif
- return pkt_flags | (((hl_tp_rs & 0x0F) == 0) ? 0 : PKT_RX_RSS_HASH);
-}
-#endif /* RTE_NEXT_ABI */
static inline uint64_t
rx_desc_status_to_pkt_flags(uint32_t rx_status)
@@ -886,10 +856,8 @@ eth_igb_recv_pkts(void *rx_queue, struct rte_mbuf **rx_pkts,
pkt_flags = pkt_flags | rx_desc_status_to_pkt_flags(staterr);
pkt_flags = pkt_flags | rx_desc_error_to_pkt_flags(staterr);
rxm->ol_flags = pkt_flags;
-#ifdef RTE_NEXT_ABI
rxm->packet_type = igb_rxd_pkt_info_to_pkt_type(rxd.wb.lower.
lo_dword.hs_rss.pkt_info);
-#endif
/*
* Store the mbuf address into the next entry of the array
@@ -1124,10 +1092,8 @@ eth_igb_recv_scattered_pkts(void *rx_queue, struct rte_mbuf **rx_pkts,
pkt_flags = pkt_flags | rx_desc_status_to_pkt_flags(staterr);
pkt_flags = pkt_flags | rx_desc_error_to_pkt_flags(staterr);
first_seg->ol_flags = pkt_flags;
-#ifdef RTE_NEXT_ABI
first_seg->packet_type = igb_rxd_pkt_info_to_pkt_type(rxd.wb.
lower.lo_dword.hs_rss.pkt_info);
-#endif
/* Prefetch data of first segment, if configured to do so. */
rte_packet_prefetch((char *)first_seg->buf_addr +
diff --git a/drivers/net/enic/enic_main.c b/drivers/net/enic/enic_main.c
index f47e96c..3b8719f 100644
--- a/drivers/net/enic/enic_main.c
+++ b/drivers/net/enic/enic_main.c
@@ -423,11 +423,7 @@ static int enic_rq_indicate_buf(struct vnic_rq *rq,
rx_pkt->pkt_len = bytes_written;
if (ipv4) {
-#ifdef RTE_NEXT_ABI
rx_pkt->packet_type = RTE_PTYPE_L3_IPV4;
-#else
- rx_pkt->ol_flags |= PKT_RX_IPV4_HDR;
-#endif
if (!csum_not_calc) {
if (unlikely(!ipv4_csum_ok))
rx_pkt->ol_flags |= PKT_RX_IP_CKSUM_BAD;
@@ -436,11 +432,7 @@ static int enic_rq_indicate_buf(struct vnic_rq *rq,
rx_pkt->ol_flags |= PKT_RX_L4_CKSUM_BAD;
}
} else if (ipv6)
-#ifdef RTE_NEXT_ABI
rx_pkt->packet_type = RTE_PTYPE_L3_IPV6;
-#else
- rx_pkt->ol_flags |= PKT_RX_IPV6_HDR;
-#endif
} else {
/* Header split */
if (sop && !eop) {
@@ -453,11 +445,7 @@ static int enic_rq_indicate_buf(struct vnic_rq *rq,
*rx_pkt_bucket = rx_pkt;
rx_pkt->pkt_len = bytes_written;
if (ipv4) {
-#ifdef RTE_NEXT_ABI
rx_pkt->packet_type = RTE_PTYPE_L3_IPV4;
-#else
- rx_pkt->ol_flags |= PKT_RX_IPV4_HDR;
-#endif
if (!csum_not_calc) {
if (unlikely(!ipv4_csum_ok))
rx_pkt->ol_flags |=
@@ -469,22 +457,14 @@ static int enic_rq_indicate_buf(struct vnic_rq *rq,
PKT_RX_L4_CKSUM_BAD;
}
} else if (ipv6)
-#ifdef RTE_NEXT_ABI
rx_pkt->packet_type = RTE_PTYPE_L3_IPV6;
-#else
- rx_pkt->ol_flags |= PKT_RX_IPV6_HDR;
-#endif
} else {
/* Payload */
hdr_rx_pkt = *rx_pkt_bucket;
hdr_rx_pkt->pkt_len += bytes_written;
if (ipv4) {
-#ifdef RTE_NEXT_ABI
hdr_rx_pkt->packet_type =
RTE_PTYPE_L3_IPV4;
-#else
- hdr_rx_pkt->ol_flags |= PKT_RX_IPV4_HDR;
-#endif
if (!csum_not_calc) {
if (unlikely(!ipv4_csum_ok))
hdr_rx_pkt->ol_flags |=
@@ -496,13 +476,8 @@ static int enic_rq_indicate_buf(struct vnic_rq *rq,
PKT_RX_L4_CKSUM_BAD;
}
} else if (ipv6)
-#ifdef RTE_NEXT_ABI
hdr_rx_pkt->packet_type =
RTE_PTYPE_L3_IPV6;
-#else
- hdr_rx_pkt->ol_flags |= PKT_RX_IPV6_HDR;
-#endif
-
}
}
}
diff --git a/drivers/net/fm10k/fm10k_rxtx.c b/drivers/net/fm10k/fm10k_rxtx.c
index b5fa2e6..d3f7b89 100644
--- a/drivers/net/fm10k/fm10k_rxtx.c
+++ b/drivers/net/fm10k/fm10k_rxtx.c
@@ -68,7 +68,6 @@ static inline void dump_rxd(union fm10k_rx_desc *rxd)
static inline void
rx_desc_to_ol_flags(struct rte_mbuf *m, const union fm10k_rx_desc *d)
{
-#ifdef RTE_NEXT_ABI
static const uint32_t
ptype_table[FM10K_RXD_PKTTYPE_MASK >> FM10K_RXD_PKTTYPE_SHIFT]
__rte_cache_aligned = {
@@ -91,14 +90,6 @@ rx_desc_to_ol_flags(struct rte_mbuf *m, const union fm10k_rx_desc *d)
m->packet_type = ptype_table[(d->w.pkt_info & FM10K_RXD_PKTTYPE_MASK)
>> FM10K_RXD_PKTTYPE_SHIFT];
-#else /* RTE_NEXT_ABI */
- uint16_t ptype;
- static const uint16_t pt_lut[] = { 0,
- PKT_RX_IPV4_HDR, PKT_RX_IPV4_HDR_EXT,
- PKT_RX_IPV6_HDR, PKT_RX_IPV6_HDR_EXT,
- 0, 0, 0
- };
-#endif /* RTE_NEXT_ABI */
if (d->w.pkt_info & FM10K_RXD_RSSTYPE_MASK)
m->ol_flags |= PKT_RX_RSS_HASH;
@@ -121,12 +112,6 @@ rx_desc_to_ol_flags(struct rte_mbuf *m, const union fm10k_rx_desc *d)
if (unlikely(d->d.staterr & FM10K_RXD_STATUS_RXE))
m->ol_flags |= PKT_RX_RECIP_ERR;
-
-#ifndef RTE_NEXT_ABI
- ptype = (d->d.data & FM10K_RXD_PKTTYPE_MASK_L3) >>
- FM10K_RXD_PKTTYPE_SHIFT;
- m->ol_flags |= pt_lut[(uint8_t)ptype];
-#endif
}
uint16_t
diff --git a/drivers/net/i40e/i40e_rxtx.c b/drivers/net/i40e/i40e_rxtx.c
index eae4ab0..fd656d5 100644
--- a/drivers/net/i40e/i40e_rxtx.c
+++ b/drivers/net/i40e/i40e_rxtx.c
@@ -188,11 +188,9 @@ i40e_get_iee15888_flags(struct rte_mbuf *mb, uint64_t qword)
| I40E_RXD_QW1_STATUS_TSYNINDX_MASK))
>> I40E_RX_DESC_STATUS_TSYNINDX_SHIFT;
-#ifdef RTE_NEXT_ABI
if ((mb->packet_type & RTE_PTYPE_L2_MASK)
== RTE_PTYPE_L2_ETHER_TIMESYNC)
pkt_flags = PKT_RX_IEEE1588_PTP;
-#endif
if (tsyn & 0x04) {
pkt_flags |= PKT_RX_IEEE1588_TMST;
mb->timesync = tsyn & 0x03;
@@ -202,7 +200,6 @@ i40e_get_iee15888_flags(struct rte_mbuf *mb, uint64_t qword)
}
#endif
-#ifdef RTE_NEXT_ABI
/* For each value it means, datasheet of hardware can tell more details */
static inline uint32_t
i40e_rxd_pkt_type_mapping(uint8_t ptype)
@@ -735,275 +732,6 @@ i40e_rxd_pkt_type_mapping(uint8_t ptype)
return ptype_table[ptype];
}
-#else /* RTE_NEXT_ABI */
-/* Translate pkt types to pkt flags */
-static inline uint64_t
-i40e_rxd_ptype_to_pkt_flags(uint64_t qword)
-{
- uint8_t ptype = (uint8_t)((qword & I40E_RXD_QW1_PTYPE_MASK) >>
- I40E_RXD_QW1_PTYPE_SHIFT);
- static const uint64_t ip_ptype_map[I40E_MAX_PKT_TYPE] = {
- 0, /* PTYPE 0 */
- 0, /* PTYPE 1 */
- PKT_RX_IEEE1588_PTP, /* PTYPE 2 */
- 0, /* PTYPE 3 */
- 0, /* PTYPE 4 */
- 0, /* PTYPE 5 */
- 0, /* PTYPE 6 */
- 0, /* PTYPE 7 */
- 0, /* PTYPE 8 */
- 0, /* PTYPE 9 */
- 0, /* PTYPE 10 */
- 0, /* PTYPE 11 */
- 0, /* PTYPE 12 */
- 0, /* PTYPE 13 */
- 0, /* PTYPE 14 */
- 0, /* PTYPE 15 */
- 0, /* PTYPE 16 */
- 0, /* PTYPE 17 */
- 0, /* PTYPE 18 */
- 0, /* PTYPE 19 */
- 0, /* PTYPE 20 */
- 0, /* PTYPE 21 */
- PKT_RX_IPV4_HDR, /* PTYPE 22 */
- PKT_RX_IPV4_HDR, /* PTYPE 23 */
- PKT_RX_IPV4_HDR, /* PTYPE 24 */
- 0, /* PTYPE 25 */
- PKT_RX_IPV4_HDR, /* PTYPE 26 */
- PKT_RX_IPV4_HDR, /* PTYPE 27 */
- PKT_RX_IPV4_HDR, /* PTYPE 28 */
- PKT_RX_IPV4_HDR_EXT, /* PTYPE 29 */
- PKT_RX_IPV4_HDR_EXT, /* PTYPE 30 */
- PKT_RX_IPV4_HDR_EXT, /* PTYPE 31 */
- 0, /* PTYPE 32 */
- PKT_RX_IPV4_HDR_EXT, /* PTYPE 33 */
- PKT_RX_IPV4_HDR_EXT, /* PTYPE 34 */
- PKT_RX_IPV4_HDR_EXT, /* PTYPE 35 */
- PKT_RX_IPV4_HDR_EXT, /* PTYPE 36 */
- PKT_RX_IPV4_HDR_EXT, /* PTYPE 37 */
- PKT_RX_IPV4_HDR_EXT, /* PTYPE 38 */
- 0, /* PTYPE 39 */
- PKT_RX_IPV4_HDR_EXT, /* PTYPE 40 */
- PKT_RX_IPV4_HDR_EXT, /* PTYPE 41 */
- PKT_RX_IPV4_HDR_EXT, /* PTYPE 42 */
- PKT_RX_IPV4_HDR_EXT, /* PTYPE 43 */
- PKT_RX_IPV4_HDR_EXT, /* PTYPE 44 */
- PKT_RX_IPV4_HDR_EXT, /* PTYPE 45 */
- PKT_RX_IPV4_HDR_EXT, /* PTYPE 46 */
- 0, /* PTYPE 47 */
- PKT_RX_IPV4_HDR_EXT, /* PTYPE 48 */
- PKT_RX_IPV4_HDR_EXT, /* PTYPE 49 */
- PKT_RX_IPV4_HDR_EXT, /* PTYPE 50 */
- PKT_RX_IPV4_HDR_EXT, /* PTYPE 51 */
- PKT_RX_IPV4_HDR_EXT, /* PTYPE 52 */
- PKT_RX_IPV4_HDR_EXT, /* PTYPE 53 */
- 0, /* PTYPE 54 */
- PKT_RX_IPV4_HDR_EXT, /* PTYPE 55 */
- PKT_RX_IPV4_HDR_EXT, /* PTYPE 56 */
- PKT_RX_IPV4_HDR_EXT, /* PTYPE 57 */
- PKT_RX_IPV4_HDR_EXT, /* PTYPE 58 */
- PKT_RX_TUNNEL_IPV4_HDR, /* PTYPE 59 */
- PKT_RX_TUNNEL_IPV4_HDR, /* PTYPE 60 */
- PKT_RX_TUNNEL_IPV4_HDR, /* PTYPE 61 */
- 0, /* PTYPE 62 */
- PKT_RX_TUNNEL_IPV4_HDR, /* PTYPE 63 */
- PKT_RX_TUNNEL_IPV4_HDR, /* PTYPE 64 */
- PKT_RX_TUNNEL_IPV4_HDR, /* PTYPE 65 */
- PKT_RX_TUNNEL_IPV4_HDR, /* PTYPE 66 */
- PKT_RX_TUNNEL_IPV4_HDR, /* PTYPE 67 */
- PKT_RX_TUNNEL_IPV4_HDR, /* PTYPE 68 */
- 0, /* PTYPE 69 */
- PKT_RX_TUNNEL_IPV4_HDR, /* PTYPE 70 */
- PKT_RX_TUNNEL_IPV4_HDR, /* PTYPE 71 */
- PKT_RX_TUNNEL_IPV4_HDR, /* PTYPE 72 */
- PKT_RX_TUNNEL_IPV4_HDR, /* PTYPE 73 */
- PKT_RX_TUNNEL_IPV4_HDR, /* PTYPE 74 */
- PKT_RX_TUNNEL_IPV4_HDR, /* PTYPE 75 */
- PKT_RX_TUNNEL_IPV4_HDR, /* PTYPE 76 */
- 0, /* PTYPE 77 */
- PKT_RX_TUNNEL_IPV4_HDR, /* PTYPE 78 */
- PKT_RX_TUNNEL_IPV4_HDR, /* PTYPE 79 */
- PKT_RX_TUNNEL_IPV4_HDR, /* PTYPE 80 */
- PKT_RX_TUNNEL_IPV4_HDR, /* PTYPE 81 */
- PKT_RX_TUNNEL_IPV4_HDR, /* PTYPE 82 */
- PKT_RX_TUNNEL_IPV4_HDR, /* PTYPE 83 */
- 0, /* PTYPE 84 */
- PKT_RX_TUNNEL_IPV4_HDR, /* PTYPE 85 */
- PKT_RX_TUNNEL_IPV4_HDR, /* PTYPE 86 */
- PKT_RX_IPV4_HDR_EXT, /* PTYPE 87 */
- PKT_RX_IPV6_HDR, /* PTYPE 88 */
- PKT_RX_IPV6_HDR, /* PTYPE 89 */
- PKT_RX_IPV6_HDR, /* PTYPE 90 */
- 0, /* PTYPE 91 */
- PKT_RX_IPV6_HDR, /* PTYPE 92 */
- PKT_RX_IPV6_HDR, /* PTYPE 93 */
- PKT_RX_IPV6_HDR, /* PTYPE 94 */
- PKT_RX_IPV6_HDR_EXT, /* PTYPE 95 */
- PKT_RX_IPV6_HDR_EXT, /* PTYPE 96 */
- PKT_RX_IPV6_HDR_EXT, /* PTYPE 97 */
- 0, /* PTYPE 98 */
- PKT_RX_IPV6_HDR_EXT, /* PTYPE 99 */
- PKT_RX_IPV6_HDR_EXT, /* PTYPE 100 */
- PKT_RX_IPV6_HDR_EXT, /* PTYPE 101 */
- PKT_RX_IPV6_HDR_EXT, /* PTYPE 102 */
- PKT_RX_IPV6_HDR_EXT, /* PTYPE 103 */
- PKT_RX_IPV6_HDR_EXT, /* PTYPE 104 */
- 0, /* PTYPE 105 */
- PKT_RX_IPV6_HDR_EXT, /* PTYPE 106 */
- PKT_RX_IPV6_HDR_EXT, /* PTYPE 107 */
- PKT_RX_IPV6_HDR_EXT, /* PTYPE 108 */
- PKT_RX_IPV6_HDR_EXT, /* PTYPE 109 */
- PKT_RX_IPV6_HDR_EXT, /* PTYPE 110 */
- PKT_RX_IPV6_HDR_EXT, /* PTYPE 111 */
- PKT_RX_IPV6_HDR_EXT, /* PTYPE 112 */
- 0, /* PTYPE 113 */
- PKT_RX_IPV6_HDR_EXT, /* PTYPE 114 */
- PKT_RX_IPV6_HDR_EXT, /* PTYPE 115 */
- PKT_RX_IPV6_HDR_EXT, /* PTYPE 116 */
- PKT_RX_IPV6_HDR_EXT, /* PTYPE 117 */
- PKT_RX_IPV6_HDR_EXT, /* PTYPE 118 */
- PKT_RX_IPV6_HDR_EXT, /* PTYPE 119 */
- 0, /* PTYPE 120 */
- PKT_RX_IPV6_HDR_EXT, /* PTYPE 121 */
- PKT_RX_IPV6_HDR_EXT, /* PTYPE 122 */
- PKT_RX_IPV6_HDR_EXT, /* PTYPE 123 */
- PKT_RX_IPV6_HDR_EXT, /* PTYPE 124 */
- PKT_RX_TUNNEL_IPV6_HDR, /* PTYPE 125 */
- PKT_RX_TUNNEL_IPV6_HDR, /* PTYPE 126 */
- PKT_RX_TUNNEL_IPV6_HDR, /* PTYPE 127 */
- 0, /* PTYPE 128 */
- PKT_RX_TUNNEL_IPV6_HDR, /* PTYPE 129 */
- PKT_RX_TUNNEL_IPV6_HDR, /* PTYPE 130 */
- PKT_RX_TUNNEL_IPV6_HDR, /* PTYPE 131 */
- PKT_RX_TUNNEL_IPV6_HDR, /* PTYPE 132 */
- PKT_RX_TUNNEL_IPV6_HDR, /* PTYPE 133 */
- PKT_RX_TUNNEL_IPV6_HDR, /* PTYPE 134 */
- 0, /* PTYPE 135 */
- PKT_RX_TUNNEL_IPV6_HDR, /* PTYPE 136 */
- PKT_RX_TUNNEL_IPV6_HDR, /* PTYPE 137 */
- PKT_RX_TUNNEL_IPV6_HDR, /* PTYPE 138 */
- PKT_RX_TUNNEL_IPV6_HDR, /* PTYPE 139 */
- PKT_RX_TUNNEL_IPV6_HDR, /* PTYPE 140 */
- PKT_RX_TUNNEL_IPV6_HDR, /* PTYPE 141 */
- PKT_RX_TUNNEL_IPV6_HDR, /* PTYPE 142 */
- 0, /* PTYPE 143 */
- PKT_RX_TUNNEL_IPV6_HDR, /* PTYPE 144 */
- PKT_RX_TUNNEL_IPV6_HDR, /* PTYPE 145 */
- PKT_RX_TUNNEL_IPV6_HDR, /* PTYPE 146 */
- PKT_RX_TUNNEL_IPV6_HDR, /* PTYPE 147 */
- PKT_RX_TUNNEL_IPV6_HDR, /* PTYPE 148 */
- PKT_RX_TUNNEL_IPV6_HDR, /* PTYPE 149 */
- 0, /* PTYPE 150 */
- PKT_RX_TUNNEL_IPV6_HDR, /* PTYPE 151 */
- PKT_RX_TUNNEL_IPV6_HDR, /* PTYPE 152 */
- PKT_RX_IPV6_HDR_EXT, /* PTYPE 153 */
- 0, /* PTYPE 154 */
- 0, /* PTYPE 155 */
- 0, /* PTYPE 156 */
- 0, /* PTYPE 157 */
- 0, /* PTYPE 158 */
- 0, /* PTYPE 159 */
- 0, /* PTYPE 160 */
- 0, /* PTYPE 161 */
- 0, /* PTYPE 162 */
- 0, /* PTYPE 163 */
- 0, /* PTYPE 164 */
- 0, /* PTYPE 165 */
- 0, /* PTYPE 166 */
- 0, /* PTYPE 167 */
- 0, /* PTYPE 168 */
- 0, /* PTYPE 169 */
- 0, /* PTYPE 170 */
- 0, /* PTYPE 171 */
- 0, /* PTYPE 172 */
- 0, /* PTYPE 173 */
- 0, /* PTYPE 174 */
- 0, /* PTYPE 175 */
- 0, /* PTYPE 176 */
- 0, /* PTYPE 177 */
- 0, /* PTYPE 178 */
- 0, /* PTYPE 179 */
- 0, /* PTYPE 180 */
- 0, /* PTYPE 181 */
- 0, /* PTYPE 182 */
- 0, /* PTYPE 183 */
- 0, /* PTYPE 184 */
- 0, /* PTYPE 185 */
- 0, /* PTYPE 186 */
- 0, /* PTYPE 187 */
- 0, /* PTYPE 188 */
- 0, /* PTYPE 189 */
- 0, /* PTYPE 190 */
- 0, /* PTYPE 191 */
- 0, /* PTYPE 192 */
- 0, /* PTYPE 193 */
- 0, /* PTYPE 194 */
- 0, /* PTYPE 195 */
- 0, /* PTYPE 196 */
- 0, /* PTYPE 197 */
- 0, /* PTYPE 198 */
- 0, /* PTYPE 199 */
- 0, /* PTYPE 200 */
- 0, /* PTYPE 201 */
- 0, /* PTYPE 202 */
- 0, /* PTYPE 203 */
- 0, /* PTYPE 204 */
- 0, /* PTYPE 205 */
- 0, /* PTYPE 206 */
- 0, /* PTYPE 207 */
- 0, /* PTYPE 208 */
- 0, /* PTYPE 209 */
- 0, /* PTYPE 210 */
- 0, /* PTYPE 211 */
- 0, /* PTYPE 212 */
- 0, /* PTYPE 213 */
- 0, /* PTYPE 214 */
- 0, /* PTYPE 215 */
- 0, /* PTYPE 216 */
- 0, /* PTYPE 217 */
- 0, /* PTYPE 218 */
- 0, /* PTYPE 219 */
- 0, /* PTYPE 220 */
- 0, /* PTYPE 221 */
- 0, /* PTYPE 222 */
- 0, /* PTYPE 223 */
- 0, /* PTYPE 224 */
- 0, /* PTYPE 225 */
- 0, /* PTYPE 226 */
- 0, /* PTYPE 227 */
- 0, /* PTYPE 228 */
- 0, /* PTYPE 229 */
- 0, /* PTYPE 230 */
- 0, /* PTYPE 231 */
- 0, /* PTYPE 232 */
- 0, /* PTYPE 233 */
- 0, /* PTYPE 234 */
- 0, /* PTYPE 235 */
- 0, /* PTYPE 236 */
- 0, /* PTYPE 237 */
- 0, /* PTYPE 238 */
- 0, /* PTYPE 239 */
- 0, /* PTYPE 240 */
- 0, /* PTYPE 241 */
- 0, /* PTYPE 242 */
- 0, /* PTYPE 243 */
- 0, /* PTYPE 244 */
- 0, /* PTYPE 245 */
- 0, /* PTYPE 246 */
- 0, /* PTYPE 247 */
- 0, /* PTYPE 248 */
- 0, /* PTYPE 249 */
- 0, /* PTYPE 250 */
- 0, /* PTYPE 251 */
- 0, /* PTYPE 252 */
- 0, /* PTYPE 253 */
- 0, /* PTYPE 254 */
- 0, /* PTYPE 255 */
- };
-
- return ip_ptype_map[ptype];
-}
-#endif /* RTE_NEXT_ABI */
#define I40E_RX_DESC_EXT_STATUS_FLEXBH_MASK 0x03
#define I40E_RX_DESC_EXT_STATUS_FLEXBH_FD_ID 0x01
@@ -1292,18 +1020,10 @@ i40e_rx_scan_hw_ring(struct i40e_rx_queue *rxq)
i40e_rxd_to_vlan_tci(mb, &rxdp[j]);
pkt_flags = i40e_rxd_status_to_pkt_flags(qword1);
pkt_flags |= i40e_rxd_error_to_pkt_flags(qword1);
-#ifdef RTE_NEXT_ABI
mb->packet_type =
i40e_rxd_pkt_type_mapping((uint8_t)((qword1 &
I40E_RXD_QW1_PTYPE_MASK) >>
I40E_RXD_QW1_PTYPE_SHIFT));
-#else
- pkt_flags |= i40e_rxd_ptype_to_pkt_flags(qword1);
-
- mb->packet_type = (uint16_t)((qword1 &
- I40E_RXD_QW1_PTYPE_MASK) >>
- I40E_RXD_QW1_PTYPE_SHIFT);
-#endif /* RTE_NEXT_ABI */
if (pkt_flags & PKT_RX_RSS_HASH)
mb->hash.rss = rte_le_to_cpu_32(\
rxdp[j].wb.qword0.hi_dword.rss);
@@ -1549,15 +1269,9 @@ i40e_recv_pkts(void *rx_queue, struct rte_mbuf **rx_pkts, uint16_t nb_pkts)
i40e_rxd_to_vlan_tci(rxm, &rxd);
pkt_flags = i40e_rxd_status_to_pkt_flags(qword1);
pkt_flags |= i40e_rxd_error_to_pkt_flags(qword1);
-#ifdef RTE_NEXT_ABI
rxm->packet_type =
i40e_rxd_pkt_type_mapping((uint8_t)((qword1 &
I40E_RXD_QW1_PTYPE_MASK) >> I40E_RXD_QW1_PTYPE_SHIFT));
-#else
- pkt_flags |= i40e_rxd_ptype_to_pkt_flags(qword1);
- rxm->packet_type = (uint16_t)((qword1 & I40E_RXD_QW1_PTYPE_MASK) >>
- I40E_RXD_QW1_PTYPE_SHIFT);
-#endif /* RTE_NEXT_ABI */
if (pkt_flags & PKT_RX_RSS_HASH)
rxm->hash.rss =
rte_le_to_cpu_32(rxd.wb.qword0.hi_dword.rss);
@@ -1717,16 +1431,9 @@ i40e_recv_scattered_pkts(void *rx_queue,
i40e_rxd_to_vlan_tci(first_seg, &rxd);
pkt_flags = i40e_rxd_status_to_pkt_flags(qword1);
pkt_flags |= i40e_rxd_error_to_pkt_flags(qword1);
-#ifdef RTE_NEXT_ABI
first_seg->packet_type =
i40e_rxd_pkt_type_mapping((uint8_t)((qword1 &
I40E_RXD_QW1_PTYPE_MASK) >> I40E_RXD_QW1_PTYPE_SHIFT));
-#else
- pkt_flags |= i40e_rxd_ptype_to_pkt_flags(qword1);
- first_seg->packet_type = (uint16_t)((qword1 &
- I40E_RXD_QW1_PTYPE_MASK) >>
- I40E_RXD_QW1_PTYPE_SHIFT);
-#endif /* RTE_NEXT_ABI */
if (pkt_flags & PKT_RX_RSS_HASH)
rxm->hash.rss =
rte_le_to_cpu_32(rxd.wb.qword0.hi_dword.rss);
diff --git a/drivers/net/ixgbe/ixgbe_rxtx.c b/drivers/net/ixgbe/ixgbe_rxtx.c
index 91023b9..a710102 100644
--- a/drivers/net/ixgbe/ixgbe_rxtx.c
+++ b/drivers/net/ixgbe/ixgbe_rxtx.c
@@ -864,7 +864,6 @@ end_of_tx:
* RX functions
*
**********************************************************************/
-#ifdef RTE_NEXT_ABI
#define IXGBE_PACKET_TYPE_IPV4 0X01
#define IXGBE_PACKET_TYPE_IPV4_TCP 0X11
#define IXGBE_PACKET_TYPE_IPV4_UDP 0X21
@@ -967,43 +966,6 @@ ixgbe_rxd_pkt_info_to_pkt_flags(uint16_t pkt_info)
return ip_rss_types_map[pkt_info & 0XF];
#endif
}
-#else /* RTE_NEXT_ABI */
-static inline uint64_t
-rx_desc_hlen_type_rss_to_pkt_flags(uint32_t hl_tp_rs)
-{
- uint64_t pkt_flags;
-
- static const uint64_t ip_pkt_types_map[16] = {
- 0, PKT_RX_IPV4_HDR, PKT_RX_IPV4_HDR_EXT, PKT_RX_IPV4_HDR_EXT,
- PKT_RX_IPV6_HDR, 0, 0, 0,
- PKT_RX_IPV6_HDR_EXT, 0, 0, 0,
- PKT_RX_IPV6_HDR_EXT, 0, 0, 0,
- };
-
- static const uint64_t ip_rss_types_map[16] = {
- 0, PKT_RX_RSS_HASH, PKT_RX_RSS_HASH, PKT_RX_RSS_HASH,
- 0, PKT_RX_RSS_HASH, 0, PKT_RX_RSS_HASH,
- PKT_RX_RSS_HASH, 0, 0, 0,
- 0, 0, 0, PKT_RX_FDIR,
- };
-
-#ifdef RTE_LIBRTE_IEEE1588
- static uint64_t ip_pkt_etqf_map[8] = {
- 0, 0, 0, PKT_RX_IEEE1588_PTP,
- 0, 0, 0, 0,
- };
-
- pkt_flags = (hl_tp_rs & IXGBE_RXDADV_PKTTYPE_ETQF) ?
- ip_pkt_etqf_map[(hl_tp_rs >> 4) & 0x07] :
- ip_pkt_types_map[(hl_tp_rs >> 4) & 0x0F];
-#else
- pkt_flags = (hl_tp_rs & IXGBE_RXDADV_PKTTYPE_ETQF) ? 0 :
- ip_pkt_types_map[(hl_tp_rs >> 4) & 0x0F];
-
-#endif
- return pkt_flags | ip_rss_types_map[hl_tp_rs & 0xF];
-}
-#endif /* RTE_NEXT_ABI */
static inline uint64_t
rx_desc_status_to_pkt_flags(uint32_t rx_status)
@@ -1058,13 +1020,9 @@ ixgbe_rx_scan_hw_ring(struct ixgbe_rx_queue *rxq)
struct rte_mbuf *mb;
uint16_t pkt_len;
uint64_t pkt_flags;
-#ifdef RTE_NEXT_ABI
int nb_dd;
uint32_t s[LOOK_AHEAD];
uint16_t pkt_info[LOOK_AHEAD];
-#else
- int s[LOOK_AHEAD], nb_dd;
-#endif /* RTE_NEXT_ABI */
int i, j, nb_rx = 0;
uint32_t status;
@@ -1088,11 +1046,9 @@ ixgbe_rx_scan_hw_ring(struct ixgbe_rx_queue *rxq)
for (j = LOOK_AHEAD-1; j >= 0; --j)
s[j] = rte_le_to_cpu_32(rxdp[j].wb.upper.status_error);
-#ifdef RTE_NEXT_ABI
for (j = LOOK_AHEAD - 1; j >= 0; --j)
pkt_info[j] = rxdp[j].wb.lower.lo_dword.
hs_rss.pkt_info;
-#endif /* RTE_NEXT_ABI */
/* Compute how many status bits were set */
nb_dd = 0;
@@ -1111,7 +1067,6 @@ ixgbe_rx_scan_hw_ring(struct ixgbe_rx_queue *rxq)
mb->vlan_tci = rte_le_to_cpu_16(rxdp[j].wb.upper.vlan);
/* convert descriptor fields to rte mbuf flags */
-#ifdef RTE_NEXT_ABI
pkt_flags = rx_desc_status_to_pkt_flags(s[j]);
pkt_flags |= rx_desc_error_to_pkt_flags(s[j]);
pkt_flags |=
@@ -1119,15 +1074,6 @@ ixgbe_rx_scan_hw_ring(struct ixgbe_rx_queue *rxq)
mb->ol_flags = pkt_flags;
mb->packet_type =
ixgbe_rxd_pkt_info_to_pkt_type(pkt_info[j]);
-#else /* RTE_NEXT_ABI */
- pkt_flags = rx_desc_hlen_type_rss_to_pkt_flags(
- rte_le_to_cpu_32(
- rxdp[j].wb.lower.lo_dword.data));
- /* reuse status field from scan list */
- pkt_flags |= rx_desc_status_to_pkt_flags(s[j]);
- pkt_flags |= rx_desc_error_to_pkt_flags(s[j]);
- mb->ol_flags = pkt_flags;
-#endif /* RTE_NEXT_ABI */
if (likely(pkt_flags & PKT_RX_RSS_HASH))
mb->hash.rss = rte_le_to_cpu_32(
@@ -1328,11 +1274,7 @@ ixgbe_recv_pkts(void *rx_queue, struct rte_mbuf **rx_pkts,
union ixgbe_adv_rx_desc rxd;
uint64_t dma_addr;
uint32_t staterr;
-#ifdef RTE_NEXT_ABI
uint32_t pkt_info;
-#else
- uint32_t hlen_type_rss;
-#endif
uint16_t pkt_len;
uint16_t rx_id;
uint16_t nb_rx;
@@ -1450,7 +1392,6 @@ ixgbe_recv_pkts(void *rx_queue, struct rte_mbuf **rx_pkts,
rxm->data_len = pkt_len;
rxm->port = rxq->port_id;
-#ifdef RTE_NEXT_ABI
pkt_info = rte_le_to_cpu_32(rxd.wb.lower.lo_dword.hs_rss.
pkt_info);
/* Only valid if PKT_RX_VLAN_PKT set in pkt_flags */
@@ -1462,16 +1403,6 @@ ixgbe_recv_pkts(void *rx_queue, struct rte_mbuf **rx_pkts,
ixgbe_rxd_pkt_info_to_pkt_flags(pkt_info);
rxm->ol_flags = pkt_flags;
rxm->packet_type = ixgbe_rxd_pkt_info_to_pkt_type(pkt_info);
-#else /* RTE_NEXT_ABI */
- hlen_type_rss = rte_le_to_cpu_32(rxd.wb.lower.lo_dword.data);
- /* Only valid if PKT_RX_VLAN_PKT set in pkt_flags */
- rxm->vlan_tci = rte_le_to_cpu_16(rxd.wb.upper.vlan);
-
- pkt_flags = rx_desc_hlen_type_rss_to_pkt_flags(hlen_type_rss);
- pkt_flags = pkt_flags | rx_desc_status_to_pkt_flags(staterr);
- pkt_flags = pkt_flags | rx_desc_error_to_pkt_flags(staterr);
- rxm->ol_flags = pkt_flags;
-#endif /* RTE_NEXT_ABI */
if (likely(pkt_flags & PKT_RX_RSS_HASH))
rxm->hash.rss = rte_le_to_cpu_32(
@@ -1547,7 +1478,6 @@ ixgbe_fill_cluster_head_buf(
uint8_t port_id,
uint32_t staterr)
{
-#ifdef RTE_NEXT_ABI
uint16_t pkt_info;
uint64_t pkt_flags;
@@ -1563,23 +1493,6 @@ ixgbe_fill_cluster_head_buf(
pkt_flags |= ixgbe_rxd_pkt_info_to_pkt_flags(pkt_info);
head->ol_flags = pkt_flags;
head->packet_type = ixgbe_rxd_pkt_info_to_pkt_type(pkt_info);
-#else /* RTE_NEXT_ABI */
- uint32_t hlen_type_rss;
- uint64_t pkt_flags;
-
- head->port = port_id;
-
- /*
- * The vlan_tci field is only valid when PKT_RX_VLAN_PKT is
- * set in the pkt_flags field.
- */
- head->vlan_tci = rte_le_to_cpu_16(desc->wb.upper.vlan);
- hlen_type_rss = rte_le_to_cpu_32(desc->wb.lower.lo_dword.data);
- pkt_flags = rx_desc_hlen_type_rss_to_pkt_flags(hlen_type_rss);
- pkt_flags |= rx_desc_status_to_pkt_flags(staterr);
- pkt_flags |= rx_desc_error_to_pkt_flags(staterr);
- head->ol_flags = pkt_flags;
-#endif /* RTE_NEXT_ABI */
if (likely(pkt_flags & PKT_RX_RSS_HASH))
head->hash.rss = rte_le_to_cpu_32(desc->wb.lower.hi_dword.rss);
diff --git a/drivers/net/ixgbe/ixgbe_rxtx_vec.c b/drivers/net/ixgbe/ixgbe_rxtx_vec.c
index cf25a53..6979b1e 100644
--- a/drivers/net/ixgbe/ixgbe_rxtx_vec.c
+++ b/drivers/net/ixgbe/ixgbe_rxtx_vec.c
@@ -140,19 +140,11 @@ ixgbe_rxq_rearm(struct ixgbe_rx_queue *rxq)
*/
#ifdef RTE_IXGBE_RX_OLFLAGS_ENABLE
-#ifndef RTE_NEXT_ABI
-#define OLFLAGS_MASK ((uint16_t)(PKT_RX_VLAN_PKT | PKT_RX_IPV4_HDR |\
- PKT_RX_IPV4_HDR_EXT | PKT_RX_IPV6_HDR |\
- PKT_RX_IPV6_HDR_EXT))
-#define PTYPE_SHIFT (1)
-#endif /* RTE_NEXT_ABI */
-
#define VTAG_SHIFT (3)
static inline void
desc_to_olflags_v(__m128i descs[4], struct rte_mbuf **rx_pkts)
{
-#ifdef RTE_NEXT_ABI
__m128i ptype0, ptype1, vtag0, vtag1;
union {
uint16_t e[4];
@@ -190,50 +182,6 @@ desc_to_olflags_v(__m128i descs[4], struct rte_mbuf **rx_pkts)
vtag1 = _mm_or_si128(ptype0, vtag1);
vol.dword = _mm_cvtsi128_si64(vtag1);
-#else
- __m128i ptype0, ptype1, vtag0, vtag1;
- union {
- uint16_t e[4];
- uint64_t dword;
- } vol;
-
- /* pkt type + vlan olflags mask */
- const __m128i pkttype_msk = _mm_set_epi16(
- 0x0000, 0x0000, 0x0000, 0x0000,
- OLFLAGS_MASK, OLFLAGS_MASK, OLFLAGS_MASK, OLFLAGS_MASK);
-
- /* mask everything except rss type */
- const __m128i rsstype_msk = _mm_set_epi16(
- 0x0000, 0x0000, 0x0000, 0x0000,
- 0x000F, 0x000F, 0x000F, 0x000F);
-
- /* rss type to PKT_RX_RSS_HASH translation */
- const __m128i rss_flags = _mm_set_epi8(PKT_RX_FDIR, 0, 0, 0,
- 0, 0, 0, PKT_RX_RSS_HASH,
- PKT_RX_RSS_HASH, 0, PKT_RX_RSS_HASH, 0,
- PKT_RX_RSS_HASH, PKT_RX_RSS_HASH, PKT_RX_RSS_HASH, 0);
-
- ptype0 = _mm_unpacklo_epi16(descs[0], descs[1]);
- ptype1 = _mm_unpacklo_epi16(descs[2], descs[3]);
- vtag0 = _mm_unpackhi_epi16(descs[0], descs[1]);
- vtag1 = _mm_unpackhi_epi16(descs[2], descs[3]);
-
- ptype1 = _mm_unpacklo_epi32(ptype0, ptype1);
- vtag1 = _mm_unpacklo_epi32(vtag0, vtag1);
-
- ptype0 = _mm_and_si128(ptype1, rsstype_msk);
- ptype0 = _mm_shuffle_epi8(rss_flags, ptype0);
-
- ptype1 = _mm_slli_epi16(ptype1, PTYPE_SHIFT);
- vtag1 = _mm_srli_epi16(vtag1, VTAG_SHIFT);
-
- ptype1 = _mm_or_si128(ptype1, vtag1);
- ptype1 = _mm_and_si128(ptype1, pkttype_msk);
-
- ptype0 = _mm_or_si128(ptype0, ptype1);
-
- vol.dword = _mm_cvtsi128_si64(ptype0);
-#endif /* RTE_NEXT_ABI */
rx_pkts[0]->ol_flags = vol.e[0];
rx_pkts[1]->ol_flags = vol.e[1];
@@ -264,7 +212,6 @@ _recv_raw_pkts_vec(struct ixgbe_rx_queue *rxq, struct rte_mbuf **rx_pkts,
int pos;
uint64_t var;
__m128i shuf_msk;
-#ifdef RTE_NEXT_ABI
__m128i crc_adjust = _mm_set_epi16(
0, 0, 0, /* ignore non-length fields */
-rxq->crc_len, /* sub crc on data_len */
@@ -275,16 +222,6 @@ _recv_raw_pkts_vec(struct ixgbe_rx_queue *rxq, struct rte_mbuf **rx_pkts,
__m128i dd_check, eop_check;
__m128i desc_mask = _mm_set_epi32(0xFFFFFFFF, 0xFFFFFFFF,
0xFFFFFFFF, 0xFFFF07F0);
-#else
- __m128i crc_adjust = _mm_set_epi16(
- 0, 0, 0, 0, /* ignore non-length fields */
- 0, /* ignore high-16bits of pkt_len */
- -rxq->crc_len, /* sub crc on pkt_len */
- -rxq->crc_len, /* sub crc on data_len */
- 0 /* ignore pkt_type field */
- );
- __m128i dd_check, eop_check;
-#endif /* RTE_NEXT_ABI */
if (unlikely(nb_pkts < RTE_IXGBE_VPMD_RX_BURST))
return 0;
@@ -313,7 +250,6 @@ _recv_raw_pkts_vec(struct ixgbe_rx_queue *rxq, struct rte_mbuf **rx_pkts,
eop_check = _mm_set_epi64x(0x0000000200000002LL, 0x0000000200000002LL);
/* mask to shuffle from desc. to mbuf */
-#ifdef RTE_NEXT_ABI
shuf_msk = _mm_set_epi8(
7, 6, 5, 4, /* octet 4~7, 32bits rss */
15, 14, /* octet 14~15, low 16 bits vlan_macip */
@@ -324,23 +260,11 @@ _recv_raw_pkts_vec(struct ixgbe_rx_queue *rxq, struct rte_mbuf **rx_pkts,
1, /* octet 1, 8 bits pkt_type field */
0 /* octet 0, 4 bits offset 4 pkt_type field */
);
-#else
- shuf_msk = _mm_set_epi8(
- 7, 6, 5, 4, /* octet 4~7, 32bits rss */
- 0xFF, 0xFF, /* skip high 16 bits vlan_macip, zero out */
- 15, 14, /* octet 14~15, low 16 bits vlan_macip */
- 0xFF, 0xFF, /* skip high 16 bits pkt_len, zero out */
- 13, 12, /* octet 12~13, low 16 bits pkt_len */
- 13, 12, /* octet 12~13, 16 bits data_len */
- 0xFF, 0xFF /* skip pkt_type field */
- );
-#endif /* RTE_NEXT_ABI */
/* Cache is empty -> need to scan the buffer rings, but first move
* the next 'n' mbufs into the cache */
sw_ring = &rxq->sw_ring[rxq->rx_tail];
-#ifdef RTE_NEXT_ABI
/* A. load 4 packet in one loop
* [A*. mask out 4 unused dirty field in desc]
* B. copy 4 mbuf point from swring to rx_pkts
@@ -348,20 +272,10 @@ _recv_raw_pkts_vec(struct ixgbe_rx_queue *rxq, struct rte_mbuf **rx_pkts,
* [C*. extract the end-of-packet bit, if requested]
* D. fill info. from desc to mbuf
*/
-#else
- /* A. load 4 packet in one loop
- * B. copy 4 mbuf point from swring to rx_pkts
- * C. calc the number of DD bits among the 4 packets
- * [C*. extract the end-of-packet bit, if requested]
- * D. fill info. from desc to mbuf
- */
-#endif /* RTE_NEXT_ABI */
for (pos = 0, nb_pkts_recd = 0; pos < RTE_IXGBE_VPMD_RX_BURST;
pos += RTE_IXGBE_DESCS_PER_LOOP,
rxdp += RTE_IXGBE_DESCS_PER_LOOP) {
-#ifdef RTE_NEXT_ABI
__m128i descs0[RTE_IXGBE_DESCS_PER_LOOP];
-#endif /* RTE_NEXT_ABI */
__m128i descs[RTE_IXGBE_DESCS_PER_LOOP];
__m128i pkt_mb1, pkt_mb2, pkt_mb3, pkt_mb4;
__m128i zero, staterr, sterr_tmp1, sterr_tmp2;
@@ -377,7 +291,6 @@ _recv_raw_pkts_vec(struct ixgbe_rx_queue *rxq, struct rte_mbuf **rx_pkts,
/* B.1 load 1 mbuf point */
mbp1 = _mm_loadu_si128((__m128i *)&sw_ring[pos]);
-#ifdef RTE_NEXT_ABI
/* Read desc statuses backwards to avoid race condition */
/* A.1 load 4 pkts desc */
descs0[3] = _mm_loadu_si128((__m128i *)(rxdp + 3));
@@ -403,25 +316,6 @@ _recv_raw_pkts_vec(struct ixgbe_rx_queue *rxq, struct rte_mbuf **rx_pkts,
/* A* mask out 0~3 bits RSS type */
descs[1] = _mm_and_si128(descs0[1], desc_mask);
descs[0] = _mm_and_si128(descs0[0], desc_mask);
-#else
- /* Read desc statuses backwards to avoid race condition */
- /* A.1 load 4 pkts desc */
- descs[3] = _mm_loadu_si128((__m128i *)(rxdp + 3));
-
- /* B.2 copy 2 mbuf point into rx_pkts */
- _mm_storeu_si128((__m128i *)&rx_pkts[pos], mbp1);
-
- /* B.1 load 1 mbuf point */
- mbp2 = _mm_loadu_si128((__m128i *)&sw_ring[pos + 2]);
-
- descs[2] = _mm_loadu_si128((__m128i *)(rxdp + 2));
- /* B.1 load 2 mbuf point */
- descs[1] = _mm_loadu_si128((__m128i *)(rxdp + 1));
- descs[0] = _mm_loadu_si128((__m128i *)(rxdp));
-
- /* B.2 copy 2 mbuf point into rx_pkts */
- _mm_storeu_si128((__m128i *)&rx_pkts[pos + 2], mbp2);
-#endif /* RTE_NEXT_ABI */
/* avoid compiler reorder optimization */
rte_compiler_barrier();
@@ -435,13 +329,8 @@ _recv_raw_pkts_vec(struct ixgbe_rx_queue *rxq, struct rte_mbuf **rx_pkts,
/* C.1 4=>2 filter staterr info only */
sterr_tmp1 = _mm_unpackhi_epi32(descs[1], descs[0]);
-#ifdef RTE_NEXT_ABI
/* set ol_flags with vlan packet type */
desc_to_olflags_v(descs0, &rx_pkts[pos]);
-#else
- /* set ol_flags with packet type and vlan tag */
- desc_to_olflags_v(descs, &rx_pkts[pos]);
-#endif /* RTE_NEXT_ABI */
/* D.2 pkt 3,4 set in_port/nb_seg and remove crc */
pkt_mb4 = _mm_add_epi16(pkt_mb4, crc_adjust);
diff --git a/drivers/net/mlx4/mlx4.c b/drivers/net/mlx4/mlx4.c
index fa3cb7e..6c6342f 100644
--- a/drivers/net/mlx4/mlx4.c
+++ b/drivers/net/mlx4/mlx4.c
@@ -1264,16 +1264,7 @@ mlx4_tx_burst(void *dpdk_txq, struct rte_mbuf **pkts, uint16_t pkts_n)
* offsets but automatically recognizes the packet
* type. For inner L3/L4 checksums, only VXLAN (UDP)
* tunnels are currently supported. */
-#ifdef RTE_NEXT_ABI
if (RTE_ETH_IS_TUNNEL_PKT(buf->packet_type))
-#else
- /* FIXME: since PKT_TX_UDP_TUNNEL_PKT has been removed,
- * the outer packet type is unknown. All we know is
- * that the L2 header is of unusual length (not
- * ETHER_HDR_LEN with or without 802.1Q header). */
- if ((buf->l2_len != ETHER_HDR_LEN) &&
- (buf->l2_len != (ETHER_HDR_LEN + 4)))
-#endif
send_flags |= IBV_EXP_QP_BURST_TUNNEL;
}
if (likely(segs == 1)) {
@@ -2488,7 +2479,6 @@ rxq_cleanup(struct rxq *rxq)
memset(rxq, 0, sizeof(*rxq));
}
-#ifdef RTE_NEXT_ABI
/**
* Translate RX completion flags to packet type.
*
@@ -2521,7 +2511,6 @@ rxq_cq_to_pkt_type(uint32_t flags)
IBV_EXP_CQ_RX_IPV6_PACKET, RTE_PTYPE_L3_IPV6);
return pkt_type;
}
-#endif /* RTE_NEXT_ABI */
/**
* Translate RX completion flags to offload flags.
@@ -2539,11 +2528,6 @@ rxq_cq_to_ol_flags(const struct rxq *rxq, uint32_t flags)
{
uint32_t ol_flags = 0;
-#ifndef RTE_NEXT_ABI
- ol_flags =
- TRANSPOSE(flags, IBV_EXP_CQ_RX_IPV4_PACKET, PKT_RX_IPV4_HDR) |
- TRANSPOSE(flags, IBV_EXP_CQ_RX_IPV6_PACKET, PKT_RX_IPV6_HDR);
-#endif
if (rxq->csum)
ol_flags |=
TRANSPOSE(~flags,
@@ -2559,14 +2543,6 @@ rxq_cq_to_ol_flags(const struct rxq *rxq, uint32_t flags)
*/
if ((flags & IBV_EXP_CQ_RX_TUNNEL_PACKET) && (rxq->csum_l2tun))
ol_flags |=
-#ifndef RTE_NEXT_ABI
- TRANSPOSE(flags,
- IBV_EXP_CQ_RX_OUTER_IPV4_PACKET,
- PKT_RX_TUNNEL_IPV4_HDR) |
- TRANSPOSE(flags,
- IBV_EXP_CQ_RX_OUTER_IPV6_PACKET,
- PKT_RX_TUNNEL_IPV6_HDR) |
-#endif
TRANSPOSE(~flags,
IBV_EXP_CQ_RX_OUTER_IP_CSUM_OK,
PKT_RX_IP_CKSUM_BAD) |
@@ -2758,10 +2734,7 @@ mlx4_rx_burst_sp(void *dpdk_rxq, struct rte_mbuf **pkts, uint16_t pkts_n)
NB_SEGS(pkt_buf) = j;
PORT(pkt_buf) = rxq->port_id;
PKT_LEN(pkt_buf) = pkt_buf_len;
-#ifdef RTE_NEXT_ABI
pkt_buf->packet_type = rxq_cq_to_pkt_type(flags);
-#endif
- pkt_buf->ol_flags = rxq_cq_to_ol_flags(rxq, flags);
/* Return packet. */
*(pkts++) = pkt_buf;
@@ -2921,9 +2894,7 @@ mlx4_rx_burst(void *dpdk_rxq, struct rte_mbuf **pkts, uint16_t pkts_n)
NEXT(seg) = NULL;
PKT_LEN(seg) = len;
DATA_LEN(seg) = len;
-#ifdef RTE_NEXT_ABI
seg->packet_type = rxq_cq_to_pkt_type(flags);
-#endif
seg->ol_flags = rxq_cq_to_ol_flags(rxq, flags);
/* Return packet. */
diff --git a/drivers/net/vmxnet3/vmxnet3_rxtx.c b/drivers/net/vmxnet3/vmxnet3_rxtx.c
index 39ad6ef..4de5d89 100644
--- a/drivers/net/vmxnet3/vmxnet3_rxtx.c
+++ b/drivers/net/vmxnet3/vmxnet3_rxtx.c
@@ -520,17 +520,9 @@ vmxnet3_rx_offload(const Vmxnet3_RxCompDesc *rcd, struct rte_mbuf *rxm)
struct ipv4_hdr *ip = (struct ipv4_hdr *)(eth + 1);
if (((ip->version_ihl & 0xf) << 2) > (int)sizeof(struct ipv4_hdr))
-#ifdef RTE_NEXT_ABI
rxm->packet_type = RTE_PTYPE_L3_IPV4_EXT;
-#else
- rxm->ol_flags |= PKT_RX_IPV4_HDR_EXT;
-#endif
else
-#ifdef RTE_NEXT_ABI
rxm->packet_type = RTE_PTYPE_L3_IPV4;
-#else
- rxm->ol_flags |= PKT_RX_IPV4_HDR;
-#endif
if (!rcd->cnc) {
if (!rcd->ipc)
diff --git a/examples/ip_fragmentation/main.c b/examples/ip_fragmentation/main.c
index b71d05f..fbc0b8d 100644
--- a/examples/ip_fragmentation/main.c
+++ b/examples/ip_fragmentation/main.c
@@ -283,11 +283,7 @@ l3fwd_simple_forward(struct rte_mbuf *m, struct lcore_queue_conf *qconf,
len = qconf->tx_mbufs[port_out].len;
/* if this is an IPv4 packet */
-#ifdef RTE_NEXT_ABI
if (RTE_ETH_IS_IPV4_HDR(m->packet_type)) {
-#else
- if (m->ol_flags & PKT_RX_IPV4_HDR) {
-#endif
struct ipv4_hdr *ip_hdr;
uint32_t ip_dst;
/* Read the lookup key (i.e. ip_dst) from the input packet */
@@ -321,14 +317,8 @@ l3fwd_simple_forward(struct rte_mbuf *m, struct lcore_queue_conf *qconf,
if (unlikely (len2 < 0))
return;
}
-#ifdef RTE_NEXT_ABI
} else if (RTE_ETH_IS_IPV6_HDR(m->packet_type)) {
/* if this is an IPv6 packet */
-#else
- }
- /* if this is an IPv6 packet */
- else if (m->ol_flags & PKT_RX_IPV6_HDR) {
-#endif
struct ipv6_hdr *ip_hdr;
ipv6 = 1;
diff --git a/examples/ip_reassembly/main.c b/examples/ip_reassembly/main.c
index f1c47ad..741c398 100644
--- a/examples/ip_reassembly/main.c
+++ b/examples/ip_reassembly/main.c
@@ -356,11 +356,7 @@ reassemble(struct rte_mbuf *m, uint8_t portid, uint32_t queue,
dst_port = portid;
/* if packet is IPv4 */
-#ifdef RTE_NEXT_ABI
if (RTE_ETH_IS_IPV4_HDR(m->packet_type)) {
-#else
- if (m->ol_flags & (PKT_RX_IPV4_HDR)) {
-#endif
struct ipv4_hdr *ip_hdr;
uint32_t ip_dst;
@@ -400,14 +396,8 @@ reassemble(struct rte_mbuf *m, uint8_t portid, uint32_t queue,
}
eth_hdr->ether_type = rte_be_to_cpu_16(ETHER_TYPE_IPv4);
-#ifdef RTE_NEXT_ABI
} else if (RTE_ETH_IS_IPV6_HDR(m->packet_type)) {
/* if packet is IPv6 */
-#else
- }
- /* if packet is IPv6 */
- else if (m->ol_flags & (PKT_RX_IPV6_HDR | PKT_RX_IPV6_HDR_EXT)) {
-#endif
struct ipv6_extension_fragment *frag_hdr;
struct ipv6_hdr *ip_hdr;
diff --git a/examples/l3fwd-acl/main.c b/examples/l3fwd-acl/main.c
index b2bdf2f..f612671 100644
--- a/examples/l3fwd-acl/main.c
+++ b/examples/l3fwd-acl/main.c
@@ -645,13 +645,7 @@ prepare_one_packet(struct rte_mbuf **pkts_in, struct acl_search_t *acl,
struct ipv4_hdr *ipv4_hdr;
struct rte_mbuf *pkt = pkts_in[index];
-#ifdef RTE_NEXT_ABI
if (RTE_ETH_IS_IPV4_HDR(pkt->packet_type)) {
-#else
- int type = pkt->ol_flags & (PKT_RX_IPV4_HDR | PKT_RX_IPV6_HDR);
-
- if (type == PKT_RX_IPV4_HDR) {
-#endif
ipv4_hdr = rte_pktmbuf_mtod_offset(pkt, struct ipv4_hdr *,
sizeof(struct ether_hdr));
@@ -670,11 +664,7 @@ prepare_one_packet(struct rte_mbuf **pkts_in, struct acl_search_t *acl,
/* Not a valid IPv4 packet */
rte_pktmbuf_free(pkt);
}
-#ifdef RTE_NEXT_ABI
} else if (RTE_ETH_IS_IPV6_HDR(pkt->packet_type)) {
-#else
- } else if (type == PKT_RX_IPV6_HDR) {
-#endif
/* Fill acl structure */
acl->data_ipv6[acl->num_ipv6] = MBUF_IPV6_2PROTO(pkt);
acl->m_ipv6[(acl->num_ipv6)++] = pkt;
@@ -692,22 +682,12 @@ prepare_one_packet(struct rte_mbuf **pkts_in, struct acl_search_t *acl,
{
struct rte_mbuf *pkt = pkts_in[index];
-#ifdef RTE_NEXT_ABI
if (RTE_ETH_IS_IPV4_HDR(pkt->packet_type)) {
-#else
- int type = pkt->ol_flags & (PKT_RX_IPV4_HDR | PKT_RX_IPV6_HDR);
-
- if (type == PKT_RX_IPV4_HDR) {
-#endif
/* Fill acl structure */
acl->data_ipv4[acl->num_ipv4] = MBUF_IPV4_2PROTO(pkt);
acl->m_ipv4[(acl->num_ipv4)++] = pkt;
-#ifdef RTE_NEXT_ABI
} else if (RTE_ETH_IS_IPV6_HDR(pkt->packet_type)) {
-#else
- } else if (type == PKT_RX_IPV6_HDR) {
-#endif
/* Fill acl structure */
acl->data_ipv6[acl->num_ipv6] = MBUF_IPV6_2PROTO(pkt);
acl->m_ipv6[(acl->num_ipv6)++] = pkt;
@@ -755,17 +735,10 @@ send_one_packet(struct rte_mbuf *m, uint32_t res)
/* in the ACL list, drop it */
#ifdef L3FWDACL_DEBUG
if ((res & ACL_DENY_SIGNATURE) != 0) {
-#ifdef RTE_NEXT_ABI
if (RTE_ETH_IS_IPV4_HDR(m->packet_type))
dump_acl4_rule(m, res);
else if (RTE_ETH_IS_IPV6_HDR(m->packet_type))
dump_acl6_rule(m, res);
-#else
- if (m->ol_flags & PKT_RX_IPV4_HDR)
- dump_acl4_rule(m, res);
- else
- dump_acl6_rule(m, res);
-#endif /* RTE_NEXT_ABI */
}
#endif
rte_pktmbuf_free(m);
diff --git a/examples/l3fwd-power/main.c b/examples/l3fwd-power/main.c
index 086f29b..8bb88ce 100644
--- a/examples/l3fwd-power/main.c
+++ b/examples/l3fwd-power/main.c
@@ -650,11 +650,7 @@ l3fwd_simple_forward(struct rte_mbuf *m, uint8_t portid,
eth_hdr = rte_pktmbuf_mtod(m, struct ether_hdr *);
-#ifdef RTE_NEXT_ABI
if (RTE_ETH_IS_IPV4_HDR(m->packet_type)) {
-#else
- if (m->ol_flags & PKT_RX_IPV4_HDR) {
-#endif
/* Handle IPv4 headers.*/
ipv4_hdr =
rte_pktmbuf_mtod_offset(m, struct ipv4_hdr *,
@@ -689,12 +685,7 @@ l3fwd_simple_forward(struct rte_mbuf *m, uint8_t portid,
ether_addr_copy(&ports_eth_addr[dst_port], &eth_hdr->s_addr);
send_single_packet(m, dst_port);
-#ifdef RTE_NEXT_ABI
} else if (RTE_ETH_IS_IPV6_HDR(m->packet_type)) {
-#else
- }
- else {
-#endif
/* Handle IPv6 headers.*/
#if (APP_LOOKUP_METHOD == APP_LOOKUP_EXACT_MATCH)
struct ipv6_hdr *ipv6_hdr;
diff --git a/examples/l3fwd/main.c b/examples/l3fwd/main.c
index fe5a257..1f3e5c6 100644
--- a/examples/l3fwd/main.c
+++ b/examples/l3fwd/main.c
@@ -1073,11 +1073,7 @@ l3fwd_simple_forward(struct rte_mbuf *m, uint8_t portid, struct lcore_conf *qcon
eth_hdr = rte_pktmbuf_mtod(m, struct ether_hdr *);
-#ifdef RTE_NEXT_ABI
if (RTE_ETH_IS_IPV4_HDR(m->packet_type)) {
-#else
- if (m->ol_flags & PKT_RX_IPV4_HDR) {
-#endif
/* Handle IPv4 headers.*/
ipv4_hdr = rte_pktmbuf_mtod_offset(m, struct ipv4_hdr *,
sizeof(struct ether_hdr));
@@ -1108,11 +1104,7 @@ l3fwd_simple_forward(struct rte_mbuf *m, uint8_t portid, struct lcore_conf *qcon
ether_addr_copy(&ports_eth_addr[dst_port], &eth_hdr->s_addr);
send_single_packet(m, dst_port);
-#ifdef RTE_NEXT_ABI
} else if (RTE_ETH_IS_IPV6_HDR(m->packet_type)) {
-#else
- } else {
-#endif
/* Handle IPv6 headers.*/
struct ipv6_hdr *ipv6_hdr;
@@ -1131,13 +1123,9 @@ l3fwd_simple_forward(struct rte_mbuf *m, uint8_t portid, struct lcore_conf *qcon
ether_addr_copy(&ports_eth_addr[dst_port], &eth_hdr->s_addr);
send_single_packet(m, dst_port);
-#ifdef RTE_NEXT_ABI
} else
/* Free the mbuf that contains non-IPV4/IPV6 packet */
rte_pktmbuf_free(m);
-#else
- }
-#endif
}
#if ((APP_LOOKUP_METHOD == APP_LOOKUP_LPM) && \
@@ -1163,19 +1151,11 @@ l3fwd_simple_forward(struct rte_mbuf *m, uint8_t portid, struct lcore_conf *qcon
* to BAD_PORT value.
*/
static inline __attribute__((always_inline)) void
-#ifdef RTE_NEXT_ABI
rfc1812_process(struct ipv4_hdr *ipv4_hdr, uint16_t *dp, uint32_t ptype)
-#else
-rfc1812_process(struct ipv4_hdr *ipv4_hdr, uint16_t *dp, uint32_t flags)
-#endif
{
uint8_t ihl;
-#ifdef RTE_NEXT_ABI
if (RTE_ETH_IS_IPV4_HDR(ptype)) {
-#else
- if ((flags & PKT_RX_IPV4_HDR) != 0) {
-#endif
ihl = ipv4_hdr->version_ihl - IPV4_MIN_VER_IHL;
ipv4_hdr->time_to_live--;
@@ -1206,19 +1186,11 @@ get_dst_port(const struct lcore_conf *qconf, struct rte_mbuf *pkt,
struct ipv6_hdr *ipv6_hdr;
struct ether_hdr *eth_hdr;
-#ifdef RTE_NEXT_ABI
if (RTE_ETH_IS_IPV4_HDR(pkt->packet_type)) {
-#else
- if (pkt->ol_flags & PKT_RX_IPV4_HDR) {
-#endif
if (rte_lpm_lookup(qconf->ipv4_lookup_struct, dst_ipv4,
&next_hop) != 0)
next_hop = portid;
-#ifdef RTE_NEXT_ABI
} else if (RTE_ETH_IS_IPV6_HDR(pkt->packet_type)) {
-#else
- } else if (pkt->ol_flags & PKT_RX_IPV6_HDR) {
-#endif
eth_hdr = rte_pktmbuf_mtod(pkt, struct ether_hdr *);
ipv6_hdr = (struct ipv6_hdr *)(eth_hdr + 1);
if (rte_lpm6_lookup(qconf->ipv6_lookup_struct,
@@ -1252,17 +1224,12 @@ process_packet(struct lcore_conf *qconf, struct rte_mbuf *pkt,
ve = val_eth[dp];
dst_port[0] = dp;
-#ifdef RTE_NEXT_ABI
rfc1812_process(ipv4_hdr, dst_port, pkt->packet_type);
-#else
- rfc1812_process(ipv4_hdr, dst_port, pkt->ol_flags);
-#endif
te = _mm_blend_epi16(te, ve, MASK_ETH);
_mm_store_si128((__m128i *)eth_hdr, te);
}
-#ifdef RTE_NEXT_ABI
/*
* Read packet_type and destination IPV4 addresses from 4 mbufs.
*/
@@ -1297,57 +1264,18 @@ processx4_step1(struct rte_mbuf *pkt[FWDSTEP],
dip[0] = _mm_set_epi32(x3, x2, x1, x0);
}
-#else /* RTE_NEXT_ABI */
-/*
- * Read ol_flags and destination IPV4 addresses from 4 mbufs.
- */
-static inline void
-processx4_step1(struct rte_mbuf *pkt[FWDSTEP], __m128i *dip, uint32_t *flag)
-{
- struct ipv4_hdr *ipv4_hdr;
- struct ether_hdr *eth_hdr;
- uint32_t x0, x1, x2, x3;
-
- eth_hdr = rte_pktmbuf_mtod(pkt[0], struct ether_hdr *);
- ipv4_hdr = (struct ipv4_hdr *)(eth_hdr + 1);
- x0 = ipv4_hdr->dst_addr;
- flag[0] = pkt[0]->ol_flags & PKT_RX_IPV4_HDR;
-
- eth_hdr = rte_pktmbuf_mtod(pkt[1], struct ether_hdr *);
- ipv4_hdr = (struct ipv4_hdr *)(eth_hdr + 1);
- x1 = ipv4_hdr->dst_addr;
- flag[0] &= pkt[1]->ol_flags;
-
- eth_hdr = rte_pktmbuf_mtod(pkt[2], struct ether_hdr *);
- ipv4_hdr = (struct ipv4_hdr *)(eth_hdr + 1);
- x2 = ipv4_hdr->dst_addr;
- flag[0] &= pkt[2]->ol_flags;
-
- eth_hdr = rte_pktmbuf_mtod(pkt[3], struct ether_hdr *);
- ipv4_hdr = (struct ipv4_hdr *)(eth_hdr + 1);
- x3 = ipv4_hdr->dst_addr;
- flag[0] &= pkt[3]->ol_flags;
-
- dip[0] = _mm_set_epi32(x3, x2, x1, x0);
-}
-#endif /* RTE_NEXT_ABI */
/*
* Lookup into LPM for destination port.
* If lookup fails, use incoming port (portid) as destination port.
*/
static inline void
-#ifdef RTE_NEXT_ABI
processx4_step2(const struct lcore_conf *qconf,
__m128i dip,
uint32_t ipv4_flag,
uint8_t portid,
struct rte_mbuf *pkt[FWDSTEP],
uint16_t dprt[FWDSTEP])
-#else
-processx4_step2(const struct lcore_conf *qconf, __m128i dip, uint32_t flag,
- uint8_t portid, struct rte_mbuf *pkt[FWDSTEP], uint16_t dprt[FWDSTEP])
-#endif /* RTE_NEXT_ABI */
{
rte_xmm_t dst;
const __m128i bswap_mask = _mm_set_epi8(12, 13, 14, 15, 8, 9, 10, 11,
@@ -1357,11 +1285,7 @@ processx4_step2(const struct lcore_conf *qconf, __m128i dip, uint32_t flag,
dip = _mm_shuffle_epi8(dip, bswap_mask);
/* if all 4 packets are IPV4. */
-#ifdef RTE_NEXT_ABI
if (likely(ipv4_flag)) {
-#else
- if (likely(flag != 0)) {
-#endif
rte_lpm_lookupx4(qconf->ipv4_lookup_struct, dip, dprt, portid);
} else {
dst.x = dip;
@@ -1411,7 +1335,6 @@ processx4_step3(struct rte_mbuf *pkt[FWDSTEP], uint16_t dst_port[FWDSTEP])
_mm_store_si128(p[2], te[2]);
_mm_store_si128(p[3], te[3]);
-#ifdef RTE_NEXT_ABI
rfc1812_process((struct ipv4_hdr *)((struct ether_hdr *)p[0] + 1),
&dst_port[0], pkt[0]->packet_type);
rfc1812_process((struct ipv4_hdr *)((struct ether_hdr *)p[1] + 1),
@@ -1420,16 +1343,6 @@ processx4_step3(struct rte_mbuf *pkt[FWDSTEP], uint16_t dst_port[FWDSTEP])
&dst_port[2], pkt[2]->packet_type);
rfc1812_process((struct ipv4_hdr *)((struct ether_hdr *)p[3] + 1),
&dst_port[3], pkt[3]->packet_type);
-#else /* RTE_NEXT_ABI */
- rfc1812_process((struct ipv4_hdr *)((struct ether_hdr *)p[0] + 1),
- &dst_port[0], pkt[0]->ol_flags);
- rfc1812_process((struct ipv4_hdr *)((struct ether_hdr *)p[1] + 1),
- &dst_port[1], pkt[1]->ol_flags);
- rfc1812_process((struct ipv4_hdr *)((struct ether_hdr *)p[2] + 1),
- &dst_port[2], pkt[2]->ol_flags);
- rfc1812_process((struct ipv4_hdr *)((struct ether_hdr *)p[3] + 1),
- &dst_port[3], pkt[3]->ol_flags);
-#endif /* RTE_NEXT_ABI */
}
/*
@@ -1616,11 +1529,7 @@ main_loop(__attribute__((unused)) void *dummy)
uint16_t *lp;
uint16_t dst_port[MAX_PKT_BURST];
__m128i dip[MAX_PKT_BURST / FWDSTEP];
-#ifdef RTE_NEXT_ABI
uint32_t ipv4_flag[MAX_PKT_BURST / FWDSTEP];
-#else
- uint32_t flag[MAX_PKT_BURST / FWDSTEP];
-#endif
uint16_t pnum[MAX_PKT_BURST + 1];
#endif
@@ -1690,7 +1599,6 @@ main_loop(__attribute__((unused)) void *dummy)
*/
int32_t n = RTE_ALIGN_FLOOR(nb_rx, 8);
for (j = 0; j < n; j += 8) {
-#ifdef RTE_NEXT_ABI
uint32_t pkt_type =
pkts_burst[j]->packet_type &
pkts_burst[j+1]->packet_type &
@@ -1705,20 +1613,6 @@ main_loop(__attribute__((unused)) void *dummy)
&pkts_burst[j], portid, qconf);
} else if (pkt_type &
RTE_PTYPE_L3_IPV6) {
-#else /* RTE_NEXT_ABI */
- uint32_t ol_flag = pkts_burst[j]->ol_flags
- & pkts_burst[j+1]->ol_flags
- & pkts_burst[j+2]->ol_flags
- & pkts_burst[j+3]->ol_flags
- & pkts_burst[j+4]->ol_flags
- & pkts_burst[j+5]->ol_flags
- & pkts_burst[j+6]->ol_flags
- & pkts_burst[j+7]->ol_flags;
- if (ol_flag & PKT_RX_IPV4_HDR ) {
- simple_ipv4_fwd_8pkts(&pkts_burst[j],
- portid, qconf);
- } else if (ol_flag & PKT_RX_IPV6_HDR) {
-#endif /* RTE_NEXT_ABI */
simple_ipv6_fwd_8pkts(&pkts_burst[j],
portid, qconf);
} else {
@@ -1751,21 +1645,13 @@ main_loop(__attribute__((unused)) void *dummy)
for (j = 0; j != k; j += FWDSTEP) {
processx4_step1(&pkts_burst[j],
&dip[j / FWDSTEP],
-#ifdef RTE_NEXT_ABI
&ipv4_flag[j / FWDSTEP]);
-#else
- &flag[j / FWDSTEP]);
-#endif
}
k = RTE_ALIGN_FLOOR(nb_rx, FWDSTEP);
for (j = 0; j != k; j += FWDSTEP) {
processx4_step2(qconf, dip[j / FWDSTEP],
-#ifdef RTE_NEXT_ABI
ipv4_flag[j / FWDSTEP], portid,
-#else
- flag[j / FWDSTEP], portid,
-#endif
&pkts_burst[j], &dst_port[j]);
}
diff --git a/examples/tep_termination/vxlan.c b/examples/tep_termination/vxlan.c
index e98a29f..5ee1f95 100644
--- a/examples/tep_termination/vxlan.c
+++ b/examples/tep_termination/vxlan.c
@@ -180,12 +180,7 @@ decapsulation(struct rte_mbuf *pkt)
* (rfc7348) or that the rx offload flag is set (i40e only
* currently)*/
if (udp_hdr->dst_port != rte_cpu_to_be_16(DEFAULT_VXLAN_PORT) &&
-#ifdef RTE_NEXT_ABI
(pkt->packet_type & RTE_PTYPE_TUNNEL_MASK) == 0)
-#else
- (pkt->ol_flags & (PKT_RX_TUNNEL_IPV4_HDR |
- PKT_RX_TUNNEL_IPV6_HDR)) == 0)
-#endif
return -1;
outer_header_len = info.outer_l2_len + info.outer_l3_len
+ sizeof(struct udp_hdr) + sizeof(struct vxlan_hdr);
diff --git a/lib/librte_eal/linuxapp/eal/include/exec-env/rte_kni_common.h b/lib/librte_eal/linuxapp/eal/include/exec-env/rte_kni_common.h
index e9f38bd..bd1cc09 100644
--- a/lib/librte_eal/linuxapp/eal/include/exec-env/rte_kni_common.h
+++ b/lib/librte_eal/linuxapp/eal/include/exec-env/rte_kni_common.h
@@ -117,15 +117,9 @@ struct rte_kni_mbuf {
uint16_t data_off; /**< Start address of data in segment buffer. */
char pad1[4];
uint64_t ol_flags; /**< Offload features. */
-#ifdef RTE_NEXT_ABI
char pad2[4];
uint32_t pkt_len; /**< Total pkt len: sum of all segment data_len. */
uint16_t data_len; /**< Amount of data in segment buffer. */
-#else
- char pad2[2];
- uint16_t data_len; /**< Amount of data in segment buffer. */
- uint32_t pkt_len; /**< Total pkt len: sum of all segment data_len. */
-#endif
/* fields on second cache line */
char pad3[8] __attribute__((__aligned__(RTE_CACHE_LINE_SIZE)));
diff --git a/lib/librte_mbuf/Makefile b/lib/librte_mbuf/Makefile
index 080f3cf..8d62b0d 100644
--- a/lib/librte_mbuf/Makefile
+++ b/lib/librte_mbuf/Makefile
@@ -38,7 +38,7 @@ CFLAGS += $(WERROR_FLAGS) -I$(SRCDIR) -O3
EXPORT_MAP := rte_mbuf_version.map
-LIBABIVER := 1
+LIBABIVER := 2
# all source are stored in SRCS-y
SRCS-$(CONFIG_RTE_LIBRTE_MBUF) := rte_mbuf.c
diff --git a/lib/librte_mbuf/rte_mbuf.c b/lib/librte_mbuf/rte_mbuf.c
index e416312..c18b438 100644
--- a/lib/librte_mbuf/rte_mbuf.c
+++ b/lib/librte_mbuf/rte_mbuf.c
@@ -258,18 +258,8 @@ const char *rte_get_rx_ol_flag_name(uint64_t mask)
/* case PKT_RX_HBUF_OVERFLOW: return "PKT_RX_HBUF_OVERFLOW"; */
/* case PKT_RX_RECIP_ERR: return "PKT_RX_RECIP_ERR"; */
/* case PKT_RX_MAC_ERR: return "PKT_RX_MAC_ERR"; */
-#ifndef RTE_NEXT_ABI
- case PKT_RX_IPV4_HDR: return "PKT_RX_IPV4_HDR";
- case PKT_RX_IPV4_HDR_EXT: return "PKT_RX_IPV4_HDR_EXT";
- case PKT_RX_IPV6_HDR: return "PKT_RX_IPV6_HDR";
- case PKT_RX_IPV6_HDR_EXT: return "PKT_RX_IPV6_HDR_EXT";
-#endif /* RTE_NEXT_ABI */
case PKT_RX_IEEE1588_PTP: return "PKT_RX_IEEE1588_PTP";
case PKT_RX_IEEE1588_TMST: return "PKT_RX_IEEE1588_TMST";
-#ifndef RTE_NEXT_ABI
- case PKT_RX_TUNNEL_IPV4_HDR: return "PKT_RX_TUNNEL_IPV4_HDR";
- case PKT_RX_TUNNEL_IPV6_HDR: return "PKT_RX_TUNNEL_IPV6_HDR";
-#endif /* RTE_NEXT_ABI */
default: return NULL;
}
}
diff --git a/lib/librte_mbuf/rte_mbuf.h b/lib/librte_mbuf/rte_mbuf.h
index 8c2db1b..d7c9030 100644
--- a/lib/librte_mbuf/rte_mbuf.h
+++ b/lib/librte_mbuf/rte_mbuf.h
@@ -93,18 +93,8 @@ extern "C" {
#define PKT_RX_HBUF_OVERFLOW (0ULL << 0) /**< Header buffer overflow. */
#define PKT_RX_RECIP_ERR (0ULL << 0) /**< Hardware processing error. */
#define PKT_RX_MAC_ERR (0ULL << 0) /**< MAC error. */
-#ifndef RTE_NEXT_ABI
-#define PKT_RX_IPV4_HDR (1ULL << 5) /**< RX packet with IPv4 header. */
-#define PKT_RX_IPV4_HDR_EXT (1ULL << 6) /**< RX packet with extended IPv4 header. */
-#define PKT_RX_IPV6_HDR (1ULL << 7) /**< RX packet with IPv6 header. */
-#define PKT_RX_IPV6_HDR_EXT (1ULL << 8) /**< RX packet with extended IPv6 header. */
-#endif /* RTE_NEXT_ABI */
#define PKT_RX_IEEE1588_PTP (1ULL << 9) /**< RX IEEE1588 L2 Ethernet PT Packet. */
#define PKT_RX_IEEE1588_TMST (1ULL << 10) /**< RX IEEE1588 L2/L4 timestamped packet.*/
-#ifndef RTE_NEXT_ABI
-#define PKT_RX_TUNNEL_IPV4_HDR (1ULL << 11) /**< RX tunnel packet with IPv4 header.*/
-#define PKT_RX_TUNNEL_IPV6_HDR (1ULL << 12) /**< RX tunnel packet with IPv6 header. */
-#endif /* RTE_NEXT_ABI */
#define PKT_RX_FDIR_ID (1ULL << 13) /**< FD id reported if FDIR match. */
#define PKT_RX_FDIR_FLX (1ULL << 14) /**< Flexible bytes reported if FDIR match. */
#define PKT_RX_QINQ_PKT (1ULL << 15) /**< RX packet with double VLAN stripped. */
@@ -209,7 +199,6 @@ extern "C" {
/* Use final bit of flags to indicate a control mbuf */
#define CTRL_MBUF_FLAG (1ULL << 63) /**< Mbuf contains control data */
-#ifdef RTE_NEXT_ABI
/*
* 32 bits are divided into several fields to mark packet types. Note that
* each field is indexical.
@@ -696,7 +685,6 @@ extern "C" {
RTE_PTYPE_INNER_L2_MASK | \
RTE_PTYPE_INNER_L3_MASK | \
RTE_PTYPE_INNER_L4_MASK))
-#endif /* RTE_NEXT_ABI */
/** Alignment constraint of mbuf private area. */
#define RTE_MBUF_PRIV_ALIGN 8
@@ -775,7 +763,6 @@ struct rte_mbuf {
/* remaining bytes are set on RX when pulling packet from descriptor */
MARKER rx_descriptor_fields1;
-#ifdef RTE_NEXT_ABI
/*
* The packet type, which is the combination of outer/inner L2, L3, L4
* and tunnel types.
@@ -796,19 +783,7 @@ struct rte_mbuf {
uint32_t pkt_len; /**< Total pkt len: sum of all segments. */
uint16_t data_len; /**< Amount of data in segment buffer. */
uint16_t vlan_tci; /**< VLAN Tag Control Identifier (CPU order) */
-#else /* RTE_NEXT_ABI */
- /**
- * The packet type, which is used to indicate ordinary packet and also
- * tunneled packet format, i.e. each number is represented a type of
- * packet.
- */
- uint16_t packet_type;
- uint16_t data_len; /**< Amount of data in segment buffer. */
- uint32_t pkt_len; /**< Total pkt len: sum of all segments. */
- uint16_t vlan_tci; /**< VLAN Tag Control Identifier (CPU order) */
- uint16_t vlan_tci_outer; /**< Outer VLAN Tag Control Identifier (CPU order) */
-#endif /* RTE_NEXT_ABI */
union {
uint32_t rss; /**< RSS hash result if RSS enabled */
struct {
@@ -829,9 +804,8 @@ struct rte_mbuf {
} hash; /**< hash information */
uint32_t seqn; /**< Sequence number. See also rte_reorder_insert() */
-#ifdef RTE_NEXT_ABI
+
uint16_t vlan_tci_outer; /**< Outer VLAN Tag Control Identifier (CPU order) */
-#endif /* RTE_NEXT_ABI */
/* second cache line - fields only used in slow path or on TX */
MARKER cacheline1 __rte_cache_aligned;
--
2.5.1
^ permalink raw reply [relevance 2%]
* [dpdk-dev] [PATCH 3/9] ethdev: remove SCTP flow entries switch
2015-09-01 21:30 5% ` [dpdk-dev] [PATCH 1/9] ethdev: remove Rx interrupt switch Thomas Monjalon
2015-09-01 21:30 2% ` [dpdk-dev] [PATCH 2/9] mbuf: remove packet type from offload flags Thomas Monjalon
@ 2015-09-01 21:30 11% ` Thomas Monjalon
2015-09-01 21:31 4% ` [dpdk-dev] [PATCH 9/9] ring: remove deprecated functions Thomas Monjalon
2015-09-02 13:16 3% ` [dpdk-dev] [PATCH v2 00/10] clean deprecated code Thomas Monjalon
4 siblings, 0 replies; 200+ results
From: Thomas Monjalon @ 2015-09-01 21:30 UTC (permalink / raw)
To: dev
The extended SCTP flow entries are now part of the standard API.
Signed-off-by: Thomas Monjalon <thomas.monjalon@6wind.com>
---
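Note: below is a minimal sketch (not part of the patch) of adding an IPv4/SCTP flow
director entry with the now-unconditional src_port/dst_port fields. The port id,
addresses, ports and verify tag are placeholders and error handling is omitted;
field names follow the testpmd hunk below.

#include <string.h>
#include <rte_ethdev.h>
#include <rte_eth_ctrl.h>
#include <rte_byteorder.h>
#include <rte_ip.h>

/* Sketch: add an IPv4/SCTP flow director filter using the SCTP port
 * fields that are now part of the standard API. Values are placeholders.
 */
static int
add_sctp_fdir_filter(uint8_t port_id)
{
	struct rte_eth_fdir_filter entry;

	memset(&entry, 0, sizeof(entry));
	entry.input.flow_type = RTE_ETH_FLOW_NONFRAG_IPV4_SCTP;
	entry.input.flow.sctp4_flow.ip.src_ip = rte_cpu_to_be_32(IPv4(192, 168, 0, 1));
	entry.input.flow.sctp4_flow.ip.dst_ip = rte_cpu_to_be_32(IPv4(192, 168, 0, 2));
	/* SCTP ports must be given in big endian, as in the testpmd hunk below. */
	entry.input.flow.sctp4_flow.src_port = rte_cpu_to_be_16(2905);
	entry.input.flow.sctp4_flow.dst_port = rte_cpu_to_be_16(2905);
	entry.input.flow.sctp4_flow.verify_tag = rte_cpu_to_be_32(1);
	entry.action.rx_queue = 0;
	entry.action.behavior = RTE_ETH_FDIR_ACCEPT;
	entry.action.report_status = RTE_ETH_FDIR_REPORT_ID;

	return rte_eth_dev_filter_ctrl(port_id, RTE_ETH_FILTER_FDIR,
				       RTE_ETH_FILTER_ADD, &entry);
}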
app/test-pmd/cmdline.c | 4 ----
doc/guides/rel_notes/deprecation.rst | 3 ---
drivers/net/i40e/i40e_fdir.c | 4 ----
lib/librte_ether/rte_eth_ctrl.h | 4 ----
4 files changed, 15 deletions(-)
diff --git a/app/test-pmd/cmdline.c b/app/test-pmd/cmdline.c
index 5799c9c..0f8f48f 100644
--- a/app/test-pmd/cmdline.c
+++ b/app/test-pmd/cmdline.c
@@ -7888,12 +7888,10 @@ cmd_flow_director_filter_parsed(void *parsed_result,
IPV4_ADDR_TO_UINT(res->ip_src,
entry.input.flow.sctp4_flow.ip.src_ip);
/* need convert to big endian. */
-#ifdef RTE_NEXT_ABI
entry.input.flow.sctp4_flow.dst_port =
rte_cpu_to_be_16(res->port_dst);
entry.input.flow.sctp4_flow.src_port =
rte_cpu_to_be_16(res->port_src);
-#endif
entry.input.flow.sctp4_flow.verify_tag =
rte_cpu_to_be_32(res->verify_tag_value);
break;
@@ -7917,12 +7915,10 @@ cmd_flow_director_filter_parsed(void *parsed_result,
IPV6_ADDR_TO_ARRAY(res->ip_src,
entry.input.flow.sctp6_flow.ip.src_ip);
/* need convert to big endian. */
-#ifdef RTE_NEXT_ABI
entry.input.flow.sctp6_flow.dst_port =
rte_cpu_to_be_16(res->port_dst);
entry.input.flow.sctp6_flow.src_port =
rte_cpu_to_be_16(res->port_src);
-#endif
entry.input.flow.sctp6_flow.verify_tag =
rte_cpu_to_be_32(res->verify_tag_value);
break;
diff --git a/doc/guides/rel_notes/deprecation.rst b/doc/guides/rel_notes/deprecation.rst
index 639ab18..cf5cd17 100644
--- a/doc/guides/rel_notes/deprecation.rst
+++ b/doc/guides/rel_notes/deprecation.rst
@@ -44,9 +44,6 @@ Deprecation Notices
flow director filtering in VF. The release 2.1 does not contain these ABI
changes, but release 2.2 will, and no backwards compatibility is planned.
-* ABI change is planned to extend the SCTP flow's key input from release 2.1.
- The change may be enabled in the release 2.1 with CONFIG_RTE_NEXT_ABI.
-
* ABI changes are planned for struct rte_eth_fdir_filter and
rte_eth_fdir_masks in order to support new flow director modes,
MAC VLAN and Cloud, on x550. The MAC VLAN mode means the MAC and
diff --git a/drivers/net/i40e/i40e_fdir.c b/drivers/net/i40e/i40e_fdir.c
index 8208273..c9ce98f 100644
--- a/drivers/net/i40e/i40e_fdir.c
+++ b/drivers/net/i40e/i40e_fdir.c
@@ -822,7 +822,6 @@ i40e_fdir_construct_pkt(struct i40e_pf *pf,
sctp = (struct sctp_hdr *)(raw_pkt + sizeof(struct ether_hdr) +
sizeof(struct ipv4_hdr));
payload = (unsigned char *)sctp + sizeof(struct sctp_hdr);
-#ifdef RTE_NEXT_ABI
/*
* The source and destination fields in the transmitted packet
* need to be presented in a reversed order with respect
@@ -830,7 +829,6 @@ i40e_fdir_construct_pkt(struct i40e_pf *pf,
*/
sctp->src_port = fdir_input->flow.sctp4_flow.dst_port;
sctp->dst_port = fdir_input->flow.sctp4_flow.src_port;
-#endif
sctp->tag = fdir_input->flow.sctp4_flow.verify_tag;
break;
@@ -873,7 +871,6 @@ i40e_fdir_construct_pkt(struct i40e_pf *pf,
sctp = (struct sctp_hdr *)(raw_pkt + sizeof(struct ether_hdr) +
sizeof(struct ipv6_hdr));
payload = (unsigned char *)sctp + sizeof(struct sctp_hdr);
-#ifdef RTE_NEXT_ABI
/*
* The source and destination fields in the transmitted packet
* need to be presented in a reversed order with respect
@@ -881,7 +878,6 @@ i40e_fdir_construct_pkt(struct i40e_pf *pf,
*/
sctp->src_port = fdir_input->flow.sctp6_flow.dst_port;
sctp->dst_port = fdir_input->flow.sctp6_flow.src_port;
-#endif
sctp->tag = fdir_input->flow.sctp6_flow.verify_tag;
break;
diff --git a/lib/librte_ether/rte_eth_ctrl.h b/lib/librte_ether/rte_eth_ctrl.h
index 4beb981..26b7b33 100644
--- a/lib/librte_ether/rte_eth_ctrl.h
+++ b/lib/librte_ether/rte_eth_ctrl.h
@@ -335,10 +335,8 @@ struct rte_eth_tcpv4_flow {
*/
struct rte_eth_sctpv4_flow {
struct rte_eth_ipv4_flow ip; /**< IPv4 fields to match. */
-#ifdef RTE_NEXT_ABI
uint16_t src_port; /**< SCTP source port to match. */
uint16_t dst_port; /**< SCTP destination port to match. */
-#endif
uint32_t verify_tag; /**< Verify tag to match */
};
@@ -373,10 +371,8 @@ struct rte_eth_tcpv6_flow {
*/
struct rte_eth_sctpv6_flow {
struct rte_eth_ipv6_flow ip; /**< IPv6 fields to match. */
-#ifdef RTE_NEXT_ABI
uint16_t src_port; /**< SCTP source port to match. */
uint16_t dst_port; /**< SCTP destination port to match. */
-#endif
uint32_t verify_tag; /**< Verify tag to match */
};
--
2.5.1
^ permalink raw reply [relevance 11%]
* [dpdk-dev] [PATCH 9/9] ring: remove deprecated functions
` (2 preceding siblings ...)
2015-09-01 21:30 11% ` [dpdk-dev] [PATCH 3/9] ethdev: remove SCTP flow entries switch Thomas Monjalon
@ 2015-09-01 21:31 4% ` Thomas Monjalon
2015-09-02 13:16 3% ` [dpdk-dev] [PATCH v2 00/10] clean deprecated code Thomas Monjalon
4 siblings, 0 replies; 200+ results
From: Thomas Monjalon @ 2015-09-01 21:31 UTC (permalink / raw)
To: dev; +Cc: Stephen Hemminger
From: Stephen Hemminger <shemming@brocade.com>
These functions were deprecated in release 2.0, so remove them in release 2.2.
Signed-off-by: Stephen Hemminger <stephen@networkplumber.org>
Signed-off-by: Thomas Monjalon <thomas.monjalon@6wind.com>
---
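Note: with the pair helpers removed, applications create the rings themselves and
hand them to rte_eth_from_rings(). Below is a minimal sketch modelled on the removed
eth_dev_ring_pair_create() body; ring names, size and queue counts are placeholders.

#include <rte_ring.h>
#include <rte_eth_ring.h>

/* Sketch: build a ring-backed ethdev port with one Rx and one Tx queue.
 * Returns a negative value on failure, like the removed helper did.
 */
static int
create_ring_port(const char *name, unsigned int numa_node)
{
	struct rte_ring *rx[1], *tx[1];

	rx[0] = rte_ring_create("ETH_RX0_sketch", 1024, numa_node,
				RING_F_SP_ENQ | RING_F_SC_DEQ);
	tx[0] = rte_ring_create("ETH_TX0_sketch", 1024, numa_node,
				RING_F_SP_ENQ | RING_F_SC_DEQ);
	if (rx[0] == NULL || tx[0] == NULL)
		return -1;

	/* One Rx and one Tx queue backed by the rings created above. */
	return rte_eth_from_rings(name, rx, 1, tx, 1, numa_node);
}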
doc/guides/rel_notes/deprecation.rst | 3 --
drivers/net/ring/rte_eth_ring.c | 56 -------------------------------
drivers/net/ring/rte_eth_ring.h | 3 --
drivers/net/ring/rte_eth_ring_version.map | 2 --
4 files changed, 64 deletions(-)
diff --git a/doc/guides/rel_notes/deprecation.rst b/doc/guides/rel_notes/deprecation.rst
index 04819fa..5f6079b 100644
--- a/doc/guides/rel_notes/deprecation.rst
+++ b/doc/guides/rel_notes/deprecation.rst
@@ -40,9 +40,6 @@ Deprecation Notices
the tunnel type, TNI/VNI, inner MAC and inner VLAN are monitored.
The release 2.2 will contain these changes without backwards compatibility.
-* librte_pmd_ring: The deprecated functions rte_eth_ring_pair_create and
- rte_eth_ring_pair_attach should be removed.
-
* ABI changes are planned for struct virtio_net in order to support vhost-user
multiple queues feature.
It should be integrated in release 2.2 without backward compatibility.
diff --git a/drivers/net/ring/rte_eth_ring.c b/drivers/net/ring/rte_eth_ring.c
index 6fd3d0a..0ba36d5 100644
--- a/drivers/net/ring/rte_eth_ring.c
+++ b/drivers/net/ring/rte_eth_ring.c
@@ -391,62 +391,6 @@ eth_dev_ring_create(const char *name, const unsigned numa_node,
return 0;
}
-
-static int
-eth_dev_ring_pair_create(const char *name, const unsigned numa_node,
- enum dev_action action)
-{
- /* rx and tx are so-called from point of view of first port.
- * They are inverted from the point of view of second port
- */
- struct rte_ring *rx[RTE_PMD_RING_MAX_RX_RINGS];
- struct rte_ring *tx[RTE_PMD_RING_MAX_TX_RINGS];
- unsigned i;
- char rx_rng_name[RTE_RING_NAMESIZE];
- char tx_rng_name[RTE_RING_NAMESIZE];
- unsigned num_rings = RTE_MIN(RTE_PMD_RING_MAX_RX_RINGS,
- RTE_PMD_RING_MAX_TX_RINGS);
-
- for (i = 0; i < num_rings; i++) {
- snprintf(rx_rng_name, sizeof(rx_rng_name), "ETH_RX%u_%s", i, name);
- rx[i] = (action == DEV_CREATE) ?
- rte_ring_create(rx_rng_name, 1024, numa_node,
- RING_F_SP_ENQ|RING_F_SC_DEQ) :
- rte_ring_lookup(rx_rng_name);
- if (rx[i] == NULL)
- return -1;
- snprintf(tx_rng_name, sizeof(tx_rng_name), "ETH_TX%u_%s", i, name);
- tx[i] = (action == DEV_CREATE) ?
- rte_ring_create(tx_rng_name, 1024, numa_node,
- RING_F_SP_ENQ|RING_F_SC_DEQ):
- rte_ring_lookup(tx_rng_name);
- if (tx[i] == NULL)
- return -1;
- }
-
- if (rte_eth_from_rings(rx_rng_name, rx, num_rings, tx, num_rings,
- numa_node) < 0 ||
- rte_eth_from_rings(tx_rng_name, tx, num_rings, rx,
- num_rings, numa_node) < 0)
- return -1;
-
- return 0;
-}
-
-int
-rte_eth_ring_pair_create(const char *name, const unsigned numa_node)
-{
- RTE_LOG(WARNING, PMD, "rte_eth_ring_pair_create is deprecated\n");
- return eth_dev_ring_pair_create(name, numa_node, DEV_CREATE);
-}
-
-int
-rte_eth_ring_pair_attach(const char *name, const unsigned numa_node)
-{
- RTE_LOG(WARNING, PMD, "rte_eth_ring_pair_attach is deprecated\n");
- return eth_dev_ring_pair_create(name, numa_node, DEV_ATTACH);
-}
-
struct node_action_pair {
char name[PATH_MAX];
unsigned node;
diff --git a/drivers/net/ring/rte_eth_ring.h b/drivers/net/ring/rte_eth_ring.h
index 2262249..5a69bff 100644
--- a/drivers/net/ring/rte_eth_ring.h
+++ b/drivers/net/ring/rte_eth_ring.h
@@ -65,9 +65,6 @@ int rte_eth_from_rings(const char *name,
const unsigned nb_tx_queues,
const unsigned numa_node);
-int rte_eth_ring_pair_create(const char *name, const unsigned numa_node);
-int rte_eth_ring_pair_attach(const char *name, const unsigned numa_node);
-
#ifdef __cplusplus
}
#endif
diff --git a/drivers/net/ring/rte_eth_ring_version.map b/drivers/net/ring/rte_eth_ring_version.map
index 8ad107d..0875e25 100644
--- a/drivers/net/ring/rte_eth_ring_version.map
+++ b/drivers/net/ring/rte_eth_ring_version.map
@@ -2,8 +2,6 @@ DPDK_2.0 {
global:
rte_eth_from_rings;
- rte_eth_ring_pair_attach;
- rte_eth_ring_pair_create;
local: *;
};
--
2.5.1
^ permalink raw reply [relevance 4%]
* [dpdk-dev] [PATCH v2 00/10] clean deprecated code
` (3 preceding siblings ...)
2015-09-01 21:31 4% ` [dpdk-dev] [PATCH 9/9] ring: remove deprecated functions Thomas Monjalon
@ 2015-09-02 13:16 3% ` Thomas Monjalon
2015-09-02 13:16 5% ` [dpdk-dev] [PATCH v2 01/10] doc: init next release notes Thomas Monjalon
` (10 more replies)
4 siblings, 11 replies; 200+ results
From: Thomas Monjalon @ 2015-09-02 13:16 UTC (permalink / raw)
To: dev
Before starting a new integration cycle (2.2.0-rc0),
the deprecated code is removed.
The hash library is not cleaned in this patchset; that cleanup would be
better done by its maintainers. Bruce, Pablo, please check the
file doc/guides/rel_notes/deprecation.rst.
Changes in v2:
- increment KNI and ring PMD versions
- list library versions in release notes
- list API/ABI changes in release notes
Stephen Hemminger (2):
kni: remove deprecated functions
ring: remove deprecated functions
Thomas Monjalon (8):
doc: init next release notes
ethdev: remove Rx interrupt switch
mbuf: remove packet type from offload flags
ethdev: remove SCTP flow entries switch
eal: remove deprecated function
mem: remove dummy malloc library
lpm: remove deprecated field
acl: remove old API
MAINTAINERS | 1 -
app/test-acl/main.c | 17 ++
app/test-pipeline/pipeline_hash.c | 12 -
app/test-pmd/cmdline.c | 4 -
app/test-pmd/csumonly.c | 14 -
app/test-pmd/rxonly.c | 16 --
app/test/Makefile | 6 -
app/test/packet_burst_generator.c | 12 -
app/test/test_acl.c | 194 ++++++++++++++
app/test/test_acl.h | 59 +++++
app/test/test_func_reentrancy.c | 4 +-
app/test/test_kni.c | 36 ---
app/test/test_lpm.c | 4 +-
doc/guides/prog_guide/dev_kit_build_system.rst | 2 +-
doc/guides/prog_guide/env_abstraction_layer.rst | 2 +-
doc/guides/prog_guide/kernel_nic_interface.rst | 2 -
doc/guides/prog_guide/source_org.rst | 1 -
.../thread_safety_intel_dpdk_functions.rst | 2 +-
doc/guides/rel_notes/deprecation.rst | 33 ---
doc/guides/rel_notes/index.rst | 1 +
doc/guides/rel_notes/release_2_2.rst | 81 ++++++
doc/guides/sample_app_ug/kernel_nic_interface.rst | 9 -
drivers/net/cxgbe/sge.c | 16 --
drivers/net/e1000/igb_ethdev.c | 26 --
drivers/net/e1000/igb_rxtx.c | 34 ---
drivers/net/enic/enic_main.c | 25 --
drivers/net/fm10k/fm10k_rxtx.c | 15 --
drivers/net/i40e/i40e_fdir.c | 4 -
drivers/net/i40e/i40e_rxtx.c | 293 ---------------------
drivers/net/ixgbe/ixgbe_ethdev.c | 40 ---
drivers/net/ixgbe/ixgbe_rxtx.c | 87 ------
drivers/net/ixgbe/ixgbe_rxtx_vec.c | 111 --------
drivers/net/mlx4/mlx4.c | 29 --
drivers/net/ring/Makefile | 2 +-
drivers/net/ring/rte_eth_ring.c | 56 ----
drivers/net/ring/rte_eth_ring.h | 3 -
drivers/net/ring/rte_eth_ring_version.map | 2 -
drivers/net/vmxnet3/vmxnet3_rxtx.c | 8 -
examples/ip_fragmentation/main.c | 10 -
examples/ip_reassembly/main.c | 10 -
examples/l3fwd-acl/main.c | 44 ++--
examples/l3fwd-power/main.c | 11 -
examples/l3fwd/main.c | 114 --------
examples/tep_termination/vxlan.c | 5 -
lib/Makefile | 1 -
lib/librte_acl/Makefile | 2 +-
lib/librte_acl/rte_acl.c | 170 ------------
lib/librte_acl/rte_acl.h | 104 --------
lib/librte_acl/rte_acl_version.map | 2 -
lib/librte_eal/bsdapp/eal/Makefile | 2 +-
.../bsdapp/eal/include/exec-env/rte_interrupts.h | 2 -
lib/librte_eal/bsdapp/eal/rte_eal_version.map | 1 -
lib/librte_eal/common/eal_common_pci.c | 6 -
lib/librte_eal/common/include/rte_pci.h | 2 -
lib/librte_eal/linuxapp/eal/Makefile | 2 +-
lib/librte_eal/linuxapp/eal/eal_interrupts.c | 53 ----
.../linuxapp/eal/include/exec-env/rte_interrupts.h | 2 -
.../linuxapp/eal/include/exec-env/rte_kni_common.h | 6 -
lib/librte_eal/linuxapp/eal/rte_eal_version.map | 1 -
lib/librte_ether/Makefile | 2 +-
lib/librte_ether/rte_eth_ctrl.h | 4 -
lib/librte_ether/rte_ethdev.c | 40 ---
lib/librte_ether/rte_ethdev.h | 4 -
lib/librte_kni/Makefile | 2 +-
lib/librte_kni/rte_kni.c | 51 ----
lib/librte_kni/rte_kni.h | 54 ----
lib/librte_kni/rte_kni_version.map | 3 -
lib/librte_lpm/Makefile | 2 +-
lib/librte_lpm/rte_lpm.h | 11 -
lib/librte_malloc/Makefile | 48 ----
lib/librte_malloc/rte_malloc_empty.c | 34 ---
lib/librte_malloc/rte_malloc_version.map | 3 -
lib/librte_mbuf/Makefile | 2 +-
lib/librte_mbuf/rte_mbuf.c | 10 -
lib/librte_mbuf/rte_mbuf.h | 28 +-
mk/rte.app.mk | 1 -
76 files changed, 385 insertions(+), 1727 deletions(-)
create mode 100644 doc/guides/rel_notes/release_2_2.rst
delete mode 100644 lib/librte_malloc/Makefile
delete mode 100644 lib/librte_malloc/rte_malloc_empty.c
delete mode 100644 lib/librte_malloc/rte_malloc_version.map
--
2.5.1
^ permalink raw reply [relevance 3%]
* [dpdk-dev] [PATCH v2 01/10] doc: init next release notes
2015-09-02 13:16 3% ` [dpdk-dev] [PATCH v2 00/10] clean deprecated code Thomas Monjalon
@ 2015-09-02 13:16 5% ` Thomas Monjalon
2015-09-03 15:44 3% ` Mcnamara, John
2015-09-02 13:16 7% ` [dpdk-dev] [PATCH v2 02/10] ethdev: remove Rx interrupt switch Thomas Monjalon
` (9 subsequent siblings)
10 siblings, 1 reply; 200+ results
From: Thomas Monjalon @ 2015-09-02 13:16 UTC (permalink / raw)
To: dev
Signed-off-by: Thomas Monjalon <thomas.monjalon@6wind.com>
---
doc/guides/rel_notes/index.rst | 1 +
doc/guides/rel_notes/release_2_2.rst | 58 ++++++++++++++++++++++++++++++++++++
2 files changed, 59 insertions(+)
create mode 100644 doc/guides/rel_notes/release_2_2.rst
diff --git a/doc/guides/rel_notes/index.rst b/doc/guides/rel_notes/index.rst
index d01cbc8..d8cadeb 100644
--- a/doc/guides/rel_notes/index.rst
+++ b/doc/guides/rel_notes/index.rst
@@ -40,6 +40,7 @@ Contents
:numbered:
rel_description
+ release_2_2
release_2_1
release_2_0
release_1_8
diff --git a/doc/guides/rel_notes/release_2_2.rst b/doc/guides/rel_notes/release_2_2.rst
new file mode 100644
index 0000000..494b4eb
--- /dev/null
+++ b/doc/guides/rel_notes/release_2_2.rst
@@ -0,0 +1,58 @@
+DPDK Release 2.2
+================
+
+New Features
+------------
+
+
+Resolved Issues
+---------------
+
+
+Known Issues
+------------
+
+
+API Changes
+-----------
+
+
+ABI Changes
+-----------
+
+
+Shared Library Versions
+-----------------------
+
+The libraries prepended with a plus sign were incremented in this version.
+
+.. code-block:: diff
+
+ libethdev.so.1
+ librte_acl.so.1
+ librte_cfgfile.so.1
+ librte_cmdline.so.1
+ librte_distributor.so.1
+ librte_eal.so.1
+ librte_hash.so.1
+ librte_ip_frag.so.1
+ librte_ivshmem.so.1
+ librte_jobstats.so.1
+ librte_kni.so.1
+ librte_kvargs.so.1
+ librte_lpm.so.1
+ librte_malloc.so.1
+ librte_mbuf.so.1
+ librte_mempool.so.1
+ librte_meter.so.1
+ librte_pipeline.so.1
+ librte_pmd_bond.so.1
+ librte_pmd_ring.so.1
+ librte_port.so.1
+ librte_power.so.1
+ librte_reorder.so.1
+ librte_ring.so.1
+ librte_sched.so.1
+ librte_table.so.1
+ librte_timer.so.1
+ librte_vhost.so.1
--
2.5.1
^ permalink raw reply [relevance 5%]
* [dpdk-dev] [PATCH v2 02/10] ethdev: remove Rx interrupt switch
2015-09-02 13:16 3% ` [dpdk-dev] [PATCH v2 00/10] clean deprecated code Thomas Monjalon
2015-09-02 13:16 5% ` [dpdk-dev] [PATCH v2 01/10] doc: init next release notes Thomas Monjalon
@ 2015-09-02 13:16 7% ` Thomas Monjalon
2015-09-02 13:16 4% ` [dpdk-dev] [PATCH v2 03/10] mbuf: remove packet type from offload flags Thomas Monjalon
` (8 subsequent siblings)
10 siblings, 0 replies; 200+ results
From: Thomas Monjalon @ 2015-09-02 13:16 UTC (permalink / raw)
To: dev
The Rx interrupt feature is now part of the standard ABI.
Because of changes in rte_intr_handle and struct rte_eth_conf,
the eal and ethdev library versions are incremented.
Signed-off-by: Thomas Monjalon <thomas.monjalon@6wind.com>
Acked-by: Stephen Hemminger <stephen@networkplumber.org>
Acked-by: Neil Horman <nhorman@tuxdriver.com>
---
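Note: a minimal sketch (not part of the patch) of how an application uses the Rx
interrupt API now that it is unconditional: request rxq interrupts in rte_eth_conf,
as the l3fwd-power hunk below does, then arm/disarm the queue interrupt around the
wait. Port/queue 0 and the wait loop itself are placeholders.

#include <rte_ethdev.h>

/* Sketch: Rx queue interrupts requested at configure time. */
static const struct rte_eth_conf port_conf = {
	.intr_conf = {
		.lsc = 1,
		.rxq = 1, /* no longer behind RTE_NEXT_ABI */
	},
};

static void
wait_for_packets(uint8_t port_id, uint16_t queue_id)
{
	/* Arm the interrupt for this Rx queue, then block in the
	 * application's event loop (e.g. rte_epoll_wait()). */
	rte_eth_dev_rx_intr_enable(port_id, queue_id);
	/* ... sleep until the NIC signals the queue ... */
	rte_eth_dev_rx_intr_disable(port_id, queue_id);
	/* resume polling with rte_eth_rx_burst() */
}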
doc/guides/rel_notes/deprecation.rst | 4 --
doc/guides/rel_notes/release_2_2.rst | 7 ++-
drivers/net/e1000/igb_ethdev.c | 26 -----------
drivers/net/ixgbe/ixgbe_ethdev.c | 40 ----------------
examples/l3fwd-power/main.c | 2 -
lib/librte_eal/bsdapp/eal/Makefile | 2 +-
.../bsdapp/eal/include/exec-env/rte_interrupts.h | 2 -
lib/librte_eal/linuxapp/eal/Makefile | 2 +-
lib/librte_eal/linuxapp/eal/eal_interrupts.c | 53 ----------------------
.../linuxapp/eal/include/exec-env/rte_interrupts.h | 2 -
lib/librte_ether/Makefile | 2 +-
lib/librte_ether/rte_ethdev.c | 40 ----------------
lib/librte_ether/rte_ethdev.h | 4 --
13 files changed, 8 insertions(+), 178 deletions(-)
diff --git a/doc/guides/rel_notes/deprecation.rst b/doc/guides/rel_notes/deprecation.rst
index da17880..991a777 100644
--- a/doc/guides/rel_notes/deprecation.rst
+++ b/doc/guides/rel_notes/deprecation.rst
@@ -13,10 +13,6 @@ Deprecation Notices
There is no backward compatibility planned from release 2.2.
All binaries will need to be rebuilt from release 2.2.
-* ABI changes are planned for struct rte_intr_handle, struct rte_eth_conf
- and struct eth_dev_ops to support interrupt mode feature from release 2.1.
- Those changes may be enabled in the release 2.1 with CONFIG_RTE_NEXT_ABI.
-
* The EAL function rte_eal_pci_close_one is deprecated because renamed to
rte_eal_pci_detach.
diff --git a/doc/guides/rel_notes/release_2_2.rst b/doc/guides/rel_notes/release_2_2.rst
index 494b4eb..388d2e3 100644
--- a/doc/guides/rel_notes/release_2_2.rst
+++ b/doc/guides/rel_notes/release_2_2.rst
@@ -20,6 +20,9 @@ API Changes
ABI Changes
-----------
+* The EAL and ethdev structures rte_intr_handle and rte_eth_conf were changed
+ to support Rx interrupt. It was already done in 2.1 for CONFIG_RTE_NEXT_ABI.
+
Shared Library Versions
-----------------------
@@ -28,12 +31,12 @@ The libraries prepended with a plus sign were incremented in this version.
.. code-block:: diff
- libethdev.so.1
+ + libethdev.so.2
librte_acl.so.1
librte_cfgfile.so.1
librte_cmdline.so.1
librte_distributor.so.1
- librte_eal.so.1
+ + librte_eal.so.2
librte_hash.so.1
librte_ip_frag.so.1
librte_ivshmem.so.1
diff --git a/drivers/net/e1000/igb_ethdev.c b/drivers/net/e1000/igb_ethdev.c
index c7e6d55..848ef6e 100644
--- a/drivers/net/e1000/igb_ethdev.c
+++ b/drivers/net/e1000/igb_ethdev.c
@@ -106,9 +106,7 @@ static int eth_igb_flow_ctrl_get(struct rte_eth_dev *dev,
static int eth_igb_flow_ctrl_set(struct rte_eth_dev *dev,
struct rte_eth_fc_conf *fc_conf);
static int eth_igb_lsc_interrupt_setup(struct rte_eth_dev *dev);
-#ifdef RTE_NEXT_ABI
static int eth_igb_rxq_interrupt_setup(struct rte_eth_dev *dev);
-#endif
static int eth_igb_interrupt_get_status(struct rte_eth_dev *dev);
static int eth_igb_interrupt_action(struct rte_eth_dev *dev);
static void eth_igb_interrupt_handler(struct rte_intr_handle *handle,
@@ -232,7 +230,6 @@ static int igb_timesync_read_rx_timestamp(struct rte_eth_dev *dev,
uint32_t flags);
static int igb_timesync_read_tx_timestamp(struct rte_eth_dev *dev,
struct timespec *timestamp);
-#ifdef RTE_NEXT_ABI
static int eth_igb_rx_queue_intr_enable(struct rte_eth_dev *dev,
uint16_t queue_id);
static int eth_igb_rx_queue_intr_disable(struct rte_eth_dev *dev,
@@ -241,7 +238,6 @@ static void eth_igb_assign_msix_vector(struct e1000_hw *hw, int8_t direction,
uint8_t queue, uint8_t msix_vector);
static void eth_igb_write_ivar(struct e1000_hw *hw, uint8_t msix_vector,
uint8_t index, uint8_t offset);
-#endif
static void eth_igb_configure_msix_intr(struct rte_eth_dev *dev);
/*
@@ -303,10 +299,8 @@ static const struct eth_dev_ops eth_igb_ops = {
.vlan_tpid_set = eth_igb_vlan_tpid_set,
.vlan_offload_set = eth_igb_vlan_offload_set,
.rx_queue_setup = eth_igb_rx_queue_setup,
-#ifdef RTE_NEXT_ABI
.rx_queue_intr_enable = eth_igb_rx_queue_intr_enable,
.rx_queue_intr_disable = eth_igb_rx_queue_intr_disable,
-#endif
.rx_queue_release = eth_igb_rx_queue_release,
.rx_queue_count = eth_igb_rx_queue_count,
.rx_descriptor_done = eth_igb_rx_descriptor_done,
@@ -893,9 +887,7 @@ eth_igb_start(struct rte_eth_dev *dev)
E1000_DEV_PRIVATE(dev->data->dev_private);
struct rte_intr_handle *intr_handle = &dev->pci_dev->intr_handle;
int ret, mask;
-#ifdef RTE_NEXT_ABI
uint32_t intr_vector = 0;
-#endif
uint32_t ctrl_ext;
PMD_INIT_FUNC_TRACE();
@@ -936,7 +928,6 @@ eth_igb_start(struct rte_eth_dev *dev)
/* configure PF module if SRIOV enabled */
igb_pf_host_configure(dev);
-#ifdef RTE_NEXT_ABI
/* check and configure queue intr-vector mapping */
if (dev->data->dev_conf.intr_conf.rxq != 0)
intr_vector = dev->data->nb_rx_queues;
@@ -954,7 +945,6 @@ eth_igb_start(struct rte_eth_dev *dev)
return -ENOMEM;
}
}
-#endif
/* confiugre msix for rx interrupt */
eth_igb_configure_msix_intr(dev);
@@ -1050,11 +1040,9 @@ eth_igb_start(struct rte_eth_dev *dev)
" no intr multiplex\n");
}
-#ifdef RTE_NEXT_ABI
/* check if rxq interrupt is enabled */
if (dev->data->dev_conf.intr_conf.rxq != 0)
eth_igb_rxq_interrupt_setup(dev);
-#endif
/* enable uio/vfio intr/eventfd mapping */
rte_intr_enable(intr_handle);
@@ -1146,14 +1134,12 @@ eth_igb_stop(struct rte_eth_dev *dev)
}
filter_info->twotuple_mask = 0;
-#ifdef RTE_NEXT_ABI
/* Clean datapath event and queue/vec mapping */
rte_intr_efd_disable(intr_handle);
if (intr_handle->intr_vec != NULL) {
rte_free(intr_handle->intr_vec);
intr_handle->intr_vec = NULL;
}
-#endif
}
static void
@@ -1163,9 +1149,7 @@ eth_igb_close(struct rte_eth_dev *dev)
struct e1000_adapter *adapter =
E1000_DEV_PRIVATE(dev->data->dev_private);
struct rte_eth_link link;
-#ifdef RTE_NEXT_ABI
struct rte_pci_device *pci_dev;
-#endif
eth_igb_stop(dev);
adapter->stopped = 1;
@@ -1185,13 +1169,11 @@ eth_igb_close(struct rte_eth_dev *dev)
igb_dev_free_queues(dev);
-#ifdef RTE_NEXT_ABI
pci_dev = dev->pci_dev;
if (pci_dev->intr_handle.intr_vec) {
rte_free(pci_dev->intr_handle.intr_vec);
pci_dev->intr_handle.intr_vec = NULL;
}
-#endif
memset(&link, 0, sizeof(link));
rte_igb_dev_atomic_write_link_status(dev, &link);
@@ -2017,7 +1999,6 @@ eth_igb_lsc_interrupt_setup(struct rte_eth_dev *dev)
return 0;
}
-#ifdef RTE_NEXT_ABI
/* It clears the interrupt causes and enables the interrupt.
* It will be called once only during nic initialized.
*
@@ -2044,7 +2025,6 @@ static int eth_igb_rxq_interrupt_setup(struct rte_eth_dev *dev)
return 0;
}
-#endif
/*
* It reads ICR and gets interrupt causes, check it and set a bit flag
@@ -4144,7 +4124,6 @@ static struct rte_driver pmd_igbvf_drv = {
.init = rte_igbvf_pmd_init,
};
-#ifdef RTE_NEXT_ABI
static int
eth_igb_rx_queue_intr_disable(struct rte_eth_dev *dev, uint16_t queue_id)
{
@@ -4219,7 +4198,6 @@ eth_igb_assign_msix_vector(struct e1000_hw *hw, int8_t direction,
8 * direction);
}
}
-#endif
/* Sets up the hardware to generate MSI-X interrupts properly
* @hw
@@ -4228,13 +4206,11 @@ eth_igb_assign_msix_vector(struct e1000_hw *hw, int8_t direction,
static void
eth_igb_configure_msix_intr(struct rte_eth_dev *dev)
{
-#ifdef RTE_NEXT_ABI
int queue_id;
uint32_t tmpval, regval, intr_mask;
struct e1000_hw *hw =
E1000_DEV_PRIVATE_TO_HW(dev->data->dev_private);
uint32_t vec = 0;
-#endif
struct rte_intr_handle *intr_handle = &dev->pci_dev->intr_handle;
/* won't configure msix register if no mapping is done
@@ -4243,7 +4219,6 @@ eth_igb_configure_msix_intr(struct rte_eth_dev *dev)
if (!rte_intr_dp_is_en(intr_handle))
return;
-#ifdef RTE_NEXT_ABI
/* set interrupt vector for other causes */
if (hw->mac.type == e1000_82575) {
tmpval = E1000_READ_REG(hw, E1000_CTRL_EXT);
@@ -4299,7 +4274,6 @@ eth_igb_configure_msix_intr(struct rte_eth_dev *dev)
}
E1000_WRITE_FLUSH(hw);
-#endif
}
PMD_REGISTER_DRIVER(pmd_igb_drv);
diff --git a/drivers/net/ixgbe/ixgbe_ethdev.c b/drivers/net/ixgbe/ixgbe_ethdev.c
index b8ee1e9..ec2918c 100644
--- a/drivers/net/ixgbe/ixgbe_ethdev.c
+++ b/drivers/net/ixgbe/ixgbe_ethdev.c
@@ -190,9 +190,7 @@ static int ixgbe_dev_rss_reta_query(struct rte_eth_dev *dev,
uint16_t reta_size);
static void ixgbe_dev_link_status_print(struct rte_eth_dev *dev);
static int ixgbe_dev_lsc_interrupt_setup(struct rte_eth_dev *dev);
-#ifdef RTE_NEXT_ABI
static int ixgbe_dev_rxq_interrupt_setup(struct rte_eth_dev *dev);
-#endif
static int ixgbe_dev_interrupt_get_status(struct rte_eth_dev *dev);
static int ixgbe_dev_interrupt_action(struct rte_eth_dev *dev);
static void ixgbe_dev_interrupt_handler(struct rte_intr_handle *handle,
@@ -227,14 +225,12 @@ static void ixgbevf_vlan_offload_set(struct rte_eth_dev *dev, int mask);
static void ixgbevf_set_vfta_all(struct rte_eth_dev *dev, bool on);
static void ixgbevf_dev_interrupt_handler(struct rte_intr_handle *handle,
void *param);
-#ifdef RTE_NEXT_ABI
static int ixgbevf_dev_rx_queue_intr_enable(struct rte_eth_dev *dev,
uint16_t queue_id);
static int ixgbevf_dev_rx_queue_intr_disable(struct rte_eth_dev *dev,
uint16_t queue_id);
static void ixgbevf_set_ivar_map(struct ixgbe_hw *hw, int8_t direction,
uint8_t queue, uint8_t msix_vector);
-#endif
static void ixgbevf_configure_msix(struct rte_eth_dev *dev);
/* For Eth VMDQ APIs support */
@@ -252,14 +248,12 @@ static int ixgbe_mirror_rule_set(struct rte_eth_dev *dev,
uint8_t rule_id, uint8_t on);
static int ixgbe_mirror_rule_reset(struct rte_eth_dev *dev,
uint8_t rule_id);
-#ifdef RTE_NEXT_ABI
static int ixgbe_dev_rx_queue_intr_enable(struct rte_eth_dev *dev,
uint16_t queue_id);
static int ixgbe_dev_rx_queue_intr_disable(struct rte_eth_dev *dev,
uint16_t queue_id);
static void ixgbe_set_ivar_map(struct ixgbe_hw *hw, int8_t direction,
uint8_t queue, uint8_t msix_vector);
-#endif
static void ixgbe_configure_msix(struct rte_eth_dev *dev);
static int ixgbe_set_queue_rate_limit(struct rte_eth_dev *dev,
@@ -420,10 +414,8 @@ static const struct eth_dev_ops ixgbe_eth_dev_ops = {
.tx_queue_start = ixgbe_dev_tx_queue_start,
.tx_queue_stop = ixgbe_dev_tx_queue_stop,
.rx_queue_setup = ixgbe_dev_rx_queue_setup,
-#ifdef RTE_NEXT_ABI
.rx_queue_intr_enable = ixgbe_dev_rx_queue_intr_enable,
.rx_queue_intr_disable = ixgbe_dev_rx_queue_intr_disable,
-#endif
.rx_queue_release = ixgbe_dev_rx_queue_release,
.rx_queue_count = ixgbe_dev_rx_queue_count,
.rx_descriptor_done = ixgbe_dev_rx_descriptor_done,
@@ -497,10 +489,8 @@ static const struct eth_dev_ops ixgbevf_eth_dev_ops = {
.rx_descriptor_done = ixgbe_dev_rx_descriptor_done,
.tx_queue_setup = ixgbe_dev_tx_queue_setup,
.tx_queue_release = ixgbe_dev_tx_queue_release,
-#ifdef RTE_NEXT_ABI
.rx_queue_intr_enable = ixgbevf_dev_rx_queue_intr_enable,
.rx_queue_intr_disable = ixgbevf_dev_rx_queue_intr_disable,
-#endif
.mac_addr_add = ixgbevf_add_mac_addr,
.mac_addr_remove = ixgbevf_remove_mac_addr,
.set_mc_addr_list = ixgbe_dev_set_mc_addr_list,
@@ -1680,9 +1670,7 @@ ixgbe_dev_start(struct rte_eth_dev *dev)
struct ixgbe_vf_info *vfinfo =
*IXGBE_DEV_PRIVATE_TO_P_VFDATA(dev->data->dev_private);
struct rte_intr_handle *intr_handle = &dev->pci_dev->intr_handle;
-#ifdef RTE_NEXT_ABI
uint32_t intr_vector = 0;
-#endif
int err, link_up = 0, negotiate = 0;
uint32_t speed = 0;
int mask = 0;
@@ -1715,7 +1703,6 @@ ixgbe_dev_start(struct rte_eth_dev *dev)
/* configure PF module if SRIOV enabled */
ixgbe_pf_host_configure(dev);
-#ifdef RTE_NEXT_ABI
/* check and configure queue intr-vector mapping */
if (dev->data->dev_conf.intr_conf.rxq != 0)
intr_vector = dev->data->nb_rx_queues;
@@ -1734,7 +1721,6 @@ ixgbe_dev_start(struct rte_eth_dev *dev)
return -ENOMEM;
}
}
-#endif
/* confiugre msix for sleep until rx interrupt */
ixgbe_configure_msix(dev);
@@ -1827,11 +1813,9 @@ skip_link_setup:
" no intr multiplex\n");
}
-#ifdef RTE_NEXT_ABI
/* check if rxq interrupt is enabled */
if (dev->data->dev_conf.intr_conf.rxq != 0)
ixgbe_dev_rxq_interrupt_setup(dev);
-#endif
/* enable uio/vfio intr/eventfd mapping */
rte_intr_enable(intr_handle);
@@ -1942,14 +1926,12 @@ ixgbe_dev_stop(struct rte_eth_dev *dev)
memset(filter_info->fivetuple_mask, 0,
sizeof(uint32_t) * IXGBE_5TUPLE_ARRAY_SIZE);
-#ifdef RTE_NEXT_ABI
/* Clean datapath event and queue/vec mapping */
rte_intr_efd_disable(intr_handle);
if (intr_handle->intr_vec != NULL) {
rte_free(intr_handle->intr_vec);
intr_handle->intr_vec = NULL;
}
-#endif
}
/*
@@ -2623,7 +2605,6 @@ ixgbe_dev_lsc_interrupt_setup(struct rte_eth_dev *dev)
* - On success, zero.
* - On failure, a negative value.
*/
-#ifdef RTE_NEXT_ABI
static int
ixgbe_dev_rxq_interrupt_setup(struct rte_eth_dev *dev)
{
@@ -2634,7 +2615,6 @@ ixgbe_dev_rxq_interrupt_setup(struct rte_eth_dev *dev)
return 0;
}
-#endif
/*
* It reads ICR and sets flag (IXGBE_EICR_LSC) for the link_update.
@@ -3435,9 +3415,7 @@ ixgbevf_dev_start(struct rte_eth_dev *dev)
{
struct ixgbe_hw *hw =
IXGBE_DEV_PRIVATE_TO_HW(dev->data->dev_private);
-#ifdef RTE_NEXT_ABI
uint32_t intr_vector = 0;
-#endif
struct rte_intr_handle *intr_handle = &dev->pci_dev->intr_handle;
int err, mask = 0;
@@ -3470,7 +3448,6 @@ ixgbevf_dev_start(struct rte_eth_dev *dev)
ixgbevf_dev_rxtx_start(dev);
-#ifdef RTE_NEXT_ABI
/* check and configure queue intr-vector mapping */
if (dev->data->dev_conf.intr_conf.rxq != 0)
intr_vector = dev->data->nb_rx_queues;
@@ -3488,7 +3465,6 @@ ixgbevf_dev_start(struct rte_eth_dev *dev)
return -ENOMEM;
}
}
-#endif
ixgbevf_configure_msix(dev);
if (dev->data->dev_conf.intr_conf.lsc != 0) {
@@ -3534,23 +3510,19 @@ ixgbevf_dev_stop(struct rte_eth_dev *dev)
/* disable intr eventfd mapping */
rte_intr_disable(intr_handle);
-#ifdef RTE_NEXT_ABI
/* Clean datapath event and queue/vec mapping */
rte_intr_efd_disable(intr_handle);
if (intr_handle->intr_vec != NULL) {
rte_free(intr_handle->intr_vec);
intr_handle->intr_vec = NULL;
}
-#endif
}
static void
ixgbevf_dev_close(struct rte_eth_dev *dev)
{
struct ixgbe_hw *hw = IXGBE_DEV_PRIVATE_TO_HW(dev->data->dev_private);
-#ifdef RTE_NEXT_ABI
struct rte_pci_device *pci_dev;
-#endif
PMD_INIT_FUNC_TRACE();
@@ -3563,13 +3535,11 @@ ixgbevf_dev_close(struct rte_eth_dev *dev)
/* reprogram the RAR[0] in case user changed it. */
ixgbe_set_rar(hw, 0, hw->mac.addr, 0, IXGBE_RAH_AV);
-#ifdef RTE_NEXT_ABI
pci_dev = dev->pci_dev;
if (pci_dev->intr_handle.intr_vec) {
rte_free(pci_dev->intr_handle.intr_vec);
pci_dev->intr_handle.intr_vec = NULL;
}
-#endif
}
static void ixgbevf_set_vfta_all(struct rte_eth_dev *dev, bool on)
@@ -4087,7 +4057,6 @@ ixgbe_mirror_rule_reset(struct rte_eth_dev *dev, uint8_t rule_id)
return 0;
}
-#ifdef RTE_NEXT_ABI
static int
ixgbevf_dev_rx_queue_intr_enable(struct rte_eth_dev *dev, uint16_t queue_id)
{
@@ -4240,18 +4209,15 @@ ixgbe_set_ivar_map(struct ixgbe_hw *hw, int8_t direction,
}
}
}
-#endif
static void
ixgbevf_configure_msix(struct rte_eth_dev *dev)
{
struct rte_intr_handle *intr_handle = &dev->pci_dev->intr_handle;
-#ifdef RTE_NEXT_ABI
struct ixgbe_hw *hw =
IXGBE_DEV_PRIVATE_TO_HW(dev->data->dev_private);
uint32_t q_idx;
uint32_t vector_idx = 0;
-#endif
/* won't configure msix register if no mapping is done
* between intr vector and event fd.
@@ -4259,7 +4225,6 @@ ixgbevf_configure_msix(struct rte_eth_dev *dev)
if (!rte_intr_dp_is_en(intr_handle))
return;
-#ifdef RTE_NEXT_ABI
/* Configure all RX queues of VF */
for (q_idx = 0; q_idx < dev->data->nb_rx_queues; q_idx++) {
/* Force all queue use vector 0,
@@ -4271,7 +4236,6 @@ ixgbevf_configure_msix(struct rte_eth_dev *dev)
/* Configure VF Rx queue ivar */
ixgbevf_set_ivar_map(hw, -1, 1, vector_idx);
-#endif
}
/**
@@ -4283,13 +4247,11 @@ static void
ixgbe_configure_msix(struct rte_eth_dev *dev)
{
struct rte_intr_handle *intr_handle = &dev->pci_dev->intr_handle;
-#ifdef RTE_NEXT_ABI
struct ixgbe_hw *hw =
IXGBE_DEV_PRIVATE_TO_HW(dev->data->dev_private);
uint32_t queue_id, vec = 0;
uint32_t mask;
uint32_t gpie;
-#endif
/* won't configure msix register if no mapping is done
* between intr vector and event fd
@@ -4297,7 +4259,6 @@ ixgbe_configure_msix(struct rte_eth_dev *dev)
if (!rte_intr_dp_is_en(intr_handle))
return;
-#ifdef RTE_NEXT_ABI
/* setup GPIE for MSI-x mode */
gpie = IXGBE_READ_REG(hw, IXGBE_GPIE);
gpie |= IXGBE_GPIE_MSIX_MODE | IXGBE_GPIE_PBA_SUPPORT |
@@ -4347,7 +4308,6 @@ ixgbe_configure_msix(struct rte_eth_dev *dev)
IXGBE_EIMS_LSC);
IXGBE_WRITE_REG(hw, IXGBE_EIAC, mask);
-#endif
}
static int ixgbe_set_queue_rate_limit(struct rte_eth_dev *dev,
diff --git a/examples/l3fwd-power/main.c b/examples/l3fwd-power/main.c
index 2f205ea..086f29b 100644
--- a/examples/l3fwd-power/main.c
+++ b/examples/l3fwd-power/main.c
@@ -239,9 +239,7 @@ static struct rte_eth_conf port_conf = {
},
.intr_conf = {
.lsc = 1,
-#ifdef RTE_NEXT_ABI
.rxq = 1,
-#endif
},
};
diff --git a/lib/librte_eal/bsdapp/eal/Makefile b/lib/librte_eal/bsdapp/eal/Makefile
index a969435..a49dcec 100644
--- a/lib/librte_eal/bsdapp/eal/Makefile
+++ b/lib/librte_eal/bsdapp/eal/Makefile
@@ -44,7 +44,7 @@ CFLAGS += $(WERROR_FLAGS) -O3
EXPORT_MAP := rte_eal_version.map
-LIBABIVER := 1
+LIBABIVER := 2
# specific to linuxapp exec-env
SRCS-$(CONFIG_RTE_LIBRTE_EAL_BSDAPP) := eal.c
diff --git a/lib/librte_eal/bsdapp/eal/include/exec-env/rte_interrupts.h b/lib/librte_eal/bsdapp/eal/include/exec-env/rte_interrupts.h
index bffa902..88d4ae1 100644
--- a/lib/librte_eal/bsdapp/eal/include/exec-env/rte_interrupts.h
+++ b/lib/librte_eal/bsdapp/eal/include/exec-env/rte_interrupts.h
@@ -50,11 +50,9 @@ struct rte_intr_handle {
int fd; /**< file descriptor */
int uio_cfg_fd; /**< UIO config file descriptor */
enum rte_intr_handle_type type; /**< handle type */
-#ifdef RTE_NEXT_ABI
int max_intr; /**< max interrupt requested */
uint32_t nb_efd; /**< number of available efds */
int *intr_vec; /**< intr vector number array */
-#endif
};
/**
diff --git a/lib/librte_eal/linuxapp/eal/Makefile b/lib/librte_eal/linuxapp/eal/Makefile
index 376d275..d62196e 100644
--- a/lib/librte_eal/linuxapp/eal/Makefile
+++ b/lib/librte_eal/linuxapp/eal/Makefile
@@ -35,7 +35,7 @@ LIB = librte_eal.a
EXPORT_MAP := rte_eal_version.map
-LIBABIVER := 1
+LIBABIVER := 2
VPATH += $(RTE_SDK)/lib/librte_eal/common
diff --git a/lib/librte_eal/linuxapp/eal/eal_interrupts.c b/lib/librte_eal/linuxapp/eal/eal_interrupts.c
index 3f87875..66e1fe3 100644
--- a/lib/librte_eal/linuxapp/eal/eal_interrupts.c
+++ b/lib/librte_eal/linuxapp/eal/eal_interrupts.c
@@ -290,26 +290,18 @@ vfio_enable_msix(struct rte_intr_handle *intr_handle) {
irq_set = (struct vfio_irq_set *) irq_set_buf;
irq_set->argsz = len;
-#ifdef RTE_NEXT_ABI
if (!intr_handle->max_intr)
intr_handle->max_intr = 1;
else if (intr_handle->max_intr > RTE_MAX_RXTX_INTR_VEC_ID)
intr_handle->max_intr = RTE_MAX_RXTX_INTR_VEC_ID + 1;
irq_set->count = intr_handle->max_intr;
-#else
- irq_set->count = 1;
-#endif
irq_set->flags = VFIO_IRQ_SET_DATA_EVENTFD | VFIO_IRQ_SET_ACTION_TRIGGER;
irq_set->index = VFIO_PCI_MSIX_IRQ_INDEX;
irq_set->start = 0;
fd_ptr = (int *) &irq_set->data;
-#ifdef RTE_NEXT_ABI
memcpy(fd_ptr, intr_handle->efds, sizeof(intr_handle->efds));
fd_ptr[intr_handle->max_intr - 1] = intr_handle->fd;
-#else
- fd_ptr[0] = intr_handle->fd;
-#endif
ret = ioctl(intr_handle->vfio_dev_fd, VFIO_DEVICE_SET_IRQS, irq_set);
@@ -886,7 +878,6 @@ rte_eal_intr_init(void)
return -ret;
}
-#ifdef RTE_NEXT_ABI
static void
eal_intr_proc_rxtx_intr(int fd, const struct rte_intr_handle *intr_handle)
{
@@ -929,7 +920,6 @@ eal_intr_proc_rxtx_intr(int fd, const struct rte_intr_handle *intr_handle)
return;
} while (1);
}
-#endif
static int
eal_epoll_process_event(struct epoll_event *evs, unsigned int n,
@@ -1068,7 +1058,6 @@ rte_epoll_ctl(int epfd, int op, int fd,
return 0;
}
-#ifdef RTE_NEXT_ABI
int
rte_intr_rx_ctl(struct rte_intr_handle *intr_handle, int epfd,
int op, unsigned int vec, void *data)
@@ -1192,45 +1181,3 @@ rte_intr_allow_others(struct rte_intr_handle *intr_handle)
{
return !!(intr_handle->max_intr - intr_handle->nb_efd);
}
-
-#else
-int
-rte_intr_rx_ctl(struct rte_intr_handle *intr_handle,
- int epfd, int op, unsigned int vec, void *data)
-{
- RTE_SET_USED(intr_handle);
- RTE_SET_USED(epfd);
- RTE_SET_USED(op);
- RTE_SET_USED(vec);
- RTE_SET_USED(data);
- return -ENOTSUP;
-}
-
-int
-rte_intr_efd_enable(struct rte_intr_handle *intr_handle, uint32_t nb_efd)
-{
- RTE_SET_USED(intr_handle);
- RTE_SET_USED(nb_efd);
- return 0;
-}
-
-void
-rte_intr_efd_disable(struct rte_intr_handle *intr_handle)
-{
- RTE_SET_USED(intr_handle);
-}
-
-int
-rte_intr_dp_is_en(struct rte_intr_handle *intr_handle)
-{
- RTE_SET_USED(intr_handle);
- return 0;
-}
-
-int
-rte_intr_allow_others(struct rte_intr_handle *intr_handle)
-{
- RTE_SET_USED(intr_handle);
- return 1;
-}
-#endif
diff --git a/lib/librte_eal/linuxapp/eal/include/exec-env/rte_interrupts.h b/lib/librte_eal/linuxapp/eal/include/exec-env/rte_interrupts.h
index b05f4c8..45071b7 100644
--- a/lib/librte_eal/linuxapp/eal/include/exec-env/rte_interrupts.h
+++ b/lib/librte_eal/linuxapp/eal/include/exec-env/rte_interrupts.h
@@ -86,14 +86,12 @@ struct rte_intr_handle {
};
int fd; /**< interrupt event file descriptor */
enum rte_intr_handle_type type; /**< handle type */
-#ifdef RTE_NEXT_ABI
uint32_t max_intr; /**< max interrupt requested */
uint32_t nb_efd; /**< number of available efd(event fd) */
int efds[RTE_MAX_RXTX_INTR_VEC_ID]; /**< intr vectors/efds mapping */
struct rte_epoll_event elist[RTE_MAX_RXTX_INTR_VEC_ID];
/**< intr vector epoll event */
int *intr_vec; /**< intr vector number array */
-#endif
};
#define RTE_EPOLL_PER_THREAD -1 /**< to hint using per thread epfd */
diff --git a/lib/librte_ether/Makefile b/lib/librte_ether/Makefile
index fc45a71..3e81a0e 100644
--- a/lib/librte_ether/Makefile
+++ b/lib/librte_ether/Makefile
@@ -41,7 +41,7 @@ CFLAGS += $(WERROR_FLAGS)
EXPORT_MAP := rte_ether_version.map
-LIBABIVER := 1
+LIBABIVER := 2
SRCS-y += rte_ethdev.c
diff --git a/lib/librte_ether/rte_ethdev.c b/lib/librte_ether/rte_ethdev.c
index 6b2400c..b309309 100644
--- a/lib/librte_ether/rte_ethdev.c
+++ b/lib/librte_ether/rte_ethdev.c
@@ -3033,7 +3033,6 @@ _rte_eth_dev_callback_process(struct rte_eth_dev *dev,
rte_spinlock_unlock(&rte_eth_dev_cb_lock);
}
-#ifdef RTE_NEXT_ABI
int
rte_eth_dev_rx_intr_ctl(uint8_t port_id, int epfd, int op, void *data)
{
@@ -3139,45 +3138,6 @@ rte_eth_dev_rx_intr_disable(uint8_t port_id,
FUNC_PTR_OR_ERR_RET(*dev->dev_ops->rx_queue_intr_disable, -ENOTSUP);
return (*dev->dev_ops->rx_queue_intr_disable)(dev, queue_id);
}
-#else
-int
-rte_eth_dev_rx_intr_enable(uint8_t port_id, uint16_t queue_id)
-{
- RTE_SET_USED(port_id);
- RTE_SET_USED(queue_id);
- return -ENOTSUP;
-}
-
-int
-rte_eth_dev_rx_intr_disable(uint8_t port_id, uint16_t queue_id)
-{
- RTE_SET_USED(port_id);
- RTE_SET_USED(queue_id);
- return -ENOTSUP;
-}
-
-int
-rte_eth_dev_rx_intr_ctl(uint8_t port_id, int epfd, int op, void *data)
-{
- RTE_SET_USED(port_id);
- RTE_SET_USED(epfd);
- RTE_SET_USED(op);
- RTE_SET_USED(data);
- return -1;
-}
-
-int
-rte_eth_dev_rx_intr_ctl_q(uint8_t port_id, uint16_t queue_id,
- int epfd, int op, void *data)
-{
- RTE_SET_USED(port_id);
- RTE_SET_USED(queue_id);
- RTE_SET_USED(epfd);
- RTE_SET_USED(op);
- RTE_SET_USED(data);
- return -1;
-}
-#endif
#ifdef RTE_NIC_BYPASS
int rte_eth_dev_bypass_init(uint8_t port_id)
diff --git a/lib/librte_ether/rte_ethdev.h b/lib/librte_ether/rte_ethdev.h
index 544afe0..fa06554 100644
--- a/lib/librte_ether/rte_ethdev.h
+++ b/lib/librte_ether/rte_ethdev.h
@@ -845,10 +845,8 @@ struct rte_eth_fdir {
struct rte_intr_conf {
/** enable/disable lsc interrupt. 0 (default) - disable, 1 enable */
uint16_t lsc;
-#ifdef RTE_NEXT_ABI
/** enable/disable rxq interrupt. 0 (default) - disable, 1 enable */
uint16_t rxq;
-#endif
};
/**
@@ -1392,12 +1390,10 @@ struct eth_dev_ops {
eth_queue_release_t rx_queue_release;/**< Release RX queue.*/
eth_rx_queue_count_t rx_queue_count; /**< Get Rx queue count. */
eth_rx_descriptor_done_t rx_descriptor_done; /**< Check rxd DD bit */
-#ifdef RTE_NEXT_ABI
/**< Enable Rx queue interrupt. */
eth_rx_enable_intr_t rx_queue_intr_enable;
/**< Disable Rx queue interrupt.*/
eth_rx_disable_intr_t rx_queue_intr_disable;
-#endif
eth_tx_queue_setup_t tx_queue_setup;/**< Set up device TX queue.*/
eth_queue_release_t tx_queue_release;/**< Release TX queue.*/
eth_dev_led_on_t dev_led_on; /**< Turn on LED. */
--
2.5.1
^ permalink raw reply [relevance 7%]
* [dpdk-dev] [PATCH v2 03/10] mbuf: remove packet type from offload flags
2015-09-02 13:16 3% ` [dpdk-dev] [PATCH v2 00/10] clean deprecated code Thomas Monjalon
2015-09-02 13:16 5% ` [dpdk-dev] [PATCH v2 01/10] doc: init next release notes Thomas Monjalon
2015-09-02 13:16 7% ` [dpdk-dev] [PATCH v2 02/10] ethdev: remove Rx interrupt switch Thomas Monjalon
@ 2015-09-02 13:16 4% ` Thomas Monjalon
2015-09-02 13:16 15% ` [dpdk-dev] [PATCH v2 04/10] ethdev: remove SCTP flow entries switch Thomas Monjalon
` (7 subsequent siblings)
10 siblings, 0 replies; 200+ results
From: Thomas Monjalon @ 2015-09-02 13:16 UTC (permalink / raw)
To: dev
The extended unified packet type is now part of the standard ABI.
As the mbuf struct is changed, the mbuf library version is incremented.
Signed-off-by: Thomas Monjalon <thomas.monjalon@6wind.com>
Acked-by: Stephen Hemminger <stephen@networkplumber.org>
Acked-by: Neil Horman <nhorman@tuxdriver.com>
---
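Note: a minimal sketch (not part of the patch) of classifying a received mbuf with
the unified packet_type field instead of the removed PKT_RX_IPV4_HDR/PKT_RX_IPV6_HDR
flags, mirroring the app/test-pipeline and testpmd hunks below.

#include <rte_mbuf.h>

/* Sketch: branch on the unified packet type of a received mbuf. */
static void
classify_mbuf(const struct rte_mbuf *m)
{
	if (RTE_ETH_IS_IPV4_HDR(m->packet_type)) {
		/* outer L3 is IPv4 (any RTE_PTYPE_L3_IPV4* variant) */
	} else if (RTE_ETH_IS_IPV6_HDR(m->packet_type)) {
		/* outer L3 is IPv6 */
	}

	if (RTE_ETH_IS_TUNNEL_PKT(m->packet_type)) {
		/* tunneled packet; inner headers are described by the
		 * RTE_PTYPE_INNER_* fields of m->packet_type */
	}
}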
app/test-pipeline/pipeline_hash.c | 12 -
app/test-pmd/csumonly.c | 14 -
app/test-pmd/rxonly.c | 16 --
app/test/packet_burst_generator.c | 12 -
doc/guides/rel_notes/deprecation.rst | 5 -
doc/guides/rel_notes/release_2_2.rst | 5 +-
drivers/net/cxgbe/sge.c | 16 --
drivers/net/e1000/igb_rxtx.c | 34 ---
drivers/net/enic/enic_main.c | 25 --
drivers/net/fm10k/fm10k_rxtx.c | 15 --
drivers/net/i40e/i40e_rxtx.c | 293 ---------------------
drivers/net/ixgbe/ixgbe_rxtx.c | 87 ------
drivers/net/ixgbe/ixgbe_rxtx_vec.c | 111 --------
drivers/net/mlx4/mlx4.c | 29 --
drivers/net/vmxnet3/vmxnet3_rxtx.c | 8 -
examples/ip_fragmentation/main.c | 10 -
examples/ip_reassembly/main.c | 10 -
examples/l3fwd-acl/main.c | 27 --
examples/l3fwd-power/main.c | 9 -
examples/l3fwd/main.c | 114 --------
examples/tep_termination/vxlan.c | 5 -
.../linuxapp/eal/include/exec-env/rte_kni_common.h | 6 -
lib/librte_mbuf/Makefile | 2 +-
lib/librte_mbuf/rte_mbuf.c | 10 -
lib/librte_mbuf/rte_mbuf.h | 28 +-
25 files changed, 6 insertions(+), 897 deletions(-)
diff --git a/app/test-pipeline/pipeline_hash.c b/app/test-pipeline/pipeline_hash.c
index aa3f9e5..548615f 100644
--- a/app/test-pipeline/pipeline_hash.c
+++ b/app/test-pipeline/pipeline_hash.c
@@ -459,33 +459,21 @@ app_main_loop_rx_metadata(void) {
signature = RTE_MBUF_METADATA_UINT32_PTR(m, 0);
key = RTE_MBUF_METADATA_UINT8_PTR(m, 32);
-#ifdef RTE_NEXT_ABI
if (RTE_ETH_IS_IPV4_HDR(m->packet_type)) {
-#else
- if (m->ol_flags & PKT_RX_IPV4_HDR) {
-#endif
ip_hdr = (struct ipv4_hdr *)
&m_data[sizeof(struct ether_hdr)];
ip_dst = ip_hdr->dst_addr;
k32 = (uint32_t *) key;
k32[0] = ip_dst & 0xFFFFFF00;
-#ifdef RTE_NEXT_ABI
} else if (RTE_ETH_IS_IPV6_HDR(m->packet_type)) {
-#else
- } else {
-#endif
ipv6_hdr = (struct ipv6_hdr *)
&m_data[sizeof(struct ether_hdr)];
ipv6_dst = ipv6_hdr->dst_addr;
memcpy(key, ipv6_dst, 16);
-#ifdef RTE_NEXT_ABI
} else
continue;
-#else
- }
-#endif
*signature = test_hash(key, 0, 0);
}
diff --git a/app/test-pmd/csumonly.c b/app/test-pmd/csumonly.c
index 1bf3485..e561dde 100644
--- a/app/test-pmd/csumonly.c
+++ b/app/test-pmd/csumonly.c
@@ -202,14 +202,9 @@ parse_ethernet(struct ether_hdr *eth_hdr, struct testpmd_offload_info *info)
/* Parse a vxlan header */
static void
-#ifdef RTE_NEXT_ABI
parse_vxlan(struct udp_hdr *udp_hdr,
struct testpmd_offload_info *info,
uint32_t pkt_type)
-#else
-parse_vxlan(struct udp_hdr *udp_hdr, struct testpmd_offload_info *info,
- uint64_t mbuf_olflags)
-#endif
{
struct ether_hdr *eth_hdr;
@@ -217,12 +212,7 @@ parse_vxlan(struct udp_hdr *udp_hdr, struct testpmd_offload_info *info,
* (rfc7348) or that the rx offload flag is set (i40e only
* currently) */
if (udp_hdr->dst_port != _htons(4789) &&
-#ifdef RTE_NEXT_ABI
RTE_ETH_IS_TUNNEL_PKT(pkt_type) == 0)
-#else
- (mbuf_olflags & (PKT_RX_TUNNEL_IPV4_HDR |
- PKT_RX_TUNNEL_IPV6_HDR)) == 0)
-#endif
return;
info->is_tunnel = 1;
@@ -559,11 +549,7 @@ pkt_burst_checksum_forward(struct fwd_stream *fs)
struct udp_hdr *udp_hdr;
udp_hdr = (struct udp_hdr *)((char *)l3_hdr +
info.l3_len);
-#ifdef RTE_NEXT_ABI
parse_vxlan(udp_hdr, &info, m->packet_type);
-#else
- parse_vxlan(udp_hdr, &info, m->ol_flags);
-#endif
} else if (info.l4_proto == IPPROTO_GRE) {
struct simple_gre_hdr *gre_hdr;
gre_hdr = (struct simple_gre_hdr *)
diff --git a/app/test-pmd/rxonly.c b/app/test-pmd/rxonly.c
index ee7fd8d..14555ab 100644
--- a/app/test-pmd/rxonly.c
+++ b/app/test-pmd/rxonly.c
@@ -91,11 +91,7 @@ pkt_burst_receive(struct fwd_stream *fs)
uint64_t ol_flags;
uint16_t nb_rx;
uint16_t i, packet_type;
-#ifdef RTE_NEXT_ABI
uint16_t is_encapsulation;
-#else
- uint64_t is_encapsulation;
-#endif
#ifdef RTE_TEST_PMD_RECORD_CORE_CYCLES
uint64_t start_tsc;
@@ -138,13 +134,7 @@ pkt_burst_receive(struct fwd_stream *fs)
eth_type = RTE_BE_TO_CPU_16(eth_hdr->ether_type);
ol_flags = mb->ol_flags;
packet_type = mb->packet_type;
-
-#ifdef RTE_NEXT_ABI
is_encapsulation = RTE_ETH_IS_TUNNEL_PKT(packet_type);
-#else
- is_encapsulation = ol_flags & (PKT_RX_TUNNEL_IPV4_HDR |
- PKT_RX_TUNNEL_IPV6_HDR);
-#endif
print_ether_addr(" src=", ð_hdr->s_addr);
print_ether_addr(" - dst=", ð_hdr->d_addr);
@@ -171,7 +161,6 @@ pkt_burst_receive(struct fwd_stream *fs)
if (ol_flags & PKT_RX_QINQ_PKT)
printf(" - QinQ VLAN tci=0x%x, VLAN tci outer=0x%x",
mb->vlan_tci, mb->vlan_tci_outer);
-#ifdef RTE_NEXT_ABI
if (mb->packet_type) {
uint32_t ptype;
@@ -341,7 +330,6 @@ pkt_burst_receive(struct fwd_stream *fs)
printf("\n");
} else
printf("Unknown packet type\n");
-#endif /* RTE_NEXT_ABI */
if (is_encapsulation) {
struct ipv4_hdr *ipv4_hdr;
struct ipv6_hdr *ipv6_hdr;
@@ -355,11 +343,7 @@ pkt_burst_receive(struct fwd_stream *fs)
l2_len = sizeof(struct ether_hdr);
/* Do not support ipv4 option field */
-#ifdef RTE_NEXT_ABI
if (RTE_ETH_IS_IPV4_HDR(packet_type)) {
-#else
- if (ol_flags & PKT_RX_TUNNEL_IPV4_HDR) {
-#endif
l3_len = sizeof(struct ipv4_hdr);
ipv4_hdr = rte_pktmbuf_mtod_offset(mb,
struct ipv4_hdr *,
diff --git a/app/test/packet_burst_generator.c b/app/test/packet_burst_generator.c
index d9d808b..a93c3b5 100644
--- a/app/test/packet_burst_generator.c
+++ b/app/test/packet_burst_generator.c
@@ -273,21 +273,9 @@ nomore_mbuf:
if (ipv4) {
pkt->vlan_tci = ETHER_TYPE_IPv4;
pkt->l3_len = sizeof(struct ipv4_hdr);
-#ifndef RTE_NEXT_ABI
- if (vlan_enabled)
- pkt->ol_flags = PKT_RX_IPV4_HDR | PKT_RX_VLAN_PKT;
- else
- pkt->ol_flags = PKT_RX_IPV4_HDR;
-#endif
} else {
pkt->vlan_tci = ETHER_TYPE_IPv6;
pkt->l3_len = sizeof(struct ipv6_hdr);
-#ifndef RTE_NEXT_ABI
- if (vlan_enabled)
- pkt->ol_flags = PKT_RX_IPV6_HDR | PKT_RX_VLAN_PKT;
- else
- pkt->ol_flags = PKT_RX_IPV6_HDR;
-#endif
}
pkts_burst[nb_pkt] = pkt;
diff --git a/doc/guides/rel_notes/deprecation.rst b/doc/guides/rel_notes/deprecation.rst
index 991a777..639ab18 100644
--- a/doc/guides/rel_notes/deprecation.rst
+++ b/doc/guides/rel_notes/deprecation.rst
@@ -24,11 +24,6 @@ Deprecation Notices
* The field mem_location of the rte_lpm structure is deprecated and should be
removed as well as the macros RTE_LPM_HEAP and RTE_LPM_MEMZONE.
-* Significant ABI changes are planned for struct rte_mbuf, struct rte_kni_mbuf,
- and several ``PKT_RX_`` flags will be removed, to support unified packet type
- from release 2.1. Those changes may be enabled in the upcoming release 2.1
- with CONFIG_RTE_NEXT_ABI.
-
* librte_malloc library has been integrated into librte_eal. The 2.1 release
creates a dummy/empty malloc library to fulfill binaries with dynamic linking
dependencies on librte_malloc.so. Such dummy library will not be created from
diff --git a/doc/guides/rel_notes/release_2_2.rst b/doc/guides/rel_notes/release_2_2.rst
index 388d2e3..3a6d2cc 100644
--- a/doc/guides/rel_notes/release_2_2.rst
+++ b/doc/guides/rel_notes/release_2_2.rst
@@ -23,6 +23,9 @@ ABI Changes
* The EAL and ethdev structures rte_intr_handle and rte_eth_conf were changed
to support Rx interrupt. It was already done in 2.1 for CONFIG_RTE_NEXT_ABI.
+* The mbuf structure was changed to support unified packet type.
+ It was already done in 2.1 for CONFIG_RTE_NEXT_ABI.
+
Shared Library Versions
-----------------------
@@ -45,7 +48,7 @@ The libraries prepended with a plus sign were incremented in this version.
librte_kvargs.so.1
librte_lpm.so.1
librte_malloc.so.1
- librte_mbuf.so.1
+ + librte_mbuf.so.2
librte_mempool.so.1
librte_meter.so.1
librte_pipeline.so.1
diff --git a/drivers/net/cxgbe/sge.c b/drivers/net/cxgbe/sge.c
index d570d33..6eb1244 100644
--- a/drivers/net/cxgbe/sge.c
+++ b/drivers/net/cxgbe/sge.c
@@ -1299,22 +1299,14 @@ int t4_ethrx_handler(struct sge_rspq *q, const __be64 *rsp,
mbuf->port = pkt->iff;
if (pkt->l2info & htonl(F_RXF_IP)) {
-#ifdef RTE_NEXT_ABI
mbuf->packet_type = RTE_PTYPE_L3_IPV4;
-#else
- mbuf->ol_flags |= PKT_RX_IPV4_HDR;
-#endif
if (unlikely(!csum_ok))
mbuf->ol_flags |= PKT_RX_IP_CKSUM_BAD;
if ((pkt->l2info & htonl(F_RXF_UDP | F_RXF_TCP)) && !csum_ok)
mbuf->ol_flags |= PKT_RX_L4_CKSUM_BAD;
} else if (pkt->l2info & htonl(F_RXF_IP6)) {
-#ifdef RTE_NEXT_ABI
mbuf->packet_type = RTE_PTYPE_L3_IPV6;
-#else
- mbuf->ol_flags |= PKT_RX_IPV6_HDR;
-#endif
}
mbuf->port = pkt->iff;
@@ -1419,11 +1411,7 @@ static int process_responses(struct sge_rspq *q, int budget,
unmap_rx_buf(&rxq->fl);
if (cpl->l2info & htonl(F_RXF_IP)) {
-#ifdef RTE_NEXT_ABI
pkt->packet_type = RTE_PTYPE_L3_IPV4;
-#else
- pkt->ol_flags |= PKT_RX_IPV4_HDR;
-#endif
if (unlikely(!csum_ok))
pkt->ol_flags |= PKT_RX_IP_CKSUM_BAD;
@@ -1431,11 +1419,7 @@ static int process_responses(struct sge_rspq *q, int budget,
htonl(F_RXF_UDP | F_RXF_TCP)) && !csum_ok)
pkt->ol_flags |= PKT_RX_L4_CKSUM_BAD;
} else if (cpl->l2info & htonl(F_RXF_IP6)) {
-#ifdef RTE_NEXT_ABI
pkt->packet_type = RTE_PTYPE_L3_IPV6;
-#else
- pkt->ol_flags |= PKT_RX_IPV6_HDR;
-#endif
}
if (!rss_hdr->filter_tid && rss_hdr->hash_type) {
diff --git a/drivers/net/e1000/igb_rxtx.c b/drivers/net/e1000/igb_rxtx.c
index b13930e..19905fd 100644
--- a/drivers/net/e1000/igb_rxtx.c
+++ b/drivers/net/e1000/igb_rxtx.c
@@ -590,7 +590,6 @@ eth_igb_xmit_pkts(void *tx_queue, struct rte_mbuf **tx_pkts,
* RX functions
*
**********************************************************************/
-#ifdef RTE_NEXT_ABI
#define IGB_PACKET_TYPE_IPV4 0X01
#define IGB_PACKET_TYPE_IPV4_TCP 0X11
#define IGB_PACKET_TYPE_IPV4_UDP 0X21
@@ -684,35 +683,6 @@ rx_desc_hlen_type_rss_to_pkt_flags(uint32_t hl_tp_rs)
return pkt_flags;
}
-#else /* RTE_NEXT_ABI */
-static inline uint64_t
-rx_desc_hlen_type_rss_to_pkt_flags(uint32_t hl_tp_rs)
-{
- uint64_t pkt_flags;
-
- static uint64_t ip_pkt_types_map[16] = {
- 0, PKT_RX_IPV4_HDR, PKT_RX_IPV4_HDR_EXT, PKT_RX_IPV4_HDR_EXT,
- PKT_RX_IPV6_HDR, 0, 0, 0,
- PKT_RX_IPV6_HDR_EXT, 0, 0, 0,
- PKT_RX_IPV6_HDR_EXT, 0, 0, 0,
- };
-
-#if defined(RTE_LIBRTE_IEEE1588)
- static uint32_t ip_pkt_etqf_map[8] = {
- 0, 0, 0, PKT_RX_IEEE1588_PTP,
- 0, 0, 0, 0,
- };
-
- pkt_flags = (hl_tp_rs & E1000_RXDADV_PKTTYPE_ETQF) ?
- ip_pkt_etqf_map[(hl_tp_rs >> 4) & 0x07] :
- ip_pkt_types_map[(hl_tp_rs >> 4) & 0x0F];
-#else
- pkt_flags = (hl_tp_rs & E1000_RXDADV_PKTTYPE_ETQF) ? 0 :
- ip_pkt_types_map[(hl_tp_rs >> 4) & 0x0F];
-#endif
- return pkt_flags | (((hl_tp_rs & 0x0F) == 0) ? 0 : PKT_RX_RSS_HASH);
-}
-#endif /* RTE_NEXT_ABI */
static inline uint64_t
rx_desc_status_to_pkt_flags(uint32_t rx_status)
@@ -886,10 +856,8 @@ eth_igb_recv_pkts(void *rx_queue, struct rte_mbuf **rx_pkts,
pkt_flags = pkt_flags | rx_desc_status_to_pkt_flags(staterr);
pkt_flags = pkt_flags | rx_desc_error_to_pkt_flags(staterr);
rxm->ol_flags = pkt_flags;
-#ifdef RTE_NEXT_ABI
rxm->packet_type = igb_rxd_pkt_info_to_pkt_type(rxd.wb.lower.
lo_dword.hs_rss.pkt_info);
-#endif
/*
* Store the mbuf address into the next entry of the array
@@ -1124,10 +1092,8 @@ eth_igb_recv_scattered_pkts(void *rx_queue, struct rte_mbuf **rx_pkts,
pkt_flags = pkt_flags | rx_desc_status_to_pkt_flags(staterr);
pkt_flags = pkt_flags | rx_desc_error_to_pkt_flags(staterr);
first_seg->ol_flags = pkt_flags;
-#ifdef RTE_NEXT_ABI
first_seg->packet_type = igb_rxd_pkt_info_to_pkt_type(rxd.wb.
lower.lo_dword.hs_rss.pkt_info);
-#endif
/* Prefetch data of first segment, if configured to do so. */
rte_packet_prefetch((char *)first_seg->buf_addr +
diff --git a/drivers/net/enic/enic_main.c b/drivers/net/enic/enic_main.c
index f47e96c..3b8719f 100644
--- a/drivers/net/enic/enic_main.c
+++ b/drivers/net/enic/enic_main.c
@@ -423,11 +423,7 @@ static int enic_rq_indicate_buf(struct vnic_rq *rq,
rx_pkt->pkt_len = bytes_written;
if (ipv4) {
-#ifdef RTE_NEXT_ABI
rx_pkt->packet_type = RTE_PTYPE_L3_IPV4;
-#else
- rx_pkt->ol_flags |= PKT_RX_IPV4_HDR;
-#endif
if (!csum_not_calc) {
if (unlikely(!ipv4_csum_ok))
rx_pkt->ol_flags |= PKT_RX_IP_CKSUM_BAD;
@@ -436,11 +432,7 @@ static int enic_rq_indicate_buf(struct vnic_rq *rq,
rx_pkt->ol_flags |= PKT_RX_L4_CKSUM_BAD;
}
} else if (ipv6)
-#ifdef RTE_NEXT_ABI
rx_pkt->packet_type = RTE_PTYPE_L3_IPV6;
-#else
- rx_pkt->ol_flags |= PKT_RX_IPV6_HDR;
-#endif
} else {
/* Header split */
if (sop && !eop) {
@@ -453,11 +445,7 @@ static int enic_rq_indicate_buf(struct vnic_rq *rq,
*rx_pkt_bucket = rx_pkt;
rx_pkt->pkt_len = bytes_written;
if (ipv4) {
-#ifdef RTE_NEXT_ABI
rx_pkt->packet_type = RTE_PTYPE_L3_IPV4;
-#else
- rx_pkt->ol_flags |= PKT_RX_IPV4_HDR;
-#endif
if (!csum_not_calc) {
if (unlikely(!ipv4_csum_ok))
rx_pkt->ol_flags |=
@@ -469,22 +457,14 @@ static int enic_rq_indicate_buf(struct vnic_rq *rq,
PKT_RX_L4_CKSUM_BAD;
}
} else if (ipv6)
-#ifdef RTE_NEXT_ABI
rx_pkt->packet_type = RTE_PTYPE_L3_IPV6;
-#else
- rx_pkt->ol_flags |= PKT_RX_IPV6_HDR;
-#endif
} else {
/* Payload */
hdr_rx_pkt = *rx_pkt_bucket;
hdr_rx_pkt->pkt_len += bytes_written;
if (ipv4) {
-#ifdef RTE_NEXT_ABI
hdr_rx_pkt->packet_type =
RTE_PTYPE_L3_IPV4;
-#else
- hdr_rx_pkt->ol_flags |= PKT_RX_IPV4_HDR;
-#endif
if (!csum_not_calc) {
if (unlikely(!ipv4_csum_ok))
hdr_rx_pkt->ol_flags |=
@@ -496,13 +476,8 @@ static int enic_rq_indicate_buf(struct vnic_rq *rq,
PKT_RX_L4_CKSUM_BAD;
}
} else if (ipv6)
-#ifdef RTE_NEXT_ABI
hdr_rx_pkt->packet_type =
RTE_PTYPE_L3_IPV6;
-#else
- hdr_rx_pkt->ol_flags |= PKT_RX_IPV6_HDR;
-#endif
-
}
}
}
diff --git a/drivers/net/fm10k/fm10k_rxtx.c b/drivers/net/fm10k/fm10k_rxtx.c
index b5fa2e6..d3f7b89 100644
--- a/drivers/net/fm10k/fm10k_rxtx.c
+++ b/drivers/net/fm10k/fm10k_rxtx.c
@@ -68,7 +68,6 @@ static inline void dump_rxd(union fm10k_rx_desc *rxd)
static inline void
rx_desc_to_ol_flags(struct rte_mbuf *m, const union fm10k_rx_desc *d)
{
-#ifdef RTE_NEXT_ABI
static const uint32_t
ptype_table[FM10K_RXD_PKTTYPE_MASK >> FM10K_RXD_PKTTYPE_SHIFT]
__rte_cache_aligned = {
@@ -91,14 +90,6 @@ rx_desc_to_ol_flags(struct rte_mbuf *m, const union fm10k_rx_desc *d)
m->packet_type = ptype_table[(d->w.pkt_info & FM10K_RXD_PKTTYPE_MASK)
>> FM10K_RXD_PKTTYPE_SHIFT];
-#else /* RTE_NEXT_ABI */
- uint16_t ptype;
- static const uint16_t pt_lut[] = { 0,
- PKT_RX_IPV4_HDR, PKT_RX_IPV4_HDR_EXT,
- PKT_RX_IPV6_HDR, PKT_RX_IPV6_HDR_EXT,
- 0, 0, 0
- };
-#endif /* RTE_NEXT_ABI */
if (d->w.pkt_info & FM10K_RXD_RSSTYPE_MASK)
m->ol_flags |= PKT_RX_RSS_HASH;
@@ -121,12 +112,6 @@ rx_desc_to_ol_flags(struct rte_mbuf *m, const union fm10k_rx_desc *d)
if (unlikely(d->d.staterr & FM10K_RXD_STATUS_RXE))
m->ol_flags |= PKT_RX_RECIP_ERR;
-
-#ifndef RTE_NEXT_ABI
- ptype = (d->d.data & FM10K_RXD_PKTTYPE_MASK_L3) >>
- FM10K_RXD_PKTTYPE_SHIFT;
- m->ol_flags |= pt_lut[(uint8_t)ptype];
-#endif
}
uint16_t
diff --git a/drivers/net/i40e/i40e_rxtx.c b/drivers/net/i40e/i40e_rxtx.c
index eae4ab0..fd656d5 100644
--- a/drivers/net/i40e/i40e_rxtx.c
+++ b/drivers/net/i40e/i40e_rxtx.c
@@ -188,11 +188,9 @@ i40e_get_iee15888_flags(struct rte_mbuf *mb, uint64_t qword)
| I40E_RXD_QW1_STATUS_TSYNINDX_MASK))
>> I40E_RX_DESC_STATUS_TSYNINDX_SHIFT;
-#ifdef RTE_NEXT_ABI
if ((mb->packet_type & RTE_PTYPE_L2_MASK)
== RTE_PTYPE_L2_ETHER_TIMESYNC)
pkt_flags = PKT_RX_IEEE1588_PTP;
-#endif
if (tsyn & 0x04) {
pkt_flags |= PKT_RX_IEEE1588_TMST;
mb->timesync = tsyn & 0x03;
@@ -202,7 +200,6 @@ i40e_get_iee15888_flags(struct rte_mbuf *mb, uint64_t qword)
}
#endif
-#ifdef RTE_NEXT_ABI
/* For each value it means, datasheet of hardware can tell more details */
static inline uint32_t
i40e_rxd_pkt_type_mapping(uint8_t ptype)
@@ -735,275 +732,6 @@ i40e_rxd_pkt_type_mapping(uint8_t ptype)
return ptype_table[ptype];
}
-#else /* RTE_NEXT_ABI */
-/* Translate pkt types to pkt flags */
-static inline uint64_t
-i40e_rxd_ptype_to_pkt_flags(uint64_t qword)
-{
- uint8_t ptype = (uint8_t)((qword & I40E_RXD_QW1_PTYPE_MASK) >>
- I40E_RXD_QW1_PTYPE_SHIFT);
- static const uint64_t ip_ptype_map[I40E_MAX_PKT_TYPE] = {
- 0, /* PTYPE 0 */
- 0, /* PTYPE 1 */
- PKT_RX_IEEE1588_PTP, /* PTYPE 2 */
- 0, /* PTYPE 3 */
- 0, /* PTYPE 4 */
- 0, /* PTYPE 5 */
- 0, /* PTYPE 6 */
- 0, /* PTYPE 7 */
- 0, /* PTYPE 8 */
- 0, /* PTYPE 9 */
- 0, /* PTYPE 10 */
- 0, /* PTYPE 11 */
- 0, /* PTYPE 12 */
- 0, /* PTYPE 13 */
- 0, /* PTYPE 14 */
- 0, /* PTYPE 15 */
- 0, /* PTYPE 16 */
- 0, /* PTYPE 17 */
- 0, /* PTYPE 18 */
- 0, /* PTYPE 19 */
- 0, /* PTYPE 20 */
- 0, /* PTYPE 21 */
- PKT_RX_IPV4_HDR, /* PTYPE 22 */
- PKT_RX_IPV4_HDR, /* PTYPE 23 */
- PKT_RX_IPV4_HDR, /* PTYPE 24 */
- 0, /* PTYPE 25 */
- PKT_RX_IPV4_HDR, /* PTYPE 26 */
- PKT_RX_IPV4_HDR, /* PTYPE 27 */
- PKT_RX_IPV4_HDR, /* PTYPE 28 */
- PKT_RX_IPV4_HDR_EXT, /* PTYPE 29 */
- PKT_RX_IPV4_HDR_EXT, /* PTYPE 30 */
- PKT_RX_IPV4_HDR_EXT, /* PTYPE 31 */
- 0, /* PTYPE 32 */
- PKT_RX_IPV4_HDR_EXT, /* PTYPE 33 */
- PKT_RX_IPV4_HDR_EXT, /* PTYPE 34 */
- PKT_RX_IPV4_HDR_EXT, /* PTYPE 35 */
- PKT_RX_IPV4_HDR_EXT, /* PTYPE 36 */
- PKT_RX_IPV4_HDR_EXT, /* PTYPE 37 */
- PKT_RX_IPV4_HDR_EXT, /* PTYPE 38 */
- 0, /* PTYPE 39 */
- PKT_RX_IPV4_HDR_EXT, /* PTYPE 40 */
- PKT_RX_IPV4_HDR_EXT, /* PTYPE 41 */
- PKT_RX_IPV4_HDR_EXT, /* PTYPE 42 */
- PKT_RX_IPV4_HDR_EXT, /* PTYPE 43 */
- PKT_RX_IPV4_HDR_EXT, /* PTYPE 44 */
- PKT_RX_IPV4_HDR_EXT, /* PTYPE 45 */
- PKT_RX_IPV4_HDR_EXT, /* PTYPE 46 */
- 0, /* PTYPE 47 */
- PKT_RX_IPV4_HDR_EXT, /* PTYPE 48 */
- PKT_RX_IPV4_HDR_EXT, /* PTYPE 49 */
- PKT_RX_IPV4_HDR_EXT, /* PTYPE 50 */
- PKT_RX_IPV4_HDR_EXT, /* PTYPE 51 */
- PKT_RX_IPV4_HDR_EXT, /* PTYPE 52 */
- PKT_RX_IPV4_HDR_EXT, /* PTYPE 53 */
- 0, /* PTYPE 54 */
- PKT_RX_IPV4_HDR_EXT, /* PTYPE 55 */
- PKT_RX_IPV4_HDR_EXT, /* PTYPE 56 */
- PKT_RX_IPV4_HDR_EXT, /* PTYPE 57 */
- PKT_RX_IPV4_HDR_EXT, /* PTYPE 58 */
- PKT_RX_TUNNEL_IPV4_HDR, /* PTYPE 59 */
- PKT_RX_TUNNEL_IPV4_HDR, /* PTYPE 60 */
- PKT_RX_TUNNEL_IPV4_HDR, /* PTYPE 61 */
- 0, /* PTYPE 62 */
- PKT_RX_TUNNEL_IPV4_HDR, /* PTYPE 63 */
- PKT_RX_TUNNEL_IPV4_HDR, /* PTYPE 64 */
- PKT_RX_TUNNEL_IPV4_HDR, /* PTYPE 65 */
- PKT_RX_TUNNEL_IPV4_HDR, /* PTYPE 66 */
- PKT_RX_TUNNEL_IPV4_HDR, /* PTYPE 67 */
- PKT_RX_TUNNEL_IPV4_HDR, /* PTYPE 68 */
- 0, /* PTYPE 69 */
- PKT_RX_TUNNEL_IPV4_HDR, /* PTYPE 70 */
- PKT_RX_TUNNEL_IPV4_HDR, /* PTYPE 71 */
- PKT_RX_TUNNEL_IPV4_HDR, /* PTYPE 72 */
- PKT_RX_TUNNEL_IPV4_HDR, /* PTYPE 73 */
- PKT_RX_TUNNEL_IPV4_HDR, /* PTYPE 74 */
- PKT_RX_TUNNEL_IPV4_HDR, /* PTYPE 75 */
- PKT_RX_TUNNEL_IPV4_HDR, /* PTYPE 76 */
- 0, /* PTYPE 77 */
- PKT_RX_TUNNEL_IPV4_HDR, /* PTYPE 78 */
- PKT_RX_TUNNEL_IPV4_HDR, /* PTYPE 79 */
- PKT_RX_TUNNEL_IPV4_HDR, /* PTYPE 80 */
- PKT_RX_TUNNEL_IPV4_HDR, /* PTYPE 81 */
- PKT_RX_TUNNEL_IPV4_HDR, /* PTYPE 82 */
- PKT_RX_TUNNEL_IPV4_HDR, /* PTYPE 83 */
- 0, /* PTYPE 84 */
- PKT_RX_TUNNEL_IPV4_HDR, /* PTYPE 85 */
- PKT_RX_TUNNEL_IPV4_HDR, /* PTYPE 86 */
- PKT_RX_IPV4_HDR_EXT, /* PTYPE 87 */
- PKT_RX_IPV6_HDR, /* PTYPE 88 */
- PKT_RX_IPV6_HDR, /* PTYPE 89 */
- PKT_RX_IPV6_HDR, /* PTYPE 90 */
- 0, /* PTYPE 91 */
- PKT_RX_IPV6_HDR, /* PTYPE 92 */
- PKT_RX_IPV6_HDR, /* PTYPE 93 */
- PKT_RX_IPV6_HDR, /* PTYPE 94 */
- PKT_RX_IPV6_HDR_EXT, /* PTYPE 95 */
- PKT_RX_IPV6_HDR_EXT, /* PTYPE 96 */
- PKT_RX_IPV6_HDR_EXT, /* PTYPE 97 */
- 0, /* PTYPE 98 */
- PKT_RX_IPV6_HDR_EXT, /* PTYPE 99 */
- PKT_RX_IPV6_HDR_EXT, /* PTYPE 100 */
- PKT_RX_IPV6_HDR_EXT, /* PTYPE 101 */
- PKT_RX_IPV6_HDR_EXT, /* PTYPE 102 */
- PKT_RX_IPV6_HDR_EXT, /* PTYPE 103 */
- PKT_RX_IPV6_HDR_EXT, /* PTYPE 104 */
- 0, /* PTYPE 105 */
- PKT_RX_IPV6_HDR_EXT, /* PTYPE 106 */
- PKT_RX_IPV6_HDR_EXT, /* PTYPE 107 */
- PKT_RX_IPV6_HDR_EXT, /* PTYPE 108 */
- PKT_RX_IPV6_HDR_EXT, /* PTYPE 109 */
- PKT_RX_IPV6_HDR_EXT, /* PTYPE 110 */
- PKT_RX_IPV6_HDR_EXT, /* PTYPE 111 */
- PKT_RX_IPV6_HDR_EXT, /* PTYPE 112 */
- 0, /* PTYPE 113 */
- PKT_RX_IPV6_HDR_EXT, /* PTYPE 114 */
- PKT_RX_IPV6_HDR_EXT, /* PTYPE 115 */
- PKT_RX_IPV6_HDR_EXT, /* PTYPE 116 */
- PKT_RX_IPV6_HDR_EXT, /* PTYPE 117 */
- PKT_RX_IPV6_HDR_EXT, /* PTYPE 118 */
- PKT_RX_IPV6_HDR_EXT, /* PTYPE 119 */
- 0, /* PTYPE 120 */
- PKT_RX_IPV6_HDR_EXT, /* PTYPE 121 */
- PKT_RX_IPV6_HDR_EXT, /* PTYPE 122 */
- PKT_RX_IPV6_HDR_EXT, /* PTYPE 123 */
- PKT_RX_IPV6_HDR_EXT, /* PTYPE 124 */
- PKT_RX_TUNNEL_IPV6_HDR, /* PTYPE 125 */
- PKT_RX_TUNNEL_IPV6_HDR, /* PTYPE 126 */
- PKT_RX_TUNNEL_IPV6_HDR, /* PTYPE 127 */
- 0, /* PTYPE 128 */
- PKT_RX_TUNNEL_IPV6_HDR, /* PTYPE 129 */
- PKT_RX_TUNNEL_IPV6_HDR, /* PTYPE 130 */
- PKT_RX_TUNNEL_IPV6_HDR, /* PTYPE 131 */
- PKT_RX_TUNNEL_IPV6_HDR, /* PTYPE 132 */
- PKT_RX_TUNNEL_IPV6_HDR, /* PTYPE 133 */
- PKT_RX_TUNNEL_IPV6_HDR, /* PTYPE 134 */
- 0, /* PTYPE 135 */
- PKT_RX_TUNNEL_IPV6_HDR, /* PTYPE 136 */
- PKT_RX_TUNNEL_IPV6_HDR, /* PTYPE 137 */
- PKT_RX_TUNNEL_IPV6_HDR, /* PTYPE 138 */
- PKT_RX_TUNNEL_IPV6_HDR, /* PTYPE 139 */
- PKT_RX_TUNNEL_IPV6_HDR, /* PTYPE 140 */
- PKT_RX_TUNNEL_IPV6_HDR, /* PTYPE 141 */
- PKT_RX_TUNNEL_IPV6_HDR, /* PTYPE 142 */
- 0, /* PTYPE 143 */
- PKT_RX_TUNNEL_IPV6_HDR, /* PTYPE 144 */
- PKT_RX_TUNNEL_IPV6_HDR, /* PTYPE 145 */
- PKT_RX_TUNNEL_IPV6_HDR, /* PTYPE 146 */
- PKT_RX_TUNNEL_IPV6_HDR, /* PTYPE 147 */
- PKT_RX_TUNNEL_IPV6_HDR, /* PTYPE 148 */
- PKT_RX_TUNNEL_IPV6_HDR, /* PTYPE 149 */
- 0, /* PTYPE 150 */
- PKT_RX_TUNNEL_IPV6_HDR, /* PTYPE 151 */
- PKT_RX_TUNNEL_IPV6_HDR, /* PTYPE 152 */
- PKT_RX_IPV6_HDR_EXT, /* PTYPE 153 */
- 0, /* PTYPE 154 */
- 0, /* PTYPE 155 */
- 0, /* PTYPE 156 */
- 0, /* PTYPE 157 */
- 0, /* PTYPE 158 */
- 0, /* PTYPE 159 */
- 0, /* PTYPE 160 */
- 0, /* PTYPE 161 */
- 0, /* PTYPE 162 */
- 0, /* PTYPE 163 */
- 0, /* PTYPE 164 */
- 0, /* PTYPE 165 */
- 0, /* PTYPE 166 */
- 0, /* PTYPE 167 */
- 0, /* PTYPE 168 */
- 0, /* PTYPE 169 */
- 0, /* PTYPE 170 */
- 0, /* PTYPE 171 */
- 0, /* PTYPE 172 */
- 0, /* PTYPE 173 */
- 0, /* PTYPE 174 */
- 0, /* PTYPE 175 */
- 0, /* PTYPE 176 */
- 0, /* PTYPE 177 */
- 0, /* PTYPE 178 */
- 0, /* PTYPE 179 */
- 0, /* PTYPE 180 */
- 0, /* PTYPE 181 */
- 0, /* PTYPE 182 */
- 0, /* PTYPE 183 */
- 0, /* PTYPE 184 */
- 0, /* PTYPE 185 */
- 0, /* PTYPE 186 */
- 0, /* PTYPE 187 */
- 0, /* PTYPE 188 */
- 0, /* PTYPE 189 */
- 0, /* PTYPE 190 */
- 0, /* PTYPE 191 */
- 0, /* PTYPE 192 */
- 0, /* PTYPE 193 */
- 0, /* PTYPE 194 */
- 0, /* PTYPE 195 */
- 0, /* PTYPE 196 */
- 0, /* PTYPE 197 */
- 0, /* PTYPE 198 */
- 0, /* PTYPE 199 */
- 0, /* PTYPE 200 */
- 0, /* PTYPE 201 */
- 0, /* PTYPE 202 */
- 0, /* PTYPE 203 */
- 0, /* PTYPE 204 */
- 0, /* PTYPE 205 */
- 0, /* PTYPE 206 */
- 0, /* PTYPE 207 */
- 0, /* PTYPE 208 */
- 0, /* PTYPE 209 */
- 0, /* PTYPE 210 */
- 0, /* PTYPE 211 */
- 0, /* PTYPE 212 */
- 0, /* PTYPE 213 */
- 0, /* PTYPE 214 */
- 0, /* PTYPE 215 */
- 0, /* PTYPE 216 */
- 0, /* PTYPE 217 */
- 0, /* PTYPE 218 */
- 0, /* PTYPE 219 */
- 0, /* PTYPE 220 */
- 0, /* PTYPE 221 */
- 0, /* PTYPE 222 */
- 0, /* PTYPE 223 */
- 0, /* PTYPE 224 */
- 0, /* PTYPE 225 */
- 0, /* PTYPE 226 */
- 0, /* PTYPE 227 */
- 0, /* PTYPE 228 */
- 0, /* PTYPE 229 */
- 0, /* PTYPE 230 */
- 0, /* PTYPE 231 */
- 0, /* PTYPE 232 */
- 0, /* PTYPE 233 */
- 0, /* PTYPE 234 */
- 0, /* PTYPE 235 */
- 0, /* PTYPE 236 */
- 0, /* PTYPE 237 */
- 0, /* PTYPE 238 */
- 0, /* PTYPE 239 */
- 0, /* PTYPE 240 */
- 0, /* PTYPE 241 */
- 0, /* PTYPE 242 */
- 0, /* PTYPE 243 */
- 0, /* PTYPE 244 */
- 0, /* PTYPE 245 */
- 0, /* PTYPE 246 */
- 0, /* PTYPE 247 */
- 0, /* PTYPE 248 */
- 0, /* PTYPE 249 */
- 0, /* PTYPE 250 */
- 0, /* PTYPE 251 */
- 0, /* PTYPE 252 */
- 0, /* PTYPE 253 */
- 0, /* PTYPE 254 */
- 0, /* PTYPE 255 */
- };
-
- return ip_ptype_map[ptype];
-}
-#endif /* RTE_NEXT_ABI */
#define I40E_RX_DESC_EXT_STATUS_FLEXBH_MASK 0x03
#define I40E_RX_DESC_EXT_STATUS_FLEXBH_FD_ID 0x01
@@ -1292,18 +1020,10 @@ i40e_rx_scan_hw_ring(struct i40e_rx_queue *rxq)
i40e_rxd_to_vlan_tci(mb, &rxdp[j]);
pkt_flags = i40e_rxd_status_to_pkt_flags(qword1);
pkt_flags |= i40e_rxd_error_to_pkt_flags(qword1);
-#ifdef RTE_NEXT_ABI
mb->packet_type =
i40e_rxd_pkt_type_mapping((uint8_t)((qword1 &
I40E_RXD_QW1_PTYPE_MASK) >>
I40E_RXD_QW1_PTYPE_SHIFT));
-#else
- pkt_flags |= i40e_rxd_ptype_to_pkt_flags(qword1);
-
- mb->packet_type = (uint16_t)((qword1 &
- I40E_RXD_QW1_PTYPE_MASK) >>
- I40E_RXD_QW1_PTYPE_SHIFT);
-#endif /* RTE_NEXT_ABI */
if (pkt_flags & PKT_RX_RSS_HASH)
mb->hash.rss = rte_le_to_cpu_32(\
rxdp[j].wb.qword0.hi_dword.rss);
@@ -1549,15 +1269,9 @@ i40e_recv_pkts(void *rx_queue, struct rte_mbuf **rx_pkts, uint16_t nb_pkts)
i40e_rxd_to_vlan_tci(rxm, &rxd);
pkt_flags = i40e_rxd_status_to_pkt_flags(qword1);
pkt_flags |= i40e_rxd_error_to_pkt_flags(qword1);
-#ifdef RTE_NEXT_ABI
rxm->packet_type =
i40e_rxd_pkt_type_mapping((uint8_t)((qword1 &
I40E_RXD_QW1_PTYPE_MASK) >> I40E_RXD_QW1_PTYPE_SHIFT));
-#else
- pkt_flags |= i40e_rxd_ptype_to_pkt_flags(qword1);
- rxm->packet_type = (uint16_t)((qword1 & I40E_RXD_QW1_PTYPE_MASK) >>
- I40E_RXD_QW1_PTYPE_SHIFT);
-#endif /* RTE_NEXT_ABI */
if (pkt_flags & PKT_RX_RSS_HASH)
rxm->hash.rss =
rte_le_to_cpu_32(rxd.wb.qword0.hi_dword.rss);
@@ -1717,16 +1431,9 @@ i40e_recv_scattered_pkts(void *rx_queue,
i40e_rxd_to_vlan_tci(first_seg, &rxd);
pkt_flags = i40e_rxd_status_to_pkt_flags(qword1);
pkt_flags |= i40e_rxd_error_to_pkt_flags(qword1);
-#ifdef RTE_NEXT_ABI
first_seg->packet_type =
i40e_rxd_pkt_type_mapping((uint8_t)((qword1 &
I40E_RXD_QW1_PTYPE_MASK) >> I40E_RXD_QW1_PTYPE_SHIFT));
-#else
- pkt_flags |= i40e_rxd_ptype_to_pkt_flags(qword1);
- first_seg->packet_type = (uint16_t)((qword1 &
- I40E_RXD_QW1_PTYPE_MASK) >>
- I40E_RXD_QW1_PTYPE_SHIFT);
-#endif /* RTE_NEXT_ABI */
if (pkt_flags & PKT_RX_RSS_HASH)
rxm->hash.rss =
rte_le_to_cpu_32(rxd.wb.qword0.hi_dword.rss);
diff --git a/drivers/net/ixgbe/ixgbe_rxtx.c b/drivers/net/ixgbe/ixgbe_rxtx.c
index 91023b9..a710102 100644
--- a/drivers/net/ixgbe/ixgbe_rxtx.c
+++ b/drivers/net/ixgbe/ixgbe_rxtx.c
@@ -864,7 +864,6 @@ end_of_tx:
* RX functions
*
**********************************************************************/
-#ifdef RTE_NEXT_ABI
#define IXGBE_PACKET_TYPE_IPV4 0X01
#define IXGBE_PACKET_TYPE_IPV4_TCP 0X11
#define IXGBE_PACKET_TYPE_IPV4_UDP 0X21
@@ -967,43 +966,6 @@ ixgbe_rxd_pkt_info_to_pkt_flags(uint16_t pkt_info)
return ip_rss_types_map[pkt_info & 0XF];
#endif
}
-#else /* RTE_NEXT_ABI */
-static inline uint64_t
-rx_desc_hlen_type_rss_to_pkt_flags(uint32_t hl_tp_rs)
-{
- uint64_t pkt_flags;
-
- static const uint64_t ip_pkt_types_map[16] = {
- 0, PKT_RX_IPV4_HDR, PKT_RX_IPV4_HDR_EXT, PKT_RX_IPV4_HDR_EXT,
- PKT_RX_IPV6_HDR, 0, 0, 0,
- PKT_RX_IPV6_HDR_EXT, 0, 0, 0,
- PKT_RX_IPV6_HDR_EXT, 0, 0, 0,
- };
-
- static const uint64_t ip_rss_types_map[16] = {
- 0, PKT_RX_RSS_HASH, PKT_RX_RSS_HASH, PKT_RX_RSS_HASH,
- 0, PKT_RX_RSS_HASH, 0, PKT_RX_RSS_HASH,
- PKT_RX_RSS_HASH, 0, 0, 0,
- 0, 0, 0, PKT_RX_FDIR,
- };
-
-#ifdef RTE_LIBRTE_IEEE1588
- static uint64_t ip_pkt_etqf_map[8] = {
- 0, 0, 0, PKT_RX_IEEE1588_PTP,
- 0, 0, 0, 0,
- };
-
- pkt_flags = (hl_tp_rs & IXGBE_RXDADV_PKTTYPE_ETQF) ?
- ip_pkt_etqf_map[(hl_tp_rs >> 4) & 0x07] :
- ip_pkt_types_map[(hl_tp_rs >> 4) & 0x0F];
-#else
- pkt_flags = (hl_tp_rs & IXGBE_RXDADV_PKTTYPE_ETQF) ? 0 :
- ip_pkt_types_map[(hl_tp_rs >> 4) & 0x0F];
-
-#endif
- return pkt_flags | ip_rss_types_map[hl_tp_rs & 0xF];
-}
-#endif /* RTE_NEXT_ABI */
static inline uint64_t
rx_desc_status_to_pkt_flags(uint32_t rx_status)
@@ -1058,13 +1020,9 @@ ixgbe_rx_scan_hw_ring(struct ixgbe_rx_queue *rxq)
struct rte_mbuf *mb;
uint16_t pkt_len;
uint64_t pkt_flags;
-#ifdef RTE_NEXT_ABI
int nb_dd;
uint32_t s[LOOK_AHEAD];
uint16_t pkt_info[LOOK_AHEAD];
-#else
- int s[LOOK_AHEAD], nb_dd;
-#endif /* RTE_NEXT_ABI */
int i, j, nb_rx = 0;
uint32_t status;
@@ -1088,11 +1046,9 @@ ixgbe_rx_scan_hw_ring(struct ixgbe_rx_queue *rxq)
for (j = LOOK_AHEAD-1; j >= 0; --j)
s[j] = rte_le_to_cpu_32(rxdp[j].wb.upper.status_error);
-#ifdef RTE_NEXT_ABI
for (j = LOOK_AHEAD - 1; j >= 0; --j)
pkt_info[j] = rxdp[j].wb.lower.lo_dword.
hs_rss.pkt_info;
-#endif /* RTE_NEXT_ABI */
/* Compute how many status bits were set */
nb_dd = 0;
@@ -1111,7 +1067,6 @@ ixgbe_rx_scan_hw_ring(struct ixgbe_rx_queue *rxq)
mb->vlan_tci = rte_le_to_cpu_16(rxdp[j].wb.upper.vlan);
/* convert descriptor fields to rte mbuf flags */
-#ifdef RTE_NEXT_ABI
pkt_flags = rx_desc_status_to_pkt_flags(s[j]);
pkt_flags |= rx_desc_error_to_pkt_flags(s[j]);
pkt_flags |=
@@ -1119,15 +1074,6 @@ ixgbe_rx_scan_hw_ring(struct ixgbe_rx_queue *rxq)
mb->ol_flags = pkt_flags;
mb->packet_type =
ixgbe_rxd_pkt_info_to_pkt_type(pkt_info[j]);
-#else /* RTE_NEXT_ABI */
- pkt_flags = rx_desc_hlen_type_rss_to_pkt_flags(
- rte_le_to_cpu_32(
- rxdp[j].wb.lower.lo_dword.data));
- /* reuse status field from scan list */
- pkt_flags |= rx_desc_status_to_pkt_flags(s[j]);
- pkt_flags |= rx_desc_error_to_pkt_flags(s[j]);
- mb->ol_flags = pkt_flags;
-#endif /* RTE_NEXT_ABI */
if (likely(pkt_flags & PKT_RX_RSS_HASH))
mb->hash.rss = rte_le_to_cpu_32(
@@ -1328,11 +1274,7 @@ ixgbe_recv_pkts(void *rx_queue, struct rte_mbuf **rx_pkts,
union ixgbe_adv_rx_desc rxd;
uint64_t dma_addr;
uint32_t staterr;
-#ifdef RTE_NEXT_ABI
uint32_t pkt_info;
-#else
- uint32_t hlen_type_rss;
-#endif
uint16_t pkt_len;
uint16_t rx_id;
uint16_t nb_rx;
@@ -1450,7 +1392,6 @@ ixgbe_recv_pkts(void *rx_queue, struct rte_mbuf **rx_pkts,
rxm->data_len = pkt_len;
rxm->port = rxq->port_id;
-#ifdef RTE_NEXT_ABI
pkt_info = rte_le_to_cpu_32(rxd.wb.lower.lo_dword.hs_rss.
pkt_info);
/* Only valid if PKT_RX_VLAN_PKT set in pkt_flags */
@@ -1462,16 +1403,6 @@ ixgbe_recv_pkts(void *rx_queue, struct rte_mbuf **rx_pkts,
ixgbe_rxd_pkt_info_to_pkt_flags(pkt_info);
rxm->ol_flags = pkt_flags;
rxm->packet_type = ixgbe_rxd_pkt_info_to_pkt_type(pkt_info);
-#else /* RTE_NEXT_ABI */
- hlen_type_rss = rte_le_to_cpu_32(rxd.wb.lower.lo_dword.data);
- /* Only valid if PKT_RX_VLAN_PKT set in pkt_flags */
- rxm->vlan_tci = rte_le_to_cpu_16(rxd.wb.upper.vlan);
-
- pkt_flags = rx_desc_hlen_type_rss_to_pkt_flags(hlen_type_rss);
- pkt_flags = pkt_flags | rx_desc_status_to_pkt_flags(staterr);
- pkt_flags = pkt_flags | rx_desc_error_to_pkt_flags(staterr);
- rxm->ol_flags = pkt_flags;
-#endif /* RTE_NEXT_ABI */
if (likely(pkt_flags & PKT_RX_RSS_HASH))
rxm->hash.rss = rte_le_to_cpu_32(
@@ -1547,7 +1478,6 @@ ixgbe_fill_cluster_head_buf(
uint8_t port_id,
uint32_t staterr)
{
-#ifdef RTE_NEXT_ABI
uint16_t pkt_info;
uint64_t pkt_flags;
@@ -1563,23 +1493,6 @@ ixgbe_fill_cluster_head_buf(
pkt_flags |= ixgbe_rxd_pkt_info_to_pkt_flags(pkt_info);
head->ol_flags = pkt_flags;
head->packet_type = ixgbe_rxd_pkt_info_to_pkt_type(pkt_info);
-#else /* RTE_NEXT_ABI */
- uint32_t hlen_type_rss;
- uint64_t pkt_flags;
-
- head->port = port_id;
-
- /*
- * The vlan_tci field is only valid when PKT_RX_VLAN_PKT is
- * set in the pkt_flags field.
- */
- head->vlan_tci = rte_le_to_cpu_16(desc->wb.upper.vlan);
- hlen_type_rss = rte_le_to_cpu_32(desc->wb.lower.lo_dword.data);
- pkt_flags = rx_desc_hlen_type_rss_to_pkt_flags(hlen_type_rss);
- pkt_flags |= rx_desc_status_to_pkt_flags(staterr);
- pkt_flags |= rx_desc_error_to_pkt_flags(staterr);
- head->ol_flags = pkt_flags;
-#endif /* RTE_NEXT_ABI */
if (likely(pkt_flags & PKT_RX_RSS_HASH))
head->hash.rss = rte_le_to_cpu_32(desc->wb.lower.hi_dword.rss);
diff --git a/drivers/net/ixgbe/ixgbe_rxtx_vec.c b/drivers/net/ixgbe/ixgbe_rxtx_vec.c
index cf25a53..6979b1e 100644
--- a/drivers/net/ixgbe/ixgbe_rxtx_vec.c
+++ b/drivers/net/ixgbe/ixgbe_rxtx_vec.c
@@ -140,19 +140,11 @@ ixgbe_rxq_rearm(struct ixgbe_rx_queue *rxq)
*/
#ifdef RTE_IXGBE_RX_OLFLAGS_ENABLE
-#ifndef RTE_NEXT_ABI
-#define OLFLAGS_MASK ((uint16_t)(PKT_RX_VLAN_PKT | PKT_RX_IPV4_HDR |\
- PKT_RX_IPV4_HDR_EXT | PKT_RX_IPV6_HDR |\
- PKT_RX_IPV6_HDR_EXT))
-#define PTYPE_SHIFT (1)
-#endif /* RTE_NEXT_ABI */
-
#define VTAG_SHIFT (3)
static inline void
desc_to_olflags_v(__m128i descs[4], struct rte_mbuf **rx_pkts)
{
-#ifdef RTE_NEXT_ABI
__m128i ptype0, ptype1, vtag0, vtag1;
union {
uint16_t e[4];
@@ -190,50 +182,6 @@ desc_to_olflags_v(__m128i descs[4], struct rte_mbuf **rx_pkts)
vtag1 = _mm_or_si128(ptype0, vtag1);
vol.dword = _mm_cvtsi128_si64(vtag1);
-#else
- __m128i ptype0, ptype1, vtag0, vtag1;
- union {
- uint16_t e[4];
- uint64_t dword;
- } vol;
-
- /* pkt type + vlan olflags mask */
- const __m128i pkttype_msk = _mm_set_epi16(
- 0x0000, 0x0000, 0x0000, 0x0000,
- OLFLAGS_MASK, OLFLAGS_MASK, OLFLAGS_MASK, OLFLAGS_MASK);
-
- /* mask everything except rss type */
- const __m128i rsstype_msk = _mm_set_epi16(
- 0x0000, 0x0000, 0x0000, 0x0000,
- 0x000F, 0x000F, 0x000F, 0x000F);
-
- /* rss type to PKT_RX_RSS_HASH translation */
- const __m128i rss_flags = _mm_set_epi8(PKT_RX_FDIR, 0, 0, 0,
- 0, 0, 0, PKT_RX_RSS_HASH,
- PKT_RX_RSS_HASH, 0, PKT_RX_RSS_HASH, 0,
- PKT_RX_RSS_HASH, PKT_RX_RSS_HASH, PKT_RX_RSS_HASH, 0);
-
- ptype0 = _mm_unpacklo_epi16(descs[0], descs[1]);
- ptype1 = _mm_unpacklo_epi16(descs[2], descs[3]);
- vtag0 = _mm_unpackhi_epi16(descs[0], descs[1]);
- vtag1 = _mm_unpackhi_epi16(descs[2], descs[3]);
-
- ptype1 = _mm_unpacklo_epi32(ptype0, ptype1);
- vtag1 = _mm_unpacklo_epi32(vtag0, vtag1);
-
- ptype0 = _mm_and_si128(ptype1, rsstype_msk);
- ptype0 = _mm_shuffle_epi8(rss_flags, ptype0);
-
- ptype1 = _mm_slli_epi16(ptype1, PTYPE_SHIFT);
- vtag1 = _mm_srli_epi16(vtag1, VTAG_SHIFT);
-
- ptype1 = _mm_or_si128(ptype1, vtag1);
- ptype1 = _mm_and_si128(ptype1, pkttype_msk);
-
- ptype0 = _mm_or_si128(ptype0, ptype1);
-
- vol.dword = _mm_cvtsi128_si64(ptype0);
-#endif /* RTE_NEXT_ABI */
rx_pkts[0]->ol_flags = vol.e[0];
rx_pkts[1]->ol_flags = vol.e[1];
@@ -264,7 +212,6 @@ _recv_raw_pkts_vec(struct ixgbe_rx_queue *rxq, struct rte_mbuf **rx_pkts,
int pos;
uint64_t var;
__m128i shuf_msk;
-#ifdef RTE_NEXT_ABI
__m128i crc_adjust = _mm_set_epi16(
0, 0, 0, /* ignore non-length fields */
-rxq->crc_len, /* sub crc on data_len */
@@ -275,16 +222,6 @@ _recv_raw_pkts_vec(struct ixgbe_rx_queue *rxq, struct rte_mbuf **rx_pkts,
__m128i dd_check, eop_check;
__m128i desc_mask = _mm_set_epi32(0xFFFFFFFF, 0xFFFFFFFF,
0xFFFFFFFF, 0xFFFF07F0);
-#else
- __m128i crc_adjust = _mm_set_epi16(
- 0, 0, 0, 0, /* ignore non-length fields */
- 0, /* ignore high-16bits of pkt_len */
- -rxq->crc_len, /* sub crc on pkt_len */
- -rxq->crc_len, /* sub crc on data_len */
- 0 /* ignore pkt_type field */
- );
- __m128i dd_check, eop_check;
-#endif /* RTE_NEXT_ABI */
if (unlikely(nb_pkts < RTE_IXGBE_VPMD_RX_BURST))
return 0;
@@ -313,7 +250,6 @@ _recv_raw_pkts_vec(struct ixgbe_rx_queue *rxq, struct rte_mbuf **rx_pkts,
eop_check = _mm_set_epi64x(0x0000000200000002LL, 0x0000000200000002LL);
/* mask to shuffle from desc. to mbuf */
-#ifdef RTE_NEXT_ABI
shuf_msk = _mm_set_epi8(
7, 6, 5, 4, /* octet 4~7, 32bits rss */
15, 14, /* octet 14~15, low 16 bits vlan_macip */
@@ -324,23 +260,11 @@ _recv_raw_pkts_vec(struct ixgbe_rx_queue *rxq, struct rte_mbuf **rx_pkts,
1, /* octet 1, 8 bits pkt_type field */
0 /* octet 0, 4 bits offset 4 pkt_type field */
);
-#else
- shuf_msk = _mm_set_epi8(
- 7, 6, 5, 4, /* octet 4~7, 32bits rss */
- 0xFF, 0xFF, /* skip high 16 bits vlan_macip, zero out */
- 15, 14, /* octet 14~15, low 16 bits vlan_macip */
- 0xFF, 0xFF, /* skip high 16 bits pkt_len, zero out */
- 13, 12, /* octet 12~13, low 16 bits pkt_len */
- 13, 12, /* octet 12~13, 16 bits data_len */
- 0xFF, 0xFF /* skip pkt_type field */
- );
-#endif /* RTE_NEXT_ABI */
/* Cache is empty -> need to scan the buffer rings, but first move
* the next 'n' mbufs into the cache */
sw_ring = &rxq->sw_ring[rxq->rx_tail];
-#ifdef RTE_NEXT_ABI
/* A. load 4 packet in one loop
* [A*. mask out 4 unused dirty field in desc]
* B. copy 4 mbuf point from swring to rx_pkts
@@ -348,20 +272,10 @@ _recv_raw_pkts_vec(struct ixgbe_rx_queue *rxq, struct rte_mbuf **rx_pkts,
* [C*. extract the end-of-packet bit, if requested]
* D. fill info. from desc to mbuf
*/
-#else
- /* A. load 4 packet in one loop
- * B. copy 4 mbuf point from swring to rx_pkts
- * C. calc the number of DD bits among the 4 packets
- * [C*. extract the end-of-packet bit, if requested]
- * D. fill info. from desc to mbuf
- */
-#endif /* RTE_NEXT_ABI */
for (pos = 0, nb_pkts_recd = 0; pos < RTE_IXGBE_VPMD_RX_BURST;
pos += RTE_IXGBE_DESCS_PER_LOOP,
rxdp += RTE_IXGBE_DESCS_PER_LOOP) {
-#ifdef RTE_NEXT_ABI
__m128i descs0[RTE_IXGBE_DESCS_PER_LOOP];
-#endif /* RTE_NEXT_ABI */
__m128i descs[RTE_IXGBE_DESCS_PER_LOOP];
__m128i pkt_mb1, pkt_mb2, pkt_mb3, pkt_mb4;
__m128i zero, staterr, sterr_tmp1, sterr_tmp2;
@@ -377,7 +291,6 @@ _recv_raw_pkts_vec(struct ixgbe_rx_queue *rxq, struct rte_mbuf **rx_pkts,
/* B.1 load 1 mbuf point */
mbp1 = _mm_loadu_si128((__m128i *)&sw_ring[pos]);
-#ifdef RTE_NEXT_ABI
/* Read desc statuses backwards to avoid race condition */
/* A.1 load 4 pkts desc */
descs0[3] = _mm_loadu_si128((__m128i *)(rxdp + 3));
@@ -403,25 +316,6 @@ _recv_raw_pkts_vec(struct ixgbe_rx_queue *rxq, struct rte_mbuf **rx_pkts,
/* A* mask out 0~3 bits RSS type */
descs[1] = _mm_and_si128(descs0[1], desc_mask);
descs[0] = _mm_and_si128(descs0[0], desc_mask);
-#else
- /* Read desc statuses backwards to avoid race condition */
- /* A.1 load 4 pkts desc */
- descs[3] = _mm_loadu_si128((__m128i *)(rxdp + 3));
-
- /* B.2 copy 2 mbuf point into rx_pkts */
- _mm_storeu_si128((__m128i *)&rx_pkts[pos], mbp1);
-
- /* B.1 load 1 mbuf point */
- mbp2 = _mm_loadu_si128((__m128i *)&sw_ring[pos + 2]);
-
- descs[2] = _mm_loadu_si128((__m128i *)(rxdp + 2));
- /* B.1 load 2 mbuf point */
- descs[1] = _mm_loadu_si128((__m128i *)(rxdp + 1));
- descs[0] = _mm_loadu_si128((__m128i *)(rxdp));
-
- /* B.2 copy 2 mbuf point into rx_pkts */
- _mm_storeu_si128((__m128i *)&rx_pkts[pos + 2], mbp2);
-#endif /* RTE_NEXT_ABI */
/* avoid compiler reorder optimization */
rte_compiler_barrier();
@@ -435,13 +329,8 @@ _recv_raw_pkts_vec(struct ixgbe_rx_queue *rxq, struct rte_mbuf **rx_pkts,
/* C.1 4=>2 filter staterr info only */
sterr_tmp1 = _mm_unpackhi_epi32(descs[1], descs[0]);
-#ifdef RTE_NEXT_ABI
/* set ol_flags with vlan packet type */
desc_to_olflags_v(descs0, &rx_pkts[pos]);
-#else
- /* set ol_flags with packet type and vlan tag */
- desc_to_olflags_v(descs, &rx_pkts[pos]);
-#endif /* RTE_NEXT_ABI */
/* D.2 pkt 3,4 set in_port/nb_seg and remove crc */
pkt_mb4 = _mm_add_epi16(pkt_mb4, crc_adjust);
diff --git a/drivers/net/mlx4/mlx4.c b/drivers/net/mlx4/mlx4.c
index fa3cb7e..6c6342f 100644
--- a/drivers/net/mlx4/mlx4.c
+++ b/drivers/net/mlx4/mlx4.c
@@ -1264,16 +1264,7 @@ mlx4_tx_burst(void *dpdk_txq, struct rte_mbuf **pkts, uint16_t pkts_n)
* offsets but automatically recognizes the packet
* type. For inner L3/L4 checksums, only VXLAN (UDP)
* tunnels are currently supported. */
-#ifdef RTE_NEXT_ABI
if (RTE_ETH_IS_TUNNEL_PKT(buf->packet_type))
-#else
- /* FIXME: since PKT_TX_UDP_TUNNEL_PKT has been removed,
- * the outer packet type is unknown. All we know is
- * that the L2 header is of unusual length (not
- * ETHER_HDR_LEN with or without 802.1Q header). */
- if ((buf->l2_len != ETHER_HDR_LEN) &&
- (buf->l2_len != (ETHER_HDR_LEN + 4)))
-#endif
send_flags |= IBV_EXP_QP_BURST_TUNNEL;
}
if (likely(segs == 1)) {
@@ -2488,7 +2479,6 @@ rxq_cleanup(struct rxq *rxq)
memset(rxq, 0, sizeof(*rxq));
}
-#ifdef RTE_NEXT_ABI
/**
* Translate RX completion flags to packet type.
*
@@ -2521,7 +2511,6 @@ rxq_cq_to_pkt_type(uint32_t flags)
IBV_EXP_CQ_RX_IPV6_PACKET, RTE_PTYPE_L3_IPV6);
return pkt_type;
}
-#endif /* RTE_NEXT_ABI */
/**
* Translate RX completion flags to offload flags.
@@ -2539,11 +2528,6 @@ rxq_cq_to_ol_flags(const struct rxq *rxq, uint32_t flags)
{
uint32_t ol_flags = 0;
-#ifndef RTE_NEXT_ABI
- ol_flags =
- TRANSPOSE(flags, IBV_EXP_CQ_RX_IPV4_PACKET, PKT_RX_IPV4_HDR) |
- TRANSPOSE(flags, IBV_EXP_CQ_RX_IPV6_PACKET, PKT_RX_IPV6_HDR);
-#endif
if (rxq->csum)
ol_flags |=
TRANSPOSE(~flags,
@@ -2559,14 +2543,6 @@ rxq_cq_to_ol_flags(const struct rxq *rxq, uint32_t flags)
*/
if ((flags & IBV_EXP_CQ_RX_TUNNEL_PACKET) && (rxq->csum_l2tun))
ol_flags |=
-#ifndef RTE_NEXT_ABI
- TRANSPOSE(flags,
- IBV_EXP_CQ_RX_OUTER_IPV4_PACKET,
- PKT_RX_TUNNEL_IPV4_HDR) |
- TRANSPOSE(flags,
- IBV_EXP_CQ_RX_OUTER_IPV6_PACKET,
- PKT_RX_TUNNEL_IPV6_HDR) |
-#endif
TRANSPOSE(~flags,
IBV_EXP_CQ_RX_OUTER_IP_CSUM_OK,
PKT_RX_IP_CKSUM_BAD) |
@@ -2758,10 +2734,7 @@ mlx4_rx_burst_sp(void *dpdk_rxq, struct rte_mbuf **pkts, uint16_t pkts_n)
NB_SEGS(pkt_buf) = j;
PORT(pkt_buf) = rxq->port_id;
PKT_LEN(pkt_buf) = pkt_buf_len;
-#ifdef RTE_NEXT_ABI
pkt_buf->packet_type = rxq_cq_to_pkt_type(flags);
-#endif
- pkt_buf->ol_flags = rxq_cq_to_ol_flags(rxq, flags);
/* Return packet. */
*(pkts++) = pkt_buf;
@@ -2921,9 +2894,7 @@ mlx4_rx_burst(void *dpdk_rxq, struct rte_mbuf **pkts, uint16_t pkts_n)
NEXT(seg) = NULL;
PKT_LEN(seg) = len;
DATA_LEN(seg) = len;
-#ifdef RTE_NEXT_ABI
seg->packet_type = rxq_cq_to_pkt_type(flags);
-#endif
seg->ol_flags = rxq_cq_to_ol_flags(rxq, flags);
/* Return packet. */
diff --git a/drivers/net/vmxnet3/vmxnet3_rxtx.c b/drivers/net/vmxnet3/vmxnet3_rxtx.c
index 39ad6ef..4de5d89 100644
--- a/drivers/net/vmxnet3/vmxnet3_rxtx.c
+++ b/drivers/net/vmxnet3/vmxnet3_rxtx.c
@@ -520,17 +520,9 @@ vmxnet3_rx_offload(const Vmxnet3_RxCompDesc *rcd, struct rte_mbuf *rxm)
struct ipv4_hdr *ip = (struct ipv4_hdr *)(eth + 1);
if (((ip->version_ihl & 0xf) << 2) > (int)sizeof(struct ipv4_hdr))
-#ifdef RTE_NEXT_ABI
rxm->packet_type = RTE_PTYPE_L3_IPV4_EXT;
-#else
- rxm->ol_flags |= PKT_RX_IPV4_HDR_EXT;
-#endif
else
-#ifdef RTE_NEXT_ABI
rxm->packet_type = RTE_PTYPE_L3_IPV4;
-#else
- rxm->ol_flags |= PKT_RX_IPV4_HDR;
-#endif
if (!rcd->cnc) {
if (!rcd->ipc)
diff --git a/examples/ip_fragmentation/main.c b/examples/ip_fragmentation/main.c
index b71d05f..fbc0b8d 100644
--- a/examples/ip_fragmentation/main.c
+++ b/examples/ip_fragmentation/main.c
@@ -283,11 +283,7 @@ l3fwd_simple_forward(struct rte_mbuf *m, struct lcore_queue_conf *qconf,
len = qconf->tx_mbufs[port_out].len;
/* if this is an IPv4 packet */
-#ifdef RTE_NEXT_ABI
if (RTE_ETH_IS_IPV4_HDR(m->packet_type)) {
-#else
- if (m->ol_flags & PKT_RX_IPV4_HDR) {
-#endif
struct ipv4_hdr *ip_hdr;
uint32_t ip_dst;
/* Read the lookup key (i.e. ip_dst) from the input packet */
@@ -321,14 +317,8 @@ l3fwd_simple_forward(struct rte_mbuf *m, struct lcore_queue_conf *qconf,
if (unlikely (len2 < 0))
return;
}
-#ifdef RTE_NEXT_ABI
} else if (RTE_ETH_IS_IPV6_HDR(m->packet_type)) {
/* if this is an IPv6 packet */
-#else
- }
- /* if this is an IPv6 packet */
- else if (m->ol_flags & PKT_RX_IPV6_HDR) {
-#endif
struct ipv6_hdr *ip_hdr;
ipv6 = 1;
diff --git a/examples/ip_reassembly/main.c b/examples/ip_reassembly/main.c
index f1c47ad..741c398 100644
--- a/examples/ip_reassembly/main.c
+++ b/examples/ip_reassembly/main.c
@@ -356,11 +356,7 @@ reassemble(struct rte_mbuf *m, uint8_t portid, uint32_t queue,
dst_port = portid;
/* if packet is IPv4 */
-#ifdef RTE_NEXT_ABI
if (RTE_ETH_IS_IPV4_HDR(m->packet_type)) {
-#else
- if (m->ol_flags & (PKT_RX_IPV4_HDR)) {
-#endif
struct ipv4_hdr *ip_hdr;
uint32_t ip_dst;
@@ -400,14 +396,8 @@ reassemble(struct rte_mbuf *m, uint8_t portid, uint32_t queue,
}
eth_hdr->ether_type = rte_be_to_cpu_16(ETHER_TYPE_IPv4);
-#ifdef RTE_NEXT_ABI
} else if (RTE_ETH_IS_IPV6_HDR(m->packet_type)) {
/* if packet is IPv6 */
-#else
- }
- /* if packet is IPv6 */
- else if (m->ol_flags & (PKT_RX_IPV6_HDR | PKT_RX_IPV6_HDR_EXT)) {
-#endif
struct ipv6_extension_fragment *frag_hdr;
struct ipv6_hdr *ip_hdr;
diff --git a/examples/l3fwd-acl/main.c b/examples/l3fwd-acl/main.c
index b2bdf2f..f612671 100644
--- a/examples/l3fwd-acl/main.c
+++ b/examples/l3fwd-acl/main.c
@@ -645,13 +645,7 @@ prepare_one_packet(struct rte_mbuf **pkts_in, struct acl_search_t *acl,
struct ipv4_hdr *ipv4_hdr;
struct rte_mbuf *pkt = pkts_in[index];
-#ifdef RTE_NEXT_ABI
if (RTE_ETH_IS_IPV4_HDR(pkt->packet_type)) {
-#else
- int type = pkt->ol_flags & (PKT_RX_IPV4_HDR | PKT_RX_IPV6_HDR);
-
- if (type == PKT_RX_IPV4_HDR) {
-#endif
ipv4_hdr = rte_pktmbuf_mtod_offset(pkt, struct ipv4_hdr *,
sizeof(struct ether_hdr));
@@ -670,11 +664,7 @@ prepare_one_packet(struct rte_mbuf **pkts_in, struct acl_search_t *acl,
/* Not a valid IPv4 packet */
rte_pktmbuf_free(pkt);
}
-#ifdef RTE_NEXT_ABI
} else if (RTE_ETH_IS_IPV6_HDR(pkt->packet_type)) {
-#else
- } else if (type == PKT_RX_IPV6_HDR) {
-#endif
/* Fill acl structure */
acl->data_ipv6[acl->num_ipv6] = MBUF_IPV6_2PROTO(pkt);
acl->m_ipv6[(acl->num_ipv6)++] = pkt;
@@ -692,22 +682,12 @@ prepare_one_packet(struct rte_mbuf **pkts_in, struct acl_search_t *acl,
{
struct rte_mbuf *pkt = pkts_in[index];
-#ifdef RTE_NEXT_ABI
if (RTE_ETH_IS_IPV4_HDR(pkt->packet_type)) {
-#else
- int type = pkt->ol_flags & (PKT_RX_IPV4_HDR | PKT_RX_IPV6_HDR);
-
- if (type == PKT_RX_IPV4_HDR) {
-#endif
/* Fill acl structure */
acl->data_ipv4[acl->num_ipv4] = MBUF_IPV4_2PROTO(pkt);
acl->m_ipv4[(acl->num_ipv4)++] = pkt;
-#ifdef RTE_NEXT_ABI
} else if (RTE_ETH_IS_IPV6_HDR(pkt->packet_type)) {
-#else
- } else if (type == PKT_RX_IPV6_HDR) {
-#endif
/* Fill acl structure */
acl->data_ipv6[acl->num_ipv6] = MBUF_IPV6_2PROTO(pkt);
acl->m_ipv6[(acl->num_ipv6)++] = pkt;
@@ -755,17 +735,10 @@ send_one_packet(struct rte_mbuf *m, uint32_t res)
/* in the ACL list, drop it */
#ifdef L3FWDACL_DEBUG
if ((res & ACL_DENY_SIGNATURE) != 0) {
-#ifdef RTE_NEXT_ABI
if (RTE_ETH_IS_IPV4_HDR(m->packet_type))
dump_acl4_rule(m, res);
else if (RTE_ETH_IS_IPV6_HDR(m->packet_type))
dump_acl6_rule(m, res);
-#else
- if (m->ol_flags & PKT_RX_IPV4_HDR)
- dump_acl4_rule(m, res);
- else
- dump_acl6_rule(m, res);
-#endif /* RTE_NEXT_ABI */
}
#endif
rte_pktmbuf_free(m);
diff --git a/examples/l3fwd-power/main.c b/examples/l3fwd-power/main.c
index 086f29b..8bb88ce 100644
--- a/examples/l3fwd-power/main.c
+++ b/examples/l3fwd-power/main.c
@@ -650,11 +650,7 @@ l3fwd_simple_forward(struct rte_mbuf *m, uint8_t portid,
eth_hdr = rte_pktmbuf_mtod(m, struct ether_hdr *);
-#ifdef RTE_NEXT_ABI
if (RTE_ETH_IS_IPV4_HDR(m->packet_type)) {
-#else
- if (m->ol_flags & PKT_RX_IPV4_HDR) {
-#endif
/* Handle IPv4 headers.*/
ipv4_hdr =
rte_pktmbuf_mtod_offset(m, struct ipv4_hdr *,
@@ -689,12 +685,7 @@ l3fwd_simple_forward(struct rte_mbuf *m, uint8_t portid,
ether_addr_copy(&ports_eth_addr[dst_port], &eth_hdr->s_addr);
send_single_packet(m, dst_port);
-#ifdef RTE_NEXT_ABI
} else if (RTE_ETH_IS_IPV6_HDR(m->packet_type)) {
-#else
- }
- else {
-#endif
/* Handle IPv6 headers.*/
#if (APP_LOOKUP_METHOD == APP_LOOKUP_EXACT_MATCH)
struct ipv6_hdr *ipv6_hdr;
diff --git a/examples/l3fwd/main.c b/examples/l3fwd/main.c
index fe5a257..1f3e5c6 100644
--- a/examples/l3fwd/main.c
+++ b/examples/l3fwd/main.c
@@ -1073,11 +1073,7 @@ l3fwd_simple_forward(struct rte_mbuf *m, uint8_t portid, struct lcore_conf *qcon
eth_hdr = rte_pktmbuf_mtod(m, struct ether_hdr *);
-#ifdef RTE_NEXT_ABI
if (RTE_ETH_IS_IPV4_HDR(m->packet_type)) {
-#else
- if (m->ol_flags & PKT_RX_IPV4_HDR) {
-#endif
/* Handle IPv4 headers.*/
ipv4_hdr = rte_pktmbuf_mtod_offset(m, struct ipv4_hdr *,
sizeof(struct ether_hdr));
@@ -1108,11 +1104,7 @@ l3fwd_simple_forward(struct rte_mbuf *m, uint8_t portid, struct lcore_conf *qcon
ether_addr_copy(&ports_eth_addr[dst_port], &eth_hdr->s_addr);
send_single_packet(m, dst_port);
-#ifdef RTE_NEXT_ABI
} else if (RTE_ETH_IS_IPV6_HDR(m->packet_type)) {
-#else
- } else {
-#endif
/* Handle IPv6 headers.*/
struct ipv6_hdr *ipv6_hdr;
@@ -1131,13 +1123,9 @@ l3fwd_simple_forward(struct rte_mbuf *m, uint8_t portid, struct lcore_conf *qcon
ether_addr_copy(&ports_eth_addr[dst_port], &eth_hdr->s_addr);
send_single_packet(m, dst_port);
-#ifdef RTE_NEXT_ABI
} else
/* Free the mbuf that contains non-IPV4/IPV6 packet */
rte_pktmbuf_free(m);
-#else
- }
-#endif
}
#if ((APP_LOOKUP_METHOD == APP_LOOKUP_LPM) && \
@@ -1163,19 +1151,11 @@ l3fwd_simple_forward(struct rte_mbuf *m, uint8_t portid, struct lcore_conf *qcon
* to BAD_PORT value.
*/
static inline __attribute__((always_inline)) void
-#ifdef RTE_NEXT_ABI
rfc1812_process(struct ipv4_hdr *ipv4_hdr, uint16_t *dp, uint32_t ptype)
-#else
-rfc1812_process(struct ipv4_hdr *ipv4_hdr, uint16_t *dp, uint32_t flags)
-#endif
{
uint8_t ihl;
-#ifdef RTE_NEXT_ABI
if (RTE_ETH_IS_IPV4_HDR(ptype)) {
-#else
- if ((flags & PKT_RX_IPV4_HDR) != 0) {
-#endif
ihl = ipv4_hdr->version_ihl - IPV4_MIN_VER_IHL;
ipv4_hdr->time_to_live--;
@@ -1206,19 +1186,11 @@ get_dst_port(const struct lcore_conf *qconf, struct rte_mbuf *pkt,
struct ipv6_hdr *ipv6_hdr;
struct ether_hdr *eth_hdr;
-#ifdef RTE_NEXT_ABI
if (RTE_ETH_IS_IPV4_HDR(pkt->packet_type)) {
-#else
- if (pkt->ol_flags & PKT_RX_IPV4_HDR) {
-#endif
if (rte_lpm_lookup(qconf->ipv4_lookup_struct, dst_ipv4,
&next_hop) != 0)
next_hop = portid;
-#ifdef RTE_NEXT_ABI
} else if (RTE_ETH_IS_IPV6_HDR(pkt->packet_type)) {
-#else
- } else if (pkt->ol_flags & PKT_RX_IPV6_HDR) {
-#endif
eth_hdr = rte_pktmbuf_mtod(pkt, struct ether_hdr *);
ipv6_hdr = (struct ipv6_hdr *)(eth_hdr + 1);
if (rte_lpm6_lookup(qconf->ipv6_lookup_struct,
@@ -1252,17 +1224,12 @@ process_packet(struct lcore_conf *qconf, struct rte_mbuf *pkt,
ve = val_eth[dp];
dst_port[0] = dp;
-#ifdef RTE_NEXT_ABI
rfc1812_process(ipv4_hdr, dst_port, pkt->packet_type);
-#else
- rfc1812_process(ipv4_hdr, dst_port, pkt->ol_flags);
-#endif
te = _mm_blend_epi16(te, ve, MASK_ETH);
_mm_store_si128((__m128i *)eth_hdr, te);
}
-#ifdef RTE_NEXT_ABI
/*
* Read packet_type and destination IPV4 addresses from 4 mbufs.
*/
@@ -1297,57 +1264,18 @@ processx4_step1(struct rte_mbuf *pkt[FWDSTEP],
dip[0] = _mm_set_epi32(x3, x2, x1, x0);
}
-#else /* RTE_NEXT_ABI */
-/*
- * Read ol_flags and destination IPV4 addresses from 4 mbufs.
- */
-static inline void
-processx4_step1(struct rte_mbuf *pkt[FWDSTEP], __m128i *dip, uint32_t *flag)
-{
- struct ipv4_hdr *ipv4_hdr;
- struct ether_hdr *eth_hdr;
- uint32_t x0, x1, x2, x3;
-
- eth_hdr = rte_pktmbuf_mtod(pkt[0], struct ether_hdr *);
- ipv4_hdr = (struct ipv4_hdr *)(eth_hdr + 1);
- x0 = ipv4_hdr->dst_addr;
- flag[0] = pkt[0]->ol_flags & PKT_RX_IPV4_HDR;
-
- eth_hdr = rte_pktmbuf_mtod(pkt[1], struct ether_hdr *);
- ipv4_hdr = (struct ipv4_hdr *)(eth_hdr + 1);
- x1 = ipv4_hdr->dst_addr;
- flag[0] &= pkt[1]->ol_flags;
-
- eth_hdr = rte_pktmbuf_mtod(pkt[2], struct ether_hdr *);
- ipv4_hdr = (struct ipv4_hdr *)(eth_hdr + 1);
- x2 = ipv4_hdr->dst_addr;
- flag[0] &= pkt[2]->ol_flags;
-
- eth_hdr = rte_pktmbuf_mtod(pkt[3], struct ether_hdr *);
- ipv4_hdr = (struct ipv4_hdr *)(eth_hdr + 1);
- x3 = ipv4_hdr->dst_addr;
- flag[0] &= pkt[3]->ol_flags;
-
- dip[0] = _mm_set_epi32(x3, x2, x1, x0);
-}
-#endif /* RTE_NEXT_ABI */
/*
* Lookup into LPM for destination port.
* If lookup fails, use incoming port (portid) as destination port.
*/
static inline void
-#ifdef RTE_NEXT_ABI
processx4_step2(const struct lcore_conf *qconf,
__m128i dip,
uint32_t ipv4_flag,
uint8_t portid,
struct rte_mbuf *pkt[FWDSTEP],
uint16_t dprt[FWDSTEP])
-#else
-processx4_step2(const struct lcore_conf *qconf, __m128i dip, uint32_t flag,
- uint8_t portid, struct rte_mbuf *pkt[FWDSTEP], uint16_t dprt[FWDSTEP])
-#endif /* RTE_NEXT_ABI */
{
rte_xmm_t dst;
const __m128i bswap_mask = _mm_set_epi8(12, 13, 14, 15, 8, 9, 10, 11,
@@ -1357,11 +1285,7 @@ processx4_step2(const struct lcore_conf *qconf, __m128i dip, uint32_t flag,
dip = _mm_shuffle_epi8(dip, bswap_mask);
/* if all 4 packets are IPV4. */
-#ifdef RTE_NEXT_ABI
if (likely(ipv4_flag)) {
-#else
- if (likely(flag != 0)) {
-#endif
rte_lpm_lookupx4(qconf->ipv4_lookup_struct, dip, dprt, portid);
} else {
dst.x = dip;
@@ -1411,7 +1335,6 @@ processx4_step3(struct rte_mbuf *pkt[FWDSTEP], uint16_t dst_port[FWDSTEP])
_mm_store_si128(p[2], te[2]);
_mm_store_si128(p[3], te[3]);
-#ifdef RTE_NEXT_ABI
rfc1812_process((struct ipv4_hdr *)((struct ether_hdr *)p[0] + 1),
&dst_port[0], pkt[0]->packet_type);
rfc1812_process((struct ipv4_hdr *)((struct ether_hdr *)p[1] + 1),
@@ -1420,16 +1343,6 @@ processx4_step3(struct rte_mbuf *pkt[FWDSTEP], uint16_t dst_port[FWDSTEP])
&dst_port[2], pkt[2]->packet_type);
rfc1812_process((struct ipv4_hdr *)((struct ether_hdr *)p[3] + 1),
&dst_port[3], pkt[3]->packet_type);
-#else /* RTE_NEXT_ABI */
- rfc1812_process((struct ipv4_hdr *)((struct ether_hdr *)p[0] + 1),
- &dst_port[0], pkt[0]->ol_flags);
- rfc1812_process((struct ipv4_hdr *)((struct ether_hdr *)p[1] + 1),
- &dst_port[1], pkt[1]->ol_flags);
- rfc1812_process((struct ipv4_hdr *)((struct ether_hdr *)p[2] + 1),
- &dst_port[2], pkt[2]->ol_flags);
- rfc1812_process((struct ipv4_hdr *)((struct ether_hdr *)p[3] + 1),
- &dst_port[3], pkt[3]->ol_flags);
-#endif /* RTE_NEXT_ABI */
}
/*
@@ -1616,11 +1529,7 @@ main_loop(__attribute__((unused)) void *dummy)
uint16_t *lp;
uint16_t dst_port[MAX_PKT_BURST];
__m128i dip[MAX_PKT_BURST / FWDSTEP];
-#ifdef RTE_NEXT_ABI
uint32_t ipv4_flag[MAX_PKT_BURST / FWDSTEP];
-#else
- uint32_t flag[MAX_PKT_BURST / FWDSTEP];
-#endif
uint16_t pnum[MAX_PKT_BURST + 1];
#endif
@@ -1690,7 +1599,6 @@ main_loop(__attribute__((unused)) void *dummy)
*/
int32_t n = RTE_ALIGN_FLOOR(nb_rx, 8);
for (j = 0; j < n; j += 8) {
-#ifdef RTE_NEXT_ABI
uint32_t pkt_type =
pkts_burst[j]->packet_type &
pkts_burst[j+1]->packet_type &
@@ -1705,20 +1613,6 @@ main_loop(__attribute__((unused)) void *dummy)
&pkts_burst[j], portid, qconf);
} else if (pkt_type &
RTE_PTYPE_L3_IPV6) {
-#else /* RTE_NEXT_ABI */
- uint32_t ol_flag = pkts_burst[j]->ol_flags
- & pkts_burst[j+1]->ol_flags
- & pkts_burst[j+2]->ol_flags
- & pkts_burst[j+3]->ol_flags
- & pkts_burst[j+4]->ol_flags
- & pkts_burst[j+5]->ol_flags
- & pkts_burst[j+6]->ol_flags
- & pkts_burst[j+7]->ol_flags;
- if (ol_flag & PKT_RX_IPV4_HDR ) {
- simple_ipv4_fwd_8pkts(&pkts_burst[j],
- portid, qconf);
- } else if (ol_flag & PKT_RX_IPV6_HDR) {
-#endif /* RTE_NEXT_ABI */
simple_ipv6_fwd_8pkts(&pkts_burst[j],
portid, qconf);
} else {
@@ -1751,21 +1645,13 @@ main_loop(__attribute__((unused)) void *dummy)
for (j = 0; j != k; j += FWDSTEP) {
processx4_step1(&pkts_burst[j],
&dip[j / FWDSTEP],
-#ifdef RTE_NEXT_ABI
&ipv4_flag[j / FWDSTEP]);
-#else
- &flag[j / FWDSTEP]);
-#endif
}
k = RTE_ALIGN_FLOOR(nb_rx, FWDSTEP);
for (j = 0; j != k; j += FWDSTEP) {
processx4_step2(qconf, dip[j / FWDSTEP],
-#ifdef RTE_NEXT_ABI
ipv4_flag[j / FWDSTEP], portid,
-#else
- flag[j / FWDSTEP], portid,
-#endif
&pkts_burst[j], &dst_port[j]);
}
diff --git a/examples/tep_termination/vxlan.c b/examples/tep_termination/vxlan.c
index e98a29f..5ee1f95 100644
--- a/examples/tep_termination/vxlan.c
+++ b/examples/tep_termination/vxlan.c
@@ -180,12 +180,7 @@ decapsulation(struct rte_mbuf *pkt)
* (rfc7348) or that the rx offload flag is set (i40e only
* currently)*/
if (udp_hdr->dst_port != rte_cpu_to_be_16(DEFAULT_VXLAN_PORT) &&
-#ifdef RTE_NEXT_ABI
(pkt->packet_type & RTE_PTYPE_TUNNEL_MASK) == 0)
-#else
- (pkt->ol_flags & (PKT_RX_TUNNEL_IPV4_HDR |
- PKT_RX_TUNNEL_IPV6_HDR)) == 0)
-#endif
return -1;
outer_header_len = info.outer_l2_len + info.outer_l3_len
+ sizeof(struct udp_hdr) + sizeof(struct vxlan_hdr);
diff --git a/lib/librte_eal/linuxapp/eal/include/exec-env/rte_kni_common.h b/lib/librte_eal/linuxapp/eal/include/exec-env/rte_kni_common.h
index e9f38bd..bd1cc09 100644
--- a/lib/librte_eal/linuxapp/eal/include/exec-env/rte_kni_common.h
+++ b/lib/librte_eal/linuxapp/eal/include/exec-env/rte_kni_common.h
@@ -117,15 +117,9 @@ struct rte_kni_mbuf {
uint16_t data_off; /**< Start address of data in segment buffer. */
char pad1[4];
uint64_t ol_flags; /**< Offload features. */
-#ifdef RTE_NEXT_ABI
char pad2[4];
uint32_t pkt_len; /**< Total pkt len: sum of all segment data_len. */
uint16_t data_len; /**< Amount of data in segment buffer. */
-#else
- char pad2[2];
- uint16_t data_len; /**< Amount of data in segment buffer. */
- uint32_t pkt_len; /**< Total pkt len: sum of all segment data_len. */
-#endif
/* fields on second cache line */
char pad3[8] __attribute__((__aligned__(RTE_CACHE_LINE_SIZE)));
diff --git a/lib/librte_mbuf/Makefile b/lib/librte_mbuf/Makefile
index 080f3cf..8d62b0d 100644
--- a/lib/librte_mbuf/Makefile
+++ b/lib/librte_mbuf/Makefile
@@ -38,7 +38,7 @@ CFLAGS += $(WERROR_FLAGS) -I$(SRCDIR) -O3
EXPORT_MAP := rte_mbuf_version.map
-LIBABIVER := 1
+LIBABIVER := 2
# all source are stored in SRCS-y
SRCS-$(CONFIG_RTE_LIBRTE_MBUF) := rte_mbuf.c
diff --git a/lib/librte_mbuf/rte_mbuf.c b/lib/librte_mbuf/rte_mbuf.c
index e416312..c18b438 100644
--- a/lib/librte_mbuf/rte_mbuf.c
+++ b/lib/librte_mbuf/rte_mbuf.c
@@ -258,18 +258,8 @@ const char *rte_get_rx_ol_flag_name(uint64_t mask)
/* case PKT_RX_HBUF_OVERFLOW: return "PKT_RX_HBUF_OVERFLOW"; */
/* case PKT_RX_RECIP_ERR: return "PKT_RX_RECIP_ERR"; */
/* case PKT_RX_MAC_ERR: return "PKT_RX_MAC_ERR"; */
-#ifndef RTE_NEXT_ABI
- case PKT_RX_IPV4_HDR: return "PKT_RX_IPV4_HDR";
- case PKT_RX_IPV4_HDR_EXT: return "PKT_RX_IPV4_HDR_EXT";
- case PKT_RX_IPV6_HDR: return "PKT_RX_IPV6_HDR";
- case PKT_RX_IPV6_HDR_EXT: return "PKT_RX_IPV6_HDR_EXT";
-#endif /* RTE_NEXT_ABI */
case PKT_RX_IEEE1588_PTP: return "PKT_RX_IEEE1588_PTP";
case PKT_RX_IEEE1588_TMST: return "PKT_RX_IEEE1588_TMST";
-#ifndef RTE_NEXT_ABI
- case PKT_RX_TUNNEL_IPV4_HDR: return "PKT_RX_TUNNEL_IPV4_HDR";
- case PKT_RX_TUNNEL_IPV6_HDR: return "PKT_RX_TUNNEL_IPV6_HDR";
-#endif /* RTE_NEXT_ABI */
default: return NULL;
}
}
diff --git a/lib/librte_mbuf/rte_mbuf.h b/lib/librte_mbuf/rte_mbuf.h
index 8c2db1b..d7c9030 100644
--- a/lib/librte_mbuf/rte_mbuf.h
+++ b/lib/librte_mbuf/rte_mbuf.h
@@ -93,18 +93,8 @@ extern "C" {
#define PKT_RX_HBUF_OVERFLOW (0ULL << 0) /**< Header buffer overflow. */
#define PKT_RX_RECIP_ERR (0ULL << 0) /**< Hardware processing error. */
#define PKT_RX_MAC_ERR (0ULL << 0) /**< MAC error. */
-#ifndef RTE_NEXT_ABI
-#define PKT_RX_IPV4_HDR (1ULL << 5) /**< RX packet with IPv4 header. */
-#define PKT_RX_IPV4_HDR_EXT (1ULL << 6) /**< RX packet with extended IPv4 header. */
-#define PKT_RX_IPV6_HDR (1ULL << 7) /**< RX packet with IPv6 header. */
-#define PKT_RX_IPV6_HDR_EXT (1ULL << 8) /**< RX packet with extended IPv6 header. */
-#endif /* RTE_NEXT_ABI */
#define PKT_RX_IEEE1588_PTP (1ULL << 9) /**< RX IEEE1588 L2 Ethernet PT Packet. */
#define PKT_RX_IEEE1588_TMST (1ULL << 10) /**< RX IEEE1588 L2/L4 timestamped packet.*/
-#ifndef RTE_NEXT_ABI
-#define PKT_RX_TUNNEL_IPV4_HDR (1ULL << 11) /**< RX tunnel packet with IPv4 header.*/
-#define PKT_RX_TUNNEL_IPV6_HDR (1ULL << 12) /**< RX tunnel packet with IPv6 header. */
-#endif /* RTE_NEXT_ABI */
#define PKT_RX_FDIR_ID (1ULL << 13) /**< FD id reported if FDIR match. */
#define PKT_RX_FDIR_FLX (1ULL << 14) /**< Flexible bytes reported if FDIR match. */
#define PKT_RX_QINQ_PKT (1ULL << 15) /**< RX packet with double VLAN stripped. */
@@ -209,7 +199,6 @@ extern "C" {
/* Use final bit of flags to indicate a control mbuf */
#define CTRL_MBUF_FLAG (1ULL << 63) /**< Mbuf contains control data */
-#ifdef RTE_NEXT_ABI
/*
* 32 bits are divided into several fields to mark packet types. Note that
* each field is indexical.
@@ -696,7 +685,6 @@ extern "C" {
RTE_PTYPE_INNER_L2_MASK | \
RTE_PTYPE_INNER_L3_MASK | \
RTE_PTYPE_INNER_L4_MASK))
-#endif /* RTE_NEXT_ABI */
/** Alignment constraint of mbuf private area. */
#define RTE_MBUF_PRIV_ALIGN 8
@@ -775,7 +763,6 @@ struct rte_mbuf {
/* remaining bytes are set on RX when pulling packet from descriptor */
MARKER rx_descriptor_fields1;
-#ifdef RTE_NEXT_ABI
/*
* The packet type, which is the combination of outer/inner L2, L3, L4
* and tunnel types.
@@ -796,19 +783,7 @@ struct rte_mbuf {
uint32_t pkt_len; /**< Total pkt len: sum of all segments. */
uint16_t data_len; /**< Amount of data in segment buffer. */
uint16_t vlan_tci; /**< VLAN Tag Control Identifier (CPU order) */
-#else /* RTE_NEXT_ABI */
- /**
- * The packet type, which is used to indicate ordinary packet and also
- * tunneled packet format, i.e. each number is represented a type of
- * packet.
- */
- uint16_t packet_type;
- uint16_t data_len; /**< Amount of data in segment buffer. */
- uint32_t pkt_len; /**< Total pkt len: sum of all segments. */
- uint16_t vlan_tci; /**< VLAN Tag Control Identifier (CPU order) */
- uint16_t vlan_tci_outer; /**< Outer VLAN Tag Control Identifier (CPU order) */
-#endif /* RTE_NEXT_ABI */
union {
uint32_t rss; /**< RSS hash result if RSS enabled */
struct {
@@ -829,9 +804,8 @@ struct rte_mbuf {
} hash; /**< hash information */
uint32_t seqn; /**< Sequence number. See also rte_reorder_insert() */
-#ifdef RTE_NEXT_ABI
+
uint16_t vlan_tci_outer; /**< Outer VLAN Tag Control Identifier (CPU order) */
-#endif /* RTE_NEXT_ABI */
/* second cache line - fields only used in slow path or on TX */
MARKER cacheline1 __rte_cache_aligned;
--
2.5.1
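The examples converted above (ip_fragmentation, ip_reassembly, l3fwd, l3fwd-acl,
l3fwd-power, tep_termination) all follow the same pattern once the
PKT_RX_IPV*_HDR and PKT_RX_TUNNEL_IPV*_HDR flags are gone: classify on the
unified mbuf packet_type field. A minimal sketch of that pattern, not part of
the patch (the helper name and return convention are illustrative only):

	#include <rte_mbuf.h>
	#include <rte_ethdev.h>

	/* Classify on the unified packet_type field instead of the removed
	 * PKT_RX_IPV*_HDR / PKT_RX_TUNNEL_IPV*_HDR ol_flags bits. */
	static inline int
	pkt_ip_version(const struct rte_mbuf *m)
	{
		if (RTE_ETH_IS_IPV4_HDR(m->packet_type))
			return 4;	/* was: m->ol_flags & PKT_RX_IPV4_HDR */
		if (RTE_ETH_IS_IPV6_HDR(m->packet_type))
			return 6;	/* was: m->ol_flags & PKT_RX_IPV6_HDR */
		if (m->packet_type & RTE_PTYPE_TUNNEL_MASK)
			return 0;	/* tunnel; was PKT_RX_TUNNEL_IPV*_HDR */
		return -1;		/* non-IP packet */
	}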
^ permalink raw reply [relevance 4%]
* [dpdk-dev] [PATCH v2 04/10] ethdev: remove SCTP flow entries switch
2015-09-02 13:16 3% ` [dpdk-dev] [PATCH v2 00/10] clean deprecated code Thomas Monjalon
` (2 preceding siblings ...)
2015-09-02 13:16 4% ` [dpdk-dev] [PATCH v2 03/10] mbuf: remove packet type from offload flags Thomas Monjalon
@ 2015-09-02 13:16 15% ` Thomas Monjalon
2015-09-02 13:16 4% ` [dpdk-dev] [PATCH v2 05/10] eal: remove deprecated function Thomas Monjalon
` (6 subsequent siblings)
10 siblings, 0 replies; 200+ results
From: Thomas Monjalon @ 2015-09-02 13:16 UTC (permalink / raw)
To: dev
The extended SCTP flow entries are now part of the standard API.
Signed-off-by: Thomas Monjalon <thomas.monjalon@6wind.com>
Acked-by: Stephen Hemminger <stephen@networkplumber.org>
Acked-by: Neil Horman <nhorman@tuxdriver.com>
---
app/test-pmd/cmdline.c | 4 ----
doc/guides/rel_notes/deprecation.rst | 3 ---
doc/guides/rel_notes/release_2_2.rst | 3 +++
drivers/net/i40e/i40e_fdir.c | 4 ----
lib/librte_ether/rte_eth_ctrl.h | 4 ----
5 files changed, 3 insertions(+), 15 deletions(-)
diff --git a/app/test-pmd/cmdline.c b/app/test-pmd/cmdline.c
index 5799c9c..0f8f48f 100644
--- a/app/test-pmd/cmdline.c
+++ b/app/test-pmd/cmdline.c
@@ -7888,12 +7888,10 @@ cmd_flow_director_filter_parsed(void *parsed_result,
IPV4_ADDR_TO_UINT(res->ip_src,
entry.input.flow.sctp4_flow.ip.src_ip);
/* need convert to big endian. */
-#ifdef RTE_NEXT_ABI
entry.input.flow.sctp4_flow.dst_port =
rte_cpu_to_be_16(res->port_dst);
entry.input.flow.sctp4_flow.src_port =
rte_cpu_to_be_16(res->port_src);
-#endif
entry.input.flow.sctp4_flow.verify_tag =
rte_cpu_to_be_32(res->verify_tag_value);
break;
@@ -7917,12 +7915,10 @@ cmd_flow_director_filter_parsed(void *parsed_result,
IPV6_ADDR_TO_ARRAY(res->ip_src,
entry.input.flow.sctp6_flow.ip.src_ip);
/* need convert to big endian. */
-#ifdef RTE_NEXT_ABI
entry.input.flow.sctp6_flow.dst_port =
rte_cpu_to_be_16(res->port_dst);
entry.input.flow.sctp6_flow.src_port =
rte_cpu_to_be_16(res->port_src);
-#endif
entry.input.flow.sctp6_flow.verify_tag =
rte_cpu_to_be_32(res->verify_tag_value);
break;
diff --git a/doc/guides/rel_notes/deprecation.rst b/doc/guides/rel_notes/deprecation.rst
index 639ab18..cf5cd17 100644
--- a/doc/guides/rel_notes/deprecation.rst
+++ b/doc/guides/rel_notes/deprecation.rst
@@ -44,9 +44,6 @@ Deprecation Notices
flow director filtering in VF. The release 2.1 does not contain these ABI
changes, but release 2.2 will, and no backwards compatibility is planned.
-* ABI change is planned to extend the SCTP flow's key input from release 2.1.
- The change may be enabled in the release 2.1 with CONFIG_RTE_NEXT_ABI.
-
* ABI changes are planned for struct rte_eth_fdir_filter and
rte_eth_fdir_masks in order to support new flow director modes,
MAC VLAN and Cloud, on x550. The MAC VLAN mode means the MAC and
diff --git a/doc/guides/rel_notes/release_2_2.rst b/doc/guides/rel_notes/release_2_2.rst
index 3a6d2cc..825c612 100644
--- a/doc/guides/rel_notes/release_2_2.rst
+++ b/doc/guides/rel_notes/release_2_2.rst
@@ -23,6 +23,9 @@ ABI Changes
* The EAL and ethdev structures rte_intr_handle and rte_eth_conf were changed
to support Rx interrupt. It was already done in 2.1 for CONFIG_RTE_NEXT_ABI.
+* The ethdev flow director entries for SCTP were changed.
+ It was already done in 2.1 for CONFIG_RTE_NEXT_ABI.
+
* The mbuf structure was changed to support unified packet type.
It was already done in 2.1 for CONFIG_RTE_NEXT_ABI.
diff --git a/drivers/net/i40e/i40e_fdir.c b/drivers/net/i40e/i40e_fdir.c
index 8208273..c9ce98f 100644
--- a/drivers/net/i40e/i40e_fdir.c
+++ b/drivers/net/i40e/i40e_fdir.c
@@ -822,7 +822,6 @@ i40e_fdir_construct_pkt(struct i40e_pf *pf,
sctp = (struct sctp_hdr *)(raw_pkt + sizeof(struct ether_hdr) +
sizeof(struct ipv4_hdr));
payload = (unsigned char *)sctp + sizeof(struct sctp_hdr);
-#ifdef RTE_NEXT_ABI
/*
* The source and destination fields in the transmitted packet
* need to be presented in a reversed order with respect
@@ -830,7 +829,6 @@ i40e_fdir_construct_pkt(struct i40e_pf *pf,
*/
sctp->src_port = fdir_input->flow.sctp4_flow.dst_port;
sctp->dst_port = fdir_input->flow.sctp4_flow.src_port;
-#endif
sctp->tag = fdir_input->flow.sctp4_flow.verify_tag;
break;
@@ -873,7 +871,6 @@ i40e_fdir_construct_pkt(struct i40e_pf *pf,
sctp = (struct sctp_hdr *)(raw_pkt + sizeof(struct ether_hdr) +
sizeof(struct ipv6_hdr));
payload = (unsigned char *)sctp + sizeof(struct sctp_hdr);
-#ifdef RTE_NEXT_ABI
/*
* The source and destination fields in the transmitted packet
* need to be presented in a reversed order with respect
@@ -881,7 +878,6 @@ i40e_fdir_construct_pkt(struct i40e_pf *pf,
*/
sctp->src_port = fdir_input->flow.sctp6_flow.dst_port;
sctp->dst_port = fdir_input->flow.sctp6_flow.src_port;
-#endif
sctp->tag = fdir_input->flow.sctp6_flow.verify_tag;
break;
diff --git a/lib/librte_ether/rte_eth_ctrl.h b/lib/librte_ether/rte_eth_ctrl.h
index 4beb981..26b7b33 100644
--- a/lib/librte_ether/rte_eth_ctrl.h
+++ b/lib/librte_ether/rte_eth_ctrl.h
@@ -335,10 +335,8 @@ struct rte_eth_tcpv4_flow {
*/
struct rte_eth_sctpv4_flow {
struct rte_eth_ipv4_flow ip; /**< IPv4 fields to match. */
-#ifdef RTE_NEXT_ABI
uint16_t src_port; /**< SCTP source port to match. */
uint16_t dst_port; /**< SCTP destination port to match. */
-#endif
uint32_t verify_tag; /**< Verify tag to match */
};
@@ -373,10 +371,8 @@ struct rte_eth_tcpv6_flow {
*/
struct rte_eth_sctpv6_flow {
struct rte_eth_ipv6_flow ip; /**< IPv6 fields to match. */
-#ifdef RTE_NEXT_ABI
uint16_t src_port; /**< SCTP source port to match. */
uint16_t dst_port; /**< SCTP destination port to match. */
-#endif
uint32_t verify_tag; /**< Verify tag to match */
};
--
2.5.1
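With src_port and dst_port now unconditional members of the SCTP flow keys, an
application fills a flow director entry the same way testpmd does above. A
minimal sketch, assuming the RTE_ETH_FLOW_NONFRAG_IPV4_SCTP flow type constant
and the standard filter_ctrl path are used (the helper name and all values are
placeholders, not from the patch):

	#include <string.h>
	#include <rte_ethdev.h>
	#include <rte_byteorder.h>

	/* Build an SCTP/IPv4 flow director entry using the now-standard
	 * src_port/dst_port key fields; all values are placeholders. */
	static void
	fill_sctp4_fdir_entry(struct rte_eth_fdir_filter *entry,
			      uint32_t src_ip, uint32_t dst_ip,
			      uint16_t src_port, uint16_t dst_port,
			      uint32_t verify_tag)
	{
		memset(entry, 0, sizeof(*entry));
		entry->input.flow_type = RTE_ETH_FLOW_NONFRAG_IPV4_SCTP;
		entry->input.flow.sctp4_flow.ip.src_ip = src_ip;
		entry->input.flow.sctp4_flow.ip.dst_ip = dst_ip;
		/* ports and verify tag are big endian, as in cmdline.c above */
		entry->input.flow.sctp4_flow.src_port = rte_cpu_to_be_16(src_port);
		entry->input.flow.sctp4_flow.dst_port = rte_cpu_to_be_16(dst_port);
		entry->input.flow.sctp4_flow.verify_tag = rte_cpu_to_be_32(verify_tag);
	}

Such an entry would then typically be added with
rte_eth_dev_filter_ctrl(port_id, RTE_ETH_FILTER_FDIR, RTE_ETH_FILTER_ADD, entry),
as testpmd does.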
^ permalink raw reply [relevance 15%]
* [dpdk-dev] [PATCH v2 05/10] eal: remove deprecated function
2015-09-02 13:16 3% ` [dpdk-dev] [PATCH v2 00/10] clean deprecated code Thomas Monjalon
` (3 preceding siblings ...)
2015-09-02 13:16 15% ` [dpdk-dev] [PATCH v2 04/10] ethdev: remove SCTP flow entries switch Thomas Monjalon
@ 2015-09-02 13:16 4% ` Thomas Monjalon
2015-09-02 13:16 3% ` [dpdk-dev] [PATCH v2 06/10] mem: remove dummy malloc library Thomas Monjalon
` (5 subsequent siblings)
10 siblings, 0 replies; 200+ results
From: Thomas Monjalon @ 2015-09-02 13:16 UTC (permalink / raw)
To: dev
The function rte_eal_pci_close_one() was renamed rte_eal_pci_detach().
Signed-off-by: Thomas Monjalon <thomas.monjalon@6wind.com>
Acked-by: Stephen Hemminger <stephen@networkplumber.org>
Acked-by: Neil Horman <nhorman@tuxdriver.com>
---
doc/guides/rel_notes/deprecation.rst | 3 ---
doc/guides/rel_notes/release_2_2.rst | 3 +++
lib/librte_eal/bsdapp/eal/rte_eal_version.map | 1 -
lib/librte_eal/common/eal_common_pci.c | 6 ------
lib/librte_eal/common/include/rte_pci.h | 2 --
lib/librte_eal/linuxapp/eal/rte_eal_version.map | 1 -
6 files changed, 3 insertions(+), 13 deletions(-)
diff --git a/doc/guides/rel_notes/deprecation.rst b/doc/guides/rel_notes/deprecation.rst
index cf5cd17..604a899 100644
--- a/doc/guides/rel_notes/deprecation.rst
+++ b/doc/guides/rel_notes/deprecation.rst
@@ -13,9 +13,6 @@ Deprecation Notices
There is no backward compatibility planned from release 2.2.
All binaries will need to be rebuilt from release 2.2.
-* The EAL function rte_eal_pci_close_one is deprecated because renamed to
- rte_eal_pci_detach.
-
* The Macros RTE_HASH_BUCKET_ENTRIES_MAX and RTE_HASH_KEY_LENGTH_MAX are
deprecated and will be removed with version 2.2.
diff --git a/doc/guides/rel_notes/release_2_2.rst b/doc/guides/rel_notes/release_2_2.rst
index 825c612..8a5b29a 100644
--- a/doc/guides/rel_notes/release_2_2.rst
+++ b/doc/guides/rel_notes/release_2_2.rst
@@ -16,6 +16,9 @@ Known Issues
API Changes
-----------
+* The function rte_eal_pci_close_one() is removed.
+ It was replaced by rte_eal_pci_detach().
+
ABI Changes
-----------
diff --git a/lib/librte_eal/bsdapp/eal/rte_eal_version.map b/lib/librte_eal/bsdapp/eal/rte_eal_version.map
index 2758848..64fdfb1 100644
--- a/lib/librte_eal/bsdapp/eal/rte_eal_version.map
+++ b/lib/librte_eal/bsdapp/eal/rte_eal_version.map
@@ -40,7 +40,6 @@ DPDK_2.0 {
rte_eal_mp_remote_launch;
rte_eal_mp_wait_lcore;
rte_eal_parse_devargs_str;
- rte_eal_pci_close_one;
rte_eal_pci_dump;
rte_eal_pci_probe;
rte_eal_pci_probe_one;
diff --git a/lib/librte_eal/common/eal_common_pci.c b/lib/librte_eal/common/eal_common_pci.c
index 16e8629..dcfe947 100644
--- a/lib/librte_eal/common/eal_common_pci.c
+++ b/lib/librte_eal/common/eal_common_pci.c
@@ -348,12 +348,6 @@ err_return:
return -1;
}
-int __attribute__ ((deprecated))
-rte_eal_pci_close_one(const struct rte_pci_addr *addr)
-{
- return rte_eal_pci_detach(addr);
-}
-
/*
* Detach device specified by its pci address.
*/
diff --git a/lib/librte_eal/common/include/rte_pci.h b/lib/librte_eal/common/include/rte_pci.h
index 3fb2d3a..83e3c28 100644
--- a/lib/librte_eal/common/include/rte_pci.h
+++ b/lib/librte_eal/common/include/rte_pci.h
@@ -426,8 +426,6 @@ int rte_eal_pci_probe_one(const struct rte_pci_addr *addr);
* - Negative on error.
*/
int rte_eal_pci_detach(const struct rte_pci_addr *addr);
-int __attribute__ ((deprecated))
-rte_eal_pci_close_one(const struct rte_pci_addr *addr);
/**
* Dump the content of the PCI bus.
diff --git a/lib/librte_eal/linuxapp/eal/rte_eal_version.map b/lib/librte_eal/linuxapp/eal/rte_eal_version.map
index 59b1717..dbb8fa1 100644
--- a/lib/librte_eal/linuxapp/eal/rte_eal_version.map
+++ b/lib/librte_eal/linuxapp/eal/rte_eal_version.map
@@ -40,7 +40,6 @@ DPDK_2.0 {
rte_eal_mp_remote_launch;
rte_eal_mp_wait_lcore;
rte_eal_parse_devargs_str;
- rte_eal_pci_close_one;
rte_eal_pci_dump;
rte_eal_pci_probe;
rte_eal_pci_probe_one;
--
2.5.1
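For code that still calls the removed function, the migration is a one-line
change; a sketch (the PCI address values and function name are placeholders):

	#include <rte_pci.h>

	/* Detach a device by PCI address with the remaining API. */
	static int
	detach_example_device(void)
	{
		struct rte_pci_addr addr = {
			.domain = 0, .bus = 0x03, .devid = 0x00, .function = 0,
		};

		/* was: rte_eal_pci_close_one(&addr); -- removed in 2.2 */
		return rte_eal_pci_detach(&addr);
	}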
^ permalink raw reply [relevance 4%]
* [dpdk-dev] [PATCH v2 06/10] mem: remove dummy malloc library
2015-09-02 13:16 3% ` [dpdk-dev] [PATCH v2 00/10] clean deprecated code Thomas Monjalon
` (4 preceding siblings ...)
2015-09-02 13:16 4% ` [dpdk-dev] [PATCH v2 05/10] eal: remove deprecated function Thomas Monjalon
@ 2015-09-02 13:16 3% ` Thomas Monjalon
2015-09-02 13:16 5% ` [dpdk-dev] [PATCH v2 07/10] lpm: remove deprecated field Thomas Monjalon
` (4 subsequent siblings)
10 siblings, 0 replies; 200+ results
From: Thomas Monjalon @ 2015-09-02 13:16 UTC (permalink / raw)
To: dev
The malloc library is now part of the EAL.
Signed-off-by: Thomas Monjalon <thomas.monjalon@6wind.com>
Acked-by: Stephen Hemminger <stephen@networkplumber.org>
Acked-by: Neil Horman <nhorman@tuxdriver.com>
---
MAINTAINERS | 1 -
doc/guides/prog_guide/dev_kit_build_system.rst | 2 +-
doc/guides/prog_guide/env_abstraction_layer.rst | 2 +-
doc/guides/prog_guide/source_org.rst | 1 -
.../thread_safety_intel_dpdk_functions.rst | 2 +-
doc/guides/rel_notes/deprecation.rst | 5 ---
doc/guides/rel_notes/release_2_2.rst | 3 +-
lib/Makefile | 1 -
lib/librte_malloc/Makefile | 48 ----------------------
lib/librte_malloc/rte_malloc_empty.c | 34 ---------------
lib/librte_malloc/rte_malloc_version.map | 3 --
mk/rte.app.mk | 1 -
12 files changed, 5 insertions(+), 98 deletions(-)
delete mode 100644 lib/librte_malloc/Makefile
delete mode 100644 lib/librte_malloc/rte_malloc_empty.c
delete mode 100644 lib/librte_malloc/rte_malloc_version.map
diff --git a/MAINTAINERS b/MAINTAINERS
index 17d4265..080a8e8 100644
--- a/MAINTAINERS
+++ b/MAINTAINERS
@@ -102,7 +102,6 @@ F: lib/librte_eal/common/include/rte_malloc.h
F: lib/librte_eal/common/*malloc*
F: lib/librte_eal/common/eal_common_mem*
F: lib/librte_eal/common/eal_hugepages.h
-F: lib/librte_malloc/
F: doc/guides/prog_guide/env_abstraction_layer.rst
F: app/test/test_func_reentrancy.c
F: app/test/test_malloc.c
diff --git a/doc/guides/prog_guide/dev_kit_build_system.rst b/doc/guides/prog_guide/dev_kit_build_system.rst
index 7dc2de6..dd3e3d0 100644
--- a/doc/guides/prog_guide/dev_kit_build_system.rst
+++ b/doc/guides/prog_guide/dev_kit_build_system.rst
@@ -85,7 +85,7 @@ Each build directory contains include files, libraries, and applications:
librte_cmdline.a librte_lpm.a librte_mempool.a librte_ring.a
- librte_eal.a librte_malloc.a librte_pmd_e1000.a librte_timer.a
+ librte_eal.a librte_pmd_e1000.a librte_timer.a
~/DEV/DPDK$ ls i686-native-linuxapp-gcc/include/
diff --git a/doc/guides/prog_guide/env_abstraction_layer.rst b/doc/guides/prog_guide/env_abstraction_layer.rst
index f1b3ff1..a03e40d 100644
--- a/doc/guides/prog_guide/env_abstraction_layer.rst
+++ b/doc/guides/prog_guide/env_abstraction_layer.rst
@@ -115,7 +115,7 @@ The physical address of the reserved memory for that memory zone is also returne
.. note::
- Memory reservations done using the APIs provided by the rte_malloc library are also backed by pages from the hugetlbfs filesystem.
+ Memory reservations done using the APIs provided by rte_malloc are also backed by pages from the hugetlbfs filesystem.
Xen Dom0 support without hugetbls
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
diff --git a/doc/guides/prog_guide/source_org.rst b/doc/guides/prog_guide/source_org.rst
index b81e965..ae11b3b 100644
--- a/doc/guides/prog_guide/source_org.rst
+++ b/doc/guides/prog_guide/source_org.rst
@@ -74,7 +74,6 @@ The lib directory contains::
+-- librte_kni # Kernel NIC interface
+-- librte_kvargs # Argument parsing library
+-- librte_lpm # Longest prefix match library
- +-- librte_malloc # Malloc-like functions
+-- librte_mbuf # Packet and control mbuf manipulation
+-- librte_mempool # Memory pool manager (fixed sized objects)
+-- librte_meter # QoS metering library
diff --git a/doc/guides/prog_guide/thread_safety_intel_dpdk_functions.rst b/doc/guides/prog_guide/thread_safety_intel_dpdk_functions.rst
index 0034bf4..403e5fc 100644
--- a/doc/guides/prog_guide/thread_safety_intel_dpdk_functions.rst
+++ b/doc/guides/prog_guide/thread_safety_intel_dpdk_functions.rst
@@ -73,7 +73,7 @@ Performance Insensitive API
Outside of the performance sensitive areas described in Section 25.1,
the DPDK provides a thread-safe API for most other libraries.
-For example, malloc(librte_malloc) and memzone functions are safe for use in multi-threaded and multi-process environments.
+For example, malloc and memzone functions are safe for use in multi-threaded and multi-process environments.
The setup and configuration of the PMD is not performance sensitive, but is not thread safe either.
It is possible that the multiple read/writes during PMD setup and configuration could be corrupted in a multi-thread environment.
diff --git a/doc/guides/rel_notes/deprecation.rst b/doc/guides/rel_notes/deprecation.rst
index 604a899..3fa4c90 100644
--- a/doc/guides/rel_notes/deprecation.rst
+++ b/doc/guides/rel_notes/deprecation.rst
@@ -21,11 +21,6 @@ Deprecation Notices
* The field mem_location of the rte_lpm structure is deprecated and should be
removed as well as the macros RTE_LPM_HEAP and RTE_LPM_MEMZONE.
-* librte_malloc library has been integrated into librte_eal. The 2.1 release
- creates a dummy/empty malloc library to fulfill binaries with dynamic linking
- dependencies on librte_malloc.so. Such dummy library will not be created from
- release 2.2 so binaries will need to be rebuilt.
-
* The following fields have been deprecated in rte_eth_stats:
imissed, ibadcrc, ibadlen, imcasts, fdirmatch, fdirmiss,
tx_pause_xon, rx_pause_xon, tx_pause_xoff, rx_pause_xoff
diff --git a/doc/guides/rel_notes/release_2_2.rst b/doc/guides/rel_notes/release_2_2.rst
index 8a5b29a..f0f67da 100644
--- a/doc/guides/rel_notes/release_2_2.rst
+++ b/doc/guides/rel_notes/release_2_2.rst
@@ -32,6 +32,8 @@ ABI Changes
* The mbuf structure was changed to support unified packet type.
It was already done in 2.1 for CONFIG_RTE_NEXT_ABI.
+* The dummy malloc library is removed. The content was moved into EAL in 2.1.
+
Shared Library Versions
-----------------------
@@ -53,7 +55,6 @@ The libraries prepended with a plus sign were incremented in this version.
librte_kni.so.1
librte_kvargs.so.1
librte_lpm.so.1
- librte_malloc.so.1
+ librte_mbuf.so.2
librte_mempool.so.1
librte_meter.so.1
diff --git a/lib/Makefile b/lib/Makefile
index 2055539..9727b83 100644
--- a/lib/Makefile
+++ b/lib/Makefile
@@ -33,7 +33,6 @@ include $(RTE_SDK)/mk/rte.vars.mk
DIRS-y += librte_compat
DIRS-$(CONFIG_RTE_LIBRTE_EAL) += librte_eal
-DIRS-$(CONFIG_RTE_LIBRTE_EAL) += librte_malloc
DIRS-$(CONFIG_RTE_LIBRTE_RING) += librte_ring
DIRS-$(CONFIG_RTE_LIBRTE_MEMPOOL) += librte_mempool
DIRS-$(CONFIG_RTE_LIBRTE_MBUF) += librte_mbuf
diff --git a/lib/librte_malloc/Makefile b/lib/librte_malloc/Makefile
deleted file mode 100644
index 9558f3d..0000000
--- a/lib/librte_malloc/Makefile
+++ /dev/null
@@ -1,48 +0,0 @@
-# BSD LICENSE
-#
-# Copyright(c) 2010-2014 Intel Corporation. All rights reserved.
-# All rights reserved.
-#
-# Redistribution and use in source and binary forms, with or without
-# modification, are permitted provided that the following conditions
-# are met:
-#
-# * Redistributions of source code must retain the above copyright
-# notice, this list of conditions and the following disclaimer.
-# * Redistributions in binary form must reproduce the above copyright
-# notice, this list of conditions and the following disclaimer in
-# the documentation and/or other materials provided with the
-# distribution.
-# * Neither the name of Intel Corporation nor the names of its
-# contributors may be used to endorse or promote products derived
-# from this software without specific prior written permission.
-#
-# THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
-# "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
-# LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
-# A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
-# OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
-# SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
-# LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
-# DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
-# THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
-# (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
-# OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
-include $(RTE_SDK)/mk/rte.vars.mk
-
-# library name
-LIB = librte_malloc.a
-
-LIBABIVER := 1
-
-CFLAGS += $(WERROR_FLAGS) -I$(SRCDIR) -O3
-
-EXPORT_MAP := rte_malloc_version.map
-
-# all source are stored in SRCS-y
-SRCS-$(CONFIG_RTE_LIBRTE_EAL) := rte_malloc_empty.c
-
-# this lib needs eal
-DEPDIRS-$(CONFIG_RTE_LIBRTE_EAL) += lib/librte_eal
-
-include $(RTE_SDK)/mk/rte.lib.mk
diff --git a/lib/librte_malloc/rte_malloc_empty.c b/lib/librte_malloc/rte_malloc_empty.c
deleted file mode 100644
index 4892a61..0000000
--- a/lib/librte_malloc/rte_malloc_empty.c
+++ /dev/null
@@ -1,34 +0,0 @@
-/*-
- * BSD LICENSE
- *
- * Copyright(c) 2010-2015 Intel Corporation. All rights reserved.
- * All rights reserved.
- *
- * Redistribution and use in source and binary forms, with or without
- * modification, are permitted provided that the following conditions
- * are met:
- *
- * * Redistributions of source code must retain the above copyright
- * notice, this list of conditions and the following disclaimer.
- * * Redistributions in binary form must reproduce the above copyright
- * notice, this list of conditions and the following disclaimer in
- * the documentation and/or other materials provided with the
- * distribution.
- * * Neither the name of Intel Corporation nor the names of its
- * contributors may be used to endorse or promote products derived
- * from this software without specific prior written permission.
- *
- * THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
- * "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
- * LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
- * A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
- * OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
- * SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
- * LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
- * DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
- * THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
- * (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
- * OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
- */
-
-/* Empty file to be able to create a dummy library for deprecation policy */
diff --git a/lib/librte_malloc/rte_malloc_version.map b/lib/librte_malloc/rte_malloc_version.map
deleted file mode 100644
index 63cb5fc..0000000
--- a/lib/librte_malloc/rte_malloc_version.map
+++ /dev/null
@@ -1,3 +0,0 @@
-DPDK_2.0 {
- local: *;
-};
diff --git a/mk/rte.app.mk b/mk/rte.app.mk
index 3871205..9e1909e 100644
--- a/mk/rte.app.mk
+++ b/mk/rte.app.mk
@@ -114,7 +114,6 @@ _LDLIBS-$(CONFIG_RTE_LIBRTE_KVARGS) += -lrte_kvargs
_LDLIBS-$(CONFIG_RTE_LIBRTE_MBUF) += -lrte_mbuf
_LDLIBS-$(CONFIG_RTE_LIBRTE_IP_FRAG) += -lrte_ip_frag
_LDLIBS-$(CONFIG_RTE_LIBRTE_ETHER) += -lethdev
-_LDLIBS-$(CONFIG_RTE_LIBRTE_MALLOC) += -lrte_malloc
_LDLIBS-$(CONFIG_RTE_LIBRTE_MEMPOOL) += -lrte_mempool
_LDLIBS-$(CONFIG_RTE_LIBRTE_RING) += -lrte_ring
_LDLIBS-$(CONFIG_RTE_LIBRTE_EAL) += -lrte_eal
--
2.5.1
^ permalink raw reply [relevance 3%]
* [dpdk-dev] [PATCH v2 07/10] lpm: remove deprecated field
2015-09-02 13:16 3% ` [dpdk-dev] [PATCH v2 00/10] clean deprecated code Thomas Monjalon
` (5 preceding siblings ...)
2015-09-02 13:16 3% ` [dpdk-dev] [PATCH v2 06/10] mem: remove dummy malloc library Thomas Monjalon
@ 2015-09-02 13:16 5% ` Thomas Monjalon
2015-09-02 13:16 2% ` [dpdk-dev] [PATCH v2 08/10] acl: remove old API Thomas Monjalon
` (3 subsequent siblings)
10 siblings, 0 replies; 200+ results
From: Thomas Monjalon @ 2015-09-02 13:16 UTC (permalink / raw)
To: dev
The library version is incremented.
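A minimal caller-side sketch (illustrative names, not part of this patch): with mem_location and the RTE_LPM_HEAP/RTE_LPM_MEMZONE macros gone, the last argument of rte_lpm_create() is simply 0, as in the updated unit tests below.

#include <rte_memory.h>
#include <rte_lpm.h>

static struct rte_lpm *
example_lpm_create(void)
{
        /* previously: rte_lpm_create(name, socket, max_rules, RTE_LPM_HEAP) */
        return rte_lpm_create("example_lpm", SOCKET_ID_ANY, 1024, 0);
}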
Signed-off-by: Thomas Monjalon <thomas.monjalon@6wind.com>
Acked-by: Stephen Hemminger <stephen@networkplumber.org>
Acked-by: Neil Horman <nhorman@tuxdriver.com>
---
app/test/test_func_reentrancy.c | 4 ++--
app/test/test_lpm.c | 4 ++--
doc/guides/rel_notes/deprecation.rst | 3 ---
doc/guides/rel_notes/release_2_2.rst | 4 +++-
lib/librte_lpm/Makefile | 2 +-
lib/librte_lpm/rte_lpm.h | 11 -----------
6 files changed, 8 insertions(+), 20 deletions(-)
diff --git a/app/test/test_func_reentrancy.c b/app/test/test_func_reentrancy.c
index be61773..dbecc52 100644
--- a/app/test/test_func_reentrancy.c
+++ b/app/test/test_func_reentrancy.c
@@ -366,7 +366,7 @@ lpm_create_free(__attribute__((unused)) void *arg)
/* create the same lpm simultaneously on all threads */
for (i = 0; i < MAX_ITER_TIMES; i++) {
- lpm = rte_lpm_create("fr_test_once", SOCKET_ID_ANY, 4, RTE_LPM_HEAP);
+ lpm = rte_lpm_create("fr_test_once", SOCKET_ID_ANY, 4, 0);
if ((NULL == lpm) && (rte_lpm_find_existing("fr_test_once") == NULL))
return -1;
}
@@ -374,7 +374,7 @@ lpm_create_free(__attribute__((unused)) void *arg)
/* create mutiple fbk tables simultaneously */
for (i = 0; i < MAX_LPM_ITER_TIMES; i++) {
snprintf(lpm_name, sizeof(lpm_name), "fr_test_%d_%d", lcore_self, i);
- lpm = rte_lpm_create(lpm_name, SOCKET_ID_ANY, 4, RTE_LPM_HEAP);
+ lpm = rte_lpm_create(lpm_name, SOCKET_ID_ANY, 4, 0);
if (NULL == lpm)
return -1;
diff --git a/app/test/test_lpm.c b/app/test/test_lpm.c
index 6d8823e..8b4ded9 100644
--- a/app/test/test_lpm.c
+++ b/app/test/test_lpm.c
@@ -165,7 +165,7 @@ test2(void)
{
struct rte_lpm *lpm = NULL;
- lpm = rte_lpm_create(__func__, SOCKET_ID_ANY, MAX_RULES, RTE_LPM_HEAP);
+ lpm = rte_lpm_create(__func__, SOCKET_ID_ANY, MAX_RULES, 0);
TEST_LPM_ASSERT(lpm != NULL);
rte_lpm_free(lpm);
@@ -607,7 +607,7 @@ test10(void)
/* Add rule that covers a TBL24 range previously invalid & lookup
* (& delete & lookup) */
- lpm = rte_lpm_create(__func__, SOCKET_ID_ANY, MAX_RULES, RTE_LPM_HEAP);
+ lpm = rte_lpm_create(__func__, SOCKET_ID_ANY, MAX_RULES, 0);
TEST_LPM_ASSERT(lpm != NULL);
ip = IPv4(128, 0, 0, 0);
diff --git a/doc/guides/rel_notes/deprecation.rst b/doc/guides/rel_notes/deprecation.rst
index 3fa4c90..c40764a 100644
--- a/doc/guides/rel_notes/deprecation.rst
+++ b/doc/guides/rel_notes/deprecation.rst
@@ -18,9 +18,6 @@ Deprecation Notices
* The function rte_jhash2 is deprecated and should be removed.
-* The field mem_location of the rte_lpm structure is deprecated and should be
- removed as well as the macros RTE_LPM_HEAP and RTE_LPM_MEMZONE.
-
* The following fields have been deprecated in rte_eth_stats:
imissed, ibadcrc, ibadlen, imcasts, fdirmatch, fdirmiss,
tx_pause_xon, rx_pause_xon, tx_pause_xoff, rx_pause_xoff
diff --git a/doc/guides/rel_notes/release_2_2.rst b/doc/guides/rel_notes/release_2_2.rst
index f0f67da..2a238c5 100644
--- a/doc/guides/rel_notes/release_2_2.rst
+++ b/doc/guides/rel_notes/release_2_2.rst
@@ -34,6 +34,8 @@ ABI Changes
* The dummy malloc library is removed. The content was moved into EAL in 2.1.
+* The LPM structure is changed. The deprecated field mem_location is removed.
+
Shared Library Versions
-----------------------
@@ -54,7 +56,7 @@ The libraries prepended with a plus sign were incremented in this version.
librte_jobstats.so.1
librte_kni.so.1
librte_kvargs.so.1
- librte_lpm.so.1
+ + librte_lpm.so.2
+ librte_mbuf.so.2
librte_mempool.so.1
librte_meter.so.1
diff --git a/lib/librte_lpm/Makefile b/lib/librte_lpm/Makefile
index 0a7a888..688cfc9 100644
--- a/lib/librte_lpm/Makefile
+++ b/lib/librte_lpm/Makefile
@@ -39,7 +39,7 @@ CFLAGS += $(WERROR_FLAGS) -I$(SRCDIR)
EXPORT_MAP := rte_lpm_version.map
-LIBABIVER := 1
+LIBABIVER := 2
# all source are stored in SRCS-y
SRCS-$(CONFIG_RTE_LIBRTE_LPM) := rte_lpm.c rte_lpm6.c
diff --git a/lib/librte_lpm/rte_lpm.h b/lib/librte_lpm/rte_lpm.h
index 11f0c04..c299ce2 100644
--- a/lib/librte_lpm/rte_lpm.h
+++ b/lib/librte_lpm/rte_lpm.h
@@ -56,16 +56,6 @@ extern "C" {
/** Max number of characters in LPM name. */
#define RTE_LPM_NAMESIZE 32
-/** @deprecated Possible location to allocate memory. This was for last
- * parameter of rte_lpm_create(), but is now redundant. The LPM table is always
- * allocated in memory using librte_malloc which uses a memzone. */
-#define RTE_LPM_HEAP 0
-
-/** @deprecated Possible location to allocate memory. This was for last
- * parameter of rte_lpm_create(), but is now redundant. The LPM table is always
- * allocated in memory using librte_malloc which uses a memzone. */
-#define RTE_LPM_MEMZONE 1
-
/** Maximum depth value possible for IPv4 LPM. */
#define RTE_LPM_MAX_DEPTH 32
@@ -154,7 +144,6 @@ struct rte_lpm_rule_info {
struct rte_lpm {
/* LPM metadata. */
char name[RTE_LPM_NAMESIZE]; /**< Name of the lpm. */
- int mem_location; /**< @deprecated @see RTE_LPM_HEAP and RTE_LPM_MEMZONE. */
uint32_t max_rules; /**< Max. balanced rules per lpm. */
struct rte_lpm_rule_info rule_info[RTE_LPM_MAX_DEPTH]; /**< Rule info table. */
--
2.5.1
^ permalink raw reply [relevance 5%]
* [dpdk-dev] [PATCH v2 08/10] acl: remove old API
2015-09-02 13:16 3% ` [dpdk-dev] [PATCH v2 00/10] clean deprecated code Thomas Monjalon
` (6 preceding siblings ...)
2015-09-02 13:16 5% ` [dpdk-dev] [PATCH v2 07/10] lpm: remove deprecated field Thomas Monjalon
@ 2015-09-02 13:16 2% ` Thomas Monjalon
2015-09-02 13:16 3% ` [dpdk-dev] [PATCH v2 09/10] kni: remove deprecated functions Thomas Monjalon
` (2 subsequent siblings)
10 siblings, 0 replies; 200+ results
From: Thomas Monjalon @ 2015-09-02 13:16 UTC (permalink / raw)
To: dev
The functions and structures are moved to app/test in order to keep
existing unit tests. Some minor changes were done in these functions
because of library scope restrictions.
An enum is also copied in two other applications to keep existing code.
The library version is incremented.
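For applications that relied on the removed ipv4vlan helpers, a minimal sketch (illustrative; the field definitions are supplied by the application, as in the code moved to app/test below): fill a struct rte_acl_config directly and call the generic rte_acl_build().

#include <stdint.h>
#include <string.h>
#include <rte_acl.h>

static int
example_acl_build(struct rte_acl_ctx *ctx,
        const struct rte_acl_field_def *defs, uint32_t num_fields,
        uint32_t num_categories)
{
        struct rte_acl_config cfg;

        /* defs must not exceed RTE_ACL_MAX_FIELDS entries */
        memset(&cfg, 0, sizeof(cfg));
        memcpy(cfg.defs, defs, num_fields * sizeof(defs[0]));
        cfg.num_fields = num_fields;
        cfg.num_categories = num_categories;

        return rte_acl_build(ctx, &cfg);
}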
Signed-off-by: Thomas Monjalon <thomas.monjalon@6wind.com>
Acked-by: Stephen Hemminger <stephen@networkplumber.org>
Acked-by: Neil Horman <nhorman@tuxdriver.com>
---
app/test-acl/main.c | 17 +++
app/test/test_acl.c | 194 +++++++++++++++++++++++++++++++++++
app/test/test_acl.h | 59 +++++++++++
doc/guides/rel_notes/deprecation.rst | 4 -
doc/guides/rel_notes/release_2_2.rst | 4 +-
examples/l3fwd-acl/main.c | 17 +++
lib/librte_acl/Makefile | 2 +-
lib/librte_acl/rte_acl.c | 170 ------------------------------
lib/librte_acl/rte_acl.h | 104 -------------------
lib/librte_acl/rte_acl_version.map | 2 -
10 files changed, 291 insertions(+), 282 deletions(-)
diff --git a/app/test-acl/main.c b/app/test-acl/main.c
index be3d773..72ce83c 100644
--- a/app/test-acl/main.c
+++ b/app/test-acl/main.c
@@ -162,6 +162,23 @@ enum {
NUM_FIELDS_IPV4
};
+/*
+ * That effectively defines order of IPV4VLAN classifications:
+ * - PROTO
+ * - VLAN (TAG and DOMAIN)
+ * - SRC IP ADDRESS
+ * - DST IP ADDRESS
+ * - PORTS (SRC and DST)
+ */
+enum {
+ RTE_ACL_IPV4VLAN_PROTO,
+ RTE_ACL_IPV4VLAN_VLAN,
+ RTE_ACL_IPV4VLAN_SRC,
+ RTE_ACL_IPV4VLAN_DST,
+ RTE_ACL_IPV4VLAN_PORTS,
+ RTE_ACL_IPV4VLAN_NUM
+};
+
struct rte_acl_field_def ipv4_defs[NUM_FIELDS_IPV4] = {
{
.type = RTE_ACL_FIELD_TYPE_BITMASK,
diff --git a/app/test/test_acl.c b/app/test/test_acl.c
index b4a107d..2b82790 100644
--- a/app/test/test_acl.c
+++ b/app/test/test_acl.c
@@ -45,6 +45,8 @@
#include "test_acl.h"
+#define BIT_SIZEOF(x) (sizeof(x) * CHAR_BIT)
+
#define LEN RTE_ACL_MAX_CATEGORIES
RTE_ACL_RULE_DEF(acl_ipv4vlan_rule, RTE_ACL_IPV4VLAN_NUM_FIELDS);
@@ -100,6 +102,198 @@ bswap_test_data(struct ipv4_7tuple *data, int len, int to_be)
}
}
+static int
+acl_ipv4vlan_check_rule(const struct rte_acl_ipv4vlan_rule *rule)
+{
+ if (rule->src_port_low > rule->src_port_high ||
+ rule->dst_port_low > rule->dst_port_high ||
+ rule->src_mask_len > BIT_SIZEOF(rule->src_addr) ||
+ rule->dst_mask_len > BIT_SIZEOF(rule->dst_addr))
+ return -EINVAL;
+ return 0;
+}
+
+static void
+acl_ipv4vlan_convert_rule(const struct rte_acl_ipv4vlan_rule *ri,
+ struct acl_ipv4vlan_rule *ro)
+{
+ ro->data = ri->data;
+
+ ro->field[RTE_ACL_IPV4VLAN_PROTO_FIELD].value.u8 = ri->proto;
+ ro->field[RTE_ACL_IPV4VLAN_VLAN1_FIELD].value.u16 = ri->vlan;
+ ro->field[RTE_ACL_IPV4VLAN_VLAN2_FIELD].value.u16 = ri->domain;
+ ro->field[RTE_ACL_IPV4VLAN_SRC_FIELD].value.u32 = ri->src_addr;
+ ro->field[RTE_ACL_IPV4VLAN_DST_FIELD].value.u32 = ri->dst_addr;
+ ro->field[RTE_ACL_IPV4VLAN_SRCP_FIELD].value.u16 = ri->src_port_low;
+ ro->field[RTE_ACL_IPV4VLAN_DSTP_FIELD].value.u16 = ri->dst_port_low;
+
+ ro->field[RTE_ACL_IPV4VLAN_PROTO_FIELD].mask_range.u8 = ri->proto_mask;
+ ro->field[RTE_ACL_IPV4VLAN_VLAN1_FIELD].mask_range.u16 = ri->vlan_mask;
+ ro->field[RTE_ACL_IPV4VLAN_VLAN2_FIELD].mask_range.u16 =
+ ri->domain_mask;
+ ro->field[RTE_ACL_IPV4VLAN_SRC_FIELD].mask_range.u32 =
+ ri->src_mask_len;
+ ro->field[RTE_ACL_IPV4VLAN_DST_FIELD].mask_range.u32 = ri->dst_mask_len;
+ ro->field[RTE_ACL_IPV4VLAN_SRCP_FIELD].mask_range.u16 =
+ ri->src_port_high;
+ ro->field[RTE_ACL_IPV4VLAN_DSTP_FIELD].mask_range.u16 =
+ ri->dst_port_high;
+}
+
+/*
+ * Add ipv4vlan rules to an existing ACL context.
+ * This function is not multi-thread safe.
+ *
+ * @param ctx
+ * ACL context to add patterns to.
+ * @param rules
+ * Array of rules to add to the ACL context.
+ * Note that all fields in rte_acl_ipv4vlan_rule structures are expected
+ * to be in host byte order.
+ * @param num
+ * Number of elements in the input array of rules.
+ * @return
+ * - -ENOMEM if there is no space in the ACL context for these rules.
+ * - -EINVAL if the parameters are invalid.
+ * - Zero if operation completed successfully.
+ */
+static int
+rte_acl_ipv4vlan_add_rules(struct rte_acl_ctx *ctx,
+ const struct rte_acl_ipv4vlan_rule *rules,
+ uint32_t num)
+{
+ int32_t rc;
+ uint32_t i;
+ struct acl_ipv4vlan_rule rv;
+
+ if (ctx == NULL || rules == NULL)
+ return -EINVAL;
+
+ /* check input rules. */
+ for (i = 0; i != num; i++) {
+ rc = acl_ipv4vlan_check_rule(rules + i);
+ if (rc != 0) {
+ RTE_LOG(ERR, ACL, "%s: rule #%u is invalid\n",
+ __func__, i + 1);
+ return rc;
+ }
+ }
+
+ /* perform conversion to the internal format and add to the context. */
+ for (i = 0, rc = 0; i != num && rc == 0; i++) {
+ acl_ipv4vlan_convert_rule(rules + i, &rv);
+ rc = rte_acl_add_rules(ctx, (struct rte_acl_rule *)&rv, 1);
+ }
+
+ return rc;
+}
+
+static void
+acl_ipv4vlan_config(struct rte_acl_config *cfg,
+ const uint32_t layout[RTE_ACL_IPV4VLAN_NUM],
+ uint32_t num_categories)
+{
+ static const struct rte_acl_field_def
+ ipv4_defs[RTE_ACL_IPV4VLAN_NUM_FIELDS] = {
+ {
+ .type = RTE_ACL_FIELD_TYPE_BITMASK,
+ .size = sizeof(uint8_t),
+ .field_index = RTE_ACL_IPV4VLAN_PROTO_FIELD,
+ .input_index = RTE_ACL_IPV4VLAN_PROTO,
+ },
+ {
+ .type = RTE_ACL_FIELD_TYPE_BITMASK,
+ .size = sizeof(uint16_t),
+ .field_index = RTE_ACL_IPV4VLAN_VLAN1_FIELD,
+ .input_index = RTE_ACL_IPV4VLAN_VLAN,
+ },
+ {
+ .type = RTE_ACL_FIELD_TYPE_BITMASK,
+ .size = sizeof(uint16_t),
+ .field_index = RTE_ACL_IPV4VLAN_VLAN2_FIELD,
+ .input_index = RTE_ACL_IPV4VLAN_VLAN,
+ },
+ {
+ .type = RTE_ACL_FIELD_TYPE_MASK,
+ .size = sizeof(uint32_t),
+ .field_index = RTE_ACL_IPV4VLAN_SRC_FIELD,
+ .input_index = RTE_ACL_IPV4VLAN_SRC,
+ },
+ {
+ .type = RTE_ACL_FIELD_TYPE_MASK,
+ .size = sizeof(uint32_t),
+ .field_index = RTE_ACL_IPV4VLAN_DST_FIELD,
+ .input_index = RTE_ACL_IPV4VLAN_DST,
+ },
+ {
+ .type = RTE_ACL_FIELD_TYPE_RANGE,
+ .size = sizeof(uint16_t),
+ .field_index = RTE_ACL_IPV4VLAN_SRCP_FIELD,
+ .input_index = RTE_ACL_IPV4VLAN_PORTS,
+ },
+ {
+ .type = RTE_ACL_FIELD_TYPE_RANGE,
+ .size = sizeof(uint16_t),
+ .field_index = RTE_ACL_IPV4VLAN_DSTP_FIELD,
+ .input_index = RTE_ACL_IPV4VLAN_PORTS,
+ },
+ };
+
+ memcpy(&cfg->defs, ipv4_defs, sizeof(ipv4_defs));
+ cfg->num_fields = RTE_DIM(ipv4_defs);
+
+ cfg->defs[RTE_ACL_IPV4VLAN_PROTO_FIELD].offset =
+ layout[RTE_ACL_IPV4VLAN_PROTO];
+ cfg->defs[RTE_ACL_IPV4VLAN_VLAN1_FIELD].offset =
+ layout[RTE_ACL_IPV4VLAN_VLAN];
+ cfg->defs[RTE_ACL_IPV4VLAN_VLAN2_FIELD].offset =
+ layout[RTE_ACL_IPV4VLAN_VLAN] +
+ cfg->defs[RTE_ACL_IPV4VLAN_VLAN1_FIELD].size;
+ cfg->defs[RTE_ACL_IPV4VLAN_SRC_FIELD].offset =
+ layout[RTE_ACL_IPV4VLAN_SRC];
+ cfg->defs[RTE_ACL_IPV4VLAN_DST_FIELD].offset =
+ layout[RTE_ACL_IPV4VLAN_DST];
+ cfg->defs[RTE_ACL_IPV4VLAN_SRCP_FIELD].offset =
+ layout[RTE_ACL_IPV4VLAN_PORTS];
+ cfg->defs[RTE_ACL_IPV4VLAN_DSTP_FIELD].offset =
+ layout[RTE_ACL_IPV4VLAN_PORTS] +
+ cfg->defs[RTE_ACL_IPV4VLAN_SRCP_FIELD].size;
+
+ cfg->num_categories = num_categories;
+}
+
+/*
+ * Analyze set of ipv4vlan rules and build required internal
+ * run-time structures.
+ * This function is not multi-thread safe.
+ *
+ * @param ctx
+ * ACL context to build.
+ * @param layout
+ * Layout of input data to search through.
+ * @param num_categories
+ * Maximum number of categories to use in that build.
+ * @return
+ * - -ENOMEM if couldn't allocate enough memory.
+ * - -EINVAL if the parameters are invalid.
+ * - Negative error code if operation failed.
+ * - Zero if operation completed successfully.
+ */
+static int
+rte_acl_ipv4vlan_build(struct rte_acl_ctx *ctx,
+ const uint32_t layout[RTE_ACL_IPV4VLAN_NUM],
+ uint32_t num_categories)
+{
+ struct rte_acl_config cfg;
+
+ if (ctx == NULL || layout == NULL)
+ return -EINVAL;
+
+ memset(&cfg, 0, sizeof(cfg));
+ acl_ipv4vlan_config(&cfg, layout, num_categories);
+ return rte_acl_build(ctx, &cfg);
+}
+
/*
* Test scalar and SSE ACL lookup.
*/
diff --git a/app/test/test_acl.h b/app/test/test_acl.h
index 9813569..421f310 100644
--- a/app/test/test_acl.h
+++ b/app/test/test_acl.h
@@ -46,6 +46,65 @@ struct ipv4_7tuple {
uint32_t deny;
};
+/**
+ * Legacy support for 7-tuple IPv4 and VLAN rule.
+ * This structure and corresponding API is deprecated.
+ */
+struct rte_acl_ipv4vlan_rule {
+ struct rte_acl_rule_data data; /**< Miscellaneous data for the rule. */
+ uint8_t proto; /**< IPv4 protocol ID. */
+ uint8_t proto_mask; /**< IPv4 protocol ID mask. */
+ uint16_t vlan; /**< VLAN ID. */
+ uint16_t vlan_mask; /**< VLAN ID mask. */
+ uint16_t domain; /**< VLAN domain. */
+ uint16_t domain_mask; /**< VLAN domain mask. */
+ uint32_t src_addr; /**< IPv4 source address. */
+ uint32_t src_mask_len; /**< IPv4 source address mask. */
+ uint32_t dst_addr; /**< IPv4 destination address. */
+ uint32_t dst_mask_len; /**< IPv4 destination address mask. */
+ uint16_t src_port_low; /**< L4 source port low. */
+ uint16_t src_port_high; /**< L4 source port high. */
+ uint16_t dst_port_low; /**< L4 destination port low. */
+ uint16_t dst_port_high; /**< L4 destination port high. */
+};
+
+/**
+ * Specifies fields layout inside rte_acl_rule for rte_acl_ipv4vlan_rule.
+ */
+enum {
+ RTE_ACL_IPV4VLAN_PROTO_FIELD,
+ RTE_ACL_IPV4VLAN_VLAN1_FIELD,
+ RTE_ACL_IPV4VLAN_VLAN2_FIELD,
+ RTE_ACL_IPV4VLAN_SRC_FIELD,
+ RTE_ACL_IPV4VLAN_DST_FIELD,
+ RTE_ACL_IPV4VLAN_SRCP_FIELD,
+ RTE_ACL_IPV4VLAN_DSTP_FIELD,
+ RTE_ACL_IPV4VLAN_NUM_FIELDS
+};
+
+/**
+ * Macro to define rule size for rte_acl_ipv4vlan_rule.
+ */
+#define RTE_ACL_IPV4VLAN_RULE_SZ \
+ RTE_ACL_RULE_SZ(RTE_ACL_IPV4VLAN_NUM_FIELDS)
+
+/*
+ * That effectively defines order of IPV4VLAN classifications:
+ * - PROTO
+ * - VLAN (TAG and DOMAIN)
+ * - SRC IP ADDRESS
+ * - DST IP ADDRESS
+ * - PORTS (SRC and DST)
+ */
+enum {
+ RTE_ACL_IPV4VLAN_PROTO,
+ RTE_ACL_IPV4VLAN_VLAN,
+ RTE_ACL_IPV4VLAN_SRC,
+ RTE_ACL_IPV4VLAN_DST,
+ RTE_ACL_IPV4VLAN_PORTS,
+ RTE_ACL_IPV4VLAN_NUM
+};
+
/* rules for invalid layout test */
struct rte_acl_ipv4vlan_rule invalid_layout_rules[] = {
/* test src and dst address */
diff --git a/doc/guides/rel_notes/deprecation.rst b/doc/guides/rel_notes/deprecation.rst
index c40764a..e7e213c 100644
--- a/doc/guides/rel_notes/deprecation.rst
+++ b/doc/guides/rel_notes/deprecation.rst
@@ -59,10 +59,6 @@ Deprecation Notices
* The scheduler statistics structure will change to allow keeping track of
RED actions.
-* librte_acl: The structure rte_acl_ipv4vlan_rule is deprecated and should
- be removed as well as the associated functions rte_acl_ipv4vlan_add_rules
- and rte_acl_ipv4vlan_build.
-
* librte_cfgfile: In order to allow for longer names and values,
the value of macros CFG_NAME_LEN and CFG_NAME_VAL will be increased.
Most likely, the new values will be 64 and 256, respectively.
diff --git a/doc/guides/rel_notes/release_2_2.rst b/doc/guides/rel_notes/release_2_2.rst
index 2a238c5..6e73092 100644
--- a/doc/guides/rel_notes/release_2_2.rst
+++ b/doc/guides/rel_notes/release_2_2.rst
@@ -19,6 +19,8 @@ API Changes
* The function rte_eal_pci_close_one() is removed.
It was replaced by rte_eal_pci_detach().
+* The deprecated ACL API ipv4vlan is removed.
+
ABI Changes
-----------
@@ -45,7 +47,7 @@ The libraries prepended with a plus sign were incremented in this version.
.. code-block:: diff
+ libethdev.so.2
- librte_acl.so.1
+ + librte_acl.so.2
librte_cfgfile.so.1
librte_cmdline.so.1
librte_distributor.so.1
diff --git a/examples/l3fwd-acl/main.c b/examples/l3fwd-acl/main.c
index f612671..f676d14 100644
--- a/examples/l3fwd-acl/main.c
+++ b/examples/l3fwd-acl/main.c
@@ -261,6 +261,23 @@ enum {
NUM_FIELDS_IPV4
};
+/*
+ * That effectively defines order of IPV4VLAN classifications:
+ * - PROTO
+ * - VLAN (TAG and DOMAIN)
+ * - SRC IP ADDRESS
+ * - DST IP ADDRESS
+ * - PORTS (SRC and DST)
+ */
+enum {
+ RTE_ACL_IPV4VLAN_PROTO,
+ RTE_ACL_IPV4VLAN_VLAN,
+ RTE_ACL_IPV4VLAN_SRC,
+ RTE_ACL_IPV4VLAN_DST,
+ RTE_ACL_IPV4VLAN_PORTS,
+ RTE_ACL_IPV4VLAN_NUM
+};
+
struct rte_acl_field_def ipv4_defs[NUM_FIELDS_IPV4] = {
{
.type = RTE_ACL_FIELD_TYPE_BITMASK,
diff --git a/lib/librte_acl/Makefile b/lib/librte_acl/Makefile
index 46acc2b..7a1cf8a 100644
--- a/lib/librte_acl/Makefile
+++ b/lib/librte_acl/Makefile
@@ -39,7 +39,7 @@ CFLAGS += $(WERROR_FLAGS) -I$(SRCDIR)
EXPORT_MAP := rte_acl_version.map
-LIBABIVER := 1
+LIBABIVER := 2
# all source are stored in SRCS-y
SRCS-$(CONFIG_RTE_LIBRTE_ACL) += tb_mem.c
diff --git a/lib/librte_acl/rte_acl.c b/lib/librte_acl/rte_acl.c
index a54d531..d60219f 100644
--- a/lib/librte_acl/rte_acl.c
+++ b/lib/librte_acl/rte_acl.c
@@ -34,8 +34,6 @@
#include <rte_acl.h>
#include "acl.h"
-#define BIT_SIZEOF(x) (sizeof(x) * CHAR_BIT)
-
TAILQ_HEAD(rte_acl_list, rte_tailq_entry);
static struct rte_tailq_elem rte_acl_tailq = {
@@ -365,171 +363,3 @@ rte_acl_list_dump(void)
}
rte_rwlock_read_unlock(RTE_EAL_TAILQ_RWLOCK);
}
-
-/*
- * Support for legacy ipv4vlan rules.
- */
-
-RTE_ACL_RULE_DEF(acl_ipv4vlan_rule, RTE_ACL_IPV4VLAN_NUM_FIELDS);
-
-static int
-acl_ipv4vlan_check_rule(const struct rte_acl_ipv4vlan_rule *rule)
-{
- if (rule->src_port_low > rule->src_port_high ||
- rule->dst_port_low > rule->dst_port_high ||
- rule->src_mask_len > BIT_SIZEOF(rule->src_addr) ||
- rule->dst_mask_len > BIT_SIZEOF(rule->dst_addr))
- return -EINVAL;
-
- return acl_check_rule(&rule->data);
-}
-
-static void
-acl_ipv4vlan_convert_rule(const struct rte_acl_ipv4vlan_rule *ri,
- struct acl_ipv4vlan_rule *ro)
-{
- ro->data = ri->data;
-
- ro->field[RTE_ACL_IPV4VLAN_PROTO_FIELD].value.u8 = ri->proto;
- ro->field[RTE_ACL_IPV4VLAN_VLAN1_FIELD].value.u16 = ri->vlan;
- ro->field[RTE_ACL_IPV4VLAN_VLAN2_FIELD].value.u16 = ri->domain;
- ro->field[RTE_ACL_IPV4VLAN_SRC_FIELD].value.u32 = ri->src_addr;
- ro->field[RTE_ACL_IPV4VLAN_DST_FIELD].value.u32 = ri->dst_addr;
- ro->field[RTE_ACL_IPV4VLAN_SRCP_FIELD].value.u16 = ri->src_port_low;
- ro->field[RTE_ACL_IPV4VLAN_DSTP_FIELD].value.u16 = ri->dst_port_low;
-
- ro->field[RTE_ACL_IPV4VLAN_PROTO_FIELD].mask_range.u8 = ri->proto_mask;
- ro->field[RTE_ACL_IPV4VLAN_VLAN1_FIELD].mask_range.u16 = ri->vlan_mask;
- ro->field[RTE_ACL_IPV4VLAN_VLAN2_FIELD].mask_range.u16 =
- ri->domain_mask;
- ro->field[RTE_ACL_IPV4VLAN_SRC_FIELD].mask_range.u32 =
- ri->src_mask_len;
- ro->field[RTE_ACL_IPV4VLAN_DST_FIELD].mask_range.u32 = ri->dst_mask_len;
- ro->field[RTE_ACL_IPV4VLAN_SRCP_FIELD].mask_range.u16 =
- ri->src_port_high;
- ro->field[RTE_ACL_IPV4VLAN_DSTP_FIELD].mask_range.u16 =
- ri->dst_port_high;
-}
-
-int
-rte_acl_ipv4vlan_add_rules(struct rte_acl_ctx *ctx,
- const struct rte_acl_ipv4vlan_rule *rules,
- uint32_t num)
-{
- int32_t rc;
- uint32_t i;
- struct acl_ipv4vlan_rule rv;
-
- if (ctx == NULL || rules == NULL || ctx->rule_sz != sizeof(rv))
- return -EINVAL;
-
- /* check input rules. */
- for (i = 0; i != num; i++) {
- rc = acl_ipv4vlan_check_rule(rules + i);
- if (rc != 0) {
- RTE_LOG(ERR, ACL, "%s(%s): rule #%u is invalid\n",
- __func__, ctx->name, i + 1);
- return rc;
- }
- }
-
- if (num + ctx->num_rules > ctx->max_rules)
- return -ENOMEM;
-
- /* perform conversion to the internal format and add to the context. */
- for (i = 0, rc = 0; i != num && rc == 0; i++) {
- acl_ipv4vlan_convert_rule(rules + i, &rv);
- rc = acl_add_rules(ctx, &rv, 1);
- }
-
- return rc;
-}
-
-static void
-acl_ipv4vlan_config(struct rte_acl_config *cfg,
- const uint32_t layout[RTE_ACL_IPV4VLAN_NUM],
- uint32_t num_categories)
-{
- static const struct rte_acl_field_def
- ipv4_defs[RTE_ACL_IPV4VLAN_NUM_FIELDS] = {
- {
- .type = RTE_ACL_FIELD_TYPE_BITMASK,
- .size = sizeof(uint8_t),
- .field_index = RTE_ACL_IPV4VLAN_PROTO_FIELD,
- .input_index = RTE_ACL_IPV4VLAN_PROTO,
- },
- {
- .type = RTE_ACL_FIELD_TYPE_BITMASK,
- .size = sizeof(uint16_t),
- .field_index = RTE_ACL_IPV4VLAN_VLAN1_FIELD,
- .input_index = RTE_ACL_IPV4VLAN_VLAN,
- },
- {
- .type = RTE_ACL_FIELD_TYPE_BITMASK,
- .size = sizeof(uint16_t),
- .field_index = RTE_ACL_IPV4VLAN_VLAN2_FIELD,
- .input_index = RTE_ACL_IPV4VLAN_VLAN,
- },
- {
- .type = RTE_ACL_FIELD_TYPE_MASK,
- .size = sizeof(uint32_t),
- .field_index = RTE_ACL_IPV4VLAN_SRC_FIELD,
- .input_index = RTE_ACL_IPV4VLAN_SRC,
- },
- {
- .type = RTE_ACL_FIELD_TYPE_MASK,
- .size = sizeof(uint32_t),
- .field_index = RTE_ACL_IPV4VLAN_DST_FIELD,
- .input_index = RTE_ACL_IPV4VLAN_DST,
- },
- {
- .type = RTE_ACL_FIELD_TYPE_RANGE,
- .size = sizeof(uint16_t),
- .field_index = RTE_ACL_IPV4VLAN_SRCP_FIELD,
- .input_index = RTE_ACL_IPV4VLAN_PORTS,
- },
- {
- .type = RTE_ACL_FIELD_TYPE_RANGE,
- .size = sizeof(uint16_t),
- .field_index = RTE_ACL_IPV4VLAN_DSTP_FIELD,
- .input_index = RTE_ACL_IPV4VLAN_PORTS,
- },
- };
-
- memcpy(&cfg->defs, ipv4_defs, sizeof(ipv4_defs));
- cfg->num_fields = RTE_DIM(ipv4_defs);
-
- cfg->defs[RTE_ACL_IPV4VLAN_PROTO_FIELD].offset =
- layout[RTE_ACL_IPV4VLAN_PROTO];
- cfg->defs[RTE_ACL_IPV4VLAN_VLAN1_FIELD].offset =
- layout[RTE_ACL_IPV4VLAN_VLAN];
- cfg->defs[RTE_ACL_IPV4VLAN_VLAN2_FIELD].offset =
- layout[RTE_ACL_IPV4VLAN_VLAN] +
- cfg->defs[RTE_ACL_IPV4VLAN_VLAN1_FIELD].size;
- cfg->defs[RTE_ACL_IPV4VLAN_SRC_FIELD].offset =
- layout[RTE_ACL_IPV4VLAN_SRC];
- cfg->defs[RTE_ACL_IPV4VLAN_DST_FIELD].offset =
- layout[RTE_ACL_IPV4VLAN_DST];
- cfg->defs[RTE_ACL_IPV4VLAN_SRCP_FIELD].offset =
- layout[RTE_ACL_IPV4VLAN_PORTS];
- cfg->defs[RTE_ACL_IPV4VLAN_DSTP_FIELD].offset =
- layout[RTE_ACL_IPV4VLAN_PORTS] +
- cfg->defs[RTE_ACL_IPV4VLAN_SRCP_FIELD].size;
-
- cfg->num_categories = num_categories;
-}
-
-int
-rte_acl_ipv4vlan_build(struct rte_acl_ctx *ctx,
- const uint32_t layout[RTE_ACL_IPV4VLAN_NUM],
- uint32_t num_categories)
-{
- struct rte_acl_config cfg;
-
- if (ctx == NULL || layout == NULL)
- return -EINVAL;
-
- memset(&cfg, 0, sizeof(cfg));
- acl_ipv4vlan_config(&cfg, layout, num_categories);
- return rte_acl_build(ctx, &cfg);
-}
diff --git a/lib/librte_acl/rte_acl.h b/lib/librte_acl/rte_acl.h
index bd8f892..98ef2fc 100644
--- a/lib/librte_acl/rte_acl.h
+++ b/lib/librte_acl/rte_acl.h
@@ -380,110 +380,6 @@ rte_acl_dump(const struct rte_acl_ctx *ctx);
void
rte_acl_list_dump(void);
-/**
- * Legacy support for 7-tuple IPv4 and VLAN rule.
- * This structure and corresponding API is deprecated.
- */
-struct rte_acl_ipv4vlan_rule {
- struct rte_acl_rule_data data; /**< Miscellaneous data for the rule. */
- uint8_t proto; /**< IPv4 protocol ID. */
- uint8_t proto_mask; /**< IPv4 protocol ID mask. */
- uint16_t vlan; /**< VLAN ID. */
- uint16_t vlan_mask; /**< VLAN ID mask. */
- uint16_t domain; /**< VLAN domain. */
- uint16_t domain_mask; /**< VLAN domain mask. */
- uint32_t src_addr; /**< IPv4 source address. */
- uint32_t src_mask_len; /**< IPv4 source address mask. */
- uint32_t dst_addr; /**< IPv4 destination address. */
- uint32_t dst_mask_len; /**< IPv4 destination address mask. */
- uint16_t src_port_low; /**< L4 source port low. */
- uint16_t src_port_high; /**< L4 source port high. */
- uint16_t dst_port_low; /**< L4 destination port low. */
- uint16_t dst_port_high; /**< L4 destination port high. */
-};
-
-/**
- * Specifies fields layout inside rte_acl_rule for rte_acl_ipv4vlan_rule.
- */
-enum {
- RTE_ACL_IPV4VLAN_PROTO_FIELD,
- RTE_ACL_IPV4VLAN_VLAN1_FIELD,
- RTE_ACL_IPV4VLAN_VLAN2_FIELD,
- RTE_ACL_IPV4VLAN_SRC_FIELD,
- RTE_ACL_IPV4VLAN_DST_FIELD,
- RTE_ACL_IPV4VLAN_SRCP_FIELD,
- RTE_ACL_IPV4VLAN_DSTP_FIELD,
- RTE_ACL_IPV4VLAN_NUM_FIELDS
-};
-
-/**
- * Macro to define rule size for rte_acl_ipv4vlan_rule.
- */
-#define RTE_ACL_IPV4VLAN_RULE_SZ \
- RTE_ACL_RULE_SZ(RTE_ACL_IPV4VLAN_NUM_FIELDS)
-
-/*
- * That effectively defines order of IPV4VLAN classifications:
- * - PROTO
- * - VLAN (TAG and DOMAIN)
- * - SRC IP ADDRESS
- * - DST IP ADDRESS
- * - PORTS (SRC and DST)
- */
-enum {
- RTE_ACL_IPV4VLAN_PROTO,
- RTE_ACL_IPV4VLAN_VLAN,
- RTE_ACL_IPV4VLAN_SRC,
- RTE_ACL_IPV4VLAN_DST,
- RTE_ACL_IPV4VLAN_PORTS,
- RTE_ACL_IPV4VLAN_NUM
-};
-
-/**
- * Add ipv4vlan rules to an existing ACL context.
- * This function is not multi-thread safe.
- *
- * @param ctx
- * ACL context to add patterns to.
- * @param rules
- * Array of rules to add to the ACL context.
- * Note that all fields in rte_acl_ipv4vlan_rule structures are expected
- * to be in host byte order.
- * @param num
- * Number of elements in the input array of rules.
- * @return
- * - -ENOMEM if there is no space in the ACL context for these rules.
- * - -EINVAL if the parameters are invalid.
- * - Zero if operation completed successfully.
- */
-int
-rte_acl_ipv4vlan_add_rules(struct rte_acl_ctx *ctx,
- const struct rte_acl_ipv4vlan_rule *rules,
- uint32_t num);
-
-/**
- * Analyze set of ipv4vlan rules and build required internal
- * run-time structures.
- * This function is not multi-thread safe.
- *
- * @param ctx
- * ACL context to build.
- * @param layout
- * Layout of input data to search through.
- * @param num_categories
- * Maximum number of categories to use in that build.
- * @return
- * - -ENOMEM if couldn't allocate enough memory.
- * - -EINVAL if the parameters are invalid.
- * - Negative error code if operation failed.
- * - Zero if operation completed successfully.
- */
-int
-rte_acl_ipv4vlan_build(struct rte_acl_ctx *ctx,
- const uint32_t layout[RTE_ACL_IPV4VLAN_NUM],
- uint32_t num_categories);
-
-
#ifdef __cplusplus
}
#endif
diff --git a/lib/librte_acl/rte_acl_version.map b/lib/librte_acl/rte_acl_version.map
index 3f9c810..b09370a 100644
--- a/lib/librte_acl/rte_acl_version.map
+++ b/lib/librte_acl/rte_acl_version.map
@@ -10,8 +10,6 @@ DPDK_2.0 {
rte_acl_dump;
rte_acl_find_existing;
rte_acl_free;
- rte_acl_ipv4vlan_add_rules;
- rte_acl_ipv4vlan_build;
rte_acl_list_dump;
rte_acl_reset;
rte_acl_reset_rules;
--
2.5.1
^ permalink raw reply [relevance 2%]
* [dpdk-dev] [PATCH v2 09/10] kni: remove deprecated functions
2015-09-02 13:16 3% ` [dpdk-dev] [PATCH v2 00/10] clean deprecated code Thomas Monjalon
` (7 preceding siblings ...)
2015-09-02 13:16 2% ` [dpdk-dev] [PATCH v2 08/10] acl: remove old API Thomas Monjalon
@ 2015-09-02 13:16 3% ` Thomas Monjalon
2015-09-02 13:16 5% ` [dpdk-dev] [PATCH v2 10/10] ring: " Thomas Monjalon
2015-09-04 7:50 0% ` [dpdk-dev] [PATCH v2 00/10] clean deprecated code Thomas Monjalon
10 siblings, 0 replies; 200+ results
From: Thomas Monjalon @ 2015-09-02 13:16 UTC (permalink / raw)
To: dev; +Cc: Stephen Hemminger
From: Stephen Hemminger <shemming@brocade.com>
These functions were tagged as deprecated in 2.0 so they can be
removed in 2.2.
The library version is incremented.
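For code still built around the removed rte_kni_create(), a minimal migration sketch (illustrative function name; it simply mirrors the body deleted below): fill an rte_kni_conf for the port and call rte_kni_alloc() directly.

#include <stdio.h>
#include <string.h>
#include <rte_ethdev.h>
#include <rte_kni.h>

static struct rte_kni *
example_kni_alloc(uint8_t port_id, unsigned mbuf_size,
        struct rte_mempool *pktmbuf_pool, struct rte_kni_ops *ops)
{
        struct rte_kni_conf conf;
        struct rte_eth_dev_info info;

        memset(&info, 0, sizeof(info));
        memset(&conf, 0, sizeof(conf));
        rte_eth_dev_info_get(port_id, &info);

        snprintf(conf.name, sizeof(conf.name), "vEth%u", port_id);
        conf.addr = info.pci_dev->addr;
        conf.id = info.pci_dev->id;
        conf.group_id = (uint16_t)port_id;
        conf.mbuf_size = mbuf_size;

        /* requests from the kernel side are still handled per port */
        ops->port_id = port_id;

        return rte_kni_alloc(pktmbuf_pool, &conf, ops);
}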
Signed-off-by: Stephen Hemminger <stephen@networkplumber.org>
Acked-by: Helin Zhang <helin.zhang@intel.com>
[Thomas: update doc and version]
Signed-off-by: Thomas Monjalon <thomas.monjalon@6wind.com>
Acked-by: Neil Horman <nhorman@tuxdriver.com>
---
app/test/Makefile | 6 ---
app/test/test_kni.c | 36 ---------------
doc/guides/prog_guide/kernel_nic_interface.rst | 2 -
doc/guides/rel_notes/deprecation.rst | 3 --
doc/guides/rel_notes/release_2_2.rst | 5 ++-
doc/guides/sample_app_ug/kernel_nic_interface.rst | 9 ----
lib/librte_kni/Makefile | 2 +-
lib/librte_kni/rte_kni.c | 51 ---------------------
lib/librte_kni/rte_kni.h | 54 -----------------------
lib/librte_kni/rte_kni_version.map | 3 --
10 files changed, 5 insertions(+), 166 deletions(-)
diff --git a/app/test/Makefile b/app/test/Makefile
index e7f148f..7778e1c 100644
--- a/app/test/Makefile
+++ b/app/test/Makefile
@@ -146,12 +146,6 @@ SRCS-$(CONFIG_RTE_LIBRTE_KVARGS) += test_kvargs.c
CFLAGS += -O3
CFLAGS += $(WERROR_FLAGS)
-# Disable warnings of deprecated-declarations in test_kni.c
-ifeq ($(CC), icc)
-CFLAGS_test_kni.o += -wd1478
-else
-CFLAGS_test_kni.o += -Wno-deprecated-declarations
-endif
CFLAGS += -D_GNU_SOURCE
# Disable VTA for memcpy test
diff --git a/app/test/test_kni.c b/app/test/test_kni.c
index 506b543..9dad988 100644
--- a/app/test/test_kni.c
+++ b/app/test/test_kni.c
@@ -398,17 +398,6 @@ test_kni_processing(uint8_t port_id, struct rte_mempool *mp)
printf("fail to create kni\n");
return -1;
}
- if (rte_kni_get_port_id(kni) != port_id) {
- printf("fail to get port id\n");
- ret = -1;
- goto fail_kni;
- }
-
- if (rte_kni_info_get(RTE_MAX_ETHPORTS)) {
- printf("Unexpectedly get a KNI successfully\n");
- ret = -1;
- goto fail_kni;
- }
test_kni_ctx = kni;
test_kni_processing_flag = 0;
@@ -591,14 +580,6 @@ test_kni(void)
goto fail;
}
- /* test of getting port id according to NULL kni context */
- if (rte_kni_get_port_id(NULL) < RTE_MAX_ETHPORTS) {
- ret = -1;
- printf("unexpectedly get port id successfully by NULL kni "
- "pointer\n");
- goto fail;
- }
-
/* test of releasing NULL kni context */
ret = rte_kni_release(NULL);
if (ret == 0) {
@@ -645,23 +626,6 @@ test_kni(void)
goto fail;
}
- /* test the interface of creating a KNI, for backward compatibility */
- memset(&ops, 0, sizeof(ops));
- ops = kni_ops;
- kni = rte_kni_create(port_id, MAX_PACKET_SZ, mp, &ops);
- if (!kni) {
- ret = -1;
- printf("Fail to create a KNI device for port %d\n", port_id);
- goto fail;
- }
-
- ret = rte_kni_release(kni);
- if (ret < 0) {
- printf("Fail to release a KNI device\n");
- goto fail;
- }
-
- ret = 0;
fail:
rte_eth_dev_stop(port_id);
diff --git a/doc/guides/prog_guide/kernel_nic_interface.rst b/doc/guides/prog_guide/kernel_nic_interface.rst
index 713d30b..0d91476 100644
--- a/doc/guides/prog_guide/kernel_nic_interface.rst
+++ b/doc/guides/prog_guide/kernel_nic_interface.rst
@@ -100,8 +100,6 @@ Refer to rte_kni_common.h in the DPDK source code for more details.
The physical addresses will be re-mapped into the kernel address space and stored in separate KNI contexts.
-Once KNI interfaces are created, the KNI context information can be queried by calling the rte_kni_info_get() function.
-
The KNI interfaces can be deleted by a DPDK application dynamically after being created.
Furthermore, all those KNI interfaces not deleted will be deleted on the release operation
of the miscellaneous device (when the DPDK application is closed).
diff --git a/doc/guides/rel_notes/deprecation.rst b/doc/guides/rel_notes/deprecation.rst
index e7e213c..04819fa 100644
--- a/doc/guides/rel_notes/deprecation.rst
+++ b/doc/guides/rel_notes/deprecation.rst
@@ -40,9 +40,6 @@ Deprecation Notices
the tunnel type, TNI/VNI, inner MAC and inner VLAN are monitored.
The release 2.2 will contain these changes without backwards compatibility.
-* librte_kni: Functions based on port id are deprecated for a long time and
- should be removed (rte_kni_create, rte_kni_get_port_id and rte_kni_info_get).
-
* librte_pmd_ring: The deprecated functions rte_eth_ring_pair_create and
rte_eth_ring_pair_attach should be removed.
diff --git a/doc/guides/rel_notes/release_2_2.rst b/doc/guides/rel_notes/release_2_2.rst
index 6e73092..6dcfe88 100644
--- a/doc/guides/rel_notes/release_2_2.rst
+++ b/doc/guides/rel_notes/release_2_2.rst
@@ -21,6 +21,9 @@ API Changes
* The deprecated ACL API ipv4vlan is removed.
+* The deprecated KNI functions are removed:
+ rte_kni_create(), rte_kni_get_port_id() and rte_kni_info_get().
+
ABI Changes
-----------
@@ -56,7 +59,7 @@ The libraries prepended with a plus sign were incremented in this version.
librte_ip_frag.so.1
librte_ivshmem.so.1
librte_jobstats.so.1
- librte_kni.so.1
+ + librte_kni.so.2
librte_kvargs.so.1
+ librte_lpm.so.2
+ librte_mbuf.so.2
diff --git a/doc/guides/sample_app_ug/kernel_nic_interface.rst b/doc/guides/sample_app_ug/kernel_nic_interface.rst
index 02dde59..f1deca9 100644
--- a/doc/guides/sample_app_ug/kernel_nic_interface.rst
+++ b/doc/guides/sample_app_ug/kernel_nic_interface.rst
@@ -242,15 +242,6 @@ Setup of mbuf pool, driver and queues is similar to the setup done in the L2 For
In addition, one or more kernel NIC interfaces are allocated for each
of the configured ports according to the command line parameters.
-The code for creating the kernel NIC interface for a specific port is as follows:
-
-.. code-block:: c
-
- kni = rte_kni_create(port, MAX_PACKET_SZ, pktmbuf_pool, &kni_ops);
- if (kni == NULL)
- rte_exit(EXIT_FAILURE, "Fail to create kni dev "
- "for port: %d\n", port);
-
The code for allocating the kernel NIC interfaces for a specific port is as follows:
.. code-block:: c
diff --git a/lib/librte_kni/Makefile b/lib/librte_kni/Makefile
index 7107832..1398164 100644
--- a/lib/librte_kni/Makefile
+++ b/lib/librte_kni/Makefile
@@ -38,7 +38,7 @@ CFLAGS += $(WERROR_FLAGS) -I$(SRCDIR) -O3 -fno-strict-aliasing
EXPORT_MAP := rte_kni_version.map
-LIBABIVER := 1
+LIBABIVER := 2
# all source are stored in SRCS-y
SRCS-$(CONFIG_RTE_LIBRTE_KNI) := rte_kni.c
diff --git a/lib/librte_kni/rte_kni.c b/lib/librte_kni/rte_kni.c
index 08155db..ea9baf4 100644
--- a/lib/librte_kni/rte_kni.c
+++ b/lib/librte_kni/rte_kni.c
@@ -311,31 +311,6 @@ kni_fail:
max_kni_ifaces);
}
-/* It is deprecated and just for backward compatibility */
-struct rte_kni *
-rte_kni_create(uint8_t port_id,
- unsigned mbuf_size,
- struct rte_mempool *pktmbuf_pool,
- struct rte_kni_ops *ops)
-{
- struct rte_kni_conf conf;
- struct rte_eth_dev_info info;
-
- memset(&info, 0, sizeof(info));
- memset(&conf, 0, sizeof(conf));
- rte_eth_dev_info_get(port_id, &info);
-
- snprintf(conf.name, sizeof(conf.name), "vEth%u", port_id);
- conf.addr = info.pci_dev->addr;
- conf.id = info.pci_dev->id;
- conf.group_id = (uint16_t)port_id;
- conf.mbuf_size = mbuf_size;
-
- /* Save the port id for request handling */
- ops->port_id = port_id;
-
- return rte_kni_alloc(pktmbuf_pool, &conf, ops);
-}
struct rte_kni *
rte_kni_alloc(struct rte_mempool *pktmbuf_pool,
@@ -650,16 +625,6 @@ kni_allocate_mbufs(struct rte_kni *kni)
}
}
-/* It is deprecated and just for backward compatibility */
-uint8_t
-rte_kni_get_port_id(struct rte_kni *kni)
-{
- if (!kni)
- return ~0x0;
-
- return kni->ops.port_id;
-}
-
struct rte_kni *
rte_kni_get(const char *name)
{
@@ -686,22 +651,6 @@ rte_kni_get_name(const struct rte_kni *kni)
return kni->name;
}
-/*
- * It is deprecated and just for backward compatibility.
- */
-struct rte_kni *
-rte_kni_info_get(uint8_t port_id)
-{
- char name[RTE_MEMZONE_NAMESIZE];
-
- if (port_id >= RTE_MAX_ETHPORTS)
- return NULL;
-
- snprintf(name, RTE_MEMZONE_NAMESIZE, "vEth%u", port_id);
-
- return rte_kni_get(name);
-}
-
static enum kni_ops_status
kni_check_request_register(struct rte_kni_ops *ops)
{
diff --git a/lib/librte_kni/rte_kni.h b/lib/librte_kni/rte_kni.h
index 52ffdb7..ef9faa9 100644
--- a/lib/librte_kni/rte_kni.h
+++ b/lib/librte_kni/rte_kni.h
@@ -129,30 +129,6 @@ extern struct rte_kni *rte_kni_alloc(struct rte_mempool *pktmbuf_pool,
struct rte_kni_ops *ops);
/**
- * It create a KNI device for specific port.
- *
- * Note: It is deprecated and just for backward compatibility.
- *
- * @param port_id
- * Port ID.
- * @param mbuf_size
- * mbuf size.
- * @param pktmbuf_pool
- * The mempool for allocting mbufs for packets.
- * @param ops
- * The pointer to the callbacks for the KNI kernel requests.
- *
- * @return
- * - The pointer to the context of a KNI interface.
- * - NULL indicate error.
- */
-extern struct rte_kni *rte_kni_create(uint8_t port_id,
- unsigned mbuf_size,
- struct rte_mempool *pktmbuf_pool,
- struct rte_kni_ops *ops) \
- __attribute__ ((deprecated));
-
-/**
* Release KNI interface according to the context. It will also release the
* paired KNI interface in kernel space. All processing on the specific KNI
* context need to be stopped before calling this interface.
@@ -221,21 +197,6 @@ extern unsigned rte_kni_tx_burst(struct rte_kni *kni,
struct rte_mbuf **mbufs, unsigned num);
/**
- * Get the port id from KNI interface.
- *
- * Note: It is deprecated and just for backward compatibility.
- *
- * @param kni
- * The KNI interface context.
- *
- * @return
- * On success: The port id.
- * On failure: ~0x0
- */
-extern uint8_t rte_kni_get_port_id(struct rte_kni *kni) \
- __attribute__ ((deprecated));
-
-/**
* Get the KNI context of its name.
*
* @param name
@@ -258,21 +219,6 @@ extern struct rte_kni *rte_kni_get(const char *name);
extern const char *rte_kni_get_name(const struct rte_kni *kni);
/**
- * Get the KNI context of the specific port.
- *
- * Note: It is deprecated and just for backward compatibility.
- *
- * @param port_id
- * the port id.
- *
- * @return
- * On success: Pointer to KNI interface.
- * On failure: NULL
- */
-extern struct rte_kni *rte_kni_info_get(uint8_t port_id) \
- __attribute__ ((deprecated));
-
-/**
* Register KNI request handling for a specified port,and it can
* be called by master process or slave process.
*
diff --git a/lib/librte_kni/rte_kni_version.map b/lib/librte_kni/rte_kni_version.map
index a987d31..acd515e 100644
--- a/lib/librte_kni/rte_kni_version.map
+++ b/lib/librte_kni/rte_kni_version.map
@@ -3,12 +3,9 @@ DPDK_2.0 {
rte_kni_alloc;
rte_kni_close;
- rte_kni_create;
rte_kni_get;
rte_kni_get_name;
- rte_kni_get_port_id;
rte_kni_handle_request;
- rte_kni_info_get;
rte_kni_init;
rte_kni_register_handlers;
rte_kni_release;
--
2.5.1
^ permalink raw reply [relevance 3%]
* [dpdk-dev] [PATCH v2 10/10] ring: remove deprecated functions
2015-09-02 13:16 3% ` [dpdk-dev] [PATCH v2 00/10] clean deprecated code Thomas Monjalon
` (8 preceding siblings ...)
2015-09-02 13:16 3% ` [dpdk-dev] [PATCH v2 09/10] kni: remove deprecated functions Thomas Monjalon
@ 2015-09-02 13:16 5% ` Thomas Monjalon
2015-09-04 7:50 0% ` [dpdk-dev] [PATCH v2 00/10] clean deprecated code Thomas Monjalon
10 siblings, 0 replies; 200+ results
From: Thomas Monjalon @ 2015-09-02 13:16 UTC (permalink / raw)
To: dev; +Cc: Stephen Hemminger
From: Stephen Hemminger <shemming@brocade.com>
These were deprecated in 2.0 so they are removed in 2.2.
The library version is incremented.
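For applications that used the removed pair helpers, a minimal sketch (illustrative ring names, derived from the body deleted below): create the rings explicitly and expose them through rte_eth_from_rings(), which stays in the API.

#include <rte_ring.h>
#include <rte_eth_ring.h>

static int
example_ring_port_create(const char *name, unsigned numa_node)
{
        struct rte_ring *rx = rte_ring_create("ETH_RX0_example", 1024,
                numa_node, RING_F_SP_ENQ | RING_F_SC_DEQ);
        struct rte_ring *tx = rte_ring_create("ETH_TX0_example", 1024,
                numa_node, RING_F_SP_ENQ | RING_F_SC_DEQ);

        if (rx == NULL || tx == NULL)
                return -1;

        /* one rx queue and one tx queue backed by the rings above */
        return rte_eth_from_rings(name, &rx, 1, &tx, 1, numa_node);
}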
Signed-off-by: Stephen Hemminger <stephen@networkplumber.org>
Signed-off-by: Thomas Monjalon <thomas.monjalon@6wind.com>
Acked-by: Neil Horman <nhorman@tuxdriver.com>
---
doc/guides/rel_notes/deprecation.rst | 3 --
doc/guides/rel_notes/release_2_2.rst | 5 ++-
drivers/net/ring/Makefile | 2 +-
drivers/net/ring/rte_eth_ring.c | 56 -------------------------------
drivers/net/ring/rte_eth_ring.h | 3 --
drivers/net/ring/rte_eth_ring_version.map | 2 --
6 files changed, 5 insertions(+), 66 deletions(-)
diff --git a/doc/guides/rel_notes/deprecation.rst b/doc/guides/rel_notes/deprecation.rst
index 04819fa..5f6079b 100644
--- a/doc/guides/rel_notes/deprecation.rst
+++ b/doc/guides/rel_notes/deprecation.rst
@@ -40,9 +40,6 @@ Deprecation Notices
the tunnel type, TNI/VNI, inner MAC and inner VLAN are monitored.
The release 2.2 will contain these changes without backwards compatibility.
-* librte_pmd_ring: The deprecated functions rte_eth_ring_pair_create and
- rte_eth_ring_pair_attach should be removed.
-
* ABI changes are planned for struct virtio_net in order to support vhost-user
multiple queues feature.
It should be integrated in release 2.2 without backward compatibility.
diff --git a/doc/guides/rel_notes/release_2_2.rst b/doc/guides/rel_notes/release_2_2.rst
index 6dcfe88..abe57b4 100644
--- a/doc/guides/rel_notes/release_2_2.rst
+++ b/doc/guides/rel_notes/release_2_2.rst
@@ -24,6 +24,9 @@ API Changes
* The deprecated KNI functions are removed:
rte_kni_create(), rte_kni_get_port_id() and rte_kni_info_get().
+* The deprecated ring PMD functions are removed:
+ rte_eth_ring_pair_create() and rte_eth_ring_pair_attach().
+
ABI Changes
-----------
@@ -67,7 +70,7 @@ The libraries prepended with a plus sign were incremented in this version.
librte_meter.so.1
librte_pipeline.so.1
librte_pmd_bond.so.1
- librte_pmd_ring.so.1
+ + librte_pmd_ring.so.2
librte_port.so.1
librte_power.so.1
librte_reorder.so.1
diff --git a/drivers/net/ring/Makefile b/drivers/net/ring/Makefile
index e442d0b..ae83505 100644
--- a/drivers/net/ring/Makefile
+++ b/drivers/net/ring/Makefile
@@ -41,7 +41,7 @@ CFLAGS += $(WERROR_FLAGS)
EXPORT_MAP := rte_eth_ring_version.map
-LIBABIVER := 1
+LIBABIVER := 2
#
# all source are stored in SRCS-y
diff --git a/drivers/net/ring/rte_eth_ring.c b/drivers/net/ring/rte_eth_ring.c
index 6fd3d0a..0ba36d5 100644
--- a/drivers/net/ring/rte_eth_ring.c
+++ b/drivers/net/ring/rte_eth_ring.c
@@ -391,62 +391,6 @@ eth_dev_ring_create(const char *name, const unsigned numa_node,
return 0;
}
-
-static int
-eth_dev_ring_pair_create(const char *name, const unsigned numa_node,
- enum dev_action action)
-{
- /* rx and tx are so-called from point of view of first port.
- * They are inverted from the point of view of second port
- */
- struct rte_ring *rx[RTE_PMD_RING_MAX_RX_RINGS];
- struct rte_ring *tx[RTE_PMD_RING_MAX_TX_RINGS];
- unsigned i;
- char rx_rng_name[RTE_RING_NAMESIZE];
- char tx_rng_name[RTE_RING_NAMESIZE];
- unsigned num_rings = RTE_MIN(RTE_PMD_RING_MAX_RX_RINGS,
- RTE_PMD_RING_MAX_TX_RINGS);
-
- for (i = 0; i < num_rings; i++) {
- snprintf(rx_rng_name, sizeof(rx_rng_name), "ETH_RX%u_%s", i, name);
- rx[i] = (action == DEV_CREATE) ?
- rte_ring_create(rx_rng_name, 1024, numa_node,
- RING_F_SP_ENQ|RING_F_SC_DEQ) :
- rte_ring_lookup(rx_rng_name);
- if (rx[i] == NULL)
- return -1;
- snprintf(tx_rng_name, sizeof(tx_rng_name), "ETH_TX%u_%s", i, name);
- tx[i] = (action == DEV_CREATE) ?
- rte_ring_create(tx_rng_name, 1024, numa_node,
- RING_F_SP_ENQ|RING_F_SC_DEQ):
- rte_ring_lookup(tx_rng_name);
- if (tx[i] == NULL)
- return -1;
- }
-
- if (rte_eth_from_rings(rx_rng_name, rx, num_rings, tx, num_rings,
- numa_node) < 0 ||
- rte_eth_from_rings(tx_rng_name, tx, num_rings, rx,
- num_rings, numa_node) < 0)
- return -1;
-
- return 0;
-}
-
-int
-rte_eth_ring_pair_create(const char *name, const unsigned numa_node)
-{
- RTE_LOG(WARNING, PMD, "rte_eth_ring_pair_create is deprecated\n");
- return eth_dev_ring_pair_create(name, numa_node, DEV_CREATE);
-}
-
-int
-rte_eth_ring_pair_attach(const char *name, const unsigned numa_node)
-{
- RTE_LOG(WARNING, PMD, "rte_eth_ring_pair_attach is deprecated\n");
- return eth_dev_ring_pair_create(name, numa_node, DEV_ATTACH);
-}
-
struct node_action_pair {
char name[PATH_MAX];
unsigned node;
diff --git a/drivers/net/ring/rte_eth_ring.h b/drivers/net/ring/rte_eth_ring.h
index 2262249..5a69bff 100644
--- a/drivers/net/ring/rte_eth_ring.h
+++ b/drivers/net/ring/rte_eth_ring.h
@@ -65,9 +65,6 @@ int rte_eth_from_rings(const char *name,
const unsigned nb_tx_queues,
const unsigned numa_node);
-int rte_eth_ring_pair_create(const char *name, const unsigned numa_node);
-int rte_eth_ring_pair_attach(const char *name, const unsigned numa_node);
-
#ifdef __cplusplus
}
#endif
diff --git a/drivers/net/ring/rte_eth_ring_version.map b/drivers/net/ring/rte_eth_ring_version.map
index 8ad107d..0875e25 100644
--- a/drivers/net/ring/rte_eth_ring_version.map
+++ b/drivers/net/ring/rte_eth_ring_version.map
@@ -2,8 +2,6 @@ DPDK_2.0 {
global:
rte_eth_from_rings;
- rte_eth_ring_pair_attach;
- rte_eth_ring_pair_create;
local: *;
};
--
2.5.1
^ permalink raw reply [relevance 5%]
* [dpdk-dev] libdpdk upstream changes for ecosystem best practices
@ 2015-09-02 13:49 5% Robie Basak
2015-09-02 14:18 3% ` Thomas Monjalon
0 siblings, 1 reply; 200+ results
From: Robie Basak @ 2015-09-02 13:49 UTC (permalink / raw)
To: dev
Hi,
We’re looking at packaging DPDK in Ubuntu. We’d like to discuss upstream
changes to better integrate DPDK into Linux distributions. Here’s a
summary of what we need:
1) Define one library ABI (soname and sover) that we can use instead of the
split build.
2) Fix #includes so we don't have to include config.h
3) Put headers into /usr/include/dpdk instead of /usr/include
You can see our current packaging progress at
https://git.launchpad.net/~ubuntu-server/dpdk/log/?h=ubuntu-wily and a
test PPA at https://launchpad.net/~smb/+archive/ubuntu/dpdk/
First, it would be easier for us to ship a single binary package that
ships a single shared library to cover all of DPDK that library
consumers might need, rather than having it split up as you do. I
understand the build system is capable of doing this already, but what
we don’t have is a well defined soname and sover (currently
parameterized in the build) for ABI compatibility purposes. As a binary
distribution, this is something that we’d expect upstream to define,
since normally we expect to achieve binary compatibility across all
distributions at this level in the stack.
So I have the following requests:
So that we can get DPDK packaging into Ubuntu immediately, please could
we agree to define (and burn) libdpdk.so.0 to be the ABI that builds
with upstream release 2.0.0 when built with the native-linuxapp-gcc
template options plus the following changes:
CONFIG_RTE_MACHINE="default"
CONFIG_RTE_APP_TEST=n
CONFIG_LIBRTE_VHOST=y
CONFIG_RTE_EAL_IGB_UIO=n
CONFIG_RTE_LIBRTE_KNI=n
CONFIG_RTE_BUILD_COMBINE_LIBS=y
CONFIG_RTE_BUILD_SHARED_LIB=y
CONFIG_RTE_LIBNAME="dpdk"
The combined library would be placed into /usr/lib/$(ARCH)-linux-gnu/
where it can be found without modification to the library search path.
We want to ship it like this in Ubuntu anyway, but I’d prefer upstream
to have defined it as such since then we’ll have a proper definition of
the ABI that can be shared across distributions and other consumers any
time ABI compatibility is expected.
Though not strictly part of a shared library ABI, I also propose some
build-related upstream changes at API level below, that I’d like to also
ship in the initial Ubuntu packaging of the header files. Clearly you
cannot make this change in an existing release, but I propose that you
do this for your next release so all library consumers will see a
consistent and standard API interface. If you agree to this, then I’d
also like to ship the Ubuntu package with patches to do the same thing
in your current release.
Right now, I understand that library consumers need to either: 1) use
the upstream-provided build system (.mk files etc); or 2) otherwise make
sure to include rte_config.h by specifying it as an extra CPPFLAGS
parameter as the upstream API documentation does not require its
inclusion use in source files. This is problematic because somebody
writing against multiple libraries should just expect to #include the
API-defined headers and link simply with -l for the build to work. It is
common to have a config.h type file generated at build time, but in this
case I’d expect it to be conditionally included automatically as part of
the API, for example by #include’ing it in any file the API _does_
define that library users must include. To fix this, I propose to
#include <dpdk/rte_config.h> in every header file that library users may
#include according to the API.
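For illustration, a public header under this scheme might look like the
sketch below (the header name and guard are made up, not actual DPDK code):
/* rte_foo.h -- sketch of a public header under the proposed scheme */
#ifndef _RTE_FOO_H_
#define _RTE_FOO_H_
#include <dpdk/rte_config.h>  /* generated at DPDK build time */
/* ... public declarations that may depend on RTE_* config macros ... */
#endif /* _RTE_FOO_H_ */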
That brings me to paths. To avoid polluting the /usr/include namespace,
I’d expect either a single /usr/include/dpdk.h, or everything inside
/usr/include/dpdk/, or both. Then library consumers would #include
combinations of <dpdk.h> and <dpdk/foo.h> as required, our packaging
could install into these directories without stealing any other part of
the shared filesystem namespace, and library users wouldn’t have to be
concerned about paths, configuration or build systems. This would then
match every other shared library we package. Does this sound reasonable
to you? Is this a change you will accept?
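As a sketch of what a library consumer would then write (the file name and
the exact header used are only examples):
/* app.c -- consumer sketch under the proposed layout; built with
 * plain `cc app.c -ldpdk`, no extra -I or -D flags. */
#include <dpdk/rte_eal.h>
int main(int argc, char **argv)
{
	return rte_eal_init(argc, argv) < 0 ? 1 : 0;
}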
Thanks,
Robie
^ permalink raw reply [relevance 5%]
* Re: [dpdk-dev] libdpdk upstream changes for ecosystem best practices
2015-09-02 13:49 5% [dpdk-dev] libdpdk upstream changes for ecosystem best practices Robie Basak
@ 2015-09-02 14:18 3% ` Thomas Monjalon
2015-09-02 16:01 0% ` Stephen Hemminger
2015-09-18 10:39 5% ` Robie Basak
0 siblings, 2 replies; 200+ results
From: Thomas Monjalon @ 2015-09-02 14:18 UTC (permalink / raw)
To: Robie Basak; +Cc: dev
Hi,
2015-09-02 14:49, Robie Basak:
> Hi,
>
> We’re looking at packaging DPDK in Ubuntu. We’d like to discuss upstream
Nice
> changes to better integrate DPDK into Linux distributions. Here’s a
> summary of what we need:
>
> 1) Define one library ABI (soname and sover) that we can use instead of the
> split build.
>
> 2) Fix #includes so we don't have to include config.h
>
> 3) Put headers into /usr/include/dpdk instead of /usr/include
>
> You can see our current packaging progress at
> https://git.launchpad.net/~ubuntu-server/dpdk/log/?h=ubuntu-wily and a
Thanks for sharing
> test PPA at https://launchpad.net/~smb/+archive/ubuntu/dpdk/
>
> First, it would be easier for us to ship a single binary package that
> ships a single shared library to cover all of DPDK that library
> consumers might need, rather than having it split up as you do. I
> understand the build system is capable of doing this already, but what
> we don’t have is a well defined soname and sover (currently
> parameterized in the build) for ABI compatibility purposes. As a binary
No it is now fixed:
http://dpdk.org/browse/dpdk/commit/?id=c3ce2ad3548
> distribution, this is something that we’d expect upstream to define,
> since normally we expect to achieve binary compatibility across all
> distributions at this level in the stack.
>
> So I have the following requests:
>
> So that we can get DPDK packaging into Ubuntu immediately, please could
> we agree to define (and burn) libdpdk.so.0 to be the ABI that builds
> with upstream release 2.0.0 when built with the native-linuxapp-gcc
> template options plus the following changes:
> CONFIG_RTE_MACHINE=”default”
> CONFIG_RTE_APP_TEST=n
> CONFIG_LIBRTE_VHOST=y
> CONFIG_RTE_EAL_IGB_UIO=n
> CONFIG_RTE_LIBRTE_KNI=n
> CONFIG_RTE_BUILD_COMBINE_LIBS=y
> CONFIG_RTE_BUILD_SHARED_LIB=y
I feel this configuration is the responsibility of the distribution.
What do you expect to have in the source project?
> CONFIG_RTE_LIBNAME=”dpdk”
not exist anymore
> The combined library would be placed into /usr/lib/$(ARCH)-linux-gnu/
> where it can be found without modification to the library search path.
> We want to ship it like this in Ubuntu anyway, but I’d prefer upstream
> to have defined it as such since then we’ll have a proper definition of
> the ABI that can be shared across distributions and other consumers any
> time ABI compatibility is expected.
You mean you target ABI compatibility between Linux distributions?
But other libraries could have different versions so you would be lucky
to have a binary application finding the same dependencies.
> Though not strictly part of a shared library ABI, I also propose some
> build-related upstream changes at API level below, that I’d like to also
> ship in the initial Ubuntu packaging of the header files. Clearly you
> cannot make this change in an existing release, but I propose that you
> do this for your next release so all library consumers will see a
> consistent and standard API interface. If you agree to this, then I’d
> also like to ship the Ubuntu package with patches to do the same thing
> in your current release.
Yes cleanup patches are welcome :)
> Right now, I understand that library consumers need to either: 1) use
> the upstream-provided build system (.mk files etc); or 2) otherwise make
> sure to include rte_config.h by specifying it as an extra CPPFLAGS
> parameter as the upstream API documentation does not require its
> inclusion in source files. This is problematic because somebody
> writing against multiple libraries should just expect to #include the
> API-defined headers and link simply with -l for the build to work. It is
> common to have a config.h type file generated at build time, but in this
> case I’d expect it to be conditionally included automatically as part of
> the API, for example by #include’ing it in any file the API _does_
> define that library users must include. To fix this, I propose to
> #include <dpdk/rte_config.h> in every header file that library users may
> #include according to the API.
>
> That brings me to paths. To avoid polluting the /usr/include namespace,
> I’d expect either a single /usr/include/dpdk.h, or everything inside
> /usr/include/dpdk/, or both. Then library consumers would #include
The second option seems more reasonable.
> combinations of <dpdk.h> and <dpdk/foo.h> as required, our packaging
> could install into these directories without stealing any other part of
> the shared filesystem namespace, and library users wouldn’t have to be
> concerned about paths, configuration or build systems. This would then
> match every other shared library we package. Does this sound reasonable
> to you? Is this a change you will accept?
Yes there is clearly a namespace issue in DPDK.
I would add that libethdev should be librte_ethdev.
> Thanks,
Thanks for suggesting
^ permalink raw reply [relevance 3%]
* [dpdk-dev] [PATCH] librte_cfgfile (rte_cfgfile.h): modify the macros values
@ 2015-09-02 15:53 3% Jasvinder Singh
2015-09-02 20:47 3% ` Thomas Monjalon
0 siblings, 1 reply; 200+ results
From: Jasvinder Singh @ 2015-09-02 15:53 UTC (permalink / raw)
To: dev
This patch refers to the ABI change proposed for librte_cfgfile (rte_cfgfile.h).
In order to allow for longer names and values, the new values of macros CFG_NAME_LEN and CFG_NAME_VAL are set.
Signed-off-by: Jasvinder Singh <jasvinder.singh@intel.com>
---
lib/librte_cfgfile/rte_cfgfile.h | 9 +++++++--
1 file changed, 7 insertions(+), 2 deletions(-)
diff --git a/lib/librte_cfgfile/rte_cfgfile.h b/lib/librte_cfgfile/rte_cfgfile.h
index 7c9fc91..d443782 100644
--- a/lib/librte_cfgfile/rte_cfgfile.h
+++ b/lib/librte_cfgfile/rte_cfgfile.h
@@ -47,8 +47,13 @@ extern "C" {
*
***/
-#define CFG_NAME_LEN 32
-#define CFG_VALUE_LEN 64
+#ifndef CFG_NAME_LEN
+#define CFG_NAME_LEN 64
+#endif
+
+#ifndef CFG_VALUE_LEN
+#define CFG_VALUE_LEN 256
+#endif
/** Configuration file */
struct rte_cfgfile;
--
2.1.0
^ permalink raw reply [relevance 3%]
* Re: [dpdk-dev] libdpdk upstream changes for ecosystem best practices
2015-09-02 14:18 3% ` Thomas Monjalon
@ 2015-09-02 16:01 0% ` Stephen Hemminger
2015-09-18 10:39 5% ` Robie Basak
1 sibling, 0 replies; 200+ results
From: Stephen Hemminger @ 2015-09-02 16:01 UTC (permalink / raw)
To: Thomas Monjalon; +Cc: dev
On Wed, 02 Sep 2015 16:18:33 +0200
Thomas Monjalon <thomas.monjalon@6wind.com> wrote:
> Hi,
>
> 2015-09-02 14:49, Robie Basak:
> > Hi,
> >
> > We’re looking at packaging DPDK in Ubuntu. We’d like to discuss upstream
>
> Nice
This matches what we do internally. I was heading towards making this
a real Debian package, since Debian is more free and is a superset
of Ubuntu.
>
> > changes to better integrate DPDK into Linux distributions. Here’s a
> > summary of what we need:
> >
> > 1) Define one library ABI (soname and sover) that we can use instead of the
> > split build.
> >
> > 2) Fix #includes so we don't have to include config.h
> >
> > 3) Put headers into /usr/include/dpdk instead of /usr/include
> >
> > You can see our current packaging progress at
> > https://git.launchpad.net/~ubuntu-server/dpdk/log/?h=ubuntu-wily and a
>
> Thanks for sharing
I have made basically the same decisions. Target is /usr/include/dpdk
and the library version comes from rte config. It seems more logical
to make the library shared object version equal the major version
of DPDK (ie 2) rather than having shared object and source versions
diverge.
Just updating to 2.1 packaging now, will send patches.
^ permalink raw reply [relevance 0%]
* Re: [dpdk-dev] [PATCH] librte_cfgfile (rte_cfgfile.h): modify the macros values
2015-09-02 15:53 3% [dpdk-dev] [PATCH] librte_cfgfile (rte_cfgfile.h): modify the macros values Jasvinder Singh
@ 2015-09-02 20:47 3% ` Thomas Monjalon
2015-09-03 9:53 0% ` Singh, Jasvinder
0 siblings, 1 reply; 200+ results
From: Thomas Monjalon @ 2015-09-02 20:47 UTC (permalink / raw)
To: Jasvinder Singh; +Cc: dev
2015-09-02 16:53, Jasvinder Singh:
> This patch refers to the ABI change proposed for librte_cfgfile (rte_cfgfile.h).
> In order to allow for longer names and values, the new values of macros CFG_NAME_LEN and CFG_NAME_VAL are set.
>
> Signed-off-by: Jasvinder Singh <jasvinder.singh@intel.com>
> ---
> lib/librte_cfgfile/rte_cfgfile.h | 9 +++++++--
> 1 file changed, 7 insertions(+), 2 deletions(-)
If the ABI is changed, the LIBABIVER number must be bumped.
The release notes must also be updated and the deprecation announce
must be removed.
Thanks
^ permalink raw reply [relevance 3%]
* Re: [dpdk-dev] [PATCH] librte_cfgfile (rte_cfgfile.h): modify the macros values
2015-09-02 20:47 3% ` Thomas Monjalon
@ 2015-09-03 9:53 0% ` Singh, Jasvinder
0 siblings, 0 replies; 200+ results
From: Singh, Jasvinder @ 2015-09-03 9:53 UTC (permalink / raw)
To: Thomas Monjalon; +Cc: dev
> -----Original Message-----
> From: Thomas Monjalon [mailto:thomas.monjalon@6wind.com]
> Sent: Wednesday, September 2, 2015 9:48 PM
> To: Singh, Jasvinder
> Cc: dev@dpdk.org
> Subject: Re: [dpdk-dev] [PATCH] librte_cfgfile (rte_cfgfile.h): modify the
> macros values
>
> 2015-09-02 16:53, Jasvinder Singh:
> > This patch refers to the ABI change proposed for librte_cfgfile
> (rte_cfgfile.h).
> > In order to allow for longer names and values, the new values of macros
> CFG_NAME_LEN and CFG_NAME_VAL are set.
> >
> > Signed-off-by: Jasvinder Singh <jasvinder.singh@intel.com>
> > ---
> > lib/librte_cfgfile/rte_cfgfile.h | 9 +++++++--
> > 1 file changed, 7 insertions(+), 2 deletions(-)
>
> If the ABI is changed, the LIBABIVER number must be bumped.
> The release notes must also be updated and the deprecation announce must
> be removed.
> Thanks
Thanks, Thomas. I will submit v2 patch.
^ permalink raw reply [relevance 0%]
* [dpdk-dev] [PATCH v2] librte_cfgfile (rte_cfgfile.h): modify the macros values
[not found] <1441289108-4501-1-git-send-email-jasvinder.singh@intel.com>
@ 2015-09-03 14:18 3% ` Jasvinder Singh
2015-09-03 14:33 0% ` Thomas Monjalon
2015-09-04 10:58 8% ` [dpdk-dev] [PATCH v3] librte_cfgfile(rte_cfgfile.h): " Jasvinder Singh
0 siblings, 2 replies; 200+ results
From: Jasvinder Singh @ 2015-09-03 14:18 UTC (permalink / raw)
To: dev
This patch refers to the ABI change proposed for librte_cfgfile (rte_cfgfile.h).
In order to allow for longer names and values, the new values of macros CFG_NAME_LEN and CFG_NAME_VAL are set.
Signed-off-by: Jasvinder Singh <jasvinder.singh@intel.com>
---
doc/guides/rel_notes/deprecation.rst | 4 ----
lib/librte_cfgfile/Makefile | 2 +-
lib/librte_cfgfile/rte_cfgfile.h | 9 +++++++--
3 files changed, 8 insertions(+), 7 deletions(-)
diff --git a/doc/guides/rel_notes/deprecation.rst b/doc/guides/rel_notes/deprecation.rst
index da17880..ec049e7 100644
--- a/doc/guides/rel_notes/deprecation.rst
+++ b/doc/guides/rel_notes/deprecation.rst
@@ -86,10 +86,6 @@ Deprecation Notices
be removed as well as the associated functions rte_acl_ipv4vlan_add_rules
and rte_acl_ipv4vlan_build.
-* librte_cfgfile: In order to allow for longer names and values,
- the value of macros CFG_NAME_LEN and CFG_NAME_VAL will be increased.
- Most likely, the new values will be 64 and 256, respectively.
-
* librte_port: Macros to access the packet meta-data stored within the
packet buffer will be adjusted to cover the packet mbuf structure as well,
as currently they are able to access any packet buffer location except the
diff --git a/lib/librte_cfgfile/Makefile b/lib/librte_cfgfile/Makefile
index 032c240..616aef0 100644
--- a/lib/librte_cfgfile/Makefile
+++ b/lib/librte_cfgfile/Makefile
@@ -41,7 +41,7 @@ CFLAGS += $(WERROR_FLAGS)
EXPORT_MAP := rte_cfgfile_version.map
-LIBABIVER := 1
+LIBABIVER := 2
#
# all source are stored in SRCS-y
diff --git a/lib/librte_cfgfile/rte_cfgfile.h b/lib/librte_cfgfile/rte_cfgfile.h
index 7c9fc91..d443782 100644
--- a/lib/librte_cfgfile/rte_cfgfile.h
+++ b/lib/librte_cfgfile/rte_cfgfile.h
@@ -47,8 +47,13 @@ extern "C" {
*
***/
-#define CFG_NAME_LEN 32
-#define CFG_VALUE_LEN 64
+#ifndef CFG_NAME_LEN
+#define CFG_NAME_LEN 64
+#endif
+
+#ifndef CFG_VALUE_LEN
+#define CFG_VALUE_LEN 256
+#endif
/** Configuration file */
struct rte_cfgfile;
--
2.1.0
^ permalink raw reply [relevance 3%]
* Re: [dpdk-dev] [PATCH v2] librte_cfgfile (rte_cfgfile.h): modify the macros values
2015-09-03 14:18 3% ` [dpdk-dev] [PATCH v2] " Jasvinder Singh
@ 2015-09-03 14:33 0% ` Thomas Monjalon
2015-09-04 10:58 8% ` [dpdk-dev] [PATCH v3] librte_cfgfile(rte_cfgfile.h): " Jasvinder Singh
1 sibling, 0 replies; 200+ results
From: Thomas Monjalon @ 2015-09-03 14:33 UTC (permalink / raw)
To: Jasvinder Singh; +Cc: dev
2015-09-03 15:18, Jasvinder Singh:
> This patch refers to the ABI change proposed for librte_cfgfile (rte_cfgfile.h).
> In order to allow for longer names and values, the new values of macros CFG_NAME_LEN and CFG_NAME_VAL are set.
Please wrap the commit message.
CFG_NAME_VAL should be CFG_VALUE_LEN.
The title should start with "cfgfile:".
Instead of talking about macro changes, saying "increase maximum" would give
a clearer idea of the goal of the change.
> Signed-off-by: Jasvinder Singh <jasvinder.singh@intel.com>
> ---
> doc/guides/rel_notes/deprecation.rst | 4 ----
> lib/librte_cfgfile/Makefile | 2 +-
> lib/librte_cfgfile/rte_cfgfile.h | 9 +++++++--
> 3 files changed, 8 insertions(+), 7 deletions(-)
You have forgotten to update doc/guides/rel_notes/release_2_2.rst.
^ permalink raw reply [relevance 0%]
* Re: [dpdk-dev] [PATCH v2 01/10] doc: init next release notes
2015-09-02 13:16 5% ` [dpdk-dev] [PATCH v2 01/10] doc: init next release notes Thomas Monjalon
@ 2015-09-03 15:44 3% ` Mcnamara, John
2015-09-04 7:50 0% ` Thomas Monjalon
0 siblings, 1 reply; 200+ results
From: Mcnamara, John @ 2015-09-03 15:44 UTC (permalink / raw)
To: Thomas Monjalon, dev
> -----Original Message-----
> From: dev [mailto:dev-bounces@dpdk.org] On Behalf Of Thomas Monjalon
> Sent: Wednesday, September 2, 2015 2:17 PM
> To: dev@dpdk.org
> Subject: [dpdk-dev] [PATCH v2 01/10] doc: init next release notes
P.S. Perhaps we should announce, or maybe this will do as an announcement, that from this release forward the Release Notes should be updated as part of a patchset that contains one of the following:
* New Features
* Resolved Issues (in relation to features existing in the previous releases)
* Known Issues
* API Changes
* ABI Changes
* Shared Library Versions
John.
--
^ permalink raw reply [relevance 3%]
* Re: [dpdk-dev] [PATCH v2 01/10] doc: init next release notes
2015-09-03 15:44 3% ` Mcnamara, John
@ 2015-09-04 7:50 0% ` Thomas Monjalon
0 siblings, 0 replies; 200+ results
From: Thomas Monjalon @ 2015-09-04 7:50 UTC (permalink / raw)
To: Mcnamara, John; +Cc: dev
2015-09-03 15:44, Mcnamara, John:
> P.S. Perhaps we should announce, or maybe this will do as an announcement, that from this release forward the Release Notes should be updated as part of a patchset that contains one of the following:
>
> * New Features
> * Resolved Issues (in relation to features existing in the previous releases)
> * Known Issues
> * API Changes
> * ABI Changes
> * Shared Library Versions
Maybe we should update doc/guides/contributing/documentation.rst to clearly state it.
^ permalink raw reply [relevance 0%]
* Re: [dpdk-dev] [PATCH v2 00/10] clean deprecated code
2015-09-02 13:16 3% ` [dpdk-dev] [PATCH v2 00/10] clean deprecated code Thomas Monjalon
` (9 preceding siblings ...)
2015-09-02 13:16 5% ` [dpdk-dev] [PATCH v2 10/10] ring: " Thomas Monjalon
@ 2015-09-04 7:50 0% ` Thomas Monjalon
10 siblings, 0 replies; 200+ results
From: Thomas Monjalon @ 2015-09-04 7:50 UTC (permalink / raw)
To: dev
2015-09-02 15:16, Thomas Monjalon:
> Before starting a new integration cycle (2.2.0-rc0),
> the deprecated code is removed.
>
> The hash library is not cleaned in this patchset and would be
> better done by its maintainers. Bruce, Pablo, please check the
> file doc/guides/rel_notes/deprecation.rst.
>
> Changes in v2:
> - increment KNI and ring PMD versions
> - list library versions in release notes
> - list API/ABI changes in release notes
>
> Stephen Hemminger (2):
> kni: remove deprecated functions
> ring: remove deprecated functions
>
> Thomas Monjalon (8):
> doc: init next release notes
> ethdev: remove Rx interrupt switch
> mbuf: remove packet type from offload flags
> ethdev: remove SCTP flow entries switch
> eal: remove deprecated function
> mem: remove dummy malloc library
> lpm: remove deprecated field
> acl: remove old API
Applied
^ permalink raw reply [relevance 0%]
* [dpdk-dev] [PATCH 3/3] hash: remove deprecated functions and macros
@ 2015-09-04 9:05 4% ` Pablo de Lara
0 siblings, 0 replies; 200+ results
From: Pablo de Lara @ 2015-09-04 9:05 UTC (permalink / raw)
To: dev
The function rte_jhash2() was renamed rte_jhash_32b and
macros RTE_HASH_KEY_LENGTH_MAX and RTE_HASH_BUCKET_ENTRIES_MAX
were tagged as deprecated, so they can be removed in 2.2.
Signed-off-by: Pablo de Lara <pablo.de.lara.guarch@intel.com>
---
doc/guides/rel_notes/deprecation.rst | 5 -----
doc/guides/rel_notes/release_2_2.rst | 3 +++
lib/librte_hash/rte_hash.h | 6 ------
lib/librte_hash/rte_jhash.h | 15 ++-------------
4 files changed, 5 insertions(+), 24 deletions(-)
diff --git a/doc/guides/rel_notes/deprecation.rst b/doc/guides/rel_notes/deprecation.rst
index 5f6079b..fffad80 100644
--- a/doc/guides/rel_notes/deprecation.rst
+++ b/doc/guides/rel_notes/deprecation.rst
@@ -13,11 +13,6 @@ Deprecation Notices
There is no backward compatibility planned from release 2.2.
All binaries will need to be rebuilt from release 2.2.
-* The Macros RTE_HASH_BUCKET_ENTRIES_MAX and RTE_HASH_KEY_LENGTH_MAX are
- deprecated and will be removed with version 2.2.
-
-* The function rte_jhash2 is deprecated and should be removed.
-
* The following fields have been deprecated in rte_eth_stats:
imissed, ibadcrc, ibadlen, imcasts, fdirmatch, fdirmiss,
tx_pause_xon, rx_pause_xon, tx_pause_xoff, rx_pause_xoff
diff --git a/doc/guides/rel_notes/release_2_2.rst b/doc/guides/rel_notes/release_2_2.rst
index abe57b4..aa44862 100644
--- a/doc/guides/rel_notes/release_2_2.rst
+++ b/doc/guides/rel_notes/release_2_2.rst
@@ -27,6 +27,9 @@ API Changes
* The deprecated ring PMD functions are removed:
rte_eth_ring_pair_create() and rte_eth_ring_pair_attach().
+* The function rte_jhash2() is removed.
+ It was replaced by rte_jhash_32b().
+
ABI Changes
-----------
diff --git a/lib/librte_hash/rte_hash.h b/lib/librte_hash/rte_hash.h
index 1cddc07..175c0bb 100644
--- a/lib/librte_hash/rte_hash.h
+++ b/lib/librte_hash/rte_hash.h
@@ -49,12 +49,6 @@ extern "C" {
/** Maximum size of hash table that can be created. */
#define RTE_HASH_ENTRIES_MAX (1 << 30)
-/** @deprecated Maximum bucket size that can be created. */
-#define RTE_HASH_BUCKET_ENTRIES_MAX 4
-
-/** @deprecated Maximum length of key that can be used. */
-#define RTE_HASH_KEY_LENGTH_MAX 64
-
/** Maximum number of characters in hash name.*/
#define RTE_HASH_NAMESIZE 32
diff --git a/lib/librte_hash/rte_jhash.h b/lib/librte_hash/rte_jhash.h
index f9a8266..457f225 100644
--- a/lib/librte_hash/rte_jhash.h
+++ b/lib/librte_hash/rte_jhash.h
@@ -267,10 +267,10 @@ rte_jhash_2hashes(const void *key, uint32_t length, uint32_t *pc, uint32_t *pb)
}
/**
- * Same as rte_jhash2, but takes two seeds and return two uint32_ts.
+ * Same as rte_jhash_32b, but takes two seeds and return two uint32_ts.
* pc and pb must be non-null, and *pc and *pb must both be initialized
* with seeds. If you pass in (*pb)=0, the output (*pc) will be
- * the same as the return value from rte_jhash2.
+ * the same as the return value from rte_jhash_32b.
*
* @param k
* Key to calculate hash of.
@@ -335,17 +335,6 @@ rte_jhash_32b(const uint32_t *k, uint32_t length, uint32_t initval)
}
static inline uint32_t
-__attribute__ ((deprecated))
-rte_jhash2(const uint32_t *k, uint32_t length, uint32_t initval)
-{
- uint32_t initval2 = 0;
-
- rte_jhash_32b_2hashes(k, length, &initval, &initval2);
-
- return initval;
-}
-
-static inline uint32_t
__rte_jhash_3words(uint32_t a, uint32_t b, uint32_t c, uint32_t initval)
{
a += RTE_JHASH_GOLDEN_RATIO + initval;
--
2.4.2
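For applications still calling the removed function, the migration is
mechanical, since rte_jhash_32b() takes exactly the same arguments
(variable names below are illustrative):
/* before (removed): hash = rte_jhash2(key_words, num_words, init_val); */
uint32_t hash = rte_jhash_32b(key_words, num_words, init_val);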
^ permalink raw reply [relevance 4%]
* [dpdk-dev] [PATCH v3] librte_cfgfile(rte_cfgfile.h): modify the macros values
2015-09-03 14:18 3% ` [dpdk-dev] [PATCH v2] " Jasvinder Singh
2015-09-03 14:33 0% ` Thomas Monjalon
@ 2015-09-04 10:58 8% ` Jasvinder Singh
2015-09-07 11:23 0% ` Dumitrescu, Cristian
2015-10-22 14:03 4% ` [dpdk-dev] [PATCH v4 0/2] cfgfile: " Jasvinder Singh
1 sibling, 2 replies; 200+ results
From: Jasvinder Singh @ 2015-09-04 10:58 UTC (permalink / raw)
To: dev
This patch refers to the ABI change proposed for librte_cfgfile
(rte_cfgfile.h). In order to allow for longer names and values,
the new values of macros CFG_NAME_LEN and CFG_VALUE_LEN are set.
Signed-off-by: Jasvinder Singh <jasvinder.singh@intel.com>
---
doc/guides/rel_notes/deprecation.rst | 4 ----
doc/guides/rel_notes/release_2_2.rst | 7 ++++++-
lib/librte_cfgfile/Makefile | 2 +-
lib/librte_cfgfile/rte_cfgfile.h | 9 +++++++--
4 files changed, 14 insertions(+), 8 deletions(-)
diff --git a/doc/guides/rel_notes/deprecation.rst b/doc/guides/rel_notes/deprecation.rst
index 5f6079b..2fbdee2 100644
--- a/doc/guides/rel_notes/deprecation.rst
+++ b/doc/guides/rel_notes/deprecation.rst
@@ -53,10 +53,6 @@ Deprecation Notices
* The scheduler statistics structure will change to allow keeping track of
RED actions.
-* librte_cfgfile: In order to allow for longer names and values,
- the value of macros CFG_NAME_LEN and CFG_NAME_VAL will be increased.
- Most likely, the new values will be 64 and 256, respectively.
-
* librte_port: Macros to access the packet meta-data stored within the
packet buffer will be adjusted to cover the packet mbuf structure as well,
as currently they are able to access any packet buffer location except the
diff --git a/doc/guides/rel_notes/release_2_2.rst b/doc/guides/rel_notes/release_2_2.rst
index abe57b4..ff64da8 100644
--- a/doc/guides/rel_notes/release_2_2.rst
+++ b/doc/guides/rel_notes/release_2_2.rst
@@ -44,6 +44,11 @@ ABI Changes
* The LPM structure is changed. The deprecated field mem_location is removed.
+* librte_cfgfile: In order to allow for longer names and values,
+ the value of macros CFG_NAME_LEN and CFG_NAME_VAL is increased,
+ the new values are 64 and 256, respectively
+
+
Shared Library Versions
-----------------------
@@ -54,7 +59,7 @@ The libraries prepended with a plus sign were incremented in this version.
+ libethdev.so.2
+ librte_acl.so.2
- librte_cfgfile.so.1
+ + librte_cfgfile.so.2
librte_cmdline.so.1
librte_distributor.so.1
+ librte_eal.so.2
diff --git a/lib/librte_cfgfile/Makefile b/lib/librte_cfgfile/Makefile
index 032c240..616aef0 100644
--- a/lib/librte_cfgfile/Makefile
+++ b/lib/librte_cfgfile/Makefile
@@ -41,7 +41,7 @@ CFLAGS += $(WERROR_FLAGS)
EXPORT_MAP := rte_cfgfile_version.map
-LIBABIVER := 1
+LIBABIVER := 2
#
# all source are stored in SRCS-y
diff --git a/lib/librte_cfgfile/rte_cfgfile.h b/lib/librte_cfgfile/rte_cfgfile.h
index 7c9fc91..d443782 100644
--- a/lib/librte_cfgfile/rte_cfgfile.h
+++ b/lib/librte_cfgfile/rte_cfgfile.h
@@ -47,8 +47,13 @@ extern "C" {
*
***/
-#define CFG_NAME_LEN 32
-#define CFG_VALUE_LEN 64
+#ifndef CFG_NAME_LEN
+#define CFG_NAME_LEN 64
+#endif
+
+#ifndef CFG_VALUE_LEN
+#define CFG_VALUE_LEN 256
+#endif
/** Configuration file */
struct rte_cfgfile;
--
2.1.0
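As a usage illustration only (not part of the patch; the file and section
names below are made up, and <rte_cfgfile.h> is assumed to be included),
existing callers keep the same code and simply gain room for longer entries:
/* Sketch, inside an application's init path: */
struct rte_cfgfile *cfg = rte_cfgfile_load("app.cfg", 0);
struct rte_cfgfile_entry entries[8];
int n = rte_cfgfile_section_entries(cfg, "PIPELINE0", entries, 8);
/* entries[i].name now holds up to CFG_NAME_LEN - 1 = 63 characters,
 * entries[i].value up to CFG_VALUE_LEN - 1 = 255 characters. */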
^ permalink raw reply [relevance 8%]
* Re: [dpdk-dev] [PATCH 1/1] ip_frag: fix creating ipv6 fragment extension header
@ 2015-09-07 11:21 4% ` Dumitrescu, Cristian
2015-09-07 11:23 0% ` Ananyev, Konstantin
0 siblings, 1 reply; 200+ results
From: Dumitrescu, Cristian @ 2015-09-07 11:21 UTC (permalink / raw)
To: Ananyev, Konstantin, Azarewicz, PiotrX T, dev
> -----Original Message-----
> From: dev [mailto:dev-bounces@dpdk.org] On Behalf Of Ananyev,
> Konstantin
> Sent: Friday, September 4, 2015 6:51 PM
> To: Azarewicz, PiotrX T; dev@dpdk.org
> Subject: Re: [dpdk-dev] [PATCH 1/1] ip_frag: fix creating ipv6 fragment
> extension header
>
> Hi Piotr,
>
> > -----Original Message-----
> > From: dev [mailto:dev-bounces@dpdk.org] On Behalf Of Piotr
> > Sent: Wednesday, September 02, 2015 3:13 PM
> > To: dev@dpdk.org
> > Subject: [dpdk-dev] [PATCH 1/1] ip_frag: fix creating ipv6 fragment
> extension header
> >
> > From: Piotr Azarewicz <piotrx.t.azarewicz@intel.com>
> >
> > Previous implementation won't work on every environment. The order of
> > allocation of bit-fields within a unit (high-order to low-order or
> > low-order to high-order) is implementation-defined.
> > Solution: used bytes instead of bit fields.
>
> Seems like right thing to do to me.
> Though I think we also should replace:
> union {
> struct {
> uint16_t frag_offset:13; /**< Offset from the start of the packet
> */
> uint16_t reserved2:2; /**< Reserved */
> uint16_t more_frags:1;
> /**< 1 if more fragments left, 0 if last fragment */
> };
> uint16_t frag_data;
> /**< union of all fragmentation data */
> };
>
> With just:
> uint16_t frag_data;
> and probably provide macros to read/set fragment_offset and more_flags
> values.
> Otherwise people might keep using the wrong layout.
> Konstantin
>
I agree with your proposal, but wouldn't this be an ABI change? To avoid an ABI change, we should probably leave the union?
> >
> > Signed-off-by: Piotr Azarewicz <piotrx.t.azarewicz@intel.com>
> > ---
> > lib/librte_ip_frag/rte_ipv6_fragmentation.c | 6 ++----
> > 1 file changed, 2 insertions(+), 4 deletions(-)
> >
> > diff --git a/lib/librte_ip_frag/rte_ipv6_fragmentation.c
> b/lib/librte_ip_frag/rte_ipv6_fragmentation.c
> > index 0e32aa8..7342421 100644
> > --- a/lib/librte_ip_frag/rte_ipv6_fragmentation.c
> > +++ b/lib/librte_ip_frag/rte_ipv6_fragmentation.c
> > @@ -65,10 +65,8 @@ __fill_ipv6hdr_frag(struct ipv6_hdr *dst,
> >
> > fh = (struct ipv6_extension_fragment *) ++dst;
> > fh->next_header = src->proto;
> > - fh->reserved1 = 0;
> > - fh->frag_offset = rte_cpu_to_be_16(fofs);
> > - fh->reserved2 = 0;
> > - fh->more_frags = rte_cpu_to_be_16(mf);
> > + fh->reserved1 = 0;
> > + fh->frag_data = rte_cpu_to_be_16((fofs & ~IPV6_HDR_FO_MASK) |
> mf);
> > fh->id = 0;
> > }
> >
> > --
> > 1.7.9.5
^ permalink raw reply [relevance 4%]
* Re: [dpdk-dev] [PATCH 1/1] ip_frag: fix creating ipv6 fragment extension header
2015-09-07 11:21 4% ` Dumitrescu, Cristian
@ 2015-09-07 11:23 0% ` Ananyev, Konstantin
2015-09-07 11:24 0% ` Dumitrescu, Cristian
0 siblings, 1 reply; 200+ results
From: Ananyev, Konstantin @ 2015-09-07 11:23 UTC (permalink / raw)
To: Dumitrescu, Cristian, Azarewicz, PiotrX T, dev
> -----Original Message-----
> From: Dumitrescu, Cristian
> Sent: Monday, September 07, 2015 12:22 PM
> To: Ananyev, Konstantin; Azarewicz, PiotrX T; dev@dpdk.org
> Subject: RE: [dpdk-dev] [PATCH 1/1] ip_frag: fix creating ipv6 fragment extension header
>
>
>
> > -----Original Message-----
> > From: dev [mailto:dev-bounces@dpdk.org] On Behalf Of Ananyev,
> > Konstantin
> > Sent: Friday, September 4, 2015 6:51 PM
> > To: Azarewicz, PiotrX T; dev@dpdk.org
> > Subject: Re: [dpdk-dev] [PATCH 1/1] ip_frag: fix creating ipv6 fragment
> > extension header
> >
> > Hi Piotr,
> >
> > > -----Original Message-----
> > > From: dev [mailto:dev-bounces@dpdk.org] On Behalf Of Piotr
> > > Sent: Wednesday, September 02, 2015 3:13 PM
> > > To: dev@dpdk.org
> > > Subject: [dpdk-dev] [PATCH 1/1] ip_frag: fix creating ipv6 fragment
> > extension header
> > >
> > > From: Piotr Azarewicz <piotrx.t.azarewicz@intel.com>
> > >
> > > Previous implementation won't work on every environment. The order of
> > > allocation of bit-fields within a unit (high-order to low-order or
> > > low-order to high-order) is implementation-defined.
> > > Solution: used bytes instead of bit fields.
> >
> > Seems like right thing to do to me.
> > Though I think we also should replace:
> > union {
> > struct {
> > uint16_t frag_offset:13; /**< Offset from the start of the packet
> > */
> > uint16_t reserved2:2; /**< Reserved */
> > uint16_t more_frags:1;
> > /**< 1 if more fragments left, 0 if last fragment */
> > };
> > uint16_t frag_data;
> > /**< union of all fragmentation data */
> > };
> >
> > With just:
> > uint16_t frag_data;
> > and probably provide macros to read/set fragment_offset and more_flags
> > values.
> > Otherwise people might keep using the wrong layout.
> > Konstantin
> >
>
> I agree with your proposal, but wouldn't this be an ABI change? To avoid an ABI change, we should probably leave the union?
No I don't think it would - the size of the field will remain the same: uint16_t.
Also, if the bit-field is invalid, why keep it?
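For illustration, the helpers I have in mind could look roughly like this
(the names are only placeholders, not an existing API):
/* frag_data is big-endian on the wire: upper 13 bits = fragment offset
 * in 8-byte units, bit 0 = more-fragments flag (needs rte_byteorder.h). */
#define IPV6_FRAG_OFFSET_UNITS(fd) (rte_be_to_cpu_16(fd) >> 3)
#define IPV6_FRAG_MORE_FRAGS(fd)   (rte_be_to_cpu_16(fd) & 0x1)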
Konstantin
>
> > >
> > > Signed-off-by: Piotr Azarewicz <piotrx.t.azarewicz@intel.com>
> > > ---
> > > lib/librte_ip_frag/rte_ipv6_fragmentation.c | 6 ++----
> > > 1 file changed, 2 insertions(+), 4 deletions(-)
> > >
> > > diff --git a/lib/librte_ip_frag/rte_ipv6_fragmentation.c
> > b/lib/librte_ip_frag/rte_ipv6_fragmentation.c
> > > index 0e32aa8..7342421 100644
> > > --- a/lib/librte_ip_frag/rte_ipv6_fragmentation.c
> > > +++ b/lib/librte_ip_frag/rte_ipv6_fragmentation.c
> > > @@ -65,10 +65,8 @@ __fill_ipv6hdr_frag(struct ipv6_hdr *dst,
> > >
> > > fh = (struct ipv6_extension_fragment *) ++dst;
> > > fh->next_header = src->proto;
> > > - fh->reserved1 = 0;
> > > - fh->frag_offset = rte_cpu_to_be_16(fofs);
> > > - fh->reserved2 = 0;
> > > - fh->more_frags = rte_cpu_to_be_16(mf);
> > > + fh->reserved1 = 0;
> > > + fh->frag_data = rte_cpu_to_be_16((fofs & ~IPV6_HDR_FO_MASK) |
> > mf);
> > > fh->id = 0;
> > > }
> > >
> > > --
> > > 1.7.9.5
^ permalink raw reply [relevance 0%]
* Re: [dpdk-dev] [PATCH v3] librte_cfgfile(rte_cfgfile.h): modify the macros values
2015-09-04 10:58 8% ` [dpdk-dev] [PATCH v3] librte_cfgfile(rte_cfgfile.h): " Jasvinder Singh
@ 2015-09-07 11:23 0% ` Dumitrescu, Cristian
2015-10-22 14:03 4% ` [dpdk-dev] [PATCH v4 0/2] cfgfile: " Jasvinder Singh
1 sibling, 0 replies; 200+ results
From: Dumitrescu, Cristian @ 2015-09-07 11:23 UTC (permalink / raw)
To: Singh, Jasvinder, dev
> -----Original Message-----
> From: dev [mailto:dev-bounces@dpdk.org] On Behalf Of Jasvinder Singh
> Sent: Friday, September 4, 2015 1:59 PM
> To: dev@dpdk.org
> Subject: [dpdk-dev] [PATCH v3] librte_cfgfile(rte_cfgfile.h): modify the
> macros values
>
> This patch refers to the ABI change proposed for librte_cfgfile
> (rte_cfgfile.h). In order to allow for longer names and values,
> the new values of macros CFG_NAME_LEN and CFG_VALUE_LEN are set.
>
> Signed-off-by: Jasvinder Singh <jasvinder.singh@intel.com>
> ---
Acked-by: Cristian Dumitrescu <cristian.dumitrescu@intel.com>
^ permalink raw reply [relevance 0%]
* Re: [dpdk-dev] [PATCH 1/1] ip_frag: fix creating ipv6 fragment extension header
2015-09-07 11:23 0% ` Ananyev, Konstantin
@ 2015-09-07 11:24 0% ` Dumitrescu, Cristian
0 siblings, 0 replies; 200+ results
From: Dumitrescu, Cristian @ 2015-09-07 11:24 UTC (permalink / raw)
To: Ananyev, Konstantin, Azarewicz, PiotrX T, dev
> -----Original Message-----
> From: Ananyev, Konstantin
> Sent: Monday, September 7, 2015 2:23 PM
> To: Dumitrescu, Cristian; Azarewicz, PiotrX T; dev@dpdk.org
> Subject: RE: [dpdk-dev] [PATCH 1/1] ip_frag: fix creating ipv6 fragment
> extension header
>
>
>
> > -----Original Message-----
> > From: Dumitrescu, Cristian
> > Sent: Monday, September 07, 2015 12:22 PM
> > To: Ananyev, Konstantin; Azarewicz, PiotrX T; dev@dpdk.org
> > Subject: RE: [dpdk-dev] [PATCH 1/1] ip_frag: fix creating ipv6 fragment
> extension header
> >
> >
> >
> > > -----Original Message-----
> > > From: dev [mailto:dev-bounces@dpdk.org] On Behalf Of Ananyev,
> > > Konstantin
> > > Sent: Friday, September 4, 2015 6:51 PM
> > > To: Azarewicz, PiotrX T; dev@dpdk.org
> > > Subject: Re: [dpdk-dev] [PATCH 1/1] ip_frag: fix creating ipv6 fragment
> > > extension header
> > >
> > > Hi Piotr,
> > >
> > > > -----Original Message-----
> > > > From: dev [mailto:dev-bounces@dpdk.org] On Behalf Of Piotr
> > > > Sent: Wednesday, September 02, 2015 3:13 PM
> > > > To: dev@dpdk.org
> > > > Subject: [dpdk-dev] [PATCH 1/1] ip_frag: fix creating ipv6 fragment
> > > extension header
> > > >
> > > > From: Piotr Azarewicz <piotrx.t.azarewicz@intel.com>
> > > >
> > > > Previous implementation won't work on every environment. The order
> of
> > > > allocation of bit-fields within a unit (high-order to low-order or
> > > > low-order to high-order) is implementation-defined.
> > > > Solution: used bytes instead of bit fields.
> > >
> > > Seems like right thing to do to me.
> > > Though I think we also should replace:
> > > union {
> > > struct {
> > > uint16_t frag_offset:13; /**< Offset from the start of the
> packet
> > > */
> > > uint16_t reserved2:2; /**< Reserved */
> > > uint16_t more_frags:1;
> > > /**< 1 if more fragments left, 0 if last fragment */
> > > };
> > > uint16_t frag_data;
> > > /**< union of all fragmentation data */
> > > };
> > >
> > > With just:
> > > uint16_t frag_data;
> > > and probably provide macros to read/set fragment_offset and
> more_flags
> > > values.
> > > Otherwise people might keep using the wrong layout.
> > > Konstantin
> > >
> >
> > I agree with your proposal, but wouldn't this be an ABI change? To avoid an
> ABI change, we should probably leave the union?
>
>
> No I don't think it would - the size of the field will remain the same: uint16_t.
> Also if the bit-field is invalid what for to keep it?
> Konstantin
>
Excellent then.
> >
> > > >
> > > > Signed-off-by: Piotr Azarewicz <piotrx.t.azarewicz@intel.com>
> > > > ---
> > > > lib/librte_ip_frag/rte_ipv6_fragmentation.c | 6 ++----
> > > > 1 file changed, 2 insertions(+), 4 deletions(-)
> > > >
> > > > diff --git a/lib/librte_ip_frag/rte_ipv6_fragmentation.c
> > > b/lib/librte_ip_frag/rte_ipv6_fragmentation.c
> > > > index 0e32aa8..7342421 100644
> > > > --- a/lib/librte_ip_frag/rte_ipv6_fragmentation.c
> > > > +++ b/lib/librte_ip_frag/rte_ipv6_fragmentation.c
> > > > @@ -65,10 +65,8 @@ __fill_ipv6hdr_frag(struct ipv6_hdr *dst,
> > > >
> > > > fh = (struct ipv6_extension_fragment *) ++dst;
> > > > fh->next_header = src->proto;
> > > > - fh->reserved1 = 0;
> > > > - fh->frag_offset = rte_cpu_to_be_16(fofs);
> > > > - fh->reserved2 = 0;
> > > > - fh->more_frags = rte_cpu_to_be_16(mf);
> > > > + fh->reserved1 = 0;
> > > > + fh->frag_data = rte_cpu_to_be_16((fofs & ~IPV6_HDR_FO_MASK) |
> > > mf);
> > > > fh->id = 0;
> > > > }
> > > >
> > > > --
> > > > 1.7.9.5
^ permalink raw reply [relevance 0%]
* [dpdk-dev] [PATCH 0/4] librte_table: add name parameter to lpm table
@ 2015-09-08 10:11 3% Jasvinder Singh
2015-09-08 10:11 3% ` [dpdk-dev] [PATCH 1/4] librte_table: modify LPM table parameter structure Jasvinder Singh
` (2 more replies)
0 siblings, 3 replies; 200+ results
From: Jasvinder Singh @ 2015-09-08 10:11 UTC (permalink / raw)
To: dev
This patchset links to the ABI change announced for librte_table. For the LPM
table, a name parameter has been included in the LPM table parameter structure.
It will eventually allow applications to create more than one instance of an
LPM table, if required.
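For illustration only (the values below are arbitrary), the new field is used
as follows; a second table is obtained simply by passing a different name:
struct rte_table_lpm_params lpm_params_0 = {
	.name = "lpm_table_0",	/* new: per-instance name */
	.n_rules = 1 << 16,
	/* remaining fields unchanged from existing applications */
};
struct rte_table_lpm_params lpm_params_1 = {
	.name = "lpm_table_1",	/* distinct name => independent instance */
	.n_rules = 1 << 16,
};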
Jasvinder Singh (4):
librte_table: add name parameter to LPM table
app/test: modify table and pipeline test
ip_pipeline: modify lpm table for routing pipeline
librte_table: modify release notes and deprecation notice
app/test-pipeline/pipeline_lpm.c | 1 +
app/test-pipeline/pipeline_lpm_ipv6.c | 1 +
app/test/test_table_combined.c | 2 +
app/test/test_table_tables.c | 102 ++++++++++++---------
doc/guides/rel_notes/deprecation.rst | 3 -
doc/guides/rel_notes/release_2_2.rst | 4 +-
.../ip_pipeline/pipeline/pipeline_routing_be.c | 1 +
lib/librte_table/Makefile | 2 +-
lib/librte_table/rte_table_lpm.c | 8 +-
lib/librte_table/rte_table_lpm.h | 3 +
lib/librte_table/rte_table_lpm_ipv6.c | 8 +-
lib/librte_table/rte_table_lpm_ipv6.h | 3 +
12 files changed, 86 insertions(+), 52 deletions(-)
--
2.1.0
^ permalink raw reply [relevance 3%]
* [dpdk-dev] [PATCH 1/4] librte_table: modify LPM table parameter structure
2015-09-08 10:11 3% [dpdk-dev] [PATCH 0/4] librte_table: add name parameter to lpm table Jasvinder Singh
@ 2015-09-08 10:11 3% ` Jasvinder Singh
2015-09-08 10:11 5% ` [dpdk-dev] [PATCH 4/4] librte_table: modify release notes and deprecation notice Jasvinder Singh
2015-09-08 12:57 0% ` [dpdk-dev] [PATCH 0/4] librte_table: add name parameter to lpm table Dumitrescu, Cristian
2 siblings, 0 replies; 200+ results
From: Jasvinder Singh @ 2015-09-08 10:11 UTC (permalink / raw)
To: dev
This patch relates to the ABI change proposed for librte_table
(lpm table). A new parameter to hold the table name has
been added to the LPM table parameter structures
rte_table_lpm_params and rte_table_lpm_ipv6_params.
Signed-off-by: Jasvinder Singh <jasvinder.singh@intel.com>
---
lib/librte_table/rte_table_lpm.c | 8 ++++++--
lib/librte_table/rte_table_lpm.h | 3 +++
lib/librte_table/rte_table_lpm_ipv6.c | 8 ++++++--
lib/librte_table/rte_table_lpm_ipv6.h | 3 +++
4 files changed, 18 insertions(+), 4 deletions(-)
diff --git a/lib/librte_table/rte_table_lpm.c b/lib/librte_table/rte_table_lpm.c
index b218d64..849d899 100644
--- a/lib/librte_table/rte_table_lpm.c
+++ b/lib/librte_table/rte_table_lpm.c
@@ -103,7 +103,11 @@ rte_table_lpm_create(void *params, int socket_id, uint32_t entry_size)
__func__);
return NULL;
}
-
+ if (p->name == NULL) {
+ RTE_LOG(ERR, TABLE, "%s: Table name is NULL\n",
+ __func__);
+ return NULL;
+ }
entry_size = RTE_ALIGN(entry_size, sizeof(uint64_t));
/* Memory allocation */
@@ -119,7 +123,7 @@ rte_table_lpm_create(void *params, int socket_id, uint32_t entry_size)
}
/* LPM low-level table creation */
- lpm->lpm = rte_lpm_create("LPM", socket_id, p->n_rules, 0);
+ lpm->lpm = rte_lpm_create(p->name, socket_id, p->n_rules, 0);
if (lpm->lpm == NULL) {
rte_free(lpm);
RTE_LOG(ERR, TABLE, "Unable to create low-level LPM table\n");
diff --git a/lib/librte_table/rte_table_lpm.h b/lib/librte_table/rte_table_lpm.h
index c08c958..06e8410 100644
--- a/lib/librte_table/rte_table_lpm.h
+++ b/lib/librte_table/rte_table_lpm.h
@@ -77,6 +77,9 @@ extern "C" {
/** LPM table parameters */
struct rte_table_lpm_params {
+ /** Table name */
+ const char *name;
+
/** Maximum number of LPM rules (i.e. IP routes) */
uint32_t n_rules;
diff --git a/lib/librte_table/rte_table_lpm_ipv6.c b/lib/librte_table/rte_table_lpm_ipv6.c
index ff4a9c2..ce91db2 100644
--- a/lib/librte_table/rte_table_lpm_ipv6.c
+++ b/lib/librte_table/rte_table_lpm_ipv6.c
@@ -109,13 +109,17 @@ rte_table_lpm_ipv6_create(void *params, int socket_id, uint32_t entry_size)
__func__);
return NULL;
}
-
+ if (p->name == NULL) {
+ RTE_LOG(ERR, TABLE, "%s: Table name is NULL\n",
+ __func__);
+ return NULL;
+ }
entry_size = RTE_ALIGN(entry_size, sizeof(uint64_t));
/* Memory allocation */
nht_size = RTE_TABLE_LPM_MAX_NEXT_HOPS * entry_size;
total_size = sizeof(struct rte_table_lpm_ipv6) + nht_size;
- lpm = rte_zmalloc_socket("TABLE", total_size, RTE_CACHE_LINE_SIZE,
+ lpm = rte_zmalloc_socket(p->name, total_size, RTE_CACHE_LINE_SIZE,
socket_id);
if (lpm == NULL) {
RTE_LOG(ERR, TABLE,
diff --git a/lib/librte_table/rte_table_lpm_ipv6.h b/lib/librte_table/rte_table_lpm_ipv6.h
index 91fb0d8..43aea39 100644
--- a/lib/librte_table/rte_table_lpm_ipv6.h
+++ b/lib/librte_table/rte_table_lpm_ipv6.h
@@ -79,6 +79,9 @@ extern "C" {
/** LPM table parameters */
struct rte_table_lpm_ipv6_params {
+ /** Table name */
+ const char *name;
+
/** Maximum number of LPM rules (i.e. IP routes) */
uint32_t n_rules;
--
2.1.0
^ permalink raw reply [relevance 3%]
* [dpdk-dev] [PATCH 4/4] librte_table: modify release notes and deprecation notice
2015-09-08 10:11 3% [dpdk-dev] [PATCH 0/4] librte_table: add name parameter to lpm table Jasvinder Singh
2015-09-08 10:11 3% ` [dpdk-dev] [PATCH 1/4] librte_table: modify LPM table parameter structure Jasvinder Singh
@ 2015-09-08 10:11 5% ` Jasvinder Singh
2015-09-08 12:57 0% ` [dpdk-dev] [PATCH 0/4] librte_table: add name parameter to lpm table Dumitrescu, Cristian
2 siblings, 0 replies; 200+ results
From: Jasvinder Singh @ 2015-09-08 10:11 UTC (permalink / raw)
To: dev
The LIBABIVER number is incremented. The release notes
are updated and the deprecation notice is removed.
Signed-off-by: Jasvinder Singh <jasvinder.singh@intel.com>
---
doc/guides/rel_notes/deprecation.rst | 3 ---
doc/guides/rel_notes/release_2_2.rst | 4 +++-
lib/librte_table/Makefile | 2 +-
3 files changed, 4 insertions(+), 5 deletions(-)
diff --git a/doc/guides/rel_notes/deprecation.rst b/doc/guides/rel_notes/deprecation.rst
index 5f6079b..ce6147e 100644
--- a/doc/guides/rel_notes/deprecation.rst
+++ b/doc/guides/rel_notes/deprecation.rst
@@ -62,9 +62,6 @@ Deprecation Notices
as currently they are able to access any packet buffer location except the
packet mbuf structure.
-* librte_table LPM: A new parameter to hold the table name will be added to
- the LPM table parameter structure.
-
* librte_table: New functions for table entry bulk add/delete will be added
to the table operations structure.
diff --git a/doc/guides/rel_notes/release_2_2.rst b/doc/guides/rel_notes/release_2_2.rst
index abe57b4..75fc1ab 100644
--- a/doc/guides/rel_notes/release_2_2.rst
+++ b/doc/guides/rel_notes/release_2_2.rst
@@ -44,6 +44,8 @@ ABI Changes
* The LPM structure is changed. The deprecated field mem_location is removed.
+* librte_table LPM: A new parameter to hold the table name will be added to
+ the LPM table parameter structure.
Shared Library Versions
-----------------------
@@ -76,6 +78,6 @@ The libraries prepended with a plus sign were incremented in this version.
librte_reorder.so.1
librte_ring.so.1
librte_sched.so.1
- librte_table.so.1
+ + librte_table.so.2
librte_timer.so.1
librte_vhost.so.1
diff --git a/lib/librte_table/Makefile b/lib/librte_table/Makefile
index c5b3eaf..7f02af3 100644
--- a/lib/librte_table/Makefile
+++ b/lib/librte_table/Makefile
@@ -41,7 +41,7 @@ CFLAGS += $(WERROR_FLAGS)
EXPORT_MAP := rte_table_version.map
-LIBABIVER := 1
+LIBABIVER := 2
#
# all source are stored in SRCS-y
--
2.1.0
^ permalink raw reply [relevance 5%]
* Re: [dpdk-dev] [PATCH 0/4] librte_table: add name parameter to lpm table
2015-09-08 10:11 3% [dpdk-dev] [PATCH 0/4] librte_table: add name parameter to lpm table Jasvinder Singh
2015-09-08 10:11 3% ` [dpdk-dev] [PATCH 1/4] librte_table: modify LPM table parameter structure Jasvinder Singh
2015-09-08 10:11 5% ` [dpdk-dev] [PATCH 4/4] librte_table: modify release notes and deprecation notice Jasvinder Singh
@ 2015-09-08 12:57 0% ` Dumitrescu, Cristian
2015-10-09 10:49 0% ` Thomas Monjalon
2 siblings, 1 reply; 200+ results
From: Dumitrescu, Cristian @ 2015-09-08 12:57 UTC (permalink / raw)
To: Singh, Jasvinder, dev
> -----Original Message-----
> From: Singh, Jasvinder
> Sent: Tuesday, September 8, 2015 1:11 PM
> To: dev@dpdk.org
> Cc: Dumitrescu, Cristian
> Subject: [PATCH 0/4] librte_table: add name parameter to lpm table
>
> This patchset links to ABI change announced for librte_table. For lpm table,
> name parameter has been included in LPM table parameters structure.
> It will eventually allow applications to create more than one instances
> of lpm table, if required.
>
Acked-by: Cristian Dumitrescu <cristian.dumitrescu@intel.com>
^ permalink raw reply [relevance 0%]
* [dpdk-dev] [PATCH v2 5/5] doc: modify release notes and deprecation notice for table and pipeline
@ 2015-09-11 10:31 5% ` Maciej Gajdzica
1 sibling, 0 replies; 200+ results
From: Maciej Gajdzica @ 2015-09-11 10:31 UTC (permalink / raw)
To: dev
The LIBABIVER number is incremented for table and pipeline libraries.
The release notes are updated and the deprecation notice is removed.
Signed-off-by: Maciej Gajdzica <maciejx.t.gajdzica@intel.com>
---
doc/guides/rel_notes/deprecation.rst | 3 ---
doc/guides/rel_notes/release_2_2.rst | 6 ++++--
lib/librte_pipeline/Makefile | 2 +-
lib/librte_pipeline/rte_pipeline_version.map | 8 ++++++++
lib/librte_table/Makefile | 2 +-
5 files changed, 14 insertions(+), 7 deletions(-)
diff --git a/doc/guides/rel_notes/deprecation.rst b/doc/guides/rel_notes/deprecation.rst
index fffad80..5f46cf9 100644
--- a/doc/guides/rel_notes/deprecation.rst
+++ b/doc/guides/rel_notes/deprecation.rst
@@ -60,9 +60,6 @@ Deprecation Notices
* librte_table LPM: A new parameter to hold the table name will be added to
the LPM table parameter structure.
-* librte_table: New functions for table entry bulk add/delete will be added
- to the table operations structure.
-
* librte_table hash: Key mask parameter will be added to the hash table
parameter structure for 8-byte key and 16-byte key extendible bucket and
LRU tables.
diff --git a/doc/guides/rel_notes/release_2_2.rst b/doc/guides/rel_notes/release_2_2.rst
index 682f468..deb8e4e 100644
--- a/doc/guides/rel_notes/release_2_2.rst
+++ b/doc/guides/rel_notes/release_2_2.rst
@@ -47,6 +47,8 @@ ABI Changes
* The LPM structure is changed. The deprecated field mem_location is removed.
+* Added functions add/delete bulk to table and pipeline libraries.
+
Shared Library Versions
-----------------------
@@ -71,7 +73,7 @@ The libraries prepended with a plus sign were incremented in this version.
+ librte_mbuf.so.2
librte_mempool.so.1
librte_meter.so.1
- librte_pipeline.so.1
+ + librte_pipeline.so.2
librte_pmd_bond.so.1
+ librte_pmd_ring.so.2
librte_port.so.1
@@ -79,6 +81,6 @@ The libraries prepended with a plus sign were incremented in this version.
librte_reorder.so.1
librte_ring.so.1
librte_sched.so.1
- librte_table.so.1
+ + librte_table.so.2
librte_timer.so.1
librte_vhost.so.1
diff --git a/lib/librte_pipeline/Makefile b/lib/librte_pipeline/Makefile
index 15e406b..1166d3c 100644
--- a/lib/librte_pipeline/Makefile
+++ b/lib/librte_pipeline/Makefile
@@ -41,7 +41,7 @@ CFLAGS += $(WERROR_FLAGS)
EXPORT_MAP := rte_pipeline_version.map
-LIBABIVER := 1
+LIBABIVER := 2
#
# all source are stored in SRCS-y
diff --git a/lib/librte_pipeline/rte_pipeline_version.map b/lib/librte_pipeline/rte_pipeline_version.map
index 8f25d0f..5150070 100644
--- a/lib/librte_pipeline/rte_pipeline_version.map
+++ b/lib/librte_pipeline/rte_pipeline_version.map
@@ -29,3 +29,11 @@ DPDK_2.1 {
rte_pipeline_table_stats_read;
} DPDK_2.0;
+
+DPDK_2.2 {
+ global:
+
+ rte_pipeline_table_entry_add_bulk;
+ rte_pipeline_table_entry_delete_bulk;
+
+} DPDK_2.1;
\ No newline at end of file
diff --git a/lib/librte_table/Makefile b/lib/librte_table/Makefile
index c5b3eaf..7f02af3 100644
--- a/lib/librte_table/Makefile
+++ b/lib/librte_table/Makefile
@@ -41,7 +41,7 @@ CFLAGS += $(WERROR_FLAGS)
EXPORT_MAP := rte_table_version.map
-LIBABIVER := 1
+LIBABIVER := 2
#
# all source are stored in SRCS-y
--
1.7.9.5
^ permalink raw reply [relevance 5%]
* [dpdk-dev] [PATCH] doc: add guideline for updating release notes
@ 2015-09-11 11:04 15% John McNamara
0 siblings, 0 replies; 200+ results
From: John McNamara @ 2015-09-11 11:04 UTC (permalink / raw)
To: dev
>From version 2.2 of DPDK onwards patchsets should include
updates to the Release Notes for additions, fixes and
changes.
Add guideline on what to update in the Release Notes to the
Documentation Contribution guidelines.
Signed-off-by: John McNamara <john.mcnamara@intel.com>
---
doc/guides/contributing/documentation.rst | 17 ++++++++++++++---
1 file changed, 14 insertions(+), 3 deletions(-)
diff --git a/doc/guides/contributing/documentation.rst b/doc/guides/contributing/documentation.rst
index cb5ca0d..7c1eb41 100644
--- a/doc/guides/contributing/documentation.rst
+++ b/doc/guides/contributing/documentation.rst
@@ -63,9 +63,20 @@ added to by the developer.
any known issues.
The Releases Notes also contain notifications of features that will change ABI compatibility in the next major release.
- Developers should update the Release Notes to add a short description of new or updated features.
- Developers should also update the Release Notes to add ABI announcements if necessary,
- (see :doc:`/contributing/versioning` for details).
+ Developers should include updates to the Release Notes with patch sets that relate to any of the following sections:
+
+ * New Features
+ * Resolved Issues (see below)
+ * Known Issues
+ * API Changes
+ * ABI Changes
+ * Shared Library Versions
+
+ Resolved Issues should only include issues from previous releases that have been resolved in the current release.
+ Issues that are introduced and then fixed within a release cycle do not have to be included here.
+
+ Refer to the Release Notes from the previous DPDK release for the correct format of each section.
+
* **API documentation**
--
1.8.1.4
^ permalink raw reply [relevance 15%]
* [dpdk-dev] [PATCH 0/4]librte_port: modify macros to access packet meta-data
@ 2015-09-11 13:35 3% roy.fan.zhang
2015-09-11 13:35 3% ` [dpdk-dev] [PATCH 1/4] librte_port: " roy.fan.zhang
` (2 more replies)
0 siblings, 3 replies; 200+ results
From: roy.fan.zhang @ 2015-09-11 13:35 UTC (permalink / raw)
To: dev
From: Fan Zhang <roy.fan.zhang@intel.com>
This patchset links to the ABI change announced for librte_port. The macros to
access the packet meta-data stored within the packet buffer have been
adjusted to cover the packet mbuf structure.
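For illustration (the offset arithmetic is the only change), metadata that
used to be reached with offset 0 is now reached with an explicit sizeof()
offset, because offset 0 now refers to the start of the mbuf itself:
/* sketch: pkt is a struct rte_mbuf *; needs rte_port.h and rte_mbuf.h */
uint32_t *meta =
	RTE_MBUF_METADATA_UINT32_PTR(pkt, sizeof(struct rte_mbuf));
/* mbuf fields themselves also become addressable, e.g.:
 * RTE_MBUF_METADATA_UINT32_PTR(pkt, offsetof(struct rte_mbuf, hash.rss)) */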
Fan Zhang (4):
librte_port: modify macros to access packet meta-data
app/test: modify table and pipeline test
app/test_pipeline: modify pipeline test
librte_port: modify release notes and deprecation notice
app/test-pipeline/main.h | 2 ++
app/test-pipeline/pipeline_hash.c | 34 ++++++++++++++-------------
app/test-pipeline/pipeline_lpm.c | 2 +-
app/test-pipeline/pipeline_lpm_ipv6.c | 2 +-
app/test/test_table.h | 8 +++++--
app/test/test_table_combined.c | 28 +++++++++++-----------
app/test/test_table_pipeline.c | 3 ++-
app/test/test_table_tables.c | 44 ++++++++++++++++++-----------------
doc/guides/rel_notes/deprecation.rst | 5 ----
doc/guides/rel_notes/release_2_2.rst | 4 +++-
lib/librte_port/Makefile | 2 +-
lib/librte_port/rte_port.h | 2 +-
12 files changed, 72 insertions(+), 64 deletions(-)
--
2.1.0
^ permalink raw reply [relevance 3%]
* [dpdk-dev] [PATCH 1/4] librte_port: modify macros to access packet meta-data
2015-09-11 13:35 3% [dpdk-dev] [PATCH 0/4]librte_port: modify macros to access packet meta-data roy.fan.zhang
@ 2015-09-11 13:35 3% ` roy.fan.zhang
2015-09-11 13:35 5% ` [dpdk-dev] [PATCH 4/4] librte_port: modify release notes and deprecation notice roy.fan.zhang
2015-09-11 13:40 0% ` [dpdk-dev] [PATCH 0/4]librte_port: modify macros to access packet meta-data Dumitrescu, Cristian
2 siblings, 0 replies; 200+ results
From: roy.fan.zhang @ 2015-09-11 13:35 UTC (permalink / raw)
To: dev
From: Fan Zhang <roy.fan.zhang@intel.com>
This patch relates to the ABI change proposed for librte_port. The macros to
access the packet meta-data stored within the packet buffer have been
adjusted to cover the packet mbuf structure.
Signed-off-by: Fan Zhang <roy.fan.zhang@intel.com>
---
lib/librte_port/rte_port.h | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/lib/librte_port/rte_port.h b/lib/librte_port/rte_port.h
index 396c7e9..00b97a9 100644
--- a/lib/librte_port/rte_port.h
+++ b/lib/librte_port/rte_port.h
@@ -55,7 +55,7 @@ extern "C" {
* just beyond the end of the mbuf data structure returned by a port
*/
#define RTE_MBUF_METADATA_UINT8_PTR(mbuf, offset) \
- (&((uint8_t *) &(mbuf)[1])[offset])
+ (&((uint8_t *)(mbuf))[offset])
#define RTE_MBUF_METADATA_UINT16_PTR(mbuf, offset) \
((uint16_t *) RTE_MBUF_METADATA_UINT8_PTR(mbuf, offset))
#define RTE_MBUF_METADATA_UINT32_PTR(mbuf, offset) \
--
2.1.0
^ permalink raw reply [relevance 3%]
* [dpdk-dev] [PATCH 4/4] librte_port: modify release notes and deprecation notice
2015-09-11 13:35 3% [dpdk-dev] [PATCH 0/4]librte_port: modify macros to access packet meta-data roy.fan.zhang
2015-09-11 13:35 3% ` [dpdk-dev] [PATCH 1/4] librte_port: " roy.fan.zhang
@ 2015-09-11 13:35 5% ` roy.fan.zhang
2015-09-11 13:40 0% ` [dpdk-dev] [PATCH 0/4]librte_port: modify macros to access packet meta-data Dumitrescu, Cristian
2 siblings, 0 replies; 200+ results
From: roy.fan.zhang @ 2015-09-11 13:35 UTC (permalink / raw)
To: dev
From: Fan Zhang <roy.fan.zhang@intel.com>
The LIBABIVER number is incremented. The release notes
are updated and the deprecation notice is removed.
Signed-off-by: Fan Zhang <roy.fan.zhang@intel.com>
---
doc/guides/rel_notes/deprecation.rst | 5 -----
doc/guides/rel_notes/release_2_2.rst | 4 +++-
lib/librte_port/Makefile | 2 +-
3 files changed, 4 insertions(+), 7 deletions(-)
diff --git a/doc/guides/rel_notes/deprecation.rst b/doc/guides/rel_notes/deprecation.rst
index fffad80..d08ba63 100644
--- a/doc/guides/rel_notes/deprecation.rst
+++ b/doc/guides/rel_notes/deprecation.rst
@@ -52,11 +52,6 @@ Deprecation Notices
the value of macros CFG_NAME_LEN and CFG_NAME_VAL will be increased.
Most likely, the new values will be 64 and 256, respectively.
-* librte_port: Macros to access the packet meta-data stored within the
- packet buffer will be adjusted to cover the packet mbuf structure as well,
- as currently they are able to access any packet buffer location except the
- packet mbuf structure.
-
* librte_table LPM: A new parameter to hold the table name will be added to
the LPM table parameter structure.
diff --git a/doc/guides/rel_notes/release_2_2.rst b/doc/guides/rel_notes/release_2_2.rst
index 682f468..4d945db 100644
--- a/doc/guides/rel_notes/release_2_2.rst
+++ b/doc/guides/rel_notes/release_2_2.rst
@@ -47,6 +47,8 @@ ABI Changes
* The LPM structure is changed. The deprecated field mem_location is removed.
+* librte_port: Macros to access the packet meta-data stored within the packet
+ buffer has been adjusted to cover the packet mbuf structure.
Shared Library Versions
-----------------------
@@ -74,7 +76,7 @@ The libraries prepended with a plus sign were incremented in this version.
librte_pipeline.so.1
librte_pmd_bond.so.1
+ librte_pmd_ring.so.2
- librte_port.so.1
+ librte_port.so.2
librte_power.so.1
librte_reorder.so.1
librte_ring.so.1
diff --git a/lib/librte_port/Makefile b/lib/librte_port/Makefile
index ddbb383..410053e 100644
--- a/lib/librte_port/Makefile
+++ b/lib/librte_port/Makefile
@@ -41,7 +41,7 @@ CFLAGS += $(WERROR_FLAGS)
EXPORT_MAP := rte_port_version.map
-LIBABIVER := 1
+LIBABIVER := 2
#
# all source are stored in SRCS-y
--
2.1.0
^ permalink raw reply [relevance 5%]
* Re: [dpdk-dev] [PATCH 0/4]librte_port: modify macros to access packet meta-data
2015-09-11 13:35 3% [dpdk-dev] [PATCH 0/4]librte_port: modify macros to access packet meta-data roy.fan.zhang
2015-09-11 13:35 3% ` [dpdk-dev] [PATCH 1/4] librte_port: " roy.fan.zhang
2015-09-11 13:35 5% ` [dpdk-dev] [PATCH 4/4] librte_port: modify release notes and deprecation notice roy.fan.zhang
@ 2015-09-11 13:40 0% ` Dumitrescu, Cristian
2015-10-19 15:02 0% ` Thomas Monjalon
2 siblings, 1 reply; 200+ results
From: Dumitrescu, Cristian @ 2015-09-11 13:40 UTC (permalink / raw)
To: Zhang, Roy Fan, dev
> -----Original Message-----
> From: dev [mailto:dev-bounces@dpdk.org] On Behalf Of
> roy.fan.zhang@intel.com
> Sent: Friday, September 11, 2015 4:36 PM
> To: dev@dpdk.org
> Subject: [dpdk-dev] [PATCH 0/4]librte_port: modify macros to access packet
> meta-data
>
> From: Fan Zhang <roy.fan.zhang@intel.com>
>
> This patchset links to ABI change announced for librte_port. Macros to
> access the packet meta-data stored within the packet buffer has been
> adjusted to cover the packet mbuf structure.
>
Acked-by: Cristian Dumitrescu <cristian.dumitrescu@intel.com>
^ permalink raw reply [relevance 0%]
* Re: [dpdk-dev] [PATCH 2/2] rte_sched: remove useless bitmap_free
@ 2015-09-11 17:28 3% ` Dumitrescu, Cristian
2015-09-11 19:18 0% ` Stephen Hemminger
0 siblings, 1 reply; 200+ results
From: Dumitrescu, Cristian @ 2015-09-11 17:28 UTC (permalink / raw)
To: Stephen Hemminger; +Cc: dev
> -----Original Message-----
> From: Stephen Hemminger [mailto:stephen@networkplumber.org]
> Sent: Friday, August 28, 2015 7:50 PM
> To: Dumitrescu, Cristian
> Cc: dev@dpdk.org; Stephen Hemminger
> Subject: [PATCH 2/2] rte_sched: remove useless bitmap_free
>
> Coverity reports that rte_bitmap_free() does nothing and caller does
> not check return value. Just remove it.
>
> Also since rte_free(NULL) is a nop, remove useless check here.
>
> Signed-off-by: Stephen Hemminger <stephen@networkplumber.org>
> ---
> lib/librte_sched/rte_bitmap.h | 19 -------------------
> lib/librte_sched/rte_sched.c | 5 -----
> 2 files changed, 24 deletions(-)
>
> diff --git a/lib/librte_sched/rte_bitmap.h b/lib/librte_sched/rte_bitmap.h
> index 216a344..47eeeeb 100644
> --- a/lib/librte_sched/rte_bitmap.h
> +++ b/lib/librte_sched/rte_bitmap.h
> @@ -275,25 +275,6 @@ rte_bitmap_init(uint32_t n_bits, uint8_t *mem,
> uint32_t mem_size)
> }
>
> /**
> - * Bitmap free
> - *
> - * @param bmp
> - * Handle to bitmap instance
> - * @return
> - * 0 upon success, error code otherwise
> - */
> -static inline int
> -rte_bitmap_free(struct rte_bitmap *bmp)
> -{
> - /* Check input arguments */
> - if (bmp == NULL) {
> - return -1;
> - }
> -
> - return 0;
> -}
> -
> -/**
> * Bitmap reset
> *
> * @param bmp
> diff --git a/lib/librte_sched/rte_sched.c b/lib/librte_sched/rte_sched.c
> index 924c172..cbe3f3b 100644
> --- a/lib/librte_sched/rte_sched.c
> +++ b/lib/librte_sched/rte_sched.c
> @@ -716,11 +716,6 @@ rte_sched_port_config(struct
> rte_sched_port_params *params)
> void
> rte_sched_port_free(struct rte_sched_port *port)
> {
> - /* Check user parameters */
> - if (port == NULL)
> - return;
> -
> - rte_bitmap_free(port->bmp);
> rte_free(port);
> }
>
> --
> 2.1.4
Hi Steve,
I agree these functions are not doing much at the moment, but I would like to keep them for the reasons below:
1. There might be people using them, and we do not want to break their code. Removing them is an ABI change.
2. Although they are just placeholders for now, we might need to add more functionality to them going forward, as the implementation evolves. I don't want to change the API now by removing them, and change the API later when we need to add them back. Generally, I think it is good practice to have free functions.
Regards,
Cristian
^ permalink raw reply [relevance 3%]
* Re: [dpdk-dev] [PATCH 2/2] rte_sched: remove useless bitmap_free
2015-09-11 17:28 3% ` Dumitrescu, Cristian
@ 2015-09-11 19:18 0% ` Stephen Hemminger
0 siblings, 0 replies; 200+ results
From: Stephen Hemminger @ 2015-09-11 19:18 UTC (permalink / raw)
To: Dumitrescu, Cristian; +Cc: dev
> Hi Steve,
>
> I agree these functions are not doing much at the moment, but I would like to keep them for the reasons below:
>
> 1. There might be people using them, and we do not want to break their code. Removing them is an ABI change.
>
> 2. Although they are just placeholders for now, we might need to add more functionality to them going forward, as the implementation evolves. I don't want to change the API now by removing them, and change the API later when we need to add them back. Generally, I think it is good practice to have free functions.
The source code base is not a code repository for unused and dead code!
If you need to keep things, either get them out of previous history with git
or keep them out of tree.
Also have a number of patches to remove all #ifdef code that is dead
in QoS now.
^ permalink raw reply [relevance 0%]
* Re: [dpdk-dev] DPDK 2.2 roadmap
@ 2015-09-15 9:16 4% ` David Marchand
0 siblings, 0 replies; 200+ results
From: David Marchand @ 2015-09-15 9:16 UTC (permalink / raw)
To: Thomas Monjalon; +Cc: dev
Hello all,
My turn.
As far as 2.2 is concerned, I have some fixes/changes waiting to go
upstream :
- allow default mac removal (to be discussed)
- kvargs api updates / cleanup (no change on abi, I would say)
- vlan filtering api fixes and ixgbevf/igbvf associated fixes (might have
an impact on abi)
- ethdev fixes wrt hotplug framework
- minor fixes in testpmd
After this, depending on the schedule (so will most likely be for 2.3 or
later), I have some ideas on :
- cleanup for hotplug and maybe discussions on pci bind/unbind operations
- provide a little tool to report information/capabilities on drivers (à la
modinfo)
- continue work on hotplug
By the way, I have some questions to the community :
- I noticed that with hotplug support, testpmd has become *really* hungry
for mbufs and memory.
The problem comes from the "basic" assumption that we must reserve enough
memory/mbufs for the maximum number of ports that might be available, even
though most of them are not present in the most common test setups.
One solution might be to rework the way mbufs are reserved :
* either we let testpmd start with a limited mbuf count, the way it was
working before edab33b1 ("app/testpmd: support port hotplug"); then, when
trying to start a port, the operation can fail if not enough mbufs are
available for it
* or we can try to create one mempool per port. The mempools would be
populated at port init / close (?); a rough sketch of this option is given
below.
Anyone volunteers to rework this ?
Other ideas ?
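A rough sketch of the per-port mempool option, assuming the pool backing a
port is created when the port is attached (names and sizes below are made up
for illustration; how the pool would be released on close is left open):

#include <stdio.h>
#include <rte_ethdev.h>
#include <rte_mbuf.h>
#include <rte_mempool.h>

static struct rte_mempool *port_pools[RTE_MAX_ETHPORTS];

/* Create the mbuf pool backing a single port, sized for that port only. */
static int
attach_port_pool(uint8_t port_id, unsigned int nb_mbuf, int socket_id)
{
	char name[RTE_MEMPOOL_NAMESIZE];

	snprintf(name, sizeof(name), "mbuf_pool_p%u", (unsigned int)port_id);
	port_pools[port_id] = rte_pktmbuf_pool_create(name, nb_mbuf,
			250, 0, RTE_MBUF_DEFAULT_BUF_SIZE, socket_id);
	return port_pools[port_id] == NULL ? -1 : 0;
}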
- looking at a patch from Chao (
http://dpdk.org/ml/archives/dev/2015-August/022819.html), I think we need
to rework the way the numa nodes are handled in the dpdk.
The problem is that we rely on static arrays for some resources per socket.
I suppose this was designed with the idea that socket "physical" indexes
are contiguous, but this is not true on systems running power8 bare metal
(where NUMA indexes can be 0, 1, 16, 17 on quad-node servers).
I suppose we can go with a mapping array (populated at the same time cpus
are discovered), then use this mapping array and preserve all apis, but
this might not be that trivial.
Volunteers ?
Ideas ?
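One possible shape for such a mapping array, assuming it is filled while the
CPU/NUMA topology is parsed at EAL init (an illustrative sketch only, not an
actual EAL patch):

#include <rte_config.h>

/* Contiguous internal index -> physical socket id, e.g. 0, 1, 16, 17. */
static unsigned int socket_id_map[RTE_MAX_NUMA_NODES];
static unsigned int nb_sockets;

/* Return the internal index for a physical socket id, registering it on
 * first sight; -1 if the table is full. */
static int
map_socket_id(unsigned int phys_socket_id)
{
	unsigned int i;

	for (i = 0; i < nb_sockets; i++)
		if (socket_id_map[i] == phys_socket_id)
			return i;
	if (nb_sockets == RTE_MAX_NUMA_NODES)
		return -1;
	socket_id_map[nb_sockets] = phys_socket_id;
	return nb_sockets++;
}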
- finally, looking at the eal, there are still some cleanups to do.
More specifically, are there any users of the ivshmem feature in dpdk ?
I can see little value in keeping the ivshmem feature in the eal (well
maybe because I don't use it) as it relies on hacks.
So I can see two options:
* either someone still wants it to work, in which case we need a proper
rework to get rid of those hacks under #ifdef in the eal, and the special
configuration files can disappear
* or, if nobody complains, we can schedule its deprecation and then removal.
Thanks.
--
David Marchand
^ permalink raw reply [relevance 4%]
* [dpdk-dev] [PATCH v2 1/4] librte_table: modify LPM table parameter structure
2015-09-17 16:03 3% [dpdk-dev] [PATCH v2 0/4]librte_table: add name parameter to lpm table Jasvinder Singh
@ 2015-09-17 16:03 3% ` Jasvinder Singh
2015-09-17 16:03 5% ` [dpdk-dev] [PATCH v2 4/4] librte_table: modify release notes and deprecation notice Jasvinder Singh
2015-09-17 16:13 0% ` [dpdk-dev] [PATCH v2 0/4]librte_table: add name parameter to lpm table Dumitrescu, Cristian
2 siblings, 0 replies; 200+ results
From: Jasvinder Singh @ 2015-09-17 16:03 UTC (permalink / raw)
To: dev
This patch relates to the ABI change proposed for librte_table
(LPM table). A new parameter to hold the table name has
been added to the LPM table parameter structures
rte_table_lpm_params and rte_table_lpm_ipv6_params.
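For application code the only visible difference is the new "name" field,
which now has to be non-NULL since it is passed through to rte_lpm_create().
A minimal usage sketch (not part of the patch, remaining fields omitted):

#include <rte_table_lpm.h>

static struct rte_table_lpm_params route_table_params = {
	.name = "routes_ipv4",  /* new field: per-instance name of the LPM table */
	.n_rules = 1 << 16,
	/* entry uniqueness size and key offset left out of this sketch */
};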
Signed-off-by: Jasvinder Singh <jasvinder.singh@intel.com>
---
lib/librte_table/rte_table_lpm.c | 8 ++++++--
lib/librte_table/rte_table_lpm.h | 3 +++
lib/librte_table/rte_table_lpm_ipv6.c | 8 ++++++--
lib/librte_table/rte_table_lpm_ipv6.h | 3 +++
4 files changed, 18 insertions(+), 4 deletions(-)
diff --git a/lib/librte_table/rte_table_lpm.c b/lib/librte_table/rte_table_lpm.c
index b218d64..849d899 100644
--- a/lib/librte_table/rte_table_lpm.c
+++ b/lib/librte_table/rte_table_lpm.c
@@ -103,7 +103,11 @@ rte_table_lpm_create(void *params, int socket_id, uint32_t entry_size)
__func__);
return NULL;
}
-
+ if (p->name == NULL) {
+ RTE_LOG(ERR, TABLE, "%s: Table name is NULL\n",
+ __func__);
+ return NULL;
+ }
entry_size = RTE_ALIGN(entry_size, sizeof(uint64_t));
/* Memory allocation */
@@ -119,7 +123,7 @@ rte_table_lpm_create(void *params, int socket_id, uint32_t entry_size)
}
/* LPM low-level table creation */
- lpm->lpm = rte_lpm_create("LPM", socket_id, p->n_rules, 0);
+ lpm->lpm = rte_lpm_create(p->name, socket_id, p->n_rules, 0);
if (lpm->lpm == NULL) {
rte_free(lpm);
RTE_LOG(ERR, TABLE, "Unable to create low-level LPM table\n");
diff --git a/lib/librte_table/rte_table_lpm.h b/lib/librte_table/rte_table_lpm.h
index c08c958..06e8410 100644
--- a/lib/librte_table/rte_table_lpm.h
+++ b/lib/librte_table/rte_table_lpm.h
@@ -77,6 +77,9 @@ extern "C" {
/** LPM table parameters */
struct rte_table_lpm_params {
+ /** Table name */
+ const char *name;
+
/** Maximum number of LPM rules (i.e. IP routes) */
uint32_t n_rules;
diff --git a/lib/librte_table/rte_table_lpm_ipv6.c b/lib/librte_table/rte_table_lpm_ipv6.c
index ff4a9c2..e9bc6a7 100644
--- a/lib/librte_table/rte_table_lpm_ipv6.c
+++ b/lib/librte_table/rte_table_lpm_ipv6.c
@@ -109,7 +109,11 @@ rte_table_lpm_ipv6_create(void *params, int socket_id, uint32_t entry_size)
__func__);
return NULL;
}
-
+ if (p->name == NULL) {
+ RTE_LOG(ERR, TABLE, "%s: Table name is NULL\n",
+ __func__);
+ return NULL;
+ }
entry_size = RTE_ALIGN(entry_size, sizeof(uint64_t));
/* Memory allocation */
@@ -128,7 +132,7 @@ rte_table_lpm_ipv6_create(void *params, int socket_id, uint32_t entry_size)
lpm6_config.max_rules = p->n_rules;
lpm6_config.number_tbl8s = p->number_tbl8s;
lpm6_config.flags = 0;
- lpm->lpm = rte_lpm6_create("LPM IPv6", socket_id, &lpm6_config);
+ lpm->lpm = rte_lpm6_create(p->name, socket_id, &lpm6_config);
if (lpm->lpm == NULL) {
rte_free(lpm);
RTE_LOG(ERR, TABLE,
diff --git a/lib/librte_table/rte_table_lpm_ipv6.h b/lib/librte_table/rte_table_lpm_ipv6.h
index 91fb0d8..43aea39 100644
--- a/lib/librte_table/rte_table_lpm_ipv6.h
+++ b/lib/librte_table/rte_table_lpm_ipv6.h
@@ -79,6 +79,9 @@ extern "C" {
/** LPM table parameters */
struct rte_table_lpm_ipv6_params {
+ /** Table name */
+ const char *name;
+
/** Maximum number of LPM rules (i.e. IP routes) */
uint32_t n_rules;
--
2.1.0
^ permalink raw reply [relevance 3%]
* [dpdk-dev] [PATCH v2 4/4] librte_table: modify release notes and deprecation notice
2015-09-17 16:03 3% [dpdk-dev] [PATCH v2 0/4]librte_table: add name parameter to lpm table Jasvinder Singh
2015-09-17 16:03 3% ` [dpdk-dev] [PATCH v2 1/4] librte_table: modify LPM table parameter structure Jasvinder Singh
@ 2015-09-17 16:03 5% ` Jasvinder Singh
2015-09-17 16:13 0% ` [dpdk-dev] [PATCH v2 0/4]librte_table: add name parameter to lpm table Dumitrescu, Cristian
2 siblings, 0 replies; 200+ results
From: Jasvinder Singh @ 2015-09-17 16:03 UTC (permalink / raw)
To: dev
The LIBABIVER number is incremented. The release notes
are updated and the deprecation announcement is removed.
Signed-off-by: Jasvinder Singh <jasvinder.singh@intel.com>
---
doc/guides/rel_notes/deprecation.rst | 3 ---
doc/guides/rel_notes/release_2_2.rst | 4 +++-
lib/librte_table/Makefile | 2 +-
3 files changed, 4 insertions(+), 5 deletions(-)
diff --git a/doc/guides/rel_notes/deprecation.rst b/doc/guides/rel_notes/deprecation.rst
index 5f6079b..ce6147e 100644
--- a/doc/guides/rel_notes/deprecation.rst
+++ b/doc/guides/rel_notes/deprecation.rst
@@ -62,9 +62,6 @@ Deprecation Notices
as currently they are able to access any packet buffer location except the
packet mbuf structure.
-* librte_table LPM: A new parameter to hold the table name will be added to
- the LPM table parameter structure.
-
* librte_table: New functions for table entry bulk add/delete will be added
to the table operations structure.
diff --git a/doc/guides/rel_notes/release_2_2.rst b/doc/guides/rel_notes/release_2_2.rst
index abe57b4..75fc1ab 100644
--- a/doc/guides/rel_notes/release_2_2.rst
+++ b/doc/guides/rel_notes/release_2_2.rst
@@ -44,6 +44,8 @@ ABI Changes
* The LPM structure is changed. The deprecated field mem_location is removed.
+* librte_table LPM: A new parameter to hold the table name will be added to
+ the LPM table parameter structure.
Shared Library Versions
-----------------------
@@ -76,6 +78,6 @@ The libraries prepended with a plus sign were incremented in this version.
librte_reorder.so.1
librte_ring.so.1
librte_sched.so.1
- librte_table.so.1
+ + librte_table.so.2
librte_timer.so.1
librte_vhost.so.1
diff --git a/lib/librte_table/Makefile b/lib/librte_table/Makefile
index c5b3eaf..7f02af3 100644
--- a/lib/librte_table/Makefile
+++ b/lib/librte_table/Makefile
@@ -41,7 +41,7 @@ CFLAGS += $(WERROR_FLAGS)
EXPORT_MAP := rte_table_version.map
-LIBABIVER := 1
+LIBABIVER := 2
#
# all source are stored in SRCS-y
--
2.1.0
^ permalink raw reply [relevance 5%]
* [dpdk-dev] [PATCH v2 0/4]librte_table: add name parameter to lpm table
@ 2015-09-17 16:03 3% Jasvinder Singh
2015-09-17 16:03 3% ` [dpdk-dev] [PATCH v2 1/4] librte_table: modify LPM table parameter structure Jasvinder Singh
` (2 more replies)
0 siblings, 3 replies; 200+ results
From: Jasvinder Singh @ 2015-09-17 16:03 UTC (permalink / raw)
To: dev
This patchset links to the ABI change announced for librte_table. For
the LPM table, a name parameter has been included in the LPM table
parameters structure. It will eventually allow applications to create
more than one instance of an LPM table, if required.
Changes in v2:
- rte_table_lpm_ipv6.c: removed the name variable from
rte_zmalloc_socket() and passed it to rte_lpm6_create() instead.
Jasvinder Singh (4):
librte_table: modify LPM table parameter structure
app/test: modify table and pipeline test
ip_pipeline: modify lpm table for routing pipeline
librte_table: modify release notes and deprecation notice
app/test-pipeline/pipeline_lpm.c | 1 +
app/test-pipeline/pipeline_lpm_ipv6.c | 1 +
app/test/test_table_combined.c | 2 +
app/test/test_table_tables.c | 102 ++++++++++++---------
doc/guides/rel_notes/deprecation.rst | 3 -
doc/guides/rel_notes/release_2_2.rst | 4 +-
.../ip_pipeline/pipeline/pipeline_routing_be.c | 1 +
lib/librte_table/Makefile | 2 +-
lib/librte_table/rte_table_lpm.c | 8 +-
lib/librte_table/rte_table_lpm.h | 3 +
lib/librte_table/rte_table_lpm_ipv6.c | 8 +-
lib/librte_table/rte_table_lpm_ipv6.h | 3 +
12 files changed, 86 insertions(+), 52 deletions(-)
--
2.1.0
^ permalink raw reply [relevance 3%]
* Re: [dpdk-dev] [PATCH v2 0/4]librte_table: add name parameter to lpm table
2015-09-17 16:03 3% [dpdk-dev] [PATCH v2 0/4]librte_table: add name parameter to lpm table Jasvinder Singh
2015-09-17 16:03 3% ` [dpdk-dev] [PATCH v2 1/4] librte_table: modify LPM table parameter structure Jasvinder Singh
2015-09-17 16:03 5% ` [dpdk-dev] [PATCH v2 4/4] librte_table: modify release notes and deprecation notice Jasvinder Singh
@ 2015-09-17 16:13 0% ` Dumitrescu, Cristian
2015-10-12 14:06 0% ` Thomas Monjalon
2 siblings, 1 reply; 200+ results
From: Dumitrescu, Cristian @ 2015-09-17 16:13 UTC (permalink / raw)
To: Singh, Jasvinder, dev
> -----Original Message-----
> From: Singh, Jasvinder
> Sent: Thursday, September 17, 2015 7:03 PM
> To: dev@dpdk.org
> Cc: Dumitrescu, Cristian
> Subject: [PATCH v2 0/4]librte_table: add name parameter to lpm table
>
> This patchset links to ABI change announced for librte_table. For
> lpm table, name parameter has been included in LPM table parameters
> structure. It will eventually allow applications to create more
> than one instances of lpm table, if required.
>
> Changes in v2:
> - rte_table_lpm_ipv6.c: removed name varibale from
> rte_zmalloc_socket() and inserted that in rte_lpm6_create().
>
>
Acked-by: Cristian Dumitrescu <cristian.dumitrescu@intel.com>
^ permalink raw reply [relevance 0%]
* Re: [dpdk-dev] libdpdk upstream changes for ecosystem best practices
2015-09-02 14:18 3% ` Thomas Monjalon
2015-09-02 16:01 0% ` Stephen Hemminger
@ 2015-09-18 10:39 5% ` Robie Basak
1 sibling, 0 replies; 200+ results
From: Robie Basak @ 2015-09-18 10:39 UTC (permalink / raw)
To: Thomas Monjalon; +Cc: dev, Stefan Bader
Hi Thomas,
On Wed, Sep 02, 2015 at 04:18:33PM +0200, Thomas Monjalon wrote:
> > First, it would be easier for us to ship a single binary package that
> > ships a single shared library to cover all of DPDK that library
> > consumers might need, rather than having it split up as you do. I
> > understand the build system is capable of doing this already, but what
> > we don’t have is a well defined soname and sover (currently
> > parameterized in the build) for ABI compatibility purposes. As a binary
>
> No it is now fixed:
> http://dpdk.org/browse/dpdk/commit/?id=c3ce2ad3548
It's great that the name "dpdk" is pinned down - thanks. But we need to
define the sover also, and make sure it is bumped when the ABI changes.
AIUI the build currently produces no sover - is this correct?
We'll use a sover of 0 in our packaging for now, unless you object. Then
we'll be able to move up to whatever you do when it is well-defined.
> > So that we can get DPDK packaging into Ubuntu immediately, please could
> > we agree to define (and burn) libdpdk.so.0 to be the ABI that builds
> > with upstream release 2.0.0 when built with the native-linuxapp-gcc
> > template options plus the following changes:
> > CONFIG_RTE_MACHINE=”default”
> > CONFIG_RTE_APP_TEST=n
> > CONFIG_LIBRTE_VHOST=y
> > CONFIG_RTE_EAL_IGB_UIO=n
> > CONFIG_RTE_LIBRTE_KNI=n
> > CONFIG_RTE_BUILD_COMBINE_LIBS=y
> > CONFIG_RTE_BUILD_SHARED_LIB=y
>
> I feel this configuration is the responsibility of the distribution.
> What do you expect to have in the source project?
I just wanted to make it clear what we were doing in case changing build
configuration parameters resulted in a different ABI. If this isn't the
case, then that's fine - it is solely the concern of the distribution
as to what build parameters we pick.
> > The combined library would be placed into /usr/lib/$(ARCH)-linux-gnu/
> > where it can be found without modification to the library search path.
> > We want to ship it like this in Ubuntu anyway, but I’d prefer upstream
> > to have defined it as such since then we’ll have a proper definition of
> > the ABI that can be shared across distributions and other consumers any
> > time ABI compatibility is expected.
>
> You mean you target ABI compatibility between Linux distributons?
> But other libraries could have different versions so you would be lucky
> to have a binary application finding the same dependencies.
In theory we do get ABI compatibility between distributions. Finding the
dependencies is a separate issue; but if the right binaries were
installed, there would be no conflicts in finding shared libraries
across binaries from different distributions if the ABI is managed
right.
But that isn't directly our target.
It's still useful to us to have this done right. It makes ABI
transitions in the distribution (coordinating updates to libraries and
their consumers concurrently) possible without breaking things in the
middle. It means that when we talk to upstreams (both libraries and
their consumers) then we're speaking the same language as other
distributions, and patches apply to them all without each distribution
having to kludge things independently. And it gives us options when
different library consumers require different ABI versions since we can
concurrently install two different ABIs of the same library (although we
prefer to avoid that).
> > Though not strictly part of a shared library ABI, I also propose some
> > build-related upstream changes at API level below, that I’d like to also
> > ship in the initial Ubuntu packaging of the header files. Clearly you
> > cannot make this change in an existing release, but I propose that you
> > do this for your next release so all library consumers will see a
> > consistent and standard API interface. If you agree to this, then I’d
> > also like to ship the Ubuntu package with patches to do the same thing
> > in your current release.
>
> Yes cleanup patches are welcome :)
I'm arranging to have someone work on these with you upstream and send
you patches, thanks.
Robie
^ permalink raw reply [relevance 5%]
* [dpdk-dev] [PATCH 0/4] extend flow director to support VF filtering in i40e driver
@ 2015-09-22 3:45 3% Jingjing Wu
2015-09-22 3:45 19% ` [dpdk-dev] [PATCH 4/4] doc: extend commands in testpmd and remove related ABI deprecation Jingjing Wu
2015-10-28 8:41 3% ` [dpdk-dev] [PATCH v2 0/4] extend flow director to support VF filtering in i40e driver Jingjing Wu
0 siblings, 2 replies; 200+ results
From: Jingjing Wu @ 2015-09-22 3:45 UTC (permalink / raw)
To: dev
This patch set extends the flow director to support VF filtering in the i40e driver.
Jingjing Wu (4):
ethdev: extend struct to support flow director in VFs
i40e: extend flow director to support filtering in VFs
testpmd: extend commands
doc: extend commands in testpmd and remove related ABI deprecation
app/test-pmd/cmdline.c | 41 ++++++++++++++++++++++++++---
doc/guides/rel_notes/deprecation.rst | 4 ---
doc/guides/testpmd_app_ug/testpmd_funcs.rst | 12 ++++-----
drivers/net/i40e/i40e_ethdev.c | 4 +--
drivers/net/i40e/i40e_fdir.c | 15 ++++++++---
lib/librte_ether/rte_eth_ctrl.h | 2 ++
6 files changed, 59 insertions(+), 19 deletions(-)
--
2.4.0
^ permalink raw reply [relevance 3%]
* [dpdk-dev] [PATCH 4/4] doc: extend commands in testpmd and remove related ABI deprecation
2015-09-22 3:45 3% [dpdk-dev] [PATCH 0/4] extend flow director to support VF filtering in i40e driver Jingjing Wu
@ 2015-09-22 3:45 19% ` Jingjing Wu
2015-10-27 7:54 7% ` Zhang, Helin
2015-10-28 8:41 3% ` [dpdk-dev] [PATCH v2 0/4] extend flow director to support VF filtering in i40e driver Jingjing Wu
1 sibling, 1 reply; 200+ results
From: Jingjing Wu @ 2015-09-22 3:45 UTC (permalink / raw)
To: dev
Modify the documentation of the flow director commands to cover filtering in VFs.
Remove the related ABI deprecation notice.
Signed-off-by: Jingjing Wu <jingjing.wu@intel.com>
---
doc/guides/rel_notes/deprecation.rst | 4 ----
doc/guides/testpmd_app_ug/testpmd_funcs.rst | 12 ++++++------
2 files changed, 6 insertions(+), 10 deletions(-)
diff --git a/doc/guides/rel_notes/deprecation.rst b/doc/guides/rel_notes/deprecation.rst
index fffad80..e1a35b9 100644
--- a/doc/guides/rel_notes/deprecation.rst
+++ b/doc/guides/rel_notes/deprecation.rst
@@ -24,10 +24,6 @@ Deprecation Notices
Structures: rte_fdir_*, rte_eth_fdir.
Enums: rte_l4type, rte_iptype.
-* ABI changes are planned for struct rte_eth_fdir_flow_ext in order to support
- flow director filtering in VF. The release 2.1 does not contain these ABI
- changes, but release 2.2 will, and no backwards compatibility is planned.
-
* ABI changes are planned for struct rte_eth_fdir_filter and
rte_eth_fdir_masks in order to support new flow director modes,
MAC VLAN and Cloud, on x550. The MAC VLAN mode means the MAC and
diff --git a/doc/guides/testpmd_app_ug/testpmd_funcs.rst b/doc/guides/testpmd_app_ug/testpmd_funcs.rst
index aa77a91..9a0d18a 100644
--- a/doc/guides/testpmd_app_ug/testpmd_funcs.rst
+++ b/doc/guides/testpmd_app_ug/testpmd_funcs.rst
@@ -1624,30 +1624,30 @@ Different NICs may have different capabilities, command show port fdir (port_id)
flow_director_filter (port_id) (add|del|update) flow (ipv4-other|ipv4-frag|ipv6-other|ipv6-frag)
src (src_ip_address) dst (dst_ip_address) vlan (vlan_value) flexbytes (flexbytes_value)
-(drop|fwd) queue (queue_id) fd_id (fd_id_value)
+(drop|fwd) pf|vf(vf_id) queue (queue_id) fd_id (fd_id_value)
flow_director_filter (port_id) (add|del|update) flow (ipv4-tcp|ipv4-udp|ipv6-tcp|ipv6-udp)
src (src_ip_address) (src_port) dst (dst_ip_address) (dst_port) vlan (vlan_value)
-flexbytes (flexbytes_value) (drop|fwd) queue (queue_id) fd_id (fd_id_value)
+flexbytes (flexbytes_value) (drop|fwd) pf|vf(vf_id) queue (queue_id) fd_id (fd_id_value)
flow_director_filter (port_id) (add|del|update) flow (ipv4-sctp|ipv6-sctp)
src (src_ip_address) (src_port) dst (dst_ip_address) (dst_port) tag (verification_tag)
-vlan (vlan_value) flexbytes (flexbytes_value) (drop|fwd) queue (queue_id) fd_id (fd_id_value)
+vlan (vlan_value) flexbytes (flexbytes_value) (drop|fwd) pf|vf(vf_id) queue (queue_id) fd_id (fd_id_value)
flow_director_filter (port_id) (add|del|update) flow l2_payload
-ether (ethertype) flexbytes (flexbytes_value) (drop|fwd) queue (queue_id) fd_id (fd_id_value)
+ether (ethertype) flexbytes (flexbytes_value) (drop|fwd) pf|vf(vf_id) queue (queue_id) fd_id (fd_id_value)
For example, to add an ipv4-udp flow type filter:
.. code-block:: console
- testpmd> flow_director_filter 0 add flow ipv4-udp src 2.2.2.3 32 dst 2.2.2.5 33 vlan 0x1 flexbytes (0x88,0x48) fwd queue 1 fd_id 1
+ testpmd> flow_director_filter 0 add flow ipv4-udp src 2.2.2.3 32 dst 2.2.2.5 33 vlan 0x1 flexbytes (0x88,0x48) fwd pf queue 1 fd_id 1
For example, add an ipv4-other flow type filter:
.. code-block:: console
- testpmd> flow_director_filter 0 add flow ipv4-other src 2.2.2.3 dst 2.2.2.5 vlan 0x1 flexbytes (0x88,0x48) fwd queue 1 fd_id 1
+ testpmd> flow_director_filter 0 add flow ipv4-other src 2.2.2.3 dst 2.2.2.5 vlan 0x1 flexbytes (0x88,0x48) fwd pf queue 1 fd_id 1
flush_flow_director
~~~~~~~~~~~~~~~~~~~
--
2.4.0
^ permalink raw reply [relevance 19%]
* Re: [dpdk-dev] [PATCH v2] hash: rename unused field to "reserved"
@ 2015-09-22 23:01 3% ` Stephen Hemminger
0 siblings, 0 replies; 200+ results
From: Stephen Hemminger @ 2015-09-22 23:01 UTC (permalink / raw)
To: Bruce Richardson; +Cc: dev
On Mon, 13 Jul 2015 17:38:45 +0100
Bruce Richardson <bruce.richardson@intel.com> wrote:
> The cuckoo hash has a fixed number of entries per bucket, so the
> configuration parameter for this is unused. We change this field in the
> parameters struct to "reserved" to indicate that there is now no such
> parameter value, while at the same time keeping ABI consistency.
>
> Fixes: 48a399119619 ("hash: replace with cuckoo hash implementation")
>
> Suggested-by: Thomas Monjalon <thomas.monjalon@6wind.com>
> Signed-off-by: Bruce Richardson <bruce.richardson@intel.com>
I am sorry this patch went into 2.1 because it broke source code
compatibility. To me having source code recompile compatibility is MORE
important than ABI compatibility.
^ permalink raw reply [relevance 3%]
* [dpdk-dev] [PATCH 0/3] Minor abi-validator improvements
@ 2015-09-24 7:34 9% Panu Matilainen
2015-09-24 7:34 19% ` [dpdk-dev] [PATCH 1/3] scripts: permit passing extra compiler & linker flags to ABI validator Panu Matilainen
` (3 more replies)
0 siblings, 4 replies; 200+ results
From: Panu Matilainen @ 2015-09-24 7:34 UTC (permalink / raw)
To: dev
For giggles, I tried running the ABI validator between 2.0 and 2.1 on
my Fedora 22 laptop; it didn't work due to various build failures.
With this patch series the following now succeeds:
EXTRA_CFLAGS="-Wno-error" scripts/validate-abi.sh v2.0.0 v2.1.0 x86_64-native-linuxapp-gcc
Panu Matilainen (3):
scripts: permit passing extra compiler & linker flags to ABI validator
scripts: move two identical config fixups into a function
scripts: teach ABI validator about CONFIG_RTE_KNI_KMOD
scripts/validate-abi.sh | 30 +++++++++++++++---------------
1 file changed, 15 insertions(+), 15 deletions(-)
--
2.4.3
^ permalink raw reply [relevance 9%]
* [dpdk-dev] [PATCH 1/3] scripts: permit passing extra compiler & linker flags to ABI validator
2015-09-24 7:34 9% [dpdk-dev] [PATCH 0/3] Minor abi-validator improvements Panu Matilainen
@ 2015-09-24 7:34 19% ` Panu Matilainen
2015-09-24 7:34 14% ` [dpdk-dev] [PATCH 2/3] scripts: move two identical config fixups into a function Panu Matilainen
` (2 subsequent siblings)
3 siblings, 0 replies; 200+ results
From: Panu Matilainen @ 2015-09-24 7:34 UTC (permalink / raw)
To: dev
It's sometimes necessary to disable warnings etc. to get an older
version of the code to build.
Signed-off-by: Panu Matilainen <pmatilai@redhat.com>
---
scripts/validate-abi.sh | 4 ++--
1 file changed, 2 insertions(+), 2 deletions(-)
diff --git a/scripts/validate-abi.sh b/scripts/validate-abi.sh
index 4476433..b9c9989 100755
--- a/scripts/validate-abi.sh
+++ b/scripts/validate-abi.sh
@@ -164,8 +164,8 @@ sed -i -e"$ a\CONFIG_RTE_LIBRTE_KNI=n" config/defconfig_$TARGET
# Checking abi compliance relies on using the dwarf information in
# The shared objects. Thats only included in the DSO's if we build
# with -g
-export EXTRA_CFLAGS=-g
-export EXTRA_LDFLAGS=-g
+export EXTRA_CFLAGS="$EXTRA_CFLAGS -g"
+export EXTRA_LDFLAGS="$EXTRA_LDFLAGS -g"
# Now configure the build
log "INFO" "Configuring DPDK $TAG1"
--
2.4.3
^ permalink raw reply [relevance 19%]
* [dpdk-dev] [PATCH 2/3] scripts: move two identical config fixups into a function
2015-09-24 7:34 9% [dpdk-dev] [PATCH 0/3] Minor abi-validator improvements Panu Matilainen
2015-09-24 7:34 19% ` [dpdk-dev] [PATCH 1/3] scripts: permit passing extra compiler & linker flags to ABI validator Panu Matilainen
@ 2015-09-24 7:34 14% ` Panu Matilainen
2015-09-24 7:42 0% ` Panu Matilainen
2015-09-24 7:34 14% ` [dpdk-dev] [PATCH 3/3] scripts: teach ABI validator about CONFIG_RTE_KNI_KMOD Panu Matilainen
2015-09-24 7:50 9% ` [dpdk-dev] [PATCH 0/3 v2] Minor abi-validator improvements Panu Matilainen
3 siblings, 1 reply; 200+ results
From: Panu Matilainen @ 2015-09-24 7:34 UTC (permalink / raw)
To: dev
Signed-off-by: Panu Matilainen <pmatilai@redhat.com>
---
scripts/validate-abi.sh | 25 ++++++++++++-------------
1 file changed, 12 insertions(+), 13 deletions(-)
diff --git a/scripts/validate-abi.sh b/scripts/validate-abi.sh
index b9c9989..4b555de 100755
--- a/scripts/validate-abi.sh
+++ b/scripts/validate-abi.sh
@@ -81,6 +81,15 @@ cleanup_and_exit() {
exit $1
}
+# Make sure we configure SHARED libraries
+# Also turn off IGB and KNI as those require kernel headers to build
+fixup_config() {
+ sed -i -e"$ a\CONFIG_RTE_BUILD_SHARED_LIB=y" config/defconfig_$TARGET
+ sed -i -e"$ a\CONFIG_RTE_NEXT_ABI=n" config/defconfig_$TARGET
+ sed -i -e"$ a\CONFIG_RTE_EAL_IGB_UIO=n" config/defconfig_$TARGET
+ sed -i -e"$ a\CONFIG_RTE_LIBRTE_KNI=n" config/defconfig_$TARGET
+}
+
###########################################
#START
############################################
@@ -154,12 +163,7 @@ log "INFO" "Checking out version $TAG1 of the dpdk"
# Move to the old version of the tree
git checkout $TAG1
-# Make sure we configure SHARED libraries
-# Also turn off IGB and KNI as those require kernel headers to build
-sed -i -e"$ a\CONFIG_RTE_BUILD_SHARED_LIB=y" config/defconfig_$TARGET
-sed -i -e"$ a\CONFIG_RTE_NEXT_ABI=n" config/defconfig_$TARGET
-sed -i -e"$ a\CONFIG_RTE_EAL_IGB_UIO=n" config/defconfig_$TARGET
-sed -i -e"$ a\CONFIG_RTE_LIBRTE_KNI=n" config/defconfig_$TARGET
+fixup_config
# Checking abi compliance relies on using the dwarf information in
# The shared objects. Thats only included in the DSO's if we build
@@ -167,6 +171,8 @@ sed -i -e"$ a\CONFIG_RTE_LIBRTE_KNI=n" config/defconfig_$TARGET
export EXTRA_CFLAGS="$EXTRA_CFLAGS -g"
export EXTRA_LDFLAGS="$EXTRA_LDFLAGS -g"
+fixup_config
+
# Now configure the build
log "INFO" "Configuring DPDK $TAG1"
make config T=$TARGET O=$TARGET > $VERBOSE 2>&1
@@ -196,13 +202,6 @@ git reset --hard
log "INFO" "Checking out version $TAG2 of the dpdk"
git checkout $TAG2
-# Make sure we configure SHARED libraries
-# Also turn off IGB and KNI as those require kernel headers to build
-sed -i -e"$ a\CONFIG_RTE_BUILD_SHARED_LIB=y" config/defconfig_$TARGET
-sed -i -e"$ a\CONFIG_RTE_NEXT_ABI=n" config/defconfig_$TARGET
-sed -i -e"$ a\CONFIG_RTE_EAL_IGB_UIO=n" config/defconfig_$TARGET
-sed -i -e"$ a\CONFIG_RTE_LIBRTE_KNI=n" config/defconfig_$TARGET
-
# Now configure the build
log "INFO" "Configuring DPDK $TAG2"
make config T=$TARGET O=$TARGET > $VERBOSE 2>&1
--
2.4.3
^ permalink raw reply [relevance 14%]
* [dpdk-dev] [PATCH 3/3] scripts: teach ABI validator about CONFIG_RTE_KNI_KMOD
2015-09-24 7:34 9% [dpdk-dev] [PATCH 0/3] Minor abi-validator improvements Panu Matilainen
2015-09-24 7:34 19% ` [dpdk-dev] [PATCH 1/3] scripts: permit passing extra compiler & linker flags to ABI validator Panu Matilainen
2015-09-24 7:34 14% ` [dpdk-dev] [PATCH 2/3] scripts: move two identical config fixups into a function Panu Matilainen
@ 2015-09-24 7:34 14% ` Panu Matilainen
2015-09-24 7:50 9% ` [dpdk-dev] [PATCH 0/3 v2] Minor abi-validator improvements Panu Matilainen
3 siblings, 0 replies; 200+ results
From: Panu Matilainen @ 2015-09-24 7:34 UTC (permalink / raw)
To: dev
The validator attempts to disable all kernel modules, but since
commit 36080ff96b0eb37a6da8c4fec1a2f8a57dfadf5b it fails to do so
for KNI, causing the build stage to fail if kernel headers are missing.
With the introduction of CONFIG_RTE_KNI_KMOD, CONFIG_RTE_LIBRTE_KNI=n
can eventually be dropped, but it is left around for now as it is
needed with pre-2.1 versions.
Signed-off-by: Panu Matilainen <pmatilai@redhat.com>
---
scripts/validate-abi.sh | 1 +
1 file changed, 1 insertion(+)
diff --git a/scripts/validate-abi.sh b/scripts/validate-abi.sh
index 4b555de..6be67d0 100755
--- a/scripts/validate-abi.sh
+++ b/scripts/validate-abi.sh
@@ -88,6 +88,7 @@ fixup_config() {
sed -i -e"$ a\CONFIG_RTE_NEXT_ABI=n" config/defconfig_$TARGET
sed -i -e"$ a\CONFIG_RTE_EAL_IGB_UIO=n" config/defconfig_$TARGET
sed -i -e"$ a\CONFIG_RTE_LIBRTE_KNI=n" config/defconfig_$TARGET
+ sed -i -e"$ a\CONFIG_RTE_KNI_KMOD=n" config/defconfig_$TARGET
}
###########################################
--
2.4.3
^ permalink raw reply [relevance 14%]
* Re: [dpdk-dev] [PATCH 2/3] scripts: move two identical config fixups into a function
2015-09-24 7:34 14% ` [dpdk-dev] [PATCH 2/3] scripts: move two identical config fixups into a function Panu Matilainen
@ 2015-09-24 7:42 0% ` Panu Matilainen
0 siblings, 0 replies; 200+ results
From: Panu Matilainen @ 2015-09-24 7:42 UTC (permalink / raw)
To: dev
On 09/24/2015 10:34 AM, Panu Matilainen wrote:
> # Checking abi compliance relies on using the dwarf information in
> # The shared objects. Thats only included in the DSO's if we build
> @@ -167,6 +171,8 @@ sed -i -e"$ a\CONFIG_RTE_LIBRTE_KNI=n" config/defconfig_$TARGET
> export EXTRA_CFLAGS="$EXTRA_CFLAGS -g"
> export EXTRA_LDFLAGS="$EXTRA_LDFLAGS -g"
>
> +fixup_config
> +
> # Now configure the build
> log "INFO" "Configuring DPDK $TAG1"
> make config T=$TARGET O=$TARGET > $VERBOSE 2>&1
> @@ -196,13 +202,6 @@ git reset --hard
> log "INFO" "Checking out version $TAG2 of the dpdk"
> git checkout $TAG2
>
> -# Make sure we configure SHARED libraries
> -# Also turn off IGB and KNI as those require kernel headers to build
> -sed -i -e"$ a\CONFIG_RTE_BUILD_SHARED_LIB=y" config/defconfig_$TARGET
> -sed -i -e"$ a\CONFIG_RTE_NEXT_ABI=n" config/defconfig_$TARGET
> -sed -i -e"$ a\CONFIG_RTE_EAL_IGB_UIO=n" config/defconfig_$TARGET
> -sed -i -e"$ a\CONFIG_RTE_LIBRTE_KNI=n" config/defconfig_$TARGET
> -
Doh. Obvious brain damage here, will send v2 after fetching apparently
needed additional coffee.
- Panu -
^ permalink raw reply [relevance 0%]
* [dpdk-dev] [PATCH 0/3 v2] Minor abi-validator improvements
2015-09-24 7:34 9% [dpdk-dev] [PATCH 0/3] Minor abi-validator improvements Panu Matilainen
` (2 preceding siblings ...)
2015-09-24 7:34 14% ` [dpdk-dev] [PATCH 3/3] scripts: teach ABI validator about CONFIG_RTE_KNI_KMOD Panu Matilainen
@ 2015-09-24 7:50 9% ` Panu Matilainen
2015-09-24 7:50 19% ` [dpdk-dev] [PATCH 1/3] scripts: permit passing extra compiler & linker flags to ABI validator Panu Matilainen
` (3 more replies)
3 siblings, 4 replies; 200+ results
From: Panu Matilainen @ 2015-09-24 7:50 UTC (permalink / raw)
To: dev
For giggles, I tried running the ABI validator between 2.0 and 2.1 on
my Fedora 22 laptop; it didn't work due to various build failures.
With this patch series the following now succeeds:
EXTRA_CFLAGS="-Wno-error" scripts/validate-abi.sh v2.0.0 v2.1.0 x86_64-native-linuxapp-gcc
Panu Matilainen (3):
scripts: permit passing extra compiler & linker flags to ABI validator
scripts: move two identical config fixups into a function
scripts: teach ABI validator about CONFIG_RTE_KNI_KMOD
scripts/validate-abi.sh | 28 ++++++++++++++--------------
1 file changed, 14 insertions(+), 14 deletions(-)
--
2.4.3
^ permalink raw reply [relevance 9%]
* [dpdk-dev] [PATCH 1/3] scripts: permit passing extra compiler & linker flags to ABI validator
2015-09-24 7:50 9% ` [dpdk-dev] [PATCH 0/3 v2] Minor abi-validator improvements Panu Matilainen
@ 2015-09-24 7:50 19% ` Panu Matilainen
2015-09-24 7:50 14% ` [dpdk-dev] [PATCH 2/3] scripts: move two identical config fixups into a function Panu Matilainen
` (2 subsequent siblings)
3 siblings, 0 replies; 200+ results
From: Panu Matilainen @ 2015-09-24 7:50 UTC (permalink / raw)
To: dev
It's sometimes necessary to disable warnings etc. to get an older
version of the code to build.
Signed-off-by: Panu Matilainen <pmatilai@redhat.com>
---
scripts/validate-abi.sh | 4 ++--
1 file changed, 2 insertions(+), 2 deletions(-)
diff --git a/scripts/validate-abi.sh b/scripts/validate-abi.sh
index 4476433..b9c9989 100755
--- a/scripts/validate-abi.sh
+++ b/scripts/validate-abi.sh
@@ -164,8 +164,8 @@ sed -i -e"$ a\CONFIG_RTE_LIBRTE_KNI=n" config/defconfig_$TARGET
# Checking abi compliance relies on using the dwarf information in
# The shared objects. Thats only included in the DSO's if we build
# with -g
-export EXTRA_CFLAGS=-g
-export EXTRA_LDFLAGS=-g
+export EXTRA_CFLAGS="$EXTRA_CFLAGS -g"
+export EXTRA_LDFLAGS="$EXTRA_LDFLAGS -g"
# Now configure the build
log "INFO" "Configuring DPDK $TAG1"
--
2.4.3
^ permalink raw reply [relevance 19%]
* [dpdk-dev] [PATCH 2/3] scripts: move two identical config fixups into a function
2015-09-24 7:50 9% ` [dpdk-dev] [PATCH 0/3 v2] Minor abi-validator improvements Panu Matilainen
2015-09-24 7:50 19% ` [dpdk-dev] [PATCH 1/3] scripts: permit passing extra compiler & linker flags to ABI validator Panu Matilainen
@ 2015-09-24 7:50 14% ` Panu Matilainen
2015-09-24 7:50 14% ` [dpdk-dev] [PATCH 3/3] scripts: teach ABI validator about CONFIG_RTE_KNI_KMOD Panu Matilainen
2015-09-24 10:23 4% ` [dpdk-dev] [PATCH 0/3 v2] Minor abi-validator improvements Neil Horman
3 siblings, 0 replies; 200+ results
From: Panu Matilainen @ 2015-09-24 7:50 UTC (permalink / raw)
To: dev
Signed-off-by: Panu Matilainen <pmatilai@redhat.com>
---
scripts/validate-abi.sh | 23 +++++++++++------------
1 file changed, 11 insertions(+), 12 deletions(-)
diff --git a/scripts/validate-abi.sh b/scripts/validate-abi.sh
index b9c9989..12946d4 100755
--- a/scripts/validate-abi.sh
+++ b/scripts/validate-abi.sh
@@ -81,6 +81,15 @@ cleanup_and_exit() {
exit $1
}
+# Make sure we configure SHARED libraries
+# Also turn off IGB and KNI as those require kernel headers to build
+fixup_config() {
+ sed -i -e"$ a\CONFIG_RTE_BUILD_SHARED_LIB=y" config/defconfig_$TARGET
+ sed -i -e"$ a\CONFIG_RTE_NEXT_ABI=n" config/defconfig_$TARGET
+ sed -i -e"$ a\CONFIG_RTE_EAL_IGB_UIO=n" config/defconfig_$TARGET
+ sed -i -e"$ a\CONFIG_RTE_LIBRTE_KNI=n" config/defconfig_$TARGET
+}
+
###########################################
#START
############################################
@@ -154,12 +163,7 @@ log "INFO" "Checking out version $TAG1 of the dpdk"
# Move to the old version of the tree
git checkout $TAG1
-# Make sure we configure SHARED libraries
-# Also turn off IGB and KNI as those require kernel headers to build
-sed -i -e"$ a\CONFIG_RTE_BUILD_SHARED_LIB=y" config/defconfig_$TARGET
-sed -i -e"$ a\CONFIG_RTE_NEXT_ABI=n" config/defconfig_$TARGET
-sed -i -e"$ a\CONFIG_RTE_EAL_IGB_UIO=n" config/defconfig_$TARGET
-sed -i -e"$ a\CONFIG_RTE_LIBRTE_KNI=n" config/defconfig_$TARGET
+fixup_config
# Checking abi compliance relies on using the dwarf information in
# The shared objects. Thats only included in the DSO's if we build
@@ -196,12 +200,7 @@ git reset --hard
log "INFO" "Checking out version $TAG2 of the dpdk"
git checkout $TAG2
-# Make sure we configure SHARED libraries
-# Also turn off IGB and KNI as those require kernel headers to build
-sed -i -e"$ a\CONFIG_RTE_BUILD_SHARED_LIB=y" config/defconfig_$TARGET
-sed -i -e"$ a\CONFIG_RTE_NEXT_ABI=n" config/defconfig_$TARGET
-sed -i -e"$ a\CONFIG_RTE_EAL_IGB_UIO=n" config/defconfig_$TARGET
-sed -i -e"$ a\CONFIG_RTE_LIBRTE_KNI=n" config/defconfig_$TARGET
+fixup_config
# Now configure the build
log "INFO" "Configuring DPDK $TAG2"
--
2.4.3
^ permalink raw reply [relevance 14%]
* [dpdk-dev] [PATCH 3/3] scripts: teach ABI validator about CONFIG_RTE_KNI_KMOD
2015-09-24 7:50 9% ` [dpdk-dev] [PATCH 0/3 v2] Minor abi-validator improvements Panu Matilainen
2015-09-24 7:50 19% ` [dpdk-dev] [PATCH 1/3] scripts: permit passing extra compiler & linker flags to ABI validator Panu Matilainen
2015-09-24 7:50 14% ` [dpdk-dev] [PATCH 2/3] scripts: move two identical config fixups into a function Panu Matilainen
@ 2015-09-24 7:50 14% ` Panu Matilainen
2015-09-24 10:23 4% ` [dpdk-dev] [PATCH 0/3 v2] Minor abi-validator improvements Neil Horman
3 siblings, 0 replies; 200+ results
From: Panu Matilainen @ 2015-09-24 7:50 UTC (permalink / raw)
To: dev
The validator attempts to disable all kernel modules, but since
commit 36080ff96b0eb37a6da8c4fec1a2f8a57dfadf5b it fails to do so
for KNI, causing the build stage to fail if kernel headers are missing.
With the introduction of CONFIG_RTE_KNI_KMOD, CONFIG_RTE_LIBRTE_KNI=n
can eventually be dropped, but it is left around for now as it is
needed with pre-2.1 versions.
Signed-off-by: Panu Matilainen <pmatilai@redhat.com>
---
scripts/validate-abi.sh | 1 +
1 file changed, 1 insertion(+)
diff --git a/scripts/validate-abi.sh b/scripts/validate-abi.sh
index 12946d4..cbf9d7f 100755
--- a/scripts/validate-abi.sh
+++ b/scripts/validate-abi.sh
@@ -88,6 +88,7 @@ fixup_config() {
sed -i -e"$ a\CONFIG_RTE_NEXT_ABI=n" config/defconfig_$TARGET
sed -i -e"$ a\CONFIG_RTE_EAL_IGB_UIO=n" config/defconfig_$TARGET
sed -i -e"$ a\CONFIG_RTE_LIBRTE_KNI=n" config/defconfig_$TARGET
+ sed -i -e"$ a\CONFIG_RTE_KNI_KMOD=n" config/defconfig_$TARGET
}
###########################################
--
2.4.3
^ permalink raw reply [relevance 14%]
* Re: [dpdk-dev] [PATCH 0/3 v2] Minor abi-validator improvements
2015-09-24 7:50 9% ` [dpdk-dev] [PATCH 0/3 v2] Minor abi-validator improvements Panu Matilainen
` (2 preceding siblings ...)
2015-09-24 7:50 14% ` [dpdk-dev] [PATCH 3/3] scripts: teach ABI validator about CONFIG_RTE_KNI_KMOD Panu Matilainen
@ 2015-09-24 10:23 4% ` Neil Horman
3 siblings, 0 replies; 200+ results
From: Neil Horman @ 2015-09-24 10:23 UTC (permalink / raw)
To: Panu Matilainen; +Cc: dev
On Thu, Sep 24, 2015 at 10:50:56AM +0300, Panu Matilainen wrote:
> For giggles, tried running abi-validator between 2.0 and 2.1 on
> my Fedora 22 laptop, didn't work due to various build failures.
> With this patch series the following now succeeds:
>
> EXTRA_CFLAGS="-Wno-error" scripts/validate-abi.sh v2.0.0 v2.1.0 x86_64-native-linuxapp-gcc
>
> Panu Matilainen (3):
> scripts: permit passing extra compiler & linker flags to ABI validator
> scripts: move two identical config fixups into a function
> scripts: teach ABI validator about CONFIG_RTE_KNI_KMOD
>
> scripts/validate-abi.sh | 28 ++++++++++++++--------------
> 1 file changed, 14 insertions(+), 14 deletions(-)
>
> --
> 2.4.3
>
>
series
Acked-by: Neil Horman <nhorman@tuxdriver.com>
^ permalink raw reply [relevance 4%]
* [dpdk-dev] [PATCH 0/8] librte_table: add key_mask parameter to 8-byte key
@ 2015-09-25 22:33 3% roy.fan.zhang
2015-09-25 22:33 3% ` [dpdk-dev] [PATCH 1/8] librte_table: add key_mask parameter to 8-byte key hash parameters roy.fan.zhang
` (3 more replies)
0 siblings, 4 replies; 200+ results
From: roy.fan.zhang @ 2015-09-25 22:33 UTC (permalink / raw)
To: dev
From: Fan Zhang <roy.fan.zhang@intel.com>
This patchset links to the ABI change announced for librte_table. A key_mask
parameter has been added to the hash table parameter structures for the
8-byte key and 16-byte key extendible bucket and LRU tables.
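As a minimal usage sketch (not taken from the patch set): an 8-byte key
where only the first four bytes take part in the lookup; passing a NULL
key_mask keeps the previous behaviour of matching the full key.

#include <rte_table_hash.h>

static uint8_t key8_mask[8] = {
	0xFF, 0xFF, 0xFF, 0xFF,  /* bytes compared on lookup */
	0x00, 0x00, 0x00, 0x00   /* bytes ignored on lookup */
};

static struct rte_table_hash_key8_lru_params key8_params = {
	.key_offset = 32,        /* assumed key location in the packet meta-data */
	.key_mask = key8_mask,   /* new field added by this series */
	/* entry count, hash function and seed omitted from this sketch */
};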
Fan Zhang (8):
librte_table: add key_mask parameter to 8-byte key hash parameters
librte_table: add key_mask parameter to 16-byte key hash parameters
librte_table: add 16 byte hash table operations with computed lookup
app/test: modify app/test_table_combined and app/test_table_tables
app/test-pipeline: modify pipeline test
example/ip_pipeline: add parse_hex_string for internal use
example/ip_pipeline/pipeline: update flow_classification pipeline
librte_table: modify release notes and deprecation notice
app/test-pipeline/pipeline_hash.c | 4 +
app/test/test_table_combined.c | 4 +
app/test/test_table_tables.c | 6 +-
doc/guides/rel_notes/deprecation.rst | 3 -
doc/guides/rel_notes/release_2_2.rst | 5 +-
examples/ip_pipeline/config_parse.c | 70 ++++
examples/ip_pipeline/pipeline.h | 4 +
.../pipeline/pipeline_flow_classification_be.c | 56 ++-
lib/librte_table/Makefile | 2 +-
lib/librte_table/rte_table_hash.h | 20 +
lib/librte_table/rte_table_hash_key16.c | 411 ++++++++++++++++++++-
lib/librte_table/rte_table_hash_key8.c | 54 ++-
12 files changed, 608 insertions(+), 31 deletions(-)
--
2.1.0
^ permalink raw reply [relevance 3%]
* [dpdk-dev] [PATCH 2/8] librte_table: add key_mask parameter to 16-byte key hash parameters
2015-09-25 22:33 3% [dpdk-dev] [PATCH 0/8] librte_table: add key_mask parameter to 8-byte key roy.fan.zhang
2015-09-25 22:33 3% ` [dpdk-dev] [PATCH 1/8] librte_table: add key_mask parameter to 8-byte key hash parameters roy.fan.zhang
@ 2015-09-25 22:33 3% ` roy.fan.zhang
2015-09-25 22:33 5% ` [dpdk-dev] [PATCH 8/8] librte_table: modify release notes and deprecation notice roy.fan.zhang
2015-09-28 20:07 0% ` [dpdk-dev] [PATCH 0/8] librte_table: add key_mask parameter to 8-byte key Dumitrescu, Cristian
3 siblings, 0 replies; 200+ results
From: roy.fan.zhang @ 2015-09-25 22:33 UTC (permalink / raw)
To: dev
From: Fan Zhang <roy.fan.zhang@intel.com>
This patch relates to the ABI change proposed for librte_table. A key_mask
parameter is added for the 16-byte key extendible bucket and LRU tables.
Signed-off-by: Fan Zhang <roy.fan.zhang@intel.com>
---
lib/librte_table/rte_table_hash.h | 6 ++++
lib/librte_table/rte_table_hash_key16.c | 53 ++++++++++++++++++++++++++++-----
2 files changed, 52 insertions(+), 7 deletions(-)
diff --git a/lib/librte_table/rte_table_hash.h b/lib/librte_table/rte_table_hash.h
index ef65355..e2c60e1 100644
--- a/lib/librte_table/rte_table_hash.h
+++ b/lib/librte_table/rte_table_hash.h
@@ -263,6 +263,9 @@ struct rte_table_hash_key16_lru_params {
/** Byte offset within packet meta-data where the key is located */
uint32_t key_offset;
+
+ /** Bit-mask to be AND-ed to the key on lookup */
+ uint8_t *key_mask;
};
/** LRU hash table operations for pre-computed key signature */
@@ -290,6 +293,9 @@ struct rte_table_hash_key16_ext_params {
/** Byte offset within packet meta-data where the key is located */
uint32_t key_offset;
+
+ /** Bit-mask to be AND-ed to the key on lookup */
+ uint8_t *key_mask;
};
/** Extendible bucket operations for pre-computed key signature */
diff --git a/lib/librte_table/rte_table_hash_key16.c b/lib/librte_table/rte_table_hash_key16.c
index f6a3306..ffd3249 100644
--- a/lib/librte_table/rte_table_hash_key16.c
+++ b/lib/librte_table/rte_table_hash_key16.c
@@ -85,6 +85,7 @@ struct rte_table_hash {
uint32_t bucket_size;
uint32_t signature_offset;
uint32_t key_offset;
+ uint64_t key_mask[2];
rte_table_hash_op_hash f_hash;
uint64_t seed;
@@ -164,6 +165,14 @@ rte_table_hash_create_key16_lru(void *params,
f->f_hash = p->f_hash;
f->seed = p->seed;
+ if (p->key_mask != NULL) {
+ f->key_mask[0] = ((uint64_t *)p->key_mask)[0];
+ f->key_mask[1] = ((uint64_t *)p->key_mask)[1];
+ } else {
+ f->key_mask[0] = 0xFFFFFFFFFFFFFFFFLLU;
+ f->key_mask[1] = 0xFFFFFFFFFFFFFFFFLLU;
+ }
+
for (i = 0; i < n_buckets; i++) {
struct rte_bucket_4_16 *bucket;
@@ -384,6 +393,14 @@ rte_table_hash_create_key16_ext(void *params,
for (i = 0; i < n_buckets_ext; i++)
f->stack[i] = i;
+ if (p->key_mask != NULL) {
+ f->key_mask[0] = (((uint64_t *)p->key_mask)[0]);
+ f->key_mask[1] = (((uint64_t *)p->key_mask)[1]);
+ } else {
+ f->key_mask[0] = 0xFFFFFFFFFFFFFFFFLLU;
+ f->key_mask[1] = 0xFFFFFFFFFFFFFFFFLLU;
+ }
+
return f;
}
@@ -609,11 +626,14 @@ rte_table_hash_entry_delete_key16_ext(
void *a; \
uint64_t pkt_mask; \
uint64_t *key; \
+ uint64_t hash_key_buffer[2]; \
uint32_t pos; \
\
key = RTE_MBUF_METADATA_UINT64_PTR(mbuf2, f->key_offset);\
+ hash_key_buffer[0] = key[0] & f->key_mask[0]; \
+ hash_key_buffer[1] = key[1] & f->key_mask[1]; \
\
- lookup_key16_cmp(key, bucket2, pos); \
+ lookup_key16_cmp(hash_key_buffer, bucket2, pos); \
\
pkt_mask = (bucket2->signature[pos] & 1LLU) << pkt2_index;\
pkts_mask_out |= pkt_mask; \
@@ -631,11 +651,14 @@ rte_table_hash_entry_delete_key16_ext(
void *a; \
uint64_t pkt_mask, bucket_mask; \
uint64_t *key; \
+ uint64_t hash_key_buffer[2]; \
uint32_t pos; \
\
key = RTE_MBUF_METADATA_UINT64_PTR(mbuf2, f->key_offset);\
+ hash_key_buffer[0] = key[0] & f->key_mask[0]; \
+ hash_key_buffer[1] = key[1] & f->key_mask[1]; \
\
- lookup_key16_cmp(key, bucket2, pos); \
+ lookup_key16_cmp(hash_key_buffer, bucket2, pos); \
\
pkt_mask = (bucket2->signature[pos] & 1LLU) << pkt2_index;\
pkts_mask_out |= pkt_mask; \
@@ -658,12 +681,15 @@ rte_table_hash_entry_delete_key16_ext(
void *a; \
uint64_t pkt_mask, bucket_mask; \
uint64_t *key; \
+ uint64_t hash_key_buffer[2]; \
uint32_t pos; \
\
bucket = buckets[pkt_index]; \
key = keys[pkt_index]; \
+ hash_key_buffer[0] = key[0] & f->key_mask[0]; \
+ hash_key_buffer[1] = key[1] & f->key_mask[1]; \
\
- lookup_key16_cmp(key, bucket, pos); \
+ lookup_key16_cmp(hash_key_buffer, bucket, pos); \
\
pkt_mask = (bucket->signature[pos] & 1LLU) << pkt_index;\
pkts_mask_out |= pkt_mask; \
@@ -749,13 +775,19 @@ rte_table_hash_entry_delete_key16_ext(
void *a20, *a21; \
uint64_t pkt20_mask, pkt21_mask; \
uint64_t *key20, *key21; \
+ uint64_t hash_key_buffer20[2]; \
+ uint64_t hash_key_buffer21[2]; \
uint32_t pos20, pos21; \
\
key20 = RTE_MBUF_METADATA_UINT64_PTR(mbuf20, f->key_offset);\
key21 = RTE_MBUF_METADATA_UINT64_PTR(mbuf21, f->key_offset);\
+ hash_key_buffer20[0] = key20[0] & f->key_mask[0]; \
+ hash_key_buffer20[1] = key20[1] & f->key_mask[1]; \
+ hash_key_buffer21[0] = key21[0] & f->key_mask[0]; \
+ hash_key_buffer21[1] = key21[1] & f->key_mask[1]; \
\
- lookup_key16_cmp(key20, bucket20, pos20); \
- lookup_key16_cmp(key21, bucket21, pos21); \
+ lookup_key16_cmp(hash_key_buffer20, bucket20, pos20); \
+ lookup_key16_cmp(hash_key_buffer21, bucket21, pos21); \
\
pkt20_mask = (bucket20->signature[pos20] & 1LLU) << pkt20_index;\
pkt21_mask = (bucket21->signature[pos21] & 1LLU) << pkt21_index;\
@@ -778,13 +810,19 @@ rte_table_hash_entry_delete_key16_ext(
void *a20, *a21; \
uint64_t pkt20_mask, pkt21_mask, bucket20_mask, bucket21_mask;\
uint64_t *key20, *key21; \
+ uint64_t hash_key_buffer20[2]; \
+ uint64_t hash_key_buffer21[2]; \
uint32_t pos20, pos21; \
\
key20 = RTE_MBUF_METADATA_UINT64_PTR(mbuf20, f->key_offset);\
key21 = RTE_MBUF_METADATA_UINT64_PTR(mbuf21, f->key_offset);\
+ hash_key_buffer20[0] = key20[0] & f->key_mask[0]; \
+ hash_key_buffer20[1] = key20[1] & f->key_mask[1]; \
+ hash_key_buffer21[0] = key21[0] & f->key_mask[0]; \
+ hash_key_buffer21[1] = key21[1] & f->key_mask[1]; \
\
- lookup_key16_cmp(key20, bucket20, pos20); \
- lookup_key16_cmp(key21, bucket21, pos21); \
+ lookup_key16_cmp(hash_key_buffer20, bucket20, pos20); \
+ lookup_key16_cmp(hash_key_buffer21, bucket21, pos21); \
\
pkt20_mask = (bucket20->signature[pos20] & 1LLU) << pkt20_index;\
pkt21_mask = (bucket21->signature[pos21] & 1LLU) << pkt21_index;\
@@ -1115,3 +1153,4 @@ struct rte_table_ops rte_table_hash_key16_ext_ops = {
.f_lookup = rte_table_hash_lookup_key16_ext,
.f_stats = rte_table_hash_key16_stats_read,
};
+
--
2.1.0
^ permalink raw reply [relevance 3%]
* [dpdk-dev] [PATCH 1/8] librte_table: add key_mask parameter to 8-byte key hash parameters
2015-09-25 22:33 3% [dpdk-dev] [PATCH 0/8] librte_table: add key_mask parameter to 8-byte key roy.fan.zhang
@ 2015-09-25 22:33 3% ` roy.fan.zhang
2015-09-25 22:33 3% ` [dpdk-dev] [PATCH 2/8] librte_table: add key_mask parameter to 16-byte " roy.fan.zhang
` (2 subsequent siblings)
3 siblings, 0 replies; 200+ results
From: roy.fan.zhang @ 2015-09-25 22:33 UTC (permalink / raw)
To: dev
From: Fan Zhang <roy.fan.zhang@intel.com>
This patch relates to the ABI change proposed for librte_table. A key_mask
parameter is added for the 8-byte key extendible bucket and LRU tables.
Signed-off-by: Fan Zhang <roy.fan.zhang@intel.com>
---
lib/librte_table/rte_table_hash.h | 6 ++++
lib/librte_table/rte_table_hash_key8.c | 54 +++++++++++++++++++++++++++-------
2 files changed, 50 insertions(+), 10 deletions(-)
diff --git a/lib/librte_table/rte_table_hash.h b/lib/librte_table/rte_table_hash.h
index 9181942..ef65355 100644
--- a/lib/librte_table/rte_table_hash.h
+++ b/lib/librte_table/rte_table_hash.h
@@ -196,6 +196,9 @@ struct rte_table_hash_key8_lru_params {
/** Byte offset within packet meta-data where the key is located */
uint32_t key_offset;
+
+ /** Bit-mask to be AND-ed to the key on lookup */
+ uint8_t *key_mask;
};
/** LRU hash table operations for pre-computed key signature */
@@ -226,6 +229,9 @@ struct rte_table_hash_key8_ext_params {
/** Byte offset within packet meta-data where the key is located */
uint32_t key_offset;
+
+ /** Bit-mask to be AND-ed to the key on lookup */
+ uint8_t *key_mask;
};
/** Extendible bucket hash table operations for pre-computed key signature */
diff --git a/lib/librte_table/rte_table_hash_key8.c b/lib/librte_table/rte_table_hash_key8.c
index b351a49..ccb20cf 100644
--- a/lib/librte_table/rte_table_hash_key8.c
+++ b/lib/librte_table/rte_table_hash_key8.c
@@ -82,6 +82,7 @@ struct rte_table_hash {
uint32_t bucket_size;
uint32_t signature_offset;
uint32_t key_offset;
+ uint64_t key_mask;
rte_table_hash_op_hash f_hash;
uint64_t seed;
@@ -160,6 +161,11 @@ rte_table_hash_create_key8_lru(void *params, int socket_id, uint32_t entry_size)
f->f_hash = p->f_hash;
f->seed = p->seed;
+ if (p->key_mask != NULL)
+ f->key_mask = ((uint64_t *)p->key_mask)[0];
+ else
+ f->key_mask = 0xFFFFFFFFFFFFFFFFLLU;
+
for (i = 0; i < n_buckets; i++) {
struct rte_bucket_4_8 *bucket;
@@ -372,6 +378,11 @@ rte_table_hash_create_key8_ext(void *params, int socket_id, uint32_t entry_size)
f->stack = (uint32_t *)
&f->memory[(n_buckets + n_buckets_ext) * f->bucket_size];
+ if (p->key_mask != NULL)
+ f->key_mask = ((uint64_t *)p->key_mask)[0];
+ else
+ f->key_mask = 0xFFFFFFFFFFFFFFFFLLU;
+
for (i = 0; i < n_buckets_ext; i++)
f->stack[i] = i;
@@ -586,9 +597,12 @@ rte_table_hash_entry_delete_key8_ext(
uint64_t *key; \
uint64_t signature; \
uint32_t bucket_index; \
+ uint64_t hash_key_buffer; \
\
key = RTE_MBUF_METADATA_UINT64_PTR(mbuf1, f->key_offset);\
- signature = f->f_hash(key, RTE_TABLE_HASH_KEY_SIZE, f->seed);\
+ hash_key_buffer = *key & f->key_mask; \
+ signature = f->f_hash(&hash_key_buffer, \
+ RTE_TABLE_HASH_KEY_SIZE, f->seed); \
bucket_index = signature & (f->n_buckets - 1); \
bucket1 = (struct rte_bucket_4_8 *) \
&f->memory[bucket_index * f->bucket_size]; \
@@ -602,10 +616,12 @@ rte_table_hash_entry_delete_key8_ext(
uint64_t pkt_mask; \
uint64_t *key; \
uint32_t pos; \
+ uint64_t hash_key_buffer; \
\
key = RTE_MBUF_METADATA_UINT64_PTR(mbuf2, f->key_offset);\
+ hash_key_buffer = key[0] & f->key_mask; \
\
- lookup_key8_cmp(key, bucket2, pos); \
+ lookup_key8_cmp((&hash_key_buffer), bucket2, pos); \
\
pkt_mask = ((bucket2->signature >> pos) & 1LLU) << pkt2_index;\
pkts_mask_out |= pkt_mask; \
@@ -624,10 +640,12 @@ rte_table_hash_entry_delete_key8_ext(
uint64_t pkt_mask, bucket_mask; \
uint64_t *key; \
uint32_t pos; \
+ uint64_t hash_key_buffer; \
\
key = RTE_MBUF_METADATA_UINT64_PTR(mbuf2, f->key_offset);\
+ hash_key_buffer = *key & f->key_mask; \
\
- lookup_key8_cmp(key, bucket2, pos); \
+ lookup_key8_cmp((&hash_key_buffer), bucket2, pos); \
\
pkt_mask = ((bucket2->signature >> pos) & 1LLU) << pkt2_index;\
pkts_mask_out |= pkt_mask; \
@@ -651,11 +669,13 @@ rte_table_hash_entry_delete_key8_ext(
uint64_t pkt_mask, bucket_mask; \
uint64_t *key; \
uint32_t pos; \
+ uint64_t hash_key_buffer; \
\
bucket = buckets[pkt_index]; \
key = keys[pkt_index]; \
+ hash_key_buffer = (*key) & f->key_mask; \
\
- lookup_key8_cmp(key, bucket, pos); \
+ lookup_key8_cmp((&hash_key_buffer), bucket, pos); \
\
pkt_mask = ((bucket->signature >> pos) & 1LLU) << pkt_index;\
pkts_mask_out |= pkt_mask; \
@@ -736,6 +756,8 @@ rte_table_hash_entry_delete_key8_ext(
#define lookup2_stage1_dosig(mbuf10, mbuf11, bucket10, bucket11, f)\
{ \
uint64_t *key10, *key11; \
+ uint64_t hash_offset_buffer10; \
+ uint64_t hash_offset_buffer11; \
uint64_t signature10, signature11; \
uint32_t bucket10_index, bucket11_index; \
rte_table_hash_op_hash f_hash = f->f_hash; \
@@ -744,14 +766,18 @@ rte_table_hash_entry_delete_key8_ext(
\
key10 = RTE_MBUF_METADATA_UINT64_PTR(mbuf10, key_offset);\
key11 = RTE_MBUF_METADATA_UINT64_PTR(mbuf11, key_offset);\
+ hash_offset_buffer10 = *key10 & f->key_mask; \
+ hash_offset_buffer11 = *key11 & f->key_mask; \
\
- signature10 = f_hash(key10, RTE_TABLE_HASH_KEY_SIZE, seed);\
+ signature10 = f_hash(&hash_offset_buffer10, \
+ RTE_TABLE_HASH_KEY_SIZE, seed); \
bucket10_index = signature10 & (f->n_buckets - 1); \
bucket10 = (struct rte_bucket_4_8 *) \
&f->memory[bucket10_index * f->bucket_size]; \
rte_prefetch0(bucket10); \
\
- signature11 = f_hash(key11, RTE_TABLE_HASH_KEY_SIZE, seed);\
+ signature11 = f_hash(&hash_offset_buffer11, \
+ RTE_TABLE_HASH_KEY_SIZE, seed); \
bucket11_index = signature11 & (f->n_buckets - 1); \
bucket11 = (struct rte_bucket_4_8 *) \
&f->memory[bucket11_index * f->bucket_size]; \
@@ -764,13 +790,17 @@ rte_table_hash_entry_delete_key8_ext(
void *a20, *a21; \
uint64_t pkt20_mask, pkt21_mask; \
uint64_t *key20, *key21; \
+ uint64_t hash_offset_buffer20; \
+ uint64_t hash_offset_buffer21; \
uint32_t pos20, pos21; \
\
key20 = RTE_MBUF_METADATA_UINT64_PTR(mbuf20, f->key_offset);\
key21 = RTE_MBUF_METADATA_UINT64_PTR(mbuf21, f->key_offset);\
+ hash_offset_buffer20 = *key20 & f->key_mask; \
+ hash_offset_buffer21 = *key21 & f->key_mask; \
\
- lookup_key8_cmp(key20, bucket20, pos20); \
- lookup_key8_cmp(key21, bucket21, pos21); \
+ lookup_key8_cmp((&hash_offset_buffer20), bucket20, pos20);\
+ lookup_key8_cmp((&hash_offset_buffer21), bucket21, pos21);\
\
pkt20_mask = ((bucket20->signature >> pos20) & 1LLU) << pkt20_index;\
pkt21_mask = ((bucket21->signature >> pos21) & 1LLU) << pkt21_index;\
@@ -793,13 +823,17 @@ rte_table_hash_entry_delete_key8_ext(
void *a20, *a21; \
uint64_t pkt20_mask, pkt21_mask, bucket20_mask, bucket21_mask;\
uint64_t *key20, *key21; \
+ uint64_t hash_offset_buffer20; \
+ uint64_t hash_offset_buffer21; \
uint32_t pos20, pos21; \
\
key20 = RTE_MBUF_METADATA_UINT64_PTR(mbuf20, f->key_offset);\
key21 = RTE_MBUF_METADATA_UINT64_PTR(mbuf21, f->key_offset);\
+ hash_offset_buffer20 = *key20 & f->key_mask; \
+ hash_offset_buffer21 = *key21 & f->key_mask; \
\
- lookup_key8_cmp(key20, bucket20, pos20); \
- lookup_key8_cmp(key21, bucket21, pos21); \
+ lookup_key8_cmp((&hash_offset_buffer20), bucket20, pos20);\
+ lookup_key8_cmp((&hash_offset_buffer21), bucket21, pos21);\
\
pkt20_mask = ((bucket20->signature >> pos20) & 1LLU) << pkt20_index;\
pkt21_mask = ((bucket21->signature >> pos21) & 1LLU) << pkt21_index;\
--
2.1.0
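For illustration, a minimal sketch of how an application might fill the updated
8-byte key LRU parameter structure; key_offset and key_mask come from this patch,
while the remaining field names (n_entries, f_hash, seed, signature_offset) are
assumed from the existing rte_table_hash API rather than shown in this diff:

#include <stdint.h>
#include <rte_table_hash.h>

/* Keep only the low 32 bits of the 8-byte key on lookup. The mask is
 * passed as uint8_t * and its first 8 bytes are read as a uint64_t;
 * a NULL key_mask keeps the previous behaviour (mask of all ones). */
static uint64_t key8_mask = 0x00000000FFFFFFFFLLU;

static struct rte_table_hash_key8_lru_params key8_params = {
	.n_entries = 1 << 16,              /* assumed existing field */
	.f_hash = NULL,                    /* application-provided rte_table_hash_op_hash */
	.seed = 0,
	.signature_offset = 0,             /* assumed existing field */
	.key_offset = 32,                  /* key location in packet meta-data */
	.key_mask = (uint8_t *)&key8_mask,
};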
^ permalink raw reply [relevance 3%]
* [dpdk-dev] [PATCH 8/8] librte_table: modify release notes and deprecation notice
2015-09-25 22:33 3% [dpdk-dev] [PATCH 0/8] librte_table: add key_mask parameter to 8-byte key roy.fan.zhang
2015-09-25 22:33 3% ` [dpdk-dev] [PATCH 1/8] librte_table: add key_mask parameter to 8-byte key hash parameters roy.fan.zhang
2015-09-25 22:33 3% ` [dpdk-dev] [PATCH 2/8] librte_table: add key_mask parameter to 16-byte " roy.fan.zhang
@ 2015-09-25 22:33 5% ` roy.fan.zhang
2015-10-12 14:24 0% ` Thomas Monjalon
2015-09-28 20:07 0% ` [dpdk-dev] [PATCH 0/8] librte_table: add key_mask parameter to 8-byte key Dumitrescu, Cristian
3 siblings, 1 reply; 200+ results
From: roy.fan.zhang @ 2015-09-25 22:33 UTC (permalink / raw)
To: dev
From: Fan Zhang <roy.fan.zhang@intel.com>
The LIBABIVER number is incremented. The release notes are updated and
the deprecation notice is removed.
Signed-off-by: Fan Zhang <roy.fan.zhang@intel.com>
---
doc/guides/rel_notes/deprecation.rst | 3 ---
doc/guides/rel_notes/release_2_2.rst | 5 ++++-
lib/librte_table/Makefile | 2 +-
3 files changed, 5 insertions(+), 5 deletions(-)
diff --git a/doc/guides/rel_notes/deprecation.rst b/doc/guides/rel_notes/deprecation.rst
index fffad80..2b6954e 100644
--- a/doc/guides/rel_notes/deprecation.rst
+++ b/doc/guides/rel_notes/deprecation.rst
@@ -63,9 +63,6 @@ Deprecation Notices
* librte_table: New functions for table entry bulk add/delete will be added
to the table operations structure.
-* librte_table hash: Key mask parameter will be added to the hash table
- parameter structure for 8-byte key and 16-byte key extendible bucket and
- LRU tables.
* librte_pipeline: The prototype for the pipeline input port, output port
and table action handlers will be updated:
diff --git a/doc/guides/rel_notes/release_2_2.rst b/doc/guides/rel_notes/release_2_2.rst
index 9a70dae..a62f641 100644
--- a/doc/guides/rel_notes/release_2_2.rst
+++ b/doc/guides/rel_notes/release_2_2.rst
@@ -95,6 +95,9 @@ ABI Changes
* The LPM structure is changed. The deprecated field mem_location is removed.
+* Key mask parameter is added to the hash table parameter structure for
+ 8-byte key and 16-byte key extendible bucket and LRU tables. The
+ deprecated field mem_location is removed.
Shared Library Versions
-----------------------
@@ -127,6 +130,6 @@ The libraries prepended with a plus sign were incremented in this version.
librte_reorder.so.1
librte_ring.so.1
librte_sched.so.1
- librte_table.so.1
+ librte_table.so.2
librte_timer.so.1
librte_vhost.so.1
diff --git a/lib/librte_table/Makefile b/lib/librte_table/Makefile
index c5b3eaf..7f02af3 100644
--- a/lib/librte_table/Makefile
+++ b/lib/librte_table/Makefile
@@ -41,7 +41,7 @@ CFLAGS += $(WERROR_FLAGS)
EXPORT_MAP := rte_table_version.map
-LIBABIVER := 1
+LIBABIVER := 2
#
# all source are stored in SRCS-y
--
2.1.0
^ permalink raw reply [relevance 5%]
* Re: [dpdk-dev] [PATCH 0/8] librte_table: add key_mask parameter to 8-byte key
2015-09-25 22:33 3% [dpdk-dev] [PATCH 0/8] librte_table: add key_mask parameter to 8-byte key roy.fan.zhang
` (2 preceding siblings ...)
2015-09-25 22:33 5% ` [dpdk-dev] [PATCH 8/8] librte_table: modify release notes and deprecation notice roy.fan.zhang
@ 2015-09-28 20:07 0% ` Dumitrescu, Cristian
3 siblings, 0 replies; 200+ results
From: Dumitrescu, Cristian @ 2015-09-28 20:07 UTC (permalink / raw)
To: Zhang, Roy Fan, dev
> -----Original Message-----
> From: dev [mailto:dev-bounces@dpdk.org] On Behalf Of
> roy.fan.zhang@intel.com
> Sent: Friday, September 25, 2015 11:33 PM
> To: dev@dpdk.org
> Subject: [dpdk-dev] [PATCH 0/8] librte_table: add key_mask parameter to 8-
> byte key
>
> From: Fan Zhang <roy.fan.zhang@intel.com>
>
> This patchset links to the ABI change announced for librte_table. A key_mask
> parameter has been added to the hash table parameter structure for
> 8-byte key and 16-byte key extendible bucket and LRU tables.
>
Acked-by: Cristian Dumitrescu <cristian.dumitrescu@intel.com>
^ permalink raw reply [relevance 0%]
* [dpdk-dev] [PATCH 2/4] rte_ring: store memzone pointer inside ring
@ 2015-09-30 12:12 3% ` Bruce Richardson
2015-10-13 14:29 0% ` Olivier MATZ
0 siblings, 1 reply; 200+ results
From: Bruce Richardson @ 2015-09-30 12:12 UTC (permalink / raw)
To: dev
Add a new field to the rte_ring structure to store the memzone pointer which
contains the ring. For rings created using rte_ring_create(), the field will
be set automatically.
This new field will allow users of the ring to query the numa node a ring is
allocated on, or to get the physical address of the ring, if so needed.
The rte_ring structure will also maintain ABI compatibility, as the
structure members, after the new one, are set to be cache line aligned,
so leaving a space.
Signed-off-by: Bruce Richardson <bruce.richardson@intel.com>
---
lib/librte_ring/rte_ring.c | 1 +
lib/librte_ring/rte_ring.h | 4 ++++
2 files changed, 5 insertions(+)
diff --git a/lib/librte_ring/rte_ring.c b/lib/librte_ring/rte_ring.c
index c9e59d4..4e78e14 100644
--- a/lib/librte_ring/rte_ring.c
+++ b/lib/librte_ring/rte_ring.c
@@ -196,6 +196,7 @@ rte_ring_create(const char *name, unsigned count, int socket_id,
rte_ring_init(r, name, count, flags);
te->data = (void *) r;
+ r->memzone = mz;
TAILQ_INSERT_TAIL(ring_list, te, next);
} else {
diff --git a/lib/librte_ring/rte_ring.h b/lib/librte_ring/rte_ring.h
index af68888..df45f3f 100644
--- a/lib/librte_ring/rte_ring.h
+++ b/lib/librte_ring/rte_ring.h
@@ -134,6 +134,8 @@ struct rte_ring_debug_stats {
* if RTE_RING_PAUSE_REP not defined. */
#endif
+struct rte_memzone; /* forward declaration, so as not to require memzone.h */
+
/**
* An RTE ring structure.
*
@@ -147,6 +149,8 @@ struct rte_ring_debug_stats {
struct rte_ring {
char name[RTE_RING_NAMESIZE]; /**< Name of the ring. */
int flags; /**< Flags supplied at creation. */
+ const struct rte_memzone *memzone;
+ /**< Memzone, if any, containing the rte_ring */
/** Ring producer status. */
struct prod {
--
2.4.3
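As a usage illustration (not part of the patch), a minimal sketch of how an
application could read the new field after creating a ring; the rte_memzone
field names used here (socket_id, phys_addr) are assumed from the memzone API
of this period:

#include <stdio.h>
#include <inttypes.h>
#include <rte_ring.h>
#include <rte_memzone.h>
#include <rte_lcore.h>

static void
show_ring_location(void)
{
	struct rte_ring *r;

	r = rte_ring_create("example_ring", 1024, rte_socket_id(), 0);
	if (r == NULL)
		return;

	/* memzone is only set for rings built by rte_ring_create();
	 * rings initialised directly with rte_ring_init() will have it NULL. */
	if (r->memzone != NULL)
		printf("ring on socket %d, phys addr 0x%" PRIx64 "\n",
		       r->memzone->socket_id,
		       (uint64_t)r->memzone->phys_addr);
}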
^ permalink raw reply [relevance 3%]
* Re: [dpdk-dev] [PATCH 02/20] librte_ether: add fields from rte_pci_driver to rte_eth_dev_data
@ 2015-09-30 13:18 3% ` Neil Horman
2015-09-30 13:23 3% ` Bruce Richardson
0 siblings, 1 reply; 200+ results
From: Neil Horman @ 2015-09-30 13:18 UTC (permalink / raw)
To: Bernard Iremonger; +Cc: dev
> +}
> diff --git a/lib/librte_ether/rte_ethdev.h b/lib/librte_ether/rte_ethdev.h
> index fa06554..9cd262b 100644
> --- a/lib/librte_ether/rte_ethdev.h
> +++ b/lib/librte_ether/rte_ethdev.h
> @@ -1635,8 +1635,23 @@ struct rte_eth_dev_data {
> all_multicast : 1, /**< RX all multicast mode ON(1) / OFF(0). */
> dev_started : 1, /**< Device state: STARTED(1) / STOPPED(0). */
> lro : 1; /**< RX LRO is ON(1) / OFF(0) */
> + uint32_t dev_flags; /**< Flags controlling handling of device. */
> + enum rte_kernel_driver kdrv; /**< Kernel driver passthrough */
> + int numa_node;
> + const char *drv_name;
> };
>
Unrelated to my other questions on this code: Is rte_eth_dev_data ever
allocated by any applications? If so, this will have to go through the ABI
process. I don't think it is, but I wanted to ask to be sure.
Neil
> +/** Device needs PCI BAR mapping (done with either IGB_UIO or VFIO) */
> +#define RTE_ETH_DEV_DRV_NEED_MAPPING RTE_PCI_DRV_NEED_MAPPING
> +/** Device needs to be unbound even if no module is provided */
> +#define RTE_ETH_DEV_DRV_FORCE_UNBIND RTE_PCI_DRV_FORCE_UNBIND
> +/** Device supports link state interrupt */
> +#define RTE_ETH_DEV_INTR_LSC RTE_PCI_DRV_INTR_LSC
> +/** Device supports detaching capability */
> +#define RTE_ETH_DEV_DETACHABLE RTE_PCI_DRV_DETACHABLE
> +/** Device is a bonded device */
> +#define RTE_ETH_DEV_BONDED 0x0020
> +
> /**
> * @internal
> * The pool of *rte_eth_dev* structures. The size of the pool
> --
> 1.9.1
>
>
^ permalink raw reply [relevance 3%]
* Re: [dpdk-dev] [PATCH 02/20] librte_ether: add fields from rte_pci_driver to rte_eth_dev_data
2015-09-30 13:18 3% ` Neil Horman
@ 2015-09-30 13:23 3% ` Bruce Richardson
0 siblings, 0 replies; 200+ results
From: Bruce Richardson @ 2015-09-30 13:23 UTC (permalink / raw)
To: Neil Horman; +Cc: dev
On Wed, Sep 30, 2015 at 09:18:53AM -0400, Neil Horman wrote:
> > +}
> > diff --git a/lib/librte_ether/rte_ethdev.h b/lib/librte_ether/rte_ethdev.h
> > index fa06554..9cd262b 100644
> > --- a/lib/librte_ether/rte_ethdev.h
> > +++ b/lib/librte_ether/rte_ethdev.h
> > @@ -1635,8 +1635,23 @@ struct rte_eth_dev_data {
> > all_multicast : 1, /**< RX all multicast mode ON(1) / OFF(0). */
> > dev_started : 1, /**< Device state: STARTED(1) / STOPPED(0). */
> > lro : 1; /**< RX LRO is ON(1) / OFF(0) */
> > + uint32_t dev_flags; /**< Flags controlling handling of device. */
> > + enum rte_kernel_driver kdrv; /**< Kernel driver passthrough */
> > + int numa_node;
> > + const char *drv_name;
> > };
> >
> Unrelated to my other questions on this code: Is rte_eth_dev_data ever
> allocated by any applications? If so, this will have to go through the ABI
> process. I don't think it is, but I wanted to ask to be sure.
>
> Neil
>
No - applications do not allocate this structure directly, it's internal only, so
we should be safe here from an ABI perspective.
/Bruce
^ permalink raw reply [relevance 3%]
* Re: [dpdk-dev] [PATCH v2] ip_pipeline: add more functions to routing-pipeline
@ 2015-10-01 11:00 4% ` Neil Horman
2015-10-01 12:37 5% ` Dumitrescu, Cristian
0 siblings, 1 reply; 200+ results
From: Neil Horman @ 2015-10-01 11:00 UTC (permalink / raw)
To: Jasvinder Singh; +Cc: dev
>
> /*
> @@ -106,9 +164,7 @@ struct pipeline_routing_route_add_msg_req {
> struct pipeline_routing_route_key key;
>
> /* data */
> - uint32_t flags;
> - uint32_t port_id; /* Output port ID */
> - uint32_t ip; /* Next hop IP address (only valid for remote routes) */
> + struct pipeline_routing_route_data data;
> };
>
The example that you are modifying appears to directly set the structure fields
that you are removing above. As such these appear to be ABI-breaking changes and
need to go through the ABI process.
Neil
^ permalink raw reply [relevance 4%]
* Re: [dpdk-dev] [PATCH v2] ip_pipeline: add more functions to routing-pipeline
2015-10-01 11:00 4% ` Neil Horman
@ 2015-10-01 12:37 5% ` Dumitrescu, Cristian
2015-10-01 17:18 0% ` Neil Horman
0 siblings, 1 reply; 200+ results
From: Dumitrescu, Cristian @ 2015-10-01 12:37 UTC (permalink / raw)
To: Neil Horman, Singh, Jasvinder; +Cc: dev
> -----Original Message-----
> From: dev [mailto:dev-bounces@dpdk.org] On Behalf Of Neil Horman
> Sent: Thursday, October 1, 2015 12:01 PM
> To: Singh, Jasvinder
> Cc: dev@dpdk.org
> Subject: Re: [dpdk-dev] [PATCH v2] ip_pipeline: add more functions to
> routing-pipeline
>
> >
> > /*
> > @@ -106,9 +164,7 @@ struct pipeline_routing_route_add_msg_req {
> > struct pipeline_routing_route_key key;
> >
> > /* data */
> > - uint32_t flags;
> > - uint32_t port_id; /* Output port ID */
> > - uint32_t ip; /* Next hop IP address (only valid for remote routes) */
> > + struct pipeline_routing_route_data data;
> > };
> >
>
> The example that you are modifying appears to directly set the structure
> fields that you are removing above. As such these appear to be ABI-breaking
> changes and need to go through the ABI process.
>
> Neil
Hi Neil,
This patch only changes application code (in DPDK examples/ip_pipeline folder), it does not change any library code (in DPDK lib folder). There is no ABI versioning required for the example applications, so I don't think the ABI restrictions are applicable here.
The pipelines in the ip_pipeline application are provided only as examples to encourage people to create their own pipelines, and their implementation is evolving as new features are added. They are intended to support only a limited set of protocols and features; for example, in this case of the routing pipeline, there is no intention to have them support an exhaustive list of routing protocols (as this would be virtually impossible). Therefore, there is no plan to standardize them and make them library code, where the API/ABI preservation is required.
The code where we are committed to keep the API compatibility and apply the ABI change process rigorously is the library code (e.g. librte_port, librte_table, librte_pipeline) and we are consistently doing this.
Thank you for your comment!
Regards,
Cristian
^ permalink raw reply [relevance 5%]
* Re: [dpdk-dev] Having troubles binding an SR-IOV VF to uio_pci_generic on Amazon instance
@ 2015-10-01 14:47 3% ` Stephen Hemminger
2015-10-01 15:03 0% ` Vlad Zolotarov
0 siblings, 1 reply; 200+ results
From: Stephen Hemminger @ 2015-10-01 14:47 UTC (permalink / raw)
To: Vlad Zolotarov; +Cc: dev, Michael S. Tsirkin
On Thu, 1 Oct 2015 11:00:28 +0300
Vlad Zolotarov <vladz@cloudius-systems.com> wrote:
>
>
> On 10/01/15 00:36, Stephen Hemminger wrote:
> > On Wed, 30 Sep 2015 23:09:33 +0300
> > Vlad Zolotarov <vladz@cloudius-systems.com> wrote:
> >
> >>
> >> On 09/30/15 22:39, Michael S. Tsirkin wrote:
> >>> On Wed, Sep 30, 2015 at 10:06:52PM +0300, Vlad Zolotarov wrote:
> >>>>>> How would iommu
> >>>>>> virtualization change anything?
> >>>>> Kernel can use an iommu to limit device access to memory of
> >>>>> the controlling application.
> >>>> Ok, this is obvious but what it has to do with enabling using MSI/MSI-X
> >>>> interrupts support in uio_pci_generic? kernel may continue to limit the
> >>>> above access with this support as well.
> >>> It could maybe. So if you write a patch to allow MSI by at the same time
> >>> creating an isolated IOMMU group and blocking DMA from device in
> >>> question anywhere, that sounds reasonable.
> >> No, I'm only planning to add MSI and MSI-X interrupts support for
> >> uio_pci_generic device.
> >> The rest mentioned above should naturally be a matter of a different
> >> patch and writing it is orthogonal to the patch I'm working on as has
> >> been extensively discussed in this thread.
> >>
> > I have a generic MSI and MSI-X driver (posted earlier on this list).
> > About to post to upstream kernel.
>
> Stephen, hi!
>
> I found the mentioned series and first thing I noticed was that it's
> been sent in May so the first question is how far in your list of tasks
> submitting it upstream is? We need it more or less yesterday and I'm
> working on it right now. Therefore if u don't have time for it I'd like
> to help... ;) However I'd like u to clarify a few small things. Pls.,
> see below...
>
> I noticed that u've created a separate msi_msix driver and the second
> question is what do u plan for the upstream? I was thinking of extending
> the existing uio_pci_generic with the MSI-X functionality similar to
> your code and preserving the INT#X functionality as it is now:
The igb_uio has a bunch of other things I didn't want to deal with:
the name (being specific to old Intel driver); compatibility with older
kernels; legacy ABI support. Therefore in effect uio_msi is a rebase
of igb_uio.
The submission upstream yesterday is the first step, I expect lots
of review feedback.
> * INT#X and MSI would provide the IRQ number to the UIO module while
> only MSI-X case would register with UIO_IRQ_CUSTOM.
I wanted all IRQ's to be the same for the driver, ie all go through
eventfd mechanism. This makes code on DPDK side consistent with less
special cases.
> I also noticed that u enable MSI-X on a first open() call. I assume
> there was a good reason (that I miss) for not doing it in probe(). Could
> u, pls., clarify?
^ permalink raw reply [relevance 3%]
* Re: [dpdk-dev] [PATCH 0/2] uio_msi: device driver
@ 2015-10-01 14:57 3% ` Stephen Hemminger
2015-10-01 19:48 0% ` Alexander Duyck
0 siblings, 1 reply; 200+ results
From: Stephen Hemminger @ 2015-10-01 14:57 UTC (permalink / raw)
To: Avi Kivity; +Cc: dev, hjk, gregkh, linux-kernel
On Thu, 1 Oct 2015 13:59:02 +0300
Avi Kivity <avi@scylladb.com> wrote:
> On 10/01/2015 01:28 AM, Stephen Hemminger wrote:
> > This is a new UIO device driver to allow supporting MSI-X and MSI devices
> > in userspace. It has been used in environments like VMware and older versions
> > of QEMU/KVM where no IOMMU support is available.
>
> Why not add msi/msix support to uio_pci_generic?
That is possible but that would meet ABI and other resistance from the author.
Also, uio_pci_generic makes it harder to find resources since it doesn't fully
utilize UIO infrastructure.
^ permalink raw reply [relevance 3%]
* Re: [dpdk-dev] Having troubles binding an SR-IOV VF to uio_pci_generic on Amazon instance
2015-10-01 14:47 3% ` Stephen Hemminger
@ 2015-10-01 15:03 0% ` Vlad Zolotarov
0 siblings, 0 replies; 200+ results
From: Vlad Zolotarov @ 2015-10-01 15:03 UTC (permalink / raw)
To: Stephen Hemminger; +Cc: dev, Michael S. Tsirkin
On 10/01/15 17:47, Stephen Hemminger wrote:
> On Thu, 1 Oct 2015 11:00:28 +0300
> Vlad Zolotarov <vladz@cloudius-systems.com> wrote:
>
>>
>> On 10/01/15 00:36, Stephen Hemminger wrote:
>>> On Wed, 30 Sep 2015 23:09:33 +0300
>>> Vlad Zolotarov <vladz@cloudius-systems.com> wrote:
>>>
>>>> On 09/30/15 22:39, Michael S. Tsirkin wrote:
>>>>> On Wed, Sep 30, 2015 at 10:06:52PM +0300, Vlad Zolotarov wrote:
>>>>>>>> How would iommu
>>>>>>>> virtualization change anything?
>>>>>>> Kernel can use an iommu to limit device access to memory of
>>>>>>> the controlling application.
>>>>>> Ok, this is obvious but what it has to do with enabling using MSI/MSI-X
>>>>>> interrupts support in uio_pci_generic? kernel may continue to limit the
>>>>>> above access with this support as well.
>>>>> It could maybe. So if you write a patch to allow MSI by at the same time
>>>>> creating an isolated IOMMU group and blocking DMA from device in
>>>>> question anywhere, that sounds reasonable.
>>>> No, I'm only planning to add MSI and MSI-X interrupts support for
>>>> uio_pci_generic device.
>>>> The rest mentioned above should naturally be a matter of a different
>>>> patch and writing it is orthogonal to the patch I'm working on as has
>>>> been extensively discussed in this thread.
>>>>
>>> I have a generic MSI and MSI-X driver (posted earlier on this list).
>>> About to post to upstream kernel.
>> Stephen, hi!
>>
>> I found the mentioned series and first thing I noticed was that it's
>> been sent in May so the first question is how far in your list of tasks
>> submitting it upstream is? We need it more or less yesterday and I'm
>> working on it right now. Therefore if u don't have time for it I'd like
>> to help... ;) However I'd like u to clarify a few small things. Pls.,
>> see below...
>>
>> I noticed that u've created a separate msi_msix driver and the second
>> question is what do u plan for the upstream? I was thinking of extending
>> the existing uio_pci_generic with the MSI-X functionality similar to
>> your code and preserving the INT#X functionality as it is now:
> The igb_uio has a bunch of other things I didn't want to deal with:
> the name (being specific to old Intel driver); compatibility with older
> kernels; legacy ABI support. Therefore in effect uio_msi is a rebase
> of igb_uio.
>
> The submission upstream yesterday is the first step, I expect lots
> of review feedback.
Sure, we have lots of feedback already even before the patch has been
sent... ;)
So, I'm preparing the uio_pci_generic patch. Just wanted to make sure we
are not working on the same patch at the same time... ;)
It's going to enable both MSI and MSI-X support.
For backward compatibility it'll enable INT#X by default.
It follows the concepts and uses some code pieces from your uio_msi
patch. If u want I'll put u as a signed-off when I send it.
>
>> * INT#X and MSI would provide the IRQ number to the UIO module while
>> only MSI-X case would register with UIO_IRQ_CUSTOM.
> I wanted all IRQ's to be the same for the driver, ie all go through
> eventfd mechanism. This makes code on DPDK side consistent with less
> special cases.
Of course. The name (uio_msi) is a bit confusing since it only adds
MSI-X support. I mistakenly thought that it adds both MSI and MSI-X but
it seems to only add MSI-X and then there are no further questions... ;)
>
>> I also noticed that u enable MSI-X on a first open() call. I assume
>> there was a good reason (that I miss) for not doing it in probe(). Could
>> u, pls., clarify?
What about this?
^ permalink raw reply [relevance 0%]
* Re: [dpdk-dev] [PATCH v2] ip_pipeline: add more functions to routing-pipeline
2015-10-01 12:37 5% ` Dumitrescu, Cristian
@ 2015-10-01 17:18 0% ` Neil Horman
0 siblings, 0 replies; 200+ results
From: Neil Horman @ 2015-10-01 17:18 UTC (permalink / raw)
To: Dumitrescu, Cristian; +Cc: dev
On Thu, Oct 01, 2015 at 12:37:51PM +0000, Dumitrescu, Cristian wrote:
>
>
> > -----Original Message-----
> > From: dev [mailto:dev-bounces@dpdk.org] On Behalf Of Neil Horman
> > Sent: Thursday, October 1, 2015 12:01 PM
> > To: Singh, Jasvinder
> > Cc: dev@dpdk.org
> > Subject: Re: [dpdk-dev] [PATCH v2] ip_pipeline: add more functions to
> > routing-pipeline
> >
> > >
> > > /*
> > > @@ -106,9 +164,7 @@ struct pipeline_routing_route_add_msg_req {
> > > struct pipeline_routing_route_key key;
> > >
> > > /* data */
> > > - uint32_t flags;
> > > - uint32_t port_id; /* Output port ID */
> > > - uint32_t ip; /* Next hop IP address (only valid for remote routes) */
> > > + struct pipeline_routing_route_data data;
> > > };
> > >
> >
> > The example that you are modifying appears to directly set the structure
> > fields that you are removing above. As such these appear to be ABI-breaking
> > changes and need to go through the ABI process.
> >
> > Neil
>
> Hi Neil,
>
> This patch only changes application code (in DPDK examples/ip_pipeline folder), it does not change any library code (in DPDK lib folder). There is no ABI versioning required for the example applications, so I don't think the ABI restrictions are applicable here.
>
> The pipelines in the ip_pipeline application are provided only as examples to encourage people to create their own pipelines, and their implementation is evolving as new features are added. They are intended to support only a limited set of protocols and features; for example, in this case of the routing pipeline, there is no intention to have them support an exhaustive list of routing protocols (as this would be virtually impossible). Therefore, there is no plan to standardize them and make them library code, where the API/ABI preservation is required.
>
> The code where we are committed to keep the API compatibility and apply the ABI change process rigorously is the library code (e.g. librte_port, librte_table, librte_pipeline) and we are consistently doing this.
>
> Thank you for your comment!
>
Yup, you're right, my bad. I looked at the header file and thought it was part of
one of the libraries.
Neil
> Regards,
> Cristian
>
>
^ permalink raw reply [relevance 0%]
* Re: [dpdk-dev] [PATCH 0/2] uio_msi: device driver
2015-10-01 14:57 3% ` Stephen Hemminger
@ 2015-10-01 19:48 0% ` Alexander Duyck
2015-10-01 22:00 0% ` Stephen Hemminger
0 siblings, 1 reply; 200+ results
From: Alexander Duyck @ 2015-10-01 19:48 UTC (permalink / raw)
To: Stephen Hemminger, Avi Kivity; +Cc: dev, hjk, gregkh, linux-kernel
On 10/01/2015 07:57 AM, Stephen Hemminger wrote:
> On Thu, 1 Oct 2015 13:59:02 +0300
> Avi Kivity <avi@scylladb.com> wrote:
>
>> On 10/01/2015 01:28 AM, Stephen Hemminger wrote:
>>> This is a new UIO device driver to allow supporting MSI-X and MSI devices
>>> in userspace. It has been used in environments like VMware and older versions
>>> of QEMU/KVM where no IOMMU support is available.
>> Why not add msi/msix support to uio_pci_generic?
> That is possible but that would meet ABI and other resistance from the author.
> Also, uio_pci_generic makes it harder to find resources since it doesn't fully
> utilize UIO infrastructure.
I'd say you are better off actually taking this in the other direction.
From what I have seen it seems like this driver is meant to deal with
mapping VFs contained inside of guests. If you are going to fork off
and create a UIO driver for mapping VFs why not just make it specialize
in that. You could probably simplify the code by dropping support for
legacy interrupts and IO regions since all that is already covered by
uio_pci_generic anyway if I am not mistaken.
You could then look at naming it something like uio_vf since the uio_msi
is a bit of a misnomer since it is MSI-X it supports, not MSI interrupts.
- Alex
^ permalink raw reply [relevance 0%]
* [dpdk-dev] [PATCHv5 1/8] ethdev: add new API to retrieve RX/TX queue information
@ 2015-10-01 19:54 2% ` Konstantin Ananyev
0 siblings, 1 reply; 200+ results
From: Konstantin Ananyev @ 2015-10-01 19:54 UTC (permalink / raw)
To: dev
From: "Ananyev, Konstantin" <konstantin.ananyev@intel.com>
Add the ability for the upper layer to query RX/TX queue information.
Add into rte_eth_dev_info new fields to represent information about
RX/TX descriptors min/max/alig nnumbers per queue for the device.
Add new structures:
struct rte_eth_rxq_info
struct rte_eth_txq_info
new functions:
rte_eth_rx_queue_info_get
rte_eth_tx_queue_info_get
into the rte_ethdev API.
Left extra free space in the queue info structures,
so extra fields could be added later without ABI breakage.
Add new fields:
rx_desc_lim
tx_desc_lim
into rte_eth_dev_info.
v2 changes:
- Add formal check for the qinfo input parameter.
- As suggested rename 'rx_qinfo/tx_qinfo' to 'rxq_info/txq_info'
v3 changes:
- Updated rte_ether_version.map
- Merged with latest changes
v4 changes:
- rte_ether_version.map: move new functions into DPDK_2.1 sub-space.
v5 changes:
- addressed previous code-review comments
- rte_ether_version.map: move new functions into DPDK_2.2 sub-space.
- added new fields into rte_eth_dev_info
Signed-off-by: Konstantin Ananyev <konstantin.ananyev@intel.com>
---
lib/librte_ether/rte_ethdev.c | 68 +++++++++++++++++++++++++++
lib/librte_ether/rte_ethdev.h | 85 +++++++++++++++++++++++++++++++++-
lib/librte_ether/rte_ether_version.map | 8 ++++
3 files changed, 159 insertions(+), 2 deletions(-)
diff --git a/lib/librte_ether/rte_ethdev.c b/lib/librte_ether/rte_ethdev.c
index b309309..66bd074 100644
--- a/lib/librte_ether/rte_ethdev.c
+++ b/lib/librte_ether/rte_ethdev.c
@@ -1447,6 +1447,19 @@ rte_eth_rx_queue_setup(uint8_t port_id, uint16_t rx_queue_id,
return -EINVAL;
}
+ if (nb_rx_desc > dev_info.rx_desc_lim.nb_max ||
+ nb_rx_desc < dev_info.rx_desc_lim.nb_min ||
+ nb_rx_desc % dev_info.rx_desc_lim.nb_align != 0) {
+
+ PMD_DEBUG_TRACE("Invalid value for nb_rx_desc(=%hu), "
+ "should be: <= %hu, = %hu, and a product of %hu\n",
+ nb_rx_desc,
+ dev_info.rx_desc_lim.nb_max,
+ dev_info.rx_desc_lim.nb_min,
+ dev_info.rx_desc_lim.nb_align);
+ return -EINVAL;
+ }
+
if (rx_conf == NULL)
rx_conf = &dev_info.default_rxconf;
@@ -1786,11 +1799,18 @@ void
rte_eth_dev_info_get(uint8_t port_id, struct rte_eth_dev_info *dev_info)
{
struct rte_eth_dev *dev;
+ const struct rte_eth_desc_lim lim = {
+ .nb_max = UINT16_MAX,
+ .nb_min = 0,
+ .nb_align = 1,
+ };
VALID_PORTID_OR_RET(port_id);
dev = &rte_eth_devices[port_id];
memset(dev_info, 0, sizeof(struct rte_eth_dev_info));
+ dev_info->rx_desc_lim = lim;
+ dev_info->tx_desc_lim = lim;
FUNC_PTR_OR_RET(*dev->dev_ops->dev_infos_get);
(*dev->dev_ops->dev_infos_get)(dev, dev_info);
@@ -3449,6 +3469,54 @@ rte_eth_remove_tx_callback(uint8_t port_id, uint16_t queue_id,
}
int
+rte_eth_rx_queue_info_get(uint8_t port_id, uint16_t queue_id,
+ struct rte_eth_rxq_info *qinfo)
+{
+ struct rte_eth_dev *dev;
+
+ VALID_PORTID_OR_ERR_RET(port_id, -ENODEV);
+
+ if (qinfo == NULL)
+ return -EINVAL;
+
+ dev = &rte_eth_devices[port_id];
+ if (queue_id >= dev->data->nb_rx_queues) {
+ PMD_DEBUG_TRACE("Invalid RX queue_id=%d\n", queue_id);
+ return -EINVAL;
+ }
+
+ FUNC_PTR_OR_ERR_RET(*dev->dev_ops->rxq_info_get, -ENOTSUP);
+
+ memset(qinfo, 0, sizeof(*qinfo));
+ dev->dev_ops->rxq_info_get(dev, queue_id, qinfo);
+ return 0;
+}
+
+int
+rte_eth_tx_queue_info_get(uint8_t port_id, uint16_t queue_id,
+ struct rte_eth_txq_info *qinfo)
+{
+ struct rte_eth_dev *dev;
+
+ VALID_PORTID_OR_ERR_RET(port_id, -ENODEV);
+
+ if (qinfo == NULL)
+ return -EINVAL;
+
+ dev = &rte_eth_devices[port_id];
+ if (queue_id >= dev->data->nb_tx_queues) {
+ PMD_DEBUG_TRACE("Invalid TX queue_id=%d\n", queue_id);
+ return -EINVAL;
+ }
+
+ FUNC_PTR_OR_ERR_RET(*dev->dev_ops->txq_info_get, -ENOTSUP);
+
+ memset(qinfo, 0, sizeof(*qinfo));
+ dev->dev_ops->txq_info_get(dev, queue_id, qinfo);
+ return 0;
+}
+
+int
rte_eth_dev_set_mc_addr_list(uint8_t port_id,
struct ether_addr *mc_addr_set,
uint32_t nb_mc_addr)
diff --git a/lib/librte_ether/rte_ethdev.h b/lib/librte_ether/rte_ethdev.h
index fa06554..2cb9bc2 100644
--- a/lib/librte_ether/rte_ethdev.h
+++ b/lib/librte_ether/rte_ethdev.h
@@ -653,6 +653,15 @@ struct rte_eth_txconf {
};
/**
+ * A structure contains information about HW descriptor ring limitations.
+ */
+struct rte_eth_desc_lim {
+ uint16_t nb_max; /**< Max allowed number of descriptors. */
+ uint16_t nb_min; /**< Min allowed number of descriptors. */
+ uint16_t nb_align; /**< Number of descriptors should be aligned to. */
+};
+
+/**
* This enum indicates the flow control mode
*/
enum rte_eth_fc_mode {
@@ -945,6 +954,8 @@ struct rte_eth_dev_info {
uint16_t vmdq_queue_base; /**< First queue ID for VMDQ pools. */
uint16_t vmdq_queue_num; /**< Queue number for VMDQ pools. */
uint16_t vmdq_pool_base; /**< First ID of VMDQ pools. */
+ struct rte_eth_desc_lim rx_desc_lim; /**< RX descriptors limits */
+ struct rte_eth_desc_lim tx_desc_lim; /**< TX descriptors limits */
};
/** Maximum name length for extended statistics counters */
@@ -962,6 +973,26 @@ struct rte_eth_xstats {
uint64_t value;
};
+/**
+ * Ethernet device RX queue information structure.
+ * Used to retrieve information about a configured queue.
+ */
+struct rte_eth_rxq_info {
+ struct rte_mempool *mp; /**< mempool used by that queue. */
+ struct rte_eth_rxconf conf; /**< queue config parameters. */
+ uint8_t scattered_rx; /**< scattered packets RX supported. */
+ uint16_t nb_desc; /**< configured number of RXDs. */
+} __rte_cache_aligned;
+
+/**
+ * Ethernet device TX queue information structure.
+ * Used to retrieve information about a configured queue.
+ */
+struct rte_eth_txq_info {
+ struct rte_eth_txconf conf; /**< queue config parameters. */
+ uint16_t nb_desc; /**< configured number of TXDs. */
+} __rte_cache_aligned;
+
struct rte_eth_dev;
struct rte_eth_dev_callback;
@@ -1073,6 +1104,12 @@ typedef uint32_t (*eth_rx_queue_count_t)(struct rte_eth_dev *dev,
typedef int (*eth_rx_descriptor_done_t)(void *rxq, uint16_t offset);
/**< @internal Check DD bit of specific RX descriptor */
+typedef void (*eth_rxq_info_get_t)(struct rte_eth_dev *dev,
+ uint16_t rx_queue_id, struct rte_eth_rxq_info *qinfo);
+
+typedef void (*eth_txq_info_get_t)(struct rte_eth_dev *dev,
+ uint16_t tx_queue_id, struct rte_eth_txq_info *qinfo);
+
typedef int (*mtu_set_t)(struct rte_eth_dev *dev, uint16_t mtu);
/**< @internal Set MTU. */
@@ -1465,9 +1502,13 @@ struct eth_dev_ops {
rss_hash_update_t rss_hash_update;
/** Get current RSS hash configuration. */
rss_hash_conf_get_t rss_hash_conf_get;
- eth_filter_ctrl_t filter_ctrl; /**< common filter control*/
+ eth_filter_ctrl_t filter_ctrl;
+ /**< common filter control. */
eth_set_mc_addr_list_t set_mc_addr_list; /**< set list of mcast addrs */
-
+ eth_rxq_info_get_t rxq_info_get;
+ /**< retrieve RX queue information. */
+ eth_txq_info_get_t txq_info_get;
+ /**< retrieve TX queue information. */
/** Turn IEEE1588/802.1AS timestamping on. */
eth_timesync_enable_t timesync_enable;
/** Turn IEEE1588/802.1AS timestamping off. */
@@ -3821,6 +3862,46 @@ int rte_eth_remove_tx_callback(uint8_t port_id, uint16_t queue_id,
struct rte_eth_rxtx_callback *user_cb);
/**
+ * Retrieve information about given port's RX queue.
+ *
+ * @param port_id
+ * The port identifier of the Ethernet device.
+ * @param queue_id
+ * The RX queue on the Ethernet device for which information
+ * will be retrieved.
+ * @param qinfo
+ * A pointer to a structure of type *rte_eth_rxq_info_info* to be filled with
+ * the information of the Ethernet device.
+ *
+ * @return
+ * - 0: Success
+ * - -ENOTSUP: routine is not supported by the device PMD.
+ * - -EINVAL: The port_id or the queue_id is out of range.
+ */
+int rte_eth_rx_queue_info_get(uint8_t port_id, uint16_t queue_id,
+ struct rte_eth_rxq_info *qinfo);
+
+/**
+ * Retrieve information about given port's TX queue.
+ *
+ * @param port_id
+ * The port identifier of the Ethernet device.
+ * @param queue_id
+ * The TX queue on the Ethernet device for which information
+ * will be retrieved.
+ * @param qinfo
+ * A pointer to a structure of type *rte_eth_txq_info_info* to be filled with
+ * the information of the Ethernet device.
+ *
+ * @return
+ * - 0: Success
+ * - -ENOTSUP: routine is not supported by the device PMD.
+ * - -EINVAL: The port_id or the queue_id is out of range.
+ */
+int rte_eth_tx_queue_info_get(uint8_t port_id, uint16_t queue_id,
+ struct rte_eth_txq_info *qinfo);
+
+/*
* Retrieve number of available registers for access
*
* @param port_id
diff --git a/lib/librte_ether/rte_ether_version.map b/lib/librte_ether/rte_ether_version.map
index 8345a6c..1fb4b87 100644
--- a/lib/librte_ether/rte_ether_version.map
+++ b/lib/librte_ether/rte_ether_version.map
@@ -127,3 +127,11 @@ DPDK_2.1 {
rte_eth_timesync_read_tx_timestamp;
} DPDK_2.0;
+
+DPDK_2.2 {
+ global:
+
+ rte_eth_rx_queue_info_get;
+ rte_eth_tx_queue_info_get;
+
+} DPDK_2.1;
--
1.8.3.1
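For reference, a minimal sketch (not part of the patch) of how an application
could consume the new descriptor limits and the RX queue info call on an
already configured port:

#include <stdio.h>
#include <rte_ethdev.h>

static void
show_rxq_info(uint8_t port_id, uint16_t queue_id)
{
	struct rte_eth_dev_info dev_info;
	struct rte_eth_rxq_info qinfo;

	/* rx_desc_lim/tx_desc_lim default to max=UINT16_MAX, min=0, align=1
	 * unless the PMD overrides them in dev_infos_get(). */
	rte_eth_dev_info_get(port_id, &dev_info);
	printf("port %u RXD limits: min=%u max=%u align=%u\n",
	       port_id,
	       dev_info.rx_desc_lim.nb_min,
	       dev_info.rx_desc_lim.nb_max,
	       dev_info.rx_desc_lim.nb_align);

	/* Returns -ENOTSUP when the PMD does not implement rxq_info_get. */
	if (rte_eth_rx_queue_info_get(port_id, queue_id, &qinfo) == 0)
		printf("rx queue %u: nb_desc=%u scattered_rx=%u\n",
		       queue_id, qinfo.nb_desc, qinfo.scattered_rx);
}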
^ permalink raw reply [relevance 2%]
* Re: [dpdk-dev] [PATCH 0/2] uio_msi: device driver
2015-10-01 19:48 0% ` Alexander Duyck
@ 2015-10-01 22:00 0% ` Stephen Hemminger
2015-10-01 23:03 0% ` Alexander Duyck
0 siblings, 1 reply; 200+ results
From: Stephen Hemminger @ 2015-10-01 22:00 UTC (permalink / raw)
To: Alexander Duyck; +Cc: dev, Avi Kivity, hjk, gregkh, linux-kernel
On Thu, 1 Oct 2015 12:48:36 -0700
Alexander Duyck <alexander.duyck@gmail.com> wrote:
> On 10/01/2015 07:57 AM, Stephen Hemminger wrote:
> > On Thu, 1 Oct 2015 13:59:02 +0300
> > Avi Kivity <avi@scylladb.com> wrote:
> >
> >> On 10/01/2015 01:28 AM, Stephen Hemminger wrote:
> >>> This is a new UIO device driver to allow supporting MSI-X and MSI devices
> >>> in userspace. It has been used in environments like VMware and older versions
> >>> of QEMU/KVM where no IOMMU support is available.
> >> Why not add msi/msix support to uio_pci_generic?
> > That is possible but that would meet ABI and other resistance from the author.
> > Also, uio_pci_generic makes it harder to find resources since it doesn't fully
> > utilize UIO infrastructure.
>
> I'd say you are better off actually taking this in the other direction.
> From what I have seen it seems like this driver is meant to deal with
> mapping VFs contained inside of guests. If you are going to fork off
> and create a UIO driver for mapping VFs why not just make it specialize
> in that. You could probably simplify the code by dropping support for
> legacy interrupts and IO regions since all that is already covered by
> uio_pci_generic anyway if I am not mistaken.
>
> You could then look at naming it something like uio_vf since the uio_msi
> is a bit of a misnomer since it is MSI-X it supports, not MSI interrupts.
The support needs to cover:
- VF in guest
- VNIC in guest (vmxnet3)
it isn't just about VF's
^ permalink raw reply [relevance 0%]
* Re: [dpdk-dev] [PATCH 0/2] uio_msi: device driver
2015-10-01 22:00 0% ` Stephen Hemminger
@ 2015-10-01 23:03 0% ` Alexander Duyck
2015-10-01 23:39 0% ` Stephen Hemminger
0 siblings, 1 reply; 200+ results
From: Alexander Duyck @ 2015-10-01 23:03 UTC (permalink / raw)
To: Stephen Hemminger; +Cc: dev, Avi Kivity, hjk, gregkh, linux-kernel
On 10/01/2015 03:00 PM, Stephen Hemminger wrote:
> On Thu, 1 Oct 2015 12:48:36 -0700
> Alexander Duyck <alexander.duyck@gmail.com> wrote:
>
>> On 10/01/2015 07:57 AM, Stephen Hemminger wrote:
>>> On Thu, 1 Oct 2015 13:59:02 +0300
>>> Avi Kivity <avi@scylladb.com> wrote:
>>>
>>>> On 10/01/2015 01:28 AM, Stephen Hemminger wrote:
>>>>> This is a new UIO device driver to allow supporting MSI-X and MSI devices
>>>>> in userspace. It has been used in environments like VMware and older versions
>>>>> of QEMU/KVM where no IOMMU support is available.
>>>> Why not add msi/msix support to uio_pci_generic?
>>> That is possible but that would meet ABI and other resistance from the author.
>>> Also, uio_pci_generic makes it harder to find resources since it doesn't fully
>>> utilize UIO infrastructure.
>> I'd say you are better off actually taking this in the other direction.
>> From what I have seen it seems like this driver is meant to deal with
>> mapping VFs contained inside of guests. If you are going to fork off
>> and create a UIO driver for mapping VFs why not just make it specialize
>> in that. You could probably simplify the code by dropping support for
>> legacy interrupts and IO regions since all that is already covered by
>> uio_pci_generic anyway if I am not mistaken.
>>
>> You could then look at naming it something like uio_vf since the uio_msi
>> is a bit of a misnomer since it is MSI-X it supports, not MSI interrupts.
> The support needs to cover:
> - VF in guest
> - VNIC in guest (vmxnet3)
> it isn't just about VF's
I get that, but the driver you are talking about adding is duplicating
much of what is already there in uio_pci_generic. If nothing else it
might be worth while to look at replacing the legacy interrupt with
MSI. Maybe look at naming it something like uio_pcie to indicate that
we are focusing on assigning PCIe and virtual devices that support MSI
and MSI-X and use memory BARs rather than legacy PCI devices that are
doing things like mapping I/O BARs and using INTx signaling.
My main argument is that we should probably look at dropping support for
anything that isn't going to be needed. If it is really important we
can always add it later. I just don't see the value in having code
around for things we aren't likely to ever use with real devices as we
are stuck supporting it for the life of the driver. I'll go ahead and
provide an inline review of your patch 2/2 as I think my feedback might
make a bit more sense that way.
- Alex
^ permalink raw reply [relevance 0%]
* [dpdk-dev] [PATCH] crc: deinline crc functions
@ 2015-10-01 23:37 1% Stephen Hemminger
2015-10-13 16:04 0% ` Richardson, Bruce
0 siblings, 1 reply; 200+ results
From: Stephen Hemminger @ 2015-10-01 23:37 UTC (permalink / raw)
To: bruce.richardson, pablo.de.lara.guarch; +Cc: dev
Because the CRC functions are inline and defined purely in the header
file, every component that uses these functions gets its own copy
of the software CRC table which is a big space waster.
Just deinline them, which gives better long-term ABI stability anyway.
Signed-off-by: Stephen Hemminger <stephen@networkplumber.org>
---
app/test/test_hash_functions.c | 1 +
lib/librte_hash/Makefile | 1 +
lib/librte_hash/rte_hash_crc.c | 488 +++++++++++++++++++++++++++++++++++++++++
lib/librte_hash/rte_hash_crc.h | 458 +-------------------------------------
4 files changed, 495 insertions(+), 453 deletions(-)
create mode 100644 lib/librte_hash/rte_hash_crc.c
diff --git a/app/test/test_hash_functions.c b/app/test/test_hash_functions.c
index 3ad6d80..345ca88 100644
--- a/app/test/test_hash_functions.c
+++ b/app/test/test_hash_functions.c
@@ -44,6 +44,7 @@
#include <rte_hash.h>
#include <rte_jhash.h>
#include <rte_hash_crc.h>
+#include <rte_common.h>
#include "test.h"
diff --git a/lib/librte_hash/Makefile b/lib/librte_hash/Makefile
index 7902c2b..a16a6d9 100644
--- a/lib/librte_hash/Makefile
+++ b/lib/librte_hash/Makefile
@@ -44,6 +44,7 @@ LIBABIVER := 2
# all source are stored in SRCS-y
SRCS-$(CONFIG_RTE_LIBRTE_HASH) := rte_cuckoo_hash.c
SRCS-$(CONFIG_RTE_LIBRTE_HASH) += rte_fbk_hash.c
+SRCS-$(CONFIG_RTE_LIBRTE_HASH) += rte_hash_crc.c
# install this header file
SYMLINK-$(CONFIG_RTE_LIBRTE_HASH)-include := rte_hash.h
diff --git a/lib/librte_hash/rte_hash_crc.c b/lib/librte_hash/rte_hash_crc.c
new file mode 100644
index 0000000..4db32e4
--- /dev/null
+++ b/lib/librte_hash/rte_hash_crc.c
@@ -0,0 +1,488 @@
+/*-
+ * BSD LICENSE
+ *
+ * Copyright(c) 2010-2014 Intel Corporation. All rights reserved.
+ * All rights reserved.
+ *
+ * Redistribution and use in source and binary forms, with or without
+ * modification, are permitted provided that the following conditions
+ * are met:
+ *
+ * * Redistributions of source code must retain the above copyright
+ * notice, this list of conditions and the following disclaimer.
+ * * Redistributions in binary form must reproduce the above copyright
+ * notice, this list of conditions and the following disclaimer in
+ * the documentation and/or other materials provided with the
+ * distribution.
+ * * Neither the name of Intel Corporation nor the names of its
+ * contributors may be used to endorse or promote products derived
+ * from this software without specific prior written permission.
+ *
+ * THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
+ * "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
+ * LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
+ * A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
+ * OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
+ * SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
+ * LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
+ * DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
+ * THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
+ * (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
+ * OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
+ */
+
+#include <stdio.h>
+#include <stdint.h>
+
+#include <rte_cpuflags.h>
+#include <rte_branch_prediction.h>
+#include <rte_common.h>
+#include <rte_hash_crc.h>
+
+/* Lookup tables for software implementation of CRC32C */
+static const uint32_t crc32c_tables[8][256] = {{
+ 0x00000000, 0xF26B8303, 0xE13B70F7, 0x1350F3F4, 0xC79A971F, 0x35F1141C, 0x26A1E7E8, 0xD4CA64EB,
+ 0x8AD958CF, 0x78B2DBCC, 0x6BE22838, 0x9989AB3B, 0x4D43CFD0, 0xBF284CD3, 0xAC78BF27, 0x5E133C24,
+ 0x105EC76F, 0xE235446C, 0xF165B798, 0x030E349B, 0xD7C45070, 0x25AFD373, 0x36FF2087, 0xC494A384,
+ 0x9A879FA0, 0x68EC1CA3, 0x7BBCEF57, 0x89D76C54, 0x5D1D08BF, 0xAF768BBC, 0xBC267848, 0x4E4DFB4B,
+ 0x20BD8EDE, 0xD2D60DDD, 0xC186FE29, 0x33ED7D2A, 0xE72719C1, 0x154C9AC2, 0x061C6936, 0xF477EA35,
+ 0xAA64D611, 0x580F5512, 0x4B5FA6E6, 0xB93425E5, 0x6DFE410E, 0x9F95C20D, 0x8CC531F9, 0x7EAEB2FA,
+ 0x30E349B1, 0xC288CAB2, 0xD1D83946, 0x23B3BA45, 0xF779DEAE, 0x05125DAD, 0x1642AE59, 0xE4292D5A,
+ 0xBA3A117E, 0x4851927D, 0x5B016189, 0xA96AE28A, 0x7DA08661, 0x8FCB0562, 0x9C9BF696, 0x6EF07595,
+ 0x417B1DBC, 0xB3109EBF, 0xA0406D4B, 0x522BEE48, 0x86E18AA3, 0x748A09A0, 0x67DAFA54, 0x95B17957,
+ 0xCBA24573, 0x39C9C670, 0x2A993584, 0xD8F2B687, 0x0C38D26C, 0xFE53516F, 0xED03A29B, 0x1F682198,
+ 0x5125DAD3, 0xA34E59D0, 0xB01EAA24, 0x42752927, 0x96BF4DCC, 0x64D4CECF, 0x77843D3B, 0x85EFBE38,
+ 0xDBFC821C, 0x2997011F, 0x3AC7F2EB, 0xC8AC71E8, 0x1C661503, 0xEE0D9600, 0xFD5D65F4, 0x0F36E6F7,
+ 0x61C69362, 0x93AD1061, 0x80FDE395, 0x72966096, 0xA65C047D, 0x5437877E, 0x4767748A, 0xB50CF789,
+ 0xEB1FCBAD, 0x197448AE, 0x0A24BB5A, 0xF84F3859, 0x2C855CB2, 0xDEEEDFB1, 0xCDBE2C45, 0x3FD5AF46,
+ 0x7198540D, 0x83F3D70E, 0x90A324FA, 0x62C8A7F9, 0xB602C312, 0x44694011, 0x5739B3E5, 0xA55230E6,
+ 0xFB410CC2, 0x092A8FC1, 0x1A7A7C35, 0xE811FF36, 0x3CDB9BDD, 0xCEB018DE, 0xDDE0EB2A, 0x2F8B6829,
+ 0x82F63B78, 0x709DB87B, 0x63CD4B8F, 0x91A6C88C, 0x456CAC67, 0xB7072F64, 0xA457DC90, 0x563C5F93,
+ 0x082F63B7, 0xFA44E0B4, 0xE9141340, 0x1B7F9043, 0xCFB5F4A8, 0x3DDE77AB, 0x2E8E845F, 0xDCE5075C,
+ 0x92A8FC17, 0x60C37F14, 0x73938CE0, 0x81F80FE3, 0x55326B08, 0xA759E80B, 0xB4091BFF, 0x466298FC,
+ 0x1871A4D8, 0xEA1A27DB, 0xF94AD42F, 0x0B21572C, 0xDFEB33C7, 0x2D80B0C4, 0x3ED04330, 0xCCBBC033,
+ 0xA24BB5A6, 0x502036A5, 0x4370C551, 0xB11B4652, 0x65D122B9, 0x97BAA1BA, 0x84EA524E, 0x7681D14D,
+ 0x2892ED69, 0xDAF96E6A, 0xC9A99D9E, 0x3BC21E9D, 0xEF087A76, 0x1D63F975, 0x0E330A81, 0xFC588982,
+ 0xB21572C9, 0x407EF1CA, 0x532E023E, 0xA145813D, 0x758FE5D6, 0x87E466D5, 0x94B49521, 0x66DF1622,
+ 0x38CC2A06, 0xCAA7A905, 0xD9F75AF1, 0x2B9CD9F2, 0xFF56BD19, 0x0D3D3E1A, 0x1E6DCDEE, 0xEC064EED,
+ 0xC38D26C4, 0x31E6A5C7, 0x22B65633, 0xD0DDD530, 0x0417B1DB, 0xF67C32D8, 0xE52CC12C, 0x1747422F,
+ 0x49547E0B, 0xBB3FFD08, 0xA86F0EFC, 0x5A048DFF, 0x8ECEE914, 0x7CA56A17, 0x6FF599E3, 0x9D9E1AE0,
+ 0xD3D3E1AB, 0x21B862A8, 0x32E8915C, 0xC083125F, 0x144976B4, 0xE622F5B7, 0xF5720643, 0x07198540,
+ 0x590AB964, 0xAB613A67, 0xB831C993, 0x4A5A4A90, 0x9E902E7B, 0x6CFBAD78, 0x7FAB5E8C, 0x8DC0DD8F,
+ 0xE330A81A, 0x115B2B19, 0x020BD8ED, 0xF0605BEE, 0x24AA3F05, 0xD6C1BC06, 0xC5914FF2, 0x37FACCF1,
+ 0x69E9F0D5, 0x9B8273D6, 0x88D28022, 0x7AB90321, 0xAE7367CA, 0x5C18E4C9, 0x4F48173D, 0xBD23943E,
+ 0xF36E6F75, 0x0105EC76, 0x12551F82, 0xE03E9C81, 0x34F4F86A, 0xC69F7B69, 0xD5CF889D, 0x27A40B9E,
+ 0x79B737BA, 0x8BDCB4B9, 0x988C474D, 0x6AE7C44E, 0xBE2DA0A5, 0x4C4623A6, 0x5F16D052, 0xAD7D5351
+},
+{
+ 0x00000000, 0x13A29877, 0x274530EE, 0x34E7A899, 0x4E8A61DC, 0x5D28F9AB, 0x69CF5132, 0x7A6DC945,
+ 0x9D14C3B8, 0x8EB65BCF, 0xBA51F356, 0xA9F36B21, 0xD39EA264, 0xC03C3A13, 0xF4DB928A, 0xE7790AFD,
+ 0x3FC5F181, 0x2C6769F6, 0x1880C16F, 0x0B225918, 0x714F905D, 0x62ED082A, 0x560AA0B3, 0x45A838C4,
+ 0xA2D13239, 0xB173AA4E, 0x859402D7, 0x96369AA0, 0xEC5B53E5, 0xFFF9CB92, 0xCB1E630B, 0xD8BCFB7C,
+ 0x7F8BE302, 0x6C297B75, 0x58CED3EC, 0x4B6C4B9B, 0x310182DE, 0x22A31AA9, 0x1644B230, 0x05E62A47,
+ 0xE29F20BA, 0xF13DB8CD, 0xC5DA1054, 0xD6788823, 0xAC154166, 0xBFB7D911, 0x8B507188, 0x98F2E9FF,
+ 0x404E1283, 0x53EC8AF4, 0x670B226D, 0x74A9BA1A, 0x0EC4735F, 0x1D66EB28, 0x298143B1, 0x3A23DBC6,
+ 0xDD5AD13B, 0xCEF8494C, 0xFA1FE1D5, 0xE9BD79A2, 0x93D0B0E7, 0x80722890, 0xB4958009, 0xA737187E,
+ 0xFF17C604, 0xECB55E73, 0xD852F6EA, 0xCBF06E9D, 0xB19DA7D8, 0xA23F3FAF, 0x96D89736, 0x857A0F41,
+ 0x620305BC, 0x71A19DCB, 0x45463552, 0x56E4AD25, 0x2C896460, 0x3F2BFC17, 0x0BCC548E, 0x186ECCF9,
+ 0xC0D23785, 0xD370AFF2, 0xE797076B, 0xF4359F1C, 0x8E585659, 0x9DFACE2E, 0xA91D66B7, 0xBABFFEC0,
+ 0x5DC6F43D, 0x4E646C4A, 0x7A83C4D3, 0x69215CA4, 0x134C95E1, 0x00EE0D96, 0x3409A50F, 0x27AB3D78,
+ 0x809C2506, 0x933EBD71, 0xA7D915E8, 0xB47B8D9F, 0xCE1644DA, 0xDDB4DCAD, 0xE9537434, 0xFAF1EC43,
+ 0x1D88E6BE, 0x0E2A7EC9, 0x3ACDD650, 0x296F4E27, 0x53028762, 0x40A01F15, 0x7447B78C, 0x67E52FFB,
+ 0xBF59D487, 0xACFB4CF0, 0x981CE469, 0x8BBE7C1E, 0xF1D3B55B, 0xE2712D2C, 0xD69685B5, 0xC5341DC2,
+ 0x224D173F, 0x31EF8F48, 0x050827D1, 0x16AABFA6, 0x6CC776E3, 0x7F65EE94, 0x4B82460D, 0x5820DE7A,
+ 0xFBC3FAF9, 0xE861628E, 0xDC86CA17, 0xCF245260, 0xB5499B25, 0xA6EB0352, 0x920CABCB, 0x81AE33BC,
+ 0x66D73941, 0x7575A136, 0x419209AF, 0x523091D8, 0x285D589D, 0x3BFFC0EA, 0x0F186873, 0x1CBAF004,
+ 0xC4060B78, 0xD7A4930F, 0xE3433B96, 0xF0E1A3E1, 0x8A8C6AA4, 0x992EF2D3, 0xADC95A4A, 0xBE6BC23D,
+ 0x5912C8C0, 0x4AB050B7, 0x7E57F82E, 0x6DF56059, 0x1798A91C, 0x043A316B, 0x30DD99F2, 0x237F0185,
+ 0x844819FB, 0x97EA818C, 0xA30D2915, 0xB0AFB162, 0xCAC27827, 0xD960E050, 0xED8748C9, 0xFE25D0BE,
+ 0x195CDA43, 0x0AFE4234, 0x3E19EAAD, 0x2DBB72DA, 0x57D6BB9F, 0x447423E8, 0x70938B71, 0x63311306,
+ 0xBB8DE87A, 0xA82F700D, 0x9CC8D894, 0x8F6A40E3, 0xF50789A6, 0xE6A511D1, 0xD242B948, 0xC1E0213F,
+ 0x26992BC2, 0x353BB3B5, 0x01DC1B2C, 0x127E835B, 0x68134A1E, 0x7BB1D269, 0x4F567AF0, 0x5CF4E287,
+ 0x04D43CFD, 0x1776A48A, 0x23910C13, 0x30339464, 0x4A5E5D21, 0x59FCC556, 0x6D1B6DCF, 0x7EB9F5B8,
+ 0x99C0FF45, 0x8A626732, 0xBE85CFAB, 0xAD2757DC, 0xD74A9E99, 0xC4E806EE, 0xF00FAE77, 0xE3AD3600,
+ 0x3B11CD7C, 0x28B3550B, 0x1C54FD92, 0x0FF665E5, 0x759BACA0, 0x663934D7, 0x52DE9C4E, 0x417C0439,
+ 0xA6050EC4, 0xB5A796B3, 0x81403E2A, 0x92E2A65D, 0xE88F6F18, 0xFB2DF76F, 0xCFCA5FF6, 0xDC68C781,
+ 0x7B5FDFFF, 0x68FD4788, 0x5C1AEF11, 0x4FB87766, 0x35D5BE23, 0x26772654, 0x12908ECD, 0x013216BA,
+ 0xE64B1C47, 0xF5E98430, 0xC10E2CA9, 0xD2ACB4DE, 0xA8C17D9B, 0xBB63E5EC, 0x8F844D75, 0x9C26D502,
+ 0x449A2E7E, 0x5738B609, 0x63DF1E90, 0x707D86E7, 0x0A104FA2, 0x19B2D7D5, 0x2D557F4C, 0x3EF7E73B,
+ 0xD98EEDC6, 0xCA2C75B1, 0xFECBDD28, 0xED69455F, 0x97048C1A, 0x84A6146D, 0xB041BCF4, 0xA3E32483
+},
+{
+ 0x00000000, 0xA541927E, 0x4F6F520D, 0xEA2EC073, 0x9EDEA41A, 0x3B9F3664, 0xD1B1F617, 0x74F06469,
+ 0x38513EC5, 0x9D10ACBB, 0x773E6CC8, 0xD27FFEB6, 0xA68F9ADF, 0x03CE08A1, 0xE9E0C8D2, 0x4CA15AAC,
+ 0x70A27D8A, 0xD5E3EFF4, 0x3FCD2F87, 0x9A8CBDF9, 0xEE7CD990, 0x4B3D4BEE, 0xA1138B9D, 0x045219E3,
+ 0x48F3434F, 0xEDB2D131, 0x079C1142, 0xA2DD833C, 0xD62DE755, 0x736C752B, 0x9942B558, 0x3C032726,
+ 0xE144FB14, 0x4405696A, 0xAE2BA919, 0x0B6A3B67, 0x7F9A5F0E, 0xDADBCD70, 0x30F50D03, 0x95B49F7D,
+ 0xD915C5D1, 0x7C5457AF, 0x967A97DC, 0x333B05A2, 0x47CB61CB, 0xE28AF3B5, 0x08A433C6, 0xADE5A1B8,
+ 0x91E6869E, 0x34A714E0, 0xDE89D493, 0x7BC846ED, 0x0F382284, 0xAA79B0FA, 0x40577089, 0xE516E2F7,
+ 0xA9B7B85B, 0x0CF62A25, 0xE6D8EA56, 0x43997828, 0x37691C41, 0x92288E3F, 0x78064E4C, 0xDD47DC32,
+ 0xC76580D9, 0x622412A7, 0x880AD2D4, 0x2D4B40AA, 0x59BB24C3, 0xFCFAB6BD, 0x16D476CE, 0xB395E4B0,
+ 0xFF34BE1C, 0x5A752C62, 0xB05BEC11, 0x151A7E6F, 0x61EA1A06, 0xC4AB8878, 0x2E85480B, 0x8BC4DA75,
+ 0xB7C7FD53, 0x12866F2D, 0xF8A8AF5E, 0x5DE93D20, 0x29195949, 0x8C58CB37, 0x66760B44, 0xC337993A,
+ 0x8F96C396, 0x2AD751E8, 0xC0F9919B, 0x65B803E5, 0x1148678C, 0xB409F5F2, 0x5E273581, 0xFB66A7FF,
+ 0x26217BCD, 0x8360E9B3, 0x694E29C0, 0xCC0FBBBE, 0xB8FFDFD7, 0x1DBE4DA9, 0xF7908DDA, 0x52D11FA4,
+ 0x1E704508, 0xBB31D776, 0x511F1705, 0xF45E857B, 0x80AEE112, 0x25EF736C, 0xCFC1B31F, 0x6A802161,
+ 0x56830647, 0xF3C29439, 0x19EC544A, 0xBCADC634, 0xC85DA25D, 0x6D1C3023, 0x8732F050, 0x2273622E,
+ 0x6ED23882, 0xCB93AAFC, 0x21BD6A8F, 0x84FCF8F1, 0xF00C9C98, 0x554D0EE6, 0xBF63CE95, 0x1A225CEB,
+ 0x8B277743, 0x2E66E53D, 0xC448254E, 0x6109B730, 0x15F9D359, 0xB0B84127, 0x5A968154, 0xFFD7132A,
+ 0xB3764986, 0x1637DBF8, 0xFC191B8B, 0x595889F5, 0x2DA8ED9C, 0x88E97FE2, 0x62C7BF91, 0xC7862DEF,
+ 0xFB850AC9, 0x5EC498B7, 0xB4EA58C4, 0x11ABCABA, 0x655BAED3, 0xC01A3CAD, 0x2A34FCDE, 0x8F756EA0,
+ 0xC3D4340C, 0x6695A672, 0x8CBB6601, 0x29FAF47F, 0x5D0A9016, 0xF84B0268, 0x1265C21B, 0xB7245065,
+ 0x6A638C57, 0xCF221E29, 0x250CDE5A, 0x804D4C24, 0xF4BD284D, 0x51FCBA33, 0xBBD27A40, 0x1E93E83E,
+ 0x5232B292, 0xF77320EC, 0x1D5DE09F, 0xB81C72E1, 0xCCEC1688, 0x69AD84F6, 0x83834485, 0x26C2D6FB,
+ 0x1AC1F1DD, 0xBF8063A3, 0x55AEA3D0, 0xF0EF31AE, 0x841F55C7, 0x215EC7B9, 0xCB7007CA, 0x6E3195B4,
+ 0x2290CF18, 0x87D15D66, 0x6DFF9D15, 0xC8BE0F6B, 0xBC4E6B02, 0x190FF97C, 0xF321390F, 0x5660AB71,
+ 0x4C42F79A, 0xE90365E4, 0x032DA597, 0xA66C37E9, 0xD29C5380, 0x77DDC1FE, 0x9DF3018D, 0x38B293F3,
+ 0x7413C95F, 0xD1525B21, 0x3B7C9B52, 0x9E3D092C, 0xEACD6D45, 0x4F8CFF3B, 0xA5A23F48, 0x00E3AD36,
+ 0x3CE08A10, 0x99A1186E, 0x738FD81D, 0xD6CE4A63, 0xA23E2E0A, 0x077FBC74, 0xED517C07, 0x4810EE79,
+ 0x04B1B4D5, 0xA1F026AB, 0x4BDEE6D8, 0xEE9F74A6, 0x9A6F10CF, 0x3F2E82B1, 0xD50042C2, 0x7041D0BC,
+ 0xAD060C8E, 0x08479EF0, 0xE2695E83, 0x4728CCFD, 0x33D8A894, 0x96993AEA, 0x7CB7FA99, 0xD9F668E7,
+ 0x9557324B, 0x3016A035, 0xDA386046, 0x7F79F238, 0x0B899651, 0xAEC8042F, 0x44E6C45C, 0xE1A75622,
+ 0xDDA47104, 0x78E5E37A, 0x92CB2309, 0x378AB177, 0x437AD51E, 0xE63B4760, 0x0C158713, 0xA954156D,
+ 0xE5F54FC1, 0x40B4DDBF, 0xAA9A1DCC, 0x0FDB8FB2, 0x7B2BEBDB, 0xDE6A79A5, 0x3444B9D6, 0x91052BA8
+},
+{
+ 0x00000000, 0xDD45AAB8, 0xBF672381, 0x62228939, 0x7B2231F3, 0xA6679B4B, 0xC4451272, 0x1900B8CA,
+ 0xF64463E6, 0x2B01C95E, 0x49234067, 0x9466EADF, 0x8D665215, 0x5023F8AD, 0x32017194, 0xEF44DB2C,
+ 0xE964B13D, 0x34211B85, 0x560392BC, 0x8B463804, 0x924680CE, 0x4F032A76, 0x2D21A34F, 0xF06409F7,
+ 0x1F20D2DB, 0xC2657863, 0xA047F15A, 0x7D025BE2, 0x6402E328, 0xB9474990, 0xDB65C0A9, 0x06206A11,
+ 0xD725148B, 0x0A60BE33, 0x6842370A, 0xB5079DB2, 0xAC072578, 0x71428FC0, 0x136006F9, 0xCE25AC41,
+ 0x2161776D, 0xFC24DDD5, 0x9E0654EC, 0x4343FE54, 0x5A43469E, 0x8706EC26, 0xE524651F, 0x3861CFA7,
+ 0x3E41A5B6, 0xE3040F0E, 0x81268637, 0x5C632C8F, 0x45639445, 0x98263EFD, 0xFA04B7C4, 0x27411D7C,
+ 0xC805C650, 0x15406CE8, 0x7762E5D1, 0xAA274F69, 0xB327F7A3, 0x6E625D1B, 0x0C40D422, 0xD1057E9A,
+ 0xABA65FE7, 0x76E3F55F, 0x14C17C66, 0xC984D6DE, 0xD0846E14, 0x0DC1C4AC, 0x6FE34D95, 0xB2A6E72D,
+ 0x5DE23C01, 0x80A796B9, 0xE2851F80, 0x3FC0B538, 0x26C00DF2, 0xFB85A74A, 0x99A72E73, 0x44E284CB,
+ 0x42C2EEDA, 0x9F874462, 0xFDA5CD5B, 0x20E067E3, 0x39E0DF29, 0xE4A57591, 0x8687FCA8, 0x5BC25610,
+ 0xB4868D3C, 0x69C32784, 0x0BE1AEBD, 0xD6A40405, 0xCFA4BCCF, 0x12E11677, 0x70C39F4E, 0xAD8635F6,
+ 0x7C834B6C, 0xA1C6E1D4, 0xC3E468ED, 0x1EA1C255, 0x07A17A9F, 0xDAE4D027, 0xB8C6591E, 0x6583F3A6,
+ 0x8AC7288A, 0x57828232, 0x35A00B0B, 0xE8E5A1B3, 0xF1E51979, 0x2CA0B3C1, 0x4E823AF8, 0x93C79040,
+ 0x95E7FA51, 0x48A250E9, 0x2A80D9D0, 0xF7C57368, 0xEEC5CBA2, 0x3380611A, 0x51A2E823, 0x8CE7429B,
+ 0x63A399B7, 0xBEE6330F, 0xDCC4BA36, 0x0181108E, 0x1881A844, 0xC5C402FC, 0xA7E68BC5, 0x7AA3217D,
+ 0x52A0C93F, 0x8FE56387, 0xEDC7EABE, 0x30824006, 0x2982F8CC, 0xF4C75274, 0x96E5DB4D, 0x4BA071F5,
+ 0xA4E4AAD9, 0x79A10061, 0x1B838958, 0xC6C623E0, 0xDFC69B2A, 0x02833192, 0x60A1B8AB, 0xBDE41213,
+ 0xBBC47802, 0x6681D2BA, 0x04A35B83, 0xD9E6F13B, 0xC0E649F1, 0x1DA3E349, 0x7F816A70, 0xA2C4C0C8,
+ 0x4D801BE4, 0x90C5B15C, 0xF2E73865, 0x2FA292DD, 0x36A22A17, 0xEBE780AF, 0x89C50996, 0x5480A32E,
+ 0x8585DDB4, 0x58C0770C, 0x3AE2FE35, 0xE7A7548D, 0xFEA7EC47, 0x23E246FF, 0x41C0CFC6, 0x9C85657E,
+ 0x73C1BE52, 0xAE8414EA, 0xCCA69DD3, 0x11E3376B, 0x08E38FA1, 0xD5A62519, 0xB784AC20, 0x6AC10698,
+ 0x6CE16C89, 0xB1A4C631, 0xD3864F08, 0x0EC3E5B0, 0x17C35D7A, 0xCA86F7C2, 0xA8A47EFB, 0x75E1D443,
+ 0x9AA50F6F, 0x47E0A5D7, 0x25C22CEE, 0xF8878656, 0xE1873E9C, 0x3CC29424, 0x5EE01D1D, 0x83A5B7A5,
+ 0xF90696D8, 0x24433C60, 0x4661B559, 0x9B241FE1, 0x8224A72B, 0x5F610D93, 0x3D4384AA, 0xE0062E12,
+ 0x0F42F53E, 0xD2075F86, 0xB025D6BF, 0x6D607C07, 0x7460C4CD, 0xA9256E75, 0xCB07E74C, 0x16424DF4,
+ 0x106227E5, 0xCD278D5D, 0xAF050464, 0x7240AEDC, 0x6B401616, 0xB605BCAE, 0xD4273597, 0x09629F2F,
+ 0xE6264403, 0x3B63EEBB, 0x59416782, 0x8404CD3A, 0x9D0475F0, 0x4041DF48, 0x22635671, 0xFF26FCC9,
+ 0x2E238253, 0xF36628EB, 0x9144A1D2, 0x4C010B6A, 0x5501B3A0, 0x88441918, 0xEA669021, 0x37233A99,
+ 0xD867E1B5, 0x05224B0D, 0x6700C234, 0xBA45688C, 0xA345D046, 0x7E007AFE, 0x1C22F3C7, 0xC167597F,
+ 0xC747336E, 0x1A0299D6, 0x782010EF, 0xA565BA57, 0xBC65029D, 0x6120A825, 0x0302211C, 0xDE478BA4,
+ 0x31035088, 0xEC46FA30, 0x8E647309, 0x5321D9B1, 0x4A21617B, 0x9764CBC3, 0xF54642FA, 0x2803E842
+},
+{
+ 0x00000000, 0x38116FAC, 0x7022DF58, 0x4833B0F4, 0xE045BEB0, 0xD854D11C, 0x906761E8, 0xA8760E44,
+ 0xC5670B91, 0xFD76643D, 0xB545D4C9, 0x8D54BB65, 0x2522B521, 0x1D33DA8D, 0x55006A79, 0x6D1105D5,
+ 0x8F2261D3, 0xB7330E7F, 0xFF00BE8B, 0xC711D127, 0x6F67DF63, 0x5776B0CF, 0x1F45003B, 0x27546F97,
+ 0x4A456A42, 0x725405EE, 0x3A67B51A, 0x0276DAB6, 0xAA00D4F2, 0x9211BB5E, 0xDA220BAA, 0xE2336406,
+ 0x1BA8B557, 0x23B9DAFB, 0x6B8A6A0F, 0x539B05A3, 0xFBED0BE7, 0xC3FC644B, 0x8BCFD4BF, 0xB3DEBB13,
+ 0xDECFBEC6, 0xE6DED16A, 0xAEED619E, 0x96FC0E32, 0x3E8A0076, 0x069B6FDA, 0x4EA8DF2E, 0x76B9B082,
+ 0x948AD484, 0xAC9BBB28, 0xE4A80BDC, 0xDCB96470, 0x74CF6A34, 0x4CDE0598, 0x04EDB56C, 0x3CFCDAC0,
+ 0x51EDDF15, 0x69FCB0B9, 0x21CF004D, 0x19DE6FE1, 0xB1A861A5, 0x89B90E09, 0xC18ABEFD, 0xF99BD151,
+ 0x37516AAE, 0x0F400502, 0x4773B5F6, 0x7F62DA5A, 0xD714D41E, 0xEF05BBB2, 0xA7360B46, 0x9F2764EA,
+ 0xF236613F, 0xCA270E93, 0x8214BE67, 0xBA05D1CB, 0x1273DF8F, 0x2A62B023, 0x625100D7, 0x5A406F7B,
+ 0xB8730B7D, 0x806264D1, 0xC851D425, 0xF040BB89, 0x5836B5CD, 0x6027DA61, 0x28146A95, 0x10050539,
+ 0x7D1400EC, 0x45056F40, 0x0D36DFB4, 0x3527B018, 0x9D51BE5C, 0xA540D1F0, 0xED736104, 0xD5620EA8,
+ 0x2CF9DFF9, 0x14E8B055, 0x5CDB00A1, 0x64CA6F0D, 0xCCBC6149, 0xF4AD0EE5, 0xBC9EBE11, 0x848FD1BD,
+ 0xE99ED468, 0xD18FBBC4, 0x99BC0B30, 0xA1AD649C, 0x09DB6AD8, 0x31CA0574, 0x79F9B580, 0x41E8DA2C,
+ 0xA3DBBE2A, 0x9BCAD186, 0xD3F96172, 0xEBE80EDE, 0x439E009A, 0x7B8F6F36, 0x33BCDFC2, 0x0BADB06E,
+ 0x66BCB5BB, 0x5EADDA17, 0x169E6AE3, 0x2E8F054F, 0x86F90B0B, 0xBEE864A7, 0xF6DBD453, 0xCECABBFF,
+ 0x6EA2D55C, 0x56B3BAF0, 0x1E800A04, 0x269165A8, 0x8EE76BEC, 0xB6F60440, 0xFEC5B4B4, 0xC6D4DB18,
+ 0xABC5DECD, 0x93D4B161, 0xDBE70195, 0xE3F66E39, 0x4B80607D, 0x73910FD1, 0x3BA2BF25, 0x03B3D089,
+ 0xE180B48F, 0xD991DB23, 0x91A26BD7, 0xA9B3047B, 0x01C50A3F, 0x39D46593, 0x71E7D567, 0x49F6BACB,
+ 0x24E7BF1E, 0x1CF6D0B2, 0x54C56046, 0x6CD40FEA, 0xC4A201AE, 0xFCB36E02, 0xB480DEF6, 0x8C91B15A,
+ 0x750A600B, 0x4D1B0FA7, 0x0528BF53, 0x3D39D0FF, 0x954FDEBB, 0xAD5EB117, 0xE56D01E3, 0xDD7C6E4F,
+ 0xB06D6B9A, 0x887C0436, 0xC04FB4C2, 0xF85EDB6E, 0x5028D52A, 0x6839BA86, 0x200A0A72, 0x181B65DE,
+ 0xFA2801D8, 0xC2396E74, 0x8A0ADE80, 0xB21BB12C, 0x1A6DBF68, 0x227CD0C4, 0x6A4F6030, 0x525E0F9C,
+ 0x3F4F0A49, 0x075E65E5, 0x4F6DD511, 0x777CBABD, 0xDF0AB4F9, 0xE71BDB55, 0xAF286BA1, 0x9739040D,
+ 0x59F3BFF2, 0x61E2D05E, 0x29D160AA, 0x11C00F06, 0xB9B60142, 0x81A76EEE, 0xC994DE1A, 0xF185B1B6,
+ 0x9C94B463, 0xA485DBCF, 0xECB66B3B, 0xD4A70497, 0x7CD10AD3, 0x44C0657F, 0x0CF3D58B, 0x34E2BA27,
+ 0xD6D1DE21, 0xEEC0B18D, 0xA6F30179, 0x9EE26ED5, 0x36946091, 0x0E850F3D, 0x46B6BFC9, 0x7EA7D065,
+ 0x13B6D5B0, 0x2BA7BA1C, 0x63940AE8, 0x5B856544, 0xF3F36B00, 0xCBE204AC, 0x83D1B458, 0xBBC0DBF4,
+ 0x425B0AA5, 0x7A4A6509, 0x3279D5FD, 0x0A68BA51, 0xA21EB415, 0x9A0FDBB9, 0xD23C6B4D, 0xEA2D04E1,
+ 0x873C0134, 0xBF2D6E98, 0xF71EDE6C, 0xCF0FB1C0, 0x6779BF84, 0x5F68D028, 0x175B60DC, 0x2F4A0F70,
+ 0xCD796B76, 0xF56804DA, 0xBD5BB42E, 0x854ADB82, 0x2D3CD5C6, 0x152DBA6A, 0x5D1E0A9E, 0x650F6532,
+ 0x081E60E7, 0x300F0F4B, 0x783CBFBF, 0x402DD013, 0xE85BDE57, 0xD04AB1FB, 0x9879010F, 0xA0686EA3
+},
+{
+ 0x00000000, 0xEF306B19, 0xDB8CA0C3, 0x34BCCBDA, 0xB2F53777, 0x5DC55C6E, 0x697997B4, 0x8649FCAD,
+ 0x6006181F, 0x8F367306, 0xBB8AB8DC, 0x54BAD3C5, 0xD2F32F68, 0x3DC34471, 0x097F8FAB, 0xE64FE4B2,
+ 0xC00C303E, 0x2F3C5B27, 0x1B8090FD, 0xF4B0FBE4, 0x72F90749, 0x9DC96C50, 0xA975A78A, 0x4645CC93,
+ 0xA00A2821, 0x4F3A4338, 0x7B8688E2, 0x94B6E3FB, 0x12FF1F56, 0xFDCF744F, 0xC973BF95, 0x2643D48C,
+ 0x85F4168D, 0x6AC47D94, 0x5E78B64E, 0xB148DD57, 0x370121FA, 0xD8314AE3, 0xEC8D8139, 0x03BDEA20,
+ 0xE5F20E92, 0x0AC2658B, 0x3E7EAE51, 0xD14EC548, 0x570739E5, 0xB83752FC, 0x8C8B9926, 0x63BBF23F,
+ 0x45F826B3, 0xAAC84DAA, 0x9E748670, 0x7144ED69, 0xF70D11C4, 0x183D7ADD, 0x2C81B107, 0xC3B1DA1E,
+ 0x25FE3EAC, 0xCACE55B5, 0xFE729E6F, 0x1142F576, 0x970B09DB, 0x783B62C2, 0x4C87A918, 0xA3B7C201,
+ 0x0E045BEB, 0xE13430F2, 0xD588FB28, 0x3AB89031, 0xBCF16C9C, 0x53C10785, 0x677DCC5F, 0x884DA746,
+ 0x6E0243F4, 0x813228ED, 0xB58EE337, 0x5ABE882E, 0xDCF77483, 0x33C71F9A, 0x077BD440, 0xE84BBF59,
+ 0xCE086BD5, 0x213800CC, 0x1584CB16, 0xFAB4A00F, 0x7CFD5CA2, 0x93CD37BB, 0xA771FC61, 0x48419778,
+ 0xAE0E73CA, 0x413E18D3, 0x7582D309, 0x9AB2B810, 0x1CFB44BD, 0xF3CB2FA4, 0xC777E47E, 0x28478F67,
+ 0x8BF04D66, 0x64C0267F, 0x507CEDA5, 0xBF4C86BC, 0x39057A11, 0xD6351108, 0xE289DAD2, 0x0DB9B1CB,
+ 0xEBF65579, 0x04C63E60, 0x307AF5BA, 0xDF4A9EA3, 0x5903620E, 0xB6330917, 0x828FC2CD, 0x6DBFA9D4,
+ 0x4BFC7D58, 0xA4CC1641, 0x9070DD9B, 0x7F40B682, 0xF9094A2F, 0x16392136, 0x2285EAEC, 0xCDB581F5,
+ 0x2BFA6547, 0xC4CA0E5E, 0xF076C584, 0x1F46AE9D, 0x990F5230, 0x763F3929, 0x4283F2F3, 0xADB399EA,
+ 0x1C08B7D6, 0xF338DCCF, 0xC7841715, 0x28B47C0C, 0xAEFD80A1, 0x41CDEBB8, 0x75712062, 0x9A414B7B,
+ 0x7C0EAFC9, 0x933EC4D0, 0xA7820F0A, 0x48B26413, 0xCEFB98BE, 0x21CBF3A7, 0x1577387D, 0xFA475364,
+ 0xDC0487E8, 0x3334ECF1, 0x0788272B, 0xE8B84C32, 0x6EF1B09F, 0x81C1DB86, 0xB57D105C, 0x5A4D7B45,
+ 0xBC029FF7, 0x5332F4EE, 0x678E3F34, 0x88BE542D, 0x0EF7A880, 0xE1C7C399, 0xD57B0843, 0x3A4B635A,
+ 0x99FCA15B, 0x76CCCA42, 0x42700198, 0xAD406A81, 0x2B09962C, 0xC439FD35, 0xF08536EF, 0x1FB55DF6,
+ 0xF9FAB944, 0x16CAD25D, 0x22761987, 0xCD46729E, 0x4B0F8E33, 0xA43FE52A, 0x90832EF0, 0x7FB345E9,
+ 0x59F09165, 0xB6C0FA7C, 0x827C31A6, 0x6D4C5ABF, 0xEB05A612, 0x0435CD0B, 0x308906D1, 0xDFB96DC8,
+ 0x39F6897A, 0xD6C6E263, 0xE27A29B9, 0x0D4A42A0, 0x8B03BE0D, 0x6433D514, 0x508F1ECE, 0xBFBF75D7,
+ 0x120CEC3D, 0xFD3C8724, 0xC9804CFE, 0x26B027E7, 0xA0F9DB4A, 0x4FC9B053, 0x7B757B89, 0x94451090,
+ 0x720AF422, 0x9D3A9F3B, 0xA98654E1, 0x46B63FF8, 0xC0FFC355, 0x2FCFA84C, 0x1B736396, 0xF443088F,
+ 0xD200DC03, 0x3D30B71A, 0x098C7CC0, 0xE6BC17D9, 0x60F5EB74, 0x8FC5806D, 0xBB794BB7, 0x544920AE,
+ 0xB206C41C, 0x5D36AF05, 0x698A64DF, 0x86BA0FC6, 0x00F3F36B, 0xEFC39872, 0xDB7F53A8, 0x344F38B1,
+ 0x97F8FAB0, 0x78C891A9, 0x4C745A73, 0xA344316A, 0x250DCDC7, 0xCA3DA6DE, 0xFE816D04, 0x11B1061D,
+ 0xF7FEE2AF, 0x18CE89B6, 0x2C72426C, 0xC3422975, 0x450BD5D8, 0xAA3BBEC1, 0x9E87751B, 0x71B71E02,
+ 0x57F4CA8E, 0xB8C4A197, 0x8C786A4D, 0x63480154, 0xE501FDF9, 0x0A3196E0, 0x3E8D5D3A, 0xD1BD3623,
+ 0x37F2D291, 0xD8C2B988, 0xEC7E7252, 0x034E194B, 0x8507E5E6, 0x6A378EFF, 0x5E8B4525, 0xB1BB2E3C
+},
+{
+ 0x00000000, 0x68032CC8, 0xD0065990, 0xB8057558, 0xA5E0C5D1, 0xCDE3E919, 0x75E69C41, 0x1DE5B089,
+ 0x4E2DFD53, 0x262ED19B, 0x9E2BA4C3, 0xF628880B, 0xEBCD3882, 0x83CE144A, 0x3BCB6112, 0x53C84DDA,
+ 0x9C5BFAA6, 0xF458D66E, 0x4C5DA336, 0x245E8FFE, 0x39BB3F77, 0x51B813BF, 0xE9BD66E7, 0x81BE4A2F,
+ 0xD27607F5, 0xBA752B3D, 0x02705E65, 0x6A7372AD, 0x7796C224, 0x1F95EEEC, 0xA7909BB4, 0xCF93B77C,
+ 0x3D5B83BD, 0x5558AF75, 0xED5DDA2D, 0x855EF6E5, 0x98BB466C, 0xF0B86AA4, 0x48BD1FFC, 0x20BE3334,
+ 0x73767EEE, 0x1B755226, 0xA370277E, 0xCB730BB6, 0xD696BB3F, 0xBE9597F7, 0x0690E2AF, 0x6E93CE67,
+ 0xA100791B, 0xC90355D3, 0x7106208B, 0x19050C43, 0x04E0BCCA, 0x6CE39002, 0xD4E6E55A, 0xBCE5C992,
+ 0xEF2D8448, 0x872EA880, 0x3F2BDDD8, 0x5728F110, 0x4ACD4199, 0x22CE6D51, 0x9ACB1809, 0xF2C834C1,
+ 0x7AB7077A, 0x12B42BB2, 0xAAB15EEA, 0xC2B27222, 0xDF57C2AB, 0xB754EE63, 0x0F519B3B, 0x6752B7F3,
+ 0x349AFA29, 0x5C99D6E1, 0xE49CA3B9, 0x8C9F8F71, 0x917A3FF8, 0xF9791330, 0x417C6668, 0x297F4AA0,
+ 0xE6ECFDDC, 0x8EEFD114, 0x36EAA44C, 0x5EE98884, 0x430C380D, 0x2B0F14C5, 0x930A619D, 0xFB094D55,
+ 0xA8C1008F, 0xC0C22C47, 0x78C7591F, 0x10C475D7, 0x0D21C55E, 0x6522E996, 0xDD279CCE, 0xB524B006,
+ 0x47EC84C7, 0x2FEFA80F, 0x97EADD57, 0xFFE9F19F, 0xE20C4116, 0x8A0F6DDE, 0x320A1886, 0x5A09344E,
+ 0x09C17994, 0x61C2555C, 0xD9C72004, 0xB1C40CCC, 0xAC21BC45, 0xC422908D, 0x7C27E5D5, 0x1424C91D,
+ 0xDBB77E61, 0xB3B452A9, 0x0BB127F1, 0x63B20B39, 0x7E57BBB0, 0x16549778, 0xAE51E220, 0xC652CEE8,
+ 0x959A8332, 0xFD99AFFA, 0x459CDAA2, 0x2D9FF66A, 0x307A46E3, 0x58796A2B, 0xE07C1F73, 0x887F33BB,
+ 0xF56E0EF4, 0x9D6D223C, 0x25685764, 0x4D6B7BAC, 0x508ECB25, 0x388DE7ED, 0x808892B5, 0xE88BBE7D,
+ 0xBB43F3A7, 0xD340DF6F, 0x6B45AA37, 0x034686FF, 0x1EA33676, 0x76A01ABE, 0xCEA56FE6, 0xA6A6432E,
+ 0x6935F452, 0x0136D89A, 0xB933ADC2, 0xD130810A, 0xCCD53183, 0xA4D61D4B, 0x1CD36813, 0x74D044DB,
+ 0x27180901, 0x4F1B25C9, 0xF71E5091, 0x9F1D7C59, 0x82F8CCD0, 0xEAFBE018, 0x52FE9540, 0x3AFDB988,
+ 0xC8358D49, 0xA036A181, 0x1833D4D9, 0x7030F811, 0x6DD54898, 0x05D66450, 0xBDD31108, 0xD5D03DC0,
+ 0x8618701A, 0xEE1B5CD2, 0x561E298A, 0x3E1D0542, 0x23F8B5CB, 0x4BFB9903, 0xF3FEEC5B, 0x9BFDC093,
+ 0x546E77EF, 0x3C6D5B27, 0x84682E7F, 0xEC6B02B7, 0xF18EB23E, 0x998D9EF6, 0x2188EBAE, 0x498BC766,
+ 0x1A438ABC, 0x7240A674, 0xCA45D32C, 0xA246FFE4, 0xBFA34F6D, 0xD7A063A5, 0x6FA516FD, 0x07A63A35,
+ 0x8FD9098E, 0xE7DA2546, 0x5FDF501E, 0x37DC7CD6, 0x2A39CC5F, 0x423AE097, 0xFA3F95CF, 0x923CB907,
+ 0xC1F4F4DD, 0xA9F7D815, 0x11F2AD4D, 0x79F18185, 0x6414310C, 0x0C171DC4, 0xB412689C, 0xDC114454,
+ 0x1382F328, 0x7B81DFE0, 0xC384AAB8, 0xAB878670, 0xB66236F9, 0xDE611A31, 0x66646F69, 0x0E6743A1,
+ 0x5DAF0E7B, 0x35AC22B3, 0x8DA957EB, 0xE5AA7B23, 0xF84FCBAA, 0x904CE762, 0x2849923A, 0x404ABEF2,
+ 0xB2828A33, 0xDA81A6FB, 0x6284D3A3, 0x0A87FF6B, 0x17624FE2, 0x7F61632A, 0xC7641672, 0xAF673ABA,
+ 0xFCAF7760, 0x94AC5BA8, 0x2CA92EF0, 0x44AA0238, 0x594FB2B1, 0x314C9E79, 0x8949EB21, 0xE14AC7E9,
+ 0x2ED97095, 0x46DA5C5D, 0xFEDF2905, 0x96DC05CD, 0x8B39B544, 0xE33A998C, 0x5B3FECD4, 0x333CC01C,
+ 0x60F48DC6, 0x08F7A10E, 0xB0F2D456, 0xD8F1F89E, 0xC5144817, 0xAD1764DF, 0x15121187, 0x7D113D4F
+},
+{
+ 0x00000000, 0x493C7D27, 0x9278FA4E, 0xDB448769, 0x211D826D, 0x6821FF4A, 0xB3657823, 0xFA590504,
+ 0x423B04DA, 0x0B0779FD, 0xD043FE94, 0x997F83B3, 0x632686B7, 0x2A1AFB90, 0xF15E7CF9, 0xB86201DE,
+ 0x847609B4, 0xCD4A7493, 0x160EF3FA, 0x5F328EDD, 0xA56B8BD9, 0xEC57F6FE, 0x37137197, 0x7E2F0CB0,
+ 0xC64D0D6E, 0x8F717049, 0x5435F720, 0x1D098A07, 0xE7508F03, 0xAE6CF224, 0x7528754D, 0x3C14086A,
+ 0x0D006599, 0x443C18BE, 0x9F789FD7, 0xD644E2F0, 0x2C1DE7F4, 0x65219AD3, 0xBE651DBA, 0xF759609D,
+ 0x4F3B6143, 0x06071C64, 0xDD439B0D, 0x947FE62A, 0x6E26E32E, 0x271A9E09, 0xFC5E1960, 0xB5626447,
+ 0x89766C2D, 0xC04A110A, 0x1B0E9663, 0x5232EB44, 0xA86BEE40, 0xE1579367, 0x3A13140E, 0x732F6929,
+ 0xCB4D68F7, 0x827115D0, 0x593592B9, 0x1009EF9E, 0xEA50EA9A, 0xA36C97BD, 0x782810D4, 0x31146DF3,
+ 0x1A00CB32, 0x533CB615, 0x8878317C, 0xC1444C5B, 0x3B1D495F, 0x72213478, 0xA965B311, 0xE059CE36,
+ 0x583BCFE8, 0x1107B2CF, 0xCA4335A6, 0x837F4881, 0x79264D85, 0x301A30A2, 0xEB5EB7CB, 0xA262CAEC,
+ 0x9E76C286, 0xD74ABFA1, 0x0C0E38C8, 0x453245EF, 0xBF6B40EB, 0xF6573DCC, 0x2D13BAA5, 0x642FC782,
+ 0xDC4DC65C, 0x9571BB7B, 0x4E353C12, 0x07094135, 0xFD504431, 0xB46C3916, 0x6F28BE7F, 0x2614C358,
+ 0x1700AEAB, 0x5E3CD38C, 0x857854E5, 0xCC4429C2, 0x361D2CC6, 0x7F2151E1, 0xA465D688, 0xED59ABAF,
+ 0x553BAA71, 0x1C07D756, 0xC743503F, 0x8E7F2D18, 0x7426281C, 0x3D1A553B, 0xE65ED252, 0xAF62AF75,
+ 0x9376A71F, 0xDA4ADA38, 0x010E5D51, 0x48322076, 0xB26B2572, 0xFB575855, 0x2013DF3C, 0x692FA21B,
+ 0xD14DA3C5, 0x9871DEE2, 0x4335598B, 0x0A0924AC, 0xF05021A8, 0xB96C5C8F, 0x6228DBE6, 0x2B14A6C1,
+ 0x34019664, 0x7D3DEB43, 0xA6796C2A, 0xEF45110D, 0x151C1409, 0x5C20692E, 0x8764EE47, 0xCE589360,
+ 0x763A92BE, 0x3F06EF99, 0xE44268F0, 0xAD7E15D7, 0x572710D3, 0x1E1B6DF4, 0xC55FEA9D, 0x8C6397BA,
+ 0xB0779FD0, 0xF94BE2F7, 0x220F659E, 0x6B3318B9, 0x916A1DBD, 0xD856609A, 0x0312E7F3, 0x4A2E9AD4,
+ 0xF24C9B0A, 0xBB70E62D, 0x60346144, 0x29081C63, 0xD3511967, 0x9A6D6440, 0x4129E329, 0x08159E0E,
+ 0x3901F3FD, 0x703D8EDA, 0xAB7909B3, 0xE2457494, 0x181C7190, 0x51200CB7, 0x8A648BDE, 0xC358F6F9,
+ 0x7B3AF727, 0x32068A00, 0xE9420D69, 0xA07E704E, 0x5A27754A, 0x131B086D, 0xC85F8F04, 0x8163F223,
+ 0xBD77FA49, 0xF44B876E, 0x2F0F0007, 0x66337D20, 0x9C6A7824, 0xD5560503, 0x0E12826A, 0x472EFF4D,
+ 0xFF4CFE93, 0xB67083B4, 0x6D3404DD, 0x240879FA, 0xDE517CFE, 0x976D01D9, 0x4C2986B0, 0x0515FB97,
+ 0x2E015D56, 0x673D2071, 0xBC79A718, 0xF545DA3F, 0x0F1CDF3B, 0x4620A21C, 0x9D642575, 0xD4585852,
+ 0x6C3A598C, 0x250624AB, 0xFE42A3C2, 0xB77EDEE5, 0x4D27DBE1, 0x041BA6C6, 0xDF5F21AF, 0x96635C88,
+ 0xAA7754E2, 0xE34B29C5, 0x380FAEAC, 0x7133D38B, 0x8B6AD68F, 0xC256ABA8, 0x19122CC1, 0x502E51E6,
+ 0xE84C5038, 0xA1702D1F, 0x7A34AA76, 0x3308D751, 0xC951D255, 0x806DAF72, 0x5B29281B, 0x1215553C,
+ 0x230138CF, 0x6A3D45E8, 0xB179C281, 0xF845BFA6, 0x021CBAA2, 0x4B20C785, 0x906440EC, 0xD9583DCB,
+ 0x613A3C15, 0x28064132, 0xF342C65B, 0xBA7EBB7C, 0x4027BE78, 0x091BC35F, 0xD25F4436, 0x9B633911,
+ 0xA777317B, 0xEE4B4C5C, 0x350FCB35, 0x7C33B612, 0x866AB316, 0xCF56CE31, 0x14124958, 0x5D2E347F,
+ 0xE54C35A1, 0xAC704886, 0x7734CFEF, 0x3E08B2C8, 0xC451B7CC, 0x8D6DCAEB, 0x56294D82, 0x1F1530A5
+}};
+
+#define CRC32_UPD(crc, n) \
+ (crc32c_tables[(n)][(crc) & 0xFF] ^ \
+ crc32c_tables[(n)-1][((crc) >> 8) & 0xFF])
+
+static inline uint32_t
+crc32c_1word(uint32_t data, uint32_t init_val)
+{
+ uint32_t crc, term1, term2;
+ crc = init_val;
+ crc ^= data;
+
+ term1 = CRC32_UPD(crc, 3);
+ term2 = crc >> 16;
+ crc = term1 ^ CRC32_UPD(term2, 1);
+
+ return crc;
+}
+
+static inline uint32_t
+crc32c_2words(uint64_t data, uint32_t init_val)
+{
+ union {
+ uint64_t u64;
+ uint32_t u32[2];
+ } d;
+ d.u64 = data;
+
+ uint32_t crc, term1, term2;
+
+ crc = init_val;
+ crc ^= d.u32[0];
+
+ term1 = CRC32_UPD(crc, 7);
+ term2 = crc >> 16;
+ crc = term1 ^ CRC32_UPD(term2, 5);
+ term1 = CRC32_UPD(d.u32[1], 3);
+ term2 = d.u32[1] >> 16;
+ crc ^= term1 ^ CRC32_UPD(term2, 1);
+
+ return crc;
+}
+
+#if defined(RTE_ARCH_I686) || defined(RTE_ARCH_X86_64)
+static inline uint32_t
+crc32c_sse42_u32(uint32_t data, uint32_t init_val)
+{
+ __asm__ volatile(
+ "crc32l %[data], %[init_val];"
+ : [init_val] "+r" (init_val)
+ : [data] "rm" (data));
+ return init_val;
+}
+
+static inline uint32_t
+crc32c_sse42_u64_mimic(uint64_t data, uint64_t init_val)
+{
+ union {
+ uint32_t u32[2];
+ uint64_t u64;
+ } d;
+
+ d.u64 = data;
+ init_val = crc32c_sse42_u32(d.u32[0], init_val);
+ init_val = crc32c_sse42_u32(d.u32[1], init_val);
+ return init_val;
+}
+#endif
+
+#ifdef RTE_ARCH_X86_64
+static inline uint32_t
+crc32c_sse42_u64(uint64_t data, uint64_t init_val)
+{
+ __asm__ volatile(
+ "crc32q %[data], %[init_val];"
+ : [init_val] "+r" (init_val)
+ : [data] "rm" (data));
+ return init_val;
+}
+#endif
+
+static uint8_t crc32_alg = CRC32_SW;
+
+void rte_hash_crc_set_alg(uint8_t alg)
+{
+ switch (alg) {
+#if defined(RTE_ARCH_I686) || defined(RTE_ARCH_X86_64)
+ case CRC32_SSE42_x64:
+ if (! rte_cpu_get_flag_enabled(RTE_CPUFLAG_EM64T))
+ alg = CRC32_SSE42;
+ case CRC32_SSE42:
+ if (! rte_cpu_get_flag_enabled(RTE_CPUFLAG_SSE4_2))
+ alg = CRC32_SW;
+#endif
+ case CRC32_SW:
+ crc32_alg = alg;
+ default:
+ break;
+ }
+}
+
+/* Setting the best available algorithm */
+static inline void __attribute__((constructor))
+rte_hash_crc_init_alg(void)
+{
+ rte_hash_crc_set_alg(CRC32_SSE42_x64);
+}
+
+uint32_t rte_hash_crc_4byte(uint32_t data, uint32_t init_val)
+{
+#if defined RTE_ARCH_I686 || defined RTE_ARCH_X86_64
+ if (likely(crc32_alg & CRC32_SSE42))
+ return crc32c_sse42_u32(data, init_val);
+#endif
+
+ return crc32c_1word(data, init_val);
+}
+
+uint32_t rte_hash_crc_8byte(uint64_t data, uint32_t init_val)
+{
+#ifdef RTE_ARCH_X86_64
+ if (likely(crc32_alg == CRC32_SSE42_x64))
+ return crc32c_sse42_u64(data, init_val);
+#endif
+
+#if defined RTE_ARCH_I686 || defined RTE_ARCH_X86_64
+ if (likely(crc32_alg & CRC32_SSE42))
+ return crc32c_sse42_u64_mimic(data, init_val);
+#endif
+
+ return crc32c_2words(data, init_val);
+}
+uint32_t rte_hash_crc(const void *data, uint32_t data_len, uint32_t init_val)
+{
+ unsigned i;
+ uint64_t temp = 0;
+ uintptr_t pd = (uintptr_t) data;
+
+ for (i = 0; i < data_len / 8; i++) {
+ init_val = rte_hash_crc_8byte(*(const uint64_t *)pd, init_val);
+ pd += 8;
+ }
+
+ switch (7 - (data_len & 0x07)) {
+ case 0:
+ temp |= (uint64_t) *((const uint8_t *)pd + 6) << 48;
+ /* Fallthrough */
+ case 1:
+ temp |= (uint64_t) *((const uint8_t *)pd + 5) << 40;
+ /* Fallthrough */
+ case 2:
+ temp |= (uint64_t) *((const uint8_t *)pd + 4) << 32;
+ temp |= *(const uint32_t *)pd;
+ init_val = rte_hash_crc_8byte(temp, init_val);
+ break;
+ case 3:
+ init_val = rte_hash_crc_4byte(*(const uint32_t *)pd, init_val);
+ break;
+ case 4:
+ temp |= *((const uint8_t *)pd + 2) << 16;
+ /* Fallthrough */
+ case 5:
+ temp |= *((const uint8_t *)pd + 1) << 8;
+ /* Fallthrough */
+ case 6:
+ temp |= *(const uint8_t *)pd;
+ init_val = rte_hash_crc_4byte(temp, init_val);
+ /* Fallthrough */
+ default:
+ break;
+ }
+
+ return init_val;
+}
diff --git a/lib/librte_hash/rte_hash_crc.h b/lib/librte_hash/rte_hash_crc.h
index 1f6f5bf..218515a 100644
--- a/lib/librte_hash/rte_hash_crc.h
+++ b/lib/librte_hash/rte_hash_crc.h
@@ -44,372 +44,12 @@
extern "C" {
#endif
-#include <stdint.h>
-#include <rte_cpuflags.h>
-#include <rte_branch_prediction.h>
-#include <rte_common.h>
-
-/* Lookup tables for software implementation of CRC32C */
-static const uint32_t crc32c_tables[8][256] = {{
- 0x00000000, 0xF26B8303, 0xE13B70F7, 0x1350F3F4, 0xC79A971F, 0x35F1141C, 0x26A1E7E8, 0xD4CA64EB,
- 0x8AD958CF, 0x78B2DBCC, 0x6BE22838, 0x9989AB3B, 0x4D43CFD0, 0xBF284CD3, 0xAC78BF27, 0x5E133C24,
- 0x105EC76F, 0xE235446C, 0xF165B798, 0x030E349B, 0xD7C45070, 0x25AFD373, 0x36FF2087, 0xC494A384,
- 0x9A879FA0, 0x68EC1CA3, 0x7BBCEF57, 0x89D76C54, 0x5D1D08BF, 0xAF768BBC, 0xBC267848, 0x4E4DFB4B,
- 0x20BD8EDE, 0xD2D60DDD, 0xC186FE29, 0x33ED7D2A, 0xE72719C1, 0x154C9AC2, 0x061C6936, 0xF477EA35,
- 0xAA64D611, 0x580F5512, 0x4B5FA6E6, 0xB93425E5, 0x6DFE410E, 0x9F95C20D, 0x8CC531F9, 0x7EAEB2FA,
- 0x30E349B1, 0xC288CAB2, 0xD1D83946, 0x23B3BA45, 0xF779DEAE, 0x05125DAD, 0x1642AE59, 0xE4292D5A,
- 0xBA3A117E, 0x4851927D, 0x5B016189, 0xA96AE28A, 0x7DA08661, 0x8FCB0562, 0x9C9BF696, 0x6EF07595,
- 0x417B1DBC, 0xB3109EBF, 0xA0406D4B, 0x522BEE48, 0x86E18AA3, 0x748A09A0, 0x67DAFA54, 0x95B17957,
- 0xCBA24573, 0x39C9C670, 0x2A993584, 0xD8F2B687, 0x0C38D26C, 0xFE53516F, 0xED03A29B, 0x1F682198,
- 0x5125DAD3, 0xA34E59D0, 0xB01EAA24, 0x42752927, 0x96BF4DCC, 0x64D4CECF, 0x77843D3B, 0x85EFBE38,
- 0xDBFC821C, 0x2997011F, 0x3AC7F2EB, 0xC8AC71E8, 0x1C661503, 0xEE0D9600, 0xFD5D65F4, 0x0F36E6F7,
- 0x61C69362, 0x93AD1061, 0x80FDE395, 0x72966096, 0xA65C047D, 0x5437877E, 0x4767748A, 0xB50CF789,
- 0xEB1FCBAD, 0x197448AE, 0x0A24BB5A, 0xF84F3859, 0x2C855CB2, 0xDEEEDFB1, 0xCDBE2C45, 0x3FD5AF46,
- 0x7198540D, 0x83F3D70E, 0x90A324FA, 0x62C8A7F9, 0xB602C312, 0x44694011, 0x5739B3E5, 0xA55230E6,
- 0xFB410CC2, 0x092A8FC1, 0x1A7A7C35, 0xE811FF36, 0x3CDB9BDD, 0xCEB018DE, 0xDDE0EB2A, 0x2F8B6829,
- 0x82F63B78, 0x709DB87B, 0x63CD4B8F, 0x91A6C88C, 0x456CAC67, 0xB7072F64, 0xA457DC90, 0x563C5F93,
- 0x082F63B7, 0xFA44E0B4, 0xE9141340, 0x1B7F9043, 0xCFB5F4A8, 0x3DDE77AB, 0x2E8E845F, 0xDCE5075C,
- 0x92A8FC17, 0x60C37F14, 0x73938CE0, 0x81F80FE3, 0x55326B08, 0xA759E80B, 0xB4091BFF, 0x466298FC,
- 0x1871A4D8, 0xEA1A27DB, 0xF94AD42F, 0x0B21572C, 0xDFEB33C7, 0x2D80B0C4, 0x3ED04330, 0xCCBBC033,
- 0xA24BB5A6, 0x502036A5, 0x4370C551, 0xB11B4652, 0x65D122B9, 0x97BAA1BA, 0x84EA524E, 0x7681D14D,
- 0x2892ED69, 0xDAF96E6A, 0xC9A99D9E, 0x3BC21E9D, 0xEF087A76, 0x1D63F975, 0x0E330A81, 0xFC588982,
- 0xB21572C9, 0x407EF1CA, 0x532E023E, 0xA145813D, 0x758FE5D6, 0x87E466D5, 0x94B49521, 0x66DF1622,
- 0x38CC2A06, 0xCAA7A905, 0xD9F75AF1, 0x2B9CD9F2, 0xFF56BD19, 0x0D3D3E1A, 0x1E6DCDEE, 0xEC064EED,
- 0xC38D26C4, 0x31E6A5C7, 0x22B65633, 0xD0DDD530, 0x0417B1DB, 0xF67C32D8, 0xE52CC12C, 0x1747422F,
- 0x49547E0B, 0xBB3FFD08, 0xA86F0EFC, 0x5A048DFF, 0x8ECEE914, 0x7CA56A17, 0x6FF599E3, 0x9D9E1AE0,
- 0xD3D3E1AB, 0x21B862A8, 0x32E8915C, 0xC083125F, 0x144976B4, 0xE622F5B7, 0xF5720643, 0x07198540,
- 0x590AB964, 0xAB613A67, 0xB831C993, 0x4A5A4A90, 0x9E902E7B, 0x6CFBAD78, 0x7FAB5E8C, 0x8DC0DD8F,
- 0xE330A81A, 0x115B2B19, 0x020BD8ED, 0xF0605BEE, 0x24AA3F05, 0xD6C1BC06, 0xC5914FF2, 0x37FACCF1,
- 0x69E9F0D5, 0x9B8273D6, 0x88D28022, 0x7AB90321, 0xAE7367CA, 0x5C18E4C9, 0x4F48173D, 0xBD23943E,
- 0xF36E6F75, 0x0105EC76, 0x12551F82, 0xE03E9C81, 0x34F4F86A, 0xC69F7B69, 0xD5CF889D, 0x27A40B9E,
- 0x79B737BA, 0x8BDCB4B9, 0x988C474D, 0x6AE7C44E, 0xBE2DA0A5, 0x4C4623A6, 0x5F16D052, 0xAD7D5351
-},
-{
- 0x00000000, 0x13A29877, 0x274530EE, 0x34E7A899, 0x4E8A61DC, 0x5D28F9AB, 0x69CF5132, 0x7A6DC945,
- 0x9D14C3B8, 0x8EB65BCF, 0xBA51F356, 0xA9F36B21, 0xD39EA264, 0xC03C3A13, 0xF4DB928A, 0xE7790AFD,
- 0x3FC5F181, 0x2C6769F6, 0x1880C16F, 0x0B225918, 0x714F905D, 0x62ED082A, 0x560AA0B3, 0x45A838C4,
- 0xA2D13239, 0xB173AA4E, 0x859402D7, 0x96369AA0, 0xEC5B53E5, 0xFFF9CB92, 0xCB1E630B, 0xD8BCFB7C,
- 0x7F8BE302, 0x6C297B75, 0x58CED3EC, 0x4B6C4B9B, 0x310182DE, 0x22A31AA9, 0x1644B230, 0x05E62A47,
- 0xE29F20BA, 0xF13DB8CD, 0xC5DA1054, 0xD6788823, 0xAC154166, 0xBFB7D911, 0x8B507188, 0x98F2E9FF,
- 0x404E1283, 0x53EC8AF4, 0x670B226D, 0x74A9BA1A, 0x0EC4735F, 0x1D66EB28, 0x298143B1, 0x3A23DBC6,
- 0xDD5AD13B, 0xCEF8494C, 0xFA1FE1D5, 0xE9BD79A2, 0x93D0B0E7, 0x80722890, 0xB4958009, 0xA737187E,
- 0xFF17C604, 0xECB55E73, 0xD852F6EA, 0xCBF06E9D, 0xB19DA7D8, 0xA23F3FAF, 0x96D89736, 0x857A0F41,
- 0x620305BC, 0x71A19DCB, 0x45463552, 0x56E4AD25, 0x2C896460, 0x3F2BFC17, 0x0BCC548E, 0x186ECCF9,
- 0xC0D23785, 0xD370AFF2, 0xE797076B, 0xF4359F1C, 0x8E585659, 0x9DFACE2E, 0xA91D66B7, 0xBABFFEC0,
- 0x5DC6F43D, 0x4E646C4A, 0x7A83C4D3, 0x69215CA4, 0x134C95E1, 0x00EE0D96, 0x3409A50F, 0x27AB3D78,
- 0x809C2506, 0x933EBD71, 0xA7D915E8, 0xB47B8D9F, 0xCE1644DA, 0xDDB4DCAD, 0xE9537434, 0xFAF1EC43,
- 0x1D88E6BE, 0x0E2A7EC9, 0x3ACDD650, 0x296F4E27, 0x53028762, 0x40A01F15, 0x7447B78C, 0x67E52FFB,
- 0xBF59D487, 0xACFB4CF0, 0x981CE469, 0x8BBE7C1E, 0xF1D3B55B, 0xE2712D2C, 0xD69685B5, 0xC5341DC2,
- 0x224D173F, 0x31EF8F48, 0x050827D1, 0x16AABFA6, 0x6CC776E3, 0x7F65EE94, 0x4B82460D, 0x5820DE7A,
- 0xFBC3FAF9, 0xE861628E, 0xDC86CA17, 0xCF245260, 0xB5499B25, 0xA6EB0352, 0x920CABCB, 0x81AE33BC,
- 0x66D73941, 0x7575A136, 0x419209AF, 0x523091D8, 0x285D589D, 0x3BFFC0EA, 0x0F186873, 0x1CBAF004,
- 0xC4060B78, 0xD7A4930F, 0xE3433B96, 0xF0E1A3E1, 0x8A8C6AA4, 0x992EF2D3, 0xADC95A4A, 0xBE6BC23D,
- 0x5912C8C0, 0x4AB050B7, 0x7E57F82E, 0x6DF56059, 0x1798A91C, 0x043A316B, 0x30DD99F2, 0x237F0185,
- 0x844819FB, 0x97EA818C, 0xA30D2915, 0xB0AFB162, 0xCAC27827, 0xD960E050, 0xED8748C9, 0xFE25D0BE,
- 0x195CDA43, 0x0AFE4234, 0x3E19EAAD, 0x2DBB72DA, 0x57D6BB9F, 0x447423E8, 0x70938B71, 0x63311306,
- 0xBB8DE87A, 0xA82F700D, 0x9CC8D894, 0x8F6A40E3, 0xF50789A6, 0xE6A511D1, 0xD242B948, 0xC1E0213F,
- 0x26992BC2, 0x353BB3B5, 0x01DC1B2C, 0x127E835B, 0x68134A1E, 0x7BB1D269, 0x4F567AF0, 0x5CF4E287,
- 0x04D43CFD, 0x1776A48A, 0x23910C13, 0x30339464, 0x4A5E5D21, 0x59FCC556, 0x6D1B6DCF, 0x7EB9F5B8,
- 0x99C0FF45, 0x8A626732, 0xBE85CFAB, 0xAD2757DC, 0xD74A9E99, 0xC4E806EE, 0xF00FAE77, 0xE3AD3600,
- 0x3B11CD7C, 0x28B3550B, 0x1C54FD92, 0x0FF665E5, 0x759BACA0, 0x663934D7, 0x52DE9C4E, 0x417C0439,
- 0xA6050EC4, 0xB5A796B3, 0x81403E2A, 0x92E2A65D, 0xE88F6F18, 0xFB2DF76F, 0xCFCA5FF6, 0xDC68C781,
- 0x7B5FDFFF, 0x68FD4788, 0x5C1AEF11, 0x4FB87766, 0x35D5BE23, 0x26772654, 0x12908ECD, 0x013216BA,
- 0xE64B1C47, 0xF5E98430, 0xC10E2CA9, 0xD2ACB4DE, 0xA8C17D9B, 0xBB63E5EC, 0x8F844D75, 0x9C26D502,
- 0x449A2E7E, 0x5738B609, 0x63DF1E90, 0x707D86E7, 0x0A104FA2, 0x19B2D7D5, 0x2D557F4C, 0x3EF7E73B,
- 0xD98EEDC6, 0xCA2C75B1, 0xFECBDD28, 0xED69455F, 0x97048C1A, 0x84A6146D, 0xB041BCF4, 0xA3E32483
-},
-{
- 0x00000000, 0xA541927E, 0x4F6F520D, 0xEA2EC073, 0x9EDEA41A, 0x3B9F3664, 0xD1B1F617, 0x74F06469,
- 0x38513EC5, 0x9D10ACBB, 0x773E6CC8, 0xD27FFEB6, 0xA68F9ADF, 0x03CE08A1, 0xE9E0C8D2, 0x4CA15AAC,
- 0x70A27D8A, 0xD5E3EFF4, 0x3FCD2F87, 0x9A8CBDF9, 0xEE7CD990, 0x4B3D4BEE, 0xA1138B9D, 0x045219E3,
- 0x48F3434F, 0xEDB2D131, 0x079C1142, 0xA2DD833C, 0xD62DE755, 0x736C752B, 0x9942B558, 0x3C032726,
- 0xE144FB14, 0x4405696A, 0xAE2BA919, 0x0B6A3B67, 0x7F9A5F0E, 0xDADBCD70, 0x30F50D03, 0x95B49F7D,
- 0xD915C5D1, 0x7C5457AF, 0x967A97DC, 0x333B05A2, 0x47CB61CB, 0xE28AF3B5, 0x08A433C6, 0xADE5A1B8,
- 0x91E6869E, 0x34A714E0, 0xDE89D493, 0x7BC846ED, 0x0F382284, 0xAA79B0FA, 0x40577089, 0xE516E2F7,
- 0xA9B7B85B, 0x0CF62A25, 0xE6D8EA56, 0x43997828, 0x37691C41, 0x92288E3F, 0x78064E4C, 0xDD47DC32,
- 0xC76580D9, 0x622412A7, 0x880AD2D4, 0x2D4B40AA, 0x59BB24C3, 0xFCFAB6BD, 0x16D476CE, 0xB395E4B0,
- 0xFF34BE1C, 0x5A752C62, 0xB05BEC11, 0x151A7E6F, 0x61EA1A06, 0xC4AB8878, 0x2E85480B, 0x8BC4DA75,
- 0xB7C7FD53, 0x12866F2D, 0xF8A8AF5E, 0x5DE93D20, 0x29195949, 0x8C58CB37, 0x66760B44, 0xC337993A,
- 0x8F96C396, 0x2AD751E8, 0xC0F9919B, 0x65B803E5, 0x1148678C, 0xB409F5F2, 0x5E273581, 0xFB66A7FF,
- 0x26217BCD, 0x8360E9B3, 0x694E29C0, 0xCC0FBBBE, 0xB8FFDFD7, 0x1DBE4DA9, 0xF7908DDA, 0x52D11FA4,
- 0x1E704508, 0xBB31D776, 0x511F1705, 0xF45E857B, 0x80AEE112, 0x25EF736C, 0xCFC1B31F, 0x6A802161,
- 0x56830647, 0xF3C29439, 0x19EC544A, 0xBCADC634, 0xC85DA25D, 0x6D1C3023, 0x8732F050, 0x2273622E,
- 0x6ED23882, 0xCB93AAFC, 0x21BD6A8F, 0x84FCF8F1, 0xF00C9C98, 0x554D0EE6, 0xBF63CE95, 0x1A225CEB,
- 0x8B277743, 0x2E66E53D, 0xC448254E, 0x6109B730, 0x15F9D359, 0xB0B84127, 0x5A968154, 0xFFD7132A,
- 0xB3764986, 0x1637DBF8, 0xFC191B8B, 0x595889F5, 0x2DA8ED9C, 0x88E97FE2, 0x62C7BF91, 0xC7862DEF,
- 0xFB850AC9, 0x5EC498B7, 0xB4EA58C4, 0x11ABCABA, 0x655BAED3, 0xC01A3CAD, 0x2A34FCDE, 0x8F756EA0,
- 0xC3D4340C, 0x6695A672, 0x8CBB6601, 0x29FAF47F, 0x5D0A9016, 0xF84B0268, 0x1265C21B, 0xB7245065,
- 0x6A638C57, 0xCF221E29, 0x250CDE5A, 0x804D4C24, 0xF4BD284D, 0x51FCBA33, 0xBBD27A40, 0x1E93E83E,
- 0x5232B292, 0xF77320EC, 0x1D5DE09F, 0xB81C72E1, 0xCCEC1688, 0x69AD84F6, 0x83834485, 0x26C2D6FB,
- 0x1AC1F1DD, 0xBF8063A3, 0x55AEA3D0, 0xF0EF31AE, 0x841F55C7, 0x215EC7B9, 0xCB7007CA, 0x6E3195B4,
- 0x2290CF18, 0x87D15D66, 0x6DFF9D15, 0xC8BE0F6B, 0xBC4E6B02, 0x190FF97C, 0xF321390F, 0x5660AB71,
- 0x4C42F79A, 0xE90365E4, 0x032DA597, 0xA66C37E9, 0xD29C5380, 0x77DDC1FE, 0x9DF3018D, 0x38B293F3,
- 0x7413C95F, 0xD1525B21, 0x3B7C9B52, 0x9E3D092C, 0xEACD6D45, 0x4F8CFF3B, 0xA5A23F48, 0x00E3AD36,
- 0x3CE08A10, 0x99A1186E, 0x738FD81D, 0xD6CE4A63, 0xA23E2E0A, 0x077FBC74, 0xED517C07, 0x4810EE79,
- 0x04B1B4D5, 0xA1F026AB, 0x4BDEE6D8, 0xEE9F74A6, 0x9A6F10CF, 0x3F2E82B1, 0xD50042C2, 0x7041D0BC,
- 0xAD060C8E, 0x08479EF0, 0xE2695E83, 0x4728CCFD, 0x33D8A894, 0x96993AEA, 0x7CB7FA99, 0xD9F668E7,
- 0x9557324B, 0x3016A035, 0xDA386046, 0x7F79F238, 0x0B899651, 0xAEC8042F, 0x44E6C45C, 0xE1A75622,
- 0xDDA47104, 0x78E5E37A, 0x92CB2309, 0x378AB177, 0x437AD51E, 0xE63B4760, 0x0C158713, 0xA954156D,
- 0xE5F54FC1, 0x40B4DDBF, 0xAA9A1DCC, 0x0FDB8FB2, 0x7B2BEBDB, 0xDE6A79A5, 0x3444B9D6, 0x91052BA8
-},
-{
- 0x00000000, 0xDD45AAB8, 0xBF672381, 0x62228939, 0x7B2231F3, 0xA6679B4B, 0xC4451272, 0x1900B8CA,
- 0xF64463E6, 0x2B01C95E, 0x49234067, 0x9466EADF, 0x8D665215, 0x5023F8AD, 0x32017194, 0xEF44DB2C,
- 0xE964B13D, 0x34211B85, 0x560392BC, 0x8B463804, 0x924680CE, 0x4F032A76, 0x2D21A34F, 0xF06409F7,
- 0x1F20D2DB, 0xC2657863, 0xA047F15A, 0x7D025BE2, 0x6402E328, 0xB9474990, 0xDB65C0A9, 0x06206A11,
- 0xD725148B, 0x0A60BE33, 0x6842370A, 0xB5079DB2, 0xAC072578, 0x71428FC0, 0x136006F9, 0xCE25AC41,
- 0x2161776D, 0xFC24DDD5, 0x9E0654EC, 0x4343FE54, 0x5A43469E, 0x8706EC26, 0xE524651F, 0x3861CFA7,
- 0x3E41A5B6, 0xE3040F0E, 0x81268637, 0x5C632C8F, 0x45639445, 0x98263EFD, 0xFA04B7C4, 0x27411D7C,
- 0xC805C650, 0x15406CE8, 0x7762E5D1, 0xAA274F69, 0xB327F7A3, 0x6E625D1B, 0x0C40D422, 0xD1057E9A,
- 0xABA65FE7, 0x76E3F55F, 0x14C17C66, 0xC984D6DE, 0xD0846E14, 0x0DC1C4AC, 0x6FE34D95, 0xB2A6E72D,
- 0x5DE23C01, 0x80A796B9, 0xE2851F80, 0x3FC0B538, 0x26C00DF2, 0xFB85A74A, 0x99A72E73, 0x44E284CB,
- 0x42C2EEDA, 0x9F874462, 0xFDA5CD5B, 0x20E067E3, 0x39E0DF29, 0xE4A57591, 0x8687FCA8, 0x5BC25610,
- 0xB4868D3C, 0x69C32784, 0x0BE1AEBD, 0xD6A40405, 0xCFA4BCCF, 0x12E11677, 0x70C39F4E, 0xAD8635F6,
- 0x7C834B6C, 0xA1C6E1D4, 0xC3E468ED, 0x1EA1C255, 0x07A17A9F, 0xDAE4D027, 0xB8C6591E, 0x6583F3A6,
- 0x8AC7288A, 0x57828232, 0x35A00B0B, 0xE8E5A1B3, 0xF1E51979, 0x2CA0B3C1, 0x4E823AF8, 0x93C79040,
- 0x95E7FA51, 0x48A250E9, 0x2A80D9D0, 0xF7C57368, 0xEEC5CBA2, 0x3380611A, 0x51A2E823, 0x8CE7429B,
- 0x63A399B7, 0xBEE6330F, 0xDCC4BA36, 0x0181108E, 0x1881A844, 0xC5C402FC, 0xA7E68BC5, 0x7AA3217D,
- 0x52A0C93F, 0x8FE56387, 0xEDC7EABE, 0x30824006, 0x2982F8CC, 0xF4C75274, 0x96E5DB4D, 0x4BA071F5,
- 0xA4E4AAD9, 0x79A10061, 0x1B838958, 0xC6C623E0, 0xDFC69B2A, 0x02833192, 0x60A1B8AB, 0xBDE41213,
- 0xBBC47802, 0x6681D2BA, 0x04A35B83, 0xD9E6F13B, 0xC0E649F1, 0x1DA3E349, 0x7F816A70, 0xA2C4C0C8,
- 0x4D801BE4, 0x90C5B15C, 0xF2E73865, 0x2FA292DD, 0x36A22A17, 0xEBE780AF, 0x89C50996, 0x5480A32E,
- 0x8585DDB4, 0x58C0770C, 0x3AE2FE35, 0xE7A7548D, 0xFEA7EC47, 0x23E246FF, 0x41C0CFC6, 0x9C85657E,
- 0x73C1BE52, 0xAE8414EA, 0xCCA69DD3, 0x11E3376B, 0x08E38FA1, 0xD5A62519, 0xB784AC20, 0x6AC10698,
- 0x6CE16C89, 0xB1A4C631, 0xD3864F08, 0x0EC3E5B0, 0x17C35D7A, 0xCA86F7C2, 0xA8A47EFB, 0x75E1D443,
- 0x9AA50F6F, 0x47E0A5D7, 0x25C22CEE, 0xF8878656, 0xE1873E9C, 0x3CC29424, 0x5EE01D1D, 0x83A5B7A5,
- 0xF90696D8, 0x24433C60, 0x4661B559, 0x9B241FE1, 0x8224A72B, 0x5F610D93, 0x3D4384AA, 0xE0062E12,
- 0x0F42F53E, 0xD2075F86, 0xB025D6BF, 0x6D607C07, 0x7460C4CD, 0xA9256E75, 0xCB07E74C, 0x16424DF4,
- 0x106227E5, 0xCD278D5D, 0xAF050464, 0x7240AEDC, 0x6B401616, 0xB605BCAE, 0xD4273597, 0x09629F2F,
- 0xE6264403, 0x3B63EEBB, 0x59416782, 0x8404CD3A, 0x9D0475F0, 0x4041DF48, 0x22635671, 0xFF26FCC9,
- 0x2E238253, 0xF36628EB, 0x9144A1D2, 0x4C010B6A, 0x5501B3A0, 0x88441918, 0xEA669021, 0x37233A99,
- 0xD867E1B5, 0x05224B0D, 0x6700C234, 0xBA45688C, 0xA345D046, 0x7E007AFE, 0x1C22F3C7, 0xC167597F,
- 0xC747336E, 0x1A0299D6, 0x782010EF, 0xA565BA57, 0xBC65029D, 0x6120A825, 0x0302211C, 0xDE478BA4,
- 0x31035088, 0xEC46FA30, 0x8E647309, 0x5321D9B1, 0x4A21617B, 0x9764CBC3, 0xF54642FA, 0x2803E842
-},
-{
- 0x00000000, 0x38116FAC, 0x7022DF58, 0x4833B0F4, 0xE045BEB0, 0xD854D11C, 0x906761E8, 0xA8760E44,
- 0xC5670B91, 0xFD76643D, 0xB545D4C9, 0x8D54BB65, 0x2522B521, 0x1D33DA8D, 0x55006A79, 0x6D1105D5,
- 0x8F2261D3, 0xB7330E7F, 0xFF00BE8B, 0xC711D127, 0x6F67DF63, 0x5776B0CF, 0x1F45003B, 0x27546F97,
- 0x4A456A42, 0x725405EE, 0x3A67B51A, 0x0276DAB6, 0xAA00D4F2, 0x9211BB5E, 0xDA220BAA, 0xE2336406,
- 0x1BA8B557, 0x23B9DAFB, 0x6B8A6A0F, 0x539B05A3, 0xFBED0BE7, 0xC3FC644B, 0x8BCFD4BF, 0xB3DEBB13,
- 0xDECFBEC6, 0xE6DED16A, 0xAEED619E, 0x96FC0E32, 0x3E8A0076, 0x069B6FDA, 0x4EA8DF2E, 0x76B9B082,
- 0x948AD484, 0xAC9BBB28, 0xE4A80BDC, 0xDCB96470, 0x74CF6A34, 0x4CDE0598, 0x04EDB56C, 0x3CFCDAC0,
- 0x51EDDF15, 0x69FCB0B9, 0x21CF004D, 0x19DE6FE1, 0xB1A861A5, 0x89B90E09, 0xC18ABEFD, 0xF99BD151,
- 0x37516AAE, 0x0F400502, 0x4773B5F6, 0x7F62DA5A, 0xD714D41E, 0xEF05BBB2, 0xA7360B46, 0x9F2764EA,
- 0xF236613F, 0xCA270E93, 0x8214BE67, 0xBA05D1CB, 0x1273DF8F, 0x2A62B023, 0x625100D7, 0x5A406F7B,
- 0xB8730B7D, 0x806264D1, 0xC851D425, 0xF040BB89, 0x5836B5CD, 0x6027DA61, 0x28146A95, 0x10050539,
- 0x7D1400EC, 0x45056F40, 0x0D36DFB4, 0x3527B018, 0x9D51BE5C, 0xA540D1F0, 0xED736104, 0xD5620EA8,
- 0x2CF9DFF9, 0x14E8B055, 0x5CDB00A1, 0x64CA6F0D, 0xCCBC6149, 0xF4AD0EE5, 0xBC9EBE11, 0x848FD1BD,
- 0xE99ED468, 0xD18FBBC4, 0x99BC0B30, 0xA1AD649C, 0x09DB6AD8, 0x31CA0574, 0x79F9B580, 0x41E8DA2C,
- 0xA3DBBE2A, 0x9BCAD186, 0xD3F96172, 0xEBE80EDE, 0x439E009A, 0x7B8F6F36, 0x33BCDFC2, 0x0BADB06E,
- 0x66BCB5BB, 0x5EADDA17, 0x169E6AE3, 0x2E8F054F, 0x86F90B0B, 0xBEE864A7, 0xF6DBD453, 0xCECABBFF,
- 0x6EA2D55C, 0x56B3BAF0, 0x1E800A04, 0x269165A8, 0x8EE76BEC, 0xB6F60440, 0xFEC5B4B4, 0xC6D4DB18,
- 0xABC5DECD, 0x93D4B161, 0xDBE70195, 0xE3F66E39, 0x4B80607D, 0x73910FD1, 0x3BA2BF25, 0x03B3D089,
- 0xE180B48F, 0xD991DB23, 0x91A26BD7, 0xA9B3047B, 0x01C50A3F, 0x39D46593, 0x71E7D567, 0x49F6BACB,
- 0x24E7BF1E, 0x1CF6D0B2, 0x54C56046, 0x6CD40FEA, 0xC4A201AE, 0xFCB36E02, 0xB480DEF6, 0x8C91B15A,
- 0x750A600B, 0x4D1B0FA7, 0x0528BF53, 0x3D39D0FF, 0x954FDEBB, 0xAD5EB117, 0xE56D01E3, 0xDD7C6E4F,
- 0xB06D6B9A, 0x887C0436, 0xC04FB4C2, 0xF85EDB6E, 0x5028D52A, 0x6839BA86, 0x200A0A72, 0x181B65DE,
- 0xFA2801D8, 0xC2396E74, 0x8A0ADE80, 0xB21BB12C, 0x1A6DBF68, 0x227CD0C4, 0x6A4F6030, 0x525E0F9C,
- 0x3F4F0A49, 0x075E65E5, 0x4F6DD511, 0x777CBABD, 0xDF0AB4F9, 0xE71BDB55, 0xAF286BA1, 0x9739040D,
- 0x59F3BFF2, 0x61E2D05E, 0x29D160AA, 0x11C00F06, 0xB9B60142, 0x81A76EEE, 0xC994DE1A, 0xF185B1B6,
- 0x9C94B463, 0xA485DBCF, 0xECB66B3B, 0xD4A70497, 0x7CD10AD3, 0x44C0657F, 0x0CF3D58B, 0x34E2BA27,
- 0xD6D1DE21, 0xEEC0B18D, 0xA6F30179, 0x9EE26ED5, 0x36946091, 0x0E850F3D, 0x46B6BFC9, 0x7EA7D065,
- 0x13B6D5B0, 0x2BA7BA1C, 0x63940AE8, 0x5B856544, 0xF3F36B00, 0xCBE204AC, 0x83D1B458, 0xBBC0DBF4,
- 0x425B0AA5, 0x7A4A6509, 0x3279D5FD, 0x0A68BA51, 0xA21EB415, 0x9A0FDBB9, 0xD23C6B4D, 0xEA2D04E1,
- 0x873C0134, 0xBF2D6E98, 0xF71EDE6C, 0xCF0FB1C0, 0x6779BF84, 0x5F68D028, 0x175B60DC, 0x2F4A0F70,
- 0xCD796B76, 0xF56804DA, 0xBD5BB42E, 0x854ADB82, 0x2D3CD5C6, 0x152DBA6A, 0x5D1E0A9E, 0x650F6532,
- 0x081E60E7, 0x300F0F4B, 0x783CBFBF, 0x402DD013, 0xE85BDE57, 0xD04AB1FB, 0x9879010F, 0xA0686EA3
-},
-{
- 0x00000000, 0xEF306B19, 0xDB8CA0C3, 0x34BCCBDA, 0xB2F53777, 0x5DC55C6E, 0x697997B4, 0x8649FCAD,
- 0x6006181F, 0x8F367306, 0xBB8AB8DC, 0x54BAD3C5, 0xD2F32F68, 0x3DC34471, 0x097F8FAB, 0xE64FE4B2,
- 0xC00C303E, 0x2F3C5B27, 0x1B8090FD, 0xF4B0FBE4, 0x72F90749, 0x9DC96C50, 0xA975A78A, 0x4645CC93,
- 0xA00A2821, 0x4F3A4338, 0x7B8688E2, 0x94B6E3FB, 0x12FF1F56, 0xFDCF744F, 0xC973BF95, 0x2643D48C,
- 0x85F4168D, 0x6AC47D94, 0x5E78B64E, 0xB148DD57, 0x370121FA, 0xD8314AE3, 0xEC8D8139, 0x03BDEA20,
- 0xE5F20E92, 0x0AC2658B, 0x3E7EAE51, 0xD14EC548, 0x570739E5, 0xB83752FC, 0x8C8B9926, 0x63BBF23F,
- 0x45F826B3, 0xAAC84DAA, 0x9E748670, 0x7144ED69, 0xF70D11C4, 0x183D7ADD, 0x2C81B107, 0xC3B1DA1E,
- 0x25FE3EAC, 0xCACE55B5, 0xFE729E6F, 0x1142F576, 0x970B09DB, 0x783B62C2, 0x4C87A918, 0xA3B7C201,
- 0x0E045BEB, 0xE13430F2, 0xD588FB28, 0x3AB89031, 0xBCF16C9C, 0x53C10785, 0x677DCC5F, 0x884DA746,
- 0x6E0243F4, 0x813228ED, 0xB58EE337, 0x5ABE882E, 0xDCF77483, 0x33C71F9A, 0x077BD440, 0xE84BBF59,
- 0xCE086BD5, 0x213800CC, 0x1584CB16, 0xFAB4A00F, 0x7CFD5CA2, 0x93CD37BB, 0xA771FC61, 0x48419778,
- 0xAE0E73CA, 0x413E18D3, 0x7582D309, 0x9AB2B810, 0x1CFB44BD, 0xF3CB2FA4, 0xC777E47E, 0x28478F67,
- 0x8BF04D66, 0x64C0267F, 0x507CEDA5, 0xBF4C86BC, 0x39057A11, 0xD6351108, 0xE289DAD2, 0x0DB9B1CB,
- 0xEBF65579, 0x04C63E60, 0x307AF5BA, 0xDF4A9EA3, 0x5903620E, 0xB6330917, 0x828FC2CD, 0x6DBFA9D4,
- 0x4BFC7D58, 0xA4CC1641, 0x9070DD9B, 0x7F40B682, 0xF9094A2F, 0x16392136, 0x2285EAEC, 0xCDB581F5,
- 0x2BFA6547, 0xC4CA0E5E, 0xF076C584, 0x1F46AE9D, 0x990F5230, 0x763F3929, 0x4283F2F3, 0xADB399EA,
- 0x1C08B7D6, 0xF338DCCF, 0xC7841715, 0x28B47C0C, 0xAEFD80A1, 0x41CDEBB8, 0x75712062, 0x9A414B7B,
- 0x7C0EAFC9, 0x933EC4D0, 0xA7820F0A, 0x48B26413, 0xCEFB98BE, 0x21CBF3A7, 0x1577387D, 0xFA475364,
- 0xDC0487E8, 0x3334ECF1, 0x0788272B, 0xE8B84C32, 0x6EF1B09F, 0x81C1DB86, 0xB57D105C, 0x5A4D7B45,
- 0xBC029FF7, 0x5332F4EE, 0x678E3F34, 0x88BE542D, 0x0EF7A880, 0xE1C7C399, 0xD57B0843, 0x3A4B635A,
- 0x99FCA15B, 0x76CCCA42, 0x42700198, 0xAD406A81, 0x2B09962C, 0xC439FD35, 0xF08536EF, 0x1FB55DF6,
- 0xF9FAB944, 0x16CAD25D, 0x22761987, 0xCD46729E, 0x4B0F8E33, 0xA43FE52A, 0x90832EF0, 0x7FB345E9,
- 0x59F09165, 0xB6C0FA7C, 0x827C31A6, 0x6D4C5ABF, 0xEB05A612, 0x0435CD0B, 0x308906D1, 0xDFB96DC8,
- 0x39F6897A, 0xD6C6E263, 0xE27A29B9, 0x0D4A42A0, 0x8B03BE0D, 0x6433D514, 0x508F1ECE, 0xBFBF75D7,
- 0x120CEC3D, 0xFD3C8724, 0xC9804CFE, 0x26B027E7, 0xA0F9DB4A, 0x4FC9B053, 0x7B757B89, 0x94451090,
- 0x720AF422, 0x9D3A9F3B, 0xA98654E1, 0x46B63FF8, 0xC0FFC355, 0x2FCFA84C, 0x1B736396, 0xF443088F,
- 0xD200DC03, 0x3D30B71A, 0x098C7CC0, 0xE6BC17D9, 0x60F5EB74, 0x8FC5806D, 0xBB794BB7, 0x544920AE,
- 0xB206C41C, 0x5D36AF05, 0x698A64DF, 0x86BA0FC6, 0x00F3F36B, 0xEFC39872, 0xDB7F53A8, 0x344F38B1,
- 0x97F8FAB0, 0x78C891A9, 0x4C745A73, 0xA344316A, 0x250DCDC7, 0xCA3DA6DE, 0xFE816D04, 0x11B1061D,
- 0xF7FEE2AF, 0x18CE89B6, 0x2C72426C, 0xC3422975, 0x450BD5D8, 0xAA3BBEC1, 0x9E87751B, 0x71B71E02,
- 0x57F4CA8E, 0xB8C4A197, 0x8C786A4D, 0x63480154, 0xE501FDF9, 0x0A3196E0, 0x3E8D5D3A, 0xD1BD3623,
- 0x37F2D291, 0xD8C2B988, 0xEC7E7252, 0x034E194B, 0x8507E5E6, 0x6A378EFF, 0x5E8B4525, 0xB1BB2E3C
-},
-{
- 0x00000000, 0x68032CC8, 0xD0065990, 0xB8057558, 0xA5E0C5D1, 0xCDE3E919, 0x75E69C41, 0x1DE5B089,
- 0x4E2DFD53, 0x262ED19B, 0x9E2BA4C3, 0xF628880B, 0xEBCD3882, 0x83CE144A, 0x3BCB6112, 0x53C84DDA,
- 0x9C5BFAA6, 0xF458D66E, 0x4C5DA336, 0x245E8FFE, 0x39BB3F77, 0x51B813BF, 0xE9BD66E7, 0x81BE4A2F,
- 0xD27607F5, 0xBA752B3D, 0x02705E65, 0x6A7372AD, 0x7796C224, 0x1F95EEEC, 0xA7909BB4, 0xCF93B77C,
- 0x3D5B83BD, 0x5558AF75, 0xED5DDA2D, 0x855EF6E5, 0x98BB466C, 0xF0B86AA4, 0x48BD1FFC, 0x20BE3334,
- 0x73767EEE, 0x1B755226, 0xA370277E, 0xCB730BB6, 0xD696BB3F, 0xBE9597F7, 0x0690E2AF, 0x6E93CE67,
- 0xA100791B, 0xC90355D3, 0x7106208B, 0x19050C43, 0x04E0BCCA, 0x6CE39002, 0xD4E6E55A, 0xBCE5C992,
- 0xEF2D8448, 0x872EA880, 0x3F2BDDD8, 0x5728F110, 0x4ACD4199, 0x22CE6D51, 0x9ACB1809, 0xF2C834C1,
- 0x7AB7077A, 0x12B42BB2, 0xAAB15EEA, 0xC2B27222, 0xDF57C2AB, 0xB754EE63, 0x0F519B3B, 0x6752B7F3,
- 0x349AFA29, 0x5C99D6E1, 0xE49CA3B9, 0x8C9F8F71, 0x917A3FF8, 0xF9791330, 0x417C6668, 0x297F4AA0,
- 0xE6ECFDDC, 0x8EEFD114, 0x36EAA44C, 0x5EE98884, 0x430C380D, 0x2B0F14C5, 0x930A619D, 0xFB094D55,
- 0xA8C1008F, 0xC0C22C47, 0x78C7591F, 0x10C475D7, 0x0D21C55E, 0x6522E996, 0xDD279CCE, 0xB524B006,
- 0x47EC84C7, 0x2FEFA80F, 0x97EADD57, 0xFFE9F19F, 0xE20C4116, 0x8A0F6DDE, 0x320A1886, 0x5A09344E,
- 0x09C17994, 0x61C2555C, 0xD9C72004, 0xB1C40CCC, 0xAC21BC45, 0xC422908D, 0x7C27E5D5, 0x1424C91D,
- 0xDBB77E61, 0xB3B452A9, 0x0BB127F1, 0x63B20B39, 0x7E57BBB0, 0x16549778, 0xAE51E220, 0xC652CEE8,
- 0x959A8332, 0xFD99AFFA, 0x459CDAA2, 0x2D9FF66A, 0x307A46E3, 0x58796A2B, 0xE07C1F73, 0x887F33BB,
- 0xF56E0EF4, 0x9D6D223C, 0x25685764, 0x4D6B7BAC, 0x508ECB25, 0x388DE7ED, 0x808892B5, 0xE88BBE7D,
- 0xBB43F3A7, 0xD340DF6F, 0x6B45AA37, 0x034686FF, 0x1EA33676, 0x76A01ABE, 0xCEA56FE6, 0xA6A6432E,
- 0x6935F452, 0x0136D89A, 0xB933ADC2, 0xD130810A, 0xCCD53183, 0xA4D61D4B, 0x1CD36813, 0x74D044DB,
- 0x27180901, 0x4F1B25C9, 0xF71E5091, 0x9F1D7C59, 0x82F8CCD0, 0xEAFBE018, 0x52FE9540, 0x3AFDB988,
- 0xC8358D49, 0xA036A181, 0x1833D4D9, 0x7030F811, 0x6DD54898, 0x05D66450, 0xBDD31108, 0xD5D03DC0,
- 0x8618701A, 0xEE1B5CD2, 0x561E298A, 0x3E1D0542, 0x23F8B5CB, 0x4BFB9903, 0xF3FEEC5B, 0x9BFDC093,
- 0x546E77EF, 0x3C6D5B27, 0x84682E7F, 0xEC6B02B7, 0xF18EB23E, 0x998D9EF6, 0x2188EBAE, 0x498BC766,
- 0x1A438ABC, 0x7240A674, 0xCA45D32C, 0xA246FFE4, 0xBFA34F6D, 0xD7A063A5, 0x6FA516FD, 0x07A63A35,
- 0x8FD9098E, 0xE7DA2546, 0x5FDF501E, 0x37DC7CD6, 0x2A39CC5F, 0x423AE097, 0xFA3F95CF, 0x923CB907,
- 0xC1F4F4DD, 0xA9F7D815, 0x11F2AD4D, 0x79F18185, 0x6414310C, 0x0C171DC4, 0xB412689C, 0xDC114454,
- 0x1382F328, 0x7B81DFE0, 0xC384AAB8, 0xAB878670, 0xB66236F9, 0xDE611A31, 0x66646F69, 0x0E6743A1,
- 0x5DAF0E7B, 0x35AC22B3, 0x8DA957EB, 0xE5AA7B23, 0xF84FCBAA, 0x904CE762, 0x2849923A, 0x404ABEF2,
- 0xB2828A33, 0xDA81A6FB, 0x6284D3A3, 0x0A87FF6B, 0x17624FE2, 0x7F61632A, 0xC7641672, 0xAF673ABA,
- 0xFCAF7760, 0x94AC5BA8, 0x2CA92EF0, 0x44AA0238, 0x594FB2B1, 0x314C9E79, 0x8949EB21, 0xE14AC7E9,
- 0x2ED97095, 0x46DA5C5D, 0xFEDF2905, 0x96DC05CD, 0x8B39B544, 0xE33A998C, 0x5B3FECD4, 0x333CC01C,
- 0x60F48DC6, 0x08F7A10E, 0xB0F2D456, 0xD8F1F89E, 0xC5144817, 0xAD1764DF, 0x15121187, 0x7D113D4F
-},
-{
- 0x00000000, 0x493C7D27, 0x9278FA4E, 0xDB448769, 0x211D826D, 0x6821FF4A, 0xB3657823, 0xFA590504,
- 0x423B04DA, 0x0B0779FD, 0xD043FE94, 0x997F83B3, 0x632686B7, 0x2A1AFB90, 0xF15E7CF9, 0xB86201DE,
- 0x847609B4, 0xCD4A7493, 0x160EF3FA, 0x5F328EDD, 0xA56B8BD9, 0xEC57F6FE, 0x37137197, 0x7E2F0CB0,
- 0xC64D0D6E, 0x8F717049, 0x5435F720, 0x1D098A07, 0xE7508F03, 0xAE6CF224, 0x7528754D, 0x3C14086A,
- 0x0D006599, 0x443C18BE, 0x9F789FD7, 0xD644E2F0, 0x2C1DE7F4, 0x65219AD3, 0xBE651DBA, 0xF759609D,
- 0x4F3B6143, 0x06071C64, 0xDD439B0D, 0x947FE62A, 0x6E26E32E, 0x271A9E09, 0xFC5E1960, 0xB5626447,
- 0x89766C2D, 0xC04A110A, 0x1B0E9663, 0x5232EB44, 0xA86BEE40, 0xE1579367, 0x3A13140E, 0x732F6929,
- 0xCB4D68F7, 0x827115D0, 0x593592B9, 0x1009EF9E, 0xEA50EA9A, 0xA36C97BD, 0x782810D4, 0x31146DF3,
- 0x1A00CB32, 0x533CB615, 0x8878317C, 0xC1444C5B, 0x3B1D495F, 0x72213478, 0xA965B311, 0xE059CE36,
- 0x583BCFE8, 0x1107B2CF, 0xCA4335A6, 0x837F4881, 0x79264D85, 0x301A30A2, 0xEB5EB7CB, 0xA262CAEC,
- 0x9E76C286, 0xD74ABFA1, 0x0C0E38C8, 0x453245EF, 0xBF6B40EB, 0xF6573DCC, 0x2D13BAA5, 0x642FC782,
- 0xDC4DC65C, 0x9571BB7B, 0x4E353C12, 0x07094135, 0xFD504431, 0xB46C3916, 0x6F28BE7F, 0x2614C358,
- 0x1700AEAB, 0x5E3CD38C, 0x857854E5, 0xCC4429C2, 0x361D2CC6, 0x7F2151E1, 0xA465D688, 0xED59ABAF,
- 0x553BAA71, 0x1C07D756, 0xC743503F, 0x8E7F2D18, 0x7426281C, 0x3D1A553B, 0xE65ED252, 0xAF62AF75,
- 0x9376A71F, 0xDA4ADA38, 0x010E5D51, 0x48322076, 0xB26B2572, 0xFB575855, 0x2013DF3C, 0x692FA21B,
- 0xD14DA3C5, 0x9871DEE2, 0x4335598B, 0x0A0924AC, 0xF05021A8, 0xB96C5C8F, 0x6228DBE6, 0x2B14A6C1,
- 0x34019664, 0x7D3DEB43, 0xA6796C2A, 0xEF45110D, 0x151C1409, 0x5C20692E, 0x8764EE47, 0xCE589360,
- 0x763A92BE, 0x3F06EF99, 0xE44268F0, 0xAD7E15D7, 0x572710D3, 0x1E1B6DF4, 0xC55FEA9D, 0x8C6397BA,
- 0xB0779FD0, 0xF94BE2F7, 0x220F659E, 0x6B3318B9, 0x916A1DBD, 0xD856609A, 0x0312E7F3, 0x4A2E9AD4,
- 0xF24C9B0A, 0xBB70E62D, 0x60346144, 0x29081C63, 0xD3511967, 0x9A6D6440, 0x4129E329, 0x08159E0E,
- 0x3901F3FD, 0x703D8EDA, 0xAB7909B3, 0xE2457494, 0x181C7190, 0x51200CB7, 0x8A648BDE, 0xC358F6F9,
- 0x7B3AF727, 0x32068A00, 0xE9420D69, 0xA07E704E, 0x5A27754A, 0x131B086D, 0xC85F8F04, 0x8163F223,
- 0xBD77FA49, 0xF44B876E, 0x2F0F0007, 0x66337D20, 0x9C6A7824, 0xD5560503, 0x0E12826A, 0x472EFF4D,
- 0xFF4CFE93, 0xB67083B4, 0x6D3404DD, 0x240879FA, 0xDE517CFE, 0x976D01D9, 0x4C2986B0, 0x0515FB97,
- 0x2E015D56, 0x673D2071, 0xBC79A718, 0xF545DA3F, 0x0F1CDF3B, 0x4620A21C, 0x9D642575, 0xD4585852,
- 0x6C3A598C, 0x250624AB, 0xFE42A3C2, 0xB77EDEE5, 0x4D27DBE1, 0x041BA6C6, 0xDF5F21AF, 0x96635C88,
- 0xAA7754E2, 0xE34B29C5, 0x380FAEAC, 0x7133D38B, 0x8B6AD68F, 0xC256ABA8, 0x19122CC1, 0x502E51E6,
- 0xE84C5038, 0xA1702D1F, 0x7A34AA76, 0x3308D751, 0xC951D255, 0x806DAF72, 0x5B29281B, 0x1215553C,
- 0x230138CF, 0x6A3D45E8, 0xB179C281, 0xF845BFA6, 0x021CBAA2, 0x4B20C785, 0x906440EC, 0xD9583DCB,
- 0x613A3C15, 0x28064132, 0xF342C65B, 0xBA7EBB7C, 0x4027BE78, 0x091BC35F, 0xD25F4436, 0x9B633911,
- 0xA777317B, 0xEE4B4C5C, 0x350FCB35, 0x7C33B612, 0x866AB316, 0xCF56CE31, 0x14124958, 0x5D2E347F,
- 0xE54C35A1, 0xAC704886, 0x7734CFEF, 0x3E08B2C8, 0xC451B7CC, 0x8D6DCAEB, 0x56294D82, 0x1F1530A5
-}};
-
-#define CRC32_UPD(crc, n) \
- (crc32c_tables[(n)][(crc) & 0xFF] ^ \
- crc32c_tables[(n)-1][((crc) >> 8) & 0xFF])
-
-static inline uint32_t
-crc32c_1word(uint32_t data, uint32_t init_val)
-{
- uint32_t crc, term1, term2;
- crc = init_val;
- crc ^= data;
-
- term1 = CRC32_UPD(crc, 3);
- term2 = crc >> 16;
- crc = term1 ^ CRC32_UPD(term2, 1);
-
- return crc;
-}
-
-static inline uint32_t
-crc32c_2words(uint64_t data, uint32_t init_val)
-{
- union {
- uint64_t u64;
- uint32_t u32[2];
- } d;
- d.u64 = data;
-
- uint32_t crc, term1, term2;
-
- crc = init_val;
- crc ^= d.u32[0];
-
- term1 = CRC32_UPD(crc, 7);
- term2 = crc >> 16;
- crc = term1 ^ CRC32_UPD(term2, 5);
- term1 = CRC32_UPD(d.u32[1], 3);
- term2 = d.u32[1] >> 16;
- crc ^= term1 ^ CRC32_UPD(term2, 1);
-
- return crc;
-}
-
-#if defined(RTE_ARCH_I686) || defined(RTE_ARCH_X86_64)
-static inline uint32_t
-crc32c_sse42_u32(uint32_t data, uint32_t init_val)
-{
- __asm__ volatile(
- "crc32l %[data], %[init_val];"
- : [init_val] "+r" (init_val)
- : [data] "rm" (data));
- return init_val;
-}
-
-static inline uint32_t
-crc32c_sse42_u64_mimic(uint64_t data, uint64_t init_val)
-{
- union {
- uint32_t u32[2];
- uint64_t u64;
- } d;
-
- d.u64 = data;
- init_val = crc32c_sse42_u32(d.u32[0], init_val);
- init_val = crc32c_sse42_u32(d.u32[1], init_val);
- return init_val;
-}
-#endif
-
-#ifdef RTE_ARCH_X86_64
-static inline uint32_t
-crc32c_sse42_u64(uint64_t data, uint64_t init_val)
-{
- __asm__ volatile(
- "crc32q %[data], %[init_val];"
- : [init_val] "+r" (init_val)
- : [data] "rm" (data));
- return init_val;
-}
-#endif
-
+/* Algorithm types */
#define CRC32_SW (1U << 0)
#define CRC32_SSE42 (1U << 1)
#define CRC32_x64 (1U << 2)
#define CRC32_SSE42_x64 (CRC32_x64|CRC32_SSE42)
-static uint8_t crc32_alg = CRC32_SW;
-
/**
* Allow or disallow use of SSE4.2 instrinsics for CRC32 hash
* calculation.
@@ -421,31 +61,8 @@ static uint8_t crc32_alg = CRC32_SW;
* - (CRC32_SSE42_x64) Use 64-bit SSE4.2 intrinsic if available (default)
*
*/
-static inline void
-rte_hash_crc_set_alg(uint8_t alg)
-{
- switch (alg) {
-#if defined(RTE_ARCH_I686) || defined(RTE_ARCH_X86_64)
- case CRC32_SSE42_x64:
- if (! rte_cpu_get_flag_enabled(RTE_CPUFLAG_EM64T))
- alg = CRC32_SSE42;
- case CRC32_SSE42:
- if (! rte_cpu_get_flag_enabled(RTE_CPUFLAG_SSE4_2))
- alg = CRC32_SW;
-#endif
- case CRC32_SW:
- crc32_alg = alg;
- default:
- break;
- }
-}
+void rte_hash_crc_set_alg(uint8_t alg);
-/* Setting the best available algorithm */
-static inline void __attribute__((constructor))
-rte_hash_crc_init_alg(void)
-{
- rte_hash_crc_set_alg(CRC32_SSE42_x64);
-}
/**
* Use single crc32 instruction to perform a hash on a 4 byte value.
@@ -459,16 +76,7 @@ rte_hash_crc_init_alg(void)
* @return
* 32bit calculated hash value.
*/
-static inline uint32_t
-rte_hash_crc_4byte(uint32_t data, uint32_t init_val)
-{
-#if defined RTE_ARCH_I686 || defined RTE_ARCH_X86_64
- if (likely(crc32_alg & CRC32_SSE42))
- return crc32c_sse42_u32(data, init_val);
-#endif
-
- return crc32c_1word(data, init_val);
-}
+uint32_t rte_hash_crc_4byte(uint32_t data, uint32_t init_val);
/**
* Use single crc32 instruction to perform a hash on a 8 byte value.
@@ -482,21 +90,7 @@ rte_hash_crc_4byte(uint32_t data, uint32_t init_val)
* @return
* 32bit calculated hash value.
*/
-static inline uint32_t
-rte_hash_crc_8byte(uint64_t data, uint32_t init_val)
-{
-#ifdef RTE_ARCH_X86_64
- if (likely(crc32_alg == CRC32_SSE42_x64))
- return crc32c_sse42_u64(data, init_val);
-#endif
-
-#if defined RTE_ARCH_I686 || defined RTE_ARCH_X86_64
- if (likely(crc32_alg & CRC32_SSE42))
- return crc32c_sse42_u64_mimic(data, init_val);
-#endif
-
- return crc32c_2words(data, init_val);
-}
+uint32_t rte_hash_crc_8byte(uint64_t data, uint32_t init_val);
/**
* Calculate CRC32 hash on user-supplied byte array.
@@ -510,49 +104,7 @@ rte_hash_crc_8byte(uint64_t data, uint32_t init_val)
* @return
* 32bit calculated hash value.
*/
-static inline uint32_t
-rte_hash_crc(const void *data, uint32_t data_len, uint32_t init_val)
-{
- unsigned i;
- uint64_t temp = 0;
- uintptr_t pd = (uintptr_t) data;
-
- for (i = 0; i < data_len / 8; i++) {
- init_val = rte_hash_crc_8byte(*(const uint64_t *)pd, init_val);
- pd += 8;
- }
-
- switch (7 - (data_len & 0x07)) {
- case 0:
- temp |= (uint64_t) *((const uint8_t *)pd + 6) << 48;
- /* Fallthrough */
- case 1:
- temp |= (uint64_t) *((const uint8_t *)pd + 5) << 40;
- /* Fallthrough */
- case 2:
- temp |= (uint64_t) *((const uint8_t *)pd + 4) << 32;
- temp |= *(const uint32_t *)pd;
- init_val = rte_hash_crc_8byte(temp, init_val);
- break;
- case 3:
- init_val = rte_hash_crc_4byte(*(const uint32_t *)pd, init_val);
- break;
- case 4:
- temp |= *((const uint8_t *)pd + 2) << 16;
- /* Fallthrough */
- case 5:
- temp |= *((const uint8_t *)pd + 1) << 8;
- /* Fallthrough */
- case 6:
- temp |= *(const uint8_t *)pd;
- init_val = rte_hash_crc_4byte(temp, init_val);
- /* Fallthrough */
- default:
- break;
- }
-
- return init_val;
-}
+uint32_t rte_hash_crc(const void *data, uint32_t data_len, uint32_t init_val);
#ifdef __cplusplus
}
--
2.1.4
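
A usage illustration only, not part of the posted patch: a minimal sketch of calling the now-exported CRC API declared above (rte_hash_crc_set_alg(), rte_hash_crc()). The seed value and the standalone main() are hypothetical, and the sketch assumes the program is compiled and linked against librte_hash; EAL setup and error handling are omitted.

#include <stdio.h>
#include <stdint.h>
#include <rte_hash_crc.h>

int main(void)
{
	const char key[] = "dpdk-example-key";	/* hypothetical key */
	uint32_t seed = 0xdeadbeef;		/* arbitrary example seed */
	uint32_t h;

	/* Force the portable table-driven implementation. */
	rte_hash_crc_set_alg(CRC32_SW);
	h = rte_hash_crc(key, sizeof(key) - 1, seed);
	printf("software CRC32C hash: 0x%08x\n", h);

	/* Let the library pick the best implementation the CPU supports;
	 * this mirrors what the constructor does by default. */
	rte_hash_crc_set_alg(CRC32_SSE42_x64);
	h = rte_hash_crc(key, sizeof(key) - 1, seed);
	printf("best-available hash:  0x%08x\n", h);

	return 0;
}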
^ permalink raw reply [relevance 1%]
* Re: [dpdk-dev] [PATCH 0/2] uio_msi: device driver
2015-10-01 23:03 0% ` Alexander Duyck
@ 2015-10-01 23:39 0% ` Stephen Hemminger
2015-10-01 23:43 0% ` Alexander Duyck
0 siblings, 1 reply; 200+ results
From: Stephen Hemminger @ 2015-10-01 23:39 UTC (permalink / raw)
To: Alexander Duyck; +Cc: dev, Avi Kivity, hjk, gregkh, linux-kernel
On Thu, 1 Oct 2015 16:03:06 -0700
Alexander Duyck <alexander.duyck@gmail.com> wrote:
> On 10/01/2015 03:00 PM, Stephen Hemminger wrote:
> > On Thu, 1 Oct 2015 12:48:36 -0700
> > Alexander Duyck <alexander.duyck@gmail.com> wrote:
> >
> >> On 10/01/2015 07:57 AM, Stephen Hemminger wrote:
> >>> On Thu, 1 Oct 2015 13:59:02 +0300
> >>> Avi Kivity <avi@scylladb.com> wrote:
> >>>
> >>>> On 10/01/2015 01:28 AM, Stephen Hemminger wrote:
> >>>>> This is a new UIO device driver to allow supporting MSI-X and MSI devices
> >>>>> in userspace. It has been used in environments like VMware and older versions
> >>>>> of QEMU/KVM where no IOMMU support is available.
> >>>> Why not add msi/msix support to uio_pci_generic?
> >>> That is possible but that would meet ABI and other resistance from the author.
> >>> Also, uio_pci_generic makes it harder to find resources since it doesn't fully
> >>> utilize UIO infrastructure.
> >> I'd say you are better off actually taking this in the other direction.
> >> From what I have seen it seems like this driver is meant to deal with
> >> mapping VFs contained inside of guests. If you are going to fork off
> >> and create a UIO driver for mapping VFs why not just make it specialize
> >> in that. You could probably simplify the code by dropping support for
> >> legacy interrupts and IO regions since all that is already covered by
> >> uio_pci_generic anyway if I am not mistaken.
> >>
> >> You could then look at naming it something like uio_vf since the uio_msi
> >> is a bit of a misnomer since it is MSI-X it supports, not MSI interrupts.
> > The support needs to cover:
> > - VF in guest
> > - VNIC in guest (vmxnet3)
> > it isn't just about VF's
>
> I get that, but the driver you are talking about adding is duplicating
> much of what is already there in uio_pci_generic. If nothing else it
> might be worth while to look at replacing the legacy interrupt with
> MSI. Maybe look at naming it something like uio_pcie to indicate that
> we are focusing on assigning PCIe and virtual devices that support MSI
> and MSI-X and use memory BARs rather than legacy PCI devices that are
> doing things like mapping I/O BARs and using INTx signaling.
>
> My main argument is that we should probably look at dropping support for
> anything that isn't going to be needed. If it is really important we
> can always add it later. I just don't see the value in having code
> around for things we aren't likely to ever use with real devices as we
> are stuck supporting it for the life of the driver. I'll go ahead and
> provide a inline review of your patch 2/2 as I think my feedback might
> make a bit more sense that way.
Ok, but having one driver that can deal with failures in MSI-X vector
setup and fall back seemed like a better strategy.
^ permalink raw reply [relevance 0%]
* Re: [dpdk-dev] [PATCH 0/2] uio_msi: device driver
2015-10-01 23:39 0% ` Stephen Hemminger
@ 2015-10-01 23:43 0% ` Alexander Duyck
0 siblings, 0 replies; 200+ results
From: Alexander Duyck @ 2015-10-01 23:43 UTC (permalink / raw)
To: Stephen Hemminger; +Cc: dev, Avi Kivity, hjk, gregkh, linux-kernel
On 10/01/2015 04:39 PM, Stephen Hemminger wrote:
> On Thu, 1 Oct 2015 16:03:06 -0700
> Alexander Duyck <alexander.duyck@gmail.com> wrote:
>
>> On 10/01/2015 03:00 PM, Stephen Hemminger wrote:
>>> On Thu, 1 Oct 2015 12:48:36 -0700
>>> Alexander Duyck <alexander.duyck@gmail.com> wrote:
>>>
>>>> On 10/01/2015 07:57 AM, Stephen Hemminger wrote:
>>>>> On Thu, 1 Oct 2015 13:59:02 +0300
>>>>> Avi Kivity <avi@scylladb.com> wrote:
>>>>>
>>>>>> On 10/01/2015 01:28 AM, Stephen Hemminger wrote:
>>>>>>> This is a new UIO device driver to allow supporting MSI-X and MSI devices
>>>>>>> in userspace. It has been used in environments like VMware and older versions
>>>>>>> of QEMU/KVM where no IOMMU support is available.
>>>>>> Why not add msi/msix support to uio_pci_generic?
>>>>> That is possible but that would meet ABI and other resistance from the author.
>>>>> Also, uio_pci_generic makes it harder to find resources since it doesn't fully
>>>>> utilize UIO infrastructure.
>>>> I'd say you are better off actually taking this in the other direction.
>>>> From what I have seen it seems like this driver is meant to deal with
>>>> mapping VFs contained inside of guests. If you are going to fork off
>>>> and create a UIO driver for mapping VFs why not just make it specialize
>>>> in that. You could probably simplify the code by dropping support for
>>>> legacy interrupts and IO regions since all that is already covered by
>>>> uio_pci_generic anyway if I am not mistaken.
>>>>
>>>> You could then look at naming it something like uio_vf since the uio_msi
>>>> is a bit of a misnomer since it is MSI-X it supports, not MSI interrupts.
>>> The support needs to cover:
>>> - VF in guest
>>> - VNIC in guest (vmxnet3)
>>> it isn't just about VF's
>> I get that, but the driver you are talking about adding is duplicating
>> much of what is already there in uio_pci_generic. If nothing else it
>> might be worth while to look at replacing the legacy interrupt with
>> MSI. Maybe look at naming it something like uio_pcie to indicate that
>> we are focusing on assigning PCIe and virtual devices that support MSI
>> and MSI-X and use memory BARs rather than legacy PCI devices that are
>> doing things like mapping I/O BARs and using INTx signaling.
>>
>> My main argument is that we should probably look at dropping support for
>> anything that isn't going to be needed. If it is really important we
>> can always add it later. I just don't see the value in having code
>> around for things we aren't likely to ever use with real devices as we
>> are stuck supporting it for the life of the driver. I'll go ahead and
>> provide a inline review of your patch 2/2 as I think my feedback might
>> make a bit more sense that way.
> Ok, but having one driver that can deal with failures with msi-x vector
> setup and fallback seemed like a better strategy.
Yes, but in the case of something like a VF it is going to just make a
bigger mess of things since INTx doesn't work. So what would you expect
your driver to do in that case? Also we have to keep in mind that the
MSI-X failure case is very unlikely.
One other thing that just occurred to me is that you may want to try
using the range allocation call instead of a hard set number of
interrupts. Then if you start running short on vectors you don't hard
fail and instead just allocate what you can.
- Alex
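
For reference, the range-allocation pattern suggested above looks roughly like the sketch below; it is an illustration only, not code from the posted driver. example_setup_msix(), MAX_VECTORS and the entries array are hypothetical driver state; only pci_enable_msix_range() is the standard kernel API being referred to.

#include <linux/pci.h>

#define MAX_VECTORS 32	/* hypothetical upper bound chosen by the driver */

static int example_setup_msix(struct pci_dev *pdev,
			      struct msix_entry *entries)
{
	int i, nvec;

	for (i = 0; i < MAX_VECTORS; i++)
		entries[i].entry = i;

	/*
	 * Ask for up to MAX_VECTORS but accept as few as one, so the
	 * driver degrades gracefully instead of hard-failing when the
	 * platform runs short of MSI-X vectors.
	 */
	nvec = pci_enable_msix_range(pdev, entries, 1, MAX_VECTORS);
	if (nvec < 0)
		return nvec;	/* not even a single vector available */

	return nvec;		/* number of vectors actually granted */
}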
^ permalink raw reply [relevance 0%]
* Re: [dpdk-dev] [PATCH v3 8/8] mk: Add rule for installing runtime files
@ 2015-10-02 11:25 3% ` Bruce Richardson
0 siblings, 0 replies; 200+ results
From: Bruce Richardson @ 2015-10-02 11:25 UTC (permalink / raw)
To: Panu Matilainen; +Cc: dev
On Fri, Oct 02, 2015 at 02:15:29PM +0300, Panu Matilainen wrote:
> On 10/01/2015 03:11 AM, Mario Carrillo wrote:
> >Add hierarchy-file support to the DPDK libraries, modules,
> >binary files, nic bind files and documentation,
> >when invoking "make install-fhs" (filesystem hierarchy standard)
> >runtime files will be by default installed in:
> >$(DESTDIR)/$(BIN_DIR) where BIN_DIR=/usr/bin (binary files)
> >$(DESTDIR)/$(SBIN_DIR) where SBIN_DIR=/usr/sbin/dpdk_nic_bind (nic bind
> >files)
> >$(DESTDIR)/$(DOC_DIR) where DOC_DIR=/usr/share/doc/dpdk (documentation)
> >$(DESTDIR)/$(LIB_DIR) (libraries)
> >if the architecture is 64 bits then LIB_DIR=/usr/lib64
> >else LIB_DIR=/usr/lib
> >$(DESTDIR)/$(KERNEL_DIR) (modules)
> >if RTE_EXEC_ENV=linuxapp then
> >KERNEL_DIR=/lib/modules/$(uname -r)/build
> >else KERNEL_DIR=/boot/modules
> >All directory variables mentioned above can be overridden.
> >This hierarchy is based on:
> >http://www.freedesktop.org/software/systemd/man/file-hierarchy.html
> >
>
> Hmm, I think there's a slight misunderstanding here.
>
> What I meant earlier by install-sdk and install-fhs is to preserve the
> current behavior of "make install" as "make install-sdk" and have "make
> install-fhs" behave like normal OSS app on "make install", which installs
> everything (both devel and runtime parts)
>
> This patch series eliminates the current behavior of "make install"
> entirely. I personally would not miss it at all, but there likely are people
> relying on it since its quite visibly documented and all. So I think the
> idea was to introduce a separate FHS-installation target and then deal with
> the notion of default behaviors etc separately.
>
> I guess it was already this way in v2 of the series, apologies for missing
> it there.
>
> - Panu -
I also think that having some way to get the old behaviour for those relying on
it would be good. Even though it's not ABI affecting, for those compiling from
source it would be nice to follow some sort of gradual deprecation process rather
than just changing everything in one go.
/Bruce
^ permalink raw reply [relevance 3%]
* [dpdk-dev] [PATCH v1 01/12] mk: Introduce ARMv7 architecture
@ 2015-10-03 8:58 3% ` Jan Viktorin
0 siblings, 0 replies; 200+ results
From: Jan Viktorin @ 2015-10-03 8:58 UTC (permalink / raw)
To: dev; +Cc: Vlastimil Kosar, Jan Viktorin
From: Vlastimil Kosar <kosar@rehivetech.com>
Make DPDK run on ARMv7-A architecture. This patch assumes
ARM Cortex-A9.
Signed-off-by: Vlastimil Kosar <kosar@rehivetech.com>
Signed-off-by: Jan Viktorin <viktorin@rehivetech.com>
---
config/defconfig_arm-armv7-a-linuxapp-gcc | 76 +++++++++++++++++++++++++++++++
mk/arch/arm/rte.vars.mk | 39 ++++++++++++++++
mk/machine/armv7-a/rte.vars.mk | 60 ++++++++++++++++++++++++
3 files changed, 175 insertions(+)
create mode 100644 config/defconfig_arm-armv7-a-linuxapp-gcc
create mode 100644 mk/arch/arm/rte.vars.mk
create mode 100644 mk/machine/armv7-a/rte.vars.mk
diff --git a/config/defconfig_arm-armv7-a-linuxapp-gcc b/config/defconfig_arm-armv7-a-linuxapp-gcc
new file mode 100644
index 0000000..a466d13
--- /dev/null
+++ b/config/defconfig_arm-armv7-a-linuxapp-gcc
@@ -0,0 +1,76 @@
+# BSD LICENSE
+#
+# Copyright (C) 2015 RehiveTech. All right reserved.
+#
+# Redistribution and use in source and binary forms, with or without
+# modification, are permitted provided that the following conditions
+# are met:
+#
+# * Redistributions of source code must retain the above copyright
+# notice, this list of conditions and the following disclaimer.
+# * Redistributions in binary form must reproduce the above copyright
+# notice, this list of conditions and the following disclaimer in
+# the documentation and/or other materials provided with the
+# distribution.
+# * Neither the name of RehiveTech nor the names of its
+# contributors may be used to endorse or promote products derived
+# from this software without specific prior written permission.
+#
+# THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
+# "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
+# LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
+# A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
+# OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
+# SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
+# LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
+# DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
+# THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
+# (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
+# OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
+
+#include "common_linuxapp"
+
+CONFIG_RTE_MACHINE="armv7-a"
+
+CONFIG_RTE_ARCH="arm"
+CONFIG_RTE_ARCH_ARM=y
+CONFIG_RTE_ARCH_ARMv7=y
+
+CONFIG_RTE_TOOLCHAIN="gcc"
+CONFIG_RTE_TOOLCHAIN_GCC=y
+
+# ARM doesn't have support for vmware TSC map
+CONFIG_RTE_LIBRTE_EAL_VMWARE_TSC_MAP_SUPPORT=n
+
+# avoids using i686/x86_64 SIMD instructions, nothing for ARM
+CONFIG_RTE_BITMAP_OPTIMIZATIONS=0
+
+# KNI is not supported on 32-bit
+CONFIG_RTE_LIBRTE_KNI=n
+
+# PCI is usually not used on ARM
+CONFIG_RTE_EAL_IGB_UIO=n
+
+# missing rte_vect.h for ARM
+CONFIG_XMM_SIZE=16
+
+# fails to compile on ARM
+CONFIG_RTE_LIBRTE_ACL=n
+CONFIG_RTE_LIBRTE_LPM=n
+
+# cannot use those on ARM
+CONFIG_RTE_KNI_KMOD=n
+CONFIG_RTE_LIBRTE_EM_PMD=n
+CONFIG_RTE_LIBRTE_IGB_PMD=n
+CONFIG_RTE_LIBRTE_CXGBE_PMD=n
+CONFIG_RTE_LIBRTE_E1000_PMD=n
+CONFIG_RTE_LIBRTE_ENIC_PMD=n
+CONFIG_RTE_LIBRTE_FM10K_PMD=n
+CONFIG_RTE_LIBRTE_I40E_PMD=n
+CONFIG_RTE_LIBRTE_IXGBE_PMD=n
+CONFIG_RTE_LIBRTE_MLX4_PMD=n
+CONFIG_RTE_LIBRTE_MPIPE_PMD=n
+CONFIG_RTE_LIBRTE_VIRTIO_PMD=n
+CONFIG_RTE_LIBRTE_VMXNET3_PMD=n
+CONFIG_RTE_LIBRTE_PMD_XENVIRT=n
+CONFIG_RTE_LIBRTE_PMD_BNX2X=n
diff --git a/mk/arch/arm/rte.vars.mk b/mk/arch/arm/rte.vars.mk
new file mode 100644
index 0000000..df0c043
--- /dev/null
+++ b/mk/arch/arm/rte.vars.mk
@@ -0,0 +1,39 @@
+# BSD LICENSE
+#
+# Copyright (C) 2015 RehiveTech. All rights reserved.
+#
+# Redistribution and use in source and binary forms, with or without
+# modification, are permitted provided that the following conditions
+# are met:
+#
+# * Redistributions of source code must retain the above copyright
+# notice, this list of conditions and the following disclaimer.
+# * Redistributions in binary form must reproduce the above copyright
+# notice, this list of conditions and the following disclaimer in
+# the documentation and/or other materials provided with the
+# distribution.
+# * Neither the name of RehiveTech nor the names of its
+# contributors may be used to endorse or promote products derived
+# from this software without specific prior written permission.
+#
+# THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
+# "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
+# LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
+# A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
+# OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
+# SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
+# LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
+# DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
+# THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
+# (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
+# OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
+
+
+ARCH ?= arm
+CROSS ?=
+
+CPU_CFLAGS ?= -marm -DRTE_CACHE_LINE_SIZE=64 -munaligned-access
+CPU_LDFLAGS ?=
+CPU_ASFLAGS ?= -felf
+
+export ARCH CROSS CPU_CFLAGS CPU_LDFLAGS CPU_ASFLAGS
diff --git a/mk/machine/armv7-a/rte.vars.mk b/mk/machine/armv7-a/rte.vars.mk
new file mode 100644
index 0000000..7764e4f
--- /dev/null
+++ b/mk/machine/armv7-a/rte.vars.mk
@@ -0,0 +1,60 @@
+# BSD LICENSE
+#
+# Copyright (C) 2015 RehiveTech. All rights reserved.
+#
+# Redistribution and use in source and binary forms, with or without
+# modification, are permitted provided that the following conditions
+# are met:
+#
+# * Redistributions of source code must retain the above copyright
+# notice, this list of conditions and the following disclaimer.
+# * Redistributions in binary form must reproduce the above copyright
+# notice, this list of conditions and the following disclaimer in
+# the documentation and/or other materials provided with the
+# distribution.
+# * Neither the name of RehiveTech nor the names of its
+# contributors may be used to endorse or promote products derived
+# from this software without specific prior written permission.
+#
+# THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
+# "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
+# LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
+# A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
+# OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
+# SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
+# LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
+# DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
+# THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
+# (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
+# OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
+
+#
+# machine:
+#
+# - can define ARCH variable (overridden by cmdline value)
+# - can define CROSS variable (overridden by cmdline value)
+# - define MACHINE_CFLAGS variable (overridden by cmdline value)
+# - define MACHINE_LDFLAGS variable (overridden by cmdline value)
+# - define MACHINE_ASFLAGS variable (overridden by cmdline value)
+# - can define CPU_CFLAGS variable (overridden by cmdline value) that
+# overrides the one defined in arch.
+# - can define CPU_LDFLAGS variable (overridden by cmdline value) that
+# overrides the one defined in arch.
+# - can define CPU_ASFLAGS variable (overridden by cmdline value) that
+# overrides the one defined in arch.
+# - may override any previously defined variable
+#
+
+# ARCH =
+# CROSS =
+# MACHINE_CFLAGS =
+# MACHINE_LDFLAGS =
+# MACHINE_ASFLAGS =
+# CPU_CFLAGS =
+# CPU_LDFLAGS =
+# CPU_ASFLAGS =
+
+CPU_CFLAGS += -mfloat-abi=softfp
+
+MACHINE_CFLAGS = -march=armv7-a -mtune=cortex-a9
+MACHINE_CFLAGS += -mfpu=neon
--
2.5.2
^ permalink raw reply [relevance 3%]
* [dpdk-dev] [PATCH v5 4/4] doc: update with link changes
@ 2015-10-04 21:12 13% ` Marc Sune
1 sibling, 0 replies; 200+ results
From: Marc Sune @ 2015-10-04 21:12 UTC (permalink / raw)
To: dev
Add new features, ABI changes and resolved issues notice for
the refactored link patch.
Signed-off-by: Marc Sune <marcdevel@gmail.com>
---
doc/guides/rel_notes/release_2_2.rst | 22 +++++++++++++++++++++-
1 file changed, 21 insertions(+), 1 deletion(-)
diff --git a/doc/guides/rel_notes/release_2_2.rst b/doc/guides/rel_notes/release_2_2.rst
index 5687676..e8bd4d6 100644
--- a/doc/guides/rel_notes/release_2_2.rst
+++ b/doc/guides/rel_notes/release_2_2.rst
@@ -4,6 +4,17 @@ DPDK Release 2.2
New Features
------------
+* **ethdev: define a set of advertised link speeds.**
+
+ Allows defining a set of advertised speeds for auto-negotiation,
+ explicitly disabling link auto-negotiation (single speed) and full
+ auto-negotiation.
+
+* **ethdev: add speed_cap bitmap to recover eth device link speed
+ capabilities.**
+
+ ``struct rte_eth_dev_info`` now has a speed_cap bitmap, which allows the
+ application to recover the supported speeds for that ethernet device.
Resolved Issues
---------------
@@ -48,6 +59,11 @@ Libraries
Fixed issue where an incorrect Cuckoo Hash key table size could be
calculated limiting the size to 4GB.
+* **ethdev: Fixed link_speed overflow in rte_eth_link for 100Gbps.**
+
+ 100Gbps in Mbps (100000) exceeds 16 bit max value of ``link_speed`` in
+ ``rte_eth_link``.
+
Examples
~~~~~~~~
@@ -81,7 +97,6 @@ API Changes
* The deprecated ring PMD functions are removed:
rte_eth_ring_pair_create() and rte_eth_ring_pair_attach().
-
ABI Changes
-----------
@@ -91,6 +106,11 @@ ABI Changes
* The ethdev flow director entries for SCTP were changed.
It was already done in 2.1 for CONFIG_RTE_NEXT_ABI.
+* The ethdev rte_eth_link and rte_eth_conf structures were changed to
+ support the new link API.
+
+* The ethdev rte_eth_dev_info was changed to support device speed capabilities.
+
* The mbuf structure was changed to support unified packet type.
It was already done in 2.1 for CONFIG_RTE_NEXT_ABI.
--
2.1.4
^ permalink raw reply [relevance 13%]
* Re: [dpdk-dev] [PATCH v5 3/4] ethdev: redesign link speed config API
@ 2015-10-05 10:59 4% ` Neil Horman
2015-10-07 13:31 5% ` Marc Sune
0 siblings, 1 reply; 200+ results
From: Neil Horman @ 2015-10-05 10:59 UTC (permalink / raw)
To: Marc Sune; +Cc: dev
On Sun, Oct 04, 2015 at 11:12:46PM +0200, Marc Sune wrote:
> This patch redesigns the API to set the link speed/s configure
> for an ethernet port. Specifically:
>
> - it allows to define a set of advertised speeds for
> auto-negociation.
> - it allows to disable link auto-negociation (single fixed speed).
> - default: auto-negociate all supported speeds.
>
> Other changes:
>
> * Added utility MACROs ETH_SPEED_NUM_XXX with the numeric
> values of all supported link speeds, in Mbps.
> * Converted link_speed to uint32_t to accomodate 100G speeds
> (bug).
> * Added autoneg flag in struct rte_eth_link to indicate if
> link speed was a result of auto-negociation or was fixed
> by configuration.
> * Added utility function to convert numeric speeds to bitmap
> fields.
> * Adapted testpmd to the new link API.
>
> Signed-off-by: Marc Sune <marcdevel@gmail.com>
>
> diff --git a/lib/librte_ether/rte_ethdev.c b/lib/librte_ether/rte_ethdev.c
> index f593f6e..29b2960 100644
> --- a/lib/librte_ether/rte_ethdev.c
> +++ b/lib/librte_ether/rte_ethdev.c
> @@ -1072,6 +1072,55 @@ rte_eth_dev_check_mq_mode(uint8_t port_id, uint16_t nb_rx_q, uint16_t nb_tx_q,
> }
>
> int
> +rte_eth_speed_to_bm_flag(uint32_t speed, int duplex, uint32_t *flag)
> +{
> + switch (speed) {
> + case ETH_SPEED_NUM_10M:
> + *flag = (duplex) ? ETH_LINK_SPEED_10M :
> + ETH_LINK_SPEED_10M_HD;
> + break;
> + case ETH_SPEED_NUM_100M:
> + *flag = (duplex) ? ETH_LINK_SPEED_100M :
> + ETH_LINK_SPEED_100M_HD;
> + break;
> + case ETH_SPEED_NUM_1G:
> + *flag = ETH_LINK_SPEED_1G;
> + break;
> + case ETH_SPEED_NUM_2_5G:
> + *flag = ETH_LINK_SPEED_2_5G;
> + break;
> + case ETH_SPEED_NUM_5G:
> + *flag = ETH_LINK_SPEED_5G;
> + break;
> + case ETH_SPEED_NUM_10G:
> + *flag = ETH_LINK_SPEED_10G;
> + break;
> + case ETH_SPEED_NUM_20G:
> + *flag = ETH_LINK_SPEED_20G;
> + break;
> + case ETH_SPEED_NUM_25G:
> + *flag = ETH_LINK_SPEED_25G;
> + break;
> + case ETH_SPEED_NUM_40G:
> + *flag = ETH_LINK_SPEED_40G;
> + break;
> + case ETH_SPEED_NUM_50G:
> + *flag = ETH_LINK_SPEED_50G;
> + break;
> + case ETH_SPEED_NUM_56G:
> + *flag = ETH_LINK_SPEED_56G;
> + break;
> + case ETH_SPEED_NUM_100G:
> + *flag = ETH_LINK_SPEED_100G;
> + break;
> + default:
> + return -EINVAL;
> + }
> +
> + return 0;
> +}
> +
This needs to go in the appropriate version.map file for shared library building.
> +int
> rte_eth_dev_configure(uint8_t port_id, uint16_t nb_rx_q, uint16_t nb_tx_q,
> const struct rte_eth_conf *dev_conf)
> {
> diff --git a/lib/librte_ether/rte_ethdev.h b/lib/librte_ether/rte_ethdev.h
> index 951a423..bd333e4 100644
> --- a/lib/librte_ether/rte_ethdev.h
> +++ b/lib/librte_ether/rte_ethdev.h
> @@ -238,26 +238,59 @@ struct rte_eth_stats {
> };
>
> /**
> + * Device supported speeds bitmap flags
> + */
> +#define ETH_LINK_SPEED_AUTONEG (0 << 0) /*< Autonegociate (all speeds) */
> +#define ETH_LINK_SPEED_NO_AUTONEG (1 << 0) /*< Disable autoneg (fixed speed) */
> +#define ETH_LINK_SPEED_10M_HD (1 << 1) /*< 10 Mbps half-duplex */
> +#define ETH_LINK_SPEED_10M (1 << 2) /*< 10 Mbps full-duplex */
> +#define ETH_LINK_SPEED_100M_HD (1 << 3) /*< 100 Mbps half-duplex */
> +#define ETH_LINK_SPEED_100M (1 << 4) /*< 100 Mbps full-duplex */
> +#define ETH_LINK_SPEED_1G (1 << 5) /*< 1 Gbps */
> +#define ETH_LINK_SPEED_2_5G (1 << 6) /*< 2.5 Gbps */
> +#define ETH_LINK_SPEED_5G (1 << 7) /*< 5 Gbps */
> +#define ETH_LINK_SPEED_10G (1 << 8) /*< 10 Mbps */
> +#define ETH_LINK_SPEED_20G (1 << 9) /*< 20 Gbps */
> +#define ETH_LINK_SPEED_25G (1 << 10) /*< 25 Gbps */
> +#define ETH_LINK_SPEED_40G (1 << 11) /*< 40 Gbps */
> +#define ETH_LINK_SPEED_50G (1 << 12) /*< 50 Gbps */
> +#define ETH_LINK_SPEED_56G (1 << 13) /*< 56 Gbps */
> +#define ETH_LINK_SPEED_100G (1 << 14) /*< 100 Gbps */
> +
> +/**
> + * Ethernet numeric link speeds in Mbps
> + */
> +#define ETH_SPEED_NUM_NONE 0 /*< Not defined */
> +#define ETH_SPEED_NUM_10M 10 /*< 10 Mbps */
> +#define ETH_SPEED_NUM_100M 100 /*< 100 Mbps */
> +#define ETH_SPEED_NUM_1G 1000 /*< 1 Gbps */
> +#define ETH_SPEED_NUM_2_5G 2500 /*< 2.5 Gbps */
> +#define ETH_SPEED_NUM_5G 5000 /*< 5 Gbps */
> +#define ETH_SPEED_NUM_10G 10000 /*< 10 Mbps */
> +#define ETH_SPEED_NUM_20G 20000 /*< 20 Gbps */
> +#define ETH_SPEED_NUM_25G 25000 /*< 25 Gbps */
> +#define ETH_SPEED_NUM_40G 40000 /*< 40 Gbps */
> +#define ETH_SPEED_NUM_50G 50000 /*< 50 Gbps */
> +#define ETH_SPEED_NUM_56G 56000 /*< 56 Gbps */
> +#define ETH_SPEED_NUM_100G 100000 /*< 100 Gbps */
> +
> +/**
> * A structure used to retrieve link-level information of an Ethernet port.
> */
> struct rte_eth_link {
> - uint16_t link_speed; /**< ETH_LINK_SPEED_[10, 100, 1000, 10000] */
> - uint16_t link_duplex; /**< ETH_LINK_[HALF_DUPLEX, FULL_DUPLEX] */
> - uint8_t link_status : 1; /**< 1 -> link up, 0 -> link down */
> -}__attribute__((aligned(8))); /**< aligned for atomic64 read/write */
> -
> -#define ETH_LINK_SPEED_AUTONEG 0 /**< Auto-negotiate link speed. */
> -#define ETH_LINK_SPEED_10 10 /**< 10 megabits/second. */
> -#define ETH_LINK_SPEED_100 100 /**< 100 megabits/second. */
> -#define ETH_LINK_SPEED_1000 1000 /**< 1 gigabits/second. */
> -#define ETH_LINK_SPEED_10000 10000 /**< 10 gigabits/second. */
> -#define ETH_LINK_SPEED_10G 10000 /**< alias of 10 gigabits/second. */
> -#define ETH_LINK_SPEED_20G 20000 /**< 20 gigabits/second. */
> -#define ETH_LINK_SPEED_40G 40000 /**< 40 gigabits/second. */
> + uint32_t link_speed; /**< Link speed (ETH_SPEED_NUM_) */
> + uint16_t link_duplex; /**< 1 -> full duplex, 0 -> half duplex */
> + uint8_t link_autoneg : 1; /**< 1 -> link speed has been autoneg */
> + uint8_t link_status : 1; /**< 1 -> link up, 0 -> link down */
> +} __attribute__((aligned(8))); /**< aligned for atomic64 read/write */
>
> -#define ETH_LINK_AUTONEG_DUPLEX 0 /**< Auto-negotiate duplex. */
> -#define ETH_LINK_HALF_DUPLEX 1 /**< Half-duplex connection. */
> -#define ETH_LINK_FULL_DUPLEX 2 /**< Full-duplex connection. */
> +/* Utility constants */
> +#define ETH_LINK_HALF_DUPLEX 0 /**< Half-duplex connection. */
> +#define ETH_LINK_FULL_DUPLEX 1 /**< Full-duplex connection. */
> +#define ETH_LINK_SPEED_FIXED 0 /**< Link speed was not autonegociated. */
> +#define ETH_LINK_SPEED_NEG 1 /**< Link speed was autonegociated. */
> +#define ETH_LINK_DOWN 0 /**< Link is down. */
> +#define ETH_LINK_UP 1 /**< Link is up. */
>
> /**
> * A structure used to configure the ring threshold registers of an RX/TX
> @@ -747,10 +780,14 @@ struct rte_intr_conf {
> * configuration settings may be needed.
> */
> struct rte_eth_conf {
> - uint16_t link_speed;
> - /**< ETH_LINK_SPEED_10[0|00|000], or 0 for autonegotation */
> - uint16_t link_duplex;
> - /**< ETH_LINK_[HALF_DUPLEX|FULL_DUPLEX], or 0 for autonegotation */
> + uint32_t link_speeds; /**< bitmap of ETH_LINK_SPEED_XXX of speeds to be
> + used. ETH_LINK_SPEED_NO_AUTONEG disables link
> + autonegociation, and a unique speed shall be
> + set. Otherwise, the bitmap defines the set of
> + speeds to be advertised. If the special value
> + ETH_LINK_SPEED_AUTONEG (0) is used, all speeds
> + supported are advertised.
> + */
This structure is allocated by applications, and the link_speed/duplex fields may
be accessed by them. As such this is an ABI change and should go through the
ABI process. Arguably, while it's not really supposed to be done, given that
there are so many changes coming to rte_eth_dev for 2.2, I could see an
argument for sliding this in with those changes, so we didn't have to wait for
an additional release cycle.
The removal of the Speed/Duplex macros should also be noted.
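As a hedged illustration of the API under review (a sketch only, using the
ETH_LINK_SPEED_* flags proposed in this patch, not code taken from it), an
application would fill link_speeds along these lines:

struct rte_eth_conf conf = {
	/* advertise 1G and 10G only during auto-negotiation */
	.link_speeds = ETH_LINK_SPEED_1G | ETH_LINK_SPEED_10G,
};

struct rte_eth_conf fixed = {
	/* disable auto-negotiation and force a single fixed speed */
	.link_speeds = ETH_LINK_SPEED_NO_AUTONEG | ETH_LINK_SPEED_10G,
};

Either struct would then be passed to rte_eth_dev_configure() as usual.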
^ permalink raw reply [relevance 4%]
* [dpdk-dev] [PATCH 0/3] Add RETA configuration to mlx5
@ 2015-10-05 17:57 3% Adrien Mazarguil
0 siblings, 0 replies; 200+ results
From: Adrien Mazarguil @ 2015-10-05 17:57 UTC (permalink / raw)
To: dev
mlx5 devices support indirection tables of variable size up to 512 entries,
which requires a larger configuration structure (requiring a change in the
ABI).
This patchset can be considered as a first RFC step because the current API
is not very practical due to the following limitations:
- Configuration with chunks of 64 entries.
- Fixed total number of entries (previously 128, now 512).
- RETA configuration with testpmd is quite tedious (all entries must be
specified with really long lines).
Nelio Laranjeiro (3):
cmdline: increase command line buffer
ethdev: change RETA type in rte_eth_rss_reta_entry64
mlx5: RETA query/update support
drivers/net/mlx5/mlx5.c | 4 +
drivers/net/mlx5/mlx5.h | 7 ++
drivers/net/mlx5/mlx5_ethdev.c | 29 ++++++
drivers/net/mlx5/mlx5_rss.c | 163 ++++++++++++++++++++++++++++++
drivers/net/mlx5/mlx5_rxq.c | 53 ++--------
drivers/net/mlx5/mlx5_utils.h | 20 ++++
lib/librte_cmdline/cmdline_parse.h | 2 +-
lib/librte_cmdline/cmdline_parse_string.h | 2 +-
lib/librte_cmdline/cmdline_rdline.h | 2 +-
lib/librte_ether/rte_ethdev.h | 2 +-
10 files changed, 235 insertions(+), 49 deletions(-)
--
2.1.0
^ permalink raw reply [relevance 3%]
* Re: [dpdk-dev] [PATCH v5 3/4] ethdev: redesign link speed config API
2015-10-05 10:59 4% ` Neil Horman
@ 2015-10-07 13:31 5% ` Marc Sune
0 siblings, 0 replies; 200+ results
From: Marc Sune @ 2015-10-07 13:31 UTC (permalink / raw)
To: Neil Horman; +Cc: dev
2015-10-05 12:59 GMT+02:00 Neil Horman <nhorman@tuxdriver.com>:
> On Sun, Oct 04, 2015 at 11:12:46PM +0200, Marc Sune wrote:
> > This patch redesigns the API to set the link speed/s configure
> > for an ethernet port. Specifically:
> >
> > - it allows to define a set of advertised speeds for
> > auto-negociation.
> > - it allows to disable link auto-negociation (single fixed speed).
> > - default: auto-negociate all supported speeds.
> >
> > Other changes:
> >
> > * Added utility MACROs ETH_SPEED_NUM_XXX with the numeric
> > values of all supported link speeds, in Mbps.
> > * Converted link_speed to uint32_t to accomodate 100G speeds
> > (bug).
> > * Added autoneg flag in struct rte_eth_link to indicate if
> > link speed was a result of auto-negociation or was fixed
> > by configuration.
> > * Added utility function to convert numeric speeds to bitmap
> > fields.
> > * Adapted testpmd to the new link API.
> >
> > Signed-off-by: Marc Sune <marcdevel@gmail.com>
> >
> > diff --git a/lib/librte_ether/rte_ethdev.c
> b/lib/librte_ether/rte_ethdev.c
> > index f593f6e..29b2960 100644
> > --- a/lib/librte_ether/rte_ethdev.c
> > +++ b/lib/librte_ether/rte_ethdev.c
> > @@ -1072,6 +1072,55 @@ rte_eth_dev_check_mq_mode(uint8_t port_id,
> uint16_t nb_rx_q, uint16_t nb_tx_q,
> > }
> >
> > int
> > +rte_eth_speed_to_bm_flag(uint32_t speed, int duplex, uint32_t *flag)
> > +{
> > + switch (speed) {
> > + case ETH_SPEED_NUM_10M:
> > + *flag = (duplex) ? ETH_LINK_SPEED_10M :
> > +
> ETH_LINK_SPEED_10M_HD;
> > + break;
> > + case ETH_SPEED_NUM_100M:
> > + *flag = (duplex) ? ETH_LINK_SPEED_100M :
> > +
> ETH_LINK_SPEED_100M_HD;
> > + break;
> > + case ETH_SPEED_NUM_1G:
> > + *flag = ETH_LINK_SPEED_1G;
> > + break;
> > + case ETH_SPEED_NUM_2_5G:
> > + *flag = ETH_LINK_SPEED_2_5G;
> > + break;
> > + case ETH_SPEED_NUM_5G:
> > + *flag = ETH_LINK_SPEED_5G;
> > + break;
> > + case ETH_SPEED_NUM_10G:
> > + *flag = ETH_LINK_SPEED_10G;
> > + break;
> > + case ETH_SPEED_NUM_20G:
> > + *flag = ETH_LINK_SPEED_20G;
> > + break;
> > + case ETH_SPEED_NUM_25G:
> > + *flag = ETH_LINK_SPEED_25G;
> > + break;
> > + case ETH_SPEED_NUM_40G:
> > + *flag = ETH_LINK_SPEED_40G;
> > + break;
> > + case ETH_SPEED_NUM_50G:
> > + *flag = ETH_LINK_SPEED_50G;
> > + break;
> > + case ETH_SPEED_NUM_56G:
> > + *flag = ETH_LINK_SPEED_56G;
> > + break;
> > + case ETH_SPEED_NUM_100G:
> > + *flag = ETH_LINK_SPEED_100G;
> > + break;
> > + default:
> > + return -EINVAL;
> > + }
> > +
> > + return 0;
> > +}
> > +
>
> This needs to go in the appropriate version.map file for shared library
> building.
>
> > +int
> > rte_eth_dev_configure(uint8_t port_id, uint16_t nb_rx_q, uint16_t
> nb_tx_q,
> > const struct rte_eth_conf *dev_conf)
> > {
> > diff --git a/lib/librte_ether/rte_ethdev.h
> b/lib/librte_ether/rte_ethdev.h
> > index 951a423..bd333e4 100644
> > --- a/lib/librte_ether/rte_ethdev.h
> > +++ b/lib/librte_ether/rte_ethdev.h
> > @@ -238,26 +238,59 @@ struct rte_eth_stats {
> > };
> >
> > /**
> > + * Device supported speeds bitmap flags
> > + */
> > +#define ETH_LINK_SPEED_AUTONEG (0 << 0) /*<
> Autonegociate (all speeds) */
> > +#define ETH_LINK_SPEED_NO_AUTONEG (1 << 0) /*< Disable autoneg
> (fixed speed) */
> > +#define ETH_LINK_SPEED_10M_HD (1 << 1) /*< 10 Mbps
> half-duplex */
> > +#define ETH_LINK_SPEED_10M (1 << 2) /*< 10 Mbps full-duplex
> */
> > +#define ETH_LINK_SPEED_100M_HD (1 << 3) /*< 100 Mbps
> half-duplex */
> > +#define ETH_LINK_SPEED_100M (1 << 4) /*< 100 Mbps full-duplex
> */
> > +#define ETH_LINK_SPEED_1G (1 << 5) /*< 1 Gbps */
> > +#define ETH_LINK_SPEED_2_5G (1 << 6) /*< 2.5 Gbps */
> > +#define ETH_LINK_SPEED_5G (1 << 7) /*< 5 Gbps */
> > +#define ETH_LINK_SPEED_10G (1 << 8) /*< 10 Mbps */
> > +#define ETH_LINK_SPEED_20G (1 << 9) /*< 20 Gbps */
> > +#define ETH_LINK_SPEED_25G (1 << 10) /*< 25 Gbps */
> > +#define ETH_LINK_SPEED_40G (1 << 11) /*< 40 Gbps */
> > +#define ETH_LINK_SPEED_50G (1 << 12) /*< 50 Gbps */
> > +#define ETH_LINK_SPEED_56G (1 << 13) /*< 56 Gbps */
> > +#define ETH_LINK_SPEED_100G (1 << 14) /*< 100 Gbps */
> > +
> > +/**
> > + * Ethernet numeric link speeds in Mbps
> > + */
> > +#define ETH_SPEED_NUM_NONE 0 /*< Not defined */
> > +#define ETH_SPEED_NUM_10M 10 /*< 10 Mbps */
> > +#define ETH_SPEED_NUM_100M 100 /*< 100 Mbps */
> > +#define ETH_SPEED_NUM_1G 1000 /*< 1 Gbps */
> > +#define ETH_SPEED_NUM_2_5G 2500 /*< 2.5 Gbps */
> > +#define ETH_SPEED_NUM_5G 5000 /*< 5 Gbps */
> > +#define ETH_SPEED_NUM_10G 10000 /*< 10 Mbps */
> > +#define ETH_SPEED_NUM_20G 20000 /*< 20 Gbps */
> > +#define ETH_SPEED_NUM_25G 25000 /*< 25 Gbps */
> > +#define ETH_SPEED_NUM_40G 40000 /*< 40 Gbps */
> > +#define ETH_SPEED_NUM_50G 50000 /*< 50 Gbps */
> > +#define ETH_SPEED_NUM_56G 56000 /*< 56 Gbps */
> > +#define ETH_SPEED_NUM_100G 100000 /*< 100 Gbps */
> > +
> > +/**
> > * A structure used to retrieve link-level information of an Ethernet
> port.
> > */
> > struct rte_eth_link {
> > - uint16_t link_speed; /**< ETH_LINK_SPEED_[10, 100, 1000,
> 10000] */
> > - uint16_t link_duplex; /**< ETH_LINK_[HALF_DUPLEX, FULL_DUPLEX]
> */
> > - uint8_t link_status : 1; /**< 1 -> link up, 0 -> link down */
> > -}__attribute__((aligned(8))); /**< aligned for atomic64 read/write
> */
> > -
> > -#define ETH_LINK_SPEED_AUTONEG 0 /**< Auto-negotiate link speed.
> */
> > -#define ETH_LINK_SPEED_10 10 /**< 10 megabits/second. */
> > -#define ETH_LINK_SPEED_100 100 /**< 100 megabits/second. */
> > -#define ETH_LINK_SPEED_1000 1000 /**< 1 gigabits/second. */
> > -#define ETH_LINK_SPEED_10000 10000 /**< 10 gigabits/second. */
> > -#define ETH_LINK_SPEED_10G 10000 /**< alias of 10
> gigabits/second. */
> > -#define ETH_LINK_SPEED_20G 20000 /**< 20 gigabits/second. */
> > -#define ETH_LINK_SPEED_40G 40000 /**< 40 gigabits/second. */
> > + uint32_t link_speed; /**< Link speed (ETH_SPEED_NUM_) */
> > + uint16_t link_duplex; /**< 1 -> full duplex, 0 -> half duplex
> */
> > + uint8_t link_autoneg : 1; /**< 1 -> link speed has been autoneg */
> > + uint8_t link_status : 1; /**< 1 -> link up, 0 -> link down */
> > +} __attribute__((aligned(8))); /**< aligned for atomic64
> read/write */
> >
> > -#define ETH_LINK_AUTONEG_DUPLEX 0 /**< Auto-negotiate duplex. */
> > -#define ETH_LINK_HALF_DUPLEX 1 /**< Half-duplex connection. */
> > -#define ETH_LINK_FULL_DUPLEX 2 /**< Full-duplex connection. */
> > +/* Utility constants */
> > +#define ETH_LINK_HALF_DUPLEX 0 /**< Half-duplex connection. */
> > +#define ETH_LINK_FULL_DUPLEX 1 /**< Full-duplex connection. */
> > +#define ETH_LINK_SPEED_FIXED 0 /**< Link speed was not
> autonegociated. */
> > +#define ETH_LINK_SPEED_NEG 1 /**< Link speed was
> autonegociated. */
> > +#define ETH_LINK_DOWN 0 /**< Link is down. */
> > +#define ETH_LINK_UP 1 /**< Link is up. */
> >
> > /**
> > * A structure used to configure the ring threshold registers of an
> RX/TX
> > @@ -747,10 +780,14 @@ struct rte_intr_conf {
> > * configuration settings may be needed.
> > */
> > struct rte_eth_conf {
> > - uint16_t link_speed;
> > - /**< ETH_LINK_SPEED_10[0|00|000], or 0 for autonegotation */
> > - uint16_t link_duplex;
> > - /**< ETH_LINK_[HALF_DUPLEX|FULL_DUPLEX], or 0 for autonegotation */
> > + uint32_t link_speeds; /**< bitmap of ETH_LINK_SPEED_XXX of speeds
> to be
> > + used. ETH_LINK_SPEED_NO_AUTONEG disables
> link
> > + autonegociation, and a unique speed shall
> be
> > + set. Otherwise, the bitmap defines the set
> of
> > + speeds to be advertised. If the special
> value
> > + ETH_LINK_SPEED_AUTONEG (0) is used, all
> speeds
> > + supported are advertised.
> > + */
> This structure is allocated by applications, and the link_speed/duplex
> field may
> be accessed by them. As such this is an ABI change and should go through
> the
> ABI process. Arguably, while it's not really supposed to be done, given
> that
> there are so many changes coming to rte_eth_dev for 2.2, I could see an
> argument for sliding this in with those changes, so we didn't have to wait
> for
> an additional release cycle.
>
If I understand you correctly, you are arguing that this patch should go
through the process of NEXT_ABI -> unique implementation. I keep saying I
do not understand what the benefit of doing so is when the ABI for 2.2 is
going to change anyway (even other patches will modify the ethdev ABI).
Having a strict rule for ABI changes between minor releases (e.g. 2.1.0 and
2.1.1) is a must, as it is when the next release is announced to be ABI
compatible. Other than that I don't see it, especially since users have to
recompile their binaries anyway due to (other) ABI changes.
But maybe I am missing something?
> The removal of the Speed/Duplex macros should also be noted.
>
Agree. I will add it to the doc.
Marc
^ permalink raw reply [relevance 5%]
* Re: [dpdk-dev] rte_eal_init() alternative?
@ 2015-10-09 8:25 3% ` Panu Matilainen
2015-10-09 10:03 3% ` Montorsi, Francesco
0 siblings, 1 reply; 200+ results
From: Panu Matilainen @ 2015-10-09 8:25 UTC (permalink / raw)
To: Montorsi, Francesco, Thomas Monjalon; +Cc: dev
On 10/08/2015 05:58 PM, Montorsi, Francesco wrote:
> Hi,
>
>> -----Original Message-----
>> From: Thomas Monjalon [mailto:thomas.monjalon@6wind.com]
>> Sent: mercoledì 2 settembre 2015 15:10
>> To: Montorsi, Francesco <fmontorsi@empirix.com>
>> Cc: dev@dpdk.org; Bruce Richardson <bruce.richardson@intel.com>
>> Subject: Re: [dpdk-dev] rte_eal_init() alternative?
>>
>> 2015-09-02 13:56, Bruce Richardson:
>>> On Wed, Sep 02, 2015 at 12:49:40PM +0000, Montorsi, Francesco wrote:
>>>> Hi all,
>>>>
>>>> Currently it seems that the only way to initialize EAL is using rte_eal_init()
>> function, correct?
>>>>
>>>> I have the problem that rte_eal_init() will call rte_panic() whenever
>> something fails to initialize or in other cases it will call exit().
>>>> In my application, I would rather like to attempt DPDK initialization. If it
>> fails I don't want to exit.
>>>> Unfortunately I cannot even copy&paste the rte_eal_init() code into my
>> application (removing rte_panic and exit calls) since it uses a lot of DPDK
>> internal private functions.
>>>>
>>>> I think that my requirements (avoid abort/exit calls when init fails) is a
>> basic requirement... would you accept a patch that adds an alternative
>> rte_eal_init() function that just returns an error code upon failure, instead of
>> immediately exiting?
>>>>
>>>> Thanks for your hard work!
>>>>
>>>> Francesco Montorsi
>>>>
>>> I, for one, would welcome such a patch. I think the code is overly
>>> quick in many places to panic or exit the app, when an error code would be
>> more appropriate.
>>> Feel free to also look at other libraries in DPDK too, if you like :-)
>>
>> Yes but please, do not create an alternative init function.
>> We just need to replace panic/exit with error codes and be sure that apps
>> and examples handle them correctly.
>
> To maintain compatibility with existing applications I think that
> perhaps the best would be to have a core initialization function
> rte_eal_init_raw() that never calls rte_panic() and returns an error
> code. Then we can maintain compatibility having an rte_eal_init()
> function that does call rte_panic() if rte_eal_init_raw() fails.
Note that callers are already required to check rte_eal_init() return
code for errors, and any app failing to do so would be buggy to begin
with. So just turning the panics into error returns is not an
incompatible change.
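For illustration, this is the calling pattern applications are already
expected to use (so replacing the internal panics with error returns would be
transparent to them):

	ret = rte_eal_init(argc, argv);
	if (ret < 0) {
		/* today this branch is mostly dead because EAL panics
		 * internally; with error returns it simply becomes the
		 * normal failure path */
		return -1;
	}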
I agree with Thomas here, let's just fix rte_eal_init() to do the right
thing instead of adding alternatives just for the error return.
Especially when _raw() in the name suggests it is not the thing you'd
commonly want to use.
> Something like the attached patch.
It seems the patch missed the boat :)
> Note that the attached patch exposes also a way to skip the
> argv/argc configuration process by directly providing a populated
> configuration structure...
> Let me know what you think about it (the patch is just a draft and
> needs more work).
Can't comment on what I've not seen, but based on comments seen on this
list, having an alternative way to initialize with structures would be
welcomed by many. The downside is that those structures will need to be
exposed in the API forever which means any changes there are subject to
the ABI process.
- Panu -
> Thanks,
> Francesco
>
>
>
>
>
^ permalink raw reply [relevance 3%]
* Re: [dpdk-dev] rte_eal_init() alternative?
2015-10-09 8:25 3% ` Panu Matilainen
@ 2015-10-09 10:03 3% ` Montorsi, Francesco
2015-10-09 10:40 0% ` Panu Matilainen
0 siblings, 1 reply; 200+ results
From: Montorsi, Francesco @ 2015-10-09 10:03 UTC (permalink / raw)
To: Panu Matilainen, Thomas Monjalon; +Cc: dev
Hi Panu,
> -----Original Message-----
> From: Panu Matilainen [mailto:pmatilai@redhat.com]
> Sent: venerdì 9 ottobre 2015 10:26
> To: Montorsi, Francesco <fmontorsi@empirix.com>; Thomas Monjalon
> <thomas.monjalon@6wind.com>
> Cc: dev@dpdk.org
> Subject: Re: [dpdk-dev] rte_eal_init() alternative?
>
> > Something like the attached patch.
>
> It seems the patch missed the boat :)
Correct, sorry. I'm attaching it now.
>
> > Note that the attached patch exposes also a way to skip the argv/argc
> > configuration process by directly providing a populated configuration
> > structure...
> > Let me know what you think about it (the patch is just a draft and
> > needs more work).
>
> Can't comment on what I've not seen, but based on comments seen on this
> list, having an alternative way to initialize with structures would be welcomed
> by many. The downside is that those structures will need to be exposed in
> the API forever which means any changes there are subject to the ABI
> process.
>
Perhaps the init function taking a structure could be an exception for ABI changes... i.e., the format of the configuration is not guaranteed to stay the same between different versions, and applications using a shared build of DPDK libraries must avoid using the configuration structure... would that be a possible solution?
Thanks,
Francesco
^ permalink raw reply [relevance 3%]
* Re: [dpdk-dev] rte_eal_init() alternative?
2015-10-09 10:03 3% ` Montorsi, Francesco
@ 2015-10-09 10:40 0% ` Panu Matilainen
2015-10-09 16:03 0% ` Thomas F Herbert
0 siblings, 1 reply; 200+ results
From: Panu Matilainen @ 2015-10-09 10:40 UTC (permalink / raw)
To: Montorsi, Francesco, Thomas Monjalon; +Cc: dev
On 10/09/2015 01:03 PM, Montorsi, Francesco wrote:
> Hi Panu,
>
>
>
>> -----Original Message-----
>> From: Panu Matilainen [mailto:pmatilai@redhat.com]
>> Sent: venerdì 9 ottobre 2015 10:26
>> To: Montorsi, Francesco <fmontorsi@empirix.com>; Thomas Monjalon
>> <thomas.monjalon@6wind.com>
>> Cc: dev@dpdk.org
>> Subject: Re: [dpdk-dev] rte_eal_init() alternative?
>>
>>> Something like the attached patch.
>>
>> It seems the patch missed the boat :)
>
> Correct, sorry. I'm attaching it now.
>
>
>>
>>> Note that the attached patch exposes also a way to skip the argv/argc
>>> configuration process by directly providing a populated configuration
>>> structure...
>>> Let me know what you think about it (the patch is just a draft and
>>> needs more work).
>>
>> Can't comment on what I've not seen, but based on comments seen on this
>> list, having an alternative way to initialize with structures would be welcomed
>> by many. The downside is that those structures will need to be exposed in
>> the API forever which means any changes there are subject to the ABI
>> process.
>>
> Perhaps the init function taking a structure could be an exception
> for ABI changes... i.e., the format of the configuration is not
> guaranteed to stay the same between different versions, and
> applications using a shared build of DPDK libraries must avoid using
> the configuration structure... would that be a possible solution?
Sorry but no, down the path of exceptions lies madness. It'd also be
giving the middle finger to people using DPDK as a shared library.
Exported structs are always a PITA and even more so in something like
configuration which is expected to keep expanding and/or otherwise
changing.
I'd much rather see an rte_eal_init() which takes struct *rte_cfgfile as
the configuration argument. That, plus maybe enhance librte_cfgfile to
allow constructing one entirely in memory + setting values in addition
to getting.
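Purely as a sketch of that suggestion (rte_eal_init_cfg() is a hypothetical
name here, not an existing entry point; rte_cfgfile_load() is the existing
librte_cfgfile loader):

	struct rte_cfgfile *cfg = rte_cfgfile_load("dpdk.cfg", 0);
	if (cfg == NULL)
		return -1;
	if (rte_eal_init_cfg(cfg) < 0)	/* hypothetical cfgfile-based init */
		return -1;

The struct rte_cfgfile handle stays opaque, so the configuration layout would
not become part of the exported ABI.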
- Panu -
> Thanks,
> Francesco
>
>
>
^ permalink raw reply [relevance 0%]
* Re: [dpdk-dev] [PATCH 0/4] librte_table: add name parameter to lpm table
2015-09-08 12:57 0% ` [dpdk-dev] [PATCH 0/4] librte_table: add name parameter to lpm table Dumitrescu, Cristian
@ 2015-10-09 10:49 0% ` Thomas Monjalon
0 siblings, 0 replies; 200+ results
From: Thomas Monjalon @ 2015-10-09 10:49 UTC (permalink / raw)
To: Singh, Jasvinder; +Cc: dev
2015-09-08 12:57, Dumitrescu, Cristian:
> From: Singh, Jasvinder
> > This patchset links to ABI change announced for librte_table. For lpm table,
> > name parameter has been included in LPM table parameters structure.
> > It will eventually allow applications to create more than one instances
> > of lpm table, if required.
> >
>
> Acked-by: Cristian Dumitrescu <cristian.dumitrescu@intel.com>
Applied, thanks
^ permalink raw reply [relevance 0%]
* Re: [dpdk-dev] rte_eal_init() alternative?
2015-10-09 10:40 0% ` Panu Matilainen
@ 2015-10-09 16:03 0% ` Thomas F Herbert
0 siblings, 0 replies; 200+ results
From: Thomas F Herbert @ 2015-10-09 16:03 UTC (permalink / raw)
To: dev
On 10/9/15 11:40 AM, Panu Matilainen wrote:
> On 10/09/2015 01:03 PM, Montorsi, Francesco wrote:
>> Hi Panu,
>>
>>
>>
>>> -----Original Message-----
>>> From: Panu Matilainen [mailto:pmatilai@redhat.com]
>>> Sent: venerdì 9 ottobre 2015 10:26
>>> To: Montorsi, Francesco <fmontorsi@empirix.com>; Thomas Monjalon
>>> <thomas.monjalon@6wind.com>
>>> Cc: dev@dpdk.org
>>> Subject: Re: [dpdk-dev] rte_eal_init() alternative?
>>>
>>>> Something like the attached patch.
>>>
>>> It seems the patch missed the boat :)
>>
>> Correct, sorry. I'm attaching it now.
>>
>>
>>>
>>>> Note that the attached patch exposes also a way to skip the argv/argc
>>>> configuration process by directly providing a populated configuration
>>>> structure...
>>>> Let me know what you think about it (the patch is just a draft and
>>>> needs more work).
>>>
>>> Can't comment on what I've not seen, but based on comments seen on this
>>> list, having an alternative way to initialize with structures would
>>> be welcomed
>>> by many. The downside is that those structures will need to be
>>> exposed in
>>> the API forever which means any changes there are subject to the ABI
>>> process.
>>>
>> Perhaps the init function taking a structure could be an exception
>> for ABI changes... i.e., the format of the configuration is not
>> guaranteed to stay the same between different versions, and
>> applications using a shared build of DPDK libraries must avoid using
>> the configuration structure... would that be a possible solution?
>
> Sorry but no, down the path of exceptions lies madness. It'd also be
> giving the middle finger to people using DPDK as a shared library.
>
> Exported structs are always a PITA and even more so in something like
> configuration which is expected to keep expanding and/or otherwise
> changing.
>
> I'd much rather see an rte_eal_init() which takes struct *rte_cfgfile
> as the configuration argument. That, plus maybe enhance librte_cfgfile
> to allow constructing one entirely in memory + setting values in
> addition to getting.
>
> - Panu -
It is very difficult for application writers to write their own command-line
parsers with an implementation of the -h option. How about a function that
would verify the init parameters and return a benign error if the
options are not correct?
>
>
>
>
>> Thanks,
>> Francesco
>>
>>
>>
>
--
Thomas F Herbert Red Hat
^ permalink raw reply [relevance 0%]
* Re: [dpdk-dev] lib: added support for armv7 architecture
@ 2015-10-11 21:17 2% ` Jan Viktorin
0 siblings, 0 replies; 200+ results
From: Jan Viktorin @ 2015-10-11 21:17 UTC (permalink / raw)
To: David Hunt; +Cc: dev, Vlastimil Kosar
Hello David, all,
I am reviewing your patch for things to be included in the final ARMv7
support... It is quite long, you know. We should probably break the
potential discussion into shorter e-mails, a thread per topic. See my
comments below.
Regards
Jan
On Fri, 2 Oct 2015 22:50:02 +0100
David Hunt <david.hunt@intel.com> wrote:
> From: Amruta Zende <amruta.zende@intel.com>
>
> Signed-off-by: Amruta Zende <amruta.zende@intel.com>
> Signed-off-by: David Hunt <david.hunt@intel.com>
>
> ---
> MAINTAINERS | 5 +
> config/defconfig_arm-native-linuxapp-gcc | 56 ++++
> .../common/include/arch/arm/rte_atomic.h | 269 ++++++++++++++++++++
> .../common/include/arch/arm/rte_byteorder.h | 146 +++++++++++
> .../common/include/arch/arm/rte_cpuflags.h | 138 ++++++++++
> .../common/include/arch/arm/rte_cycles.h | 77 ++++++
> .../common/include/arch/arm/rte_memcpy.h | 101 ++++++++
> .../common/include/arch/arm/rte_prefetch.h | 64 +++++
> .../common/include/arch/arm/rte_rwlock.h | 70 +++++
> .../common/include/arch/arm/rte_spinlock.h | 116 +++++++++
I do not know yet how different the ARMv8 support is, or how it
integrates with the ARMv7 part... Will we have a single directory "arm"?
> lib/librte_eal/common/include/arch/arm/rte_vect.h | 37 +++
> lib/librte_eal/linuxapp/Makefile | 3 +
> lib/librte_eal/linuxapp/arm_pmu/Makefile | 52 ++++
> lib/librte_eal/linuxapp/arm_pmu/rte_enable_pmu.c | 83 ++++++
> mk/arch/arm/rte.vars.mk | 58 +++++
> mk/machine/armv7-a/rte.vars.mk | 63 +++++
> mk/toolchain/gcc/rte.vars.mk | 8 +-
> 17 files changed, 1343 insertions(+), 3 deletions(-)
> create mode 100644 config/defconfig_arm-native-linuxapp-gcc
> create mode 100644 lib/librte_eal/common/include/arch/arm/rte_atomic.h
> create mode 100644 lib/librte_eal/common/include/arch/arm/rte_byteorder.h
> create mode 100644 lib/librte_eal/common/include/arch/arm/rte_cpuflags.h
> create mode 100644 lib/librte_eal/common/include/arch/arm/rte_cycles.h
> create mode 100644 lib/librte_eal/common/include/arch/arm/rte_memcpy.h
> create mode 100644 lib/librte_eal/common/include/arch/arm/rte_prefetch.h
> create mode 100644 lib/librte_eal/common/include/arch/arm/rte_rwlock.h
> create mode 100644 lib/librte_eal/common/include/arch/arm/rte_spinlock.h
> create mode 100644 lib/librte_eal/common/include/arch/arm/rte_vect.h
> create mode 100755 lib/librte_eal/linuxapp/arm_pmu/Makefile
> create mode 100644 lib/librte_eal/linuxapp/arm_pmu/rte_enable_pmu.c
> create mode 100644 mk/arch/arm/rte.vars.mk
> create mode 100644 mk/machine/armv7-a/rte.vars.mk
>
> diff --git a/MAINTAINERS b/MAINTAINERS
> index 080a8e8..9d99d53 100644
> --- a/MAINTAINERS
> +++ b/MAINTAINERS
> @@ -124,6 +124,11 @@ IBM POWER
> M: Chao Zhu <chaozhu@linux.vnet.ibm.com>
> F: lib/librte_eal/common/include/arch/ppc_64/
>
> +Arm V7
> +M: Amrute Zende <amruta.zende@intel.com>
> +M: David Hunt <david.hunt@intel.com>
> +F: lib/librte_eal/common/include/arch/arm/
We were discussing the ARM maintainer with Thomas Monjalon. As it seems
Dave is not going to do the maintenance long-term, I can take
care of the ARM maintenance as well. However, I will not have a proper
ARMv8 machine available for some time.
> +
> Intel x86
> M: Bruce Richardson <bruce.richardson@intel.com>
> M: Konstantin Ananyev <konstantin.ananyev@intel.com>
> diff --git a/config/defconfig_arm-native-linuxapp-gcc b/config/defconfig_arm-native-linuxapp-gcc
This is interesting. Should we have a configuration for "arm" or better
for "armv7" or "armv7-a" or even more specifically for
"cortex-a9/15/..."?
> new file mode 100644
> index 0000000..159aa36
> --- /dev/null
> +++ b/config/defconfig_arm-native-linuxapp-gcc
> @@ -0,0 +1,56 @@
> +# BSD LICENSE
> +#
> +# Copyright(c) 2015 Intel Corporation. All rights reserved.
> +#
> [...]
> +#
> +
> +#include "common_linuxapp"
> +
> +CONFIG_RTE_MACHINE="armv7-a"
> +
> +CONFIG_RTE_ARCH="arm"
> +CONFIG_RTE_ARCH_ARM32=y
> +CONFIG_RTE_ARCH_32=y
> +
> +CONFIG_RTE_TOOLCHAIN="gcc"
> +CONFIG_RTE_TOOLCHAIN_GCC=y
> +
> +CONFIG_RTE_FORCE_INTRINSICS=y
> +CONFIG_RTE_LIBRTE_VHOST=n
> +CONFIG_RTE_LIBRTE_KNI=n
> +CONFIG_RTE_KNI_KMOD=n
I could see a note somewhere that KNI is for 64-bit systems only. Am I
right? Would it be possible to have KNI for ARM?
> +CONFIG_RTE_LIBRTE_LPM=n
> +CONFIG_RTE_LIBRTE_ACL=n
> +CONFIG_RTE_LIBRTE_SCHED=n
> +CONFIG_RTE_LIBRTE_PORT=n
> +CONFIG_RTE_LIBRTE_PIPELINE=n
> +CONFIG_RTE_LIBRTE_TABLE=n
The libraries compile (except for LPM and ACL, due to SSE issues). So
some of them will be enabled by default.
> +CONFIG_RTE_IXGBE_INC_VECTOR=n
> +CONFIG_RTE_LIBRTE_VIRTIO_PMD=n
> +CONFIG_RTE_LIBRTE_CXGBE_PMD=n
> +
> diff --git a/lib/librte_eal/common/include/arch/arm/rte_atomic.h b/lib/librte_eal/common/include/arch/arm/rte_atomic.h
> new file mode 100644
> index 0000000..2a7339c
> --- /dev/null
> +++ b/lib/librte_eal/common/include/arch/arm/rte_atomic.h
> @@ -0,0 +1,269 @@
> +/*-
> + * BSD LICENSE
> + *
> + * Copyright(c) 2015 Intel Corporation. All rights reserved.
> + *
> [...]
> + */
> +
> +#ifndef _RTE_ATOMIC_ARM32_H_
> +#define _RTE_ATOMIC_ARM32_H_
> +
> +#ifdef __cplusplus
> +extern "C" {
> +#endif
> +
> +#include "generic/rte_atomic.h"
> +
> +/**
> + * @file
> + * Atomic Operations
> + *
> + * This file defines a API for atomic operations.
> + */
> +
> +/**
> + * General memory barrier.
> + *
> + * Guarantees that the LOAD and STORE operations generated before the
> + * barrier occur before the LOAD and STORE operations generated after.
> + */
> +#define rte_mb() { __sync_synchronize(); }
> +
> +/**
> + * Write memory barrier.
> + *
> + * Guarantees that the STORE operations generated before the barrier
> + * occur before the STORE operations generated after.
> + */
> +#define rte_wmb() {asm volatile("dsb st" : : : "memory"); }
> +
> +/**
> + * Read memory barrier.
> + *
> + * Guarantees that the LOAD operations generated before the barrier
> + * occur before the LOAD operations generated after.
> + */
> +#define rte_rmb() {asm volatile("dsb " : : : "memory"); }
> +
This is probably too strong a barrier. Our patch series also doesn't do
it properly. It seems we should rather have:
rte_wmb: dmb st
rte_rmb: dmb sy (= __sync_synchronize)
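A minimal sketch of the weaker barriers suggested here (an assumption on my
side that plain "dmb" is sufficient for ARMv7 in this context):

#define rte_wmb() do { asm volatile("dmb st" : : : "memory"); } while (0)
#define rte_rmb() do { asm volatile("dmb sy" : : : "memory"); } while (0)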
> +
> +
> +/*------------------------- 16 bit atomic operations -------------------------*/
> +
> +#ifndef RTE_FORCE_INTRINSICS
> +static inline int
> +rte_atomic16_cmpset(volatile uint16_t *dst, uint16_t exp, uint16_t src)
> +{
> + return __sync_bool_compare_and_swap(dst, exp, src);
Should we go with __sync_* or __atomic_* constructs? The __atomic_*
intrinsics are available since GCC 4.7. Our patch set uses the __atomic_* ones.
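For comparison, a hedged sketch of the same cmpset written with the
__atomic_* builtins (the memory ordering is chosen for illustration only):

static inline int
rte_atomic16_cmpset(volatile uint16_t *dst, uint16_t exp, uint16_t src)
{
	/* strong compare-and-exchange; returns 1 on success, 0 otherwise */
	return __atomic_compare_exchange_n(dst, &exp, src, 0,
			__ATOMIC_ACQUIRE, __ATOMIC_ACQUIRE);
}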
> +}
> +
> +static inline void
> +rte_atomic16_inc(rte_atomic16_t *v)
> +{
> + rte_atomic16_add(v, 1);
> +}
> +
> +static inline void
> +rte_atomic16_dec(rte_atomic16_t *v)
> +{
> + rte_atomic16_sub(v, 1);
> +}
> +
> +static inline int rte_atomic16_inc_and_test(rte_atomic16_t *v)
> +{
> + return (__sync_add_and_fetch(&v->cnt, 1) == 0);
> +}
> +
> +static inline int rte_atomic16_dec_and_test(rte_atomic16_t *v)
> +{
> + return (__sync_sub_and_fetch(&v->cnt, 1) == 0);
> +}
> +
> +static inline int rte_atomic16_test_and_set(rte_atomic16_t *v)
> +{
> + return rte_atomic16_cmpset((volatile uint16_t *)&v->cnt, 0, 1);
> +}
> +
> +
> +/*------------------------- 32 bit atomic operations -------------------------*/
> +
> +static inline int
> +rte_atomic32_cmpset(volatile uint32_t *dst, uint32_t exp, uint32_t src)
> +{
> + return __sync_bool_compare_and_swap(dst, exp, src);
> +}
> +
> +static inline void
> +rte_atomic32_inc(rte_atomic32_t *v)
> +{
> + rte_atomic32_add(v, 1);
> +}
> +
> +static inline void
> +rte_atomic32_dec(rte_atomic32_t *v)
> +{
> + rte_atomic32_sub(v, 1);
> +}
> +
> +static inline int rte_atomic32_inc_and_test(rte_atomic32_t *v)
> +{
> + return (__sync_add_and_fetch(&v->cnt, 1) == 0);
> +}
> +
> +static inline int rte_atomic32_dec_and_test(rte_atomic32_t *v)
> +{
> + return (__sync_sub_and_fetch(&v->cnt, 1) == 0);
> +}
> +
> +static inline int rte_atomic32_test_and_set(rte_atomic32_t *v)
> +{
> + return rte_atomic32_cmpset((volatile uint32_t *)&v->cnt, 0, 1);
> +}
> +
> +/*------------------------- 64 bit atomic operations -------------------------*/
> +
> +static inline int
> +rte_atomic64_cmpset(volatile uint64_t *dst, uint64_t exp, uint64_t src)
> +{
> + return __sync_bool_compare_and_swap(dst, exp, src);
> +}
> +
> +static inline void
> +rte_atomic64_init(rte_atomic64_t *v)
> +{
> +#ifdef __LP64__
> + v->cnt = 0;
I assume this is for ARMv8...
> +#else
> + int success = 0;
> + uint64_t tmp;
> +
> + while (success == 0) {
> + tmp = v->cnt;
> + success = rte_atomic64_cmpset((volatile uint64_t *)&v->cnt,
> + tmp, 0);
> + }
> +#endif
> +}
> +
> +static inline int64_t
> +rte_atomic64_read(rte_atomic64_t *v)
> +{
> +#ifdef __LP64__
> + return v->cnt;
> +#else
> + int success = 0;
> + uint64_t tmp;
> +
> + while (success == 0) {
> + tmp = v->cnt;
> + /* replace the value by itself */
> + success = rte_atomic64_cmpset((volatile uint64_t *)&v->cnt,
> + tmp, tmp);
> + }
> + return tmp;
> +#endif
> +}
> +
> +static inline void
> +rte_atomic64_set(rte_atomic64_t *v, int64_t new_value)
> +{
> +#ifdef __LP64__
> + v->cnt = new_value;
> +#else
> + int success = 0;
> + uint64_t tmp;
> +
> + while (success == 0) {
> + tmp = v->cnt;
> + success = rte_atomic64_cmpset((volatile uint64_t *)&v->cnt,
> + tmp, new_value);
> + }
> +#endif
> +}
> +
> +static inline void
> +rte_atomic64_add(rte_atomic64_t *v, int64_t inc)
> +{
> + __sync_fetch_and_add(&v->cnt, inc);
> +}
> +
> +static inline void
> +rte_atomic64_sub(rte_atomic64_t *v, int64_t dec)
> +{
> + __sync_fetch_and_sub(&v->cnt, dec);
> +}
> +
> +static inline void
> +rte_atomic64_inc(rte_atomic64_t *v)
> +{
> + rte_atomic64_add(v, 1);
> +}
> +
> +static inline void
> +rte_atomic64_dec(rte_atomic64_t *v)
> +{
> + rte_atomic64_sub(v, 1);
> +}
> +
> +static inline int64_t
> +rte_atomic64_add_return(rte_atomic64_t *v, int64_t inc)
> +{
> + return __sync_add_and_fetch(&v->cnt, inc);
> +}
> +
> +static inline int64_t
> +rte_atomic64_sub_return(rte_atomic64_t *v, int64_t dec)
> +{
> + return __sync_sub_and_fetch(&v->cnt, dec);
> +}
> +
> +static inline int rte_atomic64_inc_and_test(rte_atomic64_t *v)
> +{
> + return rte_atomic64_add_return(v, 1) == 0;
> +}
> +
> +static inline int rte_atomic64_dec_and_test(rte_atomic64_t *v)
> +{
> + return rte_atomic64_sub_return(v, 1) == 0;
> +}
> +
> +static inline int rte_atomic64_test_and_set(rte_atomic64_t *v)
> +{
> + return rte_atomic64_cmpset((volatile uint64_t *)&v->cnt, 0, 1);
> +}
> +
> +static inline void rte_atomic64_clear(rte_atomic64_t *v)
> +{
> + rte_atomic64_set(v, 0);
> +}
> +#endif
> +
> +#ifdef __cplusplus
> +}
> +#endif
> +
> +#endif /* _RTE_ATOMIC_ARM32_H_ */
> diff --git a/lib/librte_eal/common/include/arch/arm/rte_byteorder.h b/lib/librte_eal/common/include/arch/arm/rte_byteorder.h
> new file mode 100644
> index 0000000..effbd62
> --- /dev/null
> +++ b/lib/librte_eal/common/include/arch/arm/rte_byteorder.h
> @@ -0,0 +1,146 @@
> +/*-
> + * BSD LICENSE
> + *
> + * Copyright (C) IBM Corporation 2014. All rights reserved.
> + * Copyright(c) 2015 Intel Corporation. All rights reserved.
> + *
> [...]
> + */
> +
> +/* Inspired from FreeBSD src/sys/powerpc/include/endian.h
> + * Copyright (c) 1987, 1991, 1993
> + * The Regents of the University of California. All rights reserved.
> +*/
> +
> +#ifndef _RTE_BYTEORDER_ARM32_H_
> +#define _RTE_BYTEORDER_ARM32_H_
> +
> +#ifdef __cplusplus
> +extern "C" {
> +#endif
> +
> +#include "generic/rte_byteorder.h"
> +
> +/*
> + * An architecture-optimized byte swap for a 16-bit value.
> + *
> + * Do not use this function directly. The preferred function is rte_bswap16().
> + */
> +static inline uint16_t
> +rte_arch_bswap16(uint16_t _x)
> +{
> + return __builtin_bswap16(_x);
This has been available since GCC 4.8. Our code uses assembler for
this (so we can use 4.7, and by using __sync_* constructs instead of
__atomic_* we can support even 4.6).
http://www.serverphorums.com/read.php?12,641912
Do we want to support GCC 4.6+, or is it sufficient to stay at 4.7+
or even 4.8+?
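For reference, a sketch of what such an assembler fallback could look like
("rev16" swaps the bytes of each halfword; assumption: the value sits in the
low halfword of the register):

static inline uint16_t
rte_arch_bswap16(uint16_t x)
{
	register uint16_t rv;

	asm volatile("rev16 %0, %1" : "=r"(rv) : "r"(x));
	return rv;
}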
> +}
> +
> +/*
> + * An architecture-optimized byte swap for a 32-bit value.
> + *
> + * Do not use this function directly. The preferred function is rte_bswap32().
> + */
> +static inline uint32_t
> +rte_arch_bswap32(uint32_t _x)
> +{
> + return __builtin_bswap32(_x);
> +}
> +
> +/*
> + * An architecture-optimized byte swap for a 64-bit value.
> + *
> + * Do not use this function directly. The preferred function is rte_bswap64().
> + */
> +/* 64-bit mode */
> +static inline uint64_t
> +rte_arch_bswap64(uint64_t _x)
> +{
> + return __builtin_bswap64(_x);
> +}
> +
> +#ifndef RTE_FORCE_INTRINSICS
> +#define rte_bswap16(x) ((uint16_t)(__builtin_constant_p(x) ? \
> + rte_constant_bswap16(x) : \
> + rte_arch_bswap16(x)))
> +
> +#define rte_bswap32(x) ((uint32_t)(__builtin_constant_p(x) ? \
> + rte_constant_bswap32(x) : \
> + rte_arch_bswap32(x)))
> +
> +#define rte_bswap64(x) ((uint64_t)(__builtin_constant_p(x) ? \
> + rte_constant_bswap64(x) : \
> + rte_arch_bswap64(x)))
> +#else
> + /*
> + * __builtin_bswap16 is only available gcc 4.8 and upwards
> + */
> +#if __GNUC__ < 4 || (__GNUC__ == 4 && __GNUC_MINOR__ < 8)
> +#define rte_bswap16(x) ((uint16_t)(__builtin_constant_p(x) ? \
> + rte_constant_bswap16(x) : \
> + rte_arch_bswap16(x)))
> +#endif
> +#endif
> +
> +#if RTE_BYTE_ORDER == RTE_LITTLE_ENDIAN
> +
> +#define rte_cpu_to_le_16(x) (x)
> +#define rte_cpu_to_le_32(x) (x)
> +#define rte_cpu_to_le_64(x) (x)
> +
> +#define rte_cpu_to_be_16(x) rte_bswap16(x)
> +#define rte_cpu_to_be_32(x) rte_bswap32(x)
> +#define rte_cpu_to_be_64(x) rte_bswap64(x)
> +
> +#define rte_le_to_cpu_16(x) (x)
> +#define rte_le_to_cpu_32(x) (x)
> +#define rte_le_to_cpu_64(x) (x)
> +
> +#define rte_be_to_cpu_16(x) rte_bswap16(x)
> +#define rte_be_to_cpu_32(x) rte_bswap32(x)
> +#define rte_be_to_cpu_64(x) rte_bswap64(x)
> +
> +#else /* RTE_BIG_ENDIAN */
> +
> +#define rte_cpu_to_le_16(x) rte_bswap16(x)
> +#define rte_cpu_to_le_32(x) rte_bswap32(x)
> +#define rte_cpu_to_le_64(x) rte_bswap64(x)
> +
> +#define rte_cpu_to_be_16(x) (x)
> +#define rte_cpu_to_be_32(x) (x)
> +#define rte_cpu_to_be_64(x) (x)
> +
> +#define rte_le_to_cpu_16(x) rte_bswap16(x)
> +#define rte_le_to_cpu_32(x) rte_bswap32(x)
> +#define rte_le_to_cpu_64(x) rte_bswap64(x)
> +
> +#define rte_be_to_cpu_16(x) (x)
> +#define rte_be_to_cpu_32(x) (x)
> +#define rte_be_to_cpu_64(x) (x)
> +#endif
> +
> +#ifdef __cplusplus
> +}
> +#endif
> +
> +#endif /* _RTE_BYTEORDER_ARM32_H_ */
> diff --git a/lib/librte_eal/common/include/arch/arm/rte_cpuflags.h b/lib/librte_eal/common/include/arch/arm/rte_cpuflags.h
> new file mode 100644
> index 0000000..411ca37
> --- /dev/null
> +++ b/lib/librte_eal/common/include/arch/arm/rte_cpuflags.h
> @@ -0,0 +1,138 @@
> +/*-
> + * BSD LICENSE
> + *
> + * Copyright (C) IBM Corporation 2014. All rights reserved.
> + * Copyright(c) 2015 Intel Corporation. All rights reserved.
> + *
> [...]
> + */
> +
> +#ifndef _RTE_CPUFLAGS_ARM32_H_
> +#define _RTE_CPUFLAGS_ARM32_H_
> +
> +#ifdef __cplusplus
> +extern "C" {
> +#endif
> +
> +#include <elf.h>
> +#include <fcntl.h>
> +#include <assert.h>
> +#include <unistd.h>
> +#include <string.h>
> +
> +#include "generic/rte_cpuflags.h"
> +
> +/* Symbolic values for the entries in the auxiliary table */
> +#define AT_HWCAP 16
> +#define AT_HWCAP2 26
> +#define AT_PLATFORM 15
> +
> +/* software based registers */
> +enum cpu_register_t {
> + REG_HWCAP = 0,
> + AARCH_MODE,
> +};
Our code can also read REG_HWCAP2 but not the AT_PLATFORM.
> +
> +/**
> + * Enumeration of all CPU features supported
> + */
> +enum rte_cpu_flag_t {
> + RTE_CPUFLAG_FP = 0,
> + RTE_CPUFLAG_ASIMD,
> + RTE_CPUFLAG_EVTSTRM,
> + RTE_CPUFLAG_AARCH64,
> + RTE_CPUFLAG_AARCH32,
Nice, you can detect 32/64 bit ARM architecture here. We didn't even
try.
> + /* The last item */
> + RTE_CPUFLAG_NUMFLAGS,/**< This should always be the last! */
> +};
> +
> +static const struct feature_entry cpu_feature_table[] = {
> + FEAT_DEF(FP, 0x00000001, 0, REG_HWCAP, 0)
> + FEAT_DEF(ASIMD, 0x00000001, 0, REG_HWCAP, 1)
> + FEAT_DEF(EVTSTRM, 0x00000001, 0, REG_HWCAP, 2)
> + FEAT_DEF(AARCH64, 0x00000001, 0, AARCH_MODE, 3)
> + FEAT_DEF(AARCH32, 0x00000001, 0, AARCH_MODE, 4)
> +};
> +
> +/*
> + * Read AUXV software register and get cpu features for Power
> + */
> +static inline void
> +rte_cpu_get_features(__attribute__((unused)) uint32_t leaf,
> + __attribute__((unused)) uint32_t subleaf, cpuid_registers_t out)
> +{
> + int auxv_fd;
> + Elf32_auxv_t auxv;
> +
> + auxv_fd = open("/proc/self/auxv", O_RDONLY);
> + assert(auxv_fd);
> + while (read(auxv_fd, &auxv,
> + sizeof(Elf32_auxv_t)) == sizeof(Elf32_auxv_t)) {
> + if (auxv.a_type == AT_HWCAP)
> + out[REG_HWCAP] = auxv.a_un.a_val;
> + if (auxv.a_type == AT_PLATFORM) {
> + if (strcmp((const char *)auxv.a_un.a_val, "aarch64")
> + == 0)
> + out[AARCH_MODE] = (1 << 3);
> + else if (strcmp((const char *)auxv.a_un.a_val,
> + "aarch32") == 0)
> + out[AARCH_MODE] = (1 << 4);
Why are there such strange values as 1 << 3 and 1 << 4?
> + }
> + }
> +}
> +
> +/*
> + * Checks if a particular flag is available on current machine.
> + */
> +static inline int
> +rte_cpu_get_flag_enabled(enum rte_cpu_flag_t feature)
> +{
> + const struct feature_entry *feat;
> + cpuid_registers_t regs = {0};
> +
> + if (feature >= RTE_CPUFLAG_NUMFLAGS)
> + /* Flag does not match anything in the feature tables */
> + return -ENOENT;
> +
> + feat = &cpu_feature_table[feature];
> +
> + if (!feat->leaf)
> + /* This entry in the table wasn't filled out! */
> + return -EFAULT;
> +
> + /* get the cpuid leaf containing the desired feature */
> + rte_cpu_get_features(feat->leaf, feat->subleaf, regs);
> +
> + /* check if the feature is enabled */
> + return (regs[feat->reg] >> feat->bit) & 1;
> +}
> +
> +#ifdef __cplusplus
> +}
> +#endif
> +
> +#endif /* _RTE_CPUFLAGS_ARM32_H_ */
> diff --git a/lib/librte_eal/common/include/arch/arm/rte_cycles.h b/lib/librte_eal/common/include/arch/arm/rte_cycles.h
> new file mode 100644
> index 0000000..140d1bb
> --- /dev/null
> +++ b/lib/librte_eal/common/include/arch/arm/rte_cycles.h
> @@ -0,0 +1,77 @@
> +/*-
> + * BSD LICENSE
> + *
> + * Copyright (C) IBM Corporation 2014. All rights reserved.
> + * Copyright(c) 2015 Intel Corporation. All rights reserved.
> [...]
> + */
> +
> +#ifndef _RTE_CYCLES_ARM32_H_
> +#define _RTE_CYCLES_ARM32_H_
> +
> +#ifdef __cplusplus
> +extern "C" {
> +#endif
> +
> +#include "generic/rte_cycles.h"
> +
> +/**
> + * Read the time base register.
> + *
> + * @return
> + * The time base for this lcore.
> + */
> +static inline uint64_t
> +rte_rdtsc(void)
> +{
> + unsigned tsc;
> + uint64_t final_tsc;
> +
> + /* Read PMCCNTR */
> + asm volatile("mrc p15, 0, %0, c9, c13, 0" : "=r"(tsc));
> + /* 1 tick = 64 clocks */
> + final_tsc = ((uint64_t)tsc) << 6;
> +
> + return (uint64_t)final_tsc;
> +}
Are you sure the PMCCNTR is not used by the Linux kernel? Would we
break something there? At least arch/arm/kernel/perf_event_v7.c
does use the counter, and it has routines for both reading and writing
it. So I am afraid we could break perf with this approach...
Ours, which is not the best either:
+static inline uint64_t
+rte_rdtsc(void)
+{
+ struct timespec val;
+ uint64_t v;
+
+ while (clock_gettime(CLOCK_MONOTONIC_RAW, &val) != 0)
+ /* no body */;
+
+ v = (uint64_t) val.tv_sec * 1000000000LL;
+ v += (uint64_t) val.tv_nsec;
+ return v;
+}
We can make it configurable. If somebody does not care about perf
support but wants the lower-latency PMU counter, it would be possible
to switch to another implementation.
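Just to illustrate the configurable approach, the selection could be done
at build time. This is only a sketch; RTE_ARM_EAL_RDTSC_USE_PMU is an
assumed config option name that does not exist yet:

    #include <stdint.h>
    #include <time.h>

    #ifdef RTE_ARM_EAL_RDTSC_USE_PMU
    /* low-latency PMU cycle counter; needs PMUSERENR/PMCR set up and
     * may conflict with perf, as discussed above */
    static inline uint64_t
    rte_rdtsc(void)
    {
            unsigned int tsc;

            asm volatile("mrc p15, 0, %0, c9, c13, 0" : "=r"(tsc));
            return (uint64_t)tsc << 6; /* 1 tick = 64 clocks */
    }
    #else
    /* safe default: monotonic raw clock, does not touch the PMU */
    static inline uint64_t
    rte_rdtsc(void)
    {
            struct timespec val;

            while (clock_gettime(CLOCK_MONOTONIC_RAW, &val) != 0)
                    /* no body */;
            return (uint64_t)val.tv_sec * 1000000000LL + (uint64_t)val.tv_nsec;
    }
    #endif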
> +
> +static inline uint64_t
> +rte_rdtsc_precise(void)
> +{
> + asm volatile("isb sy" : : : );
> + return rte_rdtsc();
> +}
> +
> +static inline uint64_t
> +rte_get_tsc_cycles(void) { return rte_rdtsc(); }
> +
> +#ifdef __cplusplus
> +}
> +#endif
> +
> +#endif /* _RTE_CYCLES_ARM32_H_ */
> diff --git a/lib/librte_eal/common/include/arch/arm/rte_memcpy.h b/lib/librte_eal/common/include/arch/arm/rte_memcpy.h
> new file mode 100644
> index 0000000..341052e
> --- /dev/null
> +++ b/lib/librte_eal/common/include/arch/arm/rte_memcpy.h
> @@ -0,0 +1,101 @@
> +/*-
> + * BSD LICENSE
> + *
> + * Copyright (C) IBM Corporation 2014. All rights reserved.
> + * Copyright(c) 2015 Intel Corporation. All rights reserved.
> + *
> [...]
> + */
> +
> +#ifndef _RTE_MEMCPY_ARM32_H_
> +#define _RTE_MEMCPY_ARM32_H_
> +
> +#include <stdint.h>
> +#include <string.h>
> +
> +#ifdef __cplusplus
> +extern "C" {
> +#endif
> +
> +#include "generic/rte_memcpy.h"
> +
> +
> +
> +static inline void
> +rte_mov16(uint8_t *dst, const uint8_t *src)
> +{
> + memcpy(dst, src, 16);
> +}
> +
> +static inline void
> +rte_mov32(uint8_t *dst, const uint8_t *src)
> +{
> + memcpy(dst, src, 32);
> +}
> +
> +static inline void
> +rte_mov48(uint8_t *dst, const uint8_t *src)
> +{
> + memcpy(dst, src, 48);
> +}
> +
> +static inline void
> +rte_mov64(uint8_t *dst, const uint8_t *src)
> +{
> + memcpy(dst, src, 64);
> +}
> +
> +static inline void
> +rte_mov128(uint8_t *dst, const uint8_t *src)
> +{
> + memcpy(dst, src, 128);
> +}
> +
> +static inline void
> +rte_mov256(uint8_t *dst, const uint8_t *src)
> +{
> + memcpy(dst, src, 256);
> +}
> +
> +static inline void *
> +rte_memcpy(void *dst, const void *src, size_t n)
> +{
> + return memcpy(dst, src, n);
> +}
> +
> +
> +static inline void *
> +rte_memcpy_func(void *dst, const void *src, size_t n)
> +{
> + return memcpy(dst, src, n);
> +}
Here we do better with NEON optimizations. However, I am not quite
sure how to behave on architectures _without_ NEON. Second, if you look
at the statistics we made (http://dpdk.org/dev/patchwork/patch/7389/)
you can see that the optimization makes sense for Cortex-A15 only.
For the other cores (A7, A9), it only helps for lengths <= 32 B.
Should we do a runtime check for NEON presence?
Should we do a runtime check for the current ARM machine type?
Should we optimize only for specific lengths?
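To make the question more concrete, a runtime dispatch could look roughly
like this. Sketch only: RTE_CPUFLAG_NEON and rte_memcpy_neon() are assumed
names that do not exist in this patch.

    #include <string.h>
    #include <stdint.h>
    #include <rte_cpuflags.h>

    static int rte_memcpy_has_neon; /* filled once at EAL init */

    static inline void
    rte_memcpy_init(void)
    {
            /* RTE_CPUFLAG_NEON is an assumed flag, see above */
            rte_memcpy_has_neon =
                    (rte_cpu_get_flag_enabled(RTE_CPUFLAG_NEON) > 0);
    }

    static inline void *
    rte_memcpy(void *dst, const void *src, size_t n)
    {
            /* cut-over length and per-core tuning are open questions,
             * see the measurements referenced above */
            if (!rte_memcpy_has_neon)
                    return memcpy(dst, src, n);
            return rte_memcpy_neon(dst, src, n); /* assumed NEON variant */
    }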
> +
> +#ifdef __cplusplus
> +}
> +#endif
> +
> +#endif /* _RTE_MEMCPY_ARM32_H_ */
> diff --git a/lib/librte_eal/common/include/arch/arm/rte_prefetch.h b/lib/librte_eal/common/include/arch/arm/rte_prefetch.h
> new file mode 100644
> index 0000000..47be3b8
> --- /dev/null
> +++ b/lib/librte_eal/common/include/arch/arm/rte_prefetch.h
> @@ -0,0 +1,64 @@
> +/*-
> + * BSD LICENSE
> + *
> + * Copyright(c) 2015 Intel Corporation. All rights reserved.
> + *
> [...]
> + */
> +
> +#ifndef _RTE_PREFETCH_ARM32_H_
> +#define _RTE_PREFETCH_ARM32_H_
> +
> +#ifdef __cplusplus
> +extern "C" {
> +#endif
> +
> +#include "generic/rte_prefetch.h"
> +
> +static inline void
> +rte_prefetch0(const volatile void __attribute__((unused)) *p)
> +{
> + asm volatile("nop");
> +}
> +
> +static inline void
> +rte_prefetch1(const volatile void __attribute__((unused)) *p)
> +{
> + asm volatile("nop");
> +}
> +
> +static inline void
> +rte_prefetch2(const volatile void __attribute__((unused)) *p)
> +{
> + asm volatile("nop");
> +}
We use the PLD instruction here. We could probably use __builtin_prefetch instead.
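For reference, a minimal sketch of what we mean:

    static inline void
    rte_prefetch0(const volatile void *p)
    {
            asm volatile("pld [%0]" : : "r"(p));
            /* or, letting the compiler emit PLD where available:
             * __builtin_prefetch((const void *)p, 0, 3);
             */
    }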
> +
> +#ifdef __cplusplus
> +}
> +#endif
> +
> +#endif /* _RTE_PREFETCH_ARM32_H_ */
> diff --git a/lib/librte_eal/common/include/arch/arm/rte_rwlock.h b/lib/librte_eal/common/include/arch/arm/rte_rwlock.h
> new file mode 100644
> index 0000000..87349b9
> --- /dev/null
> +++ b/lib/librte_eal/common/include/arch/arm/rte_rwlock.h
> @@ -0,0 +1,70 @@
> +/*-
> + * BSD LICENSE
> + *
> + * Copyright(c) 2015 Intel Corporation. All rights reserved.
> + *
> [...]
> + */
> +
> +#ifndef _RTE_RWLOCK_ARM32_H_
> +#define _RTE_RWLOCK_ARM32_H_
> +
> +#ifdef __cplusplus
> +extern "C" {
> +#endif
> +
> +#include "generic/rte_rwlock.h"
> +
> +static inline void
> +rte_rwlock_read_lock_tm(rte_rwlock_t *rwl)
> +{
> + rte_rwlock_read_lock(rwl);
> +}
> +
> +static inline void
> +rte_rwlock_read_unlock_tm(rte_rwlock_t *rwl)
> +{
> + rte_rwlock_read_unlock(rwl);
> +}
> +
> +static inline void
> +rte_rwlock_write_lock_tm(rte_rwlock_t *rwl)
> +{
> + rte_rwlock_write_lock(rwl);
> +}
> +
> +static inline void
> +rte_rwlock_write_unlock_tm(rte_rwlock_t *rwl)
> +{
> + rte_rwlock_write_unlock(rwl);
> +}
> +
> +#ifdef __cplusplus
> +}
> +#endif
> +
> +#endif /* _RTE_RWLOCK_ARM32_H_ */
> diff --git a/lib/librte_eal/common/include/arch/arm/rte_spinlock.h b/lib/librte_eal/common/include/arch/arm/rte_spinlock.h
> new file mode 100644
> index 0000000..45d01a0
> --- /dev/null
> +++ b/lib/librte_eal/common/include/arch/arm/rte_spinlock.h
> @@ -0,0 +1,116 @@
> +/*-
> + * BSD LICENSE
> + *
> + * Copyright (C) IBM Corporation 2014. All rights reserved.
> + * Copyright(c) 2015 Intel Corporation. All rights reserved.
> + *
> [...]
> + */
> +
> +#ifndef _RTE_SPINLOCK_ARM32_H_
> +#define _RTE_SPINLOCK_ARM32_H_
> +
> +#ifdef __cplusplus
> +extern "C" {
> +#endif
> +
> +#include <rte_common.h>
> +
> +#include "generic/rte_spinlock.h"
> +
> +
> +#ifndef RTE_FORCE_INTRINSICS
> +
> +static inline void
> +rte_spinlock_lock(rte_spinlock_t *sl)
> +{
> + while (__sync_lock_test_and_set(&sl->locked, 1))
> + while (sl->locked)
> + rte_pause();
> +}
> +
> +static inline void
> +rte_spinlock_unlock(rte_spinlock_t *sl)
> +{
> + __sync_lock_release(&sl->locked);
> +}
> +
> +static inline int
> +rte_spinlock_trylock(rte_spinlock_t *sl)
> +{
> + return (__sync_lock_test_and_set(&sl->locked, 1) == 0);
> +}
> +
> +#endif
> +
> +static inline int
> +rte_tm_supported(void)
> +{
> + return 0;
> +}
> +
> +static inline void
> +rte_spinlock_lock_tm(rte_spinlock_t *sl)
> +{
> + rte_spinlock_lock(sl); /* fall-back */
> +}
> +
> +static inline int
> +rte_spinlock_trylock_tm(rte_spinlock_t *sl)
> +{
> + return rte_spinlock_trylock(sl);
> +}
> +
> +static inline void
> +rte_spinlock_unlock_tm(rte_spinlock_t *sl)
> +{
> + rte_spinlock_unlock(sl);
> +}
> +
> +static inline void
> +rte_spinlock_recursive_lock_tm(rte_spinlock_recursive_t *slr)
> +{
> + rte_spinlock_recursive_lock(slr); /* fall-back */
> +}
> +
> +static inline void
> +rte_spinlock_recursive_unlock_tm(rte_spinlock_recursive_t *slr)
> +{
> + rte_spinlock_recursive_unlock(slr);
> +}
> +
> +static inline int
> +rte_spinlock_recursive_trylock_tm(rte_spinlock_recursive_t *slr)
> +{
> + return rte_spinlock_recursive_trylock(slr);
> +}
> +
> +#ifdef __cplusplus
> +}
> +#endif
> +
> +#endif /* _RTE_SPINLOCK_ARM32_H_ */
> diff --git a/lib/librte_eal/common/include/arch/arm/rte_vect.h b/lib/librte_eal/common/include/arch/arm/rte_vect.h
> new file mode 100644
> index 0000000..5994efd
> --- /dev/null
> +++ b/lib/librte_eal/common/include/arch/arm/rte_vect.h
> @@ -0,0 +1,37 @@
> +/*-
> + * BSD LICENSE
> + *
> + * Copyright(c) 2015 Intel Corporation. All rights reserved.
> + *
> [...]
> + */
> +
> +#ifndef _RTE_VECT_ARM32_H_
> +#define _RTE_VECT_ARM32_H_
> +
> +
> +#endif /* _RTE_VECT_ARM32_H_*/
> diff --git a/lib/librte_eal/linuxapp/Makefile b/lib/librte_eal/linuxapp/Makefile
> index d9c5233..549f2ed 100644
> --- a/lib/librte_eal/linuxapp/Makefile
> +++ b/lib/librte_eal/linuxapp/Makefile
> @@ -35,6 +35,9 @@ ifeq ($(CONFIG_RTE_EAL_IGB_UIO),y)
> DIRS-$(CONFIG_RTE_LIBRTE_EAL_LINUXAPP) += igb_uio
> endif
> DIRS-$(CONFIG_RTE_LIBRTE_EAL_LINUXAPP) += eal
> +ifeq ($(CONFIG_RTE_ARCH), "arm")
> +DIRS-$(CONFIG_RTE_LIBRTE_EAL_LINUXAPP) += arm_pmu
> +endif
> ifeq ($(CONFIG_RTE_KNI_KMOD),y)
> DIRS-$(CONFIG_RTE_LIBRTE_EAL_LINUXAPP) += kni
> endif
> diff --git a/lib/librte_eal/linuxapp/arm_pmu/Makefile b/lib/librte_eal/linuxapp/arm_pmu/Makefile
> new file mode 100755
> index 0000000..0adb23b
> --- /dev/null
> +++ b/lib/librte_eal/linuxapp/arm_pmu/Makefile
> @@ -0,0 +1,52 @@
> +# BSD LICENSE
> +#
> +# Copyright(c) 2015 Intel Corporation. All rights reserved.
> +#
> [...]
> +
> +include $(RTE_SDK)/mk/rte.vars.mk
> +
> +#
> +# module name and path
> +#
> +MODULE = arm_pmu
> +MODULE_PATH = drivers/net/arm_pmu
> +
> +#
> +# CFLAGS
> +#
> +MODULE_CFLAGS += -I$(SRCDIR) --param max-inline-insns-single=100
> +MODULE_CFLAGS += -I$(RTE_OUTPUT)/include
> +MODULE_CFLAGS += -Winline -Wall -Werror
> +MODULE_CFLAGS += -include $(RTE_OUTPUT)/include/rte_config.h
> +
> +#
> +# all source are stored in SRCS-y
> +#
> +SRCS-y := rte_enable_pmu.c
> +
> +include $(RTE_SDK)/mk/rte.module.mk
> diff --git a/lib/librte_eal/linuxapp/arm_pmu/rte_enable_pmu.c b/lib/librte_eal/linuxapp/arm_pmu/rte_enable_pmu.c
> new file mode 100644
> index 0000000..9d77271
> --- /dev/null
> +++ b/lib/librte_eal/linuxapp/arm_pmu/rte_enable_pmu.c
> @@ -0,0 +1,83 @@
> +/*-
> + * BSD LICENSE
> + *
> + * Copyright(c) 2015 Intel Corporation. All rights reserved.
> + *
> [...]
> + */
> +
> + /**
> + * @file
> + * This file enables the ARM PMU.
> + *
> + * It contains the code to enable the usage of PMU registers
> + * by application code on all cores.
> + */
> +
> +#include <linux/device.h>
> +#include <linux/module.h>
> +#include <linux/pci.h>
> +#include <linux/uio_driver.h>
> +#include <linux/io.h>
> +#include <linux/msi.h>
> +#include <linux/version.h>
> +#include <linux/slab.h>
> +
> +void enable_pmu(void __attribute__((unused)) *info)
> +{
> + int core_id = 0;
> +
> + core_id = smp_processor_id();
> + /* Configure PMUSERENR - Enables user mode to have access to the
> + * Performance Monitor Registers. */
> + asm volatile("mcr p15, 0, %0, c9, c14, 0" : : "r"(1));
> + /* Configure PMCR */
> + asm volatile("mcr p15, 0, %0, c9, c12, 0" : : "r"(29));
> + /* Configure PMCNTENSET Enable counters */
> + asm volatile("mcr p15, 0, %0, c9, c12, 1" : : "r"(0x8000000f));
> + return;
> +}
> +
> +int __init init_module_set_arm_pmu(void)
> +{
> + int ret = 0;
> + /* smp_call_function will make sure that the registers
> + * for all the cores are enabled for tracking their PMUs
> + * This is required for the rdtsc equivalent on ARMv7 */
> + ret = smp_call_function(&enable_pmu, NULL, 0);
> + /* smp_call_function would only call on all other modules
> + * so call it for the current module */
> + enable_pmu(NULL);
> + return ret;
> +}
> +
> +void __exit cleanup_module_set_arm_pmu(void)
> +{
> +}
> +
> +module_init(init_module_set_arm_pmu);
> +module_exit(cleanup_module_set_arm_pmu);
Nice, but as I've already stated, setting up the PMU can break perf.
Otherwise, it is quite a nice solution. Anyway, a kernel module will be
necessary for the ARM machines utilizing the internal EMACs. We have
to deal with I/O coherency, PHYs and clock control.
The I/O coherency might not be an issue for Cortex-A15 (and newer) as
there is a cache-coherent interconnect available (using the AMBA ACE
extension). However, I have not found out which SoCs integrate this
interconnect. For older SoCs, we have to deal with this.
I am also thinking of using non-cacheable memory only, at least at
the beginning. This requires setting the proper MMU/TLB flags on the
allocated huge pages. The processing will be slow this way, but there
will be no need for additional system calls into the kernel during
sending/receiving packets. I've measured that accessing only a few bytes
(say 128-512 B) in non-cacheable areas does not impact the processing
that much, so it could be sufficient for tasks like L2/L3 forwarding,
etc. But I have no idea how to properly access the MMU/TLB settings of
the selected (or all?) huge pages. This would be a hack into the
Linux kernel.
> diff --git a/mk/arch/arm/rte.vars.mk b/mk/arch/arm/rte.vars.mk
> new file mode 100644
> index 0000000..3dfeb16
> --- /dev/null
> +++ b/mk/arch/arm/rte.vars.mk
> @@ -0,0 +1,58 @@
> +# BSD LICENSE
> +#
> +# Copyright(c) 2015 Intel Corporation. All rights reserved.
> +#
> [...]
> +#
> +# arch:
> +#
> +# - define ARCH variable (overriden by cmdline or by previous
> +# optional define in machine .mk)
> +# - define CROSS variable (overriden by cmdline or previous define
> +# in machine .mk)
> +# - define CPU_CFLAGS variable (overriden by cmdline or previous
> +# define in machine .mk)
> +# - define CPU_LDFLAGS variable (overriden by cmdline or previous
> +# define in machine .mk)
> +# - define CPU_ASFLAGS variable (overriden by cmdline or previous
> +# define in machine .mk)
> +# - may override any previously defined variable
> +#
> +# examples for CONFIG_RTE_ARCH: i686, x86_64, x86_64_32
> +#
> +
> +ARCH ?= arm
> +# common arch dir in eal headers
> +ARCH_DIR := arm
> +CROSS ?=
> +
> +CPU_CFLAGS += -flax-vector-conversions
https://gcc.gnu.org/onlinedocs/gcc/C-Dialect-Options.html
-flax-vector-conversions
Allow implicit conversions between vectors with differing numbers of elements and/or incompatible element types.
This option should not be used for new code.
I have no idea what this option is about... is it necessary?
> +CPU_LDFLAGS ?=
> +CPU_ASFLAGS ?=
We do:
+CPU_CFLAGS ?= -marm -DRTE_CACHE_LINE_SIZE=64 -munaligned-access
+CPU_LDFLAGS ?=
+CPU_ASFLAGS ?= -felf
The -marm can probably be omitted, as the machine-specific flags
override it anyway.
How is the cache line size specified for other platforms? Is it OK to go
this way?
https://gcc.gnu.org/onlinedocs/gcc-4.8.1/gcc/ARM-Options.html#ARM-Options
-munaligned-access
-mno-unaligned-access
Enables (or disables) reading and writing of 16- and 32- bit values from addresses
that are not 16- or 32- bit aligned. By default unaligned access is disabled for all
pre-ARMv6 and all ARMv6-M architectures, and enabled for all other architectures. If
unaligned access is not enabled then words in packed data structures will be accessed
a byte at a time.
The ARM attribute Tag_CPU_unaligned_access will be set in the generated object file to
either true or false, depending upon the setting of this option. If unaligned access is
enabled then the preprocessor symbol __ARM_FEATURE_UNALIGNED will also be defined.
> +
> +export ARCH CROSS CPU_CFLAGS CPU_LDFLAGS CPU_ASFLAGS
> diff --git a/mk/machine/armv7-a/rte.vars.mk b/mk/machine/armv7-a/rte.vars.mk
> new file mode 100644
> index 0000000..feeebfa
> --- /dev/null
> +++ b/mk/machine/armv7-a/rte.vars.mk
> @@ -0,0 +1,63 @@
> +# BSD LICENSE
> +#
> +# Copyright(c) 2015 Intel Corporation. All rights reserved.
> +#
> [...]
> +
> +#
> +# machine:
> +#
> +# - can define ARCH variable (overriden by cmdline value)
> +# - can define CROSS variable (overriden by cmdline value)
> +# - define MACHINE_CFLAGS variable (overriden by cmdline value)
> +# - define MACHINE_LDFLAGS variable (overriden by cmdline value)
> +# - define MACHINE_ASFLAGS variable (overriden by cmdline value)
> +# - can define CPU_CFLAGS variable (overriden by cmdline value) that
> +# overrides the one defined in arch.
> +# - can define CPU_LDFLAGS variable (overriden by cmdline value) that
> +# overrides the one defined in arch.
> +# - can define CPU_ASFLAGS variable (overriden by cmdline value) that
> +# overrides the one defined in arch.
> +# - may override any previously defined variable
> +#
> +
> +# ARCH =
> +# CROSS =
> +# MACHINE_CFLAGS =
> +# MACHINE_LDFLAGS =
> +# MACHINE_ASFLAGS =
> +# CPU_CFLAGS =
> +# CPU_LDFLAGS =
> +# CPU_ASFLAGS =
> +
> +MACHINE_CFLAGS += -march=armv7-a
We do:
+CPU_CFLAGS += -mfloat-abi=softfp
+
+MACHINE_CFLAGS = -march=armv7-a -mtune=cortex-a9
+MACHINE_CFLAGS += -mfpu=neon
This is quite specific. How should we deal with those flags? Should we have a
specific config for Cortex-A7, A9, A15, ...? What about NEON?
> +
> +#Tried this for gdb - did not work - Dwarf version error seen
> +# we need a gdb that can understand dwarf version 4
> +#CPU_CFLAGS += -g -gdwarf-2 -gstrict-dwarf
> +#CPU_CFLAGS := $(filter-out -O3,$(CPU_CFLAGS))
> +#CPU_CFLAGS := $(filter-out -O2,$(CPU_CFLAGS))
I can see those lines are commented out. However, I am unsure about them;
debugging should not be enabled by default, should it? Is there a way
in DPDK to enable debugging at the compiler level?
I do not understand what the goal of the -O2/-O3 filter-out lines is...
> diff --git a/mk/toolchain/gcc/rte.vars.mk b/mk/toolchain/gcc/rte.vars.mk
> index 0f51c66..501fce5 100644
> --- a/mk/toolchain/gcc/rte.vars.mk
> +++ b/mk/toolchain/gcc/rte.vars.mk
> @@ -1,7 +1,6 @@
> # BSD LICENSE
> #
> -# Copyright(c) 2010-2014 Intel Corporation. All rights reserved.
> -# All rights reserved.
> +# Copyright(c) 2010-2015 Intel Corporation. All rights reserved.
> #
> # Redistribution and use in source and binary forms, with or without
> # modification, are permitted provided that the following conditions
> @@ -73,7 +72,10 @@ endif
>
> WERROR_FLAGS := -W -Wall -Werror -Wstrict-prototypes -Wmissing-prototypes
> WERROR_FLAGS += -Wmissing-declarations -Wold-style-definition -Wpointer-arith
> -WERROR_FLAGS += -Wcast-align -Wnested-externs -Wcast-qual
> +ifneq ($(CONFIG_RTE_ARCH), "arm")
> +WERROR_FLAGS += -Wcast-align
> +endif
> +WERROR_FLAGS += -Wnested-externs -Wcast-qual
> WERROR_FLAGS += -Wformat-nonliteral -Wformat-security
> WERROR_FLAGS += -Wundef -Wwrite-strings
>
When you start compiling the libraries, -Wcast-align will break
the build (at least, it does for us).
Here is how we handle it, after a short discussion with Thomas M.:
+# There are many issues reported for ARMv7 architecture
+# which are not necessarily fatal. Report as warnings.
+ifeq ($(CONFIG_RTE_ARCH_ARMv7),y)
+WERROR_FLAGS += -Wno-error
+endif
We should identify the alignment issues and fix them somehow(?).
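To give an idea of what typically triggers it (a made-up example, not a
specific spot in the tree): casting a byte pointer to a wider type, which
-Wcast-align flags because the resulting load may be unaligned on ARM.
The memcpy() form below is warning-free and safe on strict-alignment CPUs:

    #include <stdint.h>
    #include <string.h>

    static inline uint32_t
    read_u32(const uint8_t *buf)
    {
            /* warns with -Wcast-align:
             * return *(const uint32_t *)buf;
             */
            uint32_t v;

            memcpy(&v, buf, sizeof(v)); /* compilers turn this into a plain load */
            return v;
    }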
--
Jan Viktorin E-mail: Viktorin@RehiveTech.com
System Architect Web: www.RehiveTech.com
RehiveTech
Brno, Czech Republic
^ permalink raw reply [relevance 2%]
* Re: [dpdk-dev] [PATCH v2 0/4]librte_table: add name parameter to lpm table
2015-09-17 16:13 0% ` [dpdk-dev] [PATCH v2 0/4]librte_table: add name parameter to lpm table Dumitrescu, Cristian
@ 2015-10-12 14:06 0% ` Thomas Monjalon
0 siblings, 0 replies; 200+ results
From: Thomas Monjalon @ 2015-10-12 14:06 UTC (permalink / raw)
To: Singh, Jasvinder; +Cc: dev
2015-09-17 16:13, Dumitrescu, Cristian:
> From: Singh, Jasvinder
> > This patchset links to ABI change announced for librte_table. For
> > the LPM table, a name parameter has been included in the LPM table parameters
> > structure. It will eventually allow applications to create more
> > than one instance of an LPM table, if required.
> >
> > Changes in v2:
> > - rte_table_lpm_ipv6.c: removed name variable from
> > rte_zmalloc_socket() and inserted it into rte_lpm6_create().
>
> Acked-by: Cristian Dumitrescu <cristian.dumitrescu@intel.com>
Applied, thanks
^ permalink raw reply [relevance 0%]
* Re: [dpdk-dev] [PATCH 8/8] librte_table: modify release notes and deprecation notice
2015-09-25 22:33 5% ` [dpdk-dev] [PATCH 8/8] librte_table: modify release notes and deprecation notice roy.fan.zhang
@ 2015-10-12 14:24 0% ` Thomas Monjalon
0 siblings, 0 replies; 200+ results
From: Thomas Monjalon @ 2015-10-12 14:24 UTC (permalink / raw)
To: roy.fan.zhang; +Cc: dev
Hi and welcome,
(it seems to be your first patch on DPDK)
2015-09-25 23:33, roy.fan.zhang@intel.com:
> --- a/doc/guides/rel_notes/release_2_2.rst
> +++ b/doc/guides/rel_notes/release_2_2.rst
> @@ -95,6 +95,9 @@ ABI Changes
>
> * The LPM structure is changed. The deprecated field mem_location is removed.
>
> +* Key mask parameter is added to the hash table parameter structure for
> + 8-byte key and 16-byte key extendible bucket and LRU tables. The
> + deprecated field mem_location is removed.
It seems the last sentence is a wrong copy/paste.
If a v2 is needed, please squash the release notes changes with the API changes
(patches 1 and 2).
Thanks
^ permalink raw reply [relevance 0%]
* [dpdk-dev] [PATCH v3 5/5] doc: modify release notes and deprecation notice for table and pipeline
@ 2015-10-13 7:34 5% ` Marcin Kerlin
0 siblings, 0 replies; 200+ results
From: Marcin Kerlin @ 2015-10-13 7:34 UTC (permalink / raw)
To: dev
The LIBABIVER number is incremented for the table and pipeline libraries.
The release notes are updated and the deprecation announcement is removed.
Signed-off-by: Maciej Gajdzica <maciejx.t.gajdzica at intel.com>
Acked-by: Cristian Dumitrescu <cristian.dumitrescu at intel.com>
---
doc/guides/rel_notes/deprecation.rst | 3 ---
doc/guides/rel_notes/release_2_2.rst | 6 ++++--
lib/librte_pipeline/Makefile | 2 +-
lib/librte_pipeline/rte_pipeline_version.map | 8 ++++++++
lib/librte_table/Makefile | 2 +-
5 files changed, 14 insertions(+), 7 deletions(-)
diff --git a/doc/guides/rel_notes/deprecation.rst b/doc/guides/rel_notes/deprecation.rst
index fa55117..2bf2df4 100644
--- a/doc/guides/rel_notes/deprecation.rst
+++ b/doc/guides/rel_notes/deprecation.rst
@@ -53,9 +53,6 @@ Deprecation Notices
* librte_table LPM: A new parameter to hold the table name will be added to
the LPM table parameter structure.
-* librte_table: New functions for table entry bulk add/delete will be added
- to the table operations structure.
-
* librte_table hash: Key mask parameter will be added to the hash table
parameter structure for 8-byte key and 16-byte key extendible bucket and
LRU tables.
diff --git a/doc/guides/rel_notes/release_2_2.rst b/doc/guides/rel_notes/release_2_2.rst
index 5687676..b46d2ae 100644
--- a/doc/guides/rel_notes/release_2_2.rst
+++ b/doc/guides/rel_notes/release_2_2.rst
@@ -98,6 +98,8 @@ ABI Changes
* The LPM structure is changed. The deprecated field mem_location is removed.
+* Added functions add/delete bulk to table and pipeline libraries.
+
Shared Library Versions
-----------------------
@@ -122,7 +124,7 @@ The libraries prepended with a plus sign were incremented in this version.
+ librte_mbuf.so.2
librte_mempool.so.1
librte_meter.so.1
- librte_pipeline.so.1
+ + librte_pipeline.so.2
librte_pmd_bond.so.1
+ librte_pmd_ring.so.2
librte_port.so.1
@@ -130,6 +132,6 @@ The libraries prepended with a plus sign were incremented in this version.
librte_reorder.so.1
librte_ring.so.1
librte_sched.so.1
- librte_table.so.1
+ + librte_table.so.2
librte_timer.so.1
librte_vhost.so.1
diff --git a/lib/librte_pipeline/Makefile b/lib/librte_pipeline/Makefile
index 15e406b..1166d3c 100644
--- a/lib/librte_pipeline/Makefile
+++ b/lib/librte_pipeline/Makefile
@@ -41,7 +41,7 @@ CFLAGS += $(WERROR_FLAGS)
EXPORT_MAP := rte_pipeline_version.map
-LIBABIVER := 1
+LIBABIVER := 2
#
# all source are stored in SRCS-y
diff --git a/lib/librte_pipeline/rte_pipeline_version.map b/lib/librte_pipeline/rte_pipeline_version.map
index 8f25d0f..4cc86f6 100644
--- a/lib/librte_pipeline/rte_pipeline_version.map
+++ b/lib/librte_pipeline/rte_pipeline_version.map
@@ -29,3 +29,11 @@ DPDK_2.1 {
rte_pipeline_table_stats_read;
} DPDK_2.0;
+
+DPDK_2.2 {
+ global:
+
+ rte_pipeline_table_entry_add_bulk;
+ rte_pipeline_table_entry_delete_bulk;
+
+} DPDK_2.1;
diff --git a/lib/librte_table/Makefile b/lib/librte_table/Makefile
index c5b3eaf..7f02af3 100644
--- a/lib/librte_table/Makefile
+++ b/lib/librte_table/Makefile
@@ -41,7 +41,7 @@ CFLAGS += $(WERROR_FLAGS)
EXPORT_MAP := rte_table_version.map
-LIBABIVER := 1
+LIBABIVER := 2
#
# all source are stored in SRCS-y
--
1.9.1
^ permalink raw reply [relevance 5%]
* [dpdk-dev] [PATCH v2 0/8] librte_table: add key_mask parameter to 8-byte key
@ 2015-10-13 13:57 3% ` Jasvinder Singh
2015-10-13 13:57 3% ` [dpdk-dev] [PATCH v2 1/8] librte_table: add key_mask parameter to 8-byte key hash parameters Jasvinder Singh
` (3 more replies)
0 siblings, 4 replies; 200+ results
From: Jasvinder Singh @ 2015-10-13 13:57 UTC (permalink / raw)
To: dev
From: Fan Zhang <roy.fan.zhang@intel.com>
This patchset links to the ABI change announced for librte_table.
A key_mask parameter has been added to the hash table parameter
structure for the 8-byte key and 16-byte key extendible bucket
and LRU tables.
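As a purely illustrative usage sketch (the field values and the hash
function name are hypothetical), an application that wants only the first
5 bytes of an 8-byte key to take part in the lookup would pass something
like:

    #include <rte_table_hash.h>

    /* placeholder hash function, signature per rte_table_hash_op_hash */
    extern uint64_t my_hash_fn(void *key, uint32_t key_size, uint64_t seed);

    uint8_t key_mask[8] = {0xFF, 0xFF, 0xFF, 0xFF, 0xFF, 0x00, 0x00, 0x00};

    struct rte_table_hash_key8_lru_params params = {
            .n_entries = 1 << 16,
            .f_hash = my_hash_fn,
            .seed = 0,
            .signature_offset = 0,
            .key_offset = 32,          /* example meta-data offset */
            .key_mask = key_mask,      /* NULL keeps the previous behaviour */
    };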
v2:
* change in release notes.
Acked-by: Cristian Dumitrescu <cristian.dumitrescu@intel.com>
Fan Zhang (8):
librte_table: add key_mask parameter to 8-byte key hash parameters
librte_table: add key_mask parameter to 16-byte key hash parameters
librte_table: add 16 byte hash table operations with computed lookup
app/test: modify app/test_table_combined and app/test_table_tables
app/test-pipeline: modify pipeline test
example/ip_pipeline: add parse_hex_string for internal use
example/ip_pipeline/pipeline: update flow_classification pipeline
librte_table: modify release notes and deprecation notice
app/test-pipeline/pipeline_hash.c | 4 +
app/test/test_table_combined.c | 4 +
app/test/test_table_tables.c | 6 +-
doc/guides/rel_notes/deprecation.rst | 3 -
doc/guides/rel_notes/release_2_2.rst | 4 +-
examples/ip_pipeline/config_parse.c | 70 ++++
examples/ip_pipeline/pipeline.h | 4 +
.../pipeline/pipeline_flow_classification_be.c | 56 ++-
lib/librte_table/Makefile | 2 +-
lib/librte_table/rte_table_hash.h | 20 +
lib/librte_table/rte_table_hash_key16.c | 411 ++++++++++++++++++++-
lib/librte_table/rte_table_hash_key8.c | 54 ++-
12 files changed, 607 insertions(+), 31 deletions(-)
--
2.1.0
^ permalink raw reply [relevance 3%]
* [dpdk-dev] [PATCH v2 2/8] librte_table: add key_mask parameter to 16-byte key hash parameters
2015-10-13 13:57 3% ` [dpdk-dev] [PATCH v2 0/8] librte_table: add key_mask parameter to 8-byte key Jasvinder Singh
2015-10-13 13:57 3% ` [dpdk-dev] [PATCH v2 1/8] librte_table: add key_mask parameter to 8-byte key hash parameters Jasvinder Singh
@ 2015-10-13 13:57 3% ` Jasvinder Singh
2015-10-13 13:57 5% ` [dpdk-dev] [PATCH v2 8/8] librte_table: modify release notes and deprecation notice Jasvinder Singh
2015-10-21 12:18 3% ` [dpdk-dev] [PATCH v3 0/6] librte_table: add key_mask parameter to hash table parameter structure Jasvinder Singh
3 siblings, 0 replies; 200+ results
From: Jasvinder Singh @ 2015-10-13 13:57 UTC (permalink / raw)
To: dev
From: Fan Zhang <roy.fan.zhang@intel.com>
This patch relates to the ABI change proposed for librte_table. A key_mask
parameter is added for the 16-byte key extendible bucket and LRU tables.
Signed-off-by: Fan Zhang <roy.fan.zhang@intel.com>
---
lib/librte_table/rte_table_hash.h | 6 ++++
lib/librte_table/rte_table_hash_key16.c | 53 ++++++++++++++++++++++++++++-----
2 files changed, 52 insertions(+), 7 deletions(-)
diff --git a/lib/librte_table/rte_table_hash.h b/lib/librte_table/rte_table_hash.h
index ef65355..e2c60e1 100644
--- a/lib/librte_table/rte_table_hash.h
+++ b/lib/librte_table/rte_table_hash.h
@@ -263,6 +263,9 @@ struct rte_table_hash_key16_lru_params {
/** Byte offset within packet meta-data where the key is located */
uint32_t key_offset;
+
+ /** Bit-mask to be AND-ed to the key on lookup */
+ uint8_t *key_mask;
};
/** LRU hash table operations for pre-computed key signature */
@@ -290,6 +293,9 @@ struct rte_table_hash_key16_ext_params {
/** Byte offset within packet meta-data where the key is located */
uint32_t key_offset;
+
+ /** Bit-mask to be AND-ed to the key on lookup */
+ uint8_t *key_mask;
};
/** Extendible bucket operations for pre-computed key signature */
diff --git a/lib/librte_table/rte_table_hash_key16.c b/lib/librte_table/rte_table_hash_key16.c
index f6a3306..ffd3249 100644
--- a/lib/librte_table/rte_table_hash_key16.c
+++ b/lib/librte_table/rte_table_hash_key16.c
@@ -85,6 +85,7 @@ struct rte_table_hash {
uint32_t bucket_size;
uint32_t signature_offset;
uint32_t key_offset;
+ uint64_t key_mask[2];
rte_table_hash_op_hash f_hash;
uint64_t seed;
@@ -164,6 +165,14 @@ rte_table_hash_create_key16_lru(void *params,
f->f_hash = p->f_hash;
f->seed = p->seed;
+ if (p->key_mask != NULL) {
+ f->key_mask[0] = ((uint64_t *)p->key_mask)[0];
+ f->key_mask[1] = ((uint64_t *)p->key_mask)[1];
+ } else {
+ f->key_mask[0] = 0xFFFFFFFFFFFFFFFFLLU;
+ f->key_mask[1] = 0xFFFFFFFFFFFFFFFFLLU;
+ }
+
for (i = 0; i < n_buckets; i++) {
struct rte_bucket_4_16 *bucket;
@@ -384,6 +393,14 @@ rte_table_hash_create_key16_ext(void *params,
for (i = 0; i < n_buckets_ext; i++)
f->stack[i] = i;
+ if (p->key_mask != NULL) {
+ f->key_mask[0] = (((uint64_t *)p->key_mask)[0]);
+ f->key_mask[1] = (((uint64_t *)p->key_mask)[1]);
+ } else {
+ f->key_mask[0] = 0xFFFFFFFFFFFFFFFFLLU;
+ f->key_mask[1] = 0xFFFFFFFFFFFFFFFFLLU;
+ }
+
return f;
}
@@ -609,11 +626,14 @@ rte_table_hash_entry_delete_key16_ext(
void *a; \
uint64_t pkt_mask; \
uint64_t *key; \
+ uint64_t hash_key_buffer[2]; \
uint32_t pos; \
\
key = RTE_MBUF_METADATA_UINT64_PTR(mbuf2, f->key_offset);\
+ hash_key_buffer[0] = key[0] & f->key_mask[0]; \
+ hash_key_buffer[1] = key[1] & f->key_mask[1]; \
\
- lookup_key16_cmp(key, bucket2, pos); \
+ lookup_key16_cmp(hash_key_buffer, bucket2, pos); \
\
pkt_mask = (bucket2->signature[pos] & 1LLU) << pkt2_index;\
pkts_mask_out |= pkt_mask; \
@@ -631,11 +651,14 @@ rte_table_hash_entry_delete_key16_ext(
void *a; \
uint64_t pkt_mask, bucket_mask; \
uint64_t *key; \
+ uint64_t hash_key_buffer[2]; \
uint32_t pos; \
\
key = RTE_MBUF_METADATA_UINT64_PTR(mbuf2, f->key_offset);\
+ hash_key_buffer[0] = key[0] & f->key_mask[0]; \
+ hash_key_buffer[1] = key[1] & f->key_mask[1]; \
\
- lookup_key16_cmp(key, bucket2, pos); \
+ lookup_key16_cmp(hash_key_buffer, bucket2, pos); \
\
pkt_mask = (bucket2->signature[pos] & 1LLU) << pkt2_index;\
pkts_mask_out |= pkt_mask; \
@@ -658,12 +681,15 @@ rte_table_hash_entry_delete_key16_ext(
void *a; \
uint64_t pkt_mask, bucket_mask; \
uint64_t *key; \
+ uint64_t hash_key_buffer[2]; \
uint32_t pos; \
\
bucket = buckets[pkt_index]; \
key = keys[pkt_index]; \
+ hash_key_buffer[0] = key[0] & f->key_mask[0]; \
+ hash_key_buffer[1] = key[1] & f->key_mask[1]; \
\
- lookup_key16_cmp(key, bucket, pos); \
+ lookup_key16_cmp(hash_key_buffer, bucket, pos); \
\
pkt_mask = (bucket->signature[pos] & 1LLU) << pkt_index;\
pkts_mask_out |= pkt_mask; \
@@ -749,13 +775,19 @@ rte_table_hash_entry_delete_key16_ext(
void *a20, *a21; \
uint64_t pkt20_mask, pkt21_mask; \
uint64_t *key20, *key21; \
+ uint64_t hash_key_buffer20[2]; \
+ uint64_t hash_key_buffer21[2]; \
uint32_t pos20, pos21; \
\
key20 = RTE_MBUF_METADATA_UINT64_PTR(mbuf20, f->key_offset);\
key21 = RTE_MBUF_METADATA_UINT64_PTR(mbuf21, f->key_offset);\
+ hash_key_buffer20[0] = key20[0] & f->key_mask[0]; \
+ hash_key_buffer20[1] = key20[1] & f->key_mask[1]; \
+ hash_key_buffer21[0] = key21[0] & f->key_mask[0]; \
+ hash_key_buffer21[1] = key21[1] & f->key_mask[1]; \
\
- lookup_key16_cmp(key20, bucket20, pos20); \
- lookup_key16_cmp(key21, bucket21, pos21); \
+ lookup_key16_cmp(hash_key_buffer20, bucket20, pos20); \
+ lookup_key16_cmp(hash_key_buffer21, bucket21, pos21); \
\
pkt20_mask = (bucket20->signature[pos20] & 1LLU) << pkt20_index;\
pkt21_mask = (bucket21->signature[pos21] & 1LLU) << pkt21_index;\
@@ -778,13 +810,19 @@ rte_table_hash_entry_delete_key16_ext(
void *a20, *a21; \
uint64_t pkt20_mask, pkt21_mask, bucket20_mask, bucket21_mask;\
uint64_t *key20, *key21; \
+ uint64_t hash_key_buffer20[2]; \
+ uint64_t hash_key_buffer21[2]; \
uint32_t pos20, pos21; \
\
key20 = RTE_MBUF_METADATA_UINT64_PTR(mbuf20, f->key_offset);\
key21 = RTE_MBUF_METADATA_UINT64_PTR(mbuf21, f->key_offset);\
+ hash_key_buffer20[0] = key20[0] & f->key_mask[0]; \
+ hash_key_buffer20[1] = key20[1] & f->key_mask[1]; \
+ hash_key_buffer21[0] = key21[0] & f->key_mask[0]; \
+ hash_key_buffer21[1] = key21[1] & f->key_mask[1]; \
\
- lookup_key16_cmp(key20, bucket20, pos20); \
- lookup_key16_cmp(key21, bucket21, pos21); \
+ lookup_key16_cmp(hash_key_buffer20, bucket20, pos20); \
+ lookup_key16_cmp(hash_key_buffer21, bucket21, pos21); \
\
pkt20_mask = (bucket20->signature[pos20] & 1LLU) << pkt20_index;\
pkt21_mask = (bucket21->signature[pos21] & 1LLU) << pkt21_index;\
@@ -1115,3 +1153,4 @@ struct rte_table_ops rte_table_hash_key16_ext_ops = {
.f_lookup = rte_table_hash_lookup_key16_ext,
.f_stats = rte_table_hash_key16_stats_read,
};
+
--
2.1.0
^ permalink raw reply [relevance 3%]
* [dpdk-dev] [PATCH v2 1/8] librte_table: add key_mask parameter to 8-byte key hash parameters
2015-10-13 13:57 3% ` [dpdk-dev] [PATCH v2 0/8] librte_table: add key_mask parameter to 8-byte key Jasvinder Singh
@ 2015-10-13 13:57 3% ` Jasvinder Singh
2015-10-13 13:57 3% ` [dpdk-dev] [PATCH v2 2/8] librte_table: add key_mask parameter to 16-byte " Jasvinder Singh
` (2 subsequent siblings)
3 siblings, 0 replies; 200+ results
From: Jasvinder Singh @ 2015-10-13 13:57 UTC (permalink / raw)
To: dev
From: Fan Zhang <roy.fan.zhang@intel.com>
This patch relates to the ABI change proposed for librte_table. A key_mask
parameter is added for the 8-byte key extendible bucket and LRU tables.
Signed-off-by: Fan Zhang <roy.fan.zhang@intel.com>
---
lib/librte_table/rte_table_hash.h | 6 ++++
lib/librte_table/rte_table_hash_key8.c | 54 +++++++++++++++++++++++++++-------
2 files changed, 50 insertions(+), 10 deletions(-)
diff --git a/lib/librte_table/rte_table_hash.h b/lib/librte_table/rte_table_hash.h
index 9181942..ef65355 100644
--- a/lib/librte_table/rte_table_hash.h
+++ b/lib/librte_table/rte_table_hash.h
@@ -196,6 +196,9 @@ struct rte_table_hash_key8_lru_params {
/** Byte offset within packet meta-data where the key is located */
uint32_t key_offset;
+
+ /** Bit-mask to be AND-ed to the key on lookup */
+ uint8_t *key_mask;
};
/** LRU hash table operations for pre-computed key signature */
@@ -226,6 +229,9 @@ struct rte_table_hash_key8_ext_params {
/** Byte offset within packet meta-data where the key is located */
uint32_t key_offset;
+
+ /** Bit-mask to be AND-ed to the key on lookup */
+ uint8_t *key_mask;
};
/** Extendible bucket hash table operations for pre-computed key signature */
diff --git a/lib/librte_table/rte_table_hash_key8.c b/lib/librte_table/rte_table_hash_key8.c
index b351a49..ccb20cf 100644
--- a/lib/librte_table/rte_table_hash_key8.c
+++ b/lib/librte_table/rte_table_hash_key8.c
@@ -82,6 +82,7 @@ struct rte_table_hash {
uint32_t bucket_size;
uint32_t signature_offset;
uint32_t key_offset;
+ uint64_t key_mask;
rte_table_hash_op_hash f_hash;
uint64_t seed;
@@ -160,6 +161,11 @@ rte_table_hash_create_key8_lru(void *params, int socket_id, uint32_t entry_size)
f->f_hash = p->f_hash;
f->seed = p->seed;
+ if (p->key_mask != NULL)
+ f->key_mask = ((uint64_t *)p->key_mask)[0];
+ else
+ f->key_mask = 0xFFFFFFFFFFFFFFFFLLU;
+
for (i = 0; i < n_buckets; i++) {
struct rte_bucket_4_8 *bucket;
@@ -372,6 +378,11 @@ rte_table_hash_create_key8_ext(void *params, int socket_id, uint32_t entry_size)
f->stack = (uint32_t *)
&f->memory[(n_buckets + n_buckets_ext) * f->bucket_size];
+ if (p->key_mask != NULL)
+ f->key_mask = ((uint64_t *)p->key_mask)[0];
+ else
+ f->key_mask = 0xFFFFFFFFFFFFFFFFLLU;
+
for (i = 0; i < n_buckets_ext; i++)
f->stack[i] = i;
@@ -586,9 +597,12 @@ rte_table_hash_entry_delete_key8_ext(
uint64_t *key; \
uint64_t signature; \
uint32_t bucket_index; \
+ uint64_t hash_key_buffer; \
\
key = RTE_MBUF_METADATA_UINT64_PTR(mbuf1, f->key_offset);\
- signature = f->f_hash(key, RTE_TABLE_HASH_KEY_SIZE, f->seed);\
+ hash_key_buffer = *key & f->key_mask; \
+ signature = f->f_hash(&hash_key_buffer, \
+ RTE_TABLE_HASH_KEY_SIZE, f->seed); \
bucket_index = signature & (f->n_buckets - 1); \
bucket1 = (struct rte_bucket_4_8 *) \
&f->memory[bucket_index * f->bucket_size]; \
@@ -602,10 +616,12 @@ rte_table_hash_entry_delete_key8_ext(
uint64_t pkt_mask; \
uint64_t *key; \
uint32_t pos; \
+ uint64_t hash_key_buffer; \
\
key = RTE_MBUF_METADATA_UINT64_PTR(mbuf2, f->key_offset);\
+ hash_key_buffer = key[0] & f->key_mask; \
\
- lookup_key8_cmp(key, bucket2, pos); \
+ lookup_key8_cmp((&hash_key_buffer), bucket2, pos); \
\
pkt_mask = ((bucket2->signature >> pos) & 1LLU) << pkt2_index;\
pkts_mask_out |= pkt_mask; \
@@ -624,10 +640,12 @@ rte_table_hash_entry_delete_key8_ext(
uint64_t pkt_mask, bucket_mask; \
uint64_t *key; \
uint32_t pos; \
+ uint64_t hash_key_buffer; \
\
key = RTE_MBUF_METADATA_UINT64_PTR(mbuf2, f->key_offset);\
+ hash_key_buffer = *key & f->key_mask; \
\
- lookup_key8_cmp(key, bucket2, pos); \
+ lookup_key8_cmp((&hash_key_buffer), bucket2, pos); \
\
pkt_mask = ((bucket2->signature >> pos) & 1LLU) << pkt2_index;\
pkts_mask_out |= pkt_mask; \
@@ -651,11 +669,13 @@ rte_table_hash_entry_delete_key8_ext(
uint64_t pkt_mask, bucket_mask; \
uint64_t *key; \
uint32_t pos; \
+ uint64_t hash_key_buffer; \
\
bucket = buckets[pkt_index]; \
key = keys[pkt_index]; \
+ hash_key_buffer = (*key) & f->key_mask; \
\
- lookup_key8_cmp(key, bucket, pos); \
+ lookup_key8_cmp((&hash_key_buffer), bucket, pos); \
\
pkt_mask = ((bucket->signature >> pos) & 1LLU) << pkt_index;\
pkts_mask_out |= pkt_mask; \
@@ -736,6 +756,8 @@ rte_table_hash_entry_delete_key8_ext(
#define lookup2_stage1_dosig(mbuf10, mbuf11, bucket10, bucket11, f)\
{ \
uint64_t *key10, *key11; \
+ uint64_t hash_offset_buffer10; \
+ uint64_t hash_offset_buffer11; \
uint64_t signature10, signature11; \
uint32_t bucket10_index, bucket11_index; \
rte_table_hash_op_hash f_hash = f->f_hash; \
@@ -744,14 +766,18 @@ rte_table_hash_entry_delete_key8_ext(
\
key10 = RTE_MBUF_METADATA_UINT64_PTR(mbuf10, key_offset);\
key11 = RTE_MBUF_METADATA_UINT64_PTR(mbuf11, key_offset);\
+ hash_offset_buffer10 = *key10 & f->key_mask; \
+ hash_offset_buffer11 = *key11 & f->key_mask; \
\
- signature10 = f_hash(key10, RTE_TABLE_HASH_KEY_SIZE, seed);\
+ signature10 = f_hash(&hash_offset_buffer10, \
+ RTE_TABLE_HASH_KEY_SIZE, seed); \
bucket10_index = signature10 & (f->n_buckets - 1); \
bucket10 = (struct rte_bucket_4_8 *) \
&f->memory[bucket10_index * f->bucket_size]; \
rte_prefetch0(bucket10); \
\
- signature11 = f_hash(key11, RTE_TABLE_HASH_KEY_SIZE, seed);\
+ signature11 = f_hash(&hash_offset_buffer11, \
+ RTE_TABLE_HASH_KEY_SIZE, seed); \
bucket11_index = signature11 & (f->n_buckets - 1); \
bucket11 = (struct rte_bucket_4_8 *) \
&f->memory[bucket11_index * f->bucket_size]; \
@@ -764,13 +790,17 @@ rte_table_hash_entry_delete_key8_ext(
void *a20, *a21; \
uint64_t pkt20_mask, pkt21_mask; \
uint64_t *key20, *key21; \
+ uint64_t hash_offset_buffer20; \
+ uint64_t hash_offset_buffer21; \
uint32_t pos20, pos21; \
\
key20 = RTE_MBUF_METADATA_UINT64_PTR(mbuf20, f->key_offset);\
key21 = RTE_MBUF_METADATA_UINT64_PTR(mbuf21, f->key_offset);\
+ hash_offset_buffer20 = *key20 & f->key_mask; \
+ hash_offset_buffer21 = *key21 & f->key_mask; \
\
- lookup_key8_cmp(key20, bucket20, pos20); \
- lookup_key8_cmp(key21, bucket21, pos21); \
+ lookup_key8_cmp((&hash_offset_buffer20), bucket20, pos20);\
+ lookup_key8_cmp((&hash_offset_buffer21), bucket21, pos21);\
\
pkt20_mask = ((bucket20->signature >> pos20) & 1LLU) << pkt20_index;\
pkt21_mask = ((bucket21->signature >> pos21) & 1LLU) << pkt21_index;\
@@ -793,13 +823,17 @@ rte_table_hash_entry_delete_key8_ext(
void *a20, *a21; \
uint64_t pkt20_mask, pkt21_mask, bucket20_mask, bucket21_mask;\
uint64_t *key20, *key21; \
+ uint64_t hash_offset_buffer20; \
+ uint64_t hash_offset_buffer21; \
uint32_t pos20, pos21; \
\
key20 = RTE_MBUF_METADATA_UINT64_PTR(mbuf20, f->key_offset);\
key21 = RTE_MBUF_METADATA_UINT64_PTR(mbuf21, f->key_offset);\
+ hash_offset_buffer20 = *key20 & f->key_mask; \
+ hash_offset_buffer21 = *key21 & f->key_mask; \
\
- lookup_key8_cmp(key20, bucket20, pos20); \
- lookup_key8_cmp(key21, bucket21, pos21); \
+ lookup_key8_cmp((&hash_offset_buffer20), bucket20, pos20);\
+ lookup_key8_cmp((&hash_offset_buffer21), bucket21, pos21);\
\
pkt20_mask = ((bucket20->signature >> pos20) & 1LLU) << pkt20_index;\
pkt21_mask = ((bucket21->signature >> pos21) & 1LLU) << pkt21_index;\
--
2.1.0
^ permalink raw reply [relevance 3%]
* [dpdk-dev] [PATCH v2 8/8] librte_table: modify release notes and deprecation notice
2015-10-13 13:57 3% ` [dpdk-dev] [PATCH v2 0/8] librte_table: add key_mask parameter to 8-byte key Jasvinder Singh
2015-10-13 13:57 3% ` [dpdk-dev] [PATCH v2 1/8] librte_table: add key_mask parameter to 8-byte key hash parameters Jasvinder Singh
2015-10-13 13:57 3% ` [dpdk-dev] [PATCH v2 2/8] librte_table: add key_mask parameter to 16-byte " Jasvinder Singh
@ 2015-10-13 13:57 5% ` Jasvinder Singh
2015-10-21 12:18 3% ` [dpdk-dev] [PATCH v3 0/6] librte_table: add key_mask parameter to hash table parameter structure Jasvinder Singh
3 siblings, 0 replies; 200+ results
From: Jasvinder Singh @ 2015-10-13 13:57 UTC (permalink / raw)
To: dev
From: Fan Zhang <roy.fan.zhang@intel.com>
The LIBABIVER number is incremented. The release notes are
updated and the deprecation announcement is removed.
Signed-off-by: Fan Zhang <roy.fan.zhang@intel.com>
---
doc/guides/rel_notes/deprecation.rst | 3 ---
doc/guides/rel_notes/release_2_2.rst | 4 +++-
lib/librte_table/Makefile | 2 +-
3 files changed, 4 insertions(+), 5 deletions(-)
diff --git a/doc/guides/rel_notes/deprecation.rst b/doc/guides/rel_notes/deprecation.rst
index fa55117..06e0078 100644
--- a/doc/guides/rel_notes/deprecation.rst
+++ b/doc/guides/rel_notes/deprecation.rst
@@ -56,9 +56,6 @@ Deprecation Notices
* librte_table: New functions for table entry bulk add/delete will be added
to the table operations structure.
-* librte_table hash: Key mask parameter will be added to the hash table
- parameter structure for 8-byte key and 16-byte key extendible bucket and
- LRU tables.
* librte_pipeline: The prototype for the pipeline input port, output port
and table action handlers will be updated:
diff --git a/doc/guides/rel_notes/release_2_2.rst b/doc/guides/rel_notes/release_2_2.rst
index 5687676..30197ec 100644
--- a/doc/guides/rel_notes/release_2_2.rst
+++ b/doc/guides/rel_notes/release_2_2.rst
@@ -98,6 +98,8 @@ ABI Changes
* The LPM structure is changed. The deprecated field mem_location is removed.
+* Key mask parameter is added to the hash table parameter structure for
+ 8-byte key and 16-byte key extendible bucket and LRU tables.
Shared Library Versions
-----------------------
@@ -130,6 +132,6 @@ The libraries prepended with a plus sign were incremented in this version.
librte_reorder.so.1
librte_ring.so.1
librte_sched.so.1
- librte_table.so.1
+ librte_table.so.2
librte_timer.so.1
librte_vhost.so.1
diff --git a/lib/librte_table/Makefile b/lib/librte_table/Makefile
index c5b3eaf..7f02af3 100644
--- a/lib/librte_table/Makefile
+++ b/lib/librte_table/Makefile
@@ -41,7 +41,7 @@ CFLAGS += $(WERROR_FLAGS)
EXPORT_MAP := rte_table_version.map
-LIBABIVER := 1
+LIBABIVER := 2
#
# all source are stored in SRCS-y
--
2.1.0
^ permalink raw reply [relevance 5%]
* Re: [dpdk-dev] [PATCH 2/4] rte_ring: store memzone pointer inside ring
2015-09-30 12:12 3% ` [dpdk-dev] [PATCH 2/4] rte_ring: store memzone pointer inside ring Bruce Richardson
@ 2015-10-13 14:29 0% ` Olivier MATZ
0 siblings, 0 replies; 200+ results
From: Olivier MATZ @ 2015-10-13 14:29 UTC (permalink / raw)
To: Bruce Richardson, dev
Hi Bruce,
On 09/30/2015 02:12 PM, Bruce Richardson wrote:
> Add a new field to the rte_ring structure to store the memzone pointer which
> contains the ring. For rings created using rte_ring_create(), the field will
> be set automatically.
>
> This new field will allow users of the ring to query the numa node a ring is
> allocated on, or to get the physical address of the ring, if so needed.
>
> The rte_ring structure will also maintain ABI compatibility, as the
> structure members, after the new one, are set to be cache line aligned,
> so leaving a space.
>
> Signed-off-by: Bruce Richardson <bruce.richardson@intel.com>
Acked-by: Olivier Matz <olivier.matz@6wind.com>
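For illustration, this is the sort of thing it enables (a sketch,
assuming the new field is named "memzone" as in the patch):

    #include <rte_ring.h>
    #include <rte_memzone.h>
    #include <rte_lcore.h>

    static int
    ring_socket_id(const char *name)
    {
            struct rte_ring *r = rte_ring_create(name, 1024, rte_socket_id(), 0);

            if (r == NULL || r->memzone == NULL)
                    return -1;
            /* NUMA node the ring memory was allocated on;
             * r->memzone->phys_addr would give the physical address */
            return r->memzone->socket_id;
    }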
^ permalink raw reply [relevance 0%]
* Re: [dpdk-dev] [PATCH] crc: deinline crc functions
2015-10-01 23:37 1% [dpdk-dev] [PATCH] crc: deinline crc functions Stephen Hemminger
@ 2015-10-13 16:04 0% ` Richardson, Bruce
0 siblings, 0 replies; 200+ results
From: Richardson, Bruce @ 2015-10-13 16:04 UTC (permalink / raw)
To: Stephen Hemminger, De Lara Guarch, Pablo; +Cc: dev
> -----Original Message-----
> From: Stephen Hemminger [mailto:stephen@networkplumber.org]
> Sent: Friday, October 2, 2015 12:38 AM
> To: Richardson, Bruce; De Lara Guarch, Pablo
> Cc: dev@dpdk.org; Stephen Hemminger
> Subject: [PATCH] crc: deinline crc functions
>
> Because the CRC functions are inline and defined purely in the header
> file, every component that uses these functions gets its own copy
> of the software CRC table, which is a big space waster.
> Just de-inline them, which gives better long-term ABI stability anyway.
> Just deinline which give better long term ABI stablity anyway.
>
> Signed-off-by: Stephen Hemminger <stephen@networkplumber.org>
While I think it's a good idea to de-inline the functions that
do the calculations using the lookup tables, I think the
functions that consist of a single assembly instruction should be
kept as inline.
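For reference, a sketch of the kind of function that should stay inline:
the SSE4.2 path is a single instruction, so only the table-driven software
fallback carries the per-component table cost.

    #include <stdint.h>
    #include <nmmintrin.h> /* SSE4.2 */

    /* hardware CRC32-C, one instruction -- keeping this inline costs nothing */
    static inline uint32_t
    crc32c_u32_hw(uint32_t data, uint32_t init_val)
    {
            return _mm_crc32_u32(init_val, data);
    }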
/Bruce
^ permalink raw reply [relevance 0%]
* Re: [dpdk-dev] [dpdk-dev, PATCHv5, 1/8] ethdev: add new API to retrieve RX/TX queue information
@ 2015-10-14 11:49 4% ` Ananyev, Konstantin
0 siblings, 0 replies; 200+ results
From: Ananyev, Konstantin @ 2015-10-14 11:49 UTC (permalink / raw)
To: Amine Kherbouche, dev
Hi Amine,
> -----Original Message-----
> From: Amine Kherbouche [mailto:amine.kherbouche@6wind.com]
> Sent: Wednesday, October 14, 2015 12:40 PM
> To: Ananyev, Konstantin; dev@dpdk.org
> Subject: Re: [dpdk-dev, PATCHv5, 1/8] ethdev: add new API to retrieve RX/TX queue information
>
>
>
> Hi Konstantin
> > +/**
> > + * Ethernet device RX queue information structure.
> > + * Used to retrieve information about a configured queue.
> > + */
> > +struct rte_eth_rxq_info {
> > + struct rte_mempool *mp; /**< mempool used by that queue. */
> > + struct rte_eth_rxconf conf; /**< queue config parameters. */
> > + uint8_t scattered_rx; /**< scattered packets RX supported. */
> > + uint16_t nb_desc; /**< configured number of RXDs. */
> Here I need two more fields in this struct:
> uint16_t free_desc: the number of free queue descriptors
> uint16_t used_desc: the number of used queue descriptors
> > +} __rte_cache_aligned;
Yep, that's a good thing to have.
But extra work is required to fill them properly, so my thought was to leave it for future expansion.
> > +
> > +/**
> > + * Ethernet device TX queue information structure.
> > + * Used to retrieve information about a configured queue.
> > + */
> > +struct rte_eth_txq_info {
> > + struct rte_eth_txconf conf; /**< queue config parameters. */
> > + uint16_t nb_desc; /**< configured number of TXDs. */
> And also here.
> > +} __rte_cache_aligned;
> > +
> > struct rte_eth_dev;
> How can we add them without breaking the API?
As I said in the patch comments:
"left extra free space in the queue info structures,
so extra fields could be added later without ABI breakage."
As you can see, both queue info structures are 64B in size.
So there is plenty of space for expansion without ABI breakage.
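(For what it's worth, a compile-time guard is a cheap way to keep that
invariant visible; just a sketch:)

    #include <rte_common.h>
    #include <rte_memory.h>
    #include <rte_ethdev.h>

    static inline void
    queue_info_abi_check(void)
    {
            /* both structs are __rte_cache_aligned, so new fields can live in
             * the existing padding without changing the structure size */
            RTE_BUILD_BUG_ON(sizeof(struct rte_eth_rxq_info) > RTE_CACHE_LINE_SIZE);
            RTE_BUILD_BUG_ON(sizeof(struct rte_eth_txq_info) > RTE_CACHE_LINE_SIZE);
    }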
> I would prefer to see them now, so
> I'll send an update on your patch series in which I'll use these two extra
> fields. The purpose will be to provide analysis of the usage of the RX
> and TX queues.
Well, in general I am not opposed to adding these fields to the structures.
In fact, I think it is a good thing to have.
But if we add them without making queue_info_get() fill them properly,
it might create a lot of controversy.
So unless you are prepared to make the changes in all the queue_info_get() implementations too,
my opinion is: let's leave it as it is for 2.2 and add the new fields in later releases.
As I said above - ABI wouldn't be affected.
Konstantin
^ permalink raw reply [relevance 4%]
* Re: [dpdk-dev] [PATCH] ixgbe: prefetch packet headers in vector PMD receive function
@ 2015-10-15 15:43 3% ` Ananyev, Konstantin
2015-10-19 16:30 0% ` Zoltan Kiss
0 siblings, 1 reply; 200+ results
From: Ananyev, Konstantin @ 2015-10-15 15:43 UTC (permalink / raw)
To: Zoltan Kiss, Richardson, Bruce, dev
> -----Original Message-----
> From: Zoltan Kiss [mailto:zoltan.kiss@linaro.org]
> Sent: Thursday, October 15, 2015 11:33 AM
> To: Ananyev, Konstantin; Richardson, Bruce; dev@dpdk.org
> Subject: Re: [PATCH] ixgbe: prefetch packet headers in vector PMD receive function
>
>
>
> On 15/10/15 00:23, Ananyev, Konstantin wrote:
> >
> >
> >> -----Original Message-----
> >> From: Zoltan Kiss [mailto:zoltan.kiss@linaro.org]
> >> Sent: Wednesday, October 14, 2015 5:11 PM
> >> To: Ananyev, Konstantin; Richardson, Bruce; dev@dpdk.org
> >> Subject: Re: [PATCH] ixgbe: prefetch packet headers in vector PMD receive function
> >>
> >>
> >>
> >> On 28/09/15 00:19, Ananyev, Konstantin wrote:
> >>>
> >>>
> >>>> -----Original Message-----
> >>>> From: Zoltan Kiss [mailto:zoltan.kiss@linaro.org]
> >>>> Sent: Friday, September 25, 2015 7:29 PM
> >>>> To: Richardson, Bruce; dev@dpdk.org
> >>>> Cc: Ananyev, Konstantin
> >>>> Subject: Re: [PATCH] ixgbe: prefetch packet headers in vector PMD receive function
> >>>>
> >>>> On 07/09/15 07:41, Richardson, Bruce wrote:
> >>>>>
> >>>>>> -----Original Message-----
> >>>>>> From: Zoltan Kiss [mailto:zoltan.kiss@linaro.org]
> >>>>>> Sent: Monday, September 7, 2015 3:15 PM
> >>>>>> To: Richardson, Bruce; dev@dpdk.org
> >>>>>> Cc: Ananyev, Konstantin
> >>>>>> Subject: Re: [PATCH] ixgbe: prefetch packet headers in vector PMD receive
> >>>>>> function
> >>>>>>
> >>>>>>
> >>>>>>
> >>>>>> On 07/09/15 13:57, Richardson, Bruce wrote:
> >>>>>>>
> >>>>>>>> -----Original Message-----
> >>>>>>>> From: Zoltan Kiss [mailto:zoltan.kiss@linaro.org]
> >>>>>>>> Sent: Monday, September 7, 2015 1:26 PM
> >>>>>>>> To: dev@dpdk.org
> >>>>>>>> Cc: Ananyev, Konstantin; Richardson, Bruce
> >>>>>>>> Subject: Re: [PATCH] ixgbe: prefetch packet headers in vector PMD
> >>>>>>>> receive function
> >>>>>>>>
> >>>>>>>> Hi,
> >>>>>>>>
> >>>>>>>> I just realized I've missed the "[PATCH]" tag from the subject. Did
> >>>>>>>> anyone had time to review this?
> >>>>>>>>
> >>>>>>> Hi Zoltan,
> >>>>>>>
> >>>>>>> the big thing that concerns me with this is the addition of new
> >>>>>>> instructions for each packet in the fast path. Ideally, this
> >>>>>>> prefetching would be better handled in the application itself, as for
> >>>>>>> some apps, e.g. those using pipelining, the core doing the RX from the
> >>>>>>> NIC may not touch the packet data at all, and the prefetches will
> >>>>>> instead cause a performance slowdown.
> >>>>>>> Is it possible to get the same performance increase - or something
> >>>>>>> close to it - by making changes in OVS?
> >>>>>> OVS already does a prefetch when it's processing the previous packet, but
> >>>>>> apparently it's not early enough. At least for my test scenario, where I'm
> >>>>>> forwarding UDP packets with the least possible overhead. I guess in tests
> >>>>>> where OVS does more complex processing it should be fine.
> >>>>>> I'll try to move the prefetch earlier in OVS codebase, but I'm not sure if
> >>>>>> it'll help.
> >>>>> I would suggest trying to prefetch more than one packet ahead. Prefetching 4 or
> >>>>> 8 ahead might work better, depending on the processing being done.
> >>>>
> >>>> I've moved the prefetch earlier, and it seems to work:
> >>>>
> >>>> https://patchwork.ozlabs.org/patch/519017/
> >>>>
> >>>> However it raises the question: should we remove header prefetch from
> >>>> all the other drivers, and the CONFIG_RTE_PMD_PACKET_PREFETCH option?
> >>>
> >>> My vote would be for that.
> >>> Konstantin
> >>
> >> After some further thinking I would rather support the
> >> rte_packet_prefetch() macro (prefetch the header in the driver, and
> >> configure it through CONFIG_RTE_PMD_PACKET_PREFETCH)
> >>
> >> - the prefetch can happen earlier, so if an application needs the header
> >> right away, this is the fastest
> >> - if the application doesn't need header prefetch, it can turn it off
> >> compile time. Same as if we wouldn't have this option.
> >> - if the application has mixed needs (sometimes it needs the header
> >> right away, sometimes it doesn't), it can turn it off and do what it
> >> needs. Same as if we wouldn't have this option.
> >>
> >> A harder decision would be whether it should be turned on or off by
> >> default. Currently it's on, and half of the receive functions don't use it.
> >
> > Yep, it is sort of a mess right now.
> > Another question if we'd like to keep it and standardise it:
> > at what moment to prefetch: as soon as we realize that HW is done with that buffer,
> > or as late inside rx_burst() as possible?
> > I suppose there is no clear answer for that.
> I think if the application needs the driver to start the prefetching, it
> does it because it's already too late when rte_eth_rx_burst() returns.
> So I think it's best to do it as soon as we learn that the HW is done.
Probably, but there might be situations where it would be more plausible to do it later too.
Again, it might depend on the particular HW and your memory access pattern.
>
> > That's why my thought was to just get rid of it.
> > Though if it would be implemented in some standardized way and disabled by default -
> > that's probably ok too.
> > BTW, couldn't that be implemented just via rte_ethdev rx callbacks mechanism?
> > Then we can have the code one place and don't need compile option at all -
> > could be ebabled/disabled dynamically on a per nic basis.
> > Or would it be too high overhead for that?
>
> I think if we go that way, it's better to pass an option to
> rte_eth_rx_burst() and tell if you want the header prefetched by the
> driver. That would be the most flexible.
That would mean ABI breakage for rx_burst()...
A new parameter in rx_conf, or something similar, would probably be better then.
Though it still means that each PMD has to implement it on its own.
And again, there might be PMDs that would just ignore that parameter.
Whereas with a callback it would all be in one known place.
But again, I didn't measure it, so I am not sure what the impact of the callback itself would be.
It might not be worth it.
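Something along these lines is what I mean by the callback option
(a sketch only, using the existing rte_eth_add_rx_callback() mechanism;
not measured):

    #include <rte_ethdev.h>
    #include <rte_mbuf.h>
    #include <rte_prefetch.h>

    /* RX callback that prefetches packet headers, enabled per port/queue */
    static uint16_t
    prefetch_cb(uint8_t port __rte_unused, uint16_t queue __rte_unused,
            struct rte_mbuf *pkts[], uint16_t nb_pkts,
            uint16_t max_pkts __rte_unused, void *user_param __rte_unused)
    {
            uint16_t i;

            for (i = 0; i < nb_pkts; i++)
                    rte_prefetch0(rte_pktmbuf_mtod(pkts[i], void *));
            return nb_pkts;
    }

    static int
    enable_hdr_prefetch(uint8_t port_id, uint16_t queue_id)
    {
            return rte_eth_add_rx_callback(port_id, queue_id,
                            prefetch_cb, NULL) == NULL ? -1 : 0;
    }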
Konstantin
>
> > Konstantin
> >
> >
> >
> >>
> >>>
> >>>
> >>>>
> >>>>>
> >>>>> /Bruce
> >>>
^ permalink raw reply [relevance 3%]
* [dpdk-dev] [PATCH] doc: add contributors guide
@ 2015-10-15 16:51 3% ` John McNamara
2015-10-15 21:36 0% ` Thomas Monjalon
2015-10-20 11:03 2% ` [dpdk-dev] [PATCH v2] " John McNamara
2015-10-23 10:18 2% ` [dpdk-dev] [PATCH v3] " John McNamara
2 siblings, 1 reply; 200+ results
From: John McNamara @ 2015-10-15 16:51 UTC (permalink / raw)
To: dev
Add a document to explain the DPDK patch submission and review process.
Signed-off-by: John McNamara <john.mcnamara@intel.com>
---
doc/guides/contributing/index.rst | 1 +
doc/guides/contributing/patches.rst | 309 ++++++++++++++++++++++++++++++++++++
2 files changed, 310 insertions(+)
create mode 100644 doc/guides/contributing/patches.rst
diff --git a/doc/guides/contributing/index.rst b/doc/guides/contributing/index.rst
index 561427b..f49ca88 100644
--- a/doc/guides/contributing/index.rst
+++ b/doc/guides/contributing/index.rst
@@ -9,3 +9,4 @@ Contributor's Guidelines
design
versioning
documentation
+ patches
diff --git a/doc/guides/contributing/patches.rst b/doc/guides/contributing/patches.rst
new file mode 100644
index 0000000..e5d28d5
--- /dev/null
+++ b/doc/guides/contributing/patches.rst
@@ -0,0 +1,309 @@
+.. submitting_patches:
+
+Contributing Code to DPDK
+=========================
+
+This document outlines the guidelines for submitting code to DPDK.
+
+The DPDK development process is modeled (loosely) on the Linux Kernel development model so it is worth reading the
+Linux kernel guide on submitting patches:
+`How to Get Your Change Into the Linux Kernel <http://www.kernel.org/doc/Documentation/SubmittingPatches>`_.
+The rationale for many of the DPDK guidelines is explained in greater detail in the kernel guidelines.
+
+
+The DPDK Development Process
+-----------------------------
+
+The DPDK development process has the following features:
+
+* The code is hosted in a public git repository.
+* There is a mailing list where developers submit patches.
+* There are maintainers for hierarchical components.
+* Patches are reviewed publicly on the mailing list.
+* Successfully reviewed patches are merged to the master branch of the repository.
+
+The mailing list for DPDK development is `dev@dpdk.org <http://dpdk.org/ml/archives/dev/>`_.
+Contributors will need to `register for the mailing list <http://dpdk.org/ml/listinfo/dev>`_ in order to submit patches.
+It is also worth registering for the DPDK `Patchwork <http://dpdk.org/dev/patchwork/project/dpdk/list/>`_.
+
+There are also DPDK mailing lists for:
+
+* users: `general usage questions <http://dpdk.org/ml/listinfo/users>`_.
+* announce: `release announcements <http://dpdk.org/ml/listinfo/announce>`_ (also forwarded to the dev list).
+* dts: `test suite reviews and discussions <http://dpdk.org/ml/listinfo/dts>`_.
+* test-reports: `test reports <http://dpdk.org/ml/listinfo/test-report>`_ (from continuous integration testing).
+
+The development process requires some familiarity with the ``git`` version control system.
+Refer to the `Pro Git Book <http://www.git-scm.com/book/>`_ for further information.
+
+
+Getting the Source Code
+-----------------------
+
+The source code can be cloned using either of the following::
+
+ git clone git://dpdk.org/dpdk
+
+ git clone http://dpdk.org/git/dpdk
+
+You can also `browse the source code <http://www.dpdk.org/browse/dpdk/tree/>`_ online.
+
+
+Make your Changes
+-----------------
+
+Make your planned changes in the cloned ``dpdk`` repo. Here are some guidelines and requirements:
+
+* Follow the :ref:`coding_style` guidelines.
+
+* If you add new files or directories you should add your name to the ``MAINTAINERS`` file.
+
+* If your changes add new external functions then they should be added to the local ``version.map`` file.
+ See the :doc:`Guidelines for ABI policy and versioning </contributing/versioning>`.
+
+* Most changes will require an addition to the release notes in ``doc/guides/rel_notes/``.
+ See the :ref:`Release Notes section of the Documentation Guidelines <doc_guidelines>` for details.
+
+* Don’t break compilation between commits with forward dependencies.
+ Each commit should compile on its own to allow for ``git bisect`` and continuous integration testing.
+
+* Add tests to the ``app/test`` unit test framework where possible.
+
+* Add documentation, if required, in the form of Doxygen comments or a User Guide in RST format.
+ See the :ref:`Documentation Guidelines <doc_guidelines>`.
+
+Once the changes have been made you should commit them to your local repo.
+The commits should be separated into logical patches in a patchset.
+In general commits should be separated based on their directory such as ``lib``, ``drivers``, ``scripts`` although
+some of these, such as ``drivers`` may require finer grained separation.
+The easiest way of determining this is to do a ``git log`` on changed or similar files.
+
+Example of a logical patchset separation::
+
+ [patch 1/6] ethdev: add support for ieee1588 timestamping
+ [patch 2/6] e1000: add support for ieee1588 timestamping
+ [patch 3/6] ixgbe: add support for ieee1588 timestamping
+ [patch 4/6] i40e: add support for ieee1588 timestamping
+ [patch 5/6] app/testpmd: refactor ieee1588 forwarding
+ [patch 6/6] doc: document ieee1588 forwarding mode
+
+
+The component separation will also be used in the subject line of the commit message, see below.
+The required format of the commit messages is shown in the next sections.
+
+
+Commit Messages: Subject Line
+-----------------------------
+
+The first, summary, line of the git commit message becomes the subject line of the patch email.
+Here are some guidelines for the summary line:
+
+* The summary line should be around 50 characters.
+
+* The summary line should be lowercase.
+
+* It should be prefixed with the component name (use git log to check existing components).
+ For example::
+
+ config: enable same drivers options for linux and bsd
+
+ ixgbe: fix offload config option name
+
+* Use the imperative of the verb (like instructions to the code base).
+ For example::
+
+ ixgbe: fix bug in xyz
+
+ ixgbe: add refcount to foo struct
+
+* Don't add a period/full stop to the subject line or you will end up with two in the patch name: ``dpdk_description..patch``.
+
+The actual email subject line should be prefixed by ``[PATCH]`` and the version, if greater than v1,
+for example: ``PATCH v2``.
+This is generally added by ``git send-email`` or ``git format-patch``, see below.
+
+If you are submitting an RFC draft of a feature you can use ``[RFC]`` instead of ``[PATCH]``.
+
+
+Commit Messages: Body
+---------------------
+
+Here are some guidelines for the body of a commit message:
+
+* You must provide a body to the commit message after the subject/summary line.
+ Do not leave it blank.
+
+* The body of the message should describe the issue being fixed or the feature being added.
+ It is important to provide enough information to allow a reviewer to understand the purpose of the patch.
+
+* The commit message must end with a ``Signed-off-by:`` line which is added using::
+
+ git commit --signoff # or -s
+
+ The purpose of the signoff is explained in the
+ `Developer's Certificate of Origin <http://www.kernel.org/doc/Documentation/SubmittingPatches>`_
+ section of the Linux kernel guidelines.
+
+ .. Note::
+
+ All developers must ensure that they have read and understood the
+ Developer's Certificate of Origin section of the documentation prior
+ to applying the signoff and submitting a patch.
+
+* The signoff must be a real name and not an alias or nickname.
+ More than one signoff is allowed.
+
+* The text of the commit message should be wrapped at 72 characters.
+
+* When fixing a regression, it is a good idea to reference the id of the commit which introduced the bug.
+ You can generate the required text as follows::
+
+ git log -1 COMMIT_ID --abbrev=12 --format='Fixes: %h ("%s")'
+
+ Fixes: a4024448efa6 ("i40e: add ieee1588 timestamping")
+
+* When fixing an error or warning it is useful to add the error message or output.
+
+* Use correct capitalization, punctuation and spelling.
+
+In addition to the ``Signed-off-by:`` name the commit messages can also have one or more of the following:
+
+* ``Reported-by:`` The reporter of the issue.
+* ``Tested-by:`` The tester of the change.
+* ``Reviewed-by:`` The reviewer of the change.
+* ``Suggested-by:`` The person who suggested the change.
+
+
+Creating Patches
+----------------
+
+It is possible to send patches directly from git but for new contributors it is recommended to generate the
+patches with ``git format-patch`` and then when everything looks okay, and the patches have been checked, to
+send them with ``git send-email``.
+
+Here are some examples of using ``git format-patch`` to generate patches:
+
+.. code-block:: console
+
+ # Generate a patch from the last commit.
+ git format-patch -1
+
+ # Generate a patch from the last 3 commits.
+ git format-patch -3
+
+ # Generate the patches in a directory.
+ git format-patch -3 -o ~/patch/
+
+ # Add a cover letter to explain a patchset.
+ git format-patch -3 -o ~/patch/ --cover-letter
+
+ # Add a prefix with a version number.
+ git format-patch -3 -o ~/patch/ --subject-prefix 'PATCH v2'
+
+
+Cover letters are useful for explaining a patchset.
+Smaller notes can be put inline in the patch after the ``---`` separator, for example::
+
+ Subject: [PATCH] fm10k/base: add FM10420 device ids
+
+ Add the device ID for Boulder Rapids and Atwood Channel to enable
+ drivers to support those devices.
+
+ Signed-off-by: Wang Xiao W <xiao.w.wang@intel.com>
+ ---
+
+ ADD NOTES HERE.
+
+ drivers/net/fm10k/base/fm10k_api.c | 6 ++++++
+ drivers/net/fm10k/base/fm10k_type.h | 6 ++++++
+ 2 files changed, 12 insertions(+)
+ ...
+
+Version 2 and later of a patchset should also include a short log of the changes so the reviewer knows what has changed.
+This can go either in the cover letter on the annotations.
+For example::
+
+ v3:
+ * Fixed issue with version.map.
+
+ v2:
+ * Added i40e support.
+ * Renamed ethdev functions from rte_eth_ieee15888_*() to rte_eth_timesync_*()
+ since 802.1AS can be supported through the same interfaces.
+
+
+Checking the Patches
+--------------------
+
+Patches should be checked for formatting and syntax issues using ``checkpatch.pl``, the patch checking tool from the Linux kernel ``scripts`` directory.
+
+The ``checkpatch`` utility can be obtained by cloning, and periodically updating, the Linux kernel sources.
+
+The kernel guidelines that are tested by ``checkpatch`` don't match the DPDK Coding Style guidelines exactly but
+they provide a good indication of conformance.
+Warnings about not using kernel data types or about split strings can be ignored::
+
+ /path/checkpatch.pl --ignore PREFER_KERNEL_TYPES,SPLIT_STRING -q files*.patch
+
+
+Sending Patches
+---------------
+
+Patches should be sent to the mailing list using ``git send-email``.
+This will require a working and configured ``sendmail`` or similar application.
+See the `git send-email <https://git-scm.com/docs/git-send-email>`_ documentation for more details.
+
+The patches should be sent to ``dev@dpdk.org``::
+
+ git send-email --to dev@dpdk.org 000*.patch
+
+If the patches are a change to existing files then you should CC the maintainer(s) of the changed files.
+The appropriate maintainer can be found in the ``MAINTAINERS`` file::
+
+ git send-email --to dev@dpdk.org --cc maintainer@some.org 000*.patch
+
+You can test the emails by sending them to yourself or with the ``--dry-run`` option.
+
+If the patch is in relation to a previous email thread you can add it to the same thread using the Message ID::
+
+ git send-email --to dev@dpdk.org --in-reply-to <1234-foo@bar.com> 000*.patch
+
+The Message ID can be found in the raw text of emails or at the top of each Patchwork patch,
+`for example <http://dpdk.org/dev/patchwork/patch/7646/>`_.
+
+
+Once submitted, your patches will appear on the mailing list and in Patchwork.
+
+Experienced committers may send patches directly with ``git send-email`` without the ``git format-patch`` step.
+
+
+The Review Process
+------------------
+
+The more work you put into the previous steps the easier it will be to get a patch accepted.
+
+The general cycle for patch review and acceptance is:
+
+#. Submit the patch.
+
+#. Wait for review comments. While you are waiting, review some other patches.
+
+#. Fix the review comments and submit a ``v n+1`` patchset::
+
+ git format-patch -3 -o ~/patch/ --subject-prefix 'PATCH v2'
+
+#. Update Patchwork to mark your previous patches as "Superseded".
+
+#. If the patch is deemed suitable for merging by the relevant maintainer(s) or other developers they will ``ack``
+ the patch with an email that includes something like::
+
+ Acked-by: Alex Smith <alex.smith@example.com>
+
+ **Note**: When acking patches please remove as much of the text of the patch email as possible.
+ It is generally best to delete everything after the ``Signed-off-by:`` line.
+
+#. If the patch isn't deemed suitable based on being out of scope or conflicting with existing functionality
+ it may receive a ``nack``.
+ In this case you will need to make a more convincing technical argument in favor of your patches.
+
+#. Acked patches will be merged in the next merge window.
--
1.8.1.4
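
A short illustration of the ``version.map`` guideline in the patch above may be useful here.
This is only a sketch: ``rte_foo_create`` and ``rte_foo_new_function`` are made-up names and
the node names depend on the release adding the symbol; the layout follows the format used by
the existing maps referenced in the versioning guide:

    DPDK_2.0 {
        global:

        rte_foo_create;

        local: *;
    };

    DPDK_2.2 {
        global:

        rte_foo_new_function;

    } DPDK_2.0;
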
^ permalink raw reply [relevance 3%]
* Re: [dpdk-dev] [PATCH] doc: add contributors guide
2015-10-15 16:51 3% ` [dpdk-dev] [PATCH] doc: add " John McNamara
@ 2015-10-15 21:36 0% ` Thomas Monjalon
0 siblings, 0 replies; 200+ results
From: Thomas Monjalon @ 2015-10-15 21:36 UTC (permalink / raw)
To: John McNamara; +Cc: dev
Hi John,
2015-10-15 17:51, John McNamara:
> Add a document to explain the DPDK patch submission and review process.
Thanks
> +There are also DPDK mailing lists for:
> +
> +* users: `general usage questions <http://dpdk.org/ml/listinfo/users>`_.
> +* announce: `release announcements <http://dpdk.org/ml/listinfo/announce>`_ (also forwarded to the dev list).
> +* dts: `test suite reviews and discussions <http://dpdk.org/ml/listinfo/dts>`_.
I think these lists are not relevant for patch submission.
> +* test-reports: `test reports <http://dpdk.org/ml/listinfo/test-report>`_ (from continuous integration testing).
[...]
> +Getting the Source Code
> +-----------------------
> +
> +The source code can be cloned using either of the following::
> +
> + git clone git://dpdk.org/dpdk
> +
> + git clone http://dpdk.org/git/dpdk
> +
> +You can also `browse the source code <http://www.dpdk.org/browse/dpdk/tree/>`_ online.
The online browse doesn't help for patch contribution.
[...]
> +* If you add new files or directories you should add your name to the ``MAINTAINERS`` file.
yes
> +* If your changes add new external functions then they should be added to the local ``version.map`` file.
> + See the :doc:`Guidelines for ABI policy and versioning </contributing/versioning>`.
> +
> +* Most changes will require an addition to the release notes in ``doc/guides/rel_notes/``.
> + See the :ref:`Release Notes section of the Documentation Guidelines <doc_guidelines>` for details.
s/Most/Important/ ?
> +* Don’t break compilation between commits with forward dependencies.
> + Each commit should compile on its own to allow for ``git bisect`` and continuous integration testing.
no, please, don't break compilation :)
> +* Add tests to the the ``app/test`` unit test framework where possible.
> +
> +* Add documentation, if required, in the form of Doxygen comments or a User Guide in RST format.
s/required/relevant/ ?
> +The commits should be separated into logical patches in a patchset.
Yes
> +In general commits should be separated based on their directory such as ``lib``, ``drivers``, ``scripts`` although
> +some of these, such as ``drivers`` may require finer grained separation.
No. The directory is not so important.
It must be easy to review first.
If changes are not so big and do not require specific explanations,
it's better to keep things together in the same patch.
A good way of thinking about patch split is to consider backports:
will it be easy to backport this change with its dependencies?
will it be easy to backport this feature/fix without useless bloat?
> +The easiest way of determining this is to do a ``git log`` on changed or similar files.
Yes
> +Example of a logical patchset separation::
> +
> + [patch 1/6] ethdev: add support for ieee1588 timestamping
> + [patch 2/6] e1000: add support for ieee1588 timestamping
> + [patch 3/6] ixgbe: add support for ieee1588 timestamping
> + [patch 4/6] i40e: add support for ieee1588 timestamping
> + [patch 5/6] app/testpmd: refactor ieee1588 forwarding
> + [patch 6/6] doc: document ieee1588 forwarding mode
The doc must be committed with the API change (ethdev).
Splitting driver implementations is useful only if they are really big or
require some specific explanations in the commit message.
> +* The summary line should be lowercase.
The acronyms can be uppercase.
> + For example::
> +
> + ixgbe: fix bug in xyz
After "fix", the word "bug" is useless.
It's better to briefly explain the impact of the bug, e.g. "fix RSS on 32-bit".
So people interested in RSS or 32-bit will look at this fix.
> + ixgbe: add refcount to foo struct
Generally, using the name of a struct, a variable or a file in the title
reveals that you don't know how to explain your change simply.
The implementation details may be explained in the long message.
The title must help to catch the area and the impact of the change.
> +If you are submitting a RFC draft of a feature you can use ``[RFC]`` instead of ``[PATCH]``.
An RFC may be incomplete.
It helps to have feedback before doing more.
> +* You must provide a body to the commit message after the subject/summary line.
> + Do not leave it blank.
When it is totally obvious, the Signed-off is enough.
> +* When fixing a regression, it is a good idea to reference the id of the commit which introduced the bug.
> + You can generate the required text as follows::
> +
> + git log -1 COMMIT_ID --abbrev=12 --format='Fixes: %h ("%s")'
git alias: fixline = log -1 --abbrev=12 --format='Fixes: %h (\"%s\")'
> + Fixes: a4024448efa6 ("i40e: add ieee1588 timestamping")
Yes it will help the backports.
> +* When fixing an error or warning it is useful to add the error message or output.
The steps to reproduce the bugs are also required.
> +* ``Reported-by:`` The reporter of the issue.
> +* ``Tested-by:`` The tester of the change.
> +* ``Reviewed-by:`` The reviewer of the change.
> +* ``Suggested-by:`` The person who suggested the change.
Yes, and Acked-by:
When it is commented between 2 versions of the patch, it can be added in the
new version if it is still relevant.
> +Cover letters are useful for explaining a patchset.
And it helps to have a correct threading of the patches.
> +Version 2 and later of a patchset should also include a short log of the changes so the reviewer knows what has changed.
> +This can go either in the cover letter on the annotations.
s/on/or/
> +The kernel guidelines that are tested by ``checkpatch`` don't match the DPDK Coding Style guidelines exactly but
> +they provide a good indication of conformance.
> +Warnings about not using kernel data types or about split strings can be ignored::
> +
> + /path/checkpatch.pl --ignore PREFER_KERNEL_TYPES,SPLIT_STRING -q files*.patch
OK
I plan to suggest a script with more checkpatch configurations.
We should enforce using "make test" before sending.
> +Patches should be sent to the mailing list using ``git send-email``.
> +This will require a working and configured ``sendmail`` or similar application.
No, you can configure an external SMTP:
smtpuser = name@domain.com
smtpserver = smtp.domain.com
smtpserverport = 465
smtpencryption = ssl
> +If the patches are a change to existing files then you should CC the maintainer(s) of the changed files.
> +The appropriate maintainer can be found in the ``MAINTAINERS`` file::
> +
> + git send-email --to dev@dpdk.org --cc maintainer@some.org 000*.patch
I would say to send --to the maintainers and -cc dev@dpdk.org.
Some maintainers can have stronger filter if their name is in the "To" field.
> +If the patch is in relation to a previous email thread you can add it to the same thread using the Message ID::
> +
> + git send-email --to dev@dpdk.org --in-reply-to <1234-foo@bar.com> 000*.patch
Yes please.
s/can/should/
> +Experienced commiters may send patches directly with ``git send-email`` without the ``git format-patch`` step.
The options --annotate and "confirm = always" are recommended to check before sending.
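For example (sketched here for illustration, not part of the original mail):

    git config sendemail.confirm always
    git send-email --annotate --to dev@dpdk.org 000*.patch

The first command makes git send-email ask for confirmation before each mail goes out;
--annotate opens every patch in the editor for a final check.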
> +The more work you put into the previous steps the easier it will be to get a patch accepted.
Yes :)
> +#. Submit the patch.
Check the automatic test reports in the coming hours.
> +#. Wait for review comments. While you are waiting review some other patches.
> +#. If the patch is deemed suitable for merging by the relevant maintainer(s) or other developers they will ``ack``
> + the patch with an email that includes something like::
We don't use Reviewed-by a lot.
My understanding is that "Acked-by" doesn't mean it has been fully reviewed and tested.
But Reviewed-by is stronger without implying that we think it's the best solution.
It's an interpretation. Should it be explained here?
> +#. If the patch isn't deemed suitable based on being out of scope or conflicting with existing functionality
> + it may receive a ``nack``.
> + In this case you will need to make a more convincing technical argument in favor of your patches.
More generally, a patch should not be accepted if there are some comments not
addressed by a new version or some strong arguments.
> +#. Acked patches will be merged in the next merge window.
Next or current?
^ permalink raw reply [relevance 0%]
* Re: [dpdk-dev] [RFC PATCH v2] vhost: Add VHOST PMD
@ 2015-10-19 13:26 4% ` Panu Matilainen
2015-10-19 13:27 0% ` Richardson, Bruce
0 siblings, 1 reply; 200+ results
From: Panu Matilainen @ 2015-10-19 13:26 UTC (permalink / raw)
To: Tetsuya Mukawa, Bruce Richardson, Loftus, Ciara; +Cc: dev, ann.zhuangyanying
On 10/19/2015 01:50 PM, Tetsuya Mukawa wrote:
> On 2015/10/19 18:45, Bruce Richardson wrote:
>> On Mon, Oct 19, 2015 at 10:32:50AM +0100, Loftus, Ciara wrote:
>>>> On 2015/10/16 21:52, Bruce Richardson wrote:
>>>>> On Mon, Aug 31, 2015 at 12:55:26PM +0900, Tetsuya Mukawa wrote:
>>>>>> The patch introduces a new PMD. This PMD is implemented as thin
>>>> wrapper
>>>>>> of librte_vhost. It means librte_vhost is also needed to compile the PMD.
>>>>>> The PMD can have 'iface' parameter like below to specify a path to
>>>> connect
>>>>>> to a virtio-net device.
>>>>>>
>>>>>> $ ./testpmd -c f -n 4 --vdev 'eth_vhost0,iface=/tmp/sock0' -- -i
>>>>>>
>>>>>> To connect above testpmd, here is qemu command example.
>>>>>>
>>>>>> $ qemu-system-x86_64 \
>>>>>> <snip>
>>>>>> -chardev socket,id=chr0,path=/tmp/sock0 \
>>>>>> -netdev vhost-user,id=net0,chardev=chr0,vhostforce \
>>>>>> -device virtio-net-pci,netdev=net0
>>>>>>
>>>>>> Signed-off-by: Tetsuya Mukawa <mukawa@igel.co.jp>
>>>>> With this PMD in place, is there any need to keep the existing vhost library
>>>>> around as a separate entity? Can the existing library be
>>>> subsumed/converted into
>>>>> a standard PMD?
>>>>>
>>>>> /Bruce
>>>> Hi Bruce,
>>>>
>>>> I concern about whether the PMD has all features of librte_vhost,
>>>> because librte_vhost provides more features and freedom than ethdev API
>>>> provides.
>>>> In some cases, user needs to choose limited implementation without
>>>> librte_vhost.
>>>> I am going to eliminate such cases while implementing the PMD.
>>>> But I don't have strong belief that we can remove librte_vhost now.
>>>>
>>>> So how about keeping current separation in next DPDK?
>>>> I guess people will try to replace librte_vhost to vhost PMD, because
>>>> apparently using ethdev APIs will be useful in many cases.
>>>> And we will get feedbacks like "vhost PMD needs to support like this usage".
>>>> (Or we will not have feedbacks, but it's also OK.)
>>>> Then, we will be able to merge librte_vhost and vhost PMD.
>>> I agree with the above. One the concerns I had when reviewing the patch was that the PMD removes some freedom that is available with the library. Eg. Ability to implement the new_device and destroy_device callbacks. If using the PMD you are constrained to the implementations of these in the PMD driver, but if using librte_vhost, you can implement your own with whatever functionality you like - a good example of this can be seen in the vhost sample app.
>>> On the other hand, the PMD is useful in that it removes a lot of complexity for the user and may work for some more general use cases. So I would be in favour of having both options available too.
>>>
>>> Ciara
>>>
>> Thanks.
>> However, just because the libraries are merged does not mean that you need
>> be limited by PMD functionality. Many PMDs provide additional library-specific
>> functions over and above their PMD capabilities. The bonded PMD is a good example
>> here, as it has a whole set of extra functions to create and manipulate bonded
>> devices - things that are obviously not part of the general ethdev API. Other
>> vPMDs similarly include functions to allow them to be created on the fly too.
>>
>> regards,
>> /Bruce
>
> Hi Bruce,
>
> I appreciate for showing a good example. I haven't noticed the PMD.
> I will check the bonding PMD, and try to remove librte_vhost without
> losing freedom and features of the library.
Hi,
Just a gentle reminder - if you consider removing (even if by just
replacing/renaming) an entire library, it needs to go through the ABI
deprecation process.
It seems obvious enough. But for all the ABI policing here, somehow we
all failed to notice the two compatibility breaking rename-elephants in
the room during 2.1 development:
- libintel_dpdk was renamed to libdpdk
- librte_pmd_virtio_uio was renamed to librte_pmd_virtio
Of course these cases are easy to work around with symlinks, and are
unrelated to the matter at hand. Just wanting to make sure such things
don't happen again.
- Panu -
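
As a side note on the bonding example Bruce mentions above: the kind of PMD-specific API that
lives alongside the generic ethdev API looks roughly like the sketch below. It is illustrative
only (port ids 0 and 1 are assumed to be already-probed physical ports), but it shows the
pattern a vhost PMD could follow for its own extra hooks.

    #include <rte_eth_bond.h>

    static int
    setup_bonded_port(void)
    {
        /* rte_eth_bond_create() returns the port id of the new bonded device */
        int bond_port = rte_eth_bond_create("bond0", BONDING_MODE_ROUND_ROBIN, 0);

        if (bond_port < 0)
            return bond_port;

        /* Attach two physical ports; these calls are bonding-specific extras
         * on top of the generic ethdev API. */
        rte_eth_bond_slave_add((uint8_t)bond_port, 0);
        rte_eth_bond_slave_add((uint8_t)bond_port, 1);

        return bond_port;
    }
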
^ permalink raw reply [relevance 4%]
* Re: [dpdk-dev] [RFC PATCH v2] vhost: Add VHOST PMD
2015-10-19 13:26 4% ` Panu Matilainen
@ 2015-10-19 13:27 0% ` Richardson, Bruce
2015-10-21 4:35 0% ` Tetsuya Mukawa
0 siblings, 1 reply; 200+ results
From: Richardson, Bruce @ 2015-10-19 13:27 UTC (permalink / raw)
To: Panu Matilainen, Tetsuya Mukawa, Loftus, Ciara; +Cc: dev, ann.zhuangyanying
> -----Original Message-----
> From: Panu Matilainen [mailto:pmatilai@redhat.com]
> Sent: Monday, October 19, 2015 2:26 PM
> To: Tetsuya Mukawa <mukawa@igel.co.jp>; Richardson, Bruce
> <bruce.richardson@intel.com>; Loftus, Ciara <ciara.loftus@intel.com>
> Cc: dev@dpdk.org; ann.zhuangyanying@huawei.com
> Subject: Re: [dpdk-dev] [RFC PATCH v2] vhost: Add VHOST PMD
>
> On 10/19/2015 01:50 PM, Tetsuya Mukawa wrote:
> > On 2015/10/19 18:45, Bruce Richardson wrote:
> >> On Mon, Oct 19, 2015 at 10:32:50AM +0100, Loftus, Ciara wrote:
> >>>> On 2015/10/16 21:52, Bruce Richardson wrote:
> >>>>> On Mon, Aug 31, 2015 at 12:55:26PM +0900, Tetsuya Mukawa wrote:
> >>>>>> The patch introduces a new PMD. This PMD is implemented as thin
> >>>> wrapper
> >>>>>> of librte_vhost. It means librte_vhost is also needed to compile
> the PMD.
> >>>>>> The PMD can have 'iface' parameter like below to specify a path
> >>>>>> to
> >>>> connect
> >>>>>> to a virtio-net device.
> >>>>>>
> >>>>>> $ ./testpmd -c f -n 4 --vdev 'eth_vhost0,iface=/tmp/sock0' -- -i
> >>>>>>
> >>>>>> To connect above testpmd, here is qemu command example.
> >>>>>>
> >>>>>> $ qemu-system-x86_64 \
> >>>>>> <snip>
> >>>>>> -chardev socket,id=chr0,path=/tmp/sock0 \
> >>>>>> -netdev vhost-user,id=net0,chardev=chr0,vhostforce \
> >>>>>> -device virtio-net-pci,netdev=net0
> >>>>>>
> >>>>>> Signed-off-by: Tetsuya Mukawa <mukawa@igel.co.jp>
> >>>>> With this PMD in place, is there any need to keep the existing
> >>>>> vhost library around as a separate entity? Can the existing
> >>>>> library be
> >>>> subsumed/converted into
> >>>>> a standard PMD?
> >>>>>
> >>>>> /Bruce
> >>>> Hi Bruce,
> >>>>
> >>>> I concern about whether the PMD has all features of librte_vhost,
> >>>> because librte_vhost provides more features and freedom than ethdev
> >>>> API provides.
> >>>> In some cases, user needs to choose limited implementation without
> >>>> librte_vhost.
> >>>> I am going to eliminate such cases while implementing the PMD.
> >>>> But I don't have strong belief that we can remove librte_vhost now.
> >>>>
> >>>> So how about keeping current separation in next DPDK?
> >>>> I guess people will try to replace librte_vhost to vhost PMD,
> >>>> because apparently using ethdev APIs will be useful in many cases.
> >>>> And we will get feedbacks like "vhost PMD needs to support like this
> usage".
> >>>> (Or we will not have feedbacks, but it's also OK.) Then, we will be
> >>>> able to merge librte_vhost and vhost PMD.
> >>> I agree with the above. One the concerns I had when reviewing the
> patch was that the PMD removes some freedom that is available with the
> library. Eg. Ability to implement the new_device and destroy_device
> callbacks. If using the PMD you are constrained to the implementations of
> these in the PMD driver, but if using librte_vhost, you can implement your
> own with whatever functionality you like - a good example of this can be
> seen in the vhost sample app.
> >>> On the other hand, the PMD is useful in that it removes a lot of
> complexity for the user and may work for some more general use cases. So I
> would be in favour of having both options available too.
> >>>
> >>> Ciara
> >>>
> >> Thanks.
> >> However, just because the libraries are merged does not mean that you
> >> need be limited by PMD functionality. Many PMDs provide additional
> >> library-specific functions over and above their PMD capabilities. The
> >> bonded PMD is a good example here, as it has a whole set of extra
> >> functions to create and manipulate bonded devices - things that are
> >> obviously not part of the general ethdev API. Other vPMDs similarly
> include functions to allow them to be created on the fly too.
> >>
> >> regards,
> >> /Bruce
> >
> > Hi Bruce,
> >
> > I appreciate for showing a good example. I haven't noticed the PMD.
> > I will check the bonding PMD, and try to remove librte_vhost without
> > losing freedom and features of the library.
>
> Hi,
>
> Just a gentle reminder - if you consider removing (even if by just
> replacing/renaming) an entire library, it needs to happen the ABI
> deprecation process.
>
> It seems obvious enough. But for all the ABI policing here, somehow we all
> failed to notice the two compatibility breaking rename-elephants in the
> room during 2.1 development:
> - libintel_dpdk was renamed to libdpdk
> - librte_pmd_virtio_uio was renamed to librte_pmd_virtio
>
> Of course these cases are easy to work around with symlinks, and are
> unrelated to the matter at hand. Just wanting to make sure such things
> dont happen again.
>
> - Panu -
Still doesn't hurt to remind us, Panu! Thanks. :-)
^ permalink raw reply [relevance 0%]
* Re: [dpdk-dev] [PATCH 0/4]librte_port: modify macros to access packet meta-data
2015-09-11 13:40 0% ` [dpdk-dev] [PATCH 0/4]librte_port: modify macros to access packet meta-data Dumitrescu, Cristian
@ 2015-10-19 15:02 0% ` Thomas Monjalon
0 siblings, 0 replies; 200+ results
From: Thomas Monjalon @ 2015-10-19 15:02 UTC (permalink / raw)
To: Zhang, Roy Fan; +Cc: dev
> > From: Fan Zhang <roy.fan.zhang@intel.com>
> >
> > This patchset links to ABI change announced for librte_port. Macros to
> > access the packet meta-data stored within the packet buffer has been
> > adjusted to cover the packet mbuf structure.
>
> Acked-by: Cristian Dumitrescu <cristian.dumitrescu@intel.com>
Applied in 1 commit (such change must be atomic), thanks
^ permalink raw reply [relevance 0%]
* Re: [dpdk-dev] [PATCH] ixgbe: prefetch packet headers in vector PMD receive function
2015-10-15 15:43 3% ` Ananyev, Konstantin
@ 2015-10-19 16:30 0% ` Zoltan Kiss
2015-10-19 18:57 0% ` Ananyev, Konstantin
0 siblings, 1 reply; 200+ results
From: Zoltan Kiss @ 2015-10-19 16:30 UTC (permalink / raw)
To: Ananyev, Konstantin, Richardson, Bruce, dev
On 15/10/15 16:43, Ananyev, Konstantin wrote:
>
>
>> -----Original Message-----
>> From: Zoltan Kiss [mailto:zoltan.kiss@linaro.org]
>> Sent: Thursday, October 15, 2015 11:33 AM
>> To: Ananyev, Konstantin; Richardson, Bruce; dev@dpdk.org
>> Subject: Re: [PATCH] ixgbe: prefetch packet headers in vector PMD receive function
>>
>>
>>
>> On 15/10/15 00:23, Ananyev, Konstantin wrote:
>>>
>>>
>>>> -----Original Message-----
>>>> From: Zoltan Kiss [mailto:zoltan.kiss@linaro.org]
>>>> Sent: Wednesday, October 14, 2015 5:11 PM
>>>> To: Ananyev, Konstantin; Richardson, Bruce; dev@dpdk.org
>>>> Subject: Re: [PATCH] ixgbe: prefetch packet headers in vector PMD receive function
>>>>
>>>>
>>>>
>>>> On 28/09/15 00:19, Ananyev, Konstantin wrote:
>>>>>
>>>>>
>>>>>> -----Original Message-----
>>>>>> From: Zoltan Kiss [mailto:zoltan.kiss@linaro.org]
>>>>>> Sent: Friday, September 25, 2015 7:29 PM
>>>>>> To: Richardson, Bruce; dev@dpdk.org
>>>>>> Cc: Ananyev, Konstantin
>>>>>> Subject: Re: [PATCH] ixgbe: prefetch packet headers in vector PMD receive function
>>>>>>
>>>>>> On 07/09/15 07:41, Richardson, Bruce wrote:
>>>>>>>
>>>>>>>> -----Original Message-----
>>>>>>>> From: Zoltan Kiss [mailto:zoltan.kiss@linaro.org]
>>>>>>>> Sent: Monday, September 7, 2015 3:15 PM
>>>>>>>> To: Richardson, Bruce; dev@dpdk.org
>>>>>>>> Cc: Ananyev, Konstantin
>>>>>>>> Subject: Re: [PATCH] ixgbe: prefetch packet headers in vector PMD receive
>>>>>>>> function
>>>>>>>>
>>>>>>>>
>>>>>>>>
>>>>>>>> On 07/09/15 13:57, Richardson, Bruce wrote:
>>>>>>>>>
>>>>>>>>>> -----Original Message-----
>>>>>>>>>> From: Zoltan Kiss [mailto:zoltan.kiss@linaro.org]
>>>>>>>>>> Sent: Monday, September 7, 2015 1:26 PM
>>>>>>>>>> To: dev@dpdk.org
>>>>>>>>>> Cc: Ananyev, Konstantin; Richardson, Bruce
>>>>>>>>>> Subject: Re: [PATCH] ixgbe: prefetch packet headers in vector PMD
>>>>>>>>>> receive function
>>>>>>>>>>
>>>>>>>>>> Hi,
>>>>>>>>>>
>>>>>>>>>> I just realized I've missed the "[PATCH]" tag from the subject. Did
>>>>>>>>>> anyone had time to review this?
>>>>>>>>>>
>>>>>>>>> Hi Zoltan,
>>>>>>>>>
>>>>>>>>> the big thing that concerns me with this is the addition of new
>>>>>>>>> instructions for each packet in the fast path. Ideally, this
>>>>>>>>> prefetching would be better handled in the application itself, as for
>>>>>>>>> some apps, e.g. those using pipelining, the core doing the RX from the
>>>>>>>>> NIC may not touch the packet data at all, and the prefetches will
>>>>>>>> instead cause a performance slowdown.
>>>>>>>>> Is it possible to get the same performance increase - or something
>>>>>>>>> close to it - by making changes in OVS?
>>>>>>>> OVS already does a prefetch when it's processing the previous packet, but
>>>>>>>> apparently it's not early enough. At least for my test scenario, where I'm
>>>>>>>> forwarding UDP packets with the least possible overhead. I guess in tests
>>>>>>>> where OVS does more complex processing it should be fine.
>>>>>>>> I'll try to move the prefetch earlier in OVS codebase, but I'm not sure if
>>>>>>>> it'll help.
>>>>>>> I would suggest trying to prefetch more than one packet ahead. Prefetching 4 or
>>>>>>> 8 ahead might work better, depending on the processing being done.
>>>>>>
>>>>>> I've moved the prefetch earlier, and it seems to work:
>>>>>>
>>>>>> https://patchwork.ozlabs.org/patch/519017/
>>>>>>
>>>>>> However it raises the question: should we remove header prefetch from
>>>>>> all the other drivers, and the CONFIG_RTE_PMD_PACKET_PREFETCH option?
>>>>>
>>>>> My vote would be for that.
>>>>> Konstantin
>>>>
>>>> After some further thinking I would rather support the
>>>> rte_packet_prefetch() macro (prefetch the header in the driver, and
>>>> configure it through CONFIG_RTE_PMD_PACKET_PREFETCH)
>>>>
>>>> - the prefetch can happen earlier, so if an application needs the header
>>>> right away, this is the fastest
>>>> - if the application doesn't need header prefetch, it can turn it off
>>>> compile time. Same as if we wouldn't have this option.
>>>> - if the application has mixed needs (sometimes it needs the header
>>>> right away, sometimes it doesn't), it can turn it off and do what it
>>>> needs. Same as if we wouldn't have this option.
>>>>
>>>> A harder decision would be whether it should be turned on or off by
>>>> default. Currently it's on, and half of the receive functions don't use it.
>>>
>>> Yep, it is sort of a mess right now.
>>> Another question if we'd like to keep it and standardise it:
>>> at what moment to prefetch: as soon as we realize that HW is done with that buffer,
>>> or as late inside rx_burst() as possible?
>>> I suppose there is no clear answer for that.
>> I think if the application needs the driver start the prefetching, it
>> does it because it's already too late when rte_eth_rx_burst() returns.
>> So I think it's best to do it as soon as we learn that the HW is done.
>
> Probably, but it might be situations when it would be more plausible to do it later too.
Could you elaborate?
If the application wants the prefetch to start earlier than it could issue it
itself (right after rte_eth_rx_burst() returns), earlier is better. So
it will have the best chance of having the data in cache.
If it needs the data later, then it can do the prefetch by itself.
> Again it might depend on particular HW and your memory access pattern.
>
>>
>>> That's why my thought was to just get rid of it.
>>> Though if it would be implemented in some standardized way and disabled by default -
>>> that's probably ok too.
>>> BTW, couldn't that be implemented just via rte_ethdev rx callbacks mechanism?
>>> Then we can have the code one place and don't need compile option at all -
>>> could be ebabled/disabled dynamically on a per nic basis.
>>> Or would it be too high overhead for that?
>>
>> I think if we go that way, it's better to pass an option to
>> rte_eth_rx_burst() and tell if you want the header prefetched by the
>> driver. That would be the most flexible.
>
> That would mean ABI breakage for rx_burst()...
> Probably then better a new parameter in rx_conf or something.
> Though it still means that each PMD has to implement it on its own.
That would be the case in any way, as we are talking about
prefetching in the receive function.
> And again there might be PMDs that would just ignore that parameter.
If the PMD has a good reason to do that (e.g. prefetch has undesirable
effects), I think it's fine.
> While for callback it would be all in one and known place.
Who would call that callback, and when? If the application calls it after
rte_eth_rx_burst() returns, it wouldn't be of much use; it could
just do the prefetch by itself.
> But again, I didn't measure it, so not sure what will be an impact of the callback itself.
> Might be it not worth it.
> Konstantin
>
>>
>>> Konstantin
>>>
>>>
>>>
>>>>
>>>>>
>>>>>
>>>>>>
>>>>>>>
>>>>>>> /Bruce
>>>>>
^ permalink raw reply [relevance 0%]
* Re: [dpdk-dev] [PATCH] ixgbe: prefetch packet headers in vector PMD receive function
2015-10-19 16:30 0% ` Zoltan Kiss
@ 2015-10-19 18:57 0% ` Ananyev, Konstantin
2015-10-20 9:58 0% ` Zoltan Kiss
0 siblings, 1 reply; 200+ results
From: Ananyev, Konstantin @ 2015-10-19 18:57 UTC (permalink / raw)
To: Zoltan Kiss, Richardson, Bruce, dev
> -----Original Message-----
> From: Zoltan Kiss [mailto:zoltan.kiss@linaro.org]
> Sent: Monday, October 19, 2015 5:30 PM
> To: Ananyev, Konstantin; Richardson, Bruce; dev@dpdk.org
> Subject: Re: [PATCH] ixgbe: prefetch packet headers in vector PMD receive function
>
>
>
> On 15/10/15 16:43, Ananyev, Konstantin wrote:
> >
> >
> >> -----Original Message-----
> >> From: Zoltan Kiss [mailto:zoltan.kiss@linaro.org]
> >> Sent: Thursday, October 15, 2015 11:33 AM
> >> To: Ananyev, Konstantin; Richardson, Bruce; dev@dpdk.org
> >> Subject: Re: [PATCH] ixgbe: prefetch packet headers in vector PMD receive function
> >>
> >>
> >>
> >> On 15/10/15 00:23, Ananyev, Konstantin wrote:
> >>>
> >>>
> >>>> -----Original Message-----
> >>>> From: Zoltan Kiss [mailto:zoltan.kiss@linaro.org]
> >>>> Sent: Wednesday, October 14, 2015 5:11 PM
> >>>> To: Ananyev, Konstantin; Richardson, Bruce; dev@dpdk.org
> >>>> Subject: Re: [PATCH] ixgbe: prefetch packet headers in vector PMD receive function
> >>>>
> >>>>
> >>>>
> >>>> On 28/09/15 00:19, Ananyev, Konstantin wrote:
> >>>>>
> >>>>>
> >>>>>> -----Original Message-----
> >>>>>> From: Zoltan Kiss [mailto:zoltan.kiss@linaro.org]
> >>>>>> Sent: Friday, September 25, 2015 7:29 PM
> >>>>>> To: Richardson, Bruce; dev@dpdk.org
> >>>>>> Cc: Ananyev, Konstantin
> >>>>>> Subject: Re: [PATCH] ixgbe: prefetch packet headers in vector PMD receive function
> >>>>>>
> >>>>>> On 07/09/15 07:41, Richardson, Bruce wrote:
> >>>>>>>
> >>>>>>>> -----Original Message-----
> >>>>>>>> From: Zoltan Kiss [mailto:zoltan.kiss@linaro.org]
> >>>>>>>> Sent: Monday, September 7, 2015 3:15 PM
> >>>>>>>> To: Richardson, Bruce; dev@dpdk.org
> >>>>>>>> Cc: Ananyev, Konstantin
> >>>>>>>> Subject: Re: [PATCH] ixgbe: prefetch packet headers in vector PMD receive
> >>>>>>>> function
> >>>>>>>>
> >>>>>>>>
> >>>>>>>>
> >>>>>>>> On 07/09/15 13:57, Richardson, Bruce wrote:
> >>>>>>>>>
> >>>>>>>>>> -----Original Message-----
> >>>>>>>>>> From: Zoltan Kiss [mailto:zoltan.kiss@linaro.org]
> >>>>>>>>>> Sent: Monday, September 7, 2015 1:26 PM
> >>>>>>>>>> To: dev@dpdk.org
> >>>>>>>>>> Cc: Ananyev, Konstantin; Richardson, Bruce
> >>>>>>>>>> Subject: Re: [PATCH] ixgbe: prefetch packet headers in vector PMD
> >>>>>>>>>> receive function
> >>>>>>>>>>
> >>>>>>>>>> Hi,
> >>>>>>>>>>
> >>>>>>>>>> I just realized I've missed the "[PATCH]" tag from the subject. Did
> >>>>>>>>>> anyone had time to review this?
> >>>>>>>>>>
> >>>>>>>>> Hi Zoltan,
> >>>>>>>>>
> >>>>>>>>> the big thing that concerns me with this is the addition of new
> >>>>>>>>> instructions for each packet in the fast path. Ideally, this
> >>>>>>>>> prefetching would be better handled in the application itself, as for
> >>>>>>>>> some apps, e.g. those using pipelining, the core doing the RX from the
> >>>>>>>>> NIC may not touch the packet data at all, and the prefetches will
> >>>>>>>> instead cause a performance slowdown.
> >>>>>>>>> Is it possible to get the same performance increase - or something
> >>>>>>>>> close to it - by making changes in OVS?
> >>>>>>>> OVS already does a prefetch when it's processing the previous packet, but
> >>>>>>>> apparently it's not early enough. At least for my test scenario, where I'm
> >>>>>>>> forwarding UDP packets with the least possible overhead. I guess in tests
> >>>>>>>> where OVS does more complex processing it should be fine.
> >>>>>>>> I'll try to move the prefetch earlier in OVS codebase, but I'm not sure if
> >>>>>>>> it'll help.
> >>>>>>> I would suggest trying to prefetch more than one packet ahead. Prefetching 4 or
> >>>>>>> 8 ahead might work better, depending on the processing being done.
> >>>>>>
> >>>>>> I've moved the prefetch earlier, and it seems to work:
> >>>>>>
> >>>>>> https://patchwork.ozlabs.org/patch/519017/
> >>>>>>
> >>>>>> However it raises the question: should we remove header prefetch from
> >>>>>> all the other drivers, and the CONFIG_RTE_PMD_PACKET_PREFETCH option?
> >>>>>
> >>>>> My vote would be for that.
> >>>>> Konstantin
> >>>>
> >>>> After some further thinking I would rather support the
> >>>> rte_packet_prefetch() macro (prefetch the header in the driver, and
> >>>> configure it through CONFIG_RTE_PMD_PACKET_PREFETCH)
> >>>>
> >>>> - the prefetch can happen earlier, so if an application needs the header
> >>>> right away, this is the fastest
> >>>> - if the application doesn't need header prefetch, it can turn it off
> >>>> compile time. Same as if we wouldn't have this option.
> >>>> - if the application has mixed needs (sometimes it needs the header
> >>>> right away, sometimes it doesn't), it can turn it off and do what it
> >>>> needs. Same as if we wouldn't have this option.
> >>>>
> >>>> A harder decision would be whether it should be turned on or off by
> >>>> default. Currently it's on, and half of the receive functions don't use it.
> >>>
> >>> Yep, it is sort of a mess right now.
> >>> Another question if we'd like to keep it and standardise it:
> >>> at what moment to prefetch: as soon as we realize that HW is done with that buffer,
> >>> or as late inside rx_burst() as possible?
> >>> I suppose there is no clear answer for that.
> >> I think if the application needs the driver start the prefetching, it
> >> does it because it's already too late when rte_eth_rx_burst() returns.
> >> So I think it's best to do it as soon as we learn that the HW is done.
> >
> > Probably, but it might be situations when it would be more plausible to do it later too.
> Could you elaborate?
> If the application wants prefetch to start earlier than could be done by
> itself (right after rte_eth_rx_burst() returns), earlier is better. So
> it will have the best chances to have the data in cache.
> If it would need the data later, then it could do the prefetch by itself.
>
> > Again it might depend on particular HW and your memory access pattern.
> >
> >>
> >>> That's why my thought was to just get rid of it.
> >>> Though if it would be implemented in some standardized way and disabled by default -
> >>> that's probably ok too.
> >>> BTW, couldn't that be implemented just via rte_ethdev rx callbacks mechanism?
> >>> Then we can have the code one place and don't need compile option at all -
> >>> could be ebabled/disabled dynamically on a per nic basis.
> >>> Or would it be too high overhead for that?
> >>
> >> I think if we go that way, it's better to pass an option to
> >> rte_eth_rx_burst() and tell if you want the header prefetched by the
> >> driver. That would be the most flexible.
> >
> > That would mean ABI breakage for rx_burst()...
> > Probably then better a new parameter in rx_conf or something.
> > Though it still means that each PMD has to implement it on its own.
> That would be the case in any way, as we are are talking about
> prefetching in the receive function.
>
> > And again there might be PMDs that would just ignore that parameter.
> If the PMD has a good reason to do that (e.g. prefetch has undesirable
> effects), I think it's fine.
>
> > While for callback it would be all in one and known place.
> Who and when would call that callback? If the application after
> rte_eth_rx_burst() returned, it wouldn't have too much use, it could
> just call prefetch by itself.
I am talking about the callbacks called by rx_burst() itself:
static inline uint16_t
rte_eth_rx_burst(uint8_t port_id, uint16_t queue_id,
        struct rte_mbuf **rx_pkts, const uint16_t nb_pkts)
{
    struct rte_eth_dev *dev;

    dev = &rte_eth_devices[port_id];

    int16_t nb_rx = (*dev->rx_pkt_burst)(dev->data->rx_queues[queue_id],
            rx_pkts, nb_pkts);

#ifdef RTE_ETHDEV_RXTX_CALLBACKS
    struct rte_eth_rxtx_callback *cb = dev->post_rx_burst_cbs[queue_id];

    if (unlikely(cb != NULL)) {
        do {
            nb_rx = cb->fn.rx(port_id, queue_id, rx_pkts, nb_rx,
                    nb_pkts, cb->param);
            cb = cb->next;
        } while (cb != NULL);
    }
#endif

    return nb_rx;
}
Konstantin
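
To make that concrete, here is a rough sketch of what such a post-RX prefetch callback could
look like, using the existing rte_eth_add_rx_callback() mechanism (it requires
RTE_ETHDEV_RXTX_CALLBACKS to be enabled). This is illustrative only; nothing below was
proposed or measured in this thread.

    #include <rte_common.h>
    #include <rte_ethdev.h>
    #include <rte_mbuf.h>
    #include <rte_prefetch.h>

    /* Post-RX callback: prefetch the header of every received packet.
     * Matches the rte_rx_callback_fn prototype invoked by rx_burst() above. */
    static uint16_t
    prefetch_headers_cb(uint8_t port __rte_unused, uint16_t queue __rte_unused,
            struct rte_mbuf *pkts[], uint16_t nb_pkts,
            uint16_t max_pkts __rte_unused, void *user_param __rte_unused)
    {
        uint16_t i;

        for (i = 0; i < nb_pkts; i++)
            rte_prefetch0(rte_pktmbuf_mtod(pkts[i], void *));

        return nb_pkts;
    }

    /* Enabled dynamically, per port/queue, with no compile-time option. */
    static void
    enable_header_prefetch(uint8_t port_id, uint16_t queue_id)
    {
        rte_eth_add_rx_callback(port_id, queue_id, prefetch_headers_cb, NULL);
    }
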
>
> > But again, I didn't measure it, so not sure what will be an impact of the callback itself.
> > Might be it not worth it.
> > Konstantin
> >
> >>
> >>> Konstantin
> >>>
> >>>
> >>>
> >>>>
> >>>>>
> >>>>>
> >>>>>>
> >>>>>>>
> >>>>>>> /Bruce
> >>>>>
^ permalink raw reply [relevance 0%]
* [dpdk-dev] [PATCH] doc: Add missing new line before code block
@ 2015-10-20 2:41 3% Tetsuya Mukawa
2015-10-28 9:33 0% ` Mcnamara, John
0 siblings, 1 reply; 200+ results
From: Tetsuya Mukawa @ 2015-10-20 2:41 UTC (permalink / raw)
To: dev
The patch adds missing new line to "Managing ABI updates" section.
Signed-off-by: Tetsuya Mukawa <mukawa@igel.co.jp>
---
doc/guides/contributing/versioning.rst | 3 ++-
1 file changed, 2 insertions(+), 1 deletion(-)
diff --git a/doc/guides/contributing/versioning.rst b/doc/guides/contributing/versioning.rst
index 8a739dd..d824762 100644
--- a/doc/guides/contributing/versioning.rst
+++ b/doc/guides/contributing/versioning.rst
@@ -382,7 +382,8 @@ easy. Start by removing the symbol from the requisite version map file:
} DPDK_2.0;
-Next remove the corresponding versioned export
+Next remove the corresponding versioned export.
+
.. code-block:: c
-VERSION_SYMBOL(rte_acl_create, _v20, 2.0);
--
2.1.4
^ permalink raw reply [relevance 3%]
* Re: [dpdk-dev] [PATCH] ixgbe: prefetch packet headers in vector PMD receive function
2015-10-19 18:57 0% ` Ananyev, Konstantin
@ 2015-10-20 9:58 0% ` Zoltan Kiss
0 siblings, 0 replies; 200+ results
From: Zoltan Kiss @ 2015-10-20 9:58 UTC (permalink / raw)
To: Ananyev, Konstantin, Richardson, Bruce, dev
On 19/10/15 19:57, Ananyev, Konstantin wrote:
>
>
>> -----Original Message-----
>> From: Zoltan Kiss [mailto:zoltan.kiss@linaro.org]
>> Sent: Monday, October 19, 2015 5:30 PM
>> To: Ananyev, Konstantin; Richardson, Bruce; dev@dpdk.org
>> Subject: Re: [PATCH] ixgbe: prefetch packet headers in vector PMD receive function
>>
>>
>>
>> On 15/10/15 16:43, Ananyev, Konstantin wrote:
>>>
>>>
>>>> -----Original Message-----
>>>> From: Zoltan Kiss [mailto:zoltan.kiss@linaro.org]
>>>> Sent: Thursday, October 15, 2015 11:33 AM
>>>> To: Ananyev, Konstantin; Richardson, Bruce; dev@dpdk.org
>>>> Subject: Re: [PATCH] ixgbe: prefetch packet headers in vector PMD receive function
>>>>
>>>>
>>>>
>>>> On 15/10/15 00:23, Ananyev, Konstantin wrote:
>>>>>
>>>>>
>>>>>> -----Original Message-----
>>>>>> From: Zoltan Kiss [mailto:zoltan.kiss@linaro.org]
>>>>>> Sent: Wednesday, October 14, 2015 5:11 PM
>>>>>> To: Ananyev, Konstantin; Richardson, Bruce; dev@dpdk.org
>>>>>> Subject: Re: [PATCH] ixgbe: prefetch packet headers in vector PMD receive function
>>>>>>
>>>>>>
>>>>>>
>>>>>> On 28/09/15 00:19, Ananyev, Konstantin wrote:
>>>>>>>
>>>>>>>
>>>>>>>> -----Original Message-----
>>>>>>>> From: Zoltan Kiss [mailto:zoltan.kiss@linaro.org]
>>>>>>>> Sent: Friday, September 25, 2015 7:29 PM
>>>>>>>> To: Richardson, Bruce; dev@dpdk.org
>>>>>>>> Cc: Ananyev, Konstantin
>>>>>>>> Subject: Re: [PATCH] ixgbe: prefetch packet headers in vector PMD receive function
>>>>>>>>
>>>>>>>> On 07/09/15 07:41, Richardson, Bruce wrote:
>>>>>>>>>
>>>>>>>>>> -----Original Message-----
>>>>>>>>>> From: Zoltan Kiss [mailto:zoltan.kiss@linaro.org]
>>>>>>>>>> Sent: Monday, September 7, 2015 3:15 PM
>>>>>>>>>> To: Richardson, Bruce; dev@dpdk.org
>>>>>>>>>> Cc: Ananyev, Konstantin
>>>>>>>>>> Subject: Re: [PATCH] ixgbe: prefetch packet headers in vector PMD receive
>>>>>>>>>> function
>>>>>>>>>>
>>>>>>>>>>
>>>>>>>>>>
>>>>>>>>>> On 07/09/15 13:57, Richardson, Bruce wrote:
>>>>>>>>>>>
>>>>>>>>>>>> -----Original Message-----
>>>>>>>>>>>> From: Zoltan Kiss [mailto:zoltan.kiss@linaro.org]
>>>>>>>>>>>> Sent: Monday, September 7, 2015 1:26 PM
>>>>>>>>>>>> To: dev@dpdk.org
>>>>>>>>>>>> Cc: Ananyev, Konstantin; Richardson, Bruce
>>>>>>>>>>>> Subject: Re: [PATCH] ixgbe: prefetch packet headers in vector PMD
>>>>>>>>>>>> receive function
>>>>>>>>>>>>
>>>>>>>>>>>> Hi,
>>>>>>>>>>>>
>>>>>>>>>>>> I just realized I've missed the "[PATCH]" tag from the subject. Did
>>>>>>>>>>>> anyone had time to review this?
>>>>>>>>>>>>
>>>>>>>>>>> Hi Zoltan,
>>>>>>>>>>>
>>>>>>>>>>> the big thing that concerns me with this is the addition of new
>>>>>>>>>>> instructions for each packet in the fast path. Ideally, this
>>>>>>>>>>> prefetching would be better handled in the application itself, as for
>>>>>>>>>>> some apps, e.g. those using pipelining, the core doing the RX from the
>>>>>>>>>>> NIC may not touch the packet data at all, and the prefetches will
>>>>>>>>>> instead cause a performance slowdown.
>>>>>>>>>>> Is it possible to get the same performance increase - or something
>>>>>>>>>>> close to it - by making changes in OVS?
>>>>>>>>>> OVS already does a prefetch when it's processing the previous packet, but
>>>>>>>>>> apparently it's not early enough. At least for my test scenario, where I'm
>>>>>>>>>> forwarding UDP packets with the least possible overhead. I guess in tests
>>>>>>>>>> where OVS does more complex processing it should be fine.
>>>>>>>>>> I'll try to move the prefetch earlier in OVS codebase, but I'm not sure if
>>>>>>>>>> it'll help.
>>>>>>>>> I would suggest trying to prefetch more than one packet ahead. Prefetching 4 or
>>>>>>>>> 8 ahead might work better, depending on the processing being done.
>>>>>>>>
>>>>>>>> I've moved the prefetch earlier, and it seems to work:
>>>>>>>>
>>>>>>>> https://patchwork.ozlabs.org/patch/519017/
>>>>>>>>
>>>>>>>> However it raises the question: should we remove header prefetch from
>>>>>>>> all the other drivers, and the CONFIG_RTE_PMD_PACKET_PREFETCH option?
>>>>>>>
>>>>>>> My vote would be for that.
>>>>>>> Konstantin
>>>>>>
>>>>>> After some further thinking I would rather support the
>>>>>> rte_packet_prefetch() macro (prefetch the header in the driver, and
>>>>>> configure it through CONFIG_RTE_PMD_PACKET_PREFETCH)
>>>>>>
>>>>>> - the prefetch can happen earlier, so if an application needs the header
>>>>>> right away, this is the fastest
>>>>>> - if the application doesn't need header prefetch, it can turn it off
>>>>>> compile time. Same as if we wouldn't have this option.
>>>>>> - if the application has mixed needs (sometimes it needs the header
>>>>>> right away, sometimes it doesn't), it can turn it off and do what it
>>>>>> needs. Same as if we wouldn't have this option.
>>>>>>
>>>>>> A harder decision would be whether it should be turned on or off by
>>>>>> default. Currently it's on, and half of the receive functions don't use it.
>>>>>
>>>>> Yep, it is sort of a mess right now.
>>>>> Another question if we'd like to keep it and standardise it:
>>>>> at what moment to prefetch: as soon as we realize that HW is done with that buffer,
>>>>> or as late inside rx_burst() as possible?
>>>>> I suppose there is no clear answer for that.
>>>> I think if the application needs the driver start the prefetching, it
>>>> does it because it's already too late when rte_eth_rx_burst() returns.
>>>> So I think it's best to do it as soon as we learn that the HW is done.
>>>
>>> Probably, but it might be situations when it would be more plausible to do it later too.
>> Could you elaborate?
>> If the application wants prefetch to start earlier than could be done by
>> itself (right after rte_eth_rx_burst() returns), earlier is better. So
>> it will have the best chances to have the data in cache.
>> If it would need the data later, then it could do the prefetch by itself.
>>
>>> Again it might depend on particular HW and your memory access pattern.
>>>
>>>>
>>>>> That's why my thought was to just get rid of it.
>>>>> Though if it would be implemented in some standardized way and disabled by default -
>>>>> that's probably ok too.
>>>>> BTW, couldn't that be implemented just via rte_ethdev rx callbacks mechanism?
>>>>> Then we can have the code one place and don't need compile option at all -
>>>>> could be ebabled/disabled dynamically on a per nic basis.
>>>>> Or would it be too high overhead for that?
>>>>
>>>> I think if we go that way, it's better to pass an option to
>>>> rte_eth_rx_burst() and tell if you want the header prefetched by the
>>>> driver. That would be the most flexible.
>>>
>>> That would mean ABI breakage for rx_burst()...
>>> Probably then better a new parameter in rx_conf or something.
>>> Though it still means that each PMD has to implement it on its own.
>> That would be the case in any way, as we are are talking about
>> prefetching in the receive function.
>>
>>> And again there might be PMDs that would just ignore that parameter.
>> If the PMD has a good reason to do that (e.g. prefetch has undesirable
>> effects), I think it's fine.
>>
>>> While for callback it would be all in one and known place.
>> Who and when would call that callback? If the application after
>> rte_eth_rx_burst() returned, it wouldn't have too much use, it could
>> just call prefetch by itself.
>
> I am talking about the callbacks called by rx_burst() itself:
I don't think it would give us any advantage: the app can just prefetch
the headers after rte_eth_rx_burst(). If you look at the original patch
I've sent in about this topic:
http://dpdk.org/ml/archives/dev/2015-September/023144.html
It will start prefetching immediately when an iteration of packets is
finished. If RTE_IXGBE_DESCS_PER_LOOP is 4 and the burst is 32, by the
time the receive function returns the first packet's headers could be
almost there in the cache. And actually you can do it earlier than in
that patch: right when the pointers become available at "B.2 copy 2 mbuf
point into rx_pkts"
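
For anyone not following that code path closely, a hedged sketch of what issuing the prefetch
at that point would look like is below. The names (_recv_raw_pkts_vec(), pos,
RTE_IXGBE_DESCS_PER_LOOP == 4) reflect how the ixgbe vector RX loop is commonly structured,
but this is only an illustration, not the actual patch:

    /* Helper that could be called from the vector RX loop right after the
     * four mbuf pointers for one iteration have been written into
     * rx_pkts[pos..pos+3] (step "B.2 copy 2 mbuf point into rx_pkts").
     * Illustrative sketch only. */
    static inline void
    prefetch_iteration_headers(struct rte_mbuf **rx_pkts, unsigned int pos)
    {
        rte_prefetch0(rte_pktmbuf_mtod(rx_pkts[pos + 0], void *));
        rte_prefetch0(rte_pktmbuf_mtod(rx_pkts[pos + 1], void *));
        rte_prefetch0(rte_pktmbuf_mtod(rx_pkts[pos + 2], void *));
        rte_prefetch0(rte_pktmbuf_mtod(rx_pkts[pos + 3], void *));
    }
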
>
> static inline uint16_t
> rte_eth_rx_burst(uint8_t port_id, uint16_t queue_id,
> struct rte_mbuf **rx_pkts, const uint16_t nb_pkts)
> {
> struct rte_eth_dev *dev;
>
> dev = &rte_eth_devices[port_id];
>
> int16_t nb_rx = (*dev->rx_pkt_burst)(dev->data->rx_queues[queue_id],
> rx_pkts, nb_pkts);
>
> #ifdef RTE_ETHDEV_RXTX_CALLBACKS
> struct rte_eth_rxtx_callback *cb = dev->post_rx_burst_cbs[queue_id];
>
> if (unlikely(cb != NULL)) {
> do {
> nb_rx = cb->fn.rx(port_id, queue_id, rx_pkts, nb_rx,
> nb_pkts, cb->param);
> cb = cb->next;
> } while (cb != NULL);
> #endif
>
> return nb_rx;
> }
>
> Konstantin
>
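As a rough sketch of that callback idea (illustration only: it assumes
the rte_eth_add_rx_callback() registration API and the callback
signature implied by the fn.rx() call quoted above), the prefetch could
be enabled per port/queue at runtime, with no compile-time option:

#include <rte_ethdev.h>
#include <rte_mbuf.h>
#include <rte_prefetch.h>

static uint16_t
prefetch_headers_cb(uint8_t port, uint16_t queue, struct rte_mbuf *pkts[],
		uint16_t nb_rx, uint16_t max_pkts, void *user_param)
{
	uint16_t i;

	(void)port;
	(void)queue;
	(void)max_pkts;
	(void)user_param;

	/* warm the cache with the packet headers, drop nothing */
	for (i = 0; i < nb_rx; i++)
		rte_prefetch0(rte_pktmbuf_mtod(pkts[i], void *));

	return nb_rx;
}

/* registered once at init time, e.g. for queue 0 of a port */
static void
enable_prefetch_cb(uint8_t port_id)
{
	rte_eth_add_rx_callback(port_id, 0, prefetch_headers_cb, NULL);
}

But the objection above still holds: the callback only runs at the end
of rx_burst(), so it cannot start the prefetch as early as the PMD
itself can.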
>>
>>> But again, I didn't measure it, so not sure what will be an impact of the callback itself.
>>> Might be it not worth it.
>>> Konstantin
>>>
>>>>
>>>>> Konstantin
>>>>>
>>>>>
>>>>>
>>>>>>
>>>>>>>
>>>>>>>
>>>>>>>>
>>>>>>>>>
>>>>>>>>> /Bruce
>>>>>>>
^ permalink raw reply [relevance 0%]
* [dpdk-dev] [PATCH v2 1/1] ethdev: remove the imissed deprecation tag
@ 2015-10-20 10:34 4% Maryam Tahhan
0 siblings, 0 replies; 200+ results
From: Maryam Tahhan @ 2015-10-20 10:34 UTC (permalink / raw)
To: dev
Remove the deprecation tag and notice for imissed as it is a generic
register that accounts for packets that were dropped by the HW,
because there are no available mbufs (RX queues are full). imissed is
different to ierrors and can help with general debug.
Fixes: 49f386542af4 ("ethdev: remove driver specific stats")
Signed-off-by: Maryam Tahhan <maryam.tahhan@intel.com>
---
v2:
- Clarify why imissed is no longer deprecated.
- Improve definition of imissed in the documentation.
---
doc/guides/rel_notes/deprecation.rst | 2 +-
lib/librte_ether/rte_ethdev.h | 4 +++-
2 files changed, 4 insertions(+), 2 deletions(-)
diff --git a/doc/guides/rel_notes/deprecation.rst b/doc/guides/rel_notes/deprecation.rst
index fa55117..c4babbd 100644
--- a/doc/guides/rel_notes/deprecation.rst
+++ b/doc/guides/rel_notes/deprecation.rst
@@ -14,7 +14,7 @@ Deprecation Notices
All binaries will need to be rebuilt from release 2.2.
* The following fields have been deprecated in rte_eth_stats:
- imissed, ibadcrc, ibadlen, imcasts, fdirmatch, fdirmiss,
+ ibadcrc, ibadlen, imcasts, fdirmatch, fdirmiss,
tx_pause_xon, rx_pause_xon, tx_pause_xoff, rx_pause_xoff
* ABI changes are planned for struct rte_eth_fdir_flow_ext in order to support
diff --git a/lib/librte_ether/rte_ethdev.h b/lib/librte_ether/rte_ethdev.h
index 8a8c82b..d404f85 100644
--- a/lib/librte_ether/rte_ethdev.h
+++ b/lib/librte_ether/rte_ethdev.h
@@ -195,7 +195,9 @@ struct rte_eth_stats {
uint64_t ibytes; /**< Total number of successfully received bytes. */
uint64_t obytes; /**< Total number of successfully transmitted bytes. */
uint64_t imissed;
- /**< Deprecated; Total of RX missed packets (e.g full FIFO). */
+ /**< Total of RX missed packets (packets that were dropped by the HW,
+ * because there are no available mbufs i.e. RX queues are full).
+ */
uint64_t ibadcrc;
/**< Deprecated; Total of RX packets with CRC error. */
uint64_t ibadlen;
--
2.4.3
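For context only, here is a minimal sketch (not part of the patch
above) of how an application reads this counter through
rte_eth_stats_get(); port_id is a placeholder for a valid, started
port:

#include <inttypes.h>
#include <stdio.h>
#include <rte_ethdev.h>

static void
print_rx_missed(uint8_t port_id)
{
	struct rte_eth_stats stats;

	rte_eth_stats_get(port_id, &stats);
	printf("port %u RX missed (no mbufs, RX queues full): %" PRIu64 "\n",
		(unsigned int)port_id, stats.imissed);
}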
^ permalink raw reply [relevance 4%]
* [dpdk-dev] [PATCH v2] doc: add contributors guide
2015-10-15 16:51 3% ` [dpdk-dev] [PATCH] doc: add " John McNamara
@ 2015-10-20 11:03 2% ` John McNamara
2015-10-23 10:18 2% ` [dpdk-dev] [PATCH v3] " John McNamara
2 siblings, 0 replies; 200+ results
From: John McNamara @ 2015-10-20 11:03 UTC (permalink / raw)
To: dev
Add a document to explain the DPDK patch submission and review process.
Signed-off-by: John McNamara <john.mcnamara@intel.com>
---
v2:
* Fixes for mailing list comments.
* Fix for broken link target.
doc/guides/contributing/documentation.rst | 2 +-
doc/guides/contributing/index.rst | 1 +
doc/guides/contributing/patches.rst | 327 ++++++++++++++++++++++++++++++
3 files changed, 329 insertions(+), 1 deletion(-)
create mode 100644 doc/guides/contributing/patches.rst
diff --git a/doc/guides/contributing/documentation.rst b/doc/guides/contributing/documentation.rst
index 7c1eb41..0e37f01 100644
--- a/doc/guides/contributing/documentation.rst
+++ b/doc/guides/contributing/documentation.rst
@@ -1,4 +1,4 @@
-.. doc_guidelines:
+.. _doc_guidelines:
DPDK Documentation Guidelines
=============================
diff --git a/doc/guides/contributing/index.rst b/doc/guides/contributing/index.rst
index 561427b..f49ca88 100644
--- a/doc/guides/contributing/index.rst
+++ b/doc/guides/contributing/index.rst
@@ -9,3 +9,4 @@ Contributor's Guidelines
design
versioning
documentation
+ patches
diff --git a/doc/guides/contributing/patches.rst b/doc/guides/contributing/patches.rst
new file mode 100644
index 0000000..7166014
--- /dev/null
+++ b/doc/guides/contributing/patches.rst
@@ -0,0 +1,327 @@
+.. submitting_patches:
+
+Contributing Code to DPDK
+=========================
+
+This document outlines the guidelines for submitting code to DPDK.
+
+The DPDK development process is modeled (loosely) on the Linux Kernel development model so it is worth reading the
+Linux kernel guide on submitting patches:
+`How to Get Your Change Into the Linux Kernel <http://www.kernel.org/doc/Documentation/SubmittingPatches>`_.
+The rationale for many of the DPDK guidelines is explained in greater detail in the kernel guidelines.
+
+
+The DPDK Development Process
+-----------------------------
+
+The DPDK development process has the following features:
+
+* The code is hosted in a public git repository.
+* There is a mailing list where developers submit patches.
+* There are maintainers for hierarchical components.
+* Patches are reviewed publicly on the mailing list.
+* Successfully reviewed patches are merged to the master branch of the repository.
+
+The mailing list for DPDK development is `dev@dpdk.org <http://dpdk.org/ml/archives/dev/>`_.
+Contributors will need to `register for the mailing list <http://dpdk.org/ml/listinfo/dev>`_ in order to submit patches.
+It is also worth registering for the DPDK `Patchwork <http://dpdk.org/dev/patchwork/project/dpdk/list/>`_
+
+The development process requires some familiarity with the ``git`` version control system.
+Refer to the `Pro Git Book <http://www.git-scm.com/book/>`_ for further information.
+
+
+Getting the Source Code
+-----------------------
+
+The source code can be cloned using either of the following::
+
+ git clone git://dpdk.org/dpdk
+
+ git clone http://dpdk.org/git/dpdk
+
+
+Make your Changes
+-----------------
+
+Make your planned changes in the cloned ``dpdk`` repo. Here are some guidelines and requirements:
+
+* Follow the :ref:`coding_style` guidelines.
+
+* If you add new files or directories you should add your name to the ``MAINTAINERS`` file.
+
+* New external functions should be added to the local ``version.map`` file.
+ See the :doc:`Guidelines for ABI policy and versioning </contributing/versioning>`.
+
+* Important changes will require an addition to the release notes in ``doc/guides/rel_notes/``.
+ See the :ref:`Release Notes section of the Documentation Guidelines <doc_guidelines>` for details.
+
+* Run ``make test`` and ``make examples`` to ensure the changes haven't broken existing code.
+
+* Don’t break compilation between commits with forward dependencies in a patchset.
+ Each commit should compile on its own to allow for ``git bisect`` and continuous integration testing.
+
+* Add tests to the ``app/test`` unit test framework where possible.
+
+* Add documentation, if relevant, in the form of Doxygen comments or a User Guide in RST format.
+ See the :ref:`Documentation Guidelines <doc_guidelines>`.
+
+Once the changes have been made you should commit them to your local repo.
+
+For small changes, that do not require specific explanations, it is better to keep things together in the
+same patch.
+Larger changes that require different explanations should be separated into logical patches in a patchset.
+A good way of thinking about whether a patch should be split is to consider whether the change could be
+applied without dependencies as a backport.
+
+As a guide to how patches should be structured run ``git log`` on similar files.
+
+
+Commit Messages: Subject Line
+-----------------------------
+
+The first, summary, line of the git commit message becomes the subject line of the patch email.
+Here are some guidelines for the summary line:
+
+* The summary line must capture the area and the impact of the change.
+
+* The summary line should be around 50 characters.
+
+* The summary line should be lowercase apart from acronyms.
+
+* It should be prefixed with the component name (use git log to check existing components).
+ For example::
+
+ config: enable same drivers options for linux and bsd
+
+ ixgbe: fix offload config option name
+
+* Use the imperative of the verb (like instructions to the code base).
+ For example::
+
+ ixgbe: fix rss in 32 bit
+
+* Don't add a period/full stop to the subject line or you will end up with two in the patch name: ``dpdk_description..patch``.
+
+The actual email subject line should be prefixed by ``[PATCH]`` and the version, if greater than v1,
+for example: ``PATCH v2``.
+This is generally added by ``git send-email`` or ``git format-patch``, see below.
+
+If you are submitting an RFC draft of a feature you can use ``[RFC]`` instead of ``[PATCH]``.
+An RFC patch doesn't have to be complete.
+It is intended as a way of getting early feedback.
+
+
+Commit Messages: Body
+---------------------
+
+Here are some guidelines for the body of a commit message:
+
+* The body of the message should describe the issue being fixed or the feature being added.
+ It is important to provide enough information to allow a reviewer to understand the purpose of the patch.
+
+* When the change is obvious the body can be blank, apart from the signoff.
+
+* The commit message must end with a ``Signed-off-by:`` line which is added using::
+
+ git commit --signoff # or -s
+
+ The purpose of the signoff is explained in the
+ `Developer's Certificate of Origin <http://www.kernel.org/doc/Documentation/SubmittingPatches>`_
+ section of the Linux kernel guidelines.
+
+ .. Note::
+
+ All developers must ensure that they have read and understood the
+ Developer's Certificate of Origin section of the documentation prior
+ to applying the signoff and submitting a patch.
+
+* The signoff must be a real name and not an alias or nickname.
+ More than one signoff is allowed.
+
+* The text of the commit message should be wrapped at 72 characters.
+
+* When fixing a regression, it is a good idea to reference the id of the commit which introduced the bug.
+ You can generate the required text using the following git alias::
+
+ git alias: fixline = log -1 --abbrev=12 --format='Fixes: %h (\"%s\")'
+
+
+ The ``Fixes:`` line can then be added to the commit message::
+
+ doc: fix vhost sample parameter
+
+ Update the docs to reflect removed dev-index.
+
+ Fixes: 17b8320a3e11 ("vhost: remove index parameter")
+
+ Signed-off-by: Changchun Ouyang <changchun.ouyang@intel.com>
+
+* When fixing an error or warning it is useful to add the error message and instructions on how to reproduce it.
+
+* Use correct capitalization, punctuation and spelling.
+
+In addition to the ``Signed-off-by:`` name the commit messages can also have one or more of the following:
+
+* ``Reported-by:`` The reporter of the issue.
+* ``Tested-by:`` The tester of the change.
+* ``Reviewed-by:`` The reviewer of the change.
+* ``Suggested-by:`` The person who suggested the change.
+* ``Acked-by:`` When a previous version of the patch was acked and the ack is still relevant.
+
+
+Creating Patches
+----------------
+
+It is possible to send patches directly from git but for new contributors it is recommended to generate the
+patches with ``git format-patch`` and then when everything looks okay, and the patches have been checked, to
+send them with ``git send-mail``.
+
+Here are some examples of using ``git format-patch`` to generate patches:
+
+.. code-block:: console
+
+ # Generate a patch from the last commit.
+ git format-patch -1
+
+ # Generate a patch from the last 3 commits.
+ git format-patch -3
+
+ # Generate the patches in a directory.
+ git format-patch -3 -o ~/patch/
+
+ # Add a cover letter to explain a patchset.
+ git format-patch -3 --cover-letter
+
+ # Add a prefix with a version number.
+ git format-patch -3 --subject-prefix 'PATCH v2'
+
+
+Cover letters are useful for explaining a patchset and help to generate a logical threading to the patches.
+Smaller notes can be put inline in the patch after the ``---`` separator, for example::
+
+ Subject: [PATCH] fm10k/base: add FM10420 device ids
+
+ Add the device ID for Boulder Rapids and Atwood Channel to enable
+ drivers to support those devices.
+
+ Signed-off-by: Wang Xiao W <xiao.w.wang@intel.com>
+ ---
+
+ ADD NOTES HERE.
+
+ drivers/net/fm10k/base/fm10k_api.c | 6 ++++++
+ drivers/net/fm10k/base/fm10k_type.h | 6 ++++++
+ 2 files changed, 12 insertions(+)
+ ...
+
+Version 2 and later of a patchset should also include a short log of the changes so the reviewer knows what has changed.
+This can be added to the cover letter or the annotations.
+For example::
+
+ v3:
+ * Fixed issue with version.map.
+
+ v2:
+ * Added i40e support.
+ * Renamed ethdev functions from rte_eth_ieee15888_*() to rte_eth_timesync_*()
+ since 802.1AS can be supported through the same interfaces.
+
+
+Checking the Patches
+--------------------
+
+Patches should be checked for formatting and syntax issues using the Linux scripts tool ``checkpatch``.
+
+The ``checkpatch`` utility can be obtained by cloning, and periodically updating, the Linux kernel sources.
+
+The kernel guidelines tested by ``checkpatch`` don't match the DPDK Coding Style guidelines exactly but
+they provide a good indication of conformance.
+Warnings about kernel data types or about split strings can be ignored::
+
+ /path/checkpatch.pl --ignore PREFER_KERNEL_TYPES,SPLIT_STRING -q files*.patch
+
+Ensure that the code compiles with ``gcc`` and ``clang``::
+
+ make T=x86_64-native-linuxapp-gcc install
+ make T=x86_64-native-linuxapp-clang install
+
+Confirm that the changes haven't broken any existing code by using ``make test`` and ``make examples``.
+
+
+Sending Patches
+---------------
+
+Patches should be sent to the mailing list using ``git send-email``.
+You can configure an external SMTP with something like the following::
+
+ [sendemail]
+ smtpuser = name@domain.com
+ smtpserver = smtp.domain.com
+ smtpserverport = 465
+ smtpencryption = ssl
+
+See the `git send-email <https://git-scm.com/docs/git-send-email>`_ documentation for more details.
+
+The patches should be sent to ``dev@dpdk.org``.
+If the patches are a change to existing files then you should send them TO the maintainer(s) and CC ``dev@dpdk.org``.
+The appropriate maintainer can be found in the ``MAINTAINERS`` file::
+
+ git send-email --to maintainer@some.org --cc dev@dpdk.org 000*.patch
+
+New additions can be sent without a maintainer::
+
+ git send-email --to dev@dpdk.org 000*.patch
+
+You can test the emails by sending them to yourself or by using the ``--dry-run`` option.
+
+If the patch is in relation to a previous email thread you can add it to the same thread using the Message ID::
+
+ git send-email --to dev@dpdk.org --in-reply-to <1234-foo@bar.com> 000*.patch
+
+The Message ID can be found in the raw text of emails or at the top of each Patchwork patch,
+`for example <http://dpdk.org/dev/patchwork/patch/7646/>`_.
+Shallow threading (``--thread --no-chain-reply-to``) is preferred for a patch series.
+
+Once submitted your patches will appear on the mailing list and in Patchwork.
+
+Experienced committers may send patches directly with ``git send-email`` without the ``git format-patch`` step.
+The options ``--annotate`` and ``confirm = always`` are recommended for checking patches before sending.
+
+
+The Review Process
+------------------
+
+The more work you put into the previous steps the easier it will be to get a patch accepted.
+
+The general cycle for patch review and acceptance is:
+
+#. Submit the patch.
+
+#. Check the automatic test reports in the coming hours.
+
+#. Wait for review comments. While you are waiting, review some other patches.
+
+#. Fix the review comments and submit a ``v n+1`` patchset::
+
+ git format-patch -3 -o --subject-prefix 'PATCH v2'
+
+#. Update Patchwork to mark your previous patches as "Superseded".
+
+#. If the patch is deemed suitable for merging by the relevant maintainer(s) or other developers they will ``ack``
+ the patch with an email that includes something like::
+
+ Acked-by: Alex Smith <alex.smith@example.com>
+
+ **Note**: When acking patches please remove as much of the text of the patch email as possible.
+ It is generally best to delete everything after the ``Signed-off-by:`` line.
+
+#. Having the patch ``Reviewed-by:`` and/or ``Tested-by:`` will also help the patch to be accepted.
+
+#. If the patch isn't deemed suitable based on being out of scope or conflicting with existing functionality
+ it may receive a ``nack``.
+ In this case you will need to make a more convincing technical argument in favor of your patches.
+
+#. In addition a patch will not be accepted if it doesn't address comments from a previous version with fixes or
+ valid arguments.
+
+#. Acked patches will be merged in the current or next merge window.
--
1.8.1.4
^ permalink raw reply [relevance 2%]
* Re: [dpdk-dev] [RFC PATCH v2] vhost: Add VHOST PMD
2015-10-19 13:27 0% ` Richardson, Bruce
@ 2015-10-21 4:35 0% ` Tetsuya Mukawa
2015-10-21 6:25 3% ` Panu Matilainen
0 siblings, 1 reply; 200+ results
From: Tetsuya Mukawa @ 2015-10-21 4:35 UTC (permalink / raw)
To: Richardson, Bruce, Panu Matilainen, Loftus, Ciara; +Cc: dev, ann.zhuangyanying
On 2015/10/19 22:27, Richardson, Bruce wrote:
>> -----Original Message-----
>> From: Panu Matilainen [mailto:pmatilai@redhat.com]
>> Sent: Monday, October 19, 2015 2:26 PM
>> To: Tetsuya Mukawa <mukawa@igel.co.jp>; Richardson, Bruce
>> <bruce.richardson@intel.com>; Loftus, Ciara <ciara.loftus@intel.com>
>> Cc: dev@dpdk.org; ann.zhuangyanying@huawei.com
>> Subject: Re: [dpdk-dev] [RFC PATCH v2] vhost: Add VHOST PMD
>>
>> On 10/19/2015 01:50 PM, Tetsuya Mukawa wrote:
>>> On 2015/10/19 18:45, Bruce Richardson wrote:
>>>> On Mon, Oct 19, 2015 at 10:32:50AM +0100, Loftus, Ciara wrote:
>>>>>> On 2015/10/16 21:52, Bruce Richardson wrote:
>>>>>>> On Mon, Aug 31, 2015 at 12:55:26PM +0900, Tetsuya Mukawa wrote:
>>>>>>>> The patch introduces a new PMD. This PMD is implemented as thin
>>>>>> wrapper
>>>>>>>> of librte_vhost. It means librte_vhost is also needed to compile
>> the PMD.
>>>>>>>> The PMD can have 'iface' parameter like below to specify a path
>>>>>>>> to
>>>>>> connect
>>>>>>>> to a virtio-net device.
>>>>>>>>
>>>>>>>> $ ./testpmd -c f -n 4 --vdev 'eth_vhost0,iface=/tmp/sock0' -- -i
>>>>>>>>
>>>>>>>> To connect above testpmd, here is qemu command example.
>>>>>>>>
>>>>>>>> $ qemu-system-x86_64 \
>>>>>>>> <snip>
>>>>>>>> -chardev socket,id=chr0,path=/tmp/sock0 \
>>>>>>>> -netdev vhost-user,id=net0,chardev=chr0,vhostforce \
>>>>>>>> -device virtio-net-pci,netdev=net0
>>>>>>>>
>>>>>>>> Signed-off-by: Tetsuya Mukawa <mukawa@igel.co.jp>
>>>>>>> With this PMD in place, is there any need to keep the existing
>>>>>>> vhost library around as a separate entity? Can the existing
>>>>>>> library be
>>>>>> subsumed/converted into
>>>>>>> a standard PMD?
>>>>>>>
>>>>>>> /Bruce
>>>>>> Hi Bruce,
>>>>>>
>>>>>> I concern about whether the PMD has all features of librte_vhost,
>>>>>> because librte_vhost provides more features and freedom than ethdev
>>>>>> API provides.
>>>>>> In some cases, user needs to choose limited implementation without
>>>>>> librte_vhost.
>>>>>> I am going to eliminate such cases while implementing the PMD.
>>>>>> But I don't have strong belief that we can remove librte_vhost now.
>>>>>>
>>>>>> So how about keeping current separation in next DPDK?
>>>>>> I guess people will try to replace librte_vhost to vhost PMD,
>>>>>> because apparently using ethdev APIs will be useful in many cases.
>>>>>> And we will get feedbacks like "vhost PMD needs to support like this
>> usage".
>>>>>> (Or we will not have feedbacks, but it's also OK.) Then, we will be
>>>>>> able to merge librte_vhost and vhost PMD.
>>>>> I agree with the above. One the concerns I had when reviewing the
>> patch was that the PMD removes some freedom that is available with the
>> library. Eg. Ability to implement the new_device and destroy_device
>> callbacks. If using the PMD you are constrained to the implementations of
>> these in the PMD driver, but if using librte_vhost, you can implement your
>> own with whatever functionality you like - a good example of this can be
>> seen in the vhost sample app.
>>>>> On the other hand, the PMD is useful in that it removes a lot of
>> complexity for the user and may work for some more general use cases. So I
>> would be in favour of having both options available too.
>>>>> Ciara
>>>>>
>>>> Thanks.
>>>> However, just because the libraries are merged does not mean that you
>>>> need be limited by PMD functionality. Many PMDs provide additional
>>>> library-specific functions over and above their PMD capabilities. The
>>>> bonded PMD is a good example here, as it has a whole set of extra
>>>> functions to create and manipulate bonded devices - things that are
>>>> obviously not part of the general ethdev API. Other vPMDs similarly
>> include functions to allow them to be created on the fly too.
>>>> regards,
>>>> /Bruce
>>> Hi Bruce,
>>>
>>> I appreciate for showing a good example. I haven't noticed the PMD.
>>> I will check the bonding PMD, and try to remove librte_vhost without
>>> losing freedom and features of the library.
>> Hi,
>>
>> Just a gentle reminder - if you consider removing (even if by just
>> replacing/renaming) an entire library, it needs to happen the ABI
>> deprecation process.
>>
>> It seems obvious enough. But for all the ABI policing here, somehow we all
>> failed to notice the two compatibility breaking rename-elephants in the
>> room during 2.1 development:
>> - libintel_dpdk was renamed to libdpdk
>> - librte_pmd_virtio_uio was renamed to librte_pmd_virtio
>>
>> Of course these cases are easy to work around with symlinks, and are
>> unrelated to the matter at hand. Just wanting to make sure such things
>> dont happen again.
>>
>> - Panu -
> Still doesn't hurt to remind us, Panu! Thanks. :-)
Hi,
Thanks for the reminder. I've checked the DPDK documentation.
I will submit a deprecation notice to follow the DPDK deprecation process.
(Probably we will be able to remove the vhost library in DPDK-2.3 or later.)
BTW, I will merge the vhost library and the PMD like below.
Step1. Move the vhost library under the vhost PMD.
Step2. Rename the current APIs.
Step3. Add a function to get a pointer to the "struct virtio_net" device
by a port number.
The last step allows us to convert a port number to the pointer of the
corresponding virtio_net device, so we can still use the features and
freedom the vhost library APIs provide.
Thanks,
Tetsuya
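Purely as an illustration of Step3 above (the function name and
signature are hypothetical, nothing like this has been merged):

/* hypothetical accessor: map an ethdev port backed by the vhost PMD to
 * its underlying virtio_net device */
struct virtio_net *rte_eth_vhost_get_device(uint8_t port_id);

/* usage sketch: the fast path stays on rte_eth_rx_burst()/tx_burst(),
 * while anything the ethdev API cannot express is still done with
 * librte_vhost-style calls on the device returned by the accessor:
 *
 *     struct virtio_net *vdev = rte_eth_vhost_get_device(port_id);
 */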
^ permalink raw reply [relevance 0%]
* Re: [dpdk-dev] [RFC PATCH v2] vhost: Add VHOST PMD
2015-10-21 4:35 0% ` Tetsuya Mukawa
@ 2015-10-21 6:25 3% ` Panu Matilainen
2015-10-21 10:22 0% ` Bruce Richardson
0 siblings, 1 reply; 200+ results
From: Panu Matilainen @ 2015-10-21 6:25 UTC (permalink / raw)
To: Tetsuya Mukawa, Richardson, Bruce, Loftus, Ciara; +Cc: dev, ann.zhuangyanying
On 10/21/2015 07:35 AM, Tetsuya Mukawa wrote:
> On 2015/10/19 22:27, Richardson, Bruce wrote:
>>> -----Original Message-----
>>> From: Panu Matilainen [mailto:pmatilai@redhat.com]
>>> Sent: Monday, October 19, 2015 2:26 PM
>>> To: Tetsuya Mukawa <mukawa@igel.co.jp>; Richardson, Bruce
>>> <bruce.richardson@intel.com>; Loftus, Ciara <ciara.loftus@intel.com>
>>> Cc: dev@dpdk.org; ann.zhuangyanying@huawei.com
>>> Subject: Re: [dpdk-dev] [RFC PATCH v2] vhost: Add VHOST PMD
>>>
>>> On 10/19/2015 01:50 PM, Tetsuya Mukawa wrote:
>>>> On 2015/10/19 18:45, Bruce Richardson wrote:
>>>>> On Mon, Oct 19, 2015 at 10:32:50AM +0100, Loftus, Ciara wrote:
>>>>>>> On 2015/10/16 21:52, Bruce Richardson wrote:
>>>>>>>> On Mon, Aug 31, 2015 at 12:55:26PM +0900, Tetsuya Mukawa wrote:
>>>>>>>>> The patch introduces a new PMD. This PMD is implemented as thin
>>>>>>> wrapper
>>>>>>>>> of librte_vhost. It means librte_vhost is also needed to compile
>>> the PMD.
>>>>>>>>> The PMD can have 'iface' parameter like below to specify a path
>>>>>>>>> to
>>>>>>> connect
>>>>>>>>> to a virtio-net device.
>>>>>>>>>
>>>>>>>>> $ ./testpmd -c f -n 4 --vdev 'eth_vhost0,iface=/tmp/sock0' -- -i
>>>>>>>>>
>>>>>>>>> To connect above testpmd, here is qemu command example.
>>>>>>>>>
>>>>>>>>> $ qemu-system-x86_64 \
>>>>>>>>> <snip>
>>>>>>>>> -chardev socket,id=chr0,path=/tmp/sock0 \
>>>>>>>>> -netdev vhost-user,id=net0,chardev=chr0,vhostforce \
>>>>>>>>> -device virtio-net-pci,netdev=net0
>>>>>>>>>
>>>>>>>>> Signed-off-by: Tetsuya Mukawa <mukawa@igel.co.jp>
>>>>>>>> With this PMD in place, is there any need to keep the existing
>>>>>>>> vhost library around as a separate entity? Can the existing
>>>>>>>> library be
>>>>>>> subsumed/converted into
>>>>>>>> a standard PMD?
>>>>>>>>
>>>>>>>> /Bruce
>>>>>>> Hi Bruce,
>>>>>>>
>>>>>>> I concern about whether the PMD has all features of librte_vhost,
>>>>>>> because librte_vhost provides more features and freedom than ethdev
>>>>>>> API provides.
>>>>>>> In some cases, user needs to choose limited implementation without
>>>>>>> librte_vhost.
>>>>>>> I am going to eliminate such cases while implementing the PMD.
>>>>>>> But I don't have strong belief that we can remove librte_vhost now.
>>>>>>>
>>>>>>> So how about keeping current separation in next DPDK?
>>>>>>> I guess people will try to replace librte_vhost to vhost PMD,
>>>>>>> because apparently using ethdev APIs will be useful in many cases.
>>>>>>> And we will get feedbacks like "vhost PMD needs to support like this
>>> usage".
>>>>>>> (Or we will not have feedbacks, but it's also OK.) Then, we will be
>>>>>>> able to merge librte_vhost and vhost PMD.
>>>>>> I agree with the above. One the concerns I had when reviewing the
>>> patch was that the PMD removes some freedom that is available with the
>>> library. Eg. Ability to implement the new_device and destroy_device
>>> callbacks. If using the PMD you are constrained to the implementations of
>>> these in the PMD driver, but if using librte_vhost, you can implement your
>>> own with whatever functionality you like - a good example of this can be
>>> seen in the vhost sample app.
>>>>>> On the other hand, the PMD is useful in that it removes a lot of
>>> complexity for the user and may work for some more general use cases. So I
>>> would be in favour of having both options available too.
>>>>>> Ciara
>>>>>>
>>>>> Thanks.
>>>>> However, just because the libraries are merged does not mean that you
>>>>> need be limited by PMD functionality. Many PMDs provide additional
>>>>> library-specific functions over and above their PMD capabilities. The
>>>>> bonded PMD is a good example here, as it has a whole set of extra
>>>>> functions to create and manipulate bonded devices - things that are
>>>>> obviously not part of the general ethdev API. Other vPMDs similarly
>>> include functions to allow them to be created on the fly too.
>>>>> regards,
>>>>> /Bruce
>>>> Hi Bruce,
>>>>
>>>> I appreciate for showing a good example. I haven't noticed the PMD.
>>>> I will check the bonding PMD, and try to remove librte_vhost without
>>>> losing freedom and features of the library.
>>> Hi,
>>>
>>> Just a gentle reminder - if you consider removing (even if by just
>>> replacing/renaming) an entire library, it needs to happen the ABI
>>> deprecation process.
>>>
>>> It seems obvious enough. But for all the ABI policing here, somehow we all
>>> failed to notice the two compatibility breaking rename-elephants in the
>>> room during 2.1 development:
>>> - libintel_dpdk was renamed to libdpdk
>>> - librte_pmd_virtio_uio was renamed to librte_pmd_virtio
>>>
>>> Of course these cases are easy to work around with symlinks, and are
>>> unrelated to the matter at hand. Just wanting to make sure such things
>>> dont happen again.
>>>
>>> - Panu -
>> Still doesn't hurt to remind us, Panu! Thanks. :-)
>
> Hi,
>
> Thanks for reminder. I've checked the DPDK documentation.
> I will submit deprecation notice to follow DPDK deprecation process.
> (Probably we will be able to remove vhost library in DPDK-2.3 or later.)
>
> BTW, I will merge vhost library and PMD like below.
> Step1. Move vhost library under vhost PMD.
> Step2. Rename current APIs.
> Step3. Add a function to get a pointer of "struct virtio_net device" by
> a portno.
>
> Last steps allows us to be able to convert a portno to the pointer of
> corresponding vrtio_net device.
> And we can still use features and freedom vhost library APIs provided.
Just wondering, is that *really* worth the price of breaking every
single vhost library user out there?
I mean, this is not about removing some bitrotten function or two which
nobody cares about anymore but removing (by renaming) one of the more
widely (AFAICS) used libraries and its entire API.
If current APIs are kept then compatibility is largely a matter of
planting a strategic symlink or two, but it might make the API look
inconsistent.
But I'm just wondering about the benefit of this merge thing, compared to
just adding a vhost PMD and leaving the library be. The ABI process is
not there to make life miserable for DPDK developers, it's there to help
make DPDK nicer for *other* developers. And the first and foremost
rule is simply: don't break backwards compatibility. Not unless there's a
damn good reason for doing so, and I fail to see that reason here.
- Panu -
> Thanks,
> Tetsuya
>
^ permalink raw reply [relevance 3%]
* Re: [dpdk-dev] [RFC PATCH v2] vhost: Add VHOST PMD
2015-10-21 6:25 3% ` Panu Matilainen
@ 2015-10-21 10:22 0% ` Bruce Richardson
2015-10-22 9:50 0% ` Tetsuya Mukawa
0 siblings, 1 reply; 200+ results
From: Bruce Richardson @ 2015-10-21 10:22 UTC (permalink / raw)
To: Panu Matilainen; +Cc: dev, ann.zhuangyanying
On Wed, Oct 21, 2015 at 09:25:12AM +0300, Panu Matilainen wrote:
> On 10/21/2015 07:35 AM, Tetsuya Mukawa wrote:
> >On 2015/10/19 22:27, Richardson, Bruce wrote:
> >>>-----Original Message-----
> >>>From: Panu Matilainen [mailto:pmatilai@redhat.com]
> >>>Sent: Monday, October 19, 2015 2:26 PM
> >>>To: Tetsuya Mukawa <mukawa@igel.co.jp>; Richardson, Bruce
> >>><bruce.richardson@intel.com>; Loftus, Ciara <ciara.loftus@intel.com>
> >>>Cc: dev@dpdk.org; ann.zhuangyanying@huawei.com
> >>>Subject: Re: [dpdk-dev] [RFC PATCH v2] vhost: Add VHOST PMD
> >>>
> >>>On 10/19/2015 01:50 PM, Tetsuya Mukawa wrote:
> >>>>On 2015/10/19 18:45, Bruce Richardson wrote:
> >>>>>On Mon, Oct 19, 2015 at 10:32:50AM +0100, Loftus, Ciara wrote:
> >>>>>>>On 2015/10/16 21:52, Bruce Richardson wrote:
> >>>>>>>>On Mon, Aug 31, 2015 at 12:55:26PM +0900, Tetsuya Mukawa wrote:
> >>>>>>>>>The patch introduces a new PMD. This PMD is implemented as thin
> >>>>>>>wrapper
> >>>>>>>>>of librte_vhost. It means librte_vhost is also needed to compile
> >>>the PMD.
> >>>>>>>>>The PMD can have 'iface' parameter like below to specify a path
> >>>>>>>>>to
> >>>>>>>connect
> >>>>>>>>>to a virtio-net device.
> >>>>>>>>>
> >>>>>>>>>$ ./testpmd -c f -n 4 --vdev 'eth_vhost0,iface=/tmp/sock0' -- -i
> >>>>>>>>>
> >>>>>>>>>To connect above testpmd, here is qemu command example.
> >>>>>>>>>
> >>>>>>>>>$ qemu-system-x86_64 \
> >>>>>>>>> <snip>
> >>>>>>>>> -chardev socket,id=chr0,path=/tmp/sock0 \
> >>>>>>>>> -netdev vhost-user,id=net0,chardev=chr0,vhostforce \
> >>>>>>>>> -device virtio-net-pci,netdev=net0
> >>>>>>>>>
> >>>>>>>>>Signed-off-by: Tetsuya Mukawa <mukawa@igel.co.jp>
> >>>>>>>>With this PMD in place, is there any need to keep the existing
> >>>>>>>>vhost library around as a separate entity? Can the existing
> >>>>>>>>library be
> >>>>>>>subsumed/converted into
> >>>>>>>>a standard PMD?
> >>>>>>>>
> >>>>>>>>/Bruce
> >>>>>>>Hi Bruce,
> >>>>>>>
> >>>>>>>I concern about whether the PMD has all features of librte_vhost,
> >>>>>>>because librte_vhost provides more features and freedom than ethdev
> >>>>>>>API provides.
> >>>>>>>In some cases, user needs to choose limited implementation without
> >>>>>>>librte_vhost.
> >>>>>>>I am going to eliminate such cases while implementing the PMD.
> >>>>>>>But I don't have strong belief that we can remove librte_vhost now.
> >>>>>>>
> >>>>>>>So how about keeping current separation in next DPDK?
> >>>>>>>I guess people will try to replace librte_vhost to vhost PMD,
> >>>>>>>because apparently using ethdev APIs will be useful in many cases.
> >>>>>>>And we will get feedbacks like "vhost PMD needs to support like this
> >>>usage".
> >>>>>>>(Or we will not have feedbacks, but it's also OK.) Then, we will be
> >>>>>>>able to merge librte_vhost and vhost PMD.
> >>>>>>I agree with the above. One the concerns I had when reviewing the
> >>>patch was that the PMD removes some freedom that is available with the
> >>>library. Eg. Ability to implement the new_device and destroy_device
> >>>callbacks. If using the PMD you are constrained to the implementations of
> >>>these in the PMD driver, but if using librte_vhost, you can implement your
> >>>own with whatever functionality you like - a good example of this can be
> >>>seen in the vhost sample app.
> >>>>>>On the other hand, the PMD is useful in that it removes a lot of
> >>>complexity for the user and may work for some more general use cases. So I
> >>>would be in favour of having both options available too.
> >>>>>>Ciara
> >>>>>>
> >>>>>Thanks.
> >>>>>However, just because the libraries are merged does not mean that you
> >>>>>need be limited by PMD functionality. Many PMDs provide additional
> >>>>>library-specific functions over and above their PMD capabilities. The
> >>>>>bonded PMD is a good example here, as it has a whole set of extra
> >>>>>functions to create and manipulate bonded devices - things that are
> >>>>>obviously not part of the general ethdev API. Other vPMDs similarly
> >>>include functions to allow them to be created on the fly too.
> >>>>>regards,
> >>>>>/Bruce
> >>>>Hi Bruce,
> >>>>
> >>>>I appreciate for showing a good example. I haven't noticed the PMD.
> >>>>I will check the bonding PMD, and try to remove librte_vhost without
> >>>>losing freedom and features of the library.
> >>>Hi,
> >>>
> >>>Just a gentle reminder - if you consider removing (even if by just
> >>>replacing/renaming) an entire library, it needs to happen the ABI
> >>>deprecation process.
> >>>
> >>>It seems obvious enough. But for all the ABI policing here, somehow we all
> >>>failed to notice the two compatibility breaking rename-elephants in the
> >>>room during 2.1 development:
> >>>- libintel_dpdk was renamed to libdpdk
> >>>- librte_pmd_virtio_uio was renamed to librte_pmd_virtio
> >>>
> >>>Of course these cases are easy to work around with symlinks, and are
> >>>unrelated to the matter at hand. Just wanting to make sure such things
> >>>dont happen again.
> >>>
> >>> - Panu -
> >>Still doesn't hurt to remind us, Panu! Thanks. :-)
> >
> >Hi,
> >
> >Thanks for reminder. I've checked the DPDK documentation.
> >I will submit deprecation notice to follow DPDK deprecation process.
> >(Probably we will be able to remove vhost library in DPDK-2.3 or later.)
> >
> >BTW, I will merge vhost library and PMD like below.
> >Step1. Move vhost library under vhost PMD.
> >Step2. Rename current APIs.
> >Step3. Add a function to get a pointer of "struct virtio_net device" by
> >a portno.
> >
> >Last steps allows us to be able to convert a portno to the pointer of
> >corresponding vrtio_net device.
> >And we can still use features and freedom vhost library APIs provided.
>
> Just wondering, is that *really* worth the price of breaking every single
> vhost library user out there?
>
> I mean, this is not about removing some bitrotten function or two which
> nobody cares about anymore but removing (by renaming) one of the more widely
> (AFAICS) used libraries and its entire API.
>
> If current APIs are kept then compatibility is largely a matter of planting
> a strategic symlink or two, but it might make the API look inconsistent.
>
> But just wondering about the benefit of this merge thing, compared to just
> adding a vhost pmd and leaving the library be. The ABI process is not there
> to make life miserable for DPDK developers, its there to help make DPDK
> nicer for *other* developers. And the first and the foremost rule is simply:
> dont break backwards compatibility. Not unless there's a damn good reason to
> doing so, and I fail to see that reason here.
>
> - Panu -
>
Good question, and I'll accept that maybe it's not worth doing. I'm not that
much of an expert on the internals and APIs of the vhost library.
However, the merge I was looking for was more from a code locality point
of view, to have all the vhost code in one directory (under drivers/net)
rather than spread across multiple ones. Which APIs need to be deprecated
or not as part of that work is a separate question, and so in theory we could
create a combined vhost library that does not deprecate anything (though to
avoid a build-up of technical debt, we'll probably want to deprecate some
functions).
I'll leave it up to the vhost experts to decide what's best, but for me, any
library that handles transmission and reception of packets outside of a DPDK
app should be a PMD library using ethdev rx/tx burst routines, and located
under drivers/net. (KNI is another obvious target for such a move and conversion.)
Regards,
/Bruce
^ permalink raw reply [relevance 0%]
* [dpdk-dev] [PATCH v3 0/6] librte_table: add key_mask parameter to hash table parameter structure
2015-10-13 13:57 3% ` [dpdk-dev] [PATCH v2 0/8] librte_table: add key_mask parameter to 8-byte key Jasvinder Singh
` (2 preceding siblings ...)
2015-10-13 13:57 5% ` [dpdk-dev] [PATCH v2 8/8] librte_table: modify release notes and deprecation notice Jasvinder Singh
@ 2015-10-21 12:18 3% ` Jasvinder Singh
2015-10-21 12:18 6% ` [dpdk-dev] [PATCH v3 1/6] librte_table: add key_mask parameter to 8- and 16-bytes key hash parameters Jasvinder Singh
3 siblings, 1 reply; 200+ results
From: Jasvinder Singh @ 2015-10-21 12:18 UTC (permalink / raw)
To: dev
This patchset relates to the ABI change announced for librte_table.
The key_mask parameter has been added to the hash table
parameter structure for 8-byte key and 16-byte key extendible
bucket and LRU tables.
v2:
*updated release note
v3:
*merged release note with source code patch
*fixed build error: added missing symbol to
librte_table/rte_table_version.map
Acked-by: Cristian Dumitrescu <cristian.dumitrescu@intel.com>
Fan Zhang (6):
librte_table: add 16 byte hash table operations with computed lookup
app/test: modify app/test_table_combined and app/test_table_tables
app/test-pipeline: modify pipeline test
example/ip_pipeline: add parse_hex_string for internal use
example/ip_pipeline/pipeline: update flow_classification pipeline
librte_table: add key_mask parameter to 8- and 16-bytes key hash
parameters
app/test-pipeline/pipeline_hash.c | 4 +
app/test/test_table_combined.c | 5 +-
app/test/test_table_tables.c | 6 +-
doc/guides/rel_notes/deprecation.rst | 4 -
doc/guides/rel_notes/release_2_2.rst | 4 +
examples/ip_pipeline/config_parse.c | 70 ++++
examples/ip_pipeline/pipeline.h | 4 +
.../pipeline/pipeline_flow_classification_be.c | 56 ++-
lib/librte_table/rte_table_hash.h | 20 +
lib/librte_table/rte_table_hash_key16.c | 411 ++++++++++++++++++++-
lib/librte_table/rte_table_hash_key8.c | 54 ++-
lib/librte_table/rte_table_version.map | 7 +
12 files changed, 614 insertions(+), 31 deletions(-)
--
2.1.0
^ permalink raw reply [relevance 3%]
* [dpdk-dev] [PATCH v3 1/6] librte_table: add key_mask parameter to 8- and 16-bytes key hash parameters
2015-10-21 12:18 3% ` [dpdk-dev] [PATCH v3 0/6] librte_table: add key_mask parameter to hash table parameter structure Jasvinder Singh
@ 2015-10-21 12:18 6% ` Jasvinder Singh
0 siblings, 0 replies; 200+ results
From: Jasvinder Singh @ 2015-10-21 12:18 UTC (permalink / raw)
To: dev
From: Fan Zhang <roy.fan.zhang@intel.com>
This patch relates to the ABI change proposed for librte_table.
The key_mask parameter is added for 8-byte and 16-byte
key extendible bucket and LRU tables. The release notes
are updated and the deprecation notice is removed.
Signed-off-by: Fan Zhang <roy.fan.zhang@intel.com>
Signed-off-by: Jasvinder Singh <jasvinder.singh@intel.com>
---
doc/guides/rel_notes/deprecation.rst | 4 ---
doc/guides/rel_notes/release_2_2.rst | 4 +++
lib/librte_table/rte_table_hash.h | 12 ++++++++
lib/librte_table/rte_table_hash_key16.c | 52 ++++++++++++++++++++++++++-----
lib/librte_table/rte_table_hash_key8.c | 54 +++++++++++++++++++++++++++------
lib/librte_table/rte_table_version.map | 7 +++++
6 files changed, 112 insertions(+), 21 deletions(-)
diff --git a/doc/guides/rel_notes/deprecation.rst b/doc/guides/rel_notes/deprecation.rst
index 018a119..f8b8ca9 100644
--- a/doc/guides/rel_notes/deprecation.rst
+++ b/doc/guides/rel_notes/deprecation.rst
@@ -48,10 +48,6 @@ Deprecation Notices
* librte_table: New functions for table entry bulk add/delete will be added
to the table operations structure.
-* librte_table hash: Key mask parameter will be added to the hash table
- parameter structure for 8-byte key and 16-byte key extendible bucket and
- LRU tables.
-
* librte_pipeline: The prototype for the pipeline input port, output port
and table action handlers will be updated:
the pipeline parameter will be added, the packets mask parameter will be
diff --git a/doc/guides/rel_notes/release_2_2.rst b/doc/guides/rel_notes/release_2_2.rst
index 4f75cff..deb36c7 100644
--- a/doc/guides/rel_notes/release_2_2.rst
+++ b/doc/guides/rel_notes/release_2_2.rst
@@ -117,6 +117,10 @@ ABI Changes
* librte_port: Macros to access the packet meta-data stored within the packet
buffer has been adjusted to cover the packet mbuf structure.
+* librte_table hash: The key mask parameter is added to the hash table
+ parameter structure for 8-byte key and 16-byte key extendible bucket
+ and LRU tables.
+
Shared Library Versions
-----------------------
diff --git a/lib/librte_table/rte_table_hash.h b/lib/librte_table/rte_table_hash.h
index 9181942..e2c60e1 100644
--- a/lib/librte_table/rte_table_hash.h
+++ b/lib/librte_table/rte_table_hash.h
@@ -196,6 +196,9 @@ struct rte_table_hash_key8_lru_params {
/** Byte offset within packet meta-data where the key is located */
uint32_t key_offset;
+
+ /** Bit-mask to be AND-ed to the key on lookup */
+ uint8_t *key_mask;
};
/** LRU hash table operations for pre-computed key signature */
@@ -226,6 +229,9 @@ struct rte_table_hash_key8_ext_params {
/** Byte offset within packet meta-data where the key is located */
uint32_t key_offset;
+
+ /** Bit-mask to be AND-ed to the key on lookup */
+ uint8_t *key_mask;
};
/** Extendible bucket hash table operations for pre-computed key signature */
@@ -257,6 +263,9 @@ struct rte_table_hash_key16_lru_params {
/** Byte offset within packet meta-data where the key is located */
uint32_t key_offset;
+
+ /** Bit-mask to be AND-ed to the key on lookup */
+ uint8_t *key_mask;
};
/** LRU hash table operations for pre-computed key signature */
@@ -284,6 +293,9 @@ struct rte_table_hash_key16_ext_params {
/** Byte offset within packet meta-data where the key is located */
uint32_t key_offset;
+
+ /** Bit-mask to be AND-ed to the key on lookup */
+ uint8_t *key_mask;
};
/** Extendible bucket operations for pre-computed key signature */
diff --git a/lib/librte_table/rte_table_hash_key16.c b/lib/librte_table/rte_table_hash_key16.c
index f6a3306..0d6cc55 100644
--- a/lib/librte_table/rte_table_hash_key16.c
+++ b/lib/librte_table/rte_table_hash_key16.c
@@ -85,6 +85,7 @@ struct rte_table_hash {
uint32_t bucket_size;
uint32_t signature_offset;
uint32_t key_offset;
+ uint64_t key_mask[2];
rte_table_hash_op_hash f_hash;
uint64_t seed;
@@ -164,6 +165,14 @@ rte_table_hash_create_key16_lru(void *params,
f->f_hash = p->f_hash;
f->seed = p->seed;
+ if (p->key_mask != NULL) {
+ f->key_mask[0] = ((uint64_t *)p->key_mask)[0];
+ f->key_mask[1] = ((uint64_t *)p->key_mask)[1];
+ } else {
+ f->key_mask[0] = 0xFFFFFFFFFFFFFFFFLLU;
+ f->key_mask[1] = 0xFFFFFFFFFFFFFFFFLLU;
+ }
+
for (i = 0; i < n_buckets; i++) {
struct rte_bucket_4_16 *bucket;
@@ -384,6 +393,14 @@ rte_table_hash_create_key16_ext(void *params,
for (i = 0; i < n_buckets_ext; i++)
f->stack[i] = i;
+ if (p->key_mask != NULL) {
+ f->key_mask[0] = (((uint64_t *)p->key_mask)[0]);
+ f->key_mask[1] = (((uint64_t *)p->key_mask)[1]);
+ } else {
+ f->key_mask[0] = 0xFFFFFFFFFFFFFFFFLLU;
+ f->key_mask[1] = 0xFFFFFFFFFFFFFFFFLLU;
+ }
+
return f;
}
@@ -609,11 +626,14 @@ rte_table_hash_entry_delete_key16_ext(
void *a; \
uint64_t pkt_mask; \
uint64_t *key; \
+ uint64_t hash_key_buffer[2]; \
uint32_t pos; \
\
key = RTE_MBUF_METADATA_UINT64_PTR(mbuf2, f->key_offset);\
+ hash_key_buffer[0] = key[0] & f->key_mask[0]; \
+ hash_key_buffer[1] = key[1] & f->key_mask[1]; \
\
- lookup_key16_cmp(key, bucket2, pos); \
+ lookup_key16_cmp(hash_key_buffer, bucket2, pos); \
\
pkt_mask = (bucket2->signature[pos] & 1LLU) << pkt2_index;\
pkts_mask_out |= pkt_mask; \
@@ -631,11 +651,14 @@ rte_table_hash_entry_delete_key16_ext(
void *a; \
uint64_t pkt_mask, bucket_mask; \
uint64_t *key; \
+ uint64_t hash_key_buffer[2]; \
uint32_t pos; \
\
key = RTE_MBUF_METADATA_UINT64_PTR(mbuf2, f->key_offset);\
+ hash_key_buffer[0] = key[0] & f->key_mask[0]; \
+ hash_key_buffer[1] = key[1] & f->key_mask[1]; \
\
- lookup_key16_cmp(key, bucket2, pos); \
+ lookup_key16_cmp(hash_key_buffer, bucket2, pos); \
\
pkt_mask = (bucket2->signature[pos] & 1LLU) << pkt2_index;\
pkts_mask_out |= pkt_mask; \
@@ -658,12 +681,15 @@ rte_table_hash_entry_delete_key16_ext(
void *a; \
uint64_t pkt_mask, bucket_mask; \
uint64_t *key; \
+ uint64_t hash_key_buffer[2]; \
uint32_t pos; \
\
bucket = buckets[pkt_index]; \
key = keys[pkt_index]; \
+ hash_key_buffer[0] = key[0] & f->key_mask[0]; \
+ hash_key_buffer[1] = key[1] & f->key_mask[1]; \
\
- lookup_key16_cmp(key, bucket, pos); \
+ lookup_key16_cmp(hash_key_buffer, bucket, pos); \
\
pkt_mask = (bucket->signature[pos] & 1LLU) << pkt_index;\
pkts_mask_out |= pkt_mask; \
@@ -749,13 +775,19 @@ rte_table_hash_entry_delete_key16_ext(
void *a20, *a21; \
uint64_t pkt20_mask, pkt21_mask; \
uint64_t *key20, *key21; \
+ uint64_t hash_key_buffer20[2]; \
+ uint64_t hash_key_buffer21[2]; \
uint32_t pos20, pos21; \
\
key20 = RTE_MBUF_METADATA_UINT64_PTR(mbuf20, f->key_offset);\
key21 = RTE_MBUF_METADATA_UINT64_PTR(mbuf21, f->key_offset);\
+ hash_key_buffer20[0] = key20[0] & f->key_mask[0]; \
+ hash_key_buffer20[1] = key20[1] & f->key_mask[1]; \
+ hash_key_buffer21[0] = key21[0] & f->key_mask[0]; \
+ hash_key_buffer21[1] = key21[1] & f->key_mask[1]; \
\
- lookup_key16_cmp(key20, bucket20, pos20); \
- lookup_key16_cmp(key21, bucket21, pos21); \
+ lookup_key16_cmp(hash_key_buffer20, bucket20, pos20); \
+ lookup_key16_cmp(hash_key_buffer21, bucket21, pos21); \
\
pkt20_mask = (bucket20->signature[pos20] & 1LLU) << pkt20_index;\
pkt21_mask = (bucket21->signature[pos21] & 1LLU) << pkt21_index;\
@@ -778,13 +810,19 @@ rte_table_hash_entry_delete_key16_ext(
void *a20, *a21; \
uint64_t pkt20_mask, pkt21_mask, bucket20_mask, bucket21_mask;\
uint64_t *key20, *key21; \
+ uint64_t hash_key_buffer20[2]; \
+ uint64_t hash_key_buffer21[2]; \
uint32_t pos20, pos21; \
\
key20 = RTE_MBUF_METADATA_UINT64_PTR(mbuf20, f->key_offset);\
key21 = RTE_MBUF_METADATA_UINT64_PTR(mbuf21, f->key_offset);\
+ hash_key_buffer20[0] = key20[0] & f->key_mask[0]; \
+ hash_key_buffer20[1] = key20[1] & f->key_mask[1]; \
+ hash_key_buffer21[0] = key21[0] & f->key_mask[0]; \
+ hash_key_buffer21[1] = key21[1] & f->key_mask[1]; \
\
- lookup_key16_cmp(key20, bucket20, pos20); \
- lookup_key16_cmp(key21, bucket21, pos21); \
+ lookup_key16_cmp(hash_key_buffer20, bucket20, pos20); \
+ lookup_key16_cmp(hash_key_buffer21, bucket21, pos21); \
\
pkt20_mask = (bucket20->signature[pos20] & 1LLU) << pkt20_index;\
pkt21_mask = (bucket21->signature[pos21] & 1LLU) << pkt21_index;\
diff --git a/lib/librte_table/rte_table_hash_key8.c b/lib/librte_table/rte_table_hash_key8.c
index b351a49..ccb20cf 100644
--- a/lib/librte_table/rte_table_hash_key8.c
+++ b/lib/librte_table/rte_table_hash_key8.c
@@ -82,6 +82,7 @@ struct rte_table_hash {
uint32_t bucket_size;
uint32_t signature_offset;
uint32_t key_offset;
+ uint64_t key_mask;
rte_table_hash_op_hash f_hash;
uint64_t seed;
@@ -160,6 +161,11 @@ rte_table_hash_create_key8_lru(void *params, int socket_id, uint32_t entry_size)
f->f_hash = p->f_hash;
f->seed = p->seed;
+ if (p->key_mask != NULL)
+ f->key_mask = ((uint64_t *)p->key_mask)[0];
+ else
+ f->key_mask = 0xFFFFFFFFFFFFFFFFLLU;
+
for (i = 0; i < n_buckets; i++) {
struct rte_bucket_4_8 *bucket;
@@ -372,6 +378,11 @@ rte_table_hash_create_key8_ext(void *params, int socket_id, uint32_t entry_size)
f->stack = (uint32_t *)
&f->memory[(n_buckets + n_buckets_ext) * f->bucket_size];
+ if (p->key_mask != NULL)
+ f->key_mask = ((uint64_t *)p->key_mask)[0];
+ else
+ f->key_mask = 0xFFFFFFFFFFFFFFFFLLU;
+
for (i = 0; i < n_buckets_ext; i++)
f->stack[i] = i;
@@ -586,9 +597,12 @@ rte_table_hash_entry_delete_key8_ext(
uint64_t *key; \
uint64_t signature; \
uint32_t bucket_index; \
+ uint64_t hash_key_buffer; \
\
key = RTE_MBUF_METADATA_UINT64_PTR(mbuf1, f->key_offset);\
- signature = f->f_hash(key, RTE_TABLE_HASH_KEY_SIZE, f->seed);\
+ hash_key_buffer = *key & f->key_mask; \
+ signature = f->f_hash(&hash_key_buffer, \
+ RTE_TABLE_HASH_KEY_SIZE, f->seed); \
bucket_index = signature & (f->n_buckets - 1); \
bucket1 = (struct rte_bucket_4_8 *) \
&f->memory[bucket_index * f->bucket_size]; \
@@ -602,10 +616,12 @@ rte_table_hash_entry_delete_key8_ext(
uint64_t pkt_mask; \
uint64_t *key; \
uint32_t pos; \
+ uint64_t hash_key_buffer; \
\
key = RTE_MBUF_METADATA_UINT64_PTR(mbuf2, f->key_offset);\
+ hash_key_buffer = key[0] & f->key_mask; \
\
- lookup_key8_cmp(key, bucket2, pos); \
+ lookup_key8_cmp((&hash_key_buffer), bucket2, pos); \
\
pkt_mask = ((bucket2->signature >> pos) & 1LLU) << pkt2_index;\
pkts_mask_out |= pkt_mask; \
@@ -624,10 +640,12 @@ rte_table_hash_entry_delete_key8_ext(
uint64_t pkt_mask, bucket_mask; \
uint64_t *key; \
uint32_t pos; \
+ uint64_t hash_key_buffer; \
\
key = RTE_MBUF_METADATA_UINT64_PTR(mbuf2, f->key_offset);\
+ hash_key_buffer = *key & f->key_mask; \
\
- lookup_key8_cmp(key, bucket2, pos); \
+ lookup_key8_cmp((&hash_key_buffer), bucket2, pos); \
\
pkt_mask = ((bucket2->signature >> pos) & 1LLU) << pkt2_index;\
pkts_mask_out |= pkt_mask; \
@@ -651,11 +669,13 @@ rte_table_hash_entry_delete_key8_ext(
uint64_t pkt_mask, bucket_mask; \
uint64_t *key; \
uint32_t pos; \
+ uint64_t hash_key_buffer; \
\
bucket = buckets[pkt_index]; \
key = keys[pkt_index]; \
+ hash_key_buffer = (*key) & f->key_mask; \
\
- lookup_key8_cmp(key, bucket, pos); \
+ lookup_key8_cmp((&hash_key_buffer), bucket, pos); \
\
pkt_mask = ((bucket->signature >> pos) & 1LLU) << pkt_index;\
pkts_mask_out |= pkt_mask; \
@@ -736,6 +756,8 @@ rte_table_hash_entry_delete_key8_ext(
#define lookup2_stage1_dosig(mbuf10, mbuf11, bucket10, bucket11, f)\
{ \
uint64_t *key10, *key11; \
+ uint64_t hash_offset_buffer10; \
+ uint64_t hash_offset_buffer11; \
uint64_t signature10, signature11; \
uint32_t bucket10_index, bucket11_index; \
rte_table_hash_op_hash f_hash = f->f_hash; \
@@ -744,14 +766,18 @@ rte_table_hash_entry_delete_key8_ext(
\
key10 = RTE_MBUF_METADATA_UINT64_PTR(mbuf10, key_offset);\
key11 = RTE_MBUF_METADATA_UINT64_PTR(mbuf11, key_offset);\
+ hash_offset_buffer10 = *key10 & f->key_mask; \
+ hash_offset_buffer11 = *key11 & f->key_mask; \
\
- signature10 = f_hash(key10, RTE_TABLE_HASH_KEY_SIZE, seed);\
+ signature10 = f_hash(&hash_offset_buffer10, \
+ RTE_TABLE_HASH_KEY_SIZE, seed); \
bucket10_index = signature10 & (f->n_buckets - 1); \
bucket10 = (struct rte_bucket_4_8 *) \
&f->memory[bucket10_index * f->bucket_size]; \
rte_prefetch0(bucket10); \
\
- signature11 = f_hash(key11, RTE_TABLE_HASH_KEY_SIZE, seed);\
+ signature11 = f_hash(&hash_offset_buffer11, \
+ RTE_TABLE_HASH_KEY_SIZE, seed); \
bucket11_index = signature11 & (f->n_buckets - 1); \
bucket11 = (struct rte_bucket_4_8 *) \
&f->memory[bucket11_index * f->bucket_size]; \
@@ -764,13 +790,17 @@ rte_table_hash_entry_delete_key8_ext(
void *a20, *a21; \
uint64_t pkt20_mask, pkt21_mask; \
uint64_t *key20, *key21; \
+ uint64_t hash_offset_buffer20; \
+ uint64_t hash_offset_buffer21; \
uint32_t pos20, pos21; \
\
key20 = RTE_MBUF_METADATA_UINT64_PTR(mbuf20, f->key_offset);\
key21 = RTE_MBUF_METADATA_UINT64_PTR(mbuf21, f->key_offset);\
+ hash_offset_buffer20 = *key20 & f->key_mask; \
+ hash_offset_buffer21 = *key21 & f->key_mask; \
\
- lookup_key8_cmp(key20, bucket20, pos20); \
- lookup_key8_cmp(key21, bucket21, pos21); \
+ lookup_key8_cmp((&hash_offset_buffer20), bucket20, pos20);\
+ lookup_key8_cmp((&hash_offset_buffer21), bucket21, pos21);\
\
pkt20_mask = ((bucket20->signature >> pos20) & 1LLU) << pkt20_index;\
pkt21_mask = ((bucket21->signature >> pos21) & 1LLU) << pkt21_index;\
@@ -793,13 +823,17 @@ rte_table_hash_entry_delete_key8_ext(
void *a20, *a21; \
uint64_t pkt20_mask, pkt21_mask, bucket20_mask, bucket21_mask;\
uint64_t *key20, *key21; \
+ uint64_t hash_offset_buffer20; \
+ uint64_t hash_offset_buffer21; \
uint32_t pos20, pos21; \
\
key20 = RTE_MBUF_METADATA_UINT64_PTR(mbuf20, f->key_offset);\
key21 = RTE_MBUF_METADATA_UINT64_PTR(mbuf21, f->key_offset);\
+ hash_offset_buffer20 = *key20 & f->key_mask; \
+ hash_offset_buffer21 = *key21 & f->key_mask; \
\
- lookup_key8_cmp(key20, bucket20, pos20); \
- lookup_key8_cmp(key21, bucket21, pos21); \
+ lookup_key8_cmp((&hash_offset_buffer20), bucket20, pos20);\
+ lookup_key8_cmp((&hash_offset_buffer21), bucket21, pos21);\
\
pkt20_mask = ((bucket20->signature >> pos20) & 1LLU) << pkt20_index;\
pkt21_mask = ((bucket21->signature >> pos21) & 1LLU) << pkt21_index;\
diff --git a/lib/librte_table/rte_table_version.map b/lib/librte_table/rte_table_version.map
index d33f926..f767a50 100644
--- a/lib/librte_table/rte_table_version.map
+++ b/lib/librte_table/rte_table_version.map
@@ -19,3 +19,10 @@ DPDK_2.0 {
local: *;
};
+
+DPDK_2.2 {
+ global:
+
+ rte_table_hash_key16_ext_dosig_ops;
+
+} DPDK_2.0;
\ No newline at end of file
--
2.1.0
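As a usage illustration only (the offsets, entry size and hash
function below are placeholders, not part of the patch), an application
would pass the new mask to the 8-byte key LRU table roughly like this:

#include <rte_lcore.h>
#include <rte_table_hash.h>

/* placeholder hash function with the rte_table_hash_op_hash signature */
static uint64_t my_hash(void *key, uint32_t key_size, uint64_t seed);

static void *
create_masked_key8_table(void)
{
	/* match only the first 4 bytes of the 8-byte key */
	static uint8_t key_mask[8] = {0xff, 0xff, 0xff, 0xff, 0, 0, 0, 0};

	struct rte_table_hash_key8_lru_params params = {
		.n_entries = 1 << 16,
		.f_hash = my_hash,
		.seed = 0,
		.signature_offset = 0,	/* placeholder metadata offsets */
		.key_offset = 32,
		.key_mask = key_mask,
	};

	/* 8-byte entry size is a placeholder as well */
	return rte_table_hash_key8_lru_dosig_ops.f_create(&params,
		rte_socket_id(), 8);
}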
^ permalink raw reply [relevance 6%]
* Re: [dpdk-dev] [PATCH v2 1/2] Remove ABI requierment for external library builds.
@ 2015-10-21 16:31 4% ` Thomas Monjalon
0 siblings, 0 replies; 200+ results
From: Thomas Monjalon @ 2015-10-21 16:31 UTC (permalink / raw)
To: Liang-Min Larry Wang; +Cc: dev
2015-07-23 11:00, Liang-Min Larry Wang:
> From: "Andrew G. Harvey" <agh@cisco.com>
> --- a/mk/rte.extlib.mk
> +++ b/mk/rte.extlib.mk
> @@ -31,6 +31,8 @@
>
> MAKEFLAGS += --no-print-directory
>
> +export EXTLIB_BUILD := 1
Is export really needed?
Except that, this patch looks OK.
^ permalink raw reply [relevance 4%]
* Re: [dpdk-dev] [RFC PATCH v2] vhost: Add VHOST PMD
2015-10-21 10:22 0% ` Bruce Richardson
@ 2015-10-22 9:50 0% ` Tetsuya Mukawa
0 siblings, 0 replies; 200+ results
From: Tetsuya Mukawa @ 2015-10-22 9:50 UTC (permalink / raw)
To: Bruce Richardson, Panu Matilainen; +Cc: dev, ann.zhuangyanying
On 2015/10/21 19:22, Bruce Richardson wrote:
> On Wed, Oct 21, 2015 at 09:25:12AM +0300, Panu Matilainen wrote:
>> On 10/21/2015 07:35 AM, Tetsuya Mukawa wrote:
>>> On 2015/10/19 22:27, Richardson, Bruce wrote:
>>>>> -----Original Message-----
>>>>> From: Panu Matilainen [mailto:pmatilai@redhat.com]
>>>>> Sent: Monday, October 19, 2015 2:26 PM
>>>>> To: Tetsuya Mukawa <mukawa@igel.co.jp>; Richardson, Bruce
>>>>> <bruce.richardson@intel.com>; Loftus, Ciara <ciara.loftus@intel.com>
>>>>> Cc: dev@dpdk.org; ann.zhuangyanying@huawei.com
>>>>> Subject: Re: [dpdk-dev] [RFC PATCH v2] vhost: Add VHOST PMD
>>>>>
>>>>> On 10/19/2015 01:50 PM, Tetsuya Mukawa wrote:
>>>>>> On 2015/10/19 18:45, Bruce Richardson wrote:
>>>>>>> On Mon, Oct 19, 2015 at 10:32:50AM +0100, Loftus, Ciara wrote:
>>>>>>>>> On 2015/10/16 21:52, Bruce Richardson wrote:
>>>>>>>>>> On Mon, Aug 31, 2015 at 12:55:26PM +0900, Tetsuya Mukawa wrote:
>>>>>>>>>>> The patch introduces a new PMD. This PMD is implemented as thin
>>>>>>>>> wrapper
>>>>>>>>>>> of librte_vhost. It means librte_vhost is also needed to compile
>>>>> the PMD.
>>>>>>>>>>> The PMD can have 'iface' parameter like below to specify a path
>>>>>>>>>>> to
>>>>>>>>> connect
>>>>>>>>>>> to a virtio-net device.
>>>>>>>>>>>
>>>>>>>>>>> $ ./testpmd -c f -n 4 --vdev 'eth_vhost0,iface=/tmp/sock0' -- -i
>>>>>>>>>>>
>>>>>>>>>>> To connect above testpmd, here is qemu command example.
>>>>>>>>>>>
>>>>>>>>>>> $ qemu-system-x86_64 \
>>>>>>>>>>> <snip>
>>>>>>>>>>> -chardev socket,id=chr0,path=/tmp/sock0 \
>>>>>>>>>>> -netdev vhost-user,id=net0,chardev=chr0,vhostforce \
>>>>>>>>>>> -device virtio-net-pci,netdev=net0
>>>>>>>>>>>
>>>>>>>>>>> Signed-off-by: Tetsuya Mukawa <mukawa@igel.co.jp>
>>>>>>>>>> With this PMD in place, is there any need to keep the existing
>>>>>>>>>> vhost library around as a separate entity? Can the existing
>>>>>>>>>> library be
>>>>>>>>> subsumed/converted into
>>>>>>>>>> a standard PMD?
>>>>>>>>>>
>>>>>>>>>> /Bruce
>>>>>>>>> Hi Bruce,
>>>>>>>>>
>>>>>>>>> I concern about whether the PMD has all features of librte_vhost,
>>>>>>>>> because librte_vhost provides more features and freedom than ethdev
>>>>>>>>> API provides.
>>>>>>>>> In some cases, user needs to choose limited implementation without
>>>>>>>>> librte_vhost.
>>>>>>>>> I am going to eliminate such cases while implementing the PMD.
>>>>>>>>> But I don't have strong belief that we can remove librte_vhost now.
>>>>>>>>>
>>>>>>>>> So how about keeping current separation in next DPDK?
>>>>>>>>> I guess people will try to replace librte_vhost to vhost PMD,
>>>>>>>>> because apparently using ethdev APIs will be useful in many cases.
>>>>>>>>> And we will get feedbacks like "vhost PMD needs to support like this
>>>>> usage".
>>>>>>>>> (Or we will not have feedbacks, but it's also OK.) Then, we will be
>>>>>>>>> able to merge librte_vhost and vhost PMD.
>>>>>>>> I agree with the above. One the concerns I had when reviewing the
>>>>> patch was that the PMD removes some freedom that is available with the
>>>>> library. Eg. Ability to implement the new_device and destroy_device
>>>>> callbacks. If using the PMD you are constrained to the implementations of
>>>>> these in the PMD driver, but if using librte_vhost, you can implement your
>>>>> own with whatever functionality you like - a good example of this can be
>>>>> seen in the vhost sample app.
>>>>>>>> On the other hand, the PMD is useful in that it removes a lot of
>>>>> complexity for the user and may work for some more general use cases. So I
>>>>> would be in favour of having both options available too.
>>>>>>>> Ciara
>>>>>>>>
>>>>>>> Thanks.
>>>>>>> However, just because the libraries are merged does not mean that you
>>>>>>> need be limited by PMD functionality. Many PMDs provide additional
>>>>>>> library-specific functions over and above their PMD capabilities. The
>>>>>>> bonded PMD is a good example here, as it has a whole set of extra
>>>>>>> functions to create and manipulate bonded devices - things that are
>>>>>>> obviously not part of the general ethdev API. Other vPMDs similarly
>>>>> include functions to allow them to be created on the fly too.
>>>>>>> regards,
>>>>>>> /Bruce
>>>>>> Hi Bruce,
>>>>>>
>>>>>> I appreciate for showing a good example. I haven't noticed the PMD.
>>>>>> I will check the bonding PMD, and try to remove librte_vhost without
>>>>>> losing freedom and features of the library.
>>>>> Hi,
>>>>>
>>>>> Just a gentle reminder - if you consider removing (even if by just
>>>>> replacing/renaming) an entire library, it needs to happen the ABI
>>>>> deprecation process.
>>>>>
>>>>> It seems obvious enough. But for all the ABI policing here, somehow we all
>>>>> failed to notice the two compatibility breaking rename-elephants in the
>>>>> room during 2.1 development:
>>>>> - libintel_dpdk was renamed to libdpdk
>>>>> - librte_pmd_virtio_uio was renamed to librte_pmd_virtio
>>>>>
>>>>> Of course these cases are easy to work around with symlinks, and are
>>>>> unrelated to the matter at hand. Just wanting to make sure such things
>>>>> dont happen again.
>>>>>
>>>>> - Panu -
>>>> Still doesn't hurt to remind us, Panu! Thanks. :-)
>>> Hi,
>>>
>>> Thanks for reminder. I've checked the DPDK documentation.
>>> I will submit deprecation notice to follow DPDK deprecation process.
>>> (Probably we will be able to remove vhost library in DPDK-2.3 or later.)
>>>
>>> BTW, I will merge vhost library and PMD like below.
>>> Step1. Move vhost library under vhost PMD.
>>> Step2. Rename current APIs.
>>> Step3. Add a function to get a pointer of "struct virtio_net device" by
>>> a portno.
>>>
>>> The last step allows us to convert a portno to the pointer of the
>>> corresponding virtio_net device.
>>> And we can still use features and freedom vhost library APIs provided.
>> Just wondering, is that *really* worth the price of breaking every single
>> vhost library user out there?
>>
>> I mean, this is not about removing some bitrotten function or two which
>> nobody cares about anymore but removing (by renaming) one of the more widely
>> (AFAICS) used libraries and its entire API.
>>
>> If current APIs are kept then compatibility is largely a matter of planting
>> a strategic symlink or two, but it might make the API look inconsistent.
>>
>> But just wondering about the benefit of this merge thing, compared to just
>> adding a vhost pmd and leaving the library be. The ABI process is not there
>> to make life miserable for DPDK developers, its there to help make DPDK
>> nicer for *other* developers. And the first and the foremost rule is simply:
>> dont break backwards compatibility. Not unless there's a damn good reason to
>> doing so, and I fail to see that reason here.
>>
>> - Panu -
>>
> Good question, and I'll accept that maybe it's not worth doing. I'm not that
> much of an expert on the internals and APIs of vhost library.
>
> However, the merge I was looking for was more from a code locality point
> of view, to have all the vhost code in one directory (under drivers/net),
> than spread across multiple ones. What API's need to be deprecated
> or not as part of that work, is a separate question, and so in theory we could
> create a combined vhost library that does not deprecate anything (though to
> avoid a build-up of technical debt, we'll probably want to deprecate some
> functions).
>
> I'll leave it up to the vhost experts do decide what's best, but for me, any
> library that handles transmission and reception of packets outside of a DPDK
> app should be a PMD library using ethdev rx/tx burst routines, and located
> under drivers/net. (KNI is another obvious target for such a move and conversion).
>
> Regards,
> /Bruce
>
Hi,
I have submitted the latest patches.
I will keep the vhost library until we have agreement to merge it into
the vhost PMD.
Regards,
Tetsuya
^ permalink raw reply [relevance 0%]
* [dpdk-dev] [PATCHv6 1/9] ethdev: add new API to retrieve RX/TX queue information
@ 2015-10-22 12:06 2% ` Konstantin Ananyev
2015-10-27 12:51 3% ` [dpdk-dev] [PATCHv7 0/9] " Konstantin Ananyev
` (2 more replies)
2015-10-22 12:06 4% ` [dpdk-dev] [PATCHv6 9/9] doc: release notes update for queue_info_get() Konstantin Ananyev
1 sibling, 3 replies; 200+ results
From: Konstantin Ananyev @ 2015-10-22 12:06 UTC (permalink / raw)
To: dev
From: "Ananyev, Konstantin" <konstantin.ananyev@intel.com>
Add the ability for the upper layer to query RX/TX queue information.
Add new fields into rte_eth_dev_info to represent information about
the RX/TX descriptor min/max/align numbers per queue for the device.
Add new structures:
struct rte_eth_rxq_info
struct rte_eth_txq_info
new functions:
rte_eth_rx_queue_info_get
rte_eth_tx_queue_info_get
into the rte_ethdev API.
Left extra free space in the queue info structures,
so extra fields could be added later without ABI breakage.
Add new fields:
rx_desc_lim
tx_desc_lim
into rte_eth_dev_info.
Signed-off-by: Konstantin Ananyev <konstantin.ananyev@intel.com>
---
lib/librte_ether/rte_ethdev.c | 68 +++++++++++++++++++++++++++
lib/librte_ether/rte_ethdev.h | 85 +++++++++++++++++++++++++++++++++-
lib/librte_ether/rte_ether_version.map | 8 ++++
3 files changed, 159 insertions(+), 2 deletions(-)
diff --git a/lib/librte_ether/rte_ethdev.c b/lib/librte_ether/rte_ethdev.c
index f593f6e..d18ecb5 100644
--- a/lib/librte_ether/rte_ethdev.c
+++ b/lib/librte_ether/rte_ethdev.c
@@ -1447,6 +1447,19 @@ rte_eth_rx_queue_setup(uint8_t port_id, uint16_t rx_queue_id,
return -EINVAL;
}
+ if (nb_rx_desc > dev_info.rx_desc_lim.nb_max ||
+ nb_rx_desc < dev_info.rx_desc_lim.nb_min ||
+ nb_rx_desc % dev_info.rx_desc_lim.nb_align != 0) {
+
+ PMD_DEBUG_TRACE("Invalid value for nb_rx_desc(=%hu), "
+ "should be: <= %hu, = %hu, and a product of %hu\n",
+ nb_rx_desc,
+ dev_info.rx_desc_lim.nb_max,
+ dev_info.rx_desc_lim.nb_min,
+ dev_info.rx_desc_lim.nb_align);
+ return -EINVAL;
+ }
+
if (rx_conf == NULL)
rx_conf = &dev_info.default_rxconf;
@@ -1786,11 +1799,18 @@ void
rte_eth_dev_info_get(uint8_t port_id, struct rte_eth_dev_info *dev_info)
{
struct rte_eth_dev *dev;
+ const struct rte_eth_desc_lim lim = {
+ .nb_max = UINT16_MAX,
+ .nb_min = 0,
+ .nb_align = 1,
+ };
VALID_PORTID_OR_RET(port_id);
dev = &rte_eth_devices[port_id];
memset(dev_info, 0, sizeof(struct rte_eth_dev_info));
+ dev_info->rx_desc_lim = lim;
+ dev_info->tx_desc_lim = lim;
FUNC_PTR_OR_RET(*dev->dev_ops->dev_infos_get);
(*dev->dev_ops->dev_infos_get)(dev, dev_info);
@@ -3221,6 +3241,54 @@ rte_eth_remove_tx_callback(uint8_t port_id, uint16_t queue_id,
}
int
+rte_eth_rx_queue_info_get(uint8_t port_id, uint16_t queue_id,
+ struct rte_eth_rxq_info *qinfo)
+{
+ struct rte_eth_dev *dev;
+
+ VALID_PORTID_OR_ERR_RET(port_id, -ENODEV);
+
+ if (qinfo == NULL)
+ return -EINVAL;
+
+ dev = &rte_eth_devices[port_id];
+ if (queue_id >= dev->data->nb_rx_queues) {
+ PMD_DEBUG_TRACE("Invalid RX queue_id=%d\n", queue_id);
+ return -EINVAL;
+ }
+
+ FUNC_PTR_OR_ERR_RET(*dev->dev_ops->rxq_info_get, -ENOTSUP);
+
+ memset(qinfo, 0, sizeof(*qinfo));
+ dev->dev_ops->rxq_info_get(dev, queue_id, qinfo);
+ return 0;
+}
+
+int
+rte_eth_tx_queue_info_get(uint8_t port_id, uint16_t queue_id,
+ struct rte_eth_txq_info *qinfo)
+{
+ struct rte_eth_dev *dev;
+
+ VALID_PORTID_OR_ERR_RET(port_id, -ENODEV);
+
+ if (qinfo == NULL)
+ return -EINVAL;
+
+ dev = &rte_eth_devices[port_id];
+ if (queue_id >= dev->data->nb_tx_queues) {
+ PMD_DEBUG_TRACE("Invalid TX queue_id=%d\n", queue_id);
+ return -EINVAL;
+ }
+
+ FUNC_PTR_OR_ERR_RET(*dev->dev_ops->txq_info_get, -ENOTSUP);
+
+ memset(qinfo, 0, sizeof(*qinfo));
+ dev->dev_ops->txq_info_get(dev, queue_id, qinfo);
+ return 0;
+}
+
+int
rte_eth_dev_set_mc_addr_list(uint8_t port_id,
struct ether_addr *mc_addr_set,
uint32_t nb_mc_addr)
diff --git a/lib/librte_ether/rte_ethdev.h b/lib/librte_ether/rte_ethdev.h
index 8a8c82b..4d7b6f2 100644
--- a/lib/librte_ether/rte_ethdev.h
+++ b/lib/librte_ether/rte_ethdev.h
@@ -653,6 +653,15 @@ struct rte_eth_txconf {
};
/**
+ * A structure contains information about HW descriptor ring limitations.
+ */
+struct rte_eth_desc_lim {
+ uint16_t nb_max; /**< Max allowed number of descriptors. */
+ uint16_t nb_min; /**< Min allowed number of descriptors. */
+ uint16_t nb_align; /**< Number of descriptors should be aligned to. */
+};
+
+/**
* This enum indicates the flow control mode
*/
enum rte_eth_fc_mode {
@@ -837,6 +846,8 @@ struct rte_eth_dev_info {
uint16_t vmdq_queue_base; /**< First queue ID for VMDQ pools. */
uint16_t vmdq_queue_num; /**< Queue number for VMDQ pools. */
uint16_t vmdq_pool_base; /**< First ID of VMDQ pools. */
+ struct rte_eth_desc_lim rx_desc_lim; /**< RX descriptors limits */
+ struct rte_eth_desc_lim tx_desc_lim; /**< TX descriptors limits */
};
/** Maximum name length for extended statistics counters */
@@ -854,6 +865,26 @@ struct rte_eth_xstats {
uint64_t value;
};
+/**
+ * Ethernet device RX queue information structure.
+ * Used to retrieve information about a configured queue.
+ */
+struct rte_eth_rxq_info {
+ struct rte_mempool *mp; /**< mempool used by that queue. */
+ struct rte_eth_rxconf conf; /**< queue config parameters. */
+ uint8_t scattered_rx; /**< scattered packets RX supported. */
+ uint16_t nb_desc; /**< configured number of RXDs. */
+} __rte_cache_aligned;
+
+/**
+ * Ethernet device TX queue information structure.
+ * Used to retrieve information about a configured queue.
+ */
+struct rte_eth_txq_info {
+ struct rte_eth_txconf conf; /**< queue config parameters. */
+ uint16_t nb_desc; /**< configured number of TXDs. */
+} __rte_cache_aligned;
+
struct rte_eth_dev;
struct rte_eth_dev_callback;
@@ -965,6 +996,12 @@ typedef uint32_t (*eth_rx_queue_count_t)(struct rte_eth_dev *dev,
typedef int (*eth_rx_descriptor_done_t)(void *rxq, uint16_t offset);
/**< @internal Check DD bit of specific RX descriptor */
+typedef void (*eth_rxq_info_get_t)(struct rte_eth_dev *dev,
+ uint16_t rx_queue_id, struct rte_eth_rxq_info *qinfo);
+
+typedef void (*eth_txq_info_get_t)(struct rte_eth_dev *dev,
+ uint16_t tx_queue_id, struct rte_eth_txq_info *qinfo);
+
typedef int (*mtu_set_t)(struct rte_eth_dev *dev, uint16_t mtu);
/**< @internal Set MTU. */
@@ -1301,9 +1338,13 @@ struct eth_dev_ops {
rss_hash_update_t rss_hash_update;
/** Get current RSS hash configuration. */
rss_hash_conf_get_t rss_hash_conf_get;
- eth_filter_ctrl_t filter_ctrl; /**< common filter control*/
+ eth_filter_ctrl_t filter_ctrl;
+ /**< common filter control. */
eth_set_mc_addr_list_t set_mc_addr_list; /**< set list of mcast addrs */
-
+ eth_rxq_info_get_t rxq_info_get;
+ /**< retrieve RX queue information. */
+ eth_txq_info_get_t txq_info_get;
+ /**< retrieve TX queue information. */
/** Turn IEEE1588/802.1AS timestamping on. */
eth_timesync_enable_t timesync_enable;
/** Turn IEEE1588/802.1AS timestamping off. */
@@ -3441,6 +3482,46 @@ int rte_eth_remove_tx_callback(uint8_t port_id, uint16_t queue_id,
struct rte_eth_rxtx_callback *user_cb);
/**
+ * Retrieve information about given port's RX queue.
+ *
+ * @param port_id
+ * The port identifier of the Ethernet device.
+ * @param queue_id
+ * The RX queue on the Ethernet device for which information
+ * will be retrieved.
+ * @param qinfo
+ * A pointer to a structure of type *rte_eth_rxq_info* to be filled with
+ * the information of the Ethernet device.
+ *
+ * @return
+ * - 0: Success
+ * - -ENOTSUP: routine is not supported by the device PMD.
+ * - -EINVAL: The port_id or the queue_id is out of range.
+ */
+int rte_eth_rx_queue_info_get(uint8_t port_id, uint16_t queue_id,
+ struct rte_eth_rxq_info *qinfo);
+
+/**
+ * Retrieve information about given port's TX queue.
+ *
+ * @param port_id
+ * The port identifier of the Ethernet device.
+ * @param queue_id
+ * The TX queue on the Ethernet device for which information
+ * will be retrieved.
+ * @param qinfo
+ * A pointer to a structure of type *rte_eth_txq_info* to be filled with
+ * the information of the Ethernet device.
+ *
+ * @return
+ * - 0: Success
+ * - -ENOTSUP: routine is not supported by the device PMD.
+ * - -EINVAL: The port_id or the queue_id is out of range.
+ */
+int rte_eth_tx_queue_info_get(uint8_t port_id, uint16_t queue_id,
+ struct rte_eth_txq_info *qinfo);
+
+/*
* Retrieve number of available registers for access
*
* @param port_id
diff --git a/lib/librte_ether/rte_ether_version.map b/lib/librte_ether/rte_ether_version.map
index 8345a6c..1fb4b87 100644
--- a/lib/librte_ether/rte_ether_version.map
+++ b/lib/librte_ether/rte_ether_version.map
@@ -127,3 +127,11 @@ DPDK_2.1 {
rte_eth_timesync_read_tx_timestamp;
} DPDK_2.0;
+
+DPDK_2.2 {
+ global:
+
+ rte_eth_rx_queue_info_get;
+ rte_eth_tx_queue_info_get;
+
+} DPDK_2.1;
--
1.8.5.3
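For illustration, a minimal sketch of how an application could consume the API
added above, assuming an already configured port and RX queue 0 (the output
format and error handling are only examples, not part of the patch):

#include <stdio.h>
#include <rte_ethdev.h>

/* Minimal sketch: query the new descriptor limits and RX queue 0 info
 * on an already configured port. */
static void
show_rxq0_info(uint8_t port_id)
{
	struct rte_eth_dev_info dev_info;
	struct rte_eth_rxq_info rxq_info;
	int ret;

	/* rx_desc_lim/tx_desc_lim are the new fields in rte_eth_dev_info. */
	rte_eth_dev_info_get(port_id, &dev_info);
	printf("port %u RXD limits: min=%u max=%u align=%u\n",
		port_id,
		dev_info.rx_desc_lim.nb_min,
		dev_info.rx_desc_lim.nb_max,
		dev_info.rx_desc_lim.nb_align);

	ret = rte_eth_rx_queue_info_get(port_id, 0, &rxq_info);
	if (ret != 0) {
		/* -ENOTSUP when the PMD does not implement rxq_info_get. */
		printf("rx queue info not available: %d\n", ret);
		return;
	}
	printf("rxq 0: nb_desc=%u scattered_rx=%u mempool=%s\n",
		rxq_info.nb_desc, rxq_info.scattered_rx,
		rxq_info.mp != NULL ? rxq_info.mp->name : "none");
}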
^ permalink raw reply [relevance 2%]
* [dpdk-dev] [PATCHv6 9/9] doc: release notes update for queue_info_get()
2015-10-22 12:06 2% ` [dpdk-dev] [PATCHv6 1/9] " Konstantin Ananyev
@ 2015-10-22 12:06 4% ` Konstantin Ananyev
1 sibling, 0 replies; 200+ results
From: Konstantin Ananyev @ 2015-10-22 12:06 UTC (permalink / raw)
To: dev
Signed-off-by: Konstantin Ananyev <konstantin.ananyev@intel.com>
---
doc/guides/rel_notes/release_2_2.rst | 7 +++++++
1 file changed, 7 insertions(+)
diff --git a/doc/guides/rel_notes/release_2_2.rst b/doc/guides/rel_notes/release_2_2.rst
index 4f75cff..33ea399 100644
--- a/doc/guides/rel_notes/release_2_2.rst
+++ b/doc/guides/rel_notes/release_2_2.rst
@@ -9,6 +9,11 @@ New Features
* Added support for Jumbo Frames.
* Optimize forwarding performance for Chelsio T5 40GbE cards.
+* **Add new API into rte_ethdev to retrieve RX/TX queue information.**
+
+ * Add the ability for the upper layer to query RX/TX queue information.
+ * Add new fields into rte_eth_dev_info to represent information about
+ the RX/TX descriptor min/max/align numbers per queue for the device.
Resolved Issues
---------------
@@ -94,6 +99,8 @@ API Changes
* The deprecated ring PMD functions are removed:
rte_eth_ring_pair_create() and rte_eth_ring_pair_attach().
+* New functions rte_eth_rx_queue_info_get() and rte_eth_tx_queue_info_get()
+ are introduced.
ABI Changes
-----------
--
1.8.5.3
^ permalink raw reply [relevance 4%]
* Re: [dpdk-dev] [PATCH v3 1/7] lib/librte_ether: modify the structures for fdir new modes
@ 2015-10-22 12:57 3% ` Bruce Richardson
2015-10-23 1:22 0% ` Lu, Wenzhuo
0 siblings, 1 reply; 200+ results
From: Bruce Richardson @ 2015-10-22 12:57 UTC (permalink / raw)
To: Wenzhuo Lu; +Cc: dev
On Thu, Oct 22, 2015 at 03:11:36PM +0800, Wenzhuo Lu wrote:
> Define the new modes and modify the filter and mask structures for
> the mac vlan and tunnel modes.
>
> Signed-off-by: Wenzhuo Lu <wenzhuo.lu@intel.com>
Hi Wenzhuo,
couple of stylistic comments below, which would help with patch review, especially
with regards to checking for ABI issues.
/Bruce
> ---
> lib/librte_ether/rte_eth_ctrl.h | 69 ++++++++++++++++++++++++++++++-----------
> 1 file changed, 51 insertions(+), 18 deletions(-)
>
> diff --git a/lib/librte_ether/rte_eth_ctrl.h b/lib/librte_ether/rte_eth_ctrl.h
> index 26b7b33..078faf9 100644
> --- a/lib/librte_ether/rte_eth_ctrl.h
> +++ b/lib/librte_ether/rte_eth_ctrl.h
> @@ -248,6 +248,17 @@ enum rte_eth_tunnel_type {
> };
>
> /**
> + * Flow Director setting modes: none, signature or perfect.
> + */
> +enum rte_fdir_mode {
> + RTE_FDIR_MODE_NONE = 0, /**< Disable FDIR support. */
> + RTE_FDIR_MODE_SIGNATURE, /**< Enable FDIR signature filter mode. */
> + RTE_FDIR_MODE_PERFECT, /**< Enable FDIR perfect filter mode for IP. */
> + RTE_FDIR_MODE_PERFECT_MAC_VLAN, /**< Enable FDIR filter mode - MAC VLAN. */
> + RTE_FDIR_MODE_PERFECT_TUNNEL, /**< Enable FDIR filter mode - tunnel. */
> +};
> +
Why is this structure definition moved in the file? It makes seeing the
diff vs the old version difficult.
> +/**
> * filter type of tunneling packet
> */
> #define ETH_TUNNEL_FILTER_OMAC 0x01 /**< filter by outer MAC addr */
> @@ -377,18 +388,46 @@ struct rte_eth_sctpv6_flow {
> };
>
> /**
> + * A structure used to define the input for MAC VLAN flow
> + */
> +struct rte_eth_mac_vlan_flow {
> + struct ether_addr mac_addr; /**< Mac address to match. */
> +};
> +
> +/**
> + * Tunnel type for flow director.
> + */
> +enum rte_eth_fdir_tunnel_type {
> + RTE_FDIR_TUNNEL_TYPE_NVGRE = 0,
> + RTE_FDIR_TUNNEL_TYPE_VXLAN,
> + RTE_FDIR_TUNNEL_TYPE_UNKNOWN,
> +};
> +
> +/**
> + * A structure used to define the input for tunnel flow, now it's VxLAN or
> + * NVGRE
> + */
> +struct rte_eth_tunnel_flow {
> + enum rte_eth_fdir_tunnel_type tunnel_type; /**< Tunnel type to match. */
> + uint32_t tunnel_id; /**< Tunnel ID to match. TNI, VNI... */
> + struct ether_addr mac_addr; /**< Mac address to match. */
> +};
> +
> +/**
> * An union contains the inputs for all types of flow
> */
> union rte_eth_fdir_flow {
> - struct rte_eth_l2_flow l2_flow;
> - struct rte_eth_udpv4_flow udp4_flow;
> - struct rte_eth_tcpv4_flow tcp4_flow;
> - struct rte_eth_sctpv4_flow sctp4_flow;
> - struct rte_eth_ipv4_flow ip4_flow;
> - struct rte_eth_udpv6_flow udp6_flow;
> - struct rte_eth_tcpv6_flow tcp6_flow;
> - struct rte_eth_sctpv6_flow sctp6_flow;
> - struct rte_eth_ipv6_flow ipv6_flow;
> + struct rte_eth_l2_flow l2_flow;
> + struct rte_eth_udpv4_flow udp4_flow;
> + struct rte_eth_tcpv4_flow tcp4_flow;
> + struct rte_eth_sctpv4_flow sctp4_flow;
> + struct rte_eth_ipv4_flow ip4_flow;
> + struct rte_eth_udpv6_flow udp6_flow;
> + struct rte_eth_tcpv6_flow tcp6_flow;
> + struct rte_eth_sctpv6_flow sctp6_flow;
> + struct rte_eth_ipv6_flow ipv6_flow;
> + struct rte_eth_mac_vlan_flow mac_vlan_flow;
> + struct rte_eth_tunnel_flow tunnel_flow;
Can you please minimize the whitespace changes here. It looks in the diff
like you are replacing the entire set of entries, but on closer inspection
it looks like you are just adding in two extra lines.
> };
>
> /**
> @@ -465,6 +504,9 @@ struct rte_eth_fdir_masks {
> struct rte_eth_ipv6_flow ipv6_mask;
> uint16_t src_port_mask;
> uint16_t dst_port_mask;
> + uint8_t mac_addr_byte_mask; /** Per byte MAC address mask */
> + uint32_t tunnel_id_mask; /** tunnel ID mask */
> + uint8_t tunnel_type_mask;
> };
>
> /**
> @@ -515,15 +557,6 @@ struct rte_eth_fdir_flex_conf {
> /**< Flex mask configuration for each flow type */
> };
>
> -/**
> - * Flow Director setting modes: none, signature or perfect.
> - */
> -enum rte_fdir_mode {
> - RTE_FDIR_MODE_NONE = 0, /**< Disable FDIR support. */
> - RTE_FDIR_MODE_SIGNATURE, /**< Enable FDIR signature filter mode. */
> - RTE_FDIR_MODE_PERFECT, /**< Enable FDIR perfect filter mode. */
> -};
> -
> #define UINT32_BIT (CHAR_BIT * sizeof(uint32_t))
> #define RTE_FLOW_MASK_ARRAY_SIZE \
> (RTE_ALIGN(RTE_ETH_FLOW_MAX, UINT32_BIT)/UINT32_BIT)
> --
> 1.9.3
>
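For reference, a rough sketch of how the new MAC VLAN mode quoted above could
be exercised, assuming the port is configured with fdir_conf.mode set to
RTE_FDIR_MODE_PERFECT_MAC_VLAN and that filters are still programmed through
the existing rte_eth_dev_filter_ctrl() flow director path; the exact field
wiring may differ in the final patchset:

#include <string.h>
#include <rte_ethdev.h>
#include <rte_eth_ctrl.h>

/* Rough sketch: add a perfect MAC VLAN flow director filter that
 * steers packets matching 'mac' to 'rx_queue'. */
static int
add_mac_vlan_fdir_filter(uint8_t port_id, const struct ether_addr *mac,
		uint16_t rx_queue)
{
	struct rte_eth_fdir_filter filter;

	memset(&filter, 0, sizeof(filter));
	filter.input.flow.mac_vlan_flow.mac_addr = *mac;
	filter.action.rx_queue = rx_queue;
	filter.action.behavior = RTE_ETH_FDIR_ACCEPT;
	filter.action.report_status = RTE_ETH_FDIR_REPORT_ID;

	return rte_eth_dev_filter_ctrl(port_id, RTE_ETH_FILTER_FDIR,
			RTE_ETH_FILTER_ADD, &filter);
}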
^ permalink raw reply [relevance 3%]
* [dpdk-dev] [PATCH v4 2/2] librte_cfgfile(rte_cfgfile.h): modify the macros values
2015-10-22 14:03 4% ` [dpdk-dev] [PATCH v4 0/2] cfgfile: " Jasvinder Singh
@ 2015-10-22 14:03 8% ` Jasvinder Singh
0 siblings, 0 replies; 200+ results
From: Jasvinder Singh @ 2015-10-22 14:03 UTC (permalink / raw)
To: dev
This patch refers to the ABI change proposed for
librte_cfgfile(rte_cfgfile.h). In order to allow
for longer names and values, the values of the macros
CFG_NAME_LEN and CFG_VALUE_LEN are increased.
Signed-off-by: Jasvinder Singh <jasvinder.singh@intel.com>
---
doc/guides/rel_notes/deprecation.rst | 4 ----
doc/guides/rel_notes/release_2_2.rst | 6 +++++-
lib/librte_cfgfile/Makefile | 2 +-
lib/librte_cfgfile/rte_cfgfile.h | 9 +++++++--
4 files changed, 13 insertions(+), 8 deletions(-)
diff --git a/doc/guides/rel_notes/deprecation.rst b/doc/guides/rel_notes/deprecation.rst
index 018a119..a391ff0 100644
--- a/doc/guides/rel_notes/deprecation.rst
+++ b/doc/guides/rel_notes/deprecation.rst
@@ -41,10 +41,6 @@ Deprecation Notices
* The scheduler statistics structure will change to allow keeping track of
RED actions.
-* librte_cfgfile: In order to allow for longer names and values,
- the value of macros CFG_NAME_LEN and CFG_NAME_VAL will be increased.
- Most likely, the new values will be 64 and 256, respectively.
-
* librte_table: New functions for table entry bulk add/delete will be added
to the table operations structure.
diff --git a/doc/guides/rel_notes/release_2_2.rst b/doc/guides/rel_notes/release_2_2.rst
index 4f75cff..3c85c92 100644
--- a/doc/guides/rel_notes/release_2_2.rst
+++ b/doc/guides/rel_notes/release_2_2.rst
@@ -117,6 +117,10 @@ ABI Changes
* librte_port: Macros to access the packet meta-data stored within the packet
buffer has been adjusted to cover the packet mbuf structure.
+* librte_cfgfile: In order to allow for longer names and values,
+ the values of the macros CFG_NAME_LEN and CFG_VALUE_LEN are increased;
+ the new values are 64 and 256, respectively.
+
Shared Library Versions
-----------------------
@@ -127,7 +131,7 @@ The libraries prepended with a plus sign were incremented in this version.
+ libethdev.so.2
+ librte_acl.so.2
- librte_cfgfile.so.1
+ librte_cfgfile.so.2
librte_cmdline.so.1
librte_distributor.so.1
+ librte_eal.so.2
diff --git a/lib/librte_cfgfile/Makefile b/lib/librte_cfgfile/Makefile
index 032c240..616aef0 100644
--- a/lib/librte_cfgfile/Makefile
+++ b/lib/librte_cfgfile/Makefile
@@ -41,7 +41,7 @@ CFLAGS += $(WERROR_FLAGS)
EXPORT_MAP := rte_cfgfile_version.map
-LIBABIVER := 1
+LIBABIVER := 2
#
# all source are stored in SRCS-y
diff --git a/lib/librte_cfgfile/rte_cfgfile.h b/lib/librte_cfgfile/rte_cfgfile.h
index 7c9fc91..d443782 100644
--- a/lib/librte_cfgfile/rte_cfgfile.h
+++ b/lib/librte_cfgfile/rte_cfgfile.h
@@ -47,8 +47,13 @@ extern "C" {
*
***/
-#define CFG_NAME_LEN 32
-#define CFG_VALUE_LEN 64
+#ifndef CFG_NAME_LEN
+#define CFG_NAME_LEN 64
+#endif
+
+#ifndef CFG_VALUE_LEN
+#define CFG_VALUE_LEN 256
+#endif
/** Configuration file */
struct rte_cfgfile;
--
2.1.0
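For context, a small sketch of typical rte_cfgfile usage affected by the new
limits; the file path and section/entry names here are made up for
illustration:

#include <stdio.h>
#include <rte_cfgfile.h>

/* Sketch: read one entry; with CFG_VALUE_LEN raised to 256, values
 * longer than the old 64-character limit are no longer truncated. */
static int
print_cfg_entry(const char *path, const char *section, const char *name)
{
	struct rte_cfgfile *cfg = rte_cfgfile_load(path, 0);
	const char *value;

	if (cfg == NULL)
		return -1;

	value = rte_cfgfile_get_entry(cfg, section, name);
	if (value != NULL)
		printf("[%s] %s = %s\n", section, name, value);

	rte_cfgfile_close(cfg);
	return 0;
}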
^ permalink raw reply [relevance 8%]
* [dpdk-dev] [PATCH v4 0/2] cfgfile: modify the macros values
2015-09-04 10:58 8% ` [dpdk-dev] [PATCH v3] librte_cfgfile(rte_cfgfile.h): " Jasvinder Singh
2015-09-07 11:23 0% ` Dumitrescu, Cristian
@ 2015-10-22 14:03 4% ` Jasvinder Singh
2015-10-22 14:03 8% ` [dpdk-dev] [PATCH v4 2/2] librte_cfgfile(rte_cfgfile.h): " Jasvinder Singh
1 sibling, 1 reply; 200+ results
From: Jasvinder Singh @ 2015-10-22 14:03 UTC (permalink / raw)
To: dev
This patchset modifies two macros in the rte_cfgfile
library, so the ABI needs versioning. To meet the ABI
compatibility requirements, the release notes and Makefile
are also modified. Additionally, a fix for the qos_sched
application is included, as it was previously incomplete in
2.1 and the application was redefining those macros.
v2:
*changed commit message
*removed deprecation notice
*updated makefile
v3:
*updated release note.
v4:
*fixed build error for qos_sched sample app.
*supplement incomplete implementation in 2.1
Acked-by: Cristian Dumitrescu <cristian.dumitrescu@intel.com>
Michal Jastrzebski (1):
qos_sched: fix example modification to use librte_cfgfile
Jasvinder Singh (1):
librte_cfgfile(rte_cfgfile.h): modify the macros values
doc/guides/rel_notes/deprecation.rst | 4 -
doc/guides/rel_notes/release_2_2.rst | 6 +-
examples/qos_sched/cfg_file.c | 183 -----------------------------------
examples/qos_sched/cfg_file.h | 29 ------
lib/librte_cfgfile/Makefile | 2 +-
lib/librte_cfgfile/rte_cfgfile.h | 9 +-
6 files changed, 13 insertions(+), 220 deletions(-)
--
2.1.0
^ permalink raw reply [relevance 4%]
* Re: [dpdk-dev] [PATCH v3 1/7] lib/librte_ether: modify the structures for fdir new modes
2015-10-22 12:57 3% ` Bruce Richardson
@ 2015-10-23 1:22 0% ` Lu, Wenzhuo
2015-10-23 9:58 0% ` Bruce Richardson
0 siblings, 1 reply; 200+ results
From: Lu, Wenzhuo @ 2015-10-23 1:22 UTC (permalink / raw)
To: Richardson, Bruce; +Cc: dev
Hi Bruce,
> -----Original Message-----
> From: Richardson, Bruce
> Sent: Thursday, October 22, 2015 8:57 PM
> To: Lu, Wenzhuo
> Cc: dev@dpdk.org
> Subject: Re: [dpdk-dev] [PATCH v3 1/7] lib/librte_ether: modify the structures
> for fdir new modes
>
> On Thu, Oct 22, 2015 at 03:11:36PM +0800, Wenzhuo Lu wrote:
> > Define the new modes and modify the filter and mask structures for the
> > mac vlan and tunnel modes.
> >
> > Signed-off-by: Wenzhuo Lu <wenzhuo.lu@intel.com>
>
> Hi Wenzhuo,
>
> couple of stylistic comments below, which would help with patch review,
> especially with regards to checking for ABI issues.
>
> /Bruce
>
> > ---
> > lib/librte_ether/rte_eth_ctrl.h | 69
> > ++++++++++++++++++++++++++++++-----------
> > 1 file changed, 51 insertions(+), 18 deletions(-)
> >
> > diff --git a/lib/librte_ether/rte_eth_ctrl.h
> > b/lib/librte_ether/rte_eth_ctrl.h index 26b7b33..078faf9 100644
> > --- a/lib/librte_ether/rte_eth_ctrl.h
> > +++ b/lib/librte_ether/rte_eth_ctrl.h
> > @@ -248,6 +248,17 @@ enum rte_eth_tunnel_type { };
> >
> > /**
> > + * Flow Director setting modes: none, signature or perfect.
> > + */
> > +enum rte_fdir_mode {
> > + RTE_FDIR_MODE_NONE = 0, /**< Disable FDIR support. */
> > + RTE_FDIR_MODE_SIGNATURE, /**< Enable FDIR signature filter mode.
> */
> > + RTE_FDIR_MODE_PERFECT, /**< Enable FDIR perfect filter mode for
> IP. */
> > + RTE_FDIR_MODE_PERFECT_MAC_VLAN, /**< Enable FDIR filter mode
> - MAC VLAN. */
> > + RTE_FDIR_MODE_PERFECT_TUNNEL, /**< Enable FDIR filter mode -
> tunnel. */
> > +};
> > +
>
> Why is this structure definition moved in the file, it makes seeing the diff vs
> the old version difficult.
I remember that in the original version I moved this enum to resolve a compile issue.
But now, after all the code changes, the issue is gone. So I'll move it back.
>
> > +/**
> > * filter type of tunneling packet
> > */
> > #define ETH_TUNNEL_FILTER_OMAC 0x01 /**< filter by outer MAC addr
> */
> > @@ -377,18 +388,46 @@ struct rte_eth_sctpv6_flow { };
> >
> > /**
> > + * A structure used to define the input for MAC VLAN flow */ struct
> > +rte_eth_mac_vlan_flow {
> > + struct ether_addr mac_addr; /**< Mac address to match. */ };
> > +
> > +/**
> > + * Tunnel type for flow director.
> > + */
> > +enum rte_eth_fdir_tunnel_type {
> > + RTE_FDIR_TUNNEL_TYPE_NVGRE = 0,
> > + RTE_FDIR_TUNNEL_TYPE_VXLAN,
> > + RTE_FDIR_TUNNEL_TYPE_UNKNOWN,
> > +};
> > +
> > +/**
> > + * A structure used to define the input for tunnel flow, now it's
> > +VxLAN or
> > + * NVGRE
> > + */
> > +struct rte_eth_tunnel_flow {
> > + enum rte_eth_fdir_tunnel_type tunnel_type; /**< Tunnel type to
> match. */
> > + uint32_t tunnel_id; /**< Tunnel ID to match. TNI, VNI...
> */
> > + struct ether_addr mac_addr; /**< Mac address to match. */
> > +};
> > +
> > +/**
> > * An union contains the inputs for all types of flow
> > */
> > union rte_eth_fdir_flow {
> > - struct rte_eth_l2_flow l2_flow;
> > - struct rte_eth_udpv4_flow udp4_flow;
> > - struct rte_eth_tcpv4_flow tcp4_flow;
> > - struct rte_eth_sctpv4_flow sctp4_flow;
> > - struct rte_eth_ipv4_flow ip4_flow;
> > - struct rte_eth_udpv6_flow udp6_flow;
> > - struct rte_eth_tcpv6_flow tcp6_flow;
> > - struct rte_eth_sctpv6_flow sctp6_flow;
> > - struct rte_eth_ipv6_flow ipv6_flow;
> > + struct rte_eth_l2_flow l2_flow;
> > + struct rte_eth_udpv4_flow udp4_flow;
> > + struct rte_eth_tcpv4_flow tcp4_flow;
> > + struct rte_eth_sctpv4_flow sctp4_flow;
> > + struct rte_eth_ipv4_flow ip4_flow;
> > + struct rte_eth_udpv6_flow udp6_flow;
> > + struct rte_eth_tcpv6_flow tcp6_flow;
> > + struct rte_eth_sctpv6_flow sctp6_flow;
> > + struct rte_eth_ipv6_flow ipv6_flow;
> > + struct rte_eth_mac_vlan_flow mac_vlan_flow;
> > + struct rte_eth_tunnel_flow tunnel_flow;
>
> Can you please minimize the whitespace changes here. It looks in the diff
> like you are replacing the entire set of entries, but on closer inspection it
> looks like you are just adding in two extra lines.
Using vi or other editing tools, we can see all these fields are aligned. I think it's
worth keeping.
>
> > };
> >
> > /**
> > @@ -465,6 +504,9 @@ struct rte_eth_fdir_masks {
> > struct rte_eth_ipv6_flow ipv6_mask;
> > uint16_t src_port_mask;
> > uint16_t dst_port_mask;
> > + uint8_t mac_addr_byte_mask; /** Per byte MAC address mask */
> > + uint32_t tunnel_id_mask; /** tunnel ID mask */
> > + uint8_t tunnel_type_mask;
> > };
> >
> > /**
> > @@ -515,15 +557,6 @@ struct rte_eth_fdir_flex_conf {
> > /**< Flex mask configuration for each flow type */ };
> >
> > -/**
> > - * Flow Director setting modes: none, signature or perfect.
> > - */
> > -enum rte_fdir_mode {
> > - RTE_FDIR_MODE_NONE = 0, /**< Disable FDIR support. */
> > - RTE_FDIR_MODE_SIGNATURE, /**< Enable FDIR signature filter
> mode. */
> > - RTE_FDIR_MODE_PERFECT, /**< Enable FDIR perfect filter mode.
> */
> > -};
> > -
> > #define UINT32_BIT (CHAR_BIT * sizeof(uint32_t)) #define
> > RTE_FLOW_MASK_ARRAY_SIZE \
> > (RTE_ALIGN(RTE_ETH_FLOW_MAX, UINT32_BIT)/UINT32_BIT)
> > --
> > 1.9.3
> >
^ permalink raw reply [relevance 0%]
* Re: [dpdk-dev] [PATCH v3 1/7] lib/librte_ether: modify the structures for fdir new modes
2015-10-23 1:22 0% ` Lu, Wenzhuo
@ 2015-10-23 9:58 0% ` Bruce Richardson
2015-10-23 13:06 0% ` Lu, Wenzhuo
0 siblings, 1 reply; 200+ results
From: Bruce Richardson @ 2015-10-23 9:58 UTC (permalink / raw)
To: Lu, Wenzhuo; +Cc: dev
On Fri, Oct 23, 2015 at 02:22:28AM +0100, Lu, Wenzhuo wrote:
> Hi Bruce,
>
> > -----Original Message-----
> > From: Richardson, Bruce
> > Sent: Thursday, October 22, 2015 8:57 PM
> > To: Lu, Wenzhuo
> > Cc: dev@dpdk.org
> > Subject: Re: [dpdk-dev] [PATCH v3 1/7] lib/librte_ether: modify the structures
> > for fdir new modes
> >
> > On Thu, Oct 22, 2015 at 03:11:36PM +0800, Wenzhuo Lu wrote:
> > > Define the new modes and modify the filter and mask structures for the
> > > mac vlan and tunnel modes.
> > >
> > > Signed-off-by: Wenzhuo Lu <wenzhuo.lu@intel.com>
> >
> > Hi Wenzhuo,
> >
> > couple of stylistic comments below, which would help with patch review,
> > especially with regards to checking for ABI issues.
> >
> > /Bruce
> >
> > > ---
> > > lib/librte_ether/rte_eth_ctrl.h | 69
> > > ++++++++++++++++++++++++++++++-----------
> > > 1 file changed, 51 insertions(+), 18 deletions(-)
> > >
> > > diff --git a/lib/librte_ether/rte_eth_ctrl.h
> > > b/lib/librte_ether/rte_eth_ctrl.h index 26b7b33..078faf9 100644
> > > --- a/lib/librte_ether/rte_eth_ctrl.h
> > > +++ b/lib/librte_ether/rte_eth_ctrl.h
> > > @@ -248,6 +248,17 @@ enum rte_eth_tunnel_type { };
> > >
> > > /**
> > > + * Flow Director setting modes: none, signature or perfect.
> > > + */
> > > +enum rte_fdir_mode {
> > > + RTE_FDIR_MODE_NONE = 0, /**< Disable FDIR support. */
> > > + RTE_FDIR_MODE_SIGNATURE, /**< Enable FDIR signature filter mode.
> > */
> > > + RTE_FDIR_MODE_PERFECT, /**< Enable FDIR perfect filter mode for
> > IP. */
> > > + RTE_FDIR_MODE_PERFECT_MAC_VLAN, /**< Enable FDIR filter mode
> > - MAC VLAN. */
> > > + RTE_FDIR_MODE_PERFECT_TUNNEL, /**< Enable FDIR filter mode -
> > tunnel. */
> > > +};
> > > +
> >
> > Why is this structure definition moved in the file, it makes seeing the diff vs
> > the old version difficult.
> I remember in the original version I move this enum to resolve a compile issue.
> But now after all the code changed, the issue is not here. So, I'll move it back.
>
> >
> > > +/**
> > > * filter type of tunneling packet
> > > */
> > > #define ETH_TUNNEL_FILTER_OMAC 0x01 /**< filter by outer MAC addr
> > */
> > > @@ -377,18 +388,46 @@ struct rte_eth_sctpv6_flow { };
> > >
> > > /**
> > > + * A structure used to define the input for MAC VLAN flow */ struct
> > > +rte_eth_mac_vlan_flow {
> > > + struct ether_addr mac_addr; /**< Mac address to match. */ };
> > > +
> > > +/**
> > > + * Tunnel type for flow director.
> > > + */
> > > +enum rte_eth_fdir_tunnel_type {
> > > + RTE_FDIR_TUNNEL_TYPE_NVGRE = 0,
> > > + RTE_FDIR_TUNNEL_TYPE_VXLAN,
> > > + RTE_FDIR_TUNNEL_TYPE_UNKNOWN,
> > > +};
> > > +
> > > +/**
> > > + * A structure used to define the input for tunnel flow, now it's
> > > +VxLAN or
> > > + * NVGRE
> > > + */
> > > +struct rte_eth_tunnel_flow {
> > > + enum rte_eth_fdir_tunnel_type tunnel_type; /**< Tunnel type to
> > match. */
> > > + uint32_t tunnel_id; /**< Tunnel ID to match. TNI, VNI...
> > */
> > > + struct ether_addr mac_addr; /**< Mac address to match. */
> > > +};
> > > +
> > > +/**
> > > * An union contains the inputs for all types of flow
> > > */
> > > union rte_eth_fdir_flow {
> > > - struct rte_eth_l2_flow l2_flow;
> > > - struct rte_eth_udpv4_flow udp4_flow;
> > > - struct rte_eth_tcpv4_flow tcp4_flow;
> > > - struct rte_eth_sctpv4_flow sctp4_flow;
> > > - struct rte_eth_ipv4_flow ip4_flow;
> > > - struct rte_eth_udpv6_flow udp6_flow;
> > > - struct rte_eth_tcpv6_flow tcp6_flow;
> > > - struct rte_eth_sctpv6_flow sctp6_flow;
> > > - struct rte_eth_ipv6_flow ipv6_flow;
> > > + struct rte_eth_l2_flow l2_flow;
> > > + struct rte_eth_udpv4_flow udp4_flow;
> > > + struct rte_eth_tcpv4_flow tcp4_flow;
> > > + struct rte_eth_sctpv4_flow sctp4_flow;
> > > + struct rte_eth_ipv4_flow ip4_flow;
> > > + struct rte_eth_udpv6_flow udp6_flow;
> > > + struct rte_eth_tcpv6_flow tcp6_flow;
> > > + struct rte_eth_sctpv6_flow sctp6_flow;
> > > + struct rte_eth_ipv6_flow ipv6_flow;
> > > + struct rte_eth_mac_vlan_flow mac_vlan_flow;
> > > + struct rte_eth_tunnel_flow tunnel_flow;
> >
> > Can you please minimize the whitespace changes here. It looks in the diff
> > like you are replacing the entire set of entries, but on closer inspection it
> > looks like you are just adding in two extra lines.
> Using vi or other editing tools, we can see all this fields are aligned. I think it's
> worth to keep it.
>
I'm a bit uncertain about this. It really makes the diff confusing, so ideally
the whitespace cleanup should go in a separate patch. On the other hand, it's
a fairly small change, so maybe it's ok here.
Right now, I'd prefer it as a separate patch, because I had to manually go
through each line added/removed to see how they tallied before I understood that
you were just adding in two new lines, and nothing else.
Thomas, perhaps you'd like to comment here?
> >
> > > };
> > >
> > > /**
> > > @@ -465,6 +504,9 @@ struct rte_eth_fdir_masks {
> > > struct rte_eth_ipv6_flow ipv6_mask;
> > > uint16_t src_port_mask;
> > > uint16_t dst_port_mask;
> > > + uint8_t mac_addr_byte_mask; /** Per byte MAC address mask */
> > > + uint32_t tunnel_id_mask; /** tunnel ID mask */
> > > + uint8_t tunnel_type_mask;
> > > };
> > >
> > > /**
> > > @@ -515,15 +557,6 @@ struct rte_eth_fdir_flex_conf {
> > > /**< Flex mask configuration for each flow type */ };
> > >
> > > -/**
> > > - * Flow Director setting modes: none, signature or perfect.
> > > - */
> > > -enum rte_fdir_mode {
> > > - RTE_FDIR_MODE_NONE = 0, /**< Disable FDIR support. */
> > > - RTE_FDIR_MODE_SIGNATURE, /**< Enable FDIR signature filter
> > mode. */
> > > - RTE_FDIR_MODE_PERFECT, /**< Enable FDIR perfect filter mode.
> > */
> > > -};
> > > -
> > > #define UINT32_BIT (CHAR_BIT * sizeof(uint32_t)) #define
> > > RTE_FLOW_MASK_ARRAY_SIZE \
> > > (RTE_ALIGN(RTE_ETH_FLOW_MAX, UINT32_BIT)/UINT32_BIT)
> > > --
> > > 1.9.3
> > >
^ permalink raw reply [relevance 0%]
* [dpdk-dev] [PATCH v3] doc: add contributors guide
2015-10-15 16:51 3% ` [dpdk-dev] [PATCH] doc: add " John McNamara
2015-10-20 11:03 2% ` [dpdk-dev] [PATCH v2] " John McNamara
@ 2015-10-23 10:18 2% ` John McNamara
2 siblings, 0 replies; 200+ results
From: John McNamara @ 2015-10-23 10:18 UTC (permalink / raw)
To: dev
Add a document to explain the DPDK patch submission and review process.
Signed-off-by: John McNamara <john.mcnamara@intel.com>
---
v3:
* Add recommendation to test build the shared and combined libraries.
v2:
* Fixes for mailing list comments.
* Fix for broken link target.
doc/guides/contributing/documentation.rst | 2 +-
doc/guides/contributing/index.rst | 1 +
doc/guides/contributing/patches.rst | 349 ++++++++++++++++++++++++++++++
3 files changed, 351 insertions(+), 1 deletion(-)
create mode 100644 doc/guides/contributing/patches.rst
diff --git a/doc/guides/contributing/documentation.rst b/doc/guides/contributing/documentation.rst
index 7c1eb41..0e37f01 100644
--- a/doc/guides/contributing/documentation.rst
+++ b/doc/guides/contributing/documentation.rst
@@ -1,4 +1,4 @@
-.. doc_guidelines:
+.. _doc_guidelines:
DPDK Documentation Guidelines
=============================
diff --git a/doc/guides/contributing/index.rst b/doc/guides/contributing/index.rst
index 561427b..f49ca88 100644
--- a/doc/guides/contributing/index.rst
+++ b/doc/guides/contributing/index.rst
@@ -9,3 +9,4 @@ Contributor's Guidelines
design
versioning
documentation
+ patches
diff --git a/doc/guides/contributing/patches.rst b/doc/guides/contributing/patches.rst
new file mode 100644
index 0000000..60bbdaa
--- /dev/null
+++ b/doc/guides/contributing/patches.rst
@@ -0,0 +1,349 @@
+.. submitting_patches:
+
+Contributing Code to DPDK
+=========================
+
+This document outlines the guidelines for submitting code to DPDK.
+
+The DPDK development process is modeled (loosely) on the Linux Kernel development model so it is worth reading the
+Linux kernel guide on submitting patches:
+`How to Get Your Change Into the Linux Kernel <http://www.kernel.org/doc/Documentation/SubmittingPatches>`_.
+The rationale for many of the DPDK guidelines is explained in greater detail in the kernel guidelines.
+
+
+The DPDK Development Process
+-----------------------------
+
+The DPDK development process has the following features:
+
+* The code is hosted in a public git repository.
+* There is a mailing list where developers submit patches.
+* There are maintainers for hierarchical components.
+* Patches are reviewed publicly on the mailing list.
+* Successfully reviewed patches are merged to the master branch of the repository.
+
+The mailing list for DPDK development is `dev@dpdk.org <http://dpdk.org/ml/archives/dev/>`_.
+Contributors will need to `register for the mailing list <http://dpdk.org/ml/listinfo/dev>`_ in order to submit patches.
+It is also worth registering for the DPDK `Patchwork <http://dpdk.org/dev/patchwork/project/dpdk/list/>`_
+
+The development process requires some familiarity with the ``git`` version control system.
+Refer to the `Pro Git Book <http://www.git-scm.com/book/>`_ for further information.
+
+
+Getting the Source Code
+-----------------------
+
+The source code can be cloned using either of the following::
+
+ git clone git://dpdk.org/dpdk
+
+ git clone http://dpdk.org/git/dpdk
+
+
+Make your Changes
+-----------------
+
+Make your planned changes in the cloned ``dpdk`` repo. Here are some guidelines and requirements:
+
+* Follow the :ref:`coding_style` guidelines.
+
+* If you add new files or directories you should add your name to the ``MAINTAINERS`` file.
+
+* New external functions should be added to the local ``version.map`` file.
+ See the :doc:`Guidelines for ABI policy and versioning </contributing/versioning>`.
+
+* Important changes will require an addition to the release notes in ``doc/guides/rel_notes/``.
+ See the :ref:`Release Notes section of the Documentation Guidelines <doc_guidelines>` for details.
+
+* Run ``make install``, ``make examples`` and ``make test`` and build the shared and combined libraries
+ to ensure the changes haven't broken existing code:
+
+ .. code-block:: console
+
+ export RTE_TARGET=x86_64-native-linuxapp-gcc
+
+ make T=$RTE_TARGET install
+ make T=$RTE_TARGET install CONFIG_RTE_BUILD_SHARED_LIB=y \
+ CONFIG_RTE_BUILD_COMBINE_LIBS=y
+ make T=$RTE_TARGET examples
+ make T=$RTE_TARGET test
+
+* Don’t break compilation between commits with forward dependencies in a patchset.
+ Each commit should compile on its own to allow for ``git bisect`` and continuous integration testing.
+
+* Add tests to the ``app/test`` unit test framework where possible.
+
+* Add documentation, if relevant, in the form of Doxygen comments or a User Guide in RST format.
+ See the :ref:`Documentation Guidelines <doc_guidelines>`.
+
+Once the changes have been made you should commit them to your local repo.
+
+For small changes, that do not require specific explanations, it is better to keep things together in the
+same patch.
+Larger changes that require different explanations should be separated into logical patches in a patchset.
+A good way of thinking about whether a patch should be split is to consider whether the change could be
+applied without dependencies as a backport.
+
+As a guide to how patches should be structured run ``git log`` on similar files.
+
+
+Commit Messages: Subject Line
+-----------------------------
+
+The first, summary, line of the git commit message becomes the subject line of the patch email.
+Here are some guidelines for the summary line:
+
+* The summary line must capture the area and the impact of the change.
+
+* The summary line should be around 50 characters.
+
+* The summary line should be lowercase apart from acronyms.
+
+* It should be prefixed with the component name (use git log to check existing components).
+ For example::
+
+ config: enable same drivers options for linux and bsd
+
+ ixgbe: fix offload config option name
+
+* Use the imperative of the verb (like instructions to the code base).
+ For example::
+
+ ixgbe: fix rss in 32 bit
+
+* Don't add a period/full stop to the subject line or you will end up with two in the patch name: ``dpdk_description..patch``.
+
+The actual email subject line should be prefixed by ``[PATCH]`` and the version, if greater than v1,
+for example: ``PATCH v2``.
+This is generally added by ``git send-email`` or ``git format-patch``, see below.
+
+If you are submitting an RFC draft of a feature you can use ``[RFC]`` instead of ``[PATCH]``.
+An RFC patch doesn't have to be complete.
+It is intended as a way of getting early feedback.
+
+
+Commit Messages: Body
+---------------------
+
+Here are some guidelines for the body of a commit message:
+
+* The body of the message should describe the issue being fixed or the feature being added.
+ It is important to provide enough information to allow a reviewer to understand the purpose of the patch.
+
+* When the change is obvious the body can be blank, apart from the signoff.
+
+* The commit message must end with a ``Signed-off-by:`` line which is added using::
+
+ git commit --signoff # or -s
+
+ The purpose of the signoff is explained in the
+ `Developer's Certificate of Origin <http://www.kernel.org/doc/Documentation/SubmittingPatches>`_
+ section of the Linux kernel guidelines.
+
+ .. Note::
+
+ All developers must ensure that they have read and understood the
+ Developer's Certificate of Origin section of the documentation prior
+ to applying the signoff and submitting a patch.
+
+* The signoff must be a real name and not an alias or nickname.
+ More than one signoff is allowed.
+
+* The text of the commit message should be wrapped at 72 characters.
+
+* When fixing a regression, it is a good idea to reference the id of the commit which introduced the bug.
+ You can generate the required text using the following git alias::
+
+ git alias: fixline = log -1 --abbrev=12 --format='Fixes: %h (\"%s\")'
+
+
+ The ``Fixes:`` line can then be added to the commit message::
+
+ doc: fix vhost sample parameter
+
+ Update the docs to reflect removed dev-index.
+
+ Fixes: 17b8320a3e11 ("vhost: remove index parameter")
+
+ Signed-off-by: Alex Smith <alex.smith@example.com>
+
+* When fixing an error or warning it is useful to add the error message and instructions on how to reproduce it.
+
+* Use correct capitalization, punctuation and spelling.
+
+In addition to the ``Signed-off-by:`` name the commit messages can also have one or more of the following:
+
+* ``Reported-by:`` The reporter of the issue.
+* ``Tested-by:`` The tester of the change.
+* ``Reviewed-by:`` The reviewer of the change.
+* ``Suggested-by:`` The person who suggested the change.
+* ``Acked-by:`` When a previous version of the patch was acked and the ack is still relevant.
+
+
+Creating Patches
+----------------
+
+It is possible to send patches directly from git but for new contributors it is recommended to generate the
+patches with ``git format-patch`` and then when everything looks okay, and the patches have been checked, to
+send them with ``git send-email``.
+
+Here are some examples of using ``git format-patch`` to generate patches:
+
+.. code-block:: console
+
+ # Generate a patch from the last commit.
+ git format-patch -1
+
+ # Generate a patch from the last 3 commits.
+ git format-patch -3
+
+ # Generate the patches in a directory.
+ git format-patch -3 -o ~/patch/
+
+ # Add a cover letter to explain a patchset.
+ git format-patch -3 -o ~/patch/ --cover-letter
+
+ # Add a prefix with a version number.
+ git format-patch -3 -o ~/patch/ --subject-prefix 'PATCH v2'
+
+
+Cover letters are useful for explaining a patchset and help to generate a logical threading to the patches.
+Smaller notes can be put inline in the patch after the ``---`` separator, for example::
+
+ Subject: [PATCH] fm10k/base: add FM10420 device ids
+
+ Add the device ID for Boulder Rapids and Atwood Channel to enable
+ drivers to support those devices.
+
+ Signed-off-by: Alex Smith <alex.smith@example.com>
+ ---
+
+ ADD NOTES HERE.
+
+ drivers/net/fm10k/base/fm10k_api.c | 6 ++++++
+ drivers/net/fm10k/base/fm10k_type.h | 6 ++++++
+ 2 files changed, 12 insertions(+)
+ ...
+
+Version 2 and later of a patchset should also include a short log of the changes so the reviewer knows what has changed.
+This can be added to the cover letter or the annotations.
+For example::
+
+ v3:
+ * Fixed issued with version.map.
+
+ v2:
+ * Added i40e support.
+ * Renamed ethdev functions from rte_eth_ieee15888_*() to rte_eth_timesync_*()
+ since 802.1AS can be supported through the same interfaces.
+
+
+Checking the Patches
+--------------------
+
+Patches should be checked for formatting and syntax issues using the Linux scripts tool ``checkpatch``.
+
+The ``checkpatch`` utility can be obtained by cloning, and periodically updating, the Linux kernel sources.
+
+The kernel guidelines tested by ``checkpatch`` don't match the DPDK Coding Style guidelines exactly but
+they provide a good indication of conformance.
+Warnings about kernel data types or about split strings can be ignored::
+
+ /path/checkpatch.pl --ignore PREFER_KERNEL_TYPES,SPLIT_STRING -q files*.patch
+
+Ensure that the code compiles with ``gcc`` and ``clang``::
+
+ make T=x86_64-native-linuxapp-gcc install
+ make T=x86_64-native-linuxapp-clang install
+
+Confirm that the changes haven't broken any existing code by running ``make install``, ``make examples`` and
+``make test`` and building the shared and combined libraries:
+
+ .. code-block:: console
+
+ export RTE_TARGET=x86_64-native-linuxapp-gcc
+
+ make T=$RTE_TARGET install
+ make T=$RTE_TARGET install CONFIG_RTE_BUILD_SHARED_LIB=y \
+ CONFIG_RTE_BUILD_COMBINE_LIBS=y
+ make T=$RTE_TARGET examples
+ make T=$RTE_TARGET test
+
+
+Sending Patches
+---------------
+
+Patches should be sent to the mailing list using ``git send-email``.
+You can configure an external SMTP with something like the following::
+
+ [sendemail]
+ smtpuser = name@domain.com
+ smtpserver = smtp.domain.com
+ smtpserverport = 465
+ smtpencryption = ssl
+
+See the `Git send-email <https://git-scm.com/docs/git-send-email>`_ documentation for more details.
+
+The patches should be sent to ``dev@dpdk.org``.
+If the patches are a change to existing files then you should send them TO the maintainer(s) and CC ``dev@dpdk.org``.
+The appropriate maintainer can be found in the ``MAINTAINERS`` file::
+
+ git send-email --to maintainer@some.org --cc dev@dpdk.org 000*.patch
+
+New additions can be sent without a maintainer::
+
+ git send-email --to dev@dpdk.org 000*.patch
+
+You can test the emails by sending it to yourself or with the ``--dry-run`` option.
+
+If the patch is in relation to a previous email thread you can add it to the same thread using the Message ID::
+
+ git send-email --to dev@dpdk.org --in-reply-to <1234-foo@bar.com> 000*.patch
+
+The Message ID can be found in the raw text of emails or at the top of each Patchwork patch,
+`for example <http://dpdk.org/dev/patchwork/patch/7646/>`_.
+Shallow threading (``--thread --no-chain-reply-to``) is preferred for a patch series.
+
+Once submitted your patches will appear on the mailing list and in Patchwork.
+
+Experienced committers may send patches directly with ``git send-email`` without the ``git format-patch`` step.
+The options ``--annotate`` and ``confirm = always`` are recommended for checking patches before sending.
+
+
+The Review Process
+------------------
+
+The more work you put into the previous steps the easier it will be to get a patch accepted.
+
+The general cycle for patch review and acceptance is:
+
+#. Submit the patch.
+
+#. Check the automatic test reports in the coming hours.
+
+#. Wait for review comments. While you are waiting review some other patches.
+
+#. Fix the review comments and submit a ``v n+1`` patchset::
+
+ git format-patch -3 -o ~/patch/ --subject-prefix 'PATCH v2'
+
+#. Update Patchwork to mark your previous patches as "Superseded".
+
+#. If the patch is deemed suitable for merging by the relevant maintainer(s) or other developers they will ``ack``
+ the patch with an email that includes something like::
+
+ Acked-by: Alex Smith <alex.smith@example.com>
+
+ **Note**: When acking patches please remove as much of the text of the patch email as possible.
+ It is generally best to delete everything after the ``Signed-off-by:`` line.
+
+#. Having the patch ``Reviewed-by:`` and/or ``Tested-by:`` will also help the patch to be accepted.
+
+#. If the patch isn't deemed suitable based on being out of scope or conflicting with existing functionality
+ it may receive a ``nack``.
+ In this case you will need to make a more convincing technical argument in favor of your patches.
+
+#. In addition a patch will not be accepted if it doesn't address comments from a previous version with fixes or
+ valid arguments.
+
+#. Acked patches will be merged in the current or next merge window.
--
1.8.1.4
^ permalink raw reply [relevance 2%]
* Re: [dpdk-dev] [PATCH v3 1/7] lib/librte_ether: modify the structures for fdir new modes
2015-10-23 9:58 0% ` Bruce Richardson
@ 2015-10-23 13:06 0% ` Lu, Wenzhuo
0 siblings, 0 replies; 200+ results
From: Lu, Wenzhuo @ 2015-10-23 13:06 UTC (permalink / raw)
To: Richardson, Bruce; +Cc: dev
Hi Bruce,
> -----Original Message-----
> From: Richardson, Bruce
> Sent: Friday, October 23, 2015 5:58 PM
> To: Lu, Wenzhuo
> Cc: dev@dpdk.org; thomas.monjalon@6wind.com
> Subject: Re: [dpdk-dev] [PATCH v3 1/7] lib/librte_ether: modify the structures
> for fdir new modes
>
> On Fri, Oct 23, 2015 at 02:22:28AM +0100, Lu, Wenzhuo wrote:
> > Hi Bruce,
> >
> > > -----Original Message-----
> > > From: Richardson, Bruce
> > > Sent: Thursday, October 22, 2015 8:57 PM
> > > To: Lu, Wenzhuo
> > > Cc: dev@dpdk.org
> > > Subject: Re: [dpdk-dev] [PATCH v3 1/7] lib/librte_ether: modify the
> > > structures for fdir new modes
> > >
> > > On Thu, Oct 22, 2015 at 03:11:36PM +0800, Wenzhuo Lu wrote:
> > > > Define the new modes and modify the filter and mask structures for
> > > > the mac vlan and tunnel modes.
> > > >
> > > > Signed-off-by: Wenzhuo Lu <wenzhuo.lu@intel.com>
> > >
> > > Hi Wenzhuo,
> > >
> > > couple of stylistic comments below, which would help with patch
> > > review, especially with regards to checking for ABI issues.
> > >
> > > /Bruce
> > >
> > > > ---
> > > > lib/librte_ether/rte_eth_ctrl.h | 69
> > > > ++++++++++++++++++++++++++++++-----------
> > > > 1 file changed, 51 insertions(+), 18 deletions(-)
> > > >
> > > > diff --git a/lib/librte_ether/rte_eth_ctrl.h
> > > > b/lib/librte_ether/rte_eth_ctrl.h index 26b7b33..078faf9 100644
> > > > --- a/lib/librte_ether/rte_eth_ctrl.h
> > > > +++ b/lib/librte_ether/rte_eth_ctrl.h
> > > > @@ -248,6 +248,17 @@ enum rte_eth_tunnel_type { };
> > > >
> > > > /**
> > > > + * Flow Director setting modes: none, signature or perfect.
> > > > + */
> > > > +enum rte_fdir_mode {
> > > > + RTE_FDIR_MODE_NONE = 0, /**< Disable FDIR support. */
> > > > + RTE_FDIR_MODE_SIGNATURE, /**< Enable FDIR signature filter mode.
> > > */
> > > > + RTE_FDIR_MODE_PERFECT, /**< Enable FDIR perfect filter mode for
> > > IP. */
> > > > + RTE_FDIR_MODE_PERFECT_MAC_VLAN, /**< Enable FDIR filter mode
> > > - MAC VLAN. */
> > > > + RTE_FDIR_MODE_PERFECT_TUNNEL, /**< Enable FDIR filter mode -
> > > tunnel. */
> > > > +};
> > > > +
> > >
> > > Why is this structure definition moved in the file, it makes seeing
> > > the diff vs the old version difficult.
> > I remember in the original version I move this enum to resolve a compile
> issue.
> > But now after all the code changed, the issue is not here. So, I'll move it
> back.
> >
> > >
> > > > +/**
> > > > * filter type of tunneling packet
> > > > */
> > > > #define ETH_TUNNEL_FILTER_OMAC 0x01 /**< filter by outer MAC
> > > > addr
> > > */
> > > > @@ -377,18 +388,46 @@ struct rte_eth_sctpv6_flow { };
> > > >
> > > > /**
> > > > + * A structure used to define the input for MAC VLAN flow */
> > > > +struct rte_eth_mac_vlan_flow {
> > > > + struct ether_addr mac_addr; /**< Mac address to match. */ };
> > > > +
> > > > +/**
> > > > + * Tunnel type for flow director.
> > > > + */
> > > > +enum rte_eth_fdir_tunnel_type {
> > > > + RTE_FDIR_TUNNEL_TYPE_NVGRE = 0,
> > > > + RTE_FDIR_TUNNEL_TYPE_VXLAN,
> > > > + RTE_FDIR_TUNNEL_TYPE_UNKNOWN,
> > > > +};
> > > > +
> > > > +/**
> > > > + * A structure used to define the input for tunnel flow, now it's
> > > > +VxLAN or
> > > > + * NVGRE
> > > > + */
> > > > +struct rte_eth_tunnel_flow {
> > > > + enum rte_eth_fdir_tunnel_type tunnel_type; /**< Tunnel type to
> > > match. */
> > > > + uint32_t tunnel_id; /**< Tunnel ID to match. TNI, VNI...
> > > */
> > > > + struct ether_addr mac_addr; /**< Mac address to match. */
> > > > +};
> > > > +
> > > > +/**
> > > > * An union contains the inputs for all types of flow
> > > > */
> > > > union rte_eth_fdir_flow {
> > > > - struct rte_eth_l2_flow l2_flow;
> > > > - struct rte_eth_udpv4_flow udp4_flow;
> > > > - struct rte_eth_tcpv4_flow tcp4_flow;
> > > > - struct rte_eth_sctpv4_flow sctp4_flow;
> > > > - struct rte_eth_ipv4_flow ip4_flow;
> > > > - struct rte_eth_udpv6_flow udp6_flow;
> > > > - struct rte_eth_tcpv6_flow tcp6_flow;
> > > > - struct rte_eth_sctpv6_flow sctp6_flow;
> > > > - struct rte_eth_ipv6_flow ipv6_flow;
> > > > + struct rte_eth_l2_flow l2_flow;
> > > > + struct rte_eth_udpv4_flow udp4_flow;
> > > > + struct rte_eth_tcpv4_flow tcp4_flow;
> > > > + struct rte_eth_sctpv4_flow sctp4_flow;
> > > > + struct rte_eth_ipv4_flow ip4_flow;
> > > > + struct rte_eth_udpv6_flow udp6_flow;
> > > > + struct rte_eth_tcpv6_flow tcp6_flow;
> > > > + struct rte_eth_sctpv6_flow sctp6_flow;
> > > > + struct rte_eth_ipv6_flow ipv6_flow;
> > > > + struct rte_eth_mac_vlan_flow mac_vlan_flow;
> > > > + struct rte_eth_tunnel_flow tunnel_flow;
> > >
> > > Can you please minimize the whitespace changes here. It looks in the
> > > diff like you are replacing the entire set of entries, but on closer
> > > inspection it looks like you are just adding in two extra lines.
> > Using vi or other editing tools, we can see that all these fields are
> > aligned. I think it's worth keeping.
> >
>
> I'm a bit uncertain about this. It really makes the diff confusing, so ideally
> the whitespace cleanup should go in a separate patch. On the other hand,
> it's a fairly small change, so maybe it's ok here.
>
> Right now, I'd prefer it as a separate patch, because I had to manually go
> through each line added/removed to see how they tallied before I
> understood that you were just adding in two new lines, and nothing else.
>
> Thomas, perhaps you'd like to comment here?
Maybe you missed Thomas' mail.
I'd like to remove the whitespace changes for the reviewers' sake. :)
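For reference, a minimal sketch of how an application could fill the two
new union members shown above; the struct, field and enum names are taken
from the diff, while the helper function and the MAC value are purely
illustrative:

#include <string.h>
#include <rte_ether.h>
#include <rte_eth_ctrl.h>

static void
fill_new_fdir_flows(union rte_eth_fdir_flow *mv, union rte_eth_fdir_flow *tn)
{
        static const struct ether_addr mac = {
                .addr_bytes = { 0x00, 0x11, 0x22, 0x33, 0x44, 0x55 } };

        /* MAC VLAN mode: only a MAC address is matched. */
        memset(mv, 0, sizeof(*mv));
        mv->mac_vlan_flow.mac_addr = mac;

        /* Tunnel mode: tunnel type, tunnel ID (TNI/VNI) and MAC address. */
        memset(tn, 0, sizeof(*tn));
        tn->tunnel_flow.tunnel_type = RTE_FDIR_TUNNEL_TYPE_VXLAN;
        tn->tunnel_flow.tunnel_id = 100;
        tn->tunnel_flow.mac_addr = mac;
}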
>
> > >
> > > > };
> > > >
> > > > /**
> > > > @@ -465,6 +504,9 @@ struct rte_eth_fdir_masks {
> > > > struct rte_eth_ipv6_flow ipv6_mask;
> > > > uint16_t src_port_mask;
> > > > uint16_t dst_port_mask;
> > > > + uint8_t mac_addr_byte_mask; /** Per byte MAC address mask */
> > > > + uint32_t tunnel_id_mask; /** tunnel ID mask */
> > > > + uint8_t tunnel_type_mask;
> > > > };
> > > >
> > > > /**
> > > > @@ -515,15 +557,6 @@ struct rte_eth_fdir_flex_conf {
> > > > /**< Flex mask configuration for each flow type */ };
> > > >
> > > > -/**
> > > > - * Flow Director setting modes: none, signature or perfect.
> > > > - */
> > > > -enum rte_fdir_mode {
> > > > - RTE_FDIR_MODE_NONE = 0, /**< Disable FDIR support. */
> > > > - RTE_FDIR_MODE_SIGNATURE, /**< Enable FDIR signature filter
> > > mode. */
> > > > - RTE_FDIR_MODE_PERFECT, /**< Enable FDIR perfect filter mode.
> > > */
> > > > -};
> > > > -
> > > > #define UINT32_BIT (CHAR_BIT * sizeof(uint32_t)) #define
> > > > RTE_FLOW_MASK_ARRAY_SIZE \
> > > > (RTE_ALIGN(RTE_ETH_FLOW_MAX, UINT32_BIT)/UINT32_BIT)
> > > > --
> > > > 1.9.3
> > > >
^ permalink raw reply [relevance 0%]
* [dpdk-dev] [PATCH v1 0/3] lpm: increase number of next hops for lpm (ipv4)
@ 2015-10-23 13:51 3% Michal Jastrzebski
2015-10-23 13:51 1% ` [dpdk-dev] [PATCH v1 1/3] " Michal Jastrzebski
` (2 more replies)
0 siblings, 3 replies; 200+ results
From: Michal Jastrzebski @ 2015-10-23 13:51 UTC (permalink / raw)
To: dev
From: Michal Kobylinski <michalx.kobylinski@intel.com>
The current DPDK LPM implementation for IPv4 and IPv6 limits the
number of next hops to 256, as the next hop ID is an 8-bit field.
The proposed extension increases the number of next hops for IPv4 to
2^24 and also allows 32-bit read/write operations.
This patchset requires an additional change to the rte_table library to
meet ABI compatibility requirements. A v2 will be sent next week.
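To illustrate the intended usage, here is a minimal sketch based on the
test changes in this series. The struct rte_lpm_extend type and the
rte_lpm_lookup_extend() name are the v1 proposal and may still change in
v2; the route and next hop values below are made up:

#include <stdint.h>
#include <rte_memory.h>
#include <rte_lpm.h>

static int
lpm_next_hop_example(void)
{
        struct rte_lpm_extend *lpm;
        uint32_t ip = (192U << 24) | (168U << 16); /* 192.168.0.0 */
        uint32_t next_hop = 0;
        int ret = -1;

        lpm = rte_lpm_create("example", SOCKET_ID_ANY, 1024, 0);
        if (lpm == NULL)
                return -1;

        /* Next hop IDs above 255 now fit in the 24-bit field. */
        if (rte_lpm_add(lpm, ip, 24, 70000) == 0 &&
            rte_lpm_lookup_extend(lpm, ip | 1, &next_hop) == 0)
                ret = (next_hop == 70000) ? 0 : -1;

        rte_lpm_free(lpm);
        return ret;
}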
Michal Kobylinski (3):
lpm: increase number of next hops for lpm (ipv4)
examples: update of apps using librte_lpm (ipv4)
doc: update release 2.2 after changes in librte_lpm
app/test/test_func_reentrancy.c | 4 +-
app/test/test_lpm.c | 227 ++++-----
doc/guides/rel_notes/release_2_2.rst | 2 +
examples/ip_fragmentation/main.c | 10 +-
examples/ip_reassembly/main.c | 9 +-
examples/l3fwd-power/main.c | 2 +-
examples/l3fwd-vf/main.c | 2 +-
examples/l3fwd/main.c | 16 +-
examples/load_balancer/runtime.c | 3 +-
lib/librte_lpm/rte_lpm.c | 887 ++++++++++++++++++++++++++++++++++-
lib/librte_lpm/rte_lpm.h | 295 +++++++++++-
lib/librte_lpm/rte_lpm_version.map | 59 ++-
lib/librte_table/rte_table_lpm.c | 10 +-
13 files changed, 1345 insertions(+), 181 deletions(-)
--
1.9.1
^ permalink raw reply [relevance 3%]
* [dpdk-dev] [PATCH v1 1/3] lpm: increase number of next hops for lpm (ipv4)
2015-10-23 13:51 3% [dpdk-dev] [PATCH v1 0/3] lpm: increase number of next hops for lpm (ipv4) Michal Jastrzebski
@ 2015-10-23 13:51 1% ` Michal Jastrzebski
2015-10-23 14:38 3% ` Bruce Richardson
2015-10-23 13:51 5% ` [dpdk-dev] [PATCH v1 3/3] doc: update release 2.2 after changes in librte_lpm Michal Jastrzebski
2015-10-23 16:20 0% ` [dpdk-dev] [PATCH v1 0/3] lpm: increase number of next hops for lpm (ipv4) Matthew Hall
2 siblings, 1 reply; 200+ results
From: Michal Jastrzebski @ 2015-10-23 13:51 UTC (permalink / raw)
To: dev
From: Michal Kobylinski <michalx.kobylinski@intel.com>
Main implementation: changes to the lpm library introducing the new data
types. Additionally, this patch implements the changes required by the
test application.
ABI versioning requirements are met only for the lpm library; the
rte_table changes will be sent in v2 of this patch-set.
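For readers not familiar with the rte_compat.h machinery used throughout
this patch: each exported function keeps its old implementation under a
2.0 versioned symbol, new shared-library consumers bind to the 2.2
default, and static builds are aliased straight to the new variant
(together with matching entries in rte_lpm_version.map). A condensed
sketch of the rte_lpm_add handling, with prototypes standing in for the
full definitions found in the diff below:

#include <rte_compat.h>

/* Old ABI: 8-bit next hop, kept for binaries linked against DPDK 2.0. */
int rte_lpm_add_v20(struct rte_lpm *lpm, uint32_t ip, uint8_t depth,
                uint8_t next_hop);
VERSION_SYMBOL(rte_lpm_add, _v20, 2.0);

/* New ABI: 32-bit next hop, the default from DPDK 2.2 onwards. */
int rte_lpm_add_v22(struct rte_lpm_extend *lpm, uint32_t ip, uint8_t depth,
                uint32_t next_hop);
BIND_DEFAULT_SYMBOL(rte_lpm_add, _v22, 2.2);

/* Static builds resolve rte_lpm_add() directly to the _v22 variant. */
MAP_STATIC_SYMBOL(int rte_lpm_add(struct rte_lpm_extend *lpm, uint32_t ip,
                uint8_t depth, uint32_t next_hop), rte_lpm_add_v22);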
Signed-off-by: Michal Kobylinski <michalx.kobylinski@intel.com>
---
app/test/test_func_reentrancy.c | 4 +-
app/test/test_lpm.c | 227 +++++-----
lib/librte_lpm/rte_lpm.c | 887 ++++++++++++++++++++++++++++++++++++-
lib/librte_lpm/rte_lpm.h | 295 +++++++++++-
lib/librte_lpm/rte_lpm_version.map | 59 ++-
lib/librte_table/rte_table_lpm.c | 10 +-
6 files changed, 1322 insertions(+), 160 deletions(-)
diff --git a/app/test/test_func_reentrancy.c b/app/test/test_func_reentrancy.c
index dbecc52..331ab29 100644
--- a/app/test/test_func_reentrancy.c
+++ b/app/test/test_func_reentrancy.c
@@ -343,7 +343,7 @@ static void
lpm_clean(unsigned lcore_id)
{
char lpm_name[MAX_STRING_SIZE];
- struct rte_lpm *lpm;
+ struct rte_lpm_extend *lpm;
int i;
for (i = 0; i < MAX_LPM_ITER_TIMES; i++) {
@@ -358,7 +358,7 @@ static int
lpm_create_free(__attribute__((unused)) void *arg)
{
unsigned lcore_self = rte_lcore_id();
- struct rte_lpm *lpm;
+ struct rte_lpm_extend *lpm;
char lpm_name[MAX_STRING_SIZE];
int i;
diff --git a/app/test/test_lpm.c b/app/test/test_lpm.c
index 8b4ded9..31f54d0 100644
--- a/app/test/test_lpm.c
+++ b/app/test/test_lpm.c
@@ -114,7 +114,7 @@ rte_lpm_test tests[] = {
int32_t
test0(void)
{
- struct rte_lpm *lpm = NULL;
+ struct rte_lpm_extend *lpm = NULL;
/* rte_lpm_create: lpm name == NULL */
lpm = rte_lpm_create(NULL, SOCKET_ID_ANY, MAX_RULES, 0);
@@ -139,7 +139,7 @@ test0(void)
int32_t
test1(void)
{
- struct rte_lpm *lpm = NULL;
+ struct rte_lpm_extend *lpm = NULL;
int32_t i;
/* rte_lpm_free: Free NULL */
@@ -163,7 +163,7 @@ test1(void)
int32_t
test2(void)
{
- struct rte_lpm *lpm = NULL;
+ struct rte_lpm_extend *lpm = NULL;
lpm = rte_lpm_create(__func__, SOCKET_ID_ANY, MAX_RULES, 0);
TEST_LPM_ASSERT(lpm != NULL);
@@ -179,7 +179,7 @@ test2(void)
int32_t
test3(void)
{
- struct rte_lpm *lpm = NULL;
+ struct rte_lpm_extend *lpm = NULL;
uint32_t ip = IPv4(0, 0, 0, 0);
uint8_t depth = 24, next_hop = 100;
int32_t status = 0;
@@ -212,7 +212,7 @@ test3(void)
int32_t
test4(void)
{
- struct rte_lpm *lpm = NULL;
+ struct rte_lpm_extend *lpm = NULL;
uint32_t ip = IPv4(0, 0, 0, 0);
uint8_t depth = 24;
int32_t status = 0;
@@ -252,7 +252,7 @@ test5(void)
int32_t status = 0;
/* rte_lpm_lookup: lpm == NULL */
- status = rte_lpm_lookup(NULL, ip, &next_hop_return);
+ status = rte_lpm_lookup_extend(NULL, ip, &next_hop_return);
TEST_LPM_ASSERT(status < 0);
/*Create vaild lpm to use in rest of test. */
@@ -260,7 +260,7 @@ test5(void)
TEST_LPM_ASSERT(lpm != NULL);
/* rte_lpm_lookup: depth < 1 */
- status = rte_lpm_lookup(lpm, ip, NULL);
+ status = rte_lpm_lookup_extend(lpm, ip, NULL);
TEST_LPM_ASSERT(status < 0);
rte_lpm_free(lpm);
@@ -276,9 +276,10 @@ test5(void)
int32_t
test6(void)
{
- struct rte_lpm *lpm = NULL;
+ struct rte_lpm_extend *lpm = NULL;
uint32_t ip = IPv4(0, 0, 0, 0);
- uint8_t depth = 24, next_hop_add = 100, next_hop_return = 0;
+ uint8_t depth = 24;
+ uint32_t next_hop_add = 100, next_hop_return = 0;
int32_t status = 0;
lpm = rte_lpm_create(__func__, SOCKET_ID_ANY, MAX_RULES, 0);
@@ -287,13 +288,13 @@ test6(void)
status = rte_lpm_add(lpm, ip, depth, next_hop_add);
TEST_LPM_ASSERT(status == 0);
- status = rte_lpm_lookup(lpm, ip, &next_hop_return);
+ status = rte_lpm_lookup_extend(lpm, ip, &next_hop_return);
TEST_LPM_ASSERT((status == 0) && (next_hop_return == next_hop_add));
status = rte_lpm_delete(lpm, ip, depth);
TEST_LPM_ASSERT(status == 0);
- status = rte_lpm_lookup(lpm, ip, &next_hop_return);
+ status = rte_lpm_lookup_extend(lpm, ip, &next_hop_return);
TEST_LPM_ASSERT(status == -ENOENT);
rte_lpm_free(lpm);
@@ -309,10 +310,11 @@ int32_t
test7(void)
{
__m128i ipx4;
- uint16_t hop[4];
- struct rte_lpm *lpm = NULL;
+ uint32_t hop[4];
+ struct rte_lpm_extend *lpm = NULL;
uint32_t ip = IPv4(0, 0, 0, 0);
- uint8_t depth = 32, next_hop_add = 100, next_hop_return = 0;
+ uint8_t depth = 32;
+ uint32_t next_hop_add = 100, next_hop_return = 0;
int32_t status = 0;
lpm = rte_lpm_create(__func__, SOCKET_ID_ANY, MAX_RULES, 0);
@@ -321,20 +323,20 @@ test7(void)
status = rte_lpm_add(lpm, ip, depth, next_hop_add);
TEST_LPM_ASSERT(status == 0);
- status = rte_lpm_lookup(lpm, ip, &next_hop_return);
+ status = rte_lpm_lookup_extend(lpm, ip, &next_hop_return);
TEST_LPM_ASSERT((status == 0) && (next_hop_return == next_hop_add));
ipx4 = _mm_set_epi32(ip, ip + 0x100, ip - 0x100, ip);
- rte_lpm_lookupx4(lpm, ipx4, hop, UINT16_MAX);
+ rte_lpm_lookupx4_extend(lpm, ipx4, hop, UINT32_MAX);
TEST_LPM_ASSERT(hop[0] == next_hop_add);
- TEST_LPM_ASSERT(hop[1] == UINT16_MAX);
- TEST_LPM_ASSERT(hop[2] == UINT16_MAX);
+ TEST_LPM_ASSERT(hop[1] == UINT32_MAX);
+ TEST_LPM_ASSERT(hop[2] == UINT32_MAX);
TEST_LPM_ASSERT(hop[3] == next_hop_add);
status = rte_lpm_delete(lpm, ip, depth);
TEST_LPM_ASSERT(status == 0);
- status = rte_lpm_lookup(lpm, ip, &next_hop_return);
+ status = rte_lpm_lookup_extend(lpm, ip, &next_hop_return);
TEST_LPM_ASSERT(status == -ENOENT);
rte_lpm_free(lpm);
@@ -355,10 +357,11 @@ int32_t
test8(void)
{
__m128i ipx4;
- uint16_t hop[4];
- struct rte_lpm *lpm = NULL;
+ uint32_t hop[4];
+ struct rte_lpm_extend *lpm = NULL;
uint32_t ip1 = IPv4(127, 255, 255, 255), ip2 = IPv4(128, 0, 0, 0);
- uint8_t depth, next_hop_add, next_hop_return;
+ uint8_t depth;
+ uint32_t next_hop_add, next_hop_return;
int32_t status = 0;
lpm = rte_lpm_create(__func__, SOCKET_ID_ANY, MAX_RULES, 0);
@@ -373,18 +376,18 @@ test8(void)
TEST_LPM_ASSERT(status == 0);
/* Check IP in first half of tbl24 which should be empty. */
- status = rte_lpm_lookup(lpm, ip1, &next_hop_return);
+ status = rte_lpm_lookup_extend(lpm, ip1, &next_hop_return);
TEST_LPM_ASSERT(status == -ENOENT);
- status = rte_lpm_lookup(lpm, ip2, &next_hop_return);
+ status = rte_lpm_lookup_extend(lpm, ip2, &next_hop_return);
TEST_LPM_ASSERT((status == 0) &&
(next_hop_return == next_hop_add));
ipx4 = _mm_set_epi32(ip2, ip1, ip2, ip1);
- rte_lpm_lookupx4(lpm, ipx4, hop, UINT16_MAX);
- TEST_LPM_ASSERT(hop[0] == UINT16_MAX);
+ rte_lpm_lookupx4_extend(lpm, ipx4, hop, UINT32_MAX);
+ TEST_LPM_ASSERT(hop[0] == UINT32_MAX);
TEST_LPM_ASSERT(hop[1] == next_hop_add);
- TEST_LPM_ASSERT(hop[2] == UINT16_MAX);
+ TEST_LPM_ASSERT(hop[2] == UINT32_MAX);
TEST_LPM_ASSERT(hop[3] == next_hop_add);
}
@@ -395,7 +398,7 @@ test8(void)
status = rte_lpm_delete(lpm, ip2, depth);
TEST_LPM_ASSERT(status == 0);
- status = rte_lpm_lookup(lpm, ip2, &next_hop_return);
+ status = rte_lpm_lookup_extend(lpm, ip2, &next_hop_return);
if (depth != 1) {
TEST_LPM_ASSERT((status == 0) &&
@@ -405,20 +408,20 @@ test8(void)
TEST_LPM_ASSERT(status == -ENOENT);
}
- status = rte_lpm_lookup(lpm, ip1, &next_hop_return);
+ status = rte_lpm_lookup_extend(lpm, ip1, &next_hop_return);
TEST_LPM_ASSERT(status == -ENOENT);
ipx4 = _mm_set_epi32(ip1, ip1, ip2, ip2);
- rte_lpm_lookupx4(lpm, ipx4, hop, UINT16_MAX);
+ rte_lpm_lookupx4_extend(lpm, ipx4, hop, UINT32_MAX);
if (depth != 1) {
TEST_LPM_ASSERT(hop[0] == next_hop_add);
TEST_LPM_ASSERT(hop[1] == next_hop_add);
} else {
- TEST_LPM_ASSERT(hop[0] == UINT16_MAX);
- TEST_LPM_ASSERT(hop[1] == UINT16_MAX);
+ TEST_LPM_ASSERT(hop[0] == UINT32_MAX);
+ TEST_LPM_ASSERT(hop[1] == UINT32_MAX);
}
- TEST_LPM_ASSERT(hop[2] == UINT16_MAX);
- TEST_LPM_ASSERT(hop[3] == UINT16_MAX);
+ TEST_LPM_ASSERT(hop[2] == UINT32_MAX);
+ TEST_LPM_ASSERT(hop[3] == UINT32_MAX);
}
rte_lpm_free(lpm);
@@ -436,9 +439,10 @@ test8(void)
int32_t
test9(void)
{
- struct rte_lpm *lpm = NULL;
+ struct rte_lpm_extend *lpm = NULL;
uint32_t ip, ip_1, ip_2;
- uint8_t depth, depth_1, depth_2, next_hop_add, next_hop_add_1,
+ uint8_t depth, depth_1, depth_2;
+ uint32_t next_hop_add, next_hop_add_1,
next_hop_add_2, next_hop_return;
int32_t status = 0;
@@ -453,13 +457,13 @@ test9(void)
status = rte_lpm_add(lpm, ip, depth, next_hop_add);
TEST_LPM_ASSERT(status == 0);
- status = rte_lpm_lookup(lpm, ip, &next_hop_return);
+ status = rte_lpm_lookup_extend(lpm, ip, &next_hop_return);
TEST_LPM_ASSERT((status == 0) && (next_hop_return == next_hop_add));
status = rte_lpm_delete(lpm, ip, depth);
TEST_LPM_ASSERT(status == 0);
- status = rte_lpm_lookup(lpm, ip, &next_hop_return);
+ status = rte_lpm_lookup_extend(lpm, ip, &next_hop_return);
TEST_LPM_ASSERT(status == -ENOENT);
rte_lpm_delete_all(lpm);
@@ -472,7 +476,7 @@ test9(void)
status = rte_lpm_add(lpm, ip, depth, next_hop_add);
TEST_LPM_ASSERT(status == 0);
- status = rte_lpm_lookup(lpm, ip, &next_hop_return);
+ status = rte_lpm_lookup_extend(lpm, ip, &next_hop_return);
TEST_LPM_ASSERT((status == 0) && (next_hop_return == next_hop_add));
depth = 24;
@@ -481,7 +485,7 @@ test9(void)
status = rte_lpm_add(lpm, ip, depth, next_hop_add);
TEST_LPM_ASSERT(status == 0);
- status = rte_lpm_lookup(lpm, ip, &next_hop_return);
+ status = rte_lpm_lookup_extend(lpm, ip, &next_hop_return);
TEST_LPM_ASSERT((status == 0) && (next_hop_return == next_hop_add));
depth = 24;
@@ -494,7 +498,7 @@ test9(void)
status = rte_lpm_delete(lpm, ip, depth);
TEST_LPM_ASSERT(status == 0);
- status = rte_lpm_lookup(lpm, ip, &next_hop_return);
+ status = rte_lpm_lookup_extend(lpm, ip, &next_hop_return);
TEST_LPM_ASSERT(status == -ENOENT);
rte_lpm_delete_all(lpm);
@@ -508,7 +512,7 @@ test9(void)
status = rte_lpm_add(lpm, ip, depth, next_hop_add);
TEST_LPM_ASSERT(status == 0);
- status = rte_lpm_lookup(lpm, ip, &next_hop_return);
+ status = rte_lpm_lookup_extend(lpm, ip, &next_hop_return);
TEST_LPM_ASSERT((status == 0) && (next_hop_return == next_hop_add));
ip = IPv4(128, 0, 0, 5);
@@ -518,26 +522,26 @@ test9(void)
status = rte_lpm_add(lpm, ip, depth, next_hop_add);
TEST_LPM_ASSERT(status == 0);
- status = rte_lpm_lookup(lpm, ip, &next_hop_return);
+ status = rte_lpm_lookup_extend(lpm, ip, &next_hop_return);
TEST_LPM_ASSERT((status == 0) && (next_hop_return == next_hop_add));
status = rte_lpm_delete(lpm, ip, depth);
TEST_LPM_ASSERT(status == 0);
- status = rte_lpm_lookup(lpm, ip, &next_hop_return);
+ status = rte_lpm_lookup_extend(lpm, ip, &next_hop_return);
TEST_LPM_ASSERT(status == -ENOENT);
ip = IPv4(128, 0, 0, 0);
depth = 32;
next_hop_add = 100;
- status = rte_lpm_lookup(lpm, ip, &next_hop_return);
+ status = rte_lpm_lookup_extend(lpm, ip, &next_hop_return);
TEST_LPM_ASSERT((status == 0) && (next_hop_return == next_hop_add));
status = rte_lpm_delete(lpm, ip, depth);
TEST_LPM_ASSERT(status == 0);
- status = rte_lpm_lookup(lpm, ip, &next_hop_return);
+ status = rte_lpm_lookup_extend(lpm, ip, &next_hop_return);
TEST_LPM_ASSERT(status == -ENOENT);
rte_lpm_delete_all(lpm);
@@ -557,25 +561,25 @@ test9(void)
status = rte_lpm_add(lpm, ip_1, depth_1, next_hop_add_1);
TEST_LPM_ASSERT(status == 0);
- status = rte_lpm_lookup(lpm, ip_1, &next_hop_return);
+ status = rte_lpm_lookup_extend(lpm, ip_1, &next_hop_return);
TEST_LPM_ASSERT((status == 0) && (next_hop_return == next_hop_add_1));
status = rte_lpm_add(lpm, ip_2, depth_2, next_hop_add_2);
TEST_LPM_ASSERT(status == 0);
- status = rte_lpm_lookup(lpm, ip_2, &next_hop_return);
+ status = rte_lpm_lookup_extend(lpm, ip_2, &next_hop_return);
TEST_LPM_ASSERT((status == 0) && (next_hop_return == next_hop_add_2));
status = rte_lpm_delete(lpm, ip_2, depth_2);
TEST_LPM_ASSERT(status == 0);
- status = rte_lpm_lookup(lpm, ip_2, &next_hop_return);
+ status = rte_lpm_lookup_extend(lpm, ip_2, &next_hop_return);
TEST_LPM_ASSERT((status == 0) && (next_hop_return == next_hop_add_1));
status = rte_lpm_delete(lpm, ip_1, depth_1);
TEST_LPM_ASSERT(status == 0);
- status = rte_lpm_lookup(lpm, ip_1, &next_hop_return);
+ status = rte_lpm_lookup_extend(lpm, ip_1, &next_hop_return);
TEST_LPM_ASSERT(status == -ENOENT);
rte_lpm_free(lpm);
@@ -600,9 +604,10 @@ int32_t
test10(void)
{
- struct rte_lpm *lpm = NULL;
+ struct rte_lpm_extend *lpm = NULL;
uint32_t ip;
- uint8_t depth, next_hop_add, next_hop_return;
+ uint8_t depth;
+ uint32_t next_hop_add, next_hop_return;
int32_t status = 0;
/* Add rule that covers a TBL24 range previously invalid & lookup
@@ -617,13 +622,13 @@ test10(void)
status = rte_lpm_add(lpm, ip, depth, next_hop_add);
TEST_LPM_ASSERT(status == 0);
- status = rte_lpm_lookup(lpm, ip, &next_hop_return);
+ status = rte_lpm_lookup_extend(lpm, ip, &next_hop_return);
TEST_LPM_ASSERT((status == 0) && (next_hop_return == next_hop_add));
status = rte_lpm_delete(lpm, ip, depth);
TEST_LPM_ASSERT(status == 0);
- status = rte_lpm_lookup(lpm, ip, &next_hop_return);
+ status = rte_lpm_lookup_extend(lpm, ip, &next_hop_return);
TEST_LPM_ASSERT(status == -ENOENT);
rte_lpm_delete_all(lpm);
@@ -635,7 +640,7 @@ test10(void)
status = rte_lpm_add(lpm, ip, depth, next_hop_add);
TEST_LPM_ASSERT(status == 0);
- status = rte_lpm_lookup(lpm, ip, &next_hop_return);
+ status = rte_lpm_lookup_extend(lpm, ip, &next_hop_return);
TEST_LPM_ASSERT((status == 0) && (next_hop_return == next_hop_add));
status = rte_lpm_delete(lpm, ip, depth);
@@ -660,13 +665,13 @@ test10(void)
status = rte_lpm_add(lpm, ip, depth, next_hop_add);
TEST_LPM_ASSERT(status == 0);
- status = rte_lpm_lookup(lpm, ip, &next_hop_return);
+ status = rte_lpm_lookup_extend(lpm, ip, &next_hop_return);
TEST_LPM_ASSERT((status == 0) && (next_hop_return == next_hop_add));
ip = IPv4(128, 0, 0, 0);
next_hop_add = 100;
- status = rte_lpm_lookup(lpm, ip, &next_hop_return);
+ status = rte_lpm_lookup_extend(lpm, ip, &next_hop_return);
TEST_LPM_ASSERT((status == 0) && (next_hop_return == next_hop_add));
ip = IPv4(128, 0, 0, 0);
@@ -675,7 +680,7 @@ test10(void)
status = rte_lpm_delete(lpm, ip, depth);
TEST_LPM_ASSERT(status == 0);
- status = rte_lpm_lookup(lpm, ip, &next_hop_return);
+ status = rte_lpm_lookup_extend(lpm, ip, &next_hop_return);
TEST_LPM_ASSERT(status == -ENOENT);
ip = IPv4(128, 0, 0, 10);
@@ -684,7 +689,7 @@ test10(void)
status = rte_lpm_delete(lpm, ip, depth);
TEST_LPM_ASSERT(status == 0);
- status = rte_lpm_lookup(lpm, ip, &next_hop_return);
+ status = rte_lpm_lookup_extend(lpm, ip, &next_hop_return);
TEST_LPM_ASSERT(status == -ENOENT);
rte_lpm_delete_all(lpm);
@@ -699,7 +704,7 @@ test10(void)
status = rte_lpm_add(lpm, ip, depth, next_hop_add);
TEST_LPM_ASSERT(status == 0);
- status = rte_lpm_lookup(lpm, ip, &next_hop_return);
+ status = rte_lpm_lookup_extend(lpm, ip, &next_hop_return);
TEST_LPM_ASSERT((status == 0) && (next_hop_return == next_hop_add));
next_hop_add = 101;
@@ -707,13 +712,13 @@ test10(void)
status = rte_lpm_add(lpm, ip, depth, next_hop_add);
TEST_LPM_ASSERT(status == 0);
- status = rte_lpm_lookup(lpm, ip, &next_hop_return);
+ status = rte_lpm_lookup_extend(lpm, ip, &next_hop_return);
TEST_LPM_ASSERT((status == 0) && (next_hop_return == next_hop_add));
status = rte_lpm_delete(lpm, ip, depth);
TEST_LPM_ASSERT(status == 0);
- status = rte_lpm_lookup(lpm, ip, &next_hop_return);
+ status = rte_lpm_lookup_extend(lpm, ip, &next_hop_return);
TEST_LPM_ASSERT(status == -ENOENT);
rte_lpm_delete_all(lpm);
@@ -728,7 +733,7 @@ test10(void)
status = rte_lpm_add(lpm, ip, depth, next_hop_add);
TEST_LPM_ASSERT(status == 0);
- status = rte_lpm_lookup(lpm, ip, &next_hop_return);
+ status = rte_lpm_lookup_extend(lpm, ip, &next_hop_return);
TEST_LPM_ASSERT((status == 0) && (next_hop_return == next_hop_add));
next_hop_add = 101;
@@ -736,13 +741,13 @@ test10(void)
status = rte_lpm_add(lpm, ip, depth, next_hop_add);
TEST_LPM_ASSERT(status == 0);
- status = rte_lpm_lookup(lpm, ip, &next_hop_return);
+ status = rte_lpm_lookup_extend(lpm, ip, &next_hop_return);
TEST_LPM_ASSERT((status == 0) && (next_hop_return == next_hop_add));
status = rte_lpm_delete(lpm, ip, depth);
TEST_LPM_ASSERT(status == 0);
- status = rte_lpm_lookup(lpm, ip, &next_hop_return);
+ status = rte_lpm_lookup_extend(lpm, ip, &next_hop_return);
TEST_LPM_ASSERT(status == -ENOENT);
rte_lpm_delete_all(lpm);
@@ -755,7 +760,7 @@ test10(void)
status = rte_lpm_delete(lpm, ip, depth);
TEST_LPM_ASSERT(status < 0);
- status = rte_lpm_lookup(lpm, ip, &next_hop_return);
+ status = rte_lpm_lookup_extend(lpm, ip, &next_hop_return);
TEST_LPM_ASSERT(status == -ENOENT);
rte_lpm_delete_all(lpm);
@@ -768,7 +773,7 @@ test10(void)
status = rte_lpm_delete(lpm, ip, depth);
TEST_LPM_ASSERT(status < 0);
- status = rte_lpm_lookup(lpm, ip, &next_hop_return);
+ status = rte_lpm_lookup_extend(lpm, ip, &next_hop_return);
TEST_LPM_ASSERT(status == -ENOENT);
rte_lpm_free(lpm);
@@ -786,9 +791,10 @@ int32_t
test11(void)
{
- struct rte_lpm *lpm = NULL;
+ struct rte_lpm_extend *lpm = NULL;
uint32_t ip;
- uint8_t depth, next_hop_add, next_hop_return;
+ uint8_t depth;
+ uint32_t next_hop_add, next_hop_return;
int32_t status = 0;
lpm = rte_lpm_create(__func__, SOCKET_ID_ANY, MAX_RULES, 0);
@@ -808,13 +814,13 @@ test11(void)
status = rte_lpm_add(lpm, ip, depth, next_hop_add);
TEST_LPM_ASSERT(status == 0);
- status = rte_lpm_lookup(lpm, ip, &next_hop_return);
+ status = rte_lpm_lookup_extend(lpm, ip, &next_hop_return);
TEST_LPM_ASSERT((status == 0) && (next_hop_return == next_hop_add));
ip = IPv4(128, 0, 0, 0);
next_hop_add = 100;
- status = rte_lpm_lookup(lpm, ip, &next_hop_return);
+ status = rte_lpm_lookup_extend(lpm, ip, &next_hop_return);
TEST_LPM_ASSERT((status == 0) && (next_hop_return == next_hop_add));
ip = IPv4(128, 0, 0, 0);
@@ -823,7 +829,7 @@ test11(void)
status = rte_lpm_delete(lpm, ip, depth);
TEST_LPM_ASSERT(status == 0);
- status = rte_lpm_lookup(lpm, ip, &next_hop_return);
+ status = rte_lpm_lookup_extend(lpm, ip, &next_hop_return);
TEST_LPM_ASSERT(status == -ENOENT);
ip = IPv4(128, 0, 0, 10);
@@ -832,7 +838,7 @@ test11(void)
status = rte_lpm_delete(lpm, ip, depth);
TEST_LPM_ASSERT(status == 0);
- status = rte_lpm_lookup(lpm, ip, &next_hop_return);
+ status = rte_lpm_lookup_extend(lpm, ip, &next_hop_return);
TEST_LPM_ASSERT(status == -ENOENT);
rte_lpm_free(lpm);
@@ -851,10 +857,11 @@ int32_t
test12(void)
{
__m128i ipx4;
- uint16_t hop[4];
- struct rte_lpm *lpm = NULL;
+ uint32_t hop[4];
+ struct rte_lpm_extend *lpm = NULL;
uint32_t ip, i;
- uint8_t depth, next_hop_add, next_hop_return;
+ uint8_t depth;
+ uint32_t next_hop_add, next_hop_return;
int32_t status = 0;
lpm = rte_lpm_create(__func__, SOCKET_ID_ANY, MAX_RULES, 0);
@@ -868,21 +875,21 @@ test12(void)
status = rte_lpm_add(lpm, ip, depth, next_hop_add);
TEST_LPM_ASSERT(status == 0);
- status = rte_lpm_lookup(lpm, ip, &next_hop_return);
+ status = rte_lpm_lookup_extend(lpm, ip, &next_hop_return);
TEST_LPM_ASSERT((status == 0) &&
(next_hop_return == next_hop_add));
ipx4 = _mm_set_epi32(ip, ip + 1, ip, ip - 1);
- rte_lpm_lookupx4(lpm, ipx4, hop, UINT16_MAX);
- TEST_LPM_ASSERT(hop[0] == UINT16_MAX);
+ rte_lpm_lookupx4_extend(lpm, ipx4, hop, UINT32_MAX);
+ TEST_LPM_ASSERT(hop[0] == UINT32_MAX);
TEST_LPM_ASSERT(hop[1] == next_hop_add);
- TEST_LPM_ASSERT(hop[2] == UINT16_MAX);
+ TEST_LPM_ASSERT(hop[2] == UINT32_MAX);
TEST_LPM_ASSERT(hop[3] == next_hop_add);
status = rte_lpm_delete(lpm, ip, depth);
TEST_LPM_ASSERT(status == 0);
- status = rte_lpm_lookup(lpm, ip, &next_hop_return);
+ status = rte_lpm_lookup_extend(lpm, ip, &next_hop_return);
TEST_LPM_ASSERT(status == -ENOENT);
}
@@ -902,9 +909,10 @@ test12(void)
int32_t
test13(void)
{
- struct rte_lpm *lpm = NULL;
+ struct rte_lpm_extend *lpm = NULL;
uint32_t ip, i;
- uint8_t depth, next_hop_add_1, next_hop_add_2, next_hop_return;
+ uint8_t depth;
+ uint32_t next_hop_add_1, next_hop_add_2, next_hop_return;
int32_t status = 0;
lpm = rte_lpm_create(__func__, SOCKET_ID_ANY, MAX_RULES, 0);
@@ -917,7 +925,7 @@ test13(void)
status = rte_lpm_add(lpm, ip, depth, next_hop_add_1);
TEST_LPM_ASSERT(status == 0);
- status = rte_lpm_lookup(lpm, ip, &next_hop_return);
+ status = rte_lpm_lookup_extend(lpm, ip, &next_hop_return);
TEST_LPM_ASSERT((status == 0) && (next_hop_return == next_hop_add_1));
depth = 32;
@@ -927,14 +935,14 @@ test13(void)
status = rte_lpm_add(lpm, ip, depth, next_hop_add_2);
TEST_LPM_ASSERT(status == 0);
- status = rte_lpm_lookup(lpm, ip, &next_hop_return);
+ status = rte_lpm_lookup_extend(lpm, ip, &next_hop_return);
TEST_LPM_ASSERT((status == 0) &&
(next_hop_return == next_hop_add_2));
status = rte_lpm_delete(lpm, ip, depth);
TEST_LPM_ASSERT(status == 0);
- status = rte_lpm_lookup(lpm, ip, &next_hop_return);
+ status = rte_lpm_lookup_extend(lpm, ip, &next_hop_return);
TEST_LPM_ASSERT((status == 0) &&
(next_hop_return == next_hop_add_1));
}
@@ -944,7 +952,7 @@ test13(void)
status = rte_lpm_delete(lpm, ip, depth);
TEST_LPM_ASSERT(status == 0);
- status = rte_lpm_lookup(lpm, ip, &next_hop_return);
+ status = rte_lpm_lookup_extend(lpm, ip, &next_hop_return);
TEST_LPM_ASSERT(status == -ENOENT);
rte_lpm_free(lpm);
@@ -964,9 +972,10 @@ test14(void)
/* We only use depth = 32 in the loop below so we must make sure
* that we have enough storage for all rules at that depth*/
- struct rte_lpm *lpm = NULL;
+ struct rte_lpm_extend *lpm = NULL;
uint32_t ip;
- uint8_t depth, next_hop_add, next_hop_return;
+ uint8_t depth;
+ uint32_t next_hop_add, next_hop_return;
int32_t status = 0;
/* Add enough space for 256 rules for every depth */
@@ -982,7 +991,7 @@ test14(void)
status = rte_lpm_add(lpm, ip, depth, next_hop_add);
TEST_LPM_ASSERT(status == 0);
- status = rte_lpm_lookup(lpm, ip, &next_hop_return);
+ status = rte_lpm_lookup_extend(lpm, ip, &next_hop_return);
TEST_LPM_ASSERT((status == 0) &&
(next_hop_return == next_hop_add));
}
@@ -1011,7 +1020,7 @@ test14(void)
int32_t
test15(void)
{
- struct rte_lpm *lpm = NULL, *result = NULL;
+ struct rte_lpm_extend *lpm = NULL, *result = NULL;
/* Create lpm */
lpm = rte_lpm_create("lpm_find_existing", SOCKET_ID_ANY, 256 * 32, 0);
@@ -1040,7 +1049,7 @@ int32_t
test16(void)
{
uint32_t ip;
- struct rte_lpm *lpm = rte_lpm_create(__func__, SOCKET_ID_ANY,
+ struct rte_lpm_extend *lpm = rte_lpm_create(__func__, SOCKET_ID_ANY,
256 * 32, 0);
/* ip loops through all possibilities for top 24 bits of address */
@@ -1071,17 +1080,17 @@ test16(void)
int32_t
test17(void)
{
- struct rte_lpm *lpm = NULL;
+ struct rte_lpm_extend *lpm = NULL;
const uint32_t ip_10_32 = IPv4(10, 10, 10, 2);
const uint32_t ip_10_24 = IPv4(10, 10, 10, 0);
const uint32_t ip_20_25 = IPv4(10, 10, 20, 2);
const uint8_t d_ip_10_32 = 32,
d_ip_10_24 = 24,
d_ip_20_25 = 25;
- const uint8_t next_hop_ip_10_32 = 100,
+ const uint32_t next_hop_ip_10_32 = 100,
next_hop_ip_10_24 = 105,
next_hop_ip_20_25 = 111;
- uint8_t next_hop_return = 0;
+ uint32_t next_hop_return = 0;
int32_t status = 0;
lpm = rte_lpm_create(__func__, SOCKET_ID_ANY, MAX_RULES, 0);
@@ -1091,7 +1100,7 @@ test17(void)
next_hop_ip_10_32)) < 0)
return -1;
- status = rte_lpm_lookup(lpm, ip_10_32, &next_hop_return);
+ status = rte_lpm_lookup_extend(lpm, ip_10_32, &next_hop_return);
uint8_t test_hop_10_32 = next_hop_return;
TEST_LPM_ASSERT(status == 0);
TEST_LPM_ASSERT(next_hop_return == next_hop_ip_10_32);
@@ -1100,7 +1109,7 @@ test17(void)
next_hop_ip_10_24)) < 0)
return -1;
- status = rte_lpm_lookup(lpm, ip_10_24, &next_hop_return);
+ status = rte_lpm_lookup_extend(lpm, ip_10_24, &next_hop_return);
uint8_t test_hop_10_24 = next_hop_return;
TEST_LPM_ASSERT(status == 0);
TEST_LPM_ASSERT(next_hop_return == next_hop_ip_10_24);
@@ -1109,7 +1118,7 @@ test17(void)
next_hop_ip_20_25)) < 0)
return -1;
- status = rte_lpm_lookup(lpm, ip_20_25, &next_hop_return);
+ status = rte_lpm_lookup_extend(lpm, ip_20_25, &next_hop_return);
uint8_t test_hop_20_25 = next_hop_return;
TEST_LPM_ASSERT(status == 0);
TEST_LPM_ASSERT(next_hop_return == next_hop_ip_20_25);
@@ -1124,11 +1133,11 @@ test17(void)
return -1;
}
- status = rte_lpm_lookup(lpm, ip_10_32, &next_hop_return);
+ status = rte_lpm_lookup_extend(lpm, ip_10_32, &next_hop_return);
TEST_LPM_ASSERT(status == 0);
TEST_LPM_ASSERT(next_hop_return == next_hop_ip_10_32);
- status = rte_lpm_lookup(lpm, ip_10_24, &next_hop_return);
+ status = rte_lpm_lookup_extend(lpm, ip_10_24, &next_hop_return);
TEST_LPM_ASSERT(status == 0);
TEST_LPM_ASSERT(next_hop_return == next_hop_ip_10_24);
@@ -1172,10 +1181,10 @@ print_route_distribution(const struct route_rule *table, uint32_t n)
int32_t
perf_test(void)
{
- struct rte_lpm *lpm = NULL;
+ struct rte_lpm_extend *lpm = NULL;
uint64_t begin, total_time, lpm_used_entries = 0;
unsigned i, j;
- uint8_t next_hop_add = 0xAA, next_hop_return = 0;
+ uint32_t next_hop_add = 0xAA, next_hop_return = 0;
int status = 0;
uint64_t cache_line_counter = 0;
int64_t count = 0;
@@ -1236,7 +1245,7 @@ perf_test(void)
begin = rte_rdtsc();
for (j = 0; j < BATCH_SIZE; j ++) {
- if (rte_lpm_lookup(lpm, ip_batch[j], &next_hop_return) != 0)
+ if (rte_lpm_lookup_extend(lpm, ip_batch[j], &next_hop_return) != 0)
count++;
}
@@ -1252,7 +1261,7 @@ perf_test(void)
count = 0;
for (i = 0; i < ITERATIONS; i ++) {
static uint32_t ip_batch[BATCH_SIZE];
- uint16_t next_hops[BULK_SIZE];
+ uint32_t next_hops[BULK_SIZE];
/* Create array of random IP addresses */
for (j = 0; j < BATCH_SIZE; j ++)
@@ -1262,9 +1271,9 @@ perf_test(void)
begin = rte_rdtsc();
for (j = 0; j < BATCH_SIZE; j += BULK_SIZE) {
unsigned k;
- rte_lpm_lookup_bulk(lpm, &ip_batch[j], next_hops, BULK_SIZE);
+ rte_lpm_lookup_bulk_func_extend(lpm, &ip_batch[j], next_hops, BULK_SIZE);
for (k = 0; k < BULK_SIZE; k++)
- if (unlikely(!(next_hops[k] & RTE_LPM_LOOKUP_SUCCESS)))
+ if (unlikely(!(next_hops[k] & RTE_LPM_LOOKUP_SUCCESS_EXTEND)))
count++;
}
@@ -1279,7 +1288,7 @@ perf_test(void)
count = 0;
for (i = 0; i < ITERATIONS; i++) {
static uint32_t ip_batch[BATCH_SIZE];
- uint16_t next_hops[4];
+ uint32_t next_hops[4];
/* Create array of random IP addresses */
for (j = 0; j < BATCH_SIZE; j++)
@@ -1293,9 +1302,9 @@ perf_test(void)
ipx4 = _mm_loadu_si128((__m128i *)(ip_batch + j));
ipx4 = *(__m128i *)(ip_batch + j);
- rte_lpm_lookupx4(lpm, ipx4, next_hops, UINT16_MAX);
+ rte_lpm_lookupx4_extend(lpm, ipx4, next_hops, UINT32_MAX);
for (k = 0; k < RTE_DIM(next_hops); k++)
- if (unlikely(next_hops[k] == UINT16_MAX))
+ if (unlikely(next_hops[k] == UINT32_MAX))
count++;
}
diff --git a/lib/librte_lpm/rte_lpm.c b/lib/librte_lpm/rte_lpm.c
index 163ba3c..58b7fcc 100644
--- a/lib/librte_lpm/rte_lpm.c
+++ b/lib/librte_lpm/rte_lpm.c
@@ -120,7 +120,7 @@ depth_to_range(uint8_t depth)
* Find an existing lpm table and return a pointer to it.
*/
struct rte_lpm *
-rte_lpm_find_existing(const char *name)
+rte_lpm_find_existing_v20(const char *name)
{
struct rte_lpm *l = NULL;
struct rte_tailq_entry *te;
@@ -143,12 +143,42 @@ rte_lpm_find_existing(const char *name)
return l;
}
+VERSION_SYMBOL(rte_lpm_find_existing, _v20, 2.0);
+
+struct rte_lpm_extend *
+rte_lpm_find_existing_v22(const char *name)
+{
+ struct rte_lpm_extend *l = NULL;
+ struct rte_tailq_entry *te;
+ struct rte_lpm_list *lpm_list;
+
+ lpm_list = RTE_TAILQ_CAST(rte_lpm_tailq.head, rte_lpm_list);
+
+ rte_rwlock_read_lock(RTE_EAL_TAILQ_RWLOCK);
+ TAILQ_FOREACH(te, lpm_list, next) {
+ l = (struct rte_lpm_extend *) te->data;
+ if (strncmp(name, l->name, RTE_LPM_NAMESIZE) == 0)
+ break;
+ }
+ rte_rwlock_read_unlock(RTE_EAL_TAILQ_RWLOCK);
+
+ if (te == NULL) {
+ rte_errno = ENOENT;
+ return NULL;
+ }
+
+ return l;
+}
+BIND_DEFAULT_SYMBOL(rte_lpm_find_existing, _v22, 2.2);
+MAP_STATIC_SYMBOL(struct rte_lpm_extend *
+ rte_lpm_find_existing(const char *name), rte_lpm_find_existing_v22);
/*
* Allocates memory for LPM object
*/
+
struct rte_lpm *
-rte_lpm_create(const char *name, int socket_id, int max_rules,
+rte_lpm_create_v20(const char *name, int socket_id, int max_rules,
__rte_unused int flags)
{
char mem_name[RTE_LPM_NAMESIZE];
@@ -213,12 +243,117 @@ exit:
return lpm;
}
+VERSION_SYMBOL(rte_lpm_create, _v20, 2.0);
+
+struct rte_lpm_extend *
+rte_lpm_create_v22(const char *name, int socket_id, int max_rules,
+ __rte_unused int flags)
+{
+ char mem_name[RTE_LPM_NAMESIZE];
+ struct rte_lpm_extend *lpm = NULL;
+ struct rte_tailq_entry *te;
+ uint32_t mem_size;
+ struct rte_lpm_list *lpm_list;
+
+ lpm_list = RTE_TAILQ_CAST(rte_lpm_tailq.head, rte_lpm_list);
+
+ RTE_BUILD_BUG_ON(sizeof(union rte_lpm_tbl24_entry_extend) != 4);
+ RTE_BUILD_BUG_ON(sizeof(union rte_lpm_tbl8_entry_extend) != 4);
+
+ /* Check user arguments. */
+ if ((name == NULL) || (socket_id < -1) || (max_rules == 0)) {
+ rte_errno = EINVAL;
+ return NULL;
+ }
+
+ snprintf(mem_name, sizeof(mem_name), "LPM_%s", name);
+
+ /* Determine the amount of memory to allocate. */
+ mem_size = sizeof(*lpm) + (sizeof(lpm->rules_tbl[0]) * max_rules);
+
+ rte_rwlock_write_lock(RTE_EAL_TAILQ_RWLOCK);
+
+ /* guarantee there's no existing */
+ TAILQ_FOREACH(te, lpm_list, next) {
+ lpm = (struct rte_lpm_extend *) te->data;
+ if (strncmp(name, lpm->name, RTE_LPM_NAMESIZE) == 0)
+ break;
+ }
+ if (te != NULL)
+ goto exit;
+
+ /* allocate tailq entry */
+ te = rte_zmalloc("LPM_TAILQ_ENTRY", sizeof(*te), 0);
+ if (te == NULL) {
+ RTE_LOG(ERR, LPM, "Failed to allocate tailq entry\n");
+ goto exit;
+ }
+
+ /* Allocate memory to store the LPM data structures. */
+ lpm = (struct rte_lpm_extend *)rte_zmalloc_socket(mem_name, mem_size,
+ RTE_CACHE_LINE_SIZE, socket_id);
+ if (lpm == NULL) {
+ RTE_LOG(ERR, LPM, "LPM memory allocation failed\n");
+ rte_free(te);
+ goto exit;
+ }
+
+ /* Save user arguments. */
+ lpm->max_rules = max_rules;
+ snprintf(lpm->name, sizeof(lpm->name), "%s", name);
+
+ te->data = (void *) lpm;
+
+ TAILQ_INSERT_TAIL(lpm_list, te, next);
+
+exit:
+ rte_rwlock_write_unlock(RTE_EAL_TAILQ_RWLOCK);
+
+ return lpm;
+}
+BIND_DEFAULT_SYMBOL(rte_lpm_create, _v22, 2.2);
+MAP_STATIC_SYMBOL(struct rte_lpm_extend *
+ rte_lpm_create(const char *name, int socket_id, int max_rules,
+ __rte_unused int flags), rte_lpm_create_v22);
/*
* Deallocates memory for given LPM table.
*/
void
-rte_lpm_free(struct rte_lpm *lpm)
+rte_lpm_free_v20(struct rte_lpm *lpm)
+{
+ struct rte_lpm_list *lpm_list;
+ struct rte_tailq_entry *te;
+
+ /* Check user arguments. */
+ if (lpm == NULL)
+ return;
+
+ lpm_list = RTE_TAILQ_CAST(rte_lpm_tailq.head, rte_lpm_list);
+
+ rte_rwlock_write_lock(RTE_EAL_TAILQ_RWLOCK);
+
+ /* find our tailq entry */
+ TAILQ_FOREACH(te, lpm_list, next) {
+ if (te->data == (void *) lpm)
+ break;
+ }
+ if (te == NULL) {
+ rte_rwlock_write_unlock(RTE_EAL_TAILQ_RWLOCK);
+ return;
+ }
+
+ TAILQ_REMOVE(lpm_list, te, next);
+
+ rte_rwlock_write_unlock(RTE_EAL_TAILQ_RWLOCK);
+
+ rte_free(lpm);
+ rte_free(te);
+}
+VERSION_SYMBOL(rte_lpm_free, _v20, 2.0);
+
+void
+rte_lpm_free_v22(struct rte_lpm_extend *lpm)
{
struct rte_lpm_list *lpm_list;
struct rte_tailq_entry *te;
@@ -248,6 +383,9 @@ rte_lpm_free(struct rte_lpm *lpm)
rte_free(lpm);
rte_free(te);
}
+BIND_DEFAULT_SYMBOL(rte_lpm_free, _v22, 2.2);
+MAP_STATIC_SYMBOL(void rte_lpm_free(struct rte_lpm_extend *lpm),
+ rte_lpm_free_v22);
/*
* Adds a rule to the rule table.
@@ -328,10 +466,80 @@ rule_add(struct rte_lpm *lpm, uint32_t ip_masked, uint8_t depth,
return rule_index;
}
+static inline int32_t
+rule_add_extend(struct rte_lpm_extend *lpm, uint32_t ip_masked, uint8_t depth,
+ uint32_t next_hop)
+{
+ uint32_t rule_gindex, rule_index, last_rule;
+ int i;
+
+ VERIFY_DEPTH(depth);
+
+ /* Scan through rule group to see if rule already exists. */
+ if (lpm->rule_info[depth - 1].used_rules > 0) {
+
+ /* rule_gindex stands for rule group index. */
+ rule_gindex = lpm->rule_info[depth - 1].first_rule;
+ /* Initialise rule_index to point to start of rule group. */
+ rule_index = rule_gindex;
+ /* Last rule = Last used rule in this rule group. */
+ last_rule = rule_gindex + lpm->rule_info[depth - 1].used_rules;
+
+ for (; rule_index < last_rule; rule_index++) {
+
+ /* If rule already exists update its next_hop and return. */
+ if (lpm->rules_tbl[rule_index].ip == ip_masked) {
+ lpm->rules_tbl[rule_index].next_hop = next_hop;
+
+ return rule_index;
+ }
+ }
+
+ if (rule_index == lpm->max_rules)
+ return -ENOSPC;
+ } else {
+ /* Calculate the position in which the rule will be stored. */
+ rule_index = 0;
+
+ for (i = depth - 1; i > 0; i--) {
+ if (lpm->rule_info[i - 1].used_rules > 0) {
+ rule_index = lpm->rule_info[i - 1].first_rule + lpm->rule_info[i - 1].used_rules;
+ break;
+ }
+ }
+ if (rule_index == lpm->max_rules)
+ return -ENOSPC;
+
+ lpm->rule_info[depth - 1].first_rule = rule_index;
+ }
+
+ /* Make room for the new rule in the array. */
+ for (i = RTE_LPM_MAX_DEPTH; i > depth; i--) {
+ if (lpm->rule_info[i - 1].first_rule + lpm->rule_info[i - 1].used_rules == lpm->max_rules)
+ return -ENOSPC;
+
+ if (lpm->rule_info[i - 1].used_rules > 0) {
+ lpm->rules_tbl[lpm->rule_info[i - 1].first_rule + lpm->rule_info[i - 1].used_rules]
+ = lpm->rules_tbl[lpm->rule_info[i - 1].first_rule];
+ lpm->rule_info[i - 1].first_rule++;
+ }
+ }
+
+ /* Add the new rule. */
+ lpm->rules_tbl[rule_index].ip = ip_masked;
+ lpm->rules_tbl[rule_index].next_hop = next_hop;
+
+ /* Increment the used rules counter for this rule group. */
+ lpm->rule_info[depth - 1].used_rules++;
+
+ return rule_index;
+}
+
/*
* Delete a rule from the rule table.
* NOTE: Valid range for depth parameter is 1 .. 32 inclusive.
*/
+
static inline void
rule_delete(struct rte_lpm *lpm, int32_t rule_index, uint8_t depth)
{
@@ -353,6 +561,27 @@ rule_delete(struct rte_lpm *lpm, int32_t rule_index, uint8_t depth)
lpm->rule_info[depth - 1].used_rules--;
}
+static inline void
+rule_delete_extend(struct rte_lpm_extend *lpm, int32_t rule_index, uint8_t depth)
+{
+ int i;
+
+ VERIFY_DEPTH(depth);
+
+ lpm->rules_tbl[rule_index] = lpm->rules_tbl[lpm->rule_info[depth - 1].first_rule
+ + lpm->rule_info[depth - 1].used_rules - 1];
+
+ for (i = depth; i < RTE_LPM_MAX_DEPTH; i++) {
+ if (lpm->rule_info[i].used_rules > 0) {
+ lpm->rules_tbl[lpm->rule_info[i].first_rule - 1] =
+ lpm->rules_tbl[lpm->rule_info[i].first_rule + lpm->rule_info[i].used_rules - 1];
+ lpm->rule_info[i].first_rule--;
+ }
+ }
+
+ lpm->rule_info[depth - 1].used_rules--;
+}
+
/*
* Finds a rule in rule table.
* NOTE: Valid range for depth parameter is 1 .. 32 inclusive.
@@ -378,6 +607,27 @@ rule_find(struct rte_lpm *lpm, uint32_t ip_masked, uint8_t depth)
return -EINVAL;
}
+static inline int32_t
+rule_find_extend(struct rte_lpm_extend *lpm, uint32_t ip_masked, uint8_t depth)
+{
+ uint32_t rule_gindex, last_rule, rule_index;
+
+ VERIFY_DEPTH(depth);
+
+ rule_gindex = lpm->rule_info[depth - 1].first_rule;
+ last_rule = rule_gindex + lpm->rule_info[depth - 1].used_rules;
+
+ /* Scan used rules at given depth to find rule. */
+ for (rule_index = rule_gindex; rule_index < last_rule; rule_index++) {
+ /* If rule is found return the rule index. */
+ if (lpm->rules_tbl[rule_index].ip == ip_masked)
+ return rule_index;
+ }
+
+ /* If rule is not found return -EINVAL. */
+ return -EINVAL;
+}
+
/*
* Find, clean and allocate a tbl8.
*/
@@ -409,6 +659,33 @@ tbl8_alloc(struct rte_lpm_tbl8_entry *tbl8)
return -ENOSPC;
}
+static inline int32_t
+tbl8_alloc_extend(union rte_lpm_tbl8_entry_extend *tbl8)
+{
+ uint32_t tbl8_gindex; /* tbl8 group index. */
+ union rte_lpm_tbl8_entry_extend *tbl8_entry;
+
+ /* Scan through tbl8 to find a free (i.e. INVALID) tbl8 group. */
+ for (tbl8_gindex = 0; tbl8_gindex < RTE_LPM_TBL8_NUM_GROUPS;
+ tbl8_gindex++) {
+ tbl8_entry = &tbl8[tbl8_gindex * RTE_LPM_TBL8_GROUP_NUM_ENTRIES];
+ /* If a free tbl8 group is found clean it and set as VALID. */
+ if (!tbl8_entry->valid_group) {
+ memset(&tbl8_entry[0], 0,
+ RTE_LPM_TBL8_GROUP_NUM_ENTRIES *
+ sizeof(tbl8_entry[0]));
+
+ tbl8_entry->valid_group = VALID;
+
+ /* Return group index for allocated tbl8 group. */
+ return tbl8_gindex;
+ }
+ }
+
+ /* If there are no tbl8 groups free then return error. */
+ return -ENOSPC;
+}
+
static inline void
tbl8_free(struct rte_lpm_tbl8_entry *tbl8, uint32_t tbl8_group_start)
{
@@ -416,6 +693,13 @@ tbl8_free(struct rte_lpm_tbl8_entry *tbl8, uint32_t tbl8_group_start)
tbl8[tbl8_group_start].valid_group = INVALID;
}
+static inline void
+tbl8_free_extend(union rte_lpm_tbl8_entry_extend *tbl8, uint32_t tbl8_group_start)
+{
+ /* Set tbl8 group invalid*/
+ tbl8[tbl8_group_start].valid_group = INVALID;
+}
+
static inline int32_t
add_depth_small(struct rte_lpm *lpm, uint32_t ip, uint8_t depth,
uint8_t next_hop)
@@ -485,12 +769,77 @@ add_depth_small(struct rte_lpm *lpm, uint32_t ip, uint8_t depth,
}
static inline int32_t
-add_depth_big(struct rte_lpm *lpm, uint32_t ip_masked, uint8_t depth,
- uint8_t next_hop)
+add_depth_small_extend(struct rte_lpm_extend *lpm, uint32_t ip, uint8_t depth,
+ uint32_t next_hop)
{
- uint32_t tbl24_index;
- int32_t tbl8_group_index, tbl8_group_start, tbl8_group_end, tbl8_index,
- tbl8_range, i;
+ uint32_t tbl24_index, tbl24_range, tbl8_index, tbl8_group_end, i, j;
+
+ /* Calculate the index into Table24. */
+ tbl24_index = ip >> 8;
+ tbl24_range = depth_to_range(depth);
+
+ for (i = tbl24_index; i < (tbl24_index + tbl24_range); i++) {
+ /*
+ * For invalid OR valid and non-extended tbl 24 entries set
+ * entry.
+ */
+ if (!lpm->tbl24[i].valid || (lpm->tbl24[i].ext_entry == 0 &&
+ lpm->tbl24[i].depth <= depth)) {
+
+ union rte_lpm_tbl24_entry_extend new_tbl24_entry;
+ new_tbl24_entry.next_hop = next_hop;
+ new_tbl24_entry.valid = VALID;
+ new_tbl24_entry.ext_entry = 0;
+ new_tbl24_entry.depth = depth;
+
+ /* Setting tbl24 entry in one go to avoid race
+ * conditions
+ */
+ lpm->tbl24[i] = new_tbl24_entry;
+
+ continue;
+ }
+
+ if (lpm->tbl24[i].ext_entry == 1) {
+ /* If tbl24 entry is valid and extended calculate the
+ * index into tbl8.
+ */
+ tbl8_index = lpm->tbl24[i].tbl8_gindex *
+ RTE_LPM_TBL8_GROUP_NUM_ENTRIES;
+ tbl8_group_end = tbl8_index +
+ RTE_LPM_TBL8_GROUP_NUM_ENTRIES;
+
+ for (j = tbl8_index; j < tbl8_group_end; j++) {
+ if (!lpm->tbl8[j].valid ||
+ lpm->tbl8[j].depth <= depth) {
+ union rte_lpm_tbl8_entry_extend new_tbl8_entry;
+ new_tbl8_entry.valid = VALID;
+ new_tbl8_entry.valid_group = VALID;
+ new_tbl8_entry.depth = depth;
+ new_tbl8_entry.next_hop = next_hop;
+
+ /*
+ * Setting tbl8 entry in one go to avoid
+ * race conditions
+ */
+ lpm->tbl8[j] = new_tbl8_entry;
+
+ continue;
+ }
+ }
+ }
+ }
+
+ return 0;
+}
+
+static inline int32_t
+add_depth_big(struct rte_lpm *lpm, uint32_t ip_masked, uint8_t depth,
+ uint8_t next_hop)
+{
+ uint32_t tbl24_index;
+ int32_t tbl8_group_index, tbl8_group_start, tbl8_group_end, tbl8_index,
+ tbl8_range, i;
tbl24_index = (ip_masked >> 8);
tbl8_range = depth_to_range(depth);
@@ -616,11 +965,140 @@ add_depth_big(struct rte_lpm *lpm, uint32_t ip_masked, uint8_t depth,
return 0;
}
+static inline int32_t
+add_depth_big_extend(struct rte_lpm_extend *lpm, uint32_t ip_masked, uint8_t depth,
+ uint32_t next_hop)
+{
+ uint32_t tbl24_index;
+ int32_t tbl8_group_index, tbl8_group_start, tbl8_group_end, tbl8_index,
+ tbl8_range, i;
+
+ tbl24_index = (ip_masked >> 8);
+ tbl8_range = depth_to_range(depth);
+
+ if (!lpm->tbl24[tbl24_index].valid) {
+ /* Search for a free tbl8 group. */
+ tbl8_group_index = tbl8_alloc_extend(lpm->tbl8);
+
+ /* Check tbl8 allocation was successful. */
+ if (tbl8_group_index < 0) {
+ return tbl8_group_index;
+ }
+
+ /* Find index into tbl8 and range. */
+ tbl8_index = (tbl8_group_index *
+ RTE_LPM_TBL8_GROUP_NUM_ENTRIES) +
+ (ip_masked & 0xFF);
+
+ /* Set tbl8 entry. */
+ for (i = tbl8_index; i < (tbl8_index + tbl8_range); i++) {
+ lpm->tbl8[i].depth = depth;
+ lpm->tbl8[i].next_hop = next_hop;
+ lpm->tbl8[i].valid = VALID;
+ }
+
+ /*
+ * Update tbl24 entry to point to new tbl8 entry. Note: The
+ * ext_flag and tbl8_index need to be updated simultaneously,
+ * so assign whole structure in one go
+ */
+
+ union rte_lpm_tbl24_entry_extend new_tbl24_entry;
+ new_tbl24_entry.next_hop = (uint8_t)tbl8_group_index;
+ new_tbl24_entry.valid = VALID;
+ new_tbl24_entry.ext_entry = 1;
+ new_tbl24_entry.depth = 0;
+
+ lpm->tbl24[tbl24_index] = new_tbl24_entry;
+
+ }
+ /* If valid entry but not extended calculate the index into Table8. */
+ else if (lpm->tbl24[tbl24_index].ext_entry == 0) {
+ /* Search for free tbl8 group. */
+ tbl8_group_index = tbl8_alloc_extend(lpm->tbl8);
+
+ if (tbl8_group_index < 0) {
+ return tbl8_group_index;
+ }
+
+ tbl8_group_start = tbl8_group_index *
+ RTE_LPM_TBL8_GROUP_NUM_ENTRIES;
+ tbl8_group_end = tbl8_group_start +
+ RTE_LPM_TBL8_GROUP_NUM_ENTRIES;
+
+ /* Populate new tbl8 with tbl24 value. */
+ for (i = tbl8_group_start; i < tbl8_group_end; i++) {
+ lpm->tbl8[i].valid = VALID;
+ lpm->tbl8[i].depth = lpm->tbl24[tbl24_index].depth;
+ lpm->tbl8[i].next_hop =
+ lpm->tbl24[tbl24_index].next_hop;
+ }
+
+ tbl8_index = tbl8_group_start + (ip_masked & 0xFF);
+
+ /* Insert new rule into the tbl8 entry. */
+ for (i = tbl8_index; i < tbl8_index + tbl8_range; i++) {
+ if (!lpm->tbl8[i].valid ||
+ lpm->tbl8[i].depth <= depth) {
+ lpm->tbl8[i].valid = VALID;
+ lpm->tbl8[i].depth = depth;
+ lpm->tbl8[i].next_hop = next_hop;
+
+ continue;
+ }
+ }
+
+ /*
+ * Update tbl24 entry to point to new tbl8 entry. Note: The
+ * ext_flag and tbl8_index need to be updated simultaneously,
+ * so assign whole structure in one go.
+ */
+
+ union rte_lpm_tbl24_entry_extend new_tbl24_entry;
+ new_tbl24_entry.next_hop = (uint8_t)tbl8_group_index;
+ new_tbl24_entry.valid = VALID;
+ new_tbl24_entry.ext_entry = 1;
+ new_tbl24_entry.depth = 0;
+
+ lpm->tbl24[tbl24_index] = new_tbl24_entry;
+
+ } else { /*
+ * If it is valid, extended entry calculate the index into tbl8.
+ */
+ tbl8_group_index = lpm->tbl24[tbl24_index].tbl8_gindex;
+ tbl8_group_start = tbl8_group_index *
+ RTE_LPM_TBL8_GROUP_NUM_ENTRIES;
+ tbl8_index = tbl8_group_start + (ip_masked & 0xFF);
+
+ for (i = tbl8_index; i < (tbl8_index + tbl8_range); i++) {
+
+ if (!lpm->tbl8[i].valid ||
+ lpm->tbl8[i].depth <= depth) {
+ union rte_lpm_tbl8_entry_extend new_tbl8_entry;
+ new_tbl8_entry.valid = VALID;
+ new_tbl8_entry.depth = depth;
+ new_tbl8_entry.next_hop = next_hop;
+ new_tbl8_entry.valid_group = lpm->tbl8[i].valid_group;
+
+ /*
+ * Setting tbl8 entry in one go to avoid race
+ * condition
+ */
+ lpm->tbl8[i] = new_tbl8_entry;
+
+ continue;
+ }
+ }
+ }
+
+ return 0;
+}
+
/*
* Add a route
*/
int
-rte_lpm_add(struct rte_lpm *lpm, uint32_t ip, uint8_t depth,
+rte_lpm_add_v20(struct rte_lpm *lpm, uint32_t ip, uint8_t depth,
uint8_t next_hop)
{
int32_t rule_index, status = 0;
@@ -659,12 +1137,56 @@ rte_lpm_add(struct rte_lpm *lpm, uint32_t ip, uint8_t depth,
return 0;
}
+VERSION_SYMBOL(rte_lpm_add, _v20, 2.0);
+
+int
+rte_lpm_add_v22(struct rte_lpm_extend *lpm, uint32_t ip, uint8_t depth,
+ uint32_t next_hop)
+{
+ int32_t rule_index, status = 0;
+ uint32_t ip_masked;
+
+ /* Check user arguments. */
+ if ((lpm == NULL) || (depth < 1) || (depth > RTE_LPM_MAX_DEPTH))
+ return -EINVAL;
+
+ ip_masked = ip & depth_to_mask(depth);
+
+ /* Add the rule to the rule table. */
+ rule_index = rule_add_extend(lpm, ip_masked, depth, next_hop);
+
+ /* If there is no space available for the new rule, return an error. */
+ if (rule_index < 0) {
+ return rule_index;
+ }
+
+ if (depth <= MAX_DEPTH_TBL24) {
+ status = add_depth_small_extend(lpm, ip_masked, depth, next_hop);
+ } else { /* If depth > RTE_LPM_MAX_DEPTH_TBL24 */
+ status = add_depth_big_extend(lpm, ip_masked, depth, next_hop);
+
+ /*
+ * If add fails due to exhaustion of tbl8 extensions delete
+ * rule that was added to rule table.
+ */
+ if (status < 0) {
+ rule_delete_extend(lpm, rule_index, depth);
+
+ return status;
+ }
+ }
+
+ return 0;
+}
+BIND_DEFAULT_SYMBOL(rte_lpm_add, _v22, 2.2);
+MAP_STATIC_SYMBOL(int rte_lpm_add(struct rte_lpm_extend *lpm,
+ uint32_t ip, uint8_t depth, uint32_t next_hop), rte_lpm_add_v22);
/*
* Look for a rule in the high-level rules table
*/
int
-rte_lpm_is_rule_present(struct rte_lpm *lpm, uint32_t ip, uint8_t depth,
+rte_lpm_is_rule_present_v20(struct rte_lpm *lpm, uint32_t ip, uint8_t depth,
uint8_t *next_hop)
{
uint32_t ip_masked;
@@ -688,6 +1210,37 @@ uint8_t *next_hop)
/* If rule is not found return 0. */
return 0;
}
+VERSION_SYMBOL(rte_lpm_is_rule_present, _v20, 2.0);
+
+int
+rte_lpm_is_rule_present_v22(struct rte_lpm_extend *lpm, uint32_t ip, uint8_t depth,
+uint32_t *next_hop)
+{
+ uint32_t ip_masked;
+ int32_t rule_index;
+
+ /* Check user arguments. */
+ if ((lpm == NULL) ||
+ (next_hop == NULL) ||
+ (depth < 1) || (depth > RTE_LPM_MAX_DEPTH))
+ return -EINVAL;
+
+ /* Look for the rule using rule_find. */
+ ip_masked = ip & depth_to_mask(depth);
+ rule_index = rule_find_extend(lpm, ip_masked, depth);
+
+ if (rule_index >= 0) {
+ *next_hop = lpm->rules_tbl[rule_index].next_hop;
+ return 1;
+ }
+
+ /* If rule is not found return 0. */
+ return 0;
+}
+BIND_DEFAULT_SYMBOL(rte_lpm_is_rule_present, _v22, 2.2);
+MAP_STATIC_SYMBOL(int
+ rte_lpm_is_rule_present(struct rte_lpm_extend *lpm, uint32_t ip, uint8_t depth,
+ uint32_t *next_hop), rte_lpm_is_rule_present_v22);
static inline int32_t
find_previous_rule(struct rte_lpm *lpm, uint32_t ip, uint8_t depth, uint8_t *sub_rule_depth)
@@ -711,6 +1264,28 @@ find_previous_rule(struct rte_lpm *lpm, uint32_t ip, uint8_t depth, uint8_t *sub
}
static inline int32_t
+find_previous_rule_extend(struct rte_lpm_extend *lpm,
+ uint32_t ip, uint8_t depth, uint8_t *sub_rule_depth)
+{
+ int32_t rule_index;
+ uint32_t ip_masked;
+ uint8_t prev_depth;
+
+ for (prev_depth = (uint8_t)(depth - 1); prev_depth > 0; prev_depth--) {
+ ip_masked = ip & depth_to_mask(prev_depth);
+
+ rule_index = rule_find_extend(lpm, ip_masked, prev_depth);
+
+ if (rule_index >= 0) {
+ *sub_rule_depth = prev_depth;
+ return rule_index;
+ }
+ }
+
+ return -1;
+}
+
+static inline int32_t
delete_depth_small(struct rte_lpm *lpm, uint32_t ip_masked,
uint8_t depth, int32_t sub_rule_index, uint8_t sub_rule_depth)
{
@@ -805,6 +1380,96 @@ delete_depth_small(struct rte_lpm *lpm, uint32_t ip_masked,
return 0;
}
+static inline int32_t
+delete_depth_small_extend(struct rte_lpm_extend *lpm, uint32_t ip_masked,
+ uint8_t depth, int32_t sub_rule_index, uint8_t sub_rule_depth)
+{
+ uint32_t tbl24_range, tbl24_index, tbl8_group_index, tbl8_index, i, j;
+
+ /* Calculate the range and index into Table24. */
+ tbl24_range = depth_to_range(depth);
+ tbl24_index = (ip_masked >> 8);
+
+ /*
+ * Firstly check the sub_rule_index. A -1 indicates no replacement rule
+ * and a positive number indicates a sub_rule_index.
+ */
+ if (sub_rule_index < 0) {
+ /*
+ * If no replacement rule exists then invalidate entries
+ * associated with this rule.
+ */
+ for (i = tbl24_index; i < (tbl24_index + tbl24_range); i++) {
+
+ if (lpm->tbl24[i].ext_entry == 0 &&
+ lpm->tbl24[i].depth <= depth) {
+ lpm->tbl24[i].valid = INVALID;
+ } else {
+ /*
+ * If TBL24 entry is extended, then there has
+ * to be a rule with depth >= 25 in the
+ * associated TBL8 group.
+ */
+
+ tbl8_group_index = lpm->tbl24[i].tbl8_gindex;
+ tbl8_index = tbl8_group_index *
+ RTE_LPM_TBL8_GROUP_NUM_ENTRIES;
+
+ for (j = tbl8_index; j < (tbl8_index +
+ RTE_LPM_TBL8_GROUP_NUM_ENTRIES); j++) {
+
+ if (lpm->tbl8[j].depth <= depth)
+ lpm->tbl8[j].valid = INVALID;
+ }
+ }
+ }
+ } else {
+ /*
+ * If a replacement rule exists then modify entries
+ * associated with this rule.
+ */
+
+ union rte_lpm_tbl24_entry_extend new_tbl24_entry;
+ new_tbl24_entry.next_hop = lpm->rules_tbl[sub_rule_index].next_hop;
+ new_tbl24_entry.valid = VALID;
+ new_tbl24_entry.ext_entry = 0;
+ new_tbl24_entry.depth = sub_rule_depth;
+
+ union rte_lpm_tbl8_entry_extend new_tbl8_entry;
+ new_tbl8_entry.valid = VALID;
+ new_tbl8_entry.depth = sub_rule_depth;
+ new_tbl8_entry.next_hop = lpm->rules_tbl[sub_rule_index].next_hop;
+
+
+ for (i = tbl24_index; i < (tbl24_index + tbl24_range); i++) {
+
+ if (lpm->tbl24[i].ext_entry == 0 &&
+ lpm->tbl24[i].depth <= depth) {
+ lpm->tbl24[i] = new_tbl24_entry;
+ } else {
+ /*
+ * If TBL24 entry is extended, then there has
+ * to be a rule with depth >= 25 in the
+ * associated TBL8 group.
+ */
+
+ tbl8_group_index = lpm->tbl24[i].tbl8_gindex;
+ tbl8_index = tbl8_group_index *
+ RTE_LPM_TBL8_GROUP_NUM_ENTRIES;
+
+ for (j = tbl8_index; j < (tbl8_index +
+ RTE_LPM_TBL8_GROUP_NUM_ENTRIES); j++) {
+
+ if (lpm->tbl8[j].depth <= depth)
+ lpm->tbl8[j] = new_tbl8_entry;
+ }
+ }
+ }
+ }
+
+ return 0;
+}
+
/*
* Checks if table 8 group can be recycled.
*
@@ -813,6 +1478,7 @@ delete_depth_small(struct rte_lpm *lpm, uint32_t ip_masked,
* Return of value > -1 means tbl8 is in use but has all the same values and
* thus can be recycled
*/
+
static inline int32_t
tbl8_recycle_check(struct rte_lpm_tbl8_entry *tbl8, uint32_t tbl8_group_start)
{
@@ -860,6 +1526,53 @@ tbl8_recycle_check(struct rte_lpm_tbl8_entry *tbl8, uint32_t tbl8_group_start)
}
static inline int32_t
+tbl8_recycle_check_extend(union rte_lpm_tbl8_entry_extend *tbl8, uint32_t tbl8_group_start)
+{
+ uint32_t tbl8_group_end, i;
+
+ tbl8_group_end = tbl8_group_start + RTE_LPM_TBL8_GROUP_NUM_ENTRIES;
+
+ /*
+ * Check the first entry of the given tbl8. If it is invalid we know
+ * this tbl8 does not contain any rule with a depth < RTE_LPM_MAX_DEPTH
+ * (As they would affect all entries in a tbl8) and thus this table
+ * can not be recycled.
+ */
+ if (tbl8[tbl8_group_start].valid) {
+ /*
+ * If first entry is valid check if the depth is less than 24
+ * and if so check the rest of the entries to verify that they
+ * are all of this depth.
+ */
+ if (tbl8[tbl8_group_start].depth < MAX_DEPTH_TBL24) {
+ for (i = (tbl8_group_start + 1); i < tbl8_group_end;
+ i++) {
+
+ if (tbl8[i].depth !=
+ tbl8[tbl8_group_start].depth) {
+
+ return -EEXIST;
+ }
+ }
+ /* If all entries are the same return the tb8 index */
+ return tbl8_group_start;
+ }
+
+ return -EEXIST;
+ }
+ /*
+ * If the first entry is invalid check if the rest of the entries in
+ * the tbl8 are invalid.
+ */
+ for (i = (tbl8_group_start + 1); i < tbl8_group_end; i++) {
+ if (tbl8[i].valid)
+ return -EEXIST;
+ }
+ /* If no valid entries are found then return -EINVAL. */
+ return -EINVAL;
+}
+
+static inline int32_t
delete_depth_big(struct rte_lpm *lpm, uint32_t ip_masked,
uint8_t depth, int32_t sub_rule_index, uint8_t sub_rule_depth)
{
@@ -938,11 +1651,86 @@ delete_depth_big(struct rte_lpm *lpm, uint32_t ip_masked,
return 0;
}
+static inline int32_t
+delete_depth_big_extend(struct rte_lpm_extend *lpm, uint32_t ip_masked,
+ uint8_t depth, int32_t sub_rule_index, uint8_t sub_rule_depth)
+{
+ uint32_t tbl24_index, tbl8_group_index, tbl8_group_start, tbl8_index,
+ tbl8_range, i;
+ int32_t tbl8_recycle_index;
+
+ /*
+ * Calculate the index into tbl24 and range. Note: All depths larger
+ * than MAX_DEPTH_TBL24 are associated with only one tbl24 entry.
+ */
+ tbl24_index = ip_masked >> 8;
+
+ /* Calculate the index into tbl8 and range. */
+ tbl8_group_index = lpm->tbl24[tbl24_index].tbl8_gindex;
+ tbl8_group_start = tbl8_group_index * RTE_LPM_TBL8_GROUP_NUM_ENTRIES;
+ tbl8_index = tbl8_group_start + (ip_masked & 0xFF);
+ tbl8_range = depth_to_range(depth);
+
+ if (sub_rule_index < 0) {
+ /*
+ * Loop through the range of entries on tbl8 for which the
+ * rule_to_delete must be removed or modified.
+ */
+ for (i = tbl8_index; i < (tbl8_index + tbl8_range); i++) {
+ if (lpm->tbl8[i].depth <= depth)
+ lpm->tbl8[i].valid = INVALID;
+ }
+ } else {
+ /* Set new tbl8 entry. */
+ union rte_lpm_tbl8_entry_extend new_tbl8_entry;
+ new_tbl8_entry.valid = VALID;
+ new_tbl8_entry.depth = sub_rule_depth;
+ new_tbl8_entry.valid_group = lpm->tbl8[tbl8_group_start].valid_group;
+ new_tbl8_entry.next_hop = lpm->rules_tbl[sub_rule_index].next_hop;
+
+ /*
+ * Loop through the range of entries on tbl8 for which the
+ * rule_to_delete must be modified.
+ */
+ for (i = tbl8_index; i < (tbl8_index + tbl8_range); i++) {
+ if (lpm->tbl8[i].depth <= depth)
+ lpm->tbl8[i] = new_tbl8_entry;
+ }
+ }
+
+ /*
+ * Check if there are any valid entries in this tbl8 group. If all
+ * tbl8 entries are invalid we can free the tbl8 and invalidate the
+ * associated tbl24 entry.
+ */
+
+ tbl8_recycle_index = tbl8_recycle_check_extend(lpm->tbl8, tbl8_group_start);
+
+ if (tbl8_recycle_index == -EINVAL) {
+ /* Set tbl24 before freeing tbl8 to avoid race condition. */
+ lpm->tbl24[tbl24_index].valid = 0;
+ tbl8_free_extend(lpm->tbl8, tbl8_group_start);
+ } else if (tbl8_recycle_index > -1) {
+ /* Update tbl24 entry. */
+ union rte_lpm_tbl24_entry_extend new_tbl24_entry;
+ new_tbl24_entry.next_hop = lpm->tbl8[tbl8_recycle_index].next_hop;
+ new_tbl24_entry.valid = VALID;
+ new_tbl24_entry.ext_entry = 0;
+ new_tbl24_entry.depth = lpm->tbl8[tbl8_recycle_index].depth;
+
+ /* Set tbl24 before freeing tbl8 to avoid race condition. */
+ lpm->tbl24[tbl24_index] = new_tbl24_entry;
+ tbl8_free_extend(lpm->tbl8, tbl8_group_start);
+ }
+
+ return 0;
+}
+
/*
* Deletes a rule
*/
int
-rte_lpm_delete(struct rte_lpm *lpm, uint32_t ip, uint8_t depth)
+rte_lpm_delete_v20(struct rte_lpm *lpm, uint32_t ip, uint8_t depth)
{
int32_t rule_to_delete_index, sub_rule_index;
uint32_t ip_masked;
@@ -993,12 +1781,85 @@ rte_lpm_delete(struct rte_lpm *lpm, uint32_t ip, uint8_t depth)
return delete_depth_big(lpm, ip_masked, depth, sub_rule_index, sub_rule_depth);
}
}
+VERSION_SYMBOL(rte_lpm_delete, _v20, 2.0);
+
+int
+rte_lpm_delete_v22(struct rte_lpm_extend *lpm, uint32_t ip, uint8_t depth)
+{
+ int32_t rule_to_delete_index, sub_rule_index;
+ uint32_t ip_masked;
+ uint8_t sub_rule_depth;
+ /*
+ * Check input arguments. Note: IP must be a positive integer of 32
+ * bits in length therefore it need not be checked.
+ */
+ if ((lpm == NULL) || (depth < 1) || (depth > RTE_LPM_MAX_DEPTH)) {
+ return -EINVAL;
+ }
+
+ ip_masked = ip & depth_to_mask(depth);
+
+ /*
+ * Find the index of the input rule, that needs to be deleted, in the
+ * rule table.
+ */
+ rule_to_delete_index = rule_find_extend(lpm, ip_masked, depth);
+
+ /*
+ * Check if rule_to_delete_index was found. If no rule was found the
+ * function rule_find returns -EINVAL.
+ */
+ if (rule_to_delete_index < 0)
+ return -EINVAL;
+
+ /* Delete the rule from the rule table. */
+ rule_delete_extend(lpm, rule_to_delete_index, depth);
+
+ /*
+ * Find rule to replace the rule_to_delete. If there is no rule to
+ * replace the rule_to_delete we return -1 and invalidate the table
+ * entries associated with this rule.
+ */
+ sub_rule_depth = 0;
+ sub_rule_index = find_previous_rule_extend(lpm, ip, depth, &sub_rule_depth);
+
+ /*
+ * If the input depth value is less than 25 use function
+ * delete_depth_small otherwise use delete_depth_big.
+ */
+ if (depth <= MAX_DEPTH_TBL24) {
+ return delete_depth_small_extend(lpm, ip_masked, depth,
+ sub_rule_index, sub_rule_depth);
+ } else { /* If depth > MAX_DEPTH_TBL24 */
+ return delete_depth_big_extend(lpm, ip_masked, depth, sub_rule_index, sub_rule_depth);
+ }
+}
+BIND_DEFAULT_SYMBOL(rte_lpm_delete, _v22, 2.2);
+MAP_STATIC_SYMBOL(int rte_lpm_delete(struct rte_lpm_extend *lpm, uint32_t ip, uint8_t depth), rte_lpm_delete_v22);
+
/*
* Delete all rules from the LPM table.
*/
void
-rte_lpm_delete_all(struct rte_lpm *lpm)
+rte_lpm_delete_all_v20(struct rte_lpm *lpm)
+{
+ /* Zero rule information. */
+ memset(lpm->rule_info, 0, sizeof(lpm->rule_info));
+
+ /* Zero tbl24. */
+ memset(lpm->tbl24, 0, sizeof(lpm->tbl24));
+
+ /* Zero tbl8. */
+ memset(lpm->tbl8, 0, sizeof(lpm->tbl8));
+
+	/* Delete all rules from the rules table. */
+ memset(lpm->rules_tbl, 0, sizeof(lpm->rules_tbl[0]) * lpm->max_rules);
+}
+VERSION_SYMBOL(rte_lpm_delete_all, _v20, 2.0);
+
+void
+rte_lpm_delete_all_v22(struct rte_lpm_extend *lpm)
{
/* Zero rule information. */
memset(lpm->rule_info, 0, sizeof(lpm->rule_info));
@@ -1012,3 +1873,5 @@ rte_lpm_delete_all(struct rte_lpm *lpm)
/* Delete all rules form the rules table. */
memset(lpm->rules_tbl, 0, sizeof(lpm->rules_tbl[0]) * lpm->max_rules);
}
+BIND_DEFAULT_SYMBOL(rte_lpm_delete_all, _v22, 2.2);
+MAP_STATIC_SYMBOL(void rte_lpm_delete_all(struct rte_lpm_extend *lpm), rte_lpm_delete_all_v22);
diff --git a/lib/librte_lpm/rte_lpm.h b/lib/librte_lpm/rte_lpm.h
index c299ce2..5ecb95b 100644
--- a/lib/librte_lpm/rte_lpm.h
+++ b/lib/librte_lpm/rte_lpm.h
@@ -49,6 +49,8 @@
#include <rte_common.h>
#include <rte_vect.h>
+#include <rte_compat.h>
+
#ifdef __cplusplus
extern "C" {
#endif
@@ -128,12 +130,76 @@ struct rte_lpm_tbl8_entry {
};
#endif
+/** @internal bitmask with valid and ext_entry/valid_group fields set */
+#define RTE_LPM_VALID_EXT_ENTRY_BITMASK_EXTEND 0x03000000
+
+/** Bitmask used to indicate successful lookup */
+#define RTE_LPM_LOOKUP_SUCCESS_EXTEND 0x01000000
+
+/** Bitmask used to get 24-bits value next hop from uint32_t **/
+#define RTE_LPM_NEXT_HOP_MASK 0x00ffffff
+
+#if RTE_BYTE_ORDER == RTE_LITTLE_ENDIAN
+
+/** @internal Tbl24 entry structure. */
+union rte_lpm_tbl24_entry_extend {
+uint32_t entry;
+struct{
+ uint32_t next_hop :24;/**< next hop. */
+ uint32_t valid :1; /**< Validation flag. */
+ uint32_t ext_entry :1; /**< External entry. */
+ uint32_t depth :6; /**< Rule depth. */
+ };
+};
+/* Store group index (i.e. gindex)into tbl8. */
+#define tbl8_gindex next_hop
+
+
+/** @internal Tbl8 entry structure. */
+union rte_lpm_tbl8_entry_extend {
+uint32_t entry;
+struct {
+ uint32_t next_hop :24;/**< next hop. */
+ uint32_t valid :1; /**< Validation flag. */
+ uint32_t valid_group :1; /**< External entry. */
+ uint32_t depth :6; /**< Rule depth. */
+ };
+};
+#else
+union rte_lpm_tbl24_entry_extend {
+struct {
+ uint32_t depth :6;
+ uint32_t ext_entry :1;
+ uint32_t valid :1;
+ uint32_t next_hop :24;
+ };
+uint32_t entry;
+};
+#define tbl8_gindex next_hop
+
+union rte_lpm_tbl8_entry_extend {
+struct {
+ uint32_t depth :6;
+ uint32_t valid_group :1;
+ uint32_t valid :1;
+ uint32_t next_hop :24;
+ };
+uint32_t entry;
+};
+#endif
+
/** @internal Rule structure. */
struct rte_lpm_rule {
uint32_t ip; /**< Rule IP address. */
uint8_t next_hop; /**< Rule next hop. */
};
+/** @internal Rule (extend) structure. */
+struct rte_lpm_rule_extend {
+ uint32_t ip; /**< Rule IP address. */
+ uint32_t next_hop; /**< Rule next hop. */
+};
+
/** @internal Contains metadata about the rules table. */
struct rte_lpm_rule_info {
uint32_t used_rules; /**< Used rules so far. */
@@ -156,6 +222,22 @@ struct rte_lpm {
__rte_cache_aligned; /**< LPM rules. */
};
+/** @internal LPM (extend) structure. */
+struct rte_lpm_extend {
+ /* LPM metadata. */
+ char name[RTE_LPM_NAMESIZE]; /**< Name of the lpm. */
+ uint32_t max_rules; /**< Max. balanced rules per lpm. */
+ struct rte_lpm_rule_info rule_info[RTE_LPM_MAX_DEPTH]; /**< Rule info table. */
+
+ /* LPM Tables. */
+ union rte_lpm_tbl24_entry_extend tbl24[RTE_LPM_TBL24_NUM_ENTRIES] \
+ __rte_cache_aligned; /**< LPM tbl24 table. */
+ union rte_lpm_tbl8_entry_extend tbl8[RTE_LPM_TBL8_NUM_ENTRIES] \
+ __rte_cache_aligned; /**< LPM tbl8 table. */
+ struct rte_lpm_rule_extend rules_tbl[0] \
+ __rte_cache_aligned; /**< LPM rules. */
+};
+
/**
* Create an LPM object.
*
@@ -177,8 +259,12 @@ struct rte_lpm {
* - EEXIST - a memzone with the same name already exists
* - ENOMEM - no appropriate memory area found in which to create memzone
*/
-struct rte_lpm *
+struct rte_lpm_extend *
rte_lpm_create(const char *name, int socket_id, int max_rules, int flags);
+struct rte_lpm *
+rte_lpm_create_v20(const char *name, int socket_id, int max_rules, int flags);
+struct rte_lpm_extend *
+rte_lpm_create_v22(const char *name, int socket_id, int max_rules, int flags);
/**
* Find an existing LPM object and return a pointer to it.
@@ -190,8 +276,12 @@ rte_lpm_create(const char *name, int socket_id, int max_rules, int flags);
* set appropriately. Possible rte_errno values include:
* - ENOENT - required entry not available to return.
*/
-struct rte_lpm *
+struct rte_lpm_extend *
rte_lpm_find_existing(const char *name);
+struct rte_lpm *
+rte_lpm_find_existing_v20(const char *name);
+struct rte_lpm_extend *
+rte_lpm_find_existing_v22(const char *name);
/**
* Free an LPM object.
@@ -202,7 +292,11 @@ rte_lpm_find_existing(const char *name);
* None
*/
void
-rte_lpm_free(struct rte_lpm *lpm);
+rte_lpm_free(struct rte_lpm_extend *lpm);
+void
+rte_lpm_free_v20(struct rte_lpm *lpm);
+void
+rte_lpm_free_v22(struct rte_lpm_extend *lpm);
/**
* Add a rule to the LPM table.
@@ -219,7 +313,11 @@ rte_lpm_free(struct rte_lpm *lpm);
* 0 on success, negative value otherwise
*/
int
-rte_lpm_add(struct rte_lpm *lpm, uint32_t ip, uint8_t depth, uint8_t next_hop);
+rte_lpm_add(struct rte_lpm_extend *lpm, uint32_t ip, uint8_t depth, uint32_t next_hop);
+int
+rte_lpm_add_v20(struct rte_lpm *lpm, uint32_t ip, uint8_t depth, uint8_t next_hop);
+int
+rte_lpm_add_v22(struct rte_lpm_extend *lpm, uint32_t ip, uint8_t depth, uint32_t next_hop);
/**
* Check if a rule is present in the LPM table,
@@ -237,8 +335,14 @@ rte_lpm_add(struct rte_lpm *lpm, uint32_t ip, uint8_t depth, uint8_t next_hop);
* 1 if the rule exists, 0 if it does not, a negative value on failure
*/
int
-rte_lpm_is_rule_present(struct rte_lpm *lpm, uint32_t ip, uint8_t depth,
+rte_lpm_is_rule_present(struct rte_lpm_extend *lpm, uint32_t ip, uint8_t depth,
+uint32_t *next_hop);
+int
+rte_lpm_is_rule_present_v20(struct rte_lpm *lpm, uint32_t ip, uint8_t depth,
uint8_t *next_hop);
+int
+rte_lpm_is_rule_present_v22(struct rte_lpm_extend *lpm, uint32_t ip, uint8_t depth,
+uint32_t *next_hop);
/**
* Delete a rule from the LPM table.
@@ -253,7 +357,11 @@ uint8_t *next_hop);
* 0 on success, negative value otherwise
*/
int
-rte_lpm_delete(struct rte_lpm *lpm, uint32_t ip, uint8_t depth);
+rte_lpm_delete(struct rte_lpm_extend *lpm, uint32_t ip, uint8_t depth);
+int
+rte_lpm_delete_v20(struct rte_lpm *lpm, uint32_t ip, uint8_t depth);
+int
+rte_lpm_delete_v22(struct rte_lpm_extend *lpm, uint32_t ip, uint8_t depth);
/**
* Delete all rules from the LPM table.
@@ -262,7 +370,11 @@ rte_lpm_delete(struct rte_lpm *lpm, uint32_t ip, uint8_t depth);
* LPM object handle
*/
void
-rte_lpm_delete_all(struct rte_lpm *lpm);
+rte_lpm_delete_all(struct rte_lpm_extend *lpm);
+void
+rte_lpm_delete_all_v20(struct rte_lpm *lpm);
+void
+rte_lpm_delete_all_v22(struct rte_lpm_extend *lpm);
/**
* Lookup an IP into the LPM table.
@@ -276,6 +388,7 @@ rte_lpm_delete_all(struct rte_lpm *lpm);
* @return
* -EINVAL for incorrect arguments, -ENOENT on lookup miss, 0 on lookup hit
*/
+
static inline int
rte_lpm_lookup(struct rte_lpm *lpm, uint32_t ip, uint8_t *next_hop)
{
@@ -302,6 +415,32 @@ rte_lpm_lookup(struct rte_lpm *lpm, uint32_t ip, uint8_t *next_hop)
return (tbl_entry & RTE_LPM_LOOKUP_SUCCESS) ? 0 : -ENOENT;
}
+static inline int
+rte_lpm_lookup_extend(struct rte_lpm_extend *lpm, uint32_t ip, uint32_t *next_hop)
+{
+ unsigned tbl24_index = (ip >> 8);
+ uint32_t tbl_entry;
+
+ /* DEBUG: Check user input arguments. */
+ RTE_LPM_RETURN_IF_TRUE(((lpm == NULL) || (next_hop == NULL)), -EINVAL);
+
+ /* Copy tbl24 entry */
+ tbl_entry = lpm->tbl24[tbl24_index].entry;
+
+ /* Copy tbl8 entry (only if needed) */
+ if (unlikely((tbl_entry & RTE_LPM_VALID_EXT_ENTRY_BITMASK_EXTEND) ==
+ RTE_LPM_VALID_EXT_ENTRY_BITMASK_EXTEND)) {
+
+ unsigned tbl8_index = (uint8_t)ip +
+ ((uint8_t)tbl_entry * RTE_LPM_TBL8_GROUP_NUM_ENTRIES);
+
+ tbl_entry = lpm->tbl8[tbl8_index].entry;
+ }
+
+ *next_hop = tbl_entry & RTE_LPM_NEXT_HOP_MASK;
+ return (tbl_entry & RTE_LPM_LOOKUP_SUCCESS_EXTEND) ? 0 : -ENOENT;
+}
+
/**
* Lookup multiple IP addresses in an LPM table. This may be implemented as a
* macro, so the address of the function should not be used.
@@ -312,9 +451,9 @@ rte_lpm_lookup(struct rte_lpm *lpm, uint32_t ip, uint8_t *next_hop)
* Array of IPs to be looked up in the LPM table
* @param next_hops
* Next hop of the most specific rule found for IP (valid on lookup hit only).
- * This is an array of two byte values. The most significant byte in each
+ * This is an array of four byte values. The most significant byte in each
* value says whether the lookup was successful (bitmask
- * RTE_LPM_LOOKUP_SUCCESS is set). The least significant byte is the
+ * RTE_LPM_LOOKUP_SUCCESS is set). The three least significant bytes are the
* actual next hop.
* @param n
* Number of elements in ips (and next_hops) array to lookup. This should be a
@@ -322,8 +461,11 @@ rte_lpm_lookup(struct rte_lpm *lpm, uint32_t ip, uint8_t *next_hop)
* @return
* -EINVAL for incorrect arguments, otherwise 0
*/
+
#define rte_lpm_lookup_bulk(lpm, ips, next_hops, n) \
rte_lpm_lookup_bulk_func(lpm, ips, next_hops, n)
+#define rte_lpm_lookup_bulk_extend(lpm, ips, next_hops, n) \
+ rte_lpm_lookup_bulk_func_extend(lpm, ips, next_hops, n)
static inline int
rte_lpm_lookup_bulk_func(const struct rte_lpm *lpm, const uint32_t * ips,
@@ -358,8 +500,42 @@ rte_lpm_lookup_bulk_func(const struct rte_lpm *lpm, const uint32_t * ips,
return 0;
}
+static inline int
+rte_lpm_lookup_bulk_func_extend(const struct rte_lpm_extend *lpm, const uint32_t *ips,
+ uint32_t *next_hops, const unsigned n)
+{
+ unsigned i;
+ unsigned tbl24_indexes[n];
+
+ /* DEBUG: Check user input arguments. */
+ RTE_LPM_RETURN_IF_TRUE(((lpm == NULL) || (ips == NULL) ||
+ (next_hops == NULL)), -EINVAL);
+
+ for (i = 0; i < n; i++) {
+ tbl24_indexes[i] = ips[i] >> 8;
+ }
+
+ for (i = 0; i < n; i++) {
+ /* Simply copy tbl24 entry to output */
+ next_hops[i] = lpm->tbl24[tbl24_indexes[i]].entry;
+
+ /* Overwrite output with tbl8 entry if needed */
+ if (unlikely((next_hops[i] & RTE_LPM_VALID_EXT_ENTRY_BITMASK_EXTEND) ==
+ RTE_LPM_VALID_EXT_ENTRY_BITMASK_EXTEND)) {
+
+ unsigned tbl8_index = (uint8_t)ips[i] +
+ ((uint8_t)next_hops[i] *
+ RTE_LPM_TBL8_GROUP_NUM_ENTRIES);
+
+ next_hops[i] = lpm->tbl8[tbl8_index].entry;
+ }
+ }
+ return 0;
+}
+
/* Mask four results. */
#define RTE_LPM_MASKX4_RES UINT64_C(0x00ff00ff00ff00ff)
+#define RTE_LPM_MASKX2_RES UINT64_C(0x00ffffff00ffffff)
/**
* Lookup four IP addresses in an LPM table.
@@ -370,9 +546,9 @@ rte_lpm_lookup_bulk_func(const struct rte_lpm *lpm, const uint32_t * ips,
* Four IPs to be looked up in the LPM table
* @param hop
* Next hop of the most specific rule found for IP (valid on lookup hit only).
- * This is an 4 elements array of two byte values.
- * If the lookup was succesfull for the given IP, then least significant byte
- * of the corresponding element is the actual next hop and the most
+ * This is an 4 elements array of four byte values.
+ * If the lookup was successful for the given IP, then three least significant bytes
+ * of the corresponding element are the actual next hop and the most
* significant byte is zero.
* If the lookup for the given IP failed, then corresponding element would
* contain default value, see description of then next parameter.
@@ -380,6 +556,7 @@ rte_lpm_lookup_bulk_func(const struct rte_lpm *lpm, const uint32_t * ips,
* Default value to populate into corresponding element of hop[] array,
* if lookup would fail.
*/
+
static inline void
rte_lpm_lookupx4(const struct rte_lpm *lpm, __m128i ip, uint16_t hop[4],
uint16_t defv)
@@ -473,6 +650,100 @@ rte_lpm_lookupx4(const struct rte_lpm *lpm, __m128i ip, uint16_t hop[4],
hop[3] = (tbl[3] & RTE_LPM_LOOKUP_SUCCESS) ? (uint8_t)tbl[3] : defv;
}
+static inline void
+rte_lpm_lookupx4_extend(const struct rte_lpm_extend *lpm, __m128i ip, uint32_t hop[4],
+ uint32_t defv)
+{
+ __m128i i24;
+ rte_xmm_t i8;
+ uint32_t tbl[4];
+ uint64_t idx, pt, pt2;
+
+ const __m128i mask8 =
+ _mm_set_epi32(UINT8_MAX, UINT8_MAX, UINT8_MAX, UINT8_MAX);
+
+ /*
+ * RTE_LPM_VALID_EXT_ENTRY_BITMASK_EXTEND for 2 LPM entries
+ * as one 64-bit value (0x0300000003000000).
+ */
+ const uint64_t mask_xv =
+ ((uint64_t)RTE_LPM_VALID_EXT_ENTRY_BITMASK_EXTEND |
+ (uint64_t)RTE_LPM_VALID_EXT_ENTRY_BITMASK_EXTEND << 32);
+
+ /*
+ * RTE_LPM_LOOKUP_SUCCESS_EXTEND for 2 LPM entries
+ * as one 64-bit value (0x0100000001000000).
+ */
+ const uint64_t mask_v =
+ ((uint64_t)RTE_LPM_LOOKUP_SUCCESS_EXTEND |
+ (uint64_t)RTE_LPM_LOOKUP_SUCCESS_EXTEND << 32);
+
+ /* get 4 indexes for tbl24[]. */
+ i24 = _mm_srli_epi32(ip, CHAR_BIT);
+
+ /* extract values from tbl24[] */
+ idx = _mm_cvtsi128_si64(i24);
+ i24 = _mm_srli_si128(i24, sizeof(uint64_t));
+
+ tbl[0] = lpm->tbl24[(uint32_t)idx].entry;
+ tbl[1] = lpm->tbl24[idx >> 32].entry;
+
+ idx = _mm_cvtsi128_si64(i24);
+
+ tbl[2] = lpm->tbl24[(uint32_t)idx].entry;
+ tbl[3] = lpm->tbl24[idx >> 32].entry;
+
+ /* get 4 indexes for tbl8[]. */
+ i8.x = _mm_and_si128(ip, mask8);
+
+ pt = (uint64_t)tbl[0] |
+ (uint64_t)tbl[1] << 32;
+ pt2 = (uint64_t)tbl[2] |
+ (uint64_t)tbl[3] << 32;
+
+ /* search successfully finished for all 4 IP addresses. */
+ if (likely((pt & mask_xv) == mask_v) &&
+ likely((pt2 & mask_xv) == mask_v)) {
+ *(uint64_t *)hop = pt & RTE_LPM_MASKX2_RES;
+ *(uint64_t *)(hop + 2) = pt2 & RTE_LPM_MASKX2_RES;
+ return;
+ }
+
+ if (unlikely((pt & RTE_LPM_VALID_EXT_ENTRY_BITMASK_EXTEND) ==
+ RTE_LPM_VALID_EXT_ENTRY_BITMASK_EXTEND)) {
+ i8.u32[0] = i8.u32[0] +
+ (uint8_t)tbl[0] * RTE_LPM_TBL8_GROUP_NUM_ENTRIES;
+ tbl[0] = lpm->tbl8[i8.u32[0]].entry;
+ }
+ if (unlikely((pt >> 32 & RTE_LPM_VALID_EXT_ENTRY_BITMASK_EXTEND) ==
+ RTE_LPM_VALID_EXT_ENTRY_BITMASK_EXTEND)) {
+ i8.u32[1] = i8.u32[1] +
+ (uint8_t)tbl[1] * RTE_LPM_TBL8_GROUP_NUM_ENTRIES;
+ tbl[1] = lpm->tbl8[i8.u32[1]].entry;
+ }
+ if (unlikely((pt2 & RTE_LPM_VALID_EXT_ENTRY_BITMASK_EXTEND) ==
+ RTE_LPM_VALID_EXT_ENTRY_BITMASK_EXTEND)) {
+ i8.u32[2] = i8.u32[2] +
+ (uint8_t)tbl[2] * RTE_LPM_TBL8_GROUP_NUM_ENTRIES;
+ tbl[2] = lpm->tbl8[i8.u32[2]].entry;
+ }
+ if (unlikely((pt2 >> 32 & RTE_LPM_VALID_EXT_ENTRY_BITMASK_EXTEND) ==
+ RTE_LPM_VALID_EXT_ENTRY_BITMASK_EXTEND)) {
+ i8.u32[3] = i8.u32[3] +
+ (uint8_t)tbl[3] * RTE_LPM_TBL8_GROUP_NUM_ENTRIES;
+ tbl[3] = lpm->tbl8[i8.u32[3]].entry;
+ }
+
+ hop[0] = (tbl[0] & RTE_LPM_LOOKUP_SUCCESS_EXTEND)
+ ? tbl[0] & RTE_LPM_NEXT_HOP_MASK : defv;
+ hop[1] = (tbl[1] & RTE_LPM_LOOKUP_SUCCESS_EXTEND)
+ ? tbl[1] & RTE_LPM_NEXT_HOP_MASK : defv;
+ hop[2] = (tbl[2] & RTE_LPM_LOOKUP_SUCCESS_EXTEND)
+ ? tbl[2] & RTE_LPM_NEXT_HOP_MASK : defv;
+ hop[3] = (tbl[3] & RTE_LPM_LOOKUP_SUCCESS_EXTEND)
+ ? tbl[3] & RTE_LPM_NEXT_HOP_MASK : defv;
+}
+
#ifdef __cplusplus
}
#endif
diff --git a/lib/librte_lpm/rte_lpm_version.map b/lib/librte_lpm/rte_lpm_version.map
index 70e1c05..6ac8d15 100644
--- a/lib/librte_lpm/rte_lpm_version.map
+++ b/lib/librte_lpm/rte_lpm_version.map
@@ -1,23 +1,42 @@
DPDK_2.0 {
- global:
+ global:
+ rte_lpm6_add;
+ rte_lpm6_create;
+ rte_lpm6_delete;
+ rte_lpm6_delete_all;
+ rte_lpm6_delete_bulk_func;
+ rte_lpm6_find_existing;
+ rte_lpm6_free;
+ rte_lpm6_is_rule_present;
+ rte_lpm6_lookup;
+ rte_lpm6_lookup_bulk_func;
- rte_lpm_add;
- rte_lpm_create;
- rte_lpm_delete;
- rte_lpm_delete_all;
- rte_lpm_find_existing;
- rte_lpm_free;
- rte_lpm_is_rule_present;
- rte_lpm6_add;
- rte_lpm6_create;
- rte_lpm6_delete;
- rte_lpm6_delete_all;
- rte_lpm6_delete_bulk_func;
- rte_lpm6_find_existing;
- rte_lpm6_free;
- rte_lpm6_is_rule_present;
- rte_lpm6_lookup;
- rte_lpm6_lookup_bulk_func;
-
- local: *;
+ local: *;
};
+
+DPDK_2.2 {
+ global:
+ rte_lpm_add;
+ rte_lpm_is_rule_present;
+ rte_lpm_create;
+ rte_lpm_delete;
+ rte_lpm_delete_all;
+ rte_lpm_find_existing;
+ rte_lpm_free;
+ local:
+ rule_add_extend;
+ rule_delete_extend;
+ rule_find_extend;
+ tbl8_alloc_extend;
+ tbl8_free_extend;
+ add_depth_small_extend;
+ add_depth_big_extend;
+ find_previous_rule_extend;
+ delete_depth_small_extend;
+ tbl8_recycle_check_extend;
+ delete_depth_big_extend;
+ rte_lpm_lookup_extend;
+ rte_lpm_lookup_bulk_func_extend;
+ rte_lpm_lookupx4_extend;
+
+} DPDK_2.0;
diff --git a/lib/librte_table/rte_table_lpm.c b/lib/librte_table/rte_table_lpm.c
index 849d899..ba55319 100644
--- a/lib/librte_table/rte_table_lpm.c
+++ b/lib/librte_table/rte_table_lpm.c
@@ -70,7 +70,7 @@ struct rte_table_lpm {
uint32_t offset;
/* Handle to low-level LPM table */
- struct rte_lpm *lpm;
+ struct rte_lpm_extend *lpm;
/* Next Hop Table (NHT) */
uint32_t nht_users[RTE_TABLE_LPM_MAX_NEXT_HOPS];
@@ -202,7 +202,7 @@ rte_table_lpm_entry_add(
struct rte_table_lpm_key *ip_prefix = (struct rte_table_lpm_key *) key;
uint32_t nht_pos, nht_pos0_valid;
int status;
- uint8_t nht_pos0 = 0;
+ uint32_t nht_pos0 = 0;
/* Check input parameters */
if (lpm == NULL) {
@@ -268,7 +268,7 @@ rte_table_lpm_entry_delete(
{
struct rte_table_lpm *lpm = (struct rte_table_lpm *) table;
struct rte_table_lpm_key *ip_prefix = (struct rte_table_lpm_key *) key;
- uint8_t nht_pos;
+ uint32_t nht_pos;
int status;
/* Check input parameters */
@@ -342,9 +342,9 @@ rte_table_lpm_lookup(
uint32_t ip = rte_bswap32(
RTE_MBUF_METADATA_UINT32(pkt, lpm->offset));
int status;
- uint8_t nht_pos;
+ uint32_t nht_pos;
- status = rte_lpm_lookup(lpm->lpm, ip, &nht_pos);
+ status = rte_lpm_lookup_extend(lpm->lpm, ip, &nht_pos);
if (status == 0) {
pkts_out_mask |= pkt_mask;
entries[i] = (void *) &lpm->nht[nht_pos *
--
1.9.1
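
For anyone trying the patch out, a minimal usage sketch of the widened next-hop path declared above (not part of the patch itself; it assumes rte_eal_init() has already run, and the table name, size and next-hop value are arbitrary):

#include <stdio.h>
#include <rte_memory.h>
#include <rte_ip.h>
#include <rte_lpm.h>

static void
lpm_next_hop_demo(void)
{
	struct rte_lpm_extend *lpm;
	uint32_t next_hop = 0;

	lpm = rte_lpm_create("demo_lpm", SOCKET_ID_ANY, 1024, 0);
	if (lpm == NULL)
		return;

	/* the next hop can now be any 24-bit value, e.g. 100000 */
	if (rte_lpm_add(lpm, IPv4(192, 168, 0, 0), 16, 100000) == 0 &&
	    rte_lpm_lookup_extend(lpm, IPv4(192, 168, 1, 1), &next_hop) == 0)
		printf("next hop %u\n", next_hop);

	rte_lpm_free(lpm);
}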
^ permalink raw reply [relevance 1%]
* [dpdk-dev] [PATCH v1 3/3] doc: update release 2.2 after changes in librte_lpm
2015-10-23 13:51 3% [dpdk-dev] [PATCH v1 0/3] lpm: increase number of next hops for lpm (ipv4) Michal Jastrzebski
2015-10-23 13:51 1% ` [dpdk-dev] [PATCH v1 1/3] " Michal Jastrzebski
@ 2015-10-23 13:51 5% ` Michal Jastrzebski
2015-10-23 14:21 0% ` Bruce Richardson
2015-10-23 16:20 0% ` [dpdk-dev] [PATCH v1 0/3] lpm: increase number of next hops for lpm (ipv4) Matthew Hall
2 siblings, 1 reply; 200+ results
From: Michal Jastrzebski @ 2015-10-23 13:51 UTC (permalink / raw)
To: dev
From: Michal Kobylinski <michalx.kobylinski@intel.com>
Signed-off-by: Michal Kobylinski <michalx.kobylinski@intel.com>
---
doc/guides/rel_notes/release_2_2.rst | 2 ++
1 file changed, 2 insertions(+)
diff --git a/doc/guides/rel_notes/release_2_2.rst b/doc/guides/rel_notes/release_2_2.rst
index ab1c25f..3c616ab 100644
--- a/doc/guides/rel_notes/release_2_2.rst
+++ b/doc/guides/rel_notes/release_2_2.rst
@@ -121,6 +121,8 @@ ABI Changes
* librte_cfgfile: Allow longer names and values by increasing the constants
CFG_NAME_LEN and CFG_VALUE_LEN to 64 and 256 respectively.
+
+* librte_lpm: Increase number of next hops for IPv4 to 2^24
Shared Library Versions
--
1.9.1
^ permalink raw reply [relevance 5%]
* Re: [dpdk-dev] [PATCH v1 3/3] doc: update release 2.2 after changes in librte_lpm
2015-10-23 13:51 5% ` [dpdk-dev] [PATCH v1 3/3] doc: update release 2.2 after changes in librte_lpm Michal Jastrzebski
@ 2015-10-23 14:21 0% ` Bruce Richardson
2015-10-23 14:33 0% ` Jastrzebski, MichalX K
0 siblings, 1 reply; 200+ results
From: Bruce Richardson @ 2015-10-23 14:21 UTC (permalink / raw)
To: Michal Jastrzebski; +Cc: dev
On Fri, Oct 23, 2015 at 03:51:51PM +0200, Michal Jastrzebski wrote:
> From: Michal Kobylinski <michalx.kobylinski@intel.com>
>
> Signed-off-by: Michal Kobylinski <michalx.kobylinski@intel.com>
Hi Michal,
For when you do your v2, this doc update should be included with the relevant
changes, i.e. in patch 1, not as a separate doc patch.
/Bruce
> ---
> doc/guides/rel_notes/release_2_2.rst | 2 ++
> 1 file changed, 2 insertions(+)
>
> diff --git a/doc/guides/rel_notes/release_2_2.rst b/doc/guides/rel_notes/release_2_2.rst
> index ab1c25f..3c616ab 100644
> --- a/doc/guides/rel_notes/release_2_2.rst
> +++ b/doc/guides/rel_notes/release_2_2.rst
> @@ -121,6 +121,8 @@ ABI Changes
>
> * librte_cfgfile: Allow longer names and values by increasing the constants
> CFG_NAME_LEN and CFG_VALUE_LEN to 64 and 256 respectively.
> +
> +* librte_lpm: Increase number of next hops for IPv4 to 2^24
>
>
> Shared Library Versions
> --
> 1.9.1
>
^ permalink raw reply [relevance 0%]
* Re: [dpdk-dev] [PATCH v1 3/3] doc: update release 2.2 after changes in librte_lpm
2015-10-23 14:21 0% ` Bruce Richardson
@ 2015-10-23 14:33 0% ` Jastrzebski, MichalX K
0 siblings, 0 replies; 200+ results
From: Jastrzebski, MichalX K @ 2015-10-23 14:33 UTC (permalink / raw)
To: Richardson, Bruce; +Cc: dev
> -----Original Message-----
> From: Richardson, Bruce
> Sent: Friday, October 23, 2015 4:22 PM
> To: Jastrzebski, MichalX K
> Cc: dev@dpdk.org
> Subject: Re: [dpdk-dev] [PATCH v1 3/3] doc: update release 2.2 after changes
> in librte_lpm
>
> On Fri, Oct 23, 2015 at 03:51:51PM +0200, Michal Jastrzebski wrote:
> > From: Michal Kobylinski <michalx.kobylinski@intel.com>
> >
> > Signed-off-by: Michal Kobylinski <michalx.kobylinski@intel.com>
>
> Hi Michal,
>
> for when you do your v2, this doc update should be included in with the
> relevant
> changes i.e. in patch 1, not as a separate doc patch.
>
> /Bruce
Thanks Bruce, we will do that.
The reason it is separated now is that the 1st patch is extremely big and difficult to review.
I wonder if in v2 we could move the changes related to the test application to the second patch?
Of course, this means that without applying the whole patch-set DPDK won't compile.
> > ---
> > doc/guides/rel_notes/release_2_2.rst | 2 ++
> > 1 file changed, 2 insertions(+)
> >
> > diff --git a/doc/guides/rel_notes/release_2_2.rst
> b/doc/guides/rel_notes/release_2_2.rst
> > index ab1c25f..3c616ab 100644
> > --- a/doc/guides/rel_notes/release_2_2.rst
> > +++ b/doc/guides/rel_notes/release_2_2.rst
> > @@ -121,6 +121,8 @@ ABI Changes
> >
> > * librte_cfgfile: Allow longer names and values by increasing the constants
> > CFG_NAME_LEN and CFG_VALUE_LEN to 64 and 256 respectively.
> > +
> > +* librte_lpm: Increase number of next hops for IPv4 to 2^24
> >
> >
> > Shared Library Versions
> > --
> > 1.9.1
> >
^ permalink raw reply [relevance 0%]
* Re: [dpdk-dev] [PATCH v1 1/3] lpm: increase number of next hops for lpm (ipv4)
2015-10-23 13:51 1% ` [dpdk-dev] [PATCH v1 1/3] " Michal Jastrzebski
@ 2015-10-23 14:38 3% ` Bruce Richardson
2015-10-23 14:59 0% ` Jastrzebski, MichalX K
0 siblings, 1 reply; 200+ results
From: Bruce Richardson @ 2015-10-23 14:38 UTC (permalink / raw)
To: Michal Jastrzebski; +Cc: dev
On Fri, Oct 23, 2015 at 03:51:49PM +0200, Michal Jastrzebski wrote:
> From: Michal Kobylinski <michalx.kobylinski@intel.com>
>
> Main implementation - changes to lpm library regarding new data types.
> Additionally this patch implements changes required by test application.
> ABI versioning requirements are met only for lpm library,
> for table library it will be sent in v2 of this patch-set.
>
> Signed-off-by: Michal Kobylinski <michalx.kobylinski@intel.com>
> ---
> app/test/test_func_reentrancy.c | 4 +-
> app/test/test_lpm.c | 227 +++++-----
> lib/librte_lpm/rte_lpm.c | 887 ++++++++++++++++++++++++++++++++++++-
> lib/librte_lpm/rte_lpm.h | 295 +++++++++++-
> lib/librte_lpm/rte_lpm_version.map | 59 ++-
> lib/librte_table/rte_table_lpm.c | 10 +-
> 6 files changed, 1322 insertions(+), 160 deletions(-)
>
> diff --git a/app/test/test_func_reentrancy.c b/app/test/test_func_reentrancy.c
> index dbecc52..331ab29 100644
> --- a/app/test/test_func_reentrancy.c
> +++ b/app/test/test_func_reentrancy.c
> @@ -343,7 +343,7 @@ static void
> lpm_clean(unsigned lcore_id)
> {
> char lpm_name[MAX_STRING_SIZE];
> - struct rte_lpm *lpm;
> + struct rte_lpm_extend *lpm;
I thought this patchset was just to increase the size of the lpm entries, not
to create a whole new entry type? The structure names etc. should all stay the
same, and let the ABI versioning take care of handling code using the older
structures.
/Bruce
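
For reference, that symbol-versioning pattern is already what the patch above uses; a stripped-down sketch of how it looks when the public struct keeps the rte_lpm name (the bodies are stubs, and how the old and new table layouts are reconciled inside the versioned functions is glossed over here):

#include <rte_compat.h>
#include "rte_lpm.h"

int
rte_lpm_add_v22(struct rte_lpm *lpm, uint32_t ip, uint8_t depth,
		uint32_t next_hop)
{
	/* new implementation with the 24-bit next hop would go here */
	(void)lpm; (void)ip; (void)depth; (void)next_hop;
	return 0;
}
BIND_DEFAULT_SYMBOL(rte_lpm_add, _v22, 2.2);
MAP_STATIC_SYMBOL(int rte_lpm_add(struct rte_lpm *lpm, uint32_t ip,
		uint8_t depth, uint32_t next_hop), rte_lpm_add_v22);

int
rte_lpm_add_v20(struct rte_lpm *lpm, uint32_t ip, uint8_t depth,
		uint8_t next_hop)
{
	/* callers linked against 2.0 land here with the 8-bit next hop */
	return rte_lpm_add_v22(lpm, ip, depth, next_hop);
}
VERSION_SYMBOL(rte_lpm_add, _v20, 2.0);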
^ permalink raw reply [relevance 3%]
* Re: [dpdk-dev] [PATCH v1 1/3] lpm: increase number of next hops for lpm (ipv4)
2015-10-23 14:38 3% ` Bruce Richardson
@ 2015-10-23 14:59 0% ` Jastrzebski, MichalX K
0 siblings, 0 replies; 200+ results
From: Jastrzebski, MichalX K @ 2015-10-23 14:59 UTC (permalink / raw)
To: Richardson, Bruce; +Cc: dev
> -----Original Message-----
> From: Richardson, Bruce
> Sent: Friday, October 23, 2015 4:39 PM
> To: Jastrzebski, MichalX K
> Cc: dev@dpdk.org
> Subject: Re: [dpdk-dev] [PATCH v1 1/3] lpm: increase number of next hops
> for lpm (ipv4)
>
> On Fri, Oct 23, 2015 at 03:51:49PM +0200, Michal Jastrzebski wrote:
> > From: Michal Kobylinski <michalx.kobylinski@intel.com>
> >
> > Main implementation - changes to lpm library regarding new data types.
> > Additionally this patch implements changes required by test application.
> > ABI versioning requirements are met only for lpm library,
> > for table library it will be sent in v2 of this patch-set.
> >
> > Signed-off-by: Michal Kobylinski <michalx.kobylinski@intel.com>
> > ---
> > app/test/test_func_reentrancy.c | 4 +-
> > app/test/test_lpm.c | 227 +++++-----
> > lib/librte_lpm/rte_lpm.c | 887
> ++++++++++++++++++++++++++++++++++++-
> > lib/librte_lpm/rte_lpm.h | 295 +++++++++++-
> > lib/librte_lpm/rte_lpm_version.map | 59 ++-
> > lib/librte_table/rte_table_lpm.c | 10 +-
> > 6 files changed, 1322 insertions(+), 160 deletions(-)
> >
> > diff --git a/app/test/test_func_reentrancy.c
> b/app/test/test_func_reentrancy.c
> > index dbecc52..331ab29 100644
> > --- a/app/test/test_func_reentrancy.c
> > +++ b/app/test/test_func_reentrancy.c
> > @@ -343,7 +343,7 @@ static void
> > lpm_clean(unsigned lcore_id)
> > {
> > char lpm_name[MAX_STRING_SIZE];
> > - struct rte_lpm *lpm;
> > + struct rte_lpm_extend *lpm;
>
> I thought this patchset was just to increase the size of the lpm entries, not
> to create a whole new entry type? The structure names etc. should all stay
> the
> same, and let the ABI versioning take care of handling code using the older
> structures.
>
> /Bruce
Hi Bruce,
I see your point. I think we should use the RTE_NEXT_ABI macro here.
The code will have to be duplicated, but it will allow the old names to be kept in the new version.
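A rough idea of what that duplication could look like in rte_lpm.h, purely illustrative (the new fields follow the patch earlier in the thread, the old ones the current header, and CONFIG_RTE_NEXT_ABI is assumed to be the switch):

#include <stdint.h>

#ifdef RTE_NEXT_ABI
/* new 4-byte entry with a 24-bit next hop, as in the patch */
struct rte_lpm_tbl24_entry {
	uint32_t next_hop  :24;
	uint32_t valid     :1;
	uint32_t ext_entry :1;
	uint32_t depth     :6;
};
#else
/* existing 2-byte entry with the 8-bit next hop stays untouched */
struct rte_lpm_tbl24_entry {
	union {
		uint8_t next_hop;
		uint8_t tbl8_gindex;
	};
	uint8_t valid     :1;
	uint8_t ext_entry :1;
	uint8_t depth     :6;
};
#endif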
Michal
^ permalink raw reply [relevance 0%]
* Re: [dpdk-dev] [PATCH v1 0/3] lpm: increase number of next hops for lpm (ipv4)
2015-10-23 13:51 3% [dpdk-dev] [PATCH v1 0/3] lpm: increase number of next hops for lpm (ipv4) Michal Jastrzebski
2015-10-23 13:51 1% ` [dpdk-dev] [PATCH v1 1/3] " Michal Jastrzebski
2015-10-23 13:51 5% ` [dpdk-dev] [PATCH v1 3/3] doc: update release 2.2 after changes in librte_lpm Michal Jastrzebski
@ 2015-10-23 16:20 0% ` Matthew Hall
2015-10-23 16:33 3% ` Stephen Hemminger
2015-10-24 6:09 0% ` Matthew Hall
2 siblings, 2 replies; 200+ results
From: Matthew Hall @ 2015-10-23 16:20 UTC (permalink / raw)
To: Michal Jastrzebski; +Cc: dev
On Fri, Oct 23, 2015 at 03:51:48PM +0200, Michal Jastrzebski wrote:
> From: Michal Kobylinski <michalx.kobylinski@intel.com>
>
> The current DPDK implementation for LPM for IPv4 and IPv6 limits the
> number of next hops to 256, as the next hop ID is an 8-bit long field.
> Proposed extension increase number of next hops for IPv4 to 2^24 and
> also allows 32-bits read/write operations.
>
> This patchset requires additional change to rte_table library to meet
> ABI compatibility requirements. A v2 will be sent next week.
I also have a patchset for this.
I will send it out as well so we could compare.
Matthew.
^ permalink raw reply [relevance 0%]
* Re: [dpdk-dev] [PATCH v1 0/3] lpm: increase number of next hops for lpm (ipv4)
2015-10-23 16:20 0% ` [dpdk-dev] [PATCH v1 0/3] lpm: increase number of next hops for lpm (ipv4) Matthew Hall
@ 2015-10-23 16:33 3% ` Stephen Hemminger
2015-10-23 18:38 0% ` Matthew Hall
2015-10-24 6:09 0% ` Matthew Hall
1 sibling, 1 reply; 200+ results
From: Stephen Hemminger @ 2015-10-23 16:33 UTC (permalink / raw)
To: Matthew Hall; +Cc: dev
On Fri, 23 Oct 2015 09:20:33 -0700
Matthew Hall <mhall@mhcomputing.net> wrote:
> On Fri, Oct 23, 2015 at 03:51:48PM +0200, Michal Jastrzebski wrote:
> > From: Michal Kobylinski <michalx.kobylinski@intel.com>
> >
> > The current DPDK implementation for LPM for IPv4 and IPv6 limits the
> > number of next hops to 256, as the next hop ID is an 8-bit long field.
> > Proposed extension increase number of next hops for IPv4 to 2^24 and
> > also allows 32-bits read/write operations.
> >
> > This patchset requires additional change to rte_table library to meet
> > ABI compatibility requirements. A v2 will be sent next week.
>
> I also have a patchset for this.
>
> I will send it out as well so we could compare.
>
> Matthew.
Could you consider rolling in the Brocade/Vyatta changes to the LPM
structure as well? I would prefer only one ABI change.
^ permalink raw reply [relevance 3%]
* Re: [dpdk-dev] [PATCH v1 0/3] lpm: increase number of next hops for lpm (ipv4)
2015-10-23 16:33 3% ` Stephen Hemminger
@ 2015-10-23 18:38 0% ` Matthew Hall
2015-10-23 19:13 0% ` Vladimir Medvedkin
2015-10-23 19:59 1% ` Stephen Hemminger
0 siblings, 2 replies; 200+ results
From: Matthew Hall @ 2015-10-23 18:38 UTC (permalink / raw)
To: Stephen Hemminger; +Cc: dev
On Fri, Oct 23, 2015 at 09:33:05AM -0700, Stephen Hemminger wrote:
> On Fri, 23 Oct 2015 09:20:33 -0700
> Matthew Hall <mhall@mhcomputing.net> wrote:
>
> > On Fri, Oct 23, 2015 at 03:51:48PM +0200, Michal Jastrzebski wrote:
> > > From: Michal Kobylinski <michalx.kobylinski@intel.com>
> > >
> > > The current DPDK implementation for LPM for IPv4 and IPv6 limits the
> > > number of next hops to 256, as the next hop ID is an 8-bit long field.
> > > Proposed extension increase number of next hops for IPv4 to 2^24 and
> > > also allows 32-bits read/write operations.
> > >
> > > This patchset requires additional change to rte_table library to meet
> > > ABI compatibility requirements. A v2 will be sent next week.
> >
> > I also have a patchset for this.
> >
> > I will send it out as well so we could compare.
> >
> > Matthew.
>
> Could you consider rolling in the Brocade/Vyatta changes to LPM
> structure as well. Would prefer only one ABI change
Hi Stephen,
I asked you if you could send me these a while ago but I never heard anything.
That's the only reason I made my own version.
If you have them available as well, maybe we can consolidate things.
Matthew.
^ permalink raw reply [relevance 0%]
* Re: [dpdk-dev] [PATCH v1 0/3] lpm: increase number of next hops for lpm (ipv4)
2015-10-23 18:38 0% ` Matthew Hall
@ 2015-10-23 19:13 0% ` Vladimir Medvedkin
2015-10-23 19:59 1% ` Stephen Hemminger
1 sibling, 0 replies; 200+ results
From: Vladimir Medvedkin @ 2015-10-23 19:13 UTC (permalink / raw)
To: Matthew Hall; +Cc: dev
Hi all,
I also have an LPM library implementation. Main points:
- First, we don't need two separate structures rte_lpm_tbl8_entry and
rte_lpm_tbl24_entry. I think it's better to merge them into one
rte_lpm_tbl_entry, because the only difference is the name of one bit -
valid_group vs ext_entry. Let its name be ext_valid.
- Second, I think that 16 bits is more than enough for the next-hop index.
It's better to use the remaining 8 bits for a so-called forwarding class.
It is something like Juniper DCU and can help us classify traffic based on
the dst prefix. But after a conversation with Bruce Richardson I agree with
him that the next-hop index and forwarding class can be split out of one
return value by the application (see the sketch after this list).
- Third, I want to add the possibility to look up the AS number (or any
other 4 bytes) that originated that prefix. It can be defined like:
union rte_lpm_tbl_entry_extend {
#ifdef RTE_LPM_ASNUM
uint64_t entry;
#else
uint32_t entry;
#endif
#ifdef RTE_LPM_ASNUM
uint32_t as_num;
#endif
struct{
uint32_t next_hop :24;/**< next hop. */
uint32_t valid :1; /**< Validation flag. */
uint32_t ext_valid :1; /**< External entry. */
uint32_t depth :6; /**< Rule depth. */
};
};
- Fourth, the next-hop index is extended not only to increase the number of
next hops but also to increase the number of more specific routes. So I
think this should be fixed:
+ unsigned tbl8_index = (uint8_t)ip +
+ ((uint8_t)tbl_entry *
RTE_LPM_TBL8_GROUP_NUM_ENTRIES);
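A small sketch of the split from the second point above (the 16/8 split is only an example, and the lookup name is borrowed from the earlier patch in this thread):

/* the library still returns one 24-bit value; the application carves it
 * into a 16-bit next-hop id and an 8-bit forwarding class */
static inline void
route_and_classify(struct rte_lpm_extend *lpm, uint32_t ip)
{
	uint32_t res;

	if (rte_lpm_lookup_extend(lpm, ip, &res) != 0)
		return;                          /* lookup miss */

	uint16_t next_hop = res & 0xffff;        /* low 16 bits */
	uint8_t fwd_class = (res >> 16) & 0xff;  /* upper 8 bits of the 24 */

	(void)next_hop;    /* e.g. select the TX port */
	(void)fwd_class;   /* e.g. select QoS/policy treatment */
}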
Regards,
Vladimir
2015-10-23 21:38 GMT+03:00 Matthew Hall <mhall@mhcomputing.net>:
> On Fri, Oct 23, 2015 at 09:33:05AM -0700, Stephen Hemminger wrote:
> > On Fri, 23 Oct 2015 09:20:33 -0700
> > Matthew Hall <mhall@mhcomputing.net> wrote:
> >
> > > On Fri, Oct 23, 2015 at 03:51:48PM +0200, Michal Jastrzebski wrote:
> > > > From: Michal Kobylinski <michalx.kobylinski@intel.com>
> > > >
> > > > The current DPDK implementation for LPM for IPv4 and IPv6 limits the
> > > > number of next hops to 256, as the next hop ID is an 8-bit long
> field.
> > > > Proposed extension increase number of next hops for IPv4 to 2^24 and
> > > > also allows 32-bits read/write operations.
> > > >
> > > > This patchset requires additional change to rte_table library to meet
> > > > ABI compatibility requirements. A v2 will be sent next week.
> > >
> > > I also have a patchset for this.
> > >
> > > I will send it out as well so we could compare.
> > >
> > > Matthew.
> >
> > Could you consider rolling in the Brocade/Vyatta changes to LPM
> > structure as well. Would prefer only one ABI change
>
> Hi Stephen,
>
> I asked you if you could send me these a while ago but I never heard
> anything.
>
> That's the only reason I made my own version.
>
> If you have them available also maybe we can consolidate things.
>
> Matthew.
>
^ permalink raw reply [relevance 0%]
* Re: [dpdk-dev] [PATCH v1 0/3] lpm: increase number of next hops for lpm (ipv4)
2015-10-23 18:38 0% ` Matthew Hall
2015-10-23 19:13 0% ` Vladimir Medvedkin
@ 2015-10-23 19:59 1% ` Stephen Hemminger
1 sibling, 0 replies; 200+ results
From: Stephen Hemminger @ 2015-10-23 19:59 UTC (permalink / raw)
To: Matthew Hall; +Cc: dev
From 9efec4571eec4db455a29773b95cf9264c046a03 Mon Sep 17 00:00:00 2001
From: Stephen Hemminger <shemming@brocade.com>
Date: Fri, 23 Oct 2015 12:55:05 -0700
Subject: [PATCH] lpm: brocade extensions
This is a brute-force merge of the Brocade extension to LPM
to current DPDK source tree.
No API/ABI compatibility is expected.
1. Allow arbitrary number of rules
2. Get rid of N^2 search for rule add/delete
3. Add route scope
4. Extend nexthop to 16 bits
5. Extend to allow for more info on delete, (callback and nexthop)
6. Dynamically grow /8 table (requires RCU)
7. Support full /0 and /32 rules
---
lib/librte_lpm/rte_lpm.c | 814 ++++++++++++++++++++++++++---------------------
lib/librte_lpm/rte_lpm.h | 381 +++++++---------------
2 files changed, 567 insertions(+), 628 deletions(-)
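
Before the diff itself, a short usage sketch against the changed API (signatures as in the diff below; the scope value and next hop are arbitrary, and lookup threads are expected to be RCU read-side registered because tbl8 growth relies on liburcu):

#include <stdio.h>
#include <rte_ip.h>
#include <rte_lcore.h>
#include "rte_lpm.h"

static void
lpm_scope_demo(void)
{
	/* no max_rules argument any more, rules are stored in a tree */
	struct rte_lpm *lpm = rte_lpm_create("vrf0", rte_socket_id());
	uint16_t nh = 0;

	if (lpm == NULL)
		return;

	/* 16-bit next hop plus an explicit scope (0 here) */
	rte_lpm_add(lpm, IPv4(10, 0, 0, 0), 8, 1000, 0);

	if (rte_lpm_is_rule_present(lpm, IPv4(10, 0, 0, 0), 8, &nh, 0) == 1)
		printf("next hop %u\n", nh);

	rte_lpm_free(lpm);
}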
diff --git a/lib/librte_lpm/rte_lpm.c b/lib/librte_lpm/rte_lpm.c
index 163ba3c..ef1f0bf 100644
--- a/lib/librte_lpm/rte_lpm.c
+++ b/lib/librte_lpm/rte_lpm.c
@@ -2,6 +2,7 @@
* BSD LICENSE
*
* Copyright(c) 2010-2014 Intel Corporation. All rights reserved.
+ * Copyright(c) 2012-2015 Brocade Communications Systems
* All rights reserved.
*
* Redistribution and use in source and binary forms, with or without
@@ -38,13 +39,15 @@
#include <stdio.h>
#include <errno.h>
#include <sys/queue.h>
+#include <bsd/sys/tree.h>
#include <rte_log.h>
#include <rte_branch_prediction.h>
#include <rte_common.h>
-#include <rte_memory.h> /* for definition of RTE_CACHE_LINE_SIZE */
+#include <rte_memory.h> /* for definition of RTE_CACHE_LINE_SIZE */
#include <rte_malloc.h>
#include <rte_memzone.h>
+#include <rte_tailq.h>
#include <rte_eal.h>
#include <rte_eal_memconfig.h>
#include <rte_per_lcore.h>
@@ -52,9 +55,25 @@
#include <rte_errno.h>
#include <rte_rwlock.h>
#include <rte_spinlock.h>
+#include <rte_debug.h>
#include "rte_lpm.h"
+#include <urcu-qsbr.h>
+
+/** Auto-growth of tbl8 */
+#define RTE_LPM_TBL8_INIT_GROUPS 256 /* power of 2 */
+#define RTE_LPM_TBL8_INIT_ENTRIES (RTE_LPM_TBL8_INIT_GROUPS * \
+ RTE_LPM_TBL8_GROUP_NUM_ENTRIES)
+/** Rule structure. */
+struct rte_lpm_rule {
+ uint32_t ip; /**< Rule IP address. */
+ uint16_t next_hop; /**< Rule next hop. */
+ uint8_t scope; /**< Rule scope */
+ uint8_t reserved;
+ RB_ENTRY(rte_lpm_rule) link;
+};
+
TAILQ_HEAD(rte_lpm_list, rte_tailq_entry);
static struct rte_tailq_elem rte_lpm_tailq = {
@@ -71,31 +90,55 @@ enum valid_flag {
/* Macro to enable/disable run-time checks. */
#if defined(RTE_LIBRTE_LPM_DEBUG)
-#include <rte_debug.h>
-#define VERIFY_DEPTH(depth) do { \
- if ((depth == 0) || (depth > RTE_LPM_MAX_DEPTH)) \
+#define VERIFY_DEPTH(depth) do { \
+ if (depth > RTE_LPM_MAX_DEPTH) \
rte_panic("LPM: Invalid depth (%u) at line %d", \
- (unsigned)(depth), __LINE__); \
+ (unsigned)(depth), __LINE__); \
} while (0)
#else
#define VERIFY_DEPTH(depth)
#endif
+/* Comparison function for red-black tree nodes.
+ "If the first argument is smaller than the second, the function
+ returns a value smaller than zero. If they are equal, the function
+ returns zero. Otherwise, it should return a value greater than zero."
+*/
+static inline int rules_cmp(const struct rte_lpm_rule *r1,
+ const struct rte_lpm_rule *r2)
+{
+ if (r1->ip < r2->ip)
+ return -1;
+ else if (r1->ip > r2->ip)
+ return 1;
+ else
+ return r1->scope - r2->scope;
+}
+
+/* Satisfy old style attribute in tree.h header */
+#ifndef __unused
+#define __unused __attribute__ ((unused))
+#endif
+
+/* Generate internal functions and make them static. */
+RB_GENERATE_STATIC(rte_lpm_rules_tree, rte_lpm_rule, link, rules_cmp)
+
/*
* Converts a given depth value to its corresponding mask value.
*
* depth (IN) : range = 1 - 32
- * mask (OUT) : 32bit mask
+ * mask (OUT) : 32bit mask
*/
static uint32_t __attribute__((pure))
depth_to_mask(uint8_t depth)
{
VERIFY_DEPTH(depth);
- /* To calculate a mask start with a 1 on the left hand side and right
- * shift while populating the left hand side with 1's
- */
- return (int)0x80000000 >> (depth - 1);
+ /* per C std. shift of 32 bits is undefined */
+ if (depth == 0)
+ return 0;
+
+ return ~0u << (32 - depth);
}
/*
@@ -113,7 +156,7 @@ depth_to_range(uint8_t depth)
return 1 << (MAX_DEPTH_TBL24 - depth);
/* Else if depth is greater than 24 */
- return (1 << (RTE_LPM_MAX_DEPTH - depth));
+ return 1 << (32 - depth);
}
/*
@@ -148,31 +191,28 @@ rte_lpm_find_existing(const char *name)
* Allocates memory for LPM object
*/
struct rte_lpm *
-rte_lpm_create(const char *name, int socket_id, int max_rules,
- __rte_unused int flags)
+rte_lpm_create(const char *name, int socket_id)
{
char mem_name[RTE_LPM_NAMESIZE];
struct rte_lpm *lpm = NULL;
struct rte_tailq_entry *te;
- uint32_t mem_size;
+ unsigned int depth;
struct rte_lpm_list *lpm_list;
+ /* check that we have an initialized tail queue */
lpm_list = RTE_TAILQ_CAST(rte_lpm_tailq.head, rte_lpm_list);
- RTE_BUILD_BUG_ON(sizeof(struct rte_lpm_tbl24_entry) != 2);
- RTE_BUILD_BUG_ON(sizeof(struct rte_lpm_tbl8_entry) != 2);
+ RTE_BUILD_BUG_ON(sizeof(struct rte_lpm_tbl24_entry) != 4);
+ RTE_BUILD_BUG_ON(sizeof(struct rte_lpm_tbl8_entry) != 4);
/* Check user arguments. */
- if ((name == NULL) || (socket_id < -1) || (max_rules == 0)){
+ if ((name == NULL) || (socket_id < -1)) {
rte_errno = EINVAL;
return NULL;
}
snprintf(mem_name, sizeof(mem_name), "LPM_%s", name);
- /* Determine the amount of memory to allocate. */
- mem_size = sizeof(*lpm) + (sizeof(lpm->rules_tbl[0]) * max_rules);
-
rte_rwlock_write_lock(RTE_EAL_TAILQ_RWLOCK);
/* guarantee there's no existing */
@@ -192,17 +232,33 @@ rte_lpm_create(const char *name, int socket_id, int max_rules,
}
/* Allocate memory to store the LPM data structures. */
- lpm = (struct rte_lpm *)rte_zmalloc_socket(mem_name, mem_size,
- RTE_CACHE_LINE_SIZE, socket_id);
+ lpm = rte_zmalloc_socket(mem_name, sizeof(*lpm), RTE_CACHE_LINE_SIZE,
+ socket_id);
if (lpm == NULL) {
RTE_LOG(ERR, LPM, "LPM memory allocation failed\n");
- rte_free(te);
goto exit;
}
/* Save user arguments. */
- lpm->max_rules = max_rules;
snprintf(lpm->name, sizeof(lpm->name), "%s", name);
+ lpm->socket_id = socket_id;
+
+ /* Vyatta change to use red-black tree */
+ for (depth = 0; depth < RTE_LPM_MAX_DEPTH; ++depth)
+ RB_INIT(&lpm->rules[depth]);
+
+ /* Vyatta change to dynamically grow tbl8 */
+ lpm->tbl8_num_groups = RTE_LPM_TBL8_INIT_GROUPS;
+ lpm->tbl8_rover = RTE_LPM_TBL8_INIT_GROUPS - 1;
+ lpm->tbl8 = rte_calloc_socket(NULL, RTE_LPM_TBL8_INIT_ENTRIES,
+ sizeof(struct rte_lpm_tbl8_entry),
+ RTE_CACHE_LINE_SIZE, socket_id);
+ if (lpm->tbl8 == NULL) {
+ RTE_LOG(ERR, LPM, "LPM tbl8 group allocation failed\n");
+ rte_free(lpm);
+ lpm = NULL;
+ goto exit;
+ }
te->data = (void *) lpm;
@@ -245,248 +301,237 @@ rte_lpm_free(struct rte_lpm *lpm)
rte_rwlock_write_unlock(RTE_EAL_TAILQ_RWLOCK);
+ rte_free(lpm->tbl8);
rte_free(lpm);
rte_free(te);
}
+
/*
- * Adds a rule to the rule table.
- *
- * NOTE: The rule table is split into 32 groups. Each group contains rules that
- * apply to a specific prefix depth (i.e. group 1 contains rules that apply to
- * prefixes with a depth of 1 etc.). In the following code (depth - 1) is used
- * to refer to depth 1 because even though the depth range is 1 - 32, depths
- * are stored in the rule table from 0 - 31.
- * NOTE: Valid range for depth parameter is 1 .. 32 inclusive.
+ * Finds a rule in rule table.
*/
-static inline int32_t
-rule_add(struct rte_lpm *lpm, uint32_t ip_masked, uint8_t depth,
- uint8_t next_hop)
+static struct rte_lpm_rule *
+rule_find(struct rte_lpm *lpm, uint32_t ip_masked, uint8_t depth, uint8_t scope)
{
- uint32_t rule_gindex, rule_index, last_rule;
- int i;
-
- VERIFY_DEPTH(depth);
-
- /* Scan through rule group to see if rule already exists. */
- if (lpm->rule_info[depth - 1].used_rules > 0) {
-
- /* rule_gindex stands for rule group index. */
- rule_gindex = lpm->rule_info[depth - 1].first_rule;
- /* Initialise rule_index to point to start of rule group. */
- rule_index = rule_gindex;
- /* Last rule = Last used rule in this rule group. */
- last_rule = rule_gindex + lpm->rule_info[depth - 1].used_rules;
-
- for (; rule_index < last_rule; rule_index++) {
+ struct rte_lpm_rules_tree *head = &lpm->rules[depth];
+ struct rte_lpm_rule k = {
+ .ip = ip_masked,
+ .scope = scope,
+ };
- /* If rule already exists update its next_hop and return. */
- if (lpm->rules_tbl[rule_index].ip == ip_masked) {
- lpm->rules_tbl[rule_index].next_hop = next_hop;
-
- return rule_index;
- }
- }
-
- if (rule_index == lpm->max_rules)
- return -ENOSPC;
- } else {
- /* Calculate the position in which the rule will be stored. */
- rule_index = 0;
+ return RB_FIND(rte_lpm_rules_tree, head, &k);
+}
- for (i = depth - 1; i > 0; i--) {
- if (lpm->rule_info[i - 1].used_rules > 0) {
- rule_index = lpm->rule_info[i - 1].first_rule + lpm->rule_info[i - 1].used_rules;
- break;
- }
- }
- if (rule_index == lpm->max_rules)
- return -ENOSPC;
+/* Finds rule in table in scope order */
+static struct rte_lpm_rule *
+rule_find_any(struct rte_lpm *lpm, uint32_t ip_masked, uint8_t depth)
+{
+ struct rte_lpm_rule *r;
+ int scope;
- lpm->rule_info[depth - 1].first_rule = rule_index;
+ for (scope = 255; scope >= 0; --scope) {
+ r = rule_find(lpm, ip_masked, depth, scope);
+ if (r)
+ return r;
}
- /* Make room for the new rule in the array. */
- for (i = RTE_LPM_MAX_DEPTH; i > depth; i--) {
- if (lpm->rule_info[i - 1].first_rule + lpm->rule_info[i - 1].used_rules == lpm->max_rules)
- return -ENOSPC;
+ return NULL;
+}
- if (lpm->rule_info[i - 1].used_rules > 0) {
- lpm->rules_tbl[lpm->rule_info[i - 1].first_rule + lpm->rule_info[i - 1].used_rules]
- = lpm->rules_tbl[lpm->rule_info[i - 1].first_rule];
- lpm->rule_info[i - 1].first_rule++;
- }
- }
+/*
+ * Adds a rule to the rule table.
+ *
+ * NOTE: The rule table is split into 32 groups. Each group contains rules that
+ * apply to a specific prefix depth (i.e. group 1 contains rules that apply to
+ * prefixes with a depth of 1 etc.).
+ * NOTE: Valid range for depth parameter is 0 .. 32 inclusive.
+ */
+static struct rte_lpm_rule *
+rule_add(struct rte_lpm *lpm, uint32_t ip_masked, uint8_t depth,
+ uint16_t next_hop, uint8_t scope)
+{
+ struct rte_lpm_rules_tree *head = &lpm->rules[depth];
+ struct rte_lpm_rule *r, *old;
- /* Add the new rule. */
- lpm->rules_tbl[rule_index].ip = ip_masked;
- lpm->rules_tbl[rule_index].next_hop = next_hop;
+ /*
+ * NB: uses regular malloc to avoid chewing up precious
+ * memory pool space for rules.
+ */
+ r = malloc(sizeof(*r));
+ if (!r)
+ return NULL;
- /* Increment the used rules counter for this rule group. */
- lpm->rule_info[depth - 1].used_rules++;
+ r->ip = ip_masked;
+ r->next_hop = next_hop;
+ r->scope = scope;
- return rule_index;
+ old = RB_INSERT(rte_lpm_rules_tree, head, r);
+ if (!old)
+ return r;
+
+ /* collision with existing rule */
+ free(r);
+ return old;
}
/*
* Delete a rule from the rule table.
* NOTE: Valid range for depth parameter is 1 .. 32 inclusive.
*/
-static inline void
-rule_delete(struct rte_lpm *lpm, int32_t rule_index, uint8_t depth)
+static void
+rule_delete(struct rte_lpm *lpm, struct rte_lpm_rule *r, uint8_t depth)
{
- int i;
+ struct rte_lpm_rules_tree *head = &lpm->rules[depth];
- VERIFY_DEPTH(depth);
-
- lpm->rules_tbl[rule_index] = lpm->rules_tbl[lpm->rule_info[depth - 1].first_rule
- + lpm->rule_info[depth - 1].used_rules - 1];
+ RB_REMOVE(rte_lpm_rules_tree, head, r);
- for (i = depth; i < RTE_LPM_MAX_DEPTH; i++) {
- if (lpm->rule_info[i].used_rules > 0) {
- lpm->rules_tbl[lpm->rule_info[i].first_rule - 1] =
- lpm->rules_tbl[lpm->rule_info[i].first_rule + lpm->rule_info[i].used_rules - 1];
- lpm->rule_info[i].first_rule--;
- }
- }
-
- lpm->rule_info[depth - 1].used_rules--;
+ free(r);
}
/*
- * Finds a rule in rule table.
- * NOTE: Valid range for depth parameter is 1 .. 32 inclusive.
+ * Dynamically increase size of tbl8
*/
-static inline int32_t
-rule_find(struct rte_lpm *lpm, uint32_t ip_masked, uint8_t depth)
+static int
+tbl8_grow(struct rte_lpm *lpm)
{
- uint32_t rule_gindex, last_rule, rule_index;
-
- VERIFY_DEPTH(depth);
+ size_t old_size, new_size;
+ struct rte_lpm_tbl8_entry *new_tbl8;
+
+ /* This should not happen,
+ * worst case is each /24 can point to one tbl8 */
+ if (lpm->tbl8_num_groups >= RTE_LPM_TBL24_NUM_ENTRIES)
+ rte_panic("LPM: tbl8 grow already at %u",
+ lpm->tbl8_num_groups);
+
+ old_size = lpm->tbl8_num_groups;
+ new_size = old_size << 1;
+ new_tbl8 = rte_calloc_socket(NULL,
+ new_size * RTE_LPM_TBL8_GROUP_NUM_ENTRIES,
+ sizeof(struct rte_lpm_tbl8_entry),
+ RTE_CACHE_LINE_SIZE,
+ lpm->socket_id);
+ if (new_tbl8 == NULL) {
+ RTE_LOG(ERR, LPM, "LPM tbl8 group expand allocation failed\n");
+ return -ENOMEM;
+ }
- rule_gindex = lpm->rule_info[depth - 1].first_rule;
- last_rule = rule_gindex + lpm->rule_info[depth - 1].used_rules;
+ memcpy(new_tbl8, lpm->tbl8,
+ old_size * RTE_LPM_TBL8_GROUP_NUM_ENTRIES
+ * sizeof(struct rte_lpm_tbl8_entry));
- /* Scan used rules at given depth to find rule. */
- for (rule_index = rule_gindex; rule_index < last_rule; rule_index++) {
- /* If rule is found return the rule index. */
- if (lpm->rules_tbl[rule_index].ip == ip_masked)
- return rule_index;
- }
+ /* swap in new table */
+ defer_rcu(rte_free, lpm->tbl8);
+ rcu_assign_pointer(lpm->tbl8, new_tbl8);
+ lpm->tbl8_num_groups = new_size;
- /* If rule is not found return -EINVAL. */
- return -EINVAL;
+ return 0;
}
/*
* Find, clean and allocate a tbl8.
*/
-static inline int32_t
-tbl8_alloc(struct rte_lpm_tbl8_entry *tbl8)
+static int32_t
+tbl8_alloc(struct rte_lpm *lpm)
{
uint32_t tbl8_gindex; /* tbl8 group index. */
struct rte_lpm_tbl8_entry *tbl8_entry;
/* Scan through tbl8 to find a free (i.e. INVALID) tbl8 group. */
- for (tbl8_gindex = 0; tbl8_gindex < RTE_LPM_TBL8_NUM_GROUPS;
- tbl8_gindex++) {
- tbl8_entry = &tbl8[tbl8_gindex *
- RTE_LPM_TBL8_GROUP_NUM_ENTRIES];
+ for (tbl8_gindex = (lpm->tbl8_rover + 1) & (lpm->tbl8_num_groups - 1);
+ tbl8_gindex != lpm->tbl8_rover;
+ tbl8_gindex = (tbl8_gindex + 1) & (lpm->tbl8_num_groups - 1)) {
+ tbl8_entry = lpm->tbl8
+ + tbl8_gindex * RTE_LPM_TBL8_GROUP_NUM_ENTRIES;
+
/* If a free tbl8 group is found clean it and set as VALID. */
- if (!tbl8_entry->valid_group) {
- memset(&tbl8_entry[0], 0,
- RTE_LPM_TBL8_GROUP_NUM_ENTRIES *
- sizeof(tbl8_entry[0]));
+ if (likely(!tbl8_entry->valid_group))
+ goto found;
+ }
- tbl8_entry->valid_group = VALID;
+ /* Out of space expand */
+ tbl8_gindex = lpm->tbl8_num_groups;
+ if (tbl8_grow(lpm) < 0)
+ return -ENOSPC;
- /* Return group index for allocated tbl8 group. */
- return tbl8_gindex;
- }
- }
+ tbl8_entry = lpm->tbl8
+ + tbl8_gindex * RTE_LPM_TBL8_GROUP_NUM_ENTRIES;
+ found:
+ memset(tbl8_entry, 0,
+ RTE_LPM_TBL8_GROUP_NUM_ENTRIES * sizeof(tbl8_entry[0]));
+
+ tbl8_entry->valid_group = VALID;
- /* If there are no tbl8 groups free then return error. */
- return -ENOSPC;
+ /* Remember last slot to start looking there */
+ lpm->tbl8_rover = tbl8_gindex;
+
+ /* Return group index for allocated tbl8 group. */
+ return tbl8_gindex;
}
static inline void
-tbl8_free(struct rte_lpm_tbl8_entry *tbl8, uint32_t tbl8_group_start)
+tbl8_free(struct rte_lpm *lpm, uint32_t tbl8_group_start)
{
/* Set tbl8 group invalid*/
- tbl8[tbl8_group_start].valid_group = INVALID;
+ lpm->tbl8[tbl8_group_start].valid_group = INVALID;
}
-static inline int32_t
+static void
add_depth_small(struct rte_lpm *lpm, uint32_t ip, uint8_t depth,
- uint8_t next_hop)
+ uint16_t next_hop)
{
uint32_t tbl24_index, tbl24_range, tbl8_index, tbl8_group_end, i, j;
+ struct rte_lpm_tbl24_entry new_tbl24_entry = {
+ .valid = VALID,
+ .ext_entry = 0,
+ .depth = depth,
+ { .next_hop = next_hop, }
+ };
+ struct rte_lpm_tbl8_entry new_tbl8_entry = {
+ .valid_group = VALID,
+ .valid = VALID,
+ .depth = depth,
+ .next_hop = next_hop,
+ };
+
+ /* Force compiler to initialize before assignment */
+ rte_barrier();
/* Calculate the index into Table24. */
tbl24_index = ip >> 8;
tbl24_range = depth_to_range(depth);
-
for (i = tbl24_index; i < (tbl24_index + tbl24_range); i++) {
/*
* For invalid OR valid and non-extended tbl 24 entries set
* entry.
*/
- if (!lpm->tbl24[i].valid || (lpm->tbl24[i].ext_entry == 0 &&
- lpm->tbl24[i].depth <= depth)) {
-
- struct rte_lpm_tbl24_entry new_tbl24_entry = {
- { .next_hop = next_hop, },
- .valid = VALID,
- .ext_entry = 0,
- .depth = depth,
- };
-
- /* Setting tbl24 entry in one go to avoid race
- * conditions
- */
- lpm->tbl24[i] = new_tbl24_entry;
-
+ if (!lpm->tbl24[i].valid || lpm->tbl24[i].ext_entry == 0) {
+ if (!lpm->tbl24[i].valid ||
+ lpm->tbl24[i].depth <= depth)
+ lpm->tbl24[i] = new_tbl24_entry;
continue;
}
- if (lpm->tbl24[i].ext_entry == 1) {
- /* If tbl24 entry is valid and extended calculate the
- * index into tbl8.
- */
- tbl8_index = lpm->tbl24[i].tbl8_gindex *
- RTE_LPM_TBL8_GROUP_NUM_ENTRIES;
- tbl8_group_end = tbl8_index +
- RTE_LPM_TBL8_GROUP_NUM_ENTRIES;
-
- for (j = tbl8_index; j < tbl8_group_end; j++) {
- if (!lpm->tbl8[j].valid ||
- lpm->tbl8[j].depth <= depth) {
- struct rte_lpm_tbl8_entry
- new_tbl8_entry = {
- .valid = VALID,
- .valid_group = VALID,
- .depth = depth,
- .next_hop = next_hop,
- };
-
- /*
- * Setting tbl8 entry in one go to avoid
- * race conditions
- */
- lpm->tbl8[j] = new_tbl8_entry;
-
- continue;
- }
+ /* If tbl24 entry is valid and extended calculate the index
+ * into tbl8. */
+ tbl8_index = lpm->tbl24[i].tbl8_gindex
+ * RTE_LPM_TBL8_GROUP_NUM_ENTRIES;
+ tbl8_group_end = tbl8_index + RTE_LPM_TBL8_GROUP_NUM_ENTRIES;
+ for (j = tbl8_index; j < tbl8_group_end; j++) {
+ if (!lpm->tbl8[j].valid ||
+ lpm->tbl8[j].depth <= depth) {
+ /*
+ * Setting tbl8 entry in one go to avoid race
+ * conditions
+ */
+ lpm->tbl8[j] = new_tbl8_entry;
}
}
}
-
- return 0;
}
-static inline int32_t
+static int32_t
add_depth_big(struct rte_lpm *lpm, uint32_t ip_masked, uint8_t depth,
- uint8_t next_hop)
+ uint16_t next_hop)
{
uint32_t tbl24_index;
int32_t tbl8_group_index, tbl8_group_start, tbl8_group_end, tbl8_index,
@@ -497,12 +542,11 @@ add_depth_big(struct rte_lpm *lpm, uint32_t ip_masked, uint8_t depth,
if (!lpm->tbl24[tbl24_index].valid) {
/* Search for a free tbl8 group. */
- tbl8_group_index = tbl8_alloc(lpm->tbl8);
+ tbl8_group_index = tbl8_alloc(lpm);
- /* Check tbl8 allocation was successful. */
- if (tbl8_group_index < 0) {
+ /* Check tbl8 allocation was unsuccessful. */
+ if (tbl8_group_index < 0)
return tbl8_group_index;
- }
/* Find index into tbl8 and range. */
tbl8_index = (tbl8_group_index *
@@ -510,35 +554,38 @@ add_depth_big(struct rte_lpm *lpm, uint32_t ip_masked, uint8_t depth,
(ip_masked & 0xFF);
/* Set tbl8 entry. */
- for (i = tbl8_index; i < (tbl8_index + tbl8_range); i++) {
- lpm->tbl8[i].depth = depth;
- lpm->tbl8[i].next_hop = next_hop;
- lpm->tbl8[i].valid = VALID;
- }
+ struct rte_lpm_tbl8_entry new_tbl8_entry = {
+ .valid_group = VALID,
+ .valid = VALID,
+ .depth = depth,
+ .next_hop = next_hop,
+ };
+
+ for (i = tbl8_index; i < (tbl8_index + tbl8_range); i++)
+ lpm->tbl8[i] = new_tbl8_entry;
/*
* Update tbl24 entry to point to new tbl8 entry. Note: The
* ext_flag and tbl8_index need to be updated simultaneously,
* so assign whole structure in one go
*/
-
struct rte_lpm_tbl24_entry new_tbl24_entry = {
- { .tbl8_gindex = (uint8_t)tbl8_group_index, },
.valid = VALID,
.ext_entry = 1,
.depth = 0,
+ { .tbl8_gindex = tbl8_group_index, }
};
+ rte_barrier();
lpm->tbl24[tbl24_index] = new_tbl24_entry;
-
- }/* If valid entry but not extended calculate the index into Table8. */
+ }
+ /* If valid entry but not extended calculate the index into Table8. */
else if (lpm->tbl24[tbl24_index].ext_entry == 0) {
/* Search for free tbl8 group. */
- tbl8_group_index = tbl8_alloc(lpm->tbl8);
+ tbl8_group_index = tbl8_alloc(lpm);
- if (tbl8_group_index < 0) {
+ if (tbl8_group_index < 0)
return tbl8_group_index;
- }
tbl8_group_start = tbl8_group_index *
RTE_LPM_TBL8_GROUP_NUM_ENTRIES;
@@ -546,69 +593,68 @@ add_depth_big(struct rte_lpm *lpm, uint32_t ip_masked, uint8_t depth,
RTE_LPM_TBL8_GROUP_NUM_ENTRIES;
/* Populate new tbl8 with tbl24 value. */
- for (i = tbl8_group_start; i < tbl8_group_end; i++) {
- lpm->tbl8[i].valid = VALID;
- lpm->tbl8[i].depth = lpm->tbl24[tbl24_index].depth;
- lpm->tbl8[i].next_hop =
- lpm->tbl24[tbl24_index].next_hop;
- }
+ struct rte_lpm_tbl8_entry new_tbl8_entry = {
+ .valid_group = VALID,
+ .valid = VALID,
+ .depth = lpm->tbl24[tbl24_index].depth,
+ .next_hop = lpm->tbl24[tbl24_index].next_hop,
+ };
+
+ for (i = tbl8_group_start; i < tbl8_group_end; i++)
+ lpm->tbl8[i] = new_tbl8_entry;
tbl8_index = tbl8_group_start + (ip_masked & 0xFF);
- /* Insert new rule into the tbl8 entry. */
- for (i = tbl8_index; i < tbl8_index + tbl8_range; i++) {
- if (!lpm->tbl8[i].valid ||
- lpm->tbl8[i].depth <= depth) {
- lpm->tbl8[i].valid = VALID;
- lpm->tbl8[i].depth = depth;
- lpm->tbl8[i].next_hop = next_hop;
-
- continue;
- }
- }
+ /* Insert new specific rule into the tbl8 entry. */
+ new_tbl8_entry.depth = depth;
+ new_tbl8_entry.next_hop = next_hop;
+ for (i = tbl8_index; i < tbl8_index + tbl8_range; i++)
+ lpm->tbl8[i] = new_tbl8_entry;
/*
* Update tbl24 entry to point to new tbl8 entry. Note: The
* ext_flag and tbl8_index need to be updated simultaneously,
* so assign whole structure in one go.
*/
-
struct rte_lpm_tbl24_entry new_tbl24_entry = {
- { .tbl8_gindex = (uint8_t)tbl8_group_index, },
.valid = VALID,
.ext_entry = 1,
.depth = 0,
+ { .tbl8_gindex = tbl8_group_index, }
};
+ /*
+ * Ensure compiler isn't doing something completely odd
+ * like updating tbl24 before tbl8.
+ */
+ rte_barrier();
lpm->tbl24[tbl24_index] = new_tbl24_entry;
- }
- else { /*
- * If it is valid, extended entry calculate the index into tbl8.
- */
+ } else {
+ /*
+ * If it is valid, extended entry calculate the index into tbl8.
+ */
+ struct rte_lpm_tbl8_entry new_tbl8_entry = {
+ .valid_group = VALID,
+ .valid = VALID,
+ .depth = depth,
+ .next_hop = next_hop,
+ };
+ rte_barrier();
+
tbl8_group_index = lpm->tbl24[tbl24_index].tbl8_gindex;
tbl8_group_start = tbl8_group_index *
RTE_LPM_TBL8_GROUP_NUM_ENTRIES;
tbl8_index = tbl8_group_start + (ip_masked & 0xFF);
for (i = tbl8_index; i < (tbl8_index + tbl8_range); i++) {
-
if (!lpm->tbl8[i].valid ||
- lpm->tbl8[i].depth <= depth) {
- struct rte_lpm_tbl8_entry new_tbl8_entry = {
- .valid = VALID,
- .depth = depth,
- .next_hop = next_hop,
- .valid_group = lpm->tbl8[i].valid_group,
- };
-
+ lpm->tbl8[i].depth <= depth) {
/*
* Setting tbl8 entry in one go to avoid race
* condition
*/
lpm->tbl8[i] = new_tbl8_entry;
-
- continue;
}
}
}
@@ -621,38 +667,32 @@ add_depth_big(struct rte_lpm *lpm, uint32_t ip_masked, uint8_t depth,
*/
int
rte_lpm_add(struct rte_lpm *lpm, uint32_t ip, uint8_t depth,
- uint8_t next_hop)
+ uint16_t next_hop, uint8_t scope)
{
- int32_t rule_index, status = 0;
- uint32_t ip_masked;
+ struct rte_lpm_rule *rule;
+ uint32_t ip_masked = (ip & depth_to_mask(depth));
/* Check user arguments. */
- if ((lpm == NULL) || (depth < 1) || (depth > RTE_LPM_MAX_DEPTH))
+ if ((lpm == NULL) || (depth >= RTE_LPM_MAX_DEPTH))
return -EINVAL;
- ip_masked = ip & depth_to_mask(depth);
-
/* Add the rule to the rule table. */
- rule_index = rule_add(lpm, ip_masked, depth, next_hop);
+ rule = rule_add(lpm, ip_masked, depth, next_hop, scope);
/* If the is no space available for new rule return error. */
- if (rule_index < 0) {
- return rule_index;
- }
-
- if (depth <= MAX_DEPTH_TBL24) {
- status = add_depth_small(lpm, ip_masked, depth, next_hop);
- }
- else { /* If depth > RTE_LPM_MAX_DEPTH_TBL24 */
- status = add_depth_big(lpm, ip_masked, depth, next_hop);
+ if (rule == NULL)
+ return -ENOSPC;
+ if (depth <= MAX_DEPTH_TBL24)
+ add_depth_small(lpm, ip_masked, depth, next_hop);
+ else {
/*
* If add fails due to exhaustion of tbl8 extensions delete
* rule that was added to rule table.
*/
+ int status = add_depth_big(lpm, ip_masked, depth, next_hop);
if (status < 0) {
- rule_delete(lpm, rule_index, depth);
-
+ rule_delete(lpm, rule, depth);
return status;
}
}
@@ -665,10 +705,10 @@ rte_lpm_add(struct rte_lpm *lpm, uint32_t ip, uint8_t depth,
*/
int
rte_lpm_is_rule_present(struct rte_lpm *lpm, uint32_t ip, uint8_t depth,
-uint8_t *next_hop)
+ uint16_t *next_hop, uint8_t scope)
{
uint32_t ip_masked;
- int32_t rule_index;
+ struct rte_lpm_rule *rule;
/* Check user arguments. */
if ((lpm == NULL) ||
@@ -678,10 +718,10 @@ uint8_t *next_hop)
/* Look for the rule using rule_find. */
ip_masked = ip & depth_to_mask(depth);
- rule_index = rule_find(lpm, ip_masked, depth);
+ rule = rule_find(lpm, ip_masked, depth, scope);
- if (rule_index >= 0) {
- *next_hop = lpm->rules_tbl[rule_index].next_hop;
+ if (rule != NULL) {
+ *next_hop = rule->next_hop;
return 1;
}
@@ -689,30 +729,29 @@ uint8_t *next_hop)
return 0;
}
-static inline int32_t
-find_previous_rule(struct rte_lpm *lpm, uint32_t ip, uint8_t depth, uint8_t *sub_rule_depth)
+static struct rte_lpm_rule *
+find_previous_rule(struct rte_lpm *lpm, uint32_t ip, uint8_t depth,
+ uint8_t *sub_rule_depth)
{
- int32_t rule_index;
+ struct rte_lpm_rule *rule;
uint32_t ip_masked;
- uint8_t prev_depth;
+ int prev_depth;
- for (prev_depth = (uint8_t)(depth - 1); prev_depth > 0; prev_depth--) {
+ for (prev_depth = depth - 1; prev_depth >= 0; prev_depth--) {
ip_masked = ip & depth_to_mask(prev_depth);
-
- rule_index = rule_find(lpm, ip_masked, prev_depth);
-
- if (rule_index >= 0) {
+ rule = rule_find_any(lpm, ip_masked, prev_depth);
+ if (rule) {
*sub_rule_depth = prev_depth;
- return rule_index;
+ return rule;
}
}
- return -1;
+ return NULL;
}
-static inline int32_t
-delete_depth_small(struct rte_lpm *lpm, uint32_t ip_masked,
- uint8_t depth, int32_t sub_rule_index, uint8_t sub_rule_depth)
+static void
+delete_depth_small(struct rte_lpm *lpm, uint32_t ip_masked, uint8_t depth,
+ struct rte_lpm_rule *sub_rule, uint8_t new_depth)
{
uint32_t tbl24_range, tbl24_index, tbl8_group_index, tbl8_index, i, j;
@@ -720,28 +759,22 @@ delete_depth_small(struct rte_lpm *lpm, uint32_t ip_masked,
tbl24_range = depth_to_range(depth);
tbl24_index = (ip_masked >> 8);
- /*
- * Firstly check the sub_rule_index. A -1 indicates no replacement rule
- * and a positive number indicates a sub_rule_index.
- */
- if (sub_rule_index < 0) {
+ /* Firstly check the sub_rule. */
+ if (sub_rule == NULL) {
/*
* If no replacement rule exists then invalidate entries
* associated with this rule.
*/
for (i = tbl24_index; i < (tbl24_index + tbl24_range); i++) {
-
- if (lpm->tbl24[i].ext_entry == 0 &&
- lpm->tbl24[i].depth <= depth ) {
- lpm->tbl24[i].valid = INVALID;
- }
- else {
+ if (lpm->tbl24[i].ext_entry == 0) {
+ if (lpm->tbl24[i].depth <= depth)
+ lpm->tbl24[i].valid = INVALID;
+ } else {
/*
* If TBL24 entry is extended, then there has
* to be a rule with depth >= 25 in the
* associated TBL8 group.
*/
-
tbl8_group_index = lpm->tbl24[i].tbl8_gindex;
tbl8_index = tbl8_group_index *
RTE_LPM_TBL8_GROUP_NUM_ENTRIES;
@@ -749,60 +782,54 @@ delete_depth_small(struct rte_lpm *lpm, uint32_t ip_masked,
for (j = tbl8_index; j < (tbl8_index +
RTE_LPM_TBL8_GROUP_NUM_ENTRIES); j++) {
- if (lpm->tbl8[j].depth <= depth)
+ if (lpm->tbl8[j].valid &&
+ lpm->tbl8[j].depth <= depth)
lpm->tbl8[j].valid = INVALID;
}
}
}
- }
- else {
+ } else {
/*
* If a replacement rule exists then modify entries
* associated with this rule.
*/
-
struct rte_lpm_tbl24_entry new_tbl24_entry = {
- {.next_hop = lpm->rules_tbl[sub_rule_index].next_hop,},
.valid = VALID,
.ext_entry = 0,
- .depth = sub_rule_depth,
+ .depth = new_depth,
+ { .next_hop = sub_rule->next_hop, }
};
struct rte_lpm_tbl8_entry new_tbl8_entry = {
+ .valid_group = VALID,
.valid = VALID,
- .depth = sub_rule_depth,
- .next_hop = lpm->rules_tbl
- [sub_rule_index].next_hop,
+ .depth = new_depth,
+ .next_hop = sub_rule->next_hop,
};
for (i = tbl24_index; i < (tbl24_index + tbl24_range); i++) {
-
- if (lpm->tbl24[i].ext_entry == 0 &&
- lpm->tbl24[i].depth <= depth ) {
- lpm->tbl24[i] = new_tbl24_entry;
- }
- else {
+ if (lpm->tbl24[i].ext_entry == 0) {
+ if (lpm->tbl24[i].depth <= depth)
+ lpm->tbl24[i] = new_tbl24_entry;
+ } else {
/*
* If TBL24 entry is extended, then there has
* to be a rule with depth >= 25 in the
* associated TBL8 group.
*/
-
tbl8_group_index = lpm->tbl24[i].tbl8_gindex;
tbl8_index = tbl8_group_index *
RTE_LPM_TBL8_GROUP_NUM_ENTRIES;
for (j = tbl8_index; j < (tbl8_index +
RTE_LPM_TBL8_GROUP_NUM_ENTRIES); j++) {
-
- if (lpm->tbl8[j].depth <= depth)
+ if (!lpm->tbl8[j].valid ||
+ lpm->tbl8[j].depth <= depth)
lpm->tbl8[j] = new_tbl8_entry;
}
}
}
}
-
- return 0;
}
/*
@@ -813,8 +840,9 @@ delete_depth_small(struct rte_lpm *lpm, uint32_t ip_masked,
* Return of value > -1 means tbl8 is in use but has all the same values and
* thus can be recycled
*/
-static inline int32_t
-tbl8_recycle_check(struct rte_lpm_tbl8_entry *tbl8, uint32_t tbl8_group_start)
+static int32_t
+tbl8_recycle_check(const struct rte_lpm_tbl8_entry *tbl8,
+ uint32_t tbl8_group_start)
{
uint32_t tbl8_group_end, i;
tbl8_group_end = tbl8_group_start + RTE_LPM_TBL8_GROUP_NUM_ENTRIES;
@@ -855,13 +883,14 @@ tbl8_recycle_check(struct rte_lpm_tbl8_entry *tbl8, uint32_t tbl8_group_start)
if (tbl8[i].valid)
return -EEXIST;
}
+
/* If no valid entries are found then return -EINVAL. */
return -EINVAL;
}
-static inline int32_t
-delete_depth_big(struct rte_lpm *lpm, uint32_t ip_masked,
- uint8_t depth, int32_t sub_rule_index, uint8_t sub_rule_depth)
+static void
+delete_depth_big(struct rte_lpm *lpm, uint32_t ip_masked, uint8_t depth,
+ struct rte_lpm_rule *sub_rule, uint8_t new_depth)
{
uint32_t tbl24_index, tbl8_group_index, tbl8_group_start, tbl8_index,
tbl8_range, i;
@@ -879,23 +908,22 @@ delete_depth_big(struct rte_lpm *lpm, uint32_t ip_masked,
tbl8_index = tbl8_group_start + (ip_masked & 0xFF);
tbl8_range = depth_to_range(depth);
- if (sub_rule_index < 0) {
+ if (sub_rule == NULL) {
/*
* Loop through the range of entries on tbl8 for which the
* rule_to_delete must be removed or modified.
*/
for (i = tbl8_index; i < (tbl8_index + tbl8_range); i++) {
- if (lpm->tbl8[i].depth <= depth)
+ if (lpm->tbl8[i].valid && lpm->tbl8[i].depth <= depth)
lpm->tbl8[i].valid = INVALID;
}
- }
- else {
+ } else {
/* Set new tbl8 entry. */
struct rte_lpm_tbl8_entry new_tbl8_entry = {
+ .valid_group = VALID,
.valid = VALID,
- .depth = sub_rule_depth,
- .valid_group = lpm->tbl8[tbl8_group_start].valid_group,
- .next_hop = lpm->rules_tbl[sub_rule_index].next_hop,
+ .depth = new_depth,
+ .next_hop = sub_rule->next_hop,
};
/*
@@ -903,7 +931,7 @@ delete_depth_big(struct rte_lpm *lpm, uint32_t ip_masked,
* rule_to_delete must be modified.
*/
for (i = tbl8_index; i < (tbl8_index + tbl8_range); i++) {
- if (lpm->tbl8[i].depth <= depth)
+ if (!lpm->tbl8[i].valid || lpm->tbl8[i].depth <= depth)
lpm->tbl8[i] = new_tbl8_entry;
}
}
@@ -915,100 +943,158 @@ delete_depth_big(struct rte_lpm *lpm, uint32_t ip_masked,
*/
tbl8_recycle_index = tbl8_recycle_check(lpm->tbl8, tbl8_group_start);
-
- if (tbl8_recycle_index == -EINVAL){
+ if (tbl8_recycle_index == -EINVAL) {
/* Set tbl24 before freeing tbl8 to avoid race condition. */
lpm->tbl24[tbl24_index].valid = 0;
- tbl8_free(lpm->tbl8, tbl8_group_start);
- }
- else if (tbl8_recycle_index > -1) {
+ rte_barrier();
+ tbl8_free(lpm, tbl8_group_start);
+ } else if (tbl8_recycle_index > -1) {
/* Update tbl24 entry. */
struct rte_lpm_tbl24_entry new_tbl24_entry = {
- { .next_hop = lpm->tbl8[tbl8_recycle_index].next_hop, },
.valid = VALID,
.ext_entry = 0,
.depth = lpm->tbl8[tbl8_recycle_index].depth,
+ { .next_hop = lpm->tbl8[tbl8_recycle_index].next_hop, }
};
/* Set tbl24 before freeing tbl8 to avoid race condition. */
lpm->tbl24[tbl24_index] = new_tbl24_entry;
- tbl8_free(lpm->tbl8, tbl8_group_start);
+ rte_barrier();
+ tbl8_free(lpm, tbl8_group_start);
}
+}
- return 0;
+/*
+ * Find rule to replace the just deleted. If there is no rule to
+ * replace the rule_to_delete we return NULL and invalidate the table
+ * entries associated with this rule.
+ */
+static void rule_replace(struct rte_lpm *lpm, uint32_t ip, uint8_t depth)
+{
+ uint32_t ip_masked;
+ struct rte_lpm_rule *sub_rule;
+ uint8_t sub_depth = 0;
+
+ ip_masked = ip & depth_to_mask(depth);
+ sub_rule = find_previous_rule(lpm, ip, depth, &sub_depth);
+
+ /*
+ * If the input depth value is less than 25 use function
+ * delete_depth_small otherwise use delete_depth_big.
+ */
+ if (depth <= MAX_DEPTH_TBL24)
+ delete_depth_small(lpm, ip_masked, depth, sub_rule, sub_depth);
+ else
+ delete_depth_big(lpm, ip_masked, depth, sub_rule, sub_depth);
}
/*
* Deletes a rule
*/
int
-rte_lpm_delete(struct rte_lpm *lpm, uint32_t ip, uint8_t depth)
+rte_lpm_delete(struct rte_lpm *lpm, uint32_t ip, uint8_t depth,
+ uint16_t *next_hop, uint8_t scope)
{
- int32_t rule_to_delete_index, sub_rule_index;
+ struct rte_lpm_rule *rule;
uint32_t ip_masked;
- uint8_t sub_rule_depth;
+
/*
* Check input arguments. Note: IP must be a positive integer of 32
* bits in length therefore it need not be checked.
*/
- if ((lpm == NULL) || (depth < 1) || (depth > RTE_LPM_MAX_DEPTH)) {
+ if ((lpm == NULL) || (depth >= RTE_LPM_MAX_DEPTH))
return -EINVAL;
- }
ip_masked = ip & depth_to_mask(depth);
/*
- * Find the index of the input rule, that needs to be deleted, in the
+ * Find the input rule, that needs to be deleted, in the
* rule table.
*/
- rule_to_delete_index = rule_find(lpm, ip_masked, depth);
+ rule = rule_find(lpm, ip_masked, depth, scope);
/*
* Check if rule_to_delete_index was found. If no rule was found the
- * function rule_find returns -EINVAL.
+ * function rule_find returns -E_RTE_NO_TAILQ.
*/
- if (rule_to_delete_index < 0)
+ if (rule == NULL)
return -EINVAL;
- /* Delete the rule from the rule table. */
- rule_delete(lpm, rule_to_delete_index, depth);
-
/*
- * Find rule to replace the rule_to_delete. If there is no rule to
- * replace the rule_to_delete we return -1 and invalidate the table
- * entries associated with this rule.
+ * Return next hop so caller can avoid lookup.
*/
- sub_rule_depth = 0;
- sub_rule_index = find_previous_rule(lpm, ip, depth, &sub_rule_depth);
+ if (next_hop)
+ *next_hop = rule->next_hop;
- /*
- * If the input depth value is less than 25 use function
- * delete_depth_small otherwise use delete_depth_big.
- */
- if (depth <= MAX_DEPTH_TBL24) {
- return delete_depth_small(lpm, ip_masked, depth,
- sub_rule_index, sub_rule_depth);
- }
- else { /* If depth > MAX_DEPTH_TBL24 */
- return delete_depth_big(lpm, ip_masked, depth, sub_rule_index, sub_rule_depth);
- }
+ /* Delete the rule from the rule table. */
+ rule_delete(lpm, rule, depth);
+
+ /* Replace with next level up rule */
+ rule_replace(lpm, ip, depth);
+
+ return 0;
}
/*
* Delete all rules from the LPM table.
*/
void
-rte_lpm_delete_all(struct rte_lpm *lpm)
+rte_lpm_delete_all(struct rte_lpm *lpm, rte_lpm_walk_func_t func, void *arg)
{
- /* Zero rule information. */
- memset(lpm->rule_info, 0, sizeof(lpm->rule_info));
+ uint8_t depth;
/* Zero tbl24. */
memset(lpm->tbl24, 0, sizeof(lpm->tbl24));
/* Zero tbl8. */
- memset(lpm->tbl8, 0, sizeof(lpm->tbl8));
+ memset(lpm->tbl8, 0,
+ lpm->tbl8_num_groups * RTE_LPM_TBL8_GROUP_NUM_ENTRIES
+ * sizeof(struct rte_lpm_tbl8_entry));
+ lpm->tbl8_rover = lpm->tbl8_num_groups - 1;
/* Delete all rules form the rules table. */
- memset(lpm->rules_tbl, 0, sizeof(lpm->rules_tbl[0]) * lpm->max_rules);
+ for (depth = 0; depth < RTE_LPM_MAX_DEPTH; ++depth) {
+ struct rte_lpm_rules_tree *head = &lpm->rules[depth];
+ struct rte_lpm_rule *r, *n;
+
+ RB_FOREACH_SAFE(r, rte_lpm_rules_tree, head, n) {
+ if (func)
+ func(lpm, r->ip, depth, r->scope,
+ r->next_hop, arg);
+ rule_delete(lpm, r, depth);
+ }
+ }
+}
+
+/*
+ * Iterate over LPM rules
+ */
+void
+rte_lpm_walk(struct rte_lpm *lpm, rte_lpm_walk_func_t func, void *arg)
+{
+ uint8_t depth;
+
+ for (depth = 0; depth < RTE_LPM_MAX_DEPTH; depth++) {
+ struct rte_lpm_rules_tree *head = &lpm->rules[depth];
+ struct rte_lpm_rule *r, *n;
+
+ RB_FOREACH_SAFE(r, rte_lpm_rules_tree, head, n) {
+ func(lpm, r->ip, depth, r->scope, r->next_hop, arg);
+ }
+ }
+}
+
+/* Count usage of tbl8 */
+unsigned
+rte_lpm_tbl8_count(const struct rte_lpm *lpm)
+{
+ unsigned i, count = 0;
+
+ for (i = 0; i < lpm->tbl8_num_groups; i++) {
+ const struct rte_lpm_tbl8_entry *tbl8_entry
+ = lpm->tbl8 + i * RTE_LPM_TBL8_GROUP_NUM_ENTRIES;
+ if (tbl8_entry->valid_group)
+ ++count;
+ }
+ return count;
}
diff --git a/lib/librte_lpm/rte_lpm.h b/lib/librte_lpm/rte_lpm.h
index c299ce2..a39e3b5 100644
--- a/lib/librte_lpm/rte_lpm.h
+++ b/lib/librte_lpm/rte_lpm.h
@@ -2,6 +2,7 @@
* BSD LICENSE
*
* Copyright(c) 2010-2014 Intel Corporation. All rights reserved.
+ * Copyright(c) 2012-2015 Brocade Communications Systems
* All rights reserved.
*
* Redistribution and use in source and binary forms, with or without
@@ -43,11 +44,9 @@
#include <sys/queue.h>
#include <stdint.h>
#include <stdlib.h>
+#include <bsd/sys/tree.h>
#include <rte_branch_prediction.h>
-#include <rte_byteorder.h>
#include <rte_memory.h>
-#include <rte_common.h>
-#include <rte_vect.h>
#ifdef __cplusplus
extern "C" {
@@ -55,130 +54,89 @@ extern "C" {
/** Max number of characters in LPM name. */
#define RTE_LPM_NAMESIZE 32
+
+ /** Maximum depth value possible for IPv4 LPM. */
+#define RTE_LPM_MAX_DEPTH 33
+
+/** Total number of tbl24 entries. */
+#define RTE_LPM_TBL24_NUM_ENTRIES (1 << 24)
-/** Maximum depth value possible for IPv4 LPM. */
-#define RTE_LPM_MAX_DEPTH 32
+/** Number of entries in a tbl8 group. */
+#define RTE_LPM_TBL8_GROUP_NUM_ENTRIES 256
-/** @internal Total number of tbl24 entries. */
-#define RTE_LPM_TBL24_NUM_ENTRIES (1 << 24)
-
-/** @internal Number of entries in a tbl8 group. */
-#define RTE_LPM_TBL8_GROUP_NUM_ENTRIES 256
-
-/** @internal Total number of tbl8 groups in the tbl8. */
-#define RTE_LPM_TBL8_NUM_GROUPS 256
-
-/** @internal Total number of tbl8 entries. */
-#define RTE_LPM_TBL8_NUM_ENTRIES (RTE_LPM_TBL8_NUM_GROUPS * \
- RTE_LPM_TBL8_GROUP_NUM_ENTRIES)
-
-/** @internal Macro to enable/disable run-time checks. */
-#if defined(RTE_LIBRTE_LPM_DEBUG)
-#define RTE_LPM_RETURN_IF_TRUE(cond, retval) do { \
- if (cond) return (retval); \
-} while (0)
-#else
-#define RTE_LPM_RETURN_IF_TRUE(cond, retval)
-#endif
-
-/** @internal bitmask with valid and ext_entry/valid_group fields set */
-#define RTE_LPM_VALID_EXT_ENTRY_BITMASK 0x0300
-
-/** Bitmask used to indicate successful lookup */
-#define RTE_LPM_LOOKUP_SUCCESS 0x0100
-
-#if RTE_BYTE_ORDER == RTE_LITTLE_ENDIAN
-/** @internal Tbl24 entry structure. */
+/** Tbl24 entry structure. */
struct rte_lpm_tbl24_entry {
+ /* Using single uint8_t to store 3 values. */
+ uint8_t valid :1; /**< Validation flag. */
+ uint8_t ext_entry :1; /**< external entry? */
+ uint8_t depth; /**< Rule depth. */
/* Stores Next hop or group index (i.e. gindex)into tbl8. */
union {
- uint8_t next_hop;
- uint8_t tbl8_gindex;
+ uint16_t next_hop;
+ uint16_t tbl8_gindex;
};
- /* Using single uint8_t to store 3 values. */
- uint8_t valid :1; /**< Validation flag. */
- uint8_t ext_entry :1; /**< External entry. */
- uint8_t depth :6; /**< Rule depth. */
};
-/** @internal Tbl8 entry structure. */
+/** Tbl8 entry structure. */
struct rte_lpm_tbl8_entry {
- uint8_t next_hop; /**< next hop. */
- /* Using single uint8_t to store 3 values. */
+ uint16_t next_hop; /**< next hop. */
+ uint8_t depth; /**< Rule depth. */
uint8_t valid :1; /**< Validation flag. */
uint8_t valid_group :1; /**< Group validation flag. */
- uint8_t depth :6; /**< Rule depth. */
-};
-#else
-struct rte_lpm_tbl24_entry {
- uint8_t depth :6;
- uint8_t ext_entry :1;
- uint8_t valid :1;
- union {
- uint8_t tbl8_gindex;
- uint8_t next_hop;
- };
-};
-
-struct rte_lpm_tbl8_entry {
- uint8_t depth :6;
- uint8_t valid_group :1;
- uint8_t valid :1;
- uint8_t next_hop;
-};
-#endif
-
-/** @internal Rule structure. */
-struct rte_lpm_rule {
- uint32_t ip; /**< Rule IP address. */
- uint8_t next_hop; /**< Rule next hop. */
-};
-
-/** @internal Contains metadata about the rules table. */
-struct rte_lpm_rule_info {
- uint32_t used_rules; /**< Used rules so far. */
- uint32_t first_rule; /**< Indexes the first rule of a given depth. */
};
/** @internal LPM structure. */
struct rte_lpm {
+ TAILQ_ENTRY(rte_lpm) next; /**< Next in list. */
+
/* LPM metadata. */
- char name[RTE_LPM_NAMESIZE]; /**< Name of the lpm. */
- uint32_t max_rules; /**< Max. balanced rules per lpm. */
- struct rte_lpm_rule_info rule_info[RTE_LPM_MAX_DEPTH]; /**< Rule info table. */
+ char name[RTE_LPM_NAMESIZE]; /**< Name of the lpm. */
+
+ /**< LPM rules. */
+ int socket_id; /**< socket to allocate rules on */
+ RB_HEAD(rte_lpm_rules_tree, rte_lpm_rule) rules[RTE_LPM_MAX_DEPTH];
/* LPM Tables. */
- struct rte_lpm_tbl24_entry tbl24[RTE_LPM_TBL24_NUM_ENTRIES] \
+ uint32_t tbl8_num_groups; /* Number of slots */
+ uint32_t tbl8_rover; /* Next slot to check */
+ struct rte_lpm_tbl8_entry *tbl8; /* Actual table */
+
+ struct rte_lpm_tbl24_entry tbl24[RTE_LPM_TBL24_NUM_ENTRIES]
__rte_cache_aligned; /**< LPM tbl24 table. */
- struct rte_lpm_tbl8_entry tbl8[RTE_LPM_TBL8_NUM_ENTRIES] \
- __rte_cache_aligned; /**< LPM tbl8 table. */
- struct rte_lpm_rule rules_tbl[0] \
- __rte_cache_aligned; /**< LPM rules. */
};
/**
+ * Compiler memory barrier.
+ *
+ * Protects against compiler optimization of ordered operations.
+ */
+#ifdef __GNUC__
+#define rte_barrier() asm volatile("": : :"memory")
+#else
+/* Intel compiler has intrinsic for this. */
+#define rte_barrier() __memory_barrier()
+#endif
+
+/**
* Create an LPM object.
*
* @param name
* LPM object name
* @param socket_id
* NUMA socket ID for LPM table memory allocation
- * @param max_rules
- * Maximum number of LPM rules that can be added
- * @param flags
- * This parameter is currently unused
* @return
* Handle to LPM object on success, NULL otherwise with rte_errno set
* to an appropriate values. Possible rte_errno values include:
* - E_RTE_NO_CONFIG - function could not get pointer to rte_config structure
* - E_RTE_SECONDARY - function was called from a secondary process instance
+ * - E_RTE_NO_TAILQ - no tailq list could be got for the lpm object list
* - EINVAL - invalid parameter passed to function
* - ENOSPC - the maximum number of memzones has already been allocated
* - EEXIST - a memzone with the same name already exists
* - ENOMEM - no appropriate memory area found in which to create memzone
*/
struct rte_lpm *
-rte_lpm_create(const char *name, int socket_id, int max_rules, int flags);
+rte_lpm_create(const char *name, int socket_id);
/**
* Find an existing LPM object and return a pointer to it.
@@ -215,11 +173,14 @@ rte_lpm_free(struct rte_lpm *lpm);
* Depth of the rule to be added to the LPM table
* @param next_hop
* Next hop of the rule to be added to the LPM table
+ * @param scope
+ * Priority scope of this route rule
* @return
* 0 on success, negative value otherwise
*/
int
-rte_lpm_add(struct rte_lpm *lpm, uint32_t ip, uint8_t depth, uint8_t next_hop);
+rte_lpm_add(struct rte_lpm *lpm, uint32_t ip, uint8_t depth,
+ uint16_t next_hop, uint8_t scope);
/**
* Check if a rule is present in the LPM table,
@@ -231,6 +192,8 @@ rte_lpm_add(struct rte_lpm *lpm, uint32_t ip, uint8_t depth, uint8_t next_hop);
* IP of the rule to be searched
* @param depth
* Depth of the rule to searched
+ * @param scope
+ * Priority scope of the rule
* @param next_hop
* Next hop of the rule (valid only if it is found)
* @return
@@ -238,7 +201,7 @@ rte_lpm_add(struct rte_lpm *lpm, uint32_t ip, uint8_t depth, uint8_t next_hop);
*/
int
rte_lpm_is_rule_present(struct rte_lpm *lpm, uint32_t ip, uint8_t depth,
-uint8_t *next_hop);
+ uint16_t *next_hop, uint8_t scope);
/**
* Delete a rule from the LPM table.
@@ -249,20 +212,30 @@ uint8_t *next_hop);
* IP of the rule to be deleted from the LPM table
* @param depth
* Depth of the rule to be deleted from the LPM table
+ * @param scope
+ * Priority scope of this route rule
* @return
* 0 on success, negative value otherwise
*/
int
-rte_lpm_delete(struct rte_lpm *lpm, uint32_t ip, uint8_t depth);
+rte_lpm_delete(struct rte_lpm *lpm, uint32_t ip, uint8_t depth,
+ uint16_t *next_hop, uint8_t scope);
+
+/** iterator function for LPM rule */
+typedef void (*rte_lpm_walk_func_t)(struct rte_lpm *lpm,
+ uint32_t ip, uint8_t depth, uint8_t scope,
+ uint16_t next_hop, void *arg);
/**
* Delete all rules from the LPM table.
*
* @param lpm
* LPM object handle
+ * @param func
+ * Optional callback for each entry
*/
void
-rte_lpm_delete_all(struct rte_lpm *lpm);
+rte_lpm_delete_all(struct rte_lpm *lpm, rte_lpm_walk_func_t func, void *arg);
/**
* Lookup an IP into the LPM table.
@@ -277,200 +250,80 @@ rte_lpm_delete_all(struct rte_lpm *lpm);
* -EINVAL for incorrect arguments, -ENOENT on lookup miss, 0 on lookup hit
*/
static inline int
-rte_lpm_lookup(struct rte_lpm *lpm, uint32_t ip, uint8_t *next_hop)
+rte_lpm_lookup(struct rte_lpm *lpm, uint32_t ip, uint16_t *next_hop)
{
- unsigned tbl24_index = (ip >> 8);
- uint16_t tbl_entry;
+ struct rte_lpm_tbl24_entry tbl24;
+ struct rte_lpm_tbl8_entry tbl8;
- /* DEBUG: Check user input arguments. */
- RTE_LPM_RETURN_IF_TRUE(((lpm == NULL) || (next_hop == NULL)), -EINVAL);
+ /* Copy tbl24 entry (to avoid concurrency issues) */
+ tbl24 = lpm->tbl24[ip >> 8];
+ rte_barrier();
- /* Copy tbl24 entry */
- tbl_entry = *(const uint16_t *)&lpm->tbl24[tbl24_index];
+ /*
+ * Use the tbl24_index to access the required tbl24 entry then check if
+ * the tbl24 entry is INVALID, if so return -ENOENT.
+ */
+ if (unlikely(!tbl24.valid))
+ return -ENOENT; /* Lookup miss. */
- /* Copy tbl8 entry (only if needed) */
- if (unlikely((tbl_entry & RTE_LPM_VALID_EXT_ENTRY_BITMASK) ==
- RTE_LPM_VALID_EXT_ENTRY_BITMASK)) {
+ /*
+ * If tbl24 entry is valid check if it is NOT extended (i.e. it does
+ * not use a tbl8 extension) if so return the next hop.
+ */
+ if (tbl24.ext_entry == 0) {
+ *next_hop = tbl24.next_hop;
+ return 0; /* Lookup hit. */
+ }
- unsigned tbl8_index = (uint8_t)ip +
- ((uint8_t)tbl_entry * RTE_LPM_TBL8_GROUP_NUM_ENTRIES);
+ /*
+ * If tbl24 entry is valid and extended calculate the index into the
+ * tbl8 entry.
+ */
+ tbl8 = lpm->tbl8[tbl24.tbl8_gindex * RTE_LPM_TBL8_GROUP_NUM_ENTRIES
+ + (ip & 0xFF)];
+ rte_barrier();
- tbl_entry = *(const uint16_t *)&lpm->tbl8[tbl8_index];
- }
+ /* Check if the tbl8 entry is invalid and if so return -ENOENT. */
+ if (unlikely(!tbl8.valid))
+ return -ENOENT; /* Lookup miss. */
- *next_hop = (uint8_t)tbl_entry;
- return (tbl_entry & RTE_LPM_LOOKUP_SUCCESS) ? 0 : -ENOENT;
+ /* If the tbl8 entry is valid, return the next_hop. */
+ *next_hop = tbl8.next_hop;
+ return 0; /* Lookup hit. */
}
/**
- * Lookup multiple IP addresses in an LPM table. This may be implemented as a
- * macro, so the address of the function should not be used.
+ * Iterate over all rules in the LPM table.
*
* @param lpm
* LPM object handle
- * @param ips
- * Array of IPs to be looked up in the LPM table
- * @param next_hops
- * Next hop of the most specific rule found for IP (valid on lookup hit only).
- * This is an array of two byte values. The most significant byte in each
- * value says whether the lookup was successful (bitmask
- * RTE_LPM_LOOKUP_SUCCESS is set). The least significant byte is the
- * actual next hop.
- * @param n
- * Number of elements in ips (and next_hops) array to lookup. This should be a
- * compile time constant, and divisible by 8 for best performance.
- * @return
- * -EINVAL for incorrect arguments, otherwise 0
+ * @param func
+ * Callback to display
+ * @param arg
+ * Argument passed to iterator
*/
-#define rte_lpm_lookup_bulk(lpm, ips, next_hops, n) \
- rte_lpm_lookup_bulk_func(lpm, ips, next_hops, n)
-
-static inline int
-rte_lpm_lookup_bulk_func(const struct rte_lpm *lpm, const uint32_t * ips,
- uint16_t * next_hops, const unsigned n)
-{
- unsigned i;
- unsigned tbl24_indexes[n];
-
- /* DEBUG: Check user input arguments. */
- RTE_LPM_RETURN_IF_TRUE(((lpm == NULL) || (ips == NULL) ||
- (next_hops == NULL)), -EINVAL);
-
- for (i = 0; i < n; i++) {
- tbl24_indexes[i] = ips[i] >> 8;
- }
-
- for (i = 0; i < n; i++) {
- /* Simply copy tbl24 entry to output */
- next_hops[i] = *(const uint16_t *)&lpm->tbl24[tbl24_indexes[i]];
-
- /* Overwrite output with tbl8 entry if needed */
- if (unlikely((next_hops[i] & RTE_LPM_VALID_EXT_ENTRY_BITMASK) ==
- RTE_LPM_VALID_EXT_ENTRY_BITMASK)) {
-
- unsigned tbl8_index = (uint8_t)ips[i] +
- ((uint8_t)next_hops[i] *
- RTE_LPM_TBL8_GROUP_NUM_ENTRIES);
-
- next_hops[i] = *(const uint16_t *)&lpm->tbl8[tbl8_index];
- }
- }
- return 0;
-}
+void
+rte_lpm_walk(struct rte_lpm *lpm, rte_lpm_walk_func_t func, void *arg);
-/* Mask four results. */
-#define RTE_LPM_MASKX4_RES UINT64_C(0x00ff00ff00ff00ff)
+/**
+ * Return the number of entries in the Tbl8 array
+ *
+ * @param lpm
+ * LPM object handle
+ */
+unsigned
+rte_lpm_tbl8_count(const struct rte_lpm *lpm);
/**
- * Lookup four IP addresses in an LPM table.
+ * Return the number of free entries in the Tbl8 array
*
* @param lpm
* LPM object handle
- * @param ip
- * Four IPs to be looked up in the LPM table
- * @param hop
- * Next hop of the most specific rule found for IP (valid on lookup hit only).
- * This is an 4 elements array of two byte values.
- * If the lookup was succesfull for the given IP, then least significant byte
- * of the corresponding element is the actual next hop and the most
- * significant byte is zero.
- * If the lookup for the given IP failed, then corresponding element would
- * contain default value, see description of then next parameter.
- * @param defv
- * Default value to populate into corresponding element of hop[] array,
- * if lookup would fail.
*/
-static inline void
-rte_lpm_lookupx4(const struct rte_lpm *lpm, __m128i ip, uint16_t hop[4],
- uint16_t defv)
+static inline unsigned
+rte_lpm_tbl8_free_count(const struct rte_lpm *lpm)
{
- __m128i i24;
- rte_xmm_t i8;
- uint16_t tbl[4];
- uint64_t idx, pt;
-
- const __m128i mask8 =
- _mm_set_epi32(UINT8_MAX, UINT8_MAX, UINT8_MAX, UINT8_MAX);
-
- /*
- * RTE_LPM_VALID_EXT_ENTRY_BITMASK for 4 LPM entries
- * as one 64-bit value (0x0300030003000300).
- */
- const uint64_t mask_xv =
- ((uint64_t)RTE_LPM_VALID_EXT_ENTRY_BITMASK |
- (uint64_t)RTE_LPM_VALID_EXT_ENTRY_BITMASK << 16 |
- (uint64_t)RTE_LPM_VALID_EXT_ENTRY_BITMASK << 32 |
- (uint64_t)RTE_LPM_VALID_EXT_ENTRY_BITMASK << 48);
-
- /*
- * RTE_LPM_LOOKUP_SUCCESS for 4 LPM entries
- * as one 64-bit value (0x0100010001000100).
- */
- const uint64_t mask_v =
- ((uint64_t)RTE_LPM_LOOKUP_SUCCESS |
- (uint64_t)RTE_LPM_LOOKUP_SUCCESS << 16 |
- (uint64_t)RTE_LPM_LOOKUP_SUCCESS << 32 |
- (uint64_t)RTE_LPM_LOOKUP_SUCCESS << 48);
-
- /* get 4 indexes for tbl24[]. */
- i24 = _mm_srli_epi32(ip, CHAR_BIT);
-
- /* extract values from tbl24[] */
- idx = _mm_cvtsi128_si64(i24);
- i24 = _mm_srli_si128(i24, sizeof(uint64_t));
-
- tbl[0] = *(const uint16_t *)&lpm->tbl24[(uint32_t)idx];
- tbl[1] = *(const uint16_t *)&lpm->tbl24[idx >> 32];
-
- idx = _mm_cvtsi128_si64(i24);
-
- tbl[2] = *(const uint16_t *)&lpm->tbl24[(uint32_t)idx];
- tbl[3] = *(const uint16_t *)&lpm->tbl24[idx >> 32];
-
- /* get 4 indexes for tbl8[]. */
- i8.x = _mm_and_si128(ip, mask8);
-
- pt = (uint64_t)tbl[0] |
- (uint64_t)tbl[1] << 16 |
- (uint64_t)tbl[2] << 32 |
- (uint64_t)tbl[3] << 48;
-
- /* search successfully finished for all 4 IP addresses. */
- if (likely((pt & mask_xv) == mask_v)) {
- uintptr_t ph = (uintptr_t)hop;
- *(uint64_t *)ph = pt & RTE_LPM_MASKX4_RES;
- return;
- }
-
- if (unlikely((pt & RTE_LPM_VALID_EXT_ENTRY_BITMASK) ==
- RTE_LPM_VALID_EXT_ENTRY_BITMASK)) {
- i8.u32[0] = i8.u32[0] +
- (uint8_t)tbl[0] * RTE_LPM_TBL8_GROUP_NUM_ENTRIES;
- tbl[0] = *(const uint16_t *)&lpm->tbl8[i8.u32[0]];
- }
- if (unlikely((pt >> 16 & RTE_LPM_VALID_EXT_ENTRY_BITMASK) ==
- RTE_LPM_VALID_EXT_ENTRY_BITMASK)) {
- i8.u32[1] = i8.u32[1] +
- (uint8_t)tbl[1] * RTE_LPM_TBL8_GROUP_NUM_ENTRIES;
- tbl[1] = *(const uint16_t *)&lpm->tbl8[i8.u32[1]];
- }
- if (unlikely((pt >> 32 & RTE_LPM_VALID_EXT_ENTRY_BITMASK) ==
- RTE_LPM_VALID_EXT_ENTRY_BITMASK)) {
- i8.u32[2] = i8.u32[2] +
- (uint8_t)tbl[2] * RTE_LPM_TBL8_GROUP_NUM_ENTRIES;
- tbl[2] = *(const uint16_t *)&lpm->tbl8[i8.u32[2]];
- }
- if (unlikely((pt >> 48 & RTE_LPM_VALID_EXT_ENTRY_BITMASK) ==
- RTE_LPM_VALID_EXT_ENTRY_BITMASK)) {
- i8.u32[3] = i8.u32[3] +
- (uint8_t)tbl[3] * RTE_LPM_TBL8_GROUP_NUM_ENTRIES;
- tbl[3] = *(const uint16_t *)&lpm->tbl8[i8.u32[3]];
- }
-
- hop[0] = (tbl[0] & RTE_LPM_LOOKUP_SUCCESS) ? (uint8_t)tbl[0] : defv;
- hop[1] = (tbl[1] & RTE_LPM_LOOKUP_SUCCESS) ? (uint8_t)tbl[1] : defv;
- hop[2] = (tbl[2] & RTE_LPM_LOOKUP_SUCCESS) ? (uint8_t)tbl[2] : defv;
- hop[3] = (tbl[3] & RTE_LPM_LOOKUP_SUCCESS) ? (uint8_t)tbl[3] : defv;
+ return lpm->tbl8_num_groups - rte_lpm_tbl8_count(lpm);
}
#ifdef __cplusplus
--
2.1.4
^ permalink raw reply [relevance 1%]
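For readers following the API changes in the patch above, here is a minimal usage sketch of the reworked interface: rte_lpm_create() loses the max_rules/flags arguments, rte_lpm_add() and rte_lpm_delete() take a scope value and 16-bit next hops, and rte_lpm_walk() iterates the per-depth rule trees. The prefix, next-hop and scope values below are invented for illustration, and error handling is trimmed.

#include <stdio.h>
#include <rte_lcore.h>
#include <rte_lpm.h>

static void
dump_rule(struct rte_lpm *lpm, uint32_t ip, uint8_t depth,
          uint8_t scope, uint16_t next_hop, void *arg)
{
        (void)lpm;
        (void)arg;
        printf("%u.%u.%u.%u/%u scope %u -> %u\n",
               (ip >> 24) & 0xff, (ip >> 16) & 0xff,
               (ip >> 8) & 0xff, ip & 0xff, depth, scope, next_hop);
}

static int
lpm_example(void)
{
        /* 192.168.1.0/24 in host byte order; next hop 7 and scope 0 are arbitrary. */
        const uint32_t prefix = (192u << 24) | (168u << 16) | (1u << 8);
        struct rte_lpm *lpm;
        uint16_t nh;

        /* No max_rules/flags arguments: rules now live in per-depth trees. */
        lpm = rte_lpm_create("example", rte_socket_id());
        if (lpm == NULL)
                return -1;

        if (rte_lpm_add(lpm, prefix, 24, 7, 0) < 0)
                goto out;

        /* Longest-prefix match for 192.168.1.42. */
        if (rte_lpm_lookup(lpm, prefix | 42, &nh) == 0)
                printf("next hop %u\n", nh);

        /* Visit every installed rule. */
        rte_lpm_walk(lpm, dump_rule, NULL);

        /* Delete hands back the old next hop so the caller can skip a lookup. */
        rte_lpm_delete(lpm, prefix, 24, &nh, 0);
out:
        rte_lpm_free(lpm);
        return 0;
}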
* Re: [dpdk-dev] [PATCH v1 0/3] lpm: increase number of next hops for lpm (ipv4)
2015-10-23 16:20 0% ` [dpdk-dev] [PATCH v1 0/3] lpm: increase number of next hops for lpm (ipv4) Matthew Hall
2015-10-23 16:33 3% ` Stephen Hemminger
@ 2015-10-24 6:09 0% ` Matthew Hall
2015-10-25 17:52 0% ` Vladimir Medvedkin
2015-10-26 12:13 0% ` Jastrzebski, MichalX K
1 sibling, 2 replies; 200+ results
From: Matthew Hall @ 2015-10-24 6:09 UTC (permalink / raw)
To: Michal Jastrzebski, Michal Kobylinski; +Cc: dev
[-- Attachment #1: Type: text/plain, Size: 1489 bytes --]
On 10/23/15 9:20 AM, Matthew Hall wrote:
> On Fri, Oct 23, 2015 at 03:51:48PM +0200, Michal Jastrzebski wrote:
>> From: Michal Kobylinski <michalx.kobylinski@intel.com>
>>
>> The current DPDK implementation for LPM for IPv4 and IPv6 limits the
>> number of next hops to 256, as the next hop ID is an 8-bit long field.
>> Proposed extension increase number of next hops for IPv4 to 2^24 and
>> also allows 32-bits read/write operations.
>>
>> This patchset requires additional change to rte_table library to meet
>> ABI compatibility requirements. A v2 will be sent next week.
>
> I also have a patchset for this.
>
> I will send it out as well so we could compare.
>
> Matthew.
Sorry about the delay; I only work on DPDK in my personal time, not as
part of a job. My patchset is attached to this email.
One possible advantage of my patchset, compared to the others, is that
the next-hop space problem is fixed in both IPv4 and IPv6, which avoids
introducing asymmetry between the two address families, something I try
to avoid as much as humanly possible.
This is because my application code is green-field, so I absolutely do
not want to bake any ugly hacks or incompatibilities into it if I can
possibly avoid it.
Otherwise, I am not as expert on rte_lpm as some of the full-time guys,
but I think that with four or five of us in the thread hammering out
patches we can create something great together, and I am very happy
about that.
Matthew.
[-- Attachment #2: 0001-rte_lpm.h-use-24-bit-extended-next-hop.patch --]
[-- Type: text/plain, Size: 4336 bytes --]
From 6a8e3428344ed11af8a1999dcec5c31c10f37c3a Mon Sep 17 00:00:00 2001
From: Matthew Hall <mhall@mhcomputing.net>
Date: Sat, 27 Jun 2015 22:49:46 +0000
Subject: [PATCH 1/8] rte_lpm.h: use 24 bit extended next hop
Signed-off-by: Matthew Hall <mhall@mhcomputing.net>
---
lib/librte_lpm/rte_lpm.h | 46 +++++++++++++++++++++++++++++-----------------
1 file changed, 29 insertions(+), 17 deletions(-)
diff --git a/lib/librte_lpm/rte_lpm.h b/lib/librte_lpm/rte_lpm.h
index c299ce2..c677c4a 100644
--- a/lib/librte_lpm/rte_lpm.h
+++ b/lib/librte_lpm/rte_lpm.h
@@ -82,32 +82,36 @@ extern "C" {
#endif
/** @internal bitmask with valid and ext_entry/valid_group fields set */
-#define RTE_LPM_VALID_EXT_ENTRY_BITMASK 0x0300
+#define RTE_LPM_VALID_EXT_ENTRY_BITMASK 0x03000000
+
+/** @internal bitmask with next_hop field set */
+#define RTE_LPM_NEXT_HOP_BITMASK 0x00FFFFFF
/** Bitmask used to indicate successful lookup */
-#define RTE_LPM_LOOKUP_SUCCESS 0x0100
+#define RTE_LPM_LOOKUP_SUCCESS 0x01000000
+
#if RTE_BYTE_ORDER == RTE_LITTLE_ENDIAN
/** @internal Tbl24 entry structure. */
struct rte_lpm_tbl24_entry {
- /* Stores Next hop or group index (i.e. gindex)into tbl8. */
+ /* Stores Next hop or group index (i.e. gindex) into tbl8. */
union {
- uint8_t next_hop;
- uint8_t tbl8_gindex;
- };
+ uint32_t next_hop :24;
+ uint32_t tbl8_gindex :24;
+ } __attribute__((__packed__));
/* Using single uint8_t to store 3 values. */
- uint8_t valid :1; /**< Validation flag. */
- uint8_t ext_entry :1; /**< External entry. */
- uint8_t depth :6; /**< Rule depth. */
+ uint32_t valid :1; /**< Validation flag. */
+ uint32_t ext_entry :1; /**< External entry. */
+ uint32_t depth :6; /**< Rule depth. */
};
/** @internal Tbl8 entry structure. */
struct rte_lpm_tbl8_entry {
- uint8_t next_hop; /**< next hop. */
+ uint32_t next_hop :24; /**< next hop. */
/* Using single uint8_t to store 3 values. */
- uint8_t valid :1; /**< Validation flag. */
- uint8_t valid_group :1; /**< Group validation flag. */
- uint8_t depth :6; /**< Rule depth. */
+ uint8_t valid :1; /**< Validation flag. */
+ uint8_t valid_group :1; /**< Group validation flag. */
+ uint8_t depth :6; /**< Rule depth. */
};
#else
struct rte_lpm_tbl24_entry {
@@ -130,8 +134,8 @@ struct rte_lpm_tbl8_entry {
/** @internal Rule structure. */
struct rte_lpm_rule {
- uint32_t ip; /**< Rule IP address. */
- uint8_t next_hop; /**< Rule next hop. */
+ uint32_t ip; /**< Rule IP address. */
+ uint32_t next_hop; /**< Rule next hop. */
};
/** @internal Contains metadata about the rules table. */
@@ -219,7 +223,7 @@ rte_lpm_free(struct rte_lpm *lpm);
* 0 on success, negative value otherwise
*/
int
-rte_lpm_add(struct rte_lpm *lpm, uint32_t ip, uint8_t depth, uint8_t next_hop);
+rte_lpm_add(struct rte_lpm *lpm, uint32_t ip, uint8_t depth, uint32_t next_hop);
/**
* Check if a rule is present in the LPM table,
@@ -238,7 +242,7 @@ rte_lpm_add(struct rte_lpm *lpm, uint32_t ip, uint8_t depth, uint8_t next_hop);
*/
int
rte_lpm_is_rule_present(struct rte_lpm *lpm, uint32_t ip, uint8_t depth,
-uint8_t *next_hop);
+uint32_t *next_hop);
/**
* Delete a rule from the LPM table.
@@ -301,6 +305,8 @@ rte_lpm_lookup(struct rte_lpm *lpm, uint32_t ip, uint8_t *next_hop)
*next_hop = (uint8_t)tbl_entry;
return (tbl_entry & RTE_LPM_LOOKUP_SUCCESS) ? 0 : -ENOENT;
}
+int
+rte_lpm_lookup(struct rte_lpm *lpm, uint32_t ip, uint32_t *next_hop);
/**
* Lookup multiple IP addresses in an LPM table. This may be implemented as a
@@ -360,6 +366,9 @@ rte_lpm_lookup_bulk_func(const struct rte_lpm *lpm, const uint32_t * ips,
/* Mask four results. */
#define RTE_LPM_MASKX4_RES UINT64_C(0x00ff00ff00ff00ff)
+int
+rte_lpm_lookup_bulk(const struct rte_lpm *lpm, const uint32_t * ips,
+ uint32_t * next_hops, const unsigned n);
/**
* Lookup four IP addresses in an LPM table.
@@ -472,6 +481,9 @@ rte_lpm_lookupx4(const struct rte_lpm *lpm, __m128i ip, uint16_t hop[4],
hop[2] = (tbl[2] & RTE_LPM_LOOKUP_SUCCESS) ? (uint8_t)tbl[2] : defv;
hop[3] = (tbl[3] & RTE_LPM_LOOKUP_SUCCESS) ? (uint8_t)tbl[3] : defv;
}
+void
+rte_lpm_lookupx4(const struct rte_lpm *lpm, __m128i ip, uint32_t hop[4],
+ uint32_t defv);
#ifdef __cplusplus
}
--
1.9.1
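As a standalone illustration of the layout this patch introduces (not part of the patch itself): the next hop / tbl8 group index field grows from 8 to 24 bits, so the valid and ext_entry flags land on bits 24 and 25 of a 32-bit entry, which is where the new 0x01000000 and 0x03000000 masks come from. The LPM24_* names below are invented; the values mirror RTE_LPM_LOOKUP_SUCCESS, RTE_LPM_VALID_EXT_ENTRY_BITMASK and RTE_LPM_NEXT_HOP_BITMASK from the patch.

#include <stdint.h>

#define LPM24_LOOKUP_SUCCESS    0x01000000u /* bit 24: valid           */
#define LPM24_VALID_EXT_BITMASK 0x03000000u /* bits 24-25: valid + ext */
#define LPM24_NEXT_HOP_BITMASK  0x00FFFFFFu /* bits 0-23: next hop     */

/* A raw 32-bit tbl24/tbl8 entry is a hit when the valid bit is set. */
static inline int
lpm24_is_hit(uint32_t tbl_entry)
{
        return (tbl_entry & LPM24_LOOKUP_SUCCESS) != 0;
}

/* With both flag bits set, the low 24 bits index a tbl8 group instead
 * of holding a next hop, exactly as in the 8-bit scheme. */
static inline int
lpm24_is_ext(uint32_t tbl_entry)
{
        return (tbl_entry & LPM24_VALID_EXT_BITMASK) == LPM24_VALID_EXT_BITMASK;
}

/* Otherwise the low 24 bits are the next hop id: up to 2^24 next hops. */
static inline uint32_t
lpm24_next_hop(uint32_t tbl_entry)
{
        return tbl_entry & LPM24_NEXT_HOP_BITMASK;
}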
[-- Attachment #3: 0002-rte_lpm.h-disable-inlining-of-rte_lpm-lookup-functio.patch --]
[-- Type: text/plain, Size: 6138 bytes --]
From 7ee9f2e9a8853d49a332d971f5b56e79efccd71b Mon Sep 17 00:00:00 2001
From: Matthew Hall <mhall@mhcomputing.net>
Date: Sat, 27 Jun 2015 22:53:43 +0000
Subject: [PATCH 2/8] rte_lpm.h: disable inlining of rte_lpm lookup functions
Signed-off-by: Matthew Hall <mhall@mhcomputing.net>
---
lib/librte_lpm/rte_lpm.h | 152 -----------------------------------------------
1 file changed, 152 deletions(-)
diff --git a/lib/librte_lpm/rte_lpm.h b/lib/librte_lpm/rte_lpm.h
index c677c4a..76282d8 100644
--- a/lib/librte_lpm/rte_lpm.h
+++ b/lib/librte_lpm/rte_lpm.h
@@ -280,31 +280,6 @@ rte_lpm_delete_all(struct rte_lpm *lpm);
* @return
* -EINVAL for incorrect arguments, -ENOENT on lookup miss, 0 on lookup hit
*/
-static inline int
-rte_lpm_lookup(struct rte_lpm *lpm, uint32_t ip, uint8_t *next_hop)
-{
- unsigned tbl24_index = (ip >> 8);
- uint16_t tbl_entry;
-
- /* DEBUG: Check user input arguments. */
- RTE_LPM_RETURN_IF_TRUE(((lpm == NULL) || (next_hop == NULL)), -EINVAL);
-
- /* Copy tbl24 entry */
- tbl_entry = *(const uint16_t *)&lpm->tbl24[tbl24_index];
-
- /* Copy tbl8 entry (only if needed) */
- if (unlikely((tbl_entry & RTE_LPM_VALID_EXT_ENTRY_BITMASK) ==
- RTE_LPM_VALID_EXT_ENTRY_BITMASK)) {
-
- unsigned tbl8_index = (uint8_t)ip +
- ((uint8_t)tbl_entry * RTE_LPM_TBL8_GROUP_NUM_ENTRIES);
-
- tbl_entry = *(const uint16_t *)&lpm->tbl8[tbl8_index];
- }
-
- *next_hop = (uint8_t)tbl_entry;
- return (tbl_entry & RTE_LPM_LOOKUP_SUCCESS) ? 0 : -ENOENT;
-}
int
rte_lpm_lookup(struct rte_lpm *lpm, uint32_t ip, uint32_t *next_hop);
@@ -328,41 +303,6 @@ rte_lpm_lookup(struct rte_lpm *lpm, uint32_t ip, uint32_t *next_hop);
* @return
* -EINVAL for incorrect arguments, otherwise 0
*/
-#define rte_lpm_lookup_bulk(lpm, ips, next_hops, n) \
- rte_lpm_lookup_bulk_func(lpm, ips, next_hops, n)
-
-static inline int
-rte_lpm_lookup_bulk_func(const struct rte_lpm *lpm, const uint32_t * ips,
- uint16_t * next_hops, const unsigned n)
-{
- unsigned i;
- unsigned tbl24_indexes[n];
-
- /* DEBUG: Check user input arguments. */
- RTE_LPM_RETURN_IF_TRUE(((lpm == NULL) || (ips == NULL) ||
- (next_hops == NULL)), -EINVAL);
-
- for (i = 0; i < n; i++) {
- tbl24_indexes[i] = ips[i] >> 8;
- }
-
- for (i = 0; i < n; i++) {
- /* Simply copy tbl24 entry to output */
- next_hops[i] = *(const uint16_t *)&lpm->tbl24[tbl24_indexes[i]];
-
- /* Overwrite output with tbl8 entry if needed */
- if (unlikely((next_hops[i] & RTE_LPM_VALID_EXT_ENTRY_BITMASK) ==
- RTE_LPM_VALID_EXT_ENTRY_BITMASK)) {
-
- unsigned tbl8_index = (uint8_t)ips[i] +
- ((uint8_t)next_hops[i] *
- RTE_LPM_TBL8_GROUP_NUM_ENTRIES);
-
- next_hops[i] = *(const uint16_t *)&lpm->tbl8[tbl8_index];
- }
- }
- return 0;
-}
/* Mask four results. */
#define RTE_LPM_MASKX4_RES UINT64_C(0x00ff00ff00ff00ff)
@@ -389,98 +329,6 @@ rte_lpm_lookup_bulk(const struct rte_lpm *lpm, const uint32_t * ips,
* Default value to populate into corresponding element of hop[] array,
* if lookup would fail.
*/
-static inline void
-rte_lpm_lookupx4(const struct rte_lpm *lpm, __m128i ip, uint16_t hop[4],
- uint16_t defv)
-{
- __m128i i24;
- rte_xmm_t i8;
- uint16_t tbl[4];
- uint64_t idx, pt;
-
- const __m128i mask8 =
- _mm_set_epi32(UINT8_MAX, UINT8_MAX, UINT8_MAX, UINT8_MAX);
-
- /*
- * RTE_LPM_VALID_EXT_ENTRY_BITMASK for 4 LPM entries
- * as one 64-bit value (0x0300030003000300).
- */
- const uint64_t mask_xv =
- ((uint64_t)RTE_LPM_VALID_EXT_ENTRY_BITMASK |
- (uint64_t)RTE_LPM_VALID_EXT_ENTRY_BITMASK << 16 |
- (uint64_t)RTE_LPM_VALID_EXT_ENTRY_BITMASK << 32 |
- (uint64_t)RTE_LPM_VALID_EXT_ENTRY_BITMASK << 48);
-
- /*
- * RTE_LPM_LOOKUP_SUCCESS for 4 LPM entries
- * as one 64-bit value (0x0100010001000100).
- */
- const uint64_t mask_v =
- ((uint64_t)RTE_LPM_LOOKUP_SUCCESS |
- (uint64_t)RTE_LPM_LOOKUP_SUCCESS << 16 |
- (uint64_t)RTE_LPM_LOOKUP_SUCCESS << 32 |
- (uint64_t)RTE_LPM_LOOKUP_SUCCESS << 48);
-
- /* get 4 indexes for tbl24[]. */
- i24 = _mm_srli_epi32(ip, CHAR_BIT);
-
- /* extract values from tbl24[] */
- idx = _mm_cvtsi128_si64(i24);
- i24 = _mm_srli_si128(i24, sizeof(uint64_t));
-
- tbl[0] = *(const uint16_t *)&lpm->tbl24[(uint32_t)idx];
- tbl[1] = *(const uint16_t *)&lpm->tbl24[idx >> 32];
-
- idx = _mm_cvtsi128_si64(i24);
-
- tbl[2] = *(const uint16_t *)&lpm->tbl24[(uint32_t)idx];
- tbl[3] = *(const uint16_t *)&lpm->tbl24[idx >> 32];
-
- /* get 4 indexes for tbl8[]. */
- i8.x = _mm_and_si128(ip, mask8);
-
- pt = (uint64_t)tbl[0] |
- (uint64_t)tbl[1] << 16 |
- (uint64_t)tbl[2] << 32 |
- (uint64_t)tbl[3] << 48;
-
- /* search successfully finished for all 4 IP addresses. */
- if (likely((pt & mask_xv) == mask_v)) {
- uintptr_t ph = (uintptr_t)hop;
- *(uint64_t *)ph = pt & RTE_LPM_MASKX4_RES;
- return;
- }
-
- if (unlikely((pt & RTE_LPM_VALID_EXT_ENTRY_BITMASK) ==
- RTE_LPM_VALID_EXT_ENTRY_BITMASK)) {
- i8.u32[0] = i8.u32[0] +
- (uint8_t)tbl[0] * RTE_LPM_TBL8_GROUP_NUM_ENTRIES;
- tbl[0] = *(const uint16_t *)&lpm->tbl8[i8.u32[0]];
- }
- if (unlikely((pt >> 16 & RTE_LPM_VALID_EXT_ENTRY_BITMASK) ==
- RTE_LPM_VALID_EXT_ENTRY_BITMASK)) {
- i8.u32[1] = i8.u32[1] +
- (uint8_t)tbl[1] * RTE_LPM_TBL8_GROUP_NUM_ENTRIES;
- tbl[1] = *(const uint16_t *)&lpm->tbl8[i8.u32[1]];
- }
- if (unlikely((pt >> 32 & RTE_LPM_VALID_EXT_ENTRY_BITMASK) ==
- RTE_LPM_VALID_EXT_ENTRY_BITMASK)) {
- i8.u32[2] = i8.u32[2] +
- (uint8_t)tbl[2] * RTE_LPM_TBL8_GROUP_NUM_ENTRIES;
- tbl[2] = *(const uint16_t *)&lpm->tbl8[i8.u32[2]];
- }
- if (unlikely((pt >> 48 & RTE_LPM_VALID_EXT_ENTRY_BITMASK) ==
- RTE_LPM_VALID_EXT_ENTRY_BITMASK)) {
- i8.u32[3] = i8.u32[3] +
- (uint8_t)tbl[3] * RTE_LPM_TBL8_GROUP_NUM_ENTRIES;
- tbl[3] = *(const uint16_t *)&lpm->tbl8[i8.u32[3]];
- }
-
- hop[0] = (tbl[0] & RTE_LPM_LOOKUP_SUCCESS) ? (uint8_t)tbl[0] : defv;
- hop[1] = (tbl[1] & RTE_LPM_LOOKUP_SUCCESS) ? (uint8_t)tbl[1] : defv;
- hop[2] = (tbl[2] & RTE_LPM_LOOKUP_SUCCESS) ? (uint8_t)tbl[2] : defv;
- hop[3] = (tbl[3] & RTE_LPM_LOOKUP_SUCCESS) ? (uint8_t)tbl[3] : defv;
-}
void
rte_lpm_lookupx4(const struct rte_lpm *lpm, __m128i ip, uint32_t hop[4],
uint32_t defv);
--
1.9.1
[-- Attachment #4: 0003-rte_lpm.c-use-24-bit-extended-next-hop.patch --]
[-- Type: text/plain, Size: 8601 bytes --]
From e54e01b6edcc820230b7e47de40920a00031b6c1 Mon Sep 17 00:00:00 2001
From: Matthew Hall <mhall@mhcomputing.net>
Date: Sat, 27 Jun 2015 22:48:07 +0000
Subject: [PATCH 3/8] rte_lpm.c: use 24 bit extended next hop
Signed-off-by: Matthew Hall <mhall@mhcomputing.net>
---
lib/librte_lpm/rte_lpm.c | 184 ++++++++++++++++++++++++++++++++++++++++++++---
1 file changed, 174 insertions(+), 10 deletions(-)
diff --git a/lib/librte_lpm/rte_lpm.c b/lib/librte_lpm/rte_lpm.c
index 163ba3c..d9cb007 100644
--- a/lib/librte_lpm/rte_lpm.c
+++ b/lib/librte_lpm/rte_lpm.c
@@ -159,8 +159,8 @@ rte_lpm_create(const char *name, int socket_id, int max_rules,
lpm_list = RTE_TAILQ_CAST(rte_lpm_tailq.head, rte_lpm_list);
- RTE_BUILD_BUG_ON(sizeof(struct rte_lpm_tbl24_entry) != 2);
- RTE_BUILD_BUG_ON(sizeof(struct rte_lpm_tbl8_entry) != 2);
+ /* RTE_BUILD_BUG_ON(sizeof(struct rte_lpm_tbl24_entry) != 2); */
+ /* RTE_BUILD_BUG_ON(sizeof(struct rte_lpm_tbl8_entry) != 2); */
/* Check user arguments. */
if ((name == NULL) || (socket_id < -1) || (max_rules == 0)){
@@ -261,7 +261,7 @@ rte_lpm_free(struct rte_lpm *lpm)
*/
static inline int32_t
rule_add(struct rte_lpm *lpm, uint32_t ip_masked, uint8_t depth,
- uint8_t next_hop)
+ uint32_t next_hop)
{
uint32_t rule_gindex, rule_index, last_rule;
int i;
@@ -418,7 +418,7 @@ tbl8_free(struct rte_lpm_tbl8_entry *tbl8, uint32_t tbl8_group_start)
static inline int32_t
add_depth_small(struct rte_lpm *lpm, uint32_t ip, uint8_t depth,
- uint8_t next_hop)
+ uint32_t next_hop)
{
uint32_t tbl24_index, tbl24_range, tbl8_index, tbl8_group_end, i, j;
@@ -486,7 +486,7 @@ add_depth_small(struct rte_lpm *lpm, uint32_t ip, uint8_t depth,
static inline int32_t
add_depth_big(struct rte_lpm *lpm, uint32_t ip_masked, uint8_t depth,
- uint8_t next_hop)
+ uint32_t next_hop)
{
uint32_t tbl24_index;
int32_t tbl8_group_index, tbl8_group_start, tbl8_group_end, tbl8_index,
@@ -621,7 +621,7 @@ add_depth_big(struct rte_lpm *lpm, uint32_t ip_masked, uint8_t depth,
*/
int
rte_lpm_add(struct rte_lpm *lpm, uint32_t ip, uint8_t depth,
- uint8_t next_hop)
+ uint32_t next_hop)
{
int32_t rule_index, status = 0;
uint32_t ip_masked;
@@ -665,7 +665,7 @@ rte_lpm_add(struct rte_lpm *lpm, uint32_t ip, uint8_t depth,
*/
int
rte_lpm_is_rule_present(struct rte_lpm *lpm, uint32_t ip, uint8_t depth,
-uint8_t *next_hop)
+uint32_t *next_hop)
{
uint32_t ip_masked;
int32_t rule_index;
@@ -681,7 +681,7 @@ uint8_t *next_hop)
rule_index = rule_find(lpm, ip_masked, depth);
if (rule_index >= 0) {
- *next_hop = lpm->rules_tbl[rule_index].next_hop;
+ *next_hop = lpm->rules_tbl[rule_index].next_hop & RTE_LPM_NEXT_HOP_BITMASK;
return 1;
}
@@ -771,8 +771,7 @@ delete_depth_small(struct rte_lpm *lpm, uint32_t ip_masked,
struct rte_lpm_tbl8_entry new_tbl8_entry = {
.valid = VALID,
.depth = sub_rule_depth,
- .next_hop = lpm->rules_tbl
- [sub_rule_index].next_hop,
+ .next_hop = lpm->rules_tbl[sub_rule_index].next_hop,
};
for (i = tbl24_index; i < (tbl24_index + tbl24_range); i++) {
@@ -1012,3 +1011,168 @@ rte_lpm_delete_all(struct rte_lpm *lpm)
/* Delete all rules form the rules table. */
memset(lpm->rules_tbl, 0, sizeof(lpm->rules_tbl[0]) * lpm->max_rules);
}
+
+int
+rte_lpm_lookup(struct rte_lpm *lpm, uint32_t ip, uint32_t *next_hop)
+{
+ unsigned tbl24_index = (ip >> 8);
+ uint32_t tbl_entry;
+
+ /* DEBUG: Check user input arguments. */
+ RTE_LPM_RETURN_IF_TRUE(((lpm == NULL) || (next_hop == NULL)), -EINVAL);
+
+ /* Copy tbl24 entry */
+ tbl_entry = *(const uint32_t *)&lpm->tbl24[tbl24_index];
+
+ /* Copy tbl8 entry (only if needed) */
+ if (unlikely((tbl_entry & RTE_LPM_VALID_EXT_ENTRY_BITMASK) ==
+ RTE_LPM_VALID_EXT_ENTRY_BITMASK)) {
+
+ unsigned tbl8_index = (uint8_t)ip +
+ ((uint32_t)tbl_entry * RTE_LPM_TBL8_GROUP_NUM_ENTRIES);
+
+ tbl_entry = *(const uint32_t *)&lpm->tbl8[tbl8_index];
+ }
+
+ *next_hop = (uint32_t)tbl_entry & RTE_LPM_NEXT_HOP_BITMASK;
+ return (tbl_entry & RTE_LPM_LOOKUP_SUCCESS) ? 0 : -ENOENT;
+}
+
+int
+rte_lpm_lookup_bulk(const struct rte_lpm *lpm, const uint32_t * ips,
+ uint32_t * next_hops, const unsigned n)
+{
+ unsigned i;
+ unsigned tbl24_indexes[n];
+
+ /* DEBUG: Check user input arguments. */
+ RTE_LPM_RETURN_IF_TRUE(((lpm == NULL) || (ips == NULL) ||
+ (next_hops == NULL)), -EINVAL);
+
+ for (i = 0; i < n; i++) {
+ tbl24_indexes[i] = ips[i] >> 8;
+ }
+
+ for (i = 0; i < n; i++) {
+ /* Simply copy tbl24 entry to output */
+ next_hops[i] = *(const uint32_t *)&lpm->tbl24[tbl24_indexes[i]];
+
+ /* Overwrite output with tbl8 entry if needed */
+ if (unlikely((next_hops[i] & RTE_LPM_VALID_EXT_ENTRY_BITMASK) ==
+ RTE_LPM_VALID_EXT_ENTRY_BITMASK)) {
+
+ unsigned tbl8_index = (uint8_t)ips[i] +
+ ((uint32_t)next_hops[i] *
+ RTE_LPM_TBL8_GROUP_NUM_ENTRIES);
+
+ next_hops[i] = *(const uint32_t *)&lpm->tbl8[tbl8_index] & RTE_LPM_NEXT_HOP_BITMASK;
+ }
+ }
+ return 0;
+}
+
+
+static
+__m128i _mm_not_si128(__m128i arg)
+{
+ __m128i minusone = _mm_set1_epi32(0xffffffff);
+ return _mm_xor_si128(arg, minusone);
+}
+
+/**
+ * Lookup four IP addresses in an LPM table.
+ *
+ * @param lpm
+ * LPM object handle
+ * @param ip
+ * Four IPs to be looked up in the LPM table
+ * @param hop
+ * Next hop of the most specific rule found for IP (valid on lookup hit only).
+ * This is an 4 elements array of two byte values.
+ * If the lookup was succesfull for the given IP, then least significant byte
+ * of the corresponding element is the actual next hop and the most
+ * significant byte is zero.
+ * If the lookup for the given IP failed, then corresponding element would
+ * contain default value, see description of then next parameter.
+ * @param defv
+ * Default value to populate into corresponding element of hop[] array,
+ * if lookup would fail.
+ */
+void
+rte_lpm_lookupx4(const struct rte_lpm *lpm, __m128i ip, uint32_t hop[4],
+ uint32_t defv)
+{
+ rte_xmm_t tbl24_i;
+ rte_xmm_t tbl8_i;
+ rte_xmm_t tbl_r;
+ rte_xmm_t tbl_h;
+ rte_xmm_t tbl_r_ok;
+
+ rte_xmm_t mask_8;
+ rte_xmm_t mask_ve;
+ rte_xmm_t mask_v;
+ rte_xmm_t mask_h;
+ rte_xmm_t mask_hi;
+
+ mask_8.x = _mm_set1_epi32(UINT8_MAX);
+
+ /*
+ * RTE_LPM_VALID_EXT_ENTRY_BITMASK for 4 LPM entries
+ * as one 64-bit value (0x0300030003000300).
+ */
+ mask_ve.x = _mm_set1_epi32(RTE_LPM_VALID_EXT_ENTRY_BITMASK);
+
+ /*
+ * RTE_LPM_LOOKUP_SUCCESS for 4 LPM entries
+ * as one 64-bit value (0x0100010001000100).
+ */
+ mask_v.x = _mm_set1_epi32(RTE_LPM_LOOKUP_SUCCESS);
+
+ mask_h.x = _mm_set1_epi32(RTE_LPM_NEXT_HOP_BITMASK);
+ mask_hi.x = _mm_not_si128(mask_h.x);
+
+ /* get 4 indexes for tbl24[]. */
+ tbl24_i.x = _mm_srli_epi32(ip, CHAR_BIT);
+
+ /* extract values from tbl24[] */
+ tbl_r.u32[0] = *(const uint32_t *) &lpm->tbl24[tbl24_i.u32[0]];
+ tbl_r.u32[1] = *(const uint32_t *) &lpm->tbl24[tbl24_i.u32[1]];
+ tbl_r.u32[2] = *(const uint32_t *) &lpm->tbl24[tbl24_i.u32[2]];
+ tbl_r.u32[3] = *(const uint32_t *) &lpm->tbl24[tbl24_i.u32[3]];
+
+ /* search successfully finished for all 4 IP addresses. */
+ tbl_r_ok.x = _mm_and_si128(tbl_r.x, mask_ve.x);
+ tbl_h.x = _mm_and_si128(tbl_r.x, mask_hi.x);
+ if (likely(_mm_test_all_ones(_mm_cmpeq_epi32(tbl_r_ok.x, mask_v.x)))) {
+ *(__m128i*) &hop = tbl_h.x;
+ return;
+ }
+
+ /* get 4 indexes for tbl8[]. */
+ tbl8_i.x = _mm_and_si128(ip, mask_8.x);
+
+ if (unlikely(tbl_r_ok.u32[0] == RTE_LPM_VALID_EXT_ENTRY_BITMASK)) {
+ tbl8_i.u32[0] = tbl8_i.u32[0] + tbl_h.u32[0] * RTE_LPM_TBL8_GROUP_NUM_ENTRIES;
+ tbl_r.u32[0] = *(const uint32_t *) &lpm->tbl8[tbl8_i.u32[0]];
+ }
+ if (unlikely(tbl_r_ok.u32[1] == RTE_LPM_VALID_EXT_ENTRY_BITMASK)) {
+ tbl8_i.u32[1] = tbl8_i.u32[1] + tbl_h.u32[1] * RTE_LPM_TBL8_GROUP_NUM_ENTRIES;
+ tbl_r.u32[1] = *(const uint32_t *) &lpm->tbl8[tbl8_i.u32[1]];
+ }
+ if (unlikely(tbl_r_ok.u32[2] == RTE_LPM_VALID_EXT_ENTRY_BITMASK)) {
+ tbl8_i.u32[2] = tbl8_i.u32[2] + tbl_h.u32[2] * RTE_LPM_TBL8_GROUP_NUM_ENTRIES;
+ tbl_r.u32[2] = *(const uint32_t *) &lpm->tbl8[tbl8_i.u32[2]];
+ }
+ if (unlikely(tbl_r_ok.u32[3] == RTE_LPM_VALID_EXT_ENTRY_BITMASK)) {
+ tbl8_i.u32[3] = tbl8_i.u32[3] + tbl_h.u32[3] * RTE_LPM_TBL8_GROUP_NUM_ENTRIES;
+ tbl_r.u32[3] = *(const uint32_t *) &lpm->tbl8[tbl8_i.u32[3]];
+ }
+
+ tbl_r_ok.x = _mm_and_si128(tbl_r.x, mask_v.x);
+ tbl_h.x = _mm_and_si128(tbl_r.x, mask_h.x);
+
+ hop[0] = tbl_r_ok.u32[0] ? tbl_h.u32[0] : defv;
+ hop[1] = tbl_r_ok.u32[1] ? tbl_h.u32[1] : defv;
+ hop[2] = tbl_r_ok.u32[2] ? tbl_h.u32[2] : defv;
+ hop[3] = tbl_r_ok.u32[3] ? tbl_h.u32[3] : defv;
+}
--
1.9.1
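A hypothetical caller of the lookup path after this patch: next hops come back as 32-bit values (24 usable bits) and a miss is still reported as -ENOENT. The address below is invented.

#include <stdio.h>
#include <rte_lpm.h>

static void
lookup_one(struct rte_lpm *lpm)
{
        uint32_t ip = (10u << 24) | 1u; /* 10.0.0.1 in host byte order */
        uint32_t next_hop;
        int ret;

        ret = rte_lpm_lookup(lpm, ip, &next_hop);
        if (ret == 0)
                printf("10.0.0.1 -> next hop %u\n", next_hop); /* < 2^24 */
        else
                printf("no route (%d)\n", ret); /* -ENOENT on lookup miss */
}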
[-- Attachment #5: 0004-rte_lpm6.-c-h-use-24-bit-extended-next-hop.patch --]
[-- Type: text/plain, Size: 5655 bytes --]
From 402d1bce8dd05b31fc6e457ca89bcd0b7160aa69 Mon Sep 17 00:00:00 2001
From: Matthew Hall <mhall@mhcomputing.net>
Date: Sat, 27 Jun 2015 22:54:41 +0000
Subject: [PATCH 4/8] rte_lpm6.{c,h}: use 24 bit extended next hop
Signed-off-by: Matthew Hall <mhall@mhcomputing.net>
---
lib/librte_lpm/rte_lpm6.c | 27 ++++++++++++++-------------
lib/librte_lpm/rte_lpm6.h | 8 ++++----
2 files changed, 18 insertions(+), 17 deletions(-)
diff --git a/lib/librte_lpm/rte_lpm6.c b/lib/librte_lpm/rte_lpm6.c
index 6c2b293..8d7602f 100644
--- a/lib/librte_lpm/rte_lpm6.c
+++ b/lib/librte_lpm/rte_lpm6.c
@@ -96,9 +96,9 @@ struct rte_lpm6_tbl_entry {
/** Rules tbl entry structure. */
struct rte_lpm6_rule {
- uint8_t ip[RTE_LPM6_IPV6_ADDR_SIZE]; /**< Rule IP address. */
- uint8_t next_hop; /**< Rule next hop. */
- uint8_t depth; /**< Rule depth. */
+ uint8_t ip[RTE_LPM6_IPV6_ADDR_SIZE]; /**< Rule IP address. */
+ uint32_t next_hop :24; /**< Rule next hop. */
+ uint32_t depth :8; /**< Rule depth. */
};
/** LPM6 structure. */
@@ -157,7 +157,7 @@ rte_lpm6_create(const char *name, int socket_id,
lpm_list = RTE_TAILQ_CAST(rte_lpm6_tailq.head, rte_lpm6_list);
- RTE_BUILD_BUG_ON(sizeof(struct rte_lpm6_tbl_entry) != sizeof(uint32_t));
+ /* RTE_BUILD_BUG_ON(sizeof(struct rte_lpm6_tbl_entry) != sizeof(uint32_t)); */
/* Check user arguments. */
if ((name == NULL) || (socket_id < -1) || (config == NULL) ||
@@ -295,7 +295,7 @@ rte_lpm6_free(struct rte_lpm6 *lpm)
* the nexthop if so. Otherwise it adds a new rule if enough space is available.
*/
static inline int32_t
-rule_add(struct rte_lpm6 *lpm, uint8_t *ip, uint8_t next_hop, uint8_t depth)
+rule_add(struct rte_lpm6 *lpm, uint8_t *ip, uint32_t next_hop, uint8_t depth)
{
uint32_t rule_index;
@@ -338,7 +338,7 @@ rule_add(struct rte_lpm6 *lpm, uint8_t *ip, uint8_t next_hop, uint8_t depth)
*/
static void
expand_rule(struct rte_lpm6 *lpm, uint32_t tbl8_gindex, uint8_t depth,
- uint8_t next_hop)
+ uint32_t next_hop)
{
uint32_t tbl8_group_end, tbl8_gindex_next, j;
@@ -375,7 +375,7 @@ expand_rule(struct rte_lpm6 *lpm, uint32_t tbl8_gindex, uint8_t depth,
static inline int
add_step(struct rte_lpm6 *lpm, struct rte_lpm6_tbl_entry *tbl,
struct rte_lpm6_tbl_entry **tbl_next, uint8_t *ip, uint8_t bytes,
- uint8_t first_byte, uint8_t depth, uint8_t next_hop)
+ uint8_t first_byte, uint8_t depth, uint32_t next_hop)
{
uint32_t tbl_index, tbl_range, tbl8_group_start, tbl8_group_end, i;
int32_t tbl8_gindex;
@@ -506,7 +506,7 @@ add_step(struct rte_lpm6 *lpm, struct rte_lpm6_tbl_entry *tbl,
*/
int
rte_lpm6_add(struct rte_lpm6 *lpm, uint8_t *ip, uint8_t depth,
- uint8_t next_hop)
+ uint32_t next_hop)
{
struct rte_lpm6_tbl_entry *tbl;
struct rte_lpm6_tbl_entry *tbl_next;
@@ -567,7 +567,7 @@ rte_lpm6_add(struct rte_lpm6 *lpm, uint8_t *ip, uint8_t depth,
static inline int
lookup_step(const struct rte_lpm6 *lpm, const struct rte_lpm6_tbl_entry *tbl,
const struct rte_lpm6_tbl_entry **tbl_next, uint8_t *ip,
- uint8_t first_byte, uint8_t *next_hop)
+ uint8_t first_byte, uint32_t *next_hop)
{
uint32_t tbl8_index, tbl_entry;
@@ -596,7 +596,7 @@ lookup_step(const struct rte_lpm6 *lpm, const struct rte_lpm6_tbl_entry *tbl,
* Looks up an IP
*/
int
-rte_lpm6_lookup(const struct rte_lpm6 *lpm, uint8_t *ip, uint8_t *next_hop)
+rte_lpm6_lookup(const struct rte_lpm6 *lpm, uint8_t *ip, uint32_t *next_hop)
{
const struct rte_lpm6_tbl_entry *tbl;
const struct rte_lpm6_tbl_entry *tbl_next;
@@ -630,13 +630,14 @@ rte_lpm6_lookup(const struct rte_lpm6 *lpm, uint8_t *ip, uint8_t *next_hop)
int
rte_lpm6_lookup_bulk_func(const struct rte_lpm6 *lpm,
uint8_t ips[][RTE_LPM6_IPV6_ADDR_SIZE],
- int16_t * next_hops, unsigned n)
+ uint32_t * next_hops, unsigned n)
{
unsigned i;
const struct rte_lpm6_tbl_entry *tbl;
const struct rte_lpm6_tbl_entry *tbl_next;
uint32_t tbl24_index;
- uint8_t first_byte, next_hop;
+ uint8_t first_byte;
+ uint32_t next_hop;
int status;
/* DEBUG: Check user input arguments. */
@@ -697,7 +698,7 @@ rule_find(struct rte_lpm6 *lpm, uint8_t *ip, uint8_t depth)
*/
int
rte_lpm6_is_rule_present(struct rte_lpm6 *lpm, uint8_t *ip, uint8_t depth,
-uint8_t *next_hop)
+uint32_t *next_hop)
{
uint8_t ip_masked[RTE_LPM6_IPV6_ADDR_SIZE];
int32_t rule_index;
diff --git a/lib/librte_lpm/rte_lpm6.h b/lib/librte_lpm/rte_lpm6.h
index cedcea8..dd90beb 100644
--- a/lib/librte_lpm/rte_lpm6.h
+++ b/lib/librte_lpm/rte_lpm6.h
@@ -121,7 +121,7 @@ rte_lpm6_free(struct rte_lpm6 *lpm);
*/
int
rte_lpm6_add(struct rte_lpm6 *lpm, uint8_t *ip, uint8_t depth,
- uint8_t next_hop);
+ uint32_t next_hop);
/**
* Check if a rule is present in the LPM table,
@@ -140,7 +140,7 @@ rte_lpm6_add(struct rte_lpm6 *lpm, uint8_t *ip, uint8_t depth,
*/
int
rte_lpm6_is_rule_present(struct rte_lpm6 *lpm, uint8_t *ip, uint8_t depth,
-uint8_t *next_hop);
+uint32_t *next_hop);
/**
* Delete a rule from the LPM table.
@@ -197,7 +197,7 @@ rte_lpm6_delete_all(struct rte_lpm6 *lpm);
* -EINVAL for incorrect arguments, -ENOENT on lookup miss, 0 on lookup hit
*/
int
-rte_lpm6_lookup(const struct rte_lpm6 *lpm, uint8_t *ip, uint8_t *next_hop);
+rte_lpm6_lookup(const struct rte_lpm6 *lpm, uint8_t *ip, uint32_t *next_hop);
/**
* Lookup multiple IP addresses in an LPM table.
@@ -218,7 +218,7 @@ rte_lpm6_lookup(const struct rte_lpm6 *lpm, uint8_t *ip, uint8_t *next_hop);
int
rte_lpm6_lookup_bulk_func(const struct rte_lpm6 *lpm,
uint8_t ips[][RTE_LPM6_IPV6_ADDR_SIZE],
- int16_t * next_hops, unsigned n);
+ uint32_t * next_hops, unsigned n);
#ifdef __cplusplus
}
--
1.9.1
[-- Attachment #6: 0005-librte_table-use-uint32_t-for-next-hops-from-librte_.patch --]
[-- Type: text/plain, Size: 2443 bytes --]
>From e3ebfc026f7871d3014a0b9f8881579623b6592b Mon Sep 17 00:00:00 2001
From: Matthew Hall <mhall@mhcomputing.net>
Date: Sat, 27 Jun 2015 22:55:20 +0000
Subject: [PATCH 5/8] librte_table: use uint32_t for next hops from librte_lpm
Signed-off-by: Matthew Hall <mhall@mhcomputing.net>
---
lib/librte_table/rte_table_lpm.c | 6 +++---
lib/librte_table/rte_table_lpm_ipv6.c | 6 +++---
2 files changed, 6 insertions(+), 6 deletions(-)
diff --git a/lib/librte_table/rte_table_lpm.c b/lib/librte_table/rte_table_lpm.c
index 849d899..2af2eee 100644
--- a/lib/librte_table/rte_table_lpm.c
+++ b/lib/librte_table/rte_table_lpm.c
@@ -202,7 +202,7 @@ rte_table_lpm_entry_add(
struct rte_table_lpm_key *ip_prefix = (struct rte_table_lpm_key *) key;
uint32_t nht_pos, nht_pos0_valid;
int status;
- uint8_t nht_pos0 = 0;
+ uint32_t nht_pos0 = 0;
/* Check input parameters */
if (lpm == NULL) {
@@ -268,7 +268,7 @@ rte_table_lpm_entry_delete(
{
struct rte_table_lpm *lpm = (struct rte_table_lpm *) table;
struct rte_table_lpm_key *ip_prefix = (struct rte_table_lpm_key *) key;
- uint8_t nht_pos;
+ uint32_t nht_pos;
int status;
/* Check input parameters */
@@ -342,7 +342,7 @@ rte_table_lpm_lookup(
uint32_t ip = rte_bswap32(
RTE_MBUF_METADATA_UINT32(pkt, lpm->offset));
int status;
- uint8_t nht_pos;
+ uint32_t nht_pos;
status = rte_lpm_lookup(lpm->lpm, ip, &nht_pos);
if (status == 0) {
diff --git a/lib/librte_table/rte_table_lpm_ipv6.c b/lib/librte_table/rte_table_lpm_ipv6.c
index e9bc6a7..81a948e 100644
--- a/lib/librte_table/rte_table_lpm_ipv6.c
+++ b/lib/librte_table/rte_table_lpm_ipv6.c
@@ -213,7 +213,7 @@ rte_table_lpm_ipv6_entry_add(
(struct rte_table_lpm_ipv6_key *) key;
uint32_t nht_pos, nht_pos0_valid;
int status;
- uint8_t nht_pos0;
+ uint32_t nht_pos0;
/* Check input parameters */
if (lpm == NULL) {
@@ -280,7 +280,7 @@ rte_table_lpm_ipv6_entry_delete(
struct rte_table_lpm_ipv6 *lpm = (struct rte_table_lpm_ipv6 *) table;
struct rte_table_lpm_ipv6_key *ip_prefix =
(struct rte_table_lpm_ipv6_key *) key;
- uint8_t nht_pos;
+ uint32_t nht_pos;
int status;
/* Check input parameters */
@@ -356,7 +356,7 @@ rte_table_lpm_ipv6_lookup(
uint8_t *ip = RTE_MBUF_METADATA_UINT8_PTR(pkt,
lpm->offset);
int status;
- uint8_t nht_pos;
+ uint32_t nht_pos;
status = rte_lpm6_lookup(lpm->lpm, ip, &nht_pos);
if (status == 0) {
--
1.9.1
[-- Attachment #7: 0006-test_lpm-.c-update-tests-to-use-24-bit-extended-next.patch --]
[-- Type: text/plain, Size: 17123 bytes --]
>From e975f935595c6e901522dbaf10be598573276eaa Mon Sep 17 00:00:00 2001
From: Matthew Hall <mhall@mhcomputing.net>
Date: Sat, 27 Jun 2015 22:34:18 +0000
Subject: [PATCH 6/8] test_lpm*.c: update tests to use 24 bit extended next hop
Signed-off-by: Matthew Hall <mhall@mhcomputing.net>
---
app/test/test_lpm.c | 54 +++++++++++++++-----------
app/test/test_lpm6.c | 108 ++++++++++++++++++++++++++++++---------------------
2 files changed, 95 insertions(+), 67 deletions(-)
diff --git a/app/test/test_lpm.c b/app/test/test_lpm.c
index 8b4ded9..44e2fb4 100644
--- a/app/test/test_lpm.c
+++ b/app/test/test_lpm.c
@@ -181,7 +181,8 @@ test3(void)
{
struct rte_lpm *lpm = NULL;
uint32_t ip = IPv4(0, 0, 0, 0);
- uint8_t depth = 24, next_hop = 100;
+ uint8_t depth = 24;
+ uint32_t next_hop = 100;
int32_t status = 0;
/* rte_lpm_add: lpm == NULL */
@@ -248,7 +249,7 @@ test5(void)
#if defined(RTE_LIBRTE_LPM_DEBUG)
struct rte_lpm *lpm = NULL;
uint32_t ip = IPv4(0, 0, 0, 0);
- uint8_t next_hop_return = 0;
+ uint32_t next_hop_return = 0;
int32_t status = 0;
/* rte_lpm_lookup: lpm == NULL */
@@ -278,7 +279,8 @@ test6(void)
{
struct rte_lpm *lpm = NULL;
uint32_t ip = IPv4(0, 0, 0, 0);
- uint8_t depth = 24, next_hop_add = 100, next_hop_return = 0;
+ uint8_t depth = 24;
+ uint32_t next_hop_add = 100, next_hop_return = 0;
int32_t status = 0;
lpm = rte_lpm_create(__func__, SOCKET_ID_ANY, MAX_RULES, 0);
@@ -309,10 +311,11 @@ int32_t
test7(void)
{
__m128i ipx4;
- uint16_t hop[4];
+ uint32_t hop[4];
struct rte_lpm *lpm = NULL;
uint32_t ip = IPv4(0, 0, 0, 0);
- uint8_t depth = 32, next_hop_add = 100, next_hop_return = 0;
+ uint8_t depth = 32;
+ uint32_t next_hop_add = 100, next_hop_return = 0;
int32_t status = 0;
lpm = rte_lpm_create(__func__, SOCKET_ID_ANY, MAX_RULES, 0);
@@ -355,10 +358,11 @@ int32_t
test8(void)
{
__m128i ipx4;
- uint16_t hop[4];
+ uint32_t hop[4];
struct rte_lpm *lpm = NULL;
uint32_t ip1 = IPv4(127, 255, 255, 255), ip2 = IPv4(128, 0, 0, 0);
- uint8_t depth, next_hop_add, next_hop_return;
+ uint8_t depth;
+ uint32_t next_hop_add, next_hop_return;
int32_t status = 0;
lpm = rte_lpm_create(__func__, SOCKET_ID_ANY, MAX_RULES, 0);
@@ -438,7 +442,8 @@ test9(void)
{
struct rte_lpm *lpm = NULL;
uint32_t ip, ip_1, ip_2;
- uint8_t depth, depth_1, depth_2, next_hop_add, next_hop_add_1,
+ uint8_t depth, depth_1, depth_2;
+ uint32_t next_hop_add, next_hop_add_1,
next_hop_add_2, next_hop_return;
int32_t status = 0;
@@ -602,7 +607,8 @@ test10(void)
struct rte_lpm *lpm = NULL;
uint32_t ip;
- uint8_t depth, next_hop_add, next_hop_return;
+ uint8_t depth;
+ uint32_t next_hop_add, next_hop_return;
int32_t status = 0;
/* Add rule that covers a TBL24 range previously invalid & lookup
@@ -788,7 +794,8 @@ test11(void)
struct rte_lpm *lpm = NULL;
uint32_t ip;
- uint8_t depth, next_hop_add, next_hop_return;
+ uint8_t depth;
+ uint32_t next_hop_add, next_hop_return;
int32_t status = 0;
lpm = rte_lpm_create(__func__, SOCKET_ID_ANY, MAX_RULES, 0);
@@ -851,10 +858,11 @@ int32_t
test12(void)
{
__m128i ipx4;
- uint16_t hop[4];
+ uint32_t hop[4];
struct rte_lpm *lpm = NULL;
uint32_t ip, i;
- uint8_t depth, next_hop_add, next_hop_return;
+ uint8_t depth;
+ uint32_t next_hop_add, next_hop_return;
int32_t status = 0;
lpm = rte_lpm_create(__func__, SOCKET_ID_ANY, MAX_RULES, 0);
@@ -904,7 +912,8 @@ test13(void)
{
struct rte_lpm *lpm = NULL;
uint32_t ip, i;
- uint8_t depth, next_hop_add_1, next_hop_add_2, next_hop_return;
+ uint8_t depth;
+ uint32_t next_hop_add_1, next_hop_add_2, next_hop_return;
int32_t status = 0;
lpm = rte_lpm_create(__func__, SOCKET_ID_ANY, MAX_RULES, 0);
@@ -966,7 +975,8 @@ test14(void)
struct rte_lpm *lpm = NULL;
uint32_t ip;
- uint8_t depth, next_hop_add, next_hop_return;
+ uint8_t depth;
+ uint32_t next_hop_add, next_hop_return;
int32_t status = 0;
/* Add enough space for 256 rules for every depth */
@@ -1078,10 +1088,10 @@ test17(void)
const uint8_t d_ip_10_32 = 32,
d_ip_10_24 = 24,
d_ip_20_25 = 25;
- const uint8_t next_hop_ip_10_32 = 100,
+ const uint32_t next_hop_ip_10_32 = 100,
next_hop_ip_10_24 = 105,
next_hop_ip_20_25 = 111;
- uint8_t next_hop_return = 0;
+ uint32_t next_hop_return = 0;
int32_t status = 0;
lpm = rte_lpm_create(__func__, SOCKET_ID_ANY, MAX_RULES, 0);
@@ -1092,7 +1102,7 @@ test17(void)
return -1;
status = rte_lpm_lookup(lpm, ip_10_32, &next_hop_return);
- uint8_t test_hop_10_32 = next_hop_return;
+ uint32_t test_hop_10_32 = next_hop_return;
TEST_LPM_ASSERT(status == 0);
TEST_LPM_ASSERT(next_hop_return == next_hop_ip_10_32);
@@ -1101,7 +1111,7 @@ test17(void)
return -1;
status = rte_lpm_lookup(lpm, ip_10_24, &next_hop_return);
- uint8_t test_hop_10_24 = next_hop_return;
+ uint32_t test_hop_10_24 = next_hop_return;
TEST_LPM_ASSERT(status == 0);
TEST_LPM_ASSERT(next_hop_return == next_hop_ip_10_24);
@@ -1110,7 +1120,7 @@ test17(void)
return -1;
status = rte_lpm_lookup(lpm, ip_20_25, &next_hop_return);
- uint8_t test_hop_20_25 = next_hop_return;
+ uint32_t test_hop_20_25 = next_hop_return;
TEST_LPM_ASSERT(status == 0);
TEST_LPM_ASSERT(next_hop_return == next_hop_ip_20_25);
@@ -1175,7 +1185,7 @@ perf_test(void)
struct rte_lpm *lpm = NULL;
uint64_t begin, total_time, lpm_used_entries = 0;
unsigned i, j;
- uint8_t next_hop_add = 0xAA, next_hop_return = 0;
+ uint32_t next_hop_add = 0xAA, next_hop_return = 0;
int status = 0;
uint64_t cache_line_counter = 0;
int64_t count = 0;
@@ -1252,7 +1262,7 @@ perf_test(void)
count = 0;
for (i = 0; i < ITERATIONS; i ++) {
static uint32_t ip_batch[BATCH_SIZE];
- uint16_t next_hops[BULK_SIZE];
+ uint32_t next_hops[BULK_SIZE];
/* Create array of random IP addresses */
for (j = 0; j < BATCH_SIZE; j ++)
@@ -1279,7 +1289,7 @@ perf_test(void)
count = 0;
for (i = 0; i < ITERATIONS; i++) {
static uint32_t ip_batch[BATCH_SIZE];
- uint16_t next_hops[4];
+ uint32_t next_hops[4];
/* Create array of random IP addresses */
for (j = 0; j < BATCH_SIZE; j++)
diff --git a/app/test/test_lpm6.c b/app/test/test_lpm6.c
index 1f88d7a..d5ba20a 100644
--- a/app/test/test_lpm6.c
+++ b/app/test/test_lpm6.c
@@ -291,7 +291,8 @@ test4(void)
struct rte_lpm6_config config;
uint8_t ip[] = {0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0};
- uint8_t depth = 24, next_hop = 100;
+ uint8_t depth = 24;
+ uint32_t next_hop = 100;
int32_t status = 0;
config.max_rules = MAX_RULES;
@@ -367,7 +368,7 @@ test6(void)
struct rte_lpm6 *lpm = NULL;
struct rte_lpm6_config config;
uint8_t ip[] = {0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0};
- uint8_t next_hop_return = 0;
+ uint32_t next_hop_return = 0;
int32_t status = 0;
config.max_rules = MAX_RULES;
@@ -405,7 +406,7 @@ test7(void)
struct rte_lpm6 *lpm = NULL;
struct rte_lpm6_config config;
uint8_t ip[10][16];
- int16_t next_hop_return[10];
+ uint32_t next_hop_return[10];
int32_t status = 0;
config.max_rules = MAX_RULES;
@@ -482,7 +483,8 @@ test9(void)
struct rte_lpm6 *lpm = NULL;
struct rte_lpm6_config config;
uint8_t ip[] = {0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0};
- uint8_t depth = 16, next_hop_add = 100, next_hop_return = 0;
+ uint8_t depth = 16;
+ uint32_t next_hop_add = 100, next_hop_return = 0;
int32_t status = 0;
uint8_t i;
@@ -526,7 +528,8 @@ test10(void)
struct rte_lpm6 *lpm = NULL;
struct rte_lpm6_config config;
uint8_t ip[] = {0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0};
- uint8_t depth, next_hop_add = 100;
+ uint8_t depth;
+ uint32_t next_hop_add = 100;
int32_t status = 0;
int i;
@@ -570,7 +573,8 @@ test11(void)
struct rte_lpm6 *lpm = NULL;
struct rte_lpm6_config config;
uint8_t ip[] = {0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0};
- uint8_t depth, next_hop_add = 100;
+ uint8_t depth;
+ uint32_t next_hop_add = 100;
int32_t status = 0;
config.max_rules = MAX_RULES;
@@ -630,7 +634,8 @@ test12(void)
struct rte_lpm6 *lpm = NULL;
struct rte_lpm6_config config;
uint8_t ip[] = {0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0};
- uint8_t depth, next_hop_add = 100;
+ uint8_t depth;
+ uint32_t next_hop_add = 100;
int32_t status = 0;
config.max_rules = MAX_RULES;
@@ -668,7 +673,8 @@ test13(void)
struct rte_lpm6 *lpm = NULL;
struct rte_lpm6_config config;
uint8_t ip[] = {0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0};
- uint8_t depth, next_hop_add = 100;
+ uint8_t depth;
+ uint32_t next_hop_add = 100;
int32_t status = 0;
config.max_rules = 2;
@@ -715,7 +721,8 @@ test14(void)
struct rte_lpm6 *lpm = NULL;
struct rte_lpm6_config config;
uint8_t ip[] = {0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0};
- uint8_t depth = 25, next_hop_add = 100;
+ uint8_t depth = 25;
+ uint32_t next_hop_add = 100;
int32_t status = 0;
int i, j;
@@ -767,7 +774,8 @@ test15(void)
struct rte_lpm6 *lpm = NULL;
struct rte_lpm6_config config;
uint8_t ip[] = {0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0};
- uint8_t depth = 24, next_hop_add = 100, next_hop_return = 0;
+ uint8_t depth = 24;
+ uint32_t next_hop_add = 100, next_hop_return = 0;
int32_t status = 0;
config.max_rules = MAX_RULES;
@@ -803,7 +811,8 @@ test16(void)
struct rte_lpm6 *lpm = NULL;
struct rte_lpm6_config config;
uint8_t ip[] = {12,12,1,0,0,0,0,0,0,0,0,0,0,0,0,0};
- uint8_t depth = 128, next_hop_add = 100, next_hop_return = 0;
+ uint8_t depth = 128;
+ uint32_t next_hop_add = 100, next_hop_return = 0;
int32_t status = 0;
config.max_rules = MAX_RULES;
@@ -847,7 +856,8 @@ test17(void)
uint8_t ip1[] = {127,255,255,255,255,255,255,255,255,
255,255,255,255,255,255,255};
uint8_t ip2[] = {128,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0};
- uint8_t depth, next_hop_add, next_hop_return;
+ uint8_t depth;
+ uint32_t next_hop_add, next_hop_return;
int32_t status = 0;
config.max_rules = MAX_RULES;
@@ -912,7 +922,8 @@ test18(void)
struct rte_lpm6 *lpm = NULL;
struct rte_lpm6_config config;
uint8_t ip[16], ip_1[16], ip_2[16];
- uint8_t depth, depth_1, depth_2, next_hop_add, next_hop_add_1,
+ uint8_t depth, depth_1, depth_2;
+ uint32_t next_hop_add, next_hop_add_1,
next_hop_add_2, next_hop_return;
int32_t status = 0;
@@ -1074,7 +1085,8 @@ test19(void)
struct rte_lpm6 *lpm = NULL;
struct rte_lpm6_config config;
uint8_t ip[16];
- uint8_t depth, next_hop_add, next_hop_return;
+ uint8_t depth;
+ uint32_t next_hop_add, next_hop_return;
int32_t status = 0;
config.max_rules = MAX_RULES;
@@ -1272,7 +1284,8 @@ test20(void)
struct rte_lpm6 *lpm = NULL;
struct rte_lpm6_config config;
uint8_t ip[16];
- uint8_t depth, next_hop_add, next_hop_return;
+ uint8_t depth;
+ uint32_t next_hop_add, next_hop_return;
int32_t status = 0;
config.max_rules = MAX_RULES;
@@ -1339,8 +1352,9 @@ test21(void)
struct rte_lpm6 *lpm = NULL;
struct rte_lpm6_config config;
uint8_t ip_batch[4][16];
- uint8_t depth, next_hop_add;
- int16_t next_hop_return[4];
+ uint8_t depth;
+ uint32_t next_hop_add;
+ uint32_t next_hop_return[4];
int32_t status = 0;
config.max_rules = MAX_RULES;
@@ -1377,7 +1391,7 @@ test21(void)
next_hop_return, 4);
TEST_LPM_ASSERT(status == 0 && next_hop_return[0] == 100
&& next_hop_return[1] == 101 && next_hop_return[2] == 102
- && next_hop_return[3] == -1);
+ && next_hop_return[3] == (uint32_t) -1);
rte_lpm6_free(lpm);
@@ -1397,8 +1411,9 @@ test22(void)
struct rte_lpm6 *lpm = NULL;
struct rte_lpm6_config config;
uint8_t ip_batch[5][16];
- uint8_t depth[5], next_hop_add;
- int16_t next_hop_return[5];
+ uint8_t depth[5];
+ uint32_t next_hop_add;
+ uint32_t next_hop_return[5];
int32_t status = 0;
config.max_rules = MAX_RULES;
@@ -1458,8 +1473,8 @@ test22(void)
status = rte_lpm6_lookup_bulk_func(lpm, ip_batch,
next_hop_return, 5);
- TEST_LPM_ASSERT(status == 0 && next_hop_return[0] == -1
- && next_hop_return[1] == -1 && next_hop_return[2] == 103
+ TEST_LPM_ASSERT(status == 0 && next_hop_return[0] == (uint32_t) -1
+ && next_hop_return[1] == (uint32_t) -1 && next_hop_return[2] == 103
&& next_hop_return[3] == 104 && next_hop_return[4] == 105);
/* Use the delete_bulk function to delete one more. Lookup again */
@@ -1469,8 +1484,8 @@ test22(void)
status = rte_lpm6_lookup_bulk_func(lpm, ip_batch,
next_hop_return, 5);
- TEST_LPM_ASSERT(status == 0 && next_hop_return[0] == -1
- && next_hop_return[1] == -1 && next_hop_return[2] == -1
+ TEST_LPM_ASSERT(status == 0 && next_hop_return[0] == (uint32_t) -1
+ && next_hop_return[1] == (uint32_t) -1 && next_hop_return[2] == (uint32_t) -1
&& next_hop_return[3] == 104 && next_hop_return[4] == 105);
/* Use the delete_bulk function to delete two, one invalid. Lookup again */
@@ -1482,9 +1497,9 @@ test22(void)
IPv6(ip_batch[4], 128, 0, 0, 0, 0, 5, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0);
status = rte_lpm6_lookup_bulk_func(lpm, ip_batch,
next_hop_return, 5);
- TEST_LPM_ASSERT(status == 0 && next_hop_return[0] == -1
- && next_hop_return[1] == -1 && next_hop_return[2] == -1
- && next_hop_return[3] == -1 && next_hop_return[4] == 105);
+ TEST_LPM_ASSERT(status == 0 && next_hop_return[0] == (uint32_t) -1
+ && next_hop_return[1] == (uint32_t) -1 && next_hop_return[2] == (uint32_t) -1
+ && next_hop_return[3] == (uint32_t) -1 && next_hop_return[4] == 105);
/* Use the delete_bulk function to delete the remaining one. Lookup again */
@@ -1493,9 +1508,9 @@ test22(void)
status = rte_lpm6_lookup_bulk_func(lpm, ip_batch,
next_hop_return, 5);
- TEST_LPM_ASSERT(status == 0 && next_hop_return[0] == -1
- && next_hop_return[1] == -1 && next_hop_return[2] == -1
- && next_hop_return[3] == -1 && next_hop_return[4] == -1);
+ TEST_LPM_ASSERT(status == 0 && next_hop_return[0] == (uint32_t) -1
+ && next_hop_return[1] == (uint32_t) -1 && next_hop_return[2] == (uint32_t) -1
+ && next_hop_return[3] == (uint32_t) -1 && next_hop_return[4] == (uint32_t) -1);
rte_lpm6_free(lpm);
@@ -1514,7 +1529,8 @@ test23(void)
struct rte_lpm6_config config;
uint32_t i;
uint8_t ip[16];
- uint8_t depth, next_hop_add, next_hop_return;
+ uint8_t depth;
+ uint32_t next_hop_add, next_hop_return;
int32_t status = 0;
config.max_rules = MAX_RULES;
@@ -1598,7 +1614,8 @@ test25(void)
struct rte_lpm6_config config;
uint8_t ip[16];
uint32_t i;
- uint8_t depth, next_hop_add, next_hop_return, next_hop_expected;
+ uint8_t depth;
+ uint32_t next_hop_add, next_hop_return, next_hop_expected;
int32_t status = 0;
config.max_rules = MAX_RULES;
@@ -1646,12 +1663,12 @@ test26(void)
uint8_t ip_10_24[] = {10, 10, 10, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0};
uint8_t ip_20_25[] = {10, 10, 20, 2, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0};
uint8_t d_ip_10_32 = 32;
- uint8_t d_ip_10_24 = 24;
- uint8_t d_ip_20_25 = 25;
- uint8_t next_hop_ip_10_32 = 100;
- uint8_t next_hop_ip_10_24 = 105;
- uint8_t next_hop_ip_20_25 = 111;
- uint8_t next_hop_return = 0;
+ uint8_t d_ip_10_24 = 24;
+ uint8_t d_ip_20_25 = 25;
+ uint32_t next_hop_ip_10_32 = 100;
+ uint32_t next_hop_ip_10_24 = 105;
+ uint32_t next_hop_ip_20_25 = 111;
+ uint32_t next_hop_return = 0;
int32_t status = 0;
config.max_rules = MAX_RULES;
@@ -1666,7 +1683,7 @@ test26(void)
return -1;
status = rte_lpm6_lookup(lpm, ip_10_32, &next_hop_return);
- uint8_t test_hop_10_32 = next_hop_return;
+ uint32_t test_hop_10_32 = next_hop_return;
TEST_LPM_ASSERT(status == 0);
TEST_LPM_ASSERT(next_hop_return == next_hop_ip_10_32);
@@ -1675,7 +1692,7 @@ test26(void)
return -1;
status = rte_lpm6_lookup(lpm, ip_10_24, &next_hop_return);
- uint8_t test_hop_10_24 = next_hop_return;
+ uint32_t test_hop_10_24 = next_hop_return;
TEST_LPM_ASSERT(status == 0);
TEST_LPM_ASSERT(next_hop_return == next_hop_ip_10_24);
@@ -1684,7 +1701,7 @@ test26(void)
return -1;
status = rte_lpm6_lookup(lpm, ip_20_25, &next_hop_return);
- uint8_t test_hop_20_25 = next_hop_return;
+ uint32_t test_hop_20_25 = next_hop_return;
TEST_LPM_ASSERT(status == 0);
TEST_LPM_ASSERT(next_hop_return == next_hop_ip_20_25);
@@ -1723,7 +1740,8 @@ test27(void)
struct rte_lpm6 *lpm = NULL;
struct rte_lpm6_config config;
uint8_t ip[] = {128,128,128,128,128,128,128,128,128,128,128,128,128,128,0,0};
- uint8_t depth = 128, next_hop_add = 100, next_hop_return;
+ uint8_t depth = 128;
+ uint32_t next_hop_add = 100, next_hop_return;
int32_t status = 0;
int i, j;
@@ -1799,7 +1817,7 @@ perf_test(void)
struct rte_lpm6_config config;
uint64_t begin, total_time;
unsigned i, j;
- uint8_t next_hop_add = 0xAA, next_hop_return = 0;
+ uint32_t next_hop_add = 0xAA, next_hop_return = 0;
int status = 0;
int64_t count = 0;
@@ -1856,7 +1874,7 @@ perf_test(void)
count = 0;
uint8_t ip_batch[NUM_IPS_ENTRIES][16];
- int16_t next_hops[NUM_IPS_ENTRIES];
+ uint32_t next_hops[NUM_IPS_ENTRIES];
for (i = 0; i < NUM_IPS_ENTRIES; i++)
memcpy(ip_batch[i], large_ips_table[i].ip, 16);
@@ -1869,7 +1887,7 @@ perf_test(void)
total_time += rte_rdtsc() - begin;
for (j = 0; j < NUM_IPS_ENTRIES; j++)
- if (next_hops[j] < 0)
+ if ((int32_t) next_hops[j] < 0)
count++;
}
printf("BULK LPM Lookup: %.1f cycles (fails = %.1f%%)\n",
--
1.9.1
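
As the bulk-lookup hunks above show, a miss is still reported as -1, but the
next-hop array is now unsigned, so callers compare through a cast. A minimal
caller sketch under this series (lookup_burst(), MISS_PORT and the table setup
are illustrative only, not part of the patch):

  #include <rte_lpm6.h>

  #define BULK_SIZE 32
  #define MISS_PORT 0          /* placeholder fallback, application-defined */

  static void lookup_burst(struct rte_lpm6 *lpm,
                           uint8_t ip_batch[BULK_SIZE][RTE_LPM6_IPV6_ADDR_SIZE],
                           uint32_t dst[BULK_SIZE])
  {
          uint32_t next_hops[BULK_SIZE];
          unsigned j;

          rte_lpm6_lookup_bulk_func(lpm, ip_batch, next_hops, BULK_SIZE);
          for (j = 0; j < BULK_SIZE; j++)
                  /* a miss is still filled in as -1, but the array is unsigned now */
                  dst[j] = ((int32_t)next_hops[j] < 0) ? MISS_PORT : next_hops[j];
  }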
[-- Attachment #8: 0007-examples-update-examples-to-use-24-bit-extended-next.patch --]
[-- Type: text/plain, Size: 5383 bytes --]
>From cfaf9c28dbff3bec0c867aa4270b01b04bf50276 Mon Sep 17 00:00:00 2001
From: Matthew Hall <mhall@mhcomputing.net>
Date: Sat, 27 Jun 2015 22:42:42 +0000
Subject: [PATCH 7/8] examples: update examples to use 24 bit extended next hop
Signed-off-by: Matthew Hall <mhall@mhcomputing.net>
---
examples/ip_reassembly/main.c | 3 ++-
examples/l3fwd-power/main.c | 2 +-
examples/l3fwd-vf/main.c | 2 +-
examples/l3fwd/main.c | 16 ++++++++--------
examples/load_balancer/runtime.c | 2 +-
5 files changed, 13 insertions(+), 12 deletions(-)
diff --git a/examples/ip_reassembly/main.c b/examples/ip_reassembly/main.c
index 741c398..86e33a7 100644
--- a/examples/ip_reassembly/main.c
+++ b/examples/ip_reassembly/main.c
@@ -347,7 +347,8 @@ reassemble(struct rte_mbuf *m, uint8_t portid, uint32_t queue,
struct rte_ip_frag_death_row *dr;
struct rx_queue *rxq;
void *d_addr_bytes;
- uint8_t next_hop, dst_port;
+ uint32_t next_hop;
+ uint8_t dst_port;
rxq = &qconf->rx_queue_list[queue];
diff --git a/examples/l3fwd-power/main.c b/examples/l3fwd-power/main.c
index 8bb88ce..f647713 100644
--- a/examples/l3fwd-power/main.c
+++ b/examples/l3fwd-power/main.c
@@ -631,7 +631,7 @@ static inline uint8_t
get_ipv4_dst_port(struct ipv4_hdr *ipv4_hdr, uint8_t portid,
lookup_struct_t *ipv4_l3fwd_lookup_struct)
{
- uint8_t next_hop;
+ uint32_t next_hop;
return (uint8_t) ((rte_lpm_lookup(ipv4_l3fwd_lookup_struct,
rte_be_to_cpu_32(ipv4_hdr->dst_addr), &next_hop) == 0)?
diff --git a/examples/l3fwd-vf/main.c b/examples/l3fwd-vf/main.c
index 01f610e..193c3ab 100644
--- a/examples/l3fwd-vf/main.c
+++ b/examples/l3fwd-vf/main.c
@@ -440,7 +440,7 @@ get_dst_port(struct ipv4_hdr *ipv4_hdr, uint8_t portid, lookup_struct_t * l3fwd
static inline uint8_t
get_dst_port(struct ipv4_hdr *ipv4_hdr, uint8_t portid, lookup_struct_t * l3fwd_lookup_struct)
{
- uint8_t next_hop;
+ uint32_t next_hop;
return (uint8_t) ((rte_lpm_lookup(l3fwd_lookup_struct,
rte_be_to_cpu_32(ipv4_hdr->dst_addr), &next_hop) == 0)?
diff --git a/examples/l3fwd/main.c b/examples/l3fwd/main.c
index 1f3e5c6..4f31e52 100644
--- a/examples/l3fwd/main.c
+++ b/examples/l3fwd/main.c
@@ -710,7 +710,7 @@ get_ipv6_dst_port(void *ipv6_hdr, uint8_t portid, lookup_struct_t * ipv6_l3fwd_
static inline uint8_t
get_ipv4_dst_port(void *ipv4_hdr, uint8_t portid, lookup_struct_t * ipv4_l3fwd_lookup_struct)
{
- uint8_t next_hop;
+ uint32_t next_hop;
return (uint8_t) ((rte_lpm_lookup(ipv4_l3fwd_lookup_struct,
rte_be_to_cpu_32(((struct ipv4_hdr *)ipv4_hdr)->dst_addr),
@@ -720,7 +720,7 @@ get_ipv4_dst_port(void *ipv4_hdr, uint8_t portid, lookup_struct_t * ipv4_l3fwd_
static inline uint8_t
get_ipv6_dst_port(void *ipv6_hdr, uint8_t portid, lookup6_struct_t * ipv6_l3fwd_lookup_struct)
{
- uint8_t next_hop;
+ uint32_t next_hop;
return (uint8_t) ((rte_lpm6_lookup(ipv6_l3fwd_lookup_struct,
((struct ipv6_hdr*)ipv6_hdr)->dst_addr, &next_hop) == 0)?
next_hop : portid);
@@ -1151,7 +1151,7 @@ l3fwd_simple_forward(struct rte_mbuf *m, uint8_t portid, struct lcore_conf *qcon
* to BAD_PORT value.
*/
static inline __attribute__((always_inline)) void
-rfc1812_process(struct ipv4_hdr *ipv4_hdr, uint16_t *dp, uint32_t ptype)
+rfc1812_process(struct ipv4_hdr *ipv4_hdr, uint32_t *dp, uint32_t ptype)
{
uint8_t ihl;
@@ -1182,7 +1182,7 @@ static inline __attribute__((always_inline)) uint16_t
get_dst_port(const struct lcore_conf *qconf, struct rte_mbuf *pkt,
uint32_t dst_ipv4, uint8_t portid)
{
- uint8_t next_hop;
+ uint32_t next_hop;
struct ipv6_hdr *ipv6_hdr;
struct ether_hdr *eth_hdr;
@@ -1205,7 +1205,7 @@ get_dst_port(const struct lcore_conf *qconf, struct rte_mbuf *pkt,
static inline void
process_packet(struct lcore_conf *qconf, struct rte_mbuf *pkt,
- uint16_t *dst_port, uint8_t portid)
+ uint32_t *dst_port, uint8_t portid)
{
struct ether_hdr *eth_hdr;
struct ipv4_hdr *ipv4_hdr;
@@ -1275,7 +1275,7 @@ processx4_step2(const struct lcore_conf *qconf,
uint32_t ipv4_flag,
uint8_t portid,
struct rte_mbuf *pkt[FWDSTEP],
- uint16_t dprt[FWDSTEP])
+ uint32_t dprt[FWDSTEP])
{
rte_xmm_t dst;
const __m128i bswap_mask = _mm_set_epi8(12, 13, 14, 15, 8, 9, 10, 11,
@@ -1301,7 +1301,7 @@ processx4_step2(const struct lcore_conf *qconf,
* Perform RFC1812 checks and updates for IPV4 packets.
*/
static inline void
-processx4_step3(struct rte_mbuf *pkt[FWDSTEP], uint16_t dst_port[FWDSTEP])
+processx4_step3(struct rte_mbuf *pkt[FWDSTEP], uint32_t dst_port[FWDSTEP])
{
__m128i te[FWDSTEP];
__m128i ve[FWDSTEP];
@@ -1527,7 +1527,7 @@ main_loop(__attribute__((unused)) void *dummy)
int32_t k;
uint16_t dlp;
uint16_t *lp;
- uint16_t dst_port[MAX_PKT_BURST];
+ uint32_t dst_port[MAX_PKT_BURST];
__m128i dip[MAX_PKT_BURST / FWDSTEP];
uint32_t ipv4_flag[MAX_PKT_BURST / FWDSTEP];
uint16_t pnum[MAX_PKT_BURST + 1];
diff --git a/examples/load_balancer/runtime.c b/examples/load_balancer/runtime.c
index 2b265c2..6944325 100644
--- a/examples/load_balancer/runtime.c
+++ b/examples/load_balancer/runtime.c
@@ -525,7 +525,7 @@ app_lcore_worker(
struct rte_mbuf *pkt;
struct ipv4_hdr *ipv4_hdr;
uint32_t ipv4_dst, pos;
- uint8_t port;
+ uint32_t port;
if (likely(j < bsz_rd - 1)) {
APP_WORKER_PREFETCH1(rte_pktmbuf_mtod(lp->mbuf_in.array[j+1], unsigned char *));
--
1.9.1
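
Note that the l3fwd-style hunks above still narrow the looked-up value to
uint8_t because it is used directly as an egress port id; an application that
stores real 24-bit next-hop ids keeps the full width instead. A rough sketch of
that pattern (the app_next_hop table is illustrative, not part of the patch):

  #include <rte_ip.h>
  #include <rte_byteorder.h>
  #include <rte_lpm.h>

  struct app_next_hop { uint32_t if_index; };     /* application-defined */
  /* sized for the whole 24-bit id space, purely for illustration */
  static struct app_next_hop nh_table[1 << 24];
  static struct app_next_hop default_nh;

  static struct app_next_hop *
  route(struct rte_lpm *lpm, const struct ipv4_hdr *ipv4_hdr)
  {
          uint32_t next_hop = 0;   /* 24 usable bits under this series */

          if (rte_lpm_lookup(lpm, rte_be_to_cpu_32(ipv4_hdr->dst_addr),
                          &next_hop) == 0)
                  return &nh_table[next_hop];
          return &default_nh;      /* lookup miss */
  }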
[-- Attachment #9: 0008-Makefile-add-fno-strict-aliasing-due-to-LPM-casting-.patch --]
[-- Type: text/plain, Size: 763 bytes --]
>From 2ded34ca11f61ed8a3bcfda6d13339e04b0430bf Mon Sep 17 00:00:00 2001
From: Matthew Hall <mhall@mhcomputing.net>
Date: Sun, 28 Jun 2015 22:52:45 +0000
Subject: [PATCH 8/8] Makefile: add -fno-strict-aliasing due to LPM casting
logic
Signed-off-by: Matthew Hall <mhall@mhcomputing.net>
---
lib/librte_lpm/Makefile | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/lib/librte_lpm/Makefile b/lib/librte_lpm/Makefile
index 688cfc9..20030b8 100644
--- a/lib/librte_lpm/Makefile
+++ b/lib/librte_lpm/Makefile
@@ -35,7 +35,7 @@ include $(RTE_SDK)/mk/rte.vars.mk
LIB = librte_lpm.a
CFLAGS += -O3
-CFLAGS += $(WERROR_FLAGS) -I$(SRCDIR)
+CFLAGS += $(WERROR_FLAGS) -I$(SRCDIR) -fno-strict-aliasing
EXPORT_MAP := rte_lpm_version.map
--
1.9.1
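
For context, the flag is needed because the LPM lookup path reads a table entry
through a pointer of a different (integer) type than the struct it was written
as; with strict aliasing enabled the compiler may assume the two accesses
cannot alias. The sketch below only illustrates the cast, it is not the exact
library code:

  #include <stdint.h>

  /* entry stands in for the rte_lpm tbl24/tbl8 entry structs. */
  struct entry { uint32_t word; };

  static uint32_t read_raw(const struct entry *tbl, unsigned i)
  {
          /* The table is written through the struct type but read back here
           * as a plain integer; -fno-strict-aliasing keeps the compiler from
           * assuming the two access types cannot alias. */
          return *(const uint32_t *)&tbl[i];
  }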
^ permalink raw reply [relevance 0%]
* Re: [dpdk-dev] [PATCH v1 0/3] lpm: increase number of next hops for lpm (ipv4)
2015-10-24 6:09 0% ` Matthew Hall
@ 2015-10-25 17:52 0% ` Vladimir Medvedkin
[not found] ` <20151026115519.GA7576@MKJASTRX-MOBL>
2015-10-26 12:13 0% ` Jastrzebski, MichalX K
1 sibling, 1 reply; 200+ results
From: Vladimir Medvedkin @ 2015-10-25 17:52 UTC (permalink / raw)
To: Matthew Hall; +Cc: dev
Hi all,
Here is my implementation:
Signed-off-by: Vladimir Medvedkin <medvedkinv@gmail.com>
---
config/common_bsdapp | 1 +
config/common_linuxapp | 1 +
lib/librte_lpm/rte_lpm.c | 194 +++++++++++++++++++++++++++++------------------
lib/librte_lpm/rte_lpm.h | 163 +++++++++++++++++++++++----------------
4 files changed, 219 insertions(+), 140 deletions(-)
diff --git a/config/common_bsdapp b/config/common_bsdapp
index b37dcf4..408cc2c 100644
--- a/config/common_bsdapp
+++ b/config/common_bsdapp
@@ -344,6 +344,7 @@ CONFIG_RTE_LIBRTE_JOBSTATS=y
#
CONFIG_RTE_LIBRTE_LPM=y
CONFIG_RTE_LIBRTE_LPM_DEBUG=n
+CONFIG_RTE_LIBRTE_LPM_ASNUM=n
#
# Compile librte_acl
diff --git a/config/common_linuxapp b/config/common_linuxapp
index 0de43d5..1c60e63 100644
--- a/config/common_linuxapp
+++ b/config/common_linuxapp
@@ -352,6 +352,7 @@ CONFIG_RTE_LIBRTE_JOBSTATS=y
#
CONFIG_RTE_LIBRTE_LPM=y
CONFIG_RTE_LIBRTE_LPM_DEBUG=n
+CONFIG_RTE_LIBRTE_LPM_ASNUM=n
#
# Compile librte_acl
diff --git a/lib/librte_lpm/rte_lpm.c b/lib/librte_lpm/rte_lpm.c
index 163ba3c..363b400 100644
--- a/lib/librte_lpm/rte_lpm.c
+++ b/lib/librte_lpm/rte_lpm.c
@@ -159,9 +159,11 @@ rte_lpm_create(const char *name, int socket_id, int max_rules,
lpm_list = RTE_TAILQ_CAST(rte_lpm_tailq.head, rte_lpm_list);
- RTE_BUILD_BUG_ON(sizeof(struct rte_lpm_tbl24_entry) != 2);
- RTE_BUILD_BUG_ON(sizeof(struct rte_lpm_tbl8_entry) != 2);
-
+#ifdef RTE_LIBRTE_LPM_ASNUM
+ RTE_BUILD_BUG_ON(sizeof(struct rte_lpm_tbl_entry) != 8);
+#else
+ RTE_BUILD_BUG_ON(sizeof(struct rte_lpm_tbl_entry) != 4);
+#endif
/* Check user arguments. */
if ((name == NULL) || (socket_id < -1) || (max_rules == 0)){
rte_errno = EINVAL;
@@ -261,7 +263,7 @@ rte_lpm_free(struct rte_lpm *lpm)
*/
static inline int32_t
rule_add(struct rte_lpm *lpm, uint32_t ip_masked, uint8_t depth,
- uint8_t next_hop)
+ struct rte_lpm_res *res)
{
uint32_t rule_gindex, rule_index, last_rule;
int i;
@@ -282,8 +284,11 @@ rule_add(struct rte_lpm *lpm, uint32_t ip_masked, uint8_t depth,
/* If rule already exists update its next_hop and
return. */
if (lpm->rules_tbl[rule_index].ip == ip_masked) {
- lpm->rules_tbl[rule_index].next_hop =
next_hop;
-
- lpm->rules_tbl[rule_index].next_hop = next_hop;
-
+ lpm->rules_tbl[rule_index].next_hop = res->next_hop;
+ lpm->rules_tbl[rule_index].fwd_class = res->fwd_class;
+#ifdef RTE_LIBRTE_LPM_ASNUM
+ lpm->rules_tbl[rule_index].as_num = res->as_num;
}
@@ -320,7 +325,11 @@ rule_add(struct rte_lpm *lpm, uint32_t ip_masked, uint8_t depth,
/* Add the new rule. */
lpm->rules_tbl[rule_index].ip = ip_masked;
- lpm->rules_tbl[rule_index].next_hop = next_hop;
+ lpm->rules_tbl[rule_index].next_hop = res->next_hop;
+ lpm->rules_tbl[rule_index].fwd_class = res->fwd_class;
+#ifdef RTE_LIBRTE_LPM_ASNUM
+ lpm->rules_tbl[rule_index].as_num = res->as_num;
+#endif
/* Increment the used rules counter for this rule group. */
lpm->rule_info[depth - 1].used_rules++;
@@ -382,10 +391,10 @@ rule_find(struct rte_lpm *lpm, uint32_t ip_masked, uint8_t depth)
* Find, clean and allocate a tbl8.
*/
static inline int32_t
-tbl8_alloc(struct rte_lpm_tbl8_entry *tbl8)
+tbl8_alloc(struct rte_lpm_tbl_entry *tbl8)
{
uint32_t tbl8_gindex; /* tbl8 group index. */
- struct rte_lpm_tbl8_entry *tbl8_entry;
+ struct rte_lpm_tbl_entry *tbl8_entry;
/* Scan through tbl8 to find a free (i.e. INVALID) tbl8 group. */
for (tbl8_gindex = 0; tbl8_gindex < RTE_LPM_TBL8_NUM_GROUPS;
@@ -393,12 +402,12 @@ tbl8_alloc(struct rte_lpm_tbl8_entry *tbl8)
tbl8_entry = &tbl8[tbl8_gindex *
RTE_LPM_TBL8_GROUP_NUM_ENTRIES];
/* If a free tbl8 group is found clean it and set as VALID.
*/
- if (!tbl8_entry->valid_group) {
+ if (!tbl8_entry->ext_valid) {
memset(&tbl8_entry[0], 0,
RTE_LPM_TBL8_GROUP_NUM_ENTRIES *
sizeof(tbl8_entry[0]));
- tbl8_entry->valid_group = VALID;
+ tbl8_entry->ext_valid = VALID;
/* Return group index for allocated tbl8 group. */
return tbl8_gindex;
@@ -410,46 +419,50 @@ tbl8_alloc(struct rte_lpm_tbl8_entry *tbl8)
}
static inline void
-tbl8_free(struct rte_lpm_tbl8_entry *tbl8, uint32_t tbl8_group_start)
+tbl8_free(struct rte_lpm_tbl_entry *tbl8, uint32_t tbl8_group_start)
{
/* Set tbl8 group invalid*/
- tbl8[tbl8_group_start].valid_group = INVALID;
+ tbl8[tbl8_group_start].ext_valid = INVALID;
}
static inline int32_t
add_depth_small(struct rte_lpm *lpm, uint32_t ip, uint8_t depth,
- uint8_t next_hop)
+ struct rte_lpm_res *res)
{
uint32_t tbl24_index, tbl24_range, tbl8_index, tbl8_group_end, i, j;
/* Calculate the index into Table24. */
tbl24_index = ip >> 8;
tbl24_range = depth_to_range(depth);
+ struct rte_lpm_tbl_entry new_tbl_entry = {
+#ifdef RTE_LIBRTE_LPM_ASNUM
+ .as_num = res->as_num,
+#endif
+ .next_hop = res->next_hop,
+ .fwd_class = res->fwd_class,
+ .ext_valid = 0,
+ .depth = depth,
+ .valid = VALID,
+ };
+
for (i = tbl24_index; i < (tbl24_index + tbl24_range); i++) {
/*
* For invalid OR valid and non-extended tbl 24 entries set
* entry.
*/
- if (!lpm->tbl24[i].valid || (lpm->tbl24[i].ext_entry == 0 &&
+ if (!lpm->tbl24[i].valid || (lpm->tbl24[i].ext_valid == 0 &&
lpm->tbl24[i].depth <= depth)) {
- struct rte_lpm_tbl24_entry new_tbl24_entry = {
- { .next_hop = next_hop, },
- .valid = VALID,
- .ext_entry = 0,
- .depth = depth,
- };
-
/* Setting tbl24 entry in one go to avoid race
* conditions
*/
- lpm->tbl24[i] = new_tbl24_entry;
+ lpm->tbl24[i] = new_tbl_entry;
continue;
}
- if (lpm->tbl24[i].ext_entry == 1) {
+ if (lpm->tbl24[i].ext_valid == 1) {
/* If tbl24 entry is valid and extended calculate
the
* index into tbl8.
*/
@@ -461,19 +474,14 @@ add_depth_small(struct rte_lpm *lpm, uint32_t ip, uint8_t depth,
for (j = tbl8_index; j < tbl8_group_end; j++) {
if (!lpm->tbl8[j].valid ||
lpm->tbl8[j].depth <=
depth) {
- struct rte_lpm_tbl8_entry
- new_tbl8_entry = {
- .valid = VALID,
- .valid_group = VALID,
- .depth = depth,
- .next_hop = next_hop,
- };
+
+ new_tbl_entry.ext_valid = VALID;
/*
* Setting tbl8 entry in one go to
avoid
* race conditions
*/
- lpm->tbl8[j] = new_tbl8_entry;
+ lpm->tbl8[j] = new_tbl_entry;
continue;
}
@@ -486,7 +494,7 @@ add_depth_small(struct rte_lpm *lpm, uint32_t ip, uint8_t depth,
static inline int32_t
add_depth_big(struct rte_lpm *lpm, uint32_t ip_masked, uint8_t depth,
- uint8_t next_hop)
+ struct rte_lpm_res *res)
{
uint32_t tbl24_index;
int32_t tbl8_group_index, tbl8_group_start, tbl8_group_end,
tbl8_index,
@@ -512,7 +520,11 @@ add_depth_big(struct rte_lpm *lpm, uint32_t ip_masked, uint8_t depth,
/* Set tbl8 entry. */
for (i = tbl8_index; i < (tbl8_index + tbl8_range); i++) {
lpm->tbl8[i].depth = depth;
- lpm->tbl8[i].next_hop = next_hop;
+ lpm->tbl8[i].next_hop = res->next_hop;
+ lpm->tbl8[i].fwd_class = res->fwd_class;
+#ifdef RTE_LIBRTE_LPM_ASNUM
+ lpm->tbl8[i].as_num = res->as_num;
+#endif
lpm->tbl8[i].valid = VALID;
}
@@ -522,17 +534,17 @@ add_depth_big(struct rte_lpm *lpm, uint32_t ip_masked, uint8_t depth,
* so assign whole structure in one go
*/
- struct rte_lpm_tbl24_entry new_tbl24_entry = {
- { .tbl8_gindex = (uint8_t)tbl8_group_index, },
- .valid = VALID,
- .ext_entry = 1,
+ struct rte_lpm_tbl_entry new_tbl24_entry = {
+ .tbl8_gindex = (uint16_t)tbl8_group_index,
.depth = 0,
+ .ext_valid = 1,
+ .valid = VALID,
};
lpm->tbl24[tbl24_index] = new_tbl24_entry;
}/* If valid entry but not extended calculate the index into
Table8. */
- else if (lpm->tbl24[tbl24_index].ext_entry == 0) {
+ else if (lpm->tbl24[tbl24_index].ext_valid == 0) {
/* Search for free tbl8 group. */
tbl8_group_index = tbl8_alloc(lpm->tbl8);
@@ -551,6 +563,11 @@ add_depth_big(struct rte_lpm *lpm, uint32_t ip_masked, uint8_t depth,
lpm->tbl8[i].depth = lpm->tbl24[tbl24_index].depth;
lpm->tbl8[i].next_hop =
lpm->tbl24[tbl24_index].next_hop;
+ lpm->tbl8[i].fwd_class =
+ lpm->tbl24[tbl24_index].fwd_class;
+#ifdef RTE_LIBRTE_LPM_ASNUM
+ lpm->tbl8[i].as_num =
lpm->tbl24[tbl24_index].as_num;
+#endif
}
tbl8_index = tbl8_group_start + (ip_masked & 0xFF);
@@ -561,7 +578,11 @@ add_depth_big(struct rte_lpm *lpm, uint32_t ip_masked, uint8_t depth,
lpm->tbl8[i].depth <= depth) {
lpm->tbl8[i].valid = VALID;
lpm->tbl8[i].depth = depth;
- lpm->tbl8[i].next_hop = next_hop;
+ lpm->tbl8[i].next_hop = res->next_hop;
+ lpm->tbl8[i].fwd_class = res->fwd_class;
+#ifdef RTE_LIBRTE_LPM_ASNUM
+ lpm->tbl8[i].as_num = res->as_num;
+#endif
continue;
}
@@ -573,11 +594,11 @@ add_depth_big(struct rte_lpm *lpm, uint32_t ip_masked, uint8_t depth,
* so assign whole structure in one go.
*/
- struct rte_lpm_tbl24_entry new_tbl24_entry = {
- { .tbl8_gindex = (uint8_t)tbl8_group_index,
},
- .valid = VALID,
- .ext_entry = 1,
+ struct rte_lpm_tbl_entry new_tbl24_entry = {
+ .tbl8_gindex = (uint16_t)tbl8_group_index,
.depth = 0,
+ .ext_valid = 1,
+ .valid = VALID,
};
lpm->tbl24[tbl24_index] = new_tbl24_entry;
@@ -595,11 +616,15 @@ add_depth_big(struct rte_lpm *lpm, uint32_t ip_masked, uint8_t depth,
if (!lpm->tbl8[i].valid ||
lpm->tbl8[i].depth <= depth) {
- struct rte_lpm_tbl8_entry new_tbl8_entry = {
- .valid = VALID,
+ struct rte_lpm_tbl_entry new_tbl8_entry = {
+#ifdef RTE_LIBRTE_LPM_ASNUM
+ .as_num = res->as_num,
+#endif
+ .next_hop = res->next_hop,
+ .fwd_class = res->fwd_class,
.depth = depth,
- .next_hop = next_hop,
- .valid_group =
lpm->tbl8[i].valid_group,
+ .ext_valid = lpm->tbl8[i].ext_valid,
+ .valid = VALID,
};
/*
@@ -621,19 +646,19 @@ add_depth_big(struct rte_lpm *lpm, uint32_t ip_masked, uint8_t depth,
*/
int
rte_lpm_add(struct rte_lpm *lpm, uint32_t ip, uint8_t depth,
- uint8_t next_hop)
+ struct rte_lpm_res *res)
{
int32_t rule_index, status = 0;
uint32_t ip_masked;
/* Check user arguments. */
- if ((lpm == NULL) || (depth < 1) || (depth > RTE_LPM_MAX_DEPTH))
+ if ((lpm == NULL) || (res == NULL) || (depth < 1) || (depth > RTE_LPM_MAX_DEPTH))
return -EINVAL;
ip_masked = ip & depth_to_mask(depth);
/* Add the rule to the rule table. */
- rule_index = rule_add(lpm, ip_masked, depth, next_hop);
+ rule_index = rule_add(lpm, ip_masked, depth, res);
/* If the is no space available for new rule return error. */
if (rule_index < 0) {
@@ -641,10 +666,10 @@ rte_lpm_add(struct rte_lpm *lpm, uint32_t ip, uint8_t depth,
}
if (depth <= MAX_DEPTH_TBL24) {
- status = add_depth_small(lpm, ip_masked, depth, next_hop);
+ status = add_depth_small(lpm, ip_masked, depth, res);
}
else { /* If depth > RTE_LPM_MAX_DEPTH_TBL24 */
- status = add_depth_big(lpm, ip_masked, depth, next_hop);
+ status = add_depth_big(lpm, ip_masked, depth, res);
/*
* If add fails due to exhaustion of tbl8 extensions delete
@@ -665,14 +690,14 @@ rte_lpm_add(struct rte_lpm *lpm, uint32_t ip, uint8_t depth,
*/
int
rte_lpm_is_rule_present(struct rte_lpm *lpm, uint32_t ip, uint8_t depth,
-uint8_t *next_hop)
+ struct rte_lpm_res *res)
{
uint32_t ip_masked;
int32_t rule_index;
/* Check user arguments. */
if ((lpm == NULL) ||
- (next_hop == NULL) ||
+ (res == NULL) ||
(depth < 1) || (depth > RTE_LPM_MAX_DEPTH))
return -EINVAL;
@@ -681,7 +706,11 @@ uint8_t *next_hop)
rule_index = rule_find(lpm, ip_masked, depth);
if (rule_index >= 0) {
- *next_hop = lpm->rules_tbl[rule_index].next_hop;
+ res->next_hop = lpm->rules_tbl[rule_index].next_hop;
+ res->fwd_class = lpm->rules_tbl[rule_index].fwd_class;
+#ifdef RTE_LIBRTE_LPM_ASNUM
+ res->as_num = lpm->rules_tbl[rule_index].as_num;
+#endif
return 1;
}
@@ -731,7 +760,7 @@ delete_depth_small(struct rte_lpm *lpm, uint32_t ip_masked,
*/
for (i = tbl24_index; i < (tbl24_index + tbl24_range); i++)
{
- if (lpm->tbl24[i].ext_entry == 0 &&
+ if (lpm->tbl24[i].ext_valid == 0 &&
lpm->tbl24[i].depth <= depth ) {
lpm->tbl24[i].valid = INVALID;
}
@@ -761,23 +790,30 @@ delete_depth_small(struct rte_lpm *lpm, uint32_t ip_masked,
* associated with this rule.
*/
- struct rte_lpm_tbl24_entry new_tbl24_entry = {
- {.next_hop =
lpm->rules_tbl[sub_rule_index].next_hop,},
- .valid = VALID,
- .ext_entry = 0,
+ struct rte_lpm_tbl_entry new_tbl24_entry = {
+#ifdef RTE_LIBRTE_LPM_ASNUM
+ .as_num = lpm->rules_tbl[sub_rule_index].as_num,
+#endif
+ .next_hop = lpm->rules_tbl[sub_rule_index].next_hop,
+ .fwd_class =
lpm->rules_tbl[sub_rule_index].fwd_class,
.depth = sub_rule_depth,
+ .ext_valid = 0,
+ .valid = VALID,
};
- struct rte_lpm_tbl8_entry new_tbl8_entry = {
- .valid = VALID,
+ struct rte_lpm_tbl_entry new_tbl8_entry = {
+#ifdef RTE_LIBRTE_LPM_ASNUM
+ .as_num = lpm->rules_tbl[sub_rule_index].as_num,
+#endif
+ .next_hop = lpm->rules_tbl[sub_rule_index].next_hop,
+ .fwd_class =
lpm->rules_tbl[sub_rule_index].fwd_class,
.depth = sub_rule_depth,
- .next_hop = lpm->rules_tbl
- [sub_rule_index].next_hop,
+ .valid = VALID,
};
for (i = tbl24_index; i < (tbl24_index + tbl24_range); i++)
{
- if (lpm->tbl24[i].ext_entry == 0 &&
+ if (lpm->tbl24[i].ext_valid == 0 &&
lpm->tbl24[i].depth <= depth ) {
lpm->tbl24[i] = new_tbl24_entry;
}
@@ -814,7 +850,7 @@ delete_depth_small(struct rte_lpm *lpm, uint32_t ip_masked,
* thus can be recycled
*/
static inline int32_t
-tbl8_recycle_check(struct rte_lpm_tbl8_entry *tbl8, uint32_t tbl8_group_start)
+tbl8_recycle_check(struct rte_lpm_tbl_entry *tbl8, uint32_t tbl8_group_start)
{
uint32_t tbl8_group_end, i;
tbl8_group_end = tbl8_group_start + RTE_LPM_TBL8_GROUP_NUM_ENTRIES;
@@ -891,11 +927,15 @@ delete_depth_big(struct rte_lpm *lpm, uint32_t ip_masked,
}
else {
/* Set new tbl8 entry. */
- struct rte_lpm_tbl8_entry new_tbl8_entry = {
- .valid = VALID,
- .depth = sub_rule_depth,
- .valid_group =
lpm->tbl8[tbl8_group_start].valid_group,
+ struct rte_lpm_tbl_entry new_tbl8_entry = {
+#ifdef RTE_LIBRTE_LPM_ASNUM
+ .as_num = lpm->rules_tbl[sub_rule_index].as_num,
+#endif
+ .fwd_class =
lpm->rules_tbl[sub_rule_index].fwd_class,
.next_hop = lpm->rules_tbl[sub_rule_index].next_hop,
+ .depth = sub_rule_depth,
+ .ext_valid = lpm->tbl8[tbl8_group_start].ext_valid,
+ .valid = VALID,
};
/*
@@ -923,11 +963,15 @@ delete_depth_big(struct rte_lpm *lpm, uint32_t ip_masked,
}
else if (tbl8_recycle_index > -1) {
/* Update tbl24 entry. */
- struct rte_lpm_tbl24_entry new_tbl24_entry = {
- { .next_hop =
lpm->tbl8[tbl8_recycle_index].next_hop, },
- .valid = VALID,
- .ext_entry = 0,
+ struct rte_lpm_tbl_entry new_tbl24_entry = {
+#ifdef RTE_LIBRTE_LPM_ASNUM
+ .as_num = lpm->tbl8[tbl8_recycle_index].as_num,
+#endif
+ .next_hop = lpm->tbl8[tbl8_recycle_index].next_hop,
+ .fwd_class =
lpm->tbl8[tbl8_recycle_index].fwd_class,
.depth = lpm->tbl8[tbl8_recycle_index].depth,
+ .ext_valid = 0,
+ .valid = VALID,
};
/* Set tbl24 before freeing tbl8 to avoid race condition. */
diff --git a/lib/librte_lpm/rte_lpm.h b/lib/librte_lpm/rte_lpm.h
index c299ce2..7c615bc 100644
--- a/lib/librte_lpm/rte_lpm.h
+++ b/lib/librte_lpm/rte_lpm.h
@@ -31,8 +31,8 @@
* OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
*/
-#ifndef _RTE_LPM_H_
-#define _RTE_LPM_H_
+#ifndef _RTE_LPM_EXT_H_
+#define _RTE_LPM_EXT_H_
/**
* @file
@@ -81,57 +81,58 @@ extern "C" {
#define RTE_LPM_RETURN_IF_TRUE(cond, retval)
#endif
-/** @internal bitmask with valid and ext_entry/valid_group fields set */
-#define RTE_LPM_VALID_EXT_ENTRY_BITMASK 0x0300
+/** @internal bitmask with valid and ext_valid/ext_valid fields set */
+#define RTE_LPM_VALID_EXT_ENTRY_BITMASK 0x03
/** Bitmask used to indicate successful lookup */
-#define RTE_LPM_LOOKUP_SUCCESS 0x0100
+#define RTE_LPM_LOOKUP_SUCCESS 0x01
+
+struct rte_lpm_res {
+ uint16_t next_hop;
+ uint8_t fwd_class;
+#ifdef RTE_LIBRTE_LPM_ASNUM
+ uint32_t as_num;
+#endif
+};
#if RTE_BYTE_ORDER == RTE_LITTLE_ENDIAN
-/** @internal Tbl24 entry structure. */
-struct rte_lpm_tbl24_entry {
- /* Stores Next hop or group index (i.e. gindex)into tbl8. */
+struct rte_lpm_tbl_entry {
+ uint8_t valid :1;
+ uint8_t ext_valid :1;
+ uint8_t depth :6;
+ uint8_t fwd_class;
union {
- uint8_t next_hop;
- uint8_t tbl8_gindex;
+ uint16_t next_hop;
+ uint16_t tbl8_gindex;
};
- /* Using single uint8_t to store 3 values. */
- uint8_t valid :1; /**< Validation flag. */
- uint8_t ext_entry :1; /**< External entry. */
- uint8_t depth :6; /**< Rule depth. */
-};
-
-/** @internal Tbl8 entry structure. */
-struct rte_lpm_tbl8_entry {
- uint8_t next_hop; /**< next hop. */
- /* Using single uint8_t to store 3 values. */
- uint8_t valid :1; /**< Validation flag. */
- uint8_t valid_group :1; /**< Group validation flag. */
- uint8_t depth :6; /**< Rule depth. */
+#ifdef RTE_LIBRTE_LPM_ASNUM
+ uint32_t as_num;
+#endif
};
#else
-struct rte_lpm_tbl24_entry {
- uint8_t depth :6;
- uint8_t ext_entry :1;
- uint8_t valid :1;
+struct rte_lpm_tbl_entry {
+#ifdef RTE_LIBRTE_LPM_ASNUM
+ uint32_t as_num;
+#endif
union {
- uint8_t tbl8_gindex;
- uint8_t next_hop;
+ uint16_t tbl8_gindex;
+ uint16_t next_hop;
};
-};
-
-struct rte_lpm_tbl8_entry {
- uint8_t depth :6;
- uint8_t valid_group :1;
- uint8_t valid :1;
- uint8_t next_hop;
+ uint8_t fwd_class;
+ uint8_t depth :6;
+ uint8_t ext_valid :1;
+ uint8_t valid :1;
};
#endif
/** @internal Rule structure. */
struct rte_lpm_rule {
uint32_t ip; /**< Rule IP address. */
- uint8_t next_hop; /**< Rule next hop. */
+#ifdef RTE_LIBRTE_LPM_ASNUM
+ uint32_t as_num;
+#endif
+ uint16_t next_hop; /**< Rule next hop. */
+ uint8_t fwd_class;
};
/** @internal Contains metadata about the rules table. */
@@ -148,9 +149,9 @@ struct rte_lpm {
struct rte_lpm_rule_info rule_info[RTE_LPM_MAX_DEPTH]; /**< Rule
info table. */
/* LPM Tables. */
- struct rte_lpm_tbl24_entry tbl24[RTE_LPM_TBL24_NUM_ENTRIES] \
+ struct rte_lpm_tbl_entry tbl24[RTE_LPM_TBL24_NUM_ENTRIES] \
__rte_cache_aligned; /**< LPM tbl24 table. */
- struct rte_lpm_tbl8_entry tbl8[RTE_LPM_TBL8_NUM_ENTRIES] \
+ struct rte_lpm_tbl_entry tbl8[RTE_LPM_TBL8_NUM_ENTRIES] \
__rte_cache_aligned; /**< LPM tbl8 table. */
struct rte_lpm_rule rules_tbl[0] \
__rte_cache_aligned; /**< LPM rules. */
@@ -219,7 +220,7 @@ rte_lpm_free(struct rte_lpm *lpm);
* 0 on success, negative value otherwise
*/
int
-rte_lpm_add(struct rte_lpm *lpm, uint32_t ip, uint8_t depth, uint8_t next_hop);
+rte_lpm_add(struct rte_lpm *lpm, uint32_t ip, uint8_t depth, struct rte_lpm_res *res);
/**
* Check if a rule is present in the LPM table,
@@ -238,7 +239,7 @@ rte_lpm_add(struct rte_lpm *lpm, uint32_t ip, uint8_t depth, uint8_t next_hop);
*/
int
rte_lpm_is_rule_present(struct rte_lpm *lpm, uint32_t ip, uint8_t depth,
-uint8_t *next_hop);
+ struct rte_lpm_res *res);
/**
* Delete a rule from the LPM table.
@@ -277,29 +278,43 @@ rte_lpm_delete_all(struct rte_lpm *lpm);
* -EINVAL for incorrect arguments, -ENOENT on lookup miss, 0 on lookup
hit
*/
static inline int
-rte_lpm_lookup(struct rte_lpm *lpm, uint32_t ip, uint8_t *next_hop)
+rte_lpm_lookup(struct rte_lpm *lpm, uint32_t ip, struct rte_lpm_res *res)
{
unsigned tbl24_index = (ip >> 8);
- uint16_t tbl_entry;
-
+#ifdef RTE_LIBRTE_LPM_ASNUM
+ uint64_t tbl_entry;
+#else
+ uint32_t tbl_entry;
+#endif
/* DEBUG: Check user input arguments. */
- RTE_LPM_RETURN_IF_TRUE(((lpm == NULL) || (next_hop == NULL)), -EINVAL);
+ RTE_LPM_RETURN_IF_TRUE(((lpm == NULL) || (res == NULL)), -EINVAL);
/* Copy tbl24 entry */
- tbl_entry = *(const uint16_t *)&lpm->tbl24[tbl24_index];
-
+#ifdef RTE_LIBRTE_LPM_ASNUM
+ tbl_entry = *(const uint64_t *)&lpm->tbl24[tbl24_index];
+#else
+ tbl_entry = *(const uint32_t *)&lpm->tbl24[tbl24_index];
+#endif
/* Copy tbl8 entry (only if needed) */
if (unlikely((tbl_entry & RTE_LPM_VALID_EXT_ENTRY_BITMASK) ==
RTE_LPM_VALID_EXT_ENTRY_BITMASK)) {
unsigned tbl8_index = (uint8_t)ip +
- ((uint8_t)tbl_entry *
RTE_LPM_TBL8_GROUP_NUM_ENTRIES);
+ ((*(struct rte_lpm_tbl_entry
*)&tbl_entry).tbl8_gindex * RTE_LPM_TBL8_GROUP_NUM_ENTRIES);
- tbl_entry = *(const uint16_t *)&lpm->tbl8[tbl8_index];
+#ifdef RTE_LIBRTE_LPM_ASNUM
+ tbl_entry = *(const uint64_t *)&lpm->tbl8[tbl8_index];
+#else
+ tbl_entry = *(const uint32_t *)&lpm->tbl8[tbl8_index];
+#endif
}
-
- *next_hop = (uint8_t)tbl_entry;
+ res->next_hop = ((struct rte_lpm_tbl_entry *)&tbl_entry)->next_hop;
+ res->fwd_class = ((struct rte_lpm_tbl_entry
*)&tbl_entry)->fwd_class;
+#ifdef RTE_LIBRTE_LPM_ASNUM
+ res->as_num = ((struct rte_lpm_tbl_entry
*)&tbl_entry)->as_num;
+#endif
return (tbl_entry & RTE_LPM_LOOKUP_SUCCESS) ? 0 : -ENOENT;
+
}
/**
@@ -322,19 +337,25 @@ rte_lpm_lookup(struct rte_lpm *lpm, uint32_t ip, uint8_t *next_hop)
* @return
* -EINVAL for incorrect arguments, otherwise 0
*/
-#define rte_lpm_lookup_bulk(lpm, ips, next_hops, n) \
- rte_lpm_lookup_bulk_func(lpm, ips, next_hops, n)
+#define rte_lpm_lookup_bulk(lpm, ips, res_tbl, n) \
+ rte_lpm_lookup_bulk_func(lpm, ips, res_tbl, n)
static inline int
-rte_lpm_lookup_bulk_func(const struct rte_lpm *lpm, const uint32_t * ips,
- uint16_t * next_hops, const unsigned n)
+rte_lpm_lookup_bulk_func(const struct rte_lpm *lpm, const uint32_t *ips,
+ struct rte_lpm_res *res_tbl, const unsigned n)
{
unsigned i;
+ int ret = 0;
+#ifdef RTE_LIBRTE_LPM_ASNUM
+ uint64_t tbl_entry;
+#else
+ uint32_t tbl_entry;
+#endif
unsigned tbl24_indexes[n];
/* DEBUG: Check user input arguments. */
RTE_LPM_RETURN_IF_TRUE(((lpm == NULL) || (ips == NULL) ||
- (next_hops == NULL)), -EINVAL);
+ (res_tbl == NULL)), -EINVAL);
for (i = 0; i < n; i++) {
tbl24_indexes[i] = ips[i] >> 8;
@@ -342,20 +363,32 @@ rte_lpm_lookup_bulk_func(const struct rte_lpm *lpm, const uint32_t * ips,
for (i = 0; i < n; i++) {
/* Simply copy tbl24 entry to output */
- next_hops[i] = *(const uint16_t
*)&lpm->tbl24[tbl24_indexes[i]];
-
+#ifdef RTE_LIBRTE_LPM_ASNUM
+ tbl_entry = *(const uint64_t
*)&lpm->tbl24[tbl24_indexes[i]];
+#else
+ tbl_entry = *(const uint32_t
*)&lpm->tbl24[tbl24_indexes[i]];
+#endif
/* Overwrite output with tbl8 entry if needed */
- if (unlikely((next_hops[i] &
RTE_LPM_VALID_EXT_ENTRY_BITMASK) ==
- RTE_LPM_VALID_EXT_ENTRY_BITMASK)) {
+ if (unlikely((tbl_entry & RTE_LPM_VALID_EXT_ENTRY_BITMASK)
==
+ RTE_LPM_VALID_EXT_ENTRY_BITMASK)) {
unsigned tbl8_index = (uint8_t)ips[i] +
- ((uint8_t)next_hops[i] *
- RTE_LPM_TBL8_GROUP_NUM_ENTRIES);
+ ((*(struct rte_lpm_tbl_entry
*)&tbl_entry).tbl8_gindex * RTE_LPM_TBL8_GROUP_NUM_ENTRIES);
- next_hops[i] = *(const uint16_t
*)&lpm->tbl8[tbl8_index];
+#ifdef RTE_LIBRTE_LPM_ASNUM
+ tbl_entry = *(const uint64_t
*)&lpm->tbl8[tbl8_index];
+#else
+ tbl_entry = *(const uint32_t
*)&lpm->tbl8[tbl8_index];
+#endif
}
+ res_tbl[i].next_hop = ((struct rte_lpm_tbl_entry
*)&tbl_entry)->next_hop;
+ res_tbl[i].fwd_class = ((struct rte_lpm_tbl_entry
*)&tbl_entry)->next_hop;
+#ifdef RTE_LIBRTE_LPM_ASNUM
+ res_tbl[i].as_num = ((struct rte_lpm_tbl_entry
*)&tbl_entry)->as_num;
+#endif
+ ret |= 1 << i;
}
- return 0;
+ return ret;
}
/* Mask four results. */
@@ -477,4 +510,4 @@ rte_lpm_lookupx4(const struct rte_lpm *lpm, __m128i ip, uint16_t hop[4],
}
#endif
-#endif /* _RTE_LPM_H_ */
+#endif /* _RTE_LPM_EXT_H_ */
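
For comparison with the 24-bit next-hop series, a caller of this variant gets
all result fields in one struct; a minimal sketch (the surrounding application
code is illustrative only):

  #include <rte_lpm.h>

  static uint16_t classify(struct rte_lpm *lpm, uint32_t ip)
  {
          struct rte_lpm_res res;

          if (rte_lpm_lookup(lpm, ip, &res) != 0)
                  return 0;   /* miss (-ENOENT); fallback is application-defined */
          /* res.next_hop is 16 bits, res.fwd_class 8 bits; res.as_num is
           * present only when RTE_LIBRTE_LPM_ASNUM is enabled. */
          return res.next_hop;
  }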
2015-10-24 9:09 GMT+03:00 Matthew Hall <mhall@mhcomputing.net>:
> On 10/23/15 9:20 AM, Matthew Hall wrote:
>
>> On Fri, Oct 23, 2015 at 03:51:48PM +0200, Michal Jastrzebski wrote:
>>
>>> From: Michal Kobylinski <michalx.kobylinski@intel.com>
>>>
>>> The current DPDK implementation for LPM for IPv4 and IPv6 limits the
>>> number of next hops to 256, as the next hop ID is an 8-bit long field.
>>> Proposed extension increase number of next hops for IPv4 to 2^24 and
>>> also allows 32-bits read/write operations.
>>>
>>> This patchset requires additional change to rte_table library to meet
>>> ABI compatibility requirements. A v2 will be sent next week.
>>>
>>
>> I also have a patchset for this.
>>
>> I will send it out as well so we could compare.
>>
>> Matthew.
>>
>
> Sorry about the delay; I only work on DPDK in personal time and not as
> part of a job. My patchset is attached to this email.
>
> One possible advantage with my patchset, compared to others, is that the
> space problem is fixed in both IPV4 and in IPV6, to prevent asymmetry
> between these two standards, which is something I try to avoid as much as
> humanly possible.
>
> This is because my application code is green-field, so I absolutely don't
> want to put any ugly hacks or incompatibilities in this code if I can
> possibly avoid it.
>
> Otherwise, I am not necessarily as expert about rte_lpm as some of the
> full-time guys, but I think with four or five of us in the thread hammering
> out patches we will be able to create something amazing together and I am
> very very very very very happy about this.
>
> Matthew.
>
^ permalink raw reply [relevance 0%]
* [dpdk-dev] [PATCH v6 4/5] doc: update with link changes
@ 2015-10-25 21:59 13% ` Marc Sune
0 siblings, 0 replies; 200+ results
From: Marc Sune @ 2015-10-25 21:59 UTC (permalink / raw)
To: dev
Add new features, ABI changes and resolved issues notice for
the refactored link patch.
Signed-off-by: Marc Sune <marcdevel@gmail.com>
---
doc/guides/rel_notes/release_2_2.rst | 23 +++++++++++++++++++++++
1 file changed, 23 insertions(+)
diff --git a/doc/guides/rel_notes/release_2_2.rst b/doc/guides/rel_notes/release_2_2.rst
index 5687676..e0d1741 100644
--- a/doc/guides/rel_notes/release_2_2.rst
+++ b/doc/guides/rel_notes/release_2_2.rst
@@ -4,6 +4,17 @@ DPDK Release 2.2
New Features
------------
+* **ethdev: define a set of advertised link speeds.**
+
+ Allows defining a set of advertised speeds for auto-negotiation,
+ explicitly disabling link auto-negotiation (single speed), and
+ requesting full auto-negotiation.
+
+* **ethdev: add speed_cap bitmap to report Ethernet device link speed
+ capabilities.**
+
+ ``struct rte_eth_dev_info`` now has a speed_cap bitmap, which allows the
+ application to query the supported speeds of that Ethernet device.
Resolved Issues
---------------
@@ -48,6 +59,11 @@ Libraries
Fixed issue where an incorrect Cuckoo Hash key table size could be
calculated limiting the size to 4GB.
+* **ethdev: Fixed link_speed overflow in rte_eth_link for 100Gbps.**
+
+ 100Gbps expressed in Mbps (100000) exceeds the 16-bit maximum value of
+ ``link_speed`` in ``rte_eth_link``.
+
Examples
~~~~~~~~
@@ -81,6 +97,8 @@ API Changes
* The deprecated ring PMD functions are removed:
rte_eth_ring_pair_create() and rte_eth_ring_pair_attach().
+* New API call, rte_eth_speed_to_bm_flag(), in ethdev to map numerical speeds
+ to bitmap fields.
ABI Changes
-----------
@@ -91,6 +109,11 @@ ABI Changes
* The ethdev flow director entries for SCTP were changed.
It was already done in 2.1 for CONFIG_RTE_NEXT_ABI.
+* The ethdev rte_eth_link and rte_eth_conf structures were changed to
+ support the new link API, as well as ETH_LINK_HALF/FULL_DUPLEX.
+
+* The ethdev rte_eth_dev_info was changed to support device speed capabilities.
+
* The mbuf structure was changed to support unified packet type.
It was already done in 2.1 for CONFIG_RTE_NEXT_ABI.
--
2.1.4
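
For readers new to the series, the intended use of the new capability field is
roughly the sketch below; the speed_cap field name follows the notes above,
and ETH_SPEED_CAP_10G is an assumed flag name used purely for illustration;
the actual patches are authoritative:

  #include <stdio.h>
  #include <rte_ethdev.h>

  static void show_speed_cap(uint8_t port_id)
  {
          struct rte_eth_dev_info dev_info;

          rte_eth_dev_info_get(port_id, &dev_info);
          /* speed_cap is the bitmap added by this series; the flag name
           * below is assumed for the purposes of this example. */
          if (dev_info.speed_cap & ETH_SPEED_CAP_10G)
                  printf("port %u advertises 10G capability\n", port_id);
  }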
^ permalink raw reply [relevance 13%]
* Re: [dpdk-dev] [PATCH v1 0/3] lpm: increase number of next hops for lpm (ipv4)
[not found] ` <20151026115519.GA7576@MKJASTRX-MOBL>
@ 2015-10-26 11:57 0% ` Jastrzebski, MichalX K
2015-10-26 14:03 3% ` Vladimir Medvedkin
0 siblings, 1 reply; 200+ results
From: Jastrzebski, MichalX K @ 2015-10-26 11:57 UTC (permalink / raw)
To: Vladimir Medvedkin; +Cc: dev
> -----Original Message-----
> From: Michal Jastrzebski [mailto:michalx.k.jastrzebski@intel.com]
> Sent: Monday, October 26, 2015 12:55 PM
> To: Vladimir Medvedkin
> Subject: Re: [dpdk-dev] [PATCH v1 0/3] lpm: increase number of next hops
> for lpm (ipv4)
>
> On Sun, Oct 25, 2015 at 08:52:04PM +0300, Vladimir Medvedkin wrote:
> > Hi all,
> >
> > Here my implementation
> >
> > Signed-off-by: Vladimir Medvedkin <medvedkinv@gmail.com>
> > ---
> > config/common_bsdapp | 1 +
> > config/common_linuxapp | 1 +
> > lib/librte_lpm/rte_lpm.c | 194
> > +++++++++++++++++++++++++++++------------------
> > lib/librte_lpm/rte_lpm.h | 163 +++++++++++++++++++++++----------------
> > 4 files changed, 219 insertions(+), 140 deletions(-)
> >
> > diff --git a/config/common_bsdapp b/config/common_bsdapp
> > index b37dcf4..408cc2c 100644
> > --- a/config/common_bsdapp
> > +++ b/config/common_bsdapp
> > @@ -344,6 +344,7 @@ CONFIG_RTE_LIBRTE_JOBSTATS=y
> > #
> > CONFIG_RTE_LIBRTE_LPM=y
> > CONFIG_RTE_LIBRTE_LPM_DEBUG=n
> > +CONFIG_RTE_LIBRTE_LPM_ASNUM=n
> >
> > #
> > # Compile librte_acl
> > diff --git a/config/common_linuxapp b/config/common_linuxapp
> > index 0de43d5..1c60e63 100644
> > --- a/config/common_linuxapp
> > +++ b/config/common_linuxapp
> > @@ -352,6 +352,7 @@ CONFIG_RTE_LIBRTE_JOBSTATS=y
> > #
> > CONFIG_RTE_LIBRTE_LPM=y
> > CONFIG_RTE_LIBRTE_LPM_DEBUG=n
> > +CONFIG_RTE_LIBRTE_LPM_ASNUM=n
> >
> > #
> > # Compile librte_acl
> > diff --git a/lib/librte_lpm/rte_lpm.c b/lib/librte_lpm/rte_lpm.c
> > index 163ba3c..363b400 100644
> > --- a/lib/librte_lpm/rte_lpm.c
> > +++ b/lib/librte_lpm/rte_lpm.c
> > @@ -159,9 +159,11 @@ rte_lpm_create(const char *name, int socket_id,
> int
> > max_rules,
> >
> > lpm_list = RTE_TAILQ_CAST(rte_lpm_tailq.head, rte_lpm_list);
> >
> > - RTE_BUILD_BUG_ON(sizeof(struct rte_lpm_tbl24_entry) != 2);
> > - RTE_BUILD_BUG_ON(sizeof(struct rte_lpm_tbl8_entry) != 2);
> > -
> > +#ifdef RTE_LIBRTE_LPM_ASNUM
> > + RTE_BUILD_BUG_ON(sizeof(struct rte_lpm_tbl_entry) != 8);
> > +#else
> > + RTE_BUILD_BUG_ON(sizeof(struct rte_lpm_tbl_entry) != 4);
> > +#endif
> > /* Check user arguments. */
> > if ((name == NULL) || (socket_id < -1) || (max_rules == 0)){
> > rte_errno = EINVAL;
> > @@ -261,7 +263,7 @@ rte_lpm_free(struct rte_lpm *lpm)
> > */
> > static inline int32_t
> > rule_add(struct rte_lpm *lpm, uint32_t ip_masked, uint8_t depth,
> > - uint8_t next_hop)
> > + struct rte_lpm_res *res)
> > {
> > uint32_t rule_gindex, rule_index, last_rule;
> > int i;
> > @@ -282,8 +284,11 @@ rule_add(struct rte_lpm *lpm, uint32_t
> ip_masked,
> > uint8_t depth,
> >
> > /* If rule already exists update its next_hop and
> > return. */
> > if (lpm->rules_tbl[rule_index].ip == ip_masked) {
> > - lpm->rules_tbl[rule_index].next_hop =
> > next_hop;
> > -
> > + lpm->rules_tbl[rule_index].next_hop =
> > res->next_hop;
> > + lpm->rules_tbl[rule_index].fwd_class =
> > res->fwd_class;
> > +#ifdef RTE_LIBRTE_LPM_ASNUM
> > + lpm->rules_tbl[rule_index].as_num =
> > res->as_num;
> > +#endif
> > return rule_index;
> > }
> > }
> > @@ -320,7 +325,11 @@ rule_add(struct rte_lpm *lpm, uint32_t
> ip_masked,
> > uint8_t depth,
> >
> > /* Add the new rule. */
> > lpm->rules_tbl[rule_index].ip = ip_masked;
> > - lpm->rules_tbl[rule_index].next_hop = next_hop;
> > + lpm->rules_tbl[rule_index].next_hop = res->next_hop;
> > + lpm->rules_tbl[rule_index].fwd_class = res->fwd_class;
> > +#ifdef RTE_LIBRTE_LPM_ASNUM
> > + lpm->rules_tbl[rule_index].as_num = res->as_num;
> > +#endif
> >
> > /* Increment the used rules counter for this rule group. */
> > lpm->rule_info[depth - 1].used_rules++;
> > @@ -382,10 +391,10 @@ rule_find(struct rte_lpm *lpm, uint32_t
> ip_masked,
> > uint8_t depth)
> > * Find, clean and allocate a tbl8.
> > */
> > static inline int32_t
> > -tbl8_alloc(struct rte_lpm_tbl8_entry *tbl8)
> > +tbl8_alloc(struct rte_lpm_tbl_entry *tbl8)
> > {
> > uint32_t tbl8_gindex; /* tbl8 group index. */
> > - struct rte_lpm_tbl8_entry *tbl8_entry;
> > + struct rte_lpm_tbl_entry *tbl8_entry;
> >
> > /* Scan through tbl8 to find a free (i.e. INVALID) tbl8 group. */
> > for (tbl8_gindex = 0; tbl8_gindex < RTE_LPM_TBL8_NUM_GROUPS;
> > @@ -393,12 +402,12 @@ tbl8_alloc(struct rte_lpm_tbl8_entry *tbl8)
> > tbl8_entry = &tbl8[tbl8_gindex *
> > RTE_LPM_TBL8_GROUP_NUM_ENTRIES];
> > /* If a free tbl8 group is found clean it and set as VALID.
> > */
> > - if (!tbl8_entry->valid_group) {
> > + if (!tbl8_entry->ext_valid) {
> > memset(&tbl8_entry[0], 0,
> > RTE_LPM_TBL8_GROUP_NUM_ENTRIES *
> > sizeof(tbl8_entry[0]));
> >
> > - tbl8_entry->valid_group = VALID;
> > + tbl8_entry->ext_valid = VALID;
> >
> > /* Return group index for allocated tbl8 group. */
> > return tbl8_gindex;
> > @@ -410,46 +419,50 @@ tbl8_alloc(struct rte_lpm_tbl8_entry *tbl8)
> > }
> >
> > static inline void
> > -tbl8_free(struct rte_lpm_tbl8_entry *tbl8, uint32_t tbl8_group_start)
> > +tbl8_free(struct rte_lpm_tbl_entry *tbl8, uint32_t tbl8_group_start)
> > {
> > /* Set tbl8 group invalid*/
> > - tbl8[tbl8_group_start].valid_group = INVALID;
> > + tbl8[tbl8_group_start].ext_valid = INVALID;
> > }
> >
> > static inline int32_t
> > add_depth_small(struct rte_lpm *lpm, uint32_t ip, uint8_t depth,
> > - uint8_t next_hop)
> > + struct rte_lpm_res *res)
> > {
> > uint32_t tbl24_index, tbl24_range, tbl8_index, tbl8_group_end, i, j;
> >
> > /* Calculate the index into Table24. */
> > tbl24_index = ip >> 8;
> > tbl24_range = depth_to_range(depth);
> > + struct rte_lpm_tbl_entry new_tbl_entry = {
> > +#ifdef RTE_LIBRTE_LPM_ASNUM
> > + .as_num = res->as_num,
> > +#endif
> > + .next_hop = res->next_hop,
> > + .fwd_class = res->fwd_class,
> > + .ext_valid = 0,
> > + .depth = depth,
> > + .valid = VALID,
> > + };
> > +
> >
> > for (i = tbl24_index; i < (tbl24_index + tbl24_range); i++) {
> > /*
> > * For invalid OR valid and non-extended tbl 24 entries set
> > * entry.
> > */
> > - if (!lpm->tbl24[i].valid || (lpm->tbl24[i].ext_entry == 0 &&
> > + if (!lpm->tbl24[i].valid || (lpm->tbl24[i].ext_valid == 0 &&
> > lpm->tbl24[i].depth <= depth)) {
> >
> > - struct rte_lpm_tbl24_entry new_tbl24_entry = {
> > - { .next_hop = next_hop, },
> > - .valid = VALID,
> > - .ext_entry = 0,
> > - .depth = depth,
> > - };
> > -
> > /* Setting tbl24 entry in one go to avoid race
> > * conditions
> > */
> > - lpm->tbl24[i] = new_tbl24_entry;
> > + lpm->tbl24[i] = new_tbl_entry;
> >
> > continue;
> > }
> >
> > - if (lpm->tbl24[i].ext_entry == 1) {
> > + if (lpm->tbl24[i].ext_valid == 1) {
> > /* If tbl24 entry is valid and extended calculate
> > the
> > * index into tbl8.
> > */
> > @@ -461,19 +474,14 @@ add_depth_small(struct rte_lpm *lpm, uint32_t
> ip,
> > uint8_t depth,
> > for (j = tbl8_index; j < tbl8_group_end; j++) {
> > if (!lpm->tbl8[j].valid ||
> > lpm->tbl8[j].depth <=
> > depth) {
> > - struct rte_lpm_tbl8_entry
> > - new_tbl8_entry = {
> > - .valid = VALID,
> > - .valid_group = VALID,
> > - .depth = depth,
> > - .next_hop = next_hop,
> > - };
> > +
> > + new_tbl_entry.ext_valid = VALID;
> >
> > /*
> > * Setting tbl8 entry in one go to
> > avoid
> > * race conditions
> > */
> > - lpm->tbl8[j] = new_tbl8_entry;
> > + lpm->tbl8[j] = new_tbl_entry;
> >
> > continue;
> > }
> > @@ -486,7 +494,7 @@ add_depth_small(struct rte_lpm *lpm, uint32_t ip,
> > uint8_t depth,
> >
> > static inline int32_t
> > add_depth_big(struct rte_lpm *lpm, uint32_t ip_masked, uint8_t depth,
> > - uint8_t next_hop)
> > + struct rte_lpm_res *res)
> > {
> > uint32_t tbl24_index;
> > int32_t tbl8_group_index, tbl8_group_start, tbl8_group_end,
> > tbl8_index,
> > @@ -512,7 +520,11 @@ add_depth_big(struct rte_lpm *lpm, uint32_t
> ip_masked,
> > uint8_t depth,
> > /* Set tbl8 entry. */
> > for (i = tbl8_index; i < (tbl8_index + tbl8_range); i++) {
> > lpm->tbl8[i].depth = depth;
> > - lpm->tbl8[i].next_hop = next_hop;
> > + lpm->tbl8[i].next_hop = res->next_hop;
> > + lpm->tbl8[i].fwd_class = res->fwd_class;
> > +#ifdef RTE_LIBRTE_LPM_ASNUM
> > + lpm->tbl8[i].as_num = res->as_num;
> > +#endif
> > lpm->tbl8[i].valid = VALID;
> > }
> >
> > @@ -522,17 +534,17 @@ add_depth_big(struct rte_lpm *lpm, uint32_t
> > ip_masked, uint8_t depth,
> > * so assign whole structure in one go
> > */
> >
> > - struct rte_lpm_tbl24_entry new_tbl24_entry = {
> > - { .tbl8_gindex = (uint8_t)tbl8_group_index, },
> > - .valid = VALID,
> > - .ext_entry = 1,
> > + struct rte_lpm_tbl_entry new_tbl24_entry = {
> > + .tbl8_gindex = (uint16_t)tbl8_group_index,
> > .depth = 0,
> > + .ext_valid = 1,
> > + .valid = VALID,
> > };
> >
> > lpm->tbl24[tbl24_index] = new_tbl24_entry;
> >
> > }/* If valid entry but not extended calculate the index into
> > Table8. */
> > - else if (lpm->tbl24[tbl24_index].ext_entry == 0) {
> > + else if (lpm->tbl24[tbl24_index].ext_valid == 0) {
> > /* Search for free tbl8 group. */
> > tbl8_group_index = tbl8_alloc(lpm->tbl8);
> >
> > @@ -551,6 +563,11 @@ add_depth_big(struct rte_lpm *lpm, uint32_t
> ip_masked,
> > uint8_t depth,
> > lpm->tbl8[i].depth = lpm->tbl24[tbl24_index].depth;
> > lpm->tbl8[i].next_hop =
> > lpm->tbl24[tbl24_index].next_hop;
> > + lpm->tbl8[i].fwd_class =
> > + lpm->tbl24[tbl24_index].fwd_class;
> > +#ifdef RTE_LIBRTE_LPM_ASNUM
> > + lpm->tbl8[i].as_num =
> > lpm->tbl24[tbl24_index].as_num;
> > +#endif
> > }
> >
> > tbl8_index = tbl8_group_start + (ip_masked & 0xFF);
> > @@ -561,7 +578,11 @@ add_depth_big(struct rte_lpm *lpm, uint32_t
> ip_masked,
> > uint8_t depth,
> > lpm->tbl8[i].depth <= depth) {
> > lpm->tbl8[i].valid = VALID;
> > lpm->tbl8[i].depth = depth;
> > - lpm->tbl8[i].next_hop = next_hop;
> > + lpm->tbl8[i].next_hop = res->next_hop;
> > + lpm->tbl8[i].fwd_class = res->fwd_class;
> > +#ifdef RTE_LIBRTE_LPM_ASNUM
> > + lpm->tbl8[i].as_num = res->as_num;
> > +#endif
> >
> > continue;
> > }
> > @@ -573,11 +594,11 @@ add_depth_big(struct rte_lpm *lpm, uint32_t
> > ip_masked, uint8_t depth,
> > * so assign whole structure in one go.
> > */
> >
> > - struct rte_lpm_tbl24_entry new_tbl24_entry = {
> > - { .tbl8_gindex = (uint8_t)tbl8_group_index,
> > },
> > - .valid = VALID,
> > - .ext_entry = 1,
> > + struct rte_lpm_tbl_entry new_tbl24_entry = {
> > + .tbl8_gindex = (uint16_t)tbl8_group_index,
> > .depth = 0,
> > + .ext_valid = 1,
> > + .valid = VALID,
> > };
> >
> > lpm->tbl24[tbl24_index] = new_tbl24_entry;
> > @@ -595,11 +616,15 @@ add_depth_big(struct rte_lpm *lpm, uint32_t
> > ip_masked, uint8_t depth,
> >
> > if (!lpm->tbl8[i].valid ||
> > lpm->tbl8[i].depth <= depth) {
> > - struct rte_lpm_tbl8_entry new_tbl8_entry = {
> > - .valid = VALID,
> > + struct rte_lpm_tbl_entry new_tbl8_entry = {
> > +#ifdef RTE_LIBRTE_LPM_ASNUM
> > + .as_num = res->as_num,
> > +#endif
> > + .next_hop = res->next_hop,
> > + .fwd_class = res->fwd_class,
> > .depth = depth,
> > - .next_hop = next_hop,
> > - .valid_group =
> > lpm->tbl8[i].valid_group,
> > + .ext_valid = lpm->tbl8[i].ext_valid,
> > + .valid = VALID,
> > };
> >
> > /*
> > @@ -621,19 +646,19 @@ add_depth_big(struct rte_lpm *lpm, uint32_t
> > ip_masked, uint8_t depth,
> > */
> > int
> > rte_lpm_add(struct rte_lpm *lpm, uint32_t ip, uint8_t depth,
> > - uint8_t next_hop)
> > + struct rte_lpm_res *res)
> > {
> > int32_t rule_index, status = 0;
> > uint32_t ip_masked;
> >
> > /* Check user arguments. */
> > - if ((lpm == NULL) || (depth < 1) || (depth > RTE_LPM_MAX_DEPTH))
> > + if ((lpm == NULL) || (res == NULL) || (depth < 1) || (depth >
> > RTE_LPM_MAX_DEPTH))
> > return -EINVAL;
> >
> > ip_masked = ip & depth_to_mask(depth);
> >
> > /* Add the rule to the rule table. */
> > - rule_index = rule_add(lpm, ip_masked, depth, next_hop);
> > + rule_index = rule_add(lpm, ip_masked, depth, res);
> >
> > /* If the is no space available for new rule return error. */
> > if (rule_index < 0) {
> > @@ -641,10 +666,10 @@ rte_lpm_add(struct rte_lpm *lpm, uint32_t ip,
> uint8_t
> > depth,
> > }
> >
> > if (depth <= MAX_DEPTH_TBL24) {
> > - status = add_depth_small(lpm, ip_masked, depth, next_hop);
> > + status = add_depth_small(lpm, ip_masked, depth, res);
> > }
> > else { /* If depth > RTE_LPM_MAX_DEPTH_TBL24 */
> > - status = add_depth_big(lpm, ip_masked, depth, next_hop);
> > + status = add_depth_big(lpm, ip_masked, depth, res);
> >
> > /*
> > * If add fails due to exhaustion of tbl8 extensions delete
> > @@ -665,14 +690,14 @@ rte_lpm_add(struct rte_lpm *lpm, uint32_t ip,
> uint8_t
> > depth,
> > */
> > int
> > rte_lpm_is_rule_present(struct rte_lpm *lpm, uint32_t ip, uint8_t depth,
> > -uint8_t *next_hop)
> > + struct rte_lpm_res *res)
> > {
> > uint32_t ip_masked;
> > int32_t rule_index;
> >
> > /* Check user arguments. */
> > if ((lpm == NULL) ||
> > - (next_hop == NULL) ||
> > + (res == NULL) ||
> > (depth < 1) || (depth > RTE_LPM_MAX_DEPTH))
> > return -EINVAL;
> >
> > @@ -681,7 +706,11 @@ uint8_t *next_hop)
> > rule_index = rule_find(lpm, ip_masked, depth);
> >
> > if (rule_index >= 0) {
> > - *next_hop = lpm->rules_tbl[rule_index].next_hop;
> > + res->next_hop = lpm->rules_tbl[rule_index].next_hop;
> > + res->fwd_class = lpm->rules_tbl[rule_index].fwd_class;
> > +#ifdef RTE_LIBRTE_LPM_ASNUM
> > + res->as_num = lpm->rules_tbl[rule_index].as_num;
> > +#endif
> > return 1;
> > }
> >
> > @@ -731,7 +760,7 @@ delete_depth_small(struct rte_lpm *lpm, uint32_t
> > ip_masked,
> > */
> > for (i = tbl24_index; i < (tbl24_index + tbl24_range); i++)
> > {
> >
> > - if (lpm->tbl24[i].ext_entry == 0 &&
> > + if (lpm->tbl24[i].ext_valid == 0 &&
> > lpm->tbl24[i].depth <= depth ) {
> > lpm->tbl24[i].valid = INVALID;
> > }
> > @@ -761,23 +790,30 @@ delete_depth_small(struct rte_lpm *lpm,
> uint32_t
> > ip_masked,
> > * associated with this rule.
> > */
> >
> > - struct rte_lpm_tbl24_entry new_tbl24_entry = {
> > - {.next_hop =
> > lpm->rules_tbl[sub_rule_index].next_hop,},
> > - .valid = VALID,
> > - .ext_entry = 0,
> > + struct rte_lpm_tbl_entry new_tbl24_entry = {
> > +#ifdef RTE_LIBRTE_LPM_ASNUM
> > + .as_num = lpm->rules_tbl[sub_rule_index].as_num,
> > +#endif
> > + .next_hop = lpm->rules_tbl[sub_rule_index].next_hop,
> > + .fwd_class =
> > lpm->rules_tbl[sub_rule_index].fwd_class,
> > .depth = sub_rule_depth,
> > + .ext_valid = 0,
> > + .valid = VALID,
> > };
> >
> > - struct rte_lpm_tbl8_entry new_tbl8_entry = {
> > - .valid = VALID,
> > + struct rte_lpm_tbl_entry new_tbl8_entry = {
> > +#ifdef RTE_LIBRTE_LPM_ASNUM
> > + .as_num = lpm->rules_tbl[sub_rule_index].as_num,
> > +#endif
> > + .next_hop = lpm->rules_tbl[sub_rule_index].next_hop,
> > + .fwd_class =
> > lpm->rules_tbl[sub_rule_index].fwd_class,
> > .depth = sub_rule_depth,
> > - .next_hop = lpm->rules_tbl
> > - [sub_rule_index].next_hop,
> > + .valid = VALID,
> > };
> >
> > for (i = tbl24_index; i < (tbl24_index + tbl24_range); i++)
> > {
> >
> > - if (lpm->tbl24[i].ext_entry == 0 &&
> > + if (lpm->tbl24[i].ext_valid == 0 &&
> > lpm->tbl24[i].depth <= depth ) {
> > lpm->tbl24[i] = new_tbl24_entry;
> > }
> > @@ -814,7 +850,7 @@ delete_depth_small(struct rte_lpm *lpm, uint32_t
> > ip_masked,
> > * thus can be recycled
> > */
> > static inline int32_t
> > -tbl8_recycle_check(struct rte_lpm_tbl8_entry *tbl8, uint32_t
> > tbl8_group_start)
> > +tbl8_recycle_check(struct rte_lpm_tbl_entry *tbl8, uint32_t
> > tbl8_group_start)
> > {
> > uint32_t tbl8_group_end, i;
> > tbl8_group_end = tbl8_group_start +
> RTE_LPM_TBL8_GROUP_NUM_ENTRIES;
> > @@ -891,11 +927,15 @@ delete_depth_big(struct rte_lpm *lpm, uint32_t
> > ip_masked,
> > }
> > else {
> > /* Set new tbl8 entry. */
> > - struct rte_lpm_tbl8_entry new_tbl8_entry = {
> > - .valid = VALID,
> > - .depth = sub_rule_depth,
> > - .valid_group =
> > lpm->tbl8[tbl8_group_start].valid_group,
> > + struct rte_lpm_tbl_entry new_tbl8_entry = {
> > +#ifdef RTE_LIBRTE_LPM_ASNUM
> > + .as_num = lpm->rules_tbl[sub_rule_index].as_num,
> > +#endif
> > + .fwd_class =
> > lpm->rules_tbl[sub_rule_index].fwd_class,
> > .next_hop = lpm->rules_tbl[sub_rule_index].next_hop,
> > + .depth = sub_rule_depth,
> > + .ext_valid = lpm->tbl8[tbl8_group_start].ext_valid,
> > + .valid = VALID,
> > };
> >
> > /*
> > @@ -923,11 +963,15 @@ delete_depth_big(struct rte_lpm *lpm, uint32_t
> > ip_masked,
> > }
> > else if (tbl8_recycle_index > -1) {
> > /* Update tbl24 entry. */
> > - struct rte_lpm_tbl24_entry new_tbl24_entry = {
> > - { .next_hop =
> > lpm->tbl8[tbl8_recycle_index].next_hop, },
> > - .valid = VALID,
> > - .ext_entry = 0,
> > + struct rte_lpm_tbl_entry new_tbl24_entry = {
> > +#ifdef RTE_LIBRTE_LPM_ASNUM
> > + .as_num = lpm->tbl8[tbl8_recycle_index].as_num,
> > +#endif
> > + .next_hop = lpm->tbl8[tbl8_recycle_index].next_hop,
> > + .fwd_class =
> > lpm->tbl8[tbl8_recycle_index].fwd_class,
> > .depth = lpm->tbl8[tbl8_recycle_index].depth,
> > + .ext_valid = 0,
> > + .valid = VALID,
> > };
> >
> > /* Set tbl24 before freeing tbl8 to avoid race condition. */
> > diff --git a/lib/librte_lpm/rte_lpm.h b/lib/librte_lpm/rte_lpm.h
> > index c299ce2..7c615bc 100644
> > --- a/lib/librte_lpm/rte_lpm.h
> > +++ b/lib/librte_lpm/rte_lpm.h
> > @@ -31,8 +31,8 @@
> > * OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH
> DAMAGE.
> > */
> >
> > -#ifndef _RTE_LPM_H_
> > -#define _RTE_LPM_H_
> > +#ifndef _RTE_LPM_EXT_H_
> > +#define _RTE_LPM_EXT_H_
> >
> > /**
> > * @file
> > @@ -81,57 +81,58 @@ extern "C" {
> > #define RTE_LPM_RETURN_IF_TRUE(cond, retval)
> > #endif
> >
> > -/** @internal bitmask with valid and ext_entry/valid_group fields set */
> > -#define RTE_LPM_VALID_EXT_ENTRY_BITMASK 0x0300
> > +/** @internal bitmask with valid and ext_valid/ext_valid fields set */
> > +#define RTE_LPM_VALID_EXT_ENTRY_BITMASK 0x03
> >
> > /** Bitmask used to indicate successful lookup */
> > -#define RTE_LPM_LOOKUP_SUCCESS 0x0100
> > +#define RTE_LPM_LOOKUP_SUCCESS 0x01
> > +
> > +struct rte_lpm_res {
> > + uint16_t next_hop;
> > + uint8_t fwd_class;
> > +#ifdef RTE_LIBRTE_LPM_ASNUM
> > + uint32_t as_num;
> > +#endif
> > +};
> >
> > #if RTE_BYTE_ORDER == RTE_LITTLE_ENDIAN
> > -/** @internal Tbl24 entry structure. */
> > -struct rte_lpm_tbl24_entry {
> > - /* Stores Next hop or group index (i.e. gindex)into tbl8. */
> > +struct rte_lpm_tbl_entry {
> > + uint8_t valid :1;
> > + uint8_t ext_valid :1;
> > + uint8_t depth :6;
> > + uint8_t fwd_class;
> > union {
> > - uint8_t next_hop;
> > - uint8_t tbl8_gindex;
> > + uint16_t next_hop;
> > + uint16_t tbl8_gindex;
> > };
> > - /* Using single uint8_t to store 3 values. */
> > - uint8_t valid :1; /**< Validation flag. */
> > - uint8_t ext_entry :1; /**< External entry. */
> > - uint8_t depth :6; /**< Rule depth. */
> > -};
> > -
> > -/** @internal Tbl8 entry structure. */
> > -struct rte_lpm_tbl8_entry {
> > - uint8_t next_hop; /**< next hop. */
> > - /* Using single uint8_t to store 3 values. */
> > - uint8_t valid :1; /**< Validation flag. */
> > - uint8_t valid_group :1; /**< Group validation flag. */
> > - uint8_t depth :6; /**< Rule depth. */
> > +#ifdef RTE_LIBRTE_LPM_ASNUM
> > + uint32_t as_num;
> > +#endif
> > };
> > #else
> > -struct rte_lpm_tbl24_entry {
> > - uint8_t depth :6;
> > - uint8_t ext_entry :1;
> > - uint8_t valid :1;
> > +struct rte_lpm_tbl_entry {
> > +#ifdef RTE_LIBRTE_LPM_ASNUM
> > + uint32_t as_num;
> > +#endif
> > union {
> > - uint8_t tbl8_gindex;
> > - uint8_t next_hop;
> > + uint16_t tbl8_gindex;
> > + uint16_t next_hop;
> > };
> > -};
> > -
> > -struct rte_lpm_tbl8_entry {
> > - uint8_t depth :6;
> > - uint8_t valid_group :1;
> > - uint8_t valid :1;
> > - uint8_t next_hop;
> > + uint8_t fwd_class;
> > + uint8_t depth :6;
> > + uint8_t ext_valid :1;
> > + uint8_t valid :1;
> > };
> > #endif
> >
> > /** @internal Rule structure. */
> > struct rte_lpm_rule {
> > uint32_t ip; /**< Rule IP address. */
> > - uint8_t next_hop; /**< Rule next hop. */
> > +#ifdef RTE_LIBRTE_LPM_ASNUM
> > + uint32_t as_num;
> > +#endif
> > + uint16_t next_hop; /**< Rule next hop. */
> > + uint8_t fwd_class;
> > };
> >
> > /** @internal Contains metadata about the rules table. */
> > @@ -148,9 +149,9 @@ struct rte_lpm {
> > struct rte_lpm_rule_info rule_info[RTE_LPM_MAX_DEPTH]; /**< Rule
> > info table. */
> >
> > /* LPM Tables. */
> > - struct rte_lpm_tbl24_entry tbl24[RTE_LPM_TBL24_NUM_ENTRIES] \
> > + struct rte_lpm_tbl_entry tbl24[RTE_LPM_TBL24_NUM_ENTRIES] \
> > __rte_cache_aligned; /**< LPM tbl24 table. */
> > - struct rte_lpm_tbl8_entry tbl8[RTE_LPM_TBL8_NUM_ENTRIES] \
> > + struct rte_lpm_tbl_entry tbl8[RTE_LPM_TBL8_NUM_ENTRIES] \
> > __rte_cache_aligned; /**< LPM tbl8 table. */
> > struct rte_lpm_rule rules_tbl[0] \
> > __rte_cache_aligned; /**< LPM rules. */
> > @@ -219,7 +220,7 @@ rte_lpm_free(struct rte_lpm *lpm);
> > * 0 on success, negative value otherwise
> > */
> > int
> > -rte_lpm_add(struct rte_lpm *lpm, uint32_t ip, uint8_t depth, uint8_t
> > next_hop);
> > +rte_lpm_add(struct rte_lpm *lpm, uint32_t ip, uint8_t depth, struct
> > rte_lpm_res *res);
> >
> > /**
> > * Check if a rule is present in the LPM table,
> > @@ -238,7 +239,7 @@ rte_lpm_add(struct rte_lpm *lpm, uint32_t ip,
> uint8_t
> > depth, uint8_t next_hop);
> > */
> > int
> > rte_lpm_is_rule_present(struct rte_lpm *lpm, uint32_t ip, uint8_t depth,
> > -uint8_t *next_hop);
> > + struct rte_lpm_res *res);
> >
> > /**
> > * Delete a rule from the LPM table.
> > @@ -277,29 +278,43 @@ rte_lpm_delete_all(struct rte_lpm *lpm);
> > * -EINVAL for incorrect arguments, -ENOENT on lookup miss, 0 on lookup
> > hit
> > */
> > static inline int
> > -rte_lpm_lookup(struct rte_lpm *lpm, uint32_t ip, uint8_t *next_hop)
> > +rte_lpm_lookup(struct rte_lpm *lpm, uint32_t ip, struct rte_lpm_res *res)
> > {
> > unsigned tbl24_index = (ip >> 8);
> > - uint16_t tbl_entry;
> > -
> > +#ifdef RTE_LIBRTE_LPM_ASNUM
> > + uint64_t tbl_entry;
> > +#else
> > + uint32_t tbl_entry;
> > +#endif
> > /* DEBUG: Check user input arguments. */
> > - RTE_LPM_RETURN_IF_TRUE(((lpm == NULL) || (next_hop == NULL)),
> > -EINVAL);
> > + RTE_LPM_RETURN_IF_TRUE(((lpm == NULL) || (res == NULL)), -
> EINVAL);
> >
> > /* Copy tbl24 entry */
> > - tbl_entry = *(const uint16_t *)&lpm->tbl24[tbl24_index];
> > -
> > +#ifdef RTE_LIBRTE_LPM_ASNUM
> > + tbl_entry = *(const uint64_t *)&lpm->tbl24[tbl24_index];
> > +#else
> > + tbl_entry = *(const uint32_t *)&lpm->tbl24[tbl24_index];
> > +#endif
> > /* Copy tbl8 entry (only if needed) */
> > if (unlikely((tbl_entry & RTE_LPM_VALID_EXT_ENTRY_BITMASK) ==
> > RTE_LPM_VALID_EXT_ENTRY_BITMASK)) {
> >
> > unsigned tbl8_index = (uint8_t)ip +
> > - ((uint8_t)tbl_entry *
> > RTE_LPM_TBL8_GROUP_NUM_ENTRIES);
> > + ((*(struct rte_lpm_tbl_entry
> > *)&tbl_entry).tbl8_gindex * RTE_LPM_TBL8_GROUP_NUM_ENTRIES);
> >
> > - tbl_entry = *(const uint16_t *)&lpm->tbl8[tbl8_index];
> > +#ifdef RTE_LIBRTE_LPM_ASNUM
> > + tbl_entry = *(const uint64_t *)&lpm->tbl8[tbl8_index];
> > +#else
> > + tbl_entry = *(const uint32_t *)&lpm->tbl8[tbl8_index];
> > +#endif
> > }
> > -
> > - *next_hop = (uint8_t)tbl_entry;
> > + res->next_hop = ((struct rte_lpm_tbl_entry *)&tbl_entry)->next_hop;
> > + res->fwd_class = ((struct rte_lpm_tbl_entry
> > *)&tbl_entry)->fwd_class;
> > +#ifdef RTE_LIBRTE_LPM_ASNUM
> > + res->as_num = ((struct rte_lpm_tbl_entry
> > *)&tbl_entry)->as_num;
> > +#endif
> > return (tbl_entry & RTE_LPM_LOOKUP_SUCCESS) ? 0 : -ENOENT;
> > +
> > }
> >
> > /**
> > @@ -322,19 +337,25 @@ rte_lpm_lookup(struct rte_lpm *lpm, uint32_t ip,
> > uint8_t *next_hop)
> > * @return
> > * -EINVAL for incorrect arguments, otherwise 0
> > */
> > -#define rte_lpm_lookup_bulk(lpm, ips, next_hops, n) \
> > - rte_lpm_lookup_bulk_func(lpm, ips, next_hops, n)
> > +#define rte_lpm_lookup_bulk(lpm, ips, res_tbl, n) \
> > + rte_lpm_lookup_bulk_func(lpm, ips, res_tbl, n)
> >
> > static inline int
> > -rte_lpm_lookup_bulk_func(const struct rte_lpm *lpm, const uint32_t * ips,
> > - uint16_t * next_hops, const unsigned n)
> > +rte_lpm_lookup_bulk_func(const struct rte_lpm *lpm, const uint32_t *ips,
> > + struct rte_lpm_res *res_tbl, const unsigned n)
> > {
> > unsigned i;
> > + int ret = 0;
> > +#ifdef RTE_LIBRTE_LPM_ASNUM
> > + uint64_t tbl_entry;
> > +#else
> > + uint32_t tbl_entry;
> > +#endif
> > unsigned tbl24_indexes[n];
> >
> > /* DEBUG: Check user input arguments. */
> > RTE_LPM_RETURN_IF_TRUE(((lpm == NULL) || (ips == NULL) ||
> > - (next_hops == NULL)), -EINVAL);
> > + (res_tbl == NULL)), -EINVAL);
> >
> > for (i = 0; i < n; i++) {
> > tbl24_indexes[i] = ips[i] >> 8;
> > @@ -342,20 +363,32 @@ rte_lpm_lookup_bulk_func(const struct rte_lpm
> *lpm,
> > const uint32_t * ips,
> >
> > for (i = 0; i < n; i++) {
> > /* Simply copy tbl24 entry to output */
> > - next_hops[i] = *(const uint16_t
> > *)&lpm->tbl24[tbl24_indexes[i]];
> > -
> > +#ifdef RTE_LIBRTE_LPM_ASNUM
> > + tbl_entry = *(const uint64_t
> > *)&lpm->tbl24[tbl24_indexes[i]];
> > +#else
> > + tbl_entry = *(const uint32_t
> > *)&lpm->tbl24[tbl24_indexes[i]];
> > +#endif
> > /* Overwrite output with tbl8 entry if needed */
> > - if (unlikely((next_hops[i] &
> > RTE_LPM_VALID_EXT_ENTRY_BITMASK) ==
> > - RTE_LPM_VALID_EXT_ENTRY_BITMASK)) {
> > + if (unlikely((tbl_entry & RTE_LPM_VALID_EXT_ENTRY_BITMASK)
> > ==
> > + RTE_LPM_VALID_EXT_ENTRY_BITMASK)) {
> >
> > unsigned tbl8_index = (uint8_t)ips[i] +
> > - ((uint8_t)next_hops[i] *
> > - RTE_LPM_TBL8_GROUP_NUM_ENTRIES);
> > + ((*(struct rte_lpm_tbl_entry
> > *)&tbl_entry).tbl8_gindex * RTE_LPM_TBL8_GROUP_NUM_ENTRIES);
> >
> > - next_hops[i] = *(const uint16_t
> > *)&lpm->tbl8[tbl8_index];
> > +#ifdef RTE_LIBRTE_LPM_ASNUM
> > + tbl_entry = *(const uint64_t
> > *)&lpm->tbl8[tbl8_index];
> > +#else
> > + tbl_entry = *(const uint32_t
> > *)&lpm->tbl8[tbl8_index];
> > +#endif
> > }
> > + res_tbl[i].next_hop = ((struct rte_lpm_tbl_entry
> > *)&tbl_entry)->next_hop;
> > + res_tbl[i].fwd_class = ((struct rte_lpm_tbl_entry
> > *)&tbl_entry)->next_hop;
> > +#ifdef RTE_LIBRTE_LPM_ASNUM
> > + res_tbl[i].as_num = ((struct rte_lpm_tbl_entry
> > *)&tbl_entry)->as_num;
> > +#endif
> > + ret |= 1 << i;
> > }
> > - return 0;
> > + return ret;
> > }
> >
> > /* Mask four results. */
> > @@ -477,4 +510,4 @@ rte_lpm_lookupx4(const struct rte_lpm *lpm,
> __m128i ip,
> > uint16_t hop[4],
> > }
> > #endif
> >
> > -#endif /* _RTE_LPM_H_ */
> > +#endif /* _RTE_LPM_EXT_H_ */
> >
> > 2015-10-24 9:09 GMT+03:00 Matthew Hall <mhall@mhcomputing.net>:
> >
> > > On 10/23/15 9:20 AM, Matthew Hall wrote:
> > >
> > >> On Fri, Oct 23, 2015 at 03:51:48PM +0200, Michal Jastrzebski wrote:
> > >>
> > >>> From: Michal Kobylinski <michalx.kobylinski@intel.com>
> > >>>
> > >>> The current DPDK implementation for LPM for IPv4 and IPv6 limits the
> > >>> number of next hops to 256, as the next hop ID is an 8-bit long field.
> > >>> Proposed extension increase number of next hops for IPv4 to 2^24 and
> > >>> also allows 32-bits read/write operations.
> > >>>
> > >>> This patchset requires additional change to rte_table library to meet
> > >>> ABI compatibility requirements. A v2 will be sent next week.
> > >>>
> > >>
> > >> I also have a patchset for this.
> > >>
> > >> I will send it out as well so we could compare.
> > >>
> > >> Matthew.
> > >>
> > >
> > > Sorry about the delay; I only work on DPDK in personal time and not as
> > > part of a job. My patchset is attached to this email.
> > >
> > > One possible advantage with my patchset, compared to others, is that the
> > > space problem is fixed in both IPV4 and in IPV6, to prevent asymmetry
> > > between these two standards, which is something I try to avoid as much
> as
> > > humanly possible.
> > >
> > > This is because my application code is green-field, so I absolutely don't
> > > want to put any ugly hacks or incompatibilities in this code if I can
> > > possibly avoid it.
> > >
> > > Otherwise, I am not necessarily as expert about rte_lpm as some of the
> > > full-time guys, but I think with four or five of us in the thread hammering
> > > out patches we will be able to create something amazing together and I
> am
> > > very very very very very happy about this.
> > >
> > > Matthew.
> > >
>
Hi Vladimir,
Thanks for sharing Your implementation.
Could You please clarify what the as_num and fwd_class fields represent?
The second issue I have is that Your patch doesn't apply on top of the
current head. Could You check this please?
Best regards
Michal
^ permalink raw reply [relevance 0%]
* Re: [dpdk-dev] [PATCH v1 0/3] lpm: increase number of next hops for lpm (ipv4)
2015-10-24 6:09 0% ` Matthew Hall
2015-10-25 17:52 0% ` Vladimir Medvedkin
@ 2015-10-26 12:13 0% ` Jastrzebski, MichalX K
1 sibling, 0 replies; 200+ results
From: Jastrzebski, MichalX K @ 2015-10-26 12:13 UTC (permalink / raw)
To: Matthew Hall, Kobylinski, MichalX; +Cc: dev
> -----Original Message-----
> From: Matthew Hall [mailto:mhall@mhcomputing.net]
> Sent: Saturday, October 24, 2015 8:10 AM
> To: Jastrzebski, MichalX K; Kobylinski, MichalX
> Cc: dev@dpdk.org
> Subject: Re: [dpdk-dev] [PATCH v1 0/3] lpm: increase number of next hops
> for lpm (ipv4)
>
> On 10/23/15 9:20 AM, Matthew Hall wrote:
> > On Fri, Oct 23, 2015 at 03:51:48PM +0200, Michal Jastrzebski wrote:
> >> From: Michal Kobylinski <michalx.kobylinski@intel.com>
> >>
> >> The current DPDK implementation for LPM for IPv4 and IPv6 limits the
> >> number of next hops to 256, as the next hop ID is an 8-bit long field.
> >> Proposed extension increase number of next hops for IPv4 to 2^24 and
> >> also allows 32-bits read/write operations.
> >>
> >> This patchset requires additional change to rte_table library to meet
> >> ABI compatibility requirements. A v2 will be sent next week.
> >
> > I also have a patchset for this.
> >
> > I will send it out as well so we could compare.
> >
> > Matthew.
>
> Sorry about the delay; I only work on DPDK in personal time and not as
> part of a job. My patchset is attached to this email.
>
> One possible advantage with my patchset, compared to others, is that the
> space problem is fixed in both IPV4 and in IPV6, to prevent asymmetry
> between these two standards, which is something I try to avoid as much
> as humanly possible.
>
> This is because my application code is green-field, so I absolutely
> don't want to put any ugly hacks or incompatibilities in this code if I
> can possibly avoid it.
>
> Otherwise, I am not necessarily as expert about rte_lpm as some of the
> full-time guys, but I think with four or five of us in the thread
> hammering out patches we will be able to create something amazing
> together and I am very very very very very happy about this.
>
> Matthew.
Hi Matthew,
Thank You for the patch set.
I can't apply patch 0001-..., could You check it please?
I have the following error:
Checking patch lib/librte_lpm/rte_lpm.h...
error: while searching for:
#endif
/** @internal bitmask with valid and ext_entry/valid_group fields set */
#define RTE_LPM_VALID_EXT_ENTRY_BITMASK 0x0300
/** Bitmask used to indicate successful lookup */
#define RTE_LPM_LOOKUP_SUCCESS 0x0100
#if RTE_BYTE_ORDER == RTE_LITTLE_ENDIAN
/** @internal Tbl24 entry structure. */
struct rte_lpm_tbl24_entry {
/* Stores Next hop or group index (i.e. gindex)into tbl8. */
union {
uint8_t next_hop;
uint8_t tbl8_gindex;
};
/* Using single uint8_t to store 3 values. */
uint8_t valid :1; /**< Validation flag. */
uint8_t ext_entry :1; /**< External entry. */
uint8_t depth :6; /**< Rule depth. */
};
/** @internal Tbl8 entry structure. */
struct rte_lpm_tbl8_entry {
uint8_t next_hop; /**< next hop. */
/* Using single uint8_t to store 3 values. */
uint8_t valid :1; /**< Validation flag. */
uint8_t valid_group :1; /**< Group validation flag. */
uint8_t depth :6; /**< Rule depth. */
};
#else
struct rte_lpm_tbl24_entry {
error: patch failed: lib/librte_lpm/rte_lpm.h:82
error: lib/librte_lpm/rte_lpm.h: patch does not apply
Best regards,
Michal
^ permalink raw reply [relevance 0%]
* Re: [dpdk-dev] [PATCH v1 0/3] lpm: increase number of next hops for lpm (ipv4)
2015-10-26 11:57 0% ` Jastrzebski, MichalX K
@ 2015-10-26 14:03 3% ` Vladimir Medvedkin
2015-10-26 15:39 0% ` Michal Jastrzebski
0 siblings, 1 reply; 200+ results
From: Vladimir Medvedkin @ 2015-10-26 14:03 UTC (permalink / raw)
To: Jastrzebski, MichalX K; +Cc: dev
Hi Michal,
Forwarding class can help us to classify traffic based on the destination
prefix; it is something like Juniper DCU. For example, on a Juniper MX I can
define a policy that installs a prefix into the FIB with some class and then
use that class on the dataplane, for example in an ACL.
On a Juniper MX it looks like this:
# show policy-options
policy-statement community-to-class {
    term customer {
        from community originate-customer;
        then destination-class customer;
    }
}
community originate-customer members 12345:11111;
# show routing-options
forwarding-table {
    export community-to-class;
}
# show forwarding-options
forwarding-options {
    family inet {
        filter {
            output test-filter;
        }
    }
}
# show firewall family inet filter test-filter
term 1 {
    from {
        protocol icmp;
        destination-class customer;
    }
    then {
        discard;
    }
}
announce route 10.10.10.10/32 next-hop 10.10.10.2 community 12345:11111
After that, on the dataplane we have:
NPC1(vty)# show route ip lookup 10.10.10.10
Route Information (10.10.10.10):
   interface      : xe-1/0/0.0 (328)
   Nexthop prefix : -
   Nexthop ID     : 1048574
   MTU            : 0
   Class ID       : 129   <- That is the "forwarding class" in my implementation
This construction discards all ICMP traffic that goes to destination prefixes
which were originated with community 12345:11111. With this mechanism we can
implement various sophisticated policies on the control plane to control
traffic on the dataplane.
The same applies to as_num: on the dataplane we can have the AS number that
originated the prefix, or any other 4-byte number, e.g. a geo-id.
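A minimal sketch of how the same policy could look on the DPDK dataplane with
the proposed API (illustration only, not part of the patch; FWD_CLASS_CUSTOMER
and the pkt_is_icmp flag are placeholders made up for the example):

#include <stdint.h>
#include <rte_lpm.h>

#define FWD_CLASS_CUSTOMER 129  /* placeholder class id, cf. "Class ID" above */

/* Returns the next hop id, or -1 if the packet must be dropped. */
static inline int
classify_and_forward(struct rte_lpm *lpm, uint32_t dst_ip, int pkt_is_icmp)
{
        struct rte_lpm_res res;

        if (rte_lpm_lookup(lpm, dst_ip, &res) != 0)
                return -1;                      /* no route */

        /* Same intent as the Juniper filter above: discard ICMP destined
         * to prefixes installed with the "customer" forwarding class. */
        if (pkt_is_icmp && res.fwd_class == FWD_CLASS_CUSTOMER)
                return -1;                      /* discard */

        /* With RTE_LIBRTE_LPM_ASNUM enabled, res.as_num is also available
         * here, e.g. for per-origin-AS accounting. */
        return res.next_hop;
}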
Which issue do you mean? I think it is because the table/pipeline/test
frameworks don't compile due to the API/ABI change. You can turn them off
for LPM testing; if my patch is applied I will make the corresponding
changes in the above-mentioned frameworks.
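For example, something along these lines in config/common_linuxapp should be
enough to keep the build green while exercising only LPM (just a sketch;
app/test and the examples that use the old LPM API may need the same
treatment):

CONFIG_RTE_LIBRTE_TABLE=n
CONFIG_RTE_LIBRTE_PIPELINE=n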
Regards,
Vladimir
2015-10-26 14:57 GMT+03:00 Jastrzebski, MichalX K <
michalx.k.jastrzebski@intel.com>:
> > -----Original Message-----
> > From: Michal Jastrzebski [mailto:michalx.k.jastrzebski@intel.com]
> > Sent: Monday, October 26, 2015 12:55 PM
> > To: Vladimir Medvedkin
> > Subject: Re: [dpdk-dev] [PATCH v1 0/3] lpm: increase number of next hops
> > for lpm (ipv4)
> >
> > On Sun, Oct 25, 2015 at 08:52:04PM +0300, Vladimir Medvedkin wrote:
> > > Hi all,
> > >
> > > Here my implementation
> > >
> > > Signed-off-by: Vladimir Medvedkin <medvedkinv@gmail.com>
> > > ---
> > > config/common_bsdapp | 1 +
> > > config/common_linuxapp | 1 +
> > > lib/librte_lpm/rte_lpm.c | 194
> > > +++++++++++++++++++++++++++++------------------
> > > lib/librte_lpm/rte_lpm.h | 163 +++++++++++++++++++++++----------------
> > > 4 files changed, 219 insertions(+), 140 deletions(-)
> > >
> > > diff --git a/config/common_bsdapp b/config/common_bsdapp
> > > index b37dcf4..408cc2c 100644
> > > --- a/config/common_bsdapp
> > > +++ b/config/common_bsdapp
> > > @@ -344,6 +344,7 @@ CONFIG_RTE_LIBRTE_JOBSTATS=y
> > > #
> > > CONFIG_RTE_LIBRTE_LPM=y
> > > CONFIG_RTE_LIBRTE_LPM_DEBUG=n
> > > +CONFIG_RTE_LIBRTE_LPM_ASNUM=n
> > >
> > > #
> > > # Compile librte_acl
> > > diff --git a/config/common_linuxapp b/config/common_linuxapp
> > > index 0de43d5..1c60e63 100644
> > > --- a/config/common_linuxapp
> > > +++ b/config/common_linuxapp
> > > @@ -352,6 +352,7 @@ CONFIG_RTE_LIBRTE_JOBSTATS=y
> > > #
> > > CONFIG_RTE_LIBRTE_LPM=y
> > > CONFIG_RTE_LIBRTE_LPM_DEBUG=n
> > > +CONFIG_RTE_LIBRTE_LPM_ASNUM=n
> > >
> > > #
> > > # Compile librte_acl
> > > diff --git a/lib/librte_lpm/rte_lpm.c b/lib/librte_lpm/rte_lpm.c
> > > index 163ba3c..363b400 100644
> > > --- a/lib/librte_lpm/rte_lpm.c
> > > +++ b/lib/librte_lpm/rte_lpm.c
> > > @@ -159,9 +159,11 @@ rte_lpm_create(const char *name, int socket_id,
> > int
> > > max_rules,
> > >
> > > lpm_list = RTE_TAILQ_CAST(rte_lpm_tailq.head, rte_lpm_list);
> > >
> > > - RTE_BUILD_BUG_ON(sizeof(struct rte_lpm_tbl24_entry) != 2);
> > > - RTE_BUILD_BUG_ON(sizeof(struct rte_lpm_tbl8_entry) != 2);
> > > -
> > > +#ifdef RTE_LIBRTE_LPM_ASNUM
> > > + RTE_BUILD_BUG_ON(sizeof(struct rte_lpm_tbl_entry) != 8);
> > > +#else
> > > + RTE_BUILD_BUG_ON(sizeof(struct rte_lpm_tbl_entry) != 4);
> > > +#endif
> > > /* Check user arguments. */
> > > if ((name == NULL) || (socket_id < -1) || (max_rules == 0)){
> > > rte_errno = EINVAL;
> > > @@ -261,7 +263,7 @@ rte_lpm_free(struct rte_lpm *lpm)
> > > */
> > > static inline int32_t
> > > rule_add(struct rte_lpm *lpm, uint32_t ip_masked, uint8_t depth,
> > > - uint8_t next_hop)
> > > + struct rte_lpm_res *res)
> > > {
> > > uint32_t rule_gindex, rule_index, last_rule;
> > > int i;
> > > @@ -282,8 +284,11 @@ rule_add(struct rte_lpm *lpm, uint32_t
> > ip_masked,
> > > uint8_t depth,
> > >
> > > /* If rule already exists update its next_hop
> and
> > > return. */
> > > if (lpm->rules_tbl[rule_index].ip ==
> ip_masked) {
> > > - lpm->rules_tbl[rule_index].next_hop =
> > > next_hop;
> > > -
> > > + lpm->rules_tbl[rule_index].next_hop =
> > > res->next_hop;
> > > + lpm->rules_tbl[rule_index].fwd_class =
> > > res->fwd_class;
> > > +#ifdef RTE_LIBRTE_LPM_ASNUM
> > > + lpm->rules_tbl[rule_index].as_num =
> > > res->as_num;
> > > +#endif
> > > return rule_index;
> > > }
> > > }
> > > @@ -320,7 +325,11 @@ rule_add(struct rte_lpm *lpm, uint32_t
> > ip_masked,
> > > uint8_t depth,
> > >
> > > /* Add the new rule. */
> > > lpm->rules_tbl[rule_index].ip = ip_masked;
> > > - lpm->rules_tbl[rule_index].next_hop = next_hop;
> > > + lpm->rules_tbl[rule_index].next_hop = res->next_hop;
> > > + lpm->rules_tbl[rule_index].fwd_class = res->fwd_class;
> > > +#ifdef RTE_LIBRTE_LPM_ASNUM
> > > + lpm->rules_tbl[rule_index].as_num = res->as_num;
> > > +#endif
> > >
> > > /* Increment the used rules counter for this rule group. */
> > > lpm->rule_info[depth - 1].used_rules++;
> > > @@ -382,10 +391,10 @@ rule_find(struct rte_lpm *lpm, uint32_t
> > ip_masked,
> > > uint8_t depth)
> > > * Find, clean and allocate a tbl8.
> > > */
> > > static inline int32_t
> > > -tbl8_alloc(struct rte_lpm_tbl8_entry *tbl8)
> > > +tbl8_alloc(struct rte_lpm_tbl_entry *tbl8)
> > > {
> > > uint32_t tbl8_gindex; /* tbl8 group index. */
> > > - struct rte_lpm_tbl8_entry *tbl8_entry;
> > > + struct rte_lpm_tbl_entry *tbl8_entry;
> > >
> > > /* Scan through tbl8 to find a free (i.e. INVALID) tbl8 group.
> */
> > > for (tbl8_gindex = 0; tbl8_gindex < RTE_LPM_TBL8_NUM_GROUPS;
> > > @@ -393,12 +402,12 @@ tbl8_alloc(struct rte_lpm_tbl8_entry *tbl8)
> > > tbl8_entry = &tbl8[tbl8_gindex *
> > > RTE_LPM_TBL8_GROUP_NUM_ENTRIES];
> > > /* If a free tbl8 group is found clean it and set as
> VALID.
> > > */
> > > - if (!tbl8_entry->valid_group) {
> > > + if (!tbl8_entry->ext_valid) {
> > > memset(&tbl8_entry[0], 0,
> > > RTE_LPM_TBL8_GROUP_NUM_ENTRIES
> *
> > > sizeof(tbl8_entry[0]));
> > >
> > > - tbl8_entry->valid_group = VALID;
> > > + tbl8_entry->ext_valid = VALID;
> > >
> > > /* Return group index for allocated tbl8
> group. */
> > > return tbl8_gindex;
> > > @@ -410,46 +419,50 @@ tbl8_alloc(struct rte_lpm_tbl8_entry *tbl8)
> > > }
> > >
> > > static inline void
> > > -tbl8_free(struct rte_lpm_tbl8_entry *tbl8, uint32_t tbl8_group_start)
> > > +tbl8_free(struct rte_lpm_tbl_entry *tbl8, uint32_t tbl8_group_start)
> > > {
> > > /* Set tbl8 group invalid*/
> > > - tbl8[tbl8_group_start].valid_group = INVALID;
> > > + tbl8[tbl8_group_start].ext_valid = INVALID;
> > > }
> > >
> > > static inline int32_t
> > > add_depth_small(struct rte_lpm *lpm, uint32_t ip, uint8_t depth,
> > > - uint8_t next_hop)
> > > + struct rte_lpm_res *res)
> > > {
> > > uint32_t tbl24_index, tbl24_range, tbl8_index, tbl8_group_end,
> i, j;
> > >
> > > /* Calculate the index into Table24. */
> > > tbl24_index = ip >> 8;
> > > tbl24_range = depth_to_range(depth);
> > > + struct rte_lpm_tbl_entry new_tbl_entry = {
> > > +#ifdef RTE_LIBRTE_LPM_ASNUM
> > > + .as_num = res->as_num,
> > > +#endif
> > > + .next_hop = res->next_hop,
> > > + .fwd_class = res->fwd_class,
> > > + .ext_valid = 0,
> > > + .depth = depth,
> > > + .valid = VALID,
> > > + };
> > > +
> > >
> > > for (i = tbl24_index; i < (tbl24_index + tbl24_range); i++) {
> > > /*
> > > * For invalid OR valid and non-extended tbl 24
> entries set
> > > * entry.
> > > */
> > > - if (!lpm->tbl24[i].valid || (lpm->tbl24[i].ext_entry
> == 0 &&
> > > + if (!lpm->tbl24[i].valid || (lpm->tbl24[i].ext_valid
> == 0 &&
> > > lpm->tbl24[i].depth <= depth)) {
> > >
> > > - struct rte_lpm_tbl24_entry new_tbl24_entry = {
> > > - { .next_hop = next_hop, },
> > > - .valid = VALID,
> > > - .ext_entry = 0,
> > > - .depth = depth,
> > > - };
> > > -
> > > /* Setting tbl24 entry in one go to avoid race
> > > * conditions
> > > */
> > > - lpm->tbl24[i] = new_tbl24_entry;
> > > + lpm->tbl24[i] = new_tbl_entry;
> > >
> > > continue;
> > > }
> > >
> > > - if (lpm->tbl24[i].ext_entry == 1) {
> > > + if (lpm->tbl24[i].ext_valid == 1) {
> > > /* If tbl24 entry is valid and extended
> calculate
> > > the
> > > * index into tbl8.
> > > */
> > > @@ -461,19 +474,14 @@ add_depth_small(struct rte_lpm *lpm, uint32_t
> > ip,
> > > uint8_t depth,
> > > for (j = tbl8_index; j < tbl8_group_end; j++) {
> > > if (!lpm->tbl8[j].valid ||
> > > lpm->tbl8[j].depth <=
> > > depth) {
> > > - struct rte_lpm_tbl8_entry
> > > - new_tbl8_entry = {
> > > - .valid = VALID,
> > > - .valid_group = VALID,
> > > - .depth = depth,
> > > - .next_hop = next_hop,
> > > - };
> > > +
> > > + new_tbl_entry.ext_valid =
> VALID;
> > >
> > > /*
> > > * Setting tbl8 entry in one
> go to
> > > avoid
> > > * race conditions
> > > */
> > > - lpm->tbl8[j] = new_tbl8_entry;
> > > + lpm->tbl8[j] = new_tbl_entry;
> > >
> > > continue;
> > > }
> > > @@ -486,7 +494,7 @@ add_depth_small(struct rte_lpm *lpm, uint32_t ip,
> > > uint8_t depth,
> > >
> > > static inline int32_t
> > > add_depth_big(struct rte_lpm *lpm, uint32_t ip_masked, uint8_t depth,
> > > - uint8_t next_hop)
> > > + struct rte_lpm_res *res)
> > > {
> > > uint32_t tbl24_index;
> > > int32_t tbl8_group_index, tbl8_group_start, tbl8_group_end,
> > > tbl8_index,
> > > @@ -512,7 +520,11 @@ add_depth_big(struct rte_lpm *lpm, uint32_t
> > ip_masked,
> > > uint8_t depth,
> > > /* Set tbl8 entry. */
> > > for (i = tbl8_index; i < (tbl8_index + tbl8_range);
> i++) {
> > > lpm->tbl8[i].depth = depth;
> > > - lpm->tbl8[i].next_hop = next_hop;
> > > + lpm->tbl8[i].next_hop = res->next_hop;
> > > + lpm->tbl8[i].fwd_class = res->fwd_class;
> > > +#ifdef RTE_LIBRTE_LPM_ASNUM
> > > + lpm->tbl8[i].as_num = res->as_num;
> > > +#endif
> > > lpm->tbl8[i].valid = VALID;
> > > }
> > >
> > > @@ -522,17 +534,17 @@ add_depth_big(struct rte_lpm *lpm, uint32_t
> > > ip_masked, uint8_t depth,
> > > * so assign whole structure in one go
> > > */
> > >
> > > - struct rte_lpm_tbl24_entry new_tbl24_entry = {
> > > - { .tbl8_gindex = (uint8_t)tbl8_group_index, },
> > > - .valid = VALID,
> > > - .ext_entry = 1,
> > > + struct rte_lpm_tbl_entry new_tbl24_entry = {
> > > + .tbl8_gindex = (uint16_t)tbl8_group_index,
> > > .depth = 0,
> > > + .ext_valid = 1,
> > > + .valid = VALID,
> > > };
> > >
> > > lpm->tbl24[tbl24_index] = new_tbl24_entry;
> > >
> > > }/* If valid entry but not extended calculate the index into
> > > Table8. */
> > > - else if (lpm->tbl24[tbl24_index].ext_entry == 0) {
> > > + else if (lpm->tbl24[tbl24_index].ext_valid == 0) {
> > > /* Search for free tbl8 group. */
> > > tbl8_group_index = tbl8_alloc(lpm->tbl8);
> > >
> > > @@ -551,6 +563,11 @@ add_depth_big(struct rte_lpm *lpm, uint32_t
> > ip_masked,
> > > uint8_t depth,
> > > lpm->tbl8[i].depth =
> lpm->tbl24[tbl24_index].depth;
> > > lpm->tbl8[i].next_hop =
> > >
> lpm->tbl24[tbl24_index].next_hop;
> > > + lpm->tbl8[i].fwd_class =
> > > +
> lpm->tbl24[tbl24_index].fwd_class;
> > > +#ifdef RTE_LIBRTE_LPM_ASNUM
> > > + lpm->tbl8[i].as_num =
> > > lpm->tbl24[tbl24_index].as_num;
> > > +#endif
> > > }
> > >
> > > tbl8_index = tbl8_group_start + (ip_masked & 0xFF);
> > > @@ -561,7 +578,11 @@ add_depth_big(struct rte_lpm *lpm, uint32_t
> > ip_masked,
> > > uint8_t depth,
> > > lpm->tbl8[i].depth <= depth) {
> > > lpm->tbl8[i].valid = VALID;
> > > lpm->tbl8[i].depth = depth;
> > > - lpm->tbl8[i].next_hop = next_hop;
> > > + lpm->tbl8[i].next_hop = res->next_hop;
> > > + lpm->tbl8[i].fwd_class =
> res->fwd_class;
> > > +#ifdef RTE_LIBRTE_LPM_ASNUM
> > > + lpm->tbl8[i].as_num = res->as_num;
> > > +#endif
> > >
> > > continue;
> > > }
> > > @@ -573,11 +594,11 @@ add_depth_big(struct rte_lpm *lpm, uint32_t
> > > ip_masked, uint8_t depth,
> > > * so assign whole structure in one go.
> > > */
> > >
> > > - struct rte_lpm_tbl24_entry new_tbl24_entry = {
> > > - { .tbl8_gindex =
> (uint8_t)tbl8_group_index,
> > > },
> > > - .valid = VALID,
> > > - .ext_entry = 1,
> > > + struct rte_lpm_tbl_entry new_tbl24_entry = {
> > > + .tbl8_gindex =
> (uint16_t)tbl8_group_index,
> > > .depth = 0,
> > > + .ext_valid = 1,
> > > + .valid = VALID,
> > > };
> > >
> > > lpm->tbl24[tbl24_index] = new_tbl24_entry;
> > > @@ -595,11 +616,15 @@ add_depth_big(struct rte_lpm *lpm, uint32_t
> > > ip_masked, uint8_t depth,
> > >
> > > if (!lpm->tbl8[i].valid ||
> > > lpm->tbl8[i].depth <= depth) {
> > > - struct rte_lpm_tbl8_entry
> new_tbl8_entry = {
> > > - .valid = VALID,
> > > + struct rte_lpm_tbl_entry
> new_tbl8_entry = {
> > > +#ifdef RTE_LIBRTE_LPM_ASNUM
> > > + .as_num = res->as_num,
> > > +#endif
> > > + .next_hop = res->next_hop,
> > > + .fwd_class = res->fwd_class,
> > > .depth = depth,
> > > - .next_hop = next_hop,
> > > - .valid_group =
> > > lpm->tbl8[i].valid_group,
> > > + .ext_valid =
> lpm->tbl8[i].ext_valid,
> > > + .valid = VALID,
> > > };
> > >
> > > /*
> > > @@ -621,19 +646,19 @@ add_depth_big(struct rte_lpm *lpm, uint32_t
> > > ip_masked, uint8_t depth,
> > > */
> > > int
> > > rte_lpm_add(struct rte_lpm *lpm, uint32_t ip, uint8_t depth,
> > > - uint8_t next_hop)
> > > + struct rte_lpm_res *res)
> > > {
> > > int32_t rule_index, status = 0;
> > > uint32_t ip_masked;
> > >
> > > /* Check user arguments. */
> > > - if ((lpm == NULL) || (depth < 1) || (depth >
> RTE_LPM_MAX_DEPTH))
> > > + if ((lpm == NULL) || (res == NULL) || (depth < 1) || (depth >
> > > RTE_LPM_MAX_DEPTH))
> > > return -EINVAL;
> > >
> > > ip_masked = ip & depth_to_mask(depth);
> > >
> > > /* Add the rule to the rule table. */
> > > - rule_index = rule_add(lpm, ip_masked, depth, next_hop);
> > > + rule_index = rule_add(lpm, ip_masked, depth, res);
> > >
> > > /* If the is no space available for new rule return error. */
> > > if (rule_index < 0) {
> > > @@ -641,10 +666,10 @@ rte_lpm_add(struct rte_lpm *lpm, uint32_t ip,
> > uint8_t
> > > depth,
> > > }
> > >
> > > if (depth <= MAX_DEPTH_TBL24) {
> > > - status = add_depth_small(lpm, ip_masked, depth,
> next_hop);
> > > + status = add_depth_small(lpm, ip_masked, depth, res);
> > > }
> > > else { /* If depth > RTE_LPM_MAX_DEPTH_TBL24 */
> > > - status = add_depth_big(lpm, ip_masked, depth,
> next_hop);
> > > + status = add_depth_big(lpm, ip_masked, depth, res);
> > >
> > > /*
> > > * If add fails due to exhaustion of tbl8 extensions
> delete
> > > @@ -665,14 +690,14 @@ rte_lpm_add(struct rte_lpm *lpm, uint32_t ip,
> > uint8_t
> > > depth,
> > > */
> > > int
> > > rte_lpm_is_rule_present(struct rte_lpm *lpm, uint32_t ip, uint8_t
> depth,
> > > -uint8_t *next_hop)
> > > + struct rte_lpm_res *res)
> > > {
> > > uint32_t ip_masked;
> > > int32_t rule_index;
> > >
> > > /* Check user arguments. */
> > > if ((lpm == NULL) ||
> > > - (next_hop == NULL) ||
> > > + (res == NULL) ||
> > > (depth < 1) || (depth > RTE_LPM_MAX_DEPTH))
> > > return -EINVAL;
> > >
> > > @@ -681,7 +706,11 @@ uint8_t *next_hop)
> > > rule_index = rule_find(lpm, ip_masked, depth);
> > >
> > > if (rule_index >= 0) {
> > > - *next_hop = lpm->rules_tbl[rule_index].next_hop;
> > > + res->next_hop = lpm->rules_tbl[rule_index].next_hop;
> > > + res->fwd_class = lpm->rules_tbl[rule_index].fwd_class;
> > > +#ifdef RTE_LIBRTE_LPM_ASNUM
> > > + res->as_num = lpm->rules_tbl[rule_index].as_num;
> > > +#endif
> > > return 1;
> > > }
> > >
> > > @@ -731,7 +760,7 @@ delete_depth_small(struct rte_lpm *lpm, uint32_t
> > > ip_masked,
> > > */
> > > for (i = tbl24_index; i < (tbl24_index + tbl24_range);
> i++)
> > > {
> > >
> > > - if (lpm->tbl24[i].ext_entry == 0 &&
> > > + if (lpm->tbl24[i].ext_valid == 0 &&
> > > lpm->tbl24[i].depth <= depth )
> {
> > > lpm->tbl24[i].valid = INVALID;
> > > }
> > > @@ -761,23 +790,30 @@ delete_depth_small(struct rte_lpm *lpm,
> > uint32_t
> > > ip_masked,
> > > * associated with this rule.
> > > */
> > >
> > > - struct rte_lpm_tbl24_entry new_tbl24_entry = {
> > > - {.next_hop =
> > > lpm->rules_tbl[sub_rule_index].next_hop,},
> > > - .valid = VALID,
> > > - .ext_entry = 0,
> > > + struct rte_lpm_tbl_entry new_tbl24_entry = {
> > > +#ifdef RTE_LIBRTE_LPM_ASNUM
> > > + .as_num =
> lpm->rules_tbl[sub_rule_index].as_num,
> > > +#endif
> > > + .next_hop =
> lpm->rules_tbl[sub_rule_index].next_hop,
> > > + .fwd_class =
> > > lpm->rules_tbl[sub_rule_index].fwd_class,
> > > .depth = sub_rule_depth,
> > > + .ext_valid = 0,
> > > + .valid = VALID,
> > > };
> > >
> > > - struct rte_lpm_tbl8_entry new_tbl8_entry = {
> > > - .valid = VALID,
> > > + struct rte_lpm_tbl_entry new_tbl8_entry = {
> > > +#ifdef RTE_LIBRTE_LPM_ASNUM
> > > + .as_num =
> lpm->rules_tbl[sub_rule_index].as_num,
> > > +#endif
> > > + .next_hop =
> lpm->rules_tbl[sub_rule_index].next_hop,
> > > + .fwd_class =
> > > lpm->rules_tbl[sub_rule_index].fwd_class,
> > > .depth = sub_rule_depth,
> > > - .next_hop = lpm->rules_tbl
> > > - [sub_rule_index].next_hop,
> > > + .valid = VALID,
> > > };
> > >
> > > for (i = tbl24_index; i < (tbl24_index + tbl24_range);
> i++)
> > > {
> > >
> > > - if (lpm->tbl24[i].ext_entry == 0 &&
> > > + if (lpm->tbl24[i].ext_valid == 0 &&
> > > lpm->tbl24[i].depth <= depth )
> {
> > > lpm->tbl24[i] = new_tbl24_entry;
> > > }
> > > @@ -814,7 +850,7 @@ delete_depth_small(struct rte_lpm *lpm, uint32_t
> > > ip_masked,
> > > * thus can be recycled
> > > */
> > > static inline int32_t
> > > -tbl8_recycle_check(struct rte_lpm_tbl8_entry *tbl8, uint32_t
> > > tbl8_group_start)
> > > +tbl8_recycle_check(struct rte_lpm_tbl_entry *tbl8, uint32_t
> > > tbl8_group_start)
> > > {
> > > uint32_t tbl8_group_end, i;
> > > tbl8_group_end = tbl8_group_start +
> > RTE_LPM_TBL8_GROUP_NUM_ENTRIES;
> > > @@ -891,11 +927,15 @@ delete_depth_big(struct rte_lpm *lpm, uint32_t
> > > ip_masked,
> > > }
> > > else {
> > > /* Set new tbl8 entry. */
> > > - struct rte_lpm_tbl8_entry new_tbl8_entry = {
> > > - .valid = VALID,
> > > - .depth = sub_rule_depth,
> > > - .valid_group =
> > > lpm->tbl8[tbl8_group_start].valid_group,
> > > + struct rte_lpm_tbl_entry new_tbl8_entry = {
> > > +#ifdef RTE_LIBRTE_LPM_ASNUM
> > > + .as_num =
> lpm->rules_tbl[sub_rule_index].as_num,
> > > +#endif
> > > + .fwd_class =
> > > lpm->rules_tbl[sub_rule_index].fwd_class,
> > > .next_hop =
> lpm->rules_tbl[sub_rule_index].next_hop,
> > > + .depth = sub_rule_depth,
> > > + .ext_valid =
> lpm->tbl8[tbl8_group_start].ext_valid,
> > > + .valid = VALID,
> > > };
> > >
> > > /*
> > > @@ -923,11 +963,15 @@ delete_depth_big(struct rte_lpm *lpm, uint32_t
> > > ip_masked,
> > > }
> > > else if (tbl8_recycle_index > -1) {
> > > /* Update tbl24 entry. */
> > > - struct rte_lpm_tbl24_entry new_tbl24_entry = {
> > > - { .next_hop =
> > > lpm->tbl8[tbl8_recycle_index].next_hop, },
> > > - .valid = VALID,
> > > - .ext_entry = 0,
> > > + struct rte_lpm_tbl_entry new_tbl24_entry = {
> > > +#ifdef RTE_LIBRTE_LPM_ASNUM
> > > + .as_num = lpm->tbl8[tbl8_recycle_index].as_num,
> > > +#endif
> > > + .next_hop =
> lpm->tbl8[tbl8_recycle_index].next_hop,
> > > + .fwd_class =
> > > lpm->tbl8[tbl8_recycle_index].fwd_class,
> > > .depth = lpm->tbl8[tbl8_recycle_index].depth,
> > > + .ext_valid = 0,
> > > + .valid = VALID,
> > > };
> > >
> > > /* Set tbl24 before freeing tbl8 to avoid race
> condition. */
> > > diff --git a/lib/librte_lpm/rte_lpm.h b/lib/librte_lpm/rte_lpm.h
> > > index c299ce2..7c615bc 100644
> > > --- a/lib/librte_lpm/rte_lpm.h
> > > +++ b/lib/librte_lpm/rte_lpm.h
> > > @@ -31,8 +31,8 @@
> > > * OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH
> > DAMAGE.
> > > */
> > >
> > > -#ifndef _RTE_LPM_H_
> > > -#define _RTE_LPM_H_
> > > +#ifndef _RTE_LPM_EXT_H_
> > > +#define _RTE_LPM_EXT_H_
> > >
> > > /**
> > > * @file
> > > @@ -81,57 +81,58 @@ extern "C" {
> > > #define RTE_LPM_RETURN_IF_TRUE(cond, retval)
> > > #endif
> > >
> > > -/** @internal bitmask with valid and ext_entry/valid_group fields set
> */
> > > -#define RTE_LPM_VALID_EXT_ENTRY_BITMASK 0x0300
> > > +/** @internal bitmask with valid and ext_valid/ext_valid fields set */
> > > +#define RTE_LPM_VALID_EXT_ENTRY_BITMASK 0x03
> > >
> > > /** Bitmask used to indicate successful lookup */
> > > -#define RTE_LPM_LOOKUP_SUCCESS 0x0100
> > > +#define RTE_LPM_LOOKUP_SUCCESS 0x01
> > > +
> > > +struct rte_lpm_res {
> > > + uint16_t next_hop;
> > > + uint8_t fwd_class;
> > > +#ifdef RTE_LIBRTE_LPM_ASNUM
> > > + uint32_t as_num;
> > > +#endif
> > > +};
> > >
> > > #if RTE_BYTE_ORDER == RTE_LITTLE_ENDIAN
> > > -/** @internal Tbl24 entry structure. */
> > > -struct rte_lpm_tbl24_entry {
> > > - /* Stores Next hop or group index (i.e. gindex)into tbl8. */
> > > +struct rte_lpm_tbl_entry {
> > > + uint8_t valid :1;
> > > + uint8_t ext_valid :1;
> > > + uint8_t depth :6;
> > > + uint8_t fwd_class;
> > > union {
> > > - uint8_t next_hop;
> > > - uint8_t tbl8_gindex;
> > > + uint16_t next_hop;
> > > + uint16_t tbl8_gindex;
> > > };
> > > - /* Using single uint8_t to store 3 values. */
> > > - uint8_t valid :1; /**< Validation flag. */
> > > - uint8_t ext_entry :1; /**< External entry. */
> > > - uint8_t depth :6; /**< Rule depth. */
> > > -};
> > > -
> > > -/** @internal Tbl8 entry structure. */
> > > -struct rte_lpm_tbl8_entry {
> > > - uint8_t next_hop; /**< next hop. */
> > > - /* Using single uint8_t to store 3 values. */
> > > - uint8_t valid :1; /**< Validation flag. */
> > > - uint8_t valid_group :1; /**< Group validation flag. */
> > > - uint8_t depth :6; /**< Rule depth. */
> > > +#ifdef RTE_LIBRTE_LPM_ASNUM
> > > + uint32_t as_num;
> > > +#endif
> > > };
> > > #else
> > > -struct rte_lpm_tbl24_entry {
> > > - uint8_t depth :6;
> > > - uint8_t ext_entry :1;
> > > - uint8_t valid :1;
> > > +struct rte_lpm_tbl_entry {
> > > +#ifdef RTE_LIBRTE_LPM_ASNUM
> > > + uint32_t as_num;
> > > +#endif
> > > union {
> > > - uint8_t tbl8_gindex;
> > > - uint8_t next_hop;
> > > + uint16_t tbl8_gindex;
> > > + uint16_t next_hop;
> > > };
> > > -};
> > > -
> > > -struct rte_lpm_tbl8_entry {
> > > - uint8_t depth :6;
> > > - uint8_t valid_group :1;
> > > - uint8_t valid :1;
> > > - uint8_t next_hop;
> > > + uint8_t fwd_class;
> > > + uint8_t depth :6;
> > > + uint8_t ext_valid :1;
> > > + uint8_t valid :1;
> > > };
> > > #endif
> > >
> > > /** @internal Rule structure. */
> > > struct rte_lpm_rule {
> > > uint32_t ip; /**< Rule IP address. */
> > > - uint8_t next_hop; /**< Rule next hop. */
> > > +#ifdef RTE_LIBRTE_LPM_ASNUM
> > > + uint32_t as_num;
> > > +#endif
> > > + uint16_t next_hop; /**< Rule next hop. */
> > > + uint8_t fwd_class;
> > > };
> > >
> > > /** @internal Contains metadata about the rules table. */
> > > @@ -148,9 +149,9 @@ struct rte_lpm {
> > > struct rte_lpm_rule_info rule_info[RTE_LPM_MAX_DEPTH]; /**<
> Rule
> > > info table. */
> > >
> > > /* LPM Tables. */
> > > - struct rte_lpm_tbl24_entry tbl24[RTE_LPM_TBL24_NUM_ENTRIES] \
> > > + struct rte_lpm_tbl_entry tbl24[RTE_LPM_TBL24_NUM_ENTRIES] \
> > > __rte_cache_aligned; /**< LPM tbl24 table. */
> > > - struct rte_lpm_tbl8_entry tbl8[RTE_LPM_TBL8_NUM_ENTRIES] \
> > > + struct rte_lpm_tbl_entry tbl8[RTE_LPM_TBL8_NUM_ENTRIES] \
> > > __rte_cache_aligned; /**< LPM tbl8 table. */
> > > struct rte_lpm_rule rules_tbl[0] \
> > > __rte_cache_aligned; /**< LPM rules. */
> > > @@ -219,7 +220,7 @@ rte_lpm_free(struct rte_lpm *lpm);
> > > * 0 on success, negative value otherwise
> > > */
> > > int
> > > -rte_lpm_add(struct rte_lpm *lpm, uint32_t ip, uint8_t depth, uint8_t
> > > next_hop);
> > > +rte_lpm_add(struct rte_lpm *lpm, uint32_t ip, uint8_t depth, struct
> > > rte_lpm_res *res);
> > >
> > > /**
> > > * Check if a rule is present in the LPM table,
> > > @@ -238,7 +239,7 @@ rte_lpm_add(struct rte_lpm *lpm, uint32_t ip,
> > uint8_t
> > > depth, uint8_t next_hop);
> > > */
> > > int
> > > rte_lpm_is_rule_present(struct rte_lpm *lpm, uint32_t ip, uint8_t
> depth,
> > > -uint8_t *next_hop);
> > > + struct rte_lpm_res *res);
> > >
> > > /**
> > > * Delete a rule from the LPM table.
> > > @@ -277,29 +278,43 @@ rte_lpm_delete_all(struct rte_lpm *lpm);
> > > * -EINVAL for incorrect arguments, -ENOENT on lookup miss, 0 on
> lookup
> > > hit
> > > */
> > > static inline int
> > > -rte_lpm_lookup(struct rte_lpm *lpm, uint32_t ip, uint8_t *next_hop)
> > > +rte_lpm_lookup(struct rte_lpm *lpm, uint32_t ip, struct rte_lpm_res
> *res)
> > > {
> > > unsigned tbl24_index = (ip >> 8);
> > > - uint16_t tbl_entry;
> > > -
> > > +#ifdef RTE_LIBRTE_LPM_ASNUM
> > > + uint64_t tbl_entry;
> > > +#else
> > > + uint32_t tbl_entry;
> > > +#endif
> > > /* DEBUG: Check user input arguments. */
> > > - RTE_LPM_RETURN_IF_TRUE(((lpm == NULL) || (next_hop == NULL)),
> > > -EINVAL);
> > > + RTE_LPM_RETURN_IF_TRUE(((lpm == NULL) || (res == NULL)), -
> > EINVAL);
> > >
> > > /* Copy tbl24 entry */
> > > - tbl_entry = *(const uint16_t *)&lpm->tbl24[tbl24_index];
> > > -
> > > +#ifdef RTE_LIBRTE_LPM_ASNUM
> > > + tbl_entry = *(const uint64_t *)&lpm->tbl24[tbl24_index];
> > > +#else
> > > + tbl_entry = *(const uint32_t *)&lpm->tbl24[tbl24_index];
> > > +#endif
> > > /* Copy tbl8 entry (only if needed) */
> > > if (unlikely((tbl_entry & RTE_LPM_VALID_EXT_ENTRY_BITMASK) ==
> > > RTE_LPM_VALID_EXT_ENTRY_BITMASK)) {
> > >
> > > unsigned tbl8_index = (uint8_t)ip +
> > > - ((uint8_t)tbl_entry *
> > > RTE_LPM_TBL8_GROUP_NUM_ENTRIES);
> > > + ((*(struct rte_lpm_tbl_entry
> > > *)&tbl_entry).tbl8_gindex * RTE_LPM_TBL8_GROUP_NUM_ENTRIES);
> > >
> > > - tbl_entry = *(const uint16_t *)&lpm->tbl8[tbl8_index];
> > > +#ifdef RTE_LIBRTE_LPM_ASNUM
> > > + tbl_entry = *(const uint64_t *)&lpm->tbl8[tbl8_index];
> > > +#else
> > > + tbl_entry = *(const uint32_t *)&lpm->tbl8[tbl8_index];
> > > +#endif
> > > }
> > > -
> > > - *next_hop = (uint8_t)tbl_entry;
> > > + res->next_hop = ((struct rte_lpm_tbl_entry
> *)&tbl_entry)->next_hop;
> > > + res->fwd_class = ((struct rte_lpm_tbl_entry
> > > *)&tbl_entry)->fwd_class;
> > > +#ifdef RTE_LIBRTE_LPM_ASNUM
> > > + res->as_num = ((struct rte_lpm_tbl_entry
> > > *)&tbl_entry)->as_num;
> > > +#endif
> > > return (tbl_entry & RTE_LPM_LOOKUP_SUCCESS) ? 0 : -ENOENT;
> > > +
> > > }
> > >
> > > /**
> > > @@ -322,19 +337,25 @@ rte_lpm_lookup(struct rte_lpm *lpm, uint32_t ip,
> > > uint8_t *next_hop)
> > > * @return
> > > * -EINVAL for incorrect arguments, otherwise 0
> > > */
> > > -#define rte_lpm_lookup_bulk(lpm, ips, next_hops, n) \
> > > - rte_lpm_lookup_bulk_func(lpm, ips, next_hops, n)
> > > +#define rte_lpm_lookup_bulk(lpm, ips, res_tbl, n) \
> > > + rte_lpm_lookup_bulk_func(lpm, ips, res_tbl, n)
> > >
> > > static inline int
> > > -rte_lpm_lookup_bulk_func(const struct rte_lpm *lpm, const uint32_t *
> ips,
> > > - uint16_t * next_hops, const unsigned n)
> > > +rte_lpm_lookup_bulk_func(const struct rte_lpm *lpm, const uint32_t
> *ips,
> > > + struct rte_lpm_res *res_tbl, const unsigned n)
> > > {
> > > unsigned i;
> > > + int ret = 0;
> > > +#ifdef RTE_LIBRTE_LPM_ASNUM
> > > + uint64_t tbl_entry;
> > > +#else
> > > + uint32_t tbl_entry;
> > > +#endif
> > > unsigned tbl24_indexes[n];
> > >
> > > /* DEBUG: Check user input arguments. */
> > > RTE_LPM_RETURN_IF_TRUE(((lpm == NULL) || (ips == NULL) ||
> > > - (next_hops == NULL)), -EINVAL);
> > > + (res_tbl == NULL)), -EINVAL);
> > >
> > > for (i = 0; i < n; i++) {
> > > tbl24_indexes[i] = ips[i] >> 8;
> > > @@ -342,20 +363,32 @@ rte_lpm_lookup_bulk_func(const struct rte_lpm
> > *lpm,
> > > const uint32_t * ips,
> > >
> > > for (i = 0; i < n; i++) {
> > > /* Simply copy tbl24 entry to output */
> > > - next_hops[i] = *(const uint16_t
> > > *)&lpm->tbl24[tbl24_indexes[i]];
> > > -
> > > +#ifdef RTE_LIBRTE_LPM_ASNUM
> > > + tbl_entry = *(const uint64_t
> > > *)&lpm->tbl24[tbl24_indexes[i]];
> > > +#else
> > > + tbl_entry = *(const uint32_t
> > > *)&lpm->tbl24[tbl24_indexes[i]];
> > > +#endif
> > > /* Overwrite output with tbl8 entry if needed */
> > > - if (unlikely((next_hops[i] &
> > > RTE_LPM_VALID_EXT_ENTRY_BITMASK) ==
> > > - RTE_LPM_VALID_EXT_ENTRY_BITMASK)) {
> > > + if (unlikely((tbl_entry &
> RTE_LPM_VALID_EXT_ENTRY_BITMASK)
> > > ==
> > > + RTE_LPM_VALID_EXT_ENTRY_BITMASK)) {
> > >
> > > unsigned tbl8_index = (uint8_t)ips[i] +
> > > - ((uint8_t)next_hops[i] *
> > > -
> RTE_LPM_TBL8_GROUP_NUM_ENTRIES);
> > > + ((*(struct rte_lpm_tbl_entry
> > > *)&tbl_entry).tbl8_gindex * RTE_LPM_TBL8_GROUP_NUM_ENTRIES);
> > >
> > > - next_hops[i] = *(const uint16_t
> > > *)&lpm->tbl8[tbl8_index];
> > > +#ifdef RTE_LIBRTE_LPM_ASNUM
> > > + tbl_entry = *(const uint64_t
> > > *)&lpm->tbl8[tbl8_index];
> > > +#else
> > > + tbl_entry = *(const uint32_t
> > > *)&lpm->tbl8[tbl8_index];
> > > +#endif
> > > }
> > > + res_tbl[i].next_hop = ((struct rte_lpm_tbl_entry
> > > *)&tbl_entry)->next_hop;
> > > + res_tbl[i].fwd_class = ((struct rte_lpm_tbl_entry
> > > *)&tbl_entry)->fwd_class;
> > > +#ifdef RTE_LIBRTE_LPM_ASNUM
> > > + res_tbl[i].as_num = ((struct rte_lpm_tbl_entry
> > > *)&tbl_entry)->as_num;
> > > +#endif
> > > + ret |= 1 << i;
> > > }
> > > - return 0;
> > > + return ret;
> > > }
> > >
> > > /* Mask four results. */
> > > @@ -477,4 +510,4 @@ rte_lpm_lookupx4(const struct rte_lpm *lpm,
> > __m128i ip,
> > > uint16_t hop[4],
> > > }
> > > #endif
> > >
> > > -#endif /* _RTE_LPM_H_ */
> > > +#endif /* _RTE_LPM_EXT_H_ */
> > >
> > > 2015-10-24 9:09 GMT+03:00 Matthew Hall <mhall@mhcomputing.net>:
> > >
> > > > On 10/23/15 9:20 AM, Matthew Hall wrote:
> > > >
> > > >> On Fri, Oct 23, 2015 at 03:51:48PM +0200, Michal Jastrzebski wrote:
> > > >>
> > > >>> From: Michal Kobylinski <michalx.kobylinski@intel.com>
> > > >>>
> > > >>> The current DPDK implementation for LPM for IPv4 and IPv6 limits
> the
> > > >>> number of next hops to 256, as the next hop ID is an 8-bit long
> field.
> > > >>> Proposed extension increase number of next hops for IPv4 to 2^24
> and
> > > >>> also allows 32-bits read/write operations.
> > > >>>
> > > >>> This patchset requires additional change to rte_table library to
> meet
> > > >>> ABI compatibility requirements. A v2 will be sent next week.
> > > >>>
> > > >>
> > > >> I also have a patchset for this.
> > > >>
> > > >> I will send it out as well so we could compare.
> > > >>
> > > >> Matthew.
> > > >>
> > > >
> > > > Sorry about the delay; I only work on DPDK in personal time and not
> as
> > > > part of a job. My patchset is attached to this email.
> > > >
> > > > One possible advantage with my patchset, compared to others, is that
> the
> > > > space problem is fixed in both IPV4 and in IPV6, to prevent asymmetry
> > > > between these two standards, which is something I try to avoid as
> much
> > as
> > > > humanly possible.
> > > >
> > > > This is because my application code is green-field, so I absolutely
> don't
> > > > want to put any ugly hacks or incompatibilities in this code if I can
> > > > possibly avoid it.
> > > >
> > > > Otherwise, I am not necessarily as expert about rte_lpm as some of
> the
> > > > full-time guys, but I think with four or five of us in the thread
> hammering
> > > > out patches we will be able to create something amazing together and
> I
> > am
> > > > very very very very very happy about this.
> > > >
> > > > Matthew.
> > > >
> >
>
> Hi Vladimir,
> Thanks for sharing your implementation.
> Could you please clarify what the as_num and fwd_class fields represent?
> The second issue is that your patch does not apply on top of the current
> head. Could you check this, please?
>
> Best regards
> Michal
>
^ permalink raw reply [relevance 3%]
* Re: [dpdk-dev] [PATCH v1 0/3] lpm: increase number of next hops for lpm (ipv4)
2015-10-26 14:03 3% ` Vladimir Medvedkin
@ 2015-10-26 15:39 0% ` Michal Jastrzebski
2015-10-26 16:59 0% ` Vladimir Medvedkin
0 siblings, 1 reply; 200+ results
From: Michal Jastrzebski @ 2015-10-26 15:39 UTC (permalink / raw)
To: Vladimir Medvedkin; +Cc: dev
On Mon, Oct 26, 2015 at 05:03:31PM +0300, Vladimir Medvedkin wrote:
> Hi Michal,
>
> Forwarding class can help us to classify traffic based on dst prefix, it's
> something like Juniper DCU. For example on Juniper MX I can make policy
> that install prefix into the FIB with some class and use it on dataplane,
> for example with ACL.
> On Juniper MX I can make something like that:
> #show policy-options
> policy-statement community-to-class {
> term customer {
> from community originate-customer;
> then destination-class customer;
> }
> }
> community originate-customer members 12345:11111;
> # show routing-options
> forwarding-table {
> export community-to-class;
> }
> # show forwarding-options
> forwarding-options {
> family inet {
> filter {
> output test-filter;
> }
> }
> }
> # show firewall family inet filter test-filter
> term 1 {
> from {
> protocol icmp;
> destination-class customer;
> }
> then {
> discard;
> }
> }
> announce route 10.10.10.10/32 next-hop 10.10.10.2 community 12345:11111
> After than on dataplane we have
> NPC1( vty)# show route ip lookup 10.10.10.10
> Route Information (10.10.10.10):
> interface : xe-1/0/0.0 (328)
> Nexthop prefix : -
> Nexthop ID : 1048574
> MTU : 0
> Class ID : 129 <- That is "forwarding class" in my implementation
> This construction discards all ICMP traffic that goes to dst prefixes which
> was originated with community 12345:11111. With this mechanism we can make
> on control plane different sophisticated policy to control traffic on
> dataplane.
> The same with as_num, we can have on dataplane AS number that has
> originated that prefix, or another 4-byte number e.g. geo-id.
> What issue do you mean? I think it is because of table/pipeline/test
> frameworks that doesen't want to compile due to changing API/ABI. You can
> turn it off for LPM testing, if my patch will be applied I will make
> changes in above-mentioned frameworks.
>
> Regards,
> Vladimir
Hi Vladimir,
I have an issue with applying your patch, not with compiling it.
This is the error I get:
Checking patch config/common_bsdapp...
Checking patch config/common_linuxapp...
Checking patch lib/librte_lpm/rte_lpm.c...
error: while searching for:
lpm_list = RTE_TAILQ_CAST(rte_lpm_tailq.head, rte_lpm_list);
RTE_BUILD_BUG_ON(sizeof(struct rte_lpm_tbl24_entry) != 2);
RTE_BUILD_BUG_ON(sizeof(struct rte_lpm_tbl8_entry) != 2);
/* Check user arguments. */
if ((name == NULL) || (socket_id < -1) || (max_rules == 0)){
rte_errno = EINVAL;
error: patch failed: lib/librte_lpm/rte_lpm.c:159
error: lib/librte_lpm/rte_lpm.c: patch does not apply
Checking patch lib/librte_lpm/rte_lpm.h...
error: while searching for:
#define RTE_LPM_RETURN_IF_TRUE(cond, retval)
#endif
/** @internal bitmask with valid and ext_entry/valid_group fields set */
#define RTE_LPM_VALID_EXT_ENTRY_BITMASK 0x0300
/** Bitmask used to indicate successful lookup */
#define RTE_LPM_LOOKUP_SUCCESS 0x0100
#if RTE_BYTE_ORDER == RTE_LITTLE_ENDIAN
/** @internal Tbl24 entry structure. */
struct rte_lpm_tbl24_entry {
/* Stores Next hop or group index (i.e. gindex)into tbl8. */
union {
uint8_t next_hop;
uint8_t tbl8_gindex;
};
/* Using single uint8_t to store 3 values. */
uint8_t valid :1; /**< Validation flag. */
uint8_t ext_entry :1; /**< External entry. */
uint8_t depth :6; /**< Rule depth. */
};
/** @internal Tbl8 entry structure. */
struct rte_lpm_tbl8_entry {
uint8_t next_hop; /**< next hop. */
/* Using single uint8_t to store 3 values. */
uint8_t valid :1; /**< Validation flag. */
uint8_t valid_group :1; /**< Group validation flag. */
uint8_t depth :6; /**< Rule depth. */
};
#else
struct rte_lpm_tbl24_entry {
uint8_t depth :6;
uint8_t ext_entry :1;
uint8_t valid :1;
union {
uint8_t tbl8_gindex;
uint8_t next_hop;
};
};
struct rte_lpm_tbl8_entry {
uint8_t depth :6;
uint8_t valid_group :1;
uint8_t valid :1;
uint8_t next_hop;
};
#endif
/** @internal Rule structure. */
struct rte_lpm_rule {
uint32_t ip; /**< Rule IP address. */
uint8_t next_hop; /**< Rule next hop. */
};
/** @internal Contains metadata about the rules table. */
error: patch failed: lib/librte_lpm/rte_lpm.h:81
error: lib/librte_lpm/rte_lpm.h: patch does not apply
> 2015-10-26 14:57 GMT+03:00 Jastrzebski, MichalX K <
> michalx.k.jastrzebski@intel.com>:
>
> > > -----Original Message-----
> > > From: Michal Jastrzebski [mailto:michalx.k.jastrzebski@intel.com]
> > > Sent: Monday, October 26, 2015 12:55 PM
> > > To: Vladimir Medvedkin
> > > Subject: Re: [dpdk-dev] [PATCH v1 0/3] lpm: increase number of next hops
> > > for lpm (ipv4)
> > >
> > > On Sun, Oct 25, 2015 at 08:52:04PM +0300, Vladimir Medvedkin wrote:
> > > > Hi all,
> > > >
> > > > Here my implementation
> > > >
> > > > Signed-off-by: Vladimir Medvedkin <medvedkinv@gmail.com>
> > > > ---
> > > > config/common_bsdapp | 1 +
> > > > config/common_linuxapp | 1 +
> > > > lib/librte_lpm/rte_lpm.c | 194
> > > > +++++++++++++++++++++++++++++------------------
> > > > lib/librte_lpm/rte_lpm.h | 163 +++++++++++++++++++++++----------------
> > > > 4 files changed, 219 insertions(+), 140 deletions(-)
> > > >
> > > > diff --git a/config/common_bsdapp b/config/common_bsdapp
> > > > index b37dcf4..408cc2c 100644
> > > > --- a/config/common_bsdapp
> > > > +++ b/config/common_bsdapp
> > > > @@ -344,6 +344,7 @@ CONFIG_RTE_LIBRTE_JOBSTATS=y
> > > > #
> > > > CONFIG_RTE_LIBRTE_LPM=y
> > > > CONFIG_RTE_LIBRTE_LPM_DEBUG=n
> > > > +CONFIG_RTE_LIBRTE_LPM_ASNUM=n
> > > >
> > > > #
> > > > # Compile librte_acl
> > > > diff --git a/config/common_linuxapp b/config/common_linuxapp
> > > > index 0de43d5..1c60e63 100644
> > > > --- a/config/common_linuxapp
> > > > +++ b/config/common_linuxapp
> > > > @@ -352,6 +352,7 @@ CONFIG_RTE_LIBRTE_JOBSTATS=y
> > > > #
> > > > CONFIG_RTE_LIBRTE_LPM=y
> > > > CONFIG_RTE_LIBRTE_LPM_DEBUG=n
> > > > +CONFIG_RTE_LIBRTE_LPM_ASNUM=n
> > > >
> > > > #
> > > > # Compile librte_acl
> > > > diff --git a/lib/librte_lpm/rte_lpm.c b/lib/librte_lpm/rte_lpm.c
> > > > index 163ba3c..363b400 100644
> > > > --- a/lib/librte_lpm/rte_lpm.c
> > > > +++ b/lib/librte_lpm/rte_lpm.c
> > > > @@ -159,9 +159,11 @@ rte_lpm_create(const char *name, int socket_id,
> > > int
> > > > max_rules,
> > > >
> > > > lpm_list = RTE_TAILQ_CAST(rte_lpm_tailq.head, rte_lpm_list);
> > > >
> > > > - RTE_BUILD_BUG_ON(sizeof(struct rte_lpm_tbl24_entry) != 2);
> > > > - RTE_BUILD_BUG_ON(sizeof(struct rte_lpm_tbl8_entry) != 2);
> > > > -
> > > > +#ifdef RTE_LIBRTE_LPM_ASNUM
> > > > + RTE_BUILD_BUG_ON(sizeof(struct rte_lpm_tbl_entry) != 8);
> > > > +#else
> > > > + RTE_BUILD_BUG_ON(sizeof(struct rte_lpm_tbl_entry) != 4);
> > > > +#endif
> > > > /* Check user arguments. */
> > > > if ((name == NULL) || (socket_id < -1) || (max_rules == 0)){
> > > > rte_errno = EINVAL;
> > > > @@ -261,7 +263,7 @@ rte_lpm_free(struct rte_lpm *lpm)
> > > > */
> > > > static inline int32_t
> > > > rule_add(struct rte_lpm *lpm, uint32_t ip_masked, uint8_t depth,
> > > > - uint8_t next_hop)
> > > > + struct rte_lpm_res *res)
> > > > {
> > > > uint32_t rule_gindex, rule_index, last_rule;
> > > > int i;
> > > > @@ -282,8 +284,11 @@ rule_add(struct rte_lpm *lpm, uint32_t
> > > ip_masked,
> > > > uint8_t depth,
> > > >
> > > > /* If rule already exists update its next_hop
> > and
> > > > return. */
> > > > if (lpm->rules_tbl[rule_index].ip ==
> > ip_masked) {
> > > > - lpm->rules_tbl[rule_index].next_hop =
> > > > next_hop;
> > > > -
> > > > + lpm->rules_tbl[rule_index].next_hop =
> > > > res->next_hop;
> > > > + lpm->rules_tbl[rule_index].fwd_class =
> > > > res->fwd_class;
> > > > +#ifdef RTE_LIBRTE_LPM_ASNUM
> > > > + lpm->rules_tbl[rule_index].as_num =
> > > > res->as_num;
> > > > +#endif
> > > > return rule_index;
> > > > }
> > > > }
> > > > @@ -320,7 +325,11 @@ rule_add(struct rte_lpm *lpm, uint32_t
> > > ip_masked,
> > > > uint8_t depth,
> > > >
> > > > /* Add the new rule. */
> > > > lpm->rules_tbl[rule_index].ip = ip_masked;
> > > > - lpm->rules_tbl[rule_index].next_hop = next_hop;
> > > > + lpm->rules_tbl[rule_index].next_hop = res->next_hop;
> > > > + lpm->rules_tbl[rule_index].fwd_class = res->fwd_class;
> > > > +#ifdef RTE_LIBRTE_LPM_ASNUM
> > > > + lpm->rules_tbl[rule_index].as_num = res->as_num;
> > > > +#endif
> > > >
> > > > /* Increment the used rules counter for this rule group. */
> > > > lpm->rule_info[depth - 1].used_rules++;
> > > > @@ -382,10 +391,10 @@ rule_find(struct rte_lpm *lpm, uint32_t
> > > ip_masked,
> > > > uint8_t depth)
> > > > * Find, clean and allocate a tbl8.
> > > > */
> > > > static inline int32_t
> > > > -tbl8_alloc(struct rte_lpm_tbl8_entry *tbl8)
> > > > +tbl8_alloc(struct rte_lpm_tbl_entry *tbl8)
> > > > {
> > > > uint32_t tbl8_gindex; /* tbl8 group index. */
> > > > - struct rte_lpm_tbl8_entry *tbl8_entry;
> > > > + struct rte_lpm_tbl_entry *tbl8_entry;
> > > >
> > > > /* Scan through tbl8 to find a free (i.e. INVALID) tbl8 group.
> > */
> > > > for (tbl8_gindex = 0; tbl8_gindex < RTE_LPM_TBL8_NUM_GROUPS;
> > > > @@ -393,12 +402,12 @@ tbl8_alloc(struct rte_lpm_tbl8_entry *tbl8)
> > > > tbl8_entry = &tbl8[tbl8_gindex *
> > > > RTE_LPM_TBL8_GROUP_NUM_ENTRIES];
> > > > /* If a free tbl8 group is found clean it and set as
> > VALID.
> > > > */
> > > > - if (!tbl8_entry->valid_group) {
> > > > + if (!tbl8_entry->ext_valid) {
> > > > memset(&tbl8_entry[0], 0,
> > > > RTE_LPM_TBL8_GROUP_NUM_ENTRIES
> > *
> > > > sizeof(tbl8_entry[0]));
> > > >
> > > > - tbl8_entry->valid_group = VALID;
> > > > + tbl8_entry->ext_valid = VALID;
> > > >
> > > > /* Return group index for allocated tbl8
> > group. */
> > > > return tbl8_gindex;
> > > > @@ -410,46 +419,50 @@ tbl8_alloc(struct rte_lpm_tbl8_entry *tbl8)
> > > > }
> > > >
> > > > static inline void
> > > > -tbl8_free(struct rte_lpm_tbl8_entry *tbl8, uint32_t tbl8_group_start)
> > > > +tbl8_free(struct rte_lpm_tbl_entry *tbl8, uint32_t tbl8_group_start)
> > > > {
> > > > /* Set tbl8 group invalid*/
> > > > - tbl8[tbl8_group_start].valid_group = INVALID;
> > > > + tbl8[tbl8_group_start].ext_valid = INVALID;
> > > > }
> > > >
> > > > static inline int32_t
> > > > add_depth_small(struct rte_lpm *lpm, uint32_t ip, uint8_t depth,
> > > > - uint8_t next_hop)
> > > > + struct rte_lpm_res *res)
> > > > {
> > > > uint32_t tbl24_index, tbl24_range, tbl8_index, tbl8_group_end,
> > i, j;
> > > >
> > > > /* Calculate the index into Table24. */
> > > > tbl24_index = ip >> 8;
> > > > tbl24_range = depth_to_range(depth);
> > > > + struct rte_lpm_tbl_entry new_tbl_entry = {
> > > > +#ifdef RTE_LIBRTE_LPM_ASNUM
> > > > + .as_num = res->as_num,
> > > > +#endif
> > > > + .next_hop = res->next_hop,
> > > > + .fwd_class = res->fwd_class,
> > > > + .ext_valid = 0,
> > > > + .depth = depth,
> > > > + .valid = VALID,
> > > > + };
> > > > +
> > > >
> > > > for (i = tbl24_index; i < (tbl24_index + tbl24_range); i++) {
> > > > /*
> > > > * For invalid OR valid and non-extended tbl 24
> > entries set
> > > > * entry.
> > > > */
> > > > - if (!lpm->tbl24[i].valid || (lpm->tbl24[i].ext_entry
> > == 0 &&
> > > > + if (!lpm->tbl24[i].valid || (lpm->tbl24[i].ext_valid
> > == 0 &&
> > > > lpm->tbl24[i].depth <= depth)) {
> > > >
> > > > - struct rte_lpm_tbl24_entry new_tbl24_entry = {
> > > > - { .next_hop = next_hop, },
> > > > - .valid = VALID,
> > > > - .ext_entry = 0,
> > > > - .depth = depth,
> > > > - };
> > > > -
> > > > /* Setting tbl24 entry in one go to avoid race
> > > > * conditions
> > > > */
> > > > - lpm->tbl24[i] = new_tbl24_entry;
> > > > + lpm->tbl24[i] = new_tbl_entry;
> > > >
> > > > continue;
> > > > }
> > > >
> > > > - if (lpm->tbl24[i].ext_entry == 1) {
> > > > + if (lpm->tbl24[i].ext_valid == 1) {
> > > > /* If tbl24 entry is valid and extended
> > calculate
> > > > the
> > > > * index into tbl8.
> > > > */
> > > > @@ -461,19 +474,14 @@ add_depth_small(struct rte_lpm *lpm, uint32_t
> > > ip,
> > > > uint8_t depth,
> > > > for (j = tbl8_index; j < tbl8_group_end; j++) {
> > > > if (!lpm->tbl8[j].valid ||
> > > > lpm->tbl8[j].depth <=
> > > > depth) {
> > > > - struct rte_lpm_tbl8_entry
> > > > - new_tbl8_entry = {
> > > > - .valid = VALID,
> > > > - .valid_group = VALID,
> > > > - .depth = depth,
> > > > - .next_hop = next_hop,
> > > > - };
> > > > +
> > > > + new_tbl_entry.ext_valid =
> > VALID;
> > > >
> > > > /*
> > > > * Setting tbl8 entry in one
> > go to
> > > > avoid
> > > > * race conditions
> > > > */
> > > > - lpm->tbl8[j] = new_tbl8_entry;
> > > > + lpm->tbl8[j] = new_tbl_entry;
> > > >
> > > > continue;
> > > > }
> > > > @@ -486,7 +494,7 @@ add_depth_small(struct rte_lpm *lpm, uint32_t ip,
> > > > uint8_t depth,
> > > >
> > > > static inline int32_t
> > > > add_depth_big(struct rte_lpm *lpm, uint32_t ip_masked, uint8_t depth,
> > > > - uint8_t next_hop)
> > > > + struct rte_lpm_res *res)
> > > > {
> > > > uint32_t tbl24_index;
> > > > int32_t tbl8_group_index, tbl8_group_start, tbl8_group_end,
> > > > tbl8_index,
> > > > @@ -512,7 +520,11 @@ add_depth_big(struct rte_lpm *lpm, uint32_t
> > > ip_masked,
> > > > uint8_t depth,
> > > > /* Set tbl8 entry. */
> > > > for (i = tbl8_index; i < (tbl8_index + tbl8_range);
> > i++) {
> > > > lpm->tbl8[i].depth = depth;
> > > > - lpm->tbl8[i].next_hop = next_hop;
> > > > + lpm->tbl8[i].next_hop = res->next_hop;
> > > > + lpm->tbl8[i].fwd_class = res->fwd_class;
> > > > +#ifdef RTE_LIBRTE_LPM_ASNUM
> > > > + lpm->tbl8[i].as_num = res->as_num;
> > > > +#endif
> > > > lpm->tbl8[i].valid = VALID;
> > > > }
> > > >
> > > > @@ -522,17 +534,17 @@ add_depth_big(struct rte_lpm *lpm, uint32_t
> > > > ip_masked, uint8_t depth,
> > > > * so assign whole structure in one go
> > > > */
> > > >
> > > > - struct rte_lpm_tbl24_entry new_tbl24_entry = {
> > > > - { .tbl8_gindex = (uint8_t)tbl8_group_index, },
> > > > - .valid = VALID,
> > > > - .ext_entry = 1,
> > > > + struct rte_lpm_tbl_entry new_tbl24_entry = {
> > > > + .tbl8_gindex = (uint16_t)tbl8_group_index,
> > > > .depth = 0,
> > > > + .ext_valid = 1,
> > > > + .valid = VALID,
> > > > };
> > > >
> > > > lpm->tbl24[tbl24_index] = new_tbl24_entry;
> > > >
> > > > }/* If valid entry but not extended calculate the index into
> > > > Table8. */
> > > > - else if (lpm->tbl24[tbl24_index].ext_entry == 0) {
> > > > + else if (lpm->tbl24[tbl24_index].ext_valid == 0) {
> > > > /* Search for free tbl8 group. */
> > > > tbl8_group_index = tbl8_alloc(lpm->tbl8);
> > > >
> > > > @@ -551,6 +563,11 @@ add_depth_big(struct rte_lpm *lpm, uint32_t
> > > ip_masked,
> > > > uint8_t depth,
> > > > lpm->tbl8[i].depth =
> > lpm->tbl24[tbl24_index].depth;
> > > > lpm->tbl8[i].next_hop =
> > > >
> > lpm->tbl24[tbl24_index].next_hop;
> > > > + lpm->tbl8[i].fwd_class =
> > > > +
> > lpm->tbl24[tbl24_index].fwd_class;
> > > > +#ifdef RTE_LIBRTE_LPM_ASNUM
> > > > + lpm->tbl8[i].as_num =
> > > > lpm->tbl24[tbl24_index].as_num;
> > > > +#endif
> > > > }
> > > >
> > > > tbl8_index = tbl8_group_start + (ip_masked & 0xFF);
> > > > @@ -561,7 +578,11 @@ add_depth_big(struct rte_lpm *lpm, uint32_t
> > > ip_masked,
> > > > uint8_t depth,
> > > > lpm->tbl8[i].depth <= depth) {
> > > > lpm->tbl8[i].valid = VALID;
> > > > lpm->tbl8[i].depth = depth;
> > > > - lpm->tbl8[i].next_hop = next_hop;
> > > > + lpm->tbl8[i].next_hop = res->next_hop;
> > > > + lpm->tbl8[i].fwd_class =
> > res->fwd_class;
> > > > +#ifdef RTE_LIBRTE_LPM_ASNUM
> > > > + lpm->tbl8[i].as_num = res->as_num;
> > > > +#endif
> > > >
> > > > continue;
> > > > }
> > > > @@ -573,11 +594,11 @@ add_depth_big(struct rte_lpm *lpm, uint32_t
> > > > ip_masked, uint8_t depth,
> > > > * so assign whole structure in one go.
> > > > */
> > > >
> > > > - struct rte_lpm_tbl24_entry new_tbl24_entry = {
> > > > - { .tbl8_gindex =
> > (uint8_t)tbl8_group_index,
> > > > },
> > > > - .valid = VALID,
> > > > - .ext_entry = 1,
> > > > + struct rte_lpm_tbl_entry new_tbl24_entry = {
> > > > + .tbl8_gindex =
> > (uint16_t)tbl8_group_index,
> > > > .depth = 0,
> > > > + .ext_valid = 1,
> > > > + .valid = VALID,
> > > > };
> > > >
> > > > lpm->tbl24[tbl24_index] = new_tbl24_entry;
> > > > @@ -595,11 +616,15 @@ add_depth_big(struct rte_lpm *lpm, uint32_t
> > > > ip_masked, uint8_t depth,
> > > >
> > > > if (!lpm->tbl8[i].valid ||
> > > > lpm->tbl8[i].depth <= depth) {
> > > > - struct rte_lpm_tbl8_entry
> > new_tbl8_entry = {
> > > > - .valid = VALID,
> > > > + struct rte_lpm_tbl_entry
> > new_tbl8_entry = {
> > > > +#ifdef RTE_LIBRTE_LPM_ASNUM
> > > > + .as_num = res->as_num,
> > > > +#endif
> > > > + .next_hop = res->next_hop,
> > > > + .fwd_class = res->fwd_class,
> > > > .depth = depth,
> > > > - .next_hop = next_hop,
> > > > - .valid_group =
> > > > lpm->tbl8[i].valid_group,
> > > > + .ext_valid =
> > lpm->tbl8[i].ext_valid,
> > > > + .valid = VALID,
> > > > };
> > > >
> > > > /*
> > > > @@ -621,19 +646,19 @@ add_depth_big(struct rte_lpm *lpm, uint32_t
> > > > ip_masked, uint8_t depth,
> > > > */
> > > > int
> > > > rte_lpm_add(struct rte_lpm *lpm, uint32_t ip, uint8_t depth,
> > > > - uint8_t next_hop)
> > > > + struct rte_lpm_res *res)
> > > > {
> > > > int32_t rule_index, status = 0;
> > > > uint32_t ip_masked;
> > > >
> > > > /* Check user arguments. */
> > > > - if ((lpm == NULL) || (depth < 1) || (depth >
> > RTE_LPM_MAX_DEPTH))
> > > > + if ((lpm == NULL) || (res == NULL) || (depth < 1) || (depth >
> > > > RTE_LPM_MAX_DEPTH))
> > > > return -EINVAL;
> > > >
> > > > ip_masked = ip & depth_to_mask(depth);
> > > >
> > > > /* Add the rule to the rule table. */
> > > > - rule_index = rule_add(lpm, ip_masked, depth, next_hop);
> > > > + rule_index = rule_add(lpm, ip_masked, depth, res);
> > > >
> > > > /* If the is no space available for new rule return error. */
> > > > if (rule_index < 0) {
> > > > @@ -641,10 +666,10 @@ rte_lpm_add(struct rte_lpm *lpm, uint32_t ip,
> > > uint8_t
> > > > depth,
> > > > }
> > > >
> > > > if (depth <= MAX_DEPTH_TBL24) {
> > > > - status = add_depth_small(lpm, ip_masked, depth,
> > next_hop);
> > > > + status = add_depth_small(lpm, ip_masked, depth, res);
> > > > }
> > > > else { /* If depth > RTE_LPM_MAX_DEPTH_TBL24 */
> > > > - status = add_depth_big(lpm, ip_masked, depth,
> > next_hop);
> > > > + status = add_depth_big(lpm, ip_masked, depth, res);
> > > >
> > > > /*
> > > > * If add fails due to exhaustion of tbl8 extensions
> > delete
> > > > @@ -665,14 +690,14 @@ rte_lpm_add(struct rte_lpm *lpm, uint32_t ip,
> > > uint8_t
> > > > depth,
> > > > */
> > > > int
> > > > rte_lpm_is_rule_present(struct rte_lpm *lpm, uint32_t ip, uint8_t
> > depth,
> > > > -uint8_t *next_hop)
> > > > + struct rte_lpm_res *res)
> > > > {
> > > > uint32_t ip_masked;
> > > > int32_t rule_index;
> > > >
> > > > /* Check user arguments. */
> > > > if ((lpm == NULL) ||
> > > > - (next_hop == NULL) ||
> > > > + (res == NULL) ||
> > > > (depth < 1) || (depth > RTE_LPM_MAX_DEPTH))
> > > > return -EINVAL;
> > > >
> > > > @@ -681,7 +706,11 @@ uint8_t *next_hop)
> > > > rule_index = rule_find(lpm, ip_masked, depth);
> > > >
> > > > if (rule_index >= 0) {
> > > > - *next_hop = lpm->rules_tbl[rule_index].next_hop;
> > > > + res->next_hop = lpm->rules_tbl[rule_index].next_hop;
> > > > + res->fwd_class = lpm->rules_tbl[rule_index].fwd_class;
> > > > +#ifdef RTE_LIBRTE_LPM_ASNUM
> > > > + res->as_num = lpm->rules_tbl[rule_index].as_num;
> > > > +#endif
> > > > return 1;
> > > > }
> > > >
> > > > @@ -731,7 +760,7 @@ delete_depth_small(struct rte_lpm *lpm, uint32_t
> > > > ip_masked,
> > > > */
> > > > for (i = tbl24_index; i < (tbl24_index + tbl24_range);
> > i++)
> > > > {
> > > >
> > > > - if (lpm->tbl24[i].ext_entry == 0 &&
> > > > + if (lpm->tbl24[i].ext_valid == 0 &&
> > > > lpm->tbl24[i].depth <= depth )
> > {
> > > > lpm->tbl24[i].valid = INVALID;
> > > > }
> > > > @@ -761,23 +790,30 @@ delete_depth_small(struct rte_lpm *lpm,
> > > uint32_t
> > > > ip_masked,
> > > > * associated with this rule.
> > > > */
> > > >
> > > > - struct rte_lpm_tbl24_entry new_tbl24_entry = {
> > > > - {.next_hop =
> > > > lpm->rules_tbl[sub_rule_index].next_hop,},
> > > > - .valid = VALID,
> > > > - .ext_entry = 0,
> > > > + struct rte_lpm_tbl_entry new_tbl24_entry = {
> > > > +#ifdef RTE_LIBRTE_LPM_ASNUM
> > > > + .as_num =
> > lpm->rules_tbl[sub_rule_index].as_num,
> > > > +#endif
> > > > + .next_hop =
> > lpm->rules_tbl[sub_rule_index].next_hop,
> > > > + .fwd_class =
> > > > lpm->rules_tbl[sub_rule_index].fwd_class,
> > > > .depth = sub_rule_depth,
> > > > + .ext_valid = 0,
> > > > + .valid = VALID,
> > > > };
> > > >
> > > > - struct rte_lpm_tbl8_entry new_tbl8_entry = {
> > > > - .valid = VALID,
> > > > + struct rte_lpm_tbl_entry new_tbl8_entry = {
> > > > +#ifdef RTE_LIBRTE_LPM_ASNUM
> > > > + .as_num =
> > lpm->rules_tbl[sub_rule_index].as_num,
> > > > +#endif
> > > > + .next_hop =
> > lpm->rules_tbl[sub_rule_index].next_hop,
> > > > + .fwd_class =
> > > > lpm->rules_tbl[sub_rule_index].fwd_class,
> > > > .depth = sub_rule_depth,
> > > > - .next_hop = lpm->rules_tbl
> > > > - [sub_rule_index].next_hop,
> > > > + .valid = VALID,
> > > > };
> > > >
> > > > for (i = tbl24_index; i < (tbl24_index + tbl24_range);
> > i++)
> > > > {
> > > >
> > > > - if (lpm->tbl24[i].ext_entry == 0 &&
> > > > + if (lpm->tbl24[i].ext_valid == 0 &&
> > > > lpm->tbl24[i].depth <= depth )
> > {
> > > > lpm->tbl24[i] = new_tbl24_entry;
> > > > }
> > > > @@ -814,7 +850,7 @@ delete_depth_small(struct rte_lpm *lpm, uint32_t
> > > > ip_masked,
> > > > * thus can be recycled
> > > > */
> > > > static inline int32_t
> > > > -tbl8_recycle_check(struct rte_lpm_tbl8_entry *tbl8, uint32_t
> > > > tbl8_group_start)
> > > > +tbl8_recycle_check(struct rte_lpm_tbl_entry *tbl8, uint32_t
> > > > tbl8_group_start)
> > > > {
> > > > uint32_t tbl8_group_end, i;
> > > > tbl8_group_end = tbl8_group_start +
> > > RTE_LPM_TBL8_GROUP_NUM_ENTRIES;
> > > > @@ -891,11 +927,15 @@ delete_depth_big(struct rte_lpm *lpm, uint32_t
> > > > ip_masked,
> > > > }
> > > > else {
> > > > /* Set new tbl8 entry. */
> > > > - struct rte_lpm_tbl8_entry new_tbl8_entry = {
> > > > - .valid = VALID,
> > > > - .depth = sub_rule_depth,
> > > > - .valid_group =
> > > > lpm->tbl8[tbl8_group_start].valid_group,
> > > > + struct rte_lpm_tbl_entry new_tbl8_entry = {
> > > > +#ifdef RTE_LIBRTE_LPM_ASNUM
> > > > + .as_num =
> > lpm->rules_tbl[sub_rule_index].as_num,
> > > > +#endif
> > > > + .fwd_class =
> > > > lpm->rules_tbl[sub_rule_index].fwd_class,
> > > > .next_hop =
> > lpm->rules_tbl[sub_rule_index].next_hop,
> > > > + .depth = sub_rule_depth,
> > > > + .ext_valid =
> > lpm->tbl8[tbl8_group_start].ext_valid,
> > > > + .valid = VALID,
> > > > };
> > > >
> > > > /*
> > > > @@ -923,11 +963,15 @@ delete_depth_big(struct rte_lpm *lpm, uint32_t
> > > > ip_masked,
> > > > }
> > > > else if (tbl8_recycle_index > -1) {
> > > > /* Update tbl24 entry. */
> > > > - struct rte_lpm_tbl24_entry new_tbl24_entry = {
> > > > - { .next_hop =
> > > > lpm->tbl8[tbl8_recycle_index].next_hop, },
> > > > - .valid = VALID,
> > > > - .ext_entry = 0,
> > > > + struct rte_lpm_tbl_entry new_tbl24_entry = {
> > > > +#ifdef RTE_LIBRTE_LPM_ASNUM
> > > > + .as_num = lpm->tbl8[tbl8_recycle_index].as_num,
> > > > +#endif
> > > > + .next_hop =
> > lpm->tbl8[tbl8_recycle_index].next_hop,
> > > > + .fwd_class =
> > > > lpm->tbl8[tbl8_recycle_index].fwd_class,
> > > > .depth = lpm->tbl8[tbl8_recycle_index].depth,
> > > > + .ext_valid = 0,
> > > > + .valid = VALID,
> > > > };
> > > >
> > > > /* Set tbl24 before freeing tbl8 to avoid race
> > condition. */
> > > > diff --git a/lib/librte_lpm/rte_lpm.h b/lib/librte_lpm/rte_lpm.h
> > > > index c299ce2..7c615bc 100644
> > > > --- a/lib/librte_lpm/rte_lpm.h
> > > > +++ b/lib/librte_lpm/rte_lpm.h
> > > > @@ -31,8 +31,8 @@
> > > > * OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH
> > > DAMAGE.
> > > > */
> > > >
> > > > -#ifndef _RTE_LPM_H_
> > > > -#define _RTE_LPM_H_
> > > > +#ifndef _RTE_LPM_EXT_H_
> > > > +#define _RTE_LPM_EXT_H_
> > > >
> > > > /**
> > > > * @file
> > > > @@ -81,57 +81,58 @@ extern "C" {
> > > > #define RTE_LPM_RETURN_IF_TRUE(cond, retval)
> > > > #endif
> > > >
> > > > -/** @internal bitmask with valid and ext_entry/valid_group fields set
> > */
> > > > -#define RTE_LPM_VALID_EXT_ENTRY_BITMASK 0x0300
> > > > +/** @internal bitmask with valid and ext_valid/ext_valid fields set */
> > > > +#define RTE_LPM_VALID_EXT_ENTRY_BITMASK 0x03
> > > >
> > > > /** Bitmask used to indicate successful lookup */
> > > > -#define RTE_LPM_LOOKUP_SUCCESS 0x0100
> > > > +#define RTE_LPM_LOOKUP_SUCCESS 0x01
> > > > +
> > > > +struct rte_lpm_res {
> > > > + uint16_t next_hop;
> > > > + uint8_t fwd_class;
> > > > +#ifdef RTE_LIBRTE_LPM_ASNUM
> > > > + uint32_t as_num;
> > > > +#endif
> > > > +};
> > > >
> > > > #if RTE_BYTE_ORDER == RTE_LITTLE_ENDIAN
> > > > -/** @internal Tbl24 entry structure. */
> > > > -struct rte_lpm_tbl24_entry {
> > > > - /* Stores Next hop or group index (i.e. gindex)into tbl8. */
> > > > +struct rte_lpm_tbl_entry {
> > > > + uint8_t valid :1;
> > > > + uint8_t ext_valid :1;
> > > > + uint8_t depth :6;
> > > > + uint8_t fwd_class;
> > > > union {
> > > > - uint8_t next_hop;
> > > > - uint8_t tbl8_gindex;
> > > > + uint16_t next_hop;
> > > > + uint16_t tbl8_gindex;
> > > > };
> > > > - /* Using single uint8_t to store 3 values. */
> > > > - uint8_t valid :1; /**< Validation flag. */
> > > > - uint8_t ext_entry :1; /**< External entry. */
> > > > - uint8_t depth :6; /**< Rule depth. */
> > > > -};
> > > > -
> > > > -/** @internal Tbl8 entry structure. */
> > > > -struct rte_lpm_tbl8_entry {
> > > > - uint8_t next_hop; /**< next hop. */
> > > > - /* Using single uint8_t to store 3 values. */
> > > > - uint8_t valid :1; /**< Validation flag. */
> > > > - uint8_t valid_group :1; /**< Group validation flag. */
> > > > - uint8_t depth :6; /**< Rule depth. */
> > > > +#ifdef RTE_LIBRTE_LPM_ASNUM
> > > > + uint32_t as_num;
> > > > +#endif
> > > > };
> > > > #else
> > > > -struct rte_lpm_tbl24_entry {
> > > > - uint8_t depth :6;
> > > > - uint8_t ext_entry :1;
> > > > - uint8_t valid :1;
> > > > +struct rte_lpm_tbl_entry {
> > > > +#ifdef RTE_LIBRTE_LPM_ASNUM
> > > > + uint32_t as_num;
> > > > +#endif
> > > > union {
> > > > - uint8_t tbl8_gindex;
> > > > - uint8_t next_hop;
> > > > + uint16_t tbl8_gindex;
> > > > + uint16_t next_hop;
> > > > };
> > > > -};
> > > > -
> > > > -struct rte_lpm_tbl8_entry {
> > > > - uint8_t depth :6;
> > > > - uint8_t valid_group :1;
> > > > - uint8_t valid :1;
> > > > - uint8_t next_hop;
> > > > + uint8_t fwd_class;
> > > > + uint8_t depth :6;
> > > > + uint8_t ext_valid :1;
> > > > + uint8_t valid :1;
> > > > };
> > > > #endif
> > > >
> > > > /** @internal Rule structure. */
> > > > struct rte_lpm_rule {
> > > > uint32_t ip; /**< Rule IP address. */
> > > > - uint8_t next_hop; /**< Rule next hop. */
> > > > +#ifdef RTE_LIBRTE_LPM_ASNUM
> > > > + uint32_t as_num;
> > > > +#endif
> > > > + uint16_t next_hop; /**< Rule next hop. */
> > > > + uint8_t fwd_class;
> > > > };
> > > >
> > > > /** @internal Contains metadata about the rules table. */
> > > > @@ -148,9 +149,9 @@ struct rte_lpm {
> > > > struct rte_lpm_rule_info rule_info[RTE_LPM_MAX_DEPTH]; /**<
> > Rule
> > > > info table. */
> > > >
> > > > /* LPM Tables. */
> > > > - struct rte_lpm_tbl24_entry tbl24[RTE_LPM_TBL24_NUM_ENTRIES] \
> > > > + struct rte_lpm_tbl_entry tbl24[RTE_LPM_TBL24_NUM_ENTRIES] \
> > > > __rte_cache_aligned; /**< LPM tbl24 table. */
> > > > - struct rte_lpm_tbl8_entry tbl8[RTE_LPM_TBL8_NUM_ENTRIES] \
> > > > + struct rte_lpm_tbl_entry tbl8[RTE_LPM_TBL8_NUM_ENTRIES] \
> > > > __rte_cache_aligned; /**< LPM tbl8 table. */
> > > > struct rte_lpm_rule rules_tbl[0] \
> > > > __rte_cache_aligned; /**< LPM rules. */
> > > > @@ -219,7 +220,7 @@ rte_lpm_free(struct rte_lpm *lpm);
> > > > * 0 on success, negative value otherwise
> > > > */
> > > > int
> > > > -rte_lpm_add(struct rte_lpm *lpm, uint32_t ip, uint8_t depth, uint8_t
> > > > next_hop);
> > > > +rte_lpm_add(struct rte_lpm *lpm, uint32_t ip, uint8_t depth, struct
> > > > rte_lpm_res *res);
> > > >
> > > > /**
> > > > * Check if a rule is present in the LPM table,
> > > > @@ -238,7 +239,7 @@ rte_lpm_add(struct rte_lpm *lpm, uint32_t ip,
> > > uint8_t
> > > > depth, uint8_t next_hop);
> > > > */
> > > > int
> > > > rte_lpm_is_rule_present(struct rte_lpm *lpm, uint32_t ip, uint8_t
> > depth,
> > > > -uint8_t *next_hop);
> > > > + struct rte_lpm_res *res);
> > > >
> > > > /**
> > > > * Delete a rule from the LPM table.
> > > > @@ -277,29 +278,43 @@ rte_lpm_delete_all(struct rte_lpm *lpm);
> > > > * -EINVAL for incorrect arguments, -ENOENT on lookup miss, 0 on
> > lookup
> > > > hit
> > > > */
> > > > static inline int
> > > > -rte_lpm_lookup(struct rte_lpm *lpm, uint32_t ip, uint8_t *next_hop)
> > > > +rte_lpm_lookup(struct rte_lpm *lpm, uint32_t ip, struct rte_lpm_res
> > *res)
> > > > {
> > > > unsigned tbl24_index = (ip >> 8);
> > > > - uint16_t tbl_entry;
> > > > -
> > > > +#ifdef RTE_LIBRTE_LPM_ASNUM
> > > > + uint64_t tbl_entry;
> > > > +#else
> > > > + uint32_t tbl_entry;
> > > > +#endif
> > > > /* DEBUG: Check user input arguments. */
> > > > - RTE_LPM_RETURN_IF_TRUE(((lpm == NULL) || (next_hop == NULL)),
> > > > -EINVAL);
> > > > + RTE_LPM_RETURN_IF_TRUE(((lpm == NULL) || (res == NULL)), -
> > > EINVAL);
> > > >
> > > > /* Copy tbl24 entry */
> > > > - tbl_entry = *(const uint16_t *)&lpm->tbl24[tbl24_index];
> > > > -
> > > > +#ifdef RTE_LIBRTE_LPM_ASNUM
> > > > + tbl_entry = *(const uint64_t *)&lpm->tbl24[tbl24_index];
> > > > +#else
> > > > + tbl_entry = *(const uint32_t *)&lpm->tbl24[tbl24_index];
> > > > +#endif
> > > > /* Copy tbl8 entry (only if needed) */
> > > > if (unlikely((tbl_entry & RTE_LPM_VALID_EXT_ENTRY_BITMASK) ==
> > > > RTE_LPM_VALID_EXT_ENTRY_BITMASK)) {
> > > >
> > > > unsigned tbl8_index = (uint8_t)ip +
> > > > - ((uint8_t)tbl_entry *
> > > > RTE_LPM_TBL8_GROUP_NUM_ENTRIES);
> > > > + ((*(struct rte_lpm_tbl_entry
> > > > *)&tbl_entry).tbl8_gindex * RTE_LPM_TBL8_GROUP_NUM_ENTRIES);
> > > >
> > > > - tbl_entry = *(const uint16_t *)&lpm->tbl8[tbl8_index];
> > > > +#ifdef RTE_LIBRTE_LPM_ASNUM
> > > > + tbl_entry = *(const uint64_t *)&lpm->tbl8[tbl8_index];
> > > > +#else
> > > > + tbl_entry = *(const uint32_t *)&lpm->tbl8[tbl8_index];
> > > > +#endif
> > > > }
> > > > -
> > > > - *next_hop = (uint8_t)tbl_entry;
> > > > + res->next_hop = ((struct rte_lpm_tbl_entry
> > *)&tbl_entry)->next_hop;
> > > > + res->fwd_class = ((struct rte_lpm_tbl_entry
> > > > *)&tbl_entry)->fwd_class;
> > > > +#ifdef RTE_LIBRTE_LPM_ASNUM
> > > > + res->as_num = ((struct rte_lpm_tbl_entry
> > > > *)&tbl_entry)->as_num;
> > > > +#endif
> > > > return (tbl_entry & RTE_LPM_LOOKUP_SUCCESS) ? 0 : -ENOENT;
> > > > +
> > > > }
> > > >
> > > > /**
> > > > @@ -322,19 +337,25 @@ rte_lpm_lookup(struct rte_lpm *lpm, uint32_t ip,
> > > > uint8_t *next_hop)
> > > > * @return
> > > > * -EINVAL for incorrect arguments, otherwise 0
> > > > */
> > > > -#define rte_lpm_lookup_bulk(lpm, ips, next_hops, n) \
> > > > - rte_lpm_lookup_bulk_func(lpm, ips, next_hops, n)
> > > > +#define rte_lpm_lookup_bulk(lpm, ips, res_tbl, n) \
> > > > + rte_lpm_lookup_bulk_func(lpm, ips, res_tbl, n)
> > > >
> > > > static inline int
> > > > -rte_lpm_lookup_bulk_func(const struct rte_lpm *lpm, const uint32_t *
> > ips,
> > > > - uint16_t * next_hops, const unsigned n)
> > > > +rte_lpm_lookup_bulk_func(const struct rte_lpm *lpm, const uint32_t
> > *ips,
> > > > + struct rte_lpm_res *res_tbl, const unsigned n)
> > > > {
> > > > unsigned i;
> > > > + int ret = 0;
> > > > +#ifdef RTE_LIBRTE_LPM_ASNUM
> > > > + uint64_t tbl_entry;
> > > > +#else
> > > > + uint32_t tbl_entry;
> > > > +#endif
> > > > unsigned tbl24_indexes[n];
> > > >
> > > > /* DEBUG: Check user input arguments. */
> > > > RTE_LPM_RETURN_IF_TRUE(((lpm == NULL) || (ips == NULL) ||
> > > > - (next_hops == NULL)), -EINVAL);
> > > > + (res_tbl == NULL)), -EINVAL);
> > > >
> > > > for (i = 0; i < n; i++) {
> > > > tbl24_indexes[i] = ips[i] >> 8;
> > > > @@ -342,20 +363,32 @@ rte_lpm_lookup_bulk_func(const struct rte_lpm
> > > *lpm,
> > > > const uint32_t * ips,
> > > >
> > > > for (i = 0; i < n; i++) {
> > > > /* Simply copy tbl24 entry to output */
> > > > - next_hops[i] = *(const uint16_t
> > > > *)&lpm->tbl24[tbl24_indexes[i]];
> > > > -
> > > > +#ifdef RTE_LIBRTE_LPM_ASNUM
> > > > + tbl_entry = *(const uint64_t
> > > > *)&lpm->tbl24[tbl24_indexes[i]];
> > > > +#else
> > > > + tbl_entry = *(const uint32_t
> > > > *)&lpm->tbl24[tbl24_indexes[i]];
> > > > +#endif
> > > > /* Overwrite output with tbl8 entry if needed */
> > > > - if (unlikely((next_hops[i] &
> > > > RTE_LPM_VALID_EXT_ENTRY_BITMASK) ==
> > > > - RTE_LPM_VALID_EXT_ENTRY_BITMASK)) {
> > > > + if (unlikely((tbl_entry &
> > RTE_LPM_VALID_EXT_ENTRY_BITMASK)
> > > > ==
> > > > + RTE_LPM_VALID_EXT_ENTRY_BITMASK)) {
> > > >
> > > > unsigned tbl8_index = (uint8_t)ips[i] +
> > > > - ((uint8_t)next_hops[i] *
> > > > -
> > RTE_LPM_TBL8_GROUP_NUM_ENTRIES);
> > > > + ((*(struct rte_lpm_tbl_entry
> > > > *)&tbl_entry).tbl8_gindex * RTE_LPM_TBL8_GROUP_NUM_ENTRIES);
> > > >
> > > > - next_hops[i] = *(const uint16_t
> > > > *)&lpm->tbl8[tbl8_index];
> > > > +#ifdef RTE_LIBRTE_LPM_ASNUM
> > > > + tbl_entry = *(const uint64_t
> > > > *)&lpm->tbl8[tbl8_index];
> > > > +#else
> > > > + tbl_entry = *(const uint32_t
> > > > *)&lpm->tbl8[tbl8_index];
> > > > +#endif
> > > > }
> > > > + res_tbl[i].next_hop = ((struct rte_lpm_tbl_entry
> > > > *)&tbl_entry)->next_hop;
> > > > + res_tbl[i].fwd_class = ((struct rte_lpm_tbl_entry
> > > > *)&tbl_entry)->fwd_class;
> > > > +#ifdef RTE_LIBRTE_LPM_ASNUM
> > > > + res_tbl[i].as_num = ((struct rte_lpm_tbl_entry
> > > > *)&tbl_entry)->as_num;
> > > > +#endif
> > > > + ret |= 1 << i;
> > > > }
> > > > - return 0;
> > > > + return ret;
> > > > }
> > > >
> > > > /* Mask four results. */
> > > > @@ -477,4 +510,4 @@ rte_lpm_lookupx4(const struct rte_lpm *lpm,
> > > __m128i ip,
> > > > uint16_t hop[4],
> > > > }
> > > > #endif
> > > >
> > > > -#endif /* _RTE_LPM_H_ */
> > > > +#endif /* _RTE_LPM_EXT_H_ */
> > > >
> > > > 2015-10-24 9:09 GMT+03:00 Matthew Hall <mhall@mhcomputing.net>:
> > > >
> > > > > On 10/23/15 9:20 AM, Matthew Hall wrote:
> > > > >
> > > > >> On Fri, Oct 23, 2015 at 03:51:48PM +0200, Michal Jastrzebski wrote:
> > > > >>
> > > > >>> From: Michal Kobylinski <michalx.kobylinski@intel.com>
> > > > >>>
> > > > >>> The current DPDK implementation for LPM for IPv4 and IPv6 limits
> > the
> > > > >>> number of next hops to 256, as the next hop ID is an 8-bit long
> > field.
> > > > >>> Proposed extension increase number of next hops for IPv4 to 2^24
> > and
> > > > >>> also allows 32-bits read/write operations.
> > > > >>>
> > > > >>> This patchset requires additional change to rte_table library to
> > meet
> > > > >>> ABI compatibility requirements. A v2 will be sent next week.
> > > > >>>
> > > > >>
> > > > >> I also have a patchset for this.
> > > > >>
> > > > >> I will send it out as well so we could compare.
> > > > >>
> > > > >> Matthew.
> > > > >>
> > > > >
> > > > > Sorry about the delay; I only work on DPDK in personal time and not
> > as
> > > > > part of a job. My patchset is attached to this email.
> > > > >
> > > > > One possible advantage with my patchset, compared to others, is that
> > the
> > > > > space problem is fixed in both IPV4 and in IPV6, to prevent asymmetry
> > > > > between these two standards, which is something I try to avoid as
> > much
> > > as
> > > > > humanly possible.
> > > > >
> > > > > This is because my application code is green-field, so I absolutely
> > don't
> > > > > want to put any ugly hacks or incompatibilities in this code if I can
> > > > > possibly avoid it.
> > > > >
> > > > > Otherwise, I am not necessarily as expert about rte_lpm as some of
> > the
> > > > > full-time guys, but I think with four or five of us in the thread
> > hammering
> > > > > out patches we will be able to create something amazing together and
> > I
> > > am
> > > > > very very very very very happy about this.
> > > > >
> > > > > Matthew.
> > > > >
> > >
> >
> > Hi Vladimir,
> > Thanks for sharing your implementation.
> > Could you please clarify what the as_num and fwd_class fields represent?
> > The second issue is that your patch does not apply on top of the current
> > head. Could you check this, please?
> >
> > Best regards
> > Michal
> >
^ permalink raw reply [relevance 0%]
* [dpdk-dev] [PATCH v2 01/16] mk: Introduce ARMv7 architecture
@ 2015-10-26 16:37 3% ` Jan Viktorin
1 sibling, 0 replies; 200+ results
From: Jan Viktorin @ 2015-10-26 16:37 UTC (permalink / raw)
To: Thomas Monjalon, David Hunt, dev; +Cc: Vlastimil Kosar
From: Vlastimil Kosar <kosar@rehivetech.com>
Make DPDK run on ARMv7-A architecture. This patch assumes
ARM Cortex-A9. However, it is known to be working on Cortex-A7
and Cortex-A15.
Signed-off-by: Vlastimil Kosar <kosar@rehivetech.com>
Signed-off-by: Jan Viktorin <viktorin@rehivetech.com>
---
v1 -> v2:
* the -mtune parameter of GCC is configurable now
* the -mfpu=neon can be turned off
Signed-off-by: Jan Viktorin <viktorin@rehivetech.com>
---
config/defconfig_arm-armv7-a-linuxapp-gcc | 78 +++++++++++++++++++++++++++++++
mk/arch/arm/rte.vars.mk | 39 ++++++++++++++++
mk/machine/armv7-a/rte.vars.mk | 67 ++++++++++++++++++++++++++
3 files changed, 184 insertions(+)
create mode 100644 config/defconfig_arm-armv7-a-linuxapp-gcc
create mode 100644 mk/arch/arm/rte.vars.mk
create mode 100644 mk/machine/armv7-a/rte.vars.mk
diff --git a/config/defconfig_arm-armv7-a-linuxapp-gcc b/config/defconfig_arm-armv7-a-linuxapp-gcc
new file mode 100644
index 0000000..5b582a8
--- /dev/null
+++ b/config/defconfig_arm-armv7-a-linuxapp-gcc
@@ -0,0 +1,78 @@
+# BSD LICENSE
+#
+# Copyright (C) 2015 RehiveTech. All rights reserved.
+#
+# Redistribution and use in source and binary forms, with or without
+# modification, are permitted provided that the following conditions
+# are met:
+#
+# * Redistributions of source code must retain the above copyright
+# notice, this list of conditions and the following disclaimer.
+# * Redistributions in binary form must reproduce the above copyright
+# notice, this list of conditions and the following disclaimer in
+# the documentation and/or other materials provided with the
+# distribution.
+# * Neither the name of RehiveTech nor the names of its
+# contributors may be used to endorse or promote products derived
+# from this software without specific prior written permission.
+#
+# THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
+# "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
+# LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
+# A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
+# OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
+# SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
+# LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
+# DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
+# THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
+# (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
+# OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
+
+#include "common_linuxapp"
+
+CONFIG_RTE_MACHINE="armv7-a"
+
+CONFIG_RTE_ARCH="arm"
+CONFIG_RTE_ARCH_ARM=y
+CONFIG_RTE_ARCH_ARMv7=y
+CONFIG_RTE_ARCH_ARM_TUNE="cortex-a9"
+CONFIG_RTE_ARCH_ARM_NEON=y
+
+CONFIG_RTE_TOOLCHAIN="gcc"
+CONFIG_RTE_TOOLCHAIN_GCC=y
+
+# ARM doesn't have support for vmware TSC map
+CONFIG_RTE_LIBRTE_EAL_VMWARE_TSC_MAP_SUPPORT=n
+
+# avoids using i686/x86_64 SIMD instructions, nothing for ARM
+CONFIG_RTE_BITMAP_OPTIMIZATIONS=0
+
+# KNI is not supported on 32-bit
+CONFIG_RTE_LIBRTE_KNI=n
+
+# PCI is usually not used on ARM
+CONFIG_RTE_EAL_IGB_UIO=n
+
+# missing rte_vect.h for ARM
+CONFIG_XMM_SIZE=16
+
+# fails to compile on ARM
+CONFIG_RTE_LIBRTE_ACL=n
+CONFIG_RTE_LIBRTE_LPM=n
+
+# cannot use those on ARM
+CONFIG_RTE_KNI_KMOD=n
+CONFIG_RTE_LIBRTE_EM_PMD=n
+CONFIG_RTE_LIBRTE_IGB_PMD=n
+CONFIG_RTE_LIBRTE_CXGBE_PMD=n
+CONFIG_RTE_LIBRTE_E1000_PMD=n
+CONFIG_RTE_LIBRTE_ENIC_PMD=n
+CONFIG_RTE_LIBRTE_FM10K_PMD=n
+CONFIG_RTE_LIBRTE_I40E_PMD=n
+CONFIG_RTE_LIBRTE_IXGBE_PMD=n
+CONFIG_RTE_LIBRTE_MLX4_PMD=n
+CONFIG_RTE_LIBRTE_MPIPE_PMD=n
+CONFIG_RTE_LIBRTE_VIRTIO_PMD=n
+CONFIG_RTE_LIBRTE_VMXNET3_PMD=n
+CONFIG_RTE_LIBRTE_PMD_XENVIRT=n
+CONFIG_RTE_LIBRTE_PMD_BNX2X=n
diff --git a/mk/arch/arm/rte.vars.mk b/mk/arch/arm/rte.vars.mk
new file mode 100644
index 0000000..df0c043
--- /dev/null
+++ b/mk/arch/arm/rte.vars.mk
@@ -0,0 +1,39 @@
+# BSD LICENSE
+#
+# Copyright (C) 2015 RehiveTech. All rights reserved.
+#
+# Redistribution and use in source and binary forms, with or without
+# modification, are permitted provided that the following conditions
+# are met:
+#
+# * Redistributions of source code must retain the above copyright
+# notice, this list of conditions and the following disclaimer.
+# * Redistributions in binary form must reproduce the above copyright
+# notice, this list of conditions and the following disclaimer in
+# the documentation and/or other materials provided with the
+# distribution.
+# * Neither the name of RehiveTech nor the names of its
+# contributors may be used to endorse or promote products derived
+# from this software without specific prior written permission.
+#
+# THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
+# "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
+# LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
+# A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
+# OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
+# SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
+# LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
+# DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
+# THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
+# (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
+# OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
+
+
+ARCH ?= arm
+CROSS ?=
+
+CPU_CFLAGS ?= -marm -DRTE_CACHE_LINE_SIZE=64 -munaligned-access
+CPU_LDFLAGS ?=
+CPU_ASFLAGS ?= -felf
+
+export ARCH CROSS CPU_CFLAGS CPU_LDFLAGS CPU_ASFLAGS
diff --git a/mk/machine/armv7-a/rte.vars.mk b/mk/machine/armv7-a/rte.vars.mk
new file mode 100644
index 0000000..48d3979
--- /dev/null
+++ b/mk/machine/armv7-a/rte.vars.mk
@@ -0,0 +1,67 @@
+# BSD LICENSE
+#
+# Copyright (C) 2015 RehiveTech. All rights reserved.
+#
+# Redistribution and use in source and binary forms, with or without
+# modification, are permitted provided that the following conditions
+# are met:
+#
+# * Redistributions of source code must retain the above copyright
+# notice, this list of conditions and the following disclaimer.
+# * Redistributions in binary form must reproduce the above copyright
+# notice, this list of conditions and the following disclaimer in
+# the documentation and/or other materials provided with the
+# distribution.
+# * Neither the name of RehiveTech nor the names of its
+# contributors may be used to endorse or promote products derived
+# from this software without specific prior written permission.
+#
+# THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
+# "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
+# LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
+# A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
+# OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
+# SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
+# LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
+# DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
+# THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
+# (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
+# OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
+
+#
+# machine:
+#
+# - can define ARCH variable (overridden by cmdline value)
+# - can define CROSS variable (overridden by cmdline value)
+# - define MACHINE_CFLAGS variable (overridden by cmdline value)
+# - define MACHINE_LDFLAGS variable (overridden by cmdline value)
+# - define MACHINE_ASFLAGS variable (overridden by cmdline value)
+# - can define CPU_CFLAGS variable (overridden by cmdline value) that
+# overrides the one defined in arch.
+# - can define CPU_LDFLAGS variable (overridden by cmdline value) that
+# overrides the one defined in arch.
+# - can define CPU_ASFLAGS variable (overridden by cmdline value) that
+# overrides the one defined in arch.
+# - may override any previously defined variable
+#
+
+# ARCH =
+# CROSS =
+# MACHINE_CFLAGS =
+# MACHINE_LDFLAGS =
+# MACHINE_ASFLAGS =
+# CPU_CFLAGS =
+# CPU_LDFLAGS =
+# CPU_ASFLAGS =
+
+CPU_CFLAGS += -mfloat-abi=softfp
+
+MACHINE_CFLAGS += -march=armv7-a
+
+ifdef CONFIG_RTE_ARCH_ARM_TUNE
+MACHINE_CFLAGS += -mtune=$(CONFIG_RTE_ARCH_ARM_TUNE)
+endif
+
+ifeq ($(CONFIG_RTE_ARCH_ARM_NEON),y)
+MACHINE_CFLAGS += -mfpu=neon
+endif
--
2.6.1
^ permalink raw reply [relevance 3%]
* Re: [dpdk-dev] [PATCH v1 0/3] lpm: increase number of next hops for lpm (ipv4)
2015-10-26 15:39 0% ` Michal Jastrzebski
@ 2015-10-26 16:59 0% ` Vladimir Medvedkin
0 siblings, 0 replies; 200+ results
From: Vladimir Medvedkin @ 2015-10-26 16:59 UTC (permalink / raw)
To: Michal Jastrzebski; +Cc: dev
Michal,
Looks strange, you have:
error: while searching for:
lpm_list = RTE_TAILQ_CAST(rte_lpm_tailq.head, rte_lpm_list);
...
error: patch failed: lib/librte_lpm/rte_lpm.c:159
but if we look at
http://dpdk.org/browse/dpdk/tree/lib/librte_lpm/rte_lpm.c#n159
patch should apply fine.
Latest commit in my repo is 139debc42dc0a320dad40f5295b74d2e3ab8a7f9
2015-10-26 18:39 GMT+03:00 Michal Jastrzebski <
michalx.k.jastrzebski@intel.com>:
> On Mon, Oct 26, 2015 at 05:03:31PM +0300, Vladimir Medvedkin wrote:
> > Hi Michal,
> >
> > Forwarding class can help us to classify traffic based on dst prefix,
> it's
> > something like Juniper DCU. For example on Juniper MX I can make policy
> > that install prefix into the FIB with some class and use it on dataplane,
> > for example with ACL.
> > On Juniper MX I can make something like that:
> > #show policy-options
> > policy-statement community-to-class {
> > term customer {
> > from community originate-customer;
> > then destination-class customer;
> > }
> > }
> > community originate-customer members 12345:11111;
> > # show routing-options
> > forwarding-table {
> > export community-to-class;
> > }
> > # show forwarding-options
> > forwarding-options {
> > family inet {
> > filter {
> > output test-filter;
> > }
> > }
> > }
> > # show firewall family inet filter test-filter
> > term 1 {
> > from {
> > protocol icmp;
> > destination-class customer;
> > }
> > then {
> > discard;
> > }
> > }
> > announce route 10.10.10.10/32 next-hop 10.10.10.2 community 12345:11111
> > After than on dataplane we have
> > NPC1( vty)# show route ip lookup 10.10.10.10
> > Route Information (10.10.10.10):
> > interface : xe-1/0/0.0 (328)
> > Nexthop prefix : -
> > Nexthop ID : 1048574
> > MTU : 0
> > Class ID : 129 <- That is "forwarding class" in my implementation
> > This construction discards all ICMP traffic that goes to dst prefixes
> which
> > was originated with community 12345:11111. With this mechanism we can
> make
> > on control plane different sophisticated policy to control traffic on
> > dataplane.
> > The same with as_num, we can have on dataplane AS number that has
> > originated that prefix, or another 4-byte number e.g. geo-id.
> > What issue do you mean? I think it is because of table/pipeline/test
> > frameworks that doesen't want to compile due to changing API/ABI. You can
> > turn it off for LPM testing, if my patch will be applied I will make
> > changes in above-mentioned frameworks.
> >
> > Regards,
> > Vladimir
>
> Hi Vladimir,
> I have an issue with applying Your patch not compilation.
> This is the error i get:
> Checking patch config/common_bsdapp...
> Checking patch config/common_linuxapp...
> Checking patch lib/librte_lpm/rte_lpm.c...
> error: while searching for:
>
> lpm_list = RTE_TAILQ_CAST(rte_lpm_tailq.head, rte_lpm_list);
>
> RTE_BUILD_BUG_ON(sizeof(struct rte_lpm_tbl24_entry) != 2);
> RTE_BUILD_BUG_ON(sizeof(struct rte_lpm_tbl8_entry) != 2);
>
> /* Check user arguments. */
> if ((name == NULL) || (socket_id < -1) || (max_rules == 0)){
> rte_errno = EINVAL;
>
> error: patch failed: lib/librte_lpm/rte_lpm.c:159
> error: lib/librte_lpm/rte_lpm.c: patch does not apply
> Checking patch lib/librte_lpm/rte_lpm.h...
> error: while searching for:
> #define RTE_LPM_RETURN_IF_TRUE(cond, retval)
> #endif
>
> /** @internal bitmask with valid and ext_entry/valid_group fields set */
> #define RTE_LPM_VALID_EXT_ENTRY_BITMASK 0x0300
>
> /** Bitmask used to indicate successful lookup */
> #define RTE_LPM_LOOKUP_SUCCESS 0x0100
>
> #if RTE_BYTE_ORDER == RTE_LITTLE_ENDIAN
> /** @internal Tbl24 entry structure. */
> struct rte_lpm_tbl24_entry {
> /* Stores Next hop or group index (i.e. gindex)into tbl8. */
> union {
> uint8_t next_hop;
> uint8_t tbl8_gindex;
> };
> /* Using single uint8_t to store 3 values. */
> uint8_t valid :1; /**< Validation flag. */
> uint8_t ext_entry :1; /**< External entry. */
> uint8_t depth :6; /**< Rule depth. */
> };
>
> /** @internal Tbl8 entry structure. */
> struct rte_lpm_tbl8_entry {
> uint8_t next_hop; /**< next hop. */
> /* Using single uint8_t to store 3 values. */
> uint8_t valid :1; /**< Validation flag. */
> uint8_t valid_group :1; /**< Group validation flag. */
> uint8_t depth :6; /**< Rule depth. */
> };
> #else
> struct rte_lpm_tbl24_entry {
> uint8_t depth :6;
> uint8_t ext_entry :1;
> uint8_t valid :1;
> union {
> uint8_t tbl8_gindex;
> uint8_t next_hop;
> };
> };
>
> struct rte_lpm_tbl8_entry {
> uint8_t depth :6;
> uint8_t valid_group :1;
> uint8_t valid :1;
> uint8_t next_hop;
> };
> #endif
>
> /** @internal Rule structure. */
> struct rte_lpm_rule {
> uint32_t ip; /**< Rule IP address. */
> uint8_t next_hop; /**< Rule next hop. */
> };
>
> /** @internal Contains metadata about the rules table. */
>
> error: patch failed: lib/librte_lpm/rte_lpm.h:81
> error: lib/librte_lpm/rte_lpm.h: patch does not apply
>
>
>
> > 2015-10-26 14:57 GMT+03:00 Jastrzebski, MichalX K <
> > michalx.k.jastrzebski@intel.com>:
> >
> > > > -----Original Message-----
> > > > From: Michal Jastrzebski [mailto:michalx.k.jastrzebski@intel.com]
> > > > Sent: Monday, October 26, 2015 12:55 PM
> > > > To: Vladimir Medvedkin
> > > > Subject: Re: [dpdk-dev] [PATCH v1 0/3] lpm: increase number of next
> hops
> > > > for lpm (ipv4)
> > > >
> > > > On Sun, Oct 25, 2015 at 08:52:04PM +0300, Vladimir Medvedkin wrote:
> > > > > Hi all,
> > > > >
> > > > > Here my implementation
> > > > >
> > > > > Signed-off-by: Vladimir Medvedkin <medvedkinv@gmail.com>
> > > > > ---
> > > > > config/common_bsdapp | 1 +
> > > > > config/common_linuxapp | 1 +
> > > > > lib/librte_lpm/rte_lpm.c | 194
> > > > > +++++++++++++++++++++++++++++------------------
> > > > > lib/librte_lpm/rte_lpm.h | 163
> +++++++++++++++++++++++----------------
> > > > > 4 files changed, 219 insertions(+), 140 deletions(-)
> > > > >
> > > > > diff --git a/config/common_bsdapp b/config/common_bsdapp
> > > > > index b37dcf4..408cc2c 100644
> > > > > --- a/config/common_bsdapp
> > > > > +++ b/config/common_bsdapp
> > > > > @@ -344,6 +344,7 @@ CONFIG_RTE_LIBRTE_JOBSTATS=y
> > > > > #
> > > > > CONFIG_RTE_LIBRTE_LPM=y
> > > > > CONFIG_RTE_LIBRTE_LPM_DEBUG=n
> > > > > +CONFIG_RTE_LIBRTE_LPM_ASNUM=n
> > > > >
> > > > > #
> > > > > # Compile librte_acl
> > > > > diff --git a/config/common_linuxapp b/config/common_linuxapp
> > > > > index 0de43d5..1c60e63 100644
> > > > > --- a/config/common_linuxapp
> > > > > +++ b/config/common_linuxapp
> > > > > @@ -352,6 +352,7 @@ CONFIG_RTE_LIBRTE_JOBSTATS=y
> > > > > #
> > > > > CONFIG_RTE_LIBRTE_LPM=y
> > > > > CONFIG_RTE_LIBRTE_LPM_DEBUG=n
> > > > > +CONFIG_RTE_LIBRTE_LPM_ASNUM=n
> > > > >
> > > > > #
> > > > > # Compile librte_acl
> > > > > diff --git a/lib/librte_lpm/rte_lpm.c b/lib/librte_lpm/rte_lpm.c
> > > > > index 163ba3c..363b400 100644
> > > > > --- a/lib/librte_lpm/rte_lpm.c
> > > > > +++ b/lib/librte_lpm/rte_lpm.c
> > > > > @@ -159,9 +159,11 @@ rte_lpm_create(const char *name, int
> socket_id,
> > > > int
> > > > > max_rules,
> > > > >
> > > > > lpm_list = RTE_TAILQ_CAST(rte_lpm_tailq.head,
> rte_lpm_list);
> > > > >
> > > > > - RTE_BUILD_BUG_ON(sizeof(struct rte_lpm_tbl24_entry) != 2);
> > > > > - RTE_BUILD_BUG_ON(sizeof(struct rte_lpm_tbl8_entry) != 2);
> > > > > -
> > > > > +#ifdef RTE_LIBRTE_LPM_ASNUM
> > > > > + RTE_BUILD_BUG_ON(sizeof(struct rte_lpm_tbl_entry) != 8);
> > > > > +#else
> > > > > + RTE_BUILD_BUG_ON(sizeof(struct rte_lpm_tbl_entry) != 4);
> > > > > +#endif
> > > > > /* Check user arguments. */
> > > > > if ((name == NULL) || (socket_id < -1) || (max_rules ==
> 0)){
> > > > > rte_errno = EINVAL;
> > > > > @@ -261,7 +263,7 @@ rte_lpm_free(struct rte_lpm *lpm)
> > > > > */
> > > > > static inline int32_t
> > > > > rule_add(struct rte_lpm *lpm, uint32_t ip_masked, uint8_t depth,
> > > > > - uint8_t next_hop)
> > > > > + struct rte_lpm_res *res)
> > > > > {
> > > > > uint32_t rule_gindex, rule_index, last_rule;
> > > > > int i;
> > > > > @@ -282,8 +284,11 @@ rule_add(struct rte_lpm *lpm, uint32_t
> > > > ip_masked,
> > > > > uint8_t depth,
> > > > >
> > > > > /* If rule already exists update its
> next_hop
> > > and
> > > > > return. */
> > > > > if (lpm->rules_tbl[rule_index].ip ==
> > > ip_masked) {
> > > > > -
> lpm->rules_tbl[rule_index].next_hop =
> > > > > next_hop;
> > > > > -
> > > > > +
> lpm->rules_tbl[rule_index].next_hop =
> > > > > res->next_hop;
> > > > > +
> lpm->rules_tbl[rule_index].fwd_class =
> > > > > res->fwd_class;
> > > > > +#ifdef RTE_LIBRTE_LPM_ASNUM
> > > > > + lpm->rules_tbl[rule_index].as_num =
> > > > > res->as_num;
> > > > > +#endif
> > > > > return rule_index;
> > > > > }
> > > > > }
> > > > > @@ -320,7 +325,11 @@ rule_add(struct rte_lpm *lpm, uint32_t
> > > > ip_masked,
> > > > > uint8_t depth,
> > > > >
> > > > > /* Add the new rule. */
> > > > > lpm->rules_tbl[rule_index].ip = ip_masked;
> > > > > - lpm->rules_tbl[rule_index].next_hop = next_hop;
> > > > > + lpm->rules_tbl[rule_index].next_hop = res->next_hop;
> > > > > + lpm->rules_tbl[rule_index].fwd_class = res->fwd_class;
> > > > > +#ifdef RTE_LIBRTE_LPM_ASNUM
> > > > > + lpm->rules_tbl[rule_index].as_num = res->as_num;
> > > > > +#endif
> > > > >
> > > > > /* Increment the used rules counter for this rule group. */
> > > > > lpm->rule_info[depth - 1].used_rules++;
> > > > > @@ -382,10 +391,10 @@ rule_find(struct rte_lpm *lpm, uint32_t
> > > > ip_masked,
> > > > > uint8_t depth)
> > > > > * Find, clean and allocate a tbl8.
> > > > > */
> > > > > static inline int32_t
> > > > > -tbl8_alloc(struct rte_lpm_tbl8_entry *tbl8)
> > > > > +tbl8_alloc(struct rte_lpm_tbl_entry *tbl8)
> > > > > {
> > > > > uint32_t tbl8_gindex; /* tbl8 group index. */
> > > > > - struct rte_lpm_tbl8_entry *tbl8_entry;
> > > > > + struct rte_lpm_tbl_entry *tbl8_entry;
> > > > >
> > > > > /* Scan through tbl8 to find a free (i.e. INVALID) tbl8
> group.
> > > */
> > > > > for (tbl8_gindex = 0; tbl8_gindex <
> RTE_LPM_TBL8_NUM_GROUPS;
> > > > > @@ -393,12 +402,12 @@ tbl8_alloc(struct rte_lpm_tbl8_entry *tbl8)
> > > > > tbl8_entry = &tbl8[tbl8_gindex *
> > > > > RTE_LPM_TBL8_GROUP_NUM_ENTRIES];
> > > > > /* If a free tbl8 group is found clean it and set
> as
> > > VALID.
> > > > > */
> > > > > - if (!tbl8_entry->valid_group) {
> > > > > + if (!tbl8_entry->ext_valid) {
> > > > > memset(&tbl8_entry[0], 0,
> > > > >
> RTE_LPM_TBL8_GROUP_NUM_ENTRIES
> > > *
> > > > > sizeof(tbl8_entry[0]));
> > > > >
> > > > > - tbl8_entry->valid_group = VALID;
> > > > > + tbl8_entry->ext_valid = VALID;
> > > > >
> > > > > /* Return group index for allocated tbl8
> > > group. */
> > > > > return tbl8_gindex;
> > > > > @@ -410,46 +419,50 @@ tbl8_alloc(struct rte_lpm_tbl8_entry *tbl8)
> > > > > }
> > > > >
> > > > > static inline void
> > > > > -tbl8_free(struct rte_lpm_tbl8_entry *tbl8, uint32_t
> tbl8_group_start)
> > > > > +tbl8_free(struct rte_lpm_tbl_entry *tbl8, uint32_t
> tbl8_group_start)
> > > > > {
> > > > > /* Set tbl8 group invalid*/
> > > > > - tbl8[tbl8_group_start].valid_group = INVALID;
> > > > > + tbl8[tbl8_group_start].ext_valid = INVALID;
> > > > > }
> > > > >
> > > > > static inline int32_t
> > > > > add_depth_small(struct rte_lpm *lpm, uint32_t ip, uint8_t depth,
> > > > > - uint8_t next_hop)
> > > > > + struct rte_lpm_res *res)
> > > > > {
> > > > > uint32_t tbl24_index, tbl24_range, tbl8_index,
> tbl8_group_end,
> > > i, j;
> > > > >
> > > > > /* Calculate the index into Table24. */
> > > > > tbl24_index = ip >> 8;
> > > > > tbl24_range = depth_to_range(depth);
> > > > > + struct rte_lpm_tbl_entry new_tbl_entry = {
> > > > > +#ifdef RTE_LIBRTE_LPM_ASNUM
> > > > > + .as_num = res->as_num,
> > > > > +#endif
> > > > > + .next_hop = res->next_hop,
> > > > > + .fwd_class = res->fwd_class,
> > > > > + .ext_valid = 0,
> > > > > + .depth = depth,
> > > > > + .valid = VALID,
> > > > > + };
> > > > > +
> > > > >
> > > > > for (i = tbl24_index; i < (tbl24_index + tbl24_range);
> i++) {
> > > > > /*
> > > > > * For invalid OR valid and non-extended tbl 24
> > > entries set
> > > > > * entry.
> > > > > */
> > > > > - if (!lpm->tbl24[i].valid ||
> (lpm->tbl24[i].ext_entry
> > > == 0 &&
> > > > > + if (!lpm->tbl24[i].valid ||
> (lpm->tbl24[i].ext_valid
> > > == 0 &&
> > > > > lpm->tbl24[i].depth <= depth)) {
> > > > >
> > > > > - struct rte_lpm_tbl24_entry new_tbl24_entry
> = {
> > > > > - { .next_hop = next_hop, },
> > > > > - .valid = VALID,
> > > > > - .ext_entry = 0,
> > > > > - .depth = depth,
> > > > > - };
> > > > > -
> > > > > /* Setting tbl24 entry in one go to avoid
> race
> > > > > * conditions
> > > > > */
> > > > > - lpm->tbl24[i] = new_tbl24_entry;
> > > > > + lpm->tbl24[i] = new_tbl_entry;
> > > > >
> > > > > continue;
> > > > > }
> > > > >
> > > > > - if (lpm->tbl24[i].ext_entry == 1) {
> > > > > + if (lpm->tbl24[i].ext_valid == 1) {
> > > > > /* If tbl24 entry is valid and extended
> > > calculate
> > > > > the
> > > > > * index into tbl8.
> > > > > */
> > > > > @@ -461,19 +474,14 @@ add_depth_small(struct rte_lpm *lpm, uint32_t
> > > > ip,
> > > > > uint8_t depth,
> > > > > for (j = tbl8_index; j < tbl8_group_end;
> j++) {
> > > > > if (!lpm->tbl8[j].valid ||
> > > > > lpm->tbl8[j].depth
> <=
> > > > > depth) {
> > > > > - struct rte_lpm_tbl8_entry
> > > > > - new_tbl8_entry = {
> > > > > - .valid = VALID,
> > > > > - .valid_group =
> VALID,
> > > > > - .depth = depth,
> > > > > - .next_hop =
> next_hop,
> > > > > - };
> > > > > +
> > > > > + new_tbl_entry.ext_valid =
> > > VALID;
> > > > >
> > > > > /*
> > > > > * Setting tbl8 entry in
> one
> > > go to
> > > > > avoid
> > > > > * race conditions
> > > > > */
> > > > > - lpm->tbl8[j] =
> new_tbl8_entry;
> > > > > + lpm->tbl8[j] =
> new_tbl_entry;
> > > > >
> > > > > continue;
> > > > > }
> > > > > @@ -486,7 +494,7 @@ add_depth_small(struct rte_lpm *lpm, uint32_t
> ip,
> > > > > uint8_t depth,
> > > > >
> > > > > static inline int32_t
> > > > > add_depth_big(struct rte_lpm *lpm, uint32_t ip_masked, uint8_t
> depth,
> > > > > - uint8_t next_hop)
> > > > > + struct rte_lpm_res *res)
> > > > > {
> > > > > uint32_t tbl24_index;
> > > > > int32_t tbl8_group_index, tbl8_group_start, tbl8_group_end,
> > > > > tbl8_index,
> > > > > @@ -512,7 +520,11 @@ add_depth_big(struct rte_lpm *lpm, uint32_t
> > > > ip_masked,
> > > > > uint8_t depth,
> > > > > /* Set tbl8 entry. */
> > > > > for (i = tbl8_index; i < (tbl8_index + tbl8_range);
> > > i++) {
> > > > > lpm->tbl8[i].depth = depth;
> > > > > - lpm->tbl8[i].next_hop = next_hop;
> > > > > + lpm->tbl8[i].next_hop = res->next_hop;
> > > > > + lpm->tbl8[i].fwd_class = res->fwd_class;
> > > > > +#ifdef RTE_LIBRTE_LPM_ASNUM
> > > > > + lpm->tbl8[i].as_num = res->as_num;
> > > > > +#endif
> > > > > lpm->tbl8[i].valid = VALID;
> > > > > }
> > > > >
> > > > > @@ -522,17 +534,17 @@ add_depth_big(struct rte_lpm *lpm, uint32_t
> > > > > ip_masked, uint8_t depth,
> > > > > * so assign whole structure in one go
> > > > > */
> > > > >
> > > > > - struct rte_lpm_tbl24_entry new_tbl24_entry = {
> > > > > - { .tbl8_gindex =
> (uint8_t)tbl8_group_index, },
> > > > > - .valid = VALID,
> > > > > - .ext_entry = 1,
> > > > > + struct rte_lpm_tbl_entry new_tbl24_entry = {
> > > > > + .tbl8_gindex = (uint16_t)tbl8_group_index,
> > > > > .depth = 0,
> > > > > + .ext_valid = 1,
> > > > > + .valid = VALID,
> > > > > };
> > > > >
> > > > > lpm->tbl24[tbl24_index] = new_tbl24_entry;
> > > > >
> > > > > }/* If valid entry but not extended calculate the index
> into
> > > > > Table8. */
> > > > > - else if (lpm->tbl24[tbl24_index].ext_entry == 0) {
> > > > > + else if (lpm->tbl24[tbl24_index].ext_valid == 0) {
> > > > > /* Search for free tbl8 group. */
> > > > > tbl8_group_index = tbl8_alloc(lpm->tbl8);
> > > > >
> > > > > @@ -551,6 +563,11 @@ add_depth_big(struct rte_lpm *lpm, uint32_t
> > > > ip_masked,
> > > > > uint8_t depth,
> > > > > lpm->tbl8[i].depth =
> > > lpm->tbl24[tbl24_index].depth;
> > > > > lpm->tbl8[i].next_hop =
> > > > >
> > > lpm->tbl24[tbl24_index].next_hop;
> > > > > + lpm->tbl8[i].fwd_class =
> > > > > +
> > > lpm->tbl24[tbl24_index].fwd_class;
> > > > > +#ifdef RTE_LIBRTE_LPM_ASNUM
> > > > > + lpm->tbl8[i].as_num =
> > > > > lpm->tbl24[tbl24_index].as_num;
> > > > > +#endif
> > > > > }
> > > > >
> > > > > tbl8_index = tbl8_group_start + (ip_masked & 0xFF);
> > > > > @@ -561,7 +578,11 @@ add_depth_big(struct rte_lpm *lpm, uint32_t
> > > > ip_masked,
> > > > > uint8_t depth,
> > > > > lpm->tbl8[i].depth <=
> depth) {
> > > > > lpm->tbl8[i].valid = VALID;
> > > > > lpm->tbl8[i].depth = depth;
> > > > > - lpm->tbl8[i].next_hop = next_hop;
> > > > > + lpm->tbl8[i].next_hop =
> res->next_hop;
> > > > > + lpm->tbl8[i].fwd_class =
> > > res->fwd_class;
> > > > > +#ifdef RTE_LIBRTE_LPM_ASNUM
> > > > > + lpm->tbl8[i].as_num = res->as_num;
> > > > > +#endif
> > > > >
> > > > > continue;
> > > > > }
> > > > > @@ -573,11 +594,11 @@ add_depth_big(struct rte_lpm *lpm, uint32_t
> > > > > ip_masked, uint8_t depth,
> > > > > * so assign whole structure in one go.
> > > > > */
> > > > >
> > > > > - struct rte_lpm_tbl24_entry new_tbl24_entry = {
> > > > > - { .tbl8_gindex =
> > > (uint8_t)tbl8_group_index,
> > > > > },
> > > > > - .valid = VALID,
> > > > > - .ext_entry = 1,
> > > > > + struct rte_lpm_tbl_entry new_tbl24_entry = {
> > > > > + .tbl8_gindex =
> > > (uint16_t)tbl8_group_index,
> > > > > .depth = 0,
> > > > > + .ext_valid = 1,
> > > > > + .valid = VALID,
> > > > > };
> > > > >
> > > > > lpm->tbl24[tbl24_index] = new_tbl24_entry;
> > > > > @@ -595,11 +616,15 @@ add_depth_big(struct rte_lpm *lpm, uint32_t
> > > > > ip_masked, uint8_t depth,
> > > > >
> > > > > if (!lpm->tbl8[i].valid ||
> > > > > lpm->tbl8[i].depth <=
> depth) {
> > > > > - struct rte_lpm_tbl8_entry
> > > new_tbl8_entry = {
> > > > > - .valid = VALID,
> > > > > + struct rte_lpm_tbl_entry
> > > new_tbl8_entry = {
> > > > > +#ifdef RTE_LIBRTE_LPM_ASNUM
> > > > > + .as_num = res->as_num,
> > > > > +#endif
> > > > > + .next_hop = res->next_hop,
> > > > > + .fwd_class =
> res->fwd_class,
> > > > > .depth = depth,
> > > > > - .next_hop = next_hop,
> > > > > - .valid_group =
> > > > > lpm->tbl8[i].valid_group,
> > > > > + .ext_valid =
> > > lpm->tbl8[i].ext_valid,
> > > > > + .valid = VALID,
> > > > > };
> > > > >
> > > > > /*
> > > > > @@ -621,19 +646,19 @@ add_depth_big(struct rte_lpm *lpm, uint32_t
> > > > > ip_masked, uint8_t depth,
> > > > > */
> > > > > int
> > > > > rte_lpm_add(struct rte_lpm *lpm, uint32_t ip, uint8_t depth,
> > > > > - uint8_t next_hop)
> > > > > + struct rte_lpm_res *res)
> > > > > {
> > > > > int32_t rule_index, status = 0;
> > > > > uint32_t ip_masked;
> > > > >
> > > > > /* Check user arguments. */
> > > > > - if ((lpm == NULL) || (depth < 1) || (depth >
> > > RTE_LPM_MAX_DEPTH))
> > > > > + if ((lpm == NULL) || (res == NULL) || (depth < 1) ||
> (depth >
> > > > > RTE_LPM_MAX_DEPTH))
> > > > > return -EINVAL;
> > > > >
> > > > > ip_masked = ip & depth_to_mask(depth);
> > > > >
> > > > > /* Add the rule to the rule table. */
> > > > > - rule_index = rule_add(lpm, ip_masked, depth, next_hop);
> > > > > + rule_index = rule_add(lpm, ip_masked, depth, res);
> > > > >
> > > > > /* If the is no space available for new rule return error.
> */
> > > > > if (rule_index < 0) {
> > > > > @@ -641,10 +666,10 @@ rte_lpm_add(struct rte_lpm *lpm, uint32_t ip,
> > > > uint8_t
> > > > > depth,
> > > > > }
> > > > >
> > > > > if (depth <= MAX_DEPTH_TBL24) {
> > > > > - status = add_depth_small(lpm, ip_masked, depth,
> > > next_hop);
> > > > > + status = add_depth_small(lpm, ip_masked, depth,
> res);
> > > > > }
> > > > > else { /* If depth > RTE_LPM_MAX_DEPTH_TBL24 */
> > > > > - status = add_depth_big(lpm, ip_masked, depth,
> > > next_hop);
> > > > > + status = add_depth_big(lpm, ip_masked, depth, res);
> > > > >
> > > > > /*
> > > > > * If add fails due to exhaustion of tbl8
> extensions
> > > delete
> > > > > @@ -665,14 +690,14 @@ rte_lpm_add(struct rte_lpm *lpm, uint32_t ip,
> > > > uint8_t
> > > > > depth,
> > > > > */
> > > > > int
> > > > > rte_lpm_is_rule_present(struct rte_lpm *lpm, uint32_t ip, uint8_t
> > > depth,
> > > > > -uint8_t *next_hop)
> > > > > + struct rte_lpm_res *res)
> > > > > {
> > > > > uint32_t ip_masked;
> > > > > int32_t rule_index;
> > > > >
> > > > > /* Check user arguments. */
> > > > > if ((lpm == NULL) ||
> > > > > - (next_hop == NULL) ||
> > > > > + (res == NULL) ||
> > > > > (depth < 1) || (depth > RTE_LPM_MAX_DEPTH))
> > > > > return -EINVAL;
> > > > >
> > > > > @@ -681,7 +706,11 @@ uint8_t *next_hop)
> > > > > rule_index = rule_find(lpm, ip_masked, depth);
> > > > >
> > > > > if (rule_index >= 0) {
> > > > > - *next_hop = lpm->rules_tbl[rule_index].next_hop;
> > > > > + res->next_hop =
> lpm->rules_tbl[rule_index].next_hop;
> > > > > + res->fwd_class =
> lpm->rules_tbl[rule_index].fwd_class;
> > > > > +#ifdef RTE_LIBRTE_LPM_ASNUM
> > > > > + res->as_num = lpm->rules_tbl[rule_index].as_num;
> > > > > +#endif
> > > > > return 1;
> > > > > }
> > > > >
> > > > > @@ -731,7 +760,7 @@ delete_depth_small(struct rte_lpm *lpm,
> uint32_t
> > > > > ip_masked,
> > > > > */
> > > > > for (i = tbl24_index; i < (tbl24_index +
> tbl24_range);
> > > i++)
> > > > > {
> > > > >
> > > > > - if (lpm->tbl24[i].ext_entry == 0 &&
> > > > > + if (lpm->tbl24[i].ext_valid == 0 &&
> > > > > lpm->tbl24[i].depth <=
> depth )
> > > {
> > > > > lpm->tbl24[i].valid = INVALID;
> > > > > }
> > > > > @@ -761,23 +790,30 @@ delete_depth_small(struct rte_lpm *lpm,
> > > > uint32_t
> > > > > ip_masked,
> > > > > * associated with this rule.
> > > > > */
> > > > >
> > > > > - struct rte_lpm_tbl24_entry new_tbl24_entry = {
> > > > > - {.next_hop =
> > > > > lpm->rules_tbl[sub_rule_index].next_hop,},
> > > > > - .valid = VALID,
> > > > > - .ext_entry = 0,
> > > > > + struct rte_lpm_tbl_entry new_tbl24_entry = {
> > > > > +#ifdef RTE_LIBRTE_LPM_ASNUM
> > > > > + .as_num =
> > > lpm->rules_tbl[sub_rule_index].as_num,
> > > > > +#endif
> > > > > + .next_hop =
> > > lpm->rules_tbl[sub_rule_index].next_hop,
> > > > > + .fwd_class =
> > > > > lpm->rules_tbl[sub_rule_index].fwd_class,
> > > > > .depth = sub_rule_depth,
> > > > > + .ext_valid = 0,
> > > > > + .valid = VALID,
> > > > > };
> > > > >
> > > > > - struct rte_lpm_tbl8_entry new_tbl8_entry = {
> > > > > - .valid = VALID,
> > > > > + struct rte_lpm_tbl_entry new_tbl8_entry = {
> > > > > +#ifdef RTE_LIBRTE_LPM_ASNUM
> > > > > + .as_num =
> > > lpm->rules_tbl[sub_rule_index].as_num,
> > > > > +#endif
> > > > > + .next_hop =
> > > lpm->rules_tbl[sub_rule_index].next_hop,
> > > > > + .fwd_class =
> > > > > lpm->rules_tbl[sub_rule_index].fwd_class,
> > > > > .depth = sub_rule_depth,
> > > > > - .next_hop = lpm->rules_tbl
> > > > > - [sub_rule_index].next_hop,
> > > > > + .valid = VALID,
> > > > > };
> > > > >
> > > > > for (i = tbl24_index; i < (tbl24_index +
> tbl24_range);
> > > i++)
> > > > > {
> > > > >
> > > > > - if (lpm->tbl24[i].ext_entry == 0 &&
> > > > > + if (lpm->tbl24[i].ext_valid == 0 &&
> > > > > lpm->tbl24[i].depth <=
> depth )
> > > {
> > > > > lpm->tbl24[i] = new_tbl24_entry;
> > > > > }
> > > > > @@ -814,7 +850,7 @@ delete_depth_small(struct rte_lpm *lpm,
> uint32_t
> > > > > ip_masked,
> > > > > * thus can be recycled
> > > > > */
> > > > > static inline int32_t
> > > > > -tbl8_recycle_check(struct rte_lpm_tbl8_entry *tbl8, uint32_t
> > > > > tbl8_group_start)
> > > > > +tbl8_recycle_check(struct rte_lpm_tbl_entry *tbl8, uint32_t
> > > > > tbl8_group_start)
> > > > > {
> > > > > uint32_t tbl8_group_end, i;
> > > > > tbl8_group_end = tbl8_group_start +
> > > > RTE_LPM_TBL8_GROUP_NUM_ENTRIES;
> > > > > @@ -891,11 +927,15 @@ delete_depth_big(struct rte_lpm *lpm,
> uint32_t
> > > > > ip_masked,
> > > > > }
> > > > > else {
> > > > > /* Set new tbl8 entry. */
> > > > > - struct rte_lpm_tbl8_entry new_tbl8_entry = {
> > > > > - .valid = VALID,
> > > > > - .depth = sub_rule_depth,
> > > > > - .valid_group =
> > > > > lpm->tbl8[tbl8_group_start].valid_group,
> > > > > + struct rte_lpm_tbl_entry new_tbl8_entry = {
> > > > > +#ifdef RTE_LIBRTE_LPM_ASNUM
> > > > > + .as_num =
> > > lpm->rules_tbl[sub_rule_index].as_num,
> > > > > +#endif
> > > > > + .fwd_class =
> > > > > lpm->rules_tbl[sub_rule_index].fwd_class,
> > > > > .next_hop =
> > > lpm->rules_tbl[sub_rule_index].next_hop,
> > > > > + .depth = sub_rule_depth,
> > > > > + .ext_valid =
> > > lpm->tbl8[tbl8_group_start].ext_valid,
> > > > > + .valid = VALID,
> > > > > };
> > > > >
> > > > > /*
> > > > > @@ -923,11 +963,15 @@ delete_depth_big(struct rte_lpm *lpm,
> uint32_t
> > > > > ip_masked,
> > > > > }
> > > > > else if (tbl8_recycle_index > -1) {
> > > > > /* Update tbl24 entry. */
> > > > > - struct rte_lpm_tbl24_entry new_tbl24_entry = {
> > > > > - { .next_hop =
> > > > > lpm->tbl8[tbl8_recycle_index].next_hop, },
> > > > > - .valid = VALID,
> > > > > - .ext_entry = 0,
> > > > > + struct rte_lpm_tbl_entry new_tbl24_entry = {
> > > > > +#ifdef RTE_LIBRTE_LPM_ASNUM
> > > > > + .as_num =
> lpm->tbl8[tbl8_recycle_index].as_num,
> > > > > +#endif
> > > > > + .next_hop =
> > > lpm->tbl8[tbl8_recycle_index].next_hop,
> > > > > + .fwd_class =
> > > > > lpm->tbl8[tbl8_recycle_index].fwd_class,
> > > > > .depth =
> lpm->tbl8[tbl8_recycle_index].depth,
> > > > > + .ext_valid = 0,
> > > > > + .valid = VALID,
> > > > > };
> > > > >
> > > > > /* Set tbl24 before freeing tbl8 to avoid race
> > > condition. */
> > > > > diff --git a/lib/librte_lpm/rte_lpm.h b/lib/librte_lpm/rte_lpm.h
> > > > > index c299ce2..7c615bc 100644
> > > > > --- a/lib/librte_lpm/rte_lpm.h
> > > > > +++ b/lib/librte_lpm/rte_lpm.h
> > > > > @@ -31,8 +31,8 @@
> > > > > * OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH
> > > > DAMAGE.
> > > > > */
> > > > >
> > > > > -#ifndef _RTE_LPM_H_
> > > > > -#define _RTE_LPM_H_
> > > > > +#ifndef _RTE_LPM_EXT_H_
> > > > > +#define _RTE_LPM_EXT_H_
> > > > >
> > > > > /**
> > > > > * @file
> > > > > @@ -81,57 +81,58 @@ extern "C" {
> > > > > #define RTE_LPM_RETURN_IF_TRUE(cond, retval)
> > > > > #endif
> > > > >
> > > > > -/** @internal bitmask with valid and ext_entry/valid_group fields
> set
> > > */
> > > > > -#define RTE_LPM_VALID_EXT_ENTRY_BITMASK 0x0300
> > > > > +/** @internal bitmask with valid and ext_valid/ext_valid fields
> set */
> > > > > +#define RTE_LPM_VALID_EXT_ENTRY_BITMASK 0x03
> > > > >
> > > > > /** Bitmask used to indicate successful lookup */
> > > > > -#define RTE_LPM_LOOKUP_SUCCESS 0x0100
> > > > > +#define RTE_LPM_LOOKUP_SUCCESS 0x01
> > > > > +
> > > > > +struct rte_lpm_res {
> > > > > + uint16_t next_hop;
> > > > > + uint8_t fwd_class;
> > > > > +#ifdef RTE_LIBRTE_LPM_ASNUM
> > > > > + uint32_t as_num;
> > > > > +#endif
> > > > > +};
> > > > >
> > > > > #if RTE_BYTE_ORDER == RTE_LITTLE_ENDIAN
> > > > > -/** @internal Tbl24 entry structure. */
> > > > > -struct rte_lpm_tbl24_entry {
> > > > > - /* Stores Next hop or group index (i.e. gindex)into tbl8.
> */
> > > > > +struct rte_lpm_tbl_entry {
> > > > > + uint8_t valid :1;
> > > > > + uint8_t ext_valid :1;
> > > > > + uint8_t depth :6;
> > > > > + uint8_t fwd_class;
> > > > > union {
> > > > > - uint8_t next_hop;
> > > > > - uint8_t tbl8_gindex;
> > > > > + uint16_t next_hop;
> > > > > + uint16_t tbl8_gindex;
> > > > > };
> > > > > - /* Using single uint8_t to store 3 values. */
> > > > > - uint8_t valid :1; /**< Validation flag. */
> > > > > - uint8_t ext_entry :1; /**< External entry. */
> > > > > - uint8_t depth :6; /**< Rule depth. */
> > > > > -};
> > > > > -
> > > > > -/** @internal Tbl8 entry structure. */
> > > > > -struct rte_lpm_tbl8_entry {
> > > > > - uint8_t next_hop; /**< next hop. */
> > > > > - /* Using single uint8_t to store 3 values. */
> > > > > - uint8_t valid :1; /**< Validation flag. */
> > > > > - uint8_t valid_group :1; /**< Group validation flag. */
> > > > > - uint8_t depth :6; /**< Rule depth. */
> > > > > +#ifdef RTE_LIBRTE_LPM_ASNUM
> > > > > + uint32_t as_num;
> > > > > +#endif
> > > > > };
> > > > > #else
> > > > > -struct rte_lpm_tbl24_entry {
> > > > > - uint8_t depth :6;
> > > > > - uint8_t ext_entry :1;
> > > > > - uint8_t valid :1;
> > > > > +struct rte_lpm_tbl_entry {
> > > > > +#ifdef RTE_LIBRTE_LPM_ASNUM
> > > > > + uint32_t as_num;
> > > > > +#endif
> > > > > union {
> > > > > - uint8_t tbl8_gindex;
> > > > > - uint8_t next_hop;
> > > > > + uint16_t tbl8_gindex;
> > > > > + uint16_t next_hop;
> > > > > };
> > > > > -};
> > > > > -
> > > > > -struct rte_lpm_tbl8_entry {
> > > > > - uint8_t depth :6;
> > > > > - uint8_t valid_group :1;
> > > > > - uint8_t valid :1;
> > > > > - uint8_t next_hop;
> > > > > + uint8_t fwd_class;
> > > > > + uint8_t depth :6;
> > > > > + uint8_t ext_valid :1;
> > > > > + uint8_t valid :1;
> > > > > };
> > > > > #endif
> > > > >
> > > > > /** @internal Rule structure. */
> > > > > struct rte_lpm_rule {
> > > > > uint32_t ip; /**< Rule IP address. */
> > > > > - uint8_t next_hop; /**< Rule next hop. */
> > > > > +#ifdef RTE_LIBRTE_LPM_ASNUM
> > > > > + uint32_t as_num;
> > > > > +#endif
> > > > > + uint16_t next_hop; /**< Rule next hop. */
> > > > > + uint8_t fwd_class;
> > > > > };
> > > > >
> > > > > /** @internal Contains metadata about the rules table. */
> > > > > @@ -148,9 +149,9 @@ struct rte_lpm {
> > > > > struct rte_lpm_rule_info rule_info[RTE_LPM_MAX_DEPTH]; /**<
> > > Rule
> > > > > info table. */
> > > > >
> > > > > /* LPM Tables. */
> > > > > - struct rte_lpm_tbl24_entry
> tbl24[RTE_LPM_TBL24_NUM_ENTRIES] \
> > > > > + struct rte_lpm_tbl_entry tbl24[RTE_LPM_TBL24_NUM_ENTRIES] \
> > > > > __rte_cache_aligned; /**< LPM tbl24 table.
> */
> > > > > - struct rte_lpm_tbl8_entry tbl8[RTE_LPM_TBL8_NUM_ENTRIES] \
> > > > > + struct rte_lpm_tbl_entry tbl8[RTE_LPM_TBL8_NUM_ENTRIES] \
> > > > > __rte_cache_aligned; /**< LPM tbl8 table.
> */
> > > > > struct rte_lpm_rule rules_tbl[0] \
> > > > > __rte_cache_aligned; /**< LPM rules. */
> > > > > @@ -219,7 +220,7 @@ rte_lpm_free(struct rte_lpm *lpm);
> > > > > * 0 on success, negative value otherwise
> > > > > */
> > > > > int
> > > > > -rte_lpm_add(struct rte_lpm *lpm, uint32_t ip, uint8_t depth,
> uint8_t
> > > > > next_hop);
> > > > > +rte_lpm_add(struct rte_lpm *lpm, uint32_t ip, uint8_t depth,
> struct
> > > > > rte_lpm_res *res);
> > > > >
> > > > > /**
> > > > > * Check if a rule is present in the LPM table,
> > > > > @@ -238,7 +239,7 @@ rte_lpm_add(struct rte_lpm *lpm, uint32_t ip,
> > > > uint8_t
> > > > > depth, uint8_t next_hop);
> > > > > */
> > > > > int
> > > > > rte_lpm_is_rule_present(struct rte_lpm *lpm, uint32_t ip, uint8_t
> > > depth,
> > > > > -uint8_t *next_hop);
> > > > > + struct rte_lpm_res *res);
> > > > >
> > > > > /**
> > > > > * Delete a rule from the LPM table.
> > > > > @@ -277,29 +278,43 @@ rte_lpm_delete_all(struct rte_lpm *lpm);
> > > > > * -EINVAL for incorrect arguments, -ENOENT on lookup miss, 0 on
> > > lookup
> > > > > hit
> > > > > */
> > > > > static inline int
> > > > > -rte_lpm_lookup(struct rte_lpm *lpm, uint32_t ip, uint8_t
> *next_hop)
> > > > > +rte_lpm_lookup(struct rte_lpm *lpm, uint32_t ip, struct
> rte_lpm_res
> > > *res)
> > > > > {
> > > > > unsigned tbl24_index = (ip >> 8);
> > > > > - uint16_t tbl_entry;
> > > > > -
> > > > > +#ifdef RTE_LIBRTE_LPM_ASNUM
> > > > > + uint64_t tbl_entry;
> > > > > +#else
> > > > > + uint32_t tbl_entry;
> > > > > +#endif
> > > > > /* DEBUG: Check user input arguments. */
> > > > > - RTE_LPM_RETURN_IF_TRUE(((lpm == NULL) || (next_hop ==
> NULL)),
> > > > > -EINVAL);
> > > > > + RTE_LPM_RETURN_IF_TRUE(((lpm == NULL) || (res == NULL)), -
> > > > EINVAL);
> > > > >
> > > > > /* Copy tbl24 entry */
> > > > > - tbl_entry = *(const uint16_t *)&lpm->tbl24[tbl24_index];
> > > > > -
> > > > > +#ifdef RTE_LIBRTE_LPM_ASNUM
> > > > > + tbl_entry = *(const uint64_t *)&lpm->tbl24[tbl24_index];
> > > > > +#else
> > > > > + tbl_entry = *(const uint32_t *)&lpm->tbl24[tbl24_index];
> > > > > +#endif
> > > > > /* Copy tbl8 entry (only if needed) */
> > > > > if (unlikely((tbl_entry & RTE_LPM_VALID_EXT_ENTRY_BITMASK)
> ==
> > > > > RTE_LPM_VALID_EXT_ENTRY_BITMASK)) {
> > > > >
> > > > > unsigned tbl8_index = (uint8_t)ip +
> > > > > - ((uint8_t)tbl_entry *
> > > > > RTE_LPM_TBL8_GROUP_NUM_ENTRIES);
> > > > > + ((*(struct rte_lpm_tbl_entry
> > > > > *)&tbl_entry).tbl8_gindex * RTE_LPM_TBL8_GROUP_NUM_ENTRIES);
> > > > >
> > > > > - tbl_entry = *(const uint16_t
> *)&lpm->tbl8[tbl8_index];
> > > > > +#ifdef RTE_LIBRTE_LPM_ASNUM
> > > > > + tbl_entry = *(const uint64_t
> *)&lpm->tbl8[tbl8_index];
> > > > > +#else
> > > > > + tbl_entry = *(const uint32_t
> *)&lpm->tbl8[tbl8_index];
> > > > > +#endif
> > > > > }
> > > > > -
> > > > > - *next_hop = (uint8_t)tbl_entry;
> > > > > + res->next_hop = ((struct rte_lpm_tbl_entry
> > > *)&tbl_entry)->next_hop;
> > > > > + res->fwd_class = ((struct rte_lpm_tbl_entry
> > > > > *)&tbl_entry)->fwd_class;
> > > > > +#ifdef RTE_LIBRTE_LPM_ASNUM
> > > > > + res->as_num = ((struct rte_lpm_tbl_entry
> > > > > *)&tbl_entry)->as_num;
> > > > > +#endif
> > > > > return (tbl_entry & RTE_LPM_LOOKUP_SUCCESS) ? 0 : -ENOENT;
> > > > > +
> > > > > }
> > > > >
> > > > > /**
> > > > > @@ -322,19 +337,25 @@ rte_lpm_lookup(struct rte_lpm *lpm, uint32_t
> ip,
> > > > > uint8_t *next_hop)
> > > > > * @return
> > > > > * -EINVAL for incorrect arguments, otherwise 0
> > > > > */
> > > > > -#define rte_lpm_lookup_bulk(lpm, ips, next_hops, n) \
> > > > > - rte_lpm_lookup_bulk_func(lpm, ips, next_hops, n)
> > > > > +#define rte_lpm_lookup_bulk(lpm, ips, res_tbl, n) \
> > > > > + rte_lpm_lookup_bulk_func(lpm, ips, res_tbl, n)
> > > > >
> > > > > static inline int
> > > > > -rte_lpm_lookup_bulk_func(const struct rte_lpm *lpm, const
> uint32_t *
> > > ips,
> > > > > - uint16_t * next_hops, const unsigned n)
> > > > > +rte_lpm_lookup_bulk_func(const struct rte_lpm *lpm, const uint32_t
> > > *ips,
> > > > > + struct rte_lpm_res *res_tbl, const unsigned n)
> > > > > {
> > > > > unsigned i;
> > > > > + int ret = 0;
> > > > > +#ifdef RTE_LIBRTE_LPM_ASNUM
> > > > > + uint64_t tbl_entry;
> > > > > +#else
> > > > > + uint32_t tbl_entry;
> > > > > +#endif
> > > > > unsigned tbl24_indexes[n];
> > > > >
> > > > > /* DEBUG: Check user input arguments. */
> > > > > RTE_LPM_RETURN_IF_TRUE(((lpm == NULL) || (ips == NULL) ||
> > > > > - (next_hops == NULL)), -EINVAL);
> > > > > + (res_tbl == NULL)), -EINVAL);
> > > > >
> > > > > for (i = 0; i < n; i++) {
> > > > > tbl24_indexes[i] = ips[i] >> 8;
> > > > > @@ -342,20 +363,32 @@ rte_lpm_lookup_bulk_func(const struct rte_lpm
> > > > *lpm,
> > > > > const uint32_t * ips,
> > > > >
> > > > > for (i = 0; i < n; i++) {
> > > > > /* Simply copy tbl24 entry to output */
> > > > > - next_hops[i] = *(const uint16_t
> > > > > *)&lpm->tbl24[tbl24_indexes[i]];
> > > > > -
> > > > > +#ifdef RTE_LIBRTE_LPM_ASNUM
> > > > > + tbl_entry = *(const uint64_t
> > > > > *)&lpm->tbl24[tbl24_indexes[i]];
> > > > > +#else
> > > > > + tbl_entry = *(const uint32_t
> > > > > *)&lpm->tbl24[tbl24_indexes[i]];
> > > > > +#endif
> > > > > /* Overwrite output with tbl8 entry if needed */
> > > > > - if (unlikely((next_hops[i] &
> > > > > RTE_LPM_VALID_EXT_ENTRY_BITMASK) ==
> > > > > - RTE_LPM_VALID_EXT_ENTRY_BITMASK)) {
> > > > > + if (unlikely((tbl_entry &
> > > RTE_LPM_VALID_EXT_ENTRY_BITMASK)
> > > > > ==
> > > > > + RTE_LPM_VALID_EXT_ENTRY_BITMASK)) {
> > > > >
> > > > > unsigned tbl8_index = (uint8_t)ips[i] +
> > > > > - ((uint8_t)next_hops[i] *
> > > > > -
> > > RTE_LPM_TBL8_GROUP_NUM_ENTRIES);
> > > > > + ((*(struct rte_lpm_tbl_entry
> > > > > *)&tbl_entry).tbl8_gindex * RTE_LPM_TBL8_GROUP_NUM_ENTRIES);
> > > > >
> > > > > - next_hops[i] = *(const uint16_t
> > > > > *)&lpm->tbl8[tbl8_index];
> > > > > +#ifdef RTE_LIBRTE_LPM_ASNUM
> > > > > + tbl_entry = *(const uint64_t
> > > > > *)&lpm->tbl8[tbl8_index];
> > > > > +#else
> > > > > + tbl_entry = *(const uint32_t
> > > > > *)&lpm->tbl8[tbl8_index];
> > > > > +#endif
> > > > > }
> > > > > + res_tbl[i].next_hop = ((struct
> rte_lpm_tbl_entry
> > > > > *)&tbl_entry)->next_hop;
> > > > > + res_tbl[i].fwd_class = ((struct
> rte_lpm_tbl_entry
> > > > > *)&tbl_entry)->next_hop;
> > > > > +#ifdef RTE_LIBRTE_LPM_ASNUM
> > > > > + res_tbl[i].as_num = ((struct
> rte_lpm_tbl_entry
> > > > > *)&tbl_entry)->as_num;
> > > > > +#endif
> > > > > + ret |= 1 << i;
> > > > > }
> > > > > - return 0;
> > > > > + return ret;
> > > > > }
> > > > >
> > > > > /* Mask four results. */
> > > > > @@ -477,4 +510,4 @@ rte_lpm_lookupx4(const struct rte_lpm *lpm,
> > > > __m128i ip,
> > > > > uint16_t hop[4],
> > > > > }
> > > > > #endif
> > > > >
> > > > > -#endif /* _RTE_LPM_H_ */
> > > > > +#endif /* _RTE_LPM_EXT_H_ */
> > > > >
> > > > > 2015-10-24 9:09 GMT+03:00 Matthew Hall <mhall@mhcomputing.net>:
> > > > >
> > > > > > On 10/23/15 9:20 AM, Matthew Hall wrote:
> > > > > >
> > > > > >> On Fri, Oct 23, 2015 at 03:51:48PM +0200, Michal Jastrzebski
> wrote:
> > > > > >>
> > > > > >>> From: Michal Kobylinski <michalx.kobylinski@intel.com>
> > > > > >>>
> > > > > >>> The current DPDK implementation for LPM for IPv4 and IPv6
> limits
> > > the
> > > > > >>> number of next hops to 256, as the next hop ID is an 8-bit long
> > > field.
> > > > > >>> Proposed extension increase number of next hops for IPv4 to
> 2^24
> > > and
> > > > > >>> also allows 32-bits read/write operations.
> > > > > >>>
> > > > > >>> This patchset requires additional change to rte_table library
> to
> > > meet
> > > > > >>> ABI compatibility requirements. A v2 will be sent next week.
> > > > > >>>
> > > > > >>
> > > > > >> I also have a patchset for this.
> > > > > >>
> > > > > >> I will send it out as well so we could compare.
> > > > > >>
> > > > > >> Matthew.
> > > > > >>
> > > > > >
> > > > > > Sorry about the delay; I only work on DPDK in personal time and
> not
> > > as
> > > > > > part of a job. My patchset is attached to this email.
> > > > > >
> > > > > > One possible advantage with my patchset, compared to others, is
> that
> > > the
> > > > > > space problem is fixed in both IPV4 and in IPV6, to prevent
> asymmetry
> > > > > > between these two standards, which is something I try to avoid as
> > > much
> > > > as
> > > > > > humanly possible.
> > > > > >
> > > > > > This is because my application code is green-field, so I
> absolutely
> > > don't
> > > > > > want to put any ugly hacks or incompatibilities in this code if
> I can
> > > > > > possibly avoid it.
> > > > > >
> > > > > > Otherwise, I am not necessarily as expert about rte_lpm as some
> of
> > > the
> > > > > > full-time guys, but I think with four or five of us in the thread
> > > hammering
> > > > > > out patches we will be able to create something amazing together
> and
> > > I
> > > > am
> > > > > > very very very very very happy about this.
> > > > > >
> > > > > > Matthew.
> > > > > >
> > > >
> > >
> > > Hi Vladimir,
> > > Thanks for sharing Your implementation.
> > > Could You please clarify what the as_num and fwd_class fields represent?
> > > The second issue I have is that Your patch does not apply on top of the
> > > current head. Could You check this, please?
> > >
> > > Best regards
> > > Michal
> > >
>
^ permalink raw reply [relevance 0%]
* Re: [dpdk-dev] [PATCH 4/4] doc: extend commands in testpmd and remove related ABI deprecation
2015-09-22 3:45 19% ` [dpdk-dev] [PATCH 4/4] doc: extend commands in testpmd and remove related ABI deprecation Jingjing Wu
@ 2015-10-27 7:54 7% ` Zhang, Helin
2015-10-27 9:35 4% ` Thomas Monjalon
0 siblings, 1 reply; 200+ results
From: Zhang, Helin @ 2015-10-27 7:54 UTC (permalink / raw)
To: Wu, Jingjing, dev
I am not sure if the doc updated here is what we expected or not.
Any guidance on this from ABI experts?
Regards,
Helin
> -----Original Message-----
> From: Wu, Jingjing
> Sent: Tuesday, September 22, 2015 11:46 AM
> To: dev@dpdk.org
> Cc: Wu, Jingjing; Zhang, Helin; Lu, Wenzhuo; Xu, HuilongX
> Subject: [PATCH 4/4] doc: extend commands in testpmd and remove related ABI
> deprecation
>
> Modify the doc about flow director commands to support filtering in VFs.
> Remove related ABI deprecation.
>
> Signed-off-by: Jingjing Wu <jingjing.wu@intel.com>
> ---
> doc/guides/rel_notes/deprecation.rst | 4 ----
> doc/guides/testpmd_app_ug/testpmd_funcs.rst | 12 ++++++------
> 2 files changed, 6 insertions(+), 10 deletions(-)
>
> diff --git a/doc/guides/rel_notes/deprecation.rst
> b/doc/guides/rel_notes/deprecation.rst
> index fffad80..e1a35b9 100644
> --- a/doc/guides/rel_notes/deprecation.rst
> +++ b/doc/guides/rel_notes/deprecation.rst
> @@ -24,10 +24,6 @@ Deprecation Notices
> Structures: rte_fdir_*, rte_eth_fdir.
> Enums: rte_l4type, rte_iptype.
>
> -* ABI changes are planned for struct rte_eth_fdir_flow_ext in order to support
> - flow director filtering in VF. The release 2.1 does not contain these ABI
> - changes, but release 2.2 will, and no backwards compatibility is planned.
> -
> * ABI changes are planned for struct rte_eth_fdir_filter and
> rte_eth_fdir_masks in order to support new flow director modes,
> MAC VLAN and Cloud, on x550. The MAC VLAN mode means the MAC and
> diff --git a/doc/guides/testpmd_app_ug/testpmd_funcs.rst
> b/doc/guides/testpmd_app_ug/testpmd_funcs.rst
> index aa77a91..9a0d18a 100644
> --- a/doc/guides/testpmd_app_ug/testpmd_funcs.rst
> +++ b/doc/guides/testpmd_app_ug/testpmd_funcs.rst
> @@ -1624,30 +1624,30 @@ Different NICs may have different capabilities,
> command show port fdir (port_id)
>
> flow_director_filter (port_id) (add|del|update) flow
> (ipv4-other|ipv4-frag|ipv6-other|ipv6-frag)
> src (src_ip_address) dst (dst_ip_address) vlan (vlan_value) flexbytes
> (flexbytes_value)
> -(drop|fwd) queue (queue_id) fd_id (fd_id_value)
> +(drop|fwd) pf|vf(vf_id) queue (queue_id) fd_id (fd_id_value)
>
> flow_director_filter (port_id) (add|del|update) flow
> (ipv4-tcp|ipv4-udp|ipv6-tcp|ipv6-udp)
> src (src_ip_address) (src_port) dst (dst_ip_address) (dst_port) vlan (vlan_value)
> -flexbytes (flexbytes_value) (drop|fwd) queue (queue_id) fd_id (fd_id_value)
> +flexbytes (flexbytes_value) (drop|fwd) pf|vf(vf_id) queue (queue_id)
> +fd_id (fd_id_value)
>
> flow_director_filter (port_id) (add|del|update) flow (ipv4-sctp|ipv6-sctp) src
> (src_ip_address) (src_port) dst (dst_ip_address) (dst_port) tag (verification_tag)
> -vlan (vlan_value) flexbytes (flexbytes_value) (drop|fwd) queue (queue_id)
> fd_id (fd_id_value)
> +vlan (vlan_value) flexbytes (flexbytes_value) (drop|fwd) pf|vf(vf_id)
> +queue (queue_id) fd_id (fd_id_value)
>
> flow_director_filter (port_id) (add|del|update) flow l2_payload -ether
> (ethertype) flexbytes (flexbytes_value) (drop|fwd) queue (queue_id) fd_id
> (fd_id_value)
> +ether (ethertype) flexbytes (flexbytes_value) (drop|fwd) pf|vf(vf_id)
> +queue (queue_id) fd_id (fd_id_value)
>
> For example, to add an ipv4-udp flow type filter:
>
> .. code-block:: console
>
> - testpmd> flow_director_filter 0 add flow ipv4-udp src 2.2.2.3 32 dst 2.2.2.5
> 33 vlan 0x1 flexbytes (0x88,0x48) fwd queue 1 fd_id 1
> + testpmd> flow_director_filter 0 add flow ipv4-udp src 2.2.2.3 32
> + dst 2.2.2.5 33 vlan 0x1 flexbytes (0x88,0x48) fwd pf queue 1 fd_id 1
>
> For example, add an ipv4-other flow type filter:
>
> .. code-block:: console
>
> - testpmd> flow_director_filter 0 add flow ipv4-other src 2.2.2.3 dst 2.2.2.5
> vlan 0x1 flexbytes (0x88,0x48) fwd queue 1 fd_id 1
> + testpmd> flow_director_filter 0 add flow ipv4-other src 2.2.2.3 dst
> + 2.2.2.5 vlan 0x1 flexbytes (0x88,0x48) fwd pf queue 1 fd_id 1
>
> flush_flow_director
> ~~~~~~~~~~~~~~~~~~~
> --
> 2.4.0
^ permalink raw reply [relevance 7%]
* Re: [dpdk-dev] [PATCH 4/4] doc: extend commands in testpmd and remove related ABI deprecation
2015-10-27 7:54 7% ` Zhang, Helin
@ 2015-10-27 9:35 4% ` Thomas Monjalon
2015-10-28 2:06 4% ` Wu, Jingjing
0 siblings, 1 reply; 200+ results
From: Thomas Monjalon @ 2015-10-27 9:35 UTC (permalink / raw)
To: Zhang, Helin; +Cc: dev
2015-10-27 07:54, Zhang, Helin:
> I am not sure if the doc updated here is what we expected or not.
> Any guidance on this from ABI experts?
[...]
> > doc/guides/rel_notes/deprecation.rst | 4 ----
> > doc/guides/testpmd_app_ug/testpmd_funcs.rst | 12 ++++++------
The deprecation should be removed in the patch changing the code.
And the release notes must be updated at the same time.
Thanks for the catch, Helin.
^ permalink raw reply [relevance 4%]
* [dpdk-dev] [PATCHv7 9/9] doc: release notes update for queue_info_get() and (rx|tx)_desc_limit
2015-10-22 12:06 2% ` [dpdk-dev] [PATCHv6 1/9] " Konstantin Ananyev
2015-10-27 12:51 3% ` [dpdk-dev] [PATCHv7 0/9] " Konstantin Ananyev
2015-10-27 12:51 2% ` [dpdk-dev] [PATCHv7 1/9] " Konstantin Ananyev
@ 2015-10-27 12:51 10% ` Konstantin Ananyev
2 siblings, 0 replies; 200+ results
From: Konstantin Ananyev @ 2015-10-27 12:51 UTC (permalink / raw)
To: dev
Signed-off-by: Konstantin Ananyev <konstantin.ananyev@intel.com>
---
doc/guides/rel_notes/release_2_2.rst | 13 +++++++++++++
1 file changed, 13 insertions(+)
diff --git a/doc/guides/rel_notes/release_2_2.rst b/doc/guides/rel_notes/release_2_2.rst
index de6916e..aff6306 100644
--- a/doc/guides/rel_notes/release_2_2.rst
+++ b/doc/guides/rel_notes/release_2_2.rst
@@ -11,6 +11,11 @@ New Features
* **Added vhost-user multiple queue support.**
+* **Add new API into rte_ethdev to retrieve RX/TX queue information.**
+
+ * Add the ability for the upper layer to query RX/TX queue information.
+ * Add into rte_eth_dev_info new fields to represent information about
+ RX/TX descriptors min/max/align numbers per queue for the device.
Resolved Issues
---------------
@@ -98,6 +103,11 @@ API Changes
* The devargs union field virtual is renamed to virt for C++ compatibility.
+* New functions rte_eth_rx_queue_info_get() and rte_eth_tx_queue_info_get()
+ are introduced.
+
+* New fields rx_desc_lim and tx_desc_lim are added into rte_eth_dev_info
+ structure.
ABI Changes
-----------
@@ -108,6 +118,9 @@ ABI Changes
* The ethdev flow director entries for SCTP were changed.
It was already done in 2.1 for CONFIG_RTE_NEXT_ABI.
+* New fields rx_desc_lim and tx_desc_lim were added into rte_eth_dev_info
+ structure.
+
* The mbuf structure was changed to support unified packet type.
It was already done in 2.1 for CONFIG_RTE_NEXT_ABI.
--
1.8.5.3
^ permalink raw reply [relevance 10%]
* [dpdk-dev] [PATCHv7 0/9] ethdev: add new API to retrieve RX/TX queue information
2015-10-22 12:06 2% ` [dpdk-dev] [PATCHv6 1/9] " Konstantin Ananyev
@ 2015-10-27 12:51 3% ` Konstantin Ananyev
2015-10-27 12:51 2% ` [dpdk-dev] [PATCHv7 1/9] " Konstantin Ananyev
2015-10-27 12:51 10% ` [dpdk-dev] [PATCHv7 9/9] doc: release notes update for queue_info_get() and (rx|tx)_desc_limit Konstantin Ananyev
2 siblings, 0 replies; 200+ results
From: Konstantin Ananyev @ 2015-10-27 12:51 UTC (permalink / raw)
To: dev
Add the ability for the upper layer to query:
1) configured RX/TX queue information.
2) information about RX/TX descriptors min/max/align
numbers per queue for the device.
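A minimal usage sketch (illustration only, not part of the patch set) of how
an application could consume the new query call once a port is configured;
the port/queue ids are arbitrary and error handling is trimmed:

#include <stdio.h>
#include <rte_ethdev.h>

/* Print the runtime configuration of RX queue 0 on the given port. */
static void
dump_rxq0(uint8_t port_id)
{
	struct rte_eth_rxq_info qinfo;

	/* Returns -ENOTSUP when the PMD does not implement the new op. */
	if (rte_eth_rx_queue_info_get(port_id, 0, &qinfo) != 0)
		return;

	printf("port %u rxq 0: nb_desc=%u scattered_rx=%u mempool=%s\n",
	       (unsigned)port_id, (unsigned)qinfo.nb_desc,
	       (unsigned)qinfo.scattered_rx, qinfo.mp->name);
}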
v2 changes:
- Add formal check for the qinfo input parameter.
- As suggested rename 'rx_qinfo/tx_qinfo' to 'rxq_info/txq_info'
v3 changes:
- Updated rte_ether_version.map
- Merged with latest changes
v4 changes:
- rte_ether_version.map: move new functions into DPDK_2.1 sub-space.
v5 changes:
- addressed previous code-review comments
- rte_ether_version.map: move new functions into DPDK_2.2 sub-space.
- added new fields into rte_eth_dev_info
v6 changes:
- respin to comply with latest dpdk.org
- update release_notes, section "New Features"
v7 changes:
- update release notes, sections: "API Changes", "ABI Changes"
Konstantin Ananyev (9):
ethdev: add new API to retrieve RX/TX queue information
i40e: add support for eth_(rxq|txq)_info_get and (rx|tx)_desc_lim
ixgbe: add support for eth_(rxq|txq)_info_get and (rx|tx)_desc_lim
e1000: add support for eth_(rxq|txq)_info_get and (rx|tx)_desc_lim
fm10k: add HW specific desc_lim data into dev_info
cxgbe: add HW specific desc_lim data into dev_info
vmxnet3: add HW specific desc_lim data into dev_info
testpmd: add new command to display RX/TX queue information
doc: release notes update for queue_info_get() and (rx|tx)_desc_limit
app/test-pmd/cmdline.c | 48 +++++++++++++++++++
app/test-pmd/config.c | 77 ++++++++++++++++++++++++++++++
app/test-pmd/testpmd.h | 2 +
doc/guides/rel_notes/release_2_2.rst | 13 ++++++
drivers/net/cxgbe/cxgbe_ethdev.c | 9 ++++
drivers/net/e1000/e1000_ethdev.h | 36 ++++++++++++++
drivers/net/e1000/em_ethdev.c | 14 ++++++
drivers/net/e1000/em_rxtx.c | 71 ++++++++++++++++------------
drivers/net/e1000/igb_ethdev.c | 22 +++++++++
drivers/net/e1000/igb_rxtx.c | 66 +++++++++++++++++---------
drivers/net/fm10k/fm10k_ethdev.c | 11 +++++
drivers/net/i40e/i40e_ethdev.c | 14 ++++++
drivers/net/i40e/i40e_ethdev.h | 5 ++
drivers/net/i40e/i40e_ethdev_vf.c | 12 +++++
drivers/net/i40e/i40e_rxtx.c | 37 +++++++++++++++
drivers/net/ixgbe/ixgbe_ethdev.c | 23 +++++++++
drivers/net/ixgbe/ixgbe_ethdev.h | 6 +++
drivers/net/ixgbe/ixgbe_rxtx.c | 68 +++++++++++++++++----------
drivers/net/ixgbe/ixgbe_rxtx.h | 21 +++++++++
drivers/net/vmxnet3/vmxnet3_ethdev.c | 12 +++++
lib/librte_ether/rte_ethdev.c | 68 +++++++++++++++++++++++++++
lib/librte_ether/rte_ethdev.h | 85 +++++++++++++++++++++++++++++++++-
lib/librte_ether/rte_ether_version.map | 8 ++++
23 files changed, 648 insertions(+), 80 deletions(-)
--
1.8.5.3
^ permalink raw reply [relevance 3%]
* [dpdk-dev] [PATCHv7 1/9] ethdev: add new API to retrieve RX/TX queue information
2015-10-22 12:06 2% ` [dpdk-dev] [PATCHv6 1/9] " Konstantin Ananyev
2015-10-27 12:51 3% ` [dpdk-dev] [PATCHv7 0/9] " Konstantin Ananyev
@ 2015-10-27 12:51 2% ` Konstantin Ananyev
2015-10-27 12:51 10% ` [dpdk-dev] [PATCHv7 9/9] doc: release notes update for queue_info_get() and (rx|tx)_desc_limit Konstantin Ananyev
2 siblings, 0 replies; 200+ results
From: Konstantin Ananyev @ 2015-10-27 12:51 UTC (permalink / raw)
To: dev
Add the ability for the upper layer to query RX/TX queue information.
Add new fields into rte_eth_dev_info to represent information about
RX/TX descriptors min/max/align numbers per queue for the device.
Add new structures:
struct rte_eth_rxq_info
struct rte_eth_txq_info
new functions:
rte_eth_rx_queue_info_get
rte_eth_tx_queue_info_get
into the rte_ethdev API.
Left extra free space in the queue info structures,
so extra fields could be added later without ABI breakage.
Add new fields:
rx_desc_lim
tx_desc_lim
into rte_eth_dev_info.
Signed-off-by: Konstantin Ananyev <konstantin.ananyev@intel.com>
---
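Not part of the commit itself: a short sketch of how the new rx_desc_lim
field can be used to sanitize a requested RX ring size before calling
rte_eth_rx_queue_setup(); the helper name and the simple round-down policy
are made up for the example:

#include <rte_ethdev.h>

/* Clamp a requested RX ring size to the limits reported by the PMD. */
static uint16_t
clamp_nb_rxd(uint8_t port_id, uint16_t nb_rxd)
{
	struct rte_eth_dev_info dev_info;
	const struct rte_eth_desc_lim *lim;

	rte_eth_dev_info_get(port_id, &dev_info);
	lim = &dev_info.rx_desc_lim;

	if (nb_rxd > lim->nb_max)
		nb_rxd = lim->nb_max;
	if (nb_rxd < lim->nb_min)
		nb_rxd = lim->nb_min;
	if (lim->nb_align > 1)
		nb_rxd -= nb_rxd % lim->nb_align; /* round down to alignment */

	return nb_rxd;
}

A value prepared this way will pass the new range/alignment check that
rte_eth_rx_queue_setup() performs below.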
lib/librte_ether/rte_ethdev.c | 68 +++++++++++++++++++++++++++
lib/librte_ether/rte_ethdev.h | 85 +++++++++++++++++++++++++++++++++-
lib/librte_ether/rte_ether_version.map | 8 ++++
3 files changed, 159 insertions(+), 2 deletions(-)
diff --git a/lib/librte_ether/rte_ethdev.c b/lib/librte_ether/rte_ethdev.c
index f593f6e..d18ecb5 100644
--- a/lib/librte_ether/rte_ethdev.c
+++ b/lib/librte_ether/rte_ethdev.c
@@ -1447,6 +1447,19 @@ rte_eth_rx_queue_setup(uint8_t port_id, uint16_t rx_queue_id,
return -EINVAL;
}
+ if (nb_rx_desc > dev_info.rx_desc_lim.nb_max ||
+ nb_rx_desc < dev_info.rx_desc_lim.nb_min ||
+ nb_rx_desc % dev_info.rx_desc_lim.nb_align != 0) {
+
+ PMD_DEBUG_TRACE("Invalid value for nb_rx_desc(=%hu), "
+ "should be: <= %hu, = %hu, and a product of %hu\n",
+ nb_rx_desc,
+ dev_info.rx_desc_lim.nb_max,
+ dev_info.rx_desc_lim.nb_min,
+ dev_info.rx_desc_lim.nb_align);
+ return -EINVAL;
+ }
+
if (rx_conf == NULL)
rx_conf = &dev_info.default_rxconf;
@@ -1786,11 +1799,18 @@ void
rte_eth_dev_info_get(uint8_t port_id, struct rte_eth_dev_info *dev_info)
{
struct rte_eth_dev *dev;
+ const struct rte_eth_desc_lim lim = {
+ .nb_max = UINT16_MAX,
+ .nb_min = 0,
+ .nb_align = 1,
+ };
VALID_PORTID_OR_RET(port_id);
dev = &rte_eth_devices[port_id];
memset(dev_info, 0, sizeof(struct rte_eth_dev_info));
+ dev_info->rx_desc_lim = lim;
+ dev_info->tx_desc_lim = lim;
FUNC_PTR_OR_RET(*dev->dev_ops->dev_infos_get);
(*dev->dev_ops->dev_infos_get)(dev, dev_info);
@@ -3221,6 +3241,54 @@ rte_eth_remove_tx_callback(uint8_t port_id, uint16_t queue_id,
}
int
+rte_eth_rx_queue_info_get(uint8_t port_id, uint16_t queue_id,
+ struct rte_eth_rxq_info *qinfo)
+{
+ struct rte_eth_dev *dev;
+
+ VALID_PORTID_OR_ERR_RET(port_id, -ENODEV);
+
+ if (qinfo == NULL)
+ return -EINVAL;
+
+ dev = &rte_eth_devices[port_id];
+ if (queue_id >= dev->data->nb_rx_queues) {
+ PMD_DEBUG_TRACE("Invalid RX queue_id=%d\n", queue_id);
+ return -EINVAL;
+ }
+
+ FUNC_PTR_OR_ERR_RET(*dev->dev_ops->rxq_info_get, -ENOTSUP);
+
+ memset(qinfo, 0, sizeof(*qinfo));
+ dev->dev_ops->rxq_info_get(dev, queue_id, qinfo);
+ return 0;
+}
+
+int
+rte_eth_tx_queue_info_get(uint8_t port_id, uint16_t queue_id,
+ struct rte_eth_txq_info *qinfo)
+{
+ struct rte_eth_dev *dev;
+
+ VALID_PORTID_OR_ERR_RET(port_id, -ENODEV);
+
+ if (qinfo == NULL)
+ return -EINVAL;
+
+ dev = &rte_eth_devices[port_id];
+ if (queue_id >= dev->data->nb_tx_queues) {
+ PMD_DEBUG_TRACE("Invalid TX queue_id=%d\n", queue_id);
+ return -EINVAL;
+ }
+
+ FUNC_PTR_OR_ERR_RET(*dev->dev_ops->txq_info_get, -ENOTSUP);
+
+ memset(qinfo, 0, sizeof(*qinfo));
+ dev->dev_ops->txq_info_get(dev, queue_id, qinfo);
+ return 0;
+}
+
+int
rte_eth_dev_set_mc_addr_list(uint8_t port_id,
struct ether_addr *mc_addr_set,
uint32_t nb_mc_addr)
diff --git a/lib/librte_ether/rte_ethdev.h b/lib/librte_ether/rte_ethdev.h
index 8a8c82b..4d7b6f2 100644
--- a/lib/librte_ether/rte_ethdev.h
+++ b/lib/librte_ether/rte_ethdev.h
@@ -653,6 +653,15 @@ struct rte_eth_txconf {
};
/**
+ * A structure contains information about HW descriptor ring limitations.
+ */
+struct rte_eth_desc_lim {
+ uint16_t nb_max; /**< Max allowed number of descriptors. */
+ uint16_t nb_min; /**< Min allowed number of descriptors. */
+ uint16_t nb_align; /**< Number of descriptors should be aligned to. */
+};
+
+/**
* This enum indicates the flow control mode
*/
enum rte_eth_fc_mode {
@@ -837,6 +846,8 @@ struct rte_eth_dev_info {
uint16_t vmdq_queue_base; /**< First queue ID for VMDQ pools. */
uint16_t vmdq_queue_num; /**< Queue number for VMDQ pools. */
uint16_t vmdq_pool_base; /**< First ID of VMDQ pools. */
+ struct rte_eth_desc_lim rx_desc_lim; /**< RX descriptors limits */
+ struct rte_eth_desc_lim tx_desc_lim; /**< TX descriptors limits */
};
/** Maximum name length for extended statistics counters */
@@ -854,6 +865,26 @@ struct rte_eth_xstats {
uint64_t value;
};
+/**
+ * Ethernet device RX queue information structure.
+ * Used to retrieve information about a configured queue.
+ */
+struct rte_eth_rxq_info {
+ struct rte_mempool *mp; /**< mempool used by that queue. */
+ struct rte_eth_rxconf conf; /**< queue config parameters. */
+ uint8_t scattered_rx; /**< scattered packets RX supported. */
+ uint16_t nb_desc; /**< configured number of RXDs. */
+} __rte_cache_aligned;
+
+/**
+ * Ethernet device TX queue information structure.
+ * Used to retrieve information about a configured queue.
+ */
+struct rte_eth_txq_info {
+ struct rte_eth_txconf conf; /**< queue config parameters. */
+ uint16_t nb_desc; /**< configured number of TXDs. */
+} __rte_cache_aligned;
+
struct rte_eth_dev;
struct rte_eth_dev_callback;
@@ -965,6 +996,12 @@ typedef uint32_t (*eth_rx_queue_count_t)(struct rte_eth_dev *dev,
typedef int (*eth_rx_descriptor_done_t)(void *rxq, uint16_t offset);
/**< @internal Check DD bit of specific RX descriptor */
+typedef void (*eth_rxq_info_get_t)(struct rte_eth_dev *dev,
+ uint16_t rx_queue_id, struct rte_eth_rxq_info *qinfo);
+
+typedef void (*eth_txq_info_get_t)(struct rte_eth_dev *dev,
+ uint16_t tx_queue_id, struct rte_eth_txq_info *qinfo);
+
typedef int (*mtu_set_t)(struct rte_eth_dev *dev, uint16_t mtu);
/**< @internal Set MTU. */
@@ -1301,9 +1338,13 @@ struct eth_dev_ops {
rss_hash_update_t rss_hash_update;
/** Get current RSS hash configuration. */
rss_hash_conf_get_t rss_hash_conf_get;
- eth_filter_ctrl_t filter_ctrl; /**< common filter control*/
+ eth_filter_ctrl_t filter_ctrl;
+ /**< common filter control. */
eth_set_mc_addr_list_t set_mc_addr_list; /**< set list of mcast addrs */
-
+ eth_rxq_info_get_t rxq_info_get;
+ /**< retrieve RX queue information. */
+ eth_txq_info_get_t txq_info_get;
+ /**< retrieve TX queue information. */
/** Turn IEEE1588/802.1AS timestamping on. */
eth_timesync_enable_t timesync_enable;
/** Turn IEEE1588/802.1AS timestamping off. */
@@ -3441,6 +3482,46 @@ int rte_eth_remove_tx_callback(uint8_t port_id, uint16_t queue_id,
struct rte_eth_rxtx_callback *user_cb);
/**
+ * Retrieve information about given port's RX queue.
+ *
+ * @param port_id
+ * The port identifier of the Ethernet device.
+ * @param queue_id
+ * The RX queue on the Ethernet device for which information
+ * will be retrieved.
+ * @param qinfo
+ * A pointer to a structure of type *rte_eth_rxq_info* to be filled with
+ * the information of the Ethernet device.
+ *
+ * @return
+ * - 0: Success
+ * - -ENOTSUP: routine is not supported by the device PMD.
+ * - -EINVAL: The port_id or the queue_id is out of range.
+ */
+int rte_eth_rx_queue_info_get(uint8_t port_id, uint16_t queue_id,
+ struct rte_eth_rxq_info *qinfo);
+
+/**
+ * Retrieve information about given port's TX queue.
+ *
+ * @param port_id
+ * The port identifier of the Ethernet device.
+ * @param queue_id
+ * The TX queue on the Ethernet device for which information
+ * will be retrieved.
+ * @param qinfo
+ * A pointer to a structure of type *rte_eth_txq_info* to be filled with
+ * the information of the Ethernet device.
+ *
+ * @return
+ * - 0: Success
+ * - -ENOTSUP: routine is not supported by the device PMD.
+ * - -EINVAL: The port_id or the queue_id is out of range.
+ */
+int rte_eth_tx_queue_info_get(uint8_t port_id, uint16_t queue_id,
+ struct rte_eth_txq_info *qinfo);
+
+/*
* Retrieve number of available registers for access
*
* @param port_id
diff --git a/lib/librte_ether/rte_ether_version.map b/lib/librte_ether/rte_ether_version.map
index 8345a6c..1fb4b87 100644
--- a/lib/librte_ether/rte_ether_version.map
+++ b/lib/librte_ether/rte_ether_version.map
@@ -127,3 +127,11 @@ DPDK_2.1 {
rte_eth_timesync_read_tx_timestamp;
} DPDK_2.0;
+
+DPDK_2.2 {
+ global:
+
+ rte_eth_rx_queue_info_get;
+ rte_eth_tx_queue_info_get;
+
+} DPDK_2.1;
--
1.8.5.3
^ permalink raw reply [relevance 2%]
* [dpdk-dev] [PATCH v3 01/17] mk: Introduce ARMv7 architecture
@ 2015-10-27 19:13 3% ` Jan Viktorin
1 sibling, 0 replies; 200+ results
From: Jan Viktorin @ 2015-10-27 19:13 UTC (permalink / raw)
To: dev, David Hunt, David Marchand, Ananyev, Konstantin; +Cc: Vlastimil Kosar
From: Vlastimil Kosar <kosar@rehivetech.com>
Make DPDK run on ARMv7-A architecture. This patch assumes
ARM Cortex-A9. However, it is known to work on Cortex-A7
and Cortex-A15.
Signed-off-by: Vlastimil Kosar <kosar@rehivetech.com>
Signed-off-by: Jan Viktorin <viktorin@rehivetech.com>
---
v1 -> v2:
* the -mtune parameter of GCC is configurable now
* the -mfpu=neon can be turned off
v2 -> v3: XMM_SIZE is defined in rte_vect.h in a following patch
---
config/defconfig_arm-armv7-a-linuxapp-gcc | 75 +++++++++++++++++++++++++++++++
mk/arch/arm/rte.vars.mk | 39 ++++++++++++++++
mk/machine/armv7-a/rte.vars.mk | 67 +++++++++++++++++++++++++++
3 files changed, 181 insertions(+)
create mode 100644 config/defconfig_arm-armv7-a-linuxapp-gcc
create mode 100644 mk/arch/arm/rte.vars.mk
create mode 100644 mk/machine/armv7-a/rte.vars.mk
diff --git a/config/defconfig_arm-armv7-a-linuxapp-gcc b/config/defconfig_arm-armv7-a-linuxapp-gcc
new file mode 100644
index 0000000..5a778cf
--- /dev/null
+++ b/config/defconfig_arm-armv7-a-linuxapp-gcc
@@ -0,0 +1,75 @@
+# BSD LICENSE
+#
+# Copyright (C) 2015 RehiveTech. All right reserved.
+#
+# Redistribution and use in source and binary forms, with or without
+# modification, are permitted provided that the following conditions
+# are met:
+#
+# * Redistributions of source code must retain the above copyright
+# notice, this list of conditions and the following disclaimer.
+# * Redistributions in binary form must reproduce the above copyright
+# notice, this list of conditions and the following disclaimer in
+# the documentation and/or other materials provided with the
+# distribution.
+# * Neither the name of RehiveTech nor the names of its
+# contributors may be used to endorse or promote products derived
+# from this software without specific prior written permission.
+#
+# THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
+# "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
+# LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
+# A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
+# OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
+# SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
+# LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
+# DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
+# THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
+# (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
+# OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
+
+#include "common_linuxapp"
+
+CONFIG_RTE_MACHINE="armv7-a"
+
+CONFIG_RTE_ARCH="arm"
+CONFIG_RTE_ARCH_ARM=y
+CONFIG_RTE_ARCH_ARMv7=y
+CONFIG_RTE_ARCH_ARM_TUNE="cortex-a9"
+CONFIG_RTE_ARCH_ARM_NEON=y
+
+CONFIG_RTE_TOOLCHAIN="gcc"
+CONFIG_RTE_TOOLCHAIN_GCC=y
+
+# ARM doesn't have support for vmware TSC map
+CONFIG_RTE_LIBRTE_EAL_VMWARE_TSC_MAP_SUPPORT=n
+
+# avoids using i686/x86_64 SIMD instructions, nothing for ARM
+CONFIG_RTE_BITMAP_OPTIMIZATIONS=0
+
+# KNI is not supported on 32-bit
+CONFIG_RTE_LIBRTE_KNI=n
+
+# PCI is usually not used on ARM
+CONFIG_RTE_EAL_IGB_UIO=n
+
+# fails to compile on ARM
+CONFIG_RTE_LIBRTE_ACL=n
+CONFIG_RTE_LIBRTE_LPM=n
+
+# cannot use those on ARM
+CONFIG_RTE_KNI_KMOD=n
+CONFIG_RTE_LIBRTE_EM_PMD=n
+CONFIG_RTE_LIBRTE_IGB_PMD=n
+CONFIG_RTE_LIBRTE_CXGBE_PMD=n
+CONFIG_RTE_LIBRTE_E1000_PMD=n
+CONFIG_RTE_LIBRTE_ENIC_PMD=n
+CONFIG_RTE_LIBRTE_FM10K_PMD=n
+CONFIG_RTE_LIBRTE_I40E_PMD=n
+CONFIG_RTE_LIBRTE_IXGBE_PMD=n
+CONFIG_RTE_LIBRTE_MLX4_PMD=n
+CONFIG_RTE_LIBRTE_MPIPE_PMD=n
+CONFIG_RTE_LIBRTE_VIRTIO_PMD=n
+CONFIG_RTE_LIBRTE_VMXNET3_PMD=n
+CONFIG_RTE_LIBRTE_PMD_XENVIRT=n
+CONFIG_RTE_LIBRTE_PMD_BNX2X=n
diff --git a/mk/arch/arm/rte.vars.mk b/mk/arch/arm/rte.vars.mk
new file mode 100644
index 0000000..df0c043
--- /dev/null
+++ b/mk/arch/arm/rte.vars.mk
@@ -0,0 +1,39 @@
+# BSD LICENSE
+#
+# Copyright (C) 2015 RehiveTech. All rights reserved.
+#
+# Redistribution and use in source and binary forms, with or without
+# modification, are permitted provided that the following conditions
+# are met:
+#
+# * Redistributions of source code must retain the above copyright
+# notice, this list of conditions and the following disclaimer.
+# * Redistributions in binary form must reproduce the above copyright
+# notice, this list of conditions and the following disclaimer in
+# the documentation and/or other materials provided with the
+# distribution.
+# * Neither the name of RehiveTech nor the names of its
+# contributors may be used to endorse or promote products derived
+# from this software without specific prior written permission.
+#
+# THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
+# "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
+# LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
+# A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
+# OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
+# SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
+# LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
+# DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
+# THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
+# (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
+# OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
+
+
+ARCH ?= arm
+CROSS ?=
+
+CPU_CFLAGS ?= -marm -DRTE_CACHE_LINE_SIZE=64 -munaligned-access
+CPU_LDFLAGS ?=
+CPU_ASFLAGS ?= -felf
+
+export ARCH CROSS CPU_CFLAGS CPU_LDFLAGS CPU_ASFLAGS
diff --git a/mk/machine/armv7-a/rte.vars.mk b/mk/machine/armv7-a/rte.vars.mk
new file mode 100644
index 0000000..48d3979
--- /dev/null
+++ b/mk/machine/armv7-a/rte.vars.mk
@@ -0,0 +1,67 @@
+# BSD LICENSE
+#
+# Copyright (C) 2015 RehiveTech. All rights reserved.
+#
+# Redistribution and use in source and binary forms, with or without
+# modification, are permitted provided that the following conditions
+# are met:
+#
+# * Redistributions of source code must retain the above copyright
+# notice, this list of conditions and the following disclaimer.
+# * Redistributions in binary form must reproduce the above copyright
+# notice, this list of conditions and the following disclaimer in
+# the documentation and/or other materials provided with the
+# distribution.
+# * Neither the name of RehiveTech nor the names of its
+# contributors may be used to endorse or promote products derived
+# from this software without specific prior written permission.
+#
+# THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
+# "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
+# LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
+# A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
+# OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
+# SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
+# LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
+# DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
+# THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
+# (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
+# OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
+
+#
+# machine:
+#
+# - can define ARCH variable (overridden by cmdline value)
+# - can define CROSS variable (overridden by cmdline value)
+# - define MACHINE_CFLAGS variable (overridden by cmdline value)
+# - define MACHINE_LDFLAGS variable (overridden by cmdline value)
+# - define MACHINE_ASFLAGS variable (overridden by cmdline value)
+# - can define CPU_CFLAGS variable (overridden by cmdline value) that
+# overrides the one defined in arch.
+# - can define CPU_LDFLAGS variable (overridden by cmdline value) that
+# overrides the one defined in arch.
+# - can define CPU_ASFLAGS variable (overridden by cmdline value) that
+# overrides the one defined in arch.
+# - may override any previously defined variable
+#
+
+# ARCH =
+# CROSS =
+# MACHINE_CFLAGS =
+# MACHINE_LDFLAGS =
+# MACHINE_ASFLAGS =
+# CPU_CFLAGS =
+# CPU_LDFLAGS =
+# CPU_ASFLAGS =
+
+CPU_CFLAGS += -mfloat-abi=softfp
+
+MACHINE_CFLAGS += -march=armv7-a
+
+ifdef CONFIG_RTE_ARCH_ARM_TUNE
+MACHINE_CFLAGS += -mtune=$(CONFIG_RTE_ARCH_ARM_TUNE)
+endif
+
+ifeq ($(CONFIG_RTE_ARCH_ARM_NEON),y)
+MACHINE_CFLAGS += -mfpu=neon
+endif
--
2.6.1
^ permalink raw reply [relevance 3%]
* Re: [dpdk-dev] [PATCH 4/4] doc: extend commands in testpmd and remove related ABI deprecation
2015-10-27 9:35 4% ` Thomas Monjalon
@ 2015-10-28 2:06 4% ` Wu, Jingjing
0 siblings, 0 replies; 200+ results
From: Wu, Jingjing @ 2015-10-28 2:06 UTC (permalink / raw)
To: Thomas Monjalon, Zhang, Helin; +Cc: dev
Thanks Helin and Thomas
I will update the release notes too.
> -----Original Message-----
> From: Thomas Monjalon [mailto:thomas.monjalon@6wind.com]
> Sent: Tuesday, October 27, 2015 5:36 PM
> To: Zhang, Helin
> Cc: dev@dpdk.org; Wu, Jingjing
> Subject: Re: [dpdk-dev] [PATCH 4/4] doc: extend commands in testpmd and
> remove related ABI deprecation
>
> 2015-10-27 07:54, Zhang, Helin:
> > I am not sure if the doc updated here is what we expected or not.
> > Any guidance on this from ABI experts?
> [...]
> > > doc/guides/rel_notes/deprecation.rst | 4 ----
> > > doc/guides/testpmd_app_ug/testpmd_funcs.rst | 12 ++++++------
>
> The deprecation should be removed in the patch changing the code.
> And the release notes must be updated at the same time.
> Thanks for the catch Helin.
^ permalink raw reply [relevance 4%]
* [dpdk-dev] [PATCH v2 0/4] extend flow director to support VF filtering in i40e driver
2015-09-22 3:45 3% [dpdk-dev] [PATCH 0/4] extend flow director to support VF filtering in i40e driver Jingjing Wu
2015-09-22 3:45 19% ` [dpdk-dev] [PATCH 4/4] doc: extend commands in testpmd and remove related ABI deprecation Jingjing Wu
@ 2015-10-28 8:41 3% ` Jingjing Wu
2015-10-28 8:41 20% ` [dpdk-dev] [PATCH v2 4/4] doc: extend commands in testpmd and update release note Jingjing Wu
1 sibling, 1 reply; 200+ results
From: Jingjing Wu @ 2015-10-28 8:41 UTC (permalink / raw)
To: dev; +Cc: yulong.pei
This patch set extends the flow director to support VF filtering in the i40e driver.
v2 change:
- rework the doc, including release notes and testpmd guide
Jingjing Wu (4):
ethdev: extend struct to support flow director in VFs
i40e: extend flow director to support filtering in VFs
testpmd: extend commands
doc: extend commands in testpmd and remove related ABI deprecation
app/test-pmd/cmdline.c | 41 ++++++++++++++++++++++++++---
doc/guides/rel_notes/deprecation.rst | 4 ---
doc/guides/rel_notes/release_2_2.rst | 2 ++
doc/guides/testpmd_app_ug/testpmd_funcs.rst | 15 ++++++-----
drivers/net/i40e/i40e_ethdev.c | 4 +--
drivers/net/i40e/i40e_fdir.c | 15 ++++++++---
lib/librte_ether/rte_eth_ctrl.h | 2 ++
7 files changed, 64 insertions(+), 19 deletions(-)
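For reviewers of the API side, a rough sketch of how an application would program a
VF-targeted flow director rule once this series is in. The is_vf/dst_id names are my
guess at the new rte_eth_fdir_flow_ext members and should be checked against patch 1/4;
everything else uses the existing ethdev filter API.

#include <string.h>
#include <rte_ethdev.h>

/* Sketch only: steer matching ipv4-udp packets to a queue of VF vf_id. */
static int
add_vf_fdir_rule(uint8_t port_id, uint16_t vf_id, uint16_t rx_queue)
{
	struct rte_eth_fdir_filter f;

	memset(&f, 0, sizeof(f));
	f.soft_id = 1;
	f.input.flow_type = RTE_ETH_FLOW_NONFRAG_IPV4_UDP;
	/* ... fill in f.input.flow.udp4_flow with the tuple to match ... */
	f.input.flow_ext.is_vf = 1;      /* hypothetical new field: rule targets a VF */
	f.input.flow_ext.dst_id = vf_id; /* hypothetical new field: which VF */
	f.action.behavior = RTE_ETH_FDIR_ACCEPT;
	f.action.report_status = RTE_ETH_FDIR_REPORT_ID;
	f.action.rx_queue = rx_queue;

	return rte_eth_dev_filter_ctrl(port_id, RTE_ETH_FILTER_FDIR,
			RTE_ETH_FILTER_ADD, &f);
}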
--
2.4.0
^ permalink raw reply [relevance 3%]
* [dpdk-dev] [PATCH v2 4/4] doc: extend commands in testpmd and update release note
2015-10-28 8:41 3% ` [dpdk-dev] [PATCH v2 0/4] extend flow director to support VF filtering in i40e driver Jingjing Wu
@ 2015-10-28 8:41 20% ` Jingjing Wu
0 siblings, 0 replies; 200+ results
From: Jingjing Wu @ 2015-10-28 8:41 UTC (permalink / raw)
To: dev; +Cc: yulong.pei
Update the testpmd flow director command documentation to cover filtering in VFs.
Remove the related ABI deprecation notice.
Update the release notes.
Signed-off-by: Jingjing Wu <jingjing.wu@intel.com>
---
doc/guides/rel_notes/deprecation.rst | 4 ----
doc/guides/rel_notes/release_2_2.rst | 2 ++
doc/guides/testpmd_app_ug/testpmd_funcs.rst | 15 +++++++++------
3 files changed, 11 insertions(+), 10 deletions(-)
diff --git a/doc/guides/rel_notes/deprecation.rst b/doc/guides/rel_notes/deprecation.rst
index a391ff0..cd2b80c 100644
--- a/doc/guides/rel_notes/deprecation.rst
+++ b/doc/guides/rel_notes/deprecation.rst
@@ -17,10 +17,6 @@ Deprecation Notices
imissed, ibadcrc, ibadlen, imcasts, fdirmatch, fdirmiss,
tx_pause_xon, rx_pause_xon, tx_pause_xoff, rx_pause_xoff
-* ABI changes are planned for struct rte_eth_fdir_flow_ext in order to support
- flow director filtering in VF. The release 2.1 does not contain these ABI
- changes, but release 2.2 will, and no backwards compatibility is planned.
-
* ABI changes are planned for struct rte_eth_fdir_filter and
rte_eth_fdir_masks in order to support new flow director modes,
MAC VLAN and Cloud, on x550. The MAC VLAN mode means the MAC and
diff --git a/doc/guides/rel_notes/release_2_2.rst b/doc/guides/rel_notes/release_2_2.rst
index de6916e..d934776 100644
--- a/doc/guides/rel_notes/release_2_2.rst
+++ b/doc/guides/rel_notes/release_2_2.rst
@@ -124,6 +124,8 @@ ABI Changes
* librte_cfgfile: Allow longer names and values by increasing the constants
CFG_NAME_LEN and CFG_VALUE_LEN to 64 and 256 respectively.
+* The rte_eth_fdir_flow_ext structure is changed. New fields are added to
+ support flow director filtering in VF.
Shared Library Versions
-----------------------
diff --git a/doc/guides/testpmd_app_ug/testpmd_funcs.rst b/doc/guides/testpmd_app_ug/testpmd_funcs.rst
index 71d831b..eae5249 100644
--- a/doc/guides/testpmd_app_ug/testpmd_funcs.rst
+++ b/doc/guides/testpmd_app_ug/testpmd_funcs.rst
@@ -1644,14 +1644,16 @@ Different NICs may have different capabilities, command show port fdir (port_id)
flow (ipv4-other|ipv4-frag|ipv6-other|ipv6-frag)
src (src_ip_address) dst (dst_ip_address) \
vlan (vlan_value) flexbytes (flexbytes_value) \
- (drop|fwd) queue (queue_id) fd_id (fd_id_value)
+ (drop|fwd) pf|vf(vf_id) queue (queue_id) \
+ fd_id (fd_id_value)
flow_director_filter (port_id) (add|del|update) \
flow (ipv4-tcp|ipv4-udp|ipv6-tcp|ipv6-udp) \
src (src_ip_address) (src_port) \
dst (dst_ip_address) (dst_port) \
vlan (vlan_value) flexbytes (flexbytes_value) \
- (drop|fwd) queue (queue_id) fd_id (fd_id_value)
+ (drop|fwd) pf|vf(vf_id) queue (queue_id) \
+ fd_id (fd_id_value)
flow_director_filter (port_id) (add|del|update) \
flow (ipv4-sctp|ipv6-sctp) \
@@ -1659,21 +1661,22 @@ Different NICs may have different capabilities, command show port fdir (port_id)
dst (dst_ip_address) (dst_port)
tag (verification_tag) vlan (vlan_value) \
flexbytes (flexbytes_value) (drop|fwd) \
- queue (queue_id) fd_id (fd_id_value)
+ pf|vf(vf_id) queue (queue_id) fd_id (fd_id_value)
flow_director_filter (port_id) (add|del|update) flow l2_payload \
ether (ethertype) flexbytes (flexbytes_value) \
- (drop|fwd) queue (queue_id) fd_id (fd_id_value)
+ (drop|fwd) pf|vf(vf_id) queue (queue_id)
+ fd_id (fd_id_value)
For example, to add an ipv4-udp flow type filter::
testpmd> flow_director_filter 0 add flow ipv4-udp src 2.2.2.3 32 \
- dst 2.2.2.5 33 vlan 0x1 flexbytes (0x88,0x48) fwd queue 1 fd_id 1
+ dst 2.2.2.5 33 vlan 0x1 flexbytes (0x88,0x48) fwd pf queue 1 fd_id 1
For example, add an ipv4-other flow type filter::
testpmd> flow_director_filter 0 add flow ipv4-other src 2.2.2.3 \
- dst 2.2.2.5 vlan 0x1 flexbytes (0x88,0x48) fwd queue 1 fd_id 1
+ dst 2.2.2.5 vlan 0x1 flexbytes (0x88,0x48) fwd pf queue 1 fd_id 1
flush_flow_director
~~~~~~~~~~~~~~~~~~~
--
2.4.0
^ permalink raw reply [relevance 20%]
* Re: [dpdk-dev] [PATCH] doc: Add missing new line before code block
2015-10-20 2:41 3% [dpdk-dev] [PATCH] doc: Add missing new line before code block Tetsuya Mukawa
@ 2015-10-28 9:33 0% ` Mcnamara, John
0 siblings, 0 replies; 200+ results
From: Mcnamara, John @ 2015-10-28 9:33 UTC (permalink / raw)
To: Tetsuya Mukawa, dev
> -----Original Message-----
> From: dev [mailto:dev-bounces@dpdk.org] On Behalf Of Tetsuya Mukawa
> Sent: Tuesday, October 20, 2015 3:42 AM
> To: dev@dpdk.org
> Subject: [dpdk-dev] [PATCH] doc: Add missing new line before code block
>
> The patch adds a missing new line to the "Managing ABI updates" section.
>
> Signed-off-by: Tetsuya Mukawa <mukawa@igel.co.jp>
Acked-by: John McNamara <john.mcnamara@intel.com>
^ permalink raw reply [relevance 0%]
* [dpdk-dev] [PATCH v4 0/7] librte_table: add key_mask parameter to
@ 2015-10-28 17:11 3% roy.fan.zhang
2015-10-28 17:11 6% ` [dpdk-dev] [PATCH v4 1/7] librte_table: add key_mask parameter to 8- and 16-bytes key hash parameters roy.fan.zhang
0 siblings, 1 reply; 200+ results
From: roy.fan.zhang @ 2015-10-28 17:11 UTC (permalink / raw)
To: dev
From: Fan Zhang <roy.fan.zhang@intel.com>
This patchset implements the ABI change announced for librte_table.
A key_mask parameter has been added to the hash table
parameter structures for the 8-byte and 16-byte key extendible
bucket and LRU tables.
v2:
*updated release note
v3:
*merged release note with source code patch
*fixed build error: added missing symbol to
librte_table/rte_table_version.map
v4:
*modified rte_prefetch offsets to improve hash/lru table
lookup performance.
Acked-by: Cristian Dumitrescu <cristian.dumitrescu@intel.com>
Fan Zhang (7):
librte_table: add key_mask parameter to 8- and 16-bytes key hash
parameters
librte_table: add 16 byte hash table operations with computed lookup
app/test: modify app/test_table_combined and app/test_table_tables
app/test-pipeline: modify pipeline test
example/ip_pipeline: add parse_hex_string for internal use
example/ip_pipeline/pipeline: update flow_classification pipeline
librte_table: performance improvement on rte_prefetch offset
app/test-pipeline/pipeline_hash.c | 4 +
app/test/test_table_combined.c | 5 +-
app/test/test_table_tables.c | 6 +-
doc/guides/rel_notes/deprecation.rst | 4 -
doc/guides/rel_notes/release_2_2.rst | 4 +
examples/ip_pipeline/config_parse.c | 52 +++
.../pipeline/pipeline_flow_classification_be.c | 56 ++-
examples/ip_pipeline/pipeline_be.h | 4 +
lib/librte_table/rte_table_hash.h | 20 +
lib/librte_table/rte_table_hash_ext.c | 10 +-
lib/librte_table/rte_table_hash_key16.c | 446 +++++++++++++++++++--
lib/librte_table/rte_table_hash_key32.c | 35 +-
lib/librte_table/rte_table_hash_key8.c | 105 +++--
lib/librte_table/rte_table_hash_lru.c | 10 +-
lib/librte_table/rte_table_version.map | 7 +
15 files changed, 673 insertions(+), 95 deletions(-)
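To make the new parameter concrete, below is a usage sketch for the 16-byte LRU table,
e.g. a flow classification key where three padding bytes after the protocol field are
excluded from hashing and comparison. Member names are taken from patch 1/7 and the
current rte_table_hash.h; the hash callback is only a placeholder.

#include <stdint.h>
#include <string.h>
#include <rte_table_hash.h>

/* Placeholder callback with the rte_table_hash_op_hash signature. */
static uint64_t
app_hash_key16(void *key, uint32_t key_size, uint64_t seed)
{
	uint64_t *k = key;

	(void)key_size;
	return k[0] ^ k[1] ^ seed;
}

static void *
app_create_flow_table(int socket_id, uint32_t entry_size)
{
	/* 16-byte key: proto(1) + 3 pad bytes (ignored) + IPs + ports. */
	static uint8_t key_mask[16] = {
		0xFF, 0x00, 0x00, 0x00, 0xFF, 0xFF, 0xFF, 0xFF,
		0xFF, 0xFF, 0xFF, 0xFF, 0xFF, 0xFF, 0xFF, 0xFF,
	};
	struct rte_table_hash_key16_lru_params p;

	memset(&p, 0, sizeof(p));
	p.n_entries = 1 << 16;
	p.f_hash = app_hash_key16;
	p.seed = 0;
	p.signature_offset = 0;	/* pre-computed signature in packet meta-data */
	p.key_offset = 32;	/* key location in packet meta-data */
	p.key_mask = key_mask;	/* NULL keeps the old match-all-bits behaviour */

	return rte_table_hash_key16_lru_ops.f_create(&p, socket_id, entry_size);
}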
--
2.1.0
^ permalink raw reply [relevance 3%]
* [dpdk-dev] [PATCH v4 1/7] librte_table: add key_mask parameter to 8- and 16-bytes key hash parameters
2015-10-28 17:11 3% [dpdk-dev] [PATCH v4 0/7] librte_table: add key_mask parameter to roy.fan.zhang
@ 2015-10-28 17:11 6% ` roy.fan.zhang
0 siblings, 0 replies; 200+ results
From: roy.fan.zhang @ 2015-10-28 17:11 UTC (permalink / raw)
To: dev
From: Fan Zhang <roy.fan.zhang@intel.com>
This patch implements the ABI change proposed for librte_table.
The key_mask parameter is added for the 8-byte and 16-byte
key extendible bucket and LRU tables. The release notes
are updated and the deprecation notice is removed.
Signed-off-by: Fan Zhang <roy.fan.zhang@intel.com>
Signed-off-by: Jasvinder Singh <jasvinder.singh@intel.com>
---
doc/guides/rel_notes/deprecation.rst | 4 ---
doc/guides/rel_notes/release_2_2.rst | 4 +++
lib/librte_table/rte_table_hash.h | 12 ++++++++
lib/librte_table/rte_table_hash_key16.c | 52 ++++++++++++++++++++++++++-----
lib/librte_table/rte_table_hash_key8.c | 54 +++++++++++++++++++++++++++------
lib/librte_table/rte_table_version.map | 7 +++++
6 files changed, 112 insertions(+), 21 deletions(-)
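For review, a minimal usage sketch of the new field on the 8-byte LRU table. Only
key_mask is new here; the other members and the placeholder hash callback follow the
existing rte_table_hash.h definitions, so please double-check names there. In this
sketch only the first four key bytes take part in the lookup:

#include <stdint.h>
#include <string.h>
#include <rte_table_hash.h>

static uint64_t
app_hash_key8(void *key, uint32_t key_size, uint64_t seed)
{
	(void)key_size;
	return *(uint64_t *)key ^ seed;	/* placeholder hash */
}

static void *
app_create_masked_key8_lru(int socket_id, uint32_t entry_size)
{
	static uint8_t mask[8] = { 0xFF, 0xFF, 0xFF, 0xFF, 0, 0, 0, 0 };
	struct rte_table_hash_key8_lru_params p;

	memset(&p, 0, sizeof(p));
	p.n_entries = 1 << 16;
	p.f_hash = app_hash_key8;
	p.seed = 0;
	p.signature_offset = 0;	/* pre-computed signature in packet meta-data */
	p.key_offset = 32;	/* key location in packet meta-data */
	p.key_mask = mask;	/* NULL = AND with all-ones, i.e. the old behaviour */

	return rte_table_hash_key8_lru_ops.f_create(&p, socket_id, entry_size);
}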
diff --git a/doc/guides/rel_notes/deprecation.rst b/doc/guides/rel_notes/deprecation.rst
index a391ff0..16ec9f8 100644
--- a/doc/guides/rel_notes/deprecation.rst
+++ b/doc/guides/rel_notes/deprecation.rst
@@ -44,10 +44,6 @@ Deprecation Notices
* librte_table: New functions for table entry bulk add/delete will be added
to the table operations structure.
-* librte_table hash: Key mask parameter will be added to the hash table
- parameter structure for 8-byte key and 16-byte key extendible bucket and
- LRU tables.
-
* librte_pipeline: The prototype for the pipeline input port, output port
and table action handlers will be updated:
the pipeline parameter will be added, the packets mask parameter will be
diff --git a/doc/guides/rel_notes/release_2_2.rst b/doc/guides/rel_notes/release_2_2.rst
index 128f956..7beba40 100644
--- a/doc/guides/rel_notes/release_2_2.rst
+++ b/doc/guides/rel_notes/release_2_2.rst
@@ -132,6 +132,10 @@ ABI Changes
* librte_cfgfile: Allow longer names and values by increasing the constants
CFG_NAME_LEN and CFG_VALUE_LEN to 64 and 256 respectively.
+* librte_table hash: The key mask parameter is added to the hash table
+ parameter structure for 8-byte key and 16-byte key extendible bucket
+ and LRU tables.
+
Shared Library Versions
-----------------------
diff --git a/lib/librte_table/rte_table_hash.h b/lib/librte_table/rte_table_hash.h
index 9181942..e2c60e1 100644
--- a/lib/librte_table/rte_table_hash.h
+++ b/lib/librte_table/rte_table_hash.h
@@ -196,6 +196,9 @@ struct rte_table_hash_key8_lru_params {
/** Byte offset within packet meta-data where the key is located */
uint32_t key_offset;
+
+ /** Bit-mask to be AND-ed to the key on lookup */
+ uint8_t *key_mask;
};
/** LRU hash table operations for pre-computed key signature */
@@ -226,6 +229,9 @@ struct rte_table_hash_key8_ext_params {
/** Byte offset within packet meta-data where the key is located */
uint32_t key_offset;
+
+ /** Bit-mask to be AND-ed to the key on lookup */
+ uint8_t *key_mask;
};
/** Extendible bucket hash table operations for pre-computed key signature */
@@ -257,6 +263,9 @@ struct rte_table_hash_key16_lru_params {
/** Byte offset within packet meta-data where the key is located */
uint32_t key_offset;
+
+ /** Bit-mask to be AND-ed to the key on lookup */
+ uint8_t *key_mask;
};
/** LRU hash table operations for pre-computed key signature */
@@ -284,6 +293,9 @@ struct rte_table_hash_key16_ext_params {
/** Byte offset within packet meta-data where the key is located */
uint32_t key_offset;
+
+ /** Bit-mask to be AND-ed to the key on lookup */
+ uint8_t *key_mask;
};
/** Extendible bucket operations for pre-computed key signature */
diff --git a/lib/librte_table/rte_table_hash_key16.c b/lib/librte_table/rte_table_hash_key16.c
index f6a3306..0d6cc55 100644
--- a/lib/librte_table/rte_table_hash_key16.c
+++ b/lib/librte_table/rte_table_hash_key16.c
@@ -85,6 +85,7 @@ struct rte_table_hash {
uint32_t bucket_size;
uint32_t signature_offset;
uint32_t key_offset;
+ uint64_t key_mask[2];
rte_table_hash_op_hash f_hash;
uint64_t seed;
@@ -164,6 +165,14 @@ rte_table_hash_create_key16_lru(void *params,
f->f_hash = p->f_hash;
f->seed = p->seed;
+ if (p->key_mask != NULL) {
+ f->key_mask[0] = ((uint64_t *)p->key_mask)[0];
+ f->key_mask[1] = ((uint64_t *)p->key_mask)[1];
+ } else {
+ f->key_mask[0] = 0xFFFFFFFFFFFFFFFFLLU;
+ f->key_mask[1] = 0xFFFFFFFFFFFFFFFFLLU;
+ }
+
for (i = 0; i < n_buckets; i++) {
struct rte_bucket_4_16 *bucket;
@@ -384,6 +393,14 @@ rte_table_hash_create_key16_ext(void *params,
for (i = 0; i < n_buckets_ext; i++)
f->stack[i] = i;
+ if (p->key_mask != NULL) {
+ f->key_mask[0] = (((uint64_t *)p->key_mask)[0]);
+ f->key_mask[1] = (((uint64_t *)p->key_mask)[1]);
+ } else {
+ f->key_mask[0] = 0xFFFFFFFFFFFFFFFFLLU;
+ f->key_mask[1] = 0xFFFFFFFFFFFFFFFFLLU;
+ }
+
return f;
}
@@ -609,11 +626,14 @@ rte_table_hash_entry_delete_key16_ext(
void *a; \
uint64_t pkt_mask; \
uint64_t *key; \
+ uint64_t hash_key_buffer[2]; \
uint32_t pos; \
\
key = RTE_MBUF_METADATA_UINT64_PTR(mbuf2, f->key_offset);\
+ hash_key_buffer[0] = key[0] & f->key_mask[0]; \
+ hash_key_buffer[1] = key[1] & f->key_mask[1]; \
\
- lookup_key16_cmp(key, bucket2, pos); \
+ lookup_key16_cmp(hash_key_buffer, bucket2, pos); \
\
pkt_mask = (bucket2->signature[pos] & 1LLU) << pkt2_index;\
pkts_mask_out |= pkt_mask; \
@@ -631,11 +651,14 @@ rte_table_hash_entry_delete_key16_ext(
void *a; \
uint64_t pkt_mask, bucket_mask; \
uint64_t *key; \
+ uint64_t hash_key_buffer[2]; \
uint32_t pos; \
\
key = RTE_MBUF_METADATA_UINT64_PTR(mbuf2, f->key_offset);\
+ hash_key_buffer[0] = key[0] & f->key_mask[0]; \
+ hash_key_buffer[1] = key[1] & f->key_mask[1]; \
\
- lookup_key16_cmp(key, bucket2, pos); \
+ lookup_key16_cmp(hash_key_buffer, bucket2, pos); \
\
pkt_mask = (bucket2->signature[pos] & 1LLU) << pkt2_index;\
pkts_mask_out |= pkt_mask; \
@@ -658,12 +681,15 @@ rte_table_hash_entry_delete_key16_ext(
void *a; \
uint64_t pkt_mask, bucket_mask; \
uint64_t *key; \
+ uint64_t hash_key_buffer[2]; \
uint32_t pos; \
\
bucket = buckets[pkt_index]; \
key = keys[pkt_index]; \
+ hash_key_buffer[0] = key[0] & f->key_mask[0]; \
+ hash_key_buffer[1] = key[1] & f->key_mask[1]; \
\
- lookup_key16_cmp(key, bucket, pos); \
+ lookup_key16_cmp(hash_key_buffer, bucket, pos); \
\
pkt_mask = (bucket->signature[pos] & 1LLU) << pkt_index;\
pkts_mask_out |= pkt_mask; \
@@ -749,13 +775,19 @@ rte_table_hash_entry_delete_key16_ext(
void *a20, *a21; \
uint64_t pkt20_mask, pkt21_mask; \
uint64_t *key20, *key21; \
+ uint64_t hash_key_buffer20[2]; \
+ uint64_t hash_key_buffer21[2]; \
uint32_t pos20, pos21; \
\
key20 = RTE_MBUF_METADATA_UINT64_PTR(mbuf20, f->key_offset);\
key21 = RTE_MBUF_METADATA_UINT64_PTR(mbuf21, f->key_offset);\
+ hash_key_buffer20[0] = key20[0] & f->key_mask[0]; \
+ hash_key_buffer20[1] = key20[1] & f->key_mask[1]; \
+ hash_key_buffer21[0] = key21[0] & f->key_mask[0]; \
+ hash_key_buffer21[1] = key21[1] & f->key_mask[1]; \
\
- lookup_key16_cmp(key20, bucket20, pos20); \
- lookup_key16_cmp(key21, bucket21, pos21); \
+ lookup_key16_cmp(hash_key_buffer20, bucket20, pos20); \
+ lookup_key16_cmp(hash_key_buffer21, bucket21, pos21); \
\
pkt20_mask = (bucket20->signature[pos20] & 1LLU) << pkt20_index;\
pkt21_mask = (bucket21->signature[pos21] & 1LLU) << pkt21_index;\
@@ -778,13 +810,19 @@ rte_table_hash_entry_delete_key16_ext(
void *a20, *a21; \
uint64_t pkt20_mask, pkt21_mask, bucket20_mask, bucket21_mask;\
uint64_t *key20, *key21; \
+ uint64_t hash_key_buffer20[2]; \
+ uint64_t hash_key_buffer21[2]; \
uint32_t pos20, pos21; \
\
key20 = RTE_MBUF_METADATA_UINT64_PTR(mbuf20, f->key_offset);\
key21 = RTE_MBUF_METADATA_UINT64_PTR(mbuf21, f->key_offset);\
+ hash_key_buffer20[0] = key20[0] & f->key_mask[0]; \
+ hash_key_buffer20[1] = key20[1] & f->key_mask[1]; \
+ hash_key_buffer21[0] = key21[0] & f->key_mask[0]; \
+ hash_key_buffer21[1] = key21[1] & f->key_mask[1]; \
\
- lookup_key16_cmp(key20, bucket20, pos20); \
- lookup_key16_cmp(key21, bucket21, pos21); \
+ lookup_key16_cmp(hash_key_buffer20, bucket20, pos20); \
+ lookup_key16_cmp(hash_key_buffer21, bucket21, pos21); \
\
pkt20_mask = (bucket20->signature[pos20] & 1LLU) << pkt20_index;\
pkt21_mask = (bucket21->signature[pos21] & 1LLU) << pkt21_index;\
diff --git a/lib/librte_table/rte_table_hash_key8.c b/lib/librte_table/rte_table_hash_key8.c
index b351a49..ccb20cf 100644
--- a/lib/librte_table/rte_table_hash_key8.c
+++ b/lib/librte_table/rte_table_hash_key8.c
@@ -82,6 +82,7 @@ struct rte_table_hash {
uint32_t bucket_size;
uint32_t signature_offset;
uint32_t key_offset;
+ uint64_t key_mask;
rte_table_hash_op_hash f_hash;
uint64_t seed;
@@ -160,6 +161,11 @@ rte_table_hash_create_key8_lru(void *params, int socket_id, uint32_t entry_size)
f->f_hash = p->f_hash;
f->seed = p->seed;
+ if (p->key_mask != NULL)
+ f->key_mask = ((uint64_t *)p->key_mask)[0];
+ else
+ f->key_mask = 0xFFFFFFFFFFFFFFFFLLU;
+
for (i = 0; i < n_buckets; i++) {
struct rte_bucket_4_8 *bucket;
@@ -372,6 +378,11 @@ rte_table_hash_create_key8_ext(void *params, int socket_id, uint32_t entry_size)
f->stack = (uint32_t *)
&f->memory[(n_buckets + n_buckets_ext) * f->bucket_size];
+ if (p->key_mask != NULL)
+ f->key_mask = ((uint64_t *)p->key_mask)[0];
+ else
+ f->key_mask = 0xFFFFFFFFFFFFFFFFLLU;
+
for (i = 0; i < n_buckets_ext; i++)
f->stack[i] = i;
@@ -586,9 +597,12 @@ rte_table_hash_entry_delete_key8_ext(
uint64_t *key; \
uint64_t signature; \
uint32_t bucket_index; \
+ uint64_t hash_key_buffer; \
\
key = RTE_MBUF_METADATA_UINT64_PTR(mbuf1, f->key_offset);\
- signature = f->f_hash(key, RTE_TABLE_HASH_KEY_SIZE, f->seed);\
+ hash_key_buffer = *key & f->key_mask; \
+ signature = f->f_hash(&hash_key_buffer, \
+ RTE_TABLE_HASH_KEY_SIZE, f->seed); \
bucket_index = signature & (f->n_buckets - 1); \
bucket1 = (struct rte_bucket_4_8 *) \
&f->memory[bucket_index * f->bucket_size]; \
@@ -602,10 +616,12 @@ rte_table_hash_entry_delete_key8_ext(
uint64_t pkt_mask; \
uint64_t *key; \
uint32_t pos; \
+ uint64_t hash_key_buffer; \
\
key = RTE_MBUF_METADATA_UINT64_PTR(mbuf2, f->key_offset);\
+ hash_key_buffer = key[0] & f->key_mask; \
\
- lookup_key8_cmp(key, bucket2, pos); \
+ lookup_key8_cmp((&hash_key_buffer), bucket2, pos); \
\
pkt_mask = ((bucket2->signature >> pos) & 1LLU) << pkt2_index;\
pkts_mask_out |= pkt_mask; \
@@ -624,10 +640,12 @@ rte_table_hash_entry_delete_key8_ext(
uint64_t pkt_mask, bucket_mask; \
uint64_t *key; \
uint32_t pos; \
+ uint64_t hash_key_buffer; \
\
key = RTE_MBUF_METADATA_UINT64_PTR(mbuf2, f->key_offset);\
+ hash_key_buffer = *key & f->key_mask; \
\
- lookup_key8_cmp(key, bucket2, pos); \
+ lookup_key8_cmp((&hash_key_buffer), bucket2, pos); \
\
pkt_mask = ((bucket2->signature >> pos) & 1LLU) << pkt2_index;\
pkts_mask_out |= pkt_mask; \
@@ -651,11 +669,13 @@ rte_table_hash_entry_delete_key8_ext(
uint64_t pkt_mask, bucket_mask; \
uint64_t *key; \
uint32_t pos; \
+ uint64_t hash_key_buffer; \
\
bucket = buckets[pkt_index]; \
key = keys[pkt_index]; \
+ hash_key_buffer = (*key) & f->key_mask; \
\
- lookup_key8_cmp(key, bucket, pos); \
+ lookup_key8_cmp((&hash_key_buffer), bucket, pos); \
\
pkt_mask = ((bucket->signature >> pos) & 1LLU) << pkt_index;\
pkts_mask_out |= pkt_mask; \
@@ -736,6 +756,8 @@ rte_table_hash_entry_delete_key8_ext(
#define lookup2_stage1_dosig(mbuf10, mbuf11, bucket10, bucket11, f)\
{ \
uint64_t *key10, *key11; \
+ uint64_t hash_offset_buffer10; \
+ uint64_t hash_offset_buffer11; \
uint64_t signature10, signature11; \
uint32_t bucket10_index, bucket11_index; \
rte_table_hash_op_hash f_hash = f->f_hash; \
@@ -744,14 +766,18 @@ rte_table_hash_entry_delete_key8_ext(
\
key10 = RTE_MBUF_METADATA_UINT64_PTR(mbuf10, key_offset);\
key11 = RTE_MBUF_METADATA_UINT64_PTR(mbuf11, key_offset);\
+ hash_offset_buffer10 = *key10 & f->key_mask; \
+ hash_offset_buffer11 = *key11 & f->key_mask; \
\
- signature10 = f_hash(key10, RTE_TABLE_HASH_KEY_SIZE, seed);\
+ signature10 = f_hash(&hash_offset_buffer10, \
+ RTE_TABLE_HASH_KEY_SIZE, seed); \
bucket10_index = signature10 & (f->n_buckets - 1); \
bucket10 = (struct rte_bucket_4_8 *) \
&f->memory[bucket10_index * f->bucket_size]; \
rte_prefetch0(bucket10); \
\
- signature11 = f_hash(key11, RTE_TABLE_HASH_KEY_SIZE, seed);\
+ signature11 = f_hash(&hash_offset_buffer11, \
+ RTE_TABLE_HASH_KEY_SIZE, seed); \
bucket11_index = signature11 & (f->n_buckets - 1); \
bucket11 = (struct rte_bucket_4_8 *) \
&f->memory[bucket11_index * f->bucket_size]; \
@@ -764,13 +790,17 @@ rte_table_hash_entry_delete_key8_ext(
void *a20, *a21; \
uint64_t pkt20_mask, pkt21_mask; \
uint64_t *key20, *key21; \
+ uint64_t hash_offset_buffer20; \
+ uint64_t hash_offset_buffer21; \
uint32_t pos20, pos21; \
\
key20 = RTE_MBUF_METADATA_UINT64_PTR(mbuf20, f->key_offset);\
key21 = RTE_MBUF_METADATA_UINT64_PTR(mbuf21, f->key_offset);\
+ hash_offset_buffer20 = *key20 & f->key_mask; \
+ hash_offset_buffer21 = *key21 & f->key_mask; \
\
- lookup_key8_cmp(key20, bucket20, pos20); \
- lookup_key8_cmp(key21, bucket21, pos21); \
+ lookup_key8_cmp((&hash_offset_buffer20), bucket20, pos20);\
+ lookup_key8_cmp((&hash_offset_buffer21), bucket21, pos21);\
\
pkt20_mask = ((bucket20->signature >> pos20) & 1LLU) << pkt20_index;\
pkt21_mask = ((bucket21->signature >> pos21) & 1LLU) << pkt21_index;\
@@ -793,13 +823,17 @@ rte_table_hash_entry_delete_key8_ext(
void *a20, *a21; \
uint64_t pkt20_mask, pkt21_mask, bucket20_mask, bucket21_mask;\
uint64_t *key20, *key21; \
+ uint64_t hash_offset_buffer20; \
+ uint64_t hash_offset_buffer21; \
uint32_t pos20, pos21; \
\
key20 = RTE_MBUF_METADATA_UINT64_PTR(mbuf20, f->key_offset);\
key21 = RTE_MBUF_METADATA_UINT64_PTR(mbuf21, f->key_offset);\
+ hash_offset_buffer20 = *key20 & f->key_mask; \
+ hash_offset_buffer21 = *key21 & f->key_mask; \
\
- lookup_key8_cmp(key20, bucket20, pos20); \
- lookup_key8_cmp(key21, bucket21, pos21); \
+ lookup_key8_cmp((&hash_offset_buffer20), bucket20, pos20);\
+ lookup_key8_cmp((&hash_offset_buffer21), bucket21, pos21);\
\
pkt20_mask = ((bucket20->signature >> pos20) & 1LLU) << pkt20_index;\
pkt21_mask = ((bucket21->signature >> pos21) & 1LLU) << pkt21_index;\
diff --git a/lib/librte_table/rte_table_version.map b/lib/librte_table/rte_table_version.map
index d33f926..2138698 100644
--- a/lib/librte_table/rte_table_version.map
+++ b/lib/librte_table/rte_table_version.map
@@ -19,3 +19,10 @@ DPDK_2.0 {
local: *;
};
+
+DPDK_2.2 {
+ global:
+
+ rte_table_hash_key16_ext_dosig_ops;
+
+} DPDK_2.0;
--
2.1.0
^ permalink raw reply [relevance 6%]
* Re: [dpdk-dev] [PATCH v6 1/3] i40e: RSS/FD granularity configuration
@ 2015-10-29 9:38 3% ` Bruce Richardson
0 siblings, 0 replies; 200+ results
From: Bruce Richardson @ 2015-10-29 9:38 UTC (permalink / raw)
To: Helin Zhang; +Cc: dev
On Thu, Oct 29, 2015 at 02:02:50PM +0800, Helin Zhang wrote:
> The default input set of fields of a received packet is loaded from
> firmware and cannot be modified, even if users want to use different
> fields for RSS or flow director. This patch adds more flexibility in
> selecting the packet fields used for hash calculation or flow director.
>
> Signed-off-by: Helin Zhang <helin.zhang@intel.com>
> Signed-off-by: Andrey Chilikin <andrey.chilikin@intel.com>
> ---
> drivers/net/i40e/i40e_ethdev.c | 742 ++++++++++++++++++++++++++++++++++++++++
> drivers/net/i40e/i40e_ethdev.h | 7 +
> drivers/net/i40e/i40e_fdir.c | 31 ++
> lib/librte_ether/rte_eth_ctrl.h | 114 +++++-
> 4 files changed, 890 insertions(+), 4 deletions(-)
>
<snip>
> @@ -672,6 +776,8 @@ struct rte_eth_hash_filter_info {
> uint8_t enable;
> /** Global configurations of hash filter */
> struct rte_eth_hash_global_conf global_conf;
> + /** Global configurations of hash filter input set */
> + struct rte_eth_input_set_conf input_set_conf;
> } info;
> };
>
Hi Helin,
Just to check: Does this change affect the size of the structure and cause ABI
issues?
/Bruce
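For context, the ABI question boils down to whether sizeof(struct
rte_eth_hash_filter_info) grows: the structure is passed by pointer through
rte_eth_dev_filter_ctrl(), so old and new binaries have to agree on its size.
A toy illustration of the concern (not the real DPDK definitions):

#include <stdio.h>

struct info_old { unsigned char enable; };
struct info_new {
	unsigned char enable;
	unsigned int input_set[32];	/* stands in for the new input_set_conf */
};

int main(void)
{
	/* If the new member enlarges the info field, the enclosing
	 * structure grows, and that size is part of the ABI. */
	printf("old=%zu new=%zu\n",
		sizeof(struct info_old), sizeof(struct info_new));
	return 0;
}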
^ permalink raw reply [relevance 3%]
* [dpdk-dev] [PATCH v4 14/15] mk: Introduce ARMv7 architecture
@ 2015-10-29 12:43 3% ` Jan Viktorin
0 siblings, 0 replies; 200+ results
From: Jan Viktorin @ 2015-10-29 12:43 UTC (permalink / raw)
To: David Hunt, David Marchand; +Cc: Vlastimil Kosar, dev
From: Vlastimil Kosar <kosar@rehivetech.com>
Make DPDK run on the ARMv7-A architecture. This patch assumes
an ARM Cortex-A9, but it is known to work on Cortex-A7
and Cortex-A15 as well.
Signed-off-by: Vlastimil Kosar <kosar@rehivetech.com>
Signed-off-by: Jan Viktorin <viktorin@rehivetech.com>
---
v2:
* the -mtune parameter of GCC is configurable now
* the -mfpu=neon can be turned off
v3: XMM_SIZE is defined in rte_vect.h in a following patch
v4:
* update release notes for 2.2
* get rid of CONFIG_RTE_BITMAP_OPTIMIZATIONS=0 setting
* rename arm defconfig: "armv7-a" -> "armv7a"
* disable pipeline and table modules unless lpm is fixed
---
config/defconfig_arm-armv7a-linuxapp-gcc | 74 ++++++++++++++++++++++++++++++++
doc/guides/rel_notes/release_2_2.rst | 5 +++
mk/arch/arm/rte.vars.mk | 39 +++++++++++++++++
mk/machine/armv7-a/rte.vars.mk | 67 +++++++++++++++++++++++++++++
4 files changed, 185 insertions(+)
create mode 100644 config/defconfig_arm-armv7a-linuxapp-gcc
create mode 100644 mk/arch/arm/rte.vars.mk
create mode 100644 mk/machine/armv7-a/rte.vars.mk
diff --git a/config/defconfig_arm-armv7a-linuxapp-gcc b/config/defconfig_arm-armv7a-linuxapp-gcc
new file mode 100644
index 0000000..d623222
--- /dev/null
+++ b/config/defconfig_arm-armv7a-linuxapp-gcc
@@ -0,0 +1,74 @@
+# BSD LICENSE
+#
+# Copyright (C) 2015 RehiveTech. All right reserved.
+#
+# Redistribution and use in source and binary forms, with or without
+# modification, are permitted provided that the following conditions
+# are met:
+#
+# * Redistributions of source code must retain the above copyright
+# notice, this list of conditions and the following disclaimer.
+# * Redistributions in binary form must reproduce the above copyright
+# notice, this list of conditions and the following disclaimer in
+# the documentation and/or other materials provided with the
+# distribution.
+# * Neither the name of RehiveTech nor the names of its
+# contributors may be used to endorse or promote products derived
+# from this software without specific prior written permission.
+#
+# THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
+# "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
+# LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
+# A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
+# OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
+# SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
+# LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
+# DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
+# THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
+# (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
+# OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
+
+#include "common_linuxapp"
+
+CONFIG_RTE_MACHINE="armv7-a"
+
+CONFIG_RTE_ARCH="arm"
+CONFIG_RTE_ARCH_ARM=y
+CONFIG_RTE_ARCH_ARMv7=y
+CONFIG_RTE_ARCH_ARM_TUNE="cortex-a9"
+CONFIG_RTE_ARCH_ARM_NEON=y
+
+CONFIG_RTE_TOOLCHAIN="gcc"
+CONFIG_RTE_TOOLCHAIN_GCC=y
+
+# ARM doesn't have support for vmware TSC map
+CONFIG_RTE_LIBRTE_EAL_VMWARE_TSC_MAP_SUPPORT=n
+
+# KNI is not supported on 32-bit
+CONFIG_RTE_LIBRTE_KNI=n
+
+# PCI is usually not used on ARM
+CONFIG_RTE_EAL_IGB_UIO=n
+
+# fails to compile on ARM
+CONFIG_RTE_LIBRTE_ACL=n
+CONFIG_RTE_LIBRTE_LPM=n
+CONFIG_RTE_LIBRTE_TABLE=n
+CONFIG_RTE_LIBRTE_PIPELINE=n
+
+# cannot use those on ARM
+CONFIG_RTE_KNI_KMOD=n
+CONFIG_RTE_LIBRTE_EM_PMD=n
+CONFIG_RTE_LIBRTE_IGB_PMD=n
+CONFIG_RTE_LIBRTE_CXGBE_PMD=n
+CONFIG_RTE_LIBRTE_E1000_PMD=n
+CONFIG_RTE_LIBRTE_ENIC_PMD=n
+CONFIG_RTE_LIBRTE_FM10K_PMD=n
+CONFIG_RTE_LIBRTE_I40E_PMD=n
+CONFIG_RTE_LIBRTE_IXGBE_PMD=n
+CONFIG_RTE_LIBRTE_MLX4_PMD=n
+CONFIG_RTE_LIBRTE_MPIPE_PMD=n
+CONFIG_RTE_LIBRTE_VIRTIO_PMD=n
+CONFIG_RTE_LIBRTE_VMXNET3_PMD=n
+CONFIG_RTE_LIBRTE_PMD_XENVIRT=n
+CONFIG_RTE_LIBRTE_PMD_BNX2X=n
diff --git a/doc/guides/rel_notes/release_2_2.rst b/doc/guides/rel_notes/release_2_2.rst
index be6f827..43a3a3c 100644
--- a/doc/guides/rel_notes/release_2_2.rst
+++ b/doc/guides/rel_notes/release_2_2.rst
@@ -23,6 +23,11 @@ New Features
* **Added vhost-user multiple queue support.**
+* **Introduce ARMv7 architecture**
+
+ It is now possible to build DPDK for the ARMv7 platform and test with
+ virtual PMD drivers.
+
Resolved Issues
---------------
diff --git a/mk/arch/arm/rte.vars.mk b/mk/arch/arm/rte.vars.mk
new file mode 100644
index 0000000..df0c043
--- /dev/null
+++ b/mk/arch/arm/rte.vars.mk
@@ -0,0 +1,39 @@
+# BSD LICENSE
+#
+# Copyright (C) 2015 RehiveTech. All rights reserved.
+#
+# Redistribution and use in source and binary forms, with or without
+# modification, are permitted provided that the following conditions
+# are met:
+#
+# * Redistributions of source code must retain the above copyright
+# notice, this list of conditions and the following disclaimer.
+# * Redistributions in binary form must reproduce the above copyright
+# notice, this list of conditions and the following disclaimer in
+# the documentation and/or other materials provided with the
+# distribution.
+# * Neither the name of RehiveTech nor the names of its
+# contributors may be used to endorse or promote products derived
+# from this software without specific prior written permission.
+#
+# THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
+# "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
+# LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
+# A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
+# OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
+# SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
+# LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
+# DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
+# THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
+# (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
+# OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
+
+
+ARCH ?= arm
+CROSS ?=
+
+CPU_CFLAGS ?= -marm -DRTE_CACHE_LINE_SIZE=64 -munaligned-access
+CPU_LDFLAGS ?=
+CPU_ASFLAGS ?= -felf
+
+export ARCH CROSS CPU_CFLAGS CPU_LDFLAGS CPU_ASFLAGS
diff --git a/mk/machine/armv7-a/rte.vars.mk b/mk/machine/armv7-a/rte.vars.mk
new file mode 100644
index 0000000..48d3979
--- /dev/null
+++ b/mk/machine/armv7-a/rte.vars.mk
@@ -0,0 +1,67 @@
+# BSD LICENSE
+#
+# Copyright (C) 2015 RehiveTech. All rights reserved.
+#
+# Redistribution and use in source and binary forms, with or without
+# modification, are permitted provided that the following conditions
+# are met:
+#
+# * Redistributions of source code must retain the above copyright
+# notice, this list of conditions and the following disclaimer.
+# * Redistributions in binary form must reproduce the above copyright
+# notice, this list of conditions and the following disclaimer in
+# the documentation and/or other materials provided with the
+# distribution.
+# * Neither the name of RehiveTech nor the names of its
+# contributors may be used to endorse or promote products derived
+# from this software without specific prior written permission.
+#
+# THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
+# "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
+# LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
+# A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
+# OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
+# SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
+# LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
+# DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
+# THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
+# (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
+# OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
+
+#
+# machine:
+#
+# - can define ARCH variable (overridden by cmdline value)
+# - can define CROSS variable (overridden by cmdline value)
+# - define MACHINE_CFLAGS variable (overridden by cmdline value)
+# - define MACHINE_LDFLAGS variable (overridden by cmdline value)
+# - define MACHINE_ASFLAGS variable (overridden by cmdline value)
+# - can define CPU_CFLAGS variable (overridden by cmdline value) that
+# overrides the one defined in arch.
+# - can define CPU_LDFLAGS variable (overridden by cmdline value) that
+# overrides the one defined in arch.
+# - can define CPU_ASFLAGS variable (overridden by cmdline value) that
+# overrides the one defined in arch.
+# - may override any previously defined variable
+#
+
+# ARCH =
+# CROSS =
+# MACHINE_CFLAGS =
+# MACHINE_LDFLAGS =
+# MACHINE_ASFLAGS =
+# CPU_CFLAGS =
+# CPU_LDFLAGS =
+# CPU_ASFLAGS =
+
+CPU_CFLAGS += -mfloat-abi=softfp
+
+MACHINE_CFLAGS += -march=armv7-a
+
+ifdef CONFIG_RTE_ARCH_ARM_TUNE
+MACHINE_CFLAGS += -mtune=$(CONFIG_RTE_ARCH_ARM_TUNE)
+endif
+
+ifeq ($(CONFIG_RTE_ARCH_ARM_NEON),y)
+MACHINE_CFLAGS += -mfpu=neon
+endif
--
2.6.1
^ permalink raw reply [relevance 3%]
Results 1001-1200 of ~18000
-- links below jump to the message on this page --
2015-06-15 16:51 [dpdk-dev] [PATCH 0/3 v2] remove code marked as deprecated in 2.0 Stephen Hemminger
2015-08-16 22:51 ` [dpdk-dev] [PATCH 0/3] announce deprecation of functions commented as deprecated Thomas Monjalon
2015-08-16 22:51 4% ` [dpdk-dev] [PATCH 2/3] doc: announce removal of kni functions using port id Thomas Monjalon
2015-08-16 22:51 4% ` [dpdk-dev] [PATCH 3/3] doc: announce ring PMD functions removal Thomas Monjalon
2015-06-16 1:38 [dpdk-dev] [PATCH] abi: Announce abi changes plan for vhost-user multiple queues Ouyang Changchun
2015-08-12 14:57 5% ` Thomas Monjalon
2015-06-17 3:36 [dpdk-dev] [PATCH] abi: announce abi changes plan for struct rte_eth_fdir_flow_ext Jingjing Wu
2015-08-12 10:25 5% ` Thomas Monjalon
2015-07-09 2:47 [dpdk-dev] [PATCH] doc: announce ABI change of rte_fdir_filter, rte_fdir_masks Wenzhuo Lu
2015-07-10 2:24 ` [dpdk-dev] [PATCH v2] doc: announce ABI change of rte_eth_fdir_filter, rte_eth_fdir_masks Wenzhuo Lu
2015-08-04 8:54 7% ` Mcnamara, John
2015-08-04 8:56 4% ` Mcnamara, John
2015-08-12 14:19 4% ` Thomas Monjalon
2015-07-13 10:26 [dpdk-dev] [PATCH] ethdev: fix ABI breakage in lro code John McNamara
2015-08-03 2:39 ` Chao Zhu
2015-08-03 3:45 ` Chao Zhu
2015-08-03 8:41 8% ` Thomas Monjalon
2015-08-03 12:53 7% ` Neil Horman
2015-07-13 16:25 [dpdk-dev] [PATCH] hash: rename unused field to "reserved" Bruce Richardson
2015-07-13 16:38 ` [dpdk-dev] [PATCH v2] " Bruce Richardson
2015-09-22 23:01 3% ` Stephen Hemminger
2015-07-15 13:11 [dpdk-dev] [PATCH v6 0/9] Expose IXGBE extended stats to DPDK apps Maryam Tahhan
2015-07-15 13:11 ` [dpdk-dev] [PATCH v6 4/9] ethdev: remove HW specific stats in stats structs Maryam Tahhan
2015-08-17 14:53 0% ` Olivier MATZ
2015-08-19 12:53 0% ` Tahhan, Maryam
2015-07-16 11:36 [dpdk-dev] [PATCH] doc: announce ABI change for librte_cfgfile Cristian Dumitrescu
2015-07-16 12:49 ` Singh, Jasvinder
2015-08-15 9:09 4% ` Thomas Monjalon
2015-07-16 17:07 [dpdk-dev] [PATCH] doc: announce ABI change for librte_pipeline Cristian Dumitrescu
2015-07-17 12:03 ` Singh, Jasvinder
2015-08-15 21:49 4% ` Thomas Monjalon
2015-07-16 21:21 [dpdk-dev] [PATCH] doc: announce ABI change for librte_sched Stephen Hemminger
2015-07-16 21:28 ` Neil Horman
2015-08-15 8:58 4% ` Thomas Monjalon
2015-07-20 7:03 [dpdk-dev] [PATCH] doc: announce ABI change for rte_eth_fdir_filter Jingjing Wu
2015-08-04 8:52 4% ` Mcnamara, John
2015-08-12 10:38 4% ` Thomas Monjalon
2015-08-04 8:53 4% ` Mcnamara, John
2015-07-20 14:12 [dpdk-dev] [PATCH] examples: new example: l2fwd-ethtool Liang-Min Larry Wang
2015-07-23 15:00 ` [dpdk-dev] [PATCH v2 0/2] Example: l2fwd-ethtool Liang-Min Larry Wang
2015-07-23 15:00 ` [dpdk-dev] [PATCH v2 1/2] Remove ABI requierment for external library builds Liang-Min Larry Wang
2015-10-21 16:31 4% ` Thomas Monjalon
2015-07-23 10:59 [dpdk-dev] [PATCH v2] announce ABI change for librte_table Cristian Dumitrescu
2015-07-23 11:05 ` Singh, Jasvinder
2015-08-15 21:48 4% ` Thomas Monjalon
2015-07-30 1:57 [dpdk-dev] [PATCH v2] doc: announce abi change for interrupt mode Cunming Liang
2015-07-30 5:04 ` [dpdk-dev] [PATCH v3] " Cunming Liang
2015-07-31 1:00 ` Zhang, Helin
2015-08-12 8:51 4% ` Thomas Monjalon
2015-08-03 22:47 4% [dpdk-dev] [dpdk-announce] release candidate 2.1.0-rc3 Thomas Monjalon
2015-08-11 2:12 7% [dpdk-dev] [PATCH] doc: announce ABI change for old flow director APIs removing Jingjing Wu
2015-08-11 3:01 4% ` Zhang, Helin
2015-08-12 9:02 4% ` Thomas Monjalon
2015-08-11 5:41 4% ` Liu, Jijiang
2015-08-11 11:57 3% [dpdk-dev] [PATCH] doc: restructured release notes documentation John McNamara
2015-08-11 16:29 4% [dpdk-dev] [PATCH] doc: add missing API headers Thomas Monjalon
2015-08-11 22:58 4% [dpdk-dev] [PATCH] doc: simplify release notes cover Thomas Monjalon
2015-08-13 11:04 [dpdk-dev] [PATCH] doc: updated release notes for r2.1 John McNamara
2015-08-13 11:04 5% ` John McNamara
2015-08-17 14:39 4% [dpdk-dev] [PATCH 1/2] doc: announce removal of jhash2 function Thomas Monjalon
2015-08-17 14:39 4% ` [dpdk-dev] [PATCH 2/2] doc: announce removal of LPM memory location Thomas Monjalon
2015-08-19 20:46 2% [dpdk-dev] [PATCH v2] Move common functions in eal_thread.c Ravi Kerur
2015-08-28 16:49 [dpdk-dev] [PATCH 0/2] rte_sched: cleanups Stephen Hemminger
2015-08-28 16:49 ` [dpdk-dev] [PATCH 2/2] rte_sched: remove useless bitmap_free Stephen Hemminger
2015-09-11 17:28 3% ` Dumitrescu, Cristian
2015-09-11 19:18 0% ` Stephen Hemminger
2015-08-29 0:16 [dpdk-dev] [PATCH v4 0/2] ethdev: add port speed capability bitmap Marc Sune
2015-10-04 21:12 ` [dpdk-dev] [PATCH v5 0/4] ethdev: add speed capabilities and refactor link API Marc Sune
2015-10-04 21:12 ` [dpdk-dev] [PATCH v5 3/4] ethdev: redesign link speed config API Marc Sune
2015-10-05 10:59 4% ` Neil Horman
2015-10-07 13:31 5% ` Marc Sune
2015-10-04 21:12 13% ` [dpdk-dev] [PATCH v5 4/4] doc: update with link changes Marc Sune
2015-08-31 3:55 [dpdk-dev] [RFC PATCH v2] Add VHOST PMD Tetsuya Mukawa
2015-08-31 3:55 ` [dpdk-dev] [RFC PATCH v2] vhost: " Tetsuya Mukawa
2015-10-16 12:52 ` Bruce Richardson
2015-10-19 1:51 ` Tetsuya Mukawa
2015-10-19 9:32 ` Loftus, Ciara
2015-10-19 9:45 ` Bruce Richardson
2015-10-19 10:50 ` Tetsuya Mukawa
2015-10-19 13:26 4% ` Panu Matilainen
2015-10-19 13:27 0% ` Richardson, Bruce
2015-10-21 4:35 0% ` Tetsuya Mukawa
2015-10-21 6:25 3% ` Panu Matilainen
2015-10-21 10:22 0% ` Bruce Richardson
2015-10-22 9:50 0% ` Tetsuya Mukawa
2015-09-01 19:17 [dpdk-dev] ixgbe: prefetch packet headers in vector PMD receive function Zoltan Kiss
2015-09-07 12:25 ` [dpdk-dev] [PATCH] " Zoltan Kiss
2015-09-07 12:57 ` Richardson, Bruce
2015-09-07 14:15 ` Zoltan Kiss
2015-09-07 14:41 ` Richardson, Bruce
2015-09-25 18:28 ` Zoltan Kiss
2015-09-27 23:19 ` Ananyev, Konstantin
2015-10-14 16:10 ` Zoltan Kiss
2015-10-14 23:23 ` Ananyev, Konstantin
2015-10-15 10:32 ` Zoltan Kiss
2015-10-15 15:43 3% ` Ananyev, Konstantin
2015-10-19 16:30 0% ` Zoltan Kiss
2015-10-19 18:57 0% ` Ananyev, Konstantin
2015-10-20 9:58 0% ` Zoltan Kiss
2015-09-01 20:18 [dpdk-dev] [PATCH 0/9] clean deprecated code Thomas Monjalon
2015-09-01 21:30 5% ` [dpdk-dev] [PATCH 1/9] ethdev: remove Rx interrupt switch Thomas Monjalon
2015-09-01 21:30 2% ` [dpdk-dev] [PATCH 2/9] mbuf: remove packet type from offload flags Thomas Monjalon
2015-09-01 21:30 11% ` [dpdk-dev] [PATCH 3/9] ethdev: remove SCTP flow entries switch Thomas Monjalon
2015-09-01 21:31 4% ` [dpdk-dev] [PATCH 9/9] ring: remove deprecated functions Thomas Monjalon
2015-09-02 13:16 3% ` [dpdk-dev] [PATCH v2 00/10] clean deprecated code Thomas Monjalon
2015-09-02 13:16 5% ` [dpdk-dev] [PATCH v2 01/10] doc: init next release notes Thomas Monjalon
2015-09-03 15:44 3% ` Mcnamara, John
2015-09-04 7:50 0% ` Thomas Monjalon
2015-09-02 13:16 7% ` [dpdk-dev] [PATCH v2 02/10] ethdev: remove Rx interrupt switch Thomas Monjalon
2015-09-02 13:16 4% ` [dpdk-dev] [PATCH v2 03/10] mbuf: remove packet type from offload flags Thomas Monjalon
2015-09-02 13:16 15% ` [dpdk-dev] [PATCH v2 04/10] ethdev: remove SCTP flow entries switch Thomas Monjalon
2015-09-02 13:16 4% ` [dpdk-dev] [PATCH v2 05/10] eal: remove deprecated function Thomas Monjalon
2015-09-02 13:16 3% ` [dpdk-dev] [PATCH v2 06/10] mem: remove dummy malloc library Thomas Monjalon
2015-09-02 13:16 5% ` [dpdk-dev] [PATCH v2 07/10] lpm: remove deprecated field Thomas Monjalon
2015-09-02 13:16 2% ` [dpdk-dev] [PATCH v2 08/10] acl: remove old API Thomas Monjalon
2015-09-02 13:16 3% ` [dpdk-dev] [PATCH v2 09/10] kni: remove deprecated functions Thomas Monjalon
2015-09-02 13:16 5% ` [dpdk-dev] [PATCH v2 10/10] ring: " Thomas Monjalon
2015-09-04 7:50 0% ` [dpdk-dev] [PATCH v2 00/10] clean deprecated code Thomas Monjalon
2015-09-02 12:49 [dpdk-dev] rte_eal_init() alternative? Montorsi, Francesco
2015-09-02 12:56 ` Bruce Richardson
2015-09-02 13:10 ` Thomas Monjalon
2015-10-08 14:58 ` Montorsi, Francesco
2015-10-09 8:25 3% ` Panu Matilainen
2015-10-09 10:03 3% ` Montorsi, Francesco
2015-10-09 10:40 0% ` Panu Matilainen
2015-10-09 16:03 0% ` Thomas F Herbert
2015-09-02 13:49 5% [dpdk-dev] libdpdk upstream changes for ecosystem best practices Robie Basak
2015-09-02 14:18 3% ` Thomas Monjalon
2015-09-02 16:01 0% ` Stephen Hemminger
2015-09-18 10:39 5% ` Robie Basak
2015-09-02 14:13 [dpdk-dev] [PATCH 1/1] ip_frag: fix creating ipv6 fragment extension header Piotr
2015-09-04 15:51 ` Ananyev, Konstantin
2015-09-07 11:21 4% ` Dumitrescu, Cristian
2015-09-07 11:23 0% ` Ananyev, Konstantin
2015-09-07 11:24 0% ` Dumitrescu, Cristian
2015-09-02 15:53 3% [dpdk-dev] [PATCH] librte_cfgfile (rte_cfgfile.h): modify the macros values Jasvinder Singh
2015-09-02 20:47 3% ` Thomas Monjalon
2015-09-03 9:53 0% ` Singh, Jasvinder
[not found] <1441289108-4501-1-git-send-email-jasvinder.singh@intel.com>
2015-09-03 14:18 3% ` [dpdk-dev] [PATCH v2] " Jasvinder Singh
2015-09-03 14:33 0% ` Thomas Monjalon
2015-09-04 10:58 8% ` [dpdk-dev] [PATCH v3] librte_cfgfile(rte_cfgfile.h): " Jasvinder Singh
2015-09-07 11:23 0% ` Dumitrescu, Cristian
2015-10-22 14:03 4% ` [dpdk-dev] [PATCH v4 0/2] cfgfile: " Jasvinder Singh
2015-10-22 14:03 8% ` [dpdk-dev] [PATCH v4 2/2] librte_cfgfile(rte_cfgfile.h): " Jasvinder Singh
2015-09-04 9:05 [dpdk-dev] [PATCH 0/3] clean deprecated code in hash library Pablo de Lara
2015-09-04 9:05 4% ` [dpdk-dev] [PATCH 3/3] hash: remove deprecated functions and macros Pablo de Lara
2015-09-08 10:11 3% [dpdk-dev] [PATCH 0/4] librte_table: add name parameter to lpm table Jasvinder Singh
2015-09-08 10:11 3% ` [dpdk-dev] [PATCH 1/4] librte_table: modify LPM table parameter structure Jasvinder Singh
2015-09-08 10:11 5% ` [dpdk-dev] [PATCH 4/4] librte_table: modify release notes and deprecation notice Jasvinder Singh
2015-09-08 12:57 0% ` [dpdk-dev] [PATCH 0/4] librte_table: add name parameter to lpm table Dumitrescu, Cristian
2015-10-09 10:49 0% ` Thomas Monjalon
2015-09-09 8:44 [dpdk-dev] DPDK 2.2 roadmap Thomas Monjalon
2015-09-15 9:16 4% ` David Marchand
2015-09-11 10:31 [dpdk-dev] [PATCH v2 0/5] pipeline: add bulk add/delete functions for table Maciej Gajdzica
2015-09-11 10:31 5% ` [dpdk-dev] [PATCH v2 5/5] doc: modify release notes and deprecation notice for table and pipeline Maciej Gajdzica
2015-10-13 7:34 ` [dpdk-dev] [PATCH v3 0/5] pipeline: add bulk add/delete functions for table Marcin Kerlin
2015-10-13 7:34 5% ` [dpdk-dev] [PATCH v3 5/5] doc: modify release notes and deprecation notice for table and pipeline Marcin Kerlin
2015-09-11 11:04 15% [dpdk-dev] [PATCH] doc: add guideline for updating release notes John McNamara
2015-09-11 13:35 3% [dpdk-dev] [PATCH 0/4]librte_port: modify macros to access packet meta-data roy.fan.zhang
2015-09-11 13:35 3% ` [dpdk-dev] [PATCH 1/4] librte_port: " roy.fan.zhang
2015-09-11 13:35 5% ` [dpdk-dev] [PATCH 4/4] librte_port: modify release notes and deprecation notice roy.fan.zhang
2015-09-11 13:40 0% ` [dpdk-dev] [PATCH 0/4]librte_port: modify macros to access packet meta-data Dumitrescu, Cristian
2015-10-19 15:02 0% ` Thomas Monjalon
2015-09-17 16:03 3% [dpdk-dev] [PATCH v2 0/4]librte_table: add name parameter to lpm table Jasvinder Singh
2015-09-17 16:03 3% ` [dpdk-dev] [PATCH v2 1/4] librte_table: modify LPM table parameter structure Jasvinder Singh
2015-09-17 16:03 5% ` [dpdk-dev] [PATCH v2 4/4] librte_table: modify release notes and deprecation notice Jasvinder Singh
2015-09-17 16:13 0% ` [dpdk-dev] [PATCH v2 0/4]librte_table: add name parameter to lpm table Dumitrescu, Cristian
2015-10-12 14:06 0% ` Thomas Monjalon
2015-09-18 20:33 [dpdk-dev] [PATCH 0/7] Add hierarchical support to make install Mario Carrillo
2015-10-01 0:11 ` [dpdk-dev] [PATCH v3 0/8] Add instalation rules for dpdk files Mario Carrillo
2015-10-01 0:11 ` [dpdk-dev] [PATCH v3 8/8] mk: Add rule for installing runtime files Mario Carrillo
2015-10-02 11:15 ` Panu Matilainen
2015-10-02 11:25 3% ` Bruce Richardson
2015-09-22 3:45 3% [dpdk-dev] [PATCH 0/4] extend flow director to support VF filtering in i40e driver Jingjing Wu
2015-09-22 3:45 19% ` [dpdk-dev] [PATCH 4/4] doc: extend commands in testpmd and remove related ABI deprecation Jingjing Wu
2015-10-27 7:54 7% ` Zhang, Helin
2015-10-27 9:35 4% ` Thomas Monjalon
2015-10-28 2:06 4% ` Wu, Jingjing
2015-10-28 8:41 3% ` [dpdk-dev] [PATCH v2 0/4] extend flow director to support VF filtering in i40e driver Jingjing Wu
2015-10-28 8:41 20% ` [dpdk-dev] [PATCH v2 4/4] doc: extend commands in testpmd and update release note Jingjing Wu
2015-09-24 7:34 9% [dpdk-dev] [PATCH 0/3] Minor abi-validator improvements Panu Matilainen
2015-09-24 7:34 19% ` [dpdk-dev] [PATCH 1/3] scripts: permit passing extra compiler & linker flags to ABI validator Panu Matilainen
2015-09-24 7:34 14% ` [dpdk-dev] [PATCH 2/3] scripts: move two identical config fixups into a function Panu Matilainen
2015-09-24 7:42 0% ` Panu Matilainen
2015-09-24 7:34 14% ` [dpdk-dev] [PATCH 3/3] scripts: teach ABI validator about CONFIG_RTE_KNI_KMOD Panu Matilainen
2015-09-24 7:50 9% ` [dpdk-dev] [PATCH 0/3 v2] Minor abi-validator improvements Panu Matilainen
2015-09-24 7:50 19% ` [dpdk-dev] [PATCH 1/3] scripts: permit passing extra compiler & linker flags to ABI validator Panu Matilainen
2015-09-24 7:50 14% ` [dpdk-dev] [PATCH 2/3] scripts: move two identical config fixups into a function Panu Matilainen
2015-09-24 7:50 14% ` [dpdk-dev] [PATCH 3/3] scripts: teach ABI validator about CONFIG_RTE_KNI_KMOD Panu Matilainen
2015-09-24 10:23 4% ` [dpdk-dev] [PATCH 0/3 v2] Minor abi-validator improvements Neil Horman
2015-09-25 6:05 [dpdk-dev] [PATCH 0/6] Support new flow director modes on Intel x550 NIC Wenzhuo Lu
2015-10-22 7:11 ` [dpdk-dev] [PATCH v3 0/7] " Wenzhuo Lu
2015-10-22 7:11 ` [dpdk-dev] [PATCH v3 1/7] lib/librte_ether: modify the structures for fdir new modes Wenzhuo Lu
2015-10-22 12:57 3% ` Bruce Richardson
2015-10-23 1:22 0% ` Lu, Wenzhuo
2015-10-23 9:58 0% ` Bruce Richardson
2015-10-23 13:06 0% ` Lu, Wenzhuo
2015-09-25 22:33 3% [dpdk-dev] [PATCH 0/8] librte_table: add key_mask parameter to 8-byte key roy.fan.zhang
2015-09-25 22:33 3% ` [dpdk-dev] [PATCH 1/8] librte_table: add key_mask parameter to 8-byte key hash parameters roy.fan.zhang
2015-09-25 22:33 3% ` [dpdk-dev] [PATCH 2/8] librte_table: add key_mask parameter to 16-byte " roy.fan.zhang
2015-09-25 22:33 5% ` [dpdk-dev] [PATCH 8/8] librte_table: modify release notes and deprecation notice roy.fan.zhang
2015-10-12 14:24 0% ` Thomas Monjalon
2015-09-28 20:07 0% ` [dpdk-dev] [PATCH 0/8] librte_table: add key_mask parameter to 8-byte key Dumitrescu, Cristian
2015-09-30 12:12 [dpdk-dev] [PATCH 0/4] eth_ring: perf test and usability improvements Bruce Richardson
2015-09-30 12:12 3% ` [dpdk-dev] [PATCH 2/4] rte_ring: store memzone pointer inside ring Bruce Richardson
2015-10-13 14:29 0% ` Olivier MATZ
[not found] <PATCH>
2015-09-28 13:03 ` [dpdk-dev] [PATCH 00/20] remove pci driver from vdevs Bernard Iremonger
2015-09-28 13:03 ` [dpdk-dev] [PATCH 02/20] librte_ether: add fields from rte_pci_driver to rte_eth_dev_data Bernard Iremonger
2015-09-30 13:18 3% ` Neil Horman
2015-09-30 13:23 3% ` Bruce Richardson
2015-09-30 15:38 [dpdk-dev] [PATCH] ip_pipeline: add more functions to routing-pipeline Dumitrescu, Cristian
2015-10-01 9:05 ` [dpdk-dev] [PATCH v2] " Jasvinder Singh
2015-10-01 11:00 4% ` Neil Horman
2015-10-01 12:37 5% ` Dumitrescu, Cristian
2015-10-01 17:18 0% ` Neil Horman
2015-09-30 22:28 [dpdk-dev] [PATCH 0/2] uio_msi: device driver Stephen Hemminger
2015-10-01 10:59 ` Avi Kivity
2015-10-01 14:57 3% ` Stephen Hemminger
2015-10-01 19:48 0% ` Alexander Duyck
2015-10-01 22:00 0% ` Stephen Hemminger
2015-10-01 23:03 0% ` Alexander Duyck
2015-10-01 23:39 0% ` Stephen Hemminger
2015-10-01 23:43 0% ` Alexander Duyck
[not found] <20150930143927-mutt-send-email-mst@redhat.com>
2015-09-30 11:53 ` [dpdk-dev] Having troubles binding an SR-IOV VF to uio_pci_generic on Amazon instance Vlad Zolotarov
2015-09-30 12:03 ` Michael S. Tsirkin
2015-09-30 12:16 ` Vlad Zolotarov
2015-09-30 12:27 ` Michael S. Tsirkin
2015-09-30 12:50 ` Vlad Zolotarov
2015-09-30 15:26 ` Michael S. Tsirkin
2015-09-30 18:15 ` Vlad Zolotarov
2015-09-30 18:55 ` Michael S. Tsirkin
2015-09-30 19:06 ` Vlad Zolotarov
2015-09-30 19:39 ` Michael S. Tsirkin
2015-09-30 20:09 ` Vlad Zolotarov
2015-09-30 21:36 ` Stephen Hemminger
2015-10-01 8:00 ` Vlad Zolotarov
2015-10-01 14:47 3% ` Stephen Hemminger
2015-10-01 15:03 0% ` Vlad Zolotarov
2015-10-01 19:54 [dpdk-dev] [PATCHv5 0/8] ethdev: add new API to retrieve RX/TX queue information Konstantin Ananyev
2015-10-01 19:54 2% ` [dpdk-dev] [PATCHv5 1/8] " Konstantin Ananyev
2015-10-14 11:39 ` [dpdk-dev] [dpdk-dev, PATCHv5, " Amine Kherbouche
2015-10-14 11:49 4% ` Ananyev, Konstantin
2015-10-01 23:37 1% [dpdk-dev] [PATCH] crc: deinline crc functions Stephen Hemminger
2015-10-13 16:04 0% ` Richardson, Bruce
2015-10-02 21:50 [dpdk-dev] [PATCH] lib: added support for armv7 architecture David Hunt
2015-10-11 21:17 2% ` [dpdk-dev] " Jan Viktorin
2015-10-03 8:58 [dpdk-dev] [PATCH v1 00/12] Support for ARM(v7) Jan Viktorin
2015-10-03 8:58 3% ` [dpdk-dev] [PATCH v1 01/12] mk: Introduce ARMv7 architecture Jan Viktorin
2015-10-05 17:57 3% [dpdk-dev] [PATCH 0/3] Add RETA configuration to mlx5 Adrien Mazarguil
2015-10-12 16:30 [dpdk-dev] [PATCH 8/8] librte_table: modify release notes and deprecation notice Mcnamara, John
2015-10-13 13:57 3% ` [dpdk-dev] [PATCH v2 0/8] librte_table: add key_mask parameter to 8-byte key Jasvinder Singh
2015-10-13 13:57 3% ` [dpdk-dev] [PATCH v2 1/8] librte_table: add key_mask parameter to 8-byte key hash parameters Jasvinder Singh
2015-10-13 13:57 3% ` [dpdk-dev] [PATCH v2 2/8] librte_table: add key_mask parameter to 16-byte " Jasvinder Singh
2015-10-13 13:57 5% ` [dpdk-dev] [PATCH v2 8/8] librte_table: modify release notes and deprecation notice Jasvinder Singh
2015-10-21 12:18 3% ` [dpdk-dev] [PATCH v3 0/6] librte_table: add key_mask parameter to hash table parameter structure Jasvinder Singh
2015-10-21 12:18 6% ` [dpdk-dev] [PATCH v3 1/6] librte_table: add key_mask parameter to 8- and 16-bytes key hash parameters Jasvinder Singh
2015-10-15 16:51 [dpdk-dev] [PATCH] add a dpdk contributors guide John McNamara
2015-10-15 16:51 3% ` [dpdk-dev] [PATCH] doc: add " John McNamara
2015-10-15 21:36 0% ` Thomas Monjalon
2015-10-20 11:03 2% ` [dpdk-dev] [PATCH v2] " John McNamara
2015-10-23 10:18 2% ` [dpdk-dev] [PATCH v3] " John McNamara
2015-10-20 2:41 3% [dpdk-dev] [PATCH] doc: Add missing new line before code block Tetsuya Mukawa
2015-10-28 9:33 0% ` Mcnamara, John
2015-10-20 10:34 4% [dpdk-dev] [PATCH v2 1/1] ethdev: remove the imissed deprecation tag Maryam Tahhan
2015-10-22 12:06 [dpdk-dev] [PATCHv6 0/9] ethdev: add new API to retrieve RX/TX queue information Konstantin Ananyev
2015-10-22 12:06 2% ` [dpdk-dev] [PATCHv6 1/9] " Konstantin Ananyev
2015-10-27 12:51 3% ` [dpdk-dev] [PATCHv7 0/9] " Konstantin Ananyev
2015-10-27 12:51 2% ` [dpdk-dev] [PATCHv7 1/9] " Konstantin Ananyev
2015-10-27 12:51 10% ` [dpdk-dev] [PATCHv7 9/9] doc: release notes update for queue_info_get() and (rx|tx)_desc_limit Konstantin Ananyev
2015-10-22 12:06 4% ` [dpdk-dev] [PATCHv6 9/9] doc: release notes update for queue_info_get() Konstantin Ananyev
2015-10-23 13:51 3% [dpdk-dev] [PATCH v1 0/3] lpm: increase number of next hops for lpm (ipv4) Michal Jastrzebski
2015-10-23 13:51 1% ` [dpdk-dev] [PATCH v1 1/3] " Michal Jastrzebski
2015-10-23 14:38 3% ` Bruce Richardson
2015-10-23 14:59 0% ` Jastrzebski, MichalX K
2015-10-23 13:51 5% ` [dpdk-dev] [PATCH v1 3/3] doc: update release 2.2 after changes in librte_lpm Michal Jastrzebski
2015-10-23 14:21 0% ` Bruce Richardson
2015-10-23 14:33 0% ` Jastrzebski, MichalX K
2015-10-23 16:20 0% ` [dpdk-dev] [PATCH v1 0/3] lpm: increase number of next hops for lpm (ipv4) Matthew Hall
2015-10-23 16:33 3% ` Stephen Hemminger
2015-10-23 18:38 0% ` Matthew Hall
2015-10-23 19:13 0% ` Vladimir Medvedkin
2015-10-23 19:59 1% ` Stephen Hemminger
2015-10-24 6:09 0% ` Matthew Hall
2015-10-25 17:52 0% ` Vladimir Medvedkin
[not found] ` <20151026115519.GA7576@MKJASTRX-MOBL>
2015-10-26 11:57 0% ` Jastrzebski, MichalX K
2015-10-26 14:03 3% ` Vladimir Medvedkin
2015-10-26 15:39 0% ` Michal Jastrzebski
2015-10-26 16:59 0% ` Vladimir Medvedkin
2015-10-26 12:13 0% ` Jastrzebski, MichalX K
[not found] <1443993003-1059-1-git-send-email-marcdevel@gmail.com>
2015-10-25 21:59 ` [dpdk-dev] [PATCH v6 0/5] ethdev: add speed capabilities and refactor link API Marc Sune
2015-10-25 21:59 13% ` [dpdk-dev] [PATCH v6 4/5] doc: update with link changes Marc Sune
2015-10-26 16:37 [dpdk-dev] [PATCH v2 00/16] Support ARMv7 architecture Jan Viktorin
2015-10-26 16:37 3% ` [dpdk-dev] [PATCH v2 01/16] mk: Introduce " Jan Viktorin
2015-10-27 19:13 ` [dpdk-dev] [PATCH v3 00/17] Support " Jan Viktorin
2015-10-27 19:13 3% ` [dpdk-dev] [PATCH v3 01/17] mk: Introduce " Jan Viktorin
2015-10-29 12:43 ` [dpdk-dev] [PATCH v4 00/15] Support " Jan Viktorin
2015-10-29 12:43 3% ` [dpdk-dev] [PATCH v4 14/15] mk: Introduce " Jan Viktorin
2015-10-28 17:11 3% [dpdk-dev] [PATCH v4 0/7] librte_table: add key_mask parameter to roy.fan.zhang
2015-10-28 17:11 6% ` [dpdk-dev] [PATCH v4 1/7] librte_table: add key_mask parameter to 8- and 16-bytes key hash parameters roy.fan.zhang
2015-10-29 4:09 [dpdk-dev] [PATCH v5 0/2] i40e: RSS/FD granularity configuration Helin Zhang
2015-10-29 6:02 ` [dpdk-dev] [PATCH v6 0/3] " Helin Zhang
2015-10-29 6:02 ` [dpdk-dev] [PATCH v6 1/3] " Helin Zhang
2015-10-29 9:38 3% ` Bruce Richardson
This is a public inbox, see mirroring instructions
for how to clone and mirror all data and code used for this inbox;
as well as URLs for NNTP newsgroup(s).