DPDK patches and discussions
 help / color / mirror / Atom feed
* [dpdk-dev] [PATCH] doc: update bnxt guide
@ 2020-05-01 22:03 Ajit Khaparde
  2020-05-05 16:58 ` Ajit Khaparde
                   ` (2 more replies)
  0 siblings, 3 replies; 16+ messages in thread
From: Ajit Khaparde @ 2020-05-01 22:03 UTC (permalink / raw)
  To: dev; +Cc: JP Lee

    - Update list of supported adapters.
    - Update list of supported features.
    - Add some details to describe the features.
    - Remove obsolete limitations.
    - Fix and update links.

Signed-off-by: JP Lee <jongpil.lee@broadcom.com>
Signed-off-by: Ajit Khaparde <ajit.khaparde@broadcom.com>
---
 doc/guides/nics/bnxt.rst | 1131 +++++++++++++++++++++++++++++++++-----
 1 file changed, 1001 insertions(+), 130 deletions(-)

diff --git a/doc/guides/nics/bnxt.rst b/doc/guides/nics/bnxt.rst
index 434ba9d6c..0da312557 100644
--- a/doc/guides/nics/bnxt.rst
+++ b/doc/guides/nics/bnxt.rst
@@ -1,138 +1,1009 @@
-..  SPDX-License-Identifier: BSD-3-Clause
-    Copyright 2016-2019 Broadcom
+.. SPDX-License-Identifier: BSD-3-Clause
+   Copyright 2020 Broadcom Inc.
 
 BNXT Poll Mode Driver
 =====================
 
-The BNXT PMD (**librte_pmd_bnxt**) implements support for adapters based on
-Ethernet controllers and SoCs belonging to the **Broadcom BCM5730X NetXtreme-C®
-Family of Ethernet Network Controllers**, the **Broadcom BCM574XX/BCM575XX
-NetXtreme-E® Family of Ethernet Network Controllers**, the **Broadcom BCM588XX
-Stingray Family of SmartNIC Adapters**, and the **Broadcom StrataGX® BCM5871X
-Series of Communications Processors**.  A complete list with links to reference
-material is included below.
+The Broadcom BNXT PMD (Poll Mode Driver, librte_pmd_bnxt) implements support for adapters based on Ethernet controllers and SoCs belonging to the Broadcom BCM574XX/BCM575XX NetXtreme-E® Family of Ethernet Network Controllers, the Broadcom BCM588XX Stingray Family of Smart NIC Adapters, and the Broadcom StrataGX® BCM5873X Series of Communications Processors.
 
+A complete list with links to reference material is in the Appendix section.
 
-BNXT PMD Features
+CPU Support
+-----------
+
+BNXT PMD supports multiple CPU architectures, including x86-32, x86-64, and ARMv8.
+
+Kernel Dependency
 -----------------
 
-The BNXT PMD includes support for the following features:
-
-   * Multiple transmit and receive queues
-   * Queue start/stop
-   * RSS hash
-   * RSS key configuration
-   * RSS reta configuration
-   * VMDq
-   * Packet type parsing
-   * Configurable RX CRC stripping
-   * L3/L4 checksum offload
-   * LRO offload
-   * TSO offload
-   * VLAN offload
-   * SR-IOV VF
-   * Basic and extended port statistics
-   * Link state reporting
-   * Flow control
-   * Ethertype filtering
-   * N-tuple filtering
-   * Promiscuous mode
-   * Unicast and multicast MAC filtering
-   * Scatter/gather transmit and receive
-   * Jumbo frames
-   * Vector PMD
-
-BNXT Vector PMD
----------------
-
-The BNXT PMD includes support for SSE vector mode on x86 platforms. Vector
-provides significantly improved performance over the base implementation,
-however it does not support all of the features that are supported by the
-base (non-vector) implementation. Vector mode will be selected and enabled
-automatically when the port is started if allowed by the current configuration.
-
-RX Requirements for Vector Mode
-^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
-
-Vector mode receive will be enabled if the following constrainsts are met:
-   * Packets must fit within a single mbuf (no scatter RX).
-   * LRO offload must be disabled.
-
-TX Requirements for Vector Mode
-^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
-
-Vector mode transmit will be enabled if the following constrainsts are met:
-   * Packets must be contained within a single mbuf (no gather TX).
-   * All transmit offloads other than VLAN insertion must be disabled.
-
-BNXT PMD Supported Chipsets and Adapters
-----------------------------------------
-
-Chipsets and adapters supported by the bnxt PMD include:
-
-  * **Broadcom BCM5730X NetXtreme-C® Family of Ethernet Network Controllers**
-
-       * M150c - Single-port 40/50 Gigabit Ethernet Adapter
-       * P150c - Single-port 40/50 Gigabit Ethernet Adapter
-       * P225c - Dual-port 10/25 Gigabit Ethernet Adapter
-
-  * **Broadcom BCM574XX/BCM575XX NetXtreme-E® Family of Ethernet Network Controllers**
-
-       * M125P - Single-port OCP 2.0 10/25 Gigabit Ethernet Adapter
-       * M150P - Single-port OCP 2.0 50 Gigabit Ethernet Adapter
-       * M150PM - Single-port OCP 2.0 Multi-Host 50 Gigabit Ethernet Adapter
-       * M210P - Dual-port OCP 2.0 10 Gigabit Ethernet Adapter
-       * M210TP - Dual-port OCP 2.0 10 Gigabit Ethernet Adapter
-       * M11000G - Single-port OCP 2.0 10/25/50/100 Gigabit Ethernet Adapter
-       * N150G - Single-port OCP 3.0 50 Gigabit Ethernet Adapter
-       * M225P - Dual-port OCP 2.0 10/25 Gigabit Ethernet Adapter
-       * N210P - Dual-port OCP 3.0 10 Gigabit Ethernet Adapter
-       * N210TP - Dual-port OCP 3.0 10 Gigabit Ethernet Adapter
-       * N225P - Dual-port OCP 3.0 10/25 Gigabit Ethernet Adapter
-       * N250G - Dual-port OCP 3.0 50 Gigabit Ethernet Adapter
-       * N410SG - Quad-port OCP 3.0 10 Gigabit Ethernet Adapter
-       * N410SGBT - Quad-port OCP 3.0 10 Gigabit Ethernet Adapter
-       * N425G - Quad-port OCP 3.0 10/25 Gigabit Ethernet Adapter
-       * N1100G - Single-port OCP 3.0 10/25/50/100 Gigabit Ethernet Adapter
-       * N2100G - Dual-port OCP 3.0 10/25/50/100 Gigabit Ethernet Adapter
-       * N2200G - Dual-port OCP 3.0 10/25/50/100/200 Gigabit Ethernet Adapter
-       * P150P - Single-port 50 Gigabit Ethernet Adapter
-       * P210P - Dual-port 10 Gigabit Ethernet Adapter
-       * P210TP - Dual-port 10 Gigabit Ethernet Adapter
-       * P225P - Dual-port 10/25 Gigabit Ethernet Adapter
-       * P410SG - Quad-port 10 Gigabit Ethernet Adapter
-       * P410SGBT - Quad-port 10 Gigabit Ethernet Adapter
-       * P425G - Quad-port 10/25 Gigabit Ethernet Adapter
-       * P1100G - Single-port 10/25/50/100 Gigabit Ethernet Adapter
-       * P2100G - Dual-port 10/25/50/100 Gigabit Ethernet Adapter
-       * P2200G - Dual-port 10/25/50/100/200 Gigabit Ethernet Adapter
-
-    Information about Ethernet adapters in the NetXtreme family of
-    adapters can be found in the `NetXtreme® Brand section
-    <https://www.broadcom.com/products/ethernet-connectivity/network-adapters/>`_
-    of the `Broadcom website <http://www.broadcom.com/>`_.
-
-  * **Broadcom BCM588XX Stingray Family of SmartNIC Adapters**
-
-       * PS410T - Quad-port 10 Gigabit Ethernet SmartNIC
-       * PS225 - Dual-port 25 Gigabit Ethernet SmartNIC
-       * PS250 - Dual-Port 50 Gigabit Ethernet SmartNIC
-
-    Information about the Stingray family of SmartNIC adapters can be found in the
-    `Stingray® Brand section
-    <https://www.broadcom.com/products/ethernet-connectivity/smartnic/>`_
-    of the `Broadcom website <http://www.broadcom.com/>`_.
-
-  * **Broadcom StrataGX® BCM5871X Series of Communucations Processors**
-
-    These ARM based processors target a broad range of networking applications
-    including virtual CPE (vCPE) and NFV appliances, 10G service routers and
-    gateways, control plane processing for Ethernet switches and network
-    attached storage (NAS).
-
-    Information about the StrataGX family of adapters can be found in the
-    `StrataGX® BCM58712
-    <http://www.broadcom.com/products/embedded-and-networking-processors/communications/bcm58712>`_
-    and `StrataGX® BCM58713
-    <http://www.broadcom.com/products/embedded-and-networking-processors/communications/bcm58713>`_
-    sections of the `Broadcom website <http://www.broadcom.com/>`_.
+BNXT PMD requires a kernel module (VFIO or UIO) for setting up a device, mapping device memory to userspace, registering interrupts, etc. VFIO is more secure than UIO, relying on IOMMU protection. UIO requires the IOMMU disabled or configured to pass-through mode.
+
+Operating Systems supported:
+
+* Red Hat Enterprise Linux release 8.1 (Ootpa)
+* Red Hat Enterprise Linux release 8.0 (Ootpa)
+* Red Hat Enterprise Linux Server release 7.7 (Maipo)
+* Red Hat Enterprise Linux Server release 7.6 (Maipo)
+* Red Hat Enterprise Linux Server release 7.5 (Maipo)
+* Red Hat Enterprise Linux Server release 7.4 (Maipo)
+* Red Hat Enterprise Linux Server release 7.3 (Maipo)
+* Red Hat Enterprise Linux Server release 7.2 (Maipo)
+* CentOS Linux release 8.0
+* CentOS Linux release 7.7
+* CentOS Linux release 7.6.1810
+* CentOS Linux release 7.5.1804
+* CentOS Linux release 7.4.1708
+* Fedora 31
+* FreeBSD 12.1
+* Suse 15SP1
+* Ubuntu 19.04
+* Ubuntu 18.04
+* Ubuntu 16.10
+* Ubuntu 16.04
+* Ubuntu 14.04
+
+The BNXT PMD supports operating with:
+
+* Linux vfio-pci
+* Linux uio_pci_generic
+* Linux igb_uio
+* BSD nic_uio
+
+Compiling BNXT PMD
+------------------
+
+To compile the BNXT PMD:
+
+.. code-block:: console
+
+    make config T=x86_64-native-linux-gcc && make // for x86-64
+    make config T=x86_32-native-linux-gcc && make // for x86-32
+    make config T=armv8a-linux-gcc && make // for ARMv8
+
+Bind the device to one of the kernel modules listed above
+
+.. code-block:: console
+
+    ./dpdk-devbind.py -b vfio-pci|igb_uio|uio_pci_generic bus_id:device_id.function_id
+
+Load an application (e.g. testpmd) with a default configuration (e.g. a single TX/RX queue):
+
+.. code-block:: console
+
+    ./testpmd -c 0xF -n 4 -- -i --portmask=0x1 --nb-cores=2
+
+Running BNXT PMD
+----------------
+
+The BNXT PMD can run on PF or VF.
+
+PCI-SIG Single Root I/O Virtualization (SR-IOV) involves the direct assignment of part of the network port resources to guest operating systems using the SR-IOV standard. NIC is logically distributed among multiple virtual machines (VMs), while still having global data in common to share with the PF and other VFs.
+
+Sysadmin can create and configure VFs:
+
+.. code-block:: console
+
+  echo num_vfs > /sys/bus/pci/devices/domain_id:bus_id:device_id.function_id/sriov_numvfs
+  (ex) echo 4 > /sys/bus/pci/devices/0000:82:00.0/sriov_numvfs
+
+Sysadmin also can change the VF property such as MAC address, transparent VLAN, TX rate limit, and trusted VF:
+
+.. code-block:: console
+
+  ip link set pf_id vf vf_id mac (mac_address) vlan (vlan_id) txrate (rate_value) trust (enable|disable)
+  (ex) ip link set 0 vf 0 mac 00:11:22:33:44:55 vlan 0x100 txrate 100 trust disable
+
+Running on VF
+~~~~~~~~~~~~~
+
+Flow Bifurcation
+^^^^^^^^^^^^^^^^
+
+The Flow Bifurcation splits the incoming data traffic to user space applications (such as DPDK applications) and/or kernel space programs (such as the Linux kernel stack). It can direct some traffic, for example data plane traffic, to DPDK, while directing some other traffic, for example control plane traffic, to the traditional Linux networking stack.
+
+Refer to https://doc.dpdk.org/guides/howto/flow_bifurcation.html
+
+Benefits of the flow bifurcation include:
+
+* Better performance with less CPU overhead, as user application can directly access the NIC for data path
+* NIC is still being controlled by the kernel, as control traffic is forwarded only to the kernel driver
+* Control commands, e.g. ethtool, will work as usual
+
+Running on a VF, the BNXT PMD supports the flow bifurcation with a combination of SR-IOV and packet classification and/or forwarding capability. In the simplest case of flow bifurcation, a PF driver configures a NIC to forward all user traffic directly to VFs with matching destination MAC address, while the rest of the traffic is forwarded to a PF. Note that the broadcast packets will be forwarded to both PF and VF.
+
+.. code-block:: console
+
+    (ex) ethtool --config-ntuple ens2f0 flow-type ether dst 00:01:02:03:00:01 vlan 10 vlan-mask 0xf000 action 0x100000000
+
+Trusted VF
+^^^^^^^^^^
+
+By default, VFs are *not* allowed to perform privileged operations, such as modifying the VF's MAC address in the guest. These security measures are designed to prevent possible attacks. However, when a DPDK application can be trusted (e.g., OVS-DPDK), these operations performed by a VF would be legitimate and should be allowed.
+
+To allow a VF to request "trusted mode", the trusted VF concept was introduced in Linux kernel 4.4, allowing a VF marked as trusted to perform some privileged operations.
+
+The BNXT PMD supports the trusted VF mode of operation. Only a PF can enable trusted attribute on the VF. It is preferable to enable the Trusted setting on a VF before starting applications. However, the BNXT PMD handles dynamic changes in trusted settings as well.
+
+Note that control commands, e.g., ethtool, will work via the kernel PF driver, *not* via the trusted VF driver.
+
+Operations supported by trusted VF:
+
+* MAC address configuration
+* Flow rule creation
+
+Operations *not* supported by trusted VF:
+
+* Firmware upgrade
+* Promiscuous mode setting
+
+Running on PF
+~~~~~~~~~~~~~
+
+Unlike a VF, when the BNXT PMD runs on a PF there are no restrictions placed on the features which the PF can enable or request. In a multiport NIC, each port will have a corresponding PF. Also, depending on the configuration of the NIC, there can be more than one PF associated per port. A sysadmin can load the kernel driver on one PF, and run the BNXT PMD on the other PF, or run the PMD on both PFs. In such cases, the firmware picks one of the PFs as the master PF.
+
+Much like in the trusted VF, the DPDK application must be *trusted* and expected to be *well-behaved*.
+
+Features
+--------
+
+The BNXT PMD supports the following features:
+
+* Port Control
+    * Port MTU
+    * LED
+    * Flow Control and Autoneg
+* Packet Filtering
+    * Unicast MAC Filter
+    * Multicast MAC Filter
+    * VLAN Filtering
+    * Allmulticast Mode
+    * Promiscuous Mode
+* Stateless Offloads
+    * CRC Offload
+    * Checksum Offload (IPv4, TCP, and UDP)
+    * Multi-Queue (TSS and RSS)
+    * Segmentation and Reassembly (TSO and LRO)
+* VLAN insert strip
+* Stats Collection
+* Generic Flow Offload
+
+Port Control
+~~~~~~~~~~~~
+
+**Port MTU**: BNXT PMD supports the MTU (Maximum Transmission Unit) up to 9,574 bytes:
+
+.. code-block:: console
+
+    testpmd> port config mtu (port_id) mtu_value
+    testpmd> show port info (port_id)
+
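+The same configuration is available through the ethdev API; a minimal
+sketch (``port_id`` is assumed to be a valid, configured port and
+``rte_ethdev.h`` to be included):
+
+.. code-block:: console
+
+    uint16_t new_mtu = 9000; // must be within the 9,574 byte limit
+
+    if (rte_eth_dev_set_mtu(port_id, new_mtu) != 0)
+        printf("failed to set MTU on port %u\n", port_id);
+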
+**LED**: The application turns on (or off) a port LED, typically for port identification:
+
+.. code-block:: console
+
+    int rte_eth_led_on (uint16_t port_id)
+    int rte_eth_led_off (uint16_t port_id)
+
+**Flow Control and Autoneg**: The application turns on (or off) flow control and/or auto-negotiation on a port:
+
+.. code-block:: console
+
+    testpmd> set flow_ctrl rx (on|off) (port_id)
+    testpmd> set flow_ctrl tx (on|off) (port_id)
+    testpmd> set flow_ctrl autoneg (on|off) (port_id)
+
+Note that the BNXT PMD does *not* support some options and ignores them when requested:
+
+* high_water
+* low_water
+* pause_time
+* mac_ctrl_frame_fwd
+* send_xon
+
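+As a reference, a minimal sketch of the equivalent ethdev call is shown
+below (``port_id`` is a placeholder; only fields honored by the BNXT PMD
+are set):
+
+.. code-block:: console
+
+    struct rte_eth_fc_conf fc_conf;
+
+    memset(&fc_conf, 0, sizeof(fc_conf));
+    fc_conf.mode = RTE_FC_FULL; // enable both RX and TX pause
+    fc_conf.autoneg = 1;        // request auto-negotiation
+
+    // high_water, low_water, pause_time, send_xon and
+    // mac_ctrl_frame_fwd are ignored by the BNXT PMD
+    rte_eth_dev_flow_ctrl_set(port_id, &fc_conf);
+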
+Packet Filtering
+~~~~~~~~~~~~~~~~
+
+Applications control the packet-forwarding behaviors with packet filters.
+
+The BNXT PMD supports hardware-based packet filtering:
+
+* UC (Unicast) MAC Filters
+    * No unicast packets are forwarded to an application except the one with DMAC address added to the port
+    * At initialization, the station MAC address is added to the port
+* MC (Multicast) MAC Filters
+    * No multicast packets are forwarded to an application except the one with MC address added to the port
+    * When the application listens to a multicast group, it adds the MC address to the port
+* VLAN Filtering Mode
+    * When enabled, no packets are forwarded to an application except the ones with the VLAN tag assigned to the port
+* Allmulticast Mode
+    * When enabled, every multicast packet received on the port is forwarded to the application
+    * Typical usage is routing applications
+* Promiscuous Mode
+    * When enabled, every packet received on the port is forwarded to the application
+
+Unicast MAC Filter
+^^^^^^^^^^^^^^^^^^
+
+The application adds (or removes) MAC addresses to enable (or disable) whitelist filtering to accept packets.
+
+.. code-block:: console
+
+    testpmd> show port (port_id) macs
+    testpmd> mac_addr (add|remove) (port_id) (XX:XX:XX:XX:XX:XX)
+
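+The equivalent ethdev calls, as a minimal sketch (``port_id`` and the MAC
+address are placeholders):
+
+.. code-block:: console
+
+    struct rte_ether_addr mac = {
+        .addr_bytes = { 0x00, 0x11, 0x22, 0x33, 0x44, 0x55 }
+    };
+
+    rte_eth_dev_mac_addr_add(port_id, &mac, 0);  // pool 0
+    // ...
+    rte_eth_dev_mac_addr_remove(port_id, &mac);
+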
+Multicast MAC Filter
+^^^^^^^^^^^^^^^^^^^^
+
+Application adds (or removes) Multicast addresses to enable (or disable) whitelist filtering to accept packets.
+
+.. code-block:: console
+
+    testpmd> show port (port_id) mcast_macs
+    testpmd> mcast_addr (add|remove) (port_id) (XX:XX:XX:XX:XX:XX)
+
+Note that the BNXT PMD supports up to 16 MC MAC filters. If the user adds more than 16 MC MACs, the BNXT PMD puts the port into the Allmulticast mode.
+
+VLAN Filtering
+^^^^^^^^^^^^^^
+
+The application enables (or disables) VLAN filtering mode. When the mode is enabled, no packets are forwarded to an application except ones with VLAN tag assigned for the application.
+
+.. code-block:: console
+
+    testpmd> vlan set filter (on|off) (port_id)
+    testpmd> rx_vlan (add|rm) (vlan_id) (port_id)
+
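+A minimal sketch of the same operation via the ethdev API (assumes
+``DEV_RX_OFFLOAD_VLAN_FILTER`` was enabled in ``rxmode.offloads`` when the
+port was configured):
+
+.. code-block:: console
+
+    rte_eth_dev_vlan_filter(port_id, 100, 1); // accept VLAN ID 100
+    rte_eth_dev_vlan_filter(port_id, 100, 0); // later, remove it
+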
+Allmulticast Mode
+^^^^^^^^^^^^^^^^^
+
+The application enables (or disables) the allmulticast mode. When the mode is enabled, every multicast packet received is forwarded to the application.
+
+.. code-block:: console
+
+    testpmd> show port info (port_id)
+    testpmd> set allmulti (port_id) (on|off)
+
+Promiscuous Mode
+^^^^^^^^^^^^^^^^
+
+The application enables (or disables) the promiscuous mode. When the mode is enabled on a port, every packet received on the port is forwarded to the application.
+
+.. code-block:: console
+
+    testpmd> show port info (port_id)
+    testpmd> set promisc (port_id) (on|off)
+
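+A minimal sketch of the equivalent ethdev calls (``port_id`` is a
+placeholder; the allmulticast mode has matching
+``rte_eth_allmulticast_enable()``/``rte_eth_allmulticast_disable()`` calls):
+
+.. code-block:: console
+
+    rte_eth_promiscuous_enable(port_id);
+    // rte_eth_promiscuous_get(port_id) returns 1 while enabled
+    rte_eth_promiscuous_disable(port_id);
+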
+Stateless Offloads
+~~~~~~~~~~~~~~~~~~
+
+Like Linux, DPDK provides the ability to enable hardware offload of some stateless processing (such as checksum calculation), relieving the CPU from having to burn cycles on every packet.
+
+Listed below are the stateless offloads supported by the BNXT PMD:
+
+* CRC offload (for both TX and RX packets)
+* Checksum Offload (for both TX and RX packets)
+    * IPv4 Checksum Offload
+    * TCP Checksum Offload
+    * UDP Checksum Offload
+* Segmentation/Reassembly Offloads
+    * TCP Segmentation Offload (TSO)
+    * Large Receive Offload (LRO)
+* Multi-Queue
+    * Transmit Side Scaling (TSS)
+    * Receive Side Scaling (RSS)
+
+Also, the BNXT PMD supports stateless offloads on inner frames for tunneled packets. Listed below are the tunneling protocols supported by the BNXT PMD:
+
+* VXLAN
+* GRE
+* NVGRE
+
+Note that enabling (or disabling) stateless offloads requires applications to stop DPDK before changing configuration.
+
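+For reference, a minimal sketch of requesting these offloads at configure
+time via the ethdev API (``nb_rxq``, ``nb_txq`` and ``port_id`` are
+placeholders; the offload set should be trimmed to what the application
+actually needs):
+
+.. code-block:: console
+
+    struct rte_eth_conf port_conf;
+
+    memset(&port_conf, 0, sizeof(port_conf));
+    port_conf.rxmode.offloads = DEV_RX_OFFLOAD_CHECKSUM |
+                                DEV_RX_OFFLOAD_TCP_LRO;
+    port_conf.txmode.offloads = DEV_TX_OFFLOAD_IPV4_CKSUM |
+                                DEV_TX_OFFLOAD_UDP_CKSUM |
+                                DEV_TX_OFFLOAD_TCP_CKSUM |
+                                DEV_TX_OFFLOAD_TCP_TSO;
+
+    // offloads take effect before the port is started
+    rte_eth_dev_configure(port_id, nb_rxq, nb_txq, &port_conf);
+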
+CRC Offload
+^^^^^^^^^^^
+
+The FCS (Frame Check Sequence) in the Ethernet frame is a four-octet CRC (Cyclic Redundancy Check) that allows detection of corrupted data within the entire frame as received on the receiver side.
+
+The BNXT PMD supports hardware-based CRC offload:
+
+* TX: calculate and insert CRC
+* RX: check and remove CRC, notify the application on CRC error
+
+Note that the CRC offload is always turned on.
+
+Checksum Offload
+^^^^^^^^^^^^^^^^
+
+The application enables hardware checksum calculation for IPv4, TCP, and UDP.
+
+.. code-block:: console
+
+    testpmd> port stop (port_id)
+    testpmd> csum set (ip|tcp|udp|outer-ip|outer-udp) (sw|hw) (port_id)
+    testpmd> set fwd csum
+
+Multi-Queue
+^^^^^^^^^^^
+
+Multi-Queue, also known as TSS (Transmit Side Scaling) or RSS (Receive Side Scaling), is a common networking technique that allows for more efficient load balancing across multiple CPU cores.
+
+The application enables multiple TX and RX queues when it is started.
+
+.. code-block:: console
+
+    testpmd -l 1,3,5 --master-lcore 1 -- --txq=2 --rxq=2 --nb-cores=2
+
+**TSS**
+
+TSS distributes network transmit processing across several hardware-based transmit queues, allowing outbound network traffic to be processed by multiple CPU cores.
+
+**RSS**
+
+RSS distributes network receive processing across several hardware-based receive queues, allowing inbound network traffic to be processed by multiple CPU cores.
+
+The application can select the RSS mode, i.e. select the header fields that are included for hash calculation. The BNXT PMD supports the RSS mode of ``default|ip|tcp|udp|none``, where default mode is L3 and L4.
+
+For tunneled packets, RSS hash is calculated over inner frame header fields. Applications may want to select the tunnel header fields for hash calculation, and it will be supported in 20.08 using RSS level.
+
+.. code-block:: console
+
+    testpmd> port config (port_id) rss (all|default|ip|tcp|udp|none)
+
+    // note that the testpmd defaults the RSS mode to ip
+    // ensure to issue the command below to enable L4 header (TCP or UDP) along with IPv4 header
+    testpmd> port config (port_id) rss default
+
+    // to check the current RSS configuration, such as RSS function and RSS key
+    testpmd> show port (port_id) rss-hash key
+
+    // RSS is enabled by default. However, application can disable RSS as follows
+    testpmd> port config (port_id) rss none
+
+Application can change the flow distribution, i.e. remap the received traffic to CPU cores, using RSS RETA (Redirection Table).
+
+.. code-block:: console
+
+    // application queries the current RSS RETA configuration
+    testpmd> show port (port_id) rss reta size (mask0, mask1)
+
+    // application changes the RSS RETA configuration
+    testpmd> port config (port_id) rss reta (hash, queue) [, (hash, queue)]
+
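+A minimal sketch of the same RETA update through the ethdev API (``port_id``
+is a placeholder; the example spreads all entries across RX queues 0 and 1):
+
+.. code-block:: console
+
+    struct rte_eth_dev_info dev_info;
+    struct rte_eth_rss_reta_entry64
+        reta_conf[ETH_RSS_RETA_SIZE_512 / RTE_RETA_GROUP_SIZE];
+    uint16_t i;
+
+    rte_eth_dev_info_get(port_id, &dev_info);
+    memset(reta_conf, 0, sizeof(reta_conf));
+
+    for (i = 0; i < dev_info.reta_size; i++) {
+        reta_conf[i / RTE_RETA_GROUP_SIZE].mask = UINT64_MAX;
+        reta_conf[i / RTE_RETA_GROUP_SIZE].reta[i % RTE_RETA_GROUP_SIZE] =
+            i % 2;
+    }
+    rte_eth_dev_rss_reta_update(port_id, reta_conf, dev_info.reta_size);
+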
+TSO
+^^^
+
+TSO (TCP Segmentation Offload), also known as LSO (Large Send Offload), enables the TCP/IP stack to pass to the NIC a larger datagram than the MTU (Maximum Transmission Unit). The NIC breaks it into multiple segments before sending it to the network.
+
+The BNXT PMD supports hardware-based TSO.
+
+.. code-block:: console
+
+    // display the status of TSO
+    testpmd> tso show (port_id)
+
+    // enable/disable TSO
+    testpmd> port config (port_id) tx_offload tcp_tso (on|off)
+
+    // set TSO segment size
+    testpmd> tso set segment_size (port_id)
+
+The BNXT PMD also supports hardware-based tunneled TSO.
+
+.. code-block:: console
+
+    // display the status of tunneled TSO
+    testpmd> tunnel_tso show (port_id)
+
+    // enable/disable tunneled TSO
+    testpmd> port config (port_id) tx_offload vxlan_tnl_tso|gre_tnl_tso (on|off)
+
+    // set tunneled TSO segment size
+    testpmd> tunnel_tso set segment_size (port_id)
+
+Note that the checksum offload is always assumed to be enabled for TSO.
+
+LRO
+^^^
+
+LRO (Large Receive Offload) enables the NIC to aggregate multiple incoming TCP/IP packets from a single stream into a larger buffer, before passing it to the networking stack.
+
+The BNXT PMD supports hardware-based LRO.
+
+.. code-block:: console
+
+    // display the status of LRO
+    testpmd> show port (port_id) rx_offload capabilities
+    testpmd> show port (port_id) rx_offload configuration
+
+    // enable/disable LRO
+    testpmd> port config (port_id) rx_offload tcp_lro (on|off)
+
+    // set max LRO packet (datagram) size
+    testpmd> port config (port_id) max-lro-pkt-size (max_size)
+
+The BNXT PMD also supports tunneled LRO.
+
+Some applications, such as routing, should *not* change the packet headers as they pass through (i.e. received from and sent back to the network). In such a case, GRO (Generic Receive Offload) should be used instead of LRO.
+
+VLAN Insert/Strip
+~~~~~~~~~~~~~~~~~
+
+DPDK application offloads VLAN insert/strip to improve performance. The BNXT PMD supports hardware-based VLAN insert/strip offload for both single and double VLAN packets.
+
+
+VLAN Insert
+^^^^^^^^^^^
+
+Application configures the VLAN TPID (Tag Protocol ID). By default, the TPID is 0x8100.
+
+.. code-block:: console
+
+    // configure outer TPID value for a port
+    testpmd> vlan set outer tpid (tpid_value) (port_id)
+
+Setting the inner TPID will be rejected, as the BNXT PMD supports inserting only an outer VLAN. Note that when a packet has a single VLAN, the tag is considered as outer, i.e. the inner VLAN is relevant only when a packet is double-tagged.
+
+The BNXT PMD supports various TPID values shown below. Any other values will be rejected.
+
+* ``0x8100``
+* ``0x88a8``
+* ``0x9100``
+* ``0x9200``
+* ``0x9300``
+
+The BNXT PMD supports the VLAN insert offload on a per-packet basis. The application provides the TCI (Tag Control Info) for a packet via mbuf. In turn, the BNXT PMD inserts the VLAN tag (via hardware) using the provided TCI along with the configured TPID.
+
+.. code-block:: console
+
+    // enable VLAN insert offload
+    testpmd> port config (port_id) tx_offload vlan_insert|qinq_insert (on|off)
+
+    if (mbuf->ol_flags & PKT_TX_QINQ)       // case-1: insert VLAN to single-tagged packet
+        tci_value = mbuf->vlan_tci_outer
+    else if (mbuf->ol_flags & PKT_TX_VLAN)  // case-2: insert VLAN to untagged packet
+        tci_value = mbuf->vlan_tci
+
+VLAN Strip
+^^^^^^^^^^
+
+The application configures the per-port VLAN strip offload.
+
+.. code-block:: console
+
+    // enable VLAN strip on a port
+    testpmd> port config (port_id) rx_offload vlan_strip (on|off)
+
+    // notify application VLAN strip via mbuf
+    mbuf->ol_flags |= PKT_RX_VLAN | PKT_RX_VLAN_STRIPPED // outer VLAN is found and stripped
+    mbuf->vlan_tci = tci_value                      // TCI of the stripped VLAN
+
+Time Synchronization
+~~~~~~~~~~~~~~~~~~~~
+
+System operators may run a PTP (Precision Time Protocol) client application to synchronize the time on the NIC (and optionally, on the system) to a PTP master.
+
+The BNXT PMD supports a PTP client application to communicate with a PTP master clock using DPDK IEEE1588 APIs. Note that the PTP client application needs to run on a PF and vector mode needs to be disabled.
+
+For the PTP time synchronization support, the BNXT PMD must be compiled with ``CONFIG_RTE_LIBRTE_IEEE1588=y`` (this compilation flag is currently pending).
+
+.. code-block:: console
+
+    testpmd> set fwd ieee1588 // enable IEEE 1588 mode
+
+When enabled, the BNXT PMD configures hardware to insert IEEE 1588 timestamps to the outgoing PTP packets and reports IEEE 1588 timestamps from the incoming PTP packets to application via mbuf.
+
+.. code-block:: console
+
+    // RX packet completion will indicate whether the packet is PTP
+    mbuf->ol_flags |= PKT_RX_IEEE1588_PTP
+
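+A minimal sketch of reading the timestamps through the DPDK IEEE1588 APIs
+(``port_id`` is a placeholder; error handling is omitted):
+
+.. code-block:: console
+
+    struct timespec ts;
+
+    rte_eth_timesync_enable(port_id);
+
+    // after receiving a packet flagged with PKT_RX_IEEE1588_TMST
+    if (rte_eth_timesync_read_rx_timestamp(port_id, &ts, 0) == 0)
+        printf("RX timestamp: %ld.%09ld\n", (long)ts.tv_sec, ts.tv_nsec);
+
+    // after transmitting a packet with PKT_TX_IEEE1588_TMST set
+    rte_eth_timesync_read_tx_timestamp(port_id, &ts);
+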
+Statistics Collection
+~~~~~~~~~~~~~~~~~~~~~
+
+In Linux, the *ethtool -S* command enables us to query the NIC stats. DPDK provides similar functionality via rte_eth_stats and rte_eth_xstats.
+
+The BNXT PMD supports both basic and extended stats collection:
+
+* Basic stats
+* Extended stats
+
+Basic Stats
+^^^^^^^^^^^
+
+The application collects per-port and per-queue stats using rte_eth_stats APIs.
+
+.. code-block:: console
+
+    testpmd> show port stats (port_id)
+
+Basic stats include:
+
+* ipackets
+* ibytes
+* opackets
+* obytes
+* imissed
+* ierrors
+* oerrors
+
+By default, the BNXT PMD supports per-queue stats for 16 queues. For more than 16 queues, BNXT PMD should be compiled with ``CONFIG_RTE_ETHDEV_QUEUE_STAT_CNTRS`` set to the desired number of queues.
+
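+A minimal sketch of the same query via the ethdev API (``port_id`` is a
+placeholder):
+
+.. code-block:: console
+
+    struct rte_eth_stats stats;
+
+    if (rte_eth_stats_get(port_id, &stats) == 0)
+        printf("rx %"PRIu64" tx %"PRIu64" missed %"PRIu64"\n",
+               stats.ipackets, stats.opackets, stats.imissed);
+
+    rte_eth_stats_reset(port_id); // optional: clear the counters
+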
+Extended Stats
+^^^^^^^^^^^^^^
+
+Unlike basic stats, the extended stats are vendor-specific, i.e. each vendor provides its own set of counters.
+
+The BNXT PMD provides a rich set of counters, including per-flow counters, per-cos counters, per-priority counters, etc.
+
+.. code-block:: console
+
+    testpmd> show port xstats (port_id)
+
+Shown below is the elaborated sequence to retrieve extended stats:
+
+.. code-block:: console
+
+    // application queries the number of xstats
+    len = rte_eth_xstats_get(port_id, NULL, 0);
+    // BNXT PMD returns the size of xstats array (i.e. the number of entries)
+    // BNXT PMD returns 0, if the feature is compiled out or disabled
+
+    // application allocates memory for xstats
+    struct rte_eth_xstat_name *names; // name is 64 characters or less
+    struct rte_eth_xstat *xstats;
+    names = calloc(len, sizeof(*names));
+    xstats = calloc(len, sizeof(*xstats));
+
+    // application retrieves xstats // names and values
+    ret = rte_eth_xstats_get_names(port_id, names, len);
+    ret = rte_eth_xstats_get(port_id, xstats, len);
+
+    // application checks the xstats
+    // application may repeat the below:
+    rte_eth_xstats_reset(port_id); // reset the xstats
+
+    // reset can be skipped, if application wants to see accumulated stats
+    // run traffic
+    // probably stop the traffic
+    // retrieve xstats // no need to retrieve xstats names again
+    // check xstats
+
+See Appendix section for a list of extended counters provided by the BNXT PMD.
+
+Generic Flow Offload
+~~~~~~~~~~~~~~~~~~~~
+
+Applications can benefit by offloading all or part of flow processing to hardware. For example, applications can offload packet classification only (partial offload) or the whole match-action (full offload).
+
+DPDK offers the Generic Flow API (rte_flow API) to configure hardware to perform flow processing.
+
+Listed below are the rte_flow APIs BNXT PMD supports:
+
+* rte_flow_validate
+* rte_flow_create
+* rte_flow_destroy
+* rte_flow_flush
+
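+As an illustration only, a minimal sketch of creating a flow that steers
+packets with a given destination MAC to RX queue 1 (``port_id`` and the
+MAC address are placeholders; error handling is abbreviated):
+
+.. code-block:: console
+
+    struct rte_flow_attr attr = { .ingress = 1 };
+    struct rte_flow_item_eth eth_spec = {
+        .dst.addr_bytes = { 0x00, 0x01, 0x02, 0x03, 0x00, 0x01 }
+    };
+    struct rte_flow_item_eth eth_mask = {
+        .dst.addr_bytes = { 0xff, 0xff, 0xff, 0xff, 0xff, 0xff }
+    };
+    struct rte_flow_item pattern[] = {
+        { .type = RTE_FLOW_ITEM_TYPE_ETH,
+          .spec = &eth_spec, .mask = &eth_mask },
+        { .type = RTE_FLOW_ITEM_TYPE_END },
+    };
+    struct rte_flow_action_queue queue = { .index = 1 };
+    struct rte_flow_action actions[] = {
+        { .type = RTE_FLOW_ACTION_TYPE_QUEUE, .conf = &queue },
+        { .type = RTE_FLOW_ACTION_TYPE_END },
+    };
+    struct rte_flow_error error;
+    struct rte_flow *flow = NULL;
+
+    if (rte_flow_validate(port_id, &attr, pattern, actions, &error) == 0)
+        flow = rte_flow_create(port_id, &attr, pattern, actions, &error);
+
+    // ... later, tear the flow down
+    if (flow != NULL)
+        rte_flow_destroy(port_id, flow, &error);
+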
+Host Based Flow Table Management
+^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
+
+Starting with 20.05, the BNXT PMD supports host-based flow table management. This is a new mechanism that should allow higher flow scalability than what is currently supported. This new approach also defines a new rte_flow parser and mapper, which currently supports basic packet classification in the receive path. The patchset uses a newly implemented control-plane firmware interface which optimizes flow insertions and deletions.
+
+This is a tech preview feature, and is disabled by default. It can be enabled using bnxt devargs, for example: ``-w 0000:0d:00.0,host-based-truflow=1``.
+
+Application Support
+-------------------
+
+Firmware
+~~~~~~~~
+
+The BNXT PMD allows the application to retrieve the firmware version.
+
+.. code-block:: console
+
+    testpmd> show port info (port_id)
+
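+A minimal sketch of retrieving the version string via the ethdev API
+(``port_id`` is a placeholder):
+
+.. code-block:: console
+
+    char fw_version[64];
+
+    if (rte_eth_dev_fw_version_get(port_id, fw_version,
+                                   sizeof(fw_version)) == 0)
+        printf("firmware: %s\n", fw_version);
+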
+Note that the applications cannot update the firmware using BNXT PMD.
+
+Multiple Processes
+~~~~~~~~~~~~~~~~~~
+
+When two or more DPDK applications (e.g., testpmd and dpdk-pdump) share a single instance of DPDK, the BNXT PMD supports a single primary application and one or more secondary applications. Note that the DPDK-layer (*not* the PMD-layer) ensures there is only one primary application.
+
+There are two modes:
+
+Manual mode
+
+* Application notifies whether it is primary or secondary using *proc-type* flag
+* 1st process should be spawned with ``--proc-type=primary``
+* All subsequent processes should be spawned with ``--proc-type=secondary``
+
+Auto detection mode
+
+* Application is using ``proc-type=auto`` flag
+* A process is spawned as a secondary if a primary is already running
+
+The BNXT PMD uses this information to skip device initialization where appropriate, i.e. it performs device initialization only when brought up by the primary application.
+
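+A minimal sketch of how an application typically branches on the process
+type (the EAL call below is generic DPDK, not BNXT-specific):
+
+.. code-block:: console
+
+    if (rte_eal_process_type() == RTE_PROC_PRIMARY) {
+        // primary: configure the device, set up queues, start the port
+    } else {
+        // secondary: attach to shared device data, skip initialization
+    }
+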
+Runtime Queue Setup
+~~~~~~~~~~~~~~~~~~~
+
+Typically, a DPDK application allocates TX and RX queues statically: i.e. queues are allocated at start. However, an application may want to increase (or decrease) the number of queues dynamically for various reasons, e.g. power savings.
+
+The BNXT PMD allows applications to increase or decrease the number of queues at runtime.
+
+.. code-block:: console
+
+    testpmd> port config all (rxq|txq) (num_queues)
+
+Note that a DPDK application must allocate default queues (one for TX and one for RX at minimum) at initialization.
+
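+A minimal sketch of adding an RX queue at runtime via the ethdev API
+(``port_id`` and ``mbuf_pool`` are placeholders; the capability check is
+generic DPDK):
+
+.. code-block:: console
+
+    struct rte_eth_dev_info dev_info;
+
+    rte_eth_dev_info_get(port_id, &dev_info);
+    if (dev_info.dev_capa & RTE_ETH_DEV_CAPA_RUNTIME_RX_QUEUE_SETUP) {
+        // add RX queue 2 while the port is already started
+        rte_eth_rx_queue_setup(port_id, 2, 512,
+                               rte_eth_dev_socket_id(port_id),
+                               NULL, mbuf_pool);
+        rte_eth_dev_rx_queue_start(port_id, 2);
+    }
+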
+Descriptor Status
+~~~~~~~~~~~~~~~~~
+
+Applications may use the descriptor status for various reasons, e.g. for power savings. For example, an application may stop polling and change to interrupt mode when the descriptor status shows no packets to service for a while.
+
+The BNXT PMD allows the application to retrieve both TX and RX descriptor status.
+
+.. code-block:: console
+
+    testpmd> show port (port_id) (rxq|txq) (queue_id) desc (desc_id) status
+
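+A minimal sketch of polling the RX descriptor status via the ethdev API
+(``port_id`` is a placeholder; queue 0 and offset 64 are arbitrary):
+
+.. code-block:: console
+
+    int status = rte_eth_rx_descriptor_status(port_id, 0, 64);
+
+    if (status == RTE_ETH_RX_DESC_DONE) {
+        // at least 64 packets are waiting to be received
+    } else if (status == RTE_ETH_RX_DESC_AVAIL) {
+        // descriptor not yet filled by hardware: the queue is quiet
+    }
+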
+Bonding
+~~~~~~~
+
+DPDK implements a light-weight library to allow PMDs to be bonded together and provide a single logical PMD to the application.
+
+.. code-block:: console
+
+    testpmd -l 0-3 -n4 --vdev 'net_bonding0,mode=0,slave=<PCI B:D.F device 1>,slave=<PCI B:D.F device 2>,mac=XX:XX:XX:XX:XX:XX' -- --socket-num=1 -i --port-topology=chained
+    (ex) testpmd -l 1,3,5,7,9 -n4 --vdev 'net_bonding0,mode=0,slave=0000:82:00.0,slave=0000:82:00.1,mac=00:1e:67:1d:fd:1d' -- --socket-num=1 -i --port-topology=chained
+
+Vector Processing
+-----------------
+
+Vector processing provides significantly improved performance over scalar processing.
+
+The BNXT PMD supports the vector processing using SSE (Streaming SIMD Extensions) instructions on x86 platforms. The BNXT vPMD (vector mode PMD) is currently limited to Intel/AMD CPU architecture. Support for ARM is *not* currently implemented.
+
+This improved performance comes from several optimizations:
+
+* Batching
+    * TX: processing completions in bulk
+    * RX: allocating mbufs in bulk
+* Chained mbufs are *not* supported, i.e. a packet should fit a single mbuf
+* Some stateless offloads are *not* supported with vector processing
+    * TX: no offloads will be supported
+    * RX: reduced RX offloads (listed below) will be supported::
+
+       DEV_RX_OFFLOAD_VLAN_STRIP
+       DEV_RX_OFFLOAD_KEEP_CRC
+       DEV_RX_OFFLOAD_JUMBO_FRAME
+       DEV_RX_OFFLOAD_IPV4_CKSUM
+       DEV_RX_OFFLOAD_UDP_CKSUM
+       DEV_RX_OFFLOAD_TCP_CKSUM
+       DEV_RX_OFFLOAD_OUTER_IPV4_CKSUM
+       DEV_RX_OFFLOAD_RSS_HASH
+       DEV_RX_OFFLOAD_VLAN_FILTER
+
+The BNXT Vector PMD is enabled in DPDK builds by default. When required, it has to be disabled in the DPDK build configuration by setting ``CONFIG_RTE_LIBRTE_BNXT_INC_VECTOR=n``.
+
+However, a decision to enable vector mode will be made when the port transitions from stopped to started. Any TX offloads or some RX offloads (other than listed above) will disable the vector mode. Therefore, offload configuration changes that impact vector mode must be made when the port is stopped.
+
+Note that TX (or RX) vector mode can be enabled independently from RX (or TX) vector mode.
+
+Appendix
+--------
+
+Supported Chipsets and Adapters
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+
+BCM5730x NetXtreme-C® Family of Ethernet Network Controllers
+^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
+
+Information about Ethernet adapters in the NetXtreme family of adapters can be found in the `NetXtreme® Brand section <https://www.broadcom.com/products/ethernet-connectivity/network-adapters/>`_ of the `Broadcom website <http://www.broadcom.com/>`_.
+
+* ``M150c ... Single-port 40/50 Gigabit Ethernet Adapter``
+* ``P150c ... Single-port 40/50 Gigabit Ethernet Adapter``
+* ``P225c ... Dual-port 10/25 Gigabit Ethernet Adapter``
+
+BCM574xx/575xx NetXtreme-E® Family of Ethernet Network Controllers
+^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
+
+Information about Ethernet adapters in the NetXtreme family of adapters can be found in the `NetXtreme® Brand section <https://www.broadcom.com/products/ethernet-connectivity/network-adapters/>`_ of the `Broadcom website <http://www.broadcom.com/>`_.
+
+* ``M125P .... Single-port OCP 2.0 10/25 Gigabit Ethernet Adapter``
+* ``M150P .... Single-port OCP 2.0 50 Gigabit Ethernet Adapter``
+* ``M150PM ... Single-port OCP 2.0 Multi-Host 50 Gigabit Ethernet Adapter``
+* ``M210P .... Dual-port OCP 2.0 10 Gigabit Ethernet Adapter``
+* ``M210TP ... Dual-port OCP 2.0 10 Gigabit Ethernet Adapter``
+* ``M1100G ... Single-port OCP 2.0 10/25/50/100 Gigabit Ethernet Adapter``
+* ``N150G .... Single-port OCP 3.0 50 Gigabit Ethernet Adapter``
+* ``M225P .... Dual-port OCP 2.0 10/25 Gigabit Ethernet Adapter``
+* ``N210P .... Dual-port OCP 3.0 10 Gigabit Ethernet Adapter``
+* ``N210TP ... Dual-port OCP 3.0 10 Gigabit Ethernet Adapter``
+* ``N225P .... Dual-port OCP 3.0 10/25 Gigabit Ethernet Adapter``
+* ``N250G .... Dual-port OCP 3.0 50 Gigabit Ethernet Adapter``
+* ``N410SG ... Quad-port OCP 3.0 10 Gigabit Ethernet Adapter``
+* ``N410SGBT . Quad-port OCP 3.0 10 Gigabit Ethernet Adapter``
+* ``N425G .... Quad-port OCP 3.0 10/25 Gigabit Ethernet Adapter``
+* ``N1100G ... Single-port OCP 3.0 10/25/50/100 Gigabit Ethernet Adapter``
+* ``N2100G ... Dual-port OCP 3.0 10/25/50/100 Gigabit Ethernet Adapter``
+* ``N2200G ... Dual-port OCP 3.0 10/25/50/100/200 Gigabit Ethernet Adapter``
+* ``P150P .... Single-port 50 Gigabit Ethernet Adapter``
+* ``P210P .... Dual-port 10 Gigabit Ethernet Adapter``
+* ``P210TP ... Dual-port 10 Gigabit Ethernet Adapter``
+* ``P225P .... Dual-port 10/25 Gigabit Ethernet Adapter``
+* ``P410SG ... Quad-port 10 Gigabit Ethernet Adapter``
+* ``P410SGBT . Quad-port 10 Gigabit Ethernet Adapter``
+* ``P425G .... Quad-port 10/25 Gigabit Ethernet Adapter``
+* ``P1100G ... Single-port 10/25/50/100 Gigabit Ethernet Adapter``
+* ``P2100G ... Dual-port 10/25/50/100 Gigabit Ethernet Adapter``
+* ``P2200G ... Dual-port 10/25/50/100/200 Gigabit Ethernet Adapter``
+
+BCM588xx NetXtreme-S® Family of SmartNIC Network Controllers
+^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
+
+Information about the Stingray family of SmartNIC adapters can be found in the `Stingray® Brand section <https://www.broadcom.com/products/ethernet-connectivity/smartnic/>`_ of the `Broadcom website <http://www.broadcom.com/>`_.
+
+* ``PS225 ... Dual-port 25 Gigabit Ethernet SmartNIC``
+
+BCM5873x StrataGX® Family of Communications Processors
+^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
+
+These ARM-based processors target a broad range of networking applications, including virtual CPE (vCPE) and NFV appliances, 10G service routers and gateways, control plane processing for Ethernet switches, and network-attached storage (NAS).
+
+* ``StrataGX BCM58732 ... Octal-Core 3.0GHz 64-bit ARM®v8 Cortex®-A72 based SoC``
+
+Extended Stats
+~~~~~~~~~~~~~~
+
+Listed below are the extended stats supported by the BNXT PMD:
+
+* ``rx_good_packets``
+* ``tx_good_packets``
+* ``rx_good_bytes``
+* ``tx_good_bytes``
+* ``rx_missed_errors``
+* ``rx_errors``
+* ``tx_errors``
+* ``rx_mbuf_allocation_errors``
+* ``rx_q0packets``
+* ``rx_q0bytes``
+* ``rx_q0errors``
+* ``tx_q0packets``
+* ``tx_q0bytes``
+* ``rx_64b_frames``
+* ``rx_65b_127b_frames``
+* ``rx_128b_255b_frames``
+* ``rx_256b_511b_frames``
+* ``rx_512b_1023b_frames``
+* ``rx_1024b_1518b_frames``
+* ``rx_good_vlan_frames``
+* ``rx_1519b_2047b_frames``
+* ``rx_2048b_4095b_frames``
+* ``rx_4096b_9216b_frames``
+* ``rx_9217b_16383b_frames``
+* ``rx_total_frames``
+* ``rx_ucast_frames``
+* ``rx_mcast_frames``
+* ``rx_bcast_frames``
+* ``rx_fcs_err_frames``
+* ``rx_ctrl_frames``
+* ``rx_pause_frames``
+* ``rx_pfc_frames``
+* ``rx_unsupported_opcode_frames``
+* ``rx_unsupported_da_pausepfc_frames``
+* ``rx_wrong_sa_frames``
+* ``rx_align_err_frames``
+* ``rx_oor_len_frames``
+* ``rx_code_err_frames``
+* ``rx_false_carrier_frames``
+* ``rx_ovrsz_frames``
+* ``rx_jbr_frames``
+* ``rx_mtu_err_frames``
+* ``rx_match_crc_frames``
+* ``rx_promiscuous_frames``
+* ``rx_tagged_frames``
+* ``rx_double_tagged_frames``
+* ``rx_trunc_frames``
+* ``rx_good_frames``
+* ``rx_sch_crc_err_frames``
+* ``rx_undrsz_frames``
+* ``rx_frag_frames``
+* ``rx_eee_lpi_events``
+* ``rx_eee_lpi_duration``
+* ``rx_llfc_physical_msgs``
+* ``rx_llfc_logical_msgs``
+* ``rx_llfc_msgs_with_crc_err``
+* ``rx_hcfc_msgs``
+* ``rx_hcfc_msgs_with_crc_err``
+* ``rx_bytes``
+* ``rx_runt_bytes``
+* ``rx_runt_frames``
+* ``rx_pfc_xon2xoff_frames_pri0``
+* ``rx_pfc_xon2xoff_frames_pri1``
+* ``rx_pfc_xon2xoff_frames_pri2``
+* ``rx_pfc_xon2xoff_frames_pri3``
+* ``rx_pfc_xon2xoff_frames_pri4``
+* ``rx_pfc_xon2xoff_frames_pri5``
+* ``rx_pfc_xon2xoff_frames_pri6``
+* ``rx_pfc_xon2xoff_frames_pri7``
+* ``rx_pfc_ena_frames_pri0``
+* ``rx_pfc_ena_frames_pri1``
+* ``rx_pfc_ena_frames_pri2``
+* ``rx_pfc_ena_frames_pri3``
+* ``rx_pfc_ena_frames_pri4``
+* ``rx_pfc_ena_frames_pri5``
+* ``rx_pfc_ena_frames_pri6``
+* ``rx_pfc_ena_frames_pri7``
+* ``rx_stat_discard``
+* ``rx_stat_err``
+* ``tx_64b_frames``
+* ``tx_65b_127b_frames``
+* ``tx_128b_255b_frames``
+* ``tx_256b_511b_frames``
+* ``tx_512b_1023b_frames``
+* ``tx_1024b_1518b_frames``
+* ``tx_good_vlan_frames``
+* ``tx_1519b_2047b_frames``
+* ``tx_2048b_4095b_frames``
+* ``tx_4096b_9216b_frames``
+* ``tx_9217b_16383b_frames``
+* ``tx_good_frames``
+* ``tx_total_frames``
+* ``tx_ucast_frames``
+* ``tx_mcast_frames``
+* ``tx_bcast_frames``
+* ``tx_pause_frames``
+* ``tx_pfc_frames``
+* ``tx_jabber_frames``
+* ``tx_fcs_err_frames``
+* ``tx_control_frames``
+* ``tx_oversz_frames``
+* ``tx_single_dfrl_frames``
+* ``tx_multi_dfrl_frames``
+* ``tx_single_coll_frames``
+* ``tx_multi_coll_frames``
+* ``tx_late_coll_frames``
+* ``tx_excessive_coll_frames``
+* ``tx_frag_frames``
+* ``tx_err``
+* ``tx_tagged_frames``
+* ``tx_dbl_tagged_frames``
+* ``tx_runt_frames``
+* ``tx_fifo_underruns``
+* ``tx_eee_lpi_events``
+* ``tx_eee_lpi_duration``
+* ``tx_total_collisions``
+* ``tx_bytes``
+* ``tx_pfc_ena_frames_pri0``
+* ``tx_pfc_ena_frames_pri1``
+* ``tx_pfc_ena_frames_pri2``
+* ``tx_pfc_ena_frames_pri3``
+* ``tx_pfc_ena_frames_pri4``
+* ``tx_pfc_ena_frames_pri5``
+* ``tx_pfc_ena_frames_pri6``
+* ``tx_pfc_ena_frames_pri7``
+* ``tx_llfc_logical_msgs``
+* ``tx_hcfc_msgs``
+* ``tx_xthol_frames``
+* ``tx_stat_discard``
+* ``tx_stat_error``
+* ``tx_ucast_pkts``
+* ``tx_mcast_pkts``
+* ``tx_bcast_pkts``
+* ``tx_discard_pkts``
+* ``tx_drop_pkts``
+* ``tx_ucast_bytes``
+* ``tx_mcast_bytes``
+* ``tx_bcast_bytes``
+* ``rx_ucast_pkts``
+* ``rx_mcast_pkts``
+* ``rx_bcast_pkts``
+* ``rx_discard_pkts``
+* ``rx_drop_pkts``
+* ``rx_ucast_bytes``
+* ``rx_mcast_bytes``
+* ``rx_bcast_bytes``
+* ``rx_agg_pkts``
+* ``rx_agg_bytes``
+* ``rx_agg_events``
+* ``rx_agg_aborts``
+* ``link_down_events``
+* ``continuous_pause_events``
+* ``resume_pause_events``
+* ``continuous_roce_pause_events``
+* ``resume_roce_pause_events``
+* ``rx_bytes_cos0``
+* ``rx_bytes_cos1``
+* ``rx_bytes_cos2``
+* ``rx_bytes_cos3``
+* ``rx_bytes_cos4``
+* ``rx_bytes_cos5``
+* ``rx_bytes_cos6``
+* ``rx_bytes_cos7``
+* ``rx_packets_cos0``
+* ``rx_packets_cos1``
+* ``rx_packets_cos2``
+* ``rx_packets_cos3``
+* ``rx_packets_cos4``
+* ``rx_packets_cos5``
+* ``rx_packets_cos6``
+* ``rx_packets_cos7``
+* ``pfc_pri0_rx_duration_us``
+* ``pfc_pri0_rx_transitions``
+* ``pfc_pri1_rx_duration_us``
+* ``pfc_pri1_rx_transitions``
+* ``pfc_pri2_rx_duration_us``
+* ``pfc_pri2_rx_transitions``
+* ``pfc_pri3_rx_duration_us``
+* ``pfc_pri3_rx_transitions``
+* ``pfc_pri4_rx_duration_us``
+* ``pfc_pri4_rx_transitions``
+* ``pfc_pri5_rx_duration_us``
+* ``pfc_pri5_rx_transitions``
+* ``pfc_pri6_rx_duration_us``
+* ``pfc_pri6_rx_transitions``
+* ``pfc_pri7_rx_duration_us``
+* ``pfc_pri7_rx_transitions``
+* ``rx_bits``
+* ``rx_buffer_passed_threshold``
+* ``rx_pcs_symbol_err``
+* ``rx_corrected_bits``
+* ``rx_discard_bytes_cos0``
+* ``rx_discard_bytes_cos1``
+* ``rx_discard_bytes_cos2``
+* ``rx_discard_bytes_cos3``
+* ``rx_discard_bytes_cos4``
+* ``rx_discard_bytes_cos5``
+* ``rx_discard_bytes_cos6``
+* ``rx_discard_bytes_cos7``
+* ``rx_discard_packets_cos0``
+* ``rx_discard_packets_cos1``
+* ``rx_discard_packets_cos2``
+* ``rx_discard_packets_cos3``
+* ``rx_discard_packets_cos4``
+* ``rx_discard_packets_cos5``
+* ``rx_discard_packets_cos6``
+* ``rx_discard_packets_cos7``
+* ``tx_bytes_cos0``
+* ``tx_bytes_cos1``
+* ``tx_bytes_cos2``
+* ``tx_bytes_cos3``
+* ``tx_bytes_cos4``
+* ``tx_bytes_cos5``
+* ``tx_bytes_cos6``
+* ``tx_bytes_cos7``
+* ``tx_packets_cos0``
+* ``tx_packets_cos1``
+* ``tx_packets_cos2``
+* ``tx_packets_cos3``
+* ``tx_packets_cos4``
+* ``tx_packets_cos5``
+* ``tx_packets_cos6``
+* ``tx_packets_cos7``
+* ``pfc_pri0_tx_duration_us``
+* ``pfc_pri0_tx_transitions``
+* ``pfc_pri1_tx_duration_us``
+* ``pfc_pri1_tx_transitions``
+* ``pfc_pri2_tx_duration_us``
+* ``pfc_pri2_tx_transitions``
+* ``pfc_pri3_tx_duration_us``
+* ``pfc_pri3_tx_transitions``
+* ``pfc_pri4_tx_duration_us``
+* ``pfc_pri4_tx_transitions``
+* ``pfc_pri5_tx_duration_us``
+* ``pfc_pri5_tx_transitions``
+* ``pfc_pri6_tx_duration_us``
+* ``pfc_pri6_tx_transitions``
+* ``pfc_pri7_tx_duration_us``
+* ``pfc_pri7_tx_transitions``
+* ``flow_0_bytes``
+* ``flow_0_packets``
+* ``...``
+* ``flow_1023_bytes``
+* ``flow_1023_packets``
-- 
2.21.1 (Apple Git-122.3)


^ permalink raw reply	[flat|nested] 16+ messages in thread

* Re: [dpdk-dev] [PATCH] doc: update bnxt guide
  2020-05-01 22:03 [dpdk-dev] [PATCH] doc: update bnxt guide Ajit Khaparde
@ 2020-05-05 16:58 ` Ajit Khaparde
  2020-05-06  8:57 ` Ferruh Yigit
  2020-05-14 10:39 ` Kovacevic, Marko
  2 siblings, 0 replies; 16+ messages in thread
From: Ajit Khaparde @ 2020-05-05 16:58 UTC (permalink / raw)
  To: dpdk-dev; +Cc: JP Lee

On Fri, May 1, 2020 at 3:03 PM Ajit Khaparde <ajit.khaparde@broadcom.com>
wrote:

>     - Update list of supported adapters.
>     - Update list of supported features.
>     - Add some details to describe the features.
>     - Remove obsolete limitations.
>     - Fix and update links.
>
> Signed-off-by: JP Lee <jongpil.lee@broadcom.com>
> Signed-off-by: Ajit Khaparde <ajit.khaparde@broadcom.com>
> ---
>  doc/guides/nics/bnxt.rst | 1131 +++++++++++++++++++++++++++++++++-----
>  1 file changed, 1001 insertions(+), 130 deletions(-)
>
Applied to dpdk-next-net-brcm. Thanks

^ permalink raw reply	[flat|nested] 16+ messages in thread

* Re: [dpdk-dev] [PATCH] doc: update bnxt guide
  2020-05-01 22:03 [dpdk-dev] [PATCH] doc: update bnxt guide Ajit Khaparde
  2020-05-05 16:58 ` Ajit Khaparde
@ 2020-05-06  8:57 ` Ferruh Yigit
  2020-05-06 16:09   ` Ajit Khaparde
  2020-05-14 10:39 ` Kovacevic, Marko
  2 siblings, 1 reply; 16+ messages in thread
From: Ferruh Yigit @ 2020-05-06  8:57 UTC (permalink / raw)
  To: Ajit Khaparde, dev, Kovacevic, Marko; +Cc: JP Lee, John McNamara

On 5/1/2020 11:03 PM, Ajit Khaparde wrote:
>     - Update list of supported adapters.
>     - Update list of supported features.
>     - Add some details to describe the features.
>     - Remove obsolete limitations.
>     - Fix and update links.
> 
> Signed-off-by: JP Lee <jongpil.lee@broadcom.com>
> Signed-off-by: Ajit Khaparde <ajit.khaparde@broadcom.com>
> ---
>  doc/guides/nics/bnxt.rst | 1131 +++++++++++++++++++++++++++++++++-----
>  1 file changed, 1001 insertions(+), 130 deletions(-)
> 
> diff --git a/doc/guides/nics/bnxt.rst b/doc/guides/nics/bnxt.rst
> index 434ba9d6c..0da312557 100644
> --- a/doc/guides/nics/bnxt.rst
> +++ b/doc/guides/nics/bnxt.rst
> @@ -1,138 +1,1009 @@
> -..  SPDX-License-Identifier: BSD-3-Clause
> -    Copyright 2016-2019 Broadcom
> +.. SPDX-License-Identifier: BSD-3-Clause
> +   Copyright 2020 Broadcom Inc.
>  
>  BNXT Poll Mode Driver
>  =====================
>  
> -The BNXT PMD (**librte_pmd_bnxt**) implements support for adapters based on
> -Ethernet controllers and SoCs belonging to the **Broadcom BCM5730X NetXtreme-C®
> -Family of Ethernet Network Controllers**, the **Broadcom BCM574XX/BCM575XX
> -NetXtreme-E® Family of Ethernet Network Controllers**, the **Broadcom BCM588XX
> -Stingray Family of SmartNIC Adapters**, and the **Broadcom StrataGX® BCM5871X
> -Series of Communications Processors**.  A complete list with links to reference
> -material is included below.
> +The Broadcom BNXT PMD (Poll Mode Driver, librte_pmd_bnxt) implements support for adapters based on Ethernet controllers and SoCs belonging to the Broadcom BCM574XX/BCM575XX NetXtreme-E® Family of Ethernet Network Controllers, the Broadcom BCM588XX Stingray Family of Smart NIC Adapters, and the Broadcom StrataGX® BCM5873X Series of Communications Processors.

Hi Ajit,

This adds single line with 358 char length, most probably the web output is good
but it is hard to read/maintain as .rst doc.

What is the motivation for joining the multiple lines?

For the content we need your ack, but for the style/formatting/language etc..
would you mind waiting for an ack from doc maintainers, cc'ed, before merge?

Thanks,
ferruh

^ permalink raw reply	[flat|nested] 16+ messages in thread

* Re: [dpdk-dev] [PATCH] doc: update bnxt guide
  2020-05-06  8:57 ` Ferruh Yigit
@ 2020-05-06 16:09   ` Ajit Khaparde
  2020-05-13  9:08     ` Ferruh Yigit
  0 siblings, 1 reply; 16+ messages in thread
From: Ajit Khaparde @ 2020-05-06 16:09 UTC (permalink / raw)
  To: Ferruh Yigit; +Cc: dpdk-dev, Kovacevic, Marko, JP Lee, John McNamara

On Wed, May 6, 2020 at 1:57 AM Ferruh Yigit <ferruh.yigit@intel.com> wrote:

> On 5/1/2020 11:03 PM, Ajit Khaparde wrote:
> >     - Update list of supported adapters.
> >     - Update list of supported features.
> >     - Add some details to describe the features.
> >     - Remove obsolete limitations.
> >     - Fix and update links.
> >
> > Signed-off-by: JP Lee <jongpil.lee@broadcom.com>
> > Signed-off-by: Ajit Khaparde <ajit.khaparde@broadcom.com>
> > ---
> >  doc/guides/nics/bnxt.rst | 1131 +++++++++++++++++++++++++++++++++-----
> >  1 file changed, 1001 insertions(+), 130 deletions(-)
> >
> > diff --git a/doc/guides/nics/bnxt.rst b/doc/guides/nics/bnxt.rst
> > index 434ba9d6c..0da312557 100644
> > --- a/doc/guides/nics/bnxt.rst
> > +++ b/doc/guides/nics/bnxt.rst
> > @@ -1,138 +1,1009 @@
> > -..  SPDX-License-Identifier: BSD-3-Clause
> > -    Copyright 2016-2019 Broadcom
> > +.. SPDX-License-Identifier: BSD-3-Clause
> > +   Copyright 2020 Broadcom Inc.
> >
> >  BNXT Poll Mode Driver
> >  =====================
> >
> > -The BNXT PMD (**librte_pmd_bnxt**) implements support for adapters
> based on
> > -Ethernet controllers and SoCs belonging to the **Broadcom BCM5730X
> NetXtreme-C®
> > -Family of Ethernet Network Controllers**, the **Broadcom
> BCM574XX/BCM575XX
> > -NetXtreme-E® Family of Ethernet Network Controllers**, the **Broadcom
> BCM588XX
> > -Stingray Family of SmartNIC Adapters**, and the **Broadcom StrataGX®
> BCM5871X
> > -Series of Communications Processors**.  A complete list with links to
> reference
> > -material is included below.
> > +The Broadcom BNXT PMD (Poll Mode Driver, librte_pmd_bnxt) implements
> support for adapters based on Ethernet controllers and SoCs belonging to
> the Broadcom BCM574XX/BCM575XX NetXtreme-E® Family of Ethernet Network
> Controllers, the Broadcom BCM588XX Stingray Family of Smart NIC Adapters,
> and the Broadcom StrataGX® BCM5873X Series of Communications Processors.
>
> Hi Ajit,
>
> This adds single line with 358 char length, most probably the web output
> is good
> but it is hard to read/maintain as .rst doc.
>
> What is the motivation for joining the multiple lines?
>
No specific reason. Mostly an oversight. We can fix it if necessary.


>
> For the content we need your ack, but for the style/formatting/language
> etc..
> would you mind waiting for an ack from doc maintainers, cc'ed, before
> merge?
>
Sure. Not an issue.

Thanks

>
> Thanks,
> ferruh
>

^ permalink raw reply	[flat|nested] 16+ messages in thread

* Re: [dpdk-dev] [PATCH] doc: update bnxt guide
  2020-05-06 16:09   ` Ajit Khaparde
@ 2020-05-13  9:08     ` Ferruh Yigit
  0 siblings, 0 replies; 16+ messages in thread
From: Ferruh Yigit @ 2020-05-13  9:08 UTC (permalink / raw)
  To: Ajit Khaparde; +Cc: dpdk-dev, Kovacevic, Marko, JP Lee, John McNamara

On 5/6/2020 5:09 PM, Ajit Khaparde wrote:
> 
> 
> On Wed, May 6, 2020 at 1:57 AM Ferruh Yigit <ferruh.yigit@intel.com
> <mailto:ferruh.yigit@intel.com>> wrote:
> 
>     On 5/1/2020 11:03 PM, Ajit Khaparde wrote:
>     >     - Update list of supported adapters.
>     >     - Update list of supported features.
>     >     - Add some details to describe the features.
>     >     - Remove obsolete limitations.
>     >     - Fix and update links.
>     >
>     > Signed-off-by: JP Lee <jongpil.lee@broadcom.com
>     <mailto:jongpil.lee@broadcom.com>>
>     > Signed-off-by: Ajit Khaparde <ajit.khaparde@broadcom.com
>     <mailto:ajit.khaparde@broadcom.com>>
>     > ---
>     >  doc/guides/nics/bnxt.rst | 1131 +++++++++++++++++++++++++++++++++-----
>     >  1 file changed, 1001 insertions(+), 130 deletions(-)
>     >
>     > diff --git a/doc/guides/nics/bnxt.rst b/doc/guides/nics/bnxt.rst
>     > index 434ba9d6c..0da312557 100644
>     > --- a/doc/guides/nics/bnxt.rst
>     > +++ b/doc/guides/nics/bnxt.rst
>     > @@ -1,138 +1,1009 @@
>     > -..  SPDX-License-Identifier: BSD-3-Clause
>     > -    Copyright 2016-2019 Broadcom
>     > +.. SPDX-License-Identifier: BSD-3-Clause
>     > +   Copyright 2020 Broadcom Inc.
>     > 
>     >  BNXT Poll Mode Driver
>     >  =====================
>     > 
>     > -The BNXT PMD (**librte_pmd_bnxt**) implements support for adapters based on
>     > -Ethernet controllers and SoCs belonging to the **Broadcom BCM5730X
>     NetXtreme-C®
>     > -Family of Ethernet Network Controllers**, the **Broadcom BCM574XX/BCM575XX
>     > -NetXtreme-E® Family of Ethernet Network Controllers**, the **Broadcom
>     BCM588XX
>     > -Stingray Family of SmartNIC Adapters**, and the **Broadcom StrataGX® BCM5871X
>     > -Series of Communications Processors**.  A complete list with links to
>     reference
>     > -material is included below.
>     > +The Broadcom BNXT PMD (Poll Mode Driver, librte_pmd_bnxt) implements
>     support for adapters based on Ethernet controllers and SoCs belonging to the
>     Broadcom BCM574XX/BCM575XX NetXtreme-E® Family of Ethernet Network
>     Controllers, the Broadcom BCM588XX Stingray Family of Smart NIC Adapters,
>     and the Broadcom StrataGX® BCM5873X Series of Communications Processors.
> 
>     Hi Ajit,
> 
>     This adds single line with 358 char length, most probably the web output is good
>     but it is hard to read/maintain as .rst doc.
> 
>     What is the motivation for joining the multiple lines?
> 
> No specific reason. Mostly an oversight. We can fix it if necessary.
>  
> 
> 
>     For the content we need your ack, but for the style/formatting/language etc..
>     would you mind waiting for an ack from doc maintainers, cc'ed, before merge?
> 
> Sure. Not an issue.
> 

There was no doc review, unfortunately. At least can you send a new version by
keeping the line length around ~80. And I suggest if there is not other change
in the text, keep the original line length.


^ permalink raw reply	[flat|nested] 16+ messages in thread

* Re: [dpdk-dev] [PATCH] doc: update bnxt guide
  2020-05-01 22:03 [dpdk-dev] [PATCH] doc: update bnxt guide Ajit Khaparde
  2020-05-05 16:58 ` Ajit Khaparde
  2020-05-06  8:57 ` Ferruh Yigit
@ 2020-05-14 10:39 ` Kovacevic, Marko
  2020-05-15 21:56   ` [dpdk-dev] [PATCH v2] " Ajit Khaparde
  2 siblings, 1 reply; 16+ messages in thread
From: Kovacevic, Marko @ 2020-05-14 10:39 UTC (permalink / raw)
  To: Ajit Khaparde, dev; +Cc: JP Lee, Mcnamara, John, Yigit, Ferruh

Few spelling mistakes:

0001-doc-update-bnxt-guide.patch
89c89
< -Vector mode receive will be enabled if the following constraint's are met:
---
> -Vector mode receive will be enabled if the following constrainsts are met:
96c96
< -Vector mode transmit will be enabled if the following constraint's are met:
---
> -Vector mode transmit will be enabled if the following constrainsts are met:
158c158
< -  * **Broadcom StrataGX® BCM5871X Series of Communications Processors**
---
> -  * **Broadcom StrataGX® BCM5871X Series of Communucations Processors**



A lot of lines over the 80 line mark all over the document making it unreadable in the docs section,
Even though its fine once it's rendered and viewed on the website. Still think it should be cleaned up a bit.

Once this is done you can add my ACK


^ permalink raw reply	[flat|nested] 16+ messages in thread

* [dpdk-dev] [PATCH v2] doc: update bnxt guide
  2020-05-14 10:39 ` Kovacevic, Marko
@ 2020-05-15 21:56   ` Ajit Khaparde
  2020-05-18  9:34     ` Ferruh Yigit
  0 siblings, 1 reply; 16+ messages in thread
From: Ajit Khaparde @ 2020-05-15 21:56 UTC (permalink / raw)
  To: dev; +Cc: ferruh.yigit, JP Lee, Kovacevic, Marko

    - Update list of supported adapters.
    - Update list of supported features.
    - Add some details to describe the features.
    - Remove obsolete limitations.
    - Fix and update links.

Signed-off-by: JP Lee <jongpil.lee@broadcom.com>
Signed-off-by: Ajit Khaparde <ajit.khaparde@broadcom.com>
Acked-by: Kovacevic, Marko <marko.kovacevic@intel.com>
---
v1->v2: Some lines were too long in v1. Made then shorter. Checked for typos.
---
 doc/guides/nics/bnxt.rst | 1145 +++++++++++++++++++++++++++++++++-----
 1 file changed, 1015 insertions(+), 130 deletions(-)

diff --git a/doc/guides/nics/bnxt.rst b/doc/guides/nics/bnxt.rst
index 434ba9d6c..e4b589566 100644
--- a/doc/guides/nics/bnxt.rst
+++ b/doc/guides/nics/bnxt.rst
@@ -1,138 +1,1023 @@
-..  SPDX-License-Identifier: BSD-3-Clause
-    Copyright 2016-2019 Broadcom
+.. SPDX-License-Identifier: BSD-3-Clause
+   Copyright 2020 Broadcom Inc.
 
 BNXT Poll Mode Driver
 =====================
 
-The BNXT PMD (**librte_pmd_bnxt**) implements support for adapters based on
-Ethernet controllers and SoCs belonging to the **Broadcom BCM5730X NetXtreme-C®
-Family of Ethernet Network Controllers**, the **Broadcom BCM574XX/BCM575XX
-NetXtreme-E® Family of Ethernet Network Controllers**, the **Broadcom BCM588XX
-Stingray Family of SmartNIC Adapters**, and the **Broadcom StrataGX® BCM5871X
-Series of Communications Processors**.  A complete list with links to reference
-material is included below.
+The Broadcom BNXT PMD (Poll Mode Driver, librte_pmd_bnxt) implements support for adapters based on Ethernet controllers and SoCs belonging to the Broadcom BCM574XX/BCM575XX NetXtreme-E® Family of Ethernet Network Controllers, the Broadcom BCM588XX Stingray Family of Smart NIC Adapters, and the Broadcom StrataGX® BCM5873X Series of Communications Processors.
 
+A complete list with links to reference material is in the Appendix section.
 
-BNXT PMD Features
+CPU Support
+-----------
+
+BNXT PMD supports multiple CPU architectures, including x86-32, x86-64, and ARMv8.
+
+Kernel Dependency
+-----------------
+
+BNXT PMD requires a kernel module (VFIO or UIO) for setting up a device, mapping device memory to userspace, registering interrupts, etc.
+VFIO is more secure than UIO, relying on IOMMU protection.
+UIO requires the IOMMU to be disabled or configured in pass-through mode.
+
+Operating Systems supported:
+
+* Red Hat Enterprise Linux release 8.1 (Ootpa)
+* Red Hat Enterprise Linux release 8.0 (Ootpa)
+* Red Hat Enterprise Linux Server release 7.7 (Maipo)
+* Red Hat Enterprise Linux Server release 7.6 (Maipo)
+* Red Hat Enterprise Linux Server release 7.5 (Maipo)
+* Red Hat Enterprise Linux Server release 7.4 (Maipo)
+* Red Hat Enterprise Linux Server release 7.3 (Maipo)
+* Red Hat Enterprise Linux Server release 7.2 (Maipo)
+* CentOS Linux release 8.0
+* CentOS Linux release 7.7
+* CentOS Linux release 7.6.1810
+* CentOS Linux release 7.5.1804
+* CentOS Linux release 7.4.1708
+* Fedora 31
+* FreeBSD 12.1
+* Suse 15SP1
+* Ubuntu 19.04
+* Ubuntu 18.04
+* Ubuntu 16.10
+* Ubuntu 16.04
+* Ubuntu 14.04
+
+The BNXT PMD supports operating with:
+
+* Linux vfio-pci
+* Linux uio_pci_generic
+* Linux igb_uio
+* BSD nic_uio
+
+Compiling BNXT PMD
+------------------
+
+To compile the BNXT PMD:
+
+.. code-block:: console
+
+    make config T=x86_64-native-linux-gcc && make   # for x86-64
+    make config T=x86_32-native-linux-gcc && make   # for x86-32
+    make config T=armv8a-linux-gcc && make          # for ARMv8
+
+Bind the device to one of the kernel modules listed above
+
+.. code-block:: console
+
+    ./dpdk-devbind.py -b vfio-pci|igb_uio|uio_pci_generic bus_id:device_id.function_id
+
+Load an application (e.g. testpmd) with a default configuration (e.g. a single TX/RX queue):
+
+.. code-block:: console
+
+    ./testpmd -c 0xF -n 4 -- -i --portmask=0x1 --nb-cores=2
+
+Running BNXT PMD
+----------------
+
+The BNXT PMD can run on PF or VF.
+
+PCI-SIG Single Root I/O Virtualization (SR-IOV) involves the direct assignment of part of the network port resources to guest operating systems using the SR-IOV standard.
+The NIC is logically distributed among multiple virtual machines (VMs), while still having global data in common to share with the PF and other VFs.
+
+Sysadmin can create and configure VFs:
+
+.. code-block:: console
+
+  echo num_vfs > /sys/bus/pci/devices/domain_id:bus_id:device_id.function_id/sriov_numvfs
+  (ex) echo 4 > /sys/bus/pci/devices/0000:82:00.0/sriov_numvfs
+
+The sysadmin can also change VF properties such as the MAC address, transparent VLAN, TX rate limit, and trusted VF setting:
+
+.. code-block:: console
+
+  ip link set pf_id vf vf_id mac (mac_address) vlan (vlan_id) txrate (rate_value) trust (on|off)
+  (ex) ip link set 0 vf 0 mac 00:11:22:33:44:55 vlan 0x100 txrate 100 trust off
+
+Running on VF
+~~~~~~~~~~~~~
+
+Flow Bifurcation
+^^^^^^^^^^^^^^^^
+
+Flow bifurcation splits the incoming data traffic between user space applications (such as DPDK applications) and kernel space programs (such as the Linux kernel stack).
+It can direct some traffic, for example data plane traffic, to DPDK.
+The rest of the traffic, for example control plane traffic, is redirected to the traditional Linux networking stack.
+
+Refer to https://doc.dpdk.org/guides/howto/flow_bifurcation.html
+
+Benefits of the flow bifurcation include:
+
+* Better performance with less CPU overhead, as user application can directly access the NIC for data path
+* NIC is still being controlled by the kernel, as control traffic is forwarded only to the kernel driver
+* Control commands, e.g. ethtool, will work as usual
+
+Running on a VF, the BNXT PMD supports flow bifurcation with a combination of SR-IOV and packet classification and/or forwarding capability.
+In the simplest case of flow bifurcation, a PF driver configures a NIC to forward all user traffic directly to VFs with matching destination MAC address, while the rest of the traffic is forwarded to a PF.
+Note that the broadcast packets will be forwarded to both PF and VF.
+
+.. code-block:: console
+
+    (ex) ethtool --config-ntuple ens2f0 flow-type ether dst 00:01:02:03:00:01 vlan 10 vlan-mask 0xf000 action 0x100000000
+
+Trusted VF
+^^^^^^^^^^
+
+By default, VFs are *not* allowed to perform privileged operations, such as modifying the VF’s MAC address in the guest. These security measures are designed to prevent possible attacks.
+However, when a DPDK application can be trusted (e.g., OVS-DPDK), these operations performed by a VF would be legitimate and can be allowed.
+
+To allow a VF to request "trusted mode", the trusted VF concept was introduced in Linux kernel 4.4, allowing such VFs to become "trusted" and perform some privileged operations.
+
+The BNXT PMD supports the trusted VF mode of operation. Only a PF can enable the trusted attribute on a VF. It is preferable to enable the trusted setting on a VF before starting applications.
+However, the BNXT PMD handles dynamic changes in trusted settings as well.
+
+Note that control commands, e.g., ethtool, will work via the kernel PF driver, *not* via the trusted VF driver.
+
+Operations supported by trusted VF:
+
+* MAC address configuration
+* Flow rule creation
+
+Operations *not* supported by trusted VF:
+
+* Firmware upgrade
+* Promiscuous mode setting
+
+Running on PF
+~~~~~~~~~~~~~
+
+Unlike the VF case, when the BNXT PMD runs on a PF there are no restrictions placed on the features which the PF can enable or request.
+In a multiport NIC, each port will have a corresponding PF. Also, depending on the configuration of the NIC, there can be more than one PF associated with a port.
+A sysadmin can load the kernel driver on one PF and run the BNXT PMD on the other PF, or run the PMD on both PFs. In such cases, the firmware picks one of the PFs as the master PF.
+
+Much like in the trusted VF, the DPDK application must be *trusted* and expected to be *well-behaved*.
+
+Features
+--------
+
+The BNXT PMD supports the following features:
+
+* Port Control
+    * Port MTU
+    * LED
+    * Flow Control and Autoneg
+* Packet Filtering
+    * Unicast MAC Filter
+    * Multicast MAC Filter
+    * VLAN Filtering
+    * Allmulticast Mode
+    * Promiscuous Mode
+* Stateless Offloads
+    * CRC Offload
+    * Checksum Offload (IPv4, TCP, and UDP)
+    * Multi-Queue (TSS and RSS)
+    * Segmentation and Reassembly (TSO and LRO)
+* VLAN insert strip
+* Stats Collection
+* Generic Flow Offload
+
+Port Control
+~~~~~~~~~~~~
+
+**Port MTU**: The BNXT PMD supports an MTU (Maximum Transmission Unit) of up to 9,574 bytes:
+
+.. code-block:: console
+
+    testpmd> port config mtu (port_id) mtu_value
+    testpmd> show port info (port_id)
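+
+As an illustration, an application may also set the MTU programmatically through the
+generic ethdev API (a minimal sketch, not BNXT-specific; the port id and MTU value
+are arbitrary):
+
+.. code-block:: c
+
+    #include <stdio.h>
+    #include <rte_ethdev.h>
+
+    /* Hypothetical example: request a 9000-byte MTU on port 0. */
+    uint16_t port_id = 0;
+    int ret = rte_eth_dev_set_mtu(port_id, 9000);
+    if (ret != 0)
+        printf("Failed to set MTU: %d\n", ret);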
+
+**LED**: The application turns on (or off) a port LED, typically for port identification:
+
+.. code-block:: console
+
+    int rte_eth_led_on (uint16_t port_id)
+    int rte_eth_led_off (uint16_t port_id)
+
+**Flow Control and Autoneg**: The application turns on (or off) flow control and/or auto-negotiation on a port:
+
+.. code-block:: console
+
+    testpmd> set flow_ctrl rx (on|off) (port_id)
+    testpmd> set flow_ctrl tx (on|off) (port_id)
+    testpmd> set flow_ctrl autoneg (on|off) (port_id)
+
+Note that the BNXT PMD does *not* support some options and ignores them when requested:
+
+* high_water
+* low_water
+* pause_time
+* mac_ctrl_frame_fwd
+* send_xon
+
+Packet Filtering
+~~~~~~~~~~~~~~~~
+
+Applications control the packet-forwarding behaviors with packet filters.
+
+The BNXT PMD supports hardware-based packet filtering:
+
+* UC (Unicast) MAC Filters
+    * No unicast packets are forwarded to an application except the one with DMAC address added to the port
+    * At initialization, the station MAC address is added to the port
+* MC (Multicast) MAC Filters
+    * No multicast packets are forwarded to an application except the one with MC address added to the port
+    * When the application listens to a multicast group, it adds the MC address to the port
+* VLAN Filtering Mode
+    * When enabled, no packets are forwarded to an application except the ones with the VLAN tag assigned to the port
+* Allmulticast Mode
+    * When enabled, every multicast packet received on the port is forwarded to the application
+    * Typical usage is routing applications
+* Promiscuous Mode
+    * When enabled, every packet received on the port is forwarded to the application
+
+Unicast MAC Filter
+^^^^^^^^^^^^^^^^^^
+
+The application adds (or removes) MAC addresses to enable (or disable) whitelist filtering to accept packets.
+
+.. code-block:: console
+
+    testpmd> show port (port_id) macs
+    testpmd> mac_addr (add|remove) (port_id) (XX:XX:XX:XX:XX:XX)
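+
+Equivalently, an application may manage unicast MAC filters through the generic ethdev
+API (a minimal sketch, not BNXT-specific; the port id and MAC address are arbitrary):
+
+.. code-block:: c
+
+    #include <rte_ethdev.h>
+    #include <rte_ether.h>
+
+    /* Hypothetical example: add and later remove a unicast MAC filter on port 0. */
+    struct rte_ether_addr addr = {
+        .addr_bytes = { 0x00, 0x11, 0x22, 0x33, 0x44, 0x55 }
+    };
+
+    rte_eth_dev_mac_addr_add(0, &addr, 0 /* pool */);
+    /* ... */
+    rte_eth_dev_mac_addr_remove(0, &addr);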
+
+Multicast MAC Filter
+^^^^^^^^^^^^^^^^^^^^
+
+Application adds (or removes) Multicast addresses to enable (or disable) whitelist filtering to accept packets.
+
+.. code-block:: console
+
+    testpmd> show port (port_id) mcast_macs
+    testpmd> mcast_addr (add|remove) (port_id) (XX:XX:XX:XX:XX:XX)
+
+Note that the BNXT PMD supports up to 16 MC MAC filters. If the user adds more than 16 MC MACs, the BNXT PMD puts the port into Allmulticast mode.
+
+VLAN Filtering
+^^^^^^^^^^^^^^
+
+The application enables (or disables) VLAN filtering mode. When the mode is enabled, no packets are forwarded to an application except ones with VLAN tag assigned for the application.
+
+.. code-block:: console
+
+    testpmd> vlan set filter (on|off) (port_id)
+    testpmd> rx_vlan (add|rm) (vlan_id) (port_id)
+
+Allmulticast Mode
+^^^^^^^^^^^^^^^^^
+
+The application enables (or disables) the allmulticast mode. When the mode is enabled, every multicast packet received is forwarded to the application.
+
+.. code-block:: console
+
+    testpmd> show port info (port_id)
+    testpmd> set allmulti (port_id) (on|off)
+
+Promiscuous Mode
+^^^^^^^^^^^^^^^^
+
+The application enables (or disables) the promiscuous mode. When the mode is enabled on a port, every packet received on the port is forwarded to the application.
+
+.. code-block:: console
+
+    testpmd> show port info (port_id)
+    testpmd> set promisc (port_id) (on|off)
+
+Stateless Offloads
+~~~~~~~~~~~~~~~~~~
+
+Like Linux, DPDK allows hardware offload of some stateless processing (such as checksum calculation) of the stack, relieving the CPU from having to burn cycles on every packet.
+
+Listed below are the stateless offloads supported by the BNXT PMD:
+
+* CRC offload (for both TX and RX packets)
+* Checksum Offload (for both TX and RX packets)
+    * IPv4 Checksum Offload
+    * TCP Checksum Offload
+    * UDP Checksum Offload
+* Segmentation/Reassembly Offloads
+    * TCP Segmentation Offload (TSO)
+    * Large Receive Offload (LRO)
+* Multi-Queue
+    * Transmit Side Scaling (TSS)
+    * Receive Side Scaling (RSS)
+
+Also, the BNXT PMD supports stateless offloads on inner frames for tunneled packets. Listed below are the tunneling protocols supported by the BNXT PMD:
+
+* VXLAN
+* GRE
+* NVGRE
+
+Note that enabling (or disabling) stateless offloads requires applications to stop DPDK before changing configuration.
+
+CRC Offload
+^^^^^^^^^^^
+
+The FCS (Frame Check Sequence) in the Ethernet frame is a four-octet CRC (Cyclic Redundancy Check) that allows detection of corrupted data within the entire frame as received on the receiver side.
+
+The BNXT PMD supports hardware-based CRC offload:
+
+* TX: calculate and insert CRC
+* RX: check and remove CRC, notify the application on CRC error
+
+Note that the CRC offload is always turned on.
+
+Checksum Offload
+^^^^^^^^^^^^^^^^
+
+The application enables hardware checksum calculation for IPv4, TCP, and UDP.
+
+.. code-block:: console
+
+    testpmd> port stop (port_id)
+    testpmd> csum set (ip|tcp|udp|outer-ip|outer-udp) (sw|hw) (port_id)
+    testpmd> set fwd csum
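+
+At the API level, the application requests per-packet checksum calculation via mbuf
+offload flags (a minimal sketch, assuming TX checksum offloads have been enabled on
+the port and ``m`` is the ``struct rte_mbuf *`` to transmit):
+
+.. code-block:: c
+
+    #include <rte_mbuf.h>
+    #include <rte_ether.h>
+    #include <rte_ip.h>
+
+    /* Hypothetical example: ask hardware to compute the IPv4 and TCP checksums. */
+    m->l2_len = sizeof(struct rte_ether_hdr);
+    m->l3_len = sizeof(struct rte_ipv4_hdr);
+    m->ol_flags |= PKT_TX_IPV4 | PKT_TX_IP_CKSUM | PKT_TX_TCP_CKSUM;
+    /* Before transmit: zero the IPv4 checksum field and seed the TCP checksum
+     * field with the pseudo-header checksum (rte_ipv4_phdr_cksum()). */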
+
+Multi-Queue
+^^^^^^^^^^^
+
+Multi-Queue, also known as TSS (Transmit Side Scaling) or RSS (Receive Side Scaling), is a common networking technique that allows for more efficient load balancing across multiple CPU cores.
+
+The application enables multiple TX and RX queues when it starts.
+
+.. code-block:: console
+
+    testpmd -l 1,3,5 --master-lcore 1 --txq=2 --rxq=2 --nb-cores=2
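+
+At the API level, a minimal sketch (assuming a typical ethdev initialization flow) of
+enabling multiple queues with RSS looks like:
+
+.. code-block:: c
+
+    #include <rte_ethdev.h>
+
+    /* Hypothetical example: 4 RX and 4 TX queues with RSS over IP/TCP/UDP. */
+    struct rte_eth_conf port_conf = {
+        .rxmode = { .mq_mode = ETH_MQ_RX_RSS },
+        .rx_adv_conf.rss_conf = {
+            .rss_key = NULL,  /* use the driver default key */
+            .rss_hf  = ETH_RSS_IP | ETH_RSS_TCP | ETH_RSS_UDP,
+        },
+    };
+
+    int ret = rte_eth_dev_configure(port_id, 4 /* rxq */, 4 /* txq */, &port_conf);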
+
+**TSS**
+
+TSS distributes network transmit processing across several hardware-based transmit queues, allowing outbound network traffic to be processed by multiple CPU cores.
+
+**RSS**
+
+RSS distributes network receive processing across several hardware-based receive queues, allowing inbound network traffic to be processed by multiple CPU cores.
+
+The application can select the RSS mode, i.e. select the header fields that are included for hash calculation. The BNXT PMD supports the RSS mode of ``default|ip|tcp|udp|none``, where default mode is L3 and L4.
+
+For tunneled packets, RSS hash is calculated over inner frame header fields. Applications may want to select the tunnel header fields for hash calculation, and it will be supported in 20.08 using RSS level.
+
+.. code-block:: console
+
+    testpmd> port config (port_id) rss (all|default|ip|tcp|udp|none)
+
+    // note that the testpmd defaults the RSS mode to ip
+    // ensure to issue the command below to enable L4 header (TCP or UDP) along with IPv4 header
+    testpmd> port config (port_id) rss default
+
+    // to check the current RSS configuration, such as RSS function and RSS key
+    testpmd> show port (port_id) rss-hash key
+
+    // RSS is enabled by default. However, application can disable RSS as follows
+    testpmd> port config (port_id) rss none
+
+Application can change the flow distribution, i.e. remap the received traffic to CPU cores, using RSS RETA (Redirection Table).
+
+.. code-block:: console
+
+    // application queries the current RSS RETA configuration
+    testpmd> show port (port_id) rss reta size (mask0, mask1)
+
+    // application changes the RSS RETA configuration
+    testpmd> port config (port_id) rss reta (hash, queue) [, (hash, queue)]
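+
+Programmatically, the RETA can be updated with the generic ethdev API (a minimal
+sketch; the queue mapping chosen here is arbitrary):
+
+.. code-block:: c
+
+    #include <string.h>
+    #include <rte_ethdev.h>
+
+    /* Hypothetical example: spread all RETA entries across queues 0 and 1. */
+    struct rte_eth_dev_info dev_info;
+    rte_eth_dev_info_get(port_id, &dev_info);
+
+    uint16_t reta_size = dev_info.reta_size;
+    struct rte_eth_rss_reta_entry64 reta_conf[reta_size / RTE_RETA_GROUP_SIZE];
+    memset(reta_conf, 0, sizeof(reta_conf));
+
+    for (uint16_t i = 0; i < reta_size; i++) {
+        reta_conf[i / RTE_RETA_GROUP_SIZE].mask |= 1ULL << (i % RTE_RETA_GROUP_SIZE);
+        reta_conf[i / RTE_RETA_GROUP_SIZE].reta[i % RTE_RETA_GROUP_SIZE] = i % 2;
+    }
+
+    rte_eth_dev_rss_reta_update(port_id, reta_conf, reta_size);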
+
+TSO
+^^^
+
+TSO (TCP Segmentation Offload), also known as LSO (Large Send Offload), enables the TCP/IP stack to pass to the NIC a larger datagram than the MTU (Maximum Transmit Unit). NIC breaks it into multiple segments before sending it to the network.
+
+The BNXT PMD supports hardware-based TSO.
+
+.. code-block:: console
+
+    // display the status of TSO
+    testpmd> tso show (port_id)
+
+    // enable/disable TSO
+    testpmd> port config (port_id) tx_offload tcp_tso (on|off)
+
+    // set TSO segment size
+    testpmd> tso set segment_size (port_id)
+
+The BNXT PMD also supports hardware-based tunneled TSO.
+
+.. code-block:: console
+
+    // display the status of tunneled TSO
+    testpmd> tunnel_tso show (port_id)
+
+    // enable/disable tunneled TSO
+    testpmd> port config (port_id) tx_offload vxlan_tnl_tso|gre_tnl_tso (on|off)
+
+    // set tunneled TSO segment size
+    testpmd> tunnel_tso set segment_size (port_id)
+
+Note that the checksum offload is always assumed to be enabled for TSO.
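+
+Per packet, TSO is requested via mbuf fields (a minimal sketch, assuming TSO has been
+enabled on the TX queue and ``m`` is the ``struct rte_mbuf *`` carrying a large TCP
+payload):
+
+.. code-block:: c
+
+    #include <rte_mbuf.h>
+    #include <rte_ether.h>
+    #include <rte_ip.h>
+    #include <rte_tcp.h>
+
+    /* Hypothetical example: segment the payload into ~1460-byte TCP segments. */
+    m->l2_len    = sizeof(struct rte_ether_hdr);
+    m->l3_len    = sizeof(struct rte_ipv4_hdr);
+    m->l4_len    = sizeof(struct rte_tcp_hdr);
+    m->tso_segsz = 1460;
+    m->ol_flags |= PKT_TX_TCP_SEG | PKT_TX_IPV4 | PKT_TX_IP_CKSUM;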
+
+LRO
+^^^
+
+LRO (Large Receive Offload) enables NIC to aggregate multiple incoming TCP/IP packets from a single stream into a larger buffer, before passing to the networking stack.
+
+The BNXT PMD supports hardware-based LRO.
+
+.. code-block:: console
+
+    // display the status of LRO
+    testpmd> show port (port_id) rx_offload capabilities
+    testpmd> show port (port_id) rx_offload configuration
+
+    // enable/disable LRO
+    testpmd> port config (port_id) rx_offload tcp_lro (on|off)
+
+    // set max LRO packet (datagram) size
+    testpmd> port config (port_id) max-lro-pkt-size (max_size)
+
+The BNXT PMD also supports tunneled LRO.
+
+Some applications, such as routing, should *not* change the packet headers as they pass through (i.e. received from and sent back to the network). In such a case, GRO (Generic Receive Offload) should be used instead of LRO.
+
+VLAN Insert/Strip
+~~~~~~~~~~~~~~~~~
+
+A DPDK application can offload VLAN insert/strip to improve performance. The BNXT PMD supports hardware-based VLAN insert/strip offload for both single and double VLAN packets.
+
+
+VLAN Insert
+^^^^^^^^^^^
+
+Application configures the VLAN TPID (Tag Protocol ID). By default, the TPID is 0x8100.
+
+.. code-block:: console
+
+    // configure outer TPID value for a port
+    testpmd> vlan set outer tpid (tpid_value) (port_id)
+
+Setting the inner TPID will be rejected, as the BNXT PMD supports inserting only an outer VLAN. Note that when a packet has a single VLAN, the tag is considered as outer, i.e. the inner VLAN is relevant only when a packet is double-tagged.
+
+The BNXT PMD supports various TPID values shown below. Any other values will be rejected.
+
+* ``0x8100``
+* ``0x88a8``
+* ``0x9100``
+* ``0x9200``
+* ``0x9300``
+
+The BNXT PMD supports the VLAN insert offload on a per-packet basis. The application provides the TCI (Tag Control Info) for a packet via mbuf. In turn, the BNXT PMD inserts the VLAN tag (via hardware) using the provided TCI along with the configured TPID.
+
+.. code-block:: console
+
+    // enable VLAN insert offload
+    testpmd> port config (port_id) tx_offload vlan_insert|qinq_insert (on|off)
+
+    if (mbuf->ol_flags & PKT_TX_QINQ)       // case-1: insert VLAN to single-tagged packet
+        tci_value = mbuf->vlan_tci_outer
+    else if (mbuf->ol_flags & PKT_TX_VLAN)  // case-2: insert VLAN to untagged packet
+        tci_value = mbuf->vlan_tci
+
+VLAN Strip
+^^^^^^^^^^
+
+The application configures the per-port VLAN strip offload.
+
+.. code-block:: console
+
+    // enable VLAN strip on a port
+    testpmd> port config (port_id) rx_offload vlan_strip (on|off)
+
+    // notify application VLAN strip via mbuf
+    mbuf->ol_flags |= PKT_RX_VLAN | PKT_RX_VLAN_STRIPPED // outer VLAN is found and stripped
+    mbuf->vlan_tci = tci_value                      // TCI of the stripped VLAN
+
+Time Synchronization
+~~~~~~~~~~~~~~~~~~~~
+
+System operators may run a PTP (Precision Time Protocol) client application to synchronize the time on the NIC (and optionally, on the system) to a PTP master.
+
+The BNXT PMD supports a PTP client application communicating with a PTP master clock using the DPDK IEEE 1588 APIs. Note that the PTP client application needs to run on a PF and vector mode needs to be disabled.
+
+For the PTP time synchronization support, the BNXT PMD must be compiled with ``CONFIG_RTE_LIBRTE_IEEE1588=y`` (this compilation flag is currently pending).
+
+.. code-block:: console
+
+    testpmd> set fwd ieee1588 // enable IEEE 1588 mode
+
+When enabled, the BNXT PMD configures hardware to insert IEEE 1588 timestamps to the outgoing PTP packets and reports IEEE 1588 timestamps from the incoming PTP packets to application via mbuf.
+
+.. code-block:: console
+
+    // RX packet completion will indicate whether the packet is PTP
+    mbuf->ol_flags |= PKT_RX_IEEE1588_PTP
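+
+A minimal sketch of reading the RX timestamp of a received PTP packet with the generic
+ethdev IEEE 1588 API (``m`` is the received ``struct rte_mbuf *``):
+
+.. code-block:: c
+
+    #include <stdio.h>
+    #include <time.h>
+    #include <rte_ethdev.h>
+
+    /* Hypothetical example: enable timesync, then read the RX timestamp. */
+    rte_eth_timesync_enable(port_id);
+
+    if (m->ol_flags & PKT_RX_IEEE1588_TMST) {
+        struct timespec ts;
+        if (rte_eth_timesync_read_rx_timestamp(port_id, &ts, m->timesync) == 0)
+            printf("RX timestamp: %ld.%09ld\n", (long)ts.tv_sec, ts.tv_nsec);
+    }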
+
+Statistics Collection
+~~~~~~~~~~~~~~~~~~~~~
+
+In Linux, *ethtool -S* enables us to query the NIC stats. DPDK provides similar functionality via rte_eth_stats and rte_eth_xstats.
+
+The BNXT PMD supports both basic and extended stats collection:
+
+* Basic stats
+* Extended stats
+
+Basic Stats
+^^^^^^^^^^^
+
+The application collects per-port and per-queue stats using rte_eth_stats APIs.
+
+.. code-block:: console
+
+    testpmd> show port stats (port_id)
+
+Basic stats include:
+
+* ipackets
+* ibytes
+* opackets
+* obytes
+* imissed
+* ierrors
+* oerrors
+
+By default, the BNXT PMD supports per-queue stats for 16 queues. For more than 16 queues, BNXT PMD should be compiled with ``CONFIG_RTE_ETHDEV_QUEUE_STAT_CNTRS`` set to the desired number of queues.
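+
+A minimal sketch of retrieving basic stats through the API:
+
+.. code-block:: c
+
+    #include <stdio.h>
+    #include <inttypes.h>
+    #include <rte_ethdev.h>
+
+    /* Hypothetical example: print a few basic counters for port_id. */
+    struct rte_eth_stats stats;
+
+    if (rte_eth_stats_get(port_id, &stats) == 0)
+        printf("ipackets=%" PRIu64 " opackets=%" PRIu64 " ierrors=%" PRIu64 "\n",
+               stats.ipackets, stats.opackets, stats.ierrors);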
+
+Extended Stats
+^^^^^^^^^^^^^^
+
+Unlike basic stats, the extended stats are vendor-specific, i.e. each vendor provides its own set of counters.
+
+The BNXT PMD provides a rich set of counters, including per-flow counters, per-cos counters, per-priority counters, etc.
+
+.. code-block:: console
+
+    testpmd> show port xstats (port_id)
+
+Shown below is the elaborated sequence to retrieve extended stats:
+
+.. code-block:: console
+
+    // application queries the number of xstats
+    len = rte_eth_xstats_get(port_id, NULL, 0);
+    // BNXT PMD returns the size of xstats array (i.e. the number of entries)
+    // BNXT PMD returns 0, if the feature is compiled out or disabled
+
+    // application allocates memory for xstats
+    struct rte_eth_xstat_name *names;  // each name is 64 characters or less
+    struct rte_eth_xstat *xstats;
+    names = calloc(len, sizeof(*names));
+    xstats = calloc(len, sizeof(*xstats));
+
+    // application retrieves xstats names and values
+    ret = rte_eth_xstats_get_names(port_id, names, len);
+    ret = rte_eth_xstats_get(port_id, xstats, len);
+
+    // application checks the xstats
+    // application may repeat the below:
+    rte_eth_xstats_reset(port_id); // reset the xstats
+
+    // reset can be skipped, if application wants to see accumulated stats
+    // run traffic
+    // probably stop the traffic
+    // retrieve xstats // no need to retrieve xstats names again
+    // check xstats
+
+See Appendix section for a list of extended counters provided by the BNXT PMD.
+
+Generic Flow Offload
+~~~~~~~~~~~~~~~~~~~~
+
+Applications can benefit by offloading all or part of flow processing to hardware. For example, applications can offload packet classification only (partial offload) or the whole match-action (full offload).
+
+DPDK offers the Generic Flow API (rte_flow API) to configure hardware to perform flow processing.
+
+Listed below are the rte_flow APIs BNXT PMD supports:
+
+* rte_flow_validate
+* rte_flow_create
+* rte_flow_destroy
+* rte_flow_flush
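+
+A minimal sketch (not BNXT-specific) of creating a rule that steers packets with a
+given destination MAC to RX queue 1; the MAC address and queue index are arbitrary:
+
+.. code-block:: c
+
+    #include <rte_flow.h>
+
+    struct rte_flow_error error;
+    struct rte_flow_attr attr = { .ingress = 1 };
+
+    struct rte_flow_item_eth eth_spec = {
+        .dst.addr_bytes = { 0x00, 0x01, 0x02, 0x03, 0x00, 0x01 },
+    };
+    struct rte_flow_item_eth eth_mask = {
+        .dst.addr_bytes = { 0xff, 0xff, 0xff, 0xff, 0xff, 0xff },
+    };
+    struct rte_flow_item pattern[] = {
+        { .type = RTE_FLOW_ITEM_TYPE_ETH, .spec = &eth_spec, .mask = &eth_mask },
+        { .type = RTE_FLOW_ITEM_TYPE_END },
+    };
+
+    struct rte_flow_action_queue queue = { .index = 1 };
+    struct rte_flow_action actions[] = {
+        { .type = RTE_FLOW_ACTION_TYPE_QUEUE, .conf = &queue },
+        { .type = RTE_FLOW_ACTION_TYPE_END },
+    };
+
+    struct rte_flow *flow = NULL;
+    if (rte_flow_validate(port_id, &attr, pattern, actions, &error) == 0)
+        flow = rte_flow_create(port_id, &attr, pattern, actions, &error);
+
+    /* ... later, tear the rule down ... */
+    if (flow)
+        rte_flow_destroy(port_id, flow, &error);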
+
+Host Based Flow Table Management
+^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
+
+Starting with DPDK 20.05, the BNXT PMD supports host-based flow table management. This is a new mechanism that should allow higher flow scalability than what is currently supported. This new approach also defines a new rte_flow parser and mapper, which currently supports basic packet classification in the receive path.
+
+The feature uses a newly implemented control-plane firmware interface which optimizes flow insertions and deletions.
+
+This is a tech preview feature, and is disabled by default. It can be enabled using bnxt devargs, for example: ``-w 0000:0d:00.0,host-based-truflow=1``.
+
+Application Support
+-------------------
+
+Firmware
+~~~~~~~~
+
+The BNXT PMD allows the application to retrieve the firmware version.
+
+.. code-block:: console
+
+    testpmd> show port info (port_id)
+
+Note that the applications cannot update the firmware using BNXT PMD.
+
+Multiple Processes
+~~~~~~~~~~~~~~~~~~
+
+When two or more DPDK applications (e.g., testpmd and dpdk-pdump) share a single instance of DPDK, the BNXT PMD supports a single primary application and one or more secondary applications. Note that the DPDK-layer (*not* the PMD-layer) ensures there is only one primary application.
+
+There are two modes:
+
+Manual mode
+
+* Application notifies whether it is primary or secondary using *proc-type* flag
+* 1st process should be spawned with ``--proc-type=primary``
+* All subsequent processes should be spawned with ``--proc-type=secondary``
+
+Auto detection mode
+
+* Application is using ``proc-type=auto`` flag
+* A process is spawned as a secondary if a primary is already running
+
+The BNXT PMD uses this information to skip device initialization, i.e. it performs device initialization only when being brought up by a primary application.
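+
+For illustration (a minimal sketch; core lists, port numbers, and file names are
+arbitrary), a primary and a secondary process might be launched as follows:
+
+.. code-block:: console
+
+    // primary application
+    ./testpmd -l 0-3 -n 4 --proc-type=primary -- -i
+
+    // secondary application (packet capture) attached to the same DPDK instance
+    ./dpdk-pdump -l 4,5 -n 4 --proc-type=secondary -- --pdump 'port=0,queue=*,rx-dev=/tmp/rx.pcap'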
+
+Runtime Queue Setup
+~~~~~~~~~~~~~~~~~~~
+
+Typically, a DPDK application allocates TX and RX queues statically: i.e. queues are allocated at start. However, an application may want to increase (or decrease) the number of queues dynamically for various reasons, e.g. power savings.
+
+The BNXT PMD allows applications to increase or decrease the number of queues at runtime.
+
+.. code-block:: console
+
+    testpmd> port config all (rxq|txq) (num_queues)
+
+Note that a DPDK application must allocate default queues (one for TX and one for RX at minimum) at initialization.
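+
+A minimal API-level sketch (assuming the device reports the runtime RX queue setup
+capability and ``mbuf_pool`` is an existing mempool) of adding an RX queue after the
+port has been started:
+
+.. code-block:: c
+
+    #include <rte_ethdev.h>
+
+    /* Hypothetical example: set up and start RX queue 4 at runtime. */
+    struct rte_eth_dev_info dev_info;
+    rte_eth_dev_info_get(port_id, &dev_info);
+
+    if (dev_info.dev_capa & RTE_ETH_DEV_CAPA_RUNTIME_RX_QUEUE_SETUP) {
+        rte_eth_rx_queue_setup(port_id, 4, 512, rte_eth_dev_socket_id(port_id),
+                               NULL /* default rx_conf */, mbuf_pool);
+        rte_eth_dev_rx_queue_start(port_id, 4);
+    }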
+
+Descriptor Status
+~~~~~~~~~~~~~~~~~
+
+Applications may use the descriptor status for various reasons, e.g. for power savings. For example, an application may stop polling and change to interrupt mode when the descriptor status shows no packets to service for a while.
+
+The BNXT PMD allows the application to retrieve both the TX and RX descriptor status.
+
+.. code-block:: console
+
+    testpmd> show port (port_id) (rxq|txq) (queue_id) desc (desc_id) status
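+
+A minimal API-level sketch of checking the RX descriptor status at a given offset:
+
+.. code-block:: c
+
+    #include <rte_ethdev.h>
+
+    /* Hypothetical example: check how far ahead descriptors are already done
+     * on RX queue 0 of port_id. */
+    int status = rte_eth_rx_descriptor_status(port_id, 0, 64 /* offset */);
+
+    if (status == RTE_ETH_RX_DESC_DONE) {
+        /* at least 64 received packets are waiting to be processed */
+    } else if (status == RTE_ETH_RX_DESC_AVAIL) {
+        /* the descriptor is still owned by the hardware; the queue is fairly idle */
+    }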
+
+Bonding
+~~~~~~~
+
+DPDK implements a light-weight library to allow PMDs to be bonded together and provide a single logical PMD to the application.
+
+.. code-block:: console
+
+    testpmd -l 0-3 -n4 --vdev 'net_bonding0,mode=0,slave=<PCI B:D.F device 1>,slave=<PCI B:D.F device 2>,mac=XX:XX:XX:XX:XX:XX' -- --socket-num=1 -i --port-topology=chained
+    (ex) testpmd -l 1,3,5,7,9 -n4 --vdev 'net_bonding0,mode=0,slave=0000:82:00.0,slave=0000:82:00.1,mac=00:1e:67:1d:fd:1d' -- --socket-num=1 -i --port-topology=chained
+
+Vector Processing
 -----------------
 
-The BNXT PMD includes support for the following features:
-
-   * Multiple transmit and receive queues
-   * Queue start/stop
-   * RSS hash
-   * RSS key configuration
-   * RSS reta configuration
-   * VMDq
-   * Packet type parsing
-   * Configurable RX CRC stripping
-   * L3/L4 checksum offload
-   * LRO offload
-   * TSO offload
-   * VLAN offload
-   * SR-IOV VF
-   * Basic and extended port statistics
-   * Link state reporting
-   * Flow control
-   * Ethertype filtering
-   * N-tuple filtering
-   * Promiscuous mode
-   * Unicast and multicast MAC filtering
-   * Scatter/gather transmit and receive
-   * Jumbo frames
-   * Vector PMD
-
-BNXT Vector PMD
----------------
-
-The BNXT PMD includes support for SSE vector mode on x86 platforms. Vector
-provides significantly improved performance over the base implementation,
-however it does not support all of the features that are supported by the
-base (non-vector) implementation. Vector mode will be selected and enabled
-automatically when the port is started if allowed by the current configuration.
-
-RX Requirements for Vector Mode
-^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
-
-Vector mode receive will be enabled if the following constrainsts are met:
-   * Packets must fit within a single mbuf (no scatter RX).
-   * LRO offload must be disabled.
-
-TX Requirements for Vector Mode
-^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
-
-Vector mode transmit will be enabled if the following constrainsts are met:
-   * Packets must be contained within a single mbuf (no gather TX).
-   * All transmit offloads other than VLAN insertion must be disabled.
-
-BNXT PMD Supported Chipsets and Adapters
-----------------------------------------
-
-Chipsets and adapters supported by the bnxt PMD include:
-
-  * **Broadcom BCM5730X NetXtreme-C® Family of Ethernet Network Controllers**
-
-       * M150c - Single-port 40/50 Gigabit Ethernet Adapter
-       * P150c - Single-port 40/50 Gigabit Ethernet Adapter
-       * P225c - Dual-port 10/25 Gigabit Ethernet Adapter
-
-  * **Broadcom BCM574XX/BCM575XX NetXtreme-E® Family of Ethernet Network Controllers**
-
-       * M125P - Single-port OCP 2.0 10/25 Gigabit Ethernet Adapter
-       * M150P - Single-port OCP 2.0 50 Gigabit Ethernet Adapter
-       * M150PM - Single-port OCP 2.0 Multi-Host 50 Gigabit Ethernet Adapter
-       * M210P - Dual-port OCP 2.0 10 Gigabit Ethernet Adapter
-       * M210TP - Dual-port OCP 2.0 10 Gigabit Ethernet Adapter
-       * M11000G - Single-port OCP 2.0 10/25/50/100 Gigabit Ethernet Adapter
-       * N150G - Single-port OCP 3.0 50 Gigabit Ethernet Adapter
-       * M225P - Dual-port OCP 2.0 10/25 Gigabit Ethernet Adapter
-       * N210P - Dual-port OCP 3.0 10 Gigabit Ethernet Adapter
-       * N210TP - Dual-port OCP 3.0 10 Gigabit Ethernet Adapter
-       * N225P - Dual-port OCP 3.0 10/25 Gigabit Ethernet Adapter
-       * N250G - Dual-port OCP 3.0 50 Gigabit Ethernet Adapter
-       * N410SG - Quad-port OCP 3.0 10 Gigabit Ethernet Adapter
-       * N410SGBT - Quad-port OCP 3.0 10 Gigabit Ethernet Adapter
-       * N425G - Quad-port OCP 3.0 10/25 Gigabit Ethernet Adapter
-       * N1100G - Single-port OCP 3.0 10/25/50/100 Gigabit Ethernet Adapter
-       * N2100G - Dual-port OCP 3.0 10/25/50/100 Gigabit Ethernet Adapter
-       * N2200G - Dual-port OCP 3.0 10/25/50/100/200 Gigabit Ethernet Adapter
-       * P150P - Single-port 50 Gigabit Ethernet Adapter
-       * P210P - Dual-port 10 Gigabit Ethernet Adapter
-       * P210TP - Dual-port 10 Gigabit Ethernet Adapter
-       * P225P - Dual-port 10/25 Gigabit Ethernet Adapter
-       * P410SG - Quad-port 10 Gigabit Ethernet Adapter
-       * P410SGBT - Quad-port 10 Gigabit Ethernet Adapter
-       * P425G - Quad-port 10/25 Gigabit Ethernet Adapter
-       * P1100G - Single-port 10/25/50/100 Gigabit Ethernet Adapter
-       * P2100G - Dual-port 10/25/50/100 Gigabit Ethernet Adapter
-       * P2200G - Dual-port 10/25/50/100/200 Gigabit Ethernet Adapter
-
-    Information about Ethernet adapters in the NetXtreme family of
-    adapters can be found in the `NetXtreme® Brand section
-    <https://www.broadcom.com/products/ethernet-connectivity/network-adapters/>`_
-    of the `Broadcom website <http://www.broadcom.com/>`_.
-
-  * **Broadcom BCM588XX Stingray Family of SmartNIC Adapters**
-
-       * PS410T - Quad-port 10 Gigabit Ethernet SmartNIC
-       * PS225 - Dual-port 25 Gigabit Ethernet SmartNIC
-       * PS250 - Dual-Port 50 Gigabit Ethernet SmartNIC
-
-    Information about the Stingray family of SmartNIC adapters can be found in the
-    `Stingray® Brand section
-    <https://www.broadcom.com/products/ethernet-connectivity/smartnic/>`_
-    of the `Broadcom website <http://www.broadcom.com/>`_.
-
-  * **Broadcom StrataGX® BCM5871X Series of Communucations Processors**
-
-    These ARM based processors target a broad range of networking applications
-    including virtual CPE (vCPE) and NFV appliances, 10G service routers and
-    gateways, control plane processing for Ethernet switches and network
-    attached storage (NAS).
-
-    Information about the StrataGX family of adapters can be found in the
-    `StrataGX® BCM58712
-    <http://www.broadcom.com/products/embedded-and-networking-processors/communications/bcm58712>`_
-    and `StrataGX® BCM58713
-    <http://www.broadcom.com/products/embedded-and-networking-processors/communications/bcm58713>`_
-    sections of the `Broadcom website <http://www.broadcom.com/>`_.
+Vector processing provides significantly improved performance over scalar processing.
+
+The BNXT PMD supports vector processing using SSE (Streaming SIMD Extensions) instructions on x86 platforms. The BNXT vPMD (vector mode PMD) is currently limited to Intel/AMD CPU architectures. Support for ARM is *not* currently implemented.
+
+This improved performance comes from several optimizations:
+
+* Batching
+    * TX: processing completions in bulk
+    * RX: allocating mbufs in bulk
+* Chained mbufs are *not* supported, i.e. a packet should fit a single mbuf
+* Some stateless offloads are *not* supported with vector processing
+    * TX: no offloads will be supported
+    * RX: reduced RX offloads (listed below) will be supported::
+
+       DEV_RX_OFFLOAD_VLAN_STRIP
+       DEV_RX_OFFLOAD_KEEP_CRC
+       DEV_RX_OFFLOAD_JUMBO_FRAME
+       DEV_RX_OFFLOAD_IPV4_CKSUM
+       DEV_RX_OFFLOAD_UDP_CKSUM
+       DEV_RX_OFFLOAD_TCP_CKSUM
+       DEV_RX_OFFLOAD_OUTER_IPV4_CKSUM
+       DEV_RX_OFFLOAD_RSS_HASH
+       DEV_RX_OFFLOAD_VLAN_FILTER
+
+The BNXT vector PMD is enabled in DPDK builds by default. If required, it can be disabled in the DPDK build configuration by setting ``CONFIG_RTE_LIBRTE_BNXT_INC_VECTOR=n``.
+
+However, the decision to enable vector mode is made when the port transitions from stopped to started. Any TX offload, or any RX offload other than those listed above, will disable the vector mode.
+Offload configuration changes that impact vector mode must be made when the port is stopped.
+
+Note that TX (or RX) vector mode can be enabled independently from RX (or TX) vector mode.
+
+Appendix
+--------
+
+Supported Chipsets and Adapters
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+
+BCM5730x NetXtreme-C® Family of Ethernet Network Controllers
+^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
+
+Information about Ethernet adapters in the NetXtreme family of adapters can be found in the `NetXtreme® Brand section <https://www.broadcom.com/products/ethernet-connectivity/network-adapters/>`_ of the `Broadcom website <http://www.broadcom.com/>`_.
+
+* ``M150c ... Single-port 40/50 Gigabit Ethernet Adapter``
+* ``P150c ... Single-port 40/50 Gigabit Ethernet Adapter``
+* ``P225c ... Dual-port 10/25 Gigabit Ethernet Adapter``
+
+BCM574xx/575xx NetXtreme-E® Family of Ethernet Network Controllers
+^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
+
+Information about Ethernet adapters in the NetXtreme family of adapters can be found in the `NetXtreme® Brand section <https://www.broadcom.com/products/ethernet-connectivity/network-adapters/>`_ of the `Broadcom website <http://www.broadcom.com/>`_.
+
+* ``M125P .... Single-port OCP 2.0 10/25 Gigabit Ethernet Adapter``
+* ``M150P .... Single-port OCP 2.0 50 Gigabit Ethernet Adapter``
+* ``M150PM ... Single-port OCP 2.0 Multi-Host 50 Gigabit Ethernet Adapter``
+* ``M210P .... Dual-port OCP 2.0 10 Gigabit Ethernet Adapter``
+* ``M210TP ... Dual-port OCP 2.0 10 Gigabit Ethernet Adapter``
+* ``M1100G ... Single-port OCP 2.0 10/25/50/100 Gigabit Ethernet Adapter``
+* ``N150G .... Single-port OCP 3.0 50 Gigabit Ethernet Adapter``
+* ``M225P .... Dual-port OCP 2.0 10/25 Gigabit Ethernet Adapter``
+* ``N210P .... Dual-port OCP 3.0 10 Gigabit Ethernet Adapter``
+* ``N210TP ... Dual-port OCP 3.0 10 Gigabit Ethernet Adapter``
+* ``N225P .... Dual-port OCP 3.0 10/25 Gigabit Ethernet Adapter``
+* ``N250G .... Dual-port OCP 3.0 50 Gigabit Ethernet Adapter``
+* ``N410SG ... Quad-port OCP 3.0 10 Gigabit Ethernet Adapter``
+* ``N410SGBT . Quad-port OCP 3.0 10 Gigabit Ethernet Adapter``
+* ``N425G .... Quad-port OCP 3.0 10/25 Gigabit Ethernet Adapter``
+* ``N1100G ... Single-port OCP 3.0 10/25/50/100 Gigabit Ethernet Adapter``
+* ``N2100G ... Dual-port OCP 3.0 10/25/50/100 Gigabit Ethernet Adapter``
+* ``N2200G ... Dual-port OCP 3.0 10/25/50/100/200 Gigabit Ethernet Adapter``
+* ``P150P .... Single-port 50 Gigabit Ethernet Adapter``
+* ``P210P .... Dual-port 10 Gigabit Ethernet Adapter``
+* ``P210TP ... Dual-port 10 Gigabit Ethernet Adapter``
+* ``P225P .... Dual-port 10/25 Gigabit Ethernet Adapter``
+* ``P410SG ... Quad-port 10 Gigabit Ethernet Adapter``
+* ``P410SGBT . Quad-port 10 Gigabit Ethernet Adapter``
+* ``P425G .... Quad-port 10/25 Gigabit Ethernet Adapter``
+* ``P1100G ... Single-port 10/25/50/100 Gigabit Ethernet Adapter``
+* ``P2100G ... Dual-port 10/25/50/100 Gigabit Ethernet Adapter``
+* ``P2200G ... Dual-port 10/25/50/100/200 Gigabit Ethernet Adapter``
+
+BCM588xx NetXtreme-S® Family of SmartNIC Network Controllers
+^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
+
+Information about the Stingray family of SmartNIC adapters can be found in the `Stingray® Brand section <https://www.broadcom.com/products/ethernet-connectivity/smartnic/>`_ of the `Broadcom website <http://www.broadcom.com/>`_.
+
+* ``PS225 ... Dual-port 25 Gigabit Ethernet SmartNIC``
+
+BCM5873x StrataGX® Family of Communications Processors
+^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
+
+These ARM-based processors target a broad range of networking applications, including virtual CPE (vCPE) and NFV appliances, 10G service routers and gateways, control plane processing for Ethernet switches, and network-attached storage (NAS).
+
+* ``StrataGX BCM58732 ... Octal-Core 3.0GHz 64-bit ARM®v8 Cortex®-A72 based SoC``
+
+Extended Stats
+~~~~~~~~~~~~~~
+
+Listed below are the extended stats supported by the BNXT PMD:
+
+* ``rx_good_packets``
+* ``tx_good_packets``
+* ``rx_good_bytes``
+* ``tx_good_bytes``
+* ``rx_missed_errors``
+* ``rx_errors``
+* ``tx_errors``
+* ``rx_mbuf_allocation_errors``
+* ``rx_q0packets``
+* ``rx_q0bytes``
+* ``rx_q0errors``
+* ``tx_q0packets``
+* ``tx_q0bytes``
+* ``rx_64b_frames``
+* ``rx_65b_127b_frames``
+* ``rx_128b_255b_frames``
+* ``rx_256b_511b_frames``
+* ``rx_512b_1023b_frames``
+* ``rx_1024b_1518b_frames``
+* ``rx_good_vlan_frames``
+* ``rx_1519b_2047b_frames``
+* ``rx_2048b_4095b_frames``
+* ``rx_4096b_9216b_frames``
+* ``rx_9217b_16383b_frames``
+* ``rx_total_frames``
+* ``rx_ucast_frames``
+* ``rx_mcast_frames``
+* ``rx_bcast_frames``
+* ``rx_fcs_err_frames``
+* ``rx_ctrl_frames``
+* ``rx_pause_frames``
+* ``rx_pfc_frames``
+* ``rx_unsupported_opcode_frames``
+* ``rx_unsupported_da_pausepfc_frames``
+* ``rx_wrong_sa_frames``
+* ``rx_align_err_frames``
+* ``rx_oor_len_frames``
+* ``rx_code_err_frames``
+* ``rx_false_carrier_frames``
+* ``rx_ovrsz_frames``
+* ``rx_jbr_frames``
+* ``rx_mtu_err_frames``
+* ``rx_match_crc_frames``
+* ``rx_promiscuous_frames``
+* ``rx_tagged_frames``
+* ``rx_double_tagged_frames``
+* ``rx_trunc_frames``
+* ``rx_good_frames``
+* ``rx_sch_crc_err_frames``
+* ``rx_undrsz_frames``
+* ``rx_frag_frames``
+* ``rx_eee_lpi_events``
+* ``rx_eee_lpi_duration``
+* ``rx_llfc_physical_msgs``
+* ``rx_llfc_logical_msgs``
+* ``rx_llfc_msgs_with_crc_err``
+* ``rx_hcfc_msgs``
+* ``rx_hcfc_msgs_with_crc_err``
+* ``rx_bytes``
+* ``rx_runt_bytes``
+* ``rx_runt_frames``
+* ``rx_pfc_xon2xoff_frames_pri0``
+* ``rx_pfc_xon2xoff_frames_pri1``
+* ``rx_pfc_xon2xoff_frames_pri2``
+* ``rx_pfc_xon2xoff_frames_pri3``
+* ``rx_pfc_xon2xoff_frames_pri4``
+* ``rx_pfc_xon2xoff_frames_pri5``
+* ``rx_pfc_xon2xoff_frames_pri6``
+* ``rx_pfc_xon2xoff_frames_pri7``
+* ``rx_pfc_ena_frames_pri0``
+* ``rx_pfc_ena_frames_pri1``
+* ``rx_pfc_ena_frames_pri2``
+* ``rx_pfc_ena_frames_pri3``
+* ``rx_pfc_ena_frames_pri4``
+* ``rx_pfc_ena_frames_pri5``
+* ``rx_pfc_ena_frames_pri6``
+* ``rx_pfc_ena_frames_pri7``
+* ``rx_stat_discard``
+* ``rx_stat_err``
+* ``tx_64b_frames``
+* ``tx_65b_127b_frames``
+* ``tx_128b_255b_frames``
+* ``tx_256b_511b_frames``
+* ``tx_512b_1023b_frames``
+* ``tx_1024b_1518b_frames``
+* ``tx_good_vlan_frames``
+* ``tx_1519b_2047b_frames``
+* ``tx_2048b_4095b_frames``
+* ``tx_4096b_9216b_frames``
+* ``tx_9217b_16383b_frames``
+* ``tx_good_frames``
+* ``tx_total_frames``
+* ``tx_ucast_frames``
+* ``tx_mcast_frames``
+* ``tx_bcast_frames``
+* ``tx_pause_frames``
+* ``tx_pfc_frames``
+* ``tx_jabber_frames``
+* ``tx_fcs_err_frames``
+* ``tx_control_frames``
+* ``tx_oversz_frames``
+* ``tx_single_dfrl_frames``
+* ``tx_multi_dfrl_frames``
+* ``tx_single_coll_frames``
+* ``tx_multi_coll_frames``
+* ``tx_late_coll_frames``
+* ``tx_excessive_coll_frames``
+* ``tx_frag_frames``
+* ``tx_err``
+* ``tx_tagged_frames``
+* ``tx_dbl_tagged_frames``
+* ``tx_runt_frames``
+* ``tx_fifo_underruns``
+* ``tx_eee_lpi_events``
+* ``tx_eee_lpi_duration``
+* ``tx_total_collisions``
+* ``tx_bytes``
+* ``tx_pfc_ena_frames_pri0``
+* ``tx_pfc_ena_frames_pri1``
+* ``tx_pfc_ena_frames_pri2``
+* ``tx_pfc_ena_frames_pri3``
+* ``tx_pfc_ena_frames_pri4``
+* ``tx_pfc_ena_frames_pri5``
+* ``tx_pfc_ena_frames_pri6``
+* ``tx_pfc_ena_frames_pri7``
+* ``tx_llfc_logical_msgs``
+* ``tx_hcfc_msgs``
+* ``tx_xthol_frames``
+* ``tx_stat_discard``
+* ``tx_stat_error``
+* ``tx_ucast_pkts``
+* ``tx_mcast_pkts``
+* ``tx_bcast_pkts``
+* ``tx_discard_pkts``
+* ``tx_drop_pkts``
+* ``tx_ucast_bytes``
+* ``tx_mcast_bytes``
+* ``tx_bcast_bytes``
+* ``rx_ucast_pkts``
+* ``rx_mcast_pkts``
+* ``rx_bcast_pkts``
+* ``rx_discard_pkts``
+* ``rx_drop_pkts``
+* ``rx_ucast_bytes``
+* ``rx_mcast_bytes``
+* ``rx_bcast_bytes``
+* ``rx_agg_pkts``
+* ``rx_agg_bytes``
+* ``rx_agg_events``
+* ``rx_agg_aborts``
+* ``link_down_events``
+* ``continuous_pause_events``
+* ``resume_pause_events``
+* ``continuous_roce_pause_events``
+* ``resume_roce_pause_events``
+* ``rx_bytes_cos0``
+* ``rx_bytes_cos1``
+* ``rx_bytes_cos2``
+* ``rx_bytes_cos3``
+* ``rx_bytes_cos4``
+* ``rx_bytes_cos5``
+* ``rx_bytes_cos6``
+* ``rx_bytes_cos7``
+* ``rx_packets_cos0``
+* ``rx_packets_cos1``
+* ``rx_packets_cos2``
+* ``rx_packets_cos3``
+* ``rx_packets_cos4``
+* ``rx_packets_cos5``
+* ``rx_packets_cos6``
+* ``rx_packets_cos7``
+* ``pfc_pri0_rx_duration_us``
+* ``pfc_pri0_rx_transitions``
+* ``pfc_pri1_rx_duration_us``
+* ``pfc_pri1_rx_transitions``
+* ``pfc_pri2_rx_duration_us``
+* ``pfc_pri2_rx_transitions``
+* ``pfc_pri3_rx_duration_us``
+* ``pfc_pri3_rx_transitions``
+* ``pfc_pri4_rx_duration_us``
+* ``pfc_pri4_rx_transitions``
+* ``pfc_pri5_rx_duration_us``
+* ``pfc_pri5_rx_transitions``
+* ``pfc_pri6_rx_duration_us``
+* ``pfc_pri6_rx_transitions``
+* ``pfc_pri7_rx_duration_us``
+* ``pfc_pri7_rx_transitions``
+* ``rx_bits``
+* ``rx_buffer_passed_threshold``
+* ``rx_pcs_symbol_err``
+* ``rx_corrected_bits``
+* ``rx_discard_bytes_cos0``
+* ``rx_discard_bytes_cos1``
+* ``rx_discard_bytes_cos2``
+* ``rx_discard_bytes_cos3``
+* ``rx_discard_bytes_cos4``
+* ``rx_discard_bytes_cos5``
+* ``rx_discard_bytes_cos6``
+* ``rx_discard_bytes_cos7``
+* ``rx_discard_packets_cos0``
+* ``rx_discard_packets_cos1``
+* ``rx_discard_packets_cos2``
+* ``rx_discard_packets_cos3``
+* ``rx_discard_packets_cos4``
+* ``rx_discard_packets_cos5``
+* ``rx_discard_packets_cos6``
+* ``rx_discard_packets_cos7``
+* ``tx_bytes_cos0``
+* ``tx_bytes_cos1``
+* ``tx_bytes_cos2``
+* ``tx_bytes_cos3``
+* ``tx_bytes_cos4``
+* ``tx_bytes_cos5``
+* ``tx_bytes_cos6``
+* ``tx_bytes_cos7``
+* ``tx_packets_cos0``
+* ``tx_packets_cos1``
+* ``tx_packets_cos2``
+* ``tx_packets_cos3``
+* ``tx_packets_cos4``
+* ``tx_packets_cos5``
+* ``tx_packets_cos6``
+* ``tx_packets_cos7``
+* ``pfc_pri0_tx_duration_us``
+* ``pfc_pri0_tx_transitions``
+* ``pfc_pri1_tx_duration_us``
+* ``pfc_pri1_tx_transitions``
+* ``pfc_pri2_tx_duration_us``
+* ``pfc_pri2_tx_transitions``
+* ``pfc_pri3_tx_duration_us``
+* ``pfc_pri3_tx_transitions``
+* ``pfc_pri4_tx_duration_us``
+* ``pfc_pri4_tx_transitions``
+* ``pfc_pri5_tx_duration_us``
+* ``pfc_pri5_tx_transitions``
+* ``pfc_pri6_tx_duration_us``
+* ``pfc_pri6_tx_transitions``
+* ``pfc_pri7_tx_duration_us``
+* ``pfc_pri7_tx_transitions``
+* ``flow_0_bytes``
+* ``flow_0_packets``
+* ``...``
+* ``flow_1023_bytes``
+* ``flow_1023_packets``
-- 
2.21.1 (Apple Git-122.3)


^ permalink raw reply	[flat|nested] 16+ messages in thread

* Re: [dpdk-dev] [PATCH v2] doc: update bnxt guide
  2020-05-15 21:56   ` [dpdk-dev] [PATCH v2] " Ajit Khaparde
@ 2020-05-18  9:34     ` Ferruh Yigit
  2020-05-18 17:38       ` [dpdk-dev] [PATCH v3] " Ajit Khaparde
  2020-05-18 17:58       ` [dpdk-dev] [PATCH v4] " Ajit Khaparde
  0 siblings, 2 replies; 16+ messages in thread
From: Ferruh Yigit @ 2020-05-18  9:34 UTC (permalink / raw)
  To: Ajit Khaparde, dev; +Cc: JP Lee, Kovacevic, Marko

On 5/15/2020 10:56 PM, Ajit Khaparde wrote:
>     - Update list of supported adapters.
>     - Update list of supported features.
>     - Add some details to describe the features.
>     - Remove obsolete limitations.
>     - Fix and update links.
> 
> Signed-off-by: JP Lee <jongpil.lee@broadcom.com>
> Signed-off-by: Ajit Khaparde <ajit.khaparde@broadcom.com>
> Acked-by: Kovacevic, Marko <marko.kovacevic@intel.com>

<...>

> +Extended Stats
> +~~~~~~~~~~~~~~
> +
> +Listed below are the extended stats supported by the BNXT PMD:
> +
> +* ``rx_good_packets``
> +* ``tx_good_packets``
> +* ``rx_good_bytes``
> +* ``tx_good_bytes``
> +* ``rx_missed_errors``
> +* ``rx_errors``
> +* ``tx_errors``
> +* ``rx_mbuf_allocation_errors``
> +* ``rx_q0packets``
> +* ``rx_q0bytes``
> +* ``rx_q0errors``
> +* ``tx_q0packets``
> +* ``tx_q0bytes``
> +* ``rx_64b_frames``
> +* ``rx_65b_127b_frames``
> +* ``rx_128b_255b_frames``
> +* ``rx_256b_511b_frames``
> +* ``rx_512b_1023b_frames``
> +* ``rx_1024b_1518b_frames``
> +* ``rx_good_vlan_frames``
> +* ``rx_1519b_2047b_frames``
> +* ``rx_2048b_4095b_frames``
> +* ``rx_4096b_9216b_frames``
> +* ``rx_9217b_16383b_frames``
> +* ``rx_total_frames``
> +* ``rx_ucast_frames``
> +* ``rx_mcast_frames``
> +* ``rx_bcast_frames``
> +* ``rx_fcs_err_frames``
> +* ``rx_ctrl_frames``
> +* ``rx_pause_frames``
> +* ``rx_pfc_frames``
> +* ``rx_unsupported_opcode_frames``
> +* ``rx_unsupported_da_pausepfc_frames``
> +* ``rx_wrong_sa_frames``
> +* ``rx_align_err_frames``
> +* ``rx_oor_len_frames``
> +* ``rx_code_err_frames``
> +* ``rx_false_carrier_frames``
> +* ``rx_ovrsz_frames``
> +* ``rx_jbr_frames``
> +* ``rx_mtu_err_frames``
> +* ``rx_match_crc_frames``
> +* ``rx_promiscuous_frames``
> +* ``rx_tagged_frames``
> +* ``rx_double_tagged_frames``
> +* ``rx_trunc_frames``
> +* ``rx_good_frames``
> +* ``rx_sch_crc_err_frames``
> +* ``rx_undrsz_frames``
> +* ``rx_frag_frames``
> +* ``rx_eee_lpi_events``
> +* ``rx_eee_lpi_duration``
> +* ``rx_llfc_physical_msgs``
> +* ``rx_llfc_logical_msgs``
> +* ``rx_llfc_msgs_with_crc_err``
> +* ``rx_hcfc_msgs``
> +* ``rx_hcfc_msgs_with_crc_err``
> +* ``rx_bytes``
> +* ``rx_runt_bytes``
> +* ``rx_runt_frames``
> +* ``rx_pfc_xon2xoff_frames_pri0``
> +* ``rx_pfc_xon2xoff_frames_pri1``
> +* ``rx_pfc_xon2xoff_frames_pri2``
> +* ``rx_pfc_xon2xoff_frames_pri3``
> +* ``rx_pfc_xon2xoff_frames_pri4``
> +* ``rx_pfc_xon2xoff_frames_pri5``
> +* ``rx_pfc_xon2xoff_frames_pri6``
> +* ``rx_pfc_xon2xoff_frames_pri7``
> +* ``rx_pfc_ena_frames_pri0``
> +* ``rx_pfc_ena_frames_pri1``
> +* ``rx_pfc_ena_frames_pri2``
> +* ``rx_pfc_ena_frames_pri3``
> +* ``rx_pfc_ena_frames_pri4``
> +* ``rx_pfc_ena_frames_pri5``
> +* ``rx_pfc_ena_frames_pri6``
> +* ``rx_pfc_ena_frames_pri7``
> +* ``rx_stat_discard``
> +* ``rx_stat_err``
> +* ``tx_64b_frames``
> +* ``tx_65b_127b_frames``
> +* ``tx_128b_255b_frames``
> +* ``tx_256b_511b_frames``
> +* ``tx_512b_1023b_frames``
> +* ``tx_1024b_1518b_frames``
> +* ``tx_good_vlan_frames``
> +* ``tx_1519b_2047b_frames``
> +* ``tx_2048b_4095b_frames``
> +* ``tx_4096b_9216b_frames``
> +* ``tx_9217b_16383b_frames``
> +* ``tx_good_frames``
> +* ``tx_total_frames``
> +* ``tx_ucast_frames``
> +* ``tx_mcast_frames``
> +* ``tx_bcast_frames``
> +* ``tx_pause_frames``
> +* ``tx_pfc_frames``
> +* ``tx_jabber_frames``
> +* ``tx_fcs_err_frames``
> +* ``tx_control_frames``
> +* ``tx_oversz_frames``
> +* ``tx_single_dfrl_frames``
> +* ``tx_multi_dfrl_frames``
> +* ``tx_single_coll_frames``
> +* ``tx_multi_coll_frames``
> +* ``tx_late_coll_frames``
> +* ``tx_excessive_coll_frames``
> +* ``tx_frag_frames``
> +* ``tx_err``
> +* ``tx_tagged_frames``
> +* ``tx_dbl_tagged_frames``
> +* ``tx_runt_frames``
> +* ``tx_fifo_underruns``
> +* ``tx_eee_lpi_events``
> +* ``tx_eee_lpi_duration``
> +* ``tx_total_collisions``
> +* ``tx_bytes``
> +* ``tx_pfc_ena_frames_pri0``
> +* ``tx_pfc_ena_frames_pri1``
> +* ``tx_pfc_ena_frames_pri2``
> +* ``tx_pfc_ena_frames_pri3``
> +* ``tx_pfc_ena_frames_pri4``
> +* ``tx_pfc_ena_frames_pri5``
> +* ``tx_pfc_ena_frames_pri6``
> +* ``tx_pfc_ena_frames_pri7``
> +* ``tx_llfc_logical_msgs``
> +* ``tx_hcfc_msgs``
> +* ``tx_xthol_frames``
> +* ``tx_stat_discard``
> +* ``tx_stat_error``
> +* ``tx_ucast_pkts``
> +* ``tx_mcast_pkts``
> +* ``tx_bcast_pkts``
> +* ``tx_discard_pkts``
> +* ``tx_drop_pkts``
> +* ``tx_ucast_bytes``
> +* ``tx_mcast_bytes``
> +* ``tx_bcast_bytes``
> +* ``rx_ucast_pkts``
> +* ``rx_mcast_pkts``
> +* ``rx_bcast_pkts``
> +* ``rx_discard_pkts``
> +* ``rx_drop_pkts``
> +* ``rx_ucast_bytes``
> +* ``rx_mcast_bytes``
> +* ``rx_bcast_bytes``
> +* ``rx_agg_pkts``
> +* ``rx_agg_bytes``
> +* ``rx_agg_events``
> +* ``rx_agg_aborts``
> +* ``link_down_events``
> +* ``continuous_pause_events``
> +* ``resume_pause_events``
> +* ``continuous_roce_pause_events``
> +* ``resume_roce_pause_events``
> +* ``rx_bytes_cos0``
> +* ``rx_bytes_cos1``
> +* ``rx_bytes_cos2``
> +* ``rx_bytes_cos3``
> +* ``rx_bytes_cos4``
> +* ``rx_bytes_cos5``
> +* ``rx_bytes_cos6``
> +* ``rx_bytes_cos7``
> +* ``rx_packets_cos0``
> +* ``rx_packets_cos1``
> +* ``rx_packets_cos2``
> +* ``rx_packets_cos3``
> +* ``rx_packets_cos4``
> +* ``rx_packets_cos5``
> +* ``rx_packets_cos6``
> +* ``rx_packets_cos7``
> +* ``pfc_pri0_rx_duration_us``
> +* ``pfc_pri0_rx_transitions``
> +* ``pfc_pri1_rx_duration_us``
> +* ``pfc_pri1_rx_transitions``
> +* ``pfc_pri2_rx_duration_us``
> +* ``pfc_pri2_rx_transitions``
> +* ``pfc_pri3_rx_duration_us``
> +* ``pfc_pri3_rx_transitions``
> +* ``pfc_pri4_rx_duration_us``
> +* ``pfc_pri4_rx_transitions``
> +* ``pfc_pri5_rx_duration_us``
> +* ``pfc_pri5_rx_transitions``
> +* ``pfc_pri6_rx_duration_us``
> +* ``pfc_pri6_rx_transitions``
> +* ``pfc_pri7_rx_duration_us``
> +* ``pfc_pri7_rx_transitions``
> +* ``rx_bits``
> +* ``rx_buffer_passed_threshold``
> +* ``rx_pcs_symbol_err``
> +* ``rx_corrected_bits``
> +* ``rx_discard_bytes_cos0``
> +* ``rx_discard_bytes_cos1``
> +* ``rx_discard_bytes_cos2``
> +* ``rx_discard_bytes_cos3``
> +* ``rx_discard_bytes_cos4``
> +* ``rx_discard_bytes_cos5``
> +* ``rx_discard_bytes_cos6``
> +* ``rx_discard_bytes_cos7``
> +* ``rx_discard_packets_cos0``
> +* ``rx_discard_packets_cos1``
> +* ``rx_discard_packets_cos2``
> +* ``rx_discard_packets_cos3``
> +* ``rx_discard_packets_cos4``
> +* ``rx_discard_packets_cos5``
> +* ``rx_discard_packets_cos6``
> +* ``rx_discard_packets_cos7``
> +* ``tx_bytes_cos0``
> +* ``tx_bytes_cos1``
> +* ``tx_bytes_cos2``
> +* ``tx_bytes_cos3``
> +* ``tx_bytes_cos4``
> +* ``tx_bytes_cos5``
> +* ``tx_bytes_cos6``
> +* ``tx_bytes_cos7``
> +* ``tx_packets_cos0``
> +* ``tx_packets_cos1``
> +* ``tx_packets_cos2``
> +* ``tx_packets_cos3``
> +* ``tx_packets_cos4``
> +* ``tx_packets_cos5``
> +* ``tx_packets_cos6``
> +* ``tx_packets_cos7``
> +* ``pfc_pri0_tx_duration_us``
> +* ``pfc_pri0_tx_transitions``
> +* ``pfc_pri1_tx_duration_us``
> +* ``pfc_pri1_tx_transitions``
> +* ``pfc_pri2_tx_duration_us``
> +* ``pfc_pri2_tx_transitions``
> +* ``pfc_pri3_tx_duration_us``
> +* ``pfc_pri3_tx_transitions``
> +* ``pfc_pri4_tx_duration_us``
> +* ``pfc_pri4_tx_transitions``
> +* ``pfc_pri5_tx_duration_us``
> +* ``pfc_pri5_tx_transitions``
> +* ``pfc_pri6_tx_duration_us``
> +* ``pfc_pri6_tx_transitions``
> +* ``pfc_pri7_tx_duration_us``
> +* ``pfc_pri7_tx_transitions``
> +* ``flow_0_bytes``
> +* ``flow_0_packets``
> +* ``...``
> +* ``flow_1023_bytes``
> +* ``flow_1023_packets``
> 

Hi Ajit,

Just recognized this part. This seems to be a copy/paste of the extended stats; it does
not provide additional documentation, and it is hard to keep this kind of thing up to
date in the long term.
It is up to you, but I suggest either just giving high-level information that
extended stats are supported, or providing a link to the code for the list of the
supported stats (that way it can always stay up to date).

^ permalink raw reply	[flat|nested] 16+ messages in thread

* [dpdk-dev] [PATCH v3] doc: update bnxt guide
  2020-05-18  9:34     ` Ferruh Yigit
@ 2020-05-18 17:38       ` Ajit Khaparde
  2020-05-18 17:58       ` [dpdk-dev] [PATCH v4] " Ajit Khaparde
  1 sibling, 0 replies; 16+ messages in thread
From: Ajit Khaparde @ 2020-05-18 17:38 UTC (permalink / raw)
  To: dev; +Cc: ferruh.yigit, JP Lee, Kovacevic Marko

    - Update list of supported adapters.
    - Update list of supported features.
    - Add some details to describe the features.
    - Remove obsolete limitations.
    - Fix and update links.

Signed-off-by: JP Lee <jongpil.lee@broadcom.com>
Signed-off-by: Ajit Khaparde <ajit.khaparde@broadcom.com>
Acked-by: Kovacevic Marko <marko.kovacevic@intel.com>
---
v1->v2: Some lines were too long in v1. Made them shorter. Checked for typos.
v2->v3: Removed list of extended stats.
---
 doc/guides/nics/bnxt.rst | 895 +++++++++++++++++++++++++++++++++------
 1 file changed, 765 insertions(+), 130 deletions(-)

diff --git a/doc/guides/nics/bnxt.rst b/doc/guides/nics/bnxt.rst
index 434ba9d6c..403124624 100644
--- a/doc/guides/nics/bnxt.rst
+++ b/doc/guides/nics/bnxt.rst
@@ -1,138 +1,773 @@
-..  SPDX-License-Identifier: BSD-3-Clause
-    Copyright 2016-2019 Broadcom
+.. SPDX-License-Identifier: BSD-3-Clause
+   Copyright 2020 Broadcom Inc.
 
 BNXT Poll Mode Driver
 =====================
 
-The BNXT PMD (**librte_pmd_bnxt**) implements support for adapters based on
-Ethernet controllers and SoCs belonging to the **Broadcom BCM5730X NetXtreme-C®
-Family of Ethernet Network Controllers**, the **Broadcom BCM574XX/BCM575XX
-NetXtreme-E® Family of Ethernet Network Controllers**, the **Broadcom BCM588XX
-Stingray Family of SmartNIC Adapters**, and the **Broadcom StrataGX® BCM5871X
-Series of Communications Processors**.  A complete list with links to reference
-material is included below.
+The Broadcom BNXT PMD (Poll Mode Driver, librte_pmd_bnxt) implements support for adapters based on Ethernet controllers and SoCs belonging to the Broadcom BCM574XX/BCM575XX NetXtreme-E® Family of Ethernet Network Controllers, the Broadcom BCM588XX Stingray Family of Smart NIC Adapters, and the Broadcom StrataGX® BCM5873X Series of Communications Processors.
 
+A complete list with links to reference material is in the Appendix section.
 
-BNXT PMD Features
+CPU Support
+-----------
+
+BNXT PMD supports multiple CPU architectures, including x86-32, x86-64, and ARMv8.
+
+Kernel Dependency
 -----------------
 
-The BNXT PMD includes support for the following features:
-
-   * Multiple transmit and receive queues
-   * Queue start/stop
-   * RSS hash
-   * RSS key configuration
-   * RSS reta configuration
-   * VMDq
-   * Packet type parsing
-   * Configurable RX CRC stripping
-   * L3/L4 checksum offload
-   * LRO offload
-   * TSO offload
-   * VLAN offload
-   * SR-IOV VF
-   * Basic and extended port statistics
-   * Link state reporting
-   * Flow control
-   * Ethertype filtering
-   * N-tuple filtering
-   * Promiscuous mode
-   * Unicast and multicast MAC filtering
-   * Scatter/gather transmit and receive
-   * Jumbo frames
-   * Vector PMD
-
-BNXT Vector PMD
----------------
-
-The BNXT PMD includes support for SSE vector mode on x86 platforms. Vector
-provides significantly improved performance over the base implementation,
-however it does not support all of the features that are supported by the
-base (non-vector) implementation. Vector mode will be selected and enabled
-automatically when the port is started if allowed by the current configuration.
-
-RX Requirements for Vector Mode
-^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
-
-Vector mode receive will be enabled if the following constrainsts are met:
-   * Packets must fit within a single mbuf (no scatter RX).
-   * LRO offload must be disabled.
-
-TX Requirements for Vector Mode
-^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
-
-Vector mode transmit will be enabled if the following constrainsts are met:
-   * Packets must be contained within a single mbuf (no gather TX).
-   * All transmit offloads other than VLAN insertion must be disabled.
-
-BNXT PMD Supported Chipsets and Adapters
-----------------------------------------
-
-Chipsets and adapters supported by the bnxt PMD include:
-
-  * **Broadcom BCM5730X NetXtreme-C® Family of Ethernet Network Controllers**
-
-       * M150c - Single-port 40/50 Gigabit Ethernet Adapter
-       * P150c - Single-port 40/50 Gigabit Ethernet Adapter
-       * P225c - Dual-port 10/25 Gigabit Ethernet Adapter
-
-  * **Broadcom BCM574XX/BCM575XX NetXtreme-E® Family of Ethernet Network Controllers**
-
-       * M125P - Single-port OCP 2.0 10/25 Gigabit Ethernet Adapter
-       * M150P - Single-port OCP 2.0 50 Gigabit Ethernet Adapter
-       * M150PM - Single-port OCP 2.0 Multi-Host 50 Gigabit Ethernet Adapter
-       * M210P - Dual-port OCP 2.0 10 Gigabit Ethernet Adapter
-       * M210TP - Dual-port OCP 2.0 10 Gigabit Ethernet Adapter
-       * M11000G - Single-port OCP 2.0 10/25/50/100 Gigabit Ethernet Adapter
-       * N150G - Single-port OCP 3.0 50 Gigabit Ethernet Adapter
-       * M225P - Dual-port OCP 2.0 10/25 Gigabit Ethernet Adapter
-       * N210P - Dual-port OCP 3.0 10 Gigabit Ethernet Adapter
-       * N210TP - Dual-port OCP 3.0 10 Gigabit Ethernet Adapter
-       * N225P - Dual-port OCP 3.0 10/25 Gigabit Ethernet Adapter
-       * N250G - Dual-port OCP 3.0 50 Gigabit Ethernet Adapter
-       * N410SG - Quad-port OCP 3.0 10 Gigabit Ethernet Adapter
-       * N410SGBT - Quad-port OCP 3.0 10 Gigabit Ethernet Adapter
-       * N425G - Quad-port OCP 3.0 10/25 Gigabit Ethernet Adapter
-       * N1100G - Single-port OCP 3.0 10/25/50/100 Gigabit Ethernet Adapter
-       * N2100G - Dual-port OCP 3.0 10/25/50/100 Gigabit Ethernet Adapter
-       * N2200G - Dual-port OCP 3.0 10/25/50/100/200 Gigabit Ethernet Adapter
-       * P150P - Single-port 50 Gigabit Ethernet Adapter
-       * P210P - Dual-port 10 Gigabit Ethernet Adapter
-       * P210TP - Dual-port 10 Gigabit Ethernet Adapter
-       * P225P - Dual-port 10/25 Gigabit Ethernet Adapter
-       * P410SG - Quad-port 10 Gigabit Ethernet Adapter
-       * P410SGBT - Quad-port 10 Gigabit Ethernet Adapter
-       * P425G - Quad-port 10/25 Gigabit Ethernet Adapter
-       * P1100G - Single-port 10/25/50/100 Gigabit Ethernet Adapter
-       * P2100G - Dual-port 10/25/50/100 Gigabit Ethernet Adapter
-       * P2200G - Dual-port 10/25/50/100/200 Gigabit Ethernet Adapter
-
-    Information about Ethernet adapters in the NetXtreme family of
-    adapters can be found in the `NetXtreme® Brand section
-    <https://www.broadcom.com/products/ethernet-connectivity/network-adapters/>`_
-    of the `Broadcom website <http://www.broadcom.com/>`_.
-
-  * **Broadcom BCM588XX Stingray Family of SmartNIC Adapters**
-
-       * PS410T - Quad-port 10 Gigabit Ethernet SmartNIC
-       * PS225 - Dual-port 25 Gigabit Ethernet SmartNIC
-       * PS250 - Dual-Port 50 Gigabit Ethernet SmartNIC
-
-    Information about the Stingray family of SmartNIC adapters can be found in the
-    `Stingray® Brand section
-    <https://www.broadcom.com/products/ethernet-connectivity/smartnic/>`_
-    of the `Broadcom website <http://www.broadcom.com/>`_.
-
-  * **Broadcom StrataGX® BCM5871X Series of Communucations Processors**
-
-    These ARM based processors target a broad range of networking applications
-    including virtual CPE (vCPE) and NFV appliances, 10G service routers and
-    gateways, control plane processing for Ethernet switches and network
-    attached storage (NAS).
-
-    Information about the StrataGX family of adapters can be found in the
-    `StrataGX® BCM58712
-    <http://www.broadcom.com/products/embedded-and-networking-processors/communications/bcm58712>`_
-    and `StrataGX® BCM58713
-    <http://www.broadcom.com/products/embedded-and-networking-processors/communications/bcm58713>`_
-    sections of the `Broadcom website <http://www.broadcom.com/>`_.
+BNXT PMD requires a kernel module (VFIO or UIO) for setting up a device, mapping device memory to userspace, registering interrupts, etc.
+VFIO is more secure than UIO, relying on IOMMU protection.
+UIO requires the IOMMU disabled or configured to pass-through mode.
+
+Operating Systems supported:
+
+* Red Hat Enterprise Linux release 8.1 (Ootpa)
+* Red Hat Enterprise Linux release 8.0 (Ootpa)
+* Red Hat Enterprise Linux Server release 7.7 (Maipo)
+* Red Hat Enterprise Linux Server release 7.6 (Maipo)
+* Red Hat Enterprise Linux Server release 7.5 (Maipo)
+* Red Hat Enterprise Linux Server release 7.4 (Maipo)
+* Red Hat Enterprise Linux Server release 7.3 (Maipo)
+* Red Hat Enterprise Linux Server release 7.2 (Maipo)
+* CentOS Linux release 8.0
+* CentOS Linux release 7.7
+* CentOS Linux release 7.6.1810
+* CentOS Linux release 7.5.1804
+* CentOS Linux release 7.4.1708
+* Fedora 31
+* FreeBSD 12.1
+* Suse 15SP1
+* Ubuntu 19.04
+* Ubuntu 18.04
+* Ubuntu 16.10
+* Ubuntu 16.04
+* Ubuntu 14.04
+
+The BNXT PMD supports operating with:
+
+* Linux vfio-pci
+* Linux uio_pci_generic
+* Linux igb_uio
+* BSD nic_uio
+
+Compiling BNXT PMD
+------------------
+
+To compile the BNXT PMD:
+
+.. code-block:: console
+
+    make config T=x86_64-native-linux-gcc && make // for x86-64
+    make config T=x86_32-native-linux-gcc && make // for x86-32
+    make config T=armv8a-linux-gcc && make // for ARMv8
+
+Bind the device to one of the kernel modules listed above
+
+.. code-block:: console
+
+    ./dpdk-devbind.py -b vfio-pci|igb_uio|uio_pci_generic bus_id:device_id.function_id
+
+Load an application (e.g. testpmd) with a default configuration (e.g. a single TX/RX queue):
+
+.. code-block:: console
+
+    ./testpmd -c 0xF -n 4 -- -i --portmask=0x1 --nb-cores=2
+
+Running BNXT PMD
+----------------
+
+The BNXT PMD can run on PF or VF.
+
+PCI-SIG Single Root I/O Virtualization (SR-IOV) involves the direct assignment of part of the network port resources to guest operating systems using the SR-IOV standard.
+The NIC is logically distributed among multiple virtual machines (VMs), while still having global data in common to share with the PF and other VFs.
+
+Sysadmin can create and configure VFs:
+
+.. code-block:: console
+
+  echo num_vfs > /sys/bus/pci/devices/domain_id:bus_id:device_id.function_id/sriov_numvfs
+  (ex) echo 4 > /sys/bus/pci/devices/0000:82:00.0/sriov_numvfs
+
+Sysadmin also can change the VF property such as MAC address, transparent VLAN, TX rate limit, and trusted VF:
+
+.. code-block:: console
+
+  ip link set pf_id vf vf_id mac (mac_address) vlan (vlan_id) txrate (rate_value) trust (enable|disable)
+  (ex) ip link set 0 vf 0 mac 00:11:22:33:44:55 vlan 0x100 txrate 100 trust disable
+
+Running on VF
+~~~~~~~~~~~~~
+
+Flow Bifurcation
+^^^^^^^^^^^^^^^^
+
+The Flow Bifurcation splits the incoming data traffic to user space applications (such as DPDK applications) and/or kernel space programs (such as the Linux kernel stack).
+It can direct some traffic, for example data plane traffic, to DPDK.
+The rest of the traffic, for example control plane traffic, would be redirected to the traditional Linux networking stack.
+
+Refer to https://doc.dpdk.org/guides/howto/flow_bifurcation.html
+
+Benefits of the flow bifurcation include:
+
+* Better performance with less CPU overhead, as user application can directly access the NIC for data path
+* NIC is still being controlled by the kernel, as control traffic is forwarded only to the kernel driver
+* Control commands, e.g. ethtool, will work as usual
+
+Running on a VF, the BNXT PMD supports the flow bifurcation with a combination of SR-IOV and packet classification and/or forwarding capability.
+In the simplest case of flow bifurcation, a PF driver configures a NIC to forward all user traffic directly to VFs with matching destination MAC address, while the rest of the traffic is forwarded to a PF.
+Note that the broadcast packets will be forwarded to both PF and VF.
+
+.. code-block:: console
+
+    (ex) ethtool --config-ntuple ens2f0 flow-type ether dst 00:01:02:03:00:01 vlan 10 vlan-mask 0xf000 action 0x100000000
+
+Trusted VF
+^^^^^^^^^^
+
+By default, VFs are *not* allowed to perform privileged operations, such as modifying the VF’s MAC address in the guest. These security measures are designed to prevent possible attacks.
+However, when a DPDK application can be trusted (e.g., OVS-DPDK, here), these operations performed by a VF would be legitimate and can be allowed.
+
+To allow a VF to request "trusted mode," the trusted VF concept was introduced in Linux kernel 4.4, allowing VFs to become "trusted" and perform some privileged operations.
+
+The BNXT PMD supports the trusted VF mode of operation. Only a PF can enable the trusted attribute on a VF. It is preferable to enable the Trusted setting on a VF before starting applications.
+However, the BNXT PMD handles dynamic changes in trusted settings as well.
+
+Note that control commands, e.g., ethtool, will work via the kernel PF driver, *not* via the trusted VF driver.
+
+Operations supported by trusted VF:
+
+* MAC address configuration
+* Flow rule creation
+
+Operations *not* supported by trusted VF:
+
+* Firmware upgrade
+* Promiscuous mode setting
+
+Running on PF
+~~~~~~~~~~~~~
+
+Unlike the VF, when the BNXT PMD runs on a PF there are no restrictions placed on the features which the PF can enable or request.
+In a multiport NIC, each port will have a corresponding PF. Also, depending on the configuration of the NIC, there can be more than one PF associated with a port.
+A sysadmin can load the kernel driver on one PF, and run BNXT PMD on the other PF or run the PMD on both the PFs. In such cases, the firmware picks one of the PFs as a master PF.
+
+Much like in the trusted VF, the DPDK application must be *trusted* and expected to be *well-behaved*.
+
+Features
+--------
+
+The BNXT PMD supports the following features:
+
+* Port Control
+    * Port MTU
+    * LED
+    * Flow Control and Autoneg
+* Packet Filtering
+    * Unicast MAC Filter
+    * Multicast MAC Filter
+    * VLAN Filtering
+    * Allmulticast Mode
+    * Promiscuous Mode
+* Stateless Offloads
+    * CRC Offload
+    * Checksum Offload (IPv4, TCP, and UDP)
+    * Multi-Queue (TSS and RSS)
+    * Segmentation and Reassembly (TSO and LRO)
+* VLAN Insert/Strip
+* Stats Collection
+* Generic Flow Offload
+
+Port Control
+~~~~~~~~~~~~
+
+**Port MTU**: BNXT PMD supports the MTU (Maximum Transmission Unit) up to 9,574 bytes:
+
+.. code-block:: console
+
+    testpmd> port config mtu (port_id) mtu_value
+    testpmd> show port info (port_id)
+
+**LED**: The application turns on (or off) a port LED, typically for port identification:
+
+.. code-block:: console
+
+    int rte_eth_led_on (uint16_t port_id)
+    int rte_eth_led_off (uint16_t port_id)
+
+**Flow Control and Autoneg**: The application turns on (or off) flow control and/or auto-negotiation on a port:
+
+.. code-block:: console
+
+    testpmd> set flow_ctrl rx (on|off) (port_id)
+    testpmd> set flow_ctrl tx (on|off) (port_id)
+    testpmd> set flow_ctrl autoneg (on|off) (port_id)
+
+Note that the BNXT PMD does *not* support some options and ignores them when requested:
+
+* high_water
+* low_water
+* pause_time
+* mac_ctrl_frame_fwd
+* send_xon
+
+Packet Filtering
+~~~~~~~~~~~~~~~~
+
+Applications control the packet-forwarding behaviors with packet filters.
+
+The BNXT PMD supports hardware-based packet filtering:
+
+* UC (Unicast) MAC Filters
+    * No unicast packets are forwarded to an application except the one with DMAC address added to the port
+    * At initialization, the station MAC address is added to the port
+* MC (Multicast) MAC Filters
+    * No multicast packets are forwarded to an application except the one with MC address added to the port
+    * When the application listens to a multicast group, it adds the MC address to the port
+* VLAN Filtering Mode
+    * When enabled, no packets are forwarded to an application except the ones with the VLAN tag assigned to the port
+* Allmulticast Mode
+    * When enabled, every multicast packet received on the port is forwarded to the application
+    * Typical usage is routing applications
+* Promiscuous Mode
+    * When enabled, every packet received on the port is forwarded to the application
+
+Unicast MAC Filter
+^^^^^^^^^^^^^^^^^^
+
+The application adds (or removes) MAC addresses to enable (or disable) whitelist filtering to accept packets.
+
+.. code-block:: console
+
+    testpmd> show port (port_id) macs
+    testpmd> mac_addr (add|remove) (port_id) (XX:XX:XX:XX:XX:XX)
+
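+For applications that manage MAC filters programmatically, a minimal sketch using the generic ethdev API is shown below; ``port_id`` and the address value are hypothetical, and error handling is omitted.
+
+.. code-block:: console
+
+    // add (or remove) a unicast MAC filter via the ethdev API
+    struct rte_ether_addr addr = {
+        .addr_bytes = { 0x00, 0x11, 0x22, 0x33, 0x44, 0x55 }
+    };
+
+    rte_eth_dev_mac_addr_add(port_id, &addr, 0);    // pool index 0
+    rte_eth_dev_mac_addr_remove(port_id, &addr);
+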
+Multicast MAC Filter
+^^^^^^^^^^^^^^^^^^^^
+
+Application adds (or removes) Multicast addresses to enable (or disable) whitelist filtering to accept packets.
+
+.. code-block:: console
+
+    testpmd> show port (port_id) mcast_macs
+    testpmd> mcast_addr (add|remove) (port_id) (XX:XX:XX:XX:XX:XX)
+
+Note that the BNXT PMD supports up to 16 MC MAC filters. If the user adds more than 16 MC MACs, the BNXT PMD puts the port into the Allmulticast mode.
+
+VLAN Filtering
+^^^^^^^^^^^^^^
+
+The application enables (or disables) VLAN filtering mode. When the mode is enabled, no packets are forwarded to the application except the ones carrying a VLAN tag that has been added to the port.
+
+.. code-block:: console
+
+    testpmd> vlan set filter (on|off) (port_id)
+    testpmd> rx_vlan (add|rm) (vlan_id) (port_id)
+
+Allmulticast Mode
+^^^^^^^^^^^^^^^^^
+
+The application enables (or disables) the allmulticast mode. When the mode is enabled, every multicast packet received is forwarded to the application.
+
+.. code-block:: console
+
+    testpmd> show port info (port_id)
+    testpmd> set allmulti (port_id) (on|off)
+
+Promiscuous Mode
+^^^^^^^^^^^^^^^^
+
+The application enables (or disables) the promiscuous mode. When the mode is enabled on a port, every packet received on the port is forwarded to the application.
+
+.. code-block:: console
+
+    testpmd> show port info (port_id)
+    testpmd> set promisc (port_id) (on|off)
+
+Stateless Offloads
+~~~~~~~~~~~~~~~~~~
+
+Like Linux, DPDK lets applications offload some stateless processing (such as checksum calculation) to the hardware, relieving the CPU from burning cycles on every packet.
+
+Listed below are the stateless offloads supported by the BNXT PMD:
+
+* CRC offload (for both TX and RX packets)
+* Checksum Offload (for both TX and RX packets)
+    * IPv4 Checksum Offload
+    * TCP Checksum Offload
+    * UDP Checksum Offload
+* Segmentation/Reassembly Offloads
+    * TCP Segmentation Offload (TSO)
+    * Large Receive Offload (LRO)
+* Multi-Queue
+    * Transmit Side Scaling (TSS)
+    * Receive Side Scaling (RSS)
+
+Also, the BNXT PMD supports stateless offloads on inner frames for tunneled packets. Listed below are the tunneling protocols supported by the BNXT PMD:
+
+* VXLAN
+* GRE
+* NVGRE
+
+Note that enabling (or disabling) stateless offloads requires applications to stop DPDK before changing configuration.
+
+CRC Offload
+^^^^^^^^^^^
+
+The FCS (Frame Check Sequence) in the Ethernet frame is a four-octet CRC (Cyclic Redundancy Check) that allows detection of corrupted data within the entire frame as received on the receiver side.
+
+The BNXT PMD supports hardware-based CRC offload:
+
+* TX: calculate and insert CRC
+* RX: check and remove CRC, notify the application on CRC error
+
+Note that the CRC offload is always turned on.
+
+Checksum Offload
+^^^^^^^^^^^^^^^^
+
+The application enables hardware checksum calculation for IPv4, TCP, and UDP.
+
+.. code-block:: console
+
+    testpmd> port stop (port_id)
+    testpmd> csum set (ip|tcp|udp|outer-ip|outer-udp) (sw|hw) (port_id)
+    testpmd> set fwd csum
+
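+A minimal per-packet sketch, using the standard DPDK mbuf flags rather than anything BNXT-specific, is shown below; the header lengths assume an untagged IPv4/TCP frame.
+
+.. code-block:: console
+
+    // request hardware IPv4 and TCP checksum for one TX mbuf
+    mbuf->l2_len = sizeof(struct rte_ether_hdr);
+    mbuf->l3_len = sizeof(struct rte_ipv4_hdr);
+    mbuf->ol_flags |= PKT_TX_IPV4 | PKT_TX_IP_CKSUM | PKT_TX_TCP_CKSUM;
+
+    // the TCP pseudo-header checksum is expected to be pre-filled,
+    // e.g. with rte_ipv4_phdr_cksum(), before calling rte_eth_tx_burst()
+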
+Multi-Queue
+^^^^^^^^^^^
+
+Multi-Queue, also known as TSS (Transmit Side Scaling) or RSS (Receive Side Scaling), is a common networking technique that allows for more efficient load balancing across multiple CPU cores.
+
+The application enables multiple TX and RX queues when it starts.
+
+.. code-block:: console
+
+    testpmd -l 1,3,5 --master-lcore 1 --txq=2 --rxq=2 --nb-cores=2
+
+**TSS**
+
+TSS distributes network transmit processing across several hardware-based transmit queues, allowing outbound network traffic to be processed by multiple CPU cores.
+
+**RSS**
+
+RSS distributes network receive processing across several hardware-based receive queues, allowing inbound network traffic to be processed by multiple CPU cores.
+
+The application can select the RSS mode, i.e. select the header fields that are included for hash calculation. The BNXT PMD supports the RSS mode of ``default|ip|tcp|udp|none``, where default mode is L3 and L4.
+
+For tunneled packets, RSS hash is calculated over inner frame header fields. Applications may want to select the tunnel header fields for hash calculation, and it will be supported in 20.08 using RSS level.
+
+.. code-block:: console
+
+    testpmd> port config (port_id) rss (all|default|ip|tcp|udp|none)
+
+    // note that the testpmd defaults the RSS mode to ip
+    // ensure to issue the command below to enable L4 header (TCP or UDP) along with IPv4 header
+    testpmd> port config (port_id) rss default
+
+    // to check the current RSS configuration, such as RSS function and RSS key
+    testpmd> show port (port_id) rss-hash key
+
+    // RSS is enabled by default. However, application can disable RSS as follows
+    testpmd> port config (port_id) rss none
+
+Application can change the flow distribution, i.e. remap the received traffic to CPU cores, using RSS RETA (Redirection Table).
+
+.. code-block:: console
+
+    // application queries the current RSS RETA configuration
+    testpmd> show port (port_id) rss reta size (mask0, mask1)
+
+    // application changes the RSS RETA configuration
+    testpmd> port config (port_id) rss reta (hash, queue) [, (hash, queue)]
+
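+For applications that program the RETA directly, a minimal sketch using the generic ethdev API is shown below; ``port_id`` is hypothetical and the layout simply alternates the table entries between the first two RX queues.
+
+.. code-block:: console
+
+    struct rte_eth_dev_info dev_info;
+    struct rte_eth_rss_reta_entry64 reta_conf[8] = { 0 };  // covers up to 512 entries
+    uint16_t i;
+
+    rte_eth_dev_info_get(port_id, &dev_info);
+
+    // illustrative only: alternate all table entries between queue 0 and queue 1
+    for (i = 0; i < dev_info.reta_size; i++) {
+        reta_conf[i / RTE_RETA_GROUP_SIZE].mask |= 1ULL << (i % RTE_RETA_GROUP_SIZE);
+        reta_conf[i / RTE_RETA_GROUP_SIZE].reta[i % RTE_RETA_GROUP_SIZE] = i % 2;
+    }
+    rte_eth_dev_rss_reta_update(port_id, reta_conf, dev_info.reta_size);
+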
+TSO
+^^^
+
+TSO (TCP Segmentation Offload), also known as LSO (Large Send Offload), enables the TCP/IP stack to pass to the NIC a datagram larger than the MTU (Maximum Transmission Unit). The NIC breaks it into multiple segments before sending it to the network.
+
+The BNXT PMD supports hardware-based TSO.
+
+.. code-block:: console
+
+    // display the status of TSO
+    testpmd> tso show (port_id)
+
+    // enable/disable TSO
+    testpmd> port config (port_id) tx_offload tcp_tso (on|off)
+
+    // set TSO segment size
+    testpmd> tso set segment_size (port_id)
+
+The BNXT PMD also supports hardware-based tunneled TSO.
+
+.. code-block:: console
+
+    // display the status of tunneled TSO
+    testpmd> tunnel_tso show (port_id)
+
+    // enable/disable tunneled TSO
+    testpmd> port config (port_id) tx_offload vxlan_tnl_tso|gre_tnl_tso (on|off)
+
+    // set tunneled TSO segment size
+    testpmd> tunnel_tso set segment_size (port_id)
+
+Note that the checksum offload is always assumed to be enabled for TSO.
+
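+A minimal per-packet sketch, using the standard DPDK mbuf fields rather than anything BNXT-specific, is shown below; the 1448-byte MSS is an arbitrary example value.
+
+.. code-block:: console
+
+    // request TSO for one TX mbuf carrying a large IPv4/TCP payload
+    mbuf->l2_len = sizeof(struct rte_ether_hdr);
+    mbuf->l3_len = sizeof(struct rte_ipv4_hdr);
+    mbuf->l4_len = sizeof(struct rte_tcp_hdr);
+    mbuf->tso_segsz = 1448;   // payload size of each resulting segment
+    mbuf->ol_flags |= PKT_TX_IPV4 | PKT_TX_IP_CKSUM |
+                      PKT_TX_TCP_CKSUM | PKT_TX_TCP_SEG;
+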
+LRO
+^^^
+
+LRO (Large Receive Offload) enables the NIC to aggregate multiple incoming TCP/IP packets from a single stream into a larger buffer, before passing it to the networking stack.
+
+The BNXT PMD supports hardware-based LRO.
+
+.. code-block:: console
+
+    // display the status of LRO
+    testpmd> show port (port_id) rx_offload capabilities
+    testpmd> show port (port_id) rx_offload configuration
+
+    // enable/disable LRO
+    testpmd> port config (port_id) rx_offload tcp_lro (on|off)
+
+    // set max LRO packet (datagram) size
+    testpmd> port config (port_id) max-lro-pkt-size (max_size)
+
+The BNXT PMD also supports tunneled LRO.
+
+Some applications, such as routing, should *not* change the packet headers as they pass through (i.e. received from and sent back to the network). In such a case, GRO (Generic Receive Offload) should be used instead of LRO.
+
+VLAN Insert/Strip
+~~~~~~~~~~~~~~~~~
+
+DPDK application offloads VLAN insert/strip to improve performance. The BNXT PMD supports hardware-based VLAN insert/strip offload for both single and double VLAN packets.
+
+
+VLAN Insert
+^^^^^^^^^^^
+
+Application configures the VLAN TPID (Tag Protocol ID). By default, the TPID is 0x8100.
+
+.. code-block:: console
+
+    // configure outer TPID value for a port
+    testpmd> vlan set outer tpid (tpid_value) (port_id)
+
+Setting the inner TPID will be rejected, as the BNXT PMD supports inserting only an outer VLAN. Note that when a packet has a single VLAN, the tag is considered as outer, i.e. the inner VLAN is relevant only when a packet is double-tagged.
+
+The BNXT PMD supports various TPID values shown below. Any other values will be rejected.
+
+* ``0x8100``
+* ``0x88a8``
+* ``0x9100``
+* ``0x9200``
+* ``0x9300``
+
+The BNXT PMD supports the VLAN insert offload on a per-packet basis. The application provides the TCI (Tag Control Info) for a packet via mbuf. In turn, the BNXT PMD inserts the VLAN tag (via hardware) using the provided TCI along with the configured TPID.
+
+.. code-block:: console
+
+    // enable VLAN insert offload
+    testpmd> port config (port_id) tx_offload vlan_insert|qinq_insert (on|off)
+
+    if (mbuf->ol_flags & PKT_TX_QINQ)       // case-1: insert VLAN to single-tagged packet
+        tci_value = mbuf->vlan_tci_outer
+    else if (mbuf->ol_flags & PKT_TX_VLAN)  // case-2: insert VLAN to untagged packet
+        tci_value = mbuf->vlan_tci
+
+VLAN Strip
+^^^^^^^^^^
+
+The application configures the per-port VLAN strip offload.
+
+.. code-block:: console
+
+    // enable VLAN strip on a port
+    testpmd> port config (port_id) rx_offload vlan_strip (on|off)
+
+    // notify application VLAN strip via mbuf
+    mbuf->ol_flags |= PKT_RX_VLAN | PKT_RX_STRIPPED // outer VLAN is found and stripped
+    mbuf->vlan_tci = tci_value                      // TCI of the stripped VLAN
+
+Time Synchronization
+~~~~~~~~~~~~~~~~~~~~
+
+System operators may run a PTP (Precision Time Protocol) client application to synchronize the time on the NIC (and optionally, on the system) to a PTP master.
+
+The BNXT PMD supports a PTP client application to communicate with a PTP master clock using DPDK IEEE1588 APIs. Note that the PTP client application needs to run on a PF, and vector mode needs to be disabled.
+
+For the PTP time synchronization support, the BNXT PMD must be compiled with ``CONFIG_RTE_LIBRTE_IEEE1588=y`` (this compilation flag is currently pending).
+
+.. code-block:: console
+
+    testpmd> set fwd ieee1588 // enable IEEE 1588 mode
+
+When enabled, the BNXT PMD configures hardware to insert IEEE 1588 timestamps to the outgoing PTP packets and reports IEEE 1588 timestamps from the incoming PTP packets to application via mbuf.
+
+.. code-block:: console
+
+    // RX packet completion will indicate whether the packet is PTP
+    mbuf->ol_flags |= PKT_RX_IEEE1588_PTP
+
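+A minimal sketch of the generic ethdev IEEE 1588 calls an application might use is shown below; ``port_id`` is hypothetical and error handling is omitted.
+
+.. code-block:: console
+
+    struct timespec ts;
+
+    rte_eth_timesync_enable(port_id);   // start timestamping on the port
+
+    // after receiving a PTP packet (PKT_RX_IEEE1588_TMST set in ol_flags)
+    rte_eth_timesync_read_rx_timestamp(port_id, &ts, mbuf->timesync);
+
+    // after transmitting a PTP packet
+    rte_eth_timesync_read_tx_timestamp(port_id, &ts);
+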
+Statistics Collection
+~~~~~~~~~~~~~~~~~~~~~
+
+In Linux, *ethtool -S* enables us to query the NIC stats. DPDK provides similar functionality via rte_eth_stats and rte_eth_xstats.
+
+The BNXT PMD supports both basic and extended stats collection:
+
+* Basic stats
+* Extended stats
+
+Basic Stats
+^^^^^^^^^^^
+
+The application collects per-port and per-queue stats using rte_eth_stats APIs.
+
+.. code-block:: console
+
+    testpmd> show port stats (port_id)
+
+Basic stats include:
+
+* ipackets
+* ibytes
+* opackets
+* obytes
+* imissed
+* ierrors
+* oerrors
+
+By default, the BNXT PMD supports per-queue stats for 16 queues. For more than 16 queues, BNXT PMD should be compiled with ``CONFIG_RTE_ETHDEV_QUEUE_STAT_CNTRS`` set to the desired number of queues.
+
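+The same counters can be read programmatically; a minimal sketch using the generic ethdev API is shown below, with a hypothetical ``port_id``.
+
+.. code-block:: console
+
+    struct rte_eth_stats stats;
+
+    if (rte_eth_stats_get(port_id, &stats) == 0)
+        printf("rx=%" PRIu64 " tx=%" PRIu64 " missed=%" PRIu64 "\n",
+               stats.ipackets, stats.opackets, stats.imissed);
+
+    rte_eth_stats_reset(port_id);   // optional: zero the basic counters
+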
+Extended Stats
+^^^^^^^^^^^^^^
+
+Unlike basic stats, the extended stats are vendor-specific, i.e. each vendor provides its own set of counters.
+
+The BNXT PMD provides a rich set of counters, including per-flow counters, per-cos counters, per-priority counters, etc.
+
+.. code-block:: console
+
+    testpmd> show port xstats (port_id)
+
+Shown below is the elaborated sequence to retrieve extended stats:
+
+.. code-block:: console
+
+    // application queries the number of xstats
+    len = rte_eth_xstats_get(port_id, NULL, 0);
+    // BNXT PMD returns the size of xstats array (i.e. the number of entries)
+    // BNXT PMD returns 0, if the feature is compiled out or disabled
+
+    // application allocates memory for xstats
+    struct rte_eth_xstat_name *names; // each name is 64 characters or less
+    struct rte_eth_xstat *xstats;
+    names = calloc(len, sizeof(*names));
+    xstats = calloc(len, sizeof(*xstats));
+
+    // application retrieves xstats // names and values
+    ret = rte_eth_xstats_get_names(port_id, names, len);
+    ret = rte_eth_xstats_get(port_id, xstats, len);
+
+    // application checks the xstats
+    // application may repeat the below:
+    ret = rte_eth_xstats_reset(port_id); // reset the xstats
+
+    // reset can be skipped, if application wants to see accumulated stats
+    // run traffic
+    // probably stop the traffic
+    // retrieve xstats // no need to retrieve xstats names again
+    // check xstats
+
+See Appendix section for a list of extended counters provided by the BNXT PMD.
+
+Generic Flow Offload
+~~~~~~~~~~~~~~~~~~~~
+
+Applications can benefit from offloading all or part of the flow processing to hardware. For example, applications can offload packet classification only (partial offload) or the whole match-action processing (full offload).
+
+DPDK offers the Generic Flow API (rte_flow API) to configure hardware to perform flow processing.
+
+Listed below are the rte_flow APIs the BNXT PMD supports (a usage sketch follows the list):
+
+* rte_flow_validate
+* rte_flow_create
+* rte_flow_destroy
+* rte_flow_flush
+
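+Shown below is a minimal, illustrative rte_flow sketch that steers packets with a given destination MAC to RX queue 1; the MAC value and ``port_id`` are hypothetical, and the patterns and actions actually accepted depend on the device and firmware.
+
+.. code-block:: console
+
+    struct rte_flow_attr attr = { .ingress = 1 };
+    struct rte_flow_item_eth eth_spec = {
+        .dst.addr_bytes = { 0x00, 0x01, 0x02, 0x03, 0x00, 0x01 },
+    };
+    struct rte_flow_item_eth eth_mask = {
+        .dst.addr_bytes = { 0xff, 0xff, 0xff, 0xff, 0xff, 0xff },
+    };
+    struct rte_flow_item pattern[] = {
+        { .type = RTE_FLOW_ITEM_TYPE_ETH, .spec = &eth_spec, .mask = &eth_mask },
+        { .type = RTE_FLOW_ITEM_TYPE_END },
+    };
+    struct rte_flow_action_queue queue = { .index = 1 };
+    struct rte_flow_action actions[] = {
+        { .type = RTE_FLOW_ACTION_TYPE_QUEUE, .conf = &queue },
+        { .type = RTE_FLOW_ACTION_TYPE_END },
+    };
+    struct rte_flow_error error;
+    struct rte_flow *flow = NULL;
+
+    if (rte_flow_validate(port_id, &attr, pattern, actions, &error) == 0)
+        flow = rte_flow_create(port_id, &attr, pattern, actions, &error);
+
+    // later: rte_flow_destroy(port_id, flow, &error) or rte_flow_flush(port_id, &error)
+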
+Host Based Flow Table Management
+^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
+
+Starting with 20.05, the BNXT PMD supports host-based flow table management. This is a new mechanism that should allow higher flow scalability than what is currently supported. This new approach also defines a new rte_flow parser and mapper, which currently supports basic packet classification in the receive path.
+
+The feature uses a newly implemented control-plane firmware interface which optimizes flow insertions and deletions.
+
+This is a tech preview feature, and is disabled by default. It can be enabled using bnxt devargs, for example: "-w 0000:0d:00.0,host-based-truflow=1".
+
+Application Support
+-------------------
+
+Firmware
+~~~~~~~~
+
+The BNXT PMD allows the application to retrieve the firmware version.
+
+.. code-block:: console
+
+    testpmd> show port info (port_id)
+
+Note that applications cannot update the firmware using the BNXT PMD.
+
+Multiple Processes
+~~~~~~~~~~~~~~~~~~
+
+When two or more DPDK applications (e.g., testpmd and dpdk-pdump) share a single instance of DPDK, the BNXT PMD supports a single primary application and one or more secondary applications. Note that the DPDK-layer (*not* the PMD-layer) ensures there is only one primary application.
+
+There are two modes:
+
+Manual mode
+
+* Application notifies whether it is primary or secondary using *proc-type* flag
+* 1st process should be spawned with ``--proc-type=primary``
+* All subsequent processes should be spawned with ``--proc-type=secondary``
+
+Auto detection mode
+
+* Application is using ``proc-type=auto`` flag
+* A process is spawned as a secondary if a primary is already running
+
+The BNXT PMD uses this information to skip device initialization, i.e. it performs device initialization only when being brought up by a primary application.
+
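+A hypothetical launch sequence (core and file names are illustrative) could look like this; note that dpdk-pdump always runs as a secondary process:
+
+.. code-block:: console
+
+    // manual mode
+    ./testpmd -l 0-3 -n 4 --proc-type=primary -- -i
+    ./dpdk-pdump -- --pdump 'port=0,queue=*,rx-dev=/tmp/rx.pcap'
+
+    // auto detection mode
+    ./testpmd -l 0-3 -n 4 --proc-type=auto -- -i
+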
+Runtime Queue Setup
+~~~~~~~~~~~~~~~~~~~
+
+Typically, a DPDK application allocates TX and RX queues statically: i.e. queues are allocated at start. However, an application may want to increase (or decrease) the number of queues dynamically for various reasons, e.g. power savings.
+
+The BNXT PMD allows applications to increase or decrease the number of active queues at runtime.
+
+.. code-block:: console
+
+    testpmd> port config all (rxq|txq) (num_queues)
+
+Note that a DPDK application must allocate default queues (one for TX and one for RX at minimum) at initialization.
+
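+With the generic ethdev API, a queue that was left unconfigured at start can be set up and started while the port is running, provided the device advertises the runtime setup capability; a minimal sketch is shown below (``port_id``, ``new_qid`` and ``mb_pool`` are hypothetical).
+
+.. code-block:: console
+
+    struct rte_eth_dev_info dev_info;
+
+    rte_eth_dev_info_get(port_id, &dev_info);
+    if (dev_info.dev_capa & RTE_ETH_DEV_CAPA_RUNTIME_RX_QUEUE_SETUP) {
+        // the queue index must be within the count given to rte_eth_dev_configure()
+        rte_eth_rx_queue_setup(port_id, new_qid, 512, rte_socket_id(), NULL, mb_pool);
+        rte_eth_dev_rx_queue_start(port_id, new_qid);
+    }
+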
+Descriptor Status
+~~~~~~~~~~~~~~~~~
+
+Applications may use the descriptor status for various reasons, e.g. for power savings. For example, an application may stop polling and change to interrupt mode when the descriptor status shows no packets to service for a while.
+
+The BNXT PMD allows the application to retrieve both TX and RX descriptor status.
+
+.. code-block:: console
+
+    testpmd> show port (port_id) (rxq|txq) (queue_id) desc (desc_id) status
+
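+Programmatically, the generic ethdev descriptor status API can be used; a minimal sketch is shown below (``port_id``, ``queue_id`` and ``offset`` are hypothetical).
+
+.. code-block:: console
+
+    int st = rte_eth_rx_descriptor_status(port_id, queue_id, offset);
+
+    if (st == RTE_ETH_RX_DESC_AVAIL)
+        ;   // descriptor is free, no packet received yet
+    else if (st == RTE_ETH_RX_DESC_DONE)
+        ;   // a received packet is waiting to be processed
+
+    st = rte_eth_tx_descriptor_status(port_id, queue_id, offset);
+    // RTE_ETH_TX_DESC_FULL / RTE_ETH_TX_DESC_DONE / RTE_ETH_TX_DESC_UNAVAIL
+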
+Bonding
+~~~~~~~
+
+DPDK implements a light-weight library to allow PMDs to be bonded together and provide a single logical PMD to the application.
+
+.. code-block:: console
+
+    testpmd -l 0-3 -n4 --vdev 'net_bonding0,mode=0,slave=<PCI B:D.F device 1>,slave=<PCI B:D.F device 2>,mac=XX:XX:XX:XX:XX:XX' -- --socket-num=1 -i --port-topology=chained
+    (ex) testpmd -l 1,3,5,7,9 -n4 --vdev 'net_bonding0,mode=0,slave=0000:82:00.0,slave=0000:82:00.1,mac=00:1e:67:1d:fd:1d' -- --socket-num=1 -i --port-topology=chained
+
+Vector Processing
+-----------------
+
+Vector processing provides significantly improved performance over scalar processing.
+
+The BNXT PMD supports vector processing using SSE (Streaming SIMD Extensions) instructions on x86 platforms. The BNXT vPMD (vector mode PMD) is currently limited to Intel/AMD CPU architectures; support for ARM is *not* currently implemented.
+
+This improved performance comes from several optimizations:
+
+* Batching
+    * TX: processing completions in bulk
+    * RX: allocating mbufs in bulk
+* Chained mbufs are *not* supported, i.e. a packet should fit a single mbuf
+* Some stateless offloads are *not* supported with vector processing
+    * TX: no offloads will be supported
+    * RX: reduced RX offloads (listed below) will be supported::
+
+       DEV_RX_OFFLOAD_VLAN_STRIP
+       DEV_RX_OFFLOAD_KEEP_CRC
+       DEV_RX_OFFLOAD_JUMBO_FRAME
+       DEV_RX_OFFLOAD_IPV4_CKSUM
+       DEV_RX_OFFLOAD_UDP_CKSUM
+       DEV_RX_OFFLOAD_TCP_CKSUM
+       DEV_RX_OFFLOAD_OUTER_IPV4_CKSUM
+       DEV_RX_OFFLOAD_RSS_HASH
+       DEV_RX_OFFLOAD_VLAN_FILTER
+
+The BNXT Vector PMD is enabled in DPDK builds by default. When required, it can be disabled in the DPDK build configuration by setting ``CONFIG_RTE_LIBRTE_BNXT_INC_VECTOR=n``.
+
+However, a decision to enable vector mode will be made when the port transitions from stopped to started. Any TX offloads or some RX offloads (other than those listed above) will disable the vector mode.
+Offload configuration changes that impact vector mode must be made when the port is stopped.
+
+Note that TX (or RX) vector mode can be enabled independently from RX (or TX) vector mode.
+
+Appendix
+--------
+
+Supported Chipsets and Adapters
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+
+BCM5730x NetXtreme-C® Family of Ethernet Network Controllers
+^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
+
+Information about Ethernet adapters in the NetXtreme family of adapters can be found in the `NetXtreme® Brand section <https://www.broadcom.com/products/ethernet-connectivity/network-adapters/>`_ of the `Broadcom website <http://www.broadcom.com/>`_.
+
+* ``M150c ... Single-port 40/50 Gigabit Ethernet Adapter``
+* ``P150c ... Single-port 40/50 Gigabit Ethernet Adapter``
+* ``P225c ... Dual-port 10/25 Gigabit Ethernet Adapter``
+
+BCM574xx/575xx NetXtreme-E® Family of Ethernet Network Controllers
+^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
+
+Information about Ethernet adapters in the NetXtreme family of adapters can be found in the `NetXtreme® Brand section <https://www.broadcom.com/products/ethernet-connectivity/network-adapters/>`_ of the `Broadcom website <http://www.broadcom.com/>`_.
+
+* ``M125P .... Single-port OCP 2.0 10/25 Gigabit Ethernet Adapter``
+* ``M150P .... Single-port OCP 2.0 50 Gigabit Ethernet Adapter``
+* ``M150PM ... Single-port OCP 2.0 Multi-Host 50 Gigabit Ethernet Adapter``
+* ``M210P .... Dual-port OCP 2.0 10 Gigabit Ethernet Adapter``
+* ``M210TP ... Dual-port OCP 2.0 10 Gigabit Ethernet Adapter``
+* ``M1100G ... Single-port OCP 2.0 10/25/50/100 Gigabit Ethernet Adapter``
+* ``N150G .... Single-port OCP 3.0 50 Gigabit Ethernet Adapter``
+* ``M225P .... Dual-port OCP 2.0 10/25 Gigabit Ethernet Adapter``
+* ``N210P .... Dual-port OCP 3.0 10 Gigabit Ethernet Adapter``
+* ``N210TP ... Dual-port OCP 3.0 10 Gigabit Ethernet Adapter``
+* ``N225P .... Dual-port OCP 3.0 10/25 Gigabit Ethernet Adapter``
+* ``N250G .... Dual-port OCP 3.0 50 Gigabit Ethernet Adapter``
+* ``N410SG ... Quad-port OCP 3.0 10 Gigabit Ethernet Adapter``
+* ``N410SGBT . Quad-port OCP 3.0 10 Gigabit Ethernet Adapter``
+* ``N425G .... Quad-port OCP 3.0 10/25 Gigabit Ethernet Adapter``
+* ``N1100G ... Single-port OCP 3.0 10/25/50/100 Gigabit Ethernet Adapter``
+* ``N2100G ... Dual-port OCP 3.0 10/25/50/100 Gigabit Ethernet Adapter``
+* ``N2200G ... Dual-port OCP 3.0 10/25/50/100/200 Gigabit Ethernet Adapter``
+* ``P150P .... Single-port 50 Gigabit Ethernet Adapter``
+* ``P210P .... Dual-port 10 Gigabit Ethernet Adapter``
+* ``P210TP ... Dual-port 10 Gigabit Ethernet Adapter``
+* ``P225P .... Dual-port 10/25 Gigabit Ethernet Adapter``
+* ``P410SG ... Quad-port 10 Gigabit Ethernet Adapter``
+* ``P410SGBT . Quad-port 10 Gigabit Ethernet Adapter``
+* ``P425G .... Quad-port 10/25 Gigabit Ethernet Adapter``
+* ``P1100G ... Single-port 10/25/50/100 Gigabit Ethernet Adapter``
+* ``P2100G ... Dual-port 10/25/50/100 Gigabit Ethernet Adapter``
+* ``P2200G ... Dual-port 10/25/50/100/200 Gigabit Ethernet Adapter``
+
+BCM588xx NetXtreme-S® Family of SmartNIC Network Controllers
+^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
+
+Information about the Stingray family of SmartNIC adapters can be found in the `Stingray® Brand section <https://www.broadcom.com/products/ethernet-connectivity/smartnic/>`_ of the `Broadcom website <http://www.broadcom.com/>`_.
+
+* ``PS225 ... Dual-port 25 Gigabit Ethernet SmartNIC``
+
+BCM5873x StrataGX® Family of Communications Processors
+^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
+
+These ARM-based processors target a broad range of networking applications, including virtual CPE (vCPE) and NFV appliances, 10G service routers and gateways, control plane processing for Ethernet switches, and network-attached storage (NAS).
+
+* ``StrataGX BCM58732 ... Octal-Core 3.0GHz 64-bit ARM®v8 Cortex®-A72 based SoC``
-- 
2.21.1 (Apple Git-122.3)


^ permalink raw reply	[flat|nested] 16+ messages in thread

* [dpdk-dev] [PATCH v4] doc: update bnxt guide
  2020-05-18  9:34     ` Ferruh Yigit
  2020-05-18 17:38       ` [dpdk-dev] [PATCH v3] " Ajit Khaparde
@ 2020-05-18 17:58       ` Ajit Khaparde
  2020-05-18 22:04         ` Ajit Khaparde
  1 sibling, 1 reply; 16+ messages in thread
From: Ajit Khaparde @ 2020-05-18 17:58 UTC (permalink / raw)
  To: dev; +Cc: ferruh.yigit, JP Lee, Kovacevic Marko

    - Update list of supported adapters.
    - Update list of supported features.
    - Add some details to describe the features.
    - Remove obsolete limitations.
    - Fix and update links.

Signed-off-by: JP Lee <jongpil.lee@broadcom.com>
Signed-off-by: Ajit Khaparde <ajit.khaparde@broadcom.com>
Acked-by: Kovacevic Marko <marko.kovacevic@intel.com>
---
v1->v2: Some lines were too long in v1. Made them shorter. Checked for typos.
v2->v3: Removed list of extended stats.
v3->v4: Removed an irrelevant line.
---
 doc/guides/nics/bnxt.rst | 893 +++++++++++++++++++++++++++++++++------
 1 file changed, 763 insertions(+), 130 deletions(-)

diff --git a/doc/guides/nics/bnxt.rst b/doc/guides/nics/bnxt.rst
index 434ba9d6c..3aad7ea4a 100644
--- a/doc/guides/nics/bnxt.rst
+++ b/doc/guides/nics/bnxt.rst
@@ -1,138 +1,771 @@
-..  SPDX-License-Identifier: BSD-3-Clause
-    Copyright 2016-2019 Broadcom
+.. SPDX-License-Identifier: BSD-3-Clause
+   Copyright 2020 Broadcom Inc.
 
 BNXT Poll Mode Driver
 =====================
 
-The BNXT PMD (**librte_pmd_bnxt**) implements support for adapters based on
-Ethernet controllers and SoCs belonging to the **Broadcom BCM5730X NetXtreme-C®
-Family of Ethernet Network Controllers**, the **Broadcom BCM574XX/BCM575XX
-NetXtreme-E® Family of Ethernet Network Controllers**, the **Broadcom BCM588XX
-Stingray Family of SmartNIC Adapters**, and the **Broadcom StrataGX® BCM5871X
-Series of Communications Processors**.  A complete list with links to reference
-material is included below.
+The Broadcom BNXT PMD (Poll Mode Driver, librte_pmd_bnxt) implements support for adapters based on Ethernet controllers and SoCs belonging to the Broadcom BCM574XX/BCM575XX NetXtreme-E® Family of Ethernet Network Controllers, the Broadcom BCM588XX Stingray Family of Smart NIC Adapters, and the Broadcom StrataGX® BCM5873X Series of Communications Processors.
 
+A complete list with links to reference material is in the Appendix section.
 
-BNXT PMD Features
+CPU Support
+-----------
+
+BNXT PMD supports multiple CPU architectures, including x86-32, x86-64, and ARMv8.
+
+Kernel Dependency
+-----------------
+
+BNXT PMD requires a kernel module (VFIO or UIO) for setting up a device, mapping device memory to userspace, registering interrupts, etc.
+VFIO is more secure than UIO, relying on IOMMU protection.
+UIO requires the IOMMU disabled or configured to pass-through mode.
+
+Operating Systems supported:
+
+* Red Hat Enterprise Linux release 8.1 (Ootpa)
+* Red Hat Enterprise Linux release 8.0 (Ootpa)
+* Red Hat Enterprise Linux Server release 7.7 (Maipo)
+* Red Hat Enterprise Linux Server release 7.6 (Maipo)
+* Red Hat Enterprise Linux Server release 7.5 (Maipo)
+* Red Hat Enterprise Linux Server release 7.4 (Maipo)
+* Red Hat Enterprise Linux Server release 7.3 (Maipo)
+* Red Hat Enterprise Linux Server release 7.2 (Maipo)
+* CentOS Linux release 8.0
+* CentOS Linux release 7.7
+* CentOS Linux release 7.6.1810
+* CentOS Linux release 7.5.1804
+* CentOS Linux release 7.4.1708
+* Fedora 31
+* FreeBSD 12.1
+* Suse 15SP1
+* Ubuntu 19.04
+* Ubuntu 18.04
+* Ubuntu 16.10
+* Ubuntu 16.04
+* Ubuntu 14.04
+
+The BNXT PMD supports operating with:
+
+* Linux vfio-pci
+* Linux uio_pci_generic
+* Linux igb_uio
+* BSD nic_uio
+
+Compiling BNXT PMD
+------------------
+
+To compile the BNXT PMD:
+
+.. code-block:: console
+
+    make config T=x86_64-native-linux-gcc && make // for x86-64
+    make config T=x86_32-native-linux-gcc && make // for x86-32
+    make config T=armv8a-linux-gcc && make // for ARMv8
+
+Bind the device to one of the kernel modules listed above
+
+.. code-block:: console
+
+    ./dpdk-devbind.py -b vfio-pci|igb_uio|uio_pci_generic bus_id:device_id.function_id
+
+Load an application (e.g. testpmd) with a default configuration (e.g. a single TX/RX queue):
+
+.. code-block:: console
+
+    ./testpmd -c 0xF -n 4 -- -i --portmask=0x1 --nb-cores=2
+
+Running BNXT PMD
+----------------
+
+The BNXT PMD can run on PF or VF.
+
+PCI-SIG Single Root I/O Virtualization (SR-IOV) involves the direct assignment of part of the network port resources to guest operating systems using the SR-IOV standard.
+The NIC is logically distributed among multiple virtual machines (VMs), while still having global data in common to share with the PF and other VFs.
+
+Sysadmin can create and configure VFs:
+
+.. code-block:: console
+
+  echo num_vfs > /sys/bus/pci/devices/domain_id:bus_id:device_id.function_id/sriov_numvfs
+  (ex) echo 4 > /sys/bus/pci/devices/0000:82:00.0/sriov_numvfs
+
+Sysadmin also can change the VF property such as MAC address, transparent VLAN, TX rate limit, and trusted VF:
+
+.. code-block:: console
+
+  ip link set pf_id vf vf_id mac (mac_address) vlan (vlan_id) txrate (rate_value) trust (enable|disable)
+  (ex) ip link set 0 vf 0 mac 00:11:22:33:44:55 vlan 0x100 txrate 100 trust disable
+
+Running on VF
+~~~~~~~~~~~~~
+
+Flow Bifurcation
+^^^^^^^^^^^^^^^^
+
+The Flow Bifurcation splits the incoming data traffic to user space applications (such as DPDK applications) and/or kernel space programs (such as the Linux kernel stack).
+It can direct some traffic, for example data plane traffic, to DPDK.
+The rest of the traffic, for example control plane traffic, would be redirected to the traditional Linux networking stack.
+
+Refer to https://doc.dpdk.org/guides/howto/flow_bifurcation.html
+
+Benefits of the flow bifurcation include:
+
+* Better performance with less CPU overhead, as user application can directly access the NIC for data path
+* NIC is still being controlled by the kernel, as control traffic is forwarded only to the kernel driver
+* Control commands, e.g. ethtool, will work as usual
+
+Running on a VF, the BNXT PMD supports the flow bifurcation with a combination of SR-IOV and packet classification and/or forwarding capability.
+In the simplest case of flow bifurcation, a PF driver configures a NIC to forward all user traffic directly to VFs with matching destination MAC address, while the rest of the traffic is forwarded to a PF.
+Note that the broadcast packets will be forwarded to both PF and VF.
+
+.. code-block:: console
+
+    (ex) ethtool --config-ntuple ens2f0 flow-type ether dst 00:01:02:03:00:01 vlan 10 vlan-mask 0xf000 action 0x100000000
+
+Trusted VF
+^^^^^^^^^^
+
+By default, VFs are *not* allowed to perform privileged operations, such as modifying the VF’s MAC address in the guest. These security measures are designed to prevent possible attacks.
+However, when a DPDK application can be trusted (e.g., OVS-DPDK, here), these operations performed by a VF would be legitimate and can be allowed.
+
+To allow a VF to request "trusted mode," the trusted VF concept was introduced in Linux kernel 4.4, allowing VFs to become "trusted" and perform some privileged operations.
+
+The BNXT PMD supports the trusted VF mode of operation. Only a PF can enable the trusted attribute on a VF. It is preferable to enable the Trusted setting on a VF before starting applications.
+However, the BNXT PMD handles dynamic changes in trusted settings as well.
+
+Note that control commands, e.g., ethtool, will work via the kernel PF driver, *not* via the trusted VF driver.
+
+Operations supported by trusted VF:
+
+* MAC address configuration
+* Flow rule creation
+
+Operations *not* supported by trusted VF:
+
+* Firmware upgrade
+* Promiscuous mode setting
+
+Running on PF
+~~~~~~~~~~~~~
+
+Unlike the VF, when the BNXT PMD runs on a PF there are no restrictions placed on the features which the PF can enable or request.
+In a multiport NIC, each port will have a corresponding PF. Also, depending on the configuration of the NIC, there can be more than one PF associated with a port.
+A sysadmin can load the kernel driver on one PF, and run BNXT PMD on the other PF or run the PMD on both the PFs. In such cases, the firmware picks one of the PFs as a master PF.
+
+Much like in the trusted VF, the DPDK application must be *trusted* and expected to be *well-behaved*.
+
+Features
+--------
+
+The BNXT PMD supports the following features:
+
+* Port Control
+    * Port MTU
+    * LED
+    * Flow Control and Autoneg
+* Packet Filtering
+    * Unicast MAC Filter
+    * Multicast MAC Filter
+    * VLAN Filtering
+    * Allmulticast Mode
+    * Promiscuous Mode
+* Stateless Offloads
+    * CRC Offload
+    * Checksum Offload (IPv4, TCP, and UDP)
+    * Multi-Queue (TSS and RSS)
+    * Segmentation and Reassembly (TSO and LRO)
+* VLAN Insert/Strip
+* Stats Collection
+* Generic Flow Offload
+
+Port Control
+~~~~~~~~~~~~
+
+**Port MTU**: BNXT PMD supports the MTU (Maximum Transmission Unit) up to 9,574 bytes:
+
+.. code-block:: console
+
+    testpmd> port config mtu (port_id) mtu_value
+    testpmd> show port info (port_id)
+
+**LED**: The application turns on (or off) a port LED, typically for port identification:
+
+.. code-block:: console
+
+    int rte_eth_led_on (uint16_t port_id)
+    int rte_eth_led_off (uint16_t port_id)
+
+**Flow Control and Autoneg**: The application turns on (or off) flow control and/or auto-negotiation on a port:
+
+.. code-block:: console
+
+    testpmd> set flow_ctrl rx (on|off) (port_id)
+    testpmd> set flow_ctrl tx (on|off) (port_id)
+    testpmd> set flow_ctrl autoneg (on|off) (port_id)
+
+Note that the BNXT PMD does *not* support some options and ignores them when requested:
+
+* high_water
+* low_water
+* pause_time
+* mac_ctrl_frame_fwd
+* send_xon
+
+Packet Filtering
+~~~~~~~~~~~~~~~~
+
+Applications control the packet-forwarding behaviors with packet filters.
+
+The BNXT PMD supports hardware-based packet filtering:
+
+* UC (Unicast) MAC Filters
+    * No unicast packets are forwarded to an application except the one with DMAC address added to the port
+    * At initialization, the station MAC address is added to the port
+* MC (Multicast) MAC Filters
+    * No multicast packets are forwarded to an application except the one with MC address added to the port
+    * When the application listens to a multicast group, it adds the MC address to the port
+* VLAN Filtering Mode
+    * When enabled, no packets are forwarded to an application except the ones with the VLAN tag assigned to the port
+* Allmulticast Mode
+    * When enabled, every multicast packet received on the port is forwarded to the application
+    * Typical usage is routing applications
+* Promiscuous Mode
+    * When enabled, every packet received on the port is forwarded to the application
+
+Unicast MAC Filter
+^^^^^^^^^^^^^^^^^^
+
+The application adds (or removes) MAC addresses to enable (or disable) whitelist filtering to accept packets.
+
+.. code-block:: console
+
+    testpmd> show port (port_id) macs
+    testpmd> mac_addr (add|remove) (port_id) (XX:XX:XX:XX:XX:XX)
+
+Multicast MAC Filter
+^^^^^^^^^^^^^^^^^^^^
+
+Application adds (or removes) Multicast addresses to enable (or disable) whitelist filtering to accept packets.
+
+.. code-block:: console
+
+    testpmd> show port (port_id) mcast_macs
+    testpmd> mcast_addr (add|remove) (port_id) (XX:XX:XX:XX:XX:XX)
+
+Note that the BNXT PMD supports up to 16 MC MAC filters. If the user adds more than 16 MC MACs, the BNXT PMD puts the port into the Allmulticast mode.
+
+VLAN Filtering
+^^^^^^^^^^^^^^
+
+The application enables (or disables) VLAN filtering mode. When the mode is enabled, no packets are forwarded to the application except the ones carrying a VLAN tag that has been added to the port.
+
+.. code-block:: console
+
+    testpmd> vlan set filter (on|off) (port_id)
+    testpmd> rx_vlan (add|rm) (vlan_id) (port_id)
+
+Allmulticast Mode
+^^^^^^^^^^^^^^^^^
+
+The application enables (or disables) the allmulticast mode. When the mode is enabled, every multicast packet received is forwarded to the application.
+
+.. code-block:: console
+
+    testpmd> show port info (port_id)
+    testpmd> set allmulti (port_id) (on|off)
+
+Promiscuous Mode
+^^^^^^^^^^^^^^^^
+
+The application enables (or disables) the promiscuous mode. When the mode is enabled on a port, every packet received on the port is forwarded to the application.
+
+.. code-block:: console
+
+    testpmd> show port info (port_id)
+    testpmd> set promisc (port_id) (on|off)
+
+Stateless Offloads
+~~~~~~~~~~~~~~~~~~
+
+Like Linux, DPDK lets applications offload some stateless processing (such as checksum calculation) to the hardware, relieving the CPU from burning cycles on every packet.
+
+Listed below are the stateless offloads supported by the BNXT PMD:
+
+* CRC offload (for both TX and RX packets)
+* Checksum Offload (for both TX and RX packets)
+    * IPv4 Checksum Offload
+    * TCP Checksum Offload
+    * UDP Checksum Offload
+* Segmentation/Reassembly Offloads
+    * TCP Segmentation Offload (TSO)
+    * Large Receive Offload (LRO)
+* Multi-Queue
+    * Transmit Side Scaling (TSS)
+    * Receive Side Scaling (RSS)
+
+Also, the BNXT PMD supports stateless offloads on inner frames for tunneled packets. Listed below are the tunneling protocols supported by the BNXT PMD:
+
+* VXLAN
+* GRE
+* NVGRE
+
+Note that enabling (or disabling) stateless offloads requires applications to stop DPDK before changing configuration.
+
+CRC Offload
+^^^^^^^^^^^
+
+The FCS (Frame Check Sequence) in the Ethernet frame is a four-octet CRC (Cyclic Redundancy Check) that allows detection of corrupted data within the entire frame as received on the receiver side.
+
+The BNXT PMD supports hardware-based CRC offload:
+
+* TX: calculate and insert CRC
+* RX: check and remove CRC, notify the application on CRC error
+
+Note that the CRC offload is always turned on.
+
+Checksum Offload
+^^^^^^^^^^^^^^^^
+
+The application enables hardware checksum calculation for IPv4, TCP, and UDP.
+
+.. code-block:: console
+
+    testpmd> port stop (port_id)
+    testpmd> csum set (ip|tcp|udp|outer-ip|outer-udp) (sw|hw) (port_id)
+    testpmd> set fwd csum
+
+Multi-Queue
+^^^^^^^^^^^
+
+Multi-Queue, also known as TSS (Transmit Side Scaling) or RSS (Receive Side Scaling), is a common networking technique that allows for more efficient load balancing across multiple CPU cores.
+
+The application enables multiple TX and RX queues when it starts.
+
+.. code-block:: console
+
+    testpmd -l 1,3,5 --master-lcore 1 --txq=2 --rxq=2 --nb-cores=2
+
+**TSS**
+
+TSS distributes network transmit processing across several hardware-based transmit queues, allowing outbound network traffic to be processed by multiple CPU cores.
+
+**RSS**
+
+RSS distributes network receive processing across several hardware-based receive queues, allowing inbound network traffic to be processed by multiple CPU cores.
+
+The application can select the RSS mode, i.e. select the header fields that are included for hash calculation. The BNXT PMD supports the RSS mode of ``default|ip|tcp|udp|none``, where default mode is L3 and L4.
+
+For tunneled packets, RSS hash is calculated over inner frame header fields. Applications may want to select the tunnel header fields for hash calculation, and it will be supported in 20.08 using RSS level.
+
+.. code-block:: console
+
+    testpmd> port config (port_id) rss (all|default|ip|tcp|udp|none)
+
+    // note that the testpmd defaults the RSS mode to ip
+    // ensure to issue the command below to enable L4 header (TCP or UDP) along with IPv4 header
+    testpmd> port config (port_id) rss default
+
+    // to check the current RSS configuration, such as RSS function and RSS key
+    testpmd> show port (port_id) rss-hash key
+
+    // RSS is enabled by default. However, application can disable RSS as follows
+    testpmd> port config (port_id) rss none
+
+Application can change the flow distribution, i.e. remap the received traffic to CPU cores, using RSS RETA (Redirection Table).
+
+.. code-block:: console
+
+    // application queries the current RSS RETA configuration
+    testpmd> show port (port_id) rss reta size (mask0, mask1)
+
+    // application changes the RSS RETA configuration
+    testpmd> port config (port_id) rss reta (hash, queue) [, (hash, queue)]
+
+TSO
+^^^
+
+TSO (TCP Segmentation Offload), also known as LSO (Large Send Offload), enables the TCP/IP stack to pass to the NIC a datagram larger than the MTU (Maximum Transmission Unit). The NIC breaks it into multiple segments before sending it to the network.
+
+The BNXT PMD supports hardware-based TSO.
+
+.. code-block:: console
+
+    // display the status of TSO
+    testpmd> tso show (port_id)
+
+    // enable/disable TSO
+    testpmd> port config (port_id) tx_offload tcp_tso (on|off)
+
+    // set TSO segment size
+    testpmd> tso set segment_size (port_id)
+
+The BNXT PMD also supports hardware-based tunneled TSO.
+
+.. code-block:: console
+
+    // display the status of tunneled TSO
+    testpmd> tunnel_tso show (port_id)
+
+    // enable/disable tunneled TSO
+    testpmd> port config (port_id) tx_offload vxlan_tnl_tso|gre_tnl_tso (on|off)
+
+    // set tunneled TSO segment size
+    testpmd> tunnel_tso set segment_size (port_id)
+
+Note that the checksum offload is always assumed to be enabled for TSO.
+
+LRO
+^^^
+
+LRO (Large Receive Offload) enables the NIC to aggregate multiple incoming TCP/IP packets from a single stream into a larger buffer, before passing it to the networking stack.
+
+The BNXT PMD supports hardware-based LRO.
+
+.. code-block:: console
+
+    // display the status of LRO
+    testpmd> show port (port_id) rx_offload capabilities
+    testpmd> show port (port_id) rx_offload configuration
+
+    // enable/disable LRO
+    testpmd> port config (port_id) rx_offload tcp_lro (on|off)
+
+    // set max LRO packet (datagram) size
+    testpmd> port config (port_id) max-lro-pkt-size (max_size)
+
+The BNXT PMD also supports tunneled LRO.
+
+Some applications, such as routing, should *not* change the packet headers as they pass through (i.e. received from and sent back to the network). In such a case, GRO (Generic Receive Offload) should be used instead of LRO.
+
+VLAN Insert/Strip
+~~~~~~~~~~~~~~~~~
+
+DPDK application offloads VLAN insert/strip to improve performance. The BNXT PMD supports hardware-based VLAN insert/strip offload for both single and double VLAN packets.
+
+
+VLAN Insert
+^^^^^^^^^^^
+
+Application configures the VLAN TPID (Tag Protocol ID). By default, the TPID is 0x8100.
+
+.. code-block:: console
+
+    // configure outer TPID value for a port
+    testpmd> vlan set outer tpid (tpid_value) (port_id)
+
+Setting the inner TPID will be rejected, as the BNXT PMD supports inserting only an outer VLAN. Note that when a packet has a single VLAN, the tag is considered as outer, i.e. the inner VLAN is relevant only when a packet is double-tagged.
+
+The BNXT PMD supports various TPID values shown below. Any other values will be rejected.
+
+* ``0x8100``
+* ``0x88a8``
+* ``0x9100``
+* ``0x9200``
+* ``0x9300``
+
+The BNXT PMD supports the VLAN insert offload on a per-packet basis. The application provides the TCI (Tag Control Info) for a packet via mbuf. In turn, the BNXT PMD inserts the VLAN tag (via hardware) using the provided TCI along with the configured TPID.
+
+.. code-block:: console
+
+    // enable VLAN insert offload
+    testpmd> port config (port_id) tx_offload vlan_insert|qinq_insert (on|off)
+
+    if (mbuf->ol_flags & PKT_TX_QINQ)       // case-1: insert VLAN to single-tagged packet
+        tci_value = mbuf->vlan_tci_outer
+    else if (mbuf->ol_flags & PKT_TX_VLAN)  // case-2: insert VLAN to untagged packet
+        tci_value = mbuf->vlan_tci
+
+VLAN Strip
+^^^^^^^^^^
+
+The application configures the per-port VLAN strip offload.
+
+.. code-block:: console
+
+    // enable VLAN strip on a port
+    testpmd> port config (port_id) rx_offload vlan_strip (on|off)
+
+    // notify application VLAN strip via mbuf
+    mbuf->ol_flags |= PKT_RX_VLAN | PKT_RX_STRIPPED // outer VLAN is found and stripped
+    mbuf->vlan_tci = tci_value                      // TCI of the stripped VLAN
+
+Time Synchronization
+~~~~~~~~~~~~~~~~~~~~
+
+System operators may run a PTP (Precision Time Protocol) client application to synchronize the time on the NIC (and optionally, on the system) to a PTP master.
+
+The BNXT PMD supports a PTP client application to communicate with a PTP master clock using DPDK IEEE1588 APIs. Note that the PTP client application needs to run on a PF, and vector mode needs to be disabled.
+
+For the PTP time synchronization support, the BNXT PMD must be compiled with ``CONFIG_RTE_LIBRTE_IEEE1588=y`` (this compilation flag is currently pending).
+
+.. code-block:: console
+
+    testpmd> set fwd ieee1588 // enable IEEE 1588 mode
+
+When enabled, the BNXT PMD configures hardware to insert IEEE 1588 timestamps to the outgoing PTP packets and reports IEEE 1588 timestamps from the incoming PTP packets to application via mbuf.
+
+.. code-block:: console
+
+    // RX packet completion will indicate whether the packet is PTP
+    mbuf->ol_flags |= PKT_RX_IEEE1588_PTP
+
+Statistics Collection
+~~~~~~~~~~~~~~~~~~~~~
+
+In Linux, *ethtool -S* enables us to query the NIC stats. DPDK provides similar functionality via rte_eth_stats and rte_eth_xstats.
+
+The BNXT PMD supports both basic and extended stats collection:
+
+* Basic stats
+* Extended stats
+
+Basic Stats
+^^^^^^^^^^^
+
+The application collects per-port and per-queue stats using rte_eth_stats APIs.
+
+.. code-block:: console
+
+    testpmd> show port stats (port_id)
+
+Basic stats include:
+
+* ipackets
+* ibytes
+* opackets
+* obytes
+* imissed
+* ierrors
+* oerrors
+
+By default, the BNXT PMD supports per-queue stats for 16 queues. For more than 16 queues, BNXT PMD should be compiled with ``CONFIG_RTE_ETHDEV_QUEUE_STAT_CNTRS`` set to the desired number of queues.
+
+Extended Stats
+^^^^^^^^^^^^^^
+
+Unlike basic stats, the extended stats are vendor-specific, i.e. each vendor provides its own set of counters.
+
+The BNXT PMD provides a rich set of counters, including per-flow counters, per-cos counters, per-priority counters, etc.
+
+.. code-block:: console
+
+    testpmd> show port xstats (port_id)
+
+Shown below is the elaborated sequence to retrieve extended stats:
+
+.. code-block:: console
+
+    // application queries the number of xstats
+    len = rte_eth_xstats_get(port_id, NULL, 0);
+    // BNXT PMD returns the size of xstats array (i.e. the number of entries)
+    // BNXT PMD returns 0, if the feature is compiled out or disabled
+
+    // application allocates memory for xstats
+    struct rte_eth_xstat_name *names; // each name is 64 characters or less
+    struct rte_eth_xstat *xstats;
+    names = calloc(len, sizeof(*names));
+    xstats = calloc(len, sizeof(*xstats));
+
+    // application retrieves xstats // names and values
+    ret = rte_eth_xstats_get_names(port_id, names, len);
+    ret = rte_eth_xstats_get(port_id, xstats, len);
+
+    // application checks the xstats
+    // application may repeat the below:
+    ret = rte_eth_xstats_reset(port_id); // reset the xstats
+
+    // reset can be skipped, if application wants to see accumulated stats
+    // run traffic
+    // probably stop the traffic
+    // retrieve xstats // no need to retrieve xstats names again
+    // check xstats
+
+Generic Flow Offload
+~~~~~~~~~~~~~~~~~~~~
+
+Applications can benefit from offloading all or part of flow processing to hardware. For example, an application can offload packet classification only (partial offload) or the whole match-action processing (full offload).
+
+DPDK offers the Generic Flow API (rte_flow API) to configure hardware to perform flow processing.
+
+Listed below are the rte_flow APIs the BNXT PMD supports (a minimal usage sketch follows the list):
+
+* rte_flow_validate
+* rte_flow_create
+* rte_flow_destroy
+* rte_flow_flush
+
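+The sketch below shows one way an application might use these APIs to steer IPv4/UDP traffic to RX queue 1. The port id and queue index are hypothetical, error handling is minimal, and whether a given pattern/action combination is accepted depends on the device and firmware.
+
+.. code-block:: console
+
+    #include <rte_flow.h>
+
+    // match any IPv4/UDP packet and steer it to RX queue 1
+    struct rte_flow_attr attr = { .ingress = 1 };
+    struct rte_flow_item pattern[] = {
+        { .type = RTE_FLOW_ITEM_TYPE_ETH },
+        { .type = RTE_FLOW_ITEM_TYPE_IPV4 },
+        { .type = RTE_FLOW_ITEM_TYPE_UDP },
+        { .type = RTE_FLOW_ITEM_TYPE_END },
+    };
+    struct rte_flow_action_queue queue = { .index = 1 };
+    struct rte_flow_action actions[] = {
+        { .type = RTE_FLOW_ACTION_TYPE_QUEUE, .conf = &queue },
+        { .type = RTE_FLOW_ACTION_TYPE_END },
+    };
+    struct rte_flow_error error;
+    struct rte_flow *flow = NULL;
+
+    // validate first, then create the flow rule
+    if (rte_flow_validate(port_id, &attr, pattern, actions, &error) == 0)
+        flow = rte_flow_create(port_id, &attr, pattern, actions, &error);
+
+    // tear down: destroy a single rule, or flush all rules on the port
+    if (flow)
+        rte_flow_destroy(port_id, flow, &error);
+    rte_flow_flush(port_id, &error);
+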
+Host Based Flow Table Management
+^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
+
+Starting with the 20.05 release, the BNXT PMD supports host-based flow table management. This is a new mechanism that should allow higher flow scalability than what is currently supported. This approach also defines a new rte_flow parser and mapper, which currently supports basic packet classification in the receive path.
+
+The feature uses a newly implemented control-plane firmware interface which optimizes flow insertions and deletions.
+
+This is a tech preview feature and is disabled by default. It can be enabled using the bnxt devargs, for example: ``-w 0000:0d:00.0,host-based-truflow=1``.
+
+Application Support
+-------------------
+
+Firmware
+~~~~~~~~
+
+The BNXT PMD allows the application to retrieve the firmware version.
+
+.. code-block:: console
+
+    testpmd> show port info (port_id)
+
+Note that applications cannot update the firmware using the BNXT PMD.
+
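+Applications can also query the firmware version programmatically; a minimal sketch using the generic ethdev API is shown below (the port id is hypothetical and error handling is omitted).
+
+.. code-block:: console
+
+    #include <stdio.h>
+    #include <rte_ethdev.h>
+
+    char fw_version[64];
+
+    // retrieve a human-readable firmware version string for the port
+    if (rte_eth_dev_fw_version_get(port_id, fw_version, sizeof(fw_version)) == 0)
+        printf("firmware: %s\n", fw_version);
+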
+Multiple Processes
+~~~~~~~~~~~~~~~~~~
+
+When two or more DPDK applications (e.g., testpmd and dpdk-pdump) share a single instance of DPDK, the BNXT PMD supports a single primary application and one or more secondary applications. Note that the DPDK-layer (*not* the PMD-layer) ensures there is only one primary application.
+
+There are two modes:
+
+Manual mode
+
+* The application specifies whether it is primary or secondary using the *proc-type* flag
+* 1st process should be spawned with ``--proc-type=primary``
+* All subsequent processes should be spawned with ``--proc-type=secondary``
+
+Auto detection mode
+
+* The application uses the ``--proc-type=auto`` flag
+* A process is spawned as a secondary if a primary is already running
+
+The BNXT PMD uses this information to skip device initialization in secondary processes, i.e. it performs device initialization only when brought up by the primary application.
+
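+A minimal sketch of how an application might branch on its process role after EAL initialization is shown below (the device attach and initialization details are application-specific and omitted).
+
+.. code-block:: console
+
+    #include <rte_eal.h>
+
+    // after rte_eal_init(), query the role assigned to this process
+    if (rte_eal_process_type() == RTE_PROC_PRIMARY) {
+        // primary process: the PMD performs full device initialization
+    } else {
+        // secondary process: the PMD attaches to the already initialized device
+    }
+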
+Runtime Queue Setup
+~~~~~~~~~~~~~~~~~~~
+
+Typically, a DPDK application allocates TX and RX queues statically: i.e. queues are allocated at start. However, an application may want to increase (or decrease) the number of queues dynamically for various reasons, e.g. power savings.
+
+The BNXT PMD allows applications to increase or decrease the number of queues at runtime.
+
+.. code-block:: console
+
+    testpmd> port config all (rxq|txq) (num_queues)
+
+Note that a DPDK application must allocate default queues (one for TX and one for RX at minimum) at initialization.
+
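+Below is a sketch of how an application might add an RX queue while the port is running, using the generic ethdev APIs. The queue id, descriptor count, and mempool (``mb_pool``) are hypothetical, and error handling is omitted.
+
+.. code-block:: console
+
+    #include <rte_ethdev.h>
+    #include <rte_lcore.h>
+
+    // set up and start an additional RX queue while the port is running
+    rte_eth_rx_queue_setup(port_id, new_queue_id, 512, rte_socket_id(), NULL, mb_pool);
+    rte_eth_dev_rx_queue_start(port_id, new_queue_id);
+
+    // stop a queue that is no longer needed
+    rte_eth_dev_rx_queue_stop(port_id, new_queue_id);
+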
+Descriptor Status
+~~~~~~~~~~~~~~~~~
+
+Applications may use the descriptor status for various reasons, e.g. for power savings. For example, an application may stop polling and change to interrupt mode when the descriptor status shows no packets to service for a while.
+
+The BNXT PMD allows the application to retrieve both TX and RX descriptor status.
+
+.. code-block:: console
+
+    testpmd> show port (port_id) (rxq|txq) (queue_id) desc (desc_id) status
+
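+For reference, a minimal sketch of checking descriptor status from an application via the generic ethdev API is shown below (the port, queue, and offset values are hypothetical).
+
+.. code-block:: console
+
+    #include <rte_ethdev.h>
+
+    // check the RX descriptor at the given offset from the next one to be processed
+    int status = rte_eth_rx_descriptor_status(port_id, queue_id, offset);
+
+    if (status == RTE_ETH_RX_DESC_AVAIL) {
+        // descriptor is available to the hardware, no packet yet
+    } else if (status == RTE_ETH_RX_DESC_DONE) {
+        // a received packet is ready to be retrieved
+    }
+
+    // the TX side works the same way
+    status = rte_eth_tx_descriptor_status(port_id, queue_id, offset);
+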
+Bonding
+~~~~~~~
+
+DPDK implements a lightweight library that allows PMDs to be bonded together, presenting a single logical PMD to the application.
+
+.. code-block:: console
+
+    testpmd -l 0-3 -n 4 --vdev 'net_bonding0,mode=0,slave=<PCI B:D.F device 1>,slave=<PCI B:D.F device 2>,mac=XX:XX:XX:XX:XX:XX' -- --socket-num=1 -i --port-topology=chained
+    (ex) testpmd -l 1,3,5,7,9 -n 4 --vdev 'net_bonding0,mode=0,slave=0000:82:00.0,slave=0000:82:00.1,mac=00:1e:67:1d:fd:1d' -- --socket-num=1 -i --port-topology=chained
+
+Vector Processing
 -----------------
 
-The BNXT PMD includes support for the following features:
-
-   * Multiple transmit and receive queues
-   * Queue start/stop
-   * RSS hash
-   * RSS key configuration
-   * RSS reta configuration
-   * VMDq
-   * Packet type parsing
-   * Configurable RX CRC stripping
-   * L3/L4 checksum offload
-   * LRO offload
-   * TSO offload
-   * VLAN offload
-   * SR-IOV VF
-   * Basic and extended port statistics
-   * Link state reporting
-   * Flow control
-   * Ethertype filtering
-   * N-tuple filtering
-   * Promiscuous mode
-   * Unicast and multicast MAC filtering
-   * Scatter/gather transmit and receive
-   * Jumbo frames
-   * Vector PMD
-
-BNXT Vector PMD
----------------
-
-The BNXT PMD includes support for SSE vector mode on x86 platforms. Vector
-provides significantly improved performance over the base implementation,
-however it does not support all of the features that are supported by the
-base (non-vector) implementation. Vector mode will be selected and enabled
-automatically when the port is started if allowed by the current configuration.
-
-RX Requirements for Vector Mode
-^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
-
-Vector mode receive will be enabled if the following constrainsts are met:
-   * Packets must fit within a single mbuf (no scatter RX).
-   * LRO offload must be disabled.
-
-TX Requirements for Vector Mode
-^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
-
-Vector mode transmit will be enabled if the following constrainsts are met:
-   * Packets must be contained within a single mbuf (no gather TX).
-   * All transmit offloads other than VLAN insertion must be disabled.
-
-BNXT PMD Supported Chipsets and Adapters
-----------------------------------------
-
-Chipsets and adapters supported by the bnxt PMD include:
-
-  * **Broadcom BCM5730X NetXtreme-C® Family of Ethernet Network Controllers**
-
-       * M150c - Single-port 40/50 Gigabit Ethernet Adapter
-       * P150c - Single-port 40/50 Gigabit Ethernet Adapter
-       * P225c - Dual-port 10/25 Gigabit Ethernet Adapter
-
-  * **Broadcom BCM574XX/BCM575XX NetXtreme-E® Family of Ethernet Network Controllers**
-
-       * M125P - Single-port OCP 2.0 10/25 Gigabit Ethernet Adapter
-       * M150P - Single-port OCP 2.0 50 Gigabit Ethernet Adapter
-       * M150PM - Single-port OCP 2.0 Multi-Host 50 Gigabit Ethernet Adapter
-       * M210P - Dual-port OCP 2.0 10 Gigabit Ethernet Adapter
-       * M210TP - Dual-port OCP 2.0 10 Gigabit Ethernet Adapter
-       * M11000G - Single-port OCP 2.0 10/25/50/100 Gigabit Ethernet Adapter
-       * N150G - Single-port OCP 3.0 50 Gigabit Ethernet Adapter
-       * M225P - Dual-port OCP 2.0 10/25 Gigabit Ethernet Adapter
-       * N210P - Dual-port OCP 3.0 10 Gigabit Ethernet Adapter
-       * N210TP - Dual-port OCP 3.0 10 Gigabit Ethernet Adapter
-       * N225P - Dual-port OCP 3.0 10/25 Gigabit Ethernet Adapter
-       * N250G - Dual-port OCP 3.0 50 Gigabit Ethernet Adapter
-       * N410SG - Quad-port OCP 3.0 10 Gigabit Ethernet Adapter
-       * N410SGBT - Quad-port OCP 3.0 10 Gigabit Ethernet Adapter
-       * N425G - Quad-port OCP 3.0 10/25 Gigabit Ethernet Adapter
-       * N1100G - Single-port OCP 3.0 10/25/50/100 Gigabit Ethernet Adapter
-       * N2100G - Dual-port OCP 3.0 10/25/50/100 Gigabit Ethernet Adapter
-       * N2200G - Dual-port OCP 3.0 10/25/50/100/200 Gigabit Ethernet Adapter
-       * P150P - Single-port 50 Gigabit Ethernet Adapter
-       * P210P - Dual-port 10 Gigabit Ethernet Adapter
-       * P210TP - Dual-port 10 Gigabit Ethernet Adapter
-       * P225P - Dual-port 10/25 Gigabit Ethernet Adapter
-       * P410SG - Quad-port 10 Gigabit Ethernet Adapter
-       * P410SGBT - Quad-port 10 Gigabit Ethernet Adapter
-       * P425G - Quad-port 10/25 Gigabit Ethernet Adapter
-       * P1100G - Single-port 10/25/50/100 Gigabit Ethernet Adapter
-       * P2100G - Dual-port 10/25/50/100 Gigabit Ethernet Adapter
-       * P2200G - Dual-port 10/25/50/100/200 Gigabit Ethernet Adapter
-
-    Information about Ethernet adapters in the NetXtreme family of
-    adapters can be found in the `NetXtreme® Brand section
-    <https://www.broadcom.com/products/ethernet-connectivity/network-adapters/>`_
-    of the `Broadcom website <http://www.broadcom.com/>`_.
-
-  * **Broadcom BCM588XX Stingray Family of SmartNIC Adapters**
-
-       * PS410T - Quad-port 10 Gigabit Ethernet SmartNIC
-       * PS225 - Dual-port 25 Gigabit Ethernet SmartNIC
-       * PS250 - Dual-Port 50 Gigabit Ethernet SmartNIC
-
-    Information about the Stingray family of SmartNIC adapters can be found in the
-    `Stingray® Brand section
-    <https://www.broadcom.com/products/ethernet-connectivity/smartnic/>`_
-    of the `Broadcom website <http://www.broadcom.com/>`_.
-
-  * **Broadcom StrataGX® BCM5871X Series of Communucations Processors**
-
-    These ARM based processors target a broad range of networking applications
-    including virtual CPE (vCPE) and NFV appliances, 10G service routers and
-    gateways, control plane processing for Ethernet switches and network
-    attached storage (NAS).
-
-    Information about the StrataGX family of adapters can be found in the
-    `StrataGX® BCM58712
-    <http://www.broadcom.com/products/embedded-and-networking-processors/communications/bcm58712>`_
-    and `StrataGX® BCM58713
-    <http://www.broadcom.com/products/embedded-and-networking-processors/communications/bcm58713>`_
-    sections of the `Broadcom website <http://www.broadcom.com/>`_.
+Vector processing provides significantly improved performance over scalar processing.
+
+The BNXT PMD supports vector processing using SSE (Streaming SIMD Extensions) instructions on x86 platforms. The BNXT vPMD (vector mode PMD) is currently limited to x86 (Intel/AMD) CPU architectures; support for ARM is *not* currently implemented.
+
+This improved performance comes from several optimizations:
+
+* Batching
+    * TX: processing completions in bulk
+    * RX: allocating mbufs in bulk
+* Chained mbufs are *not* supported, i.e. a packet must fit within a single mbuf
+* Some stateless offloads are *not* supported with vector processing
+    * TX: no offloads are supported
+    * RX: only a reduced set of RX offloads (listed below) is supported::
+
+       DEV_RX_OFFLOAD_VLAN_STRIP
+       DEV_RX_OFFLOAD_KEEP_CRC
+       DEV_RX_OFFLOAD_JUMBO_FRAME
+       DEV_RX_OFFLOAD_IPV4_CKSUM
+       DEV_RX_OFFLOAD_UDP_CKSUM
+       DEV_RX_OFFLOAD_TCP_CKSUM
+       DEV_RX_OFFLOAD_OUTER_IPV4_CKSUM
+       DEV_RX_OFFLOAD_RSS_HASH
+       DEV_RX_OFFLOAD_VLAN_FILTER
+
+The BNXT vector PMD is enabled by default in DPDK builds. If required, it can be disabled in the DPDK build configuration by setting ``CONFIG_RTE_LIBRTE_BNXT_INC_VECTOR=n``.
+
+The decision to enable vector mode is made when the port transitions from stopped to started. Enabling any TX offload, or any RX offload other than those listed above, disables vector mode.
+Offload configuration changes that impact vector mode must be made while the port is stopped.
+
+Note that TX (or RX) vector mode can be enabled independently of RX (or TX) vector mode.
+
+Appendix
+--------
+
+Supported Chipsets and Adapters
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+
+BCM5730x NetXtreme-C® Family of Ethernet Network Controllers
+^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
+
+Information about Ethernet adapters in the NetXtreme family of adapters can be found in the `NetXtreme® Brand section <https://www.broadcom.com/products/ethernet-connectivity/network-adapters/>`_ of the `Broadcom website <http://www.broadcom.com/>`_.
+
+* ``M150c ... Single-port 40/50 Gigabit Ethernet Adapter``
+* ``P150c ... Single-port 40/50 Gigabit Ethernet Adapter``
+* ``P225c ... Dual-port 10/25 Gigabit Ethernet Adapter``
+
+BCM574xx/575xx NetXtreme-E® Family of Ethernet Network Controllers
+^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
+
+Information about Ethernet adapters in the NetXtreme family of adapters can be found in the `NetXtreme® Brand section <https://www.broadcom.com/products/ethernet-connectivity/network-adapters/>`_ of the `Broadcom website <http://www.broadcom.com/>`_.
+
+* ``M125P .... Single-port OCP 2.0 10/25 Gigabit Ethernet Adapter``
+* ``M150P .... Single-port OCP 2.0 50 Gigabit Ethernet Adapter``
+* ``M150PM ... Single-port OCP 2.0 Multi-Host 50 Gigabit Ethernet Adapter``
+* ``M210P .... Dual-port OCP 2.0 10 Gigabit Ethernet Adapter``
+* ``M210TP ... Dual-port OCP 2.0 10 Gigabit Ethernet Adapter``
+* ``M1100G ... Single-port OCP 2.0 10/25/50/100 Gigabit Ethernet Adapter``
+* ``N150G .... Single-port OCP 3.0 50 Gigabit Ethernet Adapter``
+* ``M225P .... Dual-port OCP 2.0 10/25 Gigabit Ethernet Adapter``
+* ``N210P .... Dual-port OCP 3.0 10 Gigabit Ethernet Adapter``
+* ``N210TP ... Dual-port OCP 3.0 10 Gigabit Ethernet Adapter``
+* ``N225P .... Dual-port OCP 3.0 10/25 Gigabit Ethernet Adapter``
+* ``N250G .... Dual-port OCP 3.0 50 Gigabit Ethernet Adapter``
+* ``N410SG ... Quad-port OCP 3.0 10 Gigabit Ethernet Adapter``
+* ``N410SGBT . Quad-port OCP 3.0 10 Gigabit Ethernet Adapter``
+* ``N425G .... Quad-port OCP 3.0 10/25 Gigabit Ethernet Adapter``
+* ``N1100G ... Single-port OCP 3.0 10/25/50/100 Gigabit Ethernet Adapter``
+* ``N2100G ... Dual-port OCP 3.0 10/25/50/100 Gigabit Ethernet Adapter``
+* ``N2200G ... Dual-port OCP 3.0 10/25/50/100/200 Gigabit Ethernet Adapter``
+* ``P150P .... Single-port 50 Gigabit Ethernet Adapter``
+* ``P210P .... Dual-port 10 Gigabit Ethernet Adapter``
+* ``P210TP ... Dual-port 10 Gigabit Ethernet Adapter``
+* ``P225P .... Dual-port 10/25 Gigabit Ethernet Adapter``
+* ``P410SG ... Quad-port 10 Gigabit Ethernet Adapter``
+* ``P410SGBT . Quad-port 10 Gigabit Ethernet Adapter``
+* ``P425G .... Quad-port 10/25 Gigabit Ethernet Adapter``
+* ``P1100G ... Single-port 10/25/50/100 Gigabit Ethernet Adapter``
+* ``P2100G ... Dual-port 10/25/50/100 Gigabit Ethernet Adapter``
+* ``P2200G ... Dual-port 10/25/50/100/200 Gigabit Ethernet Adapter``
+
+BCM588xx NetXtreme-S® Family of SmartNIC Network Controllers
+^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
+
+Information about the Stingray family of SmartNIC adapters can be found in the `Stingray® Brand section <https://www.broadcom.com/products/ethernet-connectivity/smartnic/>`_ of the `Broadcom website <http://www.broadcom.com/>`_.
+
+* ``PS225 ... Dual-port 25 Gigabit Ethernet SmartNIC``
+
+BCM5873x StrataGX® Family of Communications Processors
+^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
+
+These ARM-based processors target a broad range of networking applications, including virtual CPE (vCPE) and NFV appliances, 10G service routers and gateways, control plane processing for Ethernet switches, and network-attached storage (NAS).
+
+* ``StrataGX BCM58732 ... Octal-Core 3.0GHz 64-bit ARM®v8 Cortex®-A72 based SoC``
-- 
2.21.1 (Apple Git-122.3)


^ permalink raw reply	[flat|nested] 16+ messages in thread

* Re: [dpdk-dev] [PATCH v4] doc: update bnxt guide
  2020-05-18 17:58       ` [dpdk-dev] [PATCH v4] " Ajit Khaparde
@ 2020-05-18 22:04         ` Ajit Khaparde
  2020-05-19  8:15           ` Ferruh Yigit
  0 siblings, 1 reply; 16+ messages in thread
From: Ajit Khaparde @ 2020-05-18 22:04 UTC (permalink / raw)
  To: dpdk-dev; +Cc: Ferruh Yigit, JP Lee, Kovacevic Marko

On Mon, May 18, 2020 at 10:58 AM Ajit Khaparde <ajit.khaparde@broadcom.com>
wrote:

>     - Update list of supported adapters.
>     - Update list of supported features.
>     - Add some details to describe the features.
>     - Remove obsolete limitations.
>     - Fix and update links.
>
> Signed-off-by: JP Lee <jongpil.lee@broadcom.com>
> Signed-off-by: Ajit Khaparde <ajit.khaparde@broadcom.com>
> Acked-by: Kovacevic Marko <marko.kovacevic@intel.com>
> ---
> v1->v2: Some lines were too long in v1. Made then shorter. Checked for
> typos.
> v2->v3: Removed list of extended stats.
> v3->v4: Removed an irrelevant line.
>

Patch applied to dpdk-next-net-brcm. Thanks

> ---
>  doc/guides/nics/bnxt.rst | 893 +++++++++++++++++++++++++++++++++------
>  1 file changed, 763 insertions(+), 130 deletions(-)
>
> diff --git a/doc/guides/nics/bnxt.rst b/doc/guides/nics/bnxt.rst
> index 434ba9d6c..3aad7ea4a 100644
> --- a/doc/guides/nics/bnxt.rst
> +++ b/doc/guides/nics/bnxt.rst
> @@ -1,138 +1,771 @@
> -..  SPDX-License-Identifier: BSD-3-Clause
> -    Copyright 2016-2019 Broadcom
> +.. SPDX-License-Identifier: BSD-3-Clause
> +   Copyright 2020 Broadcom Inc.
>
>  BNXT Poll Mode Driver
>  =====================
>
> -The BNXT PMD (**librte_pmd_bnxt**) implements support for adapters based
> on
> -Ethernet controllers and SoCs belonging to the **Broadcom BCM5730X
> NetXtreme-C®
> -Family of Ethernet Network Controllers**, the **Broadcom BCM574XX/BCM575XX
> -NetXtreme-E® Family of Ethernet Network Controllers**, the **Broadcom
> BCM588XX
> -Stingray Family of SmartNIC Adapters**, and the **Broadcom StrataGX®
> BCM5871X
> -Series of Communications Processors**.  A complete list with links to
> reference
> -material is included below.
> +The Broadcom BNXT PMD (Poll Mode Driver, librte_pmd_bnxt) implements
> support for adapters based on Ethernet controllers and SoCs belonging to
> the Broadcom BCM574XX/BCM575XX NetXtreme-E® Family of Ethernet Network
> Controllers, the Broadcom BCM588XX Stingray Family of Smart NIC Adapters,
> and the Broadcom StrataGX® BCM5873X Series of Communications Processors.
>
> +A complete list with links to reference material is in the Appendix
> section.
>
> -BNXT PMD Features
> +CPU Support
> +-----------
> +
> +BNXT PMD supports multiple CPU architectures, including x86-32, x86-64,
> and ARMv8.
> +
> +Kernel Dependency
> +-----------------
> +
> +BNXT PMD requires a kernel module (VFIO or UIO) for setting up a device,
> mapping device memory to userspace, registering interrupts, etc.
> +VFIO is more secure than UIO, relying on IOMMU protection.
> +UIO requires the IOMMU disabled or configured to pass-through mode.
> +
> +Operating Systems supported:
> +
> +* Red Hat Enterprise Linux release 8.1 (Ootpa)
> +* Red Hat Enterprise Linux release 8.0 (Ootpa)
> +* Red Hat Enterprise Linux Server release 7.7 (Maipo)
> +* Red Hat Enterprise Linux Server release 7.6 (Maipo)
> +* Red Hat Enterprise Linux Server release 7.5 (Maipo)
> +* Red Hat Enterprise Linux Server release 7.4 (Maipo)
> +* Red Hat Enterprise Linux Server release 7.3 (Maipo)
> +* Red Hat Enterprise Linux Server release 7.2 (Maipo)
> +* CentOS Linux release 8.0
> +* CentOS Linux release 7.7
> +* CentOS Linux release 7.6.1810
> +* CentOS Linux release 7.5.1804
> +* CentOS Linux release 7.4.1708
> +* Fedora 31
> +* FreeBSD 12.1
> +* Suse 15SP1
> +* Ubuntu 19.04
> +* Ubuntu 18.04
> +* Ubuntu 16.10
> +* Ubuntu 16.04
> +* Ubuntu 14.04
> +
> +The BNXT PMD supports operating with:
> +
> +* Linux vfio-pci
> +* Linux uio_pci_generic
> +* Linux igb_uio
> +* BSD nic_uio
> +
> +Compiling BNXT PMD
> +------------------
> +
> +To compile the BNXT PMD:
> +
> +.. code-block:: console
> +
> +    make config T=x86_64-native-linux-gcc && make // for x86-64
> +    make config T=x86_32-native-linux-gcc && make // for x86-32
> +    make config T=armv8a-linux-gcc && make // for ARMv8
> +
> +Bind the device to one of the kernel modules listed above
> +
> +.. code-block:: console
> +
> +    ./dpdk-devbind.py -b vfio-pci|igb_uio|uio_pci_generic
> bus_id:device_id.function_id
> +
> +Load an application (e.g. testpmd) with a default configuration (e.g. a
> single TX /RX queue):
> +
> +.. code-block:: console
> +
> +    ./testpmd -c 0xF -n 4 -- -i --portmask=0x1 --nb-cores=2
> +
> +Running BNXT PMD
> +----------------
> +
> +The BNXT PMD can run on PF or VF.
> +
> +PCI-SIG Single Root I/O Virtualization (SR-IOV) involves the direct
> assignment of part of the network port resources to guest operating systems
> using the SR-IOV standard.
> +NIC is logically distributed among multiple virtual machines (VMs), while
> still having global data in common to share with the PF and other VFs.
> +
> +Sysadmin can create and configure VFs:
> +
> +.. code-block:: console
> +
> +  echo num_vfs >
> /sys/bus/pci/devices/domain_id:bus_id:device_id:function_id/sriov_numvfs
> +  (ex) echo 4 > /sys/bus/pci/devices/0000:82:00:0/sriov_numvfs
> +
> +Sysadmin also can change the VF property such as MAC address, transparent
> VLAN, TX rate limit, and trusted VF:
> +
> +.. code-block:: console
> +
> +  ip link set pf_id vf vf_id mac (mac_address) vlan (vlan_id) txrate
> (rate_value) trust (enable|disable)
> +  (ex) ip link set 0 vf 0 mac 00:11:22:33:44:55 vlan 0x100 txrate 100
> trust disable
> +
> +Running on VF
> +~~~~~~~~~~~~~
> +
> +Flow Bifurcation
> +^^^^^^^^^^^^^^^^
> +
> +The Flow Bifurcation splits the incoming data traffic to user space
> applications (such as DPDK applications) and/or kernel space programs (such
> as the Linux kernel stack).
> +It can direct some traffic, for example data plane traffic, to DPDK.
> +Rest of the traffic, for example control plane traffic, would be
> redirected to to the traditional Linux networking stack.
> +
> +Refer to https://doc.dpdk.org/guides/howto/flow_bifurcation.html
> +
> +Benefits of the flow bifurcation include:
> +
> +* Better performance with less CPU overhead, as user application can
> directly access the NIC for data path
> +* NIC is still being controlled by the kernel, as control traffic is
> forwarded only to the kernel driver
> +* Control commands, e.g. ethtool, will work as usual
> +
> +Running on a VF, the BXNT PMD supports the flow bifurcation with a
> combination of SR-IOV and packet classification and/or forwarding
> capability.
> +In the simplest case of flow bifurcation, a PF driver configures a NIC to
> forward all user traffic directly to VFs with matching destination MAC
> address, while the rest of the traffic is forwarded to a PF.
> +Note that the broadcast packets will be forwarded to both PF and VF.
> +
> +.. code-block:: console
> +
> +    (ex) ethtool --config-ntuple ens2f0 flow-type ether dst
> 00:01:02:03:00:01 vlan 10 vlan-mask 0xf000 action 0x100000000
> +
> +Trusted VF
> +^^^^^^^^^^
> +
> +By default, VFs are *not* allowed to perform privileged operations, such
> as modifying the VF’s MAC address in the guest. These security measures are
> designed to prevent possible attacks.
> +However, when a DPDK application can be trusted (e.g., OVS-DPDK, here),
> these operations performed by a VF would be legitimate and can be allowed.
> +
> +To enable VF to request "trusted mode," a new trusted VF concept was
> introduced in Linux kernel 4.4 and allowed VFs to become “trusted” and
> perform some privileged operations.
> +
> +The BNXT PMD supports the trusted VF mode of operation. Only a PF can
> enable trusted attribute on the VF. It is preferable to enable the Trusted
> setting on a VF before starting applications.
> +However, the BNXT PMD handles dynamic changes in trusted settings as well.
> +
> +Note that control commands, e.g., ethtool, will work via the kernel PF
> driver, *not* via the trusted VF driver.
> +
> +Operations supported by trusted VF:
> +
> +* MAC address configuration
> +* Flow rule creation
> +
> +Operations *not* supported by trusted VF:
> +
> +* Firmware upgrade
> +* Promiscuous mode setting
> +
> +Running on PF
> +~~~~~~~~~~~~~
> +
> +Unlike the VF when BNXT PMD runs on a PF there are no restrictions placed
> on the features which the PF can enable or request.
> +In a multiport NIC, each port will have a corresponding PF. Also
> depending on the configuration of the NIC there can be more than one PF
> associated per port.
> +A sysadmin can load the kernel driver on one PF, and run BNXT PMD on the
> other PF or run the PMD on both the PFs. In such cases, the firmware picks
> one of the PFs as a master PF.
> +
> +Much like in the trusted VF, the DPDK application must be *trusted* and
> expected to be *well-behaved*.
> +
> +Features
> +--------
> +
> +The BNXT PMD supports the following features:
> +
> +* Port Control
> +    * Port MTU
> +    * LED
> +    * Flow Control and Autoneg
> +* Packet Filtering
> +    * Unicast MAC Filter
> +    * Multicast MAC Filter
> +    * VLAN Filtering
> +    * Allmulticast Mode
> +    * Promiscuous Mode
> +* Stateless Offloads
> +    * CRC Offload
> +    * Checksum Offload (IPv4, TCP, and UDP)
> +    * Multi-Queue (TSS and RSS)
> +    * Segmentation and Reassembly (TSO and LRO)
> +* VLAN insert strip
> +* Stats Collection
> +* Generic Flow Offload
> +
> +Port Control
> +~~~~~~~~~~~~
> +
> +**Port MTU**: BNXT PMD supports the MTU (Maximum Transmission Unit) up to
> 9,574 bytes:
> +
> +.. code-block:: console
> +
> +    testpmd> port config mtu (port_id) mtu_value
> +    testpmd> show port info (port_id)
> +
> +**LED**: Application tunes on (or off) a port LED, typically for a port
> identification:
> +
> +.. code-block:: console
> +
> +    int rte_eth_led_on (uint16_t port_id)
> +    int rte_eth_led_off (uint16_t port_id)
> +
> +**Flow Control and Autoneg**: Application tunes on (or off) flow control
> and/or auto-negotiation on a port:
> +
> +.. code-block:: console
> +
> +    testpmd> set flow_ctrl rx (on|off) (port_id)
> +    testpmd> set flow_ctrl tx (on|off) (port_id)
> +    testpmd> set flow_ctrl autoneg (on|off) (port_id)
> +
> +Note that the BNXT PMD does *not* support some options and ignores them
> when requested:
> +
> +* high_water
> +* low_water
> +* pause_time
> +* mac_ctrl_frame_fwd
> +* send_xon
> +
> +Packet Filtering
> +~~~~~~~~~~~~~~~~
> +
> +Applications control the packet-forwarding behaviors with packet filters.
> +
> +The BNXT PMD supports hardware-based packet filtering:
> +
> +* UC (Unicast) MAC Filters
> +    * No unicast packets are forwarded to an application except the one
> with DMAC address added to the port
> +    * At initialization, the station MAC address is added to the port
> +* MC (Multicast) MAC Filters
> +    * No multicast packets are forwarded to an application except the one
> with MC address added to the port
> +    * When the application listens to a multicast group, it adds the MC
> address to the port
> +* VLAN Filtering Mode
> +    * When enabled, no packets are forwarded to an application except the
> ones with the VLAN tag assigned to the port
> +* Allmulticast Mode
> +    * When enabled, every multicast packet received on the port is
> forwarded to the application
> +    * Typical usage is routing applications
> +* Promiscuous Mode
> +    * When enabled, every packet received on the port is forwarded to the
> application
> +
> +Unicast MAC Filter
> +^^^^^^^^^^^^^^^^^^
> +
> +The application adds (or removes) MAC addresses to enable (or disable)
> whitelist filtering to accept packets.
> +
> +.. code-block:: console
> +
> +    testpmd> show port (port_id) macs
> +    testpmd> mac_addr (add|remove) (port_id) (XX:XX:XX:XX:XX:XX)
> +
> +Multicast MAC Filter
> +^^^^^^^^^^^^^^^^^^^^
> +
> +Application adds (or removes) Multicast addresses to enable (or disable)
> whitelist filtering to accept packets.
> +
> +.. code-block:: console
> +
> +    testpmd> show port (port_id) mcast_macs
> +    testpmd> mcast_addr (add|remove) (port_id) (XX:XX:XX:XX:XX:XX)
> +
> +Application adds (or removes) Multicast addresses to enable (or disable)
> whitelist filtering to accept packets.
> +
> +Note that the BNXT PMD supports up to 16 MC MAC filters. if the user adds
> more than 16 MC MACs, the BNXT PMD puts the port into the Allmulticast mode.
> +
> +VLAN Filtering
> +^^^^^^^^^^^^^^
> +
> +The application enables (or disables) VLAN filtering mode. When the mode
> is enabled, no packets are forwarded to an application except ones with
> VLAN tag assigned for the application.
> +
> +.. code-block:: console
> +
> +    testpmd> vlan set filter (on|off) (port_id)
> +    testpmd> rx_vlan (add|rm) (vlan_id) (port_id)
> +
> +Allmulticast Mode
> +^^^^^^^^^^^^^^^^^
> +
> +The application enables (or disables) the allmulticast mode. When the
> mode is enabled, every multicast packet received is forwarded to the
> application.
> +
> +.. code-block:: console
> +
> +    testpmd> show port info (port_id)
> +    testpmd> set allmulti (port_id) (on|off)
> +
> +Promiscuous Mode
> +^^^^^^^^^^^^^^^^
> +
> +The application enables (or disables) the promiscuous mode. When the mode
> is enabled on a port, every packet received on the port is forwarded to the
> application.
> +
> +.. code-block:: console
> +
> +    testpmd> show port info (port_id)
> +    testpmd> set promisc port_id (on|off)
> +
> +Stateless Offloads
> +~~~~~~~~~~~~~~~~~~
> +
> +Like Linux, DPDK provides enabling hardware offload of some stateless
> processing (such as checksum calculation) of the stack, alleviating the CPU
> from having to burn cycles on every packet.
> +
> +Listed below are the stateless offloads supported by the BNXT PMD:
> +
> +* CRC offload (for both TX and RX packets)
> +* Checksum Offload (for both TX and RX packets)
> +    * IPv4 Checksum Offload
> +    * TCP Checksum Offload
> +    * UDP Checksum Offload
> +* Segmentation/Reassembly Offloads
> +    * TCP Segmentation Offload (TSO)
> +    * Large Receive Offload (LRO)
> +* Multi-Queue
> +    * Transmit Side Scaling (TSS)
> +    * Receive Side Scaling (RSS)
> +
> +Also, the BNXT PMD supports stateless offloads on inner frames for
> tunneled packets. Listed below are the tunneling protocols supported by the
> BNXT PMD:
> +
> +* VXLAN
> +* GRE
> +* NVGRE
> +
> +Note that enabling (or disabling) stateless offloads requires
> applications to stop DPDK before changing configuration.
> +
> +CRC Offload
> +^^^^^^^^^^^
> +
> +The FCS (Frame Check Sequence) in the Ethernet frame is a four-octet CRC
> (Cyclic Redundancy Check) that allows detection of corrupted data within
> the entire frame as received on the receiver side.
> +
> +The BNXT PMD supports hardware-based CRC offload:
> +
> +* TX: calculate and insert CRC
> +* RX: check and remove CRC, notify the application on CRC error
> +
> +Note that the CRC offload is always turned on.
> +
> +Checksum Offload
> +^^^^^^^^^^^^^^^^
> +
> +The application enables hardware checksum calculation for IPv4, TCP, and
> UDP.
> +
> +.. code-block:: console
> +
> +    testpmd> port stop (port_id)
> +    testpmd> csum set (ip|tcp|udp|outer-ip|outer-udp) (sw|hw) (port_id)
> +    testpmd> set fwd csum
> +
> +Multi-Queue
> +^^^^^^^^^^^
> +
> +Multi-Queue, also known as TSS (Transmit Side Scaling) or RSS (Receive
> Side Scaling), is a common networking technique that allows for more
> efficient load balancing across multiple CPU cores.
> +
> +The application enables multiple TX and RX queues when starts.
> +
> +.. code-block:: console
> +
> +    testpmd -l 1,3,5 --master-lcore 1 --txq=2 –rxq=2 --nb-cores=2
> +
> +**TSS**
> +
> +TSS distributes network transmit processing across several hardware-based
> transmit queues, allowing outbound network traffic to be processed by
> multiple CPU cores.
> +
> +**RSS**
> +
> +RSS distributes network receive processing across several hardware-based
> receive queues, allowing inbound network traffic to be processed by
> multiple CPU cores.
> +
> +The application can select the RSS mode, i.e. select the header fields
> that are included for hash calculation. The BNXT PMD supports the RSS mode
> of ``default|ip|tcp|udp|none``, where default mode is L3 and L4.
> +
> +For tunneled packets, RSS hash is calculated over inner frame header
> fields. Applications may want to select the tunnel header fields for hash
> calculation, and it will be supported in 20.08 using RSS level.
> +
> +.. code-block:: console
> +
> +    testpmd> port config (port_id) rss (all|default|ip|tcp|udp|none)
> +
> +    // note that the testpmd defaults the RSS mode to ip
> +    // ensure to issue the command below to enable L4 header (TCP or UDP)
> along with IPv4 header
> +    testpmd> port config (port_id) rss default
> +
> +    // to check the current RSS configuration, such as RSS function and
> RSS key
> +    testpmd> show port (port_id) rss-hash key
> +
> +    // RSS is enabled by default. However, application can disable RSS as
> follows
> +    testpmd> port config (port_id) rss none
> +
> +Application can change the flow distribution, i.e. remap the received
> traffic to CPU cores, using RSS RETA (Redirection Table).
> +
> +.. code-block:: console
> +
> +    // application queries the current RSS RETA configuration
> +    testpmd> show port (port_id) rss reta size (mask0, mask1)
> +
> +    // application changes the RSS RETA configuration
> +    testpmd> port config (port_id) rss reta (hash, queue) [, (hash,
> queue)]
> +
> +TSO
> +^^^
> +
> +TSO (TCP Segmentation Offload), also known as LSO (Large Send Offload),
> enables the TCP/IP stack to pass to the NIC a larger datagram than the MTU
> (Maximum Transmit Unit). NIC breaks it into multiple segments before
> sending it to the network.
> +
> +The BNXT PMD supports hardware-based TSO.
> +
> +.. code-block:: console
> +
> +    // display the status of TSO
> +    testpmd> tso show (port_id)
> +
> +    // enable/disable TSO
> +    testpmd> port config (port_id) tx_offload tcp_tso (on|off)
> +
> +    // set TSO segment size
> +    testpmd> tso set segment_size (port_id)
> +
> +The BNXT PMD also supports hardware-based tunneled TSO.
> +
> +.. code-block:: console
> +
> +    // display the status of tunneled TSO
> +    testpmd> tunnel_tso show (port_id)
> +
> +    // enable/disable tunneled TSO
> +    testpmd> port config (port_id) tx_offload vxlan_tnl_tso|gre_tnl_tso
> (on|off)
> +
> +    // set tunneled TSO segment size
> +    testpmd> tunnel_tso set segment_size (port_id)
> +
> +Note that the checksum offload is always assumed to be enabled for TSO.
> +
> +LRO
> +^^^
> +
> +LRO (Large Receive Offload) enables NIC to aggregate multiple incoming
> TCP/IP packets from a single stream into a larger buffer, before passing to
> the networking stack.
> +
> +The BNXT PMD supports hardware-based LRO.
> +
> +.. code-block:: console
> +
> +    // display the status of LRO
> +    testpmd> show port (port_id) rx_offload capabilities
> +    testpmd> show port (port_id) rx_offload configuration
> +
> +    // enable/disable LRO
> +    testpmd> port config (port_id) rx_offload tcp_lro (on|off)
> +
> +    // set max LRO packet (datagram) size
> +    testpmd> port config (port_id) max-lro-pkt-size (max_size)
> +
> +The BNXT PMD also supports tunneled LRO.
> +
> +Some applications, such as routing, should *not* change the packet
> headers as they pass through (i.e. received from and sent back to the
> network). In such a case, GRO (Generic Receive Offload) should be used
> instead of LRO.
> +
> +VLAN Insert/Strip
> +~~~~~~~~~~~~~~~~~
> +
> +DPDK application offloads VLAN insert/strip to improve performance. The
> BNXT PMD supports hardware-based VLAN insert/strip offload for both single
> and double VLAN packets.
> +
> +
> +VLAN Insert
> +^^^^^^^^^^^
> +
> +Application configures the VLAN TPID (Tag Protocol ID). By default, the
> TPID is 0x8100.
> +
> +.. code-block:: console
> +
> +    // configure outer TPID value for a port
> +    testpmd> vlan set outer tpid (tpid_value) (port_id)
> +
> +The inner TPID set will be rejected as the BNXT PMD supports inserting
> only an outer VLAN. Note that when a packet has a single VLAN, the tag is
> considered as outer, i.e. the inner VLAN is relevant only when a packet is
> double-tagged.
> +
> +The BNXT PMD supports various TPID values shown below. Any other values
> will be rejected.
> +
> +* ``0x8100``
> +* ``0x88a8``
> +* ``0x9100``
> +* ``0x9200``
> +* ``0x9300``
> +
> +The BNXT PMD supports the VLAN insert offload per-packet basis. The
> application provides the TCI (Tag Control Info) for a packet via mbuf. In
> turn, the BNXT PMD inserts the VLAN tag (via hardware) using the provided
> TCI along with the configured TPID.
> +
> +.. code-block:: console
> +
> +    // enable VLAN insert offload
> +    testpmd> port config (port_id) rx_offload vlan_insert|qinq_insert
> (on|off)
> +
> +    if (mbuf->ol_flags && PKT_TX_QINQ)       // case-1: insert VLAN to
> single-tagged packet
> +        tci_value = mbuf->vlan_tci_outer
> +    else if (mbuf->ol_flags && PKT_TX_VLAN)  // case-2: insert VLAN to
> untagged packet
> +        tci_value = mbuf->vlan_tci
> +
> +VLAN Strip
> +^^^^^^^^^^
> +
> +The application configures the per-port VLAN strip offload.
> +
> +.. code-block:: console
> +
> +    // enable VLAN strip on a port
> +    testpmd> port config (port_id) tx_offload vlan_strip (on|off)
> +
> +    // notify application VLAN strip via mbuf
> +    mbuf->ol_flags |= PKT_RX_VLAN | PKT_RX_STRIPPED // outer VLAN is
> found and stripped
> +    mbuf->vlan_tci = tci_value                      // TCI of the
> stripped VLAN
> +
> +Time Synchronization
> +~~~~~~~~~~~~~~~~~~~~
> +
> +System operators may run a PTP (Precision Time Protocol) client
> application to synchronize the time on the NIC (and optionally, on the
> system) to a PTP master.
> +
> +The BNXT PMD supports a PTP client application to communicate with a PTP
> master clock using DPDK IEEE1588 APIs. Note that the PTP client application
> need to run on PF and vector mode needs to be disabled.
> +
> +For the PTP time synchronization support, the BNXT PMD must be compiled
> with ``CONFIG_RTE_LIBRTE_IEEE1588=y`` (this compilation flag is currently
> pending).
> +
> +.. code-block:: console
> +
> +    testpmd> set fwd ieee1588 // enable IEEE 1588 mode
> +
> +When enabled, the BNXT PMD configures hardware to insert IEEE 1588
> timestamps to the outgoing PTP packets and reports IEEE 1588 timestamps
> from the incoming PTP packets to application via mbuf.
> +
> +.. code-block:: console
> +
> +    // RX packet completion will indicate whether the packet is PTP
> +    mbuf->ol_flags |= PKT_RX_IEEE1588_PTP
> +
> +Statistics Collection
> +~~~~~~~~~~~~~~~~~~~~~
> +
> +In Linux, the *ethtool -S* enables us to query the NIC stats. DPDK
> provides the similar functionalities via rte_eth_stats and rte_eth_xstats.
> +
> +The BNXT PMD supports both basic and extended stats collection:
> +
> +* Basic stats
> +* Extended stats
> +
> +Basic Stats
> +^^^^^^^^^^^
> +
> +The application collects per-port and per-queue stats using rte_eth_stats
> APIs.
> +
> +.. code-block:: console
> +
> +    testpmd> show port stats (port_id)
> +
> +Basic stats include:
> +
> +* ipackets
> +* ibytes
> +* opackets
> +* obytes
> +* imissed
> +* ierrors
> +* oerrors
> +
> +By default, the BNXT PMD supports per-queue stats for 16 queues. For more
> than 16 queues, BNXT PMD should be compiled with
> ``CONFIG_RTE_ETHDEV_QUEUE_STAT_CNTRS`` set to the desired number of queues.
> +
> +Extended Stats
> +^^^^^^^^^^^^^^
> +
> +Unlike basic stats, the extended stats are vendor-specific, i.e. each
> vendor provides its own set of counters.
> +
> +The BNXT PMD provides a rich set of counters, including per-flow
> counters, per-cos counters, per-priority counters, etc.
> +
> +.. code-block:: console
> +
> +    testpmd> show port xstats (port_id)
> +
> +Shown below is the elaborated sequence to retrieve extended stats:
> +
> +.. code-block:: console
> +
> +    // application queries the number of xstats
> +    len = rte_eth_xstats_get(port_id, NULL, 0);
> +    // BNXT PMD returns the size of xstats array (i.e. the number of
> entries)
> +    // BNXT PMD returns 0, if the feature is compiled out or disabled
> +
> +    // application allocates memory for xstats
> +    struct rte_eth_xstats_name *names; // name is 64 character or less
> +    struct rte_eth_xstats *xstats;
> +    names = calloc(len, sizeof(*names));
> +    xstats = calloc(len, sizeof(*xstats));
> +
> +    // application retrieves xstats // names and values
> +    ret = rte_eth_xstats_get_names(port_id, *names, len);
> +    ret = rte_eth_xstats_get(port_id, *xstats, len);
> +
> +    // application checks the xstats
> +    // application may repeat the below:
> +    len = rte_eth_xstats_reset(port_id); // reset the xstats
> +
> +    // reset can be skipped, if application wants to see accumulated stats
> +    // run traffic
> +    // probably stop the traffic
> +    // retrieve xstats // no need to retrieve xstats names again
> +    // check xstats
> +
> +Generic Flow Offload
> +~~~~~~~~~~~~~~~~~~~~
> +
> +Applications can get benefit by offloading all or part of flow processing
> to hardware. For example, applications can offload packet classification
> only (partial offload) or whole match-action (full offload).
> +
> +DPDK offers the Generic Flow API (rte_flow API) to configure hardware to
> perform flow processing.
> +
> +Listed below are the rte_flow APIs BNXT PMD supports:
> +
> +* rte_flow_validate
> +* rte_flow_create
> +* rte_flow_destroy
> +* rte_flow_flush
> +
> +Host Based Flow Table Management
> +^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
> +
> +Starting with 20.05 BNXT PMD supports host based flow table management.
> This is a new mechanism that should allow higher flow scalability than what
> is currently supported. This new approach also defines a new rte_flow
> parser, and mapper which currently supports basic packet classification in
> the receive path.
> +
> +The feature uses a newly implemented control-plane firmware interface
> which optimizes flow insertions and deletions.
> +
> +This is a tech preview feature, and is disabled by default. It can be
> enabled using bnxt devargs. For ex: "-w 0000:0d:00.0,host-based-truflow=1”.
> +
> +Application Support
> +-------------------
> +
> +Firmware
> +~~~~~~~~
> +
> +The BNXT PMD supports the application to retrieve the firmware version.
> +
> +.. code-block:: console
> +
> +    testpmd> show port info (port_id)
> +
> +Note that the applications cannot update the firmware using BNXT PMD.
> +
> +Multiple Processes
> +~~~~~~~~~~~~~~~~~~
> +
> +When two or more DPDK applications (e.g., testpmd and dpdk-pdump) share a
> single instance of DPDK, the BNXT PMD supports a single primary application
> and one or more secondary applications. Note that the DPDK-layer (*not* the
> PMD-layer) ensures there is only one primary application.
> +
> +There are two modes:
> +
> +Manual mode
> +
> +* Application notifies whether it is primary or secondary using
> *proc-type* flag
> +* 1st process should be spawned with ``--proc-type=primary``
> +* All subsequent processes should be spawned with
> ``--proc-type=secondary``
> +
> +Auto detection mode
> +
> +* Application is using ``proc-type=auto`` flag
> +* A process is spawned as a secondary if a primary is already running
> +
> +The BNXT PMD uses the info to skip a device initialization, i.e. performs
> a device initialization only when being brought up by a primary application.
> +
> +Runtime Queue Setup
> +~~~~~~~~~~~~~~~~~~~
> +
> +Typically, a DPDK application allocates TX and RX queues statically: i.e.
> queues are allocated at start. However, an application may want to increase
> (or decrease) the number of queues dynamically for various reasons, e.g.
> power savings.
> +
> +The BNXT PMD supports applications to increase or decrease queues at
> runtime.
> +
> +.. code-block:: console
> +
> +    testpmd> port config all (rxq|txq) (num_queues)
> +
> +Note that a DPDK application must allocate default queues (one for TX and
> one for RX at minimum) at initialization.
> +
> +Descriptor Status
> +~~~~~~~~~~~~~~~~~
> +
> +Applications may use the descriptor status for various reasons, e.g. for
> power savings. For example, an application may stop polling and change to
> interrupt mode when the descriptor status shows no packets to service for a
> while.
> +
> +The BNXT PMD supports the application to retrieve both TX and RX
> descriptor status.
> +
> +.. code-block:: console
> +
> +    testpmd> show port (port_id) (rxq|txq) (queue_id) desc (desc_id)
> status
> +
> +Bonding
> +~~~~~~~
> +
> +DPDK implements a light-weight library to allow PMDs to be bonded
> together and provide a single logical PMD to the application.
> +
> +.. code-block:: console
> +
> +    testpmd -l 0-3 -n4 --vdev 'net_bonding0,mode=0,slave=<PCI B:D.F
> device 1>,slave=<PCI B:D.F device 2>,mac=XX:XX:XX:XX:XX:XX’ –
> --socket_num=1 – -i --port-topology=chained
> +    (ex) testpmd -l 1,3,5,7,9 -n4 --vdev
> 'net_bonding0,mode=0,slave=0000:82:00.0,slave=0000:82:00.1,mac=00:1e:67:1d:fd:1d'
> – --socket-num=1 – -i --port-topology=chained
> +
> +Vector Processing
>  -----------------
>
> -The BNXT PMD includes support for the following features:
> -
> -   * Multiple transmit and receive queues
> -   * Queue start/stop
> -   * RSS hash
> -   * RSS key configuration
> -   * RSS reta configuration
> -   * VMDq
> -   * Packet type parsing
> -   * Configurable RX CRC stripping
> -   * L3/L4 checksum offload
> -   * LRO offload
> -   * TSO offload
> -   * VLAN offload
> -   * SR-IOV VF
> -   * Basic and extended port statistics
> -   * Link state reporting
> -   * Flow control
> -   * Ethertype filtering
> -   * N-tuple filtering
> -   * Promiscuous mode
> -   * Unicast and multicast MAC filtering
> -   * Scatter/gather transmit and receive
> -   * Jumbo frames
> -   * Vector PMD
> -
> -BNXT Vector PMD
> ----------------
> -
> -The BNXT PMD includes support for SSE vector mode on x86 platforms. Vector
> -provides significantly improved performance over the base implementation,
> -however it does not support all of the features that are supported by the
> -base (non-vector) implementation. Vector mode will be selected and enabled
> -automatically when the port is started if allowed by the current
> configuration.
> -
> -RX Requirements for Vector Mode
> -^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
> -
> -Vector mode receive will be enabled if the following constrainsts are met:
> -   * Packets must fit within a single mbuf (no scatter RX).
> -   * LRO offload must be disabled.
> -
> -TX Requirements for Vector Mode
> -^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
> -
> -Vector mode transmit will be enabled if the following constrainsts are
> met:
> -   * Packets must be contained within a single mbuf (no gather TX).
> -   * All transmit offloads other than VLAN insertion must be disabled.
> -
> -BNXT PMD Supported Chipsets and Adapters
> -----------------------------------------
> -
> -Chipsets and adapters supported by the bnxt PMD include:
> -
> -  * **Broadcom BCM5730X NetXtreme-C® Family of Ethernet Network
> Controllers**
> -
> -       * M150c - Single-port 40/50 Gigabit Ethernet Adapter
> -       * P150c - Single-port 40/50 Gigabit Ethernet Adapter
> -       * P225c - Dual-port 10/25 Gigabit Ethernet Adapter
> -
> -  * **Broadcom BCM574XX/BCM575XX NetXtreme-E® Family of Ethernet Network
> Controllers**
> -
> -       * M125P - Single-port OCP 2.0 10/25 Gigabit Ethernet Adapter
> -       * M150P - Single-port OCP 2.0 50 Gigabit Ethernet Adapter
> -       * M150PM - Single-port OCP 2.0 Multi-Host 50 Gigabit Ethernet
> Adapter
> -       * M210P - Dual-port OCP 2.0 10 Gigabit Ethernet Adapter
> -       * M210TP - Dual-port OCP 2.0 10 Gigabit Ethernet Adapter
> -       * M11000G - Single-port OCP 2.0 10/25/50/100 Gigabit Ethernet
> Adapter
> -       * N150G - Single-port OCP 3.0 50 Gigabit Ethernet Adapter
> -       * M225P - Dual-port OCP 2.0 10/25 Gigabit Ethernet Adapter
> -       * N210P - Dual-port OCP 3.0 10 Gigabit Ethernet Adapter
> -       * N210TP - Dual-port OCP 3.0 10 Gigabit Ethernet Adapter
> -       * N225P - Dual-port OCP 3.0 10/25 Gigabit Ethernet Adapter
> -       * N250G - Dual-port OCP 3.0 50 Gigabit Ethernet Adapter
> -       * N410SG - Quad-port OCP 3.0 10 Gigabit Ethernet Adapter
> -       * N410SGBT - Quad-port OCP 3.0 10 Gigabit Ethernet Adapter
> -       * N425G - Quad-port OCP 3.0 10/25 Gigabit Ethernet Adapter
> -       * N1100G - Single-port OCP 3.0 10/25/50/100 Gigabit Ethernet
> Adapter
> -       * N2100G - Dual-port OCP 3.0 10/25/50/100 Gigabit Ethernet Adapter
> -       * N2200G - Dual-port OCP 3.0 10/25/50/100/200 Gigabit Ethernet
> Adapter
> -       * P150P - Single-port 50 Gigabit Ethernet Adapter
> -       * P210P - Dual-port 10 Gigabit Ethernet Adapter
> -       * P210TP - Dual-port 10 Gigabit Ethernet Adapter
> -       * P225P - Dual-port 10/25 Gigabit Ethernet Adapter
> -       * P410SG - Quad-port 10 Gigabit Ethernet Adapter
> -       * P410SGBT - Quad-port 10 Gigabit Ethernet Adapter
> -       * P425G - Quad-port 10/25 Gigabit Ethernet Adapter
> -       * P1100G - Single-port 10/25/50/100 Gigabit Ethernet Adapter
> -       * P2100G - Dual-port 10/25/50/100 Gigabit Ethernet Adapter
> -       * P2200G - Dual-port 10/25/50/100/200 Gigabit Ethernet Adapter
> -
> -    Information about Ethernet adapters in the NetXtreme family of
> -    adapters can be found in the `NetXtreme® Brand section
> -    <
> https://www.broadcom.com/products/ethernet-connectivity/network-adapters/
> >`_
> -    of the `Broadcom website <http://www.broadcom.com/>`_.
> -
> -  * **Broadcom BCM588XX Stingray Family of SmartNIC Adapters**
> -
> -       * PS410T - Quad-port 10 Gigabit Ethernet SmartNIC
> -       * PS225 - Dual-port 25 Gigabit Ethernet SmartNIC
> -       * PS250 - Dual-Port 50 Gigabit Ethernet SmartNIC
> -
> -    Information about the Stingray family of SmartNIC adapters can be
> found in the
> -    `Stingray® Brand section
> -    <https://www.broadcom.com/products/ethernet-connectivity/smartnic/>`_
> -    of the `Broadcom website <http://www.broadcom.com/>`_.
> -
> -  * **Broadcom StrataGX® BCM5871X Series of Communucations Processors**
> -
> -    These ARM based processors target a broad range of networking
> applications
> -    including virtual CPE (vCPE) and NFV appliances, 10G service routers
> and
> -    gateways, control plane processing for Ethernet switches and network
> -    attached storage (NAS).
> -
> -    Information about the StrataGX family of adapters can be found in the
> -    `StrataGX® BCM58712
> -    <
> http://www.broadcom.com/products/embedded-and-networking-processors/communications/bcm58712
> >`_
> -    and `StrataGX® BCM58713
> -    <
> http://www.broadcom.com/products/embedded-and-networking-processors/communications/bcm58713
> >`_
> -    sections of the `Broadcom website <http://www.broadcom.com/>`_.
> +Vector processing provides significantly improved performance over scalar
> processing (see Vector Processor, here).
> +
> +The BNXT PMD supports the vector processing using SSE (Streaming SIMD
> Extensions) instructions on x86 platforms. The BNXT vPMD (vector mode PMD)
> is currently limited to Intel/AMD CPU architecture. Support for ARM is
> *not* currently implemented.
> +
> +This improved performance comes from several optimizations:
> +
> +* Batching
> +    * TX: processing completions in bulk
> +    * RX: allocating mbufs in bulk
> +* Chained mbufs are *not* supported, i.e. a packet should fit a single
> mbuf
> +* Some stateless offloads are *not* supported with vector processing
> +    * TX: no offloads will be supported
> +    * RX: reduced RX offloads (listed below) will be supported::
> +
> +       DEV_RX_OFFLOAD_VLAN_STRIP
> +       DEV_RX_OFFLOAD_KEEP_CRC
> +       DEV_RX_OFFLOAD_JUMBO_FRAME
> +       DEV_RX_OFFLOAD_IPV4_CKSUM
> +       DEV_RX_OFFLOAD_UDP_CKSUM
> +       DEV_RX_OFFLOAD_TCP_CKSUM
> +       DEV_RX_OFFLOAD_OUTER_IPV4_CKSUM
> +       DEV_RX_OFFLOAD_RSS_HASH
> +       DEV_RX_OFFLOAD_VLAN_FILTER
> +
> +The BNXT Vector PMD is enabled in DPDK builds by default. When required,
> it has to be disabled in the DPDK build configuration by setting
> ``CONFIG_RTE_LIBRTE_BNXT_INC_VECTOR=n``.
> +
> +However, a decision to enable vector mode will be made when the port
> transitions from stopped to started. Any TX offloads or some RX offloads
> (other than listed above) will disable the vector mode.
> +Offload configuration changes that impact vector mode must be made when
> the port is stopped.
> +
> +Note that TX (or RX) vector mode can be enabled independently from RX (or
> TX) vector mode.
> +
> +Appendix
> +--------
> +
> +Supported Chipsets and Adapters
> +~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
> +
> +BCM5730x NetXtreme-C® Family of Ethernet Network Controllers
> +^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
> +
> +Information about Ethernet adapters in the NetXtreme family of adapters
> can be found in the `NetXtreme® Brand section <
> https://www.broadcom.com/products/ethernet-connectivity/network-adapters/>`_
> of the `Broadcom website <http://www.broadcom.com/>`_.
> +
> +* ``M150c ... Single-port 40/50 Gigabit Ethernet Adapter``
> +* ``P150c ... Single-port 40/50 Gigabit Ethernet Adapter``
> +* ``P225c ... Dual-port 10/25 Gigabit Ethernet Adapter``
> +
> +BCM574xx/575xx NetXtreme-E® Family of Ethernet Network Controllers
> +^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
> +
> +Information about Ethernet adapters in the NetXtreme family of adapters
> can be found in the `NetXtreme® Brand section <
> https://www.broadcom.com/products/ethernet-connectivity/network-adapters/>`_
> of the `Broadcom website <http://www.broadcom.com/>`_.
> +
> +* ``M125P .... Single-port OCP 2.0 10/25 Gigabit Ethernet Adapter``
> +* ``M150P .... Single-port OCP 2.0 50 Gigabit Ethernet Adapter``
> +* ``M150PM ... Single-port OCP 2.0 Multi-Host 50 Gigabit Ethernet
> Adapter``
> +* ``M210P .... Dual-port OCP 2.0 10 Gigabit Ethernet Adapter``
> +* ``M210TP ... Dual-port OCP 2.0 10 Gigabit Ethernet Adapter``
> +* ``M1100G ... Single-port OCP 2.0 10/25/50/100 Gigabit Ethernet Adapter``
> +* ``N150G .... Single-port OCP 3.0 50 Gigabit Ethernet Adapter``
> +* ``M225P .... Dual-port OCP 2.0 10/25 Gigabit Ethernet Adapter``
> +* ``N210P .... Dual-port OCP 3.0 10 Gigabit Ethernet Adapter``
> +* ``N210TP ... Dual-port OCP 3.0 10 Gigabit Ethernet Adapter``
> +* ``N225P .... Dual-port OCP 3.0 10/25 Gigabit Ethernet Adapter``
> +* ``N250G .... Dual-port OCP 3.0 50 Gigabit Ethernet Adapter``
> +* ``N410SG ... Quad-port OCP 3.0 10 Gigabit Ethernet Adapter``
> +* ``N410SGBT . Quad-port OCP 3.0 10 Gigabit Ethernet Adapter``
> +* ``N425G .... Quad-port OCP 3.0 10/25 Gigabit Ethernet Adapter``
> +* ``N1100G ... Single-port OCP 3.0 10/25/50/100 Gigabit Ethernet Adapter``
> +* ``N2100G ... Dual-port OCP 3.0 10/25/50/100 Gigabit Ethernet Adapter``
> +* ``N2200G ... Dual-port OCP 3.0 10/25/50/100/200 Gigabit Ethernet
> Adapter``
> +* ``P150P .... Single-port 50 Gigabit Ethernet Adapter``
> +* ``P210P .... Dual-port 10 Gigabit Ethernet Adapter``
> +* ``P210TP ... Dual-port 10 Gigabit Ethernet Adapter``
> +* ``P225P .... Dual-port 10/25 Gigabit Ethernet Adapter``
> +* ``P410SG ... Quad-port 10 Gigabit Ethernet Adapter``
> +* ``P410SGBT . Quad-port 10 Gigabit Ethernet Adapter``
> +* ``P425G .... Quad-port 10/25 Gigabit Ethernet Adapter``
> +* ``P1100G ... Single-port 10/25/50/100 Gigabit Ethernet Adapter``
> +* ``P2100G ... Dual-port 10/25/50/100 Gigabit Ethernet Adapter``
> +* ``P2200G ... Dual-port 10/25/50/100/200 Gigabit Ethernet Adapter``
> +
> +BCM588xx NetXtreme-S® Family of SmartNIC Network Controllers
> +^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
> +
> +Information about the Stingray family of SmartNIC adapters can be found
> in the `Stingray® Brand section <
> https://www.broadcom.com/products/ethernet-connectivity/smartnic/>`_ of
> the `Broadcom website <http://www.broadcom.com/>`_.
> +
> +* ``PS225 ... Dual-port 25 Gigabit Ethernet SmartNIC``
> +
> +BCM5873x StrataGX® Family of Communications Processors
> +^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
> +
> +These ARM-based processors target a broad range of networking
> applications, including virtual CPE (vCPE) and NFV appliances, 10G service
> routers and gateways, control plane processing for Ethernet switches, and
> network-attached storage (NAS).
> +
> +* ``StrataGX BCM58732 ... Octal-Core 3.0GHz 64-bit ARM®v8 Cortex®-A72
> based SoC``
> --
> 2.21.1 (Apple Git-122.3)
>
>

^ permalink raw reply	[flat|nested] 16+ messages in thread

* Re: [dpdk-dev] [PATCH v4] doc: update bnxt guide
  2020-05-18 22:04         ` Ajit Khaparde
@ 2020-05-19  8:15           ` Ferruh Yigit
  2020-05-20 20:03             ` [dpdk-dev] [PATCH v5] " Ajit Khaparde
  0 siblings, 1 reply; 16+ messages in thread
From: Ferruh Yigit @ 2020-05-19  8:15 UTC (permalink / raw)
  To: Ajit Khaparde, dpdk-dev; +Cc: JP Lee, Kovacevic Marko

On 5/18/2020 11:04 PM, Ajit Khaparde wrote:
> 
> 
> On Mon, May 18, 2020 at 10:58 AM Ajit Khaparde <ajit.khaparde@broadcom.com
> <mailto:ajit.khaparde@broadcom.com>> wrote:
> 
>         - Update list of supported adapters.
>         - Update list of supported features.
>         - Add some details to describe the features.
>         - Remove obsolete limitations.
>         - Fix and update links.
> 
>     Signed-off-by: JP Lee <jongpil.lee@broadcom.com
>     <mailto:jongpil.lee@broadcom.com>>
>     Signed-off-by: Ajit Khaparde <ajit.khaparde@broadcom.com
>     <mailto:ajit.khaparde@broadcom.com>>
>     Acked-by: Kovacevic Marko <marko.kovacevic@intel.com
>     <mailto:marko.kovacevic@intel.com>>
>     ---
>     v1->v2: Some lines were too long in v1. Made then shorter. Checked for typos.
>     v2->v3: Removed list of extended stats.
>     v3->v4: Removed an irrelevant line.
> 
> 
> Patch applied to dpdk-next-net-brcm. Thanks
> 
>     ---
>      doc/guides/nics/bnxt.rst | 893 +++++++++++++++++++++++++++++++++------
>      1 file changed, 763 insertions(+), 130 deletions(-)
> 
>     diff --git a/doc/guides/nics/bnxt.rst b/doc/guides/nics/bnxt.rst
>     index 434ba9d6c..3aad7ea4a 100644
>     --- a/doc/guides/nics/bnxt.rst
>     +++ b/doc/guides/nics/bnxt.rst
>     @@ -1,138 +1,771 @@
>     -..  SPDX-License-Identifier: BSD-3-Clause
>     -    Copyright 2016-2019 Broadcom
>     +.. SPDX-License-Identifier: BSD-3-Clause
>     +   Copyright 2020 Broadcom Inc.
> 
>      BNXT Poll Mode Driver
>      =====================
> 
>     -The BNXT PMD (**librte_pmd_bnxt**) implements support for adapters based on
>     -Ethernet controllers and SoCs belonging to the **Broadcom BCM5730X NetXtreme-C®
>     -Family of Ethernet Network Controllers**, the **Broadcom BCM574XX/BCM575XX
>     -NetXtreme-E® Family of Ethernet Network Controllers**, the **Broadcom BCM588XX
>     -Stingray Family of SmartNIC Adapters**, and the **Broadcom StrataGX® BCM5871X
>     -Series of Communications Processors**.  A complete list with links to reference
>     -material is included below.
>     +The Broadcom BNXT PMD (Poll Mode Driver, librte_pmd_bnxt) implements
>     support for adapters based on Ethernet controllers and SoCs belonging to the
>     Broadcom BCM574XX/BCM575XX NetXtreme-E® Family of Ethernet Network
>     Controllers, the Broadcom BCM588XX Stingray Family of Smart NIC Adapters,
>     and the Broadcom StrataGX® BCM5873X Series of Communications Processors.
> 

Hi Ajit,

Both Marko's and my comment to keep .rst line length short seems not addressed.
This one long line makes the .rst file hard to read. Can you please fix this for
all document?

Thanks,
ferruh


^ permalink raw reply	[flat|nested] 16+ messages in thread

* [dpdk-dev] [PATCH v5] doc: update bnxt guide
  2020-05-19  8:15           ` Ferruh Yigit
@ 2020-05-20 20:03             ` Ajit Khaparde
  2020-05-21  1:16               ` Ajit Khaparde
  0 siblings, 1 reply; 16+ messages in thread
From: Ajit Khaparde @ 2020-05-20 20:03 UTC (permalink / raw)
  To: dev; +Cc: ferruh.yigit, JP Lee, Kovacevic Marko

    - Update list of supported adapters.
    - Update list of supported features.
    - Add some details to describe the features.
    - Remove obsolete limitations.
    - Fix and update links.

Signed-off-by: JP Lee <jongpil.lee@broadcom.com>
Signed-off-by: Ajit Khaparde <ajit.khaparde@broadcom.com>
Acked-by: Kovacevic Marko <marko.kovacevic@intel.com>
---
v1->v2: Some lines were too long in v1. Made them shorter. Checked for typos.
v2->v3: Removed list of extended stats.
v3->v4: Removed an irrelevant line.
v4->v5: Shorten the long lines further.
---
 doc/guides/nics/bnxt.rst | 1019 +++++++++++++++++++++++++++++++++-----
 1 file changed, 889 insertions(+), 130 deletions(-)

diff --git a/doc/guides/nics/bnxt.rst b/doc/guides/nics/bnxt.rst
index 434ba9d6c..a0747d7e3 100644
--- a/doc/guides/nics/bnxt.rst
+++ b/doc/guides/nics/bnxt.rst
@@ -1,138 +1,897 @@
-..  SPDX-License-Identifier: BSD-3-Clause
-    Copyright 2016-2019 Broadcom
+.. SPDX-License-Identifier: BSD-3-Clause
+   Copyright 2020 Broadcom Inc.
 
 BNXT Poll Mode Driver
 =====================
 
-The BNXT PMD (**librte_pmd_bnxt**) implements support for adapters based on
-Ethernet controllers and SoCs belonging to the **Broadcom BCM5730X NetXtreme-C®
-Family of Ethernet Network Controllers**, the **Broadcom BCM574XX/BCM575XX
-NetXtreme-E® Family of Ethernet Network Controllers**, the **Broadcom BCM588XX
-Stingray Family of SmartNIC Adapters**, and the **Broadcom StrataGX® BCM5871X
-Series of Communications Processors**.  A complete list with links to reference
-material is included below.
+The Broadcom BNXT PMD (**librte_pmd_bnxt**) implements support for adapters
+based on Ethernet controllers and SoCs belonging to the Broadcom
+BCM574XX/BCM575XX NetXtreme-E® Family of Ethernet Network Controllers,
+the Broadcom BCM588XX Stingray Family of Smart NIC Adapters, and the Broadcom
+StrataGX® BCM5873X Series of Communications Processors.
 
+A complete list with links to reference material is in the Appendix section.
 
-BNXT PMD Features
+CPU Support
+-----------
+
+BNXT PMD supports multiple CPU architectures, including x86-32, x86-64, and ARMv8.
+
+Kernel Dependency
+-----------------
+
+BNXT PMD requires a kernel module (VFIO or UIO) for setting up a device, mapping
+device memory to userspace, registering interrupts, etc.
+VFIO is more secure than UIO, relying on IOMMU protection.
+UIO requires the IOMMU to be disabled or configured in pass-through mode.
+
+Operating Systems supported:
+
+* Red Hat Enterprise Linux release 8.1 (Ootpa)
+* Red Hat Enterprise Linux release 8.0 (Ootpa)
+* Red Hat Enterprise Linux Server release 7.7 (Maipo)
+* Red Hat Enterprise Linux Server release 7.6 (Maipo)
+* Red Hat Enterprise Linux Server release 7.5 (Maipo)
+* Red Hat Enterprise Linux Server release 7.4 (Maipo)
+* Red Hat Enterprise Linux Server release 7.3 (Maipo)
+* Red Hat Enterprise Linux Server release 7.2 (Maipo)
+* CentOS Linux release 8.0
+* CentOS Linux release 7.7
+* CentOS Linux release 7.6.1810
+* CentOS Linux release 7.5.1804
+* CentOS Linux release 7.4.1708
+* Fedora 31
+* FreeBSD 12.1
+* Suse 15SP1
+* Ubuntu 19.04
+* Ubuntu 18.04
+* Ubuntu 16.10
+* Ubuntu 16.04
+* Ubuntu 14.04
+
+The BNXT PMD supports operating with:
+
+* Linux vfio-pci
+* Linux uio_pci_generic
+* Linux igb_uio
+* BSD nic_uio
+
+Compiling BNXT PMD
+------------------
+
+To compile the BNXT PMD:
+
+.. code-block:: console
+
+    make config T=x86_64-native-linux-gcc && make // for x86-64
+    make config T=x86_32-native-linux-gcc && make // for x86-32
+    make config T=armv8a-linux-gcc && make // for ARMv8
+
+Bind the device to one of the kernel modules listed above
+
+.. code-block:: console
+
+    ./dpdk-devbind.py -b vfio-pci|igb_uio|uio_pci_generic bus_id:device_id.function_id
+
+Load an application (e.g. testpmd) with a default configuration (e.g. a single
+TX/RX queue):
+
+.. code-block:: console
+
+    ./testpmd -c 0xF -n 4 -- -i --portmask=0x1 --nb-cores=2
+
+Running BNXT PMD
+----------------
+
+The BNXT PMD can run on PF or VF.
+
+PCI-SIG Single Root I/O Virtualization (SR-IOV) involves the direct assignment
+of part of the network port resources to guest operating systems using the
+SR-IOV standard.
+The NIC is logically distributed among multiple virtual machines (VMs), while
+still having global data in common to share with the PF and other VFs.
+
+A sysadmin can create and configure VFs:
+
+.. code-block:: console
+
+  echo num_vfs > /sys/bus/pci/devices/domain_id:bus_id:device_id.function_id/sriov_numvfs
+  (ex) echo 4 > /sys/bus/pci/devices/0000:82:00.0/sriov_numvfs
+
+A sysadmin can also change VF properties such as the MAC address, transparent
+VLAN, TX rate limit, and trusted VF setting:
+
+.. code-block:: console
+
+  ip link set pf_id vf vf_id mac (mac_address) vlan (vlan_id) txrate (rate_value) trust (enable|disable)
+  (ex) ip link set 0 vf 0 mac 00:11:22:33:44:55 vlan 0x100 txrate 100 trust disable
+
+Running on VF
+~~~~~~~~~~~~~
+
+Flow Bifurcation
+^^^^^^^^^^^^^^^^
+
+The Flow Bifurcation splits the incoming data traffic to user space applications
+(such as DPDK applications) and/or kernel space programs (such as the Linux
+kernel stack).
+It can direct some traffic, for example data plane traffic, to DPDK.
+The rest of the traffic, for example control plane traffic, is redirected to
+the traditional Linux networking stack.
+
+Refer to https://doc.dpdk.org/guides/howto/flow_bifurcation.html
+
+Benefits of the flow bifurcation include:
+
+* Better performance with less CPU overhead, as user application can directly
+access the NIC for data path
+* NIC is still being controlled by the kernel, as control traffic is forwarded
+only to the kernel driver
+* Control commands, e.g. ethtool, will work as usual
+
+Running on a VF, the BNXT PMD supports the flow bifurcation with a combination
+of SR-IOV and packet classification and/or forwarding capability.
+In the simplest case of flow bifurcation, a PF driver configures a NIC to
+forward all user traffic directly to VFs with matching destination MAC address,
+while the rest of the traffic is forwarded to a PF.
+Note that the broadcast packets will be forwarded to both PF and VF.
+
+.. code-block:: console
+
+    (ex) ethtool --config-ntuple ens2f0 flow-type ether dst 00:01:02:03:00:01 vlan 10 vlan-mask 0xf000 action 0x100000000
+
+Trusted VF
+^^^^^^^^^^
+
+By default, VFs are *not* allowed to perform privileged operations, such as
+modifying the VF’s MAC address in the guest. These security measures are
+designed to prevent possible attacks.
+However, when a DPDK application can be trusted (e.g., OVS-DPDK), these
+operations performed by a VF would be legitimate and can be allowed.
+
+To allow a VF to request "trusted mode," the trusted VF concept was introduced
+in Linux kernel 4.4, allowing VFs to become "trusted" and perform some
+privileged operations.
+
+The BNXT PMD supports the trusted VF mode of operation. Only a PF can enable the
+trusted attribute on the VF. It is preferable to enable the Trusted setting on a
+VF before starting applications.
+However, the BNXT PMD handles dynamic changes in trusted settings as well.
+
+Note that control commands, e.g., ethtool, will work via the kernel PF driver,
+*not* via the trusted VF driver.
+
+Operations supported by trusted VF:
+
+* MAC address configuration
+* Flow rule creation
+
+Operations *not* supported by trusted VF:
+
+* Firmware upgrade
+* Promiscuous mode setting
+
+Running on PF
+~~~~~~~~~~~~~
+
+Unlike the VF, when the BNXT PMD runs on a PF there are no restrictions placed
+on the features which the PF can enable or request. In a multiport NIC, each
+port will have a corresponding PF. Also, depending on the configuration of the
+NIC, there can be more than one PF associated per port.
+A sysadmin can load the kernel driver on one PF, and run BNXT PMD on the other
+PF or run the PMD on both the PFs. In such cases, the firmware picks one of the
+PFs as a master PF.
+
+Much like in the trusted VF, the DPDK application must be *trusted* and expected
+to be *well-behaved*.
+
+Features
+--------
+
+The BNXT PMD supports the following features:
+
+* Port Control
+    * Port MTU
+    * LED
+    * Flow Control and Autoneg
+* Packet Filtering
+    * Unicast MAC Filter
+    * Multicast MAC Filter
+    * VLAN Filtering
+    * Allmulticast Mode
+    * Promiscuous Mode
+* Stateless Offloads
+    * CRC Offload
+    * Checksum Offload (IPv4, TCP, and UDP)
+    * Multi-Queue (TSS and RSS)
+    * Segmentation and Reassembly (TSO and LRO)
+* VLAN insert strip
+* Stats Collection
+* Generic Flow Offload
+
+Port Control
+~~~~~~~~~~~~
+
+**Port MTU**: BNXT PMD supports the MTU (Maximum Transmission Unit) up to 9,574
+bytes:
+
+.. code-block:: console
+
+    testpmd> port config mtu (port_id) mtu_value
+    testpmd> show port info (port_id)
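+
+Equivalently, an application may set the MTU through the ethdev API. A minimal
+sketch (the 9000-byte value is only an example):
+
+.. code-block:: console
+
+    // request a jumbo MTU on the port; returns 0 on success
+    int ret = rte_eth_dev_set_mtu(port_id, 9000);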
+
+**LED**: The application turns on (or off) a port LED, typically for port
+identification:
+
+.. code-block:: console
+
+    int rte_eth_led_on (uint16_t port_id)
+    int rte_eth_led_off (uint16_t port_id)
+
+**Flow Control and Autoneg**: The application turns on (or off) flow control
+and/or auto-negotiation on a port:
+
+.. code-block:: console
+
+    testpmd> set flow_ctrl rx (on|off) (port_id)
+    testpmd> set flow_ctrl tx (on|off) (port_id)
+    testpmd> set flow_ctrl autoneg (on|off) (port_id)
+
+Note that the BNXT PMD does *not* support some options and ignores them when
+requested:
+
+* high_water
+* low_water
+* pause_time
+* mac_ctrl_frame_fwd
+* send_xon
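+
+The corresponding ethdev call is ``rte_eth_dev_flow_ctrl_set()``. A minimal
+sketch (field values are illustrative; the ignored fields listed above may be
+left at zero):
+
+.. code-block:: console
+
+    struct rte_eth_fc_conf fc_conf = { 0 };
+
+    fc_conf.mode = RTE_FC_FULL;   // enable both RX and TX pause
+    fc_conf.autoneg = 1;          // negotiate flow control with the peer
+    int ret = rte_eth_dev_flow_ctrl_set(port_id, &fc_conf);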
+
+Packet Filtering
+~~~~~~~~~~~~~~~~
+
+Applications control the packet-forwarding behaviors with packet filters.
+
+The BNXT PMD supports hardware-based packet filtering:
+
+* UC (Unicast) MAC Filters
+    * No unicast packets are forwarded to an application except the one with
+    DMAC address added to the port
+    * At initialization, the station MAC address is added to the port
+* MC (Multicast) MAC Filters
+    * No multicast packets are forwarded to an application except the one with
+    MC address added to the port
+    * When the application listens to a multicast group, it adds the MC address
+    to the port
+* VLAN Filtering Mode
+    * When enabled, no packets are forwarded to an application except the ones
+    with the VLAN tag assigned to the port
+* Allmulticast Mode
+    * When enabled, every multicast packet received on the port is forwarded to
+    the application
+    * Typical usage is routing applications
+* Promiscuous Mode
+    * When enabled, every packet received on the port is forwarded to the
+    application
+
+Unicast MAC Filter
+^^^^^^^^^^^^^^^^^^
+
+The application adds (or removes) MAC addresses to enable (or disable)
+whitelist filtering to accept packets.
+
+.. code-block:: console
+
+    testpmd> show port (port_id) macs
+    testpmd> mac_addr (add|remove) (port_id) (XX:XX:XX:XX:XX:XX)
+
+Multicast MAC Filter
+^^^^^^^^^^^^^^^^^^^^
+
+Application adds (or removes) Multicast addresses to enable (or disable)
+whitelist filtering to accept packets.
+
+.. code-block:: console
+
+    testpmd> show port (port_id) mcast_macs
+    testpmd> mcast_addr (add|remove) (port_id) (XX:XX:XX:XX:XX:XX)
+
+Note that the BNXT PMD supports up to 16 MC MAC filters. If the user adds more
+than 16 MC MACs, the BNXT PMD puts the port into the Allmulticast mode.
+
+VLAN Filtering
+^^^^^^^^^^^^^^
+
+The application enables (or disables) VLAN filtering mode. When the mode is
+enabled, no packets are forwarded to an application except ones with VLAN tag
+assigned for the application.
+
+.. code-block:: console
+
+    testpmd> vlan set filter (on|off) (port_id)
+    testpmd> rx_vlan (add|rm) (vlan_id) (port_id)
+
+Allmulticast Mode
+^^^^^^^^^^^^^^^^^
+
+The application enables (or disables) the allmulticast mode. When the mode is
+enabled, every multicast packet received is forwarded to the application.
+
+.. code-block:: console
+
+    testpmd> show port info (port_id)
+    testpmd> set allmulti (port_id) (on|off)
+
+Promiscuous Mode
+^^^^^^^^^^^^^^^^
+
+The application enables (or disables) the promiscuous mode. When the mode is
+enabled on a port, every packet received on the port is forwarded to the
+application.
+
+.. code-block:: console
+
+    testpmd> show port info (port_id)
+    testpmd> set promisc (port_id) (on|off)
+
+Stateless Offloads
+~~~~~~~~~~~~~~~~~~
+
+Like Linux, DPDK allows hardware offload of some stateless processing (such as
+checksum calculation), relieving the CPU from having to burn cycles on every
+packet.
+
+Listed below are the stateless offloads supported by the BNXT PMD:
+
+* CRC offload (for both TX and RX packets)
+* Checksum Offload (for both TX and RX packets)
+    * IPv4 Checksum Offload
+    * TCP Checksum Offload
+    * UDP Checksum Offload
+* Segmentation/Reassembly Offloads
+    * TCP Segmentation Offload (TSO)
+    * Large Receive Offload (LRO)
+* Multi-Queue
+    * Transmit Side Scaling (TSS)
+    * Receive Side Scaling (RSS)
+
+Also, the BNXT PMD supports stateless offloads on inner frames for tunneled
+packets. Listed below are the tunneling protocols supported by the BNXT PMD:
+
+* VXLAN
+* GRE
+* NVGRE
+
+Note that enabling (or disabling) stateless offloads requires applications to
+stop DPDK before changing configuration.
+
+CRC Offload
+^^^^^^^^^^^
+
+The FCS (Frame Check Sequence) in the Ethernet frame is a four-octet CRC (Cyclic
+Redundancy Check) that allows detection of corrupted data within the entire
+frame as received on the receiver side.
+
+The BNXT PMD supports hardware-based CRC offload:
+
+* TX: calculate and insert CRC
+* RX: check and remove CRC, notify the application on CRC error
+
+Note that the CRC offload is always turned on.
+
+Checksum Offload
+^^^^^^^^^^^^^^^^
+
+The application enables hardware checksum calculation for IPv4, TCP, and UDP.
+
+.. code-block:: console
+
+    testpmd> port stop (port_id)
+    testpmd> csum set (ip|tcp|udp|outer-ip|outer-udp) (sw|hw) (port_id)
+    testpmd> set fwd csum
+
+Multi-Queue
+^^^^^^^^^^^
+
+Multi-Queue, also known as TSS (Transmit Side Scaling) or RSS (Receive Side
+Scaling), is a common networking technique that allows for more efficient load
+balancing across multiple CPU cores.
+
+The application enables multiple TX and RX queues when it is started.
+
+.. code-block:: console
+
+    testpmd -l 1,3,5 --master-lcore 1 -- --txq=2 --rxq=2 --nb-cores=2
+
+**TSS**
+
+TSS distributes network transmit processing across several hardware-based
+transmit queues, allowing outbound network traffic to be processed by multiple
+CPU cores.
+
+**RSS**
+
+RSS distributes network receive processing across several hardware-based receive
+queues, allowing inbound network traffic to be processed by multiple CPU cores.
+
+The application can select the RSS mode, i.e. select the header fields that are
+included for hash calculation. The BNXT PMD supports the RSS mode of
+``default|ip|tcp|udp|none``, where default mode is L3 and L4.
+
+For tunneled packets, RSS hash is calculated over inner frame header fields.
+Applications may want to select the tunnel header fields for hash calculation;
+this will be supported in 20.08 using RSS level.
+
+.. code-block:: console
+
+    testpmd> port config (port_id) rss (all|default|ip|tcp|udp|none)
+
+    // note that the testpmd defaults the RSS mode to ip
+    // ensure to issue the command below to enable L4 header (TCP or UDP) along with IPv4 header
+    testpmd> port config (port_id) rss default
+
+    // to check the current RSS configuration, such as RSS function and RSS key
+    testpmd> show port (port_id) rss-hash key
+
+    // RSS is enabled by default. However, application can disable RSS as follows
+    testpmd> port config (port_id) rss none
+
+Application can change the flow distribution, i.e. remap the received traffic to
+CPU cores, using RSS RETA (Redirection Table).
+
+.. code-block:: console
+
+    // application queries the current RSS RETA configuration
+    testpmd> show port (port_id) rss reta size (mask0, mask1)
+
+    // application changes the RSS RETA configuration
+    testpmd> port config (port_id) rss reta (hash, queue) [, (hash, queue)]
+
+TSO
+^^^
+
+TSO (TCP Segmentation Offload), also known as LSO (Large Send Offload), enables
+the TCP/IP stack to pass to the NIC a larger datagram than the MTU (Maximum
+Transmission Unit). The NIC breaks it into multiple segments before sending it
+to the network.
+
+The BNXT PMD supports hardware-based TSO.
+
+.. code-block:: console
+
+    // display the status of TSO
+    testpmd> tso show (port_id)
+
+    // enable/disable TSO
+    testpmd> port config (port_id) tx_offload tcp_tso (on|off)
+
+    // set TSO segment size
+    testpmd> tso set segment_size (port_id)
+
+The BNXT PMD also supports hardware-based tunneled TSO.
+
+.. code-block:: console
+
+    // display the status of tunneled TSO
+    testpmd> tunnel_tso show (port_id)
+
+    // enable/disable tunneled TSO
+    testpmd> port config (port_id) tx_offload vxlan_tnl_tso|gre_tnl_tso (on|off)
+
+    // set tunneled TSO segment size
+    testpmd> tunnel_tso set segment_size (port_id)
+
+Note that the checksum offload is always assumed to be enabled for TSO.
+
+LRO
+^^^
+
+LRO (Large Receive Offload) enables the NIC to aggregate multiple incoming
+TCP/IP packets from a single stream into a larger buffer, before passing it to
+the networking stack.
+
+The BNXT PMD supports hardware-based LRO.
+
+.. code-block:: console
+
+    // display the status of LRO
+    testpmd> show port (port_id) rx_offload capabilities
+    testpmd> show port (port_id) rx_offload configuration
+
+    // enable/disable LRO
+    testpmd> port config (port_id) rx_offload tcp_lro (on|off)
+
+    // set max LRO packet (datagram) size
+    testpmd> port config (port_id) max-lro-pkt-size (max_size)
+
+The BNXT PMD also supports tunneled LRO.
+
+Some applications, such as routing, should *not* change the packet headers as
+they pass through (i.e. received from and sent back to the network). In such a
+case, GRO (Generic Receive Offload) should be used instead of LRO.
+
+VLAN Insert/Strip
+~~~~~~~~~~~~~~~~~
+
+A DPDK application can offload VLAN insert/strip to improve performance. The
+BNXT PMD supports hardware-based VLAN insert/strip offload for both single and
+double VLAN packets.
+
+
+VLAN Insert
+^^^^^^^^^^^
+
+Application configures the VLAN TPID (Tag Protocol ID). By default, the TPID is
+0x8100.
+
+.. code-block:: console
+
+    // configure outer TPID value for a port
+    testpmd> vlan set outer tpid (tpid_value) (port_id)
+
+Setting an inner TPID will be rejected, as the BNXT PMD supports inserting only
+an outer VLAN. Note that when a packet has a single VLAN, the tag is considered
+outer, i.e. the inner VLAN is relevant only when a packet is double-tagged.
+
+The BNXT PMD supports various TPID values shown below. Any other values will be
+rejected.
+
+* ``0x8100``
+* ``0x88a8``
+* ``0x9100``
+* ``0x9200``
+* ``0x9300``
+
+The BNXT PMD supports the VLAN insert offload on a per-packet basis. The
+application provides the TCI (Tag Control Info) for a packet via mbuf. In turn,
+the BNXT PMD inserts the VLAN tag (via hardware) using the provided TCI along
+with the configured TPID.
+
+.. code-block:: console
+
+    // enable VLAN insert offload
+    testpmd> port config (port_id) tx_offload vlan_insert|qinq_insert (on|off)
+
+    if (mbuf->ol_flags & PKT_TX_QINQ)       // case-1: insert VLAN to single-tagged packet
+        tci_value = mbuf->vlan_tci_outer
+    else if (mbuf->ol_flags & PKT_TX_VLAN)  // case-2: insert VLAN to untagged packet
+        tci_value = mbuf->vlan_tci
+
+VLAN Strip
+^^^^^^^^^^
+
+The application configures the per-port VLAN strip offload.
+
+.. code-block:: console
+
+    // enable VLAN strip on a port
+    testpmd> port config (port_id) rx_offload vlan_strip (on|off)
+
+    // notify application VLAN strip via mbuf
+    mbuf->ol_flags |= PKT_RX_VLAN | PKT_RX_VLAN_STRIPPED // outer VLAN is found and stripped
+    mbuf->vlan_tci = tci_value                      // TCI of the stripped VLAN
+
+Time Synchronization
+~~~~~~~~~~~~~~~~~~~~
+
+System operators may run a PTP (Precision Time Protocol) client application to
+synchronize the time on the NIC (and optionally, on the system) to a PTP master.
+
+The BNXT PMD allows a PTP client application to communicate with a PTP master
+clock using DPDK IEEE1588 APIs. Note that the PTP client application needs to
+run on PF and vector mode needs to be disabled.
+
+For the PTP time synchronization support, the BNXT PMD must be compiled with
+``CONFIG_RTE_LIBRTE_IEEE1588=y`` (this compilation flag is currently pending).
+
+.. code-block:: console
+
+    testpmd> set fwd ieee1588 // enable IEEE 1588 mode
+
+When enabled, the BNXT PMD configures hardware to insert IEEE 1588 timestamps to
+the outgoing PTP packets and reports IEEE 1588 timestamps from the incoming PTP
+packets to application via mbuf.
+
+.. code-block:: console
+
+    // RX packet completion will indicate whether the packet is PTP
+    mbuf->ol_flags |= PKT_RX_IEEE1588_PTP
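+
+A PTP client would typically use the ethdev timesync API. A minimal sketch of
+the calls involved (error handling omitted):
+
+.. code-block:: console
+
+    struct timespec ts;
+
+    rte_eth_timesync_enable(port_id);            // enable timestamping
+    rte_eth_timesync_read_rx_timestamp(port_id, &ts, 0); // RX PTP timestamp
+    rte_eth_timesync_read_tx_timestamp(port_id, &ts);    // TX PTP timestamp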
+
+Statistics Collection
+~~~~~~~~~~~~~~~~~~~~~
+
+In Linux, *ethtool -S* can be used to query the NIC stats. DPDK provides
+similar functionality via rte_eth_stats and rte_eth_xstats.
+
+The BNXT PMD supports both basic and extended stats collection:
+
+* Basic stats
+* Extended stats
+
+Basic Stats
+^^^^^^^^^^^
+
+The application collects per-port and per-queue stats using rte_eth_stats APIs.
+
+.. code-block:: console
+
+    testpmd> show port stats (port_id)
+
+Basic stats include:
+
+* ipackets
+* ibytes
+* opackets
+* obytes
+* imissed
+* ierrors
+* oerrors
+
+By default, per-queue stats for 16 queues are supported. For more than 16
+queues, BNXT PMD should be compiled with ``CONFIG_RTE_ETHDEV_QUEUE_STAT_CNTRS``
+set to the desired number of queues.
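+
+A minimal sketch of retrieving the same counters through the ethdev API
+(requires <stdio.h> and <inttypes.h> for the print):
+
+.. code-block:: console
+
+    struct rte_eth_stats stats;
+
+    if (rte_eth_stats_get(port_id, &stats) == 0)
+        printf("ipackets=%" PRIu64 " ierrors=%" PRIu64 "\n",
+               stats.ipackets, stats.ierrors);
+
+    rte_eth_stats_reset(port_id); // optionally clear the counters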
+
+Extended Stats
+^^^^^^^^^^^^^^
+
+Unlike basic stats, the extended stats are vendor-specific, i.e. each vendor
+provides its own set of counters.
+
+The BNXT PMD provides a rich set of counters, including per-flow counters,
+per-cos counters, per-priority counters, etc.
+
+.. code-block:: console
+
+    testpmd> show port xstats (port_id)
+
+Shown below is the elaborated sequence to retrieve extended stats:
+
+.. code-block:: console
+
+    // application queries the number of xstats
+    len = rte_eth_xstats_get(port_id, NULL, 0);
+    // BNXT PMD returns the size of xstats array (i.e. the number of entries)
+    // BNXT PMD returns 0, if the feature is compiled out or disabled
+
+    // application allocates memory for xstats
+    struct rte_eth_xstat_name *names;  // each name is 64 characters or less
+    struct rte_eth_xstat *xstats;
+    names = calloc(len, sizeof(*names));
+    xstats = calloc(len, sizeof(*xstats));
+
+    // application retrieves xstats // names and values
+    ret = rte_eth_xstats_get_names(port_id, names, len);
+    ret = rte_eth_xstats_get(port_id, xstats, len);
+
+    // application checks the xstats
+    // application may repeat the below:
+    ret = rte_eth_xstats_reset(port_id); // reset the xstats
+
+    // reset can be skipped, if application wants to see accumulated stats
+    // run traffic
+    // probably stop the traffic
+    // retrieve xstats // no need to retrieve xstats names again
+    // check xstats
+
+Generic Flow Offload
+~~~~~~~~~~~~~~~~~~~~
+
+Applications can benefit by offloading all or part of flow processing to
+hardware. For example, applications can offload packet classification only
+(partial offload) or whole match-action (full offload).
+
+DPDK offers the Generic Flow API (rte_flow API) to configure hardware to
+perform flow processing.
+
+Listed below are the rte_flow APIs BNXT PMD supports:
+
+* rte_flow_validate
+* rte_flow_create
+* rte_flow_destroy
+* rte_flow_flush
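+
+Shown below is a minimal rte_flow sketch that steers packets with a given
+destination MAC address to RX queue 1 (the MAC address and queue index are
+illustrative only):
+
+.. code-block:: console
+
+    struct rte_flow_attr attr = { .ingress = 1 };
+    struct rte_flow_item_eth eth_spec = {
+        .dst.addr_bytes = { 0x00, 0x01, 0x02, 0x03, 0x00, 0x01 } };
+    struct rte_flow_item_eth eth_mask = {
+        .dst.addr_bytes = { 0xff, 0xff, 0xff, 0xff, 0xff, 0xff } };
+    struct rte_flow_item pattern[] = {
+        { .type = RTE_FLOW_ITEM_TYPE_ETH, .spec = &eth_spec, .mask = &eth_mask },
+        { .type = RTE_FLOW_ITEM_TYPE_END },
+    };
+    struct rte_flow_action_queue queue = { .index = 1 };
+    struct rte_flow_action actions[] = {
+        { .type = RTE_FLOW_ACTION_TYPE_QUEUE, .conf = &queue },
+        { .type = RTE_FLOW_ACTION_TYPE_END },
+    };
+    struct rte_flow_error error;
+    struct rte_flow *flow = NULL;
+
+    // validate first, then program the flow into hardware
+    if (rte_flow_validate(port_id, &attr, pattern, actions, &error) == 0)
+        flow = rte_flow_create(port_id, &attr, pattern, actions, &error);
+
+    // clean up with rte_flow_destroy(port_id, flow, &error) or
+    // rte_flow_flush(port_id, &error)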
+
+Host Based Flow Table Management
+^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
+
+Starting with 20.05, the BNXT PMD supports host-based flow table management.
+This is a new mechanism that should allow higher flow scalability than what is
+currently supported. This new approach also defines a new rte_flow parser and
+mapper, which currently support basic packet classification in the receive path.
+
+The feature uses a newly implemented control-plane firmware interface which
+optimizes flow insertions and deletions.
+
+This is a tech preview feature, and is disabled by default. It can be enabled
+using bnxt devargs, for example: ``-w 0000:0d:00.0,host-based-truflow=1``.
+
+Application Support
+-------------------
+
+Firmware
+~~~~~~~~
+
+The BNXT PMD allows the application to retrieve the firmware version.
+
+.. code-block:: console
+
+    testpmd> show port info (port_id)
+
+Note that the applications cannot update the firmware using BNXT PMD.
+
+Multiple Processes
+~~~~~~~~~~~~~~~~~~
+
+When two or more DPDK applications (e.g., testpmd and dpdk-pdump) share a single
+instance of DPDK, the BNXT PMD supports a single primary application and one or
+more secondary applications. Note that the DPDK-layer (not the PMD) ensures
+there is only one primary application.
+
+There are two modes:
+
+Manual mode
+
+* Application notifies whether it is primary or secondary using *proc-type* flag
+* 1st process should be spawned with ``--proc-type=primary``
+* All subsequent processes should be spawned with ``--proc-type=secondary``
+
+Auto detection mode
+
+* Application is using ``proc-type=auto`` flag
+* A process is spawned as a secondary if a primary is already running
+
+The BNXT PMD uses this information to skip device initialization, i.e. it
+performs device initialization only when brought up by the primary application.
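+
+A minimal sketch of the manual mode (the core lists and pdump arguments shown
+are illustrative only):
+
+.. code-block:: console
+
+    // primary application
+    testpmd -l 0-3 -n 4 --proc-type=primary -- -i
+
+    // secondary application attaching to the same DPDK instance
+    dpdk-pdump -l 4 -n 4 --proc-type=secondary -- \
+        --pdump 'port=0,queue=*,rx-dev=/tmp/rx.pcap'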
+
+Runtime Queue Setup
+~~~~~~~~~~~~~~~~~~~
+
+Typically, a DPDK application allocates TX and RX queues statically: i.e. queues
+are allocated at start. However, an application may want to increase (or
+decrease) the number of queues dynamically for various reasons, e.g. power
+savings.
+
+The BNXT PMD allows applications to increase or decrease the number of queues
+at runtime.
+
+.. code-block:: console
+
+    testpmd> port config all (rxq|txq) (num_queues)
+
+Note that a DPDK application must allocate default queues (one for TX and one
+for RX at minimum) at initialization.
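+
+At the API level, a queue can be set up and started after
+``rte_eth_dev_start()``. A minimal sketch (``new_qid``, the descriptor count
+and ``mb_pool`` are illustrative):
+
+.. code-block:: console
+
+    // the port was configured with enough queues and already started
+    int ret = rte_eth_rx_queue_setup(port_id, new_qid, 512,
+                                     rte_eth_dev_socket_id(port_id),
+                                     NULL, mb_pool);
+    if (ret == 0)
+        ret = rte_eth_dev_rx_queue_start(port_id, new_qid);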
+
+Descriptor Status
+~~~~~~~~~~~~~~~~~
+
+Applications may use the descriptor status for various reasons, e.g. for power
+savings. For example, an application may stop polling and change to interrupt
+mode when the descriptor status shows no packets to service for a while.
+
+The BNXT PMD allows the application to retrieve both TX and RX descriptor
+status.
+
+.. code-block:: console
+
+    testpmd> show port (port_id) (rxq|txq) (queue_id) desc (desc_id) status
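+
+A minimal sketch of the equivalent ethdev calls (``offset`` is the descriptor
+position to probe, relative to the queue head):
+
+.. code-block:: console
+
+    int rx_state = rte_eth_rx_descriptor_status(port_id, queue_id, offset);
+    // RTE_ETH_RX_DESC_AVAIL, RTE_ETH_RX_DESC_DONE or RTE_ETH_RX_DESC_UNAVAIL
+
+    int tx_state = rte_eth_tx_descriptor_status(port_id, queue_id, offset);
+    // RTE_ETH_TX_DESC_FULL, RTE_ETH_TX_DESC_DONE or RTE_ETH_TX_DESC_UNAVAIL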
+
+Bonding
+~~~~~~~
+
+DPDK implements a light-weight library to allow PMDs to be bonded together and
+provide a single logical PMD to the application.
+
+.. code-block:: console
+
+    testpmd -l 0-3 -n4 --vdev 'net_bonding0,mode=0,slave=<PCI B:D.F device 1>,slave=<PCI B:D.F device 2>,mac=XX:XX:XX:XX:XX:XX' -- --socket-num=1 -i --port-topology=chained
+    (ex) testpmd -l 1,3,5,7,9 -n4 --vdev 'net_bonding0,mode=0,slave=0000:82:00.0,slave=0000:82:00.1,mac=00:1e:67:1d:fd:1d' -- --socket-num=1 -i --port-topology=chained
+
+Vector Processing
 -----------------
 
-The BNXT PMD includes support for the following features:
-
-   * Multiple transmit and receive queues
-   * Queue start/stop
-   * RSS hash
-   * RSS key configuration
-   * RSS reta configuration
-   * VMDq
-   * Packet type parsing
-   * Configurable RX CRC stripping
-   * L3/L4 checksum offload
-   * LRO offload
-   * TSO offload
-   * VLAN offload
-   * SR-IOV VF
-   * Basic and extended port statistics
-   * Link state reporting
-   * Flow control
-   * Ethertype filtering
-   * N-tuple filtering
-   * Promiscuous mode
-   * Unicast and multicast MAC filtering
-   * Scatter/gather transmit and receive
-   * Jumbo frames
-   * Vector PMD
-
-BNXT Vector PMD
----------------
-
-The BNXT PMD includes support for SSE vector mode on x86 platforms. Vector
-provides significantly improved performance over the base implementation,
-however it does not support all of the features that are supported by the
-base (non-vector) implementation. Vector mode will be selected and enabled
-automatically when the port is started if allowed by the current configuration.
-
-RX Requirements for Vector Mode
-^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
-
-Vector mode receive will be enabled if the following constrainsts are met:
-   * Packets must fit within a single mbuf (no scatter RX).
-   * LRO offload must be disabled.
-
-TX Requirements for Vector Mode
-^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
-
-Vector mode transmit will be enabled if the following constrainsts are met:
-   * Packets must be contained within a single mbuf (no gather TX).
-   * All transmit offloads other than VLAN insertion must be disabled.
-
-BNXT PMD Supported Chipsets and Adapters
-----------------------------------------
-
-Chipsets and adapters supported by the bnxt PMD include:
-
-  * **Broadcom BCM5730X NetXtreme-C® Family of Ethernet Network Controllers**
-
-       * M150c - Single-port 40/50 Gigabit Ethernet Adapter
-       * P150c - Single-port 40/50 Gigabit Ethernet Adapter
-       * P225c - Dual-port 10/25 Gigabit Ethernet Adapter
-
-  * **Broadcom BCM574XX/BCM575XX NetXtreme-E® Family of Ethernet Network Controllers**
-
-       * M125P - Single-port OCP 2.0 10/25 Gigabit Ethernet Adapter
-       * M150P - Single-port OCP 2.0 50 Gigabit Ethernet Adapter
-       * M150PM - Single-port OCP 2.0 Multi-Host 50 Gigabit Ethernet Adapter
-       * M210P - Dual-port OCP 2.0 10 Gigabit Ethernet Adapter
-       * M210TP - Dual-port OCP 2.0 10 Gigabit Ethernet Adapter
-       * M11000G - Single-port OCP 2.0 10/25/50/100 Gigabit Ethernet Adapter
-       * N150G - Single-port OCP 3.0 50 Gigabit Ethernet Adapter
-       * M225P - Dual-port OCP 2.0 10/25 Gigabit Ethernet Adapter
-       * N210P - Dual-port OCP 3.0 10 Gigabit Ethernet Adapter
-       * N210TP - Dual-port OCP 3.0 10 Gigabit Ethernet Adapter
-       * N225P - Dual-port OCP 3.0 10/25 Gigabit Ethernet Adapter
-       * N250G - Dual-port OCP 3.0 50 Gigabit Ethernet Adapter
-       * N410SG - Quad-port OCP 3.0 10 Gigabit Ethernet Adapter
-       * N410SGBT - Quad-port OCP 3.0 10 Gigabit Ethernet Adapter
-       * N425G - Quad-port OCP 3.0 10/25 Gigabit Ethernet Adapter
-       * N1100G - Single-port OCP 3.0 10/25/50/100 Gigabit Ethernet Adapter
-       * N2100G - Dual-port OCP 3.0 10/25/50/100 Gigabit Ethernet Adapter
-       * N2200G - Dual-port OCP 3.0 10/25/50/100/200 Gigabit Ethernet Adapter
-       * P150P - Single-port 50 Gigabit Ethernet Adapter
-       * P210P - Dual-port 10 Gigabit Ethernet Adapter
-       * P210TP - Dual-port 10 Gigabit Ethernet Adapter
-       * P225P - Dual-port 10/25 Gigabit Ethernet Adapter
-       * P410SG - Quad-port 10 Gigabit Ethernet Adapter
-       * P410SGBT - Quad-port 10 Gigabit Ethernet Adapter
-       * P425G - Quad-port 10/25 Gigabit Ethernet Adapter
-       * P1100G - Single-port 10/25/50/100 Gigabit Ethernet Adapter
-       * P2100G - Dual-port 10/25/50/100 Gigabit Ethernet Adapter
-       * P2200G - Dual-port 10/25/50/100/200 Gigabit Ethernet Adapter
-
-    Information about Ethernet adapters in the NetXtreme family of
-    adapters can be found in the `NetXtreme® Brand section
-    <https://www.broadcom.com/products/ethernet-connectivity/network-adapters/>`_
-    of the `Broadcom website <http://www.broadcom.com/>`_.
-
-  * **Broadcom BCM588XX Stingray Family of SmartNIC Adapters**
-
-       * PS410T - Quad-port 10 Gigabit Ethernet SmartNIC
-       * PS225 - Dual-port 25 Gigabit Ethernet SmartNIC
-       * PS250 - Dual-Port 50 Gigabit Ethernet SmartNIC
-
-    Information about the Stingray family of SmartNIC adapters can be found in the
-    `Stingray® Brand section
-    <https://www.broadcom.com/products/ethernet-connectivity/smartnic/>`_
-    of the `Broadcom website <http://www.broadcom.com/>`_.
-
-  * **Broadcom StrataGX® BCM5871X Series of Communucations Processors**
-
-    These ARM based processors target a broad range of networking applications
-    including virtual CPE (vCPE) and NFV appliances, 10G service routers and
-    gateways, control plane processing for Ethernet switches and network
-    attached storage (NAS).
-
-    Information about the StrataGX family of adapters can be found in the
-    `StrataGX® BCM58712
-    <http://www.broadcom.com/products/embedded-and-networking-processors/communications/bcm58712>`_
-    and `StrataGX® BCM58713
-    <http://www.broadcom.com/products/embedded-and-networking-processors/communications/bcm58713>`_
-    sections of the `Broadcom website <http://www.broadcom.com/>`_.
+Vector processing provides significantly improved performance over scalar
+processing.
+
+The BNXT PMD supports the vector processing using SSE (Streaming SIMD
+Extensions) instructions on x86 platforms. The BNXT vPMD (vector mode PMD) is
+currently limited to Intel/AMD CPU architecture. Support for ARM is *not*
+currently implemented.
+
+This improved performance comes from several optimizations:
+
+* Batching
+    * TX: processing completions in bulk
+    * RX: allocating mbufs in bulk
+* Chained mbufs are *not* supported, i.e. a packet should fit in a single mbuf
+* Some stateless offloads are *not* supported with vector processing
+    * TX: no offloads will be supported
+    * RX: reduced RX offloads (listed below) will be supported::
+
+       DEV_RX_OFFLOAD_VLAN_STRIP
+       DEV_RX_OFFLOAD_KEEP_CRC
+       DEV_RX_OFFLOAD_JUMBO_FRAME
+       DEV_RX_OFFLOAD_IPV4_CKSUM
+       DEV_RX_OFFLOAD_UDP_CKSUM
+       DEV_RX_OFFLOAD_TCP_CKSUM
+       DEV_RX_OFFLOAD_OUTER_IPV4_CKSUM
+       DEV_RX_OFFLOAD_RSS_HASH
+       DEV_RX_OFFLOAD_VLAN_FILTER
+
+The BNXT Vector PMD is enabled in DPDK builds by default.
+
+However, a decision to enable vector mode will be made when the port transitions
+from stopped to started. Any TX offloads, or any RX offloads other than those
+listed above, will disable the vector mode.
+Offload configuration changes that impact vector mode must be made when the port
+is stopped.
+
+Note that TX (or RX) vector mode can be enabled independently from RX (or TX)
+vector mode.
+
+Appendix
+--------
+
+Supported Chipsets and Adapters
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+
+BCM5730x NetXtreme-C® Family of Ethernet Network Controllers
+^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
+
+Information about Ethernet adapters in the NetXtreme family of adapters can be
+found in the `NetXtreme® Brand section <https://www.broadcom.com/products/ethernet-connectivity/network-adapters/>`_ of the `Broadcom website <http://www.broadcom.com/>`_.
+
+* ``M150c ... Single-port 40/50 Gigabit Ethernet Adapter``
+* ``P150c ... Single-port 40/50 Gigabit Ethernet Adapter``
+* ``P225c ... Dual-port 10/25 Gigabit Ethernet Adapter``
+
+BCM574xx/575xx NetXtreme-E® Family of Ethernet Network Controllers
+^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
+
+Information about Ethernet adapters in the NetXtreme family of adapters can be
+found in the `NetXtreme® Brand section <https://www.broadcom.com/products/ethernet-connectivity/network-adapters/>`_ of the `Broadcom website <http://www.broadcom.com/>`_.
+
+* ``M125P .... Single-port OCP 2.0 10/25 Gigabit Ethernet Adapter``
+* ``M150P .... Single-port OCP 2.0 50 Gigabit Ethernet Adapter``
+* ``M150PM ... Single-port OCP 2.0 Multi-Host 50 Gigabit Ethernet Adapter``
+* ``M210P .... Dual-port OCP 2.0 10 Gigabit Ethernet Adapter``
+* ``M210TP ... Dual-port OCP 2.0 10 Gigabit Ethernet Adapter``
+* ``M1100G ... Single-port OCP 2.0 10/25/50/100 Gigabit Ethernet Adapter``
+* ``N150G .... Single-port OCP 3.0 50 Gigabit Ethernet Adapter``
+* ``M225P .... Dual-port OCP 2.0 10/25 Gigabit Ethernet Adapter``
+* ``N210P .... Dual-port OCP 3.0 10 Gigabit Ethernet Adapter``
+* ``N210TP ... Dual-port OCP 3.0 10 Gigabit Ethernet Adapter``
+* ``N225P .... Dual-port OCP 3.0 10/25 Gigabit Ethernet Adapter``
+* ``N250G .... Dual-port OCP 3.0 50 Gigabit Ethernet Adapter``
+* ``N410SG ... Quad-port OCP 3.0 10 Gigabit Ethernet Adapter``
+* ``N410SGBT . Quad-port OCP 3.0 10 Gigabit Ethernet Adapter``
+* ``N425G .... Quad-port OCP 3.0 10/25 Gigabit Ethernet Adapter``
+* ``N1100G ... Single-port OCP 3.0 10/25/50/100 Gigabit Ethernet Adapter``
+* ``N2100G ... Dual-port OCP 3.0 10/25/50/100 Gigabit Ethernet Adapter``
+* ``N2200G ... Dual-port OCP 3.0 10/25/50/100/200 Gigabit Ethernet Adapter``
+* ``P150P .... Single-port 50 Gigabit Ethernet Adapter``
+* ``P210P .... Dual-port 10 Gigabit Ethernet Adapter``
+* ``P210TP ... Dual-port 10 Gigabit Ethernet Adapter``
+* ``P225P .... Dual-port 10/25 Gigabit Ethernet Adapter``
+* ``P410SG ... Quad-port 10 Gigabit Ethernet Adapter``
+* ``P410SGBT . Quad-port 10 Gigabit Ethernet Adapter``
+* ``P425G .... Quad-port 10/25 Gigabit Ethernet Adapter``
+* ``P1100G ... Single-port 10/25/50/100 Gigabit Ethernet Adapter``
+* ``P2100G ... Dual-port 10/25/50/100 Gigabit Ethernet Adapter``
+* ``P2200G ... Dual-port 10/25/50/100/200 Gigabit Ethernet Adapter``
+
+BCM588xx NetXtreme-S® Family of SmartNIC Network Controllers
+^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
+
+Information about the Stingray family of SmartNIC adapters can be found in the
+`Stingray® Brand section <https://www.broadcom.com/products/ethernet-connectivity/smartnic/>`_ of the `Broadcom website <http://www.broadcom.com/>`_.
+
+* ``PS225 ... Dual-port 25 Gigabit Ethernet SmartNIC``
+
+BCM5873x StrataGX® Family of Communications Processors
+^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
+
+These ARM-based processors target a broad range of networking applications,
+including virtual CPE (vCPE) and NFV appliances, 10G service routers and
+gateways, control plane processing for Ethernet switches, and network-attached
+storage (NAS).
+
+* ``StrataGX BCM58732 ... Octal-Core 3.0GHz 64-bit ARM®v8 Cortex®-A72 based SoC``
-- 
2.21.1 (Apple Git-122.3)


^ permalink raw reply	[flat|nested] 16+ messages in thread

* Re: [dpdk-dev] [PATCH v5] doc: update bnxt guide
  2020-05-20 20:03             ` [dpdk-dev] [PATCH v5] " Ajit Khaparde
@ 2020-05-21  1:16               ` Ajit Khaparde
  2020-05-21 13:51                 ` Ferruh Yigit
  0 siblings, 1 reply; 16+ messages in thread
From: Ajit Khaparde @ 2020-05-21  1:16 UTC (permalink / raw)
  To: dpdk-dev; +Cc: Ferruh Yigit, JP Lee, Kovacevic Marko

On Wed, May 20, 2020 at 1:04 PM Ajit Khaparde <ajit.khaparde@broadcom.com>
wrote:

>     - Update list of supported adapters.
>     - Update list of supported features.
>     - Add some details to describe the features.
>     - Remove obsolete limitations.
>     - Fix and update links.
>
> Signed-off-by: JP Lee <jongpil.lee@broadcom.com>
> Signed-off-by: Ajit Khaparde <ajit.khaparde@broadcom.com>
> Acked-by: Kovacevic Marko <marko.kovacevic@intel.com>
> ---
> v1->v2: Some lines were too long in v1. Made then shorter. Checked for
> typos.
> v2->v3: Removed list of extended stats.
> v3->v4: Removed an irrelevant line.
> v4->v5: Shorten the long lines further.
>

Applied to dpdk-next-net-brcm. Thanks


>
>

^ permalink raw reply	[flat|nested] 16+ messages in thread

* Re: [dpdk-dev] [PATCH v5] doc: update bnxt guide
  2020-05-21  1:16               ` Ajit Khaparde
@ 2020-05-21 13:51                 ` Ferruh Yigit
  2020-05-23  3:53                   ` Ajit Khaparde
  0 siblings, 1 reply; 16+ messages in thread
From: Ferruh Yigit @ 2020-05-21 13:51 UTC (permalink / raw)
  To: Ajit Khaparde, dpdk-dev; +Cc: JP Lee, Kovacevic Marko

On 5/21/2020 2:16 AM, Ajit Khaparde wrote:
> On Wed, May 20, 2020 at 1:04 PM Ajit Khaparde <ajit.khaparde@broadcom.com>
> wrote:
> 
>>     - Update list of supported adapters.
>>     - Update list of supported features.
>>     - Add some details to describe the features.
>>     - Remove obsolete limitations.
>>     - Fix and update links.
>>
>> Signed-off-by: JP Lee <jongpil.lee@broadcom.com>
>> Signed-off-by: Ajit Khaparde <ajit.khaparde@broadcom.com>
>> Acked-by: Kovacevic Marko <marko.kovacevic@intel.com>
>> ---
>> v1->v2: Some lines were too long in v1. Made then shorter. Checked for
>> typos.
>> v2->v3: Removed list of extended stats.
>> v3->v4: Removed an irrelevant line.
>> v4->v5: Shorten the long lines further.
>>
> 
> Applied to dpdk-next-net-brcm. Thanks

Fixed build warnings [1] as following [2] while merging, please double check the
changes.

[1]
.../doc/guides/nics/bnxt.rst:127: WARNING: Bullet list ends without a blank
line; unexpected unindent.
.../doc/guides/nics/bnxt.rst:258: WARNING: Bullet list ends without a blank
line; unexpected unindent.
.../doc/guides/nics/bnxt.rst:262: WARNING: Bullet list ends without a blank
line; unexpected unindent.
.../doc/guides/nics/bnxt.rst:267: WARNING: Bullet list ends without a blank
line; unexpected unindent.
.../doc/guides/nics/bnxt.rst:270: WARNING: Bullet list ends without a blank
line; unexpected unindent.
.../doc/guides/nics/bnxt.rst:274: WARNING: Bullet list ends without a blank
line; unexpected unindent.


[2]
diff --git a/doc/guides/nics/bnxt.rst b/doc/guides/nics/bnxt.rst
index a0747d7e33..ed650187e0 100644
--- a/doc/guides/nics/bnxt.rst
+++ b/doc/guides/nics/bnxt.rst
@@ -124,9 +124,9 @@ Refer to https://doc.dpdk.org/guides/howto/flow_bifurcation.html
 Benefits of the flow bifurcation include:

 * Better performance with less CPU overhead, as user application can directly
-access the NIC for data path
+  access the NIC for data path
 * NIC is still being controlled by the kernel, as control traffic is forwarded
-only to the kernel driver
+  only to the kernel driver
 * Control commands, e.g. ethtool, will work as usual

 Running on a VF, the BXNT PMD supports the flow bifurcation with a combination
@@ -255,23 +255,23 @@ The BNXT PMD supports hardware-based packet filtering:

 * UC (Unicast) MAC Filters
     * No unicast packets are forwarded to an application except the one with
-    DMAC address added to the port
+      DMAC address added to the port
     * At initialization, the station MAC address is added to the port
 * MC (Multicast) MAC Filters
     * No multicast packets are forwarded to an application except the one with
-    MC address added to the port
+      MC address added to the port
     * When the application listens to a multicast group, it adds the MC address
-    to the port
+      to the port
 * VLAN Filtering Mode
     * When enabled, no packets are forwarded to an application except the ones
-    with the VLAN tag assigned to the port
+      with the VLAN tag assigned to the port
 * Allmulticast Mode
     * When enabled, every multicast packet received on the port is forwarded to
-    the application
+      the application
     * Typical usage is routing applications
 * Promiscuous Mode
     * When enabled, every packet received on the port is forwarded to the
-    application
+      application

 Unicast MAC Filter
 ^^^^^^^^^^^^^^^^^^

^ permalink raw reply	[flat|nested] 16+ messages in thread

* Re: [dpdk-dev] [PATCH v5] doc: update bnxt guide
  2020-05-21 13:51                 ` Ferruh Yigit
@ 2020-05-23  3:53                   ` Ajit Khaparde
  0 siblings, 0 replies; 16+ messages in thread
From: Ajit Khaparde @ 2020-05-23  3:53 UTC (permalink / raw)
  To: Ferruh Yigit; +Cc: dpdk-dev, JP Lee, Kovacevic Marko

On Thu, May 21, 2020 at 6:51 AM Ferruh Yigit <ferruh.yigit@intel.com> wrote:

> On 5/21/2020 2:16 AM, Ajit Khaparde wrote:
> > On Wed, May 20, 2020 at 1:04 PM Ajit Khaparde <
> ajit.khaparde@broadcom.com>
> > wrote:
> >
> >>     - Update list of supported adapters.
> >>     - Update list of supported features.
> >>     - Add some details to describe the features.
> >>     - Remove obsolete limitations.
> >>     - Fix and update links.
> >>
> >> Signed-off-by: JP Lee <jongpil.lee@broadcom.com>
> >> Signed-off-by: Ajit Khaparde <ajit.khaparde@broadcom.com>
> >> Acked-by: Kovacevic Marko <marko.kovacevic@intel.com>
> >> ---
> >> v1->v2: Some lines were too long in v1. Made then shorter. Checked for
> >> typos.
> >> v2->v3: Removed list of extended stats.
> >> v3->v4: Removed an irrelevant line.
> >> v4->v5: Shorten the long lines further.
> >>
> >
> > Applied to dpdk-next-net-brcm. Thanks
>
Ferruh, This looks good. Thanks

^ permalink raw reply	[flat|nested] 16+ messages in thread

end of thread, other threads:[~2020-05-23  3:53 UTC | newest]

Thread overview: 16+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2020-05-01 22:03 [dpdk-dev] [PATCH] doc: update bnxt guide Ajit Khaparde
2020-05-05 16:58 ` Ajit Khaparde
2020-05-06  8:57 ` Ferruh Yigit
2020-05-06 16:09   ` Ajit Khaparde
2020-05-13  9:08     ` Ferruh Yigit
2020-05-14 10:39 ` Kovacevic, Marko
2020-05-15 21:56   ` [dpdk-dev] [PATCH v2] " Ajit Khaparde
2020-05-18  9:34     ` Ferruh Yigit
2020-05-18 17:38       ` [dpdk-dev] [PATCH v3] " Ajit Khaparde
2020-05-18 17:58       ` [dpdk-dev] [PATCH v4] " Ajit Khaparde
2020-05-18 22:04         ` Ajit Khaparde
2020-05-19  8:15           ` Ferruh Yigit
2020-05-20 20:03             ` [dpdk-dev] [PATCH v5] " Ajit Khaparde
2020-05-21  1:16               ` Ajit Khaparde
2020-05-21 13:51                 ` Ferruh Yigit
2020-05-23  3:53                   ` Ajit Khaparde

This is a public inbox, see mirroring instructions
for how to clone and mirror all data and code used for this inbox;
as well as URLs for NNTP newsgroup(s).