DPDK patches and discussions
* [dpdk-dev] [PATCH 0/4] ethdev new offloads API
@ 2017-09-04  7:12 Shahaf Shuler
From: Shahaf Shuler @ 2017-09-04  7:12 UTC (permalink / raw)
  To: thomas; +Cc: dev

Tx offloads configuration is per queue. Tx offloads are enabled by default
and can be disabled using the ETH_TXQ_FLAGS_NO* flags.
This behaviour is not consistent with the Rx side, where the Rx offloads
configuration is per port. Rx offloads are disabled by default and enabled
according to bit fields in the rte_eth_rxmode structure.
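
For reference, a minimal sketch of the current, inconsistent configuration
model (all names below are from the existing API):

#include <rte_ethdev.h>

/* Rx offloads: per-port bit-fields, disabled unless set. */
struct rte_eth_conf port_conf = {
	.rxmode = {
		.hw_ip_checksum = 1,
		.hw_vlan_strip = 1,
	},
};

/* Tx offloads: per-queue flags, enabled unless explicitly disabled. */
struct rte_eth_txconf txq_conf = {
	.txq_flags = ETH_TXQ_FLAGS_NOXSUMS | ETH_TXQ_FLAGS_NOVLANOFFL,
};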

Moreover, considering that more Tx and Rx offloads will be added
over time, the cost of managing them all inside the PMD will be tremendous,
as the PMD will need to check for a match against the entire offload set
for each mbuf it handles.
In addition, with the current approach, each Rx offload added breaks ABI
compatibility, as it requires adding entries to existing bit-fields.
 
This series addresses the above issues by defining a new offloads API.
With the new API, Tx and Rx offloads configuration is per queue.
The offloads are disabled by default. Each offload can be enabled or
disabled using the existing DEV_TX_OFFLOAD_* or DEV_RX_OFFLOAD_* flags.
Such an API makes it easy to add or remove offloads without breaking
ABI compatibility.
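
For illustration, here is a minimal sketch (not part of this series) of how
an application could enable per-queue Rx offloads under the new API. The
per-queue "offloads" bit-field is an assumption based on patch 2/4; error
handling is omitted.

#include <rte_ethdev.h>

static int
setup_rxq_with_offloads(uint8_t port_id, uint16_t queue_id,
			struct rte_mempool *mb_pool)
{
	struct rte_eth_dev_info dev_info;
	struct rte_eth_rxq_conf rxq_conf;

	rte_eth_dev_info_get(port_id, &dev_info);
	rxq_conf = dev_info.default_rxconf;
	/* Enable only the offloads this queue needs; all others stay off.
	 * The "offloads" field is assumed from patch 2/4 of this series.
	 */
	rxq_conf.offloads = DEV_RX_OFFLOAD_IPV4_CKSUM |
			    DEV_RX_OFFLOAD_VLAN_STRIP;

	return rte_eth_rx_queue_setup(port_id, queue_id, 512,
				      rte_eth_dev_socket_id(port_id),
				      &rxq_conf, mb_pool);
}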

The new API does not have an equivalent for the following Tx flags:

* ETH_TXQ_FLAGS_NOREFCOUNT
* ETH_TXQ_FLAGS_NOMULTMEMP

The reason is that those flags do not manage offloads; rather, they are a
guarantee from the application about the way it uses mbufs, and therefore
cannot be represented as part of DEV_TX_OFFLOAD_*.
Such flags are useful only for benchmarks, and can therefore present
unrealistic performance numbers to DPDK customers who use simple
benchmarks for evaluation.
The work being done in this series is leveraged to clean up those flags.

In order to provide a smooth transition between the APIs, the following
actions were taken:
*  The old offloads API is kept for the time being.
*  New capabilities were added so a PMD can advertise that it has moved to
   the new offloads API.
*  Helper functions which convert from the old API to the new one were
   added to ethdev, enabling a PMD to support only one of the APIs; a
   sketch follows below.
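
A hypothetical sketch of such a conversion helper (the actual helpers and
their signatures are the ones added in patch 4/4) could look like:

#include <rte_ethdev.h>

/* Hypothetical example: map the old ETH_TXQ_FLAGS_NO* semantics onto the
 * new per-queue Tx offload flags. A clear bit in txq_flags means the
 * offload was enabled under the old API.
 */
static uint64_t
txq_flags_to_offloads(uint32_t txq_flags)
{
	uint64_t offloads = 0;

	if (!(txq_flags & ETH_TXQ_FLAGS_NOVLANOFFL))
		offloads |= DEV_TX_OFFLOAD_VLAN_INSERT;
	if (!(txq_flags & ETH_TXQ_FLAGS_NOXSUMUDP))
		offloads |= DEV_TX_OFFLOAD_UDP_CKSUM;
	if (!(txq_flags & ETH_TXQ_FLAGS_NOXSUMTCP))
		offloads |= DEV_TX_OFFLOAD_TCP_CKSUM;
	if (!(txq_flags & ETH_TXQ_FLAGS_NOXSUMSCTP))
		offloads |= DEV_TX_OFFLOAD_SCTP_CKSUM;

	return offloads;
}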

Per the discussion on the RFC of this series [1], the integration plan
decided on is to do the transition in two phases:
* The ethdev API will move in 17.11.
* Apps and examples will move in 18.02.

This is to give PMD maintainers sufficient time to adopt the new API.

[1]
http://dpdk.org/ml/archives/dev/2017-August/072643.html


Shahaf Shuler (4):
  ethdev: rename Rx and Tx configuration structs
  ethdev: introduce Rx queue offloads API
  ethdev: introduce Tx queue offloads API
  ethdev: add helpers to move to the new offloads API

 app/test-pmd/config.c                           |   4 +-
 app/test-pmd/testpmd.h                          |   4 +-
 doc/guides/nics/features.rst                    |  27 +++--
 drivers/net/af_packet/rte_eth_af_packet.c       |   4 +-
 drivers/net/ark/ark_ethdev_rx.c                 |   4 +-
 drivers/net/ark/ark_ethdev_rx.h                 |   2 +-
 drivers/net/ark/ark_ethdev_tx.c                 |   2 +-
 drivers/net/ark/ark_ethdev_tx.h                 |   2 +-
 drivers/net/avp/avp_ethdev.c                    |   8 +-
 drivers/net/bnx2x/bnx2x_rxtx.c                  |   4 +-
 drivers/net/bnx2x/bnx2x_rxtx.h                  |   4 +-
 drivers/net/bnxt/bnxt_ethdev.c                  |   4 +-
 drivers/net/bnxt/bnxt_rxq.c                     |   2 +-
 drivers/net/bnxt/bnxt_rxq.h                     |   2 +-
 drivers/net/bnxt/bnxt_txq.c                     |   2 +-
 drivers/net/bnxt/bnxt_txq.h                     |   2 +-
 drivers/net/bonding/rte_eth_bond_pmd.c          |   7 +-
 drivers/net/bonding/rte_eth_bond_private.h      |   4 +-
 drivers/net/cxgbe/cxgbe_ethdev.c                |   4 +-
 drivers/net/dpaa2/dpaa2_ethdev.c                |   4 +-
 drivers/net/e1000/e1000_ethdev.h                |   8 +-
 drivers/net/e1000/em_rxtx.c                     |   4 +-
 drivers/net/e1000/igb_ethdev.c                  |   8 +-
 drivers/net/e1000/igb_rxtx.c                    |   4 +-
 drivers/net/ena/ena_ethdev.c                    |  28 ++---
 drivers/net/enic/enic_ethdev.c                  |   6 +-
 drivers/net/failsafe/failsafe_ops.c             |   4 +-
 drivers/net/fm10k/fm10k_ethdev.c                |  12 +-
 drivers/net/i40e/i40e_ethdev.c                  |   4 +-
 drivers/net/i40e/i40e_ethdev_vf.c               |   4 +-
 drivers/net/i40e/i40e_rxtx.c                    |   4 +-
 drivers/net/i40e/i40e_rxtx.h                    |   4 +-
 drivers/net/ixgbe/ixgbe_ethdev.c                |   8 +-
 drivers/net/ixgbe/ixgbe_ethdev.h                |   4 +-
 drivers/net/ixgbe/ixgbe_rxtx.c                  |   4 +-
 drivers/net/kni/rte_eth_kni.c                   |   4 +-
 drivers/net/liquidio/lio_ethdev.c               |   8 +-
 drivers/net/mlx4/mlx4.c                         |  12 +-
 drivers/net/mlx5/mlx5_rxq.c                     |   4 +-
 drivers/net/mlx5/mlx5_rxtx.h                    |   6 +-
 drivers/net/mlx5/mlx5_txq.c                     |   4 +-
 drivers/net/nfp/nfp_net.c                       |  12 +-
 drivers/net/null/rte_eth_null.c                 |   4 +-
 drivers/net/pcap/rte_eth_pcap.c                 |   4 +-
 drivers/net/qede/qede_ethdev.c                  |   2 +-
 drivers/net/qede/qede_rxtx.c                    |   4 +-
 drivers/net/qede/qede_rxtx.h                    |   4 +-
 drivers/net/ring/rte_eth_ring.c                 |  20 ++--
 drivers/net/sfc/sfc_ethdev.c                    |   4 +-
 drivers/net/sfc/sfc_rx.c                        |   4 +-
 drivers/net/sfc/sfc_rx.h                        |   2 +-
 drivers/net/sfc/sfc_tx.c                        |   4 +-
 drivers/net/sfc/sfc_tx.h                        |   2 +-
 drivers/net/szedata2/rte_eth_szedata2.c         |   4 +-
 drivers/net/tap/rte_eth_tap.c                   |   4 +-
 drivers/net/thunderx/nicvf_ethdev.c             |   8 +-
 drivers/net/vhost/rte_eth_vhost.c               |   4 +-
 drivers/net/virtio/virtio_ethdev.c              |   2 +-
 drivers/net/virtio/virtio_ethdev.h              |   4 +-
 drivers/net/virtio/virtio_rxtx.c                |   8 +-
 drivers/net/vmxnet3/vmxnet3_ethdev.h            |   4 +-
 drivers/net/vmxnet3/vmxnet3_rxtx.c              |   4 +-
 drivers/net/xenvirt/rte_eth_xenvirt.c           |  20 ++--
 examples/ip_fragmentation/main.c                |   2 +-
 examples/ip_pipeline/app.h                      |   4 +-
 examples/ip_reassembly/main.c                   |   2 +-
 examples/ipsec-secgw/ipsec-secgw.c              |   2 +-
 examples/ipv4_multicast/main.c                  |   2 +-
 examples/l3fwd-acl/main.c                       |   2 +-
 examples/l3fwd-power/main.c                     |   2 +-
 examples/l3fwd-vf/main.c                        |   2 +-
 examples/l3fwd/main.c                           |   2 +-
 examples/netmap_compat/lib/compat_netmap.c      |   4 +-
 examples/performance-thread/l3fwd-thread/main.c |   2 +-
 examples/ptpclient/ptpclient.c                  |   2 +-
 examples/qos_sched/init.c                       |   4 +-
 examples/tep_termination/vxlan_setup.c          |   4 +-
 examples/vhost/main.c                           |   4 +-
 examples/vhost_xen/main.c                       |   2 +-
 examples/vmdq/main.c                            |   2 +-
 lib/librte_ether/rte_ethdev.c                   | 115 ++++++++++++++++++-
 lib/librte_ether/rte_ethdev.h                   |  83 +++++++++++--
 test/test-pipeline/init.c                       |   4 +-
 test/test/test_kni.c                            |   4 +-
 test/test/test_link_bonding.c                   |   4 +-
 test/test/test_pmd_perf.c                       |   4 +-
 test/test/virtual_pmd.c                         |   8 +-
 87 files changed, 409 insertions(+), 225 deletions(-)

-- 
2.12.0


* [dpdk-dev] [PATCH 1/4] ethdev: rename Rx and Tx configuration structs
From: Shahaf Shuler @ 2017-09-04  7:12 UTC (permalink / raw)
  To: thomas; +Cc: dev

Rename the structs rte_eth_txconf and rte_eth_rxconf to
rte_eth_txq_conf and rte_eth_rxq_conf respectively, as those
structs represent per-queue configuration.

The rename was done with the following commands:

find . \( -name '*.h' -or -name '*.c' \) -print0 | xargs -0 sed -i 's/rte_eth_txconf/rte_eth_txq_conf/g'

find . \( -name '*.h' -or -name '*.c' \) -print0 | xargs -0 sed -i 's/rte_eth_rxconf/rte_eth_rxq_conf/g'

Signed-off-by: Shahaf Shuler <shahafs@mellanox.com>
---
 app/test-pmd/config.c                           |  4 +--
 app/test-pmd/testpmd.h                          |  4 +--
 drivers/net/af_packet/rte_eth_af_packet.c       |  4 +--
 drivers/net/ark/ark_ethdev_rx.c                 |  4 +--
 drivers/net/ark/ark_ethdev_rx.h                 |  2 +-
 drivers/net/ark/ark_ethdev_tx.c                 |  2 +-
 drivers/net/ark/ark_ethdev_tx.h                 |  2 +-
 drivers/net/avp/avp_ethdev.c                    |  8 +++---
 drivers/net/bnx2x/bnx2x_rxtx.c                  |  4 +--
 drivers/net/bnx2x/bnx2x_rxtx.h                  |  4 +--
 drivers/net/bnxt/bnxt_ethdev.c                  |  4 +--
 drivers/net/bnxt/bnxt_rxq.c                     |  2 +-
 drivers/net/bnxt/bnxt_rxq.h                     |  2 +-
 drivers/net/bnxt/bnxt_txq.c                     |  2 +-
 drivers/net/bnxt/bnxt_txq.h                     |  2 +-
 drivers/net/bonding/rte_eth_bond_pmd.c          |  7 ++---
 drivers/net/bonding/rte_eth_bond_private.h      |  4 +--
 drivers/net/cxgbe/cxgbe_ethdev.c                |  4 +--
 drivers/net/dpaa2/dpaa2_ethdev.c                |  4 +--
 drivers/net/e1000/e1000_ethdev.h                |  8 +++---
 drivers/net/e1000/em_rxtx.c                     |  4 +--
 drivers/net/e1000/igb_ethdev.c                  |  8 +++---
 drivers/net/e1000/igb_rxtx.c                    |  4 +--
 drivers/net/ena/ena_ethdev.c                    | 28 +++++++++++---------
 drivers/net/enic/enic_ethdev.c                  |  6 ++---
 drivers/net/failsafe/failsafe_ops.c             |  4 +--
 drivers/net/fm10k/fm10k_ethdev.c                | 12 ++++-----
 drivers/net/i40e/i40e_ethdev.c                  |  4 +--
 drivers/net/i40e/i40e_ethdev_vf.c               |  4 +--
 drivers/net/i40e/i40e_rxtx.c                    |  4 +--
 drivers/net/i40e/i40e_rxtx.h                    |  4 +--
 drivers/net/ixgbe/ixgbe_ethdev.c                |  8 +++---
 drivers/net/ixgbe/ixgbe_ethdev.h                |  4 +--
 drivers/net/ixgbe/ixgbe_rxtx.c                  |  4 +--
 drivers/net/kni/rte_eth_kni.c                   |  4 +--
 drivers/net/liquidio/lio_ethdev.c               |  8 +++---
 drivers/net/mlx4/mlx4.c                         | 12 ++++-----
 drivers/net/mlx5/mlx5_rxq.c                     |  4 +--
 drivers/net/mlx5/mlx5_rxtx.h                    |  6 ++---
 drivers/net/mlx5/mlx5_txq.c                     |  4 +--
 drivers/net/nfp/nfp_net.c                       | 12 ++++-----
 drivers/net/null/rte_eth_null.c                 |  4 +--
 drivers/net/pcap/rte_eth_pcap.c                 |  4 +--
 drivers/net/qede/qede_ethdev.c                  |  2 +-
 drivers/net/qede/qede_rxtx.c                    |  4 +--
 drivers/net/qede/qede_rxtx.h                    |  4 +--
 drivers/net/ring/rte_eth_ring.c                 | 20 +++++++-------
 drivers/net/sfc/sfc_ethdev.c                    |  4 +--
 drivers/net/sfc/sfc_rx.c                        |  4 +--
 drivers/net/sfc/sfc_rx.h                        |  2 +-
 drivers/net/sfc/sfc_tx.c                        |  4 +--
 drivers/net/sfc/sfc_tx.h                        |  2 +-
 drivers/net/szedata2/rte_eth_szedata2.c         |  4 +--
 drivers/net/tap/rte_eth_tap.c                   |  4 +--
 drivers/net/thunderx/nicvf_ethdev.c             |  8 +++---
 drivers/net/vhost/rte_eth_vhost.c               |  4 +--
 drivers/net/virtio/virtio_ethdev.c              |  2 +-
 drivers/net/virtio/virtio_ethdev.h              |  4 +--
 drivers/net/virtio/virtio_rxtx.c                |  8 +++---
 drivers/net/vmxnet3/vmxnet3_ethdev.h            |  4 +--
 drivers/net/vmxnet3/vmxnet3_rxtx.c              |  4 +--
 drivers/net/xenvirt/rte_eth_xenvirt.c           | 20 +++++++-------
 examples/ip_fragmentation/main.c                |  2 +-
 examples/ip_pipeline/app.h                      |  4 +--
 examples/ip_reassembly/main.c                   |  2 +-
 examples/ipsec-secgw/ipsec-secgw.c              |  2 +-
 examples/ipv4_multicast/main.c                  |  2 +-
 examples/l3fwd-acl/main.c                       |  2 +-
 examples/l3fwd-power/main.c                     |  2 +-
 examples/l3fwd-vf/main.c                        |  2 +-
 examples/l3fwd/main.c                           |  2 +-
 examples/netmap_compat/lib/compat_netmap.c      |  4 +--
 examples/performance-thread/l3fwd-thread/main.c |  2 +-
 examples/ptpclient/ptpclient.c                  |  2 +-
 examples/qos_sched/init.c                       |  4 +--
 examples/tep_termination/vxlan_setup.c          |  4 +--
 examples/vhost/main.c                           |  4 +--
 examples/vhost_xen/main.c                       |  2 +-
 examples/vmdq/main.c                            |  2 +-
 lib/librte_ether/rte_ethdev.c                   |  4 +--
 lib/librte_ether/rte_ethdev.h                   | 24 +++++++++--------
 test/test-pipeline/init.c                       |  4 +--
 test/test/test_kni.c                            |  4 +--
 test/test/test_link_bonding.c                   |  4 +--
 test/test/test_pmd_perf.c                       |  4 +--
 test/test/virtual_pmd.c                         |  8 +++---
 86 files changed, 223 insertions(+), 214 deletions(-)

diff --git a/app/test-pmd/config.c b/app/test-pmd/config.c
index 3ae3e1cd8..392f0c57f 100644
--- a/app/test-pmd/config.c
+++ b/app/test-pmd/config.c
@@ -1639,8 +1639,8 @@ rxtx_config_display(void)
 		printf("  packet len=%u - nb packet segments=%d\n",
 				(unsigned)tx_pkt_length, (int) tx_pkt_nb_segs);
 
-	struct rte_eth_rxconf *rx_conf = &ports[0].rx_conf;
-	struct rte_eth_txconf *tx_conf = &ports[0].tx_conf;
+	struct rte_eth_rxq_conf *rx_conf = &ports[0].rx_conf;
+	struct rte_eth_txq_conf *tx_conf = &ports[0].tx_conf;
 
 	printf("  nb forwarding cores=%d - nb forwarding ports=%d\n",
 	       nb_fwd_lcores, nb_fwd_ports);
diff --git a/app/test-pmd/testpmd.h b/app/test-pmd/testpmd.h
index c9d7739b8..507974f43 100644
--- a/app/test-pmd/testpmd.h
+++ b/app/test-pmd/testpmd.h
@@ -189,8 +189,8 @@ struct rte_port {
 	uint8_t                 need_reconfig_queues; /**< need reconfiguring queues or not */
 	uint8_t                 rss_flag;   /**< enable rss or not */
 	uint8_t                 dcb_flag;   /**< enable dcb */
-	struct rte_eth_rxconf   rx_conf;    /**< rx configuration */
-	struct rte_eth_txconf   tx_conf;    /**< tx configuration */
+	struct rte_eth_rxq_conf   rx_conf;    /**< rx configuration */
+	struct rte_eth_txq_conf   tx_conf;    /**< tx configuration */
 	struct ether_addr       *mc_addr_pool; /**< pool of multicast addrs */
 	uint32_t                mc_addr_nb; /**< nb. of addr. in mc_addr_pool */
 	uint8_t                 slave_flag; /**< bonding slave port */
diff --git a/drivers/net/af_packet/rte_eth_af_packet.c b/drivers/net/af_packet/rte_eth_af_packet.c
index 9a47852ca..7cba0aa91 100644
--- a/drivers/net/af_packet/rte_eth_af_packet.c
+++ b/drivers/net/af_packet/rte_eth_af_packet.c
@@ -395,7 +395,7 @@ eth_rx_queue_setup(struct rte_eth_dev *dev,
                    uint16_t rx_queue_id,
                    uint16_t nb_rx_desc __rte_unused,
                    unsigned int socket_id __rte_unused,
-                   const struct rte_eth_rxconf *rx_conf __rte_unused,
+		   const struct rte_eth_rxq_conf *rx_conf __rte_unused,
                    struct rte_mempool *mb_pool)
 {
 	struct pmd_internals *internals = dev->data->dev_private;
@@ -428,7 +428,7 @@ eth_tx_queue_setup(struct rte_eth_dev *dev,
                    uint16_t tx_queue_id,
                    uint16_t nb_tx_desc __rte_unused,
                    unsigned int socket_id __rte_unused,
-                   const struct rte_eth_txconf *tx_conf __rte_unused)
+		   const struct rte_eth_txq_conf *tx_conf __rte_unused)
 {
 
 	struct pmd_internals *internals = dev->data->dev_private;
diff --git a/drivers/net/ark/ark_ethdev_rx.c b/drivers/net/ark/ark_ethdev_rx.c
index f5d812a55..eb5a2c70a 100644
--- a/drivers/net/ark/ark_ethdev_rx.c
+++ b/drivers/net/ark/ark_ethdev_rx.c
@@ -140,7 +140,7 @@ eth_ark_dev_rx_queue_setup(struct rte_eth_dev *dev,
 			   uint16_t queue_idx,
 			   uint16_t nb_desc,
 			   unsigned int socket_id,
-			   const struct rte_eth_rxconf *rx_conf,
+			   const struct rte_eth_rxq_conf *rx_conf,
 			   struct rte_mempool *mb_pool)
 {
 	static int warning1;		/* = 0 */
@@ -163,7 +163,7 @@ eth_ark_dev_rx_queue_setup(struct rte_eth_dev *dev,
 	if (rx_conf != NULL && warning1 == 0) {
 		warning1 = 1;
 		PMD_DRV_LOG(INFO,
-			    "Arkville ignores rte_eth_rxconf argument.\n");
+			    "Arkville ignores rte_eth_rxq_conf argument.\n");
 	}
 
 	if (RTE_PKTMBUF_HEADROOM < ARK_RX_META_SIZE) {
diff --git a/drivers/net/ark/ark_ethdev_rx.h b/drivers/net/ark/ark_ethdev_rx.h
index 3a54a4c91..15b494243 100644
--- a/drivers/net/ark/ark_ethdev_rx.h
+++ b/drivers/net/ark/ark_ethdev_rx.h
@@ -45,7 +45,7 @@ int eth_ark_dev_rx_queue_setup(struct rte_eth_dev *dev,
 			       uint16_t queue_idx,
 			       uint16_t nb_desc,
 			       unsigned int socket_id,
-			       const struct rte_eth_rxconf *rx_conf,
+			       const struct rte_eth_rxq_conf *rx_conf,
 			       struct rte_mempool *mp);
 uint32_t eth_ark_dev_rx_queue_count(struct rte_eth_dev *dev,
 				    uint16_t rx_queue_id);
diff --git a/drivers/net/ark/ark_ethdev_tx.c b/drivers/net/ark/ark_ethdev_tx.c
index 0e2d60deb..0e8aaf47a 100644
--- a/drivers/net/ark/ark_ethdev_tx.c
+++ b/drivers/net/ark/ark_ethdev_tx.c
@@ -234,7 +234,7 @@ eth_ark_tx_queue_setup(struct rte_eth_dev *dev,
 		       uint16_t queue_idx,
 		       uint16_t nb_desc,
 		       unsigned int socket_id,
-		       const struct rte_eth_txconf *tx_conf __rte_unused)
+		       const struct rte_eth_txq_conf *tx_conf __rte_unused)
 {
 	struct ark_adapter *ark = (struct ark_adapter *)dev->data->dev_private;
 	struct ark_tx_queue *queue;
diff --git a/drivers/net/ark/ark_ethdev_tx.h b/drivers/net/ark/ark_ethdev_tx.h
index 8aaafc22e..eb7ab63ed 100644
--- a/drivers/net/ark/ark_ethdev_tx.h
+++ b/drivers/net/ark/ark_ethdev_tx.h
@@ -49,7 +49,7 @@ int eth_ark_tx_queue_setup(struct rte_eth_dev *dev,
 			   uint16_t queue_idx,
 			   uint16_t nb_desc,
 			   unsigned int socket_id,
-			   const struct rte_eth_txconf *tx_conf);
+			   const struct rte_eth_txq_conf *tx_conf);
 void eth_ark_tx_queue_release(void *vtx_queue);
 int eth_ark_tx_queue_stop(struct rte_eth_dev *dev, uint16_t queue_id);
 int eth_ark_tx_queue_start(struct rte_eth_dev *dev, uint16_t queue_id);
diff --git a/drivers/net/avp/avp_ethdev.c b/drivers/net/avp/avp_ethdev.c
index c746a0e2c..01bc08a7d 100644
--- a/drivers/net/avp/avp_ethdev.c
+++ b/drivers/net/avp/avp_ethdev.c
@@ -79,14 +79,14 @@ static int avp_dev_rx_queue_setup(struct rte_eth_dev *dev,
 				  uint16_t rx_queue_id,
 				  uint16_t nb_rx_desc,
 				  unsigned int socket_id,
-				  const struct rte_eth_rxconf *rx_conf,
+				  const struct rte_eth_rxq_conf *rx_conf,
 				  struct rte_mempool *pool);
 
 static int avp_dev_tx_queue_setup(struct rte_eth_dev *dev,
 				  uint16_t tx_queue_id,
 				  uint16_t nb_tx_desc,
 				  unsigned int socket_id,
-				  const struct rte_eth_txconf *tx_conf);
+				  const struct rte_eth_txq_conf *tx_conf);
 
 static uint16_t avp_recv_scattered_pkts(void *rx_queue,
 					struct rte_mbuf **rx_pkts,
@@ -1143,7 +1143,7 @@ avp_dev_rx_queue_setup(struct rte_eth_dev *eth_dev,
 		       uint16_t rx_queue_id,
 		       uint16_t nb_rx_desc,
 		       unsigned int socket_id,
-		       const struct rte_eth_rxconf *rx_conf,
+		       const struct rte_eth_rxq_conf *rx_conf,
 		       struct rte_mempool *pool)
 {
 	struct avp_dev *avp = AVP_DEV_PRIVATE_TO_HW(eth_dev->data->dev_private);
@@ -1207,7 +1207,7 @@ avp_dev_tx_queue_setup(struct rte_eth_dev *eth_dev,
 		       uint16_t tx_queue_id,
 		       uint16_t nb_tx_desc,
 		       unsigned int socket_id,
-		       const struct rte_eth_txconf *tx_conf)
+		       const struct rte_eth_txq_conf *tx_conf)
 {
 	struct avp_dev *avp = AVP_DEV_PRIVATE_TO_HW(eth_dev->data->dev_private);
 	struct avp_queue *txq;
diff --git a/drivers/net/bnx2x/bnx2x_rxtx.c b/drivers/net/bnx2x/bnx2x_rxtx.c
index 5dd4aee7f..1a0c633b1 100644
--- a/drivers/net/bnx2x/bnx2x_rxtx.c
+++ b/drivers/net/bnx2x/bnx2x_rxtx.c
@@ -60,7 +60,7 @@ bnx2x_dev_rx_queue_setup(struct rte_eth_dev *dev,
 		       uint16_t queue_idx,
 		       uint16_t nb_desc,
 		       unsigned int socket_id,
-		       __rte_unused const struct rte_eth_rxconf *rx_conf,
+		       __rte_unused const struct rte_eth_rxq_conf *rx_conf,
 		       struct rte_mempool *mp)
 {
 	uint16_t j, idx;
@@ -246,7 +246,7 @@ bnx2x_dev_tx_queue_setup(struct rte_eth_dev *dev,
 		       uint16_t queue_idx,
 		       uint16_t nb_desc,
 		       unsigned int socket_id,
-		       const struct rte_eth_txconf *tx_conf)
+		       const struct rte_eth_txq_conf *tx_conf)
 {
 	uint16_t i;
 	unsigned int tsize;
diff --git a/drivers/net/bnx2x/bnx2x_rxtx.h b/drivers/net/bnx2x/bnx2x_rxtx.h
index 2e38ec26a..1c6a6b38d 100644
--- a/drivers/net/bnx2x/bnx2x_rxtx.h
+++ b/drivers/net/bnx2x/bnx2x_rxtx.h
@@ -68,12 +68,12 @@ struct bnx2x_tx_queue {
 
 int bnx2x_dev_rx_queue_setup(struct rte_eth_dev *dev, uint16_t rx_queue_id,
 			      uint16_t nb_rx_desc, unsigned int socket_id,
-			      const struct rte_eth_rxconf *rx_conf,
+			      const struct rte_eth_rxq_conf *rx_conf,
 			      struct rte_mempool *mb_pool);
 
 int bnx2x_dev_tx_queue_setup(struct rte_eth_dev *dev, uint16_t tx_queue_id,
 			      uint16_t nb_tx_desc, unsigned int socket_id,
-			      const struct rte_eth_txconf *tx_conf);
+			      const struct rte_eth_txq_conf *tx_conf);
 
 void bnx2x_dev_rx_queue_release(void *rxq);
 void bnx2x_dev_tx_queue_release(void *txq);
diff --git a/drivers/net/bnxt/bnxt_ethdev.c b/drivers/net/bnxt/bnxt_ethdev.c
index c9d11228b..508e6b752 100644
--- a/drivers/net/bnxt/bnxt_ethdev.c
+++ b/drivers/net/bnxt/bnxt_ethdev.c
@@ -391,7 +391,7 @@ static void bnxt_dev_info_get_op(struct rte_eth_dev *eth_dev,
 					DEV_TX_OFFLOAD_GENEVE_TNL_TSO;
 
 	/* *INDENT-OFF* */
-	dev_info->default_rxconf = (struct rte_eth_rxconf) {
+	dev_info->default_rxconf = (struct rte_eth_rxq_conf) {
 		.rx_thresh = {
 			.pthresh = 8,
 			.hthresh = 8,
@@ -401,7 +401,7 @@ static void bnxt_dev_info_get_op(struct rte_eth_dev *eth_dev,
 		.rx_drop_en = 0,
 	};
 
-	dev_info->default_txconf = (struct rte_eth_txconf) {
+	dev_info->default_txconf = (struct rte_eth_txq_conf) {
 		.tx_thresh = {
 			.pthresh = 32,
 			.hthresh = 0,
diff --git a/drivers/net/bnxt/bnxt_rxq.c b/drivers/net/bnxt/bnxt_rxq.c
index 0793820b1..d0ab47c36 100644
--- a/drivers/net/bnxt/bnxt_rxq.c
+++ b/drivers/net/bnxt/bnxt_rxq.c
@@ -293,7 +293,7 @@ int bnxt_rx_queue_setup_op(struct rte_eth_dev *eth_dev,
 			       uint16_t queue_idx,
 			       uint16_t nb_desc,
 			       unsigned int socket_id,
-			       const struct rte_eth_rxconf *rx_conf,
+			       const struct rte_eth_rxq_conf *rx_conf,
 			       struct rte_mempool *mp)
 {
 	struct bnxt *bp = (struct bnxt *)eth_dev->data->dev_private;
diff --git a/drivers/net/bnxt/bnxt_rxq.h b/drivers/net/bnxt/bnxt_rxq.h
index 01aaa007f..29c0aa0a5 100644
--- a/drivers/net/bnxt/bnxt_rxq.h
+++ b/drivers/net/bnxt/bnxt_rxq.h
@@ -70,7 +70,7 @@ int bnxt_rx_queue_setup_op(struct rte_eth_dev *eth_dev,
 			       uint16_t queue_idx,
 			       uint16_t nb_desc,
 			       unsigned int socket_id,
-			       const struct rte_eth_rxconf *rx_conf,
+			       const struct rte_eth_rxq_conf *rx_conf,
 			       struct rte_mempool *mp);
 void bnxt_free_rx_mbufs(struct bnxt *bp);
 
diff --git a/drivers/net/bnxt/bnxt_txq.c b/drivers/net/bnxt/bnxt_txq.c
index 99dddddfc..f4701bd68 100644
--- a/drivers/net/bnxt/bnxt_txq.c
+++ b/drivers/net/bnxt/bnxt_txq.c
@@ -102,7 +102,7 @@ int bnxt_tx_queue_setup_op(struct rte_eth_dev *eth_dev,
 			       uint16_t queue_idx,
 			       uint16_t nb_desc,
 			       unsigned int socket_id,
-			       const struct rte_eth_txconf *tx_conf)
+			       const struct rte_eth_txq_conf *tx_conf)
 {
 	struct bnxt *bp = (struct bnxt *)eth_dev->data->dev_private;
 	struct bnxt_tx_queue *txq;
diff --git a/drivers/net/bnxt/bnxt_txq.h b/drivers/net/bnxt/bnxt_txq.h
index 16f3a0bdd..5071dfd5b 100644
--- a/drivers/net/bnxt/bnxt_txq.h
+++ b/drivers/net/bnxt/bnxt_txq.h
@@ -70,6 +70,6 @@ int bnxt_tx_queue_setup_op(struct rte_eth_dev *eth_dev,
 			       uint16_t queue_idx,
 			       uint16_t nb_desc,
 			       unsigned int socket_id,
-			       const struct rte_eth_txconf *tx_conf);
+			       const struct rte_eth_txq_conf *tx_conf);
 
 #endif
diff --git a/drivers/net/bonding/rte_eth_bond_pmd.c b/drivers/net/bonding/rte_eth_bond_pmd.c
index 3ee70baa0..fbf7ffba5 100644
--- a/drivers/net/bonding/rte_eth_bond_pmd.c
+++ b/drivers/net/bonding/rte_eth_bond_pmd.c
@@ -2153,7 +2153,8 @@ bond_ethdev_vlan_filter_set(struct rte_eth_dev *dev, uint16_t vlan_id, int on)
 static int
 bond_ethdev_rx_queue_setup(struct rte_eth_dev *dev, uint16_t rx_queue_id,
 		uint16_t nb_rx_desc, unsigned int socket_id __rte_unused,
-		const struct rte_eth_rxconf *rx_conf, struct rte_mempool *mb_pool)
+		const struct rte_eth_rxq_conf *rx_conf,
+		struct rte_mempool *mb_pool)
 {
 	struct bond_rx_queue *bd_rx_q = (struct bond_rx_queue *)
 			rte_zmalloc_socket(NULL, sizeof(struct bond_rx_queue),
@@ -2166,7 +2167,7 @@ bond_ethdev_rx_queue_setup(struct rte_eth_dev *dev, uint16_t rx_queue_id,
 
 	bd_rx_q->nb_rx_desc = nb_rx_desc;
 
-	memcpy(&(bd_rx_q->rx_conf), rx_conf, sizeof(struct rte_eth_rxconf));
+	memcpy(&(bd_rx_q->rx_conf), rx_conf, sizeof(struct rte_eth_rxq_conf));
 	bd_rx_q->mb_pool = mb_pool;
 
 	dev->data->rx_queues[rx_queue_id] = bd_rx_q;
@@ -2177,7 +2178,7 @@ bond_ethdev_rx_queue_setup(struct rte_eth_dev *dev, uint16_t rx_queue_id,
 static int
 bond_ethdev_tx_queue_setup(struct rte_eth_dev *dev, uint16_t tx_queue_id,
 		uint16_t nb_tx_desc, unsigned int socket_id __rte_unused,
-		const struct rte_eth_txconf *tx_conf)
+		const struct rte_eth_txq_conf *tx_conf)
 {
 	struct bond_tx_queue *bd_tx_q  = (struct bond_tx_queue *)
 			rte_zmalloc_socket(NULL, sizeof(struct bond_tx_queue),
diff --git a/drivers/net/bonding/rte_eth_bond_private.h b/drivers/net/bonding/rte_eth_bond_private.h
index 1fe6ff880..579a18c98 100644
--- a/drivers/net/bonding/rte_eth_bond_private.h
+++ b/drivers/net/bonding/rte_eth_bond_private.h
@@ -74,7 +74,7 @@ struct bond_rx_queue {
 	/**< Reference to eth_dev private structure */
 	uint16_t nb_rx_desc;
 	/**< Number of RX descriptors available for the queue */
-	struct rte_eth_rxconf rx_conf;
+	struct rte_eth_rxq_conf rx_conf;
 	/**< Copy of RX configuration structure for queue */
 	struct rte_mempool *mb_pool;
 	/**< Reference to mbuf pool to use for RX queue */
@@ -87,7 +87,7 @@ struct bond_tx_queue {
 	/**< Reference to dev private structure */
 	uint16_t nb_tx_desc;
 	/**< Number of TX descriptors available for the queue */
-	struct rte_eth_txconf tx_conf;
+	struct rte_eth_txq_conf tx_conf;
 	/**< Copy of TX configuration structure for queue */
 };
 
diff --git a/drivers/net/cxgbe/cxgbe_ethdev.c b/drivers/net/cxgbe/cxgbe_ethdev.c
index 7bca45614..b8f965765 100644
--- a/drivers/net/cxgbe/cxgbe_ethdev.c
+++ b/drivers/net/cxgbe/cxgbe_ethdev.c
@@ -443,7 +443,7 @@ static int cxgbe_dev_tx_queue_stop(struct rte_eth_dev *eth_dev,
 static int cxgbe_dev_tx_queue_setup(struct rte_eth_dev *eth_dev,
 				    uint16_t queue_idx,	uint16_t nb_desc,
 				    unsigned int socket_id,
-				    const struct rte_eth_txconf *tx_conf)
+				    const struct rte_eth_txq_conf *tx_conf)
 {
 	struct port_info *pi = (struct port_info *)(eth_dev->data->dev_private);
 	struct adapter *adapter = pi->adapter;
@@ -552,7 +552,7 @@ static int cxgbe_dev_rx_queue_stop(struct rte_eth_dev *eth_dev,
 static int cxgbe_dev_rx_queue_setup(struct rte_eth_dev *eth_dev,
 				    uint16_t queue_idx,	uint16_t nb_desc,
 				    unsigned int socket_id,
-				    const struct rte_eth_rxconf *rx_conf,
+				    const struct rte_eth_rxq_conf *rx_conf,
 				    struct rte_mempool *mp)
 {
 	struct port_info *pi = (struct port_info *)(eth_dev->data->dev_private);
diff --git a/drivers/net/dpaa2/dpaa2_ethdev.c b/drivers/net/dpaa2/dpaa2_ethdev.c
index 429b3a086..80b79ecc2 100644
--- a/drivers/net/dpaa2/dpaa2_ethdev.c
+++ b/drivers/net/dpaa2/dpaa2_ethdev.c
@@ -355,7 +355,7 @@ dpaa2_dev_rx_queue_setup(struct rte_eth_dev *dev,
 			 uint16_t rx_queue_id,
 			 uint16_t nb_rx_desc __rte_unused,
 			 unsigned int socket_id __rte_unused,
-			 const struct rte_eth_rxconf *rx_conf __rte_unused,
+			 const struct rte_eth_rxq_conf *rx_conf __rte_unused,
 			 struct rte_mempool *mb_pool)
 {
 	struct dpaa2_dev_priv *priv = dev->data->dev_private;
@@ -440,7 +440,7 @@ dpaa2_dev_tx_queue_setup(struct rte_eth_dev *dev,
 			 uint16_t tx_queue_id,
 			 uint16_t nb_tx_desc __rte_unused,
 			 unsigned int socket_id __rte_unused,
-			 const struct rte_eth_txconf *tx_conf __rte_unused)
+			 const struct rte_eth_txq_conf *tx_conf __rte_unused)
 {
 	struct dpaa2_dev_priv *priv = dev->data->dev_private;
 	struct dpaa2_queue *dpaa2_q = (struct dpaa2_queue *)
diff --git a/drivers/net/e1000/e1000_ethdev.h b/drivers/net/e1000/e1000_ethdev.h
index 5668910c5..6390cc137 100644
--- a/drivers/net/e1000/e1000_ethdev.h
+++ b/drivers/net/e1000/e1000_ethdev.h
@@ -372,7 +372,7 @@ void igb_dev_free_queues(struct rte_eth_dev *dev);
 
 int eth_igb_rx_queue_setup(struct rte_eth_dev *dev, uint16_t rx_queue_id,
 		uint16_t nb_rx_desc, unsigned int socket_id,
-		const struct rte_eth_rxconf *rx_conf,
+		const struct rte_eth_rxq_conf *rx_conf,
 		struct rte_mempool *mb_pool);
 
 uint32_t eth_igb_rx_queue_count(struct rte_eth_dev *dev,
@@ -385,7 +385,7 @@ int eth_igb_tx_descriptor_status(void *tx_queue, uint16_t offset);
 
 int eth_igb_tx_queue_setup(struct rte_eth_dev *dev, uint16_t tx_queue_id,
 		uint16_t nb_tx_desc, unsigned int socket_id,
-		const struct rte_eth_txconf *tx_conf);
+		const struct rte_eth_txq_conf *tx_conf);
 
 int eth_igb_tx_done_cleanup(void *txq, uint32_t free_cnt);
 
@@ -441,7 +441,7 @@ void em_dev_free_queues(struct rte_eth_dev *dev);
 
 int eth_em_rx_queue_setup(struct rte_eth_dev *dev, uint16_t rx_queue_id,
 		uint16_t nb_rx_desc, unsigned int socket_id,
-		const struct rte_eth_rxconf *rx_conf,
+		const struct rte_eth_rxq_conf *rx_conf,
 		struct rte_mempool *mb_pool);
 
 uint32_t eth_em_rx_queue_count(struct rte_eth_dev *dev,
@@ -454,7 +454,7 @@ int eth_em_tx_descriptor_status(void *tx_queue, uint16_t offset);
 
 int eth_em_tx_queue_setup(struct rte_eth_dev *dev, uint16_t tx_queue_id,
 		uint16_t nb_tx_desc, unsigned int socket_id,
-		const struct rte_eth_txconf *tx_conf);
+		const struct rte_eth_txq_conf *tx_conf);
 
 int eth_em_rx_init(struct rte_eth_dev *dev);
 
diff --git a/drivers/net/e1000/em_rxtx.c b/drivers/net/e1000/em_rxtx.c
index 31819c5bd..857b7167d 100644
--- a/drivers/net/e1000/em_rxtx.c
+++ b/drivers/net/e1000/em_rxtx.c
@@ -1185,7 +1185,7 @@ eth_em_tx_queue_setup(struct rte_eth_dev *dev,
 			 uint16_t queue_idx,
 			 uint16_t nb_desc,
 			 unsigned int socket_id,
-			 const struct rte_eth_txconf *tx_conf)
+			 const struct rte_eth_txq_conf *tx_conf)
 {
 	const struct rte_memzone *tz;
 	struct em_tx_queue *txq;
@@ -1347,7 +1347,7 @@ eth_em_rx_queue_setup(struct rte_eth_dev *dev,
 		uint16_t queue_idx,
 		uint16_t nb_desc,
 		unsigned int socket_id,
-		const struct rte_eth_rxconf *rx_conf,
+		const struct rte_eth_rxq_conf *rx_conf,
 		struct rte_mempool *mp)
 {
 	const struct rte_memzone *rz;
diff --git a/drivers/net/e1000/igb_ethdev.c b/drivers/net/e1000/igb_ethdev.c
index e4f7a9faf..7ac3703ac 100644
--- a/drivers/net/e1000/igb_ethdev.c
+++ b/drivers/net/e1000/igb_ethdev.c
@@ -2252,7 +2252,7 @@ eth_igb_infos_get(struct rte_eth_dev *dev, struct rte_eth_dev_info *dev_info)
 	dev_info->reta_size = ETH_RSS_RETA_SIZE_128;
 	dev_info->flow_type_rss_offloads = IGB_RSS_OFFLOAD_ALL;
 
-	dev_info->default_rxconf = (struct rte_eth_rxconf) {
+	dev_info->default_rxconf = (struct rte_eth_rxq_conf) {
 		.rx_thresh = {
 			.pthresh = IGB_DEFAULT_RX_PTHRESH,
 			.hthresh = IGB_DEFAULT_RX_HTHRESH,
@@ -2262,7 +2262,7 @@ eth_igb_infos_get(struct rte_eth_dev *dev, struct rte_eth_dev_info *dev_info)
 		.rx_drop_en = 0,
 	};
 
-	dev_info->default_txconf = (struct rte_eth_txconf) {
+	dev_info->default_txconf = (struct rte_eth_txq_conf) {
 		.tx_thresh = {
 			.pthresh = IGB_DEFAULT_TX_PTHRESH,
 			.hthresh = IGB_DEFAULT_TX_HTHRESH,
@@ -2339,7 +2339,7 @@ eth_igbvf_infos_get(struct rte_eth_dev *dev, struct rte_eth_dev_info *dev_info)
 		break;
 	}
 
-	dev_info->default_rxconf = (struct rte_eth_rxconf) {
+	dev_info->default_rxconf = (struct rte_eth_rxq_conf) {
 		.rx_thresh = {
 			.pthresh = IGB_DEFAULT_RX_PTHRESH,
 			.hthresh = IGB_DEFAULT_RX_HTHRESH,
@@ -2349,7 +2349,7 @@ eth_igbvf_infos_get(struct rte_eth_dev *dev, struct rte_eth_dev_info *dev_info)
 		.rx_drop_en = 0,
 	};
 
-	dev_info->default_txconf = (struct rte_eth_txconf) {
+	dev_info->default_txconf = (struct rte_eth_txq_conf) {
 		.tx_thresh = {
 			.pthresh = IGB_DEFAULT_TX_PTHRESH,
 			.hthresh = IGB_DEFAULT_TX_HTHRESH,
diff --git a/drivers/net/e1000/igb_rxtx.c b/drivers/net/e1000/igb_rxtx.c
index 1c80a2a1b..f4a7fe571 100644
--- a/drivers/net/e1000/igb_rxtx.c
+++ b/drivers/net/e1000/igb_rxtx.c
@@ -1458,7 +1458,7 @@ eth_igb_tx_queue_setup(struct rte_eth_dev *dev,
 			 uint16_t queue_idx,
 			 uint16_t nb_desc,
 			 unsigned int socket_id,
-			 const struct rte_eth_txconf *tx_conf)
+			 const struct rte_eth_txq_conf *tx_conf)
 {
 	const struct rte_memzone *tz;
 	struct igb_tx_queue *txq;
@@ -1604,7 +1604,7 @@ eth_igb_rx_queue_setup(struct rte_eth_dev *dev,
 			 uint16_t queue_idx,
 			 uint16_t nb_desc,
 			 unsigned int socket_id,
-			 const struct rte_eth_rxconf *rx_conf,
+			 const struct rte_eth_rxq_conf *rx_conf,
 			 struct rte_mempool *mp)
 {
 	const struct rte_memzone *rz;
diff --git a/drivers/net/ena/ena_ethdev.c b/drivers/net/ena/ena_ethdev.c
index 80ce1f353..69fe5218d 100644
--- a/drivers/net/ena/ena_ethdev.c
+++ b/drivers/net/ena/ena_ethdev.c
@@ -193,10 +193,10 @@ static uint16_t eth_ena_prep_pkts(void *tx_queue, struct rte_mbuf **tx_pkts,
 		uint16_t nb_pkts);
 static int ena_tx_queue_setup(struct rte_eth_dev *dev, uint16_t queue_idx,
 			      uint16_t nb_desc, unsigned int socket_id,
-			      const struct rte_eth_txconf *tx_conf);
+			      const struct rte_eth_txq_conf *tx_conf);
 static int ena_rx_queue_setup(struct rte_eth_dev *dev, uint16_t queue_idx,
 			      uint16_t nb_desc, unsigned int socket_id,
-			      const struct rte_eth_rxconf *rx_conf,
+			      const struct rte_eth_rxq_conf *rx_conf,
 			      struct rte_mempool *mp);
 static uint16_t eth_ena_recv_pkts(void *rx_queue,
 				  struct rte_mbuf **rx_pkts, uint16_t nb_pkts);
@@ -940,11 +940,12 @@ static int ena_queue_restart(struct ena_ring *ring)
 	return 0;
 }
 
-static int ena_tx_queue_setup(struct rte_eth_dev *dev,
-			      uint16_t queue_idx,
-			      uint16_t nb_desc,
-			      __rte_unused unsigned int socket_id,
-			      __rte_unused const struct rte_eth_txconf *tx_conf)
+static int ena_tx_queue_setup(
+		struct rte_eth_dev *dev,
+		uint16_t queue_idx,
+		uint16_t nb_desc,
+		__rte_unused unsigned int socket_id,
+		__rte_unused const struct rte_eth_txq_conf *tx_conf)
 {
 	struct ena_com_create_io_ctx ctx =
 		/* policy set to _HOST just to satisfy icc compiler */
@@ -1042,12 +1043,13 @@ static int ena_tx_queue_setup(struct rte_eth_dev *dev,
 	return rc;
 }
 
-static int ena_rx_queue_setup(struct rte_eth_dev *dev,
-			      uint16_t queue_idx,
-			      uint16_t nb_desc,
-			      __rte_unused unsigned int socket_id,
-			      __rte_unused const struct rte_eth_rxconf *rx_conf,
-			      struct rte_mempool *mp)
+static int ena_rx_queue_setup(
+		struct rte_eth_dev *dev,
+		uint16_t queue_idx,
+		uint16_t nb_desc,
+		__rte_unused unsigned int socket_id,
+		__rte_unused const struct rte_eth_rxq_conf *rx_conf,
+		struct rte_mempool *mp)
 {
 	struct ena_com_create_io_ctx ctx =
 		/* policy set to _HOST just to satisfy icc compiler */
diff --git a/drivers/net/enic/enic_ethdev.c b/drivers/net/enic/enic_ethdev.c
index da8fec2d0..da7e88d23 100644
--- a/drivers/net/enic/enic_ethdev.c
+++ b/drivers/net/enic/enic_ethdev.c
@@ -191,7 +191,7 @@ static int enicpmd_dev_tx_queue_setup(struct rte_eth_dev *eth_dev,
 	uint16_t queue_idx,
 	uint16_t nb_desc,
 	unsigned int socket_id,
-	__rte_unused const struct rte_eth_txconf *tx_conf)
+	__rte_unused const struct rte_eth_txq_conf *tx_conf)
 {
 	int ret;
 	struct enic *enic = pmd_priv(eth_dev);
@@ -303,7 +303,7 @@ static int enicpmd_dev_rx_queue_setup(struct rte_eth_dev *eth_dev,
 	uint16_t queue_idx,
 	uint16_t nb_desc,
 	unsigned int socket_id,
-	const struct rte_eth_rxconf *rx_conf,
+	const struct rte_eth_rxq_conf *rx_conf,
 	struct rte_mempool *mp)
 {
 	int ret;
@@ -485,7 +485,7 @@ static void enicpmd_dev_info_get(struct rte_eth_dev *eth_dev,
 		DEV_TX_OFFLOAD_UDP_CKSUM   |
 		DEV_TX_OFFLOAD_TCP_CKSUM   |
 		DEV_TX_OFFLOAD_TCP_TSO;
-	device_info->default_rxconf = (struct rte_eth_rxconf) {
+	device_info->default_rxconf = (struct rte_eth_rxq_conf) {
 		.rx_free_thresh = ENIC_DEFAULT_RX_FREE_THRESH
 	};
 }
diff --git a/drivers/net/failsafe/failsafe_ops.c b/drivers/net/failsafe/failsafe_ops.c
index ff9ad155c..6f3f5ef56 100644
--- a/drivers/net/failsafe/failsafe_ops.c
+++ b/drivers/net/failsafe/failsafe_ops.c
@@ -384,7 +384,7 @@ fs_rx_queue_setup(struct rte_eth_dev *dev,
 		uint16_t rx_queue_id,
 		uint16_t nb_rx_desc,
 		unsigned int socket_id,
-		const struct rte_eth_rxconf *rx_conf,
+		const struct rte_eth_rxq_conf *rx_conf,
 		struct rte_mempool *mb_pool)
 {
 	struct sub_device *sdev;
@@ -452,7 +452,7 @@ fs_tx_queue_setup(struct rte_eth_dev *dev,
 		uint16_t tx_queue_id,
 		uint16_t nb_tx_desc,
 		unsigned int socket_id,
-		const struct rte_eth_txconf *tx_conf)
+		const struct rte_eth_txq_conf *tx_conf)
 {
 	struct sub_device *sdev;
 	struct txq *txq;
diff --git a/drivers/net/fm10k/fm10k_ethdev.c b/drivers/net/fm10k/fm10k_ethdev.c
index e60d3a365..d6d9d9169 100644
--- a/drivers/net/fm10k/fm10k_ethdev.c
+++ b/drivers/net/fm10k/fm10k_ethdev.c
@@ -1427,7 +1427,7 @@ fm10k_dev_infos_get(struct rte_eth_dev *dev,
 	dev_info->hash_key_size = FM10K_RSSRK_SIZE * sizeof(uint32_t);
 	dev_info->reta_size = FM10K_MAX_RSS_INDICES;
 
-	dev_info->default_rxconf = (struct rte_eth_rxconf) {
+	dev_info->default_rxconf = (struct rte_eth_rxq_conf) {
 		.rx_thresh = {
 			.pthresh = FM10K_DEFAULT_RX_PTHRESH,
 			.hthresh = FM10K_DEFAULT_RX_HTHRESH,
@@ -1437,7 +1437,7 @@ fm10k_dev_infos_get(struct rte_eth_dev *dev,
 		.rx_drop_en = 0,
 	};
 
-	dev_info->default_txconf = (struct rte_eth_txconf) {
+	dev_info->default_txconf = (struct rte_eth_txq_conf) {
 		.tx_thresh = {
 			.pthresh = FM10K_DEFAULT_TX_PTHRESH,
 			.hthresh = FM10K_DEFAULT_TX_HTHRESH,
@@ -1740,7 +1740,7 @@ check_thresh(uint16_t min, uint16_t max, uint16_t div, uint16_t request)
 }
 
 static inline int
-handle_rxconf(struct fm10k_rx_queue *q, const struct rte_eth_rxconf *conf)
+handle_rxconf(struct fm10k_rx_queue *q, const struct rte_eth_rxq_conf *conf)
 {
 	uint16_t rx_free_thresh;
 
@@ -1805,7 +1805,7 @@ mempool_element_size_valid(struct rte_mempool *mp)
 static int
 fm10k_rx_queue_setup(struct rte_eth_dev *dev, uint16_t queue_id,
 	uint16_t nb_desc, unsigned int socket_id,
-	const struct rte_eth_rxconf *conf, struct rte_mempool *mp)
+	const struct rte_eth_rxq_conf *conf, struct rte_mempool *mp)
 {
 	struct fm10k_hw *hw = FM10K_DEV_PRIVATE_TO_HW(dev->data->dev_private);
 	struct fm10k_dev_info *dev_info =
@@ -1912,7 +1912,7 @@ fm10k_rx_queue_release(void *queue)
 }
 
 static inline int
-handle_txconf(struct fm10k_tx_queue *q, const struct rte_eth_txconf *conf)
+handle_txconf(struct fm10k_tx_queue *q, const struct rte_eth_txq_conf *conf)
 {
 	uint16_t tx_free_thresh;
 	uint16_t tx_rs_thresh;
@@ -1971,7 +1971,7 @@ handle_txconf(struct fm10k_tx_queue *q, const struct rte_eth_txconf *conf)
 static int
 fm10k_tx_queue_setup(struct rte_eth_dev *dev, uint16_t queue_id,
 	uint16_t nb_desc, unsigned int socket_id,
-	const struct rte_eth_txconf *conf)
+	const struct rte_eth_txq_conf *conf)
 {
 	struct fm10k_hw *hw = FM10K_DEV_PRIVATE_TO_HW(dev->data->dev_private);
 	struct fm10k_tx_queue *q;
diff --git a/drivers/net/i40e/i40e_ethdev.c b/drivers/net/i40e/i40e_ethdev.c
index 8e0580c56..9dc422cbb 100644
--- a/drivers/net/i40e/i40e_ethdev.c
+++ b/drivers/net/i40e/i40e_ethdev.c
@@ -2973,7 +2973,7 @@ i40e_dev_info_get(struct rte_eth_dev *dev, struct rte_eth_dev_info *dev_info)
 	dev_info->reta_size = pf->hash_lut_size;
 	dev_info->flow_type_rss_offloads = I40E_RSS_OFFLOAD_ALL;
 
-	dev_info->default_rxconf = (struct rte_eth_rxconf) {
+	dev_info->default_rxconf = (struct rte_eth_rxq_conf) {
 		.rx_thresh = {
 			.pthresh = I40E_DEFAULT_RX_PTHRESH,
 			.hthresh = I40E_DEFAULT_RX_HTHRESH,
@@ -2983,7 +2983,7 @@ i40e_dev_info_get(struct rte_eth_dev *dev, struct rte_eth_dev_info *dev_info)
 		.rx_drop_en = 0,
 	};
 
-	dev_info->default_txconf = (struct rte_eth_txconf) {
+	dev_info->default_txconf = (struct rte_eth_txq_conf) {
 		.tx_thresh = {
 			.pthresh = I40E_DEFAULT_TX_PTHRESH,
 			.hthresh = I40E_DEFAULT_TX_HTHRESH,
diff --git a/drivers/net/i40e/i40e_ethdev_vf.c b/drivers/net/i40e/i40e_ethdev_vf.c
index 7c5c16b85..61938d487 100644
--- a/drivers/net/i40e/i40e_ethdev_vf.c
+++ b/drivers/net/i40e/i40e_ethdev_vf.c
@@ -2144,7 +2144,7 @@ i40evf_dev_info_get(struct rte_eth_dev *dev, struct rte_eth_dev_info *dev_info)
 		DEV_TX_OFFLOAD_TCP_CKSUM |
 		DEV_TX_OFFLOAD_SCTP_CKSUM;
 
-	dev_info->default_rxconf = (struct rte_eth_rxconf) {
+	dev_info->default_rxconf = (struct rte_eth_rxq_conf) {
 		.rx_thresh = {
 			.pthresh = I40E_DEFAULT_RX_PTHRESH,
 			.hthresh = I40E_DEFAULT_RX_HTHRESH,
@@ -2154,7 +2154,7 @@ i40evf_dev_info_get(struct rte_eth_dev *dev, struct rte_eth_dev_info *dev_info)
 		.rx_drop_en = 0,
 	};
 
-	dev_info->default_txconf = (struct rte_eth_txconf) {
+	dev_info->default_txconf = (struct rte_eth_txq_conf) {
 		.tx_thresh = {
 			.pthresh = I40E_DEFAULT_TX_PTHRESH,
 			.hthresh = I40E_DEFAULT_TX_HTHRESH,
diff --git a/drivers/net/i40e/i40e_rxtx.c b/drivers/net/i40e/i40e_rxtx.c
index d42c23c05..f4e367db8 100644
--- a/drivers/net/i40e/i40e_rxtx.c
+++ b/drivers/net/i40e/i40e_rxtx.c
@@ -1731,7 +1731,7 @@ i40e_dev_rx_queue_setup(struct rte_eth_dev *dev,
 			uint16_t queue_idx,
 			uint16_t nb_desc,
 			unsigned int socket_id,
-			const struct rte_eth_rxconf *rx_conf,
+			const struct rte_eth_rxq_conf *rx_conf,
 			struct rte_mempool *mp)
 {
 	struct i40e_vsi *vsi;
@@ -2010,7 +2010,7 @@ i40e_dev_tx_queue_setup(struct rte_eth_dev *dev,
 			uint16_t queue_idx,
 			uint16_t nb_desc,
 			unsigned int socket_id,
-			const struct rte_eth_txconf *tx_conf)
+			const struct rte_eth_txq_conf *tx_conf)
 {
 	struct i40e_vsi *vsi;
 	struct i40e_hw *hw = I40E_DEV_PRIVATE_TO_HW(dev->data->dev_private);
diff --git a/drivers/net/i40e/i40e_rxtx.h b/drivers/net/i40e/i40e_rxtx.h
index 20084d649..9d48e33f9 100644
--- a/drivers/net/i40e/i40e_rxtx.h
+++ b/drivers/net/i40e/i40e_rxtx.h
@@ -201,13 +201,13 @@ int i40e_dev_rx_queue_setup(struct rte_eth_dev *dev,
 			    uint16_t queue_idx,
 			    uint16_t nb_desc,
 			    unsigned int socket_id,
-			    const struct rte_eth_rxconf *rx_conf,
+			    const struct rte_eth_rxq_conf *rx_conf,
 			    struct rte_mempool *mp);
 int i40e_dev_tx_queue_setup(struct rte_eth_dev *dev,
 			    uint16_t queue_idx,
 			    uint16_t nb_desc,
 			    unsigned int socket_id,
-			    const struct rte_eth_txconf *tx_conf);
+			    const struct rte_eth_txq_conf *tx_conf);
 void i40e_dev_rx_queue_release(void *rxq);
 void i40e_dev_tx_queue_release(void *txq);
 uint16_t i40e_recv_pkts(void *rx_queue,
diff --git a/drivers/net/ixgbe/ixgbe_ethdev.c b/drivers/net/ixgbe/ixgbe_ethdev.c
index 22171d866..7022f2ecc 100644
--- a/drivers/net/ixgbe/ixgbe_ethdev.c
+++ b/drivers/net/ixgbe/ixgbe_ethdev.c
@@ -3665,7 +3665,7 @@ ixgbe_dev_info_get(struct rte_eth_dev *dev, struct rte_eth_dev_info *dev_info)
 	    hw->mac.type == ixgbe_mac_X550EM_a)
 		dev_info->tx_offload_capa |= DEV_TX_OFFLOAD_OUTER_IPV4_CKSUM;
 
-	dev_info->default_rxconf = (struct rte_eth_rxconf) {
+	dev_info->default_rxconf = (struct rte_eth_rxq_conf) {
 		.rx_thresh = {
 			.pthresh = IXGBE_DEFAULT_RX_PTHRESH,
 			.hthresh = IXGBE_DEFAULT_RX_HTHRESH,
@@ -3675,7 +3675,7 @@ ixgbe_dev_info_get(struct rte_eth_dev *dev, struct rte_eth_dev_info *dev_info)
 		.rx_drop_en = 0,
 	};
 
-	dev_info->default_txconf = (struct rte_eth_txconf) {
+	dev_info->default_txconf = (struct rte_eth_txq_conf) {
 		.tx_thresh = {
 			.pthresh = IXGBE_DEFAULT_TX_PTHRESH,
 			.hthresh = IXGBE_DEFAULT_TX_HTHRESH,
@@ -3776,7 +3776,7 @@ ixgbevf_dev_info_get(struct rte_eth_dev *dev,
 				DEV_TX_OFFLOAD_SCTP_CKSUM  |
 				DEV_TX_OFFLOAD_TCP_TSO;
 
-	dev_info->default_rxconf = (struct rte_eth_rxconf) {
+	dev_info->default_rxconf = (struct rte_eth_rxq_conf) {
 		.rx_thresh = {
 			.pthresh = IXGBE_DEFAULT_RX_PTHRESH,
 			.hthresh = IXGBE_DEFAULT_RX_HTHRESH,
@@ -3786,7 +3786,7 @@ ixgbevf_dev_info_get(struct rte_eth_dev *dev,
 		.rx_drop_en = 0,
 	};
 
-	dev_info->default_txconf = (struct rte_eth_txconf) {
+	dev_info->default_txconf = (struct rte_eth_txq_conf) {
 		.tx_thresh = {
 			.pthresh = IXGBE_DEFAULT_TX_PTHRESH,
 			.hthresh = IXGBE_DEFAULT_TX_HTHRESH,
diff --git a/drivers/net/ixgbe/ixgbe_ethdev.h b/drivers/net/ixgbe/ixgbe_ethdev.h
index caa50c8b9..4085a704a 100644
--- a/drivers/net/ixgbe/ixgbe_ethdev.h
+++ b/drivers/net/ixgbe/ixgbe_ethdev.h
@@ -599,12 +599,12 @@ void ixgbe_dev_tx_queue_release(void *txq);
 
 int  ixgbe_dev_rx_queue_setup(struct rte_eth_dev *dev, uint16_t rx_queue_id,
 		uint16_t nb_rx_desc, unsigned int socket_id,
-		const struct rte_eth_rxconf *rx_conf,
+		const struct rte_eth_rxq_conf *rx_conf,
 		struct rte_mempool *mb_pool);
 
 int  ixgbe_dev_tx_queue_setup(struct rte_eth_dev *dev, uint16_t tx_queue_id,
 		uint16_t nb_tx_desc, unsigned int socket_id,
-		const struct rte_eth_txconf *tx_conf);
+		const struct rte_eth_txq_conf *tx_conf);
 
 uint32_t ixgbe_dev_rx_queue_count(struct rte_eth_dev *dev,
 		uint16_t rx_queue_id);
diff --git a/drivers/net/ixgbe/ixgbe_rxtx.c b/drivers/net/ixgbe/ixgbe_rxtx.c
index 98d0e1a86..b6b21403d 100644
--- a/drivers/net/ixgbe/ixgbe_rxtx.c
+++ b/drivers/net/ixgbe/ixgbe_rxtx.c
@@ -2397,7 +2397,7 @@ ixgbe_dev_tx_queue_setup(struct rte_eth_dev *dev,
 			 uint16_t queue_idx,
 			 uint16_t nb_desc,
 			 unsigned int socket_id,
-			 const struct rte_eth_txconf *tx_conf)
+			 const struct rte_eth_txq_conf *tx_conf)
 {
 	const struct rte_memzone *tz;
 	struct ixgbe_tx_queue *txq;
@@ -2752,7 +2752,7 @@ ixgbe_dev_rx_queue_setup(struct rte_eth_dev *dev,
 			 uint16_t queue_idx,
 			 uint16_t nb_desc,
 			 unsigned int socket_id,
-			 const struct rte_eth_rxconf *rx_conf,
+			 const struct rte_eth_rxq_conf *rx_conf,
 			 struct rte_mempool *mp)
 {
 	const struct rte_memzone *rz;
diff --git a/drivers/net/kni/rte_eth_kni.c b/drivers/net/kni/rte_eth_kni.c
index 72a2733ba..e2ef7644f 100644
--- a/drivers/net/kni/rte_eth_kni.c
+++ b/drivers/net/kni/rte_eth_kni.c
@@ -238,7 +238,7 @@ eth_kni_rx_queue_setup(struct rte_eth_dev *dev,
 		uint16_t rx_queue_id,
 		uint16_t nb_rx_desc __rte_unused,
 		unsigned int socket_id __rte_unused,
-		const struct rte_eth_rxconf *rx_conf __rte_unused,
+		const struct rte_eth_rxq_conf *rx_conf __rte_unused,
 		struct rte_mempool *mb_pool)
 {
 	struct pmd_internals *internals = dev->data->dev_private;
@@ -258,7 +258,7 @@ eth_kni_tx_queue_setup(struct rte_eth_dev *dev,
 		uint16_t tx_queue_id,
 		uint16_t nb_tx_desc __rte_unused,
 		unsigned int socket_id __rte_unused,
-		const struct rte_eth_txconf *tx_conf __rte_unused)
+		const struct rte_eth_txq_conf *tx_conf __rte_unused)
 {
 	struct pmd_internals *internals = dev->data->dev_private;
 	struct pmd_queue *q;
diff --git a/drivers/net/liquidio/lio_ethdev.c b/drivers/net/liquidio/lio_ethdev.c
index a17fba501..e1bbddde7 100644
--- a/drivers/net/liquidio/lio_ethdev.c
+++ b/drivers/net/liquidio/lio_ethdev.c
@@ -1150,7 +1150,7 @@ lio_dev_mq_rx_configure(struct rte_eth_dev *eth_dev)
  * @param socket_id
  *    Where to allocate memory
  * @param rx_conf
- *    Pointer to the struction rte_eth_rxconf
+ *    Pointer to the struction rte_eth_rxq_conf
  * @param mp
  *    Pointer to the packet pool
  *
@@ -1161,7 +1161,7 @@ lio_dev_mq_rx_configure(struct rte_eth_dev *eth_dev)
 static int
 lio_dev_rx_queue_setup(struct rte_eth_dev *eth_dev, uint16_t q_no,
 		       uint16_t num_rx_descs, unsigned int socket_id,
-		       const struct rte_eth_rxconf *rx_conf __rte_unused,
+		       const struct rte_eth_rxq_conf *rx_conf __rte_unused,
 		       struct rte_mempool *mp)
 {
 	struct lio_device *lio_dev = LIO_DEV(eth_dev);
@@ -1242,7 +1242,7 @@ lio_dev_rx_queue_release(void *rxq)
  *   NUMA socket id, used for memory allocations
  *
  * @param tx_conf
- *   Pointer to the structure rte_eth_txconf
+ *   Pointer to the structure rte_eth_txq_conf
  *
  * @return
  *   - On success, return 0
@@ -1251,7 +1251,7 @@ lio_dev_rx_queue_release(void *rxq)
 static int
 lio_dev_tx_queue_setup(struct rte_eth_dev *eth_dev, uint16_t q_no,
 		       uint16_t num_tx_descs, unsigned int socket_id,
-		       const struct rte_eth_txconf *tx_conf __rte_unused)
+		       const struct rte_eth_txq_conf *tx_conf __rte_unused)
 {
 	struct lio_device *lio_dev = LIO_DEV(eth_dev);
 	int fw_mapped_iq = lio_dev->linfo.txpciq[q_no].s.q_no;
diff --git a/drivers/net/mlx4/mlx4.c b/drivers/net/mlx4/mlx4.c
index 055de49a3..2db8b5646 100644
--- a/drivers/net/mlx4/mlx4.c
+++ b/drivers/net/mlx4/mlx4.c
@@ -539,7 +539,7 @@ priv_set_flags(struct priv *priv, unsigned int keep, unsigned int flags)
 
 static int
 txq_setup(struct rte_eth_dev *dev, struct txq *txq, uint16_t desc,
-	  unsigned int socket, const struct rte_eth_txconf *conf);
+	  unsigned int socket, const struct rte_eth_txq_conf *conf);
 
 static void
 txq_cleanup(struct txq *txq);
@@ -547,7 +547,7 @@ txq_cleanup(struct txq *txq);
 static int
 rxq_setup(struct rte_eth_dev *dev, struct rxq *rxq, uint16_t desc,
 	  unsigned int socket, int inactive,
-	  const struct rte_eth_rxconf *conf,
+	  const struct rte_eth_rxq_conf *conf,
 	  struct rte_mempool *mp, int children_n,
 	  struct rxq *rxq_parent);
 
@@ -1762,7 +1762,7 @@ mlx4_tx_burst_secondary_setup(void *dpdk_txq, struct rte_mbuf **pkts,
  */
 static int
 txq_setup(struct rte_eth_dev *dev, struct txq *txq, uint16_t desc,
-	  unsigned int socket, const struct rte_eth_txconf *conf)
+	  unsigned int socket, const struct rte_eth_txq_conf *conf)
 {
 	struct priv *priv = mlx4_get_priv(dev);
 	struct txq tmpl = {
@@ -1954,7 +1954,7 @@ txq_setup(struct rte_eth_dev *dev, struct txq *txq, uint16_t desc,
  */
 static int
 mlx4_tx_queue_setup(struct rte_eth_dev *dev, uint16_t idx, uint16_t desc,
-		    unsigned int socket, const struct rte_eth_txconf *conf)
+		    unsigned int socket, const struct rte_eth_txq_conf *conf)
 {
 	struct priv *priv = dev->data->dev_private;
 	struct txq *txq = (*priv->txqs)[idx];
@@ -3830,7 +3830,7 @@ rxq_create_qp(struct rxq *rxq,
 static int
 rxq_setup(struct rte_eth_dev *dev, struct rxq *rxq, uint16_t desc,
 	  unsigned int socket, int inactive,
-	  const struct rte_eth_rxconf *conf,
+	  const struct rte_eth_rxq_conf *conf,
 	  struct rte_mempool *mp, int children_n,
 	  struct rxq *rxq_parent)
 {
@@ -4007,7 +4007,7 @@ rxq_setup(struct rte_eth_dev *dev, struct rxq *rxq, uint16_t desc,
  */
 static int
 mlx4_rx_queue_setup(struct rte_eth_dev *dev, uint16_t idx, uint16_t desc,
-		    unsigned int socket, const struct rte_eth_rxconf *conf,
+		    unsigned int socket, const struct rte_eth_rxq_conf *conf,
 		    struct rte_mempool *mp)
 {
 	struct rxq *parent;
diff --git a/drivers/net/mlx5/mlx5_rxq.c b/drivers/net/mlx5/mlx5_rxq.c
index 35c5cb42e..85428950c 100644
--- a/drivers/net/mlx5/mlx5_rxq.c
+++ b/drivers/net/mlx5/mlx5_rxq.c
@@ -843,7 +843,7 @@ rxq_setup(struct rxq_ctrl *tmpl)
 static int
 rxq_ctrl_setup(struct rte_eth_dev *dev, struct rxq_ctrl *rxq_ctrl,
 	       uint16_t desc, unsigned int socket,
-	       const struct rte_eth_rxconf *conf, struct rte_mempool *mp)
+	       const struct rte_eth_rxq_conf *conf, struct rte_mempool *mp)
 {
 	struct priv *priv = dev->data->dev_private;
 	struct rxq_ctrl tmpl = {
@@ -1110,7 +1110,7 @@ rxq_ctrl_setup(struct rte_eth_dev *dev, struct rxq_ctrl *rxq_ctrl,
  */
 int
 mlx5_rx_queue_setup(struct rte_eth_dev *dev, uint16_t idx, uint16_t desc,
-		    unsigned int socket, const struct rte_eth_rxconf *conf,
+		    unsigned int socket, const struct rte_eth_rxq_conf *conf,
 		    struct rte_mempool *mp)
 {
 	struct priv *priv = dev->data->dev_private;
diff --git a/drivers/net/mlx5/mlx5_rxtx.h b/drivers/net/mlx5/mlx5_rxtx.h
index 033e70f25..eb5315760 100644
--- a/drivers/net/mlx5/mlx5_rxtx.h
+++ b/drivers/net/mlx5/mlx5_rxtx.h
@@ -301,7 +301,7 @@ int priv_allow_flow_type(struct priv *, enum hash_rxq_flow_type);
 int priv_rehash_flows(struct priv *);
 void rxq_cleanup(struct rxq_ctrl *);
 int mlx5_rx_queue_setup(struct rte_eth_dev *, uint16_t, uint16_t, unsigned int,
-			const struct rte_eth_rxconf *, struct rte_mempool *);
+			const struct rte_eth_rxq_conf *, struct rte_mempool *);
 void mlx5_rx_queue_release(void *);
 int priv_rx_intr_vec_enable(struct priv *priv);
 void priv_rx_intr_vec_disable(struct priv *priv);
@@ -314,9 +314,9 @@ int mlx5_rx_intr_disable(struct rte_eth_dev *dev, uint16_t rx_queue_id);
 
 void txq_cleanup(struct txq_ctrl *);
 int txq_ctrl_setup(struct rte_eth_dev *, struct txq_ctrl *, uint16_t,
-		   unsigned int, const struct rte_eth_txconf *);
+		   unsigned int, const struct rte_eth_txq_conf *);
 int mlx5_tx_queue_setup(struct rte_eth_dev *, uint16_t, uint16_t, unsigned int,
-			const struct rte_eth_txconf *);
+			const struct rte_eth_txq_conf *);
 void mlx5_tx_queue_release(void *);
 
 /* mlx5_rxtx.c */
diff --git a/drivers/net/mlx5/mlx5_txq.c b/drivers/net/mlx5/mlx5_txq.c
index 4b0b532b1..7b8c2f766 100644
--- a/drivers/net/mlx5/mlx5_txq.c
+++ b/drivers/net/mlx5/mlx5_txq.c
@@ -211,7 +211,7 @@ txq_setup(struct txq_ctrl *tmpl, struct txq_ctrl *txq_ctrl)
 int
 txq_ctrl_setup(struct rte_eth_dev *dev, struct txq_ctrl *txq_ctrl,
 	       uint16_t desc, unsigned int socket,
-	       const struct rte_eth_txconf *conf)
+	       const struct rte_eth_txq_conf *conf)
 {
 	struct priv *priv = mlx5_get_priv(dev);
 	struct txq_ctrl tmpl = {
@@ -413,7 +413,7 @@ txq_ctrl_setup(struct rte_eth_dev *dev, struct txq_ctrl *txq_ctrl,
  */
 int
 mlx5_tx_queue_setup(struct rte_eth_dev *dev, uint16_t idx, uint16_t desc,
-		    unsigned int socket, const struct rte_eth_txconf *conf)
+		    unsigned int socket, const struct rte_eth_txq_conf *conf)
 {
 	struct priv *priv = dev->data->dev_private;
 	struct txq *txq = (*priv->txqs)[idx];
diff --git a/drivers/net/nfp/nfp_net.c b/drivers/net/nfp/nfp_net.c
index a3bf5e1f1..4122824d9 100644
--- a/drivers/net/nfp/nfp_net.c
+++ b/drivers/net/nfp/nfp_net.c
@@ -79,13 +79,13 @@ static uint16_t nfp_net_recv_pkts(void *rx_queue, struct rte_mbuf **rx_pkts,
 static void nfp_net_rx_queue_release(void *rxq);
 static int nfp_net_rx_queue_setup(struct rte_eth_dev *dev, uint16_t queue_idx,
 				  uint16_t nb_desc, unsigned int socket_id,
-				  const struct rte_eth_rxconf *rx_conf,
+				  const struct rte_eth_rxq_conf *rx_conf,
 				  struct rte_mempool *mp);
 static int nfp_net_tx_free_bufs(struct nfp_net_txq *txq);
 static void nfp_net_tx_queue_release(void *txq);
 static int nfp_net_tx_queue_setup(struct rte_eth_dev *dev, uint16_t queue_idx,
 				  uint16_t nb_desc, unsigned int socket_id,
-				  const struct rte_eth_txconf *tx_conf);
+				  const struct rte_eth_txq_conf *tx_conf);
 static int nfp_net_start(struct rte_eth_dev *dev);
 static void nfp_net_stats_get(struct rte_eth_dev *dev,
 			      struct rte_eth_stats *stats);
@@ -1119,7 +1119,7 @@ nfp_net_infos_get(struct rte_eth_dev *dev, struct rte_eth_dev_info *dev_info)
 					     DEV_TX_OFFLOAD_UDP_CKSUM |
 					     DEV_TX_OFFLOAD_TCP_CKSUM;
 
-	dev_info->default_rxconf = (struct rte_eth_rxconf) {
+	dev_info->default_rxconf = (struct rte_eth_rxq_conf) {
 		.rx_thresh = {
 			.pthresh = DEFAULT_RX_PTHRESH,
 			.hthresh = DEFAULT_RX_HTHRESH,
@@ -1129,7 +1129,7 @@ nfp_net_infos_get(struct rte_eth_dev *dev, struct rte_eth_dev_info *dev_info)
 		.rx_drop_en = 0,
 	};
 
-	dev_info->default_txconf = (struct rte_eth_txconf) {
+	dev_info->default_txconf = (struct rte_eth_txq_conf) {
 		.tx_thresh = {
 			.pthresh = DEFAULT_TX_PTHRESH,
 			.hthresh = DEFAULT_TX_HTHRESH,
@@ -1388,7 +1388,7 @@ static int
 nfp_net_rx_queue_setup(struct rte_eth_dev *dev,
 		       uint16_t queue_idx, uint16_t nb_desc,
 		       unsigned int socket_id,
-		       const struct rte_eth_rxconf *rx_conf,
+		       const struct rte_eth_rxq_conf *rx_conf,
 		       struct rte_mempool *mp)
 {
 	const struct rte_memzone *tz;
@@ -1537,7 +1537,7 @@ nfp_net_rx_fill_freelist(struct nfp_net_rxq *rxq)
 static int
 nfp_net_tx_queue_setup(struct rte_eth_dev *dev, uint16_t queue_idx,
 		       uint16_t nb_desc, unsigned int socket_id,
-		       const struct rte_eth_txconf *tx_conf)
+		       const struct rte_eth_txq_conf *tx_conf)
 {
 	const struct rte_memzone *tz;
 	struct nfp_net_txq *txq;
diff --git a/drivers/net/null/rte_eth_null.c b/drivers/net/null/rte_eth_null.c
index 5aef0591e..7ae14b77b 100644
--- a/drivers/net/null/rte_eth_null.c
+++ b/drivers/net/null/rte_eth_null.c
@@ -214,7 +214,7 @@ static int
 eth_rx_queue_setup(struct rte_eth_dev *dev, uint16_t rx_queue_id,
 		uint16_t nb_rx_desc __rte_unused,
 		unsigned int socket_id __rte_unused,
-		const struct rte_eth_rxconf *rx_conf __rte_unused,
+		const struct rte_eth_rxq_conf *rx_conf __rte_unused,
 		struct rte_mempool *mb_pool)
 {
 	struct rte_mbuf *dummy_packet;
@@ -249,7 +249,7 @@ static int
 eth_tx_queue_setup(struct rte_eth_dev *dev, uint16_t tx_queue_id,
 		uint16_t nb_tx_desc __rte_unused,
 		unsigned int socket_id __rte_unused,
-		const struct rte_eth_txconf *tx_conf __rte_unused)
+		const struct rte_eth_txq_conf *tx_conf __rte_unused)
 {
 	struct rte_mbuf *dummy_packet;
 	struct pmd_internals *internals;
diff --git a/drivers/net/pcap/rte_eth_pcap.c b/drivers/net/pcap/rte_eth_pcap.c
index defb3b419..874856712 100644
--- a/drivers/net/pcap/rte_eth_pcap.c
+++ b/drivers/net/pcap/rte_eth_pcap.c
@@ -634,7 +634,7 @@ eth_rx_queue_setup(struct rte_eth_dev *dev,
 		uint16_t rx_queue_id,
 		uint16_t nb_rx_desc __rte_unused,
 		unsigned int socket_id __rte_unused,
-		const struct rte_eth_rxconf *rx_conf __rte_unused,
+		const struct rte_eth_rxq_conf *rx_conf __rte_unused,
 		struct rte_mempool *mb_pool)
 {
 	struct pmd_internals *internals = dev->data->dev_private;
@@ -652,7 +652,7 @@ eth_tx_queue_setup(struct rte_eth_dev *dev,
 		uint16_t tx_queue_id,
 		uint16_t nb_tx_desc __rte_unused,
 		unsigned int socket_id __rte_unused,
-		const struct rte_eth_txconf *tx_conf __rte_unused)
+		const struct rte_eth_txq_conf *tx_conf __rte_unused)
 {
 	struct pmd_internals *internals = dev->data->dev_private;
 
diff --git a/drivers/net/qede/qede_ethdev.c b/drivers/net/qede/qede_ethdev.c
index 4e9e89fad..5b6df9688 100644
--- a/drivers/net/qede/qede_ethdev.c
+++ b/drivers/net/qede/qede_ethdev.c
@@ -1293,7 +1293,7 @@ qede_dev_info_get(struct rte_eth_dev *eth_dev,
 	dev_info->hash_key_size = ECORE_RSS_KEY_SIZE * sizeof(uint32_t);
 	dev_info->flow_type_rss_offloads = (uint64_t)QEDE_RSS_OFFLOAD_ALL;
 
-	dev_info->default_txconf = (struct rte_eth_txconf) {
+	dev_info->default_txconf = (struct rte_eth_txq_conf) {
 		.txq_flags = QEDE_TXQ_FLAGS,
 	};
 
diff --git a/drivers/net/qede/qede_rxtx.c b/drivers/net/qede/qede_rxtx.c
index 5c3613c7c..98da5f975 100644
--- a/drivers/net/qede/qede_rxtx.c
+++ b/drivers/net/qede/qede_rxtx.c
@@ -40,7 +40,7 @@ static inline int qede_alloc_rx_buffer(struct qede_rx_queue *rxq)
 int
 qede_rx_queue_setup(struct rte_eth_dev *dev, uint16_t queue_idx,
 		    uint16_t nb_desc, unsigned int socket_id,
-		    __rte_unused const struct rte_eth_rxconf *rx_conf,
+		    __rte_unused const struct rte_eth_rxq_conf *rx_conf,
 		    struct rte_mempool *mp)
 {
 	struct qede_dev *qdev = QEDE_INIT_QDEV(dev);
@@ -238,7 +238,7 @@ qede_tx_queue_setup(struct rte_eth_dev *dev,
 		    uint16_t queue_idx,
 		    uint16_t nb_desc,
 		    unsigned int socket_id,
-		    const struct rte_eth_txconf *tx_conf)
+		    const struct rte_eth_txq_conf *tx_conf)
 {
 	struct qede_dev *qdev = dev->data->dev_private;
 	struct ecore_dev *edev = &qdev->edev;
diff --git a/drivers/net/qede/qede_rxtx.h b/drivers/net/qede/qede_rxtx.h
index b551fd6ae..0c10b8ebe 100644
--- a/drivers/net/qede/qede_rxtx.h
+++ b/drivers/net/qede/qede_rxtx.h
@@ -225,14 +225,14 @@ struct qede_fastpath {
  */
 int qede_rx_queue_setup(struct rte_eth_dev *dev, uint16_t queue_idx,
 			uint16_t nb_desc, unsigned int socket_id,
-			const struct rte_eth_rxconf *rx_conf,
+			const struct rte_eth_rxq_conf *rx_conf,
 			struct rte_mempool *mp);
 
 int qede_tx_queue_setup(struct rte_eth_dev *dev,
 			uint16_t queue_idx,
 			uint16_t nb_desc,
 			unsigned int socket_id,
-			const struct rte_eth_txconf *tx_conf);
+			const struct rte_eth_txq_conf *tx_conf);
 
 void qede_rx_queue_release(void *rx_queue);
 
diff --git a/drivers/net/ring/rte_eth_ring.c b/drivers/net/ring/rte_eth_ring.c
index 464d3d384..6d077e3cf 100644
--- a/drivers/net/ring/rte_eth_ring.c
+++ b/drivers/net/ring/rte_eth_ring.c
@@ -155,11 +155,12 @@ eth_dev_set_link_up(struct rte_eth_dev *dev)
 }
 
 static int
-eth_rx_queue_setup(struct rte_eth_dev *dev, uint16_t rx_queue_id,
-				    uint16_t nb_rx_desc __rte_unused,
-				    unsigned int socket_id __rte_unused,
-				    const struct rte_eth_rxconf *rx_conf __rte_unused,
-				    struct rte_mempool *mb_pool __rte_unused)
+eth_rx_queue_setup(struct rte_eth_dev *dev,
+		   uint16_t rx_queue_id,
+		   uint16_t nb_rx_desc __rte_unused,
+		   unsigned int socket_id __rte_unused,
+		   const struct rte_eth_rxq_conf *rx_conf __rte_unused,
+		   struct rte_mempool *mb_pool __rte_unused)
 {
 	struct pmd_internals *internals = dev->data->dev_private;
 	dev->data->rx_queues[rx_queue_id] = &internals->rx_ring_queues[rx_queue_id];
@@ -167,10 +168,11 @@ eth_rx_queue_setup(struct rte_eth_dev *dev, uint16_t rx_queue_id,
 }
 
 static int
-eth_tx_queue_setup(struct rte_eth_dev *dev, uint16_t tx_queue_id,
-				    uint16_t nb_tx_desc __rte_unused,
-				    unsigned int socket_id __rte_unused,
-				    const struct rte_eth_txconf *tx_conf __rte_unused)
+eth_tx_queue_setup(struct rte_eth_dev *dev,
+		   uint16_t tx_queue_id,
+		   uint16_t nb_tx_desc __rte_unused,
+		   unsigned int socket_id __rte_unused,
+		   const struct rte_eth_txq_conf *tx_conf __rte_unused)
 {
 	struct pmd_internals *internals = dev->data->dev_private;
 	dev->data->tx_queues[tx_queue_id] = &internals->tx_ring_queues[tx_queue_id];
diff --git a/drivers/net/sfc/sfc_ethdev.c b/drivers/net/sfc/sfc_ethdev.c
index 2b037d863..959a2b42f 100644
--- a/drivers/net/sfc/sfc_ethdev.c
+++ b/drivers/net/sfc/sfc_ethdev.c
@@ -404,7 +404,7 @@ sfc_dev_allmulti_disable(struct rte_eth_dev *dev)
 static int
 sfc_rx_queue_setup(struct rte_eth_dev *dev, uint16_t rx_queue_id,
 		   uint16_t nb_rx_desc, unsigned int socket_id,
-		   const struct rte_eth_rxconf *rx_conf,
+		   const struct rte_eth_rxq_conf *rx_conf,
 		   struct rte_mempool *mb_pool)
 {
 	struct sfc_adapter *sa = dev->data->dev_private;
@@ -461,7 +461,7 @@ sfc_rx_queue_release(void *queue)
 static int
 sfc_tx_queue_setup(struct rte_eth_dev *dev, uint16_t tx_queue_id,
 		   uint16_t nb_tx_desc, unsigned int socket_id,
-		   const struct rte_eth_txconf *tx_conf)
+		   const struct rte_eth_txq_conf *tx_conf)
 {
 	struct sfc_adapter *sa = dev->data->dev_private;
 	int rc;
diff --git a/drivers/net/sfc/sfc_rx.c b/drivers/net/sfc/sfc_rx.c
index 79ed046ce..079df6272 100644
--- a/drivers/net/sfc/sfc_rx.c
+++ b/drivers/net/sfc/sfc_rx.c
@@ -772,7 +772,7 @@ sfc_rx_qstop(struct sfc_adapter *sa, unsigned int sw_index)
 
 static int
 sfc_rx_qcheck_conf(struct sfc_adapter *sa, uint16_t nb_rx_desc,
-		   const struct rte_eth_rxconf *rx_conf)
+		   const struct rte_eth_rxq_conf *rx_conf)
 {
 	const uint16_t rx_free_thresh_max = EFX_RXQ_LIMIT(nb_rx_desc);
 	int rc = 0;
@@ -903,7 +903,7 @@ sfc_rx_mb_pool_buf_size(struct sfc_adapter *sa, struct rte_mempool *mb_pool)
 int
 sfc_rx_qinit(struct sfc_adapter *sa, unsigned int sw_index,
 	     uint16_t nb_rx_desc, unsigned int socket_id,
-	     const struct rte_eth_rxconf *rx_conf,
+	     const struct rte_eth_rxq_conf *rx_conf,
 	     struct rte_mempool *mb_pool)
 {
 	const efx_nic_cfg_t *encp = efx_nic_cfg_get(sa->nic);
diff --git a/drivers/net/sfc/sfc_rx.h b/drivers/net/sfc/sfc_rx.h
index 9e6282ead..126c41089 100644
--- a/drivers/net/sfc/sfc_rx.h
+++ b/drivers/net/sfc/sfc_rx.h
@@ -156,7 +156,7 @@ void sfc_rx_stop(struct sfc_adapter *sa);
 
 int sfc_rx_qinit(struct sfc_adapter *sa, unsigned int rx_queue_id,
 		 uint16_t nb_rx_desc, unsigned int socket_id,
-		 const struct rte_eth_rxconf *rx_conf,
+		 const struct rte_eth_rxq_conf *rx_conf,
 		 struct rte_mempool *mb_pool);
 void sfc_rx_qfini(struct sfc_adapter *sa, unsigned int sw_index);
 int sfc_rx_qstart(struct sfc_adapter *sa, unsigned int sw_index);
diff --git a/drivers/net/sfc/sfc_tx.c b/drivers/net/sfc/sfc_tx.c
index bf596017a..fe030baa4 100644
--- a/drivers/net/sfc/sfc_tx.c
+++ b/drivers/net/sfc/sfc_tx.c
@@ -58,7 +58,7 @@
 
 static int
 sfc_tx_qcheck_conf(struct sfc_adapter *sa, uint16_t nb_tx_desc,
-		   const struct rte_eth_txconf *tx_conf)
+		   const struct rte_eth_txq_conf *tx_conf)
 {
 	unsigned int flags = tx_conf->txq_flags;
 	const efx_nic_cfg_t *encp = efx_nic_cfg_get(sa->nic);
@@ -128,7 +128,7 @@ sfc_tx_qflush_done(struct sfc_txq *txq)
 int
 sfc_tx_qinit(struct sfc_adapter *sa, unsigned int sw_index,
 	     uint16_t nb_tx_desc, unsigned int socket_id,
-	     const struct rte_eth_txconf *tx_conf)
+	     const struct rte_eth_txq_conf *tx_conf)
 {
 	const efx_nic_cfg_t *encp = efx_nic_cfg_get(sa->nic);
 	struct sfc_txq_info *txq_info;
diff --git a/drivers/net/sfc/sfc_tx.h b/drivers/net/sfc/sfc_tx.h
index 0c1c7083b..90b5eb7d7 100644
--- a/drivers/net/sfc/sfc_tx.h
+++ b/drivers/net/sfc/sfc_tx.h
@@ -141,7 +141,7 @@ void sfc_tx_close(struct sfc_adapter *sa);
 
 int sfc_tx_qinit(struct sfc_adapter *sa, unsigned int sw_index,
 		 uint16_t nb_tx_desc, unsigned int socket_id,
-		 const struct rte_eth_txconf *tx_conf);
+		 const struct rte_eth_txq_conf *tx_conf);
 void sfc_tx_qfini(struct sfc_adapter *sa, unsigned int sw_index);
 
 void sfc_tx_qflush_done(struct sfc_txq *txq);
diff --git a/drivers/net/szedata2/rte_eth_szedata2.c b/drivers/net/szedata2/rte_eth_szedata2.c
index 9c0d57cc1..6ba24a263 100644
--- a/drivers/net/szedata2/rte_eth_szedata2.c
+++ b/drivers/net/szedata2/rte_eth_szedata2.c
@@ -1253,7 +1253,7 @@ eth_rx_queue_setup(struct rte_eth_dev *dev,
 		uint16_t rx_queue_id,
 		uint16_t nb_rx_desc __rte_unused,
 		unsigned int socket_id __rte_unused,
-		const struct rte_eth_rxconf *rx_conf __rte_unused,
+		const struct rte_eth_rxq_conf *rx_conf __rte_unused,
 		struct rte_mempool *mb_pool)
 {
 	struct pmd_internals *internals = dev->data->dev_private;
@@ -1287,7 +1287,7 @@ eth_tx_queue_setup(struct rte_eth_dev *dev,
 		uint16_t tx_queue_id,
 		uint16_t nb_tx_desc __rte_unused,
 		unsigned int socket_id __rte_unused,
-		const struct rte_eth_txconf *tx_conf __rte_unused)
+		const struct rte_eth_txq_conf *tx_conf __rte_unused)
 {
 	struct pmd_internals *internals = dev->data->dev_private;
 	struct szedata2_tx_queue *txq = &internals->tx_queue[tx_queue_id];
diff --git a/drivers/net/tap/rte_eth_tap.c b/drivers/net/tap/rte_eth_tap.c
index 9acea8398..5a1125a7a 100644
--- a/drivers/net/tap/rte_eth_tap.c
+++ b/drivers/net/tap/rte_eth_tap.c
@@ -918,7 +918,7 @@ tap_rx_queue_setup(struct rte_eth_dev *dev,
 		   uint16_t rx_queue_id,
 		   uint16_t nb_rx_desc,
 		   unsigned int socket_id,
-		   const struct rte_eth_rxconf *rx_conf __rte_unused,
+		   const struct rte_eth_rxq_conf *rx_conf __rte_unused,
 		   struct rte_mempool *mp)
 {
 	struct pmd_internals *internals = dev->data->dev_private;
@@ -997,7 +997,7 @@ tap_tx_queue_setup(struct rte_eth_dev *dev,
 		   uint16_t tx_queue_id,
 		   uint16_t nb_tx_desc __rte_unused,
 		   unsigned int socket_id __rte_unused,
-		   const struct rte_eth_txconf *tx_conf __rte_unused)
+		   const struct rte_eth_txq_conf *tx_conf __rte_unused)
 {
 	struct pmd_internals *internals = dev->data->dev_private;
 	int ret;
diff --git a/drivers/net/thunderx/nicvf_ethdev.c b/drivers/net/thunderx/nicvf_ethdev.c
index edc17f1d4..3ddca8b49 100644
--- a/drivers/net/thunderx/nicvf_ethdev.c
+++ b/drivers/net/thunderx/nicvf_ethdev.c
@@ -936,7 +936,7 @@ nicvf_set_rx_function(struct rte_eth_dev *dev)
 static int
 nicvf_dev_tx_queue_setup(struct rte_eth_dev *dev, uint16_t qidx,
 			 uint16_t nb_desc, unsigned int socket_id,
-			 const struct rte_eth_txconf *tx_conf)
+			 const struct rte_eth_txq_conf *tx_conf)
 {
 	uint16_t tx_free_thresh;
 	uint8_t is_single_pool;
@@ -1261,7 +1261,7 @@ nicvf_rxq_mbuf_setup(struct nicvf_rxq *rxq)
 static int
 nicvf_dev_rx_queue_setup(struct rte_eth_dev *dev, uint16_t qidx,
 			 uint16_t nb_desc, unsigned int socket_id,
-			 const struct rte_eth_rxconf *rx_conf,
+			 const struct rte_eth_rxq_conf *rx_conf,
 			 struct rte_mempool *mp)
 {
 	uint16_t rx_free_thresh;
@@ -1403,12 +1403,12 @@ nicvf_dev_info_get(struct rte_eth_dev *dev, struct rte_eth_dev_info *dev_info)
 	if (nicvf_hw_cap(nic) & NICVF_CAP_TUNNEL_PARSING)
 		dev_info->flow_type_rss_offloads |= NICVF_RSS_OFFLOAD_TUNNEL;
 
-	dev_info->default_rxconf = (struct rte_eth_rxconf) {
+	dev_info->default_rxconf = (struct rte_eth_rxq_conf) {
 		.rx_free_thresh = NICVF_DEFAULT_RX_FREE_THRESH,
 		.rx_drop_en = 0,
 	};
 
-	dev_info->default_txconf = (struct rte_eth_txconf) {
+	dev_info->default_txconf = (struct rte_eth_txq_conf) {
 		.tx_free_thresh = NICVF_DEFAULT_TX_FREE_THRESH,
 		.txq_flags =
 			ETH_TXQ_FLAGS_NOMULTSEGS  |
diff --git a/drivers/net/vhost/rte_eth_vhost.c b/drivers/net/vhost/rte_eth_vhost.c
index 0dac5e60e..c90d06bd7 100644
--- a/drivers/net/vhost/rte_eth_vhost.c
+++ b/drivers/net/vhost/rte_eth_vhost.c
@@ -831,7 +831,7 @@ static int
 eth_rx_queue_setup(struct rte_eth_dev *dev, uint16_t rx_queue_id,
 		   uint16_t nb_rx_desc __rte_unused,
 		   unsigned int socket_id,
-		   const struct rte_eth_rxconf *rx_conf __rte_unused,
+		   const struct rte_eth_rxq_conf *rx_conf __rte_unused,
 		   struct rte_mempool *mb_pool)
 {
 	struct vhost_queue *vq;
@@ -854,7 +854,7 @@ static int
 eth_tx_queue_setup(struct rte_eth_dev *dev, uint16_t tx_queue_id,
 		   uint16_t nb_tx_desc __rte_unused,
 		   unsigned int socket_id,
-		   const struct rte_eth_txconf *tx_conf __rte_unused)
+		   const struct rte_eth_txq_conf *tx_conf __rte_unused)
 {
 	struct vhost_queue *vq;
 
diff --git a/drivers/net/virtio/virtio_ethdev.c b/drivers/net/virtio/virtio_ethdev.c
index e320811ed..763b30e9a 100644
--- a/drivers/net/virtio/virtio_ethdev.c
+++ b/drivers/net/virtio/virtio_ethdev.c
@@ -1891,7 +1891,7 @@ virtio_dev_info_get(struct rte_eth_dev *dev, struct rte_eth_dev_info *dev_info)
 	dev_info->min_rx_bufsize = VIRTIO_MIN_RX_BUFSIZE;
 	dev_info->max_rx_pktlen = VIRTIO_MAX_RX_PKTLEN;
 	dev_info->max_mac_addrs = VIRTIO_MAX_MAC_ADDRS;
-	dev_info->default_txconf = (struct rte_eth_txconf) {
+	dev_info->default_txconf = (struct rte_eth_txq_conf) {
 		.txq_flags = ETH_TXQ_FLAGS_NOOFFLOADS
 	};
 
diff --git a/drivers/net/virtio/virtio_ethdev.h b/drivers/net/virtio/virtio_ethdev.h
index c3413c6d9..57f0d7ad2 100644
--- a/drivers/net/virtio/virtio_ethdev.h
+++ b/drivers/net/virtio/virtio_ethdev.h
@@ -89,12 +89,12 @@ int virtio_dev_rx_queue_done(void *rxq, uint16_t offset);
 
 int  virtio_dev_rx_queue_setup(struct rte_eth_dev *dev, uint16_t rx_queue_id,
 		uint16_t nb_rx_desc, unsigned int socket_id,
-		const struct rte_eth_rxconf *rx_conf,
+		const struct rte_eth_rxq_conf *rx_conf,
 		struct rte_mempool *mb_pool);
 
 int  virtio_dev_tx_queue_setup(struct rte_eth_dev *dev, uint16_t tx_queue_id,
 		uint16_t nb_tx_desc, unsigned int socket_id,
-		const struct rte_eth_txconf *tx_conf);
+		const struct rte_eth_txq_conf *tx_conf);
 
 uint16_t virtio_recv_pkts(void *rx_queue, struct rte_mbuf **rx_pkts,
 		uint16_t nb_pkts);
diff --git a/drivers/net/virtio/virtio_rxtx.c b/drivers/net/virtio/virtio_rxtx.c
index e30377c51..cff1d9b62 100644
--- a/drivers/net/virtio/virtio_rxtx.c
+++ b/drivers/net/virtio/virtio_rxtx.c
@@ -414,7 +414,7 @@ virtio_dev_rx_queue_setup(struct rte_eth_dev *dev,
 			uint16_t queue_idx,
 			uint16_t nb_desc,
 			unsigned int socket_id __rte_unused,
-			__rte_unused const struct rte_eth_rxconf *rx_conf,
+			__rte_unused const struct rte_eth_rxq_conf *rx_conf,
 			struct rte_mempool *mp)
 {
 	uint16_t vtpci_queue_idx = 2 * queue_idx + VTNET_SQ_RQ_QUEUE_IDX;
@@ -492,7 +492,7 @@ virtio_dev_rx_queue_setup(struct rte_eth_dev *dev,
 
 static void
 virtio_update_rxtx_handler(struct rte_eth_dev *dev,
-			   const struct rte_eth_txconf *tx_conf)
+			   const struct rte_eth_txq_conf *tx_conf)
 {
 	uint8_t use_simple_rxtx = 0;
 	struct virtio_hw *hw = dev->data->dev_private;
@@ -519,7 +519,7 @@ virtio_update_rxtx_handler(struct rte_eth_dev *dev,
  * struct rte_eth_dev *dev: Used to update dev
  * uint16_t nb_desc: Defaults to values read from config space
  * unsigned int socket_id: Used to allocate memzone
- * const struct rte_eth_txconf *tx_conf: Used to setup tx engine
+ * const struct rte_eth_txq_conf *tx_conf: Used to setup tx engine
  * uint16_t queue_idx: Just used as an index in dev txq list
  */
 int
@@ -527,7 +527,7 @@ virtio_dev_tx_queue_setup(struct rte_eth_dev *dev,
 			uint16_t queue_idx,
 			uint16_t nb_desc,
 			unsigned int socket_id __rte_unused,
-			const struct rte_eth_txconf *tx_conf)
+			const struct rte_eth_txq_conf *tx_conf)
 {
 	uint8_t vtpci_queue_idx = 2 * queue_idx + VTNET_SQ_TQ_QUEUE_IDX;
 	struct virtio_hw *hw = dev->data->dev_private;
diff --git a/drivers/net/vmxnet3/vmxnet3_ethdev.h b/drivers/net/vmxnet3/vmxnet3_ethdev.h
index b48058afc..98389fa74 100644
--- a/drivers/net/vmxnet3/vmxnet3_ethdev.h
+++ b/drivers/net/vmxnet3/vmxnet3_ethdev.h
@@ -189,11 +189,11 @@ void vmxnet3_dev_tx_queue_release(void *txq);
 
 int  vmxnet3_dev_rx_queue_setup(struct rte_eth_dev *dev, uint16_t rx_queue_id,
 				uint16_t nb_rx_desc, unsigned int socket_id,
-				const struct rte_eth_rxconf *rx_conf,
+				const struct rte_eth_rxq_conf *rx_conf,
 				struct rte_mempool *mb_pool);
 int  vmxnet3_dev_tx_queue_setup(struct rte_eth_dev *dev, uint16_t tx_queue_id,
 				uint16_t nb_tx_desc, unsigned int socket_id,
-				const struct rte_eth_txconf *tx_conf);
+				const struct rte_eth_txq_conf *tx_conf);
 
 int vmxnet3_dev_rxtx_init(struct rte_eth_dev *dev);
 
diff --git a/drivers/net/vmxnet3/vmxnet3_rxtx.c b/drivers/net/vmxnet3/vmxnet3_rxtx.c
index d9cf43739..cfdf72f7f 100644
--- a/drivers/net/vmxnet3/vmxnet3_rxtx.c
+++ b/drivers/net/vmxnet3/vmxnet3_rxtx.c
@@ -888,7 +888,7 @@ vmxnet3_dev_tx_queue_setup(struct rte_eth_dev *dev,
 			   uint16_t queue_idx,
 			   uint16_t nb_desc,
 			   unsigned int socket_id,
-			   const struct rte_eth_txconf *tx_conf)
+			   const struct rte_eth_txq_conf *tx_conf)
 {
 	struct vmxnet3_hw *hw = dev->data->dev_private;
 	const struct rte_memzone *mz;
@@ -993,7 +993,7 @@ vmxnet3_dev_rx_queue_setup(struct rte_eth_dev *dev,
 			   uint16_t queue_idx,
 			   uint16_t nb_desc,
 			   unsigned int socket_id,
-			   __rte_unused const struct rte_eth_rxconf *rx_conf,
+			   __rte_unused const struct rte_eth_rxq_conf *rx_conf,
 			   struct rte_mempool *mp)
 {
 	const struct rte_memzone *mz;
diff --git a/drivers/net/xenvirt/rte_eth_xenvirt.c b/drivers/net/xenvirt/rte_eth_xenvirt.c
index e404b7755..792fbfb0a 100644
--- a/drivers/net/xenvirt/rte_eth_xenvirt.c
+++ b/drivers/net/xenvirt/rte_eth_xenvirt.c
@@ -492,11 +492,12 @@ virtio_queue_setup(struct rte_eth_dev *dev, int queue_type)
 }
 
 static int
-eth_rx_queue_setup(struct rte_eth_dev *dev,uint16_t rx_queue_id,
-				uint16_t nb_rx_desc __rte_unused,
-				unsigned int socket_id __rte_unused,
-				const struct rte_eth_rxconf *rx_conf __rte_unused,
-				struct rte_mempool *mb_pool)
+eth_rx_queue_setup(struct rte_eth_dev *dev,
+		   uint16_t rx_queue_id,
+		   uint16_t nb_rx_desc __rte_unused,
+		   unsigned int socket_id __rte_unused,
+		   const struct rte_eth_rxq_conf *rx_conf __rte_unused,
+		   struct rte_mempool *mb_pool)
 {
 	struct virtqueue *vq;
 	vq = dev->data->rx_queues[rx_queue_id] = virtio_queue_setup(dev, VTNET_RQ);
@@ -505,10 +506,11 @@ eth_rx_queue_setup(struct rte_eth_dev *dev,uint16_t rx_queue_id,
 }
 
 static int
-eth_tx_queue_setup(struct rte_eth_dev *dev, uint16_t tx_queue_id,
-				uint16_t nb_tx_desc __rte_unused,
-				unsigned int socket_id __rte_unused,
-				const struct rte_eth_txconf *tx_conf __rte_unused)
+eth_tx_queue_setup(struct rte_eth_dev *dev,
+		   uint16_t tx_queue_id,
+		   uint16_t nb_tx_desc __rte_unused,
+		   unsigned int socket_id __rte_unused,
+		   const struct rte_eth_txq_conf *tx_conf __rte_unused)
 {
 	dev->data->tx_queues[tx_queue_id] = virtio_queue_setup(dev, VTNET_TQ);
 	return 0;
diff --git a/examples/ip_fragmentation/main.c b/examples/ip_fragmentation/main.c
index 8c0e17911..15f9426f2 100644
--- a/examples/ip_fragmentation/main.c
+++ b/examples/ip_fragmentation/main.c
@@ -869,7 +869,7 @@ main(int argc, char **argv)
 {
 	struct lcore_queue_conf *qconf;
 	struct rte_eth_dev_info dev_info;
-	struct rte_eth_txconf *txconf;
+	struct rte_eth_txq_conf *txconf;
 	struct rx_queue *rxq;
 	int socket, ret;
 	unsigned nb_ports;
diff --git a/examples/ip_pipeline/app.h b/examples/ip_pipeline/app.h
index e41290e74..59bb1bac8 100644
--- a/examples/ip_pipeline/app.h
+++ b/examples/ip_pipeline/app.h
@@ -103,7 +103,7 @@ struct app_pktq_hwq_in_params {
 	uint32_t size;
 	uint32_t burst;
 
-	struct rte_eth_rxconf conf;
+	struct rte_eth_rxq_conf conf;
 };
 
 struct app_pktq_hwq_out_params {
@@ -113,7 +113,7 @@ struct app_pktq_hwq_out_params {
 	uint32_t burst;
 	uint32_t dropless;
 	uint64_t n_retries;
-	struct rte_eth_txconf conf;
+	struct rte_eth_txq_conf conf;
 };
 
 struct app_pktq_swq_params {
diff --git a/examples/ip_reassembly/main.c b/examples/ip_reassembly/main.c
index e62636cb4..746140f60 100644
--- a/examples/ip_reassembly/main.c
+++ b/examples/ip_reassembly/main.c
@@ -1017,7 +1017,7 @@ main(int argc, char **argv)
 {
 	struct lcore_queue_conf *qconf;
 	struct rte_eth_dev_info dev_info;
-	struct rte_eth_txconf *txconf;
+	struct rte_eth_txq_conf *txconf;
 	struct rx_queue *rxq;
 	int ret, socket;
 	unsigned nb_ports;
diff --git a/examples/ipsec-secgw/ipsec-secgw.c b/examples/ipsec-secgw/ipsec-secgw.c
index 99dc270cb..807d079cf 100644
--- a/examples/ipsec-secgw/ipsec-secgw.c
+++ b/examples/ipsec-secgw/ipsec-secgw.c
@@ -1325,7 +1325,7 @@ static void
 port_init(uint8_t portid)
 {
 	struct rte_eth_dev_info dev_info;
-	struct rte_eth_txconf *txconf;
+	struct rte_eth_txq_conf *txconf;
 	uint16_t nb_tx_queue, nb_rx_queue;
 	uint16_t tx_queueid, rx_queueid, queue, lcore_id;
 	int32_t ret, socket_id;
diff --git a/examples/ipv4_multicast/main.c b/examples/ipv4_multicast/main.c
index 9a13d3530..a3c060778 100644
--- a/examples/ipv4_multicast/main.c
+++ b/examples/ipv4_multicast/main.c
@@ -668,7 +668,7 @@ main(int argc, char **argv)
 {
 	struct lcore_queue_conf *qconf;
 	struct rte_eth_dev_info dev_info;
-	struct rte_eth_txconf *txconf;
+	struct rte_eth_txq_conf *txconf;
 	int ret;
 	uint16_t queueid;
 	unsigned lcore_id = 0, rx_lcore_id = 0;
diff --git a/examples/l3fwd-acl/main.c b/examples/l3fwd-acl/main.c
index 8eff4de41..03124e142 100644
--- a/examples/l3fwd-acl/main.c
+++ b/examples/l3fwd-acl/main.c
@@ -1887,7 +1887,7 @@ main(int argc, char **argv)
 {
 	struct lcore_conf *qconf;
 	struct rte_eth_dev_info dev_info;
-	struct rte_eth_txconf *txconf;
+	struct rte_eth_txq_conf *txconf;
 	int ret;
 	unsigned nb_ports;
 	uint16_t queueid;
diff --git a/examples/l3fwd-power/main.c b/examples/l3fwd-power/main.c
index fd442f5ef..f54decd20 100644
--- a/examples/l3fwd-power/main.c
+++ b/examples/l3fwd-power/main.c
@@ -1643,7 +1643,7 @@ main(int argc, char **argv)
 {
 	struct lcore_conf *qconf;
 	struct rte_eth_dev_info dev_info;
-	struct rte_eth_txconf *txconf;
+	struct rte_eth_txq_conf *txconf;
 	int ret;
 	unsigned nb_ports;
 	uint16_t queueid;
diff --git a/examples/l3fwd-vf/main.c b/examples/l3fwd-vf/main.c
index 34e4a6bef..9a1ff8748 100644
--- a/examples/l3fwd-vf/main.c
+++ b/examples/l3fwd-vf/main.c
@@ -950,7 +950,7 @@ main(int argc, char **argv)
 {
 	struct lcore_conf *qconf;
 	struct rte_eth_dev_info dev_info;
-	struct rte_eth_txconf *txconf;
+	struct rte_eth_txq_conf *txconf;
 	int ret;
 	unsigned nb_ports;
 	uint16_t queueid;
diff --git a/examples/l3fwd/main.c b/examples/l3fwd/main.c
index 81995fdbe..2e904b7ae 100644
--- a/examples/l3fwd/main.c
+++ b/examples/l3fwd/main.c
@@ -844,7 +844,7 @@ main(int argc, char **argv)
 {
 	struct lcore_conf *qconf;
 	struct rte_eth_dev_info dev_info;
-	struct rte_eth_txconf *txconf;
+	struct rte_eth_txq_conf *txconf;
 	int ret;
 	unsigned nb_ports;
 	uint16_t queueid;
diff --git a/examples/netmap_compat/lib/compat_netmap.c b/examples/netmap_compat/lib/compat_netmap.c
index af2d9f3f7..2c245d1df 100644
--- a/examples/netmap_compat/lib/compat_netmap.c
+++ b/examples/netmap_compat/lib/compat_netmap.c
@@ -57,8 +57,8 @@ struct netmap_port {
 	struct rte_mempool   *pool;
 	struct netmap_if     *nmif;
 	struct rte_eth_conf   eth_conf;
-	struct rte_eth_txconf tx_conf;
-	struct rte_eth_rxconf rx_conf;
+	struct rte_eth_txq_conf tx_conf;
+	struct rte_eth_rxq_conf rx_conf;
 	int32_t  socket_id;
 	uint16_t nr_tx_rings;
 	uint16_t nr_rx_rings;
diff --git a/examples/performance-thread/l3fwd-thread/main.c b/examples/performance-thread/l3fwd-thread/main.c
index 7954b9744..e72b86e78 100644
--- a/examples/performance-thread/l3fwd-thread/main.c
+++ b/examples/performance-thread/l3fwd-thread/main.c
@@ -3493,7 +3493,7 @@ int
 main(int argc, char **argv)
 {
 	struct rte_eth_dev_info dev_info;
-	struct rte_eth_txconf *txconf;
+	struct rte_eth_txq_conf *txconf;
 	int ret;
 	int i;
 	unsigned nb_ports;
diff --git a/examples/ptpclient/ptpclient.c b/examples/ptpclient/ptpclient.c
index ddfcdb832..ac350f5fb 100644
--- a/examples/ptpclient/ptpclient.c
+++ b/examples/ptpclient/ptpclient.c
@@ -237,7 +237,7 @@ port_init(uint8_t port, struct rte_mempool *mbuf_pool)
 	/* Allocate and set up 1 TX queue per Ethernet port. */
 	for (q = 0; q < tx_rings; q++) {
 		/* Setup txq_flags */
-		struct rte_eth_txconf *txconf;
+		struct rte_eth_txq_conf *txconf;
 
 		rte_eth_dev_info_get(q, &dev_info);
 		txconf = &dev_info.default_txconf;
diff --git a/examples/qos_sched/init.c b/examples/qos_sched/init.c
index a82cbd7d5..955d051d2 100644
--- a/examples/qos_sched/init.c
+++ b/examples/qos_sched/init.c
@@ -104,8 +104,8 @@ app_init_port(uint8_t portid, struct rte_mempool *mp)
 {
 	int ret;
 	struct rte_eth_link link;
-	struct rte_eth_rxconf rx_conf;
-	struct rte_eth_txconf tx_conf;
+	struct rte_eth_rxq_conf rx_conf;
+	struct rte_eth_txq_conf tx_conf;
 	uint16_t rx_size;
 	uint16_t tx_size;
 
diff --git a/examples/tep_termination/vxlan_setup.c b/examples/tep_termination/vxlan_setup.c
index 050bb32d3..8d61e8891 100644
--- a/examples/tep_termination/vxlan_setup.c
+++ b/examples/tep_termination/vxlan_setup.c
@@ -138,8 +138,8 @@ vxlan_port_init(uint8_t port, struct rte_mempool *mbuf_pool)
 	uint16_t rx_ring_size = RTE_TEST_RX_DESC_DEFAULT;
 	uint16_t tx_ring_size = RTE_TEST_TX_DESC_DEFAULT;
 	struct rte_eth_udp_tunnel tunnel_udp;
-	struct rte_eth_rxconf *rxconf;
-	struct rte_eth_txconf *txconf;
+	struct rte_eth_rxq_conf *rxconf;
+	struct rte_eth_txq_conf *txconf;
 	struct vxlan_conf *pconf = &vxdev;
 
 	pconf->dst_port = udp_port;
diff --git a/examples/vhost/main.c b/examples/vhost/main.c
index 4d1589d06..75c4c8341 100644
--- a/examples/vhost/main.c
+++ b/examples/vhost/main.c
@@ -269,8 +269,8 @@ port_init(uint8_t port)
 {
 	struct rte_eth_dev_info dev_info;
 	struct rte_eth_conf port_conf;
-	struct rte_eth_rxconf *rxconf;
-	struct rte_eth_txconf *txconf;
+	struct rte_eth_rxq_conf *rxconf;
+	struct rte_eth_txq_conf *txconf;
 	int16_t rx_rings, tx_rings;
 	uint16_t rx_ring_size, tx_ring_size;
 	int retval;
diff --git a/examples/vhost_xen/main.c b/examples/vhost_xen/main.c
index eba4d35aa..852269cdc 100644
--- a/examples/vhost_xen/main.c
+++ b/examples/vhost_xen/main.c
@@ -276,7 +276,7 @@ static inline int
 port_init(uint8_t port, struct rte_mempool *mbuf_pool)
 {
 	struct rte_eth_dev_info dev_info;
-	struct rte_eth_rxconf *rxconf;
+	struct rte_eth_rxq_conf *rxconf;
 	struct rte_eth_conf port_conf;
 	uint16_t rx_rings, tx_rings = (uint16_t)rte_lcore_count();
 	uint16_t rx_ring_size = RTE_TEST_RX_DESC_DEFAULT;
diff --git a/examples/vmdq/main.c b/examples/vmdq/main.c
index 8949a1156..5c3a73789 100644
--- a/examples/vmdq/main.c
+++ b/examples/vmdq/main.c
@@ -189,7 +189,7 @@ static inline int
 port_init(uint8_t port, struct rte_mempool *mbuf_pool)
 {
 	struct rte_eth_dev_info dev_info;
-	struct rte_eth_rxconf *rxconf;
+	struct rte_eth_rxq_conf *rxconf;
 	struct rte_eth_conf port_conf;
 	uint16_t rxRings, txRings;
 	uint16_t rxRingSize = RTE_TEST_RX_DESC_DEFAULT;
diff --git a/lib/librte_ether/rte_ethdev.c b/lib/librte_ether/rte_ethdev.c
index 0597641ee..da2424cc4 100644
--- a/lib/librte_ether/rte_ethdev.c
+++ b/lib/librte_ether/rte_ethdev.c
@@ -997,7 +997,7 @@ rte_eth_dev_close(uint8_t port_id)
 int
 rte_eth_rx_queue_setup(uint8_t port_id, uint16_t rx_queue_id,
 		       uint16_t nb_rx_desc, unsigned int socket_id,
-		       const struct rte_eth_rxconf *rx_conf,
+		       const struct rte_eth_rxq_conf *rx_conf,
 		       struct rte_mempool *mp)
 {
 	int ret;
@@ -1088,7 +1088,7 @@ rte_eth_rx_queue_setup(uint8_t port_id, uint16_t rx_queue_id,
 int
 rte_eth_tx_queue_setup(uint8_t port_id, uint16_t tx_queue_id,
 		       uint16_t nb_tx_desc, unsigned int socket_id,
-		       const struct rte_eth_txconf *tx_conf)
+		       const struct rte_eth_txq_conf *tx_conf)
 {
 	struct rte_eth_dev *dev;
 	struct rte_eth_dev_info dev_info;
diff --git a/lib/librte_ether/rte_ethdev.h b/lib/librte_ether/rte_ethdev.h
index 0adf3274a..c40db4ee0 100644
--- a/lib/librte_ether/rte_ethdev.h
+++ b/lib/librte_ether/rte_ethdev.h
@@ -686,7 +686,7 @@ struct rte_eth_txmode {
 /**
  * A structure used to configure an RX ring of an Ethernet port.
  */
-struct rte_eth_rxconf {
+struct rte_eth_rxq_conf {
 	struct rte_eth_thresh rx_thresh; /**< RX ring threshold registers. */
 	uint16_t rx_free_thresh; /**< Drives the freeing of RX descriptors. */
 	uint8_t rx_drop_en; /**< Drop packets if no descriptors are available. */
@@ -709,7 +709,7 @@ struct rte_eth_rxconf {
 /**
  * A structure used to configure a TX ring of an Ethernet port.
  */
-struct rte_eth_txconf {
+struct rte_eth_txq_conf {
 	struct rte_eth_thresh tx_thresh; /**< TX ring threshold registers. */
 	uint16_t tx_rs_thresh; /**< Drives the setting of RS bit on TXDs. */
 	uint16_t tx_free_thresh; /**< Start freeing TX buffers if there are
@@ -956,8 +956,10 @@ struct rte_eth_dev_info {
 	uint8_t hash_key_size; /**< Hash key size in bytes */
 	/** Bit mask of RSS offloads, the bit offset also means flow type */
 	uint64_t flow_type_rss_offloads;
-	struct rte_eth_rxconf default_rxconf; /**< Default RX configuration */
-	struct rte_eth_txconf default_txconf; /**< Default TX configuration */
+	struct rte_eth_rxq_conf default_rxconf;
+	/**< Default RX queue configuration */
+	struct rte_eth_txq_conf default_txconf;
+	/**< Default TX queue configuration */
 	uint16_t vmdq_queue_base; /**< First queue ID for VMDQ pools. */
 	uint16_t vmdq_queue_num;  /**< Queue number for VMDQ pools. */
 	uint16_t vmdq_pool_base;  /**< First ID of VMDQ pools. */
@@ -975,7 +977,7 @@ struct rte_eth_dev_info {
  */
 struct rte_eth_rxq_info {
 	struct rte_mempool *mp;     /**< mempool used by that queue. */
-	struct rte_eth_rxconf conf; /**< queue config parameters. */
+	struct rte_eth_rxq_conf conf; /**< queue config parameters. */
 	uint8_t scattered_rx;       /**< scattered packets RX supported. */
 	uint16_t nb_desc;           /**< configured number of RXDs. */
 } __rte_cache_min_aligned;
@@ -985,7 +987,7 @@ struct rte_eth_rxq_info {
  * Used to retieve information about configured queue.
  */
 struct rte_eth_txq_info {
-	struct rte_eth_txconf conf; /**< queue config parameters. */
+	struct rte_eth_txq_conf conf; /**< queue config parameters. */
 	uint16_t nb_desc;           /**< configured number of TXDs. */
 } __rte_cache_min_aligned;
 
@@ -1185,7 +1187,7 @@ typedef int (*eth_rx_queue_setup_t)(struct rte_eth_dev *dev,
 				    uint16_t rx_queue_id,
 				    uint16_t nb_rx_desc,
 				    unsigned int socket_id,
-				    const struct rte_eth_rxconf *rx_conf,
+				    const struct rte_eth_rxq_conf *rx_conf,
 				    struct rte_mempool *mb_pool);
 /**< @internal Set up a receive queue of an Ethernet device. */
 
@@ -1193,7 +1195,7 @@ typedef int (*eth_tx_queue_setup_t)(struct rte_eth_dev *dev,
 				    uint16_t tx_queue_id,
 				    uint16_t nb_tx_desc,
 				    unsigned int socket_id,
-				    const struct rte_eth_txconf *tx_conf);
+				    const struct rte_eth_txq_conf *tx_conf);
 /**< @internal Setup a transmit queue of an Ethernet device. */
 
 typedef int (*eth_rx_enable_intr_t)(struct rte_eth_dev *dev,
@@ -1937,7 +1939,7 @@ void _rte_eth_dev_reset(struct rte_eth_dev *dev);
  */
 int rte_eth_rx_queue_setup(uint8_t port_id, uint16_t rx_queue_id,
 		uint16_t nb_rx_desc, unsigned int socket_id,
-		const struct rte_eth_rxconf *rx_conf,
+		const struct rte_eth_rxq_conf *rx_conf,
 		struct rte_mempool *mb_pool);
 
 /**
@@ -1985,7 +1987,7 @@ int rte_eth_rx_queue_setup(uint8_t port_id, uint16_t rx_queue_id,
  */
 int rte_eth_tx_queue_setup(uint8_t port_id, uint16_t tx_queue_id,
 		uint16_t nb_tx_desc, unsigned int socket_id,
-		const struct rte_eth_txconf *tx_conf);
+		const struct rte_eth_txq_conf *tx_conf);
 
 /**
  * Return the NUMA socket to which an Ethernet device is connected
@@ -2972,7 +2974,7 @@ static inline int rte_eth_tx_descriptor_status(uint8_t port_id,
  *
  * If the PMD is DEV_TX_OFFLOAD_MT_LOCKFREE capable, multiple threads can
  * invoke this function concurrently on the same tx queue without SW lock.
- * @see rte_eth_dev_info_get, struct rte_eth_txconf::txq_flags
+ * @see rte_eth_dev_info_get, struct rte_eth_txq_conf::txq_flags
  *
  * @param port_id
  *   The port identifier of the Ethernet device.
diff --git a/test/test-pipeline/init.c b/test/test-pipeline/init.c
index 1457c7890..eee75fb0e 100644
--- a/test/test-pipeline/init.c
+++ b/test/test-pipeline/init.c
@@ -117,7 +117,7 @@ static struct rte_eth_conf port_conf = {
 	},
 };
 
-static struct rte_eth_rxconf rx_conf = {
+static struct rte_eth_rxq_conf rx_conf = {
 	.rx_thresh = {
 		.pthresh = 8,
 		.hthresh = 8,
@@ -127,7 +127,7 @@ static struct rte_eth_rxconf rx_conf = {
 	.rx_drop_en = 0,
 };
 
-static struct rte_eth_txconf tx_conf = {
+static struct rte_eth_txq_conf tx_conf = {
 	.tx_thresh = {
 		.pthresh = 36,
 		.hthresh = 0,
diff --git a/test/test/test_kni.c b/test/test/test_kni.c
index db17fdf30..b5445e167 100644
--- a/test/test/test_kni.c
+++ b/test/test/test_kni.c
@@ -67,7 +67,7 @@ struct test_kni_stats {
 	volatile uint64_t egress;
 };
 
-static const struct rte_eth_rxconf rx_conf = {
+static const struct rte_eth_rxq_conf rx_conf = {
 	.rx_thresh = {
 		.pthresh = 8,
 		.hthresh = 8,
@@ -76,7 +76,7 @@ static const struct rte_eth_rxconf rx_conf = {
 	.rx_free_thresh = 0,
 };
 
-static const struct rte_eth_txconf tx_conf = {
+static const struct rte_eth_txq_conf tx_conf = {
 	.tx_thresh = {
 		.pthresh = 36,
 		.hthresh = 0,
diff --git a/test/test/test_link_bonding.c b/test/test/test_link_bonding.c
index dc28cea59..af23b1ae1 100644
--- a/test/test/test_link_bonding.c
+++ b/test/test/test_link_bonding.c
@@ -199,7 +199,7 @@ static struct rte_eth_conf default_pmd_conf = {
 	.lpbk_mode = 0,
 };
 
-static const struct rte_eth_rxconf rx_conf_default = {
+static const struct rte_eth_rxq_conf rx_conf_default = {
 	.rx_thresh = {
 		.pthresh = RX_PTHRESH,
 		.hthresh = RX_HTHRESH,
@@ -209,7 +209,7 @@ static const struct rte_eth_rxconf rx_conf_default = {
 	.rx_drop_en = 0,
 };
 
-static struct rte_eth_txconf tx_conf_default = {
+static struct rte_eth_txq_conf tx_conf_default = {
 	.tx_thresh = {
 		.pthresh = TX_PTHRESH,
 		.hthresh = TX_HTHRESH,
diff --git a/test/test/test_pmd_perf.c b/test/test/test_pmd_perf.c
index 1ffd65a52..6f28ad303 100644
--- a/test/test/test_pmd_perf.c
+++ b/test/test/test_pmd_perf.c
@@ -109,7 +109,7 @@ static struct rte_eth_conf port_conf = {
 	.lpbk_mode = 1,  /* enable loopback */
 };
 
-static struct rte_eth_rxconf rx_conf = {
+static struct rte_eth_rxq_conf rx_conf = {
 	.rx_thresh = {
 		.pthresh = RX_PTHRESH,
 		.hthresh = RX_HTHRESH,
@@ -118,7 +118,7 @@ static struct rte_eth_rxconf rx_conf = {
 	.rx_free_thresh = 32,
 };
 
-static struct rte_eth_txconf tx_conf = {
+static struct rte_eth_txq_conf tx_conf = {
 	.tx_thresh = {
 		.pthresh = TX_PTHRESH,
 		.hthresh = TX_HTHRESH,
diff --git a/test/test/virtual_pmd.c b/test/test/virtual_pmd.c
index 9d46ad564..fb2479ced 100644
--- a/test/test/virtual_pmd.c
+++ b/test/test/virtual_pmd.c
@@ -124,7 +124,7 @@ static int
 virtual_ethdev_rx_queue_setup_success(struct rte_eth_dev *dev,
 		uint16_t rx_queue_id, uint16_t nb_rx_desc __rte_unused,
 		unsigned int socket_id,
-		const struct rte_eth_rxconf *rx_conf __rte_unused,
+		const struct rte_eth_rxq_conf *rx_conf __rte_unused,
 		struct rte_mempool *mb_pool __rte_unused)
 {
 	struct virtual_ethdev_queue *rx_q;
@@ -147,7 +147,7 @@ static int
 virtual_ethdev_rx_queue_setup_fail(struct rte_eth_dev *dev __rte_unused,
 		uint16_t rx_queue_id __rte_unused, uint16_t nb_rx_desc __rte_unused,
 		unsigned int socket_id __rte_unused,
-		const struct rte_eth_rxconf *rx_conf __rte_unused,
+		const struct rte_eth_rxq_conf *rx_conf __rte_unused,
 		struct rte_mempool *mb_pool __rte_unused)
 {
 	return -1;
@@ -157,7 +157,7 @@ static int
 virtual_ethdev_tx_queue_setup_success(struct rte_eth_dev *dev,
 		uint16_t tx_queue_id, uint16_t nb_tx_desc __rte_unused,
 		unsigned int socket_id,
-		const struct rte_eth_txconf *tx_conf __rte_unused)
+		const struct rte_eth_txq_conf *tx_conf __rte_unused)
 {
 	struct virtual_ethdev_queue *tx_q;
 
@@ -179,7 +179,7 @@ static int
 virtual_ethdev_tx_queue_setup_fail(struct rte_eth_dev *dev __rte_unused,
 		uint16_t tx_queue_id __rte_unused, uint16_t nb_tx_desc __rte_unused,
 		unsigned int socket_id __rte_unused,
-		const struct rte_eth_txconf *tx_conf __rte_unused)
+		const struct rte_eth_txq_conf *tx_conf __rte_unused)
 {
 	return -1;
 }
-- 
2.12.0

^ permalink raw reply	[flat|nested] 134+ messages in thread

* [dpdk-dev] [PATCH 2/4] ethdev: introduce Rx queue offloads API
  2017-09-04  7:12 [dpdk-dev] [PATCH 0/4] ethdev new offloads API Shahaf Shuler
  2017-09-04  7:12 ` [dpdk-dev] [PATCH 1/4] ethdev: rename Rx and Tx configuration structs Shahaf Shuler
@ 2017-09-04  7:12 ` Shahaf Shuler
  2017-09-04  7:12 ` [dpdk-dev] [PATCH 3/4] ethdev: introduce Tx " Shahaf Shuler
                   ` (2 subsequent siblings)
  4 siblings, 0 replies; 134+ messages in thread
From: Shahaf Shuler @ 2017-09-04  7:12 UTC (permalink / raw)
  To: thomas; +Cc: dev

Introduce a new API to configure Rx offloads.

The new API will re-use existing DEV_RX_OFFLOAD_* flags
to enable the different offloads. This will ease the process
of adding new Rx offloads, as no ABI breakage is involved.
In addition, the offload configuration can be done per queue,
instead of per port.

The Rx queue offload API can be used only with devices which advertise
the RTE_ETH_DEV_RXQ_OFFLOAD capability. Otherwise, the device
configuration will fail with an error.
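
As a usage illustration (not part of this patch), an application might
enable per-queue Rx offloads roughly as follows; the helper name, queue
index, descriptor count, socket and mempool below are all placeholders:

	#include <rte_ethdev.h>

	/* Hypothetical application helper: configure one Rx queue
	 * using the new per-queue offloads API. */
	static int
	setup_rx_with_offloads(uint8_t port_id, uint16_t nb_rxd,
			       unsigned int socket_id,
			       struct rte_mempool *mb_pool)
	{
		/* Ask ethdev to ignore the legacy rxmode offload bits. */
		struct rte_eth_conf port_conf = {
			.rxmode = { .ignore_offloads = 1 },
		};
		struct rte_eth_dev_info dev_info;
		struct rte_eth_rxq_conf rxq_conf;
		int ret;

		/* Fails with -ENOTSUP if the device does not advertise
		 * RTE_ETH_DEV_RXQ_OFFLOAD. */
		ret = rte_eth_dev_configure(port_id, 1, 1, &port_conf);
		if (ret < 0)
			return ret;

		rte_eth_dev_info_get(port_id, &dev_info);
		rxq_conf = dev_info.default_rxconf;
		/* Enable only the offloads this queue needs. */
		rxq_conf.offloads = DEV_RX_OFFLOAD_CHECKSUM |
				    DEV_RX_OFFLOAD_VLAN_STRIP;

		return rte_eth_rx_queue_setup(port_id, 0, nb_rxd,
					      socket_id, &rxq_conf,
					      mb_pool);
	}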

PMDs which move to the new API but support Rx offloads only per
port should return an error (-ENOTSUP) from the queue setup in case
of a mixed configuration.
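
For illustration, a per-port-only PMD could implement that rule along
these lines (a sketch only; all names below are hypothetical and not
taken from any existing driver):

	struct pmd_private {
		uint64_t rx_offloads;   /* offloads programmed on the port */
		uint8_t rxq_configured; /* at least one Rx queue set up */
	};

	/* Hypothetical rx_queue_setup callback of a PMD whose hardware
	 * applies Rx offloads per port: all queues must request the
	 * same offload set. */
	static int
	pmd_rx_queue_setup(struct rte_eth_dev *dev, uint16_t idx,
			   uint16_t desc, unsigned int socket,
			   const struct rte_eth_rxq_conf *conf,
			   struct rte_mempool *mp)
	{
		struct pmd_private *priv = dev->data->dev_private;

		if (priv->rxq_configured &&
		    conf->offloads != priv->rx_offloads)
			return -ENOTSUP; /* mixed per-queue configuration */
		priv->rx_offloads = conf->offloads;
		priv->rxq_configured = 1;
		/* ... regular descriptor ring and mempool setup ... */
		return 0;
	}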

The old Rx offloads API is kept for the time being, in order to enable a
smooth transition for PMDs and applications to the new API.

Signed-off-by: Shahaf Shuler <shahafs@mellanox.com>
---
 doc/guides/nics/features.rst  | 19 +++++++++++--------
 lib/librte_ether/rte_ethdev.c | 12 ++++++++++++
 lib/librte_ether/rte_ethdev.h | 35 ++++++++++++++++++++++++++++++++++-
 3 files changed, 57 insertions(+), 9 deletions(-)

diff --git a/doc/guides/nics/features.rst b/doc/guides/nics/features.rst
index 37ffbc68c..f2c8497c2 100644
--- a/doc/guides/nics/features.rst
+++ b/doc/guides/nics/features.rst
@@ -179,7 +179,7 @@ Jumbo frame
 
 Supports Rx jumbo frames.
 
-* **[uses]    user config**: ``dev_conf.rxmode.jumbo_frame``,
+* **[uses]    rte_eth_rxq_conf**: ``offloads:DEV_RX_OFFLOAD_JUMBO_FRAME``,
   ``dev_conf.rxmode.max_rx_pkt_len``.
 * **[related] rte_eth_dev_info**: ``max_rx_pktlen``.
 * **[related] API**: ``rte_eth_dev_set_mtu()``.
@@ -192,7 +192,7 @@ Scattered Rx
 
 Supports receiving segmented mbufs.
 
-* **[uses]       user config**: ``dev_conf.rxmode.enable_scatter``.
+* **[uses]       rte_eth_rxq_conf**: ``offloads:DEV_RX_OFFLOAD_SCATTER``.
 * **[implements] datapath**: ``Scattered Rx function``.
 * **[implements] rte_eth_dev_data**: ``scattered_rx``.
 * **[provides]   eth_dev_ops**: ``rxq_info_get:scattered_rx``.
@@ -206,7 +206,7 @@ LRO
 
 Supports Large Receive Offload.
 
-* **[uses]       user config**: ``dev_conf.rxmode.enable_lro``.
+* **[uses]       rte_eth_rxq_conf**: ``offloads:DEV_RX_OFFLOAD_TCP_LRO``.
 * **[implements] datapath**: ``LRO functionality``.
 * **[implements] rte_eth_dev_data**: ``lro``.
 * **[provides]   mbuf**: ``mbuf.ol_flags:PKT_RX_LRO``, ``mbuf.tso_segsz``.
@@ -363,7 +363,7 @@ VLAN filter
 
 Supports filtering of a VLAN Tag identifier.
 
-* **[uses]       user config**: ``dev_conf.rxmode.hw_vlan_filter``.
+* **[uses]       rte_eth_rxq_conf**: ``offloads:DEV_RX_OFFLOAD_VLAN_FILTER``.
 * **[implements] eth_dev_ops**: ``vlan_filter_set``.
 * **[related]    API**: ``rte_eth_dev_vlan_filter()``.
 
@@ -499,7 +499,7 @@ CRC offload
 
 Supports CRC stripping by hardware.
 
-* **[uses] user config**: ``dev_conf.rxmode.hw_strip_crc``.
+* **[uses] rte_eth_rxq_conf**: ``offloads:DEV_RX_OFFLOAD_CRC_STRIP``.
 
 
 .. _nic_features_vlan_offload:
@@ -509,8 +509,7 @@ VLAN offload
 
 Supports VLAN offload to hardware.
 
-* **[uses]       user config**: ``dev_conf.rxmode.hw_vlan_strip``,
-  ``dev_conf.rxmode.hw_vlan_filter``, ``dev_conf.rxmode.hw_vlan_extend``.
+* **[uses]       rte_eth_rxq_conf**: ``offloads:DEV_RX_OFFLOAD_VLAN_STRIP,DEV_RX_OFFLOAD_VLAN_FILTER,DEV_RX_OFFLOAD_VLAN_EXTEND``.
 * **[implements] eth_dev_ops**: ``vlan_offload_set``.
 * **[provides]   mbuf**: ``mbuf.ol_flags:PKT_RX_VLAN_STRIPPED``, ``mbuf.vlan_tci``.
 * **[provides]   rte_eth_dev_info**: ``rx_offload_capa:DEV_RX_OFFLOAD_VLAN_STRIP``,
@@ -526,6 +525,7 @@ QinQ offload
 
 Supports QinQ (queue in queue) offload.
 
+* **[uses]     rte_eth_rxq_conf**: ``offloads:DEV_RX_OFFLOAD_QINQ_STRIP``.
 * **[uses]     mbuf**: ``mbuf.ol_flags:PKT_TX_QINQ_PKT``.
 * **[provides] mbuf**: ``mbuf.ol_flags:PKT_RX_QINQ_STRIPPED``, ``mbuf.vlan_tci``,
    ``mbuf.vlan_tci_outer``.
@@ -540,7 +540,7 @@ L3 checksum offload
 
 Supports L3 checksum offload.
 
-* **[uses]     user config**: ``dev_conf.rxmode.hw_ip_checksum``.
+* **[uses]     rte_eth_rxq_conf**: ``offloads:DEV_RX_OFFLOAD_IPV4_CKSUM``.
 * **[uses]     mbuf**: ``mbuf.ol_flags:PKT_TX_IP_CKSUM``,
   ``mbuf.ol_flags:PKT_TX_IPV4`` | ``PKT_TX_IPV6``.
 * **[provides] mbuf**: ``mbuf.ol_flags:PKT_RX_IP_CKSUM_UNKNOWN`` |
@@ -557,6 +557,7 @@ L4 checksum offload
 
 Supports L4 checksum offload.
 
+* **[uses]     rte_eth_rxq_conf**: ``offloads:DEV_RX_OFFLOAD_UDP_CKSUM,DEV_RX_OFFLOAD_TCP_CKSUM``.
 * **[uses]     mbuf**: ``mbuf.ol_flags:PKT_TX_IPV4`` | ``PKT_TX_IPV6``,
   ``mbuf.ol_flags:PKT_TX_L4_NO_CKSUM`` | ``PKT_TX_TCP_CKSUM`` |
   ``PKT_TX_SCTP_CKSUM`` | ``PKT_TX_UDP_CKSUM``.
@@ -574,6 +575,7 @@ MACsec offload
 
 Supports MACsec.
 
+* **[uses]     rte_eth_rxq_conf**: ``offloads:DEV_RX_OFFLOAD_MACSEC_STRIP``.
 * **[uses]     mbuf**: ``mbuf.ol_flags:PKT_TX_MACSEC``.
 * **[provides] rte_eth_dev_info**: ``rx_offload_capa:DEV_RX_OFFLOAD_MACSEC_STRIP``,
   ``tx_offload_capa:DEV_TX_OFFLOAD_MACSEC_INSERT``.
@@ -586,6 +588,7 @@ Inner L3 checksum
 
 Supports inner packet L3 checksum.
 
+* **[uses]     rte_eth_rxq_conf**: ``offloads:DEV_RX_OFFLOAD_OUTER_IPV4_CKSUM``.
 * **[uses]     mbuf**: ``mbuf.ol_flags:PKT_TX_IP_CKSUM``,
   ``mbuf.ol_flags:PKT_TX_IPV4`` | ``PKT_TX_IPV6``,
   ``mbuf.ol_flags:PKT_TX_OUTER_IP_CKSUM``,
diff --git a/lib/librte_ether/rte_ethdev.c b/lib/librte_ether/rte_ethdev.c
index da2424cc4..50f8aa98d 100644
--- a/lib/librte_ether/rte_ethdev.c
+++ b/lib/librte_ether/rte_ethdev.c
@@ -722,6 +722,18 @@ rte_eth_dev_configure(uint8_t port_id, uint16_t nb_rx_q, uint16_t nb_tx_q,
 		return -EBUSY;
 	}
 
+	if ((!(dev->data->dev_flags & RTE_ETH_DEV_RXQ_OFFLOAD)) &&
+	     (dev_conf->rxmode.ignore_offloads == 1)) {
+		 /*
+		  * Application uses the rte_eth_rxq_conf offloads API
+		  * but the PMD does not support it.
+		  */
+		RTE_PMD_DEBUG_TRACE(
+			"port %d does not support the rte_eth_rxq_conf offloads API\n",
+			port_id);
+		return -ENOTSUP;
+	}
+
 	/* Copy the dev_conf parameter into the dev structure */
 	memcpy(&dev->data->dev_conf, dev_conf, sizeof(dev->data->dev_conf));
 
diff --git a/lib/librte_ether/rte_ethdev.h b/lib/librte_ether/rte_ethdev.h
index c40db4ee0..90934418d 100644
--- a/lib/librte_ether/rte_ethdev.h
+++ b/lib/librte_ether/rte_ethdev.h
@@ -357,7 +357,14 @@ struct rte_eth_rxmode {
 		jumbo_frame      : 1, /**< Jumbo Frame Receipt enable. */
 		hw_strip_crc     : 1, /**< Enable CRC stripping by hardware. */
 		enable_scatter   : 1, /**< Enable scatter packets rx handler */
-		enable_lro       : 1; /**< Enable LRO */
+		enable_lro       : 1, /**< Enable LRO */
+		ignore_offloads	 : 1;
+		/**
+		 * When set, the rxmode offloads are ignored and the
+		 * Rx offloads are taken from rte_eth_rxq_conf instead.
+		 * This bit is temporary until the rxmode Rx offloads
+		 * API is deprecated.
+		 */
 };
 
 /**
@@ -691,6 +698,12 @@ struct rte_eth_rxq_conf {
 	uint16_t rx_free_thresh; /**< Drives the freeing of RX descriptors. */
 	uint8_t rx_drop_en; /**< Drop packets if no descriptors are available. */
 	uint8_t rx_deferred_start; /**< Do not start queue with rte_eth_dev_start(). */
+	uint64_t offloads;
+	/**
+	 * Enable Rx offloads using DEV_RX_OFFLOAD_* flags.
+	 * Supported only for devices which advertise the
+	 * RTE_ETH_DEV_RXQ_OFFLOAD capability.
+	 */
 };
 
 #define ETH_TXQ_FLAGS_NOMULTSEGS 0x0001 /**< nb_segs=1 for all mbufs */
@@ -907,6 +920,18 @@ struct rte_eth_conf {
 #define DEV_RX_OFFLOAD_QINQ_STRIP  0x00000020
 #define DEV_RX_OFFLOAD_OUTER_IPV4_CKSUM 0x00000040
 #define DEV_RX_OFFLOAD_MACSEC_STRIP     0x00000080
+#define DEV_RX_OFFLOAD_HEADER_SPLIT	0x00000100
+#define DEV_RX_OFFLOAD_VLAN_FILTER	0x00000200
+#define DEV_RX_OFFLOAD_VLAN_EXTEND	0x00000400
+#define DEV_RX_OFFLOAD_JUMBO_FRAME	0x00000800
+#define DEV_RX_OFFLOAD_CRC_STRIP	0x00001000
+#define DEV_RX_OFFLOAD_SCATTER		0x00002000
+#define DEV_RX_OFFLOAD_CHECKSUM (DEV_RX_OFFLOAD_IPV4_CKSUM | \
+				 DEV_RX_OFFLOAD_UDP_CKSUM | \
+				 DEV_RX_OFFLOAD_TCP_CKSUM)
+#define DEV_RX_OFFLOAD_VLAN (DEV_RX_OFFLOAD_VLAN_STRIP | \
+			     DEV_RX_OFFLOAD_VLAN_FILTER | \
+			     DEV_RX_OFFLOAD_VLAN_EXTEND)
 
 /**
  * TX offload capabilities of a device.
@@ -1723,6 +1748,8 @@ struct rte_eth_dev_data {
 #define RTE_ETH_DEV_BONDED_SLAVE 0x0004
 /** Device supports device removal interrupt */
 #define RTE_ETH_DEV_INTR_RMV     0x0008
+/** Device supports the rte_eth_rxq_conf offloads API */
+#define RTE_ETH_DEV_RXQ_OFFLOAD 0x0010
 
 /**
  * @internal
@@ -1872,6 +1899,9 @@ uint32_t rte_eth_speed_bitflag(uint32_t speed, int duplex);
  *        each statically configurable offload hardware feature provided by
  *        Ethernet devices, such as IP checksum or VLAN tag stripping for
  *        example.
+ *        This offloads API is obsolete and will be deprecated. Applications
+ *        should set the ignore_offloads bit in the *rxmode* structure and use
+ *        the offloads field of the *rte_eth_rxq_conf* structure.
  *     - the Receive Side Scaling (RSS) configuration when using multiple RX
  *         queues per port.
  *
@@ -1925,6 +1955,8 @@ void _rte_eth_dev_reset(struct rte_eth_dev *dev);
  *   The *rx_conf* structure contains an *rx_thresh* structure with the values
  *   of the Prefetch, Host, and Write-Back threshold registers of the receive
  *   ring.
+ *   In addition it contains the hardware offloads features to activate using
+ *   the DEV_RX_OFFLOAD_* flags.
  * @param mb_pool
  *   The pointer to the memory pool from which to allocate *rte_mbuf* network
  *   memory buffers to populate each descriptor of the receive ring.
@@ -1936,6 +1968,7 @@ void _rte_eth_dev_reset(struct rte_eth_dev *dev);
  *   - -ENOMEM: Unable to allocate the receive ring descriptors or to
  *      allocate network memory buffers from the memory pool when
  *      initializing receive descriptors.
+ *   - -ENOTSUP: Device does not support the queue configuration.
  */
 int rte_eth_rx_queue_setup(uint8_t port_id, uint16_t rx_queue_id,
 		uint16_t nb_rx_desc, unsigned int socket_id,
-- 
2.12.0

^ permalink raw reply	[flat|nested] 134+ messages in thread

* [dpdk-dev] [PATCH 3/4] ethdev: introduce Tx queue offloads API
  2017-09-04  7:12 [dpdk-dev] [PATCH 0/4] ethdev new offloads API Shahaf Shuler
  2017-09-04  7:12 ` [dpdk-dev] [PATCH 1/4] ethdev: rename Rx and Tx configuration structs Shahaf Shuler
  2017-09-04  7:12 ` [dpdk-dev] [PATCH 2/4] ethdev: introduce Rx queue offloads API Shahaf Shuler
@ 2017-09-04  7:12 ` Shahaf Shuler
  2017-09-04  7:12 ` [dpdk-dev] [PATCH 4/4] ethdev: add helpers to move to the new " Shahaf Shuler
  2017-09-10 12:07 ` [dpdk-dev] [PATCH v2 0/2] ethdev " Shahaf Shuler
  4 siblings, 0 replies; 134+ messages in thread
From: Shahaf Shuler @ 2017-09-04  7:12 UTC (permalink / raw)
  To: thomas; +Cc: dev

Introduce a new API to configure Tx offloads.

The new API will re-use existing DEV_TX_OFFLOAD_* flags
to enable the different offloads. This will ease the process
of adding new Tx offloads, as no ABI breakage is involved.
In addition, the Tx offloads will be disabled by default and be
enabled according to application needs. This will greatly simplify
PMD management of the different offloads.

The new API does not have an equivalent for the below, benchmark
specific, flags:

	- ETH_TXQ_FLAGS_NOREFCOUNT
	- ETH_TXQ_FLAGS_NOMULTMEMP

The Tx queue offload API can be used only with devices which advertise
the RTE_ETH_DEV_TXQ_OFFLOAD capability.
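
For illustration (not part of this patch), an application might use the
new Tx API roughly as follows; the helper name and its parameters are
placeholders:

	#include <rte_ethdev.h>

	/* Hypothetical application helper: configure one Tx queue
	 * using the new per-queue offloads API. */
	static int
	setup_tx_with_offloads(uint8_t port_id, uint16_t nb_txd,
			       unsigned int socket_id)
	{
		struct rte_eth_dev_info dev_info;
		struct rte_eth_txq_conf txq_conf;

		rte_eth_dev_info_get(port_id, &dev_info);
		txq_conf = dev_info.default_txconf;
		/* Ignore the legacy txq_flags; use 'offloads' instead. */
		txq_conf.txq_flags = ETH_TXQ_FLAGS_IGNORE;
		txq_conf.offloads = DEV_TX_OFFLOAD_IPV4_CKSUM |
				    DEV_TX_OFFLOAD_TCP_CKSUM;

		/* Returns -ENOTSUP if the device rejects the config. */
		return rte_eth_tx_queue_setup(port_id, 0, nb_txd,
					      socket_id, &txq_conf);
	}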

PMDs which move to the new API but support Tx offloads only per
port should return an error (-ENOTSUP) from the queue setup in case
of a mixed configuration.

The old Tx offloads API is kept for the time being, in order to enable a
smooth transition for PMDs and applications to the new API.

Signed-off-by: Shahaf Shuler <shahafs@mellanox.com>
---
 doc/guides/nics/features.rst  |  8 ++++++++
 lib/librte_ether/rte_ethdev.h | 24 ++++++++++++++++++++++++
 2 files changed, 32 insertions(+)

diff --git a/doc/guides/nics/features.rst b/doc/guides/nics/features.rst
index f2c8497c2..bb25a1cee 100644
--- a/doc/guides/nics/features.rst
+++ b/doc/guides/nics/features.rst
@@ -131,6 +131,7 @@ Lock-free Tx queue
 If a PMD advertises DEV_TX_OFFLOAD_MT_LOCKFREE capable, multiple threads can
 invoke rte_eth_tx_burst() concurrently on the same Tx queue without SW lock.
 
+* **[uses]    rte_eth_txq_conf**: ``offloads:DEV_TX_OFFLOAD_MT_LOCKFREE``.
 * **[provides] rte_eth_dev_info**: ``tx_offload_capa:DEV_TX_OFFLOAD_MT_LOCKFREE``.
 * **[related]  API**: ``rte_eth_tx_burst()``.
 
@@ -220,6 +221,7 @@ TSO
 
 Supports TCP Segmentation Offloading.
 
+* **[uses]       rte_eth_txq_conf**: ``offloads:DEV_TX_OFFLOAD_TCP_TSO``.
 * **[uses]       rte_eth_desc_lim**: ``nb_seg_max``, ``nb_mtu_seg_max``.
 * **[uses]       mbuf**: ``mbuf.ol_flags:PKT_TX_TCP_SEG``.
 * **[uses]       mbuf**: ``mbuf.tso_segsz``, ``mbuf.l2_len``, ``mbuf.l3_len``, ``mbuf.l4_len``.
@@ -510,6 +512,7 @@ VLAN offload
 Supports VLAN offload to hardware.
 
 * **[uses]       rte_eth_rxq_conf**: ``offloads:DEV_RX_OFFLOAD_VLAN_STRIP,DEV_RX_OFFLOAD_VLAN_FILTER,DEV_RX_OFFLOAD_VLAN_EXTEND``.
+* **[uses]       rte_eth_txq_conf**: ``offloads:DEV_TX_OFFLOAD_VLAN_INSERT``.
 * **[implements] eth_dev_ops**: ``vlan_offload_set``.
 * **[provides]   mbuf**: ``mbuf.ol_flags:PKT_RX_VLAN_STRIPPED``, ``mbuf.vlan_tci``.
 * **[provides]   rte_eth_dev_info**: ``rx_offload_capa:DEV_RX_OFFLOAD_VLAN_STRIP``,
@@ -526,6 +529,7 @@ QinQ offload
 Supports QinQ (queue in queue) offload.
 
 * **[uses]     rte_eth_rxq_conf**: ``offloads:DEV_RX_OFFLOAD_QINQ_STRIP``.
+* **[uses]     rte_eth_txq_conf**: ``offloads:DEV_TX_OFFLOAD_QINQ_INSERT``.
 * **[uses]     mbuf**: ``mbuf.ol_flags:PKT_TX_QINQ_PKT``.
 * **[provides] mbuf**: ``mbuf.ol_flags:PKT_RX_QINQ_STRIPPED``, ``mbuf.vlan_tci``,
    ``mbuf.vlan_tci_outer``.
@@ -541,6 +545,7 @@ L3 checksum offload
 Supports L3 checksum offload.
 
 * **[uses]     rte_eth_rxq_conf**: ``offloads:DEV_RX_OFFLOAD_IPV4_CKSUM``.
+* **[uses]     rte_eth_txq_conf**: ``offloads:DEV_TX_OFFLOAD_IPV4_CKSUM``.
 * **[uses]     mbuf**: ``mbuf.ol_flags:PKT_TX_IP_CKSUM``,
   ``mbuf.ol_flags:PKT_TX_IPV4`` | ``PKT_TX_IPV6``.
 * **[provides] mbuf**: ``mbuf.ol_flags:PKT_RX_IP_CKSUM_UNKNOWN`` |
@@ -558,6 +563,7 @@ L4 checksum offload
 Supports L4 checksum offload.
 
 * **[uses]     rte_eth_rxq_conf**: ``offloads:DEV_RX_OFFLOAD_UDP_CKSUM,DEV_RX_OFFLOAD_TCP_CKSUM``.
+* **[uses]     rte_eth_txq_conf**: ``offloads:DEV_TX_OFFLOAD_UDP_CKSUM,DEV_TX_OFFLOAD_TCP_CKSUM,DEV_TX_OFFLOAD_SCTP_CKSUM``.
 * **[uses]     mbuf**: ``mbuf.ol_flags:PKT_TX_IPV4`` | ``PKT_TX_IPV6``,
   ``mbuf.ol_flags:PKT_TX_L4_NO_CKSUM`` | ``PKT_TX_TCP_CKSUM`` |
   ``PKT_TX_SCTP_CKSUM`` | ``PKT_TX_UDP_CKSUM``.
@@ -576,6 +582,7 @@ MACsec offload
 Supports MACsec.
 
 * **[uses]     rte_eth_rxq_conf**: ``offloads:DEV_RX_OFFLOAD_MACSEC_STRIP``.
+* **[uses]     rte_eth_txq_conf**: ``offloads:DEV_TX_OFFLOAD_MACSEC_INSERT``.
 * **[uses]     mbuf**: ``mbuf.ol_flags:PKT_TX_MACSEC``.
 * **[provides] rte_eth_dev_info**: ``rx_offload_capa:DEV_RX_OFFLOAD_MACSEC_STRIP``,
   ``tx_offload_capa:DEV_TX_OFFLOAD_MACSEC_INSERT``.
@@ -589,6 +596,7 @@ Inner L3 checksum
 Supports inner packet L3 checksum.
 
 * **[uses]     rte_eth_rxq_conf**: ``offloads:DEV_RX_OFFLOAD_OUTER_IPV4_CKSUM``.
+* **[uses]     rte_eth_txq_conf**: ``offloads:DEV_TX_OFFLOAD_OUTER_IPV4_CKSUM``.
 * **[uses]     mbuf**: ``mbuf.ol_flags:PKT_TX_IP_CKSUM``,
   ``mbuf.ol_flags:PKT_TX_IPV4`` | ``PKT_TX_IPV6``,
   ``mbuf.ol_flags:PKT_TX_OUTER_IP_CKSUM``,
diff --git a/lib/librte_ether/rte_ethdev.h b/lib/librte_ether/rte_ethdev.h
index 90934418d..1293b9922 100644
--- a/lib/librte_ether/rte_ethdev.h
+++ b/lib/librte_ether/rte_ethdev.h
@@ -719,6 +719,14 @@ struct rte_eth_rxq_conf {
 #define ETH_TXQ_FLAGS_NOXSUMS \
 		(ETH_TXQ_FLAGS_NOXSUMSCTP | ETH_TXQ_FLAGS_NOXSUMUDP | \
 		 ETH_TXQ_FLAGS_NOXSUMTCP)
+#define ETH_TXQ_FLAGS_IGNORE	0x8000
+	/**
+	 * When set, the txq_flags are ignored and the Tx offloads
+	 * are taken from the offloads field of the rte_eth_txq_conf
+	 * struct instead.
+	 * This flag is temporary until the rte_eth_txq_conf.txq_flags
+	 * API is deprecated.
+	 */
 /**
  * A structure used to configure a TX ring of an Ethernet port.
  */
@@ -730,6 +738,12 @@ struct rte_eth_txq_conf {
 
 	uint32_t txq_flags; /**< Set flags for the Tx queue */
 	uint8_t tx_deferred_start; /**< Do not start queue with rte_eth_dev_start(). */
+	uint64_t offloads;
+	/**
+	 * Enable Tx offloads using DEV_TX_OFFLOAD_* flags.
+	 * Supported only for devices which advertise the
+	 * RTE_ETH_DEV_TXQ_OFFLOAD capability.
+	 */
 };
 
 /**
@@ -954,6 +968,8 @@ struct rte_eth_conf {
 /**< Multiple threads can invoke rte_eth_tx_burst() concurrently on the same
  * tx queue without SW lock.
  */
+#define DEV_TX_OFFLOAD_MULTI_SEGS	0x00008000
+/**< Multi-segment send is supported. */
 
 struct rte_pci_device;
 
@@ -1750,6 +1766,8 @@ struct rte_eth_dev_data {
 #define RTE_ETH_DEV_INTR_RMV     0x0008
 /** Device supports the rte_eth_rxq_conf offloads API */
 #define RTE_ETH_DEV_RXQ_OFFLOAD 0x0010
+/** Device supports the rte_eth_txq_conf offloads API */
+#define RTE_ETH_DEV_TXQ_OFFLOAD 0x0020
 
 /**
  * @internal
@@ -2011,12 +2029,18 @@ int rte_eth_rx_queue_setup(uint8_t port_id, uint16_t rx_queue_id,
  *   - The *txq_flags* member contains flags to pass to the TX queue setup
  *     function to configure the behavior of the TX queue. This should be set
  *     to 0 if no special configuration is required.
+ *     This API is obsolete and will be deprecated. Applications
+ *     should set it to ETH_TXQ_FLAGS_IGNORE and use
+ *     the offloads field below.
+ *   - The *offloads* member contains Tx offloads to be enabled.
+ *     Offloads which are not set cannot be used on the datapath.
  *
  *     Note that setting *tx_free_thresh* or *tx_rs_thresh* value to 0 forces
  *     the transmit function to use default values.
  * @return
  *   - 0: Success, the transmit queue is correctly set up.
  *   - -ENOMEM: Unable to allocate the transmit ring descriptors.
+ *   - -ENOTSUP: The device does not support the requested queue configuration.
  */
 int rte_eth_tx_queue_setup(uint8_t port_id, uint16_t tx_queue_id,
 		uint16_t nb_tx_desc, unsigned int socket_id,
-- 
2.12.0

^ permalink raw reply	[flat|nested] 134+ messages in thread
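
For illustration, a minimal application-side sketch of the API added in the
patch above (not part of the patch set; it assumes the PMD reports the
DEV_TX_OFFLOAD_* bits used below in dev_info.tx_offload_capa):

#include <rte_ethdev.h>
#include <rte_ether.h>
#include <rte_ip.h>
#include <rte_lcore.h>

/* Illustrative only -- configure Tx queue 0 through the new per-queue
 * offloads API instead of the ETH_TXQ_FLAGS_NO* flags.
 */
static int
setup_txq_new_api(uint8_t port_id)
{
	struct rte_eth_dev_info dev_info;
	struct rte_eth_txq_conf txq_conf;

	rte_eth_dev_info_get(port_id, &dev_info);
	txq_conf = dev_info.default_txconf;
	/* Opt in to the new API; txq_flags is ignored from here on. */
	txq_conf.txq_flags = ETH_TXQ_FLAGS_IGNORE;
	/* Enable only what the datapath will use; unset offloads must
	 * not be relied upon when transmitting. */
	txq_conf.offloads = DEV_TX_OFFLOAD_IPV4_CKSUM |
			    DEV_TX_OFFLOAD_TCP_CKSUM |
			    DEV_TX_OFFLOAD_MULTI_SEGS;
	return rte_eth_tx_queue_setup(port_id, 0, 512, rte_socket_id(),
				      &txq_conf);
}

/* Per packet, request the checksum offloads enabled on the queue. */
static void
mark_tx_cksum(struct rte_mbuf *m)
{
	m->l2_len = sizeof(struct ether_hdr);
	m->l3_len = sizeof(struct ipv4_hdr);
	m->ol_flags |= PKT_TX_IPV4 | PKT_TX_IP_CKSUM | PKT_TX_TCP_CKSUM;
}

Against a PMD that has not yet adopted the new API, the helpers added in
patch 4/4 convert the offloads field back to txq_flags before the driver
callback runs, so the same application code keeps working.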

* [dpdk-dev] [PATCH 4/4] ethdev: add helpers to move to the new offloads API
  2017-09-04  7:12 [dpdk-dev] [PATCH 0/4] ethdev new offloads API Shahaf Shuler
                   ` (2 preceding siblings ...)
  2017-09-04  7:12 ` [dpdk-dev] [PATCH 3/4] ethdev: introduce Tx " Shahaf Shuler
@ 2017-09-04  7:12 ` Shahaf Shuler
  2017-09-04 12:13   ` Ananyev, Konstantin
  2017-09-04 13:25   ` Ananyev, Konstantin
  2017-09-10 12:07 ` [dpdk-dev] [PATCH v2 0/2] ethdev " Shahaf Shuler
  4 siblings, 2 replies; 134+ messages in thread
From: Shahaf Shuler @ 2017-09-04  7:12 UTC (permalink / raw)
  To: thomas; +Cc: dev

A new offloads API was introduced by commits:

commit 121fff673172 ("ethdev: introduce Rx queue offloads API")
commit 35ac80d92f29 ("ethdev: introduce Tx queue offloads API")

In order to enable the PMDs to support only one of the APIs,
conversion functions from the old API to the new one were added.

Signed-off-by: Shahaf Shuler <shahafs@mellanox.com>
---
 lib/librte_ether/rte_ethdev.c | 99 +++++++++++++++++++++++++++++++++++++-
 1 file changed, 97 insertions(+), 2 deletions(-)

diff --git a/lib/librte_ether/rte_ethdev.c b/lib/librte_ether/rte_ethdev.c
index 50f8aa98d..1aa21a129 100644
--- a/lib/librte_ether/rte_ethdev.c
+++ b/lib/librte_ether/rte_ethdev.c
@@ -1006,6 +1006,34 @@ rte_eth_dev_close(uint8_t port_id)
 	dev->data->tx_queues = NULL;
 }
 
+/**
+ * A conversion function from the rxmode offloads API to the
+ * rte_eth_rxq_conf offloads API.
+ */
+static void
+rte_eth_convert_rxmode_offloads(struct rte_eth_rxmode *rxmode,
+				struct rte_eth_rxq_conf *rxq_conf)
+{
+	if (rxmode->header_split == 1)
+		rxq_conf->offloads |= DEV_RX_OFFLOAD_HEADER_SPLIT;
+	if (rxmode->hw_ip_checksum == 1)
+		rxq_conf->offloads |= DEV_RX_OFFLOAD_CHECKSUM;
+	if (rxmode->hw_vlan_filter == 1)
+		rxq_conf->offloads |= DEV_RX_OFFLOAD_VLAN_FILTER;
+	if (rxmode->hw_vlan_strip == 1)
+		rxq_conf->offloads |= DEV_RX_OFFLOAD_VLAN_STRIP;
+	if (rxmode->hw_vlan_extend == 1)
+		rxq_conf->offloads |= DEV_RX_OFFLOAD_VLAN_EXTEND;
+	if (rxmode->jumbo_frame == 1)
+		rxq_conf->offloads |= DEV_RX_OFFLOAD_JUMBO_FRAME;
+	if (rxmode->hw_strip_crc == 1)
+		rxq_conf->offloads |= DEV_RX_OFFLOAD_CRC_STRIP;
+	if (rxmode->enable_scatter == 1)
+		rxq_conf->offloads |= DEV_RX_OFFLOAD_SCATTER;
+	if (rxmode->enable_lro == 1)
+		rxq_conf->offloads |= DEV_RX_OFFLOAD_TCP_LRO;
+}
+
 int
 rte_eth_rx_queue_setup(uint8_t port_id, uint16_t rx_queue_id,
 		       uint16_t nb_rx_desc, unsigned int socket_id,
@@ -1016,6 +1044,8 @@ rte_eth_rx_queue_setup(uint8_t port_id, uint16_t rx_queue_id,
 	uint32_t mbp_buf_size;
 	struct rte_eth_dev *dev;
 	struct rte_eth_dev_info dev_info;
+	struct rte_eth_rxq_conf rxq_trans_conf;
+	/* Holds translated configuration to be passed to the PMD */
 	void **rxq;
 
 	RTE_ETH_VALID_PORTID_OR_ERR_RET(port_id, -EINVAL);
@@ -1062,6 +1092,11 @@ rte_eth_rx_queue_setup(uint8_t port_id, uint16_t rx_queue_id,
 		return -EINVAL;
 	}
 
+	if ((!(dev->data->dev_flags & RTE_ETH_DEV_RXQ_OFFLOAD)) &&
+	    (dev->data->dev_conf.rxmode.ignore_offloads == 1)) {
+		return -ENOTSUP;
+	}
+
 	if (nb_rx_desc > dev_info.rx_desc_lim.nb_max ||
 			nb_rx_desc < dev_info.rx_desc_lim.nb_min ||
 			nb_rx_desc % dev_info.rx_desc_lim.nb_align != 0) {
@@ -1086,8 +1121,15 @@ rte_eth_rx_queue_setup(uint8_t port_id, uint16_t rx_queue_id,
 	if (rx_conf == NULL)
 		rx_conf = &dev_info.default_rxconf;
 
+	rxq_trans_conf = *rx_conf;
+	if ((dev->data->dev_flags & RTE_ETH_DEV_RXQ_OFFLOAD) &&
+	    (dev->data->dev_conf.rxmode.ignore_offloads == 0)) {
+		rte_eth_convert_rxmode_offloads(&dev->data->dev_conf.rxmode,
+						&rxq_trans_conf);
+	}
+
 	ret = (*dev->dev_ops->rx_queue_setup)(dev, rx_queue_id, nb_rx_desc,
-					      socket_id, rx_conf, mp);
+					      socket_id, &rxq_trans_conf, mp);
 	if (!ret) {
 		if (!dev->data->min_rx_buf_size ||
 		    dev->data->min_rx_buf_size > mbp_buf_size)
@@ -1097,6 +1139,49 @@ rte_eth_rx_queue_setup(uint8_t port_id, uint16_t rx_queue_id,
 	return ret;
 }
 
+/**
+ * A conversion function from the txq_flags API to the rte_eth_txq_conf offloads API.
+ */
+static void
+rte_eth_convert_txq_flags(struct rte_eth_txq_conf *txq_conf)
+{
+	uint32_t txq_flags = txq_conf->txq_flags;
+	uint64_t *offloads = &txq_conf->offloads;
+
+	if (!(txq_flags & ETH_TXQ_FLAGS_NOMULTSEGS))
+		*offloads |= DEV_TX_OFFLOAD_MULTI_SEGS;
+	if (!(txq_flags & ETH_TXQ_FLAGS_NOVLANOFFL))
+		*offloads |= DEV_TX_OFFLOAD_VLAN_INSERT;
+	if (!(txq_flags & ETH_TXQ_FLAGS_NOXSUMSCTP))
+		*offloads |= DEV_TX_OFFLOAD_SCTP_CKSUM;
+	if (!(txq_flags & ETH_TXQ_FLAGS_NOXSUMUDP))
+		*offloads |= DEV_TX_OFFLOAD_UDP_CKSUM;
+	if (!(txq_flags & ETH_TXQ_FLAGS_NOXSUMTCP))
+		*offloads |= DEV_TX_OFFLOAD_TCP_CKSUM;
+}
+
+/**
+ * A conversion function from the rte_eth_txq_conf offloads API to the
+ * txq_flags API.
+ */
+static void
+rte_eth_convert_txq_offloads(struct rte_eth_txq_conf *txq_conf)
+{
+	uint32_t *txq_flags = &txq_conf->txq_flags;
+	uint64_t offloads = txq_conf->offloads;
+
+	if (!(offloads & DEV_TX_OFFLOAD_MULTI_SEGS))
+		*txq_flags |= ETH_TXQ_FLAGS_NOMULTSEGS;
+	if (!(offloads & DEV_TX_OFFLOAD_VLAN_INSERT))
+		*txq_flags |= ETH_TXQ_FLAGS_NOVLANOFFL;
+	if (!(offloads & DEV_TX_OFFLOAD_SCTP_CKSUM))
+		*txq_flags |= ETH_TXQ_FLAGS_NOXSUMSCTP;
+	if (!(offloads & DEV_TX_OFFLOAD_UDP_CKSUM))
+		*txq_flags |= ETH_TXQ_FLAGS_NOXSUMUDP;
+	if (!(offloads & DEV_TX_OFFLOAD_TCP_CKSUM))
+		*txq_flags |= ETH_TXQ_FLAGS_NOXSUMTCP;
+}
+
 int
 rte_eth_tx_queue_setup(uint8_t port_id, uint16_t tx_queue_id,
 		       uint16_t nb_tx_desc, unsigned int socket_id,
@@ -1104,6 +1189,8 @@ rte_eth_tx_queue_setup(uint8_t port_id, uint16_t tx_queue_id,
 {
 	struct rte_eth_dev *dev;
 	struct rte_eth_dev_info dev_info;
+	struct rte_eth_txq_conf txq_trans_conf;
+	/* Holds translated configuration to be passed to the PMD */
 	void **txq;
 
 	RTE_ETH_VALID_PORTID_OR_ERR_RET(port_id, -EINVAL);
@@ -1148,8 +1235,16 @@ rte_eth_tx_queue_setup(uint8_t port_id, uint16_t tx_queue_id,
 	if (tx_conf == NULL)
 		tx_conf = &dev_info.default_txconf;
 
+	txq_trans_conf = *tx_conf;
+	if ((dev->data->dev_flags & RTE_ETH_DEV_TXQ_OFFLOAD) &&
+	    (!(tx_conf->txq_flags & ETH_TXQ_FLAGS_IGNORE)))
+		rte_eth_convert_txq_flags(&txq_trans_conf);
+	else if (!(dev->data->dev_flags & RTE_ETH_DEV_TXQ_OFFLOAD) &&
+		 (tx_conf->txq_flags & ETH_TXQ_FLAGS_IGNORE))
+		rte_eth_convert_txq_offloads(&txq_trans_conf);
+
 	return (*dev->dev_ops->tx_queue_setup)(dev, tx_queue_id, nb_tx_desc,
-					       socket_id, tx_conf);
+					       socket_id, &txq_trans_conf);
 }
 
 void
-- 
2.12.0

^ permalink raw reply	[flat|nested] 134+ messages in thread
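
To make the dispatch in rte_eth_tx_queue_setup() above easier to follow,
the four combinations it handles are summarized below (illustrative
summary only, using the names from the patch):

/*
 * PMD advertises          App sets             Action on the copied
 * RTE_ETH_DEV_TXQ_OFFLOAD ETH_TXQ_FLAGS_IGNORE configuration
 * ----------------------- -------------------- ------------------------------
 * yes                     no                   txq_flags -> offloads
 *                                              (rte_eth_convert_txq_flags)
 * yes                     yes                  offloads passed as-is
 * no                      no                   txq_flags passed as-is
 * no                      yes                  offloads -> txq_flags
 *                                              (rte_eth_convert_txq_offloads)
 */

The Rx path follows the same idea: for example, a legacy configuration with
rxmode.hw_ip_checksum = 1 and rxmode.jumbo_frame = 1 is translated by
rte_eth_convert_rxmode_offloads() into
rxq_conf.offloads = DEV_RX_OFFLOAD_CHECKSUM | DEV_RX_OFFLOAD_JUMBO_FRAME
before a PMD that advertises RTE_ETH_DEV_RXQ_OFFLOAD sees it.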

* Re: [dpdk-dev] [PATCH 1/4] ethdev: rename Rx and Tx configuration structs
  2017-09-04  7:12 ` [dpdk-dev] [PATCH 1/4] ethdev: rename Rx and Tx configuration structs Shahaf Shuler
@ 2017-09-04 12:06   ` Ananyev, Konstantin
  2017-09-04 12:45     ` Shahaf Shuler
  0 siblings, 1 reply; 134+ messages in thread
From: Ananyev, Konstantin @ 2017-09-04 12:06 UTC (permalink / raw)
  To: Shahaf Shuler, thomas; +Cc: dev

Hi Shahaf,

> -----Original Message-----
> From: dev [mailto:dev-bounces@dpdk.org] On Behalf Of Shahaf Shuler
> Sent: Monday, September 4, 2017 8:12 AM
> To: thomas@monjalon.net
> Cc: dev@dpdk.org
> Subject: [dpdk-dev] [PATCH 1/4] ethdev: rename Rx and Tx configuration structs
> 
> Rename the structs rte_eth_txconf and rte_eth_rxconf to
> rte_eth_txq_conf and rte_eth_rxq_conf respectively, as those
> structs represent per-queue configuration.

If we are not going to force all PMDs to support the new API in 17.11,
then there is probably not much point in renaming these structs in 17.11.
I suppose most users will stick with the old API until all PMDs move
to the new one - that would allow them to avoid the necessity of supporting
both flavors. In such a case, forcing them to modify their code without
getting anything in return seems like an unnecessary hassle.
Konstantin

> 
> Rename was done with the following commands:
> 
> find . \( -name '*.h' -or -name '*.c' \) -print0 | xargs -0 sed -i
> 's/rte_eth_txconf/rte_eth_txq_conf/g'
> 
> find . \( -name '*.h' -or -name '*.c' \) -print0 | xargs -0 sed -i
> 's/rte_eth_rxconf/rte_eth_rxq_conf/g'
> 
> Signed-off-by: Shahaf Shuler <shahafs@mellanox.com>
> ---
>  app/test-pmd/config.c                           |  4 +--
>  app/test-pmd/testpmd.h                          |  4 +--
>  drivers/net/af_packet/rte_eth_af_packet.c       |  4 +--
>  drivers/net/ark/ark_ethdev_rx.c                 |  4 +--
>  drivers/net/ark/ark_ethdev_rx.h                 |  2 +-
>  drivers/net/ark/ark_ethdev_tx.c                 |  2 +-
>  drivers/net/ark/ark_ethdev_tx.h                 |  2 +-
>  drivers/net/avp/avp_ethdev.c                    |  8 +++---
>  drivers/net/bnx2x/bnx2x_rxtx.c                  |  4 +--
>  drivers/net/bnx2x/bnx2x_rxtx.h                  |  4 +--
>  drivers/net/bnxt/bnxt_ethdev.c                  |  4 +--
>  drivers/net/bnxt/bnxt_rxq.c                     |  2 +-
>  drivers/net/bnxt/bnxt_rxq.h                     |  2 +-
>  drivers/net/bnxt/bnxt_txq.c                     |  2 +-
>  drivers/net/bnxt/bnxt_txq.h                     |  2 +-
>  drivers/net/bonding/rte_eth_bond_pmd.c          |  7 ++---
>  drivers/net/bonding/rte_eth_bond_private.h      |  4 +--
>  drivers/net/cxgbe/cxgbe_ethdev.c                |  4 +--
>  drivers/net/dpaa2/dpaa2_ethdev.c                |  4 +--
>  drivers/net/e1000/e1000_ethdev.h                |  8 +++---
>  drivers/net/e1000/em_rxtx.c                     |  4 +--
>  drivers/net/e1000/igb_ethdev.c                  |  8 +++---
>  drivers/net/e1000/igb_rxtx.c                    |  4 +--
>  drivers/net/ena/ena_ethdev.c                    | 28 +++++++++++---------
>  drivers/net/enic/enic_ethdev.c                  |  6 ++---
>  drivers/net/failsafe/failsafe_ops.c             |  4 +--
>  drivers/net/fm10k/fm10k_ethdev.c                | 12 ++++-----
>  drivers/net/i40e/i40e_ethdev.c                  |  4 +--
>  drivers/net/i40e/i40e_ethdev_vf.c               |  4 +--
>  drivers/net/i40e/i40e_rxtx.c                    |  4 +--
>  drivers/net/i40e/i40e_rxtx.h                    |  4 +--
>  drivers/net/ixgbe/ixgbe_ethdev.c                |  8 +++---
>  drivers/net/ixgbe/ixgbe_ethdev.h                |  4 +--
>  drivers/net/ixgbe/ixgbe_rxtx.c                  |  4 +--
>  drivers/net/kni/rte_eth_kni.c                   |  4 +--
>  drivers/net/liquidio/lio_ethdev.c               |  8 +++---
>  drivers/net/mlx4/mlx4.c                         | 12 ++++-----
>  drivers/net/mlx5/mlx5_rxq.c                     |  4 +--
>  drivers/net/mlx5/mlx5_rxtx.h                    |  6 ++---
>  drivers/net/mlx5/mlx5_txq.c                     |  4 +--
>  drivers/net/nfp/nfp_net.c                       | 12 ++++-----
>  drivers/net/null/rte_eth_null.c                 |  4 +--
>  drivers/net/pcap/rte_eth_pcap.c                 |  4 +--
>  drivers/net/qede/qede_ethdev.c                  |  2 +-
>  drivers/net/qede/qede_rxtx.c                    |  4 +--
>  drivers/net/qede/qede_rxtx.h                    |  4 +--
>  drivers/net/ring/rte_eth_ring.c                 | 20 +++++++-------
>  drivers/net/sfc/sfc_ethdev.c                    |  4 +--
>  drivers/net/sfc/sfc_rx.c                        |  4 +--
>  drivers/net/sfc/sfc_rx.h                        |  2 +-
>  drivers/net/sfc/sfc_tx.c                        |  4 +--
>  drivers/net/sfc/sfc_tx.h                        |  2 +-
>  drivers/net/szedata2/rte_eth_szedata2.c         |  4 +--
>  drivers/net/tap/rte_eth_tap.c                   |  4 +--
>  drivers/net/thunderx/nicvf_ethdev.c             |  8 +++---
>  drivers/net/vhost/rte_eth_vhost.c               |  4 +--
>  drivers/net/virtio/virtio_ethdev.c              |  2 +-
>  drivers/net/virtio/virtio_ethdev.h              |  4 +--
>  drivers/net/virtio/virtio_rxtx.c                |  8 +++---
>  drivers/net/vmxnet3/vmxnet3_ethdev.h            |  4 +--
>  drivers/net/vmxnet3/vmxnet3_rxtx.c              |  4 +--
>  drivers/net/xenvirt/rte_eth_xenvirt.c           | 20 +++++++-------
>  examples/ip_fragmentation/main.c                |  2 +-
>  examples/ip_pipeline/app.h                      |  4 +--
>  examples/ip_reassembly/main.c                   |  2 +-
>  examples/ipsec-secgw/ipsec-secgw.c              |  2 +-
>  examples/ipv4_multicast/main.c                  |  2 +-
>  examples/l3fwd-acl/main.c                       |  2 +-
>  examples/l3fwd-power/main.c                     |  2 +-
>  examples/l3fwd-vf/main.c                        |  2 +-
>  examples/l3fwd/main.c                           |  2 +-
>  examples/netmap_compat/lib/compat_netmap.c      |  4 +--
>  examples/performance-thread/l3fwd-thread/main.c |  2 +-
>  examples/ptpclient/ptpclient.c                  |  2 +-
>  examples/qos_sched/init.c                       |  4 +--
>  examples/tep_termination/vxlan_setup.c          |  4 +--
>  examples/vhost/main.c                           |  4 +--
>  examples/vhost_xen/main.c                       |  2 +-
>  examples/vmdq/main.c                            |  2 +-
>  lib/librte_ether/rte_ethdev.c                   |  4 +--
>  lib/librte_ether/rte_ethdev.h                   | 24 +++++++++--------
>  test/test-pipeline/init.c                       |  4 +--
>  test/test/test_kni.c                            |  4 +--
>  test/test/test_link_bonding.c                   |  4 +--
>  test/test/test_pmd_perf.c                       |  4 +--
>  test/test/virtual_pmd.c                         |  8 +++---
>  86 files changed, 223 insertions(+), 214 deletions(-)
> 
> diff --git a/app/test-pmd/config.c b/app/test-pmd/config.c
> index 3ae3e1cd8..392f0c57f 100644
> --- a/app/test-pmd/config.c
> +++ b/app/test-pmd/config.c
> @@ -1639,8 +1639,8 @@ rxtx_config_display(void)
>  		printf("  packet len=%u - nb packet segments=%d\n",
>  				(unsigned)tx_pkt_length, (int) tx_pkt_nb_segs);
> 
> -	struct rte_eth_rxconf *rx_conf = &ports[0].rx_conf;
> -	struct rte_eth_txconf *tx_conf = &ports[0].tx_conf;
> +	struct rte_eth_rxq_conf *rx_conf = &ports[0].rx_conf;
> +	struct rte_eth_txq_conf *tx_conf = &ports[0].tx_conf;
> 
>  	printf("  nb forwarding cores=%d - nb forwarding ports=%d\n",
>  	       nb_fwd_lcores, nb_fwd_ports);
> diff --git a/app/test-pmd/testpmd.h b/app/test-pmd/testpmd.h
> index c9d7739b8..507974f43 100644
> --- a/app/test-pmd/testpmd.h
> +++ b/app/test-pmd/testpmd.h
> @@ -189,8 +189,8 @@ struct rte_port {
>  	uint8_t                 need_reconfig_queues; /**< need reconfiguring queues or not */
>  	uint8_t                 rss_flag;   /**< enable rss or not */
>  	uint8_t                 dcb_flag;   /**< enable dcb */
> -	struct rte_eth_rxconf   rx_conf;    /**< rx configuration */
> -	struct rte_eth_txconf   tx_conf;    /**< tx configuration */
> +	struct rte_eth_rxq_conf   rx_conf;    /**< rx configuration */
> +	struct rte_eth_txq_conf   tx_conf;    /**< tx configuration */
>  	struct ether_addr       *mc_addr_pool; /**< pool of multicast addrs */
>  	uint32_t                mc_addr_nb; /**< nb. of addr. in mc_addr_pool */
>  	uint8_t                 slave_flag; /**< bonding slave port */
> diff --git a/drivers/net/af_packet/rte_eth_af_packet.c b/drivers/net/af_packet/rte_eth_af_packet.c
> index 9a47852ca..7cba0aa91 100644
> --- a/drivers/net/af_packet/rte_eth_af_packet.c
> +++ b/drivers/net/af_packet/rte_eth_af_packet.c
> @@ -395,7 +395,7 @@ eth_rx_queue_setup(struct rte_eth_dev *dev,
>                     uint16_t rx_queue_id,
>                     uint16_t nb_rx_desc __rte_unused,
>                     unsigned int socket_id __rte_unused,
> -                   const struct rte_eth_rxconf *rx_conf __rte_unused,
> +		   const struct rte_eth_rxq_conf *rx_conf __rte_unused,
>                     struct rte_mempool *mb_pool)
>  {
>  	struct pmd_internals *internals = dev->data->dev_private;
> @@ -428,7 +428,7 @@ eth_tx_queue_setup(struct rte_eth_dev *dev,
>                     uint16_t tx_queue_id,
>                     uint16_t nb_tx_desc __rte_unused,
>                     unsigned int socket_id __rte_unused,
> -                   const struct rte_eth_txconf *tx_conf __rte_unused)
> +		   const struct rte_eth_txq_conf *tx_conf __rte_unused)
>  {
> 
>  	struct pmd_internals *internals = dev->data->dev_private;
> diff --git a/drivers/net/ark/ark_ethdev_rx.c b/drivers/net/ark/ark_ethdev_rx.c
> index f5d812a55..eb5a2c70a 100644
> --- a/drivers/net/ark/ark_ethdev_rx.c
> +++ b/drivers/net/ark/ark_ethdev_rx.c
> @@ -140,7 +140,7 @@ eth_ark_dev_rx_queue_setup(struct rte_eth_dev *dev,
>  			   uint16_t queue_idx,
>  			   uint16_t nb_desc,
>  			   unsigned int socket_id,
> -			   const struct rte_eth_rxconf *rx_conf,
> +			   const struct rte_eth_rxq_conf *rx_conf,
>  			   struct rte_mempool *mb_pool)
>  {
>  	static int warning1;		/* = 0 */
> @@ -163,7 +163,7 @@ eth_ark_dev_rx_queue_setup(struct rte_eth_dev *dev,
>  	if (rx_conf != NULL && warning1 == 0) {
>  		warning1 = 1;
>  		PMD_DRV_LOG(INFO,
> -			    "Arkville ignores rte_eth_rxconf argument.\n");
> +			    "Arkville ignores rte_eth_rxq_conf argument.\n");
>  	}
> 
>  	if (RTE_PKTMBUF_HEADROOM < ARK_RX_META_SIZE) {
> diff --git a/drivers/net/ark/ark_ethdev_rx.h b/drivers/net/ark/ark_ethdev_rx.h
> index 3a54a4c91..15b494243 100644
> --- a/drivers/net/ark/ark_ethdev_rx.h
> +++ b/drivers/net/ark/ark_ethdev_rx.h
> @@ -45,7 +45,7 @@ int eth_ark_dev_rx_queue_setup(struct rte_eth_dev *dev,
>  			       uint16_t queue_idx,
>  			       uint16_t nb_desc,
>  			       unsigned int socket_id,
> -			       const struct rte_eth_rxconf *rx_conf,
> +			       const struct rte_eth_rxq_conf *rx_conf,
>  			       struct rte_mempool *mp);
>  uint32_t eth_ark_dev_rx_queue_count(struct rte_eth_dev *dev,
>  				    uint16_t rx_queue_id);
> diff --git a/drivers/net/ark/ark_ethdev_tx.c b/drivers/net/ark/ark_ethdev_tx.c
> index 0e2d60deb..0e8aaf47a 100644
> --- a/drivers/net/ark/ark_ethdev_tx.c
> +++ b/drivers/net/ark/ark_ethdev_tx.c
> @@ -234,7 +234,7 @@ eth_ark_tx_queue_setup(struct rte_eth_dev *dev,
>  		       uint16_t queue_idx,
>  		       uint16_t nb_desc,
>  		       unsigned int socket_id,
> -		       const struct rte_eth_txconf *tx_conf __rte_unused)
> +		       const struct rte_eth_txq_conf *tx_conf __rte_unused)
>  {
>  	struct ark_adapter *ark = (struct ark_adapter *)dev->data->dev_private;
>  	struct ark_tx_queue *queue;
> diff --git a/drivers/net/ark/ark_ethdev_tx.h b/drivers/net/ark/ark_ethdev_tx.h
> index 8aaafc22e..eb7ab63ed 100644
> --- a/drivers/net/ark/ark_ethdev_tx.h
> +++ b/drivers/net/ark/ark_ethdev_tx.h
> @@ -49,7 +49,7 @@ int eth_ark_tx_queue_setup(struct rte_eth_dev *dev,
>  			   uint16_t queue_idx,
>  			   uint16_t nb_desc,
>  			   unsigned int socket_id,
> -			   const struct rte_eth_txconf *tx_conf);
> +			   const struct rte_eth_txq_conf *tx_conf);
>  void eth_ark_tx_queue_release(void *vtx_queue);
>  int eth_ark_tx_queue_stop(struct rte_eth_dev *dev, uint16_t queue_id);
>  int eth_ark_tx_queue_start(struct rte_eth_dev *dev, uint16_t queue_id);
> diff --git a/drivers/net/avp/avp_ethdev.c b/drivers/net/avp/avp_ethdev.c
> index c746a0e2c..01bc08a7d 100644
> --- a/drivers/net/avp/avp_ethdev.c
> +++ b/drivers/net/avp/avp_ethdev.c
> @@ -79,14 +79,14 @@ static int avp_dev_rx_queue_setup(struct rte_eth_dev *dev,
>  				  uint16_t rx_queue_id,
>  				  uint16_t nb_rx_desc,
>  				  unsigned int socket_id,
> -				  const struct rte_eth_rxconf *rx_conf,
> +				  const struct rte_eth_rxq_conf *rx_conf,
>  				  struct rte_mempool *pool);
> 
>  static int avp_dev_tx_queue_setup(struct rte_eth_dev *dev,
>  				  uint16_t tx_queue_id,
>  				  uint16_t nb_tx_desc,
>  				  unsigned int socket_id,
> -				  const struct rte_eth_txconf *tx_conf);
> +				  const struct rte_eth_txq_conf *tx_conf);
> 
>  static uint16_t avp_recv_scattered_pkts(void *rx_queue,
>  					struct rte_mbuf **rx_pkts,
> @@ -1143,7 +1143,7 @@ avp_dev_rx_queue_setup(struct rte_eth_dev *eth_dev,
>  		       uint16_t rx_queue_id,
>  		       uint16_t nb_rx_desc,
>  		       unsigned int socket_id,
> -		       const struct rte_eth_rxconf *rx_conf,
> +		       const struct rte_eth_rxq_conf *rx_conf,
>  		       struct rte_mempool *pool)
>  {
>  	struct avp_dev *avp = AVP_DEV_PRIVATE_TO_HW(eth_dev->data->dev_private);
> @@ -1207,7 +1207,7 @@ avp_dev_tx_queue_setup(struct rte_eth_dev *eth_dev,
>  		       uint16_t tx_queue_id,
>  		       uint16_t nb_tx_desc,
>  		       unsigned int socket_id,
> -		       const struct rte_eth_txconf *tx_conf)
> +		       const struct rte_eth_txq_conf *tx_conf)
>  {
>  	struct avp_dev *avp = AVP_DEV_PRIVATE_TO_HW(eth_dev->data->dev_private);
>  	struct avp_queue *txq;
> diff --git a/drivers/net/bnx2x/bnx2x_rxtx.c b/drivers/net/bnx2x/bnx2x_rxtx.c
> index 5dd4aee7f..1a0c633b1 100644
> --- a/drivers/net/bnx2x/bnx2x_rxtx.c
> +++ b/drivers/net/bnx2x/bnx2x_rxtx.c
> @@ -60,7 +60,7 @@ bnx2x_dev_rx_queue_setup(struct rte_eth_dev *dev,
>  		       uint16_t queue_idx,
>  		       uint16_t nb_desc,
>  		       unsigned int socket_id,
> -		       __rte_unused const struct rte_eth_rxconf *rx_conf,
> +		       __rte_unused const struct rte_eth_rxq_conf *rx_conf,
>  		       struct rte_mempool *mp)
>  {
>  	uint16_t j, idx;
> @@ -246,7 +246,7 @@ bnx2x_dev_tx_queue_setup(struct rte_eth_dev *dev,
>  		       uint16_t queue_idx,
>  		       uint16_t nb_desc,
>  		       unsigned int socket_id,
> -		       const struct rte_eth_txconf *tx_conf)
> +		       const struct rte_eth_txq_conf *tx_conf)
>  {
>  	uint16_t i;
>  	unsigned int tsize;
> diff --git a/drivers/net/bnx2x/bnx2x_rxtx.h b/drivers/net/bnx2x/bnx2x_rxtx.h
> index 2e38ec26a..1c6a6b38d 100644
> --- a/drivers/net/bnx2x/bnx2x_rxtx.h
> +++ b/drivers/net/bnx2x/bnx2x_rxtx.h
> @@ -68,12 +68,12 @@ struct bnx2x_tx_queue {
> 
>  int bnx2x_dev_rx_queue_setup(struct rte_eth_dev *dev, uint16_t rx_queue_id,
>  			      uint16_t nb_rx_desc, unsigned int socket_id,
> -			      const struct rte_eth_rxconf *rx_conf,
> +			      const struct rte_eth_rxq_conf *rx_conf,
>  			      struct rte_mempool *mb_pool);
> 
>  int bnx2x_dev_tx_queue_setup(struct rte_eth_dev *dev, uint16_t tx_queue_id,
>  			      uint16_t nb_tx_desc, unsigned int socket_id,
> -			      const struct rte_eth_txconf *tx_conf);
> +			      const struct rte_eth_txq_conf *tx_conf);
> 
>  void bnx2x_dev_rx_queue_release(void *rxq);
>  void bnx2x_dev_tx_queue_release(void *txq);
> diff --git a/drivers/net/bnxt/bnxt_ethdev.c b/drivers/net/bnxt/bnxt_ethdev.c
> index c9d11228b..508e6b752 100644
> --- a/drivers/net/bnxt/bnxt_ethdev.c
> +++ b/drivers/net/bnxt/bnxt_ethdev.c
> @@ -391,7 +391,7 @@ static void bnxt_dev_info_get_op(struct rte_eth_dev *eth_dev,
>  					DEV_TX_OFFLOAD_GENEVE_TNL_TSO;
> 
>  	/* *INDENT-OFF* */
> -	dev_info->default_rxconf = (struct rte_eth_rxconf) {
> +	dev_info->default_rxconf = (struct rte_eth_rxq_conf) {
>  		.rx_thresh = {
>  			.pthresh = 8,
>  			.hthresh = 8,
> @@ -401,7 +401,7 @@ static void bnxt_dev_info_get_op(struct rte_eth_dev *eth_dev,
>  		.rx_drop_en = 0,
>  	};
> 
> -	dev_info->default_txconf = (struct rte_eth_txconf) {
> +	dev_info->default_txconf = (struct rte_eth_txq_conf) {
>  		.tx_thresh = {
>  			.pthresh = 32,
>  			.hthresh = 0,
> diff --git a/drivers/net/bnxt/bnxt_rxq.c b/drivers/net/bnxt/bnxt_rxq.c
> index 0793820b1..d0ab47c36 100644
> --- a/drivers/net/bnxt/bnxt_rxq.c
> +++ b/drivers/net/bnxt/bnxt_rxq.c
> @@ -293,7 +293,7 @@ int bnxt_rx_queue_setup_op(struct rte_eth_dev *eth_dev,
>  			       uint16_t queue_idx,
>  			       uint16_t nb_desc,
>  			       unsigned int socket_id,
> -			       const struct rte_eth_rxconf *rx_conf,
> +			       const struct rte_eth_rxq_conf *rx_conf,
>  			       struct rte_mempool *mp)
>  {
>  	struct bnxt *bp = (struct bnxt *)eth_dev->data->dev_private;
> diff --git a/drivers/net/bnxt/bnxt_rxq.h b/drivers/net/bnxt/bnxt_rxq.h
> index 01aaa007f..29c0aa0a5 100644
> --- a/drivers/net/bnxt/bnxt_rxq.h
> +++ b/drivers/net/bnxt/bnxt_rxq.h
> @@ -70,7 +70,7 @@ int bnxt_rx_queue_setup_op(struct rte_eth_dev *eth_dev,
>  			       uint16_t queue_idx,
>  			       uint16_t nb_desc,
>  			       unsigned int socket_id,
> -			       const struct rte_eth_rxconf *rx_conf,
> +			       const struct rte_eth_rxq_conf *rx_conf,
>  			       struct rte_mempool *mp);
>  void bnxt_free_rx_mbufs(struct bnxt *bp);
> 
> diff --git a/drivers/net/bnxt/bnxt_txq.c b/drivers/net/bnxt/bnxt_txq.c
> index 99dddddfc..f4701bd68 100644
> --- a/drivers/net/bnxt/bnxt_txq.c
> +++ b/drivers/net/bnxt/bnxt_txq.c
> @@ -102,7 +102,7 @@ int bnxt_tx_queue_setup_op(struct rte_eth_dev *eth_dev,
>  			       uint16_t queue_idx,
>  			       uint16_t nb_desc,
>  			       unsigned int socket_id,
> -			       const struct rte_eth_txconf *tx_conf)
> +			       const struct rte_eth_txq_conf *tx_conf)
>  {
>  	struct bnxt *bp = (struct bnxt *)eth_dev->data->dev_private;
>  	struct bnxt_tx_queue *txq;
> diff --git a/drivers/net/bnxt/bnxt_txq.h b/drivers/net/bnxt/bnxt_txq.h
> index 16f3a0bdd..5071dfd5b 100644
> --- a/drivers/net/bnxt/bnxt_txq.h
> +++ b/drivers/net/bnxt/bnxt_txq.h
> @@ -70,6 +70,6 @@ int bnxt_tx_queue_setup_op(struct rte_eth_dev *eth_dev,
>  			       uint16_t queue_idx,
>  			       uint16_t nb_desc,
>  			       unsigned int socket_id,
> -			       const struct rte_eth_txconf *tx_conf);
> +			       const struct rte_eth_txq_conf *tx_conf);
> 
>  #endif
> diff --git a/drivers/net/bonding/rte_eth_bond_pmd.c b/drivers/net/bonding/rte_eth_bond_pmd.c
> index 3ee70baa0..fbf7ffba5 100644
> --- a/drivers/net/bonding/rte_eth_bond_pmd.c
> +++ b/drivers/net/bonding/rte_eth_bond_pmd.c
> @@ -2153,7 +2153,8 @@ bond_ethdev_vlan_filter_set(struct rte_eth_dev *dev, uint16_t vlan_id, int on)
>  static int
>  bond_ethdev_rx_queue_setup(struct rte_eth_dev *dev, uint16_t rx_queue_id,
>  		uint16_t nb_rx_desc, unsigned int socket_id __rte_unused,
> -		const struct rte_eth_rxconf *rx_conf, struct rte_mempool *mb_pool)
> +		const struct rte_eth_rxq_conf *rx_conf,
> +		struct rte_mempool *mb_pool)
>  {
>  	struct bond_rx_queue *bd_rx_q = (struct bond_rx_queue *)
>  			rte_zmalloc_socket(NULL, sizeof(struct bond_rx_queue),
> @@ -2166,7 +2167,7 @@ bond_ethdev_rx_queue_setup(struct rte_eth_dev *dev, uint16_t rx_queue_id,
> 
>  	bd_rx_q->nb_rx_desc = nb_rx_desc;
> 
> -	memcpy(&(bd_rx_q->rx_conf), rx_conf, sizeof(struct rte_eth_rxconf));
> +	memcpy(&(bd_rx_q->rx_conf), rx_conf, sizeof(struct rte_eth_rxq_conf));
>  	bd_rx_q->mb_pool = mb_pool;
> 
>  	dev->data->rx_queues[rx_queue_id] = bd_rx_q;
> @@ -2177,7 +2178,7 @@ bond_ethdev_rx_queue_setup(struct rte_eth_dev *dev, uint16_t rx_queue_id,
>  static int
>  bond_ethdev_tx_queue_setup(struct rte_eth_dev *dev, uint16_t tx_queue_id,
>  		uint16_t nb_tx_desc, unsigned int socket_id __rte_unused,
> -		const struct rte_eth_txconf *tx_conf)
> +		const struct rte_eth_txq_conf *tx_conf)
>  {
>  	struct bond_tx_queue *bd_tx_q  = (struct bond_tx_queue *)
>  			rte_zmalloc_socket(NULL, sizeof(struct bond_tx_queue),
> diff --git a/drivers/net/bonding/rte_eth_bond_private.h b/drivers/net/bonding/rte_eth_bond_private.h
> index 1fe6ff880..579a18c98 100644
> --- a/drivers/net/bonding/rte_eth_bond_private.h
> +++ b/drivers/net/bonding/rte_eth_bond_private.h
> @@ -74,7 +74,7 @@ struct bond_rx_queue {
>  	/**< Reference to eth_dev private structure */
>  	uint16_t nb_rx_desc;
>  	/**< Number of RX descriptors available for the queue */
> -	struct rte_eth_rxconf rx_conf;
> +	struct rte_eth_rxq_conf rx_conf;
>  	/**< Copy of RX configuration structure for queue */
>  	struct rte_mempool *mb_pool;
>  	/**< Reference to mbuf pool to use for RX queue */
> @@ -87,7 +87,7 @@ struct bond_tx_queue {
>  	/**< Reference to dev private structure */
>  	uint16_t nb_tx_desc;
>  	/**< Number of TX descriptors available for the queue */
> -	struct rte_eth_txconf tx_conf;
> +	struct rte_eth_txq_conf tx_conf;
>  	/**< Copy of TX configuration structure for queue */
>  };
> 
> diff --git a/drivers/net/cxgbe/cxgbe_ethdev.c b/drivers/net/cxgbe/cxgbe_ethdev.c
> index 7bca45614..b8f965765 100644
> --- a/drivers/net/cxgbe/cxgbe_ethdev.c
> +++ b/drivers/net/cxgbe/cxgbe_ethdev.c
> @@ -443,7 +443,7 @@ static int cxgbe_dev_tx_queue_stop(struct rte_eth_dev *eth_dev,
>  static int cxgbe_dev_tx_queue_setup(struct rte_eth_dev *eth_dev,
>  				    uint16_t queue_idx,	uint16_t nb_desc,
>  				    unsigned int socket_id,
> -				    const struct rte_eth_txconf *tx_conf)
> +				    const struct rte_eth_txq_conf *tx_conf)
>  {
>  	struct port_info *pi = (struct port_info *)(eth_dev->data->dev_private);
>  	struct adapter *adapter = pi->adapter;
> @@ -552,7 +552,7 @@ static int cxgbe_dev_rx_queue_stop(struct rte_eth_dev *eth_dev,
>  static int cxgbe_dev_rx_queue_setup(struct rte_eth_dev *eth_dev,
>  				    uint16_t queue_idx,	uint16_t nb_desc,
>  				    unsigned int socket_id,
> -				    const struct rte_eth_rxconf *rx_conf,
> +				    const struct rte_eth_rxq_conf *rx_conf,
>  				    struct rte_mempool *mp)
>  {
>  	struct port_info *pi = (struct port_info *)(eth_dev->data->dev_private);
> diff --git a/drivers/net/dpaa2/dpaa2_ethdev.c b/drivers/net/dpaa2/dpaa2_ethdev.c
> index 429b3a086..80b79ecc2 100644
> --- a/drivers/net/dpaa2/dpaa2_ethdev.c
> +++ b/drivers/net/dpaa2/dpaa2_ethdev.c
> @@ -355,7 +355,7 @@ dpaa2_dev_rx_queue_setup(struct rte_eth_dev *dev,
>  			 uint16_t rx_queue_id,
>  			 uint16_t nb_rx_desc __rte_unused,
>  			 unsigned int socket_id __rte_unused,
> -			 const struct rte_eth_rxconf *rx_conf __rte_unused,
> +			 const struct rte_eth_rxq_conf *rx_conf __rte_unused,
>  			 struct rte_mempool *mb_pool)
>  {
>  	struct dpaa2_dev_priv *priv = dev->data->dev_private;
> @@ -440,7 +440,7 @@ dpaa2_dev_tx_queue_setup(struct rte_eth_dev *dev,
>  			 uint16_t tx_queue_id,
>  			 uint16_t nb_tx_desc __rte_unused,
>  			 unsigned int socket_id __rte_unused,
> -			 const struct rte_eth_txconf *tx_conf __rte_unused)
> +			 const struct rte_eth_txq_conf *tx_conf __rte_unused)
>  {
>  	struct dpaa2_dev_priv *priv = dev->data->dev_private;
>  	struct dpaa2_queue *dpaa2_q = (struct dpaa2_queue *)
> diff --git a/drivers/net/e1000/e1000_ethdev.h b/drivers/net/e1000/e1000_ethdev.h
> index 5668910c5..6390cc137 100644
> --- a/drivers/net/e1000/e1000_ethdev.h
> +++ b/drivers/net/e1000/e1000_ethdev.h
> @@ -372,7 +372,7 @@ void igb_dev_free_queues(struct rte_eth_dev *dev);
> 
>  int eth_igb_rx_queue_setup(struct rte_eth_dev *dev, uint16_t rx_queue_id,
>  		uint16_t nb_rx_desc, unsigned int socket_id,
> -		const struct rte_eth_rxconf *rx_conf,
> +		const struct rte_eth_rxq_conf *rx_conf,
>  		struct rte_mempool *mb_pool);
> 
>  uint32_t eth_igb_rx_queue_count(struct rte_eth_dev *dev,
> @@ -385,7 +385,7 @@ int eth_igb_tx_descriptor_status(void *tx_queue, uint16_t offset);
> 
>  int eth_igb_tx_queue_setup(struct rte_eth_dev *dev, uint16_t tx_queue_id,
>  		uint16_t nb_tx_desc, unsigned int socket_id,
> -		const struct rte_eth_txconf *tx_conf);
> +		const struct rte_eth_txq_conf *tx_conf);
> 
>  int eth_igb_tx_done_cleanup(void *txq, uint32_t free_cnt);
> 
> @@ -441,7 +441,7 @@ void em_dev_free_queues(struct rte_eth_dev *dev);
> 
>  int eth_em_rx_queue_setup(struct rte_eth_dev *dev, uint16_t rx_queue_id,
>  		uint16_t nb_rx_desc, unsigned int socket_id,
> -		const struct rte_eth_rxconf *rx_conf,
> +		const struct rte_eth_rxq_conf *rx_conf,
>  		struct rte_mempool *mb_pool);
> 
>  uint32_t eth_em_rx_queue_count(struct rte_eth_dev *dev,
> @@ -454,7 +454,7 @@ int eth_em_tx_descriptor_status(void *tx_queue, uint16_t offset);
> 
>  int eth_em_tx_queue_setup(struct rte_eth_dev *dev, uint16_t tx_queue_id,
>  		uint16_t nb_tx_desc, unsigned int socket_id,
> -		const struct rte_eth_txconf *tx_conf);
> +		const struct rte_eth_txq_conf *tx_conf);
> 
>  int eth_em_rx_init(struct rte_eth_dev *dev);
> 
> diff --git a/drivers/net/e1000/em_rxtx.c b/drivers/net/e1000/em_rxtx.c
> index 31819c5bd..857b7167d 100644
> --- a/drivers/net/e1000/em_rxtx.c
> +++ b/drivers/net/e1000/em_rxtx.c
> @@ -1185,7 +1185,7 @@ eth_em_tx_queue_setup(struct rte_eth_dev *dev,
>  			 uint16_t queue_idx,
>  			 uint16_t nb_desc,
>  			 unsigned int socket_id,
> -			 const struct rte_eth_txconf *tx_conf)
> +			 const struct rte_eth_txq_conf *tx_conf)
>  {
>  	const struct rte_memzone *tz;
>  	struct em_tx_queue *txq;
> @@ -1347,7 +1347,7 @@ eth_em_rx_queue_setup(struct rte_eth_dev *dev,
>  		uint16_t queue_idx,
>  		uint16_t nb_desc,
>  		unsigned int socket_id,
> -		const struct rte_eth_rxconf *rx_conf,
> +		const struct rte_eth_rxq_conf *rx_conf,
>  		struct rte_mempool *mp)
>  {
>  	const struct rte_memzone *rz;
> diff --git a/drivers/net/e1000/igb_ethdev.c b/drivers/net/e1000/igb_ethdev.c
> index e4f7a9faf..7ac3703ac 100644
> --- a/drivers/net/e1000/igb_ethdev.c
> +++ b/drivers/net/e1000/igb_ethdev.c
> @@ -2252,7 +2252,7 @@ eth_igb_infos_get(struct rte_eth_dev *dev, struct rte_eth_dev_info *dev_info)
>  	dev_info->reta_size = ETH_RSS_RETA_SIZE_128;
>  	dev_info->flow_type_rss_offloads = IGB_RSS_OFFLOAD_ALL;
> 
> -	dev_info->default_rxconf = (struct rte_eth_rxconf) {
> +	dev_info->default_rxconf = (struct rte_eth_rxq_conf) {
>  		.rx_thresh = {
>  			.pthresh = IGB_DEFAULT_RX_PTHRESH,
>  			.hthresh = IGB_DEFAULT_RX_HTHRESH,
> @@ -2262,7 +2262,7 @@ eth_igb_infos_get(struct rte_eth_dev *dev, struct rte_eth_dev_info *dev_info)
>  		.rx_drop_en = 0,
>  	};
> 
> -	dev_info->default_txconf = (struct rte_eth_txconf) {
> +	dev_info->default_txconf = (struct rte_eth_txq_conf) {
>  		.tx_thresh = {
>  			.pthresh = IGB_DEFAULT_TX_PTHRESH,
>  			.hthresh = IGB_DEFAULT_TX_HTHRESH,
> @@ -2339,7 +2339,7 @@ eth_igbvf_infos_get(struct rte_eth_dev *dev, struct rte_eth_dev_info *dev_info)
>  		break;
>  	}
> 
> -	dev_info->default_rxconf = (struct rte_eth_rxconf) {
> +	dev_info->default_rxconf = (struct rte_eth_rxq_conf) {
>  		.rx_thresh = {
>  			.pthresh = IGB_DEFAULT_RX_PTHRESH,
>  			.hthresh = IGB_DEFAULT_RX_HTHRESH,
> @@ -2349,7 +2349,7 @@ eth_igbvf_infos_get(struct rte_eth_dev *dev, struct rte_eth_dev_info *dev_info)
>  		.rx_drop_en = 0,
>  	};
> 
> -	dev_info->default_txconf = (struct rte_eth_txconf) {
> +	dev_info->default_txconf = (struct rte_eth_txq_conf) {
>  		.tx_thresh = {
>  			.pthresh = IGB_DEFAULT_TX_PTHRESH,
>  			.hthresh = IGB_DEFAULT_TX_HTHRESH,
> diff --git a/drivers/net/e1000/igb_rxtx.c b/drivers/net/e1000/igb_rxtx.c
> index 1c80a2a1b..f4a7fe571 100644
> --- a/drivers/net/e1000/igb_rxtx.c
> +++ b/drivers/net/e1000/igb_rxtx.c
> @@ -1458,7 +1458,7 @@ eth_igb_tx_queue_setup(struct rte_eth_dev *dev,
>  			 uint16_t queue_idx,
>  			 uint16_t nb_desc,
>  			 unsigned int socket_id,
> -			 const struct rte_eth_txconf *tx_conf)
> +			 const struct rte_eth_txq_conf *tx_conf)
>  {
>  	const struct rte_memzone *tz;
>  	struct igb_tx_queue *txq;
> @@ -1604,7 +1604,7 @@ eth_igb_rx_queue_setup(struct rte_eth_dev *dev,
>  			 uint16_t queue_idx,
>  			 uint16_t nb_desc,
>  			 unsigned int socket_id,
> -			 const struct rte_eth_rxconf *rx_conf,
> +			 const struct rte_eth_rxq_conf *rx_conf,
>  			 struct rte_mempool *mp)
>  {
>  	const struct rte_memzone *rz;
> diff --git a/drivers/net/ena/ena_ethdev.c b/drivers/net/ena/ena_ethdev.c
> index 80ce1f353..69fe5218d 100644
> --- a/drivers/net/ena/ena_ethdev.c
> +++ b/drivers/net/ena/ena_ethdev.c
> @@ -193,10 +193,10 @@ static uint16_t eth_ena_prep_pkts(void *tx_queue, struct rte_mbuf **tx_pkts,
>  		uint16_t nb_pkts);
>  static int ena_tx_queue_setup(struct rte_eth_dev *dev, uint16_t queue_idx,
>  			      uint16_t nb_desc, unsigned int socket_id,
> -			      const struct rte_eth_txconf *tx_conf);
> +			      const struct rte_eth_txq_conf *tx_conf);
>  static int ena_rx_queue_setup(struct rte_eth_dev *dev, uint16_t queue_idx,
>  			      uint16_t nb_desc, unsigned int socket_id,
> -			      const struct rte_eth_rxconf *rx_conf,
> +			      const struct rte_eth_rxq_conf *rx_conf,
>  			      struct rte_mempool *mp);
>  static uint16_t eth_ena_recv_pkts(void *rx_queue,
>  				  struct rte_mbuf **rx_pkts, uint16_t nb_pkts);
> @@ -940,11 +940,12 @@ static int ena_queue_restart(struct ena_ring *ring)
>  	return 0;
>  }
> 
> -static int ena_tx_queue_setup(struct rte_eth_dev *dev,
> -			      uint16_t queue_idx,
> -			      uint16_t nb_desc,
> -			      __rte_unused unsigned int socket_id,
> -			      __rte_unused const struct rte_eth_txconf *tx_conf)
> +static int ena_tx_queue_setup(
> +		struct rte_eth_dev *dev,
> +		uint16_t queue_idx,
> +		uint16_t nb_desc,
> +		__rte_unused unsigned int socket_id,
> +		__rte_unused const struct rte_eth_txq_conf *tx_conf)
>  {
>  	struct ena_com_create_io_ctx ctx =
>  		/* policy set to _HOST just to satisfy icc compiler */
> @@ -1042,12 +1043,13 @@ static int ena_tx_queue_setup(struct rte_eth_dev *dev,
>  	return rc;
>  }
> 
> -static int ena_rx_queue_setup(struct rte_eth_dev *dev,
> -			      uint16_t queue_idx,
> -			      uint16_t nb_desc,
> -			      __rte_unused unsigned int socket_id,
> -			      __rte_unused const struct rte_eth_rxconf *rx_conf,
> -			      struct rte_mempool *mp)
> +static int ena_rx_queue_setup(
> +		struct rte_eth_dev *dev,
> +		uint16_t queue_idx,
> +		uint16_t nb_desc,
> +		__rte_unused unsigned int socket_id,
> +		__rte_unused const struct rte_eth_rxq_conf *rx_conf,
> +		struct rte_mempool *mp)
>  {
>  	struct ena_com_create_io_ctx ctx =
>  		/* policy set to _HOST just to satisfy icc compiler */
> diff --git a/drivers/net/enic/enic_ethdev.c b/drivers/net/enic/enic_ethdev.c
> index da8fec2d0..da7e88d23 100644
> --- a/drivers/net/enic/enic_ethdev.c
> +++ b/drivers/net/enic/enic_ethdev.c
> @@ -191,7 +191,7 @@ static int enicpmd_dev_tx_queue_setup(struct rte_eth_dev *eth_dev,
>  	uint16_t queue_idx,
>  	uint16_t nb_desc,
>  	unsigned int socket_id,
> -	__rte_unused const struct rte_eth_txconf *tx_conf)
> +	__rte_unused const struct rte_eth_txq_conf *tx_conf)
>  {
>  	int ret;
>  	struct enic *enic = pmd_priv(eth_dev);
> @@ -303,7 +303,7 @@ static int enicpmd_dev_rx_queue_setup(struct rte_eth_dev *eth_dev,
>  	uint16_t queue_idx,
>  	uint16_t nb_desc,
>  	unsigned int socket_id,
> -	const struct rte_eth_rxconf *rx_conf,
> +	const struct rte_eth_rxq_conf *rx_conf,
>  	struct rte_mempool *mp)
>  {
>  	int ret;
> @@ -485,7 +485,7 @@ static void enicpmd_dev_info_get(struct rte_eth_dev *eth_dev,
>  		DEV_TX_OFFLOAD_UDP_CKSUM   |
>  		DEV_TX_OFFLOAD_TCP_CKSUM   |
>  		DEV_TX_OFFLOAD_TCP_TSO;
> -	device_info->default_rxconf = (struct rte_eth_rxconf) {
> +	device_info->default_rxconf = (struct rte_eth_rxq_conf) {
>  		.rx_free_thresh = ENIC_DEFAULT_RX_FREE_THRESH
>  	};
>  }
> diff --git a/drivers/net/failsafe/failsafe_ops.c b/drivers/net/failsafe/failsafe_ops.c
> index ff9ad155c..6f3f5ef56 100644
> --- a/drivers/net/failsafe/failsafe_ops.c
> +++ b/drivers/net/failsafe/failsafe_ops.c
> @@ -384,7 +384,7 @@ fs_rx_queue_setup(struct rte_eth_dev *dev,
>  		uint16_t rx_queue_id,
>  		uint16_t nb_rx_desc,
>  		unsigned int socket_id,
> -		const struct rte_eth_rxconf *rx_conf,
> +		const struct rte_eth_rxq_conf *rx_conf,
>  		struct rte_mempool *mb_pool)
>  {
>  	struct sub_device *sdev;
> @@ -452,7 +452,7 @@ fs_tx_queue_setup(struct rte_eth_dev *dev,
>  		uint16_t tx_queue_id,
>  		uint16_t nb_tx_desc,
>  		unsigned int socket_id,
> -		const struct rte_eth_txconf *tx_conf)
> +		const struct rte_eth_txq_conf *tx_conf)
>  {
>  	struct sub_device *sdev;
>  	struct txq *txq;
> diff --git a/drivers/net/fm10k/fm10k_ethdev.c b/drivers/net/fm10k/fm10k_ethdev.c
> index e60d3a365..d6d9d9169 100644
> --- a/drivers/net/fm10k/fm10k_ethdev.c
> +++ b/drivers/net/fm10k/fm10k_ethdev.c
> @@ -1427,7 +1427,7 @@ fm10k_dev_infos_get(struct rte_eth_dev *dev,
>  	dev_info->hash_key_size = FM10K_RSSRK_SIZE * sizeof(uint32_t);
>  	dev_info->reta_size = FM10K_MAX_RSS_INDICES;
> 
> -	dev_info->default_rxconf = (struct rte_eth_rxconf) {
> +	dev_info->default_rxconf = (struct rte_eth_rxq_conf) {
>  		.rx_thresh = {
>  			.pthresh = FM10K_DEFAULT_RX_PTHRESH,
>  			.hthresh = FM10K_DEFAULT_RX_HTHRESH,
> @@ -1437,7 +1437,7 @@ fm10k_dev_infos_get(struct rte_eth_dev *dev,
>  		.rx_drop_en = 0,
>  	};
> 
> -	dev_info->default_txconf = (struct rte_eth_txconf) {
> +	dev_info->default_txconf = (struct rte_eth_txq_conf) {
>  		.tx_thresh = {
>  			.pthresh = FM10K_DEFAULT_TX_PTHRESH,
>  			.hthresh = FM10K_DEFAULT_TX_HTHRESH,
> @@ -1740,7 +1740,7 @@ check_thresh(uint16_t min, uint16_t max, uint16_t div, uint16_t request)
>  }
> 
>  static inline int
> -handle_rxconf(struct fm10k_rx_queue *q, const struct rte_eth_rxconf *conf)
> +handle_rxconf(struct fm10k_rx_queue *q, const struct rte_eth_rxq_conf *conf)
>  {
>  	uint16_t rx_free_thresh;
> 
> @@ -1805,7 +1805,7 @@ mempool_element_size_valid(struct rte_mempool *mp)
>  static int
>  fm10k_rx_queue_setup(struct rte_eth_dev *dev, uint16_t queue_id,
>  	uint16_t nb_desc, unsigned int socket_id,
> -	const struct rte_eth_rxconf *conf, struct rte_mempool *mp)
> +	const struct rte_eth_rxq_conf *conf, struct rte_mempool *mp)
>  {
>  	struct fm10k_hw *hw = FM10K_DEV_PRIVATE_TO_HW(dev->data->dev_private);
>  	struct fm10k_dev_info *dev_info =
> @@ -1912,7 +1912,7 @@ fm10k_rx_queue_release(void *queue)
>  }
> 
>  static inline int
> -handle_txconf(struct fm10k_tx_queue *q, const struct rte_eth_txconf *conf)
> +handle_txconf(struct fm10k_tx_queue *q, const struct rte_eth_txq_conf *conf)
>  {
>  	uint16_t tx_free_thresh;
>  	uint16_t tx_rs_thresh;
> @@ -1971,7 +1971,7 @@ handle_txconf(struct fm10k_tx_queue *q, const struct rte_eth_txconf *conf)
>  static int
>  fm10k_tx_queue_setup(struct rte_eth_dev *dev, uint16_t queue_id,
>  	uint16_t nb_desc, unsigned int socket_id,
> -	const struct rte_eth_txconf *conf)
> +	const struct rte_eth_txq_conf *conf)
>  {
>  	struct fm10k_hw *hw = FM10K_DEV_PRIVATE_TO_HW(dev->data->dev_private);
>  	struct fm10k_tx_queue *q;
> diff --git a/drivers/net/i40e/i40e_ethdev.c b/drivers/net/i40e/i40e_ethdev.c
> index 8e0580c56..9dc422cbb 100644
> --- a/drivers/net/i40e/i40e_ethdev.c
> +++ b/drivers/net/i40e/i40e_ethdev.c
> @@ -2973,7 +2973,7 @@ i40e_dev_info_get(struct rte_eth_dev *dev, struct rte_eth_dev_info *dev_info)
>  	dev_info->reta_size = pf->hash_lut_size;
>  	dev_info->flow_type_rss_offloads = I40E_RSS_OFFLOAD_ALL;
> 
> -	dev_info->default_rxconf = (struct rte_eth_rxconf) {
> +	dev_info->default_rxconf = (struct rte_eth_rxq_conf) {
>  		.rx_thresh = {
>  			.pthresh = I40E_DEFAULT_RX_PTHRESH,
>  			.hthresh = I40E_DEFAULT_RX_HTHRESH,
> @@ -2983,7 +2983,7 @@ i40e_dev_info_get(struct rte_eth_dev *dev, struct rte_eth_dev_info *dev_info)
>  		.rx_drop_en = 0,
>  	};
> 
> -	dev_info->default_txconf = (struct rte_eth_txconf) {
> +	dev_info->default_txconf = (struct rte_eth_txq_conf) {
>  		.tx_thresh = {
>  			.pthresh = I40E_DEFAULT_TX_PTHRESH,
>  			.hthresh = I40E_DEFAULT_TX_HTHRESH,
> diff --git a/drivers/net/i40e/i40e_ethdev_vf.c b/drivers/net/i40e/i40e_ethdev_vf.c
> index 7c5c16b85..61938d487 100644
> --- a/drivers/net/i40e/i40e_ethdev_vf.c
> +++ b/drivers/net/i40e/i40e_ethdev_vf.c
> @@ -2144,7 +2144,7 @@ i40evf_dev_info_get(struct rte_eth_dev *dev, struct rte_eth_dev_info *dev_info)
>  		DEV_TX_OFFLOAD_TCP_CKSUM |
>  		DEV_TX_OFFLOAD_SCTP_CKSUM;
> 
> -	dev_info->default_rxconf = (struct rte_eth_rxconf) {
> +	dev_info->default_rxconf = (struct rte_eth_rxq_conf) {
>  		.rx_thresh = {
>  			.pthresh = I40E_DEFAULT_RX_PTHRESH,
>  			.hthresh = I40E_DEFAULT_RX_HTHRESH,
> @@ -2154,7 +2154,7 @@ i40evf_dev_info_get(struct rte_eth_dev *dev, struct rte_eth_dev_info *dev_info)
>  		.rx_drop_en = 0,
>  	};
> 
> -	dev_info->default_txconf = (struct rte_eth_txconf) {
> +	dev_info->default_txconf = (struct rte_eth_txq_conf) {
>  		.tx_thresh = {
>  			.pthresh = I40E_DEFAULT_TX_PTHRESH,
>  			.hthresh = I40E_DEFAULT_TX_HTHRESH,
> diff --git a/drivers/net/i40e/i40e_rxtx.c b/drivers/net/i40e/i40e_rxtx.c
> index d42c23c05..f4e367db8 100644
> --- a/drivers/net/i40e/i40e_rxtx.c
> +++ b/drivers/net/i40e/i40e_rxtx.c
> @@ -1731,7 +1731,7 @@ i40e_dev_rx_queue_setup(struct rte_eth_dev *dev,
>  			uint16_t queue_idx,
>  			uint16_t nb_desc,
>  			unsigned int socket_id,
> -			const struct rte_eth_rxconf *rx_conf,
> +			const struct rte_eth_rxq_conf *rx_conf,
>  			struct rte_mempool *mp)
>  {
>  	struct i40e_vsi *vsi;
> @@ -2010,7 +2010,7 @@ i40e_dev_tx_queue_setup(struct rte_eth_dev *dev,
>  			uint16_t queue_idx,
>  			uint16_t nb_desc,
>  			unsigned int socket_id,
> -			const struct rte_eth_txconf *tx_conf)
> +			const struct rte_eth_txq_conf *tx_conf)
>  {
>  	struct i40e_vsi *vsi;
>  	struct i40e_hw *hw = I40E_DEV_PRIVATE_TO_HW(dev->data->dev_private);
> diff --git a/drivers/net/i40e/i40e_rxtx.h b/drivers/net/i40e/i40e_rxtx.h
> index 20084d649..9d48e33f9 100644
> --- a/drivers/net/i40e/i40e_rxtx.h
> +++ b/drivers/net/i40e/i40e_rxtx.h
> @@ -201,13 +201,13 @@ int i40e_dev_rx_queue_setup(struct rte_eth_dev *dev,
>  			    uint16_t queue_idx,
>  			    uint16_t nb_desc,
>  			    unsigned int socket_id,
> -			    const struct rte_eth_rxconf *rx_conf,
> +			    const struct rte_eth_rxq_conf *rx_conf,
>  			    struct rte_mempool *mp);
>  int i40e_dev_tx_queue_setup(struct rte_eth_dev *dev,
>  			    uint16_t queue_idx,
>  			    uint16_t nb_desc,
>  			    unsigned int socket_id,
> -			    const struct rte_eth_txconf *tx_conf);
> +			    const struct rte_eth_txq_conf *tx_conf);
>  void i40e_dev_rx_queue_release(void *rxq);
>  void i40e_dev_tx_queue_release(void *txq);
>  uint16_t i40e_recv_pkts(void *rx_queue,
> diff --git a/drivers/net/ixgbe/ixgbe_ethdev.c b/drivers/net/ixgbe/ixgbe_ethdev.c
> index 22171d866..7022f2ecc 100644
> --- a/drivers/net/ixgbe/ixgbe_ethdev.c
> +++ b/drivers/net/ixgbe/ixgbe_ethdev.c
> @@ -3665,7 +3665,7 @@ ixgbe_dev_info_get(struct rte_eth_dev *dev, struct rte_eth_dev_info *dev_info)
>  	    hw->mac.type == ixgbe_mac_X550EM_a)
>  		dev_info->tx_offload_capa |= DEV_TX_OFFLOAD_OUTER_IPV4_CKSUM;
> 
> -	dev_info->default_rxconf = (struct rte_eth_rxconf) {
> +	dev_info->default_rxconf = (struct rte_eth_rxq_conf) {
>  		.rx_thresh = {
>  			.pthresh = IXGBE_DEFAULT_RX_PTHRESH,
>  			.hthresh = IXGBE_DEFAULT_RX_HTHRESH,
> @@ -3675,7 +3675,7 @@ ixgbe_dev_info_get(struct rte_eth_dev *dev, struct rte_eth_dev_info *dev_info)
>  		.rx_drop_en = 0,
>  	};
> 
> -	dev_info->default_txconf = (struct rte_eth_txconf) {
> +	dev_info->default_txconf = (struct rte_eth_txq_conf) {
>  		.tx_thresh = {
>  			.pthresh = IXGBE_DEFAULT_TX_PTHRESH,
>  			.hthresh = IXGBE_DEFAULT_TX_HTHRESH,
> @@ -3776,7 +3776,7 @@ ixgbevf_dev_info_get(struct rte_eth_dev *dev,
>  				DEV_TX_OFFLOAD_SCTP_CKSUM  |
>  				DEV_TX_OFFLOAD_TCP_TSO;
> 
> -	dev_info->default_rxconf = (struct rte_eth_rxconf) {
> +	dev_info->default_rxconf = (struct rte_eth_rxq_conf) {
>  		.rx_thresh = {
>  			.pthresh = IXGBE_DEFAULT_RX_PTHRESH,
>  			.hthresh = IXGBE_DEFAULT_RX_HTHRESH,
> @@ -3786,7 +3786,7 @@ ixgbevf_dev_info_get(struct rte_eth_dev *dev,
>  		.rx_drop_en = 0,
>  	};
> 
> -	dev_info->default_txconf = (struct rte_eth_txconf) {
> +	dev_info->default_txconf = (struct rte_eth_txq_conf) {
>  		.tx_thresh = {
>  			.pthresh = IXGBE_DEFAULT_TX_PTHRESH,
>  			.hthresh = IXGBE_DEFAULT_TX_HTHRESH,
> diff --git a/drivers/net/ixgbe/ixgbe_ethdev.h b/drivers/net/ixgbe/ixgbe_ethdev.h
> index caa50c8b9..4085a704a 100644
> --- a/drivers/net/ixgbe/ixgbe_ethdev.h
> +++ b/drivers/net/ixgbe/ixgbe_ethdev.h
> @@ -599,12 +599,12 @@ void ixgbe_dev_tx_queue_release(void *txq);
> 
>  int  ixgbe_dev_rx_queue_setup(struct rte_eth_dev *dev, uint16_t rx_queue_id,
>  		uint16_t nb_rx_desc, unsigned int socket_id,
> -		const struct rte_eth_rxconf *rx_conf,
> +		const struct rte_eth_rxq_conf *rx_conf,
>  		struct rte_mempool *mb_pool);
> 
>  int  ixgbe_dev_tx_queue_setup(struct rte_eth_dev *dev, uint16_t tx_queue_id,
>  		uint16_t nb_tx_desc, unsigned int socket_id,
> -		const struct rte_eth_txconf *tx_conf);
> +		const struct rte_eth_txq_conf *tx_conf);
> 
>  uint32_t ixgbe_dev_rx_queue_count(struct rte_eth_dev *dev,
>  		uint16_t rx_queue_id);
> diff --git a/drivers/net/ixgbe/ixgbe_rxtx.c b/drivers/net/ixgbe/ixgbe_rxtx.c
> index 98d0e1a86..b6b21403d 100644
> --- a/drivers/net/ixgbe/ixgbe_rxtx.c
> +++ b/drivers/net/ixgbe/ixgbe_rxtx.c
> @@ -2397,7 +2397,7 @@ ixgbe_dev_tx_queue_setup(struct rte_eth_dev *dev,
>  			 uint16_t queue_idx,
>  			 uint16_t nb_desc,
>  			 unsigned int socket_id,
> -			 const struct rte_eth_txconf *tx_conf)
> +			 const struct rte_eth_txq_conf *tx_conf)
>  {
>  	const struct rte_memzone *tz;
>  	struct ixgbe_tx_queue *txq;
> @@ -2752,7 +2752,7 @@ ixgbe_dev_rx_queue_setup(struct rte_eth_dev *dev,
>  			 uint16_t queue_idx,
>  			 uint16_t nb_desc,
>  			 unsigned int socket_id,
> -			 const struct rte_eth_rxconf *rx_conf,
> +			 const struct rte_eth_rxq_conf *rx_conf,
>  			 struct rte_mempool *mp)
>  {
>  	const struct rte_memzone *rz;
> diff --git a/drivers/net/kni/rte_eth_kni.c b/drivers/net/kni/rte_eth_kni.c
> index 72a2733ba..e2ef7644f 100644
> --- a/drivers/net/kni/rte_eth_kni.c
> +++ b/drivers/net/kni/rte_eth_kni.c
> @@ -238,7 +238,7 @@ eth_kni_rx_queue_setup(struct rte_eth_dev *dev,
>  		uint16_t rx_queue_id,
>  		uint16_t nb_rx_desc __rte_unused,
>  		unsigned int socket_id __rte_unused,
> -		const struct rte_eth_rxconf *rx_conf __rte_unused,
> +		const struct rte_eth_rxq_conf *rx_conf __rte_unused,
>  		struct rte_mempool *mb_pool)
>  {
>  	struct pmd_internals *internals = dev->data->dev_private;
> @@ -258,7 +258,7 @@ eth_kni_tx_queue_setup(struct rte_eth_dev *dev,
>  		uint16_t tx_queue_id,
>  		uint16_t nb_tx_desc __rte_unused,
>  		unsigned int socket_id __rte_unused,
> -		const struct rte_eth_txconf *tx_conf __rte_unused)
> +		const struct rte_eth_txq_conf *tx_conf __rte_unused)
>  {
>  	struct pmd_internals *internals = dev->data->dev_private;
>  	struct pmd_queue *q;
> diff --git a/drivers/net/liquidio/lio_ethdev.c b/drivers/net/liquidio/lio_ethdev.c
> index a17fba501..e1bbddde7 100644
> --- a/drivers/net/liquidio/lio_ethdev.c
> +++ b/drivers/net/liquidio/lio_ethdev.c
> @@ -1150,7 +1150,7 @@ lio_dev_mq_rx_configure(struct rte_eth_dev *eth_dev)
>   * @param socket_id
>   *    Where to allocate memory
>   * @param rx_conf
> - *    Pointer to the struction rte_eth_rxconf
> + *    Pointer to the structure rte_eth_rxq_conf
>   * @param mp
>   *    Pointer to the packet pool
>   *
> @@ -1161,7 +1161,7 @@ lio_dev_mq_rx_configure(struct rte_eth_dev *eth_dev)
>  static int
>  lio_dev_rx_queue_setup(struct rte_eth_dev *eth_dev, uint16_t q_no,
>  		       uint16_t num_rx_descs, unsigned int socket_id,
> -		       const struct rte_eth_rxconf *rx_conf __rte_unused,
> +		       const struct rte_eth_rxq_conf *rx_conf __rte_unused,
>  		       struct rte_mempool *mp)
>  {
>  	struct lio_device *lio_dev = LIO_DEV(eth_dev);
> @@ -1242,7 +1242,7 @@ lio_dev_rx_queue_release(void *rxq)
>   *   NUMA socket id, used for memory allocations
>   *
>   * @param tx_conf
> - *   Pointer to the structure rte_eth_txconf
> + *   Pointer to the structure rte_eth_txq_conf
>   *
>   * @return
>   *   - On success, return 0
> @@ -1251,7 +1251,7 @@ lio_dev_rx_queue_release(void *rxq)
>  static int
>  lio_dev_tx_queue_setup(struct rte_eth_dev *eth_dev, uint16_t q_no,
>  		       uint16_t num_tx_descs, unsigned int socket_id,
> -		       const struct rte_eth_txconf *tx_conf __rte_unused)
> +		       const struct rte_eth_txq_conf *tx_conf __rte_unused)
>  {
>  	struct lio_device *lio_dev = LIO_DEV(eth_dev);
>  	int fw_mapped_iq = lio_dev->linfo.txpciq[q_no].s.q_no;
> diff --git a/drivers/net/mlx4/mlx4.c b/drivers/net/mlx4/mlx4.c
> index 055de49a3..2db8b5646 100644
> --- a/drivers/net/mlx4/mlx4.c
> +++ b/drivers/net/mlx4/mlx4.c
> @@ -539,7 +539,7 @@ priv_set_flags(struct priv *priv, unsigned int keep, unsigned int flags)
> 
>  static int
>  txq_setup(struct rte_eth_dev *dev, struct txq *txq, uint16_t desc,
> -	  unsigned int socket, const struct rte_eth_txconf *conf);
> +	  unsigned int socket, const struct rte_eth_txq_conf *conf);
> 
>  static void
>  txq_cleanup(struct txq *txq);
> @@ -547,7 +547,7 @@ txq_cleanup(struct txq *txq);
>  static int
>  rxq_setup(struct rte_eth_dev *dev, struct rxq *rxq, uint16_t desc,
>  	  unsigned int socket, int inactive,
> -	  const struct rte_eth_rxconf *conf,
> +	  const struct rte_eth_rxq_conf *conf,
>  	  struct rte_mempool *mp, int children_n,
>  	  struct rxq *rxq_parent);
> 
> @@ -1762,7 +1762,7 @@ mlx4_tx_burst_secondary_setup(void *dpdk_txq, struct rte_mbuf **pkts,
>   */
>  static int
>  txq_setup(struct rte_eth_dev *dev, struct txq *txq, uint16_t desc,
> -	  unsigned int socket, const struct rte_eth_txconf *conf)
> +	  unsigned int socket, const struct rte_eth_txq_conf *conf)
>  {
>  	struct priv *priv = mlx4_get_priv(dev);
>  	struct txq tmpl = {
> @@ -1954,7 +1954,7 @@ txq_setup(struct rte_eth_dev *dev, struct txq *txq, uint16_t desc,
>   */
>  static int
>  mlx4_tx_queue_setup(struct rte_eth_dev *dev, uint16_t idx, uint16_t desc,
> -		    unsigned int socket, const struct rte_eth_txconf *conf)
> +		    unsigned int socket, const struct rte_eth_txq_conf *conf)
>  {
>  	struct priv *priv = dev->data->dev_private;
>  	struct txq *txq = (*priv->txqs)[idx];
> @@ -3830,7 +3830,7 @@ rxq_create_qp(struct rxq *rxq,
>  static int
>  rxq_setup(struct rte_eth_dev *dev, struct rxq *rxq, uint16_t desc,
>  	  unsigned int socket, int inactive,
> -	  const struct rte_eth_rxconf *conf,
> +	  const struct rte_eth_rxq_conf *conf,
>  	  struct rte_mempool *mp, int children_n,
>  	  struct rxq *rxq_parent)
>  {
> @@ -4007,7 +4007,7 @@ rxq_setup(struct rte_eth_dev *dev, struct rxq *rxq, uint16_t desc,
>   */
>  static int
>  mlx4_rx_queue_setup(struct rte_eth_dev *dev, uint16_t idx, uint16_t desc,
> -		    unsigned int socket, const struct rte_eth_rxconf *conf,
> +		    unsigned int socket, const struct rte_eth_rxq_conf *conf,
>  		    struct rte_mempool *mp)
>  {
>  	struct rxq *parent;
> diff --git a/drivers/net/mlx5/mlx5_rxq.c b/drivers/net/mlx5/mlx5_rxq.c
> index 35c5cb42e..85428950c 100644
> --- a/drivers/net/mlx5/mlx5_rxq.c
> +++ b/drivers/net/mlx5/mlx5_rxq.c
> @@ -843,7 +843,7 @@ rxq_setup(struct rxq_ctrl *tmpl)
>  static int
>  rxq_ctrl_setup(struct rte_eth_dev *dev, struct rxq_ctrl *rxq_ctrl,
>  	       uint16_t desc, unsigned int socket,
> -	       const struct rte_eth_rxconf *conf, struct rte_mempool *mp)
> +	       const struct rte_eth_rxq_conf *conf, struct rte_mempool *mp)
>  {
>  	struct priv *priv = dev->data->dev_private;
>  	struct rxq_ctrl tmpl = {
> @@ -1110,7 +1110,7 @@ rxq_ctrl_setup(struct rte_eth_dev *dev, struct rxq_ctrl *rxq_ctrl,
>   */
>  int
>  mlx5_rx_queue_setup(struct rte_eth_dev *dev, uint16_t idx, uint16_t desc,
> -		    unsigned int socket, const struct rte_eth_rxconf *conf,
> +		    unsigned int socket, const struct rte_eth_rxq_conf *conf,
>  		    struct rte_mempool *mp)
>  {
>  	struct priv *priv = dev->data->dev_private;
> diff --git a/drivers/net/mlx5/mlx5_rxtx.h b/drivers/net/mlx5/mlx5_rxtx.h
> index 033e70f25..eb5315760 100644
> --- a/drivers/net/mlx5/mlx5_rxtx.h
> +++ b/drivers/net/mlx5/mlx5_rxtx.h
> @@ -301,7 +301,7 @@ int priv_allow_flow_type(struct priv *, enum hash_rxq_flow_type);
>  int priv_rehash_flows(struct priv *);
>  void rxq_cleanup(struct rxq_ctrl *);
>  int mlx5_rx_queue_setup(struct rte_eth_dev *, uint16_t, uint16_t, unsigned int,
> -			const struct rte_eth_rxconf *, struct rte_mempool *);
> +			const struct rte_eth_rxq_conf *, struct rte_mempool *);
>  void mlx5_rx_queue_release(void *);
>  int priv_rx_intr_vec_enable(struct priv *priv);
>  void priv_rx_intr_vec_disable(struct priv *priv);
> @@ -314,9 +314,9 @@ int mlx5_rx_intr_disable(struct rte_eth_dev *dev, uint16_t rx_queue_id);
> 
>  void txq_cleanup(struct txq_ctrl *);
>  int txq_ctrl_setup(struct rte_eth_dev *, struct txq_ctrl *, uint16_t,
> -		   unsigned int, const struct rte_eth_txconf *);
> +		   unsigned int, const struct rte_eth_txq_conf *);
>  int mlx5_tx_queue_setup(struct rte_eth_dev *, uint16_t, uint16_t, unsigned int,
> -			const struct rte_eth_txconf *);
> +			const struct rte_eth_txq_conf *);
>  void mlx5_tx_queue_release(void *);
> 
>  /* mlx5_rxtx.c */
> diff --git a/drivers/net/mlx5/mlx5_txq.c b/drivers/net/mlx5/mlx5_txq.c
> index 4b0b532b1..7b8c2f766 100644
> --- a/drivers/net/mlx5/mlx5_txq.c
> +++ b/drivers/net/mlx5/mlx5_txq.c
> @@ -211,7 +211,7 @@ txq_setup(struct txq_ctrl *tmpl, struct txq_ctrl *txq_ctrl)
>  int
>  txq_ctrl_setup(struct rte_eth_dev *dev, struct txq_ctrl *txq_ctrl,
>  	       uint16_t desc, unsigned int socket,
> -	       const struct rte_eth_txconf *conf)
> +	       const struct rte_eth_txq_conf *conf)
>  {
>  	struct priv *priv = mlx5_get_priv(dev);
>  	struct txq_ctrl tmpl = {
> @@ -413,7 +413,7 @@ txq_ctrl_setup(struct rte_eth_dev *dev, struct txq_ctrl *txq_ctrl,
>   */
>  int
>  mlx5_tx_queue_setup(struct rte_eth_dev *dev, uint16_t idx, uint16_t desc,
> -		    unsigned int socket, const struct rte_eth_txconf *conf)
> +		    unsigned int socket, const struct rte_eth_txq_conf *conf)
>  {
>  	struct priv *priv = dev->data->dev_private;
>  	struct txq *txq = (*priv->txqs)[idx];
> diff --git a/drivers/net/nfp/nfp_net.c b/drivers/net/nfp/nfp_net.c
> index a3bf5e1f1..4122824d9 100644
> --- a/drivers/net/nfp/nfp_net.c
> +++ b/drivers/net/nfp/nfp_net.c
> @@ -79,13 +79,13 @@ static uint16_t nfp_net_recv_pkts(void *rx_queue, struct rte_mbuf **rx_pkts,
>  static void nfp_net_rx_queue_release(void *rxq);
>  static int nfp_net_rx_queue_setup(struct rte_eth_dev *dev, uint16_t queue_idx,
>  				  uint16_t nb_desc, unsigned int socket_id,
> -				  const struct rte_eth_rxconf *rx_conf,
> +				  const struct rte_eth_rxq_conf *rx_conf,
>  				  struct rte_mempool *mp);
>  static int nfp_net_tx_free_bufs(struct nfp_net_txq *txq);
>  static void nfp_net_tx_queue_release(void *txq);
>  static int nfp_net_tx_queue_setup(struct rte_eth_dev *dev, uint16_t queue_idx,
>  				  uint16_t nb_desc, unsigned int socket_id,
> -				  const struct rte_eth_txconf *tx_conf);
> +				  const struct rte_eth_txq_conf *tx_conf);
>  static int nfp_net_start(struct rte_eth_dev *dev);
>  static void nfp_net_stats_get(struct rte_eth_dev *dev,
>  			      struct rte_eth_stats *stats);
> @@ -1119,7 +1119,7 @@ nfp_net_infos_get(struct rte_eth_dev *dev, struct rte_eth_dev_info *dev_info)
>  					     DEV_TX_OFFLOAD_UDP_CKSUM |
>  					     DEV_TX_OFFLOAD_TCP_CKSUM;
> 
> -	dev_info->default_rxconf = (struct rte_eth_rxconf) {
> +	dev_info->default_rxconf = (struct rte_eth_rxq_conf) {
>  		.rx_thresh = {
>  			.pthresh = DEFAULT_RX_PTHRESH,
>  			.hthresh = DEFAULT_RX_HTHRESH,
> @@ -1129,7 +1129,7 @@ nfp_net_infos_get(struct rte_eth_dev *dev, struct rte_eth_dev_info *dev_info)
>  		.rx_drop_en = 0,
>  	};
> 
> -	dev_info->default_txconf = (struct rte_eth_txconf) {
> +	dev_info->default_txconf = (struct rte_eth_txq_conf) {
>  		.tx_thresh = {
>  			.pthresh = DEFAULT_TX_PTHRESH,
>  			.hthresh = DEFAULT_TX_HTHRESH,
> @@ -1388,7 +1388,7 @@ static int
>  nfp_net_rx_queue_setup(struct rte_eth_dev *dev,
>  		       uint16_t queue_idx, uint16_t nb_desc,
>  		       unsigned int socket_id,
> -		       const struct rte_eth_rxconf *rx_conf,
> +		       const struct rte_eth_rxq_conf *rx_conf,
>  		       struct rte_mempool *mp)
>  {
>  	const struct rte_memzone *tz;
> @@ -1537,7 +1537,7 @@ nfp_net_rx_fill_freelist(struct nfp_net_rxq *rxq)
>  static int
>  nfp_net_tx_queue_setup(struct rte_eth_dev *dev, uint16_t queue_idx,
>  		       uint16_t nb_desc, unsigned int socket_id,
> -		       const struct rte_eth_txconf *tx_conf)
> +		       const struct rte_eth_txq_conf *tx_conf)
>  {
>  	const struct rte_memzone *tz;
>  	struct nfp_net_txq *txq;
> diff --git a/drivers/net/null/rte_eth_null.c b/drivers/net/null/rte_eth_null.c
> index 5aef0591e..7ae14b77b 100644
> --- a/drivers/net/null/rte_eth_null.c
> +++ b/drivers/net/null/rte_eth_null.c
> @@ -214,7 +214,7 @@ static int
>  eth_rx_queue_setup(struct rte_eth_dev *dev, uint16_t rx_queue_id,
>  		uint16_t nb_rx_desc __rte_unused,
>  		unsigned int socket_id __rte_unused,
> -		const struct rte_eth_rxconf *rx_conf __rte_unused,
> +		const struct rte_eth_rxq_conf *rx_conf __rte_unused,
>  		struct rte_mempool *mb_pool)
>  {
>  	struct rte_mbuf *dummy_packet;
> @@ -249,7 +249,7 @@ static int
>  eth_tx_queue_setup(struct rte_eth_dev *dev, uint16_t tx_queue_id,
>  		uint16_t nb_tx_desc __rte_unused,
>  		unsigned int socket_id __rte_unused,
> -		const struct rte_eth_txconf *tx_conf __rte_unused)
> +		const struct rte_eth_txq_conf *tx_conf __rte_unused)
>  {
>  	struct rte_mbuf *dummy_packet;
>  	struct pmd_internals *internals;
> diff --git a/drivers/net/pcap/rte_eth_pcap.c b/drivers/net/pcap/rte_eth_pcap.c
> index defb3b419..874856712 100644
> --- a/drivers/net/pcap/rte_eth_pcap.c
> +++ b/drivers/net/pcap/rte_eth_pcap.c
> @@ -634,7 +634,7 @@ eth_rx_queue_setup(struct rte_eth_dev *dev,
>  		uint16_t rx_queue_id,
>  		uint16_t nb_rx_desc __rte_unused,
>  		unsigned int socket_id __rte_unused,
> -		const struct rte_eth_rxconf *rx_conf __rte_unused,
> +		const struct rte_eth_rxq_conf *rx_conf __rte_unused,
>  		struct rte_mempool *mb_pool)
>  {
>  	struct pmd_internals *internals = dev->data->dev_private;
> @@ -652,7 +652,7 @@ eth_tx_queue_setup(struct rte_eth_dev *dev,
>  		uint16_t tx_queue_id,
>  		uint16_t nb_tx_desc __rte_unused,
>  		unsigned int socket_id __rte_unused,
> -		const struct rte_eth_txconf *tx_conf __rte_unused)
> +		const struct rte_eth_txq_conf *tx_conf __rte_unused)
>  {
>  	struct pmd_internals *internals = dev->data->dev_private;
> 
> diff --git a/drivers/net/qede/qede_ethdev.c b/drivers/net/qede/qede_ethdev.c
> index 4e9e89fad..5b6df9688 100644
> --- a/drivers/net/qede/qede_ethdev.c
> +++ b/drivers/net/qede/qede_ethdev.c
> @@ -1293,7 +1293,7 @@ qede_dev_info_get(struct rte_eth_dev *eth_dev,
>  	dev_info->hash_key_size = ECORE_RSS_KEY_SIZE * sizeof(uint32_t);
>  	dev_info->flow_type_rss_offloads = (uint64_t)QEDE_RSS_OFFLOAD_ALL;
> 
> -	dev_info->default_txconf = (struct rte_eth_txconf) {
> +	dev_info->default_txconf = (struct rte_eth_txq_conf) {
>  		.txq_flags = QEDE_TXQ_FLAGS,
>  	};
> 
> diff --git a/drivers/net/qede/qede_rxtx.c b/drivers/net/qede/qede_rxtx.c
> index 5c3613c7c..98da5f975 100644
> --- a/drivers/net/qede/qede_rxtx.c
> +++ b/drivers/net/qede/qede_rxtx.c
> @@ -40,7 +40,7 @@ static inline int qede_alloc_rx_buffer(struct qede_rx_queue *rxq)
>  int
>  qede_rx_queue_setup(struct rte_eth_dev *dev, uint16_t queue_idx,
>  		    uint16_t nb_desc, unsigned int socket_id,
> -		    __rte_unused const struct rte_eth_rxconf *rx_conf,
> +		    __rte_unused const struct rte_eth_rxq_conf *rx_conf,
>  		    struct rte_mempool *mp)
>  {
>  	struct qede_dev *qdev = QEDE_INIT_QDEV(dev);
> @@ -238,7 +238,7 @@ qede_tx_queue_setup(struct rte_eth_dev *dev,
>  		    uint16_t queue_idx,
>  		    uint16_t nb_desc,
>  		    unsigned int socket_id,
> -		    const struct rte_eth_txconf *tx_conf)
> +		    const struct rte_eth_txq_conf *tx_conf)
>  {
>  	struct qede_dev *qdev = dev->data->dev_private;
>  	struct ecore_dev *edev = &qdev->edev;
> diff --git a/drivers/net/qede/qede_rxtx.h b/drivers/net/qede/qede_rxtx.h
> index b551fd6ae..0c10b8ebe 100644
> --- a/drivers/net/qede/qede_rxtx.h
> +++ b/drivers/net/qede/qede_rxtx.h
> @@ -225,14 +225,14 @@ struct qede_fastpath {
>   */
>  int qede_rx_queue_setup(struct rte_eth_dev *dev, uint16_t queue_idx,
>  			uint16_t nb_desc, unsigned int socket_id,
> -			const struct rte_eth_rxconf *rx_conf,
> +			const struct rte_eth_rxq_conf *rx_conf,
>  			struct rte_mempool *mp);
> 
>  int qede_tx_queue_setup(struct rte_eth_dev *dev,
>  			uint16_t queue_idx,
>  			uint16_t nb_desc,
>  			unsigned int socket_id,
> -			const struct rte_eth_txconf *tx_conf);
> +			const struct rte_eth_txq_conf *tx_conf);
> 
>  void qede_rx_queue_release(void *rx_queue);
> 
> diff --git a/drivers/net/ring/rte_eth_ring.c b/drivers/net/ring/rte_eth_ring.c
> index 464d3d384..6d077e3cf 100644
> --- a/drivers/net/ring/rte_eth_ring.c
> +++ b/drivers/net/ring/rte_eth_ring.c
> @@ -155,11 +155,12 @@ eth_dev_set_link_up(struct rte_eth_dev *dev)
>  }
> 
>  static int
> -eth_rx_queue_setup(struct rte_eth_dev *dev, uint16_t rx_queue_id,
> -				    uint16_t nb_rx_desc __rte_unused,
> -				    unsigned int socket_id __rte_unused,
> -				    const struct rte_eth_rxconf *rx_conf __rte_unused,
> -				    struct rte_mempool *mb_pool __rte_unused)
> +eth_rx_queue_setup(struct rte_eth_dev *dev,
> +		   uint16_t rx_queue_id,
> +		   uint16_t nb_rx_desc __rte_unused,
> +		   unsigned int socket_id __rte_unused,
> +		   const struct rte_eth_rxq_conf *rx_conf __rte_unused,
> +		   struct rte_mempool *mb_pool __rte_unused)
>  {
>  	struct pmd_internals *internals = dev->data->dev_private;
>  	dev->data->rx_queues[rx_queue_id] = &internals->rx_ring_queues[rx_queue_id];
> @@ -167,10 +168,11 @@ eth_rx_queue_setup(struct rte_eth_dev *dev, uint16_t rx_queue_id,
>  }
> 
>  static int
> -eth_tx_queue_setup(struct rte_eth_dev *dev, uint16_t tx_queue_id,
> -				    uint16_t nb_tx_desc __rte_unused,
> -				    unsigned int socket_id __rte_unused,
> -				    const struct rte_eth_txconf *tx_conf __rte_unused)
> +eth_tx_queue_setup(struct rte_eth_dev *dev,
> +		   uint16_t tx_queue_id,
> +		   uint16_t nb_tx_desc __rte_unused,
> +		   unsigned int socket_id __rte_unused,
> +		   const struct rte_eth_txq_conf *tx_conf __rte_unused)
>  {
>  	struct pmd_internals *internals = dev->data->dev_private;
>  	dev->data->tx_queues[tx_queue_id] = &internals->tx_ring_queues[tx_queue_id];
> diff --git a/drivers/net/sfc/sfc_ethdev.c b/drivers/net/sfc/sfc_ethdev.c
> index 2b037d863..959a2b42f 100644
> --- a/drivers/net/sfc/sfc_ethdev.c
> +++ b/drivers/net/sfc/sfc_ethdev.c
> @@ -404,7 +404,7 @@ sfc_dev_allmulti_disable(struct rte_eth_dev *dev)
>  static int
>  sfc_rx_queue_setup(struct rte_eth_dev *dev, uint16_t rx_queue_id,
>  		   uint16_t nb_rx_desc, unsigned int socket_id,
> -		   const struct rte_eth_rxconf *rx_conf,
> +		   const struct rte_eth_rxq_conf *rx_conf,
>  		   struct rte_mempool *mb_pool)
>  {
>  	struct sfc_adapter *sa = dev->data->dev_private;
> @@ -461,7 +461,7 @@ sfc_rx_queue_release(void *queue)
>  static int
>  sfc_tx_queue_setup(struct rte_eth_dev *dev, uint16_t tx_queue_id,
>  		   uint16_t nb_tx_desc, unsigned int socket_id,
> -		   const struct rte_eth_txconf *tx_conf)
> +		   const struct rte_eth_txq_conf *tx_conf)
>  {
>  	struct sfc_adapter *sa = dev->data->dev_private;
>  	int rc;
> diff --git a/drivers/net/sfc/sfc_rx.c b/drivers/net/sfc/sfc_rx.c
> index 79ed046ce..079df6272 100644
> --- a/drivers/net/sfc/sfc_rx.c
> +++ b/drivers/net/sfc/sfc_rx.c
> @@ -772,7 +772,7 @@ sfc_rx_qstop(struct sfc_adapter *sa, unsigned int sw_index)
> 
>  static int
>  sfc_rx_qcheck_conf(struct sfc_adapter *sa, uint16_t nb_rx_desc,
> -		   const struct rte_eth_rxconf *rx_conf)
> +		   const struct rte_eth_rxq_conf *rx_conf)
>  {
>  	const uint16_t rx_free_thresh_max = EFX_RXQ_LIMIT(nb_rx_desc);
>  	int rc = 0;
> @@ -903,7 +903,7 @@ sfc_rx_mb_pool_buf_size(struct sfc_adapter *sa, struct rte_mempool *mb_pool)
>  int
>  sfc_rx_qinit(struct sfc_adapter *sa, unsigned int sw_index,
>  	     uint16_t nb_rx_desc, unsigned int socket_id,
> -	     const struct rte_eth_rxconf *rx_conf,
> +	     const struct rte_eth_rxq_conf *rx_conf,
>  	     struct rte_mempool *mb_pool)
>  {
>  	const efx_nic_cfg_t *encp = efx_nic_cfg_get(sa->nic);
> diff --git a/drivers/net/sfc/sfc_rx.h b/drivers/net/sfc/sfc_rx.h
> index 9e6282ead..126c41089 100644
> --- a/drivers/net/sfc/sfc_rx.h
> +++ b/drivers/net/sfc/sfc_rx.h
> @@ -156,7 +156,7 @@ void sfc_rx_stop(struct sfc_adapter *sa);
> 
>  int sfc_rx_qinit(struct sfc_adapter *sa, unsigned int rx_queue_id,
>  		 uint16_t nb_rx_desc, unsigned int socket_id,
> -		 const struct rte_eth_rxconf *rx_conf,
> +		 const struct rte_eth_rxq_conf *rx_conf,
>  		 struct rte_mempool *mb_pool);
>  void sfc_rx_qfini(struct sfc_adapter *sa, unsigned int sw_index);
>  int sfc_rx_qstart(struct sfc_adapter *sa, unsigned int sw_index);
> diff --git a/drivers/net/sfc/sfc_tx.c b/drivers/net/sfc/sfc_tx.c
> index bf596017a..fe030baa4 100644
> --- a/drivers/net/sfc/sfc_tx.c
> +++ b/drivers/net/sfc/sfc_tx.c
> @@ -58,7 +58,7 @@
> 
>  static int
>  sfc_tx_qcheck_conf(struct sfc_adapter *sa, uint16_t nb_tx_desc,
> -		   const struct rte_eth_txconf *tx_conf)
> +		   const struct rte_eth_txq_conf *tx_conf)
>  {
>  	unsigned int flags = tx_conf->txq_flags;
>  	const efx_nic_cfg_t *encp = efx_nic_cfg_get(sa->nic);
> @@ -128,7 +128,7 @@ sfc_tx_qflush_done(struct sfc_txq *txq)
>  int
>  sfc_tx_qinit(struct sfc_adapter *sa, unsigned int sw_index,
>  	     uint16_t nb_tx_desc, unsigned int socket_id,
> -	     const struct rte_eth_txconf *tx_conf)
> +	     const struct rte_eth_txq_conf *tx_conf)
>  {
>  	const efx_nic_cfg_t *encp = efx_nic_cfg_get(sa->nic);
>  	struct sfc_txq_info *txq_info;
> diff --git a/drivers/net/sfc/sfc_tx.h b/drivers/net/sfc/sfc_tx.h
> index 0c1c7083b..90b5eb7d7 100644
> --- a/drivers/net/sfc/sfc_tx.h
> +++ b/drivers/net/sfc/sfc_tx.h
> @@ -141,7 +141,7 @@ void sfc_tx_close(struct sfc_adapter *sa);
> 
>  int sfc_tx_qinit(struct sfc_adapter *sa, unsigned int sw_index,
>  		 uint16_t nb_tx_desc, unsigned int socket_id,
> -		 const struct rte_eth_txconf *tx_conf);
> +		 const struct rte_eth_txq_conf *tx_conf);
>  void sfc_tx_qfini(struct sfc_adapter *sa, unsigned int sw_index);
> 
>  void sfc_tx_qflush_done(struct sfc_txq *txq);
> diff --git a/drivers/net/szedata2/rte_eth_szedata2.c b/drivers/net/szedata2/rte_eth_szedata2.c
> index 9c0d57cc1..6ba24a263 100644
> --- a/drivers/net/szedata2/rte_eth_szedata2.c
> +++ b/drivers/net/szedata2/rte_eth_szedata2.c
> @@ -1253,7 +1253,7 @@ eth_rx_queue_setup(struct rte_eth_dev *dev,
>  		uint16_t rx_queue_id,
>  		uint16_t nb_rx_desc __rte_unused,
>  		unsigned int socket_id __rte_unused,
> -		const struct rte_eth_rxconf *rx_conf __rte_unused,
> +		const struct rte_eth_rxq_conf *rx_conf __rte_unused,
>  		struct rte_mempool *mb_pool)
>  {
>  	struct pmd_internals *internals = dev->data->dev_private;
> @@ -1287,7 +1287,7 @@ eth_tx_queue_setup(struct rte_eth_dev *dev,
>  		uint16_t tx_queue_id,
>  		uint16_t nb_tx_desc __rte_unused,
>  		unsigned int socket_id __rte_unused,
> -		const struct rte_eth_txconf *tx_conf __rte_unused)
> +		const struct rte_eth_txq_conf *tx_conf __rte_unused)
>  {
>  	struct pmd_internals *internals = dev->data->dev_private;
>  	struct szedata2_tx_queue *txq = &internals->tx_queue[tx_queue_id];
> diff --git a/drivers/net/tap/rte_eth_tap.c b/drivers/net/tap/rte_eth_tap.c
> index 9acea8398..5a1125a7a 100644
> --- a/drivers/net/tap/rte_eth_tap.c
> +++ b/drivers/net/tap/rte_eth_tap.c
> @@ -918,7 +918,7 @@ tap_rx_queue_setup(struct rte_eth_dev *dev,
>  		   uint16_t rx_queue_id,
>  		   uint16_t nb_rx_desc,
>  		   unsigned int socket_id,
> -		   const struct rte_eth_rxconf *rx_conf __rte_unused,
> +		   const struct rte_eth_rxq_conf *rx_conf __rte_unused,
>  		   struct rte_mempool *mp)
>  {
>  	struct pmd_internals *internals = dev->data->dev_private;
> @@ -997,7 +997,7 @@ tap_tx_queue_setup(struct rte_eth_dev *dev,
>  		   uint16_t tx_queue_id,
>  		   uint16_t nb_tx_desc __rte_unused,
>  		   unsigned int socket_id __rte_unused,
> -		   const struct rte_eth_txconf *tx_conf __rte_unused)
> +		   const struct rte_eth_txq_conf *tx_conf __rte_unused)
>  {
>  	struct pmd_internals *internals = dev->data->dev_private;
>  	int ret;
> diff --git a/drivers/net/thunderx/nicvf_ethdev.c b/drivers/net/thunderx/nicvf_ethdev.c
> index edc17f1d4..3ddca8b49 100644
> --- a/drivers/net/thunderx/nicvf_ethdev.c
> +++ b/drivers/net/thunderx/nicvf_ethdev.c
> @@ -936,7 +936,7 @@ nicvf_set_rx_function(struct rte_eth_dev *dev)
>  static int
>  nicvf_dev_tx_queue_setup(struct rte_eth_dev *dev, uint16_t qidx,
>  			 uint16_t nb_desc, unsigned int socket_id,
> -			 const struct rte_eth_txconf *tx_conf)
> +			 const struct rte_eth_txq_conf *tx_conf)
>  {
>  	uint16_t tx_free_thresh;
>  	uint8_t is_single_pool;
> @@ -1261,7 +1261,7 @@ nicvf_rxq_mbuf_setup(struct nicvf_rxq *rxq)
>  static int
>  nicvf_dev_rx_queue_setup(struct rte_eth_dev *dev, uint16_t qidx,
>  			 uint16_t nb_desc, unsigned int socket_id,
> -			 const struct rte_eth_rxconf *rx_conf,
> +			 const struct rte_eth_rxq_conf *rx_conf,
>  			 struct rte_mempool *mp)
>  {
>  	uint16_t rx_free_thresh;
> @@ -1403,12 +1403,12 @@ nicvf_dev_info_get(struct rte_eth_dev *dev, struct rte_eth_dev_info *dev_info)
>  	if (nicvf_hw_cap(nic) & NICVF_CAP_TUNNEL_PARSING)
>  		dev_info->flow_type_rss_offloads |= NICVF_RSS_OFFLOAD_TUNNEL;
> 
> -	dev_info->default_rxconf = (struct rte_eth_rxconf) {
> +	dev_info->default_rxconf = (struct rte_eth_rxq_conf) {
>  		.rx_free_thresh = NICVF_DEFAULT_RX_FREE_THRESH,
>  		.rx_drop_en = 0,
>  	};
> 
> -	dev_info->default_txconf = (struct rte_eth_txconf) {
> +	dev_info->default_txconf = (struct rte_eth_txq_conf) {
>  		.tx_free_thresh = NICVF_DEFAULT_TX_FREE_THRESH,
>  		.txq_flags =
>  			ETH_TXQ_FLAGS_NOMULTSEGS  |
> diff --git a/drivers/net/vhost/rte_eth_vhost.c b/drivers/net/vhost/rte_eth_vhost.c
> index 0dac5e60e..c90d06bd7 100644
> --- a/drivers/net/vhost/rte_eth_vhost.c
> +++ b/drivers/net/vhost/rte_eth_vhost.c
> @@ -831,7 +831,7 @@ static int
>  eth_rx_queue_setup(struct rte_eth_dev *dev, uint16_t rx_queue_id,
>  		   uint16_t nb_rx_desc __rte_unused,
>  		   unsigned int socket_id,
> -		   const struct rte_eth_rxconf *rx_conf __rte_unused,
> +		   const struct rte_eth_rxq_conf *rx_conf __rte_unused,
>  		   struct rte_mempool *mb_pool)
>  {
>  	struct vhost_queue *vq;
> @@ -854,7 +854,7 @@ static int
>  eth_tx_queue_setup(struct rte_eth_dev *dev, uint16_t tx_queue_id,
>  		   uint16_t nb_tx_desc __rte_unused,
>  		   unsigned int socket_id,
> -		   const struct rte_eth_txconf *tx_conf __rte_unused)
> +		   const struct rte_eth_txq_conf *tx_conf __rte_unused)
>  {
>  	struct vhost_queue *vq;
> 
> diff --git a/drivers/net/virtio/virtio_ethdev.c b/drivers/net/virtio/virtio_ethdev.c
> index e320811ed..763b30e9a 100644
> --- a/drivers/net/virtio/virtio_ethdev.c
> +++ b/drivers/net/virtio/virtio_ethdev.c
> @@ -1891,7 +1891,7 @@ virtio_dev_info_get(struct rte_eth_dev *dev, struct rte_eth_dev_info *dev_info)
>  	dev_info->min_rx_bufsize = VIRTIO_MIN_RX_BUFSIZE;
>  	dev_info->max_rx_pktlen = VIRTIO_MAX_RX_PKTLEN;
>  	dev_info->max_mac_addrs = VIRTIO_MAX_MAC_ADDRS;
> -	dev_info->default_txconf = (struct rte_eth_txconf) {
> +	dev_info->default_txconf = (struct rte_eth_txq_conf) {
>  		.txq_flags = ETH_TXQ_FLAGS_NOOFFLOADS
>  	};
> 
> diff --git a/drivers/net/virtio/virtio_ethdev.h b/drivers/net/virtio/virtio_ethdev.h
> index c3413c6d9..57f0d7ad2 100644
> --- a/drivers/net/virtio/virtio_ethdev.h
> +++ b/drivers/net/virtio/virtio_ethdev.h
> @@ -89,12 +89,12 @@ int virtio_dev_rx_queue_done(void *rxq, uint16_t offset);
> 
>  int  virtio_dev_rx_queue_setup(struct rte_eth_dev *dev, uint16_t rx_queue_id,
>  		uint16_t nb_rx_desc, unsigned int socket_id,
> -		const struct rte_eth_rxconf *rx_conf,
> +		const struct rte_eth_rxq_conf *rx_conf,
>  		struct rte_mempool *mb_pool);
> 
>  int  virtio_dev_tx_queue_setup(struct rte_eth_dev *dev, uint16_t tx_queue_id,
>  		uint16_t nb_tx_desc, unsigned int socket_id,
> -		const struct rte_eth_txconf *tx_conf);
> +		const struct rte_eth_txq_conf *tx_conf);
> 
>  uint16_t virtio_recv_pkts(void *rx_queue, struct rte_mbuf **rx_pkts,
>  		uint16_t nb_pkts);
> diff --git a/drivers/net/virtio/virtio_rxtx.c b/drivers/net/virtio/virtio_rxtx.c
> index e30377c51..cff1d9b62 100644
> --- a/drivers/net/virtio/virtio_rxtx.c
> +++ b/drivers/net/virtio/virtio_rxtx.c
> @@ -414,7 +414,7 @@ virtio_dev_rx_queue_setup(struct rte_eth_dev *dev,
>  			uint16_t queue_idx,
>  			uint16_t nb_desc,
>  			unsigned int socket_id __rte_unused,
> -			__rte_unused const struct rte_eth_rxconf *rx_conf,
> +			__rte_unused const struct rte_eth_rxq_conf *rx_conf,
>  			struct rte_mempool *mp)
>  {
>  	uint16_t vtpci_queue_idx = 2 * queue_idx + VTNET_SQ_RQ_QUEUE_IDX;
> @@ -492,7 +492,7 @@ virtio_dev_rx_queue_setup(struct rte_eth_dev *dev,
> 
>  static void
>  virtio_update_rxtx_handler(struct rte_eth_dev *dev,
> -			   const struct rte_eth_txconf *tx_conf)
> +			   const struct rte_eth_txq_conf *tx_conf)
>  {
>  	uint8_t use_simple_rxtx = 0;
>  	struct virtio_hw *hw = dev->data->dev_private;
> @@ -519,7 +519,7 @@ virtio_update_rxtx_handler(struct rte_eth_dev *dev,
>   * struct rte_eth_dev *dev: Used to update dev
>   * uint16_t nb_desc: Defaults to values read from config space
>   * unsigned int socket_id: Used to allocate memzone
> - * const struct rte_eth_txconf *tx_conf: Used to setup tx engine
> + * const struct rte_eth_txq_conf *tx_conf: Used to setup tx engine
>   * uint16_t queue_idx: Just used as an index in dev txq list
>   */
>  int
> @@ -527,7 +527,7 @@ virtio_dev_tx_queue_setup(struct rte_eth_dev *dev,
>  			uint16_t queue_idx,
>  			uint16_t nb_desc,
>  			unsigned int socket_id __rte_unused,
> -			const struct rte_eth_txconf *tx_conf)
> +			const struct rte_eth_txq_conf *tx_conf)
>  {
>  	uint8_t vtpci_queue_idx = 2 * queue_idx + VTNET_SQ_TQ_QUEUE_IDX;
>  	struct virtio_hw *hw = dev->data->dev_private;
> diff --git a/drivers/net/vmxnet3/vmxnet3_ethdev.h b/drivers/net/vmxnet3/vmxnet3_ethdev.h
> index b48058afc..98389fa74 100644
> --- a/drivers/net/vmxnet3/vmxnet3_ethdev.h
> +++ b/drivers/net/vmxnet3/vmxnet3_ethdev.h
> @@ -189,11 +189,11 @@ void vmxnet3_dev_tx_queue_release(void *txq);
> 
>  int  vmxnet3_dev_rx_queue_setup(struct rte_eth_dev *dev, uint16_t rx_queue_id,
>  				uint16_t nb_rx_desc, unsigned int socket_id,
> -				const struct rte_eth_rxconf *rx_conf,
> +				const struct rte_eth_rxq_conf *rx_conf,
>  				struct rte_mempool *mb_pool);
>  int  vmxnet3_dev_tx_queue_setup(struct rte_eth_dev *dev, uint16_t tx_queue_id,
>  				uint16_t nb_tx_desc, unsigned int socket_id,
> -				const struct rte_eth_txconf *tx_conf);
> +				const struct rte_eth_txq_conf *tx_conf);
> 
>  int vmxnet3_dev_rxtx_init(struct rte_eth_dev *dev);
> 
> diff --git a/drivers/net/vmxnet3/vmxnet3_rxtx.c b/drivers/net/vmxnet3/vmxnet3_rxtx.c
> index d9cf43739..cfdf72f7f 100644
> --- a/drivers/net/vmxnet3/vmxnet3_rxtx.c
> +++ b/drivers/net/vmxnet3/vmxnet3_rxtx.c
> @@ -888,7 +888,7 @@ vmxnet3_dev_tx_queue_setup(struct rte_eth_dev *dev,
>  			   uint16_t queue_idx,
>  			   uint16_t nb_desc,
>  			   unsigned int socket_id,
> -			   const struct rte_eth_txconf *tx_conf)
> +			   const struct rte_eth_txq_conf *tx_conf)
>  {
>  	struct vmxnet3_hw *hw = dev->data->dev_private;
>  	const struct rte_memzone *mz;
> @@ -993,7 +993,7 @@ vmxnet3_dev_rx_queue_setup(struct rte_eth_dev *dev,
>  			   uint16_t queue_idx,
>  			   uint16_t nb_desc,
>  			   unsigned int socket_id,
> -			   __rte_unused const struct rte_eth_rxconf *rx_conf,
> +			   __rte_unused const struct rte_eth_rxq_conf *rx_conf,
>  			   struct rte_mempool *mp)
>  {
>  	const struct rte_memzone *mz;
> diff --git a/drivers/net/xenvirt/rte_eth_xenvirt.c b/drivers/net/xenvirt/rte_eth_xenvirt.c
> index e404b7755..792fbfb0a 100644
> --- a/drivers/net/xenvirt/rte_eth_xenvirt.c
> +++ b/drivers/net/xenvirt/rte_eth_xenvirt.c
> @@ -492,11 +492,12 @@ virtio_queue_setup(struct rte_eth_dev *dev, int queue_type)
>  }
> 
>  static int
> -eth_rx_queue_setup(struct rte_eth_dev *dev,uint16_t rx_queue_id,
> -				uint16_t nb_rx_desc __rte_unused,
> -				unsigned int socket_id __rte_unused,
> -				const struct rte_eth_rxconf *rx_conf __rte_unused,
> -				struct rte_mempool *mb_pool)
> +eth_rx_queue_setup(struct rte_eth_dev *dev,
> +		   uint16_t rx_queue_id,
> +		   uint16_t nb_rx_desc __rte_unused,
> +		   unsigned int socket_id __rte_unused,
> +		   const struct rte_eth_rxq_conf *rx_conf __rte_unused,
> +		   struct rte_mempool *mb_pool)
>  {
>  	struct virtqueue *vq;
>  	vq = dev->data->rx_queues[rx_queue_id] = virtio_queue_setup(dev, VTNET_RQ);
> @@ -505,10 +506,11 @@ eth_rx_queue_setup(struct rte_eth_dev *dev,uint16_t rx_queue_id,
>  }
> 
>  static int
> -eth_tx_queue_setup(struct rte_eth_dev *dev, uint16_t tx_queue_id,
> -				uint16_t nb_tx_desc __rte_unused,
> -				unsigned int socket_id __rte_unused,
> -				const struct rte_eth_txconf *tx_conf __rte_unused)
> +eth_tx_queue_setup(struct rte_eth_dev *dev,
> +		   uint16_t tx_queue_id,
> +		   uint16_t nb_tx_desc __rte_unused,
> +		   unsigned int socket_id __rte_unused,
> +		   const struct rte_eth_txq_conf *tx_conf __rte_unused)
>  {
>  	dev->data->tx_queues[tx_queue_id] = virtio_queue_setup(dev, VTNET_TQ);
>  	return 0;
> diff --git a/examples/ip_fragmentation/main.c b/examples/ip_fragmentation/main.c
> index 8c0e17911..15f9426f2 100644
> --- a/examples/ip_fragmentation/main.c
> +++ b/examples/ip_fragmentation/main.c
> @@ -869,7 +869,7 @@ main(int argc, char **argv)
>  {
>  	struct lcore_queue_conf *qconf;
>  	struct rte_eth_dev_info dev_info;
> -	struct rte_eth_txconf *txconf;
> +	struct rte_eth_txq_conf *txconf;
>  	struct rx_queue *rxq;
>  	int socket, ret;
>  	unsigned nb_ports;
> diff --git a/examples/ip_pipeline/app.h b/examples/ip_pipeline/app.h
> index e41290e74..59bb1bac8 100644
> --- a/examples/ip_pipeline/app.h
> +++ b/examples/ip_pipeline/app.h
> @@ -103,7 +103,7 @@ struct app_pktq_hwq_in_params {
>  	uint32_t size;
>  	uint32_t burst;
> 
> -	struct rte_eth_rxconf conf;
> +	struct rte_eth_rxq_conf conf;
>  };
> 
>  struct app_pktq_hwq_out_params {
> @@ -113,7 +113,7 @@ struct app_pktq_hwq_out_params {
>  	uint32_t burst;
>  	uint32_t dropless;
>  	uint64_t n_retries;
> -	struct rte_eth_txconf conf;
> +	struct rte_eth_txq_conf conf;
>  };
> 
>  struct app_pktq_swq_params {
> diff --git a/examples/ip_reassembly/main.c b/examples/ip_reassembly/main.c
> index e62636cb4..746140f60 100644
> --- a/examples/ip_reassembly/main.c
> +++ b/examples/ip_reassembly/main.c
> @@ -1017,7 +1017,7 @@ main(int argc, char **argv)
>  {
>  	struct lcore_queue_conf *qconf;
>  	struct rte_eth_dev_info dev_info;
> -	struct rte_eth_txconf *txconf;
> +	struct rte_eth_txq_conf *txconf;
>  	struct rx_queue *rxq;
>  	int ret, socket;
>  	unsigned nb_ports;
> diff --git a/examples/ipsec-secgw/ipsec-secgw.c b/examples/ipsec-secgw/ipsec-secgw.c
> index 99dc270cb..807d079cf 100644
> --- a/examples/ipsec-secgw/ipsec-secgw.c
> +++ b/examples/ipsec-secgw/ipsec-secgw.c
> @@ -1325,7 +1325,7 @@ static void
>  port_init(uint8_t portid)
>  {
>  	struct rte_eth_dev_info dev_info;
> -	struct rte_eth_txconf *txconf;
> +	struct rte_eth_txq_conf *txconf;
>  	uint16_t nb_tx_queue, nb_rx_queue;
>  	uint16_t tx_queueid, rx_queueid, queue, lcore_id;
>  	int32_t ret, socket_id;
> diff --git a/examples/ipv4_multicast/main.c b/examples/ipv4_multicast/main.c
> index 9a13d3530..a3c060778 100644
> --- a/examples/ipv4_multicast/main.c
> +++ b/examples/ipv4_multicast/main.c
> @@ -668,7 +668,7 @@ main(int argc, char **argv)
>  {
>  	struct lcore_queue_conf *qconf;
>  	struct rte_eth_dev_info dev_info;
> -	struct rte_eth_txconf *txconf;
> +	struct rte_eth_txq_conf *txconf;
>  	int ret;
>  	uint16_t queueid;
>  	unsigned lcore_id = 0, rx_lcore_id = 0;
> diff --git a/examples/l3fwd-acl/main.c b/examples/l3fwd-acl/main.c
> index 8eff4de41..03124e142 100644
> --- a/examples/l3fwd-acl/main.c
> +++ b/examples/l3fwd-acl/main.c
> @@ -1887,7 +1887,7 @@ main(int argc, char **argv)
>  {
>  	struct lcore_conf *qconf;
>  	struct rte_eth_dev_info dev_info;
> -	struct rte_eth_txconf *txconf;
> +	struct rte_eth_txq_conf *txconf;
>  	int ret;
>  	unsigned nb_ports;
>  	uint16_t queueid;
> diff --git a/examples/l3fwd-power/main.c b/examples/l3fwd-power/main.c
> index fd442f5ef..f54decd20 100644
> --- a/examples/l3fwd-power/main.c
> +++ b/examples/l3fwd-power/main.c
> @@ -1643,7 +1643,7 @@ main(int argc, char **argv)
>  {
>  	struct lcore_conf *qconf;
>  	struct rte_eth_dev_info dev_info;
> -	struct rte_eth_txconf *txconf;
> +	struct rte_eth_txq_conf *txconf;
>  	int ret;
>  	unsigned nb_ports;
>  	uint16_t queueid;
> diff --git a/examples/l3fwd-vf/main.c b/examples/l3fwd-vf/main.c
> index 34e4a6bef..9a1ff8748 100644
> --- a/examples/l3fwd-vf/main.c
> +++ b/examples/l3fwd-vf/main.c
> @@ -950,7 +950,7 @@ main(int argc, char **argv)
>  {
>  	struct lcore_conf *qconf;
>  	struct rte_eth_dev_info dev_info;
> -	struct rte_eth_txconf *txconf;
> +	struct rte_eth_txq_conf *txconf;
>  	int ret;
>  	unsigned nb_ports;
>  	uint16_t queueid;
> diff --git a/examples/l3fwd/main.c b/examples/l3fwd/main.c
> index 81995fdbe..2e904b7ae 100644
> --- a/examples/l3fwd/main.c
> +++ b/examples/l3fwd/main.c
> @@ -844,7 +844,7 @@ main(int argc, char **argv)
>  {
>  	struct lcore_conf *qconf;
>  	struct rte_eth_dev_info dev_info;
> -	struct rte_eth_txconf *txconf;
> +	struct rte_eth_txq_conf *txconf;
>  	int ret;
>  	unsigned nb_ports;
>  	uint16_t queueid;
> diff --git a/examples/netmap_compat/lib/compat_netmap.c b/examples/netmap_compat/lib/compat_netmap.c
> index af2d9f3f7..2c245d1df 100644
> --- a/examples/netmap_compat/lib/compat_netmap.c
> +++ b/examples/netmap_compat/lib/compat_netmap.c
> @@ -57,8 +57,8 @@ struct netmap_port {
>  	struct rte_mempool   *pool;
>  	struct netmap_if     *nmif;
>  	struct rte_eth_conf   eth_conf;
> -	struct rte_eth_txconf tx_conf;
> -	struct rte_eth_rxconf rx_conf;
> +	struct rte_eth_txq_conf tx_conf;
> +	struct rte_eth_rxq_conf rx_conf;
>  	int32_t  socket_id;
>  	uint16_t nr_tx_rings;
>  	uint16_t nr_rx_rings;
> diff --git a/examples/performance-thread/l3fwd-thread/main.c b/examples/performance-thread/l3fwd-thread/main.c
> index 7954b9744..e72b86e78 100644
> --- a/examples/performance-thread/l3fwd-thread/main.c
> +++ b/examples/performance-thread/l3fwd-thread/main.c
> @@ -3493,7 +3493,7 @@ int
>  main(int argc, char **argv)
>  {
>  	struct rte_eth_dev_info dev_info;
> -	struct rte_eth_txconf *txconf;
> +	struct rte_eth_txq_conf *txconf;
>  	int ret;
>  	int i;
>  	unsigned nb_ports;
> diff --git a/examples/ptpclient/ptpclient.c b/examples/ptpclient/ptpclient.c
> index ddfcdb832..ac350f5fb 100644
> --- a/examples/ptpclient/ptpclient.c
> +++ b/examples/ptpclient/ptpclient.c
> @@ -237,7 +237,7 @@ port_init(uint8_t port, struct rte_mempool *mbuf_pool)
>  	/* Allocate and set up 1 TX queue per Ethernet port. */
>  	for (q = 0; q < tx_rings; q++) {
>  		/* Setup txq_flags */
> -		struct rte_eth_txconf *txconf;
> +		struct rte_eth_txq_conf *txconf;
> 
>  		rte_eth_dev_info_get(q, &dev_info);
>  		txconf = &dev_info.default_txconf;
> diff --git a/examples/qos_sched/init.c b/examples/qos_sched/init.c
> index a82cbd7d5..955d051d2 100644
> --- a/examples/qos_sched/init.c
> +++ b/examples/qos_sched/init.c
> @@ -104,8 +104,8 @@ app_init_port(uint8_t portid, struct rte_mempool *mp)
>  {
>  	int ret;
>  	struct rte_eth_link link;
> -	struct rte_eth_rxconf rx_conf;
> -	struct rte_eth_txconf tx_conf;
> +	struct rte_eth_rxq_conf rx_conf;
> +	struct rte_eth_txq_conf tx_conf;
>  	uint16_t rx_size;
>  	uint16_t tx_size;
> 
> diff --git a/examples/tep_termination/vxlan_setup.c b/examples/tep_termination/vxlan_setup.c
> index 050bb32d3..8d61e8891 100644
> --- a/examples/tep_termination/vxlan_setup.c
> +++ b/examples/tep_termination/vxlan_setup.c
> @@ -138,8 +138,8 @@ vxlan_port_init(uint8_t port, struct rte_mempool *mbuf_pool)
>  	uint16_t rx_ring_size = RTE_TEST_RX_DESC_DEFAULT;
>  	uint16_t tx_ring_size = RTE_TEST_TX_DESC_DEFAULT;
>  	struct rte_eth_udp_tunnel tunnel_udp;
> -	struct rte_eth_rxconf *rxconf;
> -	struct rte_eth_txconf *txconf;
> +	struct rte_eth_rxq_conf *rxconf;
> +	struct rte_eth_txq_conf *txconf;
>  	struct vxlan_conf *pconf = &vxdev;
> 
>  	pconf->dst_port = udp_port;
> diff --git a/examples/vhost/main.c b/examples/vhost/main.c
> index 4d1589d06..75c4c8341 100644
> --- a/examples/vhost/main.c
> +++ b/examples/vhost/main.c
> @@ -269,8 +269,8 @@ port_init(uint8_t port)
>  {
>  	struct rte_eth_dev_info dev_info;
>  	struct rte_eth_conf port_conf;
> -	struct rte_eth_rxconf *rxconf;
> -	struct rte_eth_txconf *txconf;
> +	struct rte_eth_rxq_conf *rxconf;
> +	struct rte_eth_txq_conf *txconf;
>  	int16_t rx_rings, tx_rings;
>  	uint16_t rx_ring_size, tx_ring_size;
>  	int retval;
> diff --git a/examples/vhost_xen/main.c b/examples/vhost_xen/main.c
> index eba4d35aa..852269cdc 100644
> --- a/examples/vhost_xen/main.c
> +++ b/examples/vhost_xen/main.c
> @@ -276,7 +276,7 @@ static inline int
>  port_init(uint8_t port, struct rte_mempool *mbuf_pool)
>  {
>  	struct rte_eth_dev_info dev_info;
> -	struct rte_eth_rxconf *rxconf;
> +	struct rte_eth_rxq_conf *rxconf;
>  	struct rte_eth_conf port_conf;
>  	uint16_t rx_rings, tx_rings = (uint16_t)rte_lcore_count();
>  	uint16_t rx_ring_size = RTE_TEST_RX_DESC_DEFAULT;
> diff --git a/examples/vmdq/main.c b/examples/vmdq/main.c
> index 8949a1156..5c3a73789 100644
> --- a/examples/vmdq/main.c
> +++ b/examples/vmdq/main.c
> @@ -189,7 +189,7 @@ static inline int
>  port_init(uint8_t port, struct rte_mempool *mbuf_pool)
>  {
>  	struct rte_eth_dev_info dev_info;
> -	struct rte_eth_rxconf *rxconf;
> +	struct rte_eth_rxq_conf *rxconf;
>  	struct rte_eth_conf port_conf;
>  	uint16_t rxRings, txRings;
>  	uint16_t rxRingSize = RTE_TEST_RX_DESC_DEFAULT;
> diff --git a/lib/librte_ether/rte_ethdev.c b/lib/librte_ether/rte_ethdev.c
> index 0597641ee..da2424cc4 100644
> --- a/lib/librte_ether/rte_ethdev.c
> +++ b/lib/librte_ether/rte_ethdev.c
> @@ -997,7 +997,7 @@ rte_eth_dev_close(uint8_t port_id)
>  int
>  rte_eth_rx_queue_setup(uint8_t port_id, uint16_t rx_queue_id,
>  		       uint16_t nb_rx_desc, unsigned int socket_id,
> -		       const struct rte_eth_rxconf *rx_conf,
> +		       const struct rte_eth_rxq_conf *rx_conf,
>  		       struct rte_mempool *mp)
>  {
>  	int ret;
> @@ -1088,7 +1088,7 @@ rte_eth_rx_queue_setup(uint8_t port_id, uint16_t rx_queue_id,
>  int
>  rte_eth_tx_queue_setup(uint8_t port_id, uint16_t tx_queue_id,
>  		       uint16_t nb_tx_desc, unsigned int socket_id,
> -		       const struct rte_eth_txconf *tx_conf)
> +		       const struct rte_eth_txq_conf *tx_conf)
>  {
>  	struct rte_eth_dev *dev;
>  	struct rte_eth_dev_info dev_info;
> diff --git a/lib/librte_ether/rte_ethdev.h b/lib/librte_ether/rte_ethdev.h
> index 0adf3274a..c40db4ee0 100644
> --- a/lib/librte_ether/rte_ethdev.h
> +++ b/lib/librte_ether/rte_ethdev.h
> @@ -686,7 +686,7 @@ struct rte_eth_txmode {
>  /**
>   * A structure used to configure an RX ring of an Ethernet port.
>   */
> -struct rte_eth_rxconf {
> +struct rte_eth_rxq_conf {
>  	struct rte_eth_thresh rx_thresh; /**< RX ring threshold registers. */
>  	uint16_t rx_free_thresh; /**< Drives the freeing of RX descriptors. */
>  	uint8_t rx_drop_en; /**< Drop packets if no descriptors are available. */
> @@ -709,7 +709,7 @@ struct rte_eth_rxconf {
>  /**
>   * A structure used to configure a TX ring of an Ethernet port.
>   */
> -struct rte_eth_txconf {
> +struct rte_eth_txq_conf {
>  	struct rte_eth_thresh tx_thresh; /**< TX ring threshold registers. */
>  	uint16_t tx_rs_thresh; /**< Drives the setting of RS bit on TXDs. */
>  	uint16_t tx_free_thresh; /**< Start freeing TX buffers if there are
> @@ -956,8 +956,10 @@ struct rte_eth_dev_info {
>  	uint8_t hash_key_size; /**< Hash key size in bytes */
>  	/** Bit mask of RSS offloads, the bit offset also means flow type */
>  	uint64_t flow_type_rss_offloads;
> -	struct rte_eth_rxconf default_rxconf; /**< Default RX configuration */
> -	struct rte_eth_txconf default_txconf; /**< Default TX configuration */
> +	struct rte_eth_rxq_conf default_rxconf;
> +	/**< Default RX queue configuration */
> +	struct rte_eth_txq_conf default_txconf;
> +	/**< Default TX queue configuration */
>  	uint16_t vmdq_queue_base; /**< First queue ID for VMDQ pools. */
>  	uint16_t vmdq_queue_num;  /**< Queue number for VMDQ pools. */
>  	uint16_t vmdq_pool_base;  /**< First ID of VMDQ pools. */
> @@ -975,7 +977,7 @@ struct rte_eth_dev_info {
>   */
>  struct rte_eth_rxq_info {
>  	struct rte_mempool *mp;     /**< mempool used by that queue. */
> -	struct rte_eth_rxconf conf; /**< queue config parameters. */
> +	struct rte_eth_rxq_conf conf; /**< queue config parameters. */
>  	uint8_t scattered_rx;       /**< scattered packets RX supported. */
>  	uint16_t nb_desc;           /**< configured number of RXDs. */
>  } __rte_cache_min_aligned;
> @@ -985,7 +987,7 @@ struct rte_eth_rxq_info {
>   * Used to retieve information about configured queue.
>   */
>  struct rte_eth_txq_info {
> -	struct rte_eth_txconf conf; /**< queue config parameters. */
> +	struct rte_eth_txq_conf conf; /**< queue config parameters. */
>  	uint16_t nb_desc;           /**< configured number of TXDs. */
>  } __rte_cache_min_aligned;
> 
> @@ -1185,7 +1187,7 @@ typedef int (*eth_rx_queue_setup_t)(struct rte_eth_dev *dev,
>  				    uint16_t rx_queue_id,
>  				    uint16_t nb_rx_desc,
>  				    unsigned int socket_id,
> -				    const struct rte_eth_rxconf *rx_conf,
> +				    const struct rte_eth_rxq_conf *rx_conf,
>  				    struct rte_mempool *mb_pool);
>  /**< @internal Set up a receive queue of an Ethernet device. */
> 
> @@ -1193,7 +1195,7 @@ typedef int (*eth_tx_queue_setup_t)(struct rte_eth_dev *dev,
>  				    uint16_t tx_queue_id,
>  				    uint16_t nb_tx_desc,
>  				    unsigned int socket_id,
> -				    const struct rte_eth_txconf *tx_conf);
> +				    const struct rte_eth_txq_conf *tx_conf);
>  /**< @internal Setup a transmit queue of an Ethernet device. */
> 
>  typedef int (*eth_rx_enable_intr_t)(struct rte_eth_dev *dev,
> @@ -1937,7 +1939,7 @@ void _rte_eth_dev_reset(struct rte_eth_dev *dev);
>   */
>  int rte_eth_rx_queue_setup(uint8_t port_id, uint16_t rx_queue_id,
>  		uint16_t nb_rx_desc, unsigned int socket_id,
> -		const struct rte_eth_rxconf *rx_conf,
> +		const struct rte_eth_rxq_conf *rx_conf,
>  		struct rte_mempool *mb_pool);
> 
>  /**
> @@ -1985,7 +1987,7 @@ int rte_eth_rx_queue_setup(uint8_t port_id, uint16_t rx_queue_id,
>   */
>  int rte_eth_tx_queue_setup(uint8_t port_id, uint16_t tx_queue_id,
>  		uint16_t nb_tx_desc, unsigned int socket_id,
> -		const struct rte_eth_txconf *tx_conf);
> +		const struct rte_eth_txq_conf *tx_conf);
> 
>  /**
>   * Return the NUMA socket to which an Ethernet device is connected
> @@ -2972,7 +2974,7 @@ static inline int rte_eth_tx_descriptor_status(uint8_t port_id,
>   *
>   * If the PMD is DEV_TX_OFFLOAD_MT_LOCKFREE capable, multiple threads can
>   * invoke this function concurrently on the same tx queue without SW lock.
> - * @see rte_eth_dev_info_get, struct rte_eth_txconf::txq_flags
> + * @see rte_eth_dev_info_get, struct rte_eth_txq_conf::txq_flags
>   *
>   * @param port_id
>   *   The port identifier of the Ethernet device.
> diff --git a/test/test-pipeline/init.c b/test/test-pipeline/init.c
> index 1457c7890..eee75fb0e 100644
> --- a/test/test-pipeline/init.c
> +++ b/test/test-pipeline/init.c
> @@ -117,7 +117,7 @@ static struct rte_eth_conf port_conf = {
>  	},
>  };
> 
> -static struct rte_eth_rxconf rx_conf = {
> +static struct rte_eth_rxq_conf rx_conf = {
>  	.rx_thresh = {
>  		.pthresh = 8,
>  		.hthresh = 8,
> @@ -127,7 +127,7 @@ static struct rte_eth_rxconf rx_conf = {
>  	.rx_drop_en = 0,
>  };
> 
> -static struct rte_eth_txconf tx_conf = {
> +static struct rte_eth_txq_conf tx_conf = {
>  	.tx_thresh = {
>  		.pthresh = 36,
>  		.hthresh = 0,
> diff --git a/test/test/test_kni.c b/test/test/test_kni.c
> index db17fdf30..b5445e167 100644
> --- a/test/test/test_kni.c
> +++ b/test/test/test_kni.c
> @@ -67,7 +67,7 @@ struct test_kni_stats {
>  	volatile uint64_t egress;
>  };
> 
> -static const struct rte_eth_rxconf rx_conf = {
> +static const struct rte_eth_rxq_conf rx_conf = {
>  	.rx_thresh = {
>  		.pthresh = 8,
>  		.hthresh = 8,
> @@ -76,7 +76,7 @@ static const struct rte_eth_rxconf rx_conf = {
>  	.rx_free_thresh = 0,
>  };
> 
> -static const struct rte_eth_txconf tx_conf = {
> +static const struct rte_eth_txq_conf tx_conf = {
>  	.tx_thresh = {
>  		.pthresh = 36,
>  		.hthresh = 0,
> diff --git a/test/test/test_link_bonding.c b/test/test/test_link_bonding.c
> index dc28cea59..af23b1ae1 100644
> --- a/test/test/test_link_bonding.c
> +++ b/test/test/test_link_bonding.c
> @@ -199,7 +199,7 @@ static struct rte_eth_conf default_pmd_conf = {
>  	.lpbk_mode = 0,
>  };
> 
> -static const struct rte_eth_rxconf rx_conf_default = {
> +static const struct rte_eth_rxq_conf rx_conf_default = {
>  	.rx_thresh = {
>  		.pthresh = RX_PTHRESH,
>  		.hthresh = RX_HTHRESH,
> @@ -209,7 +209,7 @@ static const struct rte_eth_rxconf rx_conf_default = {
>  	.rx_drop_en = 0,
>  };
> 
> -static struct rte_eth_txconf tx_conf_default = {
> +static struct rte_eth_txq_conf tx_conf_default = {
>  	.tx_thresh = {
>  		.pthresh = TX_PTHRESH,
>  		.hthresh = TX_HTHRESH,
> diff --git a/test/test/test_pmd_perf.c b/test/test/test_pmd_perf.c
> index 1ffd65a52..6f28ad303 100644
> --- a/test/test/test_pmd_perf.c
> +++ b/test/test/test_pmd_perf.c
> @@ -109,7 +109,7 @@ static struct rte_eth_conf port_conf = {
>  	.lpbk_mode = 1,  /* enable loopback */
>  };
> 
> -static struct rte_eth_rxconf rx_conf = {
> +static struct rte_eth_rxq_conf rx_conf = {
>  	.rx_thresh = {
>  		.pthresh = RX_PTHRESH,
>  		.hthresh = RX_HTHRESH,
> @@ -118,7 +118,7 @@ static struct rte_eth_rxconf rx_conf = {
>  	.rx_free_thresh = 32,
>  };
> 
> -static struct rte_eth_txconf tx_conf = {
> +static struct rte_eth_txq_conf tx_conf = {
>  	.tx_thresh = {
>  		.pthresh = TX_PTHRESH,
>  		.hthresh = TX_HTHRESH,
> diff --git a/test/test/virtual_pmd.c b/test/test/virtual_pmd.c
> index 9d46ad564..fb2479ced 100644
> --- a/test/test/virtual_pmd.c
> +++ b/test/test/virtual_pmd.c
> @@ -124,7 +124,7 @@ static int
>  virtual_ethdev_rx_queue_setup_success(struct rte_eth_dev *dev,
>  		uint16_t rx_queue_id, uint16_t nb_rx_desc __rte_unused,
>  		unsigned int socket_id,
> -		const struct rte_eth_rxconf *rx_conf __rte_unused,
> +		const struct rte_eth_rxq_conf *rx_conf __rte_unused,
>  		struct rte_mempool *mb_pool __rte_unused)
>  {
>  	struct virtual_ethdev_queue *rx_q;
> @@ -147,7 +147,7 @@ static int
>  virtual_ethdev_rx_queue_setup_fail(struct rte_eth_dev *dev __rte_unused,
>  		uint16_t rx_queue_id __rte_unused, uint16_t nb_rx_desc __rte_unused,
>  		unsigned int socket_id __rte_unused,
> -		const struct rte_eth_rxconf *rx_conf __rte_unused,
> +		const struct rte_eth_rxq_conf *rx_conf __rte_unused,
>  		struct rte_mempool *mb_pool __rte_unused)
>  {
>  	return -1;
> @@ -157,7 +157,7 @@ static int
>  virtual_ethdev_tx_queue_setup_success(struct rte_eth_dev *dev,
>  		uint16_t tx_queue_id, uint16_t nb_tx_desc __rte_unused,
>  		unsigned int socket_id,
> -		const struct rte_eth_txconf *tx_conf __rte_unused)
> +		const struct rte_eth_txq_conf *tx_conf __rte_unused)
>  {
>  	struct virtual_ethdev_queue *tx_q;
> 
> @@ -179,7 +179,7 @@ static int
>  virtual_ethdev_tx_queue_setup_fail(struct rte_eth_dev *dev __rte_unused,
>  		uint16_t tx_queue_id __rte_unused, uint16_t nb_tx_desc __rte_unused,
>  		unsigned int socket_id __rte_unused,
> -		const struct rte_eth_txconf *tx_conf __rte_unused)
> +		const struct rte_eth_txq_conf *tx_conf __rte_unused)
>  {
>  	return -1;
>  }
> --
> 2.12.0

^ permalink raw reply	[flat|nested] 134+ messages in thread

* Re: [dpdk-dev] [PATCH 4/4] ethdev: add helpers to move to the new offloads API
  2017-09-04  7:12 ` [dpdk-dev] [PATCH 4/4] ethdev: add helpers to move to the new " Shahaf Shuler
@ 2017-09-04 12:13   ` Ananyev, Konstantin
  2017-09-04 13:25   ` Ananyev, Konstantin
  1 sibling, 0 replies; 134+ messages in thread
From: Ananyev, Konstantin @ 2017-09-04 12:13 UTC (permalink / raw)
  To: Shahaf Shuler, thomas; +Cc: dev



> -----Original Message-----
> From: dev [mailto:dev-bounces@dpdk.org] On Behalf Of Shahaf Shuler
> Sent: Monday, September 4, 2017 8:12 AM
> To: thomas@monjalon.net
> Cc: dev@dpdk.org
> Subject: [dpdk-dev] [PATCH 4/4] ethdev: add helpers to move to the new offloads API
> 
> A new offloads API was introduced by commits:
> 
> commit 121fff673172 ("ethdev: introduce Rx queue offloads API")
> commit 35ac80d92f29 ("ethdev: introduce Tx queue offloads API")
> 
> In order to enable the PMDs to support only one of the APIs,
> conversion functions from the old API to the new one were added.
> 
> Signed-off-by: Shahaf Shuler <shahafs@mellanox.com>
> ---
>  lib/librte_ether/rte_ethdev.c | 99 +++++++++++++++++++++++++++++++++++++-
>  1 file changed, 97 insertions(+), 2 deletions(-)
> 
> diff --git a/lib/librte_ether/rte_ethdev.c b/lib/librte_ether/rte_ethdev.c
> index 50f8aa98d..1aa21a129 100644
> --- a/lib/librte_ether/rte_ethdev.c
> +++ b/lib/librte_ether/rte_ethdev.c
> @@ -1006,6 +1006,34 @@ rte_eth_dev_close(uint8_t port_id)
>  	dev->data->tx_queues = NULL;
>  }
> 
> +/**
> + * A conversion function from rxmode offloads API to rte_eth_rxq_conf
> + * offloads API.
> + */
> +static void
> +rte_eth_convert_rxmode_offloads(struct rte_eth_rxmode *rxmode,
> +				struct rte_eth_rxq_conf *rxq_conf)
> +{

I think you need to:
rxq_conf->offloads = 0;
first here.
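
A minimal sketch of the conversion with that fix applied (field and flag
names as in this patch; only the first flag is shown, the remaining rxmode
bits in the hunk below follow the same pattern):

static void
rte_eth_convert_rxmode_offloads(struct rte_eth_rxmode *rxmode,
				struct rte_eth_rxq_conf *rxq_conf)
{
	/* Start from a clean slate so stale bits are not carried over. */
	rxq_conf->offloads = 0;
	if (rxmode->header_split == 1)
		rxq_conf->offloads |= DEV_RX_OFFLOAD_HEADER_SPLIT;
	/* ...convert the remaining rxmode flags the same way... */
}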

> +	if (rxmode->header_split == 1)
> +		rxq_conf->offloads |= DEV_RX_OFFLOAD_HEADER_SPLIT;
> +	if (rxmode->hw_ip_checksum == 1)
> +		rxq_conf->offloads |= DEV_RX_OFFLOAD_CHECKSUM;
> +	if (rxmode->hw_vlan_filter == 1)
> +		rxq_conf->offloads |= DEV_RX_OFFLOAD_VLAN_FILTER;
> +	if (rxmode->hw_vlan_strip == 1)
> +		rxq_conf->offloads |= DEV_RX_OFFLOAD_VLAN_STRIP;
> +	if (rxmode->hw_vlan_extend == 1)
> +		rxq_conf->offloads |= DEV_RX_OFFLOAD_VLAN_EXTEND;
> +	if (rxmode->jumbo_frame == 1)
> +		rxq_conf->offloads |= DEV_RX_OFFLOAD_JUMBO_FRAME;
> +	if (rxmode->hw_strip_crc == 1)
> +		rxq_conf->offloads |= DEV_RX_OFFLOAD_CRC_STRIP;
> +	if (rxmode->enable_scatter == 1)
> +		rxq_conf->offloads |= DEV_RX_OFFLOAD_SCATTER;
> +	if (rxmode->enable_lro == 1)
> +		rxq_conf->offloads |= DEV_RX_OFFLOAD_TCP_LRO;
> +}
> +
>  int
>  rte_eth_rx_queue_setup(uint8_t port_id, uint16_t rx_queue_id,
>  		       uint16_t nb_rx_desc, unsigned int socket_id,
> @@ -1016,6 +1044,8 @@ rte_eth_rx_queue_setup(uint8_t port_id, uint16_t rx_queue_id,
>  	uint32_t mbp_buf_size;
>  	struct rte_eth_dev *dev;
>  	struct rte_eth_dev_info dev_info;
> +	struct rte_eth_rxq_conf rxq_trans_conf;
> +	/* Holds translated configuration to be passed to the PMD */
>  	void **rxq;
> 
>  	RTE_ETH_VALID_PORTID_OR_ERR_RET(port_id, -EINVAL);
> @@ -1062,6 +1092,11 @@ rte_eth_rx_queue_setup(uint8_t port_id, uint16_t rx_queue_id,
>  		return -EINVAL;
>  	}
> 
> +	if ((!(dev->data->dev_flags & RTE_ETH_DEV_RXQ_OFFLOAD)) &&
> +	    (dev->data->dev_conf.rxmode.ignore_offloads == 1)) {
> +		return -ENOTSUP;
> +	}
> +
>  	if (nb_rx_desc > dev_info.rx_desc_lim.nb_max ||
>  			nb_rx_desc < dev_info.rx_desc_lim.nb_min ||
>  			nb_rx_desc % dev_info.rx_desc_lim.nb_align != 0) {
> @@ -1086,8 +1121,15 @@ rte_eth_rx_queue_setup(uint8_t port_id, uint16_t rx_queue_id,
>  	if (rx_conf == NULL)
>  		rx_conf = &dev_info.default_rxconf;
> 
> +	rxq_trans_conf = *rx_conf;
> +	if ((dev->data->dev_flags & RTE_ETH_DEV_RXQ_OFFLOAD) &&
> +	    (dev->data->dev_conf.rxmode.ignore_offloads == 0)) {
> +		rte_eth_convert_rxmode_offloads(&dev->data->dev_conf.rxmode,
> +						&rxq_trans_conf);
> +	}
> +
>  	ret = (*dev->dev_ops->rx_queue_setup)(dev, rx_queue_id, nb_rx_desc,
> -					      socket_id, rx_conf, mp);
> +					      socket_id, &rxq_trans_conf, mp);
>  	if (!ret) {
>  		if (!dev->data->min_rx_buf_size ||
>  		    dev->data->min_rx_buf_size > mbp_buf_size)
> @@ -1097,6 +1139,49 @@ rte_eth_rx_queue_setup(uint8_t port_id, uint16_t rx_queue_id,
>  	return ret;
>  }
> 
> +/**
> + * A conversion function from txq_flags to rte_eth_txq_conf offloads API.
> + */
> +static void
> +rte_eth_convert_txq_flags(struct rte_eth_txq_conf *txq_conf)
> +{
> +	uint32_t txq_flags = txq_conf->txq_flags;
> +	uint64_t *offloads = &txq_conf->offloads;

I think you need to:
*offloads = 0;
first here.
BTW, might be a bit cleaner:

uint64_t offloads;
offloads = 0;
<conversion code>
txq_conf->offloads = offloads;

Konstantin
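
A minimal sketch of that cleaner variant, assuming the offloads field added
in this series and the flag mapping from the hunk below:

static void
rte_eth_convert_txq_flags(struct rte_eth_txq_conf *txq_conf)
{
	uint32_t txq_flags = txq_conf->txq_flags;
	uint64_t offloads = 0;	/* build the new bitmask from scratch */

	if (!(txq_flags & ETH_TXQ_FLAGS_NOMULTSEGS))
		offloads |= DEV_TX_OFFLOAD_MULTI_SEGS;
	/* ...map the remaining ETH_TXQ_FLAGS_NO* bits the same way... */

	txq_conf->offloads = offloads;
}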

> +
> +	if (!(txq_flags & ETH_TXQ_FLAGS_NOMULTSEGS))
> +		*offloads |= DEV_TX_OFFLOAD_MULTI_SEGS;
> +	if (!(txq_flags & ETH_TXQ_FLAGS_NOVLANOFFL))
> +		*offloads |= DEV_TX_OFFLOAD_VLAN_INSERT;
> +	if (!(txq_flags & ETH_TXQ_FLAGS_NOXSUMSCTP))
> +		*offloads |= DEV_TX_OFFLOAD_SCTP_CKSUM;
> +	if (!(txq_flags & ETH_TXQ_FLAGS_NOXSUMUDP))
> +		*offloads |= DEV_TX_OFFLOAD_UDP_CKSUM;
> +	if (!(txq_flags & ETH_TXQ_FLAGS_NOXSUMTCP))
> +		*offloads |= DEV_TX_OFFLOAD_TCP_CKSUM;
> +}
> +
> +/**
> + * A conversion function from the rte_eth_txq_conf offloads API to txq_flags
> + * offloads API.
> + */
> +static void
> +rte_eth_convert_txq_offloads(struct rte_eth_txq_conf *txq_conf)
> +{
> +	uint32_t *txq_flags = &txq_conf->txq_flags;
> +	uint64_t offloads = txq_conf->offloads;
> +
> +	if (!(offloads & DEV_TX_OFFLOAD_MULTI_SEGS))
> +		*txq_flags |= ETH_TXQ_FLAGS_NOMULTSEGS;
> +	if (!(offloads & DEV_TX_OFFLOAD_VLAN_INSERT))
> +		*txq_flags |= ETH_TXQ_FLAGS_NOVLANOFFL;
> +	if (!(offloads & DEV_TX_OFFLOAD_SCTP_CKSUM))
> +		*txq_flags |= ETH_TXQ_FLAGS_NOXSUMSCTP;
> +	if (!(offloads & DEV_TX_OFFLOAD_UDP_CKSUM))
> +		*txq_flags |= ETH_TXQ_FLAGS_NOXSUMUDP;
> +	if (!(offloads & DEV_TX_OFFLOAD_TCP_CKSUM))
> +		*txq_flags |= ETH_TXQ_FLAGS_NOXSUMTCP;
> +}
> +
>  int
>  rte_eth_tx_queue_setup(uint8_t port_id, uint16_t tx_queue_id,
>  		       uint16_t nb_tx_desc, unsigned int socket_id,
> @@ -1104,6 +1189,8 @@ rte_eth_tx_queue_setup(uint8_t port_id, uint16_t tx_queue_id,
>  {
>  	struct rte_eth_dev *dev;
>  	struct rte_eth_dev_info dev_info;
> +	struct rte_eth_txq_conf txq_trans_conf;
> +	/* Holds translated configuration to be passed to the PMD */
>  	void **txq;
> 
>  	RTE_ETH_VALID_PORTID_OR_ERR_RET(port_id, -EINVAL);
> @@ -1148,8 +1235,16 @@ rte_eth_tx_queue_setup(uint8_t port_id, uint16_t tx_queue_id,
>  	if (tx_conf == NULL)
>  		tx_conf = &dev_info.default_txconf;
> 
> +	txq_trans_conf = *tx_conf;
> +	if ((dev->data->dev_flags & RTE_ETH_DEV_TXQ_OFFLOAD) &&
> +	    (!(tx_conf->txq_flags & ETH_TXQ_FLAGS_IGNORE)))
> +		rte_eth_convert_txq_flags(&txq_trans_conf);
> +	else if (!(dev->data->dev_flags & RTE_ETH_DEV_TXQ_OFFLOAD) &&
> +		 (tx_conf->txq_flags & ETH_TXQ_FLAGS_IGNORE))
> +		rte_eth_convert_txq_offloads(&txq_trans_conf);
> +
>  	return (*dev->dev_ops->tx_queue_setup)(dev, tx_queue_id, nb_tx_desc,
> -					       socket_id, tx_conf);
> +					       socket_id, &txq_trans_conf);
>  }
> 
>  void
> --
> 2.12.0

^ permalink raw reply	[flat|nested] 134+ messages in thread

* Re: [dpdk-dev] [PATCH 1/4] ethdev: rename Rx and Tx configuration structs
  2017-09-04 12:06   ` Ananyev, Konstantin
@ 2017-09-04 12:45     ` Shahaf Shuler
  0 siblings, 0 replies; 134+ messages in thread
From: Shahaf Shuler @ 2017-09-04 12:45 UTC (permalink / raw)
  To: Ananyev, Konstantin, Thomas Monjalon; +Cc: dev

Hi Konstantin,

Monday, September 4, 2017 3:06 PM, Ananyev, Konstantin:
> Hi Shahaf,
> 
> > -----Original Message-----
> > From: dev [mailto:dev-bounces@dpdk.org] On Behalf Of Shahaf Shuler
> > Sent: Monday, September 4, 2017 8:12 AM
> > To: thomas@monjalon.net
> > Cc: dev@dpdk.org
> > Subject: [dpdk-dev] [PATCH 1/4] ethdev: rename Rx and Tx configuration
> structs
> >
> > Rename the structs rte_eth_txconf and rte_eth_rxconf to
> > rte_eth_txq_conf and rte_eth_rxq_conf respectively as those
> > structs represent per queue configuration.
> 
> If we are not going to force all PMDs to support the new API in 17.11,
> then there is probably not much point in renaming these structs in 17.11.
> I suppose most users will stick with the old API until all PMDs move
> to the new one - that would let them avoid having to support both
> flavors.
> In that case, forcing them to modify their code without getting anything
> in return seems like unnecessary hassle.

Yes, this is a good point.
I can postpone this cleanup to 18.02, so that the code modification on the application side happens only once.
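
For illustration, the application-side modification is purely mechanical.
A hypothetical queue-setup snippet (port_id, nb_txd and socket_id are
assumed locals) changes only in the struct type name:

	/* before this series */
	struct rte_eth_txconf txconf = dev_info.default_txconf;
	rte_eth_tx_queue_setup(port_id, 0, nb_txd, socket_id, &txconf);

	/* after the rename */
	struct rte_eth_txq_conf txconf = dev_info.default_txconf;
	rte_eth_tx_queue_setup(port_id, 0, nb_txd, socket_id, &txconf);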

> Konstantin
> 
> >
> > Rename was done with the following commands:
> >
> > find . \( -name '*.h' -or -name '*.c' \) -print0 | xargs -0 sed -i
> > 's/rte_eth_txconf/rte_eth_txq_conf/g'
> >
> > find . \( -name '*.h' -or -name '*.c' \) -print0 | xargs -0 sed -i
> > 's/rte_eth_rxconf/rte_eth_rxq_conf/g'
> >
> > Signed-off-by: Shahaf Shuler <shahafs@mellanox.com>
> > ---
> >  app/test-pmd/config.c                           |  4 +--
> >  app/test-pmd/testpmd.h                          |  4 +--
> >  drivers/net/af_packet/rte_eth_af_packet.c       |  4 +--
> >  drivers/net/ark/ark_ethdev_rx.c                 |  4 +--
> >  drivers/net/ark/ark_ethdev_rx.h                 |  2 +-
> >  drivers/net/ark/ark_ethdev_tx.c                 |  2 +-
> >  drivers/net/ark/ark_ethdev_tx.h                 |  2 +-
> >  drivers/net/avp/avp_ethdev.c                    |  8 +++---
> >  drivers/net/bnx2x/bnx2x_rxtx.c                  |  4 +--
> >  drivers/net/bnx2x/bnx2x_rxtx.h                  |  4 +--
> >  drivers/net/bnxt/bnxt_ethdev.c                  |  4 +--
> >  drivers/net/bnxt/bnxt_rxq.c                     |  2 +-
> >  drivers/net/bnxt/bnxt_rxq.h                     |  2 +-
> >  drivers/net/bnxt/bnxt_txq.c                     |  2 +-
> >  drivers/net/bnxt/bnxt_txq.h                     |  2 +-
> >  drivers/net/bonding/rte_eth_bond_pmd.c          |  7 ++---
> >  drivers/net/bonding/rte_eth_bond_private.h      |  4 +--
> >  drivers/net/cxgbe/cxgbe_ethdev.c                |  4 +--
> >  drivers/net/dpaa2/dpaa2_ethdev.c                |  4 +--
> >  drivers/net/e1000/e1000_ethdev.h                |  8 +++---
> >  drivers/net/e1000/em_rxtx.c                     |  4 +--
> >  drivers/net/e1000/igb_ethdev.c                  |  8 +++---
> >  drivers/net/e1000/igb_rxtx.c                    |  4 +--
> >  drivers/net/ena/ena_ethdev.c                    | 28 +++++++++++---------
> >  drivers/net/enic/enic_ethdev.c                  |  6 ++---
> >  drivers/net/failsafe/failsafe_ops.c             |  4 +--
> >  drivers/net/fm10k/fm10k_ethdev.c                | 12 ++++-----
> >  drivers/net/i40e/i40e_ethdev.c                  |  4 +--
> >  drivers/net/i40e/i40e_ethdev_vf.c               |  4 +--
> >  drivers/net/i40e/i40e_rxtx.c                    |  4 +--
> >  drivers/net/i40e/i40e_rxtx.h                    |  4 +--
> >  drivers/net/ixgbe/ixgbe_ethdev.c                |  8 +++---
> >  drivers/net/ixgbe/ixgbe_ethdev.h                |  4 +--
> >  drivers/net/ixgbe/ixgbe_rxtx.c                  |  4 +--
> >  drivers/net/kni/rte_eth_kni.c                   |  4 +--
> >  drivers/net/liquidio/lio_ethdev.c               |  8 +++---
> >  drivers/net/mlx4/mlx4.c                         | 12 ++++-----
> >  drivers/net/mlx5/mlx5_rxq.c                     |  4 +--
> >  drivers/net/mlx5/mlx5_rxtx.h                    |  6 ++---
> >  drivers/net/mlx5/mlx5_txq.c                     |  4 +--
> >  drivers/net/nfp/nfp_net.c                       | 12 ++++-----
> >  drivers/net/null/rte_eth_null.c                 |  4 +--
> >  drivers/net/pcap/rte_eth_pcap.c                 |  4 +--
> >  drivers/net/qede/qede_ethdev.c                  |  2 +-
> >  drivers/net/qede/qede_rxtx.c                    |  4 +--
> >  drivers/net/qede/qede_rxtx.h                    |  4 +--
> >  drivers/net/ring/rte_eth_ring.c                 | 20 +++++++-------
> >  drivers/net/sfc/sfc_ethdev.c                    |  4 +--
> >  drivers/net/sfc/sfc_rx.c                        |  4 +--
> >  drivers/net/sfc/sfc_rx.h                        |  2 +-
> >  drivers/net/sfc/sfc_tx.c                        |  4 +--
> >  drivers/net/sfc/sfc_tx.h                        |  2 +-
> >  drivers/net/szedata2/rte_eth_szedata2.c         |  4 +--
> >  drivers/net/tap/rte_eth_tap.c                   |  4 +--
> >  drivers/net/thunderx/nicvf_ethdev.c             |  8 +++---
> >  drivers/net/vhost/rte_eth_vhost.c               |  4 +--
> >  drivers/net/virtio/virtio_ethdev.c              |  2 +-
> >  drivers/net/virtio/virtio_ethdev.h              |  4 +--
> >  drivers/net/virtio/virtio_rxtx.c                |  8 +++---
> >  drivers/net/vmxnet3/vmxnet3_ethdev.h            |  4 +--
> >  drivers/net/vmxnet3/vmxnet3_rxtx.c              |  4 +--
> >  drivers/net/xenvirt/rte_eth_xenvirt.c           | 20 +++++++-------
> >  examples/ip_fragmentation/main.c                |  2 +-
> >  examples/ip_pipeline/app.h                      |  4 +--
> >  examples/ip_reassembly/main.c                   |  2 +-
> >  examples/ipsec-secgw/ipsec-secgw.c              |  2 +-
> >  examples/ipv4_multicast/main.c                  |  2 +-
> >  examples/l3fwd-acl/main.c                       |  2 +-
> >  examples/l3fwd-power/main.c                     |  2 +-
> >  examples/l3fwd-vf/main.c                        |  2 +-
> >  examples/l3fwd/main.c                           |  2 +-
> >  examples/netmap_compat/lib/compat_netmap.c      |  4 +--
> >  examples/performance-thread/l3fwd-thread/main.c |  2 +-
> >  examples/ptpclient/ptpclient.c                  |  2 +-
> >  examples/qos_sched/init.c                       |  4 +--
> >  examples/tep_termination/vxlan_setup.c          |  4 +--
> >  examples/vhost/main.c                           |  4 +--
> >  examples/vhost_xen/main.c                       |  2 +-
> >  examples/vmdq/main.c                            |  2 +-
> >  lib/librte_ether/rte_ethdev.c                   |  4 +--
> >  lib/librte_ether/rte_ethdev.h                   | 24 +++++++++--------
> >  test/test-pipeline/init.c                       |  4 +--
> >  test/test/test_kni.c                            |  4 +--
> >  test/test/test_link_bonding.c                   |  4 +--
> >  test/test/test_pmd_perf.c                       |  4 +--
> >  test/test/virtual_pmd.c                         |  8 +++---
> >  86 files changed, 223 insertions(+), 214 deletions(-)
> >
> > diff --git a/app/test-pmd/config.c b/app/test-pmd/config.c
> > index 3ae3e1cd8..392f0c57f 100644
> > --- a/app/test-pmd/config.c
> > +++ b/app/test-pmd/config.c
> > @@ -1639,8 +1639,8 @@ rxtx_config_display(void)
> >  		printf("  packet len=%u - nb packet segments=%d\n",
> >  				(unsigned)tx_pkt_length, (int) tx_pkt_nb_segs);
> >
> > -	struct rte_eth_rxconf *rx_conf = &ports[0].rx_conf;
> > -	struct rte_eth_txconf *tx_conf = &ports[0].tx_conf;
> > +	struct rte_eth_rxq_conf *rx_conf = &ports[0].rx_conf;
> > +	struct rte_eth_txq_conf *tx_conf = &ports[0].tx_conf;
> >
> >  	printf("  nb forwarding cores=%d - nb forwarding ports=%d\n",
> >  	       nb_fwd_lcores, nb_fwd_ports);
> > diff --git a/app/test-pmd/testpmd.h b/app/test-pmd/testpmd.h
> > index c9d7739b8..507974f43 100644
> > --- a/app/test-pmd/testpmd.h
> > +++ b/app/test-pmd/testpmd.h
> > @@ -189,8 +189,8 @@ struct rte_port {
> >  	uint8_t                 need_reconfig_queues; /**< need reconfiguring queues or not */
> >  	uint8_t                 rss_flag;   /**< enable rss or not */
> >  	uint8_t                 dcb_flag;   /**< enable dcb */
> > -	struct rte_eth_rxconf   rx_conf;    /**< rx configuration */
> > -	struct rte_eth_txconf   tx_conf;    /**< tx configuration */
> > +	struct rte_eth_rxq_conf   rx_conf;    /**< rx configuration */
> > +	struct rte_eth_txq_conf   tx_conf;    /**< tx configuration */
> >  	struct ether_addr       *mc_addr_pool; /**< pool of multicast addrs */
> >  	uint32_t                mc_addr_nb; /**< nb. of addr. in mc_addr_pool */
> >  	uint8_t                 slave_flag; /**< bonding slave port */
> > diff --git a/drivers/net/af_packet/rte_eth_af_packet.c b/drivers/net/af_packet/rte_eth_af_packet.c
> > index 9a47852ca..7cba0aa91 100644
> > --- a/drivers/net/af_packet/rte_eth_af_packet.c
> > +++ b/drivers/net/af_packet/rte_eth_af_packet.c
> > @@ -395,7 +395,7 @@ eth_rx_queue_setup(struct rte_eth_dev *dev,
> >                     uint16_t rx_queue_id,
> >                     uint16_t nb_rx_desc __rte_unused,
> >                     unsigned int socket_id __rte_unused,
> > -                   const struct rte_eth_rxconf *rx_conf __rte_unused,
> > +		   const struct rte_eth_rxq_conf *rx_conf __rte_unused,
> >                     struct rte_mempool *mb_pool)
> >  {
> >  	struct pmd_internals *internals = dev->data->dev_private;
> > @@ -428,7 +428,7 @@ eth_tx_queue_setup(struct rte_eth_dev *dev,
> >                     uint16_t tx_queue_id,
> >                     uint16_t nb_tx_desc __rte_unused,
> >                     unsigned int socket_id __rte_unused,
> > -                   const struct rte_eth_txconf *tx_conf __rte_unused)
> > +		   const struct rte_eth_txq_conf *tx_conf __rte_unused)
> >  {
> >
> >  	struct pmd_internals *internals = dev->data->dev_private;
> > diff --git a/drivers/net/ark/ark_ethdev_rx.c b/drivers/net/ark/ark_ethdev_rx.c
> > index f5d812a55..eb5a2c70a 100644
> > --- a/drivers/net/ark/ark_ethdev_rx.c
> > +++ b/drivers/net/ark/ark_ethdev_rx.c
> > @@ -140,7 +140,7 @@ eth_ark_dev_rx_queue_setup(struct rte_eth_dev *dev,
> >  			   uint16_t queue_idx,
> >  			   uint16_t nb_desc,
> >  			   unsigned int socket_id,
> > -			   const struct rte_eth_rxconf *rx_conf,
> > +			   const struct rte_eth_rxq_conf *rx_conf,
> >  			   struct rte_mempool *mb_pool)
> >  {
> >  	static int warning1;		/* = 0 */
> > @@ -163,7 +163,7 @@ eth_ark_dev_rx_queue_setup(struct rte_eth_dev *dev,
> >  	if (rx_conf != NULL && warning1 == 0) {
> >  		warning1 = 1;
> >  		PMD_DRV_LOG(INFO,
> > -			    "Arkville ignores rte_eth_rxconf argument.\n");
> > +			    "Arkville ignores rte_eth_rxq_conf argument.\n");
> >  	}
> >
> >  	if (RTE_PKTMBUF_HEADROOM < ARK_RX_META_SIZE) {
> > diff --git a/drivers/net/ark/ark_ethdev_rx.h b/drivers/net/ark/ark_ethdev_rx.h
> > index 3a54a4c91..15b494243 100644
> > --- a/drivers/net/ark/ark_ethdev_rx.h
> > +++ b/drivers/net/ark/ark_ethdev_rx.h
> > @@ -45,7 +45,7 @@ int eth_ark_dev_rx_queue_setup(struct rte_eth_dev *dev,
> >  			       uint16_t queue_idx,
> >  			       uint16_t nb_desc,
> >  			       unsigned int socket_id,
> > -			       const struct rte_eth_rxconf *rx_conf,
> > +			       const struct rte_eth_rxq_conf *rx_conf,
> >  			       struct rte_mempool *mp);
> >  uint32_t eth_ark_dev_rx_queue_count(struct rte_eth_dev *dev,
> >  				    uint16_t rx_queue_id);
> > diff --git a/drivers/net/ark/ark_ethdev_tx.c b/drivers/net/ark/ark_ethdev_tx.c
> > index 0e2d60deb..0e8aaf47a 100644
> > --- a/drivers/net/ark/ark_ethdev_tx.c
> > +++ b/drivers/net/ark/ark_ethdev_tx.c
> > @@ -234,7 +234,7 @@ eth_ark_tx_queue_setup(struct rte_eth_dev *dev,
> >  		       uint16_t queue_idx,
> >  		       uint16_t nb_desc,
> >  		       unsigned int socket_id,
> > -		       const struct rte_eth_txconf *tx_conf __rte_unused)
> > +		       const struct rte_eth_txq_conf *tx_conf __rte_unused)
> >  {
> >  	struct ark_adapter *ark = (struct ark_adapter *)dev->data->dev_private;
> >  	struct ark_tx_queue *queue;
> > diff --git a/drivers/net/ark/ark_ethdev_tx.h b/drivers/net/ark/ark_ethdev_tx.h
> > index 8aaafc22e..eb7ab63ed 100644
> > --- a/drivers/net/ark/ark_ethdev_tx.h
> > +++ b/drivers/net/ark/ark_ethdev_tx.h
> > @@ -49,7 +49,7 @@ int eth_ark_tx_queue_setup(struct rte_eth_dev *dev,
> >  			   uint16_t queue_idx,
> >  			   uint16_t nb_desc,
> >  			   unsigned int socket_id,
> > -			   const struct rte_eth_txconf *tx_conf);
> > +			   const struct rte_eth_txq_conf *tx_conf);
> >  void eth_ark_tx_queue_release(void *vtx_queue);
> >  int eth_ark_tx_queue_stop(struct rte_eth_dev *dev, uint16_t queue_id);
> >  int eth_ark_tx_queue_start(struct rte_eth_dev *dev, uint16_t queue_id);
> > diff --git a/drivers/net/avp/avp_ethdev.c b/drivers/net/avp/avp_ethdev.c
> > index c746a0e2c..01bc08a7d 100644
> > --- a/drivers/net/avp/avp_ethdev.c
> > +++ b/drivers/net/avp/avp_ethdev.c
> > @@ -79,14 +79,14 @@ static int avp_dev_rx_queue_setup(struct rte_eth_dev *dev,
> >  				  uint16_t rx_queue_id,
> >  				  uint16_t nb_rx_desc,
> >  				  unsigned int socket_id,
> > -				  const struct rte_eth_rxconf *rx_conf,
> > +				  const struct rte_eth_rxq_conf *rx_conf,
> >  				  struct rte_mempool *pool);
> >
> >  static int avp_dev_tx_queue_setup(struct rte_eth_dev *dev,
> >  				  uint16_t tx_queue_id,
> >  				  uint16_t nb_tx_desc,
> >  				  unsigned int socket_id,
> > -				  const struct rte_eth_txconf *tx_conf);
> > +				  const struct rte_eth_txq_conf *tx_conf);
> >
> >  static uint16_t avp_recv_scattered_pkts(void *rx_queue,
> >  					struct rte_mbuf **rx_pkts,
> > @@ -1143,7 +1143,7 @@ avp_dev_rx_queue_setup(struct rte_eth_dev *eth_dev,
> >  		       uint16_t rx_queue_id,
> >  		       uint16_t nb_rx_desc,
> >  		       unsigned int socket_id,
> > -		       const struct rte_eth_rxconf *rx_conf,
> > +		       const struct rte_eth_rxq_conf *rx_conf,
> >  		       struct rte_mempool *pool)
> >  {
> >  	struct avp_dev *avp = AVP_DEV_PRIVATE_TO_HW(eth_dev->data->dev_private);
> > @@ -1207,7 +1207,7 @@ avp_dev_tx_queue_setup(struct rte_eth_dev *eth_dev,
> >  		       uint16_t tx_queue_id,
> >  		       uint16_t nb_tx_desc,
> >  		       unsigned int socket_id,
> > -		       const struct rte_eth_txconf *tx_conf)
> > +		       const struct rte_eth_txq_conf *tx_conf)
> >  {
> >  	struct avp_dev *avp = AVP_DEV_PRIVATE_TO_HW(eth_dev->data->dev_private);
> >  	struct avp_queue *txq;
> > diff --git a/drivers/net/bnx2x/bnx2x_rxtx.c b/drivers/net/bnx2x/bnx2x_rxtx.c
> > index 5dd4aee7f..1a0c633b1 100644
> > --- a/drivers/net/bnx2x/bnx2x_rxtx.c
> > +++ b/drivers/net/bnx2x/bnx2x_rxtx.c
> > @@ -60,7 +60,7 @@ bnx2x_dev_rx_queue_setup(struct rte_eth_dev *dev,
> >  		       uint16_t queue_idx,
> >  		       uint16_t nb_desc,
> >  		       unsigned int socket_id,
> > -		       __rte_unused const struct rte_eth_rxconf *rx_conf,
> > +		       __rte_unused const struct rte_eth_rxq_conf *rx_conf,
> >  		       struct rte_mempool *mp)
> >  {
> >  	uint16_t j, idx;
> > @@ -246,7 +246,7 @@ bnx2x_dev_tx_queue_setup(struct rte_eth_dev *dev,
> >  		       uint16_t queue_idx,
> >  		       uint16_t nb_desc,
> >  		       unsigned int socket_id,
> > -		       const struct rte_eth_txconf *tx_conf)
> > +		       const struct rte_eth_txq_conf *tx_conf)
> >  {
> >  	uint16_t i;
> >  	unsigned int tsize;
> > diff --git a/drivers/net/bnx2x/bnx2x_rxtx.h b/drivers/net/bnx2x/bnx2x_rxtx.h
> > index 2e38ec26a..1c6a6b38d 100644
> > --- a/drivers/net/bnx2x/bnx2x_rxtx.h
> > +++ b/drivers/net/bnx2x/bnx2x_rxtx.h
> > @@ -68,12 +68,12 @@ struct bnx2x_tx_queue {
> >
> >  int bnx2x_dev_rx_queue_setup(struct rte_eth_dev *dev, uint16_t rx_queue_id,
> >  			      uint16_t nb_rx_desc, unsigned int socket_id,
> > -			      const struct rte_eth_rxconf *rx_conf,
> > +			      const struct rte_eth_rxq_conf *rx_conf,
> >  			      struct rte_mempool *mb_pool);
> >
> >  int bnx2x_dev_tx_queue_setup(struct rte_eth_dev *dev, uint16_t tx_queue_id,
> >  			      uint16_t nb_tx_desc, unsigned int socket_id,
> > -			      const struct rte_eth_txconf *tx_conf);
> > +			      const struct rte_eth_txq_conf *tx_conf);
> >
> >  void bnx2x_dev_rx_queue_release(void *rxq);
> >  void bnx2x_dev_tx_queue_release(void *txq);
> > diff --git a/drivers/net/bnxt/bnxt_ethdev.c b/drivers/net/bnxt/bnxt_ethdev.c
> > index c9d11228b..508e6b752 100644
> > --- a/drivers/net/bnxt/bnxt_ethdev.c
> > +++ b/drivers/net/bnxt/bnxt_ethdev.c
> > @@ -391,7 +391,7 @@ static void bnxt_dev_info_get_op(struct rte_eth_dev *eth_dev,
> >  				DEV_TX_OFFLOAD_GENEVE_TNL_TSO;
> >
> >  	/* *INDENT-OFF* */
> > -	dev_info->default_rxconf = (struct rte_eth_rxconf) {
> > +	dev_info->default_rxconf = (struct rte_eth_rxq_conf) {
> >  		.rx_thresh = {
> >  			.pthresh = 8,
> >  			.hthresh = 8,
> > @@ -401,7 +401,7 @@ static void bnxt_dev_info_get_op(struct rte_eth_dev *eth_dev,
> >  		.rx_drop_en = 0,
> >  	};
> >
> > -	dev_info->default_txconf = (struct rte_eth_txconf) {
> > +	dev_info->default_txconf = (struct rte_eth_txq_conf) {
> >  		.tx_thresh = {
> >  			.pthresh = 32,
> >  			.hthresh = 0,
> > diff --git a/drivers/net/bnxt/bnxt_rxq.c b/drivers/net/bnxt/bnxt_rxq.c
> > index 0793820b1..d0ab47c36 100644
> > --- a/drivers/net/bnxt/bnxt_rxq.c
> > +++ b/drivers/net/bnxt/bnxt_rxq.c
> > @@ -293,7 +293,7 @@ int bnxt_rx_queue_setup_op(struct rte_eth_dev *eth_dev,
> >  			       uint16_t queue_idx,
> >  			       uint16_t nb_desc,
> >  			       unsigned int socket_id,
> > -			       const struct rte_eth_rxconf *rx_conf,
> > +			       const struct rte_eth_rxq_conf *rx_conf,
> >  			       struct rte_mempool *mp)
> >  {
> >  	struct bnxt *bp = (struct bnxt *)eth_dev->data->dev_private;
> > diff --git a/drivers/net/bnxt/bnxt_rxq.h b/drivers/net/bnxt/bnxt_rxq.h
> > index 01aaa007f..29c0aa0a5 100644
> > --- a/drivers/net/bnxt/bnxt_rxq.h
> > +++ b/drivers/net/bnxt/bnxt_rxq.h
> > @@ -70,7 +70,7 @@ int bnxt_rx_queue_setup_op(struct rte_eth_dev *eth_dev,
> >  			       uint16_t queue_idx,
> >  			       uint16_t nb_desc,
> >  			       unsigned int socket_id,
> > -			       const struct rte_eth_rxconf *rx_conf,
> > +			       const struct rte_eth_rxq_conf *rx_conf,
> >  			       struct rte_mempool *mp);
> >  void bnxt_free_rx_mbufs(struct bnxt *bp);
> >
> > diff --git a/drivers/net/bnxt/bnxt_txq.c b/drivers/net/bnxt/bnxt_txq.c
> > index 99dddddfc..f4701bd68 100644
> > --- a/drivers/net/bnxt/bnxt_txq.c
> > +++ b/drivers/net/bnxt/bnxt_txq.c
> > @@ -102,7 +102,7 @@ int bnxt_tx_queue_setup_op(struct rte_eth_dev *eth_dev,
> >  			       uint16_t queue_idx,
> >  			       uint16_t nb_desc,
> >  			       unsigned int socket_id,
> > -			       const struct rte_eth_txconf *tx_conf)
> > +			       const struct rte_eth_txq_conf *tx_conf)
> >  {
> >  	struct bnxt *bp = (struct bnxt *)eth_dev->data->dev_private;
> >  	struct bnxt_tx_queue *txq;
> > diff --git a/drivers/net/bnxt/bnxt_txq.h b/drivers/net/bnxt/bnxt_txq.h
> > index 16f3a0bdd..5071dfd5b 100644
> > --- a/drivers/net/bnxt/bnxt_txq.h
> > +++ b/drivers/net/bnxt/bnxt_txq.h
> > @@ -70,6 +70,6 @@ int bnxt_tx_queue_setup_op(struct rte_eth_dev *eth_dev,
> >  			       uint16_t queue_idx,
> >  			       uint16_t nb_desc,
> >  			       unsigned int socket_id,
> > -			       const struct rte_eth_txconf *tx_conf);
> > +			       const struct rte_eth_txq_conf *tx_conf);
> >
> >  #endif
> > diff --git a/drivers/net/bonding/rte_eth_bond_pmd.c b/drivers/net/bonding/rte_eth_bond_pmd.c
> > index 3ee70baa0..fbf7ffba5 100644
> > --- a/drivers/net/bonding/rte_eth_bond_pmd.c
> > +++ b/drivers/net/bonding/rte_eth_bond_pmd.c
> > @@ -2153,7 +2153,8 @@ bond_ethdev_vlan_filter_set(struct rte_eth_dev *dev, uint16_t vlan_id, int on)
> >  static int
> >  bond_ethdev_rx_queue_setup(struct rte_eth_dev *dev, uint16_t rx_queue_id,
> >  		uint16_t nb_rx_desc, unsigned int socket_id __rte_unused,
> > -		const struct rte_eth_rxconf *rx_conf, struct rte_mempool *mb_pool)
> > +		const struct rte_eth_rxq_conf *rx_conf,
> > +		struct rte_mempool *mb_pool)
> >  {
> >  	struct bond_rx_queue *bd_rx_q = (struct bond_rx_queue *)
> >  			rte_zmalloc_socket(NULL, sizeof(struct bond_rx_queue),
> > @@ -2166,7 +2167,7 @@ bond_ethdev_rx_queue_setup(struct rte_eth_dev *dev, uint16_t rx_queue_id,
> >
> >  	bd_rx_q->nb_rx_desc = nb_rx_desc;
> >
> > -	memcpy(&(bd_rx_q->rx_conf), rx_conf, sizeof(struct rte_eth_rxconf));
> > +	memcpy(&(bd_rx_q->rx_conf), rx_conf, sizeof(struct rte_eth_rxq_conf));
> >  	bd_rx_q->mb_pool = mb_pool;
> >
> >  	dev->data->rx_queues[rx_queue_id] = bd_rx_q;
> > @@ -2177,7 +2178,7 @@ bond_ethdev_rx_queue_setup(struct rte_eth_dev *dev, uint16_t rx_queue_id,
> >  static int
> >  bond_ethdev_tx_queue_setup(struct rte_eth_dev *dev, uint16_t tx_queue_id,
> >  		uint16_t nb_tx_desc, unsigned int socket_id __rte_unused,
> > -		const struct rte_eth_txconf *tx_conf)
> > +		const struct rte_eth_txq_conf *tx_conf)
> >  {
> >  	struct bond_tx_queue *bd_tx_q  = (struct bond_tx_queue *)
> >  			rte_zmalloc_socket(NULL, sizeof(struct bond_tx_queue),
> > diff --git a/drivers/net/bonding/rte_eth_bond_private.h b/drivers/net/bonding/rte_eth_bond_private.h
> > index 1fe6ff880..579a18c98 100644
> > --- a/drivers/net/bonding/rte_eth_bond_private.h
> > +++ b/drivers/net/bonding/rte_eth_bond_private.h
> > @@ -74,7 +74,7 @@ struct bond_rx_queue {
> >  	/**< Reference to eth_dev private structure */
> >  	uint16_t nb_rx_desc;
> >  	/**< Number of RX descriptors available for the queue */
> > -	struct rte_eth_rxconf rx_conf;
> > +	struct rte_eth_rxq_conf rx_conf;
> >  	/**< Copy of RX configuration structure for queue */
> >  	struct rte_mempool *mb_pool;
> >  	/**< Reference to mbuf pool to use for RX queue */
> > @@ -87,7 +87,7 @@ struct bond_tx_queue {
> >  	/**< Reference to dev private structure */
> >  	uint16_t nb_tx_desc;
> >  	/**< Number of TX descriptors available for the queue */
> > -	struct rte_eth_txconf tx_conf;
> > +	struct rte_eth_txq_conf tx_conf;
> >  	/**< Copy of TX configuration structure for queue */
> >  };
> >
> > diff --git a/drivers/net/cxgbe/cxgbe_ethdev.c b/drivers/net/cxgbe/cxgbe_ethdev.c
> > index 7bca45614..b8f965765 100644
> > --- a/drivers/net/cxgbe/cxgbe_ethdev.c
> > +++ b/drivers/net/cxgbe/cxgbe_ethdev.c
> > @@ -443,7 +443,7 @@ static int cxgbe_dev_tx_queue_stop(struct rte_eth_dev *eth_dev,
> >  static int cxgbe_dev_tx_queue_setup(struct rte_eth_dev *eth_dev,
> >  				    uint16_t queue_idx,	uint16_t nb_desc,
> >  				    unsigned int socket_id,
> > -				    const struct rte_eth_txconf *tx_conf)
> > +				    const struct rte_eth_txq_conf *tx_conf)
> >  {
> >  	struct port_info *pi = (struct port_info *)(eth_dev->data->dev_private);
> >  	struct adapter *adapter = pi->adapter;
> > @@ -552,7 +552,7 @@ static int cxgbe_dev_rx_queue_stop(struct rte_eth_dev *eth_dev,
> >  static int cxgbe_dev_rx_queue_setup(struct rte_eth_dev *eth_dev,
> >  				    uint16_t queue_idx,	uint16_t nb_desc,
> >  				    unsigned int socket_id,
> > -				    const struct rte_eth_rxconf *rx_conf,
> > +				    const struct rte_eth_rxq_conf *rx_conf,
> >  				    struct rte_mempool *mp)
> >  {
> >  	struct port_info *pi = (struct port_info *)(eth_dev->data->dev_private);
> > diff --git a/drivers/net/dpaa2/dpaa2_ethdev.c b/drivers/net/dpaa2/dpaa2_ethdev.c
> > index 429b3a086..80b79ecc2 100644
> > --- a/drivers/net/dpaa2/dpaa2_ethdev.c
> > +++ b/drivers/net/dpaa2/dpaa2_ethdev.c
> > @@ -355,7 +355,7 @@ dpaa2_dev_rx_queue_setup(struct rte_eth_dev *dev,
> >  			 uint16_t rx_queue_id,
> >  			 uint16_t nb_rx_desc __rte_unused,
> >  			 unsigned int socket_id __rte_unused,
> > -			 const struct rte_eth_rxconf *rx_conf __rte_unused,
> > +			 const struct rte_eth_rxq_conf *rx_conf __rte_unused,
> >  			 struct rte_mempool *mb_pool)
> >  {
> >  	struct dpaa2_dev_priv *priv = dev->data->dev_private;
> > @@ -440,7 +440,7 @@ dpaa2_dev_tx_queue_setup(struct rte_eth_dev *dev,
> >  			 uint16_t tx_queue_id,
> >  			 uint16_t nb_tx_desc __rte_unused,
> >  			 unsigned int socket_id __rte_unused,
> > -			 const struct rte_eth_txconf *tx_conf __rte_unused)
> > +			 const struct rte_eth_txq_conf *tx_conf __rte_unused)
> >  {
> >  	struct dpaa2_dev_priv *priv = dev->data->dev_private;
> >  	struct dpaa2_queue *dpaa2_q = (struct dpaa2_queue *)
> > diff --git a/drivers/net/e1000/e1000_ethdev.h b/drivers/net/e1000/e1000_ethdev.h
> > index 5668910c5..6390cc137 100644
> > --- a/drivers/net/e1000/e1000_ethdev.h
> > +++ b/drivers/net/e1000/e1000_ethdev.h
> > @@ -372,7 +372,7 @@ void igb_dev_free_queues(struct rte_eth_dev *dev);
> >
> >  int eth_igb_rx_queue_setup(struct rte_eth_dev *dev, uint16_t rx_queue_id,
> >  		uint16_t nb_rx_desc, unsigned int socket_id,
> > -		const struct rte_eth_rxconf *rx_conf,
> > +		const struct rte_eth_rxq_conf *rx_conf,
> >  		struct rte_mempool *mb_pool);
> >
> >  uint32_t eth_igb_rx_queue_count(struct rte_eth_dev *dev,
> > @@ -385,7 +385,7 @@ int eth_igb_tx_descriptor_status(void *tx_queue, uint16_t offset);
> >
> >  int eth_igb_tx_queue_setup(struct rte_eth_dev *dev, uint16_t tx_queue_id,
> >  		uint16_t nb_tx_desc, unsigned int socket_id,
> > -		const struct rte_eth_txconf *tx_conf);
> > +		const struct rte_eth_txq_conf *tx_conf);
> >
> >  int eth_igb_tx_done_cleanup(void *txq, uint32_t free_cnt);
> >
> > @@ -441,7 +441,7 @@ void em_dev_free_queues(struct rte_eth_dev *dev);
> >
> >  int eth_em_rx_queue_setup(struct rte_eth_dev *dev, uint16_t rx_queue_id,
> >  		uint16_t nb_rx_desc, unsigned int socket_id,
> > -		const struct rte_eth_rxconf *rx_conf,
> > +		const struct rte_eth_rxq_conf *rx_conf,
> >  		struct rte_mempool *mb_pool);
> >
> >  uint32_t eth_em_rx_queue_count(struct rte_eth_dev *dev,
> > @@ -454,7 +454,7 @@ int eth_em_tx_descriptor_status(void *tx_queue, uint16_t offset);
> >
> >  int eth_em_tx_queue_setup(struct rte_eth_dev *dev, uint16_t tx_queue_id,
> >  		uint16_t nb_tx_desc, unsigned int socket_id,
> > -		const struct rte_eth_txconf *tx_conf);
> > +		const struct rte_eth_txq_conf *tx_conf);
> >
> >  int eth_em_rx_init(struct rte_eth_dev *dev);
> >
> > diff --git a/drivers/net/e1000/em_rxtx.c b/drivers/net/e1000/em_rxtx.c
> > index 31819c5bd..857b7167d 100644
> > --- a/drivers/net/e1000/em_rxtx.c
> > +++ b/drivers/net/e1000/em_rxtx.c
> > @@ -1185,7 +1185,7 @@ eth_em_tx_queue_setup(struct rte_eth_dev *dev,
> >  			 uint16_t queue_idx,
> >  			 uint16_t nb_desc,
> >  			 unsigned int socket_id,
> > -			 const struct rte_eth_txconf *tx_conf)
> > +			 const struct rte_eth_txq_conf *tx_conf)
> >  {
> >  	const struct rte_memzone *tz;
> >  	struct em_tx_queue *txq;
> > @@ -1347,7 +1347,7 @@ eth_em_rx_queue_setup(struct rte_eth_dev *dev,
> >  		uint16_t queue_idx,
> >  		uint16_t nb_desc,
> >  		unsigned int socket_id,
> > -		const struct rte_eth_rxconf *rx_conf,
> > +		const struct rte_eth_rxq_conf *rx_conf,
> >  		struct rte_mempool *mp)
> >  {
> >  	const struct rte_memzone *rz;
> > diff --git a/drivers/net/e1000/igb_ethdev.c b/drivers/net/e1000/igb_ethdev.c
> > index e4f7a9faf..7ac3703ac 100644
> > --- a/drivers/net/e1000/igb_ethdev.c
> > +++ b/drivers/net/e1000/igb_ethdev.c
> > @@ -2252,7 +2252,7 @@ eth_igb_infos_get(struct rte_eth_dev *dev, struct rte_eth_dev_info *dev_info)
> >  	dev_info->reta_size = ETH_RSS_RETA_SIZE_128;
> >  	dev_info->flow_type_rss_offloads = IGB_RSS_OFFLOAD_ALL;
> >
> > -	dev_info->default_rxconf = (struct rte_eth_rxconf) {
> > +	dev_info->default_rxconf = (struct rte_eth_rxq_conf) {
> >  		.rx_thresh = {
> >  			.pthresh = IGB_DEFAULT_RX_PTHRESH,
> >  			.hthresh = IGB_DEFAULT_RX_HTHRESH,
> > @@ -2262,7 +2262,7 @@ eth_igb_infos_get(struct rte_eth_dev *dev, struct rte_eth_dev_info *dev_info)
> >  		.rx_drop_en = 0,
> >  	};
> >
> > -	dev_info->default_txconf = (struct rte_eth_txconf) {
> > +	dev_info->default_txconf = (struct rte_eth_txq_conf) {
> >  		.tx_thresh = {
> >  			.pthresh = IGB_DEFAULT_TX_PTHRESH,
> >  			.hthresh = IGB_DEFAULT_TX_HTHRESH,
> > @@ -2339,7 +2339,7 @@ eth_igbvf_infos_get(struct rte_eth_dev *dev, struct rte_eth_dev_info *dev_info)
> >  		break;
> >  	}
> >
> > -	dev_info->default_rxconf = (struct rte_eth_rxconf) {
> > +	dev_info->default_rxconf = (struct rte_eth_rxq_conf) {
> >  		.rx_thresh = {
> >  			.pthresh = IGB_DEFAULT_RX_PTHRESH,
> >  			.hthresh = IGB_DEFAULT_RX_HTHRESH,
> > @@ -2349,7 +2349,7 @@ eth_igbvf_infos_get(struct rte_eth_dev *dev, struct rte_eth_dev_info *dev_info)
> >  		.rx_drop_en = 0,
> >  	};
> >
> > -	dev_info->default_txconf = (struct rte_eth_txconf) {
> > +	dev_info->default_txconf = (struct rte_eth_txq_conf) {
> >  		.tx_thresh = {
> >  			.pthresh = IGB_DEFAULT_TX_PTHRESH,
> >  			.hthresh = IGB_DEFAULT_TX_HTHRESH,
> > diff --git a/drivers/net/e1000/igb_rxtx.c b/drivers/net/e1000/igb_rxtx.c
> > index 1c80a2a1b..f4a7fe571 100644
> > --- a/drivers/net/e1000/igb_rxtx.c
> > +++ b/drivers/net/e1000/igb_rxtx.c
> > @@ -1458,7 +1458,7 @@ eth_igb_tx_queue_setup(struct rte_eth_dev *dev,
> >  			 uint16_t queue_idx,
> >  			 uint16_t nb_desc,
> >  			 unsigned int socket_id,
> > -			 const struct rte_eth_txconf *tx_conf)
> > +			 const struct rte_eth_txq_conf *tx_conf)
> >  {
> >  	const struct rte_memzone *tz;
> >  	struct igb_tx_queue *txq;
> > @@ -1604,7 +1604,7 @@ eth_igb_rx_queue_setup(struct rte_eth_dev *dev,
> >  			 uint16_t queue_idx,
> >  			 uint16_t nb_desc,
> >  			 unsigned int socket_id,
> > -			 const struct rte_eth_rxconf *rx_conf,
> > +			 const struct rte_eth_rxq_conf *rx_conf,
> >  			 struct rte_mempool *mp)
> >  {
> >  	const struct rte_memzone *rz;
> > diff --git a/drivers/net/ena/ena_ethdev.c b/drivers/net/ena/ena_ethdev.c
> > index 80ce1f353..69fe5218d 100644
> > --- a/drivers/net/ena/ena_ethdev.c
> > +++ b/drivers/net/ena/ena_ethdev.c
> > @@ -193,10 +193,10 @@ static uint16_t eth_ena_prep_pkts(void *tx_queue, struct rte_mbuf **tx_pkts,
> >  		uint16_t nb_pkts);
> >  static int ena_tx_queue_setup(struct rte_eth_dev *dev, uint16_t queue_idx,
> >  			      uint16_t nb_desc, unsigned int socket_id,
> > -			      const struct rte_eth_txconf *tx_conf);
> > +			      const struct rte_eth_txq_conf *tx_conf);
> >  static int ena_rx_queue_setup(struct rte_eth_dev *dev, uint16_t queue_idx,
> >  			      uint16_t nb_desc, unsigned int socket_id,
> > -			      const struct rte_eth_rxconf *rx_conf,
> > +			      const struct rte_eth_rxq_conf *rx_conf,
> >  			      struct rte_mempool *mp);
> >  static uint16_t eth_ena_recv_pkts(void *rx_queue,
> >  				  struct rte_mbuf **rx_pkts, uint16_t nb_pkts);
> > @@ -940,11 +940,12 @@ static int ena_queue_restart(struct ena_ring *ring)
> >  	return 0;
> >  }
> >
> > -static int ena_tx_queue_setup(struct rte_eth_dev *dev,
> > -			      uint16_t queue_idx,
> > -			      uint16_t nb_desc,
> > -			      __rte_unused unsigned int socket_id,
> > -			      __rte_unused const struct rte_eth_txconf *tx_conf)
> > +static int ena_tx_queue_setup(
> > +		struct rte_eth_dev *dev,
> > +		uint16_t queue_idx,
> > +		uint16_t nb_desc,
> > +		__rte_unused unsigned int socket_id,
> > +		__rte_unused const struct rte_eth_txq_conf *tx_conf)
> >  {
> >  	struct ena_com_create_io_ctx ctx =
> >  		/* policy set to _HOST just to satisfy icc compiler */
> > @@ -1042,12 +1043,13 @@ static int ena_tx_queue_setup(struct rte_eth_dev *dev,
> >  	return rc;
> >  }
> >
> > -static int ena_rx_queue_setup(struct rte_eth_dev *dev,
> > -			      uint16_t queue_idx,
> > -			      uint16_t nb_desc,
> > -			      __rte_unused unsigned int socket_id,
> > -			      __rte_unused const struct rte_eth_rxconf *rx_conf,
> > -			      struct rte_mempool *mp)
> > +static int ena_rx_queue_setup(
> > +		struct rte_eth_dev *dev,
> > +		uint16_t queue_idx,
> > +		uint16_t nb_desc,
> > +		__rte_unused unsigned int socket_id,
> > +		__rte_unused const struct rte_eth_rxq_conf *rx_conf,
> > +		struct rte_mempool *mp)
> >  {
> >  	struct ena_com_create_io_ctx ctx =
> >  		/* policy set to _HOST just to satisfy icc compiler */
> > diff --git a/drivers/net/enic/enic_ethdev.c b/drivers/net/enic/enic_ethdev.c
> > index da8fec2d0..da7e88d23 100644
> > --- a/drivers/net/enic/enic_ethdev.c
> > +++ b/drivers/net/enic/enic_ethdev.c
> > @@ -191,7 +191,7 @@ static int enicpmd_dev_tx_queue_setup(struct rte_eth_dev *eth_dev,
> >  	uint16_t queue_idx,
> >  	uint16_t nb_desc,
> >  	unsigned int socket_id,
> > -	__rte_unused const struct rte_eth_txconf *tx_conf)
> > +	__rte_unused const struct rte_eth_txq_conf *tx_conf)
> >  {
> >  	int ret;
> >  	struct enic *enic = pmd_priv(eth_dev);
> > @@ -303,7 +303,7 @@ static int enicpmd_dev_rx_queue_setup(struct rte_eth_dev *eth_dev,
> >  	uint16_t queue_idx,
> >  	uint16_t nb_desc,
> >  	unsigned int socket_id,
> > -	const struct rte_eth_rxconf *rx_conf,
> > +	const struct rte_eth_rxq_conf *rx_conf,
> >  	struct rte_mempool *mp)
> >  {
> >  	int ret;
> > @@ -485,7 +485,7 @@ static void enicpmd_dev_info_get(struct rte_eth_dev *eth_dev,
> >  		DEV_TX_OFFLOAD_UDP_CKSUM   |
> >  		DEV_TX_OFFLOAD_TCP_CKSUM   |
> >  		DEV_TX_OFFLOAD_TCP_TSO;
> > -	device_info->default_rxconf = (struct rte_eth_rxconf) {
> > +	device_info->default_rxconf = (struct rte_eth_rxq_conf) {
> >  		.rx_free_thresh = ENIC_DEFAULT_RX_FREE_THRESH
> >  	};
> >  }
> > diff --git a/drivers/net/failsafe/failsafe_ops.c b/drivers/net/failsafe/failsafe_ops.c
> > index ff9ad155c..6f3f5ef56 100644
> > --- a/drivers/net/failsafe/failsafe_ops.c
> > +++ b/drivers/net/failsafe/failsafe_ops.c
> > @@ -384,7 +384,7 @@ fs_rx_queue_setup(struct rte_eth_dev *dev,
> >  		uint16_t rx_queue_id,
> >  		uint16_t nb_rx_desc,
> >  		unsigned int socket_id,
> > -		const struct rte_eth_rxconf *rx_conf,
> > +		const struct rte_eth_rxq_conf *rx_conf,
> >  		struct rte_mempool *mb_pool)
> >  {
> >  	struct sub_device *sdev;
> > @@ -452,7 +452,7 @@ fs_tx_queue_setup(struct rte_eth_dev *dev,
> >  		uint16_t tx_queue_id,
> >  		uint16_t nb_tx_desc,
> >  		unsigned int socket_id,
> > -		const struct rte_eth_txconf *tx_conf)
> > +		const struct rte_eth_txq_conf *tx_conf)
> >  {
> >  	struct sub_device *sdev;
> >  	struct txq *txq;
> > diff --git a/drivers/net/fm10k/fm10k_ethdev.c b/drivers/net/fm10k/fm10k_ethdev.c
> > index e60d3a365..d6d9d9169 100644
> > --- a/drivers/net/fm10k/fm10k_ethdev.c
> > +++ b/drivers/net/fm10k/fm10k_ethdev.c
> > @@ -1427,7 +1427,7 @@ fm10k_dev_infos_get(struct rte_eth_dev *dev,
> >  	dev_info->hash_key_size = FM10K_RSSRK_SIZE * sizeof(uint32_t);
> >  	dev_info->reta_size = FM10K_MAX_RSS_INDICES;
> >
> > -	dev_info->default_rxconf = (struct rte_eth_rxconf) {
> > +	dev_info->default_rxconf = (struct rte_eth_rxq_conf) {
> >  		.rx_thresh = {
> >  			.pthresh = FM10K_DEFAULT_RX_PTHRESH,
> >  			.hthresh = FM10K_DEFAULT_RX_HTHRESH,
> > @@ -1437,7 +1437,7 @@ fm10k_dev_infos_get(struct rte_eth_dev *dev,
> >  		.rx_drop_en = 0,
> >  	};
> >
> > -	dev_info->default_txconf = (struct rte_eth_txconf) {
> > +	dev_info->default_txconf = (struct rte_eth_txq_conf) {
> >  		.tx_thresh = {
> >  			.pthresh = FM10K_DEFAULT_TX_PTHRESH,
> >  			.hthresh = FM10K_DEFAULT_TX_HTHRESH,
> > @@ -1740,7 +1740,7 @@ check_thresh(uint16_t min, uint16_t max, uint16_t div, uint16_t request)
> >  }
> >
> >  static inline int
> > -handle_rxconf(struct fm10k_rx_queue *q, const struct rte_eth_rxconf *conf)
> > +handle_rxconf(struct fm10k_rx_queue *q, const struct rte_eth_rxq_conf *conf)
> >  {
> >  	uint16_t rx_free_thresh;
> >
> > @@ -1805,7 +1805,7 @@ mempool_element_size_valid(struct rte_mempool *mp)
> >  static int
> >  fm10k_rx_queue_setup(struct rte_eth_dev *dev, uint16_t queue_id,
> >  	uint16_t nb_desc, unsigned int socket_id,
> > -	const struct rte_eth_rxconf *conf, struct rte_mempool *mp)
> > +	const struct rte_eth_rxq_conf *conf, struct rte_mempool *mp)
> >  {
> >  	struct fm10k_hw *hw = FM10K_DEV_PRIVATE_TO_HW(dev->data->dev_private);
> >  	struct fm10k_dev_info *dev_info =
> > @@ -1912,7 +1912,7 @@ fm10k_rx_queue_release(void *queue)
> >  }
> >
> >  static inline int
> > -handle_txconf(struct fm10k_tx_queue *q, const struct rte_eth_txconf *conf)
> > +handle_txconf(struct fm10k_tx_queue *q, const struct rte_eth_txq_conf *conf)
> >  {
> >  	uint16_t tx_free_thresh;
> >  	uint16_t tx_rs_thresh;
> > @@ -1971,7 +1971,7 @@ handle_txconf(struct fm10k_tx_queue *q, const struct rte_eth_txconf *conf)
> >  static int
> >  fm10k_tx_queue_setup(struct rte_eth_dev *dev, uint16_t queue_id,
> >  	uint16_t nb_desc, unsigned int socket_id,
> > -	const struct rte_eth_txconf *conf)
> > +	const struct rte_eth_txq_conf *conf)
> >  {
> >  	struct fm10k_hw *hw = FM10K_DEV_PRIVATE_TO_HW(dev->data->dev_private);
> >  	struct fm10k_tx_queue *q;
> > diff --git a/drivers/net/i40e/i40e_ethdev.c b/drivers/net/i40e/i40e_ethdev.c
> > index 8e0580c56..9dc422cbb 100644
> > --- a/drivers/net/i40e/i40e_ethdev.c
> > +++ b/drivers/net/i40e/i40e_ethdev.c
> > @@ -2973,7 +2973,7 @@ i40e_dev_info_get(struct rte_eth_dev *dev, struct rte_eth_dev_info *dev_info)
> >  	dev_info->reta_size = pf->hash_lut_size;
> >  	dev_info->flow_type_rss_offloads = I40E_RSS_OFFLOAD_ALL;
> >
> > -	dev_info->default_rxconf = (struct rte_eth_rxconf) {
> > +	dev_info->default_rxconf = (struct rte_eth_rxq_conf) {
> >  		.rx_thresh = {
> >  			.pthresh = I40E_DEFAULT_RX_PTHRESH,
> >  			.hthresh = I40E_DEFAULT_RX_HTHRESH,
> > @@ -2983,7 +2983,7 @@ i40e_dev_info_get(struct rte_eth_dev *dev, struct rte_eth_dev_info *dev_info)
> >  		.rx_drop_en = 0,
> >  	};
> >
> > -	dev_info->default_txconf = (struct rte_eth_txconf) {
> > +	dev_info->default_txconf = (struct rte_eth_txq_conf) {
> >  		.tx_thresh = {
> >  			.pthresh = I40E_DEFAULT_TX_PTHRESH,
> >  			.hthresh = I40E_DEFAULT_TX_HTHRESH,
> > diff --git a/drivers/net/i40e/i40e_ethdev_vf.c b/drivers/net/i40e/i40e_ethdev_vf.c
> > index 7c5c16b85..61938d487 100644
> > --- a/drivers/net/i40e/i40e_ethdev_vf.c
> > +++ b/drivers/net/i40e/i40e_ethdev_vf.c
> > @@ -2144,7 +2144,7 @@ i40evf_dev_info_get(struct rte_eth_dev *dev, struct rte_eth_dev_info *dev_info)
> >  		DEV_TX_OFFLOAD_TCP_CKSUM |
> >  		DEV_TX_OFFLOAD_SCTP_CKSUM;
> >
> > -	dev_info->default_rxconf = (struct rte_eth_rxconf) {
> > +	dev_info->default_rxconf = (struct rte_eth_rxq_conf) {
> >  		.rx_thresh = {
> >  			.pthresh = I40E_DEFAULT_RX_PTHRESH,
> >  			.hthresh = I40E_DEFAULT_RX_HTHRESH,
> > @@ -2154,7 +2154,7 @@ i40evf_dev_info_get(struct rte_eth_dev *dev, struct rte_eth_dev_info *dev_info)
> >  		.rx_drop_en = 0,
> >  	};
> >
> > -	dev_info->default_txconf = (struct rte_eth_txconf) {
> > +	dev_info->default_txconf = (struct rte_eth_txq_conf) {
> >  		.tx_thresh = {
> >  			.pthresh = I40E_DEFAULT_TX_PTHRESH,
> >  			.hthresh = I40E_DEFAULT_TX_HTHRESH,
> > diff --git a/drivers/net/i40e/i40e_rxtx.c b/drivers/net/i40e/i40e_rxtx.c
> > index d42c23c05..f4e367db8 100644
> > --- a/drivers/net/i40e/i40e_rxtx.c
> > +++ b/drivers/net/i40e/i40e_rxtx.c
> > @@ -1731,7 +1731,7 @@ i40e_dev_rx_queue_setup(struct rte_eth_dev *dev,
> >  			uint16_t queue_idx,
> >  			uint16_t nb_desc,
> >  			unsigned int socket_id,
> > -			const struct rte_eth_rxconf *rx_conf,
> > +			const struct rte_eth_rxq_conf *rx_conf,
> >  			struct rte_mempool *mp)
> >  {
> >  	struct i40e_vsi *vsi;
> > @@ -2010,7 +2010,7 @@ i40e_dev_tx_queue_setup(struct rte_eth_dev *dev,
> >  			uint16_t queue_idx,
> >  			uint16_t nb_desc,
> >  			unsigned int socket_id,
> > -			const struct rte_eth_txconf *tx_conf)
> > +			const struct rte_eth_txq_conf *tx_conf)
> >  {
> >  	struct i40e_vsi *vsi;
> >  	struct i40e_hw *hw = I40E_DEV_PRIVATE_TO_HW(dev->data->dev_private);
> > diff --git a/drivers/net/i40e/i40e_rxtx.h b/drivers/net/i40e/i40e_rxtx.h
> > index 20084d649..9d48e33f9 100644
> > --- a/drivers/net/i40e/i40e_rxtx.h
> > +++ b/drivers/net/i40e/i40e_rxtx.h
> > @@ -201,13 +201,13 @@ int i40e_dev_rx_queue_setup(struct rte_eth_dev *dev,
> >  			    uint16_t queue_idx,
> >  			    uint16_t nb_desc,
> >  			    unsigned int socket_id,
> > -			    const struct rte_eth_rxconf *rx_conf,
> > +			    const struct rte_eth_rxq_conf *rx_conf,
> >  			    struct rte_mempool *mp);
> >  int i40e_dev_tx_queue_setup(struct rte_eth_dev *dev,
> >  			    uint16_t queue_idx,
> >  			    uint16_t nb_desc,
> >  			    unsigned int socket_id,
> > -			    const struct rte_eth_txconf *tx_conf);
> > +			    const struct rte_eth_txq_conf *tx_conf);
> >  void i40e_dev_rx_queue_release(void *rxq);
> >  void i40e_dev_tx_queue_release(void *txq);
> >  uint16_t i40e_recv_pkts(void *rx_queue,
> > diff --git a/drivers/net/ixgbe/ixgbe_ethdev.c b/drivers/net/ixgbe/ixgbe_ethdev.c
> > index 22171d866..7022f2ecc 100644
> > --- a/drivers/net/ixgbe/ixgbe_ethdev.c
> > +++ b/drivers/net/ixgbe/ixgbe_ethdev.c
> > @@ -3665,7 +3665,7 @@ ixgbe_dev_info_get(struct rte_eth_dev *dev, struct rte_eth_dev_info *dev_info)
> >  	    hw->mac.type == ixgbe_mac_X550EM_a)
> >  		dev_info->tx_offload_capa |= DEV_TX_OFFLOAD_OUTER_IPV4_CKSUM;
> >
> > -	dev_info->default_rxconf = (struct rte_eth_rxconf) {
> > +	dev_info->default_rxconf = (struct rte_eth_rxq_conf) {
> >  		.rx_thresh = {
> >  			.pthresh = IXGBE_DEFAULT_RX_PTHRESH,
> >  			.hthresh = IXGBE_DEFAULT_RX_HTHRESH,
> > @@ -3675,7 +3675,7 @@ ixgbe_dev_info_get(struct rte_eth_dev *dev, struct rte_eth_dev_info *dev_info)
> >  		.rx_drop_en = 0,
> >  	};
> >
> > -	dev_info->default_txconf = (struct rte_eth_txconf) {
> > +	dev_info->default_txconf = (struct rte_eth_txq_conf) {
> >  		.tx_thresh = {
> >  			.pthresh = IXGBE_DEFAULT_TX_PTHRESH,
> >  			.hthresh = IXGBE_DEFAULT_TX_HTHRESH,
> > @@ -3776,7 +3776,7 @@ ixgbevf_dev_info_get(struct rte_eth_dev *dev,
> >  				DEV_TX_OFFLOAD_SCTP_CKSUM  |
> >  				DEV_TX_OFFLOAD_TCP_TSO;
> >
> > -	dev_info->default_rxconf = (struct rte_eth_rxconf) {
> > +	dev_info->default_rxconf = (struct rte_eth_rxq_conf) {
> >  		.rx_thresh = {
> >  			.pthresh = IXGBE_DEFAULT_RX_PTHRESH,
> >  			.hthresh = IXGBE_DEFAULT_RX_HTHRESH,
> > @@ -3786,7 +3786,7 @@ ixgbevf_dev_info_get(struct rte_eth_dev *dev,
> >  		.rx_drop_en = 0,
> >  	};
> >
> > -	dev_info->default_txconf = (struct rte_eth_txconf) {
> > +	dev_info->default_txconf = (struct rte_eth_txq_conf) {
> >  		.tx_thresh = {
> >  			.pthresh = IXGBE_DEFAULT_TX_PTHRESH,
> >  			.hthresh = IXGBE_DEFAULT_TX_HTHRESH,
> > diff --git a/drivers/net/ixgbe/ixgbe_ethdev.h b/drivers/net/ixgbe/ixgbe_ethdev.h
> > index caa50c8b9..4085a704a 100644
> > --- a/drivers/net/ixgbe/ixgbe_ethdev.h
> > +++ b/drivers/net/ixgbe/ixgbe_ethdev.h
> > @@ -599,12 +599,12 @@ void ixgbe_dev_tx_queue_release(void *txq);
> >
> >  int  ixgbe_dev_rx_queue_setup(struct rte_eth_dev *dev, uint16_t rx_queue_id,
> >  		uint16_t nb_rx_desc, unsigned int socket_id,
> > -		const struct rte_eth_rxconf *rx_conf,
> > +		const struct rte_eth_rxq_conf *rx_conf,
> >  		struct rte_mempool *mb_pool);
> >
> >  int  ixgbe_dev_tx_queue_setup(struct rte_eth_dev *dev, uint16_t tx_queue_id,
> >  		uint16_t nb_tx_desc, unsigned int socket_id,
> > -		const struct rte_eth_txconf *tx_conf);
> > +		const struct rte_eth_txq_conf *tx_conf);
> >
> >  uint32_t ixgbe_dev_rx_queue_count(struct rte_eth_dev *dev,
> >  		uint16_t rx_queue_id);
> > diff --git a/drivers/net/ixgbe/ixgbe_rxtx.c b/drivers/net/ixgbe/ixgbe_rxtx.c
> > index 98d0e1a86..b6b21403d 100644
> > --- a/drivers/net/ixgbe/ixgbe_rxtx.c
> > +++ b/drivers/net/ixgbe/ixgbe_rxtx.c
> > @@ -2397,7 +2397,7 @@ ixgbe_dev_tx_queue_setup(struct rte_eth_dev *dev,
> >  			 uint16_t queue_idx,
> >  			 uint16_t nb_desc,
> >  			 unsigned int socket_id,
> > -			 const struct rte_eth_txconf *tx_conf)
> > +			 const struct rte_eth_txq_conf *tx_conf)
> >  {
> >  	const struct rte_memzone *tz;
> >  	struct ixgbe_tx_queue *txq;
> > @@ -2752,7 +2752,7 @@ ixgbe_dev_rx_queue_setup(struct rte_eth_dev *dev,
> >  			 uint16_t queue_idx,
> >  			 uint16_t nb_desc,
> >  			 unsigned int socket_id,
> > -			 const struct rte_eth_rxconf *rx_conf,
> > +			 const struct rte_eth_rxq_conf *rx_conf,
> >  			 struct rte_mempool *mp)
> >  {
> >  	const struct rte_memzone *rz;
> > diff --git a/drivers/net/kni/rte_eth_kni.c b/drivers/net/kni/rte_eth_kni.c
> > index 72a2733ba..e2ef7644f 100644
> > --- a/drivers/net/kni/rte_eth_kni.c
> > +++ b/drivers/net/kni/rte_eth_kni.c
> > @@ -238,7 +238,7 @@ eth_kni_rx_queue_setup(struct rte_eth_dev *dev,
> >  		uint16_t rx_queue_id,
> >  		uint16_t nb_rx_desc __rte_unused,
> >  		unsigned int socket_id __rte_unused,
> > -		const struct rte_eth_rxconf *rx_conf __rte_unused,
> > +		const struct rte_eth_rxq_conf *rx_conf __rte_unused,
> >  		struct rte_mempool *mb_pool)
> >  {
> >  	struct pmd_internals *internals = dev->data->dev_private;
> > @@ -258,7 +258,7 @@ eth_kni_tx_queue_setup(struct rte_eth_dev *dev,
> >  		uint16_t tx_queue_id,
> >  		uint16_t nb_tx_desc __rte_unused,
> >  		unsigned int socket_id __rte_unused,
> > -		const struct rte_eth_txconf *tx_conf __rte_unused)
> > +		const struct rte_eth_txq_conf *tx_conf __rte_unused)
> >  {
> >  	struct pmd_internals *internals = dev->data->dev_private;
> >  	struct pmd_queue *q;
> > diff --git a/drivers/net/liquidio/lio_ethdev.c b/drivers/net/liquidio/lio_ethdev.c
> > index a17fba501..e1bbddde7 100644
> > --- a/drivers/net/liquidio/lio_ethdev.c
> > +++ b/drivers/net/liquidio/lio_ethdev.c
> > @@ -1150,7 +1150,7 @@ lio_dev_mq_rx_configure(struct rte_eth_dev *eth_dev)
> >   * @param socket_id
> >   *    Where to allocate memory
> >   * @param rx_conf
> > - *    Pointer to the struction rte_eth_rxconf
> > + *    Pointer to the structure rte_eth_rxq_conf
> >   * @param mp
> >   *    Pointer to the packet pool
> >   *
> > @@ -1161,7 +1161,7 @@ lio_dev_mq_rx_configure(struct rte_eth_dev *eth_dev)
> >  static int
> >  lio_dev_rx_queue_setup(struct rte_eth_dev *eth_dev, uint16_t q_no,
> >  		       uint16_t num_rx_descs, unsigned int socket_id,
> > -		       const struct rte_eth_rxconf *rx_conf __rte_unused,
> > +		       const struct rte_eth_rxq_conf *rx_conf __rte_unused,
> >  		       struct rte_mempool *mp)
> >  {
> >  	struct lio_device *lio_dev = LIO_DEV(eth_dev);
> > @@ -1242,7 +1242,7 @@ lio_dev_rx_queue_release(void *rxq)
> >   *   NUMA socket id, used for memory allocations
> >   *
> >   * @param tx_conf
> > - *   Pointer to the structure rte_eth_txconf
> > + *   Pointer to the structure rte_eth_txq_conf
> >   *
> >   * @return
> >   *   - On success, return 0
> > @@ -1251,7 +1251,7 @@ lio_dev_rx_queue_release(void *rxq)
> >  static int
> >  lio_dev_tx_queue_setup(struct rte_eth_dev *eth_dev, uint16_t q_no,
> >  		       uint16_t num_tx_descs, unsigned int socket_id,
> > -		       const struct rte_eth_txconf *tx_conf __rte_unused)
> > +		       const struct rte_eth_txq_conf *tx_conf __rte_unused)
> >  {
> >  	struct lio_device *lio_dev = LIO_DEV(eth_dev);
> >  	int fw_mapped_iq = lio_dev->linfo.txpciq[q_no].s.q_no;
> > diff --git a/drivers/net/mlx4/mlx4.c b/drivers/net/mlx4/mlx4.c
> > index 055de49a3..2db8b5646 100644
> > --- a/drivers/net/mlx4/mlx4.c
> > +++ b/drivers/net/mlx4/mlx4.c
> > @@ -539,7 +539,7 @@ priv_set_flags(struct priv *priv, unsigned int keep, unsigned int flags)
> >
> >  static int
> >  txq_setup(struct rte_eth_dev *dev, struct txq *txq, uint16_t desc,
> > -	  unsigned int socket, const struct rte_eth_txconf *conf);
> > +	  unsigned int socket, const struct rte_eth_txq_conf *conf);
> >
> >  static void
> >  txq_cleanup(struct txq *txq);
> > @@ -547,7 +547,7 @@ txq_cleanup(struct txq *txq);
> >  static int
> >  rxq_setup(struct rte_eth_dev *dev, struct rxq *rxq, uint16_t desc,
> >  	  unsigned int socket, int inactive,
> > -	  const struct rte_eth_rxconf *conf,
> > +	  const struct rte_eth_rxq_conf *conf,
> >  	  struct rte_mempool *mp, int children_n,
> >  	  struct rxq *rxq_parent);
> >
> > @@ -1762,7 +1762,7 @@ mlx4_tx_burst_secondary_setup(void *dpdk_txq, struct rte_mbuf **pkts,
> >   */
> >  static int
> >  txq_setup(struct rte_eth_dev *dev, struct txq *txq, uint16_t desc,
> > -	  unsigned int socket, const struct rte_eth_txconf *conf)
> > +	  unsigned int socket, const struct rte_eth_txq_conf *conf)
> >  {
> >  	struct priv *priv = mlx4_get_priv(dev);
> >  	struct txq tmpl = {
> > @@ -1954,7 +1954,7 @@ txq_setup(struct rte_eth_dev *dev, struct txq *txq, uint16_t desc,
> >   */
> >  static int
> >  mlx4_tx_queue_setup(struct rte_eth_dev *dev, uint16_t idx, uint16_t desc,
> > -		    unsigned int socket, const struct rte_eth_txconf *conf)
> > +		    unsigned int socket, const struct rte_eth_txq_conf *conf)
> >  {
> >  	struct priv *priv = dev->data->dev_private;
> >  	struct txq *txq = (*priv->txqs)[idx];
> > @@ -3830,7 +3830,7 @@ rxq_create_qp(struct rxq *rxq,
> >  static int
> >  rxq_setup(struct rte_eth_dev *dev, struct rxq *rxq, uint16_t desc,
> >  	  unsigned int socket, int inactive,
> > -	  const struct rte_eth_rxconf *conf,
> > +	  const struct rte_eth_rxq_conf *conf,
> >  	  struct rte_mempool *mp, int children_n,
> >  	  struct rxq *rxq_parent)
> >  {
> > @@ -4007,7 +4007,7 @@ rxq_setup(struct rte_eth_dev *dev, struct rxq *rxq, uint16_t desc,
> >   */
> >  static int
> >  mlx4_rx_queue_setup(struct rte_eth_dev *dev, uint16_t idx, uint16_t desc,
> > -		    unsigned int socket, const struct rte_eth_rxconf *conf,
> > +		    unsigned int socket, const struct rte_eth_rxq_conf *conf,
> >  		    struct rte_mempool *mp)
> >  {
> >  	struct rxq *parent;
> > diff --git a/drivers/net/mlx5/mlx5_rxq.c b/drivers/net/mlx5/mlx5_rxq.c
> > index 35c5cb42e..85428950c 100644
> > --- a/drivers/net/mlx5/mlx5_rxq.c
> > +++ b/drivers/net/mlx5/mlx5_rxq.c
> > @@ -843,7 +843,7 @@ rxq_setup(struct rxq_ctrl *tmpl)
> >  static int
> >  rxq_ctrl_setup(struct rte_eth_dev *dev, struct rxq_ctrl *rxq_ctrl,
> >  	       uint16_t desc, unsigned int socket,
> > -	       const struct rte_eth_rxconf *conf, struct rte_mempool *mp)
> > +	       const struct rte_eth_rxq_conf *conf, struct rte_mempool *mp)
> >  {
> >  	struct priv *priv = dev->data->dev_private;
> >  	struct rxq_ctrl tmpl = {
> > @@ -1110,7 +1110,7 @@ rxq_ctrl_setup(struct rte_eth_dev *dev, struct rxq_ctrl *rxq_ctrl,
> >   */
> >  int
> >  mlx5_rx_queue_setup(struct rte_eth_dev *dev, uint16_t idx, uint16_t desc,
> > -		    unsigned int socket, const struct rte_eth_rxconf *conf,
> > +		    unsigned int socket, const struct rte_eth_rxq_conf *conf,
> >  		    struct rte_mempool *mp)
> >  {
> >  	struct priv *priv = dev->data->dev_private;
> > diff --git a/drivers/net/mlx5/mlx5_rxtx.h b/drivers/net/mlx5/mlx5_rxtx.h
> > index 033e70f25..eb5315760 100644
> > --- a/drivers/net/mlx5/mlx5_rxtx.h
> > +++ b/drivers/net/mlx5/mlx5_rxtx.h
> > @@ -301,7 +301,7 @@ int priv_allow_flow_type(struct priv *, enum hash_rxq_flow_type);
> >  int priv_rehash_flows(struct priv *);
> >  void rxq_cleanup(struct rxq_ctrl *);
> >  int mlx5_rx_queue_setup(struct rte_eth_dev *, uint16_t, uint16_t, unsigned int,
> > -			const struct rte_eth_rxconf *, struct rte_mempool *);
> > +			const struct rte_eth_rxq_conf *, struct rte_mempool *);
> >  void mlx5_rx_queue_release(void *);
> >  int priv_rx_intr_vec_enable(struct priv *priv);
> >  void priv_rx_intr_vec_disable(struct priv *priv);
> > @@ -314,9 +314,9 @@ int mlx5_rx_intr_disable(struct rte_eth_dev *dev, uint16_t rx_queue_id);
> >
> >  void txq_cleanup(struct txq_ctrl *);
> >  int txq_ctrl_setup(struct rte_eth_dev *, struct txq_ctrl *, uint16_t,
> > -		   unsigned int, const struct rte_eth_txconf *);
> > +		   unsigned int, const struct rte_eth_txq_conf *);
> >  int mlx5_tx_queue_setup(struct rte_eth_dev *, uint16_t, uint16_t, unsigned int,
> > -			const struct rte_eth_txconf *);
> > +			const struct rte_eth_txq_conf *);
> >  void mlx5_tx_queue_release(void *);
> >
> >  /* mlx5_rxtx.c */
> > diff --git a/drivers/net/mlx5/mlx5_txq.c b/drivers/net/mlx5/mlx5_txq.c
> > index 4b0b532b1..7b8c2f766 100644
> > --- a/drivers/net/mlx5/mlx5_txq.c
> > +++ b/drivers/net/mlx5/mlx5_txq.c
> > @@ -211,7 +211,7 @@ txq_setup(struct txq_ctrl *tmpl, struct txq_ctrl *txq_ctrl)
> >  int
> >  txq_ctrl_setup(struct rte_eth_dev *dev, struct txq_ctrl *txq_ctrl,
> >  	       uint16_t desc, unsigned int socket,
> > -	       const struct rte_eth_txconf *conf)
> > +	       const struct rte_eth_txq_conf *conf)
> >  {
> >  	struct priv *priv = mlx5_get_priv(dev);
> >  	struct txq_ctrl tmpl = {
> > @@ -413,7 +413,7 @@ txq_ctrl_setup(struct rte_eth_dev *dev, struct txq_ctrl *txq_ctrl,
> >   */
> >  int
> >  mlx5_tx_queue_setup(struct rte_eth_dev *dev, uint16_t idx, uint16_t desc,
> > -		    unsigned int socket, const struct rte_eth_txconf *conf)
> > +		    unsigned int socket, const struct rte_eth_txq_conf *conf)
> >  {
> >  	struct priv *priv = dev->data->dev_private;
> >  	struct txq *txq = (*priv->txqs)[idx];
> > diff --git a/drivers/net/nfp/nfp_net.c b/drivers/net/nfp/nfp_net.c
> > index a3bf5e1f1..4122824d9 100644
> > --- a/drivers/net/nfp/nfp_net.c
> > +++ b/drivers/net/nfp/nfp_net.c
> > @@ -79,13 +79,13 @@ static uint16_t nfp_net_recv_pkts(void *rx_queue, struct rte_mbuf **rx_pkts,
> >  static void nfp_net_rx_queue_release(void *rxq);
> >  static int nfp_net_rx_queue_setup(struct rte_eth_dev *dev, uint16_t queue_idx,
> >  				  uint16_t nb_desc, unsigned int socket_id,
> > -				  const struct rte_eth_rxconf *rx_conf,
> > +				  const struct rte_eth_rxq_conf *rx_conf,
> >  				  struct rte_mempool *mp);
> >  static int nfp_net_tx_free_bufs(struct nfp_net_txq *txq);
> >  static void nfp_net_tx_queue_release(void *txq);
> >  static int nfp_net_tx_queue_setup(struct rte_eth_dev *dev, uint16_t queue_idx,
> >  				  uint16_t nb_desc, unsigned int socket_id,
> > -				  const struct rte_eth_txconf *tx_conf);
> > +				  const struct rte_eth_txq_conf *tx_conf);
> >  static int nfp_net_start(struct rte_eth_dev *dev);
> >  static void nfp_net_stats_get(struct rte_eth_dev *dev,
> >  			      struct rte_eth_stats *stats);
> > @@ -1119,7 +1119,7 @@ nfp_net_infos_get(struct rte_eth_dev *dev, struct rte_eth_dev_info *dev_info)
> >  					     DEV_TX_OFFLOAD_UDP_CKSUM |
> >  					     DEV_TX_OFFLOAD_TCP_CKSUM;
> >
> > -	dev_info->default_rxconf = (struct rte_eth_rxconf) {
> > +	dev_info->default_rxconf = (struct rte_eth_rxq_conf) {
> >  		.rx_thresh = {
> >  			.pthresh = DEFAULT_RX_PTHRESH,
> >  			.hthresh = DEFAULT_RX_HTHRESH,
> > @@ -1129,7 +1129,7 @@ nfp_net_infos_get(struct rte_eth_dev *dev, struct rte_eth_dev_info *dev_info)
> >  		.rx_drop_en = 0,
> >  	};
> >
> > -	dev_info->default_txconf = (struct rte_eth_txconf) {
> > +	dev_info->default_txconf = (struct rte_eth_txq_conf) {
> >  		.tx_thresh = {
> >  			.pthresh = DEFAULT_TX_PTHRESH,
> >  			.hthresh = DEFAULT_TX_HTHRESH,
> > @@ -1388,7 +1388,7 @@ static int
> >  nfp_net_rx_queue_setup(struct rte_eth_dev *dev,
> >  		       uint16_t queue_idx, uint16_t nb_desc,
> >  		       unsigned int socket_id,
> > -		       const struct rte_eth_rxconf *rx_conf,
> > +		       const struct rte_eth_rxq_conf *rx_conf,
> >  		       struct rte_mempool *mp)
> >  {
> >  	const struct rte_memzone *tz;
> > @@ -1537,7 +1537,7 @@ nfp_net_rx_fill_freelist(struct nfp_net_rxq *rxq)
> >  static int
> >  nfp_net_tx_queue_setup(struct rte_eth_dev *dev, uint16_t queue_idx,
> >  		       uint16_t nb_desc, unsigned int socket_id,
> > -		       const struct rte_eth_txconf *tx_conf)
> > +		       const struct rte_eth_txq_conf *tx_conf)
> >  {
> >  	const struct rte_memzone *tz;
> >  	struct nfp_net_txq *txq;
> > diff --git a/drivers/net/null/rte_eth_null.c b/drivers/net/null/rte_eth_null.c
> > index 5aef0591e..7ae14b77b 100644
> > --- a/drivers/net/null/rte_eth_null.c
> > +++ b/drivers/net/null/rte_eth_null.c
> > @@ -214,7 +214,7 @@ static int
> >  eth_rx_queue_setup(struct rte_eth_dev *dev, uint16_t rx_queue_id,
> >  		uint16_t nb_rx_desc __rte_unused,
> >  		unsigned int socket_id __rte_unused,
> > -		const struct rte_eth_rxconf *rx_conf __rte_unused,
> > +		const struct rte_eth_rxq_conf *rx_conf __rte_unused,
> >  		struct rte_mempool *mb_pool)
> >  {
> >  	struct rte_mbuf *dummy_packet;
> > @@ -249,7 +249,7 @@ static int
> >  eth_tx_queue_setup(struct rte_eth_dev *dev, uint16_t tx_queue_id,
> >  		uint16_t nb_tx_desc __rte_unused,
> >  		unsigned int socket_id __rte_unused,
> > -		const struct rte_eth_txconf *tx_conf __rte_unused)
> > +		const struct rte_eth_txq_conf *tx_conf __rte_unused)
> >  {
> >  	struct rte_mbuf *dummy_packet;
> >  	struct pmd_internals *internals;
> > diff --git a/drivers/net/pcap/rte_eth_pcap.c b/drivers/net/pcap/rte_eth_pcap.c
> > index defb3b419..874856712 100644
> > --- a/drivers/net/pcap/rte_eth_pcap.c
> > +++ b/drivers/net/pcap/rte_eth_pcap.c
> > @@ -634,7 +634,7 @@ eth_rx_queue_setup(struct rte_eth_dev *dev,
> >  		uint16_t rx_queue_id,
> >  		uint16_t nb_rx_desc __rte_unused,
> >  		unsigned int socket_id __rte_unused,
> > -		const struct rte_eth_rxconf *rx_conf __rte_unused,
> > +		const struct rte_eth_rxq_conf *rx_conf __rte_unused,
> >  		struct rte_mempool *mb_pool)
> >  {
> >  	struct pmd_internals *internals = dev->data->dev_private;
> > @@ -652,7 +652,7 @@ eth_tx_queue_setup(struct rte_eth_dev *dev,
> >  		uint16_t tx_queue_id,
> >  		uint16_t nb_tx_desc __rte_unused,
> >  		unsigned int socket_id __rte_unused,
> > -		const struct rte_eth_txconf *tx_conf __rte_unused)
> > +		const struct rte_eth_txq_conf *tx_conf __rte_unused)
> >  {
> >  	struct pmd_internals *internals = dev->data->dev_private;
> >
> > diff --git a/drivers/net/qede/qede_ethdev.c b/drivers/net/qede/qede_ethdev.c
> > index 4e9e89fad..5b6df9688 100644
> > --- a/drivers/net/qede/qede_ethdev.c
> > +++ b/drivers/net/qede/qede_ethdev.c
> > @@ -1293,7 +1293,7 @@ qede_dev_info_get(struct rte_eth_dev *eth_dev,
> >  	dev_info->hash_key_size = ECORE_RSS_KEY_SIZE * sizeof(uint32_t);
> >  	dev_info->flow_type_rss_offloads = (uint64_t)QEDE_RSS_OFFLOAD_ALL;
> >
> > -	dev_info->default_txconf = (struct rte_eth_txconf) {
> > +	dev_info->default_txconf = (struct rte_eth_txq_conf) {
> >  		.txq_flags = QEDE_TXQ_FLAGS,
> >  	};
> >
> > diff --git a/drivers/net/qede/qede_rxtx.c b/drivers/net/qede/qede_rxtx.c
> > index 5c3613c7c..98da5f975 100644
> > --- a/drivers/net/qede/qede_rxtx.c
> > +++ b/drivers/net/qede/qede_rxtx.c
> > @@ -40,7 +40,7 @@ static inline int qede_alloc_rx_buffer(struct qede_rx_queue *rxq)
> >  int
> >  qede_rx_queue_setup(struct rte_eth_dev *dev, uint16_t queue_idx,
> >  		    uint16_t nb_desc, unsigned int socket_id,
> > -		    __rte_unused const struct rte_eth_rxconf *rx_conf,
> > +		    __rte_unused const struct rte_eth_rxq_conf *rx_conf,
> >  		    struct rte_mempool *mp)
> >  {
> >  	struct qede_dev *qdev = QEDE_INIT_QDEV(dev);
> > @@ -238,7 +238,7 @@ qede_tx_queue_setup(struct rte_eth_dev *dev,
> >  		    uint16_t queue_idx,
> >  		    uint16_t nb_desc,
> >  		    unsigned int socket_id,
> > -		    const struct rte_eth_txconf *tx_conf)
> > +		    const struct rte_eth_txq_conf *tx_conf)
> >  {
> >  	struct qede_dev *qdev = dev->data->dev_private;
> >  	struct ecore_dev *edev = &qdev->edev;
> > diff --git a/drivers/net/qede/qede_rxtx.h b/drivers/net/qede/qede_rxtx.h
> > index b551fd6ae..0c10b8ebe 100644
> > --- a/drivers/net/qede/qede_rxtx.h
> > +++ b/drivers/net/qede/qede_rxtx.h
> > @@ -225,14 +225,14 @@ struct qede_fastpath {
> >   */
> >  int qede_rx_queue_setup(struct rte_eth_dev *dev, uint16_t queue_idx,
> >  			uint16_t nb_desc, unsigned int socket_id,
> > -			const struct rte_eth_rxconf *rx_conf,
> > +			const struct rte_eth_rxq_conf *rx_conf,
> >  			struct rte_mempool *mp);
> >
> >  int qede_tx_queue_setup(struct rte_eth_dev *dev,
> >  			uint16_t queue_idx,
> >  			uint16_t nb_desc,
> >  			unsigned int socket_id,
> > -			const struct rte_eth_txconf *tx_conf);
> > +			const struct rte_eth_txq_conf *tx_conf);
> >
> >  void qede_rx_queue_release(void *rx_queue);
> >
> > diff --git a/drivers/net/ring/rte_eth_ring.c b/drivers/net/ring/rte_eth_ring.c
> > index 464d3d384..6d077e3cf 100644
> > --- a/drivers/net/ring/rte_eth_ring.c
> > +++ b/drivers/net/ring/rte_eth_ring.c
> > @@ -155,11 +155,12 @@ eth_dev_set_link_up(struct rte_eth_dev *dev)
> >  }
> >
> >  static int
> > -eth_rx_queue_setup(struct rte_eth_dev *dev, uint16_t rx_queue_id,
> > -				    uint16_t nb_rx_desc __rte_unused,
> > -				    unsigned int socket_id __rte_unused,
> > -				    const struct rte_eth_rxconf *rx_conf __rte_unused,
> > -				    struct rte_mempool *mb_pool __rte_unused)
> > +eth_rx_queue_setup(struct rte_eth_dev *dev,
> > +		   uint16_t rx_queue_id,
> > +		   uint16_t nb_rx_desc __rte_unused,
> > +		   unsigned int socket_id __rte_unused,
> > +		   const struct rte_eth_rxq_conf *rx_conf __rte_unused,
> > +		   struct rte_mempool *mb_pool __rte_unused)
> >  {
> >  	struct pmd_internals *internals = dev->data->dev_private;
> >  	dev->data->rx_queues[rx_queue_id] = &internals->rx_ring_queues[rx_queue_id];
> > @@ -167,10 +168,11 @@ eth_rx_queue_setup(struct rte_eth_dev *dev, uint16_t rx_queue_id,
> >  }
> >
> >  static int
> > -eth_tx_queue_setup(struct rte_eth_dev *dev, uint16_t tx_queue_id,
> > -				    uint16_t nb_tx_desc __rte_unused,
> > -				    unsigned int socket_id __rte_unused,
> > -				    const struct rte_eth_txconf *tx_conf __rte_unused)
> > +eth_tx_queue_setup(struct rte_eth_dev *dev,
> > +		   uint16_t tx_queue_id,
> > +		   uint16_t nb_tx_desc __rte_unused,
> > +		   unsigned int socket_id __rte_unused,
> > +		   const struct rte_eth_txq_conf *tx_conf __rte_unused)
> >  {
> >  	struct pmd_internals *internals = dev->data->dev_private;
> >  	dev->data->tx_queues[tx_queue_id] = &internals->tx_ring_queues[tx_queue_id];
> > diff --git a/drivers/net/sfc/sfc_ethdev.c b/drivers/net/sfc/sfc_ethdev.c
> > index 2b037d863..959a2b42f 100644
> > --- a/drivers/net/sfc/sfc_ethdev.c
> > +++ b/drivers/net/sfc/sfc_ethdev.c
> > @@ -404,7 +404,7 @@ sfc_dev_allmulti_disable(struct rte_eth_dev *dev)
> >  static int
> >  sfc_rx_queue_setup(struct rte_eth_dev *dev, uint16_t rx_queue_id,
> >  		   uint16_t nb_rx_desc, unsigned int socket_id,
> > -		   const struct rte_eth_rxconf *rx_conf,
> > +		   const struct rte_eth_rxq_conf *rx_conf,
> >  		   struct rte_mempool *mb_pool)
> >  {
> >  	struct sfc_adapter *sa = dev->data->dev_private;
> > @@ -461,7 +461,7 @@ sfc_rx_queue_release(void *queue)
> >  static int
> >  sfc_tx_queue_setup(struct rte_eth_dev *dev, uint16_t tx_queue_id,
> >  		   uint16_t nb_tx_desc, unsigned int socket_id,
> > -		   const struct rte_eth_txconf *tx_conf)
> > +		   const struct rte_eth_txq_conf *tx_conf)
> >  {
> >  	struct sfc_adapter *sa = dev->data->dev_private;
> >  	int rc;
> > diff --git a/drivers/net/sfc/sfc_rx.c b/drivers/net/sfc/sfc_rx.c
> > index 79ed046ce..079df6272 100644
> > --- a/drivers/net/sfc/sfc_rx.c
> > +++ b/drivers/net/sfc/sfc_rx.c
> > @@ -772,7 +772,7 @@ sfc_rx_qstop(struct sfc_adapter *sa, unsigned int sw_index)
> >
> >  static int
> >  sfc_rx_qcheck_conf(struct sfc_adapter *sa, uint16_t nb_rx_desc,
> > -		   const struct rte_eth_rxconf *rx_conf)
> > +		   const struct rte_eth_rxq_conf *rx_conf)
> >  {
> >  	const uint16_t rx_free_thresh_max = EFX_RXQ_LIMIT(nb_rx_desc);
> >  	int rc = 0;
> > @@ -903,7 +903,7 @@ sfc_rx_mb_pool_buf_size(struct sfc_adapter *sa, struct rte_mempool *mb_pool)
> >  int
> >  sfc_rx_qinit(struct sfc_adapter *sa, unsigned int sw_index,
> >  	     uint16_t nb_rx_desc, unsigned int socket_id,
> > -	     const struct rte_eth_rxconf *rx_conf,
> > +	     const struct rte_eth_rxq_conf *rx_conf,
> >  	     struct rte_mempool *mb_pool)
> >  {
> >  	const efx_nic_cfg_t *encp = efx_nic_cfg_get(sa->nic);
> > diff --git a/drivers/net/sfc/sfc_rx.h b/drivers/net/sfc/sfc_rx.h
> > index 9e6282ead..126c41089 100644
> > --- a/drivers/net/sfc/sfc_rx.h
> > +++ b/drivers/net/sfc/sfc_rx.h
> > @@ -156,7 +156,7 @@ void sfc_rx_stop(struct sfc_adapter *sa);
> >
> >  int sfc_rx_qinit(struct sfc_adapter *sa, unsigned int rx_queue_id,
> >  		 uint16_t nb_rx_desc, unsigned int socket_id,
> > -		 const struct rte_eth_rxconf *rx_conf,
> > +		 const struct rte_eth_rxq_conf *rx_conf,
> >  		 struct rte_mempool *mb_pool);
> >  void sfc_rx_qfini(struct sfc_adapter *sa, unsigned int sw_index);
> >  int sfc_rx_qstart(struct sfc_adapter *sa, unsigned int sw_index);
> > diff --git a/drivers/net/sfc/sfc_tx.c b/drivers/net/sfc/sfc_tx.c
> > index bf596017a..fe030baa4 100644
> > --- a/drivers/net/sfc/sfc_tx.c
> > +++ b/drivers/net/sfc/sfc_tx.c
> > @@ -58,7 +58,7 @@
> >
> >  static int
> >  sfc_tx_qcheck_conf(struct sfc_adapter *sa, uint16_t nb_tx_desc,
> > -		   const struct rte_eth_txconf *tx_conf)
> > +		   const struct rte_eth_txq_conf *tx_conf)
> >  {
> >  	unsigned int flags = tx_conf->txq_flags;
> >  	const efx_nic_cfg_t *encp = efx_nic_cfg_get(sa->nic);
> > @@ -128,7 +128,7 @@ sfc_tx_qflush_done(struct sfc_txq *txq)
> >  int
> >  sfc_tx_qinit(struct sfc_adapter *sa, unsigned int sw_index,
> >  	     uint16_t nb_tx_desc, unsigned int socket_id,
> > -	     const struct rte_eth_txconf *tx_conf)
> > +	     const struct rte_eth_txq_conf *tx_conf)
> >  {
> >  	const efx_nic_cfg_t *encp = efx_nic_cfg_get(sa->nic);
> >  	struct sfc_txq_info *txq_info;
> > diff --git a/drivers/net/sfc/sfc_tx.h b/drivers/net/sfc/sfc_tx.h
> > index 0c1c7083b..90b5eb7d7 100644
> > --- a/drivers/net/sfc/sfc_tx.h
> > +++ b/drivers/net/sfc/sfc_tx.h
> > @@ -141,7 +141,7 @@ void sfc_tx_close(struct sfc_adapter *sa);
> >
> >  int sfc_tx_qinit(struct sfc_adapter *sa, unsigned int sw_index,
> >  		 uint16_t nb_tx_desc, unsigned int socket_id,
> > -		 const struct rte_eth_txconf *tx_conf);
> > +		 const struct rte_eth_txq_conf *tx_conf);
> >  void sfc_tx_qfini(struct sfc_adapter *sa, unsigned int sw_index);
> >
> >  void sfc_tx_qflush_done(struct sfc_txq *txq);
> > diff --git a/drivers/net/szedata2/rte_eth_szedata2.c b/drivers/net/szedata2/rte_eth_szedata2.c
> > index 9c0d57cc1..6ba24a263 100644
> > --- a/drivers/net/szedata2/rte_eth_szedata2.c
> > +++ b/drivers/net/szedata2/rte_eth_szedata2.c
> > @@ -1253,7 +1253,7 @@ eth_rx_queue_setup(struct rte_eth_dev *dev,
> >  		uint16_t rx_queue_id,
> >  		uint16_t nb_rx_desc __rte_unused,
> >  		unsigned int socket_id __rte_unused,
> > -		const struct rte_eth_rxconf *rx_conf __rte_unused,
> > +		const struct rte_eth_rxq_conf *rx_conf __rte_unused,
> >  		struct rte_mempool *mb_pool)
> >  {
> >  	struct pmd_internals *internals = dev->data->dev_private;
> > @@ -1287,7 +1287,7 @@ eth_tx_queue_setup(struct rte_eth_dev *dev,
> >  		uint16_t tx_queue_id,
> >  		uint16_t nb_tx_desc __rte_unused,
> >  		unsigned int socket_id __rte_unused,
> > -		const struct rte_eth_txconf *tx_conf __rte_unused)
> > +		const struct rte_eth_txq_conf *tx_conf __rte_unused)
> >  {
> >  	struct pmd_internals *internals = dev->data->dev_private;
> >  	struct szedata2_tx_queue *txq = &internals->tx_queue[tx_queue_id];
> > diff --git a/drivers/net/tap/rte_eth_tap.c b/drivers/net/tap/rte_eth_tap.c
> > index 9acea8398..5a1125a7a 100644
> > --- a/drivers/net/tap/rte_eth_tap.c
> > +++ b/drivers/net/tap/rte_eth_tap.c
> > @@ -918,7 +918,7 @@ tap_rx_queue_setup(struct rte_eth_dev *dev,
> >  		   uint16_t rx_queue_id,
> >  		   uint16_t nb_rx_desc,
> >  		   unsigned int socket_id,
> > -		   const struct rte_eth_rxconf *rx_conf __rte_unused,
> > +		   const struct rte_eth_rxq_conf *rx_conf __rte_unused,
> >  		   struct rte_mempool *mp)
> >  {
> >  	struct pmd_internals *internals = dev->data->dev_private;
> > @@ -997,7 +997,7 @@ tap_tx_queue_setup(struct rte_eth_dev *dev,
> >  		   uint16_t tx_queue_id,
> >  		   uint16_t nb_tx_desc __rte_unused,
> >  		   unsigned int socket_id __rte_unused,
> > -		   const struct rte_eth_txconf *tx_conf __rte_unused)
> > +		   const struct rte_eth_txq_conf *tx_conf __rte_unused)
> >  {
> >  	struct pmd_internals *internals = dev->data->dev_private;
> >  	int ret;
> > diff --git a/drivers/net/thunderx/nicvf_ethdev.c b/drivers/net/thunderx/nicvf_ethdev.c
> > index edc17f1d4..3ddca8b49 100644
> > --- a/drivers/net/thunderx/nicvf_ethdev.c
> > +++ b/drivers/net/thunderx/nicvf_ethdev.c
> > @@ -936,7 +936,7 @@ nicvf_set_rx_function(struct rte_eth_dev *dev)
> >  static int
> >  nicvf_dev_tx_queue_setup(struct rte_eth_dev *dev, uint16_t qidx,
> >  			 uint16_t nb_desc, unsigned int socket_id,
> > -			 const struct rte_eth_txconf *tx_conf)
> > +			 const struct rte_eth_txq_conf *tx_conf)
> >  {
> >  	uint16_t tx_free_thresh;
> >  	uint8_t is_single_pool;
> > @@ -1261,7 +1261,7 @@ nicvf_rxq_mbuf_setup(struct nicvf_rxq *rxq)
> >  static int
> >  nicvf_dev_rx_queue_setup(struct rte_eth_dev *dev, uint16_t qidx,
> >  			 uint16_t nb_desc, unsigned int socket_id,
> > -			 const struct rte_eth_rxconf *rx_conf,
> > +			 const struct rte_eth_rxq_conf *rx_conf,
> >  			 struct rte_mempool *mp)
> >  {
> >  	uint16_t rx_free_thresh;
> > @@ -1403,12 +1403,12 @@ nicvf_dev_info_get(struct rte_eth_dev *dev, struct rte_eth_dev_info *dev_info)
> >  	if (nicvf_hw_cap(nic) & NICVF_CAP_TUNNEL_PARSING)
> >  		dev_info->flow_type_rss_offloads |= NICVF_RSS_OFFLOAD_TUNNEL;
> >
> > -	dev_info->default_rxconf = (struct rte_eth_rxconf) {
> > +	dev_info->default_rxconf = (struct rte_eth_rxq_conf) {
> >  		.rx_free_thresh = NICVF_DEFAULT_RX_FREE_THRESH,
> >  		.rx_drop_en = 0,
> >  	};
> >
> > -	dev_info->default_txconf = (struct rte_eth_txconf) {
> > +	dev_info->default_txconf = (struct rte_eth_txq_conf) {
> >  		.tx_free_thresh = NICVF_DEFAULT_TX_FREE_THRESH,
> >  		.txq_flags =
> >  			ETH_TXQ_FLAGS_NOMULTSEGS  |
> > diff --git a/drivers/net/vhost/rte_eth_vhost.c b/drivers/net/vhost/rte_eth_vhost.c
> > index 0dac5e60e..c90d06bd7 100644
> > --- a/drivers/net/vhost/rte_eth_vhost.c
> > +++ b/drivers/net/vhost/rte_eth_vhost.c
> > @@ -831,7 +831,7 @@ static int
> >  eth_rx_queue_setup(struct rte_eth_dev *dev, uint16_t rx_queue_id,
> >  		   uint16_t nb_rx_desc __rte_unused,
> >  		   unsigned int socket_id,
> > -		   const struct rte_eth_rxconf *rx_conf __rte_unused,
> > +		   const struct rte_eth_rxq_conf *rx_conf __rte_unused,
> >  		   struct rte_mempool *mb_pool)
> >  {
> >  	struct vhost_queue *vq;
> > @@ -854,7 +854,7 @@ static int
> >  eth_tx_queue_setup(struct rte_eth_dev *dev, uint16_t tx_queue_id,
> >  		   uint16_t nb_tx_desc __rte_unused,
> >  		   unsigned int socket_id,
> > -		   const struct rte_eth_txconf *tx_conf __rte_unused)
> > +		   const struct rte_eth_txq_conf *tx_conf __rte_unused)
> >  {
> >  	struct vhost_queue *vq;
> >
> > diff --git a/drivers/net/virtio/virtio_ethdev.c b/drivers/net/virtio/virtio_ethdev.c
> > index e320811ed..763b30e9a 100644
> > --- a/drivers/net/virtio/virtio_ethdev.c
> > +++ b/drivers/net/virtio/virtio_ethdev.c
> > @@ -1891,7 +1891,7 @@ virtio_dev_info_get(struct rte_eth_dev *dev, struct rte_eth_dev_info *dev_info)
> >  	dev_info->min_rx_bufsize = VIRTIO_MIN_RX_BUFSIZE;
> >  	dev_info->max_rx_pktlen = VIRTIO_MAX_RX_PKTLEN;
> >  	dev_info->max_mac_addrs = VIRTIO_MAX_MAC_ADDRS;
> > -	dev_info->default_txconf = (struct rte_eth_txconf) {
> > +	dev_info->default_txconf = (struct rte_eth_txq_conf) {
> >  		.txq_flags = ETH_TXQ_FLAGS_NOOFFLOADS
> >  	};
> >
> > diff --git a/drivers/net/virtio/virtio_ethdev.h b/drivers/net/virtio/virtio_ethdev.h
> > index c3413c6d9..57f0d7ad2 100644
> > --- a/drivers/net/virtio/virtio_ethdev.h
> > +++ b/drivers/net/virtio/virtio_ethdev.h
> > @@ -89,12 +89,12 @@ int virtio_dev_rx_queue_done(void *rxq, uint16_t offset);
> >
> >  int  virtio_dev_rx_queue_setup(struct rte_eth_dev *dev, uint16_t rx_queue_id,
> >  		uint16_t nb_rx_desc, unsigned int socket_id,
> > -		const struct rte_eth_rxconf *rx_conf,
> > +		const struct rte_eth_rxq_conf *rx_conf,
> >  		struct rte_mempool *mb_pool);
> >
> >  int  virtio_dev_tx_queue_setup(struct rte_eth_dev *dev, uint16_t tx_queue_id,
> >  		uint16_t nb_tx_desc, unsigned int socket_id,
> > -		const struct rte_eth_txconf *tx_conf);
> > +		const struct rte_eth_txq_conf *tx_conf);
> >
> >  uint16_t virtio_recv_pkts(void *rx_queue, struct rte_mbuf **rx_pkts,
> >  		uint16_t nb_pkts);
> > diff --git a/drivers/net/virtio/virtio_rxtx.c b/drivers/net/virtio/virtio_rxtx.c
> > index e30377c51..cff1d9b62 100644
> > --- a/drivers/net/virtio/virtio_rxtx.c
> > +++ b/drivers/net/virtio/virtio_rxtx.c
> > @@ -414,7 +414,7 @@ virtio_dev_rx_queue_setup(struct rte_eth_dev *dev,
> >  			uint16_t queue_idx,
> >  			uint16_t nb_desc,
> >  			unsigned int socket_id __rte_unused,
> > -			__rte_unused const struct rte_eth_rxconf *rx_conf,
> > +			__rte_unused const struct rte_eth_rxq_conf *rx_conf,
> >  			struct rte_mempool *mp)
> >  {
> >  	uint16_t vtpci_queue_idx = 2 * queue_idx + VTNET_SQ_RQ_QUEUE_IDX;
> > @@ -492,7 +492,7 @@ virtio_dev_rx_queue_setup(struct rte_eth_dev *dev,
> >
> >  static void
> >  virtio_update_rxtx_handler(struct rte_eth_dev *dev,
> > -			   const struct rte_eth_txconf *tx_conf)
> > +			   const struct rte_eth_txq_conf *tx_conf)
> >  {
> >  	uint8_t use_simple_rxtx = 0;
> >  	struct virtio_hw *hw = dev->data->dev_private;
> > @@ -519,7 +519,7 @@ virtio_update_rxtx_handler(struct rte_eth_dev *dev,
> >   * struct rte_eth_dev *dev: Used to update dev
> >   * uint16_t nb_desc: Defaults to values read from config space
> >   * unsigned int socket_id: Used to allocate memzone
> > - * const struct rte_eth_txconf *tx_conf: Used to setup tx engine
> > + * const struct rte_eth_txq_conf *tx_conf: Used to setup tx engine
> >   * uint16_t queue_idx: Just used as an index in dev txq list
> >   */
> >  int
> > @@ -527,7 +527,7 @@ virtio_dev_tx_queue_setup(struct rte_eth_dev *dev,
> >  			uint16_t queue_idx,
> >  			uint16_t nb_desc,
> >  			unsigned int socket_id __rte_unused,
> > -			const struct rte_eth_txconf *tx_conf)
> > +			const struct rte_eth_txq_conf *tx_conf)
> >  {
> >  	uint8_t vtpci_queue_idx = 2 * queue_idx + VTNET_SQ_TQ_QUEUE_IDX;
> >  	struct virtio_hw *hw = dev->data->dev_private;
> > diff --git a/drivers/net/vmxnet3/vmxnet3_ethdev.h b/drivers/net/vmxnet3/vmxnet3_ethdev.h
> > index b48058afc..98389fa74 100644
> > --- a/drivers/net/vmxnet3/vmxnet3_ethdev.h
> > +++ b/drivers/net/vmxnet3/vmxnet3_ethdev.h
> > @@ -189,11 +189,11 @@ void vmxnet3_dev_tx_queue_release(void *txq);
> >
> >  int  vmxnet3_dev_rx_queue_setup(struct rte_eth_dev *dev, uint16_t rx_queue_id,
> >  				uint16_t nb_rx_desc, unsigned int socket_id,
> > -				const struct rte_eth_rxconf *rx_conf,
> > +				const struct rte_eth_rxq_conf *rx_conf,
> >  				struct rte_mempool *mb_pool);
> >  int  vmxnet3_dev_tx_queue_setup(struct rte_eth_dev *dev, uint16_t tx_queue_id,
> >  				uint16_t nb_tx_desc, unsigned int socket_id,
> > -				const struct rte_eth_txconf *tx_conf);
> > +				const struct rte_eth_txq_conf *tx_conf);
> >
> >  int vmxnet3_dev_rxtx_init(struct rte_eth_dev *dev);
> >
> > diff --git a/drivers/net/vmxnet3/vmxnet3_rxtx.c b/drivers/net/vmxnet3/vmxnet3_rxtx.c
> > index d9cf43739..cfdf72f7f 100644
> > --- a/drivers/net/vmxnet3/vmxnet3_rxtx.c
> > +++ b/drivers/net/vmxnet3/vmxnet3_rxtx.c
> > @@ -888,7 +888,7 @@ vmxnet3_dev_tx_queue_setup(struct rte_eth_dev *dev,
> >  			   uint16_t queue_idx,
> >  			   uint16_t nb_desc,
> >  			   unsigned int socket_id,
> > -			   const struct rte_eth_txconf *tx_conf)
> > +			   const struct rte_eth_txq_conf *tx_conf)
> >  {
> >  	struct vmxnet3_hw *hw = dev->data->dev_private;
> >  	const struct rte_memzone *mz;
> > @@ -993,7 +993,7 @@ vmxnet3_dev_rx_queue_setup(struct rte_eth_dev *dev,
> >  			   uint16_t queue_idx,
> >  			   uint16_t nb_desc,
> >  			   unsigned int socket_id,
> > -			   __rte_unused const struct rte_eth_rxconf *rx_conf,
> > +			   __rte_unused const struct rte_eth_rxq_conf *rx_conf,
> >  			   struct rte_mempool *mp)
> >  {
> >  	const struct rte_memzone *mz;
> > diff --git a/drivers/net/xenvirt/rte_eth_xenvirt.c b/drivers/net/xenvirt/rte_eth_xenvirt.c
> > index e404b7755..792fbfb0a 100644
> > --- a/drivers/net/xenvirt/rte_eth_xenvirt.c
> > +++ b/drivers/net/xenvirt/rte_eth_xenvirt.c
> > @@ -492,11 +492,12 @@ virtio_queue_setup(struct rte_eth_dev *dev, int queue_type)
> >  }
> >
> >  static int
> > -eth_rx_queue_setup(struct rte_eth_dev *dev,uint16_t rx_queue_id,
> > -				uint16_t nb_rx_desc __rte_unused,
> > -				unsigned int socket_id __rte_unused,
> > -				const struct rte_eth_rxconf *rx_conf __rte_unused,
> > -				struct rte_mempool *mb_pool)
> > +eth_rx_queue_setup(struct rte_eth_dev *dev,
> > +		   uint16_t rx_queue_id,
> > +		   uint16_t nb_rx_desc __rte_unused,
> > +		   unsigned int socket_id __rte_unused,
> > +		   const struct rte_eth_rxq_conf *rx_conf __rte_unused,
> > +		   struct rte_mempool *mb_pool)
> >  {
> >  	struct virtqueue *vq;
> >  	vq = dev->data->rx_queues[rx_queue_id] = virtio_queue_setup(dev, VTNET_RQ);
> > @@ -505,10 +506,11 @@ eth_rx_queue_setup(struct rte_eth_dev *dev,uint16_t rx_queue_id,
> >  }
> >
> >  static int
> > -eth_tx_queue_setup(struct rte_eth_dev *dev, uint16_t tx_queue_id,
> > -				uint16_t nb_tx_desc __rte_unused,
> > -				unsigned int socket_id __rte_unused,
> > -				const struct rte_eth_txconf *tx_conf __rte_unused)
> > +eth_tx_queue_setup(struct rte_eth_dev *dev,
> > +		   uint16_t tx_queue_id,
> > +		   uint16_t nb_tx_desc __rte_unused,
> > +		   unsigned int socket_id __rte_unused,
> > +		   const struct rte_eth_txq_conf *tx_conf __rte_unused)
> >  {
> >  	dev->data->tx_queues[tx_queue_id] = virtio_queue_setup(dev, VTNET_TQ);
> >  	return 0;
> > diff --git a/examples/ip_fragmentation/main.c b/examples/ip_fragmentation/main.c
> > index 8c0e17911..15f9426f2 100644
> > --- a/examples/ip_fragmentation/main.c
> > +++ b/examples/ip_fragmentation/main.c
> > @@ -869,7 +869,7 @@ main(int argc, char **argv)
> >  {
> >  	struct lcore_queue_conf *qconf;
> >  	struct rte_eth_dev_info dev_info;
> > -	struct rte_eth_txconf *txconf;
> > +	struct rte_eth_txq_conf *txconf;
> >  	struct rx_queue *rxq;
> >  	int socket, ret;
> >  	unsigned nb_ports;
> > diff --git a/examples/ip_pipeline/app.h b/examples/ip_pipeline/app.h
> > index e41290e74..59bb1bac8 100644
> > --- a/examples/ip_pipeline/app.h
> > +++ b/examples/ip_pipeline/app.h
> > @@ -103,7 +103,7 @@ struct app_pktq_hwq_in_params {
> >  	uint32_t size;
> >  	uint32_t burst;
> >
> > -	struct rte_eth_rxconf conf;
> > +	struct rte_eth_rxq_conf conf;
> >  };
> >
> >  struct app_pktq_hwq_out_params {
> > @@ -113,7 +113,7 @@ struct app_pktq_hwq_out_params {
> >  	uint32_t burst;
> >  	uint32_t dropless;
> >  	uint64_t n_retries;
> > -	struct rte_eth_txconf conf;
> > +	struct rte_eth_txq_conf conf;
> >  };
> >
> >  struct app_pktq_swq_params {
> > diff --git a/examples/ip_reassembly/main.c b/examples/ip_reassembly/main.c
> > index e62636cb4..746140f60 100644
> > --- a/examples/ip_reassembly/main.c
> > +++ b/examples/ip_reassembly/main.c
> > @@ -1017,7 +1017,7 @@ main(int argc, char **argv)
> >  {
> >  	struct lcore_queue_conf *qconf;
> >  	struct rte_eth_dev_info dev_info;
> > -	struct rte_eth_txconf *txconf;
> > +	struct rte_eth_txq_conf *txconf;
> >  	struct rx_queue *rxq;
> >  	int ret, socket;
> >  	unsigned nb_ports;
> > diff --git a/examples/ipsec-secgw/ipsec-secgw.c b/examples/ipsec-secgw/ipsec-secgw.c
> > index 99dc270cb..807d079cf 100644
> > --- a/examples/ipsec-secgw/ipsec-secgw.c
> > +++ b/examples/ipsec-secgw/ipsec-secgw.c
> > @@ -1325,7 +1325,7 @@ static void
> >  port_init(uint8_t portid)
> >  {
> >  	struct rte_eth_dev_info dev_info;
> > -	struct rte_eth_txconf *txconf;
> > +	struct rte_eth_txq_conf *txconf;
> >  	uint16_t nb_tx_queue, nb_rx_queue;
> >  	uint16_t tx_queueid, rx_queueid, queue, lcore_id;
> >  	int32_t ret, socket_id;
> > diff --git a/examples/ipv4_multicast/main.c b/examples/ipv4_multicast/main.c
> > index 9a13d3530..a3c060778 100644
> > --- a/examples/ipv4_multicast/main.c
> > +++ b/examples/ipv4_multicast/main.c
> > @@ -668,7 +668,7 @@ main(int argc, char **argv)
> >  {
> >  	struct lcore_queue_conf *qconf;
> >  	struct rte_eth_dev_info dev_info;
> > -	struct rte_eth_txconf *txconf;
> > +	struct rte_eth_txq_conf *txconf;
> >  	int ret;
> >  	uint16_t queueid;
> >  	unsigned lcore_id = 0, rx_lcore_id = 0;
> > diff --git a/examples/l3fwd-acl/main.c b/examples/l3fwd-acl/main.c
> > index 8eff4de41..03124e142 100644
> > --- a/examples/l3fwd-acl/main.c
> > +++ b/examples/l3fwd-acl/main.c
> > @@ -1887,7 +1887,7 @@ main(int argc, char **argv)
> >  {
> >  	struct lcore_conf *qconf;
> >  	struct rte_eth_dev_info dev_info;
> > -	struct rte_eth_txconf *txconf;
> > +	struct rte_eth_txq_conf *txconf;
> >  	int ret;
> >  	unsigned nb_ports;
> >  	uint16_t queueid;
> > diff --git a/examples/l3fwd-power/main.c b/examples/l3fwd-power/main.c
> > index fd442f5ef..f54decd20 100644
> > --- a/examples/l3fwd-power/main.c
> > +++ b/examples/l3fwd-power/main.c
> > @@ -1643,7 +1643,7 @@ main(int argc, char **argv)
> >  {
> >  	struct lcore_conf *qconf;
> >  	struct rte_eth_dev_info dev_info;
> > -	struct rte_eth_txconf *txconf;
> > +	struct rte_eth_txq_conf *txconf;
> >  	int ret;
> >  	unsigned nb_ports;
> >  	uint16_t queueid;
> > diff --git a/examples/l3fwd-vf/main.c b/examples/l3fwd-vf/main.c
> > index 34e4a6bef..9a1ff8748 100644
> > --- a/examples/l3fwd-vf/main.c
> > +++ b/examples/l3fwd-vf/main.c
> > @@ -950,7 +950,7 @@ main(int argc, char **argv)
> >  {
> >  	struct lcore_conf *qconf;
> >  	struct rte_eth_dev_info dev_info;
> > -	struct rte_eth_txconf *txconf;
> > +	struct rte_eth_txq_conf *txconf;
> >  	int ret;
> >  	unsigned nb_ports;
> >  	uint16_t queueid;
> > diff --git a/examples/l3fwd/main.c b/examples/l3fwd/main.c
> > index 81995fdbe..2e904b7ae 100644
> > --- a/examples/l3fwd/main.c
> > +++ b/examples/l3fwd/main.c
> > @@ -844,7 +844,7 @@ main(int argc, char **argv)
> >  {
> >  	struct lcore_conf *qconf;
> >  	struct rte_eth_dev_info dev_info;
> > -	struct rte_eth_txconf *txconf;
> > +	struct rte_eth_txq_conf *txconf;
> >  	int ret;
> >  	unsigned nb_ports;
> >  	uint16_t queueid;
> > diff --git a/examples/netmap_compat/lib/compat_netmap.c b/examples/netmap_compat/lib/compat_netmap.c
> > index af2d9f3f7..2c245d1df 100644
> > --- a/examples/netmap_compat/lib/compat_netmap.c
> > +++ b/examples/netmap_compat/lib/compat_netmap.c
> > @@ -57,8 +57,8 @@ struct netmap_port {
> >  	struct rte_mempool   *pool;
> >  	struct netmap_if     *nmif;
> >  	struct rte_eth_conf   eth_conf;
> > -	struct rte_eth_txconf tx_conf;
> > -	struct rte_eth_rxconf rx_conf;
> > +	struct rte_eth_txq_conf tx_conf;
> > +	struct rte_eth_rxq_conf rx_conf;
> >  	int32_t  socket_id;
> >  	uint16_t nr_tx_rings;
> >  	uint16_t nr_rx_rings;
> > diff --git a/examples/performance-thread/l3fwd-thread/main.c b/examples/performance-thread/l3fwd-thread/main.c
> > index 7954b9744..e72b86e78 100644
> > --- a/examples/performance-thread/l3fwd-thread/main.c
> > +++ b/examples/performance-thread/l3fwd-thread/main.c
> > @@ -3493,7 +3493,7 @@ int
> >  main(int argc, char **argv)
> >  {
> >  	struct rte_eth_dev_info dev_info;
> > -	struct rte_eth_txconf *txconf;
> > +	struct rte_eth_txq_conf *txconf;
> >  	int ret;
> >  	int i;
> >  	unsigned nb_ports;
> > diff --git a/examples/ptpclient/ptpclient.c b/examples/ptpclient/ptpclient.c
> > index ddfcdb832..ac350f5fb 100644
> > --- a/examples/ptpclient/ptpclient.c
> > +++ b/examples/ptpclient/ptpclient.c
> > @@ -237,7 +237,7 @@ port_init(uint8_t port, struct rte_mempool *mbuf_pool)
> >  	/* Allocate and set up 1 TX queue per Ethernet port. */
> >  	for (q = 0; q < tx_rings; q++) {
> >  		/* Setup txq_flags */
> > -		struct rte_eth_txconf *txconf;
> > +		struct rte_eth_txq_conf *txconf;
> >
> >  		rte_eth_dev_info_get(q, &dev_info);
> >  		txconf = &dev_info.default_txconf;
> > diff --git a/examples/qos_sched/init.c b/examples/qos_sched/init.c
> > index a82cbd7d5..955d051d2 100644
> > --- a/examples/qos_sched/init.c
> > +++ b/examples/qos_sched/init.c
> > @@ -104,8 +104,8 @@ app_init_port(uint8_t portid, struct rte_mempool *mp)
> >  {
> >  	int ret;
> >  	struct rte_eth_link link;
> > -	struct rte_eth_rxconf rx_conf;
> > -	struct rte_eth_txconf tx_conf;
> > +	struct rte_eth_rxq_conf rx_conf;
> > +	struct rte_eth_txq_conf tx_conf;
> >  	uint16_t rx_size;
> >  	uint16_t tx_size;
> >
> > diff --git a/examples/tep_termination/vxlan_setup.c b/examples/tep_termination/vxlan_setup.c
> > index 050bb32d3..8d61e8891 100644
> > --- a/examples/tep_termination/vxlan_setup.c
> > +++ b/examples/tep_termination/vxlan_setup.c
> > @@ -138,8 +138,8 @@ vxlan_port_init(uint8_t port, struct rte_mempool *mbuf_pool)
> >  	uint16_t rx_ring_size = RTE_TEST_RX_DESC_DEFAULT;
> >  	uint16_t tx_ring_size = RTE_TEST_TX_DESC_DEFAULT;
> >  	struct rte_eth_udp_tunnel tunnel_udp;
> > -	struct rte_eth_rxconf *rxconf;
> > -	struct rte_eth_txconf *txconf;
> > +	struct rte_eth_rxq_conf *rxconf;
> > +	struct rte_eth_txq_conf *txconf;
> >  	struct vxlan_conf *pconf = &vxdev;
> >
> >  	pconf->dst_port = udp_port;
> > diff --git a/examples/vhost/main.c b/examples/vhost/main.c
> > index 4d1589d06..75c4c8341 100644
> > --- a/examples/vhost/main.c
> > +++ b/examples/vhost/main.c
> > @@ -269,8 +269,8 @@ port_init(uint8_t port)
> >  {
> >  	struct rte_eth_dev_info dev_info;
> >  	struct rte_eth_conf port_conf;
> > -	struct rte_eth_rxconf *rxconf;
> > -	struct rte_eth_txconf *txconf;
> > +	struct rte_eth_rxq_conf *rxconf;
> > +	struct rte_eth_txq_conf *txconf;
> >  	int16_t rx_rings, tx_rings;
> >  	uint16_t rx_ring_size, tx_ring_size;
> >  	int retval;
> > diff --git a/examples/vhost_xen/main.c b/examples/vhost_xen/main.c
> > index eba4d35aa..852269cdc 100644
> > --- a/examples/vhost_xen/main.c
> > +++ b/examples/vhost_xen/main.c
> > @@ -276,7 +276,7 @@ static inline int
> >  port_init(uint8_t port, struct rte_mempool *mbuf_pool)
> >  {
> >  	struct rte_eth_dev_info dev_info;
> > -	struct rte_eth_rxconf *rxconf;
> > +	struct rte_eth_rxq_conf *rxconf;
> >  	struct rte_eth_conf port_conf;
> >  	uint16_t rx_rings, tx_rings = (uint16_t)rte_lcore_count();
> >  	uint16_t rx_ring_size = RTE_TEST_RX_DESC_DEFAULT;
> > diff --git a/examples/vmdq/main.c b/examples/vmdq/main.c
> > index 8949a1156..5c3a73789 100644
> > --- a/examples/vmdq/main.c
> > +++ b/examples/vmdq/main.c
> > @@ -189,7 +189,7 @@ static inline int
> >  port_init(uint8_t port, struct rte_mempool *mbuf_pool)
> >  {
> >  	struct rte_eth_dev_info dev_info;
> > -	struct rte_eth_rxconf *rxconf;
> > +	struct rte_eth_rxq_conf *rxconf;
> >  	struct rte_eth_conf port_conf;
> >  	uint16_t rxRings, txRings;
> >  	uint16_t rxRingSize = RTE_TEST_RX_DESC_DEFAULT;
> > diff --git a/lib/librte_ether/rte_ethdev.c b/lib/librte_ether/rte_ethdev.c
> > index 0597641ee..da2424cc4 100644
> > --- a/lib/librte_ether/rte_ethdev.c
> > +++ b/lib/librte_ether/rte_ethdev.c
> > @@ -997,7 +997,7 @@ rte_eth_dev_close(uint8_t port_id)
> >  int
> >  rte_eth_rx_queue_setup(uint8_t port_id, uint16_t rx_queue_id,
> >  		       uint16_t nb_rx_desc, unsigned int socket_id,
> > -		       const struct rte_eth_rxconf *rx_conf,
> > +		       const struct rte_eth_rxq_conf *rx_conf,
> >  		       struct rte_mempool *mp)
> >  {
> >  	int ret;
> > @@ -1088,7 +1088,7 @@ rte_eth_rx_queue_setup(uint8_t port_id, uint16_t rx_queue_id,
> >  int
> >  rte_eth_tx_queue_setup(uint8_t port_id, uint16_t tx_queue_id,
> >  		       uint16_t nb_tx_desc, unsigned int socket_id,
> > -		       const struct rte_eth_txconf *tx_conf)
> > +		       const struct rte_eth_txq_conf *tx_conf)
> >  {
> >  	struct rte_eth_dev *dev;
> >  	struct rte_eth_dev_info dev_info;
> > diff --git a/lib/librte_ether/rte_ethdev.h b/lib/librte_ether/rte_ethdev.h
> > index 0adf3274a..c40db4ee0 100644
> > --- a/lib/librte_ether/rte_ethdev.h
> > +++ b/lib/librte_ether/rte_ethdev.h
> > @@ -686,7 +686,7 @@ struct rte_eth_txmode {
> >  /**
> >   * A structure used to configure an RX ring of an Ethernet port.
> >   */
> > -struct rte_eth_rxconf {
> > +struct rte_eth_rxq_conf {
> >  	struct rte_eth_thresh rx_thresh; /**< RX ring threshold registers. */
> >  	uint16_t rx_free_thresh; /**< Drives the freeing of RX descriptors. */
> >  	uint8_t rx_drop_en; /**< Drop packets if no descriptors are available. */
> > @@ -709,7 +709,7 @@ struct rte_eth_rxconf {
> >  /**
> >   * A structure used to configure a TX ring of an Ethernet port.
> >   */
> > -struct rte_eth_txconf {
> > +struct rte_eth_txq_conf {
> >  	struct rte_eth_thresh tx_thresh; /**< TX ring threshold registers. */
> >  	uint16_t tx_rs_thresh; /**< Drives the setting of RS bit on TXDs. */
> >  	uint16_t tx_free_thresh; /**< Start freeing TX buffers if there are
> > @@ -956,8 +956,10 @@ struct rte_eth_dev_info {
> >  	uint8_t hash_key_size; /**< Hash key size in bytes */
> >  	/** Bit mask of RSS offloads, the bit offset also means flow type */
> >  	uint64_t flow_type_rss_offloads;
> > -	struct rte_eth_rxconf default_rxconf; /**< Default RX configuration */
> > -	struct rte_eth_txconf default_txconf; /**< Default TX configuration */
> > +	struct rte_eth_rxq_conf default_rxconf;
> > +	/**< Default RX queue configuration */
> > +	struct rte_eth_txq_conf default_txconf;
> > +	/**< Default TX queue configuration */
> >  	uint16_t vmdq_queue_base; /**< First queue ID for VMDQ pools. */
> >  	uint16_t vmdq_queue_num;  /**< Queue number for VMDQ pools. */
> >  	uint16_t vmdq_pool_base;  /**< First ID of VMDQ pools. */
> > @@ -975,7 +977,7 @@ struct rte_eth_dev_info {
> >   */
> >  struct rte_eth_rxq_info {
> >  	struct rte_mempool *mp;     /**< mempool used by that queue. */
> > -	struct rte_eth_rxconf conf; /**< queue config parameters. */
> > +	struct rte_eth_rxq_conf conf; /**< queue config parameters. */
> >  	uint8_t scattered_rx;       /**< scattered packets RX supported. */
> >  	uint16_t nb_desc;           /**< configured number of RXDs. */
> >  } __rte_cache_min_aligned;
> > @@ -985,7 +987,7 @@ struct rte_eth_rxq_info {
> >   * Used to retieve information about configured queue.
> >   */
> >  struct rte_eth_txq_info {
> > -	struct rte_eth_txconf conf; /**< queue config parameters. */
> > +	struct rte_eth_txq_conf conf; /**< queue config parameters. */
> >  	uint16_t nb_desc;           /**< configured number of TXDs. */
> >  } __rte_cache_min_aligned;
> >
> > @@ -1185,7 +1187,7 @@ typedef int (*eth_rx_queue_setup_t)(struct rte_eth_dev *dev,
> >  				    uint16_t rx_queue_id,
> >  				    uint16_t nb_rx_desc,
> >  				    unsigned int socket_id,
> > -				    const struct rte_eth_rxconf *rx_conf,
> > +				    const struct rte_eth_rxq_conf *rx_conf,
> >  				    struct rte_mempool *mb_pool);
> >  /**< @internal Set up a receive queue of an Ethernet device. */
> >
> > @@ -1193,7 +1195,7 @@ typedef int (*eth_tx_queue_setup_t)(struct rte_eth_dev *dev,
> >  				    uint16_t tx_queue_id,
> >  				    uint16_t nb_tx_desc,
> >  				    unsigned int socket_id,
> > -				    const struct rte_eth_txconf *tx_conf);
> > +				    const struct rte_eth_txq_conf *tx_conf);
> >  /**< @internal Setup a transmit queue of an Ethernet device. */
> >
> >  typedef int (*eth_rx_enable_intr_t)(struct rte_eth_dev *dev,
> > @@ -1937,7 +1939,7 @@ void _rte_eth_dev_reset(struct rte_eth_dev *dev);
> >   */
> >  int rte_eth_rx_queue_setup(uint8_t port_id, uint16_t rx_queue_id,
> >  		uint16_t nb_rx_desc, unsigned int socket_id,
> > -		const struct rte_eth_rxconf *rx_conf,
> > +		const struct rte_eth_rxq_conf *rx_conf,
> >  		struct rte_mempool *mb_pool);
> >
> >  /**
> > @@ -1985,7 +1987,7 @@ int rte_eth_rx_queue_setup(uint8_t port_id, uint16_t rx_queue_id,
> >   */
> >  int rte_eth_tx_queue_setup(uint8_t port_id, uint16_t tx_queue_id,
> >  		uint16_t nb_tx_desc, unsigned int socket_id,
> > -		const struct rte_eth_txconf *tx_conf);
> > +		const struct rte_eth_txq_conf *tx_conf);
> >
> >  /**
> >   * Return the NUMA socket to which an Ethernet device is connected
> > @@ -2972,7 +2974,7 @@ static inline int rte_eth_tx_descriptor_status(uint8_t port_id,
> >   *
> >   * If the PMD is DEV_TX_OFFLOAD_MT_LOCKFREE capable, multiple threads can
> >   * invoke this function concurrently on the same tx queue without SW lock.
> > - * @see rte_eth_dev_info_get, struct rte_eth_txconf::txq_flags
> > + * @see rte_eth_dev_info_get, struct rte_eth_txq_conf::txq_flags
> >   *
> >   * @param port_id
> >   *   The port identifier of the Ethernet device.
> > diff --git a/test/test-pipeline/init.c b/test/test-pipeline/init.c
> > index 1457c7890..eee75fb0e 100644
> > --- a/test/test-pipeline/init.c
> > +++ b/test/test-pipeline/init.c
> > @@ -117,7 +117,7 @@ static struct rte_eth_conf port_conf = {
> >  	},
> >  };
> >
> > -static struct rte_eth_rxconf rx_conf = {
> > +static struct rte_eth_rxq_conf rx_conf = {
> >  	.rx_thresh = {
> >  		.pthresh = 8,
> >  		.hthresh = 8,
> > @@ -127,7 +127,7 @@ static struct rte_eth_rxconf rx_conf = {
> >  	.rx_drop_en = 0,
> >  };
> >
> > -static struct rte_eth_txconf tx_conf = {
> > +static struct rte_eth_txq_conf tx_conf = {
> >  	.tx_thresh = {
> >  		.pthresh = 36,
> >  		.hthresh = 0,
> > diff --git a/test/test/test_kni.c b/test/test/test_kni.c
> > index db17fdf30..b5445e167 100644
> > --- a/test/test/test_kni.c
> > +++ b/test/test/test_kni.c
> > @@ -67,7 +67,7 @@ struct test_kni_stats {
> >  	volatile uint64_t egress;
> >  };
> >
> > -static const struct rte_eth_rxconf rx_conf = {
> > +static const struct rte_eth_rxq_conf rx_conf = {
> >  	.rx_thresh = {
> >  		.pthresh = 8,
> >  		.hthresh = 8,
> > @@ -76,7 +76,7 @@ static const struct rte_eth_rxconf rx_conf = {
> >  	.rx_free_thresh = 0,
> >  };
> >
> > -static const struct rte_eth_txconf tx_conf = {
> > +static const struct rte_eth_txq_conf tx_conf = {
> >  	.tx_thresh = {
> >  		.pthresh = 36,
> >  		.hthresh = 0,
> > diff --git a/test/test/test_link_bonding.c b/test/test/test_link_bonding.c
> > index dc28cea59..af23b1ae1 100644
> > --- a/test/test/test_link_bonding.c
> > +++ b/test/test/test_link_bonding.c
> > @@ -199,7 +199,7 @@ static struct rte_eth_conf default_pmd_conf = {
> >  	.lpbk_mode = 0,
> >  };
> >
> > -static const struct rte_eth_rxconf rx_conf_default = {
> > +static const struct rte_eth_rxq_conf rx_conf_default = {
> >  	.rx_thresh = {
> >  		.pthresh = RX_PTHRESH,
> >  		.hthresh = RX_HTHRESH,
> > @@ -209,7 +209,7 @@ static const struct rte_eth_rxconf rx_conf_default = {
> >  	.rx_drop_en = 0,
> >  };
> >
> > -static struct rte_eth_txconf tx_conf_default = {
> > +static struct rte_eth_txq_conf tx_conf_default = {
> >  	.tx_thresh = {
> >  		.pthresh = TX_PTHRESH,
> >  		.hthresh = TX_HTHRESH,
> > diff --git a/test/test/test_pmd_perf.c b/test/test/test_pmd_perf.c
> > index 1ffd65a52..6f28ad303 100644
> > --- a/test/test/test_pmd_perf.c
> > +++ b/test/test/test_pmd_perf.c
> > @@ -109,7 +109,7 @@ static struct rte_eth_conf port_conf = {
> >  	.lpbk_mode = 1,  /* enable loopback */
> >  };
> >
> > -static struct rte_eth_rxconf rx_conf = {
> > +static struct rte_eth_rxq_conf rx_conf = {
> >  	.rx_thresh = {
> >  		.pthresh = RX_PTHRESH,
> >  		.hthresh = RX_HTHRESH,
> > @@ -118,7 +118,7 @@ static struct rte_eth_rxconf rx_conf = {
> >  	.rx_free_thresh = 32,
> >  };
> >
> > -static struct rte_eth_txconf tx_conf = {
> > +static struct rte_eth_txq_conf tx_conf = {
> >  	.tx_thresh = {
> >  		.pthresh = TX_PTHRESH,
> >  		.hthresh = TX_HTHRESH,
> > diff --git a/test/test/virtual_pmd.c b/test/test/virtual_pmd.c
> > index 9d46ad564..fb2479ced 100644
> > --- a/test/test/virtual_pmd.c
> > +++ b/test/test/virtual_pmd.c
> > @@ -124,7 +124,7 @@ static int
> >  virtual_ethdev_rx_queue_setup_success(struct rte_eth_dev *dev,
> >  		uint16_t rx_queue_id, uint16_t nb_rx_desc __rte_unused,
> >  		unsigned int socket_id,
> > -		const struct rte_eth_rxconf *rx_conf __rte_unused,
> > +		const struct rte_eth_rxq_conf *rx_conf __rte_unused,
> >  		struct rte_mempool *mb_pool __rte_unused)
> >  {
> >  	struct virtual_ethdev_queue *rx_q;
> > @@ -147,7 +147,7 @@ static int
> >  virtual_ethdev_rx_queue_setup_fail(struct rte_eth_dev *dev __rte_unused,
> >  		uint16_t rx_queue_id __rte_unused, uint16_t nb_rx_desc __rte_unused,
> >  		unsigned int socket_id __rte_unused,
> > -		const struct rte_eth_rxconf *rx_conf __rte_unused,
> > +		const struct rte_eth_rxq_conf *rx_conf __rte_unused,
> >  		struct rte_mempool *mb_pool __rte_unused)
> >  {
> >  	return -1;
> > @@ -157,7 +157,7 @@ static int
> >  virtual_ethdev_tx_queue_setup_success(struct rte_eth_dev *dev,
> >  		uint16_t tx_queue_id, uint16_t nb_tx_desc __rte_unused,
> >  		unsigned int socket_id,
> > -		const struct rte_eth_txconf *tx_conf __rte_unused)
> > +		const struct rte_eth_txq_conf *tx_conf __rte_unused)
> >  {
> >  	struct virtual_ethdev_queue *tx_q;
> >
> > @@ -179,7 +179,7 @@ static int
> >  virtual_ethdev_tx_queue_setup_fail(struct rte_eth_dev *dev __rte_unused,
> >  		uint16_t tx_queue_id __rte_unused, uint16_t nb_tx_desc __rte_unused,
> >  		unsigned int socket_id __rte_unused,
> > -		const struct rte_eth_txconf *tx_conf __rte_unused)
> > +		const struct rte_eth_txq_conf *tx_conf __rte_unused)
> >  {
> >  	return -1;
> >  }
> > --
> > 2.12.0

^ permalink raw reply	[flat|nested] 134+ messages in thread

* Re: [dpdk-dev] [PATCH 4/4] ethdev: add helpers to move to the new offloads API
  2017-09-04  7:12 ` [dpdk-dev] [PATCH 4/4] ethdev: add helpers to move to the new " Shahaf Shuler
  2017-09-04 12:13   ` Ananyev, Konstantin
@ 2017-09-04 13:25   ` Ananyev, Konstantin
  2017-09-04 13:53     ` Thomas Monjalon
  2017-09-04 14:02     ` Shahaf Shuler
  1 sibling, 2 replies; 134+ messages in thread
From: Ananyev, Konstantin @ 2017-09-04 13:25 UTC (permalink / raw)
  To: Shahaf Shuler, thomas; +Cc: dev

Hi Shahaf,

>  }
> 
> +/**
> + * A conversion function from rxmode offloads API to rte_eth_rxq_conf
> + * offloads API.
> + */
> +static void
> +rte_eth_convert_rxmode_offloads(struct rte_eth_rxmode *rxmode,
> +				struct rte_eth_rxq_conf *rxq_conf)
> +{
> +	if (rxmode->header_split == 1)
> +		rxq_conf->offloads |= DEV_RX_OFFLOAD_HEADER_SPLIT;
> +	if (rxmode->hw_ip_checksum == 1)
> +		rxq_conf->offloads |= DEV_RX_OFFLOAD_CHECKSUM;
> +	if (rxmode->hw_vlan_filter == 1)
> +		rxq_conf->offloads |= DEV_RX_OFFLOAD_VLAN_FILTER;

Thinking on it a bit more:
VLAN_FILTER is definitely one per device, as it would affect VFs also.
At least that's what we have for Intel devices (ixgbe, i40e) right now.
For Intel devices VLAN_STRIP is also per device and
will also be applied to all corresponding VFs.
In fact, right now it is possible to query/change these 3 vlan offload flags on the fly
(after dev_start) on a port basis by the rte_eth_dev_(get|set)_vlan_offload API.
So, I think at least these 3 flags need to remain on a port basis.
In fact, why can't we have both per-port and per-queue RX offloads (see the sketch below):
- dev_configure() will accept RX_OFFLOAD_* flags and apply them on a port basis.
- rx_queue_setup() will also accept RX_OFFLOAD_* flags and apply them on a queue basis.
- if a particular RX_OFFLOAD flag couldn't be set up on a queue basis for that device,
   rx_queue_setup() will return an error.
- rte_eth_rxq_info can be extended to provide information on which RX_OFFLOADs
  can be configured on a per-queue basis.
BTW - in that case we probably wouldn't need the ignore flag inside rx_conf anymore.
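
To make the intent concrete, here is a minimal sketch of the flow above
(hypothetical: the per-port rxmode.offloads field does not exist yet; the
per-queue offloads field is the one this series adds to struct
rte_eth_rxq_conf, and port_id/nb_rxq/nb_txq/nb_rxd/socket_id/mb_pool are
assumed to be set up elsewhere):

	struct rte_eth_conf port_conf = { 0 };
	struct rte_eth_rxq_conf rxq_conf = { 0 };
	int ret;

	/* port-wide RX offloads, applied to all queues by default;
	 * rxmode.offloads is an assumed new field */
	port_conf.rxmode.offloads = DEV_RX_OFFLOAD_VLAN_FILTER |
				    DEV_RX_OFFLOAD_CRC_STRIP;
	rte_eth_dev_configure(port_id, nb_rxq, nb_txq, &port_conf);

	/* per-queue refinement on top of the port-wide setting */
	rxq_conf.offloads = port_conf.rxmode.offloads |
			    DEV_RX_OFFLOAD_SCATTER;
	ret = rte_eth_rx_queue_setup(port_id, 0, nb_rxd, socket_id,
				     &rxq_conf, mb_pool);
	/* ret != 0 if SCATTER cannot be toggled per queue on this device */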


> +	if (rxmode->hw_vlan_strip == 1)
> +		rxq_conf->offloads |= DEV_RX_OFFLOAD_VLAN_STRIP;
> +	if (rxmode->hw_vlan_extend == 1)
> +		rxq_conf->offloads |= DEV_RX_OFFLOAD_VLAN_EXTEND;
> +	if (rxmode->jumbo_frame == 1)
> +		rxq_conf->offloads |= DEV_RX_OFFLOAD_JUMBO_FRAME;

There are some extra checks for that flag inside rte_eth_dev_configure().
If we are going to support it per queue, then they probably need to be updated.

> +	if (rxmode->hw_strip_crc == 1)
> +		rxq_conf->offloads |= DEV_RX_OFFLOAD_CRC_STRIP;
> +	if (rxmode->enable_scatter == 1)
> +		rxq_conf->offloads |= DEV_RX_OFFLOAD_SCATTER;
> +	if (rxmode->enable_lro == 1)
> +		rxq_conf->offloads |= DEV_RX_OFFLOAD_TCP_LRO;
> +}
> +

Konstantin

^ permalink raw reply	[flat|nested] 134+ messages in thread

* Re: [dpdk-dev] [PATCH 4/4] ethdev: add helpers to move to the new offloads API
  2017-09-04 13:25   ` Ananyev, Konstantin
@ 2017-09-04 13:53     ` Thomas Monjalon
  2017-09-04 14:18       ` Ananyev, Konstantin
  2017-09-04 14:02     ` Shahaf Shuler
  1 sibling, 1 reply; 134+ messages in thread
From: Thomas Monjalon @ 2017-09-04 13:53 UTC (permalink / raw)
  To: Ananyev, Konstantin; +Cc: Shahaf Shuler, dev

04/09/2017 15:25, Ananyev, Konstantin:
> Hi Shahaf,
> 
> > +/**
> > + * A conversion function from rxmode offloads API to rte_eth_rxq_conf
> > + * offloads API.
> > + */
> > +static void
> > +rte_eth_convert_rxmode_offloads(struct rte_eth_rxmode *rxmode,
> > +				struct rte_eth_rxq_conf *rxq_conf)
> > +{
> > +	if (rxmode->header_split == 1)
> > +		rxq_conf->offloads |= DEV_RX_OFFLOAD_HEADER_SPLIT;
> > +	if (rxmode->hw_ip_checksum == 1)
> > +		rxq_conf->offloads |= DEV_RX_OFFLOAD_CHECKSUM;
> > +	if (rxmode->hw_vlan_filter == 1)
> > +		rxq_conf->offloads |= DEV_RX_OFFLOAD_VLAN_FILTER;
> 
> Thinking on it a bit more:
> VLAN_FILTER is definitely one per device, as it would affect VFs also.
> At least that's what we have for Intel devices (ixgbe, i40e) right now.
> For Intel devices VLAN_STRIP is also per device and
> will also be  applied to all corresponding VFs.
> In fact, right now it is possible to query/change these 3 vlan offload flags on the fly
> (after dev_start) on  port basis by rte_eth_dev_(get|set)_vlan_offload API.
> So, I think at least these 3 flags need to be remained on a port basis.

I don't understand how it helps to be able to configure the same thing
in 2 places.
I think you are just describing a limitation of these HW: some offloads
must be the same for all queues.
It does not prevent configuring them in the per-queue setup.

> In fact, why can't we have both per port and per queue RX offload:
> - dev_configure() will accept RX_OFFLOAD_* flags and apply them on a port basis.
> - rx_queue_setup() will also accept RX_OFFLOAD_* flags and apply them on a queue basis.
> - if particular RX_OFFLOAD flag for that device couldn't be setup on a queue basis  -
>    rx_queue_setup() will return an error.

The queue setup can work while the value is the same for every queue.

> - rte_eth_rxq_info can be extended to provide information which RX_OFFLOADs
>   can be configured on a per queue basis.

Yes, the PMD should advertise its limitations, such as being forced to
apply the same configuration to all its queues.

> BTW - in that case we probably wouldn't need ignore flag inside rx_conf anymore.

^ permalink raw reply	[flat|nested] 134+ messages in thread

* Re: [dpdk-dev] [PATCH 4/4] ethdev: add helpers to move to the new offloads API
  2017-09-04 13:25   ` Ananyev, Konstantin
  2017-09-04 13:53     ` Thomas Monjalon
@ 2017-09-04 14:02     ` Shahaf Shuler
  2017-09-04 15:55       ` Ananyev, Konstantin
  1 sibling, 1 reply; 134+ messages in thread
From: Shahaf Shuler @ 2017-09-04 14:02 UTC (permalink / raw)
  To: Ananyev, Konstantin, Thomas Monjalon; +Cc: dev

Hi Konstantin,

Monday, September 4, 2017 4:25 PM, Ananyev, Konstantin:
> 
> Hi Shahaf,
> 
> >  }
> >
> > +/**
> > + * A conversion function from rxmode offloads API to rte_eth_rxq_conf
> > + * offloads API.
> > + */
> > +static void
> > +rte_eth_convert_rxmode_offloads(struct rte_eth_rxmode *rxmode,
> > +				struct rte_eth_rxq_conf *rxq_conf) {
> > +	if (rxmode->header_split == 1)
> > +		rxq_conf->offloads |= DEV_RX_OFFLOAD_HEADER_SPLIT;
> > +	if (rxmode->hw_ip_checksum == 1)
> > +		rxq_conf->offloads |= DEV_RX_OFFLOAD_CHECKSUM;
> > +	if (rxmode->hw_vlan_filter == 1)
> > +		rxq_conf->offloads |= DEV_RX_OFFLOAD_VLAN_FILTER;
> 
> Thinking on it a bit more:
> VLAN_FILTER is definitely one per device, as it would affect VFs also.
> At least that's what we have for Intel devices (ixgbe, i40e) right now.

This is vendor specific. For Mellanox this is one per device (regardless of whether it is a VF or PF).

> For Intel devices VLAN_STRIP is also per device and will also be  applied to all
> corresponding VFs.

Again - vendor specific. For Mellanox it is per queue.

> In fact, right now it is possible to query/change these 3 vlan offload flags on
> the fly (after dev_start) on  port basis by
> rte_eth_dev_(get|set)_vlan_offload API.
> So, I think at least these 3 flags need to be remained on a port basis.

I am not sure I agree.
Why, for example, block the application from setting some queues with VLAN strip and some without, if the device allows it?
Also, how will we decide which offloads should stay per port and which are allowed to move per queue? This much depends on the underlying PMD.

Looks like I missed that part in ethdev; if Rx offloads become per queue I will need to change it as well.
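
For example, something along these lines should be possible (a sketch on
top of this series; port_id, nb_rxd, socket_id and mb_pool are assumed to
be set up elsewhere):

	struct rte_eth_dev_info dev_info;
	struct rte_eth_rxq_conf rxq_conf;

	rte_eth_dev_info_get(port_id, &dev_info);
	rxq_conf = dev_info.default_rxconf;

	/* queue 0 with vlan stripping */
	rxq_conf.offloads |= DEV_RX_OFFLOAD_VLAN_STRIP;
	rte_eth_rx_queue_setup(port_id, 0, nb_rxd, socket_id,
			       &rxq_conf, mb_pool);

	/* queue 1 without it */
	rxq_conf.offloads &= ~DEV_RX_OFFLOAD_VLAN_STRIP;
	rte_eth_rx_queue_setup(port_id, 1, nb_rxd, socket_id,
			       &rxq_conf, mb_pool);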


> In fact, why can't we have both per port and per queue RX offload:
> - dev_configure() will accept RX_OFFLOAD_* flags and apply them on a port
> basis.
> - rx_queue_setup() will also accept RX_OFFLOAD_* flags and apply them on
> a queue basis.
> - if particular RX_OFFLOAD flag for that device couldn't be setup on a queue
> basis  -
>    rx_queue_setup() will return an error.

Why not treat the per-port configuration as a sub-case of the per-queue configuration?
For per-port offloads, the queue setup succeeds as long as the same configuration applies to every queue.


> - rte_eth_rxq_info can be extended to provide information which
> RX_OFFLOADs
>   can be configured on a per queue basis.

I am OK with the info suggestion. 

> BTW - in that case we probably wouldn't need ignore flag inside rx_conf
> anymore.
> 
> 
> > +	if (rxmode->hw_vlan_strip == 1)
> > +		rxq_conf->offloads |= DEV_RX_OFFLOAD_VLAN_STRIP;
> > +	if (rxmode->hw_vlan_extend == 1)
> > +		rxq_conf->offloads |= DEV_RX_OFFLOAD_VLAN_EXTEND;
> > +	if (rxmode->jumbo_frame == 1)
> > +		rxq_conf->offloads |= DEV_RX_OFFLOAD_JUMBO_FRAME;
> 
> There are some extra checks for that flag inside rte_eth_dev_configure().
> If we going so support it per queue - then it probably need to be updated.
> 
> > +	if (rxmode->hw_strip_crc == 1)
> > +		rxq_conf->offloads |= DEV_RX_OFFLOAD_CRC_STRIP;
> > +	if (rxmode->enable_scatter == 1)
> > +		rxq_conf->offloads |= DEV_RX_OFFLOAD_SCATTER;
> > +	if (rxmode->enable_lro == 1)
> > +		rxq_conf->offloads |= DEV_RX_OFFLOAD_TCP_LRO; }
> > +
> 
> Konstantin

^ permalink raw reply	[flat|nested] 134+ messages in thread

* Re: [dpdk-dev] [PATCH 4/4] ethdev: add helpers to move to the new offloads API
  2017-09-04 13:53     ` Thomas Monjalon
@ 2017-09-04 14:18       ` Ananyev, Konstantin
  2017-09-05  7:48         ` Thomas Monjalon
  0 siblings, 1 reply; 134+ messages in thread
From: Ananyev, Konstantin @ 2017-09-04 14:18 UTC (permalink / raw)
  To: Thomas Monjalon; +Cc: Shahaf Shuler, dev



> -----Original Message-----
> From: Thomas Monjalon [mailto:thomas@monjalon.net]
> Sent: Monday, September 4, 2017 2:54 PM
> To: Ananyev, Konstantin <konstantin.ananyev@intel.com>
> Cc: Shahaf Shuler <shahafs@mellanox.com>; dev@dpdk.org
> Subject: Re: [dpdk-dev] [PATCH 4/4] ethdev: add helpers to move to the new offloads API
> 
> 04/09/2017 15:25, Ananyev, Konstantin:
> > Hi Shahaf,
> >
> > > +/**
> > > + * A conversion function from rxmode offloads API to rte_eth_rxq_conf
> > > + * offloads API.
> > > + */
> > > +static void
> > > +rte_eth_convert_rxmode_offloads(struct rte_eth_rxmode *rxmode,
> > > +				struct rte_eth_rxq_conf *rxq_conf)
> > > +{
> > > +	if (rxmode->header_split == 1)
> > > +		rxq_conf->offloads |= DEV_RX_OFFLOAD_HEADER_SPLIT;
> > > +	if (rxmode->hw_ip_checksum == 1)
> > > +		rxq_conf->offloads |= DEV_RX_OFFLOAD_CHECKSUM;
> > > +	if (rxmode->hw_vlan_filter == 1)
> > > +		rxq_conf->offloads |= DEV_RX_OFFLOAD_VLAN_FILTER;
> >
> > Thinking on it a bit more:
> > VLAN_FILTER is definitely one per device, as it would affect VFs also.
> > At least that's what we have for Intel devices (ixgbe, i40e) right now.
> > For Intel devices VLAN_STRIP is also per device and
> > will also be  applied to all corresponding VFs.
> > In fact, right now it is possible to query/change these 3 vlan offload flags on the fly
> > (after dev_start) on  port basis by rte_eth_dev_(get|set)_vlan_offload API.
> > So, I think at least these 3 flags need to be remained on a port basis.
> 
> I don't understand how it helps to be able to configure the same thing
> in 2 places.

Because some offloads are per device, others per queue.
Configuring on a device basis would allow most users to configure all
queues in the same manner by default.
Those users who need a more fine-grained (per-queue) setup
will be able to override it via rx_queue_setup().
 
> I think you are just describing a limitation of these HW: some offloads
> must be the same for all queues.

As I said above - on some devices some offloads might also affect queues
that belong to VFs (to other ports, in DPDK terms).
Your app might never invoke rx_queue_setup() for these queues.
But you still want to enable this offload on that device.
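
To illustrate (a sketch only; it assumes the per-port offloads knob
discussed above, and the PF/VF inheritance is specific to ixgbe/i40e-class HW):

	/* hypothetical per-port knob, set on the PF */
	struct rte_eth_conf pf_conf = { 0 };

	pf_conf.rxmode.offloads = DEV_RX_OFFLOAD_VLAN_FILTER;
	rte_eth_dev_configure(pf_port_id, nb_rxq, nb_txq, &pf_conf);
	/* on this class of HW the VLAN filtering now also applies to the
	 * VF queues, even though this process never calls
	 * rx_queue_setup() for them */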

> It does not prevent from configuring them in the per-queue setup.
> 
> > In fact, why can't we have both per port and per queue RX offload:
> > - dev_configure() will accept RX_OFFLOAD_* flags and apply them on a port basis.
> > - rx_queue_setup() will also accept RX_OFFLOAD_* flags and apply them on a queue basis.
> > - if particular RX_OFFLOAD flag for that device couldn't be setup on a queue basis  -
> >    rx_queue_setup() will return an error.
> 
> The queue setup can work while the value is the same for every queues.

Ok, and how would people know that?
That for device N offload X has to be the same for all queues,
while for device M offload X can differ between queues.

Again, if we don't allow enabling/disabling offloads for a particular queue,
why bother updating the rx_queue_setup() API at all?

> 
> > - rte_eth_rxq_info can be extended to provide information which RX_OFFLOADs
> >   can be configured on a per queue basis.
> 
> Yes the PMD should advertise its limitations like being forced to
> apply the same configuration to all its queues.

Didn't get your last sentence.
Konstantin

> 
> > BTW - in that case we probably wouldn't need ignore flag inside rx_conf anymore.

^ permalink raw reply	[flat|nested] 134+ messages in thread

* Re: [dpdk-dev] [PATCH 4/4] ethdev: add helpers to move to the new offloads API
  2017-09-04 14:02     ` Shahaf Shuler
@ 2017-09-04 15:55       ` Ananyev, Konstantin
  0 siblings, 0 replies; 134+ messages in thread
From: Ananyev, Konstantin @ 2017-09-04 15:55 UTC (permalink / raw)
  To: Shahaf Shuler, Thomas Monjalon; +Cc: dev



> -----Original Message-----
> From: Shahaf Shuler [mailto:shahafs@mellanox.com]
> Sent: Monday, September 4, 2017 3:03 PM
> To: Ananyev, Konstantin <konstantin.ananyev@intel.com>; Thomas Monjalon <thomas@monjalon.net>
> Cc: dev@dpdk.org
> Subject: RE: [dpdk-dev] [PATCH 4/4] ethdev: add helpers to move to the new offloads API
> 
> Hi Konstantin,
> 
> Monday, September 4, 2017 4:25 PM, Ananyev, Konstantin:
> >
> > Hi Shahaf,
> >
> > >  }
> > >
> > > +/**
> > > + * A conversion function from rxmode offloads API to rte_eth_rxq_conf
> > > + * offloads API.
> > > + */
> > > +static void
> > > +rte_eth_convert_rxmode_offloads(struct rte_eth_rxmode *rxmode,
> > > +				struct rte_eth_rxq_conf *rxq_conf) {
> > > +	if (rxmode->header_split == 1)
> > > +		rxq_conf->offloads |= DEV_RX_OFFLOAD_HEADER_SPLIT;
> > > +	if (rxmode->hw_ip_checksum == 1)
> > > +		rxq_conf->offloads |= DEV_RX_OFFLOAD_CHECKSUM;
> > > +	if (rxmode->hw_vlan_filter == 1)
> > > +		rxq_conf->offloads |= DEV_RX_OFFLOAD_VLAN_FILTER;
> >
> > Thinking on it a bit more:
> > VLAN_FILTER is definitely one per device, as it would affect VFs also.
> > At least that's what we have for Intel devices (ixgbe, i40e) right now.
> 
> This is vendor specific. For Mellanox this is one per device (regardless if it is a vf/pf).
> 
> > For Intel devices VLAN_STRIP is also per device and will also be  applied to all
> > corresponding VFs.
> 
> Again - vendor specific. For Mellanox is per queue.

Yep, I understand it varies quite a lot from vendor to vendor.
That's why I started to think we need to allow the user to specify rx_offloads
on both a port and a queue basis.

> 
> > In fact, right now it is possible to query/change these 3 vlan offload flags on
> > the fly (after dev_start) on  port basis by
> > rte_eth_dev_(get|set)_vlan_offload API.
> > So, I think at least these 3 flags need to be remained on a port basis.
> 
> Am not sure I agree.
> Why, for example, block from application the option to set some queues with vlan strip and some without if device allows?
> Also how will we decide which offloads should stay per port and which are allowed to move per queue? this much depends on the
> underlying PMD.
> 
> Looks like i missed that part on ethdev, and if Rx offload will be per queue I will need to change it also.
> 
> 
> > In fact, why can't we have both per port and per queue RX offload:
> > - dev_configure() will accept RX_OFFLOAD_* flags and apply them on a port
> > basis.
> > - rx_queue_setup() will also accept RX_OFFLOAD_* flags and apply them on
> > a queue basis.
> > - if particular RX_OFFLOAD flag for that device couldn't be setup on a queue
> > basis  -
> >    rx_queue_setup() will return an error.
> 
> Why not taking the per port configuration as a sub-case of per queue configuration?
> For per-port offloads as long as the same configuration applies the queue setup succeeds.

Do you mean that as long as queue config for offload X matches port config for the same offload - it is ok,
even if for that particular device offload X is per port not per queue?

Let say:
...
rxconf.offloads = DEV_RX_OFFLOAD_VLAN_STRIP;
...
rte_eth_dev_configure(port, ...);
...

/* queue offloads matches port offloads - always  */
rte_eth_rx_queue_setup(port, ..., &rcxonf);
...
rxconf.offloads &= ~ DEV_RX_OFFLOAD_VLAN_STRIP;

/* ok for devices where vlan_strip per queue is supported,
  * fails for devices with vlan_strip offload is per device.
  */ 
rte_eth_rx_queue_setup(port, ..., &rcxonf);

?

Konstantin





 

> 
> 
> > - rte_eth_rxq_info can be extended to provide information which
> > RX_OFFLOADs
> >   can be configured on a per queue basis.
> 
> I am OK with the info suggestion.
> 
> > BTW - in that case we probably wouldn't need ignore flag inside rx_conf
> > anymore.
> >
> >
> > > +	if (rxmode->hw_vlan_strip == 1)
> > > +		rxq_conf->offloads |= DEV_RX_OFFLOAD_VLAN_STRIP;
> > > +	if (rxmode->hw_vlan_extend == 1)
> > > +		rxq_conf->offloads |= DEV_RX_OFFLOAD_VLAN_EXTEND;
> > > +	if (rxmode->jumbo_frame == 1)
> > > +		rxq_conf->offloads |= DEV_RX_OFFLOAD_JUMBO_FRAME;
> >
> > There are some extra checks for that flag inside rte_eth_dev_configure().
> > If we going so support it per queue - then it probably need to be updated.
> >
> > > +	if (rxmode->hw_strip_crc == 1)
> > > +		rxq_conf->offloads |= DEV_RX_OFFLOAD_CRC_STRIP;
> > > +	if (rxmode->enable_scatter == 1)
> > > +		rxq_conf->offloads |= DEV_RX_OFFLOAD_SCATTER;
> > > +	if (rxmode->enable_lro == 1)
> > > +		rxq_conf->offloads |= DEV_RX_OFFLOAD_TCP_LRO; }
> > > +
> >
> > Konstantin

^ permalink raw reply	[flat|nested] 134+ messages in thread

* Re: [dpdk-dev] [PATCH 4/4] ethdev: add helpers to move to the new offloads API
  2017-09-04 14:18       ` Ananyev, Konstantin
@ 2017-09-05  7:48         ` Thomas Monjalon
  2017-09-05  8:09           ` Ananyev, Konstantin
  0 siblings, 1 reply; 134+ messages in thread
From: Thomas Monjalon @ 2017-09-05  7:48 UTC (permalink / raw)
  To: Ananyev, Konstantin; +Cc: Shahaf Shuler, dev

04/09/2017 16:18, Ananyev, Konstantin:
> From: Thomas Monjalon [mailto:thomas@monjalon.net]
> > 04/09/2017 15:25, Ananyev, Konstantin:
> > > Hi Shahaf,
> > >
> > > > +/**
> > > > + * A conversion function from rxmode offloads API to rte_eth_rxq_conf
> > > > + * offloads API.
> > > > + */
> > > > +static void
> > > > +rte_eth_convert_rxmode_offloads(struct rte_eth_rxmode *rxmode,
> > > > +				struct rte_eth_rxq_conf *rxq_conf)
> > > > +{
> > > > +	if (rxmode->header_split == 1)
> > > > +		rxq_conf->offloads |= DEV_RX_OFFLOAD_HEADER_SPLIT;
> > > > +	if (rxmode->hw_ip_checksum == 1)
> > > > +		rxq_conf->offloads |= DEV_RX_OFFLOAD_CHECKSUM;
> > > > +	if (rxmode->hw_vlan_filter == 1)
> > > > +		rxq_conf->offloads |= DEV_RX_OFFLOAD_VLAN_FILTER;
> > >
> > > Thinking on it a bit more:
> > > VLAN_FILTER is definitely one per device, as it would affect VFs also.
> > > At least that's what we have for Intel devices (ixgbe, i40e) right now.
> > > For Intel devices VLAN_STRIP is also per device and
> > > will also be  applied to all corresponding VFs.
> > > In fact, right now it is possible to query/change these 3 vlan offload flags on the fly
> > > (after dev_start) on  port basis by rte_eth_dev_(get|set)_vlan_offload API.
> > > So, I think at least these 3 flags need to be remained on a port basis.
> > 
> > I don't understand how it helps to be able to configure the same thing
> > in 2 places.
> 
> Because some offloads are per device, another - per queue.
> Configuring on a device basis would allow most users to conjure all
> queues in the same manner by default.
> Those users who would  need more fine-grained setup (per queue)
> will be able to overwrite it by rx_queue_setup().

Those users can set the same config for all queues.
>  
> > I think you are just describing a limitation of these HW: some offloads
> > must be the same for all queues.
> 
> As I said above - on some devices some offloads might also affect queues
> that belong to VFs (to another ports in DPDK words).   
> You might never invoke rx_queue_setup() for these queues per your app.
> But you still want to enable this offload on that device.

You are advocating for per-port configuration API because
some settings must be the same on all the ports of your hardware?
So there is a bigger problem: you don't need per-port settings,
but per-hw-device settings.
Or would you accept more fine-grained per-port settings?
If yes, you can accept even finer grained per-queues settings.
> 
> > It does not prevent from configuring them in the per-queue setup.
> > 
> > > In fact, why can't we have both per port and per queue RX offload:
> > > - dev_configure() will accept RX_OFFLOAD_* flags and apply them on a port basis.
> > > - rx_queue_setup() will also accept RX_OFFLOAD_* flags and apply them on a queue basis.
> > > - if particular RX_OFFLOAD flag for that device couldn't be setup on a queue basis  -
> > >    rx_queue_setup() will return an error.
> > 
> > The queue setup can work while the value is the same for every queues.
> 
> Ok, and how people would know that?
> That for device N offload X has to be the same for all queues,
> and for device M offload X can be differs for different queues.

We can know the hardware limitations by filling this information
at PMD init.

> Again, if we don't allow to enable/disable offloads for particular queue,
> why to bother with updating rx_queue_setup() API at all? 

I do not understand this question.

> > > - rte_eth_rxq_info can be extended to provide information which RX_OFFLOADs
> > >   can be configured on a per queue basis.
> > 
> > Yes the PMD should advertise its limitations like being forced to
> > apply the same configuration to all its queues.
> 
> Didn't get your last sentence.

I agree that the hardware limitations must be written in an ethdev structure.
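
Something along these lines could work (a sketch only; the capability
split and the helper below are assumptions, not part of this series):

	/* in struct rte_eth_dev_info (hypothetical fields):
	 *   uint64_t rx_offload_capa;        - every RX offload supported
	 *   uint64_t rx_queue_offload_capa;  - subset configurable per queue
	 */
	static int
	rxq_offloads_valid(uint64_t port_offloads, uint64_t queue_offloads,
			   uint64_t rx_queue_offload_capa)
	{
		/* bits differing from the port config must be
		 * per-queue capable */
		return ((port_offloads ^ queue_offloads) &
			~rx_queue_offload_capa) == 0;
	}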

^ permalink raw reply	[flat|nested] 134+ messages in thread

* Re: [dpdk-dev] [PATCH 4/4] ethdev: add helpers to move to the new offloads API
  2017-09-05  7:48         ` Thomas Monjalon
@ 2017-09-05  8:09           ` Ananyev, Konstantin
  2017-09-05 10:51             ` Shahaf Shuler
  0 siblings, 1 reply; 134+ messages in thread
From: Ananyev, Konstantin @ 2017-09-05  8:09 UTC (permalink / raw)
  To: Thomas Monjalon; +Cc: Shahaf Shuler, dev



> -----Original Message-----
> From: Thomas Monjalon [mailto:thomas@monjalon.net]
> Sent: Tuesday, September 5, 2017 8:48 AM
> To: Ananyev, Konstantin <konstantin.ananyev@intel.com>
> Cc: Shahaf Shuler <shahafs@mellanox.com>; dev@dpdk.org
> Subject: Re: [dpdk-dev] [PATCH 4/4] ethdev: add helpers to move to the new offloads API
> 
> 04/09/2017 16:18, Ananyev, Konstantin:
> > From: Thomas Monjalon [mailto:thomas@monjalon.net]
> > > 04/09/2017 15:25, Ananyev, Konstantin:
> > > > Hi Shahaf,
> > > >
> > > > > +/**
> > > > > + * A conversion function from rxmode offloads API to rte_eth_rxq_conf
> > > > > + * offloads API.
> > > > > + */
> > > > > +static void
> > > > > +rte_eth_convert_rxmode_offloads(struct rte_eth_rxmode *rxmode,
> > > > > +				struct rte_eth_rxq_conf *rxq_conf)
> > > > > +{
> > > > > +	if (rxmode->header_split == 1)
> > > > > +		rxq_conf->offloads |= DEV_RX_OFFLOAD_HEADER_SPLIT;
> > > > > +	if (rxmode->hw_ip_checksum == 1)
> > > > > +		rxq_conf->offloads |= DEV_RX_OFFLOAD_CHECKSUM;
> > > > > +	if (rxmode->hw_vlan_filter == 1)
> > > > > +		rxq_conf->offloads |= DEV_RX_OFFLOAD_VLAN_FILTER;
> > > >
> > > > Thinking on it a bit more:
> > > > VLAN_FILTER is definitely one per device, as it would affect VFs also.
> > > > At least that's what we have for Intel devices (ixgbe, i40e) right now.
> > > > For Intel devices VLAN_STRIP is also per device and
> > > > will also be  applied to all corresponding VFs.
> > > > In fact, right now it is possible to query/change these 3 vlan offload flags on the fly
> > > > (after dev_start) on  port basis by rte_eth_dev_(get|set)_vlan_offload API.
> > > > So, I think at least these 3 flags need to be remained on a port basis.
> > >
> > > I don't understand how it helps to be able to configure the same thing
> > > in 2 places.
> >
> > Because some offloads are per device, another - per queue.
> > Configuring on a device basis would allow most users to conjure all
> > queues in the same manner by default.
> > Those users who would  need more fine-grained setup (per queue)
> > will be able to overwrite it by rx_queue_setup().
> 
> Those users can set the same config for all queues.
> >
> > > I think you are just describing a limitation of these HW: some offloads
> > > must be the same for all queues.
> >
> > As I said above - on some devices some offloads might also affect queues
> > that belong to VFs (to another ports in DPDK words).
> > You might never invoke rx_queue_setup() for these queues per your app.
> > But you still want to enable this offload on that device.

I am ok with having per-port and per-queue offload configuration.
My concern is that after that patch only per-queue offload configuration will remain.
I think we need both.
Konstantin

> 
> You are advocating for per-port configuration API because
> some settings must be the same on all the ports of your hardware?
> So there is a big trouble. You don't need per-port settings,
> but per-hw-device settings.
> Or would you accept more fine-grained per-port settings?
> If yes, you can accept even finer grained per-queues settings.
> >
> > > It does not prevent from configuring them in the per-queue setup.
> > >
> > > > In fact, why can't we have both per port and per queue RX offload:
> > > > - dev_configure() will accept RX_OFFLOAD_* flags and apply them on a port basis.
> > > > - rx_queue_setup() will also accept RX_OFFLOAD_* flags and apply them on a queue basis.
> > > > - if particular RX_OFFLOAD flag for that device couldn't be setup on a queue basis  -
> > > >    rx_queue_setup() will return an error.
> > >
> > > The queue setup can work while the value is the same for every queues.
> >
> > Ok, and how people would know that?
> > That for device N offload X has to be the same for all queues,
> > and for device M offload X can be differs for different queues.
> 
> We can know the hardware limitations by filling this information
> at PMD init.
> 
> > Again, if we don't allow to enable/disable offloads for particular queue,
> > why to bother with updating rx_queue_setup() API at all?
> 
> I do not understand this question.
> 
> > > > - rte_eth_rxq_info can be extended to provide information which RX_OFFLOADs
> > > >   can be configured on a per queue basis.
> > >
> > > Yes the PMD should advertise its limitations like being forced to
> > > apply the same configuration to all its queues.
> >
> > Didn't get your last sentence.
> 
> I agree that the hardware limitations must be written in an ethdev structure.

^ permalink raw reply	[flat|nested] 134+ messages in thread

* Re: [dpdk-dev] [PATCH 4/4] ethdev: add helpers to move to the new offloads API
  2017-09-05  8:09           ` Ananyev, Konstantin
@ 2017-09-05 10:51             ` Shahaf Shuler
  2017-09-05 13:50               ` Thomas Monjalon
  2017-09-05 15:31               ` Ananyev, Konstantin
  0 siblings, 2 replies; 134+ messages in thread
From: Shahaf Shuler @ 2017-09-05 10:51 UTC (permalink / raw)
  To: Ananyev, Konstantin, Thomas Monjalon; +Cc: dev

Tuesday, September 5, 2017 11:10 AM, Ananyev, Konstantin:

> > > > > In fact, right now it is possible to query/change these 3 vlan
> > > > > offload flags on the fly (after dev_start) on  port basis by
> rte_eth_dev_(get|set)_vlan_offload API.

Regarding this API from ethdev.

This seems like a hack on ethdev. Currently there are two ways for the user to set Rx VLAN offloads.
One is through dev_configure, which requires the ports to be stopped. The other is this API, which can set them even while the port is started.

We should have only one place where the application sets offloads, and this is currently dev_configure
(and, in the future, rx_queue_setup).

I would say that this API should be removed as well.
An application which wants to change those offloads will stop the ports and reconfigure the PMD.
I am quite sure that there are PMDs which need to re-create the Rx queue when the VLAN offloads change, and this cannot be done while the traffic flows.
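
For illustration, that reconfiguration flow would look roughly like this
(a sketch; setup_queues() is a hypothetical application helper, and error
handling is omitted):

rte_eth_dev_stop(port_id);
dev_conf.rxmode.hw_vlan_strip = 1;	/* the offload being changed */
rte_eth_dev_configure(port_id, nb_rxq, nb_txq, &dev_conf);
setup_queues(port_id);	/* hypothetical: re-create the Rx/Tx queues */
rte_eth_dev_start(port_id);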


> > > > > So, I think at least these 3 flags need to be remained on a port basis.
> > > >
> > > > I don't understand how it helps to be able to configure the same
> > > > thing in 2 places.
> > >
> > > Because some offloads are per device, another - per queue.
> > > Configuring on a device basis would allow most users to conjure all
> > > queues in the same manner by default.
> > > Those users who would  need more fine-grained setup (per queue) will
> > > be able to overwrite it by rx_queue_setup().
> >
> > Those users can set the same config for all queues.
> > >
> > > > I think you are just describing a limitation of these HW: some
> > > > offloads must be the same for all queues.
> > >
> > > As I said above - on some devices some offloads might also affect
> > > queues that belong to VFs (to another ports in DPDK words).
> > > You might never invoke rx_queue_setup() for these queues per your
> app.
> > > But you still want to enable this offload on that device.
> 
> I am ok with having per-port and per-queue offload configuration.
> My concern is that after that patch only per-queue offload configuration will
> remain.
> I think we need both.

So it looks like we all agree PMDs should report, as part of rte_eth_dev_info_get, which offloads are per-port and which are per-queue.

Regarding the offloads configuration by the application I see 2 options:
1. have an API to set offloads per port as part of device configure, and an API to set offloads per queue as part of queue setup
2. set all offloads as part of queue configuration (per-port offloads will be set equally for all queues). In case of a mixed configuration for per-port offloads the PMD will return an error.
    Such an error can be reported on device start. The PMD will traverse the queues and check for conflicts.

I will focus on the cons, since both achieve the goal:

Cons of #1:
- Two places to configure offloads.
- Like Thomas mentioned - what about offloads per device? This direction leads to more places to configure the offloads.

Cons of #2:
- Late error reporting - on device start and not on queue setup.

I would go with #2.
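
For illustration, option #2 would look roughly like this on the application
side (a sketch; it assumes VLAN filtering is a per-port offload on this
device, and it uses the offloads field proposed by this series):

struct rte_eth_dev_info dev_info;
uint64_t port_offloads = DEV_RX_OFFLOAD_VLAN_FILTER; /* per-port: same on all queues */
uint16_t q;

rte_eth_dev_info_get(port_id, &dev_info);
for (q = 0; q < nb_rxq; q++) {
	struct rte_eth_rxconf rxq_conf = dev_info.default_rxconf;

	rxq_conf.offloads = port_offloads;
	if (q == 0)	/* per-queue offloads may differ between queues */
		rxq_conf.offloads |= DEV_RX_OFFLOAD_SCATTER;
	rte_eth_rx_queue_setup(port_id, q, nb_rxd, socket_id,
			       &rxq_conf, mb_pool);
}
/* a conflict in the per-port offloads would be reported by the PMD, e.g. on start */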

> Konstantin
> 
> >
> > You are advocating for per-port configuration API because some
> > settings must be the same on all the ports of your hardware?
> > So there is a big trouble. You don't need per-port settings, but
> > per-hw-device settings.
> > Or would you accept more fine-grained per-port settings?
> > If yes, you can accept even finer grained per-queues settings.
> > >
> > > > It does not prevent from configuring them in the per-queue setup.
> > > >
> > > > > In fact, why can't we have both per port and per queue RX offload:
> > > > > - dev_configure() will accept RX_OFFLOAD_* flags and apply them on
> a port basis.
> > > > > - rx_queue_setup() will also accept RX_OFFLOAD_* flags and apply
> them on a queue basis.
> > > > > - if particular RX_OFFLOAD flag for that device couldn't be setup on a
> queue basis  -
> > > > >    rx_queue_setup() will return an error.
> > > >
> > > > The queue setup can work while the value is the same for every
> queues.
> > >
> > > Ok, and how people would know that?
> > > That for device N offload X has to be the same for all queues, and
> > > for device M offload X can be differs for different queues.
> >
> > We can know the hardware limitations by filling this information at
> > PMD init.
> >
> > > Again, if we don't allow to enable/disable offloads for particular
> > > queue, why to bother with updating rx_queue_setup() API at all?
> >
> > I do not understand this question.
> >
> > > > > - rte_eth_rxq_info can be extended to provide information which
> RX_OFFLOADs
> > > > >   can be configured on a per queue basis.
> > > >
> > > > Yes the PMD should advertise its limitations like being forced to
> > > > apply the same configuration to all its queues.
> > >
> > > Didn't get your last sentence.
> >
> > I agree that the hardware limitations must be written in an ethdev
> structure.

^ permalink raw reply	[flat|nested] 134+ messages in thread

* Re: [dpdk-dev] [PATCH 4/4] ethdev: add helpers to move to the new offloads API
  2017-09-05 10:51             ` Shahaf Shuler
@ 2017-09-05 13:50               ` Thomas Monjalon
  2017-09-05 15:31               ` Ananyev, Konstantin
  1 sibling, 0 replies; 134+ messages in thread
From: Thomas Monjalon @ 2017-09-05 13:50 UTC (permalink / raw)
  To: Shahaf Shuler; +Cc: Ananyev, Konstantin, dev

05/09/2017 12:51, Shahaf Shuler:
> So looks like we all agree PMDs should report as part of the rte_eth_dev_info_get which offloads are per port and which are per queue.
> 
> Regarding the offloads configuration by application I see 2 options:
> 1. have an API to set offloads per port as part of device configure and API to set offloads per queue as part of queue setup
> 2. set all offloads as part of queue configuration (per port offloads will be set equally for all queues). In case of a mixed configuration for port offloads PMD will return error.
>     Such error can be reported on device start. The PMD will traverse the queues and check for conflicts.
> 
> I will focus on the cons, since both achieve the goal:
> 
> Cons of #1:
> - Two places to configure offloads.
> - Like Thomas mentioned - what about offloads per device? This direction leads to more places to configure the offloads.
> 
> Cons of #2:
> - Late error reporting - on device start and not on queue setup.

Why not report the error on queue setup?

> I would go with #2.

I vote also for #2

^ permalink raw reply	[flat|nested] 134+ messages in thread

* Re: [dpdk-dev] [PATCH 4/4] ethdev: add helpers to move to the new offloads API
  2017-09-05 10:51             ` Shahaf Shuler
  2017-09-05 13:50               ` Thomas Monjalon
@ 2017-09-05 15:31               ` Ananyev, Konstantin
  2017-09-06  6:01                 ` Shahaf Shuler
  1 sibling, 1 reply; 134+ messages in thread
From: Ananyev, Konstantin @ 2017-09-05 15:31 UTC (permalink / raw)
  To: Shahaf Shuler, Thomas Monjalon; +Cc: dev




> 
> > > > > > In fact, right now it is possible to query/change these 3 vlan
> > > > > > offload flags on the fly (after dev_start) on  port basis by
> > rte_eth_dev_(get|set)_vlan_offload API.
> 
> Regarding this API from ethdev.
> 
> So this seems like a hack on ethdev. Currently there are 2 ways for user to set Rx vlan offloads.
> One is through dev_configure which require the ports to be stopped. The other is this API which can set even if the port is started.

Yes, there is an ability to enable/disable VLAN offloads without stopping/reconfiguring the device.
Though I wouldn't call it 'a hack'.
From my perspective it is a useful feature,
same as it is possible in some cases to change the MTU without stopping the device, etc.
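
For example, with the existing API an application can already do this at
runtime (a sketch, no port stop involved):

int mask = rte_eth_dev_get_vlan_offload(port_id);

mask |= ETH_VLAN_STRIP_OFFLOAD;	/* enable VLAN stripping on the fly */
rte_eth_dev_set_vlan_offload(port_id, mask);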

> 
> We should have only one place were application set offloads and this is currently on dev_configure,

Hmm, if the HW supports the ability to do things at runtime, why do we have to stop users from using that ability?

> And future to be on rx_queue_setup.
> 
> I would say that this API should be removed as well.
> Application which wants to change those offloads will stop the ports and reconfigure the PMD.

I wouldn't agree - see above.

> Am quite sure that there are PMDs which need to re-create the Rxq based on vlan offloads changing and this cannot be done while the
> traffic flows.

That's an optional API - a PMD can choose whether it wants to support it or not.

> 
> 
> > > > > > So, I think at least these 3 flags need to be remained on a port basis.
> > > > >
> > > > > I don't understand how it helps to be able to configure the same
> > > > > thing in 2 places.
> > > >
> > > > Because some offloads are per device, another - per queue.
> > > > Configuring on a device basis would allow most users to conjure all
> > > > queues in the same manner by default.
> > > > Those users who would  need more fine-grained setup (per queue) will
> > > > be able to overwrite it by rx_queue_setup().
> > >
> > > Those users can set the same config for all queues.
> > > >
> > > > > I think you are just describing a limitation of these HW: some
> > > > > offloads must be the same for all queues.
> > > >
> > > > As I said above - on some devices some offloads might also affect
> > > > queues that belong to VFs (to another ports in DPDK words).
> > > > You might never invoke rx_queue_setup() for these queues per your
> > app.
> > > > But you still want to enable this offload on that device.
> >
> > I am ok with having per-port and per-queue offload configuration.
> > My concern is that after that patch only per-queue offload configuration will
> > remain.
> > I think we need both.
> 
> So looks like we all agree PMDs should report as part of the rte_eth_dev_info_get which offloads are per port and which are per queue.

Yep.

> 
> Regarding the offloads configuration by application I see 2 options:
> 1. have an API to set offloads per port as part of device configure and API to set offloads per queue as part of queue setup
> 2. set all offloads as part of queue configuration (per port offloads will be set equally for all queues). In case of a mixed configuration for
> port offloads PMD will return error.
>     Such error can be reported on device start. The PMD will traverse the queues and check for conflicts.
> 
> I will focus on the cons, since both achieve the goal:
> 
> Cons of #1:
> - Two places to configure offloads.

Yes, but why is that a problem?

> - Like Thomas mentioned - what about offloads per device? This direction leads to more places to configure the offloads.

As you said above - there would be 2 places: per port and per queue.
Could you explain what other places you are talking about?

> 
> Cons of #2:
> - Late error reporting - on device start and not on queue setup.

Consider a scenario where the PF has corresponding VFs
(the PF is controlled by DPDK).
Right now (at least with Intel HW) it is possible to do:

struct rte_eth_conf dev_conf;

memset(&dev_conf, 0, sizeof(dev_conf));
dev_conf.rxmode.hw_vlan_filter = 1;
...
rte_eth_dev_configure(pf_port_id, 0, 0, &dev_conf);
rte_eth_dev_start(pf_port_id);

In that scenario I don't have any RX/TX queues configured.
Though I am still able to enable the VLAN filter, and it would work correctly for the VFs.
Same for other per-port offloads.
With approach #2 it simply wouldn't work.

So my preference is still #1.

Konstantin

> 
> I would go with #2.
> 
> > Konstantin
> >
> > >
> > > You are advocating for per-port configuration API because some
> > > settings must be the same on all the ports of your hardware?
> > > So there is a big trouble. You don't need per-port settings, but
> > > per-hw-device settings.
> > > Or would you accept more fine-grained per-port settings?
> > > If yes, you can accept even finer grained per-queues settings.
> > > >
> > > > > It does not prevent from configuring them in the per-queue setup.
> > > > >
> > > > > > In fact, why can't we have both per port and per queue RX offload:
> > > > > > - dev_configure() will accept RX_OFFLOAD_* flags and apply them on
> > a port basis.
> > > > > > - rx_queue_setup() will also accept RX_OFFLOAD_* flags and apply
> > them on a queue basis.
> > > > > > - if particular RX_OFFLOAD flag for that device couldn't be setup on a
> > queue basis  -
> > > > > >    rx_queue_setup() will return an error.
> > > > >
> > > > > The queue setup can work while the value is the same for every
> > queues.
> > > >
> > > > Ok, and how people would know that?
> > > > That for device N offload X has to be the same for all queues, and
> > > > for device M offload X can be differs for different queues.
> > >
> > > We can know the hardware limitations by filling this information at
> > > PMD init.
> > >
> > > > Again, if we don't allow to enable/disable offloads for particular
> > > > queue, why to bother with updating rx_queue_setup() API at all?
> > >
> > > I do not understand this question.
> > >
> > > > > > - rte_eth_rxq_info can be extended to provide information which
> > RX_OFFLOADs
> > > > > >   can be configured on a per queue basis.
> > > > >
> > > > > Yes the PMD should advertise its limitations like being forced to
> > > > > apply the same configuration to all its queues.
> > > >
> > > > Didn't get your last sentence.
> > >
> > > I agree that the hardware limitations must be written in an ethdev
> > structure.

^ permalink raw reply	[flat|nested] 134+ messages in thread

* Re: [dpdk-dev] [PATCH 4/4] ethdev: add helpers to move to the new offloads API
  2017-09-05 15:31               ` Ananyev, Konstantin
@ 2017-09-06  6:01                 ` Shahaf Shuler
  2017-09-06  9:33                   ` Ananyev, Konstantin
  0 siblings, 1 reply; 134+ messages in thread
From: Shahaf Shuler @ 2017-09-06  6:01 UTC (permalink / raw)
  To: Ananyev, Konstantin, Thomas Monjalon; +Cc: dev

Tuesday, September 5, 2017 6:31 PM, Ananyev, Konstantin:
> >
> > > > > > > In fact, right now it is possible to query/change these 3
> > > > > > > vlan offload flags on the fly (after dev_start) on  port
> > > > > > > basis by
> > > rte_eth_dev_(get|set)_vlan_offload API.
> >
> > Regarding this API from ethdev.
> >
> > So this seems like a hack on ethdev. Currently there are 2 ways for user to
> set Rx vlan offloads.
> > One is through dev_configure which require the ports to be stopped. The
> other is this API which can set even if the port is started.
> 
> Yes there is an ability to enable/disable VLAN offloads without
> stop/reconfigure the device.
> Though I wouldn't call it 'a hack'.
> From my perspective - it is a useful feature.
> Same as it is possible in some cases to change MTU without stopping device,
> etc.
> 
> >
> > We should have only one place were application set offloads and this
> > is currently on dev_configure,
> 
> Hmm, if HW supports the ability to do things at runtime why we have to stop
> users from using that ability?
> 
> > And future to be on rx_queue_setup.
> >
> > I would say that this API should be removed as well.
> > Application which wants to change those offloads will stop the ports and
> reconfigure the PMD.
> 
> I wouldn't agree - see above.
> 
> > Am quite sure that there are PMDs which need to re-create the Rxq
> > based on vlan offloads changing and this cannot be done while the traffic
> flows.
> 
> That's an optional API - PMD can choose does it want to support it or not.
> 
> >
> >
> > > > > > > So, I think at least these 3 flags need to be remained on a port
> basis.
> > > > > >
> > > > > > I don't understand how it helps to be able to configure the
> > > > > > same thing in 2 places.
> > > > >
> > > > > Because some offloads are per device, another - per queue.
> > > > > Configuring on a device basis would allow most users to conjure
> > > > > all queues in the same manner by default.
> > > > > Those users who would  need more fine-grained setup (per queue)
> > > > > will be able to overwrite it by rx_queue_setup().
> > > >
> > > > Those users can set the same config for all queues.
> > > > >
> > > > > > I think you are just describing a limitation of these HW: some
> > > > > > offloads must be the same for all queues.
> > > > >
> > > > > As I said above - on some devices some offloads might also
> > > > > affect queues that belong to VFs (to another ports in DPDK words).
> > > > > You might never invoke rx_queue_setup() for these queues per
> > > > > your
> > > app.
> > > > > But you still want to enable this offload on that device.
> > >
> > > I am ok with having per-port and per-queue offload configuration.
> > > My concern is that after that patch only per-queue offload
> > > configuration will remain.
> > > I think we need both.
> >
> > So looks like we all agree PMDs should report as part of the
> rte_eth_dev_info_get which offloads are per port and which are per queue.
> 
> Yep.
> 
> >
> > Regarding the offloads configuration by application I see 2 options:
> > 1. have an API to set offloads per port as part of device configure
> > and API to set offloads per queue as part of queue setup 2. set all
> > offloads as part of queue configuration (per port offloads will be set equally
> for all queues). In case of a mixed configuration for port offloads PMD will
> return error.
> >     Such error can be reported on device start. The PMD will traverse the
> queues and check for conflicts.
> >
> > I will focus on the cons, since both achieve the goal:
> >
> > Cons of #1:
> > - Two places to configure offloads.
> 
> Yes, but why is that a problem?

If we could make the offloads API set the offloads in a single place it would be much cleaner and less error prone.
There would be a single flow which changes the offloads configuration.
Later on, when we want to change/extend it, it will be much simpler, as all modifications can happen in a single place only.

> 
> > - Like Thomas mentioned - what about offloads per device? This direction
> leads to more places to configure the offloads.
> 
> As you said above - there would be 2 places: per port and per queue.
> Could you explain - what other places you are talking about?

In fact, the VLAN filter offload for the PF is a *per-device* offload and not per-port, since the corresponding VF has it just by the fact that the PF set it on dev_configure.
So to be exact, such an offload should be set in a new offload section called "per-device offloads".
Currently you compromise by setting it in the *per-port* offload section, with a proper explanation of the VF limitation on Intel.

> 
> >
> > Cons of #2:
> > - Late error reporting - on device start and not on queue setup.
> 
> Consider scenario when PF has a corresponding VFs (PF is controlled by
> DPDK) Right now (at least with Intel HW) it is possible to:
> 
> struct rte_eth_conf dev_conf;
>  dev_conf. rxmode.hw_vlan_filter = 1;
> ...
> rte_eth_dev_configure(pf_port_id, 0, 0, &dev_conf);
> rte_eth_dev_start(pf_port_id);
> 
> In that scenario I don't have any RX/TX queues configured.
> Though I still able to enable vlan filter, and it would work correctly for VFs.
> Same for other per-port offloads.

For the PF - enabling VLAN filtering without any queues means nothing. The PF can receive no traffic, so what difference does it make whether VLAN filtering is set?
For the VF - I assume it will have queues, therefore VLAN filtering has a meaning for it. However, as I said above, the VF has the VLAN filter because on Intel this is a per-device offload, so this is not a good example.

Which other per-port offloads do you refer to?
I don't understand the meaning of setting per-port offloads without opening any Tx/Rx queues.


> With approach #2 it simply wouldn't work.

Yes, for VLAN filtering it will not work on Intel, and this may be enough to move to suggestion #1.

Thomas?

> 
> So my preference is still #1.
> 
> Konstantin
> 
> >
> > I would go with #2.
> >
> > > Konstantin
> > >
> > > >
> > > > You are advocating for per-port configuration API because some
> > > > settings must be the same on all the ports of your hardware?
> > > > So there is a big trouble. You don't need per-port settings, but
> > > > per-hw-device settings.
> > > > Or would you accept more fine-grained per-port settings?
> > > > If yes, you can accept even finer grained per-queues settings.
> > > > >
> > > > > > It does not prevent from configuring them in the per-queue setup.
> > > > > >
> > > > > > > In fact, why can't we have both per port and per queue RX
> offload:
> > > > > > > - dev_configure() will accept RX_OFFLOAD_* flags and apply
> > > > > > > them on
> > > a port basis.
> > > > > > > - rx_queue_setup() will also accept RX_OFFLOAD_* flags and
> > > > > > > apply
> > > them on a queue basis.
> > > > > > > - if particular RX_OFFLOAD flag for that device couldn't be
> > > > > > > setup on a
> > > queue basis  -
> > > > > > >    rx_queue_setup() will return an error.
> > > > > >
> > > > > > The queue setup can work while the value is the same for every
> > > queues.
> > > > >
> > > > > Ok, and how people would know that?
> > > > > That for device N offload X has to be the same for all queues,
> > > > > and for device M offload X can be differs for different queues.
> > > >
> > > > We can know the hardware limitations by filling this information
> > > > at PMD init.
> > > >
> > > > > Again, if we don't allow to enable/disable offloads for
> > > > > particular queue, why to bother with updating rx_queue_setup() API
> at all?
> > > >
> > > > I do not understand this question.
> > > >
> > > > > > > - rte_eth_rxq_info can be extended to provide information
> > > > > > > which
> > > RX_OFFLOADs
> > > > > > >   can be configured on a per queue basis.
> > > > > >
> > > > > > Yes the PMD should advertise its limitations like being forced
> > > > > > to apply the same configuration to all its queues.
> > > > >
> > > > > Didn't get your last sentence.
> > > >
> > > > I agree that the hardware limitations must be written in an ethdev
> > > structure.

^ permalink raw reply	[flat|nested] 134+ messages in thread

* Re: [dpdk-dev] [PATCH 4/4] ethdev: add helpers to move to the new offloads API
  2017-09-06  6:01                 ` Shahaf Shuler
@ 2017-09-06  9:33                   ` Ananyev, Konstantin
  2017-09-13  9:27                     ` Thomas Monjalon
  0 siblings, 1 reply; 134+ messages in thread
From: Ananyev, Konstantin @ 2017-09-06  9:33 UTC (permalink / raw)
  To: Shahaf Shuler, Thomas Monjalon; +Cc: dev



> -----Original Message-----
> From: Shahaf Shuler [mailto:shahafs@mellanox.com]
> Sent: Wednesday, September 6, 2017 7:02 AM
> To: Ananyev, Konstantin <konstantin.ananyev@intel.com>; Thomas Monjalon <thomas@monjalon.net>
> Cc: dev@dpdk.org
> Subject: RE: [dpdk-dev] [PATCH 4/4] ethdev: add helpers to move to the new offloads API
> 
> Tuesday, September 5, 2017 6:31 PM, Ananyev, Konstantin:
> > >
> > > > > > > > In fact, right now it is possible to query/change these 3
> > > > > > > > vlan offload flags on the fly (after dev_start) on  port
> > > > > > > > basis by
> > > > rte_eth_dev_(get|set)_vlan_offload API.
> > >
> > > Regarding this API from ethdev.
> > >
> > > So this seems like a hack on ethdev. Currently there are 2 ways for user to
> > set Rx vlan offloads.
> > > One is through dev_configure which require the ports to be stopped. The
> > other is this API which can set even if the port is started.
> >
> > Yes there is an ability to enable/disable VLAN offloads without
> > stop/reconfigure the device.
> > Though I wouldn't call it 'a hack'.
> > From my perspective - it is a useful feature.
> > Same as it is possible in some cases to change MTU without stopping device,
> > etc.
> >
> > >
> > > We should have only one place were application set offloads and this
> > > is currently on dev_configure,
> >
> > Hmm, if HW supports the ability to do things at runtime why we have to stop
> > users from using that ability?
> >
> > > And future to be on rx_queue_setup.
> > >
> > > I would say that this API should be removed as well.
> > > Application which wants to change those offloads will stop the ports and
> > reconfigure the PMD.
> >
> > I wouldn't agree - see above.
> >
> > > Am quite sure that there are PMDs which need to re-create the Rxq
> > > based on vlan offloads changing and this cannot be done while the traffic
> > flows.
> >
> > That's an optional API - PMD can choose does it want to support it or not.
> >
> > >
> > >
> > > > > > > > So, I think at least these 3 flags need to be remained on a port
> > basis.
> > > > > > >
> > > > > > > I don't understand how it helps to be able to configure the
> > > > > > > same thing in 2 places.
> > > > > >
> > > > > > Because some offloads are per device, another - per queue.
> > > > > > Configuring on a device basis would allow most users to conjure
> > > > > > all queues in the same manner by default.
> > > > > > Those users who would  need more fine-grained setup (per queue)
> > > > > > will be able to overwrite it by rx_queue_setup().
> > > > >
> > > > > Those users can set the same config for all queues.
> > > > > >
> > > > > > > I think you are just describing a limitation of these HW: some
> > > > > > > offloads must be the same for all queues.
> > > > > >
> > > > > > As I said above - on some devices some offloads might also
> > > > > > affect queues that belong to VFs (to another ports in DPDK words).
> > > > > > You might never invoke rx_queue_setup() for these queues per
> > > > > > your
> > > > app.
> > > > > > But you still want to enable this offload on that device.
> > > >
> > > > I am ok with having per-port and per-queue offload configuration.
> > > > My concern is that after that patch only per-queue offload
> > > > configuration will remain.
> > > > I think we need both.
> > >
> > > So looks like we all agree PMDs should report as part of the
> > rte_eth_dev_info_get which offloads are per port and which are per queue.
> >
> > Yep.
> >
> > >
> > > Regarding the offloads configuration by application I see 2 options:
> > > 1. have an API to set offloads per port as part of device configure
> > > and API to set offloads per queue as part of queue setup 2. set all
> > > offloads as part of queue configuration (per port offloads will be set equally
> > for all queues). In case of a mixed configuration for port offloads PMD will
> > return error.
> > >     Such error can be reported on device start. The PMD will traverse the
> > queues and check for conflicts.
> > >
> > > I will focus on the cons, since both achieve the goal:
> > >
> > > Cons of #1:
> > > - Two places to configure offloads.
> >
> > Yes, but why is that a problem?
> 
> If we could make the offloads API to set the offloads in a single place it would be much cleaner and less error prune.
> There is one flow which change the offloads configuration.
> Later on when we want to change/expend it will be much simpler, as all modification can happen in a single place only.

OK, I understand that intention, but I don't think it would fit all cases.
From my perspective it is not that big a hassle to specify offloads in a per-port and per-queue way.
Again, we still have offloads that can be enabled/disabled without a device/queue stop.

> 
> >
> > > - Like Thomas mentioned - what about offloads per device? This direction
> > leads to more places to configure the offloads.
> >
> > As you said above - there would be 2 places: per port and per queue.
> > Could you explain - what other places you are talking about?
> 
> In fact, the vlan filter offload for PF is a *per device* offload and not per port. Since the corresponding VF has it just by the fact the PF set it
> on dev_configure.

I don't understand why you distinguish per-device and per-port offloads.
As I remember, right now there is a one-to-one mapping between an ethdev and a port id inside DPDK.
All rte_ethdev functions refer to the device through its port id.
We can name them per-device or per-port offloads - whatever you like - it wouldn't change anything.

> So to be exact, such offload should be set on a new offload section called "per device offloads".
> Currently you compromise on setting it in the *per port* offload section, with proper explanation on the VF limitation in intel.
> 
> >
> > >
> > > Cons of #2:
> > > - Late error reporting - on device start and not on queue setup.
> >
> > Consider scenario when PF has a corresponding VFs (PF is controlled by
> > DPDK) Right now (at least with Intel HW) it is possible to:
> >
> > struct rte_eth_conf dev_conf;
> >  dev_conf. rxmode.hw_vlan_filter = 1;
> > ...
> > rte_eth_dev_configure(pf_port_id, 0, 0, &dev_conf);
> > rte_eth_dev_start(pf_port_id);
> >
> > In that scenario I don't have any RX/TX queues configured.
> > Though I still able to enable vlan filter, and it would work correctly for VFs.
> > Same for other per-port offloads.
> 
> For the PF - enabling vlan filtering without any queues means nothing. The PF can receive no traffic, what different does it makes the vlan
> filtering is set?
> For the VF - I assume it will have queues, therefore for it vlan filtering has a meaning. However as I said above, the VF has the vlan filter
> because in intel this is per-device offload, so this is not a good example.

Yes, it is a per-device offload, and right now it is possible to enable/disable it via
dev_configure(); dev_start();
without configuring/starting any RX/TX queues.
That's an ability I'd like to preserve.
So from my perspective it is a perfectly valid example.
Konstantin

> 
> Which other per-port offloads you refer to?
> I don't understand what is the meaning of setting per-port offloads without opening any Tx/Rx queues.
> 
> 
> > With approach #2 it simply wouldn't work.
> 
> Yes for vlan filtering it will not work on intel, and this may be enough to move to suggestion #1.
> 
> Thomas?
> 
> >
> > So my preference is still #1.
> >
> > Konstantin
> >
> > >
> > > I would go with #2.
> > >
> > > > Konstantin
> > > >
> > > > >
> > > > > You are advocating for per-port configuration API because some
> > > > > settings must be the same on all the ports of your hardware?
> > > > > So there is a big trouble. You don't need per-port settings, but
> > > > > per-hw-device settings.
> > > > > Or would you accept more fine-grained per-port settings?
> > > > > If yes, you can accept even finer grained per-queues settings.
> > > > > >
> > > > > > > It does not prevent from configuring them in the per-queue setup.
> > > > > > >
> > > > > > > > In fact, why can't we have both per port and per queue RX
> > offload:
> > > > > > > > - dev_configure() will accept RX_OFFLOAD_* flags and apply
> > > > > > > > them on
> > > > a port basis.
> > > > > > > > - rx_queue_setup() will also accept RX_OFFLOAD_* flags and
> > > > > > > > apply
> > > > them on a queue basis.
> > > > > > > > - if particular RX_OFFLOAD flag for that device couldn't be
> > > > > > > > setup on a
> > > > queue basis  -
> > > > > > > >    rx_queue_setup() will return an error.
> > > > > > >
> > > > > > > The queue setup can work while the value is the same for every
> > > > queues.
> > > > > >
> > > > > > Ok, and how people would know that?
> > > > > > That for device N offload X has to be the same for all queues,
> > > > > > and for device M offload X can be differs for different queues.
> > > > >
> > > > > We can know the hardware limitations by filling this information
> > > > > at PMD init.
> > > > >
> > > > > > Again, if we don't allow to enable/disable offloads for
> > > > > > particular queue, why to bother with updating rx_queue_setup() API
> > at all?
> > > > >
> > > > > I do not understand this question.
> > > > >
> > > > > > > > - rte_eth_rxq_info can be extended to provide information
> > > > > > > > which
> > > > RX_OFFLOADs
> > > > > > > >   can be configured on a per queue basis.
> > > > > > >
> > > > > > > Yes the PMD should advertise its limitations like being forced
> > > > > > > to apply the same configuration to all its queues.
> > > > > >
> > > > > > Didn't get your last sentence.
> > > > >
> > > > > I agree that the hardware limitations must be written in an ethdev
> > > > structure.

^ permalink raw reply	[flat|nested] 134+ messages in thread

* [dpdk-dev] [PATCH v2 0/2] ethdev new offloads API
  2017-09-04  7:12 [dpdk-dev] [PATCH 0/4] ethdev new offloads API Shahaf Shuler
                   ` (3 preceding siblings ...)
  2017-09-04  7:12 ` [dpdk-dev] [PATCH 4/4] ethdev: add helpers to move to the new " Shahaf Shuler
@ 2017-09-10 12:07 ` Shahaf Shuler
  2017-09-10 12:07   ` [dpdk-dev] [PATCH v2 1/2] ethdev: introduce Rx queue " Shahaf Shuler
                     ` (2 more replies)
  4 siblings, 3 replies; 134+ messages in thread
From: Shahaf Shuler @ 2017-09-10 12:07 UTC (permalink / raw)
  To: thomas; +Cc: dev

Tx offloads configuration is per queue. Tx offloads are enabled by default, 
and can be disabled using ETH_TXQ_FLAGS_NO* flags. 
This behaviour is not consistent with the Rx side where the Rx offloads
configuration is per port. Rx offloads are disabled by default and enabled 
according to bit field in rte_eth_rxmode structure.

Moreover, considering more Tx and Rx offloads will be added 
over time, the cost of managing them all inside the PMD will be tremendous,
as the PMD will need to check the matching for the entire offload set 
for each mbuf it handles.
In addition, on the current approach each Rx offload added breaks the
ABI compatibility as it requires to add entries to existing bit-fields.
 
The series address above issues by defining a new offloads API.
With the new API, Tx and Rx offloads configuration is per queue.
The offloads are disabled by default. Each offload can be enabled or
disabled using the existing DEV_TX_OFFLOADS_* or DEV_RX_OFFLOADS_* flags.
Such API will enable to easily add or remove offloads, without breaking the
ABI compatibility.

The new API does not have an equivalent for the below Tx flags:

* ETH_TXQ_FLAGS_NOREFCOUNT
* ETH_TXQ_FLAGS_NOMULTMEMP

The reason is that those flags are not to manage offloads, rather some
guarantee from application on the way it uses mbufs, therefore could not be
present as part of DEV_TX_OFFLOADS_*.
Such flags are useful only for benchmarks, and therefore provide a non-realistic    
performance for DPDK customers using simple benchmarks for evaluation.
Leveraging the work being done in this series to clean up those flags.

In order to provide a smooth transition between the APIs the following actions
were taken:
*  The old offloads API is kept for the meanwhile.
*  New capabilities were added for PMD to advertise it has moved to the new
   offloads API.
*  Helper function which copy from old to new API were added to ethdev,
   enabling the PMD to support only one of the APIs.

Per discussion made on the RFC of this series [1], the integration plan which was
decided is to do the transition in two phases:
* ethdev API will move on 17.11.
* Apps and examples will move on 18.02.

This to enable PMD maintainers sufficient time to adopt the new API.

[1]
http://dpdk.org/ml/archives/dev/2017-August/072643.html

on v2:
 - Take a new approach of dividing offloads into per-queue and per-port ones.
 - Postpone the Tx/Rx public struct renaming to 18.02.
 - Squash the helper functions into the Rx/Tx offloads intro patches.

Shahaf Shuler (2):
  ethdev: introduce Rx queue offloads API
  ethdev: introduce Tx queue offloads API

 doc/guides/nics/features.rst  |  27 +++--
 lib/librte_ether/rte_ethdev.c | 215 ++++++++++++++++++++++++++++++++++---
 lib/librte_ether/rte_ethdev.h |  84 ++++++++++++++-
 3 files changed, 301 insertions(+), 25 deletions(-)

-- 
2.12.0

^ permalink raw reply	[flat|nested] 134+ messages in thread

* [dpdk-dev] [PATCH v2 1/2] ethdev: introduce Rx queue offloads API
  2017-09-10 12:07 ` [dpdk-dev] [PATCH v2 0/2] ethdev " Shahaf Shuler
@ 2017-09-10 12:07   ` Shahaf Shuler
  2017-09-10 12:07   ` [dpdk-dev] [PATCH v2 2/2] ethdev: introduce Tx " Shahaf Shuler
  2017-09-13  6:37   ` [dpdk-dev] [PATCH v3 0/2] ethdev new " Shahaf Shuler
  2 siblings, 0 replies; 134+ messages in thread
From: Shahaf Shuler @ 2017-09-10 12:07 UTC (permalink / raw)
  To: thomas; +Cc: dev

Introduce a new API to configure Rx offloads.

In the new API, offloads are divided into per-port and per-queue
offloads. The PMD reports capability for each of them.
Offloads are enabled using the existing DEV_RX_OFFLOAD_* flags.
To enable per-port offload, the offload should be set on both device
configuration and queue configuration. To enable per-queue offload, the
offloads can be set only on queue configuration.

Applications should set the ignore_offload_bitfield bit on rxmode
structure in order to move to the new API.

The old Rx offloads API is kept for the time being, in order to enable a
smooth transition for PMDs and applications to the new API.
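
For reference, a minimal usage sketch of the new API (illustrative only; it
assumes port_id and mb_pool are already set up and skips error handling):

struct rte_eth_conf dev_conf = { 0 };
struct rte_eth_rxconf rxq_conf;
struct rte_eth_dev_info dev_info;

rte_eth_dev_info_get(port_id, &dev_info);
dev_conf.rxmode.ignore_offload_bitfield = 1;	/* opt in to the new API */
dev_conf.rxmode.offloads = DEV_RX_OFFLOAD_VLAN_FILTER;	/* per-port offload */
rte_eth_dev_configure(port_id, 1, 1, &dev_conf);

rxq_conf = dev_info.default_rxconf;
/* per-port offloads must be repeated here; per-queue ones are added on top */
rxq_conf.offloads = dev_conf.rxmode.offloads | DEV_RX_OFFLOAD_SCATTER;
rte_eth_rx_queue_setup(port_id, 0, 512, rte_socket_id(), &rxq_conf, mb_pool);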

Signed-off-by: Shahaf Shuler <shahafs@mellanox.com>
---
 doc/guides/nics/features.rst  |  19 +++--
 lib/librte_ether/rte_ethdev.c | 156 +++++++++++++++++++++++++++++++++----
 lib/librte_ether/rte_ethdev.h |  52 ++++++++++++-
 3 files changed, 204 insertions(+), 23 deletions(-)

diff --git a/doc/guides/nics/features.rst b/doc/guides/nics/features.rst
index 37ffbc68c..f2c8497c2 100644
--- a/doc/guides/nics/features.rst
+++ b/doc/guides/nics/features.rst
@@ -179,7 +179,7 @@ Jumbo frame
 
 Supports Rx jumbo frames.
 
-* **[uses]    user config**: ``dev_conf.rxmode.jumbo_frame``,
+* **[uses]    rte_eth_rxq_conf**: ``offloads:DEV_RX_OFFLOAD_JUMBO_FRAME``.
   ``dev_conf.rxmode.max_rx_pkt_len``.
 * **[related] rte_eth_dev_info**: ``max_rx_pktlen``.
 * **[related] API**: ``rte_eth_dev_set_mtu()``.
@@ -192,7 +192,7 @@ Scattered Rx
 
 Supports receiving segmented mbufs.
 
-* **[uses]       user config**: ``dev_conf.rxmode.enable_scatter``.
+* **[uses]       rte_eth_rxq_conf**: ``offloads:DEV_RX_OFFLOAD_SCATTER``.
 * **[implements] datapath**: ``Scattered Rx function``.
 * **[implements] rte_eth_dev_data**: ``scattered_rx``.
 * **[provides]   eth_dev_ops**: ``rxq_info_get:scattered_rx``.
@@ -206,7 +206,7 @@ LRO
 
 Supports Large Receive Offload.
 
-* **[uses]       user config**: ``dev_conf.rxmode.enable_lro``.
+* **[uses]       rte_eth_rxq_conf**: ``offloads:DEV_RX_OFFLOAD_TCP_LRO``.
 * **[implements] datapath**: ``LRO functionality``.
 * **[implements] rte_eth_dev_data**: ``lro``.
 * **[provides]   mbuf**: ``mbuf.ol_flags:PKT_RX_LRO``, ``mbuf.tso_segsz``.
@@ -363,7 +363,7 @@ VLAN filter
 
 Supports filtering of a VLAN Tag identifier.
 
-* **[uses]       user config**: ``dev_conf.rxmode.hw_vlan_filter``.
+* **[uses]       rte_eth_rxq_conf**: ``offloads:DEV_RX_OFFLOAD_VLAN_FILTER``.
 * **[implements] eth_dev_ops**: ``vlan_filter_set``.
 * **[related]    API**: ``rte_eth_dev_vlan_filter()``.
 
@@ -499,7 +499,7 @@ CRC offload
 
 Supports CRC stripping by hardware.
 
-* **[uses] user config**: ``dev_conf.rxmode.hw_strip_crc``.
+* **[uses] rte_eth_rxq_conf**: ``offloads:DEV_RX_OFFLOAD_CRC_STRIP``.
 
 
 .. _nic_features_vlan_offload:
@@ -509,8 +509,7 @@ VLAN offload
 
 Supports VLAN offload to hardware.
 
-* **[uses]       user config**: ``dev_conf.rxmode.hw_vlan_strip``,
-  ``dev_conf.rxmode.hw_vlan_filter``, ``dev_conf.rxmode.hw_vlan_extend``.
+* **[uses]       rte_eth_rxq_conf**: ``offloads:DEV_RX_OFFLOAD_VLAN_STRIP,DEV_RX_OFFLOAD_VLAN_FILTER,DEV_RX_OFFLOAD_VLAN_EXTEND``.
 * **[implements] eth_dev_ops**: ``vlan_offload_set``.
 * **[provides]   mbuf**: ``mbuf.ol_flags:PKT_RX_VLAN_STRIPPED``, ``mbuf.vlan_tci``.
 * **[provides]   rte_eth_dev_info**: ``rx_offload_capa:DEV_RX_OFFLOAD_VLAN_STRIP``,
@@ -526,6 +525,7 @@ QinQ offload
 
 Supports QinQ (queue in queue) offload.
 
+* **[uses]     rte_eth_rxq_conf**: ``offloads:DEV_RX_OFFLOAD_QINQ_STRIP``.
 * **[uses]     mbuf**: ``mbuf.ol_flags:PKT_TX_QINQ_PKT``.
 * **[provides] mbuf**: ``mbuf.ol_flags:PKT_RX_QINQ_STRIPPED``, ``mbuf.vlan_tci``,
    ``mbuf.vlan_tci_outer``.
@@ -540,7 +540,7 @@ L3 checksum offload
 
 Supports L3 checksum offload.
 
-* **[uses]     user config**: ``dev_conf.rxmode.hw_ip_checksum``.
+* **[uses]     rte_eth_rxq_conf**: ``offloads:DEV_RX_OFFLOAD_IPV4_CKSUM``.
 * **[uses]     mbuf**: ``mbuf.ol_flags:PKT_TX_IP_CKSUM``,
   ``mbuf.ol_flags:PKT_TX_IPV4`` | ``PKT_TX_IPV6``.
 * **[provides] mbuf**: ``mbuf.ol_flags:PKT_RX_IP_CKSUM_UNKNOWN`` |
@@ -557,6 +557,7 @@ L4 checksum offload
 
 Supports L4 checksum offload.
 
+* **[uses]     rte_eth_rxq_conf**: ``offloads:DEV_RX_OFFLOAD_UDP_CKSUM,DEV_RX_OFFLOAD_TCP_CKSUM``.
 * **[uses]     mbuf**: ``mbuf.ol_flags:PKT_TX_IPV4`` | ``PKT_TX_IPV6``,
   ``mbuf.ol_flags:PKT_TX_L4_NO_CKSUM`` | ``PKT_TX_TCP_CKSUM`` |
   ``PKT_TX_SCTP_CKSUM`` | ``PKT_TX_UDP_CKSUM``.
@@ -574,6 +575,7 @@ MACsec offload
 
 Supports MACsec.
 
+* **[uses]     rte_eth_rxq_conf**: ``offloads:DEV_RX_OFFLOAD_MACSEC_STRIP``.
 * **[uses]     mbuf**: ``mbuf.ol_flags:PKT_TX_MACSEC``.
 * **[provides] rte_eth_dev_info**: ``rx_offload_capa:DEV_RX_OFFLOAD_MACSEC_STRIP``,
   ``tx_offload_capa:DEV_TX_OFFLOAD_MACSEC_INSERT``.
@@ -586,6 +588,7 @@ Inner L3 checksum
 
 Supports inner packet L3 checksum.
 
+* **[uses]     rte_eth_rxq_conf**: ``offloads:DEV_RX_OFFLOAD_OUTER_IPV4_CKSUM``.
 * **[uses]     mbuf**: ``mbuf.ol_flags:PKT_TX_IP_CKSUM``,
   ``mbuf.ol_flags:PKT_TX_IPV4`` | ``PKT_TX_IPV6``,
   ``mbuf.ol_flags:PKT_TX_OUTER_IP_CKSUM``,
diff --git a/lib/librte_ether/rte_ethdev.c b/lib/librte_ether/rte_ethdev.c
index 0597641ee..b3c10701e 100644
--- a/lib/librte_ether/rte_ethdev.c
+++ b/lib/librte_ether/rte_ethdev.c
@@ -687,12 +687,90 @@ rte_eth_speed_bitflag(uint32_t speed, int duplex)
 	}
 }
 
+/**
+ * A conversion function from rxmode bitfield API.
+ */
+static void
+rte_eth_convert_rx_offload_bitfield(const struct rte_eth_rxmode *rxmode,
+				    uint64_t *rx_offloads)
+{
+	uint64_t offloads = 0;
+
+	if (rxmode->header_split == 1)
+		offloads |= DEV_RX_OFFLOAD_HEADER_SPLIT;
+	if (rxmode->hw_ip_checksum == 1)
+		offloads |= DEV_RX_OFFLOAD_CHECKSUM;
+	if (rxmode->hw_vlan_filter == 1)
+		offloads |= DEV_RX_OFFLOAD_VLAN_FILTER;
+	if (rxmode->hw_vlan_strip == 1)
+		offloads |= DEV_RX_OFFLOAD_VLAN_STRIP;
+	if (rxmode->hw_vlan_extend == 1)
+		offloads |= DEV_RX_OFFLOAD_VLAN_EXTEND;
+	if (rxmode->jumbo_frame == 1)
+		offloads |= DEV_RX_OFFLOAD_JUMBO_FRAME;
+	if (rxmode->hw_strip_crc == 1)
+		offloads |= DEV_RX_OFFLOAD_CRC_STRIP;
+	if (rxmode->enable_scatter == 1)
+		offloads |= DEV_RX_OFFLOAD_SCATTER;
+	if (rxmode->enable_lro == 1)
+		offloads |= DEV_RX_OFFLOAD_TCP_LRO;
+
+	*rx_offloads = offloads;
+}
+
+/**
+ * A conversion function from rxmode offloads API.
+ */
+static void
+rte_eth_convert_rx_offloads(const uint64_t rx_offloads,
+			    struct rte_eth_rxmode *rxmode)
+{
+
+	if (rx_offloads & DEV_RX_OFFLOAD_HEADER_SPLIT)
+		rxmode->header_split = 1;
+	else
+		rxmode->header_split = 0;
+	if (rx_offloads & DEV_RX_OFFLOAD_CHECKSUM)
+		rxmode->hw_ip_checksum = 1;
+	else
+		rxmode->hw_ip_checksum = 0;
+	if (rx_offloads & DEV_RX_OFFLOAD_VLAN_FILTER)
+		rxmode->hw_vlan_filter = 1;
+	else
+		rxmode->hw_vlan_filter = 0;
+	if (rx_offloads & DEV_RX_OFFLOAD_VLAN_STRIP)
+		rxmode->hw_vlan_strip = 1;
+	else
+		rxmode->hw_vlan_strip = 0;
+	if (rx_offloads & DEV_RX_OFFLOAD_VLAN_EXTEND)
+		rxmode->hw_vlan_extend = 1;
+	else
+		rxmode->hw_vlan_extend = 0;
+	if (rx_offloads & DEV_RX_OFFLOAD_JUMBO_FRAME)
+		rxmode->jumbo_frame = 1;
+	else
+		rxmode->jumbo_frame = 0;
+	if (rx_offloads & DEV_RX_OFFLOAD_CRC_STRIP)
+		rxmode->hw_strip_crc = 1;
+	else
+		rxmode->hw_strip_crc = 0;
+	if (rx_offloads & DEV_RX_OFFLOAD_SCATTER)
+		rxmode->enable_scatter = 1;
+	else
+		rxmode->enable_scatter = 0;
+	if (rx_offloads & DEV_RX_OFFLOAD_TCP_LRO)
+		rxmode->enable_lro = 1;
+	else
+		rxmode->enable_lro = 0;
+}
+
 int
 rte_eth_dev_configure(uint8_t port_id, uint16_t nb_rx_q, uint16_t nb_tx_q,
 		      const struct rte_eth_conf *dev_conf)
 {
 	struct rte_eth_dev *dev;
 	struct rte_eth_dev_info dev_info;
+	struct rte_eth_conf local_conf = *dev_conf;
 	int diag;
 
 	RTE_ETH_VALID_PORTID_OR_ERR_RET(port_id, -EINVAL);
@@ -722,8 +800,20 @@ rte_eth_dev_configure(uint8_t port_id, uint16_t nb_rx_q, uint16_t nb_tx_q,
 		return -EBUSY;
 	}
 
+	/*
+	 * Convert between the offloads API to enable PMDs to support
+	 * only one of them.
+	 */
+	if ((dev_conf->rxmode.ignore_offload_bitfield == 0)) {
+		rte_eth_convert_rx_offload_bitfield(
+				&dev_conf->rxmode, &local_conf.rxmode.offloads);
+	} else {
+		rte_eth_convert_rx_offloads(dev_conf->rxmode.offloads,
+					    &local_conf.rxmode);
+	}
+
 	/* Copy the dev_conf parameter into the dev structure */
-	memcpy(&dev->data->dev_conf, dev_conf, sizeof(dev->data->dev_conf));
+	memcpy(&dev->data->dev_conf, &local_conf, sizeof(dev->data->dev_conf));
 
 	/*
 	 * Check that the numbers of RX and TX queues are not greater
@@ -767,7 +857,7 @@ rte_eth_dev_configure(uint8_t port_id, uint16_t nb_rx_q, uint16_t nb_tx_q,
 	 * If jumbo frames are enabled, check that the maximum RX packet
 	 * length is supported by the configured device.
 	 */
-	if (dev_conf->rxmode.jumbo_frame == 1) {
+	if (local_conf.rxmode.offloads & DEV_RX_OFFLOAD_JUMBO_FRAME) {
 		if (dev_conf->rxmode.max_rx_pkt_len >
 		    dev_info.max_rx_pktlen) {
 			RTE_PMD_DEBUG_TRACE("ethdev port_id=%d max_rx_pkt_len %u"
@@ -1004,6 +1094,7 @@ rte_eth_rx_queue_setup(uint8_t port_id, uint16_t rx_queue_id,
 	uint32_t mbp_buf_size;
 	struct rte_eth_dev *dev;
 	struct rte_eth_dev_info dev_info;
+	struct rte_eth_rxconf local_conf;
 	void **rxq;
 
 	RTE_ETH_VALID_PORTID_OR_ERR_RET(port_id, -EINVAL);
@@ -1074,8 +1165,18 @@ rte_eth_rx_queue_setup(uint8_t port_id, uint16_t rx_queue_id,
 	if (rx_conf == NULL)
 		rx_conf = &dev_info.default_rxconf;
 
+	local_conf = *rx_conf;
+	if (dev->data->dev_conf.rxmode.ignore_offload_bitfield == 0) {
+		/**
+		 * Reflect port offloads to queue offloads in order for
+		 * offloads to not be discarded.
+		 */
+		rte_eth_convert_rx_offload_bitfield(&dev->data->dev_conf.rxmode,
+						    &local_conf.offloads);
+	}
+
 	ret = (*dev->dev_ops->rx_queue_setup)(dev, rx_queue_id, nb_rx_desc,
-					      socket_id, rx_conf, mp);
+					      socket_id, &local_conf, mp);
 	if (!ret) {
 		if (!dev->data->min_rx_buf_size ||
 		    dev->data->min_rx_buf_size > mbp_buf_size)
@@ -1979,7 +2080,8 @@ rte_eth_dev_vlan_filter(uint8_t port_id, uint16_t vlan_id, int on)
 
 	RTE_ETH_VALID_PORTID_OR_ERR_RET(port_id, -ENODEV);
 	dev = &rte_eth_devices[port_id];
-	if (!(dev->data->dev_conf.rxmode.hw_vlan_filter)) {
+	if (!(dev->data->dev_conf.rxmode.offloads &
+	      DEV_RX_OFFLOAD_VLAN_FILTER)) {
 		RTE_PMD_DEBUG_TRACE("port %d: vlan-filtering disabled\n", port_id);
 		return -ENOSYS;
 	}
@@ -2055,23 +2157,41 @@ rte_eth_dev_set_vlan_offload(uint8_t port_id, int offload_mask)
 
 	/*check which option changed by application*/
 	cur = !!(offload_mask & ETH_VLAN_STRIP_OFFLOAD);
-	org = !!(dev->data->dev_conf.rxmode.hw_vlan_strip);
+	org = !!(dev->data->dev_conf.rxmode.offloads &
+		 DEV_RX_OFFLOAD_VLAN_STRIP);
 	if (cur != org) {
-		dev->data->dev_conf.rxmode.hw_vlan_strip = (uint8_t)cur;
+		if (cur)
+			dev->data->dev_conf.rxmode.offloads |=
+				DEV_RX_OFFLOAD_VLAN_STRIP;
+		else
+			dev->data->dev_conf.rxmode.offloads &=
+				~DEV_RX_OFFLOAD_VLAN_STRIP;
 		mask |= ETH_VLAN_STRIP_MASK;
 	}
 
 	cur = !!(offload_mask & ETH_VLAN_FILTER_OFFLOAD);
-	org = !!(dev->data->dev_conf.rxmode.hw_vlan_filter);
+	org = !!(dev->data->dev_conf.rxmode.offloads &
+		 DEV_RX_OFFLOAD_VLAN_FILTER);
 	if (cur != org) {
-		dev->data->dev_conf.rxmode.hw_vlan_filter = (uint8_t)cur;
+		if (cur)
+			dev->data->dev_conf.rxmode.offloads |=
+				DEV_RX_OFFLOAD_VLAN_FILTER;
+		else
+			dev->data->dev_conf.rxmode.offloads &=
+				~DEV_RX_OFFLOAD_VLAN_FILTER;
 		mask |= ETH_VLAN_FILTER_MASK;
 	}
 
 	cur = !!(offload_mask & ETH_VLAN_EXTEND_OFFLOAD);
-	org = !!(dev->data->dev_conf.rxmode.hw_vlan_extend);
+	org = !!(dev->data->dev_conf.rxmode.offloads &
+		 DEV_RX_OFFLOAD_VLAN_EXTEND);
 	if (cur != org) {
-		dev->data->dev_conf.rxmode.hw_vlan_extend = (uint8_t)cur;
+		if (cur)
+			dev->data->dev_conf.rxmode.offloads |=
+				DEV_RX_OFFLOAD_VLAN_EXTEND;
+		else
+			dev->data->dev_conf.rxmode.offloads &=
+				~DEV_RX_OFFLOAD_VLAN_EXTEND;
 		mask |= ETH_VLAN_EXTEND_MASK;
 	}
 
@@ -2080,6 +2200,13 @@ rte_eth_dev_set_vlan_offload(uint8_t port_id, int offload_mask)
 		return ret;
 
 	RTE_FUNC_PTR_OR_ERR_RET(*dev->dev_ops->vlan_offload_set, -ENOTSUP);
+
+	/*
+	 * Convert to the offload bitfield API just in case the underlying PMD
+	 * still supporting it.
+	 */
+	rte_eth_convert_rx_offloads(dev->data->dev_conf.rxmode.offloads,
+				    &dev->data->dev_conf.rxmode);
 	(*dev->dev_ops->vlan_offload_set)(dev, mask);
 
 	return ret;
@@ -2094,13 +2221,16 @@ rte_eth_dev_get_vlan_offload(uint8_t port_id)
 	RTE_ETH_VALID_PORTID_OR_ERR_RET(port_id, -ENODEV);
 	dev = &rte_eth_devices[port_id];
 
-	if (dev->data->dev_conf.rxmode.hw_vlan_strip)
+	if (dev->data->dev_conf.rxmode.offloads &
+	    DEV_RX_OFFLOAD_VLAN_STRIP)
 		ret |= ETH_VLAN_STRIP_OFFLOAD;
 
-	if (dev->data->dev_conf.rxmode.hw_vlan_filter)
+	if (dev->data->dev_conf.rxmode.offloads &
+	    DEV_RX_OFFLOAD_VLAN_FILTER)
 		ret |= ETH_VLAN_FILTER_OFFLOAD;
 
-	if (dev->data->dev_conf.rxmode.hw_vlan_extend)
+	if (dev->data->dev_conf.rxmode.offloads &
+	    DEV_RX_OFFLOAD_VLAN_EXTEND)
 		ret |= ETH_VLAN_EXTEND_OFFLOAD;
 
 	return ret;
diff --git a/lib/librte_ether/rte_ethdev.h b/lib/librte_ether/rte_ethdev.h
index 0adf3274a..f424cba04 100644
--- a/lib/librte_ether/rte_ethdev.h
+++ b/lib/librte_ether/rte_ethdev.h
@@ -348,7 +348,18 @@ struct rte_eth_rxmode {
 	enum rte_eth_rx_mq_mode mq_mode;
 	uint32_t max_rx_pkt_len;  /**< Only used if jumbo_frame enabled. */
 	uint16_t split_hdr_size;  /**< hdr buf size (header_split enabled).*/
+	uint64_t offloads;
+	/**
+	 * Per-port Rx offloads to be set using DEV_RX_OFFLOAD_* flags.
+	 * Only offloads set on rx_offload_capa field on rte_eth_dev_info
+	 * structure are allowed to be set.
+	 */
 	__extension__
+	/**
+	 * Below bitfield API is obsolete. Application should
+	 * enable per-port offloads using the offload field
+	 * above.
+	 */
 	uint16_t header_split : 1, /**< Header Split enable. */
 		hw_ip_checksum   : 1, /**< IP/UDP/TCP checksum offload enable. */
 		hw_vlan_filter   : 1, /**< VLAN filter enable. */
@@ -357,7 +368,17 @@ struct rte_eth_rxmode {
 		jumbo_frame      : 1, /**< Jumbo Frame Receipt enable. */
 		hw_strip_crc     : 1, /**< Enable CRC stripping by hardware. */
 		enable_scatter   : 1, /**< Enable scatter packets rx handler */
-		enable_lro       : 1; /**< Enable LRO */
+		enable_lro       : 1, /**< Enable LRO */
+		ignore_offload_bitfield : 1;
+		/**
+		 * When set the offload bitfield should be ignored.
+		 * Instead per-port Rx offloads should be set on offloads
+		 * field above.
		 * Per-queue offloads should be set on rte_eth_rxq_conf
+		 * structure.
		 * This bit is temporary until the rxmode bitfield offloads API
		 * is deprecated.
+		 */
 };
 
 /**
@@ -691,6 +712,12 @@ struct rte_eth_rxconf {
 	uint16_t rx_free_thresh; /**< Drives the freeing of RX descriptors. */
 	uint8_t rx_drop_en; /**< Drop packets if no descriptors are available. */
 	uint8_t rx_deferred_start; /**< Do not start queue with rte_eth_dev_start(). */
+	uint64_t offloads;
+	/**
+	 * Per-queue Rx offloads to be set using DEV_RX_OFFLOAD_* flags.
+	 * Only offloads set on rx_queue_offload_capa field on rte_eth_dev_info
+	 * structure are allowed to be set.
+	 */
 };
 
 #define ETH_TXQ_FLAGS_NOMULTSEGS 0x0001 /**< nb_segs=1 for all mbufs */
@@ -706,6 +733,7 @@ struct rte_eth_rxconf {
 #define ETH_TXQ_FLAGS_NOXSUMS \
 		(ETH_TXQ_FLAGS_NOXSUMSCTP | ETH_TXQ_FLAGS_NOXSUMUDP | \
 		 ETH_TXQ_FLAGS_NOXSUMTCP)
+
 /**
  * A structure used to configure a TX ring of an Ethernet port.
  */
@@ -907,6 +935,18 @@ struct rte_eth_conf {
 #define DEV_RX_OFFLOAD_QINQ_STRIP  0x00000020
 #define DEV_RX_OFFLOAD_OUTER_IPV4_CKSUM 0x00000040
 #define DEV_RX_OFFLOAD_MACSEC_STRIP     0x00000080
+#define DEV_RX_OFFLOAD_HEADER_SPLIT	0x00000100
+#define DEV_RX_OFFLOAD_VLAN_FILTER	0x00000200
+#define DEV_RX_OFFLOAD_VLAN_EXTEND	0x00000400
+#define DEV_RX_OFFLOAD_JUMBO_FRAME	0x00000800
+#define DEV_RX_OFFLOAD_CRC_STRIP	0x00001000
+#define DEV_RX_OFFLOAD_SCATTER		0x00002000
+#define DEV_RX_OFFLOAD_CHECKSUM (DEV_RX_OFFLOAD_IPV4_CKSUM | \
+				 DEV_RX_OFFLOAD_UDP_CKSUM | \
+				 DEV_RX_OFFLOAD_TCP_CKSUM)
+#define DEV_RX_OFFLOAD_VLAN (DEV_RX_OFFLOAD_VLAN_STRIP | \
+			     DEV_RX_OFFLOAD_VLAN_FILTER | \
+			     DEV_RX_OFFLOAD_VLAN_EXTEND)
 
 /**
  * TX offload capabilities of a device.
@@ -949,8 +989,11 @@ struct rte_eth_dev_info {
 	/** Maximum number of hash MAC addresses for MTA and UTA. */
 	uint16_t max_vfs; /**< Maximum number of VFs. */
 	uint16_t max_vmdq_pools; /**< Maximum number of VMDq pools. */
-	uint32_t rx_offload_capa; /**< Device RX offload capabilities. */
+	uint64_t rx_offload_capa;
+	/**< Device per port RX offload capabilities. */
 	uint32_t tx_offload_capa; /**< Device TX offload capabilities. */
+	uint64_t rx_queue_offload_capa;
+	/**< Device per queue RX offload capabilities. */
 	uint16_t reta_size;
 	/**< Device redirection table size, the total number of entries. */
 	uint8_t hash_key_size; /**< Hash key size in bytes */
@@ -1870,6 +1913,9 @@ uint32_t rte_eth_speed_bitflag(uint32_t speed, int duplex);
  *        each statically configurable offload hardware feature provided by
  *        Ethernet devices, such as IP checksum or VLAN tag stripping for
  *        example.
+ *        The Rx offload bitfield API is obsolete and will be deprecated.
+ *        Applications should set the ignore_offload_bitfield bit in the *rxmode*
+ *        structure and use the offloads field to set per-port offloads instead.
  *     - the Receive Side Scaling (RSS) configuration when using multiple RX
  *         queues per port.
  *
@@ -1923,6 +1969,8 @@ void _rte_eth_dev_reset(struct rte_eth_dev *dev);
  *   The *rx_conf* structure contains an *rx_thresh* structure with the values
  *   of the Prefetch, Host, and Write-Back threshold registers of the receive
  *   ring.
+ *   In addition it contains the hardware offload features to activate using
+ *   the DEV_RX_OFFLOAD_* flags.
  * @param mb_pool
  *   The pointer to the memory pool from which to allocate *rte_mbuf* network
  *   memory buffers to populate each descriptor of the receive ring.
-- 
2.12.0
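
For illustration only (a sketch, not part of the patch above): a port and
queue setup using the new Rx offloads API could look roughly as follows.
The port id, queue counts, ring size, offload selection and mempool are
assumptions.

#include <string.h>
#include <rte_ethdev.h>
#include <rte_mbuf.h>

static int
configure_rx_new_api(uint8_t port_id, struct rte_mempool *mb_pool)
{
	struct rte_eth_conf port_conf;
	struct rte_eth_dev_info dev_info;
	struct rte_eth_rxconf rx_conf;

	memset(&port_conf, 0, sizeof(port_conf));
	/* Opt out of the obsolete bitfield API. */
	port_conf.rxmode.ignore_offload_bitfield = 1;
	/* Per-port offloads, to be validated against rx_offload_capa. */
	port_conf.rxmode.offloads = DEV_RX_OFFLOAD_CHECKSUM |
				    DEV_RX_OFFLOAD_VLAN_STRIP;

	if (rte_eth_dev_configure(port_id, 1, 1, &port_conf) != 0)
		return -1;

	rte_eth_dev_info_get(port_id, &dev_info);
	rx_conf = dev_info.default_rxconf;
	/* A per-queue offload, assuming the PMD advertises it in
	 * rx_queue_offload_capa. */
	rx_conf.offloads = DEV_RX_OFFLOAD_SCATTER;

	return rte_eth_rx_queue_setup(port_id, 0, 512,
				      rte_eth_dev_socket_id(port_id),
				      &rx_conf, mb_pool);
}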

^ permalink raw reply	[flat|nested] 134+ messages in thread

* [dpdk-dev] [PATCH v2 2/2] ethdev: introduce Tx queue offloads API
  2017-09-10 12:07 ` [dpdk-dev] [PATCH v2 0/2] ethdev " Shahaf Shuler
  2017-09-10 12:07   ` [dpdk-dev] [PATCH v2 1/2] ethdev: introduce Rx queue " Shahaf Shuler
@ 2017-09-10 12:07   ` Shahaf Shuler
  2017-09-10 17:48     ` Stephen Hemminger
  2017-09-11  8:03     ` Andrew Rybchenko
  2017-09-13  6:37   ` [dpdk-dev] [PATCH v3 0/2] ethdev new " Shahaf Shuler
  2 siblings, 2 replies; 134+ messages in thread
From: Shahaf Shuler @ 2017-09-10 12:07 UTC (permalink / raw)
  To: thomas; +Cc: dev

Introduce a new API to configure Tx offloads.

In the new API, offloads are divided into per-port and per-queue
offloads. The PMD reports capability for each of them.
Offloads are enabled using the existing DEV_TX_OFFLOAD_* flags.
To enable a per-port offload, it should be set on both the device
configuration and the queue configuration. To enable a per-queue
offload, it can be set on the queue configuration only.

In addition, Tx offloads are disabled by default and are enabled
according to application needs. This greatly simplifies the PMD's
management of the different offloads.

The new API does not have an equivalent for the following
benchmark-specific flags:

	- ETH_TXQ_FLAGS_NOREFCOUNT
	- ETH_TXQ_FLAGS_NOMULTMEMP

Applications should set the ETH_TXQ_FLAGS_IGNORE flag in the txq_flags
field in order to move to the new API.

The old Tx offloads API is kept for the time being, in order to enable a
smooth transition for PMDs and applications to the new API.
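
For illustration only (a sketch, not part of the patch): a queue setup
opting in to the new API could look as follows; the port id, ring size
and offload selection are assumptions.

#include <errno.h>
#include <rte_ethdev.h>

static int
setup_txq_new_api(uint8_t port_id, uint16_t queue_id)
{
	struct rte_eth_dev_info dev_info;
	struct rte_eth_txconf txconf;

	rte_eth_dev_info_get(port_id, &dev_info);
	txconf = dev_info.default_txconf;

	/* Opt in to the new API; the old bit flags are then ignored. */
	txconf.txq_flags = ETH_TXQ_FLAGS_IGNORE;
	txconf.offloads = DEV_TX_OFFLOAD_IPV4_CKSUM |
			  DEV_TX_OFFLOAD_TCP_CKSUM;

	/* Only advertised offloads may be requested. */
	if ((txconf.offloads & dev_info.tx_offload_capa) != txconf.offloads)
		return -ENOTSUP;

	return rte_eth_tx_queue_setup(port_id, queue_id, 512,
				      rte_eth_dev_socket_id(port_id),
				      &txconf);
}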

Signed-off-by: Shahaf Shuler <shahafs@mellanox.com>
---

Note:

There is a special case where the application is using the old API and
the PMD has already converted to the new one. If there are Tx offloads
which can be set only per-port, the queue setup may fail.

I chose to treat this case as an exception, considering that all Tx
offloads are currently defined to be per-queue. New ones to be added
should require the application to move to the new API as well.

---
 doc/guides/nics/features.rst  |  8 ++++++
 lib/librte_ether/rte_ethdev.c | 59 +++++++++++++++++++++++++++++++++++++-
 lib/librte_ether/rte_ethdev.h | 32 ++++++++++++++++++++-
 3 files changed, 97 insertions(+), 2 deletions(-)

diff --git a/doc/guides/nics/features.rst b/doc/guides/nics/features.rst
index f2c8497c2..bb25a1cee 100644
--- a/doc/guides/nics/features.rst
+++ b/doc/guides/nics/features.rst
@@ -131,6 +131,7 @@ Lock-free Tx queue
 If a PMD advertises DEV_TX_OFFLOAD_MT_LOCKFREE capable, multiple threads can
 invoke rte_eth_tx_burst() concurrently on the same Tx queue without SW lock.
 
+* **[uses]    rte_eth_txq_conf**: ``offloads:DEV_TX_OFFLOAD_MT_LOCKFREE``.
 * **[provides] rte_eth_dev_info**: ``tx_offload_capa:DEV_TX_OFFLOAD_MT_LOCKFREE``.
 * **[related]  API**: ``rte_eth_tx_burst()``.
 
@@ -220,6 +221,7 @@ TSO
 
 Supports TCP Segmentation Offloading.
 
+* **[uses]       rte_eth_txq_conf**: ``offloads:DEV_TX_OFFLOAD_TCP_TSO``.
 * **[uses]       rte_eth_desc_lim**: ``nb_seg_max``, ``nb_mtu_seg_max``.
 * **[uses]       mbuf**: ``mbuf.ol_flags:PKT_TX_TCP_SEG``.
 * **[uses]       mbuf**: ``mbuf.tso_segsz``, ``mbuf.l2_len``, ``mbuf.l3_len``, ``mbuf.l4_len``.
@@ -510,6 +512,7 @@ VLAN offload
 Supports VLAN offload to hardware.
 
 * **[uses]       rte_eth_rxq_conf**: ``offloads:DEV_RX_OFFLOAD_VLAN_STRIP,DEV_RX_OFFLOAD_VLAN_FILTER,DEV_RX_OFFLOAD_VLAN_EXTEND``.
+* **[uses]       rte_eth_txq_conf**: ``offloads:DEV_TX_OFFLOAD_VLAN_INSERT``.
 * **[implements] eth_dev_ops**: ``vlan_offload_set``.
 * **[provides]   mbuf**: ``mbuf.ol_flags:PKT_RX_VLAN_STRIPPED``, ``mbuf.vlan_tci``.
 * **[provides]   rte_eth_dev_info**: ``rx_offload_capa:DEV_RX_OFFLOAD_VLAN_STRIP``,
@@ -526,6 +529,7 @@ QinQ offload
 Supports QinQ (queue in queue) offload.
 
 * **[uses]     rte_eth_rxq_conf**: ``offloads:DEV_RX_OFFLOAD_QINQ_STRIP``.
+* **[uses]     rte_eth_txq_conf**: ``offloads:DEV_TX_OFFLOAD_QINQ_INSERT``.
 * **[uses]     mbuf**: ``mbuf.ol_flags:PKT_TX_QINQ_PKT``.
 * **[provides] mbuf**: ``mbuf.ol_flags:PKT_RX_QINQ_STRIPPED``, ``mbuf.vlan_tci``,
    ``mbuf.vlan_tci_outer``.
@@ -541,6 +545,7 @@ L3 checksum offload
 Supports L3 checksum offload.
 
 * **[uses]     rte_eth_rxq_conf**: ``offloads:DEV_RX_OFFLOAD_IPV4_CKSUM``.
+* **[uses]     rte_eth_txq_conf**: ``offloads:DEV_TX_OFFLOAD_IPV4_CKSUM``.
 * **[uses]     mbuf**: ``mbuf.ol_flags:PKT_TX_IP_CKSUM``,
   ``mbuf.ol_flags:PKT_TX_IPV4`` | ``PKT_TX_IPV6``.
 * **[provides] mbuf**: ``mbuf.ol_flags:PKT_RX_IP_CKSUM_UNKNOWN`` |
@@ -558,6 +563,7 @@ L4 checksum offload
 Supports L4 checksum offload.
 
 * **[uses]     rte_eth_rxq_conf**: ``offloads:DEV_RX_OFFLOAD_UDP_CKSUM,DEV_RX_OFFLOAD_TCP_CKSUM``.
+* **[uses]     rte_eth_txq_conf**: ``offloads:DEV_TX_OFFLOAD_UDP_CKSUM,DEV_TX_OFFLOAD_TCP_CKSUM,DEV_TX_OFFLOAD_SCTP_CKSUM``.
 * **[uses]     mbuf**: ``mbuf.ol_flags:PKT_TX_IPV4`` | ``PKT_TX_IPV6``,
   ``mbuf.ol_flags:PKT_TX_L4_NO_CKSUM`` | ``PKT_TX_TCP_CKSUM`` |
   ``PKT_TX_SCTP_CKSUM`` | ``PKT_TX_UDP_CKSUM``.
@@ -576,6 +582,7 @@ MACsec offload
 Supports MACsec.
 
 * **[uses]     rte_eth_rxq_conf**: ``offloads:DEV_RX_OFFLOAD_MACSEC_STRIP``.
+* **[uses]     rte_eth_txq_conf**: ``offloads:DEV_TX_OFFLOAD_MACSEC_INSERT``.
 * **[uses]     mbuf**: ``mbuf.ol_flags:PKT_TX_MACSEC``.
 * **[provides] rte_eth_dev_info**: ``rx_offload_capa:DEV_RX_OFFLOAD_MACSEC_STRIP``,
   ``tx_offload_capa:DEV_TX_OFFLOAD_MACSEC_INSERT``.
@@ -589,6 +596,7 @@ Inner L3 checksum
 Supports inner packet L3 checksum.
 
 * **[uses]     rte_eth_rxq_conf**: ``offloads:DEV_RX_OFFLOAD_OUTER_IPV4_CKSUM``.
+* **[uses]     rte_eth_txq_conf**: ``offloads:DEV_TX_OFFLOAD_OUTER_IPV4_CKSUM``.
 * **[uses]     mbuf**: ``mbuf.ol_flags:PKT_TX_IP_CKSUM``,
   ``mbuf.ol_flags:PKT_TX_IPV4`` | ``PKT_TX_IPV6``,
   ``mbuf.ol_flags:PKT_TX_OUTER_IP_CKSUM``,
diff --git a/lib/librte_ether/rte_ethdev.c b/lib/librte_ether/rte_ethdev.c
index b3c10701e..cd79cb1c9 100644
--- a/lib/librte_ether/rte_ethdev.c
+++ b/lib/librte_ether/rte_ethdev.c
@@ -1186,6 +1186,50 @@ rte_eth_rx_queue_setup(uint8_t port_id, uint16_t rx_queue_id,
 	return ret;
 }
 
+/**
+ * A conversion function from txq_flags API.
+ */
+static void
+rte_eth_convert_txq_flags(const uint32_t txq_flags, uint64_t *tx_offloads)
+{
+	uint64_t offloads = 0;
+
+	if (!(txq_flags & ETH_TXQ_FLAGS_NOMULTSEGS))
+		offloads |= DEV_TX_OFFLOAD_MULTI_SEGS;
+	if (!(txq_flags & ETH_TXQ_FLAGS_NOVLANOFFL))
+		offloads |= DEV_TX_OFFLOAD_VLAN_INSERT;
+	if (!(txq_flags & ETH_TXQ_FLAGS_NOXSUMSCTP))
+		offloads |= DEV_TX_OFFLOAD_SCTP_CKSUM;
+	if (!(txq_flags & ETH_TXQ_FLAGS_NOXSUMUDP))
+		offloads |= DEV_TX_OFFLOAD_UDP_CKSUM;
+	if (!(txq_flags & ETH_TXQ_FLAGS_NOXSUMTCP))
+		offloads |= DEV_TX_OFFLOAD_TCP_CKSUM;
+
+	*tx_offloads = offloads;
+}
+
+/**
+ * A conversion function from offloads API.
+ */
+static void
+rte_eth_convert_txq_offloads(const uint64_t tx_offloads, uint32_t *txq_flags)
+{
+	uint32_t flags = 0;
+
+	if (!(tx_offloads & DEV_TX_OFFLOAD_MULTI_SEGS))
+		flags |= ETH_TXQ_FLAGS_NOMULTSEGS;
+	if (!(tx_offloads & DEV_TX_OFFLOAD_VLAN_INSERT))
+		flags |= ETH_TXQ_FLAGS_NOVLANOFFL;
+	if (!(tx_offloads & DEV_TX_OFFLOAD_SCTP_CKSUM))
+		flags |= ETH_TXQ_FLAGS_NOXSUMSCTP;
+	if (!(tx_offloads & DEV_TX_OFFLOAD_UDP_CKSUM))
+		flags |= ETH_TXQ_FLAGS_NOXSUMUDP;
+	if (!(tx_offloads & DEV_TX_OFFLOAD_TCP_CKSUM))
+		flags |= ETH_TXQ_FLAGS_NOXSUMTCP;
+
+	*txq_flags = flags;
+}
+
 int
 rte_eth_tx_queue_setup(uint8_t port_id, uint16_t tx_queue_id,
 		       uint16_t nb_tx_desc, unsigned int socket_id,
@@ -1193,6 +1237,7 @@ rte_eth_tx_queue_setup(uint8_t port_id, uint16_t tx_queue_id,
 {
 	struct rte_eth_dev *dev;
 	struct rte_eth_dev_info dev_info;
+	struct rte_eth_txconf local_conf;
 	void **txq;
 
 	RTE_ETH_VALID_PORTID_OR_ERR_RET(port_id, -EINVAL);
@@ -1237,8 +1282,20 @@ rte_eth_tx_queue_setup(uint8_t port_id, uint16_t tx_queue_id,
 	if (tx_conf == NULL)
 		tx_conf = &dev_info.default_txconf;
 
+	/*
+	 * Convert between the two offload APIs to enable PMDs to support
+	 * only one of them.
+	 */
+	local_conf = *tx_conf;
+	if (tx_conf->txq_flags & ETH_TXQ_FLAGS_IGNORE)
+		rte_eth_convert_txq_offloads(tx_conf->offloads,
+					     &local_conf.txq_flags);
+	else
+		rte_eth_convert_txq_flags(tx_conf->txq_flags,
+					  &local_conf.offloads);
+
 	return (*dev->dev_ops->tx_queue_setup)(dev, tx_queue_id, nb_tx_desc,
-					       socket_id, tx_conf);
+					       socket_id, &local_conf);
 }
 
 void
diff --git a/lib/librte_ether/rte_ethdev.h b/lib/librte_ether/rte_ethdev.h
index f424cba04..4ad7dd059 100644
--- a/lib/librte_ether/rte_ethdev.h
+++ b/lib/librte_ether/rte_ethdev.h
@@ -692,6 +692,12 @@ struct rte_eth_vmdq_rx_conf {
  */
 struct rte_eth_txmode {
 	enum rte_eth_tx_mq_mode mq_mode; /**< TX multi-queues mode. */
+	uint64_t offloads;
+	/**
+	 * Per-port Tx offloads to be set using DEV_TX_OFFLOAD_* flags.
+	 * Only offloads set in the tx_offload_capa field of the rte_eth_dev_info
+	 * structure are allowed to be set.
+	 */
 
 	/* For i40e specifically */
 	uint16_t pvid;
@@ -733,6 +739,14 @@ struct rte_eth_rxconf {
 #define ETH_TXQ_FLAGS_NOXSUMS \
 		(ETH_TXQ_FLAGS_NOXSUMSCTP | ETH_TXQ_FLAGS_NOXSUMUDP | \
 		 ETH_TXQ_FLAGS_NOXSUMTCP)
+#define ETH_TXQ_FLAGS_IGNORE	0x8000
+	/**
+	 * When set, the txq_flags should be ignored;
+	 * instead per-queue Tx offloads will be set in the offloads field
+	 * located in the rte_eth_txconf struct.
+	 * This flag is temporary until the txq_flags
+	 * API is deprecated.
+	 */
 
 /**
  * A structure used to configure a TX ring of an Ethernet port.
@@ -745,6 +759,12 @@ struct rte_eth_txconf {
 
 	uint32_t txq_flags; /**< Set flags for the Tx queue */
 	uint8_t tx_deferred_start; /**< Do not start queue with rte_eth_dev_start(). */
+	uint64_t offloads;
+	/**
+	 * Per-queue Tx offloads to be set using DEV_TX_OFFLOAD_* flags.
+	 * Only offloads set in the tx_queue_offload_capa field of the
+	 * rte_eth_dev_info structure are allowed to be set.
+	 */
 };
 
 /**
@@ -969,6 +989,8 @@ struct rte_eth_conf {
 /**< Multiple threads can invoke rte_eth_tx_burst() concurrently on the same
  * tx queue without SW lock.
  */
+#define DEV_TX_OFFLOAD_MULTI_SEGS	0x00008000
+/**< multi segment send is supported. */
 
 struct rte_pci_device;
 
@@ -991,9 +1013,12 @@ struct rte_eth_dev_info {
 	uint16_t max_vmdq_pools; /**< Maximum number of VMDq pools. */
 	uint64_t rx_offload_capa;
 	/**< Device per port RX offload capabilities. */
-	uint32_t tx_offload_capa; /**< Device TX offload capabilities. */
+	uint64_t tx_offload_capa;
+	/**< Device per port TX offload capabilities. */
 	uint64_t rx_queue_offload_capa;
 	/**< Device per queue RX offload capabilities. */
+	uint64_t tx_queue_offload_capa;
+	/**< Device per queue TX offload capabilities. */
 	uint16_t reta_size;
 	/**< Device redirection table size, the total number of entries. */
 	uint8_t hash_key_size; /**< Hash key size in bytes */
@@ -2024,6 +2049,11 @@ int rte_eth_rx_queue_setup(uint8_t port_id, uint16_t rx_queue_id,
  *   - The *txq_flags* member contains flags to pass to the TX queue setup
  *     function to configure the behavior of the TX queue. This should be set
  *     to 0 if no special configuration is required.
+ *     This API is obsolete and will be deprecated. Applications
+ *     should set it to ETH_TXQ_FLAGS_IGNORE and use
+ *     the offloads field below.
+ *   - The *offloads* member contains Tx offloads to be enabled.
+ *     Offloads which are not set cannot be used on the datapath.
  *
  *     Note that setting *tx_free_thresh* or *tx_rs_thresh* value to 0 forces
  *     the transmit function to use default values.
-- 
2.12.0

^ permalink raw reply	[flat|nested] 134+ messages in thread

* Re: [dpdk-dev] [PATCH v2 2/2] ethdev: introduce Tx queue offloads API
  2017-09-10 12:07   ` [dpdk-dev] [PATCH v2 2/2] ethdev: introduce Tx " Shahaf Shuler
@ 2017-09-10 17:48     ` Stephen Hemminger
  2017-09-11  5:52       ` Shahaf Shuler
  2017-09-11  8:03     ` Andrew Rybchenko
  1 sibling, 1 reply; 134+ messages in thread
From: Stephen Hemminger @ 2017-09-10 17:48 UTC (permalink / raw)
  To: Shahaf Shuler; +Cc: thomas, dev

On Sun, 10 Sep 2017 15:07:49 +0300
Shahaf Shuler <shahafs@mellanox.com> wrote:

> Introduce a new API to configure Tx offloads.
> 
> In the new API, offloads are divided into per-port and per-queue
> offloads. The PMD reports capability for each of them.
> Offloads are enabled using the existing DEV_TX_OFFLOAD_* flags.
> To enable a per-port offload, it should be set on both the device
> configuration and the queue configuration. To enable a per-queue
> offload, it can be set on the queue configuration only.
> 
> In addition, Tx offloads are disabled by default and are enabled
> according to application needs. This greatly simplifies the PMD's
> management of the different offloads.
> 
> The new API does not have an equivalent for the following
> benchmark-specific flags:
> 
> 	- ETH_TXQ_FLAGS_NOREFCOUNT
> 	- ETH_TXQ_FLAGS_NOMULTMEMP
> 
> Applications should set the ETH_TXQ_FLAGS_IGNORE flag in the txq_flags
> field in order to move to the new API.
> 
> The old Tx offloads API is kept for the time being, in order to enable a
> smooth transition for PMDs and applications to the new API.
> 
> Signed-off-by: Shahaf Shuler <shahafs@mellanox.com>
> ---

Agree on a conceptual and hardware level, that this is a property that
could be per queue. But is there really an application that would want
to have refcounting on one queue and not another?  If an application is
cloning mbufs it needs refcounting.  One could even argue that for safety
these should be library wide.  That way if an application tried to manipulate
the ref count on an mbuf while the no-refcount mode was enabled it could be panic'd.

^ permalink raw reply	[flat|nested] 134+ messages in thread

* Re: [dpdk-dev] [PATCH v2 2/2] ethdev: introduce Tx queue offloads API
  2017-09-10 17:48     ` Stephen Hemminger
@ 2017-09-11  5:52       ` Shahaf Shuler
  2017-09-11  6:21         ` Jerin Jacob
  0 siblings, 1 reply; 134+ messages in thread
From: Shahaf Shuler @ 2017-09-11  5:52 UTC (permalink / raw)
  To: Stephen Hemminger; +Cc: Thomas Monjalon, dev

Sunday, September 10, 2017 8:48 PM, Stephen Hemminger:
> 
> On Sun, 10 Sep 2017 15:07:49 +0300
> Shahaf Shuler <shahafs@mellanox.com> wrote:
> 
> > Introduce a new API to configure Tx offloads.
> >
> > In the new API, offloads are divided into per-port and per-queue
> > offloads. The PMD reports capability for each of them.
> > Offloads are enabled using the existing DEV_TX_OFFLOAD_* flags.
> > To enable a per-port offload, it should be set on both the device
> > configuration and the queue configuration. To enable a per-queue
> > offload, it can be set on the queue configuration only.
> >
> > In addition, Tx offloads are disabled by default and are enabled
> > according to application needs. This greatly simplifies the PMD's
> > management of the different offloads.
> >
> > The new API does not have an equivalent for the following
> > benchmark-specific flags:
> >
> > 	- ETH_TXQ_FLAGS_NOREFCOUNT
> > 	- ETH_TXQ_FLAGS_NOMULTMEMP
> >
> > Applications should set the ETH_TXQ_FLAGS_IGNORE flag in the txq_flags
> > field in order to move to the new API.
> >
> > The old Tx offloads API is kept for the time being, in order to enable
> > a smooth transition for PMDs and applications to the new API.
> >
> > Signed-off-by: Shahaf Shuler <shahafs@mellanox.com>
> > ---
> 
> Agree on a conceptual and hardware level, that this is a property that could
> be per queue. But is there really an application that would want to have
> refcounting on one queue and not another?  If an application is cloning mbufs it
> needs refcounting.  One could even argue that for safety these should be
> library wide.  That way if an application tried to manipulate the ref count on an
> mbuf while the no-refcount mode was enabled it could be panic'd.

Actually the refcount and multi mempool flags have no equivalent in this new API. They are not counted as offloads, but rather some guarantees from the application side, which I agree probably need to be library-wide.
In the current API you cannot set those per queue nor per port. I think there is an intention to move those flags to some other location following this series [1]

[1]
http://dpdk.org/ml/archives/dev/2017-September/074475.html

^ permalink raw reply	[flat|nested] 134+ messages in thread

* Re: [dpdk-dev] [PATCH v2 2/2] ethdev: introduce Tx queue offloads API
  2017-09-11  5:52       ` Shahaf Shuler
@ 2017-09-11  6:21         ` Jerin Jacob
  2017-09-11  7:56           ` Shahaf Shuler
  0 siblings, 1 reply; 134+ messages in thread
From: Jerin Jacob @ 2017-09-11  6:21 UTC (permalink / raw)
  To: Shahaf Shuler; +Cc: Stephen Hemminger, Thomas Monjalon, dev

-----Original Message-----
> Date: Mon, 11 Sep 2017 05:52:19 +0000
> From: Shahaf Shuler <shahafs@mellanox.com>
> To: Stephen Hemminger <stephen@networkplumber.org>
> CC: Thomas Monjalon <thomas@monjalon.net>, "dev@dpdk.org" <dev@dpdk.org>
> Subject: Re: [dpdk-dev] [PATCH v2 2/2] ethdev: introduce Tx queue offloads
>  API
> 
> Sunday, September 10, 2017 8:48 PM, Stephen Hemminger:
> > 
> > On Sun, 10 Sep 2017 15:07:49 +0300
> > Shahaf Shuler <shahafs@mellanox.com> wrote:
> > 
> > > Introduce a new API to configure Tx offloads.
> > >
> > > In the new API, offloads are divided into per-port and per-queue
> > > offloads. The PMD reports capability for each of them.
> > > Offloads are enabled using the existing DEV_TX_OFFLOAD_* flags.
> > > To enable a per-port offload, it should be set on both the device
> > > configuration and the queue configuration. To enable a per-queue
> > > offload, it can be set on the queue configuration only.
> > >
> > > In addition, Tx offloads are disabled by default and are enabled
> > > according to application needs. This greatly simplifies the PMD's
> > > management of the different offloads.
> > >
> > > The new API does not have an equivalent for the following
> > > benchmark-specific flags:
> > >
> > > 	- ETH_TXQ_FLAGS_NOREFCOUNT
> > > 	- ETH_TXQ_FLAGS_NOMULTMEMP
> > >
> > > Applications should set the ETH_TXQ_FLAGS_IGNORE flag in the txq_flags
> > > field in order to move to the new API.
> > >
> > > The old Tx offloads API is kept for the time being, in order to enable
> > > a smooth transition for PMDs and applications to the new API.
> > >
> > > Signed-off-by: Shahaf Shuler <shahafs@mellanox.com>
> > > ---
> > 
> > Agree on a conceptual and hardware level, that this is a property that could
> > be per queue. But is there really an application that would want to have
> > refcounting on one queue and not another?  If an application is cloning mbufs it
> > needs refcounting.  One could even argue that for safety these should be
> > library wide.  That way if an application tried to manipulate the ref count on an
> > mbuf while the no-refcount mode was enabled it could be panic'd.
> 
> Actually the refcount and multi mempool flags have no equivalent in this new API. They are not counted as offloads, but rather some guarantees from the application side, which I agree probably need to be library-wide.
> In the current API you cannot set those per queue nor per port. I think there is an intention to move those flags to some other location following this series [1]

I don't think that should come in a follow-up to this series. It should be in this
series: if we are removing a feature, then we should find a way to fit it in
some location, as there is a use case for it [1]. Without an alternative,
this patch is a NACK from me.

[1]
http://dpdk.org/ml/archives/dev/2017-September/074475.html

> 
> [1]
> http://dpdk.org/ml/archives/dev/2017-September/074475.html
> 
> 
> 

^ permalink raw reply	[flat|nested] 134+ messages in thread

* Re: [dpdk-dev] [PATCH v2 2/2] ethdev: introduce Tx queue offloads API
  2017-09-11  6:21         ` Jerin Jacob
@ 2017-09-11  7:56           ` Shahaf Shuler
  2017-09-11  8:06             ` Jerin Jacob
  0 siblings, 1 reply; 134+ messages in thread
From: Shahaf Shuler @ 2017-09-11  7:56 UTC (permalink / raw)
  To: Jerin Jacob; +Cc: Stephen Hemminger, Thomas Monjalon, dev

Monday, September 11, 2017 9:21 AM, Jerin Jacob:
> 
> I don't think that should come in a follow-up to this series. It should be in this series:
> if we are removing a feature, then we should find a way to fit it in some location, as
> there is a use case for it [1]. Without an alternative, this patch is a NACK from
> me.
> 
> [1]
> http://dpdk.org/ml/archives/dev/2017-September/074475.html

I don't understand.
From the exact link above, you explicitly say that *you* will move these flags once the series is integrated. Quoting:

" 
> Please Jerin, could you work on moving these settings in a new API?

Sure. Once the generic code is in place. We are committed to fix the
PMDs by 18.02.
"

What has changed?

^ permalink raw reply	[flat|nested] 134+ messages in thread

* Re: [dpdk-dev] [PATCH v2 2/2] ethdev: introduce Tx queue offloads API
  2017-09-10 12:07   ` [dpdk-dev] [PATCH v2 2/2] ethdev: introduce Tx " Shahaf Shuler
  2017-09-10 17:48     ` Stephen Hemminger
@ 2017-09-11  8:03     ` Andrew Rybchenko
  2017-09-11 12:27       ` Shahaf Shuler
  1 sibling, 1 reply; 134+ messages in thread
From: Andrew Rybchenko @ 2017-09-11  8:03 UTC (permalink / raw)
  To: Shahaf Shuler, thomas; +Cc: dev

On 09/10/2017 03:07 PM, Shahaf Shuler wrote:
> Introduce a new API to configure Tx offloads.
>
> In the new API, offloads are divided into per-port and per-queue
> offloads. The PMD reports capability for each of them.
> Offloads are enabled using the existing DEV_TX_OFFLOAD_* flags.
> To enable a per-port offload, it should be set on both the device
> configuration and the queue configuration. To enable a per-queue
> offload, it can be set on the queue configuration only.
>
> In addition, Tx offloads are disabled by default and are enabled
> according to application needs. This greatly simplifies the PMD's
> management of the different offloads.
>
> The new API does not have an equivalent for the following
> benchmark-specific flags:
>
> 	- ETH_TXQ_FLAGS_NOREFCOUNT
> 	- ETH_TXQ_FLAGS_NOMULTMEMP
>
> Applications should set the ETH_TXQ_FLAGS_IGNORE flag in the txq_flags
> field in order to move to the new API.
>
> The old Tx offloads API is kept for the time being, in order to enable a
> smooth transition for PMDs and applications to the new API.
>
> Signed-off-by: Shahaf Shuler <shahafs@mellanox.com>
> ---
>
> Note:
>
> There is a special case where the application is using the old API and
> the PMD has already converted to the new one. If there are Tx offloads
> which can be set only per-port, the queue setup may fail.
>
> I chose to treat this case as an exception, considering that all Tx
> offloads are currently defined to be per-queue. New ones to be added
> should require the application to move to the new API as well.
>
> ---
>   doc/guides/nics/features.rst  |  8 ++++++
>   lib/librte_ether/rte_ethdev.c | 59 +++++++++++++++++++++++++++++++++++++-
>   lib/librte_ether/rte_ethdev.h | 32 ++++++++++++++++++++-
>   3 files changed, 97 insertions(+), 2 deletions(-)
>
> diff --git a/doc/guides/nics/features.rst b/doc/guides/nics/features.rst
> index f2c8497c2..bb25a1cee 100644
> --- a/doc/guides/nics/features.rst
> +++ b/doc/guides/nics/features.rst
> @@ -131,6 +131,7 @@ Lock-free Tx queue
>   If a PMD advertises DEV_TX_OFFLOAD_MT_LOCKFREE capable, multiple threads can
>   invoke rte_eth_tx_burst() concurrently on the same Tx queue without SW lock.
>   
> +* **[uses]    rte_eth_txq_conf**: ``offloads:DEV_TX_OFFLOAD_MT_LOCKFREE``.

It should be rte_eth_txconf here and below since renaming is postponed.

>   * **[provides] rte_eth_dev_info**: ``tx_offload_capa:DEV_TX_OFFLOAD_MT_LOCKFREE``.
>   * **[related]  API**: ``rte_eth_tx_burst()``.
>   
> @@ -220,6 +221,7 @@ TSO
>   
>   Supports TCP Segmentation Offloading.
>   
> +* **[uses]       rte_eth_txq_conf**: ``offloads:DEV_TX_OFFLOAD_TCP_TSO``.
>   * **[uses]       rte_eth_desc_lim**: ``nb_seg_max``, ``nb_mtu_seg_max``.
>   * **[uses]       mbuf**: ``mbuf.ol_flags:PKT_TX_TCP_SEG``.
>   * **[uses]       mbuf**: ``mbuf.tso_segsz``, ``mbuf.l2_len``, ``mbuf.l3_len``, ``mbuf.l4_len``.
> @@ -510,6 +512,7 @@ VLAN offload
>   Supports VLAN offload to hardware.
>   
>   * **[uses]       rte_eth_rxq_conf**: ``offloads:DEV_RX_OFFLOAD_VLAN_STRIP,DEV_RX_OFFLOAD_VLAN_FILTER,DEV_RX_OFFLOAD_VLAN_EXTEND``.
> +* **[uses]       rte_eth_txq_conf**: ``offloads:DEV_TX_OFFLOAD_VLAN_INSERT``.
>   * **[implements] eth_dev_ops**: ``vlan_offload_set``.
>   * **[provides]   mbuf**: ``mbuf.ol_flags:PKT_RX_VLAN_STRIPPED``, ``mbuf.vlan_tci``.
>   * **[provides]   rte_eth_dev_info**: ``rx_offload_capa:DEV_RX_OFFLOAD_VLAN_STRIP``,
> @@ -526,6 +529,7 @@ QinQ offload
>   Supports QinQ (queue in queue) offload.
>   
>   * **[uses]     rte_eth_rxq_conf**: ``offloads:DEV_RX_OFFLOAD_QINQ_STRIP``.
> +* **[uses]     rte_eth_txq_conf**: ``offloads:DEV_TX_OFFLOAD_QINQ_INSERT``.
>   * **[uses]     mbuf**: ``mbuf.ol_flags:PKT_TX_QINQ_PKT``.
>   * **[provides] mbuf**: ``mbuf.ol_flags:PKT_RX_QINQ_STRIPPED``, ``mbuf.vlan_tci``,
>      ``mbuf.vlan_tci_outer``.
> @@ -541,6 +545,7 @@ L3 checksum offload
>   Supports L3 checksum offload.
>   
>   * **[uses]     rte_eth_rxq_conf**: ``offloads:DEV_RX_OFFLOAD_IPV4_CKSUM``.
> +* **[uses]     rte_eth_txq_conf**: ``offloads:DEV_TX_OFFLOAD_IPV4_CKSUM``.
>   * **[uses]     mbuf**: ``mbuf.ol_flags:PKT_TX_IP_CKSUM``,
>     ``mbuf.ol_flags:PKT_TX_IPV4`` | ``PKT_TX_IPV6``.
>   * **[provides] mbuf**: ``mbuf.ol_flags:PKT_RX_IP_CKSUM_UNKNOWN`` |
> @@ -558,6 +563,7 @@ L4 checksum offload
>   Supports L4 checksum offload.
>   
>   * **[uses]     rte_eth_rxq_conf**: ``offloads:DEV_RX_OFFLOAD_UDP_CKSUM,DEV_RX_OFFLOAD_TCP_CKSUM``.
> +* **[uses]     rte_eth_txq_conf**: ``offloads:DEV_TX_OFFLOAD_UDP_CKSUM,DEV_TX_OFFLOAD_TCP_CKSUM,DEV_TX_OFFLOAD_SCTP_CKSUM``.
>   * **[uses]     mbuf**: ``mbuf.ol_flags:PKT_TX_IPV4`` | ``PKT_TX_IPV6``,
>     ``mbuf.ol_flags:PKT_TX_L4_NO_CKSUM`` | ``PKT_TX_TCP_CKSUM`` |
>     ``PKT_TX_SCTP_CKSUM`` | ``PKT_TX_UDP_CKSUM``.
> @@ -576,6 +582,7 @@ MACsec offload
>   Supports MACsec.
>   
>   * **[uses]     rte_eth_rxq_conf**: ``offloads:DEV_RX_OFFLOAD_MACSEC_STRIP``.
> +* **[uses]     rte_eth_txq_conf**: ``offloads:DEV_TX_OFFLOAD_MACSEC_INSERT``.
>   * **[uses]     mbuf**: ``mbuf.ol_flags:PKT_TX_MACSEC``.
>   * **[provides] rte_eth_dev_info**: ``rx_offload_capa:DEV_RX_OFFLOAD_MACSEC_STRIP``,
>     ``tx_offload_capa:DEV_TX_OFFLOAD_MACSEC_INSERT``.
> @@ -589,6 +596,7 @@ Inner L3 checksum
>   Supports inner packet L3 checksum.
>   
>   * **[uses]     rte_eth_rxq_conf**: ``offloads:DEV_RX_OFFLOAD_OUTER_IPV4_CKSUM``.
> +* **[uses]     rte_eth_txq_conf**: ``offloads:DEV_TX_OFFLOAD_OUTER_IPV4_CKSUM``.
>   * **[uses]     mbuf**: ``mbuf.ol_flags:PKT_TX_IP_CKSUM``,
>     ``mbuf.ol_flags:PKT_TX_IPV4`` | ``PKT_TX_IPV6``,
>     ``mbuf.ol_flags:PKT_TX_OUTER_IP_CKSUM``,
> diff --git a/lib/librte_ether/rte_ethdev.c b/lib/librte_ether/rte_ethdev.c
> index b3c10701e..cd79cb1c9 100644
> --- a/lib/librte_ether/rte_ethdev.c
> +++ b/lib/librte_ether/rte_ethdev.c
> @@ -1186,6 +1186,50 @@ rte_eth_rx_queue_setup(uint8_t port_id, uint16_t rx_queue_id,
>   	return ret;
>   }
>   
> +/**
> + * A conversion function from txq_flags API.
> + */
> +static void
> +rte_eth_convert_txq_flags(const uint32_t txq_flags, uint64_t *tx_offloads)

Maybe tx_offloads should simply be the return value of the function instead
of void.
Similar comment is applicable to rte_eth_convert_txq_offloads().

> +{
> +	uint64_t offloads = 0;
> +
> +	if (!(txq_flags & ETH_TXQ_FLAGS_NOMULTSEGS))
> +		offloads |= DEV_TX_OFFLOAD_MULTI_SEGS;
> +	if (!(txq_flags & ETH_TXQ_FLAGS_NOVLANOFFL))
> +		offloads |= DEV_TX_OFFLOAD_VLAN_INSERT;
> +	if (!(txq_flags & ETH_TXQ_FLAGS_NOXSUMSCTP))
> +		offloads |= DEV_TX_OFFLOAD_SCTP_CKSUM;
> +	if (!(txq_flags & ETH_TXQ_FLAGS_NOXSUMUDP))
> +		offloads |= DEV_TX_OFFLOAD_UDP_CKSUM;
> +	if (!(txq_flags & ETH_TXQ_FLAGS_NOXSUMTCP))
> +		offloads |= DEV_TX_OFFLOAD_TCP_CKSUM;
> +
> +	*tx_offloads = offloads;
> +}
> +
> +/**
> + * A conversion function from offloads API.
> + */
> +static void
> +rte_eth_convert_txq_offloads(const uint64_t tx_offloads, uint32_t *txq_flags)
> +{
> +	uint32_t flags = 0;
> +
> +	if (!(tx_offloads & DEV_TX_OFFLOAD_MULTI_SEGS))
> +		flags |= ETH_TXQ_FLAGS_NOMULTSEGS;
> +	if (!(tx_offloads & DEV_TX_OFFLOAD_VLAN_INSERT))
> +		flags |= ETH_TXQ_FLAGS_NOVLANOFFL;
> +	if (!(tx_offloads & DEV_TX_OFFLOAD_SCTP_CKSUM))
> +		flags |= ETH_TXQ_FLAGS_NOXSUMSCTP;
> +	if (!(tx_offloads & DEV_TX_OFFLOAD_UDP_CKSUM))
> +		flags |= ETH_TXQ_FLAGS_NOXSUMUDP;
> +	if (!(tx_offloads & DEV_TX_OFFLOAD_TCP_CKSUM))
> +		flags |= ETH_TXQ_FLAGS_NOXSUMTCP;
> +
> +	*txq_flags = flags;
> +}
> +
>   int
>   rte_eth_tx_queue_setup(uint8_t port_id, uint16_t tx_queue_id,
>   		       uint16_t nb_tx_desc, unsigned int socket_id,
> @@ -1193,6 +1237,7 @@ rte_eth_tx_queue_setup(uint8_t port_id, uint16_t tx_queue_id,
>   {
>   	struct rte_eth_dev *dev;
>   	struct rte_eth_dev_info dev_info;
> +	struct rte_eth_txconf local_conf;
>   	void **txq;
>   
>   	RTE_ETH_VALID_PORTID_OR_ERR_RET(port_id, -EINVAL);
> @@ -1237,8 +1282,20 @@ rte_eth_tx_queue_setup(uint8_t port_id, uint16_t tx_queue_id,
>   	if (tx_conf == NULL)
>   		tx_conf = &dev_info.default_txconf;
>   
> +	/*
> +	 * Convert between the two offload APIs to enable PMDs to support
> +	 * only one of them.
> +	 */
> +	local_conf = *tx_conf;
> +	if (tx_conf->txq_flags & ETH_TXQ_FLAGS_IGNORE)
> +		rte_eth_convert_txq_offloads(tx_conf->offloads,
> +					     &local_conf.txq_flags);
> +	else
> +		rte_eth_convert_txq_flags(tx_conf->txq_flags,
> +					  &local_conf.offloads);
> +
>   	return (*dev->dev_ops->tx_queue_setup)(dev, tx_queue_id, nb_tx_desc,
> -					       socket_id, tx_conf);
> +					       socket_id, &local_conf);
>   }
>   
>   void
> diff --git a/lib/librte_ether/rte_ethdev.h b/lib/librte_ether/rte_ethdev.h
> index f424cba04..4ad7dd059 100644
> --- a/lib/librte_ether/rte_ethdev.h
> +++ b/lib/librte_ether/rte_ethdev.h
> @@ -692,6 +692,12 @@ struct rte_eth_vmdq_rx_conf {
>    */
>   struct rte_eth_txmode {
>   	enum rte_eth_tx_mq_mode mq_mode; /**< TX multi-queues mode. */
> +	uint64_t offloads;
> +	/**

It should be /**< to tell Doxygen that it is a comment for the previous line.
However, I'd prefer to see the comment before uint64_t offloads; (and 
keep /** )
Not sure, since it highly depends on what is used in other similar 
places in the file.
Similar comments are applicable to a number of lines below.

> +	 * Per-port Tx offloads to be set using DEV_TX_OFFLOAD_* flags.
> +	 * Only offloads set in the tx_offload_capa field of the rte_eth_dev_info
> +	 * structure are allowed to be set.
> +	 */
>   
>   	/* For i40e specifically */
>   	uint16_t pvid;
> @@ -733,6 +739,14 @@ struct rte_eth_rxconf {
>   #define ETH_TXQ_FLAGS_NOXSUMS \
>   		(ETH_TXQ_FLAGS_NOXSUMSCTP | ETH_TXQ_FLAGS_NOXSUMUDP | \
>   		 ETH_TXQ_FLAGS_NOXSUMTCP)
> +#define ETH_TXQ_FLAGS_IGNORE	0x8000
> +	/**
> +	 * When set, the txq_flags should be ignored;
> +	 * instead per-queue Tx offloads will be set in the offloads field
> +	 * located in the rte_eth_txconf struct.
> +	 * This flag is temporary until the txq_flags
> +	 * API is deprecated.
> +	 */
>   
>   /**
>    * A structure used to configure a TX ring of an Ethernet port.
> @@ -745,6 +759,12 @@ struct rte_eth_txconf {
>   
>   	uint32_t txq_flags; /**< Set flags for the Tx queue */
>   	uint8_t tx_deferred_start; /**< Do not start queue with rte_eth_dev_start(). */
> +	uint64_t offloads;
> +	/**
> +	 * Per-queue Tx offloads to be set using DEV_TX_OFFLOAD_* flags.
> +	 * Only offloads set in the tx_queue_offload_capa field of the
> +	 * rte_eth_dev_info structure are allowed to be set.
> +	 */
>   };
>   
>   /**
> @@ -969,6 +989,8 @@ struct rte_eth_conf {
>   /**< Multiple threads can invoke rte_eth_tx_burst() concurrently on the same
>    * tx queue without SW lock.
>    */
> +#define DEV_TX_OFFLOAD_MULTI_SEGS	0x00008000
> +/**< multi segment send is supported. */

The comment should start with a capital letter as everywhere else in the 
file (as far as I can see).

>   
>   struct rte_pci_device;
>   
> @@ -991,9 +1013,12 @@ struct rte_eth_dev_info {
>   	uint16_t max_vmdq_pools; /**< Maximum number of VMDq pools. */
>   	uint64_t rx_offload_capa;
>   	/**< Device per port RX offload capabilities. */
> -	uint32_t tx_offload_capa; /**< Device TX offload capabilities. */
> +	uint64_t tx_offload_capa;
> +	/**< Device per port TX offload capabilities. */
>   	uint64_t rx_queue_offload_capa;
>   	/**< Device per queue RX offload capabilities. */
> +	uint64_t tx_queue_offload_capa;
> +	/**< Device per queue TX offload capabilities. */
>   	uint16_t reta_size;
>   	/**< Device redirection table size, the total number of entries. */
>   	uint8_t hash_key_size; /**< Hash key size in bytes */
> @@ -2024,6 +2049,11 @@ int rte_eth_rx_queue_setup(uint8_t port_id, uint16_t rx_queue_id,
>    *   - The *txq_flags* member contains flags to pass to the TX queue setup
>    *     function to configure the behavior of the TX queue. This should be set
>    *     to 0 if no special configuration is required.
> + *     This API is obsolete and will be deprecated. Applications
> + *     should set it to ETH_TXQ_FLAGS_IGNORE and use
> + *     the offloads field below.
> + *   - The *offloads* member contains Tx offloads to be enabled.
> + *     Offloads which are not set cannot be used on the datapath.
>    *
>    *     Note that setting *tx_free_thresh* or *tx_rs_thresh* value to 0 forces
>    *     the transmit function to use default values.

^ permalink raw reply	[flat|nested] 134+ messages in thread

* Re: [dpdk-dev] [PATCH v2 2/2] ethdev: introduce Tx queue offloads API
  2017-09-11  7:56           ` Shahaf Shuler
@ 2017-09-11  8:06             ` Jerin Jacob
  2017-09-11  8:46               ` Shahaf Shuler
  0 siblings, 1 reply; 134+ messages in thread
From: Jerin Jacob @ 2017-09-11  8:06 UTC (permalink / raw)
  To: Shahaf Shuler; +Cc: Stephen Hemminger, Thomas Monjalon, dev

-----Original Message-----
> Date: Mon, 11 Sep 2017 07:56:05 +0000
> From: Shahaf Shuler <shahafs@mellanox.com>
> To: Jerin Jacob <jerin.jacob@caviumnetworks.com>
> CC: Stephen Hemminger <stephen@networkplumber.org>, Thomas Monjalon
>  <thomas@monjalon.net>, "dev@dpdk.org" <dev@dpdk.org>
> Subject: RE: [dpdk-dev] [PATCH v2 2/2] ethdev: introduce Tx queue offloads
>  API
> 
> Monday, September 11, 2017 9:21 AM, Jerin Jacob:
> > 
> > I don't think that should come in a follow-up to this series. It should be in this series:
> > if we are removing a feature, then we should find a way to fit it in some location, as
> > there is a use case for it [1]. Without an alternative, this patch is a NACK from
> > me.
> > 
> > [1]
> > http://dpdk.org/ml/archives/dev/2017-September/074475.html
> 
> I don't understand.
> From the exact link above, you explicitly say that *you* will move these flags once the series is integrated. Quoting:
> 
> " 
> > Please Jerin, could you work on moving these settings in a new API?
> 
> Sure. Once the generic code is in place. We are committed to fix the
> PMDs by 18.02.

Yes. I will take care of the PMD (nicvf) side of the changes, not the ethdev or
mempool side. Meaning, you need to decide how you are going to expose the
equivalent of these flags and enable the generic code for those flags in
ethdev or mempool. The driver side of the changes I can take care of.

> "
> 
> What has changed?
> 
> 

^ permalink raw reply	[flat|nested] 134+ messages in thread

* Re: [dpdk-dev] [PATCH v2 2/2] ethdev: introduce Tx queue offloads API
  2017-09-11  8:06             ` Jerin Jacob
@ 2017-09-11  8:46               ` Shahaf Shuler
  2017-09-11  9:05                 ` Jerin Jacob
  0 siblings, 1 reply; 134+ messages in thread
From: Shahaf Shuler @ 2017-09-11  8:46 UTC (permalink / raw)
  To: Jerin Jacob; +Cc: Stephen Hemminger, Thomas Monjalon, dev

Monday, September 11, 2017 11:06 AM, Jerin Jacob:
> >
> > I don't understand.
> > From the exact link above, you explicitly say that *you* will move these flags
> once the series is integrated. Quoting:
> >
> > "
> > > Please Jerin, could you work on moving these settings in a new API?
> >
> > Sure. Once the generic code is in place. We are committed to fix the
> > PMDs by 18.02.
> 
> Yes. I will take care of the PMD (nicvf) side of the changes, not the ethdev or
> mempool side. Meaning, you need to decide how you are going to expose the
> equivalent of these flags and enable the generic code for those flags in
> ethdev or mempool. The driver side of the changes I can take care of.
> 

How about making it a PMD option?
Seems like nicvf is the only PMD which cares about them.

If there are more PMDs later, we can think about which API is needed.

^ permalink raw reply	[flat|nested] 134+ messages in thread

* Re: [dpdk-dev] [PATCH v2 2/2] ethdev: introduce Tx queue offloads API
  2017-09-11  8:46               ` Shahaf Shuler
@ 2017-09-11  9:05                 ` Jerin Jacob
  2017-09-11 11:02                   ` Ananyev, Konstantin
  0 siblings, 1 reply; 134+ messages in thread
From: Jerin Jacob @ 2017-09-11  9:05 UTC (permalink / raw)
  To: Shahaf Shuler; +Cc: Stephen Hemminger, Thomas Monjalon, dev

-----Original Message-----
> Date: Mon, 11 Sep 2017 08:46:50 +0000
> From: Shahaf Shuler <shahafs@mellanox.com>
> To: Jerin Jacob <jerin.jacob@caviumnetworks.com>
> CC: Stephen Hemminger <stephen@networkplumber.org>, Thomas Monjalon
>  <thomas@monjalon.net>, "dev@dpdk.org" <dev@dpdk.org>
> Subject: RE: [dpdk-dev] [PATCH v2 2/2] ethdev: introduce Tx queue offloads
>  API
> 
> Monday, September 11, 2017 11:06 AM, Jerin Jacob:
> > >
> > > I don't understand.
> > > From the exact link above, you explicitly say that *you* will move these flags
> > once the series is integrated. Quoting:
> > >
> > > "
> > > > Please Jerin, could you work on moving these settings in a new API?
> > >
> > > Sure. Once the generic code is in place. We are committed to fix the
> > > PMDs by 18.02.
> > 
> > Yes. I will take care of the PMD (nicvf) side of the changes, not the ethdev or
> > mempool side. Meaning, you need to decide how you are going to expose the
> > equivalent of these flags and enable the generic code for those flags in
> > ethdev or mempool. The driver side of the changes I can take care of.
> > 
> 
> How about making it a PMD option?
> Seems like nicvf is the only PMD which cares about them.

Let's take it flag by flag:
ETH_TXQ_FLAGS_NOMULTMEMP - I think this should be removed. But we can have
common code in the ethdev PMD layer to detect that all mbufs come from the
same pool, as the application passes the mempool at rx_configure().

ETH_TXQ_FLAGS_NOREFCOUNT: This one has i40e and nicvf consumers.
And it is driven by the use case too. So it should be available in some
form.
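
A rough sketch of the NOMULTMEMP detection idea above, with invented
names (this is not existing ethdev code):

#include <rte_mbuf.h>
#include <rte_mempool.h>

/* Hypothetical per-queue state: learn the pool from the first mbuf and
 * notice when the single-mempool guarantee is violated. */
struct txq_pool_state {
	struct rte_mempool *cached_pool;
	int multi_pool_seen;
};

static inline void
txq_observe_pool(struct txq_pool_state *st, struct rte_mbuf *m)
{
	if (st->cached_pool == NULL)
		st->cached_pool = m->pool;  /* first mbuf: learn the pool */
	else if (st->cached_pool != m->pool)
		st->multi_pool_seen = 1;    /* guarantee does not hold */
}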

> 
> If there are more PMDs later, we can think about which API is needed.
> 

^ permalink raw reply	[flat|nested] 134+ messages in thread

* Re: [dpdk-dev] [PATCH v2 2/2] ethdev: introduce Tx queue offloads API
  2017-09-11  9:05                 ` Jerin Jacob
@ 2017-09-11 11:02                   ` Ananyev, Konstantin
  2017-09-12  4:01                     ` Jerin Jacob
  0 siblings, 1 reply; 134+ messages in thread
From: Ananyev, Konstantin @ 2017-09-11 11:02 UTC (permalink / raw)
  To: Jerin Jacob, Shahaf Shuler
  Cc: Stephen Hemminger, Thomas Monjalon, dev, Zhang, Helin, Wu, Jingjing


> > > >
> > > > I don't understand.
> > > > From the exact link above, you explicitly say that *you* will move these flags
> > > once the series is integrated. Quoting:
> > > >
> > > > "
> > > > > Please Jerin, could you work on moving these settings in a new API?
> > > >
> > > > Sure. Once the generic code is in place. We are committed to fix the
> > > > PMDs by 18.02.
> > >
> > > Yes. I will take care of the PMD (nicvf) side of the changes, not the ethdev or
> > > mempool side. Meaning, you need to decide how you are going to expose the
> > > equivalent of these flags and enable the generic code for those flags in
> > > ethdev or mempool. The driver side of the changes I can take care of.
> > >
> >
> > How about making it a PMD option?
> > Seems like nicvf is the only PMD which cares about them.
> 
> > Let's take it flag by flag:
> > ETH_TXQ_FLAGS_NOMULTMEMP - I think this should be removed. But we can have
> > common code in the ethdev PMD layer to detect that all mbufs come from the
> > same pool, as the application passes the mempool at rx_configure().


This is TX offloads, not RX.
At tx_queue_setup() the user doesn't have to provide the mempool pointer,
and can pass mbufs from any mempool to the TX routine.
BTW, how do you know which particular mempool to use?
Still read it from the transmitted mbuf (at least the first one), I presume?

> 
> ETH_TXQ_FLAGS_NOREFCOUNT: This one has i40e and nicvf consumers.

About i40e - as far as I know, no-one uses the i40e PMD with this flag.
As far as I remember, it was added purely for benchmarking purposes at some early stage.
So my vote would be to remove it from i40e.
Helin, Jingjing - what are your thoughts here?
About nicvf - as I can see it is used only in conjunction with ETH_TXQ_FLAGS_NOMULTMEMP,
never alone.
My understanding is that the current meaning of these flags
is a promise to the PMD that for that particular TX queue the user would submit only mbufs that:
- all belong to the same mempool
- would always have refcount==1
- would always be direct ones (no indirect mbufs)

So literally, yes, it is not a TX HW offload, though I understand your intention to
have such a possibility - it might help to save some cycles.
I wonder whether some new driver-specific function would help in that case?
nicvf_txq_pool_setup(portid, queueid, struct rte_mempool *txpool, uint32_t flags);
or so?
So the user can call it just before rte_eth_tx_queue_setup()?
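
Roughly like this (the function does not exist; it and the flag names are
hypothetical, only to illustrate the proposal):

/* Hypothetical flag names, invented for this illustration only. */
#define NICVF_TXQ_POOL_NOREFCOUNT (1u << 0)
#define NICVF_TXQ_POOL_NOMULTMEMP (1u << 1)

static int
setup_nicvf_txq(uint8_t port_id, uint16_t queue_id, uint16_t nb_tx_desc,
		unsigned int socket_id, struct rte_mempool *tx_pool,
		const struct rte_eth_txconf *txconf)
{
	/* Proposed driver-specific call, made before the generic setup. */
	nicvf_txq_pool_setup(port_id, queue_id, tx_pool,
			     NICVF_TXQ_POOL_NOREFCOUNT |
			     NICVF_TXQ_POOL_NOMULTMEMP);
	return rte_eth_tx_queue_setup(port_id, queue_id, nb_tx_desc,
				      socket_id, txconf);
}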
Konstantin

> And it is driven by the use case too. So it should be available in some
> form.
> 
> >
> > If there are more PMDs later, we can think about which API is needed.
> >

^ permalink raw reply	[flat|nested] 134+ messages in thread

* Re: [dpdk-dev] [PATCH v2 2/2] ethdev: introduce Tx queue offloads API
  2017-09-11  8:03     ` Andrew Rybchenko
@ 2017-09-11 12:27       ` Shahaf Shuler
  2017-09-11 13:10         ` Andrew Rybchenko
  0 siblings, 1 reply; 134+ messages in thread
From: Shahaf Shuler @ 2017-09-11 12:27 UTC (permalink / raw)
  To: Andrew Rybchenko, Thomas Monjalon; +Cc: dev

September 11, 2017 11:03 AM, Andrew Rybchenko:




 +/**
 + * A conversion function from txq_flags API.
 + */
 +static void
 +rte_eth_convert_txq_flags(const uint32_t txq_flags, uint64_t *tx_offloads)

Maybe tx_offloads should simply be the return value of the function instead of void.
Similar comment is applicable to rte_eth_convert_txq_offloads().


Can you elaborate why it would be better?

^ permalink raw reply	[flat|nested] 134+ messages in thread

* Re: [dpdk-dev] [PATCH v2 2/2] ethdev: introduce Tx queue offloads API
  2017-09-11 12:27       ` Shahaf Shuler
@ 2017-09-11 13:10         ` Andrew Rybchenko
  0 siblings, 0 replies; 134+ messages in thread
From: Andrew Rybchenko @ 2017-09-11 13:10 UTC (permalink / raw)
  To: Shahaf Shuler, Thomas Monjalon; +Cc: dev

On 09/11/2017 03:27 PM, Shahaf Shuler wrote:
>
> September 11, 2017 11:03 AM, Andrew Rybchenko:
>
>     +/**
>
>     + * A conversion function from txq_flags API.
>
>     + */
>
>     +static void
>
>     +rte_eth_convert_txq_flags(const uint32_t txq_flags, uint64_t *tx_offloads)
>
>
> Maybe tx_offloads should simply be the return value of the function
> instead of void.
> Similar comment is applicable to rte_eth_convert_txq_offloads().
>
> Can you elaborate why it would be better?
>

It is a pure converter function and it would avoid questions like:
Is tx_offloads an output only or input/output parameter?
Yes, the function is tiny and it is easy to find the answer, but still.

Also it would make it possible to pass the result to another function
call without extra variables etc.
Yes, right now usage of the function is very limited and hopefully
it will not live long, so it is not that important. Definitely up to you.
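
A sketch of the value-returning variant, assuming the same flag handling
as the patch (abbreviated here):

static uint64_t
rte_eth_convert_txq_flags(const uint32_t txq_flags)
{
	uint64_t offloads = 0;

	if (!(txq_flags & ETH_TXQ_FLAGS_NOMULTSEGS))
		offloads |= DEV_TX_OFFLOAD_MULTI_SEGS;
	if (!(txq_flags & ETH_TXQ_FLAGS_NOVLANOFFL))
		offloads |= DEV_TX_OFFLOAD_VLAN_INSERT;
	/* ... remaining flags handled as in the patch ... */

	return offloads;
}

/* The caller side then becomes a plain assignment: */
local_conf.offloads = rte_eth_convert_txq_flags(tx_conf->txq_flags);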

^ permalink raw reply	[flat|nested] 134+ messages in thread

* Re: [dpdk-dev] [PATCH v2 2/2] ethdev: introduce Tx queue offloads API
  2017-09-11 11:02                   ` Ananyev, Konstantin
@ 2017-09-12  4:01                     ` Jerin Jacob
  2017-09-12  5:25                       ` Shahaf Shuler
  0 siblings, 1 reply; 134+ messages in thread
From: Jerin Jacob @ 2017-09-12  4:01 UTC (permalink / raw)
  To: Ananyev, Konstantin
  Cc: Shahaf Shuler, Stephen Hemminger, Thomas Monjalon, dev, Zhang,
	Helin, Wu, Jingjing

-----Original Message-----
> Date: Mon, 11 Sep 2017 11:02:07 +0000
> From: "Ananyev, Konstantin" <konstantin.ananyev@intel.com>
> To: Jerin Jacob <jerin.jacob@caviumnetworks.com>, Shahaf Shuler
>  <shahafs@mellanox.com>
> CC: Stephen Hemminger <stephen@networkplumber.org>, Thomas Monjalon
>  <thomas@monjalon.net>, "dev@dpdk.org" <dev@dpdk.org>, "Zhang, Helin"
>  <helin.zhang@intel.com>, "Wu, Jingjing" <jingjing.wu@intel.com>
> Subject: RE: [dpdk-dev] [PATCH v2 2/2] ethdev: introduce Tx queue offloads
>  API
> 
> 
> > > > >
> > > > > I don't understand.
> > > > > From the exact link above, you explicitly say that *you* will move these flags
> > > > once the series is integrated. Quoting:
> > > > >
> > > > > "
> > > > > > Please Jerin, could you work on moving these settings in a new API?
> > > > >
> > > > > Sure. Once the generic code is in place. We are committed to fix the
> > > > > PMDs by 18.02.
> > > >
> > > > Yes. I will take care of the PMD (nicvf) side of the changes, not the ethdev or
> > > > mempool side. Meaning, you need to decide how you are going to expose the
> > > > equivalent of these flags and enable the generic code for those flags in
> > > > ethdev or mempool. The driver side of the changes I can take care of.
> > > >
> > >
> > > How about making it a PMD option?
> > > Seems like nicvf is the only PMD which cares about them.
> > 
> > Let's take it flag by flag:
> > ETH_TXQ_FLAGS_NOMULTMEMP - I think this should be removed. But we can have
> > common code in the ethdev PMD layer to detect that all mbufs come from the
> > same pool, as the application passes the mempool at rx_configure().
> 
> 
> This is TX offloads, not RX.
> At tx_queue_setup() the user doesn't have to provide the mempool pointer,
> and can pass mbufs from any mempool to the TX routine.
> BTW, how do you know which particular mempool to use?
> Still read it from the transmitted mbuf (at least the first one), I presume?

Yes. It still reads it from the transmitted mbuf, for the first one.
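
For context, a sketch of what these guarantees buy on the Tx completion
path (illustrative names, not the actual nicvf code):

#include <rte_mbuf.h>
#include <rte_mempool.h>

static void
txq_free_completed(struct rte_mbuf **done, uint16_t n,
		   struct rte_mempool *cached_pool, int single_pool_no_refcnt)
{
	uint16_t i;

	if (single_pool_no_refcnt) {
		/* NOMULTMEMP + NOREFCOUNT: all mbufs are direct, come from
		 * one known pool and have refcnt == 1, so they can be
		 * returned in one bulk operation without touching each
		 * mbuf's cache line again. */
		rte_mempool_put_bulk(cached_pool, (void **)done, n);
		return;
	}

	/* Generic path: each mbuf may be indirect, refcounted or from a
	 * different pool, so it has to be freed individually. */
	for (i = 0; i < n; i++)
		rte_pktmbuf_free(done[i]);
}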

> 
> > 
> > ETH_TXQ_FLAGS_NOREFCOUNT: This one has i40e and nicvf consumers.
> 
> About i40e - as far as I know, no-one uses the i40e PMD with this flag.
> As far as I remember, it was added purely for benchmarking purposes at some early stage.
> So my vote would be to remove it from i40e.
> Helin, Jingjing - what are your thoughts here?
> About nicvf - as I can see it is used only in conjunction with ETH_TXQ_FLAGS_NOMULTMEMP,
> never alone.
> My understanding is that the current meaning of these flags
> is a promise to the PMD that for that particular TX queue the user would submit only mbufs that:
> - all belong to the same mempool
> - would always have refcount==1
> - would always be direct ones (no indirect mbufs)

Yes, only when ETH_TXQ_FLAGS_NOMULTMEMP and ETH_TXQ_FLAGS_NOREFCOUNT
are selected at Tx queue configuration.

> 
> So literally, yes, it is not a TX HW offload, though I understand your intention to
> have such a possibility - it might help to save some cycles.

It is not a few cycles. We could see a ~24% drop per core (with 64B) with
testpmd and l3fwd on some SoCs. It is not very specific to the nicvf HW; the
problem is the limited cache hierarchy in very low-end arm64 machines.
In the TX buffer recycling case, it needs to touch the mbuf again to find out
the associated mempool to free to. It is fine if an application demands it, but
not all applications demand it.

We have two categories of arm64 machines: the high-end machines, where the
cache hierarchy is similar to an x86 server machine, and the low-end ones with
very limited cache resources. Unfortunately, we need to have the same binary
on both machines.


> I wonder whether some new driver-specific function would help in that case?
> nicvf_txq_pool_setup(portid, queueid, struct rte_mempool *txpool, uint32_t flags);
> or so?

It is possible, but how do we make such a change in testpmd, l3fwd or
ipsec-gw, in-tree applications which do need only NOMULTMEMP &
NOREFCOUNT?

If there is concern about making it Tx queue level, that is fine. We can
move from queue level to port level or global level.
IMO, the application should express in some form that it wants only
NOMULTMEMP & NOREFCOUNT, and that is the case for l3fwd and ipsec-gw.


> So the user can call it just before rte_eth_tx_queue_setup()?
> Konstantin
> 
> > And it is driven by the use case too. So it should be available in some
> > form.
> > 
> > >
> > > If there are more PMDs later, we can think about which API is needed.
> > >

^ permalink raw reply	[flat|nested] 134+ messages in thread

* Re: [dpdk-dev] [PATCH v2 2/2] ethdev: introduce Tx queue offloads API
  2017-09-12  4:01                     ` Jerin Jacob
@ 2017-09-12  5:25                       ` Shahaf Shuler
  2017-09-12  5:51                         ` Jerin Jacob
  0 siblings, 1 reply; 134+ messages in thread
From: Shahaf Shuler @ 2017-09-12  5:25 UTC (permalink / raw)
  To: Jerin Jacob, Ananyev, Konstantin
  Cc: Stephen Hemminger, Thomas Monjalon, dev, Zhang, Helin, Wu, Jingjing

Tuesday, September 12, 2017 7:01 AM, Jerin Jacob:
> Yes, only when ETH_TXQ_FLAGS_NOMULTMEMP and
> ETH_TXQ_FLAGS_NOREFCOUNT are selected at Tx queue configuration.
> 
> >
> > So literally, yes it is not a TX HW offload, though I understand your
> > intention to have such possibility - it might help to save some cycles.
> 
> It is not a few cycles. We could see a ~24% drop per core (with 64B) with
> testpmd and l3fwd on some SoCs. It is not very specific to the nicvf HW; the
> problem is the limited cache hierarchy in very low-end arm64 machines.
> In the TX buffer recycling case, it needs to touch the mbuf again to find out
> the associated mempool to free to. It is fine if an application demands it, but
> not all applications demand it.
> 
> We have two categories of arm64 machines: the high-end machines, where the
> cache hierarchy is similar to an x86 server machine, and the low-end ones with
> very limited cache resources. Unfortunately, we need to have the same binary
> on both machines.
> 
> 
> > I wonder whether some new driver-specific function would help in that case?
> > nicvf_txq_pool_setup(portid, queueid, struct rte_mempool *txpool,
> > uint32_t flags); or so?
> 
> It is possible, but how do we make such a change in testpmd, l3fwd or ipsec-
> gw, in-tree applications which do need only NOMULTMEMP &
> NOREFCOUNT?
> 
> If there is concern about making it Tx queue level, that is fine. We can move
> from queue level to port level or global level.
> IMO, the application should express in some form that it wants only
> NOMULTMEMP & NOREFCOUNT, and that is the case for l3fwd and ipsec-
> gw.
> 

I understand the use case, and the fact that those flags improve performance on low-end ARM CPUs.
IMO those flags cannot be at queue/port level. They must be global.

Even though the use-case is generic, the nicvf PMD is the only one which does such an optimization.
So I am suggesting again - why not expose it as a PMD-specific parameter?

- The application can express that it wants such an optimization.
- It is global.

Currently it does not seem there is high demand for such flags from other PMDs. If such demand arises, we can discuss again how to expose it properly.

^ permalink raw reply	[flat|nested] 134+ messages in thread

* Re: [dpdk-dev] [PATCH v2 2/2] ethdev: introduce Tx queue offloads API
  2017-09-12  5:25                       ` Shahaf Shuler
@ 2017-09-12  5:51                         ` Jerin Jacob
  2017-09-12  6:35                           ` Shahaf Shuler
  2017-09-12  6:43                           ` Andrew Rybchenko
  0 siblings, 2 replies; 134+ messages in thread
From: Jerin Jacob @ 2017-09-12  5:51 UTC (permalink / raw)
  To: Shahaf Shuler
  Cc: Ananyev, Konstantin, Stephen Hemminger, Thomas Monjalon, dev,
	Zhang, Helin, Wu, Jingjing

-----Original Message-----
> Date: Tue, 12 Sep 2017 05:25:42 +0000
> From: Shahaf Shuler <shahafs@mellanox.com>
> To: Jerin Jacob <jerin.jacob@caviumnetworks.com>, "Ananyev, Konstantin"
>  <konstantin.ananyev@intel.com>
> CC: Stephen Hemminger <stephen@networkplumber.org>, Thomas Monjalon
>  <thomas@monjalon.net>, "dev@dpdk.org" <dev@dpdk.org>, "Zhang, Helin"
>  <helin.zhang@intel.com>, "Wu, Jingjing" <jingjing.wu@intel.com>
> Subject: RE: [dpdk-dev] [PATCH v2 2/2] ethdev: introduce Tx queue offloads
>  API
> 
> Tuesday, September 12, 2017 7:01 AM, Jerin Jacob:
> > Yes, only when ETH_TXQ_FLAGS_NOMULTMEMP and
> > ETH_TXQ_FLAGS_NOREFCOUNT selected at tx queue configuration.
> > 
> > >
> > > So literally, yes it is not a TX HW offload, though I understand your
> > > intention to have such possibility - it might help to save some cycles.
> > 
> > It not a few cycles. We could see ~24% drop on per core(with 64B) with
> > testpmd and l3fwd on some SoCs. It is not very specific to nicvf HW, The
> > problem is with limited cache hierarchy in very low end arm64 machines.
> > For TX buffer recycling case, it need to touch the mbuf again to find out the
> > associated mempool to free. It is fine if application demands it but not all the
> > application demands it.
> > 
> > We have two category of arm64 machines, The high end machine where
> > cache hierarchy similar x86 server machine. The low end ones with very
> > limited cache resources. Unfortunately, we need to have the same binary on
> > both machines.
> > 
> > 
> > > Wonder would some new driver specific function would help in that case?
> > > nicvf_txq_pool_setup(portid, queueid, struct rte_mempool *txpool,
> > > uint32_t flags); or so?
> > 
> > It is possible, but how do we make such change in testpmd, l3fwd or ipsec-
> > gw in tree application which does need only NOMULTIMEMP &
> > NOREFCOUNT.
> > 
> > If there is concern about making it Tx queue level it is fine. We can move
> > from queue level to port level or global level.
> > IMO, Application should express in some form that it wants only
> > NOMULTIMEMP & NOREFCOUNT and thats is the case for l3fwd and ipsec-
> > gw
> > 
> 
> > I understand the use case, and the fact that those flags improve the performance on low-end ARM CPUs.
> > IMO those flags cannot be at queue/port level. They must be global.

Where should we have it as global (in terms of API)?
And why can it not be at port level?

> 
> > Even though the use-case is generic, the nicvf PMD is the only one which does such an optimization.
> > So I am suggesting again - why not expose it as a PMD-specific parameter?

Why make it PMD-specific, if the application can express it through
normative DPDK APIs?

> 
> > - The application can express that it wants such an optimization.
> > - It is global.
> >
> > Currently it does not seem there is high demand for such flags from other PMDs. If such demand arises, we can discuss again how to expose it properly.

It is not PMD specific. It is all about where it runs. It is applicable
for any PMD that runs on low-end hardware where it needs SW-based Tx
buffer recycling (the NPU is a different story, as it has a HW-assisted
mempool manager).
What are we losing by letting DPDK run effectively on low-end hardware
with such "on demand" runtime configuration through a normative DPDK API?



^ permalink raw reply	[flat|nested] 134+ messages in thread

* Re: [dpdk-dev] [PATCH v2 2/2] ethdev: introduce Tx queue offloads API
  2017-09-12  5:51                         ` Jerin Jacob
@ 2017-09-12  6:35                           ` Shahaf Shuler
  2017-09-12  6:46                             ` Andrew Rybchenko
  2017-09-12  7:17                             ` Jerin Jacob
  2017-09-12  6:43                           ` Andrew Rybchenko
  1 sibling, 2 replies; 134+ messages in thread
From: Shahaf Shuler @ 2017-09-12  6:35 UTC (permalink / raw)
  To: Jerin Jacob
  Cc: Ananyev, Konstantin, Stephen Hemminger, Thomas Monjalon, dev,
	Zhang, Helin, Wu, Jingjing

Tuesday, September 12, 2017 8:52 AM, Jerin Jacob:
> > I understand the use case, and the fact that those flags improve the
> > performance on low-end ARM CPUs.
> > IMO those flags cannot be on queue/port level. They must be global.
> 
> Where should we have it as global (in terms of API)?
> And why can it not be at port level?

Because I don't think there is a use-case where an application would want refcounting on one port and not on the other. Either the application clones mbufs or it doesn't.
Same for the multiple mempools: either the application uses them or it doesn't.

If there is a strong use-case for an application to clone mbufs on port X but not on port Y, then maybe that is enough to have it per-port.
We can go even further - why not have the guarantee per queue? It is possible if the application is willing to manage it.

Again, those are not offloads; therefore, if we expose them, it should be in a different location than the offloads field on eth conf.

> 
> >
> > Even though the use-case is generic, the nicvf PMD is the only one
> > which does such optimization.
> > So I am suggesting again - why not expose it as a PMD specific parameter?
> 
> Why make it PMD specific, if the application can express it through
> normative DPDK APIs?
> 
> >
> > - The application can express it wants such optimization.
> > - It is global
> >
> > Currently there does not seem to be high demand for such flags from
> > other PMDs. If such demand arises, we can discuss again how to expose
> > them properly.
> 
> It is not PMD specific. It is all about where it runs. It is applicable
> for any PMD that runs on low-end hardware where it needs SW-based Tx
> buffer recycling (the NPU is a different story, as it has a HW-assisted
> mempool manager).

Maybe, but I don't see another PMD which uses those flags. Are you aware of any plans to add such optimizations?
You are pushing for a generic API which is currently used only by a single entity.

> What are we losing by letting DPDK run effectively on low-end hardware
> with such "on demand" runtime configuration through a normative DPDK API?

Complexity of APIs for applications. More structs on ethdev, more API definitions, more fields to be configured by the application, all valid for a single PMD.
For the rest of the PMDs, those fields are currently don't-care.


^ permalink raw reply	[flat|nested] 134+ messages in thread

* Re: [dpdk-dev] [PATCH v2 2/2] ethdev: introduce Tx queue offloads API
  2017-09-12  5:51                         ` Jerin Jacob
  2017-09-12  6:35                           ` Shahaf Shuler
@ 2017-09-12  6:43                           ` Andrew Rybchenko
  2017-09-12  6:59                             ` Shahaf Shuler
  1 sibling, 1 reply; 134+ messages in thread
From: Andrew Rybchenko @ 2017-09-12  6:43 UTC (permalink / raw)
  To: Jerin Jacob, Shahaf Shuler
  Cc: Ananyev, Konstantin, Stephen Hemminger, Thomas Monjalon, dev,
	Zhang, Helin, Wu, Jingjing

On 09/12/2017 08:51 AM, Jerin Jacob wrote:
>> Tuesday, September 12, 2017 7:01 AM, Jerin Jacob:
>>> Yes, only when ETH_TXQ_FLAGS_NOMULTMEMP and
>>> ETH_TXQ_FLAGS_NOREFCOUNT are selected at Tx queue configuration.
>>>
>>>> So literally, yes, it is not a TX HW offload, though I understand your
>>>> intention to have such a possibility - it might help to save some cycles.
>>> It is not a few cycles. We could see a ~24% per-core drop (with 64B
>>> packets) with testpmd and l3fwd on some SoCs. It is not specific to nicvf
>>> HW; the problem is the limited cache hierarchy in very low-end arm64
>>> machines. For the Tx buffer recycling case, it needs to touch the mbuf
>>> again to find out the associated mempool to free. That is fine if the
>>> application demands it, but not every application demands it.
>>>
>>> We have two categories of arm64 machines: the high-end machines, where
>>> the cache hierarchy is similar to an x86 server machine, and the low-end
>>> ones with very limited cache resources. Unfortunately, we need to have
>>> the same binary on both machines.
>>>
>>>
>>>> I wonder, would some new driver-specific function help in that case?
>>>> nicvf_txq_pool_setup(portid, queueid, struct rte_mempool *txpool,
>>>> uint32_t flags); or so?
>>> It is possible, but how do we make such a change in testpmd, l3fwd or
>>> ipsec-gw, in-tree applications which need only NOMULTIMEMP &
>>> NOREFCOUNT?
>>>
>>> If there is concern about making it Tx queue level, that is fine. We can
>>> move from queue level to port level or global level.
>>> IMO, the application should express in some form that it wants only
>>> NOMULTIMEMP & NOREFCOUNT, and that is the case for l3fwd and ipsec-gw.
>>>
>> I understand the use case, and the fact that those flags improve the performance on low-end ARM CPUs.
>> IMO those flags cannot be on queue/port level. They must be global.
> Where should we have it as global (in terms of API)?
> And why can it not be at port level?

I think port level is the right place for these flags. These flags define
which transmit and transmit cleanup callbacks could be used. These
functions are specified at port level now, and I see no good reason to
change that. Per-queue flags would also complicate any future move to
per-queue (rather than per-port, as now) transmit and transmit cleanup
callbacks.
All three (no-multi-seg, no-multi-mempool, no-reference-counter) are from
one group and should go together.

>> Even though the use-case is generic, the nicvf PMD is the only one which does such optimization.
>> So I am suggesting again - why not expose it as a PMD specific parameter?
> Why make it PMD specific, if the application can express it through
> normative DPDK APIs?
>
>> - The application can express it wants such optimization.
>> - It is global
>>
>> Currently there does not seem to be high demand for such flags from other PMDs. If such demand arises, we can discuss again how to expose them properly.
> It is not PMD specific. It is all about where it runs. It is applicable
> for any PMD that runs on low-end hardware where it needs SW-based Tx
> buffer recycling (the NPU is a different story, as it has a HW-assisted
> mempool manager).
> What are we losing by letting DPDK run effectively on low-end hardware
> with such "on demand" runtime configuration through a normative DPDK API?

+1, and it improves performance on amd64 as well - definitely less than
24%, but noticeable. If the application architecture meets these
conditions, why not allow it to use the advantage and run faster?

^ permalink raw reply	[flat|nested] 134+ messages in thread

* Re: [dpdk-dev] [PATCH v2 2/2] ethdev: introduce Tx queue offloads API
  2017-09-12  6:35                           ` Shahaf Shuler
@ 2017-09-12  6:46                             ` Andrew Rybchenko
  2017-09-12  7:17                             ` Jerin Jacob
  1 sibling, 0 replies; 134+ messages in thread
From: Andrew Rybchenko @ 2017-09-12  6:46 UTC (permalink / raw)
  To: Shahaf Shuler, Jerin Jacob
  Cc: Ananyev, Konstantin, Stephen Hemminger, Thomas Monjalon, dev,
	Zhang, Helin, Wu, Jingjing

On 09/12/2017 09:35 AM, Shahaf Shuler wrote:
> Tuesday, September 12, 2017 8:52 AM, Jerin Jacob:
>>> - The application can express it wants such optimization.
>>> - It is global
>>>
>>> Currently there does not seem to be high demand for such flags from
>>> other PMDs. If such demand arises, we can discuss again how to expose
>>> them properly.
>>
>> It is not PMD specific. It is all about where it runs. It is applicable
>> for any PMD that runs on low-end hardware where it needs SW-based Tx
>> buffer recycling (the NPU is a different story, as it has a HW-assisted
>> mempool manager).
> Maybe, but I don't see another PMD which uses those flags. Are you aware of any plans to add such optimizations?
> You are pushing for a generic API which is currently used only by a single entity.

http://dpdk.org/ml/archives/dev/2017-September/074907.html

^ permalink raw reply	[flat|nested] 134+ messages in thread

* Re: [dpdk-dev] [PATCH v2 2/2] ethdev: introduce Tx queue offloads API
  2017-09-12  6:43                           ` Andrew Rybchenko
@ 2017-09-12  6:59                             ` Shahaf Shuler
  0 siblings, 0 replies; 134+ messages in thread
From: Shahaf Shuler @ 2017-09-12  6:59 UTC (permalink / raw)
  To: Andrew Rybchenko, Jerin Jacob
  Cc: Ananyev, Konstantin, Stephen Hemminger, Thomas Monjalon, dev,
	Zhang, Helin, Wu, Jingjing

September 12, 2017 9:43 AM, Andrew Rybchenko:

> I think port level is the right place for these flags. These flags define
> which transmit and transmit cleanup callbacks could be used. These
> functions are specified at port level now, and I see no good reason to
> change that.

The Tx queue flags are currently not per-port, but rather per-queue: the flags are provided as an input to tx_queue_setup.
Even though the applications and examples in the DPDK tree use identical flags for all queues, it doesn't mean an application is not allowed to do otherwise.

> Per-queue flags would also complicate any future move to per-queue
> (rather than per-port, as now) transmit and transmit cleanup callbacks.
> All three (no-multi-seg, no-multi-mempool, no-reference-counter) are from
> one group and should go together.
>
>>> Even though the use-case is generic, the nicvf PMD is the only one
>>> which does such optimization.
>>> So I am suggesting again - why not expose it as a PMD specific parameter?
>>
>> Why make it PMD specific, if the application can express it through
>> normative DPDK APIs?
>>
>>> - The application can express it wants such optimization.
>>> - It is global
>>>
>>> Currently there does not seem to be high demand for such flags from
>>> other PMDs. If such demand arises, we can discuss again how to expose
>>> them properly.
>>
>> It is not PMD specific. It is all about where it runs. It is applicable
>> for any PMD that runs on low-end hardware where it needs SW-based Tx
>> buffer recycling (the NPU is a different story, as it has a HW-assisted
>> mempool manager).
>> What are we losing by letting DPDK run effectively on low-end hardware
>> with such "on demand" runtime configuration through a normative DPDK API?
>
> +1, and it improves performance on amd64 as well - definitely less than
> 24%, but noticeable. If the application architecture meets these
> conditions, why not allow it to use the advantage and run faster?

^ permalink raw reply	[flat|nested] 134+ messages in thread

* Re: [dpdk-dev] [PATCH v2 2/2] ethdev: introduce Tx queue offloads API
  2017-09-12  6:35                           ` Shahaf Shuler
  2017-09-12  6:46                             ` Andrew Rybchenko
@ 2017-09-12  7:17                             ` Jerin Jacob
  2017-09-12  8:03                               ` Shahaf Shuler
  1 sibling, 1 reply; 134+ messages in thread
From: Jerin Jacob @ 2017-09-12  7:17 UTC (permalink / raw)
  To: Shahaf Shuler
  Cc: Ananyev, Konstantin, Stephen Hemminger, Thomas Monjalon, dev,
	Zhang, Helin, Wu, Jingjing

-----Original Message-----
> Date: Tue, 12 Sep 2017 06:35:16 +0000
> From: Shahaf Shuler <shahafs@mellanox.com>
> To: Jerin Jacob <jerin.jacob@caviumnetworks.com>
> CC: "Ananyev, Konstantin" <konstantin.ananyev@intel.com>, Stephen Hemminger
>  <stephen@networkplumber.org>, Thomas Monjalon <thomas@monjalon.net>,
>  "dev@dpdk.org" <dev@dpdk.org>, "Zhang, Helin" <helin.zhang@intel.com>,
>  "Wu, Jingjing" <jingjing.wu@intel.com>
> Subject: RE: [dpdk-dev] [PATCH v2 2/2] ethdev: introduce Tx queue offloads
>  API
> 
> Tuesday, September 12, 2017 8:52 AM, Jerin Jacob:
> > > I understand the use case, and the fact that those flags improve
> > > performance on low-end ARM CPUs.
> > > IMO those flags cannot be on queue/port level. They must be global.
> > 
> > Where should we have it as global (in terms of API)?
> > And why can it not be at port level?
> 
> Because I don't think there is a use-case where an application would want
> refcounting on one port and not on the other. Either the application
> clones mbufs or it doesn't.
> Same for the multiple mempools: either the application uses them or it
> doesn't.

Why not? If one port is given to the data plane and another port to the
control plane, they can have different characteristics.

Making it port level, we can achieve the global use case as well, but not
the other way around.

The MULTISEG flag also has the same attribute, but for some reason you are
OK with including that in the flags.

> 
> If there is a strong use-case for an application to clone mbufs on port X
> but not on port Y, then maybe that is enough to make it per-port.
> We can go even further - why not have the guarantee per queue? It is
> possible if the application is willing to manage it.
> 
> Again, those are not offloads; therefore, if we expose them, it should be
> in a different location than the offloads field on eth conf.

What is the definition of an offload? It is something we can offload to HW.
If so, then reference counting we can offload to HW with an external HW
pool manager, which DPDK supports now.

> 
> > 
> > >
> > > Even though the use-case is generic, the nicvf PMD is the only one
> > > which does such optimization.
> > > So I am suggesting again - why not expose it as a PMD specific parameter?
> > 
> > Why make it PMD specific, if the application can express it through
> > normative DPDK APIs?
> > 
> > >
> > > - The application can express it wants such optimization.
> > > - It is global
> > >
> > > Currently there does not seem to be high demand for such flags from
> > > other PMDs. If such demand arises, we can discuss again how to expose
> > > them properly.
> > 
> > It is not PMD specific. It is all about where it runs. It is applicable
> > for any PMD that runs on low-end hardware where it needs SW-based Tx
> > buffer recycling (the NPU is a different story, as it has a HW-assisted
> > mempool manager).
> 
> Maybe, but I don't see another PMD which uses those flags. Are you aware of any plans to add such optimizations?

Sorry, I can't comment on another vendor's PMD roadmap.

> You are pushing for a generic API which is currently used only by a single entity.

You are removing an existing generic flag.

> 
> > What are we losing by letting DPDK run effectively on low-end hardware
> > with such "on demand" runtime configuration through a normative DPDK API?
> 
> Complexity of APIs for applications. More structs on ethdev, more API
> definitions, more fields to be configured by the application, all valid
> for a single PMD.
> For the rest of the PMDs, those fields are currently don't-care.

I don't understand the application complexity part. It is just
configuration at port level, and it is at the application's will; it can
choose to run in any mode. BTW, it all boils down to features and
performance/watt. IMO, everything should be runtime configurable.


^ permalink raw reply	[flat|nested] 134+ messages in thread

* Re: [dpdk-dev] [PATCH v2 2/2] ethdev: introduce Tx queue offloads API
  2017-09-12  7:17                             ` Jerin Jacob
@ 2017-09-12  8:03                               ` Shahaf Shuler
  2017-09-12 10:27                                 ` Andrew Rybchenko
  0 siblings, 1 reply; 134+ messages in thread
From: Shahaf Shuler @ 2017-09-12  8:03 UTC (permalink / raw)
  To: Jerin Jacob
  Cc: Ananyev, Konstantin, Stephen Hemminger, Thomas Monjalon, dev,
	Zhang, Helin, Wu, Jingjing

Tuesday, September 12, 2017 10:18 AM, Jerin Jacob:
> > Tuesday, September 12, 2017 8:52 AM, Jerin Jacob:
> > > > I understand the use case, and the fact that those flags improve
> > > > performance on low-end ARM CPUs.
> > > > IMO those flags cannot be on queue/port level. They must be global.
> > >
> > > Where should we have it as global (in terms of API)?
> > > And why can it not be at port level?
> >
> > Because I don't think there is a use-case where an application would
> > want refcounting on one port and not on the other. Either the
> > application clones mbufs or it doesn't.
> > Same for the multiple mempools: either the application uses them or it
> > doesn't.
> 
> Why not? If one port is given to the data plane and another port to the
> control plane, they can have different characteristics.
> 
> Making it port level, we can achieve the global use case as well, but not
> the other way around.
> 
> The MULTISEG flag also has the same attribute, but for some reason you
> are OK with including that in the flags.
> 
> >
> > If there is a strong use-case for an application to clone mbufs on port
> > X but not on port Y, then maybe that is enough to make it per-port.
> > We can go even further - why not have the guarantee per queue? It is
> > possible if the application is willing to manage it.
> >
> > Again, those are not offloads; therefore, if we expose them, it should
> > be in a different location than the offloads field on eth conf.
> 
> What is the definition of an offload? It is something we can offload to
> HW. If so, then reference counting we can offload to HW with an external
> HW pool manager, which DPDK supports now.

OK, the requirement for such flags is now well understood. Thanks for your replies.

I think that for simplicity I will add two more flags to the Tx offload capabilities:

DEV_TX_OFFLOADS_MULTI_MEMPOOL /** Device supports transmission of mbufs from multiple mempools. */
DEV_TX_OFFLOADS_INDIRECT_MBUFS /** Device supports transmission of indirect mbufs. */

Those caps can be reported by the PMD as per-port/per-queue offloads. The application will choose how to set them. When not set, the PMD can assume all mbufs have ref_cnt = 1 and come from the same mempool.

Any objection?
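
For illustration, a minimal sketch of what this would look like from the
application side - assuming the proposed flag names above, placeholder bit
values, and the txconf "offloads" field plus ETH_TXQ_FLAGS_IGNORE mechanism
introduced by the patches later in this thread:

#include <rte_ethdev.h>

/* Placeholder bit values; the proposed flags are not part of any released API. */
#define DEV_TX_OFFLOADS_MULTI_MEMPOOL  (1ULL << 62)
#define DEV_TX_OFFLOADS_INDIRECT_MBUFS (1ULL << 63)

static int
app_tx_queue_setup(uint8_t port_id, uint16_t queue_id, uint16_t nb_desc)
{
	struct rte_eth_dev_info info;
	struct rte_eth_txconf txq_conf;

	rte_eth_dev_info_get(port_id, &info);
	txq_conf = info.default_txconf;
	/* Opt into the offloads-based API instead of ETH_TXQ_FLAGS_NO*. */
	txq_conf.txq_flags = ETH_TXQ_FLAGS_IGNORE;
	/*
	 * An application like l3fwd, which neither clones mbufs nor mixes
	 * mempools on Tx, leaves both flags unset; a PMD such as nicvf can
	 * then select a cheaper Tx completion path.
	 */
	txq_conf.offloads = 0;

	return rte_eth_tx_queue_setup(port_id, queue_id, nb_desc,
				      rte_eth_dev_socket_id(port_id),
				      &txq_conf);
}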

^ permalink raw reply	[flat|nested] 134+ messages in thread

* Re: [dpdk-dev] [PATCH v2 2/2] ethdev: introduce Tx queue offloads API
  2017-09-12  8:03                               ` Shahaf Shuler
@ 2017-09-12 10:27                                 ` Andrew Rybchenko
  2017-09-12 14:26                                   ` Ananyev, Konstantin
  0 siblings, 1 reply; 134+ messages in thread
From: Andrew Rybchenko @ 2017-09-12 10:27 UTC (permalink / raw)
  To: Shahaf Shuler, Jerin Jacob
  Cc: Ananyev, Konstantin, Stephen Hemminger, Thomas Monjalon, dev,
	Zhang, Helin, Wu, Jingjing

On 09/12/2017 11:03 AM, Shahaf Shuler wrote:
> OK, the requirement for such flags is now well understood. Thanks for your replies.
>
> I think that for simplicity I will add two more flags to the Tx offload capabilities:
>
> DEV_TX_OFFLOADS_MULTI_MEMPOOL /** Device supports transmission of mbufs from multiple mempools. */
> DEV_TX_OFFLOADS_INDIRECT_MBUFS /** Device supports transmission of indirect mbufs. */

Indirect mbufs are just one example where reference counters are required;
direct mbufs may use reference counters as well.

> Those caps can be reported by the PMD as per-port/per-queue offloads. The application will choose how to set them. When not set, the PMD can assume all mbufs have ref_cnt = 1 and come from the same mempool.
>
> Any objection?

^ permalink raw reply	[flat|nested] 134+ messages in thread

* Re: [dpdk-dev] [PATCH v2 2/2] ethdev: introduce Tx queue offloads API
  2017-09-12 10:27                                 ` Andrew Rybchenko
@ 2017-09-12 14:26                                   ` Ananyev, Konstantin
  2017-09-12 14:36                                     ` Jerin Jacob
  0 siblings, 1 reply; 134+ messages in thread
From: Ananyev, Konstantin @ 2017-09-12 14:26 UTC (permalink / raw)
  To: Andrew Rybchenko, Shahaf Shuler, Jerin Jacob
  Cc: Stephen Hemminger, Thomas Monjalon, dev, Zhang, Helin, Wu, Jingjing



> -----Original Message-----
> From: Andrew Rybchenko [mailto:arybchenko@solarflare.com]
> Sent: Tuesday, September 12, 2017 11:28 AM
> To: Shahaf Shuler <shahafs@mellanox.com>; Jerin Jacob <jerin.jacob@caviumnetworks.com>
> Cc: Ananyev, Konstantin <konstantin.ananyev@intel.com>; Stephen Hemminger <stephen@networkplumber.org>; Thomas Monjalon
> <thomas@monjalon.net>; dev@dpdk.org; Zhang, Helin <helin.zhang@intel.com>; Wu, Jingjing <jingjing.wu@intel.com>
> Subject: Re: [dpdk-dev] [PATCH v2 2/2] ethdev: introduce Tx queue offloads API
> 
> On 09/12/2017 11:03 AM, Shahaf Shuler wrote:
> > OK, the requirement for such flags is now well understood. Thanks for your replies.
> >
> > I think that for simplicity I will add two more flags to the Tx offload capabilities:
> >
> > DEV_TX_OFFLOADS_MULTI_MEMPOOL /** Device supports transmission of mbufs from multiple mempools. */
> > DEV_TX_OFFLOADS_INDIRECT_MBUFS /** Device supports transmission of indirect mbufs. */
> 
> Indirect mbufs are just one example where reference counters are required;
> direct mbufs may use reference counters as well.

Personally, I am still in favor of moving these 2 flags away from TX_OFFLOADS.
But if people think it would be really helpful to keep them, should we then have
DEV_TX_OFFLOADS_FAST_FREE (or whatever the name will be) -
it would mean the same as what (NOMULTIMEMP | NOREFCOUNT) means now?
Konstantin

> 
> > Those caps can be reported by the PMD as per-port/per-queue offloads. The application will choose how to set them. When not set, the PMD
> > can assume all mbufs have ref_cnt = 1 and come from the same mempool.
> >
> > Any objection?
> 


^ permalink raw reply	[flat|nested] 134+ messages in thread

* Re: [dpdk-dev] [PATCH v2 2/2] ethdev: introduce Tx queue offloads API
  2017-09-12 14:26                                   ` Ananyev, Konstantin
@ 2017-09-12 14:36                                     ` Jerin Jacob
  2017-09-12 14:43                                       ` Andrew Rybchenko
  0 siblings, 1 reply; 134+ messages in thread
From: Jerin Jacob @ 2017-09-12 14:36 UTC (permalink / raw)
  To: Ananyev, Konstantin
  Cc: Andrew Rybchenko, Shahaf Shuler, Stephen Hemminger,
	Thomas Monjalon, dev, Zhang, Helin, Wu, Jingjing

-----Original Message-----
> Date: Tue, 12 Sep 2017 14:26:38 +0000
> From: "Ananyev, Konstantin" <konstantin.ananyev@intel.com>
> To: Andrew Rybchenko <arybchenko@solarflare.com>, Shahaf Shuler
>  <shahafs@mellanox.com>, Jerin Jacob <jerin.jacob@caviumnetworks.com>
> CC: Stephen Hemminger <stephen@networkplumber.org>, Thomas Monjalon
>  <thomas@monjalon.net>, "dev@dpdk.org" <dev@dpdk.org>, "Zhang, Helin"
>  <helin.zhang@intel.com>, "Wu, Jingjing" <jingjing.wu@intel.com>
> Subject: RE: [dpdk-dev] [PATCH v2 2/2] ethdev: introduce Tx queue offloads
>  API
> 
> 
> 
> > -----Original Message-----
> > From: Andrew Rybchenko [mailto:arybchenko@solarflare.com]
> > Sent: Tuesday, September 12, 2017 11:28 AM
> > To: Shahaf Shuler <shahafs@mellanox.com>; Jerin Jacob <jerin.jacob@caviumnetworks.com>
> > Cc: Ananyev, Konstantin <konstantin.ananyev@intel.com>; Stephen Hemminger <stephen@networkplumber.org>; Thomas Monjalon
> > <thomas@monjalon.net>; dev@dpdk.org; Zhang, Helin <helin.zhang@intel.com>; Wu, Jingjing <jingjing.wu@intel.com>
> > Subject: Re: [dpdk-dev] [PATCH v2 2/2] ethdev: introduce Tx queue offloads API
> > 
> > On 09/12/2017 11:03 AM, Shahaf Shuler wrote:
> > > OK, the requirement for such flags is now well understood. Thanks for your replies.
> > >
> > > I think that for simplicity I will add two more flags to the Tx offload capabilities:
> > >
> > > DEV_TX_OFFLOADS_MULTI_MEMPOOL /** Device supports transmission of mbufs from multiple mempools. */
> > > DEV_TX_OFFLOADS_INDIRECT_MBUFS /** Device supports transmission of indirect mbufs. */
> > 
> > Indirect mbufs are just one example where reference counters are required;
> > direct mbufs may use reference counters as well.
> 
> Personally, I am still in favor of moving these 2 flags away from TX_OFFLOADS.
> But if people think it would be really helpful to keep them, should we then have
> DEV_TX_OFFLOADS_FAST_FREE (or whatever the name will be) -
> it would mean the same as what (NOMULTIMEMP | NOREFCOUNT) means now?

I am not too concerned about the name. Yes, it should mean the existing
(NOMULTIMEMP | NOREFCOUNT).


> Konstantin
> 
> > 
> > > Those caps can be reported by the PMD as per-port/per-queue offloads. The application will choose how to set them. When not set, the PMD
> > > can assume all mbufs have ref_cnt = 1 and come from the same mempool.
> > >
> > > Any objection?
> > 
> 

^ permalink raw reply	[flat|nested] 134+ messages in thread

* Re: [dpdk-dev] [PATCH v2 2/2] ethdev: introduce Tx queue offloads API
  2017-09-12 14:36                                     ` Jerin Jacob
@ 2017-09-12 14:43                                       ` Andrew Rybchenko
  0 siblings, 0 replies; 134+ messages in thread
From: Andrew Rybchenko @ 2017-09-12 14:43 UTC (permalink / raw)
  To: Jerin Jacob, Ananyev, Konstantin
  Cc: Shahaf Shuler, Stephen Hemminger, Thomas Monjalon, dev, Zhang,
	Helin, Wu, Jingjing

On 09/12/2017 05:36 PM, Jerin Jacob wrote:
> -----Original Message-----
>> Date: Tue, 12 Sep 2017 14:26:38 +0000
>> From: "Ananyev, Konstantin" <konstantin.ananyev@intel.com>
>> To: Andrew Rybchenko <arybchenko@solarflare.com>, Shahaf Shuler
>>   <shahafs@mellanox.com>, Jerin Jacob <jerin.jacob@caviumnetworks.com>
>> CC: Stephen Hemminger <stephen@networkplumber.org>, Thomas Monjalon
>>   <thomas@monjalon.net>, "dev@dpdk.org" <dev@dpdk.org>, "Zhang, Helin"
>>   <helin.zhang@intel.com>, "Wu, Jingjing" <jingjing.wu@intel.com>
>> Subject: RE: [dpdk-dev] [PATCH v2 2/2] ethdev: introduce Tx queue offloads
>>   API
>>
>>
>>
>>> -----Original Message-----
>>> From: Andrew Rybchenko [mailto:arybchenko@solarflare.com]
>>> Sent: Tuesday, September 12, 2017 11:28 AM
>>> To: Shahaf Shuler <shahafs@mellanox.com>; Jerin Jacob <jerin.jacob@caviumnetworks.com>
>>> Cc: Ananyev, Konstantin <konstantin.ananyev@intel.com>; Stephen Hemminger <stephen@networkplumber.org>; Thomas Monjalon
>>> <thomas@monjalon.net>; dev@dpdk.org; Zhang, Helin <helin.zhang@intel.com>; Wu, Jingjing <jingjing.wu@intel.com>
>>> Subject: Re: [dpdk-dev] [PATCH v2 2/2] ethdev: introduce Tx queue offloads API
>>>
>>> On 09/12/2017 11:03 AM, Shahaf Shuler wrote:
>>>> OK, the requirement for such flags is now well understood. Thanks for your replies.
>>>>
>>>> I think that for simplicity I will add two more flags to the Tx offload capabilities:
>>>>
>>>> DEV_TX_OFFLOADS_MULTI_MEMPOOL /** Device supports transmission of mbufs from multiple mempools. */
>>>> DEV_TX_OFFLOADS_INDIRECT_MBUFS /** Device supports transmission of indirect mbufs. */
>>> Indirect mbufs are just one example where reference counters are required;
>>> direct mbufs may use reference counters as well.
>> Personally, I am still in favor of moving these 2 flags away from TX_OFFLOADS.
>> But if people think it would be really helpful to keep them, should we then have
>> DEV_TX_OFFLOADS_FAST_FREE (or whatever the name will be) -
>> it would mean the same as what (NOMULTIMEMP | NOREFCOUNT) means now?
> I am not too concerned about the name. Yes, it should mean the existing
> (NOMULTIMEMP | NOREFCOUNT).

Merging these two flags together is OK for me as well.
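
To make the agreed semantics concrete, here is a minimal sketch of how a
PMD's Tx completion path could exploit the merged guarantee, using the
DEV_TX_OFFLOAD_MBUF_FAST_FREE name that v3 below adopts; the ring
bookkeeping is a made-up internal layout, not any real driver's code:

#include <rte_ethdev.h>
#include <rte_mbuf.h>
#include <rte_mempool.h>

struct txq_sw {
	struct rte_mbuf **sw_ring;  /* mbufs posted to the HW ring */
	uint16_t next_to_clean;
	uint16_t ring_mask;
	uint64_t offloads;          /* queue offloads from tx_queue_setup() */
};

static void
txq_free_done(struct txq_sw *txq, uint16_t nb_done)
{
	uint16_t i, idx = txq->next_to_clean;

	if (txq->offloads & DEV_TX_OFFLOAD_MBUF_FAST_FREE) {
		/*
		 * All mbufs are guaranteed to have refcnt == 1 and to come
		 * from a single mempool: bulk-return them without reading
		 * each mbuf's refcnt or pool pointer (ring wrap-around is
		 * ignored here for brevity).
		 */
		rte_mempool_put_bulk(txq->sw_ring[idx]->pool,
				     (void **)&txq->sw_ring[idx], nb_done);
	} else {
		/* Generic path: per-mbuf reference count handling. */
		for (i = 0; i < nb_done; i++)
			rte_pktmbuf_free_seg(
				txq->sw_ring[(idx + i) & txq->ring_mask]);
	}
	txq->next_to_clean = (idx + nb_done) & txq->ring_mask;
}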

^ permalink raw reply	[flat|nested] 134+ messages in thread

* [dpdk-dev] [PATCH v3 0/2] ethdev new offloads API
  2017-09-10 12:07 ` [dpdk-dev] [PATCH v2 0/2] ethdev " Shahaf Shuler
  2017-09-10 12:07   ` [dpdk-dev] [PATCH v2 1/2] ethdev: introduce Rx queue " Shahaf Shuler
  2017-09-10 12:07   ` [dpdk-dev] [PATCH v2 2/2] ethdev: introduce Tx " Shahaf Shuler
@ 2017-09-13  6:37   ` Shahaf Shuler
  2017-09-13  6:37     ` [dpdk-dev] [PATCH v3 1/2] ethdev: introduce Rx queue " Shahaf Shuler
                       ` (3 more replies)
  2 siblings, 4 replies; 134+ messages in thread
From: Shahaf Shuler @ 2017-09-13  6:37 UTC (permalink / raw)
  To: thomas; +Cc: dev

Tx offloads configuration is per queue. Tx offloads are enabled by default, 
and can be disabled using ETH_TXQ_FLAGS_NO* flags. 
This behaviour is not consistent with the Rx side where the Rx offloads
configuration is per port. Rx offloads are disabled by default and enabled 
according to bit field in rte_eth_rxmode structure.

Moreover, considering more Tx and Rx offloads will be added 
over time, the cost of managing them all inside the PMD will be tremendous,
as the PMD will need to check the matching for the entire offload set 
for each mbuf it handles.
In addition, on the current approach each Rx offload added breaks the
ABI compatibility as it requires to add entries to existing bit-fields.
 
The series address above issues by defining a new offloads API.
With the new API, Tx and Rx offloads configuration is per queue.
The offloads are disabled by default. Each offload can be enabled or
disabled using the existing DEV_TX_OFFLOADS_* or DEV_RX_OFFLOADS_* flags.
Such API will enable to easily add or remove offloads, without breaking the
ABI compatibility.

The new API does not have an equivalent for the below Tx flags:

* ETH_TXQ_FLAGS_NOREFCOUNT
* ETH_TXQ_FLAGS_NOMULTMEMP

The reason is that those flags are not to manage offloads, rather some
guarantee from application on the way it uses mbufs, therefore could not be
present as part of DEV_TX_OFFLOADS_*.
Such flags are useful only for benchmarks, and therefore provide a non-realistic    
performance for DPDK customers using simple benchmarks for evaluation.
Leveraging the work being done in this series to clean up those flags.

In order to provide a smooth transition between the APIs the following actions
were taken:
*  The old offloads API is kept for the meanwhile.
*  New capabilities were added for PMD to advertize it has moved to the new
   offloads API.
*  Helper function which copy from old to new API were added to ethdev,
   enabling the PMD to support only one of the APIs.

Per discussion made on the RFC of this series [1], the integration plan which was
decided is to do the transition in two phases:
* ethdev API will move on 17.11.
* Apps and examples will move on 18.02.

This to enable PMD maintainers sufficient time to adopt the new API.

[1]
http://dpdk.org/ml/archives/dev/2017-August/072643.html

on v3:
 - Introduce the DEV_TX_OFFLOAD_MBUF_FAST_FREE to act as an equivalent
   for the no refcnt and single mempool flags.
 - Fix features documentation.
 - Fix comment style.

on v2:
 - Take a new approach of dividing offloads into per-queue and per-port ones.
 - Postpone the Tx/Rx public struct renaming to 18.02.
 - Squash the helper functions into the Rx/Tx offloads intro patches.

Shahaf Shuler (2):
  ethdev: introduce Rx queue offloads API
  ethdev: introduce Tx queue offloads API

 doc/guides/nics/features.rst  |  66 +++++++----
 lib/librte_ether/rte_ethdev.c | 220 ++++++++++++++++++++++++++++++++++---
 lib/librte_ether/rte_ethdev.h |  89 ++++++++++++++-
 3 files changed, 335 insertions(+), 40 deletions(-)

-- 
2.12.0

^ permalink raw reply	[flat|nested] 134+ messages in thread

* [dpdk-dev] [PATCH v3 1/2] ethdev: introduce Rx queue offloads API
  2017-09-13  6:37   ` [dpdk-dev] [PATCH v3 0/2] ethdev new " Shahaf Shuler
@ 2017-09-13  6:37     ` Shahaf Shuler
  2017-09-13  8:13       ` Andrew Rybchenko
  2017-09-13  8:49       ` Andrew Rybchenko
  2017-09-13  6:37     ` [dpdk-dev] [PATCH v3 2/2] ethdev: introduce Tx " Shahaf Shuler
                       ` (2 subsequent siblings)
  3 siblings, 2 replies; 134+ messages in thread
From: Shahaf Shuler @ 2017-09-13  6:37 UTC (permalink / raw)
  To: thomas; +Cc: dev

Introduce a new API to configure Rx offloads.

In the new API, offloads are divided into per-port and per-queue
offloads. The PMD reports capability for each of them.
Offloads are enabled using the existing DEV_RX_OFFLOAD_* flags.
To enable a per-port offload, the offload should be set on both the device
configuration and the queue configuration. To enable a per-queue offload,
the offload can be set only on the queue configuration.

Applications should set the ignore_offload_bitfield bit in the rxmode
structure in order to move to the new API.

The old Rx offloads API is kept for the meanwhile, in order to enable a
smooth transition for PMDs and applications to the new API.
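
For illustration, a minimal sketch (not part of the patch) of an
application opting into the new Rx API; the port ID, queue size, chosen
offloads and mempool are arbitrary and assume the device advertises the
corresponding capabilities:

#include <rte_ethdev.h>

static int
configure_rx(uint8_t port_id, struct rte_mempool *mp)
{
	struct rte_eth_conf conf = { 0 };
	struct rte_eth_dev_info info;
	struct rte_eth_rxconf rxq_conf;
	int ret;

	rte_eth_dev_info_get(port_id, &info);

	/* Opt into the new API and request a per-port offload. */
	conf.rxmode.ignore_offload_bitfield = 1;
	conf.rxmode.offloads = DEV_RX_OFFLOAD_CHECKSUM;

	ret = rte_eth_dev_configure(port_id, 1, 1, &conf);
	if (ret < 0)
		return ret;

	/*
	 * Per-port offloads are repeated in the queue configuration; a
	 * per-queue offload can be added on top if the PMD reports it in
	 * rx_queue_offload_capa.
	 */
	rxq_conf = info.default_rxconf;
	rxq_conf.offloads = conf.rxmode.offloads;
	if (info.rx_queue_offload_capa & DEV_RX_OFFLOAD_SCATTER)
		rxq_conf.offloads |= DEV_RX_OFFLOAD_SCATTER;

	return rte_eth_rx_queue_setup(port_id, 0, 512,
				      rte_eth_dev_socket_id(port_id),
				      &rxq_conf, mp);
}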

Signed-off-by: Shahaf Shuler <shahafs@mellanox.com>
---
 doc/guides/nics/features.rst  |  33 ++++----
 lib/librte_ether/rte_ethdev.c | 156 +++++++++++++++++++++++++++++++++----
 lib/librte_ether/rte_ethdev.h |  51 +++++++++++-
 3 files changed, 210 insertions(+), 30 deletions(-)

diff --git a/doc/guides/nics/features.rst b/doc/guides/nics/features.rst
index 37ffbc68c..4e68144ef 100644
--- a/doc/guides/nics/features.rst
+++ b/doc/guides/nics/features.rst
@@ -179,7 +179,7 @@ Jumbo frame
 
 Supports Rx jumbo frames.
 
-* **[uses]    user config**: ``dev_conf.rxmode.jumbo_frame``,
+* **[uses]    rte_eth_rxconf,rte_eth_rxmode**: ``offloads:DEV_RX_OFFLOAD_JUMBO_FRAME``.
   ``dev_conf.rxmode.max_rx_pkt_len``.
 * **[related] rte_eth_dev_info**: ``max_rx_pktlen``.
 * **[related] API**: ``rte_eth_dev_set_mtu()``.
@@ -192,7 +192,7 @@ Scattered Rx
 
 Supports receiving segmented mbufs.
 
-* **[uses]       user config**: ``dev_conf.rxmode.enable_scatter``.
+* **[uses]       rte_eth_rxconf,rte_eth_rxmode**: ``offloads:DEV_RX_OFFLOAD_SCATTER``.
 * **[implements] datapath**: ``Scattered Rx function``.
 * **[implements] rte_eth_dev_data**: ``scattered_rx``.
 * **[provides]   eth_dev_ops**: ``rxq_info_get:scattered_rx``.
@@ -206,11 +206,11 @@ LRO
 
 Supports Large Receive Offload.
 
-* **[uses]       user config**: ``dev_conf.rxmode.enable_lro``.
+* **[uses]       rte_eth_rxconf,rte_eth_rxmode**: ``offloads:DEV_RX_OFFLOAD_TCP_LRO``.
 * **[implements] datapath**: ``LRO functionality``.
 * **[implements] rte_eth_dev_data**: ``lro``.
 * **[provides]   mbuf**: ``mbuf.ol_flags:PKT_RX_LRO``, ``mbuf.tso_segsz``.
-* **[provides]   rte_eth_dev_info**: ``rx_offload_capa:DEV_RX_OFFLOAD_TCP_LRO``.
+* **[provides]   rte_eth_dev_info**: ``rx_offload_capa,rx_queue_offload_capa:DEV_RX_OFFLOAD_TCP_LRO``.
 
 
 .. _nic_features_tso:
@@ -363,7 +363,7 @@ VLAN filter
 
 Supports filtering of a VLAN Tag identifier.
 
-* **[uses]       user config**: ``dev_conf.rxmode.hw_vlan_filter``.
+* **[uses]       rte_eth_rxconf,rte_eth_rxmode**: ``offloads:DEV_RX_OFFLOAD_VLAN_FILTER``.
 * **[implements] eth_dev_ops**: ``vlan_filter_set``.
 * **[related]    API**: ``rte_eth_dev_vlan_filter()``.
 
@@ -499,7 +499,7 @@ CRC offload
 
 Supports CRC stripping by hardware.
 
-* **[uses] user config**: ``dev_conf.rxmode.hw_strip_crc``.
+* **[uses] rte_eth_rxconf,rte_eth_rxmode**: ``offloads:DEV_RX_OFFLOAD_CRC_STRIP``.
 
 
 .. _nic_features_vlan_offload:
@@ -509,11 +509,10 @@ VLAN offload
 
 Supports VLAN offload to hardware.
 
-* **[uses]       user config**: ``dev_conf.rxmode.hw_vlan_strip``,
-  ``dev_conf.rxmode.hw_vlan_filter``, ``dev_conf.rxmode.hw_vlan_extend``.
+* **[uses]       rte_eth_rxconf,rte_eth_rxmode**: ``offloads:DEV_RX_OFFLOAD_VLAN_STRIP,DEV_RX_OFFLOAD_VLAN_FILTER,DEV_RX_OFFLOAD_VLAN_EXTEND``.
 * **[implements] eth_dev_ops**: ``vlan_offload_set``.
 * **[provides]   mbuf**: ``mbuf.ol_flags:PKT_RX_VLAN_STRIPPED``, ``mbuf.vlan_tci``.
-* **[provides]   rte_eth_dev_info**: ``rx_offload_capa:DEV_RX_OFFLOAD_VLAN_STRIP``,
+* **[provides]   rte_eth_dev_info**: ``rx_offload_capa,rx_queue_offload_capa:DEV_RX_OFFLOAD_VLAN_STRIP``,
   ``tx_offload_capa:DEV_TX_OFFLOAD_VLAN_INSERT``.
 * **[related]    API**: ``rte_eth_dev_set_vlan_offload()``,
   ``rte_eth_dev_get_vlan_offload()``.
@@ -526,10 +525,11 @@ QinQ offload
 
 Supports QinQ (queue in queue) offload.
 
+* **[uses]     rte_eth_rxconf,rte_eth_rxmode**: ``offloads:DEV_RX_OFFLOAD_QINQ_STRIP``.
 * **[uses]     mbuf**: ``mbuf.ol_flags:PKT_TX_QINQ_PKT``.
 * **[provides] mbuf**: ``mbuf.ol_flags:PKT_RX_QINQ_STRIPPED``, ``mbuf.vlan_tci``,
    ``mbuf.vlan_tci_outer``.
-* **[provides] rte_eth_dev_info**: ``rx_offload_capa:DEV_RX_OFFLOAD_QINQ_STRIP``,
+* **[provides] rte_eth_dev_info**: ``rx_offload_capa,rx_queue_offload_capa:DEV_RX_OFFLOAD_QINQ_STRIP``,
   ``tx_offload_capa:DEV_TX_OFFLOAD_QINQ_INSERT``.
 
 
@@ -540,13 +540,13 @@ L3 checksum offload
 
 Supports L3 checksum offload.
 
-* **[uses]     user config**: ``dev_conf.rxmode.hw_ip_checksum``.
+* **[uses]     rte_eth_rxconf,rte_eth_rxmode**: ``offloads:DEV_RX_OFFLOAD_IPV4_CKSUM``.
 * **[uses]     mbuf**: ``mbuf.ol_flags:PKT_TX_IP_CKSUM``,
   ``mbuf.ol_flags:PKT_TX_IPV4`` | ``PKT_TX_IPV6``.
 * **[provides] mbuf**: ``mbuf.ol_flags:PKT_RX_IP_CKSUM_UNKNOWN`` |
   ``PKT_RX_IP_CKSUM_BAD`` | ``PKT_RX_IP_CKSUM_GOOD`` |
   ``PKT_RX_IP_CKSUM_NONE``.
-* **[provides] rte_eth_dev_info**: ``rx_offload_capa:DEV_RX_OFFLOAD_IPV4_CKSUM``,
+* **[provides] rte_eth_dev_info**: ``rx_offload_capa,rx_queue_offload_capa:DEV_RX_OFFLOAD_IPV4_CKSUM``,
   ``tx_offload_capa:DEV_TX_OFFLOAD_IPV4_CKSUM``.
 
 
@@ -557,13 +557,14 @@ L4 checksum offload
 
 Supports L4 checksum offload.
 
+* **[uses]     rte_eth_rxconf,rte_eth_rxmode**: ``offloads:DEV_RX_OFFLOAD_UDP_CKSUM,DEV_RX_OFFLOAD_TCP_CKSUM``.
 * **[uses]     mbuf**: ``mbuf.ol_flags:PKT_TX_IPV4`` | ``PKT_TX_IPV6``,
   ``mbuf.ol_flags:PKT_TX_L4_NO_CKSUM`` | ``PKT_TX_TCP_CKSUM`` |
   ``PKT_TX_SCTP_CKSUM`` | ``PKT_TX_UDP_CKSUM``.
 * **[provides] mbuf**: ``mbuf.ol_flags:PKT_RX_L4_CKSUM_UNKNOWN`` |
   ``PKT_RX_L4_CKSUM_BAD`` | ``PKT_RX_L4_CKSUM_GOOD`` |
   ``PKT_RX_L4_CKSUM_NONE``.
-* **[provides] rte_eth_dev_info**: ``rx_offload_capa:DEV_RX_OFFLOAD_UDP_CKSUM,DEV_RX_OFFLOAD_TCP_CKSUM``,
+* **[provides] rte_eth_dev_info**: ``rx_offload_capa,rx_queue_offload_capa:DEV_RX_OFFLOAD_UDP_CKSUM,DEV_RX_OFFLOAD_TCP_CKSUM``,
   ``tx_offload_capa:DEV_TX_OFFLOAD_UDP_CKSUM,DEV_TX_OFFLOAD_TCP_CKSUM,DEV_TX_OFFLOAD_SCTP_CKSUM``.
 
 
@@ -574,8 +575,9 @@ MACsec offload
 
 Supports MACsec.
 
+* **[uses]     rte_eth_rxconf,rte_eth_rxmode**: ``offloads:DEV_RX_OFFLOAD_MACSEC_STRIP``.
 * **[uses]     mbuf**: ``mbuf.ol_flags:PKT_TX_MACSEC``.
-* **[provides] rte_eth_dev_info**: ``rx_offload_capa:DEV_RX_OFFLOAD_MACSEC_STRIP``,
+* **[provides] rte_eth_dev_info**: ``rx_offload_capa,rx_queue_offload_capa:DEV_RX_OFFLOAD_MACSEC_STRIP``,
   ``tx_offload_capa:DEV_TX_OFFLOAD_MACSEC_INSERT``.
 
 
@@ -586,13 +588,14 @@ Inner L3 checksum
 
 Supports inner packet L3 checksum.
 
+* **[uses]     rte_eth_rxconf,rte_eth_rxmode**: ``offloads:DEV_RX_OFFLOAD_OUTER_IPV4_CKSUM``.
 * **[uses]     mbuf**: ``mbuf.ol_flags:PKT_TX_IP_CKSUM``,
   ``mbuf.ol_flags:PKT_TX_IPV4`` | ``PKT_TX_IPV6``,
   ``mbuf.ol_flags:PKT_TX_OUTER_IP_CKSUM``,
   ``mbuf.ol_flags:PKT_TX_OUTER_IPV4`` | ``PKT_TX_OUTER_IPV6``.
 * **[uses]     mbuf**: ``mbuf.outer_l2_len``, ``mbuf.outer_l3_len``.
 * **[provides] mbuf**: ``mbuf.ol_flags:PKT_RX_EIP_CKSUM_BAD``.
-* **[provides] rte_eth_dev_info**: ``rx_offload_capa:DEV_RX_OFFLOAD_OUTER_IPV4_CKSUM``,
+* **[provides] rte_eth_dev_info**: ``rx_offload_capa,rx_queue_offload_capa:DEV_RX_OFFLOAD_OUTER_IPV4_CKSUM``,
   ``tx_offload_capa:DEV_TX_OFFLOAD_OUTER_IPV4_CKSUM``.
 
 
diff --git a/lib/librte_ether/rte_ethdev.c b/lib/librte_ether/rte_ethdev.c
index 0597641ee..b3c10701e 100644
--- a/lib/librte_ether/rte_ethdev.c
+++ b/lib/librte_ether/rte_ethdev.c
@@ -687,12 +687,90 @@ rte_eth_speed_bitflag(uint32_t speed, int duplex)
 	}
 }
 
+/**
+ * A conversion function from rxmode bitfield API.
+ */
+static void
+rte_eth_convert_rx_offload_bitfield(const struct rte_eth_rxmode *rxmode,
+				    uint64_t *rx_offloads)
+{
+	uint64_t offloads = 0;
+
+	if (rxmode->header_split == 1)
+		offloads |= DEV_RX_OFFLOAD_HEADER_SPLIT;
+	if (rxmode->hw_ip_checksum == 1)
+		offloads |= DEV_RX_OFFLOAD_CHECKSUM;
+	if (rxmode->hw_vlan_filter == 1)
+		offloads |= DEV_RX_OFFLOAD_VLAN_FILTER;
+	if (rxmode->hw_vlan_strip == 1)
+		offloads |= DEV_RX_OFFLOAD_VLAN_STRIP;
+	if (rxmode->hw_vlan_extend == 1)
+		offloads |= DEV_RX_OFFLOAD_VLAN_EXTEND;
+	if (rxmode->jumbo_frame == 1)
+		offloads |= DEV_RX_OFFLOAD_JUMBO_FRAME;
+	if (rxmode->hw_strip_crc == 1)
+		offloads |= DEV_RX_OFFLOAD_CRC_STRIP;
+	if (rxmode->enable_scatter == 1)
+		offloads |= DEV_RX_OFFLOAD_SCATTER;
+	if (rxmode->enable_lro == 1)
+		offloads |= DEV_RX_OFFLOAD_TCP_LRO;
+
+	*rx_offloads = offloads;
+}
+
+/**
+ * A conversion function from rxmode offloads API.
+ */
+static void
+rte_eth_convert_rx_offloads(const uint64_t rx_offloads,
+			    struct rte_eth_rxmode *rxmode)
+{
+
+	if (rx_offloads & DEV_RX_OFFLOAD_HEADER_SPLIT)
+		rxmode->header_split = 1;
+	else
+		rxmode->header_split = 0;
+	if (rx_offloads & DEV_RX_OFFLOAD_CHECKSUM)
+		rxmode->hw_ip_checksum = 1;
+	else
+		rxmode->hw_ip_checksum = 0;
+	if (rx_offloads & DEV_RX_OFFLOAD_VLAN_FILTER)
+		rxmode->hw_vlan_filter = 1;
+	else
+		rxmode->hw_vlan_filter = 0;
+	if (rx_offloads & DEV_RX_OFFLOAD_VLAN_STRIP)
+		rxmode->hw_vlan_strip = 1;
+	else
+		rxmode->hw_vlan_strip = 0;
+	if (rx_offloads & DEV_RX_OFFLOAD_VLAN_EXTEND)
+		rxmode->hw_vlan_extend = 1;
+	else
+		rxmode->hw_vlan_extend = 0;
+	if (rx_offloads & DEV_RX_OFFLOAD_JUMBO_FRAME)
+		rxmode->jumbo_frame = 1;
+	else
+		rxmode->jumbo_frame = 0;
+	if (rx_offloads & DEV_RX_OFFLOAD_CRC_STRIP)
+		rxmode->hw_strip_crc = 1;
+	else
+		rxmode->hw_strip_crc = 0;
+	if (rx_offloads & DEV_RX_OFFLOAD_SCATTER)
+		rxmode->enable_scatter = 1;
+	else
+		rxmode->enable_scatter = 0;
+	if (rx_offloads & DEV_RX_OFFLOAD_TCP_LRO)
+		rxmode->enable_lro = 1;
+	else
+		rxmode->enable_lro = 0;
+}
+
 int
 rte_eth_dev_configure(uint8_t port_id, uint16_t nb_rx_q, uint16_t nb_tx_q,
 		      const struct rte_eth_conf *dev_conf)
 {
 	struct rte_eth_dev *dev;
 	struct rte_eth_dev_info dev_info;
+	struct rte_eth_conf local_conf = *dev_conf;
 	int diag;
 
 	RTE_ETH_VALID_PORTID_OR_ERR_RET(port_id, -EINVAL);
@@ -722,8 +800,20 @@ rte_eth_dev_configure(uint8_t port_id, uint16_t nb_rx_q, uint16_t nb_tx_q,
 		return -EBUSY;
 	}
 
+	/*
+	 * Convert between the offloads API to enable PMDs to support
+	 * only one of them.
+	 */
+	if ((dev_conf->rxmode.ignore_offload_bitfield == 0)) {
+		rte_eth_convert_rx_offload_bitfield(
+				&dev_conf->rxmode, &local_conf.rxmode.offloads);
+	} else {
+		rte_eth_convert_rx_offloads(dev_conf->rxmode.offloads,
+					    &local_conf.rxmode);
+	}
+
 	/* Copy the dev_conf parameter into the dev structure */
-	memcpy(&dev->data->dev_conf, dev_conf, sizeof(dev->data->dev_conf));
+	memcpy(&dev->data->dev_conf, &local_conf, sizeof(dev->data->dev_conf));
 
 	/*
 	 * Check that the numbers of RX and TX queues are not greater
@@ -767,7 +857,7 @@ rte_eth_dev_configure(uint8_t port_id, uint16_t nb_rx_q, uint16_t nb_tx_q,
 	 * If jumbo frames are enabled, check that the maximum RX packet
 	 * length is supported by the configured device.
 	 */
-	if (dev_conf->rxmode.jumbo_frame == 1) {
+	if (local_conf.rxmode.offloads & DEV_RX_OFFLOAD_JUMBO_FRAME) {
 		if (dev_conf->rxmode.max_rx_pkt_len >
 		    dev_info.max_rx_pktlen) {
 			RTE_PMD_DEBUG_TRACE("ethdev port_id=%d max_rx_pkt_len %u"
@@ -1004,6 +1094,7 @@ rte_eth_rx_queue_setup(uint8_t port_id, uint16_t rx_queue_id,
 	uint32_t mbp_buf_size;
 	struct rte_eth_dev *dev;
 	struct rte_eth_dev_info dev_info;
+	struct rte_eth_rxconf local_conf;
 	void **rxq;
 
 	RTE_ETH_VALID_PORTID_OR_ERR_RET(port_id, -EINVAL);
@@ -1074,8 +1165,18 @@ rte_eth_rx_queue_setup(uint8_t port_id, uint16_t rx_queue_id,
 	if (rx_conf == NULL)
 		rx_conf = &dev_info.default_rxconf;
 
+	local_conf = *rx_conf;
+	if (dev->data->dev_conf.rxmode.ignore_offload_bitfield == 0) {
+		/**
+		 * Reflect port offloads to queue offloads in order for
+		 * offloads to not be discarded.
+		 */
+		rte_eth_convert_rx_offload_bitfield(&dev->data->dev_conf.rxmode,
+						    &local_conf.offloads);
+	}
+
 	ret = (*dev->dev_ops->rx_queue_setup)(dev, rx_queue_id, nb_rx_desc,
-					      socket_id, rx_conf, mp);
+					      socket_id, &local_conf, mp);
 	if (!ret) {
 		if (!dev->data->min_rx_buf_size ||
 		    dev->data->min_rx_buf_size > mbp_buf_size)
@@ -1979,7 +2080,8 @@ rte_eth_dev_vlan_filter(uint8_t port_id, uint16_t vlan_id, int on)
 
 	RTE_ETH_VALID_PORTID_OR_ERR_RET(port_id, -ENODEV);
 	dev = &rte_eth_devices[port_id];
-	if (!(dev->data->dev_conf.rxmode.hw_vlan_filter)) {
+	if (!(dev->data->dev_conf.rxmode.offloads &
+	      DEV_RX_OFFLOAD_VLAN_FILTER)) {
 		RTE_PMD_DEBUG_TRACE("port %d: vlan-filtering disabled\n", port_id);
 		return -ENOSYS;
 	}
@@ -2055,23 +2157,41 @@ rte_eth_dev_set_vlan_offload(uint8_t port_id, int offload_mask)
 
 	/*check which option changed by application*/
 	cur = !!(offload_mask & ETH_VLAN_STRIP_OFFLOAD);
-	org = !!(dev->data->dev_conf.rxmode.hw_vlan_strip);
+	org = !!(dev->data->dev_conf.rxmode.offloads &
+		 DEV_RX_OFFLOAD_VLAN_STRIP);
 	if (cur != org) {
-		dev->data->dev_conf.rxmode.hw_vlan_strip = (uint8_t)cur;
+		if (cur)
+			dev->data->dev_conf.rxmode.offloads |=
+				DEV_RX_OFFLOAD_VLAN_STRIP;
+		else
+			dev->data->dev_conf.rxmode.offloads &=
+				~DEV_RX_OFFLOAD_VLAN_STRIP;
 		mask |= ETH_VLAN_STRIP_MASK;
 	}
 
 	cur = !!(offload_mask & ETH_VLAN_FILTER_OFFLOAD);
-	org = !!(dev->data->dev_conf.rxmode.hw_vlan_filter);
+	org = !!(dev->data->dev_conf.rxmode.offloads &
+		 DEV_RX_OFFLOAD_VLAN_FILTER);
 	if (cur != org) {
-		dev->data->dev_conf.rxmode.hw_vlan_filter = (uint8_t)cur;
+		if (cur)
+			dev->data->dev_conf.rxmode.offloads |=
+				DEV_RX_OFFLOAD_VLAN_FILTER;
+		else
+			dev->data->dev_conf.rxmode.offloads &=
+				~DEV_RX_OFFLOAD_VLAN_FILTER;
 		mask |= ETH_VLAN_FILTER_MASK;
 	}
 
 	cur = !!(offload_mask & ETH_VLAN_EXTEND_OFFLOAD);
-	org = !!(dev->data->dev_conf.rxmode.hw_vlan_extend);
+	org = !!(dev->data->dev_conf.rxmode.offloads &
+		 DEV_RX_OFFLOAD_VLAN_EXTEND);
 	if (cur != org) {
-		dev->data->dev_conf.rxmode.hw_vlan_extend = (uint8_t)cur;
+		if (cur)
+			dev->data->dev_conf.rxmode.offloads |=
+				DEV_RX_OFFLOAD_VLAN_EXTEND;
+		else
+			dev->data->dev_conf.rxmode.offloads &=
+				~DEV_RX_OFFLOAD_VLAN_EXTEND;
 		mask |= ETH_VLAN_EXTEND_MASK;
 	}
 
@@ -2080,6 +2200,13 @@ rte_eth_dev_set_vlan_offload(uint8_t port_id, int offload_mask)
 		return ret;
 
 	RTE_FUNC_PTR_OR_ERR_RET(*dev->dev_ops->vlan_offload_set, -ENOTSUP);
+
+	/*
+	 * Convert to the offload bitfield API just in case the underlying PMD
+	 * still supporting it.
+	 */
+	rte_eth_convert_rx_offloads(dev->data->dev_conf.rxmode.offloads,
+				    &dev->data->dev_conf.rxmode);
 	(*dev->dev_ops->vlan_offload_set)(dev, mask);
 
 	return ret;
@@ -2094,13 +2221,16 @@ rte_eth_dev_get_vlan_offload(uint8_t port_id)
 	RTE_ETH_VALID_PORTID_OR_ERR_RET(port_id, -ENODEV);
 	dev = &rte_eth_devices[port_id];
 
-	if (dev->data->dev_conf.rxmode.hw_vlan_strip)
+	if (dev->data->dev_conf.rxmode.offloads &
+	    DEV_RX_OFFLOAD_VLAN_STRIP)
 		ret |= ETH_VLAN_STRIP_OFFLOAD;
 
-	if (dev->data->dev_conf.rxmode.hw_vlan_filter)
+	if (dev->data->dev_conf.rxmode.offloads &
+	    DEV_RX_OFFLOAD_VLAN_FILTER)
 		ret |= ETH_VLAN_FILTER_OFFLOAD;
 
-	if (dev->data->dev_conf.rxmode.hw_vlan_extend)
+	if (dev->data->dev_conf.rxmode.offloads &
+	    DEV_RX_OFFLOAD_VLAN_EXTEND)
 		ret |= ETH_VLAN_EXTEND_OFFLOAD;
 
 	return ret;
diff --git a/lib/librte_ether/rte_ethdev.h b/lib/librte_ether/rte_ethdev.h
index 0adf3274a..ba7a2b2dc 100644
--- a/lib/librte_ether/rte_ethdev.h
+++ b/lib/librte_ether/rte_ethdev.h
@@ -348,7 +348,18 @@ struct rte_eth_rxmode {
 	enum rte_eth_rx_mq_mode mq_mode;
 	uint32_t max_rx_pkt_len;  /**< Only used if jumbo_frame enabled. */
 	uint16_t split_hdr_size;  /**< hdr buf size (header_split enabled).*/
+	/**
+	 * Per-port Rx offloads to be set using DEV_RX_OFFLOAD_* flags.
+	 * Only offloads set on rx_offload_capa field on rte_eth_dev_info
+	 * structure are allowed to be set.
+	 */
+	uint64_t offloads;
 	__extension__
+	/**
+	 * Below bitfield API is obsolete. Application should
+	 * enable per-port offloads using the offload field
+	 * above.
+	 */
 	uint16_t header_split : 1, /**< Header Split enable. */
 		hw_ip_checksum   : 1, /**< IP/UDP/TCP checksum offload enable. */
 		hw_vlan_filter   : 1, /**< VLAN filter enable. */
@@ -357,7 +368,17 @@ struct rte_eth_rxmode {
 		jumbo_frame      : 1, /**< Jumbo Frame Receipt enable. */
 		hw_strip_crc     : 1, /**< Enable CRC stripping by hardware. */
 		enable_scatter   : 1, /**< Enable scatter packets rx handler */
-		enable_lro       : 1; /**< Enable LRO */
+		enable_lro       : 1, /**< Enable LRO */
+		/**
+		 * When set, the offload bitfield should be ignored.
+		 * Instead, per-port Rx offloads should be set on the offloads
+		 * field above.
+		 * Per-queue offloads should be set on the rte_eth_rxconf
+		 * structure.
+		 * This bit is temporary until the rxmode bitfield offloads API
+		 * is deprecated.
+		 */
+		ignore_offload_bitfield : 1;
 };
 
 /**
@@ -691,6 +712,12 @@ struct rte_eth_rxconf {
 	uint16_t rx_free_thresh; /**< Drives the freeing of RX descriptors. */
 	uint8_t rx_drop_en; /**< Drop packets if no descriptors are available. */
 	uint8_t rx_deferred_start; /**< Do not start queue with rte_eth_dev_start(). */
+	/**
+	 * Per-queue Rx offloads to be set using DEV_RX_OFFLOAD_* flags.
+	 * Only offloads set on rx_queue_offload_capa field on rte_eth_dev_info
+	 * structure are allowed to be set.
+	 */
+	uint64_t offloads;
 };
 
 #define ETH_TXQ_FLAGS_NOMULTSEGS 0x0001 /**< nb_segs=1 for all mbufs */
@@ -907,6 +934,18 @@ struct rte_eth_conf {
 #define DEV_RX_OFFLOAD_QINQ_STRIP  0x00000020
 #define DEV_RX_OFFLOAD_OUTER_IPV4_CKSUM 0x00000040
 #define DEV_RX_OFFLOAD_MACSEC_STRIP     0x00000080
+#define DEV_RX_OFFLOAD_HEADER_SPLIT	0x00000100
+#define DEV_RX_OFFLOAD_VLAN_FILTER	0x00000200
+#define DEV_RX_OFFLOAD_VLAN_EXTEND	0x00000400
+#define DEV_RX_OFFLOAD_JUMBO_FRAME	0x00000800
+#define DEV_RX_OFFLOAD_CRC_STRIP	0x00001000
+#define DEV_RX_OFFLOAD_SCATTER		0x00002000
+#define DEV_RX_OFFLOAD_CHECKSUM (DEV_RX_OFFLOAD_IPV4_CKSUM | \
+				 DEV_RX_OFFLOAD_UDP_CKSUM | \
+				 DEV_RX_OFFLOAD_TCP_CKSUM)
+#define DEV_RX_OFFLOAD_VLAN (DEV_RX_OFFLOAD_VLAN_STRIP | \
+			     DEV_RX_OFFLOAD_VLAN_FILTER | \
+			     DEV_RX_OFFLOAD_VLAN_EXTEND)
 
 /**
  * TX offload capabilities of a device.
@@ -949,8 +988,11 @@ struct rte_eth_dev_info {
 	/** Maximum number of hash MAC addresses for MTA and UTA. */
 	uint16_t max_vfs; /**< Maximum number of VFs. */
 	uint16_t max_vmdq_pools; /**< Maximum number of VMDq pools. */
-	uint32_t rx_offload_capa; /**< Device RX offload capabilities. */
+	uint64_t rx_offload_capa;
+	/**< Device per port RX offload capabilities. */
 	uint32_t tx_offload_capa; /**< Device TX offload capabilities. */
+	uint64_t rx_queue_offload_capa;
+	/**< Device per queue RX offload capabilities. */
 	uint16_t reta_size;
 	/**< Device redirection table size, the total number of entries. */
 	uint8_t hash_key_size; /**< Hash key size in bytes */
@@ -1870,6 +1912,9 @@ uint32_t rte_eth_speed_bitflag(uint32_t speed, int duplex);
  *        each statically configurable offload hardware feature provided by
  *        Ethernet devices, such as IP checksum or VLAN tag stripping for
  *        example.
+ *        The Rx offload bitfield API is obsolete and will be deprecated.
+ *        Applications should set the ignore_offload_bitfield bit on *rxmode*
+ *        structure and use offloads field to set per-port offloads instead.
  *     - the Receive Side Scaling (RSS) configuration when using multiple RX
  *         queues per port.
  *
@@ -1923,6 +1968,8 @@ void _rte_eth_dev_reset(struct rte_eth_dev *dev);
  *   The *rx_conf* structure contains an *rx_thresh* structure with the values
  *   of the Prefetch, Host, and Write-Back threshold registers of the receive
  *   ring.
+ *   In addition it contains the hardware offloads features to activate using
+ *   the DEV_RX_OFFLOAD_* flags.
  * @param mb_pool
  *   The pointer to the memory pool from which to allocate *rte_mbuf* network
  *   memory buffers to populate each descriptor of the receive ring.
-- 
2.12.0

^ permalink raw reply	[flat|nested] 134+ messages in thread

* [dpdk-dev] [PATCH v3 2/2] ethdev: introduce Tx queue offloads API
  2017-09-13  6:37   ` [dpdk-dev] [PATCH v3 0/2] ethdev new " Shahaf Shuler
  2017-09-13  6:37     ` [dpdk-dev] [PATCH v3 1/2] ethdev: introduce Rx queue " Shahaf Shuler
@ 2017-09-13  6:37     ` Shahaf Shuler
  2017-09-13  8:40       ` Andrew Rybchenko
  2017-09-13  9:10     ` [dpdk-dev] [PATCH v3 0/2] ethdev new " Andrew Rybchenko
  2017-09-17  6:54     ` [dpdk-dev] [PATCH v4 0/3] " Shahaf Shuler
  3 siblings, 1 reply; 134+ messages in thread
From: Shahaf Shuler @ 2017-09-13  6:37 UTC (permalink / raw)
  To: thomas; +Cc: dev

Introduce a new API to configure Tx offloads.

In the new API, offloads are divided into per-port and per-queue
offloads. The PMD reports capability for each of them.
Offloads are enabled using the existing DEV_TX_OFFLOAD_* flags.
To enable a per-port offload, the offload should be set on both the device
configuration and the queue configuration. To enable a per-queue offload,
the offload can be set only on the queue configuration.

In addition, Tx offloads will be disabled by default and enabled per
application needs. This will greatly simplify PMD management of the
different offloads.

Applications should set the ETH_TXQ_FLAGS_IGNORE flag in the txq_flags
field in order to move to the new API.

The old Tx offloads API is kept for the meanwhile, in order to enable a
smooth transition for PMDs and applications to the new API.
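
For illustration, a minimal sketch (not part of the patch) of a queue
setup opting into the new Tx API; the chosen offloads assume the device
advertises them in tx_offload_capa/tx_queue_offload_capa, and per-port Tx
offloads would additionally be set in dev_conf.txmode.offloads:

#include <rte_ethdev.h>

static int
configure_tx_queue(uint8_t port_id, uint16_t queue_id,
		   const struct rte_eth_dev_info *info)
{
	struct rte_eth_txconf txq_conf = info->default_txconf;

	/* Ignore the legacy ETH_TXQ_FLAGS_NO* bitfield. */
	txq_conf.txq_flags = ETH_TXQ_FLAGS_IGNORE;
	/* Request TSO plus the fast-free guarantee on this queue. */
	txq_conf.offloads = DEV_TX_OFFLOAD_TCP_TSO |
			    DEV_TX_OFFLOAD_MBUF_FAST_FREE;

	return rte_eth_tx_queue_setup(port_id, queue_id, 512,
				      rte_eth_dev_socket_id(port_id),
				      &txq_conf);
}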

Signed-off-by: Shahaf Shuler <shahafs@mellanox.com>
---
 doc/guides/nics/features.rst  | 33 +++++++++++++++-----
 lib/librte_ether/rte_ethdev.c | 64 +++++++++++++++++++++++++++++++++++++-
 lib/librte_ether/rte_ethdev.h | 38 +++++++++++++++++++++-
 3 files changed, 125 insertions(+), 10 deletions(-)

diff --git a/doc/guides/nics/features.rst b/doc/guides/nics/features.rst
index 4e68144ef..1a8af473b 100644
--- a/doc/guides/nics/features.rst
+++ b/doc/guides/nics/features.rst
@@ -131,7 +131,8 @@ Lock-free Tx queue
 If a PMD advertises DEV_TX_OFFLOAD_MT_LOCKFREE capable, multiple threads can
 invoke rte_eth_tx_burst() concurrently on the same Tx queue without SW lock.
 
-* **[provides] rte_eth_dev_info**: ``tx_offload_capa:DEV_TX_OFFLOAD_MT_LOCKFREE``.
+* **[uses]    rte_eth_txconf,rte_eth_txmode**: ``offloads:DEV_TX_OFFLOAD_MT_LOCKFREE``.
+* **[provides] rte_eth_dev_info**: ``tx_offload_capa,tx_queue_offload_capa:DEV_TX_OFFLOAD_MT_LOCKFREE``.
 * **[related]  API**: ``rte_eth_tx_burst()``.
 
 
@@ -220,11 +221,12 @@ TSO
 
 Supports TCP Segmentation Offloading.
 
+* **[uses]       rte_eth_txconf,rte_eth_txmode**: ``offloads:DEV_TX_OFFLOAD_TCP_TSO``.
 * **[uses]       rte_eth_desc_lim**: ``nb_seg_max``, ``nb_mtu_seg_max``.
 * **[uses]       mbuf**: ``mbuf.ol_flags:PKT_TX_TCP_SEG``.
 * **[uses]       mbuf**: ``mbuf.tso_segsz``, ``mbuf.l2_len``, ``mbuf.l3_len``, ``mbuf.l4_len``.
 * **[implements] datapath**: ``TSO functionality``.
-* **[provides]   rte_eth_dev_info**: ``tx_offload_capa:DEV_TX_OFFLOAD_TCP_TSO,DEV_TX_OFFLOAD_UDP_TSO``.
+* **[provides]   rte_eth_dev_info**: ``tx_offload_capa,tx_queue_offload_capa:DEV_TX_OFFLOAD_TCP_TSO,DEV_TX_OFFLOAD_UDP_TSO``.
 
 
 .. _nic_features_promiscuous_mode:
@@ -510,10 +512,11 @@ VLAN offload
 Supports VLAN offload to hardware.
 
 * **[uses]       rte_eth_rxconf,rte_eth_rxmode**: ``offloads:DEV_RX_OFFLOAD_VLAN_STRIP,DEV_RX_OFFLOAD_VLAN_FILTER,DEV_RX_OFFLOAD_VLAN_EXTEND``.
+* **[uses]       rte_eth_txconf,rte_eth_txmode**: ``offloads:DEV_TX_OFFLOAD_VLAN_INSERT``.
 * **[implements] eth_dev_ops**: ``vlan_offload_set``.
 * **[provides]   mbuf**: ``mbuf.ol_flags:PKT_RX_VLAN_STRIPPED``, ``mbuf.vlan_tci``.
 * **[provides]   rte_eth_dev_info**: ``rx_offload_capa,rx_queue_offload_capa:DEV_RX_OFFLOAD_VLAN_STRIP``,
-  ``tx_offload_capa:DEV_TX_OFFLOAD_VLAN_INSERT``.
+  ``tx_offload_capa,tx_queue_offload_capa:DEV_TX_OFFLOAD_VLAN_INSERT``.
 * **[related]    API**: ``rte_eth_dev_set_vlan_offload()``,
   ``rte_eth_dev_get_vlan_offload()``.
 
@@ -526,11 +529,12 @@ QinQ offload
 Supports QinQ (queue in queue) offload.
 
 * **[uses]     rte_eth_rxconf,rte_eth_rxmode**: ``offloads:DEV_RX_OFFLOAD_QINQ_STRIP``.
+* **[uses]     rte_eth_txconf,rte_eth_txmode**: ``offloads:DEV_TX_OFFLOAD_QINQ_INSERT``.
 * **[uses]     mbuf**: ``mbuf.ol_flags:PKT_TX_QINQ_PKT``.
 * **[provides] mbuf**: ``mbuf.ol_flags:PKT_RX_QINQ_STRIPPED``, ``mbuf.vlan_tci``,
    ``mbuf.vlan_tci_outer``.
 * **[provides] rte_eth_dev_info**: ``rx_offload_capa,rx_queue_offload_capa:DEV_RX_OFFLOAD_QINQ_STRIP``,
-  ``tx_offload_capa:DEV_TX_OFFLOAD_QINQ_INSERT``.
+  ``tx_offload_capa,tx_queue_offload_capa:DEV_TX_OFFLOAD_QINQ_INSERT``.
 
 
 .. _nic_features_l3_checksum_offload:
@@ -541,13 +545,14 @@ L3 checksum offload
 Supports L3 checksum offload.
 
 * **[uses]     rte_eth_rxconf,rte_eth_rxmode**: ``offloads:DEV_RX_OFFLOAD_IPV4_CKSUM``.
+* **[uses]     rte_eth_txconf,rte_eth_txmode**: ``offloads:DEV_TX_OFFLOAD_IPV4_CKSUM``.
 * **[uses]     mbuf**: ``mbuf.ol_flags:PKT_TX_IP_CKSUM``,
   ``mbuf.ol_flags:PKT_TX_IPV4`` | ``PKT_TX_IPV6``.
 * **[provides] mbuf**: ``mbuf.ol_flags:PKT_RX_IP_CKSUM_UNKNOWN`` |
   ``PKT_RX_IP_CKSUM_BAD`` | ``PKT_RX_IP_CKSUM_GOOD`` |
   ``PKT_RX_IP_CKSUM_NONE``.
 * **[provides] rte_eth_dev_info**: ``rx_offload_capa,rx_queue_offload_capa:DEV_RX_OFFLOAD_IPV4_CKSUM``,
-  ``tx_offload_capa:DEV_TX_OFFLOAD_IPV4_CKSUM``.
+  ``tx_offload_capa,tx_queue_offload_capa:DEV_TX_OFFLOAD_IPV4_CKSUM``.
 
 
 .. _nic_features_l4_checksum_offload:
@@ -558,6 +563,7 @@ L4 checksum offload
 Supports L4 checksum offload.
 
 * **[uses]     rte_eth_rxconf,rte_eth_rxmode**: ``offloads:DEV_RX_OFFLOAD_UDP_CKSUM,DEV_RX_OFFLOAD_TCP_CKSUM``.
+* **[uses]     rte_eth_txconf,rte_eth_txmode**: ``offloads:DEV_TX_OFFLOAD_UDP_CKSUM,DEV_TX_OFFLOAD_TCP_CKSUM,DEV_TX_OFFLOAD_SCTP_CKSUM``.
 * **[uses]     mbuf**: ``mbuf.ol_flags:PKT_TX_IPV4`` | ``PKT_TX_IPV6``,
   ``mbuf.ol_flags:PKT_TX_L4_NO_CKSUM`` | ``PKT_TX_TCP_CKSUM`` |
   ``PKT_TX_SCTP_CKSUM`` | ``PKT_TX_UDP_CKSUM``.
@@ -565,7 +571,7 @@ Supports L4 checksum offload.
   ``PKT_RX_L4_CKSUM_BAD`` | ``PKT_RX_L4_CKSUM_GOOD`` |
   ``PKT_RX_L4_CKSUM_NONE``.
 * **[provides] rte_eth_dev_info**: ``rx_offload_capa,rx_queue_offload_capa:DEV_RX_OFFLOAD_UDP_CKSUM,DEV_RX_OFFLOAD_TCP_CKSUM``,
-  ``tx_offload_capa:DEV_TX_OFFLOAD_UDP_CKSUM,DEV_TX_OFFLOAD_TCP_CKSUM,DEV_TX_OFFLOAD_SCTP_CKSUM``.
+  ``tx_offload_capa,tx_queue_offload_capa:DEV_TX_OFFLOAD_UDP_CKSUM,DEV_TX_OFFLOAD_TCP_CKSUM,DEV_TX_OFFLOAD_SCTP_CKSUM``.
 
 
 .. _nic_features_macsec_offload:
@@ -576,9 +582,10 @@ MACsec offload
 Supports MACsec.
 
 * **[uses]     rte_eth_rxconf,rte_eth_rxmode**: ``offloads:DEV_RX_OFFLOAD_MACSEC_STRIP``.
+* **[uses]     rte_eth_txconf,rte_eth_txmode**: ``offloads:DEV_TX_OFFLOAD_MACSEC_INSERT``.
 * **[uses]     mbuf**: ``mbuf.ol_flags:PKT_TX_MACSEC``.
 * **[provides] rte_eth_dev_info**: ``rx_offload_capa,rx_queue_offload_capa:DEV_RX_OFFLOAD_MACSEC_STRIP``,
-  ``tx_offload_capa:DEV_TX_OFFLOAD_MACSEC_INSERT``.
+  ``tx_offload_capa,tx_queue_offload_capa:DEV_TX_OFFLOAD_MACSEC_INSERT``.
 
 
 .. _nic_features_inner_l3_checksum:
@@ -589,6 +596,7 @@ Inner L3 checksum
 Supports inner packet L3 checksum.
 
 * **[uses]     rte_eth_rxconf,rte_eth_rxmode**: ``offloads:DEV_RX_OFFLOAD_OUTER_IPV4_CKSUM``.
+* **[uses]     rte_eth_txconf,rte_eth_txmode**: ``offloads:DEV_TX_OFFLOAD_OUTER_IPV4_CKSUM``.
 * **[uses]     mbuf**: ``mbuf.ol_flags:PKT_TX_IP_CKSUM``,
   ``mbuf.ol_flags:PKT_TX_IPV4`` | ``PKT_TX_IPV6``,
   ``mbuf.ol_flags:PKT_TX_OUTER_IP_CKSUM``,
@@ -596,7 +604,7 @@ Supports inner packet L3 checksum.
 * **[uses]     mbuf**: ``mbuf.outer_l2_len``, ``mbuf.outer_l3_len``.
 * **[provides] mbuf**: ``mbuf.ol_flags:PKT_RX_EIP_CKSUM_BAD``.
 * **[provides] rte_eth_dev_info**: ``rx_offload_capa,rx_queue_offload_capa:DEV_RX_OFFLOAD_OUTER_IPV4_CKSUM``,
-  ``tx_offload_capa:DEV_TX_OFFLOAD_OUTER_IPV4_CKSUM``.
+  ``tx_offload_capa,tx_queue_offload_capa:DEV_TX_OFFLOAD_OUTER_IPV4_CKSUM``.
 
 
 .. _nic_features_inner_l4_checksum:
@@ -620,6 +628,15 @@ Supports packet type parsing and returns a list of supported types.
 
 .. _nic_features_timesync:
 
+Mbuf fast free
+--------------
+
+Supports optimization for fast release of mbufs following successful Tx.
+Requires all mbufs to come from the same mempool and have refcnt = 1.
+
+* **[uses]       rte_eth_txconf,rte_eth_txmode**: ``offloads:DEV_TX_OFFLOAD_MBUF_FAST_FREE``.
+* **[provides]   rte_eth_dev_info**: ``tx_offload_capa,tx_queue_offload_capa:DEV_TX_OFFLOAD_MBUF_FAST_FREE``.
+
 Timesync
 --------
 
diff --git a/lib/librte_ether/rte_ethdev.c b/lib/librte_ether/rte_ethdev.c
index b3c10701e..85b99588f 100644
--- a/lib/librte_ether/rte_ethdev.c
+++ b/lib/librte_ether/rte_ethdev.c
@@ -1186,6 +1186,55 @@ rte_eth_rx_queue_setup(uint8_t port_id, uint16_t rx_queue_id,
 	return ret;
 }
 
+/**
+ * A conversion function from txq_flags API.
+ */
+static void
+rte_eth_convert_txq_flags(const uint32_t txq_flags, uint64_t *tx_offloads)
+{
+	uint64_t offloads = 0;
+
+	if (!(txq_flags & ETH_TXQ_FLAGS_NOMULTSEGS))
+		offloads |= DEV_TX_OFFLOAD_MULTI_SEGS;
+	if (!(txq_flags & ETH_TXQ_FLAGS_NOVLANOFFL))
+		offloads |= DEV_TX_OFFLOAD_VLAN_INSERT;
+	if (!(txq_flags & ETH_TXQ_FLAGS_NOXSUMSCTP))
+		offloads |= DEV_TX_OFFLOAD_SCTP_CKSUM;
+	if (!(txq_flags & ETH_TXQ_FLAGS_NOXSUMUDP))
+		offloads |= DEV_TX_OFFLOAD_UDP_CKSUM;
+	if (!(txq_flags & ETH_TXQ_FLAGS_NOXSUMTCP))
+		offloads |= DEV_TX_OFFLOAD_TCP_CKSUM;
+	if ((txq_flags & ETH_TXQ_FLAGS_NOREFCOUNT) &&
+	    (txq_flags & ETH_TXQ_FLAGS_NOMULTMEMP))
+		offloads |= DEV_TX_OFFLOAD_MBUF_FAST_FREE;
+
+	*tx_offloads = offloads;
+}
+
+/**
+ * A conversion function from offloads API.
+ */
+static void
+rte_eth_convert_txq_offloads(const uint64_t tx_offloads, uint32_t *txq_flags)
+{
+	uint32_t flags = 0;
+
+	if (!(tx_offloads & DEV_TX_OFFLOAD_MULTI_SEGS))
+		flags |= ETH_TXQ_FLAGS_NOMULTSEGS;
+	if (!(tx_offloads & DEV_TX_OFFLOAD_VLAN_INSERT))
+		flags |= ETH_TXQ_FLAGS_NOVLANOFFL;
+	if (!(tx_offloads & DEV_TX_OFFLOAD_SCTP_CKSUM))
+		flags |= ETH_TXQ_FLAGS_NOXSUMSCTP;
+	if (!(tx_offloads & DEV_TX_OFFLOAD_UDP_CKSUM))
+		flags |= ETH_TXQ_FLAGS_NOXSUMUDP;
+	if (!(tx_offloads & DEV_TX_OFFLOAD_TCP_CKSUM))
+		flags |= ETH_TXQ_FLAGS_NOXSUMTCP;
+	if (tx_offloads & DEV_TX_OFFLOAD_MBUF_FAST_FREE)
+		flags |= (ETH_TXQ_FLAGS_NOREFCOUNT | ETH_TXQ_FLAGS_NOMULTMEMP);
+
+	*txq_flags = flags;
+}
+
 int
 rte_eth_tx_queue_setup(uint8_t port_id, uint16_t tx_queue_id,
 		       uint16_t nb_tx_desc, unsigned int socket_id,
@@ -1193,6 +1242,7 @@ rte_eth_tx_queue_setup(uint8_t port_id, uint16_t tx_queue_id,
 {
 	struct rte_eth_dev *dev;
 	struct rte_eth_dev_info dev_info;
+	struct rte_eth_txconf local_conf;
 	void **txq;
 
 	RTE_ETH_VALID_PORTID_OR_ERR_RET(port_id, -EINVAL);
@@ -1237,8 +1287,20 @@ rte_eth_tx_queue_setup(uint8_t port_id, uint16_t tx_queue_id,
 	if (tx_conf == NULL)
 		tx_conf = &dev_info.default_txconf;
 
+	/*
+	 * Convert between the offloads API to enable PMDs to support
+	 * only one of them.
+	 */
+	local_conf = *tx_conf;
+	if (tx_conf->txq_flags & ETH_TXQ_FLAGS_IGNORE)
+		rte_eth_convert_txq_offloads(tx_conf->offloads,
+					     &local_conf.txq_flags);
+	else
+		rte_eth_convert_txq_flags(tx_conf->txq_flags,
+					  &local_conf.offloads);
+
 	return (*dev->dev_ops->tx_queue_setup)(dev, tx_queue_id, nb_tx_desc,
-					       socket_id, tx_conf);
+					       socket_id, &local_conf);
 }
 
 void
diff --git a/lib/librte_ether/rte_ethdev.h b/lib/librte_ether/rte_ethdev.h
index ba7a2b2dc..d4a00a760 100644
--- a/lib/librte_ether/rte_ethdev.h
+++ b/lib/librte_ether/rte_ethdev.h
@@ -692,6 +692,12 @@ struct rte_eth_vmdq_rx_conf {
  */
 struct rte_eth_txmode {
 	enum rte_eth_tx_mq_mode mq_mode; /**< TX multi-queues mode. */
+	/**
+	 * Per-port Tx offloads to be set using DEV_TX_OFFLOAD_* flags.
+	 * Only offloads set in the tx_offload_capa field of the
+	 * rte_eth_dev_info structure are allowed to be set.
+	 */
+	uint64_t offloads;
 
 	/* For i40e specifically */
 	uint16_t pvid;
@@ -734,6 +740,15 @@ struct rte_eth_rxconf {
 		(ETH_TXQ_FLAGS_NOXSUMSCTP | ETH_TXQ_FLAGS_NOXSUMUDP | \
 		 ETH_TXQ_FLAGS_NOXSUMTCP)
 /**
+ * When set, the txq_flags field should be ignored;
+ * per-queue Tx offloads will instead be set via the offloads
+ * field of the rte_eth_txconf struct.
+ * This flag is temporary until the rte_eth_txconf.txq_flags
+ * API is deprecated.
+ */
+#define ETH_TXQ_FLAGS_IGNORE	0x8000
+
+/**
  * A structure used to configure a TX ring of an Ethernet port.
  */
 struct rte_eth_txconf {
@@ -744,6 +759,12 @@ struct rte_eth_txconf {
 
 	uint32_t txq_flags; /**< Set flags for the Tx queue */
 	uint8_t tx_deferred_start; /**< Do not start queue with rte_eth_dev_start(). */
+	/**
+	 * Per-queue Tx offloads to be set using DEV_TX_OFFLOAD_* flags.
+	 * Only offloads set in the tx_queue_offload_capa field of the
+	 * rte_eth_dev_info structure are allowed to be set.
+	 */
+	uint64_t offloads;
 };
 
 /**
@@ -968,6 +989,13 @@ struct rte_eth_conf {
 /**< Multiple threads can invoke rte_eth_tx_burst() concurrently on the same
  * tx queue without SW lock.
  */
+#define DEV_TX_OFFLOAD_MULTI_SEGS	0x00008000
+/**< Device supports multi-segment send. */
+#define DEV_TX_OFFLOAD_MBUF_FAST_FREE	0x00010000
+/**< Device supports optimization for fast release of mbufs.
+ *   When set, the application must guarantee that all mbufs come from a
+ *   single mempool and have refcnt = 1.
+ */
 
 struct rte_pci_device;
 
@@ -990,9 +1018,12 @@ struct rte_eth_dev_info {
 	uint16_t max_vmdq_pools; /**< Maximum number of VMDq pools. */
 	uint64_t rx_offload_capa;
 	/**< Device per port RX offload capabilities. */
-	uint32_t tx_offload_capa; /**< Device TX offload capabilities. */
+	uint64_t tx_offload_capa;
+	/**< Device per port TX offload capabilities. */
 	uint64_t rx_queue_offload_capa;
 	/**< Device per queue RX offload capabilities. */
+	uint64_t tx_queue_offload_capa;
+	/**< Device per queue TX offload capabilities. */
 	uint16_t reta_size;
 	/**< Device redirection table size, the total number of entries. */
 	uint8_t hash_key_size; /**< Hash key size in bytes */
@@ -2023,6 +2054,11 @@ int rte_eth_rx_queue_setup(uint8_t port_id, uint16_t rx_queue_id,
  *   - The *txq_flags* member contains flags to pass to the TX queue setup
  *     function to configure the behavior of the TX queue. This should be set
  *     to 0 if no special configuration is required.
+ *     This API is obsolete and will be deprecated. Applications
+ *     should set it to ETH_TXQ_FLAGS_IGNORE and use
+ *     the offloads field below.
+ *   - The *offloads* member contains Tx offloads to be enabled.
+ *     Offloads which are not set cannot be used on the datapath.
  *
  *     Note that setting *tx_free_thresh* or *tx_rs_thresh* value to 0 forces
  *     the transmit function to use default values.
-- 
2.12.0

^ permalink raw reply	[flat|nested] 134+ messages in thread

* Re: [dpdk-dev] [PATCH v3 1/2] ethdev: introduce Rx queue offloads API
  2017-09-13  6:37     ` [dpdk-dev] [PATCH v3 1/2] ethdev: introduce Rx queue " Shahaf Shuler
@ 2017-09-13  8:13       ` Andrew Rybchenko
  2017-09-13 12:49         ` Shahaf Shuler
  2017-09-13  8:49       ` Andrew Rybchenko
  1 sibling, 1 reply; 134+ messages in thread
From: Andrew Rybchenko @ 2017-09-13  8:13 UTC (permalink / raw)
  To: Shahaf Shuler, thomas; +Cc: dev

On 09/13/2017 09:37 AM, Shahaf Shuler wrote:
> Introduce a new API to configure Rx offloads.
>
> In the new API, offloads are divided into per-port and per-queue
> offloads. The PMD reports capability for each of them.
> Offloads are enabled using the existing DEV_RX_OFFLOAD_* flags.
> To enable per-port offload, the offload should be set on both device
> configuration and queue configuration. To enable per-queue offload, the
> offloads can be set only on queue configuration.
>
> Applications should set the ignore_offload_bitfield bit on rxmode
> structure in order to move to the new API.

I think it would be useful to have the description in the documentation.
How per-port and per-queue offloads coexist is a really important topic,
and the rules should be 100% clear for PMD maintainers and application
developers.
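
For example, the intended application-side usage, as I understand it from
the commit message, seems to be the sketch below (port_id, nb_rxd and
mb_pool are assumed to be defined; DEV_RX_OFFLOAD_CHECKSUM is assumed to
be advertised in rx_offload_capa):

	struct rte_eth_conf port_conf = { 0 };
	struct rte_eth_rxconf rxq_conf;
	struct rte_eth_dev_info dev_info;

	rte_eth_dev_info_get(port_id, &dev_info);

	/* Move to the new API: the old rxmode bitfield is ignored. */
	port_conf.rxmode.ignore_offload_bitfield = 1;
	/* Per-port offload: set in the device configuration... */
	port_conf.rxmode.offloads = DEV_RX_OFFLOAD_CHECKSUM;
	rte_eth_dev_configure(port_id, 1, 1, &port_conf);

	/* ...and repeated in the queue configuration. */
	rxq_conf = dev_info.default_rxconf;
	rxq_conf.offloads = port_conf.rxmode.offloads;
	rte_eth_rx_queue_setup(port_id, 0, nb_rxd, rte_socket_id(),
			       &rxq_conf, mb_pool);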

Please also highlight how per-port and per-queue capabilities should be
advertised, i.e. whether a per-queue capability should be reported as
per-port as well. I'd say no, to avoid duplicating per-queue capabilities
in two places. If so, could you explain why, to enable an offload, it
should be specified in both places? How should the configuration be
treated when the offload is enabled at port level but disabled at queue
level?

> The old Rx offloads API is kept for the meanwhile, in order to enable a
> smooth transition for PMDs and application to the new API.
>
> Signed-off-by: Shahaf Shuler <shahafs@mellanox.com>
> ---
>   doc/guides/nics/features.rst  |  33 ++++----
>   lib/librte_ether/rte_ethdev.c | 156 +++++++++++++++++++++++++++++++++----
>   lib/librte_ether/rte_ethdev.h |  51 +++++++++++-
>   3 files changed, 210 insertions(+), 30 deletions(-)
>
> diff --git a/doc/guides/nics/features.rst b/doc/guides/nics/features.rst
> index 37ffbc68c..4e68144ef 100644
> --- a/doc/guides/nics/features.rst
> +++ b/doc/guides/nics/features.rst
> @@ -179,7 +179,7 @@ Jumbo frame
>   
>   Supports Rx jumbo frames.
>   
> -* **[uses]    user config**: ``dev_conf.rxmode.jumbo_frame``,

Maybe it should be removed from the documentation when it is removed from
the sources?
I have no strong opinion, but it would be clearer to find it in the
documentation with its status specified (obsolete).
The note applies to all similar cases below.

[snip]

> diff --git a/lib/librte_ether/rte_ethdev.h b/lib/librte_ether/rte_ethdev.h
> index 0adf3274a..ba7a2b2dc 100644
> --- a/lib/librte_ether/rte_ethdev.h
> +++ b/lib/librte_ether/rte_ethdev.h

[snip]

> @@ -907,6 +934,18 @@ struct rte_eth_conf {
>   #define DEV_RX_OFFLOAD_QINQ_STRIP  0x00000020
>   #define DEV_RX_OFFLOAD_OUTER_IPV4_CKSUM 0x00000040
>   #define DEV_RX_OFFLOAD_MACSEC_STRIP     0x00000080
> +#define DEV_RX_OFFLOAD_HEADER_SPLIT	0x00000100
> +#define DEV_RX_OFFLOAD_VLAN_FILTER	0x00000200
> +#define DEV_RX_OFFLOAD_VLAN_EXTEND	0x00000400
> +#define DEV_RX_OFFLOAD_JUMBO_FRAME	0x00000800
> +#define DEV_RX_OFFLOAD_CRC_STRIP	0x00001000
> +#define DEV_RX_OFFLOAD_SCATTER		0x00002000
> +#define DEV_RX_OFFLOAD_CHECKSUM (DEV_RX_OFFLOAD_IPV4_CKSUM | \
> +				 DEV_RX_OFFLOAD_UDP_CKSUM | \
> +				 DEV_RX_OFFLOAD_TCP_CKSUM)
> +#define DEV_RX_OFFLOAD_VLAN (DEV_RX_OFFLOAD_VLAN_STRIP | \
> +			     DEV_RX_OFFLOAD_VLAN_FILTER | \
> +			     DEV_RX_OFFLOAD_VLAN_EXTEND)

It is not directly related to the patch, but I'd like to highlight that
Rx/Tx are asymmetric here, since SCTP is missing for Rx but present for Tx.

[snip]

^ permalink raw reply	[flat|nested] 134+ messages in thread

* Re: [dpdk-dev] [PATCH v3 2/2] ethdev: introduce Tx queue offloads API
  2017-09-13  6:37     ` [dpdk-dev] [PATCH v3 2/2] ethdev: introduce Tx " Shahaf Shuler
@ 2017-09-13  8:40       ` Andrew Rybchenko
  2017-09-13 12:51         ` Shahaf Shuler
  0 siblings, 1 reply; 134+ messages in thread
From: Andrew Rybchenko @ 2017-09-13  8:40 UTC (permalink / raw)
  To: Shahaf Shuler, thomas; +Cc: dev

On 09/13/2017 09:37 AM, Shahaf Shuler wrote:
> Introduce a new API to configure Tx offloads.
>
> In the new API, offloads are divided into per-port and per-queue
> offloads. The PMD reports capability for each of them.
> Offloads are enabled using the existing DEV_TX_OFFLOAD_* flags.
> To enable per-port offload, the offload should be set on both device
> configuration and queue configuration. To enable per-queue offload, the
> offloads can be set only on queue configuration.

The note about documenting the per-queue and per-port offloads coexistence
is applicable here as well. It would be really helpful to have it in the
documentation.
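
For example, the application-side transition for a Tx queue seems to be
the sketch below (port_id and nb_txd are assumed to be defined; the
checksum offloads are just examples):

	struct rte_eth_txconf txq_conf;
	struct rte_eth_dev_info dev_info;

	rte_eth_dev_info_get(port_id, &dev_info);

	txq_conf = dev_info.default_txconf;
	/* Move to the new API: the txq_flags content is ignored. */
	txq_conf.txq_flags = ETH_TXQ_FLAGS_IGNORE;
	/* Enable only the offloads the application actually needs. */
	txq_conf.offloads = DEV_TX_OFFLOAD_IPV4_CKSUM |
			    DEV_TX_OFFLOAD_TCP_CKSUM;
	rte_eth_tx_queue_setup(port_id, 0, nb_txd, rte_socket_id(),
			       &txq_conf);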

> In addition, the Tx offloads will be disabled by default and enabled
> according to application needs. This will greatly simplify PMD management
> of the different offloads.
>
> Applications should set the ETH_TXQ_FLAGS_IGNORE flag on txq_flags
> field in order to move to the new API.
>
> The old Tx offloads API is kept for the meanwhile, in order to enable a
> smooth transition for PMDs and application to the new API.
>
> Signed-off-by: Shahaf Shuler <shahafs@mellanox.com>
> ---
>   doc/guides/nics/features.rst  | 33 +++++++++++++++-----
>   lib/librte_ether/rte_ethdev.c | 64 +++++++++++++++++++++++++++++++++++++-
>   lib/librte_ether/rte_ethdev.h | 38 +++++++++++++++++++++-
>   3 files changed, 125 insertions(+), 10 deletions(-)
>
> diff --git a/doc/guides/nics/features.rst b/doc/guides/nics/features.rst
> index 4e68144ef..1a8af473b 100644
> --- a/doc/guides/nics/features.rst
> +++ b/doc/guides/nics/features.rst

[snip]

> @@ -620,6 +628,15 @@ Supports packet type parsing and returns a list of supported types.
>   
>   .. _nic_features_timesync:
>   
> +Mbuf fast free
> +--------------
> +
> +Supports optimization for fast release of mbufs following successful Tx.
> +Requires all mbufs to come from the same mempool and have refcnt = 1.

It is ambiguous here in the case of fast free configured at port level.
Please highlight that "from the same mempool" applies per queue.

> +
> +* **[uses]       rte_eth_txconf,rte_eth_txmode**: ``offloads:DEV_TX_OFFLOAD_MBUF_FAST_FREE``.
> +* **[provides]   rte_eth_dev_info**: ``tx_offload_capa,tx_queue_offload_capa:DEV_TX_OFFLOAD_MBUF_FAST_FREE``.
> +
>   Timesync
>   --------
>   
> diff --git a/lib/librte_ether/rte_ethdev.c b/lib/librte_ether/rte_ethdev.c
> index b3c10701e..85b99588f 100644
> --- a/lib/librte_ether/rte_ethdev.c
> +++ b/lib/librte_ether/rte_ethdev.c

[snip]

> @@ -1193,6 +1242,7 @@ rte_eth_tx_queue_setup(uint8_t port_id, uint16_t tx_queue_id,
>   {
>   	struct rte_eth_dev *dev;
>   	struct rte_eth_dev_info dev_info;
> +	struct rte_eth_txconf local_conf;
>   	void **txq;
>   
>   	RTE_ETH_VALID_PORTID_OR_ERR_RET(port_id, -EINVAL);
> @@ -1237,8 +1287,20 @@ rte_eth_tx_queue_setup(uint8_t port_id, uint16_t tx_queue_id,
>   	if (tx_conf == NULL)
>   		tx_conf = &dev_info.default_txconf;
>   
> +	/*
> +	 * Convert between the offloads API to enable PMDs to support
> +	 * only one of them.
> +	 */
> +	local_conf = *tx_conf;
> +	if (tx_conf->txq_flags & ETH_TXQ_FLAGS_IGNORE)
> +		rte_eth_convert_txq_offloads(tx_conf->offloads,
> +					     &local_conf.txq_flags);

Is it intended that the ignore flag is lost here?
It means that failsafe slaves will treat txq_flags as the primary source
of offloads configuration and do the conversion from txq_flags to offloads.
For example, it means that DEV_TX_OFFLOAD_QINQ_INSERT will be lost, as well
as many other offloads which are not covered by txq_flags.

> +	else
> +		rte_eth_convert_txq_flags(tx_conf->txq_flags,
> +					  &local_conf.offloads);
> +
>   	return (*dev->dev_ops->tx_queue_setup)(dev, tx_queue_id, nb_tx_desc,
> -					       socket_id, tx_conf);
> +					       socket_id, &local_conf);
>   }
>   
>   void

[snip]
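
One possible way to preserve it (just a sketch, not code from this patch)
would be to keep the flag set after the conversion, so that a layered PMD
such as failsafe still treats the offloads field as authoritative:

	local_conf = *tx_conf;
	if (tx_conf->txq_flags & ETH_TXQ_FLAGS_IGNORE) {
		rte_eth_convert_txq_offloads(tx_conf->offloads,
					     &local_conf.txq_flags);
		/* Keep the flag so lower layers keep reading offloads. */
		local_conf.txq_flags |= ETH_TXQ_FLAGS_IGNORE;
	} else {
		rte_eth_convert_txq_flags(tx_conf->txq_flags,
					  &local_conf.offloads);
	}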

^ permalink raw reply	[flat|nested] 134+ messages in thread

* Re: [dpdk-dev] [PATCH v3 1/2] ethdev: introduce Rx queue offloads API
  2017-09-13  6:37     ` [dpdk-dev] [PATCH v3 1/2] ethdev: introduce Rx queue " Shahaf Shuler
  2017-09-13  8:13       ` Andrew Rybchenko
@ 2017-09-13  8:49       ` Andrew Rybchenko
  2017-09-13  9:13         ` Andrew Rybchenko
  1 sibling, 1 reply; 134+ messages in thread
From: Andrew Rybchenko @ 2017-09-13  8:49 UTC (permalink / raw)
  To: Shahaf Shuler, thomas; +Cc: dev

On 09/13/2017 09:37 AM, Shahaf Shuler wrote:
> Introduce a new API to configure Rx offloads.
>
> In the new API, offloads are divided into per-port and per-queue
> offloads. The PMD reports capability for each of them.
> Offloads are enabled using the existing DEV_RX_OFFLOAD_* flags.
> To enable per-port offload, the offload should be set on both device
> configuration and queue configuration. To enable per-queue offload, the
> offloads can be set only on queue configuration.
>
> Applications should set the ignore_offload_bitfield bit on rxmode
> structure in order to move to the new API.
>
> The old Rx offloads API is kept for the meanwhile, in order to enable a
> smooth transition for PMDs and application to the new API.
>
> Signed-off-by: Shahaf Shuler <shahafs@mellanox.com>
> ---
>   doc/guides/nics/features.rst  |  33 ++++----
>   lib/librte_ether/rte_ethdev.c | 156 +++++++++++++++++++++++++++++++++----
>   lib/librte_ether/rte_ethdev.h |  51 +++++++++++-
>   3 files changed, 210 insertions(+), 30 deletions(-)

[snip]

> diff --git a/lib/librte_ether/rte_ethdev.c b/lib/librte_ether/rte_ethdev.c
> index 0597641ee..b3c10701e 100644
> --- a/lib/librte_ether/rte_ethdev.c
> +++ b/lib/librte_ether/rte_ethdev.c
> @@ -687,12 +687,90 @@ rte_eth_speed_bitflag(uint32_t speed, int duplex)
>   	}
>   }
>   
> +/**
> + * A conversion function from rxmode bitfield API.
> + */
> +static void
> +rte_eth_convert_rx_offload_bitfield(const struct rte_eth_rxmode *rxmode,
> +				    uint64_t *rx_offloads)
> +{
> +	uint64_t offloads = 0;
> +
> +	if (rxmode->header_split == 1)
> +		offloads |= DEV_RX_OFFLOAD_HEADER_SPLIT;
> +	if (rxmode->hw_ip_checksum == 1)
> +		offloads |= DEV_RX_OFFLOAD_CHECKSUM;
> +	if (rxmode->hw_vlan_filter == 1)
> +		offloads |= DEV_RX_OFFLOAD_VLAN_FILTER;
> +	if (rxmode->hw_vlan_strip == 1)
> +		offloads |= DEV_RX_OFFLOAD_VLAN_STRIP;
> +	if (rxmode->hw_vlan_extend == 1)
> +		offloads |= DEV_RX_OFFLOAD_VLAN_EXTEND;
> +	if (rxmode->jumbo_frame == 1)
> +		offloads |= DEV_RX_OFFLOAD_JUMBO_FRAME;
> +	if (rxmode->hw_strip_crc == 1)
> +		offloads |= DEV_RX_OFFLOAD_CRC_STRIP;
> +	if (rxmode->enable_scatter == 1)
> +		offloads |= DEV_RX_OFFLOAD_SCATTER;
> +	if (rxmode->enable_lro == 1)
> +		offloads |= DEV_RX_OFFLOAD_TCP_LRO;
> +
> +	*rx_offloads = offloads;
> +}
> +
> +/**
> + * A conversion function from rxmode offloads API.
> + */
> +static void
> +rte_eth_convert_rx_offloads(const uint64_t rx_offloads,
> +			    struct rte_eth_rxmode *rxmode)
> +{
> +
> +	if (rx_offloads & DEV_RX_OFFLOAD_HEADER_SPLIT)
> +		rxmode->header_split = 1;
> +	else
> +		rxmode->header_split = 0;
> +	if (rx_offloads & DEV_RX_OFFLOAD_CHECKSUM)
> +		rxmode->hw_ip_checksum = 1;
> +	else
> +		rxmode->hw_ip_checksum = 0;
> +	if (rx_offloads & DEV_RX_OFFLOAD_VLAN_FILTER)
> +		rxmode->hw_vlan_filter = 1;
> +	else
> +		rxmode->hw_vlan_filter = 0;
> +	if (rx_offloads & DEV_RX_OFFLOAD_VLAN_STRIP)
> +		rxmode->hw_vlan_strip = 1;
> +	else
> +		rxmode->hw_vlan_strip = 0;
> +	if (rx_offloads & DEV_RX_OFFLOAD_VLAN_EXTEND)
> +		rxmode->hw_vlan_extend = 1;
> +	else
> +		rxmode->hw_vlan_extend = 0;
> +	if (rx_offloads & DEV_RX_OFFLOAD_JUMBO_FRAME)
> +		rxmode->jumbo_frame = 1;
> +	else
> +		rxmode->jumbo_frame = 0;
> +	if (rx_offloads & DEV_RX_OFFLOAD_CRC_STRIP)
> +		rxmode->hw_strip_crc = 1;
> +	else
> +		rxmode->hw_strip_crc = 0;
> +	if (rx_offloads & DEV_RX_OFFLOAD_SCATTER)
> +		rxmode->enable_scatter = 1;
> +	else
> +		rxmode->enable_scatter = 0;
> +	if (rx_offloads & DEV_RX_OFFLOAD_TCP_LRO)
> +		rxmode->enable_lro = 1;
> +	else
> +		rxmode->enable_lro = 0;
> +}
> +
>   int
>   rte_eth_dev_configure(uint8_t port_id, uint16_t nb_rx_q, uint16_t nb_tx_q,
>   		      const struct rte_eth_conf *dev_conf)
>   {
>   	struct rte_eth_dev *dev;
>   	struct rte_eth_dev_info dev_info;
> +	struct rte_eth_conf local_conf = *dev_conf;
>   	int diag;
>   
>   	RTE_ETH_VALID_PORTID_OR_ERR_RET(port_id, -EINVAL);
> @@ -722,8 +800,20 @@ rte_eth_dev_configure(uint8_t port_id, uint16_t nb_rx_q, uint16_t nb_tx_q,
>   		return -EBUSY;
>   	}
>   
> +	/*
> +	 * Convert between the offloads API to enable PMDs to support
> +	 * only one of them.
> +	 */
> +	if ((dev_conf->rxmode.ignore_offload_bitfield == 0)) {
> +		rte_eth_convert_rx_offload_bitfield(
> +				&dev_conf->rxmode, &local_conf.rxmode.offloads);
> +	} else {
> +		rte_eth_convert_rx_offloads(dev_conf->rxmode.offloads,
> +					    &local_conf.rxmode);

The ignore flag is lost here, and it will result in treating txq_flags as
the primary information about offloads. It is important in the case of the
failsafe PMD.

> +	}
> +
>   	/* Copy the dev_conf parameter into the dev structure */
> -	memcpy(&dev->data->dev_conf, dev_conf, sizeof(dev->data->dev_conf));
> +	memcpy(&dev->data->dev_conf, &local_conf, sizeof(dev->data->dev_conf));
>   
>   	/*
>   	 * Check that the numbers of RX and TX queues are not greater

[snip]

^ permalink raw reply	[flat|nested] 134+ messages in thread

* Re: [dpdk-dev] [PATCH v3 0/2] ethdev new offloads API
  2017-09-13  6:37   ` [dpdk-dev] [PATCH v3 0/2] ethdev new " Shahaf Shuler
  2017-09-13  6:37     ` [dpdk-dev] [PATCH v3 1/2] ethdev: introduce Rx queue " Shahaf Shuler
  2017-09-13  6:37     ` [dpdk-dev] [PATCH v3 2/2] ethdev: introduce Tx " Shahaf Shuler
@ 2017-09-13  9:10     ` Andrew Rybchenko
  2017-09-17  6:54     ` [dpdk-dev] [PATCH v4 0/3] " Shahaf Shuler
  3 siblings, 0 replies; 134+ messages in thread
From: Andrew Rybchenko @ 2017-09-13  9:10 UTC (permalink / raw)
  To: Shahaf Shuler, thomas; +Cc: dev

On 09/13/2017 09:37 AM, Shahaf Shuler wrote:
> Tx offloads configuration is per queue. Tx offloads are enabled by default,
> and can be disabled using ETH_TXQ_FLAGS_NO* flags.
> This behaviour is not consistent with the Rx side where the Rx offloads
> configuration is per port. Rx offloads are disabled by default and enabled
> according to bit field in rte_eth_rxmode structure.
>
> Moreover, considering more Tx and Rx offloads will be added
> over time, the cost of managing them all inside the PMD will be tremendous,
> as the PMD will need to check the matching for the entire offload set
> for each mbuf it handles.
> In addition, on the current approach each Rx offload added breaks the
> ABI compatibility as it requires to add entries to existing bit-fields.
>   
> The series address above issues by defining a new offloads API.
> With the new API, Tx and Rx offloads configuration is per queue.
> The offloads are disabled by default. Each offload can be enabled or
> disabled using the existing DEV_TX_OFFLOADS_* or DEV_RX_OFFLOADS_* flags.
> Such API will enable to easily add or remove offloads, without breaking the
> ABI compatibility.

It should be updated, since offloads are now configured per-port and
per-queue.

> The new API does not have an equivalent for the below Tx flags:
>
> * ETH_TXQ_FLAGS_NOREFCOUNT
> * ETH_TXQ_FLAGS_NOMULTMEMP
>
> The reason is that those flags are not to manage offloads, rather some
> guarantee from application on the way it uses mbufs, therefore could not be
> present as part of DEV_TX_OFFLOADS_*.
> Such flags are useful only for benchmarks, and therefore provide a non-realistic
> performance for DPDK customers using simple benchmarks for evaluation.
> Leveraging the work being done in this series to clean up those flags.

It should be updated, since you now handle these flags as well.

> In order to provide a smooth transition between the APIs the following actions
> were taken:
> *  The old offloads API is kept for the meanwhile.
> *  New capabilities were added for PMD to advertize it has moved to the new
>     offloads API.
> *  Helper function which copy from old to new API were added to ethdev,
>     enabling the PMD to support only one of the APIs.
>
> Per discussion made on the RFC of this series [1], the integration plan which was
> decided is to do the transition in two phases:
> * ethdev API will move on 17.11.
> * Apps and examples will move on 18.02.
>
> This to enable PMD maintainers sufficient time to adopt the new API.
>
> [1]
> http://dpdk.org/ml/archives/dev/2017-August/072643.html
>
> on v3:
>   - Introduce the DEV_TX_OFFLOAD_MBUF_FAST_FREE to act as an equivalent
>     for the no refcnt and single mempool flags.
>   - Fix features documentation.
>   - Fix comment style.
>
> on v2:
>   - Taking a new approach of dividing offloads into per-queue and per-port ones.
>   - Postpone the Tx/Rx public struct renaming to 18.02
>   - squash the helper functions into the Rx/Tx offloads intro patches.
>
> Shahaf Shuler (2):
>    ethdev: introduce Rx queue offloads API
>    ethdev: introduce Tx queue offloads API
>
>   doc/guides/nics/features.rst  |  66 +++++++----
>   lib/librte_ether/rte_ethdev.c | 220 ++++++++++++++++++++++++++++++++++---
>   lib/librte_ether/rte_ethdev.h |  89 ++++++++++++++-
>   3 files changed, 335 insertions(+), 40 deletions(-)

Many thanks for your work on this patch series.

^ permalink raw reply	[flat|nested] 134+ messages in thread

* Re: [dpdk-dev] [PATCH v3 1/2] ethdev: introduce Rx queue offloads API
  2017-09-13  8:49       ` Andrew Rybchenko
@ 2017-09-13  9:13         ` Andrew Rybchenko
  2017-09-13 12:33           ` Shahaf Shuler
  0 siblings, 1 reply; 134+ messages in thread
From: Andrew Rybchenko @ 2017-09-13  9:13 UTC (permalink / raw)
  To: Shahaf Shuler, thomas; +Cc: dev

On 09/13/2017 11:49 AM, Andrew Rybchenko wrote:
> On 09/13/2017 09:37 AM, Shahaf Shuler wrote:
>> Introduce a new API to configure Rx offloads.
>>
>> In the new API, offloads are divided into per-port and per-queue
>> offloads. The PMD reports capability for each of them.
>> Offloads are enabled using the existing DEV_RX_OFFLOAD_* flags.
>> To enable per-port offload, the offload should be set on both device
>> configuration and queue configuration. To enable per-queue offload, the
>> offloads can be set only on queue configuration.
>>
>> Applications should set the ignore_offload_bitfield bit on rxmode
>> structure in order to move to the new API.
>>
>> The old Rx offloads API is kept for the meanwhile, in order to enable a
>> smooth transition for PMDs and application to the new API.
>>
>> Signed-off-by: Shahaf Shuler <shahafs@mellanox.com>
>> ---
>>   doc/guides/nics/features.rst  |  33 ++++----
>>   lib/librte_ether/rte_ethdev.c | 156 
>> +++++++++++++++++++++++++++++++++----
>>   lib/librte_ether/rte_ethdev.h |  51 +++++++++++-
>>   3 files changed, 210 insertions(+), 30 deletions(-) 

[snip]

>> diff --git a/lib/librte_ether/rte_ethdev.c 
>> b/lib/librte_ether/rte_ethdev.c
>> index 0597641ee..b3c10701e 100644

[snip]

>> @@ -722,8 +800,20 @@ rte_eth_dev_configure(uint8_t port_id, uint16_t 
>> nb_rx_q, uint16_t nb_tx_q,
>>           return -EBUSY;
>>       }
>>   +    /*
>> +     * Convert between the offloads API to enable PMDs to support
>> +     * only one of them.
>> +     */
>> +    if ((dev_conf->rxmode.ignore_offload_bitfield == 0)) {
>> +        rte_eth_convert_rx_offload_bitfield(
>> +                &dev_conf->rxmode, &local_conf.rxmode.offloads);
>> +    } else {
>> + rte_eth_convert_rx_offloads(dev_conf->rxmode.offloads,
>> +                        &local_conf.rxmode);
>
> Ignore flag is lost here and it will result in treating txq_flags as 
> the primary
> information about offloads. It is important in the case of failsafe PMD.

Sorry, I mean rxmode (not txq_flags).

[snip]

^ permalink raw reply	[flat|nested] 134+ messages in thread

* Re: [dpdk-dev] [PATCH 4/4] ethdev: add helpers to move to the new offloads API
  2017-09-06  9:33                   ` Ananyev, Konstantin
@ 2017-09-13  9:27                     ` Thomas Monjalon
  2017-09-13 11:16                       ` Shahaf Shuler
  0 siblings, 1 reply; 134+ messages in thread
From: Thomas Monjalon @ 2017-09-13  9:27 UTC (permalink / raw)
  To: Ananyev, Konstantin, Shahaf Shuler; +Cc: dev, stephen

I still think we must streamline ethdev API instead of complexifying.
We should drop the big "configure everything" and configure offloads
one by one, and per queue (the finer grain).

More comments below

06/09/2017 11:33, Ananyev, Konstantin:
> From: Shahaf Shuler [mailto:shahafs@mellanox.com]
> > Tuesday, September 5, 2017 6:31 PM, Ananyev, Konstantin:

> > > > > > > > > In fact, right now it is possible to query/change these 3
> > > > > > > > > vlan offload flags on the fly (after dev_start) on  port
> > > > > > > > > basis by
> > > > > rte_eth_dev_(get|set)_vlan_offload API.
> > > >
> > > > Regarding this API from ethdev.
> > > >
> > > > So this seems like a hack on ethdev. Currently there are 2 ways for user to
> > > set Rx vlan offloads.
> > > > One is through dev_configure which require the ports to be stopped. The
> > > other is this API which can set even if the port is started.
> > >
> > > Yes there is an ability to enable/disable VLAN offloads without
> > > stop/reconfigure the device.
> > > Though I wouldn't call it 'a hack'.
> > > From my perspective - it is a useful feature.
> > > Same as it is possible in some cases to change MTU without stopping device,
> > > etc.

I think the function rte_eth_dev_configure(), which sets up several things
at a time, is a very bad idea from an API perspective.

In the VLAN example, we should have only one function to set this offload.
And the PMD should advertise the capability of configuring VLAN on the fly.
This function should return an error if called on the fly (started state)
when the PMD does not support it.


> > > Consider scenario when PF has a corresponding VFs (PF is controlled by
> > > DPDK) Right now (at least with Intel HW) it is possible to:
> > >
> > > struct rte_eth_conf dev_conf;
> > > dev_conf.rxmode.hw_vlan_filter = 1;
> > > ...
> > > rte_eth_dev_configure(pf_port_id, 0, 0, &dev_conf);
> > > rte_eth_dev_start(pf_port_id);
> > >
> > > In that scenario I don't have any RX/TX queues configured.
> > > Though I still able to enable vlan filter, and it would work correctly for VFs.
> > > Same for other per-port offloads.
> > 
> > For the PF - enabling vlan filtering without any queues means nothing. The PF can receive no traffic, so what difference does it make if vlan
> > filtering is set?
> > For the VF - I assume it will have queues, therefore for it vlan filtering has a meaning. However as I said above, the VF has the vlan filter
> > because in Intel this is a per-device offload, so this is not a good example.
> 
> Yes it is a per-device offload, and right now it is possible to enable/disable it via
> dev_confgiure(); dev_start();
> without configuring/starting any RX/TX queues.
> That's an ability I'd like to preserve.
> So from my perspective it is a perfectly valid example.

It is configuring VFs by setting the PF.
Where is it documented?
It looks to me like a device-specific side effect.

^ permalink raw reply	[flat|nested] 134+ messages in thread

* Re: [dpdk-dev] [PATCH 4/4] ethdev: add helpers to move to the new offloads API
  2017-09-13  9:27                     ` Thomas Monjalon
@ 2017-09-13 11:16                       ` Shahaf Shuler
  2017-09-13 12:41                         ` Thomas Monjalon
  0 siblings, 1 reply; 134+ messages in thread
From: Shahaf Shuler @ 2017-09-13 11:16 UTC (permalink / raw)
  To: Thomas Monjalon, Ananyev, Konstantin; +Cc: dev, stephen

Wednesday, September 13, 2017 12:28 PM, Thomas Monjalon:
> I still think we must streamline ethdev API instead of complexifying.
> We should drop the big "configure everything" and configure offloads one by
> one, and per queue (the finer grain).

The issue is that there is some functionality which cannot be achieved when configuring offloads per queue.
For example - vlan filter on Intel NICs. The PF can set it even without creating a single queue, in order to enable it for the VFs.

Strictly speaking, such functionality belongs to per-device offloads. However, we have no such concept at the ethdev layer,
and I am not sure we want to modify the EAL for that.

> 
> More comments below
> 
> 06/09/2017 11:33, Ananyev, Konstantin:
> > From: Shahaf Shuler [mailto:shahafs@mellanox.com]
> > > Tuesday, September 5, 2017 6:31 PM, Ananyev, Konstantin:
> 
> > > > > > > > > > In fact, right now it is possible to query/change
> > > > > > > > > > these 3 vlan offload flags on the fly (after
> > > > > > > > > > dev_start) on  port basis by
> > > > > > rte_eth_dev_(get|set)_vlan_offload API.
> > > > >
> > > > > Regarding this API from ethdev.
> > > > >
> > > > > So this seems like a hack on ethdev. Currently there are 2 ways
> > > > > for user to
> > > > set Rx vlan offloads.
> > > > > One is through dev_configure which require the ports to be
> > > > > stopped. The
> > > > other is this API which can set even if the port is started.
> > > >
> > > > Yes there is an ability to enable/disable VLAN offloads without
> > > > stop/reconfigure the device.
> > > > Though I wouldn't call it 'a hack'.
> > > > From my perspective - it is a useful feature.
> > > > Same as it is possible in some cases to change MTU without
> > > > stopping device, etc.
> 
> I think the function rte_eth_dev_configure(), which set up several things at a
> time, is a very bad idea from API perspective.
> 
> In the VLAN example, we should have only one function to set this offload.
> And the PMD should advertise the capability of configuring VLAN on the fly.
> This function should return an error if called on the fly (started state) and
> PMD does not support it.
> 
> 
> > > > Consider scenario when PF has a corresponding VFs (PF is
> > > > controlled by
> > > > DPDK) Right now (at least with Intel HW) it is possible to:
> > > >
> > > > struct rte_eth_conf dev_conf;
> > > >  dev_conf. rxmode.hw_vlan_filter = 1; ...
> > > > rte_eth_dev_configure(pf_port_id, 0, 0, &dev_conf);
> > > > rte_eth_dev_start(pf_port_id);
> > > >
> > > > In that scenario I don't have any RX/TX queues configured.
> > > > Though I still able to enable vlan filter, and it would work correctly for
> VFs.
> > > > Same for other per-port offloads.
> > >
> > > For the PF - enabling vlan filtering without any queues means
> > > nothing. The PF can receive no traffic, so what difference does it make
> > > if vlan filtering is set?
> > > For the VF - I assume it will have queues, therefore for it vlan
> > > filtering has a meaning. However as I said above, the VF has the vlan
> > > filter because in Intel this is a per-device offload, so this is not a
> > > good example.
> >
> > Yes it is a per-device offload, and right now it is possible to
> > enable/disable it via dev_confgiure(); dev_start(); without
> > configuring/starting any RX/TX queues.
> > That's an ability I'd like to preserve.
> > So from my perspective it is a perfectly valid example.
> 
> It is configuring VFs by setting the PF.
> Where is it documented?
> It looks to me as a device-specific side effect.

^ permalink raw reply	[flat|nested] 134+ messages in thread

* Re: [dpdk-dev] [PATCH v3 1/2] ethdev: introduce Rx queue offloads API
  2017-09-13  9:13         ` Andrew Rybchenko
@ 2017-09-13 12:33           ` Shahaf Shuler
  2017-09-13 12:34             ` Andrew Rybchenko
  0 siblings, 1 reply; 134+ messages in thread
From: Shahaf Shuler @ 2017-09-13 12:33 UTC (permalink / raw)
  To: Andrew Rybchenko, Thomas Monjalon; +Cc: dev

Wednesday, September 13, 2017 12:13 PM, Andrew Rybchenko:
>>return -EBUSY;
>>      }
>>
>>+    /*
>>+     * Convert between the offloads API to enable PMDs to support
>>+     * only one of them.
>>+     */
>>+    if ((dev_conf->rxmode.ignore_offload_bitfield == 0)) {
>>+        rte_eth_convert_rx_offload_bitfield(
>>+                &dev_conf->rxmode, &local_conf.rxmode.offloads);
>>+    } else {
>>+        rte_eth_convert_rx_offloads(dev_conf->rxmode.offloads,
>>+                        &local_conf.rxmode);

>Ignore flag is lost here and it will result in treating txq_flags as the primary
>information about offloads. It is important in the case of failsafe PMD.
>
>Sorry, I mean rxmode (not txq_flags).


I am not sure the ignore_offload_bitfield is lost in the conversion. The convert function does not assign to it.

^ permalink raw reply	[flat|nested] 134+ messages in thread

* Re: [dpdk-dev] [PATCH v3 1/2] ethdev: introduce Rx queue offloads API
  2017-09-13 12:33           ` Shahaf Shuler
@ 2017-09-13 12:34             ` Andrew Rybchenko
  0 siblings, 0 replies; 134+ messages in thread
From: Andrew Rybchenko @ 2017-09-13 12:34 UTC (permalink / raw)
  To: Shahaf Shuler, Thomas Monjalon; +Cc: dev

On 09/13/2017 03:33 PM, Shahaf Shuler wrote:
>
> Wednesday, September 13, 2017 12:13 PM, Andrew Rybchenko:
>
> >>	return -EBUSY;
> >>	}
> >>
> >>+	/*
> >>+	 * Convert between the offloads API to enable PMDs to support
> >>+	 * only one of them.
> >>+	 */
> >>+	if ((dev_conf->rxmode.ignore_offload_bitfield == 0)) {
> >>+		rte_eth_convert_rx_offload_bitfield(
> >>+				&dev_conf->rxmode, &local_conf.rxmode.offloads);
> >>+	} else {
> >>+		rte_eth_convert_rx_offloads(dev_conf->rxmode.offloads,
> >>+					    &local_conf.rxmode);
>
> >The ignore flag is lost here, and it will result in treating txq_flags as
> >the primary information about offloads. It is important in the case of
> >the failsafe PMD.
>
> >Sorry, I mean rxmode (not txq_flags).
>
> I am not sure the ignore_offload_bitfield is lost in the conversion. The
> convert function does not assign to it.
>

That's true. My bad.

^ permalink raw reply	[flat|nested] 134+ messages in thread

* Re: [dpdk-dev] [PATCH 4/4] ethdev: add helpers to move to the new offloads API
  2017-09-13 11:16                       ` Shahaf Shuler
@ 2017-09-13 12:41                         ` Thomas Monjalon
  2017-09-13 12:56                           ` Ananyev, Konstantin
  2017-09-13 12:56                           ` Shahaf Shuler
  0 siblings, 2 replies; 134+ messages in thread
From: Thomas Monjalon @ 2017-09-13 12:41 UTC (permalink / raw)
  To: dev, Shahaf Shuler; +Cc: Ananyev, Konstantin, stephen

13/09/2017 13:16, Shahaf Shuler:
> Wednesday, September 13, 2017 12:28 PM, Thomas Monjalon:
> > I still think we must streamline ethdev API instead of complexifying.
> > We should drop the big "configure everything" and configure offloads one by
> > one, and per queue (the finer grain).
> 
> The issue is that there is some functionality which cannot be achieved when configuring offloads per queue.
> For example - vlan filter on Intel NICs. The PF can set it even without creating a single queue, in order to enable it for the VFs.

As it is a device-specific - not documented - side effect,
I won't consider it.
However I understand it may be better to be able to configure
per-port offloads with a dedicated per-port function.
I agree with the approach of the v3 of this series.

Let me give my overview of offloads:

We have simple offloads which are configured by just setting a flag.
The same flag can be set per-port or per-queue.
This offload can be set before starting or on the fly.
We currently have no generic way to set it on the fly.

We also have more complicated offloads which require more configuration.
They are set with the rte_flow API.
They can be per-port, per-queue, on the fly or not (AFAIK).

I think we must discuss "on the fly" capability.
It probably requires setting up simple offloads (flags) with a dedicated
function instead of using "configure" and "queue_setup" functions.
This new capability can be implemented in a different series.

Opinions?

^ permalink raw reply	[flat|nested] 134+ messages in thread

* Re: [dpdk-dev] [PATCH v3 1/2] ethdev: introduce Rx queue offloads API
  2017-09-13  8:13       ` Andrew Rybchenko
@ 2017-09-13 12:49         ` Shahaf Shuler
  0 siblings, 0 replies; 134+ messages in thread
From: Shahaf Shuler @ 2017-09-13 12:49 UTC (permalink / raw)
  To: Andrew Rybchenko, Thomas Monjalon; +Cc: dev

Wednesday, September 13, 2017 11:13 AM, Andrew Rybchenko:
On 09/13/2017 09:37 AM, Shahaf Shuler wrote:

>I think it would be useful to have the description in the documentation.
>How per-port and per-queue offloads coexist is a really important topic,
>and the rules should be 100% clear for PMD maintainers and application
>developers.

OK.

>Please also highlight how per-port and per-queue capabilities should be
>advertised, i.e. whether a per-queue capability should be reported as
>per-port as well. I'd say no, to avoid duplicating per-queue capabilities
>in two places.

I will add documentation. An offload can be reported in only one capability field – either per-port or per-queue.


>If so, could you explain why, to enable an offload, it should be specified in
>both places?

It is also set in the queue setup to emphasize that the queue has this offload as well. Logically it can be avoided; however, I thought it good to have, to make it explicit to applications and PMDs.

>How should the configuration be treated when the offload is
>enabled at port level but disabled at queue level?

In that case the queue setup should return an error, as the application is trying to apply a mixed configuration for a per-port offload.
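
For example, a PMD-side check along these lines (a sketch only; the exact
masking is my assumption, not code from this series) could reject the
mixed case:

	/* Pure per-port Rx offloads enabled on the device... */
	uint64_t port_offloads = dev->data->dev_conf.rxmode.offloads &
				 ~dev_info.rx_queue_offload_capa;

	/* ...must be repeated in every queue configuration. */
	if ((rx_conf->offloads & port_offloads) != port_offloads)
		return -EINVAL; /* mixed per-port offload configuration */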



>>diff --git a/doc/guides/nics/features.rst b/doc/guides/nics/features.rst
>>index 37ffbc68c..4e68144ef 100644
>>--- a/doc/guides/nics/features.rst
>>+++ b/doc/guides/nics/features.rst
>>@@ -179,7 +179,7 @@ Jumbo frame
>>
>> Supports Rx jumbo frames.
>>
>>-* **[uses]    user config**: ``dev_conf.rxmode.jumbo_frame``,

>Maybe it should be removed from the documentation when it is removed from the sources?
>I have no strong opinion, but it would be clearer to find it in the documentation
>with its status specified (obsolete).

I think it would complicate the documentation. The old API is obsolete. If a PMD developer thinks about how to implement a new feature and reads this doc, they should implement according to the new API.


>[snip]

>>@@ -907,6 +934,18 @@ struct rte_eth_conf {
>> #define DEV_RX_OFFLOAD_QINQ_STRIP  0x00000020
>> #define DEV_RX_OFFLOAD_OUTER_IPV4_CKSUM 0x00000040
>> #define DEV_RX_OFFLOAD_MACSEC_STRIP     0x00000080
>>+#define DEV_RX_OFFLOAD_HEADER_SPLIT 0x00000100
>>+#define DEV_RX_OFFLOAD_VLAN_FILTER  0x00000200
>>+#define DEV_RX_OFFLOAD_VLAN_EXTEND  0x00000400
>>+#define DEV_RX_OFFLOAD_JUMBO_FRAME  0x00000800
>>+#define DEV_RX_OFFLOAD_CRC_STRIP    0x00001000
>>+#define DEV_RX_OFFLOAD_SCATTER              0x00002000
>>+#define DEV_RX_OFFLOAD_CHECKSUM (DEV_RX_OFFLOAD_IPV4_CKSUM | \
>>+                             DEV_RX_OFFLOAD_UDP_CKSUM | \
>>+                             DEV_RX_OFFLOAD_TCP_CKSUM)
>>+#define DEV_RX_OFFLOAD_VLAN (DEV_RX_OFFLOAD_VLAN_STRIP | \
>>+                          DEV_RX_OFFLOAD_VLAN_FILTER | \
>>+                          DEV_RX_OFFLOAD_VLAN_EXTEND)

>It is not directly related to the patch, but I'd like to highlight that Rx/Tx are asymmetric here,
>since SCTP is missing for Rx but present for Tx.

Right. This can be added in a different series.

[snip]

^ permalink raw reply	[flat|nested] 134+ messages in thread

* Re: [dpdk-dev] [PATCH v3 2/2] ethdev: introduce Tx queue offloads API
  2017-09-13  8:40       ` Andrew Rybchenko
@ 2017-09-13 12:51         ` Shahaf Shuler
  0 siblings, 0 replies; 134+ messages in thread
From: Shahaf Shuler @ 2017-09-13 12:51 UTC (permalink / raw)
  To: Andrew Rybchenko, Thomas Monjalon; +Cc: dev

Wednesday, September 13, 2017 11:41 AM, Andrew Rybchenko:

>>+Mbuf fast free
>>+--------------
>>+
>>+Supports optimization for fast release of mbufs following successful Tx.
>>+Requires all mbufs to come from the same mempool and have refcnt = 1.

>It is ambiguous here in the case of fast free configured at port level.
>Please highlight that "from the same mempool" applies per queue.

OK.

>>+     local_conf = *tx_conf;
>>+     if (tx_conf->txq_flags & ETH_TXQ_FLAGS_IGNORE)
>>+             rte_eth_convert_txq_offloads(tx_conf->offloads,
>>+                                         &local_conf.txq_flags);

>Is it intended that the ignore flag is lost here?
>It means that failsafe slaves will treat txq_flags as the primary source of offloads
>configuration and do the conversion from txq_flags to offloads.
>For example, it means that DEV_TX_OFFLOAD_QINQ_INSERT will be lost, as well
>as many other offloads which are not covered by txq_flags.

Right, this is a bug. Thanks.

^ permalink raw reply	[flat|nested] 134+ messages in thread

* Re: [dpdk-dev] [PATCH 4/4] ethdev: add helpers to move to the new offloads API
  2017-09-13 12:41                         ` Thomas Monjalon
@ 2017-09-13 12:56                           ` Ananyev, Konstantin
  2017-09-13 13:20                             ` Thomas Monjalon
  2017-09-13 12:56                           ` Shahaf Shuler
  1 sibling, 1 reply; 134+ messages in thread
From: Ananyev, Konstantin @ 2017-09-13 12:56 UTC (permalink / raw)
  To: Thomas Monjalon, dev, Shahaf Shuler; +Cc: stephen



> -----Original Message-----
> From: Thomas Monjalon [mailto:thomas@monjalon.net]
> Sent: Wednesday, September 13, 2017 1:42 PM
> To: dev@dpdk.org; Shahaf Shuler <shahafs@mellanox.com>
> Cc: Ananyev, Konstantin <konstantin.ananyev@intel.com>; stephen@networkplumber.org
> Subject: Re: [dpdk-dev] [PATCH 4/4] ethdev: add helpers to move to the new offloads API
> 
> 13/09/2017 13:16, Shahaf Shuler:
> > Wednesday, September 13, 2017 12:28 PM, Thomas Monjalon:
> > > I still think we must streamline ethdev API instead of complexifying.
> > > We should drop the big "configure everything" and configure offloads one by
> > > one, and per queue (the finer grain).
> >
> > The issue is, that there is some functionality which cannot be achieved when configuring offload per queue.
> > For example - vlan filter on intel NICs. The PF can set it even without creating a single queue, in order to enable it for the VFs.
> 
> As it is a device-specific - not documented - side effect,
> I won't consider it.

Hmm, are you saying that if there are gaps in our documentation it is ok to break things?
Once again - you suggest breaking existing functionality without providing any
alternative way to support it.
Surely I will NACK such a proposal.
Konstantin 

> However I understand it may be better to be able to configure
> per-port offloads with a dedicated per-port function.
> I agree with the approach of the v3 of this series.
> 
> Let me give my overview of offloads:
> 
> We have simple offloads which are configured by just setting a flag.
> The same flag can be set per-port or per-queue.
> This offload can be set before starting or on the fly.
> We currently have no generic way to set it on the fly.
> 
> We have also more complicate offloads which require more configuration.
> They are set with the rte_flow API.
> They can be per-port, per-queue, on the fly or not (AFAIK).
> 
> I think we must discuss "on the fly" capability.
> It requires probably to set up simple offloads (flags) with a dedicated
> function instead of using "configure" and "queue_setup" functions.
> This new capability can be implemented in a different series.
> 
> Opinions?

^ permalink raw reply	[flat|nested] 134+ messages in thread

* Re: [dpdk-dev] [PATCH 4/4] ethdev: add helpers to move to the new offloads API
  2017-09-13 12:41                         ` Thomas Monjalon
  2017-09-13 12:56                           ` Ananyev, Konstantin
@ 2017-09-13 12:56                           ` Shahaf Shuler
  1 sibling, 0 replies; 134+ messages in thread
From: Shahaf Shuler @ 2017-09-13 12:56 UTC (permalink / raw)
  To: Thomas Monjalon, dev; +Cc: Ananyev, Konstantin, stephen

Wednesday, September 13, 2017 3:42 PM, Thomas MonjalonL
> 13/09/2017 13:16, Shahaf Shuler:
> > Wednesday, September 13, 2017 12:28 PM, Thomas Monjalon:
> > > I still think we must streamline ethdev API instead of complexifying.
> > > We should drop the big "configure everything" and configure offloads
> > > one by one, and per queue (the finer grain).
> >
> > The issue is, that there is some functionality which cannot be achieved
> when configuring offload per queue.
> > For example - vlan filter on intel NICs. The PF can set it even without
> creating a single queue, in order to enable it for the VFs.
> 
> As it is a device-specific - not documented - side effect, I won't consider it.
> However I understand it may be better to be able to configure per-port
> offloads with a dedicated per-port function.
> I agree with the approach of the v3 of this series.
> 
> Let me give my overview of offloads:
> 
> We have simple offloads which are configured by just setting a flag.
> The same flag can be set per-port or per-queue.
> This offload can be set before starting or on the fly.
> We currently have no generic way to set it on the fly.
> 
> We have also more complicate offloads which require more configuration.
> They are set with the rte_flow API.
> They can be per-port, per-queue, on the fly or not (AFAIK).
> 
> I think we must discuss "on the fly" capability.
> It requires probably to set up simple offloads (flags) with a dedicated
> function instead of using "configure" and "queue_setup" functions.
> This new capability can be implemented in a different series.
> 
> Opinions?

Agreed about the on-the-fly configuration for Tx/Rx offloads.

Currently the vlan case is an exception; we should have a generic API to set such offloads.
I assume that we also don't want to have a dev_ops function in the PMD for each "on the fly" configuration like we have with the vlan offloads.

^ permalink raw reply	[flat|nested] 134+ messages in thread

* Re: [dpdk-dev] [PATCH 4/4] ethdev: add helpers to move to the new offloads API
  2017-09-13 12:56                           ` Ananyev, Konstantin
@ 2017-09-13 13:20                             ` Thomas Monjalon
  2017-09-13 21:42                               ` Ananyev, Konstantin
  0 siblings, 1 reply; 134+ messages in thread
From: Thomas Monjalon @ 2017-09-13 13:20 UTC (permalink / raw)
  To: Ananyev, Konstantin, stephen; +Cc: dev, Shahaf Shuler

13/09/2017 14:56, Ananyev, Konstantin:
> From: Thomas Monjalon [mailto:thomas@monjalon.net]
> > 13/09/2017 13:16, Shahaf Shuler:
> > > Wednesday, September 13, 2017 12:28 PM, Thomas Monjalon:
> > > > I still think we must streamline ethdev API instead of complexifying.
> > > > We should drop the big "configure everything" and configure offloads one by
> > > > one, and per queue (the finer grain).
> > >
> > > The issue is, that there is some functionality which cannot be achieved when configuring offload per queue.
> > > For example - vlan filter on intel NICs. The PF can set it even without creating a single queue, in order to enable it for the VFs.
> > 
> > As it is a device-specific - not documented - side effect,
> > I won't consider it.
> 
> Hmm, are you saying that if there are gaps in our documentation it ok to break things?

If it is not documented, we did not explicitly agree on it.
How does an application know that setting a PF parameter will have an
effect on related VFs?

> Once again - you suggest to break existing functionality without providing any
> alternative way to support it.

It is not functionality, it is a side effect.
What happens if a VF changes these settings? An error?
Is this error documented?

> Surely I will NACK such proposal.

Nothing to nack, I agree with v3 which doesn't break ixgbe VLAN settings.

Konstantin, I would like your opinion about the proposal below.
It is about making on the fly configuration more generic.
You say it is possible to configure VLAN on the fly,
and I think we should make it possible for other offload features.

> > However I understand it may be better to be able to configure
> > per-port offloads with a dedicated per-port function.
> > I agree with the approach of the v3 of this series.
> > 
> > Let me give my overview of offloads:
> > 
> > We have simple offloads which are configured by just setting a flag.
> > The same flag can be set per-port or per-queue.
> > This offload can be set before starting or on the fly.
> > We currently have no generic way to set it on the fly.
> > 
> > We have also more complicate offloads which require more configuration.
> > They are set with the rte_flow API.
> > They can be per-port, per-queue, on the fly or not (AFAIK).
> > 
> > I think we must discuss "on the fly" capability.
> > It requires probably to set up simple offloads (flags) with a dedicated
> > function instead of using "configure" and "queue_setup" functions.
> > This new capability can be implemented in a different series.
> > 
> > Opinions?

^ permalink raw reply	[flat|nested] 134+ messages in thread

* Re: [dpdk-dev] [PATCH 4/4] ethdev: add helpers to move to the new offloads API
  2017-09-13 13:20                             ` Thomas Monjalon
@ 2017-09-13 21:42                               ` Ananyev, Konstantin
  2017-09-14  8:02                                 ` Thomas Monjalon
  0 siblings, 1 reply; 134+ messages in thread
From: Ananyev, Konstantin @ 2017-09-13 21:42 UTC (permalink / raw)
  To: Thomas Monjalon, stephen; +Cc: dev, Shahaf Shuler



> -----Original Message-----
> From: Thomas Monjalon [mailto:thomas@monjalon.net]
> Sent: Wednesday, September 13, 2017 2:21 PM
> To: Ananyev, Konstantin <konstantin.ananyev@intel.com>; stephen@networkplumber.org
> Cc: dev@dpdk.org; Shahaf Shuler <shahafs@mellanox.com>
> Subject: Re: [dpdk-dev] [PATCH 4/4] ethdev: add helpers to move to the new offloads API
> 
> 13/09/2017 14:56, Ananyev, Konstantin:
> > From: Thomas Monjalon [mailto:thomas@monjalon.net]
> > > 13/09/2017 13:16, Shahaf Shuler:
> > > > Wednesday, September 13, 2017 12:28 PM, Thomas Monjalon:
> > > > > I still think we must streamline ethdev API instead of complexifying.
> > > > > We should drop the big "configure everything" and configure offloads one by
> > > > > one, and per queue (the finer grain).
> > > >
> > > > The issue is, that there is some functionality which cannot be achieved when configuring offload per queue.
> > > > For example - vlan filter on intel NICs. The PF can set it even without creating a single queue, in order to enable it for the VFs.
> > >
> > > As it is a device-specific - not documented - side effect,
> > > I won't consider it.
> >
> > Hmm, are you saying that if there are gaps in our documentation it ok to break things?
> 
> If it is not documented, we did not explicitly agree on it.
> How an application knows that setting a PF settings will have
> effect on related VFs?
> 
> > Once again - you suggest breaking existing functionality without providing any
> > alternative way to support it.
> 
> It is not functionality, it is a side effect.

I wouldn't agree with that.
DPDK does support the PF for these devices.
It is the responsibility of the PF to provide the user with the ability to configure and control its VF(s).

> What happens if a VF changes these settings? An error?

It depends on the particular offload and HW.
For ixgbe and igb, in most cases the VF is simply not physically capable of
changing these things.
I think that in some places an error is returned, and in others such a request
is silently ignored.

> Is this error documented?
> 
> > Surely I will NACK such proposal.
> 
> Nothing to nack, I agree with v3 which doesn't break ixgbe VLAN settings.

Ok then.

> 
> Konstantin, I would like your opinion about the proposal below.
> It is about making on the fly configuration more generic.
> You say it is possible to configure VLAN on the fly,
> and I think we should make it possible for other offload features.

It would be a good thing, but I don't think it is possible for all offloads.
For some of them you still have to stop the queue(port) first. 

Also I am not sure what exactly you propose.
Is it something like this:
- wipe existing offload bitfields from rte_eth_rxmode (already done by Shahaf)
- Instead of uint64_t offloads inside both rte_eth_rxmode and rte_eth_rxconf
  Introduce new functions:

int rte_eth_set_port_rx_offload(portid, uint64_t offload_mask);
int rte_eth_set_queue_rx_offload(portid, queueid, uint64_t offload_mask);

uint64_t rte_eth_get_port_rx_offload(portid);
uint64_t rte_eth_set_queue_rx_offload(portid, queueid);

And add new fields:
rx_offload_port_dynamic_capa
rx_offload_queue_dynamic_capa
inside rte_eth_dev_info.

And it would be the user's responsibility to call set_port/queue_rx_offload()
somewhere before dev_start() for static offloads.
?
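
For illustration, usage could then look something like below (a hypothetical
sketch only - these functions and the dynamic capability field do not exist,
the names are simply the ones proposed above):

struct rte_eth_dev_info dev_info;

/* Static offloads: set before device start. */
rte_eth_set_port_rx_offload(port_id, DEV_RX_OFFLOAD_CHECKSUM);
rte_eth_dev_start(port_id);

/* Dynamic offloads: allowed after start only if advertised in the
 * proposed rx_offload_port_dynamic_capa field.
 */
rte_eth_dev_info_get(port_id, &dev_info);
if (dev_info.rx_offload_port_dynamic_capa & DEV_RX_OFFLOAD_VLAN_STRIP)
	rte_eth_set_port_rx_offload(port_id,
		rte_eth_get_port_rx_offload(port_id) |
		DEV_RX_OFFLOAD_VLAN_STRIP);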

If so, then it seems reasonable to me.
Konstantin

> 
> > > However I understand it may be better to be able to configure
> > > per-port offloads with a dedicated per-port function.
> > > I agree with the approach of the v3 of this series.
> > >
> > > Let me give my overview of offloads:
> > >
> > > We have simple offloads which are configured by just setting a flag.
> > > The same flag can be set per-port or per-queue.
> > > This offload can be set before starting or on the fly.
> > > We currently have no generic way to set it on the fly.
> > >
> > > We also have more complicated offloads which require more configuration.
> > > They are set with the rte_flow API.
> > > They can be per-port, per-queue, on the fly or not (AFAIK).
> > >
> > > I think we must discuss "on the fly" capability.
> > > It probably requires setting up simple offloads (flags) with a dedicated
> > > function instead of using "configure" and "queue_setup" functions.
> > > This new capability can be implemented in a different series.
> > >
> > > Opinions?

^ permalink raw reply	[flat|nested] 134+ messages in thread

* Re: [dpdk-dev] [PATCH 4/4] ethdev: add helpers to move to the new offloads API
  2017-09-13 21:42                               ` Ananyev, Konstantin
@ 2017-09-14  8:02                                 ` Thomas Monjalon
  2017-09-18 10:31                                   ` Bruce Richardson
  0 siblings, 1 reply; 134+ messages in thread
From: Thomas Monjalon @ 2017-09-14  8:02 UTC (permalink / raw)
  To: Ananyev, Konstantin; +Cc: stephen, dev, Shahaf Shuler

13/09/2017 23:42, Ananyev, Konstantin:
> From: Thomas Monjalon [mailto:thomas@monjalon.net]
> > 13/09/2017 14:56, Ananyev, Konstantin:
> > > From: Thomas Monjalon [mailto:thomas@monjalon.net]
> > Konstantin, I would like your opinion about the proposal below.
> > It is about making on the fly configuration more generic.
> > You say it is possible to configure VLAN on the fly,
> > and I think we should make it possible for other offload features.
> 
> It would be a good thing, but I don't think it is possible for all offloads.
> For some of them you still have to stop the queue(port) first. 
> 
> Also I am not sure what exactly you propose.
> Is it something like this:
> - wipe existing offload bitfields from rte_eth_rxmode (already done by Shahaf)
> - Instead of uint64_t offloads inside both rte_eth_rxmode and rte_eth_rxconf
>   Introduce new functions:
> 
> int rte_eth_set_port_rx_offload(portid, uint64_t offload_mask);
> int rte_eth_set_queue_rx_offload(portid, queueid, uint64_t offload_mask);
> 
> uint64_t rte_eth_get_port_rx_offload(portid);
> uint64_t rte_eth_set_queue_rx_offload(portid, queueid);
> 
> And add new fields:
> rx_offload_port_dynamic_capa
> rx_offload_queue_dynamic_capa
> inside rte_eth_dev_info.
> 
> And it would be the user's responsibility to call set_port/queue_rx_offload()
> somewhere before dev_start() for static offloads.
> ?

Yes exactly.

> If so, then it seems reasonable to me.

Good, thank you


> > > > However I understand it may be better to be able to configure
> > > > per-port offloads with a dedicated per-port function.
> > > > I agree with the approach of the v3 of this series.
> > > >
> > > > Let me give my overview of offloads:
> > > >
> > > > We have simple offloads which are configured by just setting a flag.
> > > > The same flag can be set per-port or per-queue.
> > > > This offload can be set before starting or on the fly.
> > > > We currently have no generic way to set it on the fly.
> > > >
> > > > We also have more complicated offloads which require more configuration.
> > > > They are set with the rte_flow API.
> > > > They can be per-port, per-queue, on the fly or not (AFAIK).
> > > >
> > > > I think we must discuss "on the fly" capability.
> > > > It probably requires setting up simple offloads (flags) with a dedicated
> > > > function instead of using "configure" and "queue_setup" functions.
> > > > This new capability can be implemented in a different series.
> > > >
> > > > Opinions?

^ permalink raw reply	[flat|nested] 134+ messages in thread

* [dpdk-dev] [PATCH v4 0/3] ethdev new offloads API
  2017-09-13  6:37   ` [dpdk-dev] [PATCH v3 0/2] ethdev new " Shahaf Shuler
                       ` (2 preceding siblings ...)
  2017-09-13  9:10     ` [dpdk-dev] [PATCH v3 0/2] ethdev new " Andrew Rybchenko
@ 2017-09-17  6:54     ` Shahaf Shuler
  2017-09-17  6:54       ` [dpdk-dev] [PATCH v4 1/3] ethdev: introduce Rx queue " Shahaf Shuler
                         ` (4 more replies)
  3 siblings, 5 replies; 134+ messages in thread
From: Shahaf Shuler @ 2017-09-17  6:54 UTC (permalink / raw)
  To: thomas, jerin.jacob, konstantin.ananyev, arybchenko; +Cc: dev

Tx offloads configuration is per queue. Tx offloads are enabled by default, 
and can be disabled using ETH_TXQ_FLAGS_NO* flags. 
This behaviour is not consistent with the Rx side where the Rx offloads
configuration is per port. Rx offloads are disabled by default and enabled 
according to bit field in rte_eth_rxmode structure.

Moreover, considering more Tx and Rx offloads will be added 
over time, the cost of managing them all inside the PMD will be tremendous,
as the PMD will need to check the matching for the entire offload set 
for each mbuf it handles.
In addition, with the current approach each Rx offload added breaks
ABI compatibility, as it requires adding entries to existing bit-fields.
 
The series addresses the above issues by defining a new offloads API.
In the new API, offloads are divided into per-port and per-queue offloads,
with a corresponding capability for each.
The offloads are disabled by default. Each offload can be enabled or
disabled using the existing DEV_TX_OFFLOADS_* or DEV_RX_OFFLOADS_* flags.
Such an API will make it easy to add or remove offloads, without breaking
ABI compatibility.
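
As an illustration only (a minimal sketch, not part of the patches; it uses
just the capability fields this series adds), an application can distinguish
the two capability levels like this:

struct rte_eth_dev_info dev_info;

rte_eth_dev_info_get(port_id, &dev_info);
/* An offload present only in rx_offload_capa is per-port; one also
 * present in rx_queue_offload_capa may differ between queues of the
 * same port.
 */
if (dev_info.rx_queue_offload_capa & DEV_RX_OFFLOAD_VLAN_STRIP)
	printf("VLAN strip can be set per Rx queue\n");
else if (dev_info.rx_offload_capa & DEV_RX_OFFLOAD_VLAN_STRIP)
	printf("VLAN strip is a per-port Rx offload\n");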

In order to provide a smooth transition between the APIs the following actions
were taken:
*  The old offloads API is kept for the meanwhile.
*  Helper functions which copy from the old to the new API were added to ethdev,
   enabling the PMD to support only one of the APIs.

Per the discussion on the RFC of this series [1], the integration plan which was
decided is to do the transition in two phases:
* ethdev API will move on 17.11.
* Apps and examples will move on 18.02.

This is to allow PMD maintainers sufficient time to adopt the new API.

[1]
http://dpdk.org/ml/archives/dev/2017-August/072643.html

on v4:
 - Added another patch for documentation.
 - Fixed ETH_TXQ_FLAGS_IGNORE flag override.
 - clarify the description of DEV_TX_OFFLOAD_MBUF_FAST_FREE offload.

on v3:
 - Introduce the DEV_TX_OFFLOAD_MBUF_FAST_FREE to act as an equivalent
   for the no refcnt and single mempool flags.
 - Fix features documentation.
 - Fix comment style.

on v2:
 - Taking new approach of dividing offloads into per-queue and per-port one.
 - Postpone the Tx/Rx public struct renaming to 18.02
 - squash the helper functions into the Rx/Tx offloads intro patches.

Shahaf Shuler (3):
  ethdev: introduce Rx queue offloads API
  ethdev: introduce Tx queue offloads API
  doc: add details on ethdev offloads API

 doc/guides/nics/features.rst            |  66 +++++---
 doc/guides/prog_guide/poll_mode_drv.rst |  17 ++
 lib/librte_ether/rte_ethdev.c           | 223 +++++++++++++++++++++++++--
 lib/librte_ether/rte_ethdev.h           |  89 ++++++++++-
 4 files changed, 355 insertions(+), 40 deletions(-)

-- 
2.12.0

^ permalink raw reply	[flat|nested] 134+ messages in thread

* [dpdk-dev] [PATCH v4 1/3] ethdev: introduce Rx queue offloads API
  2017-09-17  6:54     ` [dpdk-dev] [PATCH v4 0/3] " Shahaf Shuler
@ 2017-09-17  6:54       ` Shahaf Shuler
  2017-09-17  6:54       ` [dpdk-dev] [PATCH v4 2/3] ethdev: introduce Tx " Shahaf Shuler
                         ` (3 subsequent siblings)
  4 siblings, 0 replies; 134+ messages in thread
From: Shahaf Shuler @ 2017-09-17  6:54 UTC (permalink / raw)
  To: thomas, jerin.jacob, konstantin.ananyev, arybchenko; +Cc: dev

Introduce a new API to configure Rx offloads.

In the new API, offloads are divided into per-port and per-queue
offloads. The PMD reports capability for each of them.
Offloads are enabled using the existing DEV_RX_OFFLOAD_* flags.
To enable a per-port offload, the offload should be set on both the device
configuration and the queue configuration. To enable a per-queue offload, the
offload can be set only on the queue configuration.

Applications should set the ignore_offload_bitfield bit in the rxmode
structure in order to move to the new API.

The old Rx offloads API is kept for the time being, in order to enable a
smooth transition for PMDs and applications to the new API.
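
For illustration, a minimal sketch of an application moving to the new Rx
API (port_id and mb_pool are assumed to be set up elsewhere; error checks
omitted):

struct rte_eth_conf port_conf = { 0 };
struct rte_eth_dev_info dev_info;
struct rte_eth_rxconf rxq_conf;

port_conf.rxmode.ignore_offload_bitfield = 1; /* opt in to the new API */
/* Per-port Rx offloads, validated against dev_info.rx_offload_capa. */
port_conf.rxmode.offloads = DEV_RX_OFFLOAD_CHECKSUM |
			    DEV_RX_OFFLOAD_VLAN_STRIP;
rte_eth_dev_configure(port_id, 1, 1, &port_conf);

rte_eth_dev_info_get(port_id, &dev_info);
rxq_conf = dev_info.default_rxconf;
/* Per-port offloads must be repeated in the queue configuration;
 * purely per-queue offloads may be added on top of them here.
 */
rxq_conf.offloads = port_conf.rxmode.offloads;
rte_eth_rx_queue_setup(port_id, 0, 512, rte_eth_dev_socket_id(port_id),
		       &rxq_conf, mb_pool);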

Signed-off-by: Shahaf Shuler <shahafs@mellanox.com>
---
 doc/guides/nics/features.rst  |  33 ++++----
 lib/librte_ether/rte_ethdev.c | 156 +++++++++++++++++++++++++++++++++----
 lib/librte_ether/rte_ethdev.h |  51 +++++++++++-
 3 files changed, 210 insertions(+), 30 deletions(-)

diff --git a/doc/guides/nics/features.rst b/doc/guides/nics/features.rst
index 37ffbc68c..4e68144ef 100644
--- a/doc/guides/nics/features.rst
+++ b/doc/guides/nics/features.rst
@@ -179,7 +179,7 @@ Jumbo frame
 
 Supports Rx jumbo frames.
 
-* **[uses]    user config**: ``dev_conf.rxmode.jumbo_frame``,
+* **[uses]    rte_eth_rxconf,rte_eth_rxmode**: ``offloads:DEV_RX_OFFLOAD_JUMBO_FRAME``.
   ``dev_conf.rxmode.max_rx_pkt_len``.
 * **[related] rte_eth_dev_info**: ``max_rx_pktlen``.
 * **[related] API**: ``rte_eth_dev_set_mtu()``.
@@ -192,7 +192,7 @@ Scattered Rx
 
 Supports receiving segmented mbufs.
 
-* **[uses]       user config**: ``dev_conf.rxmode.enable_scatter``.
+* **[uses]       rte_eth_rxconf,rte_eth_rxmode**: ``offloads:DEV_RX_OFFLOAD_SCATTER``.
 * **[implements] datapath**: ``Scattered Rx function``.
 * **[implements] rte_eth_dev_data**: ``scattered_rx``.
 * **[provides]   eth_dev_ops**: ``rxq_info_get:scattered_rx``.
@@ -206,11 +206,11 @@ LRO
 
 Supports Large Receive Offload.
 
-* **[uses]       user config**: ``dev_conf.rxmode.enable_lro``.
+* **[uses]       rte_eth_rxconf,rte_eth_rxmode**: ``offloads:DEV_RX_OFFLOAD_TCP_LRO``.
 * **[implements] datapath**: ``LRO functionality``.
 * **[implements] rte_eth_dev_data**: ``lro``.
 * **[provides]   mbuf**: ``mbuf.ol_flags:PKT_RX_LRO``, ``mbuf.tso_segsz``.
-* **[provides]   rte_eth_dev_info**: ``rx_offload_capa:DEV_RX_OFFLOAD_TCP_LRO``.
+* **[provides]   rte_eth_dev_info**: ``rx_offload_capa,rx_queue_offload_capa:DEV_RX_OFFLOAD_TCP_LRO``.
 
 
 .. _nic_features_tso:
@@ -363,7 +363,7 @@ VLAN filter
 
 Supports filtering of a VLAN Tag identifier.
 
-* **[uses]       user config**: ``dev_conf.rxmode.hw_vlan_filter``.
+* **[uses]       rte_eth_rxconf,rte_eth_rxmode**: ``offloads:DEV_RX_OFFLOAD_VLAN_FILTER``.
 * **[implements] eth_dev_ops**: ``vlan_filter_set``.
 * **[related]    API**: ``rte_eth_dev_vlan_filter()``.
 
@@ -499,7 +499,7 @@ CRC offload
 
 Supports CRC stripping by hardware.
 
-* **[uses] user config**: ``dev_conf.rxmode.hw_strip_crc``.
+* **[uses] rte_eth_rxconf,rte_eth_rxmode**: ``offloads:DEV_RX_OFFLOAD_CRC_STRIP``.
 
 
 .. _nic_features_vlan_offload:
@@ -509,11 +509,10 @@ VLAN offload
 
 Supports VLAN offload to hardware.
 
-* **[uses]       user config**: ``dev_conf.rxmode.hw_vlan_strip``,
-  ``dev_conf.rxmode.hw_vlan_filter``, ``dev_conf.rxmode.hw_vlan_extend``.
+* **[uses]       rte_eth_rxconf,rte_eth_rxmode**: ``offloads:DEV_RX_OFFLOAD_VLAN_STRIP,DEV_RX_OFFLOAD_VLAN_FILTER,DEV_RX_OFFLOAD_VLAN_EXTEND``.
 * **[implements] eth_dev_ops**: ``vlan_offload_set``.
 * **[provides]   mbuf**: ``mbuf.ol_flags:PKT_RX_VLAN_STRIPPED``, ``mbuf.vlan_tci``.
-* **[provides]   rte_eth_dev_info**: ``rx_offload_capa:DEV_RX_OFFLOAD_VLAN_STRIP``,
+* **[provides]   rte_eth_dev_info**: ``rx_offload_capa,rx_queue_offload_capa:DEV_RX_OFFLOAD_VLAN_STRIP``,
   ``tx_offload_capa:DEV_TX_OFFLOAD_VLAN_INSERT``.
 * **[related]    API**: ``rte_eth_dev_set_vlan_offload()``,
   ``rte_eth_dev_get_vlan_offload()``.
@@ -526,10 +525,11 @@ QinQ offload
 
 Supports QinQ (queue in queue) offload.
 
+* **[uses]     rte_eth_rxconf,rte_eth_rxmode**: ``offloads:DEV_RX_OFFLOAD_QINQ_STRIP``.
 * **[uses]     mbuf**: ``mbuf.ol_flags:PKT_TX_QINQ_PKT``.
 * **[provides] mbuf**: ``mbuf.ol_flags:PKT_RX_QINQ_STRIPPED``, ``mbuf.vlan_tci``,
    ``mbuf.vlan_tci_outer``.
-* **[provides] rte_eth_dev_info**: ``rx_offload_capa:DEV_RX_OFFLOAD_QINQ_STRIP``,
+* **[provides] rte_eth_dev_info**: ``rx_offload_capa,rx_queue_offload_capa:DEV_RX_OFFLOAD_QINQ_STRIP``,
   ``tx_offload_capa:DEV_TX_OFFLOAD_QINQ_INSERT``.
 
 
@@ -540,13 +540,13 @@ L3 checksum offload
 
 Supports L3 checksum offload.
 
-* **[uses]     user config**: ``dev_conf.rxmode.hw_ip_checksum``.
+* **[uses]     rte_eth_rxconf,rte_eth_rxmode**: ``offloads:DEV_RX_OFFLOAD_IPV4_CKSUM``.
 * **[uses]     mbuf**: ``mbuf.ol_flags:PKT_TX_IP_CKSUM``,
   ``mbuf.ol_flags:PKT_TX_IPV4`` | ``PKT_TX_IPV6``.
 * **[provides] mbuf**: ``mbuf.ol_flags:PKT_RX_IP_CKSUM_UNKNOWN`` |
   ``PKT_RX_IP_CKSUM_BAD`` | ``PKT_RX_IP_CKSUM_GOOD`` |
   ``PKT_RX_IP_CKSUM_NONE``.
-* **[provides] rte_eth_dev_info**: ``rx_offload_capa:DEV_RX_OFFLOAD_IPV4_CKSUM``,
+* **[provides] rte_eth_dev_info**: ``rx_offload_capa,rx_queue_offload_capa:DEV_RX_OFFLOAD_IPV4_CKSUM``,
   ``tx_offload_capa:DEV_TX_OFFLOAD_IPV4_CKSUM``.
 
 
@@ -557,13 +557,14 @@ L4 checksum offload
 
 Supports L4 checksum offload.
 
+* **[uses]     rte_eth_rxconf,rte_eth_rxmode**: ``offloads:DEV_RX_OFFLOAD_UDP_CKSUM,DEV_RX_OFFLOAD_TCP_CKSUM``.
 * **[uses]     mbuf**: ``mbuf.ol_flags:PKT_TX_IPV4`` | ``PKT_TX_IPV6``,
   ``mbuf.ol_flags:PKT_TX_L4_NO_CKSUM`` | ``PKT_TX_TCP_CKSUM`` |
   ``PKT_TX_SCTP_CKSUM`` | ``PKT_TX_UDP_CKSUM``.
 * **[provides] mbuf**: ``mbuf.ol_flags:PKT_RX_L4_CKSUM_UNKNOWN`` |
   ``PKT_RX_L4_CKSUM_BAD`` | ``PKT_RX_L4_CKSUM_GOOD`` |
   ``PKT_RX_L4_CKSUM_NONE``.
-* **[provides] rte_eth_dev_info**: ``rx_offload_capa:DEV_RX_OFFLOAD_UDP_CKSUM,DEV_RX_OFFLOAD_TCP_CKSUM``,
+* **[provides] rte_eth_dev_info**: ``rx_offload_capa,rx_queue_offload_capa:DEV_RX_OFFLOAD_UDP_CKSUM,DEV_RX_OFFLOAD_TCP_CKSUM``,
   ``tx_offload_capa:DEV_TX_OFFLOAD_UDP_CKSUM,DEV_TX_OFFLOAD_TCP_CKSUM,DEV_TX_OFFLOAD_SCTP_CKSUM``.
 
 
@@ -574,8 +575,9 @@ MACsec offload
 
 Supports MACsec.
 
+* **[uses]     rte_eth_rxconf,rte_eth_rxmode**: ``offloads:DEV_RX_OFFLOAD_MACSEC_STRIP``.
 * **[uses]     mbuf**: ``mbuf.ol_flags:PKT_TX_MACSEC``.
-* **[provides] rte_eth_dev_info**: ``rx_offload_capa:DEV_RX_OFFLOAD_MACSEC_STRIP``,
+* **[provides] rte_eth_dev_info**: ``rx_offload_capa,rx_queue_offload_capa:DEV_RX_OFFLOAD_MACSEC_STRIP``,
   ``tx_offload_capa:DEV_TX_OFFLOAD_MACSEC_INSERT``.
 
 
@@ -586,13 +588,14 @@ Inner L3 checksum
 
 Supports inner packet L3 checksum.
 
+* **[uses]     rte_eth_rxconf,rte_eth_rxmode**: ``offloads:DEV_RX_OFFLOAD_OUTER_IPV4_CKSUM``.
 * **[uses]     mbuf**: ``mbuf.ol_flags:PKT_TX_IP_CKSUM``,
   ``mbuf.ol_flags:PKT_TX_IPV4`` | ``PKT_TX_IPV6``,
   ``mbuf.ol_flags:PKT_TX_OUTER_IP_CKSUM``,
   ``mbuf.ol_flags:PKT_TX_OUTER_IPV4`` | ``PKT_TX_OUTER_IPV6``.
 * **[uses]     mbuf**: ``mbuf.outer_l2_len``, ``mbuf.outer_l3_len``.
 * **[provides] mbuf**: ``mbuf.ol_flags:PKT_RX_EIP_CKSUM_BAD``.
-* **[provides] rte_eth_dev_info**: ``rx_offload_capa:DEV_RX_OFFLOAD_OUTER_IPV4_CKSUM``,
+* **[provides] rte_eth_dev_info**: ``rx_offload_capa,rx_queue_offload_capa:DEV_RX_OFFLOAD_OUTER_IPV4_CKSUM``,
   ``tx_offload_capa:DEV_TX_OFFLOAD_OUTER_IPV4_CKSUM``.
 
 
diff --git a/lib/librte_ether/rte_ethdev.c b/lib/librte_ether/rte_ethdev.c
index a88916f2a..56c104d86 100644
--- a/lib/librte_ether/rte_ethdev.c
+++ b/lib/librte_ether/rte_ethdev.c
@@ -687,12 +687,90 @@ rte_eth_speed_bitflag(uint32_t speed, int duplex)
 	}
 }
 
+/**
+ * A conversion function from rxmode bitfield API.
+ */
+static void
+rte_eth_convert_rx_offload_bitfield(const struct rte_eth_rxmode *rxmode,
+				    uint64_t *rx_offloads)
+{
+	uint64_t offloads = 0;
+
+	if (rxmode->header_split == 1)
+		offloads |= DEV_RX_OFFLOAD_HEADER_SPLIT;
+	if (rxmode->hw_ip_checksum == 1)
+		offloads |= DEV_RX_OFFLOAD_CHECKSUM;
+	if (rxmode->hw_vlan_filter == 1)
+		offloads |= DEV_RX_OFFLOAD_VLAN_FILTER;
+	if (rxmode->hw_vlan_strip == 1)
+		offloads |= DEV_RX_OFFLOAD_VLAN_STRIP;
+	if (rxmode->hw_vlan_extend == 1)
+		offloads |= DEV_RX_OFFLOAD_VLAN_EXTEND;
+	if (rxmode->jumbo_frame == 1)
+		offloads |= DEV_RX_OFFLOAD_JUMBO_FRAME;
+	if (rxmode->hw_strip_crc == 1)
+		offloads |= DEV_RX_OFFLOAD_CRC_STRIP;
+	if (rxmode->enable_scatter == 1)
+		offloads |= DEV_RX_OFFLOAD_SCATTER;
+	if (rxmode->enable_lro == 1)
+		offloads |= DEV_RX_OFFLOAD_TCP_LRO;
+
+	*rx_offloads = offloads;
+}
+
+/**
+ * A conversion function from rxmode offloads API.
+ */
+static void
+rte_eth_convert_rx_offloads(const uint64_t rx_offloads,
+			    struct rte_eth_rxmode *rxmode)
+{
+
+	if (rx_offloads & DEV_RX_OFFLOAD_HEADER_SPLIT)
+		rxmode->header_split = 1;
+	else
+		rxmode->header_split = 0;
+	if (rx_offloads & DEV_RX_OFFLOAD_CHECKSUM)
+		rxmode->hw_ip_checksum = 1;
+	else
+		rxmode->hw_ip_checksum = 0;
+	if (rx_offloads & DEV_RX_OFFLOAD_VLAN_FILTER)
+		rxmode->hw_vlan_filter = 1;
+	else
+		rxmode->hw_vlan_filter = 0;
+	if (rx_offloads & DEV_RX_OFFLOAD_VLAN_STRIP)
+		rxmode->hw_vlan_strip = 1;
+	else
+		rxmode->hw_vlan_strip = 0;
+	if (rx_offloads & DEV_RX_OFFLOAD_VLAN_EXTEND)
+		rxmode->hw_vlan_extend = 1;
+	else
+		rxmode->hw_vlan_extend = 0;
+	if (rx_offloads & DEV_RX_OFFLOAD_JUMBO_FRAME)
+		rxmode->jumbo_frame = 1;
+	else
+		rxmode->jumbo_frame = 0;
+	if (rx_offloads & DEV_RX_OFFLOAD_CRC_STRIP)
+		rxmode->hw_strip_crc = 1;
+	else
+		rxmode->hw_strip_crc = 0;
+	if (rx_offloads & DEV_RX_OFFLOAD_SCATTER)
+		rxmode->enable_scatter = 1;
+	else
+		rxmode->enable_scatter = 0;
+	if (rx_offloads & DEV_RX_OFFLOAD_TCP_LRO)
+		rxmode->enable_lro = 1;
+	else
+		rxmode->enable_lro = 0;
+}
+
 int
 rte_eth_dev_configure(uint8_t port_id, uint16_t nb_rx_q, uint16_t nb_tx_q,
 		      const struct rte_eth_conf *dev_conf)
 {
 	struct rte_eth_dev *dev;
 	struct rte_eth_dev_info dev_info;
+	struct rte_eth_conf local_conf = *dev_conf;
 	int diag;
 
 	RTE_ETH_VALID_PORTID_OR_ERR_RET(port_id, -EINVAL);
@@ -722,8 +800,20 @@ rte_eth_dev_configure(uint8_t port_id, uint16_t nb_rx_q, uint16_t nb_tx_q,
 		return -EBUSY;
 	}
 
+	/*
+	 * Convert between the two offloads APIs to enable PMDs to support
+	 * only one of them.
+	 */
+	if ((dev_conf->rxmode.ignore_offload_bitfield == 0)) {
+		rte_eth_convert_rx_offload_bitfield(
+				&dev_conf->rxmode, &local_conf.rxmode.offloads);
+	} else {
+		rte_eth_convert_rx_offloads(dev_conf->rxmode.offloads,
+					    &local_conf.rxmode);
+	}
+
 	/* Copy the dev_conf parameter into the dev structure */
-	memcpy(&dev->data->dev_conf, dev_conf, sizeof(dev->data->dev_conf));
+	memcpy(&dev->data->dev_conf, &local_conf, sizeof(dev->data->dev_conf));
 
 	/*
 	 * Check that the numbers of RX and TX queues are not greater
@@ -767,7 +857,7 @@ rte_eth_dev_configure(uint8_t port_id, uint16_t nb_rx_q, uint16_t nb_tx_q,
 	 * If jumbo frames are enabled, check that the maximum RX packet
 	 * length is supported by the configured device.
 	 */
-	if (dev_conf->rxmode.jumbo_frame == 1) {
+	if (local_conf.rxmode.offloads & DEV_RX_OFFLOAD_JUMBO_FRAME) {
 		if (dev_conf->rxmode.max_rx_pkt_len >
 		    dev_info.max_rx_pktlen) {
 			RTE_PMD_DEBUG_TRACE("ethdev port_id=%d max_rx_pkt_len %u"
@@ -1021,6 +1111,7 @@ rte_eth_rx_queue_setup(uint8_t port_id, uint16_t rx_queue_id,
 	uint32_t mbp_buf_size;
 	struct rte_eth_dev *dev;
 	struct rte_eth_dev_info dev_info;
+	struct rte_eth_rxconf local_conf;
 	void **rxq;
 
 	RTE_ETH_VALID_PORTID_OR_ERR_RET(port_id, -EINVAL);
@@ -1091,8 +1182,18 @@ rte_eth_rx_queue_setup(uint8_t port_id, uint16_t rx_queue_id,
 	if (rx_conf == NULL)
 		rx_conf = &dev_info.default_rxconf;
 
+	local_conf = *rx_conf;
+	if (dev->data->dev_conf.rxmode.ignore_offload_bitfield == 0) {
+		/**
+		 * Reflect port offloads to queue offloads in order for
+		 * offloads to not be discarded.
+		 */
+		rte_eth_convert_rx_offload_bitfield(&dev->data->dev_conf.rxmode,
+						    &local_conf.offloads);
+	}
+
 	ret = (*dev->dev_ops->rx_queue_setup)(dev, rx_queue_id, nb_rx_desc,
-					      socket_id, rx_conf, mp);
+					      socket_id, &local_conf, mp);
 	if (!ret) {
 		if (!dev->data->min_rx_buf_size ||
 		    dev->data->min_rx_buf_size > mbp_buf_size)
@@ -1996,7 +2097,8 @@ rte_eth_dev_vlan_filter(uint8_t port_id, uint16_t vlan_id, int on)
 
 	RTE_ETH_VALID_PORTID_OR_ERR_RET(port_id, -ENODEV);
 	dev = &rte_eth_devices[port_id];
-	if (!(dev->data->dev_conf.rxmode.hw_vlan_filter)) {
+	if (!(dev->data->dev_conf.rxmode.offloads &
+	      DEV_RX_OFFLOAD_VLAN_FILTER)) {
 		RTE_PMD_DEBUG_TRACE("port %d: vlan-filtering disabled\n", port_id);
 		return -ENOSYS;
 	}
@@ -2072,23 +2174,41 @@ rte_eth_dev_set_vlan_offload(uint8_t port_id, int offload_mask)
 
 	/*check which option changed by application*/
 	cur = !!(offload_mask & ETH_VLAN_STRIP_OFFLOAD);
-	org = !!(dev->data->dev_conf.rxmode.hw_vlan_strip);
+	org = !!(dev->data->dev_conf.rxmode.offloads &
+		 DEV_RX_OFFLOAD_VLAN_STRIP);
 	if (cur != org) {
-		dev->data->dev_conf.rxmode.hw_vlan_strip = (uint8_t)cur;
+		if (cur)
+			dev->data->dev_conf.rxmode.offloads |=
+				DEV_RX_OFFLOAD_VLAN_STRIP;
+		else
+			dev->data->dev_conf.rxmode.offloads &=
+				~DEV_RX_OFFLOAD_VLAN_STRIP;
 		mask |= ETH_VLAN_STRIP_MASK;
 	}
 
 	cur = !!(offload_mask & ETH_VLAN_FILTER_OFFLOAD);
-	org = !!(dev->data->dev_conf.rxmode.hw_vlan_filter);
+	org = !!(dev->data->dev_conf.rxmode.offloads &
+		 DEV_RX_OFFLOAD_VLAN_FILTER);
 	if (cur != org) {
-		dev->data->dev_conf.rxmode.hw_vlan_filter = (uint8_t)cur;
+		if (cur)
+			dev->data->dev_conf.rxmode.offloads |=
+				DEV_RX_OFFLOAD_VLAN_FILTER;
+		else
+			dev->data->dev_conf.rxmode.offloads &=
+				~DEV_RX_OFFLOAD_VLAN_FILTER;
 		mask |= ETH_VLAN_FILTER_MASK;
 	}
 
 	cur = !!(offload_mask & ETH_VLAN_EXTEND_OFFLOAD);
-	org = !!(dev->data->dev_conf.rxmode.hw_vlan_extend);
+	org = !!(dev->data->dev_conf.rxmode.offloads &
+		 DEV_RX_OFFLOAD_VLAN_EXTEND);
 	if (cur != org) {
-		dev->data->dev_conf.rxmode.hw_vlan_extend = (uint8_t)cur;
+		if (cur)
+			dev->data->dev_conf.rxmode.offloads |=
+				DEV_RX_OFFLOAD_VLAN_EXTEND;
+		else
+			dev->data->dev_conf.rxmode.offloads &=
+				~DEV_RX_OFFLOAD_VLAN_EXTEND;
 		mask |= ETH_VLAN_EXTEND_MASK;
 	}
 
@@ -2097,6 +2217,13 @@ rte_eth_dev_set_vlan_offload(uint8_t port_id, int offload_mask)
 		return ret;
 
 	RTE_FUNC_PTR_OR_ERR_RET(*dev->dev_ops->vlan_offload_set, -ENOTSUP);
+
+	/*
+	 * Convert to the offload bitfield API just in case the underlying PMD
+	 * still supporting it.
+	 */
+	rte_eth_convert_rx_offloads(dev->data->dev_conf.rxmode.offloads,
+				    &dev->data->dev_conf.rxmode);
 	(*dev->dev_ops->vlan_offload_set)(dev, mask);
 
 	return ret;
@@ -2111,13 +2238,16 @@ rte_eth_dev_get_vlan_offload(uint8_t port_id)
 	RTE_ETH_VALID_PORTID_OR_ERR_RET(port_id, -ENODEV);
 	dev = &rte_eth_devices[port_id];
 
-	if (dev->data->dev_conf.rxmode.hw_vlan_strip)
+	if (dev->data->dev_conf.rxmode.offloads &
+	    DEV_RX_OFFLOAD_VLAN_STRIP)
 		ret |= ETH_VLAN_STRIP_OFFLOAD;
 
-	if (dev->data->dev_conf.rxmode.hw_vlan_filter)
+	if (dev->data->dev_conf.rxmode.offloads &
+	    DEV_RX_OFFLOAD_VLAN_FILTER)
 		ret |= ETH_VLAN_FILTER_OFFLOAD;
 
-	if (dev->data->dev_conf.rxmode.hw_vlan_extend)
+	if (dev->data->dev_conf.rxmode.offloads &
+	    DEV_RX_OFFLOAD_VLAN_EXTEND)
 		ret |= ETH_VLAN_EXTEND_OFFLOAD;
 
 	return ret;
diff --git a/lib/librte_ether/rte_ethdev.h b/lib/librte_ether/rte_ethdev.h
index 99cdd54d4..6a2af355a 100644
--- a/lib/librte_ether/rte_ethdev.h
+++ b/lib/librte_ether/rte_ethdev.h
@@ -348,7 +348,18 @@ struct rte_eth_rxmode {
 	enum rte_eth_rx_mq_mode mq_mode;
 	uint32_t max_rx_pkt_len;  /**< Only used if jumbo_frame enabled. */
 	uint16_t split_hdr_size;  /**< hdr buf size (header_split enabled).*/
+	/**
+	 * Per-port Rx offloads to be set using DEV_RX_OFFLOAD_* flags.
+	 * Only offloads set on rx_offload_capa field on rte_eth_dev_info
+	 * structure are allowed to be set.
+	 */
+	uint64_t offloads;
 	__extension__
+	/**
+	 * Below bitfield API is obsolete. Application should
+	 * enable per-port offloads using the offload field
+	 * above.
+	 */
 	uint16_t header_split : 1, /**< Header Split enable. */
 		hw_ip_checksum   : 1, /**< IP/UDP/TCP checksum offload enable. */
 		hw_vlan_filter   : 1, /**< VLAN filter enable. */
@@ -357,7 +368,17 @@ struct rte_eth_rxmode {
 		jumbo_frame      : 1, /**< Jumbo Frame Receipt enable. */
 		hw_strip_crc     : 1, /**< Enable CRC stripping by hardware. */
 		enable_scatter   : 1, /**< Enable scatter packets rx handler */
-		enable_lro       : 1; /**< Enable LRO */
+		enable_lro       : 1, /**< Enable LRO */
+		/**
+		 * When set, the offload bitfield should be ignored.
+		 * Instead, per-port Rx offloads should be set in the
+		 * offloads field above.
+		 * Per-queue offloads should be set in the rte_eth_rxconf
+		 * structure.
+		 * This bit is temporary until the rxmode bitfield offloads
+		 * API is deprecated.
+		ignore_offload_bitfield : 1;
 };
 
 /**
@@ -691,6 +712,12 @@ struct rte_eth_rxconf {
 	uint16_t rx_free_thresh; /**< Drives the freeing of RX descriptors. */
 	uint8_t rx_drop_en; /**< Drop packets if no descriptors are available. */
 	uint8_t rx_deferred_start; /**< Do not start queue with rte_eth_dev_start(). */
+	/**
+	 * Per-queue Rx offloads to be set using DEV_RX_OFFLOAD_* flags.
+	 * Only offloads set on rx_queue_offload_capa field on rte_eth_dev_info
+	 * structure are allowed to be set.
+	 */
+	uint64_t offloads;
 };
 
 #define ETH_TXQ_FLAGS_NOMULTSEGS 0x0001 /**< nb_segs=1 for all mbufs */
@@ -907,6 +934,18 @@ struct rte_eth_conf {
 #define DEV_RX_OFFLOAD_QINQ_STRIP  0x00000020
 #define DEV_RX_OFFLOAD_OUTER_IPV4_CKSUM 0x00000040
 #define DEV_RX_OFFLOAD_MACSEC_STRIP     0x00000080
+#define DEV_RX_OFFLOAD_HEADER_SPLIT	0x00000100
+#define DEV_RX_OFFLOAD_VLAN_FILTER	0x00000200
+#define DEV_RX_OFFLOAD_VLAN_EXTEND	0x00000400
+#define DEV_RX_OFFLOAD_JUMBO_FRAME	0x00000800
+#define DEV_RX_OFFLOAD_CRC_STRIP	0x00001000
+#define DEV_RX_OFFLOAD_SCATTER		0x00002000
+#define DEV_RX_OFFLOAD_CHECKSUM (DEV_RX_OFFLOAD_IPV4_CKSUM | \
+				 DEV_RX_OFFLOAD_UDP_CKSUM | \
+				 DEV_RX_OFFLOAD_TCP_CKSUM)
+#define DEV_RX_OFFLOAD_VLAN (DEV_RX_OFFLOAD_VLAN_STRIP | \
+			     DEV_RX_OFFLOAD_VLAN_FILTER | \
+			     DEV_RX_OFFLOAD_VLAN_EXTEND)
 
 /**
  * TX offload capabilities of a device.
@@ -949,8 +988,11 @@ struct rte_eth_dev_info {
 	/** Maximum number of hash MAC addresses for MTA and UTA. */
 	uint16_t max_vfs; /**< Maximum number of VFs. */
 	uint16_t max_vmdq_pools; /**< Maximum number of VMDq pools. */
-	uint32_t rx_offload_capa; /**< Device RX offload capabilities. */
+	uint64_t rx_offload_capa;
+	/**< Device per port RX offload capabilities. */
 	uint32_t tx_offload_capa; /**< Device TX offload capabilities. */
+	uint64_t rx_queue_offload_capa;
+	/**< Device per queue RX offload capabilities. */
 	uint16_t reta_size;
 	/**< Device redirection table size, the total number of entries. */
 	uint8_t hash_key_size; /**< Hash key size in bytes */
@@ -1874,6 +1916,9 @@ uint32_t rte_eth_speed_bitflag(uint32_t speed, int duplex);
  *        each statically configurable offload hardware feature provided by
  *        Ethernet devices, such as IP checksum or VLAN tag stripping for
  *        example.
+ *        The Rx offload bitfield API is obsolete and will be deprecated.
+ *        Applications should set the ignore_offload_bitfield bit in the
+ *        *rxmode* structure and use the offloads field to set per-port
+ *        offloads instead.
  *     - the Receive Side Scaling (RSS) configuration when using multiple RX
  *         queues per port.
  *
@@ -1927,6 +1972,8 @@ void _rte_eth_dev_reset(struct rte_eth_dev *dev);
  *   The *rx_conf* structure contains an *rx_thresh* structure with the values
  *   of the Prefetch, Host, and Write-Back threshold registers of the receive
  *   ring.
+ *   In addition it contains the hardware offloads features to activate using
+ *   the DEV_RX_OFFLOAD_* flags.
  * @param mb_pool
  *   The pointer to the memory pool from which to allocate *rte_mbuf* network
  *   memory buffers to populate each descriptor of the receive ring.
-- 
2.12.0

^ permalink raw reply	[flat|nested] 134+ messages in thread

* [dpdk-dev] [PATCH v4 2/3] ethdev: introduce Tx queue offloads API
  2017-09-17  6:54     ` [dpdk-dev] [PATCH v4 0/3] " Shahaf Shuler
  2017-09-17  6:54       ` [dpdk-dev] [PATCH v4 1/3] ethdev: introduce Rx queue " Shahaf Shuler
@ 2017-09-17  6:54       ` Shahaf Shuler
  2017-09-18  7:50         ` Andrew Rybchenko
  2017-09-17  6:54       ` [dpdk-dev] [PATCH v4 3/3] doc: add details on ethdev " Shahaf Shuler
                         ` (2 subsequent siblings)
  4 siblings, 1 reply; 134+ messages in thread
From: Shahaf Shuler @ 2017-09-17  6:54 UTC (permalink / raw)
  To: thomas, jerin.jacob, konstantin.ananyev, arybchenko; +Cc: dev

Introduce a new API to configure Tx offloads.

In the new API, offloads are divided into per-port and per-queue
offloads. The PMD reports capability for each of them.
Offloads are enabled using the existing DEV_TX_OFFLOAD_* flags.
To enable a per-port offload, the offload should be set on both the device
configuration and the queue configuration. To enable a per-queue offload, the
offload can be set only on the queue configuration.

In addition, the Tx offloads will be disabled by default and
enabled according to application needs. This will greatly simplify PMD
management of the different offloads.

Applications should set the ETH_TXQ_FLAGS_IGNORE flag on the txq_flags
field in order to move to the new API.

The old Tx offloads API is kept for the time being, in order to enable a
smooth transition for PMDs and applications to the new API.
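
For illustration, a minimal sketch of a queue setup moving to the new Tx
API (port_id is assumed to be already configured; the offload selection is
just an example and must be advertised in tx_offload_capa; error checks
omitted):

struct rte_eth_dev_info dev_info;
struct rte_eth_txconf txq_conf;

rte_eth_dev_info_get(port_id, &dev_info);
txq_conf = dev_info.default_txconf;
/* Ignore the legacy txq_flags bits and use the offloads field instead. */
txq_conf.txq_flags = ETH_TXQ_FLAGS_IGNORE;
txq_conf.offloads = DEV_TX_OFFLOAD_IPV4_CKSUM |
		    DEV_TX_OFFLOAD_TCP_CKSUM |
		    DEV_TX_OFFLOAD_MBUF_FAST_FREE;
rte_eth_tx_queue_setup(port_id, 0, 512, rte_eth_dev_socket_id(port_id),
		       &txq_conf);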

Signed-off-by: Shahaf Shuler <shahafs@mellanox.com>
---
 doc/guides/nics/features.rst  | 33 ++++++++++++++-----
 lib/librte_ether/rte_ethdev.c | 67 +++++++++++++++++++++++++++++++++++++-
 lib/librte_ether/rte_ethdev.h | 38 ++++++++++++++++++++-
 3 files changed, 128 insertions(+), 10 deletions(-)

diff --git a/doc/guides/nics/features.rst b/doc/guides/nics/features.rst
index 4e68144ef..1a8af473b 100644
--- a/doc/guides/nics/features.rst
+++ b/doc/guides/nics/features.rst
@@ -131,7 +131,8 @@ Lock-free Tx queue
 If a PMD advertises DEV_TX_OFFLOAD_MT_LOCKFREE capable, multiple threads can
 invoke rte_eth_tx_burst() concurrently on the same Tx queue without SW lock.
 
-* **[provides] rte_eth_dev_info**: ``tx_offload_capa:DEV_TX_OFFLOAD_MT_LOCKFREE``.
+* **[uses]    rte_eth_txconf,rte_eth_txmode**: ``offloads:DEV_TX_OFFLOAD_MT_LOCKFREE``.
+* **[provides] rte_eth_dev_info**: ``tx_offload_capa,tx_queue_offload_capa:DEV_TX_OFFLOAD_MT_LOCKFREE``.
 * **[related]  API**: ``rte_eth_tx_burst()``.
 
 
@@ -220,11 +221,12 @@ TSO
 
 Supports TCP Segmentation Offloading.
 
+* **[uses]       rte_eth_txconf,rte_eth_txmode**: ``offloads:DEV_TX_OFFLOAD_TCP_TSO``.
 * **[uses]       rte_eth_desc_lim**: ``nb_seg_max``, ``nb_mtu_seg_max``.
 * **[uses]       mbuf**: ``mbuf.ol_flags:PKT_TX_TCP_SEG``.
 * **[uses]       mbuf**: ``mbuf.tso_segsz``, ``mbuf.l2_len``, ``mbuf.l3_len``, ``mbuf.l4_len``.
 * **[implements] datapath**: ``TSO functionality``.
-* **[provides]   rte_eth_dev_info**: ``tx_offload_capa:DEV_TX_OFFLOAD_TCP_TSO,DEV_TX_OFFLOAD_UDP_TSO``.
+* **[provides]   rte_eth_dev_info**: ``tx_offload_capa,tx_queue_offload_capa:DEV_TX_OFFLOAD_TCP_TSO,DEV_TX_OFFLOAD_UDP_TSO``.
 
 
 .. _nic_features_promiscuous_mode:
@@ -510,10 +512,11 @@ VLAN offload
 Supports VLAN offload to hardware.
 
 * **[uses]       rte_eth_rxconf,rte_eth_rxmode**: ``offloads:DEV_RX_OFFLOAD_VLAN_STRIP,DEV_RX_OFFLOAD_VLAN_FILTER,DEV_RX_OFFLOAD_VLAN_EXTEND``.
+* **[uses]       rte_eth_txconf,rte_eth_txmode**: ``offloads:DEV_TX_OFFLOAD_VLAN_INSERT``.
 * **[implements] eth_dev_ops**: ``vlan_offload_set``.
 * **[provides]   mbuf**: ``mbuf.ol_flags:PKT_RX_VLAN_STRIPPED``, ``mbuf.vlan_tci``.
 * **[provides]   rte_eth_dev_info**: ``rx_offload_capa,rx_queue_offload_capa:DEV_RX_OFFLOAD_VLAN_STRIP``,
-  ``tx_offload_capa:DEV_TX_OFFLOAD_VLAN_INSERT``.
+  ``tx_offload_capa,tx_queue_offload_capa:DEV_TX_OFFLOAD_VLAN_INSERT``.
 * **[related]    API**: ``rte_eth_dev_set_vlan_offload()``,
   ``rte_eth_dev_get_vlan_offload()``.
 
@@ -526,11 +529,12 @@ QinQ offload
 Supports QinQ (queue in queue) offload.
 
 * **[uses]     rte_eth_rxconf,rte_eth_rxmode**: ``offloads:DEV_RX_OFFLOAD_QINQ_STRIP``.
+* **[uses]     rte_eth_txconf,rte_eth_txmode**: ``offloads:DEV_TX_OFFLOAD_QINQ_INSERT``.
 * **[uses]     mbuf**: ``mbuf.ol_flags:PKT_TX_QINQ_PKT``.
 * **[provides] mbuf**: ``mbuf.ol_flags:PKT_RX_QINQ_STRIPPED``, ``mbuf.vlan_tci``,
    ``mbuf.vlan_tci_outer``.
 * **[provides] rte_eth_dev_info**: ``rx_offload_capa,rx_queue_offload_capa:DEV_RX_OFFLOAD_QINQ_STRIP``,
-  ``tx_offload_capa:DEV_TX_OFFLOAD_QINQ_INSERT``.
+  ``tx_offload_capa,tx_queue_offload_capa:DEV_TX_OFFLOAD_QINQ_INSERT``.
 
 
 .. _nic_features_l3_checksum_offload:
@@ -541,13 +545,14 @@ L3 checksum offload
 Supports L3 checksum offload.
 
 * **[uses]     rte_eth_rxconf,rte_eth_rxmode**: ``offloads:DEV_RX_OFFLOAD_IPV4_CKSUM``.
+* **[uses]     rte_eth_txconf,rte_eth_txmode**: ``offloads:DEV_TX_OFFLOAD_IPV4_CKSUM``.
 * **[uses]     mbuf**: ``mbuf.ol_flags:PKT_TX_IP_CKSUM``,
   ``mbuf.ol_flags:PKT_TX_IPV4`` | ``PKT_TX_IPV6``.
 * **[provides] mbuf**: ``mbuf.ol_flags:PKT_RX_IP_CKSUM_UNKNOWN`` |
   ``PKT_RX_IP_CKSUM_BAD`` | ``PKT_RX_IP_CKSUM_GOOD`` |
   ``PKT_RX_IP_CKSUM_NONE``.
 * **[provides] rte_eth_dev_info**: ``rx_offload_capa,rx_queue_offload_capa:DEV_RX_OFFLOAD_IPV4_CKSUM``,
-  ``tx_offload_capa:DEV_TX_OFFLOAD_IPV4_CKSUM``.
+  ``tx_offload_capa,tx_queue_offload_capa:DEV_TX_OFFLOAD_IPV4_CKSUM``.
 
 
 .. _nic_features_l4_checksum_offload:
@@ -558,6 +563,7 @@ L4 checksum offload
 Supports L4 checksum offload.
 
 * **[uses]     rte_eth_rxconf,rte_eth_rxmode**: ``offloads:DEV_RX_OFFLOAD_UDP_CKSUM,DEV_RX_OFFLOAD_TCP_CKSUM``.
+* **[uses]     rte_eth_txconf,rte_eth_txmode**: ``offloads:DEV_TX_OFFLOAD_UDP_CKSUM,DEV_TX_OFFLOAD_TCP_CKSUM,DEV_TX_OFFLOAD_SCTP_CKSUM``.
 * **[uses]     mbuf**: ``mbuf.ol_flags:PKT_TX_IPV4`` | ``PKT_TX_IPV6``,
   ``mbuf.ol_flags:PKT_TX_L4_NO_CKSUM`` | ``PKT_TX_TCP_CKSUM`` |
   ``PKT_TX_SCTP_CKSUM`` | ``PKT_TX_UDP_CKSUM``.
@@ -565,7 +571,7 @@ Supports L4 checksum offload.
   ``PKT_RX_L4_CKSUM_BAD`` | ``PKT_RX_L4_CKSUM_GOOD`` |
   ``PKT_RX_L4_CKSUM_NONE``.
 * **[provides] rte_eth_dev_info**: ``rx_offload_capa,rx_queue_offload_capa:DEV_RX_OFFLOAD_UDP_CKSUM,DEV_RX_OFFLOAD_TCP_CKSUM``,
-  ``tx_offload_capa:DEV_TX_OFFLOAD_UDP_CKSUM,DEV_TX_OFFLOAD_TCP_CKSUM,DEV_TX_OFFLOAD_SCTP_CKSUM``.
+  ``tx_offload_capa,tx_queue_offload_capa:DEV_TX_OFFLOAD_UDP_CKSUM,DEV_TX_OFFLOAD_TCP_CKSUM,DEV_TX_OFFLOAD_SCTP_CKSUM``.
 
 
 .. _nic_features_macsec_offload:
@@ -576,9 +582,10 @@ MACsec offload
 Supports MACsec.
 
 * **[uses]     rte_eth_rxconf,rte_eth_rxmode**: ``offloads:DEV_RX_OFFLOAD_MACSEC_STRIP``.
+* **[uses]     rte_eth_txconf,rte_eth_txmode**: ``offloads:DEV_TX_OFFLOAD_MACSEC_INSERT``.
 * **[uses]     mbuf**: ``mbuf.ol_flags:PKT_TX_MACSEC``.
 * **[provides] rte_eth_dev_info**: ``rx_offload_capa,rx_queue_offload_capa:DEV_RX_OFFLOAD_MACSEC_STRIP``,
-  ``tx_offload_capa:DEV_TX_OFFLOAD_MACSEC_INSERT``.
+  ``tx_offload_capa,tx_queue_offload_capa:DEV_TX_OFFLOAD_MACSEC_INSERT``.
 
 
 .. _nic_features_inner_l3_checksum:
@@ -589,6 +596,7 @@ Inner L3 checksum
 Supports inner packet L3 checksum.
 
 * **[uses]     rte_eth_rxconf,rte_eth_rxmode**: ``offloads:DEV_RX_OFFLOAD_OUTER_IPV4_CKSUM``.
+* **[uses]     rte_eth_txconf,rte_eth_txmode**: ``offloads:DEV_TX_OFFLOAD_OUTER_IPV4_CKSUM``.
 * **[uses]     mbuf**: ``mbuf.ol_flags:PKT_TX_IP_CKSUM``,
   ``mbuf.ol_flags:PKT_TX_IPV4`` | ``PKT_TX_IPV6``,
   ``mbuf.ol_flags:PKT_TX_OUTER_IP_CKSUM``,
@@ -596,7 +604,7 @@ Supports inner packet L3 checksum.
 * **[uses]     mbuf**: ``mbuf.outer_l2_len``, ``mbuf.outer_l3_len``.
 * **[provides] mbuf**: ``mbuf.ol_flags:PKT_RX_EIP_CKSUM_BAD``.
 * **[provides] rte_eth_dev_info**: ``rx_offload_capa,rx_queue_offload_capa:DEV_RX_OFFLOAD_OUTER_IPV4_CKSUM``,
-  ``tx_offload_capa:DEV_TX_OFFLOAD_OUTER_IPV4_CKSUM``.
+  ``tx_offload_capa,tx_queue_offload_capa:DEV_TX_OFFLOAD_OUTER_IPV4_CKSUM``.
 
 
 .. _nic_features_inner_l4_checksum:
@@ -620,6 +628,15 @@ Supports packet type parsing and returns a list of supported types.
 
 .. _nic_features_timesync:
 
+Mbuf fast free
+--------------
+
+Supports optimization for fast release of mbufs following successful Tx.
+Requires all mbufs to come from the same mempool and to have refcnt = 1.
+
+* **[uses]       rte_eth_txconf,rte_eth_txmode**: ``offloads:DEV_TX_OFFLOAD_MBUF_FAST_FREE``.
+* **[provides]   rte_eth_dev_info**: ``tx_offload_capa,tx_queue_offload_capa:DEV_TX_OFFLOAD_MBUF_FAST_FREE``.
+
 Timesync
 --------
 
diff --git a/lib/librte_ether/rte_ethdev.c b/lib/librte_ether/rte_ethdev.c
index 56c104d86..f0968dd14 100644
--- a/lib/librte_ether/rte_ethdev.c
+++ b/lib/librte_ether/rte_ethdev.c
@@ -1203,6 +1203,55 @@ rte_eth_rx_queue_setup(uint8_t port_id, uint16_t rx_queue_id,
 	return ret;
 }
 
+/**
+ * A conversion function from txq_flags API.
+ */
+static void
+rte_eth_convert_txq_flags(const uint32_t txq_flags, uint64_t *tx_offloads)
+{
+	uint64_t offloads = 0;
+
+	if (!(txq_flags & ETH_TXQ_FLAGS_NOMULTSEGS))
+		offloads |= DEV_TX_OFFLOAD_MULTI_SEGS;
+	if (!(txq_flags & ETH_TXQ_FLAGS_NOVLANOFFL))
+		offloads |= DEV_TX_OFFLOAD_VLAN_INSERT;
+	if (!(txq_flags & ETH_TXQ_FLAGS_NOXSUMSCTP))
+		offloads |= DEV_TX_OFFLOAD_SCTP_CKSUM;
+	if (!(txq_flags & ETH_TXQ_FLAGS_NOXSUMUDP))
+		offloads |= DEV_TX_OFFLOAD_UDP_CKSUM;
+	if (!(txq_flags & ETH_TXQ_FLAGS_NOXSUMTCP))
+		offloads |= DEV_TX_OFFLOAD_TCP_CKSUM;
+	if ((txq_flags & ETH_TXQ_FLAGS_NOREFCOUNT) &&
+	    (txq_flags & ETH_TXQ_FLAGS_NOMULTMEMP))
+		offloads |= DEV_TX_OFFLOAD_MBUF_FAST_FREE;
+
+	*tx_offloads = offloads;
+}
+
+/**
+ * A conversion function from offloads API.
+ */
+static void
+rte_eth_convert_txq_offloads(const uint64_t tx_offloads, uint32_t *txq_flags)
+{
+	uint32_t flags = 0;
+
+	if (!(tx_offloads & DEV_TX_OFFLOAD_MULTI_SEGS))
+		flags |= ETH_TXQ_FLAGS_NOMULTSEGS;
+	if (!(tx_offloads & DEV_TX_OFFLOAD_VLAN_INSERT))
+		flags |= ETH_TXQ_FLAGS_NOVLANOFFL;
+	if (!(tx_offloads & DEV_TX_OFFLOAD_SCTP_CKSUM))
+		flags |= ETH_TXQ_FLAGS_NOXSUMSCTP;
+	if (!(tx_offloads & DEV_TX_OFFLOAD_UDP_CKSUM))
+		flags |= ETH_TXQ_FLAGS_NOXSUMUDP;
+	if (!(tx_offloads & DEV_TX_OFFLOAD_TCP_CKSUM))
+		flags |= ETH_TXQ_FLAGS_NOXSUMTCP;
+	if (tx_offloads & DEV_TX_OFFLOAD_MBUF_FAST_FREE)
+		flags |= (ETH_TXQ_FLAGS_NOREFCOUNT | ETH_TXQ_FLAGS_NOMULTMEMP);
+
+	*txq_flags = flags;
+}
+
 int
 rte_eth_tx_queue_setup(uint8_t port_id, uint16_t tx_queue_id,
 		       uint16_t nb_tx_desc, unsigned int socket_id,
@@ -1210,6 +1259,7 @@ rte_eth_tx_queue_setup(uint8_t port_id, uint16_t tx_queue_id,
 {
 	struct rte_eth_dev *dev;
 	struct rte_eth_dev_info dev_info;
+	struct rte_eth_txconf local_conf;
 	void **txq;
 
 	RTE_ETH_VALID_PORTID_OR_ERR_RET(port_id, -EINVAL);
@@ -1254,8 +1304,23 @@ rte_eth_tx_queue_setup(uint8_t port_id, uint16_t tx_queue_id,
 	if (tx_conf == NULL)
 		tx_conf = &dev_info.default_txconf;
 
+	/*
+	 * Convert between the two offloads APIs to enable PMDs to support
+	 * only one of them.
+	 */
+	local_conf = *tx_conf;
+	if (tx_conf->txq_flags & ETH_TXQ_FLAGS_IGNORE) {
+		rte_eth_convert_txq_offloads(tx_conf->offloads,
+					     &local_conf.txq_flags);
+		/* Keep the ignore flag. */
+		local_conf.txq_flags |= ETH_TXQ_FLAGS_IGNORE;
+	} else {
+		rte_eth_convert_txq_flags(tx_conf->txq_flags,
+					  &local_conf.offloads);
+	}
+
 	return (*dev->dev_ops->tx_queue_setup)(dev, tx_queue_id, nb_tx_desc,
-					       socket_id, tx_conf);
+					       socket_id, &local_conf);
 }
 
 void
diff --git a/lib/librte_ether/rte_ethdev.h b/lib/librte_ether/rte_ethdev.h
index 6a2af355a..0a75b1b1e 100644
--- a/lib/librte_ether/rte_ethdev.h
+++ b/lib/librte_ether/rte_ethdev.h
@@ -692,6 +692,12 @@ struct rte_eth_vmdq_rx_conf {
  */
 struct rte_eth_txmode {
 	enum rte_eth_tx_mq_mode mq_mode; /**< TX multi-queues mode. */
+	/**
+	 * Per-port Tx offloads to be set using DEV_TX_OFFLOAD_* flags.
+	 * Only offloads set on tx_offload_capa field on rte_eth_dev_info
+	 * structure are allowed to be set.
+	 */
+	uint64_t offloads;
 
 	/* For i40e specifically */
 	uint16_t pvid;
@@ -734,6 +740,15 @@ struct rte_eth_rxconf {
 		(ETH_TXQ_FLAGS_NOXSUMSCTP | ETH_TXQ_FLAGS_NOXSUMUDP | \
 		 ETH_TXQ_FLAGS_NOXSUMTCP)
 /**
+ * When set, the txq_flags field should be ignored;
+ * instead, per-queue Tx offloads will be set in the offloads field
+ * located in the rte_eth_txconf struct.
+ * This flag is temporary until the rte_eth_txconf.txq_flags
+ * API is deprecated.
+ */
+#define ETH_TXQ_FLAGS_IGNORE	0x8000
+
+/**
  * A structure used to configure a TX ring of an Ethernet port.
  */
 struct rte_eth_txconf {
@@ -744,6 +759,12 @@ struct rte_eth_txconf {
 
 	uint32_t txq_flags; /**< Set flags for the Tx queue */
 	uint8_t tx_deferred_start; /**< Do not start queue with rte_eth_dev_start(). */
+	/**
+	 * Per-queue Tx offloads to be set  using DEV_TX_OFFLOAD_* flags.
+	 * Only offloads set on tx_queue_offload_capa field on rte_eth_dev_info
+	 * structure are allowed to be set.
+	 */
+	uint64_t offloads;
 };
 
 /**
@@ -968,6 +989,13 @@ struct rte_eth_conf {
 /**< Multiple threads can invoke rte_eth_tx_burst() concurrently on the same
  * tx queue without SW lock.
  */
+#define DEV_TX_OFFLOAD_MULTI_SEGS	0x00008000
+/**< Device supports multi segment send. */
+#define DEV_TX_OFFLOAD_MBUF_FAST_FREE	0x00010000
+/**< Device supports optimization for fast release of mbufs.
+ *   When set, the application must guarantee that, per queue, all mbufs come
+ *   from the same mempool and have refcnt = 1.
+ */
 
 struct rte_pci_device;
 
@@ -990,9 +1018,12 @@ struct rte_eth_dev_info {
 	uint16_t max_vmdq_pools; /**< Maximum number of VMDq pools. */
 	uint64_t rx_offload_capa;
 	/**< Device per port RX offload capabilities. */
-	uint32_t tx_offload_capa; /**< Device TX offload capabilities. */
+	uint64_t tx_offload_capa;
+	/**< Device per port TX offload capabilities. */
 	uint64_t rx_queue_offload_capa;
 	/**< Device per queue RX offload capabilities. */
+	uint64_t tx_queue_offload_capa;
+	/**< Device per queue TX offload capabilities. */
 	uint16_t reta_size;
 	/**< Device redirection table size, the total number of entries. */
 	uint8_t hash_key_size; /**< Hash key size in bytes */
@@ -2027,6 +2058,11 @@ int rte_eth_rx_queue_setup(uint8_t port_id, uint16_t rx_queue_id,
  *   - The *txq_flags* member contains flags to pass to the TX queue setup
  *     function to configure the behavior of the TX queue. This should be set
  *     to 0 if no special configuration is required.
+ *     This API is obsolete and will be deprecated. Applications
+ *     should set it to ETH_TXQ_FLAGS_IGNORE and use
+ *     the offloads field below.
+ *   - The *offloads* member contains Tx offloads to be enabled.
+ *     Offloads which are not set cannot be used on the datapath.
  *
  *     Note that setting *tx_free_thresh* or *tx_rs_thresh* value to 0 forces
  *     the transmit function to use default values.
-- 
2.12.0

^ permalink raw reply	[flat|nested] 134+ messages in thread

* [dpdk-dev] [PATCH v4 3/3] doc: add details on ethdev offloads API
  2017-09-17  6:54     ` [dpdk-dev] [PATCH v4 0/3] " Shahaf Shuler
  2017-09-17  6:54       ` [dpdk-dev] [PATCH v4 1/3] ethdev: introduce Rx queue " Shahaf Shuler
  2017-09-17  6:54       ` [dpdk-dev] [PATCH v4 2/3] ethdev: introduce Tx " Shahaf Shuler
@ 2017-09-17  6:54       ` Shahaf Shuler
  2017-09-18  7:51         ` Andrew Rybchenko
  2017-09-18 13:40         ` Mcnamara, John
  2017-09-18  7:51       ` [dpdk-dev] [PATCH v4 0/3] ethdev new " Andrew Rybchenko
  2017-09-28 18:54       ` [dpdk-dev] [PATCH v5 " Shahaf Shuler
  4 siblings, 2 replies; 134+ messages in thread
From: Shahaf Shuler @ 2017-09-17  6:54 UTC (permalink / raw)
  To: thomas, jerin.jacob, konstantin.ananyev, arybchenko; +Cc: dev

Add the programmer's guide details on the new offloads API introduced
by commits:

commit f649472cad9d ("ethdev: introduce Rx queue offloads API")
commit ecb46b66cda5 ("ethdev: introduce Tx queue offloads API")

Signed-off-by: Shahaf Shuler <shahafs@mellanox.com>
---
 doc/guides/prog_guide/poll_mode_drv.rst | 17 +++++++++++++++++
 1 file changed, 17 insertions(+)

diff --git a/doc/guides/prog_guide/poll_mode_drv.rst b/doc/guides/prog_guide/poll_mode_drv.rst
index 8922e39f4..03092ae98 100644
--- a/doc/guides/prog_guide/poll_mode_drv.rst
+++ b/doc/guides/prog_guide/poll_mode_drv.rst
@@ -310,6 +310,23 @@ exported by each PMD. The list of flags and their precise meaning is
 described in the mbuf API documentation and in the in :ref:`Mbuf Library
 <Mbuf_Library>`, section "Meta Information".
 
+Per-Port and Per-Queue Offloads
+^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
+
+In DPDK 17.11, a new offloads API was introduced.
+
+In the new API, offloads are divided into per-port and per-queue offloads.
+The different offload capabilities can be queried using ``rte_eth_dev_info_get()``. An offload which is supported can be either per-port or per-queue.
+
+Offloads are enabled using the existing ``DEV_TX_OFFLOAD_*`` or ``DEV_RX_OFFLOAD_*`` flags.
+Per-port offload configuration is set on ``rte_eth_dev_configure``. Per-queue offload configuration is set on ``rte_eth_rx_queue_setup`` and ``rte_eth_tx_queue_setup``.
+To enable per-port offload, the offload should be set on both device configuration and queue setup. In case of a mixed configuration the queue setup shell return with error.
+To enable per-queue offload, the offload can be set only on the queue setup.
+Offloads which are not enabled are disabled by default.
+
+For an application to use this Tx offloads API, it should set the ``ETH_TXQ_FLAGS_IGNORE`` flag in the ``txq_flags`` field located in the ``rte_eth_txconf`` struct. In that case it is not required to set other flags in ``txq_flags``.
+For an application to use this Rx offloads API, it should set the ``ignore_offload_bitfield`` bit in the ``rte_eth_rxmode`` struct. In that case it is not required to set the other bitfield offloads in the ``rxmode`` struct.
+
 Poll Mode Driver API
 --------------------
 
-- 
2.12.0

^ permalink raw reply	[flat|nested] 134+ messages in thread

* Re: [dpdk-dev] [PATCH v4 2/3] ethdev: introduce Tx queue offloads API
  2017-09-17  6:54       ` [dpdk-dev] [PATCH v4 2/3] ethdev: introduce Tx " Shahaf Shuler
@ 2017-09-18  7:50         ` Andrew Rybchenko
  0 siblings, 0 replies; 134+ messages in thread
From: Andrew Rybchenko @ 2017-09-18  7:50 UTC (permalink / raw)
  To: Shahaf Shuler, thomas, jerin.jacob, konstantin.ananyev; +Cc: dev

On 09/17/2017 09:54 AM, Shahaf Shuler wrote:
> Introduce a new API to configure Tx offloads.
>
> In the new API, offloads are divided into per-port and per-queue
> offloads. The PMD reports capability for each of them.
> Offloads are enabled using the existing DEV_TX_OFFLOAD_* flags.
> To enable a per-port offload, the offload should be set on both the device
> configuration and the queue configuration. To enable a per-queue offload, the
> offload can be set only on the queue configuration.
>
> In addition, the Tx offloads will be disabled by default and
> enabled according to application needs. This will greatly simplify PMD
> management of the different offloads.
>
> Applications should set the ETH_TXQ_FLAGS_IGNORE flag on the txq_flags
> field in order to move to the new API.
>
> The old Tx offloads API is kept for the time being, in order to enable a
> smooth transition for PMDs and applications to the new API.
>
> Signed-off-by: Shahaf Shuler <shahafs@mellanox.com>
> ---
>   doc/guides/nics/features.rst  | 33 ++++++++++++++-----
>   lib/librte_ether/rte_ethdev.c | 67 +++++++++++++++++++++++++++++++++++++-
>   lib/librte_ether/rte_ethdev.h | 38 ++++++++++++++++++++-
>   3 files changed, 128 insertions(+), 10 deletions(-)

<...>

> diff --git a/lib/librte_ether/rte_ethdev.h b/lib/librte_ether/rte_ethdev.h
> index 6a2af355a..0a75b1b1e 100644
> --- a/lib/librte_ether/rte_ethdev.h
> +++ b/lib/librte_ether/rte_ethdev.h

<...>

> @@ -744,6 +759,12 @@ struct rte_eth_txconf {
>   
>   	uint32_t txq_flags; /**< Set flags for the Tx queue */
>   	uint8_t tx_deferred_start; /**< Do not start queue with rte_eth_dev_start(). */
> +	/**
> +	 * Per-queue Tx offloads to be set  using DEV_TX_OFFLOAD_* flags.
> +	 * Only offloads set on tx_queue_offload_capa field on rte_eth_dev_info
> +	 * structure are allowed to be set.

It contradicts the statements that:
-  tx_queue_offload_capa covers per-queue offloads only
-  to enable a per-port offload, the offload should be set on both device
   configuration and queue configuration.
The same applies to Rx offloads as well.

> +	 */
> +	uint64_t offloads;
>   };
>   
>   /**

<...>

^ permalink raw reply	[flat|nested] 134+ messages in thread

* Re: [dpdk-dev] [PATCH v4 3/3] doc: add details on ethdev offloads API
  2017-09-17  6:54       ` [dpdk-dev] [PATCH v4 3/3] doc: add details on ethdev " Shahaf Shuler
@ 2017-09-18  7:51         ` Andrew Rybchenko
  2017-09-18 13:40         ` Mcnamara, John
  1 sibling, 0 replies; 134+ messages in thread
From: Andrew Rybchenko @ 2017-09-18  7:51 UTC (permalink / raw)
  To: Shahaf Shuler, thomas, jerin.jacob, konstantin.ananyev, arybchenko; +Cc: dev

On 09/17/2017 09:54 AM, Shahaf Shuler wrote:
> Add the programmer's guide details on the new offloads API introduced
> by commits:
>
> commit f649472cad9d ("ethdev: introduce Rx queue offloads API")
> commit ecb46b66cda5 ("ethdev: introduce Tx queue offloads API")
>
> Signed-off-by: Shahaf Shuler <shahafs@mellanox.com>
> ---
>   doc/guides/prog_guide/poll_mode_drv.rst | 17 +++++++++++++++++
>   1 file changed, 17 insertions(+)
>
> diff --git a/doc/guides/prog_guide/poll_mode_drv.rst b/doc/guides/prog_guide/poll_mode_drv.rst
> index 8922e39f4..03092ae98 100644
> --- a/doc/guides/prog_guide/poll_mode_drv.rst
> +++ b/doc/guides/prog_guide/poll_mode_drv.rst

<...>

> +To enable per-port offload, the offload should be set on both device configuration and queue setup. In case of a mixed configuration the queue setup shell return with error.

Typo "shell"

^ permalink raw reply	[flat|nested] 134+ messages in thread

* Re: [dpdk-dev] [PATCH v4 0/3] ethdev new offloads API
  2017-09-17  6:54     ` [dpdk-dev] [PATCH v4 0/3] " Shahaf Shuler
                         ` (2 preceding siblings ...)
  2017-09-17  6:54       ` [dpdk-dev] [PATCH v4 3/3] doc: add details on ethdev " Shahaf Shuler
@ 2017-09-18  7:51       ` Andrew Rybchenko
  2017-09-28 18:54       ` [dpdk-dev] [PATCH v5 " Shahaf Shuler
  4 siblings, 0 replies; 134+ messages in thread
From: Andrew Rybchenko @ 2017-09-18  7:51 UTC (permalink / raw)
  To: Shahaf Shuler, thomas, jerin.jacob, konstantin.ananyev; +Cc: dev

On 09/17/2017 09:54 AM, Shahaf Shuler wrote:
> Tx offloads configuration is per queue. Tx offloads are enabled by default,
> and can be disabled using ETH_TXQ_FLAGS_NO* flags.
> This behaviour is not consistent with the Rx side where the Rx offloads
> configuration is per port. Rx offloads are disabled by default and enabled
> according to bit field in rte_eth_rxmode structure.
>
> Moreover, considering more Tx and Rx offloads will be added
> over time, the cost of managing them all inside the PMD will be tremendous,
> as the PMD will need to check the matching for the entire offload set
> for each mbuf it handles.
> In addition, with the current approach each Rx offload added breaks
> ABI compatibility, as it requires adding entries to existing bit-fields.
>   
> The series addresses the above issues by defining a new offloads API.
> In the new API, offloads are divided into per-port and per-queue offloads,
> with a corresponding capability for each.
> The offloads are disabled by default. Each offload can be enabled or
> disabled using the existing DEV_TX_OFFLOADS_* or DEV_RX_OFFLOADS_* flags.
> Such an API will make it easy to add or remove offloads, without breaking
> ABI compatibility.
>
> In order to provide a smooth transition between the APIs the following actions
> were taken:
> *  The old offloads API is kept for the meanwhile.
> *  Helper functions which copy from the old to the new API were added to ethdev,
>     enabling the PMD to support only one of the APIs.

As I understand it, there is a helper to convert from the new to the old API
as well, allowing applications to use the new API and work fine with a PMD
which supports only the old API.

> Per the discussion on the RFC of this series [1], the integration plan which was
> decided is to do the transition in two phases:
> * ethdev API will move on 17.11.
> * Apps and examples will move on 18.02.
>
> This is to allow PMD maintainers sufficient time to adopt the new API.
>
> [1]
> http://dpdk.org/ml/archives/dev/2017-August/072643.html
>
> on v4:
>   - Added another patch for documentation.
>   - Fixed ETH_TXQ_FLAGS_IGNORE flag override.
>   - clarify the description of DEV_TX_OFFLOAD_MBUF_FAST_FREE offload.
>
> on v3:
>   - Introduce the DEV_TX_OFFLOAD_MBUF_FAST_FREE to act as an equivalent
>     for the no refcnt and single mempool flags.
>   - Fix features documentation.
>   - Fix comment style.
>
> on v2:
>   - Taking new approach of dividing offloads into per-queue and per-port one.
>   - Postpone the Tx/Rx public struct renaming to 18.02
>   - squash the helper functions into the Rx/Tx offloads intro patches.
>
> Shahaf Shuler (3):
>    ethdev: introduce Rx queue offloads API
>    ethdev: introduce Tx queue offloads API
>    doc: add details on ethdev offloads API
>
>   doc/guides/nics/features.rst            |  66 +++++---
>   doc/guides/prog_guide/poll_mode_drv.rst |  17 ++
>   lib/librte_ether/rte_ethdev.c           | 223 +++++++++++++++++++++++++--
>   lib/librte_ether/rte_ethdev.h           |  89 ++++++++++-
>   4 files changed, 355 insertions(+), 40 deletions(-)

Series-reviewed-by: Andrew Rybchenko <arybchenko@solarflare.com>

^ permalink raw reply	[flat|nested] 134+ messages in thread

* Re: [dpdk-dev] [PATCH 4/4] ethdev: add helpers to move to the new offloads API
  2017-09-14  8:02                                 ` Thomas Monjalon
@ 2017-09-18 10:31                                   ` Bruce Richardson
  2017-09-18 10:57                                     ` Ananyev, Konstantin
  0 siblings, 1 reply; 134+ messages in thread
From: Bruce Richardson @ 2017-09-18 10:31 UTC (permalink / raw)
  To: Thomas Monjalon; +Cc: Ananyev, Konstantin, stephen, dev, Shahaf Shuler

On Thu, Sep 14, 2017 at 10:02:26AM +0200, Thomas Monjalon wrote:
> 13/09/2017 23:42, Ananyev, Konstantin:
> > From: Thomas Monjalon [mailto:thomas@monjalon.net]
> > > 13/09/2017 14:56, Ananyev, Konstantin:
> > > > From: Thomas Monjalon [mailto:thomas@monjalon.net]
> > > Konstantin, I would like your opinion about the proposal below.
> > > It is about making on the fly configuration more generic.
> > > You say it is possible to configure VLAN on the fly,
> > > and I think we should make it possible for other offload features.
> > 
> > It would be a good thing, but I don't think it is possible for all offloads.
> > For some of them you still have to stop the queue(port) first. 
> > 
> > Also I am not sure what exactly you propose?
> > Is it something like this:
> > - wipe existing offload bitfields from rte_eth_rxmode (already done by Shahaf)
> > - Instead of uint64_t offloads inside both rte_eth_rxmode and rte_eth_rxconf
> >   Introduce new functions:
> > 
> > int rte_eth_set_port_rx_offload(portid, uint64_t offload_mask);
> > int rte_eth_set_queue_rx_offload(portid, queueid, uint64_t offload_mask);
Would be useful to have a valid mask here, to indicate what bits to use.
That way, you can adjust one bit without worrying about what other bits
you may change in the process. There are probably apps out there that
just want to toggle a single bit on, and off, at runtime while ignoring
others.
Alternatively, we can have set/unset functions which enable/disable
offloads, based on the mask.
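
For illustration, a minimal sketch of what such a masked setter could
look like (hypothetical signature and usage, not part of any posted
patch):

/* Hypothetical: only the bits set in 'mask' are touched; 'offloads'
 * supplies their new values, so all other bits keep their state. */
int rte_eth_set_port_rx_offload(uint8_t port_id, uint64_t offloads,
                                uint64_t mask);

/* Toggle a single offload on without reading the current set first. */
rte_eth_set_port_rx_offload(port_id, DEV_RX_OFFLOAD_VLAN_STRIP,
                            DEV_RX_OFFLOAD_VLAN_STRIP);

/* Turn the same offload back off at runtime. */
rte_eth_set_port_rx_offload(port_id, 0, DEV_RX_OFFLOAD_VLAN_STRIP);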

> > 
> > uint64_t rte_eth_get_port_rx_offload(portid);
> > uint64_t rte_eth_set_queue_rx_offload(portid, queueid);
s/set/get/
> > 
> > And add new fields:
> > rx_offload_port_dynamic_capa
> > rx_offload_queue_dynamic_capa
> > inside rte_eth_dev_info.
> > 
> > And it would be the user's responsibility to call set_port/queue_rx_offload()
> > somewhere before dev_start() for static offloads.
> > ?
> 
> Yes exactly.
> 
> > If so, then it seems reasonable to me.
> 
> Good, thank you
> 
> 
Sorry I'm a bit late to the review, but the above suggestion of separate
APIs for enabling offloads seems much better than passing in the flags
in structures to the existing calls. From what I see, all later revisions
of this patchset still pass the flags through the existing setup calls.

Some advantages that I see of the separate APIs:
* allows some settings to be set before start, and others afterwards,
  with an appropriate return value if dynamic config not supported.
* we can get fine grained error reporting from these - the set calls can
  all return the mask indicating what offloads could not be applied -
  zero means all ok, 1 means a problem with that setting. This may be
  easier for the app to use than feature discovery in some cases.
* for those PMDs which support configuration at a per-queue level, it
  can allow the user to specify the per-port settings as a default, and
  then override that value at the queue level, if you just want one queue
  different from the rest.
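
As a sketch of that last point, using the two-argument setters from the
proposal quoted above and assuming they return the mask of offloads that
could not be applied (hypothetical functions at this stage):

uint64_t failed;

/* Per-port default: VLAN stripping plus TCP checksum on every queue. */
failed = rte_eth_set_port_rx_offload(port_id, DEV_RX_OFFLOAD_VLAN_STRIP |
                                              DEV_RX_OFFLOAD_TCP_CKSUM);
if (failed != 0)
        printf("offloads 0x%" PRIx64 " could not be applied\n", failed);

/* Override: queue 3 alone runs without VLAN stripping. */
failed = rte_eth_set_queue_rx_offload(port_id, 3, DEV_RX_OFFLOAD_TCP_CKSUM);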

Regards,
/Bruce

^ permalink raw reply	[flat|nested] 134+ messages in thread

* Re: [dpdk-dev] [PATCH 4/4] ethdev: add helpers to move to the new offloads API
  2017-09-18 10:31                                   ` Bruce Richardson
@ 2017-09-18 10:57                                     ` Ananyev, Konstantin
  2017-09-18 11:04                                       ` Bruce Richardson
  2017-09-18 11:04                                       ` Bruce Richardson
  0 siblings, 2 replies; 134+ messages in thread
From: Ananyev, Konstantin @ 2017-09-18 10:57 UTC (permalink / raw)
  To: Richardson, Bruce, Thomas Monjalon; +Cc: stephen, dev, Shahaf Shuler



> -----Original Message-----
> From: Richardson, Bruce
> Sent: Monday, September 18, 2017 11:32 AM
> To: Thomas Monjalon <thomas@monjalon.net>
> Cc: Ananyev, Konstantin <konstantin.ananyev@intel.com>; stephen@networkplumber.org; dev@dpdk.org; Shahaf Shuler
> <shahafs@mellanox.com>
> Subject: Re: [dpdk-dev] [PATCH 4/4] ethdev: add helpers to move to the new offloads API
> 
> On Thu, Sep 14, 2017 at 10:02:26AM +0200, Thomas Monjalon wrote:
[...]
> > > int rte_eth_set_port_rx_offload(portid, uint64_t offload_mask);
> > > int rte_eth_set_queue_rx_offload(portid, queueid, uint64_t offload_mask);
> Would be useful to have a valid mask here, to indicate what bits to use.
> That way, you can adjust one bit without worrying about what other bits
> you may change in the process. There are probably apps out there that
> just want to toggle a single bit on, and off, at runtime while ignoring
> others.
> Alternatively, we can have set/unset functions which enable/disable
> offloads, based on the mask.

My thought was that people would do:

uint64_t offload = rte_eth_get_port_rx_offload(port);
offload |= RX_OFFLOAD_X;
offload &= ~RX_OFFLOAD_Y;
rte_eth_set_port_rx_offload(port, offload);

In that case, I think we don't really need a mask.

[...]
> Sorry I'm a bit late to the review, but the above suggestion of separate
> APIs for enabling offloads seems much better than passing in the flags
> in structures to the existing calls. From what I see, all later revisions
> of this patchset still pass the flags through the existing setup calls.
> 
> Some advantages that I see of the separate APIs:
> * allows some settings to be set before start, and others afterwards,
>   with an appropriate return value if dynamic config not supported.
> * we can get fine grained error reporting from these - the set calls can
>   all return the mask indicating what offloads could not be applied -
>   zero means all ok, 1 means a problem with that setting. This may be
>   easier for the app to use than feature discovery in some cases.
> * for those PMDs which support configuration at a per-queue level, it
>   can allow the user to specify the per-port settings as a default, and
>   then override that value at the queue level, if you just want one queue
>   different from the rest.

I think we are all in favor of having a separate API here.
Though from the discussion we had at the latest TB, I am not sure it is
doable in the 17.11 timeframe.
Konstantin

^ permalink raw reply	[flat|nested] 134+ messages in thread

* Re: [dpdk-dev] [PATCH 4/4] ethdev: add helpers to move to the new offloads API
  2017-09-18 10:57                                     ` Ananyev, Konstantin
@ 2017-09-18 11:04                                       ` Bruce Richardson
  2017-09-18 11:27                                         ` Thomas Monjalon
  2017-09-18 11:04                                       ` Bruce Richardson
  1 sibling, 1 reply; 134+ messages in thread
From: Bruce Richardson @ 2017-09-18 11:04 UTC (permalink / raw)
  To: Ananyev, Konstantin; +Cc: Thomas Monjalon, stephen, dev, Shahaf Shuler

On Mon, Sep 18, 2017 at 11:57:03AM +0100, Ananyev, Konstantin wrote:
> 
> 
> > -----Original Message-----
> > From: Richardson, Bruce
> > Sent: Monday, September 18, 2017 11:32 AM
> > To: Thomas Monjalon <thomas@monjalon.net>
> > Cc: Ananyev, Konstantin <konstantin.ananyev@intel.com>; stephen@networkplumber.org; dev@dpdk.org; Shahaf Shuler
> > <shahafs@mellanox.com>
> > Subject: Re: [dpdk-dev] [PATCH 4/4] ethdev: add helpers to move to the new offloads API
> > 
[...]
> > > > int rte_eth_set_port_rx_offload(portid, uint64_t offload_mask);
> > > > int rte_eth_set_queue_rx_offload(portid, queueid, uint64_t offload_mask);
> > Would be useful to have a valid mask here, to indicate what bits to use.
> > That way, you can adjust one bit without worrying about what other bits
> > you may change in the process. There are probably apps out there that
> > just want to toggle a single bit on, and off, at runtime while ignoring
> > others.
> > Alternatively, we can have set/unset functions which enable/disable
> > offloads, based on the mask.
> 
> My thought was that people would do:
> 
> uint64_t offload = rte_eth_get_port_rx_offload(port);
> offload |= RX_OFFLOAD_X;
> offload &= ~RX_OFFLOAD_Y;
> rte_eth_set_port_rx_offload(port, offload);
> 
> In that case, I think we don't really need a mask.
> 
Sure, that can work, I'm not concerned either way.

Overall, I think my slight preference would be to have set/unset,
enable/disable functions to make it clear what is happening, rather than
having to worry about the complete set each time.

uint64_t rte_eth_port_rx_offload_enable(port_id, offload_mask)
uint64_t rte_eth_port_rx_offload_disable(port_id, offload_mask)

each returning the bits failing (or bits changed if you like, but I prefer
bits failing as return value, since it means 0 == no_error).
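
A quick sketch of how an application might use that pair, assuming the
failing-bits return convention above (both functions hypothetical at
this point):

uint64_t failed;

/* Enable VLAN stripping at runtime; a non-zero return tells us
 * exactly which bits could not be changed on the fly. */
failed = rte_eth_port_rx_offload_enable(port_id, DEV_RX_OFFLOAD_VLAN_STRIP);
if (failed & DEV_RX_OFFLOAD_VLAN_STRIP)
        printf("VLAN strip cannot be toggled while the port is running\n");

/* Disable scattered Rx, ignoring all other offload bits. */
failed = rte_eth_port_rx_offload_disable(port_id, DEV_RX_OFFLOAD_SCATTER);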

/Bruce

^ permalink raw reply	[flat|nested] 134+ messages in thread

* Re: [dpdk-dev] [PATCH 4/4] ethdev: add helpers to move to the new offloads API
  2017-09-18 10:57                                     ` Ananyev, Konstantin
  2017-09-18 11:04                                       ` Bruce Richardson
@ 2017-09-18 11:04                                       ` Bruce Richardson
  2017-09-18 11:11                                         ` Ananyev, Konstantin
  1 sibling, 1 reply; 134+ messages in thread
From: Bruce Richardson @ 2017-09-18 11:04 UTC (permalink / raw)
  To: Ananyev, Konstantin; +Cc: Thomas Monjalon, stephen, dev, Shahaf Shuler

On Mon, Sep 18, 2017 at 11:57:03AM +0100, Ananyev, Konstantin wrote:
[...]
> I think we are all in favor of having a separate API here.
> Though from the discussion we had at the latest TB, I am not sure it is
> doable in the 17.11 timeframe.

Ok, so does that imply no change in this release, and that the existing
set is to be ignored?

/Bruce

^ permalink raw reply	[flat|nested] 134+ messages in thread

* Re: [dpdk-dev] [PATCH 4/4] ethdev: add helpers to move to the new offloads API
  2017-09-18 11:04                                       ` Bruce Richardson
@ 2017-09-18 11:11                                         ` Ananyev, Konstantin
  2017-09-18 11:32                                           ` Thomas Monjalon
  0 siblings, 1 reply; 134+ messages in thread
From: Ananyev, Konstantin @ 2017-09-18 11:11 UTC (permalink / raw)
  To: Richardson, Bruce; +Cc: Thomas Monjalon, stephen, dev, Shahaf Shuler



> -----Original Message-----
> From: Richardson, Bruce
> Sent: Monday, September 18, 2017 12:05 PM
> To: Ananyev, Konstantin <konstantin.ananyev@intel.com>
> Cc: Thomas Monjalon <thomas@monjalon.net>; stephen@networkplumber.org; dev@dpdk.org; Shahaf Shuler <shahafs@mellanox.com>
> Subject: Re: [dpdk-dev] [PATCH 4/4] ethdev: add helpers to move to the new offloads API
> 
> On Mon, Sep 18, 2017 at 11:57:03AM +0100, Ananyev, Konstantin wrote:
[...]
> > I think we are all in favor of having a separate API here.
> > Though from the discussion we had at the latest TB, I am not sure it is
> > doable in the 17.11 timeframe.
> 
> Ok, so does that imply no change in this release, and that the existing
> set is to be ignored?

No, my understanding is that the current plan is to go forward with Shahaf's
patches, and then apply another one (the new set/get API) on top of them.
Konstantin

^ permalink raw reply	[flat|nested] 134+ messages in thread

* Re: [dpdk-dev] [PATCH 4/4] ethdev: add helpers to move to the new offloads API
  2017-09-18 11:04                                       ` Bruce Richardson
@ 2017-09-18 11:27                                         ` Thomas Monjalon
  0 siblings, 0 replies; 134+ messages in thread
From: Thomas Monjalon @ 2017-09-18 11:27 UTC (permalink / raw)
  To: Bruce Richardson, Ananyev, Konstantin; +Cc: stephen, dev, Shahaf Shuler

18/09/2017 13:04, Bruce Richardson:
> On Mon, Sep 18, 2017 at 11:57:03AM +0100, Ananyev, Konstantin wrote:
> > From: Richardson, Bruce
[...]
> > > > > int rte_eth_set_port_rx_offload(portid, uint64_t offload_mask);
> > > > > int rte_eth_set_queue_rx_offload(portid, queueid, uint64_t offload_mask);
> > > Would be useful to have a valid mask here, to indicate what bits to use.
> > > That way, you can adjust one bit without worrying about what other bits
> > > you may change in the process. There are probably apps out there that
> > > just want to toggle a single bit on, and off, at runtime while ignoring
> > > others.
> > > Alternatively, we can have set/unset functions which enable/disable
> > > offloads, based on the mask.
> > 
> > My thought was that people would do:
> > 
> > uint64_t offload = rte_eth_get_port_rx_offload(port);
> > offload |= RX_OFFLOAD_X;
> > offload &= ~RX_OFFLOAD_Y;
> > rte_eth_set_port_rx_offload(port, offload);
> > 
> > In that case, I think we don't really need a mask.
> > 
> Sure, that can work, I'm not concerned either way.
> 
> Overall, I think my slight preference would be to have set/unset,
> enable/disable functions to make it clear what is happening, rather than
> having to worry about the complete set each time.
> 
> uint64_t rte_eth_port_rx_offload_enable(port_id, offload_mask)
> uint64_t rte_eth_port_rx_offload_disable(port_id, offload_mask)
> 
> each returning the bits failing (or bits changed if you like, but I prefer
> bits failing as return value, since it means 0 == no_error).

I think we need both: "get" functions + "mask" parameters in "set" functions.

^ permalink raw reply	[flat|nested] 134+ messages in thread

* Re: [dpdk-dev] [PATCH 4/4] ethdev: add helpers to move to the new offloads API
  2017-09-18 11:11                                         ` Ananyev, Konstantin
@ 2017-09-18 11:32                                           ` Thomas Monjalon
  2017-09-18 11:37                                             ` Bruce Richardson
  0 siblings, 1 reply; 134+ messages in thread
From: Thomas Monjalon @ 2017-09-18 11:32 UTC (permalink / raw)
  To: Ananyev, Konstantin, Richardson, Bruce; +Cc: stephen, dev, Shahaf Shuler

18/09/2017 13:11, Ananyev, Konstantin:
> From: Richardson, Bruce
[...]
> > >
> > > I think we are all in favor of having a separate API here.
> > > Though from the discussion we had at the latest TB, I am not sure it is
> > > doable in the 17.11 timeframe.
> > 
> > Ok, so does that imply no change in this release, and that the existing
> > set is to be ignored?
> 
> No, my understanding is that the current plan is to go forward with Shahaf's
> patches, and then apply another one (the new set/get API) on top of them.

Yes, that is what we agreed (I hope to see it in the minutes).
If someone can do these new patches in the 17.11 timeframe, it's great!
Bruce, do you want to give it a try?

^ permalink raw reply	[flat|nested] 134+ messages in thread

* Re: [dpdk-dev] [PATCH 4/4] ethdev: add helpers to move to the new offloads API
  2017-09-18 11:32                                           ` Thomas Monjalon
@ 2017-09-18 11:37                                             ` Bruce Richardson
  2017-09-18 14:27                                               ` Shahaf Shuler
  0 siblings, 1 reply; 134+ messages in thread
From: Bruce Richardson @ 2017-09-18 11:37 UTC (permalink / raw)
  To: Thomas Monjalon; +Cc: Ananyev, Konstantin, stephen, dev, Shahaf Shuler

On Mon, Sep 18, 2017 at 01:32:29PM +0200, Thomas Monjalon wrote:
> 18/09/2017 13:11, Ananyev, Konstantin:
> > From: Richardson, Bruce
[...]
> > > 
> > > Ok, so does that imply no change in this release, and that the existing
> > > set is to be ignored?
> > 
> > No, my understanding is that the current plan is to go forward with Shahaf's
> > patches, and then apply another one (the new set/get API) on top of them.
> 
> Yes, that is what we agreed (I hope to see it in the minutes).
> If someone can do these new patches in the 17.11 timeframe, it's great!
> Bruce, do you want to give it a try?

If I have the chance, I can try, but given how short the time is and that
Userspace is on next week, I very much doubt I'll even get it started.

/Bruce

^ permalink raw reply	[flat|nested] 134+ messages in thread

* Re: [dpdk-dev] [PATCH v4 3/3] doc: add details on ethdev offloads API
  2017-09-17  6:54       ` [dpdk-dev] [PATCH v4 3/3] doc: add details on ethdev " Shahaf Shuler
  2017-09-18  7:51         ` Andrew Rybchenko
@ 2017-09-18 13:40         ` Mcnamara, John
  1 sibling, 0 replies; 134+ messages in thread
From: Mcnamara, John @ 2017-09-18 13:40 UTC (permalink / raw)
  To: Shahaf Shuler, thomas, jerin.jacob, Ananyev, Konstantin, arybchenko; +Cc: dev



> -----Original Message-----
> From: dev [mailto:dev-bounces@dpdk.org] On Behalf Of Shahaf Shuler
> Sent: Sunday, September 17, 2017 7:55 AM
> To: thomas@monjalon.net; jerin.jacob@caviumnetworks.com; Ananyev,
> Konstantin <konstantin.ananyev@intel.com>; arybchenko@solarflare.com
> Cc: dev@dpdk.org
> Subject: [dpdk-dev] [PATCH v4 3/3] doc: add details on ethdev offloads API
> 
> Add the programmers guide details on the new offloads API introduced by
> commits:
> 
> commit f649472cad9d ("ethdev: introduce Rx queue offloads API") commit
> ecb46b66cda5 ("ethdev: introduce Tx queue offloads API")
> 
> Signed-off-by: Shahaf Shuler <shahafs@mellanox.com>


> ...

> +Per-Port and Per-Queue Offloads
> +^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
> +
> +On DPDK 17.11, a new offloads API was introduced.

It is best to omit this line.


There are a number of small grammatical errors in the rest of the text.
Probably something like this would be better:

Per-Port and Per-Queue Offloads
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

In the DPDK offload API, offloads are divided into per-port and per-queue offloads.
The different offloads capabilities can be queried using ``rte_eth_dev_info_get()``.
Supported offloads can be either per-port or per-queue.

Offloads are enabled using the existing ``DEV_TX_OFFLOAD_*`` or ``DEV_RX_OFFLOAD_*`` flags.
Per-port offload configuration is set using ``rte_eth_dev_configure``.
Per-queue offload configuration is set using ``rte_eth_rx_queue_setup`` and ``rte_eth_tx_queue_setup``.
To enable per-port offload, the offload should be set on both device configuration and queue setup.
In case of a mixed configuration the queue setup shall return with an error.
To enable per-queue offload, the offload can be set only on the queue setup.
Offloads which are not enabled are disabled by default.

For an application to use the Tx offloads API it should set the ``ETH_TXQ_FLAGS_IGNORE`` flag in the ``txq_flags`` field located in ``rte_eth_txconf`` struct.
In such cases it is not required to set other flags in ``txq_flags``.
For an application to use the Rx offloads API it should set the ``ignore_offload_bitfield`` bit in the ``rte_eth_rxmode`` struct.
In such cases it is not required to set other bitfield offloads in the ``rxmode`` struct.
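
For illustration, a minimal sketch of how an application might opt in to
the new API using the fields named above (a sketch only, assuming the
17.11 struct layout; port_id, nb_txd and socket_id are set up elsewhere):

#include <string.h>
#include <rte_ethdev.h>

static int
use_new_offloads_api(uint8_t port_id, uint16_t nb_txd, unsigned int socket_id)
{
	struct rte_eth_conf port_conf;
	struct rte_eth_dev_info dev_info;
	struct rte_eth_txconf txq_conf;

	memset(&port_conf, 0, sizeof(port_conf));
	rte_eth_dev_info_get(port_id, &dev_info);

	/* Opt in to the new Rx API and request a per-port Rx offload. */
	port_conf.rxmode.ignore_offload_bitfield = 1;
	port_conf.rxmode.offloads = DEV_RX_OFFLOAD_VLAN_STRIP;
	if (rte_eth_dev_configure(port_id, 1, 1, &port_conf) < 0)
		return -1;

	/* Opt in to the new Tx API on queue setup. */
	txq_conf = dev_info.default_txconf;
	txq_conf.txq_flags = ETH_TXQ_FLAGS_IGNORE;
	txq_conf.offloads = 0; /* no Tx offloads requested here */
	return rte_eth_tx_queue_setup(port_id, 0, nb_txd, socket_id,
				      &txq_conf);
}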

Reviewed-by: John McNamara <john.mcnamara@intel.com>

^ permalink raw reply	[flat|nested] 134+ messages in thread

* Re: [dpdk-dev] [PATCH 4/4] ethdev: add helpers to move to the new offloads API
  2017-09-18 11:37                                             ` Bruce Richardson
@ 2017-09-18 14:27                                               ` Shahaf Shuler
  2017-09-18 14:42                                                 ` Thomas Monjalon
  2017-09-18 14:44                                                 ` Bruce Richardson
  0 siblings, 2 replies; 134+ messages in thread
From: Shahaf Shuler @ 2017-09-18 14:27 UTC (permalink / raw)
  To: Bruce Richardson, Thomas Monjalon; +Cc: Ananyev, Konstantin, stephen, dev

Monday, September 18, 2017 2:38 PM, Bruce Richardson
> On Mon, Sep 18, 2017 at 01:32:29PM +0200, Thomas Monjalon wrote:
> > 18/09/2017 13:11, Ananyev, Konstantin:
> > > From: Richardson, Bruce
> > > > >
> > > > > I think we are all in favor of having a separate API here.
> > > > > Though from the discussion we had at the latest TB, I am not sure it
> > > > > is doable in the 17.11 timeframe.
> > > >
> > > > Ok, so does that imply no change in this release, and that the
> > > > existing set is to be ignored?
> > >
> > > No, my understanding is that the current plan is to go forward with Shahaf's
> > > patches, and then apply another one (the new set/get API) on top of them.
> >
> > Yes, that is what we agreed (I hope to see it in the minutes).
> > If someone can do these new patches in the 17.11 timeframe, it's great!
> > Bruce, do you want to give it a try?
> 
> If I have the chance, I can try, but given how short the time is and that
> Userspace is on next week, I very much doubt I'll even get it started.

I wasn't aware of the techboard decision on the extra patchset needed.
I think it would be wrong to introduce an API in 17.11 and change it again in 18.02.
I will do my best to make everything ready for 17.11 so we can have one solid API on top of which all PMDs and applications will be converted. Considering some holidays and the DPDK Summit, I won't have much time to work on it.

The plan is as follows:
1. Complete the last comment on the current series and integrate it.
2. Send a new patchset to convert to the API suggested above.

Aggregating the different suggestions, I came up with the below. If this is agreed, then I will move on with the implementation.
(I thought it would be good to return error values for the get functions.)

/**
* Get Tx offloads set on a specific port.                                     
*                                                                             
* @param port_id                                                              
*   The port identifier of the Ethernet device.                               
* @param offloads                                                             
*   A pointer to uint64_t where the offloads flags                            
*   will be filled using DEV_TX_OFFLOAD_* flags.                              
* @return                                                                     
*   - (0) if successful.                                                      
*   - (-ENOTSUP or -ENODEV) on failure.                                       
*/                                                                            
int rte_eth_get_port_tx_offloads(uint8_t port_id, uint64_t *offloads);         
                                                                              
/**
* Get Tx offloads set on a specific queue.                                    
*                                                                             
* @param port_id                                                              
*   The port identifier of the Ethernet device.                               
* @param queue_id                                                             
*   The queue identifier.                                                     
* @param offloads                                                             
*   A pointer to uint64_t where the offloads flags                            
*   will be filled using DEV_TX_OFFLOAD_* flags.                              
* @return                                                                     
*   - (0) if successful.                                                      
*   - (-ENOTSUP or -ENODEV) on failure.                                       
*/                                                                            
int rte_eth_get_queue_tx_offloads(uint8_t port_id, uint16_t queue_id,          
                                 uint64_t *offloads);                         
/**
* Set Tx offloads on a specific port.                                         
*                                                                             
* @param port_id                                                              
*   The port identifier of the Ethernet device.                               
* @param offloads_mask                                                        
*   Indicates which offloads to be set using DEV_TX_OFFLOAD_* flags.          
* @return                                                                     
*   (0) if all offloads set successfully, otherwise offloads                  
*   flags which were not set.                                                 
*                                                                             
*/                                                                            
uint64_t rte_eth_set_port_tx_offloads(uint8_t port_id, uint64_t offloads_mask);

/**                                                                       
 * Set Tx offloads on a specific queue.                                   
 *                                                                        
 * @param port_id                                                         
 *   The port identifier of the Ethernet device.                          
 * @param queue_id                                                        
 *   The queue identifier.                                                
 * @param offloads_mask                                                   
 *   Indicates which offloads to be set using DEV_TX_OFFLOAD_* flags.     
 * @return                                                                
 *   (0) if all offloads set successfully, otherwise offloads             
 *   flags which were not set.                                            
 *                                                                        
 */                                                                       
uint64_t rte_eth_set_queue_tx_offloads(uint8_t port_id, uint16_t queue_id,
                                       uint64_t offloads_mask);           
/**                                                                       
 * Get Rx offloads set on a specific port.                                
 *                                                                        
 * @param port_id                                                         
 *   The port identifier of the Ethernet device.                          
 * @param offloads                                                        
 *   A pointer to uint64_t where the offloads flags                       
 *   will be filled using DEV_RX_OFFLOAD_* flags.                         
 * @return                                                                
 *   - (0) if successful.                                                 
 *   - (-ENOTSUP or -ENODEV) on failure.                                  
 */                                                                       
int rte_eth_get_port_rx_offloads(uint8_t port_id, uint64_t *offloads);    
                                                                          
/**                                                                       
 * Get Rx offloads set on a specific queue.                               
 *                                                                        
 * @param port_id                                                         
 *   The port identifier of the Ethernet device.                          
 * @param queue_id                                                        
 *   The queue identifier.                                                
 * @param offloads                                                        
 *   A pointer to uint64_t where the offloads flags                       
 *   will be filled using DEV_RX_OFFLOAD_* flags.                         
 * @return                                                                
 *   - (0) if successful.                                                 
 *   - (-ENOTSUP or -ENODEV) on failure.                                  
 */                                                                       
int rte_eth_get_queue_rx_offloads(uint8_t port_id, uint16_t queue_id,
                                  uint64_t *offloads);   

/**                                                                            
 * Set Rx offloads on a specific port.                                         
 *                                                                             
 * @param port_id                                                              
 *   The port identifier of the Ethernet device.                               
 * @param offloads_mask                                                        
 *   Indicates which offloads to be set using DEV_RX_OFFLOAD_* flags.          
 * @return                                                                     
 *   (0) if all offloads set successfully, otherwise offloads                  
 *   flags which were not set.                                                 
 *                                                                             
 */                                                                            
uint64_t rte_eth_set_port_rx_offloads(uint8_t port_id, uint64_t offloads_mask);
                                                                               
/**                                                                            
 * Set Rx offloads on a specific queue.
 *                                                                             
 * @param port_id                                                              
 *   The port identifier of the Ethernet device.                               
 * @param queue_id                                                             
 *   The queue identifier.                                                     
 * @param offloads_mask                                                        
 *   Indicates which offloads to be set using DEV_RX_OFFLOAD_* flags.          
 * @return                                                                     
 *   (0) if all offloads set successfully, otherwise offloads                  
 *   flags which were not set.                                                 
 *                                                                             
 */                                                                            
uint64_t rte_eth_set_queue_rx_offloads(uint8_t port_id, uint16_t queue_id,     
                                       uint64_t offloads_mask);                                 
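
If it helps review, a short sketch of how an application could drive the
proposed functions (assuming they behave exactly as documented above):

uint64_t offloads;
uint64_t not_set;

/* Read-modify-write of the per-port Rx offloads. */
if (rte_eth_get_port_rx_offloads(port_id, &offloads) != 0)
	return; /* -ENOTSUP or -ENODEV */

offloads |= DEV_RX_OFFLOAD_VLAN_STRIP;
not_set = rte_eth_set_port_rx_offloads(port_id, offloads);
if (not_set != 0)
	printf("Rx offloads 0x%" PRIx64 " were not applied\n", not_set);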

> 
> /Bruce

^ permalink raw reply	[flat|nested] 134+ messages in thread

* Re: [dpdk-dev] [PATCH 4/4] ethdev: add helpers to move to the new offloads API
  2017-09-18 14:27                                               ` Shahaf Shuler
@ 2017-09-18 14:42                                                 ` Thomas Monjalon
  2017-09-18 14:44                                                 ` Bruce Richardson
  1 sibling, 0 replies; 134+ messages in thread
From: Thomas Monjalon @ 2017-09-18 14:42 UTC (permalink / raw)
  To: Shahaf Shuler; +Cc: Bruce Richardson, Ananyev, Konstantin, stephen, dev

18/09/2017 16:27, Shahaf Shuler:
> Monday, September 18, 2017 2:38 PM, Bruce Richardson
> > On Mon, Sep 18, 2017 at 01:32:29PM +0200, Thomas Monjalon wrote:
> > > 18/09/2017 13:11, Ananyev, Konstantin:
> > > > From: Richardson, Bruce
> > > > > >
> > > > > > I think we are all in favor of having a separate API here.
> > > > > > Though from the discussion we had at the latest TB, I am not sure it
> > > > > > is doable in the 17.11 timeframe.
> > > > >
> > > > > Ok, so does that imply no change in this release, and that the
> > > > > existing set is to be ignored?
> > > >
> > > > No, my understanding is that the current plan is to go forward with Shahaf's
> > > > patches, and then apply another one (the new set/get API) on top of them.
> > >
> > > Yes, that is what we agreed (I hope to see it in the minutes).
> > > If someone can do these new patches in the 17.11 timeframe, it's great!
> > > Bruce, do you want to give it a try?
> > 
> > If I have the chance, I can try, but given how short the time is and that
> > Userspace is on next week, I very much doubt I'll even get it started.
> 
> I wasn't aware of the techboard decision on the extra patchset needed.
> I think it would be wrong to introduce an API in 17.11 and change it again in 18.02.
> I will do my best to make everything ready for 17.11 so we can have one solid API on top of which all PMDs and applications will be converted. Considering some holidays and the DPDK Summit, I won't have much time to work on it.
> 
> The plan is as follows:
> 1. Complete the last comment on the current series and integrate it.
> 2. Send a new patchset to convert to the API suggested above.

Thank you Shahaf.

> Aggregating the different suggestions, I came up with the below. If this is agreed, then I will move on with the implementation.
> (I thought it would be good to return error values for the get functions.)

[...]
> /**
> * Set Tx offloads on a specific port.                                         
> *                                                                             
> * @param port_id                                                              
> *   The port identifier of the Ethernet device.                               
> * @param offloads_mask                                                        
> *   Indicates which offloads to be set using DEV_TX_OFFLOAD_* flags.          
> * @return                                                                     
> *   (0) if all offloads set successfully, otherwise offloads                  
> *   flags which were not set.                                                 
> *                                                                             
> */                                                                            
> uint64_t rte_eth_set_port_tx_offloads(uint8_t port_id, uint64_t offloads_mask);

You need to have a parameter for the offloads value,
different from the offloads mask:
	set(port, value, mask)
Or as proposed by Bruce, you need 2 functions:
	enable(port, mask)
	disable(port, mask)
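
As a sketch of what that would mean for the setter quoted above
(hypothetical prototypes, following the failing-bits return convention
discussed earlier):

/* Alternative 1: one setter; only the bits set in 'mask' are modified,
 * and 'value' supplies their new state. */
uint64_t rte_eth_set_port_tx_offloads(uint8_t port_id, uint64_t value,
                                      uint64_t mask);

/* Alternative 2: two single-purpose calls, as proposed by Bruce. */
uint64_t rte_eth_port_tx_offload_enable(uint8_t port_id, uint64_t mask);
uint64_t rte_eth_port_tx_offload_disable(uint8_t port_id, uint64_t mask);

/* With either form, one offload can be flipped without a prior get,
 * e.g. disabling TCP checksum only: */
rte_eth_set_port_tx_offloads(port_id, 0, DEV_TX_OFFLOAD_TCP_CKSUM);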

^ permalink raw reply	[flat|nested] 134+ messages in thread

* Re: [dpdk-dev] [PATCH 4/4] ethdev: add helpers to move to the new offloads API
  2017-09-18 14:27                                               ` Shahaf Shuler
  2017-09-18 14:42                                                 ` Thomas Monjalon
@ 2017-09-18 14:44                                                 ` Bruce Richardson
  2017-09-18 18:18                                                   ` Shahaf Shuler
  1 sibling, 1 reply; 134+ messages in thread
From: Bruce Richardson @ 2017-09-18 14:44 UTC (permalink / raw)
  To: Shahaf Shuler; +Cc: Thomas Monjalon, Ananyev, Konstantin, stephen, dev

On Mon, Sep 18, 2017 at 02:27:25PM +0000, Shahaf Shuler wrote:
> Monday, September 18, 2017 2:38 PM, Bruce Richardson
> > On Mon, Sep 18, 2017 at 01:32:29PM +0200, Thomas Monjalon wrote:
> > > 18/09/2017 13:11, Ananyev, Konstantin:
> > > > From: Richardson, Bruce
> > > > > >
> > > > > > I think we are all in favor of having a separate API here.
> > > > > > Though from the discussion we had at the latest TB, I am not sure it
> > > > > > is doable in the 17.11 timeframe.
> > > > >
> > > > > Ok, so does that imply no change in this release, and that the
> > > > > existing set is to be ignored?
> > > >
> > > > No, my understanding is that the current plan is to go forward with Shahaf's
> > > > patches, and then apply another one (the new set/get API) on top of them.
> > >
> > > Yes, that is what we agreed (I hope to see it in the minutes).
> > > If someone can do these new patches in the 17.11 timeframe, it's great!
> > > Bruce, do you want to give it a try?
> > 
> > If I have the chance, I can try, but given how short the time is and that
> > Userspace is on next week, I very much doubt I'll even get it started.
> 
> I wasn't aware of the techboard decision on the extra patchset needed.
> I think it would be wrong to introduce an API in 17.11 and change it again in 18.02.
> I will do my best to make everything ready for 17.11 so we can have one solid API on top of which all PMDs and applications will be converted. Considering some holidays and the DPDK Summit, I won't have much time to work on it.
> 
> The plan is as follows:
> 1. Complete the last comment on the current series and integrate it.
> 2. Send a new patchset to convert to the API suggested above.
> 
> Aggregating the different suggestions, I came up with the below. If this is agreed, then I will move on with the implementation.
> (I thought it would be good to return error values for the get functions.)

I'd rather you didn't. :-) The only realistic error I would consider is
an invalid port id, and I think returning 0 - no offloads - is fine in
those cases. The user will pretty quickly discover it's an invalid port
id anyway, so I prefer a get function to just return the value as a
return value and be done with it!

Otherwise, these will do fine. I would prefer some way to only change
one offload at a time without having to call "get" and do bit twiddling
before a call to "set", but this will be OK if others are happy with it.

If the get call at least returns the mask of enabled offloads as its
return value, we can shorten some cases, e.g.:
rte_eth_set_port_tx_offloads(port_id, rte_eth_get_port_tx_offloads(port_id) | OFFLOAD_X);

/Bruce

> 
> /**
> * Get Tx offloads set on a specific port.                                     
> *                                                                             
> * @param port_id                                                              
> *   The port identifier of the Ethernet device.                               
> * @param offloads                                                             
> *   A pointer to uint64_t where the offloads flags                            
> *   will be filled using DEV_TX_OFFLOAD_* flags.                              
> * @return                                                                     
> *   - (0) if successful.                                                      
> *   - (-ENOTSUP or -ENODEV) on failure.                                       
> */                                                                            
> int rte_eth_get_port_tx_offloads(uint8_t port_id, uint64_t *offloads);         
>                                                                               
> /**
> * Get Tx offloads set on a specific queue.                                    
> *                                                                             
> * @param port_id                                                              
> *   The port identifier of the Ethernet device.                               
> * @param queue_id                                                             
> *   The queue identifier.                                                     
> * @param offloads                                                             
> *   A pointer to uint64_t where the offloads flags                            
> *   will be filled using DEV_TX_OFFLOAD_* flags.                              
> * @return                                                                     
> *   - (0) if successful.                                                      
> *   - (-ENOTSUP or -ENODEV) on failure.                                       
> */                                                                            
> int rte_eth_get_queue_tx_offloads(uint8_t port_id, uint16_t queue_id,          
>                                  uint64_t *offloads);                         
> /**
> * Set Tx offloads on a specific port.                                         
> *                                                                             
> * @param port_id                                                              
> *   The port identifier of the Ethernet device.                               
> * @param offloads_mask                                                        
> *   Indicates which offloads to be set using DEV_TX_OFFLOAD_* flags.          
> * @return                                                                     
> *   (0) if all offloads set successfully, otherwise offloads                  
> *   flags which were not set.                                                 
> *                                                                             
> */                                                                            
> uint64_t rte_eth_set_port_tx_offloads(uint8_t port_id, uint64_t offloads_mask);
> 
> /**                                                                       
>  * Set Tx offloads on a specific queue.                                   
>  *                                                                        
>  * @param port_id                                                         
>  *   The port identifier of the Ethernet device.                          
>  * @param queue_id                                                        
>  *   The queue identifier.                                                
>  * @param offloads_mask                                                   
>  *   Indicates which offloads to be set using DEV_TX_OFFLOAD_* flags.     
>  * @return
>  *   (0) if all offloads were set successfully, otherwise the offload
>  *   flags which were not set.
>  */
> uint64_t rte_eth_set_queue_tx_offloads(uint8_t port_id, uint16_t queue_id,
>                                        uint64_t offloads_mask);           
> /**                                                                       
>  * Get Rx offloads set on a specific port.                                
>  *                                                                        
>  * @param port_id                                                         
>  *   The port identifier of the Ethernet device.                          
>  * @param offloads                                                        
>  *   A pointer to uint64_t where the offloads flags                       
>  *   will be filled using DEV_RX_OFFLOAD_* flags.                         
>  * @return                                                                
>  *   - (0) if successful.                                                 
>  *   - (-ENOTSUP or -ENODEV) on failure.                                  
>  */                                                                       
> int rte_eth_get_port_rx_offloads(uint8_t port_id, uint64_t *offloads);    
>                                                                           
> /**                                                                       
>  * Get Rx offloads set on a specific queue.                               
>  *                                                                        
>  * @param port_id                                                         
>  *   The port identifier of the Ethernet device.                          
>  * @param queue_id                                                        
>  *   The queue identifier.                                                
>  * @param offloads                                                        
>  *   A pointer to uint64_t where the offloads flags                       
>  *   will be filled using DEV_RX_OFFLOAD_* flags.                         
>  * @return                                                                
>  *   - (0) if successful.                                                 
>  *   - (-ENOTSUP or -ENODEV) on failure.                                  
>  */                                                                       
> int rte_eth_get_queue_rx_offloads(uint8_t port_id, uint16_t queue_id,
>                                   uint64_t *offloads);   
> 
> /**                                                                            
>  * Set Rx offloads on a specific port.                                         
>  *                                                                             
>  * @param port_id                                                              
>  *   The port identifier of the Ethernet device.                               
>  * @param offloads_mask                                                        
>  *   Indicates which offloads to be set using DEV_RX_OFFLOAD_* flags.          
>  * @return
>  *   (0) if all offloads were set successfully, otherwise the offload
>  *   flags which were not set.
>  */
> uint64_t rte_eth_set_port_rx_offloads(uint8_t port_id, uint64_t offloads_mask);
>                                                                                
> /**                                                                            
>  * Set Rx offloads on a specific queue.
>  *                                                                             
>  * @param port_id                                                              
>  *   The port identifier of the Ethernet device.                               
>  * @param queue_id                                                             
>  *   The queue identifier.                                                     
>  * @param offloads_mask                                                        
>  *   Indicates which offloads to be set using DEV_RX_OFFLOAD_* flags.          
>  * @return
>  *   (0) if all offloads were set successfully, otherwise the offload
>  *   flags which were not set.
>  */
> uint64_t rte_eth_set_queue_rx_offloads(uint8_t port_id, uint16_t queue_id,     
>                                        uint64_t offloads_mask);                                 
> 
> > 
> > /Bruce
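For illustration, a minimal sketch of how an application might drive the
proposed calls above (editor's sketch: these functions are only a proposal
at this point in the thread, not a merged DPDK interface):

#include <inttypes.h>
#include <stdio.h>
#include <rte_ethdev.h>

static int
enable_tx_cksum(uint8_t port_id)
{
	uint64_t rejected;
	uint64_t cur = 0;

	/* Per the proposal, the return value is the subset of flags that
	 * could NOT be applied; zero means full success.
	 */
	rejected = rte_eth_set_port_tx_offloads(port_id,
			DEV_TX_OFFLOAD_IPV4_CKSUM | DEV_TX_OFFLOAD_TCP_CKSUM);
	if (rejected != 0) {
		printf("offloads 0x%" PRIx64 " were not applied\n", rejected);
		return -1;
	}

	/* Read back what is actually set on the port. */
	if (rte_eth_get_port_tx_offloads(port_id, &cur) != 0)
		return -1;

	return (cur & DEV_TX_OFFLOAD_IPV4_CKSUM) ? 0 : -1;
}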

^ permalink raw reply	[flat|nested] 134+ messages in thread

* Re: [dpdk-dev] [PATCH 4/4] ethdev: add helpers to move to the new offloads API
  2017-09-18 14:44                                                 ` Bruce Richardson
@ 2017-09-18 18:18                                                   ` Shahaf Shuler
  2017-09-18 21:08                                                     ` Thomas Monjalon
  0 siblings, 1 reply; 134+ messages in thread
From: Shahaf Shuler @ 2017-09-18 18:18 UTC (permalink / raw)
  To: Bruce Richardson; +Cc: Thomas Monjalon, Ananyev, Konstantin, stephen, dev

Monday, September 18, 2017 5:45 PM, Bruce Richardson:
> 
> On Mon, Sep 18, 2017 at 02:27:25PM +0000, Shahaf Shuler wrote:
> > Monday, September 18, 2017 2:38 PM, Bruce Richardson
> > > On Mon, Sep 18, 2017 at 01:32:29PM +0200, Thomas Monjalon wrote:
> > > > 18/09/2017 13:11, Ananyev, Konstantin:
> > > > > From: Richardson, Bruce
> > > > > > >
> > > > > > > I think we are all in favor of having a separate API here.
> > > > > > > Though from the discussion we had at the latest TB, I am not
> > > > > > > sure it is doable in the 17.11 timeframe.
> > > > > >
> > > > > > Ok, so does that imply no change in this release, and that the
> > > > > > existing set is to be ignored?
> > > > >
> > > > > No, my understanding is that the current plan is to go forward with
> > > > > Shahaf's patches, and then apply another one (new set/get API) on top
> of them.
> > > >
> > > > Yes, it is what we agreed (hope to see it in minutes).
> > > > If someone can do these new patches in 17.11 timeframe, it's great!
> > > > Bruce, do you want to give it a try?
> > >
> > > If I have the chance, I can try, but given how short time is and
> > > that userspace is on next week, I very much doubt I'll even get it started.
> >
> > I wasn't aware of the techboard decision on the extra patchset needed.
> > I think it would be wrong to introduce an API in 17.11 and change it again in
> 18.02.
> > I will do my best to make everything ready for 17.11 so we can have one
> solid API on top of which all PMDs and applications will be converted.
> Considering some holidays and the DPDK summit, I won't have much time to
> work on it.
> >
> > The plan is as follows:
> > 1.  complete the last comment on the current series and integrate it.
> > 2. send a new patchset to convert to the API suggested above.
> >
> > Aggregating the different suggestions, I came up with the below. If this is
> agreed, then I will move on with the implementation.
> > (I thought it would be good to return error values for the get functions).
> 
> I'd rather you didn't. :-) The only realistic error I would consider is an invalid
> port id, and I think returning 0 - no offloads - is fine in those cases. The user
> will pretty quickly discover it's an invalid port id anyway, so I prefer a get
> function to just return the value as a return value and be done with it!

It would be simpler, however I am not sure an invalid port is the only error to consider. Another possible error is that the PMD does not support this function.
In that case returning 0 is not good enough. The application cannot know why the offload is not set: is it because it was set wrong, or because the PMD just doesn't support this API (which leads me to my next point).

Declare:
API1 = The API worked on so far.
API2 = The suggested API being discussed here.

API1 was designed for easy adoption by both PMDs and applications. An application can use either the old or the new API on top of a PMD which supports one of them, thanks to the convert functions. There was no hard demand to force all of the PMDs to support it at once.
With API2 this model breaks. An application which moved to the new offloads API cannot work with a PMD which supports only the old one.

If I aggregate the pros for API2:
From Bruce:
>* allows some settings to be set before start, and others afterwards,
>  with an appropriate return value if dynamic config not supported.
>* we can get fine grained error reporting from these - the set calls can
>  all return the mask indicating what offloads could not be applied -
>  zero means all ok, 1 means a problem with that setting. This may be
>  easier for the app to use than feature discovery in some cases.

For that functionality I think the get functions are enough (the application sets offloads and then checks which of them were actually set).
The set functions are more for on-the-fly configuration.

>* for those PMDs which support configuration at a per-queue level, it
>  can allow the user to specify the per-port settings as a default, and
>  then override that value at the queue level, if you just want one queue
>  different from the rest.

This can be done with API1 as well, as the sketch below shows.
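A minimal sketch of that point, assuming the v5 semantics of this series
(offloads fields on rte_eth_rxmode and rte_eth_rxconf): the port-wide value
acts as the default and a single queue overrides it.

#include <rte_ethdev.h>

static int
setup_with_queue_override(uint8_t port_id, uint16_t nb_rxq, uint16_t nb_txq,
			  uint16_t nb_desc, unsigned int socket_id,
			  struct rte_mempool *mp)
{
	struct rte_eth_dev_info dev_info;
	struct rte_eth_rxconf rxq_conf;
	struct rte_eth_conf conf = {
		.rxmode = {
			.ignore_offload_bitfield = 1,
			/* default Rx offloads for every queue */
			.offloads = DEV_RX_OFFLOAD_CHECKSUM,
		},
	};
	uint16_t q;

	if (rte_eth_dev_configure(port_id, nb_rxq, nb_txq, &conf) != 0)
		return -1;
	rte_eth_dev_info_get(port_id, &dev_info);
	rxq_conf = dev_info.default_rxconf;
	for (q = 0; q < nb_rxq; q++) {
		/* queue 0 alone additionally strips VLAN tags */
		rxq_conf.offloads = conf.rxmode.offloads |
			(q == 0 ? DEV_RX_OFFLOAD_VLAN_STRIP : 0);
		if (rte_eth_rx_queue_setup(port_id, q, nb_desc, socket_id,
					   &rxq_conf, mp) != 0)
			return -1;
	}
	return 0;
}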

From Thomas:
> make the on-the-fly VLAN offloads configuration more generic.

The cons:
1. A hard requirement on the PMDs to support the new API.

I can commit to converting the mlx5 and mlx4 PMDs. I know other PMD maintainers plan to convert their PMDs as well. If we take this approach we must make sure they all move.

There is another possible API (API3):
1. Keep the per-port, per-queue configuration.
2. Add the get functions for better error reporting and visibility for the application.
3. Keep the current on-the-fly VLAN offloads configuration as an exception. In case there is a need to configure more offloads on the fly, we can move to API2.

With API1 I am obviously OK.
I agree API2 is more generic.
API3 is a nice compromise, if we don't want to force all PMDs to convert.

--Shahaf

^ permalink raw reply	[flat|nested] 134+ messages in thread

* Re: [dpdk-dev] [PATCH 4/4] ethdev: add helpers to move to the new offloads API
  2017-09-18 18:18                                                   ` Shahaf Shuler
@ 2017-09-18 21:08                                                     ` Thomas Monjalon
  2017-09-19  7:33                                                       ` Shahaf Shuler
  0 siblings, 1 reply; 134+ messages in thread
From: Thomas Monjalon @ 2017-09-18 21:08 UTC (permalink / raw)
  To: Shahaf Shuler; +Cc: Bruce Richardson, Ananyev, Konstantin, stephen, dev

18/09/2017 20:18, Shahaf Shuler:
> Monday, September 18, 2017 5:45 PM, Bruce Richardson:
> > 
> > On Mon, Sep 18, 2017 at 02:27:25PM +0000, Shahaf Shuler wrote:
> > > Monday, September 18, 2017 2:38 PM, Bruce Richardson
> > > > On Mon, Sep 18, 2017 at 01:32:29PM +0200, Thomas Monjalon wrote:
> > > > > 18/09/2017 13:11, Ananyev, Konstantin:
> > > > > > From: Richardson, Bruce
> > > > > > > >
> > > > > > > > I think we are all in favor of having a separate API here.
> > > > > > > > Though from the discussion we had at the latest TB, I am not
> > > > > > > > sure it is doable in the 17.11 timeframe.
> > > > > > >
> > > > > > > Ok, so does that imply no change in this release, and that the
> > > > > > > existing set is to be ignored?
> > > > > >
> > > > > > No, my understanding is that the current plan is to go forward with
> > > > > > Shahaf's patches, and then apply another one (new set/get API) on top
> > of them.
> > > > >
> > > > > Yes, it is what we agreed (hope to see it in minutes).
> > > > > If someone can do these new patches in 17.11 timeframe, it's great!
> > > > > Bruce, do you want to give it a try?
> > > >
> > > > If I have the chance, I can try, but given how short time is and
> > > > that userspace is on next week, I very much doubt I'll even get it started.
> > >
> > > I wasn't aware of the techboard decision on the extra patchset needed.
> > > I think it would be wrong to introduce an API in 17.11 and change it again in
> > 18.02.
> > > I will do my best to make everything ready for 17.11 so we can have one
> > solid API on top of which all PMDs and applications will be converted.
> > Considering some holidays and the DPDK summit, I won't have much time to
> > work on it.
> > >
> > > The plan is as follows:
> > > 1.  complete the last comment on the current series and integrate it.
> > > 2. send a new patchset to convert to the API suggested above.
> > >
> > > Aggregating the different suggestions, I came up with the below. If this is
> > agreed, then I will move on with the implementation.
> > > (I thought it would be good to return error values for the get functions).
> > 
> > I'd rather you didn't. :-) The only realistic error I would consider is an invalid
> > port id, and I think returning 0 - no offloads - is fine in those cases. The user
> > will pretty quickly discover it's an invalid port id anyway, so I prefer a get
> > function to just return the value as a return value and be done with it!
> 
> It would be simpler, however I am not sure an invalid port is the only error to consider. Another possible error is that the PMD does not support this function.
> In that case returning 0 is not good enough. The application cannot know why the offload is not set: is it because it was set wrong, or because the PMD just doesn't support this API (which leads me to my next point). 

We can skip error reporting on "get" functions
and rely on "set" functions to return an error if the offload API is not
supported, or for other miscellaneous errors.

> Declare:
> API1 = The API worked on so far.
> API2 = The suggested API being discussed here.
> 
> API1 was designed for easy adoption by both PMDs and applications. An application can use either the old or the new API on top of a PMD which supports one of them, thanks to the convert functions. There was no hard demand to force all of the PMDs to support it at once.
> With API2 this model breaks. An application which moved to the new offloads API cannot work with a PMD which supports only the old one.

It means generic applications cannot migrate to the new API
until every PMD has migrated.
I don't see that as a big issue.

> If I aggregate the pros for API2:
> From Bruce:
> >* allows some settings to be set before start, and others afterwards,
> >  with an appropriate return value if dynamic config not supported.
> >* we can get fine grained error reporting from these - the set calls can
> >  all return the mask indicating what offloads could not be applied -
> >  zero means all ok, 1 means a problem with that setting. This may be
> >  easier for the app to use than feature discovery in some cases.
> 
> For that functionality I think the get functions are enough (the application sets offloads and then checks which of them were actually set).
> The set functions are more for on-the-fly configuration. 
> 
> >* for those PMDs which support configuration at a per-queue level, it
> >  can allow the user to specify the per-port settings as a default, and
> >  then override that value at the queue level, if you just want one queue
> >  different from the rest.
> 
> This can be done with API1 as well.  
> 
> From Thomas:
> > make the on-the-fly VLAN offloads configuration more generic.
> 
> The cons:
> 1. A hard requirement on the PMDs to support the new API.
> 
> I can commit to converting the mlx5 and mlx4 PMDs. I know other PMD maintainers plan to convert their PMDs as well. If we take this approach we must make sure they all move.

We can try to get an agreement from more vendors at Dublin summit.
If not, we can wait more than one release cycle for late support.

> There is another possible API (API3):
> 1. Keep the per-port, per-queue configuration.
> 2. Add the get functions for better error reporting and visibility for the application.
> 3. Keep the current on-the-fly VLAN offloads configuration as an exception. In case there is a need to configure more offloads on the fly, we can move to API2.
> 
> With API1 I am obviously OK.
> I agree API2 is more generic.
> API3 is a nice compromise, if we don't want to force all PMDs to convert.

The question is: do we want to choose a compromise while breaking this API?

^ permalink raw reply	[flat|nested] 134+ messages in thread

* Re: [dpdk-dev] [PATCH 4/4] ethdev: add helpers to move to the new offloads API
  2017-09-18 21:08                                                     ` Thomas Monjalon
@ 2017-09-19  7:33                                                       ` Shahaf Shuler
  2017-09-19  7:56                                                         ` Thomas Monjalon
  0 siblings, 1 reply; 134+ messages in thread
From: Shahaf Shuler @ 2017-09-19  7:33 UTC (permalink / raw)
  To: Thomas Monjalon, Ferruh Yigit
  Cc: Bruce Richardson, Ananyev, Konstantin, stephen, dev

Tuesday, September 19, 2017 12:09 AM, Thomas Monjalon:
> >
> > It would be simpler, however I am not sure an invalid port is the only error to
> consider. Another possible error is that the PMD does not support this
> function.
> > In that case returning 0 is not good enough. The application cannot know
> why the offload is not set: is it because it was set wrong, or because the PMD
> just doesn't support this API (which leads me to my next point).
> 
> We can skip error reporting on "get" functions and rely on "set" functions to
> return an error if the offload API is not supported, or for other miscellaneous errors.

It will complicate the set function then.
Instead of using the return value to learn that all offloads were set, the application will need to provide another pointer to the function in order to learn which offloads were actually set.

I understand it is nice to use the return value of the get without the need for a temporary variable; it saves some lines of code.
But I think good error reporting, making it crystal clear to the application why the offloads read back as 0, wins.


> > I can commit to converting the mlx5 and mlx4 PMDs. I know other PMD maintainers plan
> to convert their PMDs as well. If we take this approach we must make sure
> they all move.
> 
> We can try to get an agreement from more vendors at Dublin summit.
> If not, we can wait more than one release cycle for late support.

Yes, we can discuss it in Dublin. Still, I want to emphasize my concern:
there is no point in moving a PMD to the new API if there is no application to use it. Besides user applications, this also covers testpmd and the other examples in the dpdk tree (which I plan to convert in 18.02).
PMD maintainers may object to this conversion if their PMD still uses the old offloads API.

So can we have a guarantee from Thomas/Ferruh that this series, as long as it is technically OK, will be accepted? Will we force all who object to change their PMDs?

If not, I think it is a bad approach to leave the API floating around ethdev with no PMD to implement it.



> 
> > There is another possible API (API3):
> > 1. Keep the per-port, per-queue configuration.
> > 2. Add the get functions for better error reporting and visibility for the
> application.
> > 3. Keep the current on-the-fly VLAN offloads configuration as an
> exception. In case there is a need to configure more offloads on the
> fly, we can move to API2.
> >
> > With API1 I am obviously OK.
> > I agree API2 is more generic.
> > API3 is a nice compromise, if we don't want to force all PMDs to convert.
> 
> The question is: do we want to choose a compromise while breaking this
> API?

Maybe compromise is not the right word.

We are striving for the generic API2, which has the full functionality and generalizes API1 by supporting on-the-fly configuration as well.
Maybe for user applications there is no such use-case. How many applications decide on the fly to suddenly change the CRC strip or the scatter setting?
Moreover, how many PMDs will actually support such on-the-fly configuration?
How easy will it be for an application to work with the API on a PMD which doesn't support on-the-fly configuration? It will need to try, and on failure stop the port and try again - in that sense there is not much benefit in API2.

Currently only the VLAN offloads can be set on the fly, and maybe that is more than enough; I don't know, I am not familiar with enough applications to be sure.

API3 proposes to wait with this approach until we have a proper use case from users.

^ permalink raw reply	[flat|nested] 134+ messages in thread

* Re: [dpdk-dev] [PATCH 4/4] ethdev: add helpers to move to the new offloads API
  2017-09-19  7:33                                                       ` Shahaf Shuler
@ 2017-09-19  7:56                                                         ` Thomas Monjalon
  0 siblings, 0 replies; 134+ messages in thread
From: Thomas Monjalon @ 2017-09-19  7:56 UTC (permalink / raw)
  To: Shahaf Shuler, Ferruh Yigit, Bruce Richardson, Ananyev, Konstantin
  Cc: stephen, dev

19/09/2017 09:33, Shahaf Shuler:
> Tuesday, September 19, 2017 12:09 AM, Thomas Monjalon:
> > >
> > > It would be simpler, however I am not sure an invalid port is the only error to
> > consider. Another possible error is that the PMD does not support this
> > function.
> > > In that case returning 0 is not good enough. The application cannot know
> > why the offload is not set: is it because it was set wrong, or because the PMD
> > just doesn't support this API (which leads me to my next point).
> > 
> > We can skip error reporting on "get" functions and rely on "set" functions to
> > return an error if the offload API is not supported, or for other miscellaneous errors.
> 
> It will complicate the set function then.
> Instead of using the return value to learn that all offloads were set, the application will need to provide another pointer to the function in order to learn which offloads were actually set.

I think we must forbid setting offloads partially.
If one setting is not possible, nothing should be done.

I don't understand why it would complicate the "set" function.
Anyway, we must report errors in the "set" function.

One more question: do we want to return a mask of "accepted" offloads
by getting the mask as a pointer? See the sketch below.
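A sketch of such all-or-nothing semantics (editor's illustration only;
set_rx_offloads_strict is a hypothetical helper, not a merged function):
validate the whole mask against the capabilities first, apply only if
everything is supported, and report the accepted subset through a pointer.

#include <errno.h>
#include <rte_ethdev.h>

static int
set_rx_offloads_strict(uint8_t port_id, uint64_t mask, uint64_t *accepted)
{
	struct rte_eth_dev_info info;

	rte_eth_dev_info_get(port_id, &info);
	if ((mask & info.rx_offload_capa) != mask) {
		/* partial support: change nothing, report what would fit */
		*accepted = mask & info.rx_offload_capa;
		return -ENOTSUP;
	}
	/* ... only now program the hardware with the full mask ... */
	*accepted = mask;
	return 0;
}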

> I understand it is nice to use the return value of the get without the need for a temporary variable; it saves some lines of code.
> But I think good error reporting, making it crystal clear to the application why the offloads read back as 0, wins.

No strong opinion about an error return in the "get" function.
It is probably reasonable to distinguish between an offload value of 0
and the "get" function not being implemented.

> > > I can commit to converting the mlx5 and mlx4 PMDs. I know other PMD maintainers plan
> > to convert their PMDs as well. If we take this approach we must make sure
> > they all move.
> > 
> > We can try to get an agreement from more vendors at Dublin summit.
> > If not, we can wait more than one release cycle for late support.
> 
> Yes, we can discuss it in Dublin. Still, I want to emphasize my concern:
> there is no point in moving a PMD to the new API if there is no application to use it. Besides user applications, this also covers testpmd and the other examples in the dpdk tree (which I plan to convert in 18.02).
> PMD maintainers may object to this conversion if their PMD still uses the old offloads API.
> 
> So can we have a guarantee from Thomas/Ferruh that this series, as long as it is technically OK, will be accepted? Will we force all who object to change their PMDs?
> 
> If not, I think it is a bad approach to leave the API floating around ethdev with no PMD to implement it.
> 
> > > There is another possible API (API3):
> > > 1. Keep the per-port, per-queue configuration.
> > > 2. Add the get functions for better error reporting and visibility for the
> > application.
> > > 3. Keep the current on-the-fly VLAN offloads configuration as an
> > exception. In case there is a need to configure more offloads on the
> > fly, we can move to API2.
> > >
> > > With API1 I am obviously OK.
> > > I agree API2 is more generic.
> > > API3 is a nice compromise, if we don't want to force all PMDs to convert.
> > 
> > The question is: do we want to choose a compromise while breaking this
> > API?
> 
> Maybe compromise is not the right word.
> 
> We are striving for the generic API2, which has the full functionality and generalizes API1 by supporting on-the-fly configuration as well.
> Maybe for user applications there is no such use-case. How many applications decide on the fly to suddenly change the CRC strip or the scatter setting?
> Moreover, how many PMDs will actually support such on-the-fly configuration?
> How easy will it be for an application to work with the API on a PMD which doesn't support on-the-fly configuration? It will need to try, and on failure stop the port and try again - in that sense there is not much benefit in API2.
> 
> Currently only the VLAN offloads can be set on the fly, and maybe that is more than enough; I don't know, I am not familiar with enough applications to be sure.
> 
> API3 proposes to wait with this approach until we have a proper use case from users.

If, as a community, we decide that configuring offloads on the fly
is not a requirement, it is OK not to plan for it in the API.
If we decide not to do it now, we could change the API again later.

Opinions?

^ permalink raw reply	[flat|nested] 134+ messages in thread

* [dpdk-dev] [PATCH v5 0/3] ethdev new offloads API
  2017-09-17  6:54     ` [dpdk-dev] [PATCH v4 0/3] " Shahaf Shuler
                         ` (3 preceding siblings ...)
  2017-09-18  7:51       ` [dpdk-dev] [PATCH v4 0/3] ethdev new " Andrew Rybchenko
@ 2017-09-28 18:54       ` Shahaf Shuler
  2017-09-28 18:54         ` [dpdk-dev] [PATCH v5 1/3] ethdev: introduce Rx queue " Shahaf Shuler
                           ` (3 more replies)
  4 siblings, 4 replies; 134+ messages in thread
From: Shahaf Shuler @ 2017-09-28 18:54 UTC (permalink / raw)
  To: ferruh.yigit, thomas; +Cc: arybchenko, konstantin.ananyev, jerin.jacob, dev

Tx offloads configuration is per queue. Tx offloads are enabled by default,
and can be disabled using ETH_TXQ_FLAGS_NO* flags.
This behaviour is not consistent with the Rx side, where the Rx offloads
configuration is per port. Rx offloads are disabled by default and enabled
according to a bit field in the rte_eth_rxmode structure.

Moreover, considering that more Tx and Rx offloads will be added
over time, the cost of managing them all inside the PMD will be tremendous,
as the PMD will need to check for matches against the entire offload set
for each mbuf it handles.
In addition, with the current approach each added Rx offload breaks ABI
compatibility, as it requires adding entries to existing bit-fields.
 
The series addresses the above issues by defining a new offloads API.
In the new API, offloads are divided into per-port and per-queue offloads,
with a corresponding capability for each.
The offloads are disabled by default. Each offload can be enabled or
disabled using the existing DEV_TX_OFFLOAD_* or DEV_RX_OFFLOAD_* flags.
Such an API makes it easy to add or remove offloads without breaking
ABI compatibility.

In order to provide a smooth transition between the APIs, the following
actions were taken:
*  The old offloads API is kept for the meanwhile.
*  Helper functions which copy from the old to the new API were added to
   ethdev, enabling a PMD to support only one of the APIs.
*  Helper functions which copy from the new to the old API were also added,
   to enable an application to use the new API with a PMD which still
   supports the old one.

Per the discussion on the RFC of this series [1], the integration plan which
was decided is to do the transition in two phases:
* The ethdev API will move in 17.11.
* Apps and examples will move in 18.02.

This is to give PMD maintainers sufficient time to adopt the new API.

[1]
http://dpdk.org/ml/archives/dev/2017-August/072643.html

on v5:
 - fix documentation.
 - fix comments on port offloads configuration.

on v4:
 - Added another patch for documentation.
 - Fixed ETH_TXQ_FLAGS_IGNORE flag override.
 - clarify the description of DEV_TX_OFFLOAD_MBUF_FAST_FREE offload.

on v3:
 - Introduce the DEV_TX_OFFLOAD_MBUF_FAST_FREE to act as an equivalent
   for the no refcnt and single mempool flags.
 - Fix features documentation.
 - Fix comment style.

on v2:
 - Taking new approach of dividing offloads into per-queue and per-port one.
 - Postpone the Tx/Rx public struct renaming to 18.02
 - squash the helper functions into the Rx/Tx offloads intro patches.

Shahaf Shuler (3):
  ethdev: introduce Rx queue offloads API
  ethdev: introduce Tx queue offloads API
  doc: add details on ethdev offloads API

 doc/guides/nics/features.rst            |  66 +++++---
 doc/guides/prog_guide/poll_mode_drv.rst |  20 +++
 lib/librte_ether/rte_ethdev.c           | 223 +++++++++++++++++++++++++--
 lib/librte_ether/rte_ethdev.h           |  89 ++++++++++-
 4 files changed, 358 insertions(+), 40 deletions(-)

Series-reviewed-by: Andrew Rybchenko <arybchenko@solarflare.com>
-- 
2.12.0

^ permalink raw reply	[flat|nested] 134+ messages in thread

* [dpdk-dev] [PATCH v5 1/3] ethdev: introduce Rx queue offloads API
  2017-09-28 18:54       ` [dpdk-dev] [PATCH v5 " Shahaf Shuler
@ 2017-09-28 18:54         ` Shahaf Shuler
  2017-10-03  0:32           ` Ferruh Yigit
  2017-09-28 18:54         ` [dpdk-dev] [PATCH v5 2/3] ethdev: introduce Tx " Shahaf Shuler
                           ` (2 subsequent siblings)
  3 siblings, 1 reply; 134+ messages in thread
From: Shahaf Shuler @ 2017-09-28 18:54 UTC (permalink / raw)
  To: ferruh.yigit, thomas; +Cc: arybchenko, konstantin.ananyev, jerin.jacob, dev

Introduce a new API to configure Rx offloads.

In the new API, offloads are divided into per-port and per-queue
offloads. The PMD reports a capability for each of them.
Offloads are enabled using the existing DEV_RX_OFFLOAD_* flags.
To enable a per-port offload, the offload should be set in both the device
configuration and the queue configuration. To enable a per-queue offload,
the offload can be set only in the queue configuration.

Applications should set the ignore_offload_bitfield bit in the rxmode
structure in order to move to the new API.

The old Rx offloads API is kept for the meanwhile, in order to enable a
smooth transition for PMDs and applications to the new API.
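
As a minimal sketch of the opt-in mechanics this patch adds (editor's
illustration; error handling omitted for brevity):

#include <rte_ethdev.h>

static int
configure_new_rx_api(uint8_t port_id)
{
	struct rte_eth_conf port_conf = { 0 };

	/* Opt in to the new API: the old bitfield below is then ignored. */
	port_conf.rxmode.ignore_offload_bitfield = 1;
	/* Per-port Rx offloads, validated against rx_offload_capa. */
	port_conf.rxmode.offloads = DEV_RX_OFFLOAD_CHECKSUM |
				    DEV_RX_OFFLOAD_CRC_STRIP;

	/*
	 * rte_eth_dev_configure() converts between the two representations,
	 * so a PMD still reading the old bitfield keeps working.
	 */
	return rte_eth_dev_configure(port_id, 1, 1, &port_conf);
}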

Signed-off-by: Shahaf Shuler <shahafs@mellanox.com>
---
 doc/guides/nics/features.rst  |  33 ++++----
 lib/librte_ether/rte_ethdev.c | 156 +++++++++++++++++++++++++++++++++----
 lib/librte_ether/rte_ethdev.h |  51 +++++++++++-
 3 files changed, 210 insertions(+), 30 deletions(-)

diff --git a/doc/guides/nics/features.rst b/doc/guides/nics/features.rst
index 37ffbc68c..4e68144ef 100644
--- a/doc/guides/nics/features.rst
+++ b/doc/guides/nics/features.rst
@@ -179,7 +179,7 @@ Jumbo frame
 
 Supports Rx jumbo frames.
 
-* **[uses]    user config**: ``dev_conf.rxmode.jumbo_frame``,
+* **[uses]    rte_eth_rxconf,rte_eth_rxmode**: ``offloads:DEV_RX_OFFLOAD_JUMBO_FRAME``.
   ``dev_conf.rxmode.max_rx_pkt_len``.
 * **[related] rte_eth_dev_info**: ``max_rx_pktlen``.
 * **[related] API**: ``rte_eth_dev_set_mtu()``.
@@ -192,7 +192,7 @@ Scattered Rx
 
 Supports receiving segmented mbufs.
 
-* **[uses]       user config**: ``dev_conf.rxmode.enable_scatter``.
+* **[uses]       rte_eth_rxconf,rte_eth_rxmode**: ``offloads:DEV_RX_OFFLOAD_SCATTER``.
 * **[implements] datapath**: ``Scattered Rx function``.
 * **[implements] rte_eth_dev_data**: ``scattered_rx``.
 * **[provides]   eth_dev_ops**: ``rxq_info_get:scattered_rx``.
@@ -206,11 +206,11 @@ LRO
 
 Supports Large Receive Offload.
 
-* **[uses]       user config**: ``dev_conf.rxmode.enable_lro``.
+* **[uses]       rte_eth_rxconf,rte_eth_rxmode**: ``offloads:DEV_RX_OFFLOAD_TCP_LRO``.
 * **[implements] datapath**: ``LRO functionality``.
 * **[implements] rte_eth_dev_data**: ``lro``.
 * **[provides]   mbuf**: ``mbuf.ol_flags:PKT_RX_LRO``, ``mbuf.tso_segsz``.
-* **[provides]   rte_eth_dev_info**: ``rx_offload_capa:DEV_RX_OFFLOAD_TCP_LRO``.
+* **[provides]   rte_eth_dev_info**: ``rx_offload_capa,rx_queue_offload_capa:DEV_RX_OFFLOAD_TCP_LRO``.
 
 
 .. _nic_features_tso:
@@ -363,7 +363,7 @@ VLAN filter
 
 Supports filtering of a VLAN Tag identifier.
 
-* **[uses]       user config**: ``dev_conf.rxmode.hw_vlan_filter``.
+* **[uses]       rte_eth_rxconf,rte_eth_rxmode**: ``offloads:DEV_RX_OFFLOAD_VLAN_FILTER``.
 * **[implements] eth_dev_ops**: ``vlan_filter_set``.
 * **[related]    API**: ``rte_eth_dev_vlan_filter()``.
 
@@ -499,7 +499,7 @@ CRC offload
 
 Supports CRC stripping by hardware.
 
-* **[uses] user config**: ``dev_conf.rxmode.hw_strip_crc``.
+* **[uses] rte_eth_rxconf,rte_eth_rxmode**: ``offloads:DEV_RX_OFFLOAD_CRC_STRIP``.
 
 
 .. _nic_features_vlan_offload:
@@ -509,11 +509,10 @@ VLAN offload
 
 Supports VLAN offload to hardware.
 
-* **[uses]       user config**: ``dev_conf.rxmode.hw_vlan_strip``,
-  ``dev_conf.rxmode.hw_vlan_filter``, ``dev_conf.rxmode.hw_vlan_extend``.
+* **[uses]       rte_eth_rxconf,rte_eth_rxmode**: ``offloads:DEV_RX_OFFLOAD_VLAN_STRIP,DEV_RX_OFFLOAD_VLAN_FILTER,DEV_RX_OFFLOAD_VLAN_EXTEND``.
 * **[implements] eth_dev_ops**: ``vlan_offload_set``.
 * **[provides]   mbuf**: ``mbuf.ol_flags:PKT_RX_VLAN_STRIPPED``, ``mbuf.vlan_tci``.
-* **[provides]   rte_eth_dev_info**: ``rx_offload_capa:DEV_RX_OFFLOAD_VLAN_STRIP``,
+* **[provides]   rte_eth_dev_info**: ``rx_offload_capa,rx_queue_offload_capa:DEV_RX_OFFLOAD_VLAN_STRIP``,
   ``tx_offload_capa:DEV_TX_OFFLOAD_VLAN_INSERT``.
 * **[related]    API**: ``rte_eth_dev_set_vlan_offload()``,
   ``rte_eth_dev_get_vlan_offload()``.
@@ -526,10 +525,11 @@ QinQ offload
 
 Supports QinQ (queue in queue) offload.
 
+* **[uses]     rte_eth_rxconf,rte_eth_rxmode**: ``offloads:DEV_RX_OFFLOAD_QINQ_STRIP``.
 * **[uses]     mbuf**: ``mbuf.ol_flags:PKT_TX_QINQ_PKT``.
 * **[provides] mbuf**: ``mbuf.ol_flags:PKT_RX_QINQ_STRIPPED``, ``mbuf.vlan_tci``,
    ``mbuf.vlan_tci_outer``.
-* **[provides] rte_eth_dev_info**: ``rx_offload_capa:DEV_RX_OFFLOAD_QINQ_STRIP``,
+* **[provides] rte_eth_dev_info**: ``rx_offload_capa,rx_queue_offload_capa:DEV_RX_OFFLOAD_QINQ_STRIP``,
   ``tx_offload_capa:DEV_TX_OFFLOAD_QINQ_INSERT``.
 
 
@@ -540,13 +540,13 @@ L3 checksum offload
 
 Supports L3 checksum offload.
 
-* **[uses]     user config**: ``dev_conf.rxmode.hw_ip_checksum``.
+* **[uses]     rte_eth_rxconf,rte_eth_rxmode**: ``offloads:DEV_RX_OFFLOAD_IPV4_CKSUM``.
 * **[uses]     mbuf**: ``mbuf.ol_flags:PKT_TX_IP_CKSUM``,
   ``mbuf.ol_flags:PKT_TX_IPV4`` | ``PKT_TX_IPV6``.
 * **[provides] mbuf**: ``mbuf.ol_flags:PKT_RX_IP_CKSUM_UNKNOWN`` |
   ``PKT_RX_IP_CKSUM_BAD`` | ``PKT_RX_IP_CKSUM_GOOD`` |
   ``PKT_RX_IP_CKSUM_NONE``.
-* **[provides] rte_eth_dev_info**: ``rx_offload_capa:DEV_RX_OFFLOAD_IPV4_CKSUM``,
+* **[provides] rte_eth_dev_info**: ``rx_offload_capa,rx_queue_offload_capa:DEV_RX_OFFLOAD_IPV4_CKSUM``,
   ``tx_offload_capa:DEV_TX_OFFLOAD_IPV4_CKSUM``.
 
 
@@ -557,13 +557,14 @@ L4 checksum offload
 
 Supports L4 checksum offload.
 
+* **[uses]     rte_eth_rxconf,rte_eth_rxmode**: ``offloads:DEV_RX_OFFLOAD_UDP_CKSUM,DEV_RX_OFFLOAD_TCP_CKSUM``.
 * **[uses]     mbuf**: ``mbuf.ol_flags:PKT_TX_IPV4`` | ``PKT_TX_IPV6``,
   ``mbuf.ol_flags:PKT_TX_L4_NO_CKSUM`` | ``PKT_TX_TCP_CKSUM`` |
   ``PKT_TX_SCTP_CKSUM`` | ``PKT_TX_UDP_CKSUM``.
 * **[provides] mbuf**: ``mbuf.ol_flags:PKT_RX_L4_CKSUM_UNKNOWN`` |
   ``PKT_RX_L4_CKSUM_BAD`` | ``PKT_RX_L4_CKSUM_GOOD`` |
   ``PKT_RX_L4_CKSUM_NONE``.
-* **[provides] rte_eth_dev_info**: ``rx_offload_capa:DEV_RX_OFFLOAD_UDP_CKSUM,DEV_RX_OFFLOAD_TCP_CKSUM``,
+* **[provides] rte_eth_dev_info**: ``rx_offload_capa,rx_queue_offload_capa:DEV_RX_OFFLOAD_UDP_CKSUM,DEV_RX_OFFLOAD_TCP_CKSUM``,
   ``tx_offload_capa:DEV_TX_OFFLOAD_UDP_CKSUM,DEV_TX_OFFLOAD_TCP_CKSUM,DEV_TX_OFFLOAD_SCTP_CKSUM``.
 
 
@@ -574,8 +575,9 @@ MACsec offload
 
 Supports MACsec.
 
+* **[uses]     rte_eth_rxconf,rte_eth_rxmode**: ``offloads:DEV_RX_OFFLOAD_MACSEC_STRIP``.
 * **[uses]     mbuf**: ``mbuf.ol_flags:PKT_TX_MACSEC``.
-* **[provides] rte_eth_dev_info**: ``rx_offload_capa:DEV_RX_OFFLOAD_MACSEC_STRIP``,
+* **[provides] rte_eth_dev_info**: ``rx_offload_capa,rx_queue_offload_capa:DEV_RX_OFFLOAD_MACSEC_STRIP``,
   ``tx_offload_capa:DEV_TX_OFFLOAD_MACSEC_INSERT``.
 
 
@@ -586,13 +588,14 @@ Inner L3 checksum
 
 Supports inner packet L3 checksum.
 
+* **[uses]     rte_eth_rxconf,rte_eth_rxmode**: ``offloads:DEV_RX_OFFLOAD_OUTER_IPV4_CKSUM``.
 * **[uses]     mbuf**: ``mbuf.ol_flags:PKT_TX_IP_CKSUM``,
   ``mbuf.ol_flags:PKT_TX_IPV4`` | ``PKT_TX_IPV6``,
   ``mbuf.ol_flags:PKT_TX_OUTER_IP_CKSUM``,
   ``mbuf.ol_flags:PKT_TX_OUTER_IPV4`` | ``PKT_TX_OUTER_IPV6``.
 * **[uses]     mbuf**: ``mbuf.outer_l2_len``, ``mbuf.outer_l3_len``.
 * **[provides] mbuf**: ``mbuf.ol_flags:PKT_RX_EIP_CKSUM_BAD``.
-* **[provides] rte_eth_dev_info**: ``rx_offload_capa:DEV_RX_OFFLOAD_OUTER_IPV4_CKSUM``,
+* **[provides] rte_eth_dev_info**: ``rx_offload_capa,rx_queue_offload_capa:DEV_RX_OFFLOAD_OUTER_IPV4_CKSUM``,
   ``tx_offload_capa:DEV_TX_OFFLOAD_OUTER_IPV4_CKSUM``.
 
 
diff --git a/lib/librte_ether/rte_ethdev.c b/lib/librte_ether/rte_ethdev.c
index 1849a3bdd..9b73d2377 100644
--- a/lib/librte_ether/rte_ethdev.c
+++ b/lib/librte_ether/rte_ethdev.c
@@ -688,12 +688,90 @@ rte_eth_speed_bitflag(uint32_t speed, int duplex)
 	}
 }
 
+/**
+ * A conversion function from rxmode bitfield API.
+ */
+static void
+rte_eth_convert_rx_offload_bitfield(const struct rte_eth_rxmode *rxmode,
+				    uint64_t *rx_offloads)
+{
+	uint64_t offloads = 0;
+
+	if (rxmode->header_split == 1)
+		offloads |= DEV_RX_OFFLOAD_HEADER_SPLIT;
+	if (rxmode->hw_ip_checksum == 1)
+		offloads |= DEV_RX_OFFLOAD_CHECKSUM;
+	if (rxmode->hw_vlan_filter == 1)
+		offloads |= DEV_RX_OFFLOAD_VLAN_FILTER;
+	if (rxmode->hw_vlan_strip == 1)
+		offloads |= DEV_RX_OFFLOAD_VLAN_STRIP;
+	if (rxmode->hw_vlan_extend == 1)
+		offloads |= DEV_RX_OFFLOAD_VLAN_EXTEND;
+	if (rxmode->jumbo_frame == 1)
+		offloads |= DEV_RX_OFFLOAD_JUMBO_FRAME;
+	if (rxmode->hw_strip_crc == 1)
+		offloads |= DEV_RX_OFFLOAD_CRC_STRIP;
+	if (rxmode->enable_scatter == 1)
+		offloads |= DEV_RX_OFFLOAD_SCATTER;
+	if (rxmode->enable_lro == 1)
+		offloads |= DEV_RX_OFFLOAD_TCP_LRO;
+
+	*rx_offloads = offloads;
+}
+
+/**
+ * A conversion function from rxmode offloads API.
+ */
+static void
+rte_eth_convert_rx_offloads(const uint64_t rx_offloads,
+			    struct rte_eth_rxmode *rxmode)
+{
+
+	if (rx_offloads & DEV_RX_OFFLOAD_HEADER_SPLIT)
+		rxmode->header_split = 1;
+	else
+		rxmode->header_split = 0;
+	if (rx_offloads & DEV_RX_OFFLOAD_CHECKSUM)
+		rxmode->hw_ip_checksum = 1;
+	else
+		rxmode->hw_ip_checksum = 0;
+	if (rx_offloads & DEV_RX_OFFLOAD_VLAN_FILTER)
+		rxmode->hw_vlan_filter = 1;
+	else
+		rxmode->hw_vlan_filter = 0;
+	if (rx_offloads & DEV_RX_OFFLOAD_VLAN_STRIP)
+		rxmode->hw_vlan_strip = 1;
+	else
+		rxmode->hw_vlan_strip = 0;
+	if (rx_offloads & DEV_RX_OFFLOAD_VLAN_EXTEND)
+		rxmode->hw_vlan_extend = 1;
+	else
+		rxmode->hw_vlan_extend = 0;
+	if (rx_offloads & DEV_RX_OFFLOAD_JUMBO_FRAME)
+		rxmode->jumbo_frame = 1;
+	else
+		rxmode->jumbo_frame = 0;
+	if (rx_offloads & DEV_RX_OFFLOAD_CRC_STRIP)
+		rxmode->hw_strip_crc = 1;
+	else
+		rxmode->hw_strip_crc = 0;
+	if (rx_offloads & DEV_RX_OFFLOAD_SCATTER)
+		rxmode->enable_scatter = 1;
+	else
+		rxmode->enable_scatter = 0;
+	if (rx_offloads & DEV_RX_OFFLOAD_TCP_LRO)
+		rxmode->enable_lro = 1;
+	else
+		rxmode->enable_lro = 0;
+}
+
 int
 rte_eth_dev_configure(uint8_t port_id, uint16_t nb_rx_q, uint16_t nb_tx_q,
 		      const struct rte_eth_conf *dev_conf)
 {
 	struct rte_eth_dev *dev;
 	struct rte_eth_dev_info dev_info;
+	struct rte_eth_conf local_conf = *dev_conf;
 	int diag;
 
 	RTE_ETH_VALID_PORTID_OR_ERR_RET(port_id, -EINVAL);
@@ -723,8 +801,20 @@ rte_eth_dev_configure(uint8_t port_id, uint16_t nb_rx_q, uint16_t nb_tx_q,
 		return -EBUSY;
 	}
 
+	/*
+	 * Convert between the offloads API to enable PMDs to support
+	 * only one of them.
+	 */
+	if ((dev_conf->rxmode.ignore_offload_bitfield == 0)) {
+		rte_eth_convert_rx_offload_bitfield(
+				&dev_conf->rxmode, &local_conf.rxmode.offloads);
+	} else {
+		rte_eth_convert_rx_offloads(dev_conf->rxmode.offloads,
+					    &local_conf.rxmode);
+	}
+
 	/* Copy the dev_conf parameter into the dev structure */
-	memcpy(&dev->data->dev_conf, dev_conf, sizeof(dev->data->dev_conf));
+	memcpy(&dev->data->dev_conf, &local_conf, sizeof(dev->data->dev_conf));
 
 	/*
 	 * Check that the numbers of RX and TX queues are not greater
@@ -768,7 +858,7 @@ rte_eth_dev_configure(uint8_t port_id, uint16_t nb_rx_q, uint16_t nb_tx_q,
 	 * If jumbo frames are enabled, check that the maximum RX packet
 	 * length is supported by the configured device.
 	 */
-	if (dev_conf->rxmode.jumbo_frame == 1) {
+	if (local_conf.rxmode.offloads & DEV_RX_OFFLOAD_JUMBO_FRAME) {
 		if (dev_conf->rxmode.max_rx_pkt_len >
 		    dev_info.max_rx_pktlen) {
 			RTE_PMD_DEBUG_TRACE("ethdev port_id=%d max_rx_pkt_len %u"
@@ -1032,6 +1122,7 @@ rte_eth_rx_queue_setup(uint8_t port_id, uint16_t rx_queue_id,
 	uint32_t mbp_buf_size;
 	struct rte_eth_dev *dev;
 	struct rte_eth_dev_info dev_info;
+	struct rte_eth_rxconf local_conf;
 	void **rxq;
 
 	RTE_ETH_VALID_PORTID_OR_ERR_RET(port_id, -EINVAL);
@@ -1102,8 +1193,18 @@ rte_eth_rx_queue_setup(uint8_t port_id, uint16_t rx_queue_id,
 	if (rx_conf == NULL)
 		rx_conf = &dev_info.default_rxconf;
 
+	local_conf = *rx_conf;
+	if (dev->data->dev_conf.rxmode.ignore_offload_bitfield == 0) {
+		/**
+		 * Reflect port offloads to queue offloads in order for
+		 * offloads to not be discarded.
+		 */
+		rte_eth_convert_rx_offload_bitfield(&dev->data->dev_conf.rxmode,
+						    &local_conf.offloads);
+	}
+
 	ret = (*dev->dev_ops->rx_queue_setup)(dev, rx_queue_id, nb_rx_desc,
-					      socket_id, rx_conf, mp);
+					      socket_id, &local_conf, mp);
 	if (!ret) {
 		if (!dev->data->min_rx_buf_size ||
 		    dev->data->min_rx_buf_size > mbp_buf_size)
@@ -2007,7 +2108,8 @@ rte_eth_dev_vlan_filter(uint8_t port_id, uint16_t vlan_id, int on)
 
 	RTE_ETH_VALID_PORTID_OR_ERR_RET(port_id, -ENODEV);
 	dev = &rte_eth_devices[port_id];
-	if (!(dev->data->dev_conf.rxmode.hw_vlan_filter)) {
+	if (!(dev->data->dev_conf.rxmode.offloads &
+	      DEV_RX_OFFLOAD_VLAN_FILTER)) {
 		RTE_PMD_DEBUG_TRACE("port %d: vlan-filtering disabled\n", port_id);
 		return -ENOSYS;
 	}
@@ -2083,23 +2185,41 @@ rte_eth_dev_set_vlan_offload(uint8_t port_id, int offload_mask)
 
 	/*check which option changed by application*/
 	cur = !!(offload_mask & ETH_VLAN_STRIP_OFFLOAD);
-	org = !!(dev->data->dev_conf.rxmode.hw_vlan_strip);
+	org = !!(dev->data->dev_conf.rxmode.offloads &
+		 DEV_RX_OFFLOAD_VLAN_STRIP);
 	if (cur != org) {
-		dev->data->dev_conf.rxmode.hw_vlan_strip = (uint8_t)cur;
+		if (cur)
+			dev->data->dev_conf.rxmode.offloads |=
+				DEV_RX_OFFLOAD_VLAN_STRIP;
+		else
+			dev->data->dev_conf.rxmode.offloads &=
+				~DEV_RX_OFFLOAD_VLAN_STRIP;
 		mask |= ETH_VLAN_STRIP_MASK;
 	}
 
 	cur = !!(offload_mask & ETH_VLAN_FILTER_OFFLOAD);
-	org = !!(dev->data->dev_conf.rxmode.hw_vlan_filter);
+	org = !!(dev->data->dev_conf.rxmode.offloads &
+		 DEV_RX_OFFLOAD_VLAN_FILTER);
 	if (cur != org) {
-		dev->data->dev_conf.rxmode.hw_vlan_filter = (uint8_t)cur;
+		if (cur)
+			dev->data->dev_conf.rxmode.offloads |=
+				DEV_RX_OFFLOAD_VLAN_FILTER;
+		else
+			dev->data->dev_conf.rxmode.offloads &=
+				~DEV_RX_OFFLOAD_VLAN_FILTER;
 		mask |= ETH_VLAN_FILTER_MASK;
 	}
 
 	cur = !!(offload_mask & ETH_VLAN_EXTEND_OFFLOAD);
-	org = !!(dev->data->dev_conf.rxmode.hw_vlan_extend);
+	org = !!(dev->data->dev_conf.rxmode.offloads &
+		 DEV_RX_OFFLOAD_VLAN_EXTEND);
 	if (cur != org) {
-		dev->data->dev_conf.rxmode.hw_vlan_extend = (uint8_t)cur;
+		if (cur)
+			dev->data->dev_conf.rxmode.offloads |=
+				DEV_RX_OFFLOAD_VLAN_EXTEND;
+		else
+			dev->data->dev_conf.rxmode.offloads &=
+				~DEV_RX_OFFLOAD_VLAN_EXTEND;
 		mask |= ETH_VLAN_EXTEND_MASK;
 	}
 
@@ -2108,6 +2228,13 @@ rte_eth_dev_set_vlan_offload(uint8_t port_id, int offload_mask)
 		return ret;
 
 	RTE_FUNC_PTR_OR_ERR_RET(*dev->dev_ops->vlan_offload_set, -ENOTSUP);
+
+	/*
+	 * Convert to the offload bitfield API just in case the underlying PMD
+	 * still supporting it.
+	 */
+	rte_eth_convert_rx_offloads(dev->data->dev_conf.rxmode.offloads,
+				    &dev->data->dev_conf.rxmode);
 	(*dev->dev_ops->vlan_offload_set)(dev, mask);
 
 	return ret;
@@ -2122,13 +2249,16 @@ rte_eth_dev_get_vlan_offload(uint8_t port_id)
 	RTE_ETH_VALID_PORTID_OR_ERR_RET(port_id, -ENODEV);
 	dev = &rte_eth_devices[port_id];
 
-	if (dev->data->dev_conf.rxmode.hw_vlan_strip)
+	if (dev->data->dev_conf.rxmode.offloads &
+	    DEV_RX_OFFLOAD_VLAN_STRIP)
 		ret |= ETH_VLAN_STRIP_OFFLOAD;
 
-	if (dev->data->dev_conf.rxmode.hw_vlan_filter)
+	if (dev->data->dev_conf.rxmode.offloads &
+	    DEV_RX_OFFLOAD_VLAN_FILTER)
 		ret |= ETH_VLAN_FILTER_OFFLOAD;
 
-	if (dev->data->dev_conf.rxmode.hw_vlan_extend)
+	if (dev->data->dev_conf.rxmode.offloads &
+	    DEV_RX_OFFLOAD_VLAN_EXTEND)
 		ret |= ETH_VLAN_EXTEND_OFFLOAD;
 
 	return ret;
diff --git a/lib/librte_ether/rte_ethdev.h b/lib/librte_ether/rte_ethdev.h
index 99cdd54d4..e02d57881 100644
--- a/lib/librte_ether/rte_ethdev.h
+++ b/lib/librte_ether/rte_ethdev.h
@@ -348,7 +348,18 @@ struct rte_eth_rxmode {
 	enum rte_eth_rx_mq_mode mq_mode;
 	uint32_t max_rx_pkt_len;  /**< Only used if jumbo_frame enabled. */
 	uint16_t split_hdr_size;  /**< hdr buf size (header_split enabled).*/
+	/**
+	 * Per-port Rx offloads to be set using DEV_RX_OFFLOAD_* flags.
+	 * Only offloads set on rx_offload_capa field on rte_eth_dev_info
+	 * structure are allowed to be set.
+	 */
+	uint64_t offloads;
 	__extension__
+	/**
+	 * Below bitfield API is obsolete. Application should
+	 * enable per-port offloads using the offload field
+	 * above.
+	 */
 	uint16_t header_split : 1, /**< Header Split enable. */
 		hw_ip_checksum   : 1, /**< IP/UDP/TCP checksum offload enable. */
 		hw_vlan_filter   : 1, /**< VLAN filter enable. */
@@ -357,7 +368,17 @@ struct rte_eth_rxmode {
 		jumbo_frame      : 1, /**< Jumbo Frame Receipt enable. */
 		hw_strip_crc     : 1, /**< Enable CRC stripping by hardware. */
 		enable_scatter   : 1, /**< Enable scatter packets rx handler */
-		enable_lro       : 1; /**< Enable LRO */
+		enable_lro       : 1, /**< Enable LRO */
+		/**
+		 * When set the offload bitfield should be ignored.
+		 * Instead per-port Rx offloads should be set on offloads
+		 * field above.
+		 * Per-queue offloads should be set on the rte_eth_rxconf
+		 * structure.
+		 * This bit is temporary until the rxmode bitfield offloads
+		 * API is deprecated.
+		 */
+		ignore_offload_bitfield : 1;
 };
 
 /**
@@ -691,6 +712,12 @@ struct rte_eth_rxconf {
 	uint16_t rx_free_thresh; /**< Drives the freeing of RX descriptors. */
 	uint8_t rx_drop_en; /**< Drop packets if no descriptors are available. */
 	uint8_t rx_deferred_start; /**< Do not start queue with rte_eth_dev_start(). */
+	/**
+	 * Per-queue Rx offloads to be set using DEV_RX_OFFLOAD_* flags.
+	 * Only offloads set on rx_queue_offload_capa or rx_offload_capa
+	 * fields on rte_eth_dev_info structure are allowed to be set.
+	 */
+	uint64_t offloads;
 };
 
 #define ETH_TXQ_FLAGS_NOMULTSEGS 0x0001 /**< nb_segs=1 for all mbufs */
@@ -907,6 +934,18 @@ struct rte_eth_conf {
 #define DEV_RX_OFFLOAD_QINQ_STRIP  0x00000020
 #define DEV_RX_OFFLOAD_OUTER_IPV4_CKSUM 0x00000040
 #define DEV_RX_OFFLOAD_MACSEC_STRIP     0x00000080
+#define DEV_RX_OFFLOAD_HEADER_SPLIT	0x00000100
+#define DEV_RX_OFFLOAD_VLAN_FILTER	0x00000200
+#define DEV_RX_OFFLOAD_VLAN_EXTEND	0x00000400
+#define DEV_RX_OFFLOAD_JUMBO_FRAME	0x00000800
+#define DEV_RX_OFFLOAD_CRC_STRIP	0x00001000
+#define DEV_RX_OFFLOAD_SCATTER		0x00002000
+#define DEV_RX_OFFLOAD_CHECKSUM (DEV_RX_OFFLOAD_IPV4_CKSUM | \
+				 DEV_RX_OFFLOAD_UDP_CKSUM | \
+				 DEV_RX_OFFLOAD_TCP_CKSUM)
+#define DEV_RX_OFFLOAD_VLAN (DEV_RX_OFFLOAD_VLAN_STRIP | \
+			     DEV_RX_OFFLOAD_VLAN_FILTER | \
+			     DEV_RX_OFFLOAD_VLAN_EXTEND)
 
 /**
  * TX offload capabilities of a device.
@@ -949,8 +988,11 @@ struct rte_eth_dev_info {
 	/** Maximum number of hash MAC addresses for MTA and UTA. */
 	uint16_t max_vfs; /**< Maximum number of VFs. */
 	uint16_t max_vmdq_pools; /**< Maximum number of VMDq pools. */
-	uint32_t rx_offload_capa; /**< Device RX offload capabilities. */
+	uint64_t rx_offload_capa;
+	/**< Device per port RX offload capabilities. */
 	uint32_t tx_offload_capa; /**< Device TX offload capabilities. */
+	uint64_t rx_queue_offload_capa;
+	/**< Device per queue RX offload capabilities. */
 	uint16_t reta_size;
 	/**< Device redirection table size, the total number of entries. */
 	uint8_t hash_key_size; /**< Hash key size in bytes */
@@ -1874,6 +1916,9 @@ uint32_t rte_eth_speed_bitflag(uint32_t speed, int duplex);
  *        each statically configurable offload hardware feature provided by
  *        Ethernet devices, such as IP checksum or VLAN tag stripping for
  *        example.
+ *        The Rx offload bitfield API is obsolete and will be deprecated.
+ *        Applications should set the ignore_offload_bitfield bit in *rxmode*
+ *        structure and use the offloads field to set per-port offloads instead.
  *     - the Receive Side Scaling (RSS) configuration when using multiple RX
  *         queues per port.
  *
@@ -1927,6 +1972,8 @@ void _rte_eth_dev_reset(struct rte_eth_dev *dev);
  *   The *rx_conf* structure contains an *rx_thresh* structure with the values
  *   of the Prefetch, Host, and Write-Back threshold registers of the receive
  *   ring.
+ *   In addition it contains the hardware offloads features to activate using
+ *   the DEV_RX_OFFLOAD_* flags.
  * @param mb_pool
  *   The pointer to the memory pool from which to allocate *rte_mbuf* network
  *   memory buffers to populate each descriptor of the receive ring.
-- 
2.12.0

^ permalink raw reply	[flat|nested] 134+ messages in thread

* [dpdk-dev] [PATCH v5 2/3] ethdev: introduce Tx queue offloads API
  2017-09-28 18:54       ` [dpdk-dev] [PATCH v5 " Shahaf Shuler
  2017-09-28 18:54         ` [dpdk-dev] [PATCH v5 1/3] ethdev: introduce Rx queue " Shahaf Shuler
@ 2017-09-28 18:54         ` Shahaf Shuler
  2017-10-03 19:50           ` Ferruh Yigit
  2017-09-28 18:54         ` [dpdk-dev] [PATCH v5 3/3] doc: add details on ethdev " Shahaf Shuler
  2017-10-04  8:17         ` [dpdk-dev] [PATCH v6 0/4] ethdev new " Shahaf Shuler
  3 siblings, 1 reply; 134+ messages in thread
From: Shahaf Shuler @ 2017-09-28 18:54 UTC (permalink / raw)
  To: ferruh.yigit, thomas; +Cc: arybchenko, konstantin.ananyev, jerin.jacob, dev

Introduce a new API to configure Tx offloads.

In the new API, offloads are divided into per-port and per-queue
offloads. The PMD reports a capability for each of them.
Offloads are enabled using the existing DEV_TX_OFFLOAD_* flags.
To enable a per-port offload, the offload should be set in both the device
configuration and the queue configuration. To enable a per-queue offload,
the offload can be set only in the queue configuration.

In addition, the Tx offloads are disabled by default and are enabled
per application needs. This greatly simplifies PMD management of the
different offloads.

Applications should set the ETH_TXQ_FLAGS_IGNORE flag on the txq_flags
field in order to move to the new API.

The old Tx offloads API is kept for the meanwhile, in order to enable a
smooth transition for PMDs and applications to the new API.
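
A minimal sketch of the Tx-side opt-in (editor's illustration, assuming the
offloads field this patch adds to rte_eth_txconf; error handling omitted):

#include <rte_ethdev.h>

static int
setup_txq_new_api(uint8_t port_id, uint16_t nb_desc, unsigned int socket_id)
{
	struct rte_eth_dev_info dev_info;
	struct rte_eth_txconf txq_conf;

	rte_eth_dev_info_get(port_id, &dev_info);
	txq_conf = dev_info.default_txconf;
	/* Opt in: the old ETH_TXQ_FLAGS_NO* bits are then ignored. */
	txq_conf.txq_flags = ETH_TXQ_FLAGS_IGNORE;
	/* Per-queue Tx offloads, validated against tx_queue_offload_capa. */
	txq_conf.offloads = DEV_TX_OFFLOAD_IPV4_CKSUM |
			    DEV_TX_OFFLOAD_MBUF_FAST_FREE;

	return rte_eth_tx_queue_setup(port_id, 0, nb_desc, socket_id,
				      &txq_conf);
}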

Signed-off-by: Shahaf Shuler <shahafs@mellanox.com>
---
 doc/guides/nics/features.rst  | 33 ++++++++++++++-----
 lib/librte_ether/rte_ethdev.c | 67 +++++++++++++++++++++++++++++++++++++-
 lib/librte_ether/rte_ethdev.h | 38 ++++++++++++++++++++-
 3 files changed, 128 insertions(+), 10 deletions(-)

diff --git a/doc/guides/nics/features.rst b/doc/guides/nics/features.rst
index 4e68144ef..1a8af473b 100644
--- a/doc/guides/nics/features.rst
+++ b/doc/guides/nics/features.rst
@@ -131,7 +131,8 @@ Lock-free Tx queue
 If a PMD advertises DEV_TX_OFFLOAD_MT_LOCKFREE capable, multiple threads can
 invoke rte_eth_tx_burst() concurrently on the same Tx queue without SW lock.
 
-* **[provides] rte_eth_dev_info**: ``tx_offload_capa:DEV_TX_OFFLOAD_MT_LOCKFREE``.
+* **[uses]    rte_eth_txconf,rte_eth_txmode**: ``offloads:DEV_TX_OFFLOAD_MT_LOCKFREE``.
+* **[provides] rte_eth_dev_info**: ``tx_offload_capa,tx_queue_offload_capa:DEV_TX_OFFLOAD_MT_LOCKFREE``.
 * **[related]  API**: ``rte_eth_tx_burst()``.
 
 
@@ -220,11 +221,12 @@ TSO
 
 Supports TCP Segmentation Offloading.
 
+* **[uses]       rte_eth_txconf,rte_eth_txmode**: ``offloads:DEV_TX_OFFLOAD_TCP_TSO``.
 * **[uses]       rte_eth_desc_lim**: ``nb_seg_max``, ``nb_mtu_seg_max``.
 * **[uses]       mbuf**: ``mbuf.ol_flags:PKT_TX_TCP_SEG``.
 * **[uses]       mbuf**: ``mbuf.tso_segsz``, ``mbuf.l2_len``, ``mbuf.l3_len``, ``mbuf.l4_len``.
 * **[implements] datapath**: ``TSO functionality``.
-* **[provides]   rte_eth_dev_info**: ``tx_offload_capa:DEV_TX_OFFLOAD_TCP_TSO,DEV_TX_OFFLOAD_UDP_TSO``.
+* **[provides]   rte_eth_dev_info**: ``tx_offload_capa,tx_queue_offload_capa:DEV_TX_OFFLOAD_TCP_TSO,DEV_TX_OFFLOAD_UDP_TSO``.
 
 
 .. _nic_features_promiscuous_mode:
@@ -510,10 +512,11 @@ VLAN offload
 Supports VLAN offload to hardware.
 
 * **[uses]       rte_eth_rxconf,rte_eth_rxmode**: ``offloads:DEV_RX_OFFLOAD_VLAN_STRIP,DEV_RX_OFFLOAD_VLAN_FILTER,DEV_RX_OFFLOAD_VLAN_EXTEND``.
+* **[uses]       rte_eth_txconf,rte_eth_txmode**: ``offloads:DEV_TX_OFFLOAD_VLAN_INSERT``.
 * **[implements] eth_dev_ops**: ``vlan_offload_set``.
 * **[provides]   mbuf**: ``mbuf.ol_flags:PKT_RX_VLAN_STRIPPED``, ``mbuf.vlan_tci``.
 * **[provides]   rte_eth_dev_info**: ``rx_offload_capa,rx_queue_offload_capa:DEV_RX_OFFLOAD_VLAN_STRIP``,
-  ``tx_offload_capa:DEV_TX_OFFLOAD_VLAN_INSERT``.
+  ``tx_offload_capa,tx_queue_offload_capa:DEV_TX_OFFLOAD_VLAN_INSERT``.
 * **[related]    API**: ``rte_eth_dev_set_vlan_offload()``,
   ``rte_eth_dev_get_vlan_offload()``.
 
@@ -526,11 +529,12 @@ QinQ offload
 Supports QinQ (queue in queue) offload.
 
 * **[uses]     rte_eth_rxconf,rte_eth_rxmode**: ``offloads:DEV_RX_OFFLOAD_QINQ_STRIP``.
+* **[uses]     rte_eth_txconf,rte_eth_txmode**: ``offloads:DEV_TX_OFFLOAD_QINQ_INSERT``.
 * **[uses]     mbuf**: ``mbuf.ol_flags:PKT_TX_QINQ_PKT``.
 * **[provides] mbuf**: ``mbuf.ol_flags:PKT_RX_QINQ_STRIPPED``, ``mbuf.vlan_tci``,
    ``mbuf.vlan_tci_outer``.
 * **[provides] rte_eth_dev_info**: ``rx_offload_capa,rx_queue_offload_capa:DEV_RX_OFFLOAD_QINQ_STRIP``,
-  ``tx_offload_capa:DEV_TX_OFFLOAD_QINQ_INSERT``.
+  ``tx_offload_capa,tx_queue_offload_capa:DEV_TX_OFFLOAD_QINQ_INSERT``.
 
 
 .. _nic_features_l3_checksum_offload:
@@ -541,13 +545,14 @@ L3 checksum offload
 Supports L3 checksum offload.
 
 * **[uses]     rte_eth_rxconf,rte_eth_rxmode**: ``offloads:DEV_RX_OFFLOAD_IPV4_CKSUM``.
+* **[uses]     rte_eth_txconf,rte_eth_txmode**: ``offloads:DEV_TX_OFFLOAD_IPV4_CKSUM``.
 * **[uses]     mbuf**: ``mbuf.ol_flags:PKT_TX_IP_CKSUM``,
   ``mbuf.ol_flags:PKT_TX_IPV4`` | ``PKT_TX_IPV6``.
 * **[provides] mbuf**: ``mbuf.ol_flags:PKT_RX_IP_CKSUM_UNKNOWN`` |
   ``PKT_RX_IP_CKSUM_BAD`` | ``PKT_RX_IP_CKSUM_GOOD`` |
   ``PKT_RX_IP_CKSUM_NONE``.
 * **[provides] rte_eth_dev_info**: ``rx_offload_capa,rx_queue_offload_capa:DEV_RX_OFFLOAD_IPV4_CKSUM``,
-  ``tx_offload_capa:DEV_TX_OFFLOAD_IPV4_CKSUM``.
+  ``tx_offload_capa,tx_queue_offload_capa:DEV_TX_OFFLOAD_IPV4_CKSUM``.
 
 
 .. _nic_features_l4_checksum_offload:
@@ -558,6 +563,7 @@ L4 checksum offload
 Supports L4 checksum offload.
 
 * **[uses]     rte_eth_rxconf,rte_eth_rxmode**: ``offloads:DEV_RX_OFFLOAD_UDP_CKSUM,DEV_RX_OFFLOAD_TCP_CKSUM``.
+* **[uses]     rte_eth_txconf,rte_eth_txmode**: ``offloads:DEV_TX_OFFLOAD_UDP_CKSUM,DEV_TX_OFFLOAD_TCP_CKSUM,DEV_TX_OFFLOAD_SCTP_CKSUM``.
 * **[uses]     mbuf**: ``mbuf.ol_flags:PKT_TX_IPV4`` | ``PKT_TX_IPV6``,
   ``mbuf.ol_flags:PKT_TX_L4_NO_CKSUM`` | ``PKT_TX_TCP_CKSUM`` |
   ``PKT_TX_SCTP_CKSUM`` | ``PKT_TX_UDP_CKSUM``.
@@ -565,7 +571,7 @@ Supports L4 checksum offload.
   ``PKT_RX_L4_CKSUM_BAD`` | ``PKT_RX_L4_CKSUM_GOOD`` |
   ``PKT_RX_L4_CKSUM_NONE``.
 * **[provides] rte_eth_dev_info**: ``rx_offload_capa,rx_queue_offload_capa:DEV_RX_OFFLOAD_UDP_CKSUM,DEV_RX_OFFLOAD_TCP_CKSUM``,
-  ``tx_offload_capa:DEV_TX_OFFLOAD_UDP_CKSUM,DEV_TX_OFFLOAD_TCP_CKSUM,DEV_TX_OFFLOAD_SCTP_CKSUM``.
+  ``tx_offload_capa,tx_queue_offload_capa:DEV_TX_OFFLOAD_UDP_CKSUM,DEV_TX_OFFLOAD_TCP_CKSUM,DEV_TX_OFFLOAD_SCTP_CKSUM``.
 
 
 .. _nic_features_macsec_offload:
@@ -576,9 +582,10 @@ MACsec offload
 Supports MACsec.
 
 * **[uses]     rte_eth_rxconf,rte_eth_rxmode**: ``offloads:DEV_RX_OFFLOAD_MACSEC_STRIP``.
+* **[uses]     rte_eth_txconf,rte_eth_txmode**: ``offloads:DEV_TX_OFFLOAD_MACSEC_INSERT``.
 * **[uses]     mbuf**: ``mbuf.ol_flags:PKT_TX_MACSEC``.
 * **[provides] rte_eth_dev_info**: ``rx_offload_capa,rx_queue_offload_capa:DEV_RX_OFFLOAD_MACSEC_STRIP``,
-  ``tx_offload_capa:DEV_TX_OFFLOAD_MACSEC_INSERT``.
+  ``tx_offload_capa,tx_queue_offload_capa:DEV_TX_OFFLOAD_MACSEC_INSERT``.
 
 
 .. _nic_features_inner_l3_checksum:
@@ -589,6 +596,7 @@ Inner L3 checksum
 Supports inner packet L3 checksum.
 
 * **[uses]     rte_eth_rxconf,rte_eth_rxmode**: ``offloads:DEV_RX_OFFLOAD_OUTER_IPV4_CKSUM``.
+* **[uses]     rte_eth_txconf,rte_eth_txmode**: ``offloads:DEV_TX_OFFLOAD_OUTER_IPV4_CKSUM``.
 * **[uses]     mbuf**: ``mbuf.ol_flags:PKT_TX_IP_CKSUM``,
   ``mbuf.ol_flags:PKT_TX_IPV4`` | ``PKT_TX_IPV6``,
   ``mbuf.ol_flags:PKT_TX_OUTER_IP_CKSUM``,
@@ -596,7 +604,7 @@ Supports inner packet L3 checksum.
 * **[uses]     mbuf**: ``mbuf.outer_l2_len``, ``mbuf.outer_l3_len``.
 * **[provides] mbuf**: ``mbuf.ol_flags:PKT_RX_EIP_CKSUM_BAD``.
 * **[provides] rte_eth_dev_info**: ``rx_offload_capa,rx_queue_offload_capa:DEV_RX_OFFLOAD_OUTER_IPV4_CKSUM``,
-  ``tx_offload_capa:DEV_TX_OFFLOAD_OUTER_IPV4_CKSUM``.
+  ``tx_offload_capa,tx_queue_offload_capa:DEV_TX_OFFLOAD_OUTER_IPV4_CKSUM``.
 
 
 .. _nic_features_inner_l4_checksum:
@@ -620,6 +628,15 @@ Supports packet type parsing and returns a list of supported types.
 
 .. _nic_features_timesync:
 
+Mbuf fast free
+--------------
+
+Supports optimization for fast release of mbufs following successful Tx.
+Requires all mbufs to come from the same mempool and to have refcnt = 1.
+
+* **[uses]       rte_eth_txconf,rte_eth_txmode**: ``offloads:DEV_TX_OFFLOAD_MBUF_FAST_FREE``.
+* **[provides]   rte_eth_dev_info**: ``tx_offload_capa,tx_queue_offload_capa:DEV_TX_OFFLOAD_MBUF_FAST_FREE``.
+
 Timesync
 --------
 
diff --git a/lib/librte_ether/rte_ethdev.c b/lib/librte_ether/rte_ethdev.c
index 9b73d2377..59756dd82 100644
--- a/lib/librte_ether/rte_ethdev.c
+++ b/lib/librte_ether/rte_ethdev.c
@@ -1214,6 +1214,55 @@ rte_eth_rx_queue_setup(uint8_t port_id, uint16_t rx_queue_id,
 	return ret;
 }
 
+/**
+ * A conversion function from the txq_flags API to the offloads API.
+ */
+static void
+rte_eth_convert_txq_flags(const uint32_t txq_flags, uint64_t *tx_offloads)
+{
+	uint64_t offloads = 0;
+
+	if (!(txq_flags & ETH_TXQ_FLAGS_NOMULTSEGS))
+		offloads |= DEV_TX_OFFLOAD_MULTI_SEGS;
+	if (!(txq_flags & ETH_TXQ_FLAGS_NOVLANOFFL))
+		offloads |= DEV_TX_OFFLOAD_VLAN_INSERT;
+	if (!(txq_flags & ETH_TXQ_FLAGS_NOXSUMSCTP))
+		offloads |= DEV_TX_OFFLOAD_SCTP_CKSUM;
+	if (!(txq_flags & ETH_TXQ_FLAGS_NOXSUMUDP))
+		offloads |= DEV_TX_OFFLOAD_UDP_CKSUM;
+	if (!(txq_flags & ETH_TXQ_FLAGS_NOXSUMTCP))
+		offloads |= DEV_TX_OFFLOAD_TCP_CKSUM;
+	if ((txq_flags & ETH_TXQ_FLAGS_NOREFCOUNT) &&
+	    (txq_flags & ETH_TXQ_FLAGS_NOMULTMEMP))
+		offloads |= DEV_TX_OFFLOAD_MBUF_FAST_FREE;
+
+	*tx_offloads = offloads;
+}
+
+/**
+ * A conversion function from the offloads API back to txq_flags.
+ */
+static void
+rte_eth_convert_txq_offloads(const uint64_t tx_offloads, uint32_t *txq_flags)
+{
+	uint32_t flags = 0;
+
+	if (!(tx_offloads & DEV_TX_OFFLOAD_MULTI_SEGS))
+		flags |= ETH_TXQ_FLAGS_NOMULTSEGS;
+	if (!(tx_offloads & DEV_TX_OFFLOAD_VLAN_INSERT))
+		flags |= ETH_TXQ_FLAGS_NOVLANOFFL;
+	if (!(tx_offloads & DEV_TX_OFFLOAD_SCTP_CKSUM))
+		flags |= ETH_TXQ_FLAGS_NOXSUMSCTP;
+	if (!(tx_offloads & DEV_TX_OFFLOAD_UDP_CKSUM))
+		flags |= ETH_TXQ_FLAGS_NOXSUMUDP;
+	if (!(tx_offloads & DEV_TX_OFFLOAD_TCP_CKSUM))
+		flags |= ETH_TXQ_FLAGS_NOXSUMTCP;
+	if (tx_offloads & DEV_TX_OFFLOAD_MBUF_FAST_FREE)
+		flags |= (ETH_TXQ_FLAGS_NOREFCOUNT | ETH_TXQ_FLAGS_NOMULTMEMP);
+
+	*txq_flags = flags;
+}
+
 int
 rte_eth_tx_queue_setup(uint8_t port_id, uint16_t tx_queue_id,
 		       uint16_t nb_tx_desc, unsigned int socket_id,
@@ -1221,6 +1270,7 @@ rte_eth_tx_queue_setup(uint8_t port_id, uint16_t tx_queue_id,
 {
 	struct rte_eth_dev *dev;
 	struct rte_eth_dev_info dev_info;
+	struct rte_eth_txconf local_conf;
 	void **txq;
 
 	RTE_ETH_VALID_PORTID_OR_ERR_RET(port_id, -EINVAL);
@@ -1265,8 +1315,23 @@ rte_eth_tx_queue_setup(uint8_t port_id, uint16_t tx_queue_id,
 	if (tx_conf == NULL)
 		tx_conf = &dev_info.default_txconf;
 
+	/*
+	 * Convert between the two offloads APIs to enable PMDs to support
+	 * only one of them.
+	 */
+	local_conf = *tx_conf;
+	if (tx_conf->txq_flags & ETH_TXQ_FLAGS_IGNORE) {
+		rte_eth_convert_txq_offloads(tx_conf->offloads,
+					     &local_conf.txq_flags);
+		/* Keep the ignore flag. */
+		local_conf.txq_flags |= ETH_TXQ_FLAGS_IGNORE;
+	} else {
+		rte_eth_convert_txq_flags(tx_conf->txq_flags,
+					  &local_conf.offloads);
+	}
+
 	return (*dev->dev_ops->tx_queue_setup)(dev, tx_queue_id, nb_tx_desc,
-					       socket_id, tx_conf);
+					       socket_id, &local_conf);
 }
 
 void
diff --git a/lib/librte_ether/rte_ethdev.h b/lib/librte_ether/rte_ethdev.h
index e02d57881..78de045ed 100644
--- a/lib/librte_ether/rte_ethdev.h
+++ b/lib/librte_ether/rte_ethdev.h
@@ -692,6 +692,12 @@ struct rte_eth_vmdq_rx_conf {
  */
 struct rte_eth_txmode {
 	enum rte_eth_tx_mq_mode mq_mode; /**< TX multi-queues mode. */
+	/**
+	 * Per-port Tx offloads to be set using DEV_TX_OFFLOAD_* flags.
+	 * Only offloads set in the tx_offload_capa field of the rte_eth_dev_info
+	 * structure are allowed to be set.
+	 */
+	uint64_t offloads;
 
 	/* For i40e specifically */
 	uint16_t pvid;
@@ -734,6 +740,15 @@ struct rte_eth_rxconf {
 		(ETH_TXQ_FLAGS_NOXSUMSCTP | ETH_TXQ_FLAGS_NOXSUMUDP | \
 		 ETH_TXQ_FLAGS_NOXSUMTCP)
 /**
+ * When set, the txq_flags field should be ignored;
+ * instead, per-queue Tx offloads are set on the offloads field
+ * of the rte_eth_txconf structure.
+ * This flag is temporary until the rte_eth_txconf.txq_flags
+ * API is deprecated.
+ */
+#define ETH_TXQ_FLAGS_IGNORE	0x8000
+
+/**
  * A structure used to configure a TX ring of an Ethernet port.
  */
 struct rte_eth_txconf {
@@ -744,6 +759,12 @@ struct rte_eth_txconf {
 
 	uint32_t txq_flags; /**< Set flags for the Tx queue */
 	uint8_t tx_deferred_start; /**< Do not start queue with rte_eth_dev_start(). */
+	/**
+	 * Per-queue Tx offloads to be set using DEV_TX_OFFLOAD_* flags.
+	 * Only offloads set in the tx_queue_offload_capa or tx_offload_capa
+	 * fields of the rte_eth_dev_info structure are allowed to be set.
+	 */
+	uint64_t offloads;
 };
 
 /**
@@ -968,6 +989,13 @@ struct rte_eth_conf {
 /**< Multiple threads can invoke rte_eth_tx_burst() concurrently on the same
  * tx queue without SW lock.
  */
+#define DEV_TX_OFFLOAD_MULTI_SEGS	0x00008000
+/**< Device supports multi-segment send. */
+#define DEV_TX_OFFLOAD_MBUF_FAST_FREE	0x00010000
+/**< Device supports optimization for fast release of mbufs.
+ *   When set, the application must guarantee that, per queue, all mbufs
+ *   come from the same mempool and have refcnt = 1.
+ */
 
 struct rte_pci_device;
 
@@ -990,9 +1018,12 @@ struct rte_eth_dev_info {
 	uint16_t max_vmdq_pools; /**< Maximum number of VMDq pools. */
 	uint64_t rx_offload_capa;
 	/**< Device per port RX offload capabilities. */
-	uint32_t tx_offload_capa; /**< Device TX offload capabilities. */
+	uint64_t tx_offload_capa;
+	/**< Device per port TX offload capabilities. */
 	uint64_t rx_queue_offload_capa;
 	/**< Device per queue RX offload capabilities. */
+	uint64_t tx_queue_offload_capa;
+	/**< Device per queue TX offload capabilities. */
 	uint16_t reta_size;
 	/**< Device redirection table size, the total number of entries. */
 	uint8_t hash_key_size; /**< Hash key size in bytes */
@@ -2027,6 +2058,11 @@ int rte_eth_rx_queue_setup(uint8_t port_id, uint16_t rx_queue_id,
  *   - The *txq_flags* member contains flags to pass to the TX queue setup
  *     function to configure the behavior of the TX queue. This should be set
  *     to 0 if no special configuration is required.
+ *     This API is obsolete and will be deprecated. Applications
+ *     should set it to ETH_TXQ_FLAGS_IGNORE and use
+ *     the offloads field below.
+ *   - The *offloads* member contains Tx offloads to be enabled.
+ *     Offloads which are not set cannot be used on the datapath.
  *
  *     Note that setting *tx_free_thresh* or *tx_rs_thresh* value to 0 forces
  *     the transmit function to use default values.
-- 
2.12.0

^ permalink raw reply	[flat|nested] 134+ messages in thread
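
A minimal application-side sketch of how the new MBUF_FAST_FREE offload
replaces the two legacy txq_flags, assuming the ethdev definitions from this
series; the function name, queue index and descriptor count are illustrative
placeholders, not part of the patch:

    #include <rte_ethdev.h>

    static int
    setup_txq_with_fast_free(uint8_t port_id, uint16_t nb_txd,
                             unsigned int socket_id)
    {
        struct rte_eth_txconf txq_conf = {
            /* Opt in to the new API; the remaining txq_flags bits
             * are then ignored. */
            .txq_flags = ETH_TXQ_FLAGS_IGNORE,
            /* Valid only if the application guarantees that, per
             * queue, all mbufs come from one mempool and have
             * refcnt == 1. */
            .offloads = DEV_TX_OFFLOAD_MBUF_FAST_FREE,
        };

        /* For a PMD still on the old API, ethdev converts this back
         * to ETH_TXQ_FLAGS_NOREFCOUNT | ETH_TXQ_FLAGS_NOMULTMEMP,
         * per the conversion helpers in the patch above. */
        return rte_eth_tx_queue_setup(port_id, 0, nb_txd, socket_id,
                                      &txq_conf);
    }

The threshold fields are left at zero here, so the PMD falls back to its
own defaults.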

* [dpdk-dev] [PATCH v5 3/3] doc: add details on ethdev offloads API
  2017-09-28 18:54       ` [dpdk-dev] [PATCH v5 " Shahaf Shuler
  2017-09-28 18:54         ` [dpdk-dev] [PATCH v5 1/3] ethdev: introduce Rx queue " Shahaf Shuler
  2017-09-28 18:54         ` [dpdk-dev] [PATCH v5 2/3] ethdev: introduce Tx " Shahaf Shuler
@ 2017-09-28 18:54         ` Shahaf Shuler
  2017-10-04  8:17         ` [dpdk-dev] [PATCH v6 0/4] ethdev new " Shahaf Shuler
  3 siblings, 0 replies; 134+ messages in thread
From: Shahaf Shuler @ 2017-09-28 18:54 UTC (permalink / raw)
  To: ferruh.yigit, thomas; +Cc: arybchenko, konstantin.ananyev, jerin.jacob, dev

Add the programmer's guide details on the new offloads API introduced
by commits:

commit 3ef4f4a50d2c ("ethdev: introduce Rx queue offloads API")
commit a23fa10f3ea0 ("ethdev: introduce Tx queue offloads API")

Signed-off-by: Shahaf Shuler <shahafs@mellanox.com>
Reviewed-by: John McNamara <john.mcnamara@intel.com>
---
 doc/guides/prog_guide/poll_mode_drv.rst | 20 ++++++++++++++++++++
 1 file changed, 20 insertions(+)

diff --git a/doc/guides/prog_guide/poll_mode_drv.rst b/doc/guides/prog_guide/poll_mode_drv.rst
index 8922e39f4..423170997 100644
--- a/doc/guides/prog_guide/poll_mode_drv.rst
+++ b/doc/guides/prog_guide/poll_mode_drv.rst
@@ -310,6 +310,26 @@ exported by each PMD. The list of flags and their precise meaning is
 described in the mbuf API documentation and in the in :ref:`Mbuf Library
 <Mbuf_Library>`, section "Meta Information".
 
+Per-Port and Per-Queue Offloads
+^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
+
+In the DPDK offload API, offloads are divided into per-port and per-queue offloads.
+The different offload capabilities can be queried using ``rte_eth_dev_info_get()``.
+Each offload is thus advertised either as a per-port or as a per-queue capability.
+
+Offloads are enabled using the existing ``DEV_TX_OFFLOAD_*`` or ``DEV_RX_OFFLOAD_*`` flags.
+Per-port offload configuration is set using ``rte_eth_dev_configure``.
+Per-queue offload configuration is set using ``rte_eth_rx_queue_setup`` and ``rte_eth_tx_queue_setup``.
+To enable per-port offload, the offload should be set on both device configuration and queue setup.
+A mixed configuration, where the offload is set on only one of the two, causes queue setup to fail with an error.
+To enable per-queue offload, the offload need only be set at queue setup.
+Any offload that is not explicitly enabled is disabled.
+
+For an application to use the Tx offloads API it should set the ``ETH_TXQ_FLAGS_IGNORE`` flag in the ``txq_flags`` field located in ``rte_eth_txconf`` struct.
+In such cases it is not required to set other flags in ``txq_flags``.
+For an application to use the Rx offloads API it should set the ``ignore_offload_bitfield`` bit in the ``rte_eth_rxmode`` struct.
+In such cases it is not required to set other bitfield offloads in the ``rxmode`` struct.
+
 Poll Mode Driver API
 --------------------
 
-- 
2.12.0

^ permalink raw reply	[flat|nested] 134+ messages in thread
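
A condensed sketch of the configuration flow this guide section describes,
assuming the ethdev API from this series; the chosen offloads, single queue
and descriptor counts are illustrative placeholders:

    #include <string.h>
    #include <rte_ethdev.h>
    #include <rte_lcore.h>

    static int
    configure_port_new_offloads(uint8_t port_id, struct rte_mempool *mb_pool)
    {
        struct rte_eth_conf port_conf;
        struct rte_eth_rxconf rxq_conf;
        struct rte_eth_txconf txq_conf;
        int ret;

        memset(&port_conf, 0, sizeof(port_conf));
        /* Opt out of the obsolete rxmode bitfield. */
        port_conf.rxmode.ignore_offload_bitfield = 1;
        /* Per-port offloads, validated against rx/tx_offload_capa. */
        port_conf.rxmode.offloads = DEV_RX_OFFLOAD_CHECKSUM;
        port_conf.txmode.offloads = DEV_TX_OFFLOAD_IPV4_CKSUM;

        ret = rte_eth_dev_configure(port_id, 1, 1, &port_conf);
        if (ret != 0)
            return ret;

        /* A per-port offload must be repeated at queue setup,
         * otherwise queue setup may fail with an error. */
        memset(&rxq_conf, 0, sizeof(rxq_conf));
        rxq_conf.offloads = port_conf.rxmode.offloads;
        ret = rte_eth_rx_queue_setup(port_id, 0, 512, rte_socket_id(),
                                     &rxq_conf, mb_pool);
        if (ret != 0)
            return ret;

        memset(&txq_conf, 0, sizeof(txq_conf));
        /* ETH_TXQ_FLAGS_IGNORE selects the offloads field over the
         * legacy txq_flags bits. */
        txq_conf.txq_flags = ETH_TXQ_FLAGS_IGNORE;
        txq_conf.offloads = port_conf.txmode.offloads;
        return rte_eth_tx_queue_setup(port_id, 0, 512, rte_socket_id(),
                                      &txq_conf);
    }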

* Re: [dpdk-dev] [PATCH v5 1/3] ethdev: introduce Rx queue offloads API
  2017-09-28 18:54         ` [dpdk-dev] [PATCH v5 1/3] ethdev: introduce Rx queue " Shahaf Shuler
@ 2017-10-03  0:32           ` Ferruh Yigit
  2017-10-03  6:25             ` Shahaf Shuler
  0 siblings, 1 reply; 134+ messages in thread
From: Ferruh Yigit @ 2017-10-03  0:32 UTC (permalink / raw)
  To: Shahaf Shuler, thomas; +Cc: arybchenko, konstantin.ananyev, jerin.jacob, dev

On 9/28/2017 7:54 PM, Shahaf Shuler wrote:
> Introduce a new API to configure Rx offloads.
> 
> In the new API, offloads are divided into per-port and per-queue
> offloads. The PMD reports capability for each of them.
> Offloads are enabled using the existing DEV_RX_OFFLOAD_* flags.
> To enable per-port offload, the offload should be set on both device
> configuration and queue configuration. To enable per-queue offload, the
> offloads can be set only on queue configuration.
> 
> Applications should set the ignore_offload_bitfield bit on rxmode
> structure in order to move to the new API.
> 
> The old Rx offloads API is kept for the meanwhile, in order to enable a
> smooth transition for PMDs and application to the new API.
> 
> Signed-off-by: Shahaf Shuler <shahafs@mellanox.com>

<...>

> @@ -1102,8 +1193,18 @@ rte_eth_rx_queue_setup(uint8_t port_id, uint16_t rx_queue_id,
>  	if (rx_conf == NULL)
>  		rx_conf = &dev_info.default_rxconf;
>  
> +	local_conf = *rx_conf;
> +	if (dev->data->dev_conf.rxmode.ignore_offload_bitfield == 0) {
> +		/**
> +		 * Reflect port offloads to queue offloads in order for
> +		 * offloads to not be discarded.
> +		 */
> +		rte_eth_convert_rx_offload_bitfield(&dev->data->dev_conf.rxmode,
> +						    &local_conf.offloads);
> +	}

If an application switches to the new method, it will set "offloads", and
if the underlying PMD doesn't support the new method it will simply do
nothing with the "offloads" variable. The problem is that the application
won't know the PMD just ignored them; it may think the per-queue offloads were set.

Does it make sense to notify the application that the PMD doesn't
understand the new "offloads" flag?

> +
>  	ret = (*dev->dev_ops->rx_queue_setup)(dev, rx_queue_id, nb_rx_desc,
> -					      socket_id, rx_conf, mp);
> +					      socket_id, &local_conf, mp);
>  	if (!ret) {
>  		if (!dev->data->min_rx_buf_size ||
>  		    dev->data->min_rx_buf_size > mbp_buf_size)

<...>

>  /**
> @@ -691,6 +712,12 @@ struct rte_eth_rxconf {
>  	uint16_t rx_free_thresh; /**< Drives the freeing of RX descriptors. */
>  	uint8_t rx_drop_en; /**< Drop packets if no descriptors are available. */
>  	uint8_t rx_deferred_start; /**< Do not start queue with rte_eth_dev_start(). */
> +	/**
> +	 * Per-queue Rx offloads to be set using DEV_RX_OFFLOAD_* flags.
> +	 * Only offloads set on rx_queue_offload_capa or rx_offload_capa
> +	 * fields on rte_eth_dev_info structure are allowed to be set.
> +	 */

How will the application use the above "capa" flags to decide what to set? Since
"rx_queue_offload_capa" is a new field introduced with this patch, no PMD
has implemented it yet; does that mean no application will be able to use per
queue offloads yet?

> +	uint64_t offloads;
>  };
>  

<...>

^ permalink raw reply	[flat|nested] 134+ messages in thread

* Re: [dpdk-dev] [PATCH v5 1/3] ethdev: introduce Rx queue offloads API
  2017-10-03  0:32           ` Ferruh Yigit
@ 2017-10-03  6:25             ` Shahaf Shuler
  2017-10-03 19:46               ` Ferruh Yigit
  0 siblings, 1 reply; 134+ messages in thread
From: Shahaf Shuler @ 2017-10-03  6:25 UTC (permalink / raw)
  To: Ferruh Yigit, Thomas Monjalon
  Cc: arybchenko, konstantin.ananyev, jerin.jacob, dev

Hi Ferruh,

Tuesday, October 3, 2017 3:32 AM, Ferruh Yigit:
> On 9/28/2017 7:54 PM, Shahaf Shuler wrote:
> > Introduce a new API to configure Rx offloads.
> >
> > In the new API, offloads are divided into per-port and per-queue
> > offloads. The PMD reports capability for each of them.
> > Offloads are enabled using the existing DEV_RX_OFFLOAD_* flags.
> > To enable per-port offload, the offload should be set on both device
> > configuration and queue configuration. To enable per-queue offload,
> > the offloads can be set only on queue configuration.
> >
> > Applications should set the ignore_offload_bitfield bit on rxmode
> > structure in order to move to the new API.
> >
> > The old Rx offloads API is kept for the meanwhile, in order to enable
> > a smooth transition for PMDs and application to the new API.
> >
> > Signed-off-by: Shahaf Shuler <shahafs@mellanox.com>
> 
> <...>
> 
> > @@ -1102,8 +1193,18 @@ rte_eth_rx_queue_setup(uint8_t port_id,
> uint16_t rx_queue_id,
> >  	if (rx_conf == NULL)
> >  		rx_conf = &dev_info.default_rxconf;
> >
> > +	local_conf = *rx_conf;
> > +	if (dev->data->dev_conf.rxmode.ignore_offload_bitfield == 0) {
> > +		/**
> > +		 * Reflect port offloads to queue offloads in order for
> > +		 * offloads to not be discarded.
> > +		 */
> > +		rte_eth_convert_rx_offload_bitfield(&dev->data-
> >dev_conf.rxmode,
> > +						    &local_conf.offloads);
> > +	}
> 
> If an application switches to the new method, it will set "offloads" and if
> underlying PMD doesn't support the new method it will just do nothing with
> "offloads" variable but problem is application won't know PMD just ignored
> them, it may think per queue offloads set.
> 
> Does it make sense to notify application that PMD doesn't understand that
> new "offloads" flag?

I don't think it is needed. In the new API the per-queue Rx offload caps are reported using a new rx_queue_offload_capa field. An old PMD will not set it, therefore an application which uses the new API will see that the underlying PMD supports only per-port Rx offloads.
This should be enough for it to understand that the per-queue offloads won't be set.

> 
> > +
> >  	ret = (*dev->dev_ops->rx_queue_setup)(dev, rx_queue_id,
> nb_rx_desc,
> > -					      socket_id, rx_conf, mp);
> > +					      socket_id, &local_conf, mp);
> >  	if (!ret) {
> >  		if (!dev->data->min_rx_buf_size ||
> >  		    dev->data->min_rx_buf_size > mbp_buf_size)
> 
> <...>
> 
> >  /**
> > @@ -691,6 +712,12 @@ struct rte_eth_rxconf {
> >  	uint16_t rx_free_thresh; /**< Drives the freeing of RX descriptors.
> */
> >  	uint8_t rx_drop_en; /**< Drop packets if no descriptors are
> available. */
> >  	uint8_t rx_deferred_start; /**< Do not start queue with
> > rte_eth_dev_start(). */
> > +	/**
> > +	 * Per-queue Rx offloads to be set using DEV_RX_OFFLOAD_* flags.
> > +	 * Only offloads set on rx_queue_offload_capa or rx_offload_capa
> > +	 * fields on rte_eth_dev_info structure are allowed to be set.
> > +	 */
> 
> How application will use above "capa" flags to decide what to set? Since
> "rx_queue_offload_capa" is new field introduced with this patch no PMD
> implemented it yet, does it means no application will be able to use per
> queue offloads yet?

Yes.
Applications which use the new offloads API should query the device info and look into the rx_offload_capa and rx_queue_offload_capa fields.
According to those two caps they will decide how to set the offloads.
Per-queue Rx offloads are a new functionality introduced in this series. Of course an old PMD will not support them, and this will be reflected in rx_queue_offload_capa.


> 
> > +	uint64_t offloads;
> >  };
> >
> 
> <...>


^ permalink raw reply	[flat|nested] 134+ messages in thread
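
A small sketch of the capability check described in this reply, assuming
the rte_eth_dev_info fields from this series; the helper name and the
split policy are illustrative only:

    #include <rte_ethdev.h>

    /* Split the wanted Rx offloads into per-queue and per-port sets
     * based on the two capability fields. */
    static void
    split_rx_offloads(uint8_t port_id, uint64_t wanted,
                      uint64_t *per_port, uint64_t *per_queue)
    {
        struct rte_eth_dev_info dev_info;

        rte_eth_dev_info_get(port_id, &dev_info);

        /* An old PMD reports rx_queue_offload_capa == 0, so every
         * wanted offload falls back to per-port configuration. */
        *per_queue = wanted & dev_info.rx_queue_offload_capa;
        *per_port = wanted & dev_info.rx_offload_capa & ~*per_queue;
    }

The *per_port set then goes into rte_eth_conf.rxmode.offloads before
rte_eth_dev_configure(), and *per_queue into rte_eth_rxconf.offloads at
each rte_eth_rx_queue_setup() call.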

* Re: [dpdk-dev] [PATCH v5 1/3] ethdev: introduce Rx queue offloads API
  2017-10-03  6:25             ` Shahaf Shuler
@ 2017-10-03 19:46               ` Ferruh Yigit
  0 siblings, 0 replies; 134+ messages in thread
From: Ferruh Yigit @ 2017-10-03 19:46 UTC (permalink / raw)
  To: Shahaf Shuler, Thomas Monjalon
  Cc: arybchenko, konstantin.ananyev, jerin.jacob, dev

On 10/3/2017 7:25 AM, Shahaf Shuler wrote:
> Hi Ferruh,
> 
> Tuesday, October 3, 2017 3:32 AM, Ferruh Yigit:
>> On 9/28/2017 7:54 PM, Shahaf Shuler wrote:
>>> Introduce a new API to configure Rx offloads.
>>>
>>> In the new API, offloads are divided into per-port and per-queue
>>> offloads. The PMD reports capability for each of them.
>>> Offloads are enabled using the existing DEV_RX_OFFLOAD_* flags.
>>> To enable per-port offload, the offload should be set on both device
>>> configuration and queue configuration. To enable per-queue offload,
>>> the offloads can be set only on queue configuration.
>>>
>>> Applications should set the ignore_offload_bitfield bit on rxmode
>>> structure in order to move to the new API.
>>>
>>> The old Rx offloads API is kept for the meanwhile, in order to enable
>>> a smooth transition for PMDs and application to the new API.
>>>
>>> Signed-off-by: Shahaf Shuler <shahafs@mellanox.com>
>>
>> <...>
>>
>>> @@ -1102,8 +1193,18 @@ rte_eth_rx_queue_setup(uint8_t port_id,
>> uint16_t rx_queue_id,
>>>  	if (rx_conf == NULL)
>>>  		rx_conf = &dev_info.default_rxconf;
>>>
>>> +	local_conf = *rx_conf;
>>> +	if (dev->data->dev_conf.rxmode.ignore_offload_bitfield == 0) {
>>> +		/**
>>> +		 * Reflect port offloads to queue offloads in order for
>>> +		 * offloads to not be discarded.
>>> +		 */
>>> +		rte_eth_convert_rx_offload_bitfield(&dev->data-
>>> dev_conf.rxmode,
>>> +						    &local_conf.offloads);
>>> +	}
>>
>> If an application switches to the new method, it will set "offloads" and if
>> underlying PMD doesn't support the new method it will just do nothing with
>> "offloads" variable but problem is application won't know PMD just ignored
>> them, it may think per queue offloads set.
>>
>> Does it make sense to notify application that PMD doesn't understand that
>> new "offloads" flag?
> 
> I don't think it is needed. In the new API the per-queue Rx offloads caps are reported using a new rx_queue_offload_capa field. Old PMD will not set it, therefore application which use the new API will see that the underlying PMD is supporting only per-port Rx offloads. 
> This should be enough for it to understand that the per-queue offloads won't be set. 

OK, makes sense, so the application should check the queue-based offload
capabilities the PMD returned and decide whether to use port-based or
queue-based offloads.

> 
>>
>>> +
>>>  	ret = (*dev->dev_ops->rx_queue_setup)(dev, rx_queue_id,
>> nb_rx_desc,
>>> -					      socket_id, rx_conf, mp);
>>> +					      socket_id, &local_conf, mp);
>>>  	if (!ret) {
>>>  		if (!dev->data->min_rx_buf_size ||
>>>  		    dev->data->min_rx_buf_size > mbp_buf_size)
>>

<...>

^ permalink raw reply	[flat|nested] 134+ messages in thread

* Re: [dpdk-dev] [PATCH v5 2/3] ethdev: introduce Tx queue offloads API
  2017-09-28 18:54         ` [dpdk-dev] [PATCH v5 2/3] ethdev: introduce Tx " Shahaf Shuler
@ 2017-10-03 19:50           ` Ferruh Yigit
  2017-10-04  8:06             ` Shahaf Shuler
  0 siblings, 1 reply; 134+ messages in thread
From: Ferruh Yigit @ 2017-10-03 19:50 UTC (permalink / raw)
  To: Shahaf Shuler, thomas; +Cc: arybchenko, konstantin.ananyev, jerin.jacob, dev

On 9/28/2017 7:54 PM, Shahaf Shuler wrote:
> Introduce a new API to configure Tx offloads.
> 
> In the new API, offloads are divided into per-port and per-queue
> offloads. The PMD reports capability for each of them.
> Offloads are enabled using the existing DEV_TX_OFFLOAD_* flags.
> To enable per-port offload, the offload should be set on both device
> configuration and queue configuration. To enable per-queue offload, the
> offloads can be set only on queue configuration.
> 
> In addition the Tx offloads will be disabled by default and be
> enabled per application needs. This will much simplify PMD management of
> the different offloads.
> 
> Applications should set the ETH_TXQ_FLAGS_IGNORE flag on txq_flags
> field in order to move to the new API.
> 
> The old Tx offloads API is kept for the meanwhile, in order to enable a
> smooth transition for PMDs and application to the new API.
> 
> Signed-off-by: Shahaf Shuler <shahafs@mellanox.com>

<...>

> @@ -620,6 +628,15 @@ Supports packet type parsing and returns a list of supported types.
>  
>  .. _nic_features_timesync:
>  
> +Mbuf fast free
> +--------------

I think this is not one of the currently tracked features. Is this
coming/planned with new patches?

I suggest removing it from this patch, and if required adding it with another
patch that updates both default.ini and this document.

> +
> +Supports optimization for fast release of mbufs following successful Tx.
> +Requires all mbufs to come from the same mempool and has refcnt = 1.
> +
> +* **[uses]       rte_eth_txconf,rte_eth_txmode**: ``offloads:DEV_TX_OFFLOAD_MBUF_FAST_FREE``.
> +* **[provides]   rte_eth_dev_info**: ``tx_offload_capa,tx_queue_offload_capa:DEV_TX_OFFLOAD_MBUF_FAST_FREE``.
> +
>  Timesync
>  --------
>  
> diff --git a/lib/librte_ether/rte_ethdev.c b/lib/librte_ether/rte_ethdev.c
> index 9b73d2377..59756dd82 100644
> --- a/lib/librte_ether/rte_ethdev.c
> +++ b/lib/librte_ether/rte_ethdev.c

<...>

^ permalink raw reply	[flat|nested] 134+ messages in thread

* Re: [dpdk-dev] [PATCH v5 2/3] ethdev: introduce Tx queue offloads API
  2017-10-03 19:50           ` Ferruh Yigit
@ 2017-10-04  8:06             ` Shahaf Shuler
  0 siblings, 0 replies; 134+ messages in thread
From: Shahaf Shuler @ 2017-10-04  8:06 UTC (permalink / raw)
  To: Ferruh Yigit, Thomas Monjalon
  Cc: arybchenko, konstantin.ananyev, jerin.jacob, dev

Tuesday, October 3, 2017 10:50 PM, Ferruh Yigit:
> <...>
> 
> > @@ -620,6 +628,15 @@ Supports packet type parsing and returns a list of
> supported types.
> >
> >  .. _nic_features_timesync:
> >
> > +Mbuf fast free
> > +--------------
> 
> I think this is not one of the current tracked features. Is this coming/planed
> with new patches?

This is not a new feature, rather a re-wording and merging of the flags:
ETH_TXQ_FLAGS_NOREFCOUNT
ETH_TXQ_FLAGS_NOMULTMEMP

> 
> I suggest removing from this patch, and if required add with another patch
> that both updates default.ini and this documented.

I agree it makes more sense to have this "feature" in a separate patch.

> 
> > +
> > +Supports optimization for fast release of mbufs following successful Tx.
> > +Requires all mbufs to come from the same mempool and has refcnt = 1.
> > +
> > +* **[uses]       rte_eth_txconf,rte_eth_txmode**:
> ``offloads:DEV_TX_OFFLOAD_MBUF_FAST_FREE``.
> > +* **[provides]   rte_eth_dev_info**:
> ``tx_offload_capa,tx_queue_offload_capa:DEV_TX_OFFLOAD_MBUF_FAST_
> FREE``.
> > +
> >  Timesync
> >  --------
> >
> > diff --git a/lib/librte_ether/rte_ethdev.c
> > b/lib/librte_ether/rte_ethdev.c index 9b73d2377..59756dd82 100644
> > --- a/lib/librte_ether/rte_ethdev.c
> > +++ b/lib/librte_ether/rte_ethdev.c
> 
> <...>


^ permalink raw reply	[flat|nested] 134+ messages in thread

* [dpdk-dev] [PATCH v6 0/4] ethdev new offloads API
  2017-09-28 18:54       ` [dpdk-dev] [PATCH v5 " Shahaf Shuler
                           ` (2 preceding siblings ...)
  2017-09-28 18:54         ` [dpdk-dev] [PATCH v5 3/3] doc: add details on ethdev " Shahaf Shuler
@ 2017-10-04  8:17         ` Shahaf Shuler
  2017-10-04  8:17           ` [dpdk-dev] [PATCH v6 1/4] ethdev: introduce Rx queue " Shahaf Shuler
                             ` (4 more replies)
  3 siblings, 5 replies; 134+ messages in thread
From: Shahaf Shuler @ 2017-10-04  8:17 UTC (permalink / raw)
  To: konstantin.ananyev, thomas, arybchenko, jerin.jacob, ferruh.yigit; +Cc: dev

Tx offloads configuration is per queue. Tx offloads are enabled by default, 
and can be disabled using ETH_TXQ_FLAGS_NO* flags. 
This behaviour is not consistent with the Rx side where the Rx offloads
configuration is per port. Rx offloads are disabled by default and enabled 
according to bit field in rte_eth_rxmode structure.

Moreover, considering more Tx and Rx offloads will be added 
over time, the cost of managing them all inside the PMD will be tremendous,
as the PMD will need to check for matches against the entire offload set 
for each mbuf it handles.
In addition, with the current approach each Rx offload added breaks
ABI compatibility, as it requires adding entries to existing bit-fields.
 
The series addresses the above issues by defining a new offloads API.
In the new API, offloads are divided into per-port and per-queue offloads,
with a corresponding capability for each.
The offloads are disabled by default. Each offload can be enabled or
disabled using the existing DEV_TX_OFFLOADS_* or DEV_RX_OFFLOADS_* flags.
Such an API will make it easy to add or remove offloads, without breaking
ABI compatibility.

In order to provide a smooth transition between the APIs the following actions
were taken:
*  The old offloads API is kept for the meanwhile.
*  Helper functions which copy from the old to the new API were added to
   ethdev, enabling a PMD to support only one of the APIs.
*  Helper functions which copy from the new to the old API were also added,
   to enable applications to use the new API with a PMD which still supports
   the old one.

Per the discussion on the RFC of this series [1], the agreed integration
plan is to do the transition in two phases:
* ethdev API will move on 17.11.
* Apps and examples will move on 18.02.

This is to give PMD maintainers sufficient time to adopt the new API.

[1]
http://dpdk.org/ml/archives/dev/2017-August/072643.html

on v6:
 - Move mbuf fast free Tx offload to a separate patch.

on v5:
 - Fix documentation.
 - Fix comments on port offloads configuration.

on v4:
 - Added another patch for documentation.
 - Fixed ETH_TXQ_FLAGS_IGNORE flag override.
 - Clarify the description of DEV_TX_OFFLOAD_MBUF_FAST_FREE offload.

on v3:
 - Introduce the DEV_TX_OFFLOAD_MBUF_FAST_FREE to act as an equivalent
   for the no refcnt and single mempool flags.
 - Fix features documentation.
 - Fix comment style.

on v2:
 - Taking a new approach of dividing offloads into per-queue and per-port ones.
 - Postpone the Tx/Rx public struct renaming to 18.02
 - Squash the helper functions into the Rx/Tx offloads intro patches.

Shahaf Shuler (4):
  ethdev: introduce Rx queue offloads API
  ethdev: introduce Tx queue offloads API
  ethdev: add mbuf fast free Tx offload
  doc: add details on ethdev offloads API

 doc/guides/nics/features.rst            |  66 +++++---
 doc/guides/nics/features/default.ini    |   1 +
 doc/guides/prog_guide/poll_mode_drv.rst |  20 +++
 lib/librte_ether/rte_ethdev.c           | 223 +++++++++++++++++++++++++--
 lib/librte_ether/rte_ethdev.h           |  89 ++++++++++-
 5 files changed, 359 insertions(+), 40 deletions(-)

Series-reviewed-by: Andrew Rybchenko <arybchenko@solarflare.com>

-- 
2.12.0

^ permalink raw reply	[flat|nested] 134+ messages in thread

* [dpdk-dev] [PATCH v6 1/4] ethdev: introduce Rx queue offloads API
  2017-10-04  8:17         ` [dpdk-dev] [PATCH v6 0/4] ethdev new " Shahaf Shuler
@ 2017-10-04  8:17           ` Shahaf Shuler
  2017-10-04  8:17           ` [dpdk-dev] [PATCH v6 2/4] ethdev: introduce Tx " Shahaf Shuler
                             ` (3 subsequent siblings)
  4 siblings, 0 replies; 134+ messages in thread
From: Shahaf Shuler @ 2017-10-04  8:17 UTC (permalink / raw)
  To: konstantin.ananyev, thomas, arybchenko, jerin.jacob, ferruh.yigit; +Cc: dev

Introduce a new API to configure Rx offloads.

In the new API, offloads are divided into per-port and per-queue
offloads. The PMD reports capability for each of them.
Offloads are enabled using the existing DEV_RX_OFFLOAD_* flags.
To enable per-port offload, the offload should be set on both device
configuration and queue configuration. To enable per-queue offload, the
offloads can be set only on queue configuration.

Applications should set the ignore_offload_bitfield bit on rxmode
structure in order to move to the new API.

The old Rx offloads API is kept for the time being, in order to enable a
smooth transition for PMDs and applications to the new API.

Signed-off-by: Shahaf Shuler <shahafs@mellanox.com>
---
 doc/guides/nics/features.rst  |  33 ++++----
 lib/librte_ether/rte_ethdev.c | 156 +++++++++++++++++++++++++++++++++----
 lib/librte_ether/rte_ethdev.h |  51 +++++++++++-
 3 files changed, 210 insertions(+), 30 deletions(-)

diff --git a/doc/guides/nics/features.rst b/doc/guides/nics/features.rst
index 37ffbc68c..4e68144ef 100644
--- a/doc/guides/nics/features.rst
+++ b/doc/guides/nics/features.rst
@@ -179,7 +179,7 @@ Jumbo frame
 
 Supports Rx jumbo frames.
 
-* **[uses]    user config**: ``dev_conf.rxmode.jumbo_frame``,
+* **[uses]    rte_eth_rxconf,rte_eth_rxmode**: ``offloads:DEV_RX_OFFLOAD_JUMBO_FRAME``,
   ``dev_conf.rxmode.max_rx_pkt_len``.
 * **[related] rte_eth_dev_info**: ``max_rx_pktlen``.
 * **[related] API**: ``rte_eth_dev_set_mtu()``.
@@ -192,7 +192,7 @@ Scattered Rx
 
 Supports receiving segmented mbufs.
 
-* **[uses]       user config**: ``dev_conf.rxmode.enable_scatter``.
+* **[uses]       rte_eth_rxconf,rte_eth_rxmode**: ``offloads:DEV_RX_OFFLOAD_SCATTER``.
 * **[implements] datapath**: ``Scattered Rx function``.
 * **[implements] rte_eth_dev_data**: ``scattered_rx``.
 * **[provides]   eth_dev_ops**: ``rxq_info_get:scattered_rx``.
@@ -206,11 +206,11 @@ LRO
 
 Supports Large Receive Offload.
 
-* **[uses]       user config**: ``dev_conf.rxmode.enable_lro``.
+* **[uses]       rte_eth_rxconf,rte_eth_rxmode**: ``offloads:DEV_RX_OFFLOAD_TCP_LRO``.
 * **[implements] datapath**: ``LRO functionality``.
 * **[implements] rte_eth_dev_data**: ``lro``.
 * **[provides]   mbuf**: ``mbuf.ol_flags:PKT_RX_LRO``, ``mbuf.tso_segsz``.
-* **[provides]   rte_eth_dev_info**: ``rx_offload_capa:DEV_RX_OFFLOAD_TCP_LRO``.
+* **[provides]   rte_eth_dev_info**: ``rx_offload_capa,rx_queue_offload_capa:DEV_RX_OFFLOAD_TCP_LRO``.
 
 
 .. _nic_features_tso:
@@ -363,7 +363,7 @@ VLAN filter
 
 Supports filtering of a VLAN Tag identifier.
 
-* **[uses]       user config**: ``dev_conf.rxmode.hw_vlan_filter``.
+* **[uses]       rte_eth_rxconf,rte_eth_rxmode**: ``offloads:DEV_RX_OFFLOAD_VLAN_FILTER``.
 * **[implements] eth_dev_ops**: ``vlan_filter_set``.
 * **[related]    API**: ``rte_eth_dev_vlan_filter()``.
 
@@ -499,7 +499,7 @@ CRC offload
 
 Supports CRC stripping by hardware.
 
-* **[uses] user config**: ``dev_conf.rxmode.hw_strip_crc``.
+* **[uses] rte_eth_rxconf,rte_eth_rxmode**: ``offloads:DEV_RX_OFFLOAD_CRC_STRIP``.
 
 
 .. _nic_features_vlan_offload:
@@ -509,11 +509,10 @@ VLAN offload
 
 Supports VLAN offload to hardware.
 
-* **[uses]       user config**: ``dev_conf.rxmode.hw_vlan_strip``,
-  ``dev_conf.rxmode.hw_vlan_filter``, ``dev_conf.rxmode.hw_vlan_extend``.
+* **[uses]       rte_eth_rxconf,rte_eth_rxmode**: ``offloads:DEV_RX_OFFLOAD_VLAN_STRIP,DEV_RX_OFFLOAD_VLAN_FILTER,DEV_RX_OFFLOAD_VLAN_EXTEND``.
 * **[implements] eth_dev_ops**: ``vlan_offload_set``.
 * **[provides]   mbuf**: ``mbuf.ol_flags:PKT_RX_VLAN_STRIPPED``, ``mbuf.vlan_tci``.
-* **[provides]   rte_eth_dev_info**: ``rx_offload_capa:DEV_RX_OFFLOAD_VLAN_STRIP``,
+* **[provides]   rte_eth_dev_info**: ``rx_offload_capa,rx_queue_offload_capa:DEV_RX_OFFLOAD_VLAN_STRIP``,
   ``tx_offload_capa:DEV_TX_OFFLOAD_VLAN_INSERT``.
 * **[related]    API**: ``rte_eth_dev_set_vlan_offload()``,
   ``rte_eth_dev_get_vlan_offload()``.
@@ -526,10 +525,11 @@ QinQ offload
 
 Supports QinQ (queue in queue) offload.
 
+* **[uses]     rte_eth_rxconf,rte_eth_rxmode**: ``offloads:DEV_RX_OFFLOAD_QINQ_STRIP``.
 * **[uses]     mbuf**: ``mbuf.ol_flags:PKT_TX_QINQ_PKT``.
 * **[provides] mbuf**: ``mbuf.ol_flags:PKT_RX_QINQ_STRIPPED``, ``mbuf.vlan_tci``,
    ``mbuf.vlan_tci_outer``.
-* **[provides] rte_eth_dev_info**: ``rx_offload_capa:DEV_RX_OFFLOAD_QINQ_STRIP``,
+* **[provides] rte_eth_dev_info**: ``rx_offload_capa,rx_queue_offload_capa:DEV_RX_OFFLOAD_QINQ_STRIP``,
   ``tx_offload_capa:DEV_TX_OFFLOAD_QINQ_INSERT``.
 
 
@@ -540,13 +540,13 @@ L3 checksum offload
 
 Supports L3 checksum offload.
 
-* **[uses]     user config**: ``dev_conf.rxmode.hw_ip_checksum``.
+* **[uses]     rte_eth_rxconf,rte_eth_rxmode**: ``offloads:DEV_RX_OFFLOAD_IPV4_CKSUM``.
 * **[uses]     mbuf**: ``mbuf.ol_flags:PKT_TX_IP_CKSUM``,
   ``mbuf.ol_flags:PKT_TX_IPV4`` | ``PKT_TX_IPV6``.
 * **[provides] mbuf**: ``mbuf.ol_flags:PKT_RX_IP_CKSUM_UNKNOWN`` |
   ``PKT_RX_IP_CKSUM_BAD`` | ``PKT_RX_IP_CKSUM_GOOD`` |
   ``PKT_RX_IP_CKSUM_NONE``.
-* **[provides] rte_eth_dev_info**: ``rx_offload_capa:DEV_RX_OFFLOAD_IPV4_CKSUM``,
+* **[provides] rte_eth_dev_info**: ``rx_offload_capa,rx_queue_offload_capa:DEV_RX_OFFLOAD_IPV4_CKSUM``,
   ``tx_offload_capa:DEV_TX_OFFLOAD_IPV4_CKSUM``.
 
 
@@ -557,13 +557,14 @@ L4 checksum offload
 
 Supports L4 checksum offload.
 
+* **[uses]     rte_eth_rxconf,rte_eth_rxmode**: ``offloads:DEV_RX_OFFLOAD_UDP_CKSUM,DEV_RX_OFFLOAD_TCP_CKSUM``.
 * **[uses]     mbuf**: ``mbuf.ol_flags:PKT_TX_IPV4`` | ``PKT_TX_IPV6``,
   ``mbuf.ol_flags:PKT_TX_L4_NO_CKSUM`` | ``PKT_TX_TCP_CKSUM`` |
   ``PKT_TX_SCTP_CKSUM`` | ``PKT_TX_UDP_CKSUM``.
 * **[provides] mbuf**: ``mbuf.ol_flags:PKT_RX_L4_CKSUM_UNKNOWN`` |
   ``PKT_RX_L4_CKSUM_BAD`` | ``PKT_RX_L4_CKSUM_GOOD`` |
   ``PKT_RX_L4_CKSUM_NONE``.
-* **[provides] rte_eth_dev_info**: ``rx_offload_capa:DEV_RX_OFFLOAD_UDP_CKSUM,DEV_RX_OFFLOAD_TCP_CKSUM``,
+* **[provides] rte_eth_dev_info**: ``rx_offload_capa,rx_queue_offload_capa:DEV_RX_OFFLOAD_UDP_CKSUM,DEV_RX_OFFLOAD_TCP_CKSUM``,
   ``tx_offload_capa:DEV_TX_OFFLOAD_UDP_CKSUM,DEV_TX_OFFLOAD_TCP_CKSUM,DEV_TX_OFFLOAD_SCTP_CKSUM``.
 
 
@@ -574,8 +575,9 @@ MACsec offload
 
 Supports MACsec.
 
+* **[uses]     rte_eth_rxconf,rte_eth_rxmode**: ``offloads:DEV_RX_OFFLOAD_MACSEC_STRIP``.
 * **[uses]     mbuf**: ``mbuf.ol_flags:PKT_TX_MACSEC``.
-* **[provides] rte_eth_dev_info**: ``rx_offload_capa:DEV_RX_OFFLOAD_MACSEC_STRIP``,
+* **[provides] rte_eth_dev_info**: ``rx_offload_capa,rx_queue_offload_capa:DEV_RX_OFFLOAD_MACSEC_STRIP``,
   ``tx_offload_capa:DEV_TX_OFFLOAD_MACSEC_INSERT``.
 
 
@@ -586,13 +588,14 @@ Inner L3 checksum
 
 Supports inner packet L3 checksum.
 
+* **[uses]     rte_eth_rxconf,rte_eth_rxmode**: ``offloads:DEV_RX_OFFLOAD_OUTER_IPV4_CKSUM``.
 * **[uses]     mbuf**: ``mbuf.ol_flags:PKT_TX_IP_CKSUM``,
   ``mbuf.ol_flags:PKT_TX_IPV4`` | ``PKT_TX_IPV6``,
   ``mbuf.ol_flags:PKT_TX_OUTER_IP_CKSUM``,
   ``mbuf.ol_flags:PKT_TX_OUTER_IPV4`` | ``PKT_TX_OUTER_IPV6``.
 * **[uses]     mbuf**: ``mbuf.outer_l2_len``, ``mbuf.outer_l3_len``.
 * **[provides] mbuf**: ``mbuf.ol_flags:PKT_RX_EIP_CKSUM_BAD``.
-* **[provides] rte_eth_dev_info**: ``rx_offload_capa:DEV_RX_OFFLOAD_OUTER_IPV4_CKSUM``,
+* **[provides] rte_eth_dev_info**: ``rx_offload_capa,rx_queue_offload_capa:DEV_RX_OFFLOAD_OUTER_IPV4_CKSUM``,
   ``tx_offload_capa:DEV_TX_OFFLOAD_OUTER_IPV4_CKSUM``.
 
 
diff --git a/lib/librte_ether/rte_ethdev.c b/lib/librte_ether/rte_ethdev.c
index 1849a3bdd..9b73d2377 100644
--- a/lib/librte_ether/rte_ethdev.c
+++ b/lib/librte_ether/rte_ethdev.c
@@ -688,12 +688,90 @@ rte_eth_speed_bitflag(uint32_t speed, int duplex)
 	}
 }
 
+/**
+ * A conversion function from the rxmode bitfield API to the offloads API.
+ */
+static void
+rte_eth_convert_rx_offload_bitfield(const struct rte_eth_rxmode *rxmode,
+				    uint64_t *rx_offloads)
+{
+	uint64_t offloads = 0;
+
+	if (rxmode->header_split == 1)
+		offloads |= DEV_RX_OFFLOAD_HEADER_SPLIT;
+	if (rxmode->hw_ip_checksum == 1)
+		offloads |= DEV_RX_OFFLOAD_CHECKSUM;
+	if (rxmode->hw_vlan_filter == 1)
+		offloads |= DEV_RX_OFFLOAD_VLAN_FILTER;
+	if (rxmode->hw_vlan_strip == 1)
+		offloads |= DEV_RX_OFFLOAD_VLAN_STRIP;
+	if (rxmode->hw_vlan_extend == 1)
+		offloads |= DEV_RX_OFFLOAD_VLAN_EXTEND;
+	if (rxmode->jumbo_frame == 1)
+		offloads |= DEV_RX_OFFLOAD_JUMBO_FRAME;
+	if (rxmode->hw_strip_crc == 1)
+		offloads |= DEV_RX_OFFLOAD_CRC_STRIP;
+	if (rxmode->enable_scatter == 1)
+		offloads |= DEV_RX_OFFLOAD_SCATTER;
+	if (rxmode->enable_lro == 1)
+		offloads |= DEV_RX_OFFLOAD_TCP_LRO;
+
+	*rx_offloads = offloads;
+}
+
+/**
+ * A conversion function from the offloads API back to the rxmode bitfield.
+ */
+static void
+rte_eth_convert_rx_offloads(const uint64_t rx_offloads,
+			    struct rte_eth_rxmode *rxmode)
+{
+
+	if (rx_offloads & DEV_RX_OFFLOAD_HEADER_SPLIT)
+		rxmode->header_split = 1;
+	else
+		rxmode->header_split = 0;
+	if (rx_offloads & DEV_RX_OFFLOAD_CHECKSUM)
+		rxmode->hw_ip_checksum = 1;
+	else
+		rxmode->hw_ip_checksum = 0;
+	if (rx_offloads & DEV_RX_OFFLOAD_VLAN_FILTER)
+		rxmode->hw_vlan_filter = 1;
+	else
+		rxmode->hw_vlan_filter = 0;
+	if (rx_offloads & DEV_RX_OFFLOAD_VLAN_STRIP)
+		rxmode->hw_vlan_strip = 1;
+	else
+		rxmode->hw_vlan_strip = 0;
+	if (rx_offloads & DEV_RX_OFFLOAD_VLAN_EXTEND)
+		rxmode->hw_vlan_extend = 1;
+	else
+		rxmode->hw_vlan_extend = 0;
+	if (rx_offloads & DEV_RX_OFFLOAD_JUMBO_FRAME)
+		rxmode->jumbo_frame = 1;
+	else
+		rxmode->jumbo_frame = 0;
+	if (rx_offloads & DEV_RX_OFFLOAD_CRC_STRIP)
+		rxmode->hw_strip_crc = 1;
+	else
+		rxmode->hw_strip_crc = 0;
+	if (rx_offloads & DEV_RX_OFFLOAD_SCATTER)
+		rxmode->enable_scatter = 1;
+	else
+		rxmode->enable_scatter = 0;
+	if (rx_offloads & DEV_RX_OFFLOAD_TCP_LRO)
+		rxmode->enable_lro = 1;
+	else
+		rxmode->enable_lro = 0;
+}
+
 int
 rte_eth_dev_configure(uint8_t port_id, uint16_t nb_rx_q, uint16_t nb_tx_q,
 		      const struct rte_eth_conf *dev_conf)
 {
 	struct rte_eth_dev *dev;
 	struct rte_eth_dev_info dev_info;
+	struct rte_eth_conf local_conf = *dev_conf;
 	int diag;
 
 	RTE_ETH_VALID_PORTID_OR_ERR_RET(port_id, -EINVAL);
@@ -723,8 +801,20 @@ rte_eth_dev_configure(uint8_t port_id, uint16_t nb_rx_q, uint16_t nb_tx_q,
 		return -EBUSY;
 	}
 
+	/*
+	 * Convert between the two offloads APIs to enable PMDs to support
+	 * only one of them.
+	 */
+	if (dev_conf->rxmode.ignore_offload_bitfield == 0) {
+		rte_eth_convert_rx_offload_bitfield(
+				&dev_conf->rxmode, &local_conf.rxmode.offloads);
+	} else {
+		rte_eth_convert_rx_offloads(dev_conf->rxmode.offloads,
+					    &local_conf.rxmode);
+	}
+
 	/* Copy the dev_conf parameter into the dev structure */
-	memcpy(&dev->data->dev_conf, dev_conf, sizeof(dev->data->dev_conf));
+	memcpy(&dev->data->dev_conf, &local_conf, sizeof(dev->data->dev_conf));
 
 	/*
 	 * Check that the numbers of RX and TX queues are not greater
@@ -768,7 +858,7 @@ rte_eth_dev_configure(uint8_t port_id, uint16_t nb_rx_q, uint16_t nb_tx_q,
 	 * If jumbo frames are enabled, check that the maximum RX packet
 	 * length is supported by the configured device.
 	 */
-	if (dev_conf->rxmode.jumbo_frame == 1) {
+	if (local_conf.rxmode.offloads & DEV_RX_OFFLOAD_JUMBO_FRAME) {
 		if (dev_conf->rxmode.max_rx_pkt_len >
 		    dev_info.max_rx_pktlen) {
 			RTE_PMD_DEBUG_TRACE("ethdev port_id=%d max_rx_pkt_len %u"
@@ -1032,6 +1122,7 @@ rte_eth_rx_queue_setup(uint8_t port_id, uint16_t rx_queue_id,
 	uint32_t mbp_buf_size;
 	struct rte_eth_dev *dev;
 	struct rte_eth_dev_info dev_info;
+	struct rte_eth_rxconf local_conf;
 	void **rxq;
 
 	RTE_ETH_VALID_PORTID_OR_ERR_RET(port_id, -EINVAL);
@@ -1102,8 +1193,18 @@ rte_eth_rx_queue_setup(uint8_t port_id, uint16_t rx_queue_id,
 	if (rx_conf == NULL)
 		rx_conf = &dev_info.default_rxconf;
 
+	local_conf = *rx_conf;
+	if (dev->data->dev_conf.rxmode.ignore_offload_bitfield == 0) {
+		/**
+		 * Reflect port offloads to queue offloads in order for
+		 * offloads to not be discarded.
+		 */
+		rte_eth_convert_rx_offload_bitfield(&dev->data->dev_conf.rxmode,
+						    &local_conf.offloads);
+	}
+
 	ret = (*dev->dev_ops->rx_queue_setup)(dev, rx_queue_id, nb_rx_desc,
-					      socket_id, rx_conf, mp);
+					      socket_id, &local_conf, mp);
 	if (!ret) {
 		if (!dev->data->min_rx_buf_size ||
 		    dev->data->min_rx_buf_size > mbp_buf_size)
@@ -2007,7 +2108,8 @@ rte_eth_dev_vlan_filter(uint8_t port_id, uint16_t vlan_id, int on)
 
 	RTE_ETH_VALID_PORTID_OR_ERR_RET(port_id, -ENODEV);
 	dev = &rte_eth_devices[port_id];
-	if (!(dev->data->dev_conf.rxmode.hw_vlan_filter)) {
+	if (!(dev->data->dev_conf.rxmode.offloads &
+	      DEV_RX_OFFLOAD_VLAN_FILTER)) {
 		RTE_PMD_DEBUG_TRACE("port %d: vlan-filtering disabled\n", port_id);
 		return -ENOSYS;
 	}
@@ -2083,23 +2185,41 @@ rte_eth_dev_set_vlan_offload(uint8_t port_id, int offload_mask)
 
 	/*check which option changed by application*/
 	cur = !!(offload_mask & ETH_VLAN_STRIP_OFFLOAD);
-	org = !!(dev->data->dev_conf.rxmode.hw_vlan_strip);
+	org = !!(dev->data->dev_conf.rxmode.offloads &
+		 DEV_RX_OFFLOAD_VLAN_STRIP);
 	if (cur != org) {
-		dev->data->dev_conf.rxmode.hw_vlan_strip = (uint8_t)cur;
+		if (cur)
+			dev->data->dev_conf.rxmode.offloads |=
+				DEV_RX_OFFLOAD_VLAN_STRIP;
+		else
+			dev->data->dev_conf.rxmode.offloads &=
+				~DEV_RX_OFFLOAD_VLAN_STRIP;
 		mask |= ETH_VLAN_STRIP_MASK;
 	}
 
 	cur = !!(offload_mask & ETH_VLAN_FILTER_OFFLOAD);
-	org = !!(dev->data->dev_conf.rxmode.hw_vlan_filter);
+	org = !!(dev->data->dev_conf.rxmode.offloads &
+		 DEV_RX_OFFLOAD_VLAN_FILTER);
 	if (cur != org) {
-		dev->data->dev_conf.rxmode.hw_vlan_filter = (uint8_t)cur;
+		if (cur)
+			dev->data->dev_conf.rxmode.offloads |=
+				DEV_RX_OFFLOAD_VLAN_FILTER;
+		else
+			dev->data->dev_conf.rxmode.offloads &=
+				~DEV_RX_OFFLOAD_VLAN_FILTER;
 		mask |= ETH_VLAN_FILTER_MASK;
 	}
 
 	cur = !!(offload_mask & ETH_VLAN_EXTEND_OFFLOAD);
-	org = !!(dev->data->dev_conf.rxmode.hw_vlan_extend);
+	org = !!(dev->data->dev_conf.rxmode.offloads &
+		 DEV_RX_OFFLOAD_VLAN_EXTEND);
 	if (cur != org) {
-		dev->data->dev_conf.rxmode.hw_vlan_extend = (uint8_t)cur;
+		if (cur)
+			dev->data->dev_conf.rxmode.offloads |=
+				DEV_RX_OFFLOAD_VLAN_EXTEND;
+		else
+			dev->data->dev_conf.rxmode.offloads &=
+				~DEV_RX_OFFLOAD_VLAN_EXTEND;
 		mask |= ETH_VLAN_EXTEND_MASK;
 	}
 
@@ -2108,6 +2228,13 @@ rte_eth_dev_set_vlan_offload(uint8_t port_id, int offload_mask)
 		return ret;
 
 	RTE_FUNC_PTR_OR_ERR_RET(*dev->dev_ops->vlan_offload_set, -ENOTSUP);
+
+	/*
+	 * Convert to the offload bitfield API just in case the underlying PMD
+	 * still supports it.
+	 */
+	rte_eth_convert_rx_offloads(dev->data->dev_conf.rxmode.offloads,
+				    &dev->data->dev_conf.rxmode);
 	(*dev->dev_ops->vlan_offload_set)(dev, mask);
 
 	return ret;
@@ -2122,13 +2249,16 @@ rte_eth_dev_get_vlan_offload(uint8_t port_id)
 	RTE_ETH_VALID_PORTID_OR_ERR_RET(port_id, -ENODEV);
 	dev = &rte_eth_devices[port_id];
 
-	if (dev->data->dev_conf.rxmode.hw_vlan_strip)
+	if (dev->data->dev_conf.rxmode.offloads &
+	    DEV_RX_OFFLOAD_VLAN_STRIP)
 		ret |= ETH_VLAN_STRIP_OFFLOAD;
 
-	if (dev->data->dev_conf.rxmode.hw_vlan_filter)
+	if (dev->data->dev_conf.rxmode.offloads &
+	    DEV_RX_OFFLOAD_VLAN_FILTER)
 		ret |= ETH_VLAN_FILTER_OFFLOAD;
 
-	if (dev->data->dev_conf.rxmode.hw_vlan_extend)
+	if (dev->data->dev_conf.rxmode.offloads &
+	    DEV_RX_OFFLOAD_VLAN_EXTEND)
 		ret |= ETH_VLAN_EXTEND_OFFLOAD;
 
 	return ret;
diff --git a/lib/librte_ether/rte_ethdev.h b/lib/librte_ether/rte_ethdev.h
index 99cdd54d4..e02d57881 100644
--- a/lib/librte_ether/rte_ethdev.h
+++ b/lib/librte_ether/rte_ethdev.h
@@ -348,7 +348,18 @@ struct rte_eth_rxmode {
 	enum rte_eth_rx_mq_mode mq_mode;
 	uint32_t max_rx_pkt_len;  /**< Only used if jumbo_frame enabled. */
 	uint16_t split_hdr_size;  /**< hdr buf size (header_split enabled).*/
+	/**
+	 * Per-port Rx offloads to be set using DEV_RX_OFFLOAD_* flags.
+	 * Only offloads set in the rx_offload_capa field of the rte_eth_dev_info
+	 * structure are allowed to be set.
+	 */
+	uint64_t offloads;
 	__extension__
+	/**
+	 * The bitfield API below is obsolete. Applications should
+	 * enable per-port offloads using the offloads field
+	 * above.
+	 */
 	uint16_t header_split : 1, /**< Header Split enable. */
 		hw_ip_checksum   : 1, /**< IP/UDP/TCP checksum offload enable. */
 		hw_vlan_filter   : 1, /**< VLAN filter enable. */
@@ -357,7 +368,17 @@ struct rte_eth_rxmode {
 		jumbo_frame      : 1, /**< Jumbo Frame Receipt enable. */
 		hw_strip_crc     : 1, /**< Enable CRC stripping by hardware. */
 		enable_scatter   : 1, /**< Enable scatter packets rx handler */
-		enable_lro       : 1; /**< Enable LRO */
+		enable_lro       : 1, /**< Enable LRO */
+		/**
+		 * When set, the offload bitfield should be ignored.
+		 * Instead, per-port Rx offloads should be set on the offloads
+		 * field above.
+		 * Per-queue offloads should be set on the rte_eth_rxconf
+		 * structure.
+		 * This bit is temporary until the rxmode bitfield offloads API
+		 * is deprecated.
+		 */
+		ignore_offload_bitfield : 1;
 };
 
 /**
@@ -691,6 +712,12 @@ struct rte_eth_rxconf {
 	uint16_t rx_free_thresh; /**< Drives the freeing of RX descriptors. */
 	uint8_t rx_drop_en; /**< Drop packets if no descriptors are available. */
 	uint8_t rx_deferred_start; /**< Do not start queue with rte_eth_dev_start(). */
+	/**
+	 * Per-queue Rx offloads to be set using DEV_RX_OFFLOAD_* flags.
+	 * Only offloads set in the rx_queue_offload_capa or rx_offload_capa
+	 * fields of the rte_eth_dev_info structure are allowed to be set.
+	 */
+	uint64_t offloads;
 };
 
 #define ETH_TXQ_FLAGS_NOMULTSEGS 0x0001 /**< nb_segs=1 for all mbufs */
@@ -907,6 +934,18 @@ struct rte_eth_conf {
 #define DEV_RX_OFFLOAD_QINQ_STRIP  0x00000020
 #define DEV_RX_OFFLOAD_OUTER_IPV4_CKSUM 0x00000040
 #define DEV_RX_OFFLOAD_MACSEC_STRIP     0x00000080
+#define DEV_RX_OFFLOAD_HEADER_SPLIT	0x00000100
+#define DEV_RX_OFFLOAD_VLAN_FILTER	0x00000200
+#define DEV_RX_OFFLOAD_VLAN_EXTEND	0x00000400
+#define DEV_RX_OFFLOAD_JUMBO_FRAME	0x00000800
+#define DEV_RX_OFFLOAD_CRC_STRIP	0x00001000
+#define DEV_RX_OFFLOAD_SCATTER		0x00002000
+#define DEV_RX_OFFLOAD_CHECKSUM (DEV_RX_OFFLOAD_IPV4_CKSUM | \
+				 DEV_RX_OFFLOAD_UDP_CKSUM | \
+				 DEV_RX_OFFLOAD_TCP_CKSUM)
+#define DEV_RX_OFFLOAD_VLAN (DEV_RX_OFFLOAD_VLAN_STRIP | \
+			     DEV_RX_OFFLOAD_VLAN_FILTER | \
+			     DEV_RX_OFFLOAD_VLAN_EXTEND)
 
 /**
  * TX offload capabilities of a device.
@@ -949,8 +988,11 @@ struct rte_eth_dev_info {
 	/** Maximum number of hash MAC addresses for MTA and UTA. */
 	uint16_t max_vfs; /**< Maximum number of VFs. */
 	uint16_t max_vmdq_pools; /**< Maximum number of VMDq pools. */
-	uint32_t rx_offload_capa; /**< Device RX offload capabilities. */
+	uint64_t rx_offload_capa;
+	/**< Device per port RX offload capabilities. */
 	uint32_t tx_offload_capa; /**< Device TX offload capabilities. */
+	uint64_t rx_queue_offload_capa;
+	/**< Device per queue RX offload capabilities. */
 	uint16_t reta_size;
 	/**< Device redirection table size, the total number of entries. */
 	uint8_t hash_key_size; /**< Hash key size in bytes */
@@ -1874,6 +1916,9 @@ uint32_t rte_eth_speed_bitflag(uint32_t speed, int duplex);
  *        each statically configurable offload hardware feature provided by
  *        Ethernet devices, such as IP checksum or VLAN tag stripping for
  *        example.
+ *        The Rx offload bitfield API is obsolete and will be deprecated.
+ *        Applications should set the ignore_offload_bitfield bit in the *rxmode*
+ *        structure and use the offloads field to set per-port offloads instead.
  *     - the Receive Side Scaling (RSS) configuration when using multiple RX
  *         queues per port.
  *
@@ -1927,6 +1972,8 @@ void _rte_eth_dev_reset(struct rte_eth_dev *dev);
  *   The *rx_conf* structure contains an *rx_thresh* structure with the values
  *   of the Prefetch, Host, and Write-Back threshold registers of the receive
  *   ring.
+ *   In addition, it contains the hardware offload features to activate using
+ *   the DEV_RX_OFFLOAD_* flags.
  * @param mb_pool
  *   The pointer to the memory pool from which to allocate *rte_mbuf* network
  *   memory buffers to populate each descriptor of the receive ring.
-- 
2.12.0

^ permalink raw reply	[flat|nested] 134+ messages in thread
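
A short sketch of the compatibility path added by this patch, from the
application's point of view, assuming the ethdev API from this series;
names and parameter values are illustrative placeholders:

    #include <string.h>
    #include <rte_ethdev.h>
    #include <rte_lcore.h>

    static int
    legacy_bitfield_path(uint8_t port_id, struct rte_mempool *mb_pool)
    {
        struct rte_eth_conf conf;

        memset(&conf, 0, sizeof(conf));
        conf.rxmode.hw_ip_checksum = 1; /* obsolete bitfield API */
        /* ignore_offload_bitfield is left at 0, so ethdev converts
         * the bitfield to DEV_RX_OFFLOAD_CHECKSUM internally ... */
        if (rte_eth_dev_configure(port_id, 1, 1, &conf) != 0)
            return -1;

        /* ... and reflects the port offloads into rxconf->offloads at
         * queue setup, so a PMD already ported to the new API still
         * sees them. A NULL rx_conf uses the PMD's default_rxconf. */
        return rte_eth_rx_queue_setup(port_id, 0, 512, rte_socket_id(),
                                      NULL, mb_pool);
    }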

* [dpdk-dev] [PATCH v6 2/4] ethdev: introduce Tx queue offloads API
  2017-10-04  8:17         ` [dpdk-dev] [PATCH v6 0/4] ethdev new " Shahaf Shuler
  2017-10-04  8:17           ` [dpdk-dev] [PATCH v6 1/4] ethdev: introduce Rx queue " Shahaf Shuler
@ 2017-10-04  8:17           ` Shahaf Shuler
  2017-10-04  8:18           ` [dpdk-dev] [PATCH v6 3/4] ethdev: add mbuf fast free Tx offload Shahaf Shuler
                             ` (2 subsequent siblings)
  4 siblings, 0 replies; 134+ messages in thread
From: Shahaf Shuler @ 2017-10-04  8:17 UTC (permalink / raw)
  To: konstantin.ananyev, thomas, arybchenko, jerin.jacob, ferruh.yigit; +Cc: dev

Introduce a new API to configure Tx offloads.

In the new API, offloads are divided into per-port and per-queue
offloads. The PMD reports capability for each of them.
Offloads are enabled using the existing DEV_TX_OFFLOAD_* flags.
To enable per-port offload, the offload should be set on both device
configuration and queue configuration. To enable per-queue offload, the
offloads can be set only on queue configuration.

In addition, the Tx offloads will be disabled by default and
enabled according to application needs. This will greatly simplify PMD
management of the different offloads.

Applications should set the ETH_TXQ_FLAGS_IGNORE flag on txq_flags
field in order to move to the new API.

The old Tx offloads API is kept for the time being, in order to enable a
smooth transition for PMDs and applications to the new API.

Signed-off-by: Shahaf Shuler <shahafs@mellanox.com>
---
 doc/guides/nics/features.rst  | 24 ++++++++++-----
 lib/librte_ether/rte_ethdev.c | 62 +++++++++++++++++++++++++++++++++++++-
 lib/librte_ether/rte_ethdev.h | 33 +++++++++++++++++++-
 3 files changed, 109 insertions(+), 10 deletions(-)

diff --git a/doc/guides/nics/features.rst b/doc/guides/nics/features.rst
index 4e68144ef..17745dace 100644
--- a/doc/guides/nics/features.rst
+++ b/doc/guides/nics/features.rst
@@ -131,7 +131,8 @@ Lock-free Tx queue
 If a PMD advertises DEV_TX_OFFLOAD_MT_LOCKFREE capable, multiple threads can
 invoke rte_eth_tx_burst() concurrently on the same Tx queue without SW lock.
 
-* **[provides] rte_eth_dev_info**: ``tx_offload_capa:DEV_TX_OFFLOAD_MT_LOCKFREE``.
+* **[uses]    rte_eth_txconf,rte_eth_txmode**: ``offloads:DEV_TX_OFFLOAD_MT_LOCKFREE``.
+* **[provides] rte_eth_dev_info**: ``tx_offload_capa,tx_queue_offload_capa:DEV_TX_OFFLOAD_MT_LOCKFREE``.
 * **[related]  API**: ``rte_eth_tx_burst()``.
 
 
@@ -220,11 +221,12 @@ TSO
 
 Supports TCP Segmentation Offloading.
 
+* **[uses]       rte_eth_txconf,rte_eth_txmode**: ``offloads:DEV_TX_OFFLOAD_TCP_TSO``.
 * **[uses]       rte_eth_desc_lim**: ``nb_seg_max``, ``nb_mtu_seg_max``.
 * **[uses]       mbuf**: ``mbuf.ol_flags:PKT_TX_TCP_SEG``.
 * **[uses]       mbuf**: ``mbuf.tso_segsz``, ``mbuf.l2_len``, ``mbuf.l3_len``, ``mbuf.l4_len``.
 * **[implements] datapath**: ``TSO functionality``.
-* **[provides]   rte_eth_dev_info**: ``tx_offload_capa:DEV_TX_OFFLOAD_TCP_TSO,DEV_TX_OFFLOAD_UDP_TSO``.
+* **[provides]   rte_eth_dev_info**: ``tx_offload_capa,tx_queue_offload_capa:DEV_TX_OFFLOAD_TCP_TSO,DEV_TX_OFFLOAD_UDP_TSO``.
 
 
 .. _nic_features_promiscuous_mode:
@@ -510,10 +512,11 @@ VLAN offload
 Supports VLAN offload to hardware.
 
 * **[uses]       rte_eth_rxconf,rte_eth_rxmode**: ``offloads:DEV_RX_OFFLOAD_VLAN_STRIP,DEV_RX_OFFLOAD_VLAN_FILTER,DEV_RX_OFFLOAD_VLAN_EXTEND``.
+* **[uses]       rte_eth_txconf,rte_eth_txmode**: ``offloads:DEV_TX_OFFLOAD_VLAN_INSERT``.
 * **[implements] eth_dev_ops**: ``vlan_offload_set``.
 * **[provides]   mbuf**: ``mbuf.ol_flags:PKT_RX_VLAN_STRIPPED``, ``mbuf.vlan_tci``.
 * **[provides]   rte_eth_dev_info**: ``rx_offload_capa,rx_queue_offload_capa:DEV_RX_OFFLOAD_VLAN_STRIP``,
-  ``tx_offload_capa:DEV_TX_OFFLOAD_VLAN_INSERT``.
+  ``tx_offload_capa,tx_queue_offload_capa:DEV_TX_OFFLOAD_VLAN_INSERT``.
 * **[related]    API**: ``rte_eth_dev_set_vlan_offload()``,
   ``rte_eth_dev_get_vlan_offload()``.
 
@@ -526,11 +529,12 @@ QinQ offload
 Supports QinQ (queue in queue) offload.
 
 * **[uses]     rte_eth_rxconf,rte_eth_rxmode**: ``offloads:DEV_RX_OFFLOAD_QINQ_STRIP``.
+* **[uses]     rte_eth_txconf,rte_eth_txmode**: ``offloads:DEV_TX_OFFLOAD_QINQ_INSERT``.
 * **[uses]     mbuf**: ``mbuf.ol_flags:PKT_TX_QINQ_PKT``.
 * **[provides] mbuf**: ``mbuf.ol_flags:PKT_RX_QINQ_STRIPPED``, ``mbuf.vlan_tci``,
    ``mbuf.vlan_tci_outer``.
 * **[provides] rte_eth_dev_info**: ``rx_offload_capa,rx_queue_offload_capa:DEV_RX_OFFLOAD_QINQ_STRIP``,
-  ``tx_offload_capa:DEV_TX_OFFLOAD_QINQ_INSERT``.
+  ``tx_offload_capa,tx_queue_offload_capa:DEV_TX_OFFLOAD_QINQ_INSERT``.
 
 
 .. _nic_features_l3_checksum_offload:
@@ -541,13 +545,14 @@ L3 checksum offload
 Supports L3 checksum offload.
 
 * **[uses]     rte_eth_rxconf,rte_eth_rxmode**: ``offloads:DEV_RX_OFFLOAD_IPV4_CKSUM``.
+* **[uses]     rte_eth_txconf,rte_eth_txmode**: ``offloads:DEV_TX_OFFLOAD_IPV4_CKSUM``.
 * **[uses]     mbuf**: ``mbuf.ol_flags:PKT_TX_IP_CKSUM``,
   ``mbuf.ol_flags:PKT_TX_IPV4`` | ``PKT_TX_IPV6``.
 * **[provides] mbuf**: ``mbuf.ol_flags:PKT_RX_IP_CKSUM_UNKNOWN`` |
   ``PKT_RX_IP_CKSUM_BAD`` | ``PKT_RX_IP_CKSUM_GOOD`` |
   ``PKT_RX_IP_CKSUM_NONE``.
 * **[provides] rte_eth_dev_info**: ``rx_offload_capa,rx_queue_offload_capa:DEV_RX_OFFLOAD_IPV4_CKSUM``,
-  ``tx_offload_capa:DEV_TX_OFFLOAD_IPV4_CKSUM``.
+  ``tx_offload_capa,tx_queue_offload_capa:DEV_TX_OFFLOAD_IPV4_CKSUM``.
 
 
 .. _nic_features_l4_checksum_offload:
@@ -558,6 +563,7 @@ L4 checksum offload
 Supports L4 checksum offload.
 
 * **[uses]     rte_eth_rxconf,rte_eth_rxmode**: ``offloads:DEV_RX_OFFLOAD_UDP_CKSUM,DEV_RX_OFFLOAD_TCP_CKSUM``.
+* **[uses]     rte_eth_txconf,rte_eth_txmode**: ``offloads:DEV_TX_OFFLOAD_UDP_CKSUM,DEV_TX_OFFLOAD_TCP_CKSUM,DEV_TX_OFFLOAD_SCTP_CKSUM``.
 * **[uses]     mbuf**: ``mbuf.ol_flags:PKT_TX_IPV4`` | ``PKT_TX_IPV6``,
   ``mbuf.ol_flags:PKT_TX_L4_NO_CKSUM`` | ``PKT_TX_TCP_CKSUM`` |
   ``PKT_TX_SCTP_CKSUM`` | ``PKT_TX_UDP_CKSUM``.
@@ -565,7 +571,7 @@ Supports L4 checksum offload.
   ``PKT_RX_L4_CKSUM_BAD`` | ``PKT_RX_L4_CKSUM_GOOD`` |
   ``PKT_RX_L4_CKSUM_NONE``.
 * **[provides] rte_eth_dev_info**: ``rx_offload_capa,rx_queue_offload_capa:DEV_RX_OFFLOAD_UDP_CKSUM,DEV_RX_OFFLOAD_TCP_CKSUM``,
-  ``tx_offload_capa:DEV_TX_OFFLOAD_UDP_CKSUM,DEV_TX_OFFLOAD_TCP_CKSUM,DEV_TX_OFFLOAD_SCTP_CKSUM``.
+  ``tx_offload_capa,tx_queue_offload_capa:DEV_TX_OFFLOAD_UDP_CKSUM,DEV_TX_OFFLOAD_TCP_CKSUM,DEV_TX_OFFLOAD_SCTP_CKSUM``.
 
 
 .. _nic_features_macsec_offload:
@@ -576,9 +582,10 @@ MACsec offload
 Supports MACsec.
 
 * **[uses]     rte_eth_rxconf,rte_eth_rxmode**: ``offloads:DEV_RX_OFFLOAD_MACSEC_STRIP``.
+* **[uses]     rte_eth_txconf,rte_eth_txmode**: ``offloads:DEV_TX_OFFLOAD_MACSEC_INSERT``.
 * **[uses]     mbuf**: ``mbuf.ol_flags:PKT_TX_MACSEC``.
 * **[provides] rte_eth_dev_info**: ``rx_offload_capa,rx_queue_offload_capa:DEV_RX_OFFLOAD_MACSEC_STRIP``,
-  ``tx_offload_capa:DEV_TX_OFFLOAD_MACSEC_INSERT``.
+  ``tx_offload_capa,tx_queue_offload_capa:DEV_TX_OFFLOAD_MACSEC_INSERT``.
 
 
 .. _nic_features_inner_l3_checksum:
@@ -589,6 +596,7 @@ Inner L3 checksum
 Supports inner packet L3 checksum.
 
 * **[uses]     rte_eth_rxconf,rte_eth_rxmode**: ``offloads:DEV_RX_OFFLOAD_OUTER_IPV4_CKSUM``.
+* **[uses]     rte_eth_txconf,rte_eth_txmode**: ``offloads:DEV_TX_OFFLOAD_OUTER_IPV4_CKSUM``.
 * **[uses]     mbuf**: ``mbuf.ol_flags:PKT_TX_IP_CKSUM``,
   ``mbuf.ol_flags:PKT_TX_IPV4`` | ``PKT_TX_IPV6``,
   ``mbuf.ol_flags:PKT_TX_OUTER_IP_CKSUM``,
@@ -596,7 +604,7 @@ Supports inner packet L3 checksum.
 * **[uses]     mbuf**: ``mbuf.outer_l2_len``, ``mbuf.outer_l3_len``.
 * **[provides] mbuf**: ``mbuf.ol_flags:PKT_RX_EIP_CKSUM_BAD``.
 * **[provides] rte_eth_dev_info**: ``rx_offload_capa,rx_queue_offload_capa:DEV_RX_OFFLOAD_OUTER_IPV4_CKSUM``,
-  ``tx_offload_capa:DEV_TX_OFFLOAD_OUTER_IPV4_CKSUM``.
+  ``tx_offload_capa,tx_queue_offload_capa:DEV_TX_OFFLOAD_OUTER_IPV4_CKSUM``.
 
 
 .. _nic_features_inner_l4_checksum:
diff --git a/lib/librte_ether/rte_ethdev.c b/lib/librte_ether/rte_ethdev.c
index 9b73d2377..856a54a8e 100644
--- a/lib/librte_ether/rte_ethdev.c
+++ b/lib/librte_ether/rte_ethdev.c
@@ -1214,6 +1214,50 @@ rte_eth_rx_queue_setup(uint8_t port_id, uint16_t rx_queue_id,
 	return ret;
 }
 
+/**
+ * A conversion function from the txq_flags API to the offloads API.
+ */
+static void
+rte_eth_convert_txq_flags(const uint32_t txq_flags, uint64_t *tx_offloads)
+{
+	uint64_t offloads = 0;
+
+	if (!(txq_flags & ETH_TXQ_FLAGS_NOMULTSEGS))
+		offloads |= DEV_TX_OFFLOAD_MULTI_SEGS;
+	if (!(txq_flags & ETH_TXQ_FLAGS_NOVLANOFFL))
+		offloads |= DEV_TX_OFFLOAD_VLAN_INSERT;
+	if (!(txq_flags & ETH_TXQ_FLAGS_NOXSUMSCTP))
+		offloads |= DEV_TX_OFFLOAD_SCTP_CKSUM;
+	if (!(txq_flags & ETH_TXQ_FLAGS_NOXSUMUDP))
+		offloads |= DEV_TX_OFFLOAD_UDP_CKSUM;
+	if (!(txq_flags & ETH_TXQ_FLAGS_NOXSUMTCP))
+		offloads |= DEV_TX_OFFLOAD_TCP_CKSUM;
+
+	*tx_offloads = offloads;
+}
+
+/**
+ * A conversion function from the offloads API to the txq_flags API.
+ */
+static void
+rte_eth_convert_txq_offloads(const uint64_t tx_offloads, uint32_t *txq_flags)
+{
+	uint32_t flags = 0;
+
+	if (!(tx_offloads & DEV_TX_OFFLOAD_MULTI_SEGS))
+		flags |= ETH_TXQ_FLAGS_NOMULTSEGS;
+	if (!(tx_offloads & DEV_TX_OFFLOAD_VLAN_INSERT))
+		flags |= ETH_TXQ_FLAGS_NOVLANOFFL;
+	if (!(tx_offloads & DEV_TX_OFFLOAD_SCTP_CKSUM))
+		flags |= ETH_TXQ_FLAGS_NOXSUMSCTP;
+	if (!(tx_offloads & DEV_TX_OFFLOAD_UDP_CKSUM))
+		flags |= ETH_TXQ_FLAGS_NOXSUMUDP;
+	if (!(tx_offloads & DEV_TX_OFFLOAD_TCP_CKSUM))
+		flags |= ETH_TXQ_FLAGS_NOXSUMTCP;
+
+	*txq_flags = flags;
+}
+
 int
 rte_eth_tx_queue_setup(uint8_t port_id, uint16_t tx_queue_id,
 		       uint16_t nb_tx_desc, unsigned int socket_id,
@@ -1221,6 +1265,7 @@ rte_eth_tx_queue_setup(uint8_t port_id, uint16_t tx_queue_id,
 {
 	struct rte_eth_dev *dev;
 	struct rte_eth_dev_info dev_info;
+	struct rte_eth_txconf local_conf;
 	void **txq;
 
 	RTE_ETH_VALID_PORTID_OR_ERR_RET(port_id, -EINVAL);
@@ -1265,8 +1310,23 @@ rte_eth_tx_queue_setup(uint8_t port_id, uint16_t tx_queue_id,
 	if (tx_conf == NULL)
 		tx_conf = &dev_info.default_txconf;
 
+	/*
+	 * Convert between the two offloads APIs so that PMDs need to
+	 * support only one of them.
+	 */
+	local_conf = *tx_conf;
+	if (tx_conf->txq_flags & ETH_TXQ_FLAGS_IGNORE) {
+		rte_eth_convert_txq_offloads(tx_conf->offloads,
+					     &local_conf.txq_flags);
+		/* Keep the ignore flag. */
+		local_conf.txq_flags |= ETH_TXQ_FLAGS_IGNORE;
+	} else {
+		rte_eth_convert_txq_flags(tx_conf->txq_flags,
+					  &local_conf.offloads);
+	}
+
 	return (*dev->dev_ops->tx_queue_setup)(dev, tx_queue_id, nb_tx_desc,
-					       socket_id, tx_conf);
+					       socket_id, &local_conf);
 }
 
 void
diff --git a/lib/librte_ether/rte_ethdev.h b/lib/librte_ether/rte_ethdev.h
index e02d57881..da91f8740 100644
--- a/lib/librte_ether/rte_ethdev.h
+++ b/lib/librte_ether/rte_ethdev.h
@@ -692,6 +692,12 @@ struct rte_eth_vmdq_rx_conf {
  */
 struct rte_eth_txmode {
 	enum rte_eth_tx_mq_mode mq_mode; /**< TX multi-queues mode. */
+	/**
+	 * Per-port Tx offloads to be set using DEV_TX_OFFLOAD_* flags.
+	 * Only offloads set in the tx_offload_capa field of the
+	 * rte_eth_dev_info structure are allowed to be set.
+	 */
+	uint64_t offloads;
 
 	/* For i40e specifically */
 	uint16_t pvid;
@@ -734,6 +740,15 @@ struct rte_eth_rxconf {
 		(ETH_TXQ_FLAGS_NOXSUMSCTP | ETH_TXQ_FLAGS_NOXSUMUDP | \
 		 ETH_TXQ_FLAGS_NOXSUMTCP)
 /**
+ * When set, the txq_flags field should be ignored; per-queue Tx offloads
+ * will be set instead in the offloads field of the rte_eth_txconf struct.
+ * This flag is temporary until the rte_eth_txconf.txq_flags API is
+ * deprecated.
+ */
+#define ETH_TXQ_FLAGS_IGNORE	0x8000
+
+/**
  * A structure used to configure a TX ring of an Ethernet port.
  */
 struct rte_eth_txconf {
@@ -744,6 +759,12 @@ struct rte_eth_txconf {
 
 	uint32_t txq_flags; /**< Set flags for the Tx queue */
 	uint8_t tx_deferred_start; /**< Do not start queue with rte_eth_dev_start(). */
+	/**
+	 * Per-queue Tx offloads to be set using DEV_TX_OFFLOAD_* flags.
+	 * Only offloads set in the tx_queue_offload_capa or tx_offload_capa
+	 * fields of the rte_eth_dev_info structure are allowed to be set.
+	 */
+	uint64_t offloads;
 };
 
 /**
@@ -968,6 +989,8 @@ struct rte_eth_conf {
 /**< Multiple threads can invoke rte_eth_tx_burst() concurrently on the same
  * tx queue without SW lock.
  */
+#define DEV_TX_OFFLOAD_MULTI_SEGS	0x00008000
+/**< Device supports multi-segment send. */
 
 struct rte_pci_device;
 
@@ -990,9 +1013,12 @@ struct rte_eth_dev_info {
 	uint16_t max_vmdq_pools; /**< Maximum number of VMDq pools. */
 	uint64_t rx_offload_capa;
 	/**< Device per port RX offload capabilities. */
-	uint32_t tx_offload_capa; /**< Device TX offload capabilities. */
+	uint64_t tx_offload_capa;
+	/**< Device per port TX offload capabilities. */
 	uint64_t rx_queue_offload_capa;
 	/**< Device per queue RX offload capabilities. */
+	uint64_t tx_queue_offload_capa;
+	/**< Device per queue TX offload capabilities. */
 	uint16_t reta_size;
 	/**< Device redirection table size, the total number of entries. */
 	uint8_t hash_key_size; /**< Hash key size in bytes */
@@ -2027,6 +2053,11 @@ int rte_eth_rx_queue_setup(uint8_t port_id, uint16_t rx_queue_id,
  *   - The *txq_flags* member contains flags to pass to the TX queue setup
  *     function to configure the behavior of the TX queue. This should be set
  *     to 0 if no special configuration is required.
+ *     This API is obsolete and will be deprecated. Applications
+ *     should set it to ETH_TXQ_FLAGS_IGNORE and use
+ *     the offloads field below.
+ *   - The *offloads* member contains Tx offloads to be enabled.
+ *     Offloads which are not set cannot be used on the datapath.
  *
  *     Note that setting *tx_free_thresh* or *tx_rs_thresh* value to 0 forces
  *     the transmit function to use default values.
-- 
2.12.0

^ permalink raw reply	[flat|nested] 134+ messages in thread
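
As a usage sketch (illustrative only; port_id is assumed to be an already
configured port, the queue size is arbitrary, and error handling is
omitted), the two configuration paths converted by the helpers above look
like this:

	#include <rte_ethdev.h>
	#include <rte_lcore.h>

	struct rte_eth_txconf txq_conf = {
		/* Old API: NO* flags only; ethdev derives the offloads
		 * field for PMDs that understand only the new API. */
		.txq_flags = ETH_TXQ_FLAGS_NOMULTSEGS | ETH_TXQ_FLAGS_NOXSUMSCTP,
	};

	rte_eth_tx_queue_setup(port_id, 0, 512, rte_socket_id(), &txq_conf);

	/* New API: ignore txq_flags and request offloads explicitly;
	 * ethdev derives txq_flags for PMDs still on the old API. */
	txq_conf.txq_flags = ETH_TXQ_FLAGS_IGNORE;
	txq_conf.offloads = DEV_TX_OFFLOAD_VLAN_INSERT | DEV_TX_OFFLOAD_TCP_CKSUM;
	rte_eth_tx_queue_setup(port_id, 0, 512, rte_socket_id(), &txq_conf);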

* [dpdk-dev] [PATCH v6 3/4] ethdev: add mbuf fast free Tx offload
  2017-10-04  8:17         ` [dpdk-dev] [PATCH v6 0/4] ethdev new " Shahaf Shuler
  2017-10-04  8:17           ` [dpdk-dev] [PATCH v6 1/4] ethdev: introduce Rx queue " Shahaf Shuler
  2017-10-04  8:17           ` [dpdk-dev] [PATCH v6 2/4] ethdev: introduce Tx " Shahaf Shuler
@ 2017-10-04  8:18           ` Shahaf Shuler
  2017-10-04  8:18           ` [dpdk-dev] [PATCH v6 4/4] doc: add details on ethdev offloads API Shahaf Shuler
  2017-10-04 16:12           ` [dpdk-dev] [PATCH v6 0/4] ethdev new offloads API Ananyev, Konstantin
  4 siblings, 0 replies; 134+ messages in thread
From: Shahaf Shuler @ 2017-10-04  8:18 UTC (permalink / raw)
  To: konstantin.ananyev, thomas, arybchenko, jerin.jacob, ferruh.yigit; +Cc: dev

PMDs which expose this offload capability support an optimization for fast
release of mbufs following a successful Tx.
Such optimization requires that, per queue, all mbufs come from the same
mempool and have a reference count of 1.
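
For instance, an application that draws all Tx mbufs from one mempool and
never takes extra references could request it roughly as below (a sketch,
assuming a valid port_id; error handling omitted):

	struct rte_eth_dev_info dev_info;
	struct rte_eth_txconf txq_conf;

	rte_eth_dev_info_get(port_id, &dev_info);
	txq_conf = dev_info.default_txconf;
	txq_conf.txq_flags = ETH_TXQ_FLAGS_IGNORE;	/* use the offloads field */
	if (dev_info.tx_offload_capa & DEV_TX_OFFLOAD_MBUF_FAST_FREE)
		txq_conf.offloads |= DEV_TX_OFFLOAD_MBUF_FAST_FREE;
	rte_eth_tx_queue_setup(port_id, 0, 512, rte_socket_id(), &txq_conf);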

Signed-off-by: Shahaf Shuler <shahafs@mellanox.com>
---
 doc/guides/nics/features.rst         | 9 +++++++++
 doc/guides/nics/features/default.ini | 1 +
 lib/librte_ether/rte_ethdev.c        | 5 +++++
 lib/librte_ether/rte_ethdev.h        | 5 +++++
 4 files changed, 20 insertions(+)

diff --git a/doc/guides/nics/features.rst b/doc/guides/nics/features.rst
index 17745dace..6538470ac 100644
--- a/doc/guides/nics/features.rst
+++ b/doc/guides/nics/features.rst
@@ -628,6 +628,15 @@ Supports packet type parsing and returns a list of supported types.
 
 .. _nic_features_timesync:
 
+Mbuf fast free
+--------------
+
+Supports optimization for fast release of mbufs following successful Tx.
+Requires that, per queue, all mbufs come from the same mempool and have refcnt = 1.
+
+* **[uses]       rte_eth_txconf,rte_eth_txmode**: ``offloads:DEV_TX_OFFLOAD_MBUF_FAST_FREE``.
+* **[provides]   rte_eth_dev_info**: ``tx_offload_capa,tx_queue_offload_capa:DEV_TX_OFFLOAD_MBUF_FAST_FREE``.
+
 Timesync
 --------
 
diff --git a/doc/guides/nics/features/default.ini b/doc/guides/nics/features/default.ini
index 542430696..9a5990195 100644
--- a/doc/guides/nics/features/default.ini
+++ b/doc/guides/nics/features/default.ini
@@ -75,3 +75,4 @@ x86-64               =
 Usage doc            =
 Design doc           =
 Perf doc             =
+Mbuf fast free       =
diff --git a/lib/librte_ether/rte_ethdev.c b/lib/librte_ether/rte_ethdev.c
index 856a54a8e..59756dd82 100644
--- a/lib/librte_ether/rte_ethdev.c
+++ b/lib/librte_ether/rte_ethdev.c
@@ -1232,6 +1232,9 @@ rte_eth_convert_txq_flags(const uint32_t txq_flags, uint64_t *tx_offloads)
 		offloads |= DEV_TX_OFFLOAD_UDP_CKSUM;
 	if (!(txq_flags & ETH_TXQ_FLAGS_NOXSUMTCP))
 		offloads |= DEV_TX_OFFLOAD_TCP_CKSUM;
+	if ((txq_flags & ETH_TXQ_FLAGS_NOREFCOUNT) &&
+	    (txq_flags & ETH_TXQ_FLAGS_NOMULTMEMP))
+		offloads |= DEV_TX_OFFLOAD_MBUF_FAST_FREE;
 
 	*tx_offloads = offloads;
 }
@@ -1254,6 +1257,8 @@ rte_eth_convert_txq_offloads(const uint64_t tx_offloads, uint32_t *txq_flags)
 		flags |= ETH_TXQ_FLAGS_NOXSUMUDP;
 	if (!(tx_offloads & DEV_TX_OFFLOAD_TCP_CKSUM))
 		flags |= ETH_TXQ_FLAGS_NOXSUMTCP;
+	if (tx_offloads & DEV_TX_OFFLOAD_MBUF_FAST_FREE)
+		flags |= (ETH_TXQ_FLAGS_NOREFCOUNT | ETH_TXQ_FLAGS_NOMULTMEMP);
 
 	*txq_flags = flags;
 }
diff --git a/lib/librte_ether/rte_ethdev.h b/lib/librte_ether/rte_ethdev.h
index da91f8740..78de045ed 100644
--- a/lib/librte_ether/rte_ethdev.h
+++ b/lib/librte_ether/rte_ethdev.h
@@ -991,6 +991,11 @@ struct rte_eth_conf {
  */
 #define DEV_TX_OFFLOAD_MULTI_SEGS	0x00008000
 /**< Device supports multi segment send. */
+#define DEV_TX_OFFLOAD_MBUF_FAST_FREE	0x00010000
+/**< Device supports optimization for fast release of mbufs.
+ *   When set, the application must guarantee that, per queue, all mbufs
+ *   come from the same mempool and have refcnt = 1.
+ */
 
 struct rte_pci_device;
 
-- 
2.12.0

^ permalink raw reply	[flat|nested] 134+ messages in thread

* [dpdk-dev] [PATCH v6 4/4] doc: add details on ethdev offloads API
  2017-10-04  8:17         ` [dpdk-dev] [PATCH v6 0/4] ethdev new " Shahaf Shuler
                             ` (2 preceding siblings ...)
  2017-10-04  8:18           ` [dpdk-dev] [PATCH v6 3/4] ethdev: add mbuf fast free Tx offload Shahaf Shuler
@ 2017-10-04  8:18           ` Shahaf Shuler
  2017-10-04 13:46             ` Mcnamara, John
                               ` (2 more replies)
  2017-10-04 16:12           ` [dpdk-dev] [PATCH v6 0/4] ethdev new offloads API Ananyev, Konstantin
  4 siblings, 3 replies; 134+ messages in thread
From: Shahaf Shuler @ 2017-10-04  8:18 UTC (permalink / raw)
  To: konstantin.ananyev, thomas, arybchenko, jerin.jacob, ferruh.yigit; +Cc: dev

Add the programmers guide details on the new offloads API introduced
by commits:

commit 67a1a59b597f ("ethdev: introduce Rx queue offloads API")
commit f883eb32e2d4 ("ethdev: introduce Tx queue offloads API")

Signed-off-by: Shahaf Shuler <shahafs@mellanox.com>
Reviewed-by: John McNamara <john.mcnamara@intel.com>
---
 doc/guides/prog_guide/poll_mode_drv.rst | 20 ++++++++++++++++++++
 1 file changed, 20 insertions(+)

diff --git a/doc/guides/prog_guide/poll_mode_drv.rst b/doc/guides/prog_guide/poll_mode_drv.rst
index 8922e39f4..423170997 100644
--- a/doc/guides/prog_guide/poll_mode_drv.rst
+++ b/doc/guides/prog_guide/poll_mode_drv.rst
@@ -310,6 +310,26 @@ exported by each PMD. The list of flags and their precise meaning is
 described in the mbuf API documentation and in the in :ref:`Mbuf Library
 <Mbuf_Library>`, section "Meta Information".
 
+Per-Port and Per-Queue Offloads
+^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
+
+In the DPDK offload API, offloads are divided into per-port and per-queue offloads.
+The different offloads capabilities can be queried using ``rte_eth_dev_info_get()``.
+Supported offloads can be either per-port or per-queue.
+
+Offloads are enabled using the existing ``DEV_TX_OFFLOAD_*`` or ``DEV_RX_OFFLOAD_*`` flags.
+Per-port offload configuration is set using ``rte_eth_dev_configure``.
+Per-queue offload configuration is set using ``rte_eth_rx_queue_setup`` and ``rte_eth_tx_queue_setup``.
+To enable per-port offload, the offload should be set on both device configuration and queue setup.
+In case of a mixed configuration the queue setup shall return with an error.
+To enable per-queue offload, the offload can be set only on the queue setup.
+Offloads which are not enabled are disabled by default.
+
+For an application to use the Tx offloads API it should set the ``ETH_TXQ_FLAGS_IGNORE`` flag in the ``txq_flags`` field located in ``rte_eth_txconf`` struct.
+In such cases it is not required to set other flags in ``txq_flags``.
+For an application to use the Rx offloads API it should set the ``ignore_offload_bitfield`` bit in the ``rte_eth_rxmode`` struct.
+In such cases it is not required to set other bitfield offloads in the ``rxmode`` struct.
+
 Poll Mode Driver API
 --------------------
 
-- 
2.12.0

^ permalink raw reply	[flat|nested] 134+ messages in thread
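
A minimal sketch of the API usage described above (illustrative only;
assumes port_id, a mempool mb_pool, and a single Rx/Tx queue, with error
handling omitted):

	struct rte_eth_conf port_conf = { 0 };
	struct rte_eth_rxconf rxq_conf = { 0 };
	struct rte_eth_txconf txq_conf = { 0 };

	/* Opt in to the Rx offloads API; enable a per-port Rx offload. */
	port_conf.rxmode.ignore_offload_bitfield = 1;
	port_conf.rxmode.offloads = DEV_RX_OFFLOAD_VLAN_STRIP;
	rte_eth_dev_configure(port_id, 1, 1, &port_conf);

	/* Per the text above, port-level offloads are set again in the
	 * queue configuration; per-queue offloads are added on top. */
	rxq_conf.offloads = port_conf.rxmode.offloads;
	rte_eth_rx_queue_setup(port_id, 0, 512, rte_socket_id(),
			       &rxq_conf, mb_pool);

	/* Opt in to the Tx offloads API and enable a Tx offload. */
	txq_conf.txq_flags = ETH_TXQ_FLAGS_IGNORE;
	txq_conf.offloads = DEV_TX_OFFLOAD_IPV4_CKSUM;
	rte_eth_tx_queue_setup(port_id, 0, 512, rte_socket_id(), &txq_conf);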

* Re: [dpdk-dev] [PATCH v6 4/4] doc: add details on ethdev offloads API
  2017-10-04  8:18           ` [dpdk-dev] [PATCH v6 4/4] doc: add details on ethdev offloads API Shahaf Shuler
@ 2017-10-04 13:46             ` Mcnamara, John
  2018-03-15  1:58             ` Patil, Harish
  2018-03-16 15:51             ` [dpdk-dev] [PATCH] doc: update new ethdev offload API description Ferruh Yigit
  2 siblings, 0 replies; 134+ messages in thread
From: Mcnamara, John @ 2017-10-04 13:46 UTC (permalink / raw)
  To: Shahaf Shuler, Ananyev, Konstantin, thomas, arybchenko,
	jerin.jacob, Yigit, Ferruh
  Cc: dev



> -----Original Message-----
> From: dev [mailto:dev-bounces@dpdk.org] On Behalf Of Shahaf Shuler
> Sent: Wednesday, October 4, 2017 9:18 AM
> To: Ananyev, Konstantin <konstantin.ananyev@intel.com>;
> thomas@monjalon.net; arybchenko@solarflare.com;
> jerin.jacob@caviumnetworks.com; Yigit, Ferruh <ferruh.yigit@intel.com>
> Cc: dev@dpdk.org
> Subject: [dpdk-dev] [PATCH v6 4/4] doc: add details on ethdev offloads API
> 
> Add the programmers guide details on the new offloads API introduced by
> commits:
> 
> commit 67a1a59b597f ("ethdev: introduce Rx queue offloads API") commit
> f883eb32e2d4 ("ethdev: introduce Tx queue offloads API")
> 
> Signed-off-by: Shahaf Shuler <shahafs@mellanox.com>
> Reviewed-by: John McNamara <john.mcnamara@intel.com>

Acked-by: John McNamara <john.mcnamara@intel.com>

^ permalink raw reply	[flat|nested] 134+ messages in thread

* Re: [dpdk-dev] [PATCH v6 0/4] ethdev new offloads API
  2017-10-04  8:17         ` [dpdk-dev] [PATCH v6 0/4] ethdev new " Shahaf Shuler
                             ` (3 preceding siblings ...)
  2017-10-04  8:18           ` [dpdk-dev] [PATCH v6 4/4] doc: add details on ethdev offloads API Shahaf Shuler
@ 2017-10-04 16:12           ` Ananyev, Konstantin
  2017-10-05  0:55             ` Ferruh Yigit
  4 siblings, 1 reply; 134+ messages in thread
From: Ananyev, Konstantin @ 2017-10-04 16:12 UTC (permalink / raw)
  To: Shahaf Shuler, thomas, arybchenko, jerin.jacob, Yigit, Ferruh; +Cc: dev



> -----Original Message-----
> From: Shahaf Shuler [mailto:shahafs@mellanox.com]
> Sent: Wednesday, October 4, 2017 9:18 AM
> To: Ananyev, Konstantin <konstantin.ananyev@intel.com>; thomas@monjalon.net; arybchenko@solarflare.com;
> jerin.jacob@caviumnetworks.com; Yigit, Ferruh <ferruh.yigit@intel.com>
> Cc: dev@dpdk.org
> Subject: [PATCH v6 0/4] ethdev new offloads API
> 
> Tx offloads configuration is per queue. Tx offloads are enabled by default,
> and can be disabled using ETH_TXQ_FLAGS_NO* flags.
> This behaviour is not consistent with the Rx side where the Rx offloads
> configuration is per port. Rx offloads are disabled by default and enabled
> according to bit field in rte_eth_rxmode structure.
> 
> Moreover, considering more Tx and Rx offloads will be added
> over time, the cost of managing them all inside the PMD will be tremendous,
> as the PMD will need to check the matching for the entire offload set
> for each mbuf it handles.
> In addition, on the current approach each Rx offload added breaks the
> ABI compatibility as it requires to add entries to existing bit-fields.
> 
> The series address above issues by defining a new offloads API.
> In the new API, offloads are divided into per-port and per-queue offloads,
> with a corresponding capability for each.
> The offloads are disabled by default. Each offload can be enabled or
> disabled using the existing DEV_TX_OFFLOADS_* or DEV_RX_OFFLOADS_* flags.
> Such API will enable to easily add or remove offloads, without breaking the
> ABI compatibility.
> 
> In order to provide a smooth transition between the APIs the following actions
> were taken:
> *  The old offloads API is kept for the meanwhile.
> *  Helper function which copy from old to new API were added to ethdev,
>    enabling the PMD to support only one of the APIs.
> *  Helper function which copy from new to old API were also added,
>    to enable application to use the new API with PMD which still supports
>    the old one.
> 
> Per discussion made on the RFC of this series [1], the integration plan which was
> decided is to do the transition in two phases:
> * ethdev API will move on 17.11.
> * Apps and examples will move on 18.02.
> 
> This to enable PMD maintainers sufficient time to adopt the new API.
> 
> [1]
> http://dpdk.org/ml/archives/dev/2017-August/072643.html
> 
> on v6:
>  - Move mbuf fast free Tx offload to a separate patch.
> 
> on v5:
>  - Fix documentation.
>  - Fix comments on port offloads configuration.
> 
> on v4:
>  - Added another patch for documentation.
>  - Fixed ETH_TXQ_FLAGS_IGNORE flag override.
>  - Clarify the description of DEV_TX_OFFLOAD_MBUF_FAST_FREE offload.
> 
> on v3:
>  - Introduce the DEV_TX_OFFLOAD_MBUF_FAST_FREE to act as an equivalent
>    for the no refcnt and single mempool flags.
>  - Fix features documentation.
>  - Fix comment style.
> 
> on v2:
>  - Taking new approach of dividing offloads into per-queue and per-port one.
>  - Postpone the Tx/Rx public struct renaming to 18.02
>  - Squash the helper functions into the Rx/Tx offloads intro patches.
> 
> Shahaf Shuler (4):
>   ethdev: introduce Rx queue offloads API
>   ethdev: introduce Tx queue offloads API
>   ethdev: add mbuf fast free Tx offload
>   doc: add details on ethdev offloads API
> 
>  doc/guides/nics/features.rst            |  66 +++++---
>  doc/guides/nics/features/default.ini    |   1 +
>  doc/guides/prog_guide/poll_mode_drv.rst |  20 +++
>  lib/librte_ether/rte_ethdev.c           | 223 +++++++++++++++++++++++++--
>  lib/librte_ether/rte_ethdev.h           |  89 ++++++++++-
>  5 files changed, 359 insertions(+), 40 deletions(-)
> 
> Series-reviewed-by: Andrew Rybchenko <arybchenko@solarflare.com>
> 
> --
Acked-by: Konstantin Ananyev <konstantin.ananyev@intel.com>

> 2.12.0

^ permalink raw reply	[flat|nested] 134+ messages in thread

* Re: [dpdk-dev] [PATCH v6 0/4] ethdev new offloads API
  2017-10-04 16:12           ` [dpdk-dev] [PATCH v6 0/4] ethdev new offloads API Ananyev, Konstantin
@ 2017-10-05  0:55             ` Ferruh Yigit
  0 siblings, 0 replies; 134+ messages in thread
From: Ferruh Yigit @ 2017-10-05  0:55 UTC (permalink / raw)
  To: Ananyev, Konstantin, Shahaf Shuler, thomas, arybchenko, jerin.jacob; +Cc: dev

On 10/4/2017 5:12 PM, Ananyev, Konstantin wrote:
> 
> 
>> -----Original Message-----
>> From: Shahaf Shuler [mailto:shahafs@mellanox.com]
>> Sent: Wednesday, October 4, 2017 9:18 AM
>> To: Ananyev, Konstantin <konstantin.ananyev@intel.com>; thomas@monjalon.net; arybchenko@solarflare.com;
>> jerin.jacob@caviumnetworks.com; Yigit, Ferruh <ferruh.yigit@intel.com>
>> Cc: dev@dpdk.org
>> Subject: [PATCH v6 0/4] ethdev new offloads API
>>
>> Tx offloads configuration is per queue. Tx offloads are enabled by default,
>> and can be disabled using ETH_TXQ_FLAGS_NO* flags.
>> This behaviour is not consistent with the Rx side where the Rx offloads
>> configuration is per port. Rx offloads are disabled by default and enabled
>> according to bit field in rte_eth_rxmode structure.
>>
>> Moreover, considering more Tx and Rx offloads will be added
>> over time, the cost of managing them all inside the PMD will be tremendous,
>> as the PMD will need to check the matching for the entire offload set
>> for each mbuf it handles.
>> In addition, on the current approach each Rx offload added breaks the
>> ABI compatibility as it requires to add entries to existing bit-fields.
>>
>> The series address above issues by defining a new offloads API.
>> In the new API, offloads are divided into per-port and per-queue offloads,
>> with a corresponding capability for each.
>> The offloads are disabled by default. Each offload can be enabled or
>> disabled using the existing DEV_TX_OFFLOADS_* or DEV_RX_OFFLOADS_* flags.
>> Such API will enable to easily add or remove offloads, without breaking the
>> ABI compatibility.
>>
>> In order to provide a smooth transition between the APIs the following actions
>> were taken:
>> *  The old offloads API is kept for the meanwhile.
>> *  Helper function which copy from old to new API were added to ethdev,
>>    enabling the PMD to support only one of the APIs.
>> *  Helper function which copy from new to old API were also added,
>>    to enable application to use the new API with PMD which still supports
>>    the old one.
>>
>> Per discussion made on the RFC of this series [1], the integration plan which was
>> decided is to do the transition in two phases:
>> * ethdev API will move on 17.11.
>> * Apps and examples will move on 18.02.
>>
>> This to enable PMD maintainers sufficient time to adopt the new API.
>>
>> [1]
>> http://dpdk.org/ml/archives/dev/2017-August/072643.html
>>
>> on v6:
>>  - Move mbuf fast free Tx offload to a separate patch.
>>
>> on v5:
>>  - Fix documentation.
>>  - Fix comments on port offloads configuration.
>>
>> on v4:
>>  - Added another patch for documentation.
>>  - Fixed ETH_TXQ_FLAGS_IGNORE flag override.
>>  - Clarify the description of DEV_TX_OFFLOAD_MBUF_FAST_FREE offload.
>>
>> on v3:
>>  - Introduce the DEV_TX_OFFLOAD_MBUF_FAST_FREE to act as an equivalent
>>    for the no refcnt and single mempool flags.
>>  - Fix features documentation.
>>  - Fix comment style.
>>
>> on v2:
>>  - Taking new approach of dividing offloads into per-queue and per-port one.
>>  - Postpone the Tx/Rx public struct renaming to 18.02
>>  - Squash the helper functions into the Rx/Tx offloads intro patches.
>>
>> Shahaf Shuler (4):
>>   ethdev: introduce Rx queue offloads API
>>   ethdev: introduce Tx queue offloads API
>>   ethdev: add mbuf fast free Tx offload
>>   doc: add details on ethdev offloads API
>>
>>  doc/guides/nics/features.rst            |  66 +++++---
>>  doc/guides/nics/features/default.ini    |   1 +
>>  doc/guides/prog_guide/poll_mode_drv.rst |  20 +++
>>  lib/librte_ether/rte_ethdev.c           | 223 +++++++++++++++++++++++++--
>>  lib/librte_ether/rte_ethdev.h           |  89 ++++++++++-
>>  5 files changed, 359 insertions(+), 40 deletions(-)
>>
>> Series-reviewed-by: Andrew Rybchenko <arybchenko@solarflare.com>

> Acked-by: Konstantin Ananyev <konstantin.ananyev@intel.com>

Series applied to dpdk-next-net/master, thanks.

^ permalink raw reply	[flat|nested] 134+ messages in thread

* Re: [dpdk-dev] [PATCH v6 4/4] doc: add details on ethdev offloads API
  2017-10-04  8:18           ` [dpdk-dev] [PATCH v6 4/4] doc: add details on ethdev offloads API Shahaf Shuler
  2017-10-04 13:46             ` Mcnamara, John
@ 2018-03-15  1:58             ` Patil, Harish
  2018-03-15  6:05               ` Shahaf Shuler
  2018-03-16 15:51             ` [dpdk-dev] [PATCH] doc: update new ethdev offload API description Ferruh Yigit
  2 siblings, 1 reply; 134+ messages in thread
From: Patil, Harish @ 2018-03-15  1:58 UTC (permalink / raw)
  To: Shahaf Shuler; +Cc: dev, ferruh.yigit

-----Original Message-----
From: dev <dev-bounces@dpdk.org> on behalf of Shahaf Shuler
<shahafs@mellanox.com>
Date: Wednesday, October 4, 2017 at 1:18 AM
To: "konstantin.ananyev@intel.com" <konstantin.ananyev@intel.com>,
"thomas@monjalon.net" <thomas@monjalon.net>, "arybchenko@solarflare.com"
<arybchenko@solarflare.com>, "Jacob,  Jerin"
<Jerin.JacobKollanukkaran@cavium.com>, "ferruh.yigit@intel.com"
<ferruh.yigit@intel.com>
Cc: "dev@dpdk.org" <dev@dpdk.org>
Subject: [dpdk-dev] [PATCH v6 4/4] doc: add details on ethdev offloads API

>Add the programmers guide details on the new offloads API introduced
>by commits:
>
>commit 67a1a59b597f ("ethdev: introduce Rx queue offloads API")
>commit f883eb32e2d4 ("ethdev: introduce Tx queue offloads API")
>
>Signed-off-by: Shahaf Shuler <shahafs@mellanox.com>
>Reviewed-by: John McNamara <john.mcnamara@intel.com>
>---
> doc/guides/prog_guide/poll_mode_drv.rst | 20 ++++++++++++++++++++
> 1 file changed, 20 insertions(+)
>
>diff --git a/doc/guides/prog_guide/poll_mode_drv.rst
>b/doc/guides/prog_guide/poll_mode_drv.rst
>index 8922e39f4..423170997 100644
>--- a/doc/guides/prog_guide/poll_mode_drv.rst
>+++ b/doc/guides/prog_guide/poll_mode_drv.rst
>@@ -310,6 +310,26 @@ exported by each PMD. The list of flags and their
>precise meaning is
> described in the mbuf API documentation and in the in :ref:`Mbuf Library
> <Mbuf_Library>`, section "Meta Information".
> 
>+Per-Port and Per-Queue Offloads
>+^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
>+
>+In the DPDK offload API, offloads are divided into per-port and
>per-queue offloads.
>+The different offloads capabilities can be queried using
>``rte_eth_dev_info_get()``.
>+Supported offloads can be either per-port or per-queue.
>+
>+Offloads are enabled using the existing ``DEV_TX_OFFLOAD_*`` or
>``DEV_RX_OFFLOAD_*`` flags.
>+Per-port offload configuration is set using ``rte_eth_dev_configure``.
>+Per-queue offload configuration is set using ``rte_eth_rx_queue_setup``
>and ``rte_eth_tx_queue_setup``.
>+To enable per-port offload, the offload should be set on both device
>configuration and queue setup.
>+In case of a mixed configuration the queue setup shall return with an
>error.
>+To enable per-queue offload, the offload can be set only on the queue
>setup.
>+Offloads which are not enabled are disabled by default.
>+
>+For an application to use the Tx offloads API it should set the
>``ETH_TXQ_FLAGS_IGNORE`` flag in the ``txq_flags`` field located in
>``rte_eth_txconf`` struct.
>+In such cases it is not required to set other flags in ``txq_flags``.
>+For an application to use the Rx offloads API it should set the
>``ignore_offload_bitfield`` bit in the ``rte_eth_rxmode`` struct.
>+In such cases it is not required to set other bitfield offloads in the
>``rxmode`` struct.
>+
> Poll Mode Driver API
> --------------------
> 
>-- 
>2.12.0
>
Hi Shahaf,
I have a minor question here.
The documentation states:
"To enable per-port offload, the offload should be set on both device
configuration and queue setup."
Our NIC supports only port-level offloads. So my understanding is that to
enable per-port offload we just have to fill in
rx_offload_capa/tx_offload_capa in the dev_infos_get() routine. So I didn't
understand what is meant by "both device configuration and queue setup"
here.
Please let me know.
Thanks.





^ permalink raw reply	[flat|nested] 134+ messages in thread

* Re: [dpdk-dev] [PATCH v6 4/4] doc: add details on ethdev offloads API
  2018-03-15  1:58             ` Patil, Harish
@ 2018-03-15  6:05               ` Shahaf Shuler
  0 siblings, 0 replies; 134+ messages in thread
From: Shahaf Shuler @ 2018-03-15  6:05 UTC (permalink / raw)
  To: Patil, Harish; +Cc: dev, ferruh.yigit

Thursday, March 15, 2018 3:58 AM, Patil, Harish:
> >2.12.0
> >
> Hi Shahaf,
> Have a minor question here.
> In the documentation it is stated that:
> "To enable per-port offload, the offload should be set on both device
> configuration and queue setup.”
> Our NIC supports only port-level offloads. So my understanding is that to
> enable per-port offload we just have to fill in
> rx_offload_capa/tx_offload_capa in dev_infos_get() routine. So I didn’t
> understand what is meant by ' both device configuration and queue setup’
> here.
> Pls let me know.
> Thanks.

It means that on [rt]x_queue_setup the application should set the port offloads in the queue offloads as well, as in the sketch below. This is the API contract for the application, and there is an ongoing discussion to remove this limitation; see [1].

You may add a routine to verify that (on the queue setup), or ignore the queue offloads configuration completely.

[1] http://dpdk.org/ml/archives/dev/2018-March/092684.html
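
For example (a rough sketch of that intent; port_conf, rxq_conf and
mb_pool are assumed to be set up as usual):

	/* Repeat the port-level offloads in the queue configuration and
	 * add any queue-specific ones on top. */
	rxq_conf.offloads = port_conf.rxmode.offloads;
	rxq_conf.offloads |= DEV_RX_OFFLOAD_SCATTER;
	rte_eth_rx_queue_setup(port_id, queue_id, 512, rte_socket_id(),
			       &rxq_conf, mb_pool);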



^ permalink raw reply	[flat|nested] 134+ messages in thread

* [dpdk-dev] [PATCH] doc: update new ethdev offload API description
  2017-10-04  8:18           ` [dpdk-dev] [PATCH v6 4/4] doc: add details on ethdev offloads API Shahaf Shuler
  2017-10-04 13:46             ` Mcnamara, John
  2018-03-15  1:58             ` Patil, Harish
@ 2018-03-16 15:51             ` Ferruh Yigit
  2018-03-17  0:16               ` Patil, Harish
                                 ` (3 more replies)
  2 siblings, 4 replies; 134+ messages in thread
From: Ferruh Yigit @ 2018-03-16 15:51 UTC (permalink / raw)
  To: John McNamara, Marko Kovacevic
  Cc: dev, Ferruh Yigit, Thomas Monjalon, shahafs, Patil, Harish

Don't mandate that the API pass the port offload configuration during queue
setup; this is unnecessary for devices that support only port-level offloads.

Fixes: 81ac560dc1b4 ("doc: add details on ethdev offloads API")
Cc: shahafs@mellanox.com

Signed-off-by: Ferruh Yigit <ferruh.yigit@intel.com>
---
Cc: Patil, Harish <harish.patil@cavium.com>

Btw, this expectation from the API should be clear from the source code and
API documentation (doxygen comments in the header file) instead of the
prose documentation. Am I missing something, or are we doing something
wrong here?
---
 doc/guides/prog_guide/poll_mode_drv.rst | 4 +---
 1 file changed, 1 insertion(+), 3 deletions(-)

diff --git a/doc/guides/prog_guide/poll_mode_drv.rst b/doc/guides/prog_guide/poll_mode_drv.rst
index e5d01874e..3247f309f 100644
--- a/doc/guides/prog_guide/poll_mode_drv.rst
+++ b/doc/guides/prog_guide/poll_mode_drv.rst
@@ -303,9 +303,7 @@ Supported offloads can be either per-port or per-queue.
 Offloads are enabled using the existing ``DEV_TX_OFFLOAD_*`` or ``DEV_RX_OFFLOAD_*`` flags.
 Per-port offload configuration is set using ``rte_eth_dev_configure``.
 Per-queue offload configuration is set using ``rte_eth_rx_queue_setup`` and ``rte_eth_tx_queue_setup``.
-To enable per-port offload, the offload should be set on both device configuration and queue setup.
-In case of a mixed configuration the queue setup shall return with an error.
-To enable per-queue offload, the offload can be set only on the queue setup.
+Per-port offloads should be set on the port configuration. Queue offloads should be set on the queue configuration.
 Offloads which are not enabled are disabled by default.
 
 For an application to use the Tx offloads API it should set the ``ETH_TXQ_FLAGS_IGNORE`` flag in the ``txq_flags`` field located in ``rte_eth_txconf`` struct.
-- 
2.13.6

^ permalink raw reply	[flat|nested] 134+ messages in thread

* Re: [dpdk-dev] [PATCH] doc: update new ethdev offload API description
  2018-03-16 15:51             ` [dpdk-dev] [PATCH] doc: update new ethdev offload API description Ferruh Yigit
@ 2018-03-17  0:16               ` Patil, Harish
  2018-03-18  5:52               ` Shahaf Shuler
                                 ` (2 subsequent siblings)
  3 siblings, 0 replies; 134+ messages in thread
From: Patil, Harish @ 2018-03-17  0:16 UTC (permalink / raw)
  To: Ferruh Yigit, John McNamara, Marko Kovacevic
  Cc: dev, Thomas Monjalon, shahafs


-----Original Message-----
From: Ferruh Yigit <ferruh.yigit@intel.com>
Date: Friday, March 16, 2018 at 8:51 AM
To: John McNamara <john.mcnamara@intel.com>, Marko Kovacevic
<marko.kovacevic@intel.com>
Cc: "dev@dpdk.org" <dev@dpdk.org>, Ferruh Yigit <ferruh.yigit@intel.com>,
Thomas Monjalon <thomas@monjalon.net>, "shahafs@mellanox.com"
<shahafs@mellanox.com>, "Patil, Harish" <Harish.Patil@cavium.com>
Subject: [PATCH] doc: update new ethdev offload API description

>Don't mandate API to pass port offload configuration during queue setup,
>this is unnecessary for devices that support only port level offloads.
>
>Fixes: 81ac560dc1b4 ("doc: add details on ethdev offloads API")
>Cc: shahafs@mellanox.com
>
>Signed-off-by: Ferruh Yigit <ferruh.yigit@intel.com>
>---
>Cc: Patil, Harish <harish.patil@cavium.com>
>
>Btw, this expectation from API should be clear from source code and API
>documentation (doxygen comments in header file) instead of
>documentation. Am I missing something or we are doing something wrong
>here?
>---
> doc/guides/prog_guide/poll_mode_drv.rst | 4 +---
> 1 file changed, 1 insertion(+), 3 deletions(-)
>
>diff --git a/doc/guides/prog_guide/poll_mode_drv.rst
>b/doc/guides/prog_guide/poll_mode_drv.rst
>index e5d01874e..3247f309f 100644
>--- a/doc/guides/prog_guide/poll_mode_drv.rst
>+++ b/doc/guides/prog_guide/poll_mode_drv.rst
>@@ -303,9 +303,7 @@ Supported offloads can be either per-port or
>per-queue.
> Offloads are enabled using the existing ``DEV_TX_OFFLOAD_*`` or
>``DEV_RX_OFFLOAD_*`` flags.
> Per-port offload configuration is set using ``rte_eth_dev_configure``.
> Per-queue offload configuration is set using ``rte_eth_rx_queue_setup``
>and ``rte_eth_tx_queue_setup``.
>-To enable per-port offload, the offload should be set on both device
>configuration and queue setup.
>-In case of a mixed configuration the queue setup shall return with an
>error.
>-To enable per-queue offload, the offload can be set only on the queue
>setup.
>+Per-port offloads should be set on the port configuration. Queue
>offloads should be set on the queue configuration.
> Offloads which are not enabled are disabled by default.
> 
> For an application to use the Tx offloads API it should set the
>``ETH_TXQ_FLAGS_IGNORE`` flag in the ``txq_flags`` field located in
>``rte_eth_txconf`` struct.
>-- 
>2.13.6
>
Acked-by: Harish Patil <harish.patil@cavium.com>



^ permalink raw reply	[flat|nested] 134+ messages in thread

* Re: [dpdk-dev] [PATCH] doc: update new ethdev offload API description
  2018-03-16 15:51             ` [dpdk-dev] [PATCH] doc: update new ethdev offload API description Ferruh Yigit
  2018-03-17  0:16               ` Patil, Harish
@ 2018-03-18  5:52               ` Shahaf Shuler
  2018-03-21  9:47               ` Andrew Rybchenko
  2018-05-08 12:33               ` Ferruh Yigit
  3 siblings, 0 replies; 134+ messages in thread
From: Shahaf Shuler @ 2018-03-18  5:52 UTC (permalink / raw)
  To: Ferruh Yigit, John McNamara, Marko Kovacevic
  Cc: dev, Thomas Monjalon, Patil, Harish

Friday, March 16, 2018 5:52 PM, Ferruh Yigit:
> Don't mandate API to pass port offload configuration during queue setup,
> this is unnecessary for devices that support only port level offloads.
> 
> Fixes: 81ac560dc1b4 ("doc: add details on ethdev offloads API")
> Cc: shahafs@mellanox.com
> 
> Signed-off-by: Ferruh Yigit <ferruh.yigit@intel.com>
> ---
> Cc: Patil, Harish <harish.patil@cavium.com>
> 
> Btw, this expectation from API should be clear from source code and API
> documentation (doxygen comments in header file) instead of
> documentation. Am I missing something or we are doing something wrong
> here?
> ---
>  doc/guides/prog_guide/poll_mode_drv.rst | 4 +---
>  1 file changed, 1 insertion(+), 3 deletions(-)
> 
> diff --git a/doc/guides/prog_guide/poll_mode_drv.rst
> b/doc/guides/prog_guide/poll_mode_drv.rst
> index e5d01874e..3247f309f 100644
> --- a/doc/guides/prog_guide/poll_mode_drv.rst
> +++ b/doc/guides/prog_guide/poll_mode_drv.rst
> @@ -303,9 +303,7 @@ Supported offloads can be either per-port or per-
> queue.
>  Offloads are enabled using the existing ``DEV_TX_OFFLOAD_*`` or
> ``DEV_RX_OFFLOAD_*`` flags.
>  Per-port offload configuration is set using ``rte_eth_dev_configure``.
>  Per-queue offload configuration is set using ``rte_eth_rx_queue_setup``
> and ``rte_eth_tx_queue_setup``.
> -To enable per-port offload, the offload should be set on both device
> configuration and queue setup.
> -In case of a mixed configuration the queue setup shall return with an error.
> -To enable per-queue offload, the offload can be set only on the queue
> setup.
> +Per-port offloads should be set on the port configuration. Queue offloads
> should be set on the queue configuration.
>  Offloads which are not enabled are disabled by default.
> 
>  For an application to use the Tx offloads API it should set the
> ``ETH_TXQ_FLAGS_IGNORE`` flag in the ``txq_flags`` field located in
> ``rte_eth_txconf`` struct.

I am OK with such a change.

However, while documentation is good, most customers learn the API usage from the existing examples. 
Currently both the examples and testpmd behave according to the old approach; see the example from testpmd [1] before the rx_queue_setup call.

I think a modification there is needed if we are going to change the API. 



[1]
                       /* Apply Rx offloads configuration */                    
                       port->rx_conf.offloads = port->dev_conf.rxmode.offloads;


> --
> 2.13.6

^ permalink raw reply	[flat|nested] 134+ messages in thread

* Re: [dpdk-dev] [PATCH] doc: update new ethdev offload API description
  2018-03-16 15:51             ` [dpdk-dev] [PATCH] doc: update new ethdev offload API description Ferruh Yigit
  2018-03-17  0:16               ` Patil, Harish
  2018-03-18  5:52               ` Shahaf Shuler
@ 2018-03-21  9:47               ` Andrew Rybchenko
  2018-03-21 10:54                 ` Ferruh Yigit
  2018-05-08 12:33               ` Ferruh Yigit
  3 siblings, 1 reply; 134+ messages in thread
From: Andrew Rybchenko @ 2018-03-21  9:47 UTC (permalink / raw)
  To: Ferruh Yigit, John McNamara, Marko Kovacevic
  Cc: dev, Thomas Monjalon, shahafs, Patil, Harish, Ivan Malov

On 03/16/2018 06:51 PM, Ferruh Yigit wrote:
> Don't mandate API to pass port offload configuration during queue setup,
> this is unnecessary for devices that support only port level offloads.
>
> Fixes: 81ac560dc1b4 ("doc: add details on ethdev offloads API")
> Cc: shahafs@mellanox.com
>
> Signed-off-by: Ferruh Yigit <ferruh.yigit@intel.com>
> ---
> Cc: Patil, Harish <harish.patil@cavium.com>
>
> Btw, this expectation from API should be clear from source code and API
> documentation (doxygen comments in header file) instead of
> documentation. Am I missing something or we are doing something wrong
> here?
> ---
>   doc/guides/prog_guide/poll_mode_drv.rst | 4 +---
>   1 file changed, 1 insertion(+), 3 deletions(-)
>
> diff --git a/doc/guides/prog_guide/poll_mode_drv.rst b/doc/guides/prog_guide/poll_mode_drv.rst
> index e5d01874e..3247f309f 100644
> --- a/doc/guides/prog_guide/poll_mode_drv.rst
> +++ b/doc/guides/prog_guide/poll_mode_drv.rst
> @@ -303,9 +303,7 @@ Supported offloads can be either per-port or per-queue.
>   Offloads are enabled using the existing ``DEV_TX_OFFLOAD_*`` or ``DEV_RX_OFFLOAD_*`` flags.
>   Per-port offload configuration is set using ``rte_eth_dev_configure``.
>   Per-queue offload configuration is set using ``rte_eth_rx_queue_setup`` and ``rte_eth_tx_queue_setup``.
> -To enable per-port offload, the offload should be set on both device configuration and queue setup.
> -In case of a mixed configuration the queue setup shall return with an error.
> -To enable per-queue offload, the offload can be set only on the queue setup.
> +Per-port offloads should be set on the port configuration. Queue offloads should be set on the queue configuration.
>   Offloads which are not enabled are disabled by default.
>   
>   For an application to use the Tx offloads API it should set the ``ETH_TXQ_FLAGS_IGNORE`` flag in the ``txq_flags`` field located in ``rte_eth_txconf`` struct.

net/sfc has code which double-checks the old behaviour, so it is not just a
documentation update. We can provide patches if the behaviour change is
accepted.

IMHO, it should be allowed to specify queue offloads at port level; that
should simply enable these offloads on all queues. It would also match the
dev_info [rt]x_offload_capa fields, which include both port and queue
offloads.

Yes, with the suggested changes we lose the possibility to enable at port
level but disable at queue level, but I think it is OK - if you don't need
an offload for all queues, just control it separately at queue level.

^ permalink raw reply	[flat|nested] 134+ messages in thread

* Re: [dpdk-dev] [PATCH] doc: update new ethdev offload API description
  2018-03-21  9:47               ` Andrew Rybchenko
@ 2018-03-21 10:54                 ` Ferruh Yigit
  2018-03-21 11:08                   ` Andrew Rybchenko
  2018-03-21 14:08                   ` Thomas Monjalon
  0 siblings, 2 replies; 134+ messages in thread
From: Ferruh Yigit @ 2018-03-21 10:54 UTC (permalink / raw)
  To: Andrew Rybchenko, John McNamara, Marko Kovacevic, Shahaf Shuler
  Cc: dev, Thomas Monjalon, shahafs, Patil, Harish, Ivan Malov

On 3/21/2018 9:47 AM, Andrew Rybchenko wrote:
> On 03/16/2018 06:51 PM, Ferruh Yigit wrote:
>> Don't mandate API to pass port offload configuration during queue setup,
>> this is unnecessary for devices that support only port level offloads.
>>
>> Fixes: 81ac560dc1b4 ("doc: add details on ethdev offloads API")
>> Cc: shahafs@mellanox.com
>>
>> Signed-off-by: Ferruh Yigit <ferruh.yigit@intel.com>
>> ---
>> Cc: Patil, Harish <harish.patil@cavium.com>
>>
>> Btw, this expectation from API should be clear from source code and API
>> documentation (doxygen comments in header file) instead of
>> documentation. Am I missing something or we are doing something wrong
>> here?
>> ---
>>  doc/guides/prog_guide/poll_mode_drv.rst | 4 +---
>>  1 file changed, 1 insertion(+), 3 deletions(-)
>>
>> diff --git a/doc/guides/prog_guide/poll_mode_drv.rst b/doc/guides/prog_guide/poll_mode_drv.rst
>> index e5d01874e..3247f309f 100644
>> --- a/doc/guides/prog_guide/poll_mode_drv.rst
>> +++ b/doc/guides/prog_guide/poll_mode_drv.rst
>> @@ -303,9 +303,7 @@ Supported offloads can be either per-port or per-queue.
>>  Offloads are enabled using the existing ``DEV_TX_OFFLOAD_*`` or ``DEV_RX_OFFLOAD_*`` flags.
>>  Per-port offload configuration is set using ``rte_eth_dev_configure``.
>>  Per-queue offload configuration is set using ``rte_eth_rx_queue_setup`` and ``rte_eth_tx_queue_setup``.
>> -To enable per-port offload, the offload should be set on both device configuration and queue setup.
>> -In case of a mixed configuration the queue setup shall return with an error.
>> -To enable per-queue offload, the offload can be set only on the queue setup.
>> +Per-port offloads should be set on the port configuration. Queue offloads should be set on the queue configuration.
>>  Offloads which are not enabled are disabled by default.
>>  
>>  For an application to use the Tx offloads API it should set the ``ETH_TXQ_FLAGS_IGNORE`` flag in the ``txq_flags`` field located in ``rte_eth_txconf`` struct.
> 
> net/sfc has code which double-checks old behaviour. So, it is not just
> documentation update. We can provide patches if the behaviour
> change is accepted.

Definitely not just a doc update; PMDs need to be modified. This patch is
just to agree on the behavior.

> 
> IMHO, it should be allowed to specify queue offloads on port level.
> It should simply enable these offloads on all queues. Also it will
> match dev_info [rt]x_offload_capa which include both port and queue
> offloads.
> 
> Yes, we lose possibility to enable on port level, but disable on queue
> level by suggested changes, but I think it is OK - if you don't need
> it for all queues, just control separately on queue level.

What I understood was that queue offloads could only enable more, but it
seems they can both enable and disable.

My concern was that even if a PMD reports no [rt]x_offload_capa at all, the
API forces the application to send at least the port offloads during queue
setup.

As long as the application is only allowed to send queue offloads within
the boundaries of the "queue offload capabilities", I am OK.

This will work fine for devices that support queue-level offloads, to
enable or disable queue-specific offloads on top of the port offloads.
Does this make sense?

^ permalink raw reply	[flat|nested] 134+ messages in thread

* Re: [dpdk-dev] [PATCH] doc: update new ethdev offload API description
  2018-03-21 10:54                 ` Ferruh Yigit
@ 2018-03-21 11:08                   ` Andrew Rybchenko
  2018-03-21 11:10                     ` Shahaf Shuler
  2018-03-21 14:08                   ` Thomas Monjalon
  1 sibling, 1 reply; 134+ messages in thread
From: Andrew Rybchenko @ 2018-03-21 11:08 UTC (permalink / raw)
  To: Ferruh Yigit, John McNamara, Marko Kovacevic, Shahaf Shuler
  Cc: dev, Thomas Monjalon, Patil, Harish, Ivan Malov

On 03/21/2018 01:54 PM, Ferruh Yigit wrote:
> On 3/21/2018 9:47 AM, Andrew Rybchenko wrote:
>> On 03/16/2018 06:51 PM, Ferruh Yigit wrote:
>>> Don't mandate API to pass port offload configuration during queue setup,
>>> this is unnecessary for devices that support only port level offloads.
>>>
>>> Fixes: 81ac560dc1b4 ("doc: add details on ethdev offloads API")
>>> Cc: shahafs@mellanox.com
>>>
>>> Signed-off-by: Ferruh Yigit <ferruh.yigit@intel.com>
>>> ---
>>> Cc: Patil, Harish <harish.patil@cavium.com>
>>>
>>> Btw, this expectation from API should be clear from source code and API
>>> documentation (doxygen comments in header file) instead of
>>> documentation. Am I missing something or we are doing something wrong
>>> here?
>>> ---
>>>   doc/guides/prog_guide/poll_mode_drv.rst | 4 +---
>>>   1 file changed, 1 insertion(+), 3 deletions(-)
>>>
>>> diff --git a/doc/guides/prog_guide/poll_mode_drv.rst b/doc/guides/prog_guide/poll_mode_drv.rst
>>> index e5d01874e..3247f309f 100644
>>> --- a/doc/guides/prog_guide/poll_mode_drv.rst
>>> +++ b/doc/guides/prog_guide/poll_mode_drv.rst
>>> @@ -303,9 +303,7 @@ Supported offloads can be either per-port or per-queue.
>>>   Offloads are enabled using the existing ``DEV_TX_OFFLOAD_*`` or ``DEV_RX_OFFLOAD_*`` flags.
>>>   Per-port offload configuration is set using ``rte_eth_dev_configure``.
>>>   Per-queue offload configuration is set using ``rte_eth_rx_queue_setup`` and ``rte_eth_tx_queue_setup``.
>>> -To enable per-port offload, the offload should be set on both device configuration and queue setup.
>>> -In case of a mixed configuration the queue setup shall return with an error.
>>> -To enable per-queue offload, the offload can be set only on the queue setup.
>>> +Per-port offloads should be set on the port configuration. Queue offloads should be set on the queue configuration.
>>>   Offloads which are not enabled are disabled by default.
>>>   
>>>   For an application to use the Tx offloads API it should set the ``ETH_TXQ_FLAGS_IGNORE`` flag in the ``txq_flags`` field located in ``rte_eth_txconf`` struct.
>> net/sfc has code which double-checks old behaviour. So, it is not just
>> documentation update. We can provide patches if the behaviour
>> change is accepted.
> Not definitely just doc update, PMDs needs to be modified. This patch is just to
> agree on the behavior.
>
>> IMHO, it should be allowed to specify queue offloads on port level.
>> It should simply enable these offloads on all queues. Also it will
>> match dev_info [rt]x_offload_capa which include both port and queue
>> offloads.
>>
>> Yes, we lose possibility to enable on port level, but disable on queue
>> level by suggested changes, but I think it is OK - if you don't need
>> it for all queues, just control separately on queue level.
> What I understand was queue offload can only enable more, but it seems it can
> both enable or disable.
>
> My concern was, even PMD reports no [rt]x_offload_capa at all, API forces
> application to send at least port offloads during queue setup.

I guess you mean [rt]x_queue_offload_capa above.

> As long as application only allowed to send queue offloads within the boundaries
> of the "queue offload capabilities", I am OK.

If so, queue offloads should not be included in [rt]x_offload_capa.
But I'm afraid it is too restrictive for apps.

> This will work fine for devices that support queue level offloads to enable -
> disable queue specific offloads on top of port offloads. Will this make sense?

IMHO, the ability to disable at queue level is not required for offloads
enabled at port level.
If the app always wants some offload, just check [rt]x_offload_capa and
enable it at port level (regardless of whether it is actually per-port or
per-queue).
If the app wants some offload per queue, check [rt]x_queue_offload_capa,
do not enable it at port level, and control it at queue level.
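
Roughly (a sketch only; dev_info, port_conf and rxq_conf are assumed to be
declared as usual):

	rte_eth_dev_info_get(port_id, &dev_info);

	/* Offload wanted on every queue: enable once at port level. */
	if (dev_info.rx_offload_capa & DEV_RX_OFFLOAD_VLAN_STRIP)
		port_conf.rxmode.offloads |= DEV_RX_OFFLOAD_VLAN_STRIP;

	/* Offload wanted on selected queues only: needs per-queue support. */
	if (dev_info.rx_queue_offload_capa & DEV_RX_OFFLOAD_SCATTER)
		rxq_conf.offloads |= DEV_RX_OFFLOAD_SCATTER;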

^ permalink raw reply	[flat|nested] 134+ messages in thread

* Re: [dpdk-dev] [PATCH] doc: update new ethdev offload API description
  2018-03-21 11:08                   ` Andrew Rybchenko
@ 2018-03-21 11:10                     ` Shahaf Shuler
  2018-03-21 11:19                       ` Andrew Rybchenko
  0 siblings, 1 reply; 134+ messages in thread
From: Shahaf Shuler @ 2018-03-21 11:10 UTC (permalink / raw)
  To: Andrew Rybchenko, Ferruh Yigit, John McNamara, Marko Kovacevic
  Cc: dev, Thomas Monjalon, Patil, Harish, Ivan Malov

Wednesday, March 21, 2018 1:09 PM, Andrew Rybchenko
> On 03/21/2018 01:54 PM, Ferruh Yigit wrote:
> > On 3/21/2018 9:47 AM, Andrew Rybchenko wrote:
> >> On 03/16/2018 06:51 PM, Ferruh Yigit wrote:
> >>> Don't mandate API to pass port offload configuration during queue
> >>> setup, this is unnecessary for devices that support only port level
> offloads.
> >>>
> >>> Fixes: 81ac560dc1b4 ("doc: add details on ethdev offloads API")
> >>> Cc: shahafs@mellanox.com
> >>>
> >>> Signed-off-by: Ferruh Yigit <ferruh.yigit@intel.com>
> >>> ---
> >>> Cc: Patil, Harish <harish.patil@cavium.com>
> >>>
> >>> Btw, this expectation from API should be clear from source code and
> >>> API documentation (doxygen comments in header file) instead of
> >>> documentation. Am I missing something or we are doing something
> >>> wrong here?
> >>> ---
> >>>   doc/guides/prog_guide/poll_mode_drv.rst | 4 +---
> >>>   1 file changed, 1 insertion(+), 3 deletions(-)
> >>>
> >>> diff --git a/doc/guides/prog_guide/poll_mode_drv.rst
> >>> b/doc/guides/prog_guide/poll_mode_drv.rst
> >>> index e5d01874e..3247f309f 100644
> >>> --- a/doc/guides/prog_guide/poll_mode_drv.rst
> >>> +++ b/doc/guides/prog_guide/poll_mode_drv.rst
> >>> @@ -303,9 +303,7 @@ Supported offloads can be either per-port or per-
> queue.
> >>>   Offloads are enabled using the existing ``DEV_TX_OFFLOAD_*`` or
> ``DEV_RX_OFFLOAD_*`` flags.
> >>>   Per-port offload configuration is set using ``rte_eth_dev_configure``.
> >>>   Per-queue offload configuration is set using
> ``rte_eth_rx_queue_setup`` and ``rte_eth_tx_queue_setup``.
> >>> -To enable per-port offload, the offload should be set on both device
> configuration and queue setup.
> >>> -In case of a mixed configuration the queue setup shall return with an
> error.
> >>> -To enable per-queue offload, the offload can be set only on the queue
> setup.
> >>> +Per-port offloads should be set on the port configuration. Queue
> offloads should be set on the queue configuration.
> >>>   Offloads which are not enabled are disabled by default.
> >>>
> >>>   For an application to use the Tx offloads API it should set the
> ``ETH_TXQ_FLAGS_IGNORE`` flag in the ``txq_flags`` field located in
> ``rte_eth_txconf`` struct.
> >> net/sfc has code which double-checks old behaviour. So, it is not
> >> just documentation update. We can provide patches if the behaviour
> >> change is accepted.
> > Not definitely just doc update, PMDs needs to be modified. This patch
> > is just to agree on the behavior.
> >
> >> IMHO, it should be allowed to specify queue offloads on port level.
> >> It should simply enable these offloads on all queues. Also it will
> >> match dev_info [rt]x_offload_capa which include both port and queue
> >> offloads.
> >>
> >> Yes, we lose possibility to enable on port level, but disable on
> >> queue level by suggested changes, but I think it is OK - if you don't
> >> need it for all queues, just control separately on queue level.
> > What I understand was queue offload can only enable more, but it seems
> > it can both enable or disable.
> >
> > My concern was, even PMD reports no [rt]x_offload_capa at all, API
> > forces application to send at least port offloads during queue setup.
> 
> I guess you mean [rt]x_queue_offload_capa above.
> 
> > As long as application only allowed to send queue offloads within the
> > boundaries of the "queue offload capabilities", I am OK.
> 
> If so, queue offloads should not be included in [rt]x_offload_capa.
> But I'm afraid it is too restrictive for apps.
> 
> > This will work fine for devices that support queue level offloads to
> > enable - disable queue specific offloads on top of port offloads. Will this
> make sense?
> 
> IMHO, disable on queue level is not required for enabled on port level.
> If app always wants some offloads, just check [rt]x_offload_capa and enable
> on port level (regardless if it is actually per-port or per-queue).
> If app wants to some offload per queue, check [rt]x_queue_offload_capa,
> do not enable on port level and control on queue level.

+1. 

And I think, Ferruh, this is what this patch suggests, isn't it?


^ permalink raw reply	[flat|nested] 134+ messages in thread

* Re: [dpdk-dev] [PATCH] doc: update new ethdev offload API description
  2018-03-21 11:10                     ` Shahaf Shuler
@ 2018-03-21 11:19                       ` Andrew Rybchenko
  2018-03-21 11:23                         ` Shahaf Shuler
  0 siblings, 1 reply; 134+ messages in thread
From: Andrew Rybchenko @ 2018-03-21 11:19 UTC (permalink / raw)
  To: Shahaf Shuler, Ferruh Yigit, John McNamara, Marko Kovacevic
  Cc: dev, Thomas Monjalon, Patil, Harish, Ivan Malov

On 03/21/2018 02:10 PM, Shahaf Shuler wrote:
> Wednesday, March 21, 2018 1:09 PM, Andrew Rybchenko
>> On 03/21/2018 01:54 PM, Ferruh Yigit wrote:
>>> On 3/21/2018 9:47 AM, Andrew Rybchenko wrote:
>>>> On 03/16/2018 06:51 PM, Ferruh Yigit wrote:
>>>>> Don't mandate API to pass port offload configuration during queue
>>>>> setup, this is unnecessary for devices that support only port level
>> offloads.
>>>>> Fixes: 81ac560dc1b4 ("doc: add details on ethdev offloads API")
>>>>> Cc: shahafs@mellanox.com
>>>>>
>>>>> Signed-off-by: Ferruh Yigit <ferruh.yigit@intel.com>
>>>>> ---
>>>>> Cc: Patil, Harish <harish.patil@cavium.com>
>>>>>
>>>>> Btw, this expectation from API should be clear from source code and
>>>>> API documentation (doxygen comments in header file) instead of
>>>>> documentation. Am I missing something or we are doing something
>>>>> wrong here?
>>>>> ---
>>>>>    doc/guides/prog_guide/poll_mode_drv.rst | 4 +---
>>>>>    1 file changed, 1 insertion(+), 3 deletions(-)
>>>>>
>>>>> diff --git a/doc/guides/prog_guide/poll_mode_drv.rst
>>>>> b/doc/guides/prog_guide/poll_mode_drv.rst
>>>>> index e5d01874e..3247f309f 100644
>>>>> --- a/doc/guides/prog_guide/poll_mode_drv.rst
>>>>> +++ b/doc/guides/prog_guide/poll_mode_drv.rst
>>>>> @@ -303,9 +303,7 @@ Supported offloads can be either per-port or per-
>> queue.
>>>>>    Offloads are enabled using the existing ``DEV_TX_OFFLOAD_*`` or
>> ``DEV_RX_OFFLOAD_*`` flags.
>>>>>    Per-port offload configuration is set using ``rte_eth_dev_configure``.
>>>>>    Per-queue offload configuration is set using
>> ``rte_eth_rx_queue_setup`` and ``rte_eth_tx_queue_setup``.
>>>>> -To enable per-port offload, the offload should be set on both device
>> configuration and queue setup.
>>>>> -In case of a mixed configuration the queue setup shall return with an
>> error.
>>>>> -To enable per-queue offload, the offload can be set only on the queue
>> setup.
>>>>> +Per-port offloads should be set on the port configuration. Queue
>> offloads should be set on the queue configuration.
>>>>>    Offloads which are not enabled are disabled by default.
>>>>>
>>>>>    For an application to use the Tx offloads API it should set the
>> ``ETH_TXQ_FLAGS_IGNORE`` flag in the ``txq_flags`` field located in
>> ``rte_eth_txconf`` struct.
>>>> net/sfc has code which double-checks the old behaviour. So, it is not
>>>> just a documentation update. We can provide patches if the behaviour
>>>> change is accepted.
>>> Definitely not just a doc update; PMDs need to be modified. This patch
>>> is just there to agree on the behavior.
>>>
>>>> IMHO, it should be allowed to specify queue offloads at port level.
>>>> It should simply enable these offloads on all queues. It will also
>>>> match dev_info [rt]x_offload_capa, which includes both port and queue
>>>> offloads.
>>>>
>>>> Yes, with the suggested changes we lose the possibility to enable at
>>>> port level but disable at queue level, but I think it is OK - if you
>>>> don't need it for all queues, just control it separately at queue level.
>>> What I understood was that queue offloads can only enable more, but it
>>> seems they can both enable and disable.
>>>
>>> My concern was that even if a PMD reports no [rt]x_offload_capa at all,
>>> the API forces the application to send at least the port offloads
>>> during queue setup.
>> I guess you mean [rt]x_queue_offload_capa above.
>>
>>> As long as the application is only allowed to send queue offloads within
>>> the boundaries of the "queue offload capabilities", I am OK.
>> If so, queue offloads should not be included in [rt]x_offload_capa.
>> But I'm afraid it is too restrictive for apps.
>>
>>> This will work fine for devices that support queue-level offloads to
>>> enable/disable queue-specific offloads on top of port offloads. Does
>>> this make sense?
>>
>> IMHO, disabling at queue level is not required for offloads enabled at
>> port level. If the app always wants some offloads, just check
>> [rt]x_offload_capa and enable at port level (regardless of whether it is
>> actually per-port or per-queue). If the app wants some offload per queue,
>> check [rt]x_queue_offload_capa, do not enable at port level, and control
>> it at queue level.
> +1.
>
> And I think, Ferruh, this is what this patch suggests, isn't it?

Not exactly. We should add a statement allowing queue offloads to be
enabled at port level (to enable them on all queues).

^ permalink raw reply	[flat|nested] 134+ messages in thread

* Re: [dpdk-dev] [PATCH] doc: update new ethdev offload API description
  2018-03-21 11:19                       ` Andrew Rybchenko
@ 2018-03-21 11:23                         ` Shahaf Shuler
  2018-03-21 11:37                           ` Andrew Rybchenko
  0 siblings, 1 reply; 134+ messages in thread
From: Shahaf Shuler @ 2018-03-21 11:23 UTC (permalink / raw)
  To: Andrew Rybchenko, Ferruh Yigit, John McNamara, Marko Kovacevic
  Cc: dev, Thomas Monjalon, Patil, Harish, Ivan Malov

Wednesday, March 21, 2018 1:20 PM, Andrew Rybchenko:
>Not exactly. We should add a statement allowing queue offloads to be
>enabled at port level (to enable them on all queues).

Why is it needed?

A queue offload is also a port offload; for the simple case it is enabled on each of the queues.
PMDs should report rx[tx]_offload_capa = port_offloads | queue_offloads

So from the application side it enables a **port** offload which, by definition, will set the offload on each of the queues.
It is not “enabling a queue offload on the port”.
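
A minimal sketch of that reporting in a PMD's dev_infos_get callback (the
MYPMD_* offload masks are hypothetical placeholders, not a real driver):

    static void
    mypmd_dev_infos_get(struct rte_eth_dev *dev,
                        struct rte_eth_dev_info *info)
    {
        RTE_SET_USED(dev);
        /* Offloads that can be toggled individually per queue. */
        info->rx_queue_offload_capa = MYPMD_RX_QUEUE_OFFLOADS;
        /* Port capabilities include the queue capabilities too. */
        info->rx_offload_capa = MYPMD_RX_PORT_OFFLOADS |
                                info->rx_queue_offload_capa;
    }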



^ permalink raw reply	[flat|nested] 134+ messages in thread

* Re: [dpdk-dev] [PATCH] doc: update new ethdev offload API description
  2018-03-21 11:23                         ` Shahaf Shuler
@ 2018-03-21 11:37                           ` Andrew Rybchenko
  2018-03-21 11:40                             ` Shahaf Shuler
  2018-03-21 12:03                             ` Ananyev, Konstantin
  0 siblings, 2 replies; 134+ messages in thread
From: Andrew Rybchenko @ 2018-03-21 11:37 UTC (permalink / raw)
  To: Shahaf Shuler, Ferruh Yigit, John McNamara, Marko Kovacevic
  Cc: dev, Thomas Monjalon, Patil, Harish, Ivan Malov

On 03/21/2018 02:23 PM, Shahaf Shuler wrote:
>
> Wednesday, March 21, 2018 1:20 PM, Andrew Rybchenko:
>
> >Not exactly. We should add a statement allowing queue offloads to be
> >enabled at port level (to enable them on all queues).
>
> Why is it needed?
>

Maybe just paranoia, to avoid misreading/misunderstanding.

> A queue offload is also a port offload; for the simple case it is
> enabled on each of the queues.
>
> PMDs should report rx[tx]_offload_capa = port_offloads | queue_offloads
>
> So from the application side it enables a **port** offload which, by
> definition, will set the offload on each of the queues.
>
> It is not “enabling a queue offload on the port”.
>

I think it would be really useful for understanding to highlight
that what is enabled at port level is enabled on all queues
regardless of the queue conf.

^ permalink raw reply	[flat|nested] 134+ messages in thread

* Re: [dpdk-dev] [PATCH] doc: update new ethdev offload API description
  2018-03-21 11:37                           ` Andrew Rybchenko
@ 2018-03-21 11:40                             ` Shahaf Shuler
  2018-03-21 12:52                               ` Ferruh Yigit
  2018-03-21 12:03                             ` Ananyev, Konstantin
  1 sibling, 1 reply; 134+ messages in thread
From: Shahaf Shuler @ 2018-03-21 11:40 UTC (permalink / raw)
  To: Andrew Rybchenko, Ferruh Yigit, John McNamara, Marko Kovacevic
  Cc: dev, Thomas Monjalon, Patil, Harish, Ivan Malov

Wednesday, March 21, 2018 1:37 PM, Andrew Rybchenko:
> On 03/21/2018 02:23 PM, Shahaf Shuler wrote:
> >
> > Wednesday, March 21, 2018 1:20 PM, Andrew Rybchenko:
> >
> > >Not exactly. We should add a statement allowing queue offloads to be
> > >enabled at port level (to enable them on all queues).
> >
> > Why is it needed?
> >
> 
> Maybe just paranoia, to avoid misreading/misunderstanding.
> 
> > A queue offload is also a port offload; for the simple case it is
> > enabled on each of the queues.
> >
> > PMDs should report rx[tx]_offload_capa = port_offloads |
> > queue_offloads
> >
> > So from the application side it enables a **port** offload which, by
> > definition, will set the offload on each of the queues.
> >
> > It is not “enabling a queue offload on the port”.
> >
> 
> I think it would be really useful for understanding to highlight that what
> is enabled at port level is enabled on all queues regardless of the queue
> conf.

So I think the extra wording should explain that a queue offload is also a port offload, and that the queue and port offload configurations should not be mixed.


^ permalink raw reply	[flat|nested] 134+ messages in thread

* Re: [dpdk-dev] [PATCH] doc: update new ethdev offload API description
  2018-03-21 11:37                           ` Andrew Rybchenko
  2018-03-21 11:40                             ` Shahaf Shuler
@ 2018-03-21 12:03                             ` Ananyev, Konstantin
  2018-03-21 12:29                               ` Shahaf Shuler
  2018-03-21 12:34                               ` Andrew Rybchenko
  1 sibling, 2 replies; 134+ messages in thread
From: Ananyev, Konstantin @ 2018-03-21 12:03 UTC (permalink / raw)
  To: Andrew Rybchenko, Shahaf Shuler, Yigit, Ferruh, Mcnamara, John,
	Kovacevic, Marko
  Cc: dev, Thomas Monjalon, Patil, Harish, Ivan Malov

Hi everyone,

> -----Original Message-----
> From: dev [mailto:dev-bounces@dpdk.org] On Behalf Of Andrew Rybchenko
> Sent: Wednesday, March 21, 2018 11:37 AM
> To: Shahaf Shuler <shahafs@mellanox.com>; Yigit, Ferruh <ferruh.yigit@intel.com>; Mcnamara, John <john.mcnamara@intel.com>;
> Kovacevic, Marko <marko.kovacevic@intel.com>
> Cc: dev@dpdk.org; Thomas Monjalon <thomas@monjalon.net>; Patil@dpdk.org; Harish <harish.patil@cavium.com>; Ivan Malov
> <Ivan.Malov@oktetlabs.ru>
> Subject: Re: [dpdk-dev] [PATCH] doc: update new ethdev offload API description
> 
> On 03/21/2018 02:23 PM, Shahaf Shuler wrote:
> >
> > Wednesday, March 21, 2018 1:20 PM, Andrew Rybchenko:
> >
> > >Not exactly. We should add a statement allowing queue offloads to be
> > >enabled at port level (to enable them on all queues).
> >
> > Why is it needed?
> >
> 
> Maybe just paranoia, to avoid misreading/misunderstanding.
> 
> > A queue offload is also a port offload; for the simple case it is
> > enabled on each of the queues.
> >
> > PMDs should report rx[tx]_offload_capa = port_offloads | queue_offloads
> >
> > So from the application side it enables a **port** offload which, by
> > definition, will set the offload on each of the queues.
> >
> > It is not “enabling a queue offload on the port”.
> >
> 
> I think it would be really useful for understanding to highlight
> that what is enabled at port level is enabled on all queues
> regardless of the queue conf.

Why not allow queue offloads to override port offloads for a given queue?
Konstantin



^ permalink raw reply	[flat|nested] 134+ messages in thread

* Re: [dpdk-dev] [PATCH] doc: update new ethdev offload API description
  2018-03-21 12:03                             ` Ananyev, Konstantin
@ 2018-03-21 12:29                               ` Shahaf Shuler
  2018-03-21 12:34                               ` Andrew Rybchenko
  1 sibling, 0 replies; 134+ messages in thread
From: Shahaf Shuler @ 2018-03-21 12:29 UTC (permalink / raw)
  To: Ananyev, Konstantin, Andrew Rybchenko, Yigit, Ferruh, Mcnamara,
	John, Kovacevic, Marko
  Cc: dev, Thomas Monjalon, Patil, Harish, Ivan Malov

Wednesday, March 21, 2018 2:04 PM, Ananyev, Konstantin:
> Hi everyone,
> 
> > -----Original Message-----
> > From: dev [mailto:dev-bounces@dpdk.org] On Behalf Of Andrew
> Rybchenko
> > Sent: Wednesday, March 21, 2018 11:37 AM
> > To: Shahaf Shuler <shahafs@mellanox.com>; Yigit, Ferruh
> > <ferruh.yigit@intel.com>; Mcnamara, John <john.mcnamara@intel.com>;
> > Kovacevic, Marko <marko.kovacevic@intel.com>
> > Cc: dev@dpdk.org; Thomas Monjalon <thomas@monjalon.net>;
> > Patil@dpdk.org; Harish <harish.patil@cavium.com>; Ivan Malov
> > <Ivan.Malov@oktetlabs.ru>
> > Subject: Re: [dpdk-dev] [PATCH] doc: update new ethdev offload API
> > description
> >
> > On 03/21/2018 02:23 PM, Shahaf Shuler wrote:
> > >
> > > Wednesday, March 21, 2018 1:20 PM, Andrew Rybchenko:
> > >
> > > >Not exactly. We should add a statement allowing queue offloads to be
> > > >enabled at port level (to enable them on all queues).
> > >
> > > Why is it needed?
> > >
> >
> > Maybe just paranoia, to avoid misreading/misunderstanding.
> >
> > > A queue offload is also a port offload; for the simple case it is
> > > enabled on each of the queues.
> > >
> > > PMDs should report rx[tx]_offload_capa = port_offloads |
> > > queue_offloads
> > >
> > > So from the application side it enables a **port** offload which, by
> > > definition, will set the offload on each of the queues.
> > >
> > > It is not “enabling a queue offload on the port”.
> > >
> >
> > I think it would be really useful for understanding to highlight that
> > what is enabled at port level is enabled on all queues regardless of
> > the queue conf.
> 
> Why not allow queue offloads to override port offloads for a given queue?
> Konstantin

What is the use case for that? Why would an application want to enable an offload on the port and then disable it on some of the queues?
Why not just enable it on the needed queues as part of the queue-level configuration?




> 


^ permalink raw reply	[flat|nested] 134+ messages in thread

* Re: [dpdk-dev] [PATCH] doc: update new ethdev offload API description
  2018-03-21 12:03                             ` Ananyev, Konstantin
  2018-03-21 12:29                               ` Shahaf Shuler
@ 2018-03-21 12:34                               ` Andrew Rybchenko
  2018-03-21 12:37                                 ` Ananyev, Konstantin
  1 sibling, 1 reply; 134+ messages in thread
From: Andrew Rybchenko @ 2018-03-21 12:34 UTC (permalink / raw)
  To: Ananyev, Konstantin, Shahaf Shuler, Yigit, Ferruh, Mcnamara,
	John, Kovacevic, Marko
  Cc: dev, Thomas Monjalon, Patil, Harish, Ivan Malov

On 03/21/2018 03:03 PM, Ananyev, Konstantin wrote:
> Hi everyone,
>
>> -----Original Message-----
>> From: dev [mailto:dev-bounces@dpdk.org] On Behalf Of Andrew Rybchenko
>> Sent: Wednesday, March 21, 2018 11:37 AM
>> To: Shahaf Shuler <shahafs@mellanox.com>; Yigit, Ferruh <ferruh.yigit@intel.com>; Mcnamara, John <john.mcnamara@intel.com>;
>> Kovacevic, Marko <marko.kovacevic@intel.com>
>> Cc: dev@dpdk.org; Thomas Monjalon <thomas@monjalon.net>; Patil@dpdk.org; Harish <harish.patil@cavium.com>; Ivan Malov
>> <Ivan.Malov@oktetlabs.ru>
>> Subject: Re: [dpdk-dev] [PATCH] doc: update new ethdev offload API description
>>
>> On 03/21/2018 02:23 PM, Shahaf Shuler wrote:
>>> Wednesday, March 21, 2018 1:20 PM, Andrew Rybchenko:
>>>
>>>> Not exactly. We should add a statement allowing queue offloads to be
>>>> enabled at port level (to enable them on all queues).
>>> Why is it needed?
>>>
>> Maybe just paranoia, to avoid misreading/misunderstanding.
>>
>>> A queue offload is also a port offload; for the simple case it is
>>> enabled on each of the queues.
>>>
>>> PMDs should report rx[tx]_offload_capa = port_offloads | queue_offloads
>>>
>>> So from the application side it enables a **port** offload which, by
>>> definition, will set the offload on each of the queues.
>>>
>>> It is not “enabling a queue offload on the port”.
>>>
>> I think it would be really useful for understanding to highlight
>> that what is enabled at port level is enabled on all queues
>> regardless of the queue conf.
> Why not allow queue offloads to override port offloads for a given queue?

Basically it returns us to the initial point made by Ferruh:
if the device has no queue offloads, the application still has to repeat
the port offloads in the queue offloads.

^ permalink raw reply	[flat|nested] 134+ messages in thread

* Re: [dpdk-dev] [PATCH] doc: update new ethdev offload API description
  2018-03-21 12:34                               ` Andrew Rybchenko
@ 2018-03-21 12:37                                 ` Ananyev, Konstantin
  0 siblings, 0 replies; 134+ messages in thread
From: Ananyev, Konstantin @ 2018-03-21 12:37 UTC (permalink / raw)
  To: Andrew Rybchenko, Shahaf Shuler, Yigit, Ferruh, Mcnamara, John,
	Kovacevic, Marko
  Cc: dev, Thomas Monjalon, Patil, Harish, Ivan Malov



> -----Original Message-----
> From: Andrew Rybchenko [mailto:arybchenko@solarflare.com]
> Sent: Wednesday, March 21, 2018 12:34 PM
> To: Ananyev, Konstantin <konstantin.ananyev@intel.com>; Shahaf Shuler <shahafs@mellanox.com>; Yigit, Ferruh
> <ferruh.yigit@intel.com>; Mcnamara, John <john.mcnamara@intel.com>; Kovacevic, Marko <marko.kovacevic@intel.com>
> Cc: dev@dpdk.org; Thomas Monjalon <thomas@monjalon.net>; Patil@dpdk.org; Harish <harish.patil@cavium.com>; Ivan Malov
> <Ivan.Malov@oktetlabs.ru>
> Subject: Re: [dpdk-dev] [PATCH] doc: update new ethdev offload API description
> 
> On 03/21/2018 03:03 PM, Ananyev, Konstantin wrote:
> > Hi everyone,
> >
> >> -----Original Message-----
> >> From: dev [mailto:dev-bounces@dpdk.org] On Behalf Of Andrew Rybchenko
> >> Sent: Wednesday, March 21, 2018 11:37 AM
> >> To: Shahaf Shuler <shahafs@mellanox.com>; Yigit, Ferruh <ferruh.yigit@intel.com>; Mcnamara, John <john.mcnamara@intel.com>;
> >> Kovacevic, Marko <marko.kovacevic@intel.com>
> >> Cc: dev@dpdk.org; Thomas Monjalon <thomas@monjalon.net>; Patil@dpdk.org; Harish <harish.patil@cavium.com>; Ivan Malov
> >> <Ivan.Malov@oktetlabs.ru>
> >> Subject: Re: [dpdk-dev] [PATCH] doc: update new ethdev offload API description
> >>
> >> On 03/21/2018 02:23 PM, Shahaf Shuler wrote:
> >>> Wednesday, March 21, 2018 1:20 PM, Andrew Rybchenko:
> >>>
> >>>> Not exactly. We should add a statement allowing queue offloads to be
> >>>> enabled at port level (to enable them on all queues).
> >>> Why is it needed?
> >>>
> >> Maybe just paranoia, to avoid misreading/misunderstanding.
> >>
> >>> A queue offload is also a port offload; for the simple case it is
> >>> enabled on each of the queues.
> >>>
> >>> PMDs should report rx[tx]_offload_capa = port_offloads | queue_offloads
> >>>
> >>> So from the application side it enables a **port** offload which, by
> >>> definition, will set the offload on each of the queues.
> >>>
> >>> It is not “enabling a queue offload on the port”.
> >>>
> >> I think it would be really useful for understanding to highlight
> >> that what is enabled at port level is enabled on all queues
> >> regardless of the queue conf.
> > Why not allow queue offloads to override port offloads for a given queue?
> 
> Basically it returns us to the initial point made by Ferruh:
> if the device has no queue offloads, the application still has to repeat
> the port offloads in the queue offloads.

If the device doesn't have per-queue offloads (only per-port ones), then
there should be nothing to enable/disable per queue, no?
Or would you like to allow queue_setup() to enable/disable port offloads too?
   


^ permalink raw reply	[flat|nested] 134+ messages in thread

* Re: [dpdk-dev] [PATCH] doc: update new ethdev offload API description
  2018-03-21 11:40                             ` Shahaf Shuler
@ 2018-03-21 12:52                               ` Ferruh Yigit
  2018-03-21 13:06                                 ` Shahaf Shuler
  0 siblings, 1 reply; 134+ messages in thread
From: Ferruh Yigit @ 2018-03-21 12:52 UTC (permalink / raw)
  To: Shahaf Shuler, Andrew Rybchenko, John McNamara, Marko Kovacevic
  Cc: dev, Thomas Monjalon, Patil, Harish, Ivan Malov

On 3/21/2018 11:40 AM, Shahaf Shuler wrote:
> Wednesday, March 21, 2018 1:37 PM, Andrew Rybchenko:
>> On 03/21/2018 02:23 PM, Shahaf Shuler wrote:
>>>
>>> Wednesday, March 21, 2018 1:20 PM, Andrew Rybchenko:
>>>
>>>> Not exactly. We should add a statement allowing queue offloads to be
>>>> enabled at port level (to enable them on all queues).
>>>
>>> Why is it needed?
>>>
>>
>> Maybe just paranoia, to avoid misreading/misunderstanding.
>>
>>> A queue offload is also a port offload; for the simple case it is
>>> enabled on each of the queues.
>>>
>>> PMDs should report rx[tx]_offload_capa = port_offloads |
>>> queue_offloads
>>>
>>> So from the application side it enables a **port** offload which, by
>>> definition, will set the offload on each of the queues.
>>>
>>> It is not “enabling a queue offload on the port”.
>>>
>>
>> I think it would be really useful for understanding to highlight that what
>> is enabled at port level is enabled on all queues regardless of the queue
>> conf.
> 
> So I think the extra wording should explain that a queue offload is also a port offload, and that the queue and port offload configurations should not be mixed.

+1 for more details; the sentences were the outcome of the previous discussion
but are not clear enough. Perhaps some sample values would also be good.
Shahaf, do you want to give it a try?

And is the following correct based on the latest discussion:
1- Port capabilities always cover the queue capabilities
P_cap = A, B, C, D
Q_cap = B, C, D

2- Requested port offloads should be a subset of the port capabilities; they
will be applied to all queues:
P_req = A, B
Q1: A, B
Q2: A, B

3- Later, requested queue offloads should be a subset of the queue
capabilities; they will be applied to the specific queue:
Q_req = 1:B, C
Q1: A, B, C
Q2: A, B

Q_req = 2:D
Q1: A, B, C
Q2: A, D


Scenario 2:
1-
P_cap = A, B, C, D
Q_cap = ""

2-
P_req = A, B
Q1: A, B
Q2: A, B

3-
Q_req = ""
Q1: A, B
Q2: A, B
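
An illustrative reading of the scenarios above in code, assuming the queue
request replaces the queue-controllable part while port-only offloads stay
on (whether queue setup may drop a port-requested queue offload is exactly
what is being debated in this thread):

    /* Offloads effectively active on a queue after its queue setup. */
    static uint64_t
    effective_offloads(uint64_t port_req, uint64_t queue_capa,
                       uint64_t queue_req)
    {
        /* Scenario 1, Q2: ((A|B) & ~(B|C|D)) | D == A|D */
        return (port_req & ~queue_capa) | queue_req;
    }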

^ permalink raw reply	[flat|nested] 134+ messages in thread

* Re: [dpdk-dev] [PATCH] doc: update new ethdev offload API description
  2018-03-21 12:52                               ` Ferruh Yigit
@ 2018-03-21 13:06                                 ` Shahaf Shuler
  2018-03-21 13:11                                   ` Ananyev, Konstantin
  0 siblings, 1 reply; 134+ messages in thread
From: Shahaf Shuler @ 2018-03-21 13:06 UTC (permalink / raw)
  To: Ferruh Yigit, Andrew Rybchenko, John McNamara, Marko Kovacevic
  Cc: dev, Thomas Monjalon, Patil, Harish, Ivan Malov

Wednesday, March 21, 2018 2:52 PM, Ferruh Yigit:
> On 3/21/2018 11:40 AM, Shahaf Shuler wrote:
> > Wednesday, March 21, 2018 1:37 PM, Andrew Rybchenko:
> >> On 03/21/2018 02:23 PM, Shahaf Shuler wrote:
> >>>
> >>> Wednesday, March 21, 2018 1:20 PM, Andrew Rybchenko:
> >>>
> >>>> Not exactly. We should add a statement allowing queue offloads to be
> >>>> enabled at port level (to enable them on all queues).
> >>>
> >>> Why is it needed?
> >>>
> >>
> >> Maybe just paranoia, to avoid misreading/misunderstanding.
> >>
> >>> A queue offload is also a port offload; for the simple case it is
> >>> enabled on each of the queues.
> >>>
> >>> PMDs should report rx[tx]_offload_capa = port_offloads |
> >>> queue_offloads
> >>>
> >>> So from the application side it enables a **port** offload which, by
> >>> definition, will set the offload on each of the queues.
> >>>
> >>> It is not “enabling a queue offload on the port”.
> >>>
> >>
> >> I think it would be really useful for understanding to highlight that
> >> what is enabled at port level is enabled on all queues regardless of
> >> the queue conf.
> >
> > So I think the extra wording should explain that a queue offload is also
> > a port offload, and that the queue and port offload configurations
> > should not be mixed.
> 
> +1 for more details; the sentences were the outcome of the previous
> discussion but are not clear enough. Perhaps some sample values would
> also be good. Shahaf, do you want to give it a try?

IMO what is missing is:
1. the syntax of this patch
2. explicitly writing that queue offloads are always a subset of the port offloads
3. explicitly writing that when a port offload is enabled it applies to all queues (that one is already written, AFAIR).

> 
> And is the following correct based on the latest discussion:
> 1- Port capabilities always cover the queue capabilities: P_cap = A, B, C, D;
> Q_cap = B, C, D
> 
> 2- Requested port offloads should be a subset of the port capabilities; they
> will be applied to all queues:
> P_req = A, B
> Q1: A, B
> Q2: A, B
> 
> 3- Later, requested queue offloads should be a subset of the queue
> capabilities; they will be applied to the specific queue:
> Q_req = 1:B, C
> Q1: A, B, C
> Q2: A, B
> 
> Q_req = 2:D
> Q1: A, B, C
> Q2: A, D
> 
> 
> Scenario 2:
> 1-
> P_cap = A, B, C, D
> Q_cap = ""
> 
> 2-
> P_req = A, B
> Q1: A, B
> Q2: A, B
> 
> 3-
> Q_req = ""
> Q1: A, B
> Q2: A, B


^ permalink raw reply	[flat|nested] 134+ messages in thread

* Re: [dpdk-dev] [PATCH] doc: update new ethdev offload API description
  2018-03-21 13:06                                 ` Shahaf Shuler
@ 2018-03-21 13:11                                   ` Ananyev, Konstantin
  0 siblings, 0 replies; 134+ messages in thread
From: Ananyev, Konstantin @ 2018-03-21 13:11 UTC (permalink / raw)
  To: Shahaf Shuler, Yigit, Ferruh, Andrew Rybchenko, Mcnamara, John,
	Kovacevic, Marko
  Cc: dev, Thomas Monjalon, Patil, Harish, Ivan Malov



> -----Original Message-----
> From: dev [mailto:dev-bounces@dpdk.org] On Behalf Of Shahaf Shuler
> Sent: Wednesday, March 21, 2018 1:06 PM
> To: Yigit, Ferruh <ferruh.yigit@intel.com>; Andrew Rybchenko <arybchenko@solarflare.com>; Mcnamara, John
> <john.mcnamara@intel.com>; Kovacevic, Marko <marko.kovacevic@intel.com>
> Cc: dev@dpdk.org; Thomas Monjalon <thomas@monjalon.net>; Patil@dpdk.org; Harish <harish.patil@cavium.com>; Ivan Malov
> <Ivan.Malov@oktetlabs.ru>
> Subject: Re: [dpdk-dev] [PATCH] doc: update new ethdev offload API description
> 
> Wednesday, March 21, 2018 2:52 PM, Ferruh Yigit:
> > On 3/21/2018 11:40 AM, Shahaf Shuler wrote:
> > > Wednesday, March 21, 2018 1:37 PM, Andrew Rybchenko:
> > >> On 03/21/2018 02:23 PM, Shahaf Shuler wrote:
> > >>>
> > >>> Wednesday, March 21, 2018 1:20 PM, Andrew Rybchenko:
> > >>>
> > >>>> Not exactly. We should add a statement allowing queue offloads to be
> > >>>> enabled at port level (to enable them on all queues).
> > >>>
> > >>> Why is it needed?
> > >>>
> > >>
> > >> Maybe just paranoia, to avoid misreading/misunderstanding.
> > >>
> > >>> A queue offload is also a port offload; for the simple case it is
> > >>> enabled on each of the queues.
> > >>>
> > >>> PMDs should report rx[tx]_offload_capa = port_offloads |
> > >>> queue_offloads
> > >>>
> > >>> So from the application side it enables a **port** offload which, by
> > >>> definition, will set the offload on each of the queues.
> > >>>
> > >>> It is not “enabling a queue offload on the port”.
> > >>>
> > >>
> > >> I think it would be really useful for understanding to highlight that
> > >> what is enabled at port level is enabled on all queues regardless of
> > >> the queue conf.
> > >
> > > So I think the extra wording should explain that a queue offload is
> > > also a port offload, and that the queue and port offload configurations
> > > should not be mixed.
> > 
> > +1 for more details; the sentences were the outcome of the previous
> > discussion but are not clear enough. Perhaps some sample values would
> > also be good. Shahaf, do you want to give it a try?
> 
> IMO what is missing is:
> 1. the syntax of this patch
> 2. explicitly writing that queue offloads are always a subset of the port offloads
> 3. explicitly writing that when a port offload is enabled it applies to all queues (that one is already written, AFAIR).

For me, the examples below seem quite handy.
They clearly show that queue_setup() can do both: enable and disable any per-queue offload.
Konstantin

> 
> >
> > And is the following correct based on the latest discussion:
> > 1- Port capabilities always cover the queue capabilities: P_cap = A, B, C, D;
> > Q_cap = B, C, D
> >
> > 2- Requested port offloads should be a subset of the port capabilities;
> > they will be applied to all queues:
> > P_req = A, B
> > Q1: A, B
> > Q2: A, B
> >
> > 3- Later, requested queue offloads should be a subset of the queue
> > capabilities; they will be applied to the specific queue:
> > Q_req = 1:B, C
> > Q1: A, B, C
> > Q2: A, B
> >
> > Q_req = 2:D
> > Q1: A, B, C
> > Q2: A, D
> >
> >
> > Scenario 2:
> > 1-
> > P_cap = A, B, C, D
> > Q_cap = ""
> >
> > 2-
> > P_req = A, B
> > Q1: A, B
> > Q2: A, B
> >
> > 3-
> > Q_req = ""
> > Q1: A, B
> > Q2: A, B


^ permalink raw reply	[flat|nested] 134+ messages in thread

* Re: [dpdk-dev] [PATCH] doc: update new ethdev offload API description
  2018-03-21 10:54                 ` Ferruh Yigit
  2018-03-21 11:08                   ` Andrew Rybchenko
@ 2018-03-21 14:08                   ` Thomas Monjalon
  2018-03-21 14:28                     ` Ferruh Yigit
  1 sibling, 1 reply; 134+ messages in thread
From: Thomas Monjalon @ 2018-03-21 14:08 UTC (permalink / raw)
  To: Ferruh Yigit, Shahaf Shuler
  Cc: dev, Andrew Rybchenko, John McNamara, Marko Kovacevic, Patil,
	Harish, Ivan Malov

21/03/2018 11:54, Ferruh Yigit:
> On 3/21/2018 9:47 AM, Andrew Rybchenko wrote:
> > IMHO, it should be allowed to specify queue offloads at port level.
> > It should simply enable these offloads on all queues. It will also
> > match dev_info [rt]x_offload_capa, which includes both port and queue
> > offloads.
> > 
> > Yes, with the suggested changes we lose the possibility to enable at
> > port level but disable at queue level, but I think it is OK - if you
> > don't need it for all queues, just control it separately at queue level.
> 
> What I understood was that queue offloads can only enable more, but it
> seems they can both enable and disable.

Yes, queue offloads should only enable more.
An offload enabled at port level cannot be disabled at queue level.
A port offload can be repeated in the queue configuration.
If a port offload is not repeated in the queue configuration, there should be
no impact: it is still in the port configuration, thus applying to all queues.

About capabilities, the queue offloads must be a subset of the port offloads.
The queue capabilities show which offloads can be enabled per queue.
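
A minimal sketch of how these rules could look as an ethdev-layer check at
queue setup time (a hypothetical helper, not the actual ethdev code):

    #include <errno.h>
    #include <rte_ethdev.h>

    static int
    check_rx_queue_offloads(const struct rte_eth_dev_info *info,
                            uint64_t queue_req)
    {
        /* Per-queue requests must stay within the queue capabilities
         * (themselves a subset of the port capabilities). */
        if (queue_req & ~info->rx_queue_offload_capa)
            return -EINVAL;
        /* Offloads already enabled at port level are left untouched:
         * the queue configuration can only enable more on top of them. */
        return 0;
    }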

^ permalink raw reply	[flat|nested] 134+ messages in thread

* Re: [dpdk-dev] [PATCH] doc: update new ethdev offload API description
  2018-03-21 14:08                   ` Thomas Monjalon
@ 2018-03-21 14:28                     ` Ferruh Yigit
  2018-03-21 14:40                       ` Thomas Monjalon
  0 siblings, 1 reply; 134+ messages in thread
From: Ferruh Yigit @ 2018-03-21 14:28 UTC (permalink / raw)
  To: Thomas Monjalon, Shahaf Shuler
  Cc: dev, Andrew Rybchenko, John McNamara, Marko Kovacevic, Patil,
	Harish, Ivan Malov

On 3/21/2018 2:08 PM, Thomas Monjalon wrote:
> 21/03/2018 11:54, Ferruh Yigit:
>> On 3/21/2018 9:47 AM, Andrew Rybchenko wrote:
>>> IMHO, it should be allowed to specify queue offloads at port level.
>>> It should simply enable these offloads on all queues. It will also
>>> match dev_info [rt]x_offload_capa, which includes both port and queue
>>> offloads.
>>>
>>> Yes, with the suggested changes we lose the possibility to enable at
>>> port level but disable at queue level, but I think it is OK - if you
>>> don't need it for all queues, just control it separately at queue level.
>>
>> What I understood was that queue offloads can only enable more, but it
>> seems they can both enable and disable.
> 
> Yes, queue offloads should only enable more.
> An offload enabled at port level cannot be disabled at queue level.

Agreed, an offload enabled at port level can't be disabled at queue level, but
why not have the ability to disable a queue-level offload with another queue
setup call?

> A port offload can be repeated in the queue configuration.
> If a port offload is not repeated in the queue configuration, there should be
> no impact: it is still in the port configuration, thus applying to all queues.

This was a requirement; this patch targets removing the requirement to repeat
the port offloads in the queue config.

> 
> About capabilities, the queue offloads must be a subset of the port offloads.
> The queue capabilities show which offloads can be enabled per queue.
> 
> 

^ permalink raw reply	[flat|nested] 134+ messages in thread

* Re: [dpdk-dev] [PATCH] doc: update new ethdev offload API description
  2018-03-21 14:28                     ` Ferruh Yigit
@ 2018-03-21 14:40                       ` Thomas Monjalon
  2018-03-21 15:26                         ` Bruce Richardson
  0 siblings, 1 reply; 134+ messages in thread
From: Thomas Monjalon @ 2018-03-21 14:40 UTC (permalink / raw)
  To: Ferruh Yigit
  Cc: Shahaf Shuler, dev, Andrew Rybchenko, John McNamara,
	Marko Kovacevic, Patil, Harish, Ivan Malov

21/03/2018 15:28, Ferruh Yigit:
> On 3/21/2018 2:08 PM, Thomas Monjalon wrote:
> > 21/03/2018 11:54, Ferruh Yigit:
> >> On 3/21/2018 9:47 AM, Andrew Rybchenko wrote:
> >>> IMHO, it should be allowed to specify queue offloads at port level.
> >>> It should simply enable these offloads on all queues. It will also
> >>> match dev_info [rt]x_offload_capa, which includes both port and queue
> >>> offloads.
> >>>
> >>> Yes, with the suggested changes we lose the possibility to enable at
> >>> port level but disable at queue level, but I think it is OK - if you
> >>> don't need it for all queues, just control it separately at queue level.
> >>
> >> What I understood was that queue offloads can only enable more, but it
> >> seems they can both enable and disable.
> > 
> > Yes, queue offloads should only enable more.
> > An offload enabled at port level cannot be disabled at queue level.
> 
> Agreed, an offload enabled at port level can't be disabled at queue level,
> but why not have the ability to disable a queue-level offload with another
> queue setup call?

Yes, it should be possible to disable a queue offload:
1/ enable the offload in the queue configuration
2/ later, disable this offload in the queue configuration

So, when I say "only enable more", I mean the queue config should only enable
more than the port config. In other words, a port-level offload cannot be
disabled at queue level.

> > A port offload can be repeated in the queue configuration.
> > If a port offload is not repeated in the queue configuration, there should be
> > no impact: it is still in the port configuration, thus applying to all queues.
> 
> This was a requirement; this patch targets removing the requirement to repeat
> the port offloads in the queue config.

Understood, and I agree with this change if it is well explained
in the Sphinx and Doxygen documentation.
It is important to say that the queue configuration has no impact on offloads
already enabled at port level.
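
A minimal usage sketch of that enable-then-disable sequence with the new API
(illustrative only; port_id, queue_id, nb_desc, socket_id, mb_pool and
dev_info are assumed to be set up elsewhere, and the port must be stopped
before re-running queue setup):

    struct rte_eth_rxconf rxconf = dev_info.default_rxconf;

    /* 1/ enable the offload in the queue configuration */
    rxconf.offloads |= DEV_RX_OFFLOAD_TCP_LRO;
    rte_eth_rx_queue_setup(port_id, queue_id, nb_desc, socket_id,
                           &rxconf, mb_pool);

    /* ... run for a while ... */

    /* 2/ later, disable this offload in the queue configuration */
    rte_eth_dev_stop(port_id);
    rxconf.offloads &= ~DEV_RX_OFFLOAD_TCP_LRO;
    rte_eth_rx_queue_setup(port_id, queue_id, nb_desc, socket_id,
                           &rxconf, mb_pool);
    rte_eth_dev_start(port_id);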

> > About capabilities, the queue offloads must be a subset of the port offloads.
> > The queue capabilities show which offloads can be enabled per queue.

^ permalink raw reply	[flat|nested] 134+ messages in thread

* Re: [dpdk-dev] [PATCH] doc: update new ethdev offload API description
  2018-03-21 14:40                       ` Thomas Monjalon
@ 2018-03-21 15:26                         ` Bruce Richardson
  2018-03-21 15:29                           ` Shahaf Shuler
  0 siblings, 1 reply; 134+ messages in thread
From: Bruce Richardson @ 2018-03-21 15:26 UTC (permalink / raw)
  To: Thomas Monjalon
  Cc: Ferruh Yigit, Shahaf Shuler, dev, Andrew Rybchenko,
	John McNamara, Marko Kovacevic, Patil, Harish, Ivan Malov

On Wed, Mar 21, 2018 at 03:40:43PM +0100, Thomas Monjalon wrote:
> 21/03/2018 15:28, Ferruh Yigit:
> > On 3/21/2018 2:08 PM, Thomas Monjalon wrote:
> > > 21/03/2018 11:54, Ferruh Yigit:
> > >> On 3/21/2018 9:47 AM, Andrew Rybchenko wrote:
> > >>> IMHO, it should be allowed to specify queue offloads at port level.
> > >>> It should simply enable these offloads on all queues. It will also
> > >>> match dev_info [rt]x_offload_capa, which includes both port and queue
> > >>> offloads.
> > >>>
> > >>> Yes, with the suggested changes we lose the possibility to enable at
> > >>> port level but disable at queue level, but I think it is OK - if you
> > >>> don't need it for all queues, just control it separately at queue level.
> > >>
> > >> What I understood was that queue offloads can only enable more, but it
> > >> seems they can both enable and disable.
> > > 
> > > Yes, queue offloads should only enable more.
> > > An offload enabled at port level cannot be disabled at queue level.
> > 
> > Agreed, an offload enabled at port level can't be disabled at queue level,
> > but why not have the ability to disable a queue-level offload with another
> > queue setup call?
> 
> Yes, it should be possible to disable a queue offload:
> 1/ enable the offload in the queue configuration
> 2/ later, disable this offload in the queue configuration
> 
> So, when I say "only enable more", I mean the queue config should only enable
> more than the port config. In other words, a port-level offload cannot be
> disabled at queue level.
> 
> > > A port offload can be repeated in the queue configuration.
> > > If a port offload is not repeated in the queue configuration, there should
> > > be no impact: it is still in the port configuration, thus applying to all
> > > queues.
> > 
> > This was a requirement; this patch targets removing the requirement to
> > repeat the port offloads in the queue config.
> 
> Understood, and I agree with this change if it is well explained
> in the Sphinx and Doxygen documentation.
> It is important to say that the queue configuration has no impact on offloads
> already enabled at port level.
> 
> > > About capabilities, the queue offloads must be a subset of the port offloads.
> > > The queue capabilities show which offloads can be enabled per queue.
> 
> 
>

Why not abandon port-level config entirely? Then you just have queue-level
configs, with the restriction that on some NICs all queues must be
configured the same way. It can be up to the NIC drivers - or possibly the
ethdev layer - to identify and report an error in such cases.

/Bruce 

^ permalink raw reply	[flat|nested] 134+ messages in thread

* Re: [dpdk-dev] [PATCH] doc: update new ethdev offload API description
  2018-03-21 15:26                         ` Bruce Richardson
@ 2018-03-21 15:29                           ` Shahaf Shuler
  2018-03-21 15:44                             ` Bruce Richardson
  0 siblings, 1 reply; 134+ messages in thread
From: Shahaf Shuler @ 2018-03-21 15:29 UTC (permalink / raw)
  To: Bruce Richardson, Thomas Monjalon
  Cc: Ferruh Yigit, dev, Andrew Rybchenko, John McNamara,
	Marko Kovacevic, Patil, Harish, Ivan Malov

Wednesday, March 21, 2018 5:27 PM, Bruce Richardson
> On Wed, Mar 21, 2018 at 03:40:43PM +0100, Thomas Monjalon wrote:
> > 21/03/2018 15:28, Ferruh Yigit:
> > > On 3/21/2018 2:08 PM, Thomas Monjalon wrote:
> > > > 21/03/2018 11:54, Ferruh Yigit:
> > > >> On 3/21/2018 9:47 AM, Andrew Rybchenko wrote:
> > > >>> IMHO, it should be allowed to specify queue offloads at port level.
> > > >>> It should simply enable these offloads on all queues. It will
> > > >>> also match dev_info [rt]x_offload_capa, which includes both port
> > > >>> and queue offloads.
> >
> 
> Why not abandon port-level config entirely? Then you just have queue-level
> configs, with the restriction that on some NICs all queues must be configured
> the same way. It can be up to the NIC drivers - or possibly the ethdev layer -
> to identify and report an error in such cases.

I would love that. And this was part of the original proposal when we first modified the offloads API.

However, Konstantin explained to me that it will not work with Intel devices. There are cases where the port configuration should be set on the PF, without any queues created, to enable an offload on the VF.

> 
> /Bruce

^ permalink raw reply	[flat|nested] 134+ messages in thread

* Re: [dpdk-dev] [PATCH] doc: update new ethdev offload API description
  2018-03-21 15:29                           ` Shahaf Shuler
@ 2018-03-21 15:44                             ` Bruce Richardson
  0 siblings, 0 replies; 134+ messages in thread
From: Bruce Richardson @ 2018-03-21 15:44 UTC (permalink / raw)
  To: Shahaf Shuler
  Cc: Thomas Monjalon, Ferruh Yigit, dev, Andrew Rybchenko,
	John McNamara, Marko Kovacevic, Patil, Harish, Ivan Malov

On Wed, Mar 21, 2018 at 03:29:57PM +0000, Shahaf Shuler wrote:
> Wednesday, March 21, 2018 5:27 PM, Bruce Richardson
> > On Wed, Mar 21, 2018 at 03:40:43PM +0100, Thomas Monjalon wrote:
> > > 21/03/2018 15:28, Ferruh Yigit:
> > > > On 3/21/2018 2:08 PM, Thomas Monjalon wrote:
> > > > > 21/03/2018 11:54, Ferruh Yigit:
> > > > >> On 3/21/2018 9:47 AM, Andrew Rybchenko wrote:
> > > > >>> IMHO, it should be allowed to specify queue offloads at port level.
> > > > >>> It should simply enable these offloads on all queues. It will
> > > > >>> also match dev_info [rt]x_offload_capa, which includes both port
> > > > >>> and queue offloads.
> > >
> > 
> > Why not abandon port-level config entirely? Then you just have queue-level
> > configs, with the restriction that on some NICs all queues must be configured
> > the same way. It can be up to the NIC drivers - or possibly the ethdev layer -
> > to identify and report an error in such cases.
> 
> I would love that. And this was part of the original proposal when we first modified the offloads API.
> 
> However, Konstantin explained to me that it will not work with Intel devices. There are cases where the port configuration should be set on the PF, without any queues created, to enable an offload on the VF.
>
Apologies, my bad for not having followed the whole discussion too
closely. Never mind. I'll go back to pretending I have something meaningful
to contribute on other threads instead. :-)

/Bruce 

^ permalink raw reply	[flat|nested] 134+ messages in thread

* Re: [dpdk-dev] [PATCH] doc: update new ethdev offload API description
  2018-03-16 15:51             ` [dpdk-dev] [PATCH] doc: update new ethdev offload API description Ferruh Yigit
                                 ` (2 preceding siblings ...)
  2018-03-21  9:47               ` Andrew Rybchenko
@ 2018-05-08 12:33               ` Ferruh Yigit
  3 siblings, 0 replies; 134+ messages in thread
From: Ferruh Yigit @ 2018-05-08 12:33 UTC (permalink / raw)
  To: John McNamara, Marko Kovacevic
  Cc: dev, Thomas Monjalon, shahafs, Harish Patil

On 3/16/2018 3:51 PM, Ferruh Yigit wrote:
> Don't mandate API to pass port offload configuration during queue setup,
> this is unnecessary for devices that support only port level offloads.
> 
> Fixes: 81ac560dc1b4 ("doc: add details on ethdev offloads API")
> Cc: shahafs@mellanox.com
> 
> Signed-off-by: Ferruh Yigit <ferruh.yigit@intel.com>

Superseded by [1], which has both the doc update and the implementation.

[1]
https://dpdk.org/dev/patchwork/patch/39457/

^ permalink raw reply	[flat|nested] 134+ messages in thread

end of thread, other threads:[~2018-05-08 12:33 UTC | newest]

Thread overview: 134+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2017-09-04  7:12 [dpdk-dev] [PATCH 0/4] ethdev new offloads API Shahaf Shuler
2017-09-04  7:12 ` [dpdk-dev] [PATCH 1/4] ethdev: rename Rx and Tx configuration structs Shahaf Shuler
2017-09-04 12:06   ` Ananyev, Konstantin
2017-09-04 12:45     ` Shahaf Shuler
2017-09-04  7:12 ` [dpdk-dev] [PATCH 2/4] ethdev: introduce Rx queue offloads API Shahaf Shuler
2017-09-04  7:12 ` [dpdk-dev] [PATCH 3/4] ethdev: introduce Tx " Shahaf Shuler
2017-09-04  7:12 ` [dpdk-dev] [PATCH 4/4] ethdev: add helpers to move to the new " Shahaf Shuler
2017-09-04 12:13   ` Ananyev, Konstantin
2017-09-04 13:25   ` Ananyev, Konstantin
2017-09-04 13:53     ` Thomas Monjalon
2017-09-04 14:18       ` Ananyev, Konstantin
2017-09-05  7:48         ` Thomas Monjalon
2017-09-05  8:09           ` Ananyev, Konstantin
2017-09-05 10:51             ` Shahaf Shuler
2017-09-05 13:50               ` Thomas Monjalon
2017-09-05 15:31               ` Ananyev, Konstantin
2017-09-06  6:01                 ` Shahaf Shuler
2017-09-06  9:33                   ` Ananyev, Konstantin
2017-09-13  9:27                     ` Thomas Monjalon
2017-09-13 11:16                       ` Shahaf Shuler
2017-09-13 12:41                         ` Thomas Monjalon
2017-09-13 12:56                           ` Ananyev, Konstantin
2017-09-13 13:20                             ` Thomas Monjalon
2017-09-13 21:42                               ` Ananyev, Konstantin
2017-09-14  8:02                                 ` Thomas Monjalon
2017-09-18 10:31                                   ` Bruce Richardson
2017-09-18 10:57                                     ` Ananyev, Konstantin
2017-09-18 11:04                                       ` Bruce Richardson
2017-09-18 11:27                                         ` Thomas Monjalon
2017-09-18 11:04                                       ` Bruce Richardson
2017-09-18 11:11                                         ` Ananyev, Konstantin
2017-09-18 11:32                                           ` Thomas Monjalon
2017-09-18 11:37                                             ` Bruce Richardson
2017-09-18 14:27                                               ` Shahaf Shuler
2017-09-18 14:42                                                 ` Thomas Monjalon
2017-09-18 14:44                                                 ` Bruce Richardson
2017-09-18 18:18                                                   ` Shahaf Shuler
2017-09-18 21:08                                                     ` Thomas Monjalon
2017-09-19  7:33                                                       ` Shahaf Shuler
2017-09-19  7:56                                                         ` Thomas Monjalon
2017-09-13 12:56                           ` Shahaf Shuler
2017-09-04 14:02     ` Shahaf Shuler
2017-09-04 15:55       ` Ananyev, Konstantin
2017-09-10 12:07 ` [dpdk-dev] [PATCH v2 0/2] ethdev " Shahaf Shuler
2017-09-10 12:07   ` [dpdk-dev] [PATCH v2 1/2] ethdev: introduce Rx queue " Shahaf Shuler
2017-09-10 12:07   ` [dpdk-dev] [PATCH v2 2/2] ethdev: introduce Tx " Shahaf Shuler
2017-09-10 17:48     ` Stephen Hemminger
2017-09-11  5:52       ` Shahaf Shuler
2017-09-11  6:21         ` Jerin Jacob
2017-09-11  7:56           ` Shahaf Shuler
2017-09-11  8:06             ` Jerin Jacob
2017-09-11  8:46               ` Shahaf Shuler
2017-09-11  9:05                 ` Jerin Jacob
2017-09-11 11:02                   ` Ananyev, Konstantin
2017-09-12  4:01                     ` Jerin Jacob
2017-09-12  5:25                       ` Shahaf Shuler
2017-09-12  5:51                         ` Jerin Jacob
2017-09-12  6:35                           ` Shahaf Shuler
2017-09-12  6:46                             ` Andrew Rybchenko
2017-09-12  7:17                             ` Jerin Jacob
2017-09-12  8:03                               ` Shahaf Shuler
2017-09-12 10:27                                 ` Andrew Rybchenko
2017-09-12 14:26                                   ` Ananyev, Konstantin
2017-09-12 14:36                                     ` Jerin Jacob
2017-09-12 14:43                                       ` Andrew Rybchenko
2017-09-12  6:43                           ` Andrew Rybchenko
2017-09-12  6:59                             ` Shahaf Shuler
2017-09-11  8:03     ` Andrew Rybchenko
2017-09-11 12:27       ` Shahaf Shuler
2017-09-11 13:10         ` Andrew Rybchenko
2017-09-13  6:37   ` [dpdk-dev] [PATCH v3 0/2] ethdev new " Shahaf Shuler
2017-09-13  6:37     ` [dpdk-dev] [PATCH v3 1/2] ethdev: introduce Rx queue " Shahaf Shuler
2017-09-13  8:13       ` Andrew Rybchenko
2017-09-13 12:49         ` Shahaf Shuler
2017-09-13  8:49       ` Andrew Rybchenko
2017-09-13  9:13         ` Andrew Rybchenko
2017-09-13 12:33           ` Shahaf Shuler
2017-09-13 12:34             ` Andrew Rybchenko
2017-09-13  6:37     ` [dpdk-dev] [PATCH v3 2/2] ethdev: introduce Tx " Shahaf Shuler
2017-09-13  8:40       ` Andrew Rybchenko
2017-09-13 12:51         ` Shahaf Shuler
2017-09-13  9:10     ` [dpdk-dev] [PATCH v3 0/2] ethdev new " Andrew Rybchenko
2017-09-17  6:54     ` [dpdk-dev] [PATCH v4 0/3] " Shahaf Shuler
2017-09-17  6:54       ` [dpdk-dev] [PATCH v4 1/3] ethdev: introduce Rx queue " Shahaf Shuler
2017-09-17  6:54       ` [dpdk-dev] [PATCH v4 2/3] ethdev: introduce Tx " Shahaf Shuler
2017-09-18  7:50         ` Andrew Rybchenko
2017-09-17  6:54       ` [dpdk-dev] [PATCH v4 3/3] doc: add details on ethdev " Shahaf Shuler
2017-09-18  7:51         ` Andrew Rybchenko
2017-09-18 13:40         ` Mcnamara, John
2017-09-18  7:51       ` [dpdk-dev] [PATCH v4 0/3] ethdev new " Andrew Rybchenko
2017-09-28 18:54       ` [dpdk-dev] [PATCH v5 " Shahaf Shuler
2017-09-28 18:54         ` [dpdk-dev] [PATCH v5 1/3] ethdev: introduce Rx queue " Shahaf Shuler
2017-10-03  0:32           ` Ferruh Yigit
2017-10-03  6:25             ` Shahaf Shuler
2017-10-03 19:46               ` Ferruh Yigit
2017-09-28 18:54         ` [dpdk-dev] [PATCH v5 2/3] ethdev: introduce Tx " Shahaf Shuler
2017-10-03 19:50           ` Ferruh Yigit
2017-10-04  8:06             ` Shahaf Shuler
2017-09-28 18:54         ` [dpdk-dev] [PATCH v5 3/3] doc: add details on ethdev " Shahaf Shuler
2017-10-04  8:17         ` [dpdk-dev] [PATCH v6 0/4] ethdev new " Shahaf Shuler
2017-10-04  8:17           ` [dpdk-dev] [PATCH v6 1/4] ethdev: introduce Rx queue " Shahaf Shuler
2017-10-04  8:17           ` [dpdk-dev] [PATCH v6 2/4] ethdev: introduce Tx " Shahaf Shuler
2017-10-04  8:18           ` [dpdk-dev] [PATCH v6 3/4] ethdev: add mbuf fast free Tx offload Shahaf Shuler
2017-10-04  8:18           ` [dpdk-dev] [PATCH v6 4/4] doc: add details on ethdev offloads API Shahaf Shuler
2017-10-04 13:46             ` Mcnamara, John
2018-03-15  1:58             ` Patil, Harish
2018-03-15  6:05               ` Shahaf Shuler
2018-03-16 15:51             ` [dpdk-dev] [PATCH] doc: update new ethdev offload API description Ferruh Yigit
2018-03-17  0:16               ` Patil, Harish
2018-03-18  5:52               ` Shahaf Shuler
2018-03-21  9:47               ` Andrew Rybchenko
2018-03-21 10:54                 ` Ferruh Yigit
2018-03-21 11:08                   ` Andrew Rybchenko
2018-03-21 11:10                     ` Shahaf Shuler
2018-03-21 11:19                       ` Andrew Rybchenko
2018-03-21 11:23                         ` Shahaf Shuler
2018-03-21 11:37                           ` Andrew Rybchenko
2018-03-21 11:40                             ` Shahaf Shuler
2018-03-21 12:52                               ` Ferruh Yigit
2018-03-21 13:06                                 ` Shahaf Shuler
2018-03-21 13:11                                   ` Ananyev, Konstantin
2018-03-21 12:03                             ` Ananyev, Konstantin
2018-03-21 12:29                               ` Shahaf Shuler
2018-03-21 12:34                               ` Andrew Rybchenko
2018-03-21 12:37                                 ` Ananyev, Konstantin
2018-03-21 14:08                   ` Thomas Monjalon
2018-03-21 14:28                     ` Ferruh Yigit
2018-03-21 14:40                       ` Thomas Monjalon
2018-03-21 15:26                         ` Bruce Richardson
2018-03-21 15:29                           ` Shahaf Shuler
2018-03-21 15:44                             ` Bruce Richardson
2018-05-08 12:33               ` Ferruh Yigit
2017-10-04 16:12           ` [dpdk-dev] [PATCH v6 0/4] ethdev new offloads API Ananyev, Konstantin
2017-10-05  0:55             ` Ferruh Yigit
