* [dpdk-dev] [PATCH 1/3] ethdev: support API to set max LRO packet size
2019-11-05 8:40 [dpdk-dev] [PATCH 0/3] support API to set max LRO packet size Dekel Peled
@ 2019-11-05 8:40 ` Dekel Peled
2019-11-05 12:39 ` Andrew Rybchenko
2019-11-05 8:40 ` [dpdk-dev] [PATCH 2/3] net/mlx5: use " Dekel Peled
` (4 subsequent siblings)
5 siblings, 1 reply; 79+ messages in thread
From: Dekel Peled @ 2019-11-05 8:40 UTC (permalink / raw)
To: john.mcnamara, marko.kovacevic, nhorman, ajit.khaparde,
somnath.kotur, anatoly.burakov, xuanziyang2, cloud.wangxiaoyun,
zhouguoyang, wenzhuo.lu, konstantin.ananyev, matan, shahafs,
viacheslavo, rmody, shshaikh, maxime.coquelin, tiwei.bie,
zhihong.wang, yongwang, thomas, ferruh.yigit, arybchenko,
jingjing.wu, bernard.iremonger
Cc: dev
This patch implements [1], adding API support for configuration and
validation of the maximum size of an LRO aggregated packet.
API change notice [2] is removed, and the 19.11 release notes
are updated accordingly.
Various PMDs using the LRO offload are updated; the new data members are
initialized to ensure they don't fail validation.
[1] http://patches.dpdk.org/patch/58217/
[2] http://patches.dpdk.org/patch/57492/
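To illustrate the intended usage, a minimal application-side sketch
follows (not part of the patch itself); the function name, queue counts,
and the 9KB target size are arbitrary placeholders, and error handling
is abbreviated:

#include <errno.h>
#include <rte_ethdev.h>

/* Sketch: enable LRO on a port and cap the aggregated packet size,
 * using the new max_lro_pkt_size fields added by this patch.
 */
static int
configure_port_with_lro(uint16_t port_id, uint16_t nb_rxq, uint16_t nb_txq)
{
	struct rte_eth_dev_info dev_info;
	struct rte_eth_conf conf = {0};
	int ret;

	ret = rte_eth_dev_info_get(port_id, &dev_info);
	if (ret != 0)
		return ret;
	if ((dev_info.rx_offload_capa & DEV_RX_OFFLOAD_TCP_LRO) == 0)
		return -ENOTSUP;

	conf.rxmode.offloads |= DEV_RX_OFFLOAD_TCP_LRO;
	/* Clip the request to the reported capability; the new check in
	 * rte_eth_dev_configure() rejects values outside
	 * [RTE_ETHER_MIN_LEN, dev_info.max_lro_pkt_size].
	 */
	conf.rxmode.max_lro_pkt_size =
		RTE_MIN(9 * 1024u, dev_info.max_lro_pkt_size);

	return rte_eth_dev_configure(port_id, nb_rxq, nb_txq, &conf);
}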
Signed-off-by: Dekel Peled <dekelp@mellanox.com>
---
doc/guides/nics/features.rst | 2 ++
doc/guides/rel_notes/deprecation.rst | 4 ---
doc/guides/rel_notes/release_19_11.rst | 8 ++++++
drivers/net/bnxt/bnxt_ethdev.c | 1 +
drivers/net/hinic/hinic_pmd_ethdev.c | 1 +
drivers/net/ixgbe/ixgbe_ethdev.c | 2 ++
drivers/net/ixgbe/ixgbe_vf_representor.c | 1 +
drivers/net/mlx5/mlx5.h | 3 +++
drivers/net/mlx5/mlx5_ethdev.c | 1 +
drivers/net/mlx5/mlx5_rxq.c | 1 -
drivers/net/qede/qede_ethdev.c | 1 +
drivers/net/virtio/virtio_ethdev.c | 1 +
drivers/net/vmxnet3/vmxnet3_ethdev.c | 1 +
lib/librte_ethdev/rte_ethdev.c | 44 ++++++++++++++++++++++++++++++++
lib/librte_ethdev/rte_ethdev.h | 4 +++
15 files changed, 70 insertions(+), 5 deletions(-)
diff --git a/doc/guides/nics/features.rst b/doc/guides/nics/features.rst
index d966968..4d1bb5a 100644
--- a/doc/guides/nics/features.rst
+++ b/doc/guides/nics/features.rst
@@ -193,10 +193,12 @@ LRO
Supports Large Receive Offload.
* **[uses] rte_eth_rxconf,rte_eth_rxmode**: ``offloads:DEV_RX_OFFLOAD_TCP_LRO``.
+ ``dev_conf.rxmode.max_lro_pkt_size``.
* **[implements] datapath**: ``LRO functionality``.
* **[implements] rte_eth_dev_data**: ``lro``.
* **[provides] mbuf**: ``mbuf.ol_flags:PKT_RX_LRO``, ``mbuf.tso_segsz``.
* **[provides] rte_eth_dev_info**: ``rx_offload_capa,rx_queue_offload_capa:DEV_RX_OFFLOAD_TCP_LRO``.
+* **[provides] rte_eth_dev_info**: ``max_lro_pkt_size``.
.. _nic_features_tso:
diff --git a/doc/guides/rel_notes/deprecation.rst b/doc/guides/rel_notes/deprecation.rst
index c10dc30..fdec33d 100644
--- a/doc/guides/rel_notes/deprecation.rst
+++ b/doc/guides/rel_notes/deprecation.rst
@@ -87,10 +87,6 @@ Deprecation Notices
This scheme will allow PMDs to avoid lookup to internal ptype table on Rx and
thereby improve Rx performance if application wishes do so.
-* ethdev: New 32-bit fields may be added for maximum LRO session size, in
- struct ``rte_eth_dev_info`` for the port capability and in struct
- ``rte_eth_rxmode`` for the port configuration.
-
* cryptodev: support for using IV with all sizes is added, J0 still can
be used but only when IV length in following structs ``rte_crypto_auth_xform``,
``rte_crypto_aead_xform`` is set to zero. When IV length is greater or equal
diff --git a/doc/guides/rel_notes/release_19_11.rst b/doc/guides/rel_notes/release_19_11.rst
index f96ac38..9bffb16 100644
--- a/doc/guides/rel_notes/release_19_11.rst
+++ b/doc/guides/rel_notes/release_19_11.rst
@@ -380,6 +380,14 @@ ABI Changes
align the Ethernet header on receive and all known encapsulations
preserve the alignment of the header.
+* ethdev: Added 32-bit fields for maximum LRO aggregated packet size, in
+ struct ``rte_eth_dev_info`` for the port capability and in struct
+ ``rte_eth_rxmode`` for the port configuration.
+ Application should use the new field in struct ``rte_eth_rxmode`` to configure
+ the requested size.
+ PMD should use the new field in struct ``rte_eth_dev_info`` to report the
+ supported port capability.
+
Shared Library Versions
-----------------------
diff --git a/drivers/net/bnxt/bnxt_ethdev.c b/drivers/net/bnxt/bnxt_ethdev.c
index 7d9459f..88af61b 100644
--- a/drivers/net/bnxt/bnxt_ethdev.c
+++ b/drivers/net/bnxt/bnxt_ethdev.c
@@ -535,6 +535,7 @@ static int bnxt_dev_info_get_op(struct rte_eth_dev *eth_dev,
/* Fast path specifics */
dev_info->min_rx_bufsize = 1;
dev_info->max_rx_pktlen = BNXT_MAX_PKT_LEN;
+ dev_info->max_lro_pkt_size = BNXT_MAX_PKT_LEN;
dev_info->rx_offload_capa = BNXT_DEV_RX_OFFLOAD_SUPPORT;
if (bp->flags & BNXT_FLAG_PTP_SUPPORTED)
diff --git a/drivers/net/hinic/hinic_pmd_ethdev.c b/drivers/net/hinic/hinic_pmd_ethdev.c
index 9f37a40..b33b2cf 100644
--- a/drivers/net/hinic/hinic_pmd_ethdev.c
+++ b/drivers/net/hinic/hinic_pmd_ethdev.c
@@ -727,6 +727,7 @@ static void hinic_get_speed_capa(struct rte_eth_dev *dev, uint32_t *speed_capa)
info->max_tx_queues = nic_dev->nic_cap.max_sqs;
info->min_rx_bufsize = HINIC_MIN_RX_BUF_SIZE;
info->max_rx_pktlen = HINIC_MAX_JUMBO_FRAME_SIZE;
+ info->max_lro_pkt_size = HINIC_MAX_JUMBO_FRAME_SIZE;
info->max_mac_addrs = HINIC_MAX_UC_MAC_ADDRS;
info->min_mtu = HINIC_MIN_MTU_SIZE;
info->max_mtu = HINIC_MAX_MTU_SIZE;
diff --git a/drivers/net/ixgbe/ixgbe_ethdev.c b/drivers/net/ixgbe/ixgbe_ethdev.c
index dbce7a8..a561886 100644
--- a/drivers/net/ixgbe/ixgbe_ethdev.c
+++ b/drivers/net/ixgbe/ixgbe_ethdev.c
@@ -3804,6 +3804,7 @@ static int ixgbevf_dev_xstats_get_names(__rte_unused struct rte_eth_dev *dev,
}
dev_info->min_rx_bufsize = 1024; /* cf BSIZEPACKET in SRRCTL register */
dev_info->max_rx_pktlen = 15872; /* includes CRC, cf MAXFRS register */
+ dev_info->max_lro_pkt_size = 15872;
dev_info->max_mac_addrs = hw->mac.num_rar_entries;
dev_info->max_hash_mac_addrs = IXGBE_VMDQ_NUM_UC_MAC;
dev_info->max_vfs = pci_dev->max_vfs;
@@ -3927,6 +3928,7 @@ static int ixgbevf_dev_xstats_get_names(__rte_unused struct rte_eth_dev *dev,
dev_info->max_tx_queues = (uint16_t)hw->mac.max_tx_queues;
dev_info->min_rx_bufsize = 1024; /* cf BSIZEPACKET in SRRCTL reg */
dev_info->max_rx_pktlen = 9728; /* includes CRC, cf MAXFRS reg */
+ dev_info->max_lro_pkt_size = 9728;
dev_info->max_mtu = dev_info->max_rx_pktlen - IXGBE_ETH_OVERHEAD;
dev_info->max_mac_addrs = hw->mac.num_rar_entries;
dev_info->max_hash_mac_addrs = IXGBE_VMDQ_NUM_UC_MAC;
diff --git a/drivers/net/ixgbe/ixgbe_vf_representor.c b/drivers/net/ixgbe/ixgbe_vf_representor.c
index dbbef29..28dfa3a 100644
--- a/drivers/net/ixgbe/ixgbe_vf_representor.c
+++ b/drivers/net/ixgbe/ixgbe_vf_representor.c
@@ -48,6 +48,7 @@
dev_info->min_rx_bufsize = 1024;
/**< Minimum size of RX buffer. */
dev_info->max_rx_pktlen = 9728;
+ dev_info->max_lro_pkt_size = 9728;
/**< Maximum configurable length of RX pkt. */
dev_info->max_rx_queues = IXGBE_VF_MAX_RX_QUEUES;
/**< Maximum number of RX queues. */
diff --git a/drivers/net/mlx5/mlx5.h b/drivers/net/mlx5/mlx5.h
index f644998..fdfc99b 100644
--- a/drivers/net/mlx5/mlx5.h
+++ b/drivers/net/mlx5/mlx5.h
@@ -203,6 +203,9 @@ struct mlx5_hca_attr {
#define MLX5_LRO_SUPPORTED(dev) \
(((struct mlx5_priv *)((dev)->data->dev_private))->config.lro.supported)
+/* Maximal size of aggregated LRO packet. */
+#define MLX5_MAX_LRO_SIZE (UINT8_MAX * 256u)
+
/* LRO configurations structure. */
struct mlx5_lro_config {
uint32_t supported:1; /* Whether LRO is supported. */
diff --git a/drivers/net/mlx5/mlx5_ethdev.c b/drivers/net/mlx5/mlx5_ethdev.c
index c2bed2f..1443faa 100644
--- a/drivers/net/mlx5/mlx5_ethdev.c
+++ b/drivers/net/mlx5/mlx5_ethdev.c
@@ -606,6 +606,7 @@ struct ethtool_link_settings {
/* FIXME: we should ask the device for these values. */
info->min_rx_bufsize = 32;
info->max_rx_pktlen = 65536;
+ info->max_lro_pkt_size = MLX5_MAX_LRO_SIZE;
/*
* Since we need one CQ per QP, the limit is the minimum number
* between the two values.
diff --git a/drivers/net/mlx5/mlx5_rxq.c b/drivers/net/mlx5/mlx5_rxq.c
index 24d0eaa..9423e7b 100644
--- a/drivers/net/mlx5/mlx5_rxq.c
+++ b/drivers/net/mlx5/mlx5_rxq.c
@@ -1701,7 +1701,6 @@ struct mlx5_rxq_obj *
return 0;
}
-#define MLX5_MAX_LRO_SIZE (UINT8_MAX * 256u)
#define MLX5_MAX_TCP_HDR_OFFSET ((unsigned int)(sizeof(struct rte_ether_hdr) + \
sizeof(struct rte_vlan_hdr) * 2 + \
sizeof(struct rte_ipv6_hdr)))
diff --git a/drivers/net/qede/qede_ethdev.c b/drivers/net/qede/qede_ethdev.c
index 575982f..9c960cd 100644
--- a/drivers/net/qede/qede_ethdev.c
+++ b/drivers/net/qede/qede_ethdev.c
@@ -1277,6 +1277,7 @@ static int qede_dev_configure(struct rte_eth_dev *eth_dev)
dev_info->min_rx_bufsize = (uint32_t)QEDE_MIN_RX_BUFF_SIZE;
dev_info->max_rx_pktlen = (uint32_t)ETH_TX_MAX_NON_LSO_PKT_LEN;
+ dev_info->max_lro_pkt_size = (uint32_t)ETH_TX_MAX_NON_LSO_PKT_LEN;
dev_info->rx_desc_lim = qede_rx_desc_lim;
dev_info->tx_desc_lim = qede_tx_desc_lim;
diff --git a/drivers/net/virtio/virtio_ethdev.c b/drivers/net/virtio/virtio_ethdev.c
index 646de99..fa33c45 100644
--- a/drivers/net/virtio/virtio_ethdev.c
+++ b/drivers/net/virtio/virtio_ethdev.c
@@ -2435,6 +2435,7 @@ static void virtio_dev_free_mbufs(struct rte_eth_dev *dev)
RTE_MIN(hw->max_queue_pairs, VIRTIO_MAX_TX_QUEUES);
dev_info->min_rx_bufsize = VIRTIO_MIN_RX_BUFSIZE;
dev_info->max_rx_pktlen = VIRTIO_MAX_RX_PKTLEN;
+ dev_info->max_lro_pkt_size = VIRTIO_MAX_RX_PKTLEN;
dev_info->max_mac_addrs = VIRTIO_MAX_MAC_ADDRS;
host_features = VTPCI_OPS(hw)->get_features(hw);
diff --git a/drivers/net/vmxnet3/vmxnet3_ethdev.c b/drivers/net/vmxnet3/vmxnet3_ethdev.c
index d1faeaa..d18e8bc 100644
--- a/drivers/net/vmxnet3/vmxnet3_ethdev.c
+++ b/drivers/net/vmxnet3/vmxnet3_ethdev.c
@@ -1161,6 +1161,7 @@ static int eth_vmxnet3_pci_remove(struct rte_pci_device *pci_dev)
dev_info->max_tx_queues = VMXNET3_MAX_TX_QUEUES;
dev_info->min_rx_bufsize = 1518 + RTE_PKTMBUF_HEADROOM;
dev_info->max_rx_pktlen = 16384; /* includes CRC, cf MAXFRS register */
+ dev_info->max_lro_pkt_size = 16384;
dev_info->speed_capa = ETH_LINK_SPEED_10G;
dev_info->max_mac_addrs = VMXNET3_MAX_MAC_ADDRS;
diff --git a/lib/librte_ethdev/rte_ethdev.c b/lib/librte_ethdev/rte_ethdev.c
index 85ab5f0..2f52090 100644
--- a/lib/librte_ethdev/rte_ethdev.c
+++ b/lib/librte_ethdev/rte_ethdev.c
@@ -1156,6 +1156,26 @@ struct rte_eth_dev *
return name;
}
+static inline int
+rte_eth_check_lro_pkt_size(uint16_t port_id, uint32_t config_size,
+ uint32_t dev_info_size)
+{
+ int ret = 0;
+
+ if (config_size > dev_info_size) {
+ RTE_ETHDEV_LOG(ERR, "Ethdev port_id=%d max_lro_pkt_size %u > "
+ "max allowed value %u\n",
+ port_id, config_size, dev_info_size);
+ ret = -EINVAL;
+ } else if (config_size < RTE_ETHER_MIN_LEN) {
+ RTE_ETHDEV_LOG(ERR, "Ethdev port_id=%d max_lro_pkt_size %u < "
+ "min allowed value %u\n", port_id, config_size,
+ (unsigned int)RTE_ETHER_MIN_LEN);
+ ret = -EINVAL;
+ }
+ return ret;
+}
+
int
rte_eth_dev_configure(uint16_t port_id, uint16_t nb_rx_q, uint16_t nb_tx_q,
const struct rte_eth_conf *dev_conf)
@@ -1286,6 +1306,18 @@ struct rte_eth_dev *
RTE_ETHER_MAX_LEN;
}
+ /*
+ * If LRO is enabled, check that the maximum aggregated packet
+ * size is supported by the configured device.
+ */
+ if (dev_conf->rxmode.offloads & DEV_RX_OFFLOAD_TCP_LRO) {
+ ret = rte_eth_check_lro_pkt_size(
+ port_id, dev_conf->rxmode.max_lro_pkt_size,
+ dev_info.max_lro_pkt_size);
+ if (ret)
+ goto rollback;
+ }
+
/* Any requested offloading must be within its device capabilities */
if ((dev_conf->rxmode.offloads & dev_info.rx_offload_capa) !=
dev_conf->rxmode.offloads) {
@@ -1790,6 +1822,18 @@ struct rte_eth_dev *
return -EINVAL;
}
+ /*
+ * If LRO is enabled, check that the maximum aggregated packet
+ * size is supported by the configured device.
+ */
+ if (local_conf.offloads & DEV_RX_OFFLOAD_TCP_LRO) {
+ int ret = rte_eth_check_lro_pkt_size(port_id,
+ dev->data->dev_conf.rxmode.max_lro_pkt_size,
+ dev_info.max_lro_pkt_size);
+ if (ret)
+ return ret;
+ }
+
ret = (*dev->dev_ops->rx_queue_setup)(dev, rx_queue_id, nb_rx_desc,
socket_id, &local_conf, mp);
if (!ret) {
diff --git a/lib/librte_ethdev/rte_ethdev.h b/lib/librte_ethdev/rte_ethdev.h
index e6ef4b4..e10128d 100644
--- a/lib/librte_ethdev/rte_ethdev.h
+++ b/lib/librte_ethdev/rte_ethdev.h
@@ -395,6 +395,8 @@ struct rte_eth_rxmode {
/** The multi-queue packet distribution mode to be used, e.g. RSS. */
enum rte_eth_rx_mq_mode mq_mode;
uint32_t max_rx_pkt_len; /**< Only used if JUMBO_FRAME enabled. */
+ /** Maximal allowed size of LRO aggregated packet. */
+ uint32_t max_lro_pkt_size;
uint16_t split_hdr_size; /**< hdr buf size (header_split enabled).*/
/**
* Per-port Rx offloads to be set using DEV_RX_OFFLOAD_* flags.
@@ -1223,6 +1225,8 @@ struct rte_eth_dev_info {
const uint32_t *dev_flags; /**< Device flags */
uint32_t min_rx_bufsize; /**< Minimum size of RX buffer. */
uint32_t max_rx_pktlen; /**< Maximum configurable length of RX pkt. */
+ /** Maximum configurable size of LRO aggregated packet. */
+ uint32_t max_lro_pkt_size;
uint16_t max_rx_queues; /**< Maximum number of RX queues. */
uint16_t max_tx_queues; /**< Maximum number of TX queues. */
uint32_t max_mac_addrs; /**< Maximum number of MAC addresses. */
--
1.8.3.1
* Re: [dpdk-dev] [PATCH 1/3] ethdev: support API to set max LRO packet size
2019-11-05 8:40 ` [dpdk-dev] [PATCH 1/3] ethdev: " Dekel Peled
@ 2019-11-05 12:39 ` Andrew Rybchenko
2019-11-05 13:09 ` Thomas Monjalon
2019-11-05 14:18 ` Dekel Peled
0 siblings, 2 replies; 79+ messages in thread
From: Andrew Rybchenko @ 2019-11-05 12:39 UTC (permalink / raw)
To: Dekel Peled, john.mcnamara, marko.kovacevic, nhorman,
ajit.khaparde, somnath.kotur, anatoly.burakov, xuanziyang2,
cloud.wangxiaoyun, zhouguoyang, wenzhuo.lu, konstantin.ananyev,
matan, shahafs, viacheslavo, rmody, shshaikh, maxime.coquelin,
tiwei.bie, zhihong.wang, yongwang, thomas, ferruh.yigit,
jingjing.wu, bernard.iremonger
Cc: dev
On 11/5/19 11:40 AM, Dekel Peled wrote:
> This patch implements [1], adding API support for configuration and
> validation of the maximum size of an LRO aggregated packet.
> API change notice [2] is removed, and the 19.11 release notes
> are updated accordingly.
>
> Various PMDs using the LRO offload are updated; the new data members are
> initialized to ensure they don't fail validation.
>
> [1] http://patches.dpdk.org/patch/58217/
> [2] http://patches.dpdk.org/patch/57492/
>
> Signed-off-by: Dekel Peled <dekelp@mellanox.com>
Few comments below, otherwise
Reviewed-by: Andrew Rybchenko <arybchenko@solarflare.com>
[snip]
> diff --git a/lib/librte_ethdev/rte_ethdev.c b/lib/librte_ethdev/rte_ethdev.c
> index 85ab5f0..2f52090 100644
> --- a/lib/librte_ethdev/rte_ethdev.c
> +++ b/lib/librte_ethdev/rte_ethdev.c
> @@ -1156,6 +1156,26 @@ struct rte_eth_dev *
> return name;
> }
>
> +static inline int
> +rte_eth_check_lro_pkt_size(uint16_t port_id, uint32_t config_size,
> + uint32_t dev_info_size)
As I understand it, Thomas prefers static functions without the rte_eth_ prefix.
I think it is reasonable.
> +{
> + int ret = 0;
> +
> + if (config_size > dev_info_size) {
> + RTE_ETHDEV_LOG(ERR, "Ethdev port_id=%d max_lro_pkt_size %u > "
> + "max allowed value %u\n",
> + port_id, config_size, dev_info_size);
> + ret = -EINVAL;
> + } else if (config_size < RTE_ETHER_MIN_LEN) {
Shouldn't config_size == 0 fall back to the maximum?
(I don't know, and I simply would like to get comments on it.)
> + RTE_ETHDEV_LOG(ERR, "Ethdev port_id=%d max_lro_pkt_size %u < "
> + "min allowed value %u\n", port_id, config_size,
> + (unsigned int)RTE_ETHER_MIN_LEN);
> + ret = -EINVAL;
> + }
> + return ret;
> +}
> +
> int
> rte_eth_dev_configure(uint16_t port_id, uint16_t nb_rx_q, uint16_t nb_tx_q,
> const struct rte_eth_conf *dev_conf)
> @@ -1286,6 +1306,18 @@ struct rte_eth_dev *
> RTE_ETHER_MAX_LEN;
> }
>
> + /*
> + * If LRO is enabled, check that the maximum aggregated packet
> + * size is supported by the configured device.
> + */
> + if (dev_conf->rxmode.offloads & DEV_RX_OFFLOAD_TCP_LRO) {
> + ret = rte_eth_check_lro_pkt_size(
> + port_id, dev_conf->rxmode.max_lro_pkt_size,
> + dev_info.max_lro_pkt_size);
> + if (ret)
if (ret != 0)
https://doc.dpdk.org/guides/contributing/coding_style.html#function-calls
and the style dominates in rte_ethdev.c
> + goto rollback;
> + }
> +
> /* Any requested offloading must be within its device capabilities */
> if ((dev_conf->rxmode.offloads & dev_info.rx_offload_capa) !=
> dev_conf->rxmode.offloads) {
> @@ -1790,6 +1822,18 @@ struct rte_eth_dev *
> return -EINVAL;
> }
>
> + /*
> + * If LRO is enabled, check that the maximum aggregated packet
> + * size is supported by the configured device.
> + */
> + if (local_conf.offloads & DEV_RX_OFFLOAD_TCP_LRO) {
> + int ret = rte_eth_check_lro_pkt_size(port_id,
> + dev->data->dev_conf.rxmode.max_lro_pkt_size,
> + dev_info.max_lro_pkt_size);
> + if (ret)
if (ret != 0)
https://doc.dpdk.org/guides/contributing/coding_style.html#function-calls
and the style dominates in rte_ethdev.c
> + return ret;
> + }
> +
> ret = (*dev->dev_ops->rx_queue_setup)(dev, rx_queue_id, nb_rx_desc,
> socket_id, &local_conf, mp);
> if (!ret) {
>
[snip]
* Re: [dpdk-dev] [PATCH 1/3] ethdev: support API to set max LRO packet size
2019-11-05 12:39 ` Andrew Rybchenko
@ 2019-11-05 13:09 ` Thomas Monjalon
2019-11-05 14:18 ` Dekel Peled
1 sibling, 0 replies; 79+ messages in thread
From: Thomas Monjalon @ 2019-11-05 13:09 UTC (permalink / raw)
To: Andrew Rybchenko
Cc: Dekel Peled, john.mcnamara, marko.kovacevic, nhorman,
ajit.khaparde, somnath.kotur, anatoly.burakov, xuanziyang2,
cloud.wangxiaoyun, zhouguoyang, wenzhuo.lu, konstantin.ananyev,
matan, shahafs, viacheslavo, rmody, shshaikh, maxime.coquelin,
tiwei.bie, zhihong.wang, yongwang, ferruh.yigit, jingjing.wu,
bernard.iremonger, dev
05/11/2019 13:39, Andrew Rybchenko:
> On 11/5/19 11:40 AM, Dekel Peled wrote:
> > --- a/lib/librte_ethdev/rte_ethdev.c
> > +++ b/lib/librte_ethdev/rte_ethdev.c
> > +static inline int
> > +rte_eth_check_lro_pkt_size(uint16_t port_id, uint32_t config_size,
> > + uint32_t dev_info_size)
>
> As I understand it, Thomas prefers static functions without the rte_eth_ prefix.
> I think it is reasonable.
Indeed, the rte_ prefix should be reserved for the public API.
* Re: [dpdk-dev] [PATCH 1/3] ethdev: support API to set max LRO packet size
2019-11-05 12:39 ` Andrew Rybchenko
2019-11-05 13:09 ` Thomas Monjalon
@ 2019-11-05 14:18 ` Dekel Peled
2019-11-05 14:27 ` Andrew Rybchenko
1 sibling, 1 reply; 79+ messages in thread
From: Dekel Peled @ 2019-11-05 14:18 UTC (permalink / raw)
To: Andrew Rybchenko, john.mcnamara, marko.kovacevic, nhorman,
ajit.khaparde, somnath.kotur, anatoly.burakov, xuanziyang2,
cloud.wangxiaoyun, zhouguoyang, wenzhuo.lu, konstantin.ananyev,
Matan Azrad, Shahaf Shuler, Slava Ovsiienko, rmody, shshaikh,
maxime.coquelin, tiwei.bie, zhihong.wang, yongwang,
Thomas Monjalon, ferruh.yigit, jingjing.wu, bernard.iremonger
Cc: dev
Thanks, PSB.
> -----Original Message-----
> From: Andrew Rybchenko <arybchenko@solarflare.com>
> Sent: Tuesday, November 5, 2019 2:40 PM
> To: Dekel Peled <dekelp@mellanox.com>; john.mcnamara@intel.com;
> marko.kovacevic@intel.com; nhorman@tuxdriver.com;
> ajit.khaparde@broadcom.com; somnath.kotur@broadcom.com;
> anatoly.burakov@intel.com; xuanziyang2@huawei.com;
> cloud.wangxiaoyun@huawei.com; zhouguoyang@huawei.com;
> wenzhuo.lu@intel.com; konstantin.ananyev@intel.com; Matan Azrad
> <matan@mellanox.com>; Shahaf Shuler <shahafs@mellanox.com>; Slava
> Ovsiienko <viacheslavo@mellanox.com>; rmody@marvell.com;
> shshaikh@marvell.com; maxime.coquelin@redhat.com;
> tiwei.bie@intel.com; zhihong.wang@intel.com; yongwang@vmware.com;
> Thomas Monjalon <thomas@monjalon.net>; ferruh.yigit@intel.com;
> jingjing.wu@intel.com; bernard.iremonger@intel.com
> Cc: dev@dpdk.org
> Subject: Re: [PATCH 1/3] ethdev: support API to set max LRO packet size
>
> On 11/5/19 11:40 AM, Dekel Peled wrote:
> > This patch implements [1], adding API support for configuration and
> > validation of the maximum size of an LRO aggregated packet.
> > API change notice [2] is removed, and the 19.11 release notes are
> > updated accordingly.
> >
> > Various PMDs using the LRO offload are updated; the new data members are
> > initialized to ensure they don't fail validation.
> >
> > [1] http://patches.dpdk.org/patch/58217/
> > [2] http://patches.dpdk.org/patch/57492/
> >
> > Signed-off-by: Dekel Peled <dekelp@mellanox.com>
>
> Few comments below, otherwise
>
> Reviewed-by: Andrew Rybchenko <arybchenko@solarflare.com>
>
> [snip]
>
> > diff --git a/lib/librte_ethdev/rte_ethdev.c
> > b/lib/librte_ethdev/rte_ethdev.c index 85ab5f0..2f52090 100644
> > --- a/lib/librte_ethdev/rte_ethdev.c
> > +++ b/lib/librte_ethdev/rte_ethdev.c
> > @@ -1156,6 +1156,26 @@ struct rte_eth_dev *
> > return name;
> > }
> >
> > +static inline int
> > +rte_eth_check_lro_pkt_size(uint16_t port_id, uint32_t config_size,
> > + uint32_t dev_info_size)
>
> As I understand it, Thomas prefers static functions without the rte_eth_ prefix.
> I think it is reasonable.
Will remove prefix.
>
> > +{
> > + int ret = 0;
> > +
> > + if (config_size > dev_info_size) {
> > + RTE_ETHDEV_LOG(ERR, "Ethdev port_id=%d
> max_lro_pkt_size %u > "
> > + "max allowed value %u\n",
> > + port_id, config_size, dev_info_size);
> > + ret = -EINVAL;
> > + } else if (config_size < RTE_ETHER_MIN_LEN) {
>
> Shouldn't config_size == 0 fall back to the maximum?
> (I don't know, and I simply would like to get comments on it.)
>
This check is for any value smaller than the minimum, not just 0.
> > + RTE_ETHDEV_LOG(ERR, "Ethdev port_id=%d
> max_lro_pkt_size %u < "
> > + "min allowed value %u\n", port_id, config_size,
> > + (unsigned int)RTE_ETHER_MIN_LEN);
> > + ret = -EINVAL;
> > + }
> > + return ret;
> > +}
> > +
> > int
> > rte_eth_dev_configure(uint16_t port_id, uint16_t nb_rx_q, uint16_t
> nb_tx_q,
> > const struct rte_eth_conf *dev_conf) @@ -1286,6
> +1306,18 @@
> > struct rte_eth_dev *
> >
> RTE_ETHER_MAX_LEN;
> > }
> >
> > + /*
> > + * If LRO is enabled, check that the maximum aggregated packet
> > + * size is supported by the configured device.
> > + */
> > + if (dev_conf->rxmode.offloads & DEV_RX_OFFLOAD_TCP_LRO) {
> > + ret = rte_eth_check_lro_pkt_size(
> > + port_id, dev_conf-
> >rxmode.max_lro_pkt_size,
> > + dev_info.max_lro_pkt_size);
> > + if (ret)
>
> if (ret != 0)
> https://doc.dpdk.org/guides/contributing/coding_style.html#function-calls
> and the style dominates in rte_ethdev.c
>
Will change.
> > + goto rollback;
> > + }
> > +
> > /* Any requested offloading must be within its device capabilities */
> > if ((dev_conf->rxmode.offloads & dev_info.rx_offload_capa) !=
> > dev_conf->rxmode.offloads) {
> > @@ -1790,6 +1822,18 @@ struct rte_eth_dev *
> > return -EINVAL;
> > }
> >
> > + /*
> > + * If LRO is enabled, check that the maximum aggregated packet
> > + * size is supported by the configured device.
> > + */
> > + if (local_conf.offloads & DEV_RX_OFFLOAD_TCP_LRO) {
> > + int ret = rte_eth_check_lro_pkt_size(port_id,
> > + dev->data-
> >dev_conf.rxmode.max_lro_pkt_size,
> > + dev_info.max_lro_pkt_size);
> > + if (ret)
>
> if (ret != 0)
> https://doc.dpdk.org/guides/contributing/coding_style.html#function-calls
> and the style dominates in rte_ethdev.c
>
Will change.
> > + return ret;
> > + }
> > +
> > ret = (*dev->dev_ops->rx_queue_setup)(dev, rx_queue_id,
> nb_rx_desc,
> > socket_id, &local_conf, mp);
> > if (!ret) {
> >
>
> [snip]
* Re: [dpdk-dev] [PATCH 1/3] ethdev: support API to set max LRO packet size
2019-11-05 14:18 ` Dekel Peled
@ 2019-11-05 14:27 ` Andrew Rybchenko
2019-11-05 14:51 ` Dekel Peled
0 siblings, 1 reply; 79+ messages in thread
From: Andrew Rybchenko @ 2019-11-05 14:27 UTC (permalink / raw)
To: Dekel Peled, john.mcnamara, marko.kovacevic, nhorman,
ajit.khaparde, somnath.kotur, anatoly.burakov, xuanziyang2,
cloud.wangxiaoyun, zhouguoyang, wenzhuo.lu, konstantin.ananyev,
Matan Azrad, Shahaf Shuler, Slava Ovsiienko, rmody, shshaikh,
maxime.coquelin, tiwei.bie, zhihong.wang, yongwang,
Thomas Monjalon, ferruh.yigit, jingjing.wu, bernard.iremonger
Cc: dev
On 11/5/19 5:18 PM, Dekel Peled wrote:
> Thanks, PSB.
>
>> -----Original Message-----
>> From: Andrew Rybchenko <arybchenko@solarflare.com>
>> Sent: Tuesday, November 5, 2019 2:40 PM
>> To: Dekel Peled <dekelp@mellanox.com>; john.mcnamara@intel.com;
>> marko.kovacevic@intel.com; nhorman@tuxdriver.com;
>> ajit.khaparde@broadcom.com; somnath.kotur@broadcom.com;
>> anatoly.burakov@intel.com; xuanziyang2@huawei.com;
>> cloud.wangxiaoyun@huawei.com; zhouguoyang@huawei.com;
>> wenzhuo.lu@intel.com; konstantin.ananyev@intel.com; Matan Azrad
>> <matan@mellanox.com>; Shahaf Shuler <shahafs@mellanox.com>; Slava
>> Ovsiienko <viacheslavo@mellanox.com>; rmody@marvell.com;
>> shshaikh@marvell.com; maxime.coquelin@redhat.com;
>> tiwei.bie@intel.com; zhihong.wang@intel.com; yongwang@vmware.com;
>> Thomas Monjalon <thomas@monjalon.net>; ferruh.yigit@intel.com;
>> jingjing.wu@intel.com; bernard.iremonger@intel.com
>> Cc: dev@dpdk.org
>> Subject: Re: [PATCH 1/3] ethdev: support API to set max LRO packet size
>>
>> On 11/5/19 11:40 AM, Dekel Peled wrote:
>>> This patch implements [1], adding API support for configuration and
>>> validation of the maximum size of an LRO aggregated packet.
>>> API change notice [2] is removed, and the 19.11 release notes are
>>> updated accordingly.
>>>
>>> Various PMDs using the LRO offload are updated; the new data members are
>>> initialized to ensure they don't fail validation.
>>>
>>> [1] http://patches.dpdk.org/patch/58217/
>>> [2] http://patches.dpdk.org/patch/57492/
>>>
>>> Signed-off-by: Dekel Peled <dekelp@mellanox.com>
>>
>> Few comments below, otherwise
>>
>> Reviewed-by: Andrew Rybchenko <arybchenko@solarflare.com>
[snip]
>>> diff --git a/lib/librte_ethdev/rte_ethdev.c
>>> b/lib/librte_ethdev/rte_ethdev.c index 85ab5f0..2f52090 100644
>>> --- a/lib/librte_ethdev/rte_ethdev.c
>>> +++ b/lib/librte_ethdev/rte_ethdev.c
>>> @@ -1156,6 +1156,26 @@ struct rte_eth_dev *
>>> return name;
>>> }
>>>
>>> +static inline int
>>> +rte_eth_check_lro_pkt_size(uint16_t port_id, uint32_t config_size,
>>> + uint32_t dev_info_size)
>>
>> As I understand it, Thomas prefers static functions without the rte_eth_ prefix.
>> I think it is reasonable.
>
> Will remove prefix.
>
>>
>>> +{
>>> + int ret = 0;
>>> +
>>> + if (config_size > dev_info_size) {
>>> + RTE_ETHDEV_LOG(ERR, "Ethdev port_id=%d
>> max_lro_pkt_size %u > "
>>> + "max allowed value %u\n",
>>> + port_id, config_size, dev_info_size);
>>> + ret = -EINVAL;
>>> + } else if (config_size < RTE_ETHER_MIN_LEN) {
>>
>> Shouldn't config_size == 0 fall back to the maximum?
>> (I don't know, and I simply would like to get comments on it.)
>>
>
> This check is for any value smaller than the minimum, not just 0.
Yes, I know, but the question still remains.
>>> + RTE_ETHDEV_LOG(ERR, "Ethdev port_id=%d
>> max_lro_pkt_size %u < "
>>> + "min allowed value %u\n", port_id, config_size,
>>> + (unsigned int)RTE_ETHER_MIN_LEN);
>>> + ret = -EINVAL;
>>> + }
>>> + return ret;
>>> +}
>>> +
>>> int
>>> rte_eth_dev_configure(uint16_t port_id, uint16_t nb_rx_q, uint16_t
>> nb_tx_q,
>>> const struct rte_eth_conf *dev_conf) @@ -1286,6
[snip]
* Re: [dpdk-dev] [PATCH 1/3] ethdev: support API to set max LRO packet size
2019-11-05 14:27 ` Andrew Rybchenko
@ 2019-11-05 14:51 ` Dekel Peled
0 siblings, 0 replies; 79+ messages in thread
From: Dekel Peled @ 2019-11-05 14:51 UTC (permalink / raw)
To: Andrew Rybchenko, john.mcnamara, marko.kovacevic, nhorman,
ajit.khaparde, somnath.kotur, anatoly.burakov, xuanziyang2,
cloud.wangxiaoyun, zhouguoyang, wenzhuo.lu, konstantin.ananyev,
Matan Azrad, Shahaf Shuler, Slava Ovsiienko, rmody, shshaikh,
maxime.coquelin, tiwei.bie, zhihong.wang, yongwang,
Thomas Monjalon, ferruh.yigit, jingjing.wu, bernard.iremonger
Cc: dev
Thanks, PSB.
> -----Original Message-----
> From: Andrew Rybchenko <arybchenko@solarflare.com>
> Sent: Tuesday, November 5, 2019 4:27 PM
> To: Dekel Peled <dekelp@mellanox.com>; john.mcnamara@intel.com;
> marko.kovacevic@intel.com; nhorman@tuxdriver.com;
> ajit.khaparde@broadcom.com; somnath.kotur@broadcom.com;
> anatoly.burakov@intel.com; xuanziyang2@huawei.com;
> cloud.wangxiaoyun@huawei.com; zhouguoyang@huawei.com;
> wenzhuo.lu@intel.com; konstantin.ananyev@intel.com; Matan Azrad
> <matan@mellanox.com>; Shahaf Shuler <shahafs@mellanox.com>; Slava
> Ovsiienko <viacheslavo@mellanox.com>; rmody@marvell.com;
> shshaikh@marvell.com; maxime.coquelin@redhat.com;
> tiwei.bie@intel.com; zhihong.wang@intel.com; yongwang@vmware.com;
> Thomas Monjalon <thomas@monjalon.net>; ferruh.yigit@intel.com;
> jingjing.wu@intel.com; bernard.iremonger@intel.com
> Cc: dev@dpdk.org
> Subject: Re: [PATCH 1/3] ethdev: support API to set max LRO packet size
>
> On 11/5/19 5:18 PM, Dekel Peled wrote:
> > Thanks, PSB.
> >
> >> -----Original Message-----
> >> From: Andrew Rybchenko <arybchenko@solarflare.com>
> >> Sent: Tuesday, November 5, 2019 2:40 PM
> >> To: Dekel Peled <dekelp@mellanox.com>; john.mcnamara@intel.com;
> >> marko.kovacevic@intel.com; nhorman@tuxdriver.com;
> >> ajit.khaparde@broadcom.com; somnath.kotur@broadcom.com;
> >> anatoly.burakov@intel.com; xuanziyang2@huawei.com;
> >> cloud.wangxiaoyun@huawei.com; zhouguoyang@huawei.com;
> >> wenzhuo.lu@intel.com; konstantin.ananyev@intel.com; Matan Azrad
> >> <matan@mellanox.com>; Shahaf Shuler <shahafs@mellanox.com>; Slava
> >> Ovsiienko <viacheslavo@mellanox.com>; rmody@marvell.com;
> >> shshaikh@marvell.com; maxime.coquelin@redhat.com;
> >> tiwei.bie@intel.com; zhihong.wang@intel.com;
> yongwang@vmware.com;
> >> Thomas Monjalon <thomas@monjalon.net>; ferruh.yigit@intel.com;
> >> jingjing.wu@intel.com; bernard.iremonger@intel.com
> >> Cc: dev@dpdk.org
> >> Subject: Re: [PATCH 1/3] ethdev: support API to set max LRO packet
> >> size
> >>
> >> On 11/5/19 11:40 AM, Dekel Peled wrote:
> >>> This patch implements [1], adding API support for configuration and
> >>> validation of the maximum size of an LRO aggregated packet.
> >>> API change notice [2] is removed, and the 19.11 release notes are
> >>> updated accordingly.
> >>>
> >>> Various PMDs using the LRO offload are updated; the new data members are
> >>> initialized to ensure they don't fail validation.
> >>>
> >>> [1] http://patches.dpdk.org/patch/58217/
> >>> [2] http://patches.dpdk.org/patch/57492/
> >>>
> >>> Signed-off-by: Dekel Peled <dekelp@mellanox.com>
> >>
> >> Few comments below, otherwise
> >>
> >> Reviewed-by: Andrew Rybchenko <arybchenko@solarflare.com>
>
> [snip]
>
> >>> diff --git a/lib/librte_ethdev/rte_ethdev.c
> >>> b/lib/librte_ethdev/rte_ethdev.c index 85ab5f0..2f52090 100644
> >>> --- a/lib/librte_ethdev/rte_ethdev.c
> >>> +++ b/lib/librte_ethdev/rte_ethdev.c
> >>> @@ -1156,6 +1156,26 @@ struct rte_eth_dev *
> >>> return name;
> >>> }
> >>>
> >>> +static inline int
> >>> +rte_eth_check_lro_pkt_size(uint16_t port_id, uint32_t config_size,
> >>> + uint32_t dev_info_size)
> >>
> >> As I understand it, Thomas prefers static functions without the rte_eth_ prefix.
> >> I think it is reasonable.
> >
> > Will remove prefix.
> >
> >>
> >>> +{
> >>> + int ret = 0;
> >>> +
> >>> + if (config_size > dev_info_size) {
> >>> + RTE_ETHDEV_LOG(ERR, "Ethdev port_id=%d
> >> max_lro_pkt_size %u > "
> >>> + "max allowed value %u\n",
> >>> + port_id, config_size, dev_info_size);
> >>> + ret = -EINVAL;
> >>> + } else if (config_size < RTE_ETHER_MIN_LEN) {
> >>
> >> Shouldn't config_size == 0 fall back to the maximum?
> >> (I don't know, and I simply would like to get comments on it.)
> >>
> >
> > This check is for any value smaller than the minimum, not just 0.
>
> Yes, I know, but the question still remains.
The application may set the value 0 explicitly; I don't think it should be modified.
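For reference, the fallback being discussed could look like the
hypothetical sketch below; this is not what the patch implements, only
an illustration of the alternative semantics where 0 would mean "use the
port maximum":

/* Hypothetical variant (not in the patch): treat an unset (zero)
 * max_lro_pkt_size as a request for the device maximum.
 */
static inline int
check_lro_pkt_size_with_fallback(uint16_t port_id, uint32_t *config_size,
		uint32_t dev_info_size)
{
	if (*config_size == 0) {
		*config_size = dev_info_size;
		return 0;
	}
	if (*config_size > dev_info_size ||
			*config_size < RTE_ETHER_MIN_LEN) {
		RTE_ETHDEV_LOG(ERR,
			"Ethdev port_id=%u max_lro_pkt_size %u out of range [%u, %u]\n",
			port_id, *config_size,
			(unsigned int)RTE_ETHER_MIN_LEN, dev_info_size);
		return -EINVAL;
	}
	return 0;
}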
>
> >>> + RTE_ETHDEV_LOG(ERR, "Ethdev port_id=%d
> >> max_lro_pkt_size %u < "
> >>> + "min allowed value %u\n", port_id, config_size,
> >>> + (unsigned int)RTE_ETHER_MIN_LEN);
> >>> + ret = -EINVAL;
> >>> + }
> >>> + return ret;
> >>> +}
> >>> +
> >>> int
> >>> rte_eth_dev_configure(uint16_t port_id, uint16_t nb_rx_q, uint16_t
> >> nb_tx_q,
> >>> const struct rte_eth_conf *dev_conf) @@ -1286,6
>
> [snip]
* [dpdk-dev] [PATCH 2/3] net/mlx5: use API to set max LRO packet size
2019-11-05 8:40 [dpdk-dev] [PATCH 0/3] support API to set max LRO packet size Dekel Peled
2019-11-05 8:40 ` [dpdk-dev] [PATCH 1/3] ethdev: " Dekel Peled
@ 2019-11-05 8:40 ` Dekel Peled
2019-11-05 8:40 ` [dpdk-dev] [PATCH 3/3] app/testpmd: " Dekel Peled
` (3 subsequent siblings)
5 siblings, 0 replies; 79+ messages in thread
From: Dekel Peled @ 2019-11-05 8:40 UTC (permalink / raw)
To: john.mcnamara, marko.kovacevic, nhorman, ajit.khaparde,
somnath.kotur, anatoly.burakov, xuanziyang2, cloud.wangxiaoyun,
zhouguoyang, wenzhuo.lu, konstantin.ananyev, matan, shahafs,
viacheslavo, rmody, shshaikh, maxime.coquelin, tiwei.bie,
zhihong.wang, yongwang, thomas, ferruh.yigit, arybchenko,
jingjing.wu, bernard.iremonger
Cc: dev
This patch implements use of the API for the maximum LRO aggregated
packet size.
Rx queue creation is updated to use the relevant configuration value.
Documentation is updated accordingly.
Signed-off-by: Dekel Peled <dekelp@mellanox.com>
---
doc/guides/nics/mlx5.rst | 2 ++
drivers/net/mlx5/mlx5_rxq.c | 4 +++-
2 files changed, 5 insertions(+), 1 deletion(-)
diff --git a/doc/guides/nics/mlx5.rst b/doc/guides/nics/mlx5.rst
index 4f1093f..3b10daf 100644
--- a/doc/guides/nics/mlx5.rst
+++ b/doc/guides/nics/mlx5.rst
@@ -207,6 +207,8 @@ Limitations
- KEEP_CRC offload cannot be supported with LRO.
- The first mbuf length, without head-room, must be big enough to include the
TCP header (122B).
+ - Rx queue with LRO offload enabled, receiving a non-LRO packet, can forward
+ it with size limited to max LRO size, not to max RX packet length.
Statistics
----------
diff --git a/drivers/net/mlx5/mlx5_rxq.c b/drivers/net/mlx5/mlx5_rxq.c
index 9423e7b..c725e14 100644
--- a/drivers/net/mlx5/mlx5_rxq.c
+++ b/drivers/net/mlx5/mlx5_rxq.c
@@ -1772,7 +1772,9 @@ struct mlx5_rxq_ctrl *
dev->data->dev_conf.rxmode.offloads;
unsigned int lro_on_queue = !!(offloads & DEV_RX_OFFLOAD_TCP_LRO);
const int mprq_en = mlx5_check_mprq_support(dev) > 0;
- unsigned int max_rx_pkt_len = dev->data->dev_conf.rxmode.max_rx_pkt_len;
+ unsigned int max_rx_pkt_len = lro_on_queue ?
+ dev->data->dev_conf.rxmode.max_lro_pkt_size :
+ dev->data->dev_conf.rxmode.max_rx_pkt_len;
unsigned int non_scatter_min_mbuf_size = max_rx_pkt_len +
RTE_PKTMBUF_HEADROOM;
unsigned int max_lro_size = 0;
--
1.8.3.1
* [dpdk-dev] [PATCH 3/3] app/testpmd: use API to set max LRO packet size
2019-11-05 8:40 [dpdk-dev] [PATCH 0/3] support API to set max LRO packet size Dekel Peled
2019-11-05 8:40 ` [dpdk-dev] [PATCH 1/3] ethdev: " Dekel Peled
2019-11-05 8:40 ` [dpdk-dev] [PATCH 2/3] net/mlx5: use " Dekel Peled
@ 2019-11-05 8:40 ` Dekel Peled
2019-11-05 9:35 ` [dpdk-dev] [PATCH 0/3] support " Matan Azrad
` (2 subsequent siblings)
5 siblings, 0 replies; 79+ messages in thread
From: Dekel Peled @ 2019-11-05 8:40 UTC (permalink / raw)
To: john.mcnamara, marko.kovacevic, nhorman, ajit.khaparde,
somnath.kotur, anatoly.burakov, xuanziyang2, cloud.wangxiaoyun,
zhouguoyang, wenzhuo.lu, konstantin.ananyev, matan, shahafs,
viacheslavo, rmody, shshaikh, maxime.coquelin, tiwei.bie,
zhihong.wang, yongwang, thomas, ferruh.yigit, arybchenko,
jingjing.wu, bernard.iremonger
Cc: dev
This patch implements use of the API for the maximum LRO aggregated
packet size.
It adds command-line and runtime commands to configure this value,
and adds an option to show the supported value.
Documentation is updated accordingly.
Signed-off-by: Dekel Peled <dekelp@mellanox.com>
---
app/test-pmd/cmdline.c | 73 +++++++++++++++++++++++++++++
app/test-pmd/config.c | 2 +
app/test-pmd/parameters.c | 5 ++
app/test-pmd/testpmd.c | 1 +
doc/guides/testpmd_app_ug/run_app.rst | 5 ++
doc/guides/testpmd_app_ug/testpmd_funcs.rst | 9 ++++
6 files changed, 95 insertions(+)
diff --git a/app/test-pmd/cmdline.c b/app/test-pmd/cmdline.c
index 4478069..edfa60f 100644
--- a/app/test-pmd/cmdline.c
+++ b/app/test-pmd/cmdline.c
@@ -2037,6 +2037,78 @@ struct cmd_config_max_pkt_len_result {
},
};
+/* *** config max LRO aggregated packet size *** */
+struct cmd_config_max_lro_pkt_size_result {
+ cmdline_fixed_string_t port;
+ cmdline_fixed_string_t keyword;
+ cmdline_fixed_string_t all;
+ cmdline_fixed_string_t name;
+ uint32_t value;
+};
+
+static void
+cmd_config_max_lro_pkt_size_parsed(void *parsed_result,
+ __attribute__((unused)) struct cmdline *cl,
+ __attribute__((unused)) void *data)
+{
+ struct cmd_config_max_lro_pkt_size_result *res = parsed_result;
+ portid_t pid;
+
+ if (!all_ports_stopped()) {
+ printf("Please stop all ports first\n");
+ return;
+ }
+
+ RTE_ETH_FOREACH_DEV(pid) {
+ struct rte_port *port = &ports[pid];
+
+ if (!strcmp(res->name, "max-lro-pkt-size")) {
+ if (res->value ==
+ port->dev_conf.rxmode.max_lro_pkt_size)
+ return;
+
+ port->dev_conf.rxmode.max_lro_pkt_size = res->value;
+ } else {
+ printf("Unknown parameter\n");
+ return;
+ }
+ }
+
+ init_port_config();
+
+ cmd_reconfig_device_queue(RTE_PORT_ALL, 1, 1);
+}
+
+cmdline_parse_token_string_t cmd_config_max_lro_pkt_size_port =
+ TOKEN_STRING_INITIALIZER(struct cmd_config_max_lro_pkt_size_result,
+ port, "port");
+cmdline_parse_token_string_t cmd_config_max_lro_pkt_size_keyword =
+ TOKEN_STRING_INITIALIZER(struct cmd_config_max_lro_pkt_size_result,
+ keyword, "config");
+cmdline_parse_token_string_t cmd_config_max_lro_pkt_size_all =
+ TOKEN_STRING_INITIALIZER(struct cmd_config_max_lro_pkt_size_result,
+ all, "all");
+cmdline_parse_token_string_t cmd_config_max_lro_pkt_size_name =
+ TOKEN_STRING_INITIALIZER(struct cmd_config_max_lro_pkt_size_result,
+ name, "max-lro-pkt-size");
+cmdline_parse_token_num_t cmd_config_max_lro_pkt_size_value =
+ TOKEN_NUM_INITIALIZER(struct cmd_config_max_lro_pkt_size_result,
+ value, UINT32);
+
+cmdline_parse_inst_t cmd_config_max_lro_pkt_size = {
+ .f = cmd_config_max_lro_pkt_size_parsed,
+ .data = NULL,
+ .help_str = "port config all max-lro-pkt-size <value>",
+ .tokens = {
+ (void *)&cmd_config_max_lro_pkt_size_port,
+ (void *)&cmd_config_max_lro_pkt_size_keyword,
+ (void *)&cmd_config_max_lro_pkt_size_all,
+ (void *)&cmd_config_max_lro_pkt_size_name,
+ (void *)&cmd_config_max_lro_pkt_size_value,
+ NULL,
+ },
+};
+
/* *** configure port MTU *** */
struct cmd_config_mtu_result {
cmdline_fixed_string_t port;
@@ -19024,6 +19096,7 @@ struct cmd_show_port_supported_ptypes_result {
(cmdline_parse_inst_t *)&cmd_config_rx_tx,
(cmdline_parse_inst_t *)&cmd_config_mtu,
(cmdline_parse_inst_t *)&cmd_config_max_pkt_len,
+ (cmdline_parse_inst_t *)&cmd_config_max_lro_pkt_size,
(cmdline_parse_inst_t *)&cmd_config_rx_mode_flag,
(cmdline_parse_inst_t *)&cmd_config_rss,
(cmdline_parse_inst_t *)&cmd_config_rxtx_ring_size,
diff --git a/app/test-pmd/config.c b/app/test-pmd/config.c
index efe2812..50e6ac0 100644
--- a/app/test-pmd/config.c
+++ b/app/test-pmd/config.c
@@ -629,6 +629,8 @@ static int bus_match_all(const struct rte_bus *bus, const void *data)
printf("Minimum size of RX buffer: %u\n", dev_info.min_rx_bufsize);
printf("Maximum configurable length of RX packet: %u\n",
dev_info.max_rx_pktlen);
+ printf("Maximum configurable size of LRO aggregated packet: %u\n",
+ dev_info.max_lro_pkt_size);
if (dev_info.max_vfs)
printf("Maximum number of VFs: %u\n", dev_info.max_vfs);
if (dev_info.max_vmdq_pools)
diff --git a/app/test-pmd/parameters.c b/app/test-pmd/parameters.c
index 9ea87c1..3e371e2 100644
--- a/app/test-pmd/parameters.c
+++ b/app/test-pmd/parameters.c
@@ -592,6 +592,7 @@
{ "mbuf-size", 1, 0, 0 },
{ "total-num-mbufs", 1, 0, 0 },
{ "max-pkt-len", 1, 0, 0 },
+ { "max-lro-pkt-size", 1, 0, 0 },
{ "pkt-filter-mode", 1, 0, 0 },
{ "pkt-filter-report-hash", 1, 0, 0 },
{ "pkt-filter-size", 1, 0, 0 },
@@ -888,6 +889,10 @@
"Invalid max-pkt-len=%d - should be > %d\n",
n, RTE_ETHER_MIN_LEN);
}
+ if (!strcmp(lgopts[opt_idx].name, "max-lro-pkt-size")) {
+ n = atoi(optarg);
+ rx_mode.max_lro_pkt_size = (uint32_t) n;
+ }
if (!strcmp(lgopts[opt_idx].name, "pkt-filter-mode")) {
if (!strcmp(optarg, "signature"))
fdir_conf.mode =
diff --git a/app/test-pmd/testpmd.c b/app/test-pmd/testpmd.c
index 38acbc5..d4f67ec 100644
--- a/app/test-pmd/testpmd.c
+++ b/app/test-pmd/testpmd.c
@@ -419,6 +419,7 @@ struct fwd_engine * fwd_engines[] = {
struct rte_eth_rxmode rx_mode = {
.max_rx_pkt_len = RTE_ETHER_MAX_LEN,
/**< Default maximum frame length. */
+ .max_lro_pkt_size = RTE_ETHER_MAX_LEN,
};
struct rte_eth_txmode tx_mode = {
diff --git a/doc/guides/testpmd_app_ug/run_app.rst b/doc/guides/testpmd_app_ug/run_app.rst
index 00e0c2a..721f740 100644
--- a/doc/guides/testpmd_app_ug/run_app.rst
+++ b/doc/guides/testpmd_app_ug/run_app.rst
@@ -112,6 +112,11 @@ The command line options are:
Set the maximum packet size to N bytes, where N >= 64. The default value is 1518.
+* ``--max-lro-pkt-size=N``
+
+ Set the maximum LRO aggregated packet size to N bytes, where N >= 64.
+ The default value is 1518.
+
* ``--eth-peers-configfile=name``
Use a configuration file containing the Ethernet addresses of the peer ports.
diff --git a/doc/guides/testpmd_app_ug/testpmd_funcs.rst b/doc/guides/testpmd_app_ug/testpmd_funcs.rst
index c68a742..0267295 100644
--- a/doc/guides/testpmd_app_ug/testpmd_funcs.rst
+++ b/doc/guides/testpmd_app_ug/testpmd_funcs.rst
@@ -2139,6 +2139,15 @@ Set the maximum packet length::
This is equivalent to the ``--max-pkt-len`` command-line option.
+port config - max-lro-pkt-size
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+
+Set the maximum LRO aggregated packet size::
+
+ testpmd> port config all max-lro-pkt-size (value)
+
+This is equivalent to the ``--max-lro-pkt-size`` command-line option.
+
port config - Drop Packets
~~~~~~~~~~~~~~~~~~~~~~~~~~
--
1.8.3.1
* Re: [dpdk-dev] [PATCH 0/3] support API to set max LRO packet size
2019-11-05 8:40 [dpdk-dev] [PATCH 0/3] support API to set max LRO packet size Dekel Peled
` (2 preceding siblings ...)
2019-11-05 8:40 ` [dpdk-dev] [PATCH 3/3] app/testpmd: " Dekel Peled
@ 2019-11-05 9:35 ` Matan Azrad
2019-11-06 11:34 ` [dpdk-dev] [PATCH v2 " Dekel Peled
2019-11-08 23:07 ` [dpdk-dev] [PATCH v6] ethdev: add " Thomas Monjalon
5 siblings, 0 replies; 79+ messages in thread
From: Matan Azrad @ 2019-11-05 9:35 UTC (permalink / raw)
To: Dekel Peled, john.mcnamara, marko.kovacevic, nhorman,
ajit.khaparde, somnath.kotur, anatoly.burakov, xuanziyang2,
cloud.wangxiaoyun, zhouguoyang, wenzhuo.lu, konstantin.ananyev,
Shahaf Shuler, Slava Ovsiienko, rmody, shshaikh, maxime.coquelin,
tiwei.bie, zhihong.wang, yongwang, Thomas Monjalon, ferruh.yigit,
arybchenko, jingjing.wu, bernard.iremonger
Cc: dev
From: Dekel Peled
> This series implements support and use of an API for configuration and
> validation of the maximum size of an LRO aggregated packet.
>
> Dekel Peled (3):
> ethdev: support API to set max LRO packet size
> net/mlx5: use API to set max LRO packet size
> app/testpmd: use API to set max LRO packet size
>
For all the series:
Acked-by: Matan Azrad <matan@mellanox.com>
> app/test-pmd/cmdline.c | 73
> +++++++++++++++++++++++++++++
> app/test-pmd/config.c | 2 +
> app/test-pmd/parameters.c | 5 ++
> app/test-pmd/testpmd.c | 1 +
> doc/guides/nics/features.rst | 2 +
> doc/guides/nics/mlx5.rst | 2 +
> doc/guides/rel_notes/deprecation.rst | 4 --
> doc/guides/rel_notes/release_19_11.rst | 8 ++++
> doc/guides/testpmd_app_ug/run_app.rst | 5 ++
> doc/guides/testpmd_app_ug/testpmd_funcs.rst | 9 ++++
> drivers/net/bnxt/bnxt_ethdev.c | 1 +
> drivers/net/hinic/hinic_pmd_ethdev.c | 1 +
> drivers/net/ixgbe/ixgbe_ethdev.c | 2 +
> drivers/net/ixgbe/ixgbe_vf_representor.c | 1 +
> drivers/net/mlx5/mlx5.h | 3 ++
> drivers/net/mlx5/mlx5_ethdev.c | 1 +
> drivers/net/mlx5/mlx5_rxq.c | 5 +-
> drivers/net/qede/qede_ethdev.c | 1 +
> drivers/net/virtio/virtio_ethdev.c | 1 +
> drivers/net/vmxnet3/vmxnet3_ethdev.c | 1 +
> lib/librte_ethdev/rte_ethdev.c | 44 +++++++++++++++++
> lib/librte_ethdev/rte_ethdev.h | 4 ++
> 22 files changed, 170 insertions(+), 6 deletions(-)
>
> --
> 1.8.3.1
* [dpdk-dev] [PATCH v2 0/3] support API to set max LRO packet size
2019-11-05 8:40 [dpdk-dev] [PATCH 0/3] support API to set max LRO packet size Dekel Peled
` (3 preceding siblings ...)
2019-11-05 9:35 ` [dpdk-dev] [PATCH 0/3] support " Matan Azrad
@ 2019-11-06 11:34 ` Dekel Peled
2019-11-06 11:34 ` [dpdk-dev] [PATCH v2 1/3] ethdev: " Dekel Peled
` (3 more replies)
2019-11-08 23:07 ` [dpdk-dev] [PATCH v6] ethdev: add " Thomas Monjalon
5 siblings, 4 replies; 79+ messages in thread
From: Dekel Peled @ 2019-11-06 11:34 UTC (permalink / raw)
To: john.mcnamara, marko.kovacevic, nhorman, ajit.khaparde,
somnath.kotur, anatoly.burakov, xuanziyang2, cloud.wangxiaoyun,
zhouguoyang, wenzhuo.lu, konstantin.ananyev, matan, shahafs,
viacheslavo, rmody, shshaikh, maxime.coquelin, tiwei.bie,
zhihong.wang, yongwang, thomas, ferruh.yigit, arybchenko,
jingjing.wu, bernard.iremonger
Cc: dev
This series implements support and use of an API for configuration and
validation of the maximum size of an LRO aggregated packet.
v2: Updated ethdev patch per review comments.
Dekel Peled (3):
ethdev: support API to set max LRO packet size
net/mlx5: use API to set max LRO packet size
app/testpmd: use API to set max LRO packet size
app/test-pmd/cmdline.c | 73 +++++++++++++++++++++++++++++
app/test-pmd/config.c | 2 +
app/test-pmd/parameters.c | 5 ++
app/test-pmd/testpmd.c | 1 +
doc/guides/nics/features.rst | 2 +
doc/guides/nics/mlx5.rst | 2 +
doc/guides/rel_notes/deprecation.rst | 4 --
doc/guides/rel_notes/release_19_11.rst | 8 ++++
doc/guides/testpmd_app_ug/run_app.rst | 5 ++
doc/guides/testpmd_app_ug/testpmd_funcs.rst | 9 ++++
drivers/net/bnxt/bnxt_ethdev.c | 1 +
drivers/net/hinic/hinic_pmd_ethdev.c | 1 +
drivers/net/ixgbe/ixgbe_ethdev.c | 2 +
drivers/net/ixgbe/ixgbe_vf_representor.c | 1 +
drivers/net/mlx5/mlx5.h | 3 ++
drivers/net/mlx5/mlx5_ethdev.c | 1 +
drivers/net/mlx5/mlx5_rxq.c | 5 +-
drivers/net/qede/qede_ethdev.c | 1 +
drivers/net/virtio/virtio_ethdev.c | 1 +
drivers/net/vmxnet3/vmxnet3_ethdev.c | 1 +
lib/librte_ethdev/rte_ethdev.c | 44 +++++++++++++++++
lib/librte_ethdev/rte_ethdev.h | 4 ++
22 files changed, 170 insertions(+), 6 deletions(-)
--
1.8.3.1
* [dpdk-dev] [PATCH v2 1/3] ethdev: support API to set max LRO packet size
2019-11-06 11:34 ` [dpdk-dev] [PATCH v2 " Dekel Peled
@ 2019-11-06 11:34 ` Dekel Peled
2019-11-06 12:26 ` Thomas Monjalon
2019-11-06 11:34 ` [dpdk-dev] [PATCH v2 2/3] net/mlx5: use " Dekel Peled
` (2 subsequent siblings)
3 siblings, 1 reply; 79+ messages in thread
From: Dekel Peled @ 2019-11-06 11:34 UTC (permalink / raw)
To: john.mcnamara, marko.kovacevic, nhorman, ajit.khaparde,
somnath.kotur, anatoly.burakov, xuanziyang2, cloud.wangxiaoyun,
zhouguoyang, wenzhuo.lu, konstantin.ananyev, matan, shahafs,
viacheslavo, rmody, shshaikh, maxime.coquelin, tiwei.bie,
zhihong.wang, yongwang, thomas, ferruh.yigit, arybchenko,
jingjing.wu, bernard.iremonger
Cc: dev
This patch implements [1], adding API support for configuration and
validation of the maximum size of an LRO aggregated packet.
API change notice [2] is removed, and the 19.11 release notes
are updated accordingly.
Various PMDs using the LRO offload are updated; the new data members are
initialized to ensure they don't fail validation.
[1] http://patches.dpdk.org/patch/58217/
[2] http://patches.dpdk.org/patch/57492/
Signed-off-by: Dekel Peled <dekelp@mellanox.com>
Reviewed-by: Andrew Rybchenko <arybchenko@solarflare.com>
---
doc/guides/nics/features.rst | 2 ++
doc/guides/rel_notes/deprecation.rst | 4 ---
doc/guides/rel_notes/release_19_11.rst | 8 ++++++
drivers/net/bnxt/bnxt_ethdev.c | 1 +
drivers/net/hinic/hinic_pmd_ethdev.c | 1 +
drivers/net/ixgbe/ixgbe_ethdev.c | 2 ++
drivers/net/ixgbe/ixgbe_vf_representor.c | 1 +
drivers/net/mlx5/mlx5.h | 3 +++
drivers/net/mlx5/mlx5_ethdev.c | 1 +
drivers/net/mlx5/mlx5_rxq.c | 1 -
drivers/net/qede/qede_ethdev.c | 1 +
drivers/net/virtio/virtio_ethdev.c | 1 +
drivers/net/vmxnet3/vmxnet3_ethdev.c | 1 +
lib/librte_ethdev/rte_ethdev.c | 44 ++++++++++++++++++++++++++++++++
lib/librte_ethdev/rte_ethdev.h | 4 +++
15 files changed, 70 insertions(+), 5 deletions(-)
diff --git a/doc/guides/nics/features.rst b/doc/guides/nics/features.rst
index d966968..4d1bb5a 100644
--- a/doc/guides/nics/features.rst
+++ b/doc/guides/nics/features.rst
@@ -193,10 +193,12 @@ LRO
Supports Large Receive Offload.
* **[uses] rte_eth_rxconf,rte_eth_rxmode**: ``offloads:DEV_RX_OFFLOAD_TCP_LRO``.
+ ``dev_conf.rxmode.max_lro_pkt_size``.
* **[implements] datapath**: ``LRO functionality``.
* **[implements] rte_eth_dev_data**: ``lro``.
* **[provides] mbuf**: ``mbuf.ol_flags:PKT_RX_LRO``, ``mbuf.tso_segsz``.
* **[provides] rte_eth_dev_info**: ``rx_offload_capa,rx_queue_offload_capa:DEV_RX_OFFLOAD_TCP_LRO``.
+* **[provides] rte_eth_dev_info**: ``max_lro_pkt_size``.
.. _nic_features_tso:
diff --git a/doc/guides/rel_notes/deprecation.rst b/doc/guides/rel_notes/deprecation.rst
index c10dc30..fdec33d 100644
--- a/doc/guides/rel_notes/deprecation.rst
+++ b/doc/guides/rel_notes/deprecation.rst
@@ -87,10 +87,6 @@ Deprecation Notices
This scheme will allow PMDs to avoid lookup to internal ptype table on Rx and
thereby improve Rx performance if application wishes do so.
-* ethdev: New 32-bit fields may be added for maximum LRO session size, in
- struct ``rte_eth_dev_info`` for the port capability and in struct
- ``rte_eth_rxmode`` for the port configuration.
-
* cryptodev: support for using IV with all sizes is added, J0 still can
be used but only when IV length in following structs ``rte_crypto_auth_xform``,
``rte_crypto_aead_xform`` is set to zero. When IV length is greater or equal
diff --git a/doc/guides/rel_notes/release_19_11.rst b/doc/guides/rel_notes/release_19_11.rst
index f96ac38..9bffb16 100644
--- a/doc/guides/rel_notes/release_19_11.rst
+++ b/doc/guides/rel_notes/release_19_11.rst
@@ -380,6 +380,14 @@ ABI Changes
align the Ethernet header on receive and all known encapsulations
preserve the alignment of the header.
+* ethdev: Added 32-bit fields for maximum LRO aggregated packet size, in
+ struct ``rte_eth_dev_info`` for the port capability and in struct
+ ``rte_eth_rxmode`` for the port configuration.
+ Application should use the new field in struct ``rte_eth_rxmode`` to configure
+ the requested size.
+ PMD should use the new field in struct ``rte_eth_dev_info`` to report the
+ supported port capability.
+
Shared Library Versions
-----------------------
diff --git a/drivers/net/bnxt/bnxt_ethdev.c b/drivers/net/bnxt/bnxt_ethdev.c
index 7d9459f..88af61b 100644
--- a/drivers/net/bnxt/bnxt_ethdev.c
+++ b/drivers/net/bnxt/bnxt_ethdev.c
@@ -535,6 +535,7 @@ static int bnxt_dev_info_get_op(struct rte_eth_dev *eth_dev,
/* Fast path specifics */
dev_info->min_rx_bufsize = 1;
dev_info->max_rx_pktlen = BNXT_MAX_PKT_LEN;
+ dev_info->max_lro_pkt_size = BNXT_MAX_PKT_LEN;
dev_info->rx_offload_capa = BNXT_DEV_RX_OFFLOAD_SUPPORT;
if (bp->flags & BNXT_FLAG_PTP_SUPPORTED)
diff --git a/drivers/net/hinic/hinic_pmd_ethdev.c b/drivers/net/hinic/hinic_pmd_ethdev.c
index 9f37a40..b33b2cf 100644
--- a/drivers/net/hinic/hinic_pmd_ethdev.c
+++ b/drivers/net/hinic/hinic_pmd_ethdev.c
@@ -727,6 +727,7 @@ static void hinic_get_speed_capa(struct rte_eth_dev *dev, uint32_t *speed_capa)
info->max_tx_queues = nic_dev->nic_cap.max_sqs;
info->min_rx_bufsize = HINIC_MIN_RX_BUF_SIZE;
info->max_rx_pktlen = HINIC_MAX_JUMBO_FRAME_SIZE;
+ info->max_lro_pkt_size = HINIC_MAX_JUMBO_FRAME_SIZE;
info->max_mac_addrs = HINIC_MAX_UC_MAC_ADDRS;
info->min_mtu = HINIC_MIN_MTU_SIZE;
info->max_mtu = HINIC_MAX_MTU_SIZE;
diff --git a/drivers/net/ixgbe/ixgbe_ethdev.c b/drivers/net/ixgbe/ixgbe_ethdev.c
index 3c7624f..863e3b1 100644
--- a/drivers/net/ixgbe/ixgbe_ethdev.c
+++ b/drivers/net/ixgbe/ixgbe_ethdev.c
@@ -3804,6 +3804,7 @@ static int ixgbevf_dev_xstats_get_names(__rte_unused struct rte_eth_dev *dev,
}
dev_info->min_rx_bufsize = 1024; /* cf BSIZEPACKET in SRRCTL register */
dev_info->max_rx_pktlen = 15872; /* includes CRC, cf MAXFRS register */
+ dev_info->max_lro_pkt_size = 15872;
dev_info->max_mac_addrs = hw->mac.num_rar_entries;
dev_info->max_hash_mac_addrs = IXGBE_VMDQ_NUM_UC_MAC;
dev_info->max_vfs = pci_dev->max_vfs;
@@ -3927,6 +3928,7 @@ static int ixgbevf_dev_xstats_get_names(__rte_unused struct rte_eth_dev *dev,
dev_info->max_tx_queues = (uint16_t)hw->mac.max_tx_queues;
dev_info->min_rx_bufsize = 1024; /* cf BSIZEPACKET in SRRCTL reg */
dev_info->max_rx_pktlen = 9728; /* includes CRC, cf MAXFRS reg */
+ dev_info->max_lro_pkt_size = 9728;
dev_info->max_mtu = dev_info->max_rx_pktlen - IXGBE_ETH_OVERHEAD;
dev_info->max_mac_addrs = hw->mac.num_rar_entries;
dev_info->max_hash_mac_addrs = IXGBE_VMDQ_NUM_UC_MAC;
diff --git a/drivers/net/ixgbe/ixgbe_vf_representor.c b/drivers/net/ixgbe/ixgbe_vf_representor.c
index dbbef29..28dfa3a 100644
--- a/drivers/net/ixgbe/ixgbe_vf_representor.c
+++ b/drivers/net/ixgbe/ixgbe_vf_representor.c
@@ -48,6 +48,7 @@
dev_info->min_rx_bufsize = 1024;
/**< Minimum size of RX buffer. */
dev_info->max_rx_pktlen = 9728;
+ dev_info->max_lro_pkt_size = 9728;
/**< Maximum configurable length of RX pkt. */
dev_info->max_rx_queues = IXGBE_VF_MAX_RX_QUEUES;
/**< Maximum number of RX queues. */
diff --git a/drivers/net/mlx5/mlx5.h b/drivers/net/mlx5/mlx5.h
index f644998..fdfc99b 100644
--- a/drivers/net/mlx5/mlx5.h
+++ b/drivers/net/mlx5/mlx5.h
@@ -203,6 +203,9 @@ struct mlx5_hca_attr {
#define MLX5_LRO_SUPPORTED(dev) \
(((struct mlx5_priv *)((dev)->data->dev_private))->config.lro.supported)
+/* Maximal size of aggregated LRO packet. */
+#define MLX5_MAX_LRO_SIZE (UINT8_MAX * 256u)
+
/* LRO configurations structure. */
struct mlx5_lro_config {
uint32_t supported:1; /* Whether LRO is supported. */
diff --git a/drivers/net/mlx5/mlx5_ethdev.c b/drivers/net/mlx5/mlx5_ethdev.c
index c2bed2f..1443faa 100644
--- a/drivers/net/mlx5/mlx5_ethdev.c
+++ b/drivers/net/mlx5/mlx5_ethdev.c
@@ -606,6 +606,7 @@ struct ethtool_link_settings {
/* FIXME: we should ask the device for these values. */
info->min_rx_bufsize = 32;
info->max_rx_pktlen = 65536;
+ info->max_lro_pkt_size = MLX5_MAX_LRO_SIZE;
/*
* Since we need one CQ per QP, the limit is the minimum number
* between the two values.
diff --git a/drivers/net/mlx5/mlx5_rxq.c b/drivers/net/mlx5/mlx5_rxq.c
index 24d0eaa..9423e7b 100644
--- a/drivers/net/mlx5/mlx5_rxq.c
+++ b/drivers/net/mlx5/mlx5_rxq.c
@@ -1701,7 +1701,6 @@ struct mlx5_rxq_obj *
return 0;
}
-#define MLX5_MAX_LRO_SIZE (UINT8_MAX * 256u)
#define MLX5_MAX_TCP_HDR_OFFSET ((unsigned int)(sizeof(struct rte_ether_hdr) + \
sizeof(struct rte_vlan_hdr) * 2 + \
sizeof(struct rte_ipv6_hdr)))
diff --git a/drivers/net/qede/qede_ethdev.c b/drivers/net/qede/qede_ethdev.c
index 575982f..9c960cd 100644
--- a/drivers/net/qede/qede_ethdev.c
+++ b/drivers/net/qede/qede_ethdev.c
@@ -1277,6 +1277,7 @@ static int qede_dev_configure(struct rte_eth_dev *eth_dev)
dev_info->min_rx_bufsize = (uint32_t)QEDE_MIN_RX_BUFF_SIZE;
dev_info->max_rx_pktlen = (uint32_t)ETH_TX_MAX_NON_LSO_PKT_LEN;
+ dev_info->max_lro_pkt_size = (uint32_t)ETH_TX_MAX_NON_LSO_PKT_LEN;
dev_info->rx_desc_lim = qede_rx_desc_lim;
dev_info->tx_desc_lim = qede_tx_desc_lim;
diff --git a/drivers/net/virtio/virtio_ethdev.c b/drivers/net/virtio/virtio_ethdev.c
index 646de99..fa33c45 100644
--- a/drivers/net/virtio/virtio_ethdev.c
+++ b/drivers/net/virtio/virtio_ethdev.c
@@ -2435,6 +2435,7 @@ static void virtio_dev_free_mbufs(struct rte_eth_dev *dev)
RTE_MIN(hw->max_queue_pairs, VIRTIO_MAX_TX_QUEUES);
dev_info->min_rx_bufsize = VIRTIO_MIN_RX_BUFSIZE;
dev_info->max_rx_pktlen = VIRTIO_MAX_RX_PKTLEN;
+ dev_info->max_lro_pkt_size = VIRTIO_MAX_RX_PKTLEN;
dev_info->max_mac_addrs = VIRTIO_MAX_MAC_ADDRS;
host_features = VTPCI_OPS(hw)->get_features(hw);
diff --git a/drivers/net/vmxnet3/vmxnet3_ethdev.c b/drivers/net/vmxnet3/vmxnet3_ethdev.c
index d1faeaa..d18e8bc 100644
--- a/drivers/net/vmxnet3/vmxnet3_ethdev.c
+++ b/drivers/net/vmxnet3/vmxnet3_ethdev.c
@@ -1161,6 +1161,7 @@ static int eth_vmxnet3_pci_remove(struct rte_pci_device *pci_dev)
dev_info->max_tx_queues = VMXNET3_MAX_TX_QUEUES;
dev_info->min_rx_bufsize = 1518 + RTE_PKTMBUF_HEADROOM;
dev_info->max_rx_pktlen = 16384; /* includes CRC, cf MAXFRS register */
+ dev_info->max_lro_pkt_size = 16384;
dev_info->speed_capa = ETH_LINK_SPEED_10G;
dev_info->max_mac_addrs = VMXNET3_MAX_MAC_ADDRS;
diff --git a/lib/librte_ethdev/rte_ethdev.c b/lib/librte_ethdev/rte_ethdev.c
index 85ab5f0..7d8d1ed 100644
--- a/lib/librte_ethdev/rte_ethdev.c
+++ b/lib/librte_ethdev/rte_ethdev.c
@@ -1156,6 +1156,26 @@ struct rte_eth_dev *
return name;
}
+static inline int
+check_lro_pkt_size(uint16_t port_id, uint32_t config_size,
+ uint32_t dev_info_size)
+{
+ int ret = 0;
+
+ if (config_size > dev_info_size) {
+ RTE_ETHDEV_LOG(ERR, "Ethdev port_id=%d max_lro_pkt_size %u > "
+ "max allowed value %u\n",
+ port_id, config_size, dev_info_size);
+ ret = -EINVAL;
+ } else if (config_size < RTE_ETHER_MIN_LEN) {
+ RTE_ETHDEV_LOG(ERR, "Ethdev port_id=%d max_lro_pkt_size %u < "
+ "min allowed value %u\n", port_id, config_size,
+ (unsigned int)RTE_ETHER_MIN_LEN);
+ ret = -EINVAL;
+ }
+ return ret;
+}
+
int
rte_eth_dev_configure(uint16_t port_id, uint16_t nb_rx_q, uint16_t nb_tx_q,
const struct rte_eth_conf *dev_conf)
@@ -1286,6 +1306,18 @@ struct rte_eth_dev *
RTE_ETHER_MAX_LEN;
}
+ /*
+ * If LRO is enabled, check that the maximum aggregated packet
+ * size is supported by the configured device.
+ */
+ if (dev_conf->rxmode.offloads & DEV_RX_OFFLOAD_TCP_LRO) {
+ ret = check_lro_pkt_size(
+ port_id, dev_conf->rxmode.max_lro_pkt_size,
+ dev_info.max_lro_pkt_size);
+ if (ret != 0)
+ goto rollback;
+ }
+
/* Any requested offloading must be within its device capabilities */
if ((dev_conf->rxmode.offloads & dev_info.rx_offload_capa) !=
dev_conf->rxmode.offloads) {
@@ -1790,6 +1822,18 @@ struct rte_eth_dev *
return -EINVAL;
}
+ /*
+ * If LRO is enabled, check that the maximum aggregated packet
+ * size is supported by the configured device.
+ */
+ if (local_conf.offloads & DEV_RX_OFFLOAD_TCP_LRO) {
+ int ret = check_lro_pkt_size(port_id,
+ dev->data->dev_conf.rxmode.max_lro_pkt_size,
+ dev_info.max_lro_pkt_size);
+ if (ret != 0)
+ return ret;
+ }
+
ret = (*dev->dev_ops->rx_queue_setup)(dev, rx_queue_id, nb_rx_desc,
socket_id, &local_conf, mp);
if (!ret) {
diff --git a/lib/librte_ethdev/rte_ethdev.h b/lib/librte_ethdev/rte_ethdev.h
index f0df03d..0a1e490 100644
--- a/lib/librte_ethdev/rte_ethdev.h
+++ b/lib/librte_ethdev/rte_ethdev.h
@@ -395,6 +395,8 @@ struct rte_eth_rxmode {
/** The multi-queue packet distribution mode to be used, e.g. RSS. */
enum rte_eth_rx_mq_mode mq_mode;
uint32_t max_rx_pkt_len; /**< Only used if JUMBO_FRAME enabled. */
+ /** Maximal allowed size of LRO aggregated packet. */
+ uint32_t max_lro_pkt_size;
uint16_t split_hdr_size; /**< hdr buf size (header_split enabled).*/
/**
* Per-port Rx offloads to be set using DEV_RX_OFFLOAD_* flags.
@@ -1223,6 +1225,8 @@ struct rte_eth_dev_info {
const uint32_t *dev_flags; /**< Device flags */
uint32_t min_rx_bufsize; /**< Minimum size of RX buffer. */
uint32_t max_rx_pktlen; /**< Maximum configurable length of RX pkt. */
+ /** Maximum configurable size of LRO aggregated packet. */
+ uint32_t max_lro_pkt_size;
uint16_t max_rx_queues; /**< Maximum number of RX queues. */
uint16_t max_tx_queues; /**< Maximum number of TX queues. */
uint32_t max_mac_addrs; /**< Maximum number of MAC addresses. */
--
1.8.3.1
^ permalink raw reply [flat|nested] 79+ messages in thread
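For context, a minimal application-side sketch of the flow this patch enables
(hypothetical code, not part of the patch; the single Rx/Tx queue pair and the
32 KB request are assumptions):

	#include <string.h>
	#include <rte_ethdev.h>

	/* Hypothetical sketch: enable LRO and request an aggregated packet
	 * size, clamped to the capability the PMD reports. */
	static int
	configure_lro_port(uint16_t port_id)
	{
		struct rte_eth_dev_info dev_info;
		struct rte_eth_conf conf;
		int ret;

		ret = rte_eth_dev_info_get(port_id, &dev_info);
		if (ret != 0)
			return ret;
		memset(&conf, 0, sizeof(conf));
		conf.rxmode.offloads |= DEV_RX_OFFLOAD_TCP_LRO;
		/* Assumed application choice of 32 KB. */
		conf.rxmode.max_lro_pkt_size =
			RTE_MIN(32768u, dev_info.max_lro_pkt_size);
		/* Returns -EINVAL if the size fails the new validation. */
		return rte_eth_dev_configure(port_id, 1, 1, &conf);
	}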
* Re: [dpdk-dev] [PATCH v2 1/3] ethdev: support API to set max LRO packet size
2019-11-06 11:34 ` [dpdk-dev] [PATCH v2 1/3] ethdev: " Dekel Peled
@ 2019-11-06 12:26 ` Thomas Monjalon
2019-11-06 12:39 ` Dekel Peled
0 siblings, 1 reply; 79+ messages in thread
From: Thomas Monjalon @ 2019-11-06 12:26 UTC (permalink / raw)
To: Dekel Peled
Cc: john.mcnamara, marko.kovacevic, nhorman, ajit.khaparde,
somnath.kotur, anatoly.burakov, xuanziyang2, cloud.wangxiaoyun,
zhouguoyang, wenzhuo.lu, konstantin.ananyev, matan, shahafs,
viacheslavo, rmody, shshaikh, maxime.coquelin, tiwei.bie,
zhihong.wang, yongwang, ferruh.yigit, arybchenko, jingjing.wu,
bernard.iremonger, dev
06/11/2019 12:34, Dekel Peled:
> This patch implements [1], to support API for configuration and
> validation of max size for LRO aggregated packet.
> API change notice [2] is removed, and release notes for 19.11
> are updated accordingly.
>
> Various PMDs using LRO offload are updated, the new data members are
> initialized to ensure they don't fail validation.
>
> [1] http://patches.dpdk.org/patch/58217/
> [2] http://patches.dpdk.org/patch/57492/
>
> Signed-off-by: Dekel Peled <dekelp@mellanox.com>
> Reviewed-by: Andrew Rybchenko <arybchenko@solarflare.com>
> ---
[...]
> --- a/lib/librte_ethdev/rte_ethdev.c
> +++ b/lib/librte_ethdev/rte_ethdev.c
> @@ -1156,6 +1156,26 @@ struct rte_eth_dev *
> return name;
> }
>
> +static inline int
> +check_lro_pkt_size(uint16_t port_id, uint32_t config_size,
> + uint32_t dev_info_size)
> +{
> + int ret = 0;
> +
> + if (config_size > dev_info_size) {
> + RTE_ETHDEV_LOG(ERR, "Ethdev port_id=%d max_lro_pkt_size %u > "
> + "max allowed value %u\n",
Minor comment (can be fixed while merging):
it is better to keep fixed strings together so they can be grepped.
Here I would move " > " to the second line, so we can grep " > max allowed value ".
> + port_id, config_size, dev_info_size);
> + ret = -EINVAL;
> + } else if (config_size < RTE_ETHER_MIN_LEN) {
> + RTE_ETHDEV_LOG(ERR, "Ethdev port_id=%d max_lro_pkt_size %u < "
> + "min allowed value %u\n", port_id, config_size,
Same minor comment here.
> + (unsigned int)RTE_ETHER_MIN_LEN);
> + ret = -EINVAL;
> + }
> + return ret;
> +}
[...]
> --- a/lib/librte_ethdev/rte_ethdev.h
> +++ b/lib/librte_ethdev/rte_ethdev.h
> @@ -395,6 +395,8 @@ struct rte_eth_rxmode {
> + /** Maximal allowed size of LRO aggregated packet. */
Not sure, isn't it more correct to say "Maximum" in English?
> + uint32_t max_lro_pkt_size;
> @@ -1223,6 +1225,8 @@ struct rte_eth_dev_info {
> + /** Maximum configurable size of LRO aggregated packet. */
> + uint32_t max_lro_pkt_size;
Except minor comments above,
Acked-by: Thomas Monjalon <thomas@monjalon.net>
^ permalink raw reply [flat|nested] 79+ messages in thread
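The split Thomas asks for is what v3 (later in this thread) adopts; the first
log call becomes:

	RTE_ETHDEV_LOG(ERR, "Ethdev port_id=%d max_lro_pkt_size %u "
		"> max allowed value %u\n", port_id, config_size,
		dev_info_size);

so that grepping the source for "> max allowed value" finds the message.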
* Re: [dpdk-dev] [PATCH v2 1/3] ethdev: support API to set max LRO packet size
2019-11-06 12:26 ` Thomas Monjalon
@ 2019-11-06 12:39 ` Dekel Peled
0 siblings, 0 replies; 79+ messages in thread
From: Dekel Peled @ 2019-11-06 12:39 UTC (permalink / raw)
To: Thomas Monjalon
Cc: john.mcnamara, marko.kovacevic, nhorman, ajit.khaparde,
somnath.kotur, anatoly.burakov, xuanziyang2, cloud.wangxiaoyun,
zhouguoyang, wenzhuo.lu, konstantin.ananyev, Matan Azrad,
Shahaf Shuler, Slava Ovsiienko, rmody, shshaikh, maxime.coquelin,
tiwei.bie, zhihong.wang, yongwang, ferruh.yigit, arybchenko,
jingjing.wu, bernard.iremonger, dev
Thanks, PSB.
> -----Original Message-----
> From: Thomas Monjalon <thomas@monjalon.net>
> Sent: Wednesday, November 6, 2019 2:26 PM
> To: Dekel Peled <dekelp@mellanox.com>
> Cc: john.mcnamara@intel.com; marko.kovacevic@intel.com;
> nhorman@tuxdriver.com; ajit.khaparde@broadcom.com;
> somnath.kotur@broadcom.com; anatoly.burakov@intel.com;
> xuanziyang2@huawei.com; cloud.wangxiaoyun@huawei.com;
> zhouguoyang@huawei.com; wenzhuo.lu@intel.com;
> konstantin.ananyev@intel.com; Matan Azrad <matan@mellanox.com>;
> Shahaf Shuler <shahafs@mellanox.com>; Slava Ovsiienko
> <viacheslavo@mellanox.com>; rmody@marvell.com;
> shshaikh@marvell.com; maxime.coquelin@redhat.com;
> tiwei.bie@intel.com; zhihong.wang@intel.com; yongwang@vmware.com;
> ferruh.yigit@intel.com; arybchenko@solarflare.com; jingjing.wu@intel.com;
> bernard.iremonger@intel.com; dev@dpdk.org
> Subject: Re: [PATCH v2 1/3] ethdev: support API to set max LRO packet size
>
> 06/11/2019 12:34, Dekel Peled:
> > This patch implements [1], to support API for configuration and
> > validation of max size for LRO aggregated packet.
> > API change notice [2] is removed, and release notes for 19.11 are
> > updated accordingly.
> >
> > Various PMDs using LRO offload are updated, the new data members are
> > initialized to ensure they don't fail validation.
> >
> > [1] http://patches.dpdk.org/patch/58217/
> > [2] http://patches.dpdk.org/patch/57492/
> >
> > Signed-off-by: Dekel Peled <dekelp@mellanox.com>
> > Reviewed-by: Andrew Rybchenko <arybchenko@solarflare.com>
> > ---
> [...]
> > --- a/lib/librte_ethdev/rte_ethdev.c
> > +++ b/lib/librte_ethdev/rte_ethdev.c
> > @@ -1156,6 +1156,26 @@ struct rte_eth_dev *
> > return name;
> > }
> >
> > +static inline int
> > +check_lro_pkt_size(uint16_t port_id, uint32_t config_size,
> > + uint32_t dev_info_size)
> > +{
> > + int ret = 0;
> > +
> > + if (config_size > dev_info_size) {
> > + RTE_ETHDEV_LOG(ERR, "Ethdev port_id=%d max_lro_pkt_size %u > "
> > + "max allowed value %u\n",
>
> Minor comment (can be fixed while merging):
> it is better to keep fixed strings together so they can be grepped.
> Here I would move " > " to the second line, so we can grep " > max allowed value ".
>
I'll edit it in v3.
> > + port_id, config_size, dev_info_size);
> > + ret = -EINVAL;
> > + } else if (config_size < RTE_ETHER_MIN_LEN) {
> > + RTE_ETHDEV_LOG(ERR, "Ethdev port_id=%d max_lro_pkt_size %u < "
> > + "min allowed value %u\n", port_id, config_size,
>
> Same minor comment here.
>
I'll edit it in v3.
> > + (unsigned int)RTE_ETHER_MIN_LEN);
> > + ret = -EINVAL;
> > + }
> > + return ret;
> > +}
> [...]
> > --- a/lib/librte_ethdev/rte_ethdev.h
> > +++ b/lib/librte_ethdev/rte_ethdev.h
> > @@ -395,6 +395,8 @@ struct rte_eth_rxmode {
> > + /** Maximal allowed size of LRO aggregated packet. */
>
> Not sure, isn't it more correct to say "Maximum" in English?
>
I'll edit it in v3.
> > + uint32_t max_lro_pkt_size;
> > @@ -1223,6 +1225,8 @@ struct rte_eth_dev_info {
> > + /** Maximum configurable size of LRO aggregated packet. */
> > + uint32_t max_lro_pkt_size;
>
> Except minor comments above,
> Acked-by: Thomas Monjalon <thomas@monjalon.net>
>
^ permalink raw reply [flat|nested] 79+ messages in thread
* [dpdk-dev] [PATCH v2 2/3] net/mlx5: use API to set max LRO packet size
2019-11-06 11:34 ` [dpdk-dev] [PATCH v2 " Dekel Peled
2019-11-06 11:34 ` [dpdk-dev] [PATCH v2 1/3] ethdev: " Dekel Peled
@ 2019-11-06 11:34 ` Dekel Peled
2019-11-06 11:34 ` [dpdk-dev] [PATCH v2 3/3] app/testpmd: " Dekel Peled
2019-11-06 14:28 ` [dpdk-dev] [PATCH v3 0/3] support " Dekel Peled
3 siblings, 0 replies; 79+ messages in thread
From: Dekel Peled @ 2019-11-06 11:34 UTC (permalink / raw)
To: john.mcnamara, marko.kovacevic, nhorman, ajit.khaparde,
somnath.kotur, anatoly.burakov, xuanziyang2, cloud.wangxiaoyun,
zhouguoyang, wenzhuo.lu, konstantin.ananyev, matan, shahafs,
viacheslavo, rmody, shshaikh, maxime.coquelin, tiwei.bie,
zhihong.wang, yongwang, thomas, ferruh.yigit, arybchenko,
jingjing.wu, bernard.iremonger
Cc: dev
This patch implements use of the API for LRO aggregated packet
max size.
Rx queue creation is updated to use the relevant configuration.
Documentation is updated accordingly.
Signed-off-by: Dekel Peled <dekelp@mellanox.com>
---
doc/guides/nics/mlx5.rst | 2 ++
drivers/net/mlx5/mlx5_rxq.c | 4 +++-
2 files changed, 5 insertions(+), 1 deletion(-)
diff --git a/doc/guides/nics/mlx5.rst b/doc/guides/nics/mlx5.rst
index 4f1093f..3b10daf 100644
--- a/doc/guides/nics/mlx5.rst
+++ b/doc/guides/nics/mlx5.rst
@@ -207,6 +207,8 @@ Limitations
- KEEP_CRC offload cannot be supported with LRO.
- The first mbuf length, without head-room, must be big enough to include the
TCP header (122B).
+ - Rx queue with LRO offload enabled, receiving a non-LRO packet, can forward
+ it with size limited to max LRO size, not to max RX packet length.
Statistics
----------
diff --git a/drivers/net/mlx5/mlx5_rxq.c b/drivers/net/mlx5/mlx5_rxq.c
index 9423e7b..c725e14 100644
--- a/drivers/net/mlx5/mlx5_rxq.c
+++ b/drivers/net/mlx5/mlx5_rxq.c
@@ -1772,7 +1772,9 @@ struct mlx5_rxq_ctrl *
dev->data->dev_conf.rxmode.offloads;
unsigned int lro_on_queue = !!(offloads & DEV_RX_OFFLOAD_TCP_LRO);
const int mprq_en = mlx5_check_mprq_support(dev) > 0;
- unsigned int max_rx_pkt_len = dev->data->dev_conf.rxmode.max_rx_pkt_len;
+ unsigned int max_rx_pkt_len = lro_on_queue ?
+ dev->data->dev_conf.rxmode.max_lro_pkt_size :
+ dev->data->dev_conf.rxmode.max_rx_pkt_len;
unsigned int non_scatter_min_mbuf_size = max_rx_pkt_len +
RTE_PKTMBUF_HEADROOM;
unsigned int max_lro_size = 0;
--
1.8.3.1
^ permalink raw reply [flat|nested] 79+ messages in thread
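A worked example of the hunk above, assuming the default RTE_PKTMBUF_HEADROOM
of 128 bytes: on a queue with LRO enabled and rxmode.max_lro_pkt_size set to
mlx5's ceiling of UINT8_MAX * 256 = 65280 bytes, max_rx_pkt_len is now taken
from the LRO field, so non_scatter_min_mbuf_size becomes 65280 + 128 = 65408
bytes; a queue without LRO keeps sizing from rxmode.max_rx_pkt_len as before.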
* [dpdk-dev] [PATCH v2 3/3] app/testpmd: use API to set max LRO packet size
2019-11-06 11:34 ` [dpdk-dev] [PATCH v2 " Dekel Peled
2019-11-06 11:34 ` [dpdk-dev] [PATCH v2 1/3] ethdev: " Dekel Peled
2019-11-06 11:34 ` [dpdk-dev] [PATCH v2 2/3] net/mlx5: use " Dekel Peled
@ 2019-11-06 11:34 ` Dekel Peled
2019-11-06 12:35 ` Iremonger, Bernard
2019-11-06 14:28 ` [dpdk-dev] [PATCH v3 0/3] support " Dekel Peled
3 siblings, 1 reply; 79+ messages in thread
From: Dekel Peled @ 2019-11-06 11:34 UTC (permalink / raw)
To: john.mcnamara, marko.kovacevic, nhorman, ajit.khaparde,
somnath.kotur, anatoly.burakov, xuanziyang2, cloud.wangxiaoyun,
zhouguoyang, wenzhuo.lu, konstantin.ananyev, matan, shahafs,
viacheslavo, rmody, shshaikh, maxime.coquelin, tiwei.bie,
zhihong.wang, yongwang, thomas, ferruh.yigit, arybchenko,
jingjing.wu, bernard.iremonger
Cc: dev
This patch implements use of the API for LRO aggregated packet
max size.
It adds command-line and runtime commands to configure this value,
and adds option to show the supported value.
Documentation is updated accordingly.
Signed-off-by: Dekel Peled <dekelp@mellanox.com>
---
app/test-pmd/cmdline.c | 73 +++++++++++++++++++++++++++++
app/test-pmd/config.c | 2 +
app/test-pmd/parameters.c | 5 ++
app/test-pmd/testpmd.c | 1 +
doc/guides/testpmd_app_ug/run_app.rst | 5 ++
doc/guides/testpmd_app_ug/testpmd_funcs.rst | 9 ++++
6 files changed, 95 insertions(+)
diff --git a/app/test-pmd/cmdline.c b/app/test-pmd/cmdline.c
index 4478069..edfa60f 100644
--- a/app/test-pmd/cmdline.c
+++ b/app/test-pmd/cmdline.c
@@ -2037,6 +2037,78 @@ struct cmd_config_max_pkt_len_result {
},
};
+/* *** config max LRO aggregated packet size *** */
+struct cmd_config_max_lro_pkt_size_result {
+ cmdline_fixed_string_t port;
+ cmdline_fixed_string_t keyword;
+ cmdline_fixed_string_t all;
+ cmdline_fixed_string_t name;
+ uint32_t value;
+};
+
+static void
+cmd_config_max_lro_pkt_size_parsed(void *parsed_result,
+ __attribute__((unused)) struct cmdline *cl,
+ __attribute__((unused)) void *data)
+{
+ struct cmd_config_max_lro_pkt_size_result *res = parsed_result;
+ portid_t pid;
+
+ if (!all_ports_stopped()) {
+ printf("Please stop all ports first\n");
+ return;
+ }
+
+ RTE_ETH_FOREACH_DEV(pid) {
+ struct rte_port *port = &ports[pid];
+
+ if (!strcmp(res->name, "max-lro-pkt-size")) {
+ if (res->value ==
+ port->dev_conf.rxmode.max_lro_pkt_size)
+ return;
+
+ port->dev_conf.rxmode.max_lro_pkt_size = res->value;
+ } else {
+ printf("Unknown parameter\n");
+ return;
+ }
+ }
+
+ init_port_config();
+
+ cmd_reconfig_device_queue(RTE_PORT_ALL, 1, 1);
+}
+
+cmdline_parse_token_string_t cmd_config_max_lro_pkt_size_port =
+ TOKEN_STRING_INITIALIZER(struct cmd_config_max_lro_pkt_size_result,
+ port, "port");
+cmdline_parse_token_string_t cmd_config_max_lro_pkt_size_keyword =
+ TOKEN_STRING_INITIALIZER(struct cmd_config_max_lro_pkt_size_result,
+ keyword, "config");
+cmdline_parse_token_string_t cmd_config_max_lro_pkt_size_all =
+ TOKEN_STRING_INITIALIZER(struct cmd_config_max_lro_pkt_size_result,
+ all, "all");
+cmdline_parse_token_string_t cmd_config_max_lro_pkt_size_name =
+ TOKEN_STRING_INITIALIZER(struct cmd_config_max_lro_pkt_size_result,
+ name, "max-lro-pkt-size");
+cmdline_parse_token_num_t cmd_config_max_lro_pkt_size_value =
+ TOKEN_NUM_INITIALIZER(struct cmd_config_max_lro_pkt_size_result,
+ value, UINT32);
+
+cmdline_parse_inst_t cmd_config_max_lro_pkt_size = {
+ .f = cmd_config_max_lro_pkt_size_parsed,
+ .data = NULL,
+ .help_str = "port config all max-lro-pkt-size <value>",
+ .tokens = {
+ (void *)&cmd_config_max_lro_pkt_size_port,
+ (void *)&cmd_config_max_lro_pkt_size_keyword,
+ (void *)&cmd_config_max_lro_pkt_size_all,
+ (void *)&cmd_config_max_lro_pkt_size_name,
+ (void *)&cmd_config_max_lro_pkt_size_value,
+ NULL,
+ },
+};
+
/* *** configure port MTU *** */
struct cmd_config_mtu_result {
cmdline_fixed_string_t port;
@@ -19024,6 +19096,7 @@ struct cmd_show_port_supported_ptypes_result {
(cmdline_parse_inst_t *)&cmd_config_rx_tx,
(cmdline_parse_inst_t *)&cmd_config_mtu,
(cmdline_parse_inst_t *)&cmd_config_max_pkt_len,
+ (cmdline_parse_inst_t *)&cmd_config_max_lro_pkt_size,
(cmdline_parse_inst_t *)&cmd_config_rx_mode_flag,
(cmdline_parse_inst_t *)&cmd_config_rss,
(cmdline_parse_inst_t *)&cmd_config_rxtx_ring_size,
diff --git a/app/test-pmd/config.c b/app/test-pmd/config.c
index efe2812..50e6ac0 100644
--- a/app/test-pmd/config.c
+++ b/app/test-pmd/config.c
@@ -629,6 +629,8 @@ static int bus_match_all(const struct rte_bus *bus, const void *data)
printf("Minimum size of RX buffer: %u\n", dev_info.min_rx_bufsize);
printf("Maximum configurable length of RX packet: %u\n",
dev_info.max_rx_pktlen);
+ printf("Maximum configurable size of LRO aggregated packet: %u\n",
+ dev_info.max_lro_pkt_size);
if (dev_info.max_vfs)
printf("Maximum number of VFs: %u\n", dev_info.max_vfs);
if (dev_info.max_vmdq_pools)
diff --git a/app/test-pmd/parameters.c b/app/test-pmd/parameters.c
index 9ea87c1..3e371e2 100644
--- a/app/test-pmd/parameters.c
+++ b/app/test-pmd/parameters.c
@@ -592,6 +592,7 @@
{ "mbuf-size", 1, 0, 0 },
{ "total-num-mbufs", 1, 0, 0 },
{ "max-pkt-len", 1, 0, 0 },
+ { "max-lro-pkt-size", 1, 0, 0 },
{ "pkt-filter-mode", 1, 0, 0 },
{ "pkt-filter-report-hash", 1, 0, 0 },
{ "pkt-filter-size", 1, 0, 0 },
@@ -888,6 +889,10 @@
"Invalid max-pkt-len=%d - should be > %d\n",
n, RTE_ETHER_MIN_LEN);
}
+ if (!strcmp(lgopts[opt_idx].name, "max-lro-pkt-size")) {
+ n = atoi(optarg);
+ rx_mode.max_lro_pkt_size = (uint32_t) n;
+ }
if (!strcmp(lgopts[opt_idx].name, "pkt-filter-mode")) {
if (!strcmp(optarg, "signature"))
fdir_conf.mode =
diff --git a/app/test-pmd/testpmd.c b/app/test-pmd/testpmd.c
index 38acbc5..d4f67ec 100644
--- a/app/test-pmd/testpmd.c
+++ b/app/test-pmd/testpmd.c
@@ -419,6 +419,7 @@ struct fwd_engine * fwd_engines[] = {
struct rte_eth_rxmode rx_mode = {
.max_rx_pkt_len = RTE_ETHER_MAX_LEN,
/**< Default maximum frame length. */
+ .max_lro_pkt_size = RTE_ETHER_MAX_LEN,
};
struct rte_eth_txmode tx_mode = {
diff --git a/doc/guides/testpmd_app_ug/run_app.rst b/doc/guides/testpmd_app_ug/run_app.rst
index ef677ba..bc17f3f 100644
--- a/doc/guides/testpmd_app_ug/run_app.rst
+++ b/doc/guides/testpmd_app_ug/run_app.rst
@@ -112,6 +112,11 @@ The command line options are:
Set the maximum packet size to N bytes, where N >= 64. The default value is 1518.
+* ``--max-lro-pkt-size=N``
+
+ Set the maximum LRO aggregated packet size to N bytes, where N >= 64.
+ The default value is 1518.
+
* ``--eth-peers-configfile=name``
Use a configuration file containing the Ethernet addresses of the peer ports.
diff --git a/doc/guides/testpmd_app_ug/testpmd_funcs.rst b/doc/guides/testpmd_app_ug/testpmd_funcs.rst
index c68a742..0267295 100644
--- a/doc/guides/testpmd_app_ug/testpmd_funcs.rst
+++ b/doc/guides/testpmd_app_ug/testpmd_funcs.rst
@@ -2139,6 +2139,15 @@ Set the maximum packet length::
This is equivalent to the ``--max-pkt-len`` command-line option.
+port config - max-lro-pkt-size
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+
+Set the maximum LRO aggregated packet size::
+
+ testpmd> port config all max-lro-pkt-size (value)
+
+This is equivalent to the ``--max-lro-pkt-size`` command-line option.
+
port config - Drop Packets
~~~~~~~~~~~~~~~~~~~~~~~~~~
--
1.8.3.1
^ permalink raw reply [flat|nested] 79+ messages in thread
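A hypothetical testpmd session exercising the additions above (the PCI address
and the 9000-byte value are assumptions, and the value must still pass the
ethdev validation against the device capability):

	./testpmd -w 0000:03:00.0 -- -i --max-lro-pkt-size=9000
	testpmd> port stop all
	testpmd> port config all max-lro-pkt-size 9000
	testpmd> port start all
	testpmd> show port info 0

The runtime command requires all ports to be stopped, and "show port info" now
prints the "Maximum configurable size of LRO aggregated packet" line added in
config.c.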
* Re: [dpdk-dev] [PATCH v2 3/3] app/testpmd: use API to set max LRO packet size
2019-11-06 11:34 ` [dpdk-dev] [PATCH v2 3/3] app/testpmd: " Dekel Peled
@ 2019-11-06 12:35 ` Iremonger, Bernard
2019-11-06 13:14 ` Dekel Peled
0 siblings, 1 reply; 79+ messages in thread
From: Iremonger, Bernard @ 2019-11-06 12:35 UTC (permalink / raw)
To: Dekel Peled, Mcnamara, John, Kovacevic, Marko, nhorman,
ajit.khaparde, somnath.kotur, Burakov, Anatoly, xuanziyang2,
cloud.wangxiaoyun, zhouguoyang, Lu, Wenzhuo, Ananyev, Konstantin,
matan, shahafs, viacheslavo, rmody, shshaikh, maxime.coquelin,
Bie, Tiwei, Wang, Zhihong, yongwang, thomas, Yigit, Ferruh,
arybchenko, Wu, Jingjing
Cc: dev
Hi Dekel,
> -----Original Message-----
> From: Dekel Peled <dekelp@mellanox.com>
> Sent: Wednesday, November 6, 2019 11:35 AM
> To: Mcnamara, John <john.mcnamara@intel.com>; Kovacevic, Marko
> <marko.kovacevic@intel.com>; nhorman@tuxdriver.com;
> ajit.khaparde@broadcom.com; somnath.kotur@broadcom.com; Burakov,
> Anatoly <anatoly.burakov@intel.com>; xuanziyang2@huawei.com;
> cloud.wangxiaoyun@huawei.com; zhouguoyang@huawei.com; Lu, Wenzhuo
> <wenzhuo.lu@intel.com>; Ananyev, Konstantin
> <konstantin.ananyev@intel.com>; matan@mellanox.com;
> shahafs@mellanox.com; viacheslavo@mellanox.com; rmody@marvell.com;
> shshaikh@marvell.com; maxime.coquelin@redhat.com; Bie, Tiwei
> <tiwei.bie@intel.com>; Wang, Zhihong <zhihong.wang@intel.com>;
> yongwang@vmware.com; thomas@monjalon.net; Yigit, Ferruh
> <ferruh.yigit@intel.com>; arybchenko@solarflare.com; Wu, Jingjing
> <jingjing.wu@intel.com>; Iremonger, Bernard
> <bernard.iremonger@intel.com>
> Cc: dev@dpdk.org
> Subject: [PATCH v2 3/3] app/testpmd: use API to set max LRO packet size
>
> This patch implements use of the API for LRO aggregated packet max size.
> It adds command-line and runtime commands to configure this value, and
> adds option to show the supported value.
> Documentation is updated accordingly.
>
> Signed-off-by: Dekel Peled <dekelp@mellanox.com>
> ---
> app/test-pmd/cmdline.c | 73 +++++++++++++++++++++++++++++
> app/test-pmd/config.c | 2 +
> app/test-pmd/parameters.c | 5 ++
> app/test-pmd/testpmd.c | 1 +
> doc/guides/testpmd_app_ug/run_app.rst | 5 ++
> doc/guides/testpmd_app_ug/testpmd_funcs.rst | 9 ++++
> 6 files changed, 95 insertions(+)
>
> diff --git a/app/test-pmd/cmdline.c b/app/test-pmd/cmdline.c
> index 4478069..edfa60f 100644
> --- a/app/test-pmd/cmdline.c
> +++ b/app/test-pmd/cmdline.c
> @@ -2037,6 +2037,78 @@ struct cmd_config_max_pkt_len_result {
> },
> };
>
> +/* *** config max LRO aggregated packet size *** */
> +struct cmd_config_max_lro_pkt_size_result {
> + cmdline_fixed_string_t port;
> + cmdline_fixed_string_t keyword;
> + cmdline_fixed_string_t all;
> + cmdline_fixed_string_t name;
> + uint32_t value;
> +};
> +
> +static void
> +cmd_config_max_lro_pkt_size_parsed(void *parsed_result,
> + __attribute__((unused)) struct cmdline *cl,
> + __attribute__((unused)) void *data) {
> + struct cmd_config_max_lro_pkt_size_result *res = parsed_result;
> + portid_t pid;
> +
> + if (!all_ports_stopped()) {
> + printf("Please stop all ports first\n");
> + return;
> + }
> +
> + RTE_ETH_FOREACH_DEV(pid) {
> + struct rte_port *port = &ports[pid];
> +
> + if (!strcmp(res->name, "max-lro-pkt-size")) {
> + if (res->value ==
> + port->dev_conf.rxmode.max_lro_pkt_size)
> + return;
> +
Should there be a check on the input value, max is RTE_ETHER_MAX_LEN ?
> + port->dev_conf.rxmode.max_lro_pkt_size = res->value;
> + } else {
> + printf("Unknown parameter\n");
> + return;
> + }
> + }
> +
> + init_port_config();
> +
> + cmd_reconfig_device_queue(RTE_PORT_ALL, 1, 1); }
> +
> +cmdline_parse_token_string_t cmd_config_max_lro_pkt_size_port =
> +	TOKEN_STRING_INITIALIZER(struct cmd_config_max_lro_pkt_size_result,
> +		port, "port");
> +cmdline_parse_token_string_t cmd_config_max_lro_pkt_size_keyword =
> +	TOKEN_STRING_INITIALIZER(struct cmd_config_max_lro_pkt_size_result,
> +		keyword, "config");
> +cmdline_parse_token_string_t cmd_config_max_lro_pkt_size_all =
> +	TOKEN_STRING_INITIALIZER(struct cmd_config_max_lro_pkt_size_result,
> +		all, "all");
> +cmdline_parse_token_string_t cmd_config_max_lro_pkt_size_name =
> +	TOKEN_STRING_INITIALIZER(struct cmd_config_max_lro_pkt_size_result,
> +		name, "max-lro-pkt-size");
> +cmdline_parse_token_num_t cmd_config_max_lro_pkt_size_value =
> +	TOKEN_NUM_INITIALIZER(struct cmd_config_max_lro_pkt_size_result,
> +		value, UINT32);
> +
> +cmdline_parse_inst_t cmd_config_max_lro_pkt_size = {
> + .f = cmd_config_max_lro_pkt_size_parsed,
> + .data = NULL,
> + .help_str = "port config all max-lro-pkt-size <value>",
> + .tokens = {
> + (void *)&cmd_config_max_lro_pkt_size_port,
> + (void *)&cmd_config_max_lro_pkt_size_keyword,
> + (void *)&cmd_config_max_lro_pkt_size_all,
> + (void *)&cmd_config_max_lro_pkt_size_name,
> + (void *)&cmd_config_max_lro_pkt_size_value,
> + NULL,
> + },
> +};
> +
> /* *** configure port MTU *** */
> struct cmd_config_mtu_result {
> cmdline_fixed_string_t port;
> @@ -19024,6 +19096,7 @@ struct cmd_show_port_supported_ptypes_result {
> (cmdline_parse_inst_t *)&cmd_config_rx_tx,
> (cmdline_parse_inst_t *)&cmd_config_mtu,
> (cmdline_parse_inst_t *)&cmd_config_max_pkt_len,
> + (cmdline_parse_inst_t *)&cmd_config_max_lro_pkt_size,
> (cmdline_parse_inst_t *)&cmd_config_rx_mode_flag,
> (cmdline_parse_inst_t *)&cmd_config_rss,
> (cmdline_parse_inst_t *)&cmd_config_rxtx_ring_size,
> diff --git a/app/test-pmd/config.c b/app/test-pmd/config.c
> index efe2812..50e6ac0 100644
> --- a/app/test-pmd/config.c
> +++ b/app/test-pmd/config.c
> @@ -629,6 +629,8 @@ static int bus_match_all(const struct rte_bus *bus, const void *data)
> printf("Minimum size of RX buffer: %u\n",
> dev_info.min_rx_bufsize);
> printf("Maximum configurable length of RX packet: %u\n",
> dev_info.max_rx_pktlen);
> + printf("Maximum configurable size of LRO aggregated packet: %u\n",
> + dev_info.max_lro_pkt_size);
> if (dev_info.max_vfs)
> printf("Maximum number of VFs: %u\n", dev_info.max_vfs);
> if (dev_info.max_vmdq_pools)
> diff --git a/app/test-pmd/parameters.c b/app/test-pmd/parameters.c
> index 9ea87c1..3e371e2 100644
> --- a/app/test-pmd/parameters.c
> +++ b/app/test-pmd/parameters.c
> @@ -592,6 +592,7 @@
> { "mbuf-size", 1, 0, 0 },
> { "total-num-mbufs", 1, 0, 0 },
> { "max-pkt-len", 1, 0, 0 },
> + { "max-lro-pkt-size", 1, 0, 0 },
The max-lro-pkt-size option should be documented in the usage() function around line 110 in parameters.c
> { "pkt-filter-mode", 1, 0, 0 },
> { "pkt-filter-report-hash", 1, 0, 0 },
> { "pkt-filter-size", 1, 0, 0 },
> @@ -888,6 +889,10 @@
> "Invalid max-pkt-len=%d -
> should be > %d\n",
> n, RTE_ETHER_MIN_LEN);
> }
> + if (!strcmp(lgopts[opt_idx].name, "max-lro-pkt-size")) {
> + n = atoi(optarg);
Should there be a check on the value input, max value is RTE_ETHER_MAX_LEN?
> + rx_mode.max_lro_pkt_size = (uint32_t) n;
> + }
> if (!strcmp(lgopts[opt_idx].name, "pkt-filter-mode")) {
> if (!strcmp(optarg, "signature"))
> fdir_conf.mode =
> diff --git a/app/test-pmd/testpmd.c b/app/test-pmd/testpmd.c
> index 38acbc5..d4f67ec 100644
> --- a/app/test-pmd/testpmd.c
> +++ b/app/test-pmd/testpmd.c
> @@ -419,6 +419,7 @@ struct fwd_engine * fwd_engines[] = {
> struct rte_eth_rxmode rx_mode = {
> .max_rx_pkt_len = RTE_ETHER_MAX_LEN,
> /**< Default maximum frame length. */
> + .max_lro_pkt_size = RTE_ETHER_MAX_LEN,
> };
>
> struct rte_eth_txmode tx_mode = {
> diff --git a/doc/guides/testpmd_app_ug/run_app.rst
> b/doc/guides/testpmd_app_ug/run_app.rst
> index ef677ba..bc17f3f 100644
> --- a/doc/guides/testpmd_app_ug/run_app.rst
> +++ b/doc/guides/testpmd_app_ug/run_app.rst
> @@ -112,6 +112,11 @@ The command line options are:
>
> Set the maximum packet size to N bytes, where N >= 64. The default value
> is 1518.
>
> +* ``--max-lro-pkt-size=N``
> +
> + Set the maximum LRO aggregated packet size to N bytes, where N >= 64.
> + The default value is 1518.
Should a max value be specified ?
> +
> * ``--eth-peers-configfile=name``
>
> Use a configuration file containing the Ethernet addresses of the peer
> ports.
> diff --git a/doc/guides/testpmd_app_ug/testpmd_funcs.rst
> b/doc/guides/testpmd_app_ug/testpmd_funcs.rst
> index c68a742..0267295 100644
> --- a/doc/guides/testpmd_app_ug/testpmd_funcs.rst
> +++ b/doc/guides/testpmd_app_ug/testpmd_funcs.rst
> @@ -2139,6 +2139,15 @@ Set the maximum packet length::
>
> This is equivalent to the ``--max-pkt-len`` command-line option.
>
> +port config - max-lro-pkt-size
> +~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
> +
> +Set the maximum LRO aggregated packet size::
> +
> + testpmd> port config all max-lro-pkt-size (value)
> +
> +This is equivalent to the ``--max-lro-pkt-size`` command-line option.
> +
> port config - Drop Packets
> ~~~~~~~~~~~~~~~~~~~~~~~~~~
>
> --
> 1.8.3.1
Regards,
Bernard.
^ permalink raw reply [flat|nested] 79+ messages in thread
* Re: [dpdk-dev] [PATCH v2 3/3] app/testpmd: use API to set max LRO packet size
2019-11-06 12:35 ` Iremonger, Bernard
@ 2019-11-06 13:14 ` Dekel Peled
0 siblings, 0 replies; 79+ messages in thread
From: Dekel Peled @ 2019-11-06 13:14 UTC (permalink / raw)
To: Iremonger, Bernard, Mcnamara, John, Kovacevic, Marko, nhorman,
ajit.khaparde, somnath.kotur, Burakov, Anatoly, xuanziyang2,
cloud.wangxiaoyun, zhouguoyang, Lu, Wenzhuo, Ananyev, Konstantin,
Matan Azrad, Shahaf Shuler, Slava Ovsiienko, rmody, shshaikh,
maxime.coquelin, Bie, Tiwei, Wang, Zhihong, yongwang,
Thomas Monjalon, Yigit, Ferruh, arybchenko, Wu, Jingjing
Cc: dev
Thanks, PSB.
> -----Original Message-----
> From: Iremonger, Bernard <bernard.iremonger@intel.com>
> Sent: Wednesday, November 6, 2019 2:36 PM
> To: Dekel Peled <dekelp@mellanox.com>; Mcnamara, John
> <john.mcnamara@intel.com>; Kovacevic, Marko
> <marko.kovacevic@intel.com>; nhorman@tuxdriver.com;
> ajit.khaparde@broadcom.com; somnath.kotur@broadcom.com; Burakov,
> Anatoly <anatoly.burakov@intel.com>; xuanziyang2@huawei.com;
> cloud.wangxiaoyun@huawei.com; zhouguoyang@huawei.com; Lu, Wenzhuo
> <wenzhuo.lu@intel.com>; Ananyev, Konstantin
> <konstantin.ananyev@intel.com>; Matan Azrad <matan@mellanox.com>;
> Shahaf Shuler <shahafs@mellanox.com>; Slava Ovsiienko
> <viacheslavo@mellanox.com>; rmody@marvell.com;
> shshaikh@marvell.com; maxime.coquelin@redhat.com; Bie, Tiwei
> <tiwei.bie@intel.com>; Wang, Zhihong <zhihong.wang@intel.com>;
> yongwang@vmware.com; Thomas Monjalon <thomas@monjalon.net>; Yigit,
> Ferruh <ferruh.yigit@intel.com>; arybchenko@solarflare.com; Wu, Jingjing
> <jingjing.wu@intel.com>
> Cc: dev@dpdk.org
> Subject: RE: [PATCH v2 3/3] app/testpmd: use API to set max LRO packet size
>
> Hi Dekel,
>
> > -----Original Message-----
> > From: Dekel Peled <dekelp@mellanox.com>
> > Sent: Wednesday, November 6, 2019 11:35 AM
> > To: Mcnamara, John <john.mcnamara@intel.com>; Kovacevic, Marko
> > <marko.kovacevic@intel.com>; nhorman@tuxdriver.com;
> > ajit.khaparde@broadcom.com; somnath.kotur@broadcom.com; Burakov,
> > Anatoly <anatoly.burakov@intel.com>; xuanziyang2@huawei.com;
> > cloud.wangxiaoyun@huawei.com; zhouguoyang@huawei.com; Lu,
> Wenzhuo
> > <wenzhuo.lu@intel.com>; Ananyev, Konstantin
> > <konstantin.ananyev@intel.com>; matan@mellanox.com;
> > shahafs@mellanox.com; viacheslavo@mellanox.com;
> rmody@marvell.com;
> > shshaikh@marvell.com; maxime.coquelin@redhat.com; Bie, Tiwei
> > <tiwei.bie@intel.com>; Wang, Zhihong <zhihong.wang@intel.com>;
> > yongwang@vmware.com; thomas@monjalon.net; Yigit, Ferruh
> > <ferruh.yigit@intel.com>; arybchenko@solarflare.com; Wu, Jingjing
> > <jingjing.wu@intel.com>; Iremonger, Bernard
> > <bernard.iremonger@intel.com>
> > Cc: dev@dpdk.org
> > Subject: [PATCH v2 3/3] app/testpmd: use API to set max LRO packet
> > size
> >
> > This patch implements use of the API for LRO aggregated packet max size.
> > It adds command-line and runtime commands to configure this value, and
> > adds option to show the supported value.
> > Documentation is updated accordingly.
> >
> > Signed-off-by: Dekel Peled <dekelp@mellanox.com>
> > ---
> > app/test-pmd/cmdline.c | 73 +++++++++++++++++++++++++++++
> > app/test-pmd/config.c | 2 +
> > app/test-pmd/parameters.c | 5 ++
> > app/test-pmd/testpmd.c | 1 +
> > doc/guides/testpmd_app_ug/run_app.rst | 5 ++
> > doc/guides/testpmd_app_ug/testpmd_funcs.rst | 9 ++++
> > 6 files changed, 95 insertions(+)
> >
> > diff --git a/app/test-pmd/cmdline.c b/app/test-pmd/cmdline.c
> > index 4478069..edfa60f 100644
> > --- a/app/test-pmd/cmdline.c
> > +++ b/app/test-pmd/cmdline.c
> > @@ -2037,6 +2037,78 @@ struct cmd_config_max_pkt_len_result {
> > },
> > };
> >
> > +/* *** config max LRO aggregated packet size *** */
> > +struct cmd_config_max_lro_pkt_size_result {
> > + cmdline_fixed_string_t port;
> > + cmdline_fixed_string_t keyword;
> > + cmdline_fixed_string_t all;
> > + cmdline_fixed_string_t name;
> > + uint32_t value;
> > +};
> > +
> > +static void
> > +cmd_config_max_lro_pkt_size_parsed(void *parsed_result,
> > + __attribute__((unused)) struct cmdline *cl,
> > + __attribute__((unused)) void *data) {
> > + struct cmd_config_max_lro_pkt_size_result *res = parsed_result;
> > + portid_t pid;
> > +
> > + if (!all_ports_stopped()) {
> > + printf("Please stop all ports first\n");
> > + return;
> > + }
> > +
> > + RTE_ETH_FOREACH_DEV(pid) {
> > + struct rte_port *port = &ports[pid];
> > +
> > + if (!strcmp(res->name, "max-lro-pkt-size")) {
> > + if (res->value ==
> > + port->dev_conf.rxmode.max_lro_pkt_size)
> > + return;
> > +
>
> Should there be a check on the input value, max is RTE_ETHER_MAX_LEN ?
>
Max is device specific, can't check it here.
>
> > + port->dev_conf.rxmode.max_lro_pkt_size = res->value;
> > + } else {
> > + printf("Unknown parameter\n");
> > + return;
> > + }
> > + }
> > +
> > + init_port_config();
> > +
> > + cmd_reconfig_device_queue(RTE_PORT_ALL, 1, 1); }
> > +
> > +cmdline_parse_token_string_t cmd_config_max_lro_pkt_size_port =
> > +	TOKEN_STRING_INITIALIZER(struct cmd_config_max_lro_pkt_size_result,
> > +		port, "port");
> > +cmdline_parse_token_string_t cmd_config_max_lro_pkt_size_keyword =
> > +	TOKEN_STRING_INITIALIZER(struct cmd_config_max_lro_pkt_size_result,
> > +		keyword, "config");
> > +cmdline_parse_token_string_t cmd_config_max_lro_pkt_size_all =
> > +	TOKEN_STRING_INITIALIZER(struct cmd_config_max_lro_pkt_size_result,
> > +		all, "all");
> > +cmdline_parse_token_string_t cmd_config_max_lro_pkt_size_name =
> > +	TOKEN_STRING_INITIALIZER(struct cmd_config_max_lro_pkt_size_result,
> > +		name, "max-lro-pkt-size");
> > +cmdline_parse_token_num_t cmd_config_max_lro_pkt_size_value =
> > +	TOKEN_NUM_INITIALIZER(struct cmd_config_max_lro_pkt_size_result,
> > +		value, UINT32);
> > +
> > +cmdline_parse_inst_t cmd_config_max_lro_pkt_size = {
> > + .f = cmd_config_max_lro_pkt_size_parsed,
> > + .data = NULL,
> > + .help_str = "port config all max-lro-pkt-size <value>",
> > + .tokens = {
> > + (void *)&cmd_config_max_lro_pkt_size_port,
> > + (void *)&cmd_config_max_lro_pkt_size_keyword,
> > + (void *)&cmd_config_max_lro_pkt_size_all,
> > + (void *)&cmd_config_max_lro_pkt_size_name,
> > + (void *)&cmd_config_max_lro_pkt_size_value,
> > + NULL,
> > + },
> > +};
> > +
> > /* *** configure port MTU *** */
> > struct cmd_config_mtu_result {
> > cmdline_fixed_string_t port;
> > @@ -19024,6 +19096,7 @@ struct cmd_show_port_supported_ptypes_result {
> > (cmdline_parse_inst_t *)&cmd_config_rx_tx,
> > (cmdline_parse_inst_t *)&cmd_config_mtu,
> > (cmdline_parse_inst_t *)&cmd_config_max_pkt_len,
> > + (cmdline_parse_inst_t *)&cmd_config_max_lro_pkt_size,
> > (cmdline_parse_inst_t *)&cmd_config_rx_mode_flag,
> > (cmdline_parse_inst_t *)&cmd_config_rss,
> > (cmdline_parse_inst_t *)&cmd_config_rxtx_ring_size,
> > diff --git a/app/test-pmd/config.c b/app/test-pmd/config.c
> > index efe2812..50e6ac0 100644
> > --- a/app/test-pmd/config.c
> > +++ b/app/test-pmd/config.c
> > @@ -629,6 +629,8 @@ static int bus_match_all(const struct rte_bus *bus, const void *data)
> > printf("Minimum size of RX buffer: %u\n",
> dev_info.min_rx_bufsize);
> > printf("Maximum configurable length of RX packet: %u\n",
> > dev_info.max_rx_pktlen);
> > + printf("Maximum configurable size of LRO aggregated packet: %u\n",
> > + dev_info.max_lro_pkt_size);
> > if (dev_info.max_vfs)
> > printf("Maximum number of VFs: %u\n", dev_info.max_vfs);
> > if (dev_info.max_vmdq_pools)
> > diff --git a/app/test-pmd/parameters.c b/app/test-pmd/parameters.c
> > index 9ea87c1..3e371e2 100644
> > --- a/app/test-pmd/parameters.c
> > +++ b/app/test-pmd/parameters.c
> > @@ -592,6 +592,7 @@
> > { "mbuf-size", 1, 0, 0 },
> > { "total-num-mbufs", 1, 0, 0 },
> > { "max-pkt-len", 1, 0, 0 },
> > + { "max-lro-pkt-size", 1, 0, 0 },
>
> The max-lro-pkt-size option should be documented in the usage() function
> around line 110 in parameters.c
>
I'll add it in v3.
> > { "pkt-filter-mode", 1, 0, 0 },
> > { "pkt-filter-report-hash", 1, 0, 0 },
> > { "pkt-filter-size", 1, 0, 0 },
> > @@ -888,6 +889,10 @@
> > "Invalid max-pkt-len=%d -
> > should be > %d\n",
> > n, RTE_ETHER_MIN_LEN);
> > }
> > + if (!strcmp(lgopts[opt_idx].name, "max-lro-pkt-size")) {
> > + n = atoi(optarg);
>
> Should there be a check on the value input, max value is
> RTE_ETHER_MAX_LEN?
Max is device specific, can't check it here.
>
> > + rx_mode.max_lro_pkt_size = (uint32_t) n;
> > + }
> > if (!strcmp(lgopts[opt_idx].name, "pkt-filter-mode")) {
> > if (!strcmp(optarg, "signature"))
> > fdir_conf.mode =
> > diff --git a/app/test-pmd/testpmd.c b/app/test-pmd/testpmd.c
> > index 38acbc5..d4f67ec 100644
> > --- a/app/test-pmd/testpmd.c
> > +++ b/app/test-pmd/testpmd.c
> > @@ -419,6 +419,7 @@ struct fwd_engine * fwd_engines[] = {
> > struct rte_eth_rxmode rx_mode = {
> > .max_rx_pkt_len = RTE_ETHER_MAX_LEN,
> > /**< Default maximum frame length. */
> > + .max_lro_pkt_size = RTE_ETHER_MAX_LEN,
> > };
> >
> > struct rte_eth_txmode tx_mode = {
> > diff --git a/doc/guides/testpmd_app_ug/run_app.rst
> > b/doc/guides/testpmd_app_ug/run_app.rst
> > index ef677ba..bc17f3f 100644
> > --- a/doc/guides/testpmd_app_ug/run_app.rst
> > +++ b/doc/guides/testpmd_app_ug/run_app.rst
> > @@ -112,6 +112,11 @@ The command line options are:
> >
> > Set the maximum packet size to N bytes, where N >= 64. The
> > default value is 1518.
> >
> > +* ``--max-lro-pkt-size=N``
> > +
> > + Set the maximum LRO aggregated packet size to N bytes, where N >= 64.
> > + The default value is 1518.
>
> Should a max value be specified ?
Max is device specific, can't specify it here.
>
> > +
> > * ``--eth-peers-configfile=name``
> >
> > Use a configuration file containing the Ethernet addresses of the
> > peer ports.
> > diff --git a/doc/guides/testpmd_app_ug/testpmd_funcs.rst
> > b/doc/guides/testpmd_app_ug/testpmd_funcs.rst
> > index c68a742..0267295 100644
> > --- a/doc/guides/testpmd_app_ug/testpmd_funcs.rst
> > +++ b/doc/guides/testpmd_app_ug/testpmd_funcs.rst
> > @@ -2139,6 +2139,15 @@ Set the maximum packet length::
> >
> > This is equivalent to the ``--max-pkt-len`` command-line option.
> >
> > +port config - max-lro-pkt-size
> > +~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
> > +
> > +Set the maximum LRO aggregated packet size::
> > +
> > + testpmd> port config all max-lro-pkt-size (value)
> > +
> > +This is equivalent to the ``--max-lro-pkt-size`` command-line option.
> > +
> > port config - Drop Packets
> > ~~~~~~~~~~~~~~~~~~~~~~~~~~
> >
> > --
> > 1.8.3.1
>
> Regards,
>
> Bernard.
^ permalink raw reply [flat|nested] 79+ messages in thread
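The division of labour settled in this exchange (testpmd stores the raw value;
the ethdev layer validates it against the device) reduces to the rule below,
condensed here from the ethdev patch in this series:

	/* With DEV_RX_OFFLOAD_TCP_LRO set, a configuration is accepted iff
	 *   RTE_ETHER_MIN_LEN (64) <= max_lro_pkt_size
	 *                          <= dev_info.max_lro_pkt_size
	 * Otherwise rte_eth_dev_configure() and rte_eth_rx_queue_setup()
	 * return -EINVAL from check_lro_pkt_size().
	 */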
* [dpdk-dev] [PATCH v3 0/3] support API to set max LRO packet size
2019-11-06 11:34 ` [dpdk-dev] [PATCH v2 " Dekel Peled
` (2 preceding siblings ...)
2019-11-06 11:34 ` [dpdk-dev] [PATCH v2 3/3] app/testpmd: " Dekel Peled
@ 2019-11-06 14:28 ` Dekel Peled
2019-11-06 14:28 ` [dpdk-dev] [PATCH v3 1/3] ethdev: " Dekel Peled
` (4 more replies)
3 siblings, 5 replies; 79+ messages in thread
From: Dekel Peled @ 2019-11-06 14:28 UTC (permalink / raw)
To: john.mcnamara, marko.kovacevic, nhorman, ajit.khaparde,
somnath.kotur, anatoly.burakov, xuanziyang2, cloud.wangxiaoyun,
zhouguoyang, wenzhuo.lu, konstantin.ananyev, matan, shahafs,
viacheslavo, rmody, shshaikh, maxime.coquelin, tiwei.bie,
zhihong.wang, yongwang, thomas, ferruh.yigit, arybchenko,
jingjing.wu, bernard.iremonger
Cc: dev
This series implements support and use of an API for configuration and
validation of the maximum size of an LRO aggregated packet.
v2: Updated ethdev patch per review comments.
v3: Updated ethdev and testpmd patches per review comments.
Dekel Peled (3):
ethdev: support API to set max LRO packet size
net/mlx5: use API to set max LRO packet size
app/testpmd: use API to set max LRO packet size
app/test-pmd/cmdline.c | 73 +++++++++++++++++++++++++++++
app/test-pmd/config.c | 2 +
app/test-pmd/parameters.c | 7 +++
app/test-pmd/testpmd.c | 1 +
doc/guides/nics/features.rst | 2 +
doc/guides/nics/mlx5.rst | 2 +
doc/guides/rel_notes/deprecation.rst | 4 --
doc/guides/rel_notes/release_19_11.rst | 8 ++++
doc/guides/testpmd_app_ug/run_app.rst | 5 ++
doc/guides/testpmd_app_ug/testpmd_funcs.rst | 9 ++++
drivers/net/bnxt/bnxt_ethdev.c | 1 +
drivers/net/hinic/hinic_pmd_ethdev.c | 1 +
drivers/net/ixgbe/ixgbe_ethdev.c | 2 +
drivers/net/ixgbe/ixgbe_vf_representor.c | 1 +
drivers/net/mlx5/mlx5.h | 3 ++
drivers/net/mlx5/mlx5_ethdev.c | 1 +
drivers/net/mlx5/mlx5_rxq.c | 5 +-
drivers/net/qede/qede_ethdev.c | 1 +
drivers/net/virtio/virtio_ethdev.c | 1 +
drivers/net/vmxnet3/vmxnet3_ethdev.c | 1 +
lib/librte_ethdev/rte_ethdev.c | 44 +++++++++++++++++
lib/librte_ethdev/rte_ethdev.h | 4 ++
22 files changed, 172 insertions(+), 6 deletions(-)
--
1.8.3.1
^ permalink raw reply [flat|nested] 79+ messages in thread
* [dpdk-dev] [PATCH v3 1/3] ethdev: support API to set max LRO packet size
2019-11-06 14:28 ` [dpdk-dev] [PATCH v3 0/3] support " Dekel Peled
@ 2019-11-06 14:28 ` Dekel Peled
2019-11-07 11:57 ` [dpdk-dev] [EXT] " Shahed Shaikh
2019-11-06 14:28 ` [dpdk-dev] [PATCH v3 2/3] net/mlx5: use " Dekel Peled
` (3 subsequent siblings)
4 siblings, 1 reply; 79+ messages in thread
From: Dekel Peled @ 2019-11-06 14:28 UTC (permalink / raw)
To: john.mcnamara, marko.kovacevic, nhorman, ajit.khaparde,
somnath.kotur, anatoly.burakov, xuanziyang2, cloud.wangxiaoyun,
zhouguoyang, wenzhuo.lu, konstantin.ananyev, matan, shahafs,
viacheslavo, rmody, shshaikh, maxime.coquelin, tiwei.bie,
zhihong.wang, yongwang, thomas, ferruh.yigit, arybchenko,
jingjing.wu, bernard.iremonger
Cc: dev
This patch implements [1], to support API for configuration and
validation of max size for LRO aggregated packet.
API change notice [2] is removed, and release notes for 19.11
are updated accordingly.
Various PMDs using LRO offload are updated, the new data members are
initialized to ensure they don't fail validation.
[1] http://patches.dpdk.org/patch/58217/
[2] http://patches.dpdk.org/patch/57492/
Signed-off-by: Dekel Peled <dekelp@mellanox.com>
Reviewed-by: Andrew Rybchenko <arybchenko@solarflare.com>
Acked-by: Thomas Monjalon <thomas@monjalon.net>
---
doc/guides/nics/features.rst | 2 ++
doc/guides/rel_notes/deprecation.rst | 4 ---
doc/guides/rel_notes/release_19_11.rst | 8 ++++++
drivers/net/bnxt/bnxt_ethdev.c | 1 +
drivers/net/hinic/hinic_pmd_ethdev.c | 1 +
drivers/net/ixgbe/ixgbe_ethdev.c | 2 ++
drivers/net/ixgbe/ixgbe_vf_representor.c | 1 +
drivers/net/mlx5/mlx5.h | 3 +++
drivers/net/mlx5/mlx5_ethdev.c | 1 +
drivers/net/mlx5/mlx5_rxq.c | 1 -
drivers/net/qede/qede_ethdev.c | 1 +
drivers/net/virtio/virtio_ethdev.c | 1 +
drivers/net/vmxnet3/vmxnet3_ethdev.c | 1 +
lib/librte_ethdev/rte_ethdev.c | 44 ++++++++++++++++++++++++++++++++
lib/librte_ethdev/rte_ethdev.h | 4 +++
15 files changed, 70 insertions(+), 5 deletions(-)
diff --git a/doc/guides/nics/features.rst b/doc/guides/nics/features.rst
index d966968..4d1bb5a 100644
--- a/doc/guides/nics/features.rst
+++ b/doc/guides/nics/features.rst
@@ -193,10 +193,12 @@ LRO
Supports Large Receive Offload.
* **[uses] rte_eth_rxconf,rte_eth_rxmode**: ``offloads:DEV_RX_OFFLOAD_TCP_LRO``.
+ ``dev_conf.rxmode.max_lro_pkt_size``.
* **[implements] datapath**: ``LRO functionality``.
* **[implements] rte_eth_dev_data**: ``lro``.
* **[provides] mbuf**: ``mbuf.ol_flags:PKT_RX_LRO``, ``mbuf.tso_segsz``.
* **[provides] rte_eth_dev_info**: ``rx_offload_capa,rx_queue_offload_capa:DEV_RX_OFFLOAD_TCP_LRO``.
+* **[provides] rte_eth_dev_info**: ``max_lro_pkt_size``.
.. _nic_features_tso:
diff --git a/doc/guides/rel_notes/deprecation.rst b/doc/guides/rel_notes/deprecation.rst
index c10dc30..fdec33d 100644
--- a/doc/guides/rel_notes/deprecation.rst
+++ b/doc/guides/rel_notes/deprecation.rst
@@ -87,10 +87,6 @@ Deprecation Notices
This scheme will allow PMDs to avoid lookup to internal ptype table on Rx and
thereby improve Rx performance if application wishes do so.
-* ethdev: New 32-bit fields may be added for maximum LRO session size, in
- struct ``rte_eth_dev_info`` for the port capability and in struct
- ``rte_eth_rxmode`` for the port configuration.
-
* cryptodev: support for using IV with all sizes is added, J0 still can
be used but only when IV length in following structs ``rte_crypto_auth_xform``,
``rte_crypto_aead_xform`` is set to zero. When IV length is greater or equal
diff --git a/doc/guides/rel_notes/release_19_11.rst b/doc/guides/rel_notes/release_19_11.rst
index f96ac38..9bffb16 100644
--- a/doc/guides/rel_notes/release_19_11.rst
+++ b/doc/guides/rel_notes/release_19_11.rst
@@ -380,6 +380,14 @@ ABI Changes
align the Ethernet header on receive and all known encapsulations
preserve the alignment of the header.
+* ethdev: Added 32-bit fields for maximum LRO aggregated packet size, in
+ struct ``rte_eth_dev_info`` for the port capability and in struct
+ ``rte_eth_rxmode`` for the port configuration.
+ Application should use the new field in struct ``rte_eth_rxmode`` to configure
+ the requested size.
+ PMD should use the new field in struct ``rte_eth_dev_info`` to report the
+ supported port capability.
+
Shared Library Versions
-----------------------
diff --git a/drivers/net/bnxt/bnxt_ethdev.c b/drivers/net/bnxt/bnxt_ethdev.c
index 7d9459f..88af61b 100644
--- a/drivers/net/bnxt/bnxt_ethdev.c
+++ b/drivers/net/bnxt/bnxt_ethdev.c
@@ -535,6 +535,7 @@ static int bnxt_dev_info_get_op(struct rte_eth_dev *eth_dev,
/* Fast path specifics */
dev_info->min_rx_bufsize = 1;
dev_info->max_rx_pktlen = BNXT_MAX_PKT_LEN;
+ dev_info->max_lro_pkt_size = BNXT_MAX_PKT_LEN;
dev_info->rx_offload_capa = BNXT_DEV_RX_OFFLOAD_SUPPORT;
if (bp->flags & BNXT_FLAG_PTP_SUPPORTED)
diff --git a/drivers/net/hinic/hinic_pmd_ethdev.c b/drivers/net/hinic/hinic_pmd_ethdev.c
index 9f37a40..b33b2cf 100644
--- a/drivers/net/hinic/hinic_pmd_ethdev.c
+++ b/drivers/net/hinic/hinic_pmd_ethdev.c
@@ -727,6 +727,7 @@ static void hinic_get_speed_capa(struct rte_eth_dev *dev, uint32_t *speed_capa)
info->max_tx_queues = nic_dev->nic_cap.max_sqs;
info->min_rx_bufsize = HINIC_MIN_RX_BUF_SIZE;
info->max_rx_pktlen = HINIC_MAX_JUMBO_FRAME_SIZE;
+ info->max_lro_pkt_size = HINIC_MAX_JUMBO_FRAME_SIZE;
info->max_mac_addrs = HINIC_MAX_UC_MAC_ADDRS;
info->min_mtu = HINIC_MIN_MTU_SIZE;
info->max_mtu = HINIC_MAX_MTU_SIZE;
diff --git a/drivers/net/ixgbe/ixgbe_ethdev.c b/drivers/net/ixgbe/ixgbe_ethdev.c
index 3c7624f..863e3b1 100644
--- a/drivers/net/ixgbe/ixgbe_ethdev.c
+++ b/drivers/net/ixgbe/ixgbe_ethdev.c
@@ -3804,6 +3804,7 @@ static int ixgbevf_dev_xstats_get_names(__rte_unused struct rte_eth_dev *dev,
}
dev_info->min_rx_bufsize = 1024; /* cf BSIZEPACKET in SRRCTL register */
dev_info->max_rx_pktlen = 15872; /* includes CRC, cf MAXFRS register */
+ dev_info->max_lro_pkt_size = 15872;
dev_info->max_mac_addrs = hw->mac.num_rar_entries;
dev_info->max_hash_mac_addrs = IXGBE_VMDQ_NUM_UC_MAC;
dev_info->max_vfs = pci_dev->max_vfs;
@@ -3927,6 +3928,7 @@ static int ixgbevf_dev_xstats_get_names(__rte_unused struct rte_eth_dev *dev,
dev_info->max_tx_queues = (uint16_t)hw->mac.max_tx_queues;
dev_info->min_rx_bufsize = 1024; /* cf BSIZEPACKET in SRRCTL reg */
dev_info->max_rx_pktlen = 9728; /* includes CRC, cf MAXFRS reg */
+ dev_info->max_lro_pkt_size = 9728;
dev_info->max_mtu = dev_info->max_rx_pktlen - IXGBE_ETH_OVERHEAD;
dev_info->max_mac_addrs = hw->mac.num_rar_entries;
dev_info->max_hash_mac_addrs = IXGBE_VMDQ_NUM_UC_MAC;
diff --git a/drivers/net/ixgbe/ixgbe_vf_representor.c b/drivers/net/ixgbe/ixgbe_vf_representor.c
index dbbef29..28dfa3a 100644
--- a/drivers/net/ixgbe/ixgbe_vf_representor.c
+++ b/drivers/net/ixgbe/ixgbe_vf_representor.c
@@ -48,6 +48,7 @@
dev_info->min_rx_bufsize = 1024;
/**< Minimum size of RX buffer. */
dev_info->max_rx_pktlen = 9728;
+ dev_info->max_lro_pkt_size = 9728;
/**< Maximum configurable length of RX pkt. */
dev_info->max_rx_queues = IXGBE_VF_MAX_RX_QUEUES;
/**< Maximum number of RX queues. */
diff --git a/drivers/net/mlx5/mlx5.h b/drivers/net/mlx5/mlx5.h
index f644998..fdfc99b 100644
--- a/drivers/net/mlx5/mlx5.h
+++ b/drivers/net/mlx5/mlx5.h
@@ -203,6 +203,9 @@ struct mlx5_hca_attr {
#define MLX5_LRO_SUPPORTED(dev) \
(((struct mlx5_priv *)((dev)->data->dev_private))->config.lro.supported)
+/* Maximal size of aggregated LRO packet. */
+#define MLX5_MAX_LRO_SIZE (UINT8_MAX * 256u)
+
/* LRO configurations structure. */
struct mlx5_lro_config {
uint32_t supported:1; /* Whether LRO is supported. */
diff --git a/drivers/net/mlx5/mlx5_ethdev.c b/drivers/net/mlx5/mlx5_ethdev.c
index c2bed2f..1443faa 100644
--- a/drivers/net/mlx5/mlx5_ethdev.c
+++ b/drivers/net/mlx5/mlx5_ethdev.c
@@ -606,6 +606,7 @@ struct ethtool_link_settings {
/* FIXME: we should ask the device for these values. */
info->min_rx_bufsize = 32;
info->max_rx_pktlen = 65536;
+ info->max_lro_pkt_size = MLX5_MAX_LRO_SIZE;
/*
* Since we need one CQ per QP, the limit is the minimum number
* between the two values.
diff --git a/drivers/net/mlx5/mlx5_rxq.c b/drivers/net/mlx5/mlx5_rxq.c
index 24d0eaa..9423e7b 100644
--- a/drivers/net/mlx5/mlx5_rxq.c
+++ b/drivers/net/mlx5/mlx5_rxq.c
@@ -1701,7 +1701,6 @@ struct mlx5_rxq_obj *
return 0;
}
-#define MLX5_MAX_LRO_SIZE (UINT8_MAX * 256u)
#define MLX5_MAX_TCP_HDR_OFFSET ((unsigned int)(sizeof(struct rte_ether_hdr) + \
sizeof(struct rte_vlan_hdr) * 2 + \
sizeof(struct rte_ipv6_hdr)))
diff --git a/drivers/net/qede/qede_ethdev.c b/drivers/net/qede/qede_ethdev.c
index 575982f..9c960cd 100644
--- a/drivers/net/qede/qede_ethdev.c
+++ b/drivers/net/qede/qede_ethdev.c
@@ -1277,6 +1277,7 @@ static int qede_dev_configure(struct rte_eth_dev *eth_dev)
dev_info->min_rx_bufsize = (uint32_t)QEDE_MIN_RX_BUFF_SIZE;
dev_info->max_rx_pktlen = (uint32_t)ETH_TX_MAX_NON_LSO_PKT_LEN;
+ dev_info->max_lro_pkt_size = (uint32_t)ETH_TX_MAX_NON_LSO_PKT_LEN;
dev_info->rx_desc_lim = qede_rx_desc_lim;
dev_info->tx_desc_lim = qede_tx_desc_lim;
diff --git a/drivers/net/virtio/virtio_ethdev.c b/drivers/net/virtio/virtio_ethdev.c
index 646de99..fa33c45 100644
--- a/drivers/net/virtio/virtio_ethdev.c
+++ b/drivers/net/virtio/virtio_ethdev.c
@@ -2435,6 +2435,7 @@ static void virtio_dev_free_mbufs(struct rte_eth_dev *dev)
RTE_MIN(hw->max_queue_pairs, VIRTIO_MAX_TX_QUEUES);
dev_info->min_rx_bufsize = VIRTIO_MIN_RX_BUFSIZE;
dev_info->max_rx_pktlen = VIRTIO_MAX_RX_PKTLEN;
+ dev_info->max_lro_pkt_size = VIRTIO_MAX_RX_PKTLEN;
dev_info->max_mac_addrs = VIRTIO_MAX_MAC_ADDRS;
host_features = VTPCI_OPS(hw)->get_features(hw);
diff --git a/drivers/net/vmxnet3/vmxnet3_ethdev.c b/drivers/net/vmxnet3/vmxnet3_ethdev.c
index d1faeaa..d18e8bc 100644
--- a/drivers/net/vmxnet3/vmxnet3_ethdev.c
+++ b/drivers/net/vmxnet3/vmxnet3_ethdev.c
@@ -1161,6 +1161,7 @@ static int eth_vmxnet3_pci_remove(struct rte_pci_device *pci_dev)
dev_info->max_tx_queues = VMXNET3_MAX_TX_QUEUES;
dev_info->min_rx_bufsize = 1518 + RTE_PKTMBUF_HEADROOM;
dev_info->max_rx_pktlen = 16384; /* includes CRC, cf MAXFRS register */
+ dev_info->max_lro_pkt_size = 16384;
dev_info->speed_capa = ETH_LINK_SPEED_10G;
dev_info->max_mac_addrs = VMXNET3_MAX_MAC_ADDRS;
diff --git a/lib/librte_ethdev/rte_ethdev.c b/lib/librte_ethdev/rte_ethdev.c
index 85ab5f0..9cdb4a1 100644
--- a/lib/librte_ethdev/rte_ethdev.c
+++ b/lib/librte_ethdev/rte_ethdev.c
@@ -1156,6 +1156,26 @@ struct rte_eth_dev *
return name;
}
+static inline int
+check_lro_pkt_size(uint16_t port_id, uint32_t config_size,
+ uint32_t dev_info_size)
+{
+ int ret = 0;
+
+ if (config_size > dev_info_size) {
+ RTE_ETHDEV_LOG(ERR, "Ethdev port_id=%d max_lro_pkt_size %u "
+ "> max allowed value %u\n", port_id, config_size,
+ dev_info_size);
+ ret = -EINVAL;
+ } else if (config_size < RTE_ETHER_MIN_LEN) {
+ RTE_ETHDEV_LOG(ERR, "Ethdev port_id=%d max_lro_pkt_size %u "
+ "< min allowed value %u\n", port_id, config_size,
+ (unsigned int)RTE_ETHER_MIN_LEN);
+ ret = -EINVAL;
+ }
+ return ret;
+}
+
int
rte_eth_dev_configure(uint16_t port_id, uint16_t nb_rx_q, uint16_t nb_tx_q,
const struct rte_eth_conf *dev_conf)
@@ -1286,6 +1306,18 @@ struct rte_eth_dev *
RTE_ETHER_MAX_LEN;
}
+ /*
+ * If LRO is enabled, check that the maximum aggregated packet
+ * size is supported by the configured device.
+ */
+ if (dev_conf->rxmode.offloads & DEV_RX_OFFLOAD_TCP_LRO) {
+ ret = check_lro_pkt_size(
+ port_id, dev_conf->rxmode.max_lro_pkt_size,
+ dev_info.max_lro_pkt_size);
+ if (ret != 0)
+ goto rollback;
+ }
+
/* Any requested offloading must be within its device capabilities */
if ((dev_conf->rxmode.offloads & dev_info.rx_offload_capa) !=
dev_conf->rxmode.offloads) {
@@ -1790,6 +1822,18 @@ struct rte_eth_dev *
return -EINVAL;
}
+ /*
+ * If LRO is enabled, check that the maximum aggregated packet
+ * size is supported by the configured device.
+ */
+ if (local_conf.offloads & DEV_RX_OFFLOAD_TCP_LRO) {
+ int ret = check_lro_pkt_size(port_id,
+ dev->data->dev_conf.rxmode.max_lro_pkt_size,
+ dev_info.max_lro_pkt_size);
+ if (ret != 0)
+ return ret;
+ }
+
ret = (*dev->dev_ops->rx_queue_setup)(dev, rx_queue_id, nb_rx_desc,
socket_id, &local_conf, mp);
if (!ret) {
diff --git a/lib/librte_ethdev/rte_ethdev.h b/lib/librte_ethdev/rte_ethdev.h
index f0df03d..f3ef253 100644
--- a/lib/librte_ethdev/rte_ethdev.h
+++ b/lib/librte_ethdev/rte_ethdev.h
@@ -395,6 +395,8 @@ struct rte_eth_rxmode {
/** The multi-queue packet distribution mode to be used, e.g. RSS. */
enum rte_eth_rx_mq_mode mq_mode;
uint32_t max_rx_pkt_len; /**< Only used if JUMBO_FRAME enabled. */
+ /** Maximum allowed size of LRO aggregated packet. */
+ uint32_t max_lro_pkt_size;
uint16_t split_hdr_size; /**< hdr buf size (header_split enabled).*/
/**
* Per-port Rx offloads to be set using DEV_RX_OFFLOAD_* flags.
@@ -1223,6 +1225,8 @@ struct rte_eth_dev_info {
const uint32_t *dev_flags; /**< Device flags */
uint32_t min_rx_bufsize; /**< Minimum size of RX buffer. */
uint32_t max_rx_pktlen; /**< Maximum configurable length of RX pkt. */
+ /** Maximum configurable size of LRO aggregated packet. */
+ uint32_t max_lro_pkt_size;
uint16_t max_rx_queues; /**< Maximum number of RX queues. */
uint16_t max_tx_queues; /**< Maximum number of TX queues. */
uint32_t max_mac_addrs; /**< Maximum number of MAC addresses. */
--
1.8.3.1
^ permalink raw reply [flat|nested] 79+ messages in thread
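For context, this is how an application consumes the new API; a minimal
sketch, assuming an initialized port_id, one Rx/Tx queue, and an illustrative
9000-byte target (none of these values are part of the patch):

/* Minimal sketch: enable LRO and bound the aggregated packet size.
 * Assumes the usual DPDK headers (<rte_ethdev.h> etc.) are included. */
struct rte_eth_dev_info dev_info;
struct rte_eth_conf conf = {0};

rte_eth_dev_info_get(port_id, &dev_info);
conf.rxmode.offloads |= DEV_RX_OFFLOAD_TCP_LRO;
/* Clamp the illustrative 9000-byte request to the reported capability. */
conf.rxmode.max_lro_pkt_size = RTE_MIN(9000u, dev_info.max_lro_pkt_size);
if (rte_eth_dev_configure(port_id, 1, 1, &conf) != 0)
	rte_exit(EXIT_FAILURE, "port configure failed\n");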
* Re: [dpdk-dev] [EXT] [PATCH v3 1/3] ethdev: support API to set max LRO packet size
2019-11-06 14:28 ` [dpdk-dev] [PATCH v3 1/3] ethdev: " Dekel Peled
@ 2019-11-07 11:57 ` Shahed Shaikh
2019-11-07 12:18 ` Dekel Peled
0 siblings, 1 reply; 79+ messages in thread
From: Shahed Shaikh @ 2019-11-07 11:57 UTC (permalink / raw)
To: Dekel Peled, john.mcnamara, marko.kovacevic, nhorman,
ajit.khaparde, somnath.kotur, anatoly.burakov, xuanziyang2,
cloud.wangxiaoyun, zhouguoyang, wenzhuo.lu, konstantin.ananyev,
matan, shahafs, viacheslavo, Rasesh Mody, maxime.coquelin,
tiwei.bie, zhihong.wang, yongwang, thomas, ferruh.yigit,
arybchenko, jingjing.wu, bernard.iremonger
Cc: dev
> -----Original Message-----
> From: Dekel Peled <dekelp@mellanox.com>
> Sent: Wednesday, November 6, 2019 7:58 PM
> Subject: [EXT] [PATCH v3 1/3] ethdev: support API to set max LRO packet size
>
> This patch implements [1], to support API for configuration and
> validation of max size for LRO aggregated packet.
> API change notice [2] is removed, and release notes for 19.11
> are updated accordingly.
> Various PMDs using LRO offload are updated, the new data members are
> initialized to ensure they don't fail validation.
>
> [1] http://patches.dpdk.org/patch/58217/
> [2] http://patches.dpdk.org/patch/57492/
>
> Signed-off-by: Dekel Peled <dekelp@mellanox.com>
> Reviewed-by: Andrew Rybchenko <arybchenko@solarflare.com>
> Acked-by: Thomas Monjalon <thomas@monjalon.net>
>
> ---
> [diffstat trimmed]
> 15 files changed, 70 insertions(+), 5 deletions(-)
>
> @@ -1277,6 +1277,7 @@ static int qede_dev_configure(struct rte_eth_dev
> *eth_dev)
>
> 	dev_info->min_rx_bufsize = (uint32_t)QEDE_MIN_RX_BUFF_SIZE;
> 	dev_info->max_rx_pktlen = (uint32_t)ETH_TX_MAX_NON_LSO_PKT_LEN;
> +	dev_info->max_lro_pkt_size = (uint32_t)ETH_TX_MAX_NON_LSO_PKT_LEN;
Please use 0x7FFF instead of ETH_TX_MAX_NON_LSO_PKT_LEN.
We set the same limit in qede_ethdev.c: qede_update_sge_tpa_params()
sge_tpa_params->tpa_max_size = 0x7FFF;
Thanks,
Shahed
>
> 	dev_info->rx_desc_lim = qede_rx_desc_lim;
> 	dev_info->tx_desc_lim = qede_tx_desc_lim;
^ permalink raw reply [flat|nested] 79+ messages in thread
* Re: [dpdk-dev] [EXT] [PATCH v3 1/3] ethdev: support API to set max LRO packet size
2019-11-07 11:57 ` [dpdk-dev] [EXT] " Shahed Shaikh
@ 2019-11-07 12:18 ` Dekel Peled
0 siblings, 0 replies; 79+ messages in thread
From: Dekel Peled @ 2019-11-07 12:18 UTC (permalink / raw)
To: Shahed Shaikh, john.mcnamara, marko.kovacevic, nhorman,
ajit.khaparde, somnath.kotur, anatoly.burakov, xuanziyang2,
cloud.wangxiaoyun, zhouguoyang, wenzhuo.lu, konstantin.ananyev,
Matan Azrad, Shahaf Shuler, Slava Ovsiienko, Rasesh Mody,
maxime.coquelin, tiwei.bie, zhihong.wang, yongwang,
Thomas Monjalon, ferruh.yigit, arybchenko, jingjing.wu,
bernard.iremonger
Cc: dev
Thanks, PSB.
> -----Original Message-----
> From: Shahed Shaikh <shshaikh@marvell.com>
> Sent: Thursday, November 7, 2019 1:57 PM
> Subject: RE: [EXT] [PATCH v3 1/3] ethdev: support API to set max LRO packet
> size
>
> > -----Original Message-----
> > From: Dekel Peled <dekelp@mellanox.com>
> > Sent: Wednesday, November 6, 2019 7:58 PM
> > Subject: [EXT] [PATCH v3 1/3] ethdev: support API to set max LRO
> > packet size
> >
> > This patch implements [1], to support API for configuration and
> > validation of max size for LRO aggregated packet.
> > API change notice [2] is removed, and release notes for 19.11
> > are updated accordingly.
> > Various PMDs using LRO offload are updated, the new data members are
> > initialized to ensure they don't fail validation.
> >
> > [1] http://patches.dpdk.org/patch/58217/
> > [2] http://patches.dpdk.org/patch/57492/
> >
> > Signed-off-by: Dekel Peled <dekelp@mellanox.com>
> > Reviewed-by: Andrew Rybchenko <arybchenko@solarflare.com>
> > Acked-by: Thomas Monjalon <thomas@monjalon.net>
> >
> > ---
> > [diffstat trimmed]
> > 15 files changed, 70 insertions(+), 5 deletions(-)
> >
> > @@ -1277,6 +1277,7 @@ static int qede_dev_configure(struct rte_eth_dev
> > *eth_dev)
> >
> > 	dev_info->min_rx_bufsize = (uint32_t)QEDE_MIN_RX_BUFF_SIZE;
> > 	dev_info->max_rx_pktlen = (uint32_t)ETH_TX_MAX_NON_LSO_PKT_LEN;
> > +	dev_info->max_lro_pkt_size = (uint32_t)ETH_TX_MAX_NON_LSO_PKT_LEN;
>
> Please use 0x7FFF instead of ETH_TX_MAX_NON_LSO_PKT_LEN.

Sending v4 with this change.

> We set the same limit in qede_ethdev.c: qede_update_sge_tpa_params()
> sge_tpa_params->tpa_max_size = 0x7FFF;
>
> Thanks,
> Shahed
Regards,
Dekel
^ permalink raw reply [flat|nested] 79+ messages in thread
* [dpdk-dev] [PATCH v3 2/3] net/mlx5: use API to set max LRO packet size
2019-11-06 14:28 ` [dpdk-dev] [PATCH v3 0/3] support " Dekel Peled
2019-11-06 14:28 ` [dpdk-dev] [PATCH v3 1/3] ethdev: " Dekel Peled
@ 2019-11-06 14:28 ` Dekel Peled
2019-11-06 14:28 ` [dpdk-dev] [PATCH v3 3/3] app/testpmd: " Dekel Peled
` (2 subsequent siblings)
4 siblings, 0 replies; 79+ messages in thread
From: Dekel Peled @ 2019-11-06 14:28 UTC (permalink / raw)
To: john.mcnamara, marko.kovacevic, nhorman, ajit.khaparde,
somnath.kotur, anatoly.burakov, xuanziyang2, cloud.wangxiaoyun,
zhouguoyang, wenzhuo.lu, konstantin.ananyev, matan, shahafs,
viacheslavo, rmody, shshaikh, maxime.coquelin, tiwei.bie,
zhihong.wang, yongwang, thomas, ferruh.yigit, arybchenko,
jingjing.wu, bernard.iremonger
Cc: dev
This patch implements use of the API for LRO aggregated packet
max size.
Rx queue create is updated to use the relevant configuration.
Documentation is updated accordingly.
Signed-off-by: Dekel Peled <dekelp@mellanox.com>
---
doc/guides/nics/mlx5.rst | 2 ++
drivers/net/mlx5/mlx5_rxq.c | 4 +++-
2 files changed, 5 insertions(+), 1 deletion(-)
diff --git a/doc/guides/nics/mlx5.rst b/doc/guides/nics/mlx5.rst
index 4f1093f..3b10daf 100644
--- a/doc/guides/nics/mlx5.rst
+++ b/doc/guides/nics/mlx5.rst
@@ -207,6 +207,8 @@ Limitations
- KEEP_CRC offload cannot be supported with LRO.
- The first mbuf length, without head-room, must be big enough to include the
TCP header (122B).
+ - Rx queue with LRO offload enabled, receiving a non-LRO packet, can forward
+ it with size limited to max LRO size, not to max RX packet length.
Statistics
----------
diff --git a/drivers/net/mlx5/mlx5_rxq.c b/drivers/net/mlx5/mlx5_rxq.c
index 9423e7b..c725e14 100644
--- a/drivers/net/mlx5/mlx5_rxq.c
+++ b/drivers/net/mlx5/mlx5_rxq.c
@@ -1772,7 +1772,9 @@ struct mlx5_rxq_ctrl *
dev->data->dev_conf.rxmode.offloads;
unsigned int lro_on_queue = !!(offloads & DEV_RX_OFFLOAD_TCP_LRO);
const int mprq_en = mlx5_check_mprq_support(dev) > 0;
- unsigned int max_rx_pkt_len = dev->data->dev_conf.rxmode.max_rx_pkt_len;
+ unsigned int max_rx_pkt_len = lro_on_queue ?
+ dev->data->dev_conf.rxmode.max_lro_pkt_size :
+ dev->data->dev_conf.rxmode.max_rx_pkt_len;
unsigned int non_scatter_min_mbuf_size = max_rx_pkt_len +
RTE_PKTMBUF_HEADROOM;
unsigned int max_lro_size = 0;
--
1.8.3.1
^ permalink raw reply [flat|nested] 79+ messages in thread
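The essence of the mlx5_rxq.c change above is that the length budget used to
size an Rx queue now depends on whether LRO is enabled on it. A minimal
restatement as a sketch (the helper name is illustrative, not mlx5 code):

/* Sketch: pick the Rx length budget the way the patch does. */
static unsigned int
rxq_pkt_len_budget(struct rte_eth_dev *dev, unsigned int lro_on_queue)
{
	struct rte_eth_rxmode *rxmode = &dev->data->dev_conf.rxmode;

	/* LRO queues must fit a whole aggregated packet; other queues
	 * only need to fit a regular Rx packet. */
	return lro_on_queue ? rxmode->max_lro_pkt_size :
			      rxmode->max_rx_pkt_len;
}

This is also why the documented limitation above exists: buffers on an LRO
queue are sized from the LRO maximum, so even non-LRO packets received there
are bounded by it.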
* [dpdk-dev] [PATCH v3 3/3] app/testpmd: use API to set max LRO packet size
2019-11-06 14:28 ` [dpdk-dev] [PATCH v3 0/3] support " Dekel Peled
2019-11-06 14:28 ` [dpdk-dev] [PATCH v3 1/3] ethdev: " Dekel Peled
2019-11-06 14:28 ` [dpdk-dev] [PATCH v3 2/3] net/mlx5: use " Dekel Peled
@ 2019-11-06 14:28 ` Dekel Peled
2019-11-06 16:41 ` [dpdk-dev] [PATCH v3 0/3] support " Iremonger, Bernard
2019-11-07 12:35 ` [dpdk-dev] [PATCH v4 " Dekel Peled
4 siblings, 0 replies; 79+ messages in thread
From: Dekel Peled @ 2019-11-06 14:28 UTC (permalink / raw)
To: john.mcnamara, marko.kovacevic, nhorman, ajit.khaparde,
somnath.kotur, anatoly.burakov, xuanziyang2, cloud.wangxiaoyun,
zhouguoyang, wenzhuo.lu, konstantin.ananyev, matan, shahafs,
viacheslavo, rmody, shshaikh, maxime.coquelin, tiwei.bie,
zhihong.wang, yongwang, thomas, ferruh.yigit, arybchenko,
jingjing.wu, bernard.iremonger
Cc: dev
This patch implements use of the API for LRO aggregated packet
max size.
It adds command-line and runtime commands to configure this value,
and adds option to show the supported value.
Documentation is updated accordingly.
Signed-off-by: Dekel Peled <dekelp@mellanox.com>
---
app/test-pmd/cmdline.c | 73 +++++++++++++++++++++++++++++
app/test-pmd/config.c | 2 +
app/test-pmd/parameters.c | 7 +++
app/test-pmd/testpmd.c | 1 +
doc/guides/testpmd_app_ug/run_app.rst | 5 ++
doc/guides/testpmd_app_ug/testpmd_funcs.rst | 9 ++++
6 files changed, 97 insertions(+)
diff --git a/app/test-pmd/cmdline.c b/app/test-pmd/cmdline.c
index 4478069..edfa60f 100644
--- a/app/test-pmd/cmdline.c
+++ b/app/test-pmd/cmdline.c
@@ -2037,6 +2037,78 @@ struct cmd_config_max_pkt_len_result {
},
};
+/* *** config max LRO aggregated packet size *** */
+struct cmd_config_max_lro_pkt_size_result {
+ cmdline_fixed_string_t port;
+ cmdline_fixed_string_t keyword;
+ cmdline_fixed_string_t all;
+ cmdline_fixed_string_t name;
+ uint32_t value;
+};
+
+static void
+cmd_config_max_lro_pkt_size_parsed(void *parsed_result,
+ __attribute__((unused)) struct cmdline *cl,
+ __attribute__((unused)) void *data)
+{
+ struct cmd_config_max_lro_pkt_size_result *res = parsed_result;
+ portid_t pid;
+
+ if (!all_ports_stopped()) {
+ printf("Please stop all ports first\n");
+ return;
+ }
+
+ RTE_ETH_FOREACH_DEV(pid) {
+ struct rte_port *port = &ports[pid];
+
+ if (!strcmp(res->name, "max-lro-pkt-size")) {
+ if (res->value ==
+ port->dev_conf.rxmode.max_lro_pkt_size)
+ return;
+
+ port->dev_conf.rxmode.max_lro_pkt_size = res->value;
+ } else {
+ printf("Unknown parameter\n");
+ return;
+ }
+ }
+
+ init_port_config();
+
+ cmd_reconfig_device_queue(RTE_PORT_ALL, 1, 1);
+}
+
+cmdline_parse_token_string_t cmd_config_max_lro_pkt_size_port =
+ TOKEN_STRING_INITIALIZER(struct cmd_config_max_lro_pkt_size_result,
+ port, "port");
+cmdline_parse_token_string_t cmd_config_max_lro_pkt_size_keyword =
+ TOKEN_STRING_INITIALIZER(struct cmd_config_max_lro_pkt_size_result,
+ keyword, "config");
+cmdline_parse_token_string_t cmd_config_max_lro_pkt_size_all =
+ TOKEN_STRING_INITIALIZER(struct cmd_config_max_lro_pkt_size_result,
+ all, "all");
+cmdline_parse_token_string_t cmd_config_max_lro_pkt_size_name =
+ TOKEN_STRING_INITIALIZER(struct cmd_config_max_lro_pkt_size_result,
+ name, "max-lro-pkt-size");
+cmdline_parse_token_num_t cmd_config_max_lro_pkt_size_value =
+ TOKEN_NUM_INITIALIZER(struct cmd_config_max_lro_pkt_size_result,
+ value, UINT32);
+
+cmdline_parse_inst_t cmd_config_max_lro_pkt_size = {
+ .f = cmd_config_max_lro_pkt_size_parsed,
+ .data = NULL,
+ .help_str = "port config all max-lro-pkt-size <value>",
+ .tokens = {
+ (void *)&cmd_config_max_lro_pkt_size_port,
+ (void *)&cmd_config_max_lro_pkt_size_keyword,
+ (void *)&cmd_config_max_lro_pkt_size_all,
+ (void *)&cmd_config_max_lro_pkt_size_name,
+ (void *)&cmd_config_max_lro_pkt_size_value,
+ NULL,
+ },
+};
+
/* *** configure port MTU *** */
struct cmd_config_mtu_result {
cmdline_fixed_string_t port;
@@ -19024,6 +19096,7 @@ struct cmd_show_port_supported_ptypes_result {
(cmdline_parse_inst_t *)&cmd_config_rx_tx,
(cmdline_parse_inst_t *)&cmd_config_mtu,
(cmdline_parse_inst_t *)&cmd_config_max_pkt_len,
+ (cmdline_parse_inst_t *)&cmd_config_max_lro_pkt_size,
(cmdline_parse_inst_t *)&cmd_config_rx_mode_flag,
(cmdline_parse_inst_t *)&cmd_config_rss,
(cmdline_parse_inst_t *)&cmd_config_rxtx_ring_size,
diff --git a/app/test-pmd/config.c b/app/test-pmd/config.c
index efe2812..50e6ac0 100644
--- a/app/test-pmd/config.c
+++ b/app/test-pmd/config.c
@@ -629,6 +629,8 @@ static int bus_match_all(const struct rte_bus *bus, const void *data)
printf("Minimum size of RX buffer: %u\n", dev_info.min_rx_bufsize);
printf("Maximum configurable length of RX packet: %u\n",
dev_info.max_rx_pktlen);
+ printf("Maximum configurable size of LRO aggregated packet: %u\n",
+ dev_info.max_lro_pkt_size);
if (dev_info.max_vfs)
printf("Maximum number of VFs: %u\n", dev_info.max_vfs);
if (dev_info.max_vmdq_pools)
diff --git a/app/test-pmd/parameters.c b/app/test-pmd/parameters.c
index 9ea87c1..eda395b 100644
--- a/app/test-pmd/parameters.c
+++ b/app/test-pmd/parameters.c
@@ -107,6 +107,8 @@
printf(" --total-num-mbufs=N: set the number of mbufs to be allocated "
"in mbuf pools.\n");
printf(" --max-pkt-len=N: set the maximum size of packet to N bytes.\n");
+ printf(" --max-lro-pkt-size=N: set the maximum LRO aggregated packet "
+ "size to N bytes.\n");
#ifdef RTE_LIBRTE_CMDLINE
printf(" --eth-peers-configfile=name: config file with ethernet addresses "
"of peer ports.\n");
@@ -592,6 +594,7 @@
{ "mbuf-size", 1, 0, 0 },
{ "total-num-mbufs", 1, 0, 0 },
{ "max-pkt-len", 1, 0, 0 },
+ { "max-lro-pkt-size", 1, 0, 0 },
{ "pkt-filter-mode", 1, 0, 0 },
{ "pkt-filter-report-hash", 1, 0, 0 },
{ "pkt-filter-size", 1, 0, 0 },
@@ -888,6 +891,10 @@
"Invalid max-pkt-len=%d - should be > %d\n",
n, RTE_ETHER_MIN_LEN);
}
+ if (!strcmp(lgopts[opt_idx].name, "max-lro-pkt-size")) {
+ n = atoi(optarg);
+ rx_mode.max_lro_pkt_size = (uint32_t) n;
+ }
if (!strcmp(lgopts[opt_idx].name, "pkt-filter-mode")) {
if (!strcmp(optarg, "signature"))
fdir_conf.mode =
diff --git a/app/test-pmd/testpmd.c b/app/test-pmd/testpmd.c
index 38acbc5..d4f67ec 100644
--- a/app/test-pmd/testpmd.c
+++ b/app/test-pmd/testpmd.c
@@ -419,6 +419,7 @@ struct fwd_engine * fwd_engines[] = {
struct rte_eth_rxmode rx_mode = {
.max_rx_pkt_len = RTE_ETHER_MAX_LEN,
/**< Default maximum frame length. */
+ .max_lro_pkt_size = RTE_ETHER_MAX_LEN,
};
struct rte_eth_txmode tx_mode = {
diff --git a/doc/guides/testpmd_app_ug/run_app.rst b/doc/guides/testpmd_app_ug/run_app.rst
index ef677ba..bc17f3f 100644
--- a/doc/guides/testpmd_app_ug/run_app.rst
+++ b/doc/guides/testpmd_app_ug/run_app.rst
@@ -112,6 +112,11 @@ The command line options are:
Set the maximum packet size to N bytes, where N >= 64. The default value is 1518.
+* ``--max-lro-pkt-size=N``
+
+ Set the maximum LRO aggregated packet size to N bytes, where N >= 64.
+ The default value is 1518.
+
* ``--eth-peers-configfile=name``
Use a configuration file containing the Ethernet addresses of the peer ports.
diff --git a/doc/guides/testpmd_app_ug/testpmd_funcs.rst b/doc/guides/testpmd_app_ug/testpmd_funcs.rst
index c68a742..0267295 100644
--- a/doc/guides/testpmd_app_ug/testpmd_funcs.rst
+++ b/doc/guides/testpmd_app_ug/testpmd_funcs.rst
@@ -2139,6 +2139,15 @@ Set the maximum packet length::
This is equivalent to the ``--max-pkt-len`` command-line option.
+port config - max-lro-pkt-size
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+
+Set the maximum LRO aggregated packet size::
+
+ testpmd> port config all max-lro-pkt-size (value)
+
+This is equivalent to the ``--max-lro-pkt-size`` command-line option.
+
port config - Drop Packets
~~~~~~~~~~~~~~~~~~~~~~~~~~
--
1.8.3.1
^ permalink raw reply [flat|nested] 79+ messages in thread
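For reference, the new testpmd knobs added above can be exercised as follows;
the 9216-byte value is an arbitrary example:

testpmd> port stop all
testpmd> port config all max-lro-pkt-size 9216
testpmd> port start all
testpmd> show port info 0

The same value can be given at startup with the --max-lro-pkt-size=9216
command-line option, and "show port info" prints the LRO capability reported
by the PMD.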
* Re: [dpdk-dev] [PATCH v3 0/3] support API to set max LRO packet size
2019-11-06 14:28 ` [dpdk-dev] [PATCH v3 0/3] support " Dekel Peled
` (2 preceding siblings ...)
2019-11-06 14:28 ` [dpdk-dev] [PATCH v3 3/3] app/testpmd: " Dekel Peled
@ 2019-11-06 16:41 ` Iremonger, Bernard
2019-11-07 6:10 ` Dekel Peled
2019-11-07 12:35 ` [dpdk-dev] [PATCH v4 " Dekel Peled
4 siblings, 1 reply; 79+ messages in thread
From: Iremonger, Bernard @ 2019-11-06 16:41 UTC (permalink / raw)
To: Dekel Peled, Mcnamara, John, Kovacevic, Marko, nhorman,
ajit.khaparde, somnath.kotur, Burakov, Anatoly, xuanziyang2,
cloud.wangxiaoyun, zhouguoyang, Lu, Wenzhuo, Ananyev, Konstantin,
matan, shahafs, viacheslavo, rmody, shshaikh, maxime.coquelin,
Bie, Tiwei, Wang, Zhihong, yongwang, thomas, Yigit, Ferruh,
arybchenko, Wu, Jingjing
Cc: dev
Hi Dekel,
> -----Original Message-----
> From: Dekel Peled <dekelp@mellanox.com>
> Sent: Wednesday, November 6, 2019 2:28 PM
> Subject: [PATCH v3 0/3] support API to set max LRO packet size
>
> This series implements support and use of API for configuration and
> validation of max size for LRO aggregated packet.
>
> v2: Updated ethdev patch per review comments.
> v3: Updated ethdev and testpmd patches per review comments.
My comments on the v2 testpmd patch have not been addressed in the v3 patch.
>
> Dekel Peled (3):
> ethdev: support API to set max LRO packet size
> net/mlx5: use API to set max LRO packet size
> app/testpmd: use API to set max LRO packet size
>
> [diffstat trimmed]
> 22 files changed, 172 insertions(+), 6 deletions(-)
Regards,
Bernard.
^ permalink raw reply [flat|nested] 79+ messages in thread
* Re: [dpdk-dev] [PATCH v3 0/3] support API to set max LRO packet size
2019-11-06 16:41 ` [dpdk-dev] [PATCH v3 0/3] support " Iremonger, Bernard
@ 2019-11-07 6:10 ` Dekel Peled
0 siblings, 0 replies; 79+ messages in thread
From: Dekel Peled @ 2019-11-07 6:10 UTC (permalink / raw)
To: Iremonger, Bernard, Mcnamara, John, Kovacevic, Marko, nhorman,
ajit.khaparde, somnath.kotur, Burakov, Anatoly, xuanziyang2,
cloud.wangxiaoyun, zhouguoyang, Lu, Wenzhuo, Ananyev, Konstantin,
Matan Azrad, Shahaf Shuler, Slava Ovsiienko, rmody, shshaikh,
maxime.coquelin, Bie, Tiwei, Wang, Zhihong, yongwang,
Thomas Monjalon, Yigit, Ferruh, arybchenko, Wu, Jingjing
Cc: dev
Hi Bernard, PSB.
> -----Original Message-----
> From: Iremonger, Bernard <bernard.iremonger@intel.com>
> Sent: Wednesday, November 6, 2019 6:41 PM
> Subject: RE: [PATCH v3 0/3] support API to set max LRO packet size
>
> Hi Dekel,
>
> > -----Original Message-----
> > From: Dekel Peled <dekelp@mellanox.com>
> > Sent: Wednesday, November 6, 2019 2:28 PM
> > Subject: [PATCH v3 0/3] support API to set max LRO packet size
> >
> > This series implements support and use of API for configuration and
> > validation of max size for LRO aggregated packet.
> >
> > v2: Updated ethdev patch per review comments.
> > v3: Updated ethdev and testpmd patches per review comments.
>
> My comments on the v2 testpmd patch have not been addressed in the v3
> patch.
I accepted your comment about updating the usage() function, and added it in v3.
I replied to your other comments in the v2 email thread, and didn't see an additional response.
Regards,
Dekel
>
> >
> > Dekel Peled (3):
> > ethdev: support API to set max LRO packet size
> > net/mlx5: use API to set max LRO packet size
> > app/testpmd: use API to set max LRO packet size
> >
> > [diffstat trimmed]
> > 22 files changed, 172 insertions(+), 6 deletions(-)
>
> Regards,
>
> Bernard.
^ permalink raw reply [flat|nested] 79+ messages in thread
* [dpdk-dev] [PATCH v4 0/3] support API to set max LRO packet size
2019-11-06 14:28 ` [dpdk-dev] [PATCH v3 0/3] support " Dekel Peled
` (3 preceding siblings ...)
2019-11-06 16:41 ` [dpdk-dev] [PATCH v3 0/3] support " Iremonger, Bernard
@ 2019-11-07 12:35 ` Dekel Peled
2019-11-07 12:35 ` [dpdk-dev] [PATCH v4 1/3] ethdev: " Dekel Peled
` (4 more replies)
4 siblings, 5 replies; 79+ messages in thread
From: Dekel Peled @ 2019-11-07 12:35 UTC (permalink / raw)
To: john.mcnamara, marko.kovacevic, nhorman, ajit.khaparde,
somnath.kotur, anatoly.burakov, xuanziyang2, cloud.wangxiaoyun,
zhouguoyang, wenzhuo.lu, konstantin.ananyev, matan, shahafs,
viacheslavo, rmody, shshaikh, maxime.coquelin, tiwei.bie,
zhihong.wang, yongwang, thomas, ferruh.yigit, arybchenko,
jingjing.wu, bernard.iremonger
Cc: dev
This series implements support and use of API for configuration and
validation of max size for LRO aggregated packet.
v2: Updated ethdev patch per review comments.
v3: Updated ethdev and testpmd patches per review comments.
v4: Updated ethdev patch for QEDE PMD per review comments.
Dekel Peled (3):
ethdev: support API to set max LRO packet size
net/mlx5: use API to set max LRO packet size
app/testpmd: use API to set max LRO packet size
app/test-pmd/cmdline.c | 73 +++++++++++++++++++++++++++++
app/test-pmd/config.c | 2 +
app/test-pmd/parameters.c | 7 +++
app/test-pmd/testpmd.c | 1 +
doc/guides/nics/features.rst | 2 +
doc/guides/nics/mlx5.rst | 2 +
doc/guides/rel_notes/deprecation.rst | 4 --
doc/guides/rel_notes/release_19_11.rst | 8 ++++
doc/guides/testpmd_app_ug/run_app.rst | 5 ++
doc/guides/testpmd_app_ug/testpmd_funcs.rst | 9 ++++
drivers/net/bnxt/bnxt_ethdev.c | 1 +
drivers/net/hinic/hinic_pmd_ethdev.c | 1 +
drivers/net/ixgbe/ixgbe_ethdev.c | 2 +
drivers/net/ixgbe/ixgbe_vf_representor.c | 1 +
drivers/net/mlx5/mlx5.h | 3 ++
drivers/net/mlx5/mlx5_ethdev.c | 1 +
drivers/net/mlx5/mlx5_rxq.c | 5 +-
drivers/net/qede/qede_ethdev.c | 1 +
drivers/net/virtio/virtio_ethdev.c | 1 +
drivers/net/vmxnet3/vmxnet3_ethdev.c | 1 +
lib/librte_ethdev/rte_ethdev.c | 44 +++++++++++++++++
lib/librte_ethdev/rte_ethdev.h | 4 ++
22 files changed, 172 insertions(+), 6 deletions(-)
--
1.8.3.1
^ permalink raw reply [flat|nested] 79+ messages in thread
* [dpdk-dev] [PATCH v4 1/3] ethdev: support API to set max LRO packet size
2019-11-07 12:35 ` [dpdk-dev] [PATCH v4 " Dekel Peled
@ 2019-11-07 12:35 ` Dekel Peled
2019-11-07 20:15 ` Ferruh Yigit
2019-11-07 12:35 ` [dpdk-dev] [PATCH v4 2/3] net/mlx5: use " Dekel Peled
` (3 subsequent siblings)
4 siblings, 1 reply; 79+ messages in thread
From: Dekel Peled @ 2019-11-07 12:35 UTC (permalink / raw)
To: john.mcnamara, marko.kovacevic, nhorman, ajit.khaparde,
somnath.kotur, anatoly.burakov, xuanziyang2, cloud.wangxiaoyun,
zhouguoyang, wenzhuo.lu, konstantin.ananyev, matan, shahafs,
viacheslavo, rmody, shshaikh, maxime.coquelin, tiwei.bie,
zhihong.wang, yongwang, thomas, ferruh.yigit, arybchenko,
jingjing.wu, bernard.iremonger
Cc: dev
This patch implements [1], to support API for configuration and
validation of max size for LRO aggregated packet.
API change notice [2] is removed, and release notes for 19.11
are updated accordingly.
Various PMDs using LRO offload are updated, the new data members are
initialized to ensure they don't fail validation.
[1] http://patches.dpdk.org/patch/58217/
[2] http://patches.dpdk.org/patch/57492/
Signed-off-by: Dekel Peled <dekelp@mellanox.com>
Reviewed-by: Andrew Rybchenko <arybchenko@solarflare.com>
Acked-by: Thomas Monjalon <thomas@monjalon.net>
---
doc/guides/nics/features.rst | 2 ++
doc/guides/rel_notes/deprecation.rst | 4 ---
doc/guides/rel_notes/release_19_11.rst | 8 ++++++
drivers/net/bnxt/bnxt_ethdev.c | 1 +
drivers/net/hinic/hinic_pmd_ethdev.c | 1 +
drivers/net/ixgbe/ixgbe_ethdev.c | 2 ++
drivers/net/ixgbe/ixgbe_vf_representor.c | 1 +
drivers/net/mlx5/mlx5.h | 3 +++
drivers/net/mlx5/mlx5_ethdev.c | 1 +
drivers/net/mlx5/mlx5_rxq.c | 1 -
drivers/net/qede/qede_ethdev.c | 1 +
drivers/net/virtio/virtio_ethdev.c | 1 +
drivers/net/vmxnet3/vmxnet3_ethdev.c | 1 +
lib/librte_ethdev/rte_ethdev.c | 44 ++++++++++++++++++++++++++++++++
lib/librte_ethdev/rte_ethdev.h | 4 +++
15 files changed, 70 insertions(+), 5 deletions(-)
diff --git a/doc/guides/nics/features.rst b/doc/guides/nics/features.rst
index 7a31cf7..2138ce3 100644
--- a/doc/guides/nics/features.rst
+++ b/doc/guides/nics/features.rst
@@ -193,10 +193,12 @@ LRO
Supports Large Receive Offload.
* **[uses] rte_eth_rxconf,rte_eth_rxmode**: ``offloads:DEV_RX_OFFLOAD_TCP_LRO``.
+ ``dev_conf.rxmode.max_lro_pkt_size``.
* **[implements] datapath**: ``LRO functionality``.
* **[implements] rte_eth_dev_data**: ``lro``.
* **[provides] mbuf**: ``mbuf.ol_flags:PKT_RX_LRO``, ``mbuf.tso_segsz``.
* **[provides] rte_eth_dev_info**: ``rx_offload_capa,rx_queue_offload_capa:DEV_RX_OFFLOAD_TCP_LRO``.
+* **[provides] rte_eth_dev_info**: ``max_lro_pkt_size``.
.. _nic_features_tso:
diff --git a/doc/guides/rel_notes/deprecation.rst b/doc/guides/rel_notes/deprecation.rst
index c10dc30..fdec33d 100644
--- a/doc/guides/rel_notes/deprecation.rst
+++ b/doc/guides/rel_notes/deprecation.rst
@@ -87,10 +87,6 @@ Deprecation Notices
This scheme will allow PMDs to avoid lookup to internal ptype table on Rx and
thereby improve Rx performance if application wishes do so.
-* ethdev: New 32-bit fields may be added for maximum LRO session size, in
- struct ``rte_eth_dev_info`` for the port capability and in struct
- ``rte_eth_rxmode`` for the port configuration.
-
* cryptodev: support for using IV with all sizes is added, J0 still can
be used but only when IV length in following structs ``rte_crypto_auth_xform``,
``rte_crypto_aead_xform`` is set to zero. When IV length is greater or equal
diff --git a/doc/guides/rel_notes/release_19_11.rst b/doc/guides/rel_notes/release_19_11.rst
index 23182d1..b2b788c 100644
--- a/doc/guides/rel_notes/release_19_11.rst
+++ b/doc/guides/rel_notes/release_19_11.rst
@@ -406,6 +406,14 @@ ABI Changes
align the Ethernet header on receive and all known encapsulations
preserve the alignment of the header.
+* ethdev: Added 32-bit fields for maximum LRO aggregated packet size, in
+ struct ``rte_eth_dev_info`` for the port capability and in struct
+ ``rte_eth_rxmode`` for the port configuration.
+ Application should use the new field in struct ``rte_eth_rxmode`` to configure
+ the requested size.
+ PMD should use the new field in struct ``rte_eth_dev_info`` to report the
+ supported port capability.
+
Shared Library Versions
-----------------------
diff --git a/drivers/net/bnxt/bnxt_ethdev.c b/drivers/net/bnxt/bnxt_ethdev.c
index b9b055e..741b897 100644
--- a/drivers/net/bnxt/bnxt_ethdev.c
+++ b/drivers/net/bnxt/bnxt_ethdev.c
@@ -519,6 +519,7 @@ static int bnxt_dev_info_get_op(struct rte_eth_dev *eth_dev,
/* Fast path specifics */
dev_info->min_rx_bufsize = 1;
dev_info->max_rx_pktlen = BNXT_MAX_PKT_LEN;
+ dev_info->max_lro_pkt_size = BNXT_MAX_PKT_LEN;
dev_info->rx_offload_capa = BNXT_DEV_RX_OFFLOAD_SUPPORT;
if (bp->flags & BNXT_FLAG_PTP_SUPPORTED)
diff --git a/drivers/net/hinic/hinic_pmd_ethdev.c b/drivers/net/hinic/hinic_pmd_ethdev.c
index 9f37a40..b33b2cf 100644
--- a/drivers/net/hinic/hinic_pmd_ethdev.c
+++ b/drivers/net/hinic/hinic_pmd_ethdev.c
@@ -727,6 +727,7 @@ static void hinic_get_speed_capa(struct rte_eth_dev *dev, uint32_t *speed_capa)
info->max_tx_queues = nic_dev->nic_cap.max_sqs;
info->min_rx_bufsize = HINIC_MIN_RX_BUF_SIZE;
info->max_rx_pktlen = HINIC_MAX_JUMBO_FRAME_SIZE;
+ info->max_lro_pkt_size = HINIC_MAX_JUMBO_FRAME_SIZE;
info->max_mac_addrs = HINIC_MAX_UC_MAC_ADDRS;
info->min_mtu = HINIC_MIN_MTU_SIZE;
info->max_mtu = HINIC_MAX_MTU_SIZE;
diff --git a/drivers/net/ixgbe/ixgbe_ethdev.c b/drivers/net/ixgbe/ixgbe_ethdev.c
index 30c0379..c391f51 100644
--- a/drivers/net/ixgbe/ixgbe_ethdev.c
+++ b/drivers/net/ixgbe/ixgbe_ethdev.c
@@ -3814,6 +3814,7 @@ static int ixgbevf_dev_xstats_get_names(__rte_unused struct rte_eth_dev *dev,
}
dev_info->min_rx_bufsize = 1024; /* cf BSIZEPACKET in SRRCTL register */
dev_info->max_rx_pktlen = 15872; /* includes CRC, cf MAXFRS register */
+ dev_info->max_lro_pkt_size = 15872;
dev_info->max_mac_addrs = hw->mac.num_rar_entries;
dev_info->max_hash_mac_addrs = IXGBE_VMDQ_NUM_UC_MAC;
dev_info->max_vfs = pci_dev->max_vfs;
@@ -3937,6 +3938,7 @@ static int ixgbevf_dev_xstats_get_names(__rte_unused struct rte_eth_dev *dev,
dev_info->max_tx_queues = (uint16_t)hw->mac.max_tx_queues;
dev_info->min_rx_bufsize = 1024; /* cf BSIZEPACKET in SRRCTL reg */
dev_info->max_rx_pktlen = 9728; /* includes CRC, cf MAXFRS reg */
+ dev_info->max_lro_pkt_size = 9728;
dev_info->max_mtu = dev_info->max_rx_pktlen - IXGBE_ETH_OVERHEAD;
dev_info->max_mac_addrs = hw->mac.num_rar_entries;
dev_info->max_hash_mac_addrs = IXGBE_VMDQ_NUM_UC_MAC;
diff --git a/drivers/net/ixgbe/ixgbe_vf_representor.c b/drivers/net/ixgbe/ixgbe_vf_representor.c
index dbbef29..28dfa3a 100644
--- a/drivers/net/ixgbe/ixgbe_vf_representor.c
+++ b/drivers/net/ixgbe/ixgbe_vf_representor.c
@@ -48,6 +48,7 @@
dev_info->min_rx_bufsize = 1024;
/**< Minimum size of RX buffer. */
dev_info->max_rx_pktlen = 9728;
+ dev_info->max_lro_pkt_size = 9728;
/**< Maximum configurable length of RX pkt. */
dev_info->max_rx_queues = IXGBE_VF_MAX_RX_QUEUES;
/**< Maximum number of RX queues. */
diff --git a/drivers/net/mlx5/mlx5.h b/drivers/net/mlx5/mlx5.h
index f644998..fdfc99b 100644
--- a/drivers/net/mlx5/mlx5.h
+++ b/drivers/net/mlx5/mlx5.h
@@ -203,6 +203,9 @@ struct mlx5_hca_attr {
#define MLX5_LRO_SUPPORTED(dev) \
(((struct mlx5_priv *)((dev)->data->dev_private))->config.lro.supported)
+/* Maximal size of aggregated LRO packet. */
+#define MLX5_MAX_LRO_SIZE (UINT8_MAX * 256u)
+
/* LRO configurations structure. */
struct mlx5_lro_config {
uint32_t supported:1; /* Whether LRO is supported. */
diff --git a/drivers/net/mlx5/mlx5_ethdev.c b/drivers/net/mlx5/mlx5_ethdev.c
index c2bed2f..1443faa 100644
--- a/drivers/net/mlx5/mlx5_ethdev.c
+++ b/drivers/net/mlx5/mlx5_ethdev.c
@@ -606,6 +606,7 @@ struct ethtool_link_settings {
/* FIXME: we should ask the device for these values. */
info->min_rx_bufsize = 32;
info->max_rx_pktlen = 65536;
+ info->max_lro_pkt_size = MLX5_MAX_LRO_SIZE;
/*
* Since we need one CQ per QP, the limit is the minimum number
* between the two values.
diff --git a/drivers/net/mlx5/mlx5_rxq.c b/drivers/net/mlx5/mlx5_rxq.c
index 24d0eaa..9423e7b 100644
--- a/drivers/net/mlx5/mlx5_rxq.c
+++ b/drivers/net/mlx5/mlx5_rxq.c
@@ -1701,7 +1701,6 @@ struct mlx5_rxq_obj *
return 0;
}
-#define MLX5_MAX_LRO_SIZE (UINT8_MAX * 256u)
#define MLX5_MAX_TCP_HDR_OFFSET ((unsigned int)(sizeof(struct rte_ether_hdr) + \
sizeof(struct rte_vlan_hdr) * 2 + \
sizeof(struct rte_ipv6_hdr)))
diff --git a/drivers/net/qede/qede_ethdev.c b/drivers/net/qede/qede_ethdev.c
index 575982f..ccbb8a4 100644
--- a/drivers/net/qede/qede_ethdev.c
+++ b/drivers/net/qede/qede_ethdev.c
@@ -1277,6 +1277,7 @@ static int qede_dev_configure(struct rte_eth_dev *eth_dev)
dev_info->min_rx_bufsize = (uint32_t)QEDE_MIN_RX_BUFF_SIZE;
dev_info->max_rx_pktlen = (uint32_t)ETH_TX_MAX_NON_LSO_PKT_LEN;
+ dev_info->max_lro_pkt_size = (uint32_t)0x7FFF;
dev_info->rx_desc_lim = qede_rx_desc_lim;
dev_info->tx_desc_lim = qede_tx_desc_lim;
diff --git a/drivers/net/virtio/virtio_ethdev.c b/drivers/net/virtio/virtio_ethdev.c
index 646de99..fa33c45 100644
--- a/drivers/net/virtio/virtio_ethdev.c
+++ b/drivers/net/virtio/virtio_ethdev.c
@@ -2435,6 +2435,7 @@ static void virtio_dev_free_mbufs(struct rte_eth_dev *dev)
RTE_MIN(hw->max_queue_pairs, VIRTIO_MAX_TX_QUEUES);
dev_info->min_rx_bufsize = VIRTIO_MIN_RX_BUFSIZE;
dev_info->max_rx_pktlen = VIRTIO_MAX_RX_PKTLEN;
+ dev_info->max_lro_pkt_size = VIRTIO_MAX_RX_PKTLEN;
dev_info->max_mac_addrs = VIRTIO_MAX_MAC_ADDRS;
host_features = VTPCI_OPS(hw)->get_features(hw);
diff --git a/drivers/net/vmxnet3/vmxnet3_ethdev.c b/drivers/net/vmxnet3/vmxnet3_ethdev.c
index d1faeaa..d18e8bc 100644
--- a/drivers/net/vmxnet3/vmxnet3_ethdev.c
+++ b/drivers/net/vmxnet3/vmxnet3_ethdev.c
@@ -1161,6 +1161,7 @@ static int eth_vmxnet3_pci_remove(struct rte_pci_device *pci_dev)
dev_info->max_tx_queues = VMXNET3_MAX_TX_QUEUES;
dev_info->min_rx_bufsize = 1518 + RTE_PKTMBUF_HEADROOM;
dev_info->max_rx_pktlen = 16384; /* includes CRC, cf MAXFRS register */
+ dev_info->max_lro_pkt_size = 16384;
dev_info->speed_capa = ETH_LINK_SPEED_10G;
dev_info->max_mac_addrs = VMXNET3_MAX_MAC_ADDRS;
diff --git a/lib/librte_ethdev/rte_ethdev.c b/lib/librte_ethdev/rte_ethdev.c
index 652c369..c642ba5 100644
--- a/lib/librte_ethdev/rte_ethdev.c
+++ b/lib/librte_ethdev/rte_ethdev.c
@@ -1136,6 +1136,26 @@ struct rte_eth_dev *
return name;
}
+static inline int
+check_lro_pkt_size(uint16_t port_id, uint32_t config_size,
+ uint32_t dev_info_size)
+{
+ int ret = 0;
+
+ if (config_size > dev_info_size) {
+ RTE_ETHDEV_LOG(ERR, "Ethdev port_id=%d max_lro_pkt_size %u "
+ "> max allowed value %u\n", port_id, config_size,
+ dev_info_size);
+ ret = -EINVAL;
+ } else if (config_size < RTE_ETHER_MIN_LEN) {
+ RTE_ETHDEV_LOG(ERR, "Ethdev port_id=%d max_lro_pkt_size %u "
+ "< min allowed value %u\n", port_id, config_size,
+ (unsigned int)RTE_ETHER_MIN_LEN);
+ ret = -EINVAL;
+ }
+ return ret;
+}
+
int
rte_eth_dev_configure(uint16_t port_id, uint16_t nb_rx_q, uint16_t nb_tx_q,
const struct rte_eth_conf *dev_conf)
@@ -1266,6 +1286,18 @@ struct rte_eth_dev *
RTE_ETHER_MAX_LEN;
}
+ /*
+ * If LRO is enabled, check that the maximum aggregated packet
+ * size is supported by the configured device.
+ */
+ if (dev_conf->rxmode.offloads & DEV_RX_OFFLOAD_TCP_LRO) {
+ ret = check_lro_pkt_size(
+ port_id, dev_conf->rxmode.max_lro_pkt_size,
+ dev_info.max_lro_pkt_size);
+ if (ret != 0)
+ goto rollback;
+ }
+
/* Any requested offloading must be within its device capabilities */
if ((dev_conf->rxmode.offloads & dev_info.rx_offload_capa) !=
dev_conf->rxmode.offloads) {
@@ -1770,6 +1802,18 @@ struct rte_eth_dev *
return -EINVAL;
}
+ /*
+ * If LRO is enabled, check that the maximum aggregated packet
+ * size is supported by the configured device.
+ */
+ if (local_conf.offloads & DEV_RX_OFFLOAD_TCP_LRO) {
+ int ret = check_lro_pkt_size(port_id,
+ dev->data->dev_conf.rxmode.max_lro_pkt_size,
+ dev_info.max_lro_pkt_size);
+ if (ret != 0)
+ return ret;
+ }
+
ret = (*dev->dev_ops->rx_queue_setup)(dev, rx_queue_id, nb_rx_desc,
socket_id, &local_conf, mp);
if (!ret) {
diff --git a/lib/librte_ethdev/rte_ethdev.h b/lib/librte_ethdev/rte_ethdev.h
index 44d77b3..1b76df5 100644
--- a/lib/librte_ethdev/rte_ethdev.h
+++ b/lib/librte_ethdev/rte_ethdev.h
@@ -395,6 +395,8 @@ struct rte_eth_rxmode {
/** The multi-queue packet distribution mode to be used, e.g. RSS. */
enum rte_eth_rx_mq_mode mq_mode;
uint32_t max_rx_pkt_len; /**< Only used if JUMBO_FRAME enabled. */
+ /** Maximum allowed size of LRO aggregated packet. */
+ uint32_t max_lro_pkt_size;
uint16_t split_hdr_size; /**< hdr buf size (header_split enabled).*/
/**
* Per-port Rx offloads to be set using DEV_RX_OFFLOAD_* flags.
@@ -1218,6 +1220,8 @@ struct rte_eth_dev_info {
const uint32_t *dev_flags; /**< Device flags */
uint32_t min_rx_bufsize; /**< Minimum size of RX buffer. */
uint32_t max_rx_pktlen; /**< Maximum configurable length of RX pkt. */
+ /** Maximum configurable size of LRO aggregated packet. */
+ uint32_t max_lro_pkt_size;
uint16_t max_rx_queues; /**< Maximum number of RX queues. */
uint16_t max_tx_queues; /**< Maximum number of TX queues. */
uint32_t max_mac_addrs; /**< Maximum number of MAC addresses. */
--
1.8.3.1
^ permalink raw reply [flat|nested] 79+ messages in thread
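To illustrate the check added in rte_eth_dev_configure() above: an
out-of-range request now fails before any device state changes. A minimal
sketch of what an application would observe (port id and queue counts are
illustrative):

struct rte_eth_dev_info dev_info;
struct rte_eth_conf conf = {0};
int ret;

rte_eth_dev_info_get(port_id, &dev_info);
conf.rxmode.offloads |= DEV_RX_OFFLOAD_TCP_LRO;
/* One byte above the capability: rejected with -EINVAL and a
 * "max_lro_pkt_size ... > max allowed value" log. */
conf.rxmode.max_lro_pkt_size = dev_info.max_lro_pkt_size + 1;
ret = rte_eth_dev_configure(port_id, 1, 1, &conf);
/* ret == -EINVAL here; values below RTE_ETHER_MIN_LEN fail the same way. */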
* Re: [dpdk-dev] [PATCH v4 1/3] ethdev: support API to set max LRO packet size
2019-11-07 12:35 ` [dpdk-dev] [PATCH v4 1/3] ethdev: " Dekel Peled
@ 2019-11-07 20:15 ` Ferruh Yigit
2019-11-08 6:54 ` Matan Azrad
0 siblings, 1 reply; 79+ messages in thread
From: Ferruh Yigit @ 2019-11-07 20:15 UTC (permalink / raw)
To: Dekel Peled, john.mcnamara, marko.kovacevic, nhorman,
ajit.khaparde, somnath.kotur, anatoly.burakov, xuanziyang2,
cloud.wangxiaoyun, zhouguoyang, wenzhuo.lu, konstantin.ananyev,
matan, shahafs, viacheslavo, rmody, shshaikh, maxime.coquelin,
tiwei.bie, zhihong.wang, yongwang, thomas, arybchenko,
jingjing.wu, bernard.iremonger
Cc: dev
On 11/7/2019 12:35 PM, Dekel Peled wrote:
> @@ -1266,6 +1286,18 @@ struct rte_eth_dev *
> RTE_ETHER_MAX_LEN;
> }
>
> + /*
> + * If LRO is enabled, check that the maximum aggregated packet
> + * size is supported by the configured device.
> + */
> + if (dev_conf->rxmode.offloads & DEV_RX_OFFLOAD_TCP_LRO) {
> + ret = check_lro_pkt_size(
> + port_id, dev_conf->rxmode.max_lro_pkt_size,
> + dev_info.max_lro_pkt_size);
> + if (ret != 0)
> + goto rollback;
> + }
> +
This check forces applications that enable LRO to provide 'max_lro_pkt_size'
config value.
- Why is it mandatory now? How was it working before, if it is a mandatory value?
- What happens if a PMD doesn't provide 'max_lro_pkt_size', so it is '0'?
- What do you think of setting the 'max_lro_pkt_size' config value to what the PMD
provided, if the application doesn't provide it?
^ permalink raw reply [flat|nested] 79+ messages in thread
* Re: [dpdk-dev] [PATCH v4 1/3] ethdev: support API to set max LRO packet size
2019-11-07 20:15 ` Ferruh Yigit
@ 2019-11-08 6:54 ` Matan Azrad
2019-11-08 9:19 ` Ferruh Yigit
0 siblings, 1 reply; 79+ messages in thread
From: Matan Azrad @ 2019-11-08 6:54 UTC (permalink / raw)
To: Ferruh Yigit, Dekel Peled, john.mcnamara, marko.kovacevic,
nhorman, ajit.khaparde, somnath.kotur, anatoly.burakov,
xuanziyang2, cloud.wangxiaoyun, zhouguoyang, wenzhuo.lu,
konstantin.ananyev, Shahaf Shuler, Slava Ovsiienko, rmody,
shshaikh, maxime.coquelin, tiwei.bie, zhihong.wang, yongwang,
Thomas Monjalon, arybchenko, jingjing.wu, bernard.iremonger
Cc: dev
Hi
From: Ferruh Yigit
> On 11/7/2019 12:35 PM, Dekel Peled wrote:
> > @@ -1266,6 +1286,18 @@ struct rte_eth_dev *
> > 				RTE_ETHER_MAX_LEN;
> > }
> >
> > + /*
> > + * If LRO is enabled, check that the maximum aggregated packet
> > + * size is supported by the configured device.
> > + */
> > + if (dev_conf->rxmode.offloads & DEV_RX_OFFLOAD_TCP_LRO) {
> > + ret = check_lro_pkt_size(
> > + port_id, dev_conf->rxmode.max_lro_pkt_size,
> > + dev_info.max_lro_pkt_size);
> > + if (ret != 0)
> > + goto rollback;
> > + }
> > +
>
> This check forces applications that enable LRO to provide 'max_lro_pkt_size'
> config value.
Yes. (We can break an API; we noticed it.)
> - Why it is mandatory now, how it was working before if it is mandatory
> value?
It is the same as max_rx_pkt_len, which is mandatory for the jumbo frame offload.
So now, when the user configures the LRO offload he must set the max LRO packet length.
We don't want to confuse the user here with the max rx pkt len configurations and behaviors; they should follow the same logic.
This parameter defines the LRO behavior well.
Before this, each PMD took its own interpretation of what the maximum size for LRO aggregated packets should be.
Now the user must state his intention, and the ethdev can limit it according to the device capability.
This way, the PMD can also organize/optimize its data path more.
Also, the application can create different mempools for LRO queues, to allow receiving bigger packets for LRO traffic.
> - What happens if PMD doesn't provide 'max_lro_pkt_size', so it is '0'?
Yes, you can see the feature description Dekel added.
This patch also updates all the PMDs supporting LRO to report a non-zero value,
the same as max rx pkt len, no?
> - What do you think setting 'max_lro_pkt_size' config value to what PMD
> provided if application doesn't provide it?
Same answers as above.
^ permalink raw reply [flat|nested] 79+ messages in thread
* Re: [dpdk-dev] [PATCH v4 1/3] ethdev: support API to set max LRO packet size
2019-11-08 6:54 ` Matan Azrad
@ 2019-11-08 9:19 ` Ferruh Yigit
2019-11-08 10:10 ` Matan Azrad
0 siblings, 1 reply; 79+ messages in thread
From: Ferruh Yigit @ 2019-11-08 9:19 UTC (permalink / raw)
To: Matan Azrad, Dekel Peled, john.mcnamara, marko.kovacevic,
nhorman, ajit.khaparde, somnath.kotur, anatoly.burakov,
xuanziyang2, cloud.wangxiaoyun, zhouguoyang, wenzhuo.lu,
konstantin.ananyev, Shahaf Shuler, Slava Ovsiienko, rmody,
shshaikh, maxime.coquelin, tiwei.bie, zhihong.wang, yongwang,
Thomas Monjalon, arybchenko, jingjing.wu, bernard.iremonger
Cc: dev
On 11/8/2019 6:54 AM, Matan Azrad wrote:
> Hi
>
> From: Ferruh Yigit
>> On 11/7/2019 12:35 PM, Dekel Peled wrote:
>>> @@ -1266,6 +1286,18 @@ struct rte_eth_dev *
>>> 				RTE_ETHER_MAX_LEN;
>>> }
>>>
>>> + /*
>>> + * If LRO is enabled, check that the maximum aggregated packet
>>> + * size is supported by the configured device.
>>> + */
>>> + if (dev_conf->rxmode.offloads & DEV_RX_OFFLOAD_TCP_LRO) {
>>> + ret = check_lro_pkt_size(
>>> + port_id, dev_conf->rxmode.max_lro_pkt_size,
>>> + dev_info.max_lro_pkt_size);
>>> + if (ret != 0)
>>> + goto rollback;
>>> + }
>>> +
>>
>> This check forces applications that enable LRO to provide 'max_lro_pkt_size'
>> config value.
>
> Yes.(we can break an API, we noticed it)
I am not talking about API/ABI breakage, that part is OK.
With this check, if the application requests the LRO offload but has not provided a
'max_lro_pkt_size' value, device configuration will fail.
Can there be a case where the application is good with whatever max the PMD can support?
>
>> - Why it is mandatory now, how it was working before if it is mandatory
>> value?
>
> It is the same as max_rx_pkt_len, which is mandatory for the jumbo frame offload.
> So now, when the user configures the LRO offload he must set the max LRO packet length.
> We don't want to confuse the user here with the max rx pkt len configurations and behaviors; they should follow the same logic.
>
> This parameter defines the LRO behavior well.
> Before this, each PMD took its own interpretation of what the maximum size for LRO aggregated packets should be.
> Now the user must state his intention, and the ethdev can limit it according to the device capability.
> This way, the PMD can also organize/optimize its data path more.
> Also, the application can create different mempools for LRO queues, to allow receiving bigger packets for LRO traffic.
>
>> - What happens if PMD doesn't provide 'max_lro_pkt_size', so it is '0'?
> Yes, you can see the feature description Dekel added.
> This patch also updates all the PMDs supporting LRO to report a non-zero value.
Of course I can see the updates Matan, my point is "What happens if PMD doesn't
provide 'max_lro_pkt_size'",
1) There is no check for it right, so it is acceptable?
2) Are we making this field mandatory for PMDs to provide? It is easy to make
new fields mandatory for PMDs, but is this really necessary?
>
> The same as max rx pkt len, no?
>
>> - What do you think setting 'max_lro_pkt_size' config value to what PMD
>> provided if application doesn't provide it?
> Same answers as above.
>
If the application doesn't care about the value, as has been the case till now, and
has not provided an explicit 'max_lro_pkt_size', why not have the ethdev level use
the value provided by the PMD instead of failing?
^ permalink raw reply [flat|nested] 79+ messages in thread
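A minimal sketch of the fallback being suggested above, i.e. defaulting to the
PMD capability when the application leaves the field at zero. This is a
hypothetical variant of the configure-time check, not part of the patch under
review:

/* Hypothetical ethdev-level fallback: adopt the device capability
 * when the application expresses no preference. */
if (dev_conf->rxmode.offloads & DEV_RX_OFFLOAD_TCP_LRO) {
	uint32_t max_lro = dev_conf->rxmode.max_lro_pkt_size;

	if (max_lro == 0)
		max_lro = dev_info.max_lro_pkt_size;
	ret = check_lro_pkt_size(port_id, max_lro,
			dev_info.max_lro_pkt_size);
	if (ret != 0)
		goto rollback;
}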
* Re: [dpdk-dev] [PATCH v4 1/3] ethdev: support API to set max LRO packet size
2019-11-08 9:19 ` Ferruh Yigit
@ 2019-11-08 10:10 ` Matan Azrad
2019-11-08 11:37 ` Ferruh Yigit
0 siblings, 1 reply; 79+ messages in thread
From: Matan Azrad @ 2019-11-08 10:10 UTC (permalink / raw)
To: Ferruh Yigit, Dekel Peled, john.mcnamara, marko.kovacevic,
nhorman, ajit.khaparde, somnath.kotur, anatoly.burakov,
xuanziyang2, cloud.wangxiaoyun, zhouguoyang, wenzhuo.lu,
konstantin.ananyev, Shahaf Shuler, Slava Ovsiienko, rmody,
shshaikh, maxime.coquelin, tiwei.bie, zhihong.wang, yongwang,
Thomas Monjalon, arybchenko, jingjing.wu, bernard.iremonger
Cc: dev
From: Ferruh Yigit
> On 11/8/2019 6:54 AM, Matan Azrad wrote:
> > Hi
> >
> > From: Ferruh Yigit
> >> On 11/7/2019 12:35 PM, Dekel Peled wrote:
> >>> @@ -1266,6 +1286,18 @@ struct rte_eth_dev *
> >>> 				RTE_ETHER_MAX_LEN;
> >>> }
> >>>
> >>> + /*
> >>> + * If LRO is enabled, check that the maximum aggregated packet
> >>> + * size is supported by the configured device.
> >>> + */
> >>> + if (dev_conf->rxmode.offloads & DEV_RX_OFFLOAD_TCP_LRO) {
> >>> + ret = check_lro_pkt_size(
> >>> + port_id, dev_conf->rxmode.max_lro_pkt_size,
> >>> + dev_info.max_lro_pkt_size);
> >>> + if (ret != 0)
> >>> + goto rollback;
> >>> + }
> >>> +
> >>
> >> This check forces applications that enable LRO to provide 'max_lro_pkt_size'
> >> config value.
> >
> > Yes. (We can break an API; we noticed it.)
>
> I am not talking about API/ABI breakage, that part is OK.
> With this check, if the application requests the LRO offload but has not provided a
> 'max_lro_pkt_size' value, device configuration will fail.
>
Yes
> Can there be a case where the application is good with whatever max the PMD
> can support?
Yes, there can be - you know, we can do everything we want, but it is better to be consistent:
since the max rx pkt len field is mandatory for the JUMBO offload, max lro pkt len should be mandatory for the LRO offload.
So your question is actually why both the non-LRO and LRO maximum packet sizes are mandatory...
I think these are important values for network application management.
They are also good for mbuf size management.
> >
> >> - Why it is mandatory now, how it was working before if it is
> >> mandatory value?
> >
> > It is the same as max_rx_pkt_len, which is mandatory for the jumbo frame offload.
> > So now, when the user configures the LRO offload he must set the max LRO packet length.
> > We don't want to confuse the user here with the max rx pkt len configurations and behaviors; they should follow the same logic.
> >
> > This parameter defines the LRO behavior well.
> > Before this, each PMD took its own interpretation of what the maximum size for LRO aggregated packets should be.
> > Now the user must state his intention, and the ethdev can limit it according to the device capability.
> > This way, the PMD can also organize/optimize its data path more.
> > Also, the application can create different mempools for LRO queues, to allow receiving bigger packets for LRO traffic.
> >
> >> - What happens if PMD doesn't provide 'max_lro_pkt_size', so it is '0'?
> > Yes, you can see the feature description Dekel added.
> > This patch also updates all the PMDs supporting LRO to report a non-zero value.
>
> Of course I can see the updates Matan, my point is "What happens if PMD
> doesn't provide 'max_lro_pkt_size'",
> 1) There is no check for it right, so it is acceptable?
There is a check.
If the capability is 0, any non-zero configuration will fail.
> 2) Are we making this field mandatory for PMDs to provide? It is easy to make
> new fields mandatory for PMDs, but is this really necessary?
Yes, for consistency.
> >
> The same as max rx pkt len, no?
> >
> >> - What do you think setting 'max_lro_pkt_size' config value to what
> >> PMD provided if application doesn't provide it?
> > Same answers as above.
> >
>
> If the application doesn't care about the value, as has been the case till now, and
> has not provided an explicit 'max_lro_pkt_size', why not have the ethdev level use
> the value provided by the PMD instead of failing?
Again, the same question can be asked about max rx pkt len.
It looks like the packet size is a very important value which should be set by the application.
Previous applications had no option to configure it, so they didn't configure it (probably covering it somehow); I think it was our miss not to supply this info.
Let's do it the same way as max rx pkt len (which is this patch's main idea).
Later, we can change the meaning of both.
Matan
^ permalink raw reply [flat|nested] 79+ messages in thread
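On the mempool point raised above, a dedicated pool can give LRO queues a
larger data room; a minimal sketch, assuming conf is the rte_eth_conf used at
configure time, and with illustrative pool name, counts and queue id. Very
large LRO sizes would need scattered Rx instead, since the mbuf data room is a
16-bit value:

/* Sketch: a separate mempool sized for LRO aggregates. */
struct rte_mempool *lro_pool;
uint16_t data_room = conf.rxmode.max_lro_pkt_size + RTE_PKTMBUF_HEADROOM;

lro_pool = rte_pktmbuf_pool_create("lro_pool", 8192, 256, 0,
		data_room, rte_socket_id());
if (lro_pool == NULL)
	rte_exit(EXIT_FAILURE, "cannot create LRO mempool\n");
/* Attach the big-mbuf pool only to the LRO-enabled queue. */
rte_eth_rx_queue_setup(port_id, lro_queue_id, 1024,
		rte_socket_id(), NULL, lro_pool);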
* Re: [dpdk-dev] [PATCH v4 1/3] ethdev: support API to set max LRO packet size
2019-11-08 10:10 ` Matan Azrad
@ 2019-11-08 11:37 ` Ferruh Yigit
2019-11-08 11:56 ` Matan Azrad
0 siblings, 1 reply; 79+ messages in thread
From: Ferruh Yigit @ 2019-11-08 11:37 UTC (permalink / raw)
To: Matan Azrad, Dekel Peled, john.mcnamara, marko.kovacevic,
nhorman, ajit.khaparde, somnath.kotur, anatoly.burakov,
xuanziyang2, cloud.wangxiaoyun, zhouguoyang, wenzhuo.lu,
konstantin.ananyev, Shahaf Shuler, Slava Ovsiienko, rmody,
shshaikh, maxime.coquelin, tiwei.bie, zhihong.wang, yongwang,
Thomas Monjalon, arybchenko, jingjing.wu, bernard.iremonger
Cc: dev
On 11/8/2019 10:10 AM, Matan Azrad wrote:
>
>
> From: Ferruh Yigit
>> On 11/8/2019 6:54 AM, Matan Azrad wrote:
>>> Hi
>>>
>>> From: Ferruh Yigit
>>>> On 11/7/2019 12:35 PM, Dekel Peled wrote:
>>>>> @@ -1266,6 +1286,18 @@ struct rte_eth_dev *
>>>>>
>>>> RTE_ETHER_MAX_LEN;
>>>>> }
>>>>>
>>>>> + /*
>>>>> + * If LRO is enabled, check that the maximum aggregated packet
>>>>> + * size is supported by the configured device.
>>>>> + */
>>>>> + if (dev_conf->rxmode.offloads & DEV_RX_OFFLOAD_TCP_LRO) {
>>>>> + ret = check_lro_pkt_size(
>>>>> + port_id, dev_conf-
>>>>> rxmode.max_lro_pkt_size,
>>>>> + dev_info.max_lro_pkt_size);
>>>>> + if (ret != 0)
>>>>> + goto rollback;
>>>>> + }
>>>>> +
>>>>
>>>> This check forces applications that enable LRO to provide
>> 'max_lro_pkt_size'
>>>> config value.
>>>
>>> Yes.(we can break an API, we noticed it)
>>
>> I am not talking about API/ABI breakage, that part is OK.
>> With this check, if the application requested LRO offload but not provided
>> 'max_lro_pkt_size' value, device configuration will fail.
>>
> Yes
>> Can there be a case application is good with whatever the PMD can support
>> as max?
> Yes can be - you know, we can do everything we want but it is better to be consistent:
> Due to the fact of Max rx pkt len field is mandatory for JUMBO offload, max lro pkt len should be mandatory for LRO offload.
>
> So your question is actually why both, non-lro packets and LRO packets max size are mandatory...
>
>
> I think it should be important values for net applications management.
> Also good for mbuf size managements.
>
>>>
>>>> - Why it is mandatory now, how it was working before if it is
>>>> mandatory value?
>>>
>>> It is the same as max_rx_pkt_len which is mandatory for jumbo frame
>> offload.
>>> So now, when the user configures a LRO offload he must to set max lro pkt
>> len.
>>> We don't want to confuse the user here with the max rx pkt len
>> configurations and behaviors, they should be with same logic.
>>>
>>> This parameter defines well the LRO behavior.
>>> Before this, each PMD took its own interpretation to what should be the
>> maximum size for LRO aggregated packets.
>>> Now, the user must say what is his intension, and the ethdev can limit it
>> according to the device capability.
>>> By this way, also, the PMD can organize\optimize its data-path more.
>>> Also, the application can create different mempools for LRO queues to
>> allow bigger packet receiving for LRO traffic.
>>>
>>>> - What happens if PMD doesn't provide 'max_lro_pkt_size', so it is '0'?
>>> Yes, you can see the feature description Dekel added.
>>> This patch also updates all the PMDs support an LRO for non-0 value.
>>
>> Of course I can see the updates Matan, my point is "What happens if PMD
>> doesn't provide 'max_lro_pkt_size'",
>> 1) There is no check for it right, so it is acceptable?
>
> There is check.
> If the capability is 0, any non-zero configuration will fail.
>
>> 2) Are we making this filed mandatory to provide for PMDs, it is easy to make
>> new fields mandatory for PMDs but is this really necessary?
>
> Yes, for consistence.
>
>>>
>>> as same as max rx pkt len, no?
>>>
>>>> - What do you think setting 'max_lro_pkt_size' config value to what
>>>> PMD provided if application doesn't provide it?
>>> Same answers as above.
>>>
>>
>> If application doesn't care the value, as it has been till now, and not provided
>> explicit 'max_lro_pkt_size', why not ethdev level use the value provided by
>> PMD instead of failing?
>
> Again, same question we can ask on max rx pkt len.
>
> Looks like the packet size is very important value which should be set by the application.
>
> Previous applications have no option to configure it, so they haven't configure it, (probably cover it somehow) I think it is our miss to supply this info.
>
> Let's do it in same way as we do max rx pkt len (as this patch main idea).
> Later, we can change both to other meaning.
>
I think "because 'max_rx_pkt_len' does it" is not a good reason to introduce a new
mandatory config option for the application.
Will it work, if:
- If the application doesn't provide this value, use the PMD max
- If neither the application nor the PMD provides this value, fail on configure()?
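In code, the fallback being proposed could look roughly like this inside
configure() (a sketch of the suggestion, not what the current patch does):

	if (dev_conf->rxmode.offloads & DEV_RX_OFFLOAD_TCP_LRO) {
		uint32_t max_lro = dev_conf->rxmode.max_lro_pkt_size;

		/* Application gave no value: fall back to the PMD max. */
		if (max_lro == 0)
			max_lro = dev_info.max_lro_pkt_size;

		/* Neither application nor PMD gave a value: fail. */
		if (max_lro == 0) {
			ret = -EINVAL;
			goto rollback;
		}
		dev->data->dev_conf.rxmode.max_lro_pkt_size = max_lro;
	}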
^ permalink raw reply [flat|nested] 79+ messages in thread
* Re: [dpdk-dev] [PATCH v4 1/3] ethdev: support API to set max LRO packet size
2019-11-08 11:37 ` Ferruh Yigit
@ 2019-11-08 11:56 ` Matan Azrad
2019-11-08 12:51 ` Ferruh Yigit
2019-11-08 13:11 ` Ananyev, Konstantin
0 siblings, 2 replies; 79+ messages in thread
From: Matan Azrad @ 2019-11-08 11:56 UTC (permalink / raw)
To: Ferruh Yigit, Dekel Peled, john.mcnamara, marko.kovacevic,
nhorman, ajit.khaparde, somnath.kotur, anatoly.burakov,
xuanziyang2, cloud.wangxiaoyun, zhouguoyang, wenzhuo.lu,
konstantin.ananyev, Shahaf Shuler, Slava Ovsiienko, rmody,
shshaikh, maxime.coquelin, tiwei.bie, zhihong.wang, yongwang,
Thomas Monjalon, arybchenko, jingjing.wu, bernard.iremonger
Cc: dev
From: Ferruh Yigit
> On 11/8/2019 10:10 AM, Matan Azrad wrote:
> >
> >
> > From: Ferruh Yigit
> >> On 11/8/2019 6:54 AM, Matan Azrad wrote:
> >>> Hi
> >>>
> >>> From: Ferruh Yigit
> >>>> On 11/7/2019 12:35 PM, Dekel Peled wrote:
> >>>>> @@ -1266,6 +1286,18 @@ struct rte_eth_dev *
> >>>>>
> >>>> RTE_ETHER_MAX_LEN;
> >>>>> }
> >>>>>
> >>>>> + /*
> >>>>> + * If LRO is enabled, check that the maximum aggregated
> packet
> >>>>> + * size is supported by the configured device.
> >>>>> + */
> >>>>> + if (dev_conf->rxmode.offloads &
> DEV_RX_OFFLOAD_TCP_LRO) {
> >>>>> + ret = check_lro_pkt_size(
> >>>>> + port_id, dev_conf-
> >>>>> rxmode.max_lro_pkt_size,
> >>>>> + dev_info.max_lro_pkt_size);
> >>>>> + if (ret != 0)
> >>>>> + goto rollback;
> >>>>> + }
> >>>>> +
> >>>>
> >>>> This check forces applications that enable LRO to provide
> >> 'max_lro_pkt_size'
> >>>> config value.
> >>>
> >>> Yes.(we can break an API, we noticed it)
> >>
> >> I am not talking about API/ABI breakage, that part is OK.
> >> With this check, if the application requested LRO offload but not
> >> provided 'max_lro_pkt_size' value, device configuration will fail.
> >>
> > Yes
> >> Can there be a case application is good with whatever the PMD can
> >> support as max?
> > Yes can be - you know, we can do everything we want but it is better to be
> consistent:
> > Due to the fact of Max rx pkt len field is mandatory for JUMBO offload, max
> lro pkt len should be mandatory for LRO offload.
> >
> > So your question is actually why both, non-lro packets and LRO packets max
> size are mandatory...
> >
> >
> > I think it should be important values for net applications management.
> > Also good for mbuf size managements.
> >
> >>>
> >>>> - Why it is mandatory now, how it was working before if it is
> >>>> mandatory value?
> >>>
> >>> It is the same as max_rx_pkt_len which is mandatory for jumbo frame
> >> offload.
> >>> So now, when the user configures a LRO offload he must to set max
> >>> lro pkt
> >> len.
> >>> We don't want to confuse the user here with the max rx pkt len
> >> configurations and behaviors, they should be with same logic.
> >>>
> >>> This parameter defines well the LRO behavior.
> >>> Before this, each PMD took its own interpretation to what should be
> >>> the
> >> maximum size for LRO aggregated packets.
> >>> Now, the user must say what is his intension, and the ethdev can
> >>> limit it
> >> according to the device capability.
> >>> By this way, also, the PMD can organize\optimize its data-path more.
> >>> Also, the application can create different mempools for LRO queues
> >>> to
> >> allow bigger packet receiving for LRO traffic.
> >>>
> >>>> - What happens if PMD doesn't provide 'max_lro_pkt_size', so it is '0'?
> >>> Yes, you can see the feature description Dekel added.
> >>> This patch also updates all the PMDs support an LRO for non-0 value.
> >>
> >> Of course I can see the updates Matan, my point is "What happens if
> >> PMD doesn't provide 'max_lro_pkt_size'",
> >> 1) There is no check for it right, so it is acceptable?
> >
> > There is check.
> > If the capability is 0, any non-zero configuration will fail.
> >
> >> 2) Are we making this filed mandatory to provide for PMDs, it is easy
> >> to make new fields mandatory for PMDs but is this really necessary?
> >
> > Yes, for consistence.
> >
> >>>
> >>> as same as max rx pkt len, no?
> >>>
> >>>> - What do you think setting 'max_lro_pkt_size' config value to what
> >>>> PMD provided if application doesn't provide it?
> >>> Same answers as above.
> >>>
> >>
> >> If application doesn't care the value, as it has been till now, and
> >> not provided explicit 'max_lro_pkt_size', why not ethdev level use
> >> the value provided by PMD instead of failing?
> >
> > Again, same question we can ask on max rx pkt len.
> >
> > Looks like the packet size is very important value which should be set by
> the application.
> >
> > Previous applications have no option to configure it, so they haven't
> configure it, (probably cover it somehow) I think it is our miss to supply this
> info.
> >
> > Let's do it in same way as we do max rx pkt len (as this patch main idea).
> > Later, we can change both to other meaning.
> >
>
> I think it is not a good reason to introduce a new mandatory config option for
> application because of 'max_rx_pkt_len' does it.
It is mandatory only if LRO offload is configured.
> Will it work, if:
> - If application doesn't provide this value, use the PMD max
It may cause a problem if the mbuf size is not big enough for the PMD maximum.
> - If both application and PMD doesn't provide this value, fail on configure()?
It will work.
In my opinion - not ideal.
Matan
^ permalink raw reply [flat|nested] 79+ messages in thread
* Re: [dpdk-dev] [PATCH v4 1/3] ethdev: support API to set max LRO packet size
2019-11-08 11:56 ` Matan Azrad
@ 2019-11-08 12:51 ` Ferruh Yigit
2019-11-08 16:11 ` Dekel Peled
2019-11-09 18:20 ` Matan Azrad
2019-11-08 13:11 ` Ananyev, Konstantin
1 sibling, 2 replies; 79+ messages in thread
From: Ferruh Yigit @ 2019-11-08 12:51 UTC (permalink / raw)
To: Matan Azrad, Dekel Peled, john.mcnamara, marko.kovacevic,
nhorman, ajit.khaparde, somnath.kotur, anatoly.burakov,
xuanziyang2, cloud.wangxiaoyun, zhouguoyang, wenzhuo.lu,
konstantin.ananyev, Shahaf Shuler, Slava Ovsiienko, rmody,
shshaikh, maxime.coquelin, tiwei.bie, zhihong.wang, yongwang,
Thomas Monjalon, arybchenko, jingjing.wu, bernard.iremonger
Cc: dev
On 11/8/2019 11:56 AM, Matan Azrad wrote:
>
>
> From: Ferruh Yigit
>> On 11/8/2019 10:10 AM, Matan Azrad wrote:
>>>
>>>
>>> From: Ferruh Yigit
>>>> On 11/8/2019 6:54 AM, Matan Azrad wrote:
>>>>> Hi
>>>>>
>>>>> From: Ferruh Yigit
>>>>>> On 11/7/2019 12:35 PM, Dekel Peled wrote:
>>>>>>> @@ -1266,6 +1286,18 @@ struct rte_eth_dev *
>>>>>>>
>>>>>> RTE_ETHER_MAX_LEN;
>>>>>>> }
>>>>>>>
>>>>>>> + /*
>>>>>>> + * If LRO is enabled, check that the maximum aggregated
>> packet
>>>>>>> + * size is supported by the configured device.
>>>>>>> + */
>>>>>>> + if (dev_conf->rxmode.offloads &
>> DEV_RX_OFFLOAD_TCP_LRO) {
>>>>>>> + ret = check_lro_pkt_size(
>>>>>>> + port_id, dev_conf-
>>>>>>> rxmode.max_lro_pkt_size,
>>>>>>> + dev_info.max_lro_pkt_size);
>>>>>>> + if (ret != 0)
>>>>>>> + goto rollback;
>>>>>>> + }
>>>>>>> +
>>>>>>
>>>>>> This check forces applications that enable LRO to provide
>>>> 'max_lro_pkt_size'
>>>>>> config value.
>>>>>
>>>>> Yes.(we can break an API, we noticed it)
>>>>
>>>> I am not talking about API/ABI breakage, that part is OK.
>>>> With this check, if the application requested LRO offload but not
>>>> provided 'max_lro_pkt_size' value, device configuration will fail.
>>>>
>>> Yes
>>>> Can there be a case application is good with whatever the PMD can
>>>> support as max?
>>> Yes can be - you know, we can do everything we want but it is better to be
>> consistent:
>>> Due to the fact of Max rx pkt len field is mandatory for JUMBO offload, max
>> lro pkt len should be mandatory for LRO offload.
>>>
>>> So your question is actually why both, non-lro packets and LRO packets max
>> size are mandatory...
>>>
>>>
>>> I think it should be important values for net applications management.
>>> Also good for mbuf size managements.
>>>
>>>>>
>>>>>> - Why it is mandatory now, how it was working before if it is
>>>>>> mandatory value?
>>>>>
>>>>> It is the same as max_rx_pkt_len which is mandatory for jumbo frame
>>>> offload.
>>>>> So now, when the user configures a LRO offload he must to set max
>>>>> lro pkt
>>>> len.
>>>>> We don't want to confuse the user here with the max rx pkt len
>>>> configurations and behaviors, they should be with same logic.
>>>>>
>>>>> This parameter defines well the LRO behavior.
>>>>> Before this, each PMD took its own interpretation to what should be
>>>>> the
>>>> maximum size for LRO aggregated packets.
>>>>> Now, the user must say what is his intension, and the ethdev can
>>>>> limit it
>>>> according to the device capability.
>>>>> By this way, also, the PMD can organize\optimize its data-path more.
>>>>> Also, the application can create different mempools for LRO queues
>>>>> to
>>>> allow bigger packet receiving for LRO traffic.
>>>>>
>>>>>> - What happens if PMD doesn't provide 'max_lro_pkt_size', so it is '0'?
>>>>> Yes, you can see the feature description Dekel added.
>>>>> This patch also updates all the PMDs support an LRO for non-0 value.
>>>>
>>>> Of course I can see the updates Matan, my point is "What happens if
>>>> PMD doesn't provide 'max_lro_pkt_size'",
>>>> 1) There is no check for it right, so it is acceptable?
>>>
>>> There is check.
>>> If the capability is 0, any non-zero configuration will fail.
>>>
>>>> 2) Are we making this filed mandatory to provide for PMDs, it is easy
>>>> to make new fields mandatory for PMDs but is this really necessary?
>>>
>>> Yes, for consistence.
>>>
>>>>>
>>>>> as same as max rx pkt len, no?
>>>>>
>>>>>> - What do you think setting 'max_lro_pkt_size' config value to what
>>>>>> PMD provided if application doesn't provide it?
>>>>> Same answers as above.
>>>>>
>>>>
>>>> If application doesn't care the value, as it has been till now, and
>>>> not provided explicit 'max_lro_pkt_size', why not ethdev level use
>>>> the value provided by PMD instead of failing?
>>>
>>> Again, same question we can ask on max rx pkt len.
>>>
>>> Looks like the packet size is very important value which should be set by
>> the application.
>>>
>>> Previous applications have no option to configure it, so they haven't
>> configure it, (probably cover it somehow) I think it is our miss to supply this
>> info.
>>>
>>> Let's do it in same way as we do max rx pkt len (as this patch main idea).
>>> Later, we can change both to other meaning.
>>>
>>
>> I think it is not a good reason to introduce a new mandatory config option for
>> application because of 'max_rx_pkt_len' does it.
>
> It is mandatory only if LRO offload is configured.
>
>> Will it work, if:
>> - If application doesn't provide this value, use the PMD max
>
> May cause a problem if the mbuf size is not enough for the PMD maximum.
OK, this is what I was missing; for this case I was thinking max_rx_pkt_len would
be used, but you already explained that an application may want to use different
mempools for LRO queues.
For this case, shouldn't PMDs take the 'rxmode.max_lro_pkt_size' into account and
program the device accordingly (of course in the LRO enabled case)?
This part seems to be missing and should be highlighted to the other PMD maintainers.
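To illustrate the separate-mempool point, a sketch of what such an application
setup might look like (pool names, counts and sizes are hypothetical):

	/* Small mbufs for regular queues, large mbufs for LRO queues. */
	struct rte_mempool *pool_rx = rte_pktmbuf_pool_create("rx_pool",
		8192, 256, 0, RTE_MBUF_DEFAULT_BUF_SIZE, rte_socket_id());
	struct rte_mempool *pool_lro = rte_pktmbuf_pool_create("lro_pool",
		4096, 256, 0, 16384 + RTE_PKTMBUF_HEADROOM, rte_socket_id());

	/* Queue 0 receives regular traffic, queue 1 is an LRO queue. */
	rte_eth_rx_queue_setup(port_id, 0, 1024, rte_socket_id(), NULL, pool_rx);
	rte_eth_rx_queue_setup(port_id, 1, 1024, rte_socket_id(), NULL, pool_lro);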
>
>> - If both application and PMD doesn't provide this value, fail on configure()?
>
> It will work.
> In my opinion - not ideal.
>
> Matan
>
>
^ permalink raw reply [flat|nested] 79+ messages in thread
* Re: [dpdk-dev] [PATCH v4 1/3] ethdev: support API to set max LRO packet size
2019-11-08 12:51 ` Ferruh Yigit
@ 2019-11-08 16:11 ` Dekel Peled
2019-11-08 16:53 ` Ferruh Yigit
2019-11-09 18:20 ` Matan Azrad
1 sibling, 1 reply; 79+ messages in thread
From: Dekel Peled @ 2019-11-08 16:11 UTC (permalink / raw)
To: Ferruh Yigit, Matan Azrad, john.mcnamara, marko.kovacevic,
nhorman, ajit.khaparde, somnath.kotur, anatoly.burakov,
xuanziyang2, cloud.wangxiaoyun, zhouguoyang, wenzhuo.lu,
konstantin.ananyev, Shahaf Shuler, Slava Ovsiienko, rmody,
shshaikh, maxime.coquelin, tiwei.bie, zhihong.wang, yongwang,
Thomas Monjalon, arybchenko, jingjing.wu, bernard.iremonger
Cc: dev
Thanks, PSB.
> -----Original Message-----
> From: Ferruh Yigit <ferruh.yigit@intel.com>
> Sent: Friday, November 8, 2019 2:52 PM
> To: Matan Azrad <matan@mellanox.com>; Dekel Peled
> <dekelp@mellanox.com>; john.mcnamara@intel.com;
> marko.kovacevic@intel.com; nhorman@tuxdriver.com;
> ajit.khaparde@broadcom.com; somnath.kotur@broadcom.com;
> anatoly.burakov@intel.com; xuanziyang2@huawei.com;
> cloud.wangxiaoyun@huawei.com; zhouguoyang@huawei.com;
> wenzhuo.lu@intel.com; konstantin.ananyev@intel.com; Shahaf Shuler
> <shahafs@mellanox.com>; Slava Ovsiienko <viacheslavo@mellanox.com>;
> rmody@marvell.com; shshaikh@marvell.com;
> maxime.coquelin@redhat.com; tiwei.bie@intel.com;
> zhihong.wang@intel.com; yongwang@vmware.com; Thomas Monjalon
> <thomas@monjalon.net>; arybchenko@solarflare.com;
> jingjing.wu@intel.com; bernard.iremonger@intel.com
> Cc: dev@dpdk.org
> Subject: Re: [dpdk-dev] [PATCH v4 1/3] ethdev: support API to set max LRO
> packet size
>
> On 11/8/2019 11:56 AM, Matan Azrad wrote:
> >
> >
> > From: Ferruh Yigit
> >> On 11/8/2019 10:10 AM, Matan Azrad wrote:
> >>>
> >>>
> >>> From: Ferruh Yigit
> >>>> On 11/8/2019 6:54 AM, Matan Azrad wrote:
> >>>>> Hi
> >>>>>
> >>>>> From: Ferruh Yigit
> >>>>>> On 11/7/2019 12:35 PM, Dekel Peled wrote:
> >>>>>>> @@ -1266,6 +1286,18 @@ struct rte_eth_dev *
> >>>>>>>
> >>>>>> RTE_ETHER_MAX_LEN;
> >>>>>>> }
> >>>>>>>
> >>>>>>> + /*
> >>>>>>> + * If LRO is enabled, check that the maximum aggregated
> >> packet
> >>>>>>> + * size is supported by the configured device.
> >>>>>>> + */
> >>>>>>> + if (dev_conf->rxmode.offloads &
> >> DEV_RX_OFFLOAD_TCP_LRO) {
> >>>>>>> + ret = check_lro_pkt_size(
> >>>>>>> + port_id, dev_conf-
> >>>>>>> rxmode.max_lro_pkt_size,
> >>>>>>> + dev_info.max_lro_pkt_size);
> >>>>>>> + if (ret != 0)
> >>>>>>> + goto rollback;
> >>>>>>> + }
> >>>>>>> +
> >>>>>>
> >>>>>> This check forces applications that enable LRO to provide
> >>>> 'max_lro_pkt_size'
> >>>>>> config value.
> >>>>>
> >>>>> Yes.(we can break an API, we noticed it)
> >>>>
> >>>> I am not talking about API/ABI breakage, that part is OK.
> >>>> With this check, if the application requested LRO offload but not
> >>>> provided 'max_lro_pkt_size' value, device configuration will fail.
> >>>>
> >>> Yes
> >>>> Can there be a case application is good with whatever the PMD can
> >>>> support as max?
> >>> Yes can be - you know, we can do everything we want but it is better
> >>> to be
> >> consistent:
> >>> Due to the fact of Max rx pkt len field is mandatory for JUMBO
> >>> offload, max
> >> lro pkt len should be mandatory for LRO offload.
> >>>
> >>> So your question is actually why both, non-lro packets and LRO
> >>> packets max
> >> size are mandatory...
> >>>
> >>>
> >>> I think it should be important values for net applications management.
> >>> Also good for mbuf size managements.
> >>>
> >>>>>
> >>>>>> - Why it is mandatory now, how it was working before if it is
> >>>>>> mandatory value?
> >>>>>
> >>>>> It is the same as max_rx_pkt_len which is mandatory for jumbo
> >>>>> frame
> >>>> offload.
> >>>>> So now, when the user configures a LRO offload he must to set max
> >>>>> lro pkt
> >>>> len.
> >>>>> We don't want to confuse the user here with the max rx pkt len
> >>>> configurations and behaviors, they should be with same logic.
> >>>>>
> >>>>> This parameter defines well the LRO behavior.
> >>>>> Before this, each PMD took its own interpretation to what should
> >>>>> be the
> >>>> maximum size for LRO aggregated packets.
> >>>>> Now, the user must say what is his intension, and the ethdev can
> >>>>> limit it
> >>>> according to the device capability.
> >>>>> By this way, also, the PMD can organize\optimize its data-path more.
> >>>>> Also, the application can create different mempools for LRO queues
> >>>>> to
> >>>> allow bigger packet receiving for LRO traffic.
> >>>>>
> >>>>>> - What happens if PMD doesn't provide 'max_lro_pkt_size', so it is
> '0'?
> >>>>> Yes, you can see the feature description Dekel added.
> >>>>> This patch also updates all the PMDs support an LRO for non-0 value.
> >>>>
> >>>> Of course I can see the updates Matan, my point is "What happens if
> >>>> PMD doesn't provide 'max_lro_pkt_size'",
> >>>> 1) There is no check for it right, so it is acceptable?
> >>>
> >>> There is check.
> >>> If the capability is 0, any non-zero configuration will fail.
> >>>
> >>>> 2) Are we making this filed mandatory to provide for PMDs, it is
> >>>> easy to make new fields mandatory for PMDs but is this really
> necessary?
> >>>
> >>> Yes, for consistence.
> >>>
> >>>>>
> >>>>> as same as max rx pkt len, no?
> >>>>>
> >>>>>> - What do you think setting 'max_lro_pkt_size' config value to
> >>>>>> what PMD provided if application doesn't provide it?
> >>>>> Same answers as above.
> >>>>>
> >>>>
> >>>> If application doesn't care the value, as it has been till now, and
> >>>> not provided explicit 'max_lro_pkt_size', why not ethdev level use
> >>>> the value provided by PMD instead of failing?
> >>>
> >>> Again, same question we can ask on max rx pkt len.
> >>>
> >>> Looks like the packet size is very important value which should be
> >>> set by
> >> the application.
> >>>
> >>> Previous applications have no option to configure it, so they
> >>> haven't
> >> configure it, (probably cover it somehow) I think it is our miss to
> >> supply this info.
> >>>
> >>> Let's do it in same way as we do max rx pkt len (as this patch main idea).
> >>> Later, we can change both to other meaning.
> >>>
> >>
> >> I think it is not a good reason to introduce a new mandatory config
> >> option for application because of 'max_rx_pkt_len' does it.
> >
> > It is mandatory only if LRO offload is configured.
> >
> >> Will it work, if:
> >> - If application doesn't provide this value, use the PMD max
> >
> > May cause a problem if the mbuf size is not enough for the PMD maximum.
>
> OK, this is what I was missing, for this case I was thinking max_rx_pkt_len will
> be used but you already explained that application may want to use different
> mempools for LRO queues.
>
> For this case shouldn't PMDs take the 'rxmode.max_lro_pkt_size' into
> account and program the device accordingly (of course in LRO enabled case)
> ?
> This part seems missing and should be highlighted to other PMD maintainers.
>
All relevant PMDs were modified and maintainers are copied on this patch series.
> >
> >> - If both application and PMD doesn't provide this value, fail on
> configure()?
> >
> > It will work.
> > In my opinion - not ideal.
> >
> > Matan
> >
> >
^ permalink raw reply [flat|nested] 79+ messages in thread
* Re: [dpdk-dev] [PATCH v4 1/3] ethdev: support API to set max LRO packet size
2019-11-08 16:11 ` Dekel Peled
@ 2019-11-08 16:53 ` Ferruh Yigit
0 siblings, 0 replies; 79+ messages in thread
From: Ferruh Yigit @ 2019-11-08 16:53 UTC (permalink / raw)
To: Dekel Peled, Matan Azrad, john.mcnamara, marko.kovacevic,
nhorman, ajit.khaparde, somnath.kotur, anatoly.burakov,
xuanziyang2, cloud.wangxiaoyun, zhouguoyang, wenzhuo.lu,
konstantin.ananyev, Shahaf Shuler, Slava Ovsiienko, rmody,
shshaikh, maxime.coquelin, tiwei.bie, zhihong.wang, yongwang,
Thomas Monjalon, arybchenko, jingjing.wu, bernard.iremonger
Cc: dev
On 11/8/2019 4:11 PM, Dekel Peled wrote:
> Thanks, PSB.
>
>> -----Original Message-----
>> From: Ferruh Yigit <ferruh.yigit@intel.com>
>> Sent: Friday, November 8, 2019 2:52 PM
>> To: Matan Azrad <matan@mellanox.com>; Dekel Peled
>> <dekelp@mellanox.com>; john.mcnamara@intel.com;
>> marko.kovacevic@intel.com; nhorman@tuxdriver.com;
>> ajit.khaparde@broadcom.com; somnath.kotur@broadcom.com;
>> anatoly.burakov@intel.com; xuanziyang2@huawei.com;
>> cloud.wangxiaoyun@huawei.com; zhouguoyang@huawei.com;
>> wenzhuo.lu@intel.com; konstantin.ananyev@intel.com; Shahaf Shuler
>> <shahafs@mellanox.com>; Slava Ovsiienko <viacheslavo@mellanox.com>;
>> rmody@marvell.com; shshaikh@marvell.com;
>> maxime.coquelin@redhat.com; tiwei.bie@intel.com;
>> zhihong.wang@intel.com; yongwang@vmware.com; Thomas Monjalon
>> <thomas@monjalon.net>; arybchenko@solarflare.com;
>> jingjing.wu@intel.com; bernard.iremonger@intel.com
>> Cc: dev@dpdk.org
>> Subject: Re: [dpdk-dev] [PATCH v4 1/3] ethdev: support API to set max LRO
>> packet size
>>
>> On 11/8/2019 11:56 AM, Matan Azrad wrote:
>>>
>>>
>>> From: Ferruh Yigit
>>>> On 11/8/2019 10:10 AM, Matan Azrad wrote:
>>>>>
>>>>>
>>>>> From: Ferruh Yigit
>>>>>> On 11/8/2019 6:54 AM, Matan Azrad wrote:
>>>>>>> Hi
>>>>>>>
>>>>>>> From: Ferruh Yigit
>>>>>>>> On 11/7/2019 12:35 PM, Dekel Peled wrote:
>>>>>>>>> @@ -1266,6 +1286,18 @@ struct rte_eth_dev *
>>>>>>>>>
>>>>>>>> RTE_ETHER_MAX_LEN;
>>>>>>>>> }
>>>>>>>>>
>>>>>>>>> + /*
>>>>>>>>> + * If LRO is enabled, check that the maximum aggregated
>>>> packet
>>>>>>>>> + * size is supported by the configured device.
>>>>>>>>> + */
>>>>>>>>> + if (dev_conf->rxmode.offloads &
>>>> DEV_RX_OFFLOAD_TCP_LRO) {
>>>>>>>>> + ret = check_lro_pkt_size(
>>>>>>>>> + port_id, dev_conf-
>>>>>>>>> rxmode.max_lro_pkt_size,
>>>>>>>>> + dev_info.max_lro_pkt_size);
>>>>>>>>> + if (ret != 0)
>>>>>>>>> + goto rollback;
>>>>>>>>> + }
>>>>>>>>> +
>>>>>>>>
>>>>>>>> This check forces applications that enable LRO to provide
>>>>>> 'max_lro_pkt_size'
>>>>>>>> config value.
>>>>>>>
>>>>>>> Yes.(we can break an API, we noticed it)
>>>>>>
>>>>>> I am not talking about API/ABI breakage, that part is OK.
>>>>>> With this check, if the application requested LRO offload but not
>>>>>> provided 'max_lro_pkt_size' value, device configuration will fail.
>>>>>>
>>>>> Yes
>>>>>> Can there be a case application is good with whatever the PMD can
>>>>>> support as max?
>>>>> Yes can be - you know, we can do everything we want but it is better
>>>>> to be
>>>> consistent:
>>>>> Due to the fact of Max rx pkt len field is mandatory for JUMBO
>>>>> offload, max
>>>> lro pkt len should be mandatory for LRO offload.
>>>>>
>>>>> So your question is actually why both, non-lro packets and LRO
>>>>> packets max
>>>> size are mandatory...
>>>>>
>>>>>
>>>>> I think it should be important values for net applications management.
>>>>> Also good for mbuf size managements.
>>>>>
>>>>>>>
>>>>>>>> - Why it is mandatory now, how it was working before if it is
>>>>>>>> mandatory value?
>>>>>>>
>>>>>>> It is the same as max_rx_pkt_len which is mandatory for jumbo
>>>>>>> frame
>>>>>> offload.
>>>>>>> So now, when the user configures a LRO offload he must to set max
>>>>>>> lro pkt
>>>>>> len.
>>>>>>> We don't want to confuse the user here with the max rx pkt len
>>>>>> configurations and behaviors, they should be with same logic.
>>>>>>>
>>>>>>> This parameter defines well the LRO behavior.
>>>>>>> Before this, each PMD took its own interpretation to what should
>>>>>>> be the
>>>>>> maximum size for LRO aggregated packets.
>>>>>>> Now, the user must say what is his intension, and the ethdev can
>>>>>>> limit it
>>>>>> according to the device capability.
>>>>>>> By this way, also, the PMD can organize\optimize its data-path more.
>>>>>>> Also, the application can create different mempools for LRO queues
>>>>>>> to
>>>>>> allow bigger packet receiving for LRO traffic.
>>>>>>>
>>>>>>>> - What happens if PMD doesn't provide 'max_lro_pkt_size', so it is
>> '0'?
>>>>>>> Yes, you can see the feature description Dekel added.
>>>>>>> This patch also updates all the PMDs support an LRO for non-0 value.
>>>>>>
>>>>>> Of course I can see the updates Matan, my point is "What happens if
>>>>>> PMD doesn't provide 'max_lro_pkt_size'",
>>>>>> 1) There is no check for it right, so it is acceptable?
>>>>>
>>>>> There is check.
>>>>> If the capability is 0, any non-zero configuration will fail.
>>>>>
>>>>>> 2) Are we making this filed mandatory to provide for PMDs, it is
>>>>>> easy to make new fields mandatory for PMDs but is this really
>> necessary?
>>>>>
>>>>> Yes, for consistence.
>>>>>
>>>>>>>
>>>>>>> as same as max rx pkt len, no?
>>>>>>>
>>>>>>>> - What do you think setting 'max_lro_pkt_size' config value to
>>>>>>>> what PMD provided if application doesn't provide it?
>>>>>>> Same answers as above.
>>>>>>>
>>>>>>
>>>>>> If application doesn't care the value, as it has been till now, and
>>>>>> not provided explicit 'max_lro_pkt_size', why not ethdev level use
>>>>>> the value provided by PMD instead of failing?
>>>>>
>>>>> Again, same question we can ask on max rx pkt len.
>>>>>
>>>>> Looks like the packet size is very important value which should be
>>>>> set by
>>>> the application.
>>>>>
>>>>> Previous applications have no option to configure it, so they
>>>>> haven't
>>>> configure it, (probably cover it somehow) I think it is our miss to
>>>> supply this info.
>>>>>
>>>>> Let's do it in same way as we do max rx pkt len (as this patch main idea).
>>>>> Later, we can change both to other meaning.
>>>>>
>>>>
>>>> I think it is not a good reason to introduce a new mandatory config
>>>> option for application because of 'max_rx_pkt_len' does it.
>>>
>>> It is mandatory only if LRO offload is configured.
>>>
>>>> Will it work, if:
>>>> - If application doesn't provide this value, use the PMD max
>>>
>>> May cause a problem if the mbuf size is not enough for the PMD maximum.
>>
>> OK, this is what I was missing, for this case I was thinking max_rx_pkt_len will
>> be used but you already explained that application may want to use different
>> mempools for LRO queues.
>>
>> For this case shouldn't PMDs take the 'rxmode.max_lro_pkt_size' into
>> account and program the device accordingly (of course in LRO enabled case)
>> ?
>> This part seems missing and should be highlighted to other PMD maintainers.
>>
>
> All relevant PMDs were modified and maintainers are copied on this patch series.
>
What was modified is PMDs announcing a 'dev_info->max_lro_pkt_size' value, which is good.
But PMDs are not using the user-provided 'rxmode.max_lro_pkt_size' value; I assume
they are still using 'max_rx_pkt_len' to configure the device.
+1 to cc'ing maintainers, but not everyone is able to follow all patches, and I am not
sure every maintainer read the patch and recognized they should update their
driver. I think it is better to highlight these things in the cover letter / emails etc.
I hope it is clearer now.
Not for this patch, but generally:
As a process, I previously proposed keeping a todo list under the documentation
for PMDs for this kind of thing, so that each PMD maintainer can go there to
figure out what changes are required because of others' changes, but that
didn't go in.
The other option is that whoever updates the library updates all PMDs fully, but
depending on the feature it can be very hard to update the other PMDs.
Overall these gaps are causing inconsistencies between PMDs and we need a proper
solution.
^ permalink raw reply [flat|nested] 79+ messages in thread
* Re: [dpdk-dev] [PATCH v4 1/3] ethdev: support API to set max LRO packet size
2019-11-08 12:51 ` Ferruh Yigit
2019-11-08 16:11 ` Dekel Peled
@ 2019-11-09 18:20 ` Matan Azrad
2019-11-10 23:40 ` Ananyev, Konstantin
2019-11-11 11:15 ` Ferruh Yigit
1 sibling, 2 replies; 79+ messages in thread
From: Matan Azrad @ 2019-11-09 18:20 UTC (permalink / raw)
To: Ferruh Yigit, Dekel Peled, john.mcnamara, marko.kovacevic,
nhorman, ajit.khaparde, somnath.kotur, anatoly.burakov,
xuanziyang2, cloud.wangxiaoyun, zhouguoyang, wenzhuo.lu,
konstantin.ananyev, Shahaf Shuler, Slava Ovsiienko, rmody,
shshaikh, maxime.coquelin, tiwei.bie, zhihong.wang, yongwang,
Thomas Monjalon, arybchenko, jingjing.wu, bernard.iremonger
Cc: dev
Hi
From: Ferruh Yigit
> On 11/8/2019 11:56 AM, Matan Azrad wrote:
> >
> >
> > From: Ferruh Yigit
> >> On 11/8/2019 10:10 AM, Matan Azrad wrote:
> >>>
> >>>
> >>> From: Ferruh Yigit
> >>>> On 11/8/2019 6:54 AM, Matan Azrad wrote:
> >>>>> Hi
> >>>>>
> >>>>> From: Ferruh Yigit
> >>>>>> On 11/7/2019 12:35 PM, Dekel Peled wrote:
> >>>>>>> @@ -1266,6 +1286,18 @@ struct rte_eth_dev *
> >>>>>>>
> >>>>>> RTE_ETHER_MAX_LEN;
> >>>>>>> }
> >>>>>>>
> >>>>>>> + /*
> >>>>>>> + * If LRO is enabled, check that the maximum aggregated
> >> packet
> >>>>>>> + * size is supported by the configured device.
> >>>>>>> + */
> >>>>>>> + if (dev_conf->rxmode.offloads &
> >> DEV_RX_OFFLOAD_TCP_LRO) {
> >>>>>>> + ret = check_lro_pkt_size(
> >>>>>>> + port_id, dev_conf-
> >>>>>>> rxmode.max_lro_pkt_size,
> >>>>>>> + dev_info.max_lro_pkt_size);
> >>>>>>> + if (ret != 0)
> >>>>>>> + goto rollback;
> >>>>>>> + }
> >>>>>>> +
> >>>>>>
> >>>>>> This check forces applications that enable LRO to provide
> >>>> 'max_lro_pkt_size'
> >>>>>> config value.
> >>>>>
> >>>>> Yes.(we can break an API, we noticed it)
> >>>>
> >>>> I am not talking about API/ABI breakage, that part is OK.
> >>>> With this check, if the application requested LRO offload but not
> >>>> provided 'max_lro_pkt_size' value, device configuration will fail.
> >>>>
> >>> Yes
> >>>> Can there be a case application is good with whatever the PMD can
> >>>> support as max?
> >>> Yes can be - you know, we can do everything we want but it is better
> >>> to be
> >> consistent:
> >>> Due to the fact of Max rx pkt len field is mandatory for JUMBO
> >>> offload, max
> >> lro pkt len should be mandatory for LRO offload.
> >>>
> >>> So your question is actually why both, non-lro packets and LRO
> >>> packets max
> >> size are mandatory...
> >>>
> >>>
> >>> I think it should be important values for net applications management.
> >>> Also good for mbuf size managements.
> >>>
> >>>>>
> >>>>>> - Why it is mandatory now, how it was working before if it is
> >>>>>> mandatory value?
> >>>>>
> >>>>> It is the same as max_rx_pkt_len which is mandatory for jumbo
> >>>>> frame
> >>>> offload.
> >>>>> So now, when the user configures a LRO offload he must to set max
> >>>>> lro pkt
> >>>> len.
> >>>>> We don't want to confuse the user here with the max rx pkt len
> >>>> configurations and behaviors, they should be with same logic.
> >>>>>
> >>>>> This parameter defines well the LRO behavior.
> >>>>> Before this, each PMD took its own interpretation to what should
> >>>>> be the
> >>>> maximum size for LRO aggregated packets.
> >>>>> Now, the user must say what is his intension, and the ethdev can
> >>>>> limit it
> >>>> according to the device capability.
> >>>>> By this way, also, the PMD can organize\optimize its data-path more.
> >>>>> Also, the application can create different mempools for LRO queues
> >>>>> to
> >>>> allow bigger packet receiving for LRO traffic.
> >>>>>
> >>>>>> - What happens if PMD doesn't provide 'max_lro_pkt_size', so it is
> '0'?
> >>>>> Yes, you can see the feature description Dekel added.
> >>>>> This patch also updates all the PMDs support an LRO for non-0 value.
> >>>>
> >>>> Of course I can see the updates Matan, my point is "What happens if
> >>>> PMD doesn't provide 'max_lro_pkt_size'",
> >>>> 1) There is no check for it right, so it is acceptable?
> >>>
> >>> There is check.
> >>> If the capability is 0, any non-zero configuration will fail.
> >>>
> >>>> 2) Are we making this filed mandatory to provide for PMDs, it is
> >>>> easy to make new fields mandatory for PMDs but is this really
> necessary?
> >>>
> >>> Yes, for consistence.
> >>>
> >>>>>
> >>>>> as same as max rx pkt len, no?
> >>>>>
> >>>>>> - What do you think setting 'max_lro_pkt_size' config value to
> >>>>>> what PMD provided if application doesn't provide it?
> >>>>> Same answers as above.
> >>>>>
> >>>>
> >>>> If application doesn't care the value, as it has been till now, and
> >>>> not provided explicit 'max_lro_pkt_size', why not ethdev level use
> >>>> the value provided by PMD instead of failing?
> >>>
> >>> Again, same question we can ask on max rx pkt len.
> >>>
> >>> Looks like the packet size is very important value which should be
> >>> set by
> >> the application.
> >>>
> >>> Previous applications have no option to configure it, so they
> >>> haven't
> >> configure it, (probably cover it somehow) I think it is our miss to
> >> supply this info.
> >>>
> >>> Let's do it in same way as we do max rx pkt len (as this patch main idea).
> >>> Later, we can change both to other meaning.
> >>>
> >>
> >> I think it is not a good reason to introduce a new mandatory config
> >> option for application because of 'max_rx_pkt_len' does it.
> >
> > It is mandatory only if LRO offload is configured.
> >
> >> Will it work, if:
> >> - If application doesn't provide this value, use the PMD max
> >
> > May cause a problem if the mbuf size is not enough for the PMD maximum.
>
> OK, this is what I was missing, for this case I was thinking max_rx_pkt_len will
> be used but you already explained that application may want to use different
> mempools for LRO queues.
>
So, do you agree with the idea?
> For this case shouldn't PMDs take the 'rxmode.max_lro_pkt_size' into
> account and program the device accordingly (of course in LRO enabled case)
> ?
> This part seems missing and should be highlighted to other PMD maintainers.
Yes, you are right.
PMDs must limit the LRO aggregated packet size according to the new field,
and it is probably very hard for the patch introducer to understand how to do it for each PMD.
I think each new configuration requires the other maintainers/developers to adjust their own PMD code to the new configuration, and it should be done in a limited time.
My suggestion here:
1. Reserve the info field and the configuration field for rc2 (if it is critical not to break the ABI in rc3).
2. Merge the ethdev patch at the start of rc3.
3. Request each relevant PMD maintainer to adjust their PMD to the new configuration by the end of rc3.
Note: this should be a small change, and only for ~5 PMDs:
a. Introduce the info field according to the device ability.
b. For each LRO queue:
Use the LRO max size configuration instead of the current max rx pkt len configuration (looks like a small condition; see the sketch below).
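A sketch of what point (b) might look like in a PMD's Rx setup (generic
variable names, not taken from any specific driver):

	/* In the PMD Rx queue setup, pick the limit per queue type. */
	uint32_t max_pkt_len;

	if (rxmode->offloads & DEV_RX_OFFLOAD_TCP_LRO)
		max_pkt_len = rxmode->max_lro_pkt_size; /* LRO queues */
	else
		max_pkt_len = rxmode->max_rx_pkt_len;   /* non-LRO queues */

	/* ... then program the HW aggregation/buffer limits with max_pkt_len. */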
What do you think?
^ permalink raw reply [flat|nested] 79+ messages in thread
* Re: [dpdk-dev] [PATCH v4 1/3] ethdev: support API to set max LRO packet size
2019-11-09 18:20 ` Matan Azrad
@ 2019-11-10 23:40 ` Ananyev, Konstantin
2019-11-11 8:01 ` Matan Azrad
2019-11-11 11:15 ` Ferruh Yigit
1 sibling, 1 reply; 79+ messages in thread
From: Ananyev, Konstantin @ 2019-11-10 23:40 UTC (permalink / raw)
To: Matan Azrad, Yigit, Ferruh, Dekel Peled, Mcnamara, John,
Kovacevic, Marko, nhorman, ajit.khaparde, somnath.kotur, Burakov,
Anatoly, xuanziyang2, cloud.wangxiaoyun, zhouguoyang, Lu,
Wenzhuo, Shahaf Shuler, Slava Ovsiienko, rmody, shshaikh,
maxime.coquelin, Bie, Tiwei, Wang, Zhihong, yongwang,
Thomas Monjalon, arybchenko, Wu, Jingjing, Iremonger, Bernard
Cc: dev
>
> From: Ferruh Yigit
> > On 11/8/2019 11:56 AM, Matan Azrad wrote:
> > >
> > >
> > > From: Ferruh Yigit
> > >> On 11/8/2019 10:10 AM, Matan Azrad wrote:
> > >>>
> > >>>
> > >>> From: Ferruh Yigit
> > >>>> On 11/8/2019 6:54 AM, Matan Azrad wrote:
> > >>>>> Hi
> > >>>>>
> > >>>>> From: Ferruh Yigit
> > >>>>>> On 11/7/2019 12:35 PM, Dekel Peled wrote:
> > >>>>>>> @@ -1266,6 +1286,18 @@ struct rte_eth_dev *
> > >>>>>>>
> > >>>>>> RTE_ETHER_MAX_LEN;
> > >>>>>>> }
> > >>>>>>>
> > >>>>>>> + /*
> > >>>>>>> + * If LRO is enabled, check that the maximum aggregated
> > >> packet
> > >>>>>>> + * size is supported by the configured device.
> > >>>>>>> + */
> > >>>>>>> + if (dev_conf->rxmode.offloads &
> > >> DEV_RX_OFFLOAD_TCP_LRO) {
> > >>>>>>> + ret = check_lro_pkt_size(
> > >>>>>>> + port_id, dev_conf-
> > >>>>>>> rxmode.max_lro_pkt_size,
> > >>>>>>> + dev_info.max_lro_pkt_size);
> > >>>>>>> + if (ret != 0)
> > >>>>>>> + goto rollback;
> > >>>>>>> + }
> > >>>>>>> +
> > >>>>>>
> > >>>>>> This check forces applications that enable LRO to provide
> > >>>> 'max_lro_pkt_size'
> > >>>>>> config value.
> > >>>>>
> > >>>>> Yes.(we can break an API, we noticed it)
> > >>>>
> > >>>> I am not talking about API/ABI breakage, that part is OK.
> > >>>> With this check, if the application requested LRO offload but not
> > >>>> provided 'max_lro_pkt_size' value, device configuration will fail.
> > >>>>
> > >>> Yes
> > >>>> Can there be a case application is good with whatever the PMD can
> > >>>> support as max?
> > >>> Yes can be - you know, we can do everything we want but it is better
> > >>> to be
> > >> consistent:
> > >>> Due to the fact of Max rx pkt len field is mandatory for JUMBO
> > >>> offload, max
> > >> lro pkt len should be mandatory for LRO offload.
> > >>>
> > >>> So your question is actually why both, non-lro packets and LRO
> > >>> packets max
> > >> size are mandatory...
> > >>>
> > >>>
> > >>> I think it should be important values for net applications management.
> > >>> Also good for mbuf size managements.
> > >>>
> > >>>>>
> > >>>>>> - Why it is mandatory now, how it was working before if it is
> > >>>>>> mandatory value?
> > >>>>>
> > >>>>> It is the same as max_rx_pkt_len which is mandatory for jumbo
> > >>>>> frame
> > >>>> offload.
> > >>>>> So now, when the user configures a LRO offload he must to set max
> > >>>>> lro pkt
> > >>>> len.
> > >>>>> We don't want to confuse the user here with the max rx pkt len
> > >>>> configurations and behaviors, they should be with same logic.
> > >>>>>
> > >>>>> This parameter defines well the LRO behavior.
> > >>>>> Before this, each PMD took its own interpretation to what should
> > >>>>> be the
> > >>>> maximum size for LRO aggregated packets.
> > >>>>> Now, the user must say what is his intension, and the ethdev can
> > >>>>> limit it
> > >>>> according to the device capability.
> > >>>>> By this way, also, the PMD can organize\optimize its data-path more.
> > >>>>> Also, the application can create different mempools for LRO queues
> > >>>>> to
> > >>>> allow bigger packet receiving for LRO traffic.
> > >>>>>
> > >>>>>> - What happens if PMD doesn't provide 'max_lro_pkt_size', so it is
> > '0'?
> > >>>>> Yes, you can see the feature description Dekel added.
> > >>>>> This patch also updates all the PMDs support an LRO for non-0 value.
> > >>>>
> > >>>> Of course I can see the updates Matan, my point is "What happens if
> > >>>> PMD doesn't provide 'max_lro_pkt_size'",
> > >>>> 1) There is no check for it right, so it is acceptable?
> > >>>
> > >>> There is check.
> > >>> If the capability is 0, any non-zero configuration will fail.
> > >>>
> > >>>> 2) Are we making this filed mandatory to provide for PMDs, it is
> > >>>> easy to make new fields mandatory for PMDs but is this really
> > necessary?
> > >>>
> > >>> Yes, for consistence.
> > >>>
> > >>>>>
> > >>>>> as same as max rx pkt len, no?
> > >>>>>
> > >>>>>> - What do you think setting 'max_lro_pkt_size' config value to
> > >>>>>> what PMD provided if application doesn't provide it?
> > >>>>> Same answers as above.
> > >>>>>
> > >>>>
> > >>>> If application doesn't care the value, as it has been till now, and
> > >>>> not provided explicit 'max_lro_pkt_size', why not ethdev level use
> > >>>> the value provided by PMD instead of failing?
> > >>>
> > >>> Again, same question we can ask on max rx pkt len.
> > >>>
> > >>> Looks like the packet size is very important value which should be
> > >>> set by
> > >> the application.
> > >>>
> > >>> Previous applications have no option to configure it, so they
> > >>> haven't
> > >> configure it, (probably cover it somehow) I think it is our miss to
> > >> supply this info.
> > >>>
> > >>> Let's do it in same way as we do max rx pkt len (as this patch main idea).
> > >>> Later, we can change both to other meaning.
> > >>>
> > >>
> > >> I think it is not a good reason to introduce a new mandatory config
> > >> option for application because of 'max_rx_pkt_len' does it.
> > >
> > > It is mandatory only if LRO offload is configured.
> > >
> > >> Will it work, if:
> > >> - If application doesn't provide this value, use the PMD max
> > >
> > > May cause a problem if the mbuf size is not enough for the PMD maximum.
> >
> > OK, this is what I was missing, for this case I was thinking max_rx_pkt_len will
> > be used but you already explained that application may want to use different
> > mempools for LRO queues.
> >
> So , are you agree with the idea?
>
> > For this case shouldn't PMDs take the 'rxmode.max_lro_pkt_size' into
> > account and program the device accordingly (of course in LRO enabled case)
> > ?
> > This part seems missing and should be highlighted to other PMD maintainers.
>
>
> Yes, you are right.
> PMDs must limit the LRO aggregated packet according to the new field,
> And it probably very hard for the patch introducer to understand how to do it for each PMD.
>
> I think each new configuration requires other maintainers\developers to adjust their own PMD code to the new configuration and it should
> be done in limited time.
>
> My suggestion here:
> 1. To reserve the info field and the configuration field for rc2.(if it is critical not to break ABI for rc3)
> 2. To merge the ethdev patch in the start of rc3.
> 3. Request each relevant PMD to adjust its PMD to the new configuration for the end of rc3.
> Note: this should be small change and only for ~5 PMDs:
> a. Introduce the info field according to the device ability.
> b. For each LRO queue:
> Use the LRO max size configuration instead of the current max rx pkt len configuration(looks like small condition).
That definitely looks like a significant behavior change for existing apps and PMDs,
and I wonder what for?
Why can't we keep the max_rx_pkt_len semantics as they are right now,
and just add an optional ability to limit the max size of LRO aggregations?
>
> What do you think?
>
>
>
^ permalink raw reply [flat|nested] 79+ messages in thread
* Re: [dpdk-dev] [PATCH v4 1/3] ethdev: support API to set max LRO packet size
2019-11-10 23:40 ` Ananyev, Konstantin
@ 2019-11-11 8:01 ` Matan Azrad
2019-11-12 18:31 ` Ananyev, Konstantin
0 siblings, 1 reply; 79+ messages in thread
From: Matan Azrad @ 2019-11-11 8:01 UTC (permalink / raw)
To: Ananyev, Konstantin, Yigit, Ferruh, Dekel Peled, Mcnamara, John,
Kovacevic, Marko, nhorman, ajit.khaparde, somnath.kotur, Burakov,
Anatoly, xuanziyang2, cloud.wangxiaoyun, zhouguoyang, Lu,
Wenzhuo, Shahaf Shuler, Slava Ovsiienko, rmody, shshaikh,
maxime.coquelin, Bie, Tiwei, Wang, Zhihong, yongwang,
Thomas Monjalon, arybchenko, Wu, Jingjing, Iremonger, Bernard
Cc: dev
From: Ananyev, Konstantin
> >
> > From: Ferruh Yigit
> > > On 11/8/2019 11:56 AM, Matan Azrad wrote:
> > > >
> > > >
> > > > From: Ferruh Yigit
> > > >> On 11/8/2019 10:10 AM, Matan Azrad wrote:
> > > >>>
> > > >>>
> > > >>> From: Ferruh Yigit
> > > >>>> On 11/8/2019 6:54 AM, Matan Azrad wrote:
> > > >>>>> Hi
> > > >>>>>
> > > >>>>> From: Ferruh Yigit
> > > >>>>>> On 11/7/2019 12:35 PM, Dekel Peled wrote:
> > > >>>>>>> @@ -1266,6 +1286,18 @@ struct rte_eth_dev *
> > > >>>>>>>
> > > >>>>>> RTE_ETHER_MAX_LEN;
> > > >>>>>>> }
> > > >>>>>>>
> > > >>>>>>> + /*
> > > >>>>>>> + * If LRO is enabled, check that the maximum aggregated
> > > >> packet
> > > >>>>>>> + * size is supported by the configured device.
> > > >>>>>>> + */
> > > >>>>>>> + if (dev_conf->rxmode.offloads &
> > > >> DEV_RX_OFFLOAD_TCP_LRO) {
> > > >>>>>>> + ret = check_lro_pkt_size(
> > > >>>>>>> + port_id, dev_conf-
> > > >>>>>>> rxmode.max_lro_pkt_size,
> > > >>>>>>> + dev_info.max_lro_pkt_size);
> > > >>>>>>> + if (ret != 0)
> > > >>>>>>> + goto rollback;
> > > >>>>>>> + }
> > > >>>>>>> +
> > > >>>>>>
> > > >>>>>> This check forces applications that enable LRO to provide
> > > >>>> 'max_lro_pkt_size'
> > > >>>>>> config value.
> > > >>>>>
> > > >>>>> Yes.(we can break an API, we noticed it)
> > > >>>>
> > > >>>> I am not talking about API/ABI breakage, that part is OK.
> > > >>>> With this check, if the application requested LRO offload but
> > > >>>> not provided 'max_lro_pkt_size' value, device configuration will fail.
> > > >>>>
> > > >>> Yes
> > > >>>> Can there be a case application is good with whatever the PMD
> > > >>>> can support as max?
> > > >>> Yes can be - you know, we can do everything we want but it is
> > > >>> better to be
> > > >> consistent:
> > > >>> Due to the fact of Max rx pkt len field is mandatory for JUMBO
> > > >>> offload, max
> > > >> lro pkt len should be mandatory for LRO offload.
> > > >>>
> > > >>> So your question is actually why both, non-lro packets and LRO
> > > >>> packets max
> > > >> size are mandatory...
> > > >>>
> > > >>>
> > > >>> I think it should be important values for net applications
> management.
> > > >>> Also good for mbuf size managements.
> > > >>>
> > > >>>>>
> > > >>>>>> - Why it is mandatory now, how it was working before if it is
> > > >>>>>> mandatory value?
> > > >>>>>
> > > >>>>> It is the same as max_rx_pkt_len which is mandatory for jumbo
> > > >>>>> frame
> > > >>>> offload.
> > > >>>>> So now, when the user configures a LRO offload he must to set
> > > >>>>> max lro pkt
> > > >>>> len.
> > > >>>>> We don't want to confuse the user here with the max rx pkt len
> > > >>>> configurations and behaviors, they should be with same logic.
> > > >>>>>
> > > >>>>> This parameter defines well the LRO behavior.
> > > >>>>> Before this, each PMD took its own interpretation to what
> > > >>>>> should be the
> > > >>>> maximum size for LRO aggregated packets.
> > > >>>>> Now, the user must say what is his intension, and the ethdev
> > > >>>>> can limit it
> > > >>>> according to the device capability.
> > > >>>>> By this way, also, the PMD can organize\optimize its data-path
> more.
> > > >>>>> Also, the application can create different mempools for LRO
> > > >>>>> queues to
> > > >>>> allow bigger packet receiving for LRO traffic.
> > > >>>>>
> > > >>>>>> - What happens if PMD doesn't provide 'max_lro_pkt_size', so
> > > >>>>>> it is
> > > '0'?
> > > >>>>> Yes, you can see the feature description Dekel added.
> > > >>>>> This patch also updates all the PMDs support an LRO for non-0
> value.
> > > >>>>
> > > >>>> Of course I can see the updates Matan, my point is "What
> > > >>>> happens if PMD doesn't provide 'max_lro_pkt_size'",
> > > >>>> 1) There is no check for it right, so it is acceptable?
> > > >>>
> > > >>> There is check.
> > > >>> If the capability is 0, any non-zero configuration will fail.
> > > >>>
> > > >>>> 2) Are we making this filed mandatory to provide for PMDs, it
> > > >>>> is easy to make new fields mandatory for PMDs but is this
> > > >>>> really
> > > necessary?
> > > >>>
> > > >>> Yes, for consistence.
> > > >>>
> > > >>>>>
> > > >>>>> as same as max rx pkt len, no?
> > > >>>>>
> > > >>>>>> - What do you think setting 'max_lro_pkt_size' config value
> > > >>>>>> to what PMD provided if application doesn't provide it?
> > > >>>>> Same answers as above.
> > > >>>>>
> > > >>>>
> > > >>>> If application doesn't care the value, as it has been till now,
> > > >>>> and not provided explicit 'max_lro_pkt_size', why not ethdev
> > > >>>> level use the value provided by PMD instead of failing?
> > > >>>
> > > >>> Again, same question we can ask on max rx pkt len.
> > > >>>
> > > >>> Looks like the packet size is very important value which should
> > > >>> be set by
> > > >> the application.
> > > >>>
> > > >>> Previous applications have no option to configure it, so they
> > > >>> haven't
> > > >> configure it, (probably cover it somehow) I think it is our miss
> > > >> to supply this info.
> > > >>>
> > > >>> Let's do it in same way as we do max rx pkt len (as this patch main
> idea).
> > > >>> Later, we can change both to other meaning.
> > > >>>
> > > >>
> > > >> I think it is not a good reason to introduce a new mandatory
> > > >> config option for application because of 'max_rx_pkt_len' does it.
> > > >
> > > > It is mandatory only if LRO offload is configured.
> > > >
> > > >> Will it work, if:
> > > >> - If application doesn't provide this value, use the PMD max
> > > >
> > > > May cause a problem if the mbuf size is not enough for the PMD
> maximum.
> > >
> > > OK, this is what I was missing, for this case I was thinking
> > > max_rx_pkt_len will be used but you already explained that
> > > application may want to use different mempools for LRO queues.
> > >
> > So , are you agree with the idea?
> >
> > > For this case shouldn't PMDs take the 'rxmode.max_lro_pkt_size' into
> > > account and program the device accordingly (of course in LRO enabled
> > > case) ?
> > > This part seems missing and should be highlighted to other PMD
> maintainers.
> >
> >
> > Yes, you are right.
> > PMDs must limit the LRO aggregated packet according to the new field,
> > And it probably very hard for the patch introducer to understand how to do
> it for each PMD.
> >
> > I think each new configuration requires other maintainers\developers
> > to adjust their own PMD code to the new configuration and it should be
> done in limited time.
> >
> > My suggestion here:
> > 1. To reserve the info field and the configuration field for rc2.(if
> > it is critical not to break ABI for rc3) 2. To merge the ethdev patch in the
> start of rc3.
> > 3. Request each relevant PMD to adjust its PMD to the new configuration
> for the end of rc3.
> > Note: this should be small change and only for ~5 PMDs:
> > a. Introduce the info field according to the device ability.
> > b. For each LRO queue:
> > Use the LRO max size configuration instead of the
> current max rx pkt len configuration(looks like small condition).
>
> That's definitely looks like a significant behavior change for existing apps and
> PMDs, and I wonder what for?
There was a gap in the configuration:
It doesn't make sense to limit non-LRO queues to the same packet length as LRO queues:
Naturally, LRO packets are significantly bigger (because of the HW aggregation); hence,
the user may use bigger mbufs for the LRO packets. So potentially, it is better to separate the mempools: one for the LRO queues with big mbufs and a second for the non-LRO queues with smaller mbufs (to optimize the memory usage).
Since the user may want tail-room in the LRO mbuf, he may limit the LRO packet size to a smaller number than the mbuf size (minus HEADROOM), and for this reason, just as with the regular field (max_rx_pkt_len), a new field should be set for LRO queues.
> Why we can't keep max_rx_pkt_len semantics as it is right now, and just add
> an optional ability to limit max size of LRO aggregations?
What is the semantics of max_rx_pkt_len regarding LRO packets? It is not clear from the documentation.
So this patch defines it well:
Non-LRO queues should be limited to max_rx_pkt_len.
LRO queues should be limited to max_lro_pkt_len.
The way the RX packet length is configured should be consistent:
max_rx_pkt_len is mandatory for the JUMBO offload => max_lro_pkt_len is mandatory for the LRO offload.
Current applications using LRO just need to configure the new field the same as the current max_rx_pkt_len if they want to keep the same behavior - really not a big change.
If an application wants to improve its memory usage as I said above, the new field allows it as well (see the sketch below).
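The tail-room point can be made concrete with a small calculation (the
tailroom amount is an arbitrary example for illustration):

	/* Keep some tailroom in each LRO mbuf for later use by the app. */
	uint16_t data_room = rte_pktmbuf_data_room_size(pool_lro);
	uint32_t wanted_tailroom = 128; /* hypothetical application need */

	conf.rxmode.max_lro_pkt_size =
		data_room - RTE_PKTMBUF_HEADROOM - wanted_tailroom;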
> > What do you think?
> >
> >
> >
^ permalink raw reply [flat|nested] 79+ messages in thread
* Re: [dpdk-dev] [PATCH v4 1/3] ethdev: support API to set max LRO packet size
2019-11-11 8:01 ` Matan Azrad
@ 2019-11-12 18:31 ` Ananyev, Konstantin
0 siblings, 0 replies; 79+ messages in thread
From: Ananyev, Konstantin @ 2019-11-12 18:31 UTC (permalink / raw)
To: Matan Azrad, Yigit, Ferruh, Dekel Peled, Mcnamara, John,
Kovacevic, Marko, nhorman, ajit.khaparde, somnath.kotur, Burakov,
Anatoly, xuanziyang2, cloud.wangxiaoyun, zhouguoyang, Lu,
Wenzhuo, Shahaf Shuler, Slava Ovsiienko, rmody, shshaikh,
maxime.coquelin, Bie, Tiwei, Wang, Zhihong, yongwang,
Thomas Monjalon, arybchenko, Wu, Jingjing, Iremonger, Bernard
Cc: dev
> -----Original Message-----
> From: Matan Azrad <matan@mellanox.com>
> Sent: Monday, November 11, 2019 8:01 AM
> To: Ananyev, Konstantin <konstantin.ananyev@intel.com>; Yigit, Ferruh <ferruh.yigit@intel.com>; Dekel Peled <dekelp@mellanox.com>;
> Mcnamara, John <john.mcnamara@intel.com>; Kovacevic, Marko <marko.kovacevic@intel.com>; nhorman@tuxdriver.com;
> ajit.khaparde@broadcom.com; somnath.kotur@broadcom.com; Burakov, Anatoly <anatoly.burakov@intel.com>;
> xuanziyang2@huawei.com; cloud.wangxiaoyun@huawei.com; zhouguoyang@huawei.com; Lu, Wenzhuo <wenzhuo.lu@intel.com>; Shahaf
> Shuler <shahafs@mellanox.com>; Slava Ovsiienko <viacheslavo@mellanox.com>; rmody@marvell.com; shshaikh@marvell.com;
> maxime.coquelin@redhat.com; Bie, Tiwei <tiwei.bie@intel.com>; Wang, Zhihong <zhihong.wang@intel.com>; yongwang@vmware.com;
> Thomas Monjalon <thomas@monjalon.net>; arybchenko@solarflare.com; Wu, Jingjing <jingjing.wu@intel.com>; Iremonger, Bernard
> <bernard.iremonger@intel.com>
> Cc: dev@dpdk.org
> Subject: RE: [dpdk-dev] [PATCH v4 1/3] ethdev: support API to set max LRO packet size
>
>
>
> From: Ananyev, Konstantin
> > >
> > > From: Ferruh Yigit
> > > > On 11/8/2019 11:56 AM, Matan Azrad wrote:
> > > > >
> > > > >
> > > > > From: Ferruh Yigit
> > > > >> On 11/8/2019 10:10 AM, Matan Azrad wrote:
> > > > >>>
> > > > >>>
> > > > >>> From: Ferruh Yigit
> > > > >>>> On 11/8/2019 6:54 AM, Matan Azrad wrote:
> > > > >>>>> Hi
> > > > >>>>>
> > > > >>>>> From: Ferruh Yigit
> > > > >>>>>> On 11/7/2019 12:35 PM, Dekel Peled wrote:
> > > > >>>>>>> @@ -1266,6 +1286,18 @@ struct rte_eth_dev *
> > > > >>>>>>>
> > > > >>>>>> RTE_ETHER_MAX_LEN;
> > > > >>>>>>> }
> > > > >>>>>>>
> > > > >>>>>>> + /*
> > > > >>>>>>> + * If LRO is enabled, check that the maximum aggregated
> > > > >> packet
> > > > >>>>>>> + * size is supported by the configured device.
> > > > >>>>>>> + */
> > > > >>>>>>> + if (dev_conf->rxmode.offloads &
> > > > >> DEV_RX_OFFLOAD_TCP_LRO) {
> > > > >>>>>>> + ret = check_lro_pkt_size(
> > > > >>>>>>> + port_id, dev_conf-
> > > > >>>>>>> rxmode.max_lro_pkt_size,
> > > > >>>>>>> + dev_info.max_lro_pkt_size);
> > > > >>>>>>> + if (ret != 0)
> > > > >>>>>>> + goto rollback;
> > > > >>>>>>> + }
> > > > >>>>>>> +
> > > > >>>>>>
> > > > >>>>>> This check forces applications that enable LRO to provide
> > > > >>>> 'max_lro_pkt_size'
> > > > >>>>>> config value.
> > > > >>>>>
> > > > >>>>> Yes.(we can break an API, we noticed it)
> > > > >>>>
> > > > >>>> I am not talking about API/ABI breakage, that part is OK.
> > > > >>>> With this check, if the application requested LRO offload but
> > > > >>>> not provided 'max_lro_pkt_size' value, device configuration will fail.
> > > > >>>>
> > > > >>> Yes
> > > > >>>> Can there be a case application is good with whatever the PMD
> > > > >>>> can support as max?
> > > > >>> Yes can be - you know, we can do everything we want but it is
> > > > >>> better to be
> > > > >> consistent:
> > > > >>> Due to the fact of Max rx pkt len field is mandatory for JUMBO
> > > > >>> offload, max
> > > > >> lro pkt len should be mandatory for LRO offload.
> > > > >>>
> > > > >>> So your question is actually why both, non-lro packets and LRO
> > > > >>> packets max
> > > > >> size are mandatory...
> > > > >>>
> > > > >>>
> > > > >>> I think it should be important values for net applications
> > management.
> > > > >>> Also good for mbuf size managements.
> > > > >>>
> > > > >>>>>
> > > > >>>>>> - Why it is mandatory now, how it was working before if it is
> > > > >>>>>> mandatory value?
> > > > >>>>>
> > > > >>>>> It is the same as max_rx_pkt_len which is mandatory for jumbo
> > > > >>>>> frame
> > > > >>>> offload.
> > > > >>>>> So now, when the user configures a LRO offload he must to set
> > > > >>>>> max lro pkt
> > > > >>>> len.
> > > > >>>>> We don't want to confuse the user here with the max rx pkt len
> > > > >>>> configurations and behaviors, they should be with same logic.
> > > > >>>>>
> > > > >>>>> This parameter defines well the LRO behavior.
> > > > >>>>> Before this, each PMD took its own interpretation to what
> > > > >>>>> should be the
> > > > >>>> maximum size for LRO aggregated packets.
> > > > >>>>> Now, the user must say what is his intension, and the ethdev
> > > > >>>>> can limit it
> > > > >>>> according to the device capability.
> > > > >>>>> By this way, also, the PMD can organize\optimize its data-path
> > more.
> > > > >>>>> Also, the application can create different mempools for LRO
> > > > >>>>> queues to
> > > > >>>> allow bigger packet receiving for LRO traffic.
> > > > >>>>>
> > > > >>>>>> - What happens if PMD doesn't provide 'max_lro_pkt_size', so
> > > > >>>>>> it is
> > > > '0'?
> > > > >>>>> Yes, you can see the feature description Dekel added.
> > > > >>>>> This patch also updates all the PMDs support an LRO for non-0
> > value.
> > > > >>>>
> > > > >>>> Of course I can see the updates Matan, my point is "What
> > > > >>>> happens if PMD doesn't provide 'max_lro_pkt_size'",
> > > > >>>> 1) There is no check for it right, so it is acceptable?
> > > > >>>
> > > > >>> There is check.
> > > > >>> If the capability is 0, any non-zero configuration will fail.
> > > > >>>
> > > > >>>> 2) Are we making this filed mandatory to provide for PMDs, it
> > > > >>>> is easy to make new fields mandatory for PMDs but is this
> > > > >>>> really
> > > > necessary?
> > > > >>>
> > > > >>> Yes, for consistence.
> > > > >>>
> > > > >>>>>
> > > > >>>>> as same as max rx pkt len, no?
> > > > >>>>>
> > > > >>>>>> - What do you think setting 'max_lro_pkt_size' config value
> > > > >>>>>> to what PMD provided if application doesn't provide it?
> > > > >>>>> Same answers as above.
> > > > >>>>>
> > > > >>>>
> > > > >>>> If application doesn't care the value, as it has been till now,
> > > > >>>> and not provided explicit 'max_lro_pkt_size', why not ethdev
> > > > >>>> level use the value provided by PMD instead of failing?
> > > > >>>
> > > > >>> Again, same question we can ask on max rx pkt len.
> > > > >>>
> > > > >>> Looks like the packet size is very important value which should
> > > > >>> be set by
> > > > >> the application.
> > > > >>>
> > > > >>> Previous applications have no option to configure it, so they
> > > > >>> haven't
> > > > >> configure it, (probably cover it somehow) I think it is our miss
> > > > >> to supply this info.
> > > > >>>
> > > > >>> Let's do it in same way as we do max rx pkt len (as this patch main
> > idea).
> > > > >>> Later, we can change both to other meaning.
> > > > >>>
> > > > >>
> > > > >> I think it is not a good reason to introduce a new mandatory
> > > > >> config option for application because of 'max_rx_pkt_len' does it.
> > > > >
> > > > > It is mandatory only if LRO offload is configured.
> > > > >
> > > > >> Will it work, if:
> > > > >> - If application doesn't provide this value, use the PMD max
> > > > >
> > > > > May cause a problem if the mbuf size is not enough for the PMD
> > maximum.
> > > >
> > > > OK, this is what I was missing, for this case I was thinking
> > > > max_rx_pkt_len will be used but you already explained that
> > > > application may want to use different mempools for LRO queues.
> > > >
> > > So , are you agree with the idea?
> > >
> > > > For this case shouldn't PMDs take the 'rxmode.max_lro_pkt_size' into
> > > > account and program the device accordingly (of course in LRO enabled
> > > > case) ?
> > > > This part seems missing and should be highlighted to other PMD
> > maintainers.
> > >
> > >
> > > Yes, you are right.
> > > PMDs must limit the LRO aggregated packet according to the new field,
> > > And it probably very hard for the patch introducer to understand how to do
> > it for each PMD.
> > >
> > > I think each new configuration requires other maintainers\developers
> > > to adjust their own PMD code to the new configuration and it should be
> > done in limited time.
> > >
> > > My suggestion here:
> > > 1. To reserve the info field and the configuration field for rc2.(if
> > > it is critical not to break ABI for rc3) 2. To merge the ethdev patch in the
> > start of rc3.
> > > 3. Request each relevant PMD to adjust its PMD to the new configuration
> > for the end of rc3.
> > > Note: this should be small change and only for ~5 PMDs:
> > > a. Introduce the info field according to the device ability.
> > > b. For each LRO queue:
> > > Use the LRO max size configuration instead of the
> > current max rx pkt len configuration(looks like small condition).
> >
> > That definitely looks like a significant behavior change for existing apps and
> > PMDs, and I wonder what for?
>
> There was a gap in the configuration:
>
> It doesn't make sense to limit non-LRO queues to the same packet length as LRO queues:
> Naturally, LRO packets are significantly bigger (because of the HW aggregation); hence,
> the user may use bigger mbufs for the LRO packets. So it is potentially better to use separate mempools: one for the LRO queues with
> big mbufs and a second for the non-LRO queues with smaller mbufs (to optimize the memory usage).
> Since the user may want tail-room in the LRO mbuf, it may limit the LRO packet size to a number smaller than the mbuf size (minus
> headroom), and for this reason, just as the regular field (max_rx_pkt_len) is used for non-LRO queues, a new field should be set for LRO queues.
>
> > Why can't we keep max_rx_pkt_len semantics as they are right now, and just add
> > an optional ability to limit the max size of LRO aggregations?
>
> What is the semantic of max_rx_pkt_len with regard to LRO packets? It is not clear from the documentation.
That's probably where the misunderstanding starts.
For me:
max_rx_pkt_len is the maximum size of the Ethernet packet the NIC
will accept (no matter whether LRO is enabled or not).
Now, if LRO is enabled, the NIC can accumulate multiple 'physical' packets
into one big 'virtual' one.
So when LRO is enabled, max_lro_size limits how big these
accumulated 'virtual' packets can be,
while max_rx_pkt_len still limits the max Ethernet packet size the NIC will accept.
In what you suggest, it is not clear to me what the max_lro_size semantics would be.
Would it be the maximum size of the Ethernet packet the NIC will accept (equivalent of max_rx_pkt_len),
or would it be the accumulated 'virtual' packet size limit?
Or might it be something else?
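To make that distinction concrete, a small sketch with example numbers (assuming a struct rte_eth_conf port_conf; max_lro_pkt_size is the rxmode field proposed by this patch, and the values are arbitrary):

        /* Per-packet limit: the NIC rejects any single Ethernet frame
         * larger than this, whether LRO is enabled or not. */
        port_conf.rxmode.max_rx_pkt_len = 1518;

        /* Per-aggregation limit: with LRO enabled, the NIC may merge many
         * such frames, but the 'virtual' packet stops growing at 9 KB. */
        port_conf.rxmode.offloads |= DEV_RX_OFFLOAD_TCP_LRO;
        port_conf.rxmode.max_lro_pkt_size = 9 * 1024;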
>
> So this patch defines it well:
> Non-LRO queues should be limited to max_rx_pkt_len.
> LRO queues should be limited to max_lro_pkt_len.
>
> The ways of configuring the Rx packet length should be consistent:
> max_rx_pkt_len is mandatory for JUMBO offload => max_lro_pkt_len is mandatory for LRO offload.
>
>
> Current applications using LRO just need to configure the new field to the same value as max_rx_pkt_len if they want to keep the same
> behavior - really not a big change.
> If applications want to improve their memory usage as described above, the new field allows it as well.
>
> > > What do you think?
* Re: [dpdk-dev] [PATCH v4 1/3] ethdev: support API to set max LRO packet size
2019-11-09 18:20 ` Matan Azrad
2019-11-10 23:40 ` Ananyev, Konstantin
@ 2019-11-11 11:15 ` Ferruh Yigit
2019-11-11 11:33 ` Matan Azrad
1 sibling, 1 reply; 79+ messages in thread
From: Ferruh Yigit @ 2019-11-11 11:15 UTC (permalink / raw)
To: Matan Azrad, Dekel Peled, john.mcnamara, marko.kovacevic,
nhorman, ajit.khaparde, somnath.kotur, anatoly.burakov,
xuanziyang2, cloud.wangxiaoyun, zhouguoyang, wenzhuo.lu,
konstantin.ananyev, Shahaf Shuler, Slava Ovsiienko, rmody,
shshaikh, maxime.coquelin, tiwei.bie, zhihong.wang, yongwang,
Thomas Monjalon, arybchenko, jingjing.wu, bernard.iremonger
Cc: dev
On 11/9/2019 6:20 PM, Matan Azrad wrote:
> Hi
>
> From: Ferruh Yigit
>> On 11/8/2019 11:56 AM, Matan Azrad wrote:
>>>
>>>
>>> From: Ferruh Yigit
>>>> On 11/8/2019 10:10 AM, Matan Azrad wrote:
>>>>>
>>>>>
>>>>> From: Ferruh Yigit
>>>>>> On 11/8/2019 6:54 AM, Matan Azrad wrote:
>>>>>>> Hi
>>>>>>>
>>>>>>> From: Ferruh Yigit
>>>>>>>> On 11/7/2019 12:35 PM, Dekel Peled wrote:
>>>>>>>>> @@ -1266,6 +1286,18 @@ struct rte_eth_dev *
>>>>>>>>>
>>>>>>>> RTE_ETHER_MAX_LEN;
>>>>>>>>> }
>>>>>>>>>
>>>>>>>>> + /*
>>>>>>>>> + * If LRO is enabled, check that the maximum aggregated
>>>> packet
>>>>>>>>> + * size is supported by the configured device.
>>>>>>>>> + */
>>>>>>>>> + if (dev_conf->rxmode.offloads &
>>>> DEV_RX_OFFLOAD_TCP_LRO) {
>>>>>>>>> + ret = check_lro_pkt_size(
>>>>>>>>> + port_id, dev_conf-
>>>>>>>>> rxmode.max_lro_pkt_size,
>>>>>>>>> + dev_info.max_lro_pkt_size);
>>>>>>>>> + if (ret != 0)
>>>>>>>>> + goto rollback;
>>>>>>>>> + }
>>>>>>>>> +
>>>>>>>>
>>>>>>>> This check forces applications that enable LRO to provide
>>>>>> 'max_lro_pkt_size'
>>>>>>>> config value.
>>>>>>>
>>>>>>> Yes.(we can break an API, we noticed it)
>>>>>>
>>>>>> I am not talking about API/ABI breakage, that part is OK.
>>>>>> With this check, if the application requested LRO offload but not
>>>>>> provided 'max_lro_pkt_size' value, device configuration will fail.
>>>>>>
>>>>> Yes
>>>>>> Can there be a case application is good with whatever the PMD can
>>>>>> support as max?
>>>>> Yes can be - you know, we can do everything we want but it is better
>>>>> to be
>>>> consistent:
>>>>> Due to the fact of Max rx pkt len field is mandatory for JUMBO
>>>>> offload, max
>>>> lro pkt len should be mandatory for LRO offload.
>>>>>
>>>>> So your question is actually why both, non-lro packets and LRO
>>>>> packets max
>>>> size are mandatory...
>>>>>
>>>>>
>>>>> I think it should be important values for net applications management.
>>>>> Also good for mbuf size managements.
>>>>>
>>>>>>>
>>>>>>>> - Why it is mandatory now, how it was working before if it is
>>>>>>>> mandatory value?
>>>>>>>
>>>>>>> It is the same as max_rx_pkt_len which is mandatory for jumbo
>>>>>>> frame
>>>>>> offload.
>>>>>>> So now, when the user configures a LRO offload he must to set max
>>>>>>> lro pkt
>>>>>> len.
>>>>>>> We don't want to confuse the user here with the max rx pkt len
>>>>>> configurations and behaviors, they should be with same logic.
>>>>>>>
>>>>>>> This parameter defines well the LRO behavior.
>>>>>>> Before this, each PMD took its own interpretation to what should
>>>>>>> be the
>>>>>> maximum size for LRO aggregated packets.
>>>>>>> Now, the user must say what is his intension, and the ethdev can
>>>>>>> limit it
>>>>>> according to the device capability.
>>>>>>> By this way, also, the PMD can organize\optimize its data-path more.
>>>>>>> Also, the application can create different mempools for LRO queues
>>>>>>> to
>>>>>> allow bigger packet receiving for LRO traffic.
>>>>>>>
>>>>>>>> - What happens if PMD doesn't provide 'max_lro_pkt_size', so it is
>> '0'?
>>>>>>> Yes, you can see the feature description Dekel added.
>>>>>>> This patch also updates all the PMDs support an LRO for non-0 value.
>>>>>>
>>>>>> Of course I can see the updates Matan, my point is "What happens if
>>>>>> PMD doesn't provide 'max_lro_pkt_size'",
>>>>>> 1) There is no check for it right, so it is acceptable?
>>>>>
>>>>> There is check.
>>>>> If the capability is 0, any non-zero configuration will fail.
>>>>>
>>>>>> 2) Are we making this filed mandatory to provide for PMDs, it is
>>>>>> easy to make new fields mandatory for PMDs but is this really
>> necessary?
>>>>>
>>>>> Yes, for consistence.
>>>>>
>>>>>>>
>>>>>>> as same as max rx pkt len, no?
>>>>>>>
>>>>>>>> - What do you think setting 'max_lro_pkt_size' config value to
>>>>>>>> what PMD provided if application doesn't provide it?
>>>>>>> Same answers as above.
>>>>>>>
>>>>>>
>>>>>> If application doesn't care the value, as it has been till now, and
>>>>>> not provided explicit 'max_lro_pkt_size', why not ethdev level use
>>>>>> the value provided by PMD instead of failing?
>>>>>
>>>>> Again, same question we can ask on max rx pkt len.
>>>>>
>>>>> Looks like the packet size is very important value which should be
>>>>> set by
>>>> the application.
>>>>>
>>>>> Previous applications have no option to configure it, so they
>>>>> haven't
>>>> configure it, (probably cover it somehow) I think it is our miss to
>>>> supply this info.
>>>>>
>>>>> Let's do it in same way as we do max rx pkt len (as this patch main idea).
>>>>> Later, we can change both to other meaning.
>>>>>
>>>>
>>>> I think it is not a good reason to introduce a new mandatory config
>>>> option for application because of 'max_rx_pkt_len' does it.
>>>
>>> It is mandatory only if LRO offload is configured.
>>>
>>>> Will it work, if:
>>>> - If application doesn't provide this value, use the PMD max
>>>
>>> May cause a problem if the mbuf size is not enough for the PMD maximum.
>>
>> OK, this is what I was missing, for this case I was thinking max_rx_pkt_len will
>> be used but you already explained that application may want to use different
>> mempools for LRO queues.
>>
> So , are you agree with the idea?
>
>> For this case shouldn't PMDs take the 'rxmode.max_lro_pkt_size' into
>> account and program the device accordingly (of course in LRO enabled case)
>> ?
>> This part seems missing and should be highlighted to other PMD maintainers.
>
>
> Yes, you are right.
> PMDs must limit the LRO aggregated packet according to the new field,
> And it probably very hard for the patch introducer to understand how to do it for each PMD.
>
> I think each new configuration requires other maintainers\developers to adjust their own PMD code to the new configuration and it should be done in limited time.
Agree.
But experience has shown that this synchronization is not as easy as it sounds:
whoever changes the interface/library says other PMDs should reflect the change,
but most of the time the other PMD maintainers are not aware of it, or if they
are, they have other priorities for the release. So the changes should be made
in a way that gives PMDs more time to adapt, and during this time the library
change shouldn't break other PMDs.
>
> My suggestion here:
> 1. To reserve the info field and the configuration field for rc2.(if it is critical not to break ABI for rc3)
> 2. To merge the ethdev patch in the start of rc3.
> 3. Request each relevant PMD to adjust its PMD to the new configuration for the end of rc3.
> Note: this should be small change and only for ~5 PMDs:
> a. Introduce the info field according to the device ability.
> b. For each LRO queue:
> Use the LRO max size configuration instead of the current max rx pkt len configuration(looks like small condition).
>
> What do you think?
There is already a v6 which only updates dev_info fields to have the
'max_lro_pktlen' field; the PMD updates there also look safe, so I think we can
go with it for rc2.
For the configuration part, I suggest deferring it to the next release, which
gives more time for discussion and enough time for other PMDs to implement it.
And regarding the configuration: right now devices are already configured to
limit the packet size to 'max_rx_pkt_len', and it can be an optimization to
increase it to 'max_lro_pkt_len' for the queues where LRO is supported. Why not
make this configuration more explicit with a specific API, as Konstantin
suggested [1]? This way it only affects the applications that are interested
and the PMDs that want to support it.
The current implementation is under 'rte_eth_dev_configure()', which is used by
all DPDK applications, so the impact of changing it is much larger; it also
makes it mandatory for applications to provide this config option when LRO is
enabled. An explicit API gives the same result without making the config option
mandatory.
[1]
int rte_eth_dev_set_max_lro(uint16_t port_id, uint32_t lro);
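For illustration, a hypothetical usage sketch of the suggested call (rte_eth_dev_set_max_lro() is only a proposal here, not an existing ethdev function, and the 32 KB value is an example):

        /* After rte_eth_dev_configure() and before rte_eth_dev_start():
         * cap LRO aggregation at 32 KB on this port. */
        int ret = rte_eth_dev_set_max_lro(port_id, 32 * 1024);
        if (ret != 0)
                /* e.g. PMD cannot limit LRO size, or value exceeds its cap. */
                printf("cannot set max LRO size: %d\n", ret);

Only applications interested in tuning LRO would call it; all others keep the current behavior untouched.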
* Re: [dpdk-dev] [PATCH v4 1/3] ethdev: support API to set max LRO packet size
2019-11-11 11:15 ` Ferruh Yigit
@ 2019-11-11 11:33 ` Matan Azrad
2019-11-11 12:21 ` Ferruh Yigit
0 siblings, 1 reply; 79+ messages in thread
From: Matan Azrad @ 2019-11-11 11:33 UTC (permalink / raw)
To: Ferruh Yigit, Dekel Peled, john.mcnamara, marko.kovacevic,
nhorman, ajit.khaparde, somnath.kotur, anatoly.burakov,
xuanziyang2, cloud.wangxiaoyun, zhouguoyang, wenzhuo.lu,
konstantin.ananyev, Shahaf Shuler, Slava Ovsiienko, rmody,
shshaikh, maxime.coquelin, tiwei.bie, zhihong.wang, yongwang,
Thomas Monjalon, arybchenko, jingjing.wu, bernard.iremonger
Cc: dev
From: Ferruh Yigit
> On 11/9/2019 6:20 PM, Matan Azrad wrote:
> > Hi
> >
> > From: Ferruh Yigit
> >> On 11/8/2019 11:56 AM, Matan Azrad wrote:
> >>>
> >>>
> >>> From: Ferruh Yigit
> >>>> On 11/8/2019 10:10 AM, Matan Azrad wrote:
> >>>>>
> >>>>>
> >>>>> From: Ferruh Yigit
> >>>>>> On 11/8/2019 6:54 AM, Matan Azrad wrote:
> >>>>>>> Hi
> >>>>>>>
> >>>>>>> From: Ferruh Yigit
> >>>>>>>> On 11/7/2019 12:35 PM, Dekel Peled wrote:
> >>>>>>>>> @@ -1266,6 +1286,18 @@ struct rte_eth_dev *
> >>>>>>>>>
> >>>>>>>> RTE_ETHER_MAX_LEN;
> >>>>>>>>> }
> >>>>>>>>>
> >>>>>>>>> + /*
> >>>>>>>>> + * If LRO is enabled, check that the maximum aggregated
> >>>> packet
> >>>>>>>>> + * size is supported by the configured device.
> >>>>>>>>> + */
> >>>>>>>>> + if (dev_conf->rxmode.offloads &
> >>>> DEV_RX_OFFLOAD_TCP_LRO) {
> >>>>>>>>> + ret = check_lro_pkt_size(
> >>>>>>>>> + port_id, dev_conf-
> >>>>>>>>> rxmode.max_lro_pkt_size,
> >>>>>>>>> + dev_info.max_lro_pkt_size);
> >>>>>>>>> + if (ret != 0)
> >>>>>>>>> + goto rollback;
> >>>>>>>>> + }
> >>>>>>>>> +
> >>>>>>>>
> >>>>>>>> This check forces applications that enable LRO to provide
> >>>>>> 'max_lro_pkt_size'
> >>>>>>>> config value.
> >>>>>>>
> >>>>>>> Yes.(we can break an API, we noticed it)
> >>>>>>
> >>>>>> I am not talking about API/ABI breakage, that part is OK.
> >>>>>> With this check, if the application requested LRO offload but not
> >>>>>> provided 'max_lro_pkt_size' value, device configuration will fail.
> >>>>>>
> >>>>> Yes
> >>>>>> Can there be a case application is good with whatever the PMD can
> >>>>>> support as max?
> >>>>> Yes can be - you know, we can do everything we want but it is
> >>>>> better to be
> >>>> consistent:
> >>>>> Due to the fact of Max rx pkt len field is mandatory for JUMBO
> >>>>> offload, max
> >>>> lro pkt len should be mandatory for LRO offload.
> >>>>>
> >>>>> So your question is actually why both, non-lro packets and LRO
> >>>>> packets max
> >>>> size are mandatory...
> >>>>>
> >>>>>
> >>>>> I think it should be important values for net applications management.
> >>>>> Also good for mbuf size managements.
> >>>>>
> >>>>>>>
> >>>>>>>> - Why it is mandatory now, how it was working before if it is
> >>>>>>>> mandatory value?
> >>>>>>>
> >>>>>>> It is the same as max_rx_pkt_len which is mandatory for jumbo
> >>>>>>> frame
> >>>>>> offload.
> >>>>>>> So now, when the user configures a LRO offload he must to set
> >>>>>>> max lro pkt
> >>>>>> len.
> >>>>>>> We don't want to confuse the user here with the max rx pkt len
> >>>>>> configurations and behaviors, they should be with same logic.
> >>>>>>>
> >>>>>>> This parameter defines well the LRO behavior.
> >>>>>>> Before this, each PMD took its own interpretation to what should
> >>>>>>> be the
> >>>>>> maximum size for LRO aggregated packets.
> >>>>>>> Now, the user must say what is his intension, and the ethdev can
> >>>>>>> limit it
> >>>>>> according to the device capability.
> >>>>>>> By this way, also, the PMD can organize\optimize its data-path
> more.
> >>>>>>> Also, the application can create different mempools for LRO
> >>>>>>> queues to
> >>>>>> allow bigger packet receiving for LRO traffic.
> >>>>>>>
> >>>>>>>> - What happens if PMD doesn't provide 'max_lro_pkt_size', so it
> >>>>>>>> is
> >> '0'?
> >>>>>>> Yes, you can see the feature description Dekel added.
> >>>>>>> This patch also updates all the PMDs support an LRO for non-0
> value.
> >>>>>>
> >>>>>> Of course I can see the updates Matan, my point is "What happens
> >>>>>> if PMD doesn't provide 'max_lro_pkt_size'",
> >>>>>> 1) There is no check for it right, so it is acceptable?
> >>>>>
> >>>>> There is check.
> >>>>> If the capability is 0, any non-zero configuration will fail.
> >>>>>
> >>>>>> 2) Are we making this filed mandatory to provide for PMDs, it is
> >>>>>> easy to make new fields mandatory for PMDs but is this really
> >> necessary?
> >>>>>
> >>>>> Yes, for consistence.
> >>>>>
> >>>>>>>
> >>>>>>> as same as max rx pkt len, no?
> >>>>>>>
> >>>>>>>> - What do you think setting 'max_lro_pkt_size' config value to
> >>>>>>>> what PMD provided if application doesn't provide it?
> >>>>>>> Same answers as above.
> >>>>>>>
> >>>>>>
> >>>>>> If application doesn't care the value, as it has been till now,
> >>>>>> and not provided explicit 'max_lro_pkt_size', why not ethdev
> >>>>>> level use the value provided by PMD instead of failing?
> >>>>>
> >>>>> Again, same question we can ask on max rx pkt len.
> >>>>>
> >>>>> Looks like the packet size is very important value which should be
> >>>>> set by
> >>>> the application.
> >>>>>
> >>>>> Previous applications have no option to configure it, so they
> >>>>> haven't
> >>>> configure it, (probably cover it somehow) I think it is our miss to
> >>>> supply this info.
> >>>>>
> >>>>> Let's do it in same way as we do max rx pkt len (as this patch main
> idea).
> >>>>> Later, we can change both to other meaning.
> >>>>>
> >>>>
> >>>> I think it is not a good reason to introduce a new mandatory config
> >>>> option for application because of 'max_rx_pkt_len' does it.
> >>>
> >>> It is mandatory only if LRO offload is configured.
> >>>
> >>>> Will it work, if:
> >>>> - If application doesn't provide this value, use the PMD max
> >>>
> >>> May cause a problem if the mbuf size is not enough for the PMD
> maximum.
> >>
> >> OK, this is what I was missing, for this case I was thinking
> >> max_rx_pkt_len will be used but you already explained that
> >> application may want to use different mempools for LRO queues.
> >>
> > So , are you agree with the idea?
> >
> >> For this case shouldn't PMDs take the 'rxmode.max_lro_pkt_size' into
> >> account and program the device accordingly (of course in LRO enabled
> >> case) ?
> >> This part seems missing and should be highlighted to other PMD
> maintainers.
> >
> >
> > Yes, you are right.
> > PMDs must limit the LRO aggregated packet according to the new field,
> > And it probably very hard for the patch introducer to understand how to do
> it for each PMD.
> >
> > I think each new configuration requires other maintainers\developers to
> adjust their own PMD code to the new configuration and it should be done in
> limited time.
>
> Agree.
> But experience has shown that this synchronization is not as easy as it
> sounds: whoever changes the interface/library says other PMDs should reflect
> the change, but most of the time the other PMD maintainers are not aware of
> it, or if they are, they have other priorities for the release. So the
> changes should be made in a way that gives PMDs more time to adapt, and
> during this time the library change shouldn't break other PMDs.
>
Yes.
> > My suggestion here:
> > 1. To reserve the info field and the configuration field for rc2.(if
> > it is critical not to break ABI for rc3) 2. To merge the ethdev patch in the
> start of rc3.
> > 3. Request each relevant PMD to adjust its PMD to the new configuration
> for the end of rc3.
> > Note: this should be small change and only for ~5 PMDs:
> > a. Introduce the info field according to the device ability.
> > b. For each LRO queue:
> > Use the LRO max size configuration instead of the
> current max rx pkt len configuration(looks like small condition).
> >
> > What do you think?
>
> There is already a v6 which only updates dev_info fields to have the
> 'max_lro_pktlen' field; the PMD updates there also look safe, so I think we
> can go with it for rc2.
>
It doesn't make sense to expose the info field without the configuration.
> For the configuration part, I suggest deferring it to the next release, which
> gives more time for discussion and enough time for other PMDs to implement it.
>
>
> And regarding the configuration: right now devices are already configured to
> limit the packet size to 'max_rx_pkt_len', and it can be an optimization to
> increase it to 'max_lro_pkt_len' for the queues where LRO is supported. Why
> not make this configuration more explicit with a specific API, as Konstantin
> suggested [1]? This way it only affects the applications that are interested
> and the PMDs that want to support it.
> The current implementation is under 'rte_eth_dev_configure()', which is used
> by all DPDK applications, so the impact of changing it is much larger; it
> also makes it mandatory for applications to provide this config option when
> LRO is enabled. An explicit API gives the same result without making the
> config option mandatory.
>
> [1]
> int rte_eth_dev_set_max_lro(uint16_t port_id, uint32_t lro);
Please see my answers to Konstantin regarding this topic.
One more option:
In order not to break PMDs because of this feature:
0 in the capability field means the PMD doesn't support a special LRO limit, so if the application configuration is not the same as max_rx_pkt_len, the validation will fail.
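A minimal sketch of how that rule could look inside the ethdev validation (check_lro_pkt_size() is the helper from this series, but this signature and the 0-means-no-special-limit behavior are only the proposal above):

        #include <errno.h>
        #include <stdint.h>

        static int
        check_lro_pkt_size(uint16_t port_id, uint32_t conf_size,
                           uint32_t max_rx_pkt_len, uint32_t dev_cap)
        {
                (void)port_id; /* kept only to mirror the series' helper */
                if (dev_cap == 0)
                        /* PMD has no special LRO limit: accept only a
                         * configuration identical to max_rx_pkt_len. */
                        return conf_size == max_rx_pkt_len ? 0 : -EINVAL;
                /* Otherwise validate against the advertised capability. */
                return conf_size <= dev_cap ? 0 : -EINVAL;
        }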
* Re: [dpdk-dev] [PATCH v4 1/3] ethdev: support API to set max LRO packet size
2019-11-11 11:33 ` Matan Azrad
@ 2019-11-11 12:21 ` Ferruh Yigit
2019-11-11 13:32 ` Matan Azrad
0 siblings, 1 reply; 79+ messages in thread
From: Ferruh Yigit @ 2019-11-11 12:21 UTC (permalink / raw)
To: Matan Azrad, Dekel Peled, john.mcnamara, marko.kovacevic,
nhorman, ajit.khaparde, somnath.kotur, anatoly.burakov,
xuanziyang2, cloud.wangxiaoyun, zhouguoyang, wenzhuo.lu,
konstantin.ananyev, Shahaf Shuler, Slava Ovsiienko, rmody,
shshaikh, maxime.coquelin, tiwei.bie, zhihong.wang, yongwang,
Thomas Monjalon, arybchenko, jingjing.wu, bernard.iremonger
Cc: dev
On 11/11/2019 11:33 AM, Matan Azrad wrote:
>
>
> From: Ferruh Yigit
>> On 11/9/2019 6:20 PM, Matan Azrad wrote:
>>> Hi
>>>
>>> From: Ferruh Yigit
>>>> On 11/8/2019 11:56 AM, Matan Azrad wrote:
>>>>>
>>>>>
>>>>> From: Ferruh Yigit
>>>>>> On 11/8/2019 10:10 AM, Matan Azrad wrote:
>>>>>>>
>>>>>>>
>>>>>>> From: Ferruh Yigit
>>>>>>>> On 11/8/2019 6:54 AM, Matan Azrad wrote:
>>>>>>>>> Hi
>>>>>>>>>
>>>>>>>>> From: Ferruh Yigit
>>>>>>>>>> On 11/7/2019 12:35 PM, Dekel Peled wrote:
>>>>>>>>>>> @@ -1266,6 +1286,18 @@ struct rte_eth_dev *
>>>>>>>>>>>
>>>>>>>>>> RTE_ETHER_MAX_LEN;
>>>>>>>>>>> }
>>>>>>>>>>>
>>>>>>>>>>> + /*
>>>>>>>>>>> + * If LRO is enabled, check that the maximum aggregated
>>>>>> packet
>>>>>>>>>>> + * size is supported by the configured device.
>>>>>>>>>>> + */
>>>>>>>>>>> + if (dev_conf->rxmode.offloads &
>>>>>> DEV_RX_OFFLOAD_TCP_LRO) {
>>>>>>>>>>> + ret = check_lro_pkt_size(
>>>>>>>>>>> + port_id, dev_conf-
>>>>>>>>>>> rxmode.max_lro_pkt_size,
>>>>>>>>>>> + dev_info.max_lro_pkt_size);
>>>>>>>>>>> + if (ret != 0)
>>>>>>>>>>> + goto rollback;
>>>>>>>>>>> + }
>>>>>>>>>>> +
>>>>>>>>>>
>>>>>>>>>> This check forces applications that enable LRO to provide
>>>>>>>> 'max_lro_pkt_size'
>>>>>>>>>> config value.
>>>>>>>>>
>>>>>>>>> Yes.(we can break an API, we noticed it)
>>>>>>>>
>>>>>>>> I am not talking about API/ABI breakage, that part is OK.
>>>>>>>> With this check, if the application requested LRO offload but not
>>>>>>>> provided 'max_lro_pkt_size' value, device configuration will fail.
>>>>>>>>
>>>>>>> Yes
>>>>>>>> Can there be a case application is good with whatever the PMD can
>>>>>>>> support as max?
>>>>>>> Yes can be - you know, we can do everything we want but it is
>>>>>>> better to be
>>>>>> consistent:
>>>>>>> Due to the fact of Max rx pkt len field is mandatory for JUMBO
>>>>>>> offload, max
>>>>>> lro pkt len should be mandatory for LRO offload.
>>>>>>>
>>>>>>> So your question is actually why both, non-lro packets and LRO
>>>>>>> packets max
>>>>>> size are mandatory...
>>>>>>>
>>>>>>>
>>>>>>> I think it should be important values for net applications management.
>>>>>>> Also good for mbuf size managements.
>>>>>>>
>>>>>>>>>
>>>>>>>>>> - Why it is mandatory now, how it was working before if it is
>>>>>>>>>> mandatory value?
>>>>>>>>>
>>>>>>>>> It is the same as max_rx_pkt_len which is mandatory for jumbo
>>>>>>>>> frame
>>>>>>>> offload.
>>>>>>>>> So now, when the user configures a LRO offload he must to set
>>>>>>>>> max lro pkt
>>>>>>>> len.
>>>>>>>>> We don't want to confuse the user here with the max rx pkt len
>>>>>>>> configurations and behaviors, they should be with same logic.
>>>>>>>>>
>>>>>>>>> This parameter defines well the LRO behavior.
>>>>>>>>> Before this, each PMD took its own interpretation to what should
>>>>>>>>> be the
>>>>>>>> maximum size for LRO aggregated packets.
>>>>>>>>> Now, the user must say what is his intension, and the ethdev can
>>>>>>>>> limit it
>>>>>>>> according to the device capability.
>>>>>>>>> By this way, also, the PMD can organize\optimize its data-path
>> more.
>>>>>>>>> Also, the application can create different mempools for LRO
>>>>>>>>> queues to
>>>>>>>> allow bigger packet receiving for LRO traffic.
>>>>>>>>>
>>>>>>>>>> - What happens if PMD doesn't provide 'max_lro_pkt_size', so it
>>>>>>>>>> is
>>>> '0'?
>>>>>>>>> Yes, you can see the feature description Dekel added.
>>>>>>>>> This patch also updates all the PMDs support an LRO for non-0
>> value.
>>>>>>>>
>>>>>>>> Of course I can see the updates Matan, my point is "What happens
>>>>>>>> if PMD doesn't provide 'max_lro_pkt_size'",
>>>>>>>> 1) There is no check for it right, so it is acceptable?
>>>>>>>
>>>>>>> There is check.
>>>>>>> If the capability is 0, any non-zero configuration will fail.
>>>>>>>
>>>>>>>> 2) Are we making this filed mandatory to provide for PMDs, it is
>>>>>>>> easy to make new fields mandatory for PMDs but is this really
>>>> necessary?
>>>>>>>
>>>>>>> Yes, for consistence.
>>>>>>>
>>>>>>>>>
>>>>>>>>> as same as max rx pkt len, no?
>>>>>>>>>
>>>>>>>>>> - What do you think setting 'max_lro_pkt_size' config value to
>>>>>>>>>> what PMD provided if application doesn't provide it?
>>>>>>>>> Same answers as above.
>>>>>>>>>
>>>>>>>>
>>>>>>>> If application doesn't care the value, as it has been till now,
>>>>>>>> and not provided explicit 'max_lro_pkt_size', why not ethdev
>>>>>>>> level use the value provided by PMD instead of failing?
>>>>>>>
>>>>>>> Again, same question we can ask on max rx pkt len.
>>>>>>>
>>>>>>> Looks like the packet size is very important value which should be
>>>>>>> set by
>>>>>> the application.
>>>>>>>
>>>>>>> Previous applications have no option to configure it, so they
>>>>>>> haven't
>>>>>> configure it, (probably cover it somehow) I think it is our miss to
>>>>>> supply this info.
>>>>>>>
>>>>>>> Let's do it in same way as we do max rx pkt len (as this patch main
>> idea).
>>>>>>> Later, we can change both to other meaning.
>>>>>>>
>>>>>>
>>>>>> I think it is not a good reason to introduce a new mandatory config
>>>>>> option for application because of 'max_rx_pkt_len' does it.
>>>>>
>>>>> It is mandatory only if LRO offload is configured.
>>>>>
>>>>>> Will it work, if:
>>>>>> - If application doesn't provide this value, use the PMD max
>>>>>
>>>>> May cause a problem if the mbuf size is not enough for the PMD
>> maximum.
>>>>
>>>> OK, this is what I was missing, for this case I was thinking
>>>> max_rx_pkt_len will be used but you already explained that
>>>> application may want to use different mempools for LRO queues.
>>>>
>>> So , are you agree with the idea?
>>>
>>>> For this case shouldn't PMDs take the 'rxmode.max_lro_pkt_size' into
>>>> account and program the device accordingly (of course in LRO enabled
>>>> case) ?
>>>> This part seems missing and should be highlighted to other PMD
>> maintainers.
>>>
>>>
>>> Yes, you are right.
>>> PMDs must limit the LRO aggregated packet according to the new field,
>>> And it probably very hard for the patch introducer to understand how to do
>> it for each PMD.
>>>
>>> I think each new configuration requires other maintainers\developers to
>> adjust their own PMD code to the new configuration and it should be done in
>> limited time.
>>
>> Agree.
>> But experience showed that this synchronization is not as easy as it sounds,
>> whoever changing the interface/library says other PMDs should reflect the
>> change but most of the times other PMD maintainers not aware of it or if
>> they do they have other priorities for the release, so the changes should be
>> in a way to give more time to PMDs to adapt it and during this time library
>> change shouldn't break other PMDs.
>>
>
> Yes.
>
>>> My suggestion here:
>>> 1. To reserve the info field and the configuration field for rc2.(if
>>> it is critical not to break ABI for rc3) 2. To merge the ethdev patch in the
>> start of rc3.
>>> 3. Request each relevant PMD to adjust its PMD to the new configuration
>> for the end of rc3.
>>> Note: this should be small change and only for ~5 PMDs:
>>> a. Introduce the info field according to the device ability.
>>> b. For each LRO queue:
>>> Use the LRO max size configuration instead of the
>> current max rx pkt len configuration(looks like small condition).
>>>
>>> What do you think?
>>
>> There is already a v6 which only updates dev_info fields to have the
>> 'max_lro_pktlen' field, the PMD updates there also looks safe, so I think we
>> can go with it for rc2.
>>
>
> It doesn't make sense to expose the info field without the configuration.
>
>
>> For the configuration part, I suggest deferring it next release, which gives
>> more time for discussion and enough time for other PMDs to implement it.
>>
>>
>> And related configuration, right now devices already configured to limit the
>> packet size to 'max_rx_pkt_len', it can be an optimization to increase it to
>> 'max_lro_pkt_len' for the queues LRO is supported, why not make this
>> configuration more explicitly with specific API as Konstantin suggested [1],
>> this way it only affects the applications that are interested in and the PMDs
>> that want to support this.
>> Current implementation is under 'rte_eth_dev_configure()' which is used by
>> all DPDK applications and impact of changing it is much larger, also it makes
>> mandatory for applications to provide this config option when LRO enabled,
>> explicit API gives same result without making a mandatory config option.
>>
>> [1]
>> int rte_eth_dev_set_max_lro(uint16_t port_id, uint32_t lro);
>
> Please see my answers to Konstantin regarding this topic.
>
>
>
> One more option:
> In order not to break PMDs because of this feature:
> 0 in the capability field means the PMD doesn't support a special LRO limit, so if the application configuration is not the same as max_rx_pkt_len, the validation will fail.
>
I don't see why this should be a mandatory field when LRO is enabled, am I
missing something? And the current implementation makes it so by failing
configure(); the effect on the applications is my first concern.
Second is when the application supplies the proper values but the PMD does
nothing with them, without letting the application know that nothing was done.
That is why I think an explicit API makes this clear, and it is only required
by applications that want to use it.
Something similar can be done with the following, which also doesn't require
both application and PMD changes, wdyt?
ethdev, configure():
        if (dev_conf->rxmode.offloads & DEV_RX_OFFLOAD_TCP_LRO) {
                if (dev_conf->rxmode.max_lro_pktlen) {
                        /* Validate against the LRO capability when the PMD
                         * advertises one, else fall back to the regular
                         * max Rx packet length capability. */
                        if (dev_info.max_lro_pktlen)
                                validate(rxmode.max_lro_pktlen, dev_info.max_lro_pktlen);
                        else if (dev_info.max_rx_pktlen)
                                validate(rxmode.max_lro_pktlen, dev_info.max_rx_pktlen);
                }
        }
in PMD:
        if (LRO) {
                /* Use the LRO-specific limit when given, otherwise keep the
                 * regular max Rx packet length. */
                queue.max_pktlen = rxmode.max_lro_pktlen ?
                                   rxmode.max_lro_pktlen :
                                   rxmode.max_rx_pktlen;
        }
* Re: [dpdk-dev] [PATCH v4 1/3] ethdev: support API to set max LRO packet size
2019-11-11 12:21 ` Ferruh Yigit
@ 2019-11-11 13:32 ` Matan Azrad
0 siblings, 0 replies; 79+ messages in thread
From: Matan Azrad @ 2019-11-11 13:32 UTC (permalink / raw)
To: Ferruh Yigit, Dekel Peled, john.mcnamara, marko.kovacevic,
nhorman, ajit.khaparde, somnath.kotur, anatoly.burakov,
xuanziyang2, cloud.wangxiaoyun, zhouguoyang, wenzhuo.lu,
konstantin.ananyev, Shahaf Shuler, Slava Ovsiienko, rmody,
shshaikh, maxime.coquelin, tiwei.bie, zhihong.wang, yongwang,
Thomas Monjalon, arybchenko, jingjing.wu, bernard.iremonger
Cc: dev
From: Ferruh Yigit
> On 11/11/2019 11:33 AM, Matan Azrad wrote:
> >
> >
> > From: Ferruh Yigit
> >> On 11/9/2019 6:20 PM, Matan Azrad wrote:
> >>> Hi
> >>>
> >>> From: Ferruh Yigit
> >>>> On 11/8/2019 11:56 AM, Matan Azrad wrote:
> >>>>>
> >>>>>
> >>>>> From: Ferruh Yigit
> >>>>>> On 11/8/2019 10:10 AM, Matan Azrad wrote:
> >>>>>>>
> >>>>>>>
> >>>>>>> From: Ferruh Yigit
> >>>>>>>> On 11/8/2019 6:54 AM, Matan Azrad wrote:
> >>>>>>>>> Hi
> >>>>>>>>>
> >>>>>>>>> From: Ferruh Yigit
> >>>>>>>>>> On 11/7/2019 12:35 PM, Dekel Peled wrote:
> >>>>>>>>>>> @@ -1266,6 +1286,18 @@ struct rte_eth_dev *
> >>>>>>>>>>>
> >>>>>>>>>> RTE_ETHER_MAX_LEN;
> >>>>>>>>>>> }
> >>>>>>>>>>>
> >>>>>>>>>>> + /*
> >>>>>>>>>>> + * If LRO is enabled, check that the maximum
> aggregated
> >>>>>> packet
> >>>>>>>>>>> + * size is supported by the configured device.
> >>>>>>>>>>> + */
> >>>>>>>>>>> + if (dev_conf->rxmode.offloads &
> >>>>>> DEV_RX_OFFLOAD_TCP_LRO) {
> >>>>>>>>>>> + ret = check_lro_pkt_size(
> >>>>>>>>>>> + port_id, dev_conf-
> >>>>>>>>>>> rxmode.max_lro_pkt_size,
> >>>>>>>>>>> + dev_info.max_lro_pkt_size);
> >>>>>>>>>>> + if (ret != 0)
> >>>>>>>>>>> + goto rollback;
> >>>>>>>>>>> + }
> >>>>>>>>>>> +
> >>>>>>>>>>
> >>>>>>>>>> This check forces applications that enable LRO to provide
> >>>>>>>> 'max_lro_pkt_size'
> >>>>>>>>>> config value.
> >>>>>>>>>
> >>>>>>>>> Yes.(we can break an API, we noticed it)
> >>>>>>>>
> >>>>>>>> I am not talking about API/ABI breakage, that part is OK.
> >>>>>>>> With this check, if the application requested LRO offload but
> >>>>>>>> not provided 'max_lro_pkt_size' value, device configuration will
> fail.
> >>>>>>>>
> >>>>>>> Yes
> >>>>>>>> Can there be a case application is good with whatever the PMD
> >>>>>>>> can support as max?
> >>>>>>> Yes can be - you know, we can do everything we want but it is
> >>>>>>> better to be
> >>>>>> consistent:
> >>>>>>> Due to the fact of Max rx pkt len field is mandatory for JUMBO
> >>>>>>> offload, max
> >>>>>> lro pkt len should be mandatory for LRO offload.
> >>>>>>>
> >>>>>>> So your question is actually why both, non-lro packets and LRO
> >>>>>>> packets max
> >>>>>> size are mandatory...
> >>>>>>>
> >>>>>>>
> >>>>>>> I think it should be important values for net applications
> management.
> >>>>>>> Also good for mbuf size managements.
> >>>>>>>
> >>>>>>>>>
> >>>>>>>>>> - Why it is mandatory now, how it was working before if it is
> >>>>>>>>>> mandatory value?
> >>>>>>>>>
> >>>>>>>>> It is the same as max_rx_pkt_len which is mandatory for jumbo
> >>>>>>>>> frame
> >>>>>>>> offload.
> >>>>>>>>> So now, when the user configures a LRO offload he must to set
> >>>>>>>>> max lro pkt
> >>>>>>>> len.
> >>>>>>>>> We don't want to confuse the user here with the max rx pkt len
> >>>>>>>> configurations and behaviors, they should be with same logic.
> >>>>>>>>>
> >>>>>>>>> This parameter defines well the LRO behavior.
> >>>>>>>>> Before this, each PMD took its own interpretation to what
> >>>>>>>>> should be the
> >>>>>>>> maximum size for LRO aggregated packets.
> >>>>>>>>> Now, the user must say what is his intension, and the ethdev
> >>>>>>>>> can limit it
> >>>>>>>> according to the device capability.
> >>>>>>>>> By this way, also, the PMD can organize\optimize its data-path
> >> more.
> >>>>>>>>> Also, the application can create different mempools for LRO
> >>>>>>>>> queues to
> >>>>>>>> allow bigger packet receiving for LRO traffic.
> >>>>>>>>>
> >>>>>>>>>> - What happens if PMD doesn't provide 'max_lro_pkt_size', so
> >>>>>>>>>> it is
> >>>> '0'?
> >>>>>>>>> Yes, you can see the feature description Dekel added.
> >>>>>>>>> This patch also updates all the PMDs support an LRO for non-0
> >> value.
> >>>>>>>>
> >>>>>>>> Of course I can see the updates Matan, my point is "What
> >>>>>>>> happens if PMD doesn't provide 'max_lro_pkt_size'",
> >>>>>>>> 1) There is no check for it right, so it is acceptable?
> >>>>>>>
> >>>>>>> There is check.
> >>>>>>> If the capability is 0, any non-zero configuration will fail.
> >>>>>>>
> >>>>>>>> 2) Are we making this filed mandatory to provide for PMDs, it
> >>>>>>>> is easy to make new fields mandatory for PMDs but is this
> >>>>>>>> really
> >>>> necessary?
> >>>>>>>
> >>>>>>> Yes, for consistence.
> >>>>>>>
> >>>>>>>>>
> >>>>>>>>> as same as max rx pkt len, no?
> >>>>>>>>>
> >>>>>>>>>> - What do you think setting 'max_lro_pkt_size' config value
> >>>>>>>>>> to what PMD provided if application doesn't provide it?
> >>>>>>>>> Same answers as above.
> >>>>>>>>>
> >>>>>>>>
> >>>>>>>> If application doesn't care the value, as it has been till now,
> >>>>>>>> and not provided explicit 'max_lro_pkt_size', why not ethdev
> >>>>>>>> level use the value provided by PMD instead of failing?
> >>>>>>>
> >>>>>>> Again, same question we can ask on max rx pkt len.
> >>>>>>>
> >>>>>>> Looks like the packet size is very important value which should
> >>>>>>> be set by
> >>>>>> the application.
> >>>>>>>
> >>>>>>> Previous applications have no option to configure it, so they
> >>>>>>> haven't
> >>>>>> configure it, (probably cover it somehow) I think it is our miss
> >>>>>> to supply this info.
> >>>>>>>
> >>>>>>> Let's do it in same way as we do max rx pkt len (as this patch
> >>>>>>> main
> >> idea).
> >>>>>>> Later, we can change both to other meaning.
> >>>>>>>
> >>>>>>
> >>>>>> I think it is not a good reason to introduce a new mandatory
> >>>>>> config option for application because of 'max_rx_pkt_len' does it.
> >>>>>
> >>>>> It is mandatory only if LRO offload is configured.
> >>>>>
> >>>>>> Will it work, if:
> >>>>>> - If application doesn't provide this value, use the PMD max
> >>>>>
> >>>>> May cause a problem if the mbuf size is not enough for the PMD
> >> maximum.
> >>>>
> >>>> OK, this is what I was missing, for this case I was thinking
> >>>> max_rx_pkt_len will be used but you already explained that
> >>>> application may want to use different mempools for LRO queues.
> >>>>
> >>> So , are you agree with the idea?
> >>>
> >>>> For this case shouldn't PMDs take the 'rxmode.max_lro_pkt_size'
> >>>> into account and program the device accordingly (of course in LRO
> >>>> enabled
> >>>> case) ?
> >>>> This part seems missing and should be highlighted to other PMD
> >> maintainers.
> >>>
> >>>
> >>> Yes, you are right.
> >>> PMDs must limit the LRO aggregated packet according to the new
> >>> field, And it probably very hard for the patch introducer to
> >>> understand how to do
> >> it for each PMD.
> >>>
> >>> I think each new configuration requires other maintainers\developers
> >>> to
> >> adjust their own PMD code to the new configuration and it should be
> >> done in limited time.
> >>
> >> Agree.
> >> But experience showed that this synchronization is not as easy as it
> >> sounds, whoever changing the interface/library says other PMDs should
> >> reflect the change but most of the times other PMD maintainers not
> >> aware of it or if they do they have other priorities for the release,
> >> so the changes should be in a way to give more time to PMDs to adapt
> >> it and during this time library change shouldn't break other PMDs.
> >>
> >
> > Yes.
> >
> >>> My suggestion here:
> >>> 1. To reserve the info field and the configuration field for rc2.(if
> >>> it is critical not to break ABI for rc3) 2. To merge the ethdev
> >>> patch in the
> >> start of rc3.
> >>> 3. Request each relevant PMD to adjust its PMD to the new
> >>> configuration
> >> for the end of rc3.
> >>> Note: this should be small change and only for ~5 PMDs:
> >>> a. Introduce the info field according to the device ability.
> >>> b. For each LRO queue:
> >>> Use the LRO max size configuration instead of the
> >> current max rx pkt len configuration(looks like small condition).
> >>>
> >>> What do you think?
> >>
> >> There is already a v6 which only updates dev_info fields to have the
> >> 'max_lro_pktlen' field, the PMD updates there also looks safe, so I
> >> think we can go with it for rc2.
> >>
> >
> > Doesn’t make sense to expose the info field without the configuration.
> >
> >
> >> For the configuration part, I suggest deferring it next release,
> >> which gives more time for discussion and enough time for other PMDs to
> implement it.
> >>
> >>
> >> And related configuration, right now devices already configured to
> >> limit the packet size to 'max_rx_pkt_len', it can be an optimization
> >> to increase it to 'max_lro_pkt_len' for the queues LRO is supported,
> >> why not make this configuration more explicitly with specific API as
> >> Konstantin suggested [1], this way it only affects the applications
> >> that are interested in and the PMDs that want to support this.
> >> Current implementation is under 'rte_eth_dev_configure()' which is
> >> used by all DPDK applications and impact of changing it is much
> >> larger, also it makes mandatory for applications to provide this
> >> config option when LRO enabled, explicit API gives same result without
> making a mandatory config option.
> >>
> >> [1]
> >> int rte_eth_dev_set_max_lro(uint16_t port_id, uint32_t lro);
> >
> > Please see my answers to Konstantin regarding this topic.
> >
> >
> >
> > One more option:
> > In order to not break PMDs because of this feature:
> > 0 in the capability field means, The PMD doesn't support LRO special
> limitation so if the application configuration is not the same like
> max_rx_pkt_len the validation will fail.
> >
>
> I don't see why this should be a mandatory field when LRO is enabled, am I
> missing something?
From the application side, this is mandatory; you are right, it is exactly like max_rx_pkt_len.
> And the current implementation makes it so by failing configure(); the
> effect on the applications is my first concern.
This is a small effect, as with any API change.
If an existing application wants to keep its current LRO behavior, it just needs to set max_lro_pkt_len = max_rx_pkt_len.
Do you think this is a big change? Why?
> Second is when the application supplies the proper values but the PMD does
> nothing with them, without letting the application know that nothing was done.
>
A PMD which doesn't change its info to a value != 0, as I said, means that it doesn't support a special limited size for LRO queues.
When the PMD maintainers have time to support the feature, they just need to change the info value to be != 0 and to take max_lro_pkt_len into account in the configuration.
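For illustration, the PMD side of that could be a one-line sketch in its dev_infos_get() callback (the 64 KB value is a made-up device limit):

        /* != 0 advertises that the device can limit LRO size; the PMD must
         * then honor rxmode.max_lro_pkt_size when programming Rx queues. */
        dev_info->max_lro_pkt_size = 64 * 1024;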
> That is why I think an explicit API makes this clear, and it is only
> required by applications that want to use it.
I think that exposing a new function API is not good because it introduces a different way to limit the Rx packet size:
For regular packets - configure it in the configuration struct.
For LRO packets - use a function to do it.
It is very confusing and not intuitive from the user side.
Why not keep the convention consistent, so that
max_rx_pkt_len is mandatory (for JUMBO offload) and in the config structure,
and the new LRO conf is also mandatory (for LRO offload) and in the configuration structure?
Then all the configurations to limit the Rx packet size are in the same place and done in the same way.
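For illustration, keeping the old behavior then needs only one extra line at configuration time (a sketch; max_lro_pkt_size is the rxmode field from this series, and the 9000-byte value is an example):

        port_conf.rxmode.offloads |= DEV_RX_OFFLOAD_JUMBO_FRAME |
                                     DEV_RX_OFFLOAD_TCP_LRO;
        port_conf.rxmode.max_rx_pkt_len = 9000;
        /* Preserve the pre-patch behavior: LRO aggregation limited exactly
         * like regular Rx packets. */
        port_conf.rxmode.max_lro_pkt_size = port_conf.rxmode.max_rx_pkt_len;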
> Something similar can be done with the following, which also doesn't require
> both application and PMD changes, wdyt?
>
> ethdev, configure():
>         if (dev_conf->rxmode.offloads & DEV_RX_OFFLOAD_TCP_LRO) {
>                 if (dev_conf->rxmode.max_lro_pktlen) {
>                         /* Validate against the LRO capability when the PMD
>                          * advertises one, else fall back to the regular
>                          * max Rx packet length capability. */
>                         if (dev_info.max_lro_pktlen)
>                                 validate(rxmode.max_lro_pktlen, dev_info.max_lro_pktlen);
>                         else if (dev_info.max_rx_pktlen)
>                                 validate(rxmode.max_lro_pktlen, dev_info.max_rx_pktlen);
>                 }
>         }
>
>
> in PMD:
>         if (LRO) {
>                 /* Use the LRO-specific limit when given, otherwise keep the
>                  * regular max Rx packet length. */
>                 queue.max_pktlen = rxmode.max_lro_pktlen ?
>                                    rxmode.max_lro_pktlen :
>                                    rxmode.max_rx_pktlen;
>         }
Again, my only concern here is consistency: the Rx packet size limitation should be mandatory for both LRO and non-LRO.
* Re: [dpdk-dev] [PATCH v4 1/3] ethdev: support API to set max LRO packet size
2019-11-08 11:56 ` Matan Azrad
2019-11-08 12:51 ` Ferruh Yigit
@ 2019-11-08 13:11 ` Ananyev, Konstantin
2019-11-08 14:10 ` Dekel Peled
1 sibling, 1 reply; 79+ messages in thread
From: Ananyev, Konstantin @ 2019-11-08 13:11 UTC (permalink / raw)
To: Matan Azrad, Yigit, Ferruh, Dekel Peled, Mcnamara, John,
Kovacevic, Marko, nhorman, ajit.khaparde, somnath.kotur, Burakov,
Anatoly, xuanziyang2, cloud.wangxiaoyun, zhouguoyang, Lu,
Wenzhuo, Shahaf Shuler, Slava Ovsiienko, rmody, shshaikh,
maxime.coquelin, Bie, Tiwei, Wang, Zhihong, yongwang,
Thomas Monjalon, arybchenko, Wu, Jingjing, Iremonger, Bernard
Cc: dev
> -----Original Message-----
> From: Matan Azrad <matan@mellanox.com>
> Sent: Friday, November 8, 2019 11:56 AM
> To: Yigit, Ferruh <ferruh.yigit@intel.com>; Dekel Peled <dekelp@mellanox.com>; Mcnamara, John <john.mcnamara@intel.com>;
> Kovacevic, Marko <marko.kovacevic@intel.com>; nhorman@tuxdriver.com; ajit.khaparde@broadcom.com;
> somnath.kotur@broadcom.com; Burakov, Anatoly <anatoly.burakov@intel.com>; xuanziyang2@huawei.com;
> cloud.wangxiaoyun@huawei.com; zhouguoyang@huawei.com; Lu, Wenzhuo <wenzhuo.lu@intel.com>; Ananyev, Konstantin
> <konstantin.ananyev@intel.com>; Shahaf Shuler <shahafs@mellanox.com>; Slava Ovsiienko <viacheslavo@mellanox.com>;
> rmody@marvell.com; shshaikh@marvell.com; maxime.coquelin@redhat.com; Bie, Tiwei <tiwei.bie@intel.com>; Wang, Zhihong
> <zhihong.wang@intel.com>; yongwang@vmware.com; Thomas Monjalon <thomas@monjalon.net>; arybchenko@solarflare.com; Wu,
> Jingjing <jingjing.wu@intel.com>; Iremonger, Bernard <bernard.iremonger@intel.com>
> Cc: dev@dpdk.org
> Subject: RE: [dpdk-dev] [PATCH v4 1/3] ethdev: support API to set max LRO packet size
>
>
>
> From: Ferruh Yigit
> > On 11/8/2019 10:10 AM, Matan Azrad wrote:
> > >
> > >
> > > From: Ferruh Yigit
> > >> On 11/8/2019 6:54 AM, Matan Azrad wrote:
> > >>> Hi
> > >>>
> > >>> From: Ferruh Yigit
> > >>>> On 11/7/2019 12:35 PM, Dekel Peled wrote:
> > >>>>> @@ -1266,6 +1286,18 @@ struct rte_eth_dev *
> > >>>>>
> > >>>> RTE_ETHER_MAX_LEN;
> > >>>>> }
> > >>>>>
> > >>>>> + /*
> > >>>>> + * If LRO is enabled, check that the maximum aggregated
> > packet
> > >>>>> + * size is supported by the configured device.
> > >>>>> + */
> > >>>>> + if (dev_conf->rxmode.offloads &
> > DEV_RX_OFFLOAD_TCP_LRO) {
> > >>>>> + ret = check_lro_pkt_size(
> > >>>>> + port_id, dev_conf-
> > >>>>> rxmode.max_lro_pkt_size,
> > >>>>> + dev_info.max_lro_pkt_size);
> > >>>>> + if (ret != 0)
> > >>>>> + goto rollback;
> > >>>>> + }
> > >>>>> +
> > >>>>
> > >>>> This check forces applications that enable LRO to provide
> > >> 'max_lro_pkt_size'
> > >>>> config value.
> > >>>
> > >>> Yes.(we can break an API, we noticed it)
> > >>
> > >> I am not talking about API/ABI breakage, that part is OK.
> > >> With this check, if the application requested LRO offload but not
> > >> provided 'max_lro_pkt_size' value, device configuration will fail.
> > >>
> > > Yes
> > >> Can there be a case application is good with whatever the PMD can
> > >> support as max?
> > > Yes can be - you know, we can do everything we want but it is better to be
> > consistent:
> > > Due to the fact of Max rx pkt len field is mandatory for JUMBO offload, max
> > lro pkt len should be mandatory for LRO offload.
> > >
> > > So your question is actually why both, non-lro packets and LRO packets max
> > size are mandatory...
> > >
> > >
> > > I think it should be important values for net applications management.
> > > Also good for mbuf size managements.
> > >
> > >>>
> > >>>> - Why it is mandatory now, how it was working before if it is
> > >>>> mandatory value?
> > >>>
> > >>> It is the same as max_rx_pkt_len which is mandatory for jumbo frame
> > >> offload.
> > >>> So now, when the user configures a LRO offload he must to set max
> > >>> lro pkt
> > >> len.
> > >>> We don't want to confuse the user here with the max rx pkt len
> > >> configurations and behaviors, they should be with same logic.
> > >>>
> > >>> This parameter defines well the LRO behavior.
> > >>> Before this, each PMD took its own interpretation to what should be
> > >>> the
> > >> maximum size for LRO aggregated packets.
> > >>> Now, the user must say what is his intension, and the ethdev can
> > >>> limit it
> > >> according to the device capability.
> > >>> By this way, also, the PMD can organize\optimize its data-path more.
> > >>> Also, the application can create different mempools for LRO queues
> > >>> to
> > >> allow bigger packet receiving for LRO traffic.
> > >>>
> > >>>> - What happens if PMD doesn't provide 'max_lro_pkt_size', so it is '0'?
> > >>> Yes, you can see the feature description Dekel added.
> > >>> This patch also updates all the PMDs support an LRO for non-0 value.
> > >>
> > >> Of course I can see the updates Matan, my point is "What happens if
> > >> PMD doesn't provide 'max_lro_pkt_size'",
> > >> 1) There is no check for it right, so it is acceptable?
> > >
> > > There is check.
> > > If the capability is 0, any non-zero configuration will fail.
> > >
> > >> 2) Are we making this filed mandatory to provide for PMDs, it is easy
> > >> to make new fields mandatory for PMDs but is this really necessary?
> > >
> > > Yes, for consistence.
> > >
> > >>>
> > >>> as same as max rx pkt len, no?
> > >>>
> > >>>> - What do you think setting 'max_lro_pkt_size' config value to what
> > >>>> PMD provided if application doesn't provide it?
> > >>> Same answers as above.
> > >>>
> > >>
> > >> If the application doesn't care about the value, as has been the case till
> > >> now, and has not provided an explicit 'max_lro_pkt_size', why not have the
> > >> ethdev level use the value provided by the PMD instead of failing?
> > >
> > > Again, the same question can be asked about max rx pkt len.
> > >
> > > It looks like the packet size is a very important value which should be set
> > > by the application.
> > >
> > > Previous applications had no option to configure it, so they haven't
> > > configured it (probably covered it somehow); I think it was our miss not to
> > > supply this info.
> > >
> > > Let's do it the same way as we do max rx pkt len (as this patch's main idea).
> > > Later, we can change both to another meaning.
> > >
> >
> > I think that "'max_rx_pkt_len' does it" is not a good reason to introduce a
> > new mandatory config option for the application.
>
> It is mandatory only if LRO offload is configured.
So max_rx_pkt_len will remain the max size of one packet,
while max_lro_len will be the max accumulated size for each LRO session?
BTW, I think that for ixgbe max lro is RTE_IPV4_MAX_PKT_LEN.
ixgbe_vf, as I remember, doesn’t support LRO at all.
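To make that distinction concrete, a minimal sketch of an application setting both limits under the proposed API; the field names follow the patch, the numeric values are examples only:

#include <rte_ethdev.h>

static int
configure_port(uint16_t port_id)
{
	struct rte_eth_conf conf = { 0 };

	conf.rxmode.offloads = DEV_RX_OFFLOAD_JUMBO_FRAME |
			       DEV_RX_OFFLOAD_TCP_LRO;
	conf.rxmode.max_rx_pkt_len = 9000;	/* max size of one packet */
	conf.rxmode.max_lro_pkt_size = 65535;	/* max size of one LRO session */

	return rte_eth_dev_configure(port_id, 1, 1, &conf);
}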
>
> > Will it work, if:
> > - If application doesn't provide this value, use the PMD max
>
> May cause a problem if the mbuf size is not enough for the PMD maximum.
Another question: what will happen if the PMD ignores that value and
generates packets bigger than requested?
>
> > - If both the application and the PMD don't provide this value, fail on configure()?
>
> It will work.
> In my opinion - not ideal.
>
> Matan
>
^ permalink raw reply [flat|nested] 79+ messages in thread
* Re: [dpdk-dev] [PATCH v4 1/3] ethdev: support API to set max LRO packet size
2019-11-08 13:11 ` Ananyev, Konstantin
@ 2019-11-08 14:10 ` Dekel Peled
2019-11-08 14:52 ` Ananyev, Konstantin
0 siblings, 1 reply; 79+ messages in thread
From: Dekel Peled @ 2019-11-08 14:10 UTC (permalink / raw)
To: Ananyev, Konstantin, Matan Azrad, Yigit, Ferruh, Mcnamara, John,
Kovacevic, Marko, nhorman, ajit.khaparde, somnath.kotur, Burakov,
Anatoly, xuanziyang2, cloud.wangxiaoyun, zhouguoyang, Lu,
Wenzhuo, Shahaf Shuler, Slava Ovsiienko, rmody, shshaikh,
maxime.coquelin, Bie, Tiwei, Wang, Zhihong, yongwang,
Thomas Monjalon, arybchenko, Wu, Jingjing, Iremonger, Bernard
Cc: dev
Thanks, PSB.
> -----Original Message-----
> From: Ananyev, Konstantin <konstantin.ananyev@intel.com>
> Sent: Friday, November 8, 2019 3:11 PM
> To: Matan Azrad <matan@mellanox.com>; Yigit, Ferruh
> <ferruh.yigit@intel.com>; Dekel Peled <dekelp@mellanox.com>; Mcnamara,
> John <john.mcnamara@intel.com>; Kovacevic, Marko
> <marko.kovacevic@intel.com>; nhorman@tuxdriver.com;
> ajit.khaparde@broadcom.com; somnath.kotur@broadcom.com; Burakov,
> Anatoly <anatoly.burakov@intel.com>; xuanziyang2@huawei.com;
> cloud.wangxiaoyun@huawei.com; zhouguoyang@huawei.com; Lu, Wenzhuo
> <wenzhuo.lu@intel.com>; Shahaf Shuler <shahafs@mellanox.com>; Slava
> Ovsiienko <viacheslavo@mellanox.com>; rmody@marvell.com;
> shshaikh@marvell.com; maxime.coquelin@redhat.com; Bie, Tiwei
> <tiwei.bie@intel.com>; Wang, Zhihong <zhihong.wang@intel.com>;
> yongwang@vmware.com; Thomas Monjalon <thomas@monjalon.net>;
> arybchenko@solarflare.com; Wu, Jingjing <jingjing.wu@intel.com>;
> Iremonger, Bernard <bernard.iremonger@intel.com>
> Cc: dev@dpdk.org
> Subject: RE: [dpdk-dev] [PATCH v4 1/3] ethdev: support API to set max LRO
> packet size
>
>
>
> > -----Original Message-----
> > From: Matan Azrad <matan@mellanox.com>
> > Sent: Friday, November 8, 2019 11:56 AM
> > To: Yigit, Ferruh <ferruh.yigit@intel.com>; Dekel Peled
> > <dekelp@mellanox.com>; Mcnamara, John <john.mcnamara@intel.com>;
> > Kovacevic, Marko <marko.kovacevic@intel.com>;
> nhorman@tuxdriver.com;
> > ajit.khaparde@broadcom.com; somnath.kotur@broadcom.com; Burakov,
> > Anatoly <anatoly.burakov@intel.com>; xuanziyang2@huawei.com;
> > cloud.wangxiaoyun@huawei.com; zhouguoyang@huawei.com; Lu,
> Wenzhuo
> > <wenzhuo.lu@intel.com>; Ananyev, Konstantin
> > <konstantin.ananyev@intel.com>; Shahaf Shuler
> <shahafs@mellanox.com>;
> > Slava Ovsiienko <viacheslavo@mellanox.com>; rmody@marvell.com;
> > shshaikh@marvell.com; maxime.coquelin@redhat.com; Bie, Tiwei
> > <tiwei.bie@intel.com>; Wang, Zhihong <zhihong.wang@intel.com>;
> > yongwang@vmware.com; Thomas Monjalon <thomas@monjalon.net>;
> > arybchenko@solarflare.com; Wu, Jingjing <jingjing.wu@intel.com>;
> > Iremonger, Bernard <bernard.iremonger@intel.com>
> > Cc: dev@dpdk.org
> > Subject: RE: [dpdk-dev] [PATCH v4 1/3] ethdev: support API to set max
> > LRO packet size
> >
> >
> >
> > From: Ferruh Yigit
> > > On 11/8/2019 10:10 AM, Matan Azrad wrote:
> > > >
> > > >
> > > > From: Ferruh Yigit
> > > >> On 11/8/2019 6:54 AM, Matan Azrad wrote:
> > > >>> Hi
> > > >>>
> > > >>> From: Ferruh Yigit
> > > >>>> On 11/7/2019 12:35 PM, Dekel Peled wrote:
> > > >>>>> @@ -1266,6 +1286,18 @@ struct rte_eth_dev *
> > > >>>>>
> > > >>>> RTE_ETHER_MAX_LEN;
> > > >>>>> }
> > > >>>>>
> > > >>>>> + /*
> > > >>>>> + * If LRO is enabled, check that the maximum aggregated
> > > packet
> > > >>>>> + * size is supported by the configured device.
> > > >>>>> + */
> > > >>>>> + if (dev_conf->rxmode.offloads &
> > > DEV_RX_OFFLOAD_TCP_LRO) {
> > > >>>>> + ret = check_lro_pkt_size(
> > > >>>>> + port_id, dev_conf-
> > > >>>>> rxmode.max_lro_pkt_size,
> > > >>>>> + dev_info.max_lro_pkt_size);
> > > >>>>> + if (ret != 0)
> > > >>>>> + goto rollback;
> > > >>>>> + }
> > > >>>>> +
> > > >>>>
> > > >>>> This check forces applications that enable LRO to provide
> > > >> 'max_lro_pkt_size'
> > > >>>> config value.
> > > >>>
> > > >>> Yes.(we can break an API, we noticed it)
> > > >>
> > > >> I am not talking about API/ABI breakage, that part is OK.
> > > >> With this check, if the application requested LRO offload but not
> > > >> provided 'max_lro_pkt_size' value, device configuration will fail.
> > > >>
> > > > Yes
> > > >> Can there be a case application is good with whatever the PMD can
> > > >> support as max?
> > > > Yes can be - you know, we can do everything we want but it is
> > > > better to be
> > > consistent:
> > > > Due to the fact of Max rx pkt len field is mandatory for JUMBO
> > > > offload, max
> > > lro pkt len should be mandatory for LRO offload.
> > > >
> > > > So your question is actually why both, non-lro packets and LRO
> > > > packets max
> > > size are mandatory...
> > > >
> > > >
> > > > I think it should be important values for net applications management.
> > > > Also good for mbuf size managements.
> > > >
> > > >>>
> > > >>>> - Why it is mandatory now, how it was working before if it is
> > > >>>> mandatory value?
> > > >>>
> > > >>> It is the same as max_rx_pkt_len which is mandatory for jumbo
> > > >>> frame
> > > >> offload.
> > > >>> So now, when the user configures a LRO offload he must to set
> > > >>> max lro pkt
> > > >> len.
> > > >>> We don't want to confuse the user here with the max rx pkt len
> > > >> configurations and behaviors, they should be with same logic.
> > > >>>
> > > >>> This parameter defines well the LRO behavior.
> > > >>> Before this, each PMD took its own interpretation to what should
> > > >>> be the
> > > >> maximum size for LRO aggregated packets.
> > > >>> Now, the user must say what is his intension, and the ethdev can
> > > >>> limit it
> > > >> according to the device capability.
> > > >>> By this way, also, the PMD can organize\optimize its data-path more.
> > > >>> Also, the application can create different mempools for LRO
> > > >>> queues to
> > > >> allow bigger packet receiving for LRO traffic.
> > > >>>
> > > >>>> - What happens if PMD doesn't provide 'max_lro_pkt_size', so it is
> '0'?
> > > >>> Yes, you can see the feature description Dekel added.
> > > >>> This patch also updates all the PMDs support an LRO for non-0 value.
> > > >>
> > > >> Of course I can see the updates Matan, my point is "What happens
> > > >> if PMD doesn't provide 'max_lro_pkt_size'",
> > > >> 1) There is no check for it right, so it is acceptable?
> > > >
> > > > There is check.
> > > > If the capability is 0, any non-zero configuration will fail.
> > > >
> > > >> 2) Are we making this filed mandatory to provide for PMDs, it is
> > > >> easy to make new fields mandatory for PMDs but is this really
> necessary?
> > > >
> > > > Yes, for consistence.
> > > >
> > > >>>
> > > >>> as same as max rx pkt len, no?
> > > >>>
> > > >>>> - What do you think setting 'max_lro_pkt_size' config value to
> > > >>>> what PMD provided if application doesn't provide it?
> > > >>> Same answers as above.
> > > >>>
> > > >>
> > > >> If application doesn't care the value, as it has been till now,
> > > >> and not provided explicit 'max_lro_pkt_size', why not ethdev
> > > >> level use the value provided by PMD instead of failing?
> > > >
> > > > Again, same question we can ask on max rx pkt len.
> > > >
> > > > Looks like the packet size is very important value which should be
> > > > set by
> > > the application.
> > > >
> > > > Previous applications have no option to configure it, so they
> > > > haven't
> > > configure it, (probably cover it somehow) I think it is our miss to
> > > supply this info.
> > > >
> > > > Let's do it in same way as we do max rx pkt len (as this patch main idea).
> > > > Later, we can change both to other meaning.
> > > >
> > >
> > > I think it is not a good reason to introduce a new mandatory config
> > > option for application because of 'max_rx_pkt_len' does it.
> >
> > It is mandatory only if LRO offload is configured.
>
> So max_rx_pkt_len will remain max size of one packet, while max_lro_len
> will be max accumulate size for each LRO session?
>
Yes.
> BTW, I think that for ixgbe max lro is RTE_IPV4_MAX_PKT_LEN.
Please see my change in drivers/net/ixgbe/ixgbe_ethdev.c.
Change to RTE_IPV4_MAX_PKT_LEN?
> ixgbe_vf, as I remember, doesn’t support LRO at all.
Please see my change in drivers/net/ixgbe/ixgbe_vf_representor.c
Remove it?
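For reference, the kind of one-line capability change being discussed would look roughly like this (illustrative; the v5 patch is the authoritative version):

/* drivers/net/ixgbe/ixgbe_ethdev.c, in ixgbe_dev_info_get(),
 * with the suggestion applied: */
dev_info->max_lro_pkt_size = RTE_IPV4_MAX_PKT_LEN;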
>
> >
> > > Will it work, if:
> > > - If application doesn't provide this value, use the PMD max
> >
> > May cause a problem if the mbuf size is not enough for the PMD maximum.
>
> Another question, what will happen if PMD will ignore that value and will
> generate packets bigger then requested?
PMD should use this value and not ignore it.
>
> >
> > > - If both application and PMD doesn't provide this value, fail on
> configure()?
> >
> > It will work.
> > In my opinion - not ideal.
> >
> > Matan
> >
^ permalink raw reply [flat|nested] 79+ messages in thread
* Re: [dpdk-dev] [PATCH v4 1/3] ethdev: support API to set max LRO packet size
2019-11-08 14:10 ` Dekel Peled
@ 2019-11-08 14:52 ` Ananyev, Konstantin
2019-11-08 16:08 ` Dekel Peled
0 siblings, 1 reply; 79+ messages in thread
From: Ananyev, Konstantin @ 2019-11-08 14:52 UTC (permalink / raw)
To: Dekel Peled, Matan Azrad, Yigit, Ferruh, Mcnamara, John,
Kovacevic, Marko, nhorman, ajit.khaparde, somnath.kotur, Burakov,
Anatoly, xuanziyang2, cloud.wangxiaoyun, zhouguoyang, Lu,
Wenzhuo, Shahaf Shuler, Slava Ovsiienko, rmody, shshaikh,
maxime.coquelin, Bie, Tiwei, Wang, Zhihong, yongwang,
Thomas Monjalon, arybchenko, Wu, Jingjing, Iremonger, Bernard
Cc: dev
> > > > >>>> On 11/7/2019 12:35 PM, Dekel Peled wrote:
> > > > >>>>> @@ -1266,6 +1286,18 @@ struct rte_eth_dev *
> > > > >>>>>
> > > > >>>> RTE_ETHER_MAX_LEN;
> > > > >>>>> }
> > > > >>>>>
> > > > >>>>> + /*
> > > > >>>>> + * If LRO is enabled, check that the maximum aggregated
> > > > packet
> > > > >>>>> + * size is supported by the configured device.
> > > > >>>>> + */
> > > > >>>>> + if (dev_conf->rxmode.offloads &
> > > > DEV_RX_OFFLOAD_TCP_LRO) {
> > > > >>>>> + ret = check_lro_pkt_size(
> > > > >>>>> + port_id, dev_conf-
> > > > >>>>> rxmode.max_lro_pkt_size,
> > > > >>>>> + dev_info.max_lro_pkt_size);
> > > > >>>>> + if (ret != 0)
> > > > >>>>> + goto rollback;
> > > > >>>>> + }
> > > > >>>>> +
> > > > >>>>
> > > > >>>> This check forces applications that enable LRO to provide
> > > > >> 'max_lro_pkt_size'
> > > > >>>> config value.
> > > > >>>
> > > > >>> Yes.(we can break an API, we noticed it)
> > > > >>
> > > > >> I am not talking about API/ABI breakage, that part is OK.
> > > > >> With this check, if the application requested LRO offload but not
> > > > >> provided 'max_lro_pkt_size' value, device configuration will fail.
> > > > >>
> > > > > Yes
> > > > >> Can there be a case application is good with whatever the PMD can
> > > > >> support as max?
> > > > > Yes can be - you know, we can do everything we want but it is
> > > > > better to be
> > > > consistent:
> > > > > Due to the fact of Max rx pkt len field is mandatory for JUMBO
> > > > > offload, max
> > > > lro pkt len should be mandatory for LRO offload.
> > > > >
> > > > > So your question is actually why both, non-lro packets and LRO
> > > > > packets max
> > > > size are mandatory...
> > > > >
> > > > >
> > > > > I think it should be important values for net applications management.
> > > > > Also good for mbuf size managements.
> > > > >
> > > > >>>
> > > > >>>> - Why it is mandatory now, how it was working before if it is
> > > > >>>> mandatory value?
> > > > >>>
> > > > >>> It is the same as max_rx_pkt_len which is mandatory for jumbo
> > > > >>> frame
> > > > >> offload.
> > > > >>> So now, when the user configures a LRO offload he must to set
> > > > >>> max lro pkt
> > > > >> len.
> > > > >>> We don't want to confuse the user here with the max rx pkt len
> > > > >> configurations and behaviors, they should be with same logic.
> > > > >>>
> > > > >>> This parameter defines well the LRO behavior.
> > > > >>> Before this, each PMD took its own interpretation to what should
> > > > >>> be the
> > > > >> maximum size for LRO aggregated packets.
> > > > >>> Now, the user must say what is his intension, and the ethdev can
> > > > >>> limit it
> > > > >> according to the device capability.
> > > > >>> By this way, also, the PMD can organize\optimize its data-path more.
> > > > >>> Also, the application can create different mempools for LRO
> > > > >>> queues to
> > > > >> allow bigger packet receiving for LRO traffic.
> > > > >>>
> > > > >>>> - What happens if PMD doesn't provide 'max_lro_pkt_size', so it is
> > '0'?
> > > > >>> Yes, you can see the feature description Dekel added.
> > > > >>> This patch also updates all the PMDs support an LRO for non-0 value.
> > > > >>
> > > > >> Of course I can see the updates Matan, my point is "What happens
> > > > >> if PMD doesn't provide 'max_lro_pkt_size'",
> > > > >> 1) There is no check for it right, so it is acceptable?
> > > > >
> > > > > There is check.
> > > > > If the capability is 0, any non-zero configuration will fail.
> > > > >
> > > > >> 2) Are we making this filed mandatory to provide for PMDs, it is
> > > > >> easy to make new fields mandatory for PMDs but is this really
> > necessary?
> > > > >
> > > > > Yes, for consistence.
> > > > >
> > > > >>>
> > > > >>> as same as max rx pkt len, no?
> > > > >>>
> > > > >>>> - What do you think setting 'max_lro_pkt_size' config value to
> > > > >>>> what PMD provided if application doesn't provide it?
> > > > >>> Same answers as above.
> > > > >>>
> > > > >>
> > > > >> If application doesn't care the value, as it has been till now,
> > > > >> and not provided explicit 'max_lro_pkt_size', why not ethdev
> > > > >> level use the value provided by PMD instead of failing?
> > > > >
> > > > > Again, same question we can ask on max rx pkt len.
> > > > >
> > > > > Looks like the packet size is very important value which should be
> > > > > set by
> > > > the application.
> > > > >
> > > > > Previous applications have no option to configure it, so they
> > > > > haven't
> > > > configure it, (probably cover it somehow) I think it is our miss to
> > > > supply this info.
> > > > >
> > > > > Let's do it in same way as we do max rx pkt len (as this patch main idea).
> > > > > Later, we can change both to other meaning.
> > > > >
> > > >
> > > > I think it is not a good reason to introduce a new mandatory config
> > > > option for application because of 'max_rx_pkt_len' does it.
> > >
> > > It is mandatory only if LRO offload is configured.
> >
> > So max_rx_pkt_len will remain max size of one packet, while max_lro_len
> > will be max accumulate size for each LRO session?
> >
>
> Yes.
>
> > BTW, I think that for ixgbe max lro is RTE_IPV4_MAX_PKT_LEN.
>
> Please see my change in drivers/net/ixgbe/ixgbe_ethdev.c.
> Change to RTE_IPV4_MAX_PKT_LEN?
>
> > ixgbe_vf, as I remember, doesn’t support LRO at all.
>
> Please see my change in drivers/net/ixgbe/ixgbe_vf_representor.c
> Remove it?
Yes, please for both.
>
> >
> > >
> > > > Will it work, if:
> > > > - If application doesn't provide this value, use the PMD max
> > >
> > > May cause a problem if the mbuf size is not enough for the PMD maximum.
> >
> > Another question, what will happen if PMD will ignore that value and will
> > generate packets bigger then requested?
>
> PMD should use this value and not ignore it.
Hmm, ok, but this patch updates the mlx5 driver only...
I suppose you expect other PMD maintainers to do the job for their PMDs, right?
If so, are they aware of (and do they agree with) this new hard requirement and the changes required?
Again, what should a PMD do if it can't support the exact value?
Let's say the user asked for max_lro_size=20KB but the PMD can do only 16KB or 24KB?
Should it fail, or round to the smallest, or ...?
Actually I wonder: should it really be a hard requirement, or more like guidance to the PMD?
Why does the app need an *exact* value for the LRO size?
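One possible answer to the 20KB-on-16KB/24KB example would be to round the request down instead of failing; a sketch of such a policy (only one option, not something the patch mandates):

/* Hypothetical PMD policy: round the requested LRO size down to the
 * nearest size the HW can program; a result of 0 means the request is
 * below the HW minimum and should be rejected. */
static uint32_t
lro_round_down(uint32_t requested, uint32_t hw_granularity)
{
	return requested - (requested % hw_granularity);
}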
> >
> > >
> > > > - If both application and PMD doesn't provide this value, fail on
> > configure()?
> > >
> > > It will work.
> > > In my opinion - not ideal.
> > >
> > > Matan
> > >
^ permalink raw reply [flat|nested] 79+ messages in thread
* Re: [dpdk-dev] [PATCH v4 1/3] ethdev: support API to set max LRO packet size
2019-11-08 14:52 ` Ananyev, Konstantin
@ 2019-11-08 16:08 ` Dekel Peled
2019-11-08 16:28 ` Ananyev, Konstantin
0 siblings, 1 reply; 79+ messages in thread
From: Dekel Peled @ 2019-11-08 16:08 UTC (permalink / raw)
To: Ananyev, Konstantin, Matan Azrad, Yigit, Ferruh, Mcnamara, John,
Kovacevic, Marko, nhorman, ajit.khaparde, somnath.kotur, Burakov,
Anatoly, xuanziyang2, cloud.wangxiaoyun, zhouguoyang, Lu,
Wenzhuo, Shahaf Shuler, Slava Ovsiienko, rmody, shshaikh,
maxime.coquelin, Bie, Tiwei, Wang, Zhihong, yongwang,
Thomas Monjalon, arybchenko, Wu, Jingjing, Iremonger, Bernard
Cc: dev
Thanks, PSB.
> -----Original Message-----
> From: Ananyev, Konstantin <konstantin.ananyev@intel.com>
> Sent: Friday, November 8, 2019 4:53 PM
> To: Dekel Peled <dekelp@mellanox.com>; Matan Azrad
> <matan@mellanox.com>; Yigit, Ferruh <ferruh.yigit@intel.com>; Mcnamara,
> John <john.mcnamara@intel.com>; Kovacevic, Marko
> <marko.kovacevic@intel.com>; nhorman@tuxdriver.com;
> ajit.khaparde@broadcom.com; somnath.kotur@broadcom.com; Burakov,
> Anatoly <anatoly.burakov@intel.com>; xuanziyang2@huawei.com;
> cloud.wangxiaoyun@huawei.com; zhouguoyang@huawei.com; Lu, Wenzhuo
> <wenzhuo.lu@intel.com>; Shahaf Shuler <shahafs@mellanox.com>; Slava
> Ovsiienko <viacheslavo@mellanox.com>; rmody@marvell.com;
> shshaikh@marvell.com; maxime.coquelin@redhat.com; Bie, Tiwei
> <tiwei.bie@intel.com>; Wang, Zhihong <zhihong.wang@intel.com>;
> yongwang@vmware.com; Thomas Monjalon <thomas@monjalon.net>;
> arybchenko@solarflare.com; Wu, Jingjing <jingjing.wu@intel.com>;
> Iremonger, Bernard <bernard.iremonger@intel.com>
> Cc: dev@dpdk.org
> Subject: RE: [dpdk-dev] [PATCH v4 1/3] ethdev: support API to set max LRO
> packet size
>
>
> > > > > >>>> On 11/7/2019 12:35 PM, Dekel Peled wrote:
> > > > > >>>>> @@ -1266,6 +1286,18 @@ struct rte_eth_dev *
> > > > > >>>>>
> > > > > >>>> RTE_ETHER_MAX_LEN;
> > > > > >>>>> }
> > > > > >>>>>
> > > > > >>>>> + /*
> > > > > >>>>> + * If LRO is enabled, check that the maximum
> aggregated
> > > > > packet
> > > > > >>>>> + * size is supported by the configured device.
> > > > > >>>>> + */
> > > > > >>>>> + if (dev_conf->rxmode.offloads &
> > > > > DEV_RX_OFFLOAD_TCP_LRO) {
> > > > > >>>>> + ret = check_lro_pkt_size(
> > > > > >>>>> + port_id, dev_conf-
> > > > > >>>>> rxmode.max_lro_pkt_size,
> > > > > >>>>> + dev_info.max_lro_pkt_size);
> > > > > >>>>> + if (ret != 0)
> > > > > >>>>> + goto rollback;
> > > > > >>>>> + }
> > > > > >>>>> +
> > > > > >>>>
> > > > > >>>> This check forces applications that enable LRO to provide
> > > > > >> 'max_lro_pkt_size'
> > > > > >>>> config value.
> > > > > >>>
> > > > > >>> Yes.(we can break an API, we noticed it)
> > > > > >>
> > > > > >> I am not talking about API/ABI breakage, that part is OK.
> > > > > >> With this check, if the application requested LRO offload but
> > > > > >> not provided 'max_lro_pkt_size' value, device configuration will
> fail.
> > > > > >>
> > > > > > Yes
> > > > > >> Can there be a case application is good with whatever the PMD
> > > > > >> can support as max?
> > > > > > Yes can be - you know, we can do everything we want but it is
> > > > > > better to be
> > > > > consistent:
> > > > > > Due to the fact of Max rx pkt len field is mandatory for JUMBO
> > > > > > offload, max
> > > > > lro pkt len should be mandatory for LRO offload.
> > > > > >
> > > > > > So your question is actually why both, non-lro packets and LRO
> > > > > > packets max
> > > > > size are mandatory...
> > > > > >
> > > > > >
> > > > > > I think it should be important values for net applications
> management.
> > > > > > Also good for mbuf size managements.
> > > > > >
> > > > > >>>
> > > > > >>>> - Why it is mandatory now, how it was working before if it
> > > > > >>>> is mandatory value?
> > > > > >>>
> > > > > >>> It is the same as max_rx_pkt_len which is mandatory for
> > > > > >>> jumbo frame
> > > > > >> offload.
> > > > > >>> So now, when the user configures a LRO offload he must to
> > > > > >>> set max lro pkt
> > > > > >> len.
> > > > > >>> We don't want to confuse the user here with the max rx pkt
> > > > > >>> len
> > > > > >> configurations and behaviors, they should be with same logic.
> > > > > >>>
> > > > > >>> This parameter defines well the LRO behavior.
> > > > > >>> Before this, each PMD took its own interpretation to what
> > > > > >>> should be the
> > > > > >> maximum size for LRO aggregated packets.
> > > > > >>> Now, the user must say what is his intension, and the ethdev
> > > > > >>> can limit it
> > > > > >> according to the device capability.
> > > > > >>> By this way, also, the PMD can organize\optimize its data-path
> more.
> > > > > >>> Also, the application can create different mempools for LRO
> > > > > >>> queues to
> > > > > >> allow bigger packet receiving for LRO traffic.
> > > > > >>>
> > > > > >>>> - What happens if PMD doesn't provide 'max_lro_pkt_size',
> > > > > >>>> so it is
> > > '0'?
> > > > > >>> Yes, you can see the feature description Dekel added.
> > > > > >>> This patch also updates all the PMDs support an LRO for non-0
> value.
> > > > > >>
> > > > > >> Of course I can see the updates Matan, my point is "What
> > > > > >> happens if PMD doesn't provide 'max_lro_pkt_size'",
> > > > > >> 1) There is no check for it right, so it is acceptable?
> > > > > >
> > > > > > There is check.
> > > > > > If the capability is 0, any non-zero configuration will fail.
> > > > > >
> > > > > >> 2) Are we making this filed mandatory to provide for PMDs, it
> > > > > >> is easy to make new fields mandatory for PMDs but is this
> > > > > >> really
> > > necessary?
> > > > > >
> > > > > > Yes, for consistence.
> > > > > >
> > > > > >>>
> > > > > >>> as same as max rx pkt len, no?
> > > > > >>>
> > > > > >>>> - What do you think setting 'max_lro_pkt_size' config value
> > > > > >>>> to what PMD provided if application doesn't provide it?
> > > > > >>> Same answers as above.
> > > > > >>>
> > > > > >>
> > > > > >> If application doesn't care the value, as it has been till
> > > > > >> now, and not provided explicit 'max_lro_pkt_size', why not
> > > > > >> ethdev level use the value provided by PMD instead of failing?
> > > > > >
> > > > > > Again, same question we can ask on max rx pkt len.
> > > > > >
> > > > > > Looks like the packet size is very important value which
> > > > > > should be set by
> > > > > the application.
> > > > > >
> > > > > > Previous applications have no option to configure it, so they
> > > > > > haven't
> > > > > configure it, (probably cover it somehow) I think it is our miss
> > > > > to supply this info.
> > > > > >
> > > > > > Let's do it in same way as we do max rx pkt len (as this patch main
> idea).
> > > > > > Later, we can change both to other meaning.
> > > > > >
> > > > >
> > > > > I think it is not a good reason to introduce a new mandatory
> > > > > config option for application because of 'max_rx_pkt_len' does it.
> > > >
> > > > It is mandatory only if LRO offload is configured.
> > >
> > > So max_rx_pkt_len will remain max size of one packet, while
> > > max_lro_len will be max accumulate size for each LRO session?
> > >
> >
> > Yes.
> >
> > > BTW, I think that for ixgbe max lro is RTE_IPV4_MAX_PKT_LEN.
> >
> > Please see my change in drivers/net/ixgbe/ixgbe_ethdev.c.
> > Change to RTE_IPV4_MAX_PKT_LEN?
> >
> > > ixgbe_vf, as I remember, doesn’t support LRO at all.
> >
> > Please see my change in drivers/net/ixgbe/ixgbe_vf_representor.c
> > Remove it?
>
> Yes, please for both.
Will change in v5.
>
> >
> > >
> > > >
> > > > > Will it work, if:
> > > > > - If application doesn't provide this value, use the PMD max
> > > >
> > > > May cause a problem if the mbuf size is not enough for the PMD
> maximum.
> > >
> > > Another question, what will happen if PMD will ignore that value and
> > > will generate packets bigger then requested?
> >
> > PMD should use this value and not ignore it.
>
> Hmm, ok, but this patch updates the mlx5 driver only...
> I suppose you expect other PMD maintainers to do the job for their PMDs,
> right?
> If so, are they aware (and agree) for this new hard requirement and changes
> required?
> Again what PMD should do if it can't support exact value?
> Let say user asked max_lro_size=20KB but PMD can do only 16KB or 24KB?
> Should it fail, or round to smallest, or ...?
>
> Actually I wonder, should it really be a hard requirement or more like a
> guidance to PMD?
> Why app needs and *exact* value for LRO size?
The exact value should be configured to the HW as the LRO session limit.
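In other words, on the PMD side the configured value feeds straight into the Rx context; roughly (names illustrative, not the actual mlx5 code):

/* During Rx setup the driver reads the exact configured limit and
 * programs it into the device's LRO session context. */
uint32_t lro_limit = dev->data->dev_conf.rxmode.max_lro_pkt_size;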
>
>
> > >
> > > >
> > > > > - If both application and PMD doesn't provide this value, fail
> > > > > on
> > > configure()?
> > > >
> > > > It will work.
> > > > In my opinion - not ideal.
> > > >
> > > > Matan
> > > >
^ permalink raw reply [flat|nested] 79+ messages in thread
* Re: [dpdk-dev] [PATCH v4 1/3] ethdev: support API to set max LRO packet size
2019-11-08 16:08 ` Dekel Peled
@ 2019-11-08 16:28 ` Ananyev, Konstantin
2019-11-09 18:26 ` Matan Azrad
0 siblings, 1 reply; 79+ messages in thread
From: Ananyev, Konstantin @ 2019-11-08 16:28 UTC (permalink / raw)
To: Dekel Peled, Matan Azrad, Yigit, Ferruh, Mcnamara, John,
Kovacevic, Marko, nhorman, ajit.khaparde, somnath.kotur, Burakov,
Anatoly, xuanziyang2, cloud.wangxiaoyun, zhouguoyang, Lu,
Wenzhuo, Shahaf Shuler, Slava Ovsiienko, rmody, shshaikh,
maxime.coquelin, Bie, Tiwei, Wang, Zhihong, yongwang,
Thomas Monjalon, arybchenko, Wu, Jingjing, Iremonger, Bernard
Cc: dev
> >
> >
> > > > > > >>>> On 11/7/2019 12:35 PM, Dekel Peled wrote:
> > > > > > >>>>> @@ -1266,6 +1286,18 @@ struct rte_eth_dev *
> > > > > > >>>>>
> > > > > > >>>> RTE_ETHER_MAX_LEN;
> > > > > > >>>>> }
> > > > > > >>>>>
> > > > > > >>>>> + /*
> > > > > > >>>>> + * If LRO is enabled, check that the maximum
> > aggregated
> > > > > > packet
> > > > > > >>>>> + * size is supported by the configured device.
> > > > > > >>>>> + */
> > > > > > >>>>> + if (dev_conf->rxmode.offloads &
> > > > > > DEV_RX_OFFLOAD_TCP_LRO) {
> > > > > > >>>>> + ret = check_lro_pkt_size(
> > > > > > >>>>> + port_id, dev_conf-
> > > > > > >>>>> rxmode.max_lro_pkt_size,
> > > > > > >>>>> + dev_info.max_lro_pkt_size);
> > > > > > >>>>> + if (ret != 0)
> > > > > > >>>>> + goto rollback;
> > > > > > >>>>> + }
> > > > > > >>>>> +
> > > > > > >>>>
> > > > > > >>>> This check forces applications that enable LRO to provide
> > > > > > >> 'max_lro_pkt_size'
> > > > > > >>>> config value.
> > > > > > >>>
> > > > > > >>> Yes.(we can break an API, we noticed it)
> > > > > > >>
> > > > > > >> I am not talking about API/ABI breakage, that part is OK.
> > > > > > >> With this check, if the application requested LRO offload but
> > > > > > >> not provided 'max_lro_pkt_size' value, device configuration will
> > fail.
> > > > > > >>
> > > > > > > Yes
> > > > > > >> Can there be a case application is good with whatever the PMD
> > > > > > >> can support as max?
> > > > > > > Yes can be - you know, we can do everything we want but it is
> > > > > > > better to be
> > > > > > consistent:
> > > > > > > Due to the fact of Max rx pkt len field is mandatory for JUMBO
> > > > > > > offload, max
> > > > > > lro pkt len should be mandatory for LRO offload.
> > > > > > >
> > > > > > > So your question is actually why both, non-lro packets and LRO
> > > > > > > packets max
> > > > > > size are mandatory...
> > > > > > >
> > > > > > >
> > > > > > > I think it should be important values for net applications
> > management.
> > > > > > > Also good for mbuf size managements.
> > > > > > >
> > > > > > >>>
> > > > > > >>>> - Why it is mandatory now, how it was working before if it
> > > > > > >>>> is mandatory value?
> > > > > > >>>
> > > > > > >>> It is the same as max_rx_pkt_len which is mandatory for
> > > > > > >>> jumbo frame
> > > > > > >> offload.
> > > > > > >>> So now, when the user configures a LRO offload he must to
> > > > > > >>> set max lro pkt
> > > > > > >> len.
> > > > > > >>> We don't want to confuse the user here with the max rx pkt
> > > > > > >>> len
> > > > > > >> configurations and behaviors, they should be with same logic.
> > > > > > >>>
> > > > > > >>> This parameter defines well the LRO behavior.
> > > > > > >>> Before this, each PMD took its own interpretation to what
> > > > > > >>> should be the
> > > > > > >> maximum size for LRO aggregated packets.
> > > > > > >>> Now, the user must say what is his intension, and the ethdev
> > > > > > >>> can limit it
> > > > > > >> according to the device capability.
> > > > > > >>> By this way, also, the PMD can organize\optimize its data-path
> > more.
> > > > > > >>> Also, the application can create different mempools for LRO
> > > > > > >>> queues to
> > > > > > >> allow bigger packet receiving for LRO traffic.
> > > > > > >>>
> > > > > > >>>> - What happens if PMD doesn't provide 'max_lro_pkt_size',
> > > > > > >>>> so it is
> > > > '0'?
> > > > > > >>> Yes, you can see the feature description Dekel added.
> > > > > > >>> This patch also updates all the PMDs support an LRO for non-0
> > value.
> > > > > > >>
> > > > > > >> Of course I can see the updates Matan, my point is "What
> > > > > > >> happens if PMD doesn't provide 'max_lro_pkt_size'",
> > > > > > >> 1) There is no check for it right, so it is acceptable?
> > > > > > >
> > > > > > > There is check.
> > > > > > > If the capability is 0, any non-zero configuration will fail.
> > > > > > >
> > > > > > >> 2) Are we making this filed mandatory to provide for PMDs, it
> > > > > > >> is easy to make new fields mandatory for PMDs but is this
> > > > > > >> really
> > > > necessary?
> > > > > > >
> > > > > > > Yes, for consistence.
> > > > > > >
> > > > > > >>>
> > > > > > >>> as same as max rx pkt len, no?
> > > > > > >>>
> > > > > > >>>> - What do you think setting 'max_lro_pkt_size' config value
> > > > > > >>>> to what PMD provided if application doesn't provide it?
> > > > > > >>> Same answers as above.
> > > > > > >>>
> > > > > > >>
> > > > > > >> If application doesn't care the value, as it has been till
> > > > > > >> now, and not provided explicit 'max_lro_pkt_size', why not
> > > > > > >> ethdev level use the value provided by PMD instead of failing?
> > > > > > >
> > > > > > > Again, same question we can ask on max rx pkt len.
> > > > > > >
> > > > > > > Looks like the packet size is very important value which
> > > > > > > should be set by
> > > > > > the application.
> > > > > > >
> > > > > > > Previous applications have no option to configure it, so they
> > > > > > > haven't
> > > > > > configure it, (probably cover it somehow) I think it is our miss
> > > > > > to supply this info.
> > > > > > >
> > > > > > > Let's do it in same way as we do max rx pkt len (as this patch main
> > idea).
> > > > > > > Later, we can change both to other meaning.
> > > > > > >
> > > > > >
> > > > > > I think it is not a good reason to introduce a new mandatory
> > > > > > config option for application because of 'max_rx_pkt_len' does it.
> > > > >
> > > > > It is mandatory only if LRO offload is configured.
> > > >
> > > > So max_rx_pkt_len will remain max size of one packet, while
> > > > max_lro_len will be max accumulate size for each LRO session?
> > > >
> > >
> > > Yes.
> > >
> > > > BTW, I think that for ixgbe max lro is RTE_IPV4_MAX_PKT_LEN.
> > >
> > > Please see my change in drivers/net/ixgbe/ixgbe_ethdev.c.
> > > Change to RTE_IPV4_MAX_PKT_LEN?
> > >
> > > > ixgbe_vf, as I remember, doesn’t support LRO at all.
> > >
> > > Please see my change in drivers/net/ixgbe/ixgbe_vf_representor.c
> > > Remove it?
> >
> > Yes, please for both.
>
> Will change in v5.
>
> >
> > >
> > > >
> > > > >
> > > > > > Will it work, if:
> > > > > > - If application doesn't provide this value, use the PMD max
> > > > >
> > > > > May cause a problem if the mbuf size is not enough for the PMD
> > maximum.
> > > >
> > > > Another question, what will happen if PMD will ignore that value and
> > > > will generate packets bigger then requested?
> > >
> > > PMD should use this value and not ignore it.
> >
> > Hmm, ok, but this patch updates the mlx5 driver only...
> > I suppose you expect other PMD maintainers to do the job for their PMDs,
> > right?
> > If so, are they aware (and agree) for this new hard requirement and changes
> > required?
> > Again what PMD should do if it can't support exact value?
> > Let say user asked max_lro_size=20KB but PMD can do only 16KB or 24KB?
> > Should it fail, or round to smallest, or ...?
> >
> > Actually I wonder, should it really be a hard requirement or more like a
> > guidance to PMD?
> > Why app needs and *exact* value for LRO size?
>
> The exact value should be configured to HW as LRO session limit.
But what if the HW can't support this exact value - see the example above?
In fact, shouldn't we allow the PMD to forbid the user from configuring the max LRO size?
Let's say if in dev_info max_lro_size==0, then the PMD doesn't support LRO size
configuration at all.
That way PMDs that do support LRO, but don't want to (or can't)
support a configurable LRO size, will stay untouched.
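Sketched against the hunk quoted earlier, that suggestion would read roughly as follows (illustrative only):

/* Treat dev_info.max_lro_pkt_size == 0 as "size not configurable"
 * and skip the validation instead of failing configure(). */
if ((dev_conf->rxmode.offloads & DEV_RX_OFFLOAD_TCP_LRO) &&
    dev_info.max_lro_pkt_size != 0) {
	ret = check_lro_pkt_size(port_id,
			dev_conf->rxmode.max_lro_pkt_size,
			dev_info.max_lro_pkt_size);
	if (ret != 0)
		goto rollback;
}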
>
> >
> >
> > > >
> > > > >
> > > > > > - If both application and PMD doesn't provide this value, fail
> > > > > > on
> > > > configure()?
> > > > >
> > > > > It will work.
> > > > > In my opinion - not ideal.
> > > > >
> > > > > Matan
> > > > >
^ permalink raw reply [flat|nested] 79+ messages in thread
* Re: [dpdk-dev] [PATCH v4 1/3] ethdev: support API to set max LRO packet size
2019-11-08 16:28 ` Ananyev, Konstantin
@ 2019-11-09 18:26 ` Matan Azrad
2019-11-10 22:51 ` Ananyev, Konstantin
0 siblings, 1 reply; 79+ messages in thread
From: Matan Azrad @ 2019-11-09 18:26 UTC (permalink / raw)
To: Ananyev, Konstantin, Dekel Peled, Yigit, Ferruh, Mcnamara, John,
Kovacevic, Marko, nhorman, ajit.khaparde, somnath.kotur, Burakov,
Anatoly, xuanziyang2, cloud.wangxiaoyun, zhouguoyang, Lu,
Wenzhuo, Shahaf Shuler, Slava Ovsiienko, rmody, shshaikh,
maxime.coquelin, Bie, Tiwei, Wang, Zhihong, yongwang,
Thomas Monjalon, arybchenko, Wu, Jingjing, Iremonger, Bernard
Cc: dev
Hi Konstantin
From: Ananyev, Konstantin
> Sent: Friday, November 8, 2019 6:29 PM
> To: Dekel Peled <dekelp@mellanox.com>; Matan Azrad
> <matan@mellanox.com>; Yigit, Ferruh <ferruh.yigit@intel.com>; Mcnamara,
> John <john.mcnamara@intel.com>; Kovacevic, Marko
> <marko.kovacevic@intel.com>; nhorman@tuxdriver.com;
> ajit.khaparde@broadcom.com; somnath.kotur@broadcom.com; Burakov,
> Anatoly <anatoly.burakov@intel.com>; xuanziyang2@huawei.com;
> cloud.wangxiaoyun@huawei.com; zhouguoyang@huawei.com; Lu, Wenzhuo
> <wenzhuo.lu@intel.com>; Shahaf Shuler <shahafs@mellanox.com>; Slava
> Ovsiienko <viacheslavo@mellanox.com>; rmody@marvell.com;
> shshaikh@marvell.com; maxime.coquelin@redhat.com; Bie, Tiwei
> <tiwei.bie@intel.com>; Wang, Zhihong <zhihong.wang@intel.com>;
> yongwang@vmware.com; Thomas Monjalon <thomas@monjalon.net>;
> arybchenko@solarflare.com; Wu, Jingjing <jingjing.wu@intel.com>;
> Iremonger, Bernard <bernard.iremonger@intel.com>
> Cc: dev@dpdk.org
> Subject: RE: [dpdk-dev] [PATCH v4 1/3] ethdev: support API to set max LRO
> packet size
>
>
> > >
> > >
> > > > > > > >>>> On 11/7/2019 12:35 PM, Dekel Peled wrote:
> > > > > > > >>>>> @@ -1266,6 +1286,18 @@ struct rte_eth_dev *
> > > > > > > >>>>>
> > > > > > > >>>> RTE_ETHER_MAX_LEN;
> > > > > > > >>>>> }
> > > > > > > >>>>>
> > > > > > > >>>>> + /*
> > > > > > > >>>>> + * If LRO is enabled, check that the maximum
> > > aggregated
> > > > > > > packet
> > > > > > > >>>>> + * size is supported by the configured device.
> > > > > > > >>>>> + */
> > > > > > > >>>>> + if (dev_conf->rxmode.offloads &
> > > > > > > DEV_RX_OFFLOAD_TCP_LRO) {
> > > > > > > >>>>> + ret = check_lro_pkt_size(
> > > > > > > >>>>> + port_id, dev_conf-
> > > > > > > >>>>> rxmode.max_lro_pkt_size,
> > > > > > > >>>>> + dev_info.max_lro_pkt_size);
> > > > > > > >>>>> + if (ret != 0)
> > > > > > > >>>>> + goto rollback;
> > > > > > > >>>>> + }
> > > > > > > >>>>> +
> > > > > > > >>>>
> > > > > > > >>>> This check forces applications that enable LRO to
> > > > > > > >>>> provide
> > > > > > > >> 'max_lro_pkt_size'
> > > > > > > >>>> config value.
> > > > > > > >>>
> > > > > > > >>> Yes.(we can break an API, we noticed it)
> > > > > > > >>
> > > > > > > >> I am not talking about API/ABI breakage, that part is OK.
> > > > > > > >> With this check, if the application requested LRO offload
> > > > > > > >> but not provided 'max_lro_pkt_size' value, device
> > > > > > > >> configuration will
> > > fail.
> > > > > > > >>
> > > > > > > > Yes
> > > > > > > >> Can there be a case application is good with whatever the
> > > > > > > >> PMD can support as max?
> > > > > > > > Yes can be - you know, we can do everything we want but it
> > > > > > > > is better to be
> > > > > > > consistent:
> > > > > > > > Due to the fact of Max rx pkt len field is mandatory for
> > > > > > > > JUMBO offload, max
> > > > > > > lro pkt len should be mandatory for LRO offload.
> > > > > > > >
> > > > > > > > So your question is actually why both, non-lro packets and
> > > > > > > > LRO packets max
> > > > > > > size are mandatory...
> > > > > > > >
> > > > > > > >
> > > > > > > > I think it should be important values for net applications
> > > management.
> > > > > > > > Also good for mbuf size managements.
> > > > > > > >
> > > > > > > >>>
> > > > > > > >>>> - Why it is mandatory now, how it was working before if
> > > > > > > >>>> it is mandatory value?
> > > > > > > >>>
> > > > > > > >>> It is the same as max_rx_pkt_len which is mandatory for
> > > > > > > >>> jumbo frame
> > > > > > > >> offload.
> > > > > > > >>> So now, when the user configures a LRO offload he must
> > > > > > > >>> to set max lro pkt
> > > > > > > >> len.
> > > > > > > >>> We don't want to confuse the user here with the max rx
> > > > > > > >>> pkt len
> > > > > > > >> configurations and behaviors, they should be with same logic.
> > > > > > > >>>
> > > > > > > >>> This parameter defines well the LRO behavior.
> > > > > > > >>> Before this, each PMD took its own interpretation to
> > > > > > > >>> what should be the
> > > > > > > >> maximum size for LRO aggregated packets.
> > > > > > > >>> Now, the user must say what is his intension, and the
> > > > > > > >>> ethdev can limit it
> > > > > > > >> according to the device capability.
> > > > > > > >>> By this way, also, the PMD can organize\optimize its
> > > > > > > >>> data-path
> > > more.
> > > > > > > >>> Also, the application can create different mempools for
> > > > > > > >>> LRO queues to
> > > > > > > >> allow bigger packet receiving for LRO traffic.
> > > > > > > >>>
> > > > > > > >>>> - What happens if PMD doesn't provide
> > > > > > > >>>> 'max_lro_pkt_size', so it is
> > > > > '0'?
> > > > > > > >>> Yes, you can see the feature description Dekel added.
> > > > > > > >>> This patch also updates all the PMDs support an LRO for
> > > > > > > >>> non-0
> > > value.
> > > > > > > >>
> > > > > > > >> Of course I can see the updates Matan, my point is "What
> > > > > > > >> happens if PMD doesn't provide 'max_lro_pkt_size'",
> > > > > > > >> 1) There is no check for it right, so it is acceptable?
> > > > > > > >
> > > > > > > > There is check.
> > > > > > > > If the capability is 0, any non-zero configuration will fail.
> > > > > > > >
> > > > > > > >> 2) Are we making this filed mandatory to provide for
> > > > > > > >> PMDs, it is easy to make new fields mandatory for PMDs
> > > > > > > >> but is this really
> > > > > necessary?
> > > > > > > >
> > > > > > > > Yes, for consistence.
> > > > > > > >
> > > > > > > >>>
> > > > > > > >>> as same as max rx pkt len, no?
> > > > > > > >>>
> > > > > > > >>>> - What do you think setting 'max_lro_pkt_size' config
> > > > > > > >>>> value to what PMD provided if application doesn't provide
> it?
> > > > > > > >>> Same answers as above.
> > > > > > > >>>
> > > > > > > >>
> > > > > > > >> If application doesn't care the value, as it has been
> > > > > > > >> till now, and not provided explicit 'max_lro_pkt_size',
> > > > > > > >> why not ethdev level use the value provided by PMD instead
> of failing?
> > > > > > > >
> > > > > > > > Again, same question we can ask on max rx pkt len.
> > > > > > > >
> > > > > > > > Looks like the packet size is very important value which
> > > > > > > > should be set by
> > > > > > > the application.
> > > > > > > >
> > > > > > > > Previous applications have no option to configure it, so
> > > > > > > > they haven't
> > > > > > > configure it, (probably cover it somehow) I think it is our
> > > > > > > miss to supply this info.
> > > > > > > >
> > > > > > > > Let's do it in same way as we do max rx pkt len (as this
> > > > > > > > patch main
> > > idea).
> > > > > > > > Later, we can change both to other meaning.
> > > > > > > >
> > > > > > >
> > > > > > > I think it is not a good reason to introduce a new mandatory
> > > > > > > config option for application because of 'max_rx_pkt_len' does it.
> > > > > >
> > > > > > It is mandatory only if LRO offload is configured.
> > > > >
> > > > > So max_rx_pkt_len will remain max size of one packet, while
> > > > > max_lro_len will be max accumulate size for each LRO session?
> > > > >
> > > >
> > > > Yes.
> > > >
> > > > > BTW, I think that for ixgbe max lro is RTE_IPV4_MAX_PKT_LEN.
> > > >
> > > > Please see my change in drivers/net/ixgbe/ixgbe_ethdev.c.
> > > > Change to RTE_IPV4_MAX_PKT_LEN?
> > > >
> > > > > ixgbe_vf, as I remember, doesn’t support LRO at all.
> > > >
> > > > Please see my change in drivers/net/ixgbe/ixgbe_vf_representor.c
> > > > Remove it?
> > >
> > > Yes, please for both.
> >
> > Will change in v5.
> >
> > >
> > > >
> > > > >
> > > > > >
> > > > > > > Will it work, if:
> > > > > > > - If application doesn't provide this value, use the PMD max
> > > > > >
> > > > > > May cause a problem if the mbuf size is not enough for the PMD
> > > maximum.
> > > > >
> > > > > Another question, what will happen if PMD will ignore that value
> > > > > and will generate packets bigger then requested?
> > > >
> > > > PMD should use this value and not ignore it.
> > >
> > > Hmm, ok, but this patch updates the mlx5 driver only...
> > > I suppose you expect other PMD maintainers to do the job for their
> > > PMDs, right?
> > > If so, are they aware (and agree) for this new hard requirement and
> > > changes required?
> > > Again what PMD should do if it can't support exact value?
> > > Let say user asked max_lro_size=20KB but PMD can do only 16KB or
> 24KB?
> > > Should it fail, or round to smallest, or ...?
> > >
> > > Actually I wonder, should it really be a hard requirement or more
> > > like a guidance to PMD?
> > > Why app needs and *exact* value for LRO size?
> >
> > The exact value should be configured to HW as LRO session limit.
>
> But if the HW can't support this exact value, see the example above?
> In fact, shouldn't we allow PMD to forbid user to configure max LRO size?
> Let say if in dev_info max_lro_size==0, then PMD doesn't support LRO size
> configuration at all.
> That way PMDs who do support LRO, but don't want to (can't to) support
> configurable LRO size will stay untouched.
Each HW should support a packet size limitation no matter whether it is an LRO packet or not:
How does the PMD limit the packet size for the max rx packet len configuration?
How does the PMD limit the packet size to the mbuf size?
^ permalink raw reply [flat|nested] 79+ messages in thread
* Re: [dpdk-dev] [PATCH v4 1/3] ethdev: support API to set max LRO packet size
2019-11-09 18:26 ` Matan Azrad
@ 2019-11-10 22:51 ` Ananyev, Konstantin
2019-11-11 6:53 ` Matan Azrad
0 siblings, 1 reply; 79+ messages in thread
From: Ananyev, Konstantin @ 2019-11-10 22:51 UTC (permalink / raw)
To: Matan Azrad, Dekel Peled, Yigit, Ferruh, Mcnamara, John,
Kovacevic, Marko, nhorman, ajit.khaparde, somnath.kotur, Burakov,
Anatoly, xuanziyang2, cloud.wangxiaoyun, zhouguoyang, Lu,
Wenzhuo, Shahaf Shuler, Slava Ovsiienko, rmody, shshaikh,
maxime.coquelin, Bie, Tiwei, Wang, Zhihong, yongwang,
Thomas Monjalon, arybchenko, Wu, Jingjing, Iremonger, Bernard
Cc: dev
Hi Matan,
> > > > > > > > >>>> On 11/7/2019 12:35 PM, Dekel Peled wrote:
> > > > > > > > >>>>> @@ -1266,6 +1286,18 @@ struct rte_eth_dev *
> > > > > > > > >>>>>
> > > > > > > > >>>> RTE_ETHER_MAX_LEN;
> > > > > > > > >>>>> }
> > > > > > > > >>>>>
> > > > > > > > >>>>> + /*
> > > > > > > > >>>>> + * If LRO is enabled, check that the maximum
> > > > aggregated
> > > > > > > > packet
> > > > > > > > >>>>> + * size is supported by the configured device.
> > > > > > > > >>>>> + */
> > > > > > > > >>>>> + if (dev_conf->rxmode.offloads &
> > > > > > > > DEV_RX_OFFLOAD_TCP_LRO) {
> > > > > > > > >>>>> + ret = check_lro_pkt_size(
> > > > > > > > >>>>> + port_id, dev_conf-
> > > > > > > > >>>>> rxmode.max_lro_pkt_size,
> > > > > > > > >>>>> + dev_info.max_lro_pkt_size);
> > > > > > > > >>>>> + if (ret != 0)
> > > > > > > > >>>>> + goto rollback;
> > > > > > > > >>>>> + }
> > > > > > > > >>>>> +
> > > > > > > > >>>>
> > > > > > > > >>>> This check forces applications that enable LRO to
> > > > > > > > >>>> provide
> > > > > > > > >> 'max_lro_pkt_size'
> > > > > > > > >>>> config value.
> > > > > > > > >>>
> > > > > > > > >>> Yes.(we can break an API, we noticed it)
> > > > > > > > >>
> > > > > > > > >> I am not talking about API/ABI breakage, that part is OK.
> > > > > > > > >> With this check, if the application requested LRO offload
> > > > > > > > >> but not provided 'max_lro_pkt_size' value, device
> > > > > > > > >> configuration will
> > > > fail.
> > > > > > > > >>
> > > > > > > > > Yes
> > > > > > > > >> Can there be a case application is good with whatever the
> > > > > > > > >> PMD can support as max?
> > > > > > > > > Yes can be - you know, we can do everything we want but it
> > > > > > > > > is better to be
> > > > > > > > consistent:
> > > > > > > > > Due to the fact of Max rx pkt len field is mandatory for
> > > > > > > > > JUMBO offload, max
> > > > > > > > lro pkt len should be mandatory for LRO offload.
> > > > > > > > >
> > > > > > > > > So your question is actually why both, non-lro packets and
> > > > > > > > > LRO packets max
> > > > > > > > size are mandatory...
> > > > > > > > >
> > > > > > > > >
> > > > > > > > > I think it should be important values for net applications
> > > > management.
> > > > > > > > > Also good for mbuf size managements.
> > > > > > > > >
> > > > > > > > >>>
> > > > > > > > >>>> - Why it is mandatory now, how it was working before if
> > > > > > > > >>>> it is mandatory value?
> > > > > > > > >>>
> > > > > > > > >>> It is the same as max_rx_pkt_len which is mandatory for
> > > > > > > > >>> jumbo frame
> > > > > > > > >> offload.
> > > > > > > > >>> So now, when the user configures a LRO offload he must
> > > > > > > > >>> to set max lro pkt
> > > > > > > > >> len.
> > > > > > > > >>> We don't want to confuse the user here with the max rx
> > > > > > > > >>> pkt len
> > > > > > > > >> configurations and behaviors, they should be with same logic.
> > > > > > > > >>>
> > > > > > > > >>> This parameter defines well the LRO behavior.
> > > > > > > > >>> Before this, each PMD took its own interpretation to
> > > > > > > > >>> what should be the
> > > > > > > > >> maximum size for LRO aggregated packets.
> > > > > > > > >>> Now, the user must say what is his intension, and the
> > > > > > > > >>> ethdev can limit it
> > > > > > > > >> according to the device capability.
> > > > > > > > >>> By this way, also, the PMD can organize\optimize its
> > > > > > > > >>> data-path
> > > > more.
> > > > > > > > >>> Also, the application can create different mempools for
> > > > > > > > >>> LRO queues to
> > > > > > > > >> allow bigger packet receiving for LRO traffic.
> > > > > > > > >>>
> > > > > > > > >>>> - What happens if PMD doesn't provide
> > > > > > > > >>>> 'max_lro_pkt_size', so it is
> > > > > > '0'?
> > > > > > > > >>> Yes, you can see the feature description Dekel added.
> > > > > > > > >>> This patch also updates all the PMDs support an LRO for
> > > > > > > > >>> non-0
> > > > value.
> > > > > > > > >>
> > > > > > > > >> Of course I can see the updates Matan, my point is "What
> > > > > > > > >> happens if PMD doesn't provide 'max_lro_pkt_size'",
> > > > > > > > >> 1) There is no check for it right, so it is acceptable?
> > > > > > > > >
> > > > > > > > > There is check.
> > > > > > > > > If the capability is 0, any non-zero configuration will fail.
> > > > > > > > >
> > > > > > > > >> 2) Are we making this filed mandatory to provide for
> > > > > > > > >> PMDs, it is easy to make new fields mandatory for PMDs
> > > > > > > > >> but is this really
> > > > > > necessary?
> > > > > > > > >
> > > > > > > > > Yes, for consistence.
> > > > > > > > >
> > > > > > > > >>>
> > > > > > > > >>> as same as max rx pkt len, no?
> > > > > > > > >>>
> > > > > > > > >>>> - What do you think setting 'max_lro_pkt_size' config
> > > > > > > > >>>> value to what PMD provided if application doesn't provide
> > it?
> > > > > > > > >>> Same answers as above.
> > > > > > > > >>>
> > > > > > > > >>
> > > > > > > > >> If application doesn't care the value, as it has been
> > > > > > > > >> till now, and not provided explicit 'max_lro_pkt_size',
> > > > > > > > >> why not ethdev level use the value provided by PMD instead
> > of failing?
> > > > > > > > >
> > > > > > > > > Again, same question we can ask on max rx pkt len.
> > > > > > > > >
> > > > > > > > > Looks like the packet size is very important value which
> > > > > > > > > should be set by
> > > > > > > > the application.
> > > > > > > > >
> > > > > > > > > Previous applications have no option to configure it, so
> > > > > > > > > they haven't
> > > > > > > > configure it, (probably cover it somehow) I think it is our
> > > > > > > > miss to supply this info.
> > > > > > > > >
> > > > > > > > > Let's do it in same way as we do max rx pkt len (as this
> > > > > > > > > patch main
> > > > idea).
> > > > > > > > > Later, we can change both to other meaning.
> > > > > > > > >
> > > > > > > >
> > > > > > > > I think it is not a good reason to introduce a new mandatory
> > > > > > > > config option for application because of 'max_rx_pkt_len' does it.
> > > > > > >
> > > > > > > It is mandatory only if LRO offload is configured.
> > > > > >
> > > > > > So max_rx_pkt_len will remain max size of one packet, while
> > > > > > max_lro_len will be max accumulate size for each LRO session?
> > > > > >
> > > > >
> > > > > Yes.
> > > > >
> > > > > > BTW, I think that for ixgbe max lro is RTE_IPV4_MAX_PKT_LEN.
> > > > >
> > > > > Please see my change in drivers/net/ixgbe/ixgbe_ethdev.c.
> > > > > Change to RTE_IPV4_MAX_PKT_LEN?
> > > > >
> > > > > > ixgbe_vf, as I remember, doesn’t support LRO at all.
> > > > >
> > > > > Please see my change in drivers/net/ixgbe/ixgbe_vf_representor.c
> > > > > Remove it?
> > > >
> > > > Yes, please for both.
> > >
> > > Will change in v5.
> > >
> > > >
> > > > >
> > > > > >
> > > > > > >
> > > > > > > > Will it work, if:
> > > > > > > > - If application doesn't provide this value, use the PMD max
> > > > > > >
> > > > > > > May cause a problem if the mbuf size is not enough for the PMD
> > > > maximum.
> > > > > >
> > > > > > Another question, what will happen if PMD will ignore that value
> > > > > > and will generate packets bigger then requested?
> > > > >
> > > > > PMD should use this value and not ignore it.
> > > >
> > > > Hmm, ok, but this patch updates the mlx5 driver only...
> > > > I suppose you expect other PMD maintainers to do the job for their
> > > > PMDs, right?
> > > > If so, are they aware (and agree) for this new hard requirement and
> > > > changes required?
> > > > Again what PMD should do if it can't support exact value?
> > > > Let say user asked max_lro_size=20KB but PMD can do only 16KB or
> > 24KB?
> > > > Should it fail, or round to smallest, or ...?
> > > >
> > > > Actually I wonder, should it really be a hard requirement or more
> > > > like a guidance to PMD?
> > > > Why app needs and *exact* value for LRO size?
> > >
> > > The exact value should be configured to HW as LRO session limit.
> >
> > But if the HW can't support this exact value, see the example above?
> > In fact, shouldn't we allow PMD to forbid user to configure max LRO size?
> > Let say if in dev_info max_lro_size==0, then PMD doesn't support LRO size
> > configuration at all.
> > That way PMDs who do support LRO, but don't want to (can't to) support
> > configurable LRO size will stay untouched.
>
> Each HW should support packet size limitation no matter if it is LRO packet or not:
> How does the PMD limit the packet size for max rx packet len conf?
> How does the PMD limit the packet size for the mbuf size?
Not sure I understand your statement and questions above...
For sure the PMD has to support max_rx_pkt_len, but how does it relate to max_lro?
Konstantin
^ permalink raw reply [flat|nested] 79+ messages in thread
* Re: [dpdk-dev] [PATCH v4 1/3] ethdev: support API to set max LRO packet size
2019-11-10 22:51 ` Ananyev, Konstantin
@ 2019-11-11 6:53 ` Matan Azrad
0 siblings, 0 replies; 79+ messages in thread
From: Matan Azrad @ 2019-11-11 6:53 UTC (permalink / raw)
To: Ananyev, Konstantin, Dekel Peled, Yigit, Ferruh, Mcnamara, John,
Kovacevic, Marko, nhorman, ajit.khaparde, somnath.kotur, Burakov,
Anatoly, xuanziyang2, cloud.wangxiaoyun, zhouguoyang, Lu,
Wenzhuo, Shahaf Shuler, Slava Ovsiienko, rmody, shshaikh,
maxime.coquelin, Bie, Tiwei, Wang, Zhihong, yongwang,
Thomas Monjalon, arybchenko, Wu, Jingjing, Iremonger, Bernard
Cc: dev
Hi
From: Ananyev, Konstantin
> Hi Matan,
>
> > > > > > > > > >>>> On 11/7/2019 12:35 PM, Dekel Peled wrote:
> > > > > > > > > >>>>> @@ -1266,6 +1286,18 @@ struct rte_eth_dev *
> > > > > > > > > >>>>>
> > > > > > > > > >>>> RTE_ETHER_MAX_LEN;
> > > > > > > > > >>>>> }
> > > > > > > > > >>>>>
> > > > > > > > > >>>>> + /*
> > > > > > > > > >>>>> + * If LRO is enabled, check that the maximum
> > > > > aggregated
> > > > > > > > > packet
> > > > > > > > > >>>>> + * size is supported by the configured device.
> > > > > > > > > >>>>> + */
> > > > > > > > > >>>>> + if (dev_conf->rxmode.offloads &
> > > > > > > > > DEV_RX_OFFLOAD_TCP_LRO) {
> > > > > > > > > >>>>> + ret = check_lro_pkt_size(
> > > > > > > > > >>>>> + port_id, dev_conf-
> > > > > > > > > >>>>> rxmode.max_lro_pkt_size,
> > > > > > > > > >>>>> + dev_info.max_lro_pkt_size);
> > > > > > > > > >>>>> + if (ret != 0)
> > > > > > > > > >>>>> + goto rollback;
> > > > > > > > > >>>>> + }
> > > > > > > > > >>>>> +
> > > > > > > > > >>>>
> > > > > > > > > >>>> This check forces applications that enable LRO to
> > > > > > > > > >>>> provide
> > > > > > > > > >> 'max_lro_pkt_size'
> > > > > > > > > >>>> config value.
> > > > > > > > > >>>
> > > > > > > > > >>> Yes.(we can break an API, we noticed it)
> > > > > > > > > >>
> > > > > > > > > >> I am not talking about API/ABI breakage, that part is OK.
> > > > > > > > > >> With this check, if the application requested LRO
> > > > > > > > > >> offload but not provided 'max_lro_pkt_size' value,
> > > > > > > > > >> device configuration will
> > > > > fail.
> > > > > > > > > >>
> > > > > > > > > > Yes
> > > > > > > > > >> Can there be a case where the application is good with whatever
> > > > > > > > > >> the PMD can support as max?
> > > > > > > > > > Yes, it can be - you know, we can do anything we want,
> > > > > > > > > > but it is better to be
> > > > > > > > > consistent:
> > > > > > > > > > Since the max rx pkt len field is mandatory
> > > > > > > > > > for the JUMBO offload, max
> > > > > > > > > lro pkt len should be mandatory for the LRO offload.
> > > > > > > > > >
> > > > > > > > > > So your question is actually why both the non-LRO packets'
> > > > > > > > > > and LRO packets' max
> > > > > > > > > sizes are mandatory...
> > > > > > > > > >
> > > > > > > > > >
> > > > > > > > > > I think these are important values for network
> > > > > > > > > > application
> > > > > management.
> > > > > > > > > > They are also good for mbuf size management.
> > > > > > > > > >
> > > > > > > > > >>>
> > > > > > > > > >>>> - Why is it mandatory now? How was it working
> > > > > > > > > >>>> before if it is a mandatory value?
> > > > > > > > > >>>
> > > > > > > > > >>> It is the same as max_rx_pkt_len which is mandatory
> > > > > > > > > >>> for jumbo frame
> > > > > > > > > >> offload.
> > > > > > > > > >>> So now, when the user configures the LRO offload he
> > > > > > > > > >>> must set max lro pkt
> > > > > > > > > >> len.
> > > > > > > > > >>> We don't want to confuse the user here with the max
> > > > > > > > > >>> rx pkt len
> > > > > > > > > >> configurations and behaviors; they should follow the same
> logic.
> > > > > > > > > >>>
> > > > > > > > > >>> This parameter defines the LRO behavior well.
> > > > > > > > > >>> Before this, each PMD took its own interpretation of
> > > > > > > > > >>> what the
> > > > > > > > > >> maximum size for LRO aggregated packets should be.
> > > > > > > > > >>> Now, the user must state his intention, and
> > > > > > > > > >>> the ethdev can limit it
> > > > > > > > > >> according to the device capability.
> > > > > > > > > >>> This way, the PMD can also organize/optimize its
> > > > > > > > > >>> data-path
> > > > > further.
> > > > > > > > > >>> Also, the application can create different mempools
> > > > > > > > > >>> for LRO queues to
> > > > > > > > > >> allow receiving bigger packets for LRO traffic.
> > > > > > > > > >>>
> > > > > > > > > >>>> - What happens if PMD doesn't provide
> > > > > > > > > >>>> 'max_lro_pkt_size', so it is
> > > > > > > '0'?
> > > > > > > > > >>> Yes, you can see the feature description Dekel added.
> > > > > > > > > >>> This patch also updates all the PMDs supporting LRO
> > > > > > > > > >>> to a
> > > > > > > > > >>> non-0
> > > > > value.
> > > > > > > > > >>
> > > > > > > > > >> Of course I can see the updates, Matan; my point is
> > > > > > > > > >> "What happens if PMD doesn't provide
> > > > > > > > > >> 'max_lro_pkt_size'",
> > > > > > > > > >> 1) There is no check for it, right? So is it acceptable?
> > > > > > > > > >
> > > > > > > > > > There is check.
> > > > > > > > > > If the capability is 0, any non-zero configuration will fail.
> > > > > > > > > >
> > > > > > > > > >> 2) Are we making this field mandatory to provide for
> > > > > > > > > >> PMDs? It is easy to make new fields mandatory for
> > > > > > > > > >> PMDs, but is this really
> > > > > > > necessary?
> > > > > > > > > >
> > > > > > > > > > Yes, for consistency.
> > > > > > > > > >
> > > > > > > > > >>>
> > > > > > > > > >>> the same as max rx pkt len, no?
> > > > > > > > > >>>
> > > > > > > > > >>>> - What do you think about setting the 'max_lro_pkt_size'
> > > > > > > > > >>>> config value to what the PMD provided if the application
> > > > > > > > > >>>> doesn't provide
> > > it?
> > > > > > > > > >>> Same answers as above.
> > > > > > > > > >>>
> > > > > > > > > >>
> > > > > > > > > >> If the application doesn't care about the value, as it has been
> > > > > > > > > >> till now, and has not provided an explicit
> > > > > > > > > >> 'max_lro_pkt_size', why can't the ethdev level use the
> > > > > > > > > >> value provided by the PMD instead
> > > of failing?
> > > > > > > > > >
> > > > > > > > > > Again, the same question can be asked about max rx pkt len.
> > > > > > > > > >
> > > > > > > > > > Looks like the packet size is a very important value
> > > > > > > > > > which should be set by
> > > > > > > > > the application.
> > > > > > > > > >
> > > > > > > > > > Previous applications had no option to configure it,
> > > > > > > > > > so they haven't
> > > > > > > > > configured it (probably covering it somehow); I think it is
> > > > > > > > > our miss not to supply this info.
> > > > > > > > > >
> > > > > > > > > > Let's do it in the same way as we do max rx pkt len (as
> > > > > > > > > > this patch's main
> > > > > idea).
> > > > > > > > > > Later, we can change both to another meaning.
> > > > > > > > > >
> > > > > > > > >
> > > > > > > > > I think it is not a good reason to introduce a new
> > > > > > > > > mandatory config option for applications just because
> 'max_rx_pkt_len' does it.
> > > > > > > >
> > > > > > > > It is mandatory only if LRO offload is configured.
> > > > > > >
> > > > > > > So max_rx_pkt_len will remain the max size of one packet, while
> > > > > > > max_lro_len will be the max accumulated size for each LRO session?
> > > > > > >
> > > > > >
> > > > > > Yes.
> > > > > >
> > > > > > > BTW, I think that for ixgbe max lro is RTE_IPV4_MAX_PKT_LEN.
> > > > > >
> > > > > > Please see my change in drivers/net/ixgbe/ixgbe_ethdev.c.
> > > > > > Change to RTE_IPV4_MAX_PKT_LEN?
> > > > > >
> > > > > > > ixgbe_vf, as I remember, doesn’t support LRO at all.
> > > > > >
> > > > > > Please see my change in
> > > > > > drivers/net/ixgbe/ixgbe_vf_representor.c
> > > > > > Remove it?
> > > > >
> > > > > Yes, please for both.
> > > >
> > > > Will change in v5.
> > > >
> > > > >
> > > > > >
> > > > > > >
> > > > > > > >
> > > > > > > > > Will it work, if:
> > > > > > > > > - If application doesn't provide this value, use the PMD
> > > > > > > > > max
> > > > > > > >
> > > > > > > > May cause a problem if the mbuf size is not enough for the
> > > > > > > > PMD
> > > > > maximum.
> > > > > > >
> > > > > > > Another question: what will happen if the PMD ignores that
> > > > > > > value and generates packets bigger than requested?
> > > > > >
> > > > > > PMD should use this value and not ignore it.
> > > > >
> > > > > Hmm, ok, but this patch updates the mlx5 driver only...
> > > > > I suppose you expect other PMD maintainers to do the job for
> > > > > their PMDs, right?
> > > > > If so, are they aware of (and do they agree to) this new hard requirement
> > > > > and the changes required?
> > > > > Again, what should the PMD do if it can't support the exact value?
> > > > > Let's say the user asked for max_lro_size=20KB but the PMD can do only 16KB or
> > > 24KB?
> > > > > Should it fail, round down to the smaller value, or ...?
> > > > >
> > > > > Actually I wonder, should it really be a hard requirement or
> > > > > more like guidance to the PMD?
> > > > > Why does the app need an *exact* value for LRO size?
> > > >
> > > > The exact value should be configured to the HW as the LRO session limit.
> > >
> > > But what if the HW can't support this exact value, as in the example above?
> > > In fact, shouldn't we allow the PMD to forbid the user from configuring the max LRO size?
> > > Let's say that if max_lro_size==0 in dev_info, the PMD doesn't support LRO
> > > size configuration at all.
> > > That way, PMDs that do support LRO, but don't want to (or can't)
> > > support a configurable LRO size, will stay untouched.
> >
> > Each HW should support packet size limitation regardless of whether it is an LRO packet
> or not:
> > How does the PMD limit the packet size for max rx packet len conf?
> > How does the PMD limit the packet size for the mbuf size?
>
> Not sure I understand your statement and questions above...
> For sure the PMD has to support max_rx_pktlen, but how does it relate to
> max_lro?
You said that the HW may not support LRO max size configuration.
I answered that, just as the HW can limit packets to the max_rx_pkt_len configuration, it can limit the LRO packet size here too.
To simplify:
Rx queues which are not configured to do the LRO offload should limit their packets to the max_rx_pkt_len field.
Rx queues which are configured to do the LRO offload should limit their packets to the new max_lro_pkt_size field.
In addition, both should limit the packet size to the mbuf size of the Rx mempool configured for the Rx queue (if scatter offload is not enabled).
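A rough sketch of that per-queue limiting (the helper is hypothetical, not from the patch; the mlx5 hunk in the next message implements the same selection):

#include <rte_ethdev.h>

/* Hypothetical helper: LRO queues are bounded by max_lro_pkt_size,
 * non-LRO queues by max_rx_pkt_len, and both by the mbuf data room
 * when scatter offload is not enabled. */
static uint32_t
effective_rx_limit(const struct rte_eth_rxmode *rxmode,
		   uint64_t queue_offloads, uint32_t mbuf_data_room)
{
	uint32_t limit = (queue_offloads & DEV_RX_OFFLOAD_TCP_LRO) ?
			rxmode->max_lro_pkt_size : rxmode->max_rx_pkt_len;

	if (!(queue_offloads & DEV_RX_OFFLOAD_SCATTER))
		limit = RTE_MIN(limit, mbuf_data_room);
	return limit;
}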
^ permalink raw reply [flat|nested] 79+ messages in thread
* [dpdk-dev] [PATCH v4 2/3] net/mlx5: use API to set max LRO packet size
2019-11-07 12:35 ` [dpdk-dev] [PATCH v4 " Dekel Peled
2019-11-07 12:35 ` [dpdk-dev] [PATCH v4 1/3] ethdev: " Dekel Peled
@ 2019-11-07 12:35 ` Dekel Peled
2019-11-08 9:12 ` Slava Ovsiienko
2019-11-07 12:35 ` [dpdk-dev] [PATCH v4 3/3] app/testpmd: " Dekel Peled
` (2 subsequent siblings)
4 siblings, 1 reply; 79+ messages in thread
From: Dekel Peled @ 2019-11-07 12:35 UTC (permalink / raw)
To: john.mcnamara, marko.kovacevic, nhorman, ajit.khaparde,
somnath.kotur, anatoly.burakov, xuanziyang2, cloud.wangxiaoyun,
zhouguoyang, wenzhuo.lu, konstantin.ananyev, matan, shahafs,
viacheslavo, rmody, shshaikh, maxime.coquelin, tiwei.bie,
zhihong.wang, yongwang, thomas, ferruh.yigit, arybchenko,
jingjing.wu, bernard.iremonger
Cc: dev
This patch implements use of the API for LRO aggregated packet
max size.
Rx queue creation is updated to use the relevant configuration.
Documentation is updated accordingly.
Signed-off-by: Dekel Peled <dekelp@mellanox.com>
---
doc/guides/nics/mlx5.rst | 2 ++
drivers/net/mlx5/mlx5_rxq.c | 4 +++-
2 files changed, 5 insertions(+), 1 deletion(-)
diff --git a/doc/guides/nics/mlx5.rst b/doc/guides/nics/mlx5.rst
index 4f1093f..3b10daf 100644
--- a/doc/guides/nics/mlx5.rst
+++ b/doc/guides/nics/mlx5.rst
@@ -207,6 +207,8 @@ Limitations
- KEEP_CRC offload cannot be supported with LRO.
- The first mbuf length, without head-room, must be big enough to include the
TCP header (122B).
+ - Rx queue with LRO offload enabled, receiving a non-LRO packet, can forward
+ it with size limited to max LRO size, not to max RX packet length.
Statistics
----------
diff --git a/drivers/net/mlx5/mlx5_rxq.c b/drivers/net/mlx5/mlx5_rxq.c
index 9423e7b..c725e14 100644
--- a/drivers/net/mlx5/mlx5_rxq.c
+++ b/drivers/net/mlx5/mlx5_rxq.c
@@ -1772,7 +1772,9 @@ struct mlx5_rxq_ctrl *
dev->data->dev_conf.rxmode.offloads;
unsigned int lro_on_queue = !!(offloads & DEV_RX_OFFLOAD_TCP_LRO);
const int mprq_en = mlx5_check_mprq_support(dev) > 0;
- unsigned int max_rx_pkt_len = dev->data->dev_conf.rxmode.max_rx_pkt_len;
+ unsigned int max_rx_pkt_len = lro_on_queue ?
+ dev->data->dev_conf.rxmode.max_lro_pkt_size :
+ dev->data->dev_conf.rxmode.max_rx_pkt_len;
unsigned int non_scatter_min_mbuf_size = max_rx_pkt_len +
RTE_PKTMBUF_HEADROOM;
unsigned int max_lro_size = 0;
--
1.8.3.1
^ permalink raw reply [flat|nested] 79+ messages in thread
* Re: [dpdk-dev] [PATCH v4 2/3] net/mlx5: use API to set max LRO packet size
2019-11-07 12:35 ` [dpdk-dev] [PATCH v4 2/3] net/mlx5: use " Dekel Peled
@ 2019-11-08 9:12 ` Slava Ovsiienko
2019-11-08 9:23 ` Ferruh Yigit
0 siblings, 1 reply; 79+ messages in thread
From: Slava Ovsiienko @ 2019-11-08 9:12 UTC (permalink / raw)
To: Dekel Peled, john.mcnamara, marko.kovacevic, nhorman,
ajit.khaparde, somnath.kotur, anatoly.burakov, xuanziyang2,
cloud.wangxiaoyun, zhouguoyang, wenzhuo.lu, konstantin.ananyev,
Matan Azrad, Shahaf Shuler, rmody, shshaikh, maxime.coquelin,
tiwei.bie, zhihong.wang, yongwang, Thomas Monjalon, ferruh.yigit,
arybchenko, jingjing.wu, bernard.iremonger
Cc: dev
> -----Original Message-----
> From: Dekel Peled <dekelp@mellanox.com>
> Sent: Thursday, November 7, 2019 14:35
> To: john.mcnamara@intel.com; marko.kovacevic@intel.com;
> nhorman@tuxdriver.com; ajit.khaparde@broadcom.com;
> somnath.kotur@broadcom.com; anatoly.burakov@intel.com;
> xuanziyang2@huawei.com; cloud.wangxiaoyun@huawei.com;
> zhouguoyang@huawei.com; wenzhuo.lu@intel.com;
> konstantin.ananyev@intel.com; Matan Azrad <matan@mellanox.com>;
> Shahaf Shuler <shahafs@mellanox.com>; Slava Ovsiienko
> <viacheslavo@mellanox.com>; rmody@marvell.com;
> shshaikh@marvell.com; maxime.coquelin@redhat.com;
> tiwei.bie@intel.com; zhihong.wang@intel.com; yongwang@vmware.com;
> Thomas Monjalon <thomas@monjalon.net>; ferruh.yigit@intel.com;
> arybchenko@solarflare.com; jingjing.wu@intel.com;
> bernard.iremonger@intel.com
> Cc: dev@dpdk.org
> Subject: [PATCH v4 2/3] net/mlx5: use API to set max LRO packet size
>
> This patch implements use of the API for LRO aggregated packet max size.
> Rx queue creation is updated to use the relevant configuration.
> Documentation is updated accordingly.
>
> Signed-off-by: Dekel Peled <dekelp@mellanox.com>
Acked-by: Viacheslav Ovsiienko <viacheslavo@mellanox.com>
^ permalink raw reply [flat|nested] 79+ messages in thread
* Re: [dpdk-dev] [PATCH v4 2/3] net/mlx5: use API to set max LRO packet size
2019-11-08 9:12 ` Slava Ovsiienko
@ 2019-11-08 9:23 ` Ferruh Yigit
0 siblings, 0 replies; 79+ messages in thread
From: Ferruh Yigit @ 2019-11-08 9:23 UTC (permalink / raw)
To: Slava Ovsiienko, Dekel Peled, john.mcnamara, marko.kovacevic,
nhorman, ajit.khaparde, somnath.kotur, anatoly.burakov,
xuanziyang2, cloud.wangxiaoyun, zhouguoyang, wenzhuo.lu,
konstantin.ananyev, Matan Azrad, Shahaf Shuler, rmody, shshaikh,
maxime.coquelin, tiwei.bie, zhihong.wang, yongwang,
Thomas Monjalon, arybchenko, jingjing.wu, bernard.iremonger
Cc: dev
On 11/8/2019 9:12 AM, Slava Ovsiienko wrote:
>> -----Original Message-----
>> From: Dekel Peled <dekelp@mellanox.com>
>> Sent: Thursday, November 7, 2019 14:35
>> To: john.mcnamara@intel.com; marko.kovacevic@intel.com;
>> nhorman@tuxdriver.com; ajit.khaparde@broadcom.com;
>> somnath.kotur@broadcom.com; anatoly.burakov@intel.com;
>> xuanziyang2@huawei.com; cloud.wangxiaoyun@huawei.com;
>> zhouguoyang@huawei.com; wenzhuo.lu@intel.com;
>> konstantin.ananyev@intel.com; Matan Azrad <matan@mellanox.com>;
>> Shahaf Shuler <shahafs@mellanox.com>; Slava Ovsiienko
>> <viacheslavo@mellanox.com>; rmody@marvell.com;
>> shshaikh@marvell.com; maxime.coquelin@redhat.com;
>> tiwei.bie@intel.com; zhihong.wang@intel.com; yongwang@vmware.com;
>> Thomas Monjalon <thomas@monjalon.net>; ferruh.yigit@intel.com;
>> arybchenko@solarflare.com; jingjing.wu@intel.com;
>> bernard.iremonger@intel.com
>> Cc: dev@dpdk.org
>> Subject: [PATCH v4 2/3] net/mlx5: use API to set max LRO packet size
>>
>> This patch implements use of the API for LRO aggregated packet max size.
>> Rx queue creation is updated to use the relevant configuration.
>> Documentation is updated accordingly.
>>
>> Signed-off-by: Dekel Peled <dekelp@mellanox.com>
> Acked-by: Viacheslav Ovsiienko <viacheslavo@mellanox.com>
>
This is an ethdev level API change that will affect multiple PMDs; shouldn't we
get more input than from a single company?
^ permalink raw reply [flat|nested] 79+ messages in thread
* [dpdk-dev] [PATCH v4 3/3] app/testpmd: use API to set max LRO packet size
2019-11-07 12:35 ` [dpdk-dev] [PATCH v4 " Dekel Peled
2019-11-07 12:35 ` [dpdk-dev] [PATCH v4 1/3] ethdev: " Dekel Peled
2019-11-07 12:35 ` [dpdk-dev] [PATCH v4 2/3] net/mlx5: use " Dekel Peled
@ 2019-11-07 12:35 ` Dekel Peled
2019-11-07 14:20 ` Iremonger, Bernard
2019-11-07 20:25 ` Ferruh Yigit
2019-11-08 6:28 ` [dpdk-dev] [PATCH v4 0/3] support " Matan Azrad
2019-11-08 16:42 ` [dpdk-dev] [PATCH v5 " Dekel Peled
4 siblings, 2 replies; 79+ messages in thread
From: Dekel Peled @ 2019-11-07 12:35 UTC (permalink / raw)
To: john.mcnamara, marko.kovacevic, nhorman, ajit.khaparde,
somnath.kotur, anatoly.burakov, xuanziyang2, cloud.wangxiaoyun,
zhouguoyang, wenzhuo.lu, konstantin.ananyev, matan, shahafs,
viacheslavo, rmody, shshaikh, maxime.coquelin, tiwei.bie,
zhihong.wang, yongwang, thomas, ferruh.yigit, arybchenko,
jingjing.wu, bernard.iremonger
Cc: dev
This patch implements use of the API for LRO aggregated packet
max size.
It adds command-line and runtime commands to configure this value,
and adds an option to show the supported value.
Documentation is updated accordingly.
Signed-off-by: Dekel Peled <dekelp@mellanox.com>
---
app/test-pmd/cmdline.c | 73 +++++++++++++++++++++++++++++
app/test-pmd/config.c | 2 +
app/test-pmd/parameters.c | 7 +++
app/test-pmd/testpmd.c | 1 +
doc/guides/testpmd_app_ug/run_app.rst | 5 ++
doc/guides/testpmd_app_ug/testpmd_funcs.rst | 9 ++++
6 files changed, 97 insertions(+)
diff --git a/app/test-pmd/cmdline.c b/app/test-pmd/cmdline.c
index 49c45a3..62bbc81 100644
--- a/app/test-pmd/cmdline.c
+++ b/app/test-pmd/cmdline.c
@@ -2037,6 +2037,78 @@ struct cmd_config_max_pkt_len_result {
},
};
+/* *** config max LRO aggregated packet size *** */
+struct cmd_config_max_lro_pkt_size_result {
+ cmdline_fixed_string_t port;
+ cmdline_fixed_string_t keyword;
+ cmdline_fixed_string_t all;
+ cmdline_fixed_string_t name;
+ uint32_t value;
+};
+
+static void
+cmd_config_max_lro_pkt_size_parsed(void *parsed_result,
+ __attribute__((unused)) struct cmdline *cl,
+ __attribute__((unused)) void *data)
+{
+ struct cmd_config_max_lro_pkt_size_result *res = parsed_result;
+ portid_t pid;
+
+ if (!all_ports_stopped()) {
+ printf("Please stop all ports first\n");
+ return;
+ }
+
+ RTE_ETH_FOREACH_DEV(pid) {
+ struct rte_port *port = &ports[pid];
+
+ if (!strcmp(res->name, "max-lro-pkt-size")) {
+ if (res->value ==
+ port->dev_conf.rxmode.max_lro_pkt_size)
+ return;
+
+ port->dev_conf.rxmode.max_lro_pkt_size = res->value;
+ } else {
+ printf("Unknown parameter\n");
+ return;
+ }
+ }
+
+ init_port_config();
+
+ cmd_reconfig_device_queue(RTE_PORT_ALL, 1, 1);
+}
+
+cmdline_parse_token_string_t cmd_config_max_lro_pkt_size_port =
+ TOKEN_STRING_INITIALIZER(struct cmd_config_max_lro_pkt_size_result,
+ port, "port");
+cmdline_parse_token_string_t cmd_config_max_lro_pkt_size_keyword =
+ TOKEN_STRING_INITIALIZER(struct cmd_config_max_lro_pkt_size_result,
+ keyword, "config");
+cmdline_parse_token_string_t cmd_config_max_lro_pkt_size_all =
+ TOKEN_STRING_INITIALIZER(struct cmd_config_max_lro_pkt_size_result,
+ all, "all");
+cmdline_parse_token_string_t cmd_config_max_lro_pkt_size_name =
+ TOKEN_STRING_INITIALIZER(struct cmd_config_max_lro_pkt_size_result,
+ name, "max-lro-pkt-size");
+cmdline_parse_token_num_t cmd_config_max_lro_pkt_size_value =
+ TOKEN_NUM_INITIALIZER(struct cmd_config_max_lro_pkt_size_result,
+ value, UINT32);
+
+cmdline_parse_inst_t cmd_config_max_lro_pkt_size = {
+ .f = cmd_config_max_lro_pkt_size_parsed,
+ .data = NULL,
+ .help_str = "port config all max-lro-pkt-size <value>",
+ .tokens = {
+ (void *)&cmd_config_max_lro_pkt_size_port,
+ (void *)&cmd_config_max_lro_pkt_size_keyword,
+ (void *)&cmd_config_max_lro_pkt_size_all,
+ (void *)&cmd_config_max_lro_pkt_size_name,
+ (void *)&cmd_config_max_lro_pkt_size_value,
+ NULL,
+ },
+};
+
/* *** configure port MTU *** */
struct cmd_config_mtu_result {
cmdline_fixed_string_t port;
@@ -19025,6 +19097,7 @@ struct cmd_show_port_supported_ptypes_result {
(cmdline_parse_inst_t *)&cmd_config_rx_tx,
(cmdline_parse_inst_t *)&cmd_config_mtu,
(cmdline_parse_inst_t *)&cmd_config_max_pkt_len,
+ (cmdline_parse_inst_t *)&cmd_config_max_lro_pkt_size,
(cmdline_parse_inst_t *)&cmd_config_rx_mode_flag,
(cmdline_parse_inst_t *)&cmd_config_rss,
(cmdline_parse_inst_t *)&cmd_config_rxtx_ring_size,
diff --git a/app/test-pmd/config.c b/app/test-pmd/config.c
index b603974..e1e5cf7 100644
--- a/app/test-pmd/config.c
+++ b/app/test-pmd/config.c
@@ -616,6 +616,8 @@ static int bus_match_all(const struct rte_bus *bus, const void *data)
printf("Minimum size of RX buffer: %u\n", dev_info.min_rx_bufsize);
printf("Maximum configurable length of RX packet: %u\n",
dev_info.max_rx_pktlen);
+ printf("Maximum configurable size of LRO aggregated packet: %u\n",
+ dev_info.max_lro_pkt_size);
if (dev_info.max_vfs)
printf("Maximum number of VFs: %u\n", dev_info.max_vfs);
if (dev_info.max_vmdq_pools)
diff --git a/app/test-pmd/parameters.c b/app/test-pmd/parameters.c
index 9ea87c1..eda395b 100644
--- a/app/test-pmd/parameters.c
+++ b/app/test-pmd/parameters.c
@@ -107,6 +107,8 @@
printf(" --total-num-mbufs=N: set the number of mbufs to be allocated "
"in mbuf pools.\n");
printf(" --max-pkt-len=N: set the maximum size of packet to N bytes.\n");
+ printf(" --max-lro-pkt-size=N: set the maximum LRO aggregated packet "
+ "size to N bytes.\n");
#ifdef RTE_LIBRTE_CMDLINE
printf(" --eth-peers-configfile=name: config file with ethernet addresses "
"of peer ports.\n");
@@ -592,6 +594,7 @@
{ "mbuf-size", 1, 0, 0 },
{ "total-num-mbufs", 1, 0, 0 },
{ "max-pkt-len", 1, 0, 0 },
+ { "max-lro-pkt-size", 1, 0, 0 },
{ "pkt-filter-mode", 1, 0, 0 },
{ "pkt-filter-report-hash", 1, 0, 0 },
{ "pkt-filter-size", 1, 0, 0 },
@@ -888,6 +891,10 @@
"Invalid max-pkt-len=%d - should be > %d\n",
n, RTE_ETHER_MIN_LEN);
}
+ if (!strcmp(lgopts[opt_idx].name, "max-lro-pkt-size")) {
+ n = atoi(optarg);
+ rx_mode.max_lro_pkt_size = (uint32_t) n;
+ }
if (!strcmp(lgopts[opt_idx].name, "pkt-filter-mode")) {
if (!strcmp(optarg, "signature"))
fdir_conf.mode =
diff --git a/app/test-pmd/testpmd.c b/app/test-pmd/testpmd.c
index 5ba9741..3fe694f 100644
--- a/app/test-pmd/testpmd.c
+++ b/app/test-pmd/testpmd.c
@@ -419,6 +419,7 @@ struct fwd_engine * fwd_engines[] = {
struct rte_eth_rxmode rx_mode = {
.max_rx_pkt_len = RTE_ETHER_MAX_LEN,
/**< Default maximum frame length. */
+ .max_lro_pkt_size = RTE_ETHER_MAX_LEN,
};
struct rte_eth_txmode tx_mode = {
diff --git a/doc/guides/testpmd_app_ug/run_app.rst b/doc/guides/testpmd_app_ug/run_app.rst
index 00e0c2a..721f740 100644
--- a/doc/guides/testpmd_app_ug/run_app.rst
+++ b/doc/guides/testpmd_app_ug/run_app.rst
@@ -112,6 +112,11 @@ The command line options are:
Set the maximum packet size to N bytes, where N >= 64. The default value is 1518.
+* ``--max-lro-pkt-size=N``
+
+ Set the maximum LRO aggregated packet size to N bytes, where N >= 64.
+ The default value is 1518.
+
* ``--eth-peers-configfile=name``
Use a configuration file containing the Ethernet addresses of the peer ports.
diff --git a/doc/guides/testpmd_app_ug/testpmd_funcs.rst b/doc/guides/testpmd_app_ug/testpmd_funcs.rst
index c68a742..0267295 100644
--- a/doc/guides/testpmd_app_ug/testpmd_funcs.rst
+++ b/doc/guides/testpmd_app_ug/testpmd_funcs.rst
@@ -2139,6 +2139,15 @@ Set the maximum packet length::
This is equivalent to the ``--max-pkt-len`` command-line option.
+port config - max-lro-pkt-size
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+
+Set the maximum LRO aggregated packet size::
+
+ testpmd> port config all max-lro-pkt-size (value)
+
+This is equivalent to the ``--max-lro-pkt-size`` command-line option.
+
port config - Drop Packets
~~~~~~~~~~~~~~~~~~~~~~~~~~
--
1.8.3.1
^ permalink raw reply [flat|nested] 79+ messages in thread
* Re: [dpdk-dev] [PATCH v4 3/3] app/testpmd: use API to set max LRO packet size
2019-11-07 12:35 ` [dpdk-dev] [PATCH v4 3/3] app/testpmd: " Dekel Peled
@ 2019-11-07 14:20 ` Iremonger, Bernard
2019-11-07 20:25 ` Ferruh Yigit
1 sibling, 0 replies; 79+ messages in thread
From: Iremonger, Bernard @ 2019-11-07 14:20 UTC (permalink / raw)
To: Dekel Peled, Mcnamara, John, Kovacevic, Marko, nhorman,
ajit.khaparde, somnath.kotur, Burakov, Anatoly, xuanziyang2,
cloud.wangxiaoyun, zhouguoyang, Lu, Wenzhuo, Ananyev, Konstantin,
matan, shahafs, viacheslavo, rmody, shshaikh, maxime.coquelin,
Bie, Tiwei, Wang, Zhihong, yongwang, thomas, Yigit, Ferruh,
arybchenko, Wu, Jingjing
Cc: dev
> -----Original Message-----
> From: Dekel Peled <dekelp@mellanox.com>
> Sent: Thursday, November 7, 2019 12:35 PM
> To: Mcnamara, John <john.mcnamara@intel.com>; Kovacevic, Marko
> <marko.kovacevic@intel.com>; nhorman@tuxdriver.com;
> ajit.khaparde@broadcom.com; somnath.kotur@broadcom.com; Burakov,
> Anatoly <anatoly.burakov@intel.com>; xuanziyang2@huawei.com;
> cloud.wangxiaoyun@huawei.com; zhouguoyang@huawei.com; Lu, Wenzhuo
> <wenzhuo.lu@intel.com>; Ananyev, Konstantin
> <konstantin.ananyev@intel.com>; matan@mellanox.com;
> shahafs@mellanox.com; viacheslavo@mellanox.com; rmody@marvell.com;
> shshaikh@marvell.com; maxime.coquelin@redhat.com; Bie, Tiwei
> <tiwei.bie@intel.com>; Wang, Zhihong <zhihong.wang@intel.com>;
> yongwang@vmware.com; thomas@monjalon.net; Yigit, Ferruh
> <ferruh.yigit@intel.com>; arybchenko@solarflare.com; Wu, Jingjing
> <jingjing.wu@intel.com>; Iremonger, Bernard
> <bernard.iremonger@intel.com>
> Cc: dev@dpdk.org
> Subject: [PATCH v4 3/3] app/testpmd: use API to set max LRO packet size
>
> This patch implements use of the API for LRO aggregated packet max size.
> It adds command-line and runtime commands to configure this value, and
> adds an option to show the supported value.
> Documentation is updated accordingly.
>
> Signed-off-by: Dekel Peled <dekelp@mellanox.com>
Acked-by: Bernard Iremonger <bernard.iremonger@intel.com>
^ permalink raw reply [flat|nested] 79+ messages in thread
* Re: [dpdk-dev] [PATCH v4 3/3] app/testpmd: use API to set max LRO packet size
2019-11-07 12:35 ` [dpdk-dev] [PATCH v4 3/3] app/testpmd: " Dekel Peled
2019-11-07 14:20 ` Iremonger, Bernard
@ 2019-11-07 20:25 ` Ferruh Yigit
2019-11-08 6:56 ` Matan Azrad
2019-11-08 13:58 ` Dekel Peled
1 sibling, 2 replies; 79+ messages in thread
From: Ferruh Yigit @ 2019-11-07 20:25 UTC (permalink / raw)
To: Dekel Peled, john.mcnamara, marko.kovacevic, nhorman,
ajit.khaparde, somnath.kotur, anatoly.burakov, xuanziyang2,
cloud.wangxiaoyun, zhouguoyang, wenzhuo.lu, konstantin.ananyev,
matan, shahafs, viacheslavo, rmody, shshaikh, maxime.coquelin,
tiwei.bie, zhihong.wang, yongwang, thomas, arybchenko,
jingjing.wu, bernard.iremonger
Cc: dev
On 11/7/2019 12:35 PM, Dekel Peled wrote:
> This patch implements use of the API for LRO aggregated packet
> max size.
> It adds command-line and runtime commands to configure this value,
> and adds an option to show the supported value.
> Documentation is updated accordingly.
>
> Signed-off-by: Dekel Peled <dekelp@mellanox.com>
<...>
> +cmdline_parse_inst_t cmd_config_max_lro_pkt_size = {
> + .f = cmd_config_max_lro_pkt_size_parsed,
> + .data = NULL,
> + .help_str = "port config all max-lro-pkt-size <value>",
Can you please update the "cmd_help_long_parsed()" function to add this new command
to the help output?
<...>
> @@ -419,6 +419,7 @@ struct fwd_engine * fwd_engines[] = {
> struct rte_eth_rxmode rx_mode = {
> .max_rx_pkt_len = RTE_ETHER_MAX_LEN,
> /**< Default maximum frame length. */
> + .max_lro_pkt_size = RTE_ETHER_MAX_LEN,
If the PMD value is used when the application doesn't provide one (my comment on
the previous patch), we can remove this default value. So 'max_lro_pkt_size' is
used only when set explicitly; otherwise the PMD value is used.
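For illustration, that fallback could look like this in rte_eth_dev_configure() (a hypothetical sketch reusing check_lro_pkt_size() from patch 1 of this series; the series as posted fails instead of defaulting):

	/* Hypothetical: a zero max_lro_pkt_size falls back to the
	 * PMD-reported capability instead of failing validation. */
	if (dev_conf->rxmode.offloads & DEV_RX_OFFLOAD_TCP_LRO) {
		uint32_t max_lro = dev_conf->rxmode.max_lro_pkt_size;

		if (max_lro == 0) /* application did not set a limit */
			max_lro = dev_info.max_lro_pkt_size;
		ret = check_lro_pkt_size(port_id, max_lro,
					 dev_info.max_lro_pkt_size);
		if (ret != 0)
			goto rollback;
	}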
^ permalink raw reply [flat|nested] 79+ messages in thread
* Re: [dpdk-dev] [PATCH v4 3/3] app/testpmd: use API to set max LRO packet size
2019-11-07 20:25 ` Ferruh Yigit
@ 2019-11-08 6:56 ` Matan Azrad
2019-11-08 13:58 ` Dekel Peled
1 sibling, 0 replies; 79+ messages in thread
From: Matan Azrad @ 2019-11-08 6:56 UTC (permalink / raw)
To: Ferruh Yigit, Dekel Peled, john.mcnamara, marko.kovacevic,
nhorman, ajit.khaparde, somnath.kotur, anatoly.burakov,
xuanziyang2, cloud.wangxiaoyun, zhouguoyang, wenzhuo.lu,
konstantin.ananyev, Shahaf Shuler, Slava Ovsiienko, rmody,
shshaikh, maxime.coquelin, tiwei.bie, zhihong.wang, yongwang,
Thomas Monjalon, arybchenko, jingjing.wu, bernard.iremonger
Cc: dev
Hi
From: Ferruh Yigit
> On 11/7/2019 12:35 PM, Dekel Peled wrote:
> > This patch implements use of the API for LRO aggregated packet max
> > size.
> > It adds command-line and runtime commands to configure this value, and
> > adds an option to show the supported value.
> > Documentation is updated accordingly.
> >
> > Signed-off-by: Dekel Peled <dekelp@mellanox.com>
>
> <...>
>
> > +cmdline_parse_inst_t cmd_config_max_lro_pkt_size = {
> > + .f = cmd_config_max_lro_pkt_size_parsed,
> > + .data = NULL,
> > + .help_str = "port config all max-lro-pkt-size <value>",
>
> Can you please update the "cmd_help_long_parsed()" function to add this new
> command to the help output?
>
> <...>
>
> > @@ -419,6 +419,7 @@ struct fwd_engine * fwd_engines[] = { struct
> > rte_eth_rxmode rx_mode = {
> > .max_rx_pkt_len = RTE_ETHER_MAX_LEN,
> > /**< Default maximum frame length. */
> > + .max_lro_pkt_size = RTE_ETHER_MAX_LEN,
>
> If the PMD value is used when the application doesn't provide one (my
> comment on the previous patch), we can remove this default value. So
> 'max_lro_pkt_size' is used only when set explicitly; otherwise the PMD
> value is used.
Here too, the behavior should be the same as for max_rx_pkt_len.
^ permalink raw reply [flat|nested] 79+ messages in thread
* Re: [dpdk-dev] [PATCH v4 3/3] app/testpmd: use API to set max LRO packet size
2019-11-07 20:25 ` Ferruh Yigit
2019-11-08 6:56 ` Matan Azrad
@ 2019-11-08 13:58 ` Dekel Peled
1 sibling, 0 replies; 79+ messages in thread
From: Dekel Peled @ 2019-11-08 13:58 UTC (permalink / raw)
To: Ferruh Yigit, john.mcnamara, marko.kovacevic, nhorman,
ajit.khaparde, somnath.kotur, anatoly.burakov, xuanziyang2,
cloud.wangxiaoyun, zhouguoyang, wenzhuo.lu, konstantin.ananyev,
Matan Azrad, Shahaf Shuler, Slava Ovsiienko, rmody, shshaikh,
maxime.coquelin, tiwei.bie, zhihong.wang, yongwang,
Thomas Monjalon, arybchenko, jingjing.wu, bernard.iremonger
Cc: dev
Thanks, PSB.
> -----Original Message-----
> From: Ferruh Yigit <ferruh.yigit@intel.com>
> Sent: Thursday, November 7, 2019 10:26 PM
> To: Dekel Peled <dekelp@mellanox.com>; john.mcnamara@intel.com;
> marko.kovacevic@intel.com; nhorman@tuxdriver.com;
> ajit.khaparde@broadcom.com; somnath.kotur@broadcom.com;
> anatoly.burakov@intel.com; xuanziyang2@huawei.com;
> cloud.wangxiaoyun@huawei.com; zhouguoyang@huawei.com;
> wenzhuo.lu@intel.com; konstantin.ananyev@intel.com; Matan Azrad
> <matan@mellanox.com>; Shahaf Shuler <shahafs@mellanox.com>; Slava
> Ovsiienko <viacheslavo@mellanox.com>; rmody@marvell.com;
> shshaikh@marvell.com; maxime.coquelin@redhat.com;
> tiwei.bie@intel.com; zhihong.wang@intel.com; yongwang@vmware.com;
> Thomas Monjalon <thomas@monjalon.net>; arybchenko@solarflare.com;
> jingjing.wu@intel.com; bernard.iremonger@intel.com
> Cc: dev@dpdk.org
> Subject: Re: [PATCH v4 3/3] app/testpmd: use API to set max LRO packet size
>
> On 11/7/2019 12:35 PM, Dekel Peled wrote:
> > This patch implements use of the API for LRO aggregated packet max
> > size.
> > It adds command-line and runtime commands to configure this value, and
> > adds an option to show the supported value.
> > Documentation is updated accordingly.
> >
> > Signed-off-by: Dekel Peled <dekelp@mellanox.com>
>
> <...>
>
> > +cmdline_parse_inst_t cmd_config_max_lro_pkt_size = {
> > + .f = cmd_config_max_lro_pkt_size_parsed,
> > + .data = NULL,
> > + .help_str = "port config all max-lro-pkt-size <value>",
>
> Can you please update the "cmd_help_long_parsed()" function to add this new
> command to the help output?
>
Will send v5 with update.
> <...>
>
> > @@ -419,6 +419,7 @@ struct fwd_engine * fwd_engines[] = { struct
> > rte_eth_rxmode rx_mode = {
> > .max_rx_pkt_len = RTE_ETHER_MAX_LEN,
> > /**< Default maximum frame length. */
> > + .max_lro_pkt_size = RTE_ETHER_MAX_LEN,
>
> If the PMD value is used when the application doesn't provide one (my
> comment on the previous patch), we can remove this default value. So
> 'max_lro_pkt_size' is used only when set explicitly; otherwise the PMD
> value is used.
^ permalink raw reply [flat|nested] 79+ messages in thread
* Re: [dpdk-dev] [PATCH v4 0/3] support API to set max LRO packet size
2019-11-07 12:35 ` [dpdk-dev] [PATCH v4 " Dekel Peled
` (2 preceding siblings ...)
2019-11-07 12:35 ` [dpdk-dev] [PATCH v4 3/3] app/testpmd: " Dekel Peled
@ 2019-11-08 6:28 ` Matan Azrad
2019-11-08 16:42 ` [dpdk-dev] [PATCH v5 " Dekel Peled
4 siblings, 0 replies; 79+ messages in thread
From: Matan Azrad @ 2019-11-08 6:28 UTC (permalink / raw)
To: Dekel Peled, john.mcnamara, marko.kovacevic, nhorman,
ajit.khaparde, somnath.kotur, anatoly.burakov, xuanziyang2,
cloud.wangxiaoyun, zhouguoyang, wenzhuo.lu, konstantin.ananyev,
Shahaf Shuler, Slava Ovsiienko, rmody, shshaikh, maxime.coquelin,
tiwei.bie, zhihong.wang, yongwang, Thomas Monjalon, ferruh.yigit,
arybchenko, jingjing.wu, bernard.iremonger
Cc: dev
From: Dekel Peled
> This series implements support and use of the API for configuration and
> validation of the max size for LRO aggregated packets.
>
> v2: Updated ethdev patch per review comments.
> v3: Updated ethdev and testpmd patches per review comments.
> v4: Updated ethdev patch for QEDE PMD per review comments.
>
> Dekel Peled (3):
> ethdev: support API to set max LRO packet size
> net/mlx5: use API to set max LRO packet size
> app/testpmd: use API to set max LRO packet size
For all the series:
Acked-by: Matan Azrad <matan@mellanox.com>
> app/test-pmd/cmdline.c | 73
> +++++++++++++++++++++++++++++
> app/test-pmd/config.c | 2 +
> app/test-pmd/parameters.c | 7 +++
> app/test-pmd/testpmd.c | 1 +
> doc/guides/nics/features.rst | 2 +
> doc/guides/nics/mlx5.rst | 2 +
> doc/guides/rel_notes/deprecation.rst | 4 --
> doc/guides/rel_notes/release_19_11.rst | 8 ++++
> doc/guides/testpmd_app_ug/run_app.rst | 5 ++
> doc/guides/testpmd_app_ug/testpmd_funcs.rst | 9 ++++
> drivers/net/bnxt/bnxt_ethdev.c | 1 +
> drivers/net/hinic/hinic_pmd_ethdev.c | 1 +
> drivers/net/ixgbe/ixgbe_ethdev.c | 2 +
> drivers/net/ixgbe/ixgbe_vf_representor.c | 1 +
> drivers/net/mlx5/mlx5.h | 3 ++
> drivers/net/mlx5/mlx5_ethdev.c | 1 +
> drivers/net/mlx5/mlx5_rxq.c | 5 +-
> drivers/net/qede/qede_ethdev.c | 1 +
> drivers/net/virtio/virtio_ethdev.c | 1 +
> drivers/net/vmxnet3/vmxnet3_ethdev.c | 1 +
> lib/librte_ethdev/rte_ethdev.c | 44 +++++++++++++++++
> lib/librte_ethdev/rte_ethdev.h | 4 ++
> 22 files changed, 172 insertions(+), 6 deletions(-)
>
> --
> 1.8.3.1
^ permalink raw reply [flat|nested] 79+ messages in thread
* [dpdk-dev] [PATCH v5 0/3] support API to set max LRO packet size
2019-11-07 12:35 ` [dpdk-dev] [PATCH v4 " Dekel Peled
` (3 preceding siblings ...)
2019-11-08 6:28 ` [dpdk-dev] [PATCH v4 0/3] support " Matan Azrad
@ 2019-11-08 16:42 ` Dekel Peled
2019-11-08 16:42 ` [dpdk-dev] [PATCH v5 1/3] ethdev: " Dekel Peled
` (2 more replies)
4 siblings, 3 replies; 79+ messages in thread
From: Dekel Peled @ 2019-11-08 16:42 UTC (permalink / raw)
To: john.mcnamara, marko.kovacevic, nhorman, ajit.khaparde,
somnath.kotur, anatoly.burakov, xuanziyang2, cloud.wangxiaoyun,
zhouguoyang, wenzhuo.lu, konstantin.ananyev, matan, shahafs,
viacheslavo, rmody, shshaikh, maxime.coquelin, tiwei.bie,
zhihong.wang, yongwang, thomas, ferruh.yigit, arybchenko,
jingjing.wu, bernard.iremonger
Cc: dev
This series implements support and use of the API for configuration and
validation of the max size for LRO aggregated packets.
v2: Updated ethdev patch per review comments.
v3: Updated ethdev and testpmd patches per review comments.
v4: Updated ethdev patch for QEDE PMD per review comments.
v5: Updated ethdev patch for IXGBE PMD, and testpmd patch, per review comments.
Dekel Peled (3):
ethdev: support API to set max LRO packet size
net/mlx5: use API to set max LRO packet size
app/testpmd: use API to set max LRO packet size
app/test-pmd/cmdline.c | 76 +++++++++++++++++++++++++++++
app/test-pmd/config.c | 2 +
app/test-pmd/parameters.c | 7 +++
app/test-pmd/testpmd.c | 1 +
doc/guides/nics/features.rst | 2 +
doc/guides/nics/mlx5.rst | 2 +
doc/guides/rel_notes/deprecation.rst | 4 --
doc/guides/rel_notes/release_19_11.rst | 8 +++
doc/guides/testpmd_app_ug/run_app.rst | 5 ++
doc/guides/testpmd_app_ug/testpmd_funcs.rst | 9 ++++
drivers/net/bnxt/bnxt_ethdev.c | 1 +
drivers/net/hinic/hinic_pmd_ethdev.c | 1 +
drivers/net/ixgbe/ixgbe_ethdev.c | 1 +
drivers/net/mlx5/mlx5.h | 3 ++
drivers/net/mlx5/mlx5_ethdev.c | 1 +
drivers/net/mlx5/mlx5_rxq.c | 5 +-
drivers/net/qede/qede_ethdev.c | 1 +
drivers/net/virtio/virtio_ethdev.c | 1 +
drivers/net/vmxnet3/vmxnet3_ethdev.c | 1 +
lib/librte_ethdev/rte_ethdev.c | 44 +++++++++++++++++
lib/librte_ethdev/rte_ethdev.h | 4 ++
21 files changed, 173 insertions(+), 6 deletions(-)
--
1.8.3.1
^ permalink raw reply [flat|nested] 79+ messages in thread
* [dpdk-dev] [PATCH v5 1/3] ethdev: support API to set max LRO packet size
2019-11-08 16:42 ` [dpdk-dev] [PATCH v5 " Dekel Peled
@ 2019-11-08 16:42 ` Dekel Peled
2019-11-10 23:07 ` Ananyev, Konstantin
2019-11-08 16:42 ` [dpdk-dev] [PATCH v5 2/3] net/mlx5: use " Dekel Peled
2019-11-08 16:42 ` [dpdk-dev] [PATCH v5 3/3] app/testpmd: " Dekel Peled
2 siblings, 1 reply; 79+ messages in thread
From: Dekel Peled @ 2019-11-08 16:42 UTC (permalink / raw)
To: john.mcnamara, marko.kovacevic, nhorman, ajit.khaparde,
somnath.kotur, anatoly.burakov, xuanziyang2, cloud.wangxiaoyun,
zhouguoyang, wenzhuo.lu, konstantin.ananyev, matan, shahafs,
viacheslavo, rmody, shshaikh, maxime.coquelin, tiwei.bie,
zhihong.wang, yongwang, thomas, ferruh.yigit, arybchenko,
jingjing.wu, bernard.iremonger
Cc: dev
This patch implements [1], to support API for configuration and
validation of max size for LRO aggregated packet.
API change notice [2] is removed, and release notes for 19.11
are updated accordingly.
Various PMDs using LRO offload are updated, the new data members are
initialized to ensure they don't fail validation.
[1] http://patches.dpdk.org/patch/58217/
[2] http://patches.dpdk.org/patch/57492/
Signed-off-by: Dekel Peled <dekelp@mellanox.com>
Reviewed-by: Andrew Rybchenko <arybchenko@solarflare.com>
Acked-by: Thomas Monjalon <thomas@monjalon.net>
Acked-by: Matan Azrad <matan@mellanox.com>
---
doc/guides/nics/features.rst | 2 ++
doc/guides/rel_notes/deprecation.rst | 4 ----
doc/guides/rel_notes/release_19_11.rst | 8 +++++++
drivers/net/bnxt/bnxt_ethdev.c | 1 +
drivers/net/hinic/hinic_pmd_ethdev.c | 1 +
drivers/net/ixgbe/ixgbe_ethdev.c | 1 +
drivers/net/mlx5/mlx5.h | 3 +++
drivers/net/mlx5/mlx5_ethdev.c | 1 +
drivers/net/mlx5/mlx5_rxq.c | 1 -
drivers/net/qede/qede_ethdev.c | 1 +
drivers/net/virtio/virtio_ethdev.c | 1 +
drivers/net/vmxnet3/vmxnet3_ethdev.c | 1 +
lib/librte_ethdev/rte_ethdev.c | 44 ++++++++++++++++++++++++++++++++++
lib/librte_ethdev/rte_ethdev.h | 4 ++++
14 files changed, 68 insertions(+), 5 deletions(-)
diff --git a/doc/guides/nics/features.rst b/doc/guides/nics/features.rst
index 7a31cf7..2138ce3 100644
--- a/doc/guides/nics/features.rst
+++ b/doc/guides/nics/features.rst
@@ -193,10 +193,12 @@ LRO
Supports Large Receive Offload.
* **[uses] rte_eth_rxconf,rte_eth_rxmode**: ``offloads:DEV_RX_OFFLOAD_TCP_LRO``.
+ ``dev_conf.rxmode.max_lro_pkt_size``.
* **[implements] datapath**: ``LRO functionality``.
* **[implements] rte_eth_dev_data**: ``lro``.
* **[provides] mbuf**: ``mbuf.ol_flags:PKT_RX_LRO``, ``mbuf.tso_segsz``.
* **[provides] rte_eth_dev_info**: ``rx_offload_capa,rx_queue_offload_capa:DEV_RX_OFFLOAD_TCP_LRO``.
+* **[provides] rte_eth_dev_info**: ``max_lro_pkt_size``.
.. _nic_features_tso:
diff --git a/doc/guides/rel_notes/deprecation.rst b/doc/guides/rel_notes/deprecation.rst
index c10dc30..fdec33d 100644
--- a/doc/guides/rel_notes/deprecation.rst
+++ b/doc/guides/rel_notes/deprecation.rst
@@ -87,10 +87,6 @@ Deprecation Notices
This scheme will allow PMDs to avoid lookup to internal ptype table on Rx and
thereby improve Rx performance if application wishes do so.
-* ethdev: New 32-bit fields may be added for maximum LRO session size, in
- struct ``rte_eth_dev_info`` for the port capability and in struct
- ``rte_eth_rxmode`` for the port configuration.
-
* cryptodev: support for using IV with all sizes is added, J0 still can
be used but only when IV length in following structs ``rte_crypto_auth_xform``,
``rte_crypto_aead_xform`` is set to zero. When IV length is greater or equal
diff --git a/doc/guides/rel_notes/release_19_11.rst b/doc/guides/rel_notes/release_19_11.rst
index 87b7bd0..a3fc023 100644
--- a/doc/guides/rel_notes/release_19_11.rst
+++ b/doc/guides/rel_notes/release_19_11.rst
@@ -418,6 +418,14 @@ ABI Changes
align the Ethernet header on receive and all known encapsulations
preserve the alignment of the header.
+* ethdev: Added 32-bit fields for maximum LRO aggregated packet size, in
+ struct ``rte_eth_dev_info`` for the port capability and in struct
+ ``rte_eth_rxmode`` for the port configuration.
+ Application should use the new field in struct ``rte_eth_rxmode`` to configure
+ the requested size.
+ PMD should use the new field in struct ``rte_eth_dev_info`` to report the
+ supported port capability.
+
Shared Library Versions
-----------------------
diff --git a/drivers/net/bnxt/bnxt_ethdev.c b/drivers/net/bnxt/bnxt_ethdev.c
index b9b055e..741b897 100644
--- a/drivers/net/bnxt/bnxt_ethdev.c
+++ b/drivers/net/bnxt/bnxt_ethdev.c
@@ -519,6 +519,7 @@ static int bnxt_dev_info_get_op(struct rte_eth_dev *eth_dev,
/* Fast path specifics */
dev_info->min_rx_bufsize = 1;
dev_info->max_rx_pktlen = BNXT_MAX_PKT_LEN;
+ dev_info->max_lro_pkt_size = BNXT_MAX_PKT_LEN;
dev_info->rx_offload_capa = BNXT_DEV_RX_OFFLOAD_SUPPORT;
if (bp->flags & BNXT_FLAG_PTP_SUPPORTED)
diff --git a/drivers/net/hinic/hinic_pmd_ethdev.c b/drivers/net/hinic/hinic_pmd_ethdev.c
index 9f37a40..b33b2cf 100644
--- a/drivers/net/hinic/hinic_pmd_ethdev.c
+++ b/drivers/net/hinic/hinic_pmd_ethdev.c
@@ -727,6 +727,7 @@ static void hinic_get_speed_capa(struct rte_eth_dev *dev, uint32_t *speed_capa)
info->max_tx_queues = nic_dev->nic_cap.max_sqs;
info->min_rx_bufsize = HINIC_MIN_RX_BUF_SIZE;
info->max_rx_pktlen = HINIC_MAX_JUMBO_FRAME_SIZE;
+ info->max_lro_pkt_size = HINIC_MAX_JUMBO_FRAME_SIZE;
info->max_mac_addrs = HINIC_MAX_UC_MAC_ADDRS;
info->min_mtu = HINIC_MIN_MTU_SIZE;
info->max_mtu = HINIC_MAX_MTU_SIZE;
diff --git a/drivers/net/ixgbe/ixgbe_ethdev.c b/drivers/net/ixgbe/ixgbe_ethdev.c
index 30c0379..5719552 100644
--- a/drivers/net/ixgbe/ixgbe_ethdev.c
+++ b/drivers/net/ixgbe/ixgbe_ethdev.c
@@ -3814,6 +3814,7 @@ static int ixgbevf_dev_xstats_get_names(__rte_unused struct rte_eth_dev *dev,
}
dev_info->min_rx_bufsize = 1024; /* cf BSIZEPACKET in SRRCTL register */
dev_info->max_rx_pktlen = 15872; /* includes CRC, cf MAXFRS register */
+ dev_info->max_lro_pkt_size = RTE_IPV4_MAX_PKT_LEN;
dev_info->max_mac_addrs = hw->mac.num_rar_entries;
dev_info->max_hash_mac_addrs = IXGBE_VMDQ_NUM_UC_MAC;
dev_info->max_vfs = pci_dev->max_vfs;
diff --git a/drivers/net/mlx5/mlx5.h b/drivers/net/mlx5/mlx5.h
index fab58c9..4783b5c 100644
--- a/drivers/net/mlx5/mlx5.h
+++ b/drivers/net/mlx5/mlx5.h
@@ -206,6 +206,9 @@ struct mlx5_hca_attr {
#define MLX5_LRO_SUPPORTED(dev) \
(((struct mlx5_priv *)((dev)->data->dev_private))->config.lro.supported)
+/* Maximal size of aggregated LRO packet. */
+#define MLX5_MAX_LRO_SIZE (UINT8_MAX * 256u)
+
/* LRO configurations structure. */
struct mlx5_lro_config {
uint32_t supported:1; /* Whether LRO is supported. */
diff --git a/drivers/net/mlx5/mlx5_ethdev.c b/drivers/net/mlx5/mlx5_ethdev.c
index 2b7c867..3adc824 100644
--- a/drivers/net/mlx5/mlx5_ethdev.c
+++ b/drivers/net/mlx5/mlx5_ethdev.c
@@ -606,6 +606,7 @@ struct ethtool_link_settings {
/* FIXME: we should ask the device for these values. */
info->min_rx_bufsize = 32;
info->max_rx_pktlen = 65536;
+ info->max_lro_pkt_size = MLX5_MAX_LRO_SIZE;
/*
* Since we need one CQ per QP, the limit is the minimum number
* between the two values.
diff --git a/drivers/net/mlx5/mlx5_rxq.c b/drivers/net/mlx5/mlx5_rxq.c
index 24d0eaa..9423e7b 100644
--- a/drivers/net/mlx5/mlx5_rxq.c
+++ b/drivers/net/mlx5/mlx5_rxq.c
@@ -1701,7 +1701,6 @@ struct mlx5_rxq_obj *
return 0;
}
-#define MLX5_MAX_LRO_SIZE (UINT8_MAX * 256u)
#define MLX5_MAX_TCP_HDR_OFFSET ((unsigned int)(sizeof(struct rte_ether_hdr) + \
sizeof(struct rte_vlan_hdr) * 2 + \
sizeof(struct rte_ipv6_hdr)))
diff --git a/drivers/net/qede/qede_ethdev.c b/drivers/net/qede/qede_ethdev.c
index 575982f..ccbb8a4 100644
--- a/drivers/net/qede/qede_ethdev.c
+++ b/drivers/net/qede/qede_ethdev.c
@@ -1277,6 +1277,7 @@ static int qede_dev_configure(struct rte_eth_dev *eth_dev)
dev_info->min_rx_bufsize = (uint32_t)QEDE_MIN_RX_BUFF_SIZE;
dev_info->max_rx_pktlen = (uint32_t)ETH_TX_MAX_NON_LSO_PKT_LEN;
+ dev_info->max_lro_pkt_size = (uint32_t)0x7FFF;
dev_info->rx_desc_lim = qede_rx_desc_lim;
dev_info->tx_desc_lim = qede_tx_desc_lim;
diff --git a/drivers/net/virtio/virtio_ethdev.c b/drivers/net/virtio/virtio_ethdev.c
index 044eb10..22ce5a2 100644
--- a/drivers/net/virtio/virtio_ethdev.c
+++ b/drivers/net/virtio/virtio_ethdev.c
@@ -2435,6 +2435,7 @@ static void virtio_dev_free_mbufs(struct rte_eth_dev *dev)
RTE_MIN(hw->max_queue_pairs, VIRTIO_MAX_TX_QUEUES);
dev_info->min_rx_bufsize = VIRTIO_MIN_RX_BUFSIZE;
dev_info->max_rx_pktlen = VIRTIO_MAX_RX_PKTLEN;
+ dev_info->max_lro_pkt_size = VIRTIO_MAX_RX_PKTLEN;
dev_info->max_mac_addrs = VIRTIO_MAX_MAC_ADDRS;
host_features = VTPCI_OPS(hw)->get_features(hw);
diff --git a/drivers/net/vmxnet3/vmxnet3_ethdev.c b/drivers/net/vmxnet3/vmxnet3_ethdev.c
index d1faeaa..d18e8bc 100644
--- a/drivers/net/vmxnet3/vmxnet3_ethdev.c
+++ b/drivers/net/vmxnet3/vmxnet3_ethdev.c
@@ -1161,6 +1161,7 @@ static int eth_vmxnet3_pci_remove(struct rte_pci_device *pci_dev)
dev_info->max_tx_queues = VMXNET3_MAX_TX_QUEUES;
dev_info->min_rx_bufsize = 1518 + RTE_PKTMBUF_HEADROOM;
dev_info->max_rx_pktlen = 16384; /* includes CRC, cf MAXFRS register */
+ dev_info->max_lro_pkt_size = 16384;
dev_info->speed_capa = ETH_LINK_SPEED_10G;
dev_info->max_mac_addrs = VMXNET3_MAX_MAC_ADDRS;
diff --git a/lib/librte_ethdev/rte_ethdev.c b/lib/librte_ethdev/rte_ethdev.c
index 652c369..c642ba5 100644
--- a/lib/librte_ethdev/rte_ethdev.c
+++ b/lib/librte_ethdev/rte_ethdev.c
@@ -1136,6 +1136,26 @@ struct rte_eth_dev *
return name;
}
+static inline int
+check_lro_pkt_size(uint16_t port_id, uint32_t config_size,
+ uint32_t dev_info_size)
+{
+ int ret = 0;
+
+ if (config_size > dev_info_size) {
+ RTE_ETHDEV_LOG(ERR, "Ethdev port_id=%d max_lro_pkt_size %u "
+ "> max allowed value %u\n", port_id, config_size,
+ dev_info_size);
+ ret = -EINVAL;
+ } else if (config_size < RTE_ETHER_MIN_LEN) {
+ RTE_ETHDEV_LOG(ERR, "Ethdev port_id=%d max_lro_pkt_size %u "
+ "< min allowed value %u\n", port_id, config_size,
+ (unsigned int)RTE_ETHER_MIN_LEN);
+ ret = -EINVAL;
+ }
+ return ret;
+}
+
int
rte_eth_dev_configure(uint16_t port_id, uint16_t nb_rx_q, uint16_t nb_tx_q,
const struct rte_eth_conf *dev_conf)
@@ -1266,6 +1286,18 @@ struct rte_eth_dev *
RTE_ETHER_MAX_LEN;
}
+ /*
+ * If LRO is enabled, check that the maximum aggregated packet
+ * size is supported by the configured device.
+ */
+ if (dev_conf->rxmode.offloads & DEV_RX_OFFLOAD_TCP_LRO) {
+ ret = check_lro_pkt_size(
+ port_id, dev_conf->rxmode.max_lro_pkt_size,
+ dev_info.max_lro_pkt_size);
+ if (ret != 0)
+ goto rollback;
+ }
+
/* Any requested offloading must be within its device capabilities */
if ((dev_conf->rxmode.offloads & dev_info.rx_offload_capa) !=
dev_conf->rxmode.offloads) {
@@ -1770,6 +1802,18 @@ struct rte_eth_dev *
return -EINVAL;
}
+ /*
+ * If LRO is enabled, check that the maximum aggregated packet
+ * size is supported by the configured device.
+ */
+ if (local_conf.offloads & DEV_RX_OFFLOAD_TCP_LRO) {
+ int ret = check_lro_pkt_size(port_id,
+ dev->data->dev_conf.rxmode.max_lro_pkt_size,
+ dev_info.max_lro_pkt_size);
+ if (ret != 0)
+ return ret;
+ }
+
ret = (*dev->dev_ops->rx_queue_setup)(dev, rx_queue_id, nb_rx_desc,
socket_id, &local_conf, mp);
if (!ret) {
diff --git a/lib/librte_ethdev/rte_ethdev.h b/lib/librte_ethdev/rte_ethdev.h
index 44d77b3..1b76df5 100644
--- a/lib/librte_ethdev/rte_ethdev.h
+++ b/lib/librte_ethdev/rte_ethdev.h
@@ -395,6 +395,8 @@ struct rte_eth_rxmode {
/** The multi-queue packet distribution mode to be used, e.g. RSS. */
enum rte_eth_rx_mq_mode mq_mode;
uint32_t max_rx_pkt_len; /**< Only used if JUMBO_FRAME enabled. */
+ /** Maximum allowed size of LRO aggregated packet. */
+ uint32_t max_lro_pkt_size;
uint16_t split_hdr_size; /**< hdr buf size (header_split enabled).*/
/**
* Per-port Rx offloads to be set using DEV_RX_OFFLOAD_* flags.
@@ -1218,6 +1220,8 @@ struct rte_eth_dev_info {
const uint32_t *dev_flags; /**< Device flags */
uint32_t min_rx_bufsize; /**< Minimum size of RX buffer. */
uint32_t max_rx_pktlen; /**< Maximum configurable length of RX pkt. */
+ /** Maximum configurable size of LRO aggregated packet. */
+ uint32_t max_lro_pkt_size;
uint16_t max_rx_queues; /**< Maximum number of RX queues. */
uint16_t max_tx_queues; /**< Maximum number of TX queues. */
uint32_t max_mac_addrs; /**< Maximum number of MAC addresses. */
--
1.8.3.1
^ permalink raw reply [flat|nested] 79+ messages in thread
* Re: [dpdk-dev] [PATCH v5 1/3] ethdev: support API to set max LRO packet size
2019-11-08 16:42 ` [dpdk-dev] [PATCH v5 1/3] ethdev: " Dekel Peled
@ 2019-11-10 23:07 ` Ananyev, Konstantin
2019-11-11 7:40 ` Dekel Peled
0 siblings, 1 reply; 79+ messages in thread
From: Ananyev, Konstantin @ 2019-11-10 23:07 UTC (permalink / raw)
To: Dekel Peled, Mcnamara, John, Kovacevic, Marko, nhorman,
ajit.khaparde, somnath.kotur, Burakov, Anatoly, xuanziyang2,
cloud.wangxiaoyun, zhouguoyang, Lu, Wenzhuo, matan, shahafs,
viacheslavo, rmody, shshaikh, maxime.coquelin, Bie, Tiwei, Wang,
Zhihong, yongwang, thomas, Yigit, Ferruh, arybchenko, Wu,
Jingjing, Iremonger, Bernard
Cc: dev
> This patch implements [1], to support API for configuration and
> validation of max size for LRO aggregated packet.
> API change notice [2] is removed, and release notes for 19.11
> are updated accordingly.
>
> Various PMDs using LRO offload are updated, the new data members are
> initialized to ensure they don't fail validation.
>
> [1] http://patches.dpdk.org/patch/58217/
> [2] http://patches.dpdk.org/patch/57492/
Actually, if the requirement is just to allow the user to limit the max LRO size,
then why not just add a new function for that:
int rte_eth_dev_set_max_lro(uint16_t port_id, uint32_t lro);
?
And make it optional for the drivers to support it.
That way, PMDs/devices that allow the LRO max size to be configurable
can support it; others can fail.
Konstantin
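For illustration, a rough sketch of such an optional setter in rte_ethdev.c (the 'set_max_lro' dev_ops hook is assumed, not an existing op; PMDs that cannot reconfigure the LRO maximum would simply not implement it):

int
rte_eth_dev_set_max_lro(uint16_t port_id, uint32_t lro)
{
	struct rte_eth_dev *dev;

	RTE_ETH_VALID_PORTID_OR_ERR_RET(port_id, -ENODEV);
	dev = &rte_eth_devices[port_id];
	/* Optional op: drivers without it fail gracefully. */
	RTE_FUNC_PTR_OR_ERR_RET(*dev->dev_ops->set_max_lro, -ENOTSUP);
	return (*dev->dev_ops->set_max_lro)(dev, lro);
}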
>
> Signed-off-by: Dekel Peled <dekelp@mellanox.com>
> Reviewed-by: Andrew Rybchenko <arybchenko@solarflare.com>
> Acked-by: Thomas Monjalon <thomas@monjalon.net>
> Acked-by: Matan Azrad <matan@mellanox.com>
> ---
> doc/guides/nics/features.rst | 2 ++
> doc/guides/rel_notes/deprecation.rst | 4 ----
> doc/guides/rel_notes/release_19_11.rst | 8 +++++++
> drivers/net/bnxt/bnxt_ethdev.c | 1 +
> drivers/net/hinic/hinic_pmd_ethdev.c | 1 +
> drivers/net/ixgbe/ixgbe_ethdev.c | 1 +
> drivers/net/mlx5/mlx5.h | 3 +++
> drivers/net/mlx5/mlx5_ethdev.c | 1 +
> drivers/net/mlx5/mlx5_rxq.c | 1 -
> drivers/net/qede/qede_ethdev.c | 1 +
> drivers/net/virtio/virtio_ethdev.c | 1 +
> drivers/net/vmxnet3/vmxnet3_ethdev.c | 1 +
> lib/librte_ethdev/rte_ethdev.c | 44 ++++++++++++++++++++++++++++++++++
> lib/librte_ethdev/rte_ethdev.h | 4 ++++
> 14 files changed, 68 insertions(+), 5 deletions(-)
>
> diff --git a/doc/guides/nics/features.rst b/doc/guides/nics/features.rst
> index 7a31cf7..2138ce3 100644
> --- a/doc/guides/nics/features.rst
> +++ b/doc/guides/nics/features.rst
> @@ -193,10 +193,12 @@ LRO
> Supports Large Receive Offload.
>
> * **[uses] rte_eth_rxconf,rte_eth_rxmode**: ``offloads:DEV_RX_OFFLOAD_TCP_LRO``.
> + ``dev_conf.rxmode.max_lro_pkt_size``.
> * **[implements] datapath**: ``LRO functionality``.
> * **[implements] rte_eth_dev_data**: ``lro``.
> * **[provides] mbuf**: ``mbuf.ol_flags:PKT_RX_LRO``, ``mbuf.tso_segsz``.
> * **[provides] rte_eth_dev_info**: ``rx_offload_capa,rx_queue_offload_capa:DEV_RX_OFFLOAD_TCP_LRO``.
> +* **[provides] rte_eth_dev_info**: ``max_lro_pkt_size``.
>
>
> .. _nic_features_tso:
> diff --git a/doc/guides/rel_notes/deprecation.rst b/doc/guides/rel_notes/deprecation.rst
> index c10dc30..fdec33d 100644
> --- a/doc/guides/rel_notes/deprecation.rst
> +++ b/doc/guides/rel_notes/deprecation.rst
> @@ -87,10 +87,6 @@ Deprecation Notices
> This scheme will allow PMDs to avoid lookup to internal ptype table on Rx and
> thereby improve Rx performance if application wishes do so.
>
> -* ethdev: New 32-bit fields may be added for maximum LRO session size, in
> - struct ``rte_eth_dev_info`` for the port capability and in struct
> - ``rte_eth_rxmode`` for the port configuration.
> -
> * cryptodev: support for using IV with all sizes is added, J0 still can
> be used but only when IV length in following structs ``rte_crypto_auth_xform``,
> ``rte_crypto_aead_xform`` is set to zero. When IV length is greater or equal
> diff --git a/doc/guides/rel_notes/release_19_11.rst b/doc/guides/rel_notes/release_19_11.rst
> index 87b7bd0..a3fc023 100644
> --- a/doc/guides/rel_notes/release_19_11.rst
> +++ b/doc/guides/rel_notes/release_19_11.rst
> @@ -418,6 +418,14 @@ ABI Changes
> align the Ethernet header on receive and all known encapsulations
> preserve the alignment of the header.
>
> +* ethdev: Added 32-bit fields for maximum LRO aggregated packet size, in
> + struct ``rte_eth_dev_info`` for the port capability and in struct
> + ``rte_eth_rxmode`` for the port configuration.
> + Application should use the new field in struct ``rte_eth_rxmode`` to configure
> + the requested size.
That part I am not happy with: *application should use*.
Many apps, I suppose, are OK with the default LRO size selected by the PMD/HW.
Why force changes in all of them?
> + PMD should use the new field in struct ``rte_eth_dev_info`` to report the
> + supported port capability.
> +
>
> Shared Library Versions
> -----------------------
> diff --git a/drivers/net/bnxt/bnxt_ethdev.c b/drivers/net/bnxt/bnxt_ethdev.c
> index b9b055e..741b897 100644
> --- a/drivers/net/bnxt/bnxt_ethdev.c
> +++ b/drivers/net/bnxt/bnxt_ethdev.c
> @@ -519,6 +519,7 @@ static int bnxt_dev_info_get_op(struct rte_eth_dev *eth_dev,
> /* Fast path specifics */
> dev_info->min_rx_bufsize = 1;
> dev_info->max_rx_pktlen = BNXT_MAX_PKT_LEN;
> + dev_info->max_lro_pkt_size = BNXT_MAX_PKT_LEN;
>
> dev_info->rx_offload_capa = BNXT_DEV_RX_OFFLOAD_SUPPORT;
> if (bp->flags & BNXT_FLAG_PTP_SUPPORTED)
> diff --git a/drivers/net/hinic/hinic_pmd_ethdev.c b/drivers/net/hinic/hinic_pmd_ethdev.c
> index 9f37a40..b33b2cf 100644
> --- a/drivers/net/hinic/hinic_pmd_ethdev.c
> +++ b/drivers/net/hinic/hinic_pmd_ethdev.c
> @@ -727,6 +727,7 @@ static void hinic_get_speed_capa(struct rte_eth_dev *dev, uint32_t *speed_capa)
> info->max_tx_queues = nic_dev->nic_cap.max_sqs;
> info->min_rx_bufsize = HINIC_MIN_RX_BUF_SIZE;
> info->max_rx_pktlen = HINIC_MAX_JUMBO_FRAME_SIZE;
> + info->max_lro_pkt_size = HINIC_MAX_JUMBO_FRAME_SIZE;
> info->max_mac_addrs = HINIC_MAX_UC_MAC_ADDRS;
> info->min_mtu = HINIC_MIN_MTU_SIZE;
> info->max_mtu = HINIC_MAX_MTU_SIZE;
> diff --git a/drivers/net/ixgbe/ixgbe_ethdev.c b/drivers/net/ixgbe/ixgbe_ethdev.c
> index 30c0379..5719552 100644
> --- a/drivers/net/ixgbe/ixgbe_ethdev.c
> +++ b/drivers/net/ixgbe/ixgbe_ethdev.c
> @@ -3814,6 +3814,7 @@ static int ixgbevf_dev_xstats_get_names(__rte_unused struct rte_eth_dev *dev,
> }
> dev_info->min_rx_bufsize = 1024; /* cf BSIZEPACKET in SRRCTL register */
> dev_info->max_rx_pktlen = 15872; /* includes CRC, cf MAXFRS register */
> + dev_info->max_lro_pkt_size = RTE_IPV4_MAX_PKT_LEN;
> dev_info->max_mac_addrs = hw->mac.num_rar_entries;
> dev_info->max_hash_mac_addrs = IXGBE_VMDQ_NUM_UC_MAC;
> dev_info->max_vfs = pci_dev->max_vfs;
> diff --git a/drivers/net/mlx5/mlx5.h b/drivers/net/mlx5/mlx5.h
> index fab58c9..4783b5c 100644
> --- a/drivers/net/mlx5/mlx5.h
> +++ b/drivers/net/mlx5/mlx5.h
> @@ -206,6 +206,9 @@ struct mlx5_hca_attr {
> #define MLX5_LRO_SUPPORTED(dev) \
> (((struct mlx5_priv *)((dev)->data->dev_private))->config.lro.supported)
>
> +/* Maximal size of aggregated LRO packet. */
> +#define MLX5_MAX_LRO_SIZE (UINT8_MAX * 256u)
> +
> /* LRO configurations structure. */
> struct mlx5_lro_config {
> uint32_t supported:1; /* Whether LRO is supported. */
> diff --git a/drivers/net/mlx5/mlx5_ethdev.c b/drivers/net/mlx5/mlx5_ethdev.c
> index 2b7c867..3adc824 100644
> --- a/drivers/net/mlx5/mlx5_ethdev.c
> +++ b/drivers/net/mlx5/mlx5_ethdev.c
> @@ -606,6 +606,7 @@ struct ethtool_link_settings {
> /* FIXME: we should ask the device for these values. */
> info->min_rx_bufsize = 32;
> info->max_rx_pktlen = 65536;
> + info->max_lro_pkt_size = MLX5_MAX_LRO_SIZE;
> /*
> * Since we need one CQ per QP, the limit is the minimum number
> * between the two values.
> diff --git a/drivers/net/mlx5/mlx5_rxq.c b/drivers/net/mlx5/mlx5_rxq.c
> index 24d0eaa..9423e7b 100644
> --- a/drivers/net/mlx5/mlx5_rxq.c
> +++ b/drivers/net/mlx5/mlx5_rxq.c
> @@ -1701,7 +1701,6 @@ struct mlx5_rxq_obj *
> return 0;
> }
>
> -#define MLX5_MAX_LRO_SIZE (UINT8_MAX * 256u)
> #define MLX5_MAX_TCP_HDR_OFFSET ((unsigned int)(sizeof(struct rte_ether_hdr) + \
> sizeof(struct rte_vlan_hdr) * 2 + \
> sizeof(struct rte_ipv6_hdr)))
> diff --git a/drivers/net/qede/qede_ethdev.c b/drivers/net/qede/qede_ethdev.c
> index 575982f..ccbb8a4 100644
> --- a/drivers/net/qede/qede_ethdev.c
> +++ b/drivers/net/qede/qede_ethdev.c
> @@ -1277,6 +1277,7 @@ static int qede_dev_configure(struct rte_eth_dev *eth_dev)
>
> dev_info->min_rx_bufsize = (uint32_t)QEDE_MIN_RX_BUFF_SIZE;
> dev_info->max_rx_pktlen = (uint32_t)ETH_TX_MAX_NON_LSO_PKT_LEN;
> + dev_info->max_lro_pkt_size = (uint32_t)0x7FFF;
> dev_info->rx_desc_lim = qede_rx_desc_lim;
> dev_info->tx_desc_lim = qede_tx_desc_lim;
>
> diff --git a/drivers/net/virtio/virtio_ethdev.c b/drivers/net/virtio/virtio_ethdev.c
> index 044eb10..22ce5a2 100644
> --- a/drivers/net/virtio/virtio_ethdev.c
> +++ b/drivers/net/virtio/virtio_ethdev.c
> @@ -2435,6 +2435,7 @@ static void virtio_dev_free_mbufs(struct rte_eth_dev *dev)
> RTE_MIN(hw->max_queue_pairs, VIRTIO_MAX_TX_QUEUES);
> dev_info->min_rx_bufsize = VIRTIO_MIN_RX_BUFSIZE;
> dev_info->max_rx_pktlen = VIRTIO_MAX_RX_PKTLEN;
> + dev_info->max_lro_pkt_size = VIRTIO_MAX_RX_PKTLEN;
> dev_info->max_mac_addrs = VIRTIO_MAX_MAC_ADDRS;
>
> host_features = VTPCI_OPS(hw)->get_features(hw);
> diff --git a/drivers/net/vmxnet3/vmxnet3_ethdev.c b/drivers/net/vmxnet3/vmxnet3_ethdev.c
> index d1faeaa..d18e8bc 100644
> --- a/drivers/net/vmxnet3/vmxnet3_ethdev.c
> +++ b/drivers/net/vmxnet3/vmxnet3_ethdev.c
> @@ -1161,6 +1161,7 @@ static int eth_vmxnet3_pci_remove(struct rte_pci_device *pci_dev)
> dev_info->max_tx_queues = VMXNET3_MAX_TX_QUEUES;
> dev_info->min_rx_bufsize = 1518 + RTE_PKTMBUF_HEADROOM;
> dev_info->max_rx_pktlen = 16384; /* includes CRC, cf MAXFRS register */
> + dev_info->max_lro_pkt_size = 16384;
> dev_info->speed_capa = ETH_LINK_SPEED_10G;
> dev_info->max_mac_addrs = VMXNET3_MAX_MAC_ADDRS;
>
> diff --git a/lib/librte_ethdev/rte_ethdev.c b/lib/librte_ethdev/rte_ethdev.c
> index 652c369..c642ba5 100644
> --- a/lib/librte_ethdev/rte_ethdev.c
> +++ b/lib/librte_ethdev/rte_ethdev.c
> @@ -1136,6 +1136,26 @@ struct rte_eth_dev *
> return name;
> }
>
> +static inline int
> +check_lro_pkt_size(uint16_t port_id, uint32_t config_size,
> + uint32_t dev_info_size)
> +{
> + int ret = 0;
> +
> + if (config_size > dev_info_size) {
> + RTE_ETHDEV_LOG(ERR, "Ethdev port_id=%d max_lro_pkt_size %u "
> + "> max allowed value %u\n", port_id, config_size,
> + dev_info_size);
> + ret = -EINVAL;
> + } else if (config_size < RTE_ETHER_MIN_LEN) {
> + RTE_ETHDEV_LOG(ERR, "Ethdev port_id=%d max_lro_pkt_size %u "
> + "< min allowed value %u\n", port_id, config_size,
> + (unsigned int)RTE_ETHER_MIN_LEN);
> + ret = -EINVAL;
> + }
> + return ret;
> +}
> +
> int
> rte_eth_dev_configure(uint16_t port_id, uint16_t nb_rx_q, uint16_t nb_tx_q,
> const struct rte_eth_conf *dev_conf)
> @@ -1266,6 +1286,18 @@ struct rte_eth_dev *
> RTE_ETHER_MAX_LEN;
> }
>
> + /*
> + * If LRO is enabled, check that the maximum aggregated packet
> + * size is supported by the configured device.
> + */
> + if (dev_conf->rxmode.offloads & DEV_RX_OFFLOAD_TCP_LRO) {
> + ret = check_lro_pkt_size(
> + port_id, dev_conf->rxmode.max_lro_pkt_size,
> + dev_info.max_lro_pkt_size);
> + if (ret != 0)
> + goto rollback;
> + }
> +
> /* Any requested offloading must be within its device capabilities */
> if ((dev_conf->rxmode.offloads & dev_info.rx_offload_capa) !=
> dev_conf->rxmode.offloads) {
> @@ -1770,6 +1802,18 @@ struct rte_eth_dev *
> return -EINVAL;
> }
>
> + /*
> + * If LRO is enabled, check that the maximum aggregated packet
> + * size is supported by the configured device.
> + */
> + if (local_conf.offloads & DEV_RX_OFFLOAD_TCP_LRO) {
> + int ret = check_lro_pkt_size(port_id,
> + dev->data->dev_conf.rxmode.max_lro_pkt_size,
> + dev_info.max_lro_pkt_size);
> + if (ret != 0)
> + return ret;
> + }
> +
> ret = (*dev->dev_ops->rx_queue_setup)(dev, rx_queue_id, nb_rx_desc,
> socket_id, &local_conf, mp);
> if (!ret) {
> diff --git a/lib/librte_ethdev/rte_ethdev.h b/lib/librte_ethdev/rte_ethdev.h
> index 44d77b3..1b76df5 100644
> --- a/lib/librte_ethdev/rte_ethdev.h
> +++ b/lib/librte_ethdev/rte_ethdev.h
> @@ -395,6 +395,8 @@ struct rte_eth_rxmode {
> /** The multi-queue packet distribution mode to be used, e.g. RSS. */
> enum rte_eth_rx_mq_mode mq_mode;
> uint32_t max_rx_pkt_len; /**< Only used if JUMBO_FRAME enabled. */
> + /** Maximum allowed size of LRO aggregated packet. */
> + uint32_t max_lro_pkt_size;
> uint16_t split_hdr_size; /**< hdr buf size (header_split enabled).*/
> /**
> * Per-port Rx offloads to be set using DEV_RX_OFFLOAD_* flags.
> @@ -1218,6 +1220,8 @@ struct rte_eth_dev_info {
> const uint32_t *dev_flags; /**< Device flags */
> uint32_t min_rx_bufsize; /**< Minimum size of RX buffer. */
> uint32_t max_rx_pktlen; /**< Maximum configurable length of RX pkt. */
> + /** Maximum configurable size of LRO aggregated packet. */
> + uint32_t max_lro_pkt_size;
> uint16_t max_rx_queues; /**< Maximum number of RX queues. */
> uint16_t max_tx_queues; /**< Maximum number of TX queues. */
> uint32_t max_mac_addrs; /**< Maximum number of MAC addresses. */
> --
> 1.8.3.1
^ permalink raw reply [flat|nested] 79+ messages in thread
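A minimal application-side sketch of how the new fields are meant to be used together (illustrative only — the port ID, queue counts, and the 16 KB request below are assumptions, not part of the patch; rte_eth_dev_info_get() returns int as of the 19.11 cycle):

    #include <errno.h>
    #include <rte_ethdev.h>

    /* Sketch: request LRO with a bounded aggregated-packet size. The
     * capability comes from rte_eth_dev_info, the request goes into
     * rte_eth_rxmode, and rte_eth_dev_configure() validates it. */
    static int
    configure_lro(uint16_t port_id)
    {
    	struct rte_eth_dev_info dev_info;
    	struct rte_eth_conf conf = { 0 };
    	int ret;

    	ret = rte_eth_dev_info_get(port_id, &dev_info);
    	if (ret != 0)
    		return ret;
    	if ((dev_info.rx_offload_capa & DEV_RX_OFFLOAD_TCP_LRO) == 0)
    		return -ENOTSUP; /* device cannot aggregate at all */

    	conf.rxmode.offloads = DEV_RX_OFFLOAD_TCP_LRO;
    	/* Request 16 KB sessions, clamped to the advertised capability;
    	 * values above it, or below RTE_ETHER_MIN_LEN, fail with -EINVAL. */
    	conf.rxmode.max_lro_pkt_size =
    		RTE_MIN(16384u, dev_info.max_lro_pkt_size);

    	return rte_eth_dev_configure(port_id, 1, 1, &conf);
    }

For scale: the mlx5 capability added above, MLX5_MAX_LRO_SIZE, evaluates to UINT8_MAX * 256 = 255 * 256 = 65280 bytes, so a 16 KB request would pass validation on that device.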
* Re: [dpdk-dev] [PATCH v5 1/3] ethdev: support API to set max LRO packet size
2019-11-10 23:07 ` Ananyev, Konstantin
@ 2019-11-11 7:40 ` Dekel Peled
0 siblings, 0 replies; 79+ messages in thread
From: Dekel Peled @ 2019-11-11 7:40 UTC (permalink / raw)
To: Ananyev, Konstantin, Mcnamara, John, Kovacevic, Marko, nhorman,
ajit.khaparde, somnath.kotur, Burakov, Anatoly, xuanziyang2,
cloud.wangxiaoyun, zhouguoyang, Lu, Wenzhuo, Matan Azrad,
Shahaf Shuler, Slava Ovsiienko, rmody, shshaikh, maxime.coquelin,
Bie, Tiwei, Wang, Zhihong, yongwang, Thomas Monjalon, Yigit,
Ferruh, arybchenko, Wu, Jingjing, Iremonger, Bernard
Cc: dev
Thanks, PSB.
> -----Original Message-----
> From: Ananyev, Konstantin <konstantin.ananyev@intel.com>
> Sent: Monday, November 11, 2019 1:08 AM
> To: Dekel Peled <dekelp@mellanox.com>; Mcnamara, John
> <john.mcnamara@intel.com>; Kovacevic, Marko
> <marko.kovacevic@intel.com>; nhorman@tuxdriver.com;
> ajit.khaparde@broadcom.com; somnath.kotur@broadcom.com; Burakov,
> Anatoly <anatoly.burakov@intel.com>; xuanziyang2@huawei.com;
> cloud.wangxiaoyun@huawei.com; zhouguoyang@huawei.com; Lu, Wenzhuo
> <wenzhuo.lu@intel.com>; Matan Azrad <matan@mellanox.com>; Shahaf
> Shuler <shahafs@mellanox.com>; Slava Ovsiienko
> <viacheslavo@mellanox.com>; rmody@marvell.com;
> shshaikh@marvell.com; maxime.coquelin@redhat.com; Bie, Tiwei
> <tiwei.bie@intel.com>; Wang, Zhihong <zhihong.wang@intel.com>;
> yongwang@vmware.com; Thomas Monjalon <thomas@monjalon.net>; Yigit,
> Ferruh <ferruh.yigit@intel.com>; arybchenko@solarflare.com; Wu, Jingjing
> <jingjing.wu@intel.com>; Iremonger, Bernard
> <bernard.iremonger@intel.com>
> Cc: dev@dpdk.org
> Subject: RE: [PATCH v5 1/3] ethdev: support API to set max LRO packet size
>
>
>
> > This patch implements [1], to support API for configuration and
> > validation of max size for LRO aggregated packet.
> > API change notice [2] is removed, and release notes for 19.11 are
> > updated accordingly.
> >
> > Various PMDs using LRO offload are updated, the new data members are
> > initialized to ensure they don't fail validation.
> >
> > [1] http://patches.dpdk.org/patch/58217/
> > [2] http://patches.dpdk.org/patch/57492/
>
> Actually, if the requirement is just to allow the user to limit the max LRO
> size, then why not add a new function for that:
>
> int rte_eth_dev_set_max_lro(uint16_t port_id, uint32_t lro); ?
>
> And make it optional for the drivers to support it.
> That way, PMDs/devices that allow the LRO max size to be configured can
> support it; others can fail.
The current implementation is consistent with the existing max_rx_pkt_len usage, for the case where LRO is used.
When using jumbo frames, the packet length must be specified.
When using LRO, the max session size should be specified.
>
> Konstantin
>
> >
> > Signed-off-by: Dekel Peled <dekelp@mellanox.com>
> > Reviewed-by: Andrew Rybchenko <arybchenko@solarflare.com>
> > Acked-by: Thomas Monjalon <thomas@monjalon.net>
> > Acked-by: Matan Azrad <matan@mellanox.com>
> > ---
> > doc/guides/nics/features.rst | 2 ++
> > doc/guides/rel_notes/deprecation.rst | 4 ----
> > doc/guides/rel_notes/release_19_11.rst | 8 +++++++
> > drivers/net/bnxt/bnxt_ethdev.c | 1 +
> > drivers/net/hinic/hinic_pmd_ethdev.c | 1 +
> > drivers/net/ixgbe/ixgbe_ethdev.c | 1 +
> > drivers/net/mlx5/mlx5.h | 3 +++
> > drivers/net/mlx5/mlx5_ethdev.c | 1 +
> > drivers/net/mlx5/mlx5_rxq.c | 1 -
> > drivers/net/qede/qede_ethdev.c | 1 +
> > drivers/net/virtio/virtio_ethdev.c | 1 +
> > drivers/net/vmxnet3/vmxnet3_ethdev.c | 1 +
> > lib/librte_ethdev/rte_ethdev.c | 44
> ++++++++++++++++++++++++++++++++++
> > lib/librte_ethdev/rte_ethdev.h | 4 ++++
> > 14 files changed, 68 insertions(+), 5 deletions(-)
> >
> > diff --git a/doc/guides/nics/features.rst
> > b/doc/guides/nics/features.rst index 7a31cf7..2138ce3 100644
> > --- a/doc/guides/nics/features.rst
> > +++ b/doc/guides/nics/features.rst
> > @@ -193,10 +193,12 @@ LRO
> > Supports Large Receive Offload.
> >
> > * **[uses] rte_eth_rxconf,rte_eth_rxmode**:
> ``offloads:DEV_RX_OFFLOAD_TCP_LRO``.
> > + ``dev_conf.rxmode.max_lro_pkt_size``.
> > * **[implements] datapath**: ``LRO functionality``.
> > * **[implements] rte_eth_dev_data**: ``lro``.
> > * **[provides] mbuf**: ``mbuf.ol_flags:PKT_RX_LRO``,
> ``mbuf.tso_segsz``.
> > * **[provides] rte_eth_dev_info**:
> ``rx_offload_capa,rx_queue_offload_capa:DEV_RX_OFFLOAD_TCP_LRO``.
> > +* **[provides] rte_eth_dev_info**: ``max_lro_pkt_size``.
> >
> >
> > .. _nic_features_tso:
> > diff --git a/doc/guides/rel_notes/deprecation.rst
> > b/doc/guides/rel_notes/deprecation.rst
> > index c10dc30..fdec33d 100644
> > --- a/doc/guides/rel_notes/deprecation.rst
> > +++ b/doc/guides/rel_notes/deprecation.rst
> > @@ -87,10 +87,6 @@ Deprecation Notices
> > This scheme will allow PMDs to avoid lookup to internal ptype table on Rx and
> > thereby improve Rx performance if application wishes do so.
> >
> > -* ethdev: New 32-bit fields may be added for maximum LRO session size, in
> > - struct ``rte_eth_dev_info`` for the port capability and in struct
> > - ``rte_eth_rxmode`` for the port configuration.
> > -
> > * cryptodev: support for using IV with all sizes is added, J0 still can
> > be used but only when IV length in following structs
> ``rte_crypto_auth_xform``,
> > ``rte_crypto_aead_xform`` is set to zero. When IV length is greater or equal
> > diff --git a/doc/guides/rel_notes/release_19_11.rst
> > b/doc/guides/rel_notes/release_19_11.rst
> > index 87b7bd0..a3fc023 100644
> > --- a/doc/guides/rel_notes/release_19_11.rst
> > +++ b/doc/guides/rel_notes/release_19_11.rst
> > @@ -418,6 +418,14 @@ ABI Changes
> > align the Ethernet header on receive and all known encapsulations
> > preserve the alignment of the header.
> >
> > +* ethdev: Added 32-bit fields for maximum LRO aggregated packet size, in
> > + struct ``rte_eth_dev_info`` for the port capability and in struct
> > + ``rte_eth_rxmode`` for the port configuration.
> > + Application should use the new field in struct ``rte_eth_rxmode``
> > + to configure the requested size.
>
> That part I am not happy with: *application should use*.
> Many apps, I suppose, are OK with the default LRO size selected by the PMD/HW.
> Why force changes in all of them?
Again, this is to keep consistency with the max_rx_pkt_len usage.
>
> > + PMD should use the new field in struct ``rte_eth_dev_info`` to
> > + report the supported port capability.
> > +
> >
> > Shared Library Versions
> > -----------------------
> > diff --git a/drivers/net/bnxt/bnxt_ethdev.c
> > b/drivers/net/bnxt/bnxt_ethdev.c index b9b055e..741b897 100644
> > --- a/drivers/net/bnxt/bnxt_ethdev.c
> > +++ b/drivers/net/bnxt/bnxt_ethdev.c
> > @@ -519,6 +519,7 @@ static int bnxt_dev_info_get_op(struct rte_eth_dev
> *eth_dev,
> > /* Fast path specifics */
> > dev_info->min_rx_bufsize = 1;
> > dev_info->max_rx_pktlen = BNXT_MAX_PKT_LEN;
> > + dev_info->max_lro_pkt_size = BNXT_MAX_PKT_LEN;
> >
> > dev_info->rx_offload_capa = BNXT_DEV_RX_OFFLOAD_SUPPORT;
> > if (bp->flags & BNXT_FLAG_PTP_SUPPORTED)
> > diff --git a/drivers/net/hinic/hinic_pmd_ethdev.c
> > b/drivers/net/hinic/hinic_pmd_ethdev.c
> > index 9f37a40..b33b2cf 100644
> > --- a/drivers/net/hinic/hinic_pmd_ethdev.c
> > +++ b/drivers/net/hinic/hinic_pmd_ethdev.c
> > @@ -727,6 +727,7 @@ static void hinic_get_speed_capa(struct
> rte_eth_dev *dev, uint32_t *speed_capa)
> > info->max_tx_queues = nic_dev->nic_cap.max_sqs;
> > info->min_rx_bufsize = HINIC_MIN_RX_BUF_SIZE;
> > info->max_rx_pktlen = HINIC_MAX_JUMBO_FRAME_SIZE;
> > + info->max_lro_pkt_size = HINIC_MAX_JUMBO_FRAME_SIZE;
> > info->max_mac_addrs = HINIC_MAX_UC_MAC_ADDRS;
> > info->min_mtu = HINIC_MIN_MTU_SIZE;
> > info->max_mtu = HINIC_MAX_MTU_SIZE;
> > diff --git a/drivers/net/ixgbe/ixgbe_ethdev.c
> > b/drivers/net/ixgbe/ixgbe_ethdev.c
> > index 30c0379..5719552 100644
> > --- a/drivers/net/ixgbe/ixgbe_ethdev.c
> > +++ b/drivers/net/ixgbe/ixgbe_ethdev.c
> > @@ -3814,6 +3814,7 @@ static int
> ixgbevf_dev_xstats_get_names(__rte_unused struct rte_eth_dev *dev,
> > }
> > dev_info->min_rx_bufsize = 1024; /* cf BSIZEPACKET in SRRCTL register */
> > dev_info->max_rx_pktlen = 15872; /* includes CRC, cf MAXFRS register */
> > + dev_info->max_lro_pkt_size = RTE_IPV4_MAX_PKT_LEN;
> > dev_info->max_mac_addrs = hw->mac.num_rar_entries;
> > dev_info->max_hash_mac_addrs = IXGBE_VMDQ_NUM_UC_MAC;
> > dev_info->max_vfs = pci_dev->max_vfs;
> > diff --git a/drivers/net/mlx5/mlx5.h b/drivers/net/mlx5/mlx5.h
> > index fab58c9..4783b5c 100644
> > --- a/drivers/net/mlx5/mlx5.h
> > +++ b/drivers/net/mlx5/mlx5.h
> > @@ -206,6 +206,9 @@ struct mlx5_hca_attr {
> > #define MLX5_LRO_SUPPORTED(dev) \
> > (((struct mlx5_priv *)((dev)->data->dev_private))->config.lro.supported)
> >
> > +/* Maximal size of aggregated LRO packet. */
> > +#define MLX5_MAX_LRO_SIZE (UINT8_MAX * 256u)
> > +
> > /* LRO configurations structure. */
> > struct mlx5_lro_config {
> > uint32_t supported:1; /* Whether LRO is supported. */
> > diff --git a/drivers/net/mlx5/mlx5_ethdev.c b/drivers/net/mlx5/mlx5_ethdev.c
> > index 2b7c867..3adc824 100644
> > --- a/drivers/net/mlx5/mlx5_ethdev.c
> > +++ b/drivers/net/mlx5/mlx5_ethdev.c
> > @@ -606,6 +606,7 @@ struct ethtool_link_settings {
> > /* FIXME: we should ask the device for these values. */
> > info->min_rx_bufsize = 32;
> > info->max_rx_pktlen = 65536;
> > + info->max_lro_pkt_size = MLX5_MAX_LRO_SIZE;
> > /*
> > * Since we need one CQ per QP, the limit is the minimum number
> > * between the two values.
> > diff --git a/drivers/net/mlx5/mlx5_rxq.c b/drivers/net/mlx5/mlx5_rxq.c
> > index 24d0eaa..9423e7b 100644
> > --- a/drivers/net/mlx5/mlx5_rxq.c
> > +++ b/drivers/net/mlx5/mlx5_rxq.c
> > @@ -1701,7 +1701,6 @@ struct mlx5_rxq_obj *
> > return 0;
> > }
> >
> > -#define MLX5_MAX_LRO_SIZE (UINT8_MAX * 256u)
> > #define MLX5_MAX_TCP_HDR_OFFSET ((unsigned int)(sizeof(struct rte_ether_hdr) + \
> > sizeof(struct rte_vlan_hdr) * 2 + \
> > sizeof(struct rte_ipv6_hdr)))
> > diff --git a/drivers/net/qede/qede_ethdev.c
> > b/drivers/net/qede/qede_ethdev.c index 575982f..ccbb8a4 100644
> > --- a/drivers/net/qede/qede_ethdev.c
> > +++ b/drivers/net/qede/qede_ethdev.c
> > @@ -1277,6 +1277,7 @@ static int qede_dev_configure(struct rte_eth_dev
> > *eth_dev)
> >
> > dev_info->min_rx_bufsize = (uint32_t)QEDE_MIN_RX_BUFF_SIZE;
> > dev_info->max_rx_pktlen =
> (uint32_t)ETH_TX_MAX_NON_LSO_PKT_LEN;
> > + dev_info->max_lro_pkt_size = (uint32_t)0x7FFF;
> > dev_info->rx_desc_lim = qede_rx_desc_lim;
> > dev_info->tx_desc_lim = qede_tx_desc_lim;
> >
> > diff --git a/drivers/net/virtio/virtio_ethdev.c
> > b/drivers/net/virtio/virtio_ethdev.c
> > index 044eb10..22ce5a2 100644
> > --- a/drivers/net/virtio/virtio_ethdev.c
> > +++ b/drivers/net/virtio/virtio_ethdev.c
> > @@ -2435,6 +2435,7 @@ static void virtio_dev_free_mbufs(struct
> rte_eth_dev *dev)
> > RTE_MIN(hw->max_queue_pairs,
> VIRTIO_MAX_TX_QUEUES);
> > dev_info->min_rx_bufsize = VIRTIO_MIN_RX_BUFSIZE;
> > dev_info->max_rx_pktlen = VIRTIO_MAX_RX_PKTLEN;
> > + dev_info->max_lro_pkt_size = VIRTIO_MAX_RX_PKTLEN;
> > dev_info->max_mac_addrs = VIRTIO_MAX_MAC_ADDRS;
> >
> > host_features = VTPCI_OPS(hw)->get_features(hw);
> > diff --git a/drivers/net/vmxnet3/vmxnet3_ethdev.c
> > b/drivers/net/vmxnet3/vmxnet3_ethdev.c
> > index d1faeaa..d18e8bc 100644
> > --- a/drivers/net/vmxnet3/vmxnet3_ethdev.c
> > +++ b/drivers/net/vmxnet3/vmxnet3_ethdev.c
> > @@ -1161,6 +1161,7 @@ static int eth_vmxnet3_pci_remove(struct
> rte_pci_device *pci_dev)
> > dev_info->max_tx_queues = VMXNET3_MAX_TX_QUEUES;
> > dev_info->min_rx_bufsize = 1518 + RTE_PKTMBUF_HEADROOM;
> > dev_info->max_rx_pktlen = 16384; /* includes CRC, cf MAXFRS register */
> > + dev_info->max_lro_pkt_size = 16384;
> > dev_info->speed_capa = ETH_LINK_SPEED_10G;
> > dev_info->max_mac_addrs = VMXNET3_MAX_MAC_ADDRS;
> >
> > diff --git a/lib/librte_ethdev/rte_ethdev.c
> > b/lib/librte_ethdev/rte_ethdev.c index 652c369..c642ba5 100644
> > --- a/lib/librte_ethdev/rte_ethdev.c
> > +++ b/lib/librte_ethdev/rte_ethdev.c
> > @@ -1136,6 +1136,26 @@ struct rte_eth_dev *
> > return name;
> > }
> >
> > +static inline int
> > +check_lro_pkt_size(uint16_t port_id, uint32_t config_size,
> > + uint32_t dev_info_size)
> > +{
> > + int ret = 0;
> > +
> > + if (config_size > dev_info_size) {
> > + RTE_ETHDEV_LOG(ERR, "Ethdev port_id=%d max_lro_pkt_size %u "
> > + "> max allowed value %u\n", port_id, config_size,
> > + dev_info_size);
> > + ret = -EINVAL;
> > + } else if (config_size < RTE_ETHER_MIN_LEN) {
> > + RTE_ETHDEV_LOG(ERR, "Ethdev port_id=%d max_lro_pkt_size %u "
> > + "< min allowed value %u\n", port_id, config_size,
> > + (unsigned int)RTE_ETHER_MIN_LEN);
> > + ret = -EINVAL;
> > + }
> > + return ret;
> > +}
> > +
> > int
> > rte_eth_dev_configure(uint16_t port_id, uint16_t nb_rx_q, uint16_t
> nb_tx_q,
> > const struct rte_eth_conf *dev_conf)
> > @@ -1266,6 +1286,18 @@ struct rte_eth_dev *
> > RTE_ETHER_MAX_LEN;
> > }
> >
> > + /*
> > + * If LRO is enabled, check that the maximum aggregated packet
> > + * size is supported by the configured device.
> > + */
> > + if (dev_conf->rxmode.offloads & DEV_RX_OFFLOAD_TCP_LRO) {
> > + ret = check_lro_pkt_size(
> > + port_id, dev_conf->rxmode.max_lro_pkt_size,
> > + dev_info.max_lro_pkt_size);
> > + if (ret != 0)
> > + goto rollback;
> > + }
> > +
> > /* Any requested offloading must be within its device capabilities */
> > if ((dev_conf->rxmode.offloads & dev_info.rx_offload_capa) !=
> > dev_conf->rxmode.offloads) {
> > @@ -1770,6 +1802,18 @@ struct rte_eth_dev *
> > return -EINVAL;
> > }
> >
> > + /*
> > + * If LRO is enabled, check that the maximum aggregated packet
> > + * size is supported by the configured device.
> > + */
> > + if (local_conf.offloads & DEV_RX_OFFLOAD_TCP_LRO) {
> > + int ret = check_lro_pkt_size(port_id,
> > + dev->data->dev_conf.rxmode.max_lro_pkt_size,
> > + dev_info.max_lro_pkt_size);
> > + if (ret != 0)
> > + return ret;
> > + }
> > +
> > ret = (*dev->dev_ops->rx_queue_setup)(dev, rx_queue_id,
> nb_rx_desc,
> > socket_id, &local_conf, mp);
> > if (!ret) {
> > diff --git a/lib/librte_ethdev/rte_ethdev.h
> > b/lib/librte_ethdev/rte_ethdev.h index 44d77b3..1b76df5 100644
> > --- a/lib/librte_ethdev/rte_ethdev.h
> > +++ b/lib/librte_ethdev/rte_ethdev.h
> > @@ -395,6 +395,8 @@ struct rte_eth_rxmode {
> > /** The multi-queue packet distribution mode to be used, e.g. RSS. */
> > enum rte_eth_rx_mq_mode mq_mode;
> > uint32_t max_rx_pkt_len; /**< Only used if JUMBO_FRAME enabled. */
> > + /** Maximum allowed size of LRO aggregated packet. */
> > + uint32_t max_lro_pkt_size;
> > uint16_t split_hdr_size; /**< hdr buf size (header_split enabled).*/
> > /**
> > * Per-port Rx offloads to be set using DEV_RX_OFFLOAD_* flags.
> > @@ -1218,6 +1220,8 @@ struct rte_eth_dev_info {
> > const uint32_t *dev_flags; /**< Device flags */
> > uint32_t min_rx_bufsize; /**< Minimum size of RX buffer. */
> > uint32_t max_rx_pktlen; /**< Maximum configurable length of RX pkt. */
> > + /** Maximum configurable size of LRO aggregated packet. */
> > + uint32_t max_lro_pkt_size;
> > uint16_t max_rx_queues; /**< Maximum number of RX queues. */
> > uint16_t max_tx_queues; /**< Maximum number of TX queues. */
> > uint32_t max_mac_addrs; /**< Maximum number of MAC addresses. */
> > --
> > 1.8.3.1
^ permalink raw reply [flat|nested] 79+ messages in thread
* [dpdk-dev] [PATCH v5 2/3] net/mlx5: use API to set max LRO packet size
2019-11-08 16:42 ` [dpdk-dev] [PATCH v5 " Dekel Peled
2019-11-08 16:42 ` [dpdk-dev] [PATCH v5 1/3] ethdev: " Dekel Peled
@ 2019-11-08 16:42 ` Dekel Peled
2019-11-08 16:42 ` [dpdk-dev] [PATCH v5 3/3] app/testpmd: " Dekel Peled
2 siblings, 0 replies; 79+ messages in thread
From: Dekel Peled @ 2019-11-08 16:42 UTC (permalink / raw)
To: john.mcnamara, marko.kovacevic, nhorman, ajit.khaparde,
somnath.kotur, anatoly.burakov, xuanziyang2, cloud.wangxiaoyun,
zhouguoyang, wenzhuo.lu, konstantin.ananyev, matan, shahafs,
viacheslavo, rmody, shshaikh, maxime.coquelin, tiwei.bie,
zhihong.wang, yongwang, thomas, ferruh.yigit, arybchenko,
jingjing.wu, bernard.iremonger
Cc: dev
This patch implements use of the API for LRO aggregated packet
max size.
Rx queue create is updated to use the relevant configuration.
Documentation is updated accordingly.
Signed-off-by: Dekel Peled <dekelp@mellanox.com>
Acked-by: Viacheslav Ovsiienko <viacheslavo@mellanox.com>
Acked-by: Matan Azrad <matan@mellanox.com>
---
doc/guides/nics/mlx5.rst | 2 ++
drivers/net/mlx5/mlx5_rxq.c | 4 +++-
2 files changed, 5 insertions(+), 1 deletion(-)
diff --git a/doc/guides/nics/mlx5.rst b/doc/guides/nics/mlx5.rst
index 3651e82..adfaac2 100644
--- a/doc/guides/nics/mlx5.rst
+++ b/doc/guides/nics/mlx5.rst
@@ -207,6 +207,8 @@ Limitations
- KEEP_CRC offload cannot be supported with LRO.
- The first mbuf length, without head-room, must be big enough to include the
TCP header (122B).
+ - Rx queue with LRO offload enabled, receiving a non-LRO packet, can forward
+ it with the size limited to the max LRO size, not to the max RX packet length.
Statistics
----------
diff --git a/drivers/net/mlx5/mlx5_rxq.c b/drivers/net/mlx5/mlx5_rxq.c
index 9423e7b..c725e14 100644
--- a/drivers/net/mlx5/mlx5_rxq.c
+++ b/drivers/net/mlx5/mlx5_rxq.c
@@ -1772,7 +1772,9 @@ struct mlx5_rxq_ctrl *
dev->data->dev_conf.rxmode.offloads;
unsigned int lro_on_queue = !!(offloads & DEV_RX_OFFLOAD_TCP_LRO);
const int mprq_en = mlx5_check_mprq_support(dev) > 0;
- unsigned int max_rx_pkt_len = dev->data->dev_conf.rxmode.max_rx_pkt_len;
+ unsigned int max_rx_pkt_len = lro_on_queue ?
+ dev->data->dev_conf.rxmode.max_lro_pkt_size :
+ dev->data->dev_conf.rxmode.max_rx_pkt_len;
unsigned int non_scatter_min_mbuf_size = max_rx_pkt_len +
RTE_PKTMBUF_HEADROOM;
unsigned int max_lro_size = 0;
--
1.8.3.1
^ permalink raw reply [flat|nested] 79+ messages in thread
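To isolate the behavioral change in the hunk above: with LRO enabled on a queue, mlx5 sizes the Rx buffers from the LRO limit rather than from max_rx_pkt_len. A standalone sketch of that selection (the helper name and scope are illustrative; the real code sits inline in the mlx5 Rx queue creation path):

    #include <rte_ethdev.h>

    /* PMD-side sketch of the selection in the diff: when LRO is on for
     * the queue, the buffer budget follows max_lro_pkt_size instead of
     * max_rx_pkt_len. */
    static unsigned int
    effective_rx_pkt_len(struct rte_eth_dev *dev, uint64_t offloads)
    {
    	unsigned int lro_on_queue = !!(offloads & DEV_RX_OFFLOAD_TCP_LRO);

    	return lro_on_queue ?
    	       dev->data->dev_conf.rxmode.max_lro_pkt_size :
    	       dev->data->dev_conf.rxmode.max_rx_pkt_len;
    }

This sizing is also what the documented limitation above describes: a non-LRO packet arriving on an LRO-enabled queue is bounded by the max LRO size, because the queue's buffers were sized from it.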
* [dpdk-dev] [PATCH v5 3/3] app/testpmd: use API to set max LRO packet size
2019-11-08 16:42 ` [dpdk-dev] [PATCH v5 " Dekel Peled
2019-11-08 16:42 ` [dpdk-dev] [PATCH v5 1/3] ethdev: " Dekel Peled
2019-11-08 16:42 ` [dpdk-dev] [PATCH v5 2/3] net/mlx5: use " Dekel Peled
@ 2019-11-08 16:42 ` Dekel Peled
2019-11-10 23:11 ` Ananyev, Konstantin
2 siblings, 1 reply; 79+ messages in thread
From: Dekel Peled @ 2019-11-08 16:42 UTC (permalink / raw)
To: john.mcnamara, marko.kovacevic, nhorman, ajit.khaparde,
somnath.kotur, anatoly.burakov, xuanziyang2, cloud.wangxiaoyun,
zhouguoyang, wenzhuo.lu, konstantin.ananyev, matan, shahafs,
viacheslavo, rmody, shshaikh, maxime.coquelin, tiwei.bie,
zhihong.wang, yongwang, thomas, ferruh.yigit, arybchenko,
jingjing.wu, bernard.iremonger
Cc: dev
This patch implements use of the API for LRO aggregated packet
max size.
It adds command-line and runtime commands to configure this value,
and adds an option to show the supported value.
Documentation is updated accordingly.
Signed-off-by: Dekel Peled <dekelp@mellanox.com>
Acked-by: Bernard Iremonger <bernard.iremonger@intel.com>
Acked-by: Matan Azrad <matan@mellanox.com>
---
app/test-pmd/cmdline.c | 76 +++++++++++++++++++++++++++++
app/test-pmd/config.c | 2 +
app/test-pmd/parameters.c | 7 +++
app/test-pmd/testpmd.c | 1 +
doc/guides/testpmd_app_ug/run_app.rst | 5 ++
doc/guides/testpmd_app_ug/testpmd_funcs.rst | 9 ++++
6 files changed, 100 insertions(+)
diff --git a/app/test-pmd/cmdline.c b/app/test-pmd/cmdline.c
index 78c6899..2206a70 100644
--- a/app/test-pmd/cmdline.c
+++ b/app/test-pmd/cmdline.c
@@ -777,6 +777,9 @@ static void cmd_help_long_parsed(void *parsed_result,
"port config all max-pkt-len (value)\n"
" Set the max packet length.\n\n"
+ "port config all max-lro-pkt-size (value)\n"
+ " Set the max LRO aggregated packet size.\n\n"
+
"port config all drop-en (on|off)\n"
" Enable or disable packet drop on all RX queues of all ports when no "
"receive buffers available.\n\n"
@@ -2040,6 +2043,78 @@ struct cmd_config_max_pkt_len_result {
},
};
+/* *** config max LRO aggregated packet size *** */
+struct cmd_config_max_lro_pkt_size_result {
+ cmdline_fixed_string_t port;
+ cmdline_fixed_string_t keyword;
+ cmdline_fixed_string_t all;
+ cmdline_fixed_string_t name;
+ uint32_t value;
+};
+
+static void
+cmd_config_max_lro_pkt_size_parsed(void *parsed_result,
+ __attribute__((unused)) struct cmdline *cl,
+ __attribute__((unused)) void *data)
+{
+ struct cmd_config_max_lro_pkt_size_result *res = parsed_result;
+ portid_t pid;
+
+ if (!all_ports_stopped()) {
+ printf("Please stop all ports first\n");
+ return;
+ }
+
+ RTE_ETH_FOREACH_DEV(pid) {
+ struct rte_port *port = &ports[pid];
+
+ if (!strcmp(res->name, "max-lro-pkt-size")) {
+ if (res->value ==
+ port->dev_conf.rxmode.max_lro_pkt_size)
+ return;
+
+ port->dev_conf.rxmode.max_lro_pkt_size = res->value;
+ } else {
+ printf("Unknown parameter\n");
+ return;
+ }
+ }
+
+ init_port_config();
+
+ cmd_reconfig_device_queue(RTE_PORT_ALL, 1, 1);
+}
+
+cmdline_parse_token_string_t cmd_config_max_lro_pkt_size_port =
+ TOKEN_STRING_INITIALIZER(struct cmd_config_max_lro_pkt_size_result,
+ port, "port");
+cmdline_parse_token_string_t cmd_config_max_lro_pkt_size_keyword =
+ TOKEN_STRING_INITIALIZER(struct cmd_config_max_lro_pkt_size_result,
+ keyword, "config");
+cmdline_parse_token_string_t cmd_config_max_lro_pkt_size_all =
+ TOKEN_STRING_INITIALIZER(struct cmd_config_max_lro_pkt_size_result,
+ all, "all");
+cmdline_parse_token_string_t cmd_config_max_lro_pkt_size_name =
+ TOKEN_STRING_INITIALIZER(struct cmd_config_max_lro_pkt_size_result,
+ name, "max-lro-pkt-size");
+cmdline_parse_token_num_t cmd_config_max_lro_pkt_size_value =
+ TOKEN_NUM_INITIALIZER(struct cmd_config_max_lro_pkt_size_result,
+ value, UINT32);
+
+cmdline_parse_inst_t cmd_config_max_lro_pkt_size = {
+ .f = cmd_config_max_lro_pkt_size_parsed,
+ .data = NULL,
+ .help_str = "port config all max-lro-pkt-size <value>",
+ .tokens = {
+ (void *)&cmd_config_max_lro_pkt_size_port,
+ (void *)&cmd_config_max_lro_pkt_size_keyword,
+ (void *)&cmd_config_max_lro_pkt_size_all,
+ (void *)&cmd_config_max_lro_pkt_size_name,
+ (void *)&cmd_config_max_lro_pkt_size_value,
+ NULL,
+ },
+};
+
/* *** configure port MTU *** */
struct cmd_config_mtu_result {
cmdline_fixed_string_t port;
@@ -19124,6 +19199,7 @@ struct cmd_show_rx_tx_desc_status_result {
(cmdline_parse_inst_t *)&cmd_config_rx_tx,
(cmdline_parse_inst_t *)&cmd_config_mtu,
(cmdline_parse_inst_t *)&cmd_config_max_pkt_len,
+ (cmdline_parse_inst_t *)&cmd_config_max_lro_pkt_size,
(cmdline_parse_inst_t *)&cmd_config_rx_mode_flag,
(cmdline_parse_inst_t *)&cmd_config_rss,
(cmdline_parse_inst_t *)&cmd_config_rxtx_ring_size,
diff --git a/app/test-pmd/config.c b/app/test-pmd/config.c
index b603974..e1e5cf7 100644
--- a/app/test-pmd/config.c
+++ b/app/test-pmd/config.c
@@ -616,6 +616,8 @@ static int bus_match_all(const struct rte_bus *bus, const void *data)
printf("Minimum size of RX buffer: %u\n", dev_info.min_rx_bufsize);
printf("Maximum configurable length of RX packet: %u\n",
dev_info.max_rx_pktlen);
+ printf("Maximum configurable size of LRO aggregated packet: %u\n",
+ dev_info.max_lro_pkt_size);
if (dev_info.max_vfs)
printf("Maximum number of VFs: %u\n", dev_info.max_vfs);
if (dev_info.max_vmdq_pools)
diff --git a/app/test-pmd/parameters.c b/app/test-pmd/parameters.c
index 9ea87c1..eda395b 100644
--- a/app/test-pmd/parameters.c
+++ b/app/test-pmd/parameters.c
@@ -107,6 +107,8 @@
printf(" --total-num-mbufs=N: set the number of mbufs to be allocated "
"in mbuf pools.\n");
printf(" --max-pkt-len=N: set the maximum size of packet to N bytes.\n");
+ printf(" --max-lro-pkt-size=N: set the maximum LRO aggregated packet "
+ "size to N bytes.\n");
#ifdef RTE_LIBRTE_CMDLINE
printf(" --eth-peers-configfile=name: config file with ethernet addresses "
"of peer ports.\n");
@@ -592,6 +594,7 @@
{ "mbuf-size", 1, 0, 0 },
{ "total-num-mbufs", 1, 0, 0 },
{ "max-pkt-len", 1, 0, 0 },
+ { "max-lro-pkt-size", 1, 0, 0 },
{ "pkt-filter-mode", 1, 0, 0 },
{ "pkt-filter-report-hash", 1, 0, 0 },
{ "pkt-filter-size", 1, 0, 0 },
@@ -888,6 +891,10 @@
"Invalid max-pkt-len=%d - should be > %d\n",
n, RTE_ETHER_MIN_LEN);
}
+ if (!strcmp(lgopts[opt_idx].name, "max-lro-pkt-size")) {
+ n = atoi(optarg);
+ rx_mode.max_lro_pkt_size = (uint32_t) n;
+ }
if (!strcmp(lgopts[opt_idx].name, "pkt-filter-mode")) {
if (!strcmp(optarg, "signature"))
fdir_conf.mode =
diff --git a/app/test-pmd/testpmd.c b/app/test-pmd/testpmd.c
index 5ba9741..3fe694f 100644
--- a/app/test-pmd/testpmd.c
+++ b/app/test-pmd/testpmd.c
@@ -419,6 +419,7 @@ struct fwd_engine * fwd_engines[] = {
struct rte_eth_rxmode rx_mode = {
.max_rx_pkt_len = RTE_ETHER_MAX_LEN,
/**< Default maximum frame length. */
+ .max_lro_pkt_size = RTE_ETHER_MAX_LEN,
};
struct rte_eth_txmode tx_mode = {
diff --git a/doc/guides/testpmd_app_ug/run_app.rst b/doc/guides/testpmd_app_ug/run_app.rst
index 00e0c2a..721f740 100644
--- a/doc/guides/testpmd_app_ug/run_app.rst
+++ b/doc/guides/testpmd_app_ug/run_app.rst
@@ -112,6 +112,11 @@ The command line options are:
Set the maximum packet size to N bytes, where N >= 64. The default value is 1518.
+* ``--max-lro-pkt-size=N``
+
+ Set the maximum LRO aggregated packet size to N bytes, where N >= 64.
+ The default value is 1518.
+
* ``--eth-peers-configfile=name``
Use a configuration file containing the Ethernet addresses of the peer ports.
diff --git a/doc/guides/testpmd_app_ug/testpmd_funcs.rst b/doc/guides/testpmd_app_ug/testpmd_funcs.rst
index 9a5e5cb..9cfc82a 100644
--- a/doc/guides/testpmd_app_ug/testpmd_funcs.rst
+++ b/doc/guides/testpmd_app_ug/testpmd_funcs.rst
@@ -2147,6 +2147,15 @@ Set the maximum packet length::
This is equivalent to the ``--max-pkt-len`` command-line option.
+port config - max-lro-pkt-size
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+
+Set the maximum LRO aggregated packet size::
+
+ testpmd> port config all max-lro-pkt-size (value)
+
+This is equivalent to the ``--max-lro-pkt-size`` command-line option.
+
port config - Drop Packets
~~~~~~~~~~~~~~~~~~~~~~~~~~
--
1.8.3.1
^ permalink raw reply [flat|nested] 79+ messages in thread
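As a usage illustration of the new testpmd options (EAL arguments, port number, and sizes below are examples only):

    ./testpmd -l 0-1 -n 4 -- --mbuf-size=16384 --max-lro-pkt-size=16384

    testpmd> port stop all
    testpmd> port config all max-lro-pkt-size 9000
    testpmd> port start all
    testpmd> show port info 0

The last command prints the capability line added to config.c above ("Maximum configurable size of LRO aggregated packet"), so the configured value can be checked against the device limit.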
* Re: [dpdk-dev] [PATCH v5 3/3] app/testpmd: use API to set max LRO packet size
2019-11-08 16:42 ` [dpdk-dev] [PATCH v5 3/3] app/testpmd: " Dekel Peled
@ 2019-11-10 23:11 ` Ananyev, Konstantin
2019-11-11 7:40 ` Dekel Peled
0 siblings, 1 reply; 79+ messages in thread
From: Ananyev, Konstantin @ 2019-11-10 23:11 UTC (permalink / raw)
To: Dekel Peled, Mcnamara, John, Kovacevic, Marko, nhorman,
ajit.khaparde, somnath.kotur, Burakov, Anatoly, xuanziyang2,
cloud.wangxiaoyun, zhouguoyang, Lu, Wenzhuo, matan, shahafs,
viacheslavo, rmody, shshaikh, maxime.coquelin, Bie, Tiwei, Wang,
Zhihong, yongwang, thomas, Yigit, Ferruh, arybchenko, Wu,
Jingjing, Iremonger, Bernard
Cc: dev
>
> This patch implements use of the API for LRO aggregated packet
> max size.
> It adds command-line and runtime commands to configure this value,
> and adds option to show the supported value.
> Documentation is updated accordingly.
>
> Signed-off-by: Dekel Peled <dekelp@mellanox.com>
> Acked-by: Bernard Iremonger <bernard.iremonger@intel.com>
> Acked-by: Matan Azrad <matan@mellanox.com>
> ---
> app/test-pmd/cmdline.c | 76 +++++++++++++++++++++++++++++
> app/test-pmd/config.c | 2 +
> app/test-pmd/parameters.c | 7 +++
> app/test-pmd/testpmd.c | 1 +
> doc/guides/testpmd_app_ug/run_app.rst | 5 ++
> doc/guides/testpmd_app_ug/testpmd_funcs.rst | 9 ++++
> 6 files changed, 100 insertions(+)
>
> diff --git a/app/test-pmd/cmdline.c b/app/test-pmd/cmdline.c
> index 78c6899..2206a70 100644
> --- a/app/test-pmd/cmdline.c
> +++ b/app/test-pmd/cmdline.c
> @@ -777,6 +777,9 @@ static void cmd_help_long_parsed(void *parsed_result,
> "port config all max-pkt-len (value)\n"
> " Set the max packet length.\n\n"
>
> + "port config all max-lro-pkt-size (value)\n"
> + " Set the max LRO aggregated packet size.\n\n"
> +
> "port config all drop-en (on|off)\n"
> " Enable or disable packet drop on all RX queues of all ports when no "
> "receive buffers available.\n\n"
> @@ -2040,6 +2043,78 @@ struct cmd_config_max_pkt_len_result {
> },
> };
>
> +/* *** config max LRO aggregated packet size *** */
> +struct cmd_config_max_lro_pkt_size_result {
> + cmdline_fixed_string_t port;
> + cmdline_fixed_string_t keyword;
> + cmdline_fixed_string_t all;
> + cmdline_fixed_string_t name;
> + uint32_t value;
> +};
> +
> +static void
> +cmd_config_max_lro_pkt_size_parsed(void *parsed_result,
> + __attribute__((unused)) struct cmdline *cl,
> + __attribute__((unused)) void *data)
> +{
> + struct cmd_config_max_lro_pkt_size_result *res = parsed_result;
> + portid_t pid;
> +
> + if (!all_ports_stopped()) {
> + printf("Please stop all ports first\n");
> + return;
> + }
> +
> + RTE_ETH_FOREACH_DEV(pid) {
> + struct rte_port *port = &ports[pid];
> +
> + if (!strcmp(res->name, "max-lro-pkt-size")) {
> + if (res->value ==
> + port->dev_conf.rxmode.max_lro_pkt_size)
> + return;
> +
> + port->dev_conf.rxmode.max_lro_pkt_size = res->value;
> + } else {
> + printf("Unknown parameter\n");
> + return;
> + }
> + }
> +
> + init_port_config();
> +
> + cmd_reconfig_device_queue(RTE_PORT_ALL, 1, 1);
> +}
> +
> +cmdline_parse_token_string_t cmd_config_max_lro_pkt_size_port =
> + TOKEN_STRING_INITIALIZER(struct cmd_config_max_lro_pkt_size_result,
> + port, "port");
> +cmdline_parse_token_string_t cmd_config_max_lro_pkt_size_keyword =
> + TOKEN_STRING_INITIALIZER(struct cmd_config_max_lro_pkt_size_result,
> + keyword, "config");
> +cmdline_parse_token_string_t cmd_config_max_lro_pkt_size_all =
> + TOKEN_STRING_INITIALIZER(struct cmd_config_max_lro_pkt_size_result,
> + all, "all");
> +cmdline_parse_token_string_t cmd_config_max_lro_pkt_size_name =
> + TOKEN_STRING_INITIALIZER(struct cmd_config_max_lro_pkt_size_result,
> + name, "max-lro-pkt-size");
> +cmdline_parse_token_num_t cmd_config_max_lro_pkt_size_value =
> + TOKEN_NUM_INITIALIZER(struct cmd_config_max_lro_pkt_size_result,
> + value, UINT32);
> +
> +cmdline_parse_inst_t cmd_config_max_lro_pkt_size = {
> + .f = cmd_config_max_lro_pkt_size_parsed,
> + .data = NULL,
> + .help_str = "port config all max-lro-pkt-size <value>",
> + .tokens = {
> + (void *)&cmd_config_max_lro_pkt_size_port,
> + (void *)&cmd_config_max_lro_pkt_size_keyword,
> + (void *)&cmd_config_max_lro_pkt_size_all,
> + (void *)&cmd_config_max_lro_pkt_size_name,
> + (void *)&cmd_config_max_lro_pkt_size_value,
> + NULL,
> + },
> +};
> +
> /* *** configure port MTU *** */
> struct cmd_config_mtu_result {
> cmdline_fixed_string_t port;
> @@ -19124,6 +19199,7 @@ struct cmd_show_rx_tx_desc_status_result {
> (cmdline_parse_inst_t *)&cmd_config_rx_tx,
> (cmdline_parse_inst_t *)&cmd_config_mtu,
> (cmdline_parse_inst_t *)&cmd_config_max_pkt_len,
> + (cmdline_parse_inst_t *)&cmd_config_max_lro_pkt_size,
> (cmdline_parse_inst_t *)&cmd_config_rx_mode_flag,
> (cmdline_parse_inst_t *)&cmd_config_rss,
> (cmdline_parse_inst_t *)&cmd_config_rxtx_ring_size,
> diff --git a/app/test-pmd/config.c b/app/test-pmd/config.c
> index b603974..e1e5cf7 100644
> --- a/app/test-pmd/config.c
> +++ b/app/test-pmd/config.c
> @@ -616,6 +616,8 @@ static int bus_match_all(const struct rte_bus *bus, const void *data)
> printf("Minimum size of RX buffer: %u\n", dev_info.min_rx_bufsize);
> printf("Maximum configurable length of RX packet: %u\n",
> dev_info.max_rx_pktlen);
> + printf("Maximum configurable size of LRO aggregated packet: %u\n",
> + dev_info.max_lro_pkt_size);
> if (dev_info.max_vfs)
> printf("Maximum number of VFs: %u\n", dev_info.max_vfs);
> if (dev_info.max_vmdq_pools)
> diff --git a/app/test-pmd/parameters.c b/app/test-pmd/parameters.c
> index 9ea87c1..eda395b 100644
> --- a/app/test-pmd/parameters.c
> +++ b/app/test-pmd/parameters.c
> @@ -107,6 +107,8 @@
> printf(" --total-num-mbufs=N: set the number of mbufs to be allocated "
> "in mbuf pools.\n");
> printf(" --max-pkt-len=N: set the maximum size of packet to N bytes.\n");
> + printf(" --max-lro-pkt-size=N: set the maximum LRO aggregated packet "
> + "size to N bytes.\n");
> #ifdef RTE_LIBRTE_CMDLINE
> printf(" --eth-peers-configfile=name: config file with ethernet addresses "
> "of peer ports.\n");
> @@ -592,6 +594,7 @@
> { "mbuf-size", 1, 0, 0 },
> { "total-num-mbufs", 1, 0, 0 },
> { "max-pkt-len", 1, 0, 0 },
> + { "max-lro-pkt-size", 1, 0, 0 },
> { "pkt-filter-mode", 1, 0, 0 },
> { "pkt-filter-report-hash", 1, 0, 0 },
> { "pkt-filter-size", 1, 0, 0 },
> @@ -888,6 +891,10 @@
> "Invalid max-pkt-len=%d - should be > %d\n",
> n, RTE_ETHER_MIN_LEN);
> }
> + if (!strcmp(lgopts[opt_idx].name, "max-lro-pkt-size")) {
> + n = atoi(optarg);
> + rx_mode.max_lro_pkt_size = (uint32_t) n;
> + }
> if (!strcmp(lgopts[opt_idx].name, "pkt-filter-mode")) {
> if (!strcmp(optarg, "signature"))
> fdir_conf.mode =
> diff --git a/app/test-pmd/testpmd.c b/app/test-pmd/testpmd.c
> index 5ba9741..3fe694f 100644
> --- a/app/test-pmd/testpmd.c
> +++ b/app/test-pmd/testpmd.c
> @@ -419,6 +419,7 @@ struct fwd_engine * fwd_engines[] = {
> struct rte_eth_rxmode rx_mode = {
> .max_rx_pkt_len = RTE_ETHER_MAX_LEN,
> /**< Default maximum frame length. */
> + .max_lro_pkt_size = RTE_ETHER_MAX_LEN,
That looks like a change in current testpmd behavior, correct?
If so, is there a real need for that?
Can't we have some value for max_lro_pktlen that would
indicate the PMD should use the default value it prefers?
Or, probably better, have a separate function to set the max LRO size.
Then by default the PMD will always use its preferred value,
and when needed the user can change it via a special function call.
> };
>
> struct rte_eth_txmode tx_mode = {
> diff --git a/doc/guides/testpmd_app_ug/run_app.rst b/doc/guides/testpmd_app_ug/run_app.rst
> index 00e0c2a..721f740 100644
> --- a/doc/guides/testpmd_app_ug/run_app.rst
> +++ b/doc/guides/testpmd_app_ug/run_app.rst
> @@ -112,6 +112,11 @@ The command line options are:
>
> Set the maximum packet size to N bytes, where N >= 64. The default value is 1518.
>
> +* ``--max-lro-pkt-size=N``
> +
> + Set the maximum LRO aggregated packet size to N bytes, where N >= 64.
> + The default value is 1518.
> +
> * ``--eth-peers-configfile=name``
>
> Use a configuration file containing the Ethernet addresses of the peer ports.
> diff --git a/doc/guides/testpmd_app_ug/testpmd_funcs.rst b/doc/guides/testpmd_app_ug/testpmd_funcs.rst
> index 9a5e5cb..9cfc82a 100644
> --- a/doc/guides/testpmd_app_ug/testpmd_funcs.rst
> +++ b/doc/guides/testpmd_app_ug/testpmd_funcs.rst
> @@ -2147,6 +2147,15 @@ Set the maximum packet length::
>
> This is equivalent to the ``--max-pkt-len`` command-line option.
>
> +port config - max-lro-pkt-size
> +~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
> +
> +Set the maximum LRO aggregated packet size::
> +
> + testpmd> port config all max-lro-pkt-size (value)
> +
> +This is equivalent to the ``--max-lro-pkt-size`` command-line option.
> +
> port config - Drop Packets
> ~~~~~~~~~~~~~~~~~~~~~~~~~~
>
> --
> 1.8.3.1
^ permalink raw reply [flat|nested] 79+ messages in thread
* Re: [dpdk-dev] [PATCH v5 3/3] app/testpmd: use API to set max LRO packet size
2019-11-10 23:11 ` Ananyev, Konstantin
@ 2019-11-11 7:40 ` Dekel Peled
0 siblings, 0 replies; 79+ messages in thread
From: Dekel Peled @ 2019-11-11 7:40 UTC (permalink / raw)
To: Ananyev, Konstantin, Mcnamara, John, Kovacevic, Marko, nhorman,
ajit.khaparde, somnath.kotur, Burakov, Anatoly, xuanziyang2,
cloud.wangxiaoyun, zhouguoyang, Lu, Wenzhuo, Matan Azrad,
Shahaf Shuler, Slava Ovsiienko, rmody, shshaikh, maxime.coquelin,
Bie, Tiwei, Wang, Zhihong, yongwang, Thomas Monjalon, Yigit,
Ferruh, arybchenko, Wu, Jingjing, Iremonger, Bernard
Cc: dev
Thanks, PSB.
> -----Original Message-----
> From: Ananyev, Konstantin <konstantin.ananyev@intel.com>
> Sent: Monday, November 11, 2019 1:11 AM
> To: Dekel Peled <dekelp@mellanox.com>; Mcnamara, John
> <john.mcnamara@intel.com>; Kovacevic, Marko
> <marko.kovacevic@intel.com>; nhorman@tuxdriver.com;
> ajit.khaparde@broadcom.com; somnath.kotur@broadcom.com; Burakov,
> Anatoly <anatoly.burakov@intel.com>; xuanziyang2@huawei.com;
> cloud.wangxiaoyun@huawei.com; zhouguoyang@huawei.com; Lu, Wenzhuo
> <wenzhuo.lu@intel.com>; Matan Azrad <matan@mellanox.com>; Shahaf
> Shuler <shahafs@mellanox.com>; Slava Ovsiienko
> <viacheslavo@mellanox.com>; rmody@marvell.com;
> shshaikh@marvell.com; maxime.coquelin@redhat.com; Bie, Tiwei
> <tiwei.bie@intel.com>; Wang, Zhihong <zhihong.wang@intel.com>;
> yongwang@vmware.com; Thomas Monjalon <thomas@monjalon.net>; Yigit,
> Ferruh <ferruh.yigit@intel.com>; arybchenko@solarflare.com; Wu, Jingjing
> <jingjing.wu@intel.com>; Iremonger, Bernard
> <bernard.iremonger@intel.com>
> Cc: dev@dpdk.org
> Subject: RE: [PATCH v5 3/3] app/testpmd: use API to set max LRO packet size
>
>
> >
> > This patch implements use of the API for LRO aggregated packet max
> > size.
> > It adds command-line and runtime commands to configure this value, and
> > adds option to show the supported value.
> > Documentation is updated accordingly.
> >
> > Signed-off-by: Dekel Peled <dekelp@mellanox.com>
> > Acked-by: Bernard Iremonger <bernard.iremonger@intel.com>
> > Acked-by: Matan Azrad <matan@mellanox.com>
> > ---
> > app/test-pmd/cmdline.c | 76
> +++++++++++++++++++++++++++++
> > app/test-pmd/config.c | 2 +
> > app/test-pmd/parameters.c | 7 +++
> > app/test-pmd/testpmd.c | 1 +
> > doc/guides/testpmd_app_ug/run_app.rst | 5 ++
> > doc/guides/testpmd_app_ug/testpmd_funcs.rst | 9 ++++
> > 6 files changed, 100 insertions(+)
> >
> > diff --git a/app/test-pmd/cmdline.c b/app/test-pmd/cmdline.c index
> > 78c6899..2206a70 100644
> > --- a/app/test-pmd/cmdline.c
> > +++ b/app/test-pmd/cmdline.c
> > @@ -777,6 +777,9 @@ static void cmd_help_long_parsed(void
> *parsed_result,
> > "port config all max-pkt-len (value)\n"
> > " Set the max packet length.\n\n"
> >
> > + "port config all max-lro-pkt-size (value)\n"
> > + " Set the max LRO aggregated packet size.\n\n"
> > +
> > "port config all drop-en (on|off)\n"
> > " Enable or disable packet drop on all RX queues of
> all ports when no "
> > "receive buffers available.\n\n"
> > @@ -2040,6 +2043,78 @@ struct cmd_config_max_pkt_len_result {
> > },
> > };
> >
> > +/* *** config max LRO aggregated packet size *** */
> > +struct cmd_config_max_lro_pkt_size_result {
> > + cmdline_fixed_string_t port;
> > + cmdline_fixed_string_t keyword;
> > + cmdline_fixed_string_t all;
> > + cmdline_fixed_string_t name;
> > + uint32_t value;
> > +};
> > +
> > +static void
> > +cmd_config_max_lro_pkt_size_parsed(void *parsed_result,
> > + __attribute__((unused)) struct cmdline *cl,
> > + __attribute__((unused)) void *data)
> > +{
> > + struct cmd_config_max_lro_pkt_size_result *res = parsed_result;
> > + portid_t pid;
> > +
> > + if (!all_ports_stopped()) {
> > + printf("Please stop all ports first\n");
> > + return;
> > + }
> > +
> > + RTE_ETH_FOREACH_DEV(pid) {
> > + struct rte_port *port = &ports[pid];
> > +
> > + if (!strcmp(res->name, "max-lro-pkt-size")) {
> > + if (res->value ==
> > + port->dev_conf.rxmode.max_lro_pkt_size)
> > + return;
> > +
> > + port->dev_conf.rxmode.max_lro_pkt_size = res->value;
> > + } else {
> > + printf("Unknown parameter\n");
> > + return;
> > + }
> > + }
> > +
> > + init_port_config();
> > +
> > + cmd_reconfig_device_queue(RTE_PORT_ALL, 1, 1);
> > +}
> > +
> > +cmdline_parse_token_string_t cmd_config_max_lro_pkt_size_port =
> > + TOKEN_STRING_INITIALIZER(struct
> cmd_config_max_lro_pkt_size_result,
> > + port, "port");
> > +cmdline_parse_token_string_t cmd_config_max_lro_pkt_size_keyword
> =
> > + TOKEN_STRING_INITIALIZER(struct
> cmd_config_max_lro_pkt_size_result,
> > + keyword, "config");
> > +cmdline_parse_token_string_t cmd_config_max_lro_pkt_size_all =
> > + TOKEN_STRING_INITIALIZER(struct
> cmd_config_max_lro_pkt_size_result,
> > + all, "all");
> > +cmdline_parse_token_string_t cmd_config_max_lro_pkt_size_name =
> > + TOKEN_STRING_INITIALIZER(struct
> cmd_config_max_lro_pkt_size_result,
> > + name, "max-lro-pkt-size");
> > +cmdline_parse_token_num_t cmd_config_max_lro_pkt_size_value =
> > + TOKEN_NUM_INITIALIZER(struct
> cmd_config_max_lro_pkt_size_result,
> > + value, UINT32);
> > +
> > +cmdline_parse_inst_t cmd_config_max_lro_pkt_size = {
> > + .f = cmd_config_max_lro_pkt_size_parsed,
> > + .data = NULL,
> > + .help_str = "port config all max-lro-pkt-size <value>",
> > + .tokens = {
> > + (void *)&cmd_config_max_lro_pkt_size_port,
> > + (void *)&cmd_config_max_lro_pkt_size_keyword,
> > + (void *)&cmd_config_max_lro_pkt_size_all,
> > + (void *)&cmd_config_max_lro_pkt_size_name,
> > + (void *)&cmd_config_max_lro_pkt_size_value,
> > + NULL,
> > + },
> > +};
> > +
> > /* *** configure port MTU *** */
> > struct cmd_config_mtu_result {
> > cmdline_fixed_string_t port;
> > @@ -19124,6 +19199,7 @@ struct cmd_show_rx_tx_desc_status_result {
> > (cmdline_parse_inst_t *)&cmd_config_rx_tx,
> > (cmdline_parse_inst_t *)&cmd_config_mtu,
> > (cmdline_parse_inst_t *)&cmd_config_max_pkt_len,
> > + (cmdline_parse_inst_t *)&cmd_config_max_lro_pkt_size,
> > (cmdline_parse_inst_t *)&cmd_config_rx_mode_flag,
> > (cmdline_parse_inst_t *)&cmd_config_rss,
> > (cmdline_parse_inst_t *)&cmd_config_rxtx_ring_size,
> > diff --git a/app/test-pmd/config.c b/app/test-pmd/config.c
> > index b603974..e1e5cf7 100644
> > --- a/app/test-pmd/config.c
> > +++ b/app/test-pmd/config.c
> > @@ -616,6 +616,8 @@ static int bus_match_all(const struct rte_bus *bus,
> const void *data)
> > printf("Minimum size of RX buffer: %u\n",
> dev_info.min_rx_bufsize);
> > printf("Maximum configurable length of RX packet: %u\n",
> > dev_info.max_rx_pktlen);
> > + printf("Maximum configurable size of LRO aggregated packet: %u\n",
> > + dev_info.max_lro_pkt_size);
> > if (dev_info.max_vfs)
> > printf("Maximum number of VFs: %u\n", dev_info.max_vfs);
> > if (dev_info.max_vmdq_pools)
> > diff --git a/app/test-pmd/parameters.c b/app/test-pmd/parameters.c
> > index 9ea87c1..eda395b 100644
> > --- a/app/test-pmd/parameters.c
> > +++ b/app/test-pmd/parameters.c
> > @@ -107,6 +107,8 @@
> > printf(" --total-num-mbufs=N: set the number of mbufs to be
> allocated "
> > "in mbuf pools.\n");
> > printf(" --max-pkt-len=N: set the maximum size of packet to N
> > bytes.\n");
> > + printf(" --max-lro-pkt-size=N: set the maximum LRO aggregated
> packet "
> > + "size to N bytes.\n");
> > #ifdef RTE_LIBRTE_CMDLINE
> > printf(" --eth-peers-configfile=name: config file with ethernet
> addresses "
> > "of peer ports.\n");
> > @@ -592,6 +594,7 @@
> > { "mbuf-size", 1, 0, 0 },
> > { "total-num-mbufs", 1, 0, 0 },
> > { "max-pkt-len", 1, 0, 0 },
> > + { "max-lro-pkt-size", 1, 0, 0 },
> > { "pkt-filter-mode", 1, 0, 0 },
> > { "pkt-filter-report-hash", 1, 0, 0 },
> > { "pkt-filter-size", 1, 0, 0 },
> > @@ -888,6 +891,10 @@
> > "Invalid max-pkt-len=%d -
> should be > %d\n",
> > n, RTE_ETHER_MIN_LEN);
> > }
> > + if (!strcmp(lgopts[opt_idx].name, "max-lro-pkt-
> size")) {
> > + n = atoi(optarg);
> > + rx_mode.max_lro_pkt_size = (uint32_t) n;
> > + }
> > if (!strcmp(lgopts[opt_idx].name, "pkt-filter-mode"))
> {
> > if (!strcmp(optarg, "signature"))
> > fdir_conf.mode =
> > diff --git a/app/test-pmd/testpmd.c b/app/test-pmd/testpmd.c index
> > 5ba9741..3fe694f 100644
> > --- a/app/test-pmd/testpmd.c
> > +++ b/app/test-pmd/testpmd.c
> > @@ -419,6 +419,7 @@ struct fwd_engine * fwd_engines[] = {
> > struct rte_eth_rxmode rx_mode = {
> > .max_rx_pkt_len = RTE_ETHER_MAX_LEN,
> > /**< Default maximum frame length. */
> > + .max_lro_pkt_size = RTE_ETHER_MAX_LEN,
>
> That looks like a change in current testpmd behavior, correct?
> If so, is there a real need for that?
> Can't we have some value for max_lro_pktlen that would indicate
> the PMD should use the default value it prefers?
> Or, probably better, have a separate function to set the max LRO size.
> Then by default the PMD will always use its preferred value, and when needed
> the user can change it via a special function call.
>
This is to keep consistency with the max_rx_pkt_len usage.
> > };
> >
> > struct rte_eth_txmode tx_mode = {
> > diff --git a/doc/guides/testpmd_app_ug/run_app.rst
> > b/doc/guides/testpmd_app_ug/run_app.rst
> > index 00e0c2a..721f740 100644
> > --- a/doc/guides/testpmd_app_ug/run_app.rst
> > +++ b/doc/guides/testpmd_app_ug/run_app.rst
> > @@ -112,6 +112,11 @@ The command line options are:
> >
> > Set the maximum packet size to N bytes, where N >= 64. The default
> value is 1518.
> >
> > +* ``--max-lro-pkt-size=N``
> > +
> > + Set the maximum LRO aggregated packet size to N bytes, where N >=
> 64.
> > + The default value is 1518.
> > +
> > * ``--eth-peers-configfile=name``
> >
> > Use a configuration file containing the Ethernet addresses of the peer
> ports.
> > diff --git a/doc/guides/testpmd_app_ug/testpmd_funcs.rst
> > b/doc/guides/testpmd_app_ug/testpmd_funcs.rst
> > index 9a5e5cb..9cfc82a 100644
> > --- a/doc/guides/testpmd_app_ug/testpmd_funcs.rst
> > +++ b/doc/guides/testpmd_app_ug/testpmd_funcs.rst
> > @@ -2147,6 +2147,15 @@ Set the maximum packet length::
> >
> > This is equivalent to the ``--max-pkt-len`` command-line option.
> >
> > +port config - max-lro-pkt-size
> > +~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
> > +
> > +Set the maximum LRO aggregated packet size::
> > +
> > + testpmd> port config all max-lro-pkt-size (value)
> > +
> > +This is equivalent to the ``--max-lro-pkt-size`` command-line option.
> > +
> > port config - Drop Packets
> > ~~~~~~~~~~~~~~~~~~~~~~~~~~
> >
> > --
> > 1.8.3.1
^ permalink raw reply [flat|nested] 79+ messages in thread
* [dpdk-dev] [PATCH v6] ethdev: add max LRO packet size
2019-11-05 8:40 [dpdk-dev] [PATCH 0/3] support API to set max LRO packet size Dekel Peled
` (4 preceding siblings ...)
2019-11-06 11:34 ` [dpdk-dev] [PATCH v2 " Dekel Peled
@ 2019-11-08 23:07 ` Thomas Monjalon
2019-11-10 22:47 ` Ananyev, Konstantin
2019-11-11 17:47 ` [dpdk-dev] [PATCH v7 0/3] support API to set " Dekel Peled
5 siblings, 2 replies; 79+ messages in thread
From: Thomas Monjalon @ 2019-11-08 23:07 UTC (permalink / raw)
To: John McNamara, Marko Kovacevic, Neil Horman, Ajit Khaparde,
Somnath Kotur, Ziyang Xuan, Xiaoyun Wang, Guoyang Zhou,
Wenzhuo Lu, Konstantin Ananyev, Matan Azrad, Shahaf Shuler,
Viacheslav Ovsiienko, Rasesh Mody, Shahed Shaikh,
Maxime Coquelin, Tiwei Bie, Zhihong Wang, Yong Wang,
Ferruh Yigit, Andrew Rybchenko
Cc: dev, Dekel Peled
From: Dekel Peled <dekelp@mellanox.com>
The maximum supported aggregated packet size for LRO
is advertised in rte_eth_dev_info.
For some devices, max_lro_pktlen may be different from
the basic max_rx_pktlen property.
Various PMDs supporting LRO are updated.
Signed-off-by: Dekel Peled <dekelp@mellanox.com>
Signed-off-by: Thomas Monjalon <thomas@monjalon.net>
---
v6: This is half of v5 1/3. Only the agreed part is here.
Hope it represents the consensus, so we make a step forward.
The field max_lro_pkt_size is renamed to max_lro_pktlen
in order to look like max_rx_pktlen.
---
doc/guides/nics/features.rst | 1 +
doc/guides/rel_notes/deprecation.rst | 4 ----
doc/guides/rel_notes/release_19_11.rst | 3 +++
drivers/net/bnxt/bnxt_ethdev.c | 1 +
drivers/net/hinic/hinic_pmd_ethdev.c | 1 +
drivers/net/ixgbe/ixgbe_ethdev.c | 1 +
drivers/net/mlx5/mlx5.h | 3 +++
drivers/net/mlx5/mlx5_ethdev.c | 1 +
drivers/net/mlx5/mlx5_rxq.c | 1 -
drivers/net/qede/qede_ethdev.c | 1 +
drivers/net/virtio/virtio_ethdev.c | 1 +
drivers/net/vmxnet3/vmxnet3_ethdev.c | 1 +
lib/librte_ethdev/rte_ethdev.h | 1 +
13 files changed, 15 insertions(+), 5 deletions(-)
diff --git a/doc/guides/nics/features.rst b/doc/guides/nics/features.rst
index d96696801a..1b2e120a9d 100644
--- a/doc/guides/nics/features.rst
+++ b/doc/guides/nics/features.rst
@@ -197,6 +197,7 @@ Supports Large Receive Offload.
* **[implements] rte_eth_dev_data**: ``lro``.
* **[provides] mbuf**: ``mbuf.ol_flags:PKT_RX_LRO``, ``mbuf.tso_segsz``.
* **[provides] rte_eth_dev_info**: ``rx_offload_capa,rx_queue_offload_capa:DEV_RX_OFFLOAD_TCP_LRO``.
+* **[provides] rte_eth_dev_info**: ``max_lro_pktlen``.
.. _nic_features_tso:
diff --git a/doc/guides/rel_notes/deprecation.rst b/doc/guides/rel_notes/deprecation.rst
index b0b992dcb5..d4fcf9975b 100644
--- a/doc/guides/rel_notes/deprecation.rst
+++ b/doc/guides/rel_notes/deprecation.rst
@@ -88,10 +88,6 @@ Deprecation Notices
This scheme will allow PMDs to avoid lookup to internal ptype table on Rx and
thereby improve Rx performance if application wishes do so.
-* ethdev: New 32-bit fields may be added for maximum LRO session size, in
- struct ``rte_eth_dev_info`` for the port capability and in struct
- ``rte_eth_rxmode`` for the port configuration.
-
* cryptodev: support for using IV with all sizes is added, J0 still can
be used but only when IV length in following structs ``rte_crypto_auth_xform``,
``rte_crypto_aead_xform`` is set to zero. When IV length is greater or equal
diff --git a/doc/guides/rel_notes/release_19_11.rst b/doc/guides/rel_notes/release_19_11.rst
index 795c7601c0..473af44374 100644
--- a/doc/guides/rel_notes/release_19_11.rst
+++ b/doc/guides/rel_notes/release_19_11.rst
@@ -403,6 +403,9 @@ ABI Changes
align the Ethernet header on receive and all known encapsulations
preserve the alignment of the header.
+* ethdev: Added 32-bit field for maximum LRO aggregated packet size,
+ as port capability in the struct ``rte_eth_dev_info``.
+
* security: The field ``replay_win_sz`` has been moved from ipsec library
based ``rte_ipsec_sa_prm`` structure to security library based structure
``rte_security_ipsec_xform``, which specify the Anti replay window size
diff --git a/drivers/net/bnxt/bnxt_ethdev.c b/drivers/net/bnxt/bnxt_ethdev.c
index 58a4f98c9f..95c60a3757 100644
--- a/drivers/net/bnxt/bnxt_ethdev.c
+++ b/drivers/net/bnxt/bnxt_ethdev.c
@@ -535,6 +535,7 @@ static int bnxt_dev_info_get_op(struct rte_eth_dev *eth_dev,
/* Fast path specifics */
dev_info->min_rx_bufsize = 1;
dev_info->max_rx_pktlen = BNXT_MAX_PKT_LEN;
+ dev_info->max_lro_pktlen = BNXT_MAX_PKT_LEN;
dev_info->rx_offload_capa = BNXT_DEV_RX_OFFLOAD_SUPPORT;
if (bp->flags & BNXT_FLAG_PTP_SUPPORTED)
diff --git a/drivers/net/hinic/hinic_pmd_ethdev.c b/drivers/net/hinic/hinic_pmd_ethdev.c
index 9f37a404be..cbd2d032f9 100644
--- a/drivers/net/hinic/hinic_pmd_ethdev.c
+++ b/drivers/net/hinic/hinic_pmd_ethdev.c
@@ -727,6 +727,7 @@ hinic_dev_infos_get(struct rte_eth_dev *dev, struct rte_eth_dev_info *info)
info->max_tx_queues = nic_dev->nic_cap.max_sqs;
info->min_rx_bufsize = HINIC_MIN_RX_BUF_SIZE;
info->max_rx_pktlen = HINIC_MAX_JUMBO_FRAME_SIZE;
+ info->max_lro_pktlen = HINIC_MAX_JUMBO_FRAME_SIZE;
info->max_mac_addrs = HINIC_MAX_UC_MAC_ADDRS;
info->min_mtu = HINIC_MIN_MTU_SIZE;
info->max_mtu = HINIC_MAX_MTU_SIZE;
diff --git a/drivers/net/ixgbe/ixgbe_ethdev.c b/drivers/net/ixgbe/ixgbe_ethdev.c
index dbce7a80e9..a01b8bbf10 100644
--- a/drivers/net/ixgbe/ixgbe_ethdev.c
+++ b/drivers/net/ixgbe/ixgbe_ethdev.c
@@ -3804,6 +3804,7 @@ ixgbe_dev_info_get(struct rte_eth_dev *dev, struct rte_eth_dev_info *dev_info)
}
dev_info->min_rx_bufsize = 1024; /* cf BSIZEPACKET in SRRCTL register */
dev_info->max_rx_pktlen = 15872; /* includes CRC, cf MAXFRS register */
+ dev_info->max_lro_pktlen = RTE_IPV4_MAX_PKT_LEN;
dev_info->max_mac_addrs = hw->mac.num_rar_entries;
dev_info->max_hash_mac_addrs = IXGBE_VMDQ_NUM_UC_MAC;
dev_info->max_vfs = pci_dev->max_vfs;
diff --git a/drivers/net/mlx5/mlx5.h b/drivers/net/mlx5/mlx5.h
index b6a51b2b4d..935adbbbf3 100644
--- a/drivers/net/mlx5/mlx5.h
+++ b/drivers/net/mlx5/mlx5.h
@@ -198,6 +198,9 @@ TAILQ_HEAD(mlx5_flows, rte_flow);
#define MLX5_LRO_SUPPORTED(dev) \
(((struct mlx5_priv *)((dev)->data->dev_private))->config.lro.supported)
+/* Maximal size of aggregated LRO packet. */
+#define MLX5_MAX_LRO_SIZE (UINT8_MAX * 256u)
+
/* LRO configurations structure. */
struct mlx5_lro_config {
uint32_t supported:1; /* Whether LRO is supported. */
diff --git a/drivers/net/mlx5/mlx5_ethdev.c b/drivers/net/mlx5/mlx5_ethdev.c
index 2278b24c01..91de186365 100644
--- a/drivers/net/mlx5/mlx5_ethdev.c
+++ b/drivers/net/mlx5/mlx5_ethdev.c
@@ -552,6 +552,7 @@ mlx5_dev_infos_get(struct rte_eth_dev *dev, struct rte_eth_dev_info *info)
/* FIXME: we should ask the device for these values. */
info->min_rx_bufsize = 32;
info->max_rx_pktlen = 65536;
+ info->max_lro_pktlen = MLX5_MAX_LRO_SIZE;
/*
* Since we need one CQ per QP, the limit is the minimum number
* between the two values.
diff --git a/drivers/net/mlx5/mlx5_rxq.c b/drivers/net/mlx5/mlx5_rxq.c
index f0ab8438d3..aca2e67e0c 100644
--- a/drivers/net/mlx5/mlx5_rxq.c
+++ b/drivers/net/mlx5/mlx5_rxq.c
@@ -1524,7 +1524,6 @@ mlx5_mprq_alloc_mp(struct rte_eth_dev *dev)
return 0;
}
-#define MLX5_MAX_LRO_SIZE (UINT8_MAX * 256u)
#define MLX5_MAX_TCP_HDR_OFFSET ((unsigned int)(sizeof(struct rte_ether_hdr) + \
sizeof(struct rte_vlan_hdr) * 2 + \
sizeof(struct rte_ipv6_hdr)))
diff --git a/drivers/net/qede/qede_ethdev.c b/drivers/net/qede/qede_ethdev.c
index 53fdfde9a8..fd05856836 100644
--- a/drivers/net/qede/qede_ethdev.c
+++ b/drivers/net/qede/qede_ethdev.c
@@ -1273,6 +1273,7 @@ qede_dev_info_get(struct rte_eth_dev *eth_dev,
dev_info->min_rx_bufsize = (uint32_t)QEDE_MIN_RX_BUFF_SIZE;
dev_info->max_rx_pktlen = (uint32_t)ETH_TX_MAX_NON_LSO_PKT_LEN;
+ dev_info->max_lro_pktlen = (uint32_t)0x7FFF;
dev_info->rx_desc_lim = qede_rx_desc_lim;
dev_info->tx_desc_lim = qede_tx_desc_lim;
diff --git a/drivers/net/virtio/virtio_ethdev.c b/drivers/net/virtio/virtio_ethdev.c
index 646de9945c..d97f3c6645 100644
--- a/drivers/net/virtio/virtio_ethdev.c
+++ b/drivers/net/virtio/virtio_ethdev.c
@@ -2435,6 +2435,7 @@ virtio_dev_info_get(struct rte_eth_dev *dev, struct rte_eth_dev_info *dev_info)
RTE_MIN(hw->max_queue_pairs, VIRTIO_MAX_TX_QUEUES);
dev_info->min_rx_bufsize = VIRTIO_MIN_RX_BUFSIZE;
dev_info->max_rx_pktlen = VIRTIO_MAX_RX_PKTLEN;
+ dev_info->max_lro_pktlen = VIRTIO_MAX_RX_PKTLEN;
dev_info->max_mac_addrs = VIRTIO_MAX_MAC_ADDRS;
host_features = VTPCI_OPS(hw)->get_features(hw);
diff --git a/drivers/net/vmxnet3/vmxnet3_ethdev.c b/drivers/net/vmxnet3/vmxnet3_ethdev.c
index d1faeaa81b..6c99a2a8e0 100644
--- a/drivers/net/vmxnet3/vmxnet3_ethdev.c
+++ b/drivers/net/vmxnet3/vmxnet3_ethdev.c
@@ -1161,6 +1161,7 @@ vmxnet3_dev_info_get(struct rte_eth_dev *dev,
dev_info->max_tx_queues = VMXNET3_MAX_TX_QUEUES;
dev_info->min_rx_bufsize = 1518 + RTE_PKTMBUF_HEADROOM;
dev_info->max_rx_pktlen = 16384; /* includes CRC, cf MAXFRS register */
+ dev_info->max_lro_pktlen = 16384;
dev_info->speed_capa = ETH_LINK_SPEED_10G;
dev_info->max_mac_addrs = VMXNET3_MAX_MAC_ADDRS;
diff --git a/lib/librte_ethdev/rte_ethdev.h b/lib/librte_ethdev/rte_ethdev.h
index c36c1b631f..b47eea60d9 100644
--- a/lib/librte_ethdev/rte_ethdev.h
+++ b/lib/librte_ethdev/rte_ethdev.h
@@ -1183,6 +1183,7 @@ struct rte_eth_dev_info {
const uint32_t *dev_flags; /**< Device flags */
uint32_t min_rx_bufsize; /**< Minimum size of RX buffer. */
uint32_t max_rx_pktlen; /**< Maximum configurable length of RX pkt. */
+ uint32_t max_lro_pktlen; /**< Maximum size of LRO aggregated packet. */
uint16_t max_rx_queues; /**< Maximum number of RX queues. */
uint16_t max_tx_queues; /**< Maximum number of TX queues. */
uint32_t max_mac_addrs; /**< Maximum number of MAC addresses. */
--
2.23.0
^ permalink raw reply [flat|nested] 79+ messages in thread
* Re: [dpdk-dev] [PATCH v6] ethdev: add max LRO packet size
2019-11-08 23:07 ` [dpdk-dev] [PATCH v6] ethdev: add " Thomas Monjalon
@ 2019-11-10 22:47 ` Ananyev, Konstantin
2019-11-11 17:47 ` [dpdk-dev] [PATCH v7 0/3] support API to set " Dekel Peled
1 sibling, 0 replies; 79+ messages in thread
From: Ananyev, Konstantin @ 2019-11-10 22:47 UTC (permalink / raw)
To: Thomas Monjalon, Mcnamara, John, Kovacevic, Marko, Neil Horman,
Ajit Khaparde, Somnath Kotur, Ziyang Xuan, Xiaoyun Wang,
Guoyang Zhou, Lu, Wenzhuo, Matan Azrad, Shahaf Shuler,
Viacheslav Ovsiienko, Rasesh Mody, Shahed Shaikh,
Maxime Coquelin, Bie, Tiwei, Wang, Zhihong, Yong Wang, Yigit,
Ferruh, Andrew Rybchenko
Cc: dev, Dekel Peled
>
> From: Dekel Peled <dekelp@mellanox.com>
>
> The maximum supported aggregated packet size for LRO
> is advertised in rte_eth_dev_info.
> For some devices, max_lro_pktlen may differ from
> the basic max_rx_pktlen property.
>
> Various PMDs supporting LRO are updated.
>
> Signed-off-by: Dekel Peled <dekelp@mellanox.com>
> Signed-off-by: Thomas Monjalon <thomas@monjalon.net>
> ---
>
> v6: This is half of v5 1/3. Only the agreed part is here.
> Hope it represents the consensus, so we make a step forward.
> The field max_lro_pkt_size is renamed to max_lro_pktlen
> in order to look like max_rx_pktlen.
>
> ---
> doc/guides/nics/features.rst | 1 +
> doc/guides/rel_notes/deprecation.rst | 4 ----
> doc/guides/rel_notes/release_19_11.rst | 3 +++
> drivers/net/bnxt/bnxt_ethdev.c | 1 +
> drivers/net/hinic/hinic_pmd_ethdev.c | 1 +
> drivers/net/ixgbe/ixgbe_ethdev.c | 1 +
> drivers/net/mlx5/mlx5.h | 3 +++
> drivers/net/mlx5/mlx5_ethdev.c | 1 +
> drivers/net/mlx5/mlx5_rxq.c | 1 -
> drivers/net/qede/qede_ethdev.c | 1 +
> drivers/net/virtio/virtio_ethdev.c | 1 +
> drivers/net/vmxnet3/vmxnet3_ethdev.c | 1 +
> lib/librte_ethdev/rte_ethdev.h | 1 +
> 13 files changed, 15 insertions(+), 5 deletions(-)
>
> diff --git a/doc/guides/nics/features.rst b/doc/guides/nics/features.rst
> index d96696801a..1b2e120a9d 100644
> --- a/doc/guides/nics/features.rst
> +++ b/doc/guides/nics/features.rst
> @@ -197,6 +197,7 @@ Supports Large Receive Offload.
> * **[implements] rte_eth_dev_data**: ``lro``.
> * **[provides] mbuf**: ``mbuf.ol_flags:PKT_RX_LRO``, ``mbuf.tso_segsz``.
> * **[provides] rte_eth_dev_info**: ``rx_offload_capa,rx_queue_offload_capa:DEV_RX_OFFLOAD_TCP_LRO``.
> +* **[provides] rte_eth_dev_info**: ``max_lro_pktlen``.
>
>
> .. _nic_features_tso:
> diff --git a/doc/guides/rel_notes/deprecation.rst b/doc/guides/rel_notes/deprecation.rst
> index b0b992dcb5..d4fcf9975b 100644
> --- a/doc/guides/rel_notes/deprecation.rst
> +++ b/doc/guides/rel_notes/deprecation.rst
> @@ -88,10 +88,6 @@ Deprecation Notices
> This scheme will allow PMDs to avoid lookup to internal ptype table on Rx and
> thereby improve Rx performance if application wishes do so.
>
> -* ethdev: New 32-bit fields may be added for maximum LRO session size, in
> - struct ``rte_eth_dev_info`` for the port capability and in struct
> - ``rte_eth_rxmode`` for the port configuration.
> -
> * cryptodev: support for using IV with all sizes is added, J0 still can
> be used but only when IV length in following structs ``rte_crypto_auth_xform``,
> ``rte_crypto_aead_xform`` is set to zero. When IV length is greater or equal
> diff --git a/doc/guides/rel_notes/release_19_11.rst b/doc/guides/rel_notes/release_19_11.rst
> index 795c7601c0..473af44374 100644
> --- a/doc/guides/rel_notes/release_19_11.rst
> +++ b/doc/guides/rel_notes/release_19_11.rst
> @@ -403,6 +403,9 @@ ABI Changes
> align the Ethernet header on receive and all known encapsulations
> preserve the alignment of the header.
>
> +* ethdev: Added 32-bit field for maximum LRO aggregated packet size,
> + as port capability in the struct ``rte_eth_dev_info``.
> +
> * security: The field ``replay_win_sz`` has been moved from ipsec library
> based ``rte_ipsec_sa_prm`` structure to security library based structure
> ``rte_security_ipsec_xform``, which specify the Anti replay window size
> diff --git a/drivers/net/bnxt/bnxt_ethdev.c b/drivers/net/bnxt/bnxt_ethdev.c
> index 58a4f98c9f..95c60a3757 100644
> --- a/drivers/net/bnxt/bnxt_ethdev.c
> +++ b/drivers/net/bnxt/bnxt_ethdev.c
> @@ -535,6 +535,7 @@ static int bnxt_dev_info_get_op(struct rte_eth_dev *eth_dev,
> /* Fast path specifics */
> dev_info->min_rx_bufsize = 1;
> dev_info->max_rx_pktlen = BNXT_MAX_PKT_LEN;
> + dev_info->max_lro_pktlen = BNXT_MAX_PKT_LEN;
>
> dev_info->rx_offload_capa = BNXT_DEV_RX_OFFLOAD_SUPPORT;
> if (bp->flags & BNXT_FLAG_PTP_SUPPORTED)
> diff --git a/drivers/net/hinic/hinic_pmd_ethdev.c b/drivers/net/hinic/hinic_pmd_ethdev.c
> index 9f37a404be..cbd2d032f9 100644
> --- a/drivers/net/hinic/hinic_pmd_ethdev.c
> +++ b/drivers/net/hinic/hinic_pmd_ethdev.c
> @@ -727,6 +727,7 @@ hinic_dev_infos_get(struct rte_eth_dev *dev, struct rte_eth_dev_info *info)
> info->max_tx_queues = nic_dev->nic_cap.max_sqs;
> info->min_rx_bufsize = HINIC_MIN_RX_BUF_SIZE;
> info->max_rx_pktlen = HINIC_MAX_JUMBO_FRAME_SIZE;
> + info->max_lro_pktlen = HINIC_MAX_JUMBO_FRAME_SIZE;
> info->max_mac_addrs = HINIC_MAX_UC_MAC_ADDRS;
> info->min_mtu = HINIC_MIN_MTU_SIZE;
> info->max_mtu = HINIC_MAX_MTU_SIZE;
> diff --git a/drivers/net/ixgbe/ixgbe_ethdev.c b/drivers/net/ixgbe/ixgbe_ethdev.c
> index dbce7a80e9..a01b8bbf10 100644
> --- a/drivers/net/ixgbe/ixgbe_ethdev.c
> +++ b/drivers/net/ixgbe/ixgbe_ethdev.c
> @@ -3804,6 +3804,7 @@ ixgbe_dev_info_get(struct rte_eth_dev *dev, struct rte_eth_dev_info *dev_info)
> }
> dev_info->min_rx_bufsize = 1024; /* cf BSIZEPACKET in SRRCTL register */
> dev_info->max_rx_pktlen = 15872; /* includes CRC, cf MAXFRS register */
> + dev_info->max_lro_pktlen = RTE_IPV4_MAX_PKT_LEN;
> dev_info->max_mac_addrs = hw->mac.num_rar_entries;
> dev_info->max_hash_mac_addrs = IXGBE_VMDQ_NUM_UC_MAC;
> dev_info->max_vfs = pci_dev->max_vfs;
Actually, after looking at the ixgbe code once again: since we set the LRO
capability conditionally, we should probably set max_lro_pktlen
conditionally too. Something like:
--- a/drivers/net/ixgbe/ixgbe_ethdev.c
+++ b/drivers/net/ixgbe/ixgbe_ethdev.c
@@ -3820,6 +3820,9 @@ ixgbe_dev_info_get(struct rte_eth_dev *dev, struct rte_eth_dev_info *dev_info)
dev_info->tx_queue_offload_capa = ixgbe_get_tx_queue_offloads(dev);
dev_info->tx_offload_capa = ixgbe_get_tx_port_offloads(dev);
+ if (dev_info->rx_offload_capa & DEV_RX_OFFLOAD_TCP_LRO)
+ dev_info->max_lro_pktlen = RTE_IPV4_MAX_PKT_LEN;
+
dev_info->default_rxconf = (struct rte_eth_rxconf) {
.rx_thresh = {
.pthresh = IXGBE_DEFAULT_RX_PTHRESH,
Sorry for missing that previously.
Apart from that: LGTM.
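For illustration, a minimal application-side sketch of consuming the
capability as reported above (a sketch only, not part of the patch; it
assumes the v6 field name max_lro_pktlen, a valid port_id, the 19.11
int-returning rte_eth_dev_info_get(), and elides headers):

	struct rte_eth_dev_info dev_info;

	if (rte_eth_dev_info_get(port_id, &dev_info) != 0)
		return;
	if ((dev_info.rx_offload_capa & DEV_RX_OFFLOAD_TCP_LRO) &&
	    dev_info.max_lro_pktlen != 0)
		printf("port %u: LRO aggregation up to %u bytes\n",
		       port_id, dev_info.max_lro_pktlen);
	else
		printf("port %u: LRO not available\n", port_id);

With the conditional assignment suggested above, a zero max_lro_pktlen
then simply means the port cannot do LRO aggregation.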
> diff --git a/drivers/net/mlx5/mlx5.h b/drivers/net/mlx5/mlx5.h
> index b6a51b2b4d..935adbbbf3 100644
> --- a/drivers/net/mlx5/mlx5.h
> +++ b/drivers/net/mlx5/mlx5.h
> @@ -198,6 +198,9 @@ TAILQ_HEAD(mlx5_flows, rte_flow);
> #define MLX5_LRO_SUPPORTED(dev) \
> (((struct mlx5_priv *)((dev)->data->dev_private))->config.lro.supported)
>
> +/* Maximal size of aggregated LRO packet. */
> +#define MLX5_MAX_LRO_SIZE (UINT8_MAX * 256u)
> +
> /* LRO configurations structure. */
> struct mlx5_lro_config {
> uint32_t supported:1; /* Whether LRO is supported. */
> diff --git a/drivers/net/mlx5/mlx5_ethdev.c b/drivers/net/mlx5/mlx5_ethdev.c
> index 2278b24c01..91de186365 100644
> --- a/drivers/net/mlx5/mlx5_ethdev.c
> +++ b/drivers/net/mlx5/mlx5_ethdev.c
> @@ -552,6 +552,7 @@ mlx5_dev_infos_get(struct rte_eth_dev *dev, struct rte_eth_dev_info *info)
> /* FIXME: we should ask the device for these values. */
> info->min_rx_bufsize = 32;
> info->max_rx_pktlen = 65536;
> + info->max_lro_pktlen = MLX5_MAX_LRO_SIZE;
> /*
> * Since we need one CQ per QP, the limit is the minimum number
> * between the two values.
> diff --git a/drivers/net/mlx5/mlx5_rxq.c b/drivers/net/mlx5/mlx5_rxq.c
> index f0ab8438d3..aca2e67e0c 100644
> --- a/drivers/net/mlx5/mlx5_rxq.c
> +++ b/drivers/net/mlx5/mlx5_rxq.c
> @@ -1524,7 +1524,6 @@ mlx5_mprq_alloc_mp(struct rte_eth_dev *dev)
> return 0;
> }
>
> -#define MLX5_MAX_LRO_SIZE (UINT8_MAX * 256u)
> #define MLX5_MAX_TCP_HDR_OFFSET ((unsigned int)(sizeof(struct rte_ether_hdr) + \
> sizeof(struct rte_vlan_hdr) * 2 + \
> sizeof(struct rte_ipv6_hdr)))
> diff --git a/drivers/net/qede/qede_ethdev.c b/drivers/net/qede/qede_ethdev.c
> index 53fdfde9a8..fd05856836 100644
> --- a/drivers/net/qede/qede_ethdev.c
> +++ b/drivers/net/qede/qede_ethdev.c
> @@ -1273,6 +1273,7 @@ qede_dev_info_get(struct rte_eth_dev *eth_dev,
>
> dev_info->min_rx_bufsize = (uint32_t)QEDE_MIN_RX_BUFF_SIZE;
> dev_info->max_rx_pktlen = (uint32_t)ETH_TX_MAX_NON_LSO_PKT_LEN;
> + dev_info->max_lro_pktlen = (uint32_t)0x7FFF;
> dev_info->rx_desc_lim = qede_rx_desc_lim;
> dev_info->tx_desc_lim = qede_tx_desc_lim;
>
> diff --git a/drivers/net/virtio/virtio_ethdev.c b/drivers/net/virtio/virtio_ethdev.c
> index 646de9945c..d97f3c6645 100644
> --- a/drivers/net/virtio/virtio_ethdev.c
> +++ b/drivers/net/virtio/virtio_ethdev.c
> @@ -2435,6 +2435,7 @@ virtio_dev_info_get(struct rte_eth_dev *dev, struct rte_eth_dev_info *dev_info)
> RTE_MIN(hw->max_queue_pairs, VIRTIO_MAX_TX_QUEUES);
> dev_info->min_rx_bufsize = VIRTIO_MIN_RX_BUFSIZE;
> dev_info->max_rx_pktlen = VIRTIO_MAX_RX_PKTLEN;
> + dev_info->max_lro_pktlen = VIRTIO_MAX_RX_PKTLEN;
> dev_info->max_mac_addrs = VIRTIO_MAX_MAC_ADDRS;
>
> host_features = VTPCI_OPS(hw)->get_features(hw);
> diff --git a/drivers/net/vmxnet3/vmxnet3_ethdev.c b/drivers/net/vmxnet3/vmxnet3_ethdev.c
> index d1faeaa81b..6c99a2a8e0 100644
> --- a/drivers/net/vmxnet3/vmxnet3_ethdev.c
> +++ b/drivers/net/vmxnet3/vmxnet3_ethdev.c
> @@ -1161,6 +1161,7 @@ vmxnet3_dev_info_get(struct rte_eth_dev *dev,
> dev_info->max_tx_queues = VMXNET3_MAX_TX_QUEUES;
> dev_info->min_rx_bufsize = 1518 + RTE_PKTMBUF_HEADROOM;
> dev_info->max_rx_pktlen = 16384; /* includes CRC, cf MAXFRS register */
> + dev_info->max_lro_pktlen = 16384;
> dev_info->speed_capa = ETH_LINK_SPEED_10G;
> dev_info->max_mac_addrs = VMXNET3_MAX_MAC_ADDRS;
>
> diff --git a/lib/librte_ethdev/rte_ethdev.h b/lib/librte_ethdev/rte_ethdev.h
> index c36c1b631f..b47eea60d9 100644
> --- a/lib/librte_ethdev/rte_ethdev.h
> +++ b/lib/librte_ethdev/rte_ethdev.h
> @@ -1183,6 +1183,7 @@ struct rte_eth_dev_info {
> const uint32_t *dev_flags; /**< Device flags */
> uint32_t min_rx_bufsize; /**< Minimum size of RX buffer. */
> uint32_t max_rx_pktlen; /**< Maximum configurable length of RX pkt. */
> + uint32_t max_lro_pktlen; /**< Maximum size of LRO aggregated packet. */
> uint16_t max_rx_queues; /**< Maximum number of RX queues. */
> uint16_t max_tx_queues; /**< Maximum number of TX queues. */
> uint32_t max_mac_addrs; /**< Maximum number of MAC addresses. */
> --
> 2.23.0
^ permalink raw reply [flat|nested] 79+ messages in thread
* [dpdk-dev] [PATCH v7 0/3] support API to set max LRO packet size
2019-11-08 23:07 ` [dpdk-dev] [PATCH v6] ethdev: add " Thomas Monjalon
2019-11-10 22:47 ` Ananyev, Konstantin
@ 2019-11-11 17:47 ` Dekel Peled
2019-11-11 17:47 ` [dpdk-dev] [PATCH v7 1/3] ethdev: " Dekel Peled
` (3 more replies)
1 sibling, 4 replies; 79+ messages in thread
From: Dekel Peled @ 2019-11-11 17:47 UTC (permalink / raw)
To: john.mcnamara, marko.kovacevic, nhorman, ajit.khaparde,
somnath.kotur, anatoly.burakov, xuanziyang2, cloud.wangxiaoyun,
zhouguoyang, wenzhuo.lu, konstantin.ananyev, matan, shahafs,
viacheslavo, rmody, shshaikh, maxime.coquelin, tiwei.bie,
zhihong.wang, yongwang, thomas, ferruh.yigit, arybchenko,
jingjing.wu, bernard.iremonger
Cc: dev
This series implements support and use of an API for configuring and
validating the max size of LRO aggregated packets.
v2: Updated ethdev patch per review comments.
v3: Updated ethdev and testpmd patches per review comments.
v4: Updated ethdev patch for QEDE PMD per review comments.
v5: Updated ethdev patch for IXGBE PMD, and testpmd patch,
per review comments.
v6: This is half of v5 1/3. Only the agreed part is here.
v7: Remove updates to other PMDs; allow max_lro_pkt_size 0 in
application conf and in device info.
Dekel Peled (3):
ethdev: support API to set max LRO packet size
net/mlx5: use API to set max LRO packet size
app/testpmd: use API to set max LRO packet size
app/test-pmd/cmdline.c | 76 +++++++++++++++++++++++++++++
app/test-pmd/config.c | 2 +
app/test-pmd/parameters.c | 7 +++
doc/guides/nics/features.rst | 2 +
doc/guides/nics/mlx5.rst | 2 +
doc/guides/rel_notes/deprecation.rst | 4 --
doc/guides/rel_notes/release_19_11.rst | 8 +++
doc/guides/testpmd_app_ug/run_app.rst | 4 ++
doc/guides/testpmd_app_ug/testpmd_funcs.rst | 9 ++++
drivers/net/mlx5/mlx5.h | 3 ++
drivers/net/mlx5/mlx5_ethdev.c | 1 +
drivers/net/mlx5/mlx5_rxq.c | 5 +-
lib/librte_ethdev/rte_ethdev.c | 59 ++++++++++++++++++++++
lib/librte_ethdev/rte_ethdev.h | 4 ++
14 files changed, 180 insertions(+), 6 deletions(-)
--
1.8.3.1
^ permalink raw reply [flat|nested] 79+ messages in thread
* [dpdk-dev] [PATCH v7 1/3] ethdev: support API to set max LRO packet size
2019-11-11 17:47 ` [dpdk-dev] [PATCH v7 0/3] support API to set " Dekel Peled
@ 2019-11-11 17:47 ` Dekel Peled
2019-11-12 0:46 ` Ferruh Yigit
2019-11-11 17:47 ` [dpdk-dev] [PATCH v7 2/3] net/mlx5: use " Dekel Peled
` (2 subsequent siblings)
3 siblings, 1 reply; 79+ messages in thread
From: Dekel Peled @ 2019-11-11 17:47 UTC (permalink / raw)
To: john.mcnamara, marko.kovacevic, nhorman, ajit.khaparde,
somnath.kotur, anatoly.burakov, xuanziyang2, cloud.wangxiaoyun,
zhouguoyang, wenzhuo.lu, konstantin.ananyev, matan, shahafs,
viacheslavo, rmody, shshaikh, maxime.coquelin, tiwei.bie,
zhihong.wang, yongwang, thomas, ferruh.yigit, arybchenko,
jingjing.wu, bernard.iremonger
Cc: dev
This patch implements [1], to support an API for configuration and
validation of the max size of LRO aggregated packets.
The API change notice [2] is removed, and the release notes for 19.11
are updated accordingly.
[1] http://patches.dpdk.org/patch/58217/
[2] http://patches.dpdk.org/patch/57492/
Signed-off-by: Dekel Peled <dekelp@mellanox.com>
Reviewed-by: Andrew Rybchenko <arybchenko@solarflare.com>
Acked-by: Thomas Monjalon <thomas@monjalon.net>
Acked-by: Matan Azrad <matan@mellanox.com>
---
doc/guides/nics/features.rst | 2 ++
doc/guides/rel_notes/deprecation.rst | 4 ---
doc/guides/rel_notes/release_19_11.rst | 8 +++++
lib/librte_ethdev/rte_ethdev.c | 59 ++++++++++++++++++++++++++++++++++
lib/librte_ethdev/rte_ethdev.h | 4 +++
5 files changed, 73 insertions(+), 4 deletions(-)
diff --git a/doc/guides/nics/features.rst b/doc/guides/nics/features.rst
index 7a31cf7..2138ce3 100644
--- a/doc/guides/nics/features.rst
+++ b/doc/guides/nics/features.rst
@@ -193,10 +193,12 @@ LRO
Supports Large Receive Offload.
* **[uses] rte_eth_rxconf,rte_eth_rxmode**: ``offloads:DEV_RX_OFFLOAD_TCP_LRO``.
+ ``dev_conf.rxmode.max_lro_pkt_size``.
* **[implements] datapath**: ``LRO functionality``.
* **[implements] rte_eth_dev_data**: ``lro``.
* **[provides] mbuf**: ``mbuf.ol_flags:PKT_RX_LRO``, ``mbuf.tso_segsz``.
* **[provides] rte_eth_dev_info**: ``rx_offload_capa,rx_queue_offload_capa:DEV_RX_OFFLOAD_TCP_LRO``.
+* **[provides] rte_eth_dev_info**: ``max_lro_pkt_size``.
.. _nic_features_tso:
diff --git a/doc/guides/rel_notes/deprecation.rst b/doc/guides/rel_notes/deprecation.rst
index fad208b..dbfb059 100644
--- a/doc/guides/rel_notes/deprecation.rst
+++ b/doc/guides/rel_notes/deprecation.rst
@@ -83,10 +83,6 @@ Deprecation Notices
This scheme will allow PMDs to avoid lookup to internal ptype table on Rx and
thereby improve Rx performance if application wishes do so.
-* ethdev: New 32-bit fields may be added for maximum LRO session size, in
- struct ``rte_eth_dev_info`` for the port capability and in struct
- ``rte_eth_rxmode`` for the port configuration.
-
* cryptodev: support for using IV with all sizes is added, J0 still can
be used but only when IV length in following structs ``rte_crypto_auth_xform``,
``rte_crypto_aead_xform`` is set to zero. When IV length is greater or equal
diff --git a/doc/guides/rel_notes/release_19_11.rst b/doc/guides/rel_notes/release_19_11.rst
index da48051..d29acbe 100644
--- a/doc/guides/rel_notes/release_19_11.rst
+++ b/doc/guides/rel_notes/release_19_11.rst
@@ -444,6 +444,14 @@ ABI Changes
* ipsec: The field ``replay_win_sz`` has been removed from the structure
``rte_ipsec_sa_prm`` as it has been added to the security library.
+* ethdev: Added 32-bit fields for maximum LRO aggregated packet size, in
+ struct ``rte_eth_dev_info`` for the port capability and in struct
+ ``rte_eth_rxmode`` for the port configuration.
+ Application should use the new field in struct ``rte_eth_rxmode`` to configure
+ the requested size.
+ PMD should use the new field in struct ``rte_eth_dev_info`` to report the
+ supported port capability.
+
Shared Library Versions
-----------------------
diff --git a/lib/librte_ethdev/rte_ethdev.c b/lib/librte_ethdev/rte_ethdev.c
index 652c369..55e0e0d 100644
--- a/lib/librte_ethdev/rte_ethdev.c
+++ b/lib/librte_ethdev/rte_ethdev.c
@@ -1136,6 +1136,33 @@ struct rte_eth_dev *
return name;
}
+static inline int
+check_lro_pkt_size(uint16_t port_id, uint32_t config_size,
+ uint32_t max_rx_pkt_len, uint32_t dev_info_size)
+{
+ int ret = 0;
+
+ if (dev_info_size == 0) {
+ if (config_size != max_rx_pkt_len) {
+ RTE_ETHDEV_LOG(ERR, "Ethdev port_id=%d max_lro_pkt_size"
+ " %u != %u is not allowed\n",
+ port_id, config_size, max_rx_pkt_len);
+ ret = -EINVAL;
+ }
+ } else if (config_size > dev_info_size) {
+ RTE_ETHDEV_LOG(ERR, "Ethdev port_id=%d max_lro_pkt_size %u "
+ "> max allowed value %u\n", port_id, config_size,
+ dev_info_size);
+ ret = -EINVAL;
+ } else if (config_size < RTE_ETHER_MIN_LEN) {
+ RTE_ETHDEV_LOG(ERR, "Ethdev port_id=%d max_lro_pkt_size %u "
+ "< min allowed value %u\n", port_id, config_size,
+ (unsigned int)RTE_ETHER_MIN_LEN);
+ ret = -EINVAL;
+ }
+ return ret;
+}
+
int
rte_eth_dev_configure(uint16_t port_id, uint16_t nb_rx_q, uint16_t nb_tx_q,
const struct rte_eth_conf *dev_conf)
@@ -1266,6 +1293,22 @@ struct rte_eth_dev *
RTE_ETHER_MAX_LEN;
}
+ /*
+ * If LRO is enabled, check that the maximum aggregated packet
+ * size is supported by the configured device.
+ */
+ if (dev_conf->rxmode.offloads & DEV_RX_OFFLOAD_TCP_LRO) {
+ if (dev_conf->rxmode.max_lro_pkt_size == 0)
+ dev->data->dev_conf.rxmode.max_lro_pkt_size =
+ dev->data->dev_conf.rxmode.max_rx_pkt_len;
+ ret = check_lro_pkt_size(port_id,
+ dev->data->dev_conf.rxmode.max_lro_pkt_size,
+ dev->data->dev_conf.rxmode.max_rx_pkt_len,
+ dev_info.max_lro_pkt_size);
+ if (ret != 0)
+ goto rollback;
+ }
+
/* Any requested offloading must be within its device capabilities */
if ((dev_conf->rxmode.offloads & dev_info.rx_offload_capa) !=
dev_conf->rxmode.offloads) {
@@ -1770,6 +1813,22 @@ struct rte_eth_dev *
return -EINVAL;
}
+ /*
+ * If LRO is enabled, check that the maximum aggregated packet
+ * size is supported by the configured device.
+ */
+ if (local_conf.offloads & DEV_RX_OFFLOAD_TCP_LRO) {
+ if (dev->data->dev_conf.rxmode.max_lro_pkt_size == 0)
+ dev->data->dev_conf.rxmode.max_lro_pkt_size =
+ dev->data->dev_conf.rxmode.max_rx_pkt_len;
+ int ret = check_lro_pkt_size(port_id,
+ dev->data->dev_conf.rxmode.max_lro_pkt_size,
+ dev->data->dev_conf.rxmode.max_rx_pkt_len,
+ dev_info.max_lro_pkt_size);
+ if (ret != 0)
+ return ret;
+ }
+
ret = (*dev->dev_ops->rx_queue_setup)(dev, rx_queue_id, nb_rx_desc,
socket_id, &local_conf, mp);
if (!ret) {
diff --git a/lib/librte_ethdev/rte_ethdev.h b/lib/librte_ethdev/rte_ethdev.h
index 44d77b3..1b76df5 100644
--- a/lib/librte_ethdev/rte_ethdev.h
+++ b/lib/librte_ethdev/rte_ethdev.h
@@ -395,6 +395,8 @@ struct rte_eth_rxmode {
/** The multi-queue packet distribution mode to be used, e.g. RSS. */
enum rte_eth_rx_mq_mode mq_mode;
uint32_t max_rx_pkt_len; /**< Only used if JUMBO_FRAME enabled. */
+ /** Maximum allowed size of LRO aggregated packet. */
+ uint32_t max_lro_pkt_size;
uint16_t split_hdr_size; /**< hdr buf size (header_split enabled).*/
/**
* Per-port Rx offloads to be set using DEV_RX_OFFLOAD_* flags.
@@ -1218,6 +1220,8 @@ struct rte_eth_dev_info {
const uint32_t *dev_flags; /**< Device flags */
uint32_t min_rx_bufsize; /**< Minimum size of RX buffer. */
uint32_t max_rx_pktlen; /**< Maximum configurable length of RX pkt. */
+ /** Maximum configurable size of LRO aggregated packet. */
+ uint32_t max_lro_pkt_size;
uint16_t max_rx_queues; /**< Maximum number of RX queues. */
uint16_t max_tx_queues; /**< Maximum number of TX queues. */
uint32_t max_mac_addrs; /**< Maximum number of MAC addresses. */
--
1.8.3.1
^ permalink raw reply [flat|nested] 79+ messages in thread
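For reference, a minimal sketch (not part of the patch) of an application
configuring the new field, following the rules in check_lro_pkt_size()
above; port_id, the queue counts and the 9000-byte length are illustrative,
headers and error handling are elided:

	struct rte_eth_dev_info dev_info;
	struct rte_eth_conf conf = { 0 };

	rte_eth_dev_info_get(port_id, &dev_info);
	conf.rxmode.offloads = DEV_RX_OFFLOAD_TCP_LRO |
			       DEV_RX_OFFLOAD_JUMBO_FRAME;
	conf.rxmode.max_rx_pkt_len = 9000;
	/*
	 * 0 defaults to max_rx_pkt_len; a non-zero value must be
	 * >= RTE_ETHER_MIN_LEN and, when the PMD reports a non-zero
	 * capability, <= dev_info.max_lro_pkt_size.
	 */
	conf.rxmode.max_lro_pkt_size =
		RTE_MIN(9000u, dev_info.max_lro_pkt_size);
	rte_eth_dev_configure(port_id, 1, 1, &conf);

Note that if the PMD reports max_lro_pkt_size as 0 (allowed since v7),
RTE_MIN() yields 0, which check_lro_pkt_size() then resolves to
max_rx_pkt_len, so the configuration still passes validation.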
* Re: [dpdk-dev] [PATCH v7 1/3] ethdev: support API to set max LRO packet size
2019-11-11 17:47 ` [dpdk-dev] [PATCH v7 1/3] ethdev: " Dekel Peled
@ 2019-11-12 0:46 ` Ferruh Yigit
0 siblings, 0 replies; 79+ messages in thread
From: Ferruh Yigit @ 2019-11-12 0:46 UTC (permalink / raw)
To: Dekel Peled, john.mcnamara, marko.kovacevic, nhorman,
ajit.khaparde, somnath.kotur, anatoly.burakov, xuanziyang2,
cloud.wangxiaoyun, zhouguoyang, wenzhuo.lu, konstantin.ananyev,
matan, shahafs, viacheslavo, rmody, shshaikh, maxime.coquelin,
tiwei.bie, zhihong.wang, yongwang, thomas, arybchenko,
jingjing.wu, bernard.iremonger
Cc: dev
On 11/11/2019 5:47 PM, Dekel Peled wrote:
> This patch implements [1], to support an API for configuration and
> validation of the max size of LRO aggregated packets.
> The API change notice [2] is removed, and the release notes for 19.11
> are updated accordingly.
>
> [1] http://patches.dpdk.org/patch/58217/
> [2] http://patches.dpdk.org/patch/57492/
>
> Signed-off-by: Dekel Peled <dekelp@mellanox.com>
> Reviewed-by: Andrew Rybchenko <arybchenko@solarflare.com>
> Acked-by: Thomas Monjalon <thomas@monjalon.net>
> Acked-by: Matan Azrad <matan@mellanox.com>
Reviewed-by: Ferruh Yigit <ferruh.yigit@intel.com>
^ permalink raw reply [flat|nested] 79+ messages in thread
* [dpdk-dev] [PATCH v7 2/3] net/mlx5: use API to set max LRO packet size
2019-11-11 17:47 ` [dpdk-dev] [PATCH v7 0/3] support API to set " Dekel Peled
2019-11-11 17:47 ` [dpdk-dev] [PATCH v7 1/3] ethdev: " Dekel Peled
@ 2019-11-11 17:47 ` Dekel Peled
2019-11-11 17:47 ` [dpdk-dev] [PATCH v7 3/3] app/testpmd: " Dekel Peled
2019-11-12 0:47 ` [dpdk-dev] [PATCH v7 0/3] support " Ferruh Yigit
3 siblings, 0 replies; 79+ messages in thread
From: Dekel Peled @ 2019-11-11 17:47 UTC (permalink / raw)
To: john.mcnamara, marko.kovacevic, nhorman, ajit.khaparde,
somnath.kotur, anatoly.burakov, xuanziyang2, cloud.wangxiaoyun,
zhouguoyang, wenzhuo.lu, konstantin.ananyev, matan, shahafs,
viacheslavo, rmody, shshaikh, maxime.coquelin, tiwei.bie,
zhihong.wang, yongwang, thomas, ferruh.yigit, arybchenko,
jingjing.wu, bernard.iremonger
Cc: dev
This patch implements use of the API for the LRO aggregated packet
max size.
Rx queue creation is updated to use the relevant configuration.
Documentation is updated accordingly.
Signed-off-by: Dekel Peled <dekelp@mellanox.com>
Acked-by: Viacheslav Ovsiienko <viacheslavo@mellanox.com>
Acked-by: Matan Azrad <matan@mellanox.com>
---
doc/guides/nics/mlx5.rst | 2 ++
drivers/net/mlx5/mlx5.h | 3 +++
drivers/net/mlx5/mlx5_ethdev.c | 1 +
drivers/net/mlx5/mlx5_rxq.c | 5 +++--
4 files changed, 9 insertions(+), 2 deletions(-)
diff --git a/doc/guides/nics/mlx5.rst b/doc/guides/nics/mlx5.rst
index 5fd313c..fd5a326 100644
--- a/doc/guides/nics/mlx5.rst
+++ b/doc/guides/nics/mlx5.rst
@@ -207,6 +207,8 @@ Limitations
- KEEP_CRC offload cannot be supported with LRO.
- The first mbuf length, without head-room, must be big enough to include the
TCP header (122B).
+ - An Rx queue with LRO offload enabled, receiving a non-LRO packet, can forward
+ it with a size limited to the max LRO size, not to the max RX packet length.
Statistics
----------
diff --git a/drivers/net/mlx5/mlx5.h b/drivers/net/mlx5/mlx5.h
index 511463a..0c3a90e 100644
--- a/drivers/net/mlx5/mlx5.h
+++ b/drivers/net/mlx5/mlx5.h
@@ -218,6 +218,9 @@ struct mlx5_hca_attr {
#define MLX5_LRO_SUPPORTED(dev) \
(((struct mlx5_priv *)((dev)->data->dev_private))->config.lro.supported)
+/* Maximal size of aggregated LRO packet. */
+#define MLX5_MAX_LRO_SIZE (UINT8_MAX * 256u)
+
/* LRO configurations structure. */
struct mlx5_lro_config {
uint32_t supported:1; /* Whether LRO is supported. */
diff --git a/drivers/net/mlx5/mlx5_ethdev.c b/drivers/net/mlx5/mlx5_ethdev.c
index 2b7c867..3adc824 100644
--- a/drivers/net/mlx5/mlx5_ethdev.c
+++ b/drivers/net/mlx5/mlx5_ethdev.c
@@ -606,6 +606,7 @@ struct ethtool_link_settings {
/* FIXME: we should ask the device for these values. */
info->min_rx_bufsize = 32;
info->max_rx_pktlen = 65536;
+ info->max_lro_pkt_size = MLX5_MAX_LRO_SIZE;
/*
* Since we need one CQ per QP, the limit is the minimum number
* between the two values.
diff --git a/drivers/net/mlx5/mlx5_rxq.c b/drivers/net/mlx5/mlx5_rxq.c
index 24d0eaa..c725e14 100644
--- a/drivers/net/mlx5/mlx5_rxq.c
+++ b/drivers/net/mlx5/mlx5_rxq.c
@@ -1701,7 +1701,6 @@ struct mlx5_rxq_obj *
return 0;
}
-#define MLX5_MAX_LRO_SIZE (UINT8_MAX * 256u)
#define MLX5_MAX_TCP_HDR_OFFSET ((unsigned int)(sizeof(struct rte_ether_hdr) + \
sizeof(struct rte_vlan_hdr) * 2 + \
sizeof(struct rte_ipv6_hdr)))
@@ -1773,7 +1772,9 @@ struct mlx5_rxq_ctrl *
dev->data->dev_conf.rxmode.offloads;
unsigned int lro_on_queue = !!(offloads & DEV_RX_OFFLOAD_TCP_LRO);
const int mprq_en = mlx5_check_mprq_support(dev) > 0;
- unsigned int max_rx_pkt_len = dev->data->dev_conf.rxmode.max_rx_pkt_len;
+ unsigned int max_rx_pkt_len = lro_on_queue ?
+ dev->data->dev_conf.rxmode.max_lro_pkt_size :
+ dev->data->dev_conf.rxmode.max_rx_pkt_len;
unsigned int non_scatter_min_mbuf_size = max_rx_pkt_len +
RTE_PKTMBUF_HEADROOM;
unsigned int max_lro_size = 0;
--
1.8.3.1
^ permalink raw reply [flat|nested] 79+ messages in thread
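As context for the mlx5_rxq.c hunk above, the per-queue pattern this patch
applies can be sketched generically (an illustrative helper, not mlx5 code):
when LRO is enabled on the port, Rx buffers are provisioned from the
configured LRO limit instead of max_rx_pkt_len.

	static uint32_t
	rxq_effective_max_pkt_len(const struct rte_eth_dev *dev)
	{
		const struct rte_eth_rxmode *rxmode =
			&dev->data->dev_conf.rxmode;

		/*
		 * An LRO session can aggregate beyond a single wire
		 * packet, so size the queue for max_lro_pkt_size.
		 */
		if (rxmode->offloads & DEV_RX_OFFLOAD_TCP_LRO)
			return rxmode->max_lro_pkt_size;
		return rxmode->max_rx_pkt_len;
	}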
* [dpdk-dev] [PATCH v7 3/3] app/testpmd: use API to set max LRO packet size
2019-11-11 17:47 ` [dpdk-dev] [PATCH v7 0/3] support API to set " Dekel Peled
2019-11-11 17:47 ` [dpdk-dev] [PATCH v7 1/3] ethdev: " Dekel Peled
2019-11-11 17:47 ` [dpdk-dev] [PATCH v7 2/3] net/mlx5: use " Dekel Peled
@ 2019-11-11 17:47 ` Dekel Peled
2019-11-12 0:46 ` Ferruh Yigit
2019-11-12 0:47 ` [dpdk-dev] [PATCH v7 0/3] support " Ferruh Yigit
3 siblings, 1 reply; 79+ messages in thread
From: Dekel Peled @ 2019-11-11 17:47 UTC (permalink / raw)
To: john.mcnamara, marko.kovacevic, nhorman, ajit.khaparde,
somnath.kotur, anatoly.burakov, xuanziyang2, cloud.wangxiaoyun,
zhouguoyang, wenzhuo.lu, konstantin.ananyev, matan, shahafs,
viacheslavo, rmody, shshaikh, maxime.coquelin, tiwei.bie,
zhihong.wang, yongwang, thomas, ferruh.yigit, arybchenko,
jingjing.wu, bernard.iremonger
Cc: dev
This patch implements use of the API for the LRO aggregated packet
max size.
It adds command-line and runtime commands to configure this value,
and adds an option to show the supported value.
Documentation is updated accordingly.
Signed-off-by: Dekel Peled <dekelp@mellanox.com>
Acked-by: Bernard Iremonger <bernard.iremonger@intel.com>
Acked-by: Matan Azrad <matan@mellanox.com>
---
app/test-pmd/cmdline.c | 76 +++++++++++++++++++++++++++++
app/test-pmd/config.c | 2 +
app/test-pmd/parameters.c | 7 +++
doc/guides/testpmd_app_ug/run_app.rst | 4 ++
doc/guides/testpmd_app_ug/testpmd_funcs.rst | 9 ++++
5 files changed, 98 insertions(+)
diff --git a/app/test-pmd/cmdline.c b/app/test-pmd/cmdline.c
index 48627c8..5cf7a4d 100644
--- a/app/test-pmd/cmdline.c
+++ b/app/test-pmd/cmdline.c
@@ -777,6 +777,9 @@ static void cmd_help_long_parsed(void *parsed_result,
"port config all max-pkt-len (value)\n"
" Set the max packet length.\n\n"
+ "port config all max-lro-pkt-size (value)\n"
+ " Set the max LRO aggregated packet size.\n\n"
+
"port config all drop-en (on|off)\n"
" Enable or disable packet drop on all RX queues of all ports when no "
"receive buffers available.\n\n"
@@ -2040,6 +2043,78 @@ struct cmd_config_max_pkt_len_result {
},
};
+/* *** config max LRO aggregated packet size *** */
+struct cmd_config_max_lro_pkt_size_result {
+ cmdline_fixed_string_t port;
+ cmdline_fixed_string_t keyword;
+ cmdline_fixed_string_t all;
+ cmdline_fixed_string_t name;
+ uint32_t value;
+};
+
+static void
+cmd_config_max_lro_pkt_size_parsed(void *parsed_result,
+ __attribute__((unused)) struct cmdline *cl,
+ __attribute__((unused)) void *data)
+{
+ struct cmd_config_max_lro_pkt_size_result *res = parsed_result;
+ portid_t pid;
+
+ if (!all_ports_stopped()) {
+ printf("Please stop all ports first\n");
+ return;
+ }
+
+ RTE_ETH_FOREACH_DEV(pid) {
+ struct rte_port *port = &ports[pid];
+
+ if (!strcmp(res->name, "max-lro-pkt-size")) {
+ if (res->value ==
+ port->dev_conf.rxmode.max_lro_pkt_size)
+ return;
+
+ port->dev_conf.rxmode.max_lro_pkt_size = res->value;
+ } else {
+ printf("Unknown parameter\n");
+ return;
+ }
+ }
+
+ init_port_config();
+
+ cmd_reconfig_device_queue(RTE_PORT_ALL, 1, 1);
+}
+
+cmdline_parse_token_string_t cmd_config_max_lro_pkt_size_port =
+ TOKEN_STRING_INITIALIZER(struct cmd_config_max_lro_pkt_size_result,
+ port, "port");
+cmdline_parse_token_string_t cmd_config_max_lro_pkt_size_keyword =
+ TOKEN_STRING_INITIALIZER(struct cmd_config_max_lro_pkt_size_result,
+ keyword, "config");
+cmdline_parse_token_string_t cmd_config_max_lro_pkt_size_all =
+ TOKEN_STRING_INITIALIZER(struct cmd_config_max_lro_pkt_size_result,
+ all, "all");
+cmdline_parse_token_string_t cmd_config_max_lro_pkt_size_name =
+ TOKEN_STRING_INITIALIZER(struct cmd_config_max_lro_pkt_size_result,
+ name, "max-lro-pkt-size");
+cmdline_parse_token_num_t cmd_config_max_lro_pkt_size_value =
+ TOKEN_NUM_INITIALIZER(struct cmd_config_max_lro_pkt_size_result,
+ value, UINT32);
+
+cmdline_parse_inst_t cmd_config_max_lro_pkt_size = {
+ .f = cmd_config_max_lro_pkt_size_parsed,
+ .data = NULL,
+ .help_str = "port config all max-lro-pkt-size <value>",
+ .tokens = {
+ (void *)&cmd_config_max_lro_pkt_size_port,
+ (void *)&cmd_config_max_lro_pkt_size_keyword,
+ (void *)&cmd_config_max_lro_pkt_size_all,
+ (void *)&cmd_config_max_lro_pkt_size_name,
+ (void *)&cmd_config_max_lro_pkt_size_value,
+ NULL,
+ },
+};
+
/* *** configure port MTU *** */
struct cmd_config_mtu_result {
cmdline_fixed_string_t port;
@@ -19124,6 +19199,7 @@ struct cmd_show_rx_tx_desc_status_result {
(cmdline_parse_inst_t *)&cmd_config_rx_tx,
(cmdline_parse_inst_t *)&cmd_config_mtu,
(cmdline_parse_inst_t *)&cmd_config_max_pkt_len,
+ (cmdline_parse_inst_t *)&cmd_config_max_lro_pkt_size,
(cmdline_parse_inst_t *)&cmd_config_rx_mode_flag,
(cmdline_parse_inst_t *)&cmd_config_rss,
(cmdline_parse_inst_t *)&cmd_config_rxtx_ring_size,
diff --git a/app/test-pmd/config.c b/app/test-pmd/config.c
index 2a51d96..d599682 100644
--- a/app/test-pmd/config.c
+++ b/app/test-pmd/config.c
@@ -624,6 +624,8 @@ static int bus_match_all(const struct rte_bus *bus, const void *data)
printf("Minimum size of RX buffer: %u\n", dev_info.min_rx_bufsize);
printf("Maximum configurable length of RX packet: %u\n",
dev_info.max_rx_pktlen);
+ printf("Maximum configurable size of LRO aggregated packet: %u\n",
+ dev_info.max_lro_pkt_size);
if (dev_info.max_vfs)
printf("Maximum number of VFs: %u\n", dev_info.max_vfs);
if (dev_info.max_vmdq_pools)
diff --git a/app/test-pmd/parameters.c b/app/test-pmd/parameters.c
index 9b6e35b..deca7a6 100644
--- a/app/test-pmd/parameters.c
+++ b/app/test-pmd/parameters.c
@@ -107,6 +107,8 @@
printf(" --total-num-mbufs=N: set the number of mbufs to be allocated "
"in mbuf pools.\n");
printf(" --max-pkt-len=N: set the maximum size of packet to N bytes.\n");
+ printf(" --max-lro-pkt-size=N: set the maximum LRO aggregated packet "
+ "size to N bytes.\n");
#ifdef RTE_LIBRTE_CMDLINE
printf(" --eth-peers-configfile=name: config file with ethernet addresses "
"of peer ports.\n");
@@ -594,6 +596,7 @@
{ "mbuf-size", 1, 0, 0 },
{ "total-num-mbufs", 1, 0, 0 },
{ "max-pkt-len", 1, 0, 0 },
+ { "max-lro-pkt-size", 1, 0, 0 },
{ "pkt-filter-mode", 1, 0, 0 },
{ "pkt-filter-report-hash", 1, 0, 0 },
{ "pkt-filter-size", 1, 0, 0 },
@@ -891,6 +894,10 @@
"Invalid max-pkt-len=%d - should be > %d\n",
n, RTE_ETHER_MIN_LEN);
}
+ if (!strcmp(lgopts[opt_idx].name, "max-lro-pkt-size")) {
+ n = atoi(optarg);
+ rx_mode.max_lro_pkt_size = (uint32_t) n;
+ }
if (!strcmp(lgopts[opt_idx].name, "pkt-filter-mode")) {
if (!strcmp(optarg, "signature"))
fdir_conf.mode =
diff --git a/doc/guides/testpmd_app_ug/run_app.rst b/doc/guides/testpmd_app_ug/run_app.rst
index 8c7fe44..9ab4d70 100644
--- a/doc/guides/testpmd_app_ug/run_app.rst
+++ b/doc/guides/testpmd_app_ug/run_app.rst
@@ -112,6 +112,10 @@ The command line options are:
Set the maximum packet size to N bytes, where N >= 64. The default value is 1518.
+* ``--max-lro-pkt-size=N``
+
+ Set the maximum LRO aggregated packet size to N bytes, where N >= 64.
+
* ``--eth-peers-configfile=name``
Use a configuration file containing the Ethernet addresses of the peer ports.
diff --git a/doc/guides/testpmd_app_ug/testpmd_funcs.rst b/doc/guides/testpmd_app_ug/testpmd_funcs.rst
index 9a5e5cb..9cfc82a 100644
--- a/doc/guides/testpmd_app_ug/testpmd_funcs.rst
+++ b/doc/guides/testpmd_app_ug/testpmd_funcs.rst
@@ -2147,6 +2147,15 @@ Set the maximum packet length::
This is equivalent to the ``--max-pkt-len`` command-line option.
+port config - max-lro-pkt-size
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+
+Set the maximum LRO aggregated packet size::
+
+ testpmd> port config all max-lro-pkt-size (value)
+
+This is equivalent to the ``--max-lro-pkt-size`` command-line option.
+
port config - Drop Packets
~~~~~~~~~~~~~~~~~~~~~~~~~~
--
1.8.3.1
^ permalink raw reply [flat|nested] 79+ messages in thread
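For illustration, a possible testpmd session exercising the new option and
command (EAL arguments and sizes are arbitrary; assumes an LRO-capable port
and testpmd's existing --enable-lro option; the "Maximum configurable size"
line comes from the config.c change above, shown here with the mlx5 value
from patch 2/3):

	./testpmd -l 0-3 -n 4 -- --enable-lro --max-pkt-len=9000 \
		--max-lro-pkt-size=8192
	testpmd> show port info 0
	...
	Maximum configurable size of LRO aggregated packet: 65280
	...
	testpmd> port stop all
	testpmd> port config all max-lro-pkt-size 4096
	testpmd> port start all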
* Re: [dpdk-dev] [PATCH v7 3/3] app/testpmd: use API to set max LRO packet size
2019-11-11 17:47 ` [dpdk-dev] [PATCH v7 3/3] app/testpmd: " Dekel Peled
@ 2019-11-12 0:46 ` Ferruh Yigit
0 siblings, 0 replies; 79+ messages in thread
From: Ferruh Yigit @ 2019-11-12 0:46 UTC (permalink / raw)
To: Dekel Peled, john.mcnamara, marko.kovacevic, nhorman,
ajit.khaparde, somnath.kotur, anatoly.burakov, xuanziyang2,
cloud.wangxiaoyun, zhouguoyang, wenzhuo.lu, konstantin.ananyev,
matan, shahafs, viacheslavo, rmody, shshaikh, maxime.coquelin,
tiwei.bie, zhihong.wang, yongwang, thomas, arybchenko,
jingjing.wu, bernard.iremonger
Cc: dev
On 11/11/2019 5:47 PM, Dekel Peled wrote:
> This patch implements use of the API for the LRO aggregated packet
> max size.
> It adds command-line and runtime commands to configure this value,
> and adds an option to show the supported value.
> Documentation is updated accordingly.
>
> Signed-off-by: Dekel Peled <dekelp@mellanox.com>
> Acked-by: Bernard Iremonger <bernard.iremonger@intel.com>
> Acked-by: Matan Azrad <matan@mellanox.com>
Reviewed-by: Ferruh Yigit <ferruh.yigit@intel.com>
^ permalink raw reply [flat|nested] 79+ messages in thread
* Re: [dpdk-dev] [PATCH v7 0/3] support API to set max LRO packet size
2019-11-11 17:47 ` [dpdk-dev] [PATCH v7 0/3] support API to set " Dekel Peled
` (2 preceding siblings ...)
2019-11-11 17:47 ` [dpdk-dev] [PATCH v7 3/3] app/testpmd: " Dekel Peled
@ 2019-11-12 0:47 ` Ferruh Yigit
3 siblings, 0 replies; 79+ messages in thread
From: Ferruh Yigit @ 2019-11-12 0:47 UTC (permalink / raw)
To: Dekel Peled, john.mcnamara, marko.kovacevic, nhorman,
ajit.khaparde, somnath.kotur, anatoly.burakov, xuanziyang2,
cloud.wangxiaoyun, zhouguoyang, wenzhuo.lu, konstantin.ananyev,
matan, shahafs, viacheslavo, rmody, shshaikh, maxime.coquelin,
tiwei.bie, zhihong.wang, yongwang, thomas, arybchenko,
jingjing.wu, bernard.iremonger
Cc: dev
On 11/11/2019 5:47 PM, Dekel Peled wrote:
> This series implements support and use of an API for configuring and
> validating the max size of LRO aggregated packets.
>
> v2: Updated ethdev patch per review comments.
> v3: Updated ethdev and testpmd patches per review comments.
> v4: Updated ethdev patch for QEDE PMD per review comments.
> v5: Updated ethdev patch for IXGBE PMD, and testpmd patch,
> per review comments.
> v6: This is half of v5 1/3. Only the agreed part is here.
> v7: Remove updates to other PMDs; allow max_lro_pkt_size 0 in
> application conf and in device info.
>
> Dekel Peled (3):
> ethdev: support API to set max LRO packet size
> net/mlx5: use API to set max LRO packet size
> app/testpmd: use API to set max LRO packet size
Series applied to dpdk-next-net/master, thanks.
^ permalink raw reply [flat|nested] 79+ messages in thread