From: Bing Zhao <bingz@nvidia.com>
To: <viacheslavo@nvidia.com>, <matan@nvidia.com>
Cc: <dev@dpdk.org>, <rasland@nvidia.com>, <thomas@monjalon.net>,
<orika@nvidia.com>
Subject: [dpdk-dev] [PATCH v5 2/2] net/mlx5: check delay drop settings in kernel driver
Date: Fri, 5 Nov 2021 15:36:17 +0200 [thread overview]
Message-ID: <20211105133617.177189-3-bingz@nvidia.com> (raw)
In-Reply-To: <20211105133617.177189-1-bingz@nvidia.com>
Delay drop is a common feature managed on a per-device basis,
and the kernel driver is the one responsible for its initialization
and rearming.
By default, a timeout value is set to activate the delay drop when
the driver is loaded.
A private flag "dropless_rq" controls the rearming. Only when it is
on will the rearming be handled once a timeout event is received.
Otherwise, the delay drop is deactivated after the first timeout
occurs and none of the Rx queues will have this feature.
The PMD queries this flag and warns the application when some
queues are created with delay drop but the flag is off.
Signed-off-by: Bing Zhao <bingz@nvidia.com>
Acked-by: Viacheslav Ovsiienko <viacheslavo@nvidia.com>
---
doc/guides/nics/mlx5.rst | 16 ++++
doc/guides/rel_notes/release_21_11.rst | 1 +
drivers/net/mlx5/linux/mlx5_ethdev_os.c | 111 ++++++++++++++++++++++
drivers/net/mlx5/mlx5.h | 1 +
drivers/net/mlx5/mlx5_trigger.c | 18 ++++
drivers/net/mlx5/windows/mlx5_ethdev_os.c | 17 ++++
6 files changed, 164 insertions(+)
diff --git a/doc/guides/nics/mlx5.rst b/doc/guides/nics/mlx5.rst
index 061a44c723..97d6c1227c 100644
--- a/doc/guides/nics/mlx5.rst
+++ b/doc/guides/nics/mlx5.rst
@@ -619,6 +619,22 @@ Driver options
The packets being received will not be dropped immediately when the WQEs are
exhausted in a Rx queue with delay drop enabled.
+ A timeout value is set in the driver to control the waiting time before
+ dropping a packet. Once the timer expires, the delay drop will be
+ deactivated for all the Rx queues with this feature enabled. To re-activate
+ it, a rearming is needed and it is part of the kernel driver starting from
+ OFED 5.5.
+
+ To enable / disable the delay drop rearming, the private flag ``dropless_rq``
+ can be set and queried via ethtool:
+
+ - ethtool --set-priv-flags <netdev> dropless_rq on (/ off)
+ - ethtool --show-priv-flags <netdev>
+
+ The configuration flag is global per PF and can only be set on the PF. Once
+ it is on, the Rx queues of all the VFs, SFs and representors will share the
+ timer and rearming.
+
- ``mprq_en`` parameter [int]
A nonzero value enables configuring Multi-Packet Rx queues. Rx queue is
diff --git a/doc/guides/rel_notes/release_21_11.rst b/doc/guides/rel_notes/release_21_11.rst
index 92180bb4bd..9556aa8bd9 100644
--- a/doc/guides/rel_notes/release_21_11.rst
+++ b/doc/guides/rel_notes/release_21_11.rst
@@ -192,6 +192,7 @@ New Features
* Added implicit mempool registration to avoid data path hiccups (opt-out).
* Added NIC offloads for the PMD on Windows (TSO, VLAN strip, CRC keep).
* Added socket direct mode bonding support.
+ * Added delay drop support for Rx queue.
* **Updated Solarflare network PMD.**
diff --git a/drivers/net/mlx5/linux/mlx5_ethdev_os.c b/drivers/net/mlx5/linux/mlx5_ethdev_os.c
index 9d0e491d0c..c19825ee52 100644
--- a/drivers/net/mlx5/linux/mlx5_ethdev_os.c
+++ b/drivers/net/mlx5/linux/mlx5_ethdev_os.c
@@ -1630,3 +1630,114 @@ mlx5_get_mac(struct rte_eth_dev *dev, uint8_t (*mac)[RTE_ETHER_ADDR_LEN])
memcpy(mac, request.ifr_hwaddr.sa_data, RTE_ETHER_ADDR_LEN);
return 0;
}
+
+/*
+ * Query dropless_rq private flag value provided by ETHTOOL.
+ *
+ * @param dev
+ * Pointer to Ethernet device.
+ *
+ * @return
+ * - 0 on success, flag is not set.
+ * - 1 on success, flag is set.
+ * - negative errno value otherwise and rte_errno is set.
+ */
+int mlx5_get_flag_dropless_rq(struct rte_eth_dev *dev)
+{
+ struct {
+ struct ethtool_sset_info hdr;
+ uint32_t buf[1];
+ } sset_info;
+ struct ethtool_drvinfo drvinfo;
+ struct ifreq ifr;
+ struct ethtool_gstrings *strings = NULL;
+ struct ethtool_value flags;
+ const int32_t flag_len = sizeof(flags.data) * CHAR_BIT;
+ int32_t str_sz;
+ int32_t len;
+ int32_t i;
+ int ret;
+
+ sset_info.hdr.cmd = ETHTOOL_GSSET_INFO;
+ sset_info.hdr.reserved = 0;
+ sset_info.hdr.sset_mask = 1ULL << ETH_SS_PRIV_FLAGS;
+ ifr.ifr_data = (caddr_t)&sset_info;
+ ret = mlx5_ifreq(dev, SIOCETHTOOL, &ifr);
+ if (!ret) {
+ const uint32_t *sset_lengths = sset_info.hdr.data;
+
+ len = sset_info.hdr.sset_mask ? sset_lengths[0] : 0;
+ } else if (ret == -EOPNOTSUPP) {
+ drvinfo.cmd = ETHTOOL_GDRVINFO;
+ ifr.ifr_data = (caddr_t)&drvinfo;
+ ret = mlx5_ifreq(dev, SIOCETHTOOL, &ifr);
+ if (ret) {
+ DRV_LOG(WARNING, "port %u cannot get the driver info",
+ dev->data->port_id);
+ goto exit;
+ }
+ len = *(uint32_t *)((char *)&drvinfo +
+ offsetof(struct ethtool_drvinfo, n_priv_flags));
+ } else {
+ DRV_LOG(WARNING, "port %u cannot get the sset info",
+ dev->data->port_id);
+ goto exit;
+ }
+ if (!len) {
+ DRV_LOG(WARNING, "port %u does not have private flag",
+ dev->data->port_id);
+ rte_errno = EOPNOTSUPP;
+ ret = -rte_errno;
+ goto exit;
+ } else if (len > flag_len) {
+ DRV_LOG(WARNING, "port %u maximal private flags number is %d",
+ dev->data->port_id, flag_len);
+ len = flag_len;
+ }
+ str_sz = ETH_GSTRING_LEN * len;
+ strings = (struct ethtool_gstrings *)
+ mlx5_malloc(0, str_sz + sizeof(struct ethtool_gstrings), 0,
+ SOCKET_ID_ANY);
+ if (!strings) {
+ DRV_LOG(WARNING, "port %u unable to allocate memory for"
+ " private flags", dev->data->port_id);
+ rte_errno = ENOMEM;
+ ret = -rte_errno;
+ goto exit;
+ }
+ strings->cmd = ETHTOOL_GSTRINGS;
+ strings->string_set = ETH_SS_PRIV_FLAGS;
+ strings->len = len;
+ ifr.ifr_data = (caddr_t)strings;
+ ret = mlx5_ifreq(dev, SIOCETHTOOL, &ifr);
+ if (ret) {
+ DRV_LOG(WARNING, "port %u unable to get private flags strings",
+ dev->data->port_id);
+ goto exit;
+ }
+ for (i = 0; i < len; i++) {
+ strings->data[(i + 1) * ETH_GSTRING_LEN - 1] = 0;
+ if (!strcmp((const char *)strings->data + i * ETH_GSTRING_LEN,
+ "dropless_rq"))
+ break;
+ }
+ if (i == len) {
+ DRV_LOG(WARNING, "port %u does not support dropless_rq",
+ dev->data->port_id);
+ rte_errno = EOPNOTSUPP;
+ ret = -rte_errno;
+ goto exit;
+ }
+ flags.cmd = ETHTOOL_GPFLAGS;
+ ifr.ifr_data = (caddr_t)&flags;
+ ret = mlx5_ifreq(dev, SIOCETHTOOL, &ifr);
+ if (ret) {
+ DRV_LOG(WARNING, "port %u unable to get private flags status",
+ dev->data->port_id);
+ goto exit;
+ }
+ ret = !!(flags.data & (1U << i));
+exit:
+ mlx5_free(strings);
+ return ret;
+}
diff --git a/drivers/net/mlx5/mlx5.h b/drivers/net/mlx5/mlx5.h
index b2022f3300..9307a4f95b 100644
--- a/drivers/net/mlx5/mlx5.h
+++ b/drivers/net/mlx5/mlx5.h
@@ -1602,6 +1602,7 @@ int mlx5_os_read_dev_stat(struct mlx5_priv *priv,
int mlx5_os_read_dev_counters(struct rte_eth_dev *dev, uint64_t *stats);
int mlx5_os_get_stats_n(struct rte_eth_dev *dev);
void mlx5_os_stats_init(struct rte_eth_dev *dev);
+int mlx5_get_flag_dropless_rq(struct rte_eth_dev *dev);
/* mlx5_mac.c */
diff --git a/drivers/net/mlx5/mlx5_trigger.c b/drivers/net/mlx5/mlx5_trigger.c
index a3e62e9533..0ecc530043 100644
--- a/drivers/net/mlx5/mlx5_trigger.c
+++ b/drivers/net/mlx5/mlx5_trigger.c
@@ -1129,6 +1129,24 @@ mlx5_dev_start(struct rte_eth_dev *dev)
dev->data->port_id, strerror(rte_errno));
goto error;
}
+ if (priv->config.std_delay_drop || priv->config.hp_delay_drop) {
+ if (!priv->config.vf && !priv->config.sf &&
+ !priv->representor) {
+ ret = mlx5_get_flag_dropless_rq(dev);
+ if (ret < 0)
+ DRV_LOG(WARNING,
+ "port %u cannot query dropless flag",
+ dev->data->port_id);
+ else if (!ret)
+ DRV_LOG(WARNING,
+ "port %u dropless_rq OFF, no rearming",
+ dev->data->port_id);
+ } else {
+ DRV_LOG(DEBUG,
+ "port %u doesn't support dropless_rq flag",
+ dev->data->port_id);
+ }
+ }
ret = mlx5_rxq_start(dev);
if (ret) {
DRV_LOG(ERR, "port %u Rx queue allocation failed: %s",
diff --git a/drivers/net/mlx5/windows/mlx5_ethdev_os.c b/drivers/net/mlx5/windows/mlx5_ethdev_os.c
index fddc7a6b12..359f73df7c 100644
--- a/drivers/net/mlx5/windows/mlx5_ethdev_os.c
+++ b/drivers/net/mlx5/windows/mlx5_ethdev_os.c
@@ -389,3 +389,20 @@ mlx5_is_removed(struct rte_eth_dev *dev)
return 1;
return 0;
}
+
+/*
+ * Query dropless_rq private flag value provided by ETHTOOL.
+ *
+ * @param dev
+ * Pointer to Ethernet device.
+ *
+ * @return
+ * - 0 on success, flag is not set.
+ * - 1 on success, flag is set.
+ * - negative errno value otherwise and rte_errno is set.
+ */
+int mlx5_get_flag_dropless_rq(struct rte_eth_dev *dev)
+{
+ RTE_SET_USED(dev);
+ return -ENOTSUP;
+}
--
2.27.0
Thread overview:
2021-11-04 11:26 [dpdk-dev] [PATCH 0/4] Add delay drop support for Rx queue Bing Zhao
2021-11-04 11:26 ` [dpdk-dev] [PATCH 1/4] common/mlx5: support delay drop capabilities query Bing Zhao
2021-11-04 11:26 ` [dpdk-dev] [PATCH 2/4] net/mlx5: add support for Rx queue delay drop Bing Zhao
2021-11-04 14:01 ` David Marchand
2021-11-04 14:34 ` Bing Zhao
2021-11-04 11:26 ` [dpdk-dev] [PATCH 3/4] net/mlx5: support querying delay drop status via ethtool Bing Zhao
2021-11-04 11:26 ` [dpdk-dev] [PATCH 4/4] doc: update the description for Rx delay drop Bing Zhao
2021-11-04 14:01 ` [dpdk-dev] [PATCH v2 0/2] Add delay drop support for Rx queue Bing Zhao
2021-11-04 14:01 ` [dpdk-dev] [PATCH v2 1/2] net/mlx5: add support for Rx queue delay drop Bing Zhao
2021-11-04 14:01 ` [dpdk-dev] [PATCH v2 2/2] net/mlx5: check delay drop settings in kernel driver Bing Zhao
2021-11-04 16:55 ` [dpdk-dev] [PATCH v3 0/2] Add delay drop support for Rx queue Bing Zhao
2021-11-04 16:55 ` [dpdk-dev] [PATCH v3 1/2] net/mlx5: add support for Rx queue delay drop Bing Zhao
2021-11-04 16:55 ` [dpdk-dev] [PATCH v3 2/2] net/mlx5: check delay drop settings in kernel driver Bing Zhao
2021-11-04 17:59 ` [dpdk-dev] [PATCH v4 0/2] Add delay drop support for Rx queue Bing Zhao
2021-11-04 17:59 ` [dpdk-dev] [PATCH v4 1/2] net/mlx5: add support for Rx queue delay drop Bing Zhao
2021-11-04 18:22 ` Slava Ovsiienko
2021-11-04 17:59 ` [dpdk-dev] [PATCH v4 2/2] net/mlx5: check delay drop settings in kernel driver Bing Zhao
2021-11-04 18:22 ` Slava Ovsiienko
2021-11-04 21:46 ` [dpdk-dev] [PATCH v4 0/2] Add delay drop support for Rx queue Raslan Darawsheh
2021-11-05 13:36 ` [dpdk-dev] [PATCH v5 " Bing Zhao
2021-11-05 13:36 ` [dpdk-dev] [PATCH v5 1/2] net/mlx5: add support for Rx queue delay drop Bing Zhao
2021-11-05 13:36 ` Bing Zhao [this message]
2021-11-05 14:28 ` [dpdk-dev] [PATCH v6 0/2] Add delay drop support for Rx queue Bing Zhao
2021-11-05 14:28 ` [dpdk-dev] [PATCH v6 1/2] net/mlx5: add support for Rx queue delay drop Bing Zhao
2021-11-05 14:28 ` [dpdk-dev] [PATCH v6 2/2] net/mlx5: check delay drop settings in kernel driver Bing Zhao
2021-11-05 15:30 ` [dpdk-dev] [PATCH v7 0/2] Add delay drop support for Rx queue Bing Zhao
2021-11-05 15:30 ` [dpdk-dev] [PATCH v7 1/2] net/mlx5: add support for Rx queue delay drop Bing Zhao
2021-11-05 15:30 ` [dpdk-dev] [PATCH v7 2/2] net/mlx5: check delay drop settings in kernel driver Bing Zhao
2021-11-05 16:07 ` [dpdk-dev] [PATCH v7 0/2] Add delay drop support for Rx queue Ferruh Yigit