* [PATCH v3 1/1] net/tap: add a check that Rx/Tx have the same num of queues
From: Nobuhiro MIKI @ 2022-01-19 7:43 UTC
To: ferruh.yigit, keith.wiles; +Cc: dev, i.maximets, dmarchan, Nobuhiro MIKI
Users can create the desired number of RxQs and TxQs in DPDK. For
example, if the number of RxQs = 2 and the number of TxQs = 5,
a total of 8 file descriptors will be created for a tap device:
2 for the RxQs, 5 for the TxQs, and one for keepalive. An RxQ and
a TxQ with the same ID are paired by dup(2).

In this scenario, the kernel will have 3 RxQs on which packets
arrive but are never read. This is because DPDK polls only 2 RxQs,
while the kernel has 5 queues. This patch adds a check that the
numbers of Rx and Tx queues are equal, to avoid unexpected packet
drops.
Signed-off-by: Nobuhiro MIKI <nmiki@yahoo-corp.jp>
---
v3: add doc for this limitation in doc/guides/nics/tap.rst
v2: fix commit message
I had first discussed this issue in OVS [1], but changed my mind
that a fix in DPDK would be more appropriate.
[1]: https://mail.openvswitch.org/pipermail/ovs-dev/2021-November/389690.html
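For context, here is a minimal sketch (not part of this patch) of an
application configuring a tap port so that the new check passes. The
helper name, port id and queue count below are illustrative assumptions,
and most error handling is trimmed. With this patch, passing unequal
Rx/Tx queue counts to rte_eth_dev_configure() makes the tap PMD's
configure step fail instead of silently dropping packets on the unread
kernel queues:

#include <string.h>
#include <rte_ethdev.h>

/* Hypothetical helper: configure a tap port with matching Rx/Tx queue
 * counts, as net/tap now requires. With unequal counts (e.g. 2 Rx and
 * 5 Tx), rte_eth_dev_configure() returns an error for this PMD.
 */
static int
configure_tap_port(uint16_t port_id, uint16_t nb_queues)
{
	struct rte_eth_conf port_conf;
	int ret;

	memset(&port_conf, 0, sizeof(port_conf));

	/* Same queue count for both Rx and Tx. */
	ret = rte_eth_dev_configure(port_id, nb_queues, nb_queues,
				    &port_conf);
	if (ret < 0)
		return ret;

	return 0;
}

After a successful configure, the application still has to set up
nb_queues Rx and nb_queues Tx queues with rte_eth_rx_queue_setup() and
rte_eth_tx_queue_setup() before calling rte_eth_dev_start().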
---
doc/guides/nics/tap.rst | 5 +++++
drivers/net/tap/rte_eth_tap.c | 8 ++++++++
2 files changed, 13 insertions(+)
diff --git a/doc/guides/nics/tap.rst b/doc/guides/nics/tap.rst
index 681010d9ed..3d4564b046 100644
--- a/doc/guides/nics/tap.rst
+++ b/doc/guides/nics/tap.rst
@@ -298,3 +298,8 @@ Systems supporting flow API
 | Azure Ubuntu 16.04,| No limitation         |
 | kernel 4.13        |                       |
 +--------------------+-----------------------+
+
+Limitations
+-----------
+
+* Rx/Tx must have the same number of queues.
diff --git a/drivers/net/tap/rte_eth_tap.c b/drivers/net/tap/rte_eth_tap.c
index 5bb472f1a6..02eb311e09 100644
--- a/drivers/net/tap/rte_eth_tap.c
+++ b/drivers/net/tap/rte_eth_tap.c
@@ -940,6 +940,14 @@ tap_dev_configure(struct rte_eth_dev *dev)
 			RTE_PMD_TAP_MAX_QUEUES);
 		return -1;
 	}
+	if (dev->data->nb_rx_queues != dev->data->nb_tx_queues) {
+		TAP_LOG(ERR,
+			"%s: number of rx queues %d must be equal to number of tx queues %d",
+			dev->device->name,
+			dev->data->nb_rx_queues,
+			dev->data->nb_tx_queues);
+		return -1;
+	}
 
 	TAP_LOG(INFO, "%s: %s: TX configured queues number: %u",
 		dev->device->name, pmd->name, dev->data->nb_tx_queues);
--
2.24.4
* Re: [PATCH v3 1/1] net/tap: add a check that Rx/Tx have the same num of queues
From: Ferruh Yigit @ 2022-01-26 16:19 UTC
To: Nobuhiro MIKI, keith.wiles; +Cc: dev, i.maximets, dmarchan
On 1/19/2022 7:43 AM, Nobuhiro MIKI wrote:
> Users can create the desired number of RxQs and TxQs in DPDK. For
> example, if the number of RxQs = 2 and the number of TxQs = 5,
> a total of 8 file descriptors will be created for a tap device:
> 2 for the RxQs, 5 for the TxQs, and one for keepalive. An RxQ and
> a TxQ with the same ID are paired by dup(2).
>
> In this scenario, the kernel will have 3 RxQs on which packets
> arrive but are never read. This is because DPDK polls only 2 RxQs,
> while the kernel has 5 queues. This patch adds a check that the
> numbers of Rx and Tx queues are equal, to avoid unexpected packet
> drops.
>
> Signed-off-by: Nobuhiro MIKI <nmiki@yahoo-corp.jp>
Reviewed-by: Ferruh Yigit <ferruh.yigit@intel.com>
Applied to dpdk-next-net/main, thanks.