From: Yuanhan Liu
To: Ophir Munk
Cc: Pascal Mazon, dpdk stable
Date: Tue, 21 Nov 2017 21:16:24 +0800
Message-Id: <1511270333-31002-42-git-send-email-yliu@fridaylinux.org>
X-Mailer: git-send-email 2.7.4
In-Reply-To: <1511270333-31002-1-git-send-email-yliu@fridaylinux.org>
References: <1511270333-31002-1-git-send-email-yliu@fridaylinux.org>
Subject: [dpdk-stable] patch 'net/tap: fix flow and port commands' has been queued to stable release 17.08.1
List-Id: patches for DPDK stable branches

Hi,

FYI, your patch has been queued to stable release 17.08.1.

Note it hasn't been pushed to http://dpdk.org/browse/dpdk-stable yet.
It will be pushed if I get no objections before 11/24/17. So please
shout if anyone has objections.

Thanks.

	--yliu

---
From 0539ffdd99aec23f551e844aa6067cd08fb375a2 Mon Sep 17 00:00:00 2001
From: Ophir Munk
Date: Sat, 16 Sep 2017 22:32:38 +0000
Subject: [PATCH] net/tap: fix flow and port commands

[ upstream commit f46900d03823aff1cb636a2f889caeee99c721b2 ]

This commit fixes two bugs related to tap devices.

The first bug occurs when executing the following flow rule in testpmd,
assuming the tap device has 4 rx and tx queue pairs:
"flow create 0 ingress pattern eth / end actions queue index 5 / end"
This command reports success and prints "Flow rule #0 created", although
it should have failed because queue index 5 does not exist.

The second bug occurs when executing "port start all" in testpmd
following a port configuration.
Assuming one rx/tx queue pair, an error is reported:
"Fail to start port 0"

Before this commit, a fixed maximum number (16) of rx and tx queue pairs
was created on startup, and the file descriptors (fds) of each rx/tx
pair were identical. As a result, in the first bug, queue index 5
existed because the tap device was created with 16 rx and tx queue
pairs regardless of the configured number of queues. In the second bug,
when the tap device was started, the tx fd was closed before reopening
it and executing ioctl() on it. However, closing the sole fd of the
device caused ioctl() to fail with "No such device".

This commit creates the configured number of rx and tx queue pairs (up
to a maximum of 16) and assigns a unique fd to each queue. It was
written to solve the first bug and turned out to be the right fix for
the second bug as well.

Fixes: 02f96a0a82d1 ("net/tap: add TUN/TAP device PMD")
Fixes: bf7b7f437b49 ("net/tap: create netdevice during probing")
Fixes: de96fe68ae95 ("net/tap: add basic flow API patterns and actions")

Signed-off-by: Ophir Munk
Acked-by: Pascal Mazon
---
 drivers/net/tap/rte_eth_tap.c | 139 +++++++++++++++++++++++++++++++-----------
 drivers/net/tap/rte_eth_tap.h |   2 +-
 drivers/net/tap/tap_flow.c    |   3 +-
 3 files changed, 108 insertions(+), 36 deletions(-)

diff --git a/drivers/net/tap/rte_eth_tap.c b/drivers/net/tap/rte_eth_tap.c
index 9acea83..d926d4b 100644
--- a/drivers/net/tap/rte_eth_tap.c
+++ b/drivers/net/tap/rte_eth_tap.c
@@ -603,8 +603,31 @@ tap_dev_stop(struct rte_eth_dev *dev)
 }
 
 static int
-tap_dev_configure(struct rte_eth_dev *dev __rte_unused)
+tap_dev_configure(struct rte_eth_dev *dev)
 {
+        if (dev->data->nb_rx_queues > RTE_PMD_TAP_MAX_QUEUES) {
+                RTE_LOG(ERR, PMD,
+                        "%s: number of rx queues %d exceeds max num of queues %d\n",
+                        dev->device->name,
+                        dev->data->nb_rx_queues,
+                        RTE_PMD_TAP_MAX_QUEUES);
+                return -1;
+        }
+        if (dev->data->nb_tx_queues > RTE_PMD_TAP_MAX_QUEUES) {
+                RTE_LOG(ERR, PMD,
+                        "%s: number of tx queues %d exceeds max num of queues %d\n",
+                        dev->device->name,
+                        dev->data->nb_tx_queues,
+                        RTE_PMD_TAP_MAX_QUEUES);
+                return -1;
+        }
+
+        RTE_LOG(INFO, PMD, "%s: %p: TX configured queues number: %u\n",
+                dev->device->name, (void *)dev, dev->data->nb_tx_queues);
+
+        RTE_LOG(INFO, PMD, "%s: %p: RX configured queues number: %u\n",
+                dev->device->name, (void *)dev, dev->data->nb_rx_queues);
+
         return 0;
 }
 
@@ -650,8 +673,8 @@ tap_dev_info(struct rte_eth_dev *dev, struct rte_eth_dev_info *dev_info)
         dev_info->if_index = internals->if_index;
         dev_info->max_mac_addrs = 1;
         dev_info->max_rx_pktlen = (uint32_t)ETHER_MAX_VLAN_FRAME_LEN;
-        dev_info->max_rx_queues = internals->nb_queues;
-        dev_info->max_tx_queues = internals->nb_queues;
+        dev_info->max_rx_queues = RTE_PMD_TAP_MAX_QUEUES;
+        dev_info->max_tx_queues = RTE_PMD_TAP_MAX_QUEUES;
         dev_info->min_rx_bufsize = 0;
         dev_info->pci_dev = NULL;
         dev_info->speed_capa = tap_dev_speed_capa();
@@ -673,9 +696,9 @@ tap_stats_get(struct rte_eth_dev *dev, struct rte_eth_stats *tap_stats)
         unsigned long rx_nombuf = 0, ierrors = 0;
         const struct pmd_internals *pmd = dev->data->dev_private;
 
-        imax = (pmd->nb_queues < RTE_ETHDEV_QUEUE_STAT_CNTRS) ?
-                pmd->nb_queues : RTE_ETHDEV_QUEUE_STAT_CNTRS;
-
+        /* rx queue statistics */
+        imax = (dev->data->nb_rx_queues < RTE_ETHDEV_QUEUE_STAT_CNTRS) ?
+                dev->data->nb_rx_queues : RTE_ETHDEV_QUEUE_STAT_CNTRS;
         for (i = 0; i < imax; i++) {
                 tap_stats->q_ipackets[i] = pmd->rxq[i].stats.ipackets;
                 tap_stats->q_ibytes[i] = pmd->rxq[i].stats.ibytes;
@@ -683,7 +706,13 @@ tap_stats_get(struct rte_eth_dev *dev, struct rte_eth_stats *tap_stats)
                 rx_bytes_total += tap_stats->q_ibytes[i];
                 rx_nombuf += pmd->rxq[i].stats.rx_nombuf;
                 ierrors += pmd->rxq[i].stats.ierrors;
+        }
 
+        /* tx queue statistics */
+        imax = (dev->data->nb_tx_queues < RTE_ETHDEV_QUEUE_STAT_CNTRS) ?
+                dev->data->nb_tx_queues : RTE_ETHDEV_QUEUE_STAT_CNTRS;
+
+        for (i = 0; i < imax; i++) {
                 tap_stats->q_opackets[i] = pmd->txq[i].stats.opackets;
                 tap_stats->q_errors[i] = pmd->txq[i].stats.errs;
                 tap_stats->q_obytes[i] = pmd->txq[i].stats.obytes;
@@ -707,7 +736,7 @@ tap_stats_reset(struct rte_eth_dev *dev)
         int i;
         struct pmd_internals *pmd = dev->data->dev_private;
 
-        for (i = 0; i < pmd->nb_queues; i++) {
+        for (i = 0; i < RTE_PMD_TAP_MAX_QUEUES; i++) {
                 pmd->rxq[i].stats.ipackets = 0;
                 pmd->rxq[i].stats.ibytes = 0;
                 pmd->rxq[i].stats.ierrors = 0;
@@ -729,11 +758,15 @@ tap_dev_close(struct rte_eth_dev *dev)
         tap_flow_flush(dev, NULL);
         tap_flow_implicit_flush(internals, NULL);
 
-        for (i = 0; i < internals->nb_queues; i++) {
-                if (internals->rxq[i].fd != -1)
+        for (i = 0; i < RTE_PMD_TAP_MAX_QUEUES; i++) {
+                if (internals->rxq[i].fd != -1) {
                         close(internals->rxq[i].fd);
-                internals->rxq[i].fd = -1;
-                internals->txq[i].fd = -1;
+                        internals->rxq[i].fd = -1;
+                }
+                if (internals->txq[i].fd != -1) {
+                        close(internals->txq[i].fd);
+                        internals->txq[i].fd = -1;
+                }
         }
 
         if (internals->remote_if_index) {
@@ -887,30 +920,57 @@ tap_mac_set(struct rte_eth_dev *dev, struct ether_addr *mac_addr)
 static int
 tap_setup_queue(struct rte_eth_dev *dev,
                 struct pmd_internals *internals,
-                uint16_t qid)
+                uint16_t qid,
+                int is_rx)
 {
+        int *fd;
+        int *other_fd;
+        const char *dir;
         struct pmd_internals *pmd = dev->data->dev_private;
         struct rx_queue *rx = &internals->rxq[qid];
         struct tx_queue *tx = &internals->txq[qid];
-        int fd = rx->fd == -1 ? tx->fd : rx->fd;
 
-        if (fd == -1) {
-                RTE_LOG(INFO, PMD, "Add queue to TAP %s for qid %d\n",
-                        pmd->name, qid);
-                fd = tun_alloc(pmd);
-                if (fd < 0) {
+        if (is_rx) {
+                fd = &rx->fd;
+                other_fd = &tx->fd;
+                dir = "rx";
+        } else {
+                fd = &tx->fd;
+                other_fd = &rx->fd;
+                dir = "tx";
+        }
+        if (*fd != -1) {
+                /* fd for this queue already exists */
+                RTE_LOG(DEBUG, PMD, "%s: fd %d for %s queue qid %d exists\n",
+                        pmd->name, *fd, dir, qid);
+        } else if (*other_fd != -1) {
+                /* Only other_fd exists. dup it */
+                *fd = dup(*other_fd);
+                if (*fd < 0) {
+                        *fd = -1;
+                        RTE_LOG(ERR, PMD, "%s: dup() failed.\n",
+                                pmd->name);
+                        return -1;
+                }
+                RTE_LOG(DEBUG, PMD, "%s: dup fd %d for %s queue qid %d (%d)\n",
+                        pmd->name, *other_fd, dir, qid, *fd);
+        } else {
+                /* Both RX and TX fds do not exist (equal -1). Create fd */
+                *fd = tun_alloc(pmd);
+                if (*fd < 0) {
+                        *fd = -1; /* restore original value */
                         RTE_LOG(ERR, PMD, "%s: tun_alloc() failed.\n",
                                 pmd->name);
                         return -1;
                 }
+                RTE_LOG(DEBUG, PMD, "%s: add %s queue for qid %d fd %d\n",
+                        pmd->name, dir, qid, *fd);
         }
 
-        rx->fd = fd;
-        tx->fd = fd;
         tx->mtu = &dev->data->mtu;
         rx->rxmode = &dev->data->dev_conf.rxmode;
 
-        return fd;
+        return *fd;
 }
 
 static int
@@ -932,10 +992,10 @@ tap_rx_queue_setup(struct rte_eth_dev *dev,
         int fd;
         int i;
 
-        if ((rx_queue_id >= internals->nb_queues) || !mp) {
+        if (rx_queue_id >= dev->data->nb_rx_queues || !mp) {
                 RTE_LOG(WARNING, PMD,
-                        "nb_queues %d too small or mempool NULL\n",
-                        internals->nb_queues);
+                        "nb_rx_queues %d too small or mempool NULL\n",
+                        dev->data->nb_rx_queues);
 
                 return -1;
         }
@@ -954,7 +1014,7 @@ tap_rx_queue_setup(struct rte_eth_dev *dev,
         rxq->iovecs = iovecs;
 
         dev->data->rx_queues[rx_queue_id] = rxq;
-        fd = tap_setup_queue(dev, internals, rx_queue_id);
+        fd = tap_setup_queue(dev, internals, rx_queue_id, 1);
         if (fd == -1) {
                 ret = fd;
                 goto error;
@@ -1002,11 +1062,11 @@ tap_tx_queue_setup(struct rte_eth_dev *dev,
         struct pmd_internals *internals = dev->data->dev_private;
         int ret;
 
-        if (tx_queue_id >= internals->nb_queues)
+        if (tx_queue_id >= dev->data->nb_tx_queues)
                 return -1;
 
         dev->data->tx_queues[tx_queue_id] = &internals->txq[tx_queue_id];
-        ret = tap_setup_queue(dev, internals, tx_queue_id);
+        ret = tap_setup_queue(dev, internals, tx_queue_id, 0);
         if (ret == -1)
                 return -1;
 
@@ -1166,7 +1226,6 @@ static const struct eth_dev_ops ops = {
         .filter_ctrl = tap_dev_filter_ctrl,
 };
 
-
 static int
 eth_dev_tap_create(struct rte_vdev_device *vdev, char *tap_name,
                    char *remote_iface, int fixed_mac_type)
@@ -1193,8 +1252,8 @@ eth_dev_tap_create(struct rte_vdev_device *vdev, char *tap_name,
         }
 
         pmd = dev->data->dev_private;
+        pmd->dev = dev;
         snprintf(pmd->name, sizeof(pmd->name), "%s", tap_name);
-        pmd->nb_queues = RTE_PMD_TAP_MAX_QUEUES;
 
         pmd->ioctl_sock = socket(AF_INET, SOCK_DGRAM, 0);
         if (pmd->ioctl_sock == -1) {
@@ -1212,8 +1271,9 @@ eth_dev_tap_create(struct rte_vdev_device *vdev, char *tap_name,
         data->dev_link = pmd_link;
         data->mac_addrs = &pmd->eth_addr;
-        data->nb_rx_queues = pmd->nb_queues;
-        data->nb_tx_queues = pmd->nb_queues;
+        /* Set the number of RX and TX queues */
+        data->nb_rx_queues = 0;
+        data->nb_tx_queues = 0;
 
         dev->data = data;
         dev->dev_ops = &ops;
@@ -1241,7 +1301,11 @@ eth_dev_tap_create(struct rte_vdev_device *vdev, char *tap_name,
         }
 
         /* Immediately create the netdevice (this will create the 1st queue). */
-        if (tap_setup_queue(dev, pmd, 0) == -1)
+        /* rx queue */
+        if (tap_setup_queue(dev, pmd, 0, 1) == -1)
+                goto error_exit;
+        /* tx queue */
+        if (tap_setup_queue(dev, pmd, 0, 0) == -1)
                 goto error_exit;
 
         ifr.ifr_mtu = dev->data->mtu;
@@ -1515,9 +1579,16 @@ rte_pmd_tap_remove(struct rte_vdev_device *dev)
                 tap_flow_implicit_flush(internals, NULL);
                 nl_final(internals->nlsk_fd);
         }
-        for (i = 0; i < internals->nb_queues; i++)
-                if (internals->rxq[i].fd != -1)
+        for (i = 0; i < RTE_PMD_TAP_MAX_QUEUES; i++) {
+                if (internals->rxq[i].fd != -1) {
                         close(internals->rxq[i].fd);
+                        internals->rxq[i].fd = -1;
+                }
+                if (internals->txq[i].fd != -1) {
+                        close(internals->txq[i].fd);
+                        internals->txq[i].fd = -1;
+                }
+        }
 
         close(internals->ioctl_sock);
         rte_free(eth_dev->data->dev_private);
diff --git a/drivers/net/tap/rte_eth_tap.h b/drivers/net/tap/rte_eth_tap.h
index 928a045..829f32f 100644
--- a/drivers/net/tap/rte_eth_tap.h
+++ b/drivers/net/tap/rte_eth_tap.h
@@ -80,9 +80,9 @@ struct tx_queue {
 };
 
 struct pmd_internals {
+        struct rte_eth_dev *dev;                  /* Ethernet device. */
         char remote_iface[RTE_ETH_NAME_MAX_LEN];  /* Remote netdevice name */
         char name[RTE_ETH_NAME_MAX_LEN];          /* Internal Tap device name */
-        uint16_t nb_queues;                       /* Number of queues supported */
         struct ether_addr eth_addr;               /* Mac address of the device port */
         struct ifreq remote_initial_flags;        /* Remote netdevice flags on init */
         int remote_if_index;                      /* remote netdevice IF_INDEX */
diff --git a/drivers/net/tap/tap_flow.c b/drivers/net/tap/tap_flow.c
index 41f7345..aa33960 100644
--- a/drivers/net/tap/tap_flow.c
+++ b/drivers/net/tap/tap_flow.c
@@ -1092,7 +1092,8 @@ priv_flow_process(struct pmd_internals *pmd,
                         if (action)
                                 goto exit_action_not_supported;
                         action = 1;
-                        if (!queue || (queue->index >= pmd->nb_queues))
+                        if (!queue ||
+                            (queue->index > pmd->dev->data->nb_rx_queues - 1))
                                 goto exit_action_not_supported;
                         if (flow)
                                 err = add_action_skbedit(flow, queue->index);
-- 
2.7.4
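
A side note for readers following the fd handling above: the patched
tap_setup_queue() keeps a queue's existing fd, dup()s the peer rx/tx
queue's fd when only that one exists, and otherwise allocates a new
descriptor, so every queue ends up owning its own fd. The standalone C
sketch below illustrates only that rule, outside the driver; the names
setup_queue_fd() and open_tap_fd() are hypothetical and used just for
this illustration, with open_tap_fd() opening /dev/null instead of
calling the real tun_alloc() so the program runs without a TAP device.

#include <fcntl.h>
#include <stdio.h>
#include <unistd.h>

/* Hypothetical stand-in for the driver's tun_alloc(): opens /dev/null
 * so the sketch can run anywhere without creating a real TAP device. */
static int open_tap_fd(void)
{
        return open("/dev/null", O_RDWR);
}

/*
 * Same decision the patched tap_setup_queue() makes for one queue:
 *  - the queue already has an fd         -> keep it;
 *  - only the peer (rx/tx) queue has one -> dup() it, so each queue
 *    owns a distinct descriptor to the same device;
 *  - neither exists                      -> allocate a fresh one.
 */
static int setup_queue_fd(int *fd, int *other_fd)
{
        if (*fd != -1)
                return *fd;
        if (*other_fd != -1)
                *fd = dup(*other_fd);
        else
                *fd = open_tap_fd();
        return *fd; /* -1 on failure, as in the driver */
}

int main(void)
{
        int rx_fd = -1, tx_fd = -1;

        setup_queue_fd(&rx_fd, &tx_fd); /* allocates the first descriptor */
        setup_queue_fd(&tx_fd, &rx_fd); /* dup()s it for the tx queue */
        printf("rx_fd=%d tx_fd=%d\n", rx_fd, tx_fd);

        close(rx_fd);
        close(tx_fd);
        return 0;
}

The program prints two distinct descriptor numbers, which mirrors the
property the second bug fix relies on: closing one queue's fd no longer
invalidates the descriptor used by the other queue.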