From: Ophir Munk <ophirmu@mellanox.com>
To: Pascal Mazon
Cc: Adrien Mazarguil, dev@dpdk.org, Thomas Monjalon, Olga Shern,
 Ophir Munk, stable@dpdk.org
Date: Sat, 16 Sep 2017 22:32:38 +0000
Message-Id: <1505601158-21256-1-git-send-email-ophirmu@mellanox.com>
X-Mailer: git-send-email 1.8.3.1
In-Reply-To: <0e18a611-3b35-6624-013b-59c72f71307b@6wind.com>
References: <0e18a611-3b35-6624-013b-59c72f71307b@6wind.com>
Subject: [dpdk-dev] [PATCH v3] net/tap: fix flow and port commands

This commit fixes two bugs related to tap devices.

The first bug occurs when executing the following flow rule in testpmd,
assuming the tap device has 4 rx/tx queue pairs:

  flow create 0 ingress pattern eth / end actions queue index 5 / end

The command reports success and prints "Flow rule #0 created", although
it should have failed because queue index 5 does not exist.

The second bug occurs when executing "port start all" in testpmd after a
port configuration. Assuming 1 rx/tx queue pair, an error is reported:
"Fail to start port 0".

Before this commit a fixed maximum number (16) of rx and tx queue pairs
was created on startup, and the file descriptors (fds) of each rx/tx
pair were identical. As a result, in the first bug queue index 5 existed
because the tap device was created with 16 rx and tx queue pairs
regardless of the configured number of queues. In the second bug, when
the tap device was started, the tx fd was closed before reopening it and
executing ioctl() on it. However, closing the sole fd of the device
caused the ioctl() to fail with "No such device".

This commit creates only the configured number of rx and tx queue pairs
(up to a maximum of 16) and assigns a unique fd to each queue. It was
written to solve the first bug and was found to be the right fix for the
second bug as well.
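
The first scenario can also be reproduced programmatically. The sketch
below builds the equivalent rule through the generic rte_flow API; the
function name, port id, queue index and error handling are illustrative
and are not part of this patch:

#include <rte_flow.h>

/* Illustrative only: validate a rule equivalent to the testpmd command
 * above. A QUEUE action with index 5 on a port configured with only
 * 4 rx queues is expected to be rejected once this fix is in place.
 */
static int
check_queue_action(uint16_t port_id)
{
	struct rte_flow_attr attr = { .ingress = 1 };
	struct rte_flow_item pattern[] = {
		{ .type = RTE_FLOW_ITEM_TYPE_ETH },
		{ .type = RTE_FLOW_ITEM_TYPE_END },
	};
	struct rte_flow_action_queue queue = { .index = 5 };
	struct rte_flow_action actions[] = {
		{ .type = RTE_FLOW_ACTION_TYPE_QUEUE, .conf = &queue },
		{ .type = RTE_FLOW_ACTION_TYPE_END },
	};
	struct rte_flow_error error;

	/* Returns 0 on success, a negative value when the rule is refused. */
	return rte_flow_validate(port_id, &attr, pattern, actions, &error);
}
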
Fixes: 02f96a0a82d1 ("net/tap: add TUN/TAP device PMD")
Fixes: bf7b7f437b49 ("net/tap: create netdevice during probing")
Fixes: de96fe68ae95 ("net/tap: add basic flow API patterns and actions")
Cc: stable@dpdk.org

Signed-off-by: Ophir Munk <ophirmu@mellanox.com>
---
In reply to Pascal Mazon's review, message-id:
<0e18a611-3b35-6624-013b-59c72f71307b@6wind.com>

 drivers/net/tap/rte_eth_tap.c | 139 +++++++++++++++++++++++++++++++-----------
 drivers/net/tap/rte_eth_tap.h |   2 +-
 drivers/net/tap/tap_flow.c    |   3 +-
 3 files changed, 108 insertions(+), 36 deletions(-)

diff --git a/drivers/net/tap/rte_eth_tap.c b/drivers/net/tap/rte_eth_tap.c
index 9acea83..d926d4b 100644
--- a/drivers/net/tap/rte_eth_tap.c
+++ b/drivers/net/tap/rte_eth_tap.c
@@ -603,8 +603,31 @@ enum ioctl_mode {
 }
 
 static int
-tap_dev_configure(struct rte_eth_dev *dev __rte_unused)
+tap_dev_configure(struct rte_eth_dev *dev)
 {
+	if (dev->data->nb_rx_queues > RTE_PMD_TAP_MAX_QUEUES) {
+		RTE_LOG(ERR, PMD,
+			"%s: number of rx queues %d exceeds max num of queues %d\n",
+			dev->device->name,
+			dev->data->nb_rx_queues,
+			RTE_PMD_TAP_MAX_QUEUES);
+		return -1;
+	}
+	if (dev->data->nb_tx_queues > RTE_PMD_TAP_MAX_QUEUES) {
+		RTE_LOG(ERR, PMD,
+			"%s: number of tx queues %d exceeds max num of queues %d\n",
+			dev->device->name,
+			dev->data->nb_tx_queues,
+			RTE_PMD_TAP_MAX_QUEUES);
+		return -1;
+	}
+
+	RTE_LOG(INFO, PMD, "%s: %p: TX configured queues number: %u\n",
+		dev->device->name, (void *)dev, dev->data->nb_tx_queues);
+
+	RTE_LOG(INFO, PMD, "%s: %p: RX configured queues number: %u\n",
+		dev->device->name, (void *)dev, dev->data->nb_rx_queues);
+
 	return 0;
 }
 
@@ -650,8 +673,8 @@ enum ioctl_mode {
 	dev_info->if_index = internals->if_index;
 	dev_info->max_mac_addrs = 1;
 	dev_info->max_rx_pktlen = (uint32_t)ETHER_MAX_VLAN_FRAME_LEN;
-	dev_info->max_rx_queues = internals->nb_queues;
-	dev_info->max_tx_queues = internals->nb_queues;
+	dev_info->max_rx_queues = RTE_PMD_TAP_MAX_QUEUES;
+	dev_info->max_tx_queues = RTE_PMD_TAP_MAX_QUEUES;
 	dev_info->min_rx_bufsize = 0;
 	dev_info->pci_dev = NULL;
 	dev_info->speed_capa = tap_dev_speed_capa();
@@ -673,9 +696,9 @@ enum ioctl_mode {
 	unsigned long rx_nombuf = 0, ierrors = 0;
 	const struct pmd_internals *pmd = dev->data->dev_private;
 
-	imax = (pmd->nb_queues < RTE_ETHDEV_QUEUE_STAT_CNTRS) ?
-		pmd->nb_queues : RTE_ETHDEV_QUEUE_STAT_CNTRS;
-
+	/* rx queue statistics */
+	imax = (dev->data->nb_rx_queues < RTE_ETHDEV_QUEUE_STAT_CNTRS) ?
+		dev->data->nb_rx_queues : RTE_ETHDEV_QUEUE_STAT_CNTRS;
 	for (i = 0; i < imax; i++) {
 		tap_stats->q_ipackets[i] = pmd->rxq[i].stats.ipackets;
 		tap_stats->q_ibytes[i] = pmd->rxq[i].stats.ibytes;
@@ -683,7 +706,13 @@ enum ioctl_mode {
 		rx_bytes_total += tap_stats->q_ibytes[i];
 		rx_nombuf += pmd->rxq[i].stats.rx_nombuf;
 		ierrors += pmd->rxq[i].stats.ierrors;
-
+	}
+
+	/* tx queue statistics */
+	imax = (dev->data->nb_tx_queues < RTE_ETHDEV_QUEUE_STAT_CNTRS) ?
+		dev->data->nb_tx_queues : RTE_ETHDEV_QUEUE_STAT_CNTRS;
+
+	for (i = 0; i < imax; i++) {
 		tap_stats->q_opackets[i] = pmd->txq[i].stats.opackets;
 		tap_stats->q_errors[i] = pmd->txq[i].stats.errs;
 		tap_stats->q_obytes[i] = pmd->txq[i].stats.obytes;
@@ -707,7 +736,7 @@ enum ioctl_mode {
 	int i;
 	struct pmd_internals *pmd = dev->data->dev_private;
 
-	for (i = 0; i < pmd->nb_queues; i++) {
+	for (i = 0; i < RTE_PMD_TAP_MAX_QUEUES; i++) {
 		pmd->rxq[i].stats.ipackets = 0;
 		pmd->rxq[i].stats.ibytes = 0;
 		pmd->rxq[i].stats.ierrors = 0;
@@ -729,11 +758,15 @@ enum ioctl_mode {
 	tap_flow_flush(dev, NULL);
 	tap_flow_implicit_flush(internals, NULL);
 
-	for (i = 0; i < internals->nb_queues; i++) {
-		if (internals->rxq[i].fd != -1)
+	for (i = 0; i < RTE_PMD_TAP_MAX_QUEUES; i++) {
+		if (internals->rxq[i].fd != -1) {
 			close(internals->rxq[i].fd);
-		internals->rxq[i].fd = -1;
-		internals->txq[i].fd = -1;
+			internals->rxq[i].fd = -1;
+		}
+		if (internals->txq[i].fd != -1) {
+			close(internals->txq[i].fd);
+			internals->txq[i].fd = -1;
+		}
 	}
 
 	if (internals->remote_if_index) {
@@ -887,30 +920,57 @@ enum ioctl_mode {
 static int
 tap_setup_queue(struct rte_eth_dev *dev,
 		struct pmd_internals *internals,
-		uint16_t qid)
+		uint16_t qid,
+		int is_rx)
 {
+	int *fd;
+	int *other_fd;
+	const char *dir;
 	struct pmd_internals *pmd = dev->data->dev_private;
 	struct rx_queue *rx = &internals->rxq[qid];
 	struct tx_queue *tx = &internals->txq[qid];
-	int fd = rx->fd == -1 ? tx->fd : rx->fd;
 
-	if (fd == -1) {
-		RTE_LOG(INFO, PMD, "Add queue to TAP %s for qid %d\n",
-			pmd->name, qid);
-		fd = tun_alloc(pmd);
-		if (fd < 0) {
+	if (is_rx) {
+		fd = &rx->fd;
+		other_fd = &tx->fd;
+		dir = "rx";
+	} else {
+		fd = &tx->fd;
+		other_fd = &rx->fd;
+		dir = "tx";
+	}
+	if (*fd != -1) {
+		/* fd for this queue already exists */
+		RTE_LOG(DEBUG, PMD, "%s: fd %d for %s queue qid %d exists\n",
+			pmd->name, *fd, dir, qid);
+	} else if (*other_fd != -1) {
+		/* Only other_fd exists. dup it */
+		*fd = dup(*other_fd);
+		if (*fd < 0) {
+			*fd = -1;
+			RTE_LOG(ERR, PMD, "%s: dup() failed.\n",
				pmd->name);
+			return -1;
+		}
+		RTE_LOG(DEBUG, PMD, "%s: dup fd %d for %s queue qid %d (%d)\n",
+			pmd->name, *other_fd, dir, qid, *fd);
+	} else {
+		/* Both RX and TX fds do not exist (equal -1). Create fd */
+		*fd = tun_alloc(pmd);
+		if (*fd < 0) {
+			*fd = -1; /* restore original value */
 			RTE_LOG(ERR, PMD, "%s: tun_alloc() failed.\n",
 				pmd->name);
 			return -1;
 		}
+		RTE_LOG(DEBUG, PMD, "%s: add %s queue for qid %d fd %d\n",
+			pmd->name, dir, qid, *fd);
 	}
 
-	rx->fd = fd;
-	tx->fd = fd;
 	tx->mtu = &dev->data->mtu;
 	rx->rxmode = &dev->data->dev_conf.rxmode;
 
-	return fd;
+	return *fd;
 }
 
 static int
@@ -932,10 +992,10 @@ enum ioctl_mode {
 	int fd;
 	int i;
 
-	if ((rx_queue_id >= internals->nb_queues) || !mp) {
+	if (rx_queue_id >= dev->data->nb_rx_queues || !mp) {
 		RTE_LOG(WARNING, PMD,
-			"nb_queues %d too small or mempool NULL\n",
-			internals->nb_queues);
+			"nb_rx_queues %d too small or mempool NULL\n",
+			dev->data->nb_rx_queues);
 
 		return -1;
 	}
@@ -954,7 +1014,7 @@ enum ioctl_mode {
 	rxq->iovecs = iovecs;
 
 	dev->data->rx_queues[rx_queue_id] = rxq;
-	fd = tap_setup_queue(dev, internals, rx_queue_id);
+	fd = tap_setup_queue(dev, internals, rx_queue_id, 1);
 	if (fd == -1) {
 		ret = fd;
 		goto error;
@@ -1002,11 +1062,11 @@ enum ioctl_mode {
 	struct pmd_internals *internals = dev->data->dev_private;
 	int ret;
 
-	if (tx_queue_id >= internals->nb_queues)
+	if (tx_queue_id >= dev->data->nb_tx_queues)
 		return -1;
 
 	dev->data->tx_queues[tx_queue_id] = &internals->txq[tx_queue_id];
-	ret = tap_setup_queue(dev, internals, tx_queue_id);
+	ret = tap_setup_queue(dev, internals, tx_queue_id, 0);
 	if (ret == -1)
 		return -1;
 
@@ -1166,7 +1226,6 @@ enum ioctl_mode {
 	.filter_ctrl            = tap_dev_filter_ctrl,
 };
 
-
 static int
 eth_dev_tap_create(struct rte_vdev_device *vdev, char *tap_name,
 		   char *remote_iface, int fixed_mac_type)
@@ -1193,8 +1252,8 @@ enum ioctl_mode {
 	}
 
 	pmd = dev->data->dev_private;
+	pmd->dev = dev;
 	snprintf(pmd->name, sizeof(pmd->name), "%s", tap_name);
-	pmd->nb_queues = RTE_PMD_TAP_MAX_QUEUES;
 
 	pmd->ioctl_sock = socket(AF_INET, SOCK_DGRAM, 0);
 	if (pmd->ioctl_sock == -1) {
@@ -1212,8 +1271,9 @@ enum ioctl_mode {
 
 	data->dev_link = pmd_link;
 	data->mac_addrs = &pmd->eth_addr;
-	data->nb_rx_queues = pmd->nb_queues;
-	data->nb_tx_queues = pmd->nb_queues;
+	/* Set the number of RX and TX queues */
+	data->nb_rx_queues = 0;
+	data->nb_tx_queues = 0;
 
 	dev->data = data;
 	dev->dev_ops = &ops;
@@ -1241,7 +1301,11 @@ enum ioctl_mode {
 	}
 
 	/* Immediately create the netdevice (this will create the 1st queue). */
-	if (tap_setup_queue(dev, pmd, 0) == -1)
+	/* rx queue */
+	if (tap_setup_queue(dev, pmd, 0, 1) == -1)
+		goto error_exit;
+	/* tx queue */
+	if (tap_setup_queue(dev, pmd, 0, 0) == -1)
 		goto error_exit;
 
 	ifr.ifr_mtu = dev->data->mtu;
@@ -1515,9 +1579,16 @@ enum ioctl_mode {
 		tap_flow_implicit_flush(internals, NULL);
 		nl_final(internals->nlsk_fd);
 	}
-	for (i = 0; i < internals->nb_queues; i++)
-		if (internals->rxq[i].fd != -1)
+	for (i = 0; i < RTE_PMD_TAP_MAX_QUEUES; i++) {
+		if (internals->rxq[i].fd != -1) {
 			close(internals->rxq[i].fd);
+			internals->rxq[i].fd = -1;
+		}
+		if (internals->txq[i].fd != -1) {
+			close(internals->txq[i].fd);
+			internals->txq[i].fd = -1;
+		}
+	}
 
 	close(internals->ioctl_sock);
 	rte_free(eth_dev->data->dev_private);
diff --git a/drivers/net/tap/rte_eth_tap.h b/drivers/net/tap/rte_eth_tap.h
index 928a045..829f32f 100644
--- a/drivers/net/tap/rte_eth_tap.h
+++ b/drivers/net/tap/rte_eth_tap.h
@@ -80,9 +80,9 @@ struct tx_queue {
 };
 
 struct pmd_internals {
+	struct rte_eth_dev *dev;          /* Ethernet device. */
 	char remote_iface[RTE_ETH_NAME_MAX_LEN]; /* Remote netdevice name */
 	char name[RTE_ETH_NAME_MAX_LEN];  /* Internal Tap device name */
-	uint16_t nb_queues;               /* Number of queues supported */
 	struct ether_addr eth_addr;       /* Mac address of the device port */
 	struct ifreq remote_initial_flags;/* Remote netdevice flags on init */
 	int remote_if_index;              /* remote netdevice IF_INDEX */
diff --git a/drivers/net/tap/tap_flow.c b/drivers/net/tap/tap_flow.c
index 41f7345..eefa868 100644
--- a/drivers/net/tap/tap_flow.c
+++ b/drivers/net/tap/tap_flow.c
@@ -1092,7 +1092,8 @@ struct tap_flow_items {
 			if (action)
 				goto exit_action_not_supported;
 			action = 1;
-			if (!queue || (queue->index >= pmd->nb_queues))
+			if (!queue ||
+			    (queue->index > pmd->dev->data->nb_rx_queues - 1))
 				goto exit_action_not_supported;
 			if (flow)
 				err = add_action_skbedit(flow, queue->index);
-- 
1.8.3.1
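
As a usage note, the second scenario corresponds to a plain port bring-up
with a single rx/tx queue pair. A minimal sketch of that sequence through
the ethdev API is shown below; the function name, mempool, descriptor
counts and port id are illustrative and are not part of this patch:

#include <rte_ethdev.h>
#include <rte_lcore.h>
#include <rte_mempool.h>

/* Illustrative only: configure one rx/tx queue pair and start the port,
 * which is what "port config"/"port start all" does in testpmd. Before
 * this patch, the start step failed on a tap port configured this way
 * because the queue's sole fd had been closed ("No such device").
 */
static int
bring_up_port(uint16_t port_id, struct rte_mempool *mb_pool)
{
	struct rte_eth_conf conf = { 0 };
	int ret;

	ret = rte_eth_dev_configure(port_id, 1, 1, &conf);
	if (ret < 0)
		return ret;
	ret = rte_eth_rx_queue_setup(port_id, 0, 128, rte_socket_id(),
				     NULL, mb_pool);
	if (ret < 0)
		return ret;
	ret = rte_eth_tx_queue_setup(port_id, 0, 128, rte_socket_id(), NULL);
	if (ret < 0)
		return ret;
	return rte_eth_dev_start(port_id);
}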