Date: Mon, 6 Jun 2022 10:10:24 -0700
From: Stephen Hemminger
To: Ido Goshen
Cc: ferruh.yigit@xilinx.com, dev@dpdk.org
Subject: Re: [PATCH v4] pcap: support MTU set
Message-ID: <20220606101024.2ac04af4@hermes.local>
In-Reply-To: <20220606162147.57218-1-ido@cgstowernetworks.com>
References: <20220317174347.110909-1-ido@cgstowernetworks.com>
 <20220606162147.57218-1-ido@cgstowernetworks.com>

On Mon, 6 Jun 2022 19:21:47 +0300
Ido Goshen wrote:

> Support rte_eth_dev_set_mtu by pcap vdevs
> Enforce mtu on rx/tx
>
> Bugzilla ID: 961

This is not really a bug, it is an enhancement specific to your test
setup. It should not be backported to stable.
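To be explicit about what this enables: nothing changes unless the
application opts in by calling rte_eth_dev_set_mtu() on the pcap port,
roughly like this (sketch only; the port id and the MTU value are made
up for illustration):

	uint16_t pcap_port = 0;	/* port id of the net_pcap vdev */
	int ret;

	/* opt in to MTU enforcement; 1400 is an arbitrary example */
	ret = rte_eth_dev_set_mtu(pcap_port, 1400);
	if (ret != 0)
		printf("rte_eth_dev_set_mtu failed: %d\n", ret);
	/* from here on, frames longer than the MTU are dropped on rx/tx */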
Since it is a change in behavior, it might be better to add a vdev
argument for this rather than overloading the meaning of MTU. Also,
this does not behave the same as virtio or hardware drivers.

> Signed-off-by: Ido Goshen
>
> ---
> v4:
> 1. Add release notes comment
> 2. Access pmd internals via queue struct
> 3. eth_mtu_set code convention fixes
>
> v3:
> Preserve pcap behavior to support max size packets by default
> alternative to v2 in order to limit the code change to pcap only and
> avoid abi change.
> Enforce mtu only in case rte_eth_dev_set_mtu was explicitly called.
>
> v2:
> Preserve pcap behavior to support max size packets by default.
> ---
>  doc/guides/rel_notes/release_22_07.rst |  3 ++
>  drivers/net/pcap/pcap_ethdev.c         | 38 ++++++++++++++++++++++++++
>  2 files changed, 41 insertions(+)
>
> diff --git a/doc/guides/rel_notes/release_22_07.rst b/doc/guides/rel_notes/release_22_07.rst
> index 0ed4f92820..717191d498 100644
> --- a/doc/guides/rel_notes/release_22_07.rst
> +++ b/doc/guides/rel_notes/release_22_07.rst
> @@ -95,6 +95,9 @@ New Features
>    * Added AH mode support in lookaside protocol (IPsec) for CN9K & CN10K.
>    * Added AES-GMAC support in lookaside protocol (IPsec) for CN9K & CN10K.
>
> +* **Updated pcap driver.**
> +
> +  * Added support for MTU via ``rte_eth_dev_set_mtu``
>
>  Removed Items
>  -------------
> diff --git a/drivers/net/pcap/pcap_ethdev.c b/drivers/net/pcap/pcap_ethdev.c
> index ec29fd6bc5..db1958f20f 100644
> --- a/drivers/net/pcap/pcap_ethdev.c
> +++ b/drivers/net/pcap/pcap_ethdev.c
> @@ -74,6 +74,7 @@ struct pcap_rx_queue {
>
>  	/* Contains pre-generated packets to be looped through */
>  	struct rte_ring *pkts;
> +	struct pmd_internals *internals;
>  };
>
>  struct pcap_tx_queue {
> @@ -82,6 +83,7 @@ struct pcap_tx_queue {
>  	struct queue_stat tx_stat;
>  	char name[PATH_MAX];
>  	char type[ETH_PCAP_ARG_MAXLEN];
> +	struct pmd_internals *internals;
>  };
>
>  struct pmd_internals {
> @@ -93,6 +95,7 @@ struct pmd_internals {
>  	int single_iface;
>  	int phy_mac;
>  	unsigned int infinite_rx;
> +	uint16_t mtu;
>  };

The mtu is already in dev->data->mtu, so why copy it?

>  struct pmd_process_private {
> @@ -278,6 +281,7 @@ eth_pcap_rx(void *queue, struct rte_mbuf **bufs, uint16_t nb_pkts)
>  	const u_char *packet;
>  	struct rte_mbuf *mbuf;
>  	struct pcap_rx_queue *pcap_q = queue;
> +	struct pmd_internals *internals = pcap_q->internals;
>  	uint16_t num_rx = 0;
>  	uint32_t rx_bytes = 0;
>  	pcap_t *pcap;
> @@ -303,6 +307,12 @@ eth_pcap_rx(void *queue, struct rte_mbuf **bufs, uint16_t nb_pkts)
>  			break;
>  		}
>
> +		if (unlikely(header.caplen > internals->mtu)) {
> +			pcap_q->rx_stat.err_pkts++;
> +			rte_pktmbuf_free(mbuf);
> +			break;
> +		}

This doesn't account for the VLAN header.

>  		if (header.caplen <= rte_pktmbuf_tailroom(mbuf)) {
>  			/* pcap packet will fit in the mbuf, can copy it */
>  			rte_memcpy(rte_pktmbuf_mtod(mbuf, void *), packet,
> @@ -378,6 +388,7 @@ eth_pcap_tx_dumper(void *queue, struct rte_mbuf **bufs, uint16_t nb_pkts)
>  	struct rte_mbuf *mbuf;
>  	struct pmd_process_private *pp;
>  	struct pcap_tx_queue *dumper_q = queue;
> +	struct pmd_internals *internals = dumper_q->internals;
>  	uint16_t num_tx = 0;
>  	uint32_t tx_bytes = 0;
>  	struct pcap_pkthdr header;
> @@ -396,6 +407,12 @@ eth_pcap_tx_dumper(void *queue, struct rte_mbuf **bufs, uint16_t nb_pkts)
>  	for (i = 0; i < nb_pkts; i++) {
>  		mbuf = bufs[i];
>  		len = caplen = rte_pktmbuf_pkt_len(mbuf);
> +
> +		if (unlikely(len > internals->mtu)) {
> +			rte_pktmbuf_free(mbuf);
> +			continue;
> +		}

There needs to be a per-queue counter for any and all drops.
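Something along these lines in eth_pcap_tx_dumper() would do it
(untested sketch; it reuses the existing err_pkts counter in struct
queue_stat, though a dedicated drop counter may be cleaner):

	if (unlikely(len > internals->mtu)) {
		dumper_q->tx_stat.err_pkts++;	/* count the drop per queue */
		rte_pktmbuf_free(mbuf);
		continue;
	}

and the equivalent in eth_pcap_tx() via tx_queue->tx_stat.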
> +
>  		if (unlikely(!rte_pktmbuf_is_contiguous(mbuf) &&
>  				len > sizeof(temp_data))) {
>  			caplen = sizeof(temp_data);
> @@ -464,6 +481,7 @@ eth_pcap_tx(void *queue, struct rte_mbuf **bufs, uint16_t nb_pkts)
>  	struct rte_mbuf *mbuf;
>  	struct pmd_process_private *pp;
>  	struct pcap_tx_queue *tx_queue = queue;
> +	struct pmd_internals *internals = tx_queue->internals;
>  	uint16_t num_tx = 0;
>  	uint32_t tx_bytes = 0;
>  	pcap_t *pcap;
> @@ -479,6 +497,12 @@ eth_pcap_tx(void *queue, struct rte_mbuf **bufs, uint16_t nb_pkts)
>  	for (i = 0; i < nb_pkts; i++) {
>  		mbuf = bufs[i];
>  		len = rte_pktmbuf_pkt_len(mbuf);
> +
> +		if (unlikely(len > internals->mtu)) {
> +			rte_pktmbuf_free(mbuf);
> +			continue;
> +		}
> +
>  		if (unlikely(!rte_pktmbuf_is_contiguous(mbuf) &&
>  				len > sizeof(temp_data))) {
>  			PMD_LOG(ERR,
> @@ -807,6 +831,16 @@ eth_stats_reset(struct rte_eth_dev *dev)
>  	return 0;
>  }
>
> +static int
> +eth_mtu_set(struct rte_eth_dev *dev, uint16_t mtu)
> +{
> +	struct pmd_internals *internals = dev->data->dev_private;
> +
> +	PMD_LOG(INFO, "MTU set %s %u\n", dev->device->name, mtu);
> +	internals->mtu = mtu;
> +	return 0;
> +}

If you drop internals->mtu (it is redundant), then this just becomes a
stub (i.e. return 0).

> +
>  static inline void
>  infinite_rx_ring_free(struct rte_ring *pkts)
>  {
> @@ -878,6 +912,7 @@ eth_rx_queue_setup(struct rte_eth_dev *dev,
>  	pcap_q->mb_pool = mb_pool;
>  	pcap_q->port_id = dev->data->port_id;
>  	pcap_q->queue_id = rx_queue_id;
> +	pcap_q->internals = internals;
>  	dev->data->rx_queues[rx_queue_id] = pcap_q;
>
>  	if (internals->infinite_rx) {
> @@ -952,6 +987,7 @@ eth_tx_queue_setup(struct rte_eth_dev *dev,
>
>  	pcap_q->port_id = dev->data->port_id;
>  	pcap_q->queue_id = tx_queue_id;
> +	pcap_q->internals = internals;
>  	dev->data->tx_queues[tx_queue_id] = pcap_q;
>
>  	return 0;
> @@ -1004,6 +1040,7 @@ static const struct eth_dev_ops ops = {
>  	.link_update = eth_link_update,
>  	.stats_get = eth_stats_get,
>  	.stats_reset = eth_stats_reset,
> +	.mtu_set = eth_mtu_set,
>  };
>
>  static int
> @@ -1233,6 +1270,7 @@ pmd_init_internals(struct rte_vdev_device *vdev,
>  		.addr_bytes = { 0x02, 0x70, 0x63, 0x61, 0x70, iface_idx++ }
>  	};
>  	(*internals)->phy_mac = 0;
> +	(*internals)->mtu = RTE_ETH_PCAP_SNAPLEN;

Use dev->data->mtu, not an internal copy.

>  	data = (*eth_dev)->data;
>  	data->nb_rx_queues = (uint16_t)nb_rx_queues;
>  	data->nb_tx_queues = (uint16_t)nb_tx_queues;
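To tie the datapath comments together, the rx check could read the
ethdev MTU directly and allow for L2 overhead. Rough, untested sketch;
the dev_data pointer stored in the queue is my invention (instead of the
internals pointer you added), and whether to allow one or two VLAN tags
is up to you:

	/* in eth_rx_queue_setup(), instead of pcap_q->internals = internals; */
	pcap_q->dev_data = dev->data;

	/* in eth_pcap_rx(): compare frame length to MTU plus L2 overhead */
	if (unlikely(header.caplen > pcap_q->dev_data->mtu +
			RTE_ETHER_HDR_LEN + RTE_VLAN_HLEN)) {
		pcap_q->rx_stat.err_pkts++;
		rte_pktmbuf_free(mbuf);
		break;
	}

Then eth_mtu_set() can simply return 0; rte_eth_dev_set_mtu() already
stores the new value in dev->data->mtu when the driver op succeeds.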