From: Thomas Monjalon
To: Viacheslav Ovsiienko, Andrew Rybchenko
Cc: dev@dpdk.org, stephen@networkplumber.org, ferruh.yigit@intel.com,
 olivier.matz@6wind.com, jerinjacobk@gmail.com, maxime.coquelin@redhat.com,
 david.marchand@redhat.com
Date: Mon, 12 Oct 2020 19:03:00 +0200
Message-ID: <228932926.FOKgLshO0b@thomas>
In-Reply-To: <6a04882a-4c4c-b515-9499-2ef7b20e94b2@oktetlabs.ru>
References: <1602519585-5194-2-git-send-email-viacheslavo@nvidia.com>
 <6a04882a-4c4c-b515-9499-2ef7b20e94b2@oktetlabs.ru>
Subject: Re: [dpdk-dev] [PATCH v3 1/9] ethdev: introduce Rx buffer split
12/10/2020 18:38, Andrew Rybchenko:
> On 10/12/20 7:19 PM, Viacheslav Ovsiienko wrote:
> >  int
> > +rte_eth_rxseg_queue_setup(uint16_t port_id, uint16_t rx_queue_id,
> > +			  uint16_t nb_rx_desc, unsigned int socket_id,
> > +			  const struct rte_eth_rxconf *rx_conf,
> > +			  const struct rte_eth_rxseg *rx_seg, uint16_t n_seg)
> > +{
> > +	int ret;
> > +	uint16_t seg_idx;
> > +	uint32_t mbp_buf_size;
> > +	struct rte_eth_dev *dev;
> > +	struct rte_eth_dev_info dev_info;
> > +	struct rte_eth_rxconf local_conf;
> > +	void **rxq;
> > +
> > +	RTE_ETH_VALID_PORTID_OR_ERR_RET(port_id, -EINVAL);
> > +
> > +	dev = &rte_eth_devices[port_id];
> > +	if (rx_queue_id >= dev->data->nb_rx_queues) {
> > +		RTE_ETHDEV_LOG(ERR, "Invalid RX queue_id=%u\n", rx_queue_id);
> > +		return -EINVAL;
> > +	}
> > +
> > +	if (rx_seg == NULL) {
> > +		RTE_ETHDEV_LOG(ERR, "Invalid null description pointer\n");
> > +		return -EINVAL;
> > +	}
> > +
> > +	if (n_seg == 0) {
> > +		RTE_ETHDEV_LOG(ERR, "Invalid zero description number\n");
> > +		return -EINVAL;
> > +	}
> > +
> > +	RTE_FUNC_PTR_OR_ERR_RET(*dev->dev_ops->rxseg_queue_setup, -ENOTSUP);
> > +
> > +	/*
> > +	 * Check the size of the mbuf data buffer.
> > +	 * This value must be provided in the private data of the memory pool.
> > +	 * First check that the memory pool has a valid private data.
> > +	 */
> > +	ret = rte_eth_dev_info_get(port_id, &dev_info);
> > +	if (ret != 0)
> > +		return ret;
> > +
> > +	for (seg_idx = 0; seg_idx < n_seg; seg_idx++) {
> > +		struct rte_mempool *mp = rx_seg[seg_idx].mp;
> > +
> > +		if (mp->private_data_size <
> > +				sizeof(struct rte_pktmbuf_pool_private)) {
> > +			RTE_ETHDEV_LOG(ERR, "%s private_data_size %d < %d\n",
> > +				mp->name, (int)mp->private_data_size,
> > +				(int)sizeof(struct rte_pktmbuf_pool_private));
> > +			return -ENOSPC;
> > +		}
> > +
> > +		mbp_buf_size = rte_pktmbuf_data_room_size(mp);
> > +		if (mbp_buf_size < rx_seg[seg_idx].length +
> > +				   rx_seg[seg_idx].offset +
> > +				   (seg_idx ? 0 :
> > +				   (uint32_t)RTE_PKTMBUF_HEADROOM)) {
> > +			RTE_ETHDEV_LOG(ERR,
> > +				"%s mbuf_data_room_size %d < %d"
> > +				" (segment length=%d + segment offset=%d)\n",
> > +				mp->name, (int)mbp_buf_size,
> > +				(int)(rx_seg[seg_idx].length +
> > +				      rx_seg[seg_idx].offset),
> > +				(int)rx_seg[seg_idx].length,
> > +				(int)rx_seg[seg_idx].offset);
> > +			return -EINVAL;
> > +		}
> > +	}
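
For list readers trying to picture the usage: a minimal caller sketch of
the proposed API could look as follows. The rte_eth_rxseg fields (mp,
length, offset) are taken from this patch; the pool names, sizes and the
helper function are invented for illustration only.

/*
 * Illustrative sketch only: splits every packet into a 128-byte header
 * segment and a payload segment, each drawn from its own mempool.
 * Segment 0 must additionally leave room for RTE_PKTMBUF_HEADROOM,
 * per the check in the loop above.
 */
#include <errno.h>
#include <rte_ethdev.h>
#include <rte_mbuf.h>

static int
setup_split_rxq(uint16_t port_id, uint16_t queue_id, unsigned int socket_id)
{
	struct rte_eth_rxseg seg[2];
	struct rte_mempool *hdr_pool, *pay_pool;

	/* small mbufs for headers: 128 B of data room plus headroom */
	hdr_pool = rte_pktmbuf_pool_create("hdr_pool", 8192, 256, 0,
					   128 + RTE_PKTMBUF_HEADROOM,
					   socket_id);
	/* large mbufs for payload; no headroom needed past segment 0 */
	pay_pool = rte_pktmbuf_pool_create("pay_pool", 8192, 256, 0,
					   2048, socket_id);
	if (hdr_pool == NULL || pay_pool == NULL)
		return -ENOMEM;

	seg[0].mp = hdr_pool;	/* first 128 B of every packet */
	seg[0].length = 128;
	seg[0].offset = 0;
	seg[1].mp = pay_pool;	/* remainder of the packet */
	seg[1].length = 2048;
	seg[1].offset = 0;

	/* NULL rx_conf selects the driver's default_rxconf */
	return rte_eth_rxseg_queue_setup(port_id, queue_id, 512, socket_id,
					 NULL, seg, 2);
}
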
> > +
> > +	/* Use default specified by driver, if nb_rx_desc is zero */
> > +	if (nb_rx_desc == 0) {
> > +		nb_rx_desc = dev_info.default_rxportconf.ring_size;
> > +		/* If driver default is also zero, fall back on EAL default */
> > +		if (nb_rx_desc == 0)
> > +			nb_rx_desc = RTE_ETH_DEV_FALLBACK_RX_RINGSIZE;
> > +	}
> > +
> > +	if (nb_rx_desc > dev_info.rx_desc_lim.nb_max ||
> > +	    nb_rx_desc < dev_info.rx_desc_lim.nb_min ||
> > +	    nb_rx_desc % dev_info.rx_desc_lim.nb_align != 0) {
> > +		RTE_ETHDEV_LOG(ERR,
> > +			"Invalid value for nb_rx_desc(=%hu), should be: "
> > +			"<= %hu, >= %hu, and a product of %hu\n",
> > +			nb_rx_desc, dev_info.rx_desc_lim.nb_max,
> > +			dev_info.rx_desc_lim.nb_min,
> > +			dev_info.rx_desc_lim.nb_align);
> > +		return -EINVAL;
> > +	}
> > +
> > +	if (dev->data->dev_started &&
> > +	    !(dev_info.dev_capa &
> > +	      RTE_ETH_DEV_CAPA_RUNTIME_RX_QUEUE_SETUP))
> > +		return -EBUSY;
> > +
> > +	if (dev->data->dev_started &&
> > +	    (dev->data->rx_queue_state[rx_queue_id] !=
> > +	     RTE_ETH_QUEUE_STATE_STOPPED))
> > +		return -EBUSY;
> > +
> > +	rxq = dev->data->rx_queues;
> > +	if (rxq[rx_queue_id]) {
> > +		RTE_FUNC_PTR_OR_ERR_RET(*dev->dev_ops->rx_queue_release,
> > +					-ENOTSUP);
> > +		(*dev->dev_ops->rx_queue_release)(rxq[rx_queue_id]);
> > +		rxq[rx_queue_id] = NULL;
> > +	}
> > +
> > +	if (rx_conf == NULL)
> > +		rx_conf = &dev_info.default_rxconf;
> > +
> > +	local_conf = *rx_conf;
> > +
> > +	/*
> > +	 * If an offloading has already been enabled in
> > +	 * rte_eth_dev_configure(), it has been enabled on all queues,
> > +	 * so there is no need to enable it in this queue again.
> > +	 * The local_conf.offloads input to underlying PMD only carries
> > +	 * those offloadings which are only enabled on this queue and
> > +	 * not enabled on all queues.
> > +	 */
> > +	local_conf.offloads &= ~dev->data->dev_conf.rxmode.offloads;
> > +
> > +	/*
> > +	 * New added offloadings for this queue are those not enabled in
> > +	 * rte_eth_dev_configure() and they must be per-queue type.
> > +	 * A pure per-port offloading can't be enabled on a queue while
> > +	 * disabled on another queue. A pure per-port offloading can't
> > +	 * be enabled for any queue as new added one if it hasn't been
> > +	 * enabled in rte_eth_dev_configure().
> > +	 */
> > +	if ((local_conf.offloads & dev_info.rx_queue_offload_capa) !=
> > +	     local_conf.offloads) {
> > +		RTE_ETHDEV_LOG(ERR,
> > +			"Ethdev port_id=%d rx_queue_id=%d, new added offloads"
> > +			" 0x%"PRIx64" must be within per-queue offload"
> > +			" capabilities 0x%"PRIx64" in %s()\n",
> > +			port_id, rx_queue_id, local_conf.offloads,
> > +			dev_info.rx_queue_offload_capa,
> > +			__func__);
> > +		return -EINVAL;
> > +	}
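
A side note on the offload masking quoted above, since it confuses people
regularly: offloads requested in rte_eth_dev_configure() apply to every
queue of the port, and the queue setup call only adds queue-local ones on
top. A rough sketch with arbitrary flag choices (a real per-queue offload
must be listed in dev_info.rx_queue_offload_capa by the driver):

#include <rte_ethdev.h>

static int
configure_offloads(uint16_t port_id, struct rte_mempool *mb_pool,
		   unsigned int socket_id)
{
	struct rte_eth_conf port_conf = {
		.rxmode = {
			/* enabled on all Rx queues of the port */
			.offloads = DEV_RX_OFFLOAD_CHECKSUM,
		},
	};
	struct rte_eth_dev_info dev_info;
	struct rte_eth_rxconf rxq_conf;
	int ret;

	ret = rte_eth_dev_configure(port_id, 1, 1, &port_conf);
	if (ret != 0)
		return ret;

	ret = rte_eth_dev_info_get(port_id, &dev_info);
	if (ret != 0)
		return ret;

	rxq_conf = dev_info.default_rxconf;
	/* additionally enabled on this queue only */
	rxq_conf.offloads |= DEV_RX_OFFLOAD_SCATTER;

	return rte_eth_rx_queue_setup(port_id, 0, 512, socket_id,
				      &rxq_conf, mb_pool);
}
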
> > + */ > > + if (local_conf.offloads & DEV_RX_OFFLOAD_TCP_LRO) { > > + if (dev->data->dev_conf.rxmode.max_lro_pkt_size == 0) > > + dev->data->dev_conf.rxmode.max_lro_pkt_size = > > + dev->data->dev_conf.rxmode.max_rx_pkt_len; > > + int ret = check_lro_pkt_size(port_id, > > + dev->data->dev_conf.rxmode.max_lro_pkt_size, > > + dev->data->dev_conf.rxmode.max_rx_pkt_len, > > + dev_info.max_lro_pkt_size); > > + if (ret != 0) > > + return ret; > > + } > > > > IMO It is not acceptable to duplication so much code. > It is simply unmaintainable. > > NACK Can it be solved by making rte_eth_rx_queue_setup() a wrapper on top of this new rte_eth_rxseg_queue_setup() ?