From mboxrd@z Thu Jan 1 00:00:00 1970
From: Andrew Rybchenko
Organization: OKTET Labs
To: Thomas Monjalon, Viacheslav Ovsiienko
Cc: dev@dpdk.org, stephen@networkplumber.org, ferruh.yigit@intel.com,
 olivier.matz@6wind.com, jerinjacobk@gmail.com, maxime.coquelin@redhat.com,
 david.marchand@redhat.com
Subject: Re: [dpdk-dev] [PATCH v3 1/9] ethdev: introduce Rx buffer split
Date: Mon, 12 Oct 2020 20:11:06 +0300
References: <1602519585-5194-2-git-send-email-viacheslavo@nvidia.com>
 <6a04882a-4c4c-b515-9499-2ef7b20e94b2@oktetlabs.ru>
 <228932926.FOKgLshO0b@thomas>
In-Reply-To: <228932926.FOKgLshO0b@thomas>

On 10/12/20 8:03 PM, Thomas Monjalon wrote:
> 12/10/2020 18:38, Andrew Rybchenko:
>> On 10/12/20 7:19 PM, Viacheslav Ovsiienko wrote:
>>> int
>>> +rte_eth_rxseg_queue_setup(uint16_t port_id, uint16_t rx_queue_id,
>>> +			  uint16_t nb_rx_desc, unsigned int socket_id,
>>> +			  const struct rte_eth_rxconf *rx_conf,
>>> +			  const struct rte_eth_rxseg *rx_seg, uint16_t n_seg)
>>> +{
>>> +	int ret;
>>> +	uint16_t seg_idx;
>>> +	uint32_t mbp_buf_size;
>>
>>
>>
>>> +	struct rte_eth_dev *dev;
>>> +	struct rte_eth_dev_info dev_info;
>>> +	struct rte_eth_rxconf local_conf;
>>> +	void **rxq;
>>> +
>>> +	RTE_ETH_VALID_PORTID_OR_ERR_RET(port_id, -EINVAL);
>>> +
>>> +	dev = &rte_eth_devices[port_id];
>>> +	if (rx_queue_id >= dev->data->nb_rx_queues) {
>>> +		RTE_ETHDEV_LOG(ERR, "Invalid RX queue_id=%u\n", rx_queue_id);
>>> +		return -EINVAL;
>>> +	}
>>
>>
>>
>>> +
>>> +	if (rx_seg == NULL) {
>>> +		RTE_ETHDEV_LOG(ERR, "Invalid null description pointer\n");
>>> +		return -EINVAL;
>>> +	}
>>> +
>>> +	if (n_seg == 0) {
>>> +		RTE_ETHDEV_LOG(ERR, "Invalid zero description number\n");
>>> +		return -EINVAL;
>>> +	}
>>> +
>>> +	RTE_FUNC_PTR_OR_ERR_RET(*dev->dev_ops->rxseg_queue_setup, -ENOTSUP);
>>> +
>>
>>
>>
>>> +	/*
>>> +	 * Check the size of the mbuf data buffer.
>>> +	 * This value must be provided in the private data of the memory pool.
>>> +	 * First check that the memory pool has a valid private data.
>>> +	 */
>>> +	ret = rte_eth_dev_info_get(port_id, &dev_info);
>>> +	if (ret != 0)
>>> +		return ret;
>>
>>
>>
>>> +
>>> +	for (seg_idx = 0; seg_idx < n_seg; seg_idx++) {
>>> +		struct rte_mempool *mp = rx_seg[seg_idx].mp;
>>> +
>>> +		if (mp->private_data_size <
>>> +				sizeof(struct rte_pktmbuf_pool_private)) {
>>> +			RTE_ETHDEV_LOG(ERR, "%s private_data_size %d < %d\n",
>>> +				mp->name, (int)mp->private_data_size,
>>> +				(int)sizeof(struct rte_pktmbuf_pool_private));
>>> +			return -ENOSPC;
>>> +		}
>>> +
>>> +		mbp_buf_size = rte_pktmbuf_data_room_size(mp);
>>> +		if (mbp_buf_size < rx_seg[seg_idx].length +
>>> +				   rx_seg[seg_idx].offset +
>>> +				   (seg_idx ? 0 :
>>> +				    (uint32_t)RTE_PKTMBUF_HEADROOM)) {
>>> +			RTE_ETHDEV_LOG(ERR,
>>> +				"%s mbuf_data_room_size %d < %d"
>>> +				" (segment length=%d + segment offset=%d)\n",
>>> +				mp->name, (int)mbp_buf_size,
>>> +				(int)(rx_seg[seg_idx].length +
>>> +				      rx_seg[seg_idx].offset),
>>> +				(int)rx_seg[seg_idx].length,
>>> +				(int)rx_seg[seg_idx].offset);
>>> +			return -EINVAL;
>>> +		}
>>> +	}
>>> +
>>
>>
>>
>>> +	/* Use default specified by driver, if nb_rx_desc is zero */
>>> +	if (nb_rx_desc == 0) {
>>> +		nb_rx_desc = dev_info.default_rxportconf.ring_size;
>>> +		/* If driver default is also zero, fall back on EAL default */
>>> +		if (nb_rx_desc == 0)
>>> +			nb_rx_desc = RTE_ETH_DEV_FALLBACK_RX_RINGSIZE;
>>> +	}
>>> +
>>> +	if (nb_rx_desc > dev_info.rx_desc_lim.nb_max ||
>>> +	    nb_rx_desc < dev_info.rx_desc_lim.nb_min ||
>>> +	    nb_rx_desc % dev_info.rx_desc_lim.nb_align != 0) {
>>> +
>>> +		RTE_ETHDEV_LOG(ERR,
>>> +			"Invalid value for nb_rx_desc(=%hu), should be: "
>>> +			"<= %hu, >= %hu, and a product of %hu\n",
>>> +			nb_rx_desc, dev_info.rx_desc_lim.nb_max,
>>> +			dev_info.rx_desc_lim.nb_min,
>>> +			dev_info.rx_desc_lim.nb_align);
>>> +		return -EINVAL;
>>> +	}
>>> +
>>> +	if (dev->data->dev_started &&
>>> +		!(dev_info.dev_capa &
>>> +			RTE_ETH_DEV_CAPA_RUNTIME_RX_QUEUE_SETUP))
>>> +		return -EBUSY;
>>> +
>>> +	if (dev->data->dev_started &&
>>> +		(dev->data->rx_queue_state[rx_queue_id] !=
>>> +			RTE_ETH_QUEUE_STATE_STOPPED))
>>> +		return -EBUSY;
>>> +
>>> +	rxq = dev->data->rx_queues;
>>> +	if (rxq[rx_queue_id]) {
>>> +		RTE_FUNC_PTR_OR_ERR_RET(*dev->dev_ops->rx_queue_release,
>>> +					-ENOTSUP);
>>> +		(*dev->dev_ops->rx_queue_release)(rxq[rx_queue_id]);
>>> +		rxq[rx_queue_id] = NULL;
>>> +	}
>>> +
>>> +	if (rx_conf == NULL)
>>> +		rx_conf = &dev_info.default_rxconf;
>>> +
>>> +	local_conf = *rx_conf;
>>> +
>>> +	/*
>>> +	 * If an offloading has already been enabled in
>>> +	 * rte_eth_dev_configure(), it has been enabled on all queues,
>>> +	 * so there is no need to enable it in this queue again.
>>> +	 * The local_conf.offloads input to underlying PMD only carries
>>> +	 * those offloadings which are only enabled on this queue and
>>> +	 * not enabled on all queues.
>>> +	 */
>>> +	local_conf.offloads &= ~dev->data->dev_conf.rxmode.offloads;
>>> +
>>> +	/*
>>> +	 * New added offloadings for this queue are those not enabled in
>>> +	 * rte_eth_dev_configure() and they must be per-queue type.
>>> +	 * A pure per-port offloading can't be enabled on a queue while
>>> +	 * disabled on another queue. A pure per-port offloading can't
>>> +	 * be enabled for any queue as new added one if it hasn't been
>>> +	 * enabled in rte_eth_dev_configure().
>>> +	 */
>>> +	if ((local_conf.offloads & dev_info.rx_queue_offload_capa) !=
>>> +	     local_conf.offloads) {
>>> +		RTE_ETHDEV_LOG(ERR,
>>> +			"Ethdev port_id=%d rx_queue_id=%d, new added offloads"
>>> +			" 0x%"PRIx64" must be within per-queue offload"
>>> +			" capabilities 0x%"PRIx64" in %s()\n",
>>> +			port_id, rx_queue_id, local_conf.offloads,
>>> +			dev_info.rx_queue_offload_capa,
>>> +			__func__);
>>> +		return -EINVAL;
>>> +	}
>>> +
>>> +	/*
>>> +	 * If LRO is enabled, check that the maximum aggregated packet
>>> +	 * size is supported by the configured device.
>>> +	 */
>>> +	if (local_conf.offloads & DEV_RX_OFFLOAD_TCP_LRO) {
>>> +		if (dev->data->dev_conf.rxmode.max_lro_pkt_size == 0)
>>> +			dev->data->dev_conf.rxmode.max_lro_pkt_size =
>>> +				dev->data->dev_conf.rxmode.max_rx_pkt_len;
>>> +		int ret = check_lro_pkt_size(port_id,
>>> +				dev->data->dev_conf.rxmode.max_lro_pkt_size,
>>> +				dev->data->dev_conf.rxmode.max_rx_pkt_len,
>>> +				dev_info.max_lro_pkt_size);
>>> +		if (ret != 0)
>>> +			return ret;
>>> +	}
>>
>>
>>
>> IMO it is not acceptable to duplicate so much code.
>> It is simply unmaintainable.
>>
>> NACK
>
> Can it be solved by making rte_eth_rx_queue_setup() a wrapper
> on top of this new rte_eth_rxseg_queue_setup()?
>

Could be, but strictly speaking it would break the argument validation
order and error reporting in various cases, so refactoring is required
to keep it consistent.
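
[Editor's illustration, not part of the patch under review: a minimal sketch of
the wrapper Thomas suggests, assumed to live in rte_ethdev.c next to the quoted
function. It takes the v3 layout of struct rte_eth_rxseg (mp/length/offset) as
given and deliberately glosses over the concern raised above: with a plain
wrapper, a PMD that implements only rx_queue_setup would now fail the
rxseg_queue_setup -ENOTSUP check, and the order of the remaining checks would
change unless the shared validation were factored out.]

/*
 * Sketch only: express the classic single-mempool Rx queue setup on top
 * of the segmented setup by passing a one-element segment array.  The
 * segment length is derived from the pool's data room so that the
 * "data room >= length + offset + headroom" check above still holds.
 */
int
rte_eth_rx_queue_setup(uint16_t port_id, uint16_t rx_queue_id,
		       uint16_t nb_rx_desc, unsigned int socket_id,
		       const struct rte_eth_rxconf *rx_conf,
		       struct rte_mempool *mb_pool)
{
	struct rte_eth_rxseg seg = {
		.mp = mb_pool,
		.length = rte_pktmbuf_data_room_size(mb_pool) -
			  RTE_PKTMBUF_HEADROOM,
		.offset = 0,
	};

	/* All common argument checks would then live in a single place. */
	return rte_eth_rxseg_queue_setup(port_id, rx_queue_id, nb_rx_desc,
					 socket_id, rx_conf, &seg, 1);
}

[A cleaner route, closer to what the reply above implies, would be to factor
the shared parameter validation (port/queue id, descriptor limits, offload
flags, mempool sanity) into a static helper called by both setup functions, so
that a given bad argument keeps producing the same error code as today.]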