From mboxrd@z Thu Jan 1 00:00:00 1970
From: Slava Ovsiienko
To: NBU-Contact-Thomas Monjalon, Andrew Rybchenko
CC: "dev@dpdk.org", "stephen@networkplumber.org", "ferruh.yigit@intel.com", "olivier.matz@6wind.com", "jerinjacobk@gmail.com", "maxime.coquelin@redhat.com", "david.marchand@redhat.com"
Thread-Topic: [dpdk-dev] [PATCH v3 1/9] ethdev: introduce Rx buffer split
Date: Mon, 12 Oct 2020 17:11:37 +0000
References: <1602519585-5194-2-git-send-email-viacheslavo@nvidia.com> <6a04882a-4c4c-b515-9499-2ef7b20e94b2@oktetlabs.ru> <228932926.FOKgLshO0b@thomas>
In-Reply-To: <228932926.FOKgLshO0b@thomas>
Content-Type: text/plain; charset="us-ascii"
Subject: Re: [dpdk-dev] [PATCH v3 1/9] ethdev: introduce Rx buffer split

> -----Original Message-----
> From: Thomas Monjalon
> Sent: Monday, October 12, 2020 20:03
> To: Slava Ovsiienko; Andrew Rybchenko
> Cc: dev@dpdk.org; stephen@networkplumber.org; ferruh.yigit@intel.com;
> olivier.matz@6wind.com; jerinjacobk@gmail.com;
> maxime.coquelin@redhat.com; david.marchand@redhat.com
> Subject: Re: [dpdk-dev] [PATCH v3 1/9] ethdev: introduce Rx buffer split
>
> 12/10/2020 18:38, Andrew Rybchenko:
> > On 10/12/20 7:19 PM, Viacheslav Ovsiienko wrote:
> > >  int
> > > +rte_eth_rxseg_queue_setup(uint16_t port_id, uint16_t rx_queue_id,
> > > +			  uint16_t nb_rx_desc, unsigned int socket_id,
> > > +			  const struct rte_eth_rxconf *rx_conf,
> > > +			  const struct rte_eth_rxseg *rx_seg, uint16_t n_seg)
> > > +{
> > > +	int ret;
> > > +	uint16_t seg_idx;
> > > +	uint32_t mbp_buf_size;
> > > +	struct rte_eth_dev *dev;
> > > +	struct rte_eth_dev_info dev_info;
> > > +	struct rte_eth_rxconf local_conf;
> > > +	void **rxq;
> > > +
> > > +	RTE_ETH_VALID_PORTID_OR_ERR_RET(port_id, -EINVAL);
> > > +
> > > +	dev = &rte_eth_devices[port_id];
> > > +	if (rx_queue_id >= dev->data->nb_rx_queues) {
> > > +		RTE_ETHDEV_LOG(ERR, "Invalid RX queue_id=%u\n", rx_queue_id);
> > > +		return -EINVAL;
> > > +	}
> > > +
> > > +	if (rx_seg == NULL) {
> > > +		RTE_ETHDEV_LOG(ERR, "Invalid null description pointer\n");
> > > +		return -EINVAL;
> > > +	}
> > > +
> > > +	if (n_seg == 0) {
> > > +		RTE_ETHDEV_LOG(ERR, "Invalid zero description number\n");
> > > +		return -EINVAL;
> > > +	}
> > > +
> > > +	RTE_FUNC_PTR_OR_ERR_RET(*dev->dev_ops->rxseg_queue_setup, -ENOTSUP);
> > > +
> > > +	/*
> > > +	 * Check the size of the mbuf data buffer.
> > > +	 * This value must be provided in the private data of the memory pool.
> > > +	 * First check that the memory pool has a valid private data.
> > > +	 */
> > > +	ret = rte_eth_dev_info_get(port_id, &dev_info);
> > > +	if (ret != 0)
> > > +		return ret;
> > > +
> > > +	for (seg_idx = 0; seg_idx < n_seg; seg_idx++) {
> > > +		struct rte_mempool *mp = rx_seg[seg_idx].mp;
> > > +
> > > +		if (mp->private_data_size <
> > > +				sizeof(struct rte_pktmbuf_pool_private)) {
> > > +			RTE_ETHDEV_LOG(ERR, "%s private_data_size %d < %d\n",
> > > +				mp->name, (int)mp->private_data_size,
> > > +				(int)sizeof(struct rte_pktmbuf_pool_private));
> > > +			return -ENOSPC;
> > > +		}
> > > +
> > > +		mbp_buf_size = rte_pktmbuf_data_room_size(mp);
> > > +		if (mbp_buf_size < rx_seg[seg_idx].length +
> > > +				   rx_seg[seg_idx].offset +
> > > +				   (seg_idx ? 0 :
> > > +				   (uint32_t)RTE_PKTMBUF_HEADROOM)) {
> > > +			RTE_ETHDEV_LOG(ERR,
> > > +				"%s mbuf_data_room_size %d < %d"
> > > +				" (segment length=%d + segment offset=%d)\n",
> > > +				mp->name, (int)mbp_buf_size,
> > > +				(int)(rx_seg[seg_idx].length +
> > > +				      rx_seg[seg_idx].offset),
> > > +				(int)rx_seg[seg_idx].length,
> > > +				(int)rx_seg[seg_idx].offset);
> > > +			return -EINVAL;
> > > +		}
> > > +	}
> > > +
> > > +	/* Use default specified by driver, if nb_rx_desc is zero */
> > > +	if (nb_rx_desc == 0) {
> > > +		nb_rx_desc = dev_info.default_rxportconf.ring_size;
> > > +		/* If driver default is also zero, fall back on EAL default */
> > > +		if (nb_rx_desc == 0)
> > > +			nb_rx_desc = RTE_ETH_DEV_FALLBACK_RX_RINGSIZE;
> > > +	}
> > > +
> > > +	if (nb_rx_desc > dev_info.rx_desc_lim.nb_max ||
> > > +	    nb_rx_desc < dev_info.rx_desc_lim.nb_min ||
> > > +	    nb_rx_desc % dev_info.rx_desc_lim.nb_align != 0) {
> > > +		RTE_ETHDEV_LOG(ERR,
> > > +			"Invalid value for nb_rx_desc(=%hu), should be: "
> > > +			"<= %hu, >= %hu, and a product of %hu\n",
> > > +			nb_rx_desc, dev_info.rx_desc_lim.nb_max,
> > > +			dev_info.rx_desc_lim.nb_min,
> > > +			dev_info.rx_desc_lim.nb_align);
> > > +		return -EINVAL;
> > > +	}
> > > +
> > > +	if (dev->data->dev_started &&
> > > +	    !(dev_info.dev_capa &
> > > +	      RTE_ETH_DEV_CAPA_RUNTIME_RX_QUEUE_SETUP))
> > > +		return -EBUSY;
> > > +
> > > +	if (dev->data->dev_started &&
> > > +	    (dev->data->rx_queue_state[rx_queue_id] !=
> > > +	     RTE_ETH_QUEUE_STATE_STOPPED))
> > > +		return -EBUSY;
> > > +
> > > +	rxq = dev->data->rx_queues;
> > > +	if (rxq[rx_queue_id]) {
> > > +		RTE_FUNC_PTR_OR_ERR_RET(*dev->dev_ops->rx_queue_release,
> > > +					-ENOTSUP);
> > > +		(*dev->dev_ops->rx_queue_release)(rxq[rx_queue_id]);
> > > +		rxq[rx_queue_id] = NULL;
> > > +	}
> > > +
> > > +	if (rx_conf == NULL)
> > > +		rx_conf = &dev_info.default_rxconf;
> > > +
> > > +	local_conf = *rx_conf;
> > > +
> > > +	/*
> > > +	 * If an offloading has already been enabled in
> > > +	 * rte_eth_dev_configure(), it has been enabled on all queues,
> > > +	 * so there is no need to enable it in this queue again.
> > > +	 * The local_conf.offloads input to underlying PMD only carries
> > > +	 * those offloadings which are only enabled on this queue and
> > > +	 * not enabled on all queues.
> > > +	 */
> > > +	local_conf.offloads &= ~dev->data->dev_conf.rxmode.offloads;
> > > +
> > > +	/*
> > > +	 * New added offloadings for this queue are those not enabled in
> > > +	 * rte_eth_dev_configure() and they must be per-queue type.
> > > +	 * A pure per-port offloading can't be enabled on a queue while
> > > +	 * disabled on another queue. A pure per-port offloading can't
> > > +	 * be enabled for any queue as new added one if it hasn't been
> > > +	 * enabled in rte_eth_dev_configure().
> > > +	 */
> > > +	if ((local_conf.offloads & dev_info.rx_queue_offload_capa) !=
> > > +	    local_conf.offloads) {
> > > +		RTE_ETHDEV_LOG(ERR,
> > > +			"Ethdev port_id=%d rx_queue_id=%d, new added offloads"
> > > +			" 0x%"PRIx64" must be within per-queue offload"
> > > +			" capabilities 0x%"PRIx64" in %s()\n",
> > > +			port_id, rx_queue_id, local_conf.offloads,
> > > +			dev_info.rx_queue_offload_capa,
> > > +			__func__);
> > > +		return -EINVAL;
> > > +	}
> > > +
> > > +	/*
> > > +	 * If LRO is enabled, check that the maximum aggregated packet
> > > +	 * size is supported by the configured device.
> > > +	 */
> > > +	if (local_conf.offloads & DEV_RX_OFFLOAD_TCP_LRO) {
> > > +		if (dev->data->dev_conf.rxmode.max_lro_pkt_size == 0)
> > > +			dev->data->dev_conf.rxmode.max_lro_pkt_size =
> > > +				dev->data->dev_conf.rxmode.max_rx_pkt_len;
> > > +		int ret = check_lro_pkt_size(port_id,
> > > +				dev->data->dev_conf.rxmode.max_lro_pkt_size,
> > > +				dev->data->dev_conf.rxmode.max_rx_pkt_len,
> > > +				dev_info.max_lro_pkt_size);
> > > +		if (ret != 0)
> > > +			return ret;
> > > +	}
> >
> > IMO it is not acceptable to duplicate so much code.
> > It is simply unmaintainable.
> >
> > NACK
>
> Can it be solved by making rte_eth_rx_queue_setup() a wrapper on top of
> this new rte_eth_rxseg_queue_setup() ?

That would be one way to refactor it. A simpler solution is to provide a
subroutine that performs the common part of the parameter checks.

There seem to be no strong pros and cons deciding between these two
approaches.

As I said, my main concern with including the segment descriptions in the
config structure is that it introduces some ambiguity. But if we decide to
switch to that approach, I will handle it.

With best regards,
Slava