From mboxrd@z Thu Jan 1 00:00:00 1970
From: Slava Ovsiienko
To: dev@dpdk.org
CC: Thomas Monjalon, stephen@networkplumber.org, ferruh.yigit@intel.com, Shahaf Shuler, olivier.matz@6wind.com, jerinjacobk@gmail.com, maxime.coquelin@redhat.com, david.marchand@redhat.com, arybchenko@solarflare.com, Asaf Penso
Date: Mon, 17 Aug 2020 17:49:38 +0000
Subject: [dpdk-dev] [RFC] ethdev: introduce Rx buffer split

>From 7f7052d8b85ff3ff7011bd844b6d3169c6e51923 Mon Sep 17 00:00:00 2001
From: Viacheslav Ovsiienko
Date: Mon, 17 Aug 2020 16:57:43 +0000
Subject: [RFC] ethdev: introduce Rx buffer split

The DPDK datapath in the transmit direction is very flexible.
An application can build a multi-segment packet and manage almost all
of its data aspects: the memory pools the segments are allocated from,
the segment lengths, and the memory attributes, such as external
buffers registered for DMA. In the receiving direction the datapath is
much less flexible: an application can only specify the memory pool to
configure the receiving queue, and nothing more. To extend the
receiving datapath capabilities, it is proposed to add a way to
provide extended information on how to split the packets being
received.

The following structure is introduced to specify an Rx packet segment:

struct rte_eth_rxseg {
	struct rte_mempool *mp; /* memory pool to allocate segment from */
	uint16_t length; /* segment maximal data length */
	uint16_t offset; /* data offset from beginning of mbuf data buffer */
	uint32_t reserved; /* reserved field */
};

The new routine rte_eth_rx_queue_setup_ex() is introduced to set up
the given Rx queue using the new extended Rx packet segment
description:

int
rte_eth_rx_queue_setup_ex(uint16_t port_id, uint16_t rx_queue_id,
			  uint16_t nb_rx_desc, unsigned int socket_id,
			  const struct rte_eth_rxconf *rx_conf,
			  const struct rte_eth_rxseg *rx_seg,
			  uint16_t n_seg)

This routine introduces two new parameters:

rx_seg - pointer to the array of segment descriptions; each element
         describes the memory pool, the maximal data length, and the
         initial data offset from the beginning of the data buffer in
         the mbuf
n_seg  - number of elements in the array

The new offload flag DEV_RX_OFFLOAD_BUFFER_SPLIT is introduced in the
device capabilities so that a PMD can report to the application that
it supports splitting received packets into configurable segments.
Before invoking rte_eth_rx_queue_setup_ex(), the application should
check that the DEV_RX_OFFLOAD_BUFFER_SPLIT flag is present.

If the Rx queue is configured with the new routine, the packets being
received will be split into multiple segments pushed to mbufs with the
specified attributes. The PMD will allocate the first mbuf from the
pool specified in the first segment descriptor and put the data
starting at the specified offset into the allocated mbuf data buffer.
If the packet length exceeds the specified segment length, the next
mbuf will be allocated according to the next segment descriptor (if
any), and data will be put into its data buffer at the specified
offset, not exceeding the specified length. If there is no next
descriptor, the next mbuf will be allocated and filled in the same way
(from the same pool and with the same buffer offset/length) as the
current one.
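As an illustration, here is a minimal usage sketch (not part of the
patch). It assumes two mempools, header_pool and data_pool, created by
the application beforehand with rte_pktmbuf_pool_create(), and splits
each received packet into a 14-byte header segment and a payload
segment:

#include <rte_ethdev.h>
#include <rte_mbuf.h>

/* Usage sketch: configure Rx queue 0 of the port to split received
 * packets into a header part and a payload part taken from two
 * different mempools.
 */
static int
setup_split_rxq(uint16_t port_id, uint16_t nb_rx_desc,
		unsigned int socket_id,
		struct rte_mempool *header_pool,
		struct rte_mempool *data_pool)
{
	struct rte_eth_dev_info dev_info;
	/* Designated initializers leave the reserved field zeroed. */
	struct rte_eth_rxseg seg[2] = {
		{ .mp = header_pool, .length = 14,
		  .offset = RTE_PKTMBUF_HEADROOM },
		{ .mp = data_pool, .length = 2048, .offset = 0 },
	};
	int ret;

	ret = rte_eth_dev_info_get(port_id, &dev_info);
	if (ret != 0)
		return ret;
	/* The application must check the capability flag first. */
	if (!(dev_info.rx_offload_capa & DEV_RX_OFFLOAD_BUFFER_SPLIT))
		return -ENOTSUP;

	/* Passing NULL for rx_conf selects the driver defaults. */
	return rte_eth_rx_queue_setup_ex(port_id, 0, nb_rx_desc,
					 socket_id, NULL, seg, 2);
}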
For example, let's suppose we configured the Rx queue with the
following segments:

seg0 - pool0, len0=14B, off0=RTE_PKTMBUF_HEADROOM
seg1 - pool1, len1=20B, off1=0B
seg2 - pool2, len2=20B, off2=0B
seg3 - pool3, len3=512B, off3=0B

A packet 46 bytes long will then look like the following:

seg0 - 14B long @ RTE_PKTMBUF_HEADROOM in mbuf from pool0
seg1 - 20B long @ 0 in mbuf from pool1
seg2 - 12B long @ 0 in mbuf from pool2

A packet 1500 bytes long will look like the following:

seg0 - 14B @ RTE_PKTMBUF_HEADROOM in mbuf from pool0
seg1 - 20B @ 0 in mbuf from pool1
seg2 - 20B @ 0 in mbuf from pool2
seg3 - 512B @ 0 in mbuf from pool3
seg4 - 512B @ 0 in mbuf from pool3
seg5 - 422B @ 0 in mbuf from pool3

The offload DEV_RX_OFFLOAD_SCATTER must be present and configured to
support the new buffer split feature (if n_seg is greater than one).
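To make this layout concrete, the four segments above could be
expressed with the following descriptor array (a sketch only; pool0
through pool3 stand for application-created mempools, and the
surrounding variables are assumed to be set up as for an ordinary
queue setup):

/* Sketch: descriptor array matching the four-segment example. */
struct rte_eth_rxseg rx_seg[4] = {
	{ .mp = pool0, .length = 14,  .offset = RTE_PKTMBUF_HEADROOM },
	{ .mp = pool1, .length = 20,  .offset = 0 },
	{ .mp = pool2, .length = 20,  .offset = 0 },
	{ .mp = pool3, .length = 512, .offset = 0 },
};
/* DEV_RX_OFFLOAD_SCATTER is required here because n_seg > 1. */
ret = rte_eth_rx_queue_setup_ex(port_id, rx_queue_id, nb_rx_desc,
				socket_id, &rx_conf, rx_seg, 4);

Note how the last descriptor is reused for the tail of longer packets:
for the 1500-byte packet, 1500 - 14 - 20 - 20 = 1446 bytes remain
after seg0..seg2, and 1446 = 512 + 512 + 422, which gives the
seg3..seg5 layout shown above.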
The new approach would allow splitting the ingress packets into
multiple parts pushed to memory with different attributes. For
example, the packet headers can be pushed to the embedded data buffers
within mbufs, and the application data into external buffers attached
to mbufs allocated from different memory pools. The memory attributes
for the split parts may differ as well - for example, the application
data may be pushed into external memory located on a dedicated
physical device, say a GPU or an NVMe drive. This would improve the
flexibility of the DPDK receiving datapath while preserving
compatibility with the existing API.

Also, the proposed segment description might be used to specify Rx
packet split for some other features. For example, it might provide a
way to specify the extra memory pool for the Header Split feature of
some Intel PMDs.

Signed-off-by: Viacheslav Ovsiienko
---
 lib/librte_ethdev/rte_ethdev.c      | 166 ++++++++++++++++++++++++++++++++++++
 lib/librte_ethdev/rte_ethdev.h      |  15 ++++
 lib/librte_ethdev/rte_ethdev_core.h |  10 +++
 3 files changed, 191 insertions(+)

diff --git a/lib/librte_ethdev/rte_ethdev.c b/lib/librte_ethdev/rte_ethdev.c
index 7858ad5..638e42d 100644
--- a/lib/librte_ethdev/rte_ethdev.c
+++ b/lib/librte_ethdev/rte_ethdev.c
@@ -1933,6 +1933,172 @@ struct rte_eth_dev *
 }
 
 int
+rte_eth_rx_queue_setup_ex(uint16_t port_id, uint16_t rx_queue_id,
+			  uint16_t nb_rx_desc, unsigned int socket_id,
+			  const struct rte_eth_rxconf *rx_conf,
+			  const struct rte_eth_rxseg *rx_seg, uint16_t n_seg)
+{
+	int ret;
+	uint16_t seg_idx;
+	uint32_t mbp_buf_size;
+	struct rte_eth_dev *dev;
+	struct rte_eth_dev_info dev_info;
+	struct rte_eth_rxconf local_conf;
+	void **rxq;
+
+	RTE_ETH_VALID_PORTID_OR_ERR_RET(port_id, -EINVAL);
+
+	dev = &rte_eth_devices[port_id];
+	if (rx_queue_id >= dev->data->nb_rx_queues) {
+		RTE_ETHDEV_LOG(ERR, "Invalid RX queue_id=%u\n", rx_queue_id);
+		return -EINVAL;
+	}
+
+	if (rx_seg == NULL) {
+		RTE_ETHDEV_LOG(ERR, "Invalid null description pointer\n");
+		return -EINVAL;
+	}
+
+	if (n_seg == 0) {
+		RTE_ETHDEV_LOG(ERR, "Invalid zero description number\n");
+		return -EINVAL;
+	}
+
+	RTE_FUNC_PTR_OR_ERR_RET(*dev->dev_ops->rx_queue_setup_ex, -ENOTSUP);
+
+	/*
+	 * Check the size of the mbuf data buffer.
+	 * This value must be provided in the private data of the memory pool.
+	 * First check that the memory pool has a valid private data.
+	 */
+	ret = rte_eth_dev_info_get(port_id, &dev_info);
+	if (ret != 0)
+		return ret;
+
+	for (seg_idx = 0; seg_idx < n_seg; seg_idx++) {
+		struct rte_mempool *mp = rx_seg[seg_idx].mp;
+
+		if (mp->private_data_size < sizeof(struct rte_pktmbuf_pool_private)) {
+			RTE_ETHDEV_LOG(ERR, "%s private_data_size %d < %d\n",
+				mp->name, (int)mp->private_data_size,
+				(int)sizeof(struct rte_pktmbuf_pool_private));
+			return -ENOSPC;
+		}
+
+		mbp_buf_size = rte_pktmbuf_data_room_size(mp);
+		if (mbp_buf_size < rx_seg[seg_idx].length + rx_seg[seg_idx].offset) {
+			RTE_ETHDEV_LOG(ERR,
+				"%s mbuf_data_room_size %d < %d"
+				" (segment length=%d + segment offset=%d)\n",
+				mp->name, (int)mbp_buf_size,
+				(int)(rx_seg[seg_idx].length + rx_seg[seg_idx].offset),
+				(int)rx_seg[seg_idx].length,
+				(int)rx_seg[seg_idx].offset);
+			return -EINVAL;
+		}
+	}
+
+	/* Use default specified by driver, if nb_rx_desc is zero */
+	if (nb_rx_desc == 0) {
+		nb_rx_desc = dev_info.default_rxportconf.ring_size;
+		/* If driver default is also zero, fall back on EAL default */
+		if (nb_rx_desc == 0)
+			nb_rx_desc = RTE_ETH_DEV_FALLBACK_RX_RINGSIZE;
+	}
+
+	if (nb_rx_desc > dev_info.rx_desc_lim.nb_max ||
+	    nb_rx_desc < dev_info.rx_desc_lim.nb_min ||
+	    nb_rx_desc % dev_info.rx_desc_lim.nb_align != 0) {
+
+		RTE_ETHDEV_LOG(ERR,
+			"Invalid value for nb_rx_desc(=%hu), should be: <= %hu, >= %hu, and a product of %hu\n",
+			nb_rx_desc, dev_info.rx_desc_lim.nb_max,
+			dev_info.rx_desc_lim.nb_min,
+			dev_info.rx_desc_lim.nb_align);
+		return -EINVAL;
+	}
+
+	if (dev->data->dev_started &&
+		!(dev_info.dev_capa &
+			RTE_ETH_DEV_CAPA_RUNTIME_RX_QUEUE_SETUP))
+		return -EBUSY;
+
+	if (dev->data->dev_started &&
+		(dev->data->rx_queue_state[rx_queue_id] !=
+			RTE_ETH_QUEUE_STATE_STOPPED))
+		return -EBUSY;
+
+	rxq = dev->data->rx_queues;
+	if (rxq[rx_queue_id]) {
+		RTE_FUNC_PTR_OR_ERR_RET(*dev->dev_ops->rx_queue_release,
+					-ENOTSUP);
+		(*dev->dev_ops->rx_queue_release)(rxq[rx_queue_id]);
+		rxq[rx_queue_id] = NULL;
+	}
+
+	if (rx_conf == NULL)
+		rx_conf = &dev_info.default_rxconf;
+
+	local_conf = *rx_conf;
+
+	/*
+	 * If an offloading has already been enabled in
+	 * rte_eth_dev_configure(), it has been enabled on all queues,
+	 * so there is no need to enable it in this queue again.
+	 * The local_conf.offloads input to underlying PMD only carries
+	 * those offloadings which are only enabled on this queue and
+	 * not enabled on all queues.
+	 */
+	local_conf.offloads &= ~dev->data->dev_conf.rxmode.offloads;
+
+	/*
+	 * New added offloadings for this queue are those not enabled in
+	 * rte_eth_dev_configure() and they must be per-queue type.
+	 * A pure per-port offloading can't be enabled on a queue while
+	 * disabled on another queue. A pure per-port offloading can't
+	 * be enabled for any queue as new added one if it hasn't been
+	 * enabled in rte_eth_dev_configure().
+	 */
+	if ((local_conf.offloads & dev_info.rx_queue_offload_capa) !=
+	     local_conf.offloads) {
+		RTE_ETHDEV_LOG(ERR,
+			"Ethdev port_id=%d rx_queue_id=%d, new added offloads 0x%"PRIx64" must be "
+			"within per-queue offload capabilities 0x%"PRIx64" in %s()\n",
+			port_id, rx_queue_id, local_conf.offloads,
+			dev_info.rx_queue_offload_capa,
+			__func__);
+		return -EINVAL;
+	}
+
+	/*
+	 * If LRO is enabled, check that the maximum aggregated packet
+	 * size is supported by the configured device.
+	 */
+	if (local_conf.offloads & DEV_RX_OFFLOAD_TCP_LRO) {
+		if (dev->data->dev_conf.rxmode.max_lro_pkt_size == 0)
+			dev->data->dev_conf.rxmode.max_lro_pkt_size =
+				dev->data->dev_conf.rxmode.max_rx_pkt_len;
+		int ret = check_lro_pkt_size(port_id,
+				dev->data->dev_conf.rxmode.max_lro_pkt_size,
+				dev->data->dev_conf.rxmode.max_rx_pkt_len,
+				dev_info.max_lro_pkt_size);
+		if (ret != 0)
+			return ret;
+	}
+
+	ret = (*dev->dev_ops->rx_queue_setup_ex)(dev, rx_queue_id, nb_rx_desc,
+						 socket_id, &local_conf,
+						 rx_seg, n_seg);
+	if (!ret) {
+		if (!dev->data->min_rx_buf_size ||
+		    dev->data->min_rx_buf_size > mbp_buf_size)
+			dev->data->min_rx_buf_size = mbp_buf_size;
+	}
+
+	return eth_err(port_id, ret);
+}
+
+int
 rte_eth_rx_hairpin_queue_setup(uint16_t port_id, uint16_t rx_queue_id,
 			       uint16_t nb_rx_desc,
 			       const struct rte_eth_hairpin_conf *conf)
diff --git a/lib/librte_ethdev/rte_ethdev.h b/lib/librte_ethdev/rte_ethdev.h
index 70295d7..701264a 100644
--- a/lib/librte_ethdev/rte_ethdev.h
+++ b/lib/librte_ethdev/rte_ethdev.h
@@ -938,6 +938,16 @@ struct rte_eth_txmode {
 };
 
 /**
+ * A structure used to configure an RX packet segment split.
+ */
+struct rte_eth_rxseg {
+	struct rte_mempool *mp; /**< Memory pool to allocate segment from */
+	uint16_t length; /**< Segment maximal data length */
+	uint16_t offset; /**< Data offset from beginning of mbuf data buffer */
+	uint32_t reserved; /**< Reserved field */
+};
+
+/**
  * A structure used to configure an RX ring of an Ethernet port.
  */
 struct rte_eth_rxconf {
@@ -1988,6 +1998,11 @@ int rte_eth_rx_queue_setup(uint16_t port_id, uint16_t rx_queue_id,
 		const struct rte_eth_rxconf *rx_conf,
 		struct rte_mempool *mb_pool);
 
+int rte_eth_rx_queue_setup_ex(uint16_t port_id, uint16_t rx_queue_id,
+		uint16_t nb_rx_desc, unsigned int socket_id,
+		const struct rte_eth_rxconf *rx_conf,
+		const struct rte_eth_rxseg *rx_seg, uint16_t n_seg);
+
 /**
  * @warning
  * @b EXPERIMENTAL: this API may change, or be removed, without prior notice
diff --git a/lib/librte_ethdev/rte_ethdev_core.h b/lib/librte_ethdev/rte_ethdev_core.h
index 32407dd..27018de 100644
--- a/lib/librte_ethdev/rte_ethdev_core.h
+++ b/lib/librte_ethdev/rte_ethdev_core.h
@@ -265,6 +265,15 @@ typedef int (*eth_rx_queue_setup_t)(struct rte_eth_dev *dev,
 				    struct rte_mempool *mb_pool);
 /**< @internal Set up a receive queue of an Ethernet device. */
 
+typedef int (*eth_rx_queue_setup_ex_t)(struct rte_eth_dev *dev,
+				       uint16_t rx_queue_id,
+				       uint16_t nb_rx_desc,
+				       unsigned int socket_id,
+				       const struct rte_eth_rxconf *rx_conf,
+				       const struct rte_eth_rxseg *rx_seg,
+				       uint16_t n_seg);
+/**< @internal Set up a receive queue of an Ethernet device. */
+
 typedef int (*eth_tx_queue_setup_t)(struct rte_eth_dev *dev,
 				    uint16_t tx_queue_id,
 				    uint16_t nb_tx_desc,
@@ -659,6 +668,7 @@ struct eth_dev_ops {
 	eth_queue_start_t          tx_queue_start;/**< Start TX for a queue. */
 	eth_queue_stop_t           tx_queue_stop; /**< Stop TX for a queue. */
 	eth_rx_queue_setup_t       rx_queue_setup;/**< Set up device RX queue. */
+	eth_rx_queue_setup_ex_t    rx_queue_setup_ex;/**< Set up device RX queue. */
 	eth_queue_release_t        rx_queue_release; /**< Release RX queue. */
 	eth_rx_queue_count_t       rx_queue_count;
 	/**< Get the number of used RX descriptors. */
-- 
1.8.3.1