From mboxrd@z Thu Jan 1 00:00:00 1970
From: Shahed Shaikh
To: Yongseok Koh
Cc: Rasesh Mody, dpdk stable
Date: Tue, 12 Mar 2019 17:04:26 +0000
References: <20190308174749.30771-1-yskoh@mellanox.com> <20190308174749.30771-59-yskoh@mellanox.com>
In-Reply-To: <20190308174749.30771-59-yskoh@mellanox.com>
Content-Type: text/plain; charset="us-ascii"
MIME-Version: 1.0
Subject:
Re: [dpdk-stable] [EXT] patch 'net/qede: fix performance bottleneck in Rx path' has been queued to LTS release 17.11.6

> -----Original Message-----
> From: Yongseok Koh
> Sent: Friday, March 8, 2019 11:18 PM
> To: Shahed Shaikh
> Cc: Rasesh Mody; dpdk stable
> Subject: [EXT] patch 'net/qede: fix performance bottleneck in Rx path' has
> been queued to LTS release 17.11.6
>
> Hi,
>
> FYI, your patch has been queued to LTS release 17.11.6.
>
> Note it hasn't been pushed to http://dpdk.org/browse/dpdk-stable yet.
> It will be pushed if I get no objection by 03/13/19. So please shout if
> anyone has an objection.

Hi,

We recently found that this patch introduces a regression, for which I have just sent out a fix:
http://patchwork.dpdk.org/patch/51134/ ("net/qede: fix receive packet drop")

How do we handle such a situation? Will you also pull the above fix once it gets accepted, or drop the current patch from 17.11.6 and consider it for the next release?

Please advise.

Thanks,
Shahed

>
> Also note that after the patch there's a diff of the upstream commit vs the
> patch applied to the branch. If the code is different (i.e. not only metadata
> diffs), due for example to a change in context or macro names, please
> double-check it.
>
> Thanks.
>
> Yongseok
>
> ---
> From f4f2aff537e1ff13ed85a9d4e52038ca34e7e005 Mon Sep 17 00:00:00 2001
> From: Shahed Shaikh
> Date: Fri, 18 Jan 2019 02:29:29 -0800
> Subject: [PATCH] net/qede: fix performance bottleneck in Rx path
>
> [ upstream commit 8f2312474529ad7ff0e4b65b82efc8530e7484ce ]
>
> Allocating replacement buffer per received packet is expensive.
> Instead, process received packets first and allocate replacement
> buffers in bulk later.
>
> This improves performance by ~25% in terms of PPS on AMD platforms.
>
> Fixes: 2ea6f76aff40 ("qede: add core driver")
>
> Signed-off-by: Shahed Shaikh
> Acked-by: Rasesh Mody
> ---
>  drivers/net/qede/qede_rxtx.c | 97 +++++++++++++++++++++++++++++++++-----------
>  drivers/net/qede/qede_rxtx.h |  2 +
>  2 files changed, 75 insertions(+), 24 deletions(-)
>
> diff --git a/drivers/net/qede/qede_rxtx.c b/drivers/net/qede/qede_rxtx.c
> index cdb85c218..b525075ca 100644
> --- a/drivers/net/qede/qede_rxtx.c
> +++ b/drivers/net/qede/qede_rxtx.c
> @@ -37,6 +37,52 @@ static inline int qede_alloc_rx_buffer(struct qede_rx_queue *rxq)
>  	return 0;
>  }
>
> +#define QEDE_MAX_BULK_ALLOC_COUNT 512
> +
> +static inline int qede_alloc_rx_bulk_mbufs(struct qede_rx_queue *rxq, int count)
> +{
> +	void *obj_p[QEDE_MAX_BULK_ALLOC_COUNT] __rte_cache_aligned;
> +	struct rte_mbuf *mbuf = NULL;
> +	struct eth_rx_bd *rx_bd;
> +	dma_addr_t mapping;
> +	int i, ret = 0;
> +	uint16_t idx;
> +
> +	idx = rxq->sw_rx_prod & NUM_RX_BDS(rxq);
> +
> +	if (count > QEDE_MAX_BULK_ALLOC_COUNT)
> +		count = QEDE_MAX_BULK_ALLOC_COUNT;
> +
> +	ret = rte_mempool_get_bulk(rxq->mb_pool, obj_p, count);
> +	if (unlikely(ret)) {
> +		PMD_RX_LOG(ERR, rxq,
> +			   "Failed to allocate %d rx buffers "
> +			   "sw_rx_prod %u sw_rx_cons %u mp entries %u free %u",
> +			   count, idx, rxq->sw_rx_cons & NUM_RX_BDS(rxq),
> +			   rte_mempool_avail_count(rxq->mb_pool),
> +			   rte_mempool_in_use_count(rxq->mb_pool));
> +		return -ENOMEM;
> +	}
> +
> +	for (i = 0; i < count; i++) {
> +		mbuf = obj_p[i];
> +		if (likely(i < count - 1))
> +			rte_prefetch0(obj_p[i + 1]);
> +
> +		idx = rxq->sw_rx_prod & NUM_RX_BDS(rxq);
> +		rxq->sw_rx_ring[idx].mbuf = mbuf;
> +		rxq->sw_rx_ring[idx].page_offset = 0;
> +		mapping = rte_mbuf_data_iova_default(mbuf);
> +		rx_bd = (struct eth_rx_bd *)
> +			ecore_chain_produce(&rxq->rx_bd_ring);
> +		rx_bd->addr.hi = rte_cpu_to_le_32(U64_HI(mapping));
> +		rx_bd->addr.lo = rte_cpu_to_le_32(U64_LO(mapping));
> +		rxq->sw_rx_prod++;
> +	}
> +
> +	return 0;
> +}
> +
>  /* Criterias for calculating Rx buffer size -
>   * 1) rx_buf_size should not exceed the size of mbuf
>   * 2) In scattered_rx mode - minimum rx_buf_size should be
> @@ -1134,7 +1180,7 @@ qede_reuse_page(__rte_unused struct qede_dev *qdev,
>  		struct qede_rx_queue *rxq, struct qede_rx_entry *curr_cons)
>  {
>  	struct eth_rx_bd *rx_bd_prod = ecore_chain_produce(&rxq->rx_bd_ring);
> -	uint16_t idx = rxq->sw_rx_cons & NUM_RX_BDS(rxq);
> +	uint16_t idx = rxq->sw_rx_prod & NUM_RX_BDS(rxq);
>  	struct qede_rx_entry *curr_prod;
>  	dma_addr_t new_mapping;
>
> @@ -1367,7 +1413,6 @@ qede_recv_pkts(void *p_rxq, struct rte_mbuf **rx_pkts, uint16_t nb_pkts)
>  	uint8_t bitfield_val;
>  #endif
>  	uint8_t tunn_parse_flag;
> -	uint8_t j;
>  	struct eth_fast_path_rx_tpa_start_cqe *cqe_start_tpa;
>  	uint64_t ol_flags;
>  	uint32_t packet_type;
> @@ -1376,6 +1421,7 @@ qede_recv_pkts(void *p_rxq, struct rte_mbuf **rx_pkts, uint16_t nb_pkts)
>  	uint8_t offset, tpa_agg_idx, flags;
>  	struct qede_agg_info *tpa_info = NULL;
>  	uint32_t rss_hash;
> +	int rx_alloc_count = 0;
>
>  	hw_comp_cons = rte_le_to_cpu_16(*rxq->hw_cons_ptr);
>  	sw_comp_cons = ecore_chain_get_cons_idx(&rxq->rx_comp_ring);
> @@ -1385,6 +1431,25 @@ qede_recv_pkts(void *p_rxq, struct rte_mbuf **rx_pkts, uint16_t nb_pkts)
>  	if (hw_comp_cons == sw_comp_cons)
>  		return 0;
>
> +	/* Allocate buffers that we used in previous loop */
> +	if (rxq->rx_alloc_count) {
> +		if (unlikely(qede_alloc_rx_bulk_mbufs(rxq,
> +			     rxq->rx_alloc_count))) {
> +			struct rte_eth_dev *dev;
> +
> +			PMD_RX_LOG(ERR, rxq,
> +				   "New buffer allocation failed,"
> +				   "dropping incoming packet\n");
> +			dev = &rte_eth_devices[rxq->port_id];
> +			dev->data->rx_mbuf_alloc_failed +=
> +					rxq->rx_alloc_count;
> +			rxq->rx_alloc_errors += rxq->rx_alloc_count;
> +			return 0;
> +		}
> +		qede_update_rx_prod(qdev, rxq);
> +		rxq->rx_alloc_count = 0;
> +	}
> +
>  	while (sw_comp_cons != hw_comp_cons) {
>  		ol_flags = 0;
>  		packet_type = RTE_PTYPE_UNKNOWN;
> @@ -1556,16 +1621,7 @@ qede_recv_pkts(void *p_rxq, struct rte_mbuf **rx_pkts, uint16_t nb_pkts)
>  			rx_mb->hash.rss = rss_hash;
>  		}
>
> -		if (unlikely(qede_alloc_rx_buffer(rxq) != 0)) {
> -			PMD_RX_LOG(ERR, rxq,
> -				   "New buffer allocation failed,"
> -				   "dropping incoming packet\n");
> -			qede_recycle_rx_bd_ring(rxq, qdev, fp_cqe->bd_num);
> -			rte_eth_devices[rxq->port_id].
> -				data->rx_mbuf_alloc_failed++;
> -			rxq->rx_alloc_errors++;
> -			break;
> -		}
> +		rx_alloc_count++;
>  		qede_rx_bd_ring_consume(rxq);
>
>  		if (!tpa_start_flg && fp_cqe->bd_num > 1) {
> @@ -1577,17 +1633,9 @@ qede_recv_pkts(void *p_rxq, struct rte_mbuf **rx_pkts, uint16_t nb_pkts)
>  			if (qede_process_sg_pkts(p_rxq, seg1, num_segs,
>  						 pkt_len - len))
>  				goto next_cqe;
> -			for (j = 0; j < num_segs; j++) {
> -				if (qede_alloc_rx_buffer(rxq)) {
> -					PMD_RX_LOG(ERR, rxq,
> -						"Buffer allocation failed");
> -					rte_eth_devices[rxq->port_id].
> -						data->rx_mbuf_alloc_failed++;
> -					rxq->rx_alloc_errors++;
> -					break;
> -				}
> -				rxq->rx_segs++;
> -			}
> +
> +			rx_alloc_count += num_segs;
> +			rxq->rx_segs += num_segs;
>  		}
>  		rxq->rx_segs++; /* for the first segment */
>
> @@ -1629,7 +1677,8 @@ next_cqe:
>  		}
>  	}
>
> -	qede_update_rx_prod(qdev, rxq);
> +	/* Request number of buffers to be allocated in next loop */
> +	rxq->rx_alloc_count = rx_alloc_count;
>
>  	rxq->rcv_pkts += rx_pkt;
>
> diff --git a/drivers/net/qede/qede_rxtx.h b/drivers/net/qede/qede_rxtx.h
> index fe80237d5..574831e61 100644
> --- a/drivers/net/qede/qede_rxtx.h
> +++ b/drivers/net/qede/qede_rxtx.h
> @@ -196,6 +196,8 @@ struct qede_rx_queue {
>  	uint16_t queue_id;
>  	uint16_t port_id;
>  	uint16_t rx_buf_size;
> +	uint16_t rx_alloc_count;
> +	uint16_t unused;
>  	uint64_t rcv_pkts;
>  	uint64_t rx_segs;
>  	uint64_t rx_hw_errors;
> --
> 2.11.0
>
> ---
>   Diff of the applied patch vs upstream commit (please double-check if non-empty:
> ---
> --- -	2019-03-08 09:46:43.105876155 -0800
> +++ 0059-net-qede-fix-performance-bottleneck-in-Rx-path.patch	2019-03-08 09:46:40.311403000 -0800
> @@ -1,8 +1,10 @@
> -From 8f2312474529ad7ff0e4b65b82efc8530e7484ce Mon Sep 17 00:00:00 2001
> +From f4f2aff537e1ff13ed85a9d4e52038ca34e7e005 Mon Sep 17 00:00:00 2001
>  From: Shahed Shaikh
>  Date: Fri, 18 Jan 2019 02:29:29 -0800
>  Subject: [PATCH] net/qede: fix performance bottleneck in Rx path
>
> +[ upstream commit 8f2312474529ad7ff0e4b65b82efc8530e7484ce ]
> +
>  Allocating replacement buffer per received packet is expensive.
>  Instead, process received packets first and allocate replacement
>  buffers in bulk later.
> @@ -11,7 +13,6 @@
>  platforms.
>
>   Fixes: 2ea6f76aff40 ("qede: add core driver")
> -Cc: stable@dpdk.org
>
>   Signed-off-by: Shahed Shaikh
>   Acked-by: Rasesh Mody
> @@ -21,10 +22,10 @@
>   2 files changed, 75 insertions(+), 24 deletions(-)
>
>   diff --git a/drivers/net/qede/qede_rxtx.c b/drivers/net/qede/qede_rxtx.c
> -index 0e33be1a3..684c4aeef 100644
> +index cdb85c218..b525075ca 100644
>   --- a/drivers/net/qede/qede_rxtx.c
>   +++ b/drivers/net/qede/qede_rxtx.c
> -@@ -35,6 +35,52 @@ static inline int qede_alloc_rx_buffer(struct qede_rx_queue *rxq)
> +@@ -37,6 +37,52 @@ static inline int qede_alloc_rx_buffer(struct qede_rx_queue *rxq)
>   	return 0;
>   }
>
> @@ -77,7 +78,7 @@
>   /* Criterias for calculating Rx buffer size -
>    * 1) rx_buf_size should not exceed the size of mbuf
>    * 2) In scattered_rx mode - minimum rx_buf_size should be
> -@@ -1131,7 +1177,7 @@ qede_reuse_page(__rte_unused struct qede_dev *qdev,
> +@@ -1134,7 +1180,7 @@ qede_reuse_page(__rte_unused struct qede_dev *qdev,
>   		struct qede_rx_queue *rxq, struct qede_rx_entry *curr_cons)
>   {
>   	struct eth_rx_bd *rx_bd_prod = ecore_chain_produce(&rxq->rx_bd_ring);
> @@ -86,7 +87,7 @@
>   	struct qede_rx_entry *curr_prod;
>   	dma_addr_t new_mapping;
>
> -@@ -1364,7 +1410,6 @@ qede_recv_pkts(void *p_rxq, struct rte_mbuf **rx_pkts, uint16_t nb_pkts)
> +@@ -1367,7 +1413,6 @@ qede_recv_pkts(void *p_rxq, struct rte_mbuf **rx_pkts, uint16_t nb_pkts)
>   	uint8_t bitfield_val;
>   #endif
>   	uint8_t tunn_parse_flag;
> @@ -94,7 +95,7 @@
>   	struct eth_fast_path_rx_tpa_start_cqe *cqe_start_tpa;
>   	uint64_t ol_flags;
>   	uint32_t packet_type;
> -@@ -1373,6 +1418,7 @@ qede_recv_pkts(void *p_rxq, struct rte_mbuf **rx_pkts, uint16_t nb_pkts)
> +@@ -1376,6 +1421,7 @@ qede_recv_pkts(void *p_rxq, struct rte_mbuf **rx_pkts, uint16_t nb_pkts)
>   	uint8_t offset, tpa_agg_idx, flags;
>   	struct qede_agg_info *tpa_info = NULL;
>   	uint32_t rss_hash;
> @@ -102,7 +103,7 @@
>
>   	hw_comp_cons = rte_le_to_cpu_16(*rxq->hw_cons_ptr);
>   	sw_comp_cons = ecore_chain_get_cons_idx(&rxq->rx_comp_ring);
> -@@ -1382,6 +1428,25 @@ qede_recv_pkts(void *p_rxq, struct rte_mbuf **rx_pkts, uint16_t nb_pkts)
> +@@ -1385,6 +1431,25 @@ qede_recv_pkts(void *p_rxq, struct rte_mbuf **rx_pkts, uint16_t nb_pkts)
>   	if (hw_comp_cons == sw_comp_cons)
>   		return 0;
>
> @@ -128,7 +129,7 @@
>   	while (sw_comp_cons != hw_comp_cons) {
>   		ol_flags = 0;
>   		packet_type = RTE_PTYPE_UNKNOWN;
> -@@ -1553,16 +1618,7 @@ qede_recv_pkts(void *p_rxq, struct rte_mbuf **rx_pkts, uint16_t nb_pkts)
> +@@ -1556,16 +1621,7 @@ qede_recv_pkts(void *p_rxq, struct rte_mbuf **rx_pkts, uint16_t nb_pkts)
>   			rx_mb->hash.rss = rss_hash;
>   		}
>
> @@ -146,7 +147,7 @@
>   		qede_rx_bd_ring_consume(rxq);
>
>   		if (!tpa_start_flg && fp_cqe->bd_num > 1) {
> -@@ -1574,17 +1630,9 @@ qede_recv_pkts(void *p_rxq, struct rte_mbuf **rx_pkts, uint16_t nb_pkts)
> +@@ -1577,17 +1633,9 @@ qede_recv_pkts(void *p_rxq, struct rte_mbuf **rx_pkts, uint16_t nb_pkts)
>   			if (qede_process_sg_pkts(p_rxq, seg1, num_segs,
>   						 pkt_len - len))
>   				goto next_cqe;
> @@ -167,7 +168,7 @@
>   		}
>   		rxq->rx_segs++; /* for the first segment */
>
> -@@ -1626,7 +1674,8 @@ next_cqe:
> +@@ -1629,7 +1677,8 @@ next_cqe:
>   		}
>   	}
>
> @@ -178,10 +179,10 @@
>   	rxq->rcv_pkts += rx_pkt;
>
>   diff --git a/drivers/net/qede/qede_rxtx.h b/drivers/net/qede/qede_rxtx.h
> -index 454daa07b..5b249cbb2 100644
> +index fe80237d5..574831e61 100644
>   --- a/drivers/net/qede/qede_rxtx.h
>   +++ b/drivers/net/qede/qede_rxtx.h
> -@@ -192,6 +192,8 @@ struct qede_rx_queue {
> +@@ -196,6 +196,8 @@ struct qede_rx_queue {
>   	uint16_t queue_id;
>   	uint16_t port_id;
>   	uint16_t rx_buf_size;