From mboxrd@z Thu Jan 1 00:00:00 1970
From: Slava Ovsiienko
To: Michael Baum, "dev@dpdk.org"
CC: Matan Azrad, Raslan Darawsheh
Date: Tue, 6 Apr 2021 09:27:24 +0000
References: <1617631256-3018-1-git-send-email-michaelba@nvidia.com> <1617631256-3018-2-git-send-email-michaelba@nvidia.com>
In-Reply-To: <1617631256-3018-2-git-send-email-michaelba@nvidia.com>
Subject: Re: [dpdk-dev] [PATCH 1/6] net/mlx5: separate Rx function declarations to another file

> -----Original Message-----
> From: Michael Baum
> Sent: Monday, April 5, 2021 17:01
> To: dev@dpdk.org
> Cc: Matan Azrad; Raslan Darawsheh; Slava Ovsiienko
> Subject: [PATCH 1/6] net/mlx5: separate Rx function declarations to another file
>
> The mlx5_rxtx.c file contains a lot of Tx burst functions, each of which
> is performance-optimized for a specific set of requested offloads.
> They are generated from a template function and take significant time to
> compile, because a large number of giant functions is generated in the
> same file and a single file cannot be compiled in parallel across
> multiple threads.
>
> Therefore, we can split the mlx5_rxtx.c file into several separate files
> so that different functions can be compiled simultaneously.
> In this patch, we move the Rx function declarations into a separate
> header file, in preparation for removing them from the source file and
> as an optional preparation step for further consolidation of the Rx
> burst functions.
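A note for readers of the archive: the "template function" pattern mentioned above can be sketched in a few lines of self-contained C (hypothetical names, not the driver's actual macros). One generic template is specialized by a compile-time offload mask, and each macro instantiation emits a complete burst function into the same translation unit:

#include <stdint.h>

/* Generic template; imagine hundreds of lines specialized on 'olx'. */
static inline uint16_t
burst_tmpl(void *q, void **pkts, uint16_t n, unsigned int olx)
{
	(void)q; (void)pkts; (void)olx;
	return n;
}

/* One instantiation per offload combination. */
#define BURST_DECL(name, olx) \
static uint16_t \
burst_##name(void *q, void **pkts, uint16_t n) \
{ return burst_tmpl(q, pkts, n, (olx)); }

BURST_DECL(none, 0)       /* No offloads. */
BURST_DECL(csum, 1u << 0) /* Checksum offload. */
BURST_DECL(tso,  1u << 1) /* TSO offload. */
/* ...dozens more, all compiled serially in one .c file. */

Since the build compiles each .c file as one unit, all the instantiations serialize on a single compiler invocation; spreading them over several files lets them build in parallel.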
>
> Signed-off-by: Michael Baum

Acked-by: Viacheslav Ovsiienko

> ---
>  drivers/net/mlx5/linux/mlx5_mp_os.c |   1 +
>  drivers/net/mlx5/linux/mlx5_os.c    |   1 +
>  drivers/net/mlx5/linux/mlx5_verbs.c |   1 +
>  drivers/net/mlx5/mlx5.c             |   1 +
>  drivers/net/mlx5/mlx5_devx.c        |   1 +
>  drivers/net/mlx5/mlx5_ethdev.c      |   1 +
>  drivers/net/mlx5/mlx5_flow.c        |   1 +
>  drivers/net/mlx5/mlx5_flow_dv.c     |   1 +
>  drivers/net/mlx5/mlx5_flow_verbs.c  |   1 +
>  drivers/net/mlx5/mlx5_mr.c          |   1 +
>  drivers/net/mlx5/mlx5_rss.c         |   1 +
>  drivers/net/mlx5/mlx5_rx.h          | 598 ++++++++++++++++++++++++++++++++++++
>  drivers/net/mlx5/mlx5_rxq.c         |   1 +
>  drivers/net/mlx5/mlx5_rxtx.c        |   1 +
>  drivers/net/mlx5/mlx5_rxtx.h        | 569 ----------------------------------
>  drivers/net/mlx5/mlx5_rxtx_vec.c    |   1 +
>  drivers/net/mlx5/mlx5_stats.c       |   1 +
>  drivers/net/mlx5/mlx5_trigger.c     |   1 +
>  drivers/net/mlx5/mlx5_txpp.c        |   1 +
>  drivers/net/mlx5/mlx5_vlan.c        |   1 +
>  drivers/net/mlx5/windows/mlx5_os.c  |   1 +
>  21 files changed, 617 insertions(+), 569 deletions(-)
>  create mode 100644 drivers/net/mlx5/mlx5_rx.h
>
> diff --git a/drivers/net/mlx5/linux/mlx5_mp_os.c b/drivers/net/mlx5/linux/mlx5_mp_os.c
> index 8011ca8..63fa278 100644
> --- a/drivers/net/mlx5/linux/mlx5_mp_os.c
> +++ b/drivers/net/mlx5/linux/mlx5_mp_os.c
> @@ -16,6 +16,7 @@
>
>  #include "mlx5.h"
>  #include "mlx5_rxtx.h"
> +#include "mlx5_rx.h"
>  #include "mlx5_utils.h"
>
>  int
> diff --git a/drivers/net/mlx5/linux/mlx5_os.c b/drivers/net/mlx5/linux/mlx5_os.c
> index 2d5bcab..97a28ec 100644
> --- a/drivers/net/mlx5/linux/mlx5_os.c
> +++ b/drivers/net/mlx5/linux/mlx5_os.c
> @@ -40,6 +40,7 @@
>  #include "mlx5_common_os.h"
>  #include "mlx5_utils.h"
>  #include "mlx5_rxtx.h"
> +#include "mlx5_rx.h"
>  #include "mlx5_autoconf.h"
>  #include "mlx5_mr.h"
>  #include "mlx5_flow.h"
> diff --git a/drivers/net/mlx5/linux/mlx5_verbs.c b/drivers/net/mlx5/linux/mlx5_verbs.c
> index c7d4b17..73096af 100644
> --- a/drivers/net/mlx5/linux/mlx5_verbs.c
> +++ b/drivers/net/mlx5/linux/mlx5_verbs.c
> @@ -22,6 +22,7 @@
>  #include
>  #include
>  #include
> +#include
>  #include
>  #include
>
> diff --git a/drivers/net/mlx5/mlx5.c b/drivers/net/mlx5/mlx5.c
> index 9557d06..6f77bc2 100644
> --- a/drivers/net/mlx5/mlx5.c
> +++ b/drivers/net/mlx5/mlx5.c
> @@ -35,6 +35,7 @@
>  #include "mlx5.h"
>  #include "mlx5_utils.h"
>  #include "mlx5_rxtx.h"
> +#include "mlx5_rx.h"
>  #include "mlx5_autoconf.h"
>  #include "mlx5_mr.h"
>  #include "mlx5_flow.h"
> diff --git a/drivers/net/mlx5/mlx5_devx.c b/drivers/net/mlx5/mlx5_devx.c
> index 5c940ed..76935f6 100644
> --- a/drivers/net/mlx5/mlx5_devx.c
> +++ b/drivers/net/mlx5/mlx5_devx.c
> @@ -21,6 +21,7 @@
>  #include "mlx5.h"
>  #include "mlx5_common_os.h"
>  #include "mlx5_rxtx.h"
> +#include "mlx5_rx.h"
>  #include "mlx5_utils.h"
>  #include "mlx5_devx.h"
>  #include "mlx5_flow.h"
> diff --git a/drivers/net/mlx5/mlx5_ethdev.c b/drivers/net/mlx5/mlx5_ethdev.c
> index 564d713..708e3a3 100644
> --- a/drivers/net/mlx5/mlx5_ethdev.c
> +++ b/drivers/net/mlx5/mlx5_ethdev.c
> @@ -23,6 +23,7 @@
>  #include
>
>  #include "mlx5_rxtx.h"
> +#include "mlx5_rx.h"
>  #include "mlx5_autoconf.h"
>
>  /**
> diff --git a/drivers/net/mlx5/mlx5_flow.c b/drivers/net/mlx5/mlx5_flow.c
> index c347f81..b3877a1 100644
> --- a/drivers/net/mlx5/mlx5_flow.c
> +++ b/drivers/net/mlx5/mlx5_flow.c
> @@ -30,6 +30,7 @@
>  #include "mlx5_flow.h"
>  #include "mlx5_flow_os.h"
>  #include "mlx5_rxtx.h"
> +#include "mlx5_rx.h"
>  #include "mlx5_common_os.h"
>  #include "rte_pmd_mlx5.h"
>
> diff --git a/drivers/net/mlx5/mlx5_flow_dv.c b/drivers/net/mlx5/mlx5_flow_dv.c
> index 533dadf..cac05fb 100644
> --- a/drivers/net/mlx5/mlx5_flow_dv.c
> +++ b/drivers/net/mlx5/mlx5_flow_dv.c
> @@ -33,6 +33,7 @@
>  #include "mlx5_flow.h"
>  #include "mlx5_flow_os.h"
>  #include "mlx5_rxtx.h"
> +#include "mlx5_rx.h"
>  #include "rte_pmd_mlx5.h"
>
>  #if defined(HAVE_IBV_FLOW_DV_SUPPORT) || !defined(HAVE_INFINIBAND_VERBS_H)
> diff --git a/drivers/net/mlx5/mlx5_flow_verbs.c b/drivers/net/mlx5/mlx5_flow_verbs.c
> index b442b9b..c331350 100644
> --- a/drivers/net/mlx5/mlx5_flow_verbs.c
> +++ b/drivers/net/mlx5/mlx5_flow_verbs.c
> @@ -24,6 +24,7 @@
>  #include "mlx5.h"
>  #include "mlx5_flow.h"
>  #include "mlx5_rxtx.h"
> +#include "mlx5_rx.h"
>
>  #define VERBS_SPEC_INNER(item_flags) \
>  	(!!((item_flags) & MLX5_FLOW_LAYER_TUNNEL) ? IBV_FLOW_SPEC_INNER : 0)
> diff --git a/drivers/net/mlx5/mlx5_mr.c b/drivers/net/mlx5/mlx5_mr.c
> index 3255393..2014936 100644
> --- a/drivers/net/mlx5/mlx5_mr.c
> +++ b/drivers/net/mlx5/mlx5_mr.c
> @@ -15,6 +15,7 @@
>  #include "mlx5.h"
>  #include "mlx5_mr.h"
>  #include "mlx5_rxtx.h"
> +#include "mlx5_rx.h"
>
>  struct mr_find_contig_memsegs_data {
>  	uintptr_t addr;
> diff --git a/drivers/net/mlx5/mlx5_rss.c b/drivers/net/mlx5/mlx5_rss.c
> index dc0131a..c32129c 100644
> --- a/drivers/net/mlx5/mlx5_rss.c
> +++ b/drivers/net/mlx5/mlx5_rss.c
> @@ -16,6 +16,7 @@
>  #include "mlx5_defs.h"
>  #include "mlx5.h"
>  #include "mlx5_rxtx.h"
> +#include "mlx5_rx.h"
>
>  /**
>   * DPDK callback to update the RSS hash configuration.
> diff --git a/drivers/net/mlx5/mlx5_rx.h b/drivers/net/mlx5/mlx5_rx.h
> new file mode 100644
> index 0000000..83b1f38
> --- /dev/null
> +++ b/drivers/net/mlx5/mlx5_rx.h
> @@ -0,0 +1,598 @@
> +/* SPDX-License-Identifier: BSD-3-Clause
> + * Copyright 2021 6WIND S.A.
> + * Copyright 2021 Mellanox Technologies, Ltd
> + */
> +
> +#ifndef RTE_PMD_MLX5_RX_H_
> +#define RTE_PMD_MLX5_RX_H_
> +
> +#include
> +#include
> +
> +#include
> +#include
> +#include
> +#include
> +
> +#include
> +
> +#include "mlx5.h"
> +#include "mlx5_autoconf.h"
> +#include "mlx5_mr.h"
> +
> +/* Support tunnel matching. */
> +#define MLX5_FLOW_TUNNEL 10
> +
> +struct mlx5_rxq_stats {
> +#ifdef MLX5_PMD_SOFT_COUNTERS
> +	uint64_t ipackets; /**< Total of successfully received packets. */
> +	uint64_t ibytes; /**< Total of successfully received bytes. */
> +#endif
> +	uint64_t idropped; /**< Total of packets dropped when RX ring full. */
> +	uint64_t rx_nombuf; /**< Total of RX mbuf allocation failures. */
> +};
> +
> +/* Compressed CQE context. */
> +struct rxq_zip {
> +	uint16_t ai; /* Array index. */
> +	uint16_t ca; /* Current array index. */
> +	uint16_t na; /* Next array index. */
> +	uint16_t cq_ci; /* The next CQE. */
> +	uint32_t cqe_cnt; /* Number of CQEs. */
> +};
> +
> +/* Multi-Packet RQ buffer header. */
> +struct mlx5_mprq_buf {
> +	struct rte_mempool *mp;
> +	uint16_t refcnt; /* Atomically accessed refcnt. */
> +	uint8_t pad[RTE_PKTMBUF_HEADROOM]; /* Headroom for the first packet. */
> +	struct rte_mbuf_ext_shared_info shinfos[];
> +	/*
> +	 * Shared information per stride.
> +	 * More memory will be allocated for the first stride head-room and for
> +	 * the strides data.
> +	 */
> +} __rte_cache_aligned;
> +
> +/* Get pointer to the first stride. */
> +#define mlx5_mprq_buf_addr(ptr, strd_n) (RTE_PTR_ADD((ptr), \
> +				sizeof(struct mlx5_mprq_buf) + \
> +				(strd_n) * \
> +				sizeof(struct rte_mbuf_ext_shared_info) + \
> +				RTE_PKTMBUF_HEADROOM))
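For orientation, my reading of the chunk layout this macro walks over (one allocation from the MPRQ mempool):

/*
 * ptr                                               mlx5_mprq_buf_addr(ptr, strd_n)
 *  |                                                 |
 *  v                                                 v
 *  +----------------------+--------------------------+----------------------+-------------
 *  | struct mlx5_mprq_buf | strd_n shinfos[] entries | RTE_PKTMBUF_HEADROOM | stride data...
 *  +----------------------+--------------------------+----------------------+-------------
 *
 * The first stride's address skips the buffer header, the per-stride
 * shared-info array, and the headroom reserved for the first packet.
 */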
> +
> +#define MLX5_MIN_SINGLE_STRIDE_LOG_NUM_BYTES 6
> +#define MLX5_MIN_SINGLE_WQE_LOG_NUM_STRIDES 9
> +
> +enum mlx5_rxq_err_state {
> +	MLX5_RXQ_ERR_STATE_NO_ERROR = 0,
> +	MLX5_RXQ_ERR_STATE_NEED_RESET,
> +	MLX5_RXQ_ERR_STATE_NEED_READY,
> +};
> +
> +enum mlx5_rqx_code {
> +	MLX5_RXQ_CODE_EXIT = 0,
> +	MLX5_RXQ_CODE_NOMBUF,
> +	MLX5_RXQ_CODE_DROPPED,
> +};
> +
> +struct mlx5_eth_rxseg {
> +	struct rte_mempool *mp; /**< Memory pool to allocate segment from. */
> +	uint16_t length; /**< Segment data length, configures split point. */
> +	uint16_t offset; /**< Data offset from beginning of mbuf data buffer. */
> +	uint32_t reserved; /**< Reserved field. */
> +};
> +
> +/* RX queue descriptor. */
> +struct mlx5_rxq_data {
> +	unsigned int csum:1; /* Enable checksum offloading. */
> +	unsigned int hw_timestamp:1; /* Enable HW timestamp. */
> +	unsigned int rt_timestamp:1; /* Realtime timestamp format. */
> +	unsigned int vlan_strip:1; /* Enable VLAN stripping. */
> +	unsigned int crc_present:1; /* CRC must be subtracted. */
> +	unsigned int sges_n:3; /* Log 2 of SGEs (max buffers per packet). */
> +	unsigned int cqe_n:4; /* Log 2 of CQ elements. */
> +	unsigned int elts_n:4; /* Log 2 of Mbufs. */
> +	unsigned int rss_hash:1; /* RSS hash result is enabled. */
> +	unsigned int mark:1; /* Marked flow available on the queue. */
> +	unsigned int strd_num_n:5; /* Log 2 of the number of stride. */
> +	unsigned int strd_sz_n:4; /* Log 2 of stride size. */
> +	unsigned int strd_shift_en:1; /* Enable 2bytes shift on a stride. */
> +	unsigned int err_state:2; /* enum mlx5_rxq_err_state. */
> +	unsigned int strd_scatter_en:1; /* Scattered packets from a stride. */
> +	unsigned int lro:1; /* Enable LRO. */
> +	unsigned int dynf_meta:1; /* Dynamic metadata is configured. */
> +	unsigned int mcqe_format:3; /* CQE compression format. */
> +	volatile uint32_t *rq_db;
> +	volatile uint32_t *cq_db;
> +	uint16_t port_id;
> +	uint32_t elts_ci;
> +	uint32_t rq_ci;
> +	uint16_t consumed_strd; /* Number of consumed strides in WQE. */
> +	uint32_t rq_pi;
> +	uint32_t cq_ci;
> +	uint16_t rq_repl_thresh; /* Threshold for buffer replenishment. */
> +	uint32_t byte_mask;
> +	union {
> +		struct rxq_zip zip; /* Compressed context. */
> +		uint16_t decompressed;
> +		/* Number of ready mbufs decompressed from the CQ. */
> +	};
> +	struct mlx5_mr_ctrl mr_ctrl; /* MR control descriptor. */
> +	uint16_t mprq_max_memcpy_len; /* Maximum size of packet to memcpy. */
> +	volatile void *wqes;
> +	volatile struct mlx5_cqe(*cqes)[];
> +	struct rte_mbuf *(*elts)[];
> +	struct mlx5_mprq_buf *(*mprq_bufs)[];
> +	struct rte_mempool *mp;
> +	struct rte_mempool *mprq_mp; /* Mempool for Multi-Packet RQ. */
> +	struct mlx5_mprq_buf *mprq_repl; /* Stashed mbuf for replenish. */
> +	struct mlx5_dev_ctx_shared *sh; /* Shared context. */
> +	uint16_t idx; /* Queue index. */
> +	struct mlx5_rxq_stats stats;
> +	rte_xmm_t mbuf_initializer; /* Default rearm/flags for vectorized Rx. */
> +	struct rte_mbuf fake_mbuf; /* elts padding for vectorized Rx. */
> +	void *cq_uar; /* Verbs CQ user access region. */
> +	uint32_t cqn; /* CQ number. */
> +	uint8_t cq_arm_sn; /* CQ arm seq number. */
> +#ifndef RTE_ARCH_64
> +	rte_spinlock_t *uar_lock_cq;
> +	/* CQ (UAR) access lock required for 32bit implementations */
> +#endif
> +	uint32_t tunnel; /* Tunnel information. */
> +	int timestamp_offset; /* Dynamic mbuf field for timestamp. */
> +	uint64_t timestamp_rx_flag; /* Dynamic mbuf flag for timestamp. */
> +	uint64_t flow_meta_mask;
> +	int32_t flow_meta_offset;
> +	uint32_t flow_meta_port_mask;
> +	uint32_t rxseg_n; /* Number of split segment descriptions. */
> +	struct mlx5_eth_rxseg rxseg[MLX5_MAX_RXQ_NSEG];
> +	/* Buffer split segment descriptions - sizes, offsets, pools. */
> +} __rte_cache_aligned;
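Many of the bit-fields above store log2-encoded sizes; a tiny hypothetical helper (name mine, not the driver's) shows how they expand:

#include <stdint.h>

/* Expand a log2-encoded field of struct mlx5_rxq_data, e.g. elts_n = 10
 * describes 1024 mbufs and strd_sz_n = 11 a 2048-byte stride. */
static inline uint32_t
rxq_log2_to_count(unsigned int log2_val)
{
	return 1u << log2_val;
}

So the CQ element count is rxq_log2_to_count(rxq->cqe_n) and the per-WQE stride count is rxq_log2_to_count(rxq->strd_num_n).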
> +
> +enum mlx5_rxq_type {
> +	MLX5_RXQ_TYPE_STANDARD, /* Standard Rx queue. */
> +	MLX5_RXQ_TYPE_HAIRPIN, /* Hairpin Rx queue. */
> +	MLX5_RXQ_TYPE_UNDEFINED,
> +};
> +
> +/* RX queue control descriptor. */
> +struct mlx5_rxq_ctrl {
> +	struct mlx5_rxq_data rxq; /* Data path structure. */
> +	LIST_ENTRY(mlx5_rxq_ctrl) next; /* Pointer to the next element. */
> +	uint32_t refcnt; /* Reference counter. */
> +	struct mlx5_rxq_obj *obj; /* Verbs/DevX elements. */
> +	struct mlx5_priv *priv; /* Back pointer to private data. */
> +	enum mlx5_rxq_type type; /* Rxq type. */
> +	unsigned int socket; /* CPU socket ID for allocations. */
> +	unsigned int irq:1; /* Whether IRQ is enabled. */
> +	uint32_t flow_mark_n; /* Number of Mark/Flag flows using this Queue. */
> +	uint32_t flow_tunnels_n[MLX5_FLOW_TUNNEL]; /* Tunnels counters. */
> +	uint32_t wqn; /* WQ number. */
> +	uint16_t dump_file_n; /* Number of dump files. */
> +	struct rte_eth_hairpin_conf hairpin_conf; /* Hairpin configuration. */
> +	uint32_t hairpin_status; /* Hairpin binding status. */
> +};
> +
> +/* mlx5_rxq.c */
> +
> +extern uint8_t rss_hash_default_key[];
> +
> +unsigned int mlx5_rxq_cqe_num(struct mlx5_rxq_data *rxq_data);
> +int mlx5_mprq_free_mp(struct rte_eth_dev *dev);
> +int mlx5_mprq_alloc_mp(struct rte_eth_dev *dev);
> +int mlx5_rx_queue_start(struct rte_eth_dev *dev, uint16_t queue_id);
> +int mlx5_rx_queue_stop(struct rte_eth_dev *dev, uint16_t queue_id);
> +int mlx5_rx_queue_start_primary(struct rte_eth_dev *dev, uint16_t queue_id);
> +int mlx5_rx_queue_stop_primary(struct rte_eth_dev *dev, uint16_t queue_id);
> +int mlx5_rx_queue_setup(struct rte_eth_dev *dev, uint16_t idx, uint16_t desc,
> +			unsigned int socket, const struct rte_eth_rxconf *conf,
> +			struct rte_mempool *mp);
> +int mlx5_rx_hairpin_queue_setup
> +	(struct rte_eth_dev *dev, uint16_t idx, uint16_t desc,
> +	 const struct rte_eth_hairpin_conf *hairpin_conf);
> +void mlx5_rx_queue_release(void *dpdk_rxq);
> +int mlx5_rx_intr_vec_enable(struct rte_eth_dev *dev);
> +void mlx5_rx_intr_vec_disable(struct rte_eth_dev *dev);
> +int mlx5_rx_intr_enable(struct rte_eth_dev *dev, uint16_t rx_queue_id);
> +int mlx5_rx_intr_disable(struct rte_eth_dev *dev, uint16_t rx_queue_id);
> +int mlx5_rxq_obj_verify(struct rte_eth_dev *dev);
> +struct mlx5_rxq_ctrl *mlx5_rxq_new(struct rte_eth_dev *dev, uint16_t idx,
> +				   uint16_t desc, unsigned int socket,
> +				   const struct rte_eth_rxconf *conf,
> +				   const struct rte_eth_rxseg_split *rx_seg,
> +				   uint16_t n_seg);
> +struct mlx5_rxq_ctrl *mlx5_rxq_hairpin_new
> +	(struct rte_eth_dev *dev, uint16_t idx, uint16_t desc,
> +	 const struct rte_eth_hairpin_conf *hairpin_conf);
> +struct mlx5_rxq_ctrl *mlx5_rxq_get(struct rte_eth_dev *dev, uint16_t idx);
> +int mlx5_rxq_release(struct rte_eth_dev *dev, uint16_t idx);
> +int mlx5_rxq_verify(struct rte_eth_dev *dev);
> +int rxq_alloc_elts(struct mlx5_rxq_ctrl *rxq_ctrl);
> +int mlx5_ind_table_obj_verify(struct rte_eth_dev *dev);
> +struct mlx5_ind_table_obj *mlx5_ind_table_obj_get(struct rte_eth_dev *dev,
> +						  const uint16_t *queues,
> +						  uint32_t queues_n);
> +int mlx5_ind_table_obj_release(struct rte_eth_dev *dev,
> +			       struct mlx5_ind_table_obj *ind_tbl,
> +			       bool standalone);
> +int mlx5_ind_table_obj_setup(struct rte_eth_dev *dev,
> +			     struct mlx5_ind_table_obj *ind_tbl);
> +int mlx5_ind_table_obj_modify(struct rte_eth_dev *dev,
> +			      struct mlx5_ind_table_obj *ind_tbl,
> +			      uint16_t *queues, const uint32_t queues_n,
> +			      bool standalone);
> +struct mlx5_cache_entry *mlx5_hrxq_create_cb(struct mlx5_cache_list *list,
> +		struct mlx5_cache_entry *entry __rte_unused, void *cb_ctx);
> +int mlx5_hrxq_match_cb(struct mlx5_cache_list *list,
> +		       struct mlx5_cache_entry *entry,
> +		       void *cb_ctx);
> +void mlx5_hrxq_remove_cb(struct mlx5_cache_list *list,
> +			 struct mlx5_cache_entry *entry);
> +uint32_t mlx5_hrxq_get(struct rte_eth_dev *dev,
> +		       struct mlx5_flow_rss_desc *rss_desc);
> +int mlx5_hrxq_release(struct rte_eth_dev *dev, uint32_t hxrq_idx);
> +uint32_t mlx5_hrxq_verify(struct rte_eth_dev *dev);
> +enum mlx5_rxq_type mlx5_rxq_get_type(struct rte_eth_dev *dev, uint16_t idx);
> +const struct rte_eth_hairpin_conf *mlx5_rxq_get_hairpin_conf
> +	(struct rte_eth_dev *dev, uint16_t idx);
> +struct mlx5_hrxq *mlx5_drop_action_create(struct rte_eth_dev *dev);
> +void mlx5_drop_action_destroy(struct rte_eth_dev *dev);
> +uint64_t mlx5_get_rx_port_offloads(void);
> +uint64_t mlx5_get_rx_queue_offloads(struct rte_eth_dev *dev);
> +void mlx5_rxq_timestamp_set(struct rte_eth_dev *dev);
> +int mlx5_hrxq_modify(struct rte_eth_dev *dev, uint32_t hxrq_idx,
> +		     const uint8_t *rss_key, uint32_t rss_key_len,
> +		     uint64_t hash_fields,
> +		     const uint16_t *queues, uint32_t queues_n);
> +
> +/* mlx5_rxtx.c */
> +
> +uint16_t mlx5_rx_burst(void *dpdk_rxq, struct rte_mbuf **pkts, uint16_t pkts_n);
> +void mlx5_rxq_initialize(struct mlx5_rxq_data *rxq);
> +__rte_noinline int mlx5_rx_err_handle(struct mlx5_rxq_data *rxq, uint8_t vec);
> +void mlx5_mprq_buf_free_cb(void *addr, void *opaque);
> +void mlx5_mprq_buf_free(struct mlx5_mprq_buf *buf);
> +uint16_t mlx5_rx_burst_mprq(void *dpdk_rxq, struct rte_mbuf **pkts,
> +			    uint16_t pkts_n);
> +uint16_t removed_rx_burst(void *dpdk_rxq, struct rte_mbuf **pkts,
> +			  uint16_t pkts_n);
> +int mlx5_rx_descriptor_status(void *rx_queue, uint16_t offset);
> +uint32_t mlx5_rx_queue_count(struct rte_eth_dev *dev, uint16_t rx_queue_id);
> +void mlx5_rxq_info_get(struct rte_eth_dev *dev, uint16_t queue_id,
> +		       struct rte_eth_rxq_info *qinfo);
> +int mlx5_rx_burst_mode_get(struct rte_eth_dev *dev, uint16_t rx_queue_id,
> +			   struct rte_eth_burst_mode *mode);
> +
> +/* Vectorized version of mlx5_rxtx.c */
> +int mlx5_rxq_check_vec_support(struct mlx5_rxq_data *rxq_data);
> +int mlx5_check_vec_rx_support(struct rte_eth_dev *dev);
> +uint16_t mlx5_rx_burst_vec(void *dpdk_rxq, struct rte_mbuf **pkts,
> +			   uint16_t pkts_n);
> +uint16_t mlx5_rx_burst_mprq_vec(void *dpdk_rxq, struct rte_mbuf **pkts,
> +				uint16_t pkts_n);
> +
> +/* mlx5_mr.c */
> +
> +uint32_t mlx5_rx_addr2mr_bh(struct mlx5_rxq_data *rxq, uintptr_t addr);
> +
> +/**
> + * Query LKey from a packet buffer for Rx. No need to flush local caches for Rx
> + * as mempool is pre-configured and static.
> + *
> + * @param rxq
> + *   Pointer to Rx queue structure.
> + * @param addr
> + *   Address to search.
> + *
> + * @return
> + *   Searched LKey on success, UINT32_MAX on no match.
> + */
> +static __rte_always_inline uint32_t
> +mlx5_rx_addr2mr(struct mlx5_rxq_data *rxq, uintptr_t addr)
> +{
> +	struct mlx5_mr_ctrl *mr_ctrl = &rxq->mr_ctrl;
> +	uint32_t lkey;
> +
> +	/* Linear search on MR cache array. */
> +	lkey = mlx5_mr_lookup_lkey(mr_ctrl->cache, &mr_ctrl->mru,
> +				   MLX5_MR_CACHE_N, addr);
> +	if (likely(lkey != UINT32_MAX))
> +		return lkey;
> +	/* Take slower bottom-half (Binary Search) on miss. */
> +	return mlx5_rx_addr2mr_bh(rxq, addr);
> +}
> +
> +#define mlx5_rx_mb2mr(rxq, mb) mlx5_rx_addr2mr(rxq, (uintptr_t)((mb)->buf_addr))
> +
> +/**
> + * Convert timestamp from HW format to linear counter
> + * from Packet Pacing Clock Queue CQE timestamp format.
> + *
> + * @param sh
> + *   Pointer to the device shared context. Might be needed
> + *   to convert according current device configuration.
> + * @param ts
> + *   Timestamp from CQE to convert.
> + * @return
> + *   UTC in nanoseconds
> + */
> +static __rte_always_inline uint64_t
> +mlx5_txpp_convert_rx_ts(struct mlx5_dev_ctx_shared *sh, uint64_t ts)
> +{
> +	RTE_SET_USED(sh);
> +	return (ts & UINT32_MAX) + (ts >> 32) * NS_PER_S;
> +}
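A worked example of the conversion above, assuming (as the formula implies) whole seconds in the upper 32 bits and the nanosecond remainder in the lower 32 bits of the CQE timestamp:

/* ts = (5 << 32) | 123456789 encodes 5 s + 123456789 ns. */
uint64_t ts = ((uint64_t)5 << 32) | 123456789;
uint64_t ns = mlx5_txpp_convert_rx_ts(NULL, ts);
/* ns == 123456789 + 5 * NS_PER_S == 5123456789 nanoseconds. */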
> +
> +/**
> + * Set timestamp in mbuf dynamic field.
> + *
> + * @param mbuf
> + *   Structure to write into.
> + * @param offset
> + *   Dynamic field offset in mbuf structure.
> + * @param timestamp
> + *   Value to write.
> + */
> +static __rte_always_inline void
> +mlx5_timestamp_set(struct rte_mbuf *mbuf, int offset,
> +		   rte_mbuf_timestamp_t timestamp)
> +{
> +	*RTE_MBUF_DYNFIELD(mbuf, offset, rte_mbuf_timestamp_t *) = timestamp;
> +}
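For context, the offset consumed here comes from the mbuf dynamic-field registry; a minimal, hypothetical registration sketch (field name mine, not the one the PMD actually registers):

#include <rte_mbuf_dyn.h>

/* Returns the byte offset within struct rte_mbuf that is later passed to
 * mlx5_timestamp_set(), or a negative value on failure. */
static int
example_register_ts_field(void)
{
	static const struct rte_mbuf_dynfield desc = {
		.name = "example_rx_timestamp",
		.size = sizeof(rte_mbuf_timestamp_t),
		.align = __alignof__(rte_mbuf_timestamp_t),
	};

	return rte_mbuf_dynfield_register(&desc);
}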
> +
> +/**
> + * Replace MPRQ buffer.
> + *
> + * @param rxq
> + *   Pointer to Rx queue structure.
> + * @param rq_idx
> + *   RQ index to replace.
> + */
> +static __rte_always_inline void
> +mprq_buf_replace(struct mlx5_rxq_data *rxq, uint16_t rq_idx)
> +{
> +	const uint32_t strd_n = 1 << rxq->strd_num_n;
> +	struct mlx5_mprq_buf *rep = rxq->mprq_repl;
> +	volatile struct mlx5_wqe_data_seg *wqe =
> +		&((volatile struct mlx5_wqe_mprq *)rxq->wqes)[rq_idx].dseg;
> +	struct mlx5_mprq_buf *buf = (*rxq->mprq_bufs)[rq_idx];
> +	void *addr;
> +
> +	if (__atomic_load_n(&buf->refcnt, __ATOMIC_RELAXED) > 1) {
> +		MLX5_ASSERT(rep != NULL);
> +		/* Replace MPRQ buf. */
> +		(*rxq->mprq_bufs)[rq_idx] = rep;
> +		/* Replace WQE. */
> +		addr = mlx5_mprq_buf_addr(rep, strd_n);
> +		wqe->addr = rte_cpu_to_be_64((uintptr_t)addr);
> +		/* If there's only one MR, no need to replace LKey in WQE. */
> +		if (unlikely(mlx5_mr_btree_len(&rxq->mr_ctrl.cache_bh) > 1))
> +			wqe->lkey = mlx5_rx_addr2mr(rxq, (uintptr_t)addr);
> +		/* Stash a mbuf for next replacement. */
> +		if (likely(!rte_mempool_get(rxq->mprq_mp, (void **)&rep)))
> +			rxq->mprq_repl = rep;
> +		else
> +			rxq->mprq_repl = NULL;
> +		/* Release the old buffer. */
> +		mlx5_mprq_buf_free(buf);
> +	} else if (unlikely(rxq->mprq_repl == NULL)) {
> +		struct mlx5_mprq_buf *rep;
> +
> +		/*
> +		 * Currently, the MPRQ mempool is out of buffer
> +		 * and doing memcpy regardless of the size of Rx
> +		 * packet. Retry allocation to get back to
> +		 * normal.
> +		 */
> +		if (!rte_mempool_get(rxq->mprq_mp, (void **)&rep))
> +			rxq->mprq_repl = rep;
> +	}
> +}
> +
> +/**
> + * Attach or copy MPRQ buffer content to a packet.
> + *
> + * @param rxq
> + *   Pointer to Rx queue structure.
> + * @param pkt
> + *   Pointer to a packet to fill.
> + * @param len
> + *   Packet length.
> + * @param buf
> + *   Pointer to a MPRQ buffer to take the data from.
> + * @param strd_idx
> + *   Stride index to start from.
> + * @param strd_cnt
> + *   Number of strides to consume.
> + */
> +static __rte_always_inline enum mlx5_rqx_code
> +mprq_buf_to_pkt(struct mlx5_rxq_data *rxq, struct rte_mbuf *pkt, uint32_t len,
> +		struct mlx5_mprq_buf *buf, uint16_t strd_idx, uint16_t strd_cnt)
> +{
> +	const uint32_t strd_n = 1 << rxq->strd_num_n;
> +	const uint16_t strd_sz = 1 << rxq->strd_sz_n;
> +	const uint16_t strd_shift =
> +		MLX5_MPRQ_STRIDE_SHIFT_BYTE * rxq->strd_shift_en;
> +	const int32_t hdrm_overlap =
> +		len + RTE_PKTMBUF_HEADROOM - strd_cnt * strd_sz;
> +	const uint32_t offset = strd_idx * strd_sz + strd_shift;
> +	void *addr = RTE_PTR_ADD(mlx5_mprq_buf_addr(buf, strd_n), offset);
> +
> +	/*
> +	 * Memcpy packets to the target mbuf if:
> +	 * - The size of packet is smaller than mprq_max_memcpy_len.
> +	 * - Out of buffer in the Mempool for Multi-Packet RQ.
> +	 * - The packet's stride overlaps a headroom and scatter is off.
> +	 */
> +	if (len <= rxq->mprq_max_memcpy_len ||
> +	    rxq->mprq_repl == NULL ||
> +	    (hdrm_overlap > 0 && !rxq->strd_scatter_en)) {
> +		if (likely(len <=
> +			   (uint32_t)(pkt->buf_len - RTE_PKTMBUF_HEADROOM))) {
> +			rte_memcpy(rte_pktmbuf_mtod(pkt, void *),
> +				   addr, len);
> +			DATA_LEN(pkt) = len;
> +		} else if (rxq->strd_scatter_en) {
> +			struct rte_mbuf *prev = pkt;
> +			uint32_t seg_len = RTE_MIN(len, (uint32_t)
> +				(pkt->buf_len - RTE_PKTMBUF_HEADROOM));
> +			uint32_t rem_len = len - seg_len;
> +
> +			rte_memcpy(rte_pktmbuf_mtod(pkt, void *),
> +				   addr, seg_len);
> +			DATA_LEN(pkt) = seg_len;
> +			while (rem_len) {
> +				struct rte_mbuf *next =
> +					rte_pktmbuf_alloc(rxq->mp);
> +
> +				if (unlikely(next == NULL))
> +					return MLX5_RXQ_CODE_NOMBUF;
> +				NEXT(prev) = next;
> +				SET_DATA_OFF(next, 0);
> +				addr = RTE_PTR_ADD(addr, seg_len);
> +				seg_len = RTE_MIN(rem_len, (uint32_t)
> +					(next->buf_len - RTE_PKTMBUF_HEADROOM));
> +				rte_memcpy
> +					(rte_pktmbuf_mtod(next, void *),
> +					 addr, seg_len);
> +				DATA_LEN(next) = seg_len;
> +				rem_len -= seg_len;
> +				prev = next;
> +				++NB_SEGS(pkt);
> +			}
> +		} else {
> +			return MLX5_RXQ_CODE_DROPPED;
> +		}
> +	} else {
> +		rte_iova_t buf_iova;
> +		struct rte_mbuf_ext_shared_info *shinfo;
> +		uint16_t buf_len = strd_cnt * strd_sz;
> +		void *buf_addr;
> +
> +		/* Increment the refcnt of the whole chunk. */
> +		__atomic_add_fetch(&buf->refcnt, 1, __ATOMIC_RELAXED);
> +		MLX5_ASSERT(__atomic_load_n(&buf->refcnt,
> +			    __ATOMIC_RELAXED) <= strd_n + 1);
> +		buf_addr = RTE_PTR_SUB(addr, RTE_PKTMBUF_HEADROOM);
> +		/*
> +		 * MLX5 device doesn't use iova but it is necessary in a
> +		 * case where the Rx packet is transmitted via a
> +		 * different PMD.
> +		 */
> +		buf_iova = rte_mempool_virt2iova(buf) +
> +			   RTE_PTR_DIFF(buf_addr, buf);
> +		shinfo = &buf->shinfos[strd_idx];
> +		rte_mbuf_ext_refcnt_set(shinfo, 1);
> +		/*
> +		 * EXT_ATTACHED_MBUF will be set to pkt->ol_flags when
> +		 * attaching the stride to mbuf and more offload flags
> +		 * will be added below by calling rxq_cq_to_mbuf().
> +		 * Other fields will be overwritten.
> +		 */
> +		rte_pktmbuf_attach_extbuf(pkt, buf_addr, buf_iova,
> +					  buf_len, shinfo);
> +		/* Set mbuf head-room. */
> +		SET_DATA_OFF(pkt, RTE_PKTMBUF_HEADROOM);
> +		MLX5_ASSERT(pkt->ol_flags == EXT_ATTACHED_MBUF);
> +		MLX5_ASSERT(rte_pktmbuf_tailroom(pkt) >=
> +			len - (hdrm_overlap > 0 ? hdrm_overlap : 0));
> +		DATA_LEN(pkt) = len;
> +		/*
> +		 * Copy the last fragment of a packet (up to headroom
> +		 * size bytes) in case there is a stride overlap with
> +		 * a next packet's headroom. Allocate a separate mbuf
> +		 * to store this fragment and link it. Scatter is on.
> +		 */
> +		if (hdrm_overlap > 0) {
> +			MLX5_ASSERT(rxq->strd_scatter_en);
> +			struct rte_mbuf *seg =
> +				rte_pktmbuf_alloc(rxq->mp);
> +
> +			if (unlikely(seg == NULL))
> +				return MLX5_RXQ_CODE_NOMBUF;
> +			SET_DATA_OFF(seg, 0);
> +			rte_memcpy(rte_pktmbuf_mtod(seg, void *),
> +				   RTE_PTR_ADD(addr, len - hdrm_overlap),
> +				   hdrm_overlap);
> +			DATA_LEN(seg) = hdrm_overlap;
> +			DATA_LEN(pkt) = len - hdrm_overlap;
> +			NEXT(pkt) = seg;
> +			NB_SEGS(pkt) = 2;
> +		}
> +	}
> +	return MLX5_RXQ_CODE_EXIT;
> +}
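A quick numeric illustration of the hdrm_overlap arithmetic above (hypothetical values):

/* strd_sz = 2048, strd_cnt = 1, len = 2000, RTE_PKTMBUF_HEADROOM = 128:
 * hdrm_overlap = 2000 + 128 - 2048 = 80 > 0, i.e. the last 80 bytes of
 * this packet occupy the region where the next packet's headroom starts.
 * With scatter enabled they are copied into a linked mbuf (the branch
 * above); otherwise the whole packet is memcpy'd or dropped. */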
> +
> +/**
> + * Check whether Multi-Packet RQ can be enabled for the device.
> + *
> + * @param dev
> + *   Pointer to Ethernet device.
> + *
> + * @return
> + *   1 if supported, negative errno value if not.
> + */
> +static __rte_always_inline int
> +mlx5_check_mprq_support(struct rte_eth_dev *dev)
> +{
> +	struct mlx5_priv *priv = dev->data->dev_private;
> +
> +	if (priv->config.mprq.enabled &&
> +	    priv->rxqs_n >= priv->config.mprq.min_rxqs_num)
> +		return 1;
> +	return -ENOTSUP;
> +}
> +
> +/**
> + * Check whether Multi-Packet RQ is enabled for the Rx queue.
> + *
> + * @param rxq
> + *   Pointer to receive queue structure.
> + *
> + * @return
> + *   0 if disabled, otherwise enabled.
> + */
> +static __rte_always_inline int
> +mlx5_rxq_mprq_enabled(struct mlx5_rxq_data *rxq)
> +{
> +	return rxq->strd_num_n > 0;
> +}
> +
> +/**
> + * Check whether Multi-Packet RQ is enabled for the device.
> + *
> + * @param dev
> + *   Pointer to Ethernet device.
> + *
> + * @return
> + *   0 if disabled, otherwise enabled.
> + */
> +static __rte_always_inline int
> +mlx5_mprq_enabled(struct rte_eth_dev *dev)
> +{
> +	struct mlx5_priv *priv = dev->data->dev_private;
> +	uint32_t i;
> +	uint16_t n = 0;
> +	uint16_t n_ibv = 0;
> +
> +	if (mlx5_check_mprq_support(dev) < 0)
> +		return 0;
> +	/* All the configured queues should be enabled. */
> +	for (i = 0; i < priv->rxqs_n; ++i) {
> +		struct mlx5_rxq_data *rxq = (*priv->rxqs)[i];
> +		struct mlx5_rxq_ctrl *rxq_ctrl = container_of
> +			(rxq, struct mlx5_rxq_ctrl, rxq);
> +
> +		if (rxq == NULL || rxq_ctrl->type != MLX5_RXQ_TYPE_STANDARD)
> +			continue;
> +		n_ibv++;
> +		if (mlx5_rxq_mprq_enabled(rxq))
> +			++n;
> +	}
> +	/* Multi-Packet RQ can't be partially configured. */
> +	MLX5_ASSERT(n == 0 || n == n_ibv);
> +	return n == n_ibv;
> +}
> +
> +#endif /* RTE_PMD_MLX5_RX_H_ */
> diff --git a/drivers/net/mlx5/mlx5_rxq.c b/drivers/net/mlx5/mlx5_rxq.c
> index 9009eb8..19df0fa 100644
> --- a/drivers/net/mlx5/mlx5_rxq.c
> +++ b/drivers/net/mlx5/mlx5_rxq.c
> @@ -25,6 +25,7 @@
>  #include "mlx5_defs.h"
>  #include "mlx5.h"
>  #include "mlx5_rxtx.h"
> +#include "mlx5_rx.h"
>  #include "mlx5_utils.h"
>  #include "mlx5_autoconf.h"
>
> diff --git a/drivers/net/mlx5/mlx5_rxtx.c b/drivers/net/mlx5/mlx5_rxtx.c
> index c76b995..d004e1e 100644
> --- a/drivers/net/mlx5/mlx5_rxtx.c
> +++ b/drivers/net/mlx5/mlx5_rxtx.c
> @@ -25,6 +25,7 @@
>  #include "mlx5_mr.h"
>  #include "mlx5_utils.h"
>  #include "mlx5_rxtx.h"
> +#include "mlx5_rx.h"
>
>  /* TX burst subroutines return codes. */
>  enum mlx5_txcmp_code {
> diff --git a/drivers/net/mlx5/mlx5_rxtx.h b/drivers/net/mlx5/mlx5_rxtx.h
> index 4f0fda0..d443db4 100644
> --- a/drivers/net/mlx5/mlx5_rxtx.h
> +++ b/drivers/net/mlx5/mlx5_rxtx.h
> @@ -31,21 +31,10 @@
>  #include "mlx5_autoconf.h"
>  #include "mlx5_mr.h"
>
> -/* Support tunnel matching. */
> -#define MLX5_FLOW_TUNNEL 10
>
>  /* Mbuf dynamic flag offset for inline. */
>  extern uint64_t rte_net_mlx5_dynf_inline_mask;
>
> -struct mlx5_rxq_stats {
> -#ifdef MLX5_PMD_SOFT_COUNTERS
> -	uint64_t ipackets; /**< Total of successfully received packets. */
> -	uint64_t ibytes; /**< Total of successfully received bytes. */
> -#endif
> -	uint64_t idropped; /**< Total of packets dropped when RX ring full. */
> -	uint64_t rx_nombuf; /**< Total of RX mbuf allocation failures. */
> -};
> -
>  struct mlx5_txq_stats {
>  #ifdef MLX5_PMD_SOFT_COUNTERS
>  	uint64_t opackets; /**< Total of successfully sent packets. */
> @@ -56,148 +45,6 @@ struct mlx5_txq_stats {
>
>  struct mlx5_priv;
>
> -/* Compressed CQE context. */
> -struct rxq_zip {
> -	uint16_t ai; /* Array index. */
> -	uint16_t ca; /* Current array index. */
> -	uint16_t na; /* Next array index. */
> -	uint16_t cq_ci; /* The next CQE. */
> -	uint32_t cqe_cnt; /* Number of CQEs. */
> -};
> -
> -/* Multi-Packet RQ buffer header. */
> -struct mlx5_mprq_buf {
> -	struct rte_mempool *mp;
> -	uint16_t refcnt; /* Atomically accessed refcnt. */
> -	uint8_t pad[RTE_PKTMBUF_HEADROOM]; /* Headroom for the first packet. */
> -	struct rte_mbuf_ext_shared_info shinfos[];
> -	/*
> -	 * Shared information per stride.
> -	 * More memory will be allocated for the first stride head-room and for
> -	 * the strides data.
> -	 */
> -} __rte_cache_aligned;
> -
> -/* Get pointer to the first stride. */
> -#define mlx5_mprq_buf_addr(ptr, strd_n) (RTE_PTR_ADD((ptr), \
> -				sizeof(struct mlx5_mprq_buf) + \
> -				(strd_n) * \
> -				sizeof(struct rte_mbuf_ext_shared_info) + \
> -				RTE_PKTMBUF_HEADROOM))
> -
> -#define MLX5_MIN_SINGLE_STRIDE_LOG_NUM_BYTES 6
> -#define MLX5_MIN_SINGLE_WQE_LOG_NUM_STRIDES 9
> -
> -enum mlx5_rxq_err_state {
> -	MLX5_RXQ_ERR_STATE_NO_ERROR = 0,
> -	MLX5_RXQ_ERR_STATE_NEED_RESET,
> -	MLX5_RXQ_ERR_STATE_NEED_READY,
> -};
> -
> -enum mlx5_rqx_code {
> -	MLX5_RXQ_CODE_EXIT = 0,
> -	MLX5_RXQ_CODE_NOMBUF,
> -	MLX5_RXQ_CODE_DROPPED,
> -};
> -
> -struct mlx5_eth_rxseg {
> -	struct rte_mempool *mp; /**< Memory pool to allocate segment from. */
> -	uint16_t length; /**< Segment data length, configures split point. */
> -	uint16_t offset; /**< Data offset from beginning of mbuf data buffer. */
> -	uint32_t reserved; /**< Reserved field. */
> -};
> -
> -/* RX queue descriptor. */
> -struct mlx5_rxq_data {
> -	unsigned int csum:1; /* Enable checksum offloading. */
> -	unsigned int hw_timestamp:1; /* Enable HW timestamp. */
> -	unsigned int rt_timestamp:1; /* Realtime timestamp format. */
> -	unsigned int vlan_strip:1; /* Enable VLAN stripping. */
> -	unsigned int crc_present:1; /* CRC must be subtracted. */
> -	unsigned int sges_n:3; /* Log 2 of SGEs (max buffers per packet). */
> -	unsigned int cqe_n:4; /* Log 2 of CQ elements. */
> -	unsigned int elts_n:4; /* Log 2 of Mbufs. */
> -	unsigned int rss_hash:1; /* RSS hash result is enabled. */
> -	unsigned int mark:1; /* Marked flow available on the queue. */
> -	unsigned int strd_num_n:5; /* Log 2 of the number of stride. */
> -	unsigned int strd_sz_n:4; /* Log 2 of stride size. */
> -	unsigned int strd_shift_en:1; /* Enable 2bytes shift on a stride. */
> -	unsigned int err_state:2; /* enum mlx5_rxq_err_state. */
> -	unsigned int strd_scatter_en:1; /* Scattered packets from a stride. */
> -	unsigned int lro:1; /* Enable LRO. */
> -	unsigned int dynf_meta:1; /* Dynamic metadata is configured. */
> -	unsigned int mcqe_format:3; /* CQE compression format. */
> -	volatile uint32_t *rq_db;
> -	volatile uint32_t *cq_db;
> -	uint16_t port_id;
> -	uint32_t elts_ci;
> -	uint32_t rq_ci;
> -	uint16_t consumed_strd; /* Number of consumed strides in WQE. */
> -	uint32_t rq_pi;
> -	uint32_t cq_ci;
> -	uint16_t rq_repl_thresh; /* Threshold for buffer replenishment. */
> -	uint32_t byte_mask;
> -	union {
> -		struct rxq_zip zip; /* Compressed context. */
> -		uint16_t decompressed;
> -		/* Number of ready mbufs decompressed from the CQ. */
> -	};
> -	struct mlx5_mr_ctrl mr_ctrl; /* MR control descriptor. */
> -	uint16_t mprq_max_memcpy_len; /* Maximum size of packet to memcpy. */
> -	volatile void *wqes;
> -	volatile struct mlx5_cqe(*cqes)[];
> -	struct rte_mbuf *(*elts)[];
> -	struct mlx5_mprq_buf *(*mprq_bufs)[];
> -	struct rte_mempool *mp;
> -	struct rte_mempool *mprq_mp; /* Mempool for Multi-Packet RQ. */
> -	struct mlx5_mprq_buf *mprq_repl; /* Stashed mbuf for replenish. */
> -	struct mlx5_dev_ctx_shared *sh; /* Shared context. */
> -	uint16_t idx; /* Queue index. */
> -	struct mlx5_rxq_stats stats;
> -	rte_xmm_t mbuf_initializer; /* Default rearm/flags for vectorized Rx. */
> -	struct rte_mbuf fake_mbuf; /* elts padding for vectorized Rx. */
> -	void *cq_uar; /* Verbs CQ user access region. */
> -	uint32_t cqn; /* CQ number. */
> -	uint8_t cq_arm_sn; /* CQ arm seq number. */
> -#ifndef RTE_ARCH_64
> -	rte_spinlock_t *uar_lock_cq;
> -	/* CQ (UAR) access lock required for 32bit implementations */
> -#endif
> -	uint32_t tunnel; /* Tunnel information. */
> -	int timestamp_offset; /* Dynamic mbuf field for timestamp. */
> -	uint64_t timestamp_rx_flag; /* Dynamic mbuf flag for timestamp. */
> -	uint64_t flow_meta_mask;
> -	int32_t flow_meta_offset;
> -	uint32_t flow_meta_port_mask;
> -	uint32_t rxseg_n; /* Number of split segment descriptions. */
> -	struct mlx5_eth_rxseg rxseg[MLX5_MAX_RXQ_NSEG];
> -	/* Buffer split segment descriptions - sizes, offsets, pools. */
> -} __rte_cache_aligned;
> -
> -enum mlx5_rxq_type {
> -	MLX5_RXQ_TYPE_STANDARD, /* Standard Rx queue. */
> -	MLX5_RXQ_TYPE_HAIRPIN, /* Hairpin Rx queue. */
> -	MLX5_RXQ_TYPE_UNDEFINED,
> -};
> -
> -/* RX queue control descriptor. */
> -struct mlx5_rxq_ctrl {
> -	struct mlx5_rxq_data rxq; /* Data path structure. */
> -	LIST_ENTRY(mlx5_rxq_ctrl) next; /* Pointer to the next element. */
> -	uint32_t refcnt; /* Reference counter. */
> -	struct mlx5_rxq_obj *obj; /* Verbs/DevX elements. */
> -	struct mlx5_priv *priv; /* Back pointer to private data. */
> -	enum mlx5_rxq_type type; /* Rxq type. */
> -	unsigned int socket; /* CPU socket ID for allocations. */
> -	unsigned int irq:1; /* Whether IRQ is enabled. */
> -	uint32_t flow_mark_n; /* Number of Mark/Flag flows using this Queue. */
> -	uint32_t flow_tunnels_n[MLX5_FLOW_TUNNEL]; /* Tunnels counters. */
> -	uint32_t wqn; /* WQ number. */
> -	uint16_t dump_file_n; /* Number of dump files. */
> -	struct rte_eth_hairpin_conf hairpin_conf; /* Hairpin configuration. */
> -	uint32_t hairpin_status; /* Hairpin binding status. */
> -};
> -
>  /* TX queue send local data. */
>  __extension__
>  struct mlx5_txq_local {
> @@ -302,80 +149,6 @@ struct mlx5_txq_ctrl {
>  #define MLX5_TX_BFREG(txq) \
>  	(MLX5_PROC_PRIV((txq)->port_id)->uar_table[(txq)->idx])
>
> -/* mlx5_rxq.c */
> -
> -extern uint8_t rss_hash_default_key[];
> -
> -unsigned int mlx5_rxq_cqe_num(struct mlx5_rxq_data *rxq_data);
> -int mlx5_mprq_free_mp(struct rte_eth_dev *dev);
> -int mlx5_mprq_alloc_mp(struct rte_eth_dev *dev);
> -int mlx5_rx_queue_start(struct rte_eth_dev *dev, uint16_t queue_id);
> -int mlx5_rx_queue_stop(struct rte_eth_dev *dev, uint16_t queue_id);
> -int mlx5_rx_queue_start_primary(struct rte_eth_dev *dev, uint16_t queue_id);
> -int mlx5_rx_queue_stop_primary(struct rte_eth_dev *dev, uint16_t queue_id);
> -int mlx5_rx_queue_setup(struct rte_eth_dev *dev, uint16_t idx, uint16_t desc,
> -			unsigned int socket, const struct rte_eth_rxconf *conf,
> -			struct rte_mempool *mp);
> -int mlx5_rx_hairpin_queue_setup
> -	(struct rte_eth_dev *dev, uint16_t idx, uint16_t desc,
> -	 const struct rte_eth_hairpin_conf *hairpin_conf);
> -void mlx5_rx_queue_release(void *dpdk_rxq);
> -int mlx5_rx_intr_vec_enable(struct rte_eth_dev *dev);
> -void mlx5_rx_intr_vec_disable(struct rte_eth_dev *dev);
> -int mlx5_rx_intr_enable(struct rte_eth_dev *dev, uint16_t rx_queue_id);
> -int mlx5_rx_intr_disable(struct rte_eth_dev *dev, uint16_t rx_queue_id);
> -int mlx5_rxq_obj_verify(struct rte_eth_dev *dev);
> -struct mlx5_rxq_ctrl *mlx5_rxq_new(struct rte_eth_dev *dev, uint16_t idx,
> -				   uint16_t desc, unsigned int socket,
> -				   const struct rte_eth_rxconf *conf,
> -				   const struct rte_eth_rxseg_split *rx_seg,
> -				   uint16_t n_seg);
> -struct mlx5_rxq_ctrl *mlx5_rxq_hairpin_new
> -	(struct rte_eth_dev *dev, uint16_t idx, uint16_t desc,
> -	 const struct rte_eth_hairpin_conf *hairpin_conf);
> -struct mlx5_rxq_ctrl *mlx5_rxq_get(struct rte_eth_dev *dev, uint16_t idx);
> -int mlx5_rxq_release(struct rte_eth_dev *dev, uint16_t idx);
> -int mlx5_rxq_verify(struct rte_eth_dev *dev);
> -int rxq_alloc_elts(struct mlx5_rxq_ctrl *rxq_ctrl);
> -int mlx5_ind_table_obj_verify(struct rte_eth_dev *dev);
> -struct mlx5_ind_table_obj *mlx5_ind_table_obj_get(struct rte_eth_dev *dev,
> -						  const uint16_t *queues,
> -						  uint32_t queues_n);
> -int mlx5_ind_table_obj_release(struct rte_eth_dev *dev,
> -			       struct mlx5_ind_table_obj *ind_tbl,
> -			       bool standalone);
> -int mlx5_ind_table_obj_setup(struct rte_eth_dev *dev,
> -			     struct mlx5_ind_table_obj *ind_tbl);
> -int mlx5_ind_table_obj_modify(struct rte_eth_dev *dev,
> -			      struct mlx5_ind_table_obj *ind_tbl,
> -			      uint16_t *queues, const uint32_t queues_n,
> -			      bool standalone);
> -struct mlx5_cache_entry *mlx5_hrxq_create_cb(struct mlx5_cache_list *list,
> -		struct mlx5_cache_entry *entry __rte_unused, void *cb_ctx);
> -int mlx5_hrxq_match_cb(struct mlx5_cache_list *list,
> -		       struct mlx5_cache_entry *entry,
> -		       void *cb_ctx);
> -void mlx5_hrxq_remove_cb(struct mlx5_cache_list *list,
> -			 struct mlx5_cache_entry *entry);
> -uint32_t mlx5_hrxq_get(struct rte_eth_dev *dev,
> -		       struct mlx5_flow_rss_desc *rss_desc);
> -int mlx5_hrxq_release(struct rte_eth_dev *dev, uint32_t hxrq_idx);
> -uint32_t mlx5_hrxq_verify(struct rte_eth_dev *dev);
> -
> -
> -enum mlx5_rxq_type mlx5_rxq_get_type(struct rte_eth_dev *dev, uint16_t idx);
> -const struct rte_eth_hairpin_conf *mlx5_rxq_get_hairpin_conf
> -	(struct rte_eth_dev *dev, uint16_t idx);
> -struct mlx5_hrxq *mlx5_drop_action_create(struct rte_eth_dev *dev);
> -void mlx5_drop_action_destroy(struct rte_eth_dev *dev);
> -uint64_t mlx5_get_rx_port_offloads(void);
> -uint64_t mlx5_get_rx_queue_offloads(struct rte_eth_dev *dev);
> -void mlx5_rxq_timestamp_set(struct rte_eth_dev *dev);
> -int mlx5_hrxq_modify(struct rte_eth_dev *dev, uint32_t hxrq_idx,
> -		     const uint8_t *rss_key, uint32_t rss_key_len,
> -		     uint64_t hash_fields,
> -		     const uint16_t *queues, uint32_t queues_n);
> -
>  /* mlx5_txq.c */
>
>  int mlx5_tx_queue_start(struct rte_eth_dev *dev, uint16_t queue_id);
> @@ -416,45 +189,21 @@ struct mlx5_txq_ctrl *mlx5_txq_hairpin_new
>  void mlx5_set_ptype_table(void);
>  void mlx5_set_cksum_table(void);
>  void mlx5_set_swp_types_table(void);
> -uint16_t mlx5_rx_burst(void *dpdk_rxq, struct rte_mbuf **pkts, uint16_t pkts_n);
> -void mlx5_rxq_initialize(struct mlx5_rxq_data *rxq);
> -__rte_noinline int mlx5_rx_err_handle(struct mlx5_rxq_data *rxq, uint8_t vec);
> -void mlx5_mprq_buf_free_cb(void *addr, void *opaque);
> -void mlx5_mprq_buf_free(struct mlx5_mprq_buf *buf);
> -uint16_t mlx5_rx_burst_mprq(void *dpdk_rxq, struct rte_mbuf **pkts,
> -			    uint16_t pkts_n);
>  uint16_t removed_tx_burst(void *dpdk_txq, struct rte_mbuf **pkts,
>  			  uint16_t pkts_n);
> -uint16_t removed_rx_burst(void *dpdk_rxq, struct rte_mbuf **pkts,
> -			  uint16_t pkts_n);
> -int mlx5_rx_descriptor_status(void *rx_queue, uint16_t offset);
>  int mlx5_tx_descriptor_status(void *tx_queue, uint16_t offset);
> -uint32_t mlx5_rx_queue_count(struct rte_eth_dev *dev, uint16_t rx_queue_id);
>  void mlx5_dump_debug_information(const char *path, const char *title,
>  				 const void *buf, unsigned int len);
>  int mlx5_queue_state_modify_primary(struct rte_eth_dev *dev,
>  			const struct mlx5_mp_arg_queue_state_modify *sm);
> -void mlx5_rxq_info_get(struct rte_eth_dev *dev, uint16_t queue_id,
> -		       struct rte_eth_rxq_info *qinfo);
>  void mlx5_txq_info_get(struct rte_eth_dev *dev, uint16_t queue_id,
>  		       struct rte_eth_txq_info *qinfo);
> -int mlx5_rx_burst_mode_get(struct rte_eth_dev *dev, uint16_t rx_queue_id,
> -			   struct rte_eth_burst_mode *mode);
>  int mlx5_tx_burst_mode_get(struct rte_eth_dev *dev, uint16_t tx_queue_id,
>  			   struct rte_eth_burst_mode *mode);
>
> -/* Vectorized version of mlx5_rxtx.c */
> -int mlx5_rxq_check_vec_support(struct mlx5_rxq_data *rxq_data);
> -int mlx5_check_vec_rx_support(struct rte_eth_dev *dev);
> -uint16_t mlx5_rx_burst_vec(void *dpdk_txq, struct rte_mbuf **pkts,
> -			   uint16_t pkts_n);
> -uint16_t mlx5_rx_burst_mprq_vec(void *dpdk_txq, struct rte_mbuf **pkts,
> -				uint16_t pkts_n);
> -
>  /* mlx5_mr.c */
>
>  void mlx5_mr_flush_local_cache(struct mlx5_mr_ctrl *mr_ctrl);
> -uint32_t mlx5_rx_addr2mr_bh(struct mlx5_rxq_data *rxq, uintptr_t addr);
>  uint32_t mlx5_tx_mb2mr_bh(struct mlx5_txq_data *txq, struct rte_mbuf *mb);
>  uint32_t mlx5_tx_update_ext_mp(struct mlx5_txq_data *txq, uintptr_t addr,
>  			       struct rte_mempool *mp);
> @@ -538,35 +287,6 @@ int mlx5_dma_unmap(struct rte_pci_device *pdev, void *addr, uint64_t iova,
>  }
>
>  /**
> - * Query LKey from a packet buffer for Rx. No need to flush local caches for Rx
> - * as mempool is pre-configured and static.
> - *
> - * @param rxq
> - *   Pointer to Rx queue structure.
> - * @param addr
> - *   Address to search.
> - *
> - * @return
> - *   Searched LKey on success, UINT32_MAX on no match.
> - */
> -static __rte_always_inline uint32_t
> -mlx5_rx_addr2mr(struct mlx5_rxq_data *rxq, uintptr_t addr)
> -{
> -	struct mlx5_mr_ctrl *mr_ctrl = &rxq->mr_ctrl;
> -	uint32_t lkey;
> -
> -	/* Linear search on MR cache array. */
> -	lkey = mlx5_mr_lookup_lkey(mr_ctrl->cache, &mr_ctrl->mru,
> -				   MLX5_MR_CACHE_N, addr);
> -	if (likely(lkey != UINT32_MAX))
> -		return lkey;
> -	/* Take slower bottom-half (Binary Search) on miss. */
> -	return mlx5_rx_addr2mr_bh(rxq, addr);
> -}
> -
> -#define mlx5_rx_mb2mr(rxq, mb) mlx5_rx_addr2mr(rxq, (uintptr_t)((mb)->buf_addr))
> -
> -/**
>   * Query LKey from a packet buffer for Tx. If not found, add the mempool.
>   *
>   * @param txq
> @@ -637,25 +357,6 @@ int mlx5_dma_unmap(struct rte_pci_device *pdev, void *addr, uint64_t iova,
>  }
>
>  /**
> - * Convert timestamp from HW format to linear counter
> - * from Packet Pacing Clock Queue CQE timestamp format.
> - *
> - * @param sh
> - *   Pointer to the device shared context. Might be needed
> - *   to convert according current device configuration.
> - * @param ts
> - *   Timestamp from CQE to convert.
> - * @return
> - *   UTC in nanoseconds
> - */
> -static __rte_always_inline uint64_t
> -mlx5_txpp_convert_rx_ts(struct mlx5_dev_ctx_shared *sh, uint64_t ts)
> -{
> -	RTE_SET_USED(sh);
> -	return (ts & UINT32_MAX) + (ts >> 32) * NS_PER_S;
> -}
> -
> -/**
>   * Convert timestamp from mbuf format to linear counter
>   * of Clock Queue completions (24 bits)
>   *
> @@ -712,274 +413,4 @@ int mlx5_dma_unmap(struct rte_pci_device *pdev, void *addr, uint64_t iova,
>  	return ci;
>  }
>
> -/**
> - * Set timestamp in mbuf dynamic field.
> - *
> - * @param mbuf
> - *   Structure to write into.
> - * @param offset
> - *   Dynamic field offset in mbuf structure.
> - * @param timestamp
> - *   Value to write.
> - */
> -static __rte_always_inline void
> -mlx5_timestamp_set(struct rte_mbuf *mbuf, int offset,
> -		   rte_mbuf_timestamp_t timestamp)
> -{
> -	*RTE_MBUF_DYNFIELD(mbuf, offset, rte_mbuf_timestamp_t *) = timestamp;
> -}
> -
> -/**
> - * Replace MPRQ buffer.
> - *
> - * @param rxq
> - *   Pointer to Rx queue structure.
> - * @param rq_idx
> - *   RQ index to replace.
> - */
> -static __rte_always_inline void
> -mprq_buf_replace(struct mlx5_rxq_data *rxq, uint16_t rq_idx)
> -{
> -	const uint32_t strd_n = 1 << rxq->strd_num_n;
> -	struct mlx5_mprq_buf *rep = rxq->mprq_repl;
> -	volatile struct mlx5_wqe_data_seg *wqe =
> -		&((volatile struct mlx5_wqe_mprq *)rxq->wqes)[rq_idx].dseg;
> -	struct mlx5_mprq_buf *buf = (*rxq->mprq_bufs)[rq_idx];
> -	void *addr;
> -
> -	if (__atomic_load_n(&buf->refcnt, __ATOMIC_RELAXED) > 1) {
> -		MLX5_ASSERT(rep != NULL);
> -		/* Replace MPRQ buf. */
> -		(*rxq->mprq_bufs)[rq_idx] = rep;
> -		/* Replace WQE. */
> -		addr = mlx5_mprq_buf_addr(rep, strd_n);
> -		wqe->addr = rte_cpu_to_be_64((uintptr_t)addr);
> -		/* If there's only one MR, no need to replace LKey in WQE. */
> -		if (unlikely(mlx5_mr_btree_len(&rxq->mr_ctrl.cache_bh) > 1))
> -			wqe->lkey = mlx5_rx_addr2mr(rxq, (uintptr_t)addr);
> -		/* Stash a mbuf for next replacement. */
> -		if (likely(!rte_mempool_get(rxq->mprq_mp, (void **)&rep)))
> -			rxq->mprq_repl = rep;
> -		else
> -			rxq->mprq_repl = NULL;
> -		/* Release the old buffer. */
> -		mlx5_mprq_buf_free(buf);
> -	} else if (unlikely(rxq->mprq_repl == NULL)) {
> -		struct mlx5_mprq_buf *rep;
> -
> -		/*
> -		 * Currently, the MPRQ mempool is out of buffer
> -		 * and doing memcpy regardless of the size of Rx
> -		 * packet. Retry allocation to get back to
> -		 * normal.
> -		 */
> -		if (!rte_mempool_get(rxq->mprq_mp, (void **)&rep))
> -			rxq->mprq_repl = rep;
> -	}
> -}
> -
> -/**
> - * Attach or copy MPRQ buffer content to a packet.
> - *
> - * @param rxq
> - *   Pointer to Rx queue structure.
> - * @param pkt
> - *   Pointer to a packet to fill.
> - * @param len
> - *   Packet length.
> - * @param buf
> - *   Pointer to a MPRQ buffer to take the data from.
> - * @param strd_idx
> - *   Stride index to start from.
> - * @param strd_cnt
> - *   Number of strides to consume.
> - */
> -static __rte_always_inline enum mlx5_rqx_code
> -mprq_buf_to_pkt(struct mlx5_rxq_data *rxq, struct rte_mbuf *pkt, uint32_t len,
> -		struct mlx5_mprq_buf *buf, uint16_t strd_idx, uint16_t strd_cnt)
> -{
> -	const uint32_t strd_n = 1 << rxq->strd_num_n;
> -	const uint16_t strd_sz = 1 << rxq->strd_sz_n;
> -	const uint16_t strd_shift =
> -		MLX5_MPRQ_STRIDE_SHIFT_BYTE * rxq->strd_shift_en;
> -	const int32_t hdrm_overlap =
> -		len + RTE_PKTMBUF_HEADROOM - strd_cnt * strd_sz;
> -	const uint32_t offset = strd_idx * strd_sz + strd_shift;
> -	void *addr = RTE_PTR_ADD(mlx5_mprq_buf_addr(buf, strd_n), offset);
> -
> -	/*
> -	 * Memcpy packets to the target mbuf if:
> -	 * - The size of packet is smaller than mprq_max_memcpy_len.
> -	 * - Out of buffer in the Mempool for Multi-Packet RQ.
> -	 * - The packet's stride overlaps a headroom and scatter is off.
> -	 */
> -	if (len <= rxq->mprq_max_memcpy_len ||
> -	    rxq->mprq_repl == NULL ||
> -	    (hdrm_overlap > 0 && !rxq->strd_scatter_en)) {
> -		if (likely(len <=
> -			   (uint32_t)(pkt->buf_len - RTE_PKTMBUF_HEADROOM))) {
> -			rte_memcpy(rte_pktmbuf_mtod(pkt, void *),
> -				   addr, len);
> -			DATA_LEN(pkt) = len;
> -		} else if (rxq->strd_scatter_en) {
> -			struct rte_mbuf *prev = pkt;
> -			uint32_t seg_len = RTE_MIN(len, (uint32_t)
> -				(pkt->buf_len - RTE_PKTMBUF_HEADROOM));
> -			uint32_t rem_len = len - seg_len;
> -
> -			rte_memcpy(rte_pktmbuf_mtod(pkt, void *),
> -				   addr, seg_len);
> -			DATA_LEN(pkt) = seg_len;
> -			while (rem_len) {
> -				struct rte_mbuf *next =
> -					rte_pktmbuf_alloc(rxq->mp);
> -
> -				if (unlikely(next == NULL))
> -					return MLX5_RXQ_CODE_NOMBUF;
> -				NEXT(prev) = next;
> -				SET_DATA_OFF(next, 0);
> -				addr = RTE_PTR_ADD(addr, seg_len);
> -				seg_len = RTE_MIN(rem_len, (uint32_t)
> -					(next->buf_len - RTE_PKTMBUF_HEADROOM));
> -				rte_memcpy
> -					(rte_pktmbuf_mtod(next, void *),
> -					 addr, seg_len);
> -				DATA_LEN(next) = seg_len;
> -				rem_len -= seg_len;
> -				prev = next;
> -				++NB_SEGS(pkt);
> -			}
> -		} else {
> -			return MLX5_RXQ_CODE_DROPPED;
> -		}
> -	} else {
> -		rte_iova_t buf_iova;
> -		struct rte_mbuf_ext_shared_info *shinfo;
> -		uint16_t buf_len = strd_cnt * strd_sz;
> -		void *buf_addr;
> -
> -		/* Increment the refcnt of the whole chunk. */
> -		__atomic_add_fetch(&buf->refcnt, 1, __ATOMIC_RELAXED);
> -		MLX5_ASSERT(__atomic_load_n(&buf->refcnt,
> -			    __ATOMIC_RELAXED) <= strd_n + 1);
> -		buf_addr = RTE_PTR_SUB(addr, RTE_PKTMBUF_HEADROOM);
> -		/*
> -		 * MLX5 device doesn't use iova but it is necessary in a
> -		 * case where the Rx packet is transmitted via a
> -		 * different PMD.
> -		 */
> -		buf_iova = rte_mempool_virt2iova(buf) +
> -			   RTE_PTR_DIFF(buf_addr, buf);
> -		shinfo = &buf->shinfos[strd_idx];
> -		rte_mbuf_ext_refcnt_set(shinfo, 1);
> -		/*
> -		 * EXT_ATTACHED_MBUF will be set to pkt->ol_flags when
> -		 * attaching the stride to mbuf and more offload flags
> -		 * will be added below by calling rxq_cq_to_mbuf().
> -		 * Other fields will be overwritten.
> -		 */
> -		rte_pktmbuf_attach_extbuf(pkt, buf_addr, buf_iova,
> -					  buf_len, shinfo);
> -		/* Set mbuf head-room. */
> -		SET_DATA_OFF(pkt, RTE_PKTMBUF_HEADROOM);
> -		MLX5_ASSERT(pkt->ol_flags == EXT_ATTACHED_MBUF);
> -		MLX5_ASSERT(rte_pktmbuf_tailroom(pkt) >=
> -			len - (hdrm_overlap > 0 ? hdrm_overlap : 0));
> -		DATA_LEN(pkt) = len;
> -		/*
> -		 * Copy the last fragment of a packet (up to headroom
> -		 * size bytes) in case there is a stride overlap with
> -		 * a next packet's headroom. Allocate a separate mbuf
> -		 * to store this fragment and link it. Scatter is on.
> -		 */
> -		if (hdrm_overlap > 0) {
> -			MLX5_ASSERT(rxq->strd_scatter_en);
> -			struct rte_mbuf *seg =
> -				rte_pktmbuf_alloc(rxq->mp);
> -
> -			if (unlikely(seg == NULL))
> -				return MLX5_RXQ_CODE_NOMBUF;
> -			SET_DATA_OFF(seg, 0);
> -			rte_memcpy(rte_pktmbuf_mtod(seg, void *),
> -				   RTE_PTR_ADD(addr, len - hdrm_overlap),
> -				   hdrm_overlap);
> -			DATA_LEN(seg) = hdrm_overlap;
> -			DATA_LEN(pkt) = len - hdrm_overlap;
> -			NEXT(pkt) = seg;
> -			NB_SEGS(pkt) = 2;
> -		}
> -	}
> -	return MLX5_RXQ_CODE_EXIT;
> -}
> -
> -/**
> - * Check whether Multi-Packet RQ can be enabled for the device.
> - *
> - * @param dev
> - *   Pointer to Ethernet device.
> - *
> - * @return
> - *   1 if supported, negative errno value if not.
> - */
> -static __rte_always_inline int
> -mlx5_check_mprq_support(struct rte_eth_dev *dev)
> -{
> -	struct mlx5_priv *priv = dev->data->dev_private;
> -
> -	if (priv->config.mprq.enabled &&
> -	    priv->rxqs_n >= priv->config.mprq.min_rxqs_num)
> -		return 1;
> -	return -ENOTSUP;
> -}
> -
> -/**
> - * Check whether Multi-Packet RQ is enabled for the Rx queue.
> - *
> - * @param rxq
> - *   Pointer to receive queue structure.
> - *
> - * @return
> - *   0 if disabled, otherwise enabled.
> - */
> -static __rte_always_inline int
> -mlx5_rxq_mprq_enabled(struct mlx5_rxq_data *rxq)
> -{
> -	return rxq->strd_num_n > 0;
> -}
> -
> -/**
> - * Check whether Multi-Packet RQ is enabled for the device.
> - *
> - * @param dev
> - *   Pointer to Ethernet device.
> - *
> - * @return
> - *   0 if disabled, otherwise enabled.
> - */
> -static __rte_always_inline int
> -mlx5_mprq_enabled(struct rte_eth_dev *dev)
> -{
> -	struct mlx5_priv *priv = dev->data->dev_private;
> -	uint32_t i;
> -	uint16_t n = 0;
> -	uint16_t n_ibv = 0;
> -
> -	if (mlx5_check_mprq_support(dev) < 0)
> -		return 0;
> -	/* All the configured queues should be enabled. */
> -	for (i = 0; i < priv->rxqs_n; ++i) {
> -		struct mlx5_rxq_data *rxq = (*priv->rxqs)[i];
> -		struct mlx5_rxq_ctrl *rxq_ctrl = container_of
> -			(rxq, struct mlx5_rxq_ctrl, rxq);
> -
> -		if (rxq == NULL || rxq_ctrl->type != MLX5_RXQ_TYPE_STANDARD)
> -			continue;
> -		n_ibv++;
> -		if (mlx5_rxq_mprq_enabled(rxq))
> -			++n;
> -	}
> -	/* Multi-Packet RQ can't be partially configured. */
> -	MLX5_ASSERT(n == 0 || n == n_ibv);
> -	return n == n_ibv;
> -}
>  #endif /* RTE_PMD_MLX5_RXTX_H_ */
> diff --git a/drivers/net/mlx5/mlx5_rxtx_vec.c b/drivers/net/mlx5/mlx5_rxtx_vec.c
> index 028e0f6..d5af2d9 100644
> --- a/drivers/net/mlx5/mlx5_rxtx_vec.c
> +++ b/drivers/net/mlx5/mlx5_rxtx_vec.c
> @@ -19,6 +19,7 @@
>  #include "mlx5.h"
>  #include "mlx5_utils.h"
>  #include "mlx5_rxtx.h"
> +#include "mlx5_rx.h"
>  #include "mlx5_rxtx_vec.h"
>  #include "mlx5_autoconf.h"
>
> diff --git a/drivers/net/mlx5/mlx5_stats.c b/drivers/net/mlx5/mlx5_stats.c
> index a6569b2..4dbd831 100644
> --- a/drivers/net/mlx5/mlx5_stats.c
> +++ b/drivers/net/mlx5/mlx5_stats.c
> @@ -17,6 +17,7 @@
>  #include "mlx5_defs.h"
>  #include "mlx5.h"
>  #include "mlx5_rxtx.h"
> +#include "mlx5_rx.h"
>  #include "mlx5_malloc.h"
>
>  /**
> diff --git a/drivers/net/mlx5/mlx5_trigger.c b/drivers/net/mlx5/mlx5_trigger.c
> index 94dd567..c88cb22 100644
> --- a/drivers/net/mlx5/mlx5_trigger.c
> +++ b/drivers/net/mlx5/mlx5_trigger.c
> @@ -16,6 +16,7 @@
>  #include "mlx5.h"
>  #include "mlx5_mr.h"
>  #include "mlx5_rxtx.h"
> +#include "mlx5_rx.h"
>  #include "mlx5_utils.h"
>  #include "rte_pmd_mlx5.h"
>
> diff --git a/drivers/net/mlx5/mlx5_txpp.c b/drivers/net/mlx5/mlx5_txpp.c
> index e8d632a..89e1c5d 100644
> --- a/drivers/net/mlx5/mlx5_txpp.c
> +++ b/drivers/net/mlx5/mlx5_txpp.c
> @@ -17,6 +17,7 @@
>
>  #include "mlx5.h"
>  #include "mlx5_rxtx.h"
> +#include "mlx5_rx.h"
>  #include "mlx5_common_os.h"
>
>  static_assert(sizeof(struct mlx5_cqe_ts) == sizeof(rte_int128_t),
> diff --git a/drivers/net/mlx5/mlx5_vlan.c b/drivers/net/mlx5/mlx5_vlan.c
> index 64678d3..60f97f2 100644
> --- a/drivers/net/mlx5/mlx5_vlan.c
> +++ b/drivers/net/mlx5/mlx5_vlan.c
> @@ -16,6 +16,7 @@
>  #include "mlx5.h"
>  #include "mlx5_autoconf.h"
>  #include "mlx5_rxtx.h"
> +#include "mlx5_rx.h"
>  #include "mlx5_utils.h"
>  #include "mlx5_devx.h"
>
> diff --git a/drivers/net/mlx5/windows/mlx5_os.c b/drivers/net/mlx5/windows/mlx5_os.c
> index 6f39276..79eac80 100644
> --- a/drivers/net/mlx5/windows/mlx5_os.c
> +++ b/drivers/net/mlx5/windows/mlx5_os.c
> @@ -23,6 +23,7 @@
>  #include "mlx5_common_os.h"
>  #include "mlx5_utils.h"
>  #include "mlx5_rxtx.h"
> +#include "mlx5_rx.h"
>  #include "mlx5_autoconf.h"
>  #include "mlx5_mr.h"
>  #include "mlx5_flow.h"
> --
> 1.8.3.1