From: Raslan Darawsheh
To: Michael Baum, "dev@dpdk.org"
CC: Matan Azrad, Slava Ovsiienko
Date: Wed, 7 Apr 2021 11:33:56 +0000
References: <1617631256-3018-1-git-send-email-michaelba@nvidia.com> <1617631256-3018-4-git-send-email-michaelba@nvidia.com>
In-Reply-To: <1617631256-3018-4-git-send-email-michaelba@nvidia.com>
Subject: Re: [dpdk-dev] [PATCH 3/6] net/mlx5: separate Tx function declarations to another file

Hi Michael,

This patch would cause this compilation failure on aarch64 with gcc:
aarch64-linux-gnu-gcc (Linaro GCC 7.1-2017.08) 7.1.1 20170707

[615/2518] Compiling C object drivers/libtmp_rte_net_mlx5.a.p/net_mlx5_mlx5_txq.c.o
FAILED: drivers/libtmp_rte_net_mlx5.a.p/net_mlx5_mlx5_txq.c.o
aarch64-linux-gnu-gcc -Idrivers/libtmp_rte_net_mlx5.a.p -Idrivers -I../../root/dpdk/drivers -Idrivers/net/mlx5 -I../../root/dpdk/drivers/net/mlx5 -Idrivers/net/mlx5/linux -I../../root/dpdk/drivers/net/mlx5/linux -Ilib/librte_ethdev -I../../root/dpdk/lib/librte_ethdev -I. -I../../root/dpdk -Iconfig -I../../root/dpdk/config -Ilib/librte_eal/include -I../../root/dpdk/lib/librte_eal/include -Ilib/librte_eal/linux/include -I../../root/dpdk/lib/librte_eal/linux/include -Ilib/librte_eal/arm/include -I../../root/dpdk/lib/librte_eal/arm/include -Ilib/librte_eal/common -I../../root/dpdk/lib/librte_eal/common -Ilib/librte_eal -I../../root/dpdk/lib/librte_eal -Ilib/librte_kvargs -I../../root/dpdk/lib/librte_kvargs -Ilib/librte_metrics -I../../root/dpdk/lib/librte_metrics -Ilib/librte_telemetry -I../../root/dpdk/lib/librte_telemetry -Ilib/librte_net -I../../root/dpdk/lib/librte_net -Ilib/librte_mbuf -I../../root/dpdk/lib/librte_mbuf -Ilib/librte_mempool -I../../root/dpdk/lib/librte_mempool -Ilib/librte_ring -I../../root/dpdk/lib/librte_ring -Ilib/librte_meter -I../../root/dpdk/lib/librte_meter -Idrivers/bus/pci -I../../root/dpdk/drivers/bus/pci -I../../root/dpdk/drivers/bus/pci/linux -Ilib/librte_pci -I../../root/dpdk/lib/librte_pci -Idrivers/bus/vdev -I../../root/dpdk/drivers/bus/vdev -Ilib/librte_hash -I../../root/dpdk/lib/librte_hash -Ilib/librte_rcu -I../../root/dpdk/lib/librte_rcu -Idrivers/common/mlx5 -I../../root/dpdk/drivers/common/mlx5 -Idrivers/common/mlx5/linux -I../../root/dpdk/drivers/common/mlx5/linux -fdiagnostics-color=always -pipe -D_FILE_OFFSET_BITS=64 -Wall -Winvalid-pch -Werror -O2 -g -include rte_config.h -Wextra -Wcast-qual -Wdeprecated -Wformat -Wformat-nonliteral -Wformat-security -Wmissing-declarations -Wmissing-prototypes -Wnested-externs -Wold-style-definition -Wpointer-arith -Wsign-compare -Wstrict-prototypes -Wundef -Wwrite-strings -Wno-missing-field-initializers -D_GNU_SOURCE -fPIC -march=armv8-a+crc -DALLOW_EXPERIMENTAL_API -DALLOW_INTERNAL_API -Wno-format-truncation -std=c11 -Wno-strict-prototypes -D_BSD_SOURCE -D_DEFAULT_SOURCE -D_XOPEN_SOURCE=600 -pedantic -DPEDANTIC -MD -MQ drivers/libtmp_rte_net_mlx5.a.p/net_mlx5_mlx5_txq.c.o -MF drivers/libtmp_rte_net_mlx5.a.p/net_mlx5_mlx5_txq.c.o.d -o
drivers/libtmp_rte_net_mlx5.a.p/net_mlx5_mlx5_txq.c.o -c ../../root/dpdk/drivers/net/mlx5/mlx5_txq.c
../../root/dpdk/drivers/net/mlx5/mlx5_txq.c: In function 'txq_set_params':
../../root/dpdk/drivers/net/mlx5/mlx5_txq.c:817:17: error: dereferencing pointer to incomplete type 'struct rte_pci_device'
   (priv->pci_dev->id.device_id ==
                   ^~

Kindest regards,
Raslan Darawsheh

> -----Original Message-----
> From: Michael Baum
> Sent: Monday, April 5, 2021 5:01 PM
> To: dev@dpdk.org
> Cc: Matan Azrad; Raslan Darawsheh; Slava Ovsiienko
> Subject: [PATCH 3/6] net/mlx5: separate Tx function declarations to another file
>
> This patch separates the Tx function declarations into a different header file,
> in preparation for removing their implementation from the source file
> and as an optional preparation for Tx cleanup.
>
> Signed-off-by: Michael Baum
> ---
> drivers/net/mlx5/linux/mlx5_mp_os.c | 1 +
> drivers/net/mlx5/linux/mlx5_os.c | 1 +
> drivers/net/mlx5/linux/mlx5_verbs.c | 2 +-
> drivers/net/mlx5/mlx5.c | 1 +
> drivers/net/mlx5/mlx5_devx.c | 2 +-
> drivers/net/mlx5/mlx5_ethdev.c | 1 +
> drivers/net/mlx5/mlx5_flow.c | 2 +-
> drivers/net/mlx5/mlx5_flow_dv.c | 2 +-
> drivers/net/mlx5/mlx5_flow_verbs.c | 1 -
> drivers/net/mlx5/mlx5_mr.c | 1 +
> drivers/net/mlx5/mlx5_rxmode.c | 1 -
> drivers/net/mlx5/mlx5_rxq.c | 2 +-
> drivers/net/mlx5/mlx5_rxtx.c | 1 +
> drivers/net/mlx5/mlx5_rxtx.h | 344 ---------------------------------
> drivers/net/mlx5/mlx5_stats.c | 2 +-
> drivers/net/mlx5/mlx5_trigger.c | 2 +-
> drivers/net/mlx5/mlx5_tx.h | 371 ++++++++++++++++++++++++++++++++++++
> drivers/net/mlx5/mlx5_txpp.c | 2 +-
> drivers/net/mlx5/mlx5_txq.c | 2 +-
> drivers/net/mlx5/windows/mlx5_os.c | 1 +
> 20 files changed, 387 insertions(+), 355 deletions(-)
> create mode 100644 drivers/net/mlx5/mlx5_tx.h
>
> diff --git a/drivers/net/mlx5/linux/mlx5_mp_os.c b/drivers/net/mlx5/linux/mlx5_mp_os.c
> index 63fa278..ca529b6 100644
> --- a/drivers/net/mlx5/linux/mlx5_mp_os.c
> +++ b/drivers/net/mlx5/linux/mlx5_mp_os.c
> @@ -17,6 +17,7 @@
> #include "mlx5.h"
> #include "mlx5_rxtx.h"
> #include "mlx5_rx.h"
> +#include "mlx5_tx.h"
> #include "mlx5_utils.h"
>
> int
> diff --git a/drivers/net/mlx5/linux/mlx5_os.c b/drivers/net/mlx5/linux/mlx5_os.c
> index 97a28ec..026423b 100644
> --- a/drivers/net/mlx5/linux/mlx5_os.c
> +++ b/drivers/net/mlx5/linux/mlx5_os.c
> @@ -41,6 +41,7 @@
> #include "mlx5_utils.h"
> #include "mlx5_rxtx.h"
> #include "mlx5_rx.h"
> +#include "mlx5_tx.h"
> #include "mlx5_autoconf.h"
> #include "mlx5_mr.h"
> #include "mlx5_flow.h"
> diff --git a/drivers/net/mlx5/linux/mlx5_verbs.c b/drivers/net/mlx5/linux/mlx5_verbs.c
> index 73096af..0b0759f 100644
> --- a/drivers/net/mlx5/linux/mlx5_verbs.c
> +++ b/drivers/net/mlx5/linux/mlx5_verbs.c
> @@ -20,9 +20,9 @@
> #include
> #include
> #include
> -#include
> #include
> #include
> +#include
> #include
> #include
>
> diff --git a/drivers/net/mlx5/mlx5.c b/drivers/net/mlx5/mlx5.c
> index 6f77bc2..02cc2c7 100644
> --- a/drivers/net/mlx5/mlx5.c
> +++ b/drivers/net/mlx5/mlx5.c
> @@ -36,6 +36,7 @@
> #include "mlx5_utils.h"
> #include "mlx5_rxtx.h"
> #include "mlx5_rx.h"
> +#include "mlx5_tx.h"
> #include "mlx5_autoconf.h"
> #include "mlx5_mr.h"
> #include "mlx5_flow.h"
> diff --git a/drivers/net/mlx5/mlx5_devx.c b/drivers/net/mlx5/mlx5_devx.c
> index 76935f6..76d31f5 100644
> --- a/drivers/net/mlx5/mlx5_devx.c
> +++ b/drivers/net/mlx5/mlx5_devx.c
> @@ -20,7 +20,7 @@
>
> #include "mlx5.h"
> #include "mlx5_common_os.h"
> -#include 
"mlx5_rxtx.h" > +#include "mlx5_tx.h" > #include "mlx5_rx.h" > #include "mlx5_utils.h" > #include "mlx5_devx.h" > diff --git a/drivers/net/mlx5/mlx5_ethdev.c > b/drivers/net/mlx5/mlx5_ethdev.c > index 708e3a3..90baee5 100644 > --- a/drivers/net/mlx5/mlx5_ethdev.c > +++ b/drivers/net/mlx5/mlx5_ethdev.c > @@ -24,6 +24,7 @@ >=20 > #include "mlx5_rxtx.h" > #include "mlx5_rx.h" > +#include "mlx5_tx.h" > #include "mlx5_autoconf.h" >=20 > /** > diff --git a/drivers/net/mlx5/mlx5_flow.c b/drivers/net/mlx5/mlx5_flow.c > index b3877a1..4dea006 100644 > --- a/drivers/net/mlx5/mlx5_flow.c > +++ b/drivers/net/mlx5/mlx5_flow.c > @@ -29,8 +29,8 @@ > #include "mlx5.h" > #include "mlx5_flow.h" > #include "mlx5_flow_os.h" > -#include "mlx5_rxtx.h" > #include "mlx5_rx.h" > +#include "mlx5_tx.h" > #include "mlx5_common_os.h" > #include "rte_pmd_mlx5.h" >=20 > diff --git a/drivers/net/mlx5/mlx5_flow_dv.c > b/drivers/net/mlx5/mlx5_flow_dv.c > index cac05fb..cb5b3c9 100644 > --- a/drivers/net/mlx5/mlx5_flow_dv.c > +++ b/drivers/net/mlx5/mlx5_flow_dv.c > @@ -32,8 +32,8 @@ > #include "mlx5_common_os.h" > #include "mlx5_flow.h" > #include "mlx5_flow_os.h" > -#include "mlx5_rxtx.h" > #include "mlx5_rx.h" > +#include "mlx5_tx.h" > #include "rte_pmd_mlx5.h" >=20 > #if defined(HAVE_IBV_FLOW_DV_SUPPORT) || > !defined(HAVE_INFINIBAND_VERBS_H) > diff --git a/drivers/net/mlx5/mlx5_flow_verbs.c > b/drivers/net/mlx5/mlx5_flow_verbs.c > index c331350..0fdafbb 100644 > --- a/drivers/net/mlx5/mlx5_flow_verbs.c > +++ b/drivers/net/mlx5/mlx5_flow_verbs.c > @@ -23,7 +23,6 @@ > #include "mlx5_defs.h" > #include "mlx5.h" > #include "mlx5_flow.h" > -#include "mlx5_rxtx.h" > #include "mlx5_rx.h" >=20 > #define VERBS_SPEC_INNER(item_flags) \ > diff --git a/drivers/net/mlx5/mlx5_mr.c b/drivers/net/mlx5/mlx5_mr.c > index 2014936..e791b63 100644 > --- a/drivers/net/mlx5/mlx5_mr.c > +++ b/drivers/net/mlx5/mlx5_mr.c > @@ -16,6 +16,7 @@ > #include "mlx5_mr.h" > #include "mlx5_rxtx.h" > #include "mlx5_rx.h" > +#include "mlx5_tx.h" >=20 > struct mr_find_contig_memsegs_data { > uintptr_t addr; > diff --git a/drivers/net/mlx5/mlx5_rxmode.c > b/drivers/net/mlx5/mlx5_rxmode.c > index cf93cca..25fb47c 100644 > --- a/drivers/net/mlx5/mlx5_rxmode.c > +++ b/drivers/net/mlx5/mlx5_rxmode.c > @@ -11,7 +11,6 @@ >=20 > #include > #include "mlx5.h" > -#include "mlx5_rxtx.h" > #include "mlx5_utils.h" >=20 > /** > diff --git a/drivers/net/mlx5/mlx5_rxq.c b/drivers/net/mlx5/mlx5_rxq.c > index 19df0fa..bb9a908 100644 > --- a/drivers/net/mlx5/mlx5_rxq.c > +++ b/drivers/net/mlx5/mlx5_rxq.c > @@ -24,7 +24,7 @@ >=20 > #include "mlx5_defs.h" > #include "mlx5.h" > -#include "mlx5_rxtx.h" > +#include "mlx5_tx.h" > #include "mlx5_rx.h" > #include "mlx5_utils.h" > #include "mlx5_autoconf.h" > diff --git a/drivers/net/mlx5/mlx5_rxtx.c b/drivers/net/mlx5/mlx5_rxtx.c > index c7f2605..57ff407 100644 > --- a/drivers/net/mlx5/mlx5_rxtx.c > +++ b/drivers/net/mlx5/mlx5_rxtx.c > @@ -26,6 +26,7 @@ > #include "mlx5_utils.h" > #include "mlx5_rxtx.h" > #include "mlx5_rx.h" > +#include "mlx5_tx.h" >=20 > /* TX burst subroutines return codes. 
*/ > enum mlx5_txcmp_code { > diff --git a/drivers/net/mlx5/mlx5_rxtx.h b/drivers/net/mlx5/mlx5_rxtx.h > index f1ebc99..e168dd4 100644 > --- a/drivers/net/mlx5/mlx5_rxtx.h > +++ b/drivers/net/mlx5/mlx5_rxtx.h > @@ -17,169 +17,18 @@ > #include > #include > #include > -#include > #include >=20 > -#include > -#include > #include > #include >=20 > -#include "mlx5_defs.h" > #include "mlx5_utils.h" > #include "mlx5.h" > #include "mlx5_autoconf.h" > #include "mlx5_mr.h" >=20 > - > -/* Mbuf dynamic flag offset for inline. */ > -extern uint64_t rte_net_mlx5_dynf_inline_mask; > - > -struct mlx5_txq_stats { > -#ifdef MLX5_PMD_SOFT_COUNTERS > - uint64_t opackets; /**< Total of successfully sent packets. */ > - uint64_t obytes; /**< Total of successfully sent bytes. */ > -#endif > - uint64_t oerrors; /**< Total number of failed transmitted packets. */ > -}; > - > struct mlx5_priv; >=20 > -/* TX queue send local data. */ > -__extension__ > -struct mlx5_txq_local { > - struct mlx5_wqe *wqe_last; /* last sent WQE pointer. */ > - struct rte_mbuf *mbuf; /* first mbuf to process. */ > - uint16_t pkts_copy; /* packets copied to elts. */ > - uint16_t pkts_sent; /* packets sent. */ > - uint16_t pkts_loop; /* packets sent on loop entry. */ > - uint16_t elts_free; /* available elts remain. */ > - uint16_t wqe_free; /* available wqe remain. */ > - uint16_t mbuf_off; /* data offset in current mbuf. */ > - uint16_t mbuf_nseg; /* number of remaining mbuf. */ > - uint16_t mbuf_free; /* number of inline mbufs to free. */ > -}; > - > -/* TX queue descriptor. */ > -__extension__ > -struct mlx5_txq_data { > - uint16_t elts_head; /* Current counter in (*elts)[]. */ > - uint16_t elts_tail; /* Counter of first element awaiting completion. */ > - uint16_t elts_comp; /* elts index since last completion request. */ > - uint16_t elts_s; /* Number of mbuf elements. */ > - uint16_t elts_m; /* Mask for mbuf elements indices. */ > - /* Fields related to elts mbuf storage. */ > - uint16_t wqe_ci; /* Consumer index for work queue. */ > - uint16_t wqe_pi; /* Producer index for work queue. */ > - uint16_t wqe_s; /* Number of WQ elements. */ > - uint16_t wqe_m; /* Mask Number for WQ elements. */ > - uint16_t wqe_comp; /* WQE index since last completion request. */ > - uint16_t wqe_thres; /* WQE threshold to request completion in CQ. > */ > - /* WQ related fields. */ > - uint16_t cq_ci; /* Consumer index for completion queue. */ > - uint16_t cq_pi; /* Production index for completion queue. */ > - uint16_t cqe_s; /* Number of CQ elements. */ > - uint16_t cqe_m; /* Mask for CQ indices. */ > - /* CQ related fields. */ > - uint16_t elts_n:4; /* elts[] length (in log2). */ > - uint16_t cqe_n:4; /* Number of CQ elements (in log2). */ > - uint16_t wqe_n:4; /* Number of WQ elements (in log2). */ > - uint16_t tso_en:1; /* When set hardware TSO is enabled. */ > - uint16_t tunnel_en:1; > - /* When set TX offload for tunneled packets are supported. */ > - uint16_t swp_en:1; /* Whether SW parser is enabled. */ > - uint16_t vlan_en:1; /* VLAN insertion in WQE is supported. */ > - uint16_t db_nc:1; /* Doorbell mapped to non-cached region. */ > - uint16_t db_heu:1; /* Doorbell heuristic write barrier. */ > - uint16_t fast_free:1; /* mbuf fast free on Tx is enabled. */ > - uint16_t inlen_send; /* Ordinary send data inline size. */ > - uint16_t inlen_empw; /* eMPW max packet size to inline. */ > - uint16_t inlen_mode; /* Minimal data length to inline. */ > - uint32_t qp_num_8s; /* QP number shifted by 8. */ > - uint64_t offloads; /* Offloads for Tx Queue. 
*/ > - struct mlx5_mr_ctrl mr_ctrl; /* MR control descriptor. */ > - struct mlx5_wqe *wqes; /* Work queue. */ > - struct mlx5_wqe *wqes_end; /* Work queue array limit. */ > -#ifdef RTE_LIBRTE_MLX5_DEBUG > - uint32_t *fcqs; /* Free completion queue (debug extended). */ > -#else > - uint16_t *fcqs; /* Free completion queue. */ > -#endif > - volatile struct mlx5_cqe *cqes; /* Completion queue. */ > - volatile uint32_t *qp_db; /* Work queue doorbell. */ > - volatile uint32_t *cq_db; /* Completion queue doorbell. */ > - uint16_t port_id; /* Port ID of device. */ > - uint16_t idx; /* Queue index. */ > - uint64_t ts_mask; /* Timestamp flag dynamic mask. */ > - int32_t ts_offset; /* Timestamp field dynamic offset. */ > - struct mlx5_dev_ctx_shared *sh; /* Shared context. */ > - struct mlx5_txq_stats stats; /* TX queue counters. */ > -#ifndef RTE_ARCH_64 > - rte_spinlock_t *uar_lock; > - /* UAR access lock required for 32bit implementations */ > -#endif > - struct rte_mbuf *elts[0]; > - /* Storage for queued packets, must be the last field. */ > -} __rte_cache_aligned; > - > -enum mlx5_txq_type { > - MLX5_TXQ_TYPE_STANDARD, /* Standard Tx queue. */ > - MLX5_TXQ_TYPE_HAIRPIN, /* Hairpin Rx queue. */ > -}; > - > -/* TX queue control descriptor. */ > -struct mlx5_txq_ctrl { > - LIST_ENTRY(mlx5_txq_ctrl) next; /* Pointer to the next element. */ > - uint32_t refcnt; /* Reference counter. */ > - unsigned int socket; /* CPU socket ID for allocations. */ > - enum mlx5_txq_type type; /* The txq ctrl type. */ > - unsigned int max_inline_data; /* Max inline data. */ > - unsigned int max_tso_header; /* Max TSO header size. */ > - struct mlx5_txq_obj *obj; /* Verbs/DevX queue object. */ > - struct mlx5_priv *priv; /* Back pointer to private data. */ > - off_t uar_mmap_offset; /* UAR mmap offset for non-primary > process. */ > - void *bf_reg; /* BlueFlame register from Verbs. */ > - uint16_t dump_file_n; /* Number of dump files. */ > - struct rte_eth_hairpin_conf hairpin_conf; /* Hairpin configuration. */ > - uint32_t hairpin_status; /* Hairpin binding status. */ > - struct mlx5_txq_data txq; /* Data path structure. */ > - /* Must be the last field in the structure, contains elts[]. 
*/ > -}; > - > -#define MLX5_TX_BFREG(txq) \ > - (MLX5_PROC_PRIV((txq)->port_id)->uar_table[(txq)->idx]) > - > -/* mlx5_txq.c */ > - > -int mlx5_tx_queue_start(struct rte_eth_dev *dev, uint16_t queue_id); > -int mlx5_tx_queue_stop(struct rte_eth_dev *dev, uint16_t queue_id); > -int mlx5_tx_queue_start_primary(struct rte_eth_dev *dev, uint16_t > queue_id); > -int mlx5_tx_queue_stop_primary(struct rte_eth_dev *dev, uint16_t > queue_id); > -int mlx5_tx_queue_setup(struct rte_eth_dev *dev, uint16_t idx, uint16_t > desc, > - unsigned int socket, const struct rte_eth_txconf > *conf); > -int mlx5_tx_hairpin_queue_setup > - (struct rte_eth_dev *dev, uint16_t idx, uint16_t desc, > - const struct rte_eth_hairpin_conf *hairpin_conf); > -void mlx5_tx_queue_release(void *dpdk_txq); > -void txq_uar_init(struct mlx5_txq_ctrl *txq_ctrl); > -int mlx5_tx_uar_init_secondary(struct rte_eth_dev *dev, int fd); > -void mlx5_tx_uar_uninit_secondary(struct rte_eth_dev *dev); > -int mlx5_txq_obj_verify(struct rte_eth_dev *dev); > -struct mlx5_txq_ctrl *mlx5_txq_new(struct rte_eth_dev *dev, uint16_t idx= , > - uint16_t desc, unsigned int socket, > - const struct rte_eth_txconf *conf); > -struct mlx5_txq_ctrl *mlx5_txq_hairpin_new > - (struct rte_eth_dev *dev, uint16_t idx, uint16_t desc, > - const struct rte_eth_hairpin_conf *hairpin_conf); > -struct mlx5_txq_ctrl *mlx5_txq_get(struct rte_eth_dev *dev, uint16_t idx= ); > -int mlx5_txq_release(struct rte_eth_dev *dev, uint16_t idx); > -int mlx5_txq_releasable(struct rte_eth_dev *dev, uint16_t idx); > -int mlx5_txq_verify(struct rte_eth_dev *dev); > -void txq_alloc_elts(struct mlx5_txq_ctrl *txq_ctrl); > -void txq_free_elts(struct mlx5_txq_ctrl *txq_ctrl); > -uint64_t mlx5_get_tx_port_offloads(struct rte_eth_dev *dev); > -void mlx5_txq_dynf_timestamp_set(struct rte_eth_dev *dev); > - > /* mlx5_rxtx.c */ >=20 > extern uint32_t mlx5_ptype_table[]; > @@ -189,88 +38,22 @@ struct mlx5_txq_ctrl *mlx5_txq_hairpin_new > void mlx5_set_ptype_table(void); > void mlx5_set_cksum_table(void); > void mlx5_set_swp_types_table(void); > -uint16_t removed_tx_burst(void *dpdk_txq, struct rte_mbuf **pkts, > - uint16_t pkts_n); > -int mlx5_tx_descriptor_status(void *tx_queue, uint16_t offset); > void mlx5_dump_debug_information(const char *path, const char *title, > const void *buf, unsigned int len); > int mlx5_queue_state_modify_primary(struct rte_eth_dev *dev, > const struct mlx5_mp_arg_queue_state_modify > *sm); > int mlx5_queue_state_modify(struct rte_eth_dev *dev, > struct mlx5_mp_arg_queue_state_modify *sm); > -void mlx5_txq_info_get(struct rte_eth_dev *dev, uint16_t queue_id, > - struct rte_eth_txq_info *qinfo); > -int mlx5_tx_burst_mode_get(struct rte_eth_dev *dev, uint16_t > tx_queue_id, > - struct rte_eth_burst_mode *mode); >=20 > /* mlx5_mr.c */ >=20 > void mlx5_mr_flush_local_cache(struct mlx5_mr_ctrl *mr_ctrl); > -uint32_t mlx5_tx_mb2mr_bh(struct mlx5_txq_data *txq, struct rte_mbuf > *mb); > -uint32_t mlx5_tx_update_ext_mp(struct mlx5_txq_data *txq, uintptr_t > addr, > - struct rte_mempool *mp); > int mlx5_dma_map(struct rte_pci_device *pdev, void *addr, uint64_t iova, > size_t len); > int mlx5_dma_unmap(struct rte_pci_device *pdev, void *addr, uint64_t > iova, > size_t len); >=20 > /** > - * Provide safe 64bit store operation to mlx5 UAR region for both 32bit = and > - * 64bit architectures. > - * > - * @param val > - * value to write in CPU endian format. > - * @param addr > - * Address to write to. > - * @param lock > - * Address of the lock to use for that UAR access. 
> - */ > -static __rte_always_inline void > -__mlx5_uar_write64_relaxed(uint64_t val, void *addr, > - rte_spinlock_t *lock __rte_unused) > -{ > -#ifdef RTE_ARCH_64 > - *(uint64_t *)addr =3D val; > -#else /* !RTE_ARCH_64 */ > - rte_spinlock_lock(lock); > - *(uint32_t *)addr =3D val; > - rte_io_wmb(); > - *((uint32_t *)addr + 1) =3D val >> 32; > - rte_spinlock_unlock(lock); > -#endif > -} > - > -/** > - * Provide safe 64bit store operation to mlx5 UAR region for both 32bit = and > - * 64bit architectures while guaranteeing the order of execution with th= e > - * code being executed. > - * > - * @param val > - * value to write in CPU endian format. > - * @param addr > - * Address to write to. > - * @param lock > - * Address of the lock to use for that UAR access. > - */ > -static __rte_always_inline void > -__mlx5_uar_write64(uint64_t val, void *addr, rte_spinlock_t *lock) > -{ > - rte_io_wmb(); > - __mlx5_uar_write64_relaxed(val, addr, lock); > -} > - > -/* Assist macros, used instead of directly calling the functions they wr= ap. */ > -#ifdef RTE_ARCH_64 > -#define mlx5_uar_write64_relaxed(val, dst, lock) \ > - __mlx5_uar_write64_relaxed(val, dst, NULL) > -#define mlx5_uar_write64(val, dst, lock) __mlx5_uar_write64(val, dst, > NULL) > -#else > -#define mlx5_uar_write64_relaxed(val, dst, lock) \ > - __mlx5_uar_write64_relaxed(val, dst, lock) > -#define mlx5_uar_write64(val, dst, lock) __mlx5_uar_write64(val, dst, lo= ck) > -#endif > - > -/** > * Get Memory Pool (MP) from mbuf. If mbuf is indirect, the pool from > which the > * cloned mbuf is allocated is returned instead. > * > @@ -288,131 +71,4 @@ int mlx5_dma_unmap(struct rte_pci_device *pdev, > void *addr, uint64_t iova, > return buf->pool; > } >=20 > -/** > - * Query LKey from a packet buffer for Tx. If not found, add the mempool= . > - * > - * @param txq > - * Pointer to Tx queue structure. > - * @param addr > - * Address to search. > - * > - * @return > - * Searched LKey on success, UINT32_MAX on no match. > - */ > -static __rte_always_inline uint32_t > -mlx5_tx_mb2mr(struct mlx5_txq_data *txq, struct rte_mbuf *mb) > -{ > - struct mlx5_mr_ctrl *mr_ctrl =3D &txq->mr_ctrl; > - uintptr_t addr =3D (uintptr_t)mb->buf_addr; > - uint32_t lkey; > - > - /* Check generation bit to see if there's any change on existing MRs. > */ > - if (unlikely(*mr_ctrl->dev_gen_ptr !=3D mr_ctrl->cur_gen)) > - mlx5_mr_flush_local_cache(mr_ctrl); > - /* Linear search on MR cache array. */ > - lkey =3D mlx5_mr_lookup_lkey(mr_ctrl->cache, &mr_ctrl->mru, > - MLX5_MR_CACHE_N, addr); > - if (likely(lkey !=3D UINT32_MAX)) > - return lkey; > - /* Take slower bottom-half on miss. */ > - return mlx5_tx_mb2mr_bh(txq, mb); > -} > - > -/** > - * Ring TX queue doorbell and flush the update if requested. > - * > - * @param txq > - * Pointer to TX queue structure. > - * @param wqe > - * Pointer to the last WQE posted in the NIC. > - * @param cond > - * Request for write memory barrier after BlueFlame update. > - */ > -static __rte_always_inline void > -mlx5_tx_dbrec_cond_wmb(struct mlx5_txq_data *txq, volatile struct > mlx5_wqe *wqe, > - int cond) > -{ > - uint64_t *dst =3D MLX5_TX_BFREG(txq); > - volatile uint64_t *src =3D ((volatile uint64_t *)wqe); > - > - rte_io_wmb(); > - *txq->qp_db =3D rte_cpu_to_be_32(txq->wqe_ci); > - /* Ensure ordering between DB record and BF copy. */ > - rte_wmb(); > - mlx5_uar_write64_relaxed(*src, dst, txq->uar_lock); > - if (cond) > - rte_wmb(); > -} > - > -/** > - * Ring TX queue doorbell and flush the update by write memory barrier. 
> - * > - * @param txq > - * Pointer to TX queue structure. > - * @param wqe > - * Pointer to the last WQE posted in the NIC. > - */ > -static __rte_always_inline void > -mlx5_tx_dbrec(struct mlx5_txq_data *txq, volatile struct mlx5_wqe *wqe) > -{ > - mlx5_tx_dbrec_cond_wmb(txq, wqe, 1); > -} > - > -/** > - * Convert timestamp from mbuf format to linear counter > - * of Clock Queue completions (24 bits) > - * > - * @param sh > - * Pointer to the device shared context to fetch Tx > - * packet pacing timestamp and parameters. > - * @param ts > - * Timestamp from mbuf to convert. > - * @return > - * positive or zero value - completion ID to wait > - * negative value - conversion error > - */ > -static __rte_always_inline int32_t > -mlx5_txpp_convert_tx_ts(struct mlx5_dev_ctx_shared *sh, uint64_t mts) > -{ > - uint64_t ts, ci; > - uint32_t tick; > - > - do { > - /* > - * Read atomically two uint64_t fields and compare lsb bits. > - * It there is no match - the timestamp was updated in > - * the service thread, data should be re-read. > - */ > - rte_compiler_barrier(); > - ci =3D __atomic_load_n(&sh->txpp.ts.ci_ts, > __ATOMIC_RELAXED); > - ts =3D __atomic_load_n(&sh->txpp.ts.ts, > __ATOMIC_RELAXED); > - rte_compiler_barrier(); > - if (!((ts ^ ci) << (64 - MLX5_CQ_INDEX_WIDTH))) > - break; > - } while (true); > - /* Perform the skew correction, positive value to send earlier. */ > - mts -=3D sh->txpp.skew; > - mts -=3D ts; > - if (unlikely(mts >=3D UINT64_MAX / 2)) { > - /* We have negative integer, mts is in the past. */ > - __atomic_fetch_add(&sh->txpp.err_ts_past, > - 1, __ATOMIC_RELAXED); > - return -1; > - } > - tick =3D sh->txpp.tick; > - MLX5_ASSERT(tick); > - /* Convert delta to completions, round up. */ > - mts =3D (mts + tick - 1) / tick; > - if (unlikely(mts >=3D (1 << MLX5_CQ_INDEX_WIDTH) / 2 - 1)) { > - /* We have mts is too distant future. */ > - __atomic_fetch_add(&sh->txpp.err_ts_future, > - 1, __ATOMIC_RELAXED); > - return -1; > - } > - mts <<=3D 64 - MLX5_CQ_INDEX_WIDTH; > - ci +=3D mts; > - ci >>=3D 64 - MLX5_CQ_INDEX_WIDTH; > - return ci; > -} > - > #endif /* RTE_PMD_MLX5_RXTX_H_ */ > diff --git a/drivers/net/mlx5/mlx5_stats.c b/drivers/net/mlx5/mlx5_stats.= c > index 4dbd831..ae2f566 100644 > --- a/drivers/net/mlx5/mlx5_stats.c > +++ b/drivers/net/mlx5/mlx5_stats.c > @@ -16,8 +16,8 @@ >=20 > #include "mlx5_defs.h" > #include "mlx5.h" > -#include "mlx5_rxtx.h" > #include "mlx5_rx.h" > +#include "mlx5_tx.h" > #include "mlx5_malloc.h" >=20 > /** > diff --git a/drivers/net/mlx5/mlx5_trigger.c > b/drivers/net/mlx5/mlx5_trigger.c > index c88cb22..001c0b5 100644 > --- a/drivers/net/mlx5/mlx5_trigger.c > +++ b/drivers/net/mlx5/mlx5_trigger.c > @@ -15,8 +15,8 @@ >=20 > #include "mlx5.h" > #include "mlx5_mr.h" > -#include "mlx5_rxtx.h" > #include "mlx5_rx.h" > +#include "mlx5_tx.h" > #include "mlx5_utils.h" > #include "rte_pmd_mlx5.h" >=20 > diff --git a/drivers/net/mlx5/mlx5_tx.h b/drivers/net/mlx5/mlx5_tx.h > new file mode 100644 > index 0000000..7f91d04 > --- /dev/null > +++ b/drivers/net/mlx5/mlx5_tx.h > @@ -0,0 +1,371 @@ > +/* SPDX-License-Identifier: BSD-3-Clause > + * Copyright 2021 6WIND S.A. > + * Copyright 2021 Mellanox Technologies, Ltd > + */ > + > +#ifndef RTE_PMD_MLX5_TX_H_ > +#define RTE_PMD_MLX5_TX_H_ > + > +#include > +#include > + > +#include > +#include > +#include > +#include > + > +#include > + > +#include "mlx5.h" > +#include "mlx5_autoconf.h" > +#include "mlx5_mr.h" > + > +/* Mbuf dynamic flag offset for inline. 
*/ > +extern uint64_t rte_net_mlx5_dynf_inline_mask; > + > +struct mlx5_txq_stats { > +#ifdef MLX5_PMD_SOFT_COUNTERS > + uint64_t opackets; /**< Total of successfully sent packets. */ > + uint64_t obytes; /**< Total of successfully sent bytes. */ > +#endif > + uint64_t oerrors; /**< Total number of failed transmitted packets. */ > +}; > + > +/* TX queue send local data. */ > +__extension__ > +struct mlx5_txq_local { > + struct mlx5_wqe *wqe_last; /* last sent WQE pointer. */ > + struct rte_mbuf *mbuf; /* first mbuf to process. */ > + uint16_t pkts_copy; /* packets copied to elts. */ > + uint16_t pkts_sent; /* packets sent. */ > + uint16_t pkts_loop; /* packets sent on loop entry. */ > + uint16_t elts_free; /* available elts remain. */ > + uint16_t wqe_free; /* available wqe remain. */ > + uint16_t mbuf_off; /* data offset in current mbuf. */ > + uint16_t mbuf_nseg; /* number of remaining mbuf. */ > + uint16_t mbuf_free; /* number of inline mbufs to free. */ > +}; > + > +/* TX queue descriptor. */ > +__extension__ > +struct mlx5_txq_data { > + uint16_t elts_head; /* Current counter in (*elts)[]. */ > + uint16_t elts_tail; /* Counter of first element awaiting completion. */ > + uint16_t elts_comp; /* elts index since last completion request. */ > + uint16_t elts_s; /* Number of mbuf elements. */ > + uint16_t elts_m; /* Mask for mbuf elements indices. */ > + /* Fields related to elts mbuf storage. */ > + uint16_t wqe_ci; /* Consumer index for work queue. */ > + uint16_t wqe_pi; /* Producer index for work queue. */ > + uint16_t wqe_s; /* Number of WQ elements. */ > + uint16_t wqe_m; /* Mask Number for WQ elements. */ > + uint16_t wqe_comp; /* WQE index since last completion request. */ > + uint16_t wqe_thres; /* WQE threshold to request completion in CQ. > */ > + /* WQ related fields. */ > + uint16_t cq_ci; /* Consumer index for completion queue. */ > + uint16_t cq_pi; /* Production index for completion queue. */ > + uint16_t cqe_s; /* Number of CQ elements. */ > + uint16_t cqe_m; /* Mask for CQ indices. */ > + /* CQ related fields. */ > + uint16_t elts_n:4; /* elts[] length (in log2). */ > + uint16_t cqe_n:4; /* Number of CQ elements (in log2). */ > + uint16_t wqe_n:4; /* Number of WQ elements (in log2). */ > + uint16_t tso_en:1; /* When set hardware TSO is enabled. */ > + uint16_t tunnel_en:1; > + /* When set TX offload for tunneled packets are supported. */ > + uint16_t swp_en:1; /* Whether SW parser is enabled. */ > + uint16_t vlan_en:1; /* VLAN insertion in WQE is supported. */ > + uint16_t db_nc:1; /* Doorbell mapped to non-cached region. */ > + uint16_t db_heu:1; /* Doorbell heuristic write barrier. */ > + uint16_t fast_free:1; /* mbuf fast free on Tx is enabled. */ > + uint16_t inlen_send; /* Ordinary send data inline size. */ > + uint16_t inlen_empw; /* eMPW max packet size to inline. */ > + uint16_t inlen_mode; /* Minimal data length to inline. */ > + uint32_t qp_num_8s; /* QP number shifted by 8. */ > + uint64_t offloads; /* Offloads for Tx Queue. */ > + struct mlx5_mr_ctrl mr_ctrl; /* MR control descriptor. */ > + struct mlx5_wqe *wqes; /* Work queue. */ > + struct mlx5_wqe *wqes_end; /* Work queue array limit. */ > +#ifdef RTE_LIBRTE_MLX5_DEBUG > + uint32_t *fcqs; /* Free completion queue (debug extended). */ > +#else > + uint16_t *fcqs; /* Free completion queue. */ > +#endif > + volatile struct mlx5_cqe *cqes; /* Completion queue. */ > + volatile uint32_t *qp_db; /* Work queue doorbell. */ > + volatile uint32_t *cq_db; /* Completion queue doorbell. 
*/ > + uint16_t port_id; /* Port ID of device. */ > + uint16_t idx; /* Queue index. */ > + uint64_t ts_mask; /* Timestamp flag dynamic mask. */ > + int32_t ts_offset; /* Timestamp field dynamic offset. */ > + struct mlx5_dev_ctx_shared *sh; /* Shared context. */ > + struct mlx5_txq_stats stats; /* TX queue counters. */ > +#ifndef RTE_ARCH_64 > + rte_spinlock_t *uar_lock; > + /* UAR access lock required for 32bit implementations */ > +#endif > + struct rte_mbuf *elts[0]; > + /* Storage for queued packets, must be the last field. */ > +} __rte_cache_aligned; > + > +enum mlx5_txq_type { > + MLX5_TXQ_TYPE_STANDARD, /* Standard Tx queue. */ > + MLX5_TXQ_TYPE_HAIRPIN, /* Hairpin Tx queue. */ > +}; > + > +/* TX queue control descriptor. */ > +struct mlx5_txq_ctrl { > + LIST_ENTRY(mlx5_txq_ctrl) next; /* Pointer to the next element. */ > + uint32_t refcnt; /* Reference counter. */ > + unsigned int socket; /* CPU socket ID for allocations. */ > + enum mlx5_txq_type type; /* The txq ctrl type. */ > + unsigned int max_inline_data; /* Max inline data. */ > + unsigned int max_tso_header; /* Max TSO header size. */ > + struct mlx5_txq_obj *obj; /* Verbs/DevX queue object. */ > + struct mlx5_priv *priv; /* Back pointer to private data. */ > + off_t uar_mmap_offset; /* UAR mmap offset for non-primary > process. */ > + void *bf_reg; /* BlueFlame register from Verbs. */ > + uint16_t dump_file_n; /* Number of dump files. */ > + struct rte_eth_hairpin_conf hairpin_conf; /* Hairpin configuration. */ > + uint32_t hairpin_status; /* Hairpin binding status. */ > + struct mlx5_txq_data txq; /* Data path structure. */ > + /* Must be the last field in the structure, contains elts[]. */ > +}; > + > +/* mlx5_txq.c */ > + > +int mlx5_tx_queue_start(struct rte_eth_dev *dev, uint16_t queue_id); > +int mlx5_tx_queue_stop(struct rte_eth_dev *dev, uint16_t queue_id); > +int mlx5_tx_queue_start_primary(struct rte_eth_dev *dev, uint16_t > queue_id); > +int mlx5_tx_queue_stop_primary(struct rte_eth_dev *dev, uint16_t > queue_id); > +int mlx5_tx_queue_setup(struct rte_eth_dev *dev, uint16_t idx, uint16_t > desc, > + unsigned int socket, const struct rte_eth_txconf > *conf); > +int mlx5_tx_hairpin_queue_setup > + (struct rte_eth_dev *dev, uint16_t idx, uint16_t desc, > + const struct rte_eth_hairpin_conf *hairpin_conf); > +void mlx5_tx_queue_release(void *dpdk_txq); > +void txq_uar_init(struct mlx5_txq_ctrl *txq_ctrl); > +int mlx5_tx_uar_init_secondary(struct rte_eth_dev *dev, int fd); > +void mlx5_tx_uar_uninit_secondary(struct rte_eth_dev *dev); > +int mlx5_txq_obj_verify(struct rte_eth_dev *dev); > +struct mlx5_txq_ctrl *mlx5_txq_new(struct rte_eth_dev *dev, uint16_t > idx, > + uint16_t desc, unsigned int socket, > + const struct rte_eth_txconf *conf); > +struct mlx5_txq_ctrl *mlx5_txq_hairpin_new > + (struct rte_eth_dev *dev, uint16_t idx, uint16_t desc, > + const struct rte_eth_hairpin_conf *hairpin_conf); > +struct mlx5_txq_ctrl *mlx5_txq_get(struct rte_eth_dev *dev, uint16_t idx= ); > +int mlx5_txq_release(struct rte_eth_dev *dev, uint16_t idx); > +int mlx5_txq_releasable(struct rte_eth_dev *dev, uint16_t idx); > +int mlx5_txq_verify(struct rte_eth_dev *dev); > +void txq_alloc_elts(struct mlx5_txq_ctrl *txq_ctrl); > +void txq_free_elts(struct mlx5_txq_ctrl *txq_ctrl); > +uint64_t mlx5_get_tx_port_offloads(struct rte_eth_dev *dev); > +void mlx5_txq_dynf_timestamp_set(struct rte_eth_dev *dev); > + > +/* mlx5_tx.c */ > + > +uint16_t removed_tx_burst(void *dpdk_txq, struct rte_mbuf **pkts, > + uint16_t pkts_n); > +int 
mlx5_tx_descriptor_status(void *tx_queue, uint16_t offset); > +void mlx5_txq_info_get(struct rte_eth_dev *dev, uint16_t queue_id, > + struct rte_eth_txq_info *qinfo); > +int mlx5_tx_burst_mode_get(struct rte_eth_dev *dev, uint16_t > tx_queue_id, > + struct rte_eth_burst_mode *mode); > + > +/* mlx5_mr.c */ > + > +uint32_t mlx5_tx_mb2mr_bh(struct mlx5_txq_data *txq, struct rte_mbuf > *mb); > +uint32_t mlx5_tx_update_ext_mp(struct mlx5_txq_data *txq, uintptr_t > addr, > + struct rte_mempool *mp); > + > +static __rte_always_inline uint64_t * > +mlx5_tx_bfreg(struct mlx5_txq_data *txq) > +{ > + return MLX5_PROC_PRIV(txq->port_id)->uar_table[txq->idx]; > +} > + > +/** > + * Provide safe 64bit store operation to mlx5 UAR region for both 32bit = and > + * 64bit architectures. > + * > + * @param val > + * value to write in CPU endian format. > + * @param addr > + * Address to write to. > + * @param lock > + * Address of the lock to use for that UAR access. > + */ > +static __rte_always_inline void > +__mlx5_uar_write64_relaxed(uint64_t val, void *addr, > + rte_spinlock_t *lock __rte_unused) > +{ > +#ifdef RTE_ARCH_64 > + *(uint64_t *)addr =3D val; > +#else /* !RTE_ARCH_64 */ > + rte_spinlock_lock(lock); > + *(uint32_t *)addr =3D val; > + rte_io_wmb(); > + *((uint32_t *)addr + 1) =3D val >> 32; > + rte_spinlock_unlock(lock); > +#endif > +} > + > +/** > + * Provide safe 64bit store operation to mlx5 UAR region for both 32bit = and > + * 64bit architectures while guaranteeing the order of execution with th= e > + * code being executed. > + * > + * @param val > + * value to write in CPU endian format. > + * @param addr > + * Address to write to. > + * @param lock > + * Address of the lock to use for that UAR access. > + */ > +static __rte_always_inline void > +__mlx5_uar_write64(uint64_t val, void *addr, rte_spinlock_t *lock) > +{ > + rte_io_wmb(); > + __mlx5_uar_write64_relaxed(val, addr, lock); > +} > + > +/* Assist macros, used instead of directly calling the functions they wr= ap. */ > +#ifdef RTE_ARCH_64 > +#define mlx5_uar_write64_relaxed(val, dst, lock) \ > + __mlx5_uar_write64_relaxed(val, dst, NULL) > +#define mlx5_uar_write64(val, dst, lock) __mlx5_uar_write64(val, dst, > NULL) > +#else > +#define mlx5_uar_write64_relaxed(val, dst, lock) \ > + __mlx5_uar_write64_relaxed(val, dst, lock) > +#define mlx5_uar_write64(val, dst, lock) __mlx5_uar_write64(val, dst, lo= ck) > +#endif > + > +/** > + * Query LKey from a packet buffer for Tx. If not found, add the mempool= . > + * > + * @param txq > + * Pointer to Tx queue structure. > + * @param addr > + * Address to search. > + * > + * @return > + * Searched LKey on success, UINT32_MAX on no match. > + */ > +static __rte_always_inline uint32_t > +mlx5_tx_mb2mr(struct mlx5_txq_data *txq, struct rte_mbuf *mb) > +{ > + struct mlx5_mr_ctrl *mr_ctrl =3D &txq->mr_ctrl; > + uintptr_t addr =3D (uintptr_t)mb->buf_addr; > + uint32_t lkey; > + > + /* Check generation bit to see if there's any change on existing MRs. > */ > + if (unlikely(*mr_ctrl->dev_gen_ptr !=3D mr_ctrl->cur_gen)) > + mlx5_mr_flush_local_cache(mr_ctrl); > + /* Linear search on MR cache array. */ > + lkey =3D mlx5_mr_lookup_lkey(mr_ctrl->cache, &mr_ctrl->mru, > + MLX5_MR_CACHE_N, addr); > + if (likely(lkey !=3D UINT32_MAX)) > + return lkey; > + /* Take slower bottom-half on miss. */ > + return mlx5_tx_mb2mr_bh(txq, mb); > +} > + > +/** > + * Ring TX queue doorbell and flush the update if requested. > + * > + * @param txq > + * Pointer to TX queue structure. 
> + * @param wqe > + * Pointer to the last WQE posted in the NIC. > + * @param cond > + * Request for write memory barrier after BlueFlame update. > + */ > +static __rte_always_inline void > +mlx5_tx_dbrec_cond_wmb(struct mlx5_txq_data *txq, volatile struct > mlx5_wqe *wqe, > + int cond) > +{ > + uint64_t *dst =3D mlx5_tx_bfreg(txq); > + volatile uint64_t *src =3D ((volatile uint64_t *)wqe); > + > + rte_io_wmb(); > + *txq->qp_db =3D rte_cpu_to_be_32(txq->wqe_ci); > + /* Ensure ordering between DB record and BF copy. */ > + rte_wmb(); > + mlx5_uar_write64_relaxed(*src, dst, txq->uar_lock); > + if (cond) > + rte_wmb(); > +} > + > +/** > + * Ring TX queue doorbell and flush the update by write memory barrier. > + * > + * @param txq > + * Pointer to TX queue structure. > + * @param wqe > + * Pointer to the last WQE posted in the NIC. > + */ > +static __rte_always_inline void > +mlx5_tx_dbrec(struct mlx5_txq_data *txq, volatile struct mlx5_wqe *wqe) > +{ > + mlx5_tx_dbrec_cond_wmb(txq, wqe, 1); > +} > + > +/** > + * Convert timestamp from mbuf format to linear counter > + * of Clock Queue completions (24 bits). > + * > + * @param sh > + * Pointer to the device shared context to fetch Tx > + * packet pacing timestamp and parameters. > + * @param ts > + * Timestamp from mbuf to convert. > + * @return > + * positive or zero value - completion ID to wait. > + * negative value - conversion error. > + */ > +static __rte_always_inline int32_t > +mlx5_txpp_convert_tx_ts(struct mlx5_dev_ctx_shared *sh, uint64_t mts) > +{ > + uint64_t ts, ci; > + uint32_t tick; > + > + do { > + /* > + * Read atomically two uint64_t fields and compare lsb bits. > + * It there is no match - the timestamp was updated in > + * the service thread, data should be re-read. > + */ > + rte_compiler_barrier(); > + ci =3D __atomic_load_n(&sh->txpp.ts.ci_ts, > __ATOMIC_RELAXED); > + ts =3D __atomic_load_n(&sh->txpp.ts.ts, > __ATOMIC_RELAXED); > + rte_compiler_barrier(); > + if (!((ts ^ ci) << (64 - MLX5_CQ_INDEX_WIDTH))) > + break; > + } while (true); > + /* Perform the skew correction, positive value to send earlier. */ > + mts -=3D sh->txpp.skew; > + mts -=3D ts; > + if (unlikely(mts >=3D UINT64_MAX / 2)) { > + /* We have negative integer, mts is in the past. */ > + __atomic_fetch_add(&sh->txpp.err_ts_past, > + 1, __ATOMIC_RELAXED); > + return -1; > + } > + tick =3D sh->txpp.tick; > + MLX5_ASSERT(tick); > + /* Convert delta to completions, round up. */ > + mts =3D (mts + tick - 1) / tick; > + if (unlikely(mts >=3D (1 << MLX5_CQ_INDEX_WIDTH) / 2 - 1)) { > + /* We have mts is too distant future. 
*/ > + __atomic_fetch_add(&sh->txpp.err_ts_future, > + 1, __ATOMIC_RELAXED); > + return -1; > + } > + mts <<=3D 64 - MLX5_CQ_INDEX_WIDTH; > + ci +=3D mts; > + ci >>=3D 64 - MLX5_CQ_INDEX_WIDTH; > + return ci; > +} > + > +#endif /* RTE_PMD_MLX5_TX_H_ */ > diff --git a/drivers/net/mlx5/mlx5_txpp.c b/drivers/net/mlx5/mlx5_txpp.c > index 89e1c5d..d90399a 100644 > --- a/drivers/net/mlx5/mlx5_txpp.c > +++ b/drivers/net/mlx5/mlx5_txpp.c > @@ -16,8 +16,8 @@ > #include >=20 > #include "mlx5.h" > -#include "mlx5_rxtx.h" > #include "mlx5_rx.h" > +#include "mlx5_tx.h" > #include "mlx5_common_os.h" >=20 > static_assert(sizeof(struct mlx5_cqe_ts) =3D=3D sizeof(rte_int128_t), > diff --git a/drivers/net/mlx5/mlx5_txq.c b/drivers/net/mlx5/mlx5_txq.c > index cd13eb9..b8a1657 100644 > --- a/drivers/net/mlx5/mlx5_txq.c > +++ b/drivers/net/mlx5/mlx5_txq.c > @@ -23,7 +23,7 @@ > #include "mlx5_defs.h" > #include "mlx5_utils.h" > #include "mlx5.h" > -#include "mlx5_rxtx.h" > +#include "mlx5_tx.h" > #include "mlx5_autoconf.h" >=20 > /** > diff --git a/drivers/net/mlx5/windows/mlx5_os.c > b/drivers/net/mlx5/windows/mlx5_os.c > index 79eac80..814063b 100644 > --- a/drivers/net/mlx5/windows/mlx5_os.c > +++ b/drivers/net/mlx5/windows/mlx5_os.c > @@ -24,6 +24,7 @@ > #include "mlx5_utils.h" > #include "mlx5_rxtx.h" > #include "mlx5_rx.h" > +#include "mlx5_tx.h" > #include "mlx5_autoconf.h" > #include "mlx5_mr.h" > #include "mlx5_flow.h" > -- > 1.8.3.1
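
The failure reported above is the standard C incomplete-type error: after the include rework, mlx5_txq.c apparently no longer pulls in (indirectly, through the old mlx5_rxtx.h chain) the header that fully defines struct rte_pci_device, while txq_set_params() still dereferences priv->pci_dev. A minimal standalone illustration of the failure mode and its cure is sketched below; the struct layout and the file/function names are illustrative assumptions, not the real DPDK definitions:

/*
 * Standalone illustration (not DPDK code).  The struct below stands in
 * for the PCI bus header that fully defines the device type; its layout
 * is an illustrative assumption only.
 */
#include <stdint.h>

struct rte_pci_device {              /* normally provided by the bus header */
	struct {
		uint16_t vendor_id;
		uint16_t device_id;
	} id;
};

struct priv {
	struct rte_pci_device *pci_dev;
};

/*
 * If only a forward declaration "struct rte_pci_device;" were visible
 * here, the member access below would fail with exactly
 * "dereferencing pointer to incomplete type 'struct rte_pci_device'".
 */
int txq_device_matches(const struct priv *priv, uint16_t device_id)
{
	return priv->pci_dev->id.device_id == device_id;
}

The equivalent cure in the driver would presumably be to restore the lost include in mlx5_txq.c (or in the new mlx5_tx.h), i.e. the PCI bus header that defines struct rte_pci_device; which header and which file is the right place would need to be confirmed against the tree.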