From mboxrd@z Thu Jan 1 00:00:00 1970
From: Yongseok Koh
To: Slava Ovsiienko
CC: "dev@dpdk.org"
Thread-Topic: [PATCH v4 2/8] net/mlx5: add Tx datapath related devargs
Date: Mon, 22 Jul 2019 05:32:46 +0000
In-Reply-To: <1563719100-368-3-git-send-email-viacheslavo@mellanox.com>
References: <1563346400-1762-1-git-send-email-viacheslavo@mellanox.com>
 <1563719100-368-1-git-send-email-viacheslavo@mellanox.com>
 <1563719100-368-3-git-send-email-viacheslavo@mellanox.com>
Content-Type: text/plain; charset="us-ascii"
MIME-Version: 1.0
Subject: Re: [dpdk-dev] [PATCH v4 2/8] net/mlx5: add Tx datapath related devargs
List-Id: DPDK patches and discussions
Errors-To: dev-bounces@dpdk.org
Sender: "dev"

> On Jul 21, 2019, at 7:24 AM, Viacheslav Ovsiienko wrote:
> 
> This patch introduces new mlx5 PMD devarg options:
> 
> - txq_inline_min - specifies the minimal amount of data to be inlined into
>   the WQE during Tx operations. NICs may require this minimal data amount
>   to operate correctly. The exact value may depend on the NIC operation
>   mode, requested offloads, etc.
> 
> - txq_inline_max - specifies the maximal packet length to be completely
>   inlined into the WQE Ethernet Segment for the ordinary SEND method. If a
>   packet is larger than the specified value, the packet data won't be
>   copied by the driver at all; the data buffer is addressed with a pointer.
>   If the packet length is less than or equal, all packet data will be
>   copied into the WQE.
> 
> - txq_inline_mpw - specifies the maximal packet length to be completely
>   inlined into the WQE for the Enhanced MPW method.
> 
> Driver documentation is also updated.
> 
> Signed-off-by: Viacheslav Ovsiienko
> ---

Acked-by: Yongseok Koh

> doc/guides/nics/mlx5.rst               | 155 +++++++++++++++++++++---------
> doc/guides/rel_notes/release_19_08.rst |   2 +
> drivers/net/mlx5/mlx5.c                |  29 +++++-
> drivers/net/mlx5/mlx5.h                |   4 +
> 4 files changed, 140 insertions(+), 50 deletions(-)
> 
> diff --git a/doc/guides/nics/mlx5.rst b/doc/guides/nics/mlx5.rst
> index 5cf1e76..7e87344 100644
> --- a/doc/guides/nics/mlx5.rst
> +++ b/doc/guides/nics/mlx5.rst
> @@ -351,24 +351,102 @@ Run-time configuration
>  - ``txq_inline`` parameter [int]
> 
>    Amount of data to be inlined during TX operations. This parameter is
> -  deprecated and ignored, kept for compatibility issue.
> +  deprecated and converted to the new parameter ``txq_inline_max``,
> +  providing partial compatibility.
> 
>  - ``txqs_min_inline`` parameter [int]
> 
> -  Enable inline send only when the number of TX queues is greater or equal
> +  Enable inline data send only when the number of TX queues is greater or equal
>    to this value.
> 
> -  This option should be used in combination with ``txq_inline`` above.
> -
> -  On ConnectX-4, ConnectX-4 LX, ConnectX-5, ConnectX-6 and BlueField without
> -  Enhanced MPW:
> -
> -  - Disabled by default.
> -  - In case ``txq_inline`` is set recommendation is 4.
> -
> -  On ConnectX-5, ConnectX-6 and BlueField with Enhanced MPW:
> -
> -  - Set to 8 by default.
> +  This option should be used in combination with ``txq_inline_max`` and
> +  ``txq_inline_mpw`` below and does not affect the ``txq_inline_min``
> +  settings above.
> +
> +  If this option is not specified, the default value 16 is used for
> +  BlueField and 8 for other platforms.
> +
> +  Data inlining consumes CPU cycles, so this option is intended to enable
> +  inlining automatically when there are enough Tx queues, which means there
> +  are enough CPU cores, PCI bandwidth is getting more critical, and the CPU
> +  is no longer supposed to be the bottleneck.
> +
> +  Copying data into the WQE improves latency and can improve PPS
> +  performance when PCI back pressure is detected; it may be useful for
> +  scenarios involving heavy traffic on many queues.
> +
> +  Because additional software logic is necessary to handle this mode, this
> +  option should be used with care, as it may lower performance when back
> +  pressure is not expected.
> +
> +- ``txq_inline_min`` parameter [int]
> +
> +  Minimal amount of data to be inlined into the WQE during Tx operations.
> +  NICs may require this minimal data amount to operate correctly. The exact
> +  value may depend on the NIC operation mode, requested offloads, etc.
> +
> +  If the ``txq_inline_min`` key is present, the specified value (which may
> +  be aligned by the driver in order not to exceed the limits and to provide
> +  better descriptor space utilization) will be used by the driver, and it
> +  is guaranteed that the requested number of data bytes is inlined into the
> +  WQE regardless of the other inline settings. This key may also update the
> +  ``txq_inline_max`` value (whether defaulted or specified explicitly in
> +  devargs) to reserve space for the inline data.
> +
> +  If the ``txq_inline_min`` key is not present, the value may be queried by
> +  the driver from the NIC via DevX, if this feature is available. If DevX
> +  is not enabled/supported, the value 18 (assuming an L2 header including
> +  VLAN) is set for ConnectX-4, the value 58 (assuming L2-L4 headers,
> +  required by configurations over E-Switch) is set for ConnectX-4 Lx, and 0
> +  is set by default for ConnectX-5 and newer NICs. If a packet is shorter
> +  than the ``txq_inline_min`` value, the entire packet is inlined.
> +
> +  For ConnectX-4 and ConnectX-4 Lx NICs the driver does not allow setting
> +  this value below 18 (minimal L2 header, including VLAN).
> +
> +  Please note that this minimal data inlining disengages the eMPW feature
> +  (Enhanced Multi-Packet Write), because the latter does not support
> +  partial packet inlining. This is not very critical, as minimal data
> +  inlining is mostly required by ConnectX-4 and ConnectX-4 Lx, and these
> +  NICs do not support the eMPW feature.
> +
> +- ``txq_inline_max`` parameter [int]
> +
> +  Specifies the maximal packet length to be completely inlined into the WQE
> +  Ethernet Segment for the ordinary SEND method. If a packet is larger than
> +  the specified value, the packet data won't be copied by the driver at
> +  all; the data buffer is addressed with a pointer. If the packet length is
> +  less than or equal, all packet data will be copied into the WQE. This may
> +  improve PCI bandwidth utilization for short packets significantly, but
> +  requires extra CPU cycles.
> +
> +  The data inline feature is controlled by the number of Tx queues: if the
> +  number of Tx queues is larger than the ``txqs_min_inline`` key parameter,
> +  the inline feature is engaged; if there are not enough Tx queues (which
> +  means there are not enough CPU cores and CPU resources are scarce), data
> +  inlining is not performed by the driver. Setting ``txqs_min_inline`` to
> +  zero always enables data inlining.
> +
> +  The default ``txq_inline_max`` value is 290. The specified value may be
> +  adjusted by the driver in order not to exceed the limit (930 bytes) and
> +  to provide better WQE space filling without gaps; the adjustment is
> +  reflected in the debug log.
> +
> +- ``txq_inline_mpw`` parameter [int]
> +
> +  Specifies the maximal packet length to be completely inlined into the WQE
> +  for the Enhanced MPW method. If a packet is larger than the specified
> +  value, the packet data won't be copied, and the data buffer is addressed
> +  with a pointer. If the packet length is less than or equal, all packet
> +  data will be copied into the WQE. This may improve PCI bandwidth
> +  utilization for short packets significantly, but requires extra CPU
> +  cycles.
> +
> +  The data inline feature is controlled by the number of Tx queues: if the
> +  number of Tx queues is larger than the ``txqs_min_inline`` key parameter,
> +  the inline feature is engaged; if there are not enough Tx queues (which
> +  means there are not enough CPU cores and CPU resources are scarce), data
> +  inlining is not performed by the driver. Setting ``txqs_min_inline`` to
> +  zero always enables data inlining.
> +
> +  The default ``txq_inline_mpw`` value is 188. The specified value may be
> +  adjusted by the driver in order not to exceed the limit (930 bytes) and
> +  to provide better WQE space filling without gaps; the adjustment is
> +  reflected in the debug log.
> +  Since multiple packets may be included in the same WQE with the Enhanced
> +  Multi-Packet Write method and the overall WQE size is limited, it is not
> +  recommended to specify large values for ``txq_inline_mpw``.
> 
>  - ``txqs_max_vec`` parameter [int]
> 
> @@ -376,47 +454,34 @@ Run-time configuration
>    equal to this value. This parameter is deprecated and ignored, kept
>    for compatibility issue to not prevent driver from probing.
> 
> -- ``txq_mpw_en`` parameter [int]
> -
> -  A nonzero value enables multi-packet send (MPS) for ConnectX-4 Lx and
> -  enhanced multi-packet send (Enhanced MPS) for ConnectX-5, ConnectX-6 and BlueField.
> -  MPS allows the TX burst function to pack up multiple packets in a
> -  single descriptor session in order to save PCI bandwidth and improve
> -  performance at the cost of a slightly higher CPU usage. When
> -  ``txq_inline`` is set along with ``txq_mpw_en``, TX burst function tries
> -  to copy entire packet data on to TX descriptor instead of including
> -  pointer of packet only if there is enough room remained in the
> -  descriptor. ``txq_inline`` sets per-descriptor space for either pointers
> -  or inlined packets. In addition, Enhanced MPS supports hybrid mode -
> -  mixing inlined packets and pointers in the same descriptor.
> -
> -  This option cannot be used with certain offloads such as ``DEV_TX_OFFLOAD_TCP_TSO,
> -  DEV_TX_OFFLOAD_VXLAN_TNL_TSO, DEV_TX_OFFLOAD_GRE_TNL_TSO, DEV_TX_OFFLOAD_VLAN_INSERT``.
> -  When those offloads are requested the MPS send function will not be used.
> -
> -  It is currently only supported on the ConnectX-4 Lx, ConnectX-5, ConnectX-6 and BlueField
> -  families of adapters.
> -  On ConnectX-4 Lx the MPW is considered un-secure hence disabled by default.
> -  Users which enable the MPW should be aware that application which provides incorrect
> -  mbuf descriptors in the Tx burst can lead to serious errors in the host including, on some cases,
> -  NIC to get stuck.
> -  On ConnectX-5, ConnectX-6 and BlueField the MPW is secure and enabled by default.
> -
>  - ``txq_mpw_hdr_dseg_en`` parameter [int]
> 
>    A nonzero value enables including two pointers in the first block of TX
>    descriptor. The parameter is deprecated and ignored, kept for compatibility
>    issue.
> 
> -  Effective only when Enhanced MPS is supported. Disabled by default.
> -
>  - ``txq_max_inline_len`` parameter [int]
> 
>    Maximum size of packet to be inlined. This limits the size of packet to
>    be inlined. If the size of a packet is larger than configured value, the
>    packet isn't inlined even though there's enough space remained in the
>    descriptor. Instead, the packet is included with pointer. This parameter
> -  is deprecated.
> +  is deprecated and converted directly to ``txq_inline_mpw``, providing
> +  full compatibility. Valid only if the eMPW feature is engaged.
> +
> +- ``txq_mpw_en`` parameter [int]
> +
> +  A nonzero value enables Enhanced Multi-Packet Write (eMPW) for
> +  ConnectX-5, ConnectX-6 and BlueField. eMPW allows the TX burst function
> +  to pack up multiple packets in a single descriptor session in order to
> +  save PCI bandwidth and improve performance at the cost of a slightly
> +  higher CPU usage. When ``txq_inline_mpw`` is set along with
> +  ``txq_mpw_en``, the TX burst function copies entire packet data into the
> +  TX descriptor instead of including a pointer to the packet.
> +
> +  The Enhanced Multi-Packet Write feature is enabled by default if the NIC
> +  supports it and can be disabled by explicitly specifying 0 for the
> +  ``txq_mpw_en`` option. Also, if minimal data inlining is requested by a
> +  non-zero ``txq_inline_min`` option or reported by the NIC, the eMPW
> +  feature is disengaged.
> 
>  - ``tx_vec_en`` parameter [int]
> 
> @@ -424,12 +489,6 @@ Run-time configuration
>    NICs if the number of global Tx queues on the port is less than
>    ``txqs_max_vec``. The parameter is deprecated and ignored.
> 
> -  This option cannot be used with certain offloads such as ``DEV_TX_OFFLOAD_TCP_TSO,
> -  DEV_TX_OFFLOAD_VXLAN_TNL_TSO, DEV_TX_OFFLOAD_GRE_TNL_TSO, DEV_TX_OFFLOAD_VLAN_INSERT``.
> -  When those offloads are requested the MPS send function will not be used.
> -
> -  Enabled by default on ConnectX-5, ConnectX-6 and BlueField.
> -
>  - ``rx_vec_en`` parameter [int]
> 
>    A nonzero value enables Rx vector if the port is not configured in
> diff --git a/doc/guides/rel_notes/release_19_08.rst b/doc/guides/rel_notes/release_19_08.rst
> index 1bf9eb8..6c382cb 100644
> --- a/doc/guides/rel_notes/release_19_08.rst
> +++ b/doc/guides/rel_notes/release_19_08.rst
> @@ -116,6 +116,8 @@ New Features
>    * Added support for IP-in-IP tunnel.
>    * Accelerate flows with count action creation and destroy.
>    * Accelerate flows counter query.
> +  * Improved Tx datapath performance with enabled HW offloads.
> +
> 
>  * **Updated Solarflare network PMD.**
> 
> diff --git a/drivers/net/mlx5/mlx5.c b/drivers/net/mlx5/mlx5.c
> index d4f0eb2..bbf2583 100644
> --- a/drivers/net/mlx5/mlx5.c
> +++ b/drivers/net/mlx5/mlx5.c
> @@ -72,6 +72,15 @@
>  /* Device parameter to configure inline send. Deprecated, ignored. */
>  #define MLX5_TXQ_INLINE "txq_inline"
> 
> +/* Device parameter to limit packet size to inline with ordinary SEND. */
> +#define MLX5_TXQ_INLINE_MAX "txq_inline_max"
> +
> +/* Device parameter to configure minimal data size to inline. */
> +#define MLX5_TXQ_INLINE_MIN "txq_inline_min"
> +
> +/* Device parameter to limit packet size to inline with Enhanced MPW. */
> +#define MLX5_TXQ_INLINE_MPW "txq_inline_mpw"
> +
>  /*
>   * Device parameter to configure the number of TX queues threshold for
>   * enabling inline send.
>   */
> @@ -1006,7 +1015,15 @@ struct mlx5_dev_spawn_data {
>  	} else if (strcmp(MLX5_RXQS_MIN_MPRQ, key) == 0) {
>  		config->mprq.min_rxqs_num = tmp;
>  	} else if (strcmp(MLX5_TXQ_INLINE, key) == 0) {
> -		DRV_LOG(WARNING, "%s: deprecated parameter, ignored", key);
> +		DRV_LOG(WARNING, "%s: deprecated parameter,"
> +			" converted to txq_inline_max", key);
> +		config->txq_inline_max = tmp;
> +	} else if (strcmp(MLX5_TXQ_INLINE_MAX, key) == 0) {
> +		config->txq_inline_max = tmp;
> +	} else if (strcmp(MLX5_TXQ_INLINE_MIN, key) == 0) {
> +		config->txq_inline_min = tmp;
> +	} else if (strcmp(MLX5_TXQ_INLINE_MPW, key) == 0) {
> +		config->txq_inline_mpw = tmp;
>  	} else if (strcmp(MLX5_TXQS_MIN_INLINE, key) == 0) {
>  		config->txqs_inline = tmp;
>  	} else if (strcmp(MLX5_TXQS_MAX_VEC, key) == 0) {
> @@ -1016,7 +1033,9 @@ struct mlx5_dev_spawn_data {
>  	} else if (strcmp(MLX5_TXQ_MPW_HDR_DSEG_EN, key) == 0) {
>  		DRV_LOG(WARNING, "%s: deprecated parameter, ignored", key);
>  	} else if (strcmp(MLX5_TXQ_MAX_INLINE_LEN, key) == 0) {
> -		DRV_LOG(WARNING, "%s: deprecated parameter, ignored", key);
> +		DRV_LOG(WARNING, "%s: deprecated parameter,"
> +			" converted to txq_inline_mpw", key);
> +		config->txq_inline_mpw = tmp;
>  	} else if (strcmp(MLX5_TX_VEC_EN, key) == 0) {
>  		DRV_LOG(WARNING, "%s: deprecated parameter, ignored", key);
>  	} else if (strcmp(MLX5_RX_VEC_EN, key) == 0) {
> @@ -1064,6 +1083,9 @@ struct mlx5_dev_spawn_data {
>  		MLX5_RX_MPRQ_MAX_MEMCPY_LEN,
>  		MLX5_RXQS_MIN_MPRQ,
>  		MLX5_TXQ_INLINE,
> +		MLX5_TXQ_INLINE_MIN,
> +		MLX5_TXQ_INLINE_MAX,
> +		MLX5_TXQ_INLINE_MPW,
>  		MLX5_TXQS_MIN_INLINE,
>  		MLX5_TXQS_MAX_VEC,
>  		MLX5_TXQ_MPW_EN,
> @@ -2026,6 +2048,9 @@ struct mlx5_dev_spawn_data {
>  		.hw_padding = 0,
>  		.mps = MLX5_ARG_UNSET,
>  		.rx_vec_en = 1,
> +		.txq_inline_max = MLX5_ARG_UNSET,
> +		.txq_inline_min = MLX5_ARG_UNSET,
> +		.txq_inline_mpw = MLX5_ARG_UNSET,
>  		.txqs_inline = MLX5_ARG_UNSET,
>  		.vf_nl_en = 1,
>  		.mr_ext_memseg_en = 1,
> diff --git a/drivers/net/mlx5/mlx5.h b/drivers/net/mlx5/mlx5.h
> index 354f6bc..86f005d 100644
> --- a/drivers/net/mlx5/mlx5.h
> +++ b/drivers/net/mlx5/mlx5.h
> @@ -198,6 +198,7 @@ struct mlx5_dev_config {
>  	unsigned int cqe_comp:1; /* CQE compression is enabled. */
>  	unsigned int cqe_pad:1; /* CQE padding is enabled. */
>  	unsigned int tso:1; /* Whether TSO is supported. */
> +	unsigned int tx_inline:1; /* Engage TX data inlining. */
>  	unsigned int rx_vec_en:1; /* Rx vector is enabled. */
>  	unsigned int mr_ext_memseg_en:1;
>  	/* Whether memseg should be extended for MR creation. */
> @@ -223,6 +224,9 @@ struct mlx5_dev_config {
>  	unsigned int ind_table_max_size; /* Maximum indirection table size. */
>  	unsigned int max_dump_files_num; /* Maximum dump files per queue. */
>  	int txqs_inline; /* Queue number threshold for inlining. */
> +	int txq_inline_min; /* Minimal amount of data bytes to inline. */
> +	int txq_inline_max; /* Max packet size for inlining with SEND. */
> +	int txq_inline_mpw; /* Max packet size for inlining with eMPW. */
>  	struct mlx5_hca_attr hca_attr; /* HCA attributes. */
>  };
> 
> -- 
> 1.8.3.1
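
[Editor's note] The deprecated-key conversion in the mlx5.c hunk above (legacy ``txq_inline`` mapped onto ``txq_inline_max``, legacy ``txq_max_inline_len`` mapped onto ``txq_inline_mpw``, other legacy Tx keys warned about and ignored) can be sketched as a small Python model. This is an illustrative sketch only, not the driver's C code: the dict stands in for ``struct mlx5_dev_config``, and ``ARG_UNSET`` stands in for ``MLX5_ARG_UNSET``.

```python
ARG_UNSET = -1  # stands in for MLX5_ARG_UNSET

# Deprecated keys that are converted to a new setting rather than dropped.
CONVERTED = {
    "txq_inline": "txq_inline_max",
    "txq_max_inline_len": "txq_inline_mpw",
}

# Deprecated keys that only trigger a warning and are otherwise ignored.
IGNORED = {"txqs_max_vec", "txq_mpw_hdr_dseg_en", "tx_vec_en"}


def parse_tx_devargs(devargs):
    """Map {key: int} devarg pairs onto the Tx inline config fields."""
    config = {
        "txq_inline_min": ARG_UNSET,
        "txq_inline_max": ARG_UNSET,
        "txq_inline_mpw": ARG_UNSET,
        "txqs_inline": ARG_UNSET,
    }
    for key, tmp in devargs.items():
        if key in CONVERTED:
            # e.g. "txq_inline: deprecated parameter, converted to txq_inline_max"
            config[CONVERTED[key]] = tmp
        elif key in IGNORED:
            pass  # the real driver emits a deprecation warning here
        elif key in config:
            config[key] = tmp
        else:
            raise ValueError("unknown parameter: %s" % key)
    return config
```

With this model, a legacy key still takes effect: ``parse_tx_devargs({"txq_inline": 256})`` sets ``txq_inline_max`` to 256 while the other fields stay unset. On a real port the keys go into the device argument string, e.g. ``testpmd -w <PCI address>,txq_inline_max=290,txqs_min_inline=4`` (PCI address and queue threshold are placeholders, not values from this patch).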