From mboxrd@z Thu Jan  1 00:00:00 1970
From: Shahaf Shuler <shahafs@mellanox.com>
To: ferruh.yigit@intel.com, thomas@monjalon.net
Cc: arybchenko@solarflare.com, konstantin.ananyev@intel.com,
 jerin.jacob@caviumnetworks.com, dev@dpdk.org
Date: Thu, 28 Sep 2017 21:54:14 +0300
X-Mailer: git-send-email 2.12.0
MIME-Version: 1.0
Content-Type: text/plain
Subject: [dpdk-dev] [PATCH v5 1/3] ethdev: introduce Rx queue offloads API
List-Id: DPDK patches and discussions

Introduce a new API to configure Rx offloads.

In the new API, offloads are divided into per-port and per-queue
offloads. The PMD reports the capability for each of them.
Offloads are enabled using the existing DEV_RX_OFFLOAD_* flags.
To enable a per-port offload, the offload should be set on both the
device configuration and the queue configuration.
To enable a per-queue offload, the offload can be set only on the
queue configuration.

Applications should set the ignore_offload_bitfield bit in the rxmode
structure in order to move to the new API.

The old Rx offloads API is kept in the meantime, in order to enable a
smooth transition for PMDs and applications to the new API.

Signed-off-by: Shahaf Shuler <shahafs@mellanox.com>
---
 doc/guides/nics/features.rst  |  33 ++++----
 lib/librte_ether/rte_ethdev.c | 156 +++++++++++++++++++++++++++++++++----
 lib/librte_ether/rte_ethdev.h |  51 +++++++++++-
 3 files changed, 210 insertions(+), 30 deletions(-)

diff --git a/doc/guides/nics/features.rst b/doc/guides/nics/features.rst
index 37ffbc68c..4e68144ef 100644
--- a/doc/guides/nics/features.rst
+++ b/doc/guides/nics/features.rst
@@ -179,7 +179,7 @@ Jumbo frame
 
 Supports Rx jumbo frames.
 
-* **[uses] user config**: ``dev_conf.rxmode.jumbo_frame``,
+* **[uses] rte_eth_rxconf,rte_eth_rxmode**: ``offloads:DEV_RX_OFFLOAD_JUMBO_FRAME``.
   ``dev_conf.rxmode.max_rx_pkt_len``.
 * **[related] rte_eth_dev_info**: ``max_rx_pktlen``.
 * **[related] API**: ``rte_eth_dev_set_mtu()``.
@@ -192,7 +192,7 @@ Scattered Rx
 
 Supports receiving segmented mbufs.
 
-* **[uses] user config**: ``dev_conf.rxmode.enable_scatter``.
+* **[uses] rte_eth_rxconf,rte_eth_rxmode**: ``offloads:DEV_RX_OFFLOAD_SCATTER``.
 * **[implements] datapath**: ``Scattered Rx function``.
 * **[implements] rte_eth_dev_data**: ``scattered_rx``.
 * **[provides] eth_dev_ops**: ``rxq_info_get:scattered_rx``.
@@ -206,11 +206,11 @@ LRO
 
 Supports Large Receive Offload.
 
-* **[uses] user config**: ``dev_conf.rxmode.enable_lro``.
+* **[uses] rte_eth_rxconf,rte_eth_rxmode**: ``offloads:DEV_RX_OFFLOAD_TCP_LRO``.
 * **[implements] datapath**: ``LRO functionality``.
 * **[implements] rte_eth_dev_data**: ``lro``.
 * **[provides] mbuf**: ``mbuf.ol_flags:PKT_RX_LRO``, ``mbuf.tso_segsz``.
-* **[provides] rte_eth_dev_info**: ``rx_offload_capa:DEV_RX_OFFLOAD_TCP_LRO``.
+* **[provides] rte_eth_dev_info**: ``rx_offload_capa,rx_queue_offload_capa:DEV_RX_OFFLOAD_TCP_LRO``.
 
 .. _nic_features_tso:
 
@@ -363,7 +363,7 @@ VLAN filter
 
 Supports filtering of a VLAN Tag identifier.
 
-* **[uses] user config**: ``dev_conf.rxmode.hw_vlan_filter``.
+* **[uses] rte_eth_rxconf,rte_eth_rxmode**: ``offloads:DEV_RX_OFFLOAD_VLAN_FILTER``.
 * **[implements] eth_dev_ops**: ``vlan_filter_set``.
 * **[related] API**: ``rte_eth_dev_vlan_filter()``.
 
@@ -499,7 +499,7 @@ CRC offload
 
 Supports CRC stripping by hardware.
 
-* **[uses] user config**: ``dev_conf.rxmode.hw_strip_crc``.
+* **[uses] rte_eth_rxconf,rte_eth_rxmode**: ``offloads:DEV_RX_OFFLOAD_CRC_STRIP``.
 
 .. _nic_features_vlan_offload:
 
@@ -509,11 +509,10 @@ VLAN offload
 
 Supports VLAN offload to hardware.
 
-* **[uses] user config**: ``dev_conf.rxmode.hw_vlan_strip``,
-  ``dev_conf.rxmode.hw_vlan_filter``, ``dev_conf.rxmode.hw_vlan_extend``.
+* **[uses] rte_eth_rxconf,rte_eth_rxmode**: ``offloads:DEV_RX_OFFLOAD_VLAN_STRIP,DEV_RX_OFFLOAD_VLAN_FILTER,DEV_RX_OFFLOAD_VLAN_EXTEND``.
 * **[implements] eth_dev_ops**: ``vlan_offload_set``.
 * **[provides] mbuf**: ``mbuf.ol_flags:PKT_RX_VLAN_STRIPPED``, ``mbuf.vlan_tci``.
-* **[provides] rte_eth_dev_info**: ``rx_offload_capa:DEV_RX_OFFLOAD_VLAN_STRIP``,
+* **[provides] rte_eth_dev_info**: ``rx_offload_capa,rx_queue_offload_capa:DEV_RX_OFFLOAD_VLAN_STRIP``,
   ``tx_offload_capa:DEV_TX_OFFLOAD_VLAN_INSERT``.
 * **[related] API**: ``rte_eth_dev_set_vlan_offload()``,
   ``rte_eth_dev_get_vlan_offload()``.
 
@@ -526,10 +525,11 @@ QinQ offload
 
 Supports QinQ (queue in queue) offload.
 
+* **[uses] rte_eth_rxconf,rte_eth_rxmode**: ``offloads:DEV_RX_OFFLOAD_QINQ_STRIP``.
 * **[uses] mbuf**: ``mbuf.ol_flags:PKT_TX_QINQ_PKT``.
 * **[provides] mbuf**: ``mbuf.ol_flags:PKT_RX_QINQ_STRIPPED``, ``mbuf.vlan_tci``,
   ``mbuf.vlan_tci_outer``.
-* **[provides] rte_eth_dev_info**: ``rx_offload_capa:DEV_RX_OFFLOAD_QINQ_STRIP``,
+* **[provides] rte_eth_dev_info**: ``rx_offload_capa,rx_queue_offload_capa:DEV_RX_OFFLOAD_QINQ_STRIP``,
   ``tx_offload_capa:DEV_TX_OFFLOAD_QINQ_INSERT``.
 
@@ -540,13 +540,13 @@ L3 checksum offload
 
 Supports L3 checksum offload.
 
-* **[uses] user config**: ``dev_conf.rxmode.hw_ip_checksum``.
+* **[uses] rte_eth_rxconf,rte_eth_rxmode**: ``offloads:DEV_RX_OFFLOAD_IPV4_CKSUM``.
 * **[uses] mbuf**: ``mbuf.ol_flags:PKT_TX_IP_CKSUM``,
   ``mbuf.ol_flags:PKT_TX_IPV4`` | ``PKT_TX_IPV6``.
 * **[provides] mbuf**: ``mbuf.ol_flags:PKT_RX_IP_CKSUM_UNKNOWN`` |
   ``PKT_RX_IP_CKSUM_BAD`` | ``PKT_RX_IP_CKSUM_GOOD`` |
   ``PKT_RX_IP_CKSUM_NONE``.
-* **[provides] rte_eth_dev_info**: ``rx_offload_capa:DEV_RX_OFFLOAD_IPV4_CKSUM``,
+* **[provides] rte_eth_dev_info**: ``rx_offload_capa,rx_queue_offload_capa:DEV_RX_OFFLOAD_IPV4_CKSUM``,
   ``tx_offload_capa:DEV_TX_OFFLOAD_IPV4_CKSUM``.
 
@@ -557,13 +557,14 @@ L4 checksum offload
 
 Supports L4 checksum offload.
 
+* **[uses] rte_eth_rxconf,rte_eth_rxmode**: ``offloads:DEV_RX_OFFLOAD_UDP_CKSUM,DEV_RX_OFFLOAD_TCP_CKSUM``.
 * **[uses] mbuf**: ``mbuf.ol_flags:PKT_TX_IPV4`` | ``PKT_TX_IPV6``,
   ``mbuf.ol_flags:PKT_TX_L4_NO_CKSUM`` | ``PKT_TX_TCP_CKSUM`` |
   ``PKT_TX_SCTP_CKSUM`` | ``PKT_TX_UDP_CKSUM``.
 * **[provides] mbuf**: ``mbuf.ol_flags:PKT_RX_L4_CKSUM_UNKNOWN`` |
   ``PKT_RX_L4_CKSUM_BAD`` | ``PKT_RX_L4_CKSUM_GOOD`` |
   ``PKT_RX_L4_CKSUM_NONE``.
-* **[provides] rte_eth_dev_info**: ``rx_offload_capa:DEV_RX_OFFLOAD_UDP_CKSUM,DEV_RX_OFFLOAD_TCP_CKSUM``,
+* **[provides] rte_eth_dev_info**: ``rx_offload_capa,rx_queue_offload_capa:DEV_RX_OFFLOAD_UDP_CKSUM,DEV_RX_OFFLOAD_TCP_CKSUM``,
   ``tx_offload_capa:DEV_TX_OFFLOAD_UDP_CKSUM,DEV_TX_OFFLOAD_TCP_CKSUM,DEV_TX_OFFLOAD_SCTP_CKSUM``.
 
@@ -574,8 +575,9 @@ MACsec offload
 
 Supports MACsec.
 
+* **[uses] rte_eth_rxconf,rte_eth_rxmode**: ``offloads:DEV_RX_OFFLOAD_MACSEC_STRIP``.
 * **[uses] mbuf**: ``mbuf.ol_flags:PKT_TX_MACSEC``.
-* **[provides] rte_eth_dev_info**: ``rx_offload_capa:DEV_RX_OFFLOAD_MACSEC_STRIP``,
+* **[provides] rte_eth_dev_info**: ``rx_offload_capa,rx_queue_offload_capa:DEV_RX_OFFLOAD_MACSEC_STRIP``,
   ``tx_offload_capa:DEV_TX_OFFLOAD_MACSEC_INSERT``.
 
@@ -586,13 +588,14 @@ Inner L3 checksum
 
 Supports inner packet L3 checksum.
 
+* **[uses] rte_eth_rxconf,rte_eth_rxmode**: ``offloads:DEV_RX_OFFLOAD_OUTER_IPV4_CKSUM``.
 * **[uses] mbuf**: ``mbuf.ol_flags:PKT_TX_IP_CKSUM``,
   ``mbuf.ol_flags:PKT_TX_IPV4`` | ``PKT_TX_IPV6``,
   ``mbuf.ol_flags:PKT_TX_OUTER_IP_CKSUM``,
   ``mbuf.ol_flags:PKT_TX_OUTER_IPV4`` | ``PKT_TX_OUTER_IPV6``.
 * **[uses] mbuf**: ``mbuf.outer_l2_len``, ``mbuf.outer_l3_len``.
 * **[provides] mbuf**: ``mbuf.ol_flags:PKT_RX_EIP_CKSUM_BAD``.
-* **[provides] rte_eth_dev_info**: ``rx_offload_capa:DEV_RX_OFFLOAD_OUTER_IPV4_CKSUM``,
+* **[provides] rte_eth_dev_info**: ``rx_offload_capa,rx_queue_offload_capa:DEV_RX_OFFLOAD_OUTER_IPV4_CKSUM``,
   ``tx_offload_capa:DEV_TX_OFFLOAD_OUTER_IPV4_CKSUM``.
diff --git a/lib/librte_ether/rte_ethdev.c b/lib/librte_ether/rte_ethdev.c
index 1849a3bdd..9b73d2377 100644
--- a/lib/librte_ether/rte_ethdev.c
+++ b/lib/librte_ether/rte_ethdev.c
@@ -688,12 +688,90 @@ rte_eth_speed_bitflag(uint32_t speed, int duplex)
 	}
 }
 
+/**
+ * A conversion function from rxmode bitfield API.
+ */
+static void
+rte_eth_convert_rx_offload_bitfield(const struct rte_eth_rxmode *rxmode,
+				    uint64_t *rx_offloads)
+{
+	uint64_t offloads = 0;
+
+	if (rxmode->header_split == 1)
+		offloads |= DEV_RX_OFFLOAD_HEADER_SPLIT;
+	if (rxmode->hw_ip_checksum == 1)
+		offloads |= DEV_RX_OFFLOAD_CHECKSUM;
+	if (rxmode->hw_vlan_filter == 1)
+		offloads |= DEV_RX_OFFLOAD_VLAN_FILTER;
+	if (rxmode->hw_vlan_strip == 1)
+		offloads |= DEV_RX_OFFLOAD_VLAN_STRIP;
+	if (rxmode->hw_vlan_extend == 1)
+		offloads |= DEV_RX_OFFLOAD_VLAN_EXTEND;
+	if (rxmode->jumbo_frame == 1)
+		offloads |= DEV_RX_OFFLOAD_JUMBO_FRAME;
+	if (rxmode->hw_strip_crc == 1)
+		offloads |= DEV_RX_OFFLOAD_CRC_STRIP;
+	if (rxmode->enable_scatter == 1)
+		offloads |= DEV_RX_OFFLOAD_SCATTER;
+	if (rxmode->enable_lro == 1)
+		offloads |= DEV_RX_OFFLOAD_TCP_LRO;
+
+	*rx_offloads = offloads;
+}
+
+/**
+ * A conversion function from rxmode offloads API.
+ */
+static void
+rte_eth_convert_rx_offloads(const uint64_t rx_offloads,
+			    struct rte_eth_rxmode *rxmode)
+{
+	if (rx_offloads & DEV_RX_OFFLOAD_HEADER_SPLIT)
+		rxmode->header_split = 1;
+	else
+		rxmode->header_split = 0;
+	if (rx_offloads & DEV_RX_OFFLOAD_CHECKSUM)
+		rxmode->hw_ip_checksum = 1;
+	else
+		rxmode->hw_ip_checksum = 0;
+	if (rx_offloads & DEV_RX_OFFLOAD_VLAN_FILTER)
+		rxmode->hw_vlan_filter = 1;
+	else
+		rxmode->hw_vlan_filter = 0;
+	if (rx_offloads & DEV_RX_OFFLOAD_VLAN_STRIP)
+		rxmode->hw_vlan_strip = 1;
+	else
+		rxmode->hw_vlan_strip = 0;
+	if (rx_offloads & DEV_RX_OFFLOAD_VLAN_EXTEND)
+		rxmode->hw_vlan_extend = 1;
+	else
+		rxmode->hw_vlan_extend = 0;
+	if (rx_offloads & DEV_RX_OFFLOAD_JUMBO_FRAME)
+		rxmode->jumbo_frame = 1;
+	else
+		rxmode->jumbo_frame = 0;
+	if (rx_offloads & DEV_RX_OFFLOAD_CRC_STRIP)
+		rxmode->hw_strip_crc = 1;
+	else
+		rxmode->hw_strip_crc = 0;
+	if (rx_offloads & DEV_RX_OFFLOAD_SCATTER)
+		rxmode->enable_scatter = 1;
+	else
+		rxmode->enable_scatter = 0;
+	if (rx_offloads & DEV_RX_OFFLOAD_TCP_LRO)
+		rxmode->enable_lro = 1;
+	else
+		rxmode->enable_lro = 0;
+}
+
 int
 rte_eth_dev_configure(uint8_t port_id, uint16_t nb_rx_q, uint16_t nb_tx_q,
 		      const struct rte_eth_conf *dev_conf)
 {
 	struct rte_eth_dev *dev;
 	struct rte_eth_dev_info dev_info;
+	struct rte_eth_conf local_conf = *dev_conf;
 	int diag;
 
 	RTE_ETH_VALID_PORTID_OR_ERR_RET(port_id, -EINVAL);
@@ -723,8 +801,20 @@ rte_eth_dev_configure(uint8_t port_id, uint16_t nb_rx_q, uint16_t nb_tx_q,
 		return -EBUSY;
 	}
 
+	/*
+	 * Convert between the offloads APIs so that PMDs need to support
+	 * only one of them.
+	 */
+	if (dev_conf->rxmode.ignore_offload_bitfield == 0) {
+		rte_eth_convert_rx_offload_bitfield(
+			&dev_conf->rxmode, &local_conf.rxmode.offloads);
+	} else {
+		rte_eth_convert_rx_offloads(dev_conf->rxmode.offloads,
+					    &local_conf.rxmode);
+	}
+
 	/* Copy the dev_conf parameter into the dev structure */
-	memcpy(&dev->data->dev_conf, dev_conf, sizeof(dev->data->dev_conf));
+	memcpy(&dev->data->dev_conf, &local_conf, sizeof(dev->data->dev_conf));
 
 	/*
 	 * Check that the numbers of RX and TX queues are not greater
@@ -768,7 +858,7 @@ rte_eth_dev_configure(uint8_t port_id, uint16_t nb_rx_q, uint16_t nb_tx_q,
 	 * If jumbo frames are enabled, check that the maximum RX packet
 	 * length is supported by the configured device.
 	 */
-	if (dev_conf->rxmode.jumbo_frame == 1) {
+	if (local_conf.rxmode.offloads & DEV_RX_OFFLOAD_JUMBO_FRAME) {
 		if (dev_conf->rxmode.max_rx_pkt_len > dev_info.max_rx_pktlen) {
 			RTE_PMD_DEBUG_TRACE("ethdev port_id=%d max_rx_pkt_len %u"
@@ -1032,6 +1122,7 @@ rte_eth_rx_queue_setup(uint8_t port_id, uint16_t rx_queue_id,
 	uint32_t mbp_buf_size;
 	struct rte_eth_dev *dev;
 	struct rte_eth_dev_info dev_info;
+	struct rte_eth_rxconf local_conf;
 	void **rxq;
 
 	RTE_ETH_VALID_PORTID_OR_ERR_RET(port_id, -EINVAL);
@@ -1102,8 +1193,18 @@ rte_eth_rx_queue_setup(uint8_t port_id, uint16_t rx_queue_id,
 	if (rx_conf == NULL)
 		rx_conf = &dev_info.default_rxconf;
 
+	local_conf = *rx_conf;
+	if (dev->data->dev_conf.rxmode.ignore_offload_bitfield == 0) {
+		/**
+		 * Reflect port offloads to queue offloads so that
+		 * offloads are not discarded.
+		 */
+		rte_eth_convert_rx_offload_bitfield(&dev->data->dev_conf.rxmode,
+						    &local_conf.offloads);
+	}
+
 	ret = (*dev->dev_ops->rx_queue_setup)(dev, rx_queue_id, nb_rx_desc,
-					      socket_id, rx_conf, mp);
+					      socket_id, &local_conf, mp);
 	if (!ret) {
 		if (!dev->data->min_rx_buf_size ||
 		    dev->data->min_rx_buf_size > mbp_buf_size)
@@ -2007,7 +2108,8 @@ rte_eth_dev_vlan_filter(uint8_t port_id, uint16_t vlan_id, int on)
 	RTE_ETH_VALID_PORTID_OR_ERR_RET(port_id, -ENODEV);
 	dev = &rte_eth_devices[port_id];
-	if (!(dev->data->dev_conf.rxmode.hw_vlan_filter)) {
+	if (!(dev->data->dev_conf.rxmode.offloads &
+	      DEV_RX_OFFLOAD_VLAN_FILTER)) {
 		RTE_PMD_DEBUG_TRACE("port %d: vlan-filtering disabled\n", port_id);
 		return -ENOSYS;
 	}
@@ -2083,23 +2185,41 @@ rte_eth_dev_set_vlan_offload(uint8_t port_id, int offload_mask)
 
 	/*check which option changed by application*/
 	cur = !!(offload_mask & ETH_VLAN_STRIP_OFFLOAD);
-	org = !!(dev->data->dev_conf.rxmode.hw_vlan_strip);
+	org = !!(dev->data->dev_conf.rxmode.offloads &
+		 DEV_RX_OFFLOAD_VLAN_STRIP);
 	if (cur != org) {
-		dev->data->dev_conf.rxmode.hw_vlan_strip = (uint8_t)cur;
+		if (cur)
+			dev->data->dev_conf.rxmode.offloads |=
+				DEV_RX_OFFLOAD_VLAN_STRIP;
+		else
+			dev->data->dev_conf.rxmode.offloads &=
+				~DEV_RX_OFFLOAD_VLAN_STRIP;
 		mask |= ETH_VLAN_STRIP_MASK;
 	}
 
 	cur = !!(offload_mask & ETH_VLAN_FILTER_OFFLOAD);
-	org = !!(dev->data->dev_conf.rxmode.hw_vlan_filter);
+	org = !!(dev->data->dev_conf.rxmode.offloads &
+		 DEV_RX_OFFLOAD_VLAN_FILTER);
 	if (cur != org) {
-		dev->data->dev_conf.rxmode.hw_vlan_filter = (uint8_t)cur;
+		if (cur)
+			dev->data->dev_conf.rxmode.offloads |=
+				DEV_RX_OFFLOAD_VLAN_FILTER;
+		else
+			dev->data->dev_conf.rxmode.offloads &=
+				~DEV_RX_OFFLOAD_VLAN_FILTER;
 		mask |= ETH_VLAN_FILTER_MASK;
 	}
 
 	cur = !!(offload_mask & ETH_VLAN_EXTEND_OFFLOAD);
-	org = !!(dev->data->dev_conf.rxmode.hw_vlan_extend);
+	org = !!(dev->data->dev_conf.rxmode.offloads &
+		 DEV_RX_OFFLOAD_VLAN_EXTEND);
 	if (cur != org) {
-		dev->data->dev_conf.rxmode.hw_vlan_extend =
-			(uint8_t)cur;
+		if (cur)
+			dev->data->dev_conf.rxmode.offloads |=
+				DEV_RX_OFFLOAD_VLAN_EXTEND;
+		else
+			dev->data->dev_conf.rxmode.offloads &=
+				~DEV_RX_OFFLOAD_VLAN_EXTEND;
 		mask |= ETH_VLAN_EXTEND_MASK;
 	}
 
@@ -2108,6 +2228,13 @@ rte_eth_dev_set_vlan_offload(uint8_t port_id, int offload_mask)
 		return ret;
 
 	RTE_FUNC_PTR_OR_ERR_RET(*dev->dev_ops->vlan_offload_set, -ENOTSUP);
+
+	/*
+	 * Convert to the offload bitfield API just in case the underlying PMD
+	 * still supports it.
+	 */
+	rte_eth_convert_rx_offloads(dev->data->dev_conf.rxmode.offloads,
+				    &dev->data->dev_conf.rxmode);
 	(*dev->dev_ops->vlan_offload_set)(dev, mask);
 
 	return ret;
@@ -2122,13 +2249,16 @@ rte_eth_dev_get_vlan_offload(uint8_t port_id)
 	RTE_ETH_VALID_PORTID_OR_ERR_RET(port_id, -ENODEV);
 	dev = &rte_eth_devices[port_id];
 
-	if (dev->data->dev_conf.rxmode.hw_vlan_strip)
+	if (dev->data->dev_conf.rxmode.offloads &
+	    DEV_RX_OFFLOAD_VLAN_STRIP)
 		ret |= ETH_VLAN_STRIP_OFFLOAD;
 
-	if (dev->data->dev_conf.rxmode.hw_vlan_filter)
+	if (dev->data->dev_conf.rxmode.offloads &
+	    DEV_RX_OFFLOAD_VLAN_FILTER)
 		ret |= ETH_VLAN_FILTER_OFFLOAD;
 
-	if (dev->data->dev_conf.rxmode.hw_vlan_extend)
+	if (dev->data->dev_conf.rxmode.offloads &
+	    DEV_RX_OFFLOAD_VLAN_EXTEND)
 		ret |= ETH_VLAN_EXTEND_OFFLOAD;
 
 	return ret;
diff --git a/lib/librte_ether/rte_ethdev.h b/lib/librte_ether/rte_ethdev.h
index 99cdd54d4..e02d57881 100644
--- a/lib/librte_ether/rte_ethdev.h
+++ b/lib/librte_ether/rte_ethdev.h
@@ -348,7 +348,18 @@ struct rte_eth_rxmode {
 	enum rte_eth_rx_mq_mode mq_mode;
 	uint32_t max_rx_pkt_len;  /**< Only used if jumbo_frame enabled. */
 	uint16_t split_hdr_size;  /**< hdr buf size (header_split enabled).*/
+	/**
+	 * Per-port Rx offloads to be set using DEV_RX_OFFLOAD_* flags.
+	 * Only offloads set on the rx_offload_capa field of the
+	 * rte_eth_dev_info structure are allowed to be set.
+	 */
+	uint64_t offloads;
 	__extension__
+	/**
+	 * The bitfield API below is obsolete.
+	 * Applications should enable per-port offloads using the offloads
+	 * field above.
+	 */
 	uint16_t header_split : 1, /**< Header Split enable. */
 		hw_ip_checksum   : 1, /**< IP/UDP/TCP checksum offload enable. */
 		hw_vlan_filter   : 1, /**< VLAN filter enable. */
@@ -357,7 +368,17 @@ struct rte_eth_rxmode {
 		jumbo_frame      : 1, /**< Jumbo Frame Receipt enable. */
 		hw_strip_crc     : 1, /**< Enable CRC stripping by hardware. */
 		enable_scatter   : 1, /**< Enable scatter packets rx handler */
-		enable_lro       : 1; /**< Enable LRO */
+		enable_lro       : 1, /**< Enable LRO */
+		/**
+		 * When set, the offload bitfield should be ignored.
+		 * Instead, per-port Rx offloads should be set on the
+		 * offloads field above.
+		 * Per-queue offloads should be set on the rte_eth_rxconf
+		 * structure.
+		 * This bit is temporary until the rxmode bitfield offloads
+		 * API is deprecated.
+		 */
+		ignore_offload_bitfield : 1;
 };
 
 /**
@@ -691,6 +712,12 @@ struct rte_eth_rxconf {
 	uint16_t rx_free_thresh; /**< Drives the freeing of RX descriptors. */
 	uint8_t rx_drop_en; /**< Drop packets if no descriptors are available. */
 	uint8_t rx_deferred_start; /**< Do not start queue with rte_eth_dev_start(). */
+	/**
+	 * Per-queue Rx offloads to be set using DEV_RX_OFFLOAD_* flags.
+	 * Only offloads set on the rx_queue_offload_capa or rx_offload_capa
+	 * fields of the rte_eth_dev_info structure are allowed to be set.
+	 */
+	uint64_t offloads;
 };
 
 #define ETH_TXQ_FLAGS_NOMULTSEGS 0x0001 /**< nb_segs=1 for all mbufs */
@@ -907,6 +934,18 @@ struct rte_eth_conf {
 #define DEV_RX_OFFLOAD_QINQ_STRIP  0x00000020
 #define DEV_RX_OFFLOAD_OUTER_IPV4_CKSUM 0x00000040
 #define DEV_RX_OFFLOAD_MACSEC_STRIP     0x00000080
+#define DEV_RX_OFFLOAD_HEADER_SPLIT	0x00000100
+#define DEV_RX_OFFLOAD_VLAN_FILTER	0x00000200
+#define DEV_RX_OFFLOAD_VLAN_EXTEND	0x00000400
+#define DEV_RX_OFFLOAD_JUMBO_FRAME	0x00000800
+#define DEV_RX_OFFLOAD_CRC_STRIP	0x00001000
+#define DEV_RX_OFFLOAD_SCATTER		0x00002000
+#define DEV_RX_OFFLOAD_CHECKSUM (DEV_RX_OFFLOAD_IPV4_CKSUM | \
+				 DEV_RX_OFFLOAD_UDP_CKSUM | \
+				 DEV_RX_OFFLOAD_TCP_CKSUM)
+#define DEV_RX_OFFLOAD_VLAN (DEV_RX_OFFLOAD_VLAN_STRIP | \
+			     DEV_RX_OFFLOAD_VLAN_FILTER | \
+			     DEV_RX_OFFLOAD_VLAN_EXTEND)
 
 /**
  * TX offload capabilities of a device.
@@ -949,8 +988,11 @@ struct rte_eth_dev_info {
 	/** Maximum number of hash MAC addresses for MTA and UTA. */
 	uint16_t max_vfs; /**< Maximum number of VFs. */
 	uint16_t max_vmdq_pools; /**< Maximum number of VMDq pools. */
-	uint32_t rx_offload_capa; /**< Device RX offload capabilities. */
+	uint64_t rx_offload_capa;
+	/**< Device per-port RX offload capabilities. */
 	uint32_t tx_offload_capa; /**< Device TX offload capabilities. */
+	uint64_t rx_queue_offload_capa;
+	/**< Device per-queue RX offload capabilities. */
 	uint16_t reta_size;
 	/**< Device redirection table size, the total number of entries. */
 	uint8_t hash_key_size; /**< Hash key size in bytes */
@@ -1874,6 +1916,9 @@ uint32_t rte_eth_speed_bitflag(uint32_t speed, int duplex);
 *     each statically configurable offload hardware feature provided by
 *     Ethernet devices, such as IP checksum or VLAN tag stripping for
 *     example.
+ *     The Rx offload bitfield API is obsolete and will be deprecated.
+ *     Applications should set the ignore_offload_bitfield bit on the *rxmode*
+ *     structure and use the offloads field to set per-port offloads instead.
 * - the Receive Side Scaling (RSS) configuration when using multiple RX
 *   queues per port.
 *
@@ -1927,6 +1972,8 @@ void _rte_eth_dev_reset(struct rte_eth_dev *dev);
 *   The *rx_conf* structure contains an *rx_thresh* structure with the values
 *   of the Prefetch, Host, and Write-Back threshold registers of the receive
 *   ring.
+ *   In addition it contains the hardware offload features to activate using
+ *   the DEV_RX_OFFLOAD_* flags.
 * @param mb_pool
 *   The pointer to the memory pool from which to allocate *rte_mbuf* network
 *   memory buffers to populate each descriptor of the receive ring.
-- 
2.12.0