From mboxrd@z Thu Jan 1 00:00:00 1970
From: Shahaf Shuler <shahafs@mellanox.com>
To: konstantin.ananyev@intel.com, thomas@monjalon.net,
	arybchenko@solarflare.com, jerin.jacob@caviumnetworks.com,
	ferruh.yigit@intel.com
Cc: dev@dpdk.org
Date: Wed, 4 Oct 2017 11:17:58 +0300
Message-Id: <67a1a59b597f5a8554da09836e262c4cf842cdeb.1507104596.git.shahafs@mellanox.com>
X-Mailer: git-send-email 2.12.0
MIME-Version: 1.0
Content-Type: text/plain
Subject: [dpdk-dev] [PATCH v6 1/4] ethdev: introduce Rx queue offloads API

Introduce a new API to configure Rx offloads.

In the new API, offloads are divided into per-port and per-queue
offloads, and the PMD reports the capability for each of them.
Offloads are enabled using the existing DEV_RX_OFFLOAD_* flags.
To enable a per-port offload, the offload should be set on both the
device configuration and the queue configuration.
To enable a per-queue offload, the offload can be set only on the
queue configuration.

Applications should set the ignore_offload_bitfield bit in the rxmode
structure in order to move to the new API.

The old Rx offloads API is kept for the time being, in order to enable
a smooth transition for PMDs and applications to the new API.

Signed-off-by: Shahaf Shuler <shahafs@mellanox.com>
---
 doc/guides/nics/features.rst  |  33 ++++----
 lib/librte_ether/rte_ethdev.c | 156 +++++++++++++++++++++++++++++++++----
 lib/librte_ether/rte_ethdev.h |  51 +++++++++++-
 3 files changed, 210 insertions(+), 30 deletions(-)

diff --git a/doc/guides/nics/features.rst b/doc/guides/nics/features.rst
index 37ffbc68c..4e68144ef 100644
--- a/doc/guides/nics/features.rst
+++ b/doc/guides/nics/features.rst
@@ -179,7 +179,7 @@ Jumbo frame
 
 Supports Rx jumbo frames.
 
-* **[uses] user config**: ``dev_conf.rxmode.jumbo_frame``,
+* **[uses] rte_eth_rxconf,rte_eth_rxmode**: ``offloads:DEV_RX_OFFLOAD_JUMBO_FRAME``.
   ``dev_conf.rxmode.max_rx_pkt_len``.
 * **[related] rte_eth_dev_info**: ``max_rx_pktlen``.
 * **[related] API**: ``rte_eth_dev_set_mtu()``.
@@ -192,7 +192,7 @@ Scattered Rx
 
 Supports receiving segmented mbufs.
 
-* **[uses] user config**: ``dev_conf.rxmode.enable_scatter``.
+* **[uses] rte_eth_rxconf,rte_eth_rxmode**: ``offloads:DEV_RX_OFFLOAD_SCATTER``.
 * **[implements] datapath**: ``Scattered Rx function``.
 * **[implements] rte_eth_dev_data**: ``scattered_rx``.
 * **[provides] eth_dev_ops**: ``rxq_info_get:scattered_rx``.
@@ -206,11 +206,11 @@ LRO
 
 Supports Large Receive Offload.
 
-* **[uses] user config**: ``dev_conf.rxmode.enable_lro``.
+* **[uses] rte_eth_rxconf,rte_eth_rxmode**: ``offloads:DEV_RX_OFFLOAD_TCP_LRO``.
 * **[implements] datapath**: ``LRO functionality``.
 * **[implements] rte_eth_dev_data**: ``lro``.
 * **[provides] mbuf**: ``mbuf.ol_flags:PKT_RX_LRO``, ``mbuf.tso_segsz``.
-* **[provides] rte_eth_dev_info**: ``rx_offload_capa:DEV_RX_OFFLOAD_TCP_LRO``.
+* **[provides] rte_eth_dev_info**: ``rx_offload_capa,rx_queue_offload_capa:DEV_RX_OFFLOAD_TCP_LRO``.
 
 
 .. _nic_features_tso:
 
@@ -363,7 +363,7 @@ VLAN filter
 
 Supports filtering of a VLAN Tag identifier.
 
-* **[uses] user config**: ``dev_conf.rxmode.hw_vlan_filter``.
+* **[uses] rte_eth_rxconf,rte_eth_rxmode**: ``offloads:DEV_RX_OFFLOAD_VLAN_FILTER``.
 * **[implements] eth_dev_ops**: ``vlan_filter_set``.
 * **[related] API**: ``rte_eth_dev_vlan_filter()``.
 
@@ -499,7 +499,7 @@ CRC offload
 
 Supports CRC stripping by hardware.
 
-* **[uses] user config**: ``dev_conf.rxmode.hw_strip_crc``.
+* **[uses] rte_eth_rxconf,rte_eth_rxmode**: ``offloads:DEV_RX_OFFLOAD_CRC_STRIP``.
 
 
 .. _nic_features_vlan_offload:
 
@@ -509,11 +509,10 @@ VLAN offload
 
 Supports VLAN offload to hardware.
 
-* **[uses] user config**: ``dev_conf.rxmode.hw_vlan_strip``,
-  ``dev_conf.rxmode.hw_vlan_filter``, ``dev_conf.rxmode.hw_vlan_extend``.
+* **[uses] rte_eth_rxconf,rte_eth_rxmode**: ``offloads:DEV_RX_OFFLOAD_VLAN_STRIP,DEV_RX_OFFLOAD_VLAN_FILTER,DEV_RX_OFFLOAD_VLAN_EXTEND``.
 * **[implements] eth_dev_ops**: ``vlan_offload_set``.
 * **[provides] mbuf**: ``mbuf.ol_flags:PKT_RX_VLAN_STRIPPED``, ``mbuf.vlan_tci``.
-* **[provides] rte_eth_dev_info**: ``rx_offload_capa:DEV_RX_OFFLOAD_VLAN_STRIP``,
+* **[provides] rte_eth_dev_info**: ``rx_offload_capa,rx_queue_offload_capa:DEV_RX_OFFLOAD_VLAN_STRIP``,
   ``tx_offload_capa:DEV_TX_OFFLOAD_VLAN_INSERT``.
 * **[related] API**: ``rte_eth_dev_set_vlan_offload()``,
   ``rte_eth_dev_get_vlan_offload()``.
@@ -526,10 +525,11 @@ QinQ offload
 
 Supports QinQ (queue in queue) offload.
 
+* **[uses] rte_eth_rxconf,rte_eth_rxmode**: ``offloads:DEV_RX_OFFLOAD_QINQ_STRIP``.
 * **[uses] mbuf**: ``mbuf.ol_flags:PKT_TX_QINQ_PKT``.
 * **[provides] mbuf**: ``mbuf.ol_flags:PKT_RX_QINQ_STRIPPED``, ``mbuf.vlan_tci``,
   ``mbuf.vlan_tci_outer``.
-* **[provides] rte_eth_dev_info**: ``rx_offload_capa:DEV_RX_OFFLOAD_QINQ_STRIP``,
+* **[provides] rte_eth_dev_info**: ``rx_offload_capa,rx_queue_offload_capa:DEV_RX_OFFLOAD_QINQ_STRIP``,
   ``tx_offload_capa:DEV_TX_OFFLOAD_QINQ_INSERT``.
 
 
@@ -540,13 +540,13 @@ L3 checksum offload
 
 Supports L3 checksum offload.
 
-* **[uses] user config**: ``dev_conf.rxmode.hw_ip_checksum``.
+* **[uses] rte_eth_rxconf,rte_eth_rxmode**: ``offloads:DEV_RX_OFFLOAD_IPV4_CKSUM``.
 * **[uses] mbuf**: ``mbuf.ol_flags:PKT_TX_IP_CKSUM``,
   ``mbuf.ol_flags:PKT_TX_IPV4`` | ``PKT_TX_IPV6``.
 * **[provides] mbuf**: ``mbuf.ol_flags:PKT_RX_IP_CKSUM_UNKNOWN`` |
   ``PKT_RX_IP_CKSUM_BAD`` | ``PKT_RX_IP_CKSUM_GOOD`` | ``PKT_RX_IP_CKSUM_NONE``.
-* **[provides] rte_eth_dev_info**: ``rx_offload_capa:DEV_RX_OFFLOAD_IPV4_CKSUM``,
+* **[provides] rte_eth_dev_info**: ``rx_offload_capa,rx_queue_offload_capa:DEV_RX_OFFLOAD_IPV4_CKSUM``,
   ``tx_offload_capa:DEV_TX_OFFLOAD_IPV4_CKSUM``.
 
 
@@ -557,13 +557,14 @@ L4 checksum offload
 
 Supports L4 checksum offload.
 
+* **[uses] rte_eth_rxconf,rte_eth_rxmode**: ``offloads:DEV_RX_OFFLOAD_UDP_CKSUM,DEV_RX_OFFLOAD_TCP_CKSUM``.
 * **[uses] mbuf**: ``mbuf.ol_flags:PKT_TX_IPV4`` | ``PKT_TX_IPV6``,
   ``mbuf.ol_flags:PKT_TX_L4_NO_CKSUM`` | ``PKT_TX_TCP_CKSUM`` |
   ``PKT_TX_SCTP_CKSUM`` | ``PKT_TX_UDP_CKSUM``.
 * **[provides] mbuf**: ``mbuf.ol_flags:PKT_RX_L4_CKSUM_UNKNOWN`` |
   ``PKT_RX_L4_CKSUM_BAD`` | ``PKT_RX_L4_CKSUM_GOOD`` | ``PKT_RX_L4_CKSUM_NONE``.
-* **[provides] rte_eth_dev_info**: ``rx_offload_capa:DEV_RX_OFFLOAD_UDP_CKSUM,DEV_RX_OFFLOAD_TCP_CKSUM``,
+* **[provides] rte_eth_dev_info**: ``rx_offload_capa,rx_queue_offload_capa:DEV_RX_OFFLOAD_UDP_CKSUM,DEV_RX_OFFLOAD_TCP_CKSUM``,
   ``tx_offload_capa:DEV_TX_OFFLOAD_UDP_CKSUM,DEV_TX_OFFLOAD_TCP_CKSUM,DEV_TX_OFFLOAD_SCTP_CKSUM``.
 
 
@@ -574,8 +575,9 @@ MACsec offload
 
 Supports MACsec.
 
+* **[uses] rte_eth_rxconf,rte_eth_rxmode**: ``offloads:DEV_RX_OFFLOAD_MACSEC_STRIP``.
 * **[uses] mbuf**: ``mbuf.ol_flags:PKT_TX_MACSEC``.
-* **[provides] rte_eth_dev_info**: ``rx_offload_capa:DEV_RX_OFFLOAD_MACSEC_STRIP``,
+* **[provides] rte_eth_dev_info**: ``rx_offload_capa,rx_queue_offload_capa:DEV_RX_OFFLOAD_MACSEC_STRIP``,
   ``tx_offload_capa:DEV_TX_OFFLOAD_MACSEC_INSERT``.
 
 
@@ -586,13 +588,14 @@ Inner L3 checksum
 
 Supports inner packet L3 checksum.
 
+* **[uses] rte_eth_rxconf,rte_eth_rxmode**: ``offloads:DEV_RX_OFFLOAD_OUTER_IPV4_CKSUM``.
 * **[uses] mbuf**: ``mbuf.ol_flags:PKT_TX_IP_CKSUM``,
   ``mbuf.ol_flags:PKT_TX_IPV4`` | ``PKT_TX_IPV6``,
   ``mbuf.ol_flags:PKT_TX_OUTER_IP_CKSUM``,
   ``mbuf.ol_flags:PKT_TX_OUTER_IPV4`` | ``PKT_TX_OUTER_IPV6``.
 * **[uses] mbuf**: ``mbuf.outer_l2_len``, ``mbuf.outer_l3_len``.
 * **[provides] mbuf**: ``mbuf.ol_flags:PKT_RX_EIP_CKSUM_BAD``.
-* **[provides] rte_eth_dev_info**: ``rx_offload_capa:DEV_RX_OFFLOAD_OUTER_IPV4_CKSUM``,
+* **[provides] rte_eth_dev_info**: ``rx_offload_capa,rx_queue_offload_capa:DEV_RX_OFFLOAD_OUTER_IPV4_CKSUM``,
   ``tx_offload_capa:DEV_TX_OFFLOAD_OUTER_IPV4_CKSUM``.
 
diff --git a/lib/librte_ether/rte_ethdev.c b/lib/librte_ether/rte_ethdev.c
index 1849a3bdd..9b73d2377 100644
--- a/lib/librte_ether/rte_ethdev.c
+++ b/lib/librte_ether/rte_ethdev.c
@@ -688,12 +688,90 @@ rte_eth_speed_bitflag(uint32_t speed, int duplex)
 	}
 }
 
+/**
+ * A conversion function from rxmode bitfield API.
+ */
+static void
+rte_eth_convert_rx_offload_bitfield(const struct rte_eth_rxmode *rxmode,
+				    uint64_t *rx_offloads)
+{
+	uint64_t offloads = 0;
+
+	if (rxmode->header_split == 1)
+		offloads |= DEV_RX_OFFLOAD_HEADER_SPLIT;
+	if (rxmode->hw_ip_checksum == 1)
+		offloads |= DEV_RX_OFFLOAD_CHECKSUM;
+	if (rxmode->hw_vlan_filter == 1)
+		offloads |= DEV_RX_OFFLOAD_VLAN_FILTER;
+	if (rxmode->hw_vlan_strip == 1)
+		offloads |= DEV_RX_OFFLOAD_VLAN_STRIP;
+	if (rxmode->hw_vlan_extend == 1)
+		offloads |= DEV_RX_OFFLOAD_VLAN_EXTEND;
+	if (rxmode->jumbo_frame == 1)
+		offloads |= DEV_RX_OFFLOAD_JUMBO_FRAME;
+	if (rxmode->hw_strip_crc == 1)
+		offloads |= DEV_RX_OFFLOAD_CRC_STRIP;
+	if (rxmode->enable_scatter == 1)
+		offloads |= DEV_RX_OFFLOAD_SCATTER;
+	if (rxmode->enable_lro == 1)
+		offloads |= DEV_RX_OFFLOAD_TCP_LRO;
+
+	*rx_offloads = offloads;
+}
+
+/**
+ * A conversion function from rxmode offloads API.
+ */
+static void
+rte_eth_convert_rx_offloads(const uint64_t rx_offloads,
+			    struct rte_eth_rxmode *rxmode)
+{
+	if (rx_offloads & DEV_RX_OFFLOAD_HEADER_SPLIT)
+		rxmode->header_split = 1;
+	else
+		rxmode->header_split = 0;
+	if (rx_offloads & DEV_RX_OFFLOAD_CHECKSUM)
+		rxmode->hw_ip_checksum = 1;
+	else
+		rxmode->hw_ip_checksum = 0;
+	if (rx_offloads & DEV_RX_OFFLOAD_VLAN_FILTER)
+		rxmode->hw_vlan_filter = 1;
+	else
+		rxmode->hw_vlan_filter = 0;
+	if (rx_offloads & DEV_RX_OFFLOAD_VLAN_STRIP)
+		rxmode->hw_vlan_strip = 1;
+	else
+		rxmode->hw_vlan_strip = 0;
+	if (rx_offloads & DEV_RX_OFFLOAD_VLAN_EXTEND)
+		rxmode->hw_vlan_extend = 1;
+	else
+		rxmode->hw_vlan_extend = 0;
+	if (rx_offloads & DEV_RX_OFFLOAD_JUMBO_FRAME)
+		rxmode->jumbo_frame = 1;
+	else
+		rxmode->jumbo_frame = 0;
+	if (rx_offloads & DEV_RX_OFFLOAD_CRC_STRIP)
+		rxmode->hw_strip_crc = 1;
+	else
+		rxmode->hw_strip_crc = 0;
+	if (rx_offloads & DEV_RX_OFFLOAD_SCATTER)
+		rxmode->enable_scatter = 1;
+	else
+		rxmode->enable_scatter = 0;
+	if (rx_offloads & DEV_RX_OFFLOAD_TCP_LRO)
+		rxmode->enable_lro = 1;
+	else
+		rxmode->enable_lro = 0;
+}
+
 int
 rte_eth_dev_configure(uint8_t port_id, uint16_t nb_rx_q, uint16_t nb_tx_q,
 		      const struct rte_eth_conf *dev_conf)
 {
 	struct rte_eth_dev *dev;
 	struct rte_eth_dev_info dev_info;
+	struct rte_eth_conf local_conf = *dev_conf;
 	int diag;
 
 	RTE_ETH_VALID_PORTID_OR_ERR_RET(port_id, -EINVAL);
@@ -723,8 +801,20 @@ rte_eth_dev_configure(uint8_t port_id, uint16_t nb_rx_q, uint16_t nb_tx_q,
 		return -EBUSY;
 	}
 
+	/*
+	 * Convert between the offloads API so that PMDs need to support
+	 * only one of them.
+	 */
+	if (dev_conf->rxmode.ignore_offload_bitfield == 0) {
+		rte_eth_convert_rx_offload_bitfield(
+				&dev_conf->rxmode, &local_conf.rxmode.offloads);
+	} else {
+		rte_eth_convert_rx_offloads(dev_conf->rxmode.offloads,
+					    &local_conf.rxmode);
+	}
+
 	/* Copy the dev_conf parameter into the dev structure */
-	memcpy(&dev->data->dev_conf, dev_conf, sizeof(dev->data->dev_conf));
+	memcpy(&dev->data->dev_conf, &local_conf, sizeof(dev->data->dev_conf));
 
 	/*
 	 * Check that the numbers of RX and TX queues are not greater
@@ -768,7 +858,7 @@ rte_eth_dev_configure(uint8_t port_id, uint16_t nb_rx_q, uint16_t nb_tx_q,
 	 * If jumbo frames are enabled, check that the maximum RX packet
 	 * length is supported by the configured device.
 	 */
-	if (dev_conf->rxmode.jumbo_frame == 1) {
+	if (local_conf.rxmode.offloads & DEV_RX_OFFLOAD_JUMBO_FRAME) {
 		if (dev_conf->rxmode.max_rx_pkt_len >
 		    dev_info.max_rx_pktlen) {
 			RTE_PMD_DEBUG_TRACE("ethdev port_id=%d max_rx_pkt_len %u"
@@ -1032,6 +1122,7 @@ rte_eth_rx_queue_setup(uint8_t port_id, uint16_t rx_queue_id,
 	uint32_t mbp_buf_size;
 	struct rte_eth_dev *dev;
 	struct rte_eth_dev_info dev_info;
+	struct rte_eth_rxconf local_conf;
 	void **rxq;
 
 	RTE_ETH_VALID_PORTID_OR_ERR_RET(port_id, -EINVAL);
@@ -1102,8 +1193,18 @@ rte_eth_rx_queue_setup(uint8_t port_id, uint16_t rx_queue_id,
 	if (rx_conf == NULL)
 		rx_conf = &dev_info.default_rxconf;
 
+	local_conf = *rx_conf;
+	if (dev->data->dev_conf.rxmode.ignore_offload_bitfield == 0) {
+		/*
+		 * Reflect port offloads to queue offloads in order for
+		 * offloads to not be discarded.
+		 */
+		rte_eth_convert_rx_offload_bitfield(&dev->data->dev_conf.rxmode,
+						    &local_conf.offloads);
+	}
+
 	ret = (*dev->dev_ops->rx_queue_setup)(dev, rx_queue_id, nb_rx_desc,
-					      socket_id, rx_conf, mp);
+					      socket_id, &local_conf, mp);
 	if (!ret) {
 		if (!dev->data->min_rx_buf_size ||
 		    dev->data->min_rx_buf_size > mbp_buf_size)
@@ -2007,7 +2108,8 @@ rte_eth_dev_vlan_filter(uint8_t port_id, uint16_t vlan_id, int on)
 	RTE_ETH_VALID_PORTID_OR_ERR_RET(port_id, -ENODEV);
 	dev = &rte_eth_devices[port_id];
 
-	if (!(dev->data->dev_conf.rxmode.hw_vlan_filter)) {
+	if (!(dev->data->dev_conf.rxmode.offloads &
+	      DEV_RX_OFFLOAD_VLAN_FILTER)) {
 		RTE_PMD_DEBUG_TRACE("port %d: vlan-filtering disabled\n", port_id);
 		return -ENOSYS;
 	}
@@ -2083,23 +2185,41 @@ rte_eth_dev_set_vlan_offload(uint8_t port_id, int offload_mask)
 	/*check which option changed by application*/
 	cur = !!(offload_mask & ETH_VLAN_STRIP_OFFLOAD);
-	org = !!(dev->data->dev_conf.rxmode.hw_vlan_strip);
+	org = !!(dev->data->dev_conf.rxmode.offloads &
+		 DEV_RX_OFFLOAD_VLAN_STRIP);
 	if (cur != org) {
-		dev->data->dev_conf.rxmode.hw_vlan_strip = (uint8_t)cur;
+		if (cur)
+			dev->data->dev_conf.rxmode.offloads |=
+				DEV_RX_OFFLOAD_VLAN_STRIP;
+		else
+			dev->data->dev_conf.rxmode.offloads &=
+				~DEV_RX_OFFLOAD_VLAN_STRIP;
 		mask |= ETH_VLAN_STRIP_MASK;
 	}
 
 	cur = !!(offload_mask & ETH_VLAN_FILTER_OFFLOAD);
-	org = !!(dev->data->dev_conf.rxmode.hw_vlan_filter);
+	org = !!(dev->data->dev_conf.rxmode.offloads &
+		 DEV_RX_OFFLOAD_VLAN_FILTER);
 	if (cur != org) {
-		dev->data->dev_conf.rxmode.hw_vlan_filter = (uint8_t)cur;
+		if (cur)
+			dev->data->dev_conf.rxmode.offloads |=
+				DEV_RX_OFFLOAD_VLAN_FILTER;
+		else
+			dev->data->dev_conf.rxmode.offloads &=
+				~DEV_RX_OFFLOAD_VLAN_FILTER;
 		mask |= ETH_VLAN_FILTER_MASK;
 	}
 
 	cur = !!(offload_mask & ETH_VLAN_EXTEND_OFFLOAD);
-	org = !!(dev->data->dev_conf.rxmode.hw_vlan_extend);
+	org = !!(dev->data->dev_conf.rxmode.offloads &
+		 DEV_RX_OFFLOAD_VLAN_EXTEND);
 	if (cur != org) {
-		dev->data->dev_conf.rxmode.hw_vlan_extend = (uint8_t)cur;
+		if (cur)
+			dev->data->dev_conf.rxmode.offloads |=
+				DEV_RX_OFFLOAD_VLAN_EXTEND;
+		else
+			dev->data->dev_conf.rxmode.offloads &=
+				~DEV_RX_OFFLOAD_VLAN_EXTEND;
 		mask |= ETH_VLAN_EXTEND_MASK;
 	}
 
@@ -2108,6 +2228,13 @@ rte_eth_dev_set_vlan_offload(uint8_t port_id, int offload_mask)
 		return ret;
 
 	RTE_FUNC_PTR_OR_ERR_RET(*dev->dev_ops->vlan_offload_set, -ENOTSUP);
+
+	/*
+	 * Convert to the offload bitfield API just in case the underlying PMD
+	 * still supports it.
+	 */
+	rte_eth_convert_rx_offloads(dev->data->dev_conf.rxmode.offloads,
+				    &dev->data->dev_conf.rxmode);
 	(*dev->dev_ops->vlan_offload_set)(dev, mask);
 
 	return ret;
@@ -2122,13 +2249,16 @@ rte_eth_dev_get_vlan_offload(uint8_t port_id)
 	RTE_ETH_VALID_PORTID_OR_ERR_RET(port_id, -ENODEV);
 	dev = &rte_eth_devices[port_id];
 
-	if (dev->data->dev_conf.rxmode.hw_vlan_strip)
+	if (dev->data->dev_conf.rxmode.offloads &
+	    DEV_RX_OFFLOAD_VLAN_STRIP)
 		ret |= ETH_VLAN_STRIP_OFFLOAD;
 
-	if (dev->data->dev_conf.rxmode.hw_vlan_filter)
+	if (dev->data->dev_conf.rxmode.offloads &
+	    DEV_RX_OFFLOAD_VLAN_FILTER)
 		ret |= ETH_VLAN_FILTER_OFFLOAD;
 
-	if (dev->data->dev_conf.rxmode.hw_vlan_extend)
+	if (dev->data->dev_conf.rxmode.offloads &
+	    DEV_RX_OFFLOAD_VLAN_EXTEND)
 		ret |= ETH_VLAN_EXTEND_OFFLOAD;
 
 	return ret;
diff --git a/lib/librte_ether/rte_ethdev.h b/lib/librte_ether/rte_ethdev.h
index 99cdd54d4..e02d57881 100644
--- a/lib/librte_ether/rte_ethdev.h
+++ b/lib/librte_ether/rte_ethdev.h
@@ -348,7 +348,18 @@ struct rte_eth_rxmode {
 	enum rte_eth_rx_mq_mode mq_mode;
 	uint32_t max_rx_pkt_len; /**< Only used if jumbo_frame enabled. */
 	uint16_t split_hdr_size; /**< hdr buf size (header_split enabled).*/
+	/**
+	 * Per-port Rx offloads to be set using DEV_RX_OFFLOAD_* flags.
+	 * Only offloads set on the rx_offload_capa field of the
+	 * rte_eth_dev_info structure are allowed to be set.
+	 */
+	uint64_t offloads;
 	__extension__
+	/**
+	 * The bitfield API below is obsolete. Applications should
+	 * enable per-port offloads using the offloads field
+	 * above.
+	 */
 	uint16_t header_split : 1, /**< Header Split enable. */
 		hw_ip_checksum : 1, /**< IP/UDP/TCP checksum offload enable. */
 		hw_vlan_filter : 1, /**< VLAN filter enable. */
@@ -357,7 +368,17 @@ struct rte_eth_rxmode {
 		jumbo_frame : 1, /**< Jumbo Frame Receipt enable. */
 		hw_strip_crc : 1, /**< Enable CRC stripping by hardware. */
 		enable_scatter : 1, /**< Enable scatter packets rx handler */
-		enable_lro : 1; /**< Enable LRO */
+		enable_lro : 1, /**< Enable LRO */
+		/**
+		 * When set, the offload bitfield should be ignored.
+		 * Instead, per-port Rx offloads should be set in the
+		 * offloads field above.
+		 * Per-queue offloads should be set in the rte_eth_rxconf
+		 * structure.
+		 * This bit is temporary until the rxmode bitfield offloads
+		 * API is deprecated.
+		 */
+		ignore_offload_bitfield : 1;
 };
 
 /**
@@ -691,6 +712,12 @@ struct rte_eth_rxconf {
 	uint16_t rx_free_thresh; /**< Drives the freeing of RX descriptors. */
 	uint8_t rx_drop_en; /**< Drop packets if no descriptors are available. */
 	uint8_t rx_deferred_start; /**< Do not start queue with rte_eth_dev_start(). */
+	/**
+	 * Per-queue Rx offloads to be set using DEV_RX_OFFLOAD_* flags.
+	 * Only offloads set on the rx_queue_offload_capa or rx_offload_capa
+	 * fields of the rte_eth_dev_info structure are allowed to be set.
+	 */
+	uint64_t offloads;
 };
 
 #define ETH_TXQ_FLAGS_NOMULTSEGS 0x0001 /**< nb_segs=1 for all mbufs */
@@ -907,6 +934,18 @@ struct rte_eth_conf {
 #define DEV_RX_OFFLOAD_QINQ_STRIP	0x00000020
 #define DEV_RX_OFFLOAD_OUTER_IPV4_CKSUM	0x00000040
 #define DEV_RX_OFFLOAD_MACSEC_STRIP	0x00000080
+#define DEV_RX_OFFLOAD_HEADER_SPLIT	0x00000100
+#define DEV_RX_OFFLOAD_VLAN_FILTER	0x00000200
+#define DEV_RX_OFFLOAD_VLAN_EXTEND	0x00000400
+#define DEV_RX_OFFLOAD_JUMBO_FRAME	0x00000800
+#define DEV_RX_OFFLOAD_CRC_STRIP	0x00001000
+#define DEV_RX_OFFLOAD_SCATTER		0x00002000
+#define DEV_RX_OFFLOAD_CHECKSUM (DEV_RX_OFFLOAD_IPV4_CKSUM | \
+				 DEV_RX_OFFLOAD_UDP_CKSUM | \
+				 DEV_RX_OFFLOAD_TCP_CKSUM)
+#define DEV_RX_OFFLOAD_VLAN (DEV_RX_OFFLOAD_VLAN_STRIP | \
+			     DEV_RX_OFFLOAD_VLAN_FILTER | \
+			     DEV_RX_OFFLOAD_VLAN_EXTEND)
 
 /**
  * TX offload capabilities of a device.
@@ -949,8 +988,11 @@ struct rte_eth_dev_info {
 	/** Maximum number of hash MAC addresses for MTA and UTA. */
 	uint16_t max_vfs; /**< Maximum number of VFs. */
 	uint16_t max_vmdq_pools; /**< Maximum number of VMDq pools. */
-	uint32_t rx_offload_capa; /**< Device RX offload capabilities. */
+	uint64_t rx_offload_capa;
+	/**< Device per-port RX offload capabilities. */
 	uint32_t tx_offload_capa; /**< Device TX offload capabilities. */
+	uint64_t rx_queue_offload_capa;
+	/**< Device per-queue RX offload capabilities. */
 	uint16_t reta_size;
 	/**< Device redirection table size, the total number of entries. */
 	uint8_t hash_key_size; /**< Hash key size in bytes */
@@ -1874,6 +1916,9 @@ uint32_t rte_eth_speed_bitflag(uint32_t speed, int duplex);
  *        each statically configurable offload hardware feature provided by
  *        Ethernet devices, such as IP checksum or VLAN tag stripping for
  *        example.
+ *        The Rx offload bitfield API is obsolete and will be deprecated.
+ *        Applications should set the ignore_offload_bitfield bit in the *rxmode*
+ *        structure and use the offloads field to set per-port offloads instead.
  *     - the Receive Side Scaling (RSS) configuration when using multiple RX
  *       queues per port.
  *
@@ -1927,6 +1972,8 @@ void _rte_eth_dev_reset(struct rte_eth_dev *dev);
  *   The *rx_conf* structure contains an *rx_thresh* structure with the values
  *   of the Prefetch, Host, and Write-Back threshold registers of the receive
  *   ring.
+ *   In addition it contains the hardware offload features to activate,
+ *   using the DEV_RX_OFFLOAD_* flags.
  * @param mb_pool
  *   The pointer to the memory pool from which to allocate *rte_mbuf* network
  *   memory buffers to populate each descriptor of the receive ring.
-- 
2.12.0
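
Usage note (editor's addition, not part of the patch): the sketch below shows
one way an application could opt in to the new Rx offloads API introduced
above. The helper name, the chosen offload flags, the descriptor count and the
single Tx queue are assumptions made only for this example; Tx queue setup and
device start are omitted for brevity.

#include <errno.h>
#include <rte_ethdev.h>
#include <rte_mempool.h>

/* Hypothetical helper: configure a port's Rx path with the new offloads API. */
static int
setup_port_rx(uint8_t port_id, uint16_t nb_rx_queues, struct rte_mempool *mb_pool)
{
	struct rte_eth_dev_info dev_info;
	struct rte_eth_conf conf = { 0 };
	struct rte_eth_rxconf rxconf;
	uint64_t port_offloads = DEV_RX_OFFLOAD_CHECKSUM;    /* per-port example */
	uint64_t queue_offloads = DEV_RX_OFFLOAD_VLAN_STRIP; /* per-queue example */
	uint16_t q;
	int ret;

	rte_eth_dev_info_get(port_id, &dev_info);

	/* Per-port offloads must be reported in rx_offload_capa. */
	if ((dev_info.rx_offload_capa & port_offloads) != port_offloads)
		return -ENOTSUP;
	/* Per-queue offloads must be reported in rx_queue_offload_capa. */
	if ((dev_info.rx_queue_offload_capa & queue_offloads) != queue_offloads)
		queue_offloads = 0; /* fall back to per-port offloads only */

	/* Opt in to the new API and set the per-port Rx offloads. */
	conf.rxmode.ignore_offload_bitfield = 1;
	conf.rxmode.offloads = port_offloads;

	ret = rte_eth_dev_configure(port_id, nb_rx_queues, 1, &conf);
	if (ret < 0)
		return ret;

	/*
	 * Per-port offloads are repeated in each queue configuration;
	 * per-queue offloads are added on top of them.
	 */
	rxconf = dev_info.default_rxconf;
	rxconf.offloads = port_offloads | queue_offloads;

	for (q = 0; q < nb_rx_queues; q++) {
		ret = rte_eth_rx_queue_setup(port_id, q, 512,
				rte_eth_dev_socket_id(port_id),
				&rxconf, mb_pool);
		if (ret < 0)
			return ret;
	}
	return 0;
}

As the commit message states, a per-port offload has to appear both in
rxmode.offloads and in every queue's rte_eth_rxconf.offloads, while a purely
per-queue offload only needs to appear in the queue configuration.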