From: Shahaf Shuler <shahafs@mellanox.com>
To: Olivier Matz, dev@dpdk.org, "Ananyev, Konstantin", Ferruh Yigit, Thomas Monjalon, Bruce Richardson, Andrew Rybchenko
Date: Tue, 23 Jan 2018 14:34:11 +0000
In-Reply-To: <20180123135308.tr7nmuqsdeogm7bl@glumotte.dev.6wind.com>
Subject: Re: [dpdk-dev] questions about new offload ethdev api
List-Id: DPDK patches and discussions <dev@dpdk.org>

Tuesday, January 23, 2018 3:53 PM, Olivier Matz:
> Hi,
> 
> I'm currently porting an application to the new ethdev offload api, and I
> have two questions about it.
> 
> 1/ inconsistent offload capa for PMDs using the old API?
> 
> It is stated in struct rte_eth_txmode:
> 
>   /**
>    * Per-port Tx offloads to be set using DEV_TX_OFFLOAD_* flags.
>    * Only offloads set on tx_offload_capa field on rte_eth_dev_info
>    * structure are allowed to be set.
>    */
>   uint64_t offloads;
> 
> So, if I want to enable DEV_TX_OFFLOAD_MULTI_SEGS for the whole
> ethdev, I must check that DEV_TX_OFFLOAD_MULTI_SEGS is advertised in
> dev_info->tx_offload_capa.
> 
> In my understanding, many PMDs are still using the old API, and there is a
> conversion layer in ethdev when doing dev_configure(). But I don't see any
> similar mechanism for the dev_info.
> Therefore,
> DEV_TX_OFFLOAD_MULTI_SEGS is not present in tx_offload_capa, and if I
> follow the API comment, I'm not allowed to use this feature.
> 
> Am I missing something or is it a bug?

Yes, this is something we missed during the review.

DEV_TX_OFFLOAD_MULTI_SEGS is a new capability that matches the old ETH_TXQ_FLAGS_NOMULTSEGS, and I guess we have the same issue with DEV_TX_OFFLOAD_MBUF_FAST_FREE.

I am not sure it can be easily solved with a conversion function in the ethdev layer, since both capabilities are new and ethdev cannot possibly know which PMDs support them.

One option is to set the cap for every PMD, as was assumed before the new offloads API. I am not sure it is the right way, though.

For DEV_TX_OFFLOAD_MBUF_FAST_FREE it could be done by converting the default_txconf into capability flags: in case both (ETH_TXQ_FLAGS_NOREFCOUNT | ETH_TXQ_FLAGS_NOMULTMEMP) are set, the ethdev layer would set the fast-free flag.

However, I think the right fix is for the PMDs which indeed support it to provide a patch which sets it in tx_offload_capa, even if they don't want to do the full conversion yet (I think it is very little work), specifically considering we expect the majority of the PMDs to move to the new API in 18.05.

I tried to make everything work for both old/new PMDs and applications, however there are still some corner cases.

> 
> It looks that testpmd does not check the capa before setting an offload
> flag. This could be a workaround in my application.
> 
> 2/ meaning of rxmode.jumbo_frame, rxmode.enable_scatter,
> rxmode.max_rx_pkt_len
> 
> While it's not related to the new API, it is probably a good opportunity to
> clarify the meaning of these flags. I'm not able to find good documentation
> about them.
> 
> Here is my understanding, the configuration only depends on:
> - the maximum rx frame length
> - the amount of data available in a mbuf (minus headroom)
> 
> Flags to set in rxmode (example):
> +---------------+----------------+----------------+-----------------+
> |               |mbuf_data_len=1K|mbuf_data_len=2K|mbuf_data_len=16K|
> +---------------+----------------+----------------+-----------------+
> |max_rx_len=1500|enable_scatter  |                |                 |
> +---------------+----------------+----------------+-----------------+
> |max_rx_len=9000|enable_scatter, |enable_scatter, |jumbo_frame      |
> |               |jumbo_frame     |jumbo_frame     |                 |
> +---------------+----------------+----------------+-----------------+
> 
> If this table is correct, the flag jumbo_frame would be equivalent to
> checking whether max_rx_pkt_len is above a threshold.
> 
> And enable_scatter could be deduced from the mbuf size of the given rxq
> (which is a bit harder but maybe doable).

I'm glad you raised this subject; we had a lot of discussion on it internally at Mellanox, and I fully agree. All the application needs is to specify the maximum packet size it wants to receive.

I think the lack of documentation is also causing PMDs to use those flags wrongly. For example, some PMDs set the jumbo_frame flag internally without it being set by the application.

I would like to add one more item: MTU. What is the relation (if any) between setting the MTU and max_rx_pkt_len? I know MTU stands for Maximum Transmission Unit, however at least in Linux it is the same for both transmit and receive.

> 
> Thanks,
> Olivier