From mboxrd@z Thu Jan  1 00:00:00 1970
From: Shahaf Shuler <shahafs@mellanox.com>
To: "Ananyev, Konstantin", Thomas Monjalon
CC: "dev@dpdk.org"
Date: Wed, 6 Sep 2017 06:01:43 +0000
Subject: Re: [dpdk-dev] [PATCH 4/4] ethdev: add helpers to move to the new offloads API

Tuesday, September 5, 2017 6:31 PM, Ananyev, Konstantin:
> > > > > > > In fact, right now it is possible to query/change these 3
> > > > > > > vlan offload flags on the fly (after dev_start) on a port
> > > > > > > basis by the rte_eth_dev_(get|set)_vlan_offload API.
> >
> > Regarding this API from ethdev.
> >
> > So this seems like a hack on ethdev. Currently there are 2 ways for a
> > user to set Rx vlan offloads.
> > One is through dev_configure, which requires the ports to be stopped.
> > The other is this API, which can set them even when the port is started.
>
> Yes, there is an ability to enable/disable VLAN offloads without
> stopping/reconfiguring the device.
> Though I wouldn't call it 'a hack'.
> From my perspective - it is a useful feature.
> Same as it is possible in some cases to change the MTU without stopping
> the device, etc.
>
> > We should have only one place where the application sets offloads, and
> > this is currently on dev_configure,
>
> Hmm, if HW supports the ability to do things at runtime, why do we have
> to stop users from using that ability?
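For concreteness, a minimal sketch of the runtime toggle under discussion,
assuming the 17.08-era names (ETH_VLAN_*_OFFLOAD masks, uint8_t port ids):

#include <rte_ethdev.h>

/* Sketch: enable VLAN stripping on an already-started port while
 * preserving the other VLAN offload bits. Assumes DPDK 17.08-era
 * API names; not a definitive implementation. */
static int
enable_vlan_strip_at_runtime(uint8_t port_id)
{
	int mask = rte_eth_dev_get_vlan_offload(port_id);

	if (mask < 0)
		return mask; /* invalid port or query failure */

	mask |= ETH_VLAN_STRIP_OFFLOAD; /* keep filter/extend bits as-is */
	return rte_eth_dev_set_vlan_offload(port_id, mask);
}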
>
> > And future to be on rx_queue_setup.
> >
> > I would say that this API should be removed as well.
> > An application which wants to change those offloads will stop the ports
> > and reconfigure the PMD.
>
> I wouldn't agree - see above.
>
> > I am quite sure that there are PMDs which need to re-create the Rxq
> > based on vlan offloads changing, and this cannot be done while the
> > traffic flows.
>
> That's an optional API - a PMD can choose whether it wants to support it
> or not.
>
> > > > > > > So, I think at least these 3 flags need to remain on a port
> > > > > > > basis.
> > > > > >
> > > > > > I don't understand how it helps to be able to configure the
> > > > > > same thing in 2 places.
> > > > >
> > > > > Because some offloads are per device, others - per queue.
> > > > > Configuring on a device basis would allow most users to configure
> > > > > all queues in the same manner by default.
> > > > > Those users who would need a more fine-grained setup (per queue)
> > > > > will be able to overwrite it by rx_queue_setup().
> > > >
> > > > Those users can set the same config for all queues.
> > > >
> > > > > > I think you are just describing a limitation of this HW: some
> > > > > > offloads must be the same for all queues.
> > > > >
> > > > > As I said above - on some devices some offloads might also affect
> > > > > queues that belong to VFs (to other ports, in DPDK words).
> > > > > You might never invoke rx_queue_setup() for these queues in your
> > > > > app.
> > > > > But you still want to enable this offload on that device.
> > >
> > > I am ok with having per-port and per-queue offload configuration.
> > > My concern is that after that patch only per-queue offload
> > > configuration will remain.
> > > I think we need both.
> >
> > So it looks like we all agree PMDs should report, as part of
> > rte_eth_dev_info_get, which offloads are per port and which are per
> > queue.
>
> Yep.
>
> > Regarding the offloads configuration by the application I see 2 options:
> > 1. have an API to set offloads per port as part of device configure and
> >    an API to set offloads per queue as part of queue setup
> > 2. set all offloads as part of queue configuration (per-port offloads
> >    will be set equally for all queues). In case of a mixed configuration
> >    for port offloads the PMD will return an error.
> >    Such an error can be reported on device start. The PMD will traverse
> >    the queues and check for conflicts.
> >
> > I will focus on the cons, since both achieve the goal:
> >
> > Cons of #1:
> > - Two places to configure offloads.
>
> Yes, but why is that a problem?

If we could make the offloads API set the offloads in a single place, it
would be much cleaner and less error prone. There would be one flow which
changes the offloads configuration.
Later on, when we want to change/extend it, it will be much simpler, as all
modifications can happen in a single place only.

>
> > - Like Thomas mentioned - what about offloads per device? This
> > direction leads to more places to configure the offloads.
>
> As you said above - there would be 2 places: per port and per queue.
> Could you explain - what other places are you talking about?

In fact, the vlan filter offload for a PF is a *per device* offload and not
per port, since the corresponding VF has it just by the fact the PF set it
on dev_configure. So to be exact, such an offload should be set in a new
offload section called "per device offloads".
Currently you compromise on setting it in the *per port* offload section,
with a proper explanation of the VF limitation in Intel HW.
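To make option #1 concrete, a hypothetical sketch of its two configuration
points. The uint64_t 'offloads' fields in rte_eth_rxmode/rte_eth_rxconf and
the DEV_RX_OFFLOAD_VLAN_FILTER flag are assumed extensions for
illustration, not existing 17.08 definitions:

#include <rte_ethdev.h>
#include <rte_mempool.h>

/* Hypothetical option #1 flow: per-port Rx offloads set at configure
 * time act as the default for every queue; rx_queue_setup() may
 * override them per queue and fails early if the PMD cannot vary
 * that offload. */
static int
setup_with_port_and_queue_offloads(uint8_t port_id,
		struct rte_mempool *mbuf_pool)
{
	struct rte_eth_conf dev_conf = { 0 };
	struct rte_eth_rxconf rxq_conf = { 0 };
	int ret;

	/* Per-port defaults (assumed 'offloads' field in rxmode). */
	dev_conf.rxmode.offloads = DEV_RX_OFFLOAD_VLAN_FILTER |
				   DEV_RX_OFFLOAD_VLAN_STRIP;
	ret = rte_eth_dev_configure(port_id, 1, 1, &dev_conf);
	if (ret < 0)
		return ret;

	/* Per-queue override: queue 0 keeps filtering but not stripping.
	 * A PMD that cannot vary stripping per queue would reject this
	 * here, at setup time rather than at device start. */
	rxq_conf.offloads = DEV_RX_OFFLOAD_VLAN_FILTER;
	return rte_eth_rx_queue_setup(port_id, 0, 512,
			rte_eth_dev_socket_id(port_id), &rxq_conf,
			mbuf_pool);
}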
>
> > Cons of #2:
> > - Late error reporting - on device start and not on queue setup.
>
> Consider a scenario where a PF has corresponding VFs (the PF is
> controlled by DPDK). Right now (at least with Intel HW) it is possible
> to:
>
> struct rte_eth_conf dev_conf;
> dev_conf.rxmode.hw_vlan_filter = 1;
> ...
> rte_eth_dev_configure(pf_port_id, 0, 0, &dev_conf);
> rte_eth_dev_start(pf_port_id);
>
> In that scenario I don't have any RX/TX queues configured.
> Though I am still able to enable the vlan filter, and it would work
> correctly for the VFs.
> Same for other per-port offloads.

For the PF - enabling vlan filtering without any queues means nothing. The
PF can receive no traffic, so what difference does it make whether vlan
filtering is set?
For the VF - I assume it will have queues, therefore for it vlan filtering
has a meaning. However, as I said above, the VF has the vlan filter because
in Intel HW this is a per-device offload, so this is not a good example.
Which other per-port offloads do you refer to?
I don't understand the meaning of setting per-port offloads without opening
any Tx/Rx queues.

> With approach #2 it simply wouldn't work.

Yes, for vlan filtering it will not work on Intel HW, and this may be
enough to move to suggestion #1. Thomas?

>
> So my preference is still #1.
>
> Konstantin
>
> > I would go with #2.
> > >
> > > Konstantin
> > >
> > > > You are advocating for a per-port configuration API because some
> > > > settings must be the same on all the ports of your hardware?
> > > > So there is a big trouble. You don't need per-port settings, but
> > > > per-hw-device settings.
> > > > Or would you accept more fine-grained per-port settings?
> > > > If yes, you can accept even finer-grained per-queue settings.
> > > >
> > > > > > > It does not prevent from configuring them in the per-queue
> > > > > > > setup.
> > > > > > >
> > > > > > > In fact, why can't we have both per-port and per-queue RX
> > > > > > > offload:
> > > > > > > - dev_configure() will accept RX_OFFLOAD_* flags and apply
> > > > > > >   them on a port basis.
> > > > > > > - rx_queue_setup() will also accept RX_OFFLOAD_* flags and
> > > > > > >   apply them on a queue basis.
> > > > > > > - if a particular RX_OFFLOAD flag for that device couldn't be
> > > > > > >   set up on a queue basis - rx_queue_setup() will return an
> > > > > > >   error.
> > > > > >
> > > > > > The queue setup can work while the value is the same for every
> > > > > > queue.
> > > > >
> > > > > Ok, and how would people know that?
> > > > > That for device N offload X has to be the same for all queues,
> > > > > and for device M offload X can differ between queues.
> > > >
> > > > We can know the hardware limitations by filling this information in
> > > > at PMD init.
> > > >
> > > > > Again, if we don't allow enabling/disabling offloads for a
> > > > > particular queue, why bother with updating the rx_queue_setup()
> > > > > API at all?
> > > >
> > > > I do not understand this question.
> > > >
> > > > > > > - rte_eth_rxq_info can be extended to provide information on
> > > > > > >   which RX_OFFLOADs can be configured on a per-queue basis.
> > > > > >
> > > > > > Yes, the PMD should advertise its limitations, like being
> > > > > > forced to apply the same configuration to all its queues.
> > > > >
> > > > > Didn't get your last sentence.
> > > > I agree that the hardware limitations must be written in an ethdev
> > > > structure.
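For illustration, a hedged sketch of how an application might consume such
a structure; 'rx_queue_offload_capa' is an assumed new field next to the
existing per-port 'rx_offload_capa' bitmap in struct rte_eth_dev_info:

#include <rte_ethdev.h>

/* Sketch: decide whether an Rx offload may differ between queues.
 * Only 'rx_offload_capa' (per-port capabilities) exists in 17.08;
 * 'rx_queue_offload_capa' is an assumed addition. */
static int
rx_offload_is_per_queue(uint8_t port_id, uint64_t offload)
{
	struct rte_eth_dev_info info;

	rte_eth_dev_info_get(port_id, &info);
	if ((info.rx_offload_capa & offload) != offload)
		return 0; /* not supported at all on this device */
	/* Bits the PMD allows to vary per queue (assumed field). */
	return (info.rx_queue_offload_capa & offload) == offload;
}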