From mboxrd@z Thu Jan  1 00:00:00 1970
From: Shahaf Shuler <shahafs@mellanox.com>
To: "Ananyev, Konstantin", Thomas Monjalon
CC: dev@dpdk.org
Date: Tue, 5 Sep 2017 10:51:38 +0000
Subject: Re: [dpdk-dev] [PATCH 4/4] ethdev: add helpers to move to the new offloads API
In-Reply-To: <2601191342CEEE43887BDE71AB9772584F246819@irsmsx105.ger.corp.intel.com>
References: <2327783.H4uO08xLcu@xps>
 <2601191342CEEE43887BDE71AB9772584F2460F1@irsmsx105.ger.corp.intel.com>
 <2334939.YzL2ADl2XU@xps>
 <2601191342CEEE43887BDE71AB9772584F246819@irsmsx105.ger.corp.intel.com>
Tuesday, September 5, 2017 11:10 AM, Ananyev, Konstantin:
> > > > > In fact, right now it is possible to query/change these 3 VLAN
> > > > > offload flags on the fly (after dev_start) on a port basis by the
> > > > > rte_eth_dev_(get|set)_vlan_offload API.

Regarding this ethdev API - it looks like a hack. Currently there are two ways for the user to set Rx VLAN offloads.
One is through dev_configure, which requires the ports to be stopped. The other is this API, which can set them even while the port is started.
We should have only one place where the application sets offloads, and this is currently dev_configure,
and in the future it should be rx_queue_setup.
I would say this API should be removed as well.
An application which wants to change those offloads will stop the ports and reconfigure the PMD.
I am quite sure there are PMDs which need to re-create the Rx queue when the VLAN offloads change, and this cannot be done while traffic flows.

> > > > > So, I think at least these 3 flags need to remain on a port basis.
> > > >
> > > > I don't understand how it helps to be able to configure the same
> > > > thing in 2 places.
> > >
> > > Because some offloads are per device, others per queue.
> > > Configuring on a device basis would allow most users to configure all
> > > queues in the same manner by default.
> > > Those users who need a more fine-grained setup (per queue) will
> > > be able to override it with rx_queue_setup().
> >
> > Those users can set the same config for all queues.
> >
> > > > I think you are just describing a limitation of this HW: some
> > > > offloads must be the same for all queues.
> > >
> > > As I said above - on some devices some offloads might also affect
> > > queues that belong to VFs (to other ports, in DPDK terms).
> > > You might never invoke rx_queue_setup() for these queues in your app.
> > > But you still want to enable this offload on that device.
>
> I am ok with having per-port and per-queue offload configuration.
> My concern is that after that patch only per-queue offload configuration
> will remain.
> I think we need both.

So it looks like we all agree PMDs should report, as part of rte_eth_dev_info_get, which offloads are per port and which are per queue.

Regarding the offloads configuration by the application, I see two options:
1. Have an API to set offloads per port as part of device configure, and an API to set offloads per queue as part of queue setup.
2. Set all offloads as part of queue configuration (per-port offloads will be set equally for all queues). In case of a mixed configuration for port offloads, the PMD will return an error.
   Such an error can be reported on device start. The PMD will traverse the queues and check for conflicts.

I will focus on the cons, since both achieve the goal:

Cons of #1:
- Two places to configure offloads.
- Like Thomas mentioned - what about offloads per device? This direction leads to even more places to configure the offloads.

Cons of #2:
- Late error reporting - on device start and not on queue setup.

I would go with #2.

> Konstantin
>
> > You are advocating for a per-port configuration API because some
> > settings must be the same on all the ports of your hardware?
> > So there is a big trouble. You don't need per-port settings, but
> > per-hw-device settings.
> > Or would you accept more fine-grained per-port settings?
> > If yes, you can accept even finer-grained per-queue settings.
> >
> > > > It does not prevent configuring them in the per-queue setup.
> > > > > In fact, why can't we have both per-port and per-queue RX offloads:
> > > > > - dev_configure() will accept RX_OFFLOAD_* flags and apply them on
> > > > >   a port basis.
> > > > > - rx_queue_setup() will also accept RX_OFFLOAD_* flags and apply
> > > > >   them on a queue basis.
> > > > > - if a particular RX_OFFLOAD flag for that device couldn't be set
> > > > >   up on a queue basis - rx_queue_setup() will return an error.
> > > >
> > > > The queue setup can work while the value is the same for every queue.
> > >
> > > Ok, and how would people know that?
> > > That for device N offload X has to be the same for all queues, and
> > > for device M offload X can differ between queues.
> >
> > We can know the hardware limitations by filling in this information at
> > PMD init.
> >
> > > Again, if we don't allow enabling/disabling offloads for a particular
> > > queue, why bother updating the rx_queue_setup() API at all?
> >
> > I do not understand this question.
> >
> > > > > - rte_eth_rxq_info can be extended to provide information on which
> > > > >   RX_OFFLOADs can be configured on a per-queue basis.
> > > >
> > > > Yes, the PMD should advertise its limitations, like being forced to
> > > > apply the same configuration to all its queues.
> > >
> > > Didn't get your last sentence.
> >
> > I agree that the hardware limitations must be written in an ethdev
> > structure.