From: "Ananyev, Konstantin"
To: Thomas Monjalon, dev@dpdk.org, Shahaf Shuler
Cc: stephen@networkplumber.org
Subject: Re: [dpdk-dev] [PATCH 4/4] ethdev: add helpers to move to the new offloads API
Date: Wed, 13 Sep 2017 12:56:22 +0000
Message-ID: <2601191342CEEE43887BDE71AB9772584F24A738@irsmsx105.ger.corp.intel.com>
In-Reply-To: <1868308.cPa78Soq0s@xps>

> -----Original Message-----
> From: Thomas Monjalon [mailto:thomas@monjalon.net]
> Sent: Wednesday, September 13, 2017 1:42 PM
> To: dev@dpdk.org; Shahaf Shuler
> Cc: Ananyev, Konstantin; stephen@networkplumber.org
> Subject: Re: [dpdk-dev] [PATCH 4/4] ethdev: add helpers to move to the new offloads API
>
> 13/09/2017 13:16, Shahaf Shuler:
> > Wednesday, September 13, 2017 12:28 PM, Thomas Monjalon:
> > > I still think we must streamline the ethdev API instead of complexifying it.
> > > We should drop the big "configure everything" approach and configure offloads
> > > one by one, and per queue (the finer grain).
> >
> > The issue is that there is some functionality which cannot be achieved when configuring offloads per queue.
> > For example, the VLAN filter on Intel NICs: the PF can set it even without creating a single queue, in order to enable it for the VFs.
>
> As it is a device-specific - not documented - side effect,
> I won't consider it.

Hmm, are you saying that if there are gaps in our documentation it is OK to break things?
Once again, you suggest breaking existing functionality without providing any alternative way to support it.
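Just to make the example concrete: today a PF can do roughly the following, with zero queues created. This is only a rough sketch against the existing bitfield-style rxmode API; exact types and fields vary between DPDK versions, and whether a particular PMD honours it before start is driver-specific.

    /*
     * Sketch only: existing per-port path for the VLAN filter example.
     */
    #include <rte_ethdev.h>

    static int
    pf_enable_vlan_filter(uint16_t port_id, uint16_t vlan_id)
    {
            struct rte_eth_conf port_conf = { 0 };
            int ret;

            /* Request HW VLAN filtering at port level. */
            port_conf.rxmode.hw_vlan_filter = 1;

            /* Note: zero RX and zero TX queues are requested. */
            ret = rte_eth_dev_configure(port_id, 0, 0, &port_conf);
            if (ret != 0)
                    return ret;

            /* Add the VLAN id to the port-wide filter table, so it also
             * applies to traffic destined to the VFs. */
            return rte_eth_dev_vlan_filter(port_id, vlan_id, 1);
    }

With a strictly per-queue API there is no obvious place to express this.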
Surely I will NACK such a proposal.
Konstantin

> However I understand it may be better to be able to configure
> per-port offloads with a dedicated per-port function.
> I agree with the approach of v3 of this series.
>
> Let me give my overview of offloads:
>
> We have simple offloads which are configured by just setting a flag.
> The same flag can be set per-port or per-queue.
> This offload can be set before starting or on the fly.
> We currently have no generic way to set it on the fly.
>
> We also have more complicated offloads which require more configuration.
> They are set with the rte_flow API.
> They can be per-port, per-queue, on the fly or not (AFAIK).
>
> I think we must discuss the "on the fly" capability.
> It probably requires setting up simple offloads (flags) with a dedicated
> function instead of using the "configure" and "queue_setup" functions.
> This new capability can be implemented in a different series.
>
> Opinions?
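For reference, from the application side the per-port/per-queue split discussed above would look roughly like the sketch below. It assumes the field names proposed in the v3 series (rxmode.offloads, rte_eth_rxconf.offloads, rx_queue_offload_capa and the DEV_RX_OFFLOAD_* flags), which may still change:

    /*
     * Sketch of the proposed per-port + per-queue offload split.
     */
    #include <rte_ethdev.h>

    static int
    setup_port_new_api(uint16_t port_id, struct rte_mempool *mp)
    {
            struct rte_eth_dev_info dev_info;
            struct rte_eth_conf port_conf = { 0 };
            struct rte_eth_rxconf rxq_conf;
            int ret;

            rte_eth_dev_info_get(port_id, &dev_info);

            /* Simple per-port offload: one flag, set before start. */
            if (dev_info.rx_offload_capa & DEV_RX_OFFLOAD_VLAN_FILTER)
                    port_conf.rxmode.offloads |= DEV_RX_OFFLOAD_VLAN_FILTER;

            ret = rte_eth_dev_configure(port_id, 1, 1, &port_conf);
            if (ret != 0)
                    return ret;

            /* Per-queue offload: queue 0 additionally asks for IPv4 checksum,
             * provided the PMD reports it as a per-queue capability. */
            rxq_conf = dev_info.default_rxconf;
            rxq_conf.offloads = port_conf.rxmode.offloads;
            if (dev_info.rx_queue_offload_capa & DEV_RX_OFFLOAD_IPV4_CKSUM)
                    rxq_conf.offloads |= DEV_RX_OFFLOAD_IPV4_CKSUM;

            /* TX queue setup and rte_eth_dev_start() omitted from the sketch. */
            return rte_eth_rx_queue_setup(port_id, 0, 512,
                                          rte_eth_dev_socket_id(port_id),
                                          &rxq_conf, mp);
    }

Per-port flags still go through rte_eth_dev_configure(), so the zero-queue PF case stays expressible, while per-queue additions ride on rte_eth_rx_queue_setup(); an on-the-fly path would need a separate dedicated function, as suggested above.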