From: "Ananyev, Konstantin"
To: Thomas Monjalon
Cc: Shahaf Shuler, dev@dpdk.org
Date: Tue, 5 Sep 2017 08:09:54 +0000
Message-ID: <2601191342CEEE43887BDE71AB9772584F246819@irsmsx105.ger.corp.intel.com>
In-Reply-To: <2334939.YzL2ADl2XU@xps>
Subject: Re: [dpdk-dev] [PATCH 4/4] ethdev: add helpers to move to the new offloads API

> -----Original Message-----
> From: Thomas Monjalon [mailto:thomas@monjalon.net]
> Sent: Tuesday, September 5, 2017 8:48 AM
> To: Ananyev, Konstantin
> Cc: Shahaf Shuler; dev@dpdk.org
> Subject: Re: [dpdk-dev] [PATCH 4/4] ethdev: add helpers to move to the new offloads API
>
> 04/09/2017 16:18, Ananyev, Konstantin:
> > From: Thomas Monjalon [mailto:thomas@monjalon.net]
> > > 04/09/2017 15:25, Ananyev, Konstantin:
> > > > Hi Shahaf,
> > > >
> > > > > +/**
> > > > > + * A conversion function from rxmode offloads API to rte_eth_rxq_conf
> > > > > + * offloads API.
> > > > > + */
> > > > > +static void
> > > > > +rte_eth_convert_rxmode_offloads(struct rte_eth_rxmode *rxmode,
> > > > > +				struct rte_eth_rxq_conf *rxq_conf)
> > > > > +{
> > > > > +	if (rxmode->header_split == 1)
> > > > > +		rxq_conf->offloads |= DEV_RX_OFFLOAD_HEADER_SPLIT;
> > > > > +	if (rxmode->hw_ip_checksum == 1)
> > > > > +		rxq_conf->offloads |= DEV_RX_OFFLOAD_CHECKSUM;
> > > > > +	if (rxmode->hw_vlan_filter == 1)
> > > > > +		rxq_conf->offloads |= DEV_RX_OFFLOAD_VLAN_FILTER;
> > > >
> > > > Thinking on it a bit more:
> > > > VLAN_FILTER is definitely per device, as it would affect VFs also.
> > > > At least that's what we have for Intel devices (ixgbe, i40e) right now.
> > > > For Intel devices VLAN_STRIP is also per device and
> > > > will also be applied to all corresponding VFs.
> > > > In fact, right now it is possible to query/change these 3 VLAN offload flags
> > > > on the fly (after dev_start) on a port basis via the
> > > > rte_eth_dev_(get|set)_vlan_offload API.
> > > > So, I think at least these 3 flags need to remain available on a port basis.
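
For reference, a minimal sketch of that runtime, per-port control path using the
existing rte_eth_dev_get/set_vlan_offload calls; the helper name, port_id value
and error handling are placeholders, not part of the patch under discussion:

#include <rte_ethdev.h>

/* Enable VLAN filtering and stripping on an already-started port.
 * As noted above, these flags apply to the whole port (and, on
 * ixgbe/i40e, also to its VFs), not to an individual Rx queue. */
static int
enable_port_vlan_offloads(uint8_t port_id)
{
	int mask = rte_eth_dev_get_vlan_offload(port_id);

	if (mask < 0)
		return mask;	/* invalid port */

	mask |= ETH_VLAN_FILTER_OFFLOAD | ETH_VLAN_STRIP_OFFLOAD;
	return rte_eth_dev_set_vlan_offload(port_id, mask);
}
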
> > >
> > > I don't understand how it helps to be able to configure the same thing
> > > in 2 places.
> >
> > Because some offloads are per device, others per queue.
> > Configuring on a device basis would allow most users to configure all
> > queues in the same manner by default.
> > Those users who need a more fine-grained setup (per queue)
> > will be able to override it via rx_queue_setup().
>
> Those users can set the same config for all queues.
> >
> > > I think you are just describing a limitation of this HW: some offloads
> > > must be the same for all queues.
> >
> > As I said above, on some devices some offloads might also affect queues
> > that belong to VFs (other ports, in DPDK terms).
> > You might never invoke rx_queue_setup() for those queues in your app,
> > but you still want to enable this offload on that device.

I am OK with having per-port and per-queue offload configuration.
My concern is that after that patch only per-queue offload configuration will remain.
I think we need both.
Konstantin

>
> You are advocating for a per-port configuration API because
> some settings must be the same on all the ports of your hardware?
> Then there is a bigger problem: you don't need per-port settings,
> but per-hw-device settings.
> Or would you accept more fine-grained per-port settings?
> If yes, you can accept even finer-grained per-queue settings.
> >
> > > It does not prevent configuring them in the per-queue setup.
> > >
> > > > In fact, why can't we have both per-port and per-queue RX offloads:
> > > > - dev_configure() will accept RX_OFFLOAD_* flags and apply them on a port basis.
> > > > - rx_queue_setup() will also accept RX_OFFLOAD_* flags and apply them on a queue basis.
> > > > - if a particular RX_OFFLOAD flag for that device can't be set up on a queue basis,
> > > >   rx_queue_setup() will return an error.
> > >
> > > The queue setup can work while the value is the same for every queue.
> >
> > OK, and how would people know that?
> > That for device N offload X has to be the same for all queues,
> > while for device M offload X can differ between queues.
>
> We can know the hardware limitations by filling in this information
> at PMD init.
>
> > Again, if we don't allow enabling/disabling offloads for a particular queue,
> > why bother updating the rx_queue_setup() API at all?
>
> I do not understand this question.
>
> > > > - rte_eth_rxq_info can be extended to provide information about which RX_OFFLOADs
> > > >   can be configured on a per-queue basis.
> > >
> > > Yes, the PMD should advertise its limitations, like being forced to
> > > apply the same configuration to all its queues.
> >
> > I didn't get your last sentence.
>
> I agree that the hardware limitations must be written in an ethdev structure.
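
To make the two-level scheme argued for above concrete, here is a rough sketch of
how an application might combine a port-wide default with a per-queue override.
The rxmode.offloads and rte_eth_rxq_conf.offloads fields follow the proposal in
this thread and are assumptions, not an established API; the helper name, queue
counts and descriptor number are placeholders.

#include <rte_ethdev.h>
#include <rte_mempool.h>

/* Hypothetical two-level Rx offload setup: dev_configure() applies a
 * port-wide default, rx_queue_setup() may add per-queue offloads and
 * would fail if the PMD cannot honour them on a per-queue basis. */
static int
setup_port_with_offloads(uint8_t port_id, struct rte_mempool *mb_pool)
{
	struct rte_eth_conf port_conf = { 0 };
	struct rte_eth_rxq_conf rxq_conf = { 0 };
	int ret;

	/* Port-level offloads, inherited by every Rx queue by default. */
	port_conf.rxmode.offloads = DEV_RX_OFFLOAD_CHECKSUM |
				    DEV_RX_OFFLOAD_VLAN_FILTER;

	ret = rte_eth_dev_configure(port_id, 1, 1, &port_conf);
	if (ret != 0)
		return ret;

	/* Per-queue override for queue 0: start from the port default and
	 * add one more offload; the PMD would reject this if the offload
	 * cannot differ between queues. */
	rxq_conf.offloads = port_conf.rxmode.offloads |
			    DEV_RX_OFFLOAD_HEADER_SPLIT;

	return rte_eth_rx_queue_setup(port_id, 0, 512,
				      rte_eth_dev_socket_id(port_id),
				      &rxq_conf, mb_pool);
}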