From: "Ananyev, Konstantin"
To: Shahaf Shuler, dev@dpdk.org
Date: Tue, 29 Aug 2017 09:43:20 +0000
Subject: Re: [dpdk-dev] [RFC PATCH 4/4] ethdev: add helpers to move to the new offloads API

> -----Original Message-----
> From: Shahaf Shuler [mailto:shahafs@mellanox.com]
> Sent: Tuesday, August 29, 2017 7:27 AM
> To: Ananyev, Konstantin; dev@dpdk.org
> Subject: RE: [dpdk-dev] [RFC PATCH 4/4] ethdev: add helpers to move to the new offloads API
>
> Monday, August 28, 2017 5:12 PM, Ananyev, Konstantin:
> > > Wednesday, August 23, 2017 3:29 PM, Ananyev, Konstantin:
> > > >
> > > > Hmm, and why would we need to reconfigure our device in the middle
> > > > of rx queue setup?
> > >
> > > The reason is that the old Rx offloads API is configured at device configure time.
> > > This "if" section is for applications which have already moved to the new offloads
> > API while the underlying PMD still uses the old one.
> >
> > Ok, but as I remember, right now the initialization order is pretty strict:
> > rx_queue_setup() always has to be called after dev_configure().
>
> Is this restriction backed up in the API definition?
> From what I saw in the dev_configure API documentation:
>
> "
> This function must be invoked first before any other function in the Ethernet API.
> This function can also be re-invoked when a device is in the stopped state.
> "
>
> It does not dictate that one is forbidden to do the following (while the port is stopped):
> dev_configure
> rx_queue_setup(queue 1)
> dev_configure (for the same configuration)
> rx_queue_setup(queue 2)
>

It might work in some cases, but there is no guarantee it would work in general.
Though, as I understand, in your case the config parameters might be different
for the second call of dev_configure() anyway.
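For reference, the strict order exercised by the in-tree examples and apps is:
a single dev_configure(), then every queue setup, then dev_start(). A minimal
sketch against the current (pre-RFC) ethdev API - the port_init name, queue
counts and descriptor numbers are illustrative only:

#include <string.h>
#include <rte_ethdev.h>
#include <rte_lcore.h>
#include <rte_mempool.h>

/* Canonical order: configure once, set up all queues, then start.
 * Re-invoking rte_eth_dev_configure() between the queue setups (the
 * interleaved sequence questioned above) is not exercised by any
 * in-tree application.
 */
static int
port_init(uint8_t port_id, struct rte_mempool *mb_pool)
{
        struct rte_eth_conf port_conf;
        int ret;

        memset(&port_conf, 0, sizeof(port_conf));

        ret = rte_eth_dev_configure(port_id, 2 /* nb_rxq */, 1 /* nb_txq */,
                                    &port_conf);
        if (ret < 0)
                return ret;

        /* All Rx/Tx queues are set up after the single dev_configure(). */
        ret = rte_eth_rx_queue_setup(port_id, 0, 512, rte_socket_id(),
                                     NULL /* default rx_conf */, mb_pool);
        if (ret < 0)
                return ret;

        ret = rte_eth_rx_queue_setup(port_id, 1, 512, rte_socket_id(),
                                     NULL, mb_pool);
        if (ret < 0)
                return ret;

        ret = rte_eth_tx_queue_setup(port_id, 0, 512, rte_socket_id(),
                                     NULL /* default tx_conf */);
        if (ret < 0)
                return ret;

        return rte_eth_dev_start(port_id);
}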
> I didn't see any words on it in the rx_queue_setup API either. Maybe the API doc should be updated.

Yes, good point.

>
> You are correct though that all examples and apps in the dpdk tree behave in the way you described.
>
> > One of the reasons for that: rx_queue_setup() might change fields inside
> > dev->data->dev_private.
> > A second call to dev_configure() will void these changes, and some of the rxq
> > config information will be lost.
> > So I think we should avoid calling dev_configure() inside rx_queue_setup().
>
> To continue the above comment, is this reason a wrong implementation of the callbacks
> by the PMDs, or an actual limitation of ethdev?

I'd say it is a combination of limitations of the ethdev design and actual HW restrictions.
If we had an rx/tx configure function per queue, some of these limitations might be removed, I think.
Though for some HW offloads can be enabled/disabled per device, not per queue,
so I guess some limitations would persist anyway.

>
> >
> > My preference would still be to force all PMDs to move to the new version of
> > rx_queue_setup() first.
> > I think it would be much less error prone than supporting two flavors of
> > PMD config
> > and will allow us to catch errors early - in case this new scheme doesn't work
> > for some PMD for any reason.
>
> I fully agree with you. If we can force all PMDs to move to the new API it will be the best.
> No need for a risky re-configuration in the middle of the queue setup.
> But how can we force all of them to do such a transition? I saw there is a discussion on this RFC at the techboard - maybe it can be decided there.
>
> As I said before, I think it will be very hard to migrate all PMDs to the new API by myself.
> It is not a simple function prototype change or some identical swap I need to do on each PMD.
> Each device has its own characteristics and restrictions on the offloads it can provide.
> Some offloads can be enabled only at specific places in the device initialization.
> Some offloads are only per port.
> To do such a transition properly I must fully understand the underlying device of each PMD.
>
> I can commit to the transition for the devices I am familiar with: mlx4 and mlx5.

That's understandable.
For the tx_prepare() work, we used the following approach:
1. Submitted a patch with changes in rte_ethdev and the PMDs we are familiar with (the Intel ones).
   For other PMDs the patch contained just minimal changes to make it build cleanly.
2. Asked other PMD maintainers to review the rte_ethdev changes and provide a proper patch for the PMD they own.
Probably the same approach can be used here.
Konstantin

>
> >
> > Also it seems that you forgot to:
> > struct rte_eth_rxmode rxmode = dev->data->dev_conf.rxmode;
>
> Thanks, discovered it right after I sent the RFC. Already fixed.
>
> >
> > Konstantin
> >
> > >
> > > >
> > > > > +		if (ret < 0) {
> > > > > +			RTE_PMD_DEBUG_TRACE(
> > > > > +				"unable to re-configure port %d "
> > > > > +				"in order to apply rxq offloads "
> > > > > +				"configuration\n", port_id);
> > > > > +			}
> > > > > +		}
> > > > > +	}
> > > > > +
> > > > > 	ret = (*dev->dev_ops->rx_queue_setup)(dev, rx_queue_id, nb_rx_desc,
> > > > > 					      socket_id, rx_conf, mp);
> > > >
> > > > BTW, I don't see changes in any PMD for the new offload flags?
> > > > Is it because it is just an RFC and the full patch would contain such changes?
> > >
> > > Yes, this is because this is an RFC.
> > >
> > > The full patch I intend will move all examples and testpmd to the new
> > offloads API.
> > > In addition it will include the mlx5 PMD support for the new offloads API.
> > >
> > > As I said in a previous mail, I believe that the work to move the
> > > different PMDs to the new API should be done by their developers or
> > > maintainers.
> > >
> > >
> > > >
> > > > > 	if (!ret) {
> > > > > @@ -1094,6 +1182,51 @@ rte_eth_rx_queue_setup(uint8_t port_id, uint16_t rx_queue_id,
> > > > > 	return ret;
> > > > > }
> > > > >
> > > > > +/**
> > > > > + * A copy function from txq_flags to the rte_eth_txq_conf offloads API,
> > > > > + * to enable PMDs to support only one of the APIs.
> > > > > + */
> > > > > +static void
> > > > > +rte_eth_copy_txq_flags(struct rte_eth_txq_conf *txq_conf)
> > > > > +{
> > > > > +	uint32_t txq_flags = txq_conf->txq_flags;
> > > > > +	uint64_t *offloads = &txq_conf->offloads;
> > > > > +
> > > > > +	if (!(txq_flags & ETH_TXQ_FLAGS_NOMULTSEGS))
> > > > > +		*offloads |= DEV_TX_OFFLOAD_MULTI_SEGS;
> > > > > +	if (!(txq_flags & ETH_TXQ_FLAGS_NOVLANOFFL))
> > > > > +		*offloads |= DEV_TX_OFFLOAD_VLAN_INSERT;
> > > > > +	if (!(txq_flags & ETH_TXQ_FLAGS_NOXSUMSCTP))
> > > > > +		*offloads |= DEV_TX_OFFLOAD_SCTP_CKSUM;
> > > > > +	if (!(txq_flags & ETH_TXQ_FLAGS_NOXSUMUDP))
> > > > > +		*offloads |= DEV_TX_OFFLOAD_UDP_CKSUM;
> > > > > +	if (!(txq_flags & ETH_TXQ_FLAGS_NOXSUMTCP))
> > > > > +		*offloads |= DEV_TX_OFFLOAD_TCP_CKSUM;
> > > > > +}
> > > > > +
> > > > > +/**
> > > > > + * A copy function from the rte_eth_txq_conf offloads API to the txq_flags
> > > > > + * API, to enable applications to be agnostic to the API supported by
> > > > > + * the PMD.
> > > > > + */
> > > > > +static void
> > > > > +rte_eth_copy_txq_offloads(struct rte_eth_txq_conf *txq_conf)
> > > > > +{
> > > > > +	uint32_t *txq_flags = &txq_conf->txq_flags;
> > > > > +	uint64_t offloads = txq_conf->offloads;
> > > > > +
> > > > > +	if (!(offloads & DEV_TX_OFFLOAD_MULTI_SEGS))
> > > > > +		*txq_flags |= ETH_TXQ_FLAGS_NOMULTSEGS;
> > > > > +	if (!(offloads & DEV_TX_OFFLOAD_VLAN_INSERT))
> > > > > +		*txq_flags |= ETH_TXQ_FLAGS_NOVLANOFFL;
> > > > > +	if (!(offloads & DEV_TX_OFFLOAD_SCTP_CKSUM))
> > > > > +		*txq_flags |= ETH_TXQ_FLAGS_NOXSUMSCTP;
> > > > > +	if (!(offloads & DEV_TX_OFFLOAD_UDP_CKSUM))
> > > > > +		*txq_flags |= ETH_TXQ_FLAGS_NOXSUMUDP;
> > > > > +	if (!(offloads & DEV_TX_OFFLOAD_TCP_CKSUM))
> > > > > +		*txq_flags |= ETH_TXQ_FLAGS_NOXSUMTCP;
> > > > > +}
> > > > > +
> > > > > int
> > > > > rte_eth_tx_queue_setup(uint8_t port_id, uint16_t tx_queue_id,
> > > > > 		       uint16_t nb_tx_desc, unsigned int socket_id,
> > > > > @@ -1145,6 +1278,13 @@ rte_eth_tx_queue_setup(uint8_t port_id, uint16_t tx_queue_id,
> > > > > 	if (tx_conf == NULL)
> > > > > 		tx_conf = &dev_info.default_txconf;
> > > > >
> > > > > +	if ((dev->data->dev_flags & RTE_ETH_DEV_TXQ_OFFLOAD) &&
> > > > > +	    (!(tx_conf->txq_flags & ETH_TXQ_FLAGS_IGNORE)))
> > > > > +		rte_eth_copy_txq_flags(tx_conf);
> > > > > +	else if (!(dev->data->dev_flags & RTE_ETH_DEV_TXQ_OFFLOAD) &&
> > > > > +		 (tx_conf->txq_flags & ETH_TXQ_FLAGS_IGNORE))
> > > > > +		rte_eth_copy_txq_offloads(tx_conf);
> > > > > +
> > > >
> > > > As I said in my previous mail - I think it is better to always convert
> > > > from the old txq_flags to the new TX offload flags and make each PMD
> > > > understand the new offload values only.
> > > > Konstantin
> > > >
> > > > > 	return (*dev->dev_ops->tx_queue_setup)(dev, tx_queue_id, nb_tx_desc,
> > > > > 					       socket_id, tx_conf);
> > > > > }
> > > > > --
> > > > > 2.12.0
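
To illustrate the suggestion of converting the legacy flags once in the ethdev
layer so PMDs only ever see the new values, here is a sketch of such an
unconditional translation. The helper name and its placement are hypothetical,
the flag names are taken from the RFC quoted above, and DEV_TX_OFFLOAD_MULTI_SEGS
is a flag this series itself introduces:

#include <stdint.h>
#include <rte_ethdev.h>

/* Hypothetical helper: translate the legacy ETH_TXQ_FLAGS_* bits into
 * the per-queue DEV_TX_OFFLOAD_* bits proposed by this series. If
 * rte_eth_tx_queue_setup() applied this unconditionally whenever
 * ETH_TXQ_FLAGS_IGNORE is not set, PMD tx_queue_setup callbacks could
 * drop txq_flags handling entirely and parse only the new offloads.
 */
static uint64_t
txq_flags_to_offloads(uint32_t txq_flags)
{
        uint64_t offloads = 0;

        if (!(txq_flags & ETH_TXQ_FLAGS_NOMULTSEGS))
                offloads |= DEV_TX_OFFLOAD_MULTI_SEGS;
        if (!(txq_flags & ETH_TXQ_FLAGS_NOVLANOFFL))
                offloads |= DEV_TX_OFFLOAD_VLAN_INSERT;
        if (!(txq_flags & ETH_TXQ_FLAGS_NOXSUMSCTP))
                offloads |= DEV_TX_OFFLOAD_SCTP_CKSUM;
        if (!(txq_flags & ETH_TXQ_FLAGS_NOXSUMUDP))
                offloads |= DEV_TX_OFFLOAD_UDP_CKSUM;
        if (!(txq_flags & ETH_TXQ_FLAGS_NOXSUMTCP))
                offloads |= DEV_TX_OFFLOAD_TCP_CKSUM;

        return offloads;
}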