From: "Ananyev, Konstantin"
To: "Lu, Wenzhuo" , "Tao, Zhe" , "dev@dpdk.org"
CC: "Richardson, Bruce" , "Chen, Jing D" , "Liang, Cunming" , "Wu, Jingjing" , "Zhang, Helin"
Date: Sun, 12 Jun 2016 23:16:45 +0000
Message-ID: <2601191342CEEE43887BDE71AB97725836B6E969@irsmsx105.ger.corp.intel.com>
References: <1465278858-5131-1-git-send-email-zhe.tao@intel.com> <1465282390-6025-1-git-send-email-zhe.tao@intel.com> <1465282390-6025-3-git-send-email-zhe.tao@intel.com> <2601191342CEEE43887BDE71AB97725836B6C44E@irsmsx105.ger.corp.intel.com> <6A0DE07E22DDAD4C9103DF62FEBC090903483A2C@shsmsx102.ccr.corp.intel.com> <2601191342CEEE43887BDE71AB97725836B6CCE1@irsmsx105.ger.corp.intel.com> <6A0DE07E22DDAD4C9103DF62FEBC09090348499B@shsmsx102.ccr.corp.intel.com>
In-Reply-To: <6A0DE07E22DDAD4C9103DF62FEBC09090348499B@shsmsx102.ccr.corp.intel.com>
Subject: Re: [dpdk-dev] [PATCH v4 2/8] lib/librte_ether: defind RX/TX lock mode

Hi Wenzhuo,

> Hi Konstantin,
>
> > -----Original Message-----
> > From: Ananyev, Konstantin
> > Sent: Wednesday, June 8, 2016 5:20 PM
> > To: Lu, Wenzhuo; Tao, Zhe; dev@dpdk.org
> > Cc: Richardson, Bruce; Chen, Jing D; Liang, Cunming; Wu, Jingjing; Zhang, Helin
> > Subject: RE: [PATCH v4 2/8] lib/librte_ether: defind RX/TX lock mode
> >
> > > Hi Konstantin,
> > >
> > > > -----Original Message-----
> > > > From: Ananyev, Konstantin
> > > > Sent: Tuesday, June 7, 2016 5:59 PM
> > > > To: Tao, Zhe; dev@dpdk.org
> > > > Cc: Lu, Wenzhuo; Richardson, Bruce; Chen, Jing D; Liang, Cunming;
> > > > Wu, Jingjing; Zhang, Helin
> > > > Subject: RE: [PATCH v4 2/8] lib/librte_ether: defind RX/TX lock mode
> > > >
> > > > Hi Zhe & Wenzhuo,
> > > >
> > > > Please find my comments below.
> > > > BTW, for clarification - is that patch for 16.11?
> > > > I believe it's too late to introduce such significant change in 16.07.
> > > > Thanks
> > > > Konstantin
> > > Thanks for the comments.
> > > Honestly, our purpose is 16.07. Realizing the big impact, we use
> > > NEXT_ABI to comment our change. So, I think although we want to merge it in
> > > 16.07 this change will become effective after we remove NEXT_ABI in 16.11.
> >
> > I don't think it is achievable.
> > First I think your code is not in proper shape yet, right now.
> > Second, as you said, it is a significant change and I would like to hear opinions
> > from the rest of the community.
> Agree it should have risk. I mean our target is 16.07. But surely if it can be achieved depends on the feedback from the community.
>
> > > > > Define lock mode for RX/TX queue.
> > > > > Because when resetting the device we want the resetting thread to
> > > > > get the lock of the RX/TX queue to make sure the RX/TX is stopped.
> > > > >
> > > > > Using next ABI macro for this ABI change as it has too much
> > > > > impact. 7 APIs and 1 global variable are impacted.
> > > > >
> > > > > Signed-off-by: Wenzhuo Lu
> > > > > Signed-off-by: Zhe Tao
> > > > > ---
> > > > >  lib/librte_ether/rte_ethdev.h | 62 +++++++++++++++++++++++++++++++++++++++++++
> > > > >  1 file changed, 62 insertions(+)
> > > > >
> > > > > diff --git a/lib/librte_ether/rte_ethdev.h b/lib/librte_ether/rte_ethdev.h
> > > > > index 74e895f..4efb5e9 100644
> > > > > --- a/lib/librte_ether/rte_ethdev.h
> > > > > +++ b/lib/librte_ether/rte_ethdev.h
> > > > > @@ -354,7 +354,12 @@ struct rte_eth_rxmode {
> > > > >  		jumbo_frame    : 1, /**< Jumbo Frame Receipt enable. */
> > > > >  		hw_strip_crc   : 1, /**< Enable CRC stripping by hardware. */
> > > > >  		enable_scatter : 1, /**< Enable scatter packets rx handler */
> > > > > +#ifndef RTE_NEXT_ABI
> > > > >  		enable_lro     : 1; /**< Enable LRO */
> > > > > +#else
> > > > > +		enable_lro     : 1, /**< Enable LRO */
> > > > > +		lock_mode      : 1; /**< Using lock path */
> > > > > +#endif
> > > > >  };
> > > > >
> > > > >  /**
> > > > > @@ -634,11 +639,68 @@ struct rte_eth_txmode {
> > > > >  		/**< If set, reject sending out tagged pkts */
> > > > >  		hw_vlan_reject_untagged : 1,
> > > > >  		/**< If set, reject sending out untagged pkts */
> > > > > +#ifndef RTE_NEXT_ABI
> > > > >  		hw_vlan_insert_pvid : 1;
> > > > >  		/**< If set, enable port based VLAN insertion */
> > > > > +#else
> > > > > +		hw_vlan_insert_pvid : 1,
> > > > > +		/**< If set, enable port based VLAN insertion */
> > > > > +		lock_mode : 1;
> > > > > +		/**< If set, using lock path */
> > > > > +#endif
> > > > >  };
> > > > >
> > > > >  /**
> > > > > + * The macros for the RX/TX lock mode functions */
> > > > > +#ifdef RTE_NEXT_ABI
> > > > > +#define RX_LOCK_FUNCTION(dev, func) \
> > > > > +	(dev->data->dev_conf.rxmode.lock_mode ? \
> > > > > +	func ## _lock : func)
> > > > > +
> > > > > +#define TX_LOCK_FUNCTION(dev, func) \
> > > > > +	(dev->data->dev_conf.txmode.lock_mode ? \
> > > > > +	func ## _lock : func)
> > > > > +#else
> > > > > +#define RX_LOCK_FUNCTION(dev, func) func
> > > > > +
> > > > > +#define TX_LOCK_FUNCTION(dev, func) func
> > > > > +#endif
> > > > > +
> > > > > +/* Add the lock RX/TX function for VF reset */
> > > > > +#define GENERATE_RX_LOCK(func, nic) \
> > > > > +uint16_t func ## _lock(void *rx_queue, \
> > > > > +	struct rte_mbuf **rx_pkts, \
> > > > > +	uint16_t nb_pkts) \
> > > > > +{ \
> > > > > +	struct nic ## _rx_queue *rxq = rx_queue; \
> > > > > +	uint16_t nb_rx = 0; \
> > > > > +	\
> > > > > +	if (rte_spinlock_trylock(&rxq->rx_lock)) { \
> > > > > +		nb_rx = func(rx_queue, rx_pkts, nb_pkts); \
> > > > > +		rte_spinlock_unlock(&rxq->rx_lock); \
> > > > > +	} \
> > > > > +	\
> > > > > +	return nb_rx; \
> > > > > +}
> > > > > +
> > > > > +#define GENERATE_TX_LOCK(func, nic) \
> > > > > +uint16_t func ## _lock(void *tx_queue, \
> > > > > +	struct rte_mbuf **tx_pkts, \
> > > > > +	uint16_t nb_pkts) \
> > > > > +{ \
> > > > > +	struct nic ## _tx_queue *txq = tx_queue; \
> > > > > +	uint16_t nb_tx = 0; \
> > > > > +	\
> > > > > +	if (rte_spinlock_trylock(&txq->tx_lock)) { \
> > > > > +		nb_tx = func(tx_queue, tx_pkts, nb_pkts); \
> > > > > +		rte_spinlock_unlock(&txq->tx_lock); \
> > > > > +	} \
> > > > > +	\
> > > > > +	return nb_tx; \
> > > > > +}
> > > >
> > > > 1. As I said in off-line discussion, I think this locking could (and
> > > > I think better be) implemented completely on rte_ethdev layer.
> > > > So actual PMD code will be unaffected.
> > > > Again that avoids us to introduce _lock version of every RX/TX
> > > > function in each PMD.
> > > One purpose of implementing the lock in PMD layer is to avoid ABI
> > > change. But we introduce the field lock_mode in struct
> > > rte_eth_rx/txmode. So seems it's not a good reason now :) The other
> > > purpose is we want to add a lock for every queue. But in rte layer the
> > > queue is void *, so we add the lock in the specific structures of the NICs. But as
> > > you mentioned below, we can add the lock as dev->data->rx_queue_state in the
> > > struct rte_eth_dev_data.
> > > So, I prefer to add the lock in rte layer now.
> >
> > OK.
> >
> > > > 2. Again, as discussed offline, I think it is better to have an
> > > > explicit rte_eth_(rx|tx)_burst_lock(sync?) API, instead of add new fields
> > > > into RX/TX config structures.
> > > > Would help to avoid any confusion, I think.
> > > We want the users to choose the rx/tx path without lock if they're
> > > sensitive to the performance and can handle the reset event in their APP. After
> > > introducing new fields of config struct, users can change the config to choose
> > > the different path.
> >
> > I understand what you are doing.
> >
> > > If we introduce new API, it may be harder for the user to use it. I
> > > mean when users want to use lock mode, they may need to replace all the
> > > rte_eth_rx/tx_burst by rte_eth_rx/tx_burst_lock.
> >
> > Yes, my opinion if users would like to use locking API they need to call it
> > explicitly.
> >
> > > So if we add the lock in rte layer, I still prefer adding lock_mode in
> > > the configuration, and the rte_eth_rx/tx_burst is changed like this,
> > > rte_eth_rx/tx_burst {
> > > +	if lock_mode
> > > +		try_lock
> > > 	......
> > > +	if lock_mode
> > > +		release_lock
> > > }
> >
> > My preference is to keep existing rx/tx_burst() functions unaffected by that
> > patch.
> > At least for now.
> > I suppose that will minimise the risks and help users to avoid confusion what API
> > (locking/non-locking) is in use.
> OK. Let me add new APIs.
>
> > > > 3. I thought the plan was to introduce a locking in all appropriate
> > > > control path functions (dev_start/dev_stop etc.) Without that
> > > > locking version of RX/TX seems a bit useless.
> > > > Yes, I understand that you do use locking inside dev_reset, but I
> > > > suppose the plan was to have a generic solution, no?
> > > > Again, interrupt fire when user invokes dev_start/stop or so, so we
> > > > still need some synchronisation between them.
> > > >
> > > > To be more specific, I thought about something like that:
> > > >
> > > > static inline uint16_t
> > > > rte_eth_rx_burst_lock(uint8_t port_id, uint16_t queue_id,
> > > > 	struct rte_mbuf **rx_pkts, const uint16_t nb_pkts)
> > > > {
> > > > 	struct rte_eth_dev *dev = &rte_eth_devices[port_id];
> > > > 	uint16_t nb_rx;
> > > >
> > > > #ifdef RTE_LIBRTE_ETHDEV_DEBUG
> > > > 	RTE_ETH_VALID_PORTID_OR_ERR_RET(port_id, 0);
> > > > 	RTE_FUNC_PTR_OR_ERR_RET(*dev->rx_pkt_burst, 0);
> > > >
> > > > 	if (queue_id >= dev->data->nb_rx_queues) {
> > > > 		RTE_PMD_DEBUG_TRACE("Invalid RX queue_id=%d\n", queue_id);
> > > > 		return 0;
> > > > 	}
> > > > #endif
> > > >
> > > > +	if (rte_spinlock_trylock(&dev->data->rx_queue_state[queue_id].lock) == 0)
> > > > +		return 0;
> > > > +	else if (dev->data->rx_queue_state[queue_id].state ==
> > > > +			RTE_ETH_QUEUE_STATE_STOPPED) {
> > > > +		rte_spinlock_unlock(&dev->data->rx_queue_state[queue_id].lock);
> > > > +		return 0;
> > > > +	}
> > > >
> > > > 	nb_rx = (*dev->rx_pkt_burst)(dev->data->rx_queues[queue_id],
> > > > 			rx_pkts, nb_pkts);
> > > >
> > > > +	rte_spinlock_unlock(&dev->data->rx_queue_state[queue_id].lock);
> > > >
> > > > 	....
> > > >
> > > > 	return nb_rx;
> > > > }
> > > >
> > > > And inside queue_start:
> > > >
> > > > int
> > > > rte_eth_dev_rx_queue_start(uint8_t port_id, uint16_t rx_queue_id)
> > > > {
> > > > 	struct rte_eth_dev *dev;
> > > >
> > > > 	RTE_ETH_VALID_PORTID_OR_ERR_RET(port_id, -EINVAL);
> > > >
> > > > 	dev = &rte_eth_devices[port_id];
> > > > 	if (rx_queue_id >= dev->data->nb_rx_queues) {
> > > > 		RTE_PMD_DEBUG_TRACE("Invalid RX queue_id=%d\n", rx_queue_id);
> > > > 		return -EINVAL;
> > > > 	}
> > > >
> > > > 	RTE_FUNC_PTR_OR_ERR_RET(*dev->dev_ops->rx_queue_start, -ENOTSUP);
> > > >
> > > > 	rte_spinlock_lock(&dev->data->rx_queue_state[rx_queue_id].lock);
> > > I think you add the lock here to stop the rx/tx.
> > > But to my opinion, we should lock the rx/tx much earlier before
> > > starting the queue. For example, when stop the port, the resource of the
> > > queues may be released.
> >
> > I didn't get you here...
> > Before releasing the queue resources, queue_stop() has to be executed, right?
> Sorry, I saw your example with rte_eth_dev_rx_queue_start, I didn't know you also want to change rte_eth_dev_rx_queue_stop too.
> Agree this should work if we call queue_start/stop when reset the port. But we will not call them. I find the queue_stop/start are per-queue functions and not supported by all NICs.

But right now you do reset only for ixgbe/i40e.
For these devices we definitely do support queue start/stop.
And again, it is not only about reset op.
If we want to add rx locked (synced), I think it should be in sync with all
control API that changes queue state.
As I said before it is a lot of work and a lot of hassle...
So probably the easiest (and might be safest) way is just to leave things as they
are right now:
we allow user to setup a callback on VF reset, and it is user responsibility
to make sure no RX/TX is active while reset operation is performed.
Pretty much what Olivier and Stephen suggested, as I understand.
Konstantin

> Our solution now is stop the whole port and restart the whole port. We will not stop/restart queue by queue.
>
> > > The rx/tx cannot be executed. So I prefer to get the lock before stopping the
> > > ports.
> >
> > Might be I wasn't clear enough here.
> > What I think we need to have:
> > - To stop/start/rx/tx the queue (or do any other action that might change the
> > queue internal structure) you have to grab the lock.
> > After queue is stopped its state has to be changed to
> > QUEUE_STATE_STOPPED (with queue lock grabbed),
> > so rx/tx_locked wouldn't proceed with that queue.
> > - dev_stop() - has to stop all its queues first, i.e. it needs to call queue_stop()
> > for all of them.
> > So after dev_stop() had finished - all device queues have to be in
> > QUEUE_STATE_STOPPED
> > Same about dev_start() - after it does all other things - it will call queue_start()
> > for all its queues.
> > That will bring them into QUEUE_STARTED.
> > After that rx/tx_locked can use them again.
> >
> > > Maybe better to keep the spinlock in the dev_reset.
> >
> > Might be not :)
> >
> > > >
> > > > 	if (dev->data->rx_queue_state[rx_queue_id].state !=
> > > > 			RTE_ETH_QUEUE_STATE_STOPPED) {
> > > > 		RTE_PMD_DEBUG_TRACE("Queue %" PRIu16 " of device with port_id=%" PRIu8
> > > > 			" already started\n", rx_queue_id, port_id);
> > > > 		ret = -EINVAL;
> > > > 	} else
> > > > 		ret = dev->dev_ops->rx_queue_start(dev, rx_queue_id);
> > > >
> > > > 	rte_spinlock_unlock(&dev->data->rx_queue_state[rx_queue_id].lock);
> > > >
> > > > 	return ret;
> > > > }
> > > >
> > > > Then again, we don't need to do explicit locking inside dev_reset().
> > > > Does it make sense to you guys?
> > > Please see the answer above.
> > > >
> > > > > +
> > > > > +/**
> > > > >  * A structure used to configure an RX ring of an Ethernet port.
> > > > >  */
> > > > >  struct rte_eth_rxconf {
> > > > > --
> > > > > 2.1.4
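[Editor's note: the trylock-guarded receive path debated in this thread can be modeled outside DPDK. The sketch below is illustrative only: `model_rxq`, `model_rx_burst`, and the state/lock layout are stand-ins invented for the example, with a C11 `atomic_flag` taking the place of `rte_spinlock_t`. It shows the behavioural contract under discussion: a receiver never blocks during a reset; it returns 0 packets when the per-queue lock is held or the queue is stopped.]

```c
#include <stdatomic.h>
#include <stdint.h>

/* Hypothetical stand-ins for the DPDK structures discussed above: a queue
 * carrying a trylock and a started/stopped state, as in the proposed
 * rx_queue_state[] layout. */
enum queue_state { QUEUE_STATE_STOPPED = 0, QUEUE_STATE_STARTED = 1 };

struct model_rxq {
	atomic_flag lock;       /* stand-in for rte_spinlock_t */
	enum queue_state state; /* stand-in for RTE_ETH_QUEUE_STATE_* */
	uint16_t pending;       /* packets available to "receive" */
};

/* Underlying burst function: drains up to nb_pkts from the queue. */
static uint16_t
model_rx_burst(struct model_rxq *q, uint16_t nb_pkts)
{
	uint16_t n = q->pending < nb_pkts ? q->pending : nb_pkts;

	q->pending -= n;
	return n;
}

/* Trylock-guarded wrapper, mirroring the rte_eth_rx_burst_lock() idea:
 * if the control path (reset/stop) holds the lock, or the queue is
 * stopped, give up immediately and report zero packets. */
static uint16_t
model_rx_burst_lock(struct model_rxq *q, uint16_t nb_pkts)
{
	uint16_t n = 0;

	if (atomic_flag_test_and_set(&q->lock)) /* lock already held */
		return 0;
	if (q->state == QUEUE_STATE_STARTED)
		n = model_rx_burst(q, nb_pkts);
	atomic_flag_clear(&q->lock);             /* release the "spinlock" */
	return n;
}
```

A control path modeled on the thread's protocol would take the same lock, flip `state` to `QUEUE_STATE_STOPPED`, release the lock, and only then touch queue resources; the data path above can never observe a half-reset queue.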