From mboxrd@z Thu Jan  1 00:00:00 1970
Return-Path: <wenzhuo.lu@intel.com>
Received: from mga01.intel.com (mga01.intel.com [192.55.52.88])
 by dpdk.org (Postfix) with ESMTP id 4C9B12E41
 for <dev@dpdk.org>; Sun, 12 Jun 2016 04:00:44 +0200 (CEST)
Received: from orsmga002.jf.intel.com ([10.7.209.21])
 by fmsmga101.fm.intel.com with ESMTP; 11 Jun 2016 19:00:43 -0700
X-ExtLoop1: 1
X-IronPort-AV: E=Sophos;i="5.26,459,1459839600"; d="scan'208";a="995836438"
Received: from fmsmsx108.amr.corp.intel.com ([10.18.124.206])
 by orsmga002.jf.intel.com with ESMTP; 11 Jun 2016 19:00:43 -0700
Received: from fmsmsx119.amr.corp.intel.com (10.18.124.207) by
 FMSMSX108.amr.corp.intel.com (10.18.124.206) with Microsoft SMTP Server (TLS)
 id 14.3.248.2; Sat, 11 Jun 2016 19:00:42 -0700
Received: from shsmsx152.ccr.corp.intel.com (10.239.6.52) by
 FMSMSX119.amr.corp.intel.com (10.18.124.207) with Microsoft SMTP Server (TLS)
 id 14.3.248.2; Sat, 11 Jun 2016 19:00:42 -0700
Received: from shsmsx102.ccr.corp.intel.com ([169.254.2.147]) by
 SHSMSX152.ccr.corp.intel.com ([169.254.6.107]) with mapi id 14.03.0248.002;
 Sun, 12 Jun 2016 10:00:40 +0800
From: "Lu, Wenzhuo" <wenzhuo.lu@intel.com>
To: "Ananyev, Konstantin" <konstantin.ananyev@intel.com>, "Tao, Zhe"
 <zhe.tao@intel.com>, "dev@dpdk.org" <dev@dpdk.org>
CC: "Richardson, Bruce" <bruce.richardson@intel.com>, "Chen, Jing D"
 <jing.d.chen@intel.com>, "Liang, Cunming" <cunming.liang@intel.com>, "Wu,
 Jingjing" <jingjing.wu@intel.com>, "Zhang, Helin" <helin.zhang@intel.com>
Thread-Topic: [PATCH v4 2/8] lib/librte_ether: defind RX/TX lock mode
Thread-Index: AQHRwIlQvj4+w3LRO06zb58e27oReZ/dPyEAgAGGJDCAAAFWgIAGTU9Q
Date: Sun, 12 Jun 2016 02:00:39 +0000
Message-ID: <6A0DE07E22DDAD4C9103DF62FEBC09090348499B@shsmsx102.ccr.corp.intel.com>
References: <1465278858-5131-1-git-send-email-zhe.tao@intel.com>
 <1465282390-6025-1-git-send-email-zhe.tao@intel.com>
 <1465282390-6025-3-git-send-email-zhe.tao@intel.com>
 <2601191342CEEE43887BDE71AB97725836B6C44E@irsmsx105.ger.corp.intel.com>
 <6A0DE07E22DDAD4C9103DF62FEBC090903483A2C@shsmsx102.ccr.corp.intel.com>
 <2601191342CEEE43887BDE71AB97725836B6CCE1@irsmsx105.ger.corp.intel.com>
In-Reply-To: <2601191342CEEE43887BDE71AB97725836B6CCE1@irsmsx105.ger.corp.intel.com>
Accept-Language: en-US
Content-Language: en-US
X-MS-Has-Attach: 
X-MS-TNEF-Correlator: 
x-originating-ip: [10.239.127.40]
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: quoted-printable
MIME-Version: 1.0
Subject: Re: [dpdk-dev] [PATCH v4 2/8] lib/librte_ether: defind RX/TX lock
	mode
X-BeenThere: dev@dpdk.org
X-Mailman-Version: 2.1.15
Precedence: list
List-Id: patches and discussions about DPDK <dev.dpdk.org>
List-Unsubscribe: <http://dpdk.org/ml/options/dev>,
 <mailto:dev-request@dpdk.org?subject=unsubscribe>
List-Archive: <http://dpdk.org/ml/archives/dev/>
List-Post: <mailto:dev@dpdk.org>
List-Help: <mailto:dev-request@dpdk.org?subject=help>
List-Subscribe: <http://dpdk.org/ml/listinfo/dev>,
 <mailto:dev-request@dpdk.org?subject=subscribe>
X-List-Received-Date: Sun, 12 Jun 2016 02:00:45 -0000

Hi Konstantin,


> -----Original Message-----
> From: Ananyev, Konstantin
> Sent: Wednesday, June 8, 2016 5:20 PM
> To: Lu, Wenzhuo; Tao, Zhe; dev@dpdk.org
> Cc: Richardson, Bruce; Chen, Jing D; Liang, Cunming; Wu, Jingjing; Zhang, Helin
> Subject: RE: [PATCH v4 2/8] lib/librte_ether: defind RX/TX lock mode
>
>
>
> >
> > Hi Konstantin,
> >
> >
> > > -----Original Message-----
> > > From: Ananyev, Konstantin
> > > Sent: Tuesday, June 7, 2016 5:59 PM
> > > To: Tao, Zhe; dev@dpdk.org
> > > Cc: Lu, Wenzhuo; Richardson, Bruce; Chen, Jing D; Liang, Cunming;
> > > Wu, Jingjing; Zhang, Helin
> > > Subject: RE: [PATCH v4 2/8] lib/librte_ether: defind RX/TX lock mode
> > >
> > >
> > > Hi Zhe & Wenzhuo,
> > >
> > > Please find my comments below.
> > > BTW, for clarification - is that patch for 16.11?
> > > I believe it's too late to introduce such a significant change in 16.07.
> > > Thanks
> > > Konstantin
> > Thanks for the comments.
> > Honestly, our target is 16.07. Realizing the big impact, we wrapped our
> > change in NEXT_ABI. So although we want to merge it in 16.07, I think the
> > change will only become effective after we remove NEXT_ABI in 16.11.
>
> I don't think it is achievable.
> First, I think your code is not in proper shape yet.
> Second, as you said, it is a significant change and I would like to hear
> opinions from the rest of the community.
Agreed, there is risk. I mean our target is 16.07, but whether that can be
achieved depends on the feedback from the community.

>
> >
> > >
> > > > Define lock mode for RX/TX queue. Because when resetting the
> > > > device we want the resetting thread to get the lock of the RX/TX
> > > > queue to make sure the RX/TX is stopped.
> > > >
> > > > Using next ABI macro for this ABI change as it has too much
> > > > impact. 7 APIs and 1 global variable are impacted.
> > > >
> > > > Signed-off-by: Wenzhuo Lu <wenzhuo.lu@intel.com>
> > > > Signed-off-by: Zhe Tao <zhe.tao@intel.com>
> > > > ---
> > > >  lib/librte_ether/rte_ethdev.h | 62
> > > > +++++++++++++++++++++++++++++++++++++++++++
> > > >  1 file changed, 62 insertions(+)
> > > >
> > > > diff --git a/lib/librte_ether/rte_ethdev.h
> > > > b/lib/librte_ether/rte_ethdev.h
> > > > index 74e895f..4efb5e9 100644
> > > > --- a/lib/librte_ether/rte_ethdev.h
> > > > +++ b/lib/librte_ether/rte_ethdev.h
> > > > @@ -354,7 +354,12 @@ struct rte_eth_rxmode {
> > > >  		jumbo_frame      : 1, /**< Jumbo Frame Receipt enable. */
> > > >  		hw_strip_crc     : 1, /**< Enable CRC stripping by hardware. */
> > > >  		enable_scatter   : 1, /**< Enable scatter packets rx handler */
> > > > +#ifndef RTE_NEXT_ABI
> > > >  		enable_lro       : 1; /**< Enable LRO */
> > > > +#else
> > > > +		enable_lro       : 1, /**< Enable LRO */
> > > > +		lock_mode        : 1; /**< Using lock path */
> > > > +#endif
> > > >  };
> > > >
> > > >  /**
> > > > @@ -634,11 +639,68 @@ struct rte_eth_txmode {
> > > >  		/**< If set, reject sending out tagged pkts */
> > > >  		hw_vlan_reject_untagged : 1,
> > > >  		/**< If set, reject sending out untagged pkts */
> > > > +#ifndef RTE_NEXT_ABI
> > > >  		hw_vlan_insert_pvid : 1;
> > > >  		/**< If set, enable port based VLAN insertion */
> > > > +#else
> > > > +		hw_vlan_insert_pvid : 1,
> > > > +		/**< If set, enable port based VLAN insertion */
> > > > +		lock_mode : 1;
> > > > +		/**< If set, using lock path */
> > > > +#endif
> > > >  };
> > > >
> > > >  /**
> > > > + * The macros for the RX/TX lock mode functions
> > > > + */
> > > > +#ifdef RTE_NEXT_ABI
> > > > +#define RX_LOCK_FUNCTION(dev, func) \
> > > > +	(dev->data->dev_conf.rxmode.lock_mode ? \
> > > > +	func ## _lock : func)
> > > > +
> > > > +#define TX_LOCK_FUNCTION(dev, func) \
> > > > +	(dev->data->dev_conf.txmode.lock_mode ? \
> > > > +	func ## _lock : func)
> > > > +#else
> > > > +#define RX_LOCK_FUNCTION(dev, func) func
> > > > +
> > > > +#define TX_LOCK_FUNCTION(dev, func) func
> > > > +#endif
> > > > +
> > > > +/* Add the lock RX/TX function for VF reset */
> > > > +#define GENERATE_RX_LOCK(func, nic) \
> > > > +uint16_t func ## _lock(void *rx_queue, \
> > > > +		      struct rte_mbuf **rx_pkts, \
> > > > +		      uint16_t nb_pkts) \
> > > > +{					\
> > > > +	struct nic ## _rx_queue *rxq = rx_queue; \
> > > > +	uint16_t nb_rx = 0; \
> > > > +						\
> > > > +	if (rte_spinlock_trylock(&rxq->rx_lock)) { \
> > > > +		nb_rx = func(rx_queue, rx_pkts, nb_pkts); \
> > > > +		rte_spinlock_unlock(&rxq->rx_lock); \
> > > > +	} \
> > > > +	\
> > > > +	return nb_rx; \
> > > > +}
> > > > +
> > > > +#define GENERATE_TX_LOCK(func, nic) \
> > > > +uint16_t func ## _lock(void *tx_queue, \
> > > > +		      struct rte_mbuf **tx_pkts, \
> > > > +		      uint16_t nb_pkts) \
> > > > +{					\
> > > > +	struct nic ## _tx_queue *txq = tx_queue; \
> > > > +	uint16_t nb_tx = 0; \
> > > > +						\
> > > > +	if (rte_spinlock_trylock(&txq->tx_lock)) { \
> > > > +		nb_tx = func(tx_queue, tx_pkts, nb_pkts); \
> > > > +		rte_spinlock_unlock(&txq->tx_lock); \
> > > > +	} \
> > > > +	\
> > > > +	return nb_tx; \
> > > > +}
> > >
> > > 1. As I said in the off-line discussion, I think this locking could
> > > (and I think had better be) implemented completely at the rte_ethdev
> > > layer, so the actual PMD code will be unaffected.
> > > Again, that avoids introducing a _lock version of every RX/TX
> > > function in each PMD.
> > One purpose of implementing the lock in the PMD layer was to avoid an
> > ABI change. But since we introduce the field lock_mode in struct
> > rte_eth_rx/txmode, that is not a good reason any more :) The other
> > purpose is that we want a lock for every queue, but at the rte layer the
> > queue is a void *, so we added the lock in the NIC-specific structures.
> > As you mentioned below, though, we can add the lock next to
> > dev->data->rx_queue_state in struct rte_eth_dev_data.
> > So, I prefer to add the lock in the rte layer now.
>
> OK.
>
> >
> > >
> > > 2. Again, as discussed offline, I think it is better to have an
> > > explicit rte_eth_(rx|tx)_burst_lock(sync?) API, instead of adding
> > > new fields into the RX/TX config structures.
> > > That would help to avoid any confusion, I think.
> > We want users to be able to choose the rx/tx path without a lock if
> > they're performance-sensitive and can handle the reset event in their
> > APP. After introducing the new config fields, users can change the
> > config to choose a different path.
>
> I understand what you are doing.
>
> > If we introduce a new API, it may be harder for the user to use it. I
> > mean when users want to use lock mode, they may need to replace all the
> > rte_eth_rx/tx_burst calls by rte_eth_rx/tx_burst_lock.
>
> Yes, my opinion is that if users would like to use the locking API, they
> need to call it explicitly.
>
>
> > So if we add the lock in the rte layer, I still prefer adding lock_mode
> > to the configuration, and rte_eth_rx/tx_burst is changed like this:
> > rte_eth_rx/tx_burst {
> > + if lock_mode
> > +     try_lock
> > ......
> > + if lock_mode
> > +     release_lock
> > }
>
> My preference is to keep the existing rx/tx_burst() functions unaffected
> by that patch, at least for now.
> I suppose that will minimise the risks and help users avoid confusion
> about which API (locking/non-locking) is in use.
OK. Let me add new APIs.
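For illustration, here is a minimal, self-contained sketch of what the fast
path of such an explicit locking API could look like. The names (queue_sync,
rx_burst_lock) and the C11 atomic_flag standing in for rte_spinlock_t are
hypothetical, not actual DPDK definitions:

```c
#include <stdatomic.h>
#include <stdint.h>
#include <stddef.h>

/* Illustrative per-queue synchronisation state; not a DPDK structure. */
enum { QUEUE_STATE_STOPPED = 0, QUEUE_STATE_STARTED = 1 };

struct queue_sync {
    atomic_flag lock;   /* stands in for the per-queue rte_spinlock_t */
    int state;          /* stands in for RTE_ETH_QUEUE_STATE_* */
};

/* Try-lock semantics: return 0 packets instead of blocking when the
 * control path (reset/stop) currently owns the queue, or when the
 * queue has been stopped under that lock. */
static uint16_t
rx_burst_lock(struct queue_sync *q,
              uint16_t (*rx_burst)(void *, uint16_t),
              void *rxq, uint16_t nb_pkts)
{
    uint16_t nb_rx = 0;

    /* test_and_set returning true means someone else holds the lock */
    if (atomic_flag_test_and_set(&q->lock))
        return 0;
    if (q->state == QUEUE_STATE_STARTED)
        nb_rx = rx_burst(rxq, nb_pkts);
    atomic_flag_clear(&q->lock);    /* unlock */
    return nb_rx;
}
```

The point of the try-lock is that the data-path thread never blocks: while
a reset is in progress it simply reports zero received packets, the same as
an empty queue.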

>
> >
> >
> > >
> > > 3. I thought the plan was to introduce locking in all appropriate
> > > control-path functions (dev_start/dev_stop etc.). Without that, the
> > > locking version of RX/TX seems a bit useless.
> > > Yes, I understand that you do use locking inside dev_reset, but I
> > > suppose the plan was to have a generic solution, no?
> > > Again, an interrupt can fire while the user invokes dev_start/stop or
> > > the like, so we still need some synchronisation between them.
> > >
> > > To be more specific, I thought about something like that:
> > >
> > > static inline uint16_t
> > > rte_eth_rx_burst_lock(uint8_t port_id, uint16_t queue_id,
> > >                  struct rte_mbuf **rx_pkts, const uint16_t nb_pkts) {
> > >         struct rte_eth_dev *dev = &rte_eth_devices[port_id];
> > >
> > > #ifdef RTE_LIBRTE_ETHDEV_DEBUG
> > >         RTE_ETH_VALID_PORTID_OR_ERR_RET(port_id, 0);
> > >         RTE_FUNC_PTR_OR_ERR_RET(*dev->rx_pkt_burst, 0);
> > >
> > >         if (queue_id >= dev->data->nb_rx_queues) {
> > >                 RTE_PMD_DEBUG_TRACE("Invalid RX queue_id=%d\n", queue_id);
> > >                 return 0;
> > >         }
> > > #endif
> > >
> > > + if (rte_spinlock_trylock(&dev->data->rx_queue_state[queue_id].lock) == 0)
> > > +	return 0;
> > > + else if (dev->data->rx_queue_state[queue_id].state ==
> > > RTE_ETH_QUEUE_STATE_STOPPED) {
> > > +	rte_spinlock_unlock(&dev->data->rx_queue_state[queue_id].lock);
> > > +	return 0;
> > > + }
> > >
> > >  nb_rx = (*dev->rx_pkt_burst)(dev->data->rx_queues[queue_id],
> > >                         rx_pkts, nb_pkts);
> > >
> > > + rte_spinlock_unlock(&dev->data->rx_queue_state[queue_id].lock);
> > >
> > > ....
> > >
> > > return nb_rx;
> > > }
> > >
> > > And inside queue_start:
> > >
> > > int
> > > rte_eth_dev_rx_queue_start(uint8_t port_id, uint16_t rx_queue_id) {
> > >         struct rte_eth_dev *dev;
> > >
> > >         RTE_ETH_VALID_PORTID_OR_ERR_RET(port_id, -EINVAL);
> > >
> > >         dev = &rte_eth_devices[port_id];
> > >         if (rx_queue_id >= dev->data->nb_rx_queues) {
> > >                 RTE_PMD_DEBUG_TRACE("Invalid RX queue_id=%d\n",
> > >                         rx_queue_id);
> > >                 return -EINVAL;
> > >         }
> > >
> > >         RTE_FUNC_PTR_OR_ERR_RET(*dev->dev_ops->rx_queue_start,
> > > -ENOTSUP);
> > >
> > >         rte_spinlock_lock(&dev->data->rx_queue_state[rx_queue_id].lock);
> > I think you add the lock here to stop the rx/tx.
> > But in my opinion, we should lock the rx/tx much earlier, before
> > starting the queue. For example, when stopping the port, the resources
> > of the queues may be released.
>
> I didn't get you here...
> Before releasing the queue resources, queue_stop() has to be executed,
> right?
Sorry, I saw your example with rte_eth_dev_rx_queue_start; I didn't know you
also want to change rte_eth_dev_rx_queue_stop too.
Agreed, this should work if we call queue_start/stop when resetting the
port. But we will not call them: queue_stop/start are per-queue functions
and are not supported by all NICs.
Our solution now is to stop the whole port and restart the whole port. We
will not stop/restart queue by queue.
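To make the whole-port variant concrete, a minimal sketch of the control-path
side: the reset thread takes every queue's lock (blocking), flips its state,
and releases, so a trylock-based rx/tx burst backs off for the duration. All
names here (port_sync, port_stop, ...) are hypothetical, not DPDK API, and a
C11 atomic_flag stands in for rte_spinlock_t:

```c
#include <stdatomic.h>

#define NB_QUEUES 4
enum { Q_STOPPED = 0, Q_STARTED = 1 };

struct port_sync {
    atomic_flag lock[NB_QUEUES];  /* per-queue lock, shared with rx/tx burst */
    int state[NB_QUEUES];
};

/* Blocking lock: spin until the data path's trylock holder releases it. */
static void q_lock(atomic_flag *l)   { while (atomic_flag_test_and_set(l)) ; }
static void q_unlock(atomic_flag *l) { atomic_flag_clear(l); }

/* Stop the whole port: once this returns, every queue has been marked
 * stopped under its own lock, so no rx/tx burst is still inside it. */
static void port_stop(struct port_sync *p)
{
    for (int i = 0; i < NB_QUEUES; i++) {
        q_lock(&p->lock[i]);
        p->state[i] = Q_STOPPED;
        q_unlock(&p->lock[i]);
    }
}

/* Restart the whole port after the reset has re-initialised the queues. */
static void port_start(struct port_sync *p)
{
    for (int i = 0; i < NB_QUEUES; i++) {
        q_lock(&p->lock[i]);
        p->state[i] = Q_STARTED;
        q_unlock(&p->lock[i]);
    }
}
```

This avoids depending on the per-queue rx_queue_start/stop ops that some
NICs do not implement, while still giving the same per-queue mutual
exclusion between the reset thread and the data path.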

>
> > The rx/tx cannot be executed. So I prefer to get the lock before
> > stopping the ports.
>
> Might be I wasn't clear enough here.
> What I think we need to have:
>  - To stop/start/rx/tx the queue (or do any other action that might
>    change the queue's internal structure) you have to grab the lock.
>    After the queue is stopped, its state has to be changed to
>    QUEUE_STATE_STOPPED (with the queue lock grabbed), so rx/tx_locked
>    wouldn't proceed with that queue.
>  - dev_stop() has to stop all its queues first, i.e. it needs to call
>    queue_stop() for all of them.
>    So after dev_stop() has finished, all device queues have to be in
>    QUEUE_STATE_STOPPED.
> Same about dev_start(): after it does all other things, it will call
> queue_start() for all its queues, which will bring them into
> QUEUE_STARTED. After that, rx/tx_locked can use them again.
>
> > Maybe it's better to keep the spinlock in dev_reset.
>
> Might be not :)
> >
> > >
> > >         if (dev->data->rx_queue_state[rx_queue_id].state !=
> > > RTE_ETH_QUEUE_STATE_STOPPED) {
> > >                 RTE_PMD_DEBUG_TRACE("Queue %" PRIu16 " of device with
> > > port_id=%" PRIu8
> > >                         " already started\n",
> > >                         rx_queue_id, port_id);
> > >                 ret = -EINVAL;
> > >         } else
> > >         	ret = dev->dev_ops->rx_queue_start(dev, rx_queue_id);
> > >
> > >
> > > rte_spinlock_unlock(&dev->data->rx_queue_state[rx_queue_id].lock);
> > >
> > >    return ret;
> > > }
> > >
> > > Then again, we don't need to do explicit locking inside dev_reset().
> > > Does it make sense to you guys?
> > Please see the answer above.
> >
> > >
> > >
> > > > +
> > > > +/**
> > > >   * A structure used to configure an RX ring of an Ethernet port.
> > > >   */
> > > >  struct rte_eth_rxconf {
> > > > --
> > > > 2.1.4
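For readers following the patch hunks above, a sketch of what the patch's
GENERATE_RX_LOCK(func, nic) macro expands to, written out by hand for a
hypothetical "foo" PMD. The queue layout and burst behaviour are faked for
the example, and a C11 atomic_flag emulates rte_spinlock_t:

```c
#include <stdatomic.h>
#include <stdint.h>
#include <stddef.h>

struct rte_mbuf;   /* opaque for this sketch */

/* Stand-ins for rte_spinlock_trylock()/rte_spinlock_unlock() */
static int  spin_trylock(atomic_flag *l) { return !atomic_flag_test_and_set(l); }
static void spin_unlock(atomic_flag *l)  { atomic_flag_clear(l); }

struct foo_rx_queue {
    atomic_flag rx_lock;   /* the per-queue lock the patch adds */
    uint16_t avail;        /* packets our fake hardware has ready */
};

/* The PMD's normal burst function (behaviour faked for the example). */
static uint16_t foo_recv_pkts(void *rx_queue, struct rte_mbuf **rx_pkts,
                              uint16_t nb_pkts)
{
    struct foo_rx_queue *rxq = rx_queue;
    (void)rx_pkts;
    uint16_t n = nb_pkts < rxq->avail ? nb_pkts : rxq->avail;
    rxq->avail -= n;
    return n;
}

/* GENERATE_RX_LOCK(foo_recv_pkts, foo) expands to roughly this: */
uint16_t foo_recv_pkts_lock(void *rx_queue, struct rte_mbuf **rx_pkts,
                            uint16_t nb_pkts)
{
    struct foo_rx_queue *rxq = rx_queue;
    uint16_t nb_rx = 0;

    if (spin_trylock(&rxq->rx_lock)) {
        nb_rx = foo_recv_pkts(rx_queue, rx_pkts, nb_pkts);
        spin_unlock(&rxq->rx_lock);
    }
    return nb_rx;   /* 0 while the reset thread holds the lock */
}
```

The RX_LOCK_FUNCTION(dev, func) macro then simply selects foo_recv_pkts_lock
or foo_recv_pkts as the burst callback, depending on the lock_mode bit in the
queue configuration.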