From: "Eads, Gage" <gage.eads@intel.com>
To: Jerin Jacob <jerin.jacob@caviumnetworks.com>, "Van Haaren, Harry"
 <harry.van.haaren@intel.com>
CC: "dev@dpdk.org" <dev@dpdk.org>, "Richardson, Bruce"
 <bruce.richardson@intel.com>
Date: Wed, 10 May 2017 16:40:52 +0000
Message-ID: <9184057F7FC11744A2107296B6B8EB1E01EA98FF@FMSMSX108.amr.corp.intel.com>
References: <1492768299-84016-1-git-send-email-harry.van.haaren@intel.com>
 <1492768299-84016-2-git-send-email-harry.van.haaren@intel.com>
 <20170510141202.GA8431@jerin>
In-Reply-To: <20170510141202.GA8431@jerin>
Subject: Re: [dpdk-dev] [PATCH 1/3] examples/eventdev_pipeline: added sample app



>  -----Original Message-----
>  From: Jerin Jacob [mailto:jerin.jacob@caviumnetworks.com]
>  Sent: Wednesday, May 10, 2017 9:12 AM
>  To: Van Haaren, Harry <harry.van.haaren@intel.com>
>  Cc: dev@dpdk.org; Eads, Gage <gage.eads@intel.com>; Richardson, Bruce
>  <bruce.richardson@intel.com>
>  Subject: Re: [PATCH 1/3] examples/eventdev_pipeline: added sample app
>
>  -----Original Message-----
>  > Date: Fri, 21 Apr 2017 10:51:37 +0100
>  > From: Harry van Haaren <harry.van.haaren@intel.com>
>  > To: dev@dpdk.org
>  > CC: jerin.jacob@caviumnetworks.com, Harry van Haaren
>  > <harry.van.haaren@intel.com>, Gage Eads <gage.eads@intel.com>, Bruce
>  > Richardson <bruce.richardson@intel.com>
>  > Subject: [PATCH 1/3] examples/eventdev_pipeline: added sample app
>  > X-Mailer: git-send-email 2.7.4
>  >
>  > This commit adds a sample app for the eventdev library.
>  > The app has been tested with DPDK 17.05-rc2, hence this release (or
>  > later) is recommended.
>  >

<snip>

>
>  > +		ev[i].op = RTE_EVENT_OP_NEW;
>  > +		ev[i].sched_type = queue_type;
>
>  The value of RTE_EVENT_QUEUE_CFG_ORDERED_ONLY != RTE_SCHED_TYPE_ORDERED,
>  so we cannot assign queue_type to .sched_type directly.
>
>  I think one option to avoid the translation in the application would be to:
>  - Remove RTE_EVENT_QUEUE_CFG_ALL_TYPES and RTE_EVENT_QUEUE_CFG_*_ONLY
>  - Introduce a new RTE_EVENT_DEV_CAP_ flag to denote the
>    RTE_EVENT_QUEUE_CFG_ALL_TYPES capability
>  - Add sched_type to struct rte_event_queue_conf. If the capability flag is
>    not set, the implementation takes the sched_type value for the queue.
>
>  Any thoughts?

I'm not sure this change is needed. We could create a sched_type[] array,
indexed by queue ID, for assigning the event's sched type.

With the proposed approach, the sched_type assignment would still be needed
for "all types"-capable PMDs, so I'm not sure it buys us anything in the
application.

>
>  > +		ev[i].queue_id = qid;
>  > +		ev[i].event_type = RTE_EVENT_TYPE_CPU;
>
>  IMO, RTE_EVENT_TYPE_ETHDEV is the better option here, as this code is
>  producing the Ethernet packets/events.
>
>  > +		ev[i].sub_event_type = 0;
>  > +		ev[i].priority = RTE_EVENT_DEV_PRIORITY_NORMAL;
>  > +		ev[i].mbuf = mbufs[i];
>  > +		RTE_SET_USED(prio_idx);
>  > +	}
>  > +
>  > +	const int nb_tx = rte_event_enqueue_burst(dev_id, port_id, ev,
>  > +nb_rx);
>
>  For the producer pattern, i.e. a burst of RTE_EVENT_OP_NEW, OcteonTX can
>  do a burst operation, unlike the FORWARD case (which is one event at a
>  time). Earlier I thought I could abstract the producer pattern in the
>  PMD, but it looks like we are going with an application-driven producer
>  model based on the latest RFC. So I think we can add one flag to
>  rte_event_enqueue_burst as a hint that all the events are of type
>  RTE_EVENT_OP_NEW. The SW driver can ignore this.
>
>  I can send a patch for the same.
>
>  Any thoughts?
>

Makes sense, though I'm a little hesitant about putting this sort of
PMD-specific hint in the enqueue API. Perhaps we can use the impl_opaque
field, or have the PMD inspect the event_type (if TYPE_ETHDEV, assume all
packets in the burst are NEWs)?

>
>  > +	if (nb_tx != nb_rx) {
>  > +		for (i = nb_tx; i < nb_rx; i++)
>  > +			rte_pktmbuf_free(mbufs[i]);
>  > +	}
>  > +	enqueue_cnt[0] += nb_tx;
>  > +
>  > +	if (unlikely(prod_stop))
>
>  I think no one is updating prod_stop.
>
>  > +		done =3D 1;
>  > +
>  > +	return 0;
>  > +}
>  > +
>  > +static inline void
>  > +schedule_devices(uint8_t dev_id, unsigned lcore_id) {
>  > +	if (rx_core[lcore_id] && (rx_single ||
>  > +	    rte_atomic32_cmpset(&rx_lock, 0, 1))) {
>
>  This pattern (rte_atomic32_cmpset) means the application can inject only
>  "one core" worth of packets, which is not enough for low-end cores. Maybe
>  we need multiple producer options. I think the new RFC is addressing it.
>

Right, in the "wimpy" core case one can partition the Rx queues across
multiple adapters and assign the adapters to different service cores. The
proposal doesn't explicitly state this, but rte_eth_rx_event_adapter_run()
is not intended to be MT-safe -- so the service core implementation would
need something along the lines of the cmpset if a service is affinitized to
multiple service cores.

Thanks,
Gage