From: "Van Haaren, Harry" <harry.van.haaren@intel.com>
To: Jerin Jacob <jerin.jacob@caviumnetworks.com>
CC: "dev@dpdk.org" <dev@dpdk.org>, "Richardson, Bruce"
 <bruce.richardson@intel.com>, "Hunt, David" <david.hunt@intel.com>,
 "nipun.gupta@nxp.com" <nipun.gupta@nxp.com>, "hemant.agrawal@nxp.com"
 <hemant.agrawal@nxp.com>, "Eads, Gage" <gage.eads@intel.com>
Date: Wed, 8 Feb 2017 10:44:11 +0000
Message-ID: <E923DB57A917B54B9182A2E928D00FA6129EF179@IRSMSX102.ger.corp.intel.com>
References: <1484580885-148524-1-git-send-email-harry.van.haaren@intel.com>
 <1485879273-86228-1-git-send-email-harry.van.haaren@intel.com>
 <1485879273-86228-16-git-send-email-harry.van.haaren@intel.com>
 <20170208102306.GA19597@localhost.localdomain>
In-Reply-To: <20170208102306.GA19597@localhost.localdomain>
Subject: Re: [dpdk-dev] [PATCH v2 15/15] app/test: add unit tests for SW eventdev driver

> -----Original Message-----
> From: Jerin Jacob [mailto:jerin.jacob@caviumnetworks.com]
> Sent: Wednesday, February 8, 2017 10:23 AM
> To: Van Haaren, Harry <harry.van.haaren@intel.com>
> Cc: dev@dpdk.org; Richardson, Bruce <bruce.richardson@intel.com>; Hunt, David
> <david.hunt@intel.com>; nipun.gupta@nxp.com; hemant.agrawal@nxp.com; Eads, Gage
> <gage.eads@intel.com>
> Subject: Re: [PATCH v2 15/15] app/test: add unit tests for SW eventdev driver

<snip>
> Thanks for the SW driver-specific test cases. They gave me good insight
> into the expected application behavior from the SW driver's perspective, and
> in turn exposed some challenges for portable applications.
>
> I would like to highlight a main difference between the implementations and
> get a consensus on how to abstract it.

Thanks for taking the time to detail your thoughts - the examples certainly
help to get a better picture of the whole.


> Based on the existing header file, we can do event pipelining in two
> different ways:
> a) flow-based event pipelining
> b) queue_id-based event pipelining
>
> I will provide an example to showcase the application flow in both modes.
> Based on my understanding of the SW driver source code, it supports only
> queue_id-based event pipelining. I guess flow-based event pipelining will
> work semantically with the SW driver, but it will be very slow.
>
> I think the reason for the difference is the capability of the context
> definition:
> in the SW model, the context is queue_id;
> in the Cavium HW model, the context is queue_id + flow_id + sub_event_type +
> event_type.
>
> AFAIK, queue_id-based event pipelining will work with NXP HW, but I am not
> sure about the flow-based event pipelining model with NXP HW. Appreciate any
> input on this?
>
> In Cavium HW, we support both modes.
>
> As an open question: should we add a capability flag to advertise the
> supported models and let the application choose the model based on the
> implementation capability? The downside is that a small portion of the
> stage-advance code will be different, but we can reuse the stage-specific
> application code (I think it is a fair trade-off).
>
> Bruce, Harry, Gage, Hemant, Nipun
> Thoughts? Or any other proposal?


[HvH] Comments inline.

> I will take a non-trivial real-world NW use case to show the difference.
> A standard IPsec outbound processing pipeline will have a minimum of 4 to 5
> stages.
>
> stage_0:
> --------
> a) Takes the pkts from the ethdev and pushes them to the eventdev as
> RTE_EVENT_OP_NEW
> b) In some HW implementations this will be done by HW; in a SW implementation
> it is done by service cores
>
> stage_1 (ORDERED):
> ------------------
> a) Receives pkts from stage_0 in an ORDERED flow and processes them in
> parallel on N cores
> b) Finds the SA that the packet belongs to and moves to the next stage for
> SA-specific outbound operations. Outbound processing starts with updating
> the sequence number in a critical section, followed by packet encryption in
> parallel.
>
> stage_2 (ATOMIC, based on SA):
> ------------------------------
> a) Update the sequence number and move to the ORDERED sched_type for packet
> encryption in parallel
>
> stage_3 (ORDERED, based on SA):
> -------------------------------
> a) Encrypt the packets in parallel
> b) Do the output route lookup and figure out the tx port and queue to
> transmit the packet
> c) Move to an ATOMIC stage based on tx port and tx queue_id to transmit
> the packet _without_ losing the ingress ordering
>
> stage_4 (ATOMIC, based on tx port/tx queue):
> --------------------------------------------
> a) Enqueue the encrypted packet to the ethdev tx port/tx queue
>
>
> 1) queue_id-based event pipelining
> ==================================
>
> stage_1_work (assigned to event queue 1): N ports/N cores establish a
> link to queue 1 through rte_event_port_link()
>
> on_each_cores_linked_to_queue1(stage1)


[HvH] All worker cores can be linked to all stages - we do a lookup of which
stage the work belongs to based on the event->queue_id.


> while (1)
> {
>                 /* STAGE 1 processing */
>                 nr_events = rte_event_dequeue_burst(ev, ..);
>                 if (!nr_events)
>                         continue;
>
>                 sa = find_sa_from_packet(ev.mbuf);
>
>                 /* move to next stage (ATOMIC) */
>                 ev.event_type = RTE_EVENT_TYPE_CPU;
>                 ev.sub_event_type = 2;
>                 ev.sched_type = RTE_SCHED_TYPE_ATOMIC;
>                 ev.flow_id = sa;
>                 ev.op = RTE_EVENT_OP_FORWARD;
>                 ev.queue_id = 2;
>                 /* move to stage 2 (event queue 2) */
>                 rte_event_enqueue_burst(ev, ..);
> }
>
> on_each_cores_linked_to_queue2(stage2)
> while (1)
> {
>                 /* STAGE 2 processing */
>                 nr_events = rte_event_dequeue_burst(ev, ..);
>                 if (!nr_events)
>                         continue;
>
>                 /* seq number update in critical section */
>                 sa_specific_atomic_processing(sa /* ev.flow_id */);
>
>                 /* move to next stage (ORDERED) */
>                 ev.event_type = RTE_EVENT_TYPE_CPU;
>                 ev.sub_event_type = 3;
>                 ev.sched_type = RTE_SCHED_TYPE_ORDERED;
>                 ev.flow_id = sa;
>                 ev.op = RTE_EVENT_OP_FORWARD;
>                 ev.queue_id = 3;
>                 /* move to stage 3 (event queue 3) */
>                 rte_event_enqueue_burst(ev, ..);
> }
>
> on_each_cores_linked_to_queue3(stage3)
> while (1)
> {
>                 /* STAGE 3 processing */
>                 nr_events = rte_event_dequeue_burst(ev, ..);
>                 if (!nr_events)
>                         continue;
>
>                 /* packet encryption in parallel */
>                 sa_specific_ordered_processing(sa /* ev.flow_id */);
>
>                 /* move to next stage (ATOMIC) */
>                 ev.event_type = RTE_EVENT_TYPE_CPU;
>                 ev.sub_event_type = 4;
>                 ev.sched_type = RTE_SCHED_TYPE_ATOMIC;
>                 output_tx_port_queue = find_output_tx_queue_and_tx_port(ev.mbuf);
>                 ev.flow_id = output_tx_port_queue;
>                 ev.op = RTE_EVENT_OP_FORWARD;
>                 ev.queue_id = 4;
>                 /* move to stage 4 (event queue 4) */
>                 rte_event_enqueue_burst(ev, ..);
> }
>
> on_each_cores_linked_to_queue4(stage4)
> while (1)
> {
>                 /* STAGE 4 processing */
>                 nr_events = rte_event_dequeue_burst(ev, ..);
>                 if (!nr_events)
>                         continue;
>
>                 rte_eth_tx_buffer();
> }
>
> 2) flow-based event pipelining
> ==============================
>
> - No need to partition queues for different stages
> - All the cores can operate on all the stages, which enables
> automatic multicore scaling and true dynamic load balancing


[HvH] The SW case is the same - all cores can map to all stages; the lookup
for the stage of the work is the queue_id.


> - A fairly large number of SAs (on the order of 2^16 to 2^20) can be
> processed in parallel - something the existing IPsec application has
> constraints on:
> http://dpdk.org/doc/guides-16.04/sample_app_ug/ipsec_secgw.html
>
> on_each_worker_cores()
> while (1)
> {
> 	nr_events = rte_event_dequeue_burst(ev, ..);
> 	if (!nr_events)
> 		continue;
>
> 	/* STAGE 1 processing */
> 	if (ev.event_type == RTE_EVENT_TYPE_ETHDEV) {
> 		sa = find_it_from_packet(ev.mbuf);
> 		/* move to next stage 2 (ATOMIC) */
> 		ev.event_type = RTE_EVENT_TYPE_CPU;
> 		ev.sub_event_type = 2;
> 		ev.sched_type = RTE_SCHED_TYPE_ATOMIC;
> 		ev.flow_id = sa;
> 		ev.op = RTE_EVENT_OP_FORWARD;
> 		rte_event_enqueue_burst(ev, ..);
>
> 	} else if (ev.event_type == RTE_EVENT_TYPE_CPU && ev.sub_event_type == 2) { /* stage 2 */


[HvH] In the case of the software eventdev, ev.queue_id is used instead of
ev.sub_event_type - but this is the same lookup operation as mentioned above.
I don't see a fundamental difference between these approaches?

>
> 		/* seq number update in critical section */
> 		sa_specific_atomic_processing(sa /* ev.flow_id */);
> 		/* move to next stage (ORDERED) */
> 		ev.event_type = RTE_EVENT_TYPE_CPU;
> 		ev.sub_event_type = 3;
> 		ev.sched_type = RTE_SCHED_TYPE_ORDERED;
> 		ev.flow_id = sa;
> 		ev.op = RTE_EVENT_OP_FORWARD;
> 		rte_event_enqueue_burst(ev, ..);
>
> 	} else if (ev.event_type == RTE_EVENT_TYPE_CPU && ev.sub_event_type == 3) { /* stage 3 */
>
> 		/* like encrypting packets in parallel */
> 		sa_specific_ordered_processing(sa /* ev.flow_id */);
> 		/* move to next stage (ATOMIC) */
> 		ev.event_type = RTE_EVENT_TYPE_CPU;
> 		ev.sub_event_type = 4;
> 		ev.sched_type = RTE_SCHED_TYPE_ATOMIC;
> 		output_tx_port_queue = find_output_tx_queue_and_tx_port(ev.mbuf);
> 		ev.flow_id = output_tx_port_queue;
> 		ev.op = RTE_EVENT_OP_FORWARD;
> 		rte_event_enqueue_burst(ev, ..);
>
> 	} else if (ev.event_type == RTE_EVENT_TYPE_CPU && ev.sub_event_type == 4) { /* stage 4 */
> 		rte_eth_tx_buffer();
> 	}
> }
>
> /Jerin
> Cavium