From mboxrd@z Thu Jan 1 00:00:00 1970
From: Nipun Gupta <nipun.gupta@nxp.com>
To: Jerin Jacob <jerin.jacob@caviumnetworks.com>, Harry van Haaren <harry.van.haaren@intel.com>
Cc: dev@dpdk.org, Bruce Richardson, David Hunt, Hemant Agrawal, gage.eads@intel.com
Date: Wed, 8 Feb 2017 18:02:26 +0000
References: <1484580885-148524-1-git-send-email-harry.van.haaren@intel.com> <1485879273-86228-1-git-send-email-harry.van.haaren@intel.com> <1485879273-86228-16-git-send-email-harry.van.haaren@intel.com> <20170208102306.GA19597@localhost.localdomain>
In-Reply-To: <20170208102306.GA19597@localhost.localdomain>
Subject: Re: [dpdk-dev] [PATCH v2 15/15] app/test: add unit tests for SW eventdev driver
> -----Original Message-----
> From: Jerin Jacob [mailto:jerin.jacob@caviumnetworks.com]
> Sent: Wednesday, February 08, 2017 15:53
> To: Harry van Haaren
> Cc: dev@dpdk.org; Bruce Richardson; David Hunt; Nipun Gupta; Hemant Agrawal; gage.eads@intel.com
> Subject: Re: [PATCH v2 15/15] app/test: add unit tests for SW eventdev driver
>
> On Tue, Jan 31, 2017 at 04:14:33PM +0000, Harry van Haaren wrote:
> > From: Bruce Richardson
> >
> > Since the sw driver is a standalone lookaside device that has no HW
> > requirements, we can provide a set of unit tests that test its
> > functionality across the different queue types and with different input
> > scenarios.
> >
>
> Thanks for the SW driver specific test cases. They provided me a good
> insight into the expected application behavior from the SW driver
> perspective, and in turn highlighted some challenges for portable
> applications.
>
> I would like to highlight a main difference between the implementations
> and get a consensus on how to abstract it.
>
> Based on the existing header file, we can do event pipelining in two
> different ways:
> a) flow-based event pipelining
> b) queue_id based event pipelining
>
> I will provide an example to showcase the application flow in both modes.
> Based on my understanding of the SW driver source code, it supports only
> queue_id based event pipelining. I guess flow-based event pipelining will
> work semantically with the SW driver, but it will be very slow.
>
> I think the reason for the difference is the capability of the context
> definition:
> SW model: the context is queue_id
> Cavium HW model: the context is queue_id + flow_id + sub_event_type +
> event_type
>
> AFAIK, queue_id based event pipelining will work with NXP HW, but I am
> not sure about the flow-based event pipelining model with NXP HW.
> Appreciate any input on this?

[Nipun] Yes Jerin, that's right. NXP HW will not be suitable for flow-based
event pipelining.

>
> In Cavium HW, we support both modes.
>
> As an open question: should we add a capability flag to advertise the
> supported models and let the application choose the model based on the
> implementation capability? The downside is that a small portion of the
> stage advance code will be different, but we can reuse the STAGE specific
> application code (I think it is a fair trade-off).
>
> Bruce, Harry, Gage, Hemant, Nipun
> Thoughts? Or any other proposal?
>
> I will take a non-trivial real-world NW use case to show the difference.
> A standard IPsec outbound processing will have a minimum of 4 to 5 stages:
>
> stage_0:
> --------
> a) Takes the pkts from ethdev and pushes them to eventdev as
> RTE_EVENT_OP_NEW
> b) In some HW implementations, this will be done by HW.
> In the SW implementation
> it is done by service cores.
>
> stage_1 (ORDERED):
> ------------------
> a) Receive pkts from stage_0 in an ORDERED flow and process them in
> parallel on N cores
> b) Find the SA that packet belongs to and move to the next stage for SA
> specific outbound operations. Outbound processing starts with updating
> the sequence number in the critical section, followed by packet
> encryption in parallel.
>
> stage_2 (ATOMIC) based on SA
> ----------------------------
> a) Update the sequence number and move to ORDERED sched_type for packet
> encryption in parallel
>
> stage_3 (ORDERED) based on SA
> -----------------------------
> a) Encrypt the packets in parallel
> b) Do output route look-up and figure out the tx port and queue to
> transmit the packet
> c) Move to ATOMIC stage based on tx port and tx queue_id to transmit
> the packet _without_ losing the ingress ordering
>
> stage_4 (ATOMIC) based on tx port/tx queue
> ------------------------------------------
> a) Enqueue the encrypted packet to the ethdev tx port/tx_queue
>
>
> 1) queue_id based event pipelining
> ==================================
>
> stage_1_work (assigned to event queue 1) # N ports/N cores establish a
> link to queue 1 through rte_event_port_link()
>
> on_each_cores_linked_to_queue1(stage1)
> while(1)
> {
>         /* STAGE 1 processing */
>         nr_events = rte_event_dequeue_burst(ev,..);
>         if (!nr_events)
>                 continue;
>
>         sa = find_sa_from_packet(ev.mbuf);
>
>         /* move to next stage (ATOMIC) */
>         ev.event_type = RTE_EVENT_TYPE_CPU;
>         ev.sub_event_type = 2;
>         ev.sched_type = RTE_SCHED_TYPE_ATOMIC;
>         ev.flow_id = sa;
>         ev.op = RTE_EVENT_OP_FORWARD;
>         ev.queue_id = 2;
>         /* move to stage 2 (event queue 2) */
>         rte_event_enqueue_burst(ev,..);
> }
>
> on_each_cores_linked_to_queue2(stage2)
> while(1)
> {
>         /* STAGE 2 processing */
>         nr_events = rte_event_dequeue_burst(ev,..);
>         if (!nr_events)
>                 continue;
>
>         sa_specific_atomic_processing(sa /* ev.flow_id */); /* seq number
>         update in critical section */
>
>         /* move to next stage (ORDERED) */
>         ev.event_type = RTE_EVENT_TYPE_CPU;
>         ev.sub_event_type = 3;
>         ev.sched_type = RTE_SCHED_TYPE_ORDERED;
>         ev.flow_id = sa;

[Nipun] Queue1 has flow_id as an 'sa' with sched_type as
RTE_SCHED_TYPE_ATOMIC, and Queue2 has the same flow_id but with sched_type
as RTE_SCHED_TYPE_ORDERED. Does this mean that the same flow_id can be
associated with separate RTE_SCHED_TYPE_* values as sched_type?
My understanding is that one flow can either be parallel, atomic or
ordered. The rte_eventdev.h states that sched_type is associated with
flow_id, which also seems legitimate:

        uint8_t sched_type:2;
        /**< Scheduler synchronization type (RTE_SCHED_TYPE_*)
         * associated with flow id on a given event queue
         * for the enqueue and dequeue operation.
         */

>         ev.op = RTE_EVENT_OP_FORWARD;
>         ev.queue_id = 3;
>         /* move to stage 3 (event queue 3) */
>         rte_event_enqueue_burst(ev,..);
> }
>
> on_each_cores_linked_to_queue3(stage3)
> while(1)
> {
>         /* STAGE 3 processing */
>         nr_events = rte_event_dequeue_burst(ev,..);
>         if (!nr_events)
>                 continue;
>
>         sa_specific_ordered_processing(sa /* ev.flow_id */); /* packets
>         encryption in parallel */
>
>         /* move to next stage (ATOMIC) */
>         ev.event_type = RTE_EVENT_TYPE_CPU;
>         ev.sub_event_type = 4;
>         ev.sched_type = RTE_SCHED_TYPE_ATOMIC;
>         output_tx_port_queue = find_output_tx_queue_and_tx_port(ev.mbuf);
>         ev.flow_id = output_tx_port_queue;
>         ev.op = RTE_EVENT_OP_FORWARD;
>         ev.queue_id = 4;
>         /* move to stage 4 (event queue 4) */
>         rte_event_enqueue_burst(ev,...);
> }
>
> on_each_cores_linked_to_queue4(stage4)
> while(1)
> {
>         /* STAGE 4 processing */
>         nr_events = rte_event_dequeue_burst(ev,..);
>         if (!nr_events)
>                 continue;
>
>         rte_eth_tx_buffer();
> }
>
> 2) flow-based event pipelining
> ==============================
>
> - No need to partition queues for different stages
> - All the cores can operate on all the stages, thus enabling automatic
> multicore scaling and true dynamic load balancing
> - A fairly large number of SAs (on the order of 2^16 to 2^20) can be
> processed in parallel, something the existing IPsec application has
> constraints on:
> http://dpdk.org/doc/guides-16.04/sample_app_ug/ipsec_secgw.html
>
> on_each_worker_cores()
> while(1)
> {
>         nr_events = rte_event_dequeue_burst(ev,..);
>         if (!nr_events)
>                 continue;
>
>         /* STAGE 1 processing */
>         if (ev.event_type == RTE_EVENT_TYPE_ETHDEV) {
>                 sa = find_it_from_packet(ev.mbuf);
>                 /* move to next stage 2 (ATOMIC) */
>                 ev.event_type = RTE_EVENT_TYPE_CPU;
>                 ev.sub_event_type = 2;
>                 ev.sched_type = RTE_SCHED_TYPE_ATOMIC;
>                 ev.flow_id = sa;
>                 ev.op = RTE_EVENT_OP_FORWARD;
>                 rte_event_enqueue_burst(ev,..);
>
>         } else if (ev.event_type == RTE_EVENT_TYPE_CPU &&
>                         ev.sub_event_type == 2) { /* stage 2 */

[Nipun] I didn't get, in this case, on which event queue (and eventually
which of its associated event ports) the RTE_EVENT_TYPE_CPU type events
will be received.
Adding on to what Harry also mentions in the other mail: if the same code
is run in the case you mentioned in '#1 - queue_id based event
pipelining', after specifying the ev.queue_id with an appropriate value,
then #1 would also be good. Isn't it?
>
>                 sa_specific_atomic_processing(sa /* ev.flow_id */); /* seq
>                 number update in critical section */
>                 /* move to next stage (ORDERED) */
>                 ev.event_type = RTE_EVENT_TYPE_CPU;
>                 ev.sub_event_type = 3;
>                 ev.sched_type = RTE_SCHED_TYPE_ORDERED;
>                 ev.flow_id = sa;
>                 ev.op = RTE_EVENT_OP_FORWARD;
>                 rte_event_enqueue_burst(ev,..);
>
>         } else if (ev.event_type == RTE_EVENT_TYPE_CPU &&
>                         ev.sub_event_type == 3) { /* stage 3 */
>
>                 sa_specific_ordered_processing(sa /* ev.flow_id */); /* like
>                 encrypting packets in parallel */
>                 /* move to next stage (ATOMIC) */
>                 ev.event_type = RTE_EVENT_TYPE_CPU;
>                 ev.sub_event_type = 4;
>                 ev.sched_type = RTE_SCHED_TYPE_ATOMIC;
>                 output_tx_port_queue = find_output_tx_queue_and_tx_port(ev.mbuf);
>                 ev.flow_id = output_tx_port_queue;
>                 ev.op = RTE_EVENT_OP_FORWARD;
>                 rte_event_enqueue_burst(ev,..);
>
>         } else if (ev.event_type == RTE_EVENT_TYPE_CPU &&
>                         ev.sub_event_type == 4) { /* stage 4 */
>                 rte_eth_tx_buffer();
>         }
> }
>
> /Jerin
> Cavium