From: "Van Haaren, Harry" <harry.van.haaren@intel.com>
To: Jerin Jacob <jerin.jacob@caviumnetworks.com>
Cc: dev@dpdk.org, "Richardson, Bruce"
Thread-Topic: [PATCH v2 07/15] event/sw: add support for event queues
Date: Tue, 7 Feb 2017 09:58:13 +0000
In-Reply-To: <20170207065800.GB12563@localhost.localdomain>
Subject: Re: [dpdk-dev] [PATCH v2 07/15] event/sw: add support for event queues

> -----Original Message-----
> From: Jerin Jacob [mailto:jerin.jacob@caviumnetworks.com]
> Sent: Tuesday, February 7, 2017 6:58 AM
> To: Van Haaren, Harry
> Cc: dev@dpdk.org; Richardson, Bruce
> Subject: Re: [PATCH v2 07/15] event/sw: add support for event queues
>
> On Mon, Feb 06, 2017 at 10:25:18AM +0000, Van Haaren, Harry wrote:
> > > -----Original Message-----
> > > From: Jerin Jacob [mailto:jerin.jacob@caviumnetworks.com]
> > > Sent: Monday, February 6, 2017 9:25 AM
> > > To: Van Haaren, Harry
> > > Cc: dev@dpdk.org; Richardson, Bruce
> > > Subject: Re: [PATCH v2 07/15] event/sw: add support for event queues
> > >
> > > On Tue, Jan 31, 2017 at 04:14:25PM +0000, Harry van Haaren wrote:
> > > > From: Bruce Richardson
> > > >
> > > > Add in the data structures for the event queues, and the eventdev
> > > > functions to create and destroy those queues.
> > > >
> > > > Signed-off-by: Bruce Richardson
> > > > Signed-off-by: Harry van Haaren
> > > > ---
> > > >  drivers/event/sw/iq_ring.h  | 176 ++++++++++++++++++++++++++++++++++++++++
> > > >  drivers/event/sw/sw_evdev.c | 158 +++++++++++++++++++++++++++++++++++
> > > >  drivers/event/sw/sw_evdev.h |  75 +++++++++++++++++++
> > > >  3 files changed, 409 insertions(+)
> > > >  create mode 100644 drivers/event/sw/iq_ring.h
> > > >
> > > > + */
> > > > +
> > > > +/*
> > > > + * Ring structure definitions used for the internal ring buffers of the
> > > > + * SW eventdev implementation. These are designed for single-core use only.
> > > > + */
> > >
> > > If I understand it correctly, IQ and QE rings are single producer and
> > > single consumer rings.
> > > By the specification, multiple producers through
> > > multiple ports can enqueue to the event queues at a time. Does the SW implementation
> > > support that? Or am I missing something here?
> >
> > You're right that the IQ and QE rings are Single Producer, Single Consumer rings. More
> > specifically, the QE is a ring for sending rte_event structs between cores, while the IQ ring
> > is optimized for internal use in the scheduler core - and should not be used to send events
> > between cores. Note that the design of the SW scheduler includes a central core for performing
> > scheduling.
>
> Thanks Harry. One question though,
> In RTE_EVENT_DEV_CAP_DISTRIBUTED_SCHED mode, multiple SW schedulers can
> be active. Right? If so, we need multi consumer. Right?

Note that the sw scheduler is a centralized scheduler, and does not support distributing the *scheduling* work itself.

In the case of having multiple software schedulers, each instance is its own scheduling domain - they don't interact with each other directly. There is no need for Multi-Producer / Multi-Consumer with this design, as there is never more than one thread accessing the producer or consumer side of a ring.

>
> > In other words, the QE rings transfer events from the worker core to the scheduler - and the
> > scheduler pushes the events into what you call the "event queues" (aka, the atomic/ordered
> > queue itself). These "event queues" are IQ instances. On egress from the scheduler, the event
> > passes through a QE ring to the worker.
> >
> > The result is that despite only SP/SC rings being used, multiple workers can enqueue to
> > any event queue.
>
> Got it.