Date: Thu, 18 May 2017 11:13:03 +0100
From: Bruce Richardson
To: Jerin Jacob
Cc: Harry van Haaren, dev@dpdk.org, Gage Eads
Message-ID: <20170518101303.GA6300@bricha3-MOBL3.ger.corp.intel.com>
References: <1492768299-84016-1-git-send-email-harry.van.haaren@intel.com>
 <1492768299-84016-2-git-send-email-harry.van.haaren@intel.com>
 <20170517180314.GA26402@jerin>
In-Reply-To: <20170517180314.GA26402@jerin>
Organization: Intel Research and Development Ireland Ltd.
User-Agent: Mutt/1.8.0 (2017-02-23)
Subject: Re: [dpdk-dev] [PATCH 1/3] examples/eventdev_pipeline: added sample app
List-Id: DPDK patches and discussions

On Wed, May 17, 2017 at 11:33:16PM +0530, Jerin Jacob wrote:
> -----Original Message-----
> > Date: Fri, 21 Apr 2017 10:51:37 +0100
> > From: Harry van Haaren
> > To: dev@dpdk.org
> > CC: jerin.jacob@caviumnetworks.com, Harry van Haaren, Gage Eads,
> >  Bruce Richardson
> > Subject: [PATCH 1/3] examples/eventdev_pipeline: added sample app
> > X-Mailer: git-send-email 2.7.4
> >
> > This commit adds a sample app for the eventdev library.
> > The app has been tested with DPDK 17.05-rc2, hence this
> > release (or later) is recommended.
> >
> > The sample app showcases a pipeline processing use-case,
> > with event scheduling and processing defined per stage.
> > The application receives traffic as normal, with each
> > packet traversing the pipeline. Once the packet has
> > been processed by each of the pipeline stages, it is
> > transmitted again.
> >
> > The app provides a framework to utilize cores for a single
> > role or multiple roles. Examples of roles are the RX core,
> > TX core, Scheduling core (in the case of the event/sw PMD),
> > and worker cores.
> >
> > Various flags are available to configure numbers of stages,
> > cycles of work at each stage, type of scheduling, number of
> > worker cores, queue depths etc. For a full explanation,
> > please refer to the documentation.
> >
> > Signed-off-by: Gage Eads
> > Signed-off-by: Bruce Richardson
> > Signed-off-by: Harry van Haaren
> > ---
> > +
> > +static inline void
> > +schedule_devices(uint8_t dev_id, unsigned lcore_id)
> > +{
> > +        if (rx_core[lcore_id] && (rx_single ||
> > +            rte_atomic32_cmpset(&rx_lock, 0, 1))) {
> > +                producer();
> > +                rte_atomic32_clear((rte_atomic32_t *)&rx_lock);
> > +        }
> > +
> > +        if (sched_core[lcore_id] && (sched_single ||
> > +            rte_atomic32_cmpset(&sched_lock, 0, 1))) {
> > +                rte_event_schedule(dev_id);
>
> One question here,
>
> Is rte_event_schedule()'s SW PMD implementation capable of running
> concurrently on multiple cores?
>
No, it's not. It's designed to be called on a single (dedicated) core.
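For illustration, here is a rough sketch of the intended pattern: one
lcore is dedicated to driving the scheduler, and every other lcore acts
as a worker. Note SCHED_LCORE, quit_signal and the lcore-to-port mapping
below are made-up placeholders for this sketch, not code from the
sample app:

#include <stdbool.h>
#include <rte_eventdev.h>
#include <rte_lcore.h>

#define SCHED_LCORE 1   /* hypothetical: the one lcore allowed to schedule */

static volatile bool quit_signal;

static int
eventdev_loop(void *arg)
{
        uint8_t dev_id = *(uint8_t *)arg;

        while (!quit_signal) {
                if (rte_lcore_id() == SCHED_LCORE) {
                        /* only this core ever calls the scheduler */
                        rte_event_schedule(dev_id);
                } else {
                        /* workers only dequeue/enqueue; one event port
                         * per worker, the mapping is app-specific */
                        uint8_t port = rte_lcore_id();
                        struct rte_event ev;

                        if (rte_event_dequeue_burst(dev_id, port,
                                        &ev, 1, 0))
                                rte_event_enqueue_burst(dev_id, port,
                                        &ev, 1);
                }
        }
        return 0;
}

The other option, which the patch above takes, is to serialize the call
behind an atomic cmpset lock, so that whichever core wins the lock does
the scheduling work for that iteration.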
> Context:
> Currently I am writing a testpmd-like test framework to realize
> different use cases along with performance test cases like throughput
> and latency, and to make sure it works on the SW and HW drivers.
>
> I see the following segfault problem when rte_event_schedule() is
> invoked on multiple cores currently. Is it expected?
>
Yes, pretty much.

/Bruce