From mboxrd@z Thu Jan  1 00:00:00 1970
From: "Van Haaren, Harry"
To: Pavan Nikhilesh, "jerin.jacob@caviumnetworks.com",
	"santosh.shukla@caviumnetworks.com", "Eads, Gage",
	"hemant.agrawal@nxp.com", "nipun.gupta@nxp.com", "Ma, Liang J"
Cc: "dev@dpdk.org"
Date: Thu, 11 Jan 2018 12:17:38 +0000
References: <20171130072406.15605-1-pbhagavatula@caviumnetworks.com>
	<20180110145144.28403-1-pbhagavatula@caviumnetworks.com>
	<20180110145144.28403-9-pbhagavatula@caviumnetworks.com>
	<20180110201710.3uolm2hwzwcowoif@Pavan-LT>
In-Reply-To: <20180110201710.3uolm2hwzwcowoif@Pavan-LT>
Subject: Re: [dpdk-dev] [PATCH v3 09/12] app/eventdev: add pipeline queue
	worker functions
List-Id: DPDK patches and discussions

> From: Pavan Nikhilesh [mailto:pbhagavatula@caviumnetworks.com]
> Sent: Wednesday, January 10, 2018 8:17 PM
> To: Van Haaren, Harry; jerin.jacob@caviumnetworks.com;
> santosh.shukla@caviumnetworks.com; Eads, Gage; hemant.agrawal@nxp.com;
> nipun.gupta@nxp.com; Ma, Liang J
> Cc: dev@dpdk.org
> Subject: Re: [dpdk-dev] [PATCH v3 09/12] app/eventdev: add pipeline queue
> worker functions
>
> On Wed, Jan 10, 2018 at 04:53:53PM +0000, Van Haaren, Harry wrote:
> > Replying to self...
> >
> > > -----Original Message-----
> > > From: dev [mailto:dev-bounces@dpdk.org] On Behalf Of Van Haaren, Harry
> > > Sent: Wednesday, January 10, 2018 4:45 PM
> > > To: Pavan Nikhilesh; jerin.jacob@caviumnetworks.com;
> > > santosh.shukla@caviumnetworks.com; Eads, Gage; hemant.agrawal@nxp.com;
> > > nipun.gupta@nxp.com; Ma, Liang J
> > > Cc: dev@dpdk.org
> > > Subject: Re: [dpdk-dev] [PATCH v3 09/12] app/eventdev: add pipeline
> > > queue worker functions
> > >
> > > > From: Pavan Nikhilesh [mailto:pbhagavatula@caviumnetworks.com]
> > > > Sent: Wednesday, January 10, 2018 2:52 PM
> > > > To: jerin.jacob@caviumnetworks.com; santosh.shukla@caviumnetworks.com;
> > > > Van Haaren, Harry; Eads, Gage; hemant.agrawal@nxp.com;
> > > > nipun.gupta@nxp.com; Ma, Liang J
> > > > Cc: dev@dpdk.org; Pavan Nikhilesh
> > > > Subject: [dpdk-dev] [PATCH v3 09/12] app/eventdev: add pipeline
> > > > queue worker functions
> > > >
> > > >
> > > > +static __rte_always_inline void
> > > > +pipeline_tx_pkt_safe(struct rte_mbuf *mbuf)
> > > > +{
> > > > +	while (rte_eth_tx_burst(mbuf->port, 0, &mbuf, 1) != 1)
> > > > +		rte_pause();
> > > > +}
> > >
> > > re safe, see comment below
> > >
> > > > +
> > > > +static __rte_always_inline void
> > > > +pipeline_tx_pkt_unsafe(struct rte_mbuf *mbuf, struct test_pipeline *t)
> > > > +{
> > > > +	rte_spinlock_t *lk = &t->tx_lk[mbuf->port];
> > > > +
> > > > +	rte_spinlock_lock(lk);
> > > > +	pipeline_tx_pkt_safe(mbuf);
> > > > +	rte_spinlock_unlock(lk);
> > > > +}
> > >
> > > IIRC usually the "safe" version of a function has extra
> > > locks/protection, while the "normal" version has better performance,
> > > but less error-checking.
> > >
> > > Here, the "unsafe" function does the extra locking. If looking from
> > > the HW POV, that makes sense, but I think it's inverted from most
> > > existing code...
> > >
> > > Happy to be proved wrong here .. ?
> > >
> >
> > Thinking a little more about this, also in light of patch 11/12 of
> > this series.
> >
> > The code here has a "safe" and "unsafe" version of TX. This involves
> > adding a spinlock inside the code, which is locked/unlocked before
> > doing the actual TX action.
> >
> > I don't understand why this is necessary? DPDK's general stance on
> > locking for the data-path is that DPDK functions do not provide locks,
> > and that the application level must implement thread-synchronization
> > if it is required.
> >
> > In this case, the app/eventdev can be considered an App, but I don't
> > like the idea of providing a sample application and code that
> > duplicates core functionality with safe/unsafe versions...
> >
>
> Some PMDs (net/octeontx) have the capability to do multi-thread-safe Tx,
> where no thread-synchronization is required. This is exposed via the
> offload flag 'DEV_TX_OFFLOAD_MT_LOCKFREE'.

Yes, understood.

> So, the _safe Tx functions are selected based on the above offload
> capability, and when the capability is absent the _unsafe Tx functions
> are selected, i.e. synchronized Tx via spinlocks based on the egress
> port id.

This part changes the current behavior of the sample app.

Currently there is a (SINGLE_LINK | ATOMIC) stage at the end of the
pipeline, which performs this "many-to-one" action, allowing a single
core to dequeue all TX traffic and perform the TX operation in a
lock-free manner.

Changing this to a locking mechanism is going to hurt performance on
platforms that do not support DEV_TX_OFFLOAD_MT_LOCKFREE.

In my opinion, the correct fix is to alter the overall pipeline, and
always use lockless TX.
Examples below:

Without DEV_TX_OFFLOAD_MT_LOCKFREE:

   Eth RX adapter -> stage 1 -> stage 2...(N-1) -> stage N ->
       stage TX (Atomic | SINGLE_LINK) -> eth TX

With DEV_TX_OFFLOAD_MT_LOCKFREE:

   Eth RX adapter -> stage 1 -> stage 2...(N-1) -> stage N -> eth TX
       (MT capable)

By configuring the pipeline based on the DEV_TX_OFFLOAD_MT_LOCKFREE
capability flag, and adding the SINGLE_LINK stage at the end if required,
we can support both models without resorting to locked TX functions.

I think this will lead to a cleaner and more performant solution.