From: "Van Haaren, Harry"
To: Jerin Jacob
Cc: dev@dpdk.org, "Richardson, Bruce"
Date: Mon, 27 Mar 2017 15:17:48 +0000
Subject: Re: [dpdk-dev] [PATCH v5 06/20] event/sw: add support for event queues

> From: Jerin Jacob [mailto:jerin.jacob@caviumnetworks.com]
> Sent: Monday, March 27, 2017 8:45 AM
> To: Van Haaren, Harry
> Cc: dev@dpdk.org; Richardson, Bruce
> Subject: Re: [PATCH v5 06/20] event/sw: add support for event queues
>
> On Fri, Mar 24, 2017 at 04:53:01PM +0000, Harry van Haaren wrote:
> > From: Bruce Richardson
> >
> > Add in the data structures for the event queues, and the eventdev
> > functions to create and destroy those queues.
> >
> > Signed-off-by: Bruce Richardson
> > Signed-off-by: Harry van Haaren
> > ---
> > +static int32_t
> > +qid_init(struct sw_evdev *sw, unsigned int idx, int type,
> > +		const struct rte_event_queue_conf *queue_conf)
> > +{
> > +	unsigned int i;
> > +	int dev_id = sw->data->dev_id;
> > +	int socket_id = sw->data->socket_id;
> > +	char buf[IQ_RING_NAMESIZE];
> > +	struct sw_qid *qid = &sw->qids[idx];
> > +
> > +	for (i = 0; i < SW_IQS_MAX; i++) {
>
> Just for my understanding, are 4 (SW_IQS_MAX) iq rings created to
> address a different priority for each enqueue operation? What is the
> significance of 4 (SW_IQS_MAX) here?

Yes, each IQ represents a priority level. There is a compile-time define
(SW_IQS_MAX) which allows setting the number of internal queues at each
queue stage. The default number of priorities is currently 4.

> > +static int
> > +sw_queue_setup(struct rte_eventdev *dev, uint8_t queue_id,
> > +		const struct rte_event_queue_conf *conf)
> > +{
> > +	int type;
> > +
> > +	switch (conf->event_queue_cfg) {
> > +	case RTE_EVENT_QUEUE_CFG_SINGLE_LINK:
> > +		type = SW_SCHED_TYPE_DIRECT;
> > +		break;
>
> event_queue_cfg is a bitmap. It is valid to have
> RTE_EVENT_QUEUE_CFG_SINGLE_LINK | RTE_EVENT_QUEUE_CFG_ATOMIC_ONLY,
> i.e. an atomic schedule type queue that has only one port linked to
> dequeue the events. So in the above context, the switch case is not
> correct, i.e. it goes to the default condition. Right?
> Is this intentional?
>
> If I understand it correctly, based on the use case (grouped-based
> event pipelining) you have shared in the documentation patch,
> RTE_EVENT_QUEUE_CFG_SINGLE_LINK is used for the last stage (last
> queue). One option is that if the SW PMD cannot support
> RTE_EVENT_QUEUE_CFG_SINGLE_LINK | RTE_EVENT_QUEUE_CFG_ATOMIC_ONLY
> mode, then even though the application sets
> RTE_EVENT_QUEUE_CFG_SINGLE_LINK | RTE_EVENT_QUEUE_CFG_ATOMIC_ONLY,
> the driver can ignore RTE_EVENT_QUEUE_CFG_ATOMIC_ONLY. But I am not
> sure about the case where the application sets
> RTE_EVENT_QUEUE_CFG_SINGLE_LINK in the middle of the pipeline.
>
> Thoughts?

I don't like the idea of the SW PMD ignoring flags for queues - the PMD
has no idea whether the queue is at the end or in the middle of the
pipeline, as it is the application's usage which defines that.

Does anybody have a need for a queue to be both Atomic *and*
Single-link? I understand the current API doesn't prohibit it, but I
don't see the actual use-case in which that may be useful. Atomic
implies load-balancing is occurring; single-link implies there is only
one consuming core. Those seem like opposites to me?

Unless anybody sees value in queues having both, I suggest we update
the documentation to specify that a queue is either load-balanced or
single-link, and that setting both flags will result in -ENOTSUP being
returned. (This check can be added to the eventdev layer if it is
consistent for all PMDs.)

Counter-thoughts?
> > +	case RTE_EVENT_QUEUE_CFG_ATOMIC_ONLY:
> > +		type = RTE_SCHED_TYPE_ATOMIC;
> > +		break;
> > +	case RTE_EVENT_QUEUE_CFG_ORDERED_ONLY:
> > +		type = RTE_SCHED_TYPE_ORDERED;
> > +		break;
> > +	case RTE_EVENT_QUEUE_CFG_PARALLEL_ONLY:
> > +		type = RTE_SCHED_TYPE_PARALLEL;
> > +		break;
> > +	case RTE_EVENT_QUEUE_CFG_ALL_TYPES:
> > +		SW_LOG_ERR("QUEUE_CFG_ALL_TYPES not supported\n");
> > +		return -ENOTSUP;
> > +	default:
> > +		SW_LOG_ERR("Unknown queue type %d requested\n",
> > +				conf->event_queue_cfg);
> > +		return -EINVAL;
> > +	}
> > +
> > +	struct sw_evdev *sw = sw_pmd_priv(dev);
> > +	return qid_init(sw, queue_id, type, conf);
> > +}
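
For reference, a minimal sketch of the bitmap-aware handling Jerin
describes above. It reuses only names that appear in the quoted patch
(SW_SCHED_TYPE_DIRECT from the PMD's internal header, the
RTE_SCHED_TYPE_* values and the RTE_EVENT_QUEUE_CFG_* flags); treating
event_queue_cfg as a plain bitmap and testing the SINGLE_LINK bit
before decoding the type bits is the assumption being illustrated.
This is illustrative, not code from the patch:

#include <errno.h>
#include <rte_eventdev.h>
/* SW_SCHED_TYPE_DIRECT comes from the PMD's own sw_evdev.h. */

static int
sw_queue_type_from_conf(const struct rte_event_queue_conf *conf)
{
	uint32_t cfg = conf->event_queue_cfg;

	/*
	 * SINGLE_LINK is a separate bit in the bitmap, so test it first.
	 * A queue configured as SINGLE_LINK | ATOMIC_ONLY then resolves
	 * to a direct queue instead of falling into the default (error)
	 * case; this is the "driver can ignore ATOMIC_ONLY" option
	 * raised in the discussion.
	 */
	if (cfg & RTE_EVENT_QUEUE_CFG_SINGLE_LINK)
		return SW_SCHED_TYPE_DIRECT;

	/* The remaining bits select the schedule type, as in the patch. */
	switch (cfg) {
	case RTE_EVENT_QUEUE_CFG_ATOMIC_ONLY:
		return RTE_SCHED_TYPE_ATOMIC;
	case RTE_EVENT_QUEUE_CFG_ORDERED_ONLY:
		return RTE_SCHED_TYPE_ORDERED;
	case RTE_EVENT_QUEUE_CFG_PARALLEL_ONLY:
		return RTE_SCHED_TYPE_PARALLEL;
	case RTE_EVENT_QUEUE_CFG_ALL_TYPES:
		return -ENOTSUP; /* not supported by the SW PMD */
	default:
		return -EINVAL;
	}
}

A caller such as sw_queue_setup() could then pass a non-negative
return value through to qid_init(), or instead refuse combined flags
outright with -ENOTSUP if that is the consensus reached above.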