Date: Thu, 8 Dec 2016 10:14:38 +0000
From: Bruce Richardson
To: Alan Robertson
Cc: "Dumitrescu, Cristian", "dev@dpdk.org"
Message-ID: <20161208101438.GD55440@bricha3-MOBL3.ger.corp.intel.com>
References: <1480529810-95280-1-git-send-email-cristian.dumitrescu@intel.com>
 <57688e98-15d5-1866-0c3a-9dda81621651@brocade.com>
 <6d862b500e1e4f34a4cbf790db8d5d48@EMEAWP-EXMB11.corp.brocade.com>
MIME-Version: 1.0
Content-Type: text/plain; charset=us-ascii
Content-Disposition: inline
In-Reply-To: <6d862b500e1e4f34a4cbf790db8d5d48@EMEAWP-EXMB11.corp.brocade.com>
Organization: Intel Research and Development Ireland Ltd.
User-Agent: Mutt/1.7.1 (2016-10-04)
Subject: Re: [dpdk-dev] [RFC] ethdev: abstraction layer for QoS hierarchical scheduler
List-Id: DPDK patches and discussions

On Wed, Dec 07, 2016 at 10:58:49AM +0000, Alan Robertson wrote:
> Hi Cristian,
>
> Looking at points 10 and 11 it's good to hear nodes can be dynamically
> added.
>
> We've been trying to decide the best way to do this to support QoS on
> tunnels for some time now, and the existing implementation doesn't
> allow it, which effectively ruled out hierarchical queueing for tunnel
> targets on the output interface.
>
> Having said that, has thought been given to separating the queueing
> from being so closely tied to the Ethernet transmit process? When
> queueing on a tunnel, for example, we may be working with encryption.
> When running with an anti-replay window it is much better to do the
> QoS (packet reordering) before the encryption. To support this, would
> it be possible to have a separate scheduler structure which can be
> passed into the scheduling API? That way the calling code can hang the
> structure off whatever entity it wishes to perform QoS on, and we get
> dynamic target support (sessions/tunnels etc.).
>
Hi,

just to note that not all ethdevs need to be actual NICs (physical or
virtual). It was also for situations like this that the ring PMD was
created.

For the QoS scheduler, the common "output port" type chosen was the
ethdev, to avoid having to support multiple underlying types. To use a
ring instead as the output port, just create a ring and then call
"rte_eth_from_ring" to get an ethdev port wrapper around the ring,
which you can then use with just about any API that wants an ethdev.

[Note: the rte_eth_from_ring API is in the ring driver itself, so you
do need to link against that driver directly if using shared libs.]

Regards,
/Bruce
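
A minimal sketch of the ring-as-ethdev setup described above, assuming
the ring PMD (librte_pmd_ring) is linked in; the ring name, size, and
flags are illustrative, not taken from the mail:

/*
 * Create an rte_ring and wrap it in an ethdev via the ring PMD, so it
 * can act as the "output port" of the QoS scheduler or any other API
 * that expects an ethdev. Error handling is kept minimal.
 */
#include <rte_ring.h>
#include <rte_lcore.h>
#include <rte_ethdev.h>
#include <rte_eth_ring.h>	/* rte_eth_from_ring() lives in the ring PMD */

static int
create_sched_output_port(void)
{
	struct rte_ring *r;
	int port_id;

	/* single-producer/single-consumer ring carrying mbuf pointers */
	r = rte_ring_create("sched_out_ring", 1024, rte_socket_id(),
			RING_F_SP_ENQ | RING_F_SC_DEQ);
	if (r == NULL)
		return -1;

	/* wrap the ring in an ethdev; returns the new port id, or -1 */
	port_id = rte_eth_from_ring(r);
	return port_id;
}

Once the port has been configured and started like any other ethdev,
packets transmitted to it with rte_eth_tx_burst() land on the
underlying ring, from which a later stage (e.g. the tunnel/crypto path)
can dequeue them with rte_ring_dequeue_burst(), or read them back
through rte_eth_rx_burst() on the same port.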