To: Neil Horman
Cc: Bruce Richardson, Thomas Monjalon, Fan Zhang, dev@dpdk.org
From: Declan Doherty
Date: Wed, 7 Dec 2016 12:42:15 +0000
Subject: Re: [dpdk-dev] [PATCH] Scheduler: add driver for scheduler crypto pmd

On 05/12/16 15:12, Neil Horman wrote:
> On Fri, Dec 02, 2016 at 04:22:16PM +0000, Declan Doherty wrote:
>> On 02/12/16 14:57, Bruce Richardson wrote:
>>> On Fri, Dec 02, 2016 at 03:31:24PM +0100, Thomas Monjalon wrote:
>>>> 2016-12-02 14:15, Fan Zhang:
>>>>> This patch provides the initial implementation of the scheduler poll
>>>>> mode driver using the DPDK cryptodev framework.
>>>>>
>>>>> The scheduler PMD is used to schedule and enqueue the crypto ops to
>>>>> the hardware and/or software crypto devices attached to it (slaves).
>>>>> The dequeue operation from the slave(s), and any reordering of the
>>>>> dequeued crypto ops, are then carried out by the scheduler.
>>>>>
>>>>> The scheduler PMD can be used to fill the throughput gap between the
>>>>> physical core and the existing cryptodevs to increase the overall
>>>>> performance. For example, if a physical core has a higher crypto op
>>>>> processing rate than a single cryptodev can sustain, the scheduler
>>>>> PMD can be introduced to attach more than one cryptodev.
>>>>>
>>>>> This initial implementation is limited to supporting the following
>>>>> scheduling modes:
>>>>>
>>>>> - CRYPTO_SCHED_SW_ROUND_ROBIN_MODE (round robin amongst the attached
>>>>>   software slave cryptodevs; to set this mode, one or more software
>>>>>   cryptodevs must have been attached to the scheduler).
>>>>>
>>>>> - CRYPTO_SCHED_HW_ROUND_ROBIN_MODE (round robin amongst the attached
>>>>>   hardware slave cryptodevs (QAT); to set this mode, one or more
>>>>>   QATs must have been attached to the scheduler).
>>>>
>>>> Could it be implemented on top of the eventdev API?
>>>>
>>> Not really. The eventdev API is for different types of scheduling
>>> between multiple sources that are all polling for packets, compared to
>>> this, which is more analogous - as I understand it - to the bonding
>>> PMD for ethdev.
>>>
>>> To make something like this work with an eventdev API you would need
>>> to use one of the following models:
>>> * have worker cores for offloading packets to the different crypto
>>>   blocks pulling from the eventdev APIs. This would make it difficult
>>>   to do any "smart" scheduling of crypto operations between the
>>>   blocks, e.g. that one crypto instance may be better at certain
>>>   types of operations than another.
>>> * move the logic in this driver into an existing eventdev instance,
>>>   which uses the eventdev API rather than the crypto APIs and so has
>>>   an extra level of "structure abstraction" that has to be worked
>>>   through. It's just not really a good fit.
>>>
>>> So for this workload, I believe the pseudo-cryptodev instance is the
>>> best way to go.
>>>
>>> /Bruce
>>>
>>
>> As Bruce says, this is much more analogous to the ethdev bonding
>> driver: the main idea is to allow different crypto op scheduling
>> mechanisms to be defined transparently to an application. This could
>> be load-balancing across multiple hw crypto devices, or having a
>> software crypto device act as a backup device for a hw accelerator if
>> it becomes oversubscribed. I think the main advantage of the
>> crypto-scheduler approach is that the data path of the application
>> doesn't need any knowledge that scheduling is happening at all; it is
>> just using a different crypto device id, which then manages the
>> distribution of the crypto work.
>>
>>
>>
> This is a good deal like the bonding pmd, and so from a certain
> standpoint it makes sense to do this, but whereas the bonding pmd is
> meant to create a single path to a logical network over several
> physical networks, this pmd really only focuses on maximizing
> throughput, and for that we already have tools. As Thomas mentions,
> there is the eventdev library, but from my view the distributor library
> already fits this bill. It already is a basic framework to process
> mbufs in parallel according to whatever policy you want to implement,
> which sounds like exactly what the goal of this pmd is.
>
> Neil
>
>

Hey Neil, this is actually intended to act and look a good deal like the
ethernet bonding device, but to handle the crypto scheduling use cases.

For example, take the case where multiple hw accelerators are available.
We want to provide user applications with a mechanism to transparently
balance work across all devices without having to manage the
load-balancing details or the guaranteeing of ordering of the processed
ops on the dequeue_burst side. In this case the application would just
use the crypto dev_id of the scheduler, and it would look after balancing
the workload across the available hw accelerators:

                +-------------------+
                |  Crypto Sch PMD   |
                |                   |
                | ORDERING / RR SCH |
                +-------------------+
                     ^    ^    ^
                     |    |    |
        +------------+    |    +------------+
        |                 |                 |
        V                 V                 V
+---------------+ +---------------+ +---------------+
| Crypto HW PMD | | Crypto HW PMD | | Crypto HW PMD |
+---------------+ +---------------+ +---------------+
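To make that concrete, here is a minimal data-path sketch (purely
illustrative, not code from the patch; 'sched_dev_id' is whatever device
id the scheduler instance is created with, queue pair 0 is assumed, and
session setup and error handling are omitted). The point is that only the
standard cryptodev burst API is used:

  #include <rte_crypto.h>
  #include <rte_cryptodev.h>

  #define BURST_SIZE 32

  /* Enqueue a burst to the scheduler and poll for completed ops. The
   * fan-out across the slave PMDs and the restoring of op ordering
   * happen inside the scheduler PMD; the application only ever sees
   * 'sched_dev_id'. */
  static uint16_t
  process_burst(uint8_t sched_dev_id, struct rte_crypto_op **ops,
                uint16_t nb_ops)
  {
          struct rte_crypto_op *deq_ops[BURST_SIZE];
          uint16_t nb_enq, nb_deq;

          nb_enq = rte_cryptodev_enqueue_burst(sched_dev_id, 0,
                          ops, nb_ops);
          /* a real app would retry/free ops[nb_enq..nb_ops-1] */
          (void)nb_enq;

          nb_deq = rte_cryptodev_dequeue_burst(sched_dev_id, 0,
                          deq_ops, BURST_SIZE);
          /* a real app would then process deq_ops[0..nb_deq-1] */
          return nb_deq;
  }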
Another use case we hope to support is migration of processing from one
device to another, where a hw and a sw crypto pmd can be bound to the
same crypto scheduler and the crypto processing can be transparently
migrated from the hw to the sw pmd. This would allow hw accelerators to
be hot-plug attached/detached in a guest VM:

+----------------+
| Crypto Sch PMD |
|                |
| MIGRATION SCH  |
+----------------+
        |
        |
        +-----------------+
        |                 |
        V                 V
+---------------+ +---------------+
| Crypto HW PMD | | Crypto SW PMD |
|   (Active)    | |  (Inactive)   |
+---------------+ +---------------+

The main point is that this isn't envisaged as just a mechanism for
scheduling crypto workloads across multiple cores, but as a framework
for allowing different scheduling mechanisms to be introduced to handle
different crypto scheduling problems, and to do so in a way which is
completely transparent to the data path of an application. Like the eth
bonding driver, we want to support creating the crypto scheduler from
EAL options, which allow specification of the scheduling mode and the
crypto pmds which are to be bound to that crypto scheduler.
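For example (the vdev names and argument syntax below are placeholders to
illustrate the idea, not a final interface), creating a round-robin
scheduler bound to two software cryptodevs might look something like:

  ./crypto_app -c 0xf -n 4 \
      --vdev 'crypto_aesni_mb0' --vdev 'crypto_aesni_mb1' \
      --vdev 'crypto_scheduler0,mode=round-robin,slave=crypto_aesni_mb0,slave=crypto_aesni_mb1'

The application would then simply use the device id assigned to
crypto_scheduler0 and never reference the slaves directly.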