From: Jerin Jacob
Date: Fri, 27 Aug 2021 17:49:24 +0530
To: Thomas Monjalon
Cc: Jerin Jacob, dpdk-dev, Stephen Hemminger, David Marchand, Andrew Rybchenko,
 Haiyue Wang, Honnappa Nagarahalli, Ferruh Yigit, techboard@dpdk.org, Elena Agostini
Subject: Re: [dpdk-dev] [RFC PATCH v2 0/7] heterogeneous computing library

On Fri, Aug 27, 2021 at 3:14 PM Thomas Monjalon wrote:
>
> 31/07/2021 15:42, Jerin Jacob:
> > On Sat, Jul 31, 2021 at 1:51 PM Thomas Monjalon wrote:
> > > 31/07/2021 09:06, Jerin Jacob:
> > > > On Fri, Jul 30, 2021 at 7:25 PM Thomas Monjalon wrote:
> > > > > From: Elena Agostini
> > > > >
> > > > > In a heterogeneous computing system, processing is not only in the CPU.
> > > > > Some tasks can be delegated to devices working in parallel.
> > > > >
> > > > > The goal of this new library is to enhance the collaboration between
> > > > > DPDK, that's primarily a CPU framework, and other types of devices like GPUs.
> > > > >
> > > > > When mixing network activity with task processing on a non-CPU device,
> > > > > there may be the need to put the CPU in communication with the device
> > > > > in order to manage the memory, synchronize operations, exchange info, etc.
> > > > >
> > > > > This library provides a number of new features:
> > > > > - Interoperability with device-specific libraries through generic handlers
> > > > > - Possibility to allocate and free memory on the device
> > > > > - Possibility to allocate and free memory on the CPU but visible from the device
> > > > > - Communication functions to enhance the dialog between the CPU and the device
> > > > >
> > > > > The infrastructure is prepared to welcome drivers in drivers/hc/,
> > > > > such as the upcoming NVIDIA one, implementing the hcdev API.
> > > > >
> > > > > Some parts are not complete:
> > > > > - locks
> > > > > - memory allocation table
> > > > > - memory freeing
> > > > > - guide documentation
> > > > > - integration in devtools/check-doc-vs-code.sh
> > > > > - unit tests
> > > > > - integration in testpmd to enable Rx/Tx to/from GPU memory.
> > > >
> > > > Since the above line is the crux of the following text, I will start
> > > > from this point.
> > > >
> > > > + Techboard
> > > >
> > > > I can give my honest feedback on this.
> > > >
> > > > I can map similar stuff in Marvell HW, where we do machine learning
> > > > as compute offload on a different class of CPU.
> > > >
> > > > In terms of RFC patch features:
> > > >
> > > > 1) memory API - use cases are aligned
> > > > 2) communication flag and communication list -
> > > > our structure is completely different and we are using a HW-ring kind of
> > > > interface to post the job to the compute interface, and
> > > > the job completion result happens through the event device.
> > > > Kind of similar to the DMA API that has been discussed on the mailing list.
> > >
> > > Interesting.
> >
> > It is hard to generalize the communication mechanism.
> > Do other GPU vendors have a similar communication mechanism? AMD, Intel?
>
> I don't know who to ask in AMD & Intel. Any ideas?

Good question. At least in Marvell HW, our structure for the communication
flag and communication list is completely different: we use a HW-ring kind of
interface to post the job to the compute interface, and the job completion
result happens through the event device, kind of similar to the DMA API that
has been discussed on the mailing list.

> > > > Now the bigger question is why we need to Tx and then Rx something to
> > > > the compute device.
> > > > Isn't it offloading something? If so, why not add those offloads in the
> > > > respective subsystem to improve the subsystem (ethdev, cryptodev etc.)
> > > > feature set to adapt new features, or introduce a new subsystem
> > > > (like ML, inline baseband processing) so that it will be an opportunity
> > > > to implement the same in HW or a compute device. For example, if we take
> > > > this path, ML offloading will be application code like testpmd, which
> > > > deals with "specific" device commands (aka glorified rawdev) to deal
> > > > with specific computing device offload "COMMANDS"
> > > > (the commands will be specific to the offload device; the same code
> > > > won't run on another compute device).
> > >
> > > Having a specific features API is convenient for compatibility
> > > between devices, yes, for the set of defined features.
> > > Our approach is to start with a flexible API that the application
> > > can use to implement any processing, because with GPU programming
> > > there is no restriction on what can be achieved.
> > > This approach does not contradict what you propose;
> > > it does not prevent extending existing classes.
> >
> > It does prevent extending the existing classes, as no one is going to
> > extend them if there is a path of not doing so.
>
> I disagree. A specific API is more convenient for some tasks,
> so there is an incentive to define or extend specific device class APIs.
> But it should not forbid doing custom processing.

This is the same as the raw device in DPDK, where the device personality is
not defined. Even if we define another API, if the personality is not defined
it ends up similar to the raw device, i.e. similar to rawdev enqueue and
dequeue.

To summarize,

1) My _personal_ preference is to have specific subsystems to improve DPDK
instead of the raw device kind of path.
2) If the device personality is not defined, use rawdev.
3) Not all computing devices use a "communication flag" and "communication
list" kind of structure. If we are targeting a generic computing device, that
is not a portable scheme. If "communication flag" and "communication list"
are the right kind of mechanism for GPU abstraction, then we can have a
separate library for GPU communication, specific to GPU <-> DPDK
communication needs and explicitly for GPU.

I think generic DPDK applications like testpmd should not be polluted with
device-specific functions; calling device-specific messages from the
application makes the application run on only one device. I don't have a
strong opinion (except on standardizing "communication flag" and
"communication list" as the generic computing device communication mechanism)
if others think it is OK to do it that way in DPDK.

>
> > If an application can run only on a specific device, it is similar to
> > a raw device, where the device definition is not defined
> > (i.e. the JOB metadata is not defined and is specific to the device).
> >
> > > > Just my _personal_ preference is to have specific subsystems to
> > > > improve DPDK instead of the raw device kind of path. If we decide on
> > > > another path as a community it is _fine_ too (from a _project manager_
> > > > point of view it will be an easy path to dump SDK stuff into DPDK
> > > > without introducing the pain of the subsystem nor improving DPDK).
> > >
> > > Adding a new class API is also improving DPDK.
> >
> > But the class is similar to the rawdev class. The reason I say that:
> > job submission and response can be abstracted as enqueue/dequeue APIs.
> > Task/job metadata is specific to compute devices (and it cannot be
> > generalized). If we can generalize it, it makes sense to have a new class
> > that does that "specific function".
>
> Computing device programming is already generalized with languages like OpenCL.
> We should not try to reinvent the same.
> We are just trying to properly integrate the concept in DPDK
> and allow building on top of it.

See above.
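For concreteness, the "communication flag" mechanism debated above amounts to
a CPU-written, device-polled word placed in memory visible to both sides. A
minimal CPU-side sketch follows; nothing in it is the final hcdev API, and
visible_alloc() is only a plain-malloc stand-in for the RFC's "allocate on the
CPU but visible from the device" call:

/* Minimal sketch of the "communication flag" idea, CPU side only.
 * visible_alloc() is a stand-in: the RFC's real API for allocating CPU
 * memory visible from the device is not settled, so plain malloc() is
 * used purely to keep the sketch self-contained.
 */
#include <stdint.h>
#include <stdlib.h>

static void *
visible_alloc(size_t len)
{
    return malloc(len);    /* stand-in for a device-visible allocation */
}

static volatile uint32_t *rx_ready;    /* CPU-written, device-polled flag */

static int
signal_device(void)
{
    rx_ready = visible_alloc(sizeof(*rx_ready));
    if (rx_ready == NULL)
        return -1;

    *rx_ready = 0;
    /* ... a persistent device kernel is launched elsewhere and spins on
     * *rx_ready until the CPU flips it ... */

    /* CPU side: data is in place (e.g. packets landed in device memory),
     * release the device kernel by setting the flag. */
    *rx_ready = 1;
    return 0;
}

The open question, as point 3 above says, is whether this flag/list style of
signalling generalizes beyond GPUs.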
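The rawdev path mentioned in the summary can be sketched in a similar way. The
rte_rawdev_* calls below are the existing API; struct ml_job and its fields
are hypothetical, driver-defined metadata, which is exactly the "personality
is not defined" situation:

/* Sketch: posting an opaque job to a compute device through the existing
 * rawdev enqueue/dequeue API. The job layout is a contract between the
 * application and one specific driver, not something DPDK defines.
 */
#include <stdint.h>
#include <rte_rawdev.h>

struct ml_job {                 /* hypothetical, device-specific metadata */
    void *input;
    void *output;
    uint32_t model_id;
};

static int
submit_and_wait(uint16_t dev_id, struct ml_job *job)
{
    struct rte_rawdev_buf job_buf = { .buf_addr = job };
    struct rte_rawdev_buf *bufs[1] = { &job_buf };
    int ret;

    ret = rte_rawdev_enqueue_buffers(dev_id, bufs, 1, NULL);
    if (ret < 0)
        return ret;

    /* Busy-poll for the completed job; a real application could instead
     * receive the completion through an event device as described above. */
    do {
        ret = rte_rawdev_dequeue_buffers(dev_id, bufs, 1, NULL);
    } while (ret == 0);

    return ret < 0 ? ret : 0;
}

Whether that per-driver job contract should instead be lifted into a proper
class API (ML, inline baseband processing, ...) is the question this thread is
really about.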