From: Jerin Jacob <jerinjacobk@gmail.com>
To: Elena Agostini <eagostini@nvidia.com>
Cc: "Wang, Haiyue" <haiyue.wang@intel.com>,
NBU-Contact-Thomas Monjalon <thomas@monjalon.net>,
Jerin Jacob <jerinj@marvell.com>, dpdk-dev <dev@dpdk.org>,
Stephen Hemminger <stephen@networkplumber.org>,
David Marchand <david.marchand@redhat.com>,
Andrew Rybchenko <andrew.rybchenko@oktetlabs.ru>,
Honnappa Nagarahalli <honnappa.nagarahalli@arm.com>,
"Yigit, Ferruh" <ferruh.yigit@intel.com>,
"techboard@dpdk.org" <techboard@dpdk.org>
Subject: Re: [dpdk-dev] [RFC PATCH v2 0/7] heterogeneous computing library
Date: Thu, 2 Sep 2021 18:42:00 +0530
Message-ID: <CALBAE1OUChHEf7jGdKJPO8Ycp-yc5GuSzZwy6SQpa=kia6tq9A@mail.gmail.com>
In-Reply-To: <DM6PR12MB410752B1E271A734918E8D5ACDCD9@DM6PR12MB4107.namprd12.prod.outlook.com>
On Wed, Sep 1, 2021 at 9:05 PM Elena Agostini <eagostini@nvidia.com> wrote:
>
>
> > -----Original Message-----
> > From: Wang, Haiyue <haiyue.wang@intel.com>
> > Sent: Sunday, August 29, 2021 7:33 AM
> > To: Jerin Jacob <jerinjacobk@gmail.com>; NBU-Contact-Thomas Monjalon
> > <thomas@monjalon.net>
> > Cc: Jerin Jacob <jerinj@marvell.com>; dpdk-dev <dev@dpdk.org>; Stephen
> > Hemminger <stephen@networkplumber.org>; David Marchand
> > <david.marchand@redhat.com>; Andrew Rybchenko
> > <andrew.rybchenko@oktetlabs.ru>; Honnappa Nagarahalli
> > <honnappa.nagarahalli@arm.com>; Yigit, Ferruh <ferruh.yigit@intel.com>;
> > techboard@dpdk.org; Elena Agostini <eagostini@nvidia.com>
> > Subject: RE: [dpdk-dev] [RFC PATCH v2 0/7] heterogeneous computing library
> >
> >
> >
> > > -----Original Message-----
> > > From: Jerin Jacob <jerinjacobk@gmail.com>
> > > Sent: Friday, August 27, 2021 20:19
> > > To: Thomas Monjalon <thomas@monjalon.net>
> > > Cc: Jerin Jacob <jerinj@marvell.com>; dpdk-dev <dev@dpdk.org>; Stephen
> > Hemminger
> > > <stephen@networkplumber.org>; David Marchand
> > <david.marchand@redhat.com>; Andrew Rybchenko
> > > <andrew.rybchenko@oktetlabs.ru>; Wang, Haiyue <haiyue.wang@intel.com>;
> > Honnappa Nagarahalli
> > > <honnappa.nagarahalli@arm.com>; Yigit, Ferruh <ferruh.yigit@intel.com>;
> > techboard@dpdk.org; Elena
> > > Agostini <eagostini@nvidia.com>
> > > Subject: Re: [dpdk-dev] [RFC PATCH v2 0/7] heterogeneous computing library
> > >
> > > On Fri, Aug 27, 2021 at 3:14 PM Thomas Monjalon <thomas@monjalon.net> wrote:
> > > >
> > > > 31/07/2021 15:42, Jerin Jacob:
> > > > > On Sat, Jul 31, 2021 at 1:51 PM Thomas Monjalon <thomas@monjalon.net> wrote:
> > > > > > 31/07/2021 09:06, Jerin Jacob:
> > > > > > > On Fri, Jul 30, 2021 at 7:25 PM Thomas Monjalon <thomas@monjalon.net> wrote:
> > > > > > > > From: Elena Agostini <eagostini@nvidia.com>
> > > > > > > >
> > > > > > > > In a heterogeneous computing system, processing is not only in the CPU.
> > > > > > > > Some tasks can be delegated to devices working in parallel.
> > > > > > > >
> > > > > > > > The goal of this new library is to enhance the collaboration between
> > > > > > > > DPDK, which is primarily a CPU framework, and other types of devices like GPUs.
> > > > > > > >
> > > > > > > > When mixing network activity with task processing on a non-CPU device,
> > > > > > > > there may be a need for the CPU to communicate with the device
> > > > > > > > in order to manage memory, synchronize operations, exchange info, etc.
> > > > > > > >
> > > > > > > > This library provides a number of new features:
> > > > > > > > - Interoperability with device-specific libraries through generic handlers
> > > > > > > > - The ability to allocate and free memory on the device
> > > > > > > > - The ability to allocate and free memory on the CPU that is visible from the device
> > > > > > > > - Communication functions to enhance the dialog between the CPU and the device
> > > > > > > >
> > > > > > > > The infrastructure is prepared to welcome drivers in drivers/hc/,
> > > > > > > > such as the upcoming NVIDIA one, implementing the hcdev API.
> > > > > > > >
> > > > > > > > Some parts are not complete:
> > > > > > > > - locks
> > > > > > > > - memory allocation table
> > > > > > > > - memory freeing
> > > > > > > > - guide documentation
> > > > > > > > - integration in devtools/check-doc-vs-code.sh
> > > > > > > > - unit tests
> > > > > > > > - integration in testpmd to enable Rx/Tx to/from GPU memory.
> > > > > > >
> > > > > > > Since the above line is the crux of the following text, I will start
> > > > > > > from this point.
> > > > > > >
> > > > > > > + Techboard
> > > > > > >
> > > > > > > I can give my honest feedback on this.
> > > > > > >
> > > > > > > I can map similar stuff in Marvell HW, where we do machine learning
> > > > > > > as compute offload
> > > > > > > on a different class of CPU.
> > > > > > >
> > > > > > > In terms of RFC patch features
> > > > > > >
> > > > > > > 1) memory API - Use cases are aligned
> > > > > > > 2) communication flag and communication list
> > > > > > > Our structure is completely different: we use a HW-ring kind of
> > > > > > > interface to post jobs to the compute interface, and
> > > > > > > the job completion result comes back through the event device.
> > > > > > > Kind of similar to the DMA API that has been discussed on the mailing list.
> > > > > >
> > > > > > Interesting.
> > > > >
> > > > > It is hard to generalize the communication mechanism.
> > > > > Do other GPU vendors have a similar communication mechanism? AMD, Intel?
> > > >
> > > > I don't know who to ask in AMD & Intel. Any ideas?
> > >
> > > Good question.
> > >
> > > At least in Marvell HW, regarding the communication flag and communication list,
> > > our structure is completely different: we use a HW-ring kind of
> > > interface to post jobs to the compute interface, and
> > > the job completion result comes back through the event device,
> > > kind of similar to the DMA API that has been discussed on the mailing list.
>
> Please correct me if I'm wrong but what you are describing is a specific way
> to submit work on the device. Communication flag/list here is a direct data
> communication between the CPU and some kind of workload (e.g. GPU kernel)
> that's already running on the device.
Exactly. What I meant is that the communication flag/list is not generic
enough to express a generic compute device. If all GPUs work this way,
we could make the library name GPU-specific and add a GPU-specific
communication mechanism.
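For readers following the thread, the pattern under debate can be sketched in plain C11. This is a hypothetical mock, not the actual hcdev API: the names `comm_flag_set`/`comm_flag_wait` and the status values are illustrative, and the device side is simulated by an inline function call rather than a concurrently running kernel.

```c
#include <stdatomic.h>

/* Flag in CPU-visible memory shared between the host and a device-side task.
 * Status values are illustrative: 0 = idle, 1 = data ready, 2 = done. */
struct comm_flag {
    atomic_int status;
};

/* Publish a new status for the other side to observe. */
static void comm_flag_set(struct comm_flag *f, int v)
{
    atomic_store_explicit(&f->status, v, memory_order_release);
}

/* Spin until the flag reaches the expected status; a persistent GPU kernel
 * would busy-poll device-visible memory in much the same way. */
static void comm_flag_wait(struct comm_flag *f, int v)
{
    while (atomic_load_explicit(&f->status, memory_order_acquire) != v)
        ;
}

/* Stand-in for a workload already running on the device: it observes
 * "data ready" and reports completion. */
static void device_task(struct comm_flag *f)
{
    comm_flag_wait(f, 1);
    comm_flag_set(f, 2);
}

/* CPU-side view of one handshake. In reality device_task() would run
 * concurrently on the device; here it is called inline for simplicity. */
int run_handshake(void)
{
    struct comm_flag f = { 0 };

    comm_flag_set(&f, 1);   /* CPU: data is ready for the device */
    device_task(&f);        /* device: consume and acknowledge   */
    comm_flag_wait(&f, 2);  /* CPU: wait for device completion   */
    return atomic_load(&f.status);
}
```

The whole mechanism assumes a workload that is already resident on the device and polling; that assumption is exactly what may not hold for non-GPU compute devices.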
>
> The rationale here is that:
> - some work has been already submitted on the device and it's running
> - CPU needs a real-time direct interaction through memory with the device
> - the workload on the device needs some info from the CPU it can't get at submission time
>
> This is good enough for NVIDIA and AMD GPU.
> Need to double check for Intel GPU.
>
> > >
> > > >
> > > > > > > Now the bigger question is why we need to Tx to, and then Rx from,
> > > > > > > the compute device.
> > > > > > > Isn't it an offload? If so, why not add those offloads in the
> > > > > > > respective subsystem
> > > > > > > to improve the subsystem (ethdev, cryptodev etc.) feature set for
> > > > > > > new features, or
> > > > > > > introduce a new subsystem (like ML, inline baseband processing) so that
> > > > > > > there will be an opportunity to
> > > > > > > implement the same in HW or a compute device. For example, if we take
> > > > > > > this path, ML offloading will
> > > > > > > be application code like testpmd, which deals with "specific" device
> > > > > > > commands (aka a glorified rawdev)
> > > > > > > to handle specific computing-device offload "COMMANDS"
> > > > > > > (the commands will be specific to the offload device; the same code won't
> > > > > > > run on another compute device).
> > > > > >
> > > > > > Having specific features API is convenient for compatibility
> > > > > > between devices, yes, for the set of defined features.
> > > > > > Our approach is to start with a flexible API that the application
> > > > > > can use to implement any processing because with GPU programming,
> > > > > > there is no restriction on what can be achieved.
> > > > > > This approach does not contradict what you propose,
> > > > > > it does not prevent extending existing classes.
> > > > >
> > > > > It does prevent extending the existing classes, as no one is going to
> > > > > extend them when there is a path that avoids doing so.
> > > >
> > > > I disagree. Specific API is more convenient for some tasks,
> > > > so there is an incentive to define or extend specific device class APIs.
> > > > But it should not forbid doing custom processing.
> > >
> > > This is the same as the raw device in DPDK, where the device
> > > personality is not defined.
> > >
> > > Even if we define another API, if the personality is not defined,
> > > it ends up similar to the raw device, i.e. similar
> > > to rawdev enqueue and dequeue.
> > >
> > > To summarize,
> > >
> > > 1) My _personal_ preference is to have specific subsystems
> > > to improve the DPDK instead of the raw device kind of path.
> >
> > Something like rte_memdev to focus on device (GPU) memory management?
> >
> > The new DPDK auxiliary bus may make life easier for the complex
> > heterogeneous computing library. ;-)
>
> To get a concrete idea of the best and most comprehensive
> approach, we should start with something that's flexible and simple enough.
>
> A dedicated library is a good starting point: it is easy to implement and embed in DPDK applications,
> it is isolated from other components, and users can play with it and learn from the code.
> As a second step, we can think about embedding the functionality in some other way
> within DPDK (e.g. splitting memory management and communication features).
>
> >
> > > 2) If the device personality is not defined, use rawdev
> > > 3) Not all computing devices use a "communication flag" and
> > > "communication list"
> > > kind of structure. If we are targeting a generic computing device, then
> > > that is not a portable scheme.
> > > For GPU abstraction, if "communication flag" and "communication list"
> > > are the right kind of mechanism,
> > > then we can have a separate library for GPU communication, specific to
> > > GPU <-> DPDK communication needs and explicit to GPUs.
> > >
> > > I think generic DPDK applications like testpmd should not
> > > be polluted with device-specific functions, i.e. calling device-specific
> > > messages from the application,
> > > which makes the application run on only one device. I don't have a
> > > strong opinion (except on
> > > standardizing "communication flag" and "communication list" as a
> > > generic computing-device
> > > communication mechanism) if others think it is OK to do it that way in DPDK.
>
> I'd like to introduce (with a dedicated option) the memory API in testpmd to
> provide an example of how to TX/RX packets using device memory.
Not sure, without embedding a sideband communication mechanism, how it can notify the
GPU and get back to the CPU. If you could share an example API sequence, that would help
us understand the level of coupling with testpmd.
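To make the question concrete, a plausible sequence might look like the pseudocode below. This is purely illustrative: `hcdev_malloc` and `mempool_from_extmem` are hypothetical names loosely modeled on the RFC, not a confirmed API, and the open point is what happens after step 4.

```c
/* Hypothetical call sequence -- illustrative names only, not a real API. */
void *gpu_buf = hcdev_malloc(dev_id, buf_size);        /* 1. allocate device memory    */
struct rte_mempool *mp = mempool_from_extmem(gpu_buf); /* 2. mbuf pool over that area  */
rte_eth_rx_queue_setup(port_id, queue_id, nb_desc,
                       socket_id, NULL, mp);           /* 3. Rx lands in device memory */
nb_rx = rte_eth_rx_burst(port_id, queue_id,
                         pkts, MAX_BURST);             /* 4. CPU sees descriptors;
                                                          payloads stay on the device */
/* 5. ??? -- how does the workload already running on the device learn that
 *    packets arrived? This is where some sideband mechanism (comm flag/list
 *    or an alternative) seems unavoidable. */
```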
>
> I agree to not embed communication flag/list features.
>
> > >
> > > >
> > > > > If an application can run only on a specific device, it is similar to
> > > > > a raw device,
> > > > > where the device personality is not defined (i.e. job metadata is not defined
> > > > > and is specific to the device).
> > > > >
> > > > > > > Just my _personal_ preference is to have specific subsystems to
> > > > > > > improve DPDK instead of a raw-device kind of
> > > > > > > path. If we decide on another path as a community, that is _fine_ too (from a
> > > > > > > _project manager_ point of view it will be an easy path to dump SDK
> > > > > > > stuff into DPDK without introducing the pain of a subsystem or
> > > > > > > improving DPDK).
> > > > > >
> > > > > > Adding a new class API is also improving DPDK.
> > > > >
> > > > > But the class is similar to the rawdev class. The reason I say that is that
> > > > > job submission and response can be abstracted as enqueue/dequeue APIs.
> > > > > Task/job metadata is specific to compute devices (and it cannot be
> > > > > generalized).
> > > > > If we generalize it, it makes sense to have a new class that does a
> > > > > "specific function".
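The abstraction described above can be sketched as a generic enqueue/dequeue pair with the job metadata left opaque, since it is device-specific. This is hypothetical mock code, not a proposed DPDK API; a real implementation would post to a HW ring and report completions through events.

```c
#include <stddef.h>

#define JOBQ_CAP 16

struct job {
    void *meta;   /* device-specific task metadata -- not generalizable */
    int   status; /* 0 = pending, 1 = completed */
};

/* Mock software ring standing in for a HW job ring. */
struct jobq {
    struct job ring[JOBQ_CAP];
    size_t head, tail;
};

/* Post a job to the compute device (mock: stores it and marks it done). */
static int job_enqueue(struct jobq *q, void *meta)
{
    if ((q->head + 1) % JOBQ_CAP == q->tail % JOBQ_CAP)
        return -1; /* ring full */
    q->ring[q->head % JOBQ_CAP] = (struct job){ .meta = meta, .status = 1 };
    q->head++;
    return 0;
}

/* Retrieve a completed job's metadata, or NULL if nothing has completed.
 * (On real HW, completion would arrive via an event device instead.) */
static void *job_dequeue(struct jobq *q)
{
    if (q->tail == q->head)
        return NULL;
    return q->ring[q->tail++ % JOBQ_CAP].meta;
}
```

The enqueue/dequeue shape is generic; everything device-specific lives behind the opaque `meta` pointer, which is exactly why the class risks collapsing into rawdev.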
> > > >
> > > > Computing-device programming is already generalized with languages like OpenCL.
> > > > We should not try to reinvent the same.
> > > > We are just trying to properly integrate the concept into DPDK
> > > > and allow building on top of it.
>
> Agree.
>
> > >
> > > See above.
> > >
> > > >
> > > >