DPDK patches and discussions
From: Jerin Jacob <jerinjacobk@gmail.com>
To: Thomas Monjalon <thomas@monjalon.net>
Cc: dpdk-dev <dev@dpdk.org>,
	Stephen Hemminger <stephen@networkplumber.org>,
	 David Marchand <david.marchand@redhat.com>,
	 Andrew Rybchenko <andrew.rybchenko@oktetlabs.ru>,
	Haiyue Wang <haiyue.wang@intel.com>,
	 Honnappa Nagarahalli <honnappa.nagarahalli@arm.com>,
	Jerin Jacob <jerinj@marvell.com>,
	 Ferruh Yigit <ferruh.yigit@intel.com>,
	techboard@dpdk.org
Subject: Re: [dpdk-dev] [RFC PATCH v2 0/7] heterogeneous computing library
Date: Sat, 31 Jul 2021 12:36:55 +0530
Message-ID: <CALBAE1O=+g-LpwhSVb=Km96h-v0-evwem4dwvKYUb-7og3U1ow@mail.gmail.com>
In-Reply-To: <20210730135533.417611-1-thomas@monjalon.net>

On Fri, Jul 30, 2021 at 7:25 PM Thomas Monjalon <thomas@monjalon.net> wrote:
>
> From: Elena Agostini <eagostini@nvidia.com>
>
> In a heterogeneous computing system, processing is not done only in the CPU.
> Some tasks can be delegated to devices working in parallel.
>
> The goal of this new library is to enhance the collaboration between
> DPDK, which is primarily a CPU framework, and other types of devices like GPUs.
>
> When mixing network activity with task processing on a non-CPU device,
> there may be a need for the CPU to communicate with the device
> in order to manage memory, synchronize operations, exchange info, etc.
>
> This library provides a number of new features:
> - Interoperability with device-specific libraries through generic handlers
> - Possibility to allocate and free memory on the device
> - Possibility to allocate and free memory on the CPU but visible from the device
> - Communication functions to enhance the dialog between the CPU and the device
>
> The infrastructure is prepared to welcome drivers in drivers/hc/,
> such as the upcoming NVIDIA one, implementing the hcdev API.
>
> Some parts are not complete:
>   - locks
>   - memory allocation table
>   - memory freeing
>   - guide documentation
>   - integration in devtools/check-doc-vs-code.sh
>   - unit tests
>   - integration in testpmd to enable Rx/Tx to/from GPU memory.

Since the items above are the crux of the following text, I will start
from this point.

+ Techboard

I can give my honest feedback on this.

I can map this to similar work in Marvell HW, where we do machine
learning as a compute offload
on a different class of CPU.

In terms of the RFC patch features:

1) Memory API - the use cases are aligned.
2) Communication flag and communication list -
our structure is completely different: we use an HW-ring kind of
interface to post jobs to the compute device, and
the job completion results are delivered through the event device.
It is kind of similar to the DMA API that has been discussed on the mailing list.
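
To make that model concrete, below is a minimal sketch in C. Only
rte_event_dequeue_burst() is the real DPDK eventdev API; the job
descriptor and compute_queue_enqueue() are hypothetical stand-ins
for a vendor driver's HW-ring interface.

#include <rte_common.h>
#include <rte_eventdev.h>

struct compute_job {            /* hypothetical job descriptor */
        void *input;
        void *output;
        uint32_t len;
};

/* Hypothetical driver call: post a job on the device HW ring. */
extern void compute_queue_enqueue(struct compute_job *job);

static void
run_compute_offload(uint8_t evdev_id, uint8_t port_id,
                    struct compute_job *job)
{
        struct rte_event ev[8];
        uint16_t i, nb;

        /* Post the job to the compute device. */
        compute_queue_enqueue(job);

        /* The completion result comes back through the event device. */
        nb = rte_event_dequeue_burst(evdev_id, port_id, ev,
                                     RTE_DIM(ev), 0);
        for (i = 0; i < nb; i++) {
                struct compute_job *done = ev[i].event_ptr;
                (void)done; /* consume done->output here */
        }
}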


Now the bigger question is: why do we need to Tx to, and then Rx from,
the compute device at all?
Isn't it an offload? If so, why not add those offloads to the
respective subsystems
to improve their feature sets (ethdev, cryptodev, etc.), or
introduce new subsystems (like ML, inline baseband processing), so that
there is an opportunity to
implement the same in HW or on a compute device. For example, if we take
this path, ML offloading will
be application code like testpmd, which deals with "specific" device
commands (aka a glorified rawdev)
to drive a specific computing device's offload "COMMANDS".
(The commands will be specific to the offload device; the same code won't
run on other compute devices.)

Just my _personal_ preference: have specific subsystems to
improve DPDK instead of a raw-device kind of
path. If we decide on another path as a community, that is _fine_ too (from a
_project manager_ point of view it will be an easy path to dump SDK
stuff into DPDK, without the pain of introducing a subsystem and
without improving DPDK).
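
To illustrate the difference, a purely hypothetical sketch of the two
paths: rte_rawdev_enqueue_buffers() is the existing rawdev API, while
the vendor command layout and rte_mldev_infer() are invented here for
illustration only.

#include <rte_rawdev.h>

/* Path A: rawdev-style. The application must know a vendor-specific
 * command layout (hypothetical below), so the same code cannot run
 * on another compute device. */
struct my_vendor_ml_cmd {
        uint32_t opcode;        /* e.g. a vendor ML_INFER command */
        uint64_t model_handle;
        uint64_t input_iova;
        uint64_t output_iova;
};

static int
ml_infer_rawdev(uint16_t dev_id, struct rte_rawdev_buf *buf)
{
        /* buf->buf_addr points to a struct my_vendor_ml_cmd. */
        return rte_rawdev_enqueue_buffers(dev_id, &buf, 1, NULL);
}

/* Path B: a dedicated subsystem with typed semantics; any driver
 * implementing the ops runs the same application code.
 * rte_mldev_infer() is a hypothetical API name. */
extern int rte_mldev_infer(uint16_t dev_id, void *model,
                           void *input, void *output);

static int
ml_infer_subsystem(uint16_t dev_id, void *model, void *in, void *out)
{
        return rte_mldev_infer(dev_id, model, in, out);
}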

>
> Below is pseudo-code giving an example of how to use the functions
> of this library in a CUDA application.
>
>
> Elena Agostini (4):
>   hcdev: introduce heterogeneous computing device library
>   hcdev: add memory API
>   hcdev: add communication flag
>   hcdev: add communication list
>
> Thomas Monjalon (3):
>   hcdev: add event notification
>   hcdev: add child device representing a device context
>   hcdev: support multi-process
>
>  .gitignore                             |   1 +
>  MAINTAINERS                            |   6 +
>  doc/api/doxy-api-index.md              |   1 +
>  doc/api/doxy-api.conf.in               |   1 +
>  doc/guides/conf.py                     |   8 +
>  doc/guides/hcdevs/features/default.ini |  13 +
>  doc/guides/hcdevs/index.rst            |  11 +
>  doc/guides/hcdevs/overview.rst         |  11 +
>  doc/guides/index.rst                   |   1 +
>  doc/guides/prog_guide/hcdev.rst        |   5 +
>  doc/guides/prog_guide/index.rst        |   1 +
>  doc/guides/rel_notes/release_21_08.rst |   5 +
>  drivers/hc/meson.build                 |   4 +
>  drivers/meson.build                    |   1 +
>  lib/hcdev/hcdev.c                      | 789 +++++++++++++++++++++++++
>  lib/hcdev/hcdev_driver.h               |  96 +++
>  lib/hcdev/meson.build                  |  12 +
>  lib/hcdev/rte_hcdev.h                  | 592 +++++++++++++++++++
>  lib/hcdev/version.map                  |  35 ++
>  lib/meson.build                        |   1 +
>  20 files changed, 1594 insertions(+)
>  create mode 100644 doc/guides/hcdevs/features/default.ini
>  create mode 100644 doc/guides/hcdevs/index.rst
>  create mode 100644 doc/guides/hcdevs/overview.rst
>  create mode 100644 doc/guides/prog_guide/hcdev.rst
>  create mode 100644 drivers/hc/meson.build
>  create mode 100644 lib/hcdev/hcdev.c
>  create mode 100644 lib/hcdev/hcdev_driver.h
>  create mode 100644 lib/hcdev/meson.build
>  create mode 100644 lib/hcdev/rte_hcdev.h
>  create mode 100644 lib/hcdev/version.map
>
>
>
> ////////////////////////////////////////////////////////////////////////
> ///// HCDEV library + CUDA functions
> ////////////////////////////////////////////////////////////////////////
> #define GPU_PAGE_SHIFT 16
> #define GPU_PAGE_SIZE (1UL << GPU_PAGE_SHIFT)
>
> int main() {
>     struct rte_hcdev_flag quit_flag;
>     struct rte_hcdev_comm_list *comm_list;
>     int nb_rx = 0;
>     int comm_list_entry = 0;
>     struct rte_mbuf * rx_mbufs[max_rx_mbufs];
>     cudaStream_t cstream;
>     struct rte_mempool *mpool_payload, *mpool_header;
>     struct rte_pktmbuf_extmem ext_mem;
>     int16_t dev_id;
>
>     /* Initialize CUDA objects (cstream, context, etc.). */
>     /* Use hcdev library to register a new CUDA context if any */
>     /* Let's assume the application wants to use the default context of the GPU device 0 */
>     dev_id = 0;
>
>     /* Create an external memory mempool using memory allocated on the GPU. */
>     ext_mem.elt_size = mbufs_headroom_size;
>     ext_mem.buf_len = RTE_ALIGN_CEIL(mbufs_num * ext_mem.elt_size, GPU_PAGE_SIZE);
>     ext_mem.buf_iova = RTE_BAD_IOVA;
>     ext_mem.buf_ptr = rte_hcdev_malloc(dev_id, ext_mem.buf_len, 0);
>     rte_extmem_register(ext_mem.buf_ptr, ext_mem.buf_len, NULL, ext_mem.buf_iova, GPU_PAGE_SIZE);
>     rte_dev_dma_map(rte_eth_devices[l2fwd_port_id].device, ext_mem.buf_ptr, ext_mem.buf_iova, ext_mem.buf_len);
>     mpool_payload = rte_pktmbuf_pool_create_extbuf("gpu_mempool", mbufs_num,
>                                                     0, 0, ext_mem.elt_size,
>                                                     rte_socket_id(), &ext_mem, 1);
>
>     /*
>      * Create the CPU - device communication flag. With this flag, the CPU can tell the CUDA kernel
>      * to exit from its main loop.
>      */
>     rte_hcdev_comm_create_flag(dev_id, &quit_flag, RTE_HCDEV_COMM_FLAG_CPU);
>     rte_hcdev_comm_set_flag(&quit_flag, 0);
>
>     /*
>      * Create CPU - device communication list. Each entry of this list will be populated by the CPU
>      * with a new set of received mbufs that the CUDA kernel has to process.
>      */
>     comm_list = rte_hcdev_comm_create_list(dev_id, num_entries);
>
>     /* A very simple CUDA kernel with just 1 CUDA block and RTE_HCDEV_COMM_LIST_PKTS_MAX CUDA threads. */
>     cuda_kernel_packet_processing<<<1, RTE_HCDEV_COMM_LIST_PKTS_MAX, 0, cstream>>>(quit_flag.ptr, comm_list, num_entries, ...);
>
>     /*
>      * For simplicity, the CPU here receives only 2 bursts of mbufs.
>      * In a real application, network activity and device processing should overlap.
>      */
>     nb_rx = rte_eth_rx_burst(port_id, queue_id, &rx_mbufs[0], max_rx_mbufs);
>     rte_hcdev_comm_populate_list_pkts(&comm_list[0], rx_mbufs, nb_rx);
>     nb_rx = rte_eth_rx_burst(port_id, queue_id, &rx_mbufs[0], max_rx_mbufs);
>     rte_hcdev_comm_populate_list_pkts(&comm_list[1], rx_mbufs, nb_rx);
>
>     /*
>      * CPU waits for the completion of the packets' processing on the CUDA kernel
>      * and then it does a cleanup of the received mbufs.
>      */
>     while (rte_hcdev_comm_cleanup_list(&comm_list[0]));
>     while (rte_hcdev_comm_cleanup_list(&comm_list[1]));
>
>     /* CPU notifies the CUDA kernel that it has to terminate */
>     rte_hcdev_comm_set_flag(&quit_flag, 1);
>
>     /* hcdev objects cleanup/destruction */
>     /* CUDA cleanup */
>     /* DPDK cleanup */
>
>     return 0;
> }
>
> ////////////////////////////////////////////////////////////////////////
> ///// CUDA kernel
> ////////////////////////////////////////////////////////////////////////
>
> __global__ void cuda_kernel_packet_processing(uint32_t *quit_flag_ptr, struct rte_hcdev_comm_list *comm_list, int comm_list_entries) {
>     int comm_list_index = 0;
>     struct rte_hcdev_comm_pkt *pkt_list = NULL;
>
>     /* Do some pre-processing operations. */
>
>     /* GPU kernel keeps checking this flag to know if it has to quit or wait for more packets. */
>     while (*quit_flag_ptr == 0)
>     {
>         if (comm_list[comm_list_index].status != RTE_HCDEV_COMM_LIST_READY)
>             continue;
>
>         if (threadIdx.x < comm_list[comm_list_index].num_pkts)
>         {
>             /* Each CUDA thread processes a different packet of the entry. */
>             pkt_list = comm_list[comm_list_index].pkt_list;
>             packet_processing(pkt_list[threadIdx.x].addr, pkt_list[threadIdx.x].size, ...);
>         }
>         __threadfence();
>         __syncthreads();
>
>         /* Mark the entry as done (status name assumed) so the CPU-side cleanup can complete. */
>         if (threadIdx.x == 0)
>             comm_list[comm_list_index].status = RTE_HCDEV_COMM_LIST_DONE;
>
>         /* Wait for new packets on the next communication list entry. */
>         comm_list_index = (comm_list_index + 1) % comm_list_entries;
>     }
>
>     /* Do some post-processing operations. */
> }
>
>
> --
> 2.31.1
>
