DPDK patches and discussions
From: Jerin Jacob <jerinjacobk@gmail.com>
To: "Medvedkin, Vladimir" <vladimir.medvedkin@intel.com>
Cc: fengchengwen <fengchengwen@huawei.com>, sburla@marvell.com,
	anoobj@marvell.com, Anatoly Burakov <anatoly.burakov@intel.com>,
	dev@dpdk.org, Kevin Laatz <kevin.laatz@intel.com>,
	Bruce Richardson <bruce.richardson@intel.com>
Subject: Re: [PATCH v1 1/3] dmadev: add inter-domain operations
Date: Thu, 23 Nov 2023 10:54:49 +0530
Message-ID: <CALBAE1NeCqaTdhzq7_VGNGTG=oc291BJe9kUSrx1_okDrwQJdg@mail.gmail.com>
In-Reply-To: <3504aaf1-b4e0-44f5-883a-5e4bc0197283@intel.com>

On Fri, Oct 27, 2023 at 7:16 PM Medvedkin, Vladimir
<vladimir.medvedkin@intel.com> wrote:
>
> Hi Satananda, Anoob, Chengwen, Jerin, all,
>
> After a number of internal discussions, we have decided to postpone
> this feature/patchset until the next release.
>
 >[Satananda] Have you considered extending rte_dma_port_param and
> rte_dma_vchan_conf to represent inter-domain memory transfer setup as a
> separate port type, like RTE_DMA_PORT_INTER_DOMAIN?
>
 >[Anoob] Can we move this 'src_handle' and 'dst_handle' registration to
> rte_dma_vchan_setup, so that 'src_handle' and 'dst_handle' can be
> configured in the control path and the existing datapath APIs can work as-is.
>
 >[Jerin] Or move src_handle/dst_handle to the vchan config
>
> We've listened to the feedback on the implementation and have prototyped
> a vchan-based interface. This has a number of advantages and
> disadvantages, both in terms of API usage and in terms of our specific
> driver.
>
> Setting up inter-domain operations as separate vchans allows us to store
> data inside the PMD and not duplicate any API paths, so having multiple
> vchans addresses that problem. However, this also means that any new
> vchans added while the PMD is active (such as attaching to a new

This could be mitigated by setting up the maximum number of vchans up
front, before start(), and then using them on demand.
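
For example (a minimal sketch against the current dmadev control-path
API; MAX_PEERS is a made-up bound, and the inter-domain attributes
themselves are left out, since that part of the API is still under
discussion):

#include <rte_dmadev.h>

#define MAX_PEERS 16 /* hypothetical upper bound on peer processes */

static int
dma_setup_max_vchans(int16_t dev_id)
{
	struct rte_dma_conf dev_conf = { .nb_vchans = MAX_PEERS };
	struct rte_dma_vchan_conf vconf = {
		.direction = RTE_DMA_DIR_MEM_TO_MEM,
		.nb_desc = 1024,
	};
	uint16_t vchan;
	int ret;

	ret = rte_dma_configure(dev_id, &dev_conf);
	if (ret < 0)
		return ret;

	/* Set up every vchan before start(); unused ones sit idle and
	 * can be handed to newly attached processes later, without a
	 * stop()/reconfigure cycle. */
	for (vchan = 0; vchan < MAX_PEERS; vchan++) {
		ret = rte_dma_vchan_setup(dev_id, vchan, &vconf);
		if (ret < 0)
			return ret;
	}

	return rte_dma_start(dev_id);
}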

> process) will have to be gated by start/stop. This is probably fine from
> an API point of view, but a hassle for the user (previously, we could
> have just started using the new inter-domain handle right away).
>
> Another usability issue with the multiple-vchan approach is that each
> vchan will have its own enqueue/submit/completion cycle, so any use case
> relying on one thread communicating with many processes will have to
> process each vchan separately, instead of everything going into one
> vchan. Again, this looks fine API-wise, but it is a hassle for the user,
> since it requires calling submit and completion for each vchan, and in
> some cases maintaining some kind of reordering queue. (On the other
> hand, it would be much easier to separate operations intended for
> different processes with this approach, so perhaps this is not such a
> big issue.)
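
For illustration, the per-vchan cycle described above would look
roughly like this with the existing dmadev fast-path API (a minimal
sketch; the one-vchan-per-peer mapping, nb_peers, and the address
arrays are assumptions):

#include <stdbool.h>
#include <rte_dmadev.h>

/* One thread serving nb_peers processes, one vchan per peer: each
 * vchan needs its own enqueue/submit/completed cycle. */
static void
service_peers(int16_t dev_id, uint16_t nb_peers,
	      const rte_iova_t *src, const rte_iova_t *dst, uint32_t len)
{
	uint16_t vchan, last_idx;
	bool has_error;

	for (vchan = 0; vchan < nb_peers; vchan++) {
		if (rte_dma_copy(dev_id, vchan, src[vchan], dst[vchan],
				 len, 0) < 0)
			continue; /* ring for this vchan is full */
		rte_dma_submit(dev_id, vchan); /* doorbell per vchan */
	}

	/* Completions are also polled per vchan; results from different
	 * peers may need a reordering queue on top of this. */
	for (vchan = 0; vchan < nb_peers; vchan++)
		rte_dma_completed(dev_id, vchan, 1, &last_idx, &has_error);
}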

IMO, the design principle behind vchan was:
- A single HW queue serves N vchans.
- A vchan is nothing but a template: in the slow path it either creates
the desired HW instruction format for the fast path to reuse, or writes
some slow-path registers to define the attributes of the vchan.

IMO, the above-mentioned usability constraints will exist in every PMD,
since a vchan muxes a single HW queue.

IMO, the decision between vchan and a fast-path API could come down to:
a) The number of vchans required - here we are using this for VM-to-VM
or container-to-container copy, so I think the count is limited.
b) HW support - on some HW, in order to reduce the HW descriptor size,
some features can be configured only in the slow path (they cannot be
changed at runtime without reconfiguring the vchan).
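
To make the vchan-based direction concrete, below is a purely
hypothetical sketch of binding the handles at vchan setup time, so that
the existing fast-path API is reused unchanged; the src_handle and
dst_handle fields (and their types) do not exist in rte_dma_vchan_conf
today and are invented here for illustration only:

/* Hypothetical slow-path binding of inter-domain handles to a vchan.
 * NOTE: src_handle/dst_handle are NOT part of DPDK's
 * struct rte_dma_vchan_conf; this only sketches the proposal. */
struct rte_dma_vchan_conf conf = {
	.direction = RTE_DMA_DIR_MEM_TO_MEM,
	.nb_desc = 1024,
	.src_handle = local_handle, /* handle for our own memory domain */
	.dst_handle = peer_handle,  /* handle obtained from the peer */
};
ret = rte_dma_vchan_setup(dev_id, peer_vchan, &conf);
/* The fast path then remains rte_dma_copy()/submit()/completed(). */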


Thread overview: 12+ messages
2023-08-11 16:14 [PATCH v1 0/3] Add support for inter-domain DMA operations Anatoly Burakov
2023-08-11 16:14 ` [PATCH v1 1/3] dmadev: add inter-domain operations Anatoly Burakov
2023-08-18  8:08   ` [EXT] " Anoob Joseph
2023-10-08  2:33   ` fengchengwen
2023-10-09  5:05     ` Jerin Jacob
2023-10-27 13:46       ` Medvedkin, Vladimir
2023-11-23  5:24         ` Jerin Jacob [this message]
2023-08-11 16:14 ` [PATCH v1 2/3] dma/idxd: implement " Anatoly Burakov
2023-08-11 16:14 ` [PATCH v1 3/3] dma/idxd: add API to create and attach to window Anatoly Burakov
2023-08-14  4:39   ` Jerin Jacob
2023-08-14  9:55     ` Burakov, Anatoly
2023-08-15 19:20 ` [EXT] [PATCH v1 0/3] Add support for inter-domain DMA operations Satananda Burla
