DPDK patches and discussions
From: Maxime Coquelin <maxime.coquelin@redhat.com>
To: "Fu, Patrick" <patrick.fu@intel.com>,
	"dev@dpdk.org" <dev@dpdk.org>,
	"Xia, Chenbo" <chenbo.xia@intel.com>
Subject: Re: [dpdk-dev] [PATCH v1] vhost: support cross page buf in async data path
Date: Tue, 21 Jul 2020 10:35:24 +0200	[thread overview]
Message-ID: <a5601ef6-b869-50b7-8897-0038c160be37@redhat.com> (raw)
In-Reply-To: <BYAPR11MB37355A9E0699BD06D75E2E3284780@BYAPR11MB3735.namprd11.prod.outlook.com>

Hi Patrick,

On 7/21/20 4:57 AM, Fu, Patrick wrote:
> Hi Maxime,
> 
>> -----Original Message-----
>> From: Maxime Coquelin <maxime.coquelin@redhat.com>
>> Sent: Tuesday, July 21, 2020 12:40 AM
>> To: Fu, Patrick <patrick.fu@intel.com>; dev@dpdk.org; Xia, Chenbo
>> <chenbo.xia@intel.com>
>> Subject: Re: [PATCH v1] vhost: support cross page buf in async data path
>>
>> The title could be improved; it is not very clear IMHO.
> How about: 
> vhost: fix async copy failure on buffers crossing page boundary
> 
>> On 7/20/20 4:52 AM, patrick.fu@intel.com wrote:
>>> From: Patrick Fu <patrick.fu@intel.com>
>>>
>>> Async copy fails when a ring buffer crosses two physical pages. This patch
>>> fixes the failure by falling back to synchronous copies when cross-page
>>> buffers are given.
>>
>> Wouldn't it be possible to have the buffer split into two iovecs?
> Technically we could do that; however, it would also introduce significant overhead:
>  - overhead from adding additional logic in the vhost async data path to handle this case
>  - overhead from the DMA device consuming 2 iovecs
> On average, I don't think a DMA copy can benefit much for buffers that are split across multiple pages.
> A CPU copy should be a more suitable method.

I think we should try; that would make a cleaner implementation. I don't
think having to fall back to sync mode is a good idea, because it adds
overhead on the CPU, which is what we are trying to avoid with this async
mode.

Also, I am not convinced the overhead would be that significant, or at
least I hope not; otherwise it would mean this new path only performs
better because it takes a lot of shortcuts, like the vector path in the
Virtio PMD.

Regards,
Maxime
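
For illustration, here is a rough sketch of what such a per-page iovec split
could look like, reusing the gpa_to_hpa() and async_fill_vec() helpers visible
in the quoted patch below. The function name, parameter list, and the 4 KiB
guest-page assumption are hypothetical, not the actual DPDK implementation:

/*
 * Illustrative sketch only: split one guest buffer into host-contiguous
 * chunks so the whole copy can stay on the async (DMA) path instead of
 * falling back to the CPU copy. gpa_to_hpa() and async_fill_vec() are the
 * helpers quoted in the patch; everything else here is assumed.
 */
static __rte_always_inline int
async_fill_cross_page(struct virtio_net *dev,
		struct iovec *src_iovec, struct iovec *dst_iovec,
		uint64_t src_iova, uint64_t buf_iova, uint32_t cpy_len)
{
	int nvec = 0;

	while (cpy_len) {
		/* Largest chunk that does not cross a 4 KiB guest page. */
		uint32_t chunk = RTE_MIN(cpy_len, (uint32_t)(
				RTE_ALIGN_CEIL(buf_iova + 1, 4096) - buf_iova));
		void *hpa = (void *)(uintptr_t)gpa_to_hpa(dev, buf_iova, chunk);

		if (unlikely(!hpa))
			return -1; /* caller falls back to the sync copy */

		async_fill_vec(src_iovec + nvec,
				(void *)(uintptr_t)src_iova, chunk);
		async_fill_vec(dst_iovec + nvec, hpa, chunk);

		src_iova += chunk;
		buf_iova += chunk;
		cpy_len -= chunk;
		nvec++;
	}

	return nvec;
}

A descriptor that spans two guest pages would then simply produce two iovec
entries for the DMA engine rather than being diverted to the CPU.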

>  
>>> Fixes: cd6760da1076 ("vhost: introduce async enqueue for split ring")
>>>
>>> Signed-off-by: Patrick Fu <patrick.fu@intel.com>
>>> ---
>>>  lib/librte_vhost/virtio_net.c | 12 +++---------
>>>  1 file changed, 3 insertions(+), 9 deletions(-)
>>>
>>> diff --git a/lib/librte_vhost/virtio_net.c
>>> b/lib/librte_vhost/virtio_net.c index 1d0be3dd4..44b22a8ad 100644
>>> --- a/lib/librte_vhost/virtio_net.c
>>> +++ b/lib/librte_vhost/virtio_net.c
>>> @@ -1071,16 +1071,10 @@ async_mbuf_to_desc(struct virtio_net *dev,
>> struct vhost_virtqueue *vq,
>>>  		}
>>>
>>>  		cpy_len = RTE_MIN(buf_avail, mbuf_avail);
>>> +		hpa = (void *)(uintptr_t)gpa_to_hpa(dev,
>>> +				buf_iova + buf_offset, cpy_len);
>>>
>>> -		if (unlikely(cpy_len >= cpy_threshold)) {
>>> -			hpa = (void *)(uintptr_t)gpa_to_hpa(dev,
>>> -					buf_iova + buf_offset, cpy_len);
>>> -
>>> -			if (unlikely(!hpa)) {
>>> -				error = -1;
>>> -				goto out;
>>> -			}
>>> -
>>> +		if (unlikely(cpy_len >= cpy_threshold && hpa)) {
>>>  			async_fill_vec(src_iovec + tvec_idx,
>>>  				(void *)(uintptr_t)rte_pktmbuf_iova_offset(m,
>>>  						mbuf_offset), cpy_len);
>>>
> 
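
For background on why the async path trips over cross-page buffers at all:
gpa_to_hpa() only returns a host physical address when the whole requested
range fits inside a single contiguous guest-to-host mapping. A simplified
sketch of that kind of lookup follows (field names follow librte_vhost, but
this is not the exact upstream code):

/*
 * Simplified sketch of the guest-physical to host-physical lookup.
 * If the [gpa, gpa + size) range does not fit entirely inside one
 * contiguous mapping (e.g. the buffer crosses into a non-contiguous
 * host page), 0 is returned and, with this patch, the copy falls back
 * to the synchronous CPU path.
 */
static uint64_t
gpa_range_to_hpa(struct virtio_net *dev, uint64_t gpa, uint64_t size)
{
	uint32_t i;

	for (i = 0; i < dev->nr_guest_pages; i++) {
		struct guest_page *page = &dev->guest_pages[i];

		if (gpa >= page->guest_phys_addr &&
		    gpa + size <= page->guest_phys_addr + page->size)
			return gpa - page->guest_phys_addr +
				page->host_phys_addr;
	}

	return 0; /* range spans more than one mapping */
}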


Thread overview: 16+ messages
2020-07-20  2:52 patrick.fu
2020-07-20 16:39 ` Maxime Coquelin
2020-07-21  2:57   ` Fu, Patrick
2020-07-21  8:35     ` Maxime Coquelin [this message]
2020-07-21  9:01       ` Fu, Patrick
2020-07-24 13:49 ` [dpdk-dev] [PATCH v2] vhost: fix async copy fail on multi-page buffers patrick.fu
2020-07-27  6:33 ` [dpdk-dev] [PATCH v3] " patrick.fu
2020-07-27 13:14   ` Xia, Chenbo
2020-07-28  3:09     ` Fu, Patrick
2020-07-28  3:28 ` [dpdk-dev] [PATCH v4] " patrick.fu
2020-07-28 13:55   ` Maxime Coquelin
2020-07-29  1:40     ` Fu, Patrick
2020-07-29  2:05       ` Fu, Patrick
2020-07-29  2:04 ` [dpdk-dev] [PATCH v5] " Patrick Fu
2020-07-29 14:24   ` Maxime Coquelin
2020-07-29 14:55   ` Maxime Coquelin
