DPDK patches and discussions
From: Ilya Maximets <i.maximets@ovn.org>
To: Flavio Leitner <fbl@sysclose.org>,
	Maxime Coquelin <maxime.coquelin@redhat.com>
Cc: Shahaf Shuler <shahafs@mellanox.com>,
	David Marchand <david.marchand@redhat.com>,
	"dev@dpdk.org" <dev@dpdk.org>, Tiwei Bie <tiwei.bie@intel.com>,
	Zhihong Wang <zhihong.wang@intel.com>,
	Obrembski MichalX <michalx.obrembski@intel.com>,
	Stokes Ian <ian.stokes@intel.com>,
	Ilya Maximets <i.maximets@ovn.org>
Subject: Re: [dpdk-dev] [PATCH] vhost: add support to large linear mbufs
Date: Thu, 3 Oct 2019 18:57:32 +0200
Message-ID: <088ea83c-cc00-5542-a554-ca857b9ef6ec@ovn.org>
In-Reply-To: <20191002151528.0f285b8a@p50.lan>

On 02.10.2019 20:15, Flavio Leitner wrote:
> On Wed, 2 Oct 2019 17:50:41 +0000
> Shahaf Shuler <shahafs@mellanox.com> wrote:
> 
>> Wednesday, October 2, 2019 3:59 PM, Flavio Leitner:
>>> Obrembski MichalX <michalx.obrembski@intel.com>; Stokes Ian
>>> <ian.stokes@intel.com>
>>> Subject: Re: [dpdk-dev] [PATCH] vhost: add support to large linear
>>> mbufs
>>>
>>>
>>> Hi Shahaf,
>>>
>>> Thanks for looking into this, see my inline comments.
>>>
>>> On Wed, 2 Oct 2019 09:00:11 +0000
>>> Shahaf Shuler <shahafs@mellanox.com> wrote:
>>>    
>>>> Wednesday, October 2, 2019 11:05 AM, David Marchand:
>>>>> Subject: Re: [dpdk-dev] [PATCH] vhost: add support to large
>>>>> linear mbufs
>>>>>
>>>>> Hello Shahaf,
>>>>>
>>>>> On Wed, Oct 2, 2019 at 6:46 AM Shahaf Shuler
>>>>> <shahafs@mellanox.com> wrote:
>>>>>>   
>>
>> [...]
>>
>>>>>
>>>>> I am missing some piece here.
>>>>> Which pool would the PMD take those external buffers from?
>>>>
>>>> The mbuf is always taken from the single mempool associated with
>>>> the rxq. The buffer for the mbuf may be allocated (in case the
>>>> virtio payload is bigger than the current mbuf size) from DPDK
>>>> hugepages or any other system memory and attached to the mbuf.
>>>>
>>>> You can see an example implementation of this in the mlx5 PMD
>>>> (check out the rte_pktmbuf_attach_extbuf call).
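
For illustration, a minimal sketch of that attach pattern, assuming a
plain rte_malloc'ed area; the helper name and error handling are
invented here, not taken from mlx5:

#include <errno.h>
#include <stdint.h>
#include <rte_malloc.h>
#include <rte_mbuf.h>

/* Called when the last mbuf referencing the external buffer is freed. */
static void
ext_buf_free_cb(void *addr, void *opaque __rte_unused)
{
	rte_free(addr);
}

/* Allocate room for the payload, headroom and the shared-info footer,
 * then attach it to 'm' as an external buffer. */
int
attach_large_extbuf(struct rte_mbuf *m, uint32_t payload_len)
{
	struct rte_mbuf_ext_shared_info *shinfo;
	uint32_t total = payload_len + RTE_PKTMBUF_HEADROOM + sizeof(*shinfo);
	uint16_t buf_len;
	void *buf;

	if (total > UINT16_MAX)
		return -EINVAL;	/* an mbuf's buf_len is only 16 bits wide */
	buf_len = total;

	buf = rte_malloc(NULL, buf_len, RTE_CACHE_LINE_SIZE);
	if (buf == NULL)
		return -ENOMEM;

	/* Puts the refcount + free callback at the tail of the buffer
	 * and shrinks buf_len accordingly. */
	shinfo = rte_pktmbuf_ext_shinfo_init_helper(buf, &buf_len,
						    ext_buf_free_cb, NULL);
	if (shinfo == NULL) {
		rte_free(buf);
		return -ENOSPC;
	}

	rte_pktmbuf_attach_extbuf(m, buf, rte_malloc_virt2iova(buf),
				  buf_len, shinfo);
	rte_pktmbuf_reset_headroom(m);
	return 0;
}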
>>>
>>> Thanks, I wasn't aware of external buffers.
>>>
>>> I see that attaching external buffers of the correct size would be
>>> more efficient in terms of saving memory and avoiding sparseness.
>>>
>>> However, we still need to be prepared for the worst-case scenario
>>> (all packets being 64K), so that doesn't help with the total memory
>>> required.
>>
>> I am not sure why.
>> The allocation can be on demand, that is, only when you encounter a
>> large buffer.
>>
>> Having buffers allocated in advance only saves the cost of the
>> rte_*malloc. However, with such big buffers, and even more so with
>> device offloads like TSO, I am not sure that cost is an issue.
> 
> Now I see what you're saying. I was thinking we had to reserve the
> memory beforehand, like mempool does, then get the buffers as needed.
> 
> OK, I can give it a try with rte_*malloc and see how it goes.
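
A sketch of how that on-demand decision could look on the receive
side (ensure_room is an invented name; attach_large_extbuf is the
hypothetical helper from the sketch above):

#include <stdint.h>
#include <rte_mbuf.h>

/* The hypothetical helper from the sketch further up. */
int attach_large_extbuf(struct rte_mbuf *m, uint32_t payload_len);

/* Only take the allocation hit for oversized payloads; small packets
 * keep using the data room of the mbuf from the rxq mempool. */
static int
ensure_room(struct rte_mbuf *m, uint32_t payload_len)
{
	if (payload_len <= rte_pktmbuf_tailroom(m))
		return 0;
	return attach_large_extbuf(m, payload_len);
}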

This way we could actually have a nice API.  For example, by
introducing some new flag RTE_VHOST_USER_NO_CHAINED_MBUFS (there
might be a better name) that could be passed to driver_register().
On receive, depending on this flag, the function would either create
chained mbufs or, if the data cannot be stored in a single mbuf from
the registered memory pool, allocate a new contiguous memory chunk
and attach it as an external buffer.
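
A rough sketch of how such a flag might be used; note that
RTE_VHOST_USER_NO_CHAINED_MBUFS does not exist in DPDK, so the value
below is invented purely for illustration:

#include <stdint.h>
#include <rte_vhost.h>

/* Invented for illustration: no such flag exists in rte_vhost.h. */
#define RTE_VHOST_USER_NO_CHAINED_MBUFS (1ULL << 8)

static int
register_linear_vhost_port(const char *sock_path)
{
	/* Ask the vhost library for linear mbufs: it would attach an
	 * external buffer instead of chaining when a packet does not
	 * fit in one mbuf from the registered mempool. */
	return rte_vhost_driver_register(sock_path,
					 RTE_VHOST_USER_NO_CHAINED_MBUFS);
}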

Supporting external memory in mbufs will require some additional
work on the OVS side (e.g. better handling of ol_flags), but we'll
have to do that anyway for the upgrade to DPDK 19.11.

Best regards, Ilya Maximets.

> 
>>> The current patch pushes the decision to the application, which
>>> knows the workload better. If more memory is available, it can
>>> optionally use large buffers; otherwise it just doesn't pass the
>>> flag. It can even decide whether to share the same 64K mempool
>>> between multiple vhost ports or to use one mempool per port.
>>>
>>> Perhaps I missed something, but managing memory with a mempool
>>> still requires us to have buffers of 64K regardless of whether the
>>> data consumes less space. Otherwise the application or the PMD
>>> will have to manage memory itself.
>>>
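
To put a number on that worst case, a sketch of a per-port pool sized
for ~64K buffers (the count and names are illustrative; the data room
argument is a uint16_t, so a full 64K does not quite fit):

#include <stdint.h>
#include <rte_lcore.h>
#include <rte_mbuf.h>

/* One worst-case-sized pool per vhost port. */
static struct rte_mempool *
create_large_pool(const char *name)
{
	/* 16384 buffers x ~64 KiB ~= 1 GiB reserved up front, no matter
	 * how little of each buffer real traffic ends up using. */
	return rte_pktmbuf_pool_create(name, 16384, 256, 0,
				       UINT16_MAX, rte_socket_id());
}
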
>>> If we let the PMD manage the memory, what happens if a port/queue
>>> is closed and one or more buffers are still in use (switching)? I
>>> don't see how to solve this cleanly.
>>
>> Closing the dev should return EBUSY until all buffers are freed.
>> What is the use case for closing a port while packets are still
>> pending on another port of the switch? And why can't we wait for
>> them to complete transmission?
> 
> The vswitch gets the request from outside, and the assumption is
> that the command will succeed. AFAIK, there is no retry mechanism.
> 
> Thanks Shahaf!
> fbl
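
For completeness, a minimal sketch of the EBUSY-on-close idea
discussed above; the port structure and its in-flight counter are
hypothetical:

#include <errno.h>
#include <stdatomic.h>

/* Hypothetical per-port state: incremented when an external buffer is
 * handed out, decremented in its free callback. */
struct port_state {
	atomic_uint ext_bufs_in_flight;
};

/* Refuse to tear the port down while buffers are still referenced;
 * the caller would have to retry, which is exactly the mechanism the
 * vswitch lacks today. */
static int
port_close(struct port_state *p)
{
	if (atomic_load(&p->ext_bufs_in_flight) > 0)
		return -EBUSY;
	/* ... proceed with the actual teardown ... */
	return 0;
}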

