DPDK patches and discussions
From: Tetsuya Mukawa <mukawa@igel.co.jp>
To: "Tan, Jianfeng" <jianfeng.tan@intel.com>, "dev@dpdk.org" <dev@dpdk.org>
Cc: "nakajima.yoshihiro@lab.ntt.co.jp"
	<nakajima.yoshihiro@lab.ntt.co.jp>,
	"mst@redhat.com" <mst@redhat.com>
Subject: Re: [dpdk-dev] [PATCH v1 0/2] Virtio-net PMD Extension to work on host
Date: Wed, 6 Jan 2016 16:35:00 +0900	[thread overview]
Message-ID: <568CC3A4.7080603@igel.co.jp> (raw)
In-Reply-To: <568CA950.9000205@intel.com>

On 2016/01/06 14:42, Tan, Jianfeng wrote:
>
>
> On 1/6/2016 11:57 AM, Tetsuya Mukawa wrote:
>> On 2015/12/28 20:06, Tetsuya Mukawa wrote:
>>> On 2015/12/24 23:05, Tan, Jianfeng wrote:
>>>> Hi Tetsuya,
>>>>
>>>> After several days' studying your patch, I have some questions as
>>>> follows:
>>>>
>>>> 1. Is physically contiguous memory really necessary?
>>>> This is too strong a requirement, IMHO. IVSHMEM doesn't require it
>>>> in its original meaning. So what do you think of
>>>> Huawei Xie's idea of using virtual addresses for address translation?
>>>> (In addition, the virtual address of mem_table could be
>>>> different in the application and QTest, but this can be addressed
>>>> because the SET_MEM_TABLE msg will be intercepted by
>>>> QTest)
>>> Hi Jianfeng,
>>>
>>> Thanks for your suggestion.
>>> Huawei's idea may solve the contiguous-memory restriction.
>>> Let me take some time to check it further.
>> Hi Jianfeng,
>>
>> I confirmed we can remove the restriction with Huawei's idea.
>> One concern I have is below.
>> If we don't use contiguous memory, this PMD will not work with other
>> 'physical' PMDs like the e1000 PMD, virtio-net PMD, etc.
>> (This is because the allocated memory may not be physically contiguous.)
>>
>> One example: if we implement it like the above, then in a QEMU guest we
>> can handle a host NIC directly, but in a container we will not be able
>> to handle the device.
>> This will be a restriction of this change to virtual addressing.
>>
>> Do you know a use case where the user wants to handle a 'physical' PMD
>> and the 'virtual' virtio-net PMD together?
>>
>> Tetsuya,
> Hi Tetsuya,
>
> I have no use case in hand which handles 'physical' PMDs and the
> 'virtual' virtio-net PMD together.
> (Pavel Fedin once tried to run OVS in a container, but that case just
> uses virtual virtio devices; I don't know if he has a plan to add
> 'physical' PMDs as well.)
>
> Actually, it's not completely contradictory to make them work
> together. Like this:
> a. containers with root privilege
> We can initialize memory in the legacy way. (TODO: besides
> physically contiguous memory, we try to allocate a virtually contiguous
> big area for all memsegs as well.)

Hi Jianfeng,

Yes, I agree with you.
If the feature is really needed, we will be able to work around it.
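For reference, the "virtually contiguous big area" in your TODO could be sketched as below: reserve one big PROT_NONE window first, then place each segment inside it with MAP_FIXED. (Linux-specific; plain anonymous pages are used here instead of hugepage-backed fds just to keep the sketch runnable — function names are hypothetical, not actual EAL code.)

```c
#define _GNU_SOURCE
#include <assert.h>
#include <stddef.h>
#include <sys/mman.h>
#include <unistd.h>

/* Reserve a large window with PROT_NONE so the virtual range is
 * guaranteed to be contiguous before any segment is backed. */
static void *reserve_window(size_t len)
{
    void *base = mmap(NULL, len, PROT_NONE,
                      MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
    return base == MAP_FAILED ? NULL : base;
}

/* Place one segment inside the reservation; MAP_FIXED replaces the
 * PROT_NONE pages at base + off. The real EAL would map hugepage fds
 * (MAP_HUGETLB) here instead of anonymous memory. */
static void *place_segment(void *base, size_t off, size_t len)
{
    void *p = mmap((char *)base + off, len, PROT_READ | PROT_WRITE,
                   MAP_PRIVATE | MAP_ANONYMOUS | MAP_FIXED, -1, 0);
    return p == MAP_FAILED ? NULL : p;
}
```

With this, every segment ends up at a known offset inside one contiguous virtual range, which is what the merging below relies on.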

>
> a.1 For vhost-net, before sending memory regions into the kernel, we can
> merge those virtually contiguous regions into one region.
> a.2 For vhost-user, we can merge memory regions in the vhost. The
> blocker is that, for now, the maximum number of fds is restricted
> by VHOST_MEMORY_MAX_NREGIONS=8 (so in the 2M-hugepage case, 16M of
> shared memory is nowhere near enough).
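The merging in a.1/a.2 could look roughly like this. (The region struct here is a hypothetical simplification, not the actual vhost structures; it only illustrates the idea.)

```c
#include <assert.h>
#include <stddef.h>
#include <stdint.h>

/* Hypothetical, simplified memory region; the real vhost structures
 * carry more fields (guest/user addresses, mmap offset, ...). */
struct mem_region {
    uint64_t start;   /* region start address */
    uint64_t size;    /* region length in bytes */
};

/* Merge regions that are contiguous in the (virtual) address space so
 * the final count can fit under VHOST_MEMORY_MAX_NREGIONS (8).
 * Assumes the array is sorted by start address. Returns the new count. */
static size_t merge_contig_regions(struct mem_region *r, size_t n)
{
    if (n == 0)
        return 0;
    size_t out = 0;
    for (size_t i = 1; i < n; i++) {
        if (r[out].start + r[out].size == r[i].start)
            r[out].size += r[i].size;   /* adjacent: extend previous */
        else
            r[++out] = r[i];            /* hole: keep as a new region */
    }
    return out + 1;
}
```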
>

With your current implementation, when the 'virtual' virtio-net PMD is
used, 'phys_addr' will be a virtual address in the EAL layer.

struct rte_memseg {
        phys_addr_t phys_addr;      /**< Start physical address. */
        union {
                void *addr;         /**< Start virtual address. */
                uint64_t addr_64;   /**< Makes sure addr is always 64 bits. */
        };
        .......
};

How about choosing it in the virtio-net PMD?
(In the 'virtual' case, just use 'addr' instead of 'phys_addr'.)
For example, port0 may use a physical address, while port1 uses a
virtual address.
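A rough sketch of that per-port choice (the memseg layout is abridged from rte_memseg; the mode enum and the vtnet_dma_addr helper are hypothetical names, not actual PMD code):

```c
#include <assert.h>
#include <stdint.h>

typedef uint64_t phys_addr_t;

/* Abridged copy of EAL's struct rte_memseg layout. */
struct memseg {
    phys_addr_t phys_addr;      /* start physical address */
    union {
        void *addr;             /* start virtual address */
        uint64_t addr_64;       /* keeps addr 64 bits wide */
    };
};

/* Hypothetical per-port mode: a 'physical' virtio device needs real
 * physical addresses; a 'virtual' one can work with process VAs. */
enum vtnet_addr_mode { VTNET_ADDR_PHYS, VTNET_ADDR_VIRT };

/* Pick the address this port hands to the host side. */
static uint64_t vtnet_dma_addr(const struct memseg *ms,
                               enum vtnet_addr_mode mode)
{
    return mode == VTNET_ADDR_PHYS ? ms->phys_addr : ms->addr_64;
}
```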

With this, of course, we don't have an issue with the 'physical'
virtio-net PMD.
Also, with the 'virtual' virtio-net PMD, we can use virtual addresses
and an fd that represents the big virtual address space.
(TODO: Need to change rte_memseg and the EAL to keep the fd and offset?)
Then you don't need to worry about VHOST_MEMORY_MAX_NREGIONS, because we
have only one fd.
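To illustrate the single-fd idea: back every memseg by one shared-memory fd and record (fd, offset) per segment, so SET_MEM_TABLE only has to carry one descriptor. (memfd_create() is used here as a Linux-specific stand-in for the actual hugepage backing; shm_open() would be the portable form, and the helper name is hypothetical.)

```c
#define _GNU_SOURCE
#include <assert.h>
#include <string.h>
#include <sys/mman.h>
#include <unistd.h>

/* Map one segment of the shared area at the given page-aligned offset.
 * Every segment shares the same fd, so only one descriptor ever needs
 * to cross the vhost-user socket. */
static void *map_segment(int fd, off_t off, size_t len)
{
    void *p = mmap(NULL, len, PROT_READ | PROT_WRITE, MAP_SHARED, fd, off);
    return p == MAP_FAILED ? NULL : p;
}
```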

> b. containers without root privilege
> No need to worry about this problem, because they lack the privilege
> to construct physically contiguous memory.
>

Yes, we cannot run 'physical' PMDs in this type of container.
Anyway, I will look into it more if we really need it.

Thanks,
Tetsuya


Thread overview: 22+ messages
2015-11-19 10:57 [dpdk-dev] [RFC PATCH " Tetsuya Mukawa
2015-11-19 10:57 ` [dpdk-dev] [RFC PATCH 1/2] EAL: Add new EAL "--shm" option Tetsuya Mukawa
2015-12-16  8:37   ` [dpdk-dev] [PATCH v1 0/2] Virtio-net PMD Extension to work on host Tetsuya Mukawa
2015-12-16  8:37     ` [dpdk-dev] [PATCH v1 1/2] EAL: Add new EAL "--contig-mem" option Tetsuya Mukawa
2015-12-16  8:37     ` [dpdk-dev] [PATCH v1 2/2] virtio: Extend virtio-net PMD to support container environment Tetsuya Mukawa
2015-12-28 11:57       ` Pavel Fedin
2016-01-06  3:57         ` Tetsuya Mukawa
2016-01-06  5:56           ` Tan, Jianfeng
2016-01-06  7:27             ` Tetsuya Mukawa
2015-12-24 14:05     ` [dpdk-dev] [PATCH v1 0/2] Virtio-net PMD Extension to work on host Tan, Jianfeng
2015-12-28 11:06       ` Tetsuya Mukawa
2016-01-06  3:57         ` Tetsuya Mukawa
2016-01-06  5:42           ` Tan, Jianfeng
2016-01-06  7:35             ` Tetsuya Mukawa [this message]
2016-01-11  5:31               ` Tan, Jianfeng
2015-11-19 10:57 ` [dpdk-dev] [RFC PATCH 2/2] virtio: Extend virtio-net PMD to support container environment Tetsuya Mukawa
2015-11-19 18:16 ` [dpdk-dev] [RFC PATCH 0/2] Virtio-net PMD Extension to work on host Rich Lane
2015-11-20  2:00   ` Xie, Huawei
2015-11-20  2:35     ` Tetsuya Mukawa
2015-11-20  2:53       ` Tetsuya Mukawa
2015-12-28  5:15 ` Qiu, Michael
2015-12-28 11:06   ` Tetsuya Mukawa
