DPDK patches and discussions
From: Ferruh Yigit <ferruh.yigit@intel.com>
To: cys <chaoys155@163.com>
Cc: dev <dev@dpdk.org>
Subject: Re: [dpdk-dev] kni: continuous memory restriction ?
Date: Tue, 20 Mar 2018 15:25:13 +0000
Message-ID: <230e41e3-c397-4088-fc47-3ecceb1468e5@intel.com>
In-Reply-To: <428fa651.2c.16221ef2eaa.Coremail.chaoys155@163.com>


> 
> On March 13, 2018, at 22:57, "Ferruh Yigit" <ferruh.yigit@intel.com> wrote:
> 
>     On 3/9/2018 12:14 PM, cys wrote:
>     > Commit 8451269e6d7ba7501723fe2efd0 said "remove continuous memory
>     > restriction";
>     > http://dpdk.org/browse/dpdk/commit/lib/librte_eal/linuxapp/kni/kni_net.c?id=8451269e6d7ba7501723fe2efd05745010295bac
>     > For chained mbufs (nb_segs > 1), the function va2pa uses the offset of
>     > the previous mbuf to calculate the physical address of the next mbuf.
>     > Is it guaranteed anywhere that all mbufs have the same offset (buf_addr -
>     > buf_physaddr)? Or have I misunderstood chained mbufs?
> 
>     Hi,
> 
>     Your description is correct: KNI chained mbuf support is broken if the
>     chained mbufs come from different mempools.
> 
>     Two commits seem to be involved, in time order:
>     [1] d89a58dfe90b ("kni: support chained mbufs")
>     [2] 8451269e6d7b ("kni: remove continuous memory restriction")
> 
>     With the current implementation, the kernel needs to know the physical
>     address of an mbuf to be able to access it.
>     For chained mbufs the first mbuf is fine, but for the rest the kernel side
>     only gets the virtual address of the mbuf, and translating it works only
>     if all chained mbufs come from the same mempool (see the va2pa()
>     reconstruction below).
> 
>     I don't have a good solution indeed, but it is possible to:
>     a) If you are using chained mbufs, keep the old limitation of using a
>     single mempool
>     b) Serialize chained mbufs for KNI in userspace (see the sketch below)
> 
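For reference, the kernel-side helper added in [2] looks roughly like this
(reconstructed from the commit, possibly not verbatim). It translates the
user-space virtual address of the *next* mbuf using the VA-to-PA offset of
the *previous* mbuf 'm', which is only correct when both segments share that
offset, i.e. come from the same physically contiguous region:

    /* kni_net.c (kernel side): convert the virtual address 'va' of the
     * next mbuf to a physical address using the buf_addr/buf_physaddr
     * offset of the current mbuf 'm'; this breaks whenever the two
     * segments have different offsets */
    static void *
    va2pa(void *va, struct rte_kni_mbuf *m)
    {
            return (void *)((unsigned long)va -
                            ((unsigned long)m->buf_addr -
                             (unsigned long)m->buf_physaddr));
    }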
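As a hedged sketch of option (b), assuming the first segment has enough
tailroom for the tail data: rte_pktmbuf_linearize() (in DPDK since 17.02)
can flatten a chain into a single segment before the mbuf is handed to
rte_kni_tx_burst(). The wrapper name below is hypothetical, not a DPDK API:

    #include <rte_mbuf.h>

    /* serialize a chained mbuf into its first segment so the KNI kernel
     * module only ever sees nb_segs == 1; returns 0 on success, negative
     * if the first segment cannot hold the whole packet */
    static int
    kni_serialize_mbuf(struct rte_mbuf *m)
    {
            if (m->nb_segs == 1)
                    return 0; /* already a single segment */
            return rte_pktmbuf_linearize(m);
    }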

On 3/14/2018 12:35 AM, cys wrote:
> Thanks for your reply.
> With your solution a), I guess 'single mempool' means a mempool that fits in
> one memseg (continuous memory).

Yes, I mean physically continuous memory, i.e. a mempool from a single memseg;
otherwise it has the same problem.
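A hedged way to check this at startup, using the same nb_mem_chunks counter
that rte_mempool_dump() prints in your output below (the helper name is mine,
not a DPDK API): a pool populated from a single physically contiguous chunk
registers exactly one memory chunk, so all of its objects share one VA-to-PA
offset.

    #include <rte_mempool.h>

    /* sketch: true only if the whole pool came from one physically
     * contiguous chunk, which is what KNI chained mbufs rely on */
    static int
    mempool_is_phys_contig(const struct rte_mempool *mp)
    {
            return mp->nb_mem_chunks == 1;
    }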

> What about a mempool across many memsegs? I'm afraid it's still not safe.
> Just like this one:
> -------------- MEMPOOL ----------------
> mempool <mbuf_pool[0]>@0x7ff9e4833d00
>   flags=10
>   pool=0x7ff9fbfffe00
>   phys_addr=0xc4fc33d00
>   nb_mem_chunks=91
>   size=524288
>   populated_size=524288
>   header_size=64
>   elt_size=2432
>   trailer_size=0
>   total_obj_size=2496
>   private_data_size=64
>   avg bytes/object=2496.233643
>
> Zone 0: name:<rte_eth_dev_data>, phys:0xc4fdb7f40, len:0x34000,
> virt:0x7ff9e49b7f40, socket_id:0, flags:0
> Zone 1: name:<MP_mbuf_pool[0]>, phys:0xc4fc33d00, len:0x182100,
> virt:0x7ff9e4833d00, socket_id:0, flags:0
> Zone 2: name:<MP_mbuf_pool[0]_0>, phys:0xb22000080, len:0x16ffff40,
> virt:0x7ffa3a800080, socket_id:0, flags:0
> Zone 3: name:<RG_MP_mbuf_pool[0]>, phys:0xc199ffe00, len:0x800180,
> virt:0x7ff9fbfffe00, socket_id:0, flags:0
> Zone 4: name:<MP_mbuf_pool[0]_1>, phys:0xc29c00080, len:0x77fff40,
> virt:0x7ff9e5800080, socket_id:0, flags:0
> Zone 5: name:<MP_mbuf_pool[0]_2>, phys:0xc22c00080, len:0x67fff40,
> virt:0x7ff9ed200080, socket_id:0, flags:0
> Zone 6: name:<MP_mbuf_pool[0]_3>, phys:0xc1dc00080, len:0x3bfff40,
> virt:0x7ff9f4800080, socket_id:0, flags:0
> Zone 7: name:<MP_mbuf_pool[0]_4>, phys:0xc1bc00080, len:0x1bfff40,
> virt:0x7ff9f8600080, socket_id:0, flags:0
> Zone 8: name:<MP_mbuf_pool[0]_5>, phys:0xbf4600080, len:0xffff40,
> virt:0x7ffa1ea00080, socket_id:0, flags:0
> Zone 9: name:<MP_mbuf_pool[0]_6>, phys:0xc0e000080, len:0xdfff40,
> virt:0x7ffa06400080, socket_id:0, flags:0
> Zone 10: name:<MP_mbuf_pool[0]_7>, phys:0xbe0600080, len:0xdfff40,
> virt:0x7ffa32000080, socket_id:0, flags:0
> Zone 11: name:<MP_mbuf_pool[0]_8>, phys:0xc18000080, len:0xbfff40,
> virt:0x7ff9fd000080, socket_id:0, flags:0
> Zone 12: name:<MP_mbuf_pool[0]_9>, phys:0x65000080, len:0xbfff40,
> virt:0x7ffa54e00080, socket_id:0, flags:0
> Zone 13: name:<MP_mbuf_pool[0]_10>, phys:0xc12a00080, len:0x7fff40,
> virt:0x7ffa02200080, socket_id:0, flags:0
> Zone 14: name:<MP_mbuf_pool[0]_11>, phys:0xc0d600080, len:0x7fff40,
> virt:0x7ffa07400080, socket_id:0, flags:0
> Zone 15: name:<MP_mbuf_pool[0]_12>, phys:0xc06600080, len:0x7fff40,
> virt:0x7ffa0de00080, socket_id:0, flags:0
> ...
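
Right, with nb_mem_chunks=91 this pool spans many chunks, so it is not safe.
If you cannot restrict the pool, a cheap per-packet guard is possible; a
minimal sketch (the helper name is mine, not a DPDK API):

    #include <stdint.h>
    #include <rte_mbuf.h>

    /* check that every segment shares the first segment's VA-to-PA
     * offset, i.e. the kernel's single-offset translation will work */
    static int
    chain_offsets_match(const struct rte_mbuf *m)
    {
            uintptr_t off = (uintptr_t)m->buf_addr -
                            (uintptr_t)m->buf_physaddr;

            for (m = m->next; m != NULL; m = m->next)
                    if ((uintptr_t)m->buf_addr -
                        (uintptr_t)m->buf_physaddr != off)
                            return 0;
            return 1;
    }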

