From: cys <chaoys155@163.com>
To: "Ferruh Yigit" <ferruh.yigit@intel.com>
Cc: dev <dev@dpdk.org>
Subject: Re: [dpdk-dev] kni: continuous memory restriction ?
Date: Wed, 14 Mar 2018 08:35:45 +0800 (CST)
Message-ID: <428fa651.2c.16221ef2eaa.Coremail.chaoys155@163.com>
In-Reply-To: <60b8909c-b28d-eec4-110e-4c02c32cb087@intel.com>
Thanks for your reply.
With your solution a), I take 'single mempool' to mean a mempool that fits in one
memseg (contiguous memory).
What about a mempool spread across many memsegs? I'm afraid it's still not safe.
Just like this one:
-------------- MEMPOOL ----------------
mempool <mbuf_pool[0]>@0x7ff9e4833d00
flags=10
pool=0x7ff9fbfffe00
phys_addr=0xc4fc33d00
nb_mem_chunks=91
size=524288
populated_size=524288
header_size=64
elt_size=2432
trailer_size=0
total_obj_size=2496
private_data_size=64
avg bytes/object=2496.233643
Zone 0: name:<rte_eth_dev_data>, phys:0xc4fdb7f40, len:0x34000, virt:0x7ff9e49b7f40, socket_id:0, flags:0
Zone 1: name:<MP_mbuf_pool[0]>, phys:0xc4fc33d00, len:0x182100, virt:0x7ff9e4833d00, socket_id:0, flags:0
Zone 2: name:<MP_mbuf_pool[0]_0>, phys:0xb22000080, len:0x16ffff40, virt:0x7ffa3a800080, socket_id:0, flags:0
Zone 3: name:<RG_MP_mbuf_pool[0]>, phys:0xc199ffe00, len:0x800180, virt:0x7ff9fbfffe00, socket_id:0, flags:0
Zone 4: name:<MP_mbuf_pool[0]_1>, phys:0xc29c00080, len:0x77fff40, virt:0x7ff9e5800080, socket_id:0, flags:0
Zone 5: name:<MP_mbuf_pool[0]_2>, phys:0xc22c00080, len:0x67fff40, virt:0x7ff9ed200080, socket_id:0, flags:0
Zone 6: name:<MP_mbuf_pool[0]_3>, phys:0xc1dc00080, len:0x3bfff40, virt:0x7ff9f4800080, socket_id:0, flags:0
Zone 7: name:<MP_mbuf_pool[0]_4>, phys:0xc1bc00080, len:0x1bfff40, virt:0x7ff9f8600080, socket_id:0, flags:0
Zone 8: name:<MP_mbuf_pool[0]_5>, phys:0xbf4600080, len:0xffff40, virt:0x7ffa1ea00080, socket_id:0, flags:0
Zone 9: name:<MP_mbuf_pool[0]_6>, phys:0xc0e000080, len:0xdfff40, virt:0x7ffa06400080, socket_id:0, flags:0
Zone 10: name:<MP_mbuf_pool[0]_7>, phys:0xbe0600080, len:0xdfff40, virt:0x7ffa32000080, socket_id:0, flags:0
Zone 11: name:<MP_mbuf_pool[0]_8>, phys:0xc18000080, len:0xbfff40, virt:0x7ff9fd000080, socket_id:0, flags:0
Zone 12: name:<MP_mbuf_pool[0]_9>, phys:0x65000080, len:0xbfff40, virt:0x7ffa54e00080, socket_id:0, flags:0
Zone 13: name:<MP_mbuf_pool[0]_10>, phys:0xc12a00080, len:0x7fff40, virt:0x7ffa02200080, socket_id:0, flags:0
Zone 14: name:<MP_mbuf_pool[0]_11>, phys:0xc0d600080, len:0x7fff40, virt:0x7ffa07400080, socket_id:0, flags:0
Zone 15: name:<MP_mbuf_pool[0]_12>, phys:0xc06600080, len:0x7fff40, virt:0x7ffa0de00080, socket_id:0, flags:0
...
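To make this concrete, here is a quick standalone check (plain C, not DPDK code) using
the phys/virt pairs from three of the zones above; the per-chunk offsets clearly
differ, so the offset taken from one mbuf cannot translate an address that lives in
another chunk:

#include <inttypes.h>
#include <stdio.h>

/* Phys/virt pairs copied from the memzone dump above. The computed
 * offsets (virt - phys) differ per chunk, so a single mbuf's
 * (buf_addr - buf_physaddr) is wrong for mbufs in another chunk. */
int main(void)
{
	static const struct { const char *name; uint64_t phys, virt; } z[] = {
		{ "MP_mbuf_pool[0]_0", 0xb22000080ULL, 0x7ffa3a800080ULL },
		{ "MP_mbuf_pool[0]_1", 0xc29c00080ULL, 0x7ff9e5800080ULL },
		{ "MP_mbuf_pool[0]_2", 0xc22c00080ULL, 0x7ff9ed200080ULL },
	};
	for (unsigned i = 0; i < sizeof(z) / sizeof(z[0]); i++)
		printf("%s: virt - phys = 0x%" PRIx64 "\n",
		       z[i].name, z[i].virt - z[i].phys);
	return 0;
}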
At 2018-03-13 22:57, "Ferruh Yigit" <ferruh.yigit@intel.com> wrote:
On 3/9/2018 12:14 PM, cys wrote:
> Commit 8451269e6d7ba7501723fe2efd0 said "remove continuous memory restriction";
> http://dpdk.org/browse/dpdk/commit/lib/librte_eal/linuxapp/kni/kni_net.c?id=8451269e6d7ba7501723fe2efd05745010295bac
> For chained mbufs (nb_segs > 1), the function va2pa uses the offset of the previous
> mbuf to calculate the physical address of the next mbuf.
> So is it guaranteed anywhere that all mbufs have the same offset (buf_addr - buf_physaddr)?
> Or have I misunderstood chained mbufs?
Hi,
Your description is correct: KNI chained mbufs are broken if the chained mbufs are
from different mempools.
Two commits seem involved, in time order:
[1] d89a58dfe90b ("kni: support chained mbufs")
[2] 8451269e6d7b ("kni: remove continuous memory restriction")
With the current implementation, the kernel needs to know the physical address of the
mbuf to be able to access it.
For chained mbufs the first mbuf is fine, but for the rest the kernel side gets the
virtual address of the mbuf, and this only works if all chained mbufs are from the
same mempool.
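For reference, a simplified sketch of that translation (abridged from the va2pa()
logic in the KNI kernel module; struct rte_kni_mbuf with buf_addr/buf_physaddr is
the struct shared with userspace via rte_kni_common.h):

/* Translate a userspace virtual address using the virt->phys offset
 * of mbuf m. This is only valid when 'va' lies in the same memory
 * chunk as m; when walking a chain, the kernel applies the previous
 * mbuf's offset to the 'next' pointer, which is the breakage above. */
static void *
va2pa(void *va, struct rte_kni_mbuf *m)
{
	return (void *)((unsigned long)va -
			((unsigned long)m->buf_addr -
			 (unsigned long)m->buf_physaddr));
}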
I don't have a good solution, but it is possible to:
a) If you are using chained mbufs, keep the old limitation of using a single mempool
b) Serialize chained mbufs for KNI in userspace (see the sketch below)
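For option b), a minimal userspace sketch, assuming DPDK >= 17.02 so that
rte_pktmbuf_linearize() is available (kni_tx_linearized() is a hypothetical helper
name):

#include <rte_kni.h>
#include <rte_mbuf.h>

/* Collapse a chained mbuf into its first segment before handing it
 * to KNI, so the kernel never needs to translate a 'next' pointer.
 * rte_pktmbuf_linearize() fails when the first segment lacks
 * tailroom for the whole packet, so size the data room accordingly. */
static int
kni_tx_linearized(struct rte_kni *kni, struct rte_mbuf *m)
{
	if (m->nb_segs > 1 && rte_pktmbuf_linearize(m) < 0) {
		rte_pktmbuf_free(m);
		return -1;
	}
	if (rte_kni_tx_burst(kni, &m, 1) != 1) {
		rte_pktmbuf_free(m);
		return -1;
	}
	return 0;
}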