* [dpdk-dev] segmented recv ixgbevf
@ 2014-10-30 10:23 Alex Markuze
2014-10-30 11:09 ` Bruce Richardson
0 siblings, 1 reply; 5+ messages in thread
From: Alex Markuze @ 2014-10-30 10:23 UTC (permalink / raw)
To: dev
Hi,
I'm seeing unwanted behaviour in the receive flow of ixgbevf. When using
jumbo frames and sending 4K+ bytes, the receive side breaks each packet up
into 2K buffers, and I receive 3 mbufs per packet.
I'm setting .max_rx_pkt_len to 4.5K and the mempool has 5K-sized elements.
Is there anything else I'm missing here? The goal is to have all 4K+ bytes in
one single contiguous buffer.
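Roughly, the setup looks like this (DPDK 1.7-era API; the pool name, mbuf
count and cache size are just placeholders for what I actually use):

    #include <rte_ethdev.h>
    #include <rte_mbuf.h>
    #include <rte_mempool.h>
    #include <rte_lcore.h>

    #define JUMBO_FRAME_LEN 4608   /* ~4.5K maximum RX frame */
    #define MBUF_SIZE       5120   /* 5K mempool elements */

    /* passed to rte_eth_dev_configure() */
    static const struct rte_eth_conf port_conf = {
        .rxmode = {
            .jumbo_frame    = 1,                /* accept jumbo frames */
            .max_rx_pkt_len = JUMBO_FRAME_LEN,
        },
    };

    static struct rte_mempool *
    setup_rx_pool(void)
    {
        return rte_mempool_create(
            "rx_pool", 8192, MBUF_SIZE, 256,
            sizeof(struct rte_pktmbuf_pool_private),
            rte_pktmbuf_pool_init, NULL,        /* NULL opaque argument */
            rte_pktmbuf_init, NULL,
            rte_socket_id(), 0);
    }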
Thanks
Alex.
* Re: [dpdk-dev] segmented recv ixgbevf
2014-10-30 10:23 [dpdk-dev] segmented recv ixgbevf Alex Markuze
@ 2014-10-30 11:09 ` Bruce Richardson
2014-10-30 12:48 ` Alex Markuze
0 siblings, 1 reply; 5+ messages in thread
From: Bruce Richardson @ 2014-10-30 11:09 UTC (permalink / raw)
To: Alex Markuze; +Cc: dev
On Thu, Oct 30, 2014 at 12:23:09PM +0200, Alex Markuze wrote:
> Hi,
> I'm seeing unwanted behaviour in the receive flow of ixgbevf. When using
> jumbo frames and sending 4K+ bytes, the receive side breaks each packet up
> into 2K buffers, and I receive 3 mbufs per packet.
>
> I'm setting .max_rx_pkt_len to 4.5K and the mempool has 5K-sized elements.
>
> Is there anything else I'm missing here? The goal is to have all 4K+ bytes in
> one single contiguous buffer.
>
That should be working, I think.
Does it work with the plain ixgbe driver on the host?
Is there anything in the output of the driver initialization saying something like "forcing scatter mode"?
* Re: [dpdk-dev] segmented recv ixgbevf
2014-10-30 11:09 ` Bruce Richardson
@ 2014-10-30 12:48 ` Alex Markuze
2014-10-30 13:18 ` Bruce Richardson
2014-11-05 14:48 ` Matt Laswell
0 siblings, 2 replies; 5+ messages in thread
From: Alex Markuze @ 2014-10-30 12:48 UTC (permalink / raw)
To: Bruce Richardson; +Cc: dev
For posterity:
1. When using an MTU larger than 2K, it's advised to pass the desired mbuf
size to rte_pktmbuf_pool_init as its opaque argument.
2. ixgbevf rounds (mbuf size - RTE_PKTMBUF_HEADROOM) down to the nearest 1K
multiple when deciding the receive buffer size of the mbufs in the pool,
because the buffer size field of the SRRCTL register is expressed in 1K units
(see the sketch below).
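Concretely, the pool creation ends up looking something like this (the mbuf
count and cache size are illustrative; the key parts are the non-NULL opaque
argument and a data room large enough to survive the 1K round-down):

    #include <stdint.h>
    #include <rte_mbuf.h>
    #include <rte_mempool.h>
    #include <rte_lcore.h>

    /* Data room per mbuf: headroom + 5K, so that after the driver rounds
     * (data room - RTE_PKTMBUF_HEADROOM) down to a 1K multiple, a whole
     * 4.5K frame still fits in a single segment. */
    #define MBUF_DATA_ROOM  (RTE_PKTMBUF_HEADROOM + 5 * 1024)
    #define MBUF_SIZE       (sizeof(struct rte_mbuf) + MBUF_DATA_ROOM)

    static struct rte_mempool *
    setup_jumbo_pool(void)
    {
        return rte_mempool_create(
            "jumbo_pool", 8192, MBUF_SIZE, 256,
            sizeof(struct rte_pktmbuf_pool_private),
            rte_pktmbuf_pool_init,
            (void *)(uintptr_t)MBUF_DATA_ROOM,  /* data room size, not a pointer */
            rte_pktmbuf_init, NULL,
            rte_socket_id(), 0);
    }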
* Re: [dpdk-dev] segmented recv ixgbevf
2014-10-30 12:48 ` Alex Markuze
@ 2014-10-30 13:18 ` Bruce Richardson
2014-11-05 14:48 ` Matt Laswell
1 sibling, 0 replies; 5+ messages in thread
From: Bruce Richardson @ 2014-10-30 13:18 UTC (permalink / raw)
To: Alex Markuze; +Cc: dev
On Thu, Oct 30, 2014 at 02:48:42PM +0200, Alex Markuze wrote:
> For posterity:
>
> 1. When using an MTU larger than 2K, it's advised to pass the desired mbuf
> size to rte_pktmbuf_pool_init as its opaque argument.
> 2. ixgbevf rounds (mbuf size - RTE_PKTMBUF_HEADROOM) down to the nearest 1K
> multiple when deciding the receive buffer size of the mbufs in the pool,
> because the buffer size field of the SRRCTL register is expressed in 1K
> units.
So the problem is now solved, right?
* Re: [dpdk-dev] segmented recv ixgbevf
2014-10-30 12:48 ` Alex Markuze
2014-10-30 13:18 ` Bruce Richardson
@ 2014-11-05 14:48 ` Matt Laswell
1 sibling, 0 replies; 5+ messages in thread
From: Matt Laswell @ 2014-11-05 14:48 UTC (permalink / raw)
To: Alex Markuze; +Cc: dev
Hey Folks,
I ran into the same issue that Alex is describing here, and I wanted to
expand just a little bit on his comments, as the documentation isn't very
clear.
Per the documentation, the two arguments to rte_pktmbuf_pool_init() are a
pointer to the memory pool that contains the newly-allocated mbufs and an
opaque pointer. The docs are pretty vague about what the opaque pointer
should point to or what its contents mean; all of the examples I looked at
just pass a NULL pointer. The docs for this function describe the opaque
pointer this way:
"A pointer that can be used by the user to retrieve useful information for
mbuf initialization. This pointer comes from the init_arg parameter of
rte_mempool_create()
<http://www.dpdk.org/doc/api/rte__mempool_8h.html#a7dc1d01a45144e3203c36d1800cb8f17>."
This is a little bit misleading. Under the covers, rte_pktmbuf_pool_init()
doesn't treat the opaque pointer as a pointer at all. Rather, it just
converts it to a uint16_t which holds the desired mbuf data room size. If it
receives 0 (in other words, if you passed in a NULL pointer), it will use
2048 bytes + RTE_PKTMBUF_HEADROOM. Hence, incoming jumbo frames will be
segmented into 2K chunks.
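In other words, the function boils down to something like this (a paraphrase
of the behavior described above, not the verbatim DPDK source):

    /* what rte_pktmbuf_pool_init(mp, opaque_arg) effectively does */
    void rte_pktmbuf_pool_init(struct rte_mempool *mp, void *opaque_arg)
    {
        struct rte_pktmbuf_pool_private *priv = rte_mempool_get_priv(mp);
        uint16_t roomsz = (uint16_t)(uintptr_t)opaque_arg;  /* the "pointer" is really a size */

        if (roomsz == 0)                            /* i.e. a NULL pointer was passed */
            roomsz = 2048 + RTE_PKTMBUF_HEADROOM;   /* default: 2K receive segments */

        priv->mbuf_data_room_size = roomsz;         /* drivers size their RX buffers from this */
    }

So to get jumbo frames into a single segment, pass the desired data room size
(cast to a pointer) as the init argument instead of NULL, as Alex's notes
above suggest.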
Any chance we could get an improvement to the documentation about this
parameter? It seems as though the opaque pointer isn't a pointer and
probably shouldn't be opaque.
Hope this helps the next person who comes across this behavior.
--
Matt Laswell
infinite io, inc.
On Thu, Oct 30, 2014 at 7:48 AM, Alex Markuze <alex@weka.io> wrote:
> For posterity:
>
> 1. When using an MTU larger than 2K, it's advised to pass the desired mbuf
> size to rte_pktmbuf_pool_init as its opaque argument.
> 2. ixgbevf rounds (mbuf size - RTE_PKTMBUF_HEADROOM) down to the nearest 1K
> multiple when deciding the receive buffer size of the mbufs in the pool,
> because the buffer size field of the SRRCTL register is expressed in 1K
> units.
>