From: Olivier MATZ <olivier.matz@6wind.com>
To: "Venkatesan, Venky" <venky.venkatesan@intel.com>,
Thomas Monjalon <thomas.monjalon@6wind.com>,
"Ananyev, Konstantin" <konstantin.ananyev@intel.com>,
"Shaw, Jeffrey B" <jeffrey.b.shaw@intel.com>,
"Richardson, Bruce" <bruce.richardson@intel.com>,
"nhorman@tuxdriver.com" <nhorman@tuxdriver.com>,
"stephen@networkplumber.org" <stephen@networkplumber.org>
Cc: "dev@dpdk.org" <dev@dpdk.org>
Subject: Re: [dpdk-dev] [PATCH v2 00/17] add TSO support
Date: Mon, 26 May 2014 13:59:32 +0200
Message-ID: <53832CA4.8060204@6wind.com>
In-Reply-To: <1FD9B82B8BF2CF418D9A1000154491D9740C60E2@ORSMSX102.amr.corp.intel.com>
Hi Venky,
>> my testpmd iofwd test with the txqflags option disabling many mbuf
>> features is not representative of a real world application.
> [Venky] I did see your test reports. I also made the point that the
> tests we have are insufficient for testing the impact. If you look at
> data_ofs, it actually has an impact on two sides - the driver and the
> upper layer. We do not at this time have a test for the upper
> layer/accessor. Secondly, there is a whole class of apps (fast path
> route for example) that aren't in your purview that do not need
> txqflags. Calling it not representative of a real world application is
> incorrect.
I was talking about "iofwd with txqflags", not "txqflags". The iofwd
mode means that the mbuf data is not accessed by the application,
so I think we should not rely on it.
I agree that "txqflags" could be useful in some situations, when the
user does not need multi-segment mbufs. In any case, the tests I've
posted on the list do not show a real impact for this use case.
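To make the comparison concrete, testpmd's txqflags option roughly maps
to the txq_flags field of struct rte_eth_txconf at TX queue setup time.
Here is a minimal sketch, assuming the ETH_TXQ_FLAGS_* macros from
rte_ethdev.h; the threshold fields are omitted for brevity and a real
application must fill them according to the PMD's constraints:

#include <string.h>
#include <rte_ethdev.h>

static int
setup_simple_txq(uint8_t port_id, uint16_t queue_id, unsigned int socket_id)
{
    struct rte_eth_txconf txconf;

    memset(&txconf, 0, sizeof(txconf));
    /* tell the PMD that the application never sends multi-segment
     * mbufs and never requests TX offloads, so it may select a
     * simpler TX path */
    txconf.txq_flags = ETH_TXQ_FLAGS_NOMULTSEGS | ETH_TXQ_FLAGS_NOOFFLOADS;

    return rte_eth_tx_queue_setup(port_id, queue_id, 512 /* nb_tx_desc */,
                                  socket_id, &txconf);
}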
Finally, as stated in the initial TSO cover letter [1], I've tested
the upper layer impact that you are talking about with the 6WINDGate
stack and there is no visible regression.
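For reference, what the upper layer has to do per packet to request TSO
is small. Here is a sketch; the flag and field names (PKT_TX_TCP_SEG,
tso_segsz, l2_len/l3_len/l4_len) follow the TSO API as eventually merged
in mainline DPDK and may not match this v2 series exactly:

#include <rte_mbuf.h>

/* prepare an mbuf so that the NIC segments a large IPv4/TCP packet;
 * names follow the mainline TSO API and may differ from this series */
static void
request_tso(struct rte_mbuf *m, uint16_t mss,
            uint16_t l2_len, uint16_t l3_len, uint16_t l4_len)
{
    m->ol_flags |= PKT_TX_TCP_SEG | PKT_TX_IP_CKSUM;
    m->l2_len = l2_len;     /* Ethernet (+ VLAN) header length */
    m->l3_len = l3_len;     /* IP header length */
    m->l4_len = l4_len;     /* TCP header length */
    m->tso_segsz = mss;     /* max payload per segment emitted by the NIC */
    /* the TCP checksum field must hold the pseudo-header checksum,
     * as expected by the PMD when it builds the context descriptor */
}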
> Secondly, your testpmd run baseline performance should be higher. At
> this point, it is about 40% off from the numbers we see on the baseline
> on the same CPU. If the baseline is incorrect, I cannot judge anything
> more on the performance. We need to get the baseline performance the
> same, and then compare impact.
I tested with the default testpmd configuration; I thought it was
close enough to the best performance.
> [Venky] I exclude VLAN because it is something explicitly set by the Rx
> side of the driver. Having Rx access a second cache line will generate a
> performance impact (can be mitigated by a prefetch, but it will cost
> more instructions, and cannot be deterministically controlled). The rest
> of the structure is on the transmit side - which is going to be cache
> hot - at least in LLC anyway. There are cases where this will not be in
> LLC - and we have a few of those. Those however, we can mitigate.
If we add another cache line to the mbuf and the code accesses it
(whether on the Rx or Tx side), it will at least increase the required
memory bandwidth, and in the worst case will result in an additional
cache miss. This is difficult to predict and depends on the use case;
I think each solution has its drawbacks in specific cases.
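The prefetch mitigation mentioned above is cheap to express but not
free. A sketch, assuming the rte_prefetch0() helper and the cache line
size constant (whose name differs across DPDK versions):

#include <rte_memory.h>
#include <rte_prefetch.h>
#include <rte_mbuf.h>

/* prefetch the second cache line of an mbuf; to hide the latency the
 * call must be issued well before the fields on that line are read,
 * which is exactly the "more instructions" cost discussed above */
static inline void
mbuf_prefetch_second_line(struct rte_mbuf *m)
{
    rte_prefetch0((char *)m + CACHE_LINE_SIZE);
}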
I think the tests I've provided are representative enough to conclude
that there is no need to look for an alternative, especially since the
rework also cleans up the mbuf structure and gives more room for
later use.
> [Venky] I don't think reworking core data structures (especially
> regressing core data structures) is a good thing. We have kept this
> relatively stable over 5 releases, sometimes at the impact of
> performance, and churning data structures is not a good thing.
Sorry, but I don't think this is a good argument: we cannot say
"it has been stable for 5 releases, so we cannot modify it". Code
modifications should be guided by technical reasons.
Keeping the API stable could be a good technical reason, but as far
as I understand, DPDK is not at that stage today. The mbuf
rework brings 9 additional bytes to the structure, which help
TSO today but may also help future features, for instance
another checksum flag as proposed by Stephen and me [2].
Moreover, the patches make the rte_mbuf structure clearer: the current
rte_mbuf / rte_pktmbuf separation is a bit strange; for instance, the
ol_flags field, which is clearly related to a pktmbuf, is located in
rte_mbuf.
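To illustrate, here is an approximate, abridged sketch of that split
(not the exact definitions, field names are from memory): the
packet-related metadata lives in a nested pktmbuf, yet ol_flags sits
one level up:

#include <stdint.h>

/* approximate sketch of the current split, fields abridged */
struct pktmbuf_sketch {             /* roughly struct rte_pktmbuf */
    void *data;                     /* start of packet data in the buffer */
    uint16_t data_len;              /* length of this segment */
    uint32_t pkt_len;               /* total length, all segments */
    /* ... vlan/macip metadata, nb_segs, next, ... */
};

struct mbuf_sketch {                /* roughly struct rte_mbuf */
    void *buf_addr;                 /* buffer virtual address */
    uint16_t buf_len;               /* buffer length */
    uint16_t ol_flags;              /* offload flags: packet-only, yet here */
    struct pktmbuf_sketch pkt;      /* packet-specific part */
};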
Now, I'm ready to make some concessions and discuss an alternative
solution.
Thomas, you are the maintainer ;) what are your plans?
Regards,
Olivier
[1] http://dpdk.org/ml/archives/dev/2014-May/002322.html
[2] http://dpdk.org/ml/archives/dev/2014-May/002339.html