From: Adrien Mazarguil <adrien.mazarguil@6wind.com>
To: chenchanghu <chenchanghu@huawei.com>
Cc: "dev@dpdk.org" <dev@dpdk.org>,
"nelio.laranjeiro@6wind.com" <nelio.laranjeiro@6wind.com>,
Zhoujingbin <zhoujingbin@huawei.com>,
"Zhoulei (G)" <stone.zhou@huawei.com>,
Deng Kairong <dengkairong@huawei.com>,
Chenrujie <chenrujie@huawei.com>, cuiyayun <cuiyayun@huawei.com>,
"Chengwei (Titus)" <titus.chengwei@huawei.com>,
"Lixuan (Alex)" <Awesome.li@huawei.com>,
"Lilijun (Jerry)" <jerry.lilijun@huawei.com>
Subject: Re: [dpdk-dev] [discussion] mlx4 driver MLX4_PMD_TX_MP_CACHE default value
Date: Fri, 28 Jul 2017 14:00:18 +0200 [thread overview]
Message-ID: <20170728120018.GJ19852@6wind.com> (raw)
In-Reply-To: <859E1CB9FBF08C4B839DCF451B09C5032D62C090@dggeml505-mbx.china.huawei.com>
Hi Changhu,
On Fri, Jul 28, 2017 at 10:52:45AM +0000, chenchanghu wrote:
> Hi Adrien,
> Thanks very much! That answers our question about the MLX4_PMD_TX_MP_CACHE value; we will adjust it to suit our applications.
> However, in tests with 2 or more clients, we found that the functions 'txq->if_qp->send_pending' and 'txq->if_qp->send_flush(txq->qp)' in 'mlx4_tx_burst' each sporadically cost almost *5ms*. The probability is about 1/50000, i.e. roughly once every 50000 packets sent.
> Is this phenomenon normal? Or did we miss some configuration that is not documented?
5 ms for these function calls is strange and certainly not normal. Are you
sure this time is spent in send_pending()/send_flush() and not in
mlx4_tx_burst() itself?
Given the MP cache size and the number of mempools involved in your setup,
the cache look-up might take longer than normal, but this alone does not
explain it.
Might be something else, such as:
- txq_mp2mr() fails to register the mempool of one of these packets for some
  reason (chunked mempool?). Enable CONFIG_RTE_LIBRTE_MLX4_DEBUG and look
  for "unable to get MP <-> MR association" messages.
- You've enabled TX inline mode using a large value and CPU cycles are
wasted by the PMD doing memcpy() on large packets. Don't enable inline TX
(set CONFIG_RTE_LIBRTE_MLX4_MAX_INLINE to 0).
- Sent packets have too many segments (more than MLX4_PMD_SGE_WR_N). This is
  super expensive, as the PMD needs to linearize the extra segments. You can
  raise MLX4_PMD_SGE_WR_N to the next power of two (8); however, beware that
  doing so will degrade performance.
This might also be caused by external factors that depend on the application
or the host system, for instance if DPDK memory is spread across NUMA
nodes. Make sure that is not the case.
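For completeness, the knobs mentioned above are compile-time options in
DPDK of this era, set in the build configuration before recompiling. The
values below reflect the suggestions in this message, not drop-in defaults;
double-check the option names and current values in your own tree:

```sh
# config/common_base (or your build directory's .config), then rebuild:
CONFIG_RTE_LIBRTE_MLX4_DEBUG=y        # log "unable to get MP <-> MR association"
CONFIG_RTE_LIBRTE_MLX4_MAX_INLINE=0   # keep inline TX disabled
CONFIG_RTE_LIBRTE_MLX4_SGE_WR_N=8     # only if packets really exceed the limit
CONFIG_RTE_LIBRTE_MLX4_TX_MP_CACHE=8  # raise if more mempools are in use
```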
--
Adrien Mazarguil
6WIND
Thread overview: 4+ messages
2017-07-28 10:52 chenchanghu
2017-07-28 12:00 ` Adrien Mazarguil [this message]
2017-07-28 7:58 chenchanghu
2017-07-28 8:40 ` Adrien Mazarguil