From: Yongseok Koh <yskoh@mellanox.com>
To: "Burakov, Anatoly" <anatoly.burakov@intel.com>,
Bruce Richardson <bruce.richardson@intel.com>,
Thomas Monjalon <thomas@monjalon.net>
Cc: dev <dev@dpdk.org>, dpdk stable <stable@dpdk.org>
Subject: Re: [dpdk-dev] [dpdk-stable] [PATCH v2] examples/multi_process: fix buffer underrun
Date: Tue, 30 Apr 2019 18:01:06 +0000
Message-ID: <4CD96011-26C5-438E-B962-BBFE4F16EB0D@mellanox.com>
In-Reply-To: <2BF52021-45E5-4E8B-A98F-AEF2EDC1C25E@mellanox.com>
> On Apr 11, 2019, at 12:18 AM, Yongseok Koh <yskoh@mellanox.com> wrote:
>
>> On Apr 10, 2019, at 12:41 PM, Yongseok Koh <yskoh@mellanox.com> wrote:
>>
>> For client_server_mp, the total number of mbufs in the mempool must be
>> calculated from its actual consumers: the per-port RX descriptor rings,
>> the per-client queues and per-port TX descriptor rings, and the per-core
>> mempool caches. With the old fixed per-client and per-port constants,
>> adding more clients can exhaust the pool and stop traffic.
>>
>> Fixes: af75078fece3 ("first public release")
>> Cc: stable@dpdk.org
>>
>> Signed-off-by: Yongseok Koh <yskoh@mellanox.com>
>> ---
>>
>> v2:
>> * split up the calculation
>
> Sorry, I forgot to specify 'v2' in the title of this email.
Ping. No ack and no merge on this v2 yet. Any feedback?
Yongseok
>
>> examples/multi_process/client_server_mp/mp_server/init.c | 13 +++++++++----
>> 1 file changed, 9 insertions(+), 4 deletions(-)
>>
>> diff --git a/examples/multi_process/client_server_mp/mp_server/init.c b/examples/multi_process/client_server_mp/mp_server/init.c
>> index 30c8e44bc0..3af5dc6994 100644
>> --- a/examples/multi_process/client_server_mp/mp_server/init.c
>> +++ b/examples/multi_process/client_server_mp/mp_server/init.c
>> @@ -37,8 +37,6 @@
>> #include "args.h"
>> #include "init.h"
>>
>> -#define MBUFS_PER_CLIENT 1536
>> -#define MBUFS_PER_PORT 1536
>> #define MBUF_CACHE_SIZE 512
>>
>> #define RTE_MP_RX_DESC_DEFAULT 1024
>> @@ -63,8 +61,15 @@ struct port_info *ports;
>> static int
>> init_mbuf_pools(void)
>> {
>> - const unsigned num_mbufs = (num_clients * MBUFS_PER_CLIENT) \
>> - + (ports->num_ports * MBUFS_PER_PORT);
>> + const unsigned int num_mbufs_server =
>> + RTE_MP_RX_DESC_DEFAULT * ports->num_ports;
>> + const unsigned int num_mbufs_client =
>> + num_clients * (CLIENT_QUEUE_RINGSIZE +
>> + RTE_MP_TX_DESC_DEFAULT * ports->num_ports);
>> + const unsigned int num_mbufs_mp_cache =
>> + (num_clients + 1) * MBUF_CACHE_SIZE;
>> + const unsigned int num_mbufs =
>> + num_mbufs_server + num_mbufs_client + num_mbufs_mp_cache;
>>
>> /* don't pass single-producer/single-consumer flags to mbuf create as it
>> * seems faster to use a cache instead */
>> --
>> 2.11.0
>>
>
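
For reference, here is a worked example of the new sizing formula. Assume
2 ports and 8 clients, and assume CLIENT_QUEUE_RINGSIZE = 128 and
RTE_MP_TX_DESC_DEFAULT = 1024; both are defined outside the hunk above, so
these values are illustrative only:

    num_mbufs_server   = 1024 * 2             =  2048
    num_mbufs_client   = 8 * (128 + 1024 * 2) = 17408
    num_mbufs_mp_cache = (8 + 1) * 512        =  4608
    num_mbufs          = 2048 + 17408 + 4608  = 24064

Under the same assumptions, the old fixed constants sized the pool at
(8 * 1536) + (2 * 1536) = 15360 mbufs, short of what the descriptor rings,
client queues, and per-core caches can hold at once, which is why adding
clients could stall traffic.

The computed total then feeds the pool creation just below the hunk. A
minimal sketch of that call, assuming the PKTMBUF_POOL_NAME macro and the
pktmbuf_pool variable from the example's shared code (the exact code in
init.c may differ):

    #include <stdlib.h>     /* EXIT_FAILURE */
    #include <rte_mbuf.h>   /* rte_pktmbuf_pool_create() */
    #include <rte_lcore.h>  /* rte_socket_id() */
    #include <rte_debug.h>  /* rte_exit() */

    /* Create the shared pool, sized from the actual consumers above. */
    pktmbuf_pool = rte_pktmbuf_pool_create(PKTMBUF_POOL_NAME, num_mbufs,
            MBUF_CACHE_SIZE, 0, RTE_MBUF_DEFAULT_BUF_SIZE,
            rte_socket_id());
    if (pktmbuf_pool == NULL)
        rte_exit(EXIT_FAILURE, "Cannot create mbuf pool\n");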
Thread overview: 12+ messages
2019-04-09 22:59 [dpdk-dev] [PATCH] " Yongseok Koh
2019-04-10  9:14 ` Bruce Richardson
2019-04-10 19:41   ` Yongseok Koh
2019-04-11  7:18 ` [dpdk-dev] [PATCH v2] " Yongseok Koh
2019-04-30 18:01   ` Yongseok Koh [this message]
2019-05-02 23:36 ` [dpdk-dev] [PATCH] " Thomas Monjalon