From: Anand Prasad <anandnprasad@yahoo.com>
To: Thomas Monjalon <thomas@monjalon.net>
Cc: "users@dpdk.org" <users@dpdk.org>
Subject: Re: [dpdk-users] DPDK Performance tips
Date: Mon, 19 Feb 2018 14:26:53 +0000 (UTC) [thread overview]
Message-ID: <1951877383.1038987.1519050413410@mail.yahoo.com> (raw)
In-Reply-To: <4438207.qMLY8BktU0@xps>
Hi Thomas, et al.,
Thanks for the response.
I have followed the advice in the links you provided, but I still see the problem.
Recently, I studied the dpdk-pktgen application and developed an application very similar to it; that is, I used the same device, rx/tx, memory pool, and cache size configurations that pktgen uses. pktgen seems to work fine without any Rx packet drops, but my application reports drops.
Any guess as to what could be wrong with my application?
I see packet drops in my application when I launch some other application (say, a browser or a system monitor); in the same situation, pktgen works without any packet drops.
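For diagnosis, here is a minimal sketch of how the drop counters can be read (assuming the standard rte_eth_stats API; the port id and the printing are illustrative, not my actual code):

#include <inttypes.h>
#include <stdio.h>
#include <rte_ethdev.h>

/* Sketch: read the per-port counters to see where drops are counted.
 * imissed usually means the NIC ran out of rx descriptors (e.g. the
 * rx core was preempted by another process), while rx_nombuf means
 * mbuf allocation from the pool failed. */
static void
print_rx_drop_counters(uint16_t port_id)
{
    struct rte_eth_stats stats;

    if (rte_eth_stats_get(port_id, &stats) == 0)
        printf("port %u: ipackets=%" PRIu64 " imissed=%" PRIu64
               " ierrors=%" PRIu64 " rx_nombuf=%" PRIu64 "\n",
               port_id, stats.ipackets, stats.imissed,
               stats.ierrors, stats.rx_nombuf);
}

If the drops only show up in imissed while another application is running, the usual suspect is the rx lcore being preempted, so pinning it to a core isolated from the scheduler (e.g. via isolcpus) is worth checking.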
Regards,
Anand Prasad
On Wednesday, 13 December 2017 4:41 PM, Thomas Monjalon <thomas@monjalon.net> wrote:
13/12/2017 09:14, Anand Prasad:
> Hi DPDK team,
> Can anyone please share tips to get better DPDK performance? I have tried to run DPDK test applications on various PC configurations with different CPU speeds, RAM sizes/speeds, and PCIe x16 2nd- and 3rd-generation connections, but I don't get very consistent results.
> The same test application does not behave the same on two PCs with the same hardware configuration. The general observation is that performance is better on an Intel motherboard with a 3rd-generation PCIe slot. When tried on a Gigabyte motherboard (even with higher CPU and RAM speeds), performance was very poor.
> The performance issue I am facing is packet drops on the Rx side.
> Of two PCs with exactly the same hardware configuration, one drops packets after a few hours, while on the other I don't observe any packet drops.
> I would highly appreciate a quick response.
> Regards,
> Anand Prasad
This is very complicated because it really depends on the hardware.
Managing performance requires a very good knowledge of the hardware.
You can find some basic advice in this guide for some Intel hardware:
http://dpdk.org/doc/guides/linux_gsg/nic_perf_intel_platform.html
You will also find some information in the driver guides. Example:
http://dpdk.org/doc/guides/nics/mlx5.html#performance-tuning
From vhuertas@gmail.com Wed Feb 21 18:20:22 2018
From: Victor Huertas <vhuertas@gmail.com>
Date: Wed, 21 Feb 2018 18:20:21 +0100
Message-ID: <CAGxG5chJncxbr159ZE8st1zLeqOzNzFmUwA3PqkybOTUb76zjg@mail.gmail.com>
To: users@dpdk.org
Subject: [dpdk-users] rte_eth_rx_queue_setup is returning error -28 (ENOSPC)
Hi all,
I am trying to build an application in which several rx threads capture
packets from different queues of the same NIC (to increase performance
using RSS on a single NIC). I am basing the app on the l3fwd-thread example.
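For reference, the kind of setup I mean is roughly the following (a sketch only, not my actual code; the queue count, ring sizes, and mempool are illustrative):

#include <rte_ethdev.h>

/* Sketch: configure one port with nb_rxq rx queues and hash-based
 * (RSS) packet distribution, then set up each rx queue with its own
 * descriptor ring backed by the mbuf pool mp. */
static int
setup_rss_port_sketch(uint16_t port_id, uint16_t nb_rxq,
                      struct rte_mempool *mp)
{
    struct rte_eth_conf conf = {
        .rxmode = { .mq_mode = ETH_MQ_RX_RSS },
        .rx_adv_conf.rss_conf = { .rss_hf = ETH_RSS_IP },
    };
    uint16_t q;
    int ret;

    ret = rte_eth_dev_configure(port_id, nb_rxq, 1, &conf);
    if (ret < 0)
        return ret;

    for (q = 0; q < nb_rxq; q++) {
        ret = rte_eth_rx_queue_setup(port_id, q, 512,
                rte_eth_dev_socket_id(port_id), NULL, mp);
        if (ret < 0)    /* this is where I get -28 */
            return ret;
    }
    return rte_eth_tx_queue_setup(port_id, 0, 512,
            rte_eth_dev_socket_id(port_id), NULL);
}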
However, when I try to set up an rx queue on a port using the
rte_eth_rx_queue_setup function, it returns error -28 (ENOSPC).
Looking at the source code of rte_ethdev.c, where this function is
implemented, I found the only place where ENOSPC is returned (see below):
if (mp->private_data_size < sizeof(struct rte_pktmbuf_pool_private)) {
    RTE_PMD_DEBUG_TRACE("%s private_data_size %d < %d\n",
        mp->name, (int)mp->private_data_size,
        (int)sizeof(struct rte_pktmbuf_pool_private));
    return -ENOSPC;
}
Stepping through the code (using Eclipse), I saw that the
private_data_size of the pktmbuf_pool was set to zero, and that was
why it returned -ENOSPC. Nevertheless, in the init_mem function of the
example, when the pktmbuf_pool is created, the value passed for the
private data size parameter is 0:
if (pktmbuf_pool[socketid] == NULL) {
    snprintf(s, sizeof(s), "mbuf_pool_%d", socketid);
    pktmbuf_pool[socketid] = rte_pktmbuf_pool_create(s, nb_mbuf,
        MEMPOOL_CACHE_SIZE, 0,
        RTE_MBUF_DEFAULT_BUF_SIZE, socketid);
    if (pktmbuf_pool[socketid] == NULL)
        rte_exit(EXIT_FAILURE,
            "Cannot init mbuf pool on socket %d\n", socketid);
    else
        printf("Allocated mbuf pool on socket %d\n", socketid);

#if (APP_LOOKUP_METHOD == APP_LOOKUP_LPM)
    setup_lpm(socketid);
#else
    setup_hash(socketid);
#endif
}
So this seems contradictory: why does the example initialize the
private data size to 0 if that then provokes a bad rx queue setup?
Or am I understanding it wrongly?
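For what it's worth, my current reading (an assumption from the rte_mbuf headers, so please correct me if wrong) is that the 0 in the call above is the per-mbuf priv_size, which is a different thing from the pool-level private_data_size that the check compares:

#include <rte_mbuf.h>

/* Sketch of the distinction as I understand it (an assumption, not
 * verified): the 4th argument of rte_pktmbuf_pool_create() is the
 * private area appended to each mbuf, while mp->private_data_size
 * (what rte_eth_rx_queue_setup checks) belongs to the pool itself
 * and should be set to sizeof(struct rte_pktmbuf_pool_private) by
 * the create function internally. Name and sizes are illustrative. */
static struct rte_mempool *
create_pool_sketch(int socket_id)
{
    return rte_pktmbuf_pool_create("sketch_pool",
        8192,   /* number of mbufs (illustrative) */
        256,    /* per-lcore cache size (illustrative) */
        0,      /* per-mbuf priv_size: 0 should be fine here */
        RTE_MBUF_DEFAULT_BUF_SIZE,
        socket_id);
}

If that reading is right, a pool created through rte_pktmbuf_pool_create should pass the check even with 0 there, which makes the -ENOSPC I am seeing even stranger.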
Thanks for your attention,
--
Victor
Thread overview: 3+ messages
[not found] <401616826.3643255.1513152888444.ref@mail.yahoo.com>
2017-12-13 8:14 ` Anand Prasad
2017-12-13 11:11 ` Thomas Monjalon
2018-02-19 14:26 ` Anand Prasad [this message]