DPDK patches and discussions
From: "Jiawei(Jonny) Wang" <jiaweiw@nvidia.com>
To: Ferruh Yigit <ferruh.yigit@intel.com>,
	"wenzhuo.lu@intel.com" <wenzhuo.lu@intel.com>,
	"beilei.xing@intel.com" <beilei.xing@intel.com>,
	"bernard.iremonger@intel.com" <bernard.iremonger@intel.com>,
	Ori Kam <orika@nvidia.com>,
	NBU-Contact-Thomas Monjalon <thomas@monjalon.net>,
	"Raslan Darawsheh" <rasland@nvidia.com>
Cc: "dev@dpdk.org" <dev@dpdk.org>
Subject: Re: [dpdk-dev] [PATCH] app/testpmd: fix testpmd packets dump overlapping
Date: Fri, 13 Nov 2020 16:30:08 +0000	[thread overview]
Message-ID: <BL0PR12MB2419887ADE03673FBF5C7561C6E60@BL0PR12MB2419.namprd12.prod.outlook.com> (raw)
In-Reply-To: <1c967e7c-d929-2126-6efe-48c905588e65@intel.com>

Hi Ferruh, 

Thanks for your comments.

> -----Original Message-----
> From: Ferruh Yigit <ferruh.yigit@intel.com>
> Sent: Friday, November 13, 2020 12:45 AM
> To: Jiawei(Jonny) Wang <jiaweiw@nvidia.com>; wenzhuo.lu@intel.com;
> beilei.xing@intel.com; bernard.iremonger@intel.com; Ori Kam
> <orika@nvidia.com>; NBU-Contact-Thomas Monjalon
> <thomas@monjalon.net>; Raslan Darawsheh <rasland@nvidia.com>
> Cc: dev@dpdk.org
> Subject: Re: [dpdk-dev] [PATCH] app/testpmd: fix testpmd packets dump
> overlapping
> 
> On 11/12/2020 8:36 AM, Jiawei Wang wrote:
> > When testpmd enables verbosity for the received packets, and two
> > packets are received at the same time, for example a sampling packet
> > and a normal packet, the dump output of these packets may overlap,
> > because multiple cores handle the multiple queues simultaneously.
> >
> 
> Hi Jiawei,
> 
> Is the problem observed only when sampling is enabled? Do you observe
> the problem when multiple cores do Rx?
We saw this problem happen easily with sampling.
And yes, I can also see the overlapping with multiple-core Rx, if the traffic generator runs testpmd with --txonly-multi-flow.
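For reference, a setup along these lines reproduces it (the core counts, queue counts and EAL options here are illustrative, not the exact commands we used):

  # generator side: txonly forwarding with multiple flows
  dpdk-testpmd -l 0-4 -n 4 -- --forward-mode=txonly --txonly-multi-flow

  # receiving side: several cores polling several queues, verbose dump on
  dpdk-testpmd -l 0-4 -n 4 -- -i --rxq=4 --txq=4 --nb-cores=4
  testpmd> set fwd rxonly
  testpmd> set verbose 1
  testpmd> start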
> 
> > The patch uses one string buffer that collects all the packet dump
> > output into this buffer and then prints it out at the end, which
> > guarantees that the dump output is printed separately per packet.
> >
> > Fixes: d862c45 ("app/testpmd: move dumping packets to a separate function")
> >
> > Signed-off-by: Jiawei Wang <jiaweiw@nvidia.com>
> > ---
> >   app/test-pmd/util.c | 238 ++++++++++++++++++++++++++++++++++++++--------------
> >   1 file changed, 177 insertions(+), 61 deletions(-)
> >
> > diff --git a/app/test-pmd/util.c b/app/test-pmd/util.c index
> > 649bf8f..47b75b0 100644
> > --- a/app/test-pmd/util.c
> > +++ b/app/test-pmd/util.c
> > @@ -12,15 +12,20 @@
> >   #include <rte_vxlan.h>
> >   #include <rte_ethdev.h>
> >   #include <rte_flow.h>
> > +#include <rte_log.h>
> >
> >   #include "testpmd.h"
> >
> > -static inline void
> > -print_ether_addr(const char *what, const struct rte_ether_addr *eth_addr)
> > +#define MAX_STRING_LEN 8192
> > +
> > +static inline int
> > +print_ether_addr(const char *what, const struct rte_ether_addr *eth_addr,
> > +		 char print_buf[], int buf_size)
> >   {
> >   	char buf[RTE_ETHER_ADDR_FMT_SIZE];
> > +
> > 	rte_ether_format_addr(buf, RTE_ETHER_ADDR_FMT_SIZE, eth_addr);
> > -	printf("%s%s", what, buf);
> > +	return snprintf(print_buf, buf_size, "%s%s", what, buf);
> >   }
> >
> >   static inline bool
> > @@ -74,13 +79,18 @@
> >   	uint32_t vx_vni;
> >   	const char *reason;
> >   	int dynf_index;
> > +	int buf_size = MAX_STRING_LEN * nb_pkts;
> > +	char print_buf[buf_size];
> 
> This is a large value to allocate from the stack, especially with larger
> burst sizes.
> Allocating from the heap is an option, but it has a cost.
>
> So what do you think about printing per packet instead of per burst? This
> also avoids delaying the display of the packet log.
Good idea; then we only need to keep MAX_STRING_LEN of stack, and each packet of the burst can reuse the same buffer, along the lines of the sketch below.
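Roughly what I have in mind, as a minimal sketch (illustrative only, not the V2 patch; the signature is simplified and the single snprintf line stands in for the existing per-packet formatting code):

  #include <stdio.h>
  #include <rte_mbuf.h>
  #include "testpmd.h"   /* for TESTPMD_LOG */

  #define MAX_STRING_LEN 8192

  static void
  dump_pkt_burst(struct rte_mbuf *pkts[], uint16_t nb_pkts)
  {
      char print_buf[MAX_STRING_LEN]; /* one packet's worth, reused */
      uint16_t i;

      for (i = 0; i < nb_pkts; i++) {
          int cur_len = 0;

          /* format one packet into the buffer ... */
          cur_len += snprintf(print_buf + cur_len,
                              MAX_STRING_LEN - cur_len,
                              "pkt %u: len=%u\n", i, pkts[i]->pkt_len);
          /* ... then flush it before the next packet reuses the stack */
          TESTPMD_LOG(INFO, "%s", print_buf);
      }
  }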
> 
> > +	int cur_len = 0;
> >
> > +	memset(print_buf, 0, sizeof(print_buf));
> >   	if (!nb_pkts)
> >   		return;
> > -	printf("port %u/queue %u: %s %u packets\n",
> > -		port_id, queue,
> > -	       is_rx ? "received" : "sent",
> > -	       (unsigned int) nb_pkts);
> > +	cur_len += snprintf(print_buf + cur_len, buf_size - cur_len,
> > +			    "port %u/queue %u: %s %u packets\n",
> > +			    port_id, queue,
> > +			    is_rx ? "received" : "sent",
> > +			    (unsigned int) nb_pkts);
> >   	for (i = 0; i < nb_pkts; i++) {
> >   		int ret;
> >   		struct rte_flow_error error;
> > @@ -93,95 +103,183 @@
> >   		is_encapsulation = RTE_ETH_IS_TUNNEL_PKT(packet_type);
> >   		ret = rte_flow_get_restore_info(port_id, mb, &info, &error);
> >   		if (!ret) {
> > -			printf("restore info:");
> > +			cur_len += snprintf(print_buf + cur_len,
> > +					    buf_size - cur_len,
> > +					    "restore info:");
> 
> This is not safe.
> 'snprintf' returns the required buffer size, not the number of written
> chars, so "buf_size - cur_len" can become negative at some point, and
> since the 'size' parameter is unsigned, a negative value will be
> converted into a very large number and this will corrupt the stack.
> 
Yes, so we need to add a check after each snprintf call; if truncation happened, print the current buffer and exit the dump. A wrapper along the lines sketched below could centralize that check.
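Something like this (a sketch only; 'buf_append' is a hypothetical name, not from the patch):

  #include <stdarg.h>
  #include <stdio.h>

  /*
   * Append formatted text to print_buf. On truncation, clamp *cur_len
   * to buf_size so "buf_size - *cur_len" can never go negative, and
   * return -1 so the caller can stop formatting and flush the buffer.
   */
  static int
  buf_append(char *print_buf, int buf_size, int *cur_len,
             const char *fmt, ...)
  {
      va_list ap;
      int ret;

      if (*cur_len >= buf_size)
          return -1;              /* already truncated */
      va_start(ap, fmt);
      ret = vsnprintf(print_buf + *cur_len, buf_size - *cur_len, fmt, ap);
      va_end(ap);
      if (ret < 0 || ret >= buf_size - *cur_len) {
          *cur_len = buf_size;    /* mark as truncated */
          return -1;
      }
      *cur_len += ret;
      return 0;
  }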
> <...>
> 
> > +	if (cur_len >= buf_size)
> > +		TESTPMD_LOG(ERR, "no enough buffer (size: %d) to store "
> > +			    "the current dump output (size: %d)\n",
> > +			    buf_size, cur_len);
> 
> Instead of this error log, which I believe is not very useful, why not
> append some chars at the end of the actual buffer to say it is
> truncated, something like:
> 
>   if (truncated)
>     TESTPMD_LOG(INFO, "%s ...", print_buf);
>   else
>     TESTPMD_LOG(INFO, "%s", print_buf);
OK, I will remove the error log and append a message like " the dump output was truncated ...", roughly as sketched below.
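For instance (just a sketch; the exact final print may differ in V2):

  if (cur_len >= buf_size)
      TESTPMD_LOG(INFO, "%s ... the dump output was truncated\n",
                  print_buf);
  else
      TESTPMD_LOG(INFO, "%s", print_buf);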

I will send a V2 patch with the above changes.

Thanks.
Jonny


Thread overview: 22+ messages
2020-11-12  8:36 Jiawei Wang
2020-11-12 16:44 ` Ferruh Yigit
2020-11-13 16:30   ` Jiawei(Jonny) Wang [this message]
2020-11-14 12:01 ` [dpdk-dev] [PATCH v2] " Jiawei Wang
2020-11-17 12:06   ` Ferruh Yigit
2020-11-18 18:09     ` Jiawei(Jonny) Wang
2020-11-19 10:30       ` Ferruh Yigit
2020-11-20 17:35   ` [dpdk-dev] [PATCH v3] " Jiawei Wang
2020-11-20 17:50     ` Ferruh Yigit
2020-11-23  6:13       ` Jiawei(Jonny) Wang
2020-11-23 10:22         ` Ferruh Yigit
2020-11-23 11:03           ` Jiawei(Jonny) Wang
2020-11-23  8:29     ` [dpdk-dev] [PATCH v4] " Jiawei Wang
2020-11-23 11:00       ` [dpdk-dev] [PATCH v5] " Jiawei Wang
2020-11-23 14:20         ` Ferruh Yigit
2021-01-06 14:13         ` [dpdk-dev] [PATCH v6] " Jiawei Wang
2021-01-06 15:54           ` Stephen Hemminger
2021-01-08  8:31             ` Jiawei(Jonny) Wang
2021-01-06 15:55           ` Stephen Hemminger
2021-01-08  8:31             ` Jiawei(Jonny) Wang
2021-01-20  6:50           ` [dpdk-dev] [PATCH v7] " Jiawei Wang
2021-01-20 17:57             ` Ferruh Yigit
