DPDK patches and discussions
From: Slava Ovsiienko <viacheslavo@mellanox.com>
To: Stephen Hemminger <stephen@networkplumber.org>,
	"dev@dpdk.org" <dev@dpdk.org>
Cc: "Nélio Laranjeiro" <nelio.laranjeiro@6wind.com>
Subject: Re: [dpdk-dev] [PATCH] common/mlx5: reduce size of netlink buffer
Date: Tue, 28 Apr 2020 09:34:06 +0000	[thread overview]
Message-ID: <AM4PR05MB326529544E6459A0BB7A32E3D2AC0@AM4PR05MB3265.eurprd05.prod.outlook.com> (raw)
In-Reply-To: <20200422223000.16602-1-stephen@networkplumber.org>

> -----Original Message-----
> From: dev <dev-bounces@dpdk.org> On Behalf Of Stephen Hemminger
> Sent: Thursday, April 23, 2020 1:30
> To: dev@dpdk.org
> Cc: Stephen Hemminger <stephen@networkplumber.org>; Nélio Laranjeiro
> <nelio.laranjeiro@6wind.com>
> Subject: [dpdk-dev] [PATCH] common/mlx5: reduce size of netlink buffer
> 
> Since the driver is just using netlink to receive acknowledgements for
> command requests, having a large (32K) buffer is unnecessary.

It is not used for acknowledgements only; it is also used to get the dumps
for the MAC addresses and the InfiniBand device ports. As we know, dump
operations might merge replies into one large message, so a smaller buffer
might prevent the dump from being received. What do you think about
allocating the buffer on device initialization and storing it somewhere in
the device structure?
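For illustration, the suggestion above might look like the following sketch: allocate the receive buffer once at device initialization and keep it in a per-device context, instead of a large on-stack array. The structure and function names here (`mlx5_nl_ctx` and friends) are hypothetical, not the actual driver code.

```c
/* Hypothetical sketch: keep the netlink receive buffer in a per-device
 * context allocated once at initialization, rather than on the stack.
 */
#include <stdlib.h>
#include <stddef.h>

#define MLX5_NL_BUF_SIZE (32 * 1024)

struct mlx5_nl_ctx {
	char *recv_buf;       /* heap buffer reused for all replies */
	size_t recv_buf_size; /* capacity of recv_buf */
};

static int
mlx5_nl_ctx_init(struct mlx5_nl_ctx *ctx)
{
	ctx->recv_buf = malloc(MLX5_NL_BUF_SIZE);
	if (ctx->recv_buf == NULL)
		return -1;
	ctx->recv_buf_size = MLX5_NL_BUF_SIZE;
	return 0;
}

static void
mlx5_nl_ctx_free(struct mlx5_nl_ctx *ctx)
{
	free(ctx->recv_buf);
	ctx->recv_buf = NULL;
	ctx->recv_buf_size = 0;
}
```

This keeps the large buffer off the thread stack while still allowing it to be big enough for merged dump replies.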

With best regards,
Slava

> Allocating large buffers on the stack will break if an application is using
> smaller per-thread stacks.
> 
> It looks like the original code intended to use a smaller buffer for the read
> than for the socket, so keep that set of defines.
> 
> Fixes: ccdcba53a3f4 ("net/mlx5: use Netlink to add/remove MAC addresses")
> Cc: nelio.laranjeiro@6wind.com
> Signed-off-by: Stephen Hemminger <stephen@networkplumber.org>
> ---
>  drivers/common/mlx5/mlx5_nl.c | 4 ++--
>  1 file changed, 2 insertions(+), 2 deletions(-)
> 
> diff --git a/drivers/common/mlx5/mlx5_nl.c b/drivers/common/mlx5/mlx5_nl.c
> index d9f0d4129e35..847e78dbcea6 100644
> --- a/drivers/common/mlx5/mlx5_nl.c
> +++ b/drivers/common/mlx5/mlx5_nl.c
> @@ -28,7 +28,7 @@
> 
> 
>  /* Size of the buffer to receive kernel messages */
> -#define MLX5_NL_BUF_SIZE (32 * 1024)
> +#define MLX5_NL_BUF_SIZE 4096
>  /* Send buffer size for the Netlink socket */
>  #define MLX5_SEND_BUF_SIZE 32768
>  /* Receive buffer size for the Netlink socket */
> @@ -330,7 +330,7 @@ mlx5_nl_recv(int nlsk_fd, uint32_t sn, int (*cb)(struct nlmsghdr *, void *arg),
>  	     void *arg)
>  {
>  	struct sockaddr_nl sa;
> -	char buf[MLX5_RECV_BUF_SIZE];
> +	char buf[MLX5_NL_BUF_SIZE];
>  	struct iovec iov = {
>  		.iov_base = buf,
>  		.iov_len = sizeof(buf),
> --
> 2.20.1
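As background for the truncation concern raised in the reply, here is a hedged, self-contained sketch (not the driver's actual code) of how a datagram receive path can detect that a kernel reply did not fit into the caller's buffer. On Linux, passing MSG_TRUNC to recvmsg() makes it return the real datagram length even when only part of it was copied, so the caller can tell when a merged dump reply was cut short.

```c
/* Sketch: receive one datagram and report truncation.  With MSG_TRUNC
 * in flags, recvmsg() returns the full datagram length even if only
 * buf_size bytes fit into the buffer, so len > buf_size means the
 * reply was truncated and a larger buffer is required.
 */
#include <string.h>
#include <sys/socket.h>
#include <sys/uio.h>

static ssize_t
nl_recv_checked(int fd, char *buf, size_t buf_size)
{
	struct iovec iov = { .iov_base = buf, .iov_len = buf_size };
	struct msghdr msg = {
		.msg_iov = &iov,
		.msg_iovlen = 1,
	};
	ssize_t len = recvmsg(fd, &msg, MSG_TRUNC);

	if (len > (ssize_t)buf_size) {
		/* The kernel had more data than fits in buf: a merged
		 * dump reply was cut short; signal the caller. */
		return -1;
	}
	return len;
}
```

This is exactly the failure mode a too-small MLX5_NL_BUF_SIZE could trigger for multi-part dump replies: the recvmsg() call succeeds, but part of the message is silently dropped unless the caller checks for truncation.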



Thread overview: 5+ messages
2020-04-22 22:30 Stephen Hemminger
2020-04-28  9:34 ` Slava Ovsiienko [this message]
2020-05-14  7:11 ` [dpdk-dev] [PATCH v2] common/mlx5: fix netlink buffer allocation from stack Viacheslav Ovsiienko
2020-05-17 12:02   ` Matan Azrad
2020-05-17 12:51   ` Raslan Darawsheh
