From: Alejandro Lucero <alejandro.lucero@netronome.com>
To: Thomas Monjalon <thomas.monjalon@6wind.com>
Cc: dev <dev@dpdk.org>, Bert van Leeuwen <bert.vanleeuwen@netronome.com>
Subject: Re: [dpdk-dev] [PATCH] ethdev: check number of queues less than RTE_ETHDEV_QUEUE_STAT_CNTRS
Date: Thu, 10 Nov 2016 15:43:08 +0000 [thread overview]
Message-ID: <CAD+H992CDrSOow_pmhcSmM4n5mGUmUM+8rTzDaC+r7eV95LSog@mail.gmail.com> (raw)
In-Reply-To: <3059112.zVgrzqmBCq@xps13>
On Thu, Nov 10, 2016 at 2:42 PM, Thomas Monjalon <thomas.monjalon@6wind.com>
wrote:
> 2016-11-10 14:00, Alejandro Lucero:
> > From: Bert van Leeuwen <bert.vanleeuwen@netronome.com>
> >
> > A device can have more queues than RTE_ETHDEV_QUEUE_STAT_CNTRS, the
> > array size used inside struct rte_eth_stats. Ideally, DPDK would be
> > built with RTE_ETHDEV_QUEUE_STAT_CNTRS set to the maximum number of
> > queues a device can support, 65536, since a uint16_t holds the RX and
> > TX queue counts. But of course, having such big arrays inside
> > struct rte_eth_stats is not a good idea.
>
> RTE_ETHDEV_QUEUE_STAT_CNTRS comes from a limitation in Intel devices.
> They have a limited number of registers to store the stats per queue.
>
> > Current default value is 16, which could likely be changed to 32 or 64
> > without too much opposition. And maybe it would be a good idea to modify
> > struct rte_eth_stats for allowing dynamically allocated arrays and maybe
> > some extra fields for keeping the array sizes.
>
> Yes
> and? what is your issue exactly? with which device?
> Please explain the idea brought by your patch.
>
Netronome NFP devices support 128 queues, and future versions will support
1024.
A particular VF (our PMD only supports VFs) could get as many as 128 queues.
Although that is unlikely, it could be an option for some client.
Clients want to use the DPDK shipped with their distribution, so changing
RTE_ETHDEV_QUEUE_STAT_CNTRS depending on the devices present is not an
option.
We would be happy if RTE_ETHDEV_QUEUE_STAT_CNTRS could be set to 1024,
covering current and future requirements for our cards, but maybe having
such big arrays inside struct rte_eth_stats is something people do not
want.
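To put rough numbers on it, here is a back-of-the-envelope sketch (not DPDK
code; it assumes the five per-queue uint64_t arrays are q_ipackets,
q_opackets, q_ibytes, q_obytes and q_errors, as I read rte_ethdev.h):

```c
#include <stdint.h>
#include <stddef.h>

/* Five per-queue uint64_t arrays inside struct rte_eth_stats:
 * q_ipackets, q_opackets, q_ibytes, q_obytes, q_errors. */
#define PER_QUEUE_ARRAYS 5

/* Bytes the per-queue stats occupy for a given counter count. */
static inline size_t
queue_stats_bytes(size_t stat_cntrs)
{
	/* each counter slot costs one uint64_t in each of the five arrays */
	return stat_cntrs * PER_QUEUE_ARRAYS * sizeof(uint64_t);
}
/* queue_stats_bytes(16)   == 640 bytes with today's default;
 * queue_stats_bytes(1024) == 40960 bytes, i.e. 40 KiB per struct. */
```

So going from 16 to 1024 grows every struct rte_eth_stats by roughly 40 KiB,
which is the cost people may object to.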
A solution could be to create such arrays dynamically, sized for the device
the stats are read from. For example, rte_eth_dev_configure could take an
extra parameter for allocating an rte_eth_stats struct, sized from the
nb_rx_q and nb_tx_q params already given to that function.
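Just to make the idea concrete, a hypothetical sketch (none of these names
exist in DPDK; dyn_eth_stats and its helpers are invented here purely for
illustration) could look like:

```c
#include <stdint.h>
#include <stdlib.h>

/* Hypothetical per-device stats whose per-queue arrays are sized at
 * configure time from nb_rx_q/nb_tx_q, instead of fixed
 * RTE_ETHDEV_QUEUE_STAT_CNTRS-sized arrays. */
struct dyn_eth_stats {
	uint16_t nb_rx_q;      /* length of the RX per-queue arrays */
	uint16_t nb_tx_q;      /* length of the TX per-queue arrays */
	uint64_t *q_ipackets;  /* nb_rx_q entries */
	uint64_t *q_ibytes;    /* nb_rx_q entries */
	uint64_t *q_opackets;  /* nb_tx_q entries */
	uint64_t *q_obytes;    /* nb_tx_q entries */
};

/* Allocate zeroed stats sized for this device's configured queues. */
static struct dyn_eth_stats *
dyn_eth_stats_alloc(uint16_t nb_rx_q, uint16_t nb_tx_q)
{
	struct dyn_eth_stats *s = calloc(1, sizeof(*s));

	if (s == NULL)
		return NULL;
	s->nb_rx_q = nb_rx_q;
	s->nb_tx_q = nb_tx_q;
	s->q_ipackets = calloc(nb_rx_q, sizeof(uint64_t));
	s->q_ibytes = calloc(nb_rx_q, sizeof(uint64_t));
	s->q_opackets = calloc(nb_tx_q, sizeof(uint64_t));
	s->q_obytes = calloc(nb_tx_q, sizeof(uint64_t));
	if (!s->q_ipackets || !s->q_ibytes ||
	    !s->q_opackets || !s->q_obytes) {
		free(s->q_ipackets);
		free(s->q_ibytes);
		free(s->q_opackets);
		free(s->q_obytes);
		free(s);
		return NULL;
	}
	return s;
}

static void
dyn_eth_stats_free(struct dyn_eth_stats *s)
{
	if (s == NULL)
		return;
	free(s->q_ipackets);
	free(s->q_ibytes);
	free(s->q_opackets);
	free(s->q_obytes);
	free(s);
}
```

With something like this, a 4-queue device pays for 4 counters and a
128-queue NFP VF pays for 128, instead of everyone paying for the
compile-time maximum.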
Maybe the first thing to find out is what people think about simply
increasing RTE_ETHDEV_QUEUE_STAT_CNTRS to 1024.
So Thomas, what do you think about this?
Thread overview: 9+ messages
2016-11-10 14:00 Alejandro Lucero
2016-11-10 14:42 ` Thomas Monjalon
2016-11-10 15:43 ` Alejandro Lucero [this message]
2016-11-10 16:01 ` Thomas Monjalon
2016-11-10 16:04 ` Alejandro Lucero
2016-11-11 9:16 ` Alejandro Lucero
2016-11-11 9:29 ` Thomas Monjalon
2016-11-11 9:32 ` Alejandro Lucero
2016-11-11 9:48 ` Bert van Leeuwen