* [dpdk-dev] [PATCH 0/2] i40e: enlarge the number of supported queues
@ 2015-09-20 14:51 Helin Zhang
2015-09-20 14:51 ` [dpdk-dev] [PATCH 1/2] i40e: adjust the number of queues for RSS Helin Zhang
` (2 more replies)
0 siblings, 3 replies; 20+ messages in thread
From: Helin Zhang @ 2015-09-20 14:51 UTC (permalink / raw)
To: dev; +Cc: yulong.pei
There was a software limitation of 64 queues per port; it should be enlarged to
the hardware allowed maximum. As all the queues are shared among the PF,
VFs and VMDq, the number of queues supported by each of them may vary
across different use cases.
Helin Zhang (2):
i40e: adjust the number of queues for RSS
i40e: Enlarge the number of supported queues
config/common_bsdapp | 3 +-
config/common_linuxapp | 3 +-
drivers/net/i40e/i40e_ethdev.c | 146 ++++++++++++++++----------------------
drivers/net/i40e/i40e_ethdev.h | 8 +++
drivers/net/i40e/i40e_ethdev_vf.c | 2 +-
5 files changed, 74 insertions(+), 88 deletions(-)
--
1.9.3
* [dpdk-dev] [PATCH 1/2] i40e: adjust the number of queues for RSS
2015-09-20 14:51 [dpdk-dev] [PATCH 0/2] i40e: enlarge the number of supported queues Helin Zhang
@ 2015-09-20 14:51 ` Helin Zhang
2015-09-20 14:51 ` [dpdk-dev] [PATCH 2/2] i40e: Enlarge the number of supported queues Helin Zhang
2015-10-22 7:28 ` [dpdk-dev] [PATCH v2 0/2] " Helin Zhang
2 siblings, 0 replies; 20+ messages in thread
From: Helin Zhang @ 2015-09-20 14:51 UTC (permalink / raw)
To: dev; +Cc: yulong.pei
It adjusts the number of queues for RSS from a power of 2 to any number, as
long as it does not exceed the hardware allowed maximum.
Signed-off-by: Helin Zhang <helin.zhang@intel.com>
---
drivers/net/i40e/i40e_ethdev.c | 8 ++++----
drivers/net/i40e/i40e_ethdev_vf.c | 2 +-
2 files changed, 5 insertions(+), 5 deletions(-)
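For illustration only (not part of the patch): the effect of dropping the
power-of-two alignment, assuming 48 configured Rx queues and a per-TC hardware
limit of 64 (the value of I40E_MAX_Q_PER_TC is assumed here):
uint16_t nb_rx_queues = 48;
uint16_t old_num = i40e_align_floor(nb_rx_queues);           /* 32: rounded down to a power of 2 */
uint16_t new_num = RTE_MIN(nb_rx_queues, I40E_MAX_Q_PER_TC); /* 48: only capped by the HW limit  */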
diff --git a/drivers/net/i40e/i40e_ethdev.c b/drivers/net/i40e/i40e_ethdev.c
index 2dd9fdc..4b70588 100644
--- a/drivers/net/i40e/i40e_ethdev.c
+++ b/drivers/net/i40e/i40e_ethdev.c
@@ -5153,12 +5153,12 @@ i40e_pf_config_rss(struct i40e_pf *pf)
* If both VMDQ and RSS enabled, not all of PF queues are configured.
* It's necessary to calulate the actual PF queues that are configured.
*/
- if (pf->dev_data->dev_conf.rxmode.mq_mode & ETH_MQ_RX_VMDQ_FLAG) {
+ if (pf->dev_data->dev_conf.rxmode.mq_mode & ETH_MQ_RX_VMDQ_FLAG)
num = i40e_pf_calc_configured_queues_num(pf);
- num = i40e_align_floor(num);
- } else
- num = i40e_align_floor(pf->dev_data->nb_rx_queues);
+ else
+ num = pf->dev_data->nb_rx_queues;
+ num = RTE_MIN(num, I40E_MAX_Q_PER_TC);
PMD_INIT_LOG(INFO, "Max of contiguous %u PF queues are configured",
num);
diff --git a/drivers/net/i40e/i40e_ethdev_vf.c b/drivers/net/i40e/i40e_ethdev_vf.c
index b694400..b15ff7b 100644
--- a/drivers/net/i40e/i40e_ethdev_vf.c
+++ b/drivers/net/i40e/i40e_ethdev_vf.c
@@ -1915,7 +1915,7 @@ i40evf_config_rss(struct i40e_vf *vf)
return 0;
}
- num = i40e_align_floor(vf->dev_data->nb_rx_queues);
+ num = RTE_MIN(vf->dev_data->nb_rx_queues, I40E_MAX_QP_NUM_PER_VF);
/* Fill out the look up table */
for (i = 0, j = 0; i < nb_q; i++, j++) {
if (j >= num)
--
1.9.3
* [dpdk-dev] [PATCH 2/2] i40e: Enlarge the number of supported queues
2015-09-20 14:51 [dpdk-dev] [PATCH 0/2] i40e: enlarge the number of supported queues Helin Zhang
2015-09-20 14:51 ` [dpdk-dev] [PATCH 1/2] i40e: adjust the number of queues for RSS Helin Zhang
@ 2015-09-20 14:51 ` Helin Zhang
2015-09-21 7:41 ` David Marchand
2015-10-19 8:29 ` Wu, Jingjing
2015-10-22 7:28 ` [dpdk-dev] [PATCH v2 0/2] " Helin Zhang
2 siblings, 2 replies; 20+ messages in thread
From: Helin Zhang @ 2015-09-20 14:51 UTC (permalink / raw)
To: dev; +Cc: yulong.pei
It enlarges the number of supported queues to hardware allowed
maximum. There was a software limitation of 64 per physical port
which is not reasonable.
Signed-off-by: Helin Zhang <helin.zhang@intel.com>
---
config/common_bsdapp | 3 +-
config/common_linuxapp | 3 +-
drivers/net/i40e/i40e_ethdev.c | 138 +++++++++++++++++------------------------
drivers/net/i40e/i40e_ethdev.h | 8 +++
4 files changed, 69 insertions(+), 83 deletions(-)
diff --git a/config/common_bsdapp b/config/common_bsdapp
index b37dcf4..dac6dad 100644
--- a/config/common_bsdapp
+++ b/config/common_bsdapp
@@ -141,7 +141,7 @@ CONFIG_RTE_LIBRTE_KVARGS=y
CONFIG_RTE_LIBRTE_ETHER=y
CONFIG_RTE_LIBRTE_ETHDEV_DEBUG=n
CONFIG_RTE_MAX_ETHPORTS=32
-CONFIG_RTE_MAX_QUEUES_PER_PORT=256
+CONFIG_RTE_MAX_QUEUES_PER_PORT=1024
CONFIG_RTE_LIBRTE_IEEE1588=n
CONFIG_RTE_ETHDEV_QUEUE_STAT_CNTRS=16
CONFIG_RTE_ETHDEV_RXTX_CALLBACKS=y
@@ -187,6 +187,7 @@ CONFIG_RTE_LIBRTE_I40E_DEBUG_TX_FREE=n
CONFIG_RTE_LIBRTE_I40E_DEBUG_DRIVER=n
CONFIG_RTE_LIBRTE_I40E_RX_ALLOW_BULK_ALLOC=y
CONFIG_RTE_LIBRTE_I40E_16BYTE_RX_DESC=n
+CONFIG_RTE_LIBRTE_I40E_QUEUE_NUM_PER_PF=64
CONFIG_RTE_LIBRTE_I40E_QUEUE_NUM_PER_VF=4
CONFIG_RTE_LIBRTE_I40E_QUEUE_NUM_PER_VM=4
# interval up to 8160 us, aligned to 2 (or default value)
diff --git a/config/common_linuxapp b/config/common_linuxapp
index 0de43d5..2ce8d66 100644
--- a/config/common_linuxapp
+++ b/config/common_linuxapp
@@ -139,7 +139,7 @@ CONFIG_RTE_LIBRTE_KVARGS=y
CONFIG_RTE_LIBRTE_ETHER=y
CONFIG_RTE_LIBRTE_ETHDEV_DEBUG=n
CONFIG_RTE_MAX_ETHPORTS=32
-CONFIG_RTE_MAX_QUEUES_PER_PORT=256
+CONFIG_RTE_MAX_QUEUES_PER_PORT=1024
CONFIG_RTE_LIBRTE_IEEE1588=n
CONFIG_RTE_ETHDEV_QUEUE_STAT_CNTRS=16
CONFIG_RTE_ETHDEV_RXTX_CALLBACKS=y
@@ -185,6 +185,7 @@ CONFIG_RTE_LIBRTE_I40E_DEBUG_TX_FREE=n
CONFIG_RTE_LIBRTE_I40E_DEBUG_DRIVER=n
CONFIG_RTE_LIBRTE_I40E_RX_ALLOW_BULK_ALLOC=y
CONFIG_RTE_LIBRTE_I40E_16BYTE_RX_DESC=n
+CONFIG_RTE_LIBRTE_I40E_QUEUE_NUM_PER_PF=64
CONFIG_RTE_LIBRTE_I40E_QUEUE_NUM_PER_VF=4
CONFIG_RTE_LIBRTE_I40E_QUEUE_NUM_PER_VM=4
# interval up to 8160 us, aligned to 2 (or default value)
diff --git a/drivers/net/i40e/i40e_ethdev.c b/drivers/net/i40e/i40e_ethdev.c
index 4b70588..3bdcaa4 100644
--- a/drivers/net/i40e/i40e_ethdev.c
+++ b/drivers/net/i40e/i40e_ethdev.c
@@ -2240,113 +2240,88 @@ i40e_pf_parameter_init(struct rte_eth_dev *dev)
{
struct i40e_pf *pf = I40E_DEV_PRIVATE_TO_PF(dev->data->dev_private);
struct i40e_hw *hw = I40E_PF_TO_HW(pf);
- uint16_t sum_queues = 0, sum_vsis, left_queues;
+ uint16_t qp_count = 0, vsi_count = 0;
- /* First check if FW support SRIOV */
if (dev->pci_dev->max_vfs && !hw->func_caps.sr_iov_1_1) {
PMD_INIT_LOG(ERR, "HW configuration doesn't support SRIOV");
return -EINVAL;
}
pf->flags = I40E_FLAG_HEADER_SPLIT_DISABLED;
- pf->max_num_vsi = RTE_MIN(hw->func_caps.num_vsis, I40E_MAX_NUM_VSIS);
- PMD_INIT_LOG(INFO, "Max supported VSIs:%u", pf->max_num_vsi);
- /* Allocate queues for pf */
- if (hw->func_caps.rss) {
- pf->flags |= I40E_FLAG_RSS;
- pf->lan_nb_qps = RTE_MIN(hw->func_caps.num_tx_qp,
- (uint32_t)(1 << hw->func_caps.rss_table_entry_width));
- pf->lan_nb_qps = i40e_align_floor(pf->lan_nb_qps);
- } else
+ pf->max_num_vsi = hw->func_caps.num_vsis;
+ pf->lan_nb_qp_max = RTE_LIBRTE_I40E_QUEUE_NUM_PER_PF;
+ pf->vmdq_nb_qp_max = RTE_LIBRTE_I40E_QUEUE_NUM_PER_VM;
+ pf->vf_nb_qp_max = RTE_LIBRTE_I40E_QUEUE_NUM_PER_PF;
+
+ /* FDir queue/VSI allocation */
+ pf->fdir_qp_offset = 0;
+ if (hw->func_caps.fd) {
+ pf->flags |= I40E_FLAG_FDIR;
+ pf->fdir_nb_qps = I40E_DEFAULT_QP_NUM_FDIR;
+ } else {
+ pf->fdir_nb_qps = 0;
+ }
+ qp_count += pf->fdir_nb_qps;
+ vsi_count += 1;
+
+ /* LAN queue/VSI allocation */
+ pf->lan_qp_offset = pf->fdir_qp_offset + pf->fdir_nb_qps;
+ if (!hw->func_caps.rss) {
pf->lan_nb_qps = 1;
- sum_queues = pf->lan_nb_qps;
- /* Default VSI is not counted in */
- sum_vsis = 0;
- PMD_INIT_LOG(INFO, "PF queue pairs:%u", pf->lan_nb_qps);
+ } else {
+ pf->flags |= I40E_FLAG_RSS;
+ pf->lan_nb_qps = pf->lan_nb_qp_max;
+ }
+ qp_count += pf->lan_nb_qps;
+ vsi_count += 1;
+ /* VF queue/VSI allocation */
+ pf->vf_qp_offset = pf->lan_qp_offset + pf->lan_nb_qps;
if (hw->func_caps.sr_iov_1_1 && dev->pci_dev->max_vfs) {
pf->flags |= I40E_FLAG_SRIOV;
pf->vf_nb_qps = RTE_LIBRTE_I40E_QUEUE_NUM_PER_VF;
- if (dev->pci_dev->max_vfs > hw->func_caps.num_vfs) {
- PMD_INIT_LOG(ERR, "Config VF number %u, "
- "max supported %u.",
- dev->pci_dev->max_vfs,
- hw->func_caps.num_vfs);
- return -EINVAL;
- }
- if (pf->vf_nb_qps > I40E_MAX_QP_NUM_PER_VF) {
- PMD_INIT_LOG(ERR, "FVL VF queue %u, "
- "max support %u queues.",
- pf->vf_nb_qps, I40E_MAX_QP_NUM_PER_VF);
- return -EINVAL;
- }
pf->vf_num = dev->pci_dev->max_vfs;
- sum_queues += pf->vf_nb_qps * pf->vf_num;
- sum_vsis += pf->vf_num;
- PMD_INIT_LOG(INFO, "Max VF num:%u each has queue pairs:%u",
- pf->vf_num, pf->vf_nb_qps);
- } else
+ PMD_DRV_LOG(DEBUG, "%u VF VSIs, %u queues per VF VSI, "
+ "in total %u queues", pf->vf_num, pf->vf_nb_qps,
+ pf->vf_nb_qps * pf->vf_num);
+ } else {
+ pf->vf_nb_qps = 0;
pf->vf_num = 0;
+ }
+ qp_count += pf->vf_nb_qps * pf->vf_num;
+ vsi_count += pf->vf_num;
+ /* VMDq queue/VSI allocation */
+ pf->vmdq_qp_offset = pf->vf_qp_offset + pf->vf_nb_qps * pf->vf_num;
if (hw->func_caps.vmdq) {
pf->flags |= I40E_FLAG_VMDQ;
- pf->vmdq_nb_qps = RTE_LIBRTE_I40E_QUEUE_NUM_PER_VM;
+ pf->vmdq_nb_qps = pf->vmdq_nb_qp_max;
pf->max_nb_vmdq_vsi = 1;
- /*
- * If VMDQ available, assume a single VSI can be created. Will adjust
- * later.
- */
- sum_queues += pf->vmdq_nb_qps * pf->max_nb_vmdq_vsi;
- sum_vsis += pf->max_nb_vmdq_vsi;
+ PMD_DRV_LOG(DEBUG, "%u VMDQ VSIs, %u queues per VMDQ VSI, "
+ "in total %u queues", pf->max_nb_vmdq_vsi,
+ pf->vmdq_nb_qps,
+ pf->vmdq_nb_qps * pf->max_nb_vmdq_vsi);
} else {
pf->vmdq_nb_qps = 0;
pf->max_nb_vmdq_vsi = 0;
}
- pf->nb_cfg_vmdq_vsi = 0;
-
- if (hw->func_caps.fd) {
- pf->flags |= I40E_FLAG_FDIR;
- pf->fdir_nb_qps = I40E_DEFAULT_QP_NUM_FDIR;
- /**
- * Each flow director consumes one VSI and one queue,
- * but can't calculate out predictably here.
- */
- }
+ qp_count += pf->vmdq_nb_qps * pf->max_nb_vmdq_vsi;
+ vsi_count += pf->max_nb_vmdq_vsi;
- if (sum_vsis > pf->max_num_vsi ||
- sum_queues > hw->func_caps.num_rx_qp) {
- PMD_INIT_LOG(ERR, "VSI/QUEUE setting can't be satisfied");
- PMD_INIT_LOG(ERR, "Max VSIs: %u, asked:%u",
- pf->max_num_vsi, sum_vsis);
- PMD_INIT_LOG(ERR, "Total queue pairs:%u, asked:%u",
- hw->func_caps.num_rx_qp, sum_queues);
+ if (qp_count > hw->func_caps.num_tx_qp) {
+ PMD_DRV_LOG(ERR, "Failed to allocate %u queues, which exceeds "
+ "the hardware maximum %u", qp_count,
+ hw->func_caps.num_tx_qp);
return -EINVAL;
}
-
- /* Adjust VMDQ setting to support as many VMs as possible */
- if (pf->flags & I40E_FLAG_VMDQ) {
- left_queues = hw->func_caps.num_rx_qp - sum_queues;
-
- pf->max_nb_vmdq_vsi += RTE_MIN(left_queues / pf->vmdq_nb_qps,
- pf->max_num_vsi - sum_vsis);
-
- /* Limit the max VMDQ number that rte_ether that can support */
- pf->max_nb_vmdq_vsi = RTE_MIN(pf->max_nb_vmdq_vsi,
- ETH_64_POOLS - 1);
-
- PMD_INIT_LOG(INFO, "Max VMDQ VSI num:%u",
- pf->max_nb_vmdq_vsi);
- PMD_INIT_LOG(INFO, "VMDQ queue pairs:%u", pf->vmdq_nb_qps);
- }
-
- /* Each VSI occupy 1 MSIX interrupt at least, plus IRQ0 for misc intr
- * cause */
- if (sum_vsis > hw->func_caps.num_msix_vectors - 1) {
- PMD_INIT_LOG(ERR, "Too many VSIs(%u), MSIX intr(%u) not enough",
- sum_vsis, hw->func_caps.num_msix_vectors);
+ if (vsi_count > hw->func_caps.num_vsis) {
+ PMD_DRV_LOG(ERR, "Failed to allocate %u VSIs, which exceeds "
+ "the hardware maximum %u", vsi_count,
+ hw->func_caps.num_vsis);
return -EINVAL;
}
- return I40E_SUCCESS;
+
+ return 0;
}
static int
@@ -2736,7 +2711,8 @@ i40e_vsi_config_tc_queue_mapping(struct i40e_vsi *vsi,
bsf = rte_bsf32(qpnum_per_tc);
/* Adjust the queue number to actual queues that can be applied */
- vsi->nb_qps = qpnum_per_tc * total_tc;
+ if (!(vsi->type == I40E_VSI_MAIN && total_tc == 1))
+ vsi->nb_qps = qpnum_per_tc * total_tc;
/**
* Configure TC and queue mapping parameters, for enabled TC,
diff --git a/drivers/net/i40e/i40e_ethdev.h b/drivers/net/i40e/i40e_ethdev.h
index 6185657..7656b20 100644
--- a/drivers/net/i40e/i40e_ethdev.h
+++ b/drivers/net/i40e/i40e_ethdev.h
@@ -370,10 +370,18 @@ struct i40e_pf {
uint16_t vf_num;
/* Each of below queue pairs should be power of 2 since it's the
precondition after TC configuration applied */
+ uint16_t lan_nb_qp_max;
uint16_t lan_nb_qps; /* The number of queue pairs of LAN */
+ uint16_t lan_qp_offset;
+ uint16_t vmdq_nb_qp_max;
uint16_t vmdq_nb_qps; /* The number of queue pairs of VMDq */
+ uint16_t vmdq_qp_offset;
+ uint16_t vf_nb_qp_max;
uint16_t vf_nb_qps; /* The number of queue pairs of VF */
+ uint16_t vf_qp_offset;
uint16_t fdir_nb_qps; /* The number of queue pairs of Flow Director */
+ uint16_t fdir_qp_offset;
+
uint16_t hash_lut_size; /* The size of hash lookup table */
/* store VXLAN UDP ports */
uint16_t vxlan_ports[I40E_MAX_PF_UDP_OFFLOAD_PORTS];
--
1.9.3
* Re: [dpdk-dev] [PATCH 2/2] i40e: Enlarge the number of supported queues
2015-09-20 14:51 ` [dpdk-dev] [PATCH 2/2] i40e: Enlarge the number of supported queues Helin Zhang
@ 2015-09-21 7:41 ` David Marchand
2015-09-21 8:15 ` Zhang, Helin
2015-09-22 6:36 ` Zhang, Helin
2015-10-19 8:29 ` Wu, Jingjing
1 sibling, 2 replies; 20+ messages in thread
From: David Marchand @ 2015-09-21 7:41 UTC (permalink / raw)
To: Helin Zhang, Richardson, Bruce; +Cc: dev, yulong.pei
Hello Helin, Bruce,
On Sun, Sep 20, 2015 at 4:51 PM, Helin Zhang <helin.zhang@intel.com> wrote:
> It enlarges the number of supported queues to hardware allowed
> maximum. There was a software limitation of 64 per physical port
> which is not reasonable.
>
I looked at the commit that introduced this limitation, can't we just get
rid of this ?
The primary process should know the current max queue number and
initialise the array properly before any secondary process tries to set
any callback, or tries to call rx/tx functions.
Did I miss something ?
--
David Marchand
* Re: [dpdk-dev] [PATCH 2/2] i40e: Enlarge the number of supported queues
2015-09-21 7:41 ` David Marchand
@ 2015-09-21 8:15 ` Zhang, Helin
2015-09-22 6:36 ` Zhang, Helin
1 sibling, 0 replies; 20+ messages in thread
From: Zhang, Helin @ 2015-09-21 8:15 UTC (permalink / raw)
To: David Marchand, Richardson, Bruce; +Cc: dev, Pei, Yulong
Hi David
PF, VFs, VMDq and FD on the same port share the queues. We can know the total number of queues, but the maximum number available for each may vary depending on how they are split among PF, VF, VMDq and FD.
So the users will define the number for each, and the code will just check that the total of them does not exceed the hardware maximum.
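In pseudo-C, that check amounts to roughly the sketch below (illustrative only; the names are simplified, and the hw_max_* parameters stand in for the hw->func_caps fields used by i40e_pf_parameter_init() in the patch):
#include <stdint.h>
/* Sum the queue pairs and VSIs requested by each function type and verify
 * that the totals fit within the hardware capabilities. */
static int
check_queue_budget(uint16_t fdir_qps, uint16_t lan_qps,
                   uint16_t vf_num, uint16_t vf_qps,
                   uint16_t vmdq_vsis, uint16_t vmdq_qps,
                   uint32_t hw_max_qps, uint32_t hw_max_vsis)
{
        uint32_t qp_count = fdir_qps + lan_qps +
                            (uint32_t)vf_num * vf_qps +
                            (uint32_t)vmdq_vsis * vmdq_qps;
        uint32_t vsi_count = 2 + vf_num + vmdq_vsis; /* FDir + LAN + VFs + VMDq */

        if (qp_count > hw_max_qps || vsi_count > hw_max_vsis)
                return -1; /* requested split does not fit the hardware */
        return 0;
}
For example (with made-up capability numbers), check_queue_budget(1, 64, 32, 4, 0, 0, 1536, 384) fits, while asking for 128 VFs with 16 queues each would not.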
Regards,
Helin
From: David Marchand [mailto:david.marchand@6wind.com]
Sent: Monday, September 21, 2015 3:42 PM
To: Zhang, Helin; Richardson, Bruce
Cc: dev@dpdk.org; Pei, Yulong
Subject: Re: [dpdk-dev] [PATCH 2/2] i40e: Enlarge the number of supported queues
Hello Helin, Bruce,
On Sun, Sep 20, 2015 at 4:51 PM, Helin Zhang <helin.zhang@intel.com<mailto:helin.zhang@intel.com>> wrote:
It enlarges the number of supported queues to hardware allowed
maximum. There was a software limitation of 64 per physical port
which is not reasonable.
I looked at the commit that introduced this limitation, can't we just get rid of this ?
The primary process should know the current max queue number and initialise the array properly before any secondary process tries to set any callback, or tries to call rx/tx functions.
Did I miss something ?
--
David Marchand
* Re: [dpdk-dev] [PATCH 2/2] i40e: Enlarge the number of supported queues
2015-09-21 7:41 ` David Marchand
2015-09-21 8:15 ` Zhang, Helin
@ 2015-09-22 6:36 ` Zhang, Helin
1 sibling, 0 replies; 20+ messages in thread
From: Zhang, Helin @ 2015-09-22 6:36 UTC (permalink / raw)
To: David Marchand, Richardson, Bruce; +Cc: dev, Pei, Yulong
Hi David
PF, VFs, VMDq and FD on the same port share the queues. We can know the total number of queues, but the maximum number available for each may vary depending on how they are split among PF, VF, VMDq and FD.
So the users will define the number for each, and the code will just check that the total of them does not exceed the hardware maximum.
Regards,
Helin
Note: just resend it with plain text format.
From: David Marchand [mailto:david.marchand@6wind.com]
Sent: Monday, September 21, 2015 3:42 PM
To: Zhang, Helin; Richardson, Bruce
Cc: dev@dpdk.org; Pei, Yulong
Subject: Re: [dpdk-dev] [PATCH 2/2] i40e: Enlarge the number of supported queues
Hello Helin, Bruce,
On Sun, Sep 20, 2015 at 4:51 PM, Helin Zhang <helin.zhang@intel.com> wrote:
It enlarges the number of supported queues to hardware allowed
maximum. There was a software limitation of 64 per physical port
which is not reasonable.
I looked at the commit that introduced this limitation, can't we just get rid of this ?
The primary process should know the current max queue number and initialise the array properly before any secondary process tries to set any callback, or tries to call rx/tx functions.
Did I miss something ?
--
David Marchand
* Re: [dpdk-dev] [PATCH 2/2] i40e: Enlarge the number of supported queues
2015-09-20 14:51 ` [dpdk-dev] [PATCH 2/2] i40e: Enlarge the number of supported queues Helin Zhang
2015-09-21 7:41 ` David Marchand
@ 2015-10-19 8:29 ` Wu, Jingjing
2015-10-19 8:37 ` Zhang, Helin
1 sibling, 1 reply; 20+ messages in thread
From: Wu, Jingjing @ 2015-10-19 8:29 UTC (permalink / raw)
To: Zhang, Helin, dev; +Cc: Pei, Yulong
Hi, helin
Few comments
> a/drivers/net/i40e/i40e_ethdev.c b/drivers/net/i40e/i40e_ethdev.c index
> 4b70588..3bdcaa4 100644
> --- a/drivers/net/i40e/i40e_ethdev.c
> +++ b/drivers/net/i40e/i40e_ethdev.c
> @@ -2240,113 +2240,88 @@ i40e_pf_parameter_init(struct rte_eth_dev
> *dev) {
> struct i40e_pf *pf = I40E_DEV_PRIVATE_TO_PF(dev->data-
> >dev_private);
> struct i40e_hw *hw = I40E_PF_TO_HW(pf);
> - uint16_t sum_queues = 0, sum_vsis, left_queues;
> + uint16_t qp_count = 0, vsi_count = 0;
>
> - /* First check if FW support SRIOV */
> if (dev->pci_dev->max_vfs && !hw->func_caps.sr_iov_1_1) {
> PMD_INIT_LOG(ERR, "HW configuration doesn't support
> SRIOV");
> return -EINVAL;
> }
>
> pf->flags = I40E_FLAG_HEADER_SPLIT_DISABLED;
> - pf->max_num_vsi = RTE_MIN(hw->func_caps.num_vsis,
> I40E_MAX_NUM_VSIS);
> - PMD_INIT_LOG(INFO, "Max supported VSIs:%u", pf->max_num_vsi);
> - /* Allocate queues for pf */
> - if (hw->func_caps.rss) {
> - pf->flags |= I40E_FLAG_RSS;
> - pf->lan_nb_qps = RTE_MIN(hw->func_caps.num_tx_qp,
> - (uint32_t)(1 << hw-
> >func_caps.rss_table_entry_width));
> - pf->lan_nb_qps = i40e_align_floor(pf->lan_nb_qps);
> - } else
> + pf->max_num_vsi = hw->func_caps.num_vsis;
> + pf->lan_nb_qp_max = RTE_LIBRTE_I40E_QUEUE_NUM_PER_PF;
> + pf->vmdq_nb_qp_max = RTE_LIBRTE_I40E_QUEUE_NUM_PER_VM;
> + pf->vf_nb_qp_max = RTE_LIBRTE_I40E_QUEUE_NUM_PER_PF;
> +
Need to use NUM_PER_VF, not NUM_PER_PF:
pf->vf_nb_qp_max = RTE_LIBRTE_I40E_QUEUE_NUM_PER_PF; ==> pf->vf_nb_qp_max = RTE_LIBRTE_I40E_QUEUE_NUM_PER_VF;
* Re: [dpdk-dev] [PATCH 2/2] i40e: Enlarge the number of supported queues
2015-10-19 8:29 ` Wu, Jingjing
@ 2015-10-19 8:37 ` Zhang, Helin
0 siblings, 0 replies; 20+ messages in thread
From: Zhang, Helin @ 2015-10-19 8:37 UTC (permalink / raw)
To: Wu, Jingjing, dev; +Cc: Pei, Yulong
> -----Original Message-----
> From: Wu, Jingjing
> Sent: Monday, October 19, 2015 4:30 PM
> To: Zhang, Helin; dev@dpdk.org
> Cc: Pei, Yulong; Liu, Yong
> Subject: RE: [PATCH 2/2] i40e: Enlarge the number of supported queues
>
> Hi, helin
>
> Few comments
>
> > a/drivers/net/i40e/i40e_ethdev.c b/drivers/net/i40e/i40e_ethdev.c
> > index
> > 4b70588..3bdcaa4 100644
> > --- a/drivers/net/i40e/i40e_ethdev.c
> > +++ b/drivers/net/i40e/i40e_ethdev.c
> > @@ -2240,113 +2240,88 @@ i40e_pf_parameter_init(struct rte_eth_dev
> > *dev) {
> > struct i40e_pf *pf = I40E_DEV_PRIVATE_TO_PF(dev->data-
> > >dev_private);
> > struct i40e_hw *hw = I40E_PF_TO_HW(pf);
> > - uint16_t sum_queues = 0, sum_vsis, left_queues;
> > + uint16_t qp_count = 0, vsi_count = 0;
> >
> > - /* First check if FW support SRIOV */
> > if (dev->pci_dev->max_vfs && !hw->func_caps.sr_iov_1_1) {
> > PMD_INIT_LOG(ERR, "HW configuration doesn't support SRIOV");
> > return -EINVAL;
> > }
> >
> > pf->flags = I40E_FLAG_HEADER_SPLIT_DISABLED;
> > - pf->max_num_vsi = RTE_MIN(hw->func_caps.num_vsis,
> > I40E_MAX_NUM_VSIS);
> > - PMD_INIT_LOG(INFO, "Max supported VSIs:%u", pf->max_num_vsi);
> > - /* Allocate queues for pf */
> > - if (hw->func_caps.rss) {
> > - pf->flags |= I40E_FLAG_RSS;
> > - pf->lan_nb_qps = RTE_MIN(hw->func_caps.num_tx_qp,
> > - (uint32_t)(1 << hw-
> > >func_caps.rss_table_entry_width));
> > - pf->lan_nb_qps = i40e_align_floor(pf->lan_nb_qps);
> > - } else
> > + pf->max_num_vsi = hw->func_caps.num_vsis;
> > + pf->lan_nb_qp_max = RTE_LIBRTE_I40E_QUEUE_NUM_PER_PF;
> > + pf->vmdq_nb_qp_max = RTE_LIBRTE_I40E_QUEUE_NUM_PER_VM;
> > + pf->vf_nb_qp_max = RTE_LIBRTE_I40E_QUEUE_NUM_PER_PF;
> > +
> Need to use NUM_PER_VF, not NUM_PER_PF:
> pf->vf_nb_qp_max = RTE_LIBRTE_I40E_QUEUE_NUM_PER_PF; ==>
> pf->vf_nb_qp_max = RTE_LIBRTE_I40E_QUEUE_NUM_PER_VF;
Yes, you are right. Thank you very much!
I will correct it in the next version.
Helin
>
* [dpdk-dev] [PATCH v2 0/2] i40e: Enlarge the number of supported queues
2015-09-20 14:51 [dpdk-dev] [PATCH 0/2] i40e: enlarge the number of supported queues Helin Zhang
2015-09-20 14:51 ` [dpdk-dev] [PATCH 1/2] i40e: adjust the number of queues for RSS Helin Zhang
2015-09-20 14:51 ` [dpdk-dev] [PATCH 2/2] i40e: Enlarge the number of supported queues Helin Zhang
@ 2015-10-22 7:28 ` Helin Zhang
2015-10-22 7:28 ` [dpdk-dev] [PATCH v2 1/2] i40e: adjust the number of queues for RSS Helin Zhang
` (3 more replies)
2 siblings, 4 replies; 20+ messages in thread
From: Helin Zhang @ 2015-10-22 7:28 UTC (permalink / raw)
To: dev; +Cc: yulong.pei
It enlarges the number of supported queues to hardware allowed
maximum. There was a software limitation of 64 per physical port
which is not reasonable.
v2 changes:
Fixed issues of using wrong configured number of VF queues.
Helin Zhang (2):
i40e: adjust the number of queues for RSS
i40e: Enlarge the number of supported queues
config/common_bsdapp | 3 +-
config/common_linuxapp | 3 +-
drivers/net/i40e/i40e_ethdev.c | 146 ++++++++++++++++----------------------
drivers/net/i40e/i40e_ethdev.h | 8 +++
drivers/net/i40e/i40e_ethdev_vf.c | 2 +-
5 files changed, 74 insertions(+), 88 deletions(-)
--
1.9.3
* [dpdk-dev] [PATCH v2 1/2] i40e: adjust the number of queues for RSS
2015-10-22 7:28 ` [dpdk-dev] [PATCH v2 0/2] " Helin Zhang
@ 2015-10-22 7:28 ` Helin Zhang
2015-10-22 7:28 ` [dpdk-dev] [PATCH v2 2/2] i40e: Enlarge the number of supported queues Helin Zhang
` (2 subsequent siblings)
3 siblings, 0 replies; 20+ messages in thread
From: Helin Zhang @ 2015-10-22 7:28 UTC (permalink / raw)
To: dev; +Cc: yulong.pei
It adjusts the number of queues for RSS from a power of 2 to any number, as
long as it does not exceed the hardware allowed maximum.
Signed-off-by: Helin Zhang <helin.zhang@intel.com>
---
drivers/net/i40e/i40e_ethdev.c | 8 ++++----
drivers/net/i40e/i40e_ethdev_vf.c | 2 +-
2 files changed, 5 insertions(+), 5 deletions(-)
diff --git a/drivers/net/i40e/i40e_ethdev.c b/drivers/net/i40e/i40e_ethdev.c
index 2dd9fdc..4b70588 100644
--- a/drivers/net/i40e/i40e_ethdev.c
+++ b/drivers/net/i40e/i40e_ethdev.c
@@ -5153,12 +5153,12 @@ i40e_pf_config_rss(struct i40e_pf *pf)
* If both VMDQ and RSS enabled, not all of PF queues are configured.
* It's necessary to calulate the actual PF queues that are configured.
*/
- if (pf->dev_data->dev_conf.rxmode.mq_mode & ETH_MQ_RX_VMDQ_FLAG) {
+ if (pf->dev_data->dev_conf.rxmode.mq_mode & ETH_MQ_RX_VMDQ_FLAG)
num = i40e_pf_calc_configured_queues_num(pf);
- num = i40e_align_floor(num);
- } else
- num = i40e_align_floor(pf->dev_data->nb_rx_queues);
+ else
+ num = pf->dev_data->nb_rx_queues;
+ num = RTE_MIN(num, I40E_MAX_Q_PER_TC);
PMD_INIT_LOG(INFO, "Max of contiguous %u PF queues are configured",
num);
diff --git a/drivers/net/i40e/i40e_ethdev_vf.c b/drivers/net/i40e/i40e_ethdev_vf.c
index b694400..b15ff7b 100644
--- a/drivers/net/i40e/i40e_ethdev_vf.c
+++ b/drivers/net/i40e/i40e_ethdev_vf.c
@@ -1915,7 +1915,7 @@ i40evf_config_rss(struct i40e_vf *vf)
return 0;
}
- num = i40e_align_floor(vf->dev_data->nb_rx_queues);
+ num = RTE_MIN(vf->dev_data->nb_rx_queues, I40E_MAX_QP_NUM_PER_VF);
/* Fill out the look up table */
for (i = 0, j = 0; i < nb_q; i++, j++) {
if (j >= num)
--
1.9.3
* [dpdk-dev] [PATCH v2 2/2] i40e: Enlarge the number of supported queues
2015-10-22 7:28 ` [dpdk-dev] [PATCH v2 0/2] " Helin Zhang
2015-10-22 7:28 ` [dpdk-dev] [PATCH v2 1/2] i40e: adjust the number of queues for RSS Helin Zhang
@ 2015-10-22 7:28 ` Helin Zhang
2015-11-03 1:16 ` Thomas Monjalon
2015-10-22 15:36 ` [dpdk-dev] [PATCH v2 0/2] " Wu, Jingjing
2015-11-03 15:40 ` [dpdk-dev] [PATCH v3 " Helin Zhang
3 siblings, 1 reply; 20+ messages in thread
From: Helin Zhang @ 2015-10-22 7:28 UTC (permalink / raw)
To: dev; +Cc: yulong.pei
It enlarges the number of supported queues to hardware allowed
maximum. There was a software limitation of 64 per physical port
which is not reasonable.
Signed-off-by: Helin Zhang <helin.zhang@intel.com>
---
config/common_bsdapp | 3 +-
config/common_linuxapp | 3 +-
drivers/net/i40e/i40e_ethdev.c | 138 +++++++++++++++++------------------------
drivers/net/i40e/i40e_ethdev.h | 8 +++
4 files changed, 69 insertions(+), 83 deletions(-)
v2 changes:
Fixed issues of using wrong configured number of VF queues
diff --git a/config/common_bsdapp b/config/common_bsdapp
index b37dcf4..dac6dad 100644
--- a/config/common_bsdapp
+++ b/config/common_bsdapp
@@ -141,7 +141,7 @@ CONFIG_RTE_LIBRTE_KVARGS=y
CONFIG_RTE_LIBRTE_ETHER=y
CONFIG_RTE_LIBRTE_ETHDEV_DEBUG=n
CONFIG_RTE_MAX_ETHPORTS=32
-CONFIG_RTE_MAX_QUEUES_PER_PORT=256
+CONFIG_RTE_MAX_QUEUES_PER_PORT=1024
CONFIG_RTE_LIBRTE_IEEE1588=n
CONFIG_RTE_ETHDEV_QUEUE_STAT_CNTRS=16
CONFIG_RTE_ETHDEV_RXTX_CALLBACKS=y
@@ -187,6 +187,7 @@ CONFIG_RTE_LIBRTE_I40E_DEBUG_TX_FREE=n
CONFIG_RTE_LIBRTE_I40E_DEBUG_DRIVER=n
CONFIG_RTE_LIBRTE_I40E_RX_ALLOW_BULK_ALLOC=y
CONFIG_RTE_LIBRTE_I40E_16BYTE_RX_DESC=n
+CONFIG_RTE_LIBRTE_I40E_QUEUE_NUM_PER_PF=64
CONFIG_RTE_LIBRTE_I40E_QUEUE_NUM_PER_VF=4
CONFIG_RTE_LIBRTE_I40E_QUEUE_NUM_PER_VM=4
# interval up to 8160 us, aligned to 2 (or default value)
diff --git a/config/common_linuxapp b/config/common_linuxapp
index 0de43d5..2ce8d66 100644
--- a/config/common_linuxapp
+++ b/config/common_linuxapp
@@ -139,7 +139,7 @@ CONFIG_RTE_LIBRTE_KVARGS=y
CONFIG_RTE_LIBRTE_ETHER=y
CONFIG_RTE_LIBRTE_ETHDEV_DEBUG=n
CONFIG_RTE_MAX_ETHPORTS=32
-CONFIG_RTE_MAX_QUEUES_PER_PORT=256
+CONFIG_RTE_MAX_QUEUES_PER_PORT=1024
CONFIG_RTE_LIBRTE_IEEE1588=n
CONFIG_RTE_ETHDEV_QUEUE_STAT_CNTRS=16
CONFIG_RTE_ETHDEV_RXTX_CALLBACKS=y
@@ -185,6 +185,7 @@ CONFIG_RTE_LIBRTE_I40E_DEBUG_TX_FREE=n
CONFIG_RTE_LIBRTE_I40E_DEBUG_DRIVER=n
CONFIG_RTE_LIBRTE_I40E_RX_ALLOW_BULK_ALLOC=y
CONFIG_RTE_LIBRTE_I40E_16BYTE_RX_DESC=n
+CONFIG_RTE_LIBRTE_I40E_QUEUE_NUM_PER_PF=64
CONFIG_RTE_LIBRTE_I40E_QUEUE_NUM_PER_VF=4
CONFIG_RTE_LIBRTE_I40E_QUEUE_NUM_PER_VM=4
# interval up to 8160 us, aligned to 2 (or default value)
diff --git a/drivers/net/i40e/i40e_ethdev.c b/drivers/net/i40e/i40e_ethdev.c
index 4b70588..8928b0a 100644
--- a/drivers/net/i40e/i40e_ethdev.c
+++ b/drivers/net/i40e/i40e_ethdev.c
@@ -2240,113 +2240,88 @@ i40e_pf_parameter_init(struct rte_eth_dev *dev)
{
struct i40e_pf *pf = I40E_DEV_PRIVATE_TO_PF(dev->data->dev_private);
struct i40e_hw *hw = I40E_PF_TO_HW(pf);
- uint16_t sum_queues = 0, sum_vsis, left_queues;
+ uint16_t qp_count = 0, vsi_count = 0;
- /* First check if FW support SRIOV */
if (dev->pci_dev->max_vfs && !hw->func_caps.sr_iov_1_1) {
PMD_INIT_LOG(ERR, "HW configuration doesn't support SRIOV");
return -EINVAL;
}
pf->flags = I40E_FLAG_HEADER_SPLIT_DISABLED;
- pf->max_num_vsi = RTE_MIN(hw->func_caps.num_vsis, I40E_MAX_NUM_VSIS);
- PMD_INIT_LOG(INFO, "Max supported VSIs:%u", pf->max_num_vsi);
- /* Allocate queues for pf */
- if (hw->func_caps.rss) {
- pf->flags |= I40E_FLAG_RSS;
- pf->lan_nb_qps = RTE_MIN(hw->func_caps.num_tx_qp,
- (uint32_t)(1 << hw->func_caps.rss_table_entry_width));
- pf->lan_nb_qps = i40e_align_floor(pf->lan_nb_qps);
- } else
+ pf->max_num_vsi = hw->func_caps.num_vsis;
+ pf->lan_nb_qp_max = RTE_LIBRTE_I40E_QUEUE_NUM_PER_PF;
+ pf->vmdq_nb_qp_max = RTE_LIBRTE_I40E_QUEUE_NUM_PER_VM;
+ pf->vf_nb_qp_max = RTE_LIBRTE_I40E_QUEUE_NUM_PER_VF;
+
+ /* FDir queue/VSI allocation */
+ pf->fdir_qp_offset = 0;
+ if (hw->func_caps.fd) {
+ pf->flags |= I40E_FLAG_FDIR;
+ pf->fdir_nb_qps = I40E_DEFAULT_QP_NUM_FDIR;
+ } else {
+ pf->fdir_nb_qps = 0;
+ }
+ qp_count += pf->fdir_nb_qps;
+ vsi_count += 1;
+
+ /* LAN queue/VSI allocation */
+ pf->lan_qp_offset = pf->fdir_qp_offset + pf->fdir_nb_qps;
+ if (!hw->func_caps.rss) {
pf->lan_nb_qps = 1;
- sum_queues = pf->lan_nb_qps;
- /* Default VSI is not counted in */
- sum_vsis = 0;
- PMD_INIT_LOG(INFO, "PF queue pairs:%u", pf->lan_nb_qps);
+ } else {
+ pf->flags |= I40E_FLAG_RSS;
+ pf->lan_nb_qps = pf->lan_nb_qp_max;
+ }
+ qp_count += pf->lan_nb_qps;
+ vsi_count += 1;
+ /* VF queue/VSI allocation */
+ pf->vf_qp_offset = pf->lan_qp_offset + pf->lan_nb_qps;
if (hw->func_caps.sr_iov_1_1 && dev->pci_dev->max_vfs) {
pf->flags |= I40E_FLAG_SRIOV;
pf->vf_nb_qps = RTE_LIBRTE_I40E_QUEUE_NUM_PER_VF;
- if (dev->pci_dev->max_vfs > hw->func_caps.num_vfs) {
- PMD_INIT_LOG(ERR, "Config VF number %u, "
- "max supported %u.",
- dev->pci_dev->max_vfs,
- hw->func_caps.num_vfs);
- return -EINVAL;
- }
- if (pf->vf_nb_qps > I40E_MAX_QP_NUM_PER_VF) {
- PMD_INIT_LOG(ERR, "FVL VF queue %u, "
- "max support %u queues.",
- pf->vf_nb_qps, I40E_MAX_QP_NUM_PER_VF);
- return -EINVAL;
- }
pf->vf_num = dev->pci_dev->max_vfs;
- sum_queues += pf->vf_nb_qps * pf->vf_num;
- sum_vsis += pf->vf_num;
- PMD_INIT_LOG(INFO, "Max VF num:%u each has queue pairs:%u",
- pf->vf_num, pf->vf_nb_qps);
- } else
+ PMD_DRV_LOG(DEBUG, "%u VF VSIs, %u queues per VF VSI, "
+ "in total %u queues", pf->vf_num, pf->vf_nb_qps,
+ pf->vf_nb_qps * pf->vf_num);
+ } else {
+ pf->vf_nb_qps = 0;
pf->vf_num = 0;
+ }
+ qp_count += pf->vf_nb_qps * pf->vf_num;
+ vsi_count += pf->vf_num;
+ /* VMDq queue/VSI allocation */
+ pf->vmdq_qp_offset = pf->vf_qp_offset + pf->vf_nb_qps * pf->vf_num;
if (hw->func_caps.vmdq) {
pf->flags |= I40E_FLAG_VMDQ;
- pf->vmdq_nb_qps = RTE_LIBRTE_I40E_QUEUE_NUM_PER_VM;
+ pf->vmdq_nb_qps = pf->vmdq_nb_qp_max;
pf->max_nb_vmdq_vsi = 1;
- /*
- * If VMDQ available, assume a single VSI can be created. Will adjust
- * later.
- */
- sum_queues += pf->vmdq_nb_qps * pf->max_nb_vmdq_vsi;
- sum_vsis += pf->max_nb_vmdq_vsi;
+ PMD_DRV_LOG(DEBUG, "%u VMDQ VSIs, %u queues per VMDQ VSI, "
+ "in total %u queues", pf->max_nb_vmdq_vsi,
+ pf->vmdq_nb_qps,
+ pf->vmdq_nb_qps * pf->max_nb_vmdq_vsi);
} else {
pf->vmdq_nb_qps = 0;
pf->max_nb_vmdq_vsi = 0;
}
- pf->nb_cfg_vmdq_vsi = 0;
-
- if (hw->func_caps.fd) {
- pf->flags |= I40E_FLAG_FDIR;
- pf->fdir_nb_qps = I40E_DEFAULT_QP_NUM_FDIR;
- /**
- * Each flow director consumes one VSI and one queue,
- * but can't calculate out predictably here.
- */
- }
+ qp_count += pf->vmdq_nb_qps * pf->max_nb_vmdq_vsi;
+ vsi_count += pf->max_nb_vmdq_vsi;
- if (sum_vsis > pf->max_num_vsi ||
- sum_queues > hw->func_caps.num_rx_qp) {
- PMD_INIT_LOG(ERR, "VSI/QUEUE setting can't be satisfied");
- PMD_INIT_LOG(ERR, "Max VSIs: %u, asked:%u",
- pf->max_num_vsi, sum_vsis);
- PMD_INIT_LOG(ERR, "Total queue pairs:%u, asked:%u",
- hw->func_caps.num_rx_qp, sum_queues);
+ if (qp_count > hw->func_caps.num_tx_qp) {
+ PMD_DRV_LOG(ERR, "Failed to allocate %u queues, which exceeds "
+ "the hardware maximum %u", qp_count,
+ hw->func_caps.num_tx_qp);
return -EINVAL;
}
-
- /* Adjust VMDQ setting to support as many VMs as possible */
- if (pf->flags & I40E_FLAG_VMDQ) {
- left_queues = hw->func_caps.num_rx_qp - sum_queues;
-
- pf->max_nb_vmdq_vsi += RTE_MIN(left_queues / pf->vmdq_nb_qps,
- pf->max_num_vsi - sum_vsis);
-
- /* Limit the max VMDQ number that rte_ether that can support */
- pf->max_nb_vmdq_vsi = RTE_MIN(pf->max_nb_vmdq_vsi,
- ETH_64_POOLS - 1);
-
- PMD_INIT_LOG(INFO, "Max VMDQ VSI num:%u",
- pf->max_nb_vmdq_vsi);
- PMD_INIT_LOG(INFO, "VMDQ queue pairs:%u", pf->vmdq_nb_qps);
- }
-
- /* Each VSI occupy 1 MSIX interrupt at least, plus IRQ0 for misc intr
- * cause */
- if (sum_vsis > hw->func_caps.num_msix_vectors - 1) {
- PMD_INIT_LOG(ERR, "Too many VSIs(%u), MSIX intr(%u) not enough",
- sum_vsis, hw->func_caps.num_msix_vectors);
+ if (vsi_count > hw->func_caps.num_vsis) {
+ PMD_DRV_LOG(ERR, "Failed to allocate %u VSIs, which exceeds "
+ "the hardware maximum %u", vsi_count,
+ hw->func_caps.num_vsis);
return -EINVAL;
}
- return I40E_SUCCESS;
+
+ return 0;
}
static int
@@ -2736,7 +2711,8 @@ i40e_vsi_config_tc_queue_mapping(struct i40e_vsi *vsi,
bsf = rte_bsf32(qpnum_per_tc);
/* Adjust the queue number to actual queues that can be applied */
- vsi->nb_qps = qpnum_per_tc * total_tc;
+ if (!(vsi->type == I40E_VSI_MAIN && total_tc == 1))
+ vsi->nb_qps = qpnum_per_tc * total_tc;
/**
* Configure TC and queue mapping parameters, for enabled TC,
diff --git a/drivers/net/i40e/i40e_ethdev.h b/drivers/net/i40e/i40e_ethdev.h
index 6185657..7656b20 100644
--- a/drivers/net/i40e/i40e_ethdev.h
+++ b/drivers/net/i40e/i40e_ethdev.h
@@ -370,10 +370,18 @@ struct i40e_pf {
uint16_t vf_num;
/* Each of below queue pairs should be power of 2 since it's the
precondition after TC configuration applied */
+ uint16_t lan_nb_qp_max;
uint16_t lan_nb_qps; /* The number of queue pairs of LAN */
+ uint16_t lan_qp_offset;
+ uint16_t vmdq_nb_qp_max;
uint16_t vmdq_nb_qps; /* The number of queue pairs of VMDq */
+ uint16_t vmdq_qp_offset;
+ uint16_t vf_nb_qp_max;
uint16_t vf_nb_qps; /* The number of queue pairs of VF */
+ uint16_t vf_qp_offset;
uint16_t fdir_nb_qps; /* The number of queue pairs of Flow Director */
+ uint16_t fdir_qp_offset;
+
uint16_t hash_lut_size; /* The size of hash lookup table */
/* store VXLAN UDP ports */
uint16_t vxlan_ports[I40E_MAX_PF_UDP_OFFLOAD_PORTS];
--
1.9.3
* Re: [dpdk-dev] [PATCH v2 0/2] i40e: Enlarge the number of supported queues
2015-10-22 7:28 ` [dpdk-dev] [PATCH v2 0/2] " Helin Zhang
2015-10-22 7:28 ` [dpdk-dev] [PATCH v2 1/2] i40e: adjust the number of queues for RSS Helin Zhang
2015-10-22 7:28 ` [dpdk-dev] [PATCH v2 2/2] i40e: Enlarge the number of supported queues Helin Zhang
@ 2015-10-22 15:36 ` Wu, Jingjing
2015-11-03 15:40 ` [dpdk-dev] [PATCH v3 " Helin Zhang
3 siblings, 0 replies; 20+ messages in thread
From: Wu, Jingjing @ 2015-10-22 15:36 UTC (permalink / raw)
To: Zhang, Helin, dev; +Cc: Pei, Yulong
> -----Original Message-----
> From: Zhang, Helin
> Sent: Thursday, October 22, 2015 3:28 PM
> To: dev@dpdk.org
> Cc: Pei, Yulong; Liu, Yong; Wu, Jingjing; Zhang, Helin
> Subject: [PATCH v2 0/2] i40e: Enlarge the number of supported queues
>
> It enlarges the number of supported queues to hardware allowed
> maximum. There was a software limitation of 64 per physical port
> which is not reasonable.
>
> v2 changes:
> Fixed issues of using wrong configured number of VF queues.
>
> Helin Zhang (2):
> i40e: adjust the number of queues for RSS
> i40e: Enlarge the number of supported queues
>
> config/common_bsdapp | 3 +-
> config/common_linuxapp | 3 +-
> drivers/net/i40e/i40e_ethdev.c | 146 ++++++++++++++++----------------------
> drivers/net/i40e/i40e_ethdev.h | 8 +++
> drivers/net/i40e/i40e_ethdev_vf.c | 2 +-
> 5 files changed, 74 insertions(+), 88 deletions(-)
>
> --
> 1.9.3
Acked-by: Jingjing Wu <jingjing.wu@intel.com>
* Re: [dpdk-dev] [PATCH v2 2/2] i40e: Enlarge the number of supported queues
2015-10-22 7:28 ` [dpdk-dev] [PATCH v2 2/2] i40e: Enlarge the number of supported queues Helin Zhang
@ 2015-11-03 1:16 ` Thomas Monjalon
2015-11-03 2:49 ` Zhang, Helin
0 siblings, 1 reply; 20+ messages in thread
From: Thomas Monjalon @ 2015-11-03 1:16 UTC (permalink / raw)
To: Helin Zhang; +Cc: dev
2015-10-22 15:28, Helin Zhang:
> It enlarges the number of supported queues to hardware allowed
> maximum. There was a software limitation of 64 per physical port
> which is not reasonable.
>
> Signed-off-by: Helin Zhang <helin.zhang@intel.com>
> ---
> config/common_bsdapp | 3 +-
> config/common_linuxapp | 3 +-
> drivers/net/i40e/i40e_ethdev.c | 138 +++++++++++++++++------------------------
> drivers/net/i40e/i40e_ethdev.h | 8 +++
Please update the release notes (remove deprecation notice and add ABI change).
* Re: [dpdk-dev] [PATCH v2 2/2] i40e: Enlarge the number of supported queues
2015-11-03 1:16 ` Thomas Monjalon
@ 2015-11-03 2:49 ` Zhang, Helin
0 siblings, 0 replies; 20+ messages in thread
From: Zhang, Helin @ 2015-11-03 2:49 UTC (permalink / raw)
To: Thomas Monjalon; +Cc: dev
> -----Original Message-----
> From: Thomas Monjalon [mailto:thomas.monjalon@6wind.com]
> Sent: Tuesday, November 3, 2015 9:17 AM
> To: Zhang, Helin
> Cc: dev@dpdk.org; Pei, Yulong
> Subject: Re: [dpdk-dev] [PATCH v2 2/2] i40e: Enlarge the number of supported
> queues
>
> 2015-10-22 15:28, Helin Zhang:
> > It enlarges the number of supported queues to hardware allowed
> > maximum. There was a software limitation of 64 per physical port which
> > is not reasonable.
> >
> > Signed-off-by: Helin Zhang <helin.zhang@intel.com>
> > ---
> > config/common_bsdapp | 3 +-
> > config/common_linuxapp | 3 +-
> > drivers/net/i40e/i40e_ethdev.c | 138
> +++++++++++++++++------------------------
> > drivers/net/i40e/i40e_ethdev.h | 8 +++
>
> Please update the release notes (remove deprecation notice and add ABI
> change).
OK. Sorry for missing that! Thank you very much for the reminder!
Regards,
Helin
* [dpdk-dev] [PATCH v3 0/2] i40e: Enlarge the number of supported queues
2015-10-22 7:28 ` [dpdk-dev] [PATCH v2 0/2] " Helin Zhang
` (2 preceding siblings ...)
2015-10-22 15:36 ` [dpdk-dev] [PATCH v2 0/2] " Wu, Jingjing
@ 2015-11-03 15:40 ` Helin Zhang
2015-11-03 15:40 ` [dpdk-dev] [PATCH v3 1/2] i40e: adjust the number of queues for RSS Helin Zhang
` (3 more replies)
3 siblings, 4 replies; 20+ messages in thread
From: Helin Zhang @ 2015-11-03 15:40 UTC (permalink / raw)
To: dev
It enlarges the number of supported queues to hardware allowed maximum. There
was a software limitation of 64 per physical port which is not reasonable.
v2 changes:
Fixed issues of using wrong configured number of VF queues.
v3 changes:
Updated release notes.
Helin Zhang (2):
i40e: adjust the number of queues for RSS
i40e: Enlarge the number of supported queues
config/common_bsdapp | 3 +-
config/common_linuxapp | 3 +-
doc/guides/rel_notes/deprecation.rst | 5 --
doc/guides/rel_notes/release_2_2.rst | 12 +++
drivers/net/i40e/i40e_ethdev.c | 146 +++++++++++++++--------------------
drivers/net/i40e/i40e_ethdev.h | 8 ++
drivers/net/i40e/i40e_ethdev_vf.c | 2 +-
7 files changed, 86 insertions(+), 93 deletions(-)
--
1.9.3
* [dpdk-dev] [PATCH v3 1/2] i40e: adjust the number of queues for RSS
2015-11-03 15:40 ` [dpdk-dev] [PATCH v3 " Helin Zhang
@ 2015-11-03 15:40 ` Helin Zhang
2015-11-03 15:40 ` [dpdk-dev] [PATCH v3 2/2] i40e: Enlarge the number of supported queues Helin Zhang
` (2 subsequent siblings)
3 siblings, 0 replies; 20+ messages in thread
From: Helin Zhang @ 2015-11-03 15:40 UTC (permalink / raw)
To: dev
It adjusts the number of queues for RSS from a power of 2 to any number, as
long as it does not exceed the hardware allowed maximum.
Signed-off-by: Helin Zhang <helin.zhang@intel.com>
---
drivers/net/i40e/i40e_ethdev.c | 8 ++++----
drivers/net/i40e/i40e_ethdev_vf.c | 2 +-
2 files changed, 5 insertions(+), 5 deletions(-)
diff --git a/drivers/net/i40e/i40e_ethdev.c b/drivers/net/i40e/i40e_ethdev.c
index d852bf1..66dfdba 100644
--- a/drivers/net/i40e/i40e_ethdev.c
+++ b/drivers/net/i40e/i40e_ethdev.c
@@ -5729,12 +5729,12 @@ i40e_pf_config_rss(struct i40e_pf *pf)
* If both VMDQ and RSS enabled, not all of PF queues are configured.
* It's necessary to calulate the actual PF queues that are configured.
*/
- if (pf->dev_data->dev_conf.rxmode.mq_mode & ETH_MQ_RX_VMDQ_FLAG) {
+ if (pf->dev_data->dev_conf.rxmode.mq_mode & ETH_MQ_RX_VMDQ_FLAG)
num = i40e_pf_calc_configured_queues_num(pf);
- num = i40e_align_floor(num);
- } else
- num = i40e_align_floor(pf->dev_data->nb_rx_queues);
+ else
+ num = pf->dev_data->nb_rx_queues;
+ num = RTE_MIN(num, I40E_MAX_Q_PER_TC);
PMD_INIT_LOG(INFO, "Max of contiguous %u PF queues are configured",
num);
diff --git a/drivers/net/i40e/i40e_ethdev_vf.c b/drivers/net/i40e/i40e_ethdev_vf.c
index 57ea8b6..7986fc0 100644
--- a/drivers/net/i40e/i40e_ethdev_vf.c
+++ b/drivers/net/i40e/i40e_ethdev_vf.c
@@ -2169,7 +2169,7 @@ i40evf_config_rss(struct i40e_vf *vf)
return 0;
}
- num = i40e_align_floor(vf->dev_data->nb_rx_queues);
+ num = RTE_MIN(vf->dev_data->nb_rx_queues, I40E_MAX_QP_NUM_PER_VF);
/* Fill out the look up table */
for (i = 0, j = 0; i < nb_q; i++, j++) {
if (j >= num)
--
1.9.3
* [dpdk-dev] [PATCH v3 2/2] i40e: Enlarge the number of supported queues
2015-11-03 15:40 ` [dpdk-dev] [PATCH v3 " Helin Zhang
2015-11-03 15:40 ` [dpdk-dev] [PATCH v3 1/2] i40e: adjust the number of queues for RSS Helin Zhang
@ 2015-11-03 15:40 ` Helin Zhang
2015-11-03 21:59 ` [dpdk-dev] [PATCH v3 0/2] " Thomas Monjalon
2015-11-04 14:54 ` Traynor, Kevin
3 siblings, 0 replies; 20+ messages in thread
From: Helin Zhang @ 2015-11-03 15:40 UTC (permalink / raw)
To: dev
It enlarges the number of supported queues to hardware allowed
maximum. There was a software limitation of 64 per physical port
which is not reasonable.
Signed-off-by: Helin Zhang <helin.zhang@intel.com>
---
config/common_bsdapp | 3 +-
config/common_linuxapp | 3 +-
doc/guides/rel_notes/deprecation.rst | 5 --
doc/guides/rel_notes/release_2_2.rst | 12 +++
drivers/net/i40e/i40e_ethdev.c | 138 +++++++++++++++--------------------
drivers/net/i40e/i40e_ethdev.h | 8 ++
6 files changed, 81 insertions(+), 88 deletions(-)
v2 changes:
Fixed issues of using wrong configured number of VF queues.
v3 changes:
Updated release notes.
diff --git a/config/common_bsdapp b/config/common_bsdapp
index f202d2f..fba29e5 100644
--- a/config/common_bsdapp
+++ b/config/common_bsdapp
@@ -141,7 +141,7 @@ CONFIG_RTE_LIBRTE_KVARGS=y
CONFIG_RTE_LIBRTE_ETHER=y
CONFIG_RTE_LIBRTE_ETHDEV_DEBUG=n
CONFIG_RTE_MAX_ETHPORTS=32
-CONFIG_RTE_MAX_QUEUES_PER_PORT=256
+CONFIG_RTE_MAX_QUEUES_PER_PORT=1024
CONFIG_RTE_LIBRTE_IEEE1588=n
CONFIG_RTE_ETHDEV_QUEUE_STAT_CNTRS=16
CONFIG_RTE_ETHDEV_RXTX_CALLBACKS=y
@@ -189,6 +189,7 @@ CONFIG_RTE_LIBRTE_I40E_RX_ALLOW_BULK_ALLOC=y
CONFIG_RTE_LIBRTE_I40E_INC_VECTOR=y
CONFIG_RTE_LIBRTE_I40E_RX_OLFLAGS_ENABLE=y
CONFIG_RTE_LIBRTE_I40E_16BYTE_RX_DESC=n
+CONFIG_RTE_LIBRTE_I40E_QUEUE_NUM_PER_PF=64
CONFIG_RTE_LIBRTE_I40E_QUEUE_NUM_PER_VF=4
CONFIG_RTE_LIBRTE_I40E_QUEUE_NUM_PER_VM=4
# interval up to 8160 us, aligned to 2 (or default value)
diff --git a/config/common_linuxapp b/config/common_linuxapp
index c1d4bbd..7248262 100644
--- a/config/common_linuxapp
+++ b/config/common_linuxapp
@@ -139,7 +139,7 @@ CONFIG_RTE_LIBRTE_KVARGS=y
CONFIG_RTE_LIBRTE_ETHER=y
CONFIG_RTE_LIBRTE_ETHDEV_DEBUG=n
CONFIG_RTE_MAX_ETHPORTS=32
-CONFIG_RTE_MAX_QUEUES_PER_PORT=256
+CONFIG_RTE_MAX_QUEUES_PER_PORT=1024
CONFIG_RTE_LIBRTE_IEEE1588=n
CONFIG_RTE_ETHDEV_QUEUE_STAT_CNTRS=16
CONFIG_RTE_ETHDEV_RXTX_CALLBACKS=y
@@ -187,6 +187,7 @@ CONFIG_RTE_LIBRTE_I40E_RX_ALLOW_BULK_ALLOC=y
CONFIG_RTE_LIBRTE_I40E_INC_VECTOR=y
CONFIG_RTE_LIBRTE_I40E_RX_OLFLAGS_ENABLE=y
CONFIG_RTE_LIBRTE_I40E_16BYTE_RX_DESC=n
+CONFIG_RTE_LIBRTE_I40E_QUEUE_NUM_PER_PF=64
CONFIG_RTE_LIBRTE_I40E_QUEUE_NUM_PER_VF=4
CONFIG_RTE_LIBRTE_I40E_QUEUE_NUM_PER_VM=4
# interval up to 8160 us, aligned to 2 (or default value)
diff --git a/doc/guides/rel_notes/deprecation.rst b/doc/guides/rel_notes/deprecation.rst
index f099ac0..730c3b7 100644
--- a/doc/guides/rel_notes/deprecation.rst
+++ b/doc/guides/rel_notes/deprecation.rst
@@ -8,11 +8,6 @@ API and ABI deprecation notices are to be posted here.
Deprecation Notices
-------------------
-* Significant ABI changes are planned for struct rte_eth_dev to support up to
- 1024 queues per port. This change will be in release 2.2.
- There is no backward compatibility planned from release 2.2.
- All binaries will need to be rebuilt from release 2.2.
-
* The following fields have been deprecated in rte_eth_stats:
ibadcrc, ibadlen, imcasts, fdirmatch, fdirmiss,
tx_pause_xon, rx_pause_xon, tx_pause_xoff, rx_pause_xoff
diff --git a/doc/guides/rel_notes/release_2_2.rst b/doc/guides/rel_notes/release_2_2.rst
index 16fcc89..5d119f4 100644
--- a/doc/guides/rel_notes/release_2_2.rst
+++ b/doc/guides/rel_notes/release_2_2.rst
@@ -116,6 +116,13 @@ Drivers
Fixed i40e issue that occurred when a DPDK application didn't initialize
ports if memory wasn't available on socket 0.
+* **i40e: Fixed issue of cannot supporting more than 64 queues per port.**
+
+ Fixed the issue in i40e of cannot supporting more than 64 queues per port,
+ though hardware actually supports that. The real number of queues may vary,
+ as long as the total number of queues used in PF, VFs, VMDq and FD does not
+ exceeds the hardware maximum.
+
* **vhost: Fixed Qemu shutdown.**
Fixed issue with libvirt ``virsh destroy`` not killing the VM.
@@ -205,6 +212,11 @@ ABI Changes
* librte_cfgfile: Allow longer names and values by increasing the constants
CFG_NAME_LEN and CFG_VALUE_LEN to 64 and 256 respectively.
+* i40e: From 2.2, enlarge the maximum number of queues per port by increasing
+ the config parameter of CONFIG_RTE_MAX_QUEUES_PER_PORT to 1024. Also an new
+ config parameter of CONFIG_RTE_LIBRTE_I40E_QUEUE_NUM_PER_PF will be added to
+ configure the maximum number of queues per PF.
+
Shared Library Versions
-----------------------
diff --git a/drivers/net/i40e/i40e_ethdev.c b/drivers/net/i40e/i40e_ethdev.c
index 66dfdba..1e8de7b 100644
--- a/drivers/net/i40e/i40e_ethdev.c
+++ b/drivers/net/i40e/i40e_ethdev.c
@@ -2748,9 +2748,8 @@ i40e_pf_parameter_init(struct rte_eth_dev *dev)
{
struct i40e_pf *pf = I40E_DEV_PRIVATE_TO_PF(dev->data->dev_private);
struct i40e_hw *hw = I40E_PF_TO_HW(pf);
- uint16_t sum_queues = 0, sum_vsis, left_queues;
+ uint16_t qp_count = 0, vsi_count = 0;
- /* First check if FW support SRIOV */
if (dev->pci_dev->max_vfs && !hw->func_caps.sr_iov_1_1) {
PMD_INIT_LOG(ERR, "HW configuration doesn't support SRIOV");
return -EINVAL;
@@ -2761,109 +2760,85 @@ i40e_pf_parameter_init(struct rte_eth_dev *dev)
pf->fc_conf.low_water[I40E_MAX_TRAFFIC_CLASS] = I40E_DEFAULT_LOW_WATER;
pf->flags = I40E_FLAG_HEADER_SPLIT_DISABLED;
- pf->max_num_vsi = RTE_MIN(hw->func_caps.num_vsis, I40E_MAX_NUM_VSIS);
- PMD_INIT_LOG(INFO, "Max supported VSIs:%u", pf->max_num_vsi);
- /* Allocate queues for pf */
- if (hw->func_caps.rss) {
+ pf->max_num_vsi = hw->func_caps.num_vsis;
+ pf->lan_nb_qp_max = RTE_LIBRTE_I40E_QUEUE_NUM_PER_PF;
+ pf->vmdq_nb_qp_max = RTE_LIBRTE_I40E_QUEUE_NUM_PER_VM;
+ pf->vf_nb_qp_max = RTE_LIBRTE_I40E_QUEUE_NUM_PER_VF;
+
+ /* FDir queue/VSI allocation */
+ pf->fdir_qp_offset = 0;
+ if (hw->func_caps.fd) {
+ pf->flags |= I40E_FLAG_FDIR;
+ pf->fdir_nb_qps = I40E_DEFAULT_QP_NUM_FDIR;
+ } else {
+ pf->fdir_nb_qps = 0;
+ }
+ qp_count += pf->fdir_nb_qps;
+ vsi_count += 1;
+
+ /* LAN queue/VSI allocation */
+ pf->lan_qp_offset = pf->fdir_qp_offset + pf->fdir_nb_qps;
+ if (!hw->func_caps.rss) {
+ pf->lan_nb_qps = 1;
+ } else {
pf->flags |= I40E_FLAG_RSS;
if (hw->mac.type == I40E_MAC_X722)
pf->flags |= I40E_FLAG_RSS_AQ_CAPABLE;
- pf->lan_nb_qps = RTE_MIN(hw->func_caps.num_tx_qp,
- (uint32_t)(1 << hw->func_caps.rss_table_entry_width));
- pf->lan_nb_qps = i40e_align_floor(pf->lan_nb_qps);
- } else
- pf->lan_nb_qps = 1;
- sum_queues = pf->lan_nb_qps;
- /* Default VSI is not counted in */
- sum_vsis = 0;
- PMD_INIT_LOG(INFO, "PF queue pairs:%u", pf->lan_nb_qps);
+ pf->lan_nb_qps = pf->lan_nb_qp_max;
+ }
+ qp_count += pf->lan_nb_qps;
+ vsi_count += 1;
+ /* VF queue/VSI allocation */
+ pf->vf_qp_offset = pf->lan_qp_offset + pf->lan_nb_qps;
if (hw->func_caps.sr_iov_1_1 && dev->pci_dev->max_vfs) {
pf->flags |= I40E_FLAG_SRIOV;
pf->vf_nb_qps = RTE_LIBRTE_I40E_QUEUE_NUM_PER_VF;
- if (dev->pci_dev->max_vfs > hw->func_caps.num_vfs) {
- PMD_INIT_LOG(ERR, "Config VF number %u, "
- "max supported %u.",
- dev->pci_dev->max_vfs,
- hw->func_caps.num_vfs);
- return -EINVAL;
- }
- if (pf->vf_nb_qps > I40E_MAX_QP_NUM_PER_VF) {
- PMD_INIT_LOG(ERR, "FVL VF queue %u, "
- "max support %u queues.",
- pf->vf_nb_qps, I40E_MAX_QP_NUM_PER_VF);
- return -EINVAL;
- }
pf->vf_num = dev->pci_dev->max_vfs;
- sum_queues += pf->vf_nb_qps * pf->vf_num;
- sum_vsis += pf->vf_num;
- PMD_INIT_LOG(INFO, "Max VF num:%u each has queue pairs:%u",
- pf->vf_num, pf->vf_nb_qps);
- } else
+ PMD_DRV_LOG(DEBUG, "%u VF VSIs, %u queues per VF VSI, "
+ "in total %u queues", pf->vf_num, pf->vf_nb_qps,
+ pf->vf_nb_qps * pf->vf_num);
+ } else {
+ pf->vf_nb_qps = 0;
pf->vf_num = 0;
+ }
+ qp_count += pf->vf_nb_qps * pf->vf_num;
+ vsi_count += pf->vf_num;
+ /* VMDq queue/VSI allocation */
+ pf->vmdq_qp_offset = pf->vf_qp_offset + pf->vf_nb_qps * pf->vf_num;
if (hw->func_caps.vmdq) {
pf->flags |= I40E_FLAG_VMDQ;
- pf->vmdq_nb_qps = RTE_LIBRTE_I40E_QUEUE_NUM_PER_VM;
+ pf->vmdq_nb_qps = pf->vmdq_nb_qp_max;
pf->max_nb_vmdq_vsi = 1;
- /*
- * If VMDQ available, assume a single VSI can be created. Will adjust
- * later.
- */
- sum_queues += pf->vmdq_nb_qps * pf->max_nb_vmdq_vsi;
- sum_vsis += pf->max_nb_vmdq_vsi;
+ PMD_DRV_LOG(DEBUG, "%u VMDQ VSIs, %u queues per VMDQ VSI, "
+ "in total %u queues", pf->max_nb_vmdq_vsi,
+ pf->vmdq_nb_qps,
+ pf->vmdq_nb_qps * pf->max_nb_vmdq_vsi);
} else {
pf->vmdq_nb_qps = 0;
pf->max_nb_vmdq_vsi = 0;
}
- pf->nb_cfg_vmdq_vsi = 0;
-
- if (hw->func_caps.fd) {
- pf->flags |= I40E_FLAG_FDIR;
- pf->fdir_nb_qps = I40E_DEFAULT_QP_NUM_FDIR;
- /**
- * Each flow director consumes one VSI and one queue,
- * but can't calculate out predictably here.
- */
- }
+ qp_count += pf->vmdq_nb_qps * pf->max_nb_vmdq_vsi;
+ vsi_count += pf->max_nb_vmdq_vsi;
if (hw->func_caps.dcb)
pf->flags |= I40E_FLAG_DCB;
- if (sum_vsis > pf->max_num_vsi ||
- sum_queues > hw->func_caps.num_rx_qp) {
- PMD_INIT_LOG(ERR, "VSI/QUEUE setting can't be satisfied");
- PMD_INIT_LOG(ERR, "Max VSIs: %u, asked:%u",
- pf->max_num_vsi, sum_vsis);
- PMD_INIT_LOG(ERR, "Total queue pairs:%u, asked:%u",
- hw->func_caps.num_rx_qp, sum_queues);
+ if (qp_count > hw->func_caps.num_tx_qp) {
+ PMD_DRV_LOG(ERR, "Failed to allocate %u queues, which exceeds "
+ "the hardware maximum %u", qp_count,
+ hw->func_caps.num_tx_qp);
return -EINVAL;
}
-
- /* Adjust VMDQ setting to support as many VMs as possible */
- if (pf->flags & I40E_FLAG_VMDQ) {
- left_queues = hw->func_caps.num_rx_qp - sum_queues;
-
- pf->max_nb_vmdq_vsi += RTE_MIN(left_queues / pf->vmdq_nb_qps,
- pf->max_num_vsi - sum_vsis);
-
- /* Limit the max VMDQ number that rte_ether that can support */
- pf->max_nb_vmdq_vsi = RTE_MIN(pf->max_nb_vmdq_vsi,
- ETH_64_POOLS - 1);
-
- PMD_INIT_LOG(INFO, "Max VMDQ VSI num:%u",
- pf->max_nb_vmdq_vsi);
- PMD_INIT_LOG(INFO, "VMDQ queue pairs:%u", pf->vmdq_nb_qps);
- }
-
- /* Each VSI occupy 1 MSIX interrupt at least, plus IRQ0 for misc intr
- * cause */
- if (sum_vsis > hw->func_caps.num_msix_vectors - 1) {
- PMD_INIT_LOG(ERR, "Too many VSIs(%u), MSIX intr(%u) not enough",
- sum_vsis, hw->func_caps.num_msix_vectors);
+ if (vsi_count > hw->func_caps.num_vsis) {
+ PMD_DRV_LOG(ERR, "Failed to allocate %u VSIs, which exceeds "
+ "the hardware maximum %u", vsi_count,
+ hw->func_caps.num_vsis);
return -EINVAL;
}
- return I40E_SUCCESS;
+
+ return 0;
}
static int
@@ -3253,7 +3228,8 @@ i40e_vsi_config_tc_queue_mapping(struct i40e_vsi *vsi,
bsf = rte_bsf32(qpnum_per_tc);
/* Adjust the queue number to actual queues that can be applied */
- vsi->nb_qps = qpnum_per_tc * total_tc;
+ if (!(vsi->type == I40E_VSI_MAIN && total_tc == 1))
+ vsi->nb_qps = qpnum_per_tc * total_tc;
/**
* Configure TC and queue mapping parameters, for enabled TC,
diff --git a/drivers/net/i40e/i40e_ethdev.h b/drivers/net/i40e/i40e_ethdev.h
index de3b9d9..fe3d331 100644
--- a/drivers/net/i40e/i40e_ethdev.h
+++ b/drivers/net/i40e/i40e_ethdev.h
@@ -402,10 +402,18 @@ struct i40e_pf {
uint16_t vf_num;
/* Each of below queue pairs should be power of 2 since it's the
precondition after TC configuration applied */
+ uint16_t lan_nb_qp_max;
uint16_t lan_nb_qps; /* The number of queue pairs of LAN */
+ uint16_t lan_qp_offset;
+ uint16_t vmdq_nb_qp_max;
uint16_t vmdq_nb_qps; /* The number of queue pairs of VMDq */
+ uint16_t vmdq_qp_offset;
+ uint16_t vf_nb_qp_max;
uint16_t vf_nb_qps; /* The number of queue pairs of VF */
+ uint16_t vf_qp_offset;
uint16_t fdir_nb_qps; /* The number of queue pairs of Flow Director */
+ uint16_t fdir_qp_offset;
+
uint16_t hash_lut_size; /* The size of hash lookup table */
/* store VXLAN UDP ports */
uint16_t vxlan_ports[I40E_MAX_PF_UDP_OFFLOAD_PORTS];
--
1.9.3
* Re: [dpdk-dev] [PATCH v3 0/2] i40e: Enlarge the number of supported queues
2015-11-03 15:40 ` [dpdk-dev] [PATCH v3 " Helin Zhang
2015-11-03 15:40 ` [dpdk-dev] [PATCH v3 1/2] i40e: adjust the number of queues for RSS Helin Zhang
2015-11-03 15:40 ` [dpdk-dev] [PATCH v3 2/2] i40e: Enlarge the number of supported queues Helin Zhang
@ 2015-11-03 21:59 ` Thomas Monjalon
2015-11-04 14:54 ` Traynor, Kevin
3 siblings, 0 replies; 20+ messages in thread
From: Thomas Monjalon @ 2015-11-03 21:59 UTC (permalink / raw)
To: Helin Zhang; +Cc: dev
2015-11-03 23:40, Helin Zhang:
> It enlarges the number of supported queues to hardware allowed maximum. There
> was a software limitation of 64 per physical port which is not reasonable.
>
> v2 changes:
> Fixed issues of using wrong configured number of VF queues.
>
> v3 changes:
> Updated release notes.
>
> Helin Zhang (2):
> i40e: adjust the number of queues for RSS
> i40e: Enlarge the number of supported queues
Applied, thanks
* Re: [dpdk-dev] [PATCH v3 0/2] i40e: Enlarge the number of supported queues
2015-11-03 15:40 ` [dpdk-dev] [PATCH v3 " Helin Zhang
` (2 preceding siblings ...)
2015-11-03 21:59 ` [dpdk-dev] [PATCH v3 0/2] " Thomas Monjalon
@ 2015-11-04 14:54 ` Traynor, Kevin
2015-11-05 0:39 ` Zhang, Helin
3 siblings, 1 reply; 20+ messages in thread
From: Traynor, Kevin @ 2015-11-04 14:54 UTC (permalink / raw)
To: Zhang, Helin, dev
> -----Original Message-----
> From: dev [mailto:dev-bounces@dpdk.org] On Behalf Of Helin Zhang
> Sent: Tuesday, November 3, 2015 3:40 PM
> To: dev@dpdk.org
> Subject: [dpdk-dev] [PATCH v3 0/2] i40e: Enlarge the number of supported
> queues
>
> It enlarges the number of supported queues to hardware allowed maximum. There
> was a software limitation of 64 per physical port which is not reasonable.
Hi Helin,
Is the layout of the queues and how CONFIG_RTE_LIBRTE_I40E_QUEUE_NUM_PER_PF
affects them documented?
I'm wondering: if I increase CONFIG_RTE_LIBRTE_I40E_QUEUE_NUM_PER_PF to more
than 64 queues, will they be contiguous? For example, if I increase it to 128,
will I be able to use queues 0-127, or will there be gaps for queues
reserved for VMDQ etc.?
Kevin.
>
> v2 changes:
> Fixed issues of using wrong configured number of VF queues.
>
> v3 changes:
> Updated release notes.
>
> Helin Zhang (2):
> i40e: adjust the number of queues for RSS
> i40e: Enlarge the number of supported queues
>
> config/common_bsdapp | 3 +-
> config/common_linuxapp | 3 +-
> doc/guides/rel_notes/deprecation.rst | 5 --
> doc/guides/rel_notes/release_2_2.rst | 12 +++
> drivers/net/i40e/i40e_ethdev.c | 146 +++++++++++++++------------------
> --
> drivers/net/i40e/i40e_ethdev.h | 8 ++
> drivers/net/i40e/i40e_ethdev_vf.c | 2 +-
> 7 files changed, 86 insertions(+), 93 deletions(-)
>
> --
> 1.9.3
* Re: [dpdk-dev] [PATCH v3 0/2] i40e: Enlarge the number of supported queues
2015-11-04 14:54 ` Traynor, Kevin
@ 2015-11-05 0:39 ` Zhang, Helin
0 siblings, 0 replies; 20+ messages in thread
From: Zhang, Helin @ 2015-11-05 0:39 UTC (permalink / raw)
To: Traynor, Kevin; +Cc: dev
> -----Original Message-----
> From: Traynor, Kevin
> Sent: Wednesday, November 4, 2015 10:54 PM
> To: Zhang, Helin; dev@dpdk.org
> Subject: RE: [dpdk-dev] [PATCH v3 0/2] i40e: Enlarge the number of supported
> queues
>
> > -----Original Message-----
> > From: dev [mailto:dev-bounces@dpdk.org] On Behalf Of Helin Zhang
> > Sent: Tuesday, November 3, 2015 3:40 PM
> > To: dev@dpdk.org
> > Subject: [dpdk-dev] [PATCH v3 0/2] i40e: Enlarge the number of
> > supported queues
> >
> > It enlarges the number of supported queues to hardware allowed
> > maximum. There was a software limitation of 64 per physical port which is not
> reasonable.
>
> Hi Helin,
>
> Is the layout of the queues and how
> CONFIG_RTE_LIBRTE_I40E_QUEUE_NUM_PER_PF
> affects them documented?
Its name is quite straightforward; it is the number of queues the user allows for the PF.
>
> I'm wondering: if I increase CONFIG_RTE_LIBRTE_I40E_QUEUE_NUM_PER_PF to
> more than 64 queues, will they be contiguous? For example, if I increase it to 128,
> will I be able to use queues 0-127, or will there be gaps for queues
> reserved for VMDQ etc.?
0 is reserved for FD, so 1-128 is for your case.
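To make that concrete, here is a hypothetical layout (assuming CONFIG_RTE_LIBRTE_I40E_QUEUE_NUM_PER_PF=128 and no VFs or VMDq pools configured), mirroring the offset arithmetic the patch adds to i40e_pf_parameter_init():
/* Hypothetical example: 1 FDir queue followed by 128 contiguous PF LAN queues */
uint16_t fdir_nb_qps    = 1, lan_nb_qps = 128;
uint16_t fdir_qp_offset = 0;                            /* queue 0: flow director     */
uint16_t lan_qp_offset  = fdir_qp_offset + fdir_nb_qps; /* queues 1..128: PF LAN      */
uint16_t vf_qp_offset   = lan_qp_offset + lan_nb_qps;   /* 129: where VF queues start */
uint16_t vmdq_qp_offset = vf_qp_offset;                 /* VMDq pools follow the VFs  */
So the PF LAN range itself stays contiguous; VF and VMDq ranges, if any, are carved out after it.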
Regards,
Helin
>
> Kevin.
>
> >
> > v2 changes:
> > Fixed issues of using wrong configured number of VF queues.
> >
> > v3 changes:
> > Updated release notes.
> >
> > Helin Zhang (2):
> > i40e: adjust the number of queues for RSS
> > i40e: Enlarge the number of supported queues
> >
> > config/common_bsdapp | 3 +-
> > config/common_linuxapp | 3 +-
> > doc/guides/rel_notes/deprecation.rst | 5 --
> > doc/guides/rel_notes/release_2_2.rst | 12 +++
> > drivers/net/i40e/i40e_ethdev.c | 146
> +++++++++++++++------------------
> > --
> > drivers/net/i40e/i40e_ethdev.h | 8 ++
> > drivers/net/i40e/i40e_ethdev_vf.c | 2 +-
> > 7 files changed, 86 insertions(+), 93 deletions(-)
> >
> > --
> > 1.9.3