* [dpdk-dev] [PATCH 1/6] ether: enhancement for VMDQ support
2014-09-23 13:14 [dpdk-dev] [PATCH 0/6] i40e VMDQ support Chen Jing D(Mark)
@ 2014-09-23 13:14 ` Chen Jing D(Mark)
2014-10-14 14:09 ` Thomas Monjalon
` (2 more replies)
2014-09-23 13:14 ` [dpdk-dev] [PATCH 2/6] igb: change for VMDQ arguments expansion Chen Jing D(Mark)
` (7 subsequent siblings)
8 siblings, 3 replies; 45+ messages in thread
From: Chen Jing D(Mark) @ 2014-09-23 13:14 UTC (permalink / raw)
To: dev
From: "Chen Jing D(Mark)" <jing.d.chen@intel.com>
The change includes several parts:
1. Clear pool bitmap when trying to remove specific MAC.
2. Define RSS, DCB and VMDQ flags to combine rx_mq_mode.
3. Use 'struct' to replace 'union', which to expand the rx_adv_conf
arguments to better support RSS, DCB and VMDQ.
4. Fix bug in rte_eth_dev_config_restore function, which will restore
all MAC address to default pool.
5. Define additional 3 arguments for better VMDQ support.
Signed-off-by: Chen Jing D(Mark) <jing.d.chen@intel.com>
Acked-by: Konstantin Ananyev <konstantin.ananyev@intel.com>
Acked-by: Jingjing Wu <jingjing.wu@intel.com>
Acked-by: Jijiang Liu <jijiang.liu@intel.com>
Acked-by: Huawei Xie <huawei.xie@intel.com>
---
lib/librte_ether/rte_ethdev.c | 12 +++++++-----
lib/librte_ether/rte_ethdev.h | 39 ++++++++++++++++++++++++++++-----------
2 files changed, 35 insertions(+), 16 deletions(-)
diff --git a/lib/librte_ether/rte_ethdev.c b/lib/librte_ether/rte_ethdev.c
index fd1010a..b7ef56e 100644
--- a/lib/librte_ether/rte_ethdev.c
+++ b/lib/librte_ether/rte_ethdev.c
@@ -771,7 +771,8 @@ rte_eth_dev_config_restore(uint8_t port_id)
continue;
/* add address to the hardware */
- if (*dev->dev_ops->mac_addr_add)
+ if (*dev->dev_ops->mac_addr_add &&
+ dev->data->mac_pool_sel[i] & (1ULL << pool))
(*dev->dev_ops->mac_addr_add)(dev, &addr, i, pool);
else {
PMD_DEBUG_TRACE("port %d: MAC address array not supported\n",
@@ -1249,10 +1250,8 @@ rte_eth_dev_info_get(uint8_t port_id, struct rte_eth_dev_info *dev_info)
}
dev = &rte_eth_devices[port_id];
- /* Default device offload capabilities to zero */
- dev_info->rx_offload_capa = 0;
- dev_info->tx_offload_capa = 0;
- dev_info->if_index = 0;
+ /* Set all fields with zero */
+ memset(dev_info, 0, sizeof(*dev_info));
FUNC_PTR_OR_RET(*dev->dev_ops->dev_infos_get);
(*dev->dev_ops->dev_infos_get)(dev, dev_info);
dev_info->pci_dev = dev->pci_dev;
@@ -2022,6 +2021,9 @@ rte_eth_dev_mac_addr_remove(uint8_t port_id, struct ether_addr *addr)
/* Update address in NIC data structure */
ether_addr_copy(&null_mac_addr, &dev->data->mac_addrs[index]);
+ /* Update pool bitmap in NIC data structure */
+ dev->data->mac_pool_sel[index] = 0;
+
return 0;
}
diff --git a/lib/librte_ether/rte_ethdev.h b/lib/librte_ether/rte_ethdev.h
index 50df654..8f3b6df 100644
--- a/lib/librte_ether/rte_ethdev.h
+++ b/lib/librte_ether/rte_ethdev.h
@@ -251,21 +251,34 @@ struct rte_eth_thresh {
uint8_t wthresh; /**< Ring writeback threshold. */
};
+#define ETH_MQ_RX_RSS_FLAG 0x1
+#define ETH_MQ_RX_DCB_FLAG 0x2
+#define ETH_MQ_RX_VMDQ_FLAG 0x4
+
/**
* A set of values to identify what method is to be used to route
* packets to multiple queues.
*/
enum rte_eth_rx_mq_mode {
- ETH_MQ_RX_NONE = 0, /**< None of DCB,RSS or VMDQ mode */
-
- ETH_MQ_RX_RSS, /**< For RX side, only RSS is on */
- ETH_MQ_RX_DCB, /**< For RX side,only DCB is on. */
- ETH_MQ_RX_DCB_RSS, /**< Both DCB and RSS enable */
-
- ETH_MQ_RX_VMDQ_ONLY, /**< Only VMDQ, no RSS nor DCB */
- ETH_MQ_RX_VMDQ_RSS, /**< RSS mode with VMDQ */
- ETH_MQ_RX_VMDQ_DCB, /**< Use VMDQ+DCB to route traffic to queues */
- ETH_MQ_RX_VMDQ_DCB_RSS, /**< Enable both VMDQ and DCB in VMDq */
+ /**< None of DCB,RSS or VMDQ mode */
+ ETH_MQ_RX_NONE = 0,
+
+ /**< For RX side, only RSS is on */
+ ETH_MQ_RX_RSS = ETH_MQ_RX_RSS_FLAG,
+ /**< For RX side,only DCB is on. */
+ ETH_MQ_RX_DCB = ETH_MQ_RX_DCB_FLAG,
+ /**< Both DCB and RSS enable */
+ ETH_MQ_RX_DCB_RSS = ETH_MQ_RX_RSS_FLAG | ETH_MQ_RX_DCB_FLAG,
+
+ /**< Only VMDQ, no RSS nor DCB */
+ ETH_MQ_RX_VMDQ_ONLY = ETH_MQ_RX_VMDQ_FLAG,
+ /**< RSS mode with VMDQ */
+ ETH_MQ_RX_VMDQ_RSS = ETH_MQ_RX_RSS_FLAG | ETH_MQ_RX_VMDQ_FLAG,
+ /**< Use VMDQ+DCB to route traffic to queues */
+ ETH_MQ_RX_VMDQ_DCB = ETH_MQ_RX_VMDQ_FLAG | ETH_MQ_RX_DCB_FLAG,
+ /**< Enable both VMDQ and DCB in VMDq */
+ ETH_MQ_RX_VMDQ_DCB_RSS = ETH_MQ_RX_RSS_FLAG | ETH_MQ_RX_DCB_FLAG |
+ ETH_MQ_RX_VMDQ_FLAG,
};
/**
@@ -840,7 +853,7 @@ struct rte_eth_conf {
Read the datasheet of given ethernet controller
for details. The possible values of this field
are defined in implementation of each driver. */
- union {
+ struct {
struct rte_eth_rss_conf rss_conf; /**< Port RSS configuration */
struct rte_eth_vmdq_dcb_conf vmdq_dcb_conf;
/**< Port vmdq+dcb configuration. */
@@ -906,6 +919,10 @@ struct rte_eth_dev_info {
uint16_t max_vmdq_pools; /**< Maximum number of VMDq pools. */
uint32_t rx_offload_capa; /**< Device RX offload capabilities. */
uint32_t tx_offload_capa; /**< Device TX offload capabilities. */
+ /**< Specify the queue range belongs to VMDQ pools if VMDQ applicable */
+ uint16_t vmdq_queue_base;
+ uint16_t vmdq_queue_num;
+ uint16_t vmdq_pool_base; /** < Specify the start pool ID of VMDQ pools */
};
struct rte_eth_dev;
--
1.7.7.6
^ permalink raw reply [flat|nested] 45+ messages in thread
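As a side note for readers of this archive, the effect of the enum redefinition in the patch above can be sketched in a few lines of standalone C. The flag values and enumerators are copied from the diff; the mq_mode_has_dcb() helper is a hypothetical illustration, not part of the patch:

```c
#include <stdint.h>

/* Flag values and combined enumerators as defined in the patch. */
#define ETH_MQ_RX_RSS_FLAG  0x1
#define ETH_MQ_RX_DCB_FLAG  0x2
#define ETH_MQ_RX_VMDQ_FLAG 0x4

enum rte_eth_rx_mq_mode {
	ETH_MQ_RX_NONE = 0,
	ETH_MQ_RX_RSS = ETH_MQ_RX_RSS_FLAG,
	ETH_MQ_RX_DCB = ETH_MQ_RX_DCB_FLAG,
	ETH_MQ_RX_DCB_RSS = ETH_MQ_RX_RSS_FLAG | ETH_MQ_RX_DCB_FLAG,
	ETH_MQ_RX_VMDQ_ONLY = ETH_MQ_RX_VMDQ_FLAG,
	ETH_MQ_RX_VMDQ_RSS = ETH_MQ_RX_RSS_FLAG | ETH_MQ_RX_VMDQ_FLAG,
	ETH_MQ_RX_VMDQ_DCB = ETH_MQ_RX_VMDQ_FLAG | ETH_MQ_RX_DCB_FLAG,
	ETH_MQ_RX_VMDQ_DCB_RSS = ETH_MQ_RX_RSS_FLAG | ETH_MQ_RX_DCB_FLAG |
				 ETH_MQ_RX_VMDQ_FLAG,
};

/* Hypothetical helper: with flag-based values a PMD can test one
 * capability with a single bitwise AND instead of matching every
 * combined enumerator. */
static inline int mq_mode_has_dcb(enum rte_eth_rx_mq_mode mode)
{
	return (mode & ETH_MQ_RX_DCB_FLAG) != 0;
}
```

The combined names keep exact-match comparisons in existing drivers compiling unchanged, while new code can use the bit tests directly.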
* Re: [dpdk-dev] [PATCH 1/6] ether: enhancement for VMDQ support
2014-09-23 13:14 ` [dpdk-dev] [PATCH 1/6] ether: enhancement for " Chen Jing D(Mark)
@ 2014-10-14 14:09 ` Thomas Monjalon
2014-10-15 6:59 ` Chen, Jing D
2014-10-16 10:07 ` [dpdk-dev] [PATCH v2 0/6] i40e " Chen Jing D(Mark)
2014-11-04 10:01 ` [dpdk-dev] [PATCH v3 " Chen Jing D(Mark)
2 siblings, 1 reply; 45+ messages in thread
From: Thomas Monjalon @ 2014-10-14 14:09 UTC (permalink / raw)
To: Chen Jing D(Mark); +Cc: dev
2014-09-23 21:14, Chen Jing D:
> The change includes several parts:
> 1. Clear pool bitmap when trying to remove specific MAC.
> 2. Define RSS, DCB and VMDQ flags to combine rx_mq_mode.
> 3. Use 'struct' to replace 'union', which to expand the rx_adv_conf
> arguments to better support RSS, DCB and VMDQ.
> 4. Fix bug in rte_eth_dev_config_restore function, which will restore
> all MAC address to default pool.
> 5. Define additional 3 arguments for better VMDQ support.
>
> Signed-off-by: Chen Jing D(Mark) <jing.d.chen@intel.com>
> Acked-by: Konstantin Ananyev <konstantin.ananyev@intel.com>
> Acked-by: Jingjing Wu <jingjing.wu@intel.com>
> Acked-by: Jijiang Liu <jijiang.liu@intel.com>
> Acked-by: Huawei Xie <huawei.xie@intel.com>
Whaou, there were a lot of reviewers!
The patch should be really clean. Let's see :)
> --- a/lib/librte_ether/rte_ethdev.c
> +++ b/lib/librte_ether/rte_ethdev.c
> /* add address to the hardware */
> - if (*dev->dev_ops->mac_addr_add)
> + if (*dev->dev_ops->mac_addr_add &&
> + dev->data->mac_pool_sel[i] & (1ULL << pool))
> (*dev->dev_ops->mac_addr_add)(dev, &addr, i, pool);
> + /* Update pool bitmap in NIC data structure */
> + dev->data->mac_pool_sel[index] = 0;
Reset is a better word than "Update" in this case.
But do we really need a comment for that?
> +#define ETH_MQ_RX_RSS_FLAG 0x1
> +#define ETH_MQ_RX_DCB_FLAG 0x2
> +#define ETH_MQ_RX_VMDQ_FLAG 0x4
Need a comment to know where these flags can be used.
> enum rte_eth_rx_mq_mode {
> - ETH_MQ_RX_NONE = 0, /**< None of DCB,RSS or VMDQ mode */
> -
> - ETH_MQ_RX_RSS, /**< For RX side, only RSS is on */
> - ETH_MQ_RX_DCB, /**< For RX side,only DCB is on. */
> - ETH_MQ_RX_DCB_RSS, /**< Both DCB and RSS enable */
> -
> - ETH_MQ_RX_VMDQ_ONLY, /**< Only VMDQ, no RSS nor DCB */
> - ETH_MQ_RX_VMDQ_RSS, /**< RSS mode with VMDQ */
> - ETH_MQ_RX_VMDQ_DCB, /**< Use VMDQ+DCB to route traffic to queues */
> - ETH_MQ_RX_VMDQ_DCB_RSS, /**< Enable both VMDQ and DCB in VMDq */
> + /**< None of DCB,RSS or VMDQ mode */
> + ETH_MQ_RX_NONE = 0,
> +
> + /**< For RX side, only RSS is on */
> + ETH_MQ_RX_RSS = ETH_MQ_RX_RSS_FLAG,
> + /**< For RX side,only DCB is on. */
> + ETH_MQ_RX_DCB = ETH_MQ_RX_DCB_FLAG,
> + /**< Both DCB and RSS enable */
> + ETH_MQ_RX_DCB_RSS = ETH_MQ_RX_RSS_FLAG | ETH_MQ_RX_DCB_FLAG,
> +
> + /**< Only VMDQ, no RSS nor DCB */
> + ETH_MQ_RX_VMDQ_ONLY = ETH_MQ_RX_VMDQ_FLAG,
> + /**< RSS mode with VMDQ */
> + ETH_MQ_RX_VMDQ_RSS = ETH_MQ_RX_RSS_FLAG | ETH_MQ_RX_VMDQ_FLAG,
> + /**< Use VMDQ+DCB to route traffic to queues */
> + ETH_MQ_RX_VMDQ_DCB = ETH_MQ_RX_VMDQ_FLAG | ETH_MQ_RX_DCB_FLAG,
> + /**< Enable both VMDQ and DCB in VMDq */
> + ETH_MQ_RX_VMDQ_DCB_RSS = ETH_MQ_RX_RSS_FLAG | ETH_MQ_RX_DCB_FLAG |
> + ETH_MQ_RX_VMDQ_FLAG,
> };
Why not simply remove all these combinations and keep only flags?
Please keep it simple.
> + /**< Specify the queue range belongs to VMDQ pools if VMDQ applicable */
> + uint16_t vmdq_queue_base;
> + uint16_t vmdq_queue_num;
If comment is before, it should be /** not /**<.
> + uint16_t vmdq_pool_base; /** < Specify the start pool ID of VMDQ pools */
There is a typo with the space --^
Please, when writing comments, ask yourself if each word is required
and how it can be shorter.
Example here: /**< first ID of VMDQ pools */
Conclusion: NACK
There are only a few typos and minor things, but it would help to have more
careful reviews. Having a list of people at the beginning of the patch
didn't help in this case.
Thanks for your attention
--
Thomas
^ permalink raw reply [flat|nested] 45+ messages in thread
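For readers following the bitmap discussion in the review above, here is a minimal standalone sketch of the mac_pool_sel bookkeeping: one 64-bit bitmap per MAC slot, with bit N set when the address was added to pool N. The array size and helper names are invented for illustration; only the bit manipulation mirrors the patch:

```c
#include <stdint.h>

#define MAX_MAC_ADDRS 4	/* illustrative size, not the real limit */

/* Miniature stand-in for dev->data->mac_pool_sel[]. */
static uint64_t mac_pool_sel[MAX_MAC_ADDRS];

static void mac_addr_add(uint32_t index, uint32_t pool)
{
	mac_pool_sel[index] |= 1ULL << pool;
}

static void mac_addr_remove(uint32_t index)
{
	/* The fix under review: reset the bitmap on removal so a later
	 * configuration restore does not re-add the address. */
	mac_pool_sel[index] = 0;
}

static int should_restore(uint32_t index, uint32_t pool)
{
	/* Mirrors the condition added to rte_eth_dev_config_restore():
	 * only restore an address into pools it actually belongs to. */
	return (mac_pool_sel[index] & (1ULL << pool)) != 0;
}
```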
* Re: [dpdk-dev] [PATCH 1/6] ether: enhancement for VMDQ support
2014-10-14 14:09 ` Thomas Monjalon
@ 2014-10-15 6:59 ` Chen, Jing D
2014-10-15 8:10 ` Thomas Monjalon
0 siblings, 1 reply; 45+ messages in thread
From: Chen, Jing D @ 2014-10-15 6:59 UTC (permalink / raw)
To: Thomas Monjalon; +Cc: dev
> -----Original Message-----
> From: Thomas Monjalon [mailto:thomas.monjalon@6wind.com]
> Sent: Tuesday, October 14, 2014 10:10 PM
> To: Chen, Jing D
> Cc: dev@dpdk.org
> Subject: Re: [dpdk-dev] [PATCH 1/6] ether: enhancement for VMDQ support
>
> 2014-09-23 21:14, Chen Jing D:
> > The change includes several parts:
> > 1. Clear pool bitmap when trying to remove specific MAC.
> > 2. Define RSS, DCB and VMDQ flags to combine rx_mq_mode.
> > 3. Use 'struct' to replace 'union', which to expand the rx_adv_conf
> > arguments to better support RSS, DCB and VMDQ.
> > 4. Fix bug in rte_eth_dev_config_restore function, which will restore
> > all MAC address to default pool.
> > 5. Define additional 3 arguments for better VMDQ support.
> >
> > Signed-off-by: Chen Jing D(Mark) <jing.d.chen@intel.com>
> > Acked-by: Konstantin Ananyev <konstantin.ananyev@intel.com>
> > Acked-by: Jingjing Wu <jingjing.wu@intel.com>
> > Acked-by: Jijiang Liu <jijiang.liu@intel.com>
> > Acked-by: Huawei Xie <huawei.xie@intel.com>
>
> Whaou, there were a lot of reviewers!
> The patch should be really clean. Let's see :)
That's the first time I've seen you be so humorous. :)
>
> > --- a/lib/librte_ether/rte_ethdev.c
> > +++ b/lib/librte_ether/rte_ethdev.c
> > /* add address to the hardware */
> > - if (*dev->dev_ops->mac_addr_add)
> > + if (*dev->dev_ops->mac_addr_add &&
> > + dev->data->mac_pool_sel[i] & (1ULL << pool))
> > (*dev->dev_ops->mac_addr_add)(dev, &addr, i,
> pool);
>
> > + /* Update pool bitmap in NIC data structure */
> > + dev->data->mac_pool_sel[index] = 0;
>
> Reset is a better word than "Update" in this case.
> But do we really need a comment for that?
Accept.
>
> > +#define ETH_MQ_RX_RSS_FLAG 0x1
> > +#define ETH_MQ_RX_DCB_FLAG 0x2
> > +#define ETH_MQ_RX_VMDQ_FLAG 0x4
>
> Need a comment to know where these flags can be used.
Accept.
>
> > enum rte_eth_rx_mq_mode {
> > - ETH_MQ_RX_NONE = 0, /**< None of DCB,RSS or VMDQ mode */
> > -
> > - ETH_MQ_RX_RSS, /**< For RX side, only RSS is on */
> > - ETH_MQ_RX_DCB, /**< For RX side,only DCB is on. */
> > - ETH_MQ_RX_DCB_RSS, /**< Both DCB and RSS enable */
> > -
> > - ETH_MQ_RX_VMDQ_ONLY, /**< Only VMDQ, no RSS nor DCB */
> > - ETH_MQ_RX_VMDQ_RSS, /**< RSS mode with VMDQ */
> > - ETH_MQ_RX_VMDQ_DCB, /**< Use VMDQ+DCB to route traffic to
> queues */
> > - ETH_MQ_RX_VMDQ_DCB_RSS, /**< Enable both VMDQ and DCB in
> VMDq */
> > + /**< None of DCB,RSS or VMDQ mode */
> > + ETH_MQ_RX_NONE = 0,
> > +
> > + /**< For RX side, only RSS is on */
> > + ETH_MQ_RX_RSS = ETH_MQ_RX_RSS_FLAG,
> > + /**< For RX side,only DCB is on. */
> > + ETH_MQ_RX_DCB = ETH_MQ_RX_DCB_FLAG,
> > + /**< Both DCB and RSS enable */
> > + ETH_MQ_RX_DCB_RSS = ETH_MQ_RX_RSS_FLAG |
> ETH_MQ_RX_DCB_FLAG,
> > +
> > + /**< Only VMDQ, no RSS nor DCB */
> > + ETH_MQ_RX_VMDQ_ONLY = ETH_MQ_RX_VMDQ_FLAG,
> > + /**< RSS mode with VMDQ */
> > + ETH_MQ_RX_VMDQ_RSS = ETH_MQ_RX_RSS_FLAG |
> ETH_MQ_RX_VMDQ_FLAG,
> > + /**< Use VMDQ+DCB to route traffic to queues */
> > + ETH_MQ_RX_VMDQ_DCB = ETH_MQ_RX_VMDQ_FLAG |
> ETH_MQ_RX_DCB_FLAG,
> > + /**< Enable both VMDQ and DCB in VMDq */
> > + ETH_MQ_RX_VMDQ_DCB_RSS = ETH_MQ_RX_RSS_FLAG |
> ETH_MQ_RX_DCB_FLAG |
> > + ETH_MQ_RX_VMDQ_FLAG,
> > };
>
> Why not simply remove all these combinations and keep only flags?
> Please keep it simple.
One reason is backward compatibility.
Another reason is that not all NIC drivers support all the combined modes; each
driver supports only a limited set. Under this condition, it's better to use the
combined definitions (VMDQ_DCB, DCB_RSS, etc.) so the driver can check whether
it supports a given mode.
>
> > + /**< Specify the queue range belongs to VMDQ pools if VMDQ
> applicable */
> > + uint16_t vmdq_queue_base;
> > + uint16_t vmdq_queue_num;
>
> If comment is before, it should be /** not /**<.
Accept.
>
> > + uint16_t vmdq_pool_base; /** < Specify the start pool ID of VMDQ
> pools */
>
> There is a typo with the space --^
> Please, when writing comments, ask yourself if each word is required
> and how it can be shorter.
> Example here: /**< first ID of VMDQ pools */
>
> Conclusion: NACK
> There are only few typos and minor things but it would help to have more
> careful reviews. Having a list of people at the beginning of the patch
> didn't help in this case.
I listed all the code reviewers to reduce their workload in replying to the email,
not to make it easier for the patch to be applied.
>
> Thanks for your attention
> --
> Thomas
^ permalink raw reply [flat|nested] 45+ messages in thread
* Re: [dpdk-dev] [PATCH 1/6] ether: enhancement for VMDQ support
2014-10-15 6:59 ` Chen, Jing D
@ 2014-10-15 8:10 ` Thomas Monjalon
2014-10-15 9:47 ` Chen, Jing D
0 siblings, 1 reply; 45+ messages in thread
From: Thomas Monjalon @ 2014-10-15 8:10 UTC (permalink / raw)
To: Chen, Jing D; +Cc: dev
2014-10-15 06:59, Chen, Jing D:
> From: Thomas Monjalon [mailto:thomas.monjalon@6wind.com]
> > > enum rte_eth_rx_mq_mode {
> > > - ETH_MQ_RX_NONE = 0, /**< None of DCB,RSS or VMDQ mode */
> > > -
> > > - ETH_MQ_RX_RSS, /**< For RX side, only RSS is on */
> > > - ETH_MQ_RX_DCB, /**< For RX side,only DCB is on. */
> > > - ETH_MQ_RX_DCB_RSS, /**< Both DCB and RSS enable */
> > > -
> > > - ETH_MQ_RX_VMDQ_ONLY, /**< Only VMDQ, no RSS nor DCB */
> > > - ETH_MQ_RX_VMDQ_RSS, /**< RSS mode with VMDQ */
> > > - ETH_MQ_RX_VMDQ_DCB, /**< Use VMDQ+DCB to route traffic to
> > queues */
> > > - ETH_MQ_RX_VMDQ_DCB_RSS, /**< Enable both VMDQ and DCB in
> > VMDq */
> > > + /**< None of DCB,RSS or VMDQ mode */
> > > + ETH_MQ_RX_NONE = 0,
> > > +
> > > + /**< For RX side, only RSS is on */
> > > + ETH_MQ_RX_RSS = ETH_MQ_RX_RSS_FLAG,
> > > + /**< For RX side,only DCB is on. */
> > > + ETH_MQ_RX_DCB = ETH_MQ_RX_DCB_FLAG,
> > > + /**< Both DCB and RSS enable */
> > > + ETH_MQ_RX_DCB_RSS = ETH_MQ_RX_RSS_FLAG |
> > ETH_MQ_RX_DCB_FLAG,
> > > +
> > > + /**< Only VMDQ, no RSS nor DCB */
> > > + ETH_MQ_RX_VMDQ_ONLY = ETH_MQ_RX_VMDQ_FLAG,
> > > + /**< RSS mode with VMDQ */
> > > + ETH_MQ_RX_VMDQ_RSS = ETH_MQ_RX_RSS_FLAG |
> > ETH_MQ_RX_VMDQ_FLAG,
> > > + /**< Use VMDQ+DCB to route traffic to queues */
> > > + ETH_MQ_RX_VMDQ_DCB = ETH_MQ_RX_VMDQ_FLAG |
> > ETH_MQ_RX_DCB_FLAG,
> > > + /**< Enable both VMDQ and DCB in VMDq */
> > > + ETH_MQ_RX_VMDQ_DCB_RSS = ETH_MQ_RX_RSS_FLAG |
> > ETH_MQ_RX_DCB_FLAG |
> > > + ETH_MQ_RX_VMDQ_FLAG,
> > > };
> >
> > Why not simply remove all these combinations and keep only flags?
> > Please keep it simple.
>
> One reason is back-compatibility.
I understand but I think we should prefer cleanup.
As there is no way to advertise the deprecation of flags, they should
simply be removed.
> Another reason is not all NIC driver support all the combined modes, only limited sets
> driver supported. Under this condition, it's better to use the combination definition
> (VMDQ_DCB, DCB_RSS, etc) to let driver check whether it supports.
A driver can do the same checks with simple flags, and it's probably simpler
(e.g. a driver which doesn't support VMDQ has no need to check all VMDQ
combinations).
> > There are only few typos and minor things but it would help to have more
> > careful reviews. Having a list of people at the beginning of the patch
> > didn't help in this case.
>
> I listed all the code reviewers out to reduce their workload to reply the email,
> not mean to make it easier to be applied.
I have no problem with listing reviewers when submitting patches.
To say more, I'd prefer that you list them yourself and add new reviewers
when sending new versions of the patchset.
But I would like reviewers to be more careful. They are especially useful for
discussing design choices and catching typos.
Having reviewers gives credit to the patch only if we are confident that the
review task was seriously carried out.
--
Thomas
^ permalink raw reply [flat|nested] 45+ messages in thread
* Re: [dpdk-dev] [PATCH 1/6] ether: enhancement for VMDQ support
2014-10-15 8:10 ` Thomas Monjalon
@ 2014-10-15 9:47 ` Chen, Jing D
2014-10-15 9:59 ` Thomas Monjalon
0 siblings, 1 reply; 45+ messages in thread
From: Chen, Jing D @ 2014-10-15 9:47 UTC (permalink / raw)
To: Thomas Monjalon; +Cc: dev
> -----Original Message-----
> From: Thomas Monjalon [mailto:thomas.monjalon@6wind.com]
> Sent: Wednesday, October 15, 2014 4:11 PM
> To: Chen, Jing D
> Cc: dev@dpdk.org
> Subject: Re: [dpdk-dev] [PATCH 1/6] ether: enhancement for VMDQ support
>
> 2014-10-15 06:59, Chen, Jing D:
> > From: Thomas Monjalon [mailto:thomas.monjalon@6wind.com]
> > > > enum rte_eth_rx_mq_mode {
> > > > - ETH_MQ_RX_NONE = 0, /**< None of DCB,RSS or VMDQ mode */
> > > > -
> > > > - ETH_MQ_RX_RSS, /**< For RX side, only RSS is on */
> > > > - ETH_MQ_RX_DCB, /**< For RX side,only DCB is on. */
> > > > - ETH_MQ_RX_DCB_RSS, /**< Both DCB and RSS enable */
> > > > -
> > > > - ETH_MQ_RX_VMDQ_ONLY, /**< Only VMDQ, no RSS nor DCB */
> > > > - ETH_MQ_RX_VMDQ_RSS, /**< RSS mode with VMDQ */
> > > > - ETH_MQ_RX_VMDQ_DCB, /**< Use VMDQ+DCB to route traffic to
> > > queues */
> > > > - ETH_MQ_RX_VMDQ_DCB_RSS, /**< Enable both VMDQ and DCB in
> > > VMDq */
> > > > + /**< None of DCB,RSS or VMDQ mode */
> > > > + ETH_MQ_RX_NONE = 0,
> > > > +
> > > > + /**< For RX side, only RSS is on */
> > > > + ETH_MQ_RX_RSS = ETH_MQ_RX_RSS_FLAG,
> > > > + /**< For RX side,only DCB is on. */
> > > > + ETH_MQ_RX_DCB = ETH_MQ_RX_DCB_FLAG,
> > > > + /**< Both DCB and RSS enable */
> > > > + ETH_MQ_RX_DCB_RSS = ETH_MQ_RX_RSS_FLAG |
> > > ETH_MQ_RX_DCB_FLAG,
> > > > +
> > > > + /**< Only VMDQ, no RSS nor DCB */
> > > > + ETH_MQ_RX_VMDQ_ONLY = ETH_MQ_RX_VMDQ_FLAG,
> > > > + /**< RSS mode with VMDQ */
> > > > + ETH_MQ_RX_VMDQ_RSS = ETH_MQ_RX_RSS_FLAG |
> > > ETH_MQ_RX_VMDQ_FLAG,
> > > > + /**< Use VMDQ+DCB to route traffic to queues */
> > > > + ETH_MQ_RX_VMDQ_DCB = ETH_MQ_RX_VMDQ_FLAG |
> > > ETH_MQ_RX_DCB_FLAG,
> > > > + /**< Enable both VMDQ and DCB in VMDq */
> > > > + ETH_MQ_RX_VMDQ_DCB_RSS = ETH_MQ_RX_RSS_FLAG |
> > > ETH_MQ_RX_DCB_FLAG |
> > > > + ETH_MQ_RX_VMDQ_FLAG,
> > > > };
> > >
> > > Why not simply remove all these combinations and keep only flags?
> > > Please keep it simple.
> >
> > One reason is back-compatibility.
>
> I understand but I think we should prefer cleanup.
> As there is no way to advertise deprecation of flags, it should be
> simply removed.
>
> > Another reason is not all NIC driver support all the combined modes, only
> limited sets
> > driver supported. Under this condition, it's better to use the combination
> definition
> > (VMDQ_DCB, DCB_RSS, etc) to let driver check whether it supports.
>
> Driver can do the same checks with simple flags and it's probably simpler
> (e.g. a driver which doesn't support VMDQ had no need to check all VMDQ
> combinations).
Below is an example of the change in ixgbe_dcb_hw_configure(). DCB can only
be enabled when DCB or VMDQ_DCB is selected.
Before the change:
    switch (dev->data->dev_conf.rxmode.mq_mode) {
    case ETH_MQ_RX_VMDQ_DCB:
        .....
    case ETH_MQ_RX_DCB:
        .....
    default:
        FAILED.
    }
With the change, it will be:
    switch (dev->data->dev_conf.rxmode.mq_mode) {
    case ETH_MQ_RX_VMDQ_FLAG | ETH_MQ_RX_DCB_FLAG:
        .....
    case ETH_MQ_RX_DCB_FLAG:
        .....
    default:
        FAILED
    }
Won't it look weird to read? In fact, it's even more complex in rte_eth_dev_check_mq_mode();
with the change, that code will look weird.
In fact, I don't see the benefit of this change over the old code. New PMD drivers can use
the simple flags while old drivers (IXGBE/IGB) can use the original definitions.
>
> > > There are only few typos and minor things but it would help to have more
> > > careful reviews. Having a list of people at the beginning of the patch
> > > didn't help in this case.
> >
> > I listed all the code reviewers out to reduce their workload to reply the
> email,
> > not mean to make it easier to be applied.
>
> I have no problem with listing of reviewers when submitting patches.
> To say more, I prefer you list them by yourself and you add new reviewers
> when sending new versions of the patchset.
> But I would like reviewers to be more careful. They are especially useful to
> discuss design choices and check typos.
> Having reviewer give credits to the patch only if we are confident that the
> review task is generally seriously achieved.
>
> --
> Thomas
^ permalink raw reply [flat|nested] 45+ messages in thread
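The two styles being debated can be made concrete in a small compilable sketch (the function names are invented; the flag values follow the patch). Note the semantic difference: the flag test also accepts modes such as DCB_RSS that the exact-match version rejects, which is precisely the design question under discussion:

```c
#define ETH_MQ_RX_RSS_FLAG  0x1
#define ETH_MQ_RX_DCB_FLAG  0x2
#define ETH_MQ_RX_VMDQ_FLAG 0x4

#define ETH_MQ_RX_DCB      ETH_MQ_RX_DCB_FLAG
#define ETH_MQ_RX_DCB_RSS  (ETH_MQ_RX_RSS_FLAG | ETH_MQ_RX_DCB_FLAG)
#define ETH_MQ_RX_VMDQ_DCB (ETH_MQ_RX_VMDQ_FLAG | ETH_MQ_RX_DCB_FLAG)

/* Combined-name style, as in ixgbe_dcb_hw_configure(): enumerate
 * exactly the modes where DCB may be enabled. */
static int dcb_allowed_by_name(int mq_mode)
{
	switch (mq_mode) {
	case ETH_MQ_RX_VMDQ_DCB:
	case ETH_MQ_RX_DCB:
		return 1;
	default:
		return 0;	/* the FAILED branch in the original code */
	}
}

/* Flag-test style suggested in the review: one bitwise check
 * replaces the enumeration of every combination containing DCB. */
static int dcb_allowed_by_flag(int mq_mode)
{
	return (mq_mode & ETH_MQ_RX_DCB_FLAG) != 0;
}
```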
* Re: [dpdk-dev] [PATCH 1/6] ether: enhancement for VMDQ support
2014-10-15 9:47 ` Chen, Jing D
@ 2014-10-15 9:59 ` Thomas Monjalon
0 siblings, 0 replies; 45+ messages in thread
From: Thomas Monjalon @ 2014-10-15 9:59 UTC (permalink / raw)
To: Chen, Jing D; +Cc: dev
2014-10-15 09:47, Chen, Jing D:
> From: Thomas Monjalon [mailto:thomas.monjalon@6wind.com]
> > 2014-10-15 06:59, Chen, Jing D:
> > > From: Thomas Monjalon [mailto:thomas.monjalon@6wind.com]
> > > > > enum rte_eth_rx_mq_mode {
> > > > > - ETH_MQ_RX_NONE = 0, /**< None of DCB,RSS or VMDQ mode */
> > > > > -
> > > > > - ETH_MQ_RX_RSS, /**< For RX side, only RSS is on */
> > > > > - ETH_MQ_RX_DCB, /**< For RX side,only DCB is on. */
> > > > > - ETH_MQ_RX_DCB_RSS, /**< Both DCB and RSS enable */
> > > > > -
> > > > > - ETH_MQ_RX_VMDQ_ONLY, /**< Only VMDQ, no RSS nor DCB */
> > > > > - ETH_MQ_RX_VMDQ_RSS, /**< RSS mode with VMDQ */
> > > > > - ETH_MQ_RX_VMDQ_DCB, /**< Use VMDQ+DCB to route traffic to queues */
> > > > > - ETH_MQ_RX_VMDQ_DCB_RSS, /**< Enable both VMDQ and DCB in VMDq */
> > > > > + /**< None of DCB,RSS or VMDQ mode */
> > > > > + ETH_MQ_RX_NONE = 0,
> > > > > +
> > > > > + /**< For RX side, only RSS is on */
> > > > > + ETH_MQ_RX_RSS = ETH_MQ_RX_RSS_FLAG,
> > > > > + /**< For RX side,only DCB is on. */
> > > > > + ETH_MQ_RX_DCB = ETH_MQ_RX_DCB_FLAG,
> > > > > + /**< Both DCB and RSS enable */
> > > > > + ETH_MQ_RX_DCB_RSS = ETH_MQ_RX_RSS_FLAG | ETH_MQ_RX_DCB_FLAG,
> > > > > +
> > > > > + /**< Only VMDQ, no RSS nor DCB */
> > > > > + ETH_MQ_RX_VMDQ_ONLY = ETH_MQ_RX_VMDQ_FLAG,
> > > > > + /**< RSS mode with VMDQ */
> > > > > + ETH_MQ_RX_VMDQ_RSS = ETH_MQ_RX_RSS_FLAG | ETH_MQ_RX_VMDQ_FLAG,
> > > > > + /**< Use VMDQ+DCB to route traffic to queues */
> > > > > + ETH_MQ_RX_VMDQ_DCB = ETH_MQ_RX_VMDQ_FLAG | ETH_MQ_RX_DCB_FLAG,
> > > > > + /**< Enable both VMDQ and DCB in VMDq */
> > > > > + ETH_MQ_RX_VMDQ_DCB_RSS = ETH_MQ_RX_RSS_FLAG | ETH_MQ_RX_DCB_FLAG |
> > > > > + ETH_MQ_RX_VMDQ_FLAG,
> > > > > };
> > > >
> > > > Why not simply remove all these combinations and keep only flags?
> > > > Please keep it simple.
> > >
> > > One reason is back-compatibility.
> >
> > I understand but I think we should prefer cleanup.
> > As there is no way to advertise deprecation of flags, it should be
> > simply removed.
> >
> > > Another reason is not all NIC driver support all the combined modes, only
> > > limited sets
> > > driver supported. Under this condition, it's better to use the combination
> > > definition
> > > (VMDQ_DCB, DCB_RSS, etc) to let driver check whether it supports.
> >
> > Driver can do the same checks with simple flags and it's probably simpler
> > (e.g. a driver which doesn't support VMDQ had no need to check all VMDQ
> > combinations).
[...]
> case ETH_MQ_RX_VMDQ_FLAG | ETH_MQ_RX_DCB_FLAG:
[...]
> Won't it look weird for reading? In fact, it's more complex in
> rte_eth_dev_check_mq_mode(),
> With the change, the code will look weird.
I think that defining all combinations of flags is even weirder.
> In fact, I don't see benefit with the change to old code. New PMD driver
> can use simple flag while old driver (IXGBE/IGB) can use original definition.
If nobody else agrees with my point of view, I'll accept yours.
--
Thomas
^ permalink raw reply [flat|nested] 45+ messages in thread
* [dpdk-dev] [PATCH v2 0/6] i40e VMDQ support
2014-09-23 13:14 ` [dpdk-dev] [PATCH 1/6] ether: enhancement for " Chen Jing D(Mark)
2014-10-14 14:09 ` Thomas Monjalon
@ 2014-10-16 10:07 ` Chen Jing D(Mark)
2014-10-16 10:07 ` [dpdk-dev] [PATCH v2 1/6] ether: enhancement for " Chen Jing D(Mark)
` (7 more replies)
2014-11-04 10:01 ` [dpdk-dev] [PATCH v3 " Chen Jing D(Mark)
2 siblings, 8 replies; 45+ messages in thread
From: Chen Jing D(Mark) @ 2014-10-16 10:07 UTC (permalink / raw)
To: dev
From: "Chen Jing D(Mark)" <jing.d.chen@intel.com>
v2:
- Fix a few typos.
- Add comments for RX mq mode flags.
- Remove '\n' from some log messages.
- Remove 'Acked-by' in commit log.
v1:
Define extra VMDQ arguments to expand the VMDQ configuration. This also
includes changes in the igb and ixgbe PMD drivers. In the meanwhile, fix two
defects in the rte_ether library.
Add full VMDQ support in the i40e PMD driver: rename some functions and set up
the VMDQ VSI after it's enabled in the application. It also makes some
improvements to macaddr add/delete to support setting multiple MAC addresses
for a single pool or for multiple pools.
Finally, change the i40e rx/tx_queue_setup and dev_start/stop functions to
configure/switch queues belonging to VMDQ pools.
Chen Jing D(Mark) (6):
ether: enhancement for VMDQ support
igb: change for VMDQ arguments expansion
ixgbe: change for VMDQ arguments expansion
i40e: add VMDQ support
i40e: macaddr add/del enhancement
i40e: Add full VMDQ pools support
config/common_linuxapp | 1 +
lib/librte_ether/rte_ethdev.c | 12 +-
lib/librte_ether/rte_ethdev.h | 43 +++-
lib/librte_pmd_e1000/igb_ethdev.c | 3 +
lib/librte_pmd_i40e/i40e_ethdev.c | 499 ++++++++++++++++++++++++++---------
lib/librte_pmd_i40e/i40e_ethdev.h | 21 ++-
lib/librte_pmd_i40e/i40e_rxtx.c | 125 +++++++--
lib/librte_pmd_ixgbe/ixgbe_ethdev.c | 1 +
8 files changed, 536 insertions(+), 169 deletions(-)
--
1.7.7.6
^ permalink raw reply [flat|nested] 45+ messages in thread
* [dpdk-dev] [PATCH v2 1/6] ether: enhancement for VMDQ support
2014-10-16 10:07 ` [dpdk-dev] [PATCH v2 0/6] i40e " Chen Jing D(Mark)
@ 2014-10-16 10:07 ` Chen Jing D(Mark)
2014-11-03 22:17 ` Thomas Monjalon
2014-10-16 10:07 ` [dpdk-dev] [PATCH v2 2/6] igb: change for VMDQ arguments expansion Chen Jing D(Mark)
` (6 subsequent siblings)
7 siblings, 1 reply; 45+ messages in thread
From: Chen Jing D(Mark) @ 2014-10-16 10:07 UTC (permalink / raw)
To: dev
From: "Chen Jing D(Mark)" <jing.d.chen@intel.com>
The change includes several parts:
1. Clear pool bitmap when trying to remove specific MAC.
2. Define RSS, DCB and VMDQ flags to combine rx_mq_mode.
3. Use 'struct' to replace 'union', which to expand the rx_adv_conf
arguments to better support RSS, DCB and VMDQ.
4. Fix bug in rte_eth_dev_config_restore function, which will restore
all MAC address to default pool.
5. Define additional 3 arguments for better VMDQ support.
Signed-off-by: Chen Jing D(Mark) <jing.d.chen@intel.com>
---
lib/librte_ether/rte_ethdev.c | 12 ++++++----
lib/librte_ether/rte_ethdev.h | 43 ++++++++++++++++++++++++++++++----------
2 files changed, 39 insertions(+), 16 deletions(-)
diff --git a/lib/librte_ether/rte_ethdev.c b/lib/librte_ether/rte_ethdev.c
index fd1010a..86f4409 100644
--- a/lib/librte_ether/rte_ethdev.c
+++ b/lib/librte_ether/rte_ethdev.c
@@ -771,7 +771,8 @@ rte_eth_dev_config_restore(uint8_t port_id)
continue;
/* add address to the hardware */
- if (*dev->dev_ops->mac_addr_add)
+ if (*dev->dev_ops->mac_addr_add &&
+ dev->data->mac_pool_sel[i] & (1ULL << pool))
(*dev->dev_ops->mac_addr_add)(dev, &addr, i, pool);
else {
PMD_DEBUG_TRACE("port %d: MAC address array not supported\n",
@@ -1249,10 +1250,8 @@ rte_eth_dev_info_get(uint8_t port_id, struct rte_eth_dev_info *dev_info)
}
dev = &rte_eth_devices[port_id];
- /* Default device offload capabilities to zero */
- dev_info->rx_offload_capa = 0;
- dev_info->tx_offload_capa = 0;
- dev_info->if_index = 0;
+ /* Set all fields with zero */
+ memset(dev_info, 0, sizeof(*dev_info));
FUNC_PTR_OR_RET(*dev->dev_ops->dev_infos_get);
(*dev->dev_ops->dev_infos_get)(dev, dev_info);
dev_info->pci_dev = dev->pci_dev;
@@ -2022,6 +2021,9 @@ rte_eth_dev_mac_addr_remove(uint8_t port_id, struct ether_addr *addr)
/* Update address in NIC data structure */
ether_addr_copy(&null_mac_addr, &dev->data->mac_addrs[index]);
+ /* reset pool bitmap */
+ dev->data->mac_pool_sel[index] = 0;
+
return 0;
}
diff --git a/lib/librte_ether/rte_ethdev.h b/lib/librte_ether/rte_ethdev.h
index 50df654..4c83aa5 100644
--- a/lib/librte_ether/rte_ethdev.h
+++ b/lib/librte_ether/rte_ethdev.h
@@ -252,20 +252,37 @@ struct rte_eth_thresh {
};
/**
+ * Simple flags to indicate RX mq mode, which can be used independently or combined
+ * in enum rte_eth_rx_mq_mode definition.
+ */
+#define ETH_MQ_RX_RSS_FLAG 0x1
+#define ETH_MQ_RX_DCB_FLAG 0x2
+#define ETH_MQ_RX_VMDQ_FLAG 0x4
+
+/**
* A set of values to identify what method is to be used to route
* packets to multiple queues.
*/
enum rte_eth_rx_mq_mode {
- ETH_MQ_RX_NONE = 0, /**< None of DCB,RSS or VMDQ mode */
-
- ETH_MQ_RX_RSS, /**< For RX side, only RSS is on */
- ETH_MQ_RX_DCB, /**< For RX side,only DCB is on. */
- ETH_MQ_RX_DCB_RSS, /**< Both DCB and RSS enable */
-
- ETH_MQ_RX_VMDQ_ONLY, /**< Only VMDQ, no RSS nor DCB */
- ETH_MQ_RX_VMDQ_RSS, /**< RSS mode with VMDQ */
- ETH_MQ_RX_VMDQ_DCB, /**< Use VMDQ+DCB to route traffic to queues */
- ETH_MQ_RX_VMDQ_DCB_RSS, /**< Enable both VMDQ and DCB in VMDq */
+ /**< None of DCB,RSS or VMDQ mode */
+ ETH_MQ_RX_NONE = 0,
+
+ /**< For RX side, only RSS is on */
+ ETH_MQ_RX_RSS = ETH_MQ_RX_RSS_FLAG,
+ /**< For RX side,only DCB is on. */
+ ETH_MQ_RX_DCB = ETH_MQ_RX_DCB_FLAG,
+ /**< Both DCB and RSS enable */
+ ETH_MQ_RX_DCB_RSS = ETH_MQ_RX_RSS_FLAG | ETH_MQ_RX_DCB_FLAG,
+
+ /**< Only VMDQ, no RSS nor DCB */
+ ETH_MQ_RX_VMDQ_ONLY = ETH_MQ_RX_VMDQ_FLAG,
+ /**< RSS mode with VMDQ */
+ ETH_MQ_RX_VMDQ_RSS = ETH_MQ_RX_RSS_FLAG | ETH_MQ_RX_VMDQ_FLAG,
+ /**< Use VMDQ+DCB to route traffic to queues */
+ ETH_MQ_RX_VMDQ_DCB = ETH_MQ_RX_VMDQ_FLAG | ETH_MQ_RX_DCB_FLAG,
+ /**< Enable both VMDQ and DCB in VMDq */
+ ETH_MQ_RX_VMDQ_DCB_RSS = ETH_MQ_RX_RSS_FLAG | ETH_MQ_RX_DCB_FLAG |
+ ETH_MQ_RX_VMDQ_FLAG,
};
/**
@@ -840,7 +857,7 @@ struct rte_eth_conf {
Read the datasheet of given ethernet controller
for details. The possible values of this field
are defined in implementation of each driver. */
- union {
+ struct {
struct rte_eth_rss_conf rss_conf; /**< Port RSS configuration */
struct rte_eth_vmdq_dcb_conf vmdq_dcb_conf;
/**< Port vmdq+dcb configuration. */
@@ -906,6 +923,10 @@ struct rte_eth_dev_info {
uint16_t max_vmdq_pools; /**< Maximum number of VMDq pools. */
uint32_t rx_offload_capa; /**< Device RX offload capabilities. */
uint32_t tx_offload_capa; /**< Device TX offload capabilities. */
+ /** Specify the queue range belongs to VMDQ pools if VMDQ applicable. */
+ uint16_t vmdq_queue_base;
+ uint16_t vmdq_queue_num;
+ uint16_t vmdq_pool_base; /**< First ID of VMDQ pools. */
};
struct rte_eth_dev;
--
1.7.7.6
^ permalink raw reply [flat|nested] 45+ messages in thread
* Re: [dpdk-dev] [PATCH v2 1/6] ether: enhancement for VMDQ support
2014-10-16 10:07 ` [dpdk-dev] [PATCH v2 1/6] ether: enhancement for " Chen Jing D(Mark)
@ 2014-11-03 22:17 ` Thomas Monjalon
2014-11-04 5:50 ` Chen, Jing D
0 siblings, 1 reply; 45+ messages in thread
From: Thomas Monjalon @ 2014-11-03 22:17 UTC (permalink / raw)
To: Chen Jing D(Mark); +Cc: dev
2014-10-16 18:07, Chen Jing D:
> /**
> + * Simple flags to indicate RX mq mode, which can be used independently or combined
> + * in enum rte_eth_rx_mq_mode definition.
> + */
> +#define ETH_MQ_RX_RSS_FLAG 0x1
> +#define ETH_MQ_RX_DCB_FLAG 0x2
> +#define ETH_MQ_RX_VMDQ_FLAG 0x4
The comment would be more useful by explaining that these flags are used
for rte_eth_conf.rxmode.mq_mode.
> + /**< None of DCB,RSS or VMDQ mode */
> + ETH_MQ_RX_NONE = 0,
> +
> + /**< For RX side, only RSS is on */
> + ETH_MQ_RX_RSS = ETH_MQ_RX_RSS_FLAG,
> + /**< For RX side,only DCB is on. */
> + ETH_MQ_RX_DCB = ETH_MQ_RX_DCB_FLAG,
> + /**< Both DCB and RSS enable */
> + ETH_MQ_RX_DCB_RSS = ETH_MQ_RX_RSS_FLAG | ETH_MQ_RX_DCB_FLAG,
> +
> + /**< Only VMDQ, no RSS nor DCB */
> + ETH_MQ_RX_VMDQ_ONLY = ETH_MQ_RX_VMDQ_FLAG,
> + /**< RSS mode with VMDQ */
> + ETH_MQ_RX_VMDQ_RSS = ETH_MQ_RX_RSS_FLAG | ETH_MQ_RX_VMDQ_FLAG,
> + /**< Use VMDQ+DCB to route traffic to queues */
> + ETH_MQ_RX_VMDQ_DCB = ETH_MQ_RX_VMDQ_FLAG | ETH_MQ_RX_DCB_FLAG,
> + /**< Enable both VMDQ and DCB in VMDq */
> + ETH_MQ_RX_VMDQ_DCB_RSS = ETH_MQ_RX_RSS_FLAG | ETH_MQ_RX_DCB_FLAG |
> + ETH_MQ_RX_VMDQ_FLAG,
Doxygen comments placed before should start with /** not /**<.
> + /** Specify the queue range belongs to VMDQ pools if VMDQ applicable. */
> + uint16_t vmdq_queue_base;
> + uint16_t vmdq_queue_num;
Please explain what the values in vmdq_queue_base and vmdq_queue_num mean.
Thanks
--
Thomas
* Re: [dpdk-dev] [PATCH v2 1/6] ether: enhancement for VMDQ support
2014-11-03 22:17 ` Thomas Monjalon
@ 2014-11-04 5:50 ` Chen, Jing D
2014-11-04 8:53 ` Thomas Monjalon
0 siblings, 1 reply; 45+ messages in thread
From: Chen, Jing D @ 2014-11-04 5:50 UTC (permalink / raw)
To: Thomas Monjalon; +Cc: dev
Hi Thomas,
> -----Original Message-----
> From: Thomas Monjalon [mailto:thomas.monjalon@6wind.com]
> Sent: Tuesday, November 04, 2014 6:17 AM
> To: Chen, Jing D
> Cc: dev@dpdk.org; Ananyev, Konstantin
> Subject: Re: [PATCH v2 1/6] ether: enhancement for VMDQ support
>
> 2014-10-16 18:07, Chen Jing D:
> > /**
> > + * Simple flags to indicate RX mq mode, which can be used independently
> or combined
> > + * in enum rte_eth_rx_mq_mode definition.
> > + */
> > +#define ETH_MQ_RX_RSS_FLAG 0x1
> > +#define ETH_MQ_RX_DCB_FLAG 0x2
> > +#define ETH_MQ_RX_VMDQ_FLAG 0x4
>
> The comment would be more useful by explaining that these flags are used
> for rte_eth_conf.rxmode.mq_mode.
Yes, that's more straightforward.
>
> > + /**< None of DCB,RSS or VMDQ mode */
> > + ETH_MQ_RX_NONE = 0,
> > +
> > + /**< For RX side, only RSS is on */
> > + ETH_MQ_RX_RSS = ETH_MQ_RX_RSS_FLAG,
> > + /**< For RX side,only DCB is on. */
> > + ETH_MQ_RX_DCB = ETH_MQ_RX_DCB_FLAG,
> > + /**< Both DCB and RSS enable */
> > + ETH_MQ_RX_DCB_RSS = ETH_MQ_RX_RSS_FLAG |
> ETH_MQ_RX_DCB_FLAG,
> > +
> > + /**< Only VMDQ, no RSS nor DCB */
> > + ETH_MQ_RX_VMDQ_ONLY = ETH_MQ_RX_VMDQ_FLAG,
> > + /**< RSS mode with VMDQ */
> > + ETH_MQ_RX_VMDQ_RSS = ETH_MQ_RX_RSS_FLAG |
> ETH_MQ_RX_VMDQ_FLAG,
> > + /**< Use VMDQ+DCB to route traffic to queues */
> > + ETH_MQ_RX_VMDQ_DCB = ETH_MQ_RX_VMDQ_FLAG |
> ETH_MQ_RX_DCB_FLAG,
> > + /**< Enable both VMDQ and DCB in VMDq */
> > + ETH_MQ_RX_VMDQ_DCB_RSS = ETH_MQ_RX_RSS_FLAG |
> ETH_MQ_RX_DCB_FLAG |
> > + ETH_MQ_RX_VMDQ_FLAG,
>
> Doxygen comments placed before should start with /** not /**<.
My mistake. Thanks for pointing it out.
>
> > + /** Specify the queue range belongs to VMDQ pools if VMDQ
> applicable. */
> > + uint16_t vmdq_queue_base;
> > + uint16_t vmdq_queue_num;
>
> Please explain what mean the values in vmdq_queue_base and
> vmdq_queue_num.
I think the names are self-explanatory; I also added some comments for them.
As the earlier max_rx/tx_queues fields indicate how many queues are available, these
2 variables define the queue range for VM usage.
What kind of explanation do you need me to add?
>
> Thanks
> --
> Thomas
* Re: [dpdk-dev] [PATCH v2 1/6] ether: enhancement for VMDQ support
2014-11-04 5:50 ` Chen, Jing D
@ 2014-11-04 8:53 ` Thomas Monjalon
2014-11-04 8:59 ` Chen, Jing D
0 siblings, 1 reply; 45+ messages in thread
From: Thomas Monjalon @ 2014-11-04 8:53 UTC (permalink / raw)
To: Chen, Jing D; +Cc: dev
2014-11-04 05:50, Chen, Jing D:
> From: Thomas Monjalon [mailto:thomas.monjalon@6wind.com]
> > 2014-10-16 18:07, Chen Jing D:
> > > + /** Specify the queue range belongs to VMDQ pools if VMDQ
> > applicable. */
> > > + uint16_t vmdq_queue_base;
> > > + uint16_t vmdq_queue_num;
> >
> > Please explain what mean the values in vmdq_queue_base and
> > vmdq_queue_num.
>
> I thinks the name is self- explanatory, I also add some comments for them.
> As previous max_rx/tx_queues indicates how many queues available, these
> 2 variables defines the queue ranges for VM usage.
I understand clearly now.
> What kind of explanations you needs me to add?
You cannot put a doxygen comment which applies to 2 fields.
Try to describe precisely the meaning of each field.
Example: /**< first queue ID in the range for VMDQ pool */
and /**< size of the queue range for VMDQ pool */
--
Thomas
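An application would consume the two fields roughly as follows; the struct below is a hypothetical subset of rte_eth_dev_info for illustration, with the per-field comments Thomas suggested:

```c
#include <stdint.h>
#include <assert.h>

/* Hypothetical subset of rte_eth_dev_info (illustration only). */
struct dev_info_sketch {
	uint16_t max_rx_queues;
	uint16_t vmdq_queue_base; /**< first queue ID in the range for VMDQ pool */
	uint16_t vmdq_queue_num;  /**< size of the queue range for VMDQ pool */
	uint16_t max_vmdq_pools;
};

/* Map (pool, queue offset within the pool) to an absolute queue ID,
 * assuming the VMDQ queue range is split evenly across the pools. */
static uint16_t vmdq_pool_queue(const struct dev_info_sketch *info,
				uint16_t pool, uint16_t offset)
{
	uint16_t qpp = info->vmdq_queue_num / info->max_vmdq_pools;

	return info->vmdq_queue_base + pool * qpp + offset;
}
```

The even-split assumption matches the i40e patch later in this thread, where vmdq_queue_num is vmdq_nb_qps * max_nb_vmdq_vsi.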
* Re: [dpdk-dev] [PATCH v2 1/6] ether: enhancement for VMDQ support
2014-11-04 8:53 ` Thomas Monjalon
@ 2014-11-04 8:59 ` Chen, Jing D
0 siblings, 0 replies; 45+ messages in thread
From: Chen, Jing D @ 2014-11-04 8:59 UTC (permalink / raw)
To: Thomas Monjalon; +Cc: dev
> -----Original Message-----
> From: Thomas Monjalon [mailto:thomas.monjalon@6wind.com]
> Sent: Tuesday, November 04, 2014 4:54 PM
> To: Chen, Jing D
> Cc: dev@dpdk.org; Ananyev, Konstantin
> Subject: Re: [PATCH v2 1/6] ether: enhancement for VMDQ support
>
> 2014-11-04 05:50, Chen, Jing D:
> > From: Thomas Monjalon [mailto:thomas.monjalon@6wind.com]
> > > 2014-10-16 18:07, Chen Jing D:
> > > > + /** Specify the queue range belongs to VMDQ pools if VMDQ
> > > applicable. */
> > > > + uint16_t vmdq_queue_base;
> > > > + uint16_t vmdq_queue_num;
> > >
> > > Please explain what mean the values in vmdq_queue_base and
> > > vmdq_queue_num.
> >
> > I thinks the name is self- explanatory, I also add some comments for them.
> > As previous max_rx/tx_queues indicates how many queues available,
> these
> > 2 variables defines the queue ranges for VM usage.
>
> I understand clearly now.
>
> > What kind of explanations you needs me to add?
>
> You cannot put a doxygen comment which apply to 2 fields.
> Try do describe precisely the meaning of each field.
> Example: /**< first queue ID in the range for VMDQ pool */
> and /**< size of the queue range for VMDQ pool */
Thanks! Got it.
>
> --
> Thomas
* [dpdk-dev] [PATCH v2 2/6] igb: change for VMDQ arguments expansion
2014-10-16 10:07 ` [dpdk-dev] [PATCH v2 0/6] i40e " Chen Jing D(Mark)
2014-10-16 10:07 ` [dpdk-dev] [PATCH v2 1/6] ether: enhancement for " Chen Jing D(Mark)
@ 2014-10-16 10:07 ` Chen Jing D(Mark)
2014-11-03 18:37 ` Thomas Monjalon
2014-10-16 10:07 ` [dpdk-dev] [PATCH v2 3/6] ixgbe: " Chen Jing D(Mark)
` (5 subsequent siblings)
7 siblings, 1 reply; 45+ messages in thread
From: Chen Jing D(Mark) @ 2014-10-16 10:07 UTC (permalink / raw)
To: dev
From: "Chen Jing D(Mark)" <jing.d.chen@intel.com>
Assign new VMDQ arguments with correct values.
Signed-off-by: Chen Jing D(Mark) <jing.d.chen@intel.com>
---
lib/librte_pmd_e1000/igb_ethdev.c | 3 +++
1 files changed, 3 insertions(+), 0 deletions(-)
diff --git a/lib/librte_pmd_e1000/igb_ethdev.c b/lib/librte_pmd_e1000/igb_ethdev.c
index c9acdc5..dc0ea6d 100644
--- a/lib/librte_pmd_e1000/igb_ethdev.c
+++ b/lib/librte_pmd_e1000/igb_ethdev.c
@@ -1286,18 +1286,21 @@ eth_igb_infos_get(struct rte_eth_dev *dev,
dev_info->max_rx_queues = 16;
dev_info->max_tx_queues = 16;
dev_info->max_vmdq_pools = ETH_8_POOLS;
+ dev_info->vmdq_queue_num = 16;
break;
case e1000_82580:
dev_info->max_rx_queues = 8;
dev_info->max_tx_queues = 8;
dev_info->max_vmdq_pools = ETH_8_POOLS;
+ dev_info->vmdq_queue_num = 8;
break;
case e1000_i350:
dev_info->max_rx_queues = 8;
dev_info->max_tx_queues = 8;
dev_info->max_vmdq_pools = ETH_8_POOLS;
+ dev_info->vmdq_queue_num = 8;
break;
case e1000_i354:
--
1.7.7.6
* Re: [dpdk-dev] [PATCH v2 2/6] igb: change for VMDQ arguments expansion
2014-10-16 10:07 ` [dpdk-dev] [PATCH v2 2/6] igb: change for VMDQ arguments expansion Chen Jing D(Mark)
@ 2014-11-03 18:37 ` Thomas Monjalon
2014-11-04 5:26 ` Chen, Jing D
0 siblings, 1 reply; 45+ messages in thread
From: Thomas Monjalon @ 2014-11-03 18:37 UTC (permalink / raw)
To: Chen Jing D(Mark); +Cc: dev
2014-10-16 18:07, Chen Jing D:
> --- a/lib/librte_pmd_e1000/igb_ethdev.c
> +++ b/lib/librte_pmd_e1000/igb_ethdev.c
> @@ -1286,18 +1286,21 @@ eth_igb_infos_get(struct rte_eth_dev *dev,
> dev_info->max_rx_queues = 16;
> dev_info->max_tx_queues = 16;
> dev_info->max_vmdq_pools = ETH_8_POOLS;
> + dev_info->vmdq_queue_num = 16;
> break;
>
> case e1000_82580:
> dev_info->max_rx_queues = 8;
> dev_info->max_tx_queues = 8;
> dev_info->max_vmdq_pools = ETH_8_POOLS;
> + dev_info->vmdq_queue_num = 8;
> break;
>
> case e1000_i350:
> dev_info->max_rx_queues = 8;
> dev_info->max_tx_queues = 8;
> dev_info->max_vmdq_pools = ETH_8_POOLS;
> + dev_info->vmdq_queue_num = 8;
> break;
Why not simply set it only once?
dev_info->vmdq_queue_num = dev_info->max_rx_queues;
--
Thomas
* Re: [dpdk-dev] [PATCH v2 2/6] igb: change for VMDQ arguments expansion
2014-11-03 18:37 ` Thomas Monjalon
@ 2014-11-04 5:26 ` Chen, Jing D
0 siblings, 0 replies; 45+ messages in thread
From: Chen, Jing D @ 2014-11-04 5:26 UTC (permalink / raw)
To: Thomas Monjalon; +Cc: dev
Hi,
> -----Original Message-----
> From: Thomas Monjalon [mailto:thomas.monjalon@6wind.com]
> Sent: Tuesday, November 04, 2014 2:37 AM
> To: Chen, Jing D
> Cc: dev@dpdk.org; Ananyev, Konstantin
> Subject: Re: [PATCH v2 2/6] igb: change for VMDQ arguments expansion
>
> 2014-10-16 18:07, Chen Jing D:
> > --- a/lib/librte_pmd_e1000/igb_ethdev.c
> > +++ b/lib/librte_pmd_e1000/igb_ethdev.c
> > @@ -1286,18 +1286,21 @@ eth_igb_infos_get(struct rte_eth_dev *dev,
> > dev_info->max_rx_queues = 16;
> > dev_info->max_tx_queues = 16;
> > dev_info->max_vmdq_pools = ETH_8_POOLS;
> > + dev_info->vmdq_queue_num = 16;
> > break;
> >
> > case e1000_82580:
> > dev_info->max_rx_queues = 8;
> > dev_info->max_tx_queues = 8;
> > dev_info->max_vmdq_pools = ETH_8_POOLS;
> > + dev_info->vmdq_queue_num = 8;
> > break;
> >
> > case e1000_i350:
> > dev_info->max_rx_queues = 8;
> > dev_info->max_tx_queues = 8;
> > dev_info->max_vmdq_pools = ETH_8_POOLS;
> > + dev_info->vmdq_queue_num = 8;
> > break;
>
> Why not simply set it only once?
> dev_info->vmdq_queue_num = dev_info->max_rx_queues;
There are some other NIC types in this 'switch'; vmdq_queue_num is only set when max_vmdq_pools is not 0.
>
> --
> Thomas
* [dpdk-dev] [PATCH v2 3/6] ixgbe: change for VMDQ arguments expansion
2014-10-16 10:07 ` [dpdk-dev] [PATCH v2 0/6] i40e " Chen Jing D(Mark)
2014-10-16 10:07 ` [dpdk-dev] [PATCH v2 1/6] ether: enhancement for " Chen Jing D(Mark)
2014-10-16 10:07 ` [dpdk-dev] [PATCH v2 2/6] igb: change for VMDQ arguments expansion Chen Jing D(Mark)
@ 2014-10-16 10:07 ` Chen Jing D(Mark)
2014-10-16 10:07 ` [dpdk-dev] [PATCH v2 4/6] i40e: add VMDQ support Chen Jing D(Mark)
` (4 subsequent siblings)
7 siblings, 0 replies; 45+ messages in thread
From: Chen Jing D(Mark) @ 2014-10-16 10:07 UTC (permalink / raw)
To: dev
From: "Chen Jing D(Mark)" <jing.d.chen@intel.com>
Assign new VMDQ arguments with correct values.
Signed-off-by: Chen Jing D(Mark) <jing.d.chen@intel.com>
---
lib/librte_pmd_ixgbe/ixgbe_ethdev.c | 1 +
1 files changed, 1 insertions(+), 0 deletions(-)
diff --git a/lib/librte_pmd_ixgbe/ixgbe_ethdev.c b/lib/librte_pmd_ixgbe/ixgbe_ethdev.c
index f4b590b..d0f9bcb 100644
--- a/lib/librte_pmd_ixgbe/ixgbe_ethdev.c
+++ b/lib/librte_pmd_ixgbe/ixgbe_ethdev.c
@@ -1933,6 +1933,7 @@ ixgbe_dev_info_get(struct rte_eth_dev *dev, struct rte_eth_dev_info *dev_info)
dev_info->max_vmdq_pools = ETH_16_POOLS;
else
dev_info->max_vmdq_pools = ETH_64_POOLS;
+ dev_info->vmdq_queue_num = dev_info->max_rx_queues;
dev_info->rx_offload_capa =
DEV_RX_OFFLOAD_VLAN_STRIP |
DEV_RX_OFFLOAD_IPV4_CKSUM |
--
1.7.7.6
* [dpdk-dev] [PATCH v2 4/6] i40e: add VMDQ support
2014-10-16 10:07 ` [dpdk-dev] [PATCH v2 0/6] i40e " Chen Jing D(Mark)
` (2 preceding siblings ...)
2014-10-16 10:07 ` [dpdk-dev] [PATCH v2 3/6] ixgbe: " Chen Jing D(Mark)
@ 2014-10-16 10:07 ` Chen Jing D(Mark)
2014-11-03 18:33 ` Thomas Monjalon
2014-10-16 10:07 ` [dpdk-dev] [PATCH v2 5/6] i40e: macaddr add/del enhancement Chen Jing D(Mark)
` (3 subsequent siblings)
7 siblings, 1 reply; 45+ messages in thread
From: Chen Jing D(Mark) @ 2014-10-16 10:07 UTC (permalink / raw)
To: dev
From: "Chen Jing D(Mark)" <jing.d.chen@intel.com>
The change includes several parts:
1. Get maximum number of VMDQ pools supported in dev_init.
2. Fill VMDQ info in i40e_dev_info_get.
3. Setup VMDQ pools in i40e_dev_configure.
4. i40e_vsi_setup change to support creation of VMDQ VSI.
Signed-off-by: Chen Jing D(Mark) <jing.d.chen@intel.com>
---
config/common_linuxapp | 1 +
lib/librte_pmd_i40e/i40e_ethdev.c | 237 ++++++++++++++++++++++++++++++++-----
lib/librte_pmd_i40e/i40e_ethdev.h | 17 +++-
3 files changed, 225 insertions(+), 30 deletions(-)
diff --git a/config/common_linuxapp b/config/common_linuxapp
index 5bee910..d0bb3f7 100644
--- a/config/common_linuxapp
+++ b/config/common_linuxapp
@@ -208,6 +208,7 @@ CONFIG_RTE_LIBRTE_I40E_RX_ALLOW_BULK_ALLOC=y
CONFIG_RTE_LIBRTE_I40E_ALLOW_UNSUPPORTED_SFP=n
CONFIG_RTE_LIBRTE_I40E_16BYTE_RX_DESC=n
CONFIG_RTE_LIBRTE_I40E_QUEUE_NUM_PER_VF=4
+CONFIG_RTE_LIBRTE_I40E_QUEUE_NUM_PER_VM=4
# interval up to 8160 us, aligned to 2 (or default value)
CONFIG_RTE_LIBRTE_I40E_ITR_INTERVAL=-1
diff --git a/lib/librte_pmd_i40e/i40e_ethdev.c b/lib/librte_pmd_i40e/i40e_ethdev.c
index a00d6ca..ad65e25 100644
--- a/lib/librte_pmd_i40e/i40e_ethdev.c
+++ b/lib/librte_pmd_i40e/i40e_ethdev.c
@@ -168,6 +168,7 @@ static int i40e_get_cap(struct i40e_hw *hw);
static int i40e_pf_parameter_init(struct rte_eth_dev *dev);
static int i40e_pf_setup(struct i40e_pf *pf);
static int i40e_vsi_init(struct i40e_vsi *vsi);
+static int i40e_vmdq_setup(struct rte_eth_dev *dev);
static void i40e_stat_update_32(struct i40e_hw *hw, uint32_t reg,
bool offset_loaded, uint64_t *offset, uint64_t *stat);
static void i40e_stat_update_48(struct i40e_hw *hw,
@@ -269,21 +270,11 @@ static struct eth_driver rte_i40e_pmd = {
};
static inline int
-i40e_prev_power_of_2(int n)
+i40e_align_floor(int n)
{
- int p = n;
-
- --p;
- p |= p >> 1;
- p |= p >> 2;
- p |= p >> 4;
- p |= p >> 8;
- p |= p >> 16;
- if (p == (n - 1))
- return n;
- p >>= 1;
-
- return ++p;
+ if (n == 0)
+ return 0;
+ return (1 << (sizeof(n) * CHAR_BIT - 1 - __builtin_clz(n)));
}
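The replacement function computes the largest power of two not greater than n in a few instructions. A standalone sketch mirroring it (note `__builtin_clz` is a GCC/Clang builtin, undefined for a zero argument, hence the explicit n == 0 check):

```c
#include <assert.h>
#include <limits.h>

/* Mirror of the patch's i40e_align_floor():
 * largest power of two <= n, or 0 for n == 0. */
static int align_floor(int n)
{
	if (n == 0)
		return 0;
	/* Position of the highest set bit gives the power of two. */
	return 1 << (sizeof(n) * CHAR_BIT - 1 - __builtin_clz(n));
}
```

This replaces the old bit-smearing i40e_prev_power_of_2() loop shown above with a single count-leading-zeros operation.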
static inline int
@@ -500,7 +491,7 @@ eth_i40e_dev_init(__rte_unused struct eth_driver *eth_drv,
if (!dev->data->mac_addrs) {
PMD_INIT_LOG(ERR, "Failed to allocated memory "
"for storing mac address");
- goto err_get_mac_addr;
+ goto err_mac_alloc;
}
ether_addr_copy((struct ether_addr *)hw->mac.perm_addr,
&dev->data->mac_addrs[0]);
@@ -521,8 +512,9 @@ eth_i40e_dev_init(__rte_unused struct eth_driver *eth_drv,
return 0;
+err_mac_alloc:
+ i40e_vsi_release(pf->main_vsi);
err_setup_pf_switch:
- rte_free(pf->main_vsi);
err_get_mac_addr:
err_configure_lan_hmc:
(void)i40e_shutdown_lan_hmc(hw);
@@ -541,6 +533,27 @@ err_get_capabilities:
static int
i40e_dev_configure(struct rte_eth_dev *dev)
{
+ int ret;
+ enum rte_eth_rx_mq_mode mq_mode = dev->data->dev_conf.rxmode.mq_mode;
+
+ /* VMDQ setup.
+ * Needs to move VMDQ setting out of i40e_pf_config_mq_rx() as VMDQ and
+ * RSS setting have different requirements.
+ * General PMD driver call sequence are NIC init, configure,
+ * rx/tx_queue_setup and dev_start. In rx/tx_queue_setup() function, it
+ * will try to lookup the VSI that specific queue belongs to if VMDQ
+ * applicable. So, VMDQ setting has to be done before
+ * rx/tx_queue_setup(). This function is good to place vmdq_setup.
+ * For RSS setting, it will try to calculate actual configured RX queue
+ * number, which will be available after rx_queue_setup(). dev_start()
+ * function is good to place RSS setup.
+ */
+ if (mq_mode & ETH_MQ_RX_VMDQ_FLAG) {
+ ret = i40e_vmdq_setup(dev);
+ if (ret)
+ return ret;
+ }
+
return i40e_dev_init_vlan(dev);
}
@@ -1389,6 +1402,16 @@ i40e_dev_info_get(struct rte_eth_dev *dev, struct rte_eth_dev_info *dev_info)
DEV_TX_OFFLOAD_UDP_CKSUM |
DEV_TX_OFFLOAD_TCP_CKSUM |
DEV_TX_OFFLOAD_SCTP_CKSUM;
+
+ if (pf->flags & I40E_FLAG_VMDQ) {
+ dev_info->max_vmdq_pools = pf->max_nb_vmdq_vsi;
+ dev_info->vmdq_queue_base = dev_info->max_rx_queues;
+ dev_info->vmdq_queue_num = pf->vmdq_nb_qps *
+ pf->max_nb_vmdq_vsi;
+ dev_info->vmdq_pool_base = I40E_VMDQ_POOL_BASE;
+ dev_info->max_rx_queues += dev_info->vmdq_queue_num;
+ dev_info->max_tx_queues += dev_info->vmdq_queue_num;
+ }
}
static int
@@ -1814,7 +1837,7 @@ i40e_pf_parameter_init(struct rte_eth_dev *dev)
{
struct i40e_pf *pf = I40E_DEV_PRIVATE_TO_PF(dev->data->dev_private);
struct i40e_hw *hw = I40E_PF_TO_HW(pf);
- uint16_t sum_queues = 0, sum_vsis;
+ uint16_t sum_queues = 0, sum_vsis, left_queues;
/* First check if FW support SRIOV */
if (dev->pci_dev->max_vfs && !hw->func_caps.sr_iov_1_1) {
@@ -1830,7 +1853,7 @@ i40e_pf_parameter_init(struct rte_eth_dev *dev)
pf->flags |= I40E_FLAG_RSS;
pf->lan_nb_qps = RTE_MIN(hw->func_caps.num_tx_qp,
(uint32_t)(1 << hw->func_caps.rss_table_entry_width));
- pf->lan_nb_qps = i40e_prev_power_of_2(pf->lan_nb_qps);
+ pf->lan_nb_qps = i40e_align_floor(pf->lan_nb_qps);
} else
pf->lan_nb_qps = 1;
sum_queues = pf->lan_nb_qps;
@@ -1864,11 +1887,19 @@ i40e_pf_parameter_init(struct rte_eth_dev *dev)
if (hw->func_caps.vmdq) {
pf->flags |= I40E_FLAG_VMDQ;
- pf->vmdq_nb_qps = I40E_DEFAULT_QP_NUM_VMDQ;
- sum_queues += pf->vmdq_nb_qps;
- sum_vsis += 1;
- PMD_INIT_LOG(INFO, "VMDQ queue pairs:%u", pf->vmdq_nb_qps);
+ pf->vmdq_nb_qps = RTE_LIBRTE_I40E_QUEUE_NUM_PER_VM;
+ pf->max_nb_vmdq_vsi = 1;
+ /*
+ * If VMDQ available, assume a single VSI can be created. Will adjust
+ * later.
+ */
+ sum_queues += pf->vmdq_nb_qps * pf->max_nb_vmdq_vsi;
+ sum_vsis += pf->max_nb_vmdq_vsi;
+ } else {
+ pf->vmdq_nb_qps = 0;
+ pf->max_nb_vmdq_vsi = 0;
}
+ pf->nb_cfg_vmdq_vsi = 0;
if (hw->func_caps.fd) {
pf->flags |= I40E_FLAG_FDIR;
@@ -1889,6 +1920,22 @@ i40e_pf_parameter_init(struct rte_eth_dev *dev)
return -EINVAL;
}
+ /* Adjust VMDQ setting to support as many VMs as possible */
+ if (pf->flags & I40E_FLAG_VMDQ) {
+ left_queues = hw->func_caps.num_rx_qp - sum_queues;
+
+ pf->max_nb_vmdq_vsi += RTE_MIN(left_queues / pf->vmdq_nb_qps,
+ pf->max_num_vsi - sum_vsis);
+
+ /* Limit the max VMDQ number to what rte_ether can support */
+ pf->max_nb_vmdq_vsi = RTE_MIN(pf->max_nb_vmdq_vsi,
+ ETH_64_POOLS - 1);
+
+ PMD_INIT_LOG(INFO, "Max VMDQ VSI num:%u",
+ pf->max_nb_vmdq_vsi);
+ PMD_INIT_LOG(INFO, "VMDQ queue pairs:%u", pf->vmdq_nb_qps);
+ }
+
/* Each VSI occupy 1 MSIX interrupt at least, plus IRQ0 for misc intr
* cause */
if (sum_vsis > hw->func_caps.num_msix_vectors - 1) {
@@ -2281,7 +2328,7 @@ i40e_vsi_config_tc_queue_mapping(struct i40e_vsi *vsi,
vsi->enabled_tc = enabled_tcmap;
/* Number of queues per enabled TC */
- qpnum_per_tc = i40e_prev_power_of_2(vsi->nb_qps / total_tc);
+ qpnum_per_tc = i40e_align_floor(vsi->nb_qps / total_tc);
qpnum_per_tc = RTE_MIN(qpnum_per_tc, I40E_MAX_Q_PER_TC);
bsf = rte_bsf32(qpnum_per_tc);
@@ -2587,6 +2634,9 @@ i40e_vsi_setup(struct i40e_pf *pf,
case I40E_VSI_SRIOV :
vsi->nb_qps = pf->vf_nb_qps;
break;
+ case I40E_VSI_VMDQ2:
+ vsi->nb_qps = pf->vmdq_nb_qps;
+ break;
default:
goto fail_mem;
}
@@ -2728,8 +2778,44 @@ i40e_vsi_setup(struct i40e_pf *pf,
* Since VSI is not created yet, only configure parameter,
* will add vsi below.
*/
- }
- else {
+ } else if (type == I40E_VSI_VMDQ2) {
+ memset(&ctxt, 0, sizeof(ctxt));
+ /*
+ * For other VSI, the uplink_seid equals to uplink VSI's
+ * uplink_seid since they share same VEB
+ */
+ vsi->uplink_seid = uplink_vsi->uplink_seid;
+ ctxt.pf_num = hw->pf_id;
+ ctxt.vf_num = 0;
+ ctxt.uplink_seid = vsi->uplink_seid;
+ ctxt.connection_type = 0x1;
+ ctxt.flags = I40E_AQ_VSI_TYPE_VMDQ2;
+
+ ctxt.info.valid_sections |=
+ rte_cpu_to_le_16(I40E_AQ_VSI_PROP_SWITCH_VALID);
+ /* user_param carries flag to enable loop back */
+ if (user_param) {
+ ctxt.info.switch_id =
+ rte_cpu_to_le_16(I40E_AQ_VSI_SW_ID_FLAG_LOCAL_LB);
+ ctxt.info.switch_id |=
+ rte_cpu_to_le_16(I40E_AQ_VSI_SW_ID_FLAG_ALLOW_LB);
+ }
+
+ /* Configure port/vlan */
+ ctxt.info.valid_sections |=
+ rte_cpu_to_le_16(I40E_AQ_VSI_PROP_VLAN_VALID);
+ ctxt.info.port_vlan_flags |= I40E_AQ_VSI_PVLAN_MODE_ALL;
+ ret = i40e_vsi_config_tc_queue_mapping(vsi, &ctxt.info,
+ I40E_DEFAULT_TCMAP);
+ if (ret != I40E_SUCCESS) {
+ PMD_DRV_LOG(ERR, "Failed to configure "
+ "TC queue mapping");
+ goto fail_msix_alloc;
+ }
+ ctxt.info.up_enable_bits = I40E_DEFAULT_TCMAP;
+ ctxt.info.valid_sections |=
+ rte_cpu_to_le_16(I40E_AQ_VSI_PROP_SCHED_VALID);
+ } else {
PMD_DRV_LOG(ERR, "VSI: Not support other type VSI yet");
goto fail_msix_alloc;
}
@@ -2901,7 +2987,6 @@ i40e_pf_setup(struct i40e_pf *pf)
{
struct i40e_hw *hw = I40E_PF_TO_HW(pf);
struct i40e_filter_control_settings settings;
- struct rte_eth_dev_data *dev_data = pf->dev_data;
struct i40e_vsi *vsi;
int ret;
@@ -2923,8 +3008,6 @@ i40e_pf_setup(struct i40e_pf *pf)
return I40E_ERR_NOT_READY;
}
pf->main_vsi = vsi;
- dev_data->nb_rx_queues = vsi->nb_qps;
- dev_data->nb_tx_queues = vsi->nb_qps;
/* Configure filter control */
memset(&settings, 0, sizeof(settings));
@@ -3195,6 +3278,102 @@ i40e_vsi_init(struct i40e_vsi *vsi)
return err;
}
+static int
+i40e_vmdq_setup(struct rte_eth_dev *dev)
+{
+ struct rte_eth_conf *conf = &dev->data->dev_conf;
+ struct i40e_pf *pf = I40E_DEV_PRIVATE_TO_PF(dev->data->dev_private);
+ int i, err, conf_vsis, j, loop;
+ struct i40e_vsi *vsi;
+ struct i40e_vmdq_info *vmdq_info;
+ struct rte_eth_vmdq_rx_conf *vmdq_conf;
+ struct i40e_hw *hw = I40E_PF_TO_HW(pf);
+
+ /*
+ * Disable interrupt to avoid message from VF. Furthermore, it will
+ * avoid race condition in VSI creation/destroy.
+ */
+ i40e_pf_disable_irq0(hw);
+
+ if ((pf->flags & I40E_FLAG_VMDQ) == 0) {
+ PMD_INIT_LOG(ERR, "FW doesn't support VMDQ");
+ return -ENOTSUP;
+ }
+
+ conf_vsis = conf->rx_adv_conf.vmdq_rx_conf.nb_queue_pools;
+ if (conf_vsis > pf->max_nb_vmdq_vsi) {
+ PMD_INIT_LOG(ERR, "VMDQ config: %u, max support:%u",
+ conf->rx_adv_conf.vmdq_rx_conf.nb_queue_pools,
+ pf->max_nb_vmdq_vsi);
+ return -ENOTSUP;
+ }
+
+ if (pf->vmdq != NULL) {
+ PMD_INIT_LOG(INFO, "VMDQ already configured");
+ return 0;
+ }
+
+ pf->vmdq = rte_zmalloc("vmdq_info_struct",
+ sizeof(*vmdq_info) * conf_vsis, 0);
+
+ if (pf->vmdq == NULL) {
+ PMD_INIT_LOG(ERR, "Failed to allocate memory");
+ return -ENOMEM;
+ }
+
+ vmdq_conf = &conf->rx_adv_conf.vmdq_rx_conf;
+
+ /* Create VMDQ VSI */
+ for (i = 0; i < conf_vsis; i++) {
+ vsi = i40e_vsi_setup(pf, I40E_VSI_VMDQ2, pf->main_vsi,
+ vmdq_conf->enable_loop_back);
+ if (vsi == NULL) {
+ PMD_INIT_LOG(ERR, "Failed to create VMDQ VSI");
+ err = -1;
+ goto err_vsi_setup;
+ }
+ vmdq_info = &pf->vmdq[i];
+ vmdq_info->pf = pf;
+ vmdq_info->vsi = vsi;
+ }
+ pf->nb_cfg_vmdq_vsi = conf_vsis;
+
+ /* Configure Vlan */
+ loop = sizeof(vmdq_conf->pool_map[0].pools) * CHAR_BIT;
+ for (i = 0; i < vmdq_conf->nb_pool_maps; i++) {
+ for (j = 0; j < loop && j < pf->nb_cfg_vmdq_vsi; j++) {
+ if (vmdq_conf->pool_map[i].pools & (1UL << j)) {
+ PMD_INIT_LOG(INFO, "Add vlan %u to vmdq pool %u",
+ vmdq_conf->pool_map[i].vlan_id, j);
+
+ err = i40e_vsi_add_vlan(pf->vmdq[j].vsi,
+ vmdq_conf->pool_map[i].vlan_id);
+ if (err) {
+ PMD_INIT_LOG(ERR, "Failed to add vlan");
+ err = -1;
+ goto err_vsi_setup;
+ }
+ }
+ }
+ }
+
+ i40e_pf_enable_irq0(hw);
+
+ return 0;
+
+err_vsi_setup:
+ for (i = 0; i < conf_vsis; i++)
+ if (pf->vmdq[i].vsi == NULL)
+ break;
+ else
+ i40e_vsi_release(pf->vmdq[i].vsi);
+
+ rte_free(pf->vmdq);
+ pf->vmdq = NULL;
+ i40e_pf_enable_irq0(hw);
+ return err;
+}
+
static void
i40e_stat_update_32(struct i40e_hw *hw,
uint32_t reg,
@@ -4086,7 +4265,7 @@ i40e_pf_config_rss(struct i40e_pf *pf)
struct i40e_hw *hw = I40E_PF_TO_HW(pf);
struct rte_eth_rss_conf rss_conf;
uint32_t i, lut = 0;
- uint16_t j, num = i40e_prev_power_of_2(pf->dev_data->nb_rx_queues);
+ uint16_t j, num = i40e_align_floor(pf->dev_data->nb_rx_queues);
for (i = 0, j = 0; i < hw->func_caps.rss_table_size; i++, j++) {
if (j == num)
diff --git a/lib/librte_pmd_i40e/i40e_ethdev.h b/lib/librte_pmd_i40e/i40e_ethdev.h
index 64deef2..b06de05 100644
--- a/lib/librte_pmd_i40e/i40e_ethdev.h
+++ b/lib/librte_pmd_i40e/i40e_ethdev.h
@@ -45,13 +45,15 @@
#define I40E_QUEUE_BASE_ADDR_UNIT 128
/* number of VSIs and queue default setting */
#define I40E_MAX_QP_NUM_PER_VF 16
-#define I40E_DEFAULT_QP_NUM_VMDQ 64
#define I40E_DEFAULT_QP_NUM_FDIR 64
#define I40E_UINT32_BIT_SIZE (CHAR_BIT * sizeof(uint32_t))
#define I40E_VFTA_SIZE (4096 / I40E_UINT32_BIT_SIZE)
/* Default TC traffic in case DCB is not enabled */
#define I40E_DEFAULT_TCMAP 0x1
+/* Always assign pool 0 to main VSI, VMDQ will start from 1 */
+#define I40E_VMDQ_POOL_BASE 1
+
/* i40e flags */
#define I40E_FLAG_RSS (1ULL << 0)
#define I40E_FLAG_DCB (1ULL << 1)
@@ -189,6 +191,14 @@ struct i40e_pf_vf {
};
/*
+ * Structure to store private data for VMDQ instance
+ */
+struct i40e_vmdq_info {
+ struct i40e_pf *pf;
+ struct i40e_vsi *vsi;
+};
+
+/*
* Structure to store private data specific for PF instance.
*/
struct i40e_pf {
@@ -216,6 +226,11 @@ struct i40e_pf {
uint16_t vmdq_nb_qps; /* The number of queue pairs of VMDq */
uint16_t vf_nb_qps; /* The number of queue pairs of VF */
uint16_t fdir_nb_qps; /* The number of queue pairs of Flow Director */
+
+ /* VMDQ related info */
+ uint16_t max_nb_vmdq_vsi; /* Max number of VMDQ VSIs supported */
+ uint16_t nb_cfg_vmdq_vsi; /* number of VMDQ VSIs configured */
+ struct i40e_vmdq_info *vmdq;
};
enum pending_msg {
--
1.7.7.6
* Re: [dpdk-dev] [PATCH v2 4/6] i40e: add VMDQ support
2014-10-16 10:07 ` [dpdk-dev] [PATCH v2 4/6] i40e: add VMDQ support Chen Jing D(Mark)
@ 2014-11-03 18:33 ` Thomas Monjalon
2014-11-04 5:22 ` Chen, Jing D
0 siblings, 1 reply; 45+ messages in thread
From: Thomas Monjalon @ 2014-11-03 18:33 UTC (permalink / raw)
To: Chen Jing D(Mark); +Cc: dev
Hi Jing,
2014-10-16 18:07, Chen Jing D:
> --- a/config/common_linuxapp
> +++ b/config/common_linuxapp
> @@ -208,6 +208,7 @@ CONFIG_RTE_LIBRTE_I40E_RX_ALLOW_BULK_ALLOC=y
> CONFIG_RTE_LIBRTE_I40E_ALLOW_UNSUPPORTED_SFP=n
> CONFIG_RTE_LIBRTE_I40E_16BYTE_RX_DESC=n
> CONFIG_RTE_LIBRTE_I40E_QUEUE_NUM_PER_VF=4
> +CONFIG_RTE_LIBRTE_I40E_QUEUE_NUM_PER_VM=4
It seems you missed Pablo's comment.
Should you add this option in BSD configuration?
--
Thomas
* Re: [dpdk-dev] [PATCH v2 4/6] i40e: add VMDQ support
2014-11-03 18:33 ` Thomas Monjalon
@ 2014-11-04 5:22 ` Chen, Jing D
0 siblings, 0 replies; 45+ messages in thread
From: Chen, Jing D @ 2014-11-04 5:22 UTC (permalink / raw)
To: Thomas Monjalon; +Cc: dev
Hi,
> -----Original Message-----
> From: Thomas Monjalon [mailto:thomas.monjalon@6wind.com]
> Sent: Tuesday, November 04, 2014 2:34 AM
> To: Chen, Jing D
> Cc: dev@dpdk.org; Ananyev, Konstantin
> Subject: Re: [PATCH v2 4/6] i40e: add VMDQ support
>
> Hi Jing,
>
> 2014-10-16 18:07, Chen Jing D:
> > --- a/config/common_linuxapp
> > +++ b/config/common_linuxapp
> > @@ -208,6 +208,7 @@
> CONFIG_RTE_LIBRTE_I40E_RX_ALLOW_BULK_ALLOC=y
> > CONFIG_RTE_LIBRTE_I40E_ALLOW_UNSUPPORTED_SFP=n
> > CONFIG_RTE_LIBRTE_I40E_16BYTE_RX_DESC=n
> > CONFIG_RTE_LIBRTE_I40E_QUEUE_NUM_PER_VF=4
> > +CONFIG_RTE_LIBRTE_I40E_QUEUE_NUM_PER_VM=4
>
> It seems you missed Pablo's comment.
> Should you add this option in BSD configuration?
Sorry, missed Pablo's email. Will add it for BSD.
>
> --
> Thomas
* [dpdk-dev] [PATCH v2 5/6] i40e: macaddr add/del enhancement
2014-10-16 10:07 ` [dpdk-dev] [PATCH v2 0/6] i40e " Chen Jing D(Mark)
` (3 preceding siblings ...)
2014-10-16 10:07 ` [dpdk-dev] [PATCH v2 4/6] i40e: add VMDQ support Chen Jing D(Mark)
@ 2014-10-16 10:07 ` Chen Jing D(Mark)
2014-10-16 10:07 ` [dpdk-dev] [PATCH v2 6/6] i40e: Add full VMDQ pools support Chen Jing D(Mark)
` (2 subsequent siblings)
7 siblings, 0 replies; 45+ messages in thread
From: Chen Jing D(Mark) @ 2014-10-16 10:07 UTC (permalink / raw)
To: dev
From: "Chen Jing D(Mark)" <jing.d.chen@intel.com>
Change i40e_macaddr_add and i40e_macaddr_remove functions to support
multiple macaddr add/delete. In the meanwhile, support macaddr ops
on different pools.
Signed-off-by: Chen Jing D(Mark) <jing.d.chen@intel.com>
---
lib/librte_pmd_i40e/i40e_ethdev.c | 89 +++++++++++++++++-------------------
1 files changed, 42 insertions(+), 47 deletions(-)
diff --git a/lib/librte_pmd_i40e/i40e_ethdev.c b/lib/librte_pmd_i40e/i40e_ethdev.c
index ad65e25..c0e9f48 100644
--- a/lib/librte_pmd_i40e/i40e_ethdev.c
+++ b/lib/librte_pmd_i40e/i40e_ethdev.c
@@ -1532,45 +1532,37 @@ i40e_priority_flow_ctrl_set(__rte_unused struct rte_eth_dev *dev,
static void
i40e_macaddr_add(struct rte_eth_dev *dev,
struct ether_addr *mac_addr,
- __attribute__((unused)) uint32_t index,
- __attribute__((unused)) uint32_t pool)
+ __rte_unused uint32_t index,
+ uint32_t pool)
{
struct i40e_pf *pf = I40E_DEV_PRIVATE_TO_PF(dev->data->dev_private);
- struct i40e_hw *hw = I40E_DEV_PRIVATE_TO_HW(dev->data->dev_private);
- struct i40e_vsi *vsi = pf->main_vsi;
- struct ether_addr old_mac;
+ struct i40e_vsi *vsi;
int ret;
- if (!is_valid_assigned_ether_addr(mac_addr)) {
- PMD_DRV_LOG(ERR, "Invalid ethernet address");
- return;
- }
-
- if (is_same_ether_addr(mac_addr, &(pf->dev_addr))) {
- PMD_DRV_LOG(INFO, "Ignore adding permanent mac address");
+ /* If VMDQ not enabled or configured, return */
+ if (pool != 0 && (!(pf->flags & I40E_FLAG_VMDQ) || !pf->nb_cfg_vmdq_vsi)) {
+ PMD_DRV_LOG(ERR, "VMDQ not %s, can't set mac to pool %u",
+ pf->flags & I40E_FLAG_VMDQ ? "configured" : "enabled",
+ pool);
return;
}
- /* Write mac address */
- ret = i40e_aq_mac_address_write(hw, I40E_AQC_WRITE_TYPE_LAA_ONLY,
- mac_addr->addr_bytes, NULL);
- if (ret != I40E_SUCCESS) {
- PMD_DRV_LOG(ERR, "Failed to write mac address");
+ if (pool > pf->nb_cfg_vmdq_vsi) {
+ PMD_DRV_LOG(ERR, "Pool number %u invalid. Max pool is %u",
+ pool, pf->nb_cfg_vmdq_vsi);
return;
}
- (void)rte_memcpy(&old_mac, hw->mac.addr, ETHER_ADDR_LEN);
- (void)rte_memcpy(hw->mac.addr, mac_addr->addr_bytes,
- ETHER_ADDR_LEN);
+ if (pool == 0)
+ vsi = pf->main_vsi;
+ else
+ vsi = pf->vmdq[pool - 1].vsi;
ret = i40e_vsi_add_mac(vsi, mac_addr);
if (ret != I40E_SUCCESS) {
PMD_DRV_LOG(ERR, "Failed to add MACVLAN filter");
return;
}
-
- ether_addr_copy(mac_addr, &pf->dev_addr);
- i40e_vsi_delete_mac(vsi, &old_mac);
}
/* Remove a MAC address, and update filters */
@@ -1578,36 +1570,39 @@ static void
i40e_macaddr_remove(struct rte_eth_dev *dev, uint32_t index)
{
struct i40e_pf *pf = I40E_DEV_PRIVATE_TO_PF(dev->data->dev_private);
- struct i40e_vsi *vsi = pf->main_vsi;
- struct rte_eth_dev_data *data = I40E_VSI_TO_DEV_DATA(vsi);
+ struct i40e_vsi *vsi;
+ struct rte_eth_dev_data *data = dev->data;
struct ether_addr *macaddr;
int ret;
- struct i40e_hw *hw =
- I40E_DEV_PRIVATE_TO_HW(dev->data->dev_private);
-
- if (index >= vsi->max_macaddrs)
- return;
+ uint32_t i;
+ uint64_t pool_sel;
macaddr = &(data->mac_addrs[index]);
- if (!is_valid_assigned_ether_addr(macaddr))
- return;
-
- ret = i40e_aq_mac_address_write(hw, I40E_AQC_WRITE_TYPE_LAA_ONLY,
- hw->mac.perm_addr, NULL);
- if (ret != I40E_SUCCESS) {
- PMD_DRV_LOG(ERR, "Failed to write mac address");
- return;
- }
-
- (void)rte_memcpy(hw->mac.addr, hw->mac.perm_addr, ETHER_ADDR_LEN);
- ret = i40e_vsi_delete_mac(vsi, macaddr);
- if (ret != I40E_SUCCESS)
- return;
+ pool_sel = dev->data->mac_pool_sel[index];
+
+ for (i = 0; i < sizeof(pool_sel) * CHAR_BIT; i++) {
+ if (pool_sel & (1ULL << i)) {
+ if (i == 0)
+ vsi = pf->main_vsi;
+ else {
+ /* No VMDQ pool enabled or configured */
+ if (!(pf->flags & I40E_FLAG_VMDQ) ||
+ (i > pf->nb_cfg_vmdq_vsi)) {
+ PMD_DRV_LOG(ERR, "No VMDQ pool enabled"
+ "/configured");
+ return;
+ }
+ vsi = pf->vmdq[i - 1].vsi;
+ }
+ ret = i40e_vsi_delete_mac(vsi, macaddr);
- /* Clear device address as it has been removed */
- if (is_same_ether_addr(&(pf->dev_addr), macaddr))
- memset(&pf->dev_addr, 0, sizeof(struct ether_addr));
+ if (ret) {
+ PMD_DRV_LOG(ERR, "Failed to remove MACVLAN filter");
+ return;
+ }
+ }
+ }
}
static int
--
1.7.7.6
^ permalink raw reply [flat|nested] 45+ messages in thread
* [dpdk-dev] [PATCH v2 6/6] i40e: Add full VMDQ pools support
2014-10-16 10:07 ` [dpdk-dev] [PATCH v2 0/6] i40e " Chen Jing D(Mark)
` (4 preceding siblings ...)
2014-10-16 10:07 ` [dpdk-dev] [PATCH v2 5/6] i40e: macaddr add/del enhancement Chen Jing D(Mark)
@ 2014-10-16 10:07 ` Chen Jing D(Mark)
2014-10-21 3:30 ` [dpdk-dev] [PATCH v2 0/6] i40e VMDQ support Cao, Min
2014-11-03 7:54 ` Chen, Jing D
7 siblings, 0 replies; 45+ messages in thread
From: Chen Jing D(Mark) @ 2014-10-16 10:07 UTC (permalink / raw)
To: dev
From: "Chen Jing D(Mark)" <jing.d.chen@intel.com>
1. Rename the i40e_vsi_* functions to i40e_dev_*, since the PF can
contain more than one VSI after VMDQ is enabled.
2. Change i40e_dev_rx/tx_queue_setup to be able to set up queues
that belong to VMDQ pools.
3. Add queue mapping, which converts between the queue index the
application uses and the real NIC queue index.
4. Change i40e_dev_start/stop to be able to switch VMDQ queues.
5. Change i40e_pf_config_rss to calculate the actual number of main
VSI queues after VMDQ pools are introduced.
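The queue mapping step described above can be sketched as a pair of standalone helpers. This is only an illustration of the index arithmetic: the struct, field names, and layout numbers below are hypothetical, not the driver's actual fields.

```c
#include <stdint.h>

/* Hypothetical queue layout: the main VSI owns the first queues, and
 * the VMDQ pools own equally sized contiguous blocks after them. */
struct vmdq_layout {
	uint16_t main_nb_qps;	/* queues owned by the main VSI */
	uint16_t nb_vmdq_vsi;	/* number of configured VMDQ pools */
	uint16_t vmdq_nb_qps;	/* queues per VMDQ pool */
};

/* Map an application queue index to the pool owning it:
 * 0 = main VSI, 1..n = VMDQ pool, -1 = out of range. */
static int
qindex_to_pool(const struct vmdq_layout *l, uint16_t queue_idx)
{
	if (queue_idx < l->main_nb_qps)
		return 0;
	queue_idx -= l->main_nb_qps;
	if (queue_idx >= l->nb_vmdq_vsi * l->vmdq_nb_qps)
		return -1;
	return 1 + queue_idx / l->vmdq_nb_qps;
}

/* Map an in-range application queue index to its offset inside the
 * owning VSI; the caller validates the index first. */
static uint16_t
qindex_to_offset(const struct vmdq_layout *l, uint16_t queue_idx)
{
	if (queue_idx < l->main_nb_qps)
		return queue_idx;
	return (uint16_t)((queue_idx - l->main_nb_qps) % l->vmdq_nb_qps);
}
```

With 64 main-VSI queues and 2 pools of 4 queues each, application index 69 lands in the second pool at offset 1, mirroring the reg_idx computation the patch performs in queue setup.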
Signed-off-by: Chen Jing D(Mark) <jing.d.chen@intel.com>
---
lib/librte_pmd_i40e/i40e_ethdev.c | 175 ++++++++++++++++++++++++++-----------
lib/librte_pmd_i40e/i40e_ethdev.h | 4 +-
lib/librte_pmd_i40e/i40e_rxtx.c | 125 ++++++++++++++++++++++-----
3 files changed, 227 insertions(+), 77 deletions(-)
diff --git a/lib/librte_pmd_i40e/i40e_ethdev.c b/lib/librte_pmd_i40e/i40e_ethdev.c
index c0e9f48..cf303d0 100644
--- a/lib/librte_pmd_i40e/i40e_ethdev.c
+++ b/lib/librte_pmd_i40e/i40e_ethdev.c
@@ -167,7 +167,7 @@ static int i40e_dev_rss_reta_query(struct rte_eth_dev *dev,
static int i40e_get_cap(struct i40e_hw *hw);
static int i40e_pf_parameter_init(struct rte_eth_dev *dev);
static int i40e_pf_setup(struct i40e_pf *pf);
-static int i40e_vsi_init(struct i40e_vsi *vsi);
+static int i40e_dev_rxtx_init(struct i40e_pf *pf);
static int i40e_vmdq_setup(struct rte_eth_dev *dev);
static void i40e_stat_update_32(struct i40e_hw *hw, uint32_t reg,
bool offset_loaded, uint64_t *offset, uint64_t *stat);
@@ -770,8 +770,8 @@ i40e_dev_start(struct rte_eth_dev *dev)
{
struct i40e_pf *pf = I40E_DEV_PRIVATE_TO_PF(dev->data->dev_private);
struct i40e_hw *hw = I40E_DEV_PRIVATE_TO_HW(dev->data->dev_private);
- struct i40e_vsi *vsi = pf->main_vsi;
- int ret;
+ struct i40e_vsi *main_vsi = pf->main_vsi;
+ int ret, i;
if ((dev->data->dev_conf.link_duplex != ETH_LINK_AUTONEG_DUPLEX) &&
(dev->data->dev_conf.link_duplex != ETH_LINK_FULL_DUPLEX)) {
@@ -782,26 +782,37 @@ i40e_dev_start(struct rte_eth_dev *dev)
}
/* Initialize VSI */
- ret = i40e_vsi_init(vsi);
+ ret = i40e_dev_rxtx_init(pf);
if (ret != I40E_SUCCESS) {
- PMD_DRV_LOG(ERR, "Failed to init VSI");
+ PMD_DRV_LOG(ERR, "Failed to init rx/tx queues");
goto err_up;
}
/* Map queues with MSIX interrupt */
- i40e_vsi_queues_bind_intr(vsi);
- i40e_vsi_enable_queues_intr(vsi);
+ i40e_vsi_queues_bind_intr(main_vsi);
+ i40e_vsi_enable_queues_intr(main_vsi);
+
+ /* Map VMDQ VSI queues with MSIX interrupt */
+ for (i = 0; i < pf->nb_cfg_vmdq_vsi; i++) {
+ i40e_vsi_queues_bind_intr(pf->vmdq[i].vsi);
+ i40e_vsi_enable_queues_intr(pf->vmdq[i].vsi);
+ }
/* Enable all queues which have been configured */
- ret = i40e_vsi_switch_queues(vsi, TRUE);
+ ret = i40e_dev_switch_queues(pf, TRUE);
if (ret != I40E_SUCCESS) {
PMD_DRV_LOG(ERR, "Failed to enable VSI");
goto err_up;
}
/* Enable receiving broadcast packets */
- if ((vsi->type == I40E_VSI_MAIN) || (vsi->type == I40E_VSI_VMDQ2)) {
- ret = i40e_aq_set_vsi_broadcast(hw, vsi->seid, true, NULL);
+ ret = i40e_aq_set_vsi_broadcast(hw, main_vsi->seid, true, NULL);
+ if (ret != I40E_SUCCESS)
+ PMD_DRV_LOG(INFO, "fail to set vsi broadcast");
+
+ for (i = 0; i < pf->nb_cfg_vmdq_vsi; i++) {
+ ret = i40e_aq_set_vsi_broadcast(hw, pf->vmdq[i].vsi->seid,
+ true, NULL);
if (ret != I40E_SUCCESS)
PMD_DRV_LOG(INFO, "fail to set vsi broadcast");
}
@@ -816,7 +827,8 @@ i40e_dev_start(struct rte_eth_dev *dev)
return I40E_SUCCESS;
err_up:
- i40e_vsi_switch_queues(vsi, FALSE);
+ i40e_dev_switch_queues(pf, FALSE);
+ i40e_dev_clear_queues(dev);
return ret;
}
@@ -825,17 +837,26 @@ static void
i40e_dev_stop(struct rte_eth_dev *dev)
{
struct i40e_pf *pf = I40E_DEV_PRIVATE_TO_PF(dev->data->dev_private);
- struct i40e_vsi *vsi = pf->main_vsi;
+ struct i40e_vsi *main_vsi = pf->main_vsi;
+ int i;
/* Disable all queues */
- i40e_vsi_switch_queues(vsi, FALSE);
+ i40e_dev_switch_queues(pf, FALSE);
+
+ /* un-map queues with interrupt registers */
+ i40e_vsi_disable_queues_intr(main_vsi);
+ i40e_vsi_queues_unbind_intr(main_vsi);
+
+ for (i = 0; i < pf->nb_cfg_vmdq_vsi; i++) {
+ i40e_vsi_disable_queues_intr(pf->vmdq[i].vsi);
+ i40e_vsi_queues_unbind_intr(pf->vmdq[i].vsi);
+ }
+
+ /* Clear all queues and release memory */
+ i40e_dev_clear_queues(dev);
/* Set link down */
i40e_dev_set_link_down(dev);
-
- /* un-map queues with interrupt registers */
- i40e_vsi_disable_queues_intr(vsi);
- i40e_vsi_queues_unbind_intr(vsi);
}
static void
@@ -3083,11 +3104,11 @@ i40e_switch_tx_queue(struct i40e_hw *hw, uint16_t q_idx, bool on)
/* Switch on or off the tx queues */
static int
-i40e_vsi_switch_tx_queues(struct i40e_vsi *vsi, bool on)
+i40e_dev_switch_tx_queues(struct i40e_pf *pf, bool on)
{
- struct rte_eth_dev_data *dev_data = I40E_VSI_TO_DEV_DATA(vsi);
+ struct rte_eth_dev_data *dev_data = pf->dev_data;
struct i40e_tx_queue *txq;
- struct rte_eth_dev *dev = I40E_VSI_TO_ETH_DEV(vsi);
+ struct rte_eth_dev *dev = pf->adapter->eth_dev;
uint16_t i;
int ret;
@@ -3095,8 +3116,9 @@ i40e_vsi_switch_tx_queues(struct i40e_vsi *vsi, bool on)
txq = dev_data->tx_queues[i];
/* Don't operate the queue if not configured or
* if starting only per queue */
- if (!txq->q_set || (on && txq->start_tx_per_q))
+ if (!txq || !txq->q_set || (on && txq->start_tx_per_q))
continue;
+
if (on)
ret = i40e_dev_tx_queue_start(dev, i);
else
@@ -3161,11 +3183,11 @@ i40e_switch_rx_queue(struct i40e_hw *hw, uint16_t q_idx, bool on)
}
/* Switch on or off the rx queues */
static int
-i40e_vsi_switch_rx_queues(struct i40e_vsi *vsi, bool on)
+i40e_dev_switch_rx_queues(struct i40e_pf *pf, bool on)
{
- struct rte_eth_dev_data *dev_data = I40E_VSI_TO_DEV_DATA(vsi);
+ struct rte_eth_dev_data *dev_data = pf->dev_data;
struct i40e_rx_queue *rxq;
- struct rte_eth_dev *dev = I40E_VSI_TO_ETH_DEV(vsi);
+ struct rte_eth_dev *dev = pf->adapter->eth_dev;
uint16_t i;
int ret;
@@ -3173,7 +3195,7 @@ i40e_vsi_switch_rx_queues(struct i40e_vsi *vsi, bool on)
rxq = dev_data->rx_queues[i];
/* Don't operate the queue if not configured or
* if starting only per queue */
- if (!rxq->q_set || (on && rxq->start_rx_per_q))
+ if (!rxq || !rxq->q_set || (on && rxq->start_rx_per_q))
continue;
if (on)
ret = i40e_dev_rx_queue_start(dev, i);
@@ -3188,26 +3210,26 @@ i40e_vsi_switch_rx_queues(struct i40e_vsi *vsi, bool on)
/* Switch on or off all the rx/tx queues */
int
-i40e_vsi_switch_queues(struct i40e_vsi *vsi, bool on)
+i40e_dev_switch_queues(struct i40e_pf *pf, bool on)
{
int ret;
if (on) {
/* enable rx queues before enabling tx queues */
- ret = i40e_vsi_switch_rx_queues(vsi, on);
+ ret = i40e_dev_switch_rx_queues(pf, on);
if (ret) {
PMD_DRV_LOG(ERR, "Failed to switch rx queues");
return ret;
}
- ret = i40e_vsi_switch_tx_queues(vsi, on);
+ ret = i40e_dev_switch_tx_queues(pf, on);
} else {
/* Stop tx queues before stopping rx queues */
- ret = i40e_vsi_switch_tx_queues(vsi, on);
+ ret = i40e_dev_switch_tx_queues(pf, on);
if (ret) {
PMD_DRV_LOG(ERR, "Failed to switch tx queues");
return ret;
}
- ret = i40e_vsi_switch_rx_queues(vsi, on);
+ ret = i40e_dev_switch_rx_queues(pf, on);
}
return ret;
@@ -3215,15 +3237,18 @@ i40e_vsi_switch_queues(struct i40e_vsi *vsi, bool on)
/* Initialize VSI for TX */
static int
-i40e_vsi_tx_init(struct i40e_vsi *vsi)
+i40e_dev_tx_init(struct i40e_pf *pf)
{
- struct i40e_pf *pf = I40E_VSI_TO_PF(vsi);
struct rte_eth_dev_data *data = pf->dev_data;
uint16_t i;
uint32_t ret = I40E_SUCCESS;
+ struct i40e_tx_queue *txq;
for (i = 0; i < data->nb_tx_queues; i++) {
- ret = i40e_tx_queue_init(data->tx_queues[i]);
+ txq = data->tx_queues[i];
+ if (!txq || !txq->q_set)
+ continue;
+ ret = i40e_tx_queue_init(txq);
if (ret != I40E_SUCCESS)
break;
}
@@ -3233,16 +3258,20 @@ i40e_vsi_tx_init(struct i40e_vsi *vsi)
/* Initialize VSI for RX */
static int
-i40e_vsi_rx_init(struct i40e_vsi *vsi)
+i40e_dev_rx_init(struct i40e_pf *pf)
{
- struct i40e_pf *pf = I40E_VSI_TO_PF(vsi);
struct rte_eth_dev_data *data = pf->dev_data;
int ret = I40E_SUCCESS;
uint16_t i;
+ struct i40e_rx_queue *rxq;
i40e_pf_config_mq_rx(pf);
for (i = 0; i < data->nb_rx_queues; i++) {
- ret = i40e_rx_queue_init(data->rx_queues[i]);
+ rxq = data->rx_queues[i];
+ if (!rxq || !rxq->q_set)
+ continue;
+
+ ret = i40e_rx_queue_init(rxq);
if (ret != I40E_SUCCESS) {
PMD_DRV_LOG(ERR, "Failed to do RX queue "
"initialization");
@@ -3253,20 +3282,19 @@ i40e_vsi_rx_init(struct i40e_vsi *vsi)
return ret;
}
-/* Initialize VSI */
static int
-i40e_vsi_init(struct i40e_vsi *vsi)
+i40e_dev_rxtx_init(struct i40e_pf *pf)
{
int err;
- err = i40e_vsi_tx_init(vsi);
+ err = i40e_dev_tx_init(pf);
if (err) {
- PMD_DRV_LOG(ERR, "Failed to do vsi TX initialization");
+ PMD_DRV_LOG(ERR, "Failed to do TX initialization");
return err;
}
- err = i40e_vsi_rx_init(vsi);
+ err = i40e_dev_rx_init(pf);
if (err) {
- PMD_DRV_LOG(ERR, "Failed to do vsi RX initialization");
+ PMD_DRV_LOG(ERR, "Failed to do RX initialization");
return err;
}
@@ -4253,6 +4281,26 @@ i40e_dev_rss_hash_conf_get(struct rte_eth_dev *dev,
return 0;
}
+/* Calculate the maximum number of contiguous PF queues that are configured */
+static int
+i40e_pf_calc_configured_queues_num(struct i40e_pf *pf)
+{
+ struct rte_eth_dev_data *data = pf->dev_data;
+ int i, num;
+ struct i40e_rx_queue *rxq;
+
+ num = 0;
+ for (i = 0; i < pf->lan_nb_qps; i++) {
+ rxq = data->rx_queues[i];
+ if (rxq && rxq->q_set)
+ num++;
+ else
+ break;
+ }
+
+ return num;
+}
+
/* Configure RSS */
static int
i40e_pf_config_rss(struct i40e_pf *pf)
@@ -4260,7 +4308,25 @@ i40e_pf_config_rss(struct i40e_pf *pf)
struct i40e_hw *hw = I40E_PF_TO_HW(pf);
struct rte_eth_rss_conf rss_conf;
uint32_t i, lut = 0;
- uint16_t j, num = i40e_align_floor(pf->dev_data->nb_rx_queues);
+ uint16_t j, num;
+
+ /*
+ * If both VMDQ and RSS are enabled, not all of the PF queues are
+ * configured. It's necessary to calculate the actual number of PF
+ * queues that are configured.
+ */
+ if (pf->dev_data->dev_conf.rxmode.mq_mode & ETH_MQ_RX_VMDQ_FLAG) {
+ num = i40e_pf_calc_configured_queues_num(pf);
+ num = i40e_align_floor(num);
+ } else
+ num = i40e_align_floor(pf->dev_data->nb_rx_queues);
+
+ PMD_INIT_LOG(INFO, "Max of contiguous %u PF queues are configured",
+ num);
+
+ if (num == 0) {
+ PMD_INIT_LOG(ERR, "No PF queues are configured to enable RSS");
+ return -ENOTSUP;
+ }
for (i = 0, j = 0; i < hw->func_caps.rss_table_size; i++, j++) {
if (j == num)
@@ -4292,16 +4358,19 @@ i40e_pf_config_rss(struct i40e_pf *pf)
static int
i40e_pf_config_mq_rx(struct i40e_pf *pf)
{
- if (!pf->dev_data->sriov.active) {
- switch (pf->dev_data->dev_conf.rxmode.mq_mode) {
- case ETH_MQ_RX_RSS:
- i40e_pf_config_rss(pf);
- break;
- default:
- i40e_pf_disable_rss(pf);
- break;
- }
+ int ret = 0;
+ enum rte_eth_rx_mq_mode mq_mode = pf->dev_data->dev_conf.rxmode.mq_mode;
+
+ if (mq_mode & ETH_MQ_RX_DCB_FLAG) {
+ PMD_INIT_LOG(ERR, "i40e doesn't support DCB yet");
+ return -ENOTSUP;
}
- return 0;
+ /* RSS setup */
+ if (mq_mode & ETH_MQ_RX_RSS_FLAG)
+ ret = i40e_pf_config_rss(pf);
+ else
+ i40e_pf_disable_rss(pf);
+
+ return ret;
}
diff --git a/lib/librte_pmd_i40e/i40e_ethdev.h b/lib/librte_pmd_i40e/i40e_ethdev.h
index b06de05..9ad5611 100644
--- a/lib/librte_pmd_i40e/i40e_ethdev.h
+++ b/lib/librte_pmd_i40e/i40e_ethdev.h
@@ -305,7 +305,7 @@ struct i40e_adapter {
};
};
-int i40e_vsi_switch_queues(struct i40e_vsi *vsi, bool on);
+int i40e_dev_switch_queues(struct i40e_pf *pf, bool on);
int i40e_vsi_release(struct i40e_vsi *vsi);
struct i40e_vsi *i40e_vsi_setup(struct i40e_pf *pf,
enum i40e_vsi_type type,
@@ -357,7 +357,7 @@ i40e_get_vsi_from_adapter(struct i40e_adapter *adapter)
return pf->main_vsi;
}
}
-#define I40E_DEV_PRIVATE_TO_VSI(adapter) \
+#define I40E_DEV_PRIVATE_TO_MAIN_VSI(adapter) \
i40e_get_vsi_from_adapter((struct i40e_adapter *)adapter)
/* I40E_VSI_TO */
diff --git a/lib/librte_pmd_i40e/i40e_rxtx.c b/lib/librte_pmd_i40e/i40e_rxtx.c
index 099699c..c6facea 100644
--- a/lib/librte_pmd_i40e/i40e_rxtx.c
+++ b/lib/librte_pmd_i40e/i40e_rxtx.c
@@ -1443,14 +1443,58 @@ i40e_xmit_pkts_simple(void *tx_queue,
return nb_tx;
}
+/*
+ * Find the VSI the queue belongs to. 'queue_idx' is the queue index
+ * the application uses, which assumes the queues are sequential. From
+ * the driver's perspective they are not. For example, q0 belongs to
+ * the FDIR VSI, q1-q64 to the MAIN VSI, q65-q96 to SRIOV VSIs and
+ * q97-q128 to VMDQ VSIs. An application running on the host can use
+ * q1-q64 and q97-q128, 96 queues in total, accessed via queue_idx 0
+ * to 95, while the real queue indexes differ. This function maps a
+ * queue_idx to the VSI the queue belongs to.
+ */
+static struct i40e_vsi*
+i40e_pf_get_vsi_by_qindex(struct i40e_pf *pf, uint16_t queue_idx)
+{
+ /* the queue in MAIN VSI range */
+ if (queue_idx < pf->main_vsi->nb_qps)
+ return pf->main_vsi;
+
+ queue_idx -= pf->main_vsi->nb_qps;
+
+ /* queue_idx is greater than VMDQ VSIs range */
+ if (queue_idx > pf->nb_cfg_vmdq_vsi * pf->vmdq_nb_qps - 1) {
+ PMD_INIT_LOG(ERR, "queue_idx out of range. VMDQ configured?");
+ return NULL;
+ }
+
+ return pf->vmdq[queue_idx / pf->vmdq_nb_qps].vsi;
+}
+
+static uint16_t
+i40e_get_queue_offset_by_qindex(struct i40e_pf *pf, uint16_t queue_idx)
+{
+ /* the queue in MAIN VSI range */
+ if (queue_idx < pf->main_vsi->nb_qps)
+ return queue_idx;
+
+ /* It's VMDQ queues */
+ queue_idx -= pf->main_vsi->nb_qps;
+
+ if (pf->nb_cfg_vmdq_vsi)
+ return queue_idx % pf->vmdq_nb_qps;
+ else {
+ PMD_INIT_LOG(ERR, "Fail to get queue offset");
+ return (uint16_t)(-1);
+ }
+}
+
int
i40e_dev_rx_queue_start(struct rte_eth_dev *dev, uint16_t rx_queue_id)
{
- struct i40e_vsi *vsi = I40E_DEV_PRIVATE_TO_VSI(dev->data->dev_private);
struct i40e_rx_queue *rxq;
int err = -1;
- struct i40e_hw *hw = I40E_VSI_TO_HW(vsi);
- uint16_t q_base = vsi->base_queue;
+ struct i40e_hw *hw = I40E_DEV_PRIVATE_TO_HW(dev->data->dev_private);
PMD_INIT_FUNC_TRACE();
@@ -1468,7 +1512,7 @@ i40e_dev_rx_queue_start(struct rte_eth_dev *dev, uint16_t rx_queue_id)
/* Init the RX tail register. */
I40E_PCI_REG_WRITE(rxq->qrx_tail, rxq->nb_rx_desc - 1);
- err = i40e_switch_rx_queue(hw, rx_queue_id + q_base, TRUE);
+ err = i40e_switch_rx_queue(hw, rxq->reg_idx, TRUE);
if (err) {
PMD_DRV_LOG(ERR, "Failed to switch RX queue %u on",
@@ -1485,16 +1529,18 @@ i40e_dev_rx_queue_start(struct rte_eth_dev *dev, uint16_t rx_queue_id)
int
i40e_dev_rx_queue_stop(struct rte_eth_dev *dev, uint16_t rx_queue_id)
{
- struct i40e_vsi *vsi = I40E_DEV_PRIVATE_TO_VSI(dev->data->dev_private);
struct i40e_rx_queue *rxq;
int err;
- struct i40e_hw *hw = I40E_VSI_TO_HW(vsi);
- uint16_t q_base = vsi->base_queue;
+ struct i40e_hw *hw = I40E_DEV_PRIVATE_TO_HW(dev->data->dev_private);
if (rx_queue_id < dev->data->nb_rx_queues) {
rxq = dev->data->rx_queues[rx_queue_id];
- err = i40e_switch_rx_queue(hw, rx_queue_id + q_base, FALSE);
+ /*
+ * rx_queue_id is the queue id the application refers to, while
+ * rxq->reg_idx is the real queue index.
+ */
+ err = i40e_switch_rx_queue(hw, rxq->reg_idx, FALSE);
if (err) {
PMD_DRV_LOG(ERR, "Failed to switch RX queue %u off",
@@ -1511,15 +1557,20 @@ i40e_dev_rx_queue_stop(struct rte_eth_dev *dev, uint16_t rx_queue_id)
int
i40e_dev_tx_queue_start(struct rte_eth_dev *dev, uint16_t tx_queue_id)
{
- struct i40e_vsi *vsi = I40E_DEV_PRIVATE_TO_VSI(dev->data->dev_private);
int err = -1;
- struct i40e_hw *hw = I40E_VSI_TO_HW(vsi);
- uint16_t q_base = vsi->base_queue;
+ struct i40e_tx_queue *txq;
+ struct i40e_hw *hw = I40E_DEV_PRIVATE_TO_HW(dev->data->dev_private);
PMD_INIT_FUNC_TRACE();
if (tx_queue_id < dev->data->nb_tx_queues) {
- err = i40e_switch_tx_queue(hw, tx_queue_id + q_base, TRUE);
+ txq = dev->data->tx_queues[tx_queue_id];
+
+ /*
+ * tx_queue_id is the queue id the application refers to, while
+ * txq->reg_idx is the real queue index.
+ */
+ err = i40e_switch_tx_queue(hw, txq->reg_idx, TRUE);
if (err)
PMD_DRV_LOG(ERR, "Failed to switch TX queue %u on",
tx_queue_id);
@@ -1531,16 +1582,18 @@ i40e_dev_tx_queue_start(struct rte_eth_dev *dev, uint16_t tx_queue_id)
int
i40e_dev_tx_queue_stop(struct rte_eth_dev *dev, uint16_t tx_queue_id)
{
- struct i40e_vsi *vsi = I40E_DEV_PRIVATE_TO_VSI(dev->data->dev_private);
struct i40e_tx_queue *txq;
int err;
- struct i40e_hw *hw = I40E_VSI_TO_HW(vsi);
- uint16_t q_base = vsi->base_queue;
+ struct i40e_hw *hw = I40E_DEV_PRIVATE_TO_HW(dev->data->dev_private);
if (tx_queue_id < dev->data->nb_tx_queues) {
txq = dev->data->tx_queues[tx_queue_id];
- err = i40e_switch_tx_queue(hw, tx_queue_id + q_base, FALSE);
+ /*
+ * tx_queue_id is the queue id the application refers to, while
+ * txq->reg_idx is the real queue index.
+ */
+ err = i40e_switch_tx_queue(hw, txq->reg_idx, FALSE);
if (err) {
PMD_DRV_LOG(ERR, "Failed to switch TX queue %u off",
@@ -1563,14 +1616,23 @@ i40e_dev_rx_queue_setup(struct rte_eth_dev *dev,
const struct rte_eth_rxconf *rx_conf,
struct rte_mempool *mp)
{
- struct i40e_vsi *vsi = I40E_DEV_PRIVATE_TO_VSI(dev->data->dev_private);
+ struct i40e_vsi *vsi;
+ struct i40e_hw *hw = I40E_DEV_PRIVATE_TO_HW(dev->data->dev_private);
+ struct i40e_pf *pf = I40E_DEV_PRIVATE_TO_PF(dev->data->dev_private);
struct i40e_rx_queue *rxq;
const struct rte_memzone *rz;
uint32_t ring_size;
uint16_t len;
int use_def_burst_func = 1;
- if (!vsi || queue_idx >= vsi->nb_qps) {
+ if (hw->mac.type == I40E_MAC_VF) {
+ struct i40e_vf *vf =
+ I40EVF_DEV_PRIVATE_TO_VF(dev->data->dev_private);
+ vsi = &vf->vsi;
+ } else
+ vsi = i40e_pf_get_vsi_by_qindex(pf, queue_idx);
+
+ if (vsi == NULL) {
PMD_DRV_LOG(ERR, "VSI not available or queue "
"index exceeds the maximum");
return I40E_ERR_PARAM;
@@ -1603,7 +1665,12 @@ i40e_dev_rx_queue_setup(struct rte_eth_dev *dev,
rxq->nb_rx_desc = nb_desc;
rxq->rx_free_thresh = rx_conf->rx_free_thresh;
rxq->queue_id = queue_idx;
- rxq->reg_idx = vsi->base_queue + queue_idx;
+ if (hw->mac.type == I40E_MAC_VF)
+ rxq->reg_idx = queue_idx;
+ else /* PF device */
+ rxq->reg_idx = vsi->base_queue +
+ i40e_get_queue_offset_by_qindex(pf, queue_idx);
+
rxq->port_id = dev->data->port_id;
rxq->crc_len = (uint8_t) ((dev->data->dev_conf.rxmode.hw_strip_crc) ?
0 : ETHER_CRC_LEN);
@@ -1761,13 +1828,22 @@ i40e_dev_tx_queue_setup(struct rte_eth_dev *dev,
unsigned int socket_id,
const struct rte_eth_txconf *tx_conf)
{
- struct i40e_vsi *vsi = I40E_DEV_PRIVATE_TO_VSI(dev->data->dev_private);
+ struct i40e_vsi *vsi;
+ struct i40e_hw *hw = I40E_DEV_PRIVATE_TO_HW(dev->data->dev_private);
+ struct i40e_pf *pf = I40E_DEV_PRIVATE_TO_PF(dev->data->dev_private);
struct i40e_tx_queue *txq;
const struct rte_memzone *tz;
uint32_t ring_size;
uint16_t tx_rs_thresh, tx_free_thresh;
- if (!vsi || queue_idx >= vsi->nb_qps) {
+ if (hw->mac.type == I40E_MAC_VF) {
+ struct i40e_vf *vf =
+ I40EVF_DEV_PRIVATE_TO_VF(dev->data->dev_private);
+ vsi = &vf->vsi;
+ } else
+ vsi = i40e_pf_get_vsi_by_qindex(pf, queue_idx);
+
+ if (vsi == NULL) {
PMD_DRV_LOG(ERR, "VSI is NULL, or queue index (%u) "
"exceeds the maximum", queue_idx);
return I40E_ERR_PARAM;
@@ -1891,7 +1967,12 @@ i40e_dev_tx_queue_setup(struct rte_eth_dev *dev,
txq->hthresh = tx_conf->tx_thresh.hthresh;
txq->wthresh = tx_conf->tx_thresh.wthresh;
txq->queue_id = queue_idx;
- txq->reg_idx = vsi->base_queue + queue_idx;
+ if (hw->mac.type == I40E_MAC_VF)
+ txq->reg_idx = queue_idx;
+ else /* PF device */
+ txq->reg_idx = vsi->base_queue +
+ i40e_get_queue_offset_by_qindex(pf, queue_idx);
+
txq->port_id = dev->data->port_id;
txq->txq_flags = tx_conf->txq_flags;
txq->vsi = vsi;
--
1.7.7.6
^ permalink raw reply [flat|nested] 45+ messages in thread
* Re: [dpdk-dev] [PATCH v2 0/6] i40e VMDQ support
2014-10-16 10:07 ` [dpdk-dev] [PATCH v2 0/6] i40e " Chen Jing D(Mark)
` (5 preceding siblings ...)
2014-10-16 10:07 ` [dpdk-dev] [PATCH v2 6/6] i40e: Add full VMDQ pools support Chen Jing D(Mark)
@ 2014-10-21 3:30 ` Cao, Min
2014-11-03 7:54 ` Chen, Jing D
7 siblings, 0 replies; 45+ messages in thread
From: Cao, Min @ 2014-10-21 3:30 UTC (permalink / raw)
To: Chen, Jing D, dev
Tested-by: Min Cao <min.cao@intel.com>
This patch has been verified on Fortville and is ready to be integrated into dpdk.org.
-----Original Message-----
From: dev [mailto:dev-bounces@dpdk.org] On Behalf Of Chen Jing D(Mark)
Sent: Thursday, October 16, 2014 6:07 PM
To: dev@dpdk.org
Subject: [dpdk-dev] [PATCH v2 0/6] i40e VMDQ support
From: "Chen Jing D(Mark)" <jing.d.chen@intel.com>
v2:
- Fix a few typos.
- Add comments for RX mq mode flags.
- Remove '\n' from some log messages.
- Remove 'Acked-by' in commit log.
v1:
Define extra VMDQ arguments to expand the VMDQ configuration. This also
includes changes in the igb and ixgbe PMD drivers. Meanwhile, fix 2
defects in the rte_ether library.
Add full VMDQ support in the i40e PMD driver: rename some functions and
set up the VMDQ VSIs after they are enabled by the application. It also
improves macaddr add/delete to support setting multiple MAC addresses
for a single pool or for multiple pools.
Finally, change i40e rx/tx_queue_setup and dev_start/stop functions to
configure/switch queues belonging to VMDQ pools.
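The macaddr add/del enhancement described above hinges on per-address pool bookkeeping: a bitmap per MAC slot records every pool the address was added to, so restore and remove operate on the right pools. A minimal standalone sketch of that bookkeeping (the names and array size are illustrative, not the rte_ethdev API):

```c
#include <stdint.h>

#define MAC_SLOTS 4	/* illustrative number of MAC address slots */

/* Bit n of mac_pool_sel[i] set => MAC slot i was added to pool n. */
static uint64_t mac_pool_sel[MAC_SLOTS];

/* Record that MAC slot 'index' was added to 'pool'. */
static void
mac_add(uint32_t index, uint32_t pool)
{
	mac_pool_sel[index] |= 1ULL << pool;
}

/* Removing a MAC clears its whole pool bitmap, as the fix in
 * rte_eth_dev_mac_addr_remove does. */
static void
mac_remove(uint32_t index)
{
	mac_pool_sel[index] = 0;
}

/* Test membership, as the config-restore fix does before calling
 * the mac_addr_add dev_op for a given pool. */
static int
mac_in_pool(uint32_t index, uint32_t pool)
{
	return (mac_pool_sel[index] & (1ULL << pool)) != 0;
}
```

The same 64-bit-wide bitmap is what the i40e remove path iterates over to find every VSI a MAC must be deleted from.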
Chen Jing D(Mark) (6):
ether: enhancement for VMDQ support
igb: change for VMDQ arguments expansion
ixgbe: change for VMDQ arguments expansion
i40e: add VMDQ support
i40e: macaddr add/del enhancement
i40e: Add full VMDQ pools support
config/common_linuxapp | 1 +
lib/librte_ether/rte_ethdev.c | 12 +-
lib/librte_ether/rte_ethdev.h | 43 +++-
lib/librte_pmd_e1000/igb_ethdev.c | 3 +
lib/librte_pmd_i40e/i40e_ethdev.c | 499 ++++++++++++++++++++++++++---------
lib/librte_pmd_i40e/i40e_ethdev.h | 21 ++-
lib/librte_pmd_i40e/i40e_rxtx.c | 125 +++++++--
lib/librte_pmd_ixgbe/ixgbe_ethdev.c | 1 +
8 files changed, 536 insertions(+), 169 deletions(-)
--
1.7.7.6
^ permalink raw reply [flat|nested] 45+ messages in thread
* Re: [dpdk-dev] [PATCH v2 0/6] i40e VMDQ support
2014-10-16 10:07 ` [dpdk-dev] [PATCH v2 0/6] i40e " Chen Jing D(Mark)
` (6 preceding siblings ...)
2014-10-21 3:30 ` [dpdk-dev] [PATCH v2 0/6] i40e VMDQ support Cao, Min
@ 2014-11-03 7:54 ` Chen, Jing D
7 siblings, 0 replies; 45+ messages in thread
From: Chen, Jing D @ 2014-11-03 7:54 UTC (permalink / raw)
To: dev
Hi,
Any comments on this patch?
> -----Original Message-----
> From: Chen, Jing D
> Sent: Thursday, October 16, 2014 6:07 PM
> To: dev@dpdk.org
> Cc: Ananyev, Konstantin; thomas.monjalon@6wind.com; Chen, Jing D
> Subject: [PATCH v2 0/6] i40e VMDQ support
>
> From: "Chen Jing D(Mark)" <jing.d.chen@intel.com>
>
> v2:
> - Fix a few typos.
> - Add comments for RX mq mode flags.
> - Remove '\n' from some log messages.
> - Remove 'Acked-by' in commit log.
>
> v1:
> Define extra VMDQ arguments to expand VMDQ configuration. This also
> includes change in igb and ixgbe PMD driver. In the meanwhile, fix 2
> defects in rte_ether library.
>
> Add full VMDQ support in i40e PMD driver. renamed some functions, setup
> VMDQ VSI after it's enabled in application. It also make some improvement
> on macaddr add/delete to support setting multiple macaddr for single or
> multiple pools.
>
> Finally, change i40e rx/tx_queue_setup and dev_start/stop functions to
> configure/switch queues belonging to VMDQ pools.
>
>
> Chen Jing D(Mark) (6):
> ether: enhancement for VMDQ support
> igb: change for VMDQ arguments expansion
> ixgbe: change for VMDQ arguments expansion
> i40e: add VMDQ support
> i40e: macaddr add/del enhancement
> i40e: Add full VMDQ pools support
>
> config/common_linuxapp | 1 +
> lib/librte_ether/rte_ethdev.c | 12 +-
> lib/librte_ether/rte_ethdev.h | 43 +++-
> lib/librte_pmd_e1000/igb_ethdev.c | 3 +
> lib/librte_pmd_i40e/i40e_ethdev.c | 499
> ++++++++++++++++++++++++++---------
> lib/librte_pmd_i40e/i40e_ethdev.h | 21 ++-
> lib/librte_pmd_i40e/i40e_rxtx.c | 125 +++++++--
> lib/librte_pmd_ixgbe/ixgbe_ethdev.c | 1 +
> 8 files changed, 536 insertions(+), 169 deletions(-)
>
> --
> 1.7.7.6
^ permalink raw reply [flat|nested] 45+ messages in thread
* [dpdk-dev] [PATCH v3 0/6] i40e VMDQ support
2014-09-23 13:14 ` [dpdk-dev] [PATCH 1/6] ether: enhancement for " Chen Jing D(Mark)
2014-10-14 14:09 ` Thomas Monjalon
2014-10-16 10:07 ` [dpdk-dev] [PATCH v2 0/6] i40e " Chen Jing D(Mark)
@ 2014-11-04 10:01 ` Chen Jing D(Mark)
2014-11-04 10:01 ` [dpdk-dev] [PATCH v3 1/6] ether: enhancement for " Chen Jing D(Mark)
` (7 more replies)
2 siblings, 8 replies; 45+ messages in thread
From: Chen Jing D(Mark) @ 2014-11-04 10:01 UTC (permalink / raw)
To: dev
From: "Chen Jing D(Mark)" <jing.d.chen@intel.com>
v3:
- Fix comments style.
- Simplify words in comments.
- Add variable definition for the BSD config file.
- Code rebase to latest DPDK repo.
v2:
- Fix a few typos.
- Add comments for RX mq mode flags.
- Remove '\n' from some log messages.
- Remove 'Acked-by' in commit log.
v1:
Define extra VMDQ arguments to expand the VMDQ configuration. This also
includes changes in the igb and ixgbe PMD drivers. Meanwhile, fix 2
defects in the rte_ether library.
Add full VMDQ support in the i40e PMD driver: rename some functions and
set up the VMDQ VSIs after they are enabled by the application. It also
improves macaddr add/delete to support setting multiple MAC addresses
for a single pool or for multiple pools.
Finally, change i40e rx/tx_queue_setup and dev_start/stop functions to
configure/switch queues belonging to VMDQ pools.
Chen Jing D(Mark) (6):
ether: enhancement for VMDQ support
igb: change for VMDQ arguments expansion
ixgbe: change for VMDQ arguments expansion
i40e: add VMDQ support
i40e: macaddr add/del enhancement
i40e: Add full VMDQ pools support
config/common_bsdapp | 1 +
config/common_linuxapp | 1 +
lib/librte_ether/rte_ethdev.c | 6 +-
lib/librte_ether/rte_ethdev.h | 41 ++-
lib/librte_pmd_e1000/igb_ethdev.c | 3 +
lib/librte_pmd_i40e/i40e_ethdev.c | 498 ++++++++++++++++++++++++++---------
lib/librte_pmd_i40e/i40e_ethdev.h | 21 ++-
lib/librte_pmd_i40e/i40e_rxtx.c | 125 +++++++--
lib/librte_pmd_ixgbe/ixgbe_ethdev.c | 1 +
9 files changed, 532 insertions(+), 165 deletions(-)
--
1.7.7.6
^ permalink raw reply [flat|nested] 45+ messages in thread
* [dpdk-dev] [PATCH v3 1/6] ether: enhancement for VMDQ support
2014-11-04 10:01 ` [dpdk-dev] [PATCH v3 " Chen Jing D(Mark)
@ 2014-11-04 10:01 ` Chen Jing D(Mark)
2014-11-04 10:01 ` [dpdk-dev] [PATCH v3 2/6] igb: change for VMDQ arguments expansion Chen Jing D(Mark)
` (6 subsequent siblings)
7 siblings, 0 replies; 45+ messages in thread
From: Chen Jing D(Mark) @ 2014-11-04 10:01 UTC (permalink / raw)
To: dev
From: "Chen Jing D(Mark)" <jing.d.chen@intel.com>
The change includes several parts:
1. Clear the pool bitmap when trying to remove a specific MAC address.
2. Define RSS, DCB and VMDQ flags that combine to form rx_mq_mode.
3. Replace the 'union' with a 'struct' to expand the rx_adv_conf
arguments and better support RSS, DCB and VMDQ together.
4. Fix a bug in the rte_eth_dev_config_restore function, which
restored all MAC addresses to the default pool.
5. Define 3 additional arguments for better VMDQ support.
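Point 2 above makes rx_mq_mode a bitwise combination of capability flags rather than an opaque enumerator, so a driver can test each capability independently. A small sketch of that decomposition, using the flag values the patch defines; the helper names are illustrative, not part of the rte_ethdev API:

```c
/* Flag values mirror those added to rte_ethdev.h by this patch. */
#define ETH_MQ_RX_RSS_FLAG  0x1
#define ETH_MQ_RX_DCB_FLAG  0x2
#define ETH_MQ_RX_VMDQ_FLAG 0x4

/* Combined modes are simply bitwise ORs of the capability flags. */
#define ETH_MQ_RX_VMDQ_RSS (ETH_MQ_RX_RSS_FLAG | ETH_MQ_RX_VMDQ_FLAG)

/* Each capability can now be tested with a single mask, which is how
 * i40e_pf_config_mq_rx decides whether to configure RSS or reject DCB. */
static int wants_rss(unsigned int mq_mode)
{
	return (mq_mode & ETH_MQ_RX_RSS_FLAG) != 0;
}

static int wants_dcb(unsigned int mq_mode)
{
	return (mq_mode & ETH_MQ_RX_DCB_FLAG) != 0;
}

static int wants_vmdq(unsigned int mq_mode)
{
	return (mq_mode & ETH_MQ_RX_VMDQ_FLAG) != 0;
}
```

With this scheme, ETH_MQ_RX_VMDQ_RSS answers yes to both the RSS and VMDQ tests, which is exactly what lets the i40e RSS configuration path detect the combined mode.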
Signed-off-by: Chen Jing D(Mark) <jing.d.chen@intel.com>
---
lib/librte_ether/rte_ethdev.c | 6 +++++-
lib/librte_ether/rte_ethdev.h | 41 ++++++++++++++++++++++++++++++-----------
2 files changed, 35 insertions(+), 12 deletions(-)
diff --git a/lib/librte_ether/rte_ethdev.c b/lib/librte_ether/rte_ethdev.c
index ff1c769..5e9d576 100644
--- a/lib/librte_ether/rte_ethdev.c
+++ b/lib/librte_ether/rte_ethdev.c
@@ -813,7 +813,8 @@ rte_eth_dev_config_restore(uint8_t port_id)
continue;
/* add address to the hardware */
- if (*dev->dev_ops->mac_addr_add)
+ if (*dev->dev_ops->mac_addr_add &&
+ dev->data->mac_pool_sel[i] & (1ULL << pool))
(*dev->dev_ops->mac_addr_add)(dev, &addr, i, pool);
else {
PMD_DEBUG_TRACE("port %d: MAC address array not supported\n",
@@ -2220,6 +2221,9 @@ rte_eth_dev_mac_addr_remove(uint8_t port_id, struct ether_addr *addr)
/* Update address in NIC data structure */
ether_addr_copy(&null_mac_addr, &dev->data->mac_addrs[index]);
+ /* reset pool bitmap */
+ dev->data->mac_pool_sel[index] = 0;
+
return 0;
}
diff --git a/lib/librte_ether/rte_ethdev.h b/lib/librte_ether/rte_ethdev.h
index 8bf274d..7e4c998 100644
--- a/lib/librte_ether/rte_ethdev.h
+++ b/lib/librte_ether/rte_ethdev.h
@@ -253,20 +253,36 @@ struct rte_eth_thresh {
};
/**
+ * Simple flags are used for rte_eth_conf.rxmode.mq_mode.
+ */
+#define ETH_MQ_RX_RSS_FLAG 0x1
+#define ETH_MQ_RX_DCB_FLAG 0x2
+#define ETH_MQ_RX_VMDQ_FLAG 0x4
+
+/**
* A set of values to identify what method is to be used to route
* packets to multiple queues.
*/
enum rte_eth_rx_mq_mode {
- ETH_MQ_RX_NONE = 0, /**< None of DCB,RSS or VMDQ mode */
-
- ETH_MQ_RX_RSS, /**< For RX side, only RSS is on */
- ETH_MQ_RX_DCB, /**< For RX side,only DCB is on. */
- ETH_MQ_RX_DCB_RSS, /**< Both DCB and RSS enable */
-
- ETH_MQ_RX_VMDQ_ONLY, /**< Only VMDQ, no RSS nor DCB */
- ETH_MQ_RX_VMDQ_RSS, /**< RSS mode with VMDQ */
- ETH_MQ_RX_VMDQ_DCB, /**< Use VMDQ+DCB to route traffic to queues */
- ETH_MQ_RX_VMDQ_DCB_RSS, /**< Enable both VMDQ and DCB in VMDq */
+ /** None of DCB, RSS or VMDQ mode */
+ ETH_MQ_RX_NONE = 0,
+
+ /** For RX side, only RSS is on */
+ ETH_MQ_RX_RSS = ETH_MQ_RX_RSS_FLAG,
+ /** For RX side, only DCB is on. */
+ ETH_MQ_RX_DCB = ETH_MQ_RX_DCB_FLAG,
+ /** Both DCB and RSS enable */
+ ETH_MQ_RX_DCB_RSS = ETH_MQ_RX_RSS_FLAG | ETH_MQ_RX_DCB_FLAG,
+
+ /** Only VMDQ, no RSS nor DCB */
+ ETH_MQ_RX_VMDQ_ONLY = ETH_MQ_RX_VMDQ_FLAG,
+ /** RSS mode with VMDQ */
+ ETH_MQ_RX_VMDQ_RSS = ETH_MQ_RX_RSS_FLAG | ETH_MQ_RX_VMDQ_FLAG,
+ /** Use VMDQ+DCB to route traffic to queues */
+ ETH_MQ_RX_VMDQ_DCB = ETH_MQ_RX_VMDQ_FLAG | ETH_MQ_RX_DCB_FLAG,
+ /** Enable both VMDQ and DCB in VMDq */
+ ETH_MQ_RX_VMDQ_DCB_RSS = ETH_MQ_RX_RSS_FLAG | ETH_MQ_RX_DCB_FLAG |
+ ETH_MQ_RX_VMDQ_FLAG,
};
/**
@@ -850,7 +866,7 @@ struct rte_eth_conf {
Read the datasheet of given ethernet controller
for details. The possible values of this field
are defined in implementation of each driver. */
- union {
+ struct {
struct rte_eth_rss_conf rss_conf; /**< Port RSS configuration */
struct rte_eth_vmdq_dcb_conf vmdq_dcb_conf;
/**< Port vmdq+dcb configuration. */
@@ -918,6 +934,9 @@ struct rte_eth_dev_info {
uint32_t tx_offload_capa; /**< Device TX offload capabilities. */
struct rte_eth_rxconf default_rxconf; /**< Default RX configuration */
struct rte_eth_txconf default_txconf; /**< Default TX configuration */
+ uint16_t vmdq_queue_base; /**< First queue ID for VMDQ pools. */
+ uint16_t vmdq_queue_num; /**< Queue number for VMDQ pools. */
+ uint16_t vmdq_pool_base; /**< First ID of VMDQ pools. */
};
/** Maximum name length for extended statistics counters */
--
1.7.7.6
^ permalink raw reply [flat|nested] 45+ messages in thread
* [dpdk-dev] [PATCH v3 2/6] igb: change for VMDQ arguments expansion
2014-11-04 10:01 ` [dpdk-dev] [PATCH v3 " Chen Jing D(Mark)
2014-11-04 10:01 ` [dpdk-dev] [PATCH v3 1/6] ether: enhancement for " Chen Jing D(Mark)
@ 2014-11-04 10:01 ` Chen Jing D(Mark)
2014-11-04 10:01 ` [dpdk-dev] [PATCH v3 3/6] ixgbe: " Chen Jing D(Mark)
` (5 subsequent siblings)
7 siblings, 0 replies; 45+ messages in thread
From: Chen Jing D(Mark) @ 2014-11-04 10:01 UTC (permalink / raw)
To: dev
From: "Chen Jing D(Mark)" <jing.d.chen@intel.com>
Assign new VMDQ arguments with correct values.
Signed-off-by: Chen Jing D(Mark) <jing.d.chen@intel.com>
---
lib/librte_pmd_e1000/igb_ethdev.c | 3 +++
1 files changed, 3 insertions(+), 0 deletions(-)
diff --git a/lib/librte_pmd_e1000/igb_ethdev.c b/lib/librte_pmd_e1000/igb_ethdev.c
index 9e5665f..c13ea05 100644
--- a/lib/librte_pmd_e1000/igb_ethdev.c
+++ b/lib/librte_pmd_e1000/igb_ethdev.c
@@ -1299,18 +1299,21 @@ eth_igb_infos_get(struct rte_eth_dev *dev,
dev_info->max_rx_queues = 16;
dev_info->max_tx_queues = 16;
dev_info->max_vmdq_pools = ETH_8_POOLS;
+ dev_info->vmdq_queue_num = 16;
break;
case e1000_82580:
dev_info->max_rx_queues = 8;
dev_info->max_tx_queues = 8;
dev_info->max_vmdq_pools = ETH_8_POOLS;
+ dev_info->vmdq_queue_num = 8;
break;
case e1000_i350:
dev_info->max_rx_queues = 8;
dev_info->max_tx_queues = 8;
dev_info->max_vmdq_pools = ETH_8_POOLS;
+ dev_info->vmdq_queue_num = 8;
break;
case e1000_i354:
--
1.7.7.6
^ permalink raw reply [flat|nested] 45+ messages in thread
* [dpdk-dev] [PATCH v3 3/6] ixgbe: change for VMDQ arguments expansion
2014-11-04 10:01 ` [dpdk-dev] [PATCH v3 " Chen Jing D(Mark)
2014-11-04 10:01 ` [dpdk-dev] [PATCH v3 1/6] ether: enhancement for " Chen Jing D(Mark)
2014-11-04 10:01 ` [dpdk-dev] [PATCH v3 2/6] igb: change for VMDQ arguments expansion Chen Jing D(Mark)
@ 2014-11-04 10:01 ` Chen Jing D(Mark)
2014-11-04 10:01 ` [dpdk-dev] [PATCH v3 4/6] i40e: add VMDQ support Chen Jing D(Mark)
` (4 subsequent siblings)
7 siblings, 0 replies; 45+ messages in thread
From: Chen Jing D(Mark) @ 2014-11-04 10:01 UTC (permalink / raw)
To: dev
From: "Chen Jing D(Mark)" <jing.d.chen@intel.com>
Assign new VMDQ arguments with correct values.
Signed-off-by: Chen Jing D(Mark) <jing.d.chen@intel.com>
---
lib/librte_pmd_ixgbe/ixgbe_ethdev.c | 1 +
1 files changed, 1 insertions(+), 0 deletions(-)
diff --git a/lib/librte_pmd_ixgbe/ixgbe_ethdev.c b/lib/librte_pmd_ixgbe/ixgbe_ethdev.c
index c5e4b71..9c73a30 100644
--- a/lib/librte_pmd_ixgbe/ixgbe_ethdev.c
+++ b/lib/librte_pmd_ixgbe/ixgbe_ethdev.c
@@ -1950,6 +1950,7 @@ ixgbe_dev_info_get(struct rte_eth_dev *dev, struct rte_eth_dev_info *dev_info)
dev_info->max_vmdq_pools = ETH_16_POOLS;
else
dev_info->max_vmdq_pools = ETH_64_POOLS;
+ dev_info->vmdq_queue_num = dev_info->max_rx_queues;
dev_info->rx_offload_capa =
DEV_RX_OFFLOAD_VLAN_STRIP |
DEV_RX_OFFLOAD_IPV4_CKSUM |
--
1.7.7.6
^ permalink raw reply [flat|nested] 45+ messages in thread
* [dpdk-dev] [PATCH v3 4/6] i40e: add VMDQ support
2014-11-04 10:01 ` [dpdk-dev] [PATCH v3 " Chen Jing D(Mark)
` (2 preceding siblings ...)
2014-11-04 10:01 ` [dpdk-dev] [PATCH v3 3/6] ixgbe: " Chen Jing D(Mark)
@ 2014-11-04 10:01 ` Chen Jing D(Mark)
2014-11-04 10:01 ` [dpdk-dev] [PATCH v3 5/6] i40e: macaddr add/del enhancement Chen Jing D(Mark)
` (3 subsequent siblings)
7 siblings, 0 replies; 45+ messages in thread
From: Chen Jing D(Mark) @ 2014-11-04 10:01 UTC (permalink / raw)
To: dev
From: "Chen Jing D(Mark)" <jing.d.chen@intel.com>
The change includes several parts:
1. Get the maximum number of VMDQ pools supported in dev_init.
2. Fill VMDQ info in i40e_dev_info_get.
3. Set up VMDQ pools in i40e_dev_configure.
4. Change i40e_vsi_setup to support creation of VMDQ VSIs.
Signed-off-by: Chen Jing D(Mark) <jing.d.chen@intel.com>
---
config/common_bsdapp | 1 +
config/common_linuxapp | 1 +
lib/librte_pmd_i40e/i40e_ethdev.c | 236 ++++++++++++++++++++++++++++++++-----
lib/librte_pmd_i40e/i40e_ethdev.h | 17 +++-
4 files changed, 225 insertions(+), 30 deletions(-)
diff --git a/config/common_bsdapp b/config/common_bsdapp
index eebd05b..9dc9f56 100644
--- a/config/common_bsdapp
+++ b/config/common_bsdapp
@@ -183,6 +183,7 @@ CONFIG_RTE_LIBRTE_I40E_PF_DISABLE_STRIP_CRC=y
CONFIG_RTE_LIBRTE_I40E_RX_ALLOW_BULK_ALLOC=n
CONFIG_RTE_LIBRTE_I40E_16BYTE_RX_DESC=n
CONFIG_RTE_LIBRTE_I40E_QUEUE_NUM_PER_VF=4
+CONFIG_RTE_LIBRTE_I40E_QUEUE_NUM_PER_VM=4
# interval up to 8160 us, aligned to 2 (or default value)
CONFIG_RTE_LIBRTE_I40E_ITR_INTERVAL=-1
diff --git a/config/common_linuxapp b/config/common_linuxapp
index c5751bd..8be79c3 100644
--- a/config/common_linuxapp
+++ b/config/common_linuxapp
@@ -206,6 +206,7 @@ CONFIG_RTE_LIBRTE_I40E_PF_DISABLE_STRIP_CRC=n
CONFIG_RTE_LIBRTE_I40E_RX_ALLOW_BULK_ALLOC=y
CONFIG_RTE_LIBRTE_I40E_16BYTE_RX_DESC=n
CONFIG_RTE_LIBRTE_I40E_QUEUE_NUM_PER_VF=4
+CONFIG_RTE_LIBRTE_I40E_QUEUE_NUM_PER_VM=4
# interval up to 8160 us, aligned to 2 (or default value)
CONFIG_RTE_LIBRTE_I40E_ITR_INTERVAL=-1
diff --git a/lib/librte_pmd_i40e/i40e_ethdev.c b/lib/librte_pmd_i40e/i40e_ethdev.c
index 661d146..020881f 100644
--- a/lib/librte_pmd_i40e/i40e_ethdev.c
+++ b/lib/librte_pmd_i40e/i40e_ethdev.c
@@ -163,6 +163,7 @@ static int i40e_get_cap(struct i40e_hw *hw);
static int i40e_pf_parameter_init(struct rte_eth_dev *dev);
static int i40e_pf_setup(struct i40e_pf *pf);
static int i40e_vsi_init(struct i40e_vsi *vsi);
+static int i40e_vmdq_setup(struct rte_eth_dev *dev);
static void i40e_stat_update_32(struct i40e_hw *hw, uint32_t reg,
bool offset_loaded, uint64_t *offset, uint64_t *stat);
static void i40e_stat_update_48(struct i40e_hw *hw,
@@ -275,21 +276,11 @@ static struct eth_driver rte_i40e_pmd = {
};
static inline int
-i40e_prev_power_of_2(int n)
+i40e_align_floor(int n)
{
- int p = n;
-
- --p;
- p |= p >> 1;
- p |= p >> 2;
- p |= p >> 4;
- p |= p >> 8;
- p |= p >> 16;
- if (p == (n - 1))
- return n;
- p >>= 1;
-
- return ++p;
+ if (n == 0)
+ return 0;
+ return (1 << (sizeof(n) * CHAR_BIT - 1 - __builtin_clz(n)));
}
static inline int
@@ -506,7 +497,7 @@ eth_i40e_dev_init(__rte_unused struct eth_driver *eth_drv,
if (!dev->data->mac_addrs) {
PMD_INIT_LOG(ERR, "Failed to allocated memory "
"for storing mac address");
- goto err_get_mac_addr;
+ goto err_mac_alloc;
}
ether_addr_copy((struct ether_addr *)hw->mac.perm_addr,
&dev->data->mac_addrs[0]);
@@ -527,8 +518,9 @@ eth_i40e_dev_init(__rte_unused struct eth_driver *eth_drv,
return 0;
+err_mac_alloc:
+ i40e_vsi_release(pf->main_vsi);
err_setup_pf_switch:
- rte_free(pf->main_vsi);
err_get_mac_addr:
err_configure_lan_hmc:
(void)i40e_shutdown_lan_hmc(hw);
@@ -547,6 +539,27 @@ err_get_capabilities:
static int
i40e_dev_configure(struct rte_eth_dev *dev)
{
+ int ret;
+ enum rte_eth_rx_mq_mode mq_mode = dev->data->dev_conf.rxmode.mq_mode;
+
+ /* VMDQ setup.
+ * VMDQ setup needs to move out of i40e_pf_config_mq_rx() because VMDQ
+ * and RSS settings have different requirements.
+ * The general PMD call sequence is NIC init, configure,
+ * rx/tx_queue_setup and dev_start. In rx/tx_queue_setup(), the driver
+ * will look up the VSI that a specific queue belongs to if VMDQ is
+ * applicable, so VMDQ setup has to be done before rx/tx_queue_setup()
+ * and this function is a good place for vmdq_setup.
+ * RSS setup needs the actual number of configured RX queues, which is
+ * only available after rx_queue_setup(), so dev_start() is a good
+ * place for RSS setup.
+ */
+ if (mq_mode & ETH_MQ_RX_VMDQ_FLAG) {
+ ret = i40e_vmdq_setup(dev);
+ if (ret)
+ return ret;
+ }
+
return i40e_dev_init_vlan(dev);
}
@@ -1431,6 +1444,15 @@ i40e_dev_info_get(struct rte_eth_dev *dev, struct rte_eth_dev_info *dev_info)
.txq_flags = ETH_TXQ_FLAGS_NOMULTSEGS | ETH_TXQ_FLAGS_NOOFFLOADS,
};
+ if (pf->flags & I40E_FLAG_VMDQ) {
+ dev_info->max_vmdq_pools = pf->max_nb_vmdq_vsi;
+ dev_info->vmdq_queue_base = dev_info->max_rx_queues;
+ dev_info->vmdq_queue_num = pf->vmdq_nb_qps *
+ pf->max_nb_vmdq_vsi;
+ dev_info->vmdq_pool_base = I40E_VMDQ_POOL_BASE;
+ dev_info->max_rx_queues += dev_info->vmdq_queue_num;
+ dev_info->max_tx_queues += dev_info->vmdq_queue_num;
+ }
}
static int
@@ -1972,7 +1994,7 @@ i40e_pf_parameter_init(struct rte_eth_dev *dev)
{
struct i40e_pf *pf = I40E_DEV_PRIVATE_TO_PF(dev->data->dev_private);
struct i40e_hw *hw = I40E_PF_TO_HW(pf);
- uint16_t sum_queues = 0, sum_vsis;
+ uint16_t sum_queues = 0, sum_vsis, left_queues;
/* First check if FW support SRIOV */
if (dev->pci_dev->max_vfs && !hw->func_caps.sr_iov_1_1) {
@@ -1988,7 +2010,7 @@ i40e_pf_parameter_init(struct rte_eth_dev *dev)
pf->flags |= I40E_FLAG_RSS;
pf->lan_nb_qps = RTE_MIN(hw->func_caps.num_tx_qp,
(uint32_t)(1 << hw->func_caps.rss_table_entry_width));
- pf->lan_nb_qps = i40e_prev_power_of_2(pf->lan_nb_qps);
+ pf->lan_nb_qps = i40e_align_floor(pf->lan_nb_qps);
} else
pf->lan_nb_qps = 1;
sum_queues = pf->lan_nb_qps;
@@ -2022,11 +2044,19 @@ i40e_pf_parameter_init(struct rte_eth_dev *dev)
if (hw->func_caps.vmdq) {
pf->flags |= I40E_FLAG_VMDQ;
- pf->vmdq_nb_qps = I40E_DEFAULT_QP_NUM_VMDQ;
- sum_queues += pf->vmdq_nb_qps;
- sum_vsis += 1;
- PMD_INIT_LOG(INFO, "VMDQ queue pairs:%u", pf->vmdq_nb_qps);
+ pf->vmdq_nb_qps = RTE_LIBRTE_I40E_QUEUE_NUM_PER_VM;
+ pf->max_nb_vmdq_vsi = 1;
+ /*
+ * If VMDQ is available, assume a single VSI can be created; this is
+ * adjusted later.
+ */
+ sum_queues += pf->vmdq_nb_qps * pf->max_nb_vmdq_vsi;
+ sum_vsis += pf->max_nb_vmdq_vsi;
+ } else {
+ pf->vmdq_nb_qps = 0;
+ pf->max_nb_vmdq_vsi = 0;
}
+ pf->nb_cfg_vmdq_vsi = 0;
if (hw->func_caps.fd) {
pf->flags |= I40E_FLAG_FDIR;
@@ -2047,6 +2077,22 @@ i40e_pf_parameter_init(struct rte_eth_dev *dev)
return -EINVAL;
}
+ /* Adjust VMDQ setting to support as many VMs as possible */
+ if (pf->flags & I40E_FLAG_VMDQ) {
+ left_queues = hw->func_caps.num_rx_qp - sum_queues;
+
+ pf->max_nb_vmdq_vsi += RTE_MIN(left_queues / pf->vmdq_nb_qps,
+ pf->max_num_vsi - sum_vsis);
+
+ /* Limit the max VMDQ number to what rte_ether can support */
+ pf->max_nb_vmdq_vsi = RTE_MIN(pf->max_nb_vmdq_vsi,
+ ETH_64_POOLS - 1);
+
+ PMD_INIT_LOG(INFO, "Max VMDQ VSI num:%u",
+ pf->max_nb_vmdq_vsi);
+ PMD_INIT_LOG(INFO, "VMDQ queue pairs:%u", pf->vmdq_nb_qps);
+ }
+
/* Each VSI occupy 1 MSIX interrupt at least, plus IRQ0 for misc intr
* cause */
if (sum_vsis > hw->func_caps.num_msix_vectors - 1) {
@@ -2439,7 +2485,7 @@ i40e_vsi_config_tc_queue_mapping(struct i40e_vsi *vsi,
vsi->enabled_tc = enabled_tcmap;
/* Number of queues per enabled TC */
- qpnum_per_tc = i40e_prev_power_of_2(vsi->nb_qps / total_tc);
+ qpnum_per_tc = i40e_align_floor(vsi->nb_qps / total_tc);
qpnum_per_tc = RTE_MIN(qpnum_per_tc, I40E_MAX_Q_PER_TC);
bsf = rte_bsf32(qpnum_per_tc);
@@ -2752,6 +2798,9 @@ i40e_vsi_setup(struct i40e_pf *pf,
case I40E_VSI_SRIOV :
vsi->nb_qps = pf->vf_nb_qps;
break;
+ case I40E_VSI_VMDQ2:
+ vsi->nb_qps = pf->vmdq_nb_qps;
+ break;
default:
goto fail_mem;
}
@@ -2893,8 +2942,44 @@ i40e_vsi_setup(struct i40e_pf *pf,
* Since VSI is not created yet, only configure parameter,
* will add vsi below.
*/
- }
- else {
+ } else if (type == I40E_VSI_VMDQ2) {
+ memset(&ctxt, 0, sizeof(ctxt));
+ /*
+ * For other VSIs, the uplink_seid equals the uplink VSI's
+ * uplink_seid since they share the same VEB
+ */
+ vsi->uplink_seid = uplink_vsi->uplink_seid;
+ ctxt.pf_num = hw->pf_id;
+ ctxt.vf_num = 0;
+ ctxt.uplink_seid = vsi->uplink_seid;
+ ctxt.connection_type = 0x1;
+ ctxt.flags = I40E_AQ_VSI_TYPE_VMDQ2;
+
+ ctxt.info.valid_sections |=
+ rte_cpu_to_le_16(I40E_AQ_VSI_PROP_SWITCH_VALID);
+ /* user_param carries flag to enable loop back */
+ if (user_param) {
+ ctxt.info.switch_id =
+ rte_cpu_to_le_16(I40E_AQ_VSI_SW_ID_FLAG_LOCAL_LB);
+ ctxt.info.switch_id |=
+ rte_cpu_to_le_16(I40E_AQ_VSI_SW_ID_FLAG_ALLOW_LB);
+ }
+
+ /* Configure port/vlan */
+ ctxt.info.valid_sections |=
+ rte_cpu_to_le_16(I40E_AQ_VSI_PROP_VLAN_VALID);
+ ctxt.info.port_vlan_flags |= I40E_AQ_VSI_PVLAN_MODE_ALL;
+ ret = i40e_vsi_config_tc_queue_mapping(vsi, &ctxt.info,
+ I40E_DEFAULT_TCMAP);
+ if (ret != I40E_SUCCESS) {
+ PMD_DRV_LOG(ERR, "Failed to configure "
+ "TC queue mapping");
+ goto fail_msix_alloc;
+ }
+ ctxt.info.up_enable_bits = I40E_DEFAULT_TCMAP;
+ ctxt.info.valid_sections |=
+ rte_cpu_to_le_16(I40E_AQ_VSI_PROP_SCHED_VALID);
+ } else {
PMD_DRV_LOG(ERR, "VSI: Not support other type VSI yet");
goto fail_msix_alloc;
}
@@ -3069,7 +3154,6 @@ i40e_pf_setup(struct i40e_pf *pf)
{
struct i40e_hw *hw = I40E_PF_TO_HW(pf);
struct i40e_filter_control_settings settings;
- struct rte_eth_dev_data *dev_data = pf->dev_data;
struct i40e_vsi *vsi;
int ret;
@@ -3091,8 +3175,6 @@ i40e_pf_setup(struct i40e_pf *pf)
return I40E_ERR_NOT_READY;
}
pf->main_vsi = vsi;
- dev_data->nb_rx_queues = vsi->nb_qps;
- dev_data->nb_tx_queues = vsi->nb_qps;
/* Configure filter control */
memset(&settings, 0, sizeof(settings));
@@ -3363,6 +3445,102 @@ i40e_vsi_init(struct i40e_vsi *vsi)
return err;
}
+static int
+i40e_vmdq_setup(struct rte_eth_dev *dev)
+{
+ struct rte_eth_conf *conf = &dev->data->dev_conf;
+ struct i40e_pf *pf = I40E_DEV_PRIVATE_TO_PF(dev->data->dev_private);
+ int i, err, conf_vsis, j, loop;
+ struct i40e_vsi *vsi;
+ struct i40e_vmdq_info *vmdq_info;
+ struct rte_eth_vmdq_rx_conf *vmdq_conf;
+ struct i40e_hw *hw = I40E_PF_TO_HW(pf);
+
+ /*
+ * Disable interrupt to avoid message from VF. Furthermore, it will
+ * avoid race condition in VSI creation/destroy.
+ */
+ i40e_pf_disable_irq0(hw);
+
+ if ((pf->flags & I40E_FLAG_VMDQ) == 0) {
+ PMD_INIT_LOG(ERR, "FW doesn't support VMDQ");
+ return -ENOTSUP;
+ }
+
+ conf_vsis = conf->rx_adv_conf.vmdq_rx_conf.nb_queue_pools;
+ if (conf_vsis > pf->max_nb_vmdq_vsi) {
+ PMD_INIT_LOG(ERR, "VMDQ config: %u, max support:%u",
+ conf->rx_adv_conf.vmdq_rx_conf.nb_queue_pools,
+ pf->max_nb_vmdq_vsi);
+ return -ENOTSUP;
+ }
+
+ if (pf->vmdq != NULL) {
+ PMD_INIT_LOG(INFO, "VMDQ already configured");
+ return 0;
+ }
+
+ pf->vmdq = rte_zmalloc("vmdq_info_struct",
+ sizeof(*vmdq_info) * conf_vsis, 0);
+
+ if (pf->vmdq == NULL) {
+ PMD_INIT_LOG(ERR, "Failed to allocate memory");
+ return -ENOMEM;
+ }
+
+ vmdq_conf = &conf->rx_adv_conf.vmdq_rx_conf;
+
+ /* Create VMDQ VSI */
+ for (i = 0; i < conf_vsis; i++) {
+ vsi = i40e_vsi_setup(pf, I40E_VSI_VMDQ2, pf->main_vsi,
+ vmdq_conf->enable_loop_back);
+ if (vsi == NULL) {
+ PMD_INIT_LOG(ERR, "Failed to create VMDQ VSI");
+ err = -1;
+ goto err_vsi_setup;
+ }
+ vmdq_info = &pf->vmdq[i];
+ vmdq_info->pf = pf;
+ vmdq_info->vsi = vsi;
+ }
+ pf->nb_cfg_vmdq_vsi = conf_vsis;
+
+ /* Configure Vlan */
+ loop = sizeof(vmdq_conf->pool_map[0].pools) * CHAR_BIT;
+ for (i = 0; i < vmdq_conf->nb_pool_maps; i++) {
+ for (j = 0; j < loop && j < pf->nb_cfg_vmdq_vsi; j++) {
+ if (vmdq_conf->pool_map[i].pools & (1UL << j)) {
+ PMD_INIT_LOG(INFO, "Add vlan %u to vmdq pool %u",
+ vmdq_conf->pool_map[i].vlan_id, j);
+
+ err = i40e_vsi_add_vlan(pf->vmdq[j].vsi,
+ vmdq_conf->pool_map[i].vlan_id);
+ if (err) {
+ PMD_INIT_LOG(ERR, "Failed to add vlan");
+ err = -1;
+ goto err_vsi_setup;
+ }
+ }
+ }
+ }
+
+ i40e_pf_enable_irq0(hw);
+
+ return 0;
+
+err_vsi_setup:
+ for (i = 0; i < conf_vsis; i++)
+ if (pf->vmdq[i].vsi == NULL)
+ break;
+ else
+ i40e_vsi_release(pf->vmdq[i].vsi);
+
+ rte_free(pf->vmdq);
+ pf->vmdq = NULL;
+ i40e_pf_enable_irq0(hw);
+ return err;
+}
+
static void
i40e_stat_update_32(struct i40e_hw *hw,
uint32_t reg,
@@ -4639,7 +4817,7 @@ i40e_pf_config_rss(struct i40e_pf *pf)
struct i40e_hw *hw = I40E_PF_TO_HW(pf);
struct rte_eth_rss_conf rss_conf;
uint32_t i, lut = 0;
- uint16_t j, num = i40e_prev_power_of_2(pf->dev_data->nb_rx_queues);
+ uint16_t j, num = i40e_align_floor(pf->dev_data->nb_rx_queues);
for (i = 0, j = 0; i < hw->func_caps.rss_table_size; i++, j++) {
if (j == num)
diff --git a/lib/librte_pmd_i40e/i40e_ethdev.h b/lib/librte_pmd_i40e/i40e_ethdev.h
index e61d258..69512cd 100644
--- a/lib/librte_pmd_i40e/i40e_ethdev.h
+++ b/lib/librte_pmd_i40e/i40e_ethdev.h
@@ -47,13 +47,15 @@
#define I40E_QUEUE_BASE_ADDR_UNIT 128
/* number of VSIs and queue default setting */
#define I40E_MAX_QP_NUM_PER_VF 16
-#define I40E_DEFAULT_QP_NUM_VMDQ 64
#define I40E_DEFAULT_QP_NUM_FDIR 64
#define I40E_UINT32_BIT_SIZE (CHAR_BIT * sizeof(uint32_t))
#define I40E_VFTA_SIZE (4096 / I40E_UINT32_BIT_SIZE)
/* Default TC traffic in case DCB is not enabled */
#define I40E_DEFAULT_TCMAP 0x1
+/* Always assign pool 0 to main VSI, VMDQ will start from 1 */
+#define I40E_VMDQ_POOL_BASE 1
+
/* i40e flags */
#define I40E_FLAG_RSS (1ULL << 0)
#define I40E_FLAG_DCB (1ULL << 1)
@@ -233,6 +235,14 @@ struct i40e_pf_vf {
};
/*
+ * Structure to store private data for VMDQ instance
+ */
+struct i40e_vmdq_info {
+ struct i40e_pf *pf;
+ struct i40e_vsi *vsi;
+};
+
+/*
* Structure to store private data specific for PF instance.
*/
struct i40e_pf {
@@ -264,6 +274,11 @@ struct i40e_pf {
/* store VXLAN UDP ports */
uint16_t vxlan_ports[I40E_MAX_PF_UDP_OFFLOAD_PORTS];
uint16_t vxlan_bitmap; /* Vxlan bit mask */
+
+ /* VMDQ related info */
+ uint16_t max_nb_vmdq_vsi; /* Max number of VMDQ VSIs supported */
+ uint16_t nb_cfg_vmdq_vsi; /* number of VMDQ VSIs configured */
+ struct i40e_vmdq_info *vmdq;
};
enum pending_msg {
--
1.7.7.6
^ permalink raw reply [flat|nested] 45+ messages in thread
* [dpdk-dev] [PATCH v3 5/6] i40e: macaddr add/del enhancement
2014-11-04 10:01 ` [dpdk-dev] [PATCH v3 " Chen Jing D(Mark)
` (3 preceding siblings ...)
2014-11-04 10:01 ` [dpdk-dev] [PATCH v3 4/6] i40e: add VMDQ support Chen Jing D(Mark)
@ 2014-11-04 10:01 ` Chen Jing D(Mark)
2014-11-04 10:01 ` [dpdk-dev] [PATCH v3 6/6] i40e: Add full VMDQ pools support Chen Jing D(Mark)
` (2 subsequent siblings)
7 siblings, 0 replies; 45+ messages in thread
From: Chen Jing D(Mark) @ 2014-11-04 10:01 UTC (permalink / raw)
To: dev
From: "Chen Jing D(Mark)" <jing.d.chen@intel.com>
Change the i40e_macaddr_add and i40e_macaddr_remove functions to support
adding/deleting multiple MAC addresses. Meanwhile, support MAC address
operations on different pools.
Signed-off-by: Chen Jing D(Mark) <jing.d.chen@intel.com>
---
lib/librte_pmd_i40e/i40e_ethdev.c | 90 +++++++++++++++++-------------------
1 files changed, 43 insertions(+), 47 deletions(-)
diff --git a/lib/librte_pmd_i40e/i40e_ethdev.c b/lib/librte_pmd_i40e/i40e_ethdev.c
index 020881f..21401f8 100644
--- a/lib/librte_pmd_i40e/i40e_ethdev.c
+++ b/lib/librte_pmd_i40e/i40e_ethdev.c
@@ -1573,48 +1573,41 @@ i40e_priority_flow_ctrl_set(__rte_unused struct rte_eth_dev *dev,
static void
i40e_macaddr_add(struct rte_eth_dev *dev,
struct ether_addr *mac_addr,
- __attribute__((unused)) uint32_t index,
- __attribute__((unused)) uint32_t pool)
+ __rte_unused uint32_t index,
+ uint32_t pool)
{
struct i40e_pf *pf = I40E_DEV_PRIVATE_TO_PF(dev->data->dev_private);
- struct i40e_hw *hw = I40E_DEV_PRIVATE_TO_HW(dev->data->dev_private);
struct i40e_mac_filter_info mac_filter;
- struct i40e_vsi *vsi = pf->main_vsi;
- struct ether_addr old_mac;
+ struct i40e_vsi *vsi;
int ret;
- if (!is_valid_assigned_ether_addr(mac_addr)) {
- PMD_DRV_LOG(ERR, "Invalid ethernet address");
+ /* If VMDQ not enabled or configured, return */
+ if (pool != 0 && (!(pf->flags & I40E_FLAG_VMDQ) || !pf->nb_cfg_vmdq_vsi)) {
+ PMD_DRV_LOG(ERR, "VMDQ not %s, can't set mac to pool %u",
+ pf->flags & I40E_FLAG_VMDQ ? "configured" : "enabled",
+ pool);
return;
}
- if (is_same_ether_addr(mac_addr, &(pf->dev_addr))) {
- PMD_DRV_LOG(INFO, "Ignore adding permanent mac address");
- return;
- }
-
- /* Write mac address */
- ret = i40e_aq_mac_address_write(hw, I40E_AQC_WRITE_TYPE_LAA_ONLY,
- mac_addr->addr_bytes, NULL);
- if (ret != I40E_SUCCESS) {
- PMD_DRV_LOG(ERR, "Failed to write mac address");
+ if (pool > pf->nb_cfg_vmdq_vsi) {
+ PMD_DRV_LOG(ERR, "Pool number %u invalid. Max pool is %u",
+ pool, pf->nb_cfg_vmdq_vsi);
return;
}
- (void)rte_memcpy(&old_mac, hw->mac.addr, ETHER_ADDR_LEN);
- (void)rte_memcpy(hw->mac.addr, mac_addr->addr_bytes,
- ETHER_ADDR_LEN);
(void)rte_memcpy(&mac_filter.mac_addr, mac_addr, ETHER_ADDR_LEN);
mac_filter.filter_type = RTE_MACVLAN_PERFECT_MATCH;
+ if (pool == 0)
+ vsi = pf->main_vsi;
+ else
+ vsi = pf->vmdq[pool - 1].vsi;
+
ret = i40e_vsi_add_mac(vsi, &mac_filter);
if (ret != I40E_SUCCESS) {
PMD_DRV_LOG(ERR, "Failed to add MACVLAN filter");
return;
}
-
- ether_addr_copy(mac_addr, &pf->dev_addr);
- i40e_vsi_delete_mac(vsi, &old_mac);
}
/* Remove a MAC address, and update filters */
@@ -1622,36 +1615,39 @@ static void
i40e_macaddr_remove(struct rte_eth_dev *dev, uint32_t index)
{
struct i40e_pf *pf = I40E_DEV_PRIVATE_TO_PF(dev->data->dev_private);
- struct i40e_vsi *vsi = pf->main_vsi;
- struct rte_eth_dev_data *data = I40E_VSI_TO_DEV_DATA(vsi);
+ struct i40e_vsi *vsi;
+ struct rte_eth_dev_data *data = dev->data;
struct ether_addr *macaddr;
int ret;
- struct i40e_hw *hw =
- I40E_DEV_PRIVATE_TO_HW(dev->data->dev_private);
-
- if (index >= vsi->max_macaddrs)
- return;
+ uint32_t i;
+ uint64_t pool_sel;
macaddr = &(data->mac_addrs[index]);
- if (!is_valid_assigned_ether_addr(macaddr))
- return;
-
- ret = i40e_aq_mac_address_write(hw, I40E_AQC_WRITE_TYPE_LAA_ONLY,
- hw->mac.perm_addr, NULL);
- if (ret != I40E_SUCCESS) {
- PMD_DRV_LOG(ERR, "Failed to write mac address");
- return;
- }
-
- (void)rte_memcpy(hw->mac.addr, hw->mac.perm_addr, ETHER_ADDR_LEN);
- ret = i40e_vsi_delete_mac(vsi, macaddr);
- if (ret != I40E_SUCCESS)
- return;
+ pool_sel = dev->data->mac_pool_sel[index];
+
+ for (i = 0; i < sizeof(pool_sel) * CHAR_BIT; i++) {
+ if (pool_sel & (1ULL << i)) {
+ if (i == 0)
+ vsi = pf->main_vsi;
+ else {
+ /* No VMDQ pool enabled or configured */
+ if (!(pf->flags & I40E_FLAG_VMDQ) ||
+ (i > pf->nb_cfg_vmdq_vsi)) {
+ PMD_DRV_LOG(ERR, "No VMDQ pool enabled"
+ "/configured");
+ return;
+ }
+ vsi = pf->vmdq[i - 1].vsi;
+ }
+ ret = i40e_vsi_delete_mac(vsi, macaddr);
- /* Clear device address as it has been removed */
- if (is_same_ether_addr(&(pf->dev_addr), macaddr))
- memset(&pf->dev_addr, 0, sizeof(struct ether_addr));
+ if (ret) {
+ PMD_DRV_LOG(ERR, "Failed to remove MACVLAN filter");
+ return;
+ }
+ }
+ }
}
/* Set perfect match or hash match of MAC and VLAN for a VF */
--
1.7.7.6
^ permalink raw reply [flat|nested] 45+ messages in thread
* [dpdk-dev] [PATCH v3 6/6] i40e: Add full VMDQ pools support
2014-11-04 10:01 ` [dpdk-dev] [PATCH v3 " Chen Jing D(Mark)
` (4 preceding siblings ...)
2014-11-04 10:01 ` [dpdk-dev] [PATCH v3 5/6] i40e: macaddr add/del enhancement Chen Jing D(Mark)
@ 2014-11-04 10:01 ` Chen Jing D(Mark)
2014-11-04 11:19 ` [dpdk-dev] [PATCH v3 0/6] i40e VMDQ support Ananyev, Konstantin
2014-12-11 6:09 ` Cao, Min
7 siblings, 0 replies; 45+ messages in thread
From: Chen Jing D(Mark) @ 2014-11-04 10:01 UTC (permalink / raw)
To: dev
From: "Chen Jing D(Mark)" <jing.d.chen@intel.com>
1. Function i40e_vsi_* name change to i40e_dev_* since the PF can contain
more than one VSI after VMDQ is enabled.
2. i40e_dev_rx/tx_queue_setup change to be able to set up queues that
belong to VMDQ pools.
3. Add queue mapping, which converts between the queue index the
application uses and the real NIC queue index.
4. i40e_dev_start/stop change to be able to switch VMDQ queues.
5. i40e_pf_config_rss change to calculate the actual main VSI queue
number after VMDQ pools are introduced.
Signed-off-by: Chen Jing D(Mark) <jing.d.chen@intel.com>
---
lib/librte_pmd_i40e/i40e_ethdev.c | 174 +++++++++++++++++++++++++-----------
lib/librte_pmd_i40e/i40e_ethdev.h | 4 +-
lib/librte_pmd_i40e/i40e_rxtx.c | 125 ++++++++++++++++++++++-----
3 files changed, 226 insertions(+), 77 deletions(-)
diff --git a/lib/librte_pmd_i40e/i40e_ethdev.c b/lib/librte_pmd_i40e/i40e_ethdev.c
index 21401f8..5c15a9d 100644
--- a/lib/librte_pmd_i40e/i40e_ethdev.c
+++ b/lib/librte_pmd_i40e/i40e_ethdev.c
@@ -162,7 +162,7 @@ static int i40e_dev_rss_reta_query(struct rte_eth_dev *dev,
static int i40e_get_cap(struct i40e_hw *hw);
static int i40e_pf_parameter_init(struct rte_eth_dev *dev);
static int i40e_pf_setup(struct i40e_pf *pf);
-static int i40e_vsi_init(struct i40e_vsi *vsi);
+static int i40e_dev_rxtx_init(struct i40e_pf *pf);
static int i40e_vmdq_setup(struct rte_eth_dev *dev);
static void i40e_stat_update_32(struct i40e_hw *hw, uint32_t reg,
bool offset_loaded, uint64_t *offset, uint64_t *stat);
@@ -783,8 +783,8 @@ i40e_dev_start(struct rte_eth_dev *dev)
{
struct i40e_pf *pf = I40E_DEV_PRIVATE_TO_PF(dev->data->dev_private);
struct i40e_hw *hw = I40E_DEV_PRIVATE_TO_HW(dev->data->dev_private);
- struct i40e_vsi *vsi = pf->main_vsi;
- int ret;
+ struct i40e_vsi *main_vsi = pf->main_vsi;
+ int ret, i;
if ((dev->data->dev_conf.link_duplex != ETH_LINK_AUTONEG_DUPLEX) &&
(dev->data->dev_conf.link_duplex != ETH_LINK_FULL_DUPLEX)) {
@@ -795,26 +795,37 @@ i40e_dev_start(struct rte_eth_dev *dev)
}
/* Initialize VSI */
- ret = i40e_vsi_init(vsi);
+ ret = i40e_dev_rxtx_init(pf);
if (ret != I40E_SUCCESS) {
- PMD_DRV_LOG(ERR, "Failed to init VSI");
+ PMD_DRV_LOG(ERR, "Failed to init rx/tx queues");
goto err_up;
}
/* Map queues with MSIX interrupt */
- i40e_vsi_queues_bind_intr(vsi);
- i40e_vsi_enable_queues_intr(vsi);
+ i40e_vsi_queues_bind_intr(main_vsi);
+ i40e_vsi_enable_queues_intr(main_vsi);
+
+ /* Map VMDQ VSI queues with MSIX interrupt */
+ for (i = 0; i < pf->nb_cfg_vmdq_vsi; i++) {
+ i40e_vsi_queues_bind_intr(pf->vmdq[i].vsi);
+ i40e_vsi_enable_queues_intr(pf->vmdq[i].vsi);
+ }
/* Enable all queues which have been configured */
- ret = i40e_vsi_switch_queues(vsi, TRUE);
+ ret = i40e_dev_switch_queues(pf, TRUE);
if (ret != I40E_SUCCESS) {
PMD_DRV_LOG(ERR, "Failed to enable VSI");
goto err_up;
}
/* Enable receiving broadcast packets */
- if ((vsi->type == I40E_VSI_MAIN) || (vsi->type == I40E_VSI_VMDQ2)) {
- ret = i40e_aq_set_vsi_broadcast(hw, vsi->seid, true, NULL);
+ ret = i40e_aq_set_vsi_broadcast(hw, main_vsi->seid, true, NULL);
+ if (ret != I40E_SUCCESS)
+ PMD_DRV_LOG(INFO, "fail to set vsi broadcast");
+
+ for (i = 0; i < pf->nb_cfg_vmdq_vsi; i++) {
+ ret = i40e_aq_set_vsi_broadcast(hw, pf->vmdq[i].vsi->seid,
+ true, NULL);
if (ret != I40E_SUCCESS)
PMD_DRV_LOG(INFO, "fail to set vsi broadcast");
}
@@ -829,7 +840,8 @@ i40e_dev_start(struct rte_eth_dev *dev)
return I40E_SUCCESS;
err_up:
- i40e_vsi_switch_queues(vsi, FALSE);
+ i40e_dev_switch_queues(pf, FALSE);
+ i40e_dev_clear_queues(dev);
return ret;
}
@@ -838,17 +850,26 @@ static void
i40e_dev_stop(struct rte_eth_dev *dev)
{
struct i40e_pf *pf = I40E_DEV_PRIVATE_TO_PF(dev->data->dev_private);
- struct i40e_vsi *vsi = pf->main_vsi;
+ struct i40e_vsi *main_vsi = pf->main_vsi;
+ int i;
/* Disable all queues */
- i40e_vsi_switch_queues(vsi, FALSE);
+ i40e_dev_switch_queues(pf, FALSE);
+
+ /* un-map queues with interrupt registers */
+ i40e_vsi_disable_queues_intr(main_vsi);
+ i40e_vsi_queues_unbind_intr(main_vsi);
+
+ for (i = 0; i < pf->nb_cfg_vmdq_vsi; i++) {
+ i40e_vsi_disable_queues_intr(pf->vmdq[i].vsi);
+ i40e_vsi_queues_unbind_intr(pf->vmdq[i].vsi);
+ }
+
+ /* Clear all queues and release memory */
+ i40e_dev_clear_queues(dev);
/* Set link down */
i40e_dev_set_link_down(dev);
-
- /* un-map queues with interrupt registers */
- i40e_vsi_disable_queues_intr(vsi);
- i40e_vsi_queues_unbind_intr(vsi);
}
static void
@@ -3251,11 +3272,11 @@ i40e_switch_tx_queue(struct i40e_hw *hw, uint16_t q_idx, bool on)
/* Switch the tx queues on or off */
static int
-i40e_vsi_switch_tx_queues(struct i40e_vsi *vsi, bool on)
+i40e_dev_switch_tx_queues(struct i40e_pf *pf, bool on)
{
- struct rte_eth_dev_data *dev_data = I40E_VSI_TO_DEV_DATA(vsi);
+ struct rte_eth_dev_data *dev_data = pf->dev_data;
struct i40e_tx_queue *txq;
- struct rte_eth_dev *dev = I40E_VSI_TO_ETH_DEV(vsi);
+ struct rte_eth_dev *dev = pf->adapter->eth_dev;
uint16_t i;
int ret;
@@ -3263,7 +3284,7 @@ i40e_vsi_switch_tx_queues(struct i40e_vsi *vsi, bool on)
txq = dev_data->tx_queues[i];
/* Don't operate the queue if not configured or
* if starting only per queue */
- if (!txq->q_set || (on && txq->tx_deferred_start))
+ if (!txq || !txq->q_set || (on && txq->tx_deferred_start))
continue;
if (on)
ret = i40e_dev_tx_queue_start(dev, i);
@@ -3329,11 +3350,11 @@ i40e_switch_rx_queue(struct i40e_hw *hw, uint16_t q_idx, bool on)
}
/* Switch on or off the rx queues */
static int
-i40e_vsi_switch_rx_queues(struct i40e_vsi *vsi, bool on)
+i40e_dev_switch_rx_queues(struct i40e_pf *pf, bool on)
{
- struct rte_eth_dev_data *dev_data = I40E_VSI_TO_DEV_DATA(vsi);
+ struct rte_eth_dev_data *dev_data = pf->dev_data;
struct i40e_rx_queue *rxq;
- struct rte_eth_dev *dev = I40E_VSI_TO_ETH_DEV(vsi);
+ struct rte_eth_dev *dev = pf->adapter->eth_dev;
uint16_t i;
int ret;
@@ -3341,7 +3362,7 @@ i40e_vsi_switch_rx_queues(struct i40e_vsi *vsi, bool on)
rxq = dev_data->rx_queues[i];
/* Don't operate the queue if not configured or
* if starting only per queue */
- if (!rxq->q_set || (on && rxq->rx_deferred_start))
+ if (!rxq || !rxq->q_set || (on && rxq->rx_deferred_start))
continue;
if (on)
ret = i40e_dev_rx_queue_start(dev, i);
@@ -3356,26 +3377,26 @@ i40e_vsi_switch_rx_queues(struct i40e_vsi *vsi, bool on)
/* Switch on or off all the rx/tx queues */
int
-i40e_vsi_switch_queues(struct i40e_vsi *vsi, bool on)
+i40e_dev_switch_queues(struct i40e_pf *pf, bool on)
{
int ret;
if (on) {
/* enable rx queues before enabling tx queues */
- ret = i40e_vsi_switch_rx_queues(vsi, on);
+ ret = i40e_dev_switch_rx_queues(pf, on);
if (ret) {
PMD_DRV_LOG(ERR, "Failed to switch rx queues");
return ret;
}
- ret = i40e_vsi_switch_tx_queues(vsi, on);
+ ret = i40e_dev_switch_tx_queues(pf, on);
} else {
/* Stop tx queues before stopping rx queues */
- ret = i40e_vsi_switch_tx_queues(vsi, on);
+ ret = i40e_dev_switch_tx_queues(pf, on);
if (ret) {
PMD_DRV_LOG(ERR, "Failed to switch tx queues");
return ret;
}
- ret = i40e_vsi_switch_rx_queues(vsi, on);
+ ret = i40e_dev_switch_rx_queues(pf, on);
}
return ret;
@@ -3383,15 +3404,18 @@ i40e_vsi_switch_queues(struct i40e_vsi *vsi, bool on)
/* Initialize VSI for TX */
static int
-i40e_vsi_tx_init(struct i40e_vsi *vsi)
+i40e_dev_tx_init(struct i40e_pf *pf)
{
- struct i40e_pf *pf = I40E_VSI_TO_PF(vsi);
struct rte_eth_dev_data *data = pf->dev_data;
uint16_t i;
uint32_t ret = I40E_SUCCESS;
+ struct i40e_tx_queue *txq;
for (i = 0; i < data->nb_tx_queues; i++) {
- ret = i40e_tx_queue_init(data->tx_queues[i]);
+ txq = data->tx_queues[i];
+ if (!txq || !txq->q_set)
+ continue;
+ ret = i40e_tx_queue_init(txq);
if (ret != I40E_SUCCESS)
break;
}
@@ -3401,16 +3425,20 @@ i40e_vsi_tx_init(struct i40e_vsi *vsi)
/* Initialize VSI for RX */
static int
-i40e_vsi_rx_init(struct i40e_vsi *vsi)
+i40e_dev_rx_init(struct i40e_pf *pf)
{
- struct i40e_pf *pf = I40E_VSI_TO_PF(vsi);
struct rte_eth_dev_data *data = pf->dev_data;
int ret = I40E_SUCCESS;
uint16_t i;
+ struct i40e_rx_queue *rxq;
i40e_pf_config_mq_rx(pf);
for (i = 0; i < data->nb_rx_queues; i++) {
- ret = i40e_rx_queue_init(data->rx_queues[i]);
+ rxq = data->rx_queues[i];
+ if (!rxq || !rxq->q_set)
+ continue;
+
+ ret = i40e_rx_queue_init(rxq);
if (ret != I40E_SUCCESS) {
PMD_DRV_LOG(ERR, "Failed to do RX queue "
"initialization");
@@ -3421,20 +3449,19 @@ i40e_vsi_rx_init(struct i40e_vsi *vsi)
return ret;
}
-/* Initialize VSI */
static int
-i40e_vsi_init(struct i40e_vsi *vsi)
+i40e_dev_rxtx_init(struct i40e_pf *pf)
{
int err;
- err = i40e_vsi_tx_init(vsi);
+ err = i40e_dev_tx_init(pf);
if (err) {
- PMD_DRV_LOG(ERR, "Failed to do vsi TX initialization");
+ PMD_DRV_LOG(ERR, "Failed to do TX initialization");
return err;
}
- err = i40e_vsi_rx_init(vsi);
+ err = i40e_dev_rx_init(pf);
if (err) {
- PMD_DRV_LOG(ERR, "Failed to do vsi RX initialization");
+ PMD_DRV_LOG(ERR, "Failed to do RX initialization");
return err;
}
@@ -4806,6 +4833,26 @@ i40e_dev_udp_tunnel_del(struct rte_eth_dev *dev,
return ret;
}
+/* Calculate the maximum number of contiguous PF queues that are configured */
+static int
+i40e_pf_calc_configured_queues_num(struct i40e_pf *pf)
+{
+ struct rte_eth_dev_data *data = pf->dev_data;
+ int i, num;
+ struct i40e_rx_queue *rxq;
+
+ num = 0;
+ for (i = 0; i < pf->lan_nb_qps; i++) {
+ rxq = data->rx_queues[i];
+ if (rxq && rxq->q_set)
+ num++;
+ else
+ break;
+ }
+
+ return num;
+}
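The helper above counts only the leading run of configured RX queues; the first gap terminates the count even if later queues are configured. A minimal stand-alone sketch of the same logic (the function name and the int-array stand-in for `rx_queues[]`/`q_set` are illustrative, not driver code):

```c
#include <assert.h>

/*
 * Sketch of i40e_pf_calc_configured_queues_num(): q_set[i] != 0 stands in
 * for "rx_queues[i] is allocated and its q_set flag is set".
 */
static int calc_contiguous_configured(const int *q_set, int lan_nb_qps)
{
    int num = 0;
    int i;

    for (i = 0; i < lan_nb_qps; i++) {
        if (q_set[i])
            num++;
        else
            break;  /* stop at the first gap: only the contiguous prefix counts */
    }
    return num;
}
```

Note that a configuration like {set, set, unset, set} reports 2, not 3: the RSS LUT below can only be spread over a contiguous prefix of queues.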
+
/* Configure RSS */
static int
i40e_pf_config_rss(struct i40e_pf *pf)
@@ -4813,7 +4860,25 @@ i40e_pf_config_rss(struct i40e_pf *pf)
struct i40e_hw *hw = I40E_PF_TO_HW(pf);
struct rte_eth_rss_conf rss_conf;
uint32_t i, lut = 0;
- uint16_t j, num = i40e_align_floor(pf->dev_data->nb_rx_queues);
+ uint16_t j, num;
+
+ /*
+ * If both VMDQ and RSS are enabled, not all PF queues are configured.
+ * It's necessary to calculate the actual number of PF queues that are
+ * configured.
+ */
+ if (pf->dev_data->dev_conf.rxmode.mq_mode & ETH_MQ_RX_VMDQ_FLAG) {
+ num = i40e_pf_calc_configured_queues_num(pf);
+ num = i40e_align_floor(num);
+ } else
+ num = i40e_align_floor(pf->dev_data->nb_rx_queues);
+
+ PMD_INIT_LOG(INFO, "Max %u contiguous PF queues are configured",
+ num);
+
+ if (num == 0) {
+ PMD_INIT_LOG(ERR, "No PF queues are configured to enable RSS");
+ return -ENOTSUP;
+ }
for (i = 0, j = 0; i < hw->func_caps.rss_table_size; i++, j++) {
if (j == num)
@@ -4911,18 +4976,21 @@ i40e_tunnel_filter_handle(struct rte_eth_dev *dev, enum rte_filter_op filter_op,
static int
i40e_pf_config_mq_rx(struct i40e_pf *pf)
{
- if (!pf->dev_data->sriov.active) {
- switch (pf->dev_data->dev_conf.rxmode.mq_mode) {
- case ETH_MQ_RX_RSS:
- i40e_pf_config_rss(pf);
- break;
- default:
- i40e_pf_disable_rss(pf);
- break;
- }
+ int ret = 0;
+ enum rte_eth_rx_mq_mode mq_mode = pf->dev_data->dev_conf.rxmode.mq_mode;
+
+ if (mq_mode & ETH_MQ_RX_DCB_FLAG) {
+ PMD_INIT_LOG(ERR, "i40e doesn't support DCB yet");
+ return -ENOTSUP;
}
- return 0;
+ /* RSS setup */
+ if (mq_mode & ETH_MQ_RX_RSS_FLAG)
+ ret = i40e_pf_config_rss(pf);
+ else
+ i40e_pf_disable_rss(pf);
+
+ return ret;
}
static int
diff --git a/lib/librte_pmd_i40e/i40e_ethdev.h b/lib/librte_pmd_i40e/i40e_ethdev.h
index 69512cd..afa14aa 100644
--- a/lib/librte_pmd_i40e/i40e_ethdev.h
+++ b/lib/librte_pmd_i40e/i40e_ethdev.h
@@ -355,7 +355,7 @@ struct i40e_adapter {
};
};
-int i40e_vsi_switch_queues(struct i40e_vsi *vsi, bool on);
+int i40e_dev_switch_queues(struct i40e_pf *pf, bool on);
int i40e_vsi_release(struct i40e_vsi *vsi);
struct i40e_vsi *i40e_vsi_setup(struct i40e_pf *pf,
enum i40e_vsi_type type,
@@ -409,7 +409,7 @@ i40e_get_vsi_from_adapter(struct i40e_adapter *adapter)
return pf->main_vsi;
}
}
-#define I40E_DEV_PRIVATE_TO_VSI(adapter) \
+#define I40E_DEV_PRIVATE_TO_MAIN_VSI(adapter) \
i40e_get_vsi_from_adapter((struct i40e_adapter *)adapter)
/* I40E_VSI_TO */
diff --git a/lib/librte_pmd_i40e/i40e_rxtx.c b/lib/librte_pmd_i40e/i40e_rxtx.c
index 315a9c0..487591d 100644
--- a/lib/librte_pmd_i40e/i40e_rxtx.c
+++ b/lib/librte_pmd_i40e/i40e_rxtx.c
@@ -1486,14 +1486,58 @@ i40e_xmit_pkts_simple(void *tx_queue,
return nb_tx;
}
+/*
+ * Find the VSI a queue belongs to. 'queue_idx' is the queue index the
+ * application uses, which is assumed to be sequential. From the driver's
+ * perspective it is different: for example, q0 belongs to the FDIR VSI,
+ * q1-q64 to the MAIN VSI, q65-q96 to SRIOV VSIs, and q97-q128 to VMDQ VSIs.
+ * An application running on the host can use q1-q64 and q97-q128, 96 queues
+ * in total, addressed with queue_idx 0 to 95, while the real queue indexes
+ * differ. This function maps a queue_idx to the VSI the queue belongs to.
+ */
+static struct i40e_vsi*
+i40e_pf_get_vsi_by_qindex(struct i40e_pf *pf, uint16_t queue_idx)
+{
+ /* the queue in MAIN VSI range */
+ if (queue_idx < pf->main_vsi->nb_qps)
+ return pf->main_vsi;
+
+ queue_idx -= pf->main_vsi->nb_qps;
+
+ /* queue_idx is greater than VMDQ VSIs range */
+ if (queue_idx > pf->nb_cfg_vmdq_vsi * pf->vmdq_nb_qps - 1) {
+ PMD_INIT_LOG(ERR, "queue_idx out of range. VMDQ configured?");
+ return NULL;
+ }
+
+ return pf->vmdq[queue_idx / pf->vmdq_nb_qps].vsi;
+}
+
+static uint16_t
+i40e_get_queue_offset_by_qindex(struct i40e_pf *pf, uint16_t queue_idx)
+{
+ /* the queue in MAIN VSI range */
+ if (queue_idx < pf->main_vsi->nb_qps)
+ return queue_idx;
+
+ /* It's VMDQ queues */
+ queue_idx -= pf->main_vsi->nb_qps;
+
+ if (pf->nb_cfg_vmdq_vsi)
+ return queue_idx % pf->vmdq_nb_qps;
+ else {
+ PMD_INIT_LOG(ERR, "Fail to get queue offset");
+ return (uint16_t)(-1);
+ }
+}
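Taken together, the two helpers above split an application queue index into a (VSI, offset-within-VSI) pair. A simplified stand-alone model of that arithmetic, assuming the hypothetical layout from the comment (a MAIN VSI followed by VMDQ VSIs of equal size; the function names and parameters here are illustrative, not driver API):

```c
#include <assert.h>
#include <stdint.h>

/* Returns 0 for the MAIN VSI, 1..nb_vmdq_vsi for a VMDQ VSI, or -1 when
 * the application queue index is out of range. */
static int vsi_index_by_qindex(uint16_t queue_idx, uint16_t main_nb_qps,
                               uint16_t nb_vmdq_vsi, uint16_t vmdq_nb_qps)
{
    if (queue_idx < main_nb_qps)
        return 0;                           /* falls in the MAIN VSI range */
    queue_idx -= main_nb_qps;               /* rebase into the VMDQ range */
    if (queue_idx >= nb_vmdq_vsi * vmdq_nb_qps)
        return -1;                          /* beyond the configured VMDQ VSIs */
    return 1 + queue_idx / vmdq_nb_qps;     /* which VMDQ VSI it lands in */
}

/* Offset of the queue inside its VSI, mirroring the modulo in
 * i40e_get_queue_offset_by_qindex(). */
static uint16_t queue_offset_by_qindex(uint16_t queue_idx, uint16_t main_nb_qps,
                                       uint16_t vmdq_nb_qps)
{
    if (queue_idx < main_nb_qps)
        return queue_idx;                   /* MAIN VSI: offset == index */
    return (uint16_t)((queue_idx - main_nb_qps) % vmdq_nb_qps);
}
```

With 64 MAIN queue pairs and two VMDQ VSIs of 4 queue pairs each, application index 66 maps to VMDQ VSI 1 at offset 2; `reg_idx` is then the target VSI's `base_queue` plus that offset.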
+
int
i40e_dev_rx_queue_start(struct rte_eth_dev *dev, uint16_t rx_queue_id)
{
- struct i40e_vsi *vsi = I40E_DEV_PRIVATE_TO_VSI(dev->data->dev_private);
struct i40e_rx_queue *rxq;
int err = -1;
- struct i40e_hw *hw = I40E_VSI_TO_HW(vsi);
- uint16_t q_base = vsi->base_queue;
+ struct i40e_hw *hw = I40E_DEV_PRIVATE_TO_HW(dev->data->dev_private);
PMD_INIT_FUNC_TRACE();
@@ -1511,7 +1555,7 @@ i40e_dev_rx_queue_start(struct rte_eth_dev *dev, uint16_t rx_queue_id)
/* Init the RX tail register. */
I40E_PCI_REG_WRITE(rxq->qrx_tail, rxq->nb_rx_desc - 1);
- err = i40e_switch_rx_queue(hw, rx_queue_id + q_base, TRUE);
+ err = i40e_switch_rx_queue(hw, rxq->reg_idx, TRUE);
if (err) {
PMD_DRV_LOG(ERR, "Failed to switch RX queue %u on",
@@ -1528,16 +1572,18 @@ i40e_dev_rx_queue_start(struct rte_eth_dev *dev, uint16_t rx_queue_id)
int
i40e_dev_rx_queue_stop(struct rte_eth_dev *dev, uint16_t rx_queue_id)
{
- struct i40e_vsi *vsi = I40E_DEV_PRIVATE_TO_VSI(dev->data->dev_private);
struct i40e_rx_queue *rxq;
int err;
- struct i40e_hw *hw = I40E_VSI_TO_HW(vsi);
- uint16_t q_base = vsi->base_queue;
+ struct i40e_hw *hw = I40E_DEV_PRIVATE_TO_HW(dev->data->dev_private);
if (rx_queue_id < dev->data->nb_rx_queues) {
rxq = dev->data->rx_queues[rx_queue_id];
- err = i40e_switch_rx_queue(hw, rx_queue_id + q_base, FALSE);
+ /*
+ * rx_queue_id is the queue id the application refers to, while
+ * rxq->reg_idx is the real queue index.
+ */
+ err = i40e_switch_rx_queue(hw, rxq->reg_idx, FALSE);
if (err) {
PMD_DRV_LOG(ERR, "Failed to switch RX queue %u off",
@@ -1554,15 +1600,20 @@ i40e_dev_rx_queue_stop(struct rte_eth_dev *dev, uint16_t rx_queue_id)
int
i40e_dev_tx_queue_start(struct rte_eth_dev *dev, uint16_t tx_queue_id)
{
- struct i40e_vsi *vsi = I40E_DEV_PRIVATE_TO_VSI(dev->data->dev_private);
int err = -1;
- struct i40e_hw *hw = I40E_VSI_TO_HW(vsi);
- uint16_t q_base = vsi->base_queue;
+ struct i40e_tx_queue *txq;
+ struct i40e_hw *hw = I40E_DEV_PRIVATE_TO_HW(dev->data->dev_private);
PMD_INIT_FUNC_TRACE();
if (tx_queue_id < dev->data->nb_tx_queues) {
- err = i40e_switch_tx_queue(hw, tx_queue_id + q_base, TRUE);
+ txq = dev->data->tx_queues[tx_queue_id];
+
+ /*
+ * tx_queue_id is the queue id the application refers to, while
+ * txq->reg_idx is the real queue index.
+ */
+ err = i40e_switch_tx_queue(hw, txq->reg_idx, TRUE);
if (err)
PMD_DRV_LOG(ERR, "Failed to switch TX queue %u on",
tx_queue_id);
@@ -1574,16 +1625,18 @@ i40e_dev_tx_queue_start(struct rte_eth_dev *dev, uint16_t tx_queue_id)
int
i40e_dev_tx_queue_stop(struct rte_eth_dev *dev, uint16_t tx_queue_id)
{
- struct i40e_vsi *vsi = I40E_DEV_PRIVATE_TO_VSI(dev->data->dev_private);
struct i40e_tx_queue *txq;
int err;
- struct i40e_hw *hw = I40E_VSI_TO_HW(vsi);
- uint16_t q_base = vsi->base_queue;
+ struct i40e_hw *hw = I40E_DEV_PRIVATE_TO_HW(dev->data->dev_private);
if (tx_queue_id < dev->data->nb_tx_queues) {
txq = dev->data->tx_queues[tx_queue_id];
- err = i40e_switch_tx_queue(hw, tx_queue_id + q_base, FALSE);
+ /*
+ * tx_queue_id is the queue id the application refers to, while
+ * txq->reg_idx is the real queue index.
+ */
+ err = i40e_switch_tx_queue(hw, txq->reg_idx, FALSE);
if (err) {
PMD_DRV_LOG(ERR, "Failed to switch TX queue %u off",
@@ -1606,14 +1659,23 @@ i40e_dev_rx_queue_setup(struct rte_eth_dev *dev,
const struct rte_eth_rxconf *rx_conf,
struct rte_mempool *mp)
{
- struct i40e_vsi *vsi = I40E_DEV_PRIVATE_TO_VSI(dev->data->dev_private);
+ struct i40e_vsi *vsi;
+ struct i40e_hw *hw = I40E_DEV_PRIVATE_TO_HW(dev->data->dev_private);
+ struct i40e_pf *pf = I40E_DEV_PRIVATE_TO_PF(dev->data->dev_private);
struct i40e_rx_queue *rxq;
const struct rte_memzone *rz;
uint32_t ring_size;
uint16_t len;
int use_def_burst_func = 1;
- if (!vsi || queue_idx >= vsi->nb_qps) {
+ if (hw->mac.type == I40E_MAC_VF) {
+ struct i40e_vf *vf =
+ I40EVF_DEV_PRIVATE_TO_VF(dev->data->dev_private);
+ vsi = &vf->vsi;
+ } else
+ vsi = i40e_pf_get_vsi_by_qindex(pf, queue_idx);
+
+ if (vsi == NULL) {
PMD_DRV_LOG(ERR, "VSI not available or queue "
"index exceeds the maximum");
return I40E_ERR_PARAM;
@@ -1646,7 +1708,12 @@ i40e_dev_rx_queue_setup(struct rte_eth_dev *dev,
rxq->nb_rx_desc = nb_desc;
rxq->rx_free_thresh = rx_conf->rx_free_thresh;
rxq->queue_id = queue_idx;
- rxq->reg_idx = vsi->base_queue + queue_idx;
+ if (hw->mac.type == I40E_MAC_VF)
+ rxq->reg_idx = queue_idx;
+ else /* PF device */
+ rxq->reg_idx = vsi->base_queue +
+ i40e_get_queue_offset_by_qindex(pf, queue_idx);
+
rxq->port_id = dev->data->port_id;
rxq->crc_len = (uint8_t) ((dev->data->dev_conf.rxmode.hw_strip_crc) ?
0 : ETHER_CRC_LEN);
@@ -1804,13 +1871,22 @@ i40e_dev_tx_queue_setup(struct rte_eth_dev *dev,
unsigned int socket_id,
const struct rte_eth_txconf *tx_conf)
{
- struct i40e_vsi *vsi = I40E_DEV_PRIVATE_TO_VSI(dev->data->dev_private);
+ struct i40e_vsi *vsi;
+ struct i40e_hw *hw = I40E_DEV_PRIVATE_TO_HW(dev->data->dev_private);
+ struct i40e_pf *pf = I40E_DEV_PRIVATE_TO_PF(dev->data->dev_private);
struct i40e_tx_queue *txq;
const struct rte_memzone *tz;
uint32_t ring_size;
uint16_t tx_rs_thresh, tx_free_thresh;
- if (!vsi || queue_idx >= vsi->nb_qps) {
+ if (hw->mac.type == I40E_MAC_VF) {
+ struct i40e_vf *vf =
+ I40EVF_DEV_PRIVATE_TO_VF(dev->data->dev_private);
+ vsi = &vf->vsi;
+ } else
+ vsi = i40e_pf_get_vsi_by_qindex(pf, queue_idx);
+
+ if (vsi == NULL) {
PMD_DRV_LOG(ERR, "VSI is NULL, or queue index (%u) "
"exceeds the maximum", queue_idx);
return I40E_ERR_PARAM;
@@ -1934,7 +2010,12 @@ i40e_dev_tx_queue_setup(struct rte_eth_dev *dev,
txq->hthresh = tx_conf->tx_thresh.hthresh;
txq->wthresh = tx_conf->tx_thresh.wthresh;
txq->queue_id = queue_idx;
- txq->reg_idx = vsi->base_queue + queue_idx;
+ if (hw->mac.type == I40E_MAC_VF)
+ txq->reg_idx = queue_idx;
+ else /* PF device */
+ txq->reg_idx = vsi->base_queue +
+ i40e_get_queue_offset_by_qindex(pf, queue_idx);
+
txq->port_id = dev->data->port_id;
txq->txq_flags = tx_conf->txq_flags;
txq->vsi = vsi;
--
1.7.7.6
^ permalink raw reply [flat|nested] 45+ messages in thread
* Re: [dpdk-dev] [PATCH v3 0/6] i40e VMDQ support
2014-11-04 10:01 ` [dpdk-dev] [PATCH v3 " Chen Jing D(Mark)
` (5 preceding siblings ...)
2014-11-04 10:01 ` [dpdk-dev] [PATCH v3 6/6] i40e: Add full VMDQ pools support Chen Jing D(Mark)
@ 2014-11-04 11:19 ` Ananyev, Konstantin
2014-11-04 23:17 ` Thomas Monjalon
2014-12-11 6:09 ` Cao, Min
7 siblings, 1 reply; 45+ messages in thread
From: Ananyev, Konstantin @ 2014-11-04 11:19 UTC (permalink / raw)
To: Chen, Jing D, dev
> From: Chen, Jing D
> Sent: Tuesday, November 04, 2014 10:01 AM
> To: dev@dpdk.org
> Cc: Ananyev, Konstantin; thomas.monjalon@6wind.com; De Lara Guarch, Pablo; Chen, Jing D
> Subject: [PATCH v3 0/6] i40e VMDQ support
>
> From: "Chen Jing D(Mark)" <jing.d.chen@intel.com>
>
> v3:
> - Fix comments style.
> - Simplify words in comments.
> - Add variable defintion for BSD config file.
> - Code rebase to latest DPDK repo.
>
> v2:
> - Fix a few typos.
> - Add comments for RX mq mode flags.
> - Remove '\n' from some log messages.
> - Remove 'Acked-by' in commit log.
>
> v1:
> Define extra VMDQ arguments to expand VMDQ configuration. This also
> includes change in igb and ixgbe PMD driver. In the meanwhile, fix 2
> defects in rte_ether library.
>
> Add full VMDQ support in the i40e PMD driver: rename some functions and
> set up VMDQ VSIs after they are enabled in the application. It also makes
> some improvements to macaddr add/delete to support setting multiple MAC
> addresses for single or multiple pools.
>
> Finally, change i40e rx/tx_queue_setup and dev_start/stop functions to
> configure/switch queues belonging to VMDQ pools.
>
>
> Chen Jing D(Mark) (6):
> ether: enhancement for VMDQ support
> igb: change for VMDQ arguments expansion
> ixgbe: change for VMDQ arguments expansion
> i40e: add VMDQ support
> i40e: macaddr add/del enhancement
> i40e: Add full VMDQ pools support
>
> config/common_bsdapp | 1 +
> config/common_linuxapp | 1 +
> lib/librte_ether/rte_ethdev.c | 6 +-
> lib/librte_ether/rte_ethdev.h | 41 ++-
> lib/librte_pmd_e1000/igb_ethdev.c | 3 +
> lib/librte_pmd_i40e/i40e_ethdev.c | 498 ++++++++++++++++++++++++++---------
> lib/librte_pmd_i40e/i40e_ethdev.h | 21 ++-
> lib/librte_pmd_i40e/i40e_rxtx.c | 125 +++++++--
> lib/librte_pmd_ixgbe/ixgbe_ethdev.c | 1 +
> 9 files changed, 532 insertions(+), 165 deletions(-)
>
> --
> 1.7.7.6
Acked-by: Konstantin Ananyev <konstantin.ananyev@intel.com>
^ permalink raw reply [flat|nested] 45+ messages in thread
* Re: [dpdk-dev] [PATCH v3 0/6] i40e VMDQ support
2014-11-04 11:19 ` [dpdk-dev] [PATCH v3 0/6] i40e VMDQ support Ananyev, Konstantin
@ 2014-11-04 23:17 ` Thomas Monjalon
0 siblings, 0 replies; 45+ messages in thread
From: Thomas Monjalon @ 2014-11-04 23:17 UTC (permalink / raw)
To: Chen, Jing D; +Cc: dev
> > v3:
> > - Fix comments style.
> > - Simplify words in comments.
> > - Add variable definition for BSD config file.
> > - Code rebase to latest DPDK repo.
> >
> > v2:
> > - Fix a few typos.
> > - Add comments for RX mq mode flags.
> > - Remove '\n' from some log messages.
> > - Remove 'Acked-by' in commit log.
> >
> > v1:
> > Define extra VMDQ arguments to expand VMDQ configuration. This also
> > includes change in igb and ixgbe PMD driver. In the meanwhile, fix 2
> > defects in rte_ether library.
> >
> > Add full VMDQ support in the i40e PMD driver: rename some functions and
> > set up VMDQ VSIs after they are enabled in the application. It also makes
> > some improvements to macaddr add/delete to support setting multiple MAC
> > addresses for single or multiple pools.
> >
> > Finally, change i40e rx/tx_queue_setup and dev_start/stop functions to
> > configure/switch queues belonging to VMDQ pools.
> >
> > Chen Jing D(Mark) (6):
> > ether: enhancement for VMDQ support
> > igb: change for VMDQ arguments expansion
> > ixgbe: change for VMDQ arguments expansion
> > i40e: add VMDQ support
> > i40e: macaddr add/del enhancement
> > i40e: Add full VMDQ pools support
>
> Acked-by: Konstantin Ananyev <konstantin.ananyev@intel.com>
Applied
It will need to be well explained in the programmer's guide.
Thanks
--
Thomas
^ permalink raw reply [flat|nested] 45+ messages in thread
* Re: [dpdk-dev] [PATCH v3 0/6] i40e VMDQ support
2014-11-04 10:01 ` [dpdk-dev] [PATCH v3 " Chen Jing D(Mark)
` (6 preceding siblings ...)
2014-11-04 11:19 ` [dpdk-dev] [PATCH v3 0/6] i40e VMDQ support Ananyev, Konstantin
@ 2014-12-11 6:09 ` Cao, Min
7 siblings, 0 replies; 45+ messages in thread
From: Cao, Min @ 2014-12-11 6:09 UTC (permalink / raw)
To: dev
Tested-by: Min Cao <min.cao@intel.com>
Patch name: i40e VMDQ support
Brief description:
Test Flag: Tested-by
Tester name: min.cao@intel.com
Result summary: total 1 cases, 1 passed, 0 failed
Test Case 1:
Name: perf_vmdq_performance
Environment: OS: Fedora20 3.11.10-301.fc20.x86_64
gcc (GCC) 4.8.2
CPU: Intel(R) Xeon(R) CPU E5-2680 0 @ 2.70GHz
NIC: Fortville eagle
Test result: PASSED
-----Original Message-----
From: dev [mailto:dev-bounces@dpdk.org] On Behalf Of Chen Jing D(Mark)
Sent: Tuesday, November 04, 2014 6:01 PM
To: dev@dpdk.org
Subject: [dpdk-dev] [PATCH v3 0/6] i40e VMDQ support
From: "Chen Jing D(Mark)" <jing.d.chen@intel.com>
v3:
- Fix comments style.
- Simplify words in comments.
- Add variable definition for BSD config file.
- Code rebase to latest DPDK repo.
v2:
- Fix a few typos.
- Add comments for RX mq mode flags.
- Remove '\n' from some log messages.
- Remove 'Acked-by' in commit log.
v1:
Define extra VMDQ arguments to expand VMDQ configuration. This also
includes change in igb and ixgbe PMD driver. In the meanwhile, fix 2
defects in rte_ether library.
Add full VMDQ support in the i40e PMD driver: rename some functions and
set up VMDQ VSIs after they are enabled in the application. It also makes
some improvements to macaddr add/delete to support setting multiple MAC
addresses for single or multiple pools.
Finally, change i40e rx/tx_queue_setup and dev_start/stop functions to
configure/switch queues belonging to VMDQ pools.
Chen Jing D(Mark) (6):
ether: enhancement for VMDQ support
igb: change for VMDQ arguments expansion
ixgbe: change for VMDQ arguments expansion
i40e: add VMDQ support
i40e: macaddr add/del enhancement
i40e: Add full VMDQ pools support
config/common_bsdapp | 1 +
config/common_linuxapp | 1 +
lib/librte_ether/rte_ethdev.c | 6 +-
lib/librte_ether/rte_ethdev.h | 41 ++-
lib/librte_pmd_e1000/igb_ethdev.c | 3 +
lib/librte_pmd_i40e/i40e_ethdev.c | 498 ++++++++++++++++++++++++++---------
lib/librte_pmd_i40e/i40e_ethdev.h | 21 ++-
lib/librte_pmd_i40e/i40e_rxtx.c | 125 +++++++--
lib/librte_pmd_ixgbe/ixgbe_ethdev.c | 1 +
9 files changed, 532 insertions(+), 165 deletions(-)
--
1.7.7.6
^ permalink raw reply [flat|nested] 45+ messages in thread
* [dpdk-dev] [PATCH 2/6] igb: change for VMDQ arguments expansion
2014-09-23 13:14 [dpdk-dev] [PATCH 0/6] i40e VMDQ support Chen Jing D(Mark)
2014-09-23 13:14 ` [dpdk-dev] [PATCH 1/6] ether: enhancement for " Chen Jing D(Mark)
@ 2014-09-23 13:14 ` Chen Jing D(Mark)
2014-09-23 13:14 ` [dpdk-dev] [PATCH 3/6] ixgbe: " Chen Jing D(Mark)
` (6 subsequent siblings)
8 siblings, 0 replies; 45+ messages in thread
From: Chen Jing D(Mark) @ 2014-09-23 13:14 UTC (permalink / raw)
To: dev
From: "Chen Jing D(Mark)" <jing.d.chen@intel.com>
Assign new VMDQ arguments with correct values.
Signed-off-by: Chen Jing D(Mark) <jing.d.chen@intel.com>
Acked-by: Konstantin Ananyev <konstantin.ananyev@intel.com>
Acked-by: Jingjing Wu <jingjing.wu@intel.com>
Acked-by: Jijiang Liu <jijiang.liu@intel.com>
Acked-by: Huawei Xie <huawei.xie@intel.com>
---
lib/librte_pmd_e1000/igb_ethdev.c | 3 +++
1 files changed, 3 insertions(+), 0 deletions(-)
diff --git a/lib/librte_pmd_e1000/igb_ethdev.c b/lib/librte_pmd_e1000/igb_ethdev.c
index c9acdc5..dc0ea6d 100644
--- a/lib/librte_pmd_e1000/igb_ethdev.c
+++ b/lib/librte_pmd_e1000/igb_ethdev.c
@@ -1286,18 +1286,21 @@ eth_igb_infos_get(struct rte_eth_dev *dev,
dev_info->max_rx_queues = 16;
dev_info->max_tx_queues = 16;
dev_info->max_vmdq_pools = ETH_8_POOLS;
+ dev_info->vmdq_queue_num = 16;
break;
case e1000_82580:
dev_info->max_rx_queues = 8;
dev_info->max_tx_queues = 8;
dev_info->max_vmdq_pools = ETH_8_POOLS;
+ dev_info->vmdq_queue_num = 8;
break;
case e1000_i350:
dev_info->max_rx_queues = 8;
dev_info->max_tx_queues = 8;
dev_info->max_vmdq_pools = ETH_8_POOLS;
+ dev_info->vmdq_queue_num = 8;
break;
case e1000_i354:
--
1.7.7.6
^ permalink raw reply [flat|nested] 45+ messages in thread
* [dpdk-dev] [PATCH 3/6] ixgbe: change for VMDQ arguments expansion
2014-09-23 13:14 [dpdk-dev] [PATCH 0/6] i40e VMDQ support Chen Jing D(Mark)
2014-09-23 13:14 ` [dpdk-dev] [PATCH 1/6] ether: enhancement for " Chen Jing D(Mark)
2014-09-23 13:14 ` [dpdk-dev] [PATCH 2/6] igb: change for VMDQ arguments expansion Chen Jing D(Mark)
@ 2014-09-23 13:14 ` Chen Jing D(Mark)
2014-09-23 13:14 ` [dpdk-dev] [PATCH 4/6] i40e: add VMDQ support Chen Jing D(Mark)
` (5 subsequent siblings)
8 siblings, 0 replies; 45+ messages in thread
From: Chen Jing D(Mark) @ 2014-09-23 13:14 UTC (permalink / raw)
To: dev
From: "Chen Jing D(Mark)" <jing.d.chen@intel.com>
Assign new VMDQ arguments with correct values.
Signed-off-by: Chen Jing D(Mark) <jing.d.chen@intel.com>
Acked-by: Konstantin Ananyev <konstantin.ananyev@intel.com>
Acked-by: Jingjing Wu <jingjing.wu@intel.com>
Acked-by: Jijiang Liu <jijiang.liu@intel.com>
Acked-by: Huawei Xie <huawei.xie@intel.com>
---
lib/librte_pmd_ixgbe/ixgbe_ethdev.c | 1 +
1 files changed, 1 insertions(+), 0 deletions(-)
diff --git a/lib/librte_pmd_ixgbe/ixgbe_ethdev.c b/lib/librte_pmd_ixgbe/ixgbe_ethdev.c
index f4b590b..d0f9bcb 100644
--- a/lib/librte_pmd_ixgbe/ixgbe_ethdev.c
+++ b/lib/librte_pmd_ixgbe/ixgbe_ethdev.c
@@ -1933,6 +1933,7 @@ ixgbe_dev_info_get(struct rte_eth_dev *dev, struct rte_eth_dev_info *dev_info)
dev_info->max_vmdq_pools = ETH_16_POOLS;
else
dev_info->max_vmdq_pools = ETH_64_POOLS;
+ dev_info->vmdq_queue_num = dev_info->max_rx_queues;
dev_info->rx_offload_capa =
DEV_RX_OFFLOAD_VLAN_STRIP |
DEV_RX_OFFLOAD_IPV4_CKSUM |
--
1.7.7.6
^ permalink raw reply [flat|nested] 45+ messages in thread
* [dpdk-dev] [PATCH 4/6] i40e: add VMDQ support
2014-09-23 13:14 [dpdk-dev] [PATCH 0/6] i40e VMDQ support Chen Jing D(Mark)
` (2 preceding siblings ...)
2014-09-23 13:14 ` [dpdk-dev] [PATCH 3/6] ixgbe: " Chen Jing D(Mark)
@ 2014-09-23 13:14 ` Chen Jing D(Mark)
2014-10-13 16:14 ` De Lara Guarch, Pablo
2014-09-23 13:14 ` [dpdk-dev] [PATCH 5/6] i40e: macaddr add/del enhancement Chen Jing D(Mark)
` (4 subsequent siblings)
8 siblings, 1 reply; 45+ messages in thread
From: Chen Jing D(Mark) @ 2014-09-23 13:14 UTC (permalink / raw)
To: dev
From: "Chen Jing D(Mark)" <jing.d.chen@intel.com>
The change includes several parts:
1. Get maximum number of VMDQ pools supported in dev_init.
2. Fill VMDQ info in i40e_dev_info_get.
3. Setup VMDQ pools in i40e_dev_configure.
4. i40e_vsi_setup change to support creation of VMDQ VSI.
Signed-off-by: Chen Jing D(Mark) <jing.d.chen@intel.com>
Acked-by: Konstantin Ananyev <konstantin.ananyev@intel.com>
Acked-by: Jingjing Wu <jingjing.wu@intel.com>
Acked-by: Jijiang Liu <jijiang.liu@intel.com>
Acked-by: Huawei Xie <huawei.xie@intel.com>
---
config/common_linuxapp | 1 +
lib/librte_pmd_i40e/i40e_ethdev.c | 237 ++++++++++++++++++++++++++++++++-----
lib/librte_pmd_i40e/i40e_ethdev.h | 17 +++-
3 files changed, 225 insertions(+), 30 deletions(-)
diff --git a/config/common_linuxapp b/config/common_linuxapp
index 5bee910..d0bb3f7 100644
--- a/config/common_linuxapp
+++ b/config/common_linuxapp
@@ -208,6 +208,7 @@ CONFIG_RTE_LIBRTE_I40E_RX_ALLOW_BULK_ALLOC=y
CONFIG_RTE_LIBRTE_I40E_ALLOW_UNSUPPORTED_SFP=n
CONFIG_RTE_LIBRTE_I40E_16BYTE_RX_DESC=n
CONFIG_RTE_LIBRTE_I40E_QUEUE_NUM_PER_VF=4
+CONFIG_RTE_LIBRTE_I40E_QUEUE_NUM_PER_VM=4
# interval up to 8160 us, aligned to 2 (or default value)
CONFIG_RTE_LIBRTE_I40E_ITR_INTERVAL=-1
diff --git a/lib/librte_pmd_i40e/i40e_ethdev.c b/lib/librte_pmd_i40e/i40e_ethdev.c
index a00d6ca..a267c96 100644
--- a/lib/librte_pmd_i40e/i40e_ethdev.c
+++ b/lib/librte_pmd_i40e/i40e_ethdev.c
@@ -168,6 +168,7 @@ static int i40e_get_cap(struct i40e_hw *hw);
static int i40e_pf_parameter_init(struct rte_eth_dev *dev);
static int i40e_pf_setup(struct i40e_pf *pf);
static int i40e_vsi_init(struct i40e_vsi *vsi);
+static int i40e_vmdq_setup(struct rte_eth_dev *dev);
static void i40e_stat_update_32(struct i40e_hw *hw, uint32_t reg,
bool offset_loaded, uint64_t *offset, uint64_t *stat);
static void i40e_stat_update_48(struct i40e_hw *hw,
@@ -269,21 +270,11 @@ static struct eth_driver rte_i40e_pmd = {
};
static inline int
-i40e_prev_power_of_2(int n)
+i40e_align_floor(int n)
{
- int p = n;
-
- --p;
- p |= p >> 1;
- p |= p >> 2;
- p |= p >> 4;
- p |= p >> 8;
- p |= p >> 16;
- if (p == (n - 1))
- return n;
- p >>= 1;
-
- return ++p;
+ if (n == 0)
+ return 0;
+ return (1 << (sizeof(n) * CHAR_BIT - 1 - __builtin_clz(n)));
}
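The new `i40e_align_floor()` rounds its argument down to the nearest power of two using `__builtin_clz` (with an explicit guard, since the builtin is undefined for 0). A portable, loop-based sketch of the same behavior, assuming the small non-negative inputs (queue counts) it is used with:

```c
#include <assert.h>

/* Largest power of two <= n; 0 when n <= 0. Equivalent in result to the
 * __builtin_clz version for the small positive queue counts involved. */
static int align_floor(int n)
{
    int p = 1;

    if (n <= 0)
        return 0;
    while (p <= n / 2)      /* divide rather than multiply to avoid overflow */
        p *= 2;
    return p;
}
```

This replaces the bit-twiddling `i40e_prev_power_of_2()` with a clearer equivalent; for example, a pool of 96 available queues is floored to 64 so the RSS LUT divides evenly.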
static inline int
@@ -500,7 +491,7 @@ eth_i40e_dev_init(__rte_unused struct eth_driver *eth_drv,
if (!dev->data->mac_addrs) {
PMD_INIT_LOG(ERR, "Failed to allocate memory "
"for storing mac address");
- goto err_get_mac_addr;
+ goto err_mac_alloc;
}
ether_addr_copy((struct ether_addr *)hw->mac.perm_addr,
&dev->data->mac_addrs[0]);
@@ -521,8 +512,9 @@ eth_i40e_dev_init(__rte_unused struct eth_driver *eth_drv,
return 0;
+err_mac_alloc:
+ i40e_vsi_release(pf->main_vsi);
err_setup_pf_switch:
- rte_free(pf->main_vsi);
err_get_mac_addr:
err_configure_lan_hmc:
(void)i40e_shutdown_lan_hmc(hw);
@@ -541,6 +533,27 @@ err_get_capabilities:
static int
i40e_dev_configure(struct rte_eth_dev *dev)
{
+ int ret;
+ enum rte_eth_rx_mq_mode mq_mode = dev->data->dev_conf.rxmode.mq_mode;
+
+ /* VMDQ setup.
+ * VMDQ setup needs to be moved out of i40e_pf_config_mq_rx() because
+ * the VMDQ and RSS settings have different requirements.
+ * The general PMD driver call sequence is NIC init, configure,
+ * rx/tx_queue_setup and dev_start. rx/tx_queue_setup() will look up
+ * the VSI that a specific queue belongs to if VMDQ is applicable, so
+ * the VMDQ setting has to be done before rx/tx_queue_setup(); this
+ * function is a good place for vmdq_setup.
+ * The RSS setting needs the actual number of configured RX queues,
+ * which is only available after rx_queue_setup(), so dev_start() is a
+ * good place for the RSS setup.
+ */
+ if (mq_mode & ETH_MQ_RX_VMDQ_FLAG) {
+ ret = i40e_vmdq_setup(dev);
+ if (ret)
+ return ret;
+ }
+
return i40e_dev_init_vlan(dev);
}
@@ -1389,6 +1402,16 @@ i40e_dev_info_get(struct rte_eth_dev *dev, struct rte_eth_dev_info *dev_info)
DEV_TX_OFFLOAD_UDP_CKSUM |
DEV_TX_OFFLOAD_TCP_CKSUM |
DEV_TX_OFFLOAD_SCTP_CKSUM;
+
+ if (pf->flags & I40E_FLAG_VMDQ) {
+ dev_info->max_vmdq_pools = pf->max_nb_vmdq_vsi;
+ dev_info->vmdq_queue_base = dev_info->max_rx_queues;
+ dev_info->vmdq_queue_num = pf->vmdq_nb_qps *
+ pf->max_nb_vmdq_vsi;
+ dev_info->vmdq_pool_base = I40E_VMDQ_POOL_BASE;
+ dev_info->max_rx_queues += dev_info->vmdq_queue_num;
+ dev_info->max_tx_queues += dev_info->vmdq_queue_num;
+ }
}
static int
@@ -1814,7 +1837,7 @@ i40e_pf_parameter_init(struct rte_eth_dev *dev)
{
struct i40e_pf *pf = I40E_DEV_PRIVATE_TO_PF(dev->data->dev_private);
struct i40e_hw *hw = I40E_PF_TO_HW(pf);
- uint16_t sum_queues = 0, sum_vsis;
+ uint16_t sum_queues = 0, sum_vsis, left_queues;
/* First check if FW support SRIOV */
if (dev->pci_dev->max_vfs && !hw->func_caps.sr_iov_1_1) {
@@ -1830,7 +1853,7 @@ i40e_pf_parameter_init(struct rte_eth_dev *dev)
pf->flags |= I40E_FLAG_RSS;
pf->lan_nb_qps = RTE_MIN(hw->func_caps.num_tx_qp,
(uint32_t)(1 << hw->func_caps.rss_table_entry_width));
- pf->lan_nb_qps = i40e_prev_power_of_2(pf->lan_nb_qps);
+ pf->lan_nb_qps = i40e_align_floor(pf->lan_nb_qps);
} else
pf->lan_nb_qps = 1;
sum_queues = pf->lan_nb_qps;
@@ -1864,11 +1887,19 @@ i40e_pf_parameter_init(struct rte_eth_dev *dev)
if (hw->func_caps.vmdq) {
pf->flags |= I40E_FLAG_VMDQ;
- pf->vmdq_nb_qps = I40E_DEFAULT_QP_NUM_VMDQ;
- sum_queues += pf->vmdq_nb_qps;
- sum_vsis += 1;
- PMD_INIT_LOG(INFO, "VMDQ queue pairs:%u", pf->vmdq_nb_qps);
+ pf->vmdq_nb_qps = RTE_LIBRTE_I40E_QUEUE_NUM_PER_VM;
+ pf->max_nb_vmdq_vsi = 1;
+ /*
+ * If VMDQ is available, assume a single VSI can be created; this
+ * will be adjusted later.
+ */
+ sum_queues += pf->vmdq_nb_qps * pf->max_nb_vmdq_vsi;
+ sum_vsis += pf->max_nb_vmdq_vsi;
+ } else {
+ pf->vmdq_nb_qps = 0;
+ pf->max_nb_vmdq_vsi = 0;
}
+ pf->nb_cfg_vmdq_vsi = 0;
if (hw->func_caps.fd) {
pf->flags |= I40E_FLAG_FDIR;
@@ -1889,6 +1920,22 @@ i40e_pf_parameter_init(struct rte_eth_dev *dev)
return -EINVAL;
}
+ /* Adjust VMDQ setting to support as many VMs as possible */
+ if (pf->flags & I40E_FLAG_VMDQ) {
+ left_queues = hw->func_caps.num_rx_qp - sum_queues;
+
+ pf->max_nb_vmdq_vsi += RTE_MIN(left_queues / pf->vmdq_nb_qps,
+ pf->max_num_vsi - sum_vsis);
+
+ /* Limit the max VMDQ number to what rte_ether can support */
+ pf->max_nb_vmdq_vsi = RTE_MIN(pf->max_nb_vmdq_vsi,
+ ETH_64_POOLS - 1);
+
+ PMD_INIT_LOG(INFO, "Max VMDQ VSI num:%u",
+ pf->max_nb_vmdq_vsi);
+ PMD_INIT_LOG(INFO, "VMDQ queue pairs:%u", pf->vmdq_nb_qps);
+ }
+
/* Each VSI occupy 1 MSIX interrupt at least, plus IRQ0 for misc intr
* cause */
if (sum_vsis > hw->func_caps.num_msix_vectors - 1) {
@@ -2281,7 +2328,7 @@ i40e_vsi_config_tc_queue_mapping(struct i40e_vsi *vsi,
vsi->enabled_tc = enabled_tcmap;
/* Number of queues per enabled TC */
- qpnum_per_tc = i40e_prev_power_of_2(vsi->nb_qps / total_tc);
+ qpnum_per_tc = i40e_align_floor(vsi->nb_qps / total_tc);
qpnum_per_tc = RTE_MIN(qpnum_per_tc, I40E_MAX_Q_PER_TC);
bsf = rte_bsf32(qpnum_per_tc);
@@ -2587,6 +2634,9 @@ i40e_vsi_setup(struct i40e_pf *pf,
case I40E_VSI_SRIOV :
vsi->nb_qps = pf->vf_nb_qps;
break;
+ case I40E_VSI_VMDQ2:
+ vsi->nb_qps = pf->vmdq_nb_qps;
+ break;
default:
goto fail_mem;
}
@@ -2728,8 +2778,44 @@ i40e_vsi_setup(struct i40e_pf *pf,
* Since VSI is not created yet, only configure parameter,
* will add vsi below.
*/
- }
- else {
+ } else if (type == I40E_VSI_VMDQ2) {
+ memset(&ctxt, 0, sizeof(ctxt));
+ /*
+ * For other VSI, the uplink_seid equals to uplink VSI's
+ * uplink_seid since they share same VEB
+ */
+ vsi->uplink_seid = uplink_vsi->uplink_seid;
+ ctxt.pf_num = hw->pf_id;
+ ctxt.vf_num = 0;
+ ctxt.uplink_seid = vsi->uplink_seid;
+ ctxt.connection_type = 0x1;
+ ctxt.flags = I40E_AQ_VSI_TYPE_VMDQ2;
+
+ ctxt.info.valid_sections |=
+ rte_cpu_to_le_16(I40E_AQ_VSI_PROP_SWITCH_VALID);
+ /* user_param carries flag to enable loop back */
+ if (user_param) {
+ ctxt.info.switch_id =
+ rte_cpu_to_le_16(I40E_AQ_VSI_SW_ID_FLAG_LOCAL_LB);
+ ctxt.info.switch_id |=
+ rte_cpu_to_le_16(I40E_AQ_VSI_SW_ID_FLAG_ALLOW_LB);
+ }
+
+ /* Configure port/vlan */
+ ctxt.info.valid_sections |=
+ rte_cpu_to_le_16(I40E_AQ_VSI_PROP_VLAN_VALID);
+ ctxt.info.port_vlan_flags |= I40E_AQ_VSI_PVLAN_MODE_ALL;
+ ret = i40e_vsi_config_tc_queue_mapping(vsi, &ctxt.info,
+ I40E_DEFAULT_TCMAP);
+ if (ret != I40E_SUCCESS) {
+ PMD_DRV_LOG(ERR, "Failed to configure "
+ "TC queue mapping\n");
+ goto fail_msix_alloc;
+ }
+ ctxt.info.up_enable_bits = I40E_DEFAULT_TCMAP;
+ ctxt.info.valid_sections |=
+ rte_cpu_to_le_16(I40E_AQ_VSI_PROP_SCHED_VALID);
+ } else {
PMD_DRV_LOG(ERR, "VSI: Not support other type VSI yet");
goto fail_msix_alloc;
}
@@ -2901,7 +2987,6 @@ i40e_pf_setup(struct i40e_pf *pf)
{
struct i40e_hw *hw = I40E_PF_TO_HW(pf);
struct i40e_filter_control_settings settings;
- struct rte_eth_dev_data *dev_data = pf->dev_data;
struct i40e_vsi *vsi;
int ret;
@@ -2923,8 +3008,6 @@ i40e_pf_setup(struct i40e_pf *pf)
return I40E_ERR_NOT_READY;
}
pf->main_vsi = vsi;
- dev_data->nb_rx_queues = vsi->nb_qps;
- dev_data->nb_tx_queues = vsi->nb_qps;
/* Configure filter control */
memset(&settings, 0, sizeof(settings));
@@ -3195,6 +3278,102 @@ i40e_vsi_init(struct i40e_vsi *vsi)
return err;
}
+static int
+i40e_vmdq_setup(struct rte_eth_dev *dev)
+{
+ struct rte_eth_conf *conf = &dev->data->dev_conf;
+ struct i40e_pf *pf = I40E_DEV_PRIVATE_TO_PF(dev->data->dev_private);
+ int i, err, conf_vsis, j, loop;
+ struct i40e_vsi *vsi;
+ struct i40e_vmdq_info *vmdq_info;
+ struct rte_eth_vmdq_rx_conf *vmdq_conf;
+ struct i40e_hw *hw = I40E_PF_TO_HW(pf);
+
+ /*
+ * Disable interrupt to avoid messages from VFs. It also avoids race
+ * conditions in VSI creation/destruction.
+ */
+ i40e_pf_disable_irq0(hw);
+
+ if ((pf->flags & I40E_FLAG_VMDQ) == 0) {
+ PMD_INIT_LOG(ERR, "FW doesn't support VMDQ");
+ return -ENOTSUP;
+ }
+
+ conf_vsis = conf->rx_adv_conf.vmdq_rx_conf.nb_queue_pools;
+ if (conf_vsis > pf->max_nb_vmdq_vsi) {
+ PMD_INIT_LOG(ERR, "VMDQ config: %u, max support:%u",
+ conf->rx_adv_conf.vmdq_rx_conf.nb_queue_pools,
+ pf->max_nb_vmdq_vsi);
+ return -ENOTSUP;
+ }
+
+ if (pf->vmdq != NULL) {
+ PMD_INIT_LOG(INFO, "VMDQ already configured");
+ return 0;
+ }
+
+ pf->vmdq = rte_zmalloc("vmdq_info_struct",
+ sizeof(*vmdq_info) * conf_vsis, 0);
+
+ if (pf->vmdq == NULL) {
+ PMD_INIT_LOG(ERR, "Failed to allocate memory");
+ return -ENOMEM;
+ }
+
+ vmdq_conf = &conf->rx_adv_conf.vmdq_rx_conf;
+
+ /* Create VMDQ VSI */
+ for (i = 0; i < conf_vsis; i++) {
+ vsi = i40e_vsi_setup(pf, I40E_VSI_VMDQ2, pf->main_vsi,
+ vmdq_conf->enable_loop_back);
+ if (vsi == NULL) {
+ PMD_INIT_LOG(ERR, "Failed to create VMDQ VSI");
+ err = -1;
+ goto err_vsi_setup;
+ }
+ vmdq_info = &pf->vmdq[i];
+ vmdq_info->pf = pf;
+ vmdq_info->vsi = vsi;
+ }
+ pf->nb_cfg_vmdq_vsi = conf_vsis;
+
+ /* Configure Vlan */
+ loop = sizeof(vmdq_conf->pool_map[0].pools) * CHAR_BIT;
+ for (i = 0; i < vmdq_conf->nb_pool_maps; i++) {
+ for (j = 0; j < loop && j < pf->nb_cfg_vmdq_vsi; j++) {
+ if (vmdq_conf->pool_map[i].pools & (1UL << j)) {
+ PMD_INIT_LOG(INFO, "Add vlan %u to vmdq pool %u",
+ vmdq_conf->pool_map[i].vlan_id, j);
+
+ err = i40e_vsi_add_vlan(pf->vmdq[j].vsi,
+ vmdq_conf->pool_map[i].vlan_id);
+ if (err) {
+ PMD_INIT_LOG(ERR, "Failed to add vlan");
+ err = -1;
+ goto err_vsi_setup;
+ }
+ }
+ }
+ }
+
+ i40e_pf_enable_irq0(hw);
+
+ return 0;
+
+err_vsi_setup:
+ for (i = 0; i < conf_vsis; i++)
+ if (pf->vmdq[i].vsi == NULL)
+ break;
+ else
+ i40e_vsi_release(pf->vmdq[i].vsi);
+
+ rte_free(pf->vmdq);
+ pf->vmdq = NULL;
+ i40e_pf_enable_irq0(hw);
+ return err;
+}
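The `pool_map` scan above pairs each VLAN ID with a 64-bit pool bitmap and adds the VLAN to every VMDQ VSI whose bit is set. A minimal standalone sketch of that bit-scanning logic, with a hypothetical stand-in for the DPDK `rte_eth_vmdq_rx_conf` pool-map entry (note it uses `1ULL` for the bit test, since the bitmap is 64 bits wide):

```c
#include <assert.h>
#include <limits.h>
#include <stdint.h>

/* Hypothetical stand-in for one rte_eth_vmdq_rx_conf pool_map entry. */
struct pool_map {
	uint16_t vlan_id;
	uint64_t pools; /* bit j set => assign vlan_id to pool/VSI j */
};

/* Count the (vlan, pool) assignments a map describes, mirroring the
 * double loop in i40e_vmdq_setup(): outer over map entries, inner over
 * bitmap bits, bounded by the number of configured VMDQ VSIs. */
static int count_vlan_assignments(const struct pool_map *map, int nb_maps,
				  int nb_cfg_vsi)
{
	int loop = sizeof(map[0].pools) * CHAR_BIT; /* 64 bits */
	int i, j, n = 0;

	for (i = 0; i < nb_maps; i++)
		for (j = 0; j < loop && j < nb_cfg_vsi; j++)
			if (map[i].pools & (1ULL << j))
				n++;
	return n;
}
```

This is a sketch of the traversal only; in the driver the body of the inner loop calls `i40e_vsi_add_vlan()` and unwinds the VSIs on failure.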
+
static void
i40e_stat_update_32(struct i40e_hw *hw,
uint32_t reg,
@@ -4086,7 +4265,7 @@ i40e_pf_config_rss(struct i40e_pf *pf)
struct i40e_hw *hw = I40E_PF_TO_HW(pf);
struct rte_eth_rss_conf rss_conf;
uint32_t i, lut = 0;
- uint16_t j, num = i40e_prev_power_of_2(pf->dev_data->nb_rx_queues);
+ uint16_t j, num = i40e_align_floor(pf->dev_data->nb_rx_queues);
for (i = 0, j = 0; i < hw->func_caps.rss_table_size; i++, j++) {
if (j == num)
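The `i40e_align_floor()` helper used above (introduced by this series to replace `i40e_prev_power_of_2()`) rounds its argument down to the nearest power of two. A self-contained re-implementation, assuming a GCC/Clang-style `__builtin_clz` as the patch does:

```c
#include <assert.h>
#include <limits.h>

/* Round n down to the largest power of two <= n; 0 maps to 0.
 * __builtin_clz is undefined for 0, hence the early return. */
static int align_floor(int n)
{
	if (n == 0)
		return 0;
	return 1 << (sizeof(n) * CHAR_BIT - 1 - __builtin_clz(n));
}
```

For a power of two the value is returned unchanged; anything else drops to the power of two just below it, which is why it is suitable for sizing the RSS lookup-table fill above.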
diff --git a/lib/librte_pmd_i40e/i40e_ethdev.h b/lib/librte_pmd_i40e/i40e_ethdev.h
index 64deef2..b06de05 100644
--- a/lib/librte_pmd_i40e/i40e_ethdev.h
+++ b/lib/librte_pmd_i40e/i40e_ethdev.h
@@ -45,13 +45,15 @@
#define I40E_QUEUE_BASE_ADDR_UNIT 128
/* number of VSIs and queue default setting */
#define I40E_MAX_QP_NUM_PER_VF 16
-#define I40E_DEFAULT_QP_NUM_VMDQ 64
#define I40E_DEFAULT_QP_NUM_FDIR 64
#define I40E_UINT32_BIT_SIZE (CHAR_BIT * sizeof(uint32_t))
#define I40E_VFTA_SIZE (4096 / I40E_UINT32_BIT_SIZE)
/* Default TC traffic in case DCB is not enabled */
#define I40E_DEFAULT_TCMAP 0x1
+/* Always assign pool 0 to main VSI, VMDQ will start from 1 */
+#define I40E_VMDQ_POOL_BASE 1
+
/* i40e flags */
#define I40E_FLAG_RSS (1ULL << 0)
#define I40E_FLAG_DCB (1ULL << 1)
@@ -189,6 +191,14 @@ struct i40e_pf_vf {
};
/*
+ * Structure to store private data for VMDQ instance
+ */
+struct i40e_vmdq_info {
+ struct i40e_pf *pf;
+ struct i40e_vsi *vsi;
+};
+
+/*
* Structure to store private data specific for PF instance.
*/
struct i40e_pf {
@@ -216,6 +226,11 @@ struct i40e_pf {
uint16_t vmdq_nb_qps; /* The number of queue pairs of VMDq */
uint16_t vf_nb_qps; /* The number of queue pairs of VF */
uint16_t fdir_nb_qps; /* The number of queue pairs of Flow Director */
+
+ /* VMDQ related info */
+ uint16_t max_nb_vmdq_vsi; /* Max number of VMDQ VSIs supported */
+ uint16_t nb_cfg_vmdq_vsi; /* number of VMDQ VSIs configured */
+ struct i40e_vmdq_info *vmdq;
};
enum pending_msg {
--
1.7.7.6
^ permalink raw reply [flat|nested] 45+ messages in thread
* Re: [dpdk-dev] [PATCH 4/6] i40e: add VMDQ support
2014-09-23 13:14 ` [dpdk-dev] [PATCH 4/6] i40e: add VMDQ support Chen Jing D(Mark)
@ 2014-10-13 16:14 ` De Lara Guarch, Pablo
0 siblings, 0 replies; 45+ messages in thread
From: De Lara Guarch, Pablo @ 2014-10-13 16:14 UTC (permalink / raw)
To: Chen, Jing D, dev
> -----Original Message-----
> From: dev [mailto:dev-bounces@dpdk.org] On Behalf Of Chen Jing D(Mark)
> Sent: Tuesday, September 23, 2014 2:14 PM
> To: dev@dpdk.org
> Subject: [dpdk-dev] [PATCH 4/6] i40e: add VMDQ support
>
> From: "Chen Jing D(Mark)" <jing.d.chen@intel.com>
>
> The change includes several parts:
> 1. Get maximum number of VMDQ pools supported in dev_init.
> 2. Fill VMDQ info in i40e_dev_info_get.
> 3. Setup VMDQ pools in i40e_dev_configure.
> 4. i40e_vsi_setup change to support creation of VMDQ VSI.
>
> Signed-off-by: Chen Jing D(Mark) <jing.d.chen@intel.com>
> Acked-by: Konstantin Ananyev <konstantin.ananyev@intel.com>
> Acked-by: Jingjing Wu <jingjing.wu@intel.com>
> Acked-by: Jijiang Liu <jijiang.liu@intel.com>
> Acked-by: Huawei Xie <huawei.xie@intel.com>
> ---
> config/common_linuxapp | 1 +
> lib/librte_pmd_i40e/i40e_ethdev.c | 237 ++++++++++++++++++++++++++++-----
> lib/librte_pmd_i40e/i40e_ethdev.h | 17 +++-
> 3 files changed, 225 insertions(+), 30 deletions(-)
>
> diff --git a/config/common_linuxapp b/config/common_linuxapp
> index 5bee910..d0bb3f7 100644
> --- a/config/common_linuxapp
> +++ b/config/common_linuxapp
> @@ -208,6 +208,7 @@
> CONFIG_RTE_LIBRTE_I40E_RX_ALLOW_BULK_ALLOC=y
> CONFIG_RTE_LIBRTE_I40E_ALLOW_UNSUPPORTED_SFP=n
> CONFIG_RTE_LIBRTE_I40E_16BYTE_RX_DESC=n
> CONFIG_RTE_LIBRTE_I40E_QUEUE_NUM_PER_VF=4
> +CONFIG_RTE_LIBRTE_I40E_QUEUE_NUM_PER_VM=4
Should we include this option in config_bsdapp as well?
> # interval up to 8160 us, aligned to 2 (or default value)
> CONFIG_RTE_LIBRTE_I40E_ITR_INTERVAL=-1
>
^ permalink raw reply [flat|nested] 45+ messages in thread
* [dpdk-dev] [PATCH 5/6] i40e: macaddr add/del enhancement
2014-09-23 13:14 [dpdk-dev] [PATCH 0/6] i40e VMDQ support Chen Jing D(Mark)
` (3 preceding siblings ...)
2014-09-23 13:14 ` [dpdk-dev] [PATCH 4/6] i40e: add VMDQ support Chen Jing D(Mark)
@ 2014-09-23 13:14 ` Chen Jing D(Mark)
2014-10-14 14:25 ` Thomas Monjalon
2014-09-23 13:14 ` [dpdk-dev] [PATCH 6/6] i40e: Add full VMDQ pools support Chen Jing D(Mark)
` (3 subsequent siblings)
8 siblings, 1 reply; 45+ messages in thread
From: Chen Jing D(Mark) @ 2014-09-23 13:14 UTC (permalink / raw)
To: dev
From: "Chen Jing D(Mark)" <jing.d.chen@intel.com>
Change the i40e_macaddr_add and i40e_macaddr_remove functions to
support adding/deleting multiple MAC addresses. Meanwhile, support
MAC address operations on different pools.
Signed-off-by: Chen Jing D(Mark) <jing.d.chen@intel.com>
Acked-by: Konstantin Ananyev <konstantin.ananyev@intel.com>
Acked-by: Jingjing Wu <jingjing.wu@intel.com>
Acked-by: Jijiang Liu <jijiang.liu@intel.com>
Acked-by: Huawei Xie <huawei.xie@intel.com>
---
lib/librte_pmd_i40e/i40e_ethdev.c | 91 +++++++++++++++++-------------------
1 files changed, 43 insertions(+), 48 deletions(-)
diff --git a/lib/librte_pmd_i40e/i40e_ethdev.c b/lib/librte_pmd_i40e/i40e_ethdev.c
index a267c96..3185654 100644
--- a/lib/librte_pmd_i40e/i40e_ethdev.c
+++ b/lib/librte_pmd_i40e/i40e_ethdev.c
@@ -1532,45 +1532,37 @@ i40e_priority_flow_ctrl_set(__rte_unused struct rte_eth_dev *dev,
static void
i40e_macaddr_add(struct rte_eth_dev *dev,
struct ether_addr *mac_addr,
- __attribute__((unused)) uint32_t index,
- __attribute__((unused)) uint32_t pool)
+ __rte_unused uint32_t index,
+ uint32_t pool)
{
struct i40e_pf *pf = I40E_DEV_PRIVATE_TO_PF(dev->data->dev_private);
- struct i40e_hw *hw = I40E_DEV_PRIVATE_TO_HW(dev->data->dev_private);
- struct i40e_vsi *vsi = pf->main_vsi;
- struct ether_addr old_mac;
+ struct i40e_vsi *vsi;
int ret;
- if (!is_valid_assigned_ether_addr(mac_addr)) {
- PMD_DRV_LOG(ERR, "Invalid ethernet address");
- return;
- }
-
- if (is_same_ether_addr(mac_addr, &(pf->dev_addr))) {
- PMD_DRV_LOG(INFO, "Ignore adding permanent mac address");
+ /* If VMDQ not enabled or configured, return */
+ if (pool != 0 && (!(pf->flags | I40E_FLAG_VMDQ) || !pf->nb_cfg_vmdq_vsi)) {
+ PMD_DRV_LOG(ERR, "VMDQ not %s, can't set mac to pool %u\n",
+ pf->flags | I40E_FLAG_VMDQ ? "configured" : "enabled",
+ pool);
return;
}
- /* Write mac address */
- ret = i40e_aq_mac_address_write(hw, I40E_AQC_WRITE_TYPE_LAA_ONLY,
- mac_addr->addr_bytes, NULL);
- if (ret != I40E_SUCCESS) {
- PMD_DRV_LOG(ERR, "Failed to write mac address");
+ if (pool > pf->nb_cfg_vmdq_vsi) {
+ PMD_DRV_LOG(ERR, "Pool number %u invalid. Max pool is %u\n",
+ pool, pf->nb_cfg_vmdq_vsi);
return;
}
- (void)rte_memcpy(&old_mac, hw->mac.addr, ETHER_ADDR_LEN);
- (void)rte_memcpy(hw->mac.addr, mac_addr->addr_bytes,
- ETHER_ADDR_LEN);
+ if (pool == 0)
+ vsi = pf->main_vsi;
+ else
+ vsi = pf->vmdq[pool - 1].vsi;
ret = i40e_vsi_add_mac(vsi, mac_addr);
if (ret != I40E_SUCCESS) {
- PMD_DRV_LOG(ERR, "Failed to add MACVLAN filter");
+ PMD_DRV_LOG(ERR, "Failed to add MACVLAN filter\n");
return;
}
-
- ether_addr_copy(mac_addr, &pf->dev_addr);
- i40e_vsi_delete_mac(vsi, &old_mac);
}
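After this change, pool 0 always maps to the main VSI and pool N (N >= 1) to the (N-1)-th VMDQ VSI, matching `I40E_VMDQ_POOL_BASE`. A simplified sketch of that selection, with a hypothetical `struct vsi` in place of the driver's `struct i40e_vsi`:

```c
#include <assert.h>
#include <stddef.h>
#include <stdint.h>

struct vsi { int id; }; /* hypothetical stand-in for struct i40e_vsi */

/* Mirror of the pool-to-VSI lookup in i40e_macaddr_add():
 * pool 0 -> main VSI, pool p > 0 -> vmdq[p - 1], out of range -> NULL. */
static struct vsi *pool_to_vsi(struct vsi *main_vsi, struct vsi *vmdq,
			       uint16_t nb_cfg_vmdq_vsi, uint32_t pool)
{
	if (pool == 0)
		return main_vsi;
	if (pool > nb_cfg_vmdq_vsi)
		return NULL; /* invalid pool number */
	return &vmdq[pool - 1];
}
```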
/* Remove a MAC address, and update filters */
@@ -1578,36 +1570,39 @@ static void
i40e_macaddr_remove(struct rte_eth_dev *dev, uint32_t index)
{
struct i40e_pf *pf = I40E_DEV_PRIVATE_TO_PF(dev->data->dev_private);
- struct i40e_vsi *vsi = pf->main_vsi;
- struct rte_eth_dev_data *data = I40E_VSI_TO_DEV_DATA(vsi);
+ struct i40e_vsi *vsi;
+ struct rte_eth_dev_data *data = dev->data;
struct ether_addr *macaddr;
int ret;
- struct i40e_hw *hw =
- I40E_DEV_PRIVATE_TO_HW(dev->data->dev_private);
-
- if (index >= vsi->max_macaddrs)
- return;
+ uint32_t i;
+ uint64_t pool_sel;
macaddr = &(data->mac_addrs[index]);
- if (!is_valid_assigned_ether_addr(macaddr))
- return;
-
- ret = i40e_aq_mac_address_write(hw, I40E_AQC_WRITE_TYPE_LAA_ONLY,
- hw->mac.perm_addr, NULL);
- if (ret != I40E_SUCCESS) {
- PMD_DRV_LOG(ERR, "Failed to write mac address");
- return;
- }
-
- (void)rte_memcpy(hw->mac.addr, hw->mac.perm_addr, ETHER_ADDR_LEN);
- ret = i40e_vsi_delete_mac(vsi, macaddr);
- if (ret != I40E_SUCCESS)
- return;
+ pool_sel = dev->data->mac_pool_sel[index];
+
+ for (i = 0; i < sizeof(pool_sel) * CHAR_BIT; i++) {
+ if (pool_sel & (1ULL << i)) {
+ if (i == 0)
+ vsi = pf->main_vsi;
+ else {
+ /* No VMDQ pool enabled or configured */
+ if (!(pf->flags | I40E_FLAG_VMDQ) ||
+ (i > pf->nb_cfg_vmdq_vsi)) {
+ PMD_DRV_LOG(ERR, "No VMDQ pool enabled"
+ "/configured\n");
+ return;
+ }
+ vsi = pf->vmdq[i - 1].vsi;
+ }
+ ret = i40e_vsi_delete_mac(vsi, macaddr);
- /* Clear device address as it has been removed */
- if (is_same_ether_addr(&(pf->dev_addr), macaddr))
- memset(&pf->dev_addr, 0, sizeof(struct ether_addr));
+ if (ret) {
+ PMD_DRV_LOG(ERR, "Failed to remove MACVLAN filter\n");
+ return;
+ }
+ }
+ }
}
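The removal path above walks the per-address `mac_pool_sel` bitmap recorded by rte_ethdev and deletes the filter from every VSI whose bit is set. The traversal reduces to the following sketch (function and parameter names are illustrative, not the driver's):

```c
#include <assert.h>
#include <limits.h>
#include <stdint.h>

/* Visit each pool selected in a mac_pool_sel-style 64-bit bitmap, as
 * i40e_macaddr_remove() does; returns the number of pools visited, or
 * -1 if a selected VMDQ pool exceeds the configured VSI count. */
static int visit_selected_pools(uint64_t pool_sel, uint16_t nb_cfg_vmdq_vsi)
{
	uint32_t i;
	int n = 0;

	for (i = 0; i < sizeof(pool_sel) * CHAR_BIT; i++) {
		if (!(pool_sel & (1ULL << i)))
			continue;
		if (i != 0 && i > nb_cfg_vmdq_vsi)
			return -1; /* bitmap selects a pool with no VSI */
		n++; /* here the driver calls i40e_vsi_delete_mac() */
	}
	return n;
}
```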
static int
--
1.7.7.6
^ permalink raw reply [flat|nested] 45+ messages in thread
* Re: [dpdk-dev] [PATCH 5/6] i40e: macaddr add/del enhancement
2014-09-23 13:14 ` [dpdk-dev] [PATCH 5/6] i40e: macaddr add/del enhancement Chen Jing D(Mark)
@ 2014-10-14 14:25 ` Thomas Monjalon
2014-10-15 7:01 ` Chen, Jing D
0 siblings, 1 reply; 45+ messages in thread
From: Thomas Monjalon @ 2014-10-14 14:25 UTC (permalink / raw)
To: Chen Jing D(Mark); +Cc: dev
2014-09-23 21:14, Chen Jing D:
> + PMD_DRV_LOG(ERR, "VMDQ not %s, can't set mac to pool %u\n",
> + pf->flags | I40E_FLAG_VMDQ ? "configured" : "enabled",
> + pool);
[...]
> - if (ret != I40E_SUCCESS) {
> - PMD_DRV_LOG(ERR, "Failed to write mac address");
> + if (pool > pf->nb_cfg_vmdq_vsi) {
> + PMD_DRV_LOG(ERR, "Pool number %u invalid. Max pool is %u\n",
> + pool, pf->nb_cfg_vmdq_vsi);
[...]
> - PMD_DRV_LOG(ERR, "Failed to add MACVLAN filter");
> + PMD_DRV_LOG(ERR, "Failed to add MACVLAN filter\n");
[...]
> + PMD_DRV_LOG(ERR, "Failed to remove MACVLAN filter\n");
I'm pretty sure you rebased this patch and solved the conflicts without
updating your patch accordingly. Indeed carriage returns have been removed
from logs recently.
Hint: rebase conflicts are really often meaningful ;)
--
Thomas
^ permalink raw reply [flat|nested] 45+ messages in thread
* Re: [dpdk-dev] [PATCH 5/6] i40e: macaddr add/del enhancement
2014-10-14 14:25 ` Thomas Monjalon
@ 2014-10-15 7:01 ` Chen, Jing D
0 siblings, 0 replies; 45+ messages in thread
From: Chen, Jing D @ 2014-10-15 7:01 UTC (permalink / raw)
To: Thomas Monjalon; +Cc: dev
> -----Original Message-----
> From: Thomas Monjalon [mailto:thomas.monjalon@6wind.com]
> Sent: Tuesday, October 14, 2014 10:25 PM
> To: Chen, Jing D
> Cc: dev@dpdk.org
> Subject: Re: [dpdk-dev] [PATCH 5/6] i40e: macaddr add/del enhancement
>
> 2014-09-23 21:14, Chen Jing D:
> > + PMD_DRV_LOG(ERR, "VMDQ not %s, can't set mac to pool %u\n",
> > + pf->flags | I40E_FLAG_VMDQ ? "configured" : "enabled",
> > + pool);
> [...]
> > - if (ret != I40E_SUCCESS) {
> > - PMD_DRV_LOG(ERR, "Failed to write mac address");
> > + if (pool > pf->nb_cfg_vmdq_vsi) {
> > + PMD_DRV_LOG(ERR, "Pool number %u invalid. Max pool is %u\n",
> > + pool, pf->nb_cfg_vmdq_vsi);
> [...]
> > - PMD_DRV_LOG(ERR, "Failed to add MACVLAN filter");
> > + PMD_DRV_LOG(ERR, "Failed to add MACVLAN filter\n");
> [...]
> > + PMD_DRV_LOG(ERR, "Failed to remove MACVLAN filter\n");
>
> I'm pretty sure you rebased this patch and solved the conflicts without
> updating your patch accordingly. Indeed carriage returns have been removed
> from logs recently.
> Hint: rebase conflicts are really often meaningful ;)
>
Thanks for your suggestion.
> --
> Thomas
^ permalink raw reply [flat|nested] 45+ messages in thread
* [dpdk-dev] [PATCH 6/6] i40e: Add full VMDQ pools support
2014-09-23 13:14 [dpdk-dev] [PATCH 0/6] i40e VMDQ support Chen Jing D(Mark)
` (4 preceding siblings ...)
2014-09-23 13:14 ` [dpdk-dev] [PATCH 5/6] i40e: macaddr add/del enhancement Chen Jing D(Mark)
@ 2014-09-23 13:14 ` Chen Jing D(Mark)
2014-10-10 10:45 ` [dpdk-dev] [PATCH 0/6] i40e VMDQ support Ananyev, Konstantin
` (2 subsequent siblings)
8 siblings, 0 replies; 45+ messages in thread
From: Chen Jing D(Mark) @ 2014-09-23 13:14 UTC (permalink / raw)
To: dev
From: "Chen Jing D(Mark)" <jing.d.chen@intel.com>
1. Rename the i40e_vsi_* functions to i40e_dev_*, since a PF can contain
more than one VSI once VMDQ is enabled.
2. Change i40e_dev_rx/tx_queue_setup so they can set up queues that
belong to VMDQ pools.
3. Add queue mapping, which converts between the queue index the
application uses and the real NIC queue index.
4. Change i40e_dev_start/stop so they can switch VMDQ queues.
5. Change i40e_pf_config_rss to calculate the actual number of main VSI
queues after VMDQ pools are introduced.
Signed-off-by: Chen Jing D(Mark) <jing.d.chen@intel.com>
Acked-by: Konstantin Ananyev <konstantin.ananyev@intel.com>
Acked-by: Jingjing Wu <jingjing.wu@intel.com>
Acked-by: Jijiang Liu <jijiang.liu@intel.com>
Acked-by: Huawei Xie <huawei.xie@intel.com>
---
lib/librte_pmd_i40e/i40e_ethdev.c | 183 +++++++++++++++++++++++++------------
lib/librte_pmd_i40e/i40e_ethdev.h | 4 +-
lib/librte_pmd_i40e/i40e_rxtx.c | 125 +++++++++++++++++++++-----
3 files changed, 231 insertions(+), 81 deletions(-)
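The queue-mapping item in the commit message introduces a translation between the flat queue index the application sees and the hardware queue behind it: the main VSI owns the first block of queues and the VMDQ VSIs own fixed-size blocks after it. A hedged sketch of that convention (the names and the exact layout are illustrative assumptions, not the driver's code):

```c
#include <assert.h>
#include <stdint.h>

struct q_loc {
	int vsi;	 /* -1 = main VSI, otherwise VMDQ VSI index */
	uint16_t offset; /* queue offset inside that VSI */
};

/* Map a flat application queue index onto (VSI, offset), assuming the
 * main VSI owns the first main_nb_qps queues and each VMDQ VSI owns
 * vmdq_nb_qps queues after that. */
static struct q_loc map_queue(uint16_t q, uint16_t main_nb_qps,
			      uint16_t vmdq_nb_qps)
{
	struct q_loc loc;

	if (q < main_nb_qps) {
		loc.vsi = -1;
		loc.offset = q;
	} else {
		loc.vsi = (q - main_nb_qps) / vmdq_nb_qps;
		loc.offset = (q - main_nb_qps) % vmdq_nb_qps;
	}
	return loc;
}
```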
diff --git a/lib/librte_pmd_i40e/i40e_ethdev.c b/lib/librte_pmd_i40e/i40e_ethdev.c
index 3185654..9009bd4 100644
--- a/lib/librte_pmd_i40e/i40e_ethdev.c
+++ b/lib/librte_pmd_i40e/i40e_ethdev.c
@@ -167,7 +167,7 @@ static int i40e_dev_rss_reta_query(struct rte_eth_dev *dev,
static int i40e_get_cap(struct i40e_hw *hw);
static int i40e_pf_parameter_init(struct rte_eth_dev *dev);
static int i40e_pf_setup(struct i40e_pf *pf);
-static int i40e_vsi_init(struct i40e_vsi *vsi);
+static int i40e_dev_rxtx_init(struct i40e_pf *pf);
static int i40e_vmdq_setup(struct rte_eth_dev *dev);
static void i40e_stat_update_32(struct i40e_hw *hw, uint32_t reg,
bool offset_loaded, uint64_t *offset, uint64_t *stat);
@@ -770,8 +770,8 @@ i40e_dev_start(struct rte_eth_dev *dev)
{
struct i40e_pf *pf = I40E_DEV_PRIVATE_TO_PF(dev->data->dev_private);
struct i40e_hw *hw = I40E_DEV_PRIVATE_TO_HW(dev->data->dev_private);
- struct i40e_vsi *vsi = pf->main_vsi;
- int ret;
+ struct i40e_vsi *main_vsi = pf->main_vsi;
+ int ret, i;
if ((dev->data->dev_conf.link_duplex != ETH_LINK_AUTONEG_DUPLEX) &&
(dev->data->dev_conf.link_duplex != ETH_LINK_FULL_DUPLEX)) {
@@ -782,41 +782,53 @@ i40e_dev_start(struct rte_eth_dev *dev)
}
/* Initialize VSI */
- ret = i40e_vsi_init(vsi);
+ ret = i40e_dev_rxtx_init(pf);
if (ret != I40E_SUCCESS) {
- PMD_DRV_LOG(ERR, "Failed to init VSI");
+ PMD_DRV_LOG(ERR, "Failed to init rx/tx queues\n");
goto err_up;
}
/* Map queues with MSIX interrupt */
- i40e_vsi_queues_bind_intr(vsi);
- i40e_vsi_enable_queues_intr(vsi);
+ i40e_vsi_queues_bind_intr(main_vsi);
+ i40e_vsi_enable_queues_intr(main_vsi);
+
+ /* Map VMDQ VSI queues with MSIX interrupt */
+ for (i = 0; i < pf->nb_cfg_vmdq_vsi; i++) {
+ i40e_vsi_queues_bind_intr(pf->vmdq[i].vsi);
+ i40e_vsi_enable_queues_intr(pf->vmdq[i].vsi);
+ }
/* Enable all queues which have been configured */
- ret = i40e_vsi_switch_queues(vsi, TRUE);
+ ret = i40e_dev_switch_queues(pf, TRUE);
if (ret != I40E_SUCCESS) {
PMD_DRV_LOG(ERR, "Failed to enable VSI");
goto err_up;
}
/* Enable receiving broadcast packets */
- if ((vsi->type == I40E_VSI_MAIN) || (vsi->type == I40E_VSI_VMDQ2)) {
- ret = i40e_aq_set_vsi_broadcast(hw, vsi->seid, true, NULL);
+ ret = i40e_aq_set_vsi_broadcast(hw, main_vsi->seid, true, NULL);
+ if (ret != I40E_SUCCESS)
+ PMD_DRV_LOG(INFO, "fail to set vsi broadcast\n");
+
+ for (i = 0; i < pf->nb_cfg_vmdq_vsi; i++) {
+ ret = i40e_aq_set_vsi_broadcast(hw, pf->vmdq[i].vsi->seid,
+ true, NULL);
if (ret != I40E_SUCCESS)
- PMD_DRV_LOG(INFO, "fail to set vsi broadcast");
+ PMD_DRV_LOG(INFO, "fail to set vsi broadcast\n");
}
/* Apply link configure */
ret = i40e_apply_link_speed(dev);
if (I40E_SUCCESS != ret) {
- PMD_DRV_LOG(ERR, "Fail to apply link setting");
+ PMD_DRV_LOG(ERR, "Fail to apply link setting\n");
goto err_up;
}
return I40E_SUCCESS;
err_up:
- i40e_vsi_switch_queues(vsi, FALSE);
+ i40e_dev_switch_queues(pf, FALSE);
+ i40e_dev_clear_queues(dev);
return ret;
}
@@ -825,17 +837,26 @@ static void
i40e_dev_stop(struct rte_eth_dev *dev)
{
struct i40e_pf *pf = I40E_DEV_PRIVATE_TO_PF(dev->data->dev_private);
- struct i40e_vsi *vsi = pf->main_vsi;
+ struct i40e_vsi *main_vsi = pf->main_vsi;
+ int i;
/* Disable all queues */
- i40e_vsi_switch_queues(vsi, FALSE);
+ i40e_dev_switch_queues(pf, FALSE);
+
+ /* un-map queues with interrupt registers */
+ i40e_vsi_disable_queues_intr(main_vsi);
+ i40e_vsi_queues_unbind_intr(main_vsi);
+
+ for (i = 0; i < pf->nb_cfg_vmdq_vsi; i++) {
+ i40e_vsi_disable_queues_intr(pf->vmdq[i].vsi);
+ i40e_vsi_queues_unbind_intr(pf->vmdq[i].vsi);
+ }
+
+ /* Clear all queues and release memory */
+ i40e_dev_clear_queues(dev);
/* Set link down */
i40e_dev_set_link_down(dev);
-
- /* un-map queues with interrupt registers */
- i40e_vsi_disable_queues_intr(vsi);
- i40e_vsi_queues_unbind_intr(vsi);
}
static void
@@ -3083,11 +3104,11 @@ i40e_switch_tx_queue(struct i40e_hw *hw, uint16_t q_idx, bool on)
/* Swith on or off the tx queues */
static int
-i40e_vsi_switch_tx_queues(struct i40e_vsi *vsi, bool on)
+i40e_dev_switch_tx_queues(struct i40e_pf *pf, bool on)
{
- struct rte_eth_dev_data *dev_data = I40E_VSI_TO_DEV_DATA(vsi);
+ struct rte_eth_dev_data *dev_data = pf->dev_data;
struct i40e_tx_queue *txq;
- struct rte_eth_dev *dev = I40E_VSI_TO_ETH_DEV(vsi);
+ struct rte_eth_dev *dev = pf->adapter->eth_dev;
uint16_t i;
int ret;
@@ -3095,8 +3116,9 @@ i40e_vsi_switch_tx_queues(struct i40e_vsi *vsi, bool on)
txq = dev_data->tx_queues[i];
/* Don't operate the queue if not configured or
* if starting only per queue */
- if (!txq->q_set || (on && txq->start_tx_per_q))
+ if (!txq || !txq->q_set || (on && txq->start_tx_per_q))
continue;
+
if (on)
ret = i40e_dev_tx_queue_start(dev, i);
else
@@ -3161,11 +3183,11 @@ i40e_switch_rx_queue(struct i40e_hw *hw, uint16_t q_idx, bool on)
}
/* Switch on or off the rx queues */
static int
-i40e_vsi_switch_rx_queues(struct i40e_vsi *vsi, bool on)
+i40e_dev_switch_rx_queues(struct i40e_pf *pf, bool on)
{
- struct rte_eth_dev_data *dev_data = I40E_VSI_TO_DEV_DATA(vsi);
+ struct rte_eth_dev_data *dev_data = pf->dev_data;
struct i40e_rx_queue *rxq;
- struct rte_eth_dev *dev = I40E_VSI_TO_ETH_DEV(vsi);
+ struct rte_eth_dev *dev = pf->adapter->eth_dev;
uint16_t i;
int ret;
@@ -3173,7 +3195,7 @@ i40e_vsi_switch_rx_queues(struct i40e_vsi *vsi, bool on)
rxq = dev_data->rx_queues[i];
/* Don't operate the queue if not configured or
* if starting only per queue */
- if (!rxq->q_set || (on && rxq->start_rx_per_q))
+ if (!rxq || !rxq->q_set || (on && rxq->start_rx_per_q))
continue;
if (on)
ret = i40e_dev_rx_queue_start(dev, i);
@@ -3188,26 +3210,26 @@ i40e_vsi_switch_rx_queues(struct i40e_vsi *vsi, bool on)
/* Switch on or off all the rx/tx queues */
int
-i40e_vsi_switch_queues(struct i40e_vsi *vsi, bool on)
+i40e_dev_switch_queues(struct i40e_pf *pf, bool on)
{
int ret;
if (on) {
/* enable rx queues before enabling tx queues */
- ret = i40e_vsi_switch_rx_queues(vsi, on);
+ ret = i40e_dev_switch_rx_queues(pf, on);
if (ret) {
- PMD_DRV_LOG(ERR, "Failed to switch rx queues");
+ PMD_DRV_LOG(ERR, "Failed to switch rx queues\n");
return ret;
}
- ret = i40e_vsi_switch_tx_queues(vsi, on);
+ ret = i40e_dev_switch_tx_queues(pf, on);
} else {
/* Stop tx queues before stopping rx queues */
- ret = i40e_vsi_switch_tx_queues(vsi, on);
+ ret = i40e_dev_switch_tx_queues(pf, on);
if (ret) {
- PMD_DRV_LOG(ERR, "Failed to switch tx queues");
+ PMD_DRV_LOG(ERR, "Failed to switch tx queues\n");
return ret;
}
- ret = i40e_vsi_switch_rx_queues(vsi, on);
+ ret = i40e_dev_switch_rx_queues(pf, on);
}
return ret;
@@ -3215,15 +3237,18 @@ i40e_vsi_switch_queues(struct i40e_vsi *vsi, bool on)
/* Initialize VSI for TX */
static int
-i40e_vsi_tx_init(struct i40e_vsi *vsi)
+i40e_dev_tx_init(struct i40e_pf *pf)
{
- struct i40e_pf *pf = I40E_VSI_TO_PF(vsi);
struct rte_eth_dev_data *data = pf->dev_data;
uint16_t i;
uint32_t ret = I40E_SUCCESS;
+ struct i40e_tx_queue *txq;
for (i = 0; i < data->nb_tx_queues; i++) {
- ret = i40e_tx_queue_init(data->tx_queues[i]);
+ txq = data->tx_queues[i];
+ if (!txq || !txq->q_set)
+ continue;
+ ret = i40e_tx_queue_init(txq);
if (ret != I40E_SUCCESS)
break;
}
@@ -3233,16 +3258,20 @@ i40e_vsi_tx_init(struct i40e_vsi *vsi)
/* Initialize VSI for RX */
static int
-i40e_vsi_rx_init(struct i40e_vsi *vsi)
+i40e_dev_rx_init(struct i40e_pf *pf)
{
- struct i40e_pf *pf = I40E_VSI_TO_PF(vsi);
struct rte_eth_dev_data *data = pf->dev_data;
int ret = I40E_SUCCESS;
uint16_t i;
+ struct i40e_rx_queue *rxq;
i40e_pf_config_mq_rx(pf);
for (i = 0; i < data->nb_rx_queues; i++) {
- ret = i40e_rx_queue_init(data->rx_queues[i]);
+ rxq = data->rx_queues[i];
+ if (!rxq || !rxq->q_set)
+ continue;
+
+ ret = i40e_rx_queue_init(rxq);
if (ret != I40E_SUCCESS) {
PMD_DRV_LOG(ERR, "Failed to do RX queue "
"initialization");
@@ -3253,20 +3282,19 @@ i40e_vsi_rx_init(struct i40e_vsi *vsi)
return ret;
}
-/* Initialize VSI */
static int
-i40e_vsi_init(struct i40e_vsi *vsi)
+i40e_dev_rxtx_init(struct i40e_pf *pf)
{
int err;
- err = i40e_vsi_tx_init(vsi);
+ err = i40e_dev_tx_init(pf);
if (err) {
- PMD_DRV_LOG(ERR, "Failed to do vsi TX initialization");
+ PMD_DRV_LOG(ERR, "Failed to do TX initialization");
return err;
}
- err = i40e_vsi_rx_init(vsi);
+ err = i40e_dev_rx_init(pf);
if (err) {
- PMD_DRV_LOG(ERR, "Failed to do vsi RX initialization");
+ PMD_DRV_LOG(ERR, "Failed to do RX initialization");
return err;
}
@@ -4253,6 +4281,26 @@ i40e_dev_rss_hash_conf_get(struct rte_eth_dev *dev,
return 0;
}
+/* Calculate the maximum number of contiguous PF queues that are configured */
+static int
+i40e_pf_calc_configured_queues_num(struct i40e_pf *pf)
+{
+ struct rte_eth_dev_data *data = pf->dev_data;
+ int i, num;
+ struct i40e_rx_queue *rxq;
+
+ num = 0;
+ for (i = 0; i < pf->lan_nb_qps; i++) {
+ rxq = data->rx_queues[i];
+ if (rxq && rxq->q_set)
+ num++;
+ else
+ break;
+ }
+
+ return num;
+}
+
/* Configure RSS */
static int
i40e_pf_config_rss(struct i40e_pf *pf)
@@ -4260,7 +4308,25 @@ i40e_pf_config_rss(struct i40e_pf *pf)
struct i40e_hw *hw = I40E_PF_TO_HW(pf);
struct rte_eth_rss_conf rss_conf;
uint32_t i, lut = 0;
- uint16_t j, num = i40e_align_floor(pf->dev_data->nb_rx_queues);
+ uint16_t j, num;
+
+ /*
+ * If both VMDQ and RSS are enabled, not all PF queues are configured.
+ * It's necessary to calculate the actual number of PF queues that are
+ * configured.
+ */
+ if (pf->dev_data->dev_conf.rxmode.mq_mode & ETH_MQ_RX_VMDQ_FLAG) {
+ num = i40e_pf_calc_configured_queues_num(pf);
+ num = i40e_align_floor(num);
+ } else
+ num = i40e_align_floor(pf->dev_data->nb_rx_queues);
+
+ PMD_INIT_LOG(INFO, "A maximum of %u contiguous PF queues are configured",
+ num);
+
+ if (num == 0) {
+ PMD_INIT_LOG(ERR, "No PF queues are configured to enable RSS");
+ return -ENOTSUP;
+ }
for (i = 0, j = 0; i < hw->func_caps.rss_table_size; i++, j++) {
if (j == num)
@@ -4292,16 +4358,19 @@ i40e_pf_config_rss(struct i40e_pf *pf)
static int
i40e_pf_config_mq_rx(struct i40e_pf *pf)
{
- if (!pf->dev_data->sriov.active) {
- switch (pf->dev_data->dev_conf.rxmode.mq_mode) {
- case ETH_MQ_RX_RSS:
- i40e_pf_config_rss(pf);
- break;
- default:
- i40e_pf_disable_rss(pf);
- break;
- }
+ int ret = 0;
+ enum rte_eth_rx_mq_mode mq_mode = pf->dev_data->dev_conf.rxmode.mq_mode;
+
+ if (mq_mode & ETH_MQ_RX_DCB_FLAG) {
+ PMD_INIT_LOG(ERR, "i40e doesn't support DCB yet");
+ return -ENOTSUP;
}
- return 0;
+ /* RSS setup */
+ if (mq_mode & ETH_MQ_RX_RSS_FLAG)
+ ret = i40e_pf_config_rss(pf);
+ else
+ i40e_pf_disable_rss(pf);
+
+ return ret;
}
diff --git a/lib/librte_pmd_i40e/i40e_ethdev.h b/lib/librte_pmd_i40e/i40e_ethdev.h
index b06de05..9ad5611 100644
--- a/lib/librte_pmd_i40e/i40e_ethdev.h
+++ b/lib/librte_pmd_i40e/i40e_ethdev.h
@@ -305,7 +305,7 @@ struct i40e_adapter {
};
};
-int i40e_vsi_switch_queues(struct i40e_vsi *vsi, bool on);
+int i40e_dev_switch_queues(struct i40e_pf *pf, bool on);
int i40e_vsi_release(struct i40e_vsi *vsi);
struct i40e_vsi *i40e_vsi_setup(struct i40e_pf *pf,
enum i40e_vsi_type type,
@@ -357,7 +357,7 @@ i40e_get_vsi_from_adapter(struct i40e_adapter *adapter)
return pf->main_vsi;
}
}
-#define I40E_DEV_PRIVATE_TO_VSI(adapter) \
+#define I40E_DEV_PRIVATE_TO_MAIN_VSI(adapter) \
i40e_get_vsi_from_adapter((struct i40e_adapter *)adapter)
/* I40E_VSI_TO */
diff --git a/lib/librte_pmd_i40e/i40e_rxtx.c b/lib/librte_pmd_i40e/i40e_rxtx.c
index 099699c..c6facea 100644
--- a/lib/librte_pmd_i40e/i40e_rxtx.c
+++ b/lib/librte_pmd_i40e/i40e_rxtx.c
@@ -1443,14 +1443,58 @@ i40e_xmit_pkts_simple(void *tx_queue,
return nb_tx;
}
+/*
+ * Find the VSI the queue belongs to. 'queue_idx' is the queue index the
+ * application uses, which assumes the queues are sequential. From the
+ * driver's perspective they are not. For example, q0 belongs to the FDIR
+ * VSI, q1-q64 to the MAIN VSI, q65-q96 to SRIOV VSIs and q97-q128 to
+ * VMDQ VSIs. An application running on the host can use q1-q64 and
+ * q97-q128, 96 queues in total, and addresses them with queue_idx 0 to
+ * 95, while the real queue indexes differ. This function maps a
+ * queue_idx to the VSI the queue belongs to.
+ */
+static struct i40e_vsi*
+i40e_pf_get_vsi_by_qindex(struct i40e_pf *pf, uint16_t queue_idx)
+{
+ /* the queue in MAIN VSI range */
+ if (queue_idx < pf->main_vsi->nb_qps)
+ return pf->main_vsi;
+
+ queue_idx -= pf->main_vsi->nb_qps;
+
+ /* queue_idx is greater than VMDQ VSIs range */
+ if (queue_idx > pf->nb_cfg_vmdq_vsi * pf->vmdq_nb_qps - 1) {
+ PMD_INIT_LOG(ERR, "queue_idx out of range. VMDQ configured?");
+ return NULL;
+ }
+
+ return pf->vmdq[queue_idx / pf->vmdq_nb_qps].vsi;
+}
+
+static uint16_t
+i40e_get_queue_offset_by_qindex(struct i40e_pf *pf, uint16_t queue_idx)
+{
+ /* the queue in MAIN VSI range */
+ if (queue_idx < pf->main_vsi->nb_qps)
+ return queue_idx;
+
+ /* It's a VMDQ queue */
+ queue_idx -= pf->main_vsi->nb_qps;
+
+ if (pf->nb_cfg_vmdq_vsi)
+ return queue_idx % pf->vmdq_nb_qps;
+ else {
+ PMD_INIT_LOG(ERR, "Fail to get queue offset");
+ return (uint16_t)(-1);
+ }
+}
+
int
i40e_dev_rx_queue_start(struct rte_eth_dev *dev, uint16_t rx_queue_id)
{
- struct i40e_vsi *vsi = I40E_DEV_PRIVATE_TO_VSI(dev->data->dev_private);
struct i40e_rx_queue *rxq;
int err = -1;
- struct i40e_hw *hw = I40E_VSI_TO_HW(vsi);
- uint16_t q_base = vsi->base_queue;
+ struct i40e_hw *hw = I40E_DEV_PRIVATE_TO_HW(dev->data->dev_private);
PMD_INIT_FUNC_TRACE();
@@ -1468,7 +1512,7 @@ i40e_dev_rx_queue_start(struct rte_eth_dev *dev, uint16_t rx_queue_id)
/* Init the RX tail register. */
I40E_PCI_REG_WRITE(rxq->qrx_tail, rxq->nb_rx_desc - 1);
- err = i40e_switch_rx_queue(hw, rx_queue_id + q_base, TRUE);
+ err = i40e_switch_rx_queue(hw, rxq->reg_idx, TRUE);
if (err) {
PMD_DRV_LOG(ERR, "Failed to switch RX queue %u on",
@@ -1485,16 +1529,18 @@ i40e_dev_rx_queue_start(struct rte_eth_dev *dev, uint16_t rx_queue_id)
int
i40e_dev_rx_queue_stop(struct rte_eth_dev *dev, uint16_t rx_queue_id)
{
- struct i40e_vsi *vsi = I40E_DEV_PRIVATE_TO_VSI(dev->data->dev_private);
struct i40e_rx_queue *rxq;
int err;
- struct i40e_hw *hw = I40E_VSI_TO_HW(vsi);
- uint16_t q_base = vsi->base_queue;
+ struct i40e_hw *hw = I40E_DEV_PRIVATE_TO_HW(dev->data->dev_private);
if (rx_queue_id < dev->data->nb_rx_queues) {
rxq = dev->data->rx_queues[rx_queue_id];
- err = i40e_switch_rx_queue(hw, rx_queue_id + q_base, FALSE);
+ /*
+ * rx_queue_id is the queue id the application refers to, while
+ * rxq->reg_idx is the real queue index.
+ */
+ err = i40e_switch_rx_queue(hw, rxq->reg_idx, FALSE);
if (err) {
PMD_DRV_LOG(ERR, "Failed to switch RX queue %u off",
@@ -1511,15 +1557,20 @@ i40e_dev_rx_queue_stop(struct rte_eth_dev *dev, uint16_t rx_queue_id)
int
i40e_dev_tx_queue_start(struct rte_eth_dev *dev, uint16_t tx_queue_id)
{
- struct i40e_vsi *vsi = I40E_DEV_PRIVATE_TO_VSI(dev->data->dev_private);
int err = -1;
- struct i40e_hw *hw = I40E_VSI_TO_HW(vsi);
- uint16_t q_base = vsi->base_queue;
+ struct i40e_tx_queue *txq;
+ struct i40e_hw *hw = I40E_DEV_PRIVATE_TO_HW(dev->data->dev_private);
PMD_INIT_FUNC_TRACE();
if (tx_queue_id < dev->data->nb_tx_queues) {
- err = i40e_switch_tx_queue(hw, tx_queue_id + q_base, TRUE);
+ txq = dev->data->tx_queues[tx_queue_id];
+
+ /*
+ * tx_queue_id is the queue id the application refers to, while
+ * txq->reg_idx is the real queue index.
+ */
+ err = i40e_switch_tx_queue(hw, txq->reg_idx, TRUE);
if (err)
PMD_DRV_LOG(ERR, "Failed to switch TX queue %u on",
tx_queue_id);
@@ -1531,16 +1582,18 @@ i40e_dev_tx_queue_start(struct rte_eth_dev *dev, uint16_t tx_queue_id)
int
i40e_dev_tx_queue_stop(struct rte_eth_dev *dev, uint16_t tx_queue_id)
{
- struct i40e_vsi *vsi = I40E_DEV_PRIVATE_TO_VSI(dev->data->dev_private);
struct i40e_tx_queue *txq;
int err;
- struct i40e_hw *hw = I40E_VSI_TO_HW(vsi);
- uint16_t q_base = vsi->base_queue;
+ struct i40e_hw *hw = I40E_DEV_PRIVATE_TO_HW(dev->data->dev_private);
if (tx_queue_id < dev->data->nb_tx_queues) {
txq = dev->data->tx_queues[tx_queue_id];
- err = i40e_switch_tx_queue(hw, tx_queue_id + q_base, FALSE);
+ /*
+ * tx_queue_id is the queue id the application refers to, while
+ * txq->reg_idx is the real queue index.
+ */
+ err = i40e_switch_tx_queue(hw, txq->reg_idx, FALSE);
if (err) {
PMD_DRV_LOG(ERR, "Failed to switch TX queue %u off",
@@ -1563,14 +1616,23 @@ i40e_dev_rx_queue_setup(struct rte_eth_dev *dev,
const struct rte_eth_rxconf *rx_conf,
struct rte_mempool *mp)
{
- struct i40e_vsi *vsi = I40E_DEV_PRIVATE_TO_VSI(dev->data->dev_private);
+ struct i40e_vsi *vsi;
+ struct i40e_hw *hw = I40E_DEV_PRIVATE_TO_HW(dev->data->dev_private);
+ struct i40e_pf *pf = I40E_DEV_PRIVATE_TO_PF(dev->data->dev_private);
struct i40e_rx_queue *rxq;
const struct rte_memzone *rz;
uint32_t ring_size;
uint16_t len;
int use_def_burst_func = 1;
- if (!vsi || queue_idx >= vsi->nb_qps) {
+ if (hw->mac.type == I40E_MAC_VF) {
+ struct i40e_vf *vf =
+ I40EVF_DEV_PRIVATE_TO_VF(dev->data->dev_private);
+ vsi = &vf->vsi;
+ } else
+ vsi = i40e_pf_get_vsi_by_qindex(pf, queue_idx);
+
+ if (vsi == NULL) {
PMD_DRV_LOG(ERR, "VSI not available or queue "
"index exceeds the maximum");
return I40E_ERR_PARAM;
@@ -1603,7 +1665,12 @@ i40e_dev_rx_queue_setup(struct rte_eth_dev *dev,
rxq->nb_rx_desc = nb_desc;
rxq->rx_free_thresh = rx_conf->rx_free_thresh;
rxq->queue_id = queue_idx;
- rxq->reg_idx = vsi->base_queue + queue_idx;
+ if (hw->mac.type == I40E_MAC_VF)
+ rxq->reg_idx = queue_idx;
+ else /* PF device */
+ rxq->reg_idx = vsi->base_queue +
+ i40e_get_queue_offset_by_qindex(pf, queue_idx);
+
rxq->port_id = dev->data->port_id;
rxq->crc_len = (uint8_t) ((dev->data->dev_conf.rxmode.hw_strip_crc) ?
0 : ETHER_CRC_LEN);
@@ -1761,13 +1828,22 @@ i40e_dev_tx_queue_setup(struct rte_eth_dev *dev,
unsigned int socket_id,
const struct rte_eth_txconf *tx_conf)
{
- struct i40e_vsi *vsi = I40E_DEV_PRIVATE_TO_VSI(dev->data->dev_private);
+ struct i40e_vsi *vsi;
+ struct i40e_hw *hw = I40E_DEV_PRIVATE_TO_HW(dev->data->dev_private);
+ struct i40e_pf *pf = I40E_DEV_PRIVATE_TO_PF(dev->data->dev_private);
struct i40e_tx_queue *txq;
const struct rte_memzone *tz;
uint32_t ring_size;
uint16_t tx_rs_thresh, tx_free_thresh;
- if (!vsi || queue_idx >= vsi->nb_qps) {
+ if (hw->mac.type == I40E_MAC_VF) {
+ struct i40e_vf *vf =
+ I40EVF_DEV_PRIVATE_TO_VF(dev->data->dev_private);
+ vsi = &vf->vsi;
+ } else
+ vsi = i40e_pf_get_vsi_by_qindex(pf, queue_idx);
+
+ if (vsi == NULL) {
PMD_DRV_LOG(ERR, "VSI is NULL, or queue index (%u) "
"exceeds the maximum", queue_idx);
return I40E_ERR_PARAM;
@@ -1891,7 +1967,12 @@ i40e_dev_tx_queue_setup(struct rte_eth_dev *dev,
txq->hthresh = tx_conf->tx_thresh.hthresh;
txq->wthresh = tx_conf->tx_thresh.wthresh;
txq->queue_id = queue_idx;
- txq->reg_idx = vsi->base_queue + queue_idx;
+ if (hw->mac.type == I40E_MAC_VF)
+ txq->reg_idx = queue_idx;
+ else /* PF device */
+ txq->reg_idx = vsi->base_queue +
+ i40e_get_queue_offset_by_qindex(pf, queue_idx);
+
txq->port_id = dev->data->port_id;
txq->txq_flags = tx_conf->txq_flags;
txq->vsi = vsi;
--
1.7.7.6
^ permalink raw reply [flat|nested] 45+ messages in thread
* Re: [dpdk-dev] [PATCH 0/6] i40e VMDQ support
2014-09-23 13:14 [dpdk-dev] [PATCH 0/6] i40e VMDQ support Chen Jing D(Mark)
` (5 preceding siblings ...)
2014-09-23 13:14 ` [dpdk-dev] [PATCH 6/6] i40e: Add full VMDQ pools support Chen Jing D(Mark)
@ 2014-10-10 10:45 ` Ananyev, Konstantin
2014-10-14 8:27 ` Chen, Jing D
2014-10-21 3:30 ` Cao, Min
8 siblings, 0 replies; 45+ messages in thread
From: Ananyev, Konstantin @ 2014-10-10 10:45 UTC (permalink / raw)
To: Chen, Jing D, dev
> -----Original Message-----
> From: dev [mailto:dev-bounces@dpdk.org] On Behalf Of Chen Jing D(Mark)
> Sent: Tuesday, September 23, 2014 2:14 PM
> To: dev@dpdk.org
> Subject: [dpdk-dev] [PATCH 0/6] i40e VMDQ support
>
> From: "Chen Jing D(Mark)" <jing.d.chen@intel.com>
>
> Define extra VMDQ arguments to expand the VMDQ configuration. This also
> includes changes in the igb and ixgbe PMD drivers. Meanwhile, fix two
> defects in the rte_ether library.
>
> Add full VMDQ support in the i40e PMD driver: rename some functions and
> set up the VMDQ VSI after it is enabled in the application. It also
> improves macaddr add/delete to support setting multiple MAC addresses
> for a single pool or for multiple pools.
>
> Finally, change i40e rx/tx_queue_setup and dev_start/stop functions to
> configure/switch queues belonging to VMDQ pools.
>
> Chen Jing D(Mark) (6):
> ether: enhancement for VMDQ support
> igb: change for VMDQ arguments expansion
> ixgbe: change for VMDQ arguments expansion
> i40e: add VMDQ support
> i40e: macaddr add/del enhancement
> i40e: Add full VMDQ pools support
>
> config/common_linuxapp | 1 +
> lib/librte_ether/rte_ethdev.c | 12 +-
> lib/librte_ether/rte_ethdev.h | 39 ++-
> lib/librte_pmd_e1000/igb_ethdev.c | 3 +
> lib/librte_pmd_i40e/i40e_ethdev.c | 509 ++++++++++++++++++++++++++---------
> lib/librte_pmd_i40e/i40e_ethdev.h | 21 ++-
> lib/librte_pmd_i40e/i40e_rxtx.c | 125 +++++++--
> lib/librte_pmd_ixgbe/ixgbe_ethdev.c | 1 +
> 8 files changed, 537 insertions(+), 174 deletions(-)
>
> --
> 1.7.7.6
Acked-by: Konstantin Ananyev <konstantin.ananyev at intel.com>
^ permalink raw reply [flat|nested] 45+ messages in thread
* Re: [dpdk-dev] [PATCH 0/6] i40e VMDQ support
2014-09-23 13:14 [dpdk-dev] [PATCH 0/6] i40e VMDQ support Chen Jing D(Mark)
` (6 preceding siblings ...)
2014-10-10 10:45 ` [dpdk-dev] [PATCH 0/6] i40e VMDQ support Ananyev, Konstantin
@ 2014-10-14 8:27 ` Chen, Jing D
2014-10-21 3:30 ` Cao, Min
8 siblings, 0 replies; 45+ messages in thread
From: Chen, Jing D @ 2014-10-14 8:27 UTC (permalink / raw)
To: dev, Thomas Monjalon (thomas.monjalon@6wind.com)
Hi Thomas,
Any comments on the patch below?
-----Original Message-----
From: Chen, Jing D
Sent: Tuesday, September 23, 2014 9:14 PM
To: dev@dpdk.org
Cc: Chen, Jing D
Subject: [PATCH 0/6] i40e VMDQ support
From: "Chen Jing D(Mark)" <jing.d.chen@intel.com>
Define extra VMDQ arguments to expand the VMDQ configuration. This also
includes changes in the igb and ixgbe PMD drivers. Meanwhile, fix two
defects in the rte_ether library.
Add full VMDQ support in the i40e PMD driver: rename some functions and
set up the VMDQ VSI after it is enabled in the application. It also
improves macaddr add/delete to support setting multiple MAC addresses
for a single pool or for multiple pools.
Finally, change i40e rx/tx_queue_setup and dev_start/stop functions to
configure/switch queues belonging to VMDQ pools.
Chen Jing D(Mark) (6):
ether: enhancement for VMDQ support
igb: change for VMDQ arguments expansion
ixgbe: change for VMDQ arguments expansion
i40e: add VMDQ support
i40e: macaddr add/del enhancement
i40e: Add full VMDQ pools support
config/common_linuxapp | 1 +
lib/librte_ether/rte_ethdev.c | 12 +-
lib/librte_ether/rte_ethdev.h | 39 ++-
lib/librte_pmd_e1000/igb_ethdev.c | 3 +
lib/librte_pmd_i40e/i40e_ethdev.c | 509 ++++++++++++++++++++++++++---------
lib/librte_pmd_i40e/i40e_ethdev.h | 21 ++-
lib/librte_pmd_i40e/i40e_rxtx.c | 125 +++++++--
lib/librte_pmd_ixgbe/ixgbe_ethdev.c | 1 +
8 files changed, 537 insertions(+), 174 deletions(-)
--
1.7.7.6
^ permalink raw reply [flat|nested] 45+ messages in thread
* Re: [dpdk-dev] [PATCH 0/6] i40e VMDQ support
2014-09-23 13:14 [dpdk-dev] [PATCH 0/6] i40e VMDQ support Chen Jing D(Mark)
` (7 preceding siblings ...)
2014-10-14 8:27 ` Chen, Jing D
@ 2014-10-21 3:30 ` Cao, Min
8 siblings, 0 replies; 45+ messages in thread
From: Cao, Min @ 2014-10-21 3:30 UTC (permalink / raw)
To: Chen, Jing D, dev
Tested-by: Min Cao <min.cao@intel.com>
This patch has been verified on Fortville and it is ready to be integrated to dpdk.org.
-----Original Message-----
From: dev [mailto:dev-bounces@dpdk.org] On Behalf Of Chen Jing D(Mark)
Sent: Tuesday, September 23, 2014 9:14 PM
To: dev@dpdk.org
Subject: [dpdk-dev] [PATCH 0/6] i40e VMDQ support
From: "Chen Jing D(Mark)" <jing.d.chen@intel.com>
Define extra VMDQ arguments to expand the VMDQ configuration. This also
includes changes in the igb and ixgbe PMD drivers. Meanwhile, fix two
defects in the rte_ether library.
Add full VMDQ support in the i40e PMD driver: rename some functions and
set up the VMDQ VSI after it is enabled in the application. It also
improves macaddr add/delete to support setting multiple MAC addresses
for a single pool or for multiple pools.
Finally, change i40e rx/tx_queue_setup and dev_start/stop functions to
configure/switch queues belonging to VMDQ pools.
Chen Jing D(Mark) (6):
ether: enhancement for VMDQ support
igb: change for VMDQ arguments expansion
ixgbe: change for VMDQ arguments expansion
i40e: add VMDQ support
i40e: macaddr add/del enhancement
i40e: Add full VMDQ pools support
config/common_linuxapp | 1 +
lib/librte_ether/rte_ethdev.c | 12 +-
lib/librte_ether/rte_ethdev.h | 39 ++-
lib/librte_pmd_e1000/igb_ethdev.c | 3 +
lib/librte_pmd_i40e/i40e_ethdev.c | 509 ++++++++++++++++++++++++++---------
lib/librte_pmd_i40e/i40e_ethdev.h | 21 ++-
lib/librte_pmd_i40e/i40e_rxtx.c | 125 +++++++--
lib/librte_pmd_ixgbe/ixgbe_ethdev.c | 1 +
8 files changed, 537 insertions(+), 174 deletions(-)
--
1.7.7.6
^ permalink raw reply [flat|nested] 45+ messages in thread