DPDK patches and discussions
* [dpdk-dev] [PATCH 0/2] examples/vhost: support new VMDQ api and new nic i40e in vhost example
@ 2014-11-12 22:34 Huawei Xie
  2014-11-12 22:34 ` [dpdk-dev] [PATCH 1/2] examples/vhost: support new VMDQ API and new nic i40e Huawei Xie
                   ` (3 more replies)
  0 siblings, 4 replies; 13+ messages in thread
From: Huawei Xie @ 2014-11-12 22:34 UTC (permalink / raw)
  To: dev

I40E has several different types of VSI, and queues are allocated among them. Because of this, the VMDQ queue base and pool base do not start from zero, and VMDQ does not own all queues.
The rte_eth_dev_info structure is extended to provide the VMDQ queue base, pool base, and queue number, so that we can properly set up VMDQ, i.e., add MAC/VLAN filters.
This patchset enables the vhost example to use this information to set up VMDQ.

Huawei Xie (2):
  support new VMDQ API and new nic i40e
  use factorized default Rx/Tx configuration

 examples/vhost/main.c | 103 ++++++++++++++++++++------------------------------
 1 file changed, 41 insertions(+), 62 deletions(-)

-- 
1.8.1.4

^ permalink raw reply	[flat|nested] 13+ messages in thread

* [dpdk-dev] [PATCH 1/2] examples/vhost: support new VMDQ API and new nic i40e
  2014-11-12 22:34 [dpdk-dev] [PATCH 0/2] examples/vhost: support new VMDQ api and new nic i40e in vhost example Huawei Xie
@ 2014-11-12 22:34 ` Huawei Xie
  2014-11-13  0:49   ` Ouyang, Changchun
  2014-11-13  5:58   ` Chen, Jing D
  2014-11-12 22:34 ` [dpdk-dev] [PATCH 2/2] examples/vhost: use factorized default Rx/Tx configuration Huawei Xie
                   ` (2 subsequent siblings)
  3 siblings, 2 replies; 13+ messages in thread
From: Huawei Xie @ 2014-11-12 22:34 UTC (permalink / raw)
  To: dev

In Niantic, if VMDQ mode is set, all queues are allocated to VMDQ in DPDK.
In I40E, only a configured part of contiguous queues is allocated to VMDQ.
The rte_eth_dev_info structure is extended to provide the VMDQ queue base, queue number, and VMDQ pool base information.
This patch supports the new VMDQ API in the vhost example.

FIXME in PMD:
 * MAC addresses that were added are flushed at rte_eth_dev_start.
 * selectively setting up only part of the queues is not well supported.

Signed-off-by: Huawei Xie <huawei.xie@intel.com>
---
 examples/vhost/main.c | 25 +++++++++++++++++++------
 1 file changed, 19 insertions(+), 6 deletions(-)

diff --git a/examples/vhost/main.c b/examples/vhost/main.c
index a93f7a0..2b1bf02 100644
--- a/examples/vhost/main.c
+++ b/examples/vhost/main.c
@@ -53,7 +53,7 @@
 
 #include "main.h"
 
-#define MAX_QUEUES 128
+#define MAX_QUEUES 256
 
 /* the maximum number of external ports supported */
 #define MAX_SUP_PORTS 1
@@ -282,6 +282,9 @@ static struct rte_eth_conf vmdq_conf_default = {
 static unsigned lcore_ids[RTE_MAX_LCORE];
 static uint8_t ports[RTE_MAX_ETHPORTS];
 static unsigned num_ports = 0; /**< The number of ports specified in command line */
+static uint16_t num_pf_queues, num_vmdq_queues;
+static uint16_t vmdq_pool_base, vmdq_queue_base;
+static uint16_t queues_per_pool;
 
 static const uint16_t external_pkt_default_vlan_tag = 2000;
 const uint16_t vlan_tags[] = {
@@ -417,7 +420,6 @@ port_init(uint8_t port)
 
 	/*configure the number of supported virtio devices based on VMDQ limits */
 	num_devices = dev_info.max_vmdq_pools;
-	num_queues = dev_info.max_rx_queues;
 
 	if (zero_copy) {
 		rx_ring_size = num_rx_descriptor;
@@ -437,10 +439,19 @@ port_init(uint8_t port)
 	retval = get_eth_conf(&port_conf, num_devices);
 	if (retval < 0)
 		return retval;
+	/* NIC queues are divided into pf queues and vmdq queues.  */
+	num_pf_queues = dev_info.max_rx_queues - dev_info.vmdq_queue_num;
+	queues_per_pool = dev_info.vmdq_queue_num / dev_info.max_vmdq_pools;
+	num_vmdq_queues = num_devices * queues_per_pool;
+	num_queues = num_pf_queues + num_vmdq_queues;
+	vmdq_queue_base = dev_info.vmdq_queue_base;
+	vmdq_pool_base  = dev_info.vmdq_pool_base;
+	printf("pf queue num: %u, configured vmdq pool num: %u, each vmdq pool has %u queues\n",
+		num_pf_queues, num_devices, queues_per_pool);
 
 	if (port >= rte_eth_dev_count()) return -1;
 
-	rx_rings = (uint16_t)num_queues,
+	rx_rings = (uint16_t)dev_info.max_rx_queues;
 	/* Configure ethernet device. */
 	retval = rte_eth_dev_configure(port, rx_rings, tx_rings, &port_conf);
 	if (retval != 0)
@@ -931,7 +942,8 @@ link_vmdq(struct vhost_dev *vdev, struct rte_mbuf *m)
 		vdev->vlan_tag);
 
 	/* Register the MAC address. */
-	ret = rte_eth_dev_mac_addr_add(ports[0], &vdev->mac_address, (uint32_t)dev->device_fh);
+	ret = rte_eth_dev_mac_addr_add(ports[0], &vdev->mac_address,
+				(uint32_t)dev->device_fh + vmdq_pool_base);
 	if (ret)
 		RTE_LOG(ERR, VHOST_DATA, "(%"PRIu64") Failed to add device MAC address to VMDQ\n",
 					dev->device_fh);
@@ -2602,7 +2614,7 @@ new_device (struct virtio_net *dev)
 	ll_dev->vdev = vdev;
 	add_data_ll_entry(&ll_root_used, ll_dev);
 	vdev->vmdq_rx_q
-		= dev->device_fh * (num_queues / num_devices);
+		= dev->device_fh * queues_per_pool + vmdq_queue_base;
 
 	if (zero_copy) {
 		uint32_t index = vdev->vmdq_rx_q;
@@ -2837,7 +2849,8 @@ MAIN(int argc, char *argv[])
 	unsigned lcore_id, core_id = 0;
 	unsigned nb_ports, valid_num_ports;
 	int ret;
-	uint8_t portid, queue_id = 0;
+	uint8_t portid;
+	uint16_t queue_id;
 	static pthread_t tid;
 
 	/* init EAL */
-- 
1.8.1.4


* [dpdk-dev] [PATCH 2/2] examples/vhost: use factorized default Rx/Tx configuration
  2014-11-12 22:34 [dpdk-dev] [PATCH 0/2] examples/vhost: support new VMDQ api and new nic i40e in vhost example Huawei Xie
  2014-11-12 22:34 ` [dpdk-dev] [PATCH 1/2] examples/vhost: support new VMDQ API and new nic i40e Huawei Xie
@ 2014-11-12 22:34 ` Huawei Xie
  2014-11-13  6:02   ` Chen, Jing D
  2014-11-12 22:52 ` [dpdk-dev] [PATCH 0/2] examples/vhost: support new VMDQ api and new nic i40e in vhost example Xie, Huawei
  2014-12-05 10:51 ` Fu, JingguoX
  3 siblings, 1 reply; 13+ messages in thread
From: Huawei Xie @ 2014-11-12 22:34 UTC (permalink / raw)
  To: dev

Refer to Pablo's commit:
    "use factorized default Rx/Tx configuration

    For apps that were using default rte_eth_rxconf and rte_eth_txconf
    structures, these have been removed and now they are obtained by
    calling rte_eth_dev_info_get, just before setting up RX/TX queues."

Also move zero copy's deferred-start setup ahead, so it happens before the queues are configured.

Signed-off-by: Huawei Xie <huawei.xie@intel.com>
---
 examples/vhost/main.c | 78 +++++++++++++++------------------------------------
 1 file changed, 22 insertions(+), 56 deletions(-)

diff --git a/examples/vhost/main.c b/examples/vhost/main.c
index 2b1bf02..fa36913 100644
--- a/examples/vhost/main.c
+++ b/examples/vhost/main.c
@@ -79,25 +79,6 @@
 	+ RTE_PKTMBUF_HEADROOM)
 #define MBUF_CACHE_SIZE_ZCP 0
 
-/*
- * RX and TX Prefetch, Host, and Write-back threshold values should be
- * carefully set for optimal performance. Consult the network
- * controller's datasheet and supporting DPDK documentation for guidance
- * on how these parameters should be set.
- */
-#define RX_PTHRESH 8 /* Default values of RX prefetch threshold reg. */
-#define RX_HTHRESH 8 /* Default values of RX host threshold reg. */
-#define RX_WTHRESH 4 /* Default values of RX write-back threshold reg. */
-
-/*
- * These default values are optimized for use with the Intel(R) 82599 10 GbE
- * Controller and the DPDK ixgbe PMD. Consider using other values for other
- * network controllers and/or network drivers.
- */
-#define TX_PTHRESH 36 /* Default values of TX prefetch threshold reg. */
-#define TX_HTHRESH 0  /* Default values of TX host threshold reg. */
-#define TX_WTHRESH 0  /* Default values of TX write-back threshold reg. */
-
 #define MAX_PKT_BURST 32 		/* Max burst size for RX/TX */
 #define BURST_TX_DRAIN_US 100 	/* TX drain every ~100us */
 
@@ -217,32 +198,6 @@ static uint32_t burst_rx_retry_num = BURST_RX_RETRIES;
 /* Character device basename. Can be set by user. */
 static char dev_basename[MAX_BASENAME_SZ] = "vhost-net";
 
-
-/* Default configuration for rx and tx thresholds etc. */
-static struct rte_eth_rxconf rx_conf_default = {
-	.rx_thresh = {
-		.pthresh = RX_PTHRESH,
-		.hthresh = RX_HTHRESH,
-		.wthresh = RX_WTHRESH,
-	},
-	.rx_drop_en = 1,
-};
-
-/*
- * These default values are optimized for use with the Intel(R) 82599 10 GbE
- * Controller and the DPDK ixgbe/igb PMD. Consider using other values for other
- * network controllers and/or network drivers.
- */
-static struct rte_eth_txconf tx_conf_default = {
-	.tx_thresh = {
-		.pthresh = TX_PTHRESH,
-		.hthresh = TX_HTHRESH,
-		.wthresh = TX_WTHRESH,
-	},
-	.tx_free_thresh = 0, /* Use PMD default values */
-	.tx_rs_thresh = 0, /* Use PMD default values */
-};
-
 /* empty vmdq configuration structure. Filled in programatically */
 static struct rte_eth_conf vmdq_conf_default = {
 	.rxmode = {
@@ -410,7 +365,9 @@ port_init(uint8_t port)
 {
 	struct rte_eth_dev_info dev_info;
 	struct rte_eth_conf port_conf;
-	uint16_t rx_rings, tx_rings;
+	struct rte_eth_rxconf *rxconf;
+	struct rte_eth_txconf *txconf;
+	int16_t rx_rings, tx_rings;
 	uint16_t rx_ring_size, tx_ring_size;
 	int retval;
 	uint16_t q;
@@ -418,6 +375,21 @@ port_init(uint8_t port)
 	/* The max pool number from dev_info will be used to validate the pool number specified in cmd line */
 	rte_eth_dev_info_get (port, &dev_info);
 
+	rxconf = &dev_info.default_rxconf;
+	txconf = &dev_info.default_txconf;
+	rxconf->rx_drop_en = 1;
+
+	/*
+	 * Zero copy defers queue RX/TX start to the time when guest
+	 * finishes its startup and packet buffers from that guest are
+	 * available.
+	 */
+	if (zero_copy) {
+		rxconf->rx_deferred_start = 1;
+		rxconf->rx_drop_en = 0;
+		txconf->tx_deferred_start = 1;
+	}
+
 	/*configure the number of supported virtio devices based on VMDQ limits */
 	num_devices = dev_info.max_vmdq_pools;
 
@@ -460,14 +432,16 @@ port_init(uint8_t port)
 	/* Setup the queues. */
 	for (q = 0; q < rx_rings; q ++) {
 		retval = rte_eth_rx_queue_setup(port, q, rx_ring_size,
-						rte_eth_dev_socket_id(port), &rx_conf_default,
+						rte_eth_dev_socket_id(port),
+						rxconf,
 						vpool_array[q].pool);
 		if (retval < 0)
 			return retval;
 	}
 	for (q = 0; q < tx_rings; q ++) {
 		retval = rte_eth_tx_queue_setup(port, q, tx_ring_size,
-						rte_eth_dev_socket_id(port), &tx_conf_default);
+						rte_eth_dev_socket_id(port),
+						txconf);
 		if (retval < 0)
 			return retval;
 	}
@@ -2920,14 +2894,6 @@ MAIN(int argc, char *argv[])
 		char pool_name[RTE_MEMPOOL_NAMESIZE];
 		char ring_name[RTE_MEMPOOL_NAMESIZE];
 
-		/*
-		 * Zero copy defers queue RX/TX start to the time when guest
-		 * finishes its startup and packet buffers from that guest are
-		 * available.
-		 */
-		rx_conf_default.rx_deferred_start = (uint8_t)zero_copy;
-		rx_conf_default.rx_drop_en = 0;
-		tx_conf_default.tx_deferred_start = (uint8_t)zero_copy;
 		nb_mbuf = num_rx_descriptor
 			+ num_switching_cores * MBUF_CACHE_SIZE_ZCP
 			+ num_switching_cores * MAX_PKT_BURST;
-- 
1.8.1.4


* Re: [dpdk-dev] [PATCH 0/2] examples/vhost: support new VMDQ api and new nic i40e in vhost example
  2014-11-12 22:34 [dpdk-dev] [PATCH 0/2] examples/vhost: support new VMDQ api and new nic i40e in vhost example Huawei Xie
  2014-11-12 22:34 ` [dpdk-dev] [PATCH 1/2] examples/vhost: support new VMDQ API and new nic i40e Huawei Xie
  2014-11-12 22:34 ` [dpdk-dev] [PATCH 2/2] examples/vhost: use factorized default Rx/Tx configuration Huawei Xie
@ 2014-11-12 22:52 ` Xie, Huawei
  2014-12-06 10:16   ` Thomas Monjalon
  2014-12-05 10:51 ` Fu, JingguoX
  3 siblings, 1 reply; 13+ messages in thread
From: Xie, Huawei @ 2014-11-12 22:52 UTC (permalink / raw)
  To: Xie, Huawei, dev

This patch depends on the vlan filter set fix.

> -----Original Message-----
> From: dev [mailto:dev-bounces@dpdk.org] On Behalf Of Huawei Xie
> Sent: Wednesday, November 12, 2014 3:34 PM
> To: dev@dpdk.org
> Subject: [dpdk-dev] [PATCH 0/2] examples/vhost: support new VMDQ api and
> new nic i40e in vhost example
> 
> I40E has several different types of VSI, and queues are allocated among them.
> Because of this, the VMDQ queue base and pool base do not start from zero,
> and VMDQ does not own all queues.
> The rte_eth_dev_info structure is extended to provide the VMDQ queue base,
> pool base, and queue number, so that we can properly set up VMDQ, i.e., add
> MAC/VLAN filters.
> This patchset enables the vhost example to use this information to set up VMDQ.
> 
> Huawei Xie (2):
>   support new VMDQ API and new nic i40e
>   use factorized default Rx/Tx configuration
> 
>  examples/vhost/main.c | 103 ++++++++++++++++++++------------------------------
>  1 file changed, 41 insertions(+), 62 deletions(-)
> 
> --
> 1.8.1.4


* Re: [dpdk-dev] [PATCH 1/2] examples/vhost: support new VMDQ API and new nic i40e
  2014-11-12 22:34 ` [dpdk-dev] [PATCH 1/2] examples/vhost: support new VMDQ API and new nic i40e Huawei Xie
@ 2014-11-13  0:49   ` Ouyang, Changchun
  2014-11-13  1:20     ` Xie, Huawei
  2014-11-13  5:58   ` Chen, Jing D
  1 sibling, 1 reply; 13+ messages in thread
From: Ouyang, Changchun @ 2014-11-13  0:49 UTC (permalink / raw)
  To: Xie, Huawei, dev



> -----Original Message-----
> From: dev [mailto:dev-bounces@dpdk.org] On Behalf Of Huawei Xie
> Sent: Thursday, November 13, 2014 6:34 AM
> To: dev@dpdk.org
> Subject: [dpdk-dev] [PATCH 1/2] examples/vhost: support new VMDQ API
> and new nic i40e
> 
> In Niantic, if VMDQ mode is set, all queues are allocated to VMDQ in DPDK.
> In I40E, only a configured part of contiguous queues is allocated to VMDQ.
> The rte_eth_dev_info structure is extended to provide the VMDQ queue base,
> queue number, and VMDQ pool base information.
> This patch supports the new VMDQ API in the vhost example.
> 
> FIXME in PMD:
>  * added mac address will be flushed at rte_eth_dev_start.
>  * we don't support selectively setting up queues well.
> 
> Signed-off-by: Huawei Xie <huawei.xie@intel.com>
> ---
>  examples/vhost/main.c | 25 +++++++++++++++++++------
>  1 file changed, 19 insertions(+), 6 deletions(-)
> 
> diff --git a/examples/vhost/main.c b/examples/vhost/main.c index
> a93f7a0..2b1bf02 100644
> --- a/examples/vhost/main.c
> +++ b/examples/vhost/main.c
> @@ -53,7 +53,7 @@
> 
>  #include "main.h"
> 
> -#define MAX_QUEUES 128
> +#define MAX_QUEUES 256
> 
>  /* the maximum number of external ports supported */  #define
> MAX_SUP_PORTS 1 @@ -282,6 +282,9 @@ static struct rte_eth_conf
> vmdq_conf_default = {  static unsigned lcore_ids[RTE_MAX_LCORE];  static
> uint8_t ports[RTE_MAX_ETHPORTS];  static unsigned num_ports = 0; /**<
> The number of ports specified in command line */
> +static uint16_t num_pf_queues, num_vmdq_queues; static uint16_t
> +vmdq_pool_base, vmdq_queue_base; static uint16_t queues_per_pool;
> 
>  static const uint16_t external_pkt_default_vlan_tag = 2000;  const uint16_t
> vlan_tags[] = { @@ -417,7 +420,6 @@ port_init(uint8_t port)
> 
>  	/*configure the number of supported virtio devices based on VMDQ
> limits */
>  	num_devices = dev_info.max_vmdq_pools;
> -	num_queues = dev_info.max_rx_queues;
> 
>  	if (zero_copy) {
>  		rx_ring_size = num_rx_descriptor;
> @@ -437,10 +439,19 @@ port_init(uint8_t port)
>  	retval = get_eth_conf(&port_conf, num_devices);
>  	if (retval < 0)
>  		return retval;
> +	/* NIC queues are divided into pf queues and vmdq queues.  */
> +	num_pf_queues = dev_info.max_rx_queues -
> dev_info.vmdq_queue_num;
> +	queues_per_pool = dev_info.vmdq_queue_num /
> dev_info.max_vmdq_pools;
> +	num_vmdq_queues = num_devices * queues_per_pool;
> +	num_queues = num_pf_queues + num_vmdq_queues;
> +	vmdq_queue_base = dev_info.vmdq_queue_base;
> +	vmdq_pool_base  = dev_info.vmdq_pool_base;
> +	printf("pf queue num: %u, configured vmdq pool num: %u, each
> vmdq pool has %u queues\n",
> +		num_pf_queues, num_devices, queues_per_pool);
>

It would be better to use RTE_LOG instead of printf here.
 
>  	if (port >= rte_eth_dev_count()) return -1;
> 
> -	rx_rings = (uint16_t)num_queues,
> +	rx_rings = (uint16_t)dev_info.max_rx_queues;
>  	/* Configure ethernet device. */
>  	retval = rte_eth_dev_configure(port, rx_rings, tx_rings, &port_conf);
>  	if (retval != 0)
> @@ -931,7 +942,8 @@ link_vmdq(struct vhost_dev *vdev, struct rte_mbuf
> *m)
>  		vdev->vlan_tag);
> 
>  	/* Register the MAC address. */
> -	ret = rte_eth_dev_mac_addr_add(ports[0], &vdev->mac_address,
> (uint32_t)dev->device_fh);
> +	ret = rte_eth_dev_mac_addr_add(ports[0], &vdev->mac_address,
> +				(uint32_t)dev->device_fh +
> vmdq_pool_base);
>  	if (ret)
>  		RTE_LOG(ERR, VHOST_DATA, "(%"PRIu64") Failed to add
> device MAC address to VMDQ\n",
>  					dev->device_fh);
> @@ -2602,7 +2614,7 @@ new_device (struct virtio_net *dev)
>  	ll_dev->vdev = vdev;
>  	add_data_ll_entry(&ll_root_used, ll_dev);
>  	vdev->vmdq_rx_q
> -		= dev->device_fh * (num_queues / num_devices);
> +		= dev->device_fh * queues_per_pool + vmdq_queue_base;
> 
>  	if (zero_copy) {
>  		uint32_t index = vdev->vmdq_rx_q;
> @@ -2837,7 +2849,8 @@ MAIN(int argc, char *argv[])
>  	unsigned lcore_id, core_id = 0;
>  	unsigned nb_ports, valid_num_ports;
>  	int ret;
> -	uint8_t portid, queue_id = 0;
> +	uint8_t portid;
> +	uint16_t queue_id;

If the max queue count is 256 and queue_id varies from 0 to 255, then uint8_t is enough to hold it.
Is there any other consideration for changing it to uint16_t?

>  	static pthread_t tid;
> 
>  	/* init EAL */
> --
> 1.8.1.4


* Re: [dpdk-dev] [PATCH 1/2] examples/vhost: support new VMDQ API and new nic i40e
  2014-11-13  0:49   ` Ouyang, Changchun
@ 2014-11-13  1:20     ` Xie, Huawei
  0 siblings, 0 replies; 13+ messages in thread
From: Xie, Huawei @ 2014-11-13  1:20 UTC (permalink / raw)
  To: Ouyang, Changchun, dev



> -----Original Message-----
> From: Ouyang, Changchun
> Sent: Wednesday, November 12, 2014 5:50 PM
> To: Xie, Huawei; dev@dpdk.org
> Cc: Ouyang, Changchun
> Subject: RE: [dpdk-dev] [PATCH 1/2] examples/vhost: support new VMDQ API
> and new nic i40e
> 
> 
> 
> > -----Original Message-----
> > From: dev [mailto:dev-bounces@dpdk.org] On Behalf Of Huawei Xie
> > Sent: Thursday, November 13, 2014 6:34 AM
> > To: dev@dpdk.org
> > Subject: [dpdk-dev] [PATCH 1/2] examples/vhost: support new VMDQ API
> > and new nic i40e
> >
> > In Niantic, if VMDQ mode is set, all queues are allocated to VMDQ in DPDK.
> > In I40E, only a configured part of contiguous queues is allocated to VMDQ.
> > The rte_eth_dev_info structure is extended to provide the VMDQ queue base,
> > queue number, and VMDQ pool base information.
> > This patch supports the new VMDQ API in the vhost example.
> >
> > FIXME in PMD:
> >  * added mac address will be flushed at rte_eth_dev_start.
> >  * we don't support selectively setting up queues well.
> >
> > Signed-off-by: Huawei Xie <huawei.xie@intel.com>
> > ---
> >  examples/vhost/main.c | 25 +++++++++++++++++++------
> >  1 file changed, 19 insertions(+), 6 deletions(-)
> >
> > diff --git a/examples/vhost/main.c b/examples/vhost/main.c index
> > a93f7a0..2b1bf02 100644
> > --- a/examples/vhost/main.c
> > +++ b/examples/vhost/main.c
> > @@ -53,7 +53,7 @@
> >
> >  #include "main.h"
> >
> > -#define MAX_QUEUES 128
> > +#define MAX_QUEUES 256
> >
> >  /* the maximum number of external ports supported */  #define
> > MAX_SUP_PORTS 1 @@ -282,6 +282,9 @@ static struct rte_eth_conf
> > vmdq_conf_default = {  static unsigned lcore_ids[RTE_MAX_LCORE];  static
> > uint8_t ports[RTE_MAX_ETHPORTS];  static unsigned num_ports = 0; /**<
> > The number of ports specified in command line */
> > +static uint16_t num_pf_queues, num_vmdq_queues; static uint16_t
> > +vmdq_pool_base, vmdq_queue_base; static uint16_t queues_per_pool;
> >
> >  static const uint16_t external_pkt_default_vlan_tag = 2000;  const uint16_t
> > vlan_tags[] = { @@ -417,7 +420,6 @@ port_init(uint8_t port)
> >
> >  	/*configure the number of supported virtio devices based on VMDQ
> > limits */
> >  	num_devices = dev_info.max_vmdq_pools;
> > -	num_queues = dev_info.max_rx_queues;
> >
> >  	if (zero_copy) {
> >  		rx_ring_size = num_rx_descriptor;
> > @@ -437,10 +439,19 @@ port_init(uint8_t port)
> >  	retval = get_eth_conf(&port_conf, num_devices);
> >  	if (retval < 0)
> >  		return retval;
> > +	/* NIC queues are divided into pf queues and vmdq queues.  */
> > +	num_pf_queues = dev_info.max_rx_queues -
> > dev_info.vmdq_queue_num;
> > +	queues_per_pool = dev_info.vmdq_queue_num /
> > dev_info.max_vmdq_pools;
> > +	num_vmdq_queues = num_devices * queues_per_pool;
> > +	num_queues = num_pf_queues + num_vmdq_queues;
> > +	vmdq_queue_base = dev_info.vmdq_queue_base;
> > +	vmdq_pool_base  = dev_info.vmdq_pool_base;
> > +	printf("pf queue num: %u, configured vmdq pool num: %u, each
> > vmdq pool has %u queues\n",
> > +		num_pf_queues, num_devices, queues_per_pool);
> >
> 
> Better to use RTE_LOG to replace printf.
> 
> >  	if (port >= rte_eth_dev_count()) return -1;
> >
> > -	rx_rings = (uint16_t)num_queues,
> > +	rx_rings = (uint16_t)dev_info.max_rx_queues;
> >  	/* Configure ethernet device. */
> >  	retval = rte_eth_dev_configure(port, rx_rings, tx_rings, &port_conf);
> >  	if (retval != 0)
> > @@ -931,7 +942,8 @@ link_vmdq(struct vhost_dev *vdev, struct rte_mbuf
> > *m)
> >  		vdev->vlan_tag);
> >
> >  	/* Register the MAC address. */
> > -	ret = rte_eth_dev_mac_addr_add(ports[0], &vdev->mac_address,
> > (uint32_t)dev->device_fh);
> > +	ret = rte_eth_dev_mac_addr_add(ports[0], &vdev->mac_address,
> > +				(uint32_t)dev->device_fh +
> > vmdq_pool_base);
> >  	if (ret)
> >  		RTE_LOG(ERR, VHOST_DATA, "(%"PRIu64") Failed to add
> > device MAC address to VMDQ\n",
> >  					dev->device_fh);
> > @@ -2602,7 +2614,7 @@ new_device (struct virtio_net *dev)
> >  	ll_dev->vdev = vdev;
> >  	add_data_ll_entry(&ll_root_used, ll_dev);
> >  	vdev->vmdq_rx_q
> > -		= dev->device_fh * (num_queues / num_devices);
> > +		= dev->device_fh * queues_per_pool + vmdq_queue_base;
> >
> >  	if (zero_copy) {
> >  		uint32_t index = vdev->vmdq_rx_q;
> > @@ -2837,7 +2849,8 @@ MAIN(int argc, char *argv[])
> >  	unsigned lcore_id, core_id = 0;
> >  	unsigned nb_ports, valid_num_ports;
> >  	int ret;
> > -	uint8_t portid, queue_id = 0;
> > +	uint8_t portid;
> > +	uint16_t queue_id;
> 
> If the max queue count is 256 and queue_id varies from 0 to 255, then uint8_t
> is enough to hold it.
> Is there any other consideration for changing it to uint16_t?
queue_id is compared with MAX_QUEUES + 1, which will always be false for a uint8_t.
Check the patch; I couldn't copy the code here.
> 
> >  	static pthread_t tid;
> >
> >  	/* init EAL */
> > --
> > 1.8.1.4


* Re: [dpdk-dev] [PATCH 1/2] examples/vhost: support new VMDQ API and new nic i40e
  2014-11-12 22:34 ` [dpdk-dev] [PATCH 1/2] examples/vhost: support new VMDQ API and new nic i40e Huawei Xie
  2014-11-13  0:49   ` Ouyang, Changchun
@ 2014-11-13  5:58   ` Chen, Jing D
  2014-11-14  6:30     ` Xie, Huawei
  1 sibling, 1 reply; 13+ messages in thread
From: Chen, Jing D @ 2014-11-13  5:58 UTC (permalink / raw)
  To: Xie, Huawei, dev

Hi,

> -----Original Message-----
> From: Xie, Huawei
> Sent: Thursday, November 13, 2014 6:34 AM
> To: dev@dpdk.org
> Cc: Chen, Jing D; Xie, Huawei
> Subject: [PATCH 1/2] examples/vhost: support new VMDQ API and new nic
> i40e
> 
> In Niantic, if VMDQ mode is set, all queues are allocated to VMDQ in DPDK.
> In I40E, only a configured part of contiguous queues is allocated to VMDQ.
> The rte_eth_dev_info structure is extended to provide the VMDQ queue base,
> queue number, and VMDQ pool base information.
> This patch supports the new VMDQ API in the vhost example.
> 
> FIXME in PMD:
>  * added mac address will be flushed at rte_eth_dev_start.
>  * we don't support selectively setting up queues well.
> 
> Signed-off-by: Huawei Xie <huawei.xie@intel.com>
> ---
>  examples/vhost/main.c | 25 +++++++++++++++++++------
>  1 file changed, 19 insertions(+), 6 deletions(-)
> 
> diff --git a/examples/vhost/main.c b/examples/vhost/main.c
> index a93f7a0..2b1bf02 100644
> --- a/examples/vhost/main.c
> +++ b/examples/vhost/main.c
> @@ -53,7 +53,7 @@
> 
>  #include "main.h"
> 
> -#define MAX_QUEUES 128
> +#define MAX_QUEUES 256
> 
>  /* the maximum number of external ports supported */
>  #define MAX_SUP_PORTS 1
> @@ -282,6 +282,9 @@ static struct rte_eth_conf vmdq_conf_default = {
>  static unsigned lcore_ids[RTE_MAX_LCORE];
>  static uint8_t ports[RTE_MAX_ETHPORTS];
>  static unsigned num_ports = 0; /**< The number of ports specified in
> command line */
> +static uint16_t num_pf_queues, num_vmdq_queues;
> +static uint16_t vmdq_pool_base, vmdq_queue_base;
> +static uint16_t queues_per_pool;
> 
>  static const uint16_t external_pkt_default_vlan_tag = 2000;
>  const uint16_t vlan_tags[] = {
> @@ -417,7 +420,6 @@ port_init(uint8_t port)
> 
>  	/*configure the number of supported virtio devices based on VMDQ
> limits */
>  	num_devices = dev_info.max_vmdq_pools;
> -	num_queues = dev_info.max_rx_queues;
> 
>  	if (zero_copy) {
>  		rx_ring_size = num_rx_descriptor;
> @@ -437,10 +439,19 @@ port_init(uint8_t port)
>  	retval = get_eth_conf(&port_conf, num_devices);
>  	if (retval < 0)
>  		return retval;
> +	/* NIC queues are divided into pf queues and vmdq queues.  */
> +	num_pf_queues = dev_info.max_rx_queues -
> dev_info.vmdq_queue_num;
> +	queues_per_pool = dev_info.vmdq_queue_num /
> dev_info.max_vmdq_pools;
> +	num_vmdq_queues = num_devices * queues_per_pool;
> +	num_queues = num_pf_queues + num_vmdq_queues;
> +	vmdq_queue_base = dev_info.vmdq_queue_base;
> +	vmdq_pool_base  = dev_info.vmdq_pool_base;
> +	printf("pf queue num: %u, configured vmdq pool num: %u, each
> vmdq pool has %u queues\n",
> +		num_pf_queues, num_devices, queues_per_pool);
> 
>  	if (port >= rte_eth_dev_count()) return -1;
> 
> -	rx_rings = (uint16_t)num_queues,
> +	rx_rings = (uint16_t)dev_info.max_rx_queues;

You removed the line 'num_queues = dev_info.max_rx_queues' and now calculate
'num_queues' with a different equation, so I assume you expect the two values may differ.
Why, then, do you assign dev_info.max_rx_queues to rx_rings again? Wouldn't it be better to use 'num_queues'?

>  	/* Configure ethernet device. */
>  	retval = rte_eth_dev_configure(port, rx_rings, tx_rings, &port_conf);
>  	if (retval != 0)
> @@ -931,7 +942,8 @@ link_vmdq(struct vhost_dev *vdev, struct rte_mbuf
> *m)
>  		vdev->vlan_tag);
> 
>  	/* Register the MAC address. */
> -	ret = rte_eth_dev_mac_addr_add(ports[0], &vdev->mac_address,
> (uint32_t)dev->device_fh);
> +	ret = rte_eth_dev_mac_addr_add(ports[0], &vdev->mac_address,
> +				(uint32_t)dev->device_fh +
> vmdq_pool_base);
>  	if (ret)
>  		RTE_LOG(ERR, VHOST_DATA, "(%"PRIu64") Failed to add
> device MAC address to VMDQ\n",
>  					dev->device_fh);
> @@ -2602,7 +2614,7 @@ new_device (struct virtio_net *dev)
>  	ll_dev->vdev = vdev;
>  	add_data_ll_entry(&ll_root_used, ll_dev);
>  	vdev->vmdq_rx_q
> -		= dev->device_fh * (num_queues / num_devices);
> +		= dev->device_fh * queues_per_pool + vmdq_queue_base;
> 
>  	if (zero_copy) {
>  		uint32_t index = vdev->vmdq_rx_q;
> @@ -2837,7 +2849,8 @@ MAIN(int argc, char *argv[])
>  	unsigned lcore_id, core_id = 0;
>  	unsigned nb_ports, valid_num_ports;
>  	int ret;
> -	uint8_t portid, queue_id = 0;
> +	uint8_t portid;
> +	uint16_t queue_id;
>  	static pthread_t tid;
> 
>  	/* init EAL */
> --
> 1.8.1.4


* Re: [dpdk-dev] [PATCH 2/2] examples/vhost: use factorized default Rx/Tx configuration
  2014-11-12 22:34 ` [dpdk-dev] [PATCH 2/2] examples/vhost: use factorized default Rx/Tx configuration Huawei Xie
@ 2014-11-13  6:02   ` Chen, Jing D
  2014-11-14  2:17     ` Xie, Huawei
  0 siblings, 1 reply; 13+ messages in thread
From: Chen, Jing D @ 2014-11-13  6:02 UTC (permalink / raw)
  To: Xie, Huawei, dev

Hi,

> -----Original Message-----
> From: Xie, Huawei
> Sent: Thursday, November 13, 2014 6:34 AM
> To: dev@dpdk.org
> Cc: Chen, Jing D; Xie, Huawei
> Subject: [PATCH 2/2] examples/vhost: use factorized default Rx/Tx
> configuration
> 
> Refer to Pablo's commit:
>     "use factorized default Rx/Tx configuration
> 
>     For apps that were using default rte_eth_rxconf and rte_eth_txconf
>     structures, these have been removed and now they are obtained by
>     calling rte_eth_dev_info_get, just before setting up RX/TX queues."
> 
> move zero copy's deferred start set up ahead.
> 
> Signed-off-by: Huawei Xie <huawei.xie@intel.com>
> ---
>  examples/vhost/main.c | 78 +++++++++++++++----------------------------------
> --
>  1 file changed, 22 insertions(+), 56 deletions(-)
> 
> diff --git a/examples/vhost/main.c b/examples/vhost/main.c
> index 2b1bf02..fa36913 100644
> --- a/examples/vhost/main.c
> +++ b/examples/vhost/main.c
> @@ -79,25 +79,6 @@
>  	+ RTE_PKTMBUF_HEADROOM)
>  #define MBUF_CACHE_SIZE_ZCP 0
> 
> -/*
> - * RX and TX Prefetch, Host, and Write-back threshold values should be
> - * carefully set for optimal performance. Consult the network
> - * controller's datasheet and supporting DPDK documentation for guidance
> - * on how these parameters should be set.
> - */
> -#define RX_PTHRESH 8 /* Default values of RX prefetch threshold reg. */
> -#define RX_HTHRESH 8 /* Default values of RX host threshold reg. */
> -#define RX_WTHRESH 4 /* Default values of RX write-back threshold reg. */
> -
> -/*
> - * These default values are optimized for use with the Intel(R) 82599 10 GbE
> - * Controller and the DPDK ixgbe PMD. Consider using other values for other
> - * network controllers and/or network drivers.
> - */
> -#define TX_PTHRESH 36 /* Default values of TX prefetch threshold reg. */
> -#define TX_HTHRESH 0  /* Default values of TX host threshold reg. */
> -#define TX_WTHRESH 0  /* Default values of TX write-back threshold reg. */
> -
>  #define MAX_PKT_BURST 32 		/* Max burst size for RX/TX */
>  #define BURST_TX_DRAIN_US 100 	/* TX drain every ~100us */
> 
> @@ -217,32 +198,6 @@ static uint32_t burst_rx_retry_num =
> BURST_RX_RETRIES;
>  /* Character device basename. Can be set by user. */
>  static char dev_basename[MAX_BASENAME_SZ] = "vhost-net";
> 
> -
> -/* Default configuration for rx and tx thresholds etc. */
> -static struct rte_eth_rxconf rx_conf_default = {
> -	.rx_thresh = {
> -		.pthresh = RX_PTHRESH,
> -		.hthresh = RX_HTHRESH,
> -		.wthresh = RX_WTHRESH,
> -	},
> -	.rx_drop_en = 1,
> -};
> -
> -/*
> - * These default values are optimized for use with the Intel(R) 82599 10 GbE
> - * Controller and the DPDK ixgbe/igb PMD. Consider using other values for
> other
> - * network controllers and/or network drivers.
> - */
> -static struct rte_eth_txconf tx_conf_default = {
> -	.tx_thresh = {
> -		.pthresh = TX_PTHRESH,
> -		.hthresh = TX_HTHRESH,
> -		.wthresh = TX_WTHRESH,
> -	},
> -	.tx_free_thresh = 0, /* Use PMD default values */
> -	.tx_rs_thresh = 0, /* Use PMD default values */
> -};
> -
>  /* empty vmdq configuration structure. Filled in programatically */
>  static struct rte_eth_conf vmdq_conf_default = {
>  	.rxmode = {
> @@ -410,7 +365,9 @@ port_init(uint8_t port)
>  {
>  	struct rte_eth_dev_info dev_info;
>  	struct rte_eth_conf port_conf;
> -	uint16_t rx_rings, tx_rings;
> +	struct rte_eth_rxconf *rxconf;
> +	struct rte_eth_txconf *txconf;
> +	int16_t rx_rings, tx_rings;
>  	uint16_t rx_ring_size, tx_ring_size;
>  	int retval;
>  	uint16_t q;
> @@ -418,6 +375,21 @@ port_init(uint8_t port)
>  	/* The max pool number from dev_info will be used to validate the
> pool number specified in cmd line */
>  	rte_eth_dev_info_get (port, &dev_info);
> 
> +	rxconf = &dev_info.default_rxconf;
> +	txconf = &dev_info.default_txconf;
> +	rxconf->rx_drop_en = 1;
> +
> +	/*
> +	 * Zero copy defers queue RX/TX start to the time when guest
> +	 * finishes its startup and packet buffers from that guest are
> +	 * available.
> +	 */
> +	if (zero_copy) {
> +		rxconf->rx_deferred_start = 1;
> +		rxconf->rx_drop_en = 0;
> +		txconf->tx_deferred_start = 1;
> +	}
> +

May I know why 'rx_drop_en' is cleared when 'zero_copy' is set?

>  	/*configure the number of supported virtio devices based on VMDQ
> limits */
>  	num_devices = dev_info.max_vmdq_pools;
> 
> @@ -460,14 +432,16 @@ port_init(uint8_t port)
>  	/* Setup the queues. */
>  	for (q = 0; q < rx_rings; q ++) {
>  		retval = rte_eth_rx_queue_setup(port, q, rx_ring_size,
> -						rte_eth_dev_socket_id(port),
> &rx_conf_default,
> +						rte_eth_dev_socket_id(port),
> +						rxconf,
>  						vpool_array[q].pool);
>  		if (retval < 0)
>  			return retval;
>  	}
>  	for (q = 0; q < tx_rings; q ++) {
>  		retval = rte_eth_tx_queue_setup(port, q, tx_ring_size,
> -						rte_eth_dev_socket_id(port),
> &tx_conf_default);
> +						rte_eth_dev_socket_id(port),
> +						txconf);
>  		if (retval < 0)
>  			return retval;
>  	}
> @@ -2920,14 +2894,6 @@ MAIN(int argc, char *argv[])
>  		char pool_name[RTE_MEMPOOL_NAMESIZE];
>  		char ring_name[RTE_MEMPOOL_NAMESIZE];
> 
> -		/*
> -		 * Zero copy defers queue RX/TX start to the time when
> guest
> -		 * finishes its startup and packet buffers from that guest are
> -		 * available.
> -		 */
> -		rx_conf_default.rx_deferred_start = (uint8_t)zero_copy;
> -		rx_conf_default.rx_drop_en = 0;
> -		tx_conf_default.tx_deferred_start = (uint8_t)zero_copy;
>  		nb_mbuf = num_rx_descriptor
>  			+ num_switching_cores * MBUF_CACHE_SIZE_ZCP
>  			+ num_switching_cores * MAX_PKT_BURST;
> --
> 1.8.1.4

^ permalink raw reply	[flat|nested] 13+ messages in thread

* Re: [dpdk-dev] [PATCH 2/2] examples/vhost: use factorized default Rx/Tx configuration
  2014-11-13  6:02   ` Chen, Jing D
@ 2014-11-14  2:17     ` Xie, Huawei
  0 siblings, 0 replies; 13+ messages in thread
From: Xie, Huawei @ 2014-11-14  2:17 UTC (permalink / raw)
  To: Chen, Jing D, dev



> -----Original Message-----
> From: Chen, Jing D
> Sent: Wednesday, November 12, 2014 11:02 PM
> To: Xie, Huawei; dev@dpdk.org
> Subject: RE: [PATCH 2/2] examples/vhost: use factorized default Rx/Tx
> configuration
> 
> Hi,
> 
> > -----Original Message-----
> > From: Xie, Huawei
> > Sent: Thursday, November 13, 2014 6:34 AM
> > To: dev@dpdk.org
> > Cc: Chen, Jing D; Xie, Huawei
> > Subject: [PATCH 2/2] examples/vhost: use factorized default Rx/Tx
> > configuration
> >
> > Refer to Pablo's commit:
> >     "use factorized default Rx/Tx configuration
> >
> >     For apps that were using default rte_eth_rxconf and rte_eth_txconf
> >     structures, these have been removed and now they are obtained by
> >     calling rte_eth_dev_info_get, just before setting up RX/TX queues."
> >
> > move zero copy's deferred start set up ahead.
> >
> > Signed-off-by: Huawei Xie <huawei.xie@intel.com>
> > ---
> >  examples/vhost/main.c | 78 +++++++++++++++----------------------------------
> > --
> >  1 file changed, 22 insertions(+), 56 deletions(-)
> >
> > diff --git a/examples/vhost/main.c b/examples/vhost/main.c
> > index 2b1bf02..fa36913 100644
> > --- a/examples/vhost/main.c
> > +++ b/examples/vhost/main.c
> > @@ -79,25 +79,6 @@
> >  	+ RTE_PKTMBUF_HEADROOM)
> >  #define MBUF_CACHE_SIZE_ZCP 0
> >
> > -/*
> > - * RX and TX Prefetch, Host, and Write-back threshold values should be
> > - * carefully set for optimal performance. Consult the network
> > - * controller's datasheet and supporting DPDK documentation for guidance
> > - * on how these parameters should be set.
> > - */
> > -#define RX_PTHRESH 8 /* Default values of RX prefetch threshold reg. */
> > -#define RX_HTHRESH 8 /* Default values of RX host threshold reg. */
> > -#define RX_WTHRESH 4 /* Default values of RX write-back threshold reg. */
> > -
> > -/*
> > - * These default values are optimized for use with the Intel(R) 82599 10 GbE
> > - * Controller and the DPDK ixgbe PMD. Consider using other values for other
> > - * network controllers and/or network drivers.
> > - */
> > -#define TX_PTHRESH 36 /* Default values of TX prefetch threshold reg. */
> > -#define TX_HTHRESH 0  /* Default values of TX host threshold reg. */
> > -#define TX_WTHRESH 0  /* Default values of TX write-back threshold reg. */
> > -
> >  #define MAX_PKT_BURST 32 		/* Max burst size for RX/TX */
> >  #define BURST_TX_DRAIN_US 100 	/* TX drain every ~100us */
> >
> > @@ -217,32 +198,6 @@ static uint32_t burst_rx_retry_num =
> > BURST_RX_RETRIES;
> >  /* Character device basename. Can be set by user. */
> >  static char dev_basename[MAX_BASENAME_SZ] = "vhost-net";
> >
> > -
> > -/* Default configuration for rx and tx thresholds etc. */
> > -static struct rte_eth_rxconf rx_conf_default = {
> > -	.rx_thresh = {
> > -		.pthresh = RX_PTHRESH,
> > -		.hthresh = RX_HTHRESH,
> > -		.wthresh = RX_WTHRESH,
> > -	},
> > -	.rx_drop_en = 1,
> > -};
> > -
> > -/*
> > - * These default values are optimized for use with the Intel(R) 82599 10 GbE
> > - * Controller and the DPDK ixgbe/igb PMD. Consider using other values for
> > other
> > - * network controllers and/or network drivers.
> > - */
> > -static struct rte_eth_txconf tx_conf_default = {
> > -	.tx_thresh = {
> > -		.pthresh = TX_PTHRESH,
> > -		.hthresh = TX_HTHRESH,
> > -		.wthresh = TX_WTHRESH,
> > -	},
> > -	.tx_free_thresh = 0, /* Use PMD default values */
> > -	.tx_rs_thresh = 0, /* Use PMD default values */
> > -};
> > -
> >  /* empty vmdq configuration structure. Filled in programatically */
> >  static struct rte_eth_conf vmdq_conf_default = {
> >  	.rxmode = {
> > @@ -410,7 +365,9 @@ port_init(uint8_t port)
> >  {
> >  	struct rte_eth_dev_info dev_info;
> >  	struct rte_eth_conf port_conf;
> > -	uint16_t rx_rings, tx_rings;
> > +	struct rte_eth_rxconf *rxconf;
> > +	struct rte_eth_txconf *txconf;
> > +	int16_t rx_rings, tx_rings;
> >  	uint16_t rx_ring_size, tx_ring_size;
> >  	int retval;
> >  	uint16_t q;
> > @@ -418,6 +375,21 @@ port_init(uint8_t port)
> >  	/* The max pool number from dev_info will be used to validate the
> > pool number specified in cmd line */
> >  	rte_eth_dev_info_get (port, &dev_info);
> >
> > +	rxconf = &dev_info.default_rxconf;
> > +	txconf = &dev_info.default_txconf;
> > +	rxconf->rx_drop_en = 1;
> > +
> > +	/*
> > +	 * Zero copy defers queue RX/TX start to the time when guest
> > +	 * finishes its startup and packet buffers from that guest are
> > +	 * available.
> > +	 */
> > +	if (zero_copy) {
> > +		rxconf->rx_deferred_start = 1;
> > +		rxconf->rx_drop_en = 0;
> > +		txconf->tx_deferred_start = 1;
> > +	}
> > +
> 
> May I know why 'rx_drop_en' is cleared after 'zero_copy' set?
:), first of all, this is not related to this patch. This patch inherits the old rx_drop_en behavior and applies Pablo's change.
Secondly, the RX ring in the zero-copy case has a very limited number of descriptors. Clearing this setting makes the NIC buffer the packets
when there are not enough descriptors.
> 
> >  	/*configure the number of supported virtio devices based on VMDQ
> > limits */
> >  	num_devices = dev_info.max_vmdq_pools;
> >
> > @@ -460,14 +432,16 @@ port_init(uint8_t port)
> >  	/* Setup the queues. */
> >  	for (q = 0; q < rx_rings; q ++) {
> >  		retval = rte_eth_rx_queue_setup(port, q, rx_ring_size,
> > -						rte_eth_dev_socket_id(port),
> > &rx_conf_default,
> > +						rte_eth_dev_socket_id(port),
> > +						rxconf,
> >  						vpool_array[q].pool);
> >  		if (retval < 0)
> >  			return retval;
> >  	}
> >  	for (q = 0; q < tx_rings; q ++) {
> >  		retval = rte_eth_tx_queue_setup(port, q, tx_ring_size,
> > -						rte_eth_dev_socket_id(port),
> > &tx_conf_default);
> > +						rte_eth_dev_socket_id(port),
> > +						txconf);
> >  		if (retval < 0)
> >  			return retval;
> >  	}
> > @@ -2920,14 +2894,6 @@ MAIN(int argc, char *argv[])
> >  		char pool_name[RTE_MEMPOOL_NAMESIZE];
> >  		char ring_name[RTE_MEMPOOL_NAMESIZE];
> >
> > -		/*
> > -		 * Zero copy defers queue RX/TX start to the time when
> > guest
> > -		 * finishes its startup and packet buffers from that guest are
> > -		 * available.
> > -		 */
> > -		rx_conf_default.rx_deferred_start = (uint8_t)zero_copy;
> > -		rx_conf_default.rx_drop_en = 0;
> > -		tx_conf_default.tx_deferred_start = (uint8_t)zero_copy;
> >  		nb_mbuf = num_rx_descriptor
> >  			+ num_switching_cores * MBUF_CACHE_SIZE_ZCP
> >  			+ num_switching_cores * MAX_PKT_BURST;
> > --
> > 1.8.1.4

^ permalink raw reply	[flat|nested] 13+ messages in thread

* Re: [dpdk-dev] [PATCH 1/2] examples/vhost: support new VMDQ API and new nic i40e
  2014-11-13  5:58   ` Chen, Jing D
@ 2014-11-14  6:30     ` Xie, Huawei
  2014-11-14  7:24       ` Chen, Jing D
  0 siblings, 1 reply; 13+ messages in thread
From: Xie, Huawei @ 2014-11-14  6:30 UTC (permalink / raw)
  To: Chen, Jing D, dev



> -----Original Message-----
> From: Chen, Jing D
> Sent: Wednesday, November 12, 2014 10:58 PM
> To: Xie, Huawei; dev@dpdk.org
> Subject: RE: [PATCH 1/2] examples/vhost: support new VMDQ API and new nic
> i40e
> 
> Hi,
> 
> > -----Original Message-----
> > From: Xie, Huawei
> > Sent: Thursday, November 13, 2014 6:34 AM
> > To: dev@dpdk.org
> > Cc: Chen, Jing D; Xie, Huawei
> > Subject: [PATCH 1/2] examples/vhost: support new VMDQ API and new nic
> > i40e
> >
> > In Niantic, if VMDQ mode is set, all queues are allocated to VMDQ in DPDK.
> > In I40E, only configured part of continous queues are allocated to VMDQ.
> > The rte_eth_dev_info structure is extened to provide VMDQ queue base,
> > queue number, and VMDQ pool base information.
> > This patch support the new VMDQ API in vhost example.
> >
> > FIXME in PMD:
> >  * added mac address will be flushed at rte_eth_dev_start.
> >  * we don't support selectively setting up queues well.
> >
> > Signed-off-by: Huawei Xie <huawei.xie@intel.com>
> > ---
> >  examples/vhost/main.c | 25 +++++++++++++++++++------
> >  1 file changed, 19 insertions(+), 6 deletions(-)
> >
> > diff --git a/examples/vhost/main.c b/examples/vhost/main.c
> > index a93f7a0..2b1bf02 100644
> > --- a/examples/vhost/main.c
> > +++ b/examples/vhost/main.c
> > @@ -53,7 +53,7 @@
> >
> >  #include "main.h"
> >
> > -#define MAX_QUEUES 128
> > +#define MAX_QUEUES 256
> >
> >  /* the maximum number of external ports supported */
> >  #define MAX_SUP_PORTS 1
> > @@ -282,6 +282,9 @@ static struct rte_eth_conf vmdq_conf_default = {
> >  static unsigned lcore_ids[RTE_MAX_LCORE];
> >  static uint8_t ports[RTE_MAX_ETHPORTS];
> >  static unsigned num_ports = 0; /**< The number of ports specified in
> > command line */
> > +static uint16_t num_pf_queues, num_vmdq_queues;
> > +static uint16_t vmdq_pool_base, vmdq_queue_base;
> > +static uint16_t queues_per_pool;
> >
> >  static const uint16_t external_pkt_default_vlan_tag = 2000;
> >  const uint16_t vlan_tags[] = {
> > @@ -417,7 +420,6 @@ port_init(uint8_t port)
> >
> >  	/*configure the number of supported virtio devices based on VMDQ
> > limits */
> >  	num_devices = dev_info.max_vmdq_pools;
> > -	num_queues = dev_info.max_rx_queues;
> >
> >  	if (zero_copy) {
> >  		rx_ring_size = num_rx_descriptor;
> > @@ -437,10 +439,19 @@ port_init(uint8_t port)
> >  	retval = get_eth_conf(&port_conf, num_devices);
> >  	if (retval < 0)
> >  		return retval;
> > +	/* NIC queues are divided into pf queues and vmdq queues.  */
> > +	num_pf_queues = dev_info.max_rx_queues -
> > dev_info.vmdq_queue_num;
> > +	queues_per_pool = dev_info.vmdq_queue_num /
> > dev_info.max_vmdq_pools;
> > +	num_vmdq_queues = num_devices * queues_per_pool;
> > +	num_queues = num_pf_queues + num_vmdq_queues;
> > +	vmdq_queue_base = dev_info.vmdq_queue_base;
> > +	vmdq_pool_base  = dev_info.vmdq_pool_base;
> > +	printf("pf queue num: %u, configured vmdq pool num: %u, each
> > vmdq pool has %u queues\n",
> > +		num_pf_queues, num_devices, queues_per_pool);
> >
> >  	if (port >= rte_eth_dev_count()) return -1;
> >
> > -	rx_rings = (uint16_t)num_queues,
> > +	rx_rings = (uint16_t)dev_info.max_rx_queues;
> 
> You removed line 'num_queues = dev_info.max_rx_queues'  and calculate
> 'num_queues'
> with another equation. I assume you thought it may not equals.
> So, why you assign dev_info.max_rx_queues to rx_rings again? Won't it better to
> use 'num_queues'

Actually they are the same here.
We use max_rx_queues just to make it explicit that we initialize all queues rather than
only part of them.
If all PMDs (1G, 10G, i40e) supported selectively initializing queues, we could initialize
only the num_devices queues rather than the total queues, even without initializing the PF queues.
> 
> >  	/* Configure ethernet device. */
> >  	retval = rte_eth_dev_configure(port, rx_rings, tx_rings, &port_conf);
> >  	if (retval != 0)
> > @@ -931,7 +942,8 @@ link_vmdq(struct vhost_dev *vdev, struct rte_mbuf
> > *m)
> >  		vdev->vlan_tag);
> >
> >  	/* Register the MAC address. */
> > -	ret = rte_eth_dev_mac_addr_add(ports[0], &vdev->mac_address,
> > (uint32_t)dev->device_fh);
> > +	ret = rte_eth_dev_mac_addr_add(ports[0], &vdev->mac_address,
> > +				(uint32_t)dev->device_fh +
> > vmdq_pool_base);
> >  	if (ret)
> >  		RTE_LOG(ERR, VHOST_DATA, "(%"PRIu64") Failed to add
> > device MAC address to VMDQ\n",
> >  					dev->device_fh);
> > @@ -2602,7 +2614,7 @@ new_device (struct virtio_net *dev)
> >  	ll_dev->vdev = vdev;
> >  	add_data_ll_entry(&ll_root_used, ll_dev);
> >  	vdev->vmdq_rx_q
> > -		= dev->device_fh * (num_queues / num_devices);
> > +		= dev->device_fh * queues_per_pool + vmdq_queue_base;
> >
> >  	if (zero_copy) {
> >  		uint32_t index = vdev->vmdq_rx_q;
> > @@ -2837,7 +2849,8 @@ MAIN(int argc, char *argv[])
> >  	unsigned lcore_id, core_id = 0;
> >  	unsigned nb_ports, valid_num_ports;
> >  	int ret;
> > -	uint8_t portid, queue_id = 0;
> > +	uint8_t portid;
> > +	uint16_t queue_id;
> >  	static pthread_t tid;
> >
> >  	/* init EAL */
> > --
> > 1.8.1.4

^ permalink raw reply	[flat|nested] 13+ messages in thread

* Re: [dpdk-dev] [PATCH 1/2] examples/vhost: support new VMDQ API and new nic i40e
  2014-11-14  6:30     ` Xie, Huawei
@ 2014-11-14  7:24       ` Chen, Jing D
  0 siblings, 0 replies; 13+ messages in thread
From: Chen, Jing D @ 2014-11-14  7:24 UTC (permalink / raw)
  To: Xie, Huawei, dev



> -----Original Message-----
> From: Xie, Huawei
> Sent: Friday, November 14, 2014 2:31 PM
> To: Chen, Jing D; dev@dpdk.org
> Subject: RE: [PATCH 1/2] examples/vhost: support new VMDQ API and new
> nic i40e
> 
> 
> 
> > -----Original Message-----
> > From: Chen, Jing D
> > Sent: Wednesday, November 12, 2014 10:58 PM
> > To: Xie, Huawei; dev@dpdk.org
> > Subject: RE: [PATCH 1/2] examples/vhost: support new VMDQ API and
> new nic
> > i40e
> >
> > Hi,
> >
> > > -----Original Message-----
> > > From: Xie, Huawei
> > > Sent: Thursday, November 13, 2014 6:34 AM
> > > To: dev@dpdk.org
> > > Cc: Chen, Jing D; Xie, Huawei
> > > Subject: [PATCH 1/2] examples/vhost: support new VMDQ API and new
> nic
> > > i40e
> > >
> > > In Niantic, if VMDQ mode is set, all queues are allocated to VMDQ in
> DPDK.
> > > In I40E, only configured part of continous queues are allocated to VMDQ.
> > > The rte_eth_dev_info structure is extened to provide VMDQ queue base,
> > > queue number, and VMDQ pool base information.
> > > This patch support the new VMDQ API in vhost example.
> > >
> > > FIXME in PMD:
> > >  * added mac address will be flushed at rte_eth_dev_start.
> > >  * we don't support selectively setting up queues well.
> > >
> > > Signed-off-by: Huawei Xie <huawei.xie@intel.com>
> > > ---
> > >  examples/vhost/main.c | 25 +++++++++++++++++++------
> > >  1 file changed, 19 insertions(+), 6 deletions(-)
> > >
> > > diff --git a/examples/vhost/main.c b/examples/vhost/main.c
> > > index a93f7a0..2b1bf02 100644
> > > --- a/examples/vhost/main.c
> > > +++ b/examples/vhost/main.c
> > > @@ -53,7 +53,7 @@
> > >
> > >  #include "main.h"
> > >
> > > -#define MAX_QUEUES 128
> > > +#define MAX_QUEUES 256
> > >
> > >  /* the maximum number of external ports supported */
> > >  #define MAX_SUP_PORTS 1
> > > @@ -282,6 +282,9 @@ static struct rte_eth_conf vmdq_conf_default = {
> > >  static unsigned lcore_ids[RTE_MAX_LCORE];
> > >  static uint8_t ports[RTE_MAX_ETHPORTS];
> > >  static unsigned num_ports = 0; /**< The number of ports specified in
> > > command line */
> > > +static uint16_t num_pf_queues, num_vmdq_queues;
> > > +static uint16_t vmdq_pool_base, vmdq_queue_base;
> > > +static uint16_t queues_per_pool;
> > >
> > >  static const uint16_t external_pkt_default_vlan_tag = 2000;
> > >  const uint16_t vlan_tags[] = {
> > > @@ -417,7 +420,6 @@ port_init(uint8_t port)
> > >
> > >  	/*configure the number of supported virtio devices based on VMDQ
> > > limits */
> > >  	num_devices = dev_info.max_vmdq_pools;
> > > -	num_queues = dev_info.max_rx_queues;
> > >
> > >  	if (zero_copy) {
> > >  		rx_ring_size = num_rx_descriptor;
> > > @@ -437,10 +439,19 @@ port_init(uint8_t port)
> > >  	retval = get_eth_conf(&port_conf, num_devices);
> > >  	if (retval < 0)
> > >  		return retval;
> > > +	/* NIC queues are divided into pf queues and vmdq queues.  */
> > > +	num_pf_queues = dev_info.max_rx_queues -
> > > dev_info.vmdq_queue_num;
> > > +	queues_per_pool = dev_info.vmdq_queue_num /
> > > dev_info.max_vmdq_pools;
> > > +	num_vmdq_queues = num_devices * queues_per_pool;
> > > +	num_queues = num_pf_queues + num_vmdq_queues;
> > > +	vmdq_queue_base = dev_info.vmdq_queue_base;
> > > +	vmdq_pool_base  = dev_info.vmdq_pool_base;
> > > +	printf("pf queue num: %u, configured vmdq pool num: %u, each
> > > vmdq pool has %u queues\n",
> > > +		num_pf_queues, num_devices, queues_per_pool);
> > >
> > >  	if (port >= rte_eth_dev_count()) return -1;
> > >
> > > -	rx_rings = (uint16_t)num_queues,
> > > +	rx_rings = (uint16_t)dev_info.max_rx_queues;
> >
> > You removed line 'num_queues = dev_info.max_rx_queues'  and calculate
> > 'num_queues'
> > with another equation. I assume you thought it may not equals.
> > So, why you assign dev_info.max_rx_queues to rx_rings again? Won't it
> better to
> > use 'num_queues'
> 
> Actually they are the same here.
> We use max_rx_queues just to make it explicit that we initialize all queues
> rather than only part of them.
> If all PMDs (1G, 10G, i40e) supported selectively initializing queues, we
> could initialize only the num_devices queues rather than the total queues,
> even without initializing the PF queues.
> >
> > >  	/* Configure ethernet device. */
> > >  	retval = rte_eth_dev_configure(port, rx_rings, tx_rings, &port_conf);
> > >  	if (retval != 0)
> > > @@ -931,7 +942,8 @@ link_vmdq(struct vhost_dev *vdev, struct
> rte_mbuf
> > > *m)
> > >  		vdev->vlan_tag);
> > >
> > >  	/* Register the MAC address. */
> > > -	ret = rte_eth_dev_mac_addr_add(ports[0], &vdev->mac_address,
> > > (uint32_t)dev->device_fh);
> > > +	ret = rte_eth_dev_mac_addr_add(ports[0], &vdev->mac_address,
> > > +				(uint32_t)dev->device_fh +
> > > vmdq_pool_base);
> > >  	if (ret)
> > >  		RTE_LOG(ERR, VHOST_DATA, "(%"PRIu64") Failed to add
> > > device MAC address to VMDQ\n",
> > >  					dev->device_fh);
> > > @@ -2602,7 +2614,7 @@ new_device (struct virtio_net *dev)
> > >  	ll_dev->vdev = vdev;
> > >  	add_data_ll_entry(&ll_root_used, ll_dev);
> > >  	vdev->vmdq_rx_q
> > > -		= dev->device_fh * (num_queues / num_devices);
> > > +		= dev->device_fh * queues_per_pool + vmdq_queue_base;
> > >
> > >  	if (zero_copy) {
> > >  		uint32_t index = vdev->vmdq_rx_q;
> > > @@ -2837,7 +2849,8 @@ MAIN(int argc, char *argv[])
> > >  	unsigned lcore_id, core_id = 0;
> > >  	unsigned nb_ports, valid_num_ports;
> > >  	int ret;
> > > -	uint8_t portid, queue_id = 0;
> > > +	uint8_t portid;
> > > +	uint16_t queue_id;
> > >  	static pthread_t tid;
> > >
> > >  	/* init EAL */
> > > --
> > > 1.8.1.4

Acked-by : Jing Chen <jing.d.chen@intel.com>

^ permalink raw reply	[flat|nested] 13+ messages in thread

* Re: [dpdk-dev] [PATCH 0/2] examples/vhost: support new VMDQ api and new nic i40e in vhost example
  2014-11-12 22:34 [dpdk-dev] [PATCH 0/2] examples/vhost: support new VMDQ api and new nic i40e in vhost example Huawei Xie
                   ` (2 preceding siblings ...)
  2014-11-12 22:52 ` [dpdk-dev] [PATCH 0/2] examples/vhost: support new VMDQ api and new nic i40e in vhost example Xie, Huawei
@ 2014-12-05 10:51 ` Fu, JingguoX
  3 siblings, 0 replies; 13+ messages in thread
From: Fu, JingguoX @ 2014-12-05 10:51 UTC (permalink / raw)
  To: Xie, Huawei, dev

While validating DPDK 1.8.0 RC2 from dpdk.org, we reported defect Work_Request IXA00388707.
The defect concerns vhost virtio on a 4 x 10G NIC: vhost-switch cannot start up, so none of the one-copy cases on Fortville can be validated. This defect is fixed by these patches. Validation details are below.

Basic Information
        Patch name      examples/vhost: support new VMDQ api and new nic i40e in vhost example
        Brief description about test purpose    Verify the four scenarios for virtio one copy
        Test Flag       Tested-by
        Tester name     jingguox.fu@intel.com
        Test environment
-       OS Environment
-       Compilation (GCC)
-       Hardware Info (CPU & NIC)
-       Virtualization environment /Configure   
OS: Fedora20 3.11.10-301.fc20.x86_64
GCC: gcc version 4.8.3 20140911
CPU: Intel(R) Xeon(R) CPU E5-2680 v2 @ 2.80GHz
NIC: Intel Corporation Ethernet Controller X710 for 10GbE SFP+ [8086:1572]

        Test Tool Chain information     N/A
        Commit ID       6e0248660819b8a0b42725c6c881729daf80739f (after applying the patches)
Detailed Testing information    DPDK SW Configuration: enable CONFIG_RTE_LIBRTE_VHOST in the x86_64-native-linuxapp-gcc configuration
        Test Result Summary     Total 4 cases, 4 passed, 0 failed
        Test Case - name        one vm by dpdk forward
        Test Case - Description Check that vhost-switch can forward packets received on the first virtio to the second virtio, both on the same VM
        Test Case - test sample/application     Start vhost-switch on the host and start testpmd on the guest
On host:
	  taskset -c 8-10 vhost-switch -c 0xf00 -n 4 --huge-dir /mnt/huge --socket-mem 1024,1024 -- -p 1 --mergeable 0 --zero-copy 0 --vm2vm 0
On guest:
	  testpmd -c 0xf -n 4 -- -i --txqflags 0x0f00
Set fwd type tx_first
testpmd>set fwd mac
testpmd>start tx_first

Send packets with vlan id: ether|ip|udp packets
	  Test Case -command / instruction
        Test Case - expected test result packet generator can get the packets from the second virtio
	  
	  Test Case - name        one vm by linux forward
        Test Case - Description Check that vhost-switch can forward packets received on the first virtio to the second virtio, both on the same VM
        Test Case -test sample/application      Start vhost-switch on host, use virtios as Ethernet devices
On host:
	  taskset -c 8-10 vhost-switch -c 0xf00 -n 4 --huge-dir /mnt/huge --socket-mem 1024,1024 -- -p 1 --mergeable 0 --zero-copy 0 --vm2vm 0
On guest:
	  Routing table config

ip addr add 192.168.10.2/24 dev eth1
ip addr add 192.168.20.2/24 dev eth2
ip neigh add 192.168.10.1 lladdr 00:00:00:00:00:01 dev eth1
ip neigh add 192.168.20.1 lladdr 00:00:00:00:00:01 dev eth2
ip link set dev eth1 up
ip link set dev eth2 up

Send packets with vlan id: ether|ip|udp packets
	  Test Case -command / instruction
        Test Case - expected test result packet generator can get the packets from the second virtio


	  Test Case - name        vm to vm by dpdk forward soft switch
        Test Case - Description Check that vhost-switch can forward packets received from the first virtio on VM1 to the second virtio on VM2
        Test Case -test sample/application      Start vhost-switch on host and start testpmd on guests
On host:
	  taskset -c 8-10 vhost-switch -c 0xf00 -n 4 --huge-dir /mnt/huge --socket-mem 1024,1024 -- -p 1 --mergeable 0 --zero-copy 0 --vm2vm 1
On guests:
VM1:
	  ./testpmd -c 0xf -n 4 -- -i --txqflags 0x0f00 --eth-peer=0,00:00:00:00:00:0A

Set fwd type tx_first
testpmd>set fwd mac
testpmd>start tx_first

VM2:
	  testpmd -c 0xf -n 4 -- -i --txqflags 0x0f00
Set fwd type tx_first
testpmd>set fwd mac
testpmd>start tx_first

Send packets without vlan id: ether|ip|udp packets
	  Test Case -command / instruction
        Test Case - expected test result packet generator can get the packets from the vf on VM2


	  Test Case - name        vm to vm by linux forward soft switch
        Test Case - Description Check that vhost-switch can forward packets received from the first virtio on VM1 to the second virtio on VM2
        Test Case -test sample/application      Start vhost-switch on host, use virtios as Ethernet devices
On host:
	  taskset -c 8-10 vhost-switch -c 0xf00 -n 4 --huge-dir /mnt/huge --socket-mem 1024,1024 -- -p 1 --mergeable 0 --zero-copy 0 --vm2vm 1
On guests:
VM1:
	  ip addr add 192.168.2.2/24 dev eth1
	  ip neigh add 192.168.2.1 lladdr 00:00:02:00:00:a1 dev eth1
	  ip link set dev eth1 up 
	  ip addr add 192.168.3.2/24 dev eth0
	  ip neigh add 192.168.3.1 lladdr 52:00:00:54:00:02 dev eth0
	  ip link set dev eth0 up
VM2:
	  ip addr add 192.168.3.2/24 dev eth1
	  ip neigh add 192.168.3.1 lladdr 00:00:02:00:00:a1 dev eth1
	  ip link set dev eth1 up
	  ip addr add 192.168.2.2/24 dev eth0
	  ip neigh add 192.168.2.1 lladdr 00:00:02:00:00:a1 dev eth0
	  ip link set dev eth0 up
	  
	  arp -s 192.168.3.1 00:00:02:0a:0a

Send packets without vlan id: ether|ip|udp packets
	  Test Case -command / instruction
        Test Case - expected test result packet generator can get the packets from the vf on VM2


-----Original Message-----
From: dev [mailto:dev-bounces@dpdk.org] On Behalf Of Huawei Xie
Sent: Thursday, November 13, 2014 06:34
To: dev@dpdk.org
Subject: [dpdk-dev] [PATCH 0/2] examples/vhost: support new VMDQ api and new nic i40e in vhost example

I40E has several different types of VSI and queues are allocated among them. VMDQ queue base and pool base doesn't start from zero due to this change and VMDQ doesn't own all queues.
rte_eth_dev_info structure is extended to provide VMDQ queue base, pool base, queue number information for us to properly set up VMDQ, i.e, add mac/vlan filter.
This patchset enables the vhost example to use this information to set up VMDQ.

Huawei Xie (2):
  support new VMDQ API and new nic i40e
  use factorized default Rx/Tx configuration

 examples/vhost/main.c | 103 ++++++++++++++++++++------------------------------
 1 file changed, 41 insertions(+), 62 deletions(-)

-- 
1.8.1.4

^ permalink raw reply	[flat|nested] 13+ messages in thread

* Re: [dpdk-dev] [PATCH 0/2] examples/vhost: support new VMDQ api and new nic i40e in vhost example
  2014-11-12 22:52 ` [dpdk-dev] [PATCH 0/2] examples/vhost: support new VMDQ api and new nic i40e in vhost example Xie, Huawei
@ 2014-12-06 10:16   ` Thomas Monjalon
  0 siblings, 0 replies; 13+ messages in thread
From: Thomas Monjalon @ 2014-12-06 10:16 UTC (permalink / raw)
  To: Xie, Huawei; +Cc: dev

2014-11-12 22:52, Xie, Huawei:
> This patch depends on the vlan filter set fix.
> 
> > I40E has several different types of VSI and queues are allocated among them.
> > VMDQ queue base and pool base doesn't start from zero due to this change and
> > VMDQ doesn't own all queues.
> > rte_eth_dev_info structure is extended to provide VMDQ queue base, pool base,
> > queue number information for us to properly set up VMDQ, i.e, add mac/vlan
> > filter.
> > This patchset enables the vhost example to use this information to set up VMDQ.
> > 
> > Huawei Xie (2):
> >   support new VMDQ API and new nic i40e
> >   use factorized default Rx/Tx configuration

Applied after vlan filter fix

Thanks
-- 
Thomas

^ permalink raw reply	[flat|nested] 13+ messages in thread

end of thread, other threads:[~2014-12-06 10:17 UTC | newest]

Thread overview: 13+ messages
2014-11-12 22:34 [dpdk-dev] [PATCH 0/2] examples/vhost: support new VMDQ api and new nic i40e in vhost example Huawei Xie
2014-11-12 22:34 ` [dpdk-dev] [PATCH 1/2] examples/vhost: support new VMDQ API and new nic i40e Huawei Xie
2014-11-13  0:49   ` Ouyang, Changchun
2014-11-13  1:20     ` Xie, Huawei
2014-11-13  5:58   ` Chen, Jing D
2014-11-14  6:30     ` Xie, Huawei
2014-11-14  7:24       ` Chen, Jing D
2014-11-12 22:34 ` [dpdk-dev] [PATCH 2/2] examples/vhost: use factorized default Rx/Tx configuration Huawei Xie
2014-11-13  6:02   ` Chen, Jing D
2014-11-14  2:17     ` Xie, Huawei
2014-11-12 22:52 ` [dpdk-dev] [PATCH 0/2] examples/vhost: support new VMDQ api and new nic i40e in vhost example Xie, Huawei
2014-12-06 10:16   ` Thomas Monjalon
2014-12-05 10:51 ` Fu, JingguoX
