From: "Wodkowski, PawelX" <pawelx.wodkowski@intel.com>
To: "Ouyang, Changchun" <changchun.ouyang@intel.com>,
	Vlad Zolotarov <vladz@cloudius-systems.com>,
	"Jastrzebski, MichalX K" <michalx.k.jastrzebski@intel.com>,
	"dev@dpdk.org" <dev@dpdk.org>
Subject: Re: [dpdk-dev] [PATCH 1/2] pmd: add DCB for VF for ixgbe
Date: Wed, 14 Jan 2015 09:46:55 +0000	[thread overview]
Message-ID: <F6F2A6264E145F47A18AB6DF8E87425D12B809FD@IRSMSX102.ger.corp.intel.com> (raw)
In-Reply-To: <F52918179C57134FAEC9EA62FA2F96251196BFB9@shsmsx102.ccr.corp.intel.com>

> > >
> > >   - split nb_q_per_pool to nb_rx_q_per_pool and nb_tx_q_per_pool
> > >
> > >     Rationale:
> > >     rx and tx queue numbers might differ if RX and TX are
> > >     configured in different modes. This allows informing the VF
> > >     about the proper number of queues.
> >
> >
> > Nice move! Ouyang, this is a nice answer to my recent remarks about your
> > PATCH4 in the "Enable VF RSS for Niantic" series.
> 
> After I responded to your last comments, I saw this :-). I am sure we both agree it is
> the right way to resolve it in the VMDQ DCB case.
> 

I am now splitting this patch according to your suggestions and I am a little confused.

In this (DCB in SRIOV) case, the primary reason for splitting nb_q_per_pool into
nb_rx_q_per_pool and nb_tx_q_per_pool was this code:

diff --git a/lib/librte_ether/rte_ethdev.c b/lib/librte_ether/rte_ethdev.c
index af9e261..be3afe4 100644
--- a/lib/librte_ether/rte_ethdev.c
+++ b/lib/librte_ether/rte_ethdev.c
@@ -537,8 +537,8 @@
 		default: /* ETH_MQ_RX_VMDQ_ONLY or ETH_MQ_RX_NONE */
 			/* if nothing mq mode configure, use default scheme */
 			dev->data->dev_conf.rxmode.mq_mode = ETH_MQ_RX_VMDQ_ONLY;
-			if (RTE_ETH_DEV_SRIOV(dev).nb_q_per_pool > 1)
-				RTE_ETH_DEV_SRIOV(dev).nb_q_per_pool = 1;
+			if (RTE_ETH_DEV_SRIOV(dev).nb_rx_q_per_pool > 1)
+				RTE_ETH_DEV_SRIOV(dev).nb_rx_q_per_pool = 1;
 			break;
 		}
 
@@ -553,17 +553,18 @@
 		default: /* ETH_MQ_TX_VMDQ_ONLY or ETH_MQ_TX_NONE */
 			/* if nothing mq mode configure, use default scheme */
 			dev->data->dev_conf.txmode.mq_mode = ETH_MQ_TX_VMDQ_ONLY;
-			if (RTE_ETH_DEV_SRIOV(dev).nb_q_per_pool > 1)
-				RTE_ETH_DEV_SRIOV(dev).nb_q_per_pool = 1;
+			if (RTE_ETH_DEV_SRIOV(dev).nb_tx_q_per_pool > 1)
+				RTE_ETH_DEV_SRIOV(dev).nb_tx_q_per_pool = 1;
 			break;
 		}
 
 		/* check valid queue number */
-		if ((nb_rx_q > RTE_ETH_DEV_SRIOV(dev).nb_q_per_pool) ||
-		    (nb_tx_q > RTE_ETH_DEV_SRIOV(dev).nb_q_per_pool)) {
+		if ((nb_rx_q > RTE_ETH_DEV_SRIOV(dev).nb_rx_q_per_pool) ||
+		    (nb_tx_q > RTE_ETH_DEV_SRIOV(dev).nb_tx_q_per_pool)) {
 			PMD_DEBUG_TRACE("ethdev port_id=%d SRIOV active, "
-				    "queue number must less equal to %d\n",
-					port_id, RTE_ETH_DEV_SRIOV(dev).nb_q_per_pool);
+				    "rx/tx queue number must less equal to %d/%d\n",
+					port_id, RTE_ETH_DEV_SRIOV(dev).nb_rx_q_per_pool,
+					RTE_ETH_DEV_SRIOV(dev).nb_tx_q_per_pool);
 			return (-EINVAL);
 		}
 	} else {
--

This introduced an issue when RX and TX were configured in different modes. The problem was
that RTE_ETH_DEV_SRIOV(dev).nb_q_per_pool is shared between RX and TX but gets
modified, so I made the change above. But when testpmd was adjusted for DCB in SRIOV, there
was another issue. Testpmd pre-configures the ports by default, and since
nb_rx_q_per_pool and nb_tx_q_per_pool had already been reset to 1, there was no way to
use them for DCB in SRIOV. So I made another modification:

> +		uint16_t nb_rx_q_per_pool = RTE_ETH_DEV_SRIOV(dev).nb_rx_q_per_pool;
> +		uint16_t nb_tx_q_per_pool = RTE_ETH_DEV_SRIOV(dev).nb_tx_q_per_pool;
> +
>   		switch (dev_conf->rxmode.mq_mode) {
> -		case ETH_MQ_RX_VMDQ_RSS:
>   		case ETH_MQ_RX_VMDQ_DCB:
> +			break;
> +		case ETH_MQ_RX_VMDQ_RSS:
>   		case ETH_MQ_RX_VMDQ_DCB_RSS:
> -			/* DCB/RSS VMDQ in SRIOV mode, not implement yet */
> +			/* RSS, DCB+RSS VMDQ in SRIOV mode, not implement yet */
>   			PMD_DEBUG_TRACE("ethdev port_id=%" PRIu8
>   					" SRIOV active, "
>   					"unsupported VMDQ mq_mode rx %u\n",
> @@ -537,37 +560,32 @@ rte_eth_dev_check_mq_mode(uint8_t port_id, uint16_t nb_rx_q, uint16_t nb_tx_q,
>   		default: /* ETH_MQ_RX_VMDQ_ONLY or ETH_MQ_RX_NONE */
>   			/* if nothing mq mode configure, use default scheme */
>   			dev->data->dev_conf.rxmode.mq_mode = ETH_MQ_RX_VMDQ_ONLY;
> -			if (RTE_ETH_DEV_SRIOV(dev).nb_q_per_pool > 1)
> -				RTE_ETH_DEV_SRIOV(dev).nb_q_per_pool = 1;
> +			if (nb_rx_q_per_pool > 1)
> +				nb_rx_q_per_pool = 1;
>   			break;
>   		}
>   
>   		switch (dev_conf->txmode.mq_mode) {
> -		case ETH_MQ_TX_VMDQ_DCB:
> -			/* DCB VMDQ in SRIOV mode, not implement yet */
> -			PMD_DEBUG_TRACE("ethdev port_id=%" PRIu8
> -					" SRIOV active, "
> -					"unsupported VMDQ mq_mode tx %u\n",
> -					port_id, dev_conf->txmode.mq_mode);
> -			return (-EINVAL);
> +		case ETH_MQ_TX_VMDQ_DCB: /* DCB VMDQ in SRIOV mode*/
> +			break;
>   		default: /* ETH_MQ_TX_VMDQ_ONLY or ETH_MQ_TX_NONE */
>   			/* if nothing mq mode configure, use default scheme */
>   			dev->data->dev_conf.txmode.mq_mode = ETH_MQ_TX_VMDQ_ONLY;
> -			if (RTE_ETH_DEV_SRIOV(dev).nb_q_per_pool > 1)
> -				RTE_ETH_DEV_SRIOV(dev).nb_q_per_pool = 1;
> +			if (nb_tx_q_per_pool > 1)
> +				nb_tx_q_per_pool = 1;
>   			break;
>   		}
>   
>   		/* check valid queue number */
> -		if ((nb_rx_q > RTE_ETH_DEV_SRIOV(dev).nb_q_per_pool) ||
> -		    (nb_tx_q > RTE_ETH_DEV_SRIOV(dev).nb_q_per_pool)) {
> +		if (nb_rx_q > nb_rx_q_per_pool || nb_tx_q > nb_tx_q_per_pool) {
>   			PMD_DEBUG_TRACE("ethdev port_id=%d SRIOV active, "
> -				    "queue number must less equal to %d\n",
> -					port_id, RTE_ETH_DEV_SRIOV(dev).nb_q_per_pool);
> +				    "rx/tx queue number must less equal to %d/%d\n",
> +					port_id, RTE_ETH_DEV_SRIOV(dev).nb_rx_q_per_pool,
> +					RTE_ETH_DEV_SRIOV(dev).nb_tx_q_per_pool);
>   			return (-EINVAL);
>   		}

At this point I think that splitting RTE_ETH_DEV_SRIOV(dev).nb_q_per_pool might not be
needed. From my point of view (DCB), since nb_q_per_pool stays untouched, I think I can stay with:

> +		uint16_t nb_rx_q_per_pool = RTE_ETH_DEV_SRIOV(dev).nb_q_per_pool;
> +		uint16_t nb_tx_q_per_pool = RTE_ETH_DEV_SRIOV(dev).nb_q_per_pool;
> +
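
To make the point concrete, below is a minimal standalone sketch (plain C, not the actual
rte_ethdev code; the struct, enum and function names are made up for illustration) of the two
configure calls testpmd ends up making: clamping the shared nb_q_per_pool on the first,
default configuration rejects a later DCB-in-SRIOV setup, while clamping only local copies
leaves it possible.

/*
 * Standalone model of the two configure calls, illustrative only.
 */
#include <stdio.h>
#include <stdint.h>

struct sriov_info {
	uint16_t nb_q_per_pool;		/* single shared field */
};

enum mq_mode { MQ_NONE, MQ_VMDQ_DCB };

/* Variant A: mimic the old behaviour - clamp the shared field itself. */
static int check_mq_mode_clamping(struct sriov_info *sriov,
				  enum mq_mode mode, uint16_t nb_rx_q)
{
	if (mode == MQ_NONE && sriov->nb_q_per_pool > 1)
		sriov->nb_q_per_pool = 1;	/* persists across calls */
	return nb_rx_q <= sriov->nb_q_per_pool ? 0 : -1;
}

/* Variant B: proposed approach - clamp a local copy only. */
static int check_mq_mode_local(const struct sriov_info *sriov,
			       enum mq_mode mode, uint16_t nb_rx_q)
{
	uint16_t nb_rx_q_per_pool = sriov->nb_q_per_pool;

	if (mode == MQ_NONE && nb_rx_q_per_pool > 1)
		nb_rx_q_per_pool = 1;		/* shared state untouched */
	return nb_rx_q <= nb_rx_q_per_pool ? 0 : -1;
}

int main(void)
{
	struct sriov_info a = { .nb_q_per_pool = 4 };
	struct sriov_info b = { .nb_q_per_pool = 4 };

	/* 1st call: default pre-configuration, no mq mode requested. */
	check_mq_mode_clamping(&a, MQ_NONE, 1);
	check_mq_mode_local(&b, MQ_NONE, 1);

	/* 2nd call: reconfigure the port for DCB with 4 queues per pool. */
	printf("clamping variant:   DCB reconfig %s (nb_q_per_pool=%u)\n",
	       check_mq_mode_clamping(&a, MQ_VMDQ_DCB, 4) ? "rejected" : "accepted",
	       (unsigned)a.nb_q_per_pool);
	printf("local-copy variant: DCB reconfig %s (nb_q_per_pool=%u)\n",
	       check_mq_mode_local(&b, MQ_VMDQ_DCB, 4) ? "rejected" : "accepted",
	       (unsigned)b.nb_q_per_pool);
	return 0;
}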

What do you think? I noticed that you were discussing some issue with nb_q_per_pool in the
context of the RSS functionality. Could you comment on my doubts with respect to RSS?

Pawel

Thread overview: 10+ messages
2015-01-12 14:43 Michal Jastrzebski
2015-01-12 15:46 ` Jastrzebski, MichalX K
2015-01-13 10:02   ` Vlad Zolotarov
2015-01-13 10:08 ` Vlad Zolotarov
2015-01-14  0:51   ` Ouyang, Changchun
2015-01-14  9:46     ` Wodkowski, PawelX [this message]
2015-01-12 15:50 [dpdk-dev] [PATCH 0/2] Enable DCB in SRIOV mode for ixgbe driver Michal Jastrzebski
2015-01-12 15:50 ` [dpdk-dev] [PATCH 1/2] pmd: add DCB for VF for ixgbe Michal Jastrzebski
2015-01-13 10:14   ` Vlad Zolotarov
2015-01-13 11:00     ` Wodkowski, PawelX
2015-01-14  1:00     ` Ouyang, Changchun
