DPDK patches and discussions
From: "Steve Shin (jonshin)" <jonshin@cisco.com>
To: Igor Ryzhov <iryzhov@nfware.com>
Cc: "dev@dpdk.org" <dev@dpdk.org>, Ferruh Yigit <ferruh.yigit@intel.com>
Subject: Re: [dpdk-dev] [PATCH v2] ethdev: fix MAC address replay
Date: Mon, 23 Jan 2017 23:19:51 +0000	[thread overview]
Message-ID: <3017401F-ACE6-4C77-920C-F072BFC4044F@cisco.com>
In-Reply-To: <CAF+s_FzUBrCROomzx_nyS5esVoT4frgbXk9UxcE940RaOB=XXg@mail.gmail.com>

Dear Igor,

Yes, you’re right. We need to handle the case (e.g., SR-IOV) where multiple pools are set in the mac_pool_sel array.
A new patch will be uploaded as v3.
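
For illustration (these values are made up, not from the patch): with SR-IOV active, a single mac_pool_sel entry can have several bits set, one per pool the address belongs to, so replaying into def_vmdq_idx alone can miss pools:

/* Illustrative only: address i is a member of pools 2 and 5.
 * Replaying it only into def_vmdq_idx (say, pool 2) would drop pool 5. */
uint64_t pool_mask = (1ULL << 2) | (1ULL << 5);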

Thanks & Regards,
Steve

From: Igor Ryzhov <iryzhov@nfware.com>
Date: Monday, January 23, 2017 at 12:50 AM
To: Steve Shin <jonshin@cisco.com>
Cc: "dev@dpdk.org" <dev@dpdk.org>, Ferruh Yigit <ferruh.yigit@intel.com>
Subject: Re: [PATCH v2] ethdev: fix MAC address replay

Hello Steve,

Thank you for all the fixes.
But I think I noticed one more issue in the MAC replay process.

The pool number is extracted only once:

if (RTE_ETH_DEV_SRIOV(dev).active)
        pool = RTE_ETH_DEV_SRIOV(dev).def_vmdq_idx;

But when a MAC address is added using rte_eth_dev_mac_addr_add(), several different pool numbers can be used.
Shouldn't we extract the pool number for each MAC address separately from the mac_pool_sel array during the restoration process?
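
Something along these lines, as a rough sketch of the idea (untested; the pool_mask local and the 64-pool upper bound are my assumptions, based on mac_pool_sel entries being 64-bit masks):

uint64_t pool_mask;

for (i = 1; i < dev_info.max_mac_addrs; i++) {
        addr = &dev->data->mac_addrs[i];

        /* skip zero address */
        if (is_zero_ether_addr(addr))
                continue;

        /* replay the address into every pool selected for it */
        pool_mask = dev->data->mac_pool_sel[i];
        for (pool = 0; pool < 64; pool++) {
                if (pool_mask & (1ULL << pool))
                        (*dev->dev_ops->mac_addr_add)(dev, addr, i, pool);
        }
}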

Best regards,
Igor

On Sat, Jan 21, 2017 at 1:23 AM, Steve Shin <jonshin@cisco.com> wrote:
This patch fixes a bug in replaying MAC addresses to the hardware
in the rte_eth_dev_config_restore() routine. Replay of the default MAC address is added as well.

Fixes: 4bdefaade6d1 ("ethdev: VMDQ enhancements")

---
v2: Added default MAC replay and code optimization

Signed-off-by: Steve Shin <jonshin@cisco.com>
---
 lib/librte_ether/rte_ethdev.c | 32 ++++++++++++++++----------------
 1 file changed, 16 insertions(+), 16 deletions(-)

diff --git a/lib/librte_ether/rte_ethdev.c b/lib/librte_ether/rte_ethdev.c
index 4790faf..150f350 100644
--- a/lib/librte_ether/rte_ethdev.c
+++ b/lib/librte_ether/rte_ethdev.c
@@ -931,7 +931,7 @@ rte_eth_dev_config_restore(uint8_t port_id)
 {
        struct rte_eth_dev *dev;
        struct rte_eth_dev_info dev_info;
-       struct ether_addr addr;
+       struct ether_addr *addr;
        uint16_t i;
        uint32_t pool = 0;

@@ -942,23 +942,23 @@ rte_eth_dev_config_restore(uint8_t port_id)
        if (RTE_ETH_DEV_SRIOV(dev).active)
                pool = RTE_ETH_DEV_SRIOV(dev).def_vmdq_idx;

-       /* replay MAC address configuration */
-       for (i = 0; i < dev_info.max_mac_addrs; i++) {
-               addr = dev->data->mac_addrs[i];
+       /* replay MAC address configuration including default MAC */
+       if (*dev->dev_ops->mac_addr_set != NULL) {
+               addr = &dev->data->mac_addrs[0];
+               (*dev->dev_ops->mac_addr_set)(dev, addr);
+       }

-               /* skip zero address */
-               if (is_zero_ether_addr(&addr))
-                       continue;
+       if (*dev->dev_ops->mac_addr_add != NULL) {
+               for (i = 1; i < dev_info.max_mac_addrs; i++) {
+                       addr = &dev->data->mac_addrs[i];

-               /* add address to the hardware */
-               if  (*dev->dev_ops->mac_addr_add &&
-                       (dev->data->mac_pool_sel[i] & (1ULL << pool)))
-                       (*dev->dev_ops->mac_addr_add)(dev, &addr, i, pool);
-               else {
-                       RTE_PMD_DEBUG_TRACE("port %d: MAC address array not supported\n",
-                                       port_id);
-                       /* exit the loop but not return an error */
-                       break;
+                       /* skip zero address */
+                       if (is_zero_ether_addr(addr))
+                               continue;
+
+                       /* add address to the hardware */
+                       if (dev->data->mac_pool_sel[i] & (1ULL << pool))
+                               (*dev->dev_ops->mac_addr_add)(dev, addr, i, pool);
                }
        }

--
2.9.3


Thread overview: 17+ messages
2017-01-19 18:47 [dpdk-dev] [PATCH] lib/librte_ether: error handling on " Steve Shin
2017-01-19 19:35 ` Steve Shin (jonshin)
2017-01-19 22:39   ` Igor Ryzhov
2017-01-20  2:30     ` Steve Shin (jonshin)
2017-01-20 12:17       ` Igor Ryzhov
2017-01-20 19:12         ` Steve Shin (jonshin)
2017-01-20 22:23 ` [dpdk-dev] [PATCH v2] ethdev: fix " Steve Shin
2017-01-23  8:50   ` Igor Ryzhov
2017-01-23 23:19     ` Steve Shin (jonshin) [this message]
2017-01-23 23:50   ` [dpdk-dev] [PATCH v3] " Steve Shin
2017-01-24  2:21     ` [dpdk-dev] [PATCH v4] " Steve Shin
2017-01-24 10:09       ` Igor Ryzhov
2017-01-24 13:21         ` Ferruh Yigit
2017-01-24 14:00           ` Igor Ryzhov
2017-01-25 10:25           ` Thomas Monjalon
2017-01-27 17:57       ` [dpdk-dev] [PATCH v5] " Steve Shin
2017-01-30  9:21         ` Thomas Monjalon
