From: Olivier Matz <olivier.matz@6wind.com>
To: Wenzhuo Lu <wenzhuo.lu@intel.com>, dev@dpdk.org
Subject: Re: [dpdk-dev] [PATCH 3/4] ixgbe: automatic link recovery on VF
Date: Mon, 16 May 2016 14:01:28 +0200 [thread overview]
Message-ID: <5739B698.8010909@6wind.com> (raw)
In-Reply-To: <1462396246-26517-4-git-send-email-wenzhuo.lu@intel.com>
Hi Wenzhuo,
On 05/04/2016 11:10 PM, Wenzhuo Lu wrote:
> When the physical link goes down and later recovers,
> the VF link cannot recover until the user stops and
> starts the port manually.
> This patch implements automatic recovery of the VF
> port.
> The automatic recovery is based on the link up/down
> message received from the PF. When the VF receives the
> link up/down message, it replaces the RX/TX and
> operation functions with fake ones to stop RX/TX
> and any further operation, then resets the VF port.
> After the port is successfully reset, the real
> RX/TX and operation functions are restored.
>
> Signed-off-by: Wenzhuo Lu <wenzhuo.lu@intel.com>
>
> [...]
>
> +void
> +ixgbevf_dev_link_up_down_handler(struct rte_eth_dev *dev)
> +{
> + struct ixgbe_hw *hw = IXGBE_DEV_PRIVATE_TO_HW(dev->data->dev_private);
> + struct ixgbe_adapter *adapter =
> + (struct ixgbe_adapter *)dev->data->dev_private;
> + int diag;
> + uint32_t vteiam;
> +
> + /* Only one working core needs to perform the VF reset */
> + if (rte_spinlock_trylock(&adapter->vf_reset_lock)) {
> + /**
> +  * When the fake recv/xmit functions are installed, a working
> +  * thread may still be running inside the real RX/TX functions,
> +  * so wait long enough to assume all working threads have exited.
> +  * The assumption is that each execution of the RX/TX functions
> +  * takes less than 100us.
> +  */
> + rte_delay_us(100);
> +
> + do {
> + dev->data->dev_started = 0;
> + ixgbevf_dev_stop(dev);
> + rte_delay_us(1000000);
If I understand correctly, ixgbevf_dev_link_up_down_handler() is called
by ixgbevf_recv_pkts_fake() on a dataplane core. This means the
core that acquired the lock will loop for at least 100us + 1 sec.
If this core is also in charge of polling other queues of other
ports, or timers, many packets will be dropped (even with only the
100us loop). I don't think it is acceptable to actively wait inside
an rx function.
I think it would avoid many issues to delegate this work to the
application, maybe by notifying it that the port is in a bad state
and must be restarted. The application could then properly stop
polling the queues, and stop and restart the port in a separate thread,
without bothering the dataplane cores.
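
To make the suggestion concrete, the delegation could look roughly like the
sketch below. This is generic C11, not actual DPDK API: the struct and
function names (port_state, rx_path_notify_link_down, control_thread_poll)
are invented for illustration. The RX path only flags the port atomically
and returns; a separate control thread performs the slow stop/start
sequence, so no dataplane core ever busy-waits.

```c
#include <stdatomic.h>
#include <stdbool.h>

/* Hypothetical per-port state shared between dataplane and control threads. */
struct port_state {
	atomic_bool needs_reset;  /* set by the RX path, consumed by control */
	atomic_int  resets_done;  /* counts completed recoveries */
};

/* Dataplane side: on a link-down notification, just flag the port and
 * return immediately -- no sleeping or locking in the RX function. */
static void
rx_path_notify_link_down(struct port_state *ps)
{
	atomic_store(&ps->needs_reset, true);
}

/* Control-thread side: poll the flag and perform the (slow) stop/start
 * sequence outside the dataplane. Returns true if a reset was performed. */
static bool
control_thread_poll(struct port_state *ps)
{
	bool expected = true;

	/* compare-exchange so two control threads never reset the same
	 * port twice for a single link-down event */
	if (atomic_compare_exchange_strong(&ps->needs_reset,
					   &expected, false)) {
		/* ... here the application would stop polling the queues,
		 * then stop and restart the port ... */
		atomic_fetch_add(&ps->resets_done, 1);
		return true;
	}
	return false;
}
```

In real code the flag would be set from a link-state-change notification,
and the control thread would wait for the dataplane lcores to quiesce
before restarting the port.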
Regards,
Olivier
Thread overview: 10+ messages
2016-05-04 21:10 [dpdk-dev] [PATCH 0/4] automatic link recovery on ixgbe/igb VF Wenzhuo Lu
2016-05-04 21:10 ` [dpdk-dev] [PATCH 1/4] ixgbe: VF supports mailbox interruption for PF link up/down Wenzhuo Lu
2016-05-04 21:10 ` [dpdk-dev] [PATCH 2/4] igb: " Wenzhuo Lu
2016-05-04 21:10 ` [dpdk-dev] [PATCH 3/4] ixgbe: automatic link recovery on VF Wenzhuo Lu
2016-05-16 12:01 ` Olivier Matz [this message]
2016-05-17 1:11 ` Lu, Wenzhuo
2016-05-17 7:50 ` Olivier MATZ
2016-05-17 8:20 ` Lu, Wenzhuo
2016-05-04 21:10 ` [dpdk-dev] [PATCH 4/4] igb: " Wenzhuo Lu
2016-05-24 5:46 ` [dpdk-dev] [PATCH 0/4] automatic link recovery on ixgbe/igb VF Lu, Wenzhuo