automatic DPDK test reports
From: sys_stv@intel.com
To: test-report@dpdk.org
Cc: maxime.coquelin@redhat.com
Subject: [dpdk-test-report] |FAILURE| pw32028 [PATCH] vhost_user: protect active rings from async ring changes

Excerpt of the message under test, "Re: [PATCH] vhost_user: protect active
rings from async ring changes" (Maxime Coquelin, 8 Dec 2017):

>> On 12/08/2017 03:14 AM, Tan Jianfeng wrote:
>>>
>>> -----Original Message-----
>>> Re: [PATCH] vhost_user: protect active rings from async ring changes
>>>
>>>> On 12/07/2017 10:33 AM, Tan Jianfeng wrote:
>>>>>
>>>>> -----Original Message-----
>>>>> [PATCH] vhost_user: protect active rings from async ring changes
>>>>>
>>>>>> When performing live migration or memory hot-plugging,
>>>>>> the changes to the device and vrings made by the message handler
>>>>>> are done independently from vring usage by PMD threads.
>>>>>>
>>>>>> This causes, for example, segfauls during live-migration
>>>>>
>>>>> segfauls -> segfaults?
>>>>>
>>>>>> with MQ enabled, but in general virtually any request
>>>>>> sent by qemu changing the state of the device can cause
>>>>>> problems.
>>>>>>
>>>>>> These patches fix all the above issues by adding a spinlock
>>>>>> to every vring and requiring the message handler to start operation
>>>>>> only after ensuring that all PMD threads related to the divece
>>>>>
>>>>> Another typo: divece.
>>>>>
>>>>>> are out of the critical section accessing the vring data.
>>>>>>
>>>>>> Each vring has its own lock in order to not create contention
>>>>>> between PMD threads of different vrings and to prevent
>>>>>> performance degradation by scaling queue pair number.
>>>>>
>>>>> Also wonder how much overhead it brings.
>>>>>
>>>>> Instead of locking each vring, can we just wait a while (10us for
>>>>> example) after calling the destroy_device() callback, so that every
>>>>> PMD thread has enough time to skip out of the critical area?
>>>>
>>>> No, because we are not destroying the device when it is needed.
>>>> Actually, once destroy_device() is called, it is likely that the
>>>> application has taken care that the rings aren't being processed
>>>> anymore before returning from the callback (this is at least the
>>>> case with the Vhost PMD).
>>>
>>> OK, I did not put it the right way, as there are multiple cases above:
>>> migration and memory hot plug. Let me try again:
>>>
>>> Whenever a vhost thread handles a message affecting PMD threads (like
>>> SET_MEM_TABLE, GET_VRING_BASE, etc.), we can remove the dev flag
>>> VIRTIO_DEV_RUNNING and wait for a while so that PMD threads skip out
>>> of those critical areas. After message handling, reset the flag
>>> VIRTIO_DEV_RUNNING.
>>
>> I think you mean clearing vq's enabled flag, because PMD threads never
>> check the VIRTIO_DEV_RUNNING flag.
>
> Ah, yes.
>
>>> I suppose it can work, based on the assumption that PMD threads work
>>> in polling mode and can skip the critical area quickly and inevitably.
>>
>> That sounds very fragile, because if the CPUs aren't perfectly
>> isolated, your PMD thread can be preempted for interrupt handling,
>> for example.
>>
>> Or what if for some reason the PMD thread's CPU stalls for a short
>> while?
>>
>> The latter is unlikely, but if it happens it will be hard to debug.
>>
>> Let's see first the performance impact of using the spinlock. It might
>> not be that important, because 99.9999% of the time it will not even
>> spin.
>
> Fair enough.

I did some benchmarks on my Broadwell test bench (see patch below), and
it seems that, depending on the benchmark, perfs are on par or better
with the spinlock! I guess this is because, with the spinlock, there is
better batching and there are fewer concurrent accesses on the rings,
but I'm not sure.

Please find my results below (CPU E5-2667 v4 @ 3.20GHz):

 Bench             v17.11      v17.11 + spinlock
 ----------------  ----------  -----------------
 PVP Bidir run1    19.29Mpps   19.26Mpps
 PVP Bidir run2    19.26Mpps   19.28Mpps
 TxOnly            18.47Mpps   18.83Mpps
 RxOnly            13.83Mpps   13.83Mpps
 IO Loopback        7.94Mpps    7.97Mpps

Patch:
---------------------------------------------------------

From 7e18cefce682235558fed66a1fb87ab937ec9297 Mon Sep 17 00:00:00 2001
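The patch body itself is not reproduced in this excerpt. For context on the
approach being measured, here is a minimal, self-contained sketch of the
locking pattern under discussion: each vring gets its own lock, the PMD
datapath takes only its ring's lock around the critical section, and the
vhost-user message handler takes every ring's lock before changing device or
ring state. This is an illustration, not the submitted patch; POSIX spinlocks
stand in for DPDK's rte_spinlock_t, and all names (vring_ctx, access_lock,
process_burst, handle_async_message) are made up for the example.

/*
 * Illustrative sketch only, not the submitted patch: it models the idea
 * discussed above of one lock per vring. PMD threads lock only their own
 * ring around the critical section; the vhost-user message handler locks
 * every ring before touching device state, so no datapath thread can be
 * inside a critical section while the state changes. POSIX spinlocks
 * stand in for DPDK's rte_spinlock_t; all names are illustrative.
 */
#include <pthread.h>
#include <stdio.h>

#define NR_VRINGS 4

struct vring_ctx {
	pthread_spinlock_t access_lock; /* per-vring lock, as proposed */
	unsigned long processed;        /* stand-in for real ring state */
};

static struct vring_ctx vrings[NR_VRINGS];

/* Datapath side: what a PMD thread would do for one burst on its ring. */
static void process_burst(struct vring_ctx *vq)
{
	pthread_spin_lock(&vq->access_lock);  /* uncontended almost always */
	vq->processed++;                      /* touch ring state safely */
	pthread_spin_unlock(&vq->access_lock);
}

/*
 * Control path: a message handler (e.g. for SET_MEM_TABLE) locks all
 * rings first, updates state, then releases the locks.
 */
static void handle_async_message(void)
{
	int i;

	for (i = 0; i < NR_VRINGS; i++)
		pthread_spin_lock(&vrings[i].access_lock);

	/* ... safely update device/vring state here ... */

	for (i = NR_VRINGS - 1; i >= 0; i--)
		pthread_spin_unlock(&vrings[i].access_lock);
}

int main(void)
{
	int i;

	for (i = 0; i < NR_VRINGS; i++)
		pthread_spin_init(&vrings[i].access_lock,
				  PTHREAD_PROCESS_PRIVATE);

	process_burst(&vrings[0]);  /* datapath */
	handle_async_message();     /* control path */

	for (i = 0; i < NR_VRINGS; i++)
		printf("vring %d: %lu burst(s) processed\n",
		       i, vrings[i].processed);

	return 0;
}

Build with, e.g., "gcc sketch.c -lpthread". Since the per-ring lock is taken
once per burst and is almost never contended, the expected datapath cost is a
single uncontended lock/unlock pair, which is consistent with the benchmark
numbers above showing parity or a small gain with the spinlock.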
Date: 08 Dec 2017 05:57:35 -0800
Message-ID: <b81db2$fpisn@orsmga002.jf.intel.com>


Test-Label: Intel-compilation
Test-Status: FAILURE
http://dpdk.org/patch/32028

_apply patch file failure_

Submitter: Maxime Coquelin <maxime.coquelin@redhat.com>
Date: Fri, 8 Dec 2017 11:11:40 +0100
      Fri, 8 Dec 2017 04:21:25 -0500
DPDK git baseline: Repo:dpdk-next-eventdev, Branch:master, CommitID:c2d12ce63d4d9ea274093fe8c42fe806287fef8a
                   Repo:dpdk-next-crypto, Branch:master, CommitID:224374cc0e3ca44af5141fb7035a97f338d00c18
                   Repo:dpdk-next-net, Branch:master, CommitID:ce7eab40845ec3bd097b1071f12e3d9d6809fc0a
                   Repo:dpdk-next-virtio, Branch:master, CommitID:4f761c94b520ce71c56be1c913e54a4179b81c43
                   Repo:dpdk, Branch:master, CommitID:224374cc0e3ca44af5141fb7035a97f338d00c18
                   
Apply patch file failed:
Repo: dpdk
32028:
patching file lib/librte_vhost/vhost.c
Hunk #1 FAILED at 219.
1 out of 1 hunk FAILED -- saving rejects to file lib/librte_vhost/vhost.c.rej
patching file lib/librte_vhost/vhost.h
Hunk #1 FAILED at 131.
1 out of 1 hunk FAILED -- saving rejects to file lib/librte_vhost/vhost.h.rej
patching file lib/librte_vhost/virtio_net.c
Hunk #1 FAILED at 329.
Hunk #2 FAILED at 419.
Hunk #3 FAILED at 654.
Hunk #4 FAILED at 715.
Hunk #5 FAILED at 1183.
Hunk #6 FAILED at 1356.
6 out of 6 hunks FAILED -- saving rejects to file lib/librte_vhost/virtio_net.c.rej

Repo: dpdk-next-crypto
32028:
patching file lib/librte_vhost/vhost.c
Hunk #1 FAILED at 219.
1 out of 1 hunk FAILED -- saving rejects to file lib/librte_vhost/vhost.c.rej
patching file lib/librte_vhost/vhost.h
Hunk #1 FAILED at 131.
1 out of 1 hunk FAILED -- saving rejects to file lib/librte_vhost/vhost.h.rej
patching file lib/librte_vhost/virtio_net.c
Hunk #1 FAILED at 329.
Hunk #2 FAILED at 419.
Hunk #3 FAILED at 654.
Hunk #4 FAILED at 715.
Hunk #5 FAILED at 1183.
Hunk #6 FAILED at 1356.
6 out of 6 hunks FAILED -- saving rejects to file lib/librte_vhost/virtio_net.c.rej

Repo: dpdk-next-net
32028:
patching file lib/librte_vhost/vhost.c
Hunk #1 FAILED at 219.
1 out of 1 hunk FAILED -- saving rejects to file lib/librte_vhost/vhost.c.rej
patching file lib/librte_vhost/vhost.h
Hunk #1 FAILED at 131.
1 out of 1 hunk FAILED -- saving rejects to file lib/librte_vhost/vhost.h.rej
patching file lib/librte_vhost/virtio_net.c
Hunk #1 FAILED at 329.
Hunk #2 FAILED at 419.
Hunk #3 FAILED at 654.
Hunk #4 FAILED at 715.
Hunk #5 FAILED at 1183.
Hunk #6 FAILED at 1356.
6 out of 6 hunks FAILED -- saving rejects to file lib/librte_vhost/virtio_net.c.rej

Repo: dpdk-next-virtio
32028:
patching file lib/librte_vhost/vhost.c
Hunk #1 FAILED at 219.
1 out of 1 hunk FAILED -- saving rejects to file lib/librte_vhost/vhost.c.rej
patching file lib/librte_vhost/vhost.h
Hunk #1 FAILED at 131.
1 out of 1 hunk FAILED -- saving rejects to file lib/librte_vhost/vhost.h.rej
patching file lib/librte_vhost/virtio_net.c
Hunk #1 FAILED at 329.
Hunk #2 FAILED at 419.
Hunk #3 FAILED at 654.
Hunk #4 FAILED at 715.
Hunk #5 FAILED at 1183.
Hunk #6 FAILED at 1356.
6 out of 6 hunks FAILED -- saving rejects to file lib/librte_vhost/virtio_net.c.rej

Repo: dpdk-next-eventdev
32028:
patching file lib/librte_vhost/vhost.c
Hunk #1 FAILED at 219.
1 out of 1 hunk FAILED -- saving rejects to file lib/librte_vhost/vhost.c.rej
patching file lib/librte_vhost/vhost.h
Hunk #1 FAILED at 131.
1 out of 1 hunk FAILED -- saving rejects to file lib/librte_vhost/vhost.h.rej
patching file lib/librte_vhost/virtio_net.c
Hunk #1 FAILED at 329.
Hunk #2 FAILED at 419.
Hunk #3 FAILED at 654.
Hunk #4 FAILED at 715.
Hunk #5 FAILED at 1183.
Hunk #6 FAILED at 1356.
6 out of 6 hunks FAILED -- saving rejects to file lib/librte_vhost/virtio_net.c.rej


DPDK STV team
