DPDK patches and discussions
From: "Fischetti, Antonio" <antonio.fischetti@intel.com>
To: "Fischetti, Antonio" <antonio.fischetti@intel.com>,
	"Bie, Tiwei" <tiwei.bie@intel.com>, "dev@dpdk.org" <dev@dpdk.org>
Cc: "yliu@fridaylinux.org" <yliu@fridaylinux.org>,
	"maxime.coquelin@redhat.com" <maxime.coquelin@redhat.com>,
	"jfreimann@redhat.com" <jfreimann@redhat.com>,
	"stable@dpdk.org" <stable@dpdk.org>
Subject: Re: [dpdk-dev] [PATCH v2] net/virtio: fix an incorrect behavior of device stop/start
Date: Fri, 1 Dec 2017 17:17:58 +0000
Message-ID: <BCFD2BB875535045A5368D9ADBF296993506013E@IRSMSX101.ger.corp.intel.com>
In-Reply-To: <BCFD2BB875535045A5368D9ADBF2969935059706@IRSMSX101.ger.corp.intel.com>

Hi All,
I've got an update on this.
I could replicate the same issue using testpmd and a VM (Virtual Machine).

The test topology I'm using is:


[Traffic gen]----[NIC port #0]----[testpmd]----[vhost port #2]----+
                                                                  | 
                                                                  |
                                                             [testpmd in
                                                                the VM]
                                                                  |
                                                                  |
[Traffic gen]----[NIC port #1]----[testpmd]----[vhost port #3]----+


So there's no OvS now in the picture: one testpmd running on the host
and one testpmd running in the VM.

The issue is that no packet goes through testpmd in the VM.
It seems this started happening after this patch was upstreamed.

Please note
-----------
To replicate this issue, both of the following two conditions must be met:
 - the traffic is already being sent before launching testpmd in the VM
 - there are at least 2 forwarding lcores.

I found that one way to fix this is:

diff --git a/drivers/net/virtio/virtqueue.c b/drivers/net/virtio/virtqueue.c
index c3a536f..17dd87d 100644
--- a/drivers/net/virtio/virtqueue.c
+++ b/drivers/net/virtio/virtqueue.c
@@ -80,7 +80,7 @@ virtqueue_flush(struct virtqueue *vq)
                        rte_pktmbuf_free(dxp->cookie);
                        dxp->cookie = NULL;
                }
-               vq->vq_used_cons_idx++;
+               //vq->vq_used_cons_idx++;
                vq_ring_free_chain(vq, desc_idx);
        }

or, alternatively, by checking out a commit before

commit d8227497ec5c3de75fe378e09fc9673ae097fa73
Author: Tiwei Bie <tiwei.bie@intel.com>
Date:   Fri Oct 20 10:09:28 2017 +0800

    net/virtio: flush Rx queues on start

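To make it clearer what that one-line change touches, below is a
minimal standalone sketch (not DPDK code; the struct, names and values
are made up for illustration) of the free-running 16-bit index
arithmetic behind the VIRTQUEUE_NUSED macro, which is what
virtqueue_flush() loops over and what the workaround above stops
advancing:

#include <stdio.h>
#include <stdint.h>

/* Toy stand-in for the two indices involved; not the real struct virtqueue. */
struct toy_vq {
	uint16_t used_idx;       /* device side: vq->vq_ring.used->idx */
	uint16_t used_cons_idx;  /* driver side: vq->vq_used_cons_idx */
};

/* Same arithmetic as VIRTQUEUE_NUSED: the uint16_t cast handles wrap-around. */
static uint16_t toy_nused(const struct toy_vq *vq)
{
	return (uint16_t)(vq->used_idx - vq->used_cons_idx);
}

int main(void)
{
	/* Say 3 packets sat in the used ring before the device was started. */
	struct toy_vq vq = { .used_idx = 3, .used_cons_idx = 0 };

	printf("before flush: %u used entries\n", (unsigned)toy_nused(&vq));

	/* virtqueue_flush() frees the mbufs and advances the consumed index
	 * past them, so the next poll sees an empty used ring: */
	vq.used_cons_idx = vq.used_idx;

	printf("after flush:  %u used entries\n", (unsigned)toy_nused(&vq));

	return 0;
}

In other words, the workaround keeps vq_used_cons_idx where it was, so
the driver's view of the used ring stays aligned with whatever the
vhost backend assumes. I'm not claiming this is the right fix, only
that it points at where the indices seem to get out of sync.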

Other details of my testbench below.

- testpmd on the host has 2 forwarding lcores: #2 and #3
(the issue doesn't show up with 1 fwd lcore).

- the VM (qemu) is using lcore #10

- the VM has 2 vhost ports, with 1 queue each

- in this case testpmd on the host uses vhostuser ports. 
In the other testbench with OvS - see previous email - OvS 
was using vhostuserclient ports and the VM was acting as a 
vhost server. The issue happens in both cases.

If you need further info, please let me know.


Thanks,
Antonio


> -----Original Message-----
> From: dev [mailto:dev-bounces@dpdk.org] On Behalf Of Fischetti, Antonio
> Sent: Tuesday, November 14, 2017 5:39 PM
> To: Bie, Tiwei <tiwei.bie@intel.com>; dev@dpdk.org
> Cc: yliu@fridaylinux.org; maxime.coquelin@redhat.com;
> jfreimann@redhat.com; stable@dpdk.org
> Subject: Re: [dpdk-dev] [PATCH v2] net/virtio: fix an incorrect behavior
> of device stop/start
> 
> Hi Tiwei,
> 
> I'm doing some regression tests with v17.11-rc4. I ran
> into a hitch with testpmd running in a guest VM: no packets
> get forwarded by testpmd.
> The issue seems to appear after this patch was upstreamed.
> 
> I saw there's a way to make it work, i.e. by not incrementing
> the last consumed descriptor index:
> 
> --- a/drivers/net/virtio/virtqueue.c
> +++ b/drivers/net/virtio/virtqueue.c
> @@ -80,7 +80,7 @@ virtqueue_flush(struct virtqueue *vq)
>                         rte_pktmbuf_free(dxp->cookie);
>                         dxp->cookie = NULL;
>                 }
> -               vq->vq_used_cons_idx++;
> +               //vq->vq_used_cons_idx++;
>                 vq_ring_free_chain(vq, desc_idx);
> 
> Not quite sure if this change makes any sense to you?
> 
> Some details below.
> 
> The issue appears only if the traffic generator is already
> sending packets before I launch testpmd in the guest.
> 
> In my testbench I have Open-vSwitch (OvS-DPDK) which launches
> a VM with 2 vhostuserclient ports (vhu0 and vhu1), each with
> a single queue.
> My OvS has 2 physical ports: dpdk0 and dpdk1.
> dpdk0 forwards packets back and forth from/to the generator
> to/from vhu0.
> Similarly, dpdk1 forwards packets back and forth from/to the generator
> to/from vhu1.
> 
> In OvS there are 2 different PMD threads serving the 2
> vhostuserclient ports.
> 
> While the traffic generator is already sending packets, in the
> guest VM I launch
>   ./testpmd -c 0x3 -n 4 --socket-mem 512 -- --burst=64 -i --txqflags=0xf00 --disable-hw-vlan
> 
> The issue is that I see no packet received on the traffic generator
> and in fact testpmd shows
> 
>   ---------------------- Forward statistics for port 0  ----------------------
>   RX-packets: 0              RX-dropped: 0             RX-total: 0
>   TX-packets: 0              TX-dropped: 0             TX-total: 0
>   ----------------------------------------------------------------------------
> 
>   ---------------------- Forward statistics for port 1  ----------------------
>   RX-packets: 0              RX-dropped: 0             RX-total: 0
>   TX-packets: 0              TX-dropped: 0             TX-total: 0
>   ----------------------------------------------------------------------------
> 
>   +++++++++++++++ Accumulated forward statistics for all ports+++++++++++++++
>   RX-packets: 0              RX-dropped: 0             RX-total: 0
>   TX-packets: 0              TX-dropped: 0             TX-total: 0
>   ++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
> 
> Please let me know if I missed something or if you need
> more info on my testbench.
> 
> 
> Thanks,
> Antonio
> 
> > -----Original Message-----
> > From: dev [mailto:dev-bounces@dpdk.org] On Behalf Of Tiwei Bie
> > Sent: Friday, October 20, 2017 3:09 AM
> > To: dev@dpdk.org
> > Cc: yliu@fridaylinux.org; maxime.coquelin@redhat.com;
> > jfreimann@redhat.com; stable@dpdk.org
> > Subject: [dpdk-dev] [PATCH v2] net/virtio: fix an incorrect behavior of device stop/start
> >
> > After starting a device, the driver shouldn't deliver the
> > packets that already existed before the device is started
> > to applications. Otherwise it will lead to incorrect packet
> > collection for port state. This patch fixes this issue by
> > flushing the Rx queues when starting the device.
> >
> > Fixes: a85786dc816f ("virtio: fix states handling during initialization")
> > Cc: stable@dpdk.org
> >
> > Signed-off-by: Tiwei Bie <tiwei.bie@intel.com>
> > Reviewed-by: Jens Freimann <jfreimann@redhat.com>
> > ---
> > v2:
> > - Use the existing `for` loop
> > - Improve the commit log
> >
> >  drivers/net/virtio/virtio_ethdev.c |  2 ++
> >  drivers/net/virtio/virtio_rxtx.c   |  2 +-
> >  drivers/net/virtio/virtqueue.c     | 25 +++++++++++++++++++++++++
> >  drivers/net/virtio/virtqueue.h     |  5 +++++
> >  4 files changed, 33 insertions(+), 1 deletion(-)
> >
> > diff --git a/drivers/net/virtio/virtio_ethdev.c
> > b/drivers/net/virtio/virtio_ethdev.c
> > index 42c2836..9ccb0f4 100644
> > --- a/drivers/net/virtio/virtio_ethdev.c
> > +++ b/drivers/net/virtio/virtio_ethdev.c
> > @@ -1819,6 +1819,8 @@ virtio_dev_start(struct rte_eth_dev *dev)
> >
> >  	for (i = 0; i < dev->data->nb_rx_queues; i++) {
> >  		rxvq = dev->data->rx_queues[i];
> > +		/* Flush the old packets */
> > +		virtqueue_flush(rxvq->vq);
> >  		virtqueue_notify(rxvq->vq);
> >  	}
> >
> > diff --git a/drivers/net/virtio/virtio_rxtx.c
> > b/drivers/net/virtio/virtio_rxtx.c
> > index 609b413..f5b6f94 100644
> > --- a/drivers/net/virtio/virtio_rxtx.c
> > +++ b/drivers/net/virtio/virtio_rxtx.c
> > @@ -80,7 +80,7 @@ virtio_dev_rx_queue_done(void *rxq, uint16_t offset)
> >  	return VIRTQUEUE_NUSED(vq) >= offset;
> >  }
> >
> > -static void
> > +void
> >  vq_ring_free_chain(struct virtqueue *vq, uint16_t desc_idx)
> >  {
> >  	struct vring_desc *dp, *dp_tail;
> > diff --git a/drivers/net/virtio/virtqueue.c
> > b/drivers/net/virtio/virtqueue.c
> > index 9ad77b8..c3a536f 100644
> > --- a/drivers/net/virtio/virtqueue.c
> > +++ b/drivers/net/virtio/virtqueue.c
> > @@ -59,3 +59,28 @@ virtqueue_detatch_unused(struct virtqueue *vq)
> >  		}
> >  	return NULL;
> >  }
> > +
> > +/* Flush the elements in the used ring. */
> > +void
> > +virtqueue_flush(struct virtqueue *vq)
> > +{
> > +	struct vring_used_elem *uep;
> > +	struct vq_desc_extra *dxp;
> > +	uint16_t used_idx, desc_idx;
> > +	uint16_t nb_used, i;
> > +
> > +	nb_used = VIRTQUEUE_NUSED(vq);
> > +
> > +	for (i = 0; i < nb_used; i++) {
> > +		used_idx = vq->vq_used_cons_idx & (vq->vq_nentries - 1);
> > +		uep = &vq->vq_ring.used->ring[used_idx];
> > +		desc_idx = (uint16_t)uep->id;
> > +		dxp = &vq->vq_descx[desc_idx];
> > +		if (dxp->cookie != NULL) {
> > +			rte_pktmbuf_free(dxp->cookie);
> > +			dxp->cookie = NULL;
> > +		}
> > +		vq->vq_used_cons_idx++;
> > +		vq_ring_free_chain(vq, desc_idx);
> > +	}
> > +}
> > diff --git a/drivers/net/virtio/virtqueue.h
> > b/drivers/net/virtio/virtqueue.h
> > index 9c4f96d..11059fb 100644
> > --- a/drivers/net/virtio/virtqueue.h
> > +++ b/drivers/net/virtio/virtqueue.h
> > @@ -304,6 +304,9 @@ void virtqueue_dump(struct virtqueue *vq);
> >   */
> >  struct rte_mbuf *virtqueue_detatch_unused(struct virtqueue *vq);
> >
> > +/* Flush the elements in the used ring. */
> > +void virtqueue_flush(struct virtqueue *vq);
> > +
> >  static inline int
> >  virtqueue_full(const struct virtqueue *vq)
> >  {
> > @@ -312,6 +315,8 @@ virtqueue_full(const struct virtqueue *vq)
> >
> > #define VIRTQUEUE_NUSED(vq) ((uint16_t)((vq)->vq_ring.used->idx - (vq)->vq_used_cons_idx))
> >
> > +void vq_ring_free_chain(struct virtqueue *vq, uint16_t desc_idx);
> > +
> >  static inline void
> >  vq_update_avail_idx(struct virtqueue *vq)
> >  {
> > --
> > 2.7.4

Thread overview: 16+ messages
2017-08-29  8:26 [dpdk-dev] [PATCH] " Tiwei Bie
2017-08-30  9:13 ` Jens Freimann
2017-08-30 10:24   ` Tiwei Bie
2017-09-01  6:26     ` Jens Freimann
2017-09-01  7:14       ` Tiwei Bie
2017-10-19 13:53         ` Yuanhan Liu
2017-10-20  2:09 ` [dpdk-dev] [PATCH v2] " Tiwei Bie
2017-10-20  5:35   ` Yuanhan Liu
2017-11-14 17:38   ` Fischetti, Antonio
2017-12-01 17:17     ` Fischetti, Antonio [this message]
2017-12-02  4:30       ` Tiwei Bie
2017-12-04  7:19         ` Tiwei Bie
2017-12-04 11:46           ` Fischetti, Antonio
2017-12-05  3:11             ` Tiwei Bie
2017-12-05  8:52               ` Fischetti, Antonio
2017-12-02  1:24     ` Tiwei Bie
