From: Maxime Coquelin
To: "Wang, Zhihong", Olivier Matz, "Xu, Qian Q"
Cc: "Yao, Lei A", dev@dpdk.org, yliu@fridaylinux.org, Thomas Monjalon,
 stable@dpdk.org
Subject: Re: [dpdk-dev] [PATCH v2 06/10] net/virtio: fix queue setup consistency
Date: Fri, 9 Feb 2018 10:40:39 +0100
Message-ID: <064a2749-9468-6dde-c11e-87d219aaedd6@redhat.com>
In-Reply-To: <60f7639e-eb93-4273-3016-1bd98dda66fd@redhat.com>
References: <20170831134015.1383-1-olivier.matz@6wind.com>
 <20170907121347.16208-1-olivier.matz@6wind.com>
 <20170907121347.16208-7-olivier.matz@6wind.com>
 <2DBBFF226F7CF64BAFCA79B681719D953A354152@shsmsx102.ccr.corp.intel.com>
 <20180201082735.w4nyppsg4vpgenei@platinum>
 <82F45D86ADE5454A95A89742C8D1410E3B902F26@shsmsx102.ccr.corp.intel.com>
 <20180207220103.r4whargio7piayeb@neon>
 <8F6C2BD409508844A0EFC19955BE0941513DAFE9@SHSMSX103.ccr.corp.intel.com>
 <60f7639e-eb93-4273-3016-1bd98dda66fd@redhat.com>
List-Id: DPDK patches and discussions

On 02/09/2018 09:59 AM, Maxime Coquelin wrote:
> Hi Zhihong,
>
> On 02/09/2018 06:44 AM, Wang, Zhihong wrote:
>> Hi Olivier,
>>
>> Given that the vec path can now be selected silently once its
>> conditions are met, this issue theoretically impacts the whole
>> virtio pmd. If you plan to fix it in the next release, do you want to
>> add a temporary workaround to disable the vec path selection till then?
>
> That may be the least bad solution if we don't fix it quickly.
> Reverting the patch isn't trivial, so that is not an option.
>
> I'm trying to reproduce it now, I'll let you know if I find something.

Hmm, I followed Tiwei's instructions to reproduce it, and in my case the
Vhost testpmd crashes because Virtio-user makes it do an out-of-bounds
access.

Could you provide a patch to disable vector path selection? I'll continue
to debug, but we can start reviewing it so that it is ready if we need it.
Thanks,
Maxime

>
> Cheers,
> Maxime
>
>> Thanks
>> -Zhihong
>>
>>> -----Original Message-----
>>> From: dev [mailto:dev-bounces@dpdk.org] On Behalf Of Olivier Matz
>>> Sent: Thursday, February 8, 2018 6:01 AM
>>> To: Xu, Qian Q
>>> Cc: Yao, Lei A; dev@dpdk.org; yliu@fridaylinux.org;
>>> maxime.coquelin@redhat.com; Thomas Monjalon; stable@dpdk.org
>>> Subject: Re: [dpdk-dev] [PATCH v2 06/10] net/virtio: fix queue setup
>>> consistency
>>>
>>> Hi,
>>>
>>> It's in my short-term plans, but unfortunately some other high-priority
>>> tasks were inserted before it. Honestly, I'm not sure I'll be able to
>>> make it for the release, but I'll do my best.
>>>
>>> Olivier
>>>
>>>
>>> On Wed, Feb 07, 2018 at 08:31:07AM +0000, Xu, Qian Q wrote:
>>>> Any update, Olivier?
>>>> We are close to the release, and the bug fix is important for virtio
>>>> vector path usage. Thanks.
>>>>
>>>>> -----Original Message-----
>>>>> From: dev [mailto:dev-bounces@dpdk.org] On Behalf Of Olivier Matz
>>>>> Sent: Thursday, February 1, 2018 4:28 PM
>>>>> To: Yao, Lei A
>>>>> Cc: dev@dpdk.org; yliu@fridaylinux.org; maxime.coquelin@redhat.com;
>>>>> Thomas Monjalon; stable@dpdk.org
>>>>> Subject: Re: [dpdk-dev] [PATCH v2 06/10] net/virtio: fix queue setup
>>>>> consistency
>>>>>
>>>>> Hi Lei,
>>>>>
>>>>> It's on my todo list, I'll check this as soon as possible.
>>>>>
>>>>> Olivier
>>>>>
>>>>>
>>>>> On Thu, Feb 01, 2018 at 03:14:15AM +0000, Yao, Lei A wrote:
>>>>>> Hi Olivier,
>>>>>>
>>>>>> This is Lei from the DPDK validation team at Intel. During our DPDK
>>>>>> 18.02-rc1 testing, I found that the following patch causes a serious
>>>>>> issue with the virtio vector path:
>>>>>> traffic can't resume after stopping/starting the virtio device.
>>>>>>
>>>>>> The steps are as follows:
>>>>>> 1. Launch a vhost-user port using testpmd on the host
>>>>>> 2. Launch a VM with a virtio device, mergeable off
>>>>>> 3. Bind the virtio device to the pmd driver and launch testpmd, so
>>>>>>    that tx/rx use the vector path
>>>>>>      virtio_xmit_pkts_simple
>>>>>>      virtio_recv_pkts_vec
>>>>>> 4. Send traffic to the virtio device from the vhost side, then stop
>>>>>>    the virtio device
>>>>>> 5. Start the virtio device again
>>>>>> After step 5, the traffic can't resume.
>>>>>>
>>>>>> Could you help check this and provide a fix? This issue impacts the
>>>>>> virtio pmd user experience heavily. By the way, this patch is already
>>>>>> included in v17.11, so it looks like we also need a patch for this
>>>>>> LTS version.
>>>>>> Thanks a lot!
>>>>>>
>>>>>> BRs
>>>>>> Lei
>>>>>>> -----Original Message-----
>>>>>>> From: dev [mailto:dev-bounces@dpdk.org] On Behalf Of Olivier Matz
>>>>>>> Sent: Thursday, September 7, 2017 8:14 PM
>>>>>>> To: dev@dpdk.org; yliu@fridaylinux.org; maxime.coquelin@redhat.com
>>>>>>> Cc: stephen@networkplumber.org; stable@dpdk.org
>>>>>>> Subject: [dpdk-dev] [PATCH v2 06/10] net/virtio: fix queue setup
>>>>>>> consistency
>>>>>>>
>>>>>>> In rx/tx queue setup functions, some code is executed only if
>>>>>>> use_simple_rxtx == 1. The value of this variable can change
>>>>>>> depending on the offload flags or SSE support.
>>>>>>> If Rx queue setup is called before Tx queue setup, it can result
>>>>>>> in an invalid configuration:
>>>>>>>
>>>>>>> - dev_configure is called: use_simple_rxtx is initialized to 0
>>>>>>> - rx queue setup is called: queues are initialized without simple
>>>>>>>   path support
>>>>>>> - tx queue setup is called: use_simple_rxtx switch to 1, and simple
>>>>>>>   Rx/Tx handlers are selected
>>>>>>>
>>>>>>> Fix this by postponing a part of Rx/Tx queue initialization in
>>>>>>> dev_start(), as it was the case in the initial implementation.
>>>>>>>
>>>>>>> Fixes: 48cec290a3d2 ("net/virtio: move queue configure code to
>>>>>>> proper place")
>>>>>>> Cc: stable@dpdk.org
>>>>>>>
>>>>>>> Signed-off-by: Olivier Matz
>>>>>>> ---
>>>>>>>  drivers/net/virtio/virtio_ethdev.c | 13 +++++++++++++
>>>>>>>  drivers/net/virtio/virtio_ethdev.h |  6 ++++++
>>>>>>>  drivers/net/virtio/virtio_rxtx.c   | 40 ++++++++++++++++++++++++++++++--------
>>>>>>>  3 files changed, 51 insertions(+), 8 deletions(-)
>>>>>>>
>>>>>>> diff --git a/drivers/net/virtio/virtio_ethdev.c
>>>>>>> b/drivers/net/virtio/virtio_ethdev.c
>>>>>>> index 8eee3ff80..c7888f103 100644
>>>>>>> --- a/drivers/net/virtio/virtio_ethdev.c
>>>>>>> +++ b/drivers/net/virtio/virtio_ethdev.c
>>>>>>> @@ -1737,6 +1737,19 @@ virtio_dev_start(struct rte_eth_dev *dev)
>>>>>>>  	struct virtnet_rx *rxvq;
>>>>>>>  	struct virtnet_tx *txvq __rte_unused;
>>>>>>>  	struct virtio_hw *hw = dev->data->dev_private;
>>>>>>> +	int ret;
>>>>>>> +
>>>>>>> +	/* Finish the initialization of the queues */
>>>>>>> +	for (i = 0; i < dev->data->nb_rx_queues; i++) {
>>>>>>> +		ret = virtio_dev_rx_queue_setup_finish(dev, i);
>>>>>>> +		if (ret < 0)
>>>>>>> +			return ret;
>>>>>>> +	}
>>>>>>> +	for (i = 0; i < dev->data->nb_tx_queues; i++) {
>>>>>>> +		ret = virtio_dev_tx_queue_setup_finish(dev, i);
>>>>>>> +		if (ret < 0)
>>>>>>> +			return ret;
>>>>>>> +	}
>>>>>>>
>>>>>>>  	/* check if lsc interrupt feature is enabled */
>>>>>>>  	if (dev->data->dev_conf.intr_conf.lsc) {
>>>>>>> diff --git a/drivers/net/virtio/virtio_ethdev.h
>>>>>>> b/drivers/net/virtio/virtio_ethdev.h
>>>>>>> index c3413c6d9..2039bc547 100644
>>>>>>> --- a/drivers/net/virtio/virtio_ethdev.h
>>>>>>> +++ b/drivers/net/virtio/virtio_ethdev.h
>>>>>>> @@ -92,10 +92,16 @@ int  virtio_dev_rx_queue_setup(struct rte_eth_dev *dev, uint16_t rx_queue_id,
>>>>>>>  		const struct rte_eth_rxconf *rx_conf,
>>>>>>>  		struct rte_mempool *mb_pool);
>>>>>>>
>>>>>>> +int virtio_dev_rx_queue_setup_finish(struct rte_eth_dev *dev,
>>>>>>> +				uint16_t rx_queue_id);
>>>>>>> +
>>>>>>>  int  virtio_dev_tx_queue_setup(struct rte_eth_dev *dev, uint16_t tx_queue_id,
>>>>>>>  		uint16_t nb_tx_desc, unsigned int socket_id,
>>>>>>>  		const struct rte_eth_txconf *tx_conf);
>>>>>>>
>>>>>>> +int virtio_dev_tx_queue_setup_finish(struct rte_eth_dev *dev,
>>>>>>> +				uint16_t tx_queue_id);
>>>>>>> +
>>>>>>>  uint16_t virtio_recv_pkts(void *rx_queue, struct rte_mbuf **rx_pkts,
>>>>>>>  		uint16_t nb_pkts);
>>>>>>>
>>>>>>> diff --git a/drivers/net/virtio/virtio_rxtx.c
>>>>>>> b/drivers/net/virtio/virtio_rxtx.c
>>>>>>> index e30377c51..a32e3229f 100644
>>>>>>> --- a/drivers/net/virtio/virtio_rxtx.c
>>>>>>> +++ b/drivers/net/virtio/virtio_rxtx.c
>>>>>>> @@ -421,9 +421,6 @@ virtio_dev_rx_queue_setup(struct rte_eth_dev *dev,
>>>>>>>  	struct virtio_hw *hw = dev->data->dev_private;
>>>>>>>  	struct virtqueue *vq = hw->vqs[vtpci_queue_idx];
>>>>>>>  	struct virtnet_rx *rxvq;
>>>>>>> -	int error, nbufs;
>>>>>>> -	struct rte_mbuf *m;
>>>>>>> -	uint16_t desc_idx;
>>>>>>>
>>>>>>>  	PMD_INIT_FUNC_TRACE();
>>>>>>>
>>>>>>> @@ -440,10 +437,24 @@ virtio_dev_rx_queue_setup(struct rte_eth_dev *dev,
>>>>>>>  	}
>>>>>>>  	dev->data->rx_queues[queue_idx] = rxvq;
>>>>>>>
>>>>>>> +	return 0;
>>>>>>> +}
>>>>>>> +
>>>>>>> +int
>>>>>>> +virtio_dev_rx_queue_setup_finish(struct rte_eth_dev *dev, uint16_t queue_idx)
>>>>>>> +{
>>>>>>> +	uint16_t vtpci_queue_idx = 2 * queue_idx + VTNET_SQ_RQ_QUEUE_IDX;
>>>>>>> +	struct virtio_hw *hw = dev->data->dev_private;
>>>>>>> +	struct virtqueue *vq = hw->vqs[vtpci_queue_idx];
>>>>>>> +	struct virtnet_rx *rxvq = &vq->rxq;
>>>>>>> +	struct rte_mbuf *m;
>>>>>>> +	uint16_t desc_idx;
>>>>>>> +	int error, nbufs;
>>>>>>> +
>>>>>>> +	PMD_INIT_FUNC_TRACE();
>>>>>>>
>>>>>>>  	/* Allocate blank mbufs for the each rx descriptor */
>>>>>>>  	nbufs = 0;
>>>>>>> -	error = ENOSPC;
>>>>>>>
>>>>>>>  	if (hw->use_simple_rxtx) {
>>>>>>>  		for (desc_idx = 0; desc_idx < vq->vq_nentries;
>>>>>>> @@ -534,7 +545,6 @@ virtio_dev_tx_queue_setup(struct rte_eth_dev *dev,
>>>>>>>  	struct virtqueue *vq = hw->vqs[vtpci_queue_idx];
>>>>>>>  	struct virtnet_tx *txvq;
>>>>>>>  	uint16_t tx_free_thresh;
>>>>>>> -	uint16_t desc_idx;
>>>>>>>
>>>>>>>  	PMD_INIT_FUNC_TRACE();
>>>>>>>
>>>>>>> @@ -563,9 +573,24 @@ virtio_dev_tx_queue_setup(struct rte_eth_dev *dev,
>>>>>>>
>>>>>>>  	vq->vq_free_thresh = tx_free_thresh;
>>>>>>>
>>>>>>> -	if (hw->use_simple_rxtx) {
>>>>>>> -		uint16_t mid_idx  = vq->vq_nentries >> 1;
>>>>>>> +	dev->data->tx_queues[queue_idx] = txvq;
>>>>>>> +	return 0;
>>>>>>> +}
>>>>>>> +
>>>>>>> +int
>>>>>>> +virtio_dev_tx_queue_setup_finish(struct rte_eth_dev *dev,
>>>>>>> +				uint16_t queue_idx)
>>>>>>> +{
>>>>>>> +	uint8_t vtpci_queue_idx = 2 * queue_idx + VTNET_SQ_TQ_QUEUE_IDX;
>>>>>>> +	struct virtio_hw *hw = dev->data->dev_private;
>>>>>>> +	struct virtqueue *vq = hw->vqs[vtpci_queue_idx];
>>>>>>> +	uint16_t mid_idx = vq->vq_nentries >> 1;
>>>>>>> +	struct virtnet_tx *txvq = &vq->txq;
>>>>>>> +	uint16_t desc_idx;
>>>>>>>
>>>>>>> +	PMD_INIT_FUNC_TRACE();
>>>>>>> +
>>>>>>> +	if (hw->use_simple_rxtx) {
>>>>>>>  		for (desc_idx = 0; desc_idx < mid_idx; desc_idx++) {
>>>>>>>  			vq->vq_ring.avail->ring[desc_idx] =
>>>>>>>  				desc_idx + mid_idx;
>>>>>>> @@ -587,7 +612,6 @@ virtio_dev_tx_queue_setup(struct rte_eth_dev *dev,
>>>>>>>
>>>>>>>  	VIRTQUEUE_DUMP(vq);
>>>>>>>
>>>>>>> -	dev->data->tx_queues[queue_idx] = txvq;
>>>>>>>  	return 0;
>>>>>>>  }
>>>>>>>
>>>>>>> --
>>>>>>> 2.11.0
>>>>>>
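For readers following the thread without the DPDK tree at hand, the ordering
problem the commit message describes can be reduced to a small standalone C
sketch. This is illustrative only, not DPDK code: every name below (toy_dev,
rx_queue_setup_eager, and so on) is made up, and the real driver state is far
richer. It only models the point of the patch: work that depends on
use_simple_rxtx must not run before Tx queue setup has had a chance to set
that flag, which is why the patch defers it to dev_start().

/*
 * Standalone model of the queue setup ordering issue (not DPDK code).
 *
 * dev_configure() clears use_simple_rxtx and only tx_queue_setup() may
 * set it, so any simple-path work done while setting up the Rx queue is
 * based on a stale value. Deferring that work to dev_start() means it
 * sees the final value.
 */
#include <stdbool.h>
#include <stdio.h>

struct toy_dev {
	bool use_simple_rxtx;   /* only final once Tx setup has run */
	bool rxq_simple_ready;  /* Rx queue initialized for the simple path? */
};

static void dev_configure(struct toy_dev *dev)
{
	dev->use_simple_rxtx = false;   /* default until Tx setup decides */
	dev->rxq_simple_ready = false;
}

/* Tx setup is where the simple path actually gets selected. */
static void tx_queue_setup(struct toy_dev *dev)
{
	dev->use_simple_rxtx = true;    /* e.g. offload/SSE conditions met */
}

/* Buggy flow: do the flag-dependent Rx init at setup time. */
static void rx_queue_setup_eager(struct toy_dev *dev)
{
	dev->rxq_simple_ready = dev->use_simple_rxtx;   /* still false here */
}

/* Fixed flow: postpone the flag-dependent Rx init to dev_start(). */
static void dev_start(struct toy_dev *dev)
{
	dev->rxq_simple_ready = dev->use_simple_rxtx;   /* final value */
}

int main(void)
{
	struct toy_dev dev;

	/* Applications set up Rx queues before Tx queues. */
	dev_configure(&dev);
	rx_queue_setup_eager(&dev);
	tx_queue_setup(&dev);
	printf("eager init:    simple path on, Rx queue ready = %d\n",
	       dev.rxq_simple_ready);   /* 0 -> inconsistent configuration */

	dev_configure(&dev);
	/* Rx setup now only stores the queue pointer... */
	tx_queue_setup(&dev);
	dev_start(&dev);                 /* ...and dev_start() finishes it */
	printf("deferred init: simple path on, Rx queue ready = %d\n",
	       dev.rxq_simple_ready);   /* 1 -> consistent */

	return 0;
}

The actual patch takes the same approach with
virtio_dev_rx_queue_setup_finish()/virtio_dev_tx_queue_setup_finish() called
from virtio_dev_start(), as shown in the diff above.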