Date: Thu, 14 Oct 2021 10:25:36 +0200
From: Maxime Coquelin <maxime.coquelin@redhat.com>
To: Li Feng,
    Chenbo Xia
Cc: dev@dpdk.org
References: <20210827051241.2448098-1-fengli@smartx.com>
Subject: Re: [dpdk-dev] [PATCH v1] vhost: add sanity check for resubmitting reqs in split ring

On 10/14/21 10:17, Maxime Coquelin wrote:
> Hi Li,
>
> Adding Jin Yu who introduced this function.

Looks like Jin Yu has left Intel. Chenbo, could you find someone from
the Intel SPDK team to look at it?

> On 8/27/21 07:12, Li Feng wrote:
>> When getting reqs from the avail ring, the id may exceed the inflight
>> queue size, and then DPDK will crash.
>
> You need to add a Fixes tag and Cc stable@dpdk.org so that it can be
> backported.
>
>> Signed-off-by: Li Feng <fengli@smartx.com>
>> ---
>>  lib/vhost/vhost_user.c | 10 ++++++++--
>>  1 file changed, 8 insertions(+), 2 deletions(-)
>>
>> diff --git a/lib/vhost/vhost_user.c b/lib/vhost/vhost_user.c
>> index 29a4c9af60..f09d0f6a48 100644
>> --- a/lib/vhost/vhost_user.c
>> +++ b/lib/vhost/vhost_user.c
>> @@ -1823,8 +1823,14 @@ vhost_check_queue_inflights_split(struct virtio_net *dev,
>>       last_io = inflight_split->last_inflight_io;
>>
>>       if (inflight_split->used_idx != used->idx) {
>> -        inflight_split->desc[last_io].inflight = 0;
>> -        rte_atomic_thread_fence(__ATOMIC_SEQ_CST);
>> +        if (unlikely(last_io >= inflight_split->desc_num)) {
>> +            VHOST_LOG_CONFIG(ERR, "last_inflight_io '%"PRIu16"' exceeds inflight "
>> +                "queue size (%"PRIu16").\n", last_io,
>> +                inflight_split->desc_num);
>
> If such an error happens, shouldn't we return RTE_VHOST_MSG_RESULT_ERR
> instead of just logging an error?
>
>> +        } else {
>> +            inflight_split->desc[last_io].inflight = 0;
>> +            rte_atomic_thread_fence(__ATOMIC_SEQ_CST);
>> +        }
>>           inflight_split->used_idx = used->idx;
>>       }
>>
>
> Regards,
> Maxime
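
[Editor's note: a minimal standalone sketch of the behavior Maxime suggests — failing the message instead of only logging when the saved index is out of range. The struct and the return-code macros here are simplified stand-ins for DPDK's real definitions in lib/vhost (which this sketch does not depend on), so only the shape of the check is illustrative.]

```c
#include <stdint.h>
#include <stdio.h>

/* Stand-in return codes; placeholders for DPDK's message-handler results. */
#define MSG_RESULT_ERR (-1)
#define MSG_RESULT_OK    0

/* Stand-in for the relevant fields of DPDK's split-ring inflight state. */
struct inflight_info_split {
	uint16_t desc_num;          /* size of the inflight descriptor array */
	uint16_t last_inflight_io;  /* saved index of the last in-flight request */
};

/*
 * Sanity-check the resubmit index before it is used to address
 * inflight->desc[]. Instead of merely logging, propagate an error
 * result so the caller can reject the vhost-user message.
 */
static int
check_last_inflight(const struct inflight_info_split *inflight)
{
	if (inflight->last_inflight_io >= inflight->desc_num) {
		fprintf(stderr,
			"last_inflight_io %u exceeds inflight queue size (%u)\n",
			inflight->last_inflight_io, inflight->desc_num);
		return MSG_RESULT_ERR;
	}
	return MSG_RESULT_OK;
}
```

With this shape, the caller simply does `if (check_last_inflight(inflight) != MSG_RESULT_OK) return RTE_VHOST_MSG_RESULT_ERR;` rather than continuing with a clamped or skipped write, which matches the review comment that an out-of-range index should fail the request rather than be silently ignored.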