From mboxrd@z Thu Jan 1 00:00:00 1970
To: "Xia, Chenbo", "dev@dpdk.org", "shahafs@mellanox.com", "matan@mellanox.com",
 "maxime.coquelin@redhat.com", "Wang, Xiao W", "viacheslavo@mellanox.com"
Cc: "jasowang@redhat.com", "lulu@redhat.com"
References: <20200702083237.1215652-1-amorenoz@redhat.com>
 <20200702083237.1215652-7-amorenoz@redhat.com>
 <8b55488a-2aea-a836-2756-743194644029@redhat.com>
From: Adrian Moreno <amorenoz@redhat.com>
Date: Mon, 6 Jul 2020 12:45:11 +0200
Subject: Re: [dpdk-dev] [PATCH v2 6/8] vhost: add support for virtio get status message

On 7/6/20 12:29 PM, Xia, Chenbo wrote:
> Hi Adrian,
> 
>> -----Original Message-----
>> From: Adrian Moreno
>> Sent: Monday, July 6, 2020 4:49 PM
>> To: Xia, Chenbo; dev@dpdk.org; shahafs@mellanox.com; matan@mellanox.com;
>> maxime.coquelin@redhat.com; Wang, Xiao W; viacheslavo@mellanox.com
>> Cc: jasowang@redhat.com; lulu@redhat.com
>> Subject: Re: [dpdk-dev] [PATCH v2 6/8] vhost: add support for virtio get status message
>>
>>
>> On 7/6/20 5:22 AM, Xia, Chenbo wrote:
>>> Hi Adrian,
>>>
>>>> -----Original Message-----
>>>> From: dev On Behalf Of Adrian Moreno
>>>> Sent: Thursday, July 2, 2020 4:33 PM
>>>> To: dev@dpdk.org; Ye, Xiaolong; shahafs@mellanox.com; matan@mellanox.com;
>>>> maxime.coquelin@redhat.com; Wang, Xiao W; viacheslavo@mellanox.com
>>>> Cc: jasowang@redhat.com; lulu@redhat.com; Adrian Moreno
>>>>
>>>> Subject: [dpdk-dev] [PATCH v2 6/8] vhost: add support for virtio get status message
>>>>
>>>> This patch adds support for the new Virtio device get status Vhost-user
>>>> message.
>>>>
>>>> The driver can send this new message to read the device status.
>>>>
>>>> One of the uses of this message is to ensure that feature negotiation
>>>> has succeeded.
>>>> According to the virtio spec, after completing feature negotiation the
>>>> driver sets the FEATURES_OK status bit and re-reads it to ensure the
>>>> device has accepted the features.
>>>>
>>>> This patch also clears the FEATURES_OK status bit if feature
>>>> negotiation has failed, to let the driver know about this failure.
>>>>
>>>> Signed-off-by: Adrian Moreno <amorenoz@redhat.com>
>>>> ---
>>>>  lib/librte_vhost/vhost.h      |  2 ++
>>>>  lib/librte_vhost/vhost_user.c | 32 ++++++++++++++++++++++++++++++++
>>>>  lib/librte_vhost/vhost_user.h |  1 +
>>>>  3 files changed, 35 insertions(+)
>>>>
>>>> diff --git a/lib/librte_vhost/vhost.h b/lib/librte_vhost/vhost.h
>>>> index 25d31c71b..e743821cc 100644
>>>> --- a/lib/librte_vhost/vhost.h
>>>> +++ b/lib/librte_vhost/vhost.h
>>>> @@ -32,6 +32,8 @@
>>>>  #define VIRTIO_DEV_BUILTIN_VIRTIO_NET 4
>>>>  /* Used to indicate that the device has its own data path and configured */
>>>>  #define VIRTIO_DEV_VDPA_CONFIGURED 8
>>>> +/* Used to indicate that the feature negotiation failed */
>>>> +#define VIRTIO_DEV_FEATURES_FAILED 16
>>>>
>>>>  /* Backend value set by guest. */
>>>>  #define VIRTIO_DEV_STOPPED -1
>>>>
>>>> diff --git a/lib/librte_vhost/vhost_user.c b/lib/librte_vhost/vhost_user.c
>>>> index 8d3d13913..00da7bf18 100644
>>>> --- a/lib/librte_vhost/vhost_user.c
>>>> +++ b/lib/librte_vhost/vhost_user.c
>>>> @@ -88,6 +88,7 @@ static const char *vhost_message_str[VHOST_USER_MAX] = {
>>>>      [VHOST_USER_GET_INFLIGHT_FD] = "VHOST_USER_GET_INFLIGHT_FD",
>>>>      [VHOST_USER_SET_INFLIGHT_FD] = "VHOST_USER_SET_INFLIGHT_FD",
>>>>      [VHOST_USER_SET_STATUS] = "VHOST_USER_SET_STATUS",
>>>> +    [VHOST_USER_GET_STATUS] = "VHOST_USER_GET_STATUS",
>>>>  };
>>>>
>>>>  static int send_vhost_reply(int sockfd, struct VhostUserMsg *msg);
>>>> @@ -339,6 +340,9 @@ vhost_user_set_features(struct virtio_net **pdev, struct VhostUserMsg *msg,
>>>>          VHOST_LOG_CONFIG(ERR,
>>>>              "(%d) received invalid negotiated features.\n",
>>>>              dev->vid);
>>>> +        dev->flags |= VIRTIO_DEV_FEATURES_FAILED;
>>>> +        dev->status &= ~VIRTIO_DEVICE_STATUS_FEATURES_OK;
>>>> +
>>>>          return RTE_VHOST_MSG_RESULT_ERR;
>>>>      }
>>>>
>>>> @@ -402,6 +406,7 @@ vhost_user_set_features(struct virtio_net **pdev, struct VhostUserMsg *msg,
>>>>      if (vdpa_dev)
>>>>          vdpa_dev->ops->set_features(dev->vid);
>>>>
>>>> +    dev->flags &= ~VIRTIO_DEV_FEATURES_FAILED;
>>>>      return RTE_VHOST_MSG_RESULT_OK;
>>>>  }
>>>>
>>>> @@ -2458,6 +2463,22 @@ vhost_user_postcopy_end(struct virtio_net **pdev, struct VhostUserMsg *msg,
>>>>      return RTE_VHOST_MSG_RESULT_REPLY;
>>>>  }
>>>>
>>>> +static int
>>>> +vhost_user_get_status(struct virtio_net **pdev, struct VhostUserMsg *msg,
>>>> +        int main_fd __rte_unused)
>>>> +{
>>>> +    struct virtio_net *dev = *pdev;
>>>> +
>>>> +    if (validate_msg_fds(msg, 0) != 0)
>>>> +        return RTE_VHOST_MSG_RESULT_ERR;
>>>> +
>>>> +    msg->payload.u64 = dev->status;
>>>> +    msg->size = sizeof(msg->payload.u64);
>>>> +    msg->fd_num = 0;
>>>> +
>>>> +    return RTE_VHOST_MSG_RESULT_OK;
>>>> +}
>>>> +
>>>>  static int
>>>>  vhost_user_set_status(struct virtio_net **pdev, struct VhostUserMsg *msg,
>>>>          int main_fd __rte_unused)
>>>> @@ -2476,6 +2497,16 @@ vhost_user_set_status(struct virtio_net **pdev, struct VhostUserMsg *msg,
>>>>
>>>>      dev->status = msg->payload.u64;
>>>>
>>>> +    if ((dev->status & VIRTIO_DEVICE_STATUS_FEATURES_OK) &&
>>>> +        (dev->flags & VIRTIO_DEV_FEATURES_FAILED)) {
>>>> +        VHOST_LOG_CONFIG(ERR, "FEATURES_OK bit is set but feature negotiation failed\n");
>>>> +        /*
>>>> +         * Clear the bit to let the driver know about the feature
>>>> +         * negotiation failure
>>>> +         */
>>>> +        dev->status &= ~VIRTIO_DEVICE_STATUS_FEATURES_OK;
>>>> +    }
>>>> +
>>>
>>> There's a coding style issue because of the '}' alignment above. Could you fix this?
>>>
>>> Another thing I'm not sure about: if the above condition happens, should it be
>>> treated as an error? If the set status is sent with reply-ack (this will happen,
>>> right?), would QEMU like to know this status is not set? As QEMU should know it
>>> during SET_FEATURES, I'm not sure whether this also needs a NACK when reply-ack
>>> is enabled. What's your opinion?
>>>
>>
>> My interpretation was that, since we have already NACKed SET_FEATURES,
>> SET_STATUS should only NACK if we were unable to set the status (device is not
>> present, invalid message, etc.). According to the virtio standard, the driver
>> must read the status again and verify that FEATURES_OK is still set, so NACKing
>> the SET_STATUS would only hide the real problem.
>>
>> Besides, for a driver (e.g. QEMU) that implements the virtio/vhost logic agnostic
>> of the underlying vhost type (vhost-net or vhost-user), a spec-oriented way of
>> expressing errors is preferred. See as an example a (still unmerged) use of this
>> feature in the function "static int vhost_vdpa_set_features()" in:
>>
>> https://patchwork.ozlabs.org/project/qemu-devel/patch/20200701145538.22333-14-lulu@redhat.com/
>>
>> Having said all this, I realize this should be a rare case. This mechanism is in
>> place to prevent the driver from configuring an incompatible combination of
>> features. However, the vhost backend only checks that QEMU has honored its
>> original feature set, which the driver must do anyway according to the spec.
>> So I'm happy to change it if you have a strong opinion on this.
> 
> Yeah, it makes sense that we should NACK SET_FEATURES for this. So I'm fine with
> the current implementation. BTW, about REPLY_ACK, does the spec say something
> about which messages should set NEED_REPLY if REPLY_ACK is supported? I only see
> some messages like SET_SLAVE_REQ_FD with a description of REPLY_ACK.
> 

OK. I'll resend the series addressing the other comments.

With regards to the VHOST_USER_PROTOCOL_F_REPLY_ACK feature bit, the spec says:
"With this protocol extension negotiated, the sender (QEMU) can set the
need_reply [Bit 3] flag to any command."

So, in general, the vhost backend, which usually acts as the "slave" (until we
find a better word), only needs to send a response if QEMU has requested it.

Now, I said "in general" because there are some messages that are originated by
the backend if VHOST_USER_PROTOCOL_F_SLAVE_REQ is negotiated [1]. Those messages
start with "VHOST_USER_SLAVE_*". That's why you will find code that sets the
need_reply bit on those messages.
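
To make that concrete, here is a minimal sketch of the reply logic I mean.
The names below (vhost_user_msg, msg_result, send_vhost_reply,
VHOST_USER_NEED_REPLY) are illustrative stand-ins, not the exact rte_vhost
definitions:

#include <stdint.h>

/* Illustrative stand-ins for the real vhost-user definitions. */
#define VHOST_USER_NEED_REPLY (1U << 3)

enum msg_result {
    MSG_RESULT_ERR   = -1,
    MSG_RESULT_OK    =  0,
    MSG_RESULT_REPLY =  1,
};

struct vhost_user_msg {
    uint32_t request;
    uint32_t flags;
    uint32_t size;
    union {
        uint64_t u64;
    } payload;
    int fd_num;
};

/* Assume this writes the message back on the vhost-user socket. */
void send_vhost_reply(int sockfd, struct vhost_user_msg *msg);

/*
 * Called after a request has been dispatched to its handler.
 * A reply is sent in exactly two cases:
 *  - the request intrinsically produces one (e.g. GET_STATUS), or
 *  - QEMU set the need_reply flag (REPLY_ACK negotiated); then
 *    payload.u64 == 0 means success and non-zero means failure.
 */
static void
handle_msg_reply(int sockfd, struct vhost_user_msg *msg, enum msg_result ret)
{
    if (ret == MSG_RESULT_REPLY) {
        send_vhost_reply(sockfd, msg);
    } else if (msg->flags & VHOST_USER_NEED_REPLY) {
        msg->payload.u64 = (ret == MSG_RESULT_ERR) ? 1 : 0;
        msg->size = sizeof(msg->payload.u64);
        msg->fd_num = 0;
        send_vhost_reply(sockfd, msg);
    }
}

So a request such as GET_STATUS always carries its reply in the payload, while
for the others an ack is only sent when QEMU asked for it.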

Thanks,
Adrián

[1] https://github.com/qemu/qemu/blob/master/docs/interop/vhost-user.rst#slave-communication

>>
>>
>>> Thanks!
>>> Chenbo
>>>
>>>>      VHOST_LOG_CONFIG(INFO, "New device status(0x%08x):\n"
>>>>          "\t-ACKNOWLEDGE: %u\n"
>>>>          "\t-DRIVER: %u\n"
>>>> @@ -2527,6 +2558,7 @@ static vhost_message_handler_t vhost_message_handlers[VHOST_USER_MAX] = {
>>>>      [VHOST_USER_GET_INFLIGHT_FD] = vhost_user_get_inflight_fd,
>>>>      [VHOST_USER_SET_INFLIGHT_FD] = vhost_user_set_inflight_fd,
>>>>      [VHOST_USER_SET_STATUS] = vhost_user_set_status,
>>>> +    [VHOST_USER_GET_STATUS] = vhost_user_get_status,
>>>>  };
>>>>
>>>>  /* return bytes# of read on success or negative val on failure. */
>>>>
>>>> diff --git a/lib/librte_vhost/vhost_user.h b/lib/librte_vhost/vhost_user.h
>>>> index 82885ab5e..16fe03f88 100644
>>>> --- a/lib/librte_vhost/vhost_user.h
>>>> +++ b/lib/librte_vhost/vhost_user.h
>>>> @@ -58,6 +58,7 @@ typedef enum VhostUserRequest {
>>>>      VHOST_USER_GET_INFLIGHT_FD = 31,
>>>>      VHOST_USER_SET_INFLIGHT_FD = 32,
>>>>      VHOST_USER_SET_STATUS = 39,
>>>> +    VHOST_USER_GET_STATUS = 40,
>>>>      VHOST_USER_MAX = 41
>>>>  } VhostUserRequest;
>>>>
>>>> --
>>>> 2.26.2
>>>
>>
>> --
>> Adrián Moreno
> 

-- 
Adrián Moreno