DPDK patches and discussions
From: Maxime Coquelin <maxime.coquelin@redhat.com>
To: David Marchand <david.marchand@redhat.com>
Cc: dev <dev@dpdk.org>,
	oda@valinux.co.jp, yinan.wang@intel.com,
	Tiwei Bie <tiwei.bie@intel.com>,
	Adrian Moreno Zapata <amorenoz@redhat.com>
Subject: Re: [dpdk-dev] [PATCH 1/2] net/vhost: fix Vhost setup error path
Date: Tue, 18 Feb 2020 17:25:52 +0100	[thread overview]
Message-ID: <9178cafd-fb2b-bd7a-3da3-ca3927fa846b@redhat.com> (raw)
In-Reply-To: <CAJFAV8x9AzRyz5LrDSA2MYiGq74HKGTp99+eVKLBsKAW6dKHfg@mail.gmail.com>



On 2/18/20 5:15 PM, David Marchand wrote:
> On Tue, Feb 18, 2020 at 3:35 PM Maxime Coquelin
> <maxime.coquelin@redhat.com> wrote:
>>
>> If for some reason vhost_driver_setup() fails, the list
>> element for the device may be freed without being removed
>> from the internal list of devices.
>>
>> This patch fixes all the error paths by unregistering the
>> device from the Vhost library if it has been registered,
>> removing the device from the internal list, resetting the
>> device's vring_state pointer in the global table, and only
>> freeing the vring state if it had been allocated.
>>
>> Fixes: 3d01b759d267 ("net/vhost: delay driver setup")
>>
>> Signed-off-by: Maxime Coquelin <maxime.coquelin@redhat.com>
>> ---
>>  drivers/net/vhost/rte_eth_vhost.c | 21 ++++++++++++++-------
>>  1 file changed, 14 insertions(+), 7 deletions(-)
>>
>> diff --git a/drivers/net/vhost/rte_eth_vhost.c b/drivers/net/vhost/rte_eth_vhost.c
>> index 90263ae77c..c0056bc8bf 100644
>> --- a/drivers/net/vhost/rte_eth_vhost.c
>> +++ b/drivers/net/vhost/rte_eth_vhost.c
>> @@ -878,12 +878,12 @@ vhost_driver_setup(struct rte_eth_dev *eth_dev)
>>
>>         list = rte_zmalloc_socket(name, sizeof(*list), 0, numa_node);
>>         if (list == NULL)
>> -               goto error;
>> +               return -1;
>>
>>         vring_state = rte_zmalloc_socket(name, sizeof(*vring_state),
>>                                          0, numa_node);
>>         if (vring_state == NULL)
>> -               goto error;
>> +               goto free_list;
>>
>>         list->eth_dev = eth_dev;
>>         pthread_mutex_lock(&internal_list_lock);
>> @@ -894,30 +894,37 @@ vhost_driver_setup(struct rte_eth_dev *eth_dev)
>>         vring_states[eth_dev->data->port_id] = vring_state;
>>
>>         if (rte_vhost_driver_register(internal->iface_name, internal->flags))
>> -               goto error;
>> +               goto list_remove;
>>
>>         if (internal->disable_flags) {
>>                 if (rte_vhost_driver_disable_features(internal->iface_name,
>>                                                       internal->disable_flags))
>> -                       goto error;
>> +                       goto drv_unreg;
>>         }
>>
>>         if (rte_vhost_driver_callback_register(internal->iface_name,
>>                                                &vhost_ops) < 0) {
>>                 VHOST_LOG(ERR, "Can't register callbacks\n");
>> -               goto error;
>> +               goto drv_unreg;
>>         }
>>
>>         if (rte_vhost_driver_start(internal->iface_name) < 0) {
>>                 VHOST_LOG(ERR, "Failed to start driver for %s\n",
>>                           internal->iface_name);
>> -               goto error;
>> +               goto drv_unreg;
>>         }
>>
>>         return 0;
>>
>> -error:
>> +drv_unreg:
>> +       rte_vhost_driver_unregister(internal->iface_name);
>> +list_remove:
>> +       pthread_mutex_lock(&internal_list_lock);
>> +       TAILQ_REMOVE(&internal_list, list, next);
>> +       pthread_mutex_unlock(&internal_list_lock);
>> +       vring_states[eth_dev->data->port_id] = NULL;
> 
> We allocate/store in vring_states after inserting the list element into
> &internal_list. Probably nitpicking, but I would expect the opposite
> order on cleanup.

Not nitpicking at all, it is good practice to clean up in the opposite
order. I will post a v2.
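
For illustration only, here is a minimal, self-contained sketch of that
reverse-order unwinding. It is not the v2 patch: the names below
(driver_setup(), register_somewhere(), state_table[], struct entry) are
hypothetical stand-ins for the rte_eth_vhost.c symbols and only show the
pattern of undoing steps in the opposite order they were taken.

/*
 * Simplified sketch of reverse-order cleanup on an error path.
 * The structures and globals are stand-ins, not the real driver code.
 */
#include <stdio.h>
#include <stdlib.h>
#include <sys/queue.h>

struct entry {
	TAILQ_ENTRY(entry) next;
	int port_id;
};

static TAILQ_HEAD(, entry) entry_list = TAILQ_HEAD_INITIALIZER(entry_list);
static void *state_table[8];	/* stand-in for the vring_states[] table */

/* Stand-in for a registration step that can fail. */
static int
register_somewhere(int port_id)
{
	(void)port_id;
	return -1;	/* simulate a failure to exercise the error path */
}

static int
driver_setup(int port_id)
{
	struct entry *e;
	void *state;

	e = calloc(1, sizeof(*e));
	if (e == NULL)
		return -1;

	state = calloc(1, 64);
	if (state == NULL)
		goto free_entry;

	/* Step 1: insert the element into the list. */
	e->port_id = port_id;
	TAILQ_INSERT_TAIL(&entry_list, e, next);

	/* Step 2: publish the state in the global table. */
	state_table[port_id] = state;

	/* Step 3: register with the framework (may fail). */
	if (register_somewhere(port_id) < 0)
		goto unpublish;

	return 0;

	/* Unwind strictly in the reverse order of the steps above. */
unpublish:
	state_table[port_id] = NULL;		/* undo step 2 */
	TAILQ_REMOVE(&entry_list, e, next);	/* undo step 1 */
	free(state);
free_entry:
	free(e);
	return -1;
}

int
main(void)
{
	printf("driver_setup() returned %d\n", driver_setup(0));
	return 0;
}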

> 
>>         rte_free(vring_state);
>> +free_list:
>>         rte_free(list);
>>
>>         return -1;
>> --
>> 2.24.1
>>
> 
> In any case,
> Reviewed-by: David Marchand <david.marchand@redhat.com>
> 

Thanks!
Maxime

> 
> --
> David Marchand
> 



Thread overview: 12+ messages
2020-02-18 14:35 [dpdk-dev] [PATCH 0/2] Fix Vhost PMD setup Maxime Coquelin
2020-02-18 14:35 ` [dpdk-dev] [PATCH 1/2] net/vhost: fix Vhost setup error path Maxime Coquelin
2020-02-18 16:15   ` David Marchand
2020-02-18 16:25     ` Maxime Coquelin [this message]
2020-02-18 14:35 ` [dpdk-dev] [PATCH 2/2] net/vhost: prevent multiple setup on reconfig Maxime Coquelin
2020-02-18 15:26   ` Maxime Coquelin
2020-02-18 15:27   ` Wang, Yinan
2020-02-18 16:16   ` David Marchand
2020-02-18 15:24 ` [dpdk-dev] [PATCH 0/2] Fix Vhost PMD setup Wang, Yinan
2020-02-18 15:25   ` Maxime Coquelin
2020-02-18 17:22 Maxime Coquelin
2020-02-18 17:22 ` [dpdk-dev] [PATCH 1/2] net/vhost: fix Vhost setup error path Maxime Coquelin
2020-02-19  3:05   ` Tiwei Bie
