From: Maxime Coquelin <maxime.coquelin@redhat.com>
To: Chenbo Xia <chenbox@nvidia.com>,
David Marchand <david.marchand@redhat.com>
Cc: "dev@dpdk.org" <dev@dpdk.org>
Subject: Re: [RFC 0/3] Vhost: fix FD entries cleanup
Date: Wed, 5 Feb 2025 16:29:26 +0100 [thread overview]
Message-ID: <48f33e8b-8ba5-49f1-a38c-7e0f6efec30b@redhat.com> (raw)
In-Reply-To: <3EE665CF-8BD9-4F3E-8CF4-30F0A285A7CC@nvidia.com>
Hi Chenbo & David,
On 2/5/25 8:27 AM, Chenbo Xia wrote:
> Hi David,
>
>> On Feb 4, 2025, at 21:18, David Marchand <david.marchand@redhat.com> wrote:
>>
>> Hello vhost maintainers,
>>
>> On Tue, Dec 24, 2024 at 4:50 PM Maxime Coquelin
>> <maxime.coquelin@redhat.com> wrote:
>>>
>>> The vhost FD manager provides a way for the read/write
>>> callbacks to request removal of their associated FD from
>>> the epoll FD set. The problem is that it is missing a cleanup
>>> callback, so the read/write callback requesting the removal
>>> has to perform cleanup before the FD is removed from the
>>> FD set. This includes closing the FD before it is removed
>>> from the epoll FD set.
>>>
>>> This series introduces a new cleanup callback which, if
>>> implemented, is called right after the FD is removed from
>>> the FD set.
>>>
>>> Maxime Coquelin (3):
>>> vhost: add cleanup callback to FD entries
>>> vhost: fix vhost-user socket cleanup order
>>> vhost: improve VDUSE reconnect handler cleanup
>>>
>>> lib/vhost/fd_man.c | 16 ++++++++++++----
>>> lib/vhost/fd_man.h | 3 ++-
>>> lib/vhost/socket.c | 46 ++++++++++++++++++++++++++--------------------
>>> lib/vhost/vduse.c | 16 +++++++++++-----
>>> 4 files changed, 51 insertions(+), 30 deletions(-)
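
(For illustration, here is a minimal, self-contained sketch of the ordering the cover letter describes: the read callback only requests removal, and a separate cleanup callback runs right after the FD has left the epoll set, so the FD is only closed once it is out of the set. The entry layout, names and the hand-rolled dispatch in main() are assumptions made for this example, not the actual lib/vhost/fd_man.c API.)

#include <stdio.h>
#include <unistd.h>

typedef void (*fd_cb)(int fd, void *dat, int *remove);

struct fdentry {
    int fd;
    fd_cb rcb;      /* read event callback */
    fd_cb cleanup;  /* new: runs once the fd has left the epoll set */
    void *dat;
};

static void read_cb(int fd, void *dat, int *remove)
{
    (void)dat;
    printf("read_cb: fatal error on fd %d, requesting removal\n", fd);
    *remove = 1;    /* no close(fd) in the callback anymore */
}

static void cleanup_cb(int fd, void *dat, int *remove)
{
    (void)dat;
    (void)remove;
    printf("cleanup_cb: closing fd %d\n", fd);
    close(fd);
}

int main(void)
{
    int fds[2];
    int remove = 0;
    struct fdentry e;

    if (pipe(fds) != 0)
        return 1;

    e.fd = fds[0];
    e.rcb = read_cb;
    e.cleanup = cleanup_cb;
    e.dat = NULL;

    e.rcb(e.fd, e.dat, &remove);            /* event dispatch */
    if (remove) {
        /* fd_man would do epoll_ctl(EPOLL_CTL_DEL, ...) here */
        if (e.cleanup != NULL)
            e.cleanup(e.fd, e.dat, &remove);
    }
    close(fds[1]);
    return 0;
}
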
>>
>> I tried this series, and it fixes the error log I reported.
>>
>> On the other hand, I wonder if we could do something simpler.
>>
>> The fd is only used by the registered handlers.
>> If a handler reports that it does not want to watch this fd anymore,
>> then there is no remaining user in the vhost library for this fd.
>>
>> So my proposal would be to rename the "remove" flag as a "close" flag:
>>
>> @@ -12,7 +12,7 @@ struct fdset;
>>
>> #define MAX_FDS 1024
>>
>> -typedef void (*fd_cb)(int fd, void *dat, int *remove);
>> +typedef void (*fd_cb)(int fd, void *dat, int *close);
>>
>> struct fdset *fdset_init(const char *name);
>>
>> And defer closing to fd_man.
>> Something like:
>>
>> @@ -367,9 +367,9 @@ fdset_event_dispatch(void *arg)
>> pthread_mutex_unlock(&pfdset->fd_mutex);
>>
>> if (rcb && events[i].events & (EPOLLIN |
>> EPOLLERR | EPOLLHUP))
>> - rcb(fd, dat, &remove1);
>> + rcb(fd, dat, &close1);
>> if (wcb && events[i].events & (EPOLLOUT |
>> EPOLLERR | EPOLLHUP))
>> - wcb(fd, dat, &remove2);
>> + wcb(fd, dat, &close2);
>> pfdentry->busy = 0;
>> /*
>> * fdset_del needs to check busy flag.
>> @@ -381,8 +381,10 @@ fdset_event_dispatch(void *arg)
>> * fdentry not to be busy, so we can't call
>> * fdset_del_locked().
>> */
>> - if (remove1 || remove2)
>> + if (close1 || close2) {
>> fdset_del(pfdset, fd);
>> + close(fd);
>> + }
>> }
>>
>> if (pfdset->destroy)
>>
>>
>> And the only thing to move out of the socket and vduse handlers is the
>> close(fd) call.
>>
>> Like:
>>
>> @@ -303,7 +303,7 @@ vhost_user_server_new_connection(int fd, void *dat, int *remove __rte_unused)
>> }
>>
>> static void
>> -vhost_user_read_cb(int connfd, void *dat, int *remove)
>> +vhost_user_read_cb(int connfd, void *dat, int *close)
>> {
>> struct vhost_user_connection *conn = dat;
>> struct vhost_user_socket *vsocket = conn->vsocket;
>> @@ -313,8 +313,7 @@ vhost_user_read_cb(int connfd, void *dat, int *remove)
>> if (ret < 0) {
>> struct virtio_net *dev = get_device(conn->vid);
>>
>> - close(connfd);
>> - *remove = 1;
>> + *close = 1;
>
> One concern I have is that, compared with this RFC, the proposal changes the timing
> of closing connfd, which means that on the QEMU side, cleaning up resources will happen later.
>
> Currently I can’t think of any issues that could be introduced by this change (maybe you and
> Maxime could remind me of something :)
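
(To make the timing point concrete: with the RFC, the read callback close()s connfd itself before vhost_destroy_device_notify() runs, so QEMU sees the socket go down first; with David's proposal, fd_man only close()s it after the callback has returned and the entry was removed from the fdset, so QEMU sees it later. The snippet below is a simplified, self-contained stand-in for the two orderings, not the real lib/vhost code.)

#include <stdio.h>
#include <sys/socket.h>
#include <unistd.h>

static void device_cleanup_stub(const char *when)
{
    printf("device cleanup runs %s the peer saw the close\n", when);
}

/* RFC 0/3 ordering: the read callback closes connfd itself. */
static void read_cb_rfc(int connfd, int *remove)
{
    close(connfd);                  /* peer (QEMU) sees EOF already here */
    *remove = 1;
    device_cleanup_stub("after");
}

/* Proposed ordering: the callback only sets the flag; fd_man does
 * fdset_del() + close() once the callback has returned. */
static void read_cb_proposal(int connfd, int *close_flag)
{
    (void)connfd;
    *close_flag = 1;
    device_cleanup_stub("before");
}

int main(void)
{
    int a[2], b[2];
    int flag;

    if (socketpair(AF_UNIX, SOCK_STREAM, 0, a) != 0 ||
        socketpair(AF_UNIX, SOCK_STREAM, 0, b) != 0)
        return 1;

    flag = 0;
    read_cb_rfc(a[0], &flag);       /* close happens inside the callback */

    flag = 0;
    read_cb_proposal(b[0], &flag);
    if (flag)
        close(b[0]);                /* fd_man closes only now, after cleanup */

    close(a[1]);
    close(b[1]);
    return 0;
}
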
That's a good point.
I just tested David's suggestion with Vhost-user, using OVS and QEMU:
- guest shutdown + reconnect
- live-migration
- OVS restart
It seems to behave very well.
> Besides this, this proposal is definitely cleaner.
I agree; I will send a new revision reusing David's proposal.
Thanks,
Maxime
>
> Thanks,
> Chenbo
>
>>
>> if (dev)
>> vhost_destroy_device_notify(dev);
>>
>>
>> Maxime, Chenbo, opinions?
>>
>>
>> --
>> David Marchand
>>
>
Thread overview:
2024-12-24 15:49 Maxime Coquelin
2024-12-24 15:49 ` [RFC 1/3] vhost: add cleanup callback to FD entries Maxime Coquelin
2024-12-24 15:49 ` [RFC 2/3] vhost: fix vhost-user socket cleanup order Maxime Coquelin
2024-12-24 15:49 ` [RFC 3/3] vhost: improve VDUSE reconnect handler cleanup Maxime Coquelin
2025-02-04 13:18 ` [RFC 0/3] Vhost: fix FD entries cleanup David Marchand
2025-02-05 7:27 ` Chenbo Xia
2025-02-05 15:29 ` Maxime Coquelin [this message]