DPDK patches and discussions
From: Ilya Maximets <i.maximets@ovn.org>
To: Stefan Hajnoczi <stefanha@redhat.com>,
	Ilya Maximets <i.maximets@ovn.org>
Cc: "Maxime Coquelin" <maxime.coquelin@redhat.com>,
	"Chenbo Xia" <chenbo.xia@intel.com>,
	dev@dpdk.org, "Adrian Moreno" <amorenoz@redhat.com>,
	"Julia Suvorova" <jusual@redhat.com>,
	"Marc-André Lureau" <marcandre.lureau@redhat.com>,
	"Daniel Berrange" <berrange@redhat.com>
Subject: Re: [dpdk-dev] [RFC 0/4] SocketPair Broker support for vhost and virtio-user.
Date: Thu, 18 Mar 2021 20:47:12 +0100	[thread overview]
Message-ID: <eeea4d9f-e600-9b4d-58f3-f8ced9485854@ovn.org> (raw)
In-Reply-To: <YFOTU0M50y5GlF25@stefanha-x1.localdomain>

On 3/18/21 6:52 PM, Stefan Hajnoczi wrote:
> On Wed, Mar 17, 2021 at 09:25:26PM +0100, Ilya Maximets wrote:
> Hi,
> Some questions to understand the problems that SocketPair Broker solves:
> 
>> Even more configuration tricks required in order to share some sockets
>> between different containers and not only with the host, e.g. to
>> create service chains.
> 
> How does SocketPair Broker solve this? I guess the idea is that
> SocketPair Broker must be started before other containers. That way
> applications don't need to sleep and reconnect when a socket isn't
> available yet.
> 
> On the other hand, the SocketPair Broker might be unavailable (OOM
> killer, crash, etc), so applications still need to sleep and reconnect
> to the broker itself. I'm not sure the problem has actually been solved
> unless there is a reason why the broker is always guaranteed to be
> available?

Hi, Stefan.  Thanks for your feedback!

The idea is to have the SocketPair Broker running right from the
boot of the host.  If it uses systemd socket-based service
activation, the socket should persist as long as systemd is alive,
IIUC.  An OOM kill, a crash or a restart of the broker should not
affect the existence of the socket, and systemd will re-spawn the
service if it's not running for any reason, without losing incoming
connections.
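
To illustrate what I mean (just a rough sketch, not code from this
series; the /run/spb.socket path and the fallback are made up for
illustration): the .socket unit would carry something like
ListenStream=/run/spb.socket, and the broker would pick up the
already-bound listener from systemd instead of creating its own:

/* Rough sketch of socket activation on the broker side
 * (assumes libsystemd is available). */
#include <systemd/sd-daemon.h>

int broker_get_listen_fd(void)
{
    int n = sd_listen_fds(0);   /* fds inherited from systemd, if any */

    if (n >= 1)
        return SD_LISTEN_FDS_START;  /* socket already bound by systemd */

    return -1;  /* not socket-activated: create and bind ourselves */
}

Since the listening socket is owned by systemd, a crash or restart
of the broker doesn't remove it, and connections made in the
meantime just wait in the listen backlog until the new broker
instance accepts them.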

> 
>> And some housekeeping usually required for applications in case the
>> socket server terminated abnormally and socket files left on a file
>> system:
>>  "failed to bind to vhu: Address already in use; remove it and try again"
> 
> QEMU avoids this by unlinking before binding. The drawback is that users
> might accidentally hijack an existing listen socket, but that can be
> solved with a pidfile.

How exactly could this be solved with a pidfile?  And what if it is
a different application that tries to create a socket on the same
path?  E.g. QEMU creates a socket (started in server mode) and the
user accidentally creates a dpdkvhostuser port in Open vSwitch
instead of dpdkvhostuserclient.  In that case the rte_vhost library
will try to bind to an existing socket file and will fail, and,
subsequently, port creation in OVS will fail.  We can't allow OVS to
unlink files, because that would give OVS users the ability to
unlink arbitrary sockets that OVS has access to, and we also have no
idea whether the file was created by QEMU, by a virtio-user
application, or by someone else.
There are probably ways to detect whether any live process has this
socket open, but that sounds like too much for this purpose, and I'm
not sure it's even possible if the actual user is in a different
container.
So I don't see a good, reliable way to detect these conditions.  It
falls on the shoulders of higher-level management software or the
user to clean these socket files up before adding ports.
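
(The closest thing I can think of is probing the socket and
unlinking it only on ECONNREFUSED, roughly as sketched below.  But
that is still racy between the check and the unlink, and it says
nothing about which application created the file in the first
place.  Nothing like this exists in OVS today.)

#include <sys/socket.h>
#include <sys/un.h>
#include <unistd.h>
#include <string.h>
#include <errno.h>

static int unlink_if_stale(const char *path)
{
    struct sockaddr_un un = { .sun_family = AF_UNIX };
    int fd, rc, err;

    strncpy(un.sun_path, path, sizeof un.sun_path - 1);
    fd = socket(AF_UNIX, SOCK_STREAM, 0);
    if (fd < 0)
        return -1;

    rc = connect(fd, (struct sockaddr *)&un, sizeof un);
    err = errno;
    close(fd);

    if (rc == 0)
        return -1;               /* someone is listening, leave it alone */
    if (err == ECONNREFUSED)
        return unlink(path);     /* nobody listening, likely a stale file */
    return -1;
}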

> 
>> Additionally, all applications (system and user's!) should follow
>> naming conventions and place socket files in particular location on a
>> file system to make things work.
> 
> Does SocketPair Broker solve this? Applications now need to use a naming
> convention for keys, so it seems like this issue has not been
> eliminated.

The key is an arbitrary sequence of bytes, so it's hard to call it a
naming convention.  But applications do need to know the keys,
you're right.  That's why I was careful and said "eliminates most of
the inconveniences". :)

> 
>> This patch-set aims to eliminate most of the inconveniences by
>> leveraging an infrastructure service provided by a SocketPair Broker.
> 
> I don't understand yet why this is useful for vhost-user, where the
> creation of the vhost-user device backend and its use by a VMM are
> closely managed by one piece of software:
> 
> 1. Unlink the socket path.
> 2. Create, bind, and listen on the socket path.
> 3. Instantiate the vhost-user device backend (e.g. talk to DPDK/SPDK
>    RPC, spawn a process, etc) and pass in the listen fd.
> 4. In the meantime the VMM can open the socket path and call connect(2).
>    As soon as the vhost-user device backend calls accept(2) the
>    connection will proceed (there is no need for sleeping).
> 
> This approach works across containers without a broker.

Not sure if I fully understood the question here, but anyway.
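
Just to be sure we're talking about the same thing, I read steps 1-3
above roughly as the following (a simplified sketch, error handling
omitted, the path is only an example):

#include <sys/socket.h>
#include <sys/un.h>
#include <unistd.h>
#include <string.h>

int make_listen_fd(const char *path)        /* e.g. "/tmp/vhost.sock" */
{
    struct sockaddr_un un = { .sun_family = AF_UNIX };
    int fd = socket(AF_UNIX, SOCK_STREAM, 0);

    strncpy(un.sun_path, path, sizeof un.sun_path - 1);
    unlink(path);                                    /* step 1 */
    bind(fd, (struct sockaddr *)&un, sizeof un);     /* step 2 */
    listen(fd, 1);
    return fd;    /* step 3: hand this fd to the vhost-user backend */
}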

This approach works fine if you know what application to run.
In the case of a k8s cluster, it might be a random DPDK application
with virtio-user ports running inside a container that wants to
have a network connection.  Also, this application needs to run
virtio-user in server mode, otherwise a restart of OVS would
require a restart of the application.  So you basically need to
rely on a third-party application to create a socket with the right
name in the correct location that is shared with the host, so that
OVS can find it and connect.

In the VM world everything is much simpler, since libvirt and QEMU
take care of all of this, and both are also under the full control
of the management software and the system administrator.
In the case of a container with a "random" DPDK application inside,
there is no such entity that can help.  Of course, some solution
might be implemented in the docker/podman daemon to create and
manage outward-facing sockets for an application inside the
container, but that is not available today AFAIK, and I'm not sure
it ever will be.

> 
> BTW what is the security model of the broker? Unlike pathname UNIX
> domain sockets there is no ownership permission check.

I thought about this.  Yes, we would have to allow connections to
this socket from a wide group of applications, and that might be a
problem.  However, two applications need to know the same key (at
most 1024 bytes) in order to be connected to each other.  This might
be considered a sufficient security model as long as these keys are
not predictable.  Suggestions on how to make this more secure are
welcome.

If it's really necessary to completely isolate some connections
from others, one more broker instance could be started.  But I'm
not sure what that use case would be.

The broker itself closes both ends of the socketpair on its side,
so the connection between the two applications is direct and should
be as secure as the kernel's guarantee that other system processes
cannot intercept data on arbitrary unix sockets.
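
To make it a bit more concrete, the idea is roughly the following
for each matched pair of clients (a simplified sketch, not the exact
code from the patches):

#include <sys/socket.h>
#include <sys/uio.h>
#include <string.h>
#include <unistd.h>

/* Pass 'fd' over the unix connection 'conn' via SCM_RIGHTS. */
static void send_fd(int conn, int fd)
{
    char dummy = 0;
    struct iovec iov = { .iov_base = &dummy, .iov_len = 1 };
    union { struct cmsghdr hdr; char buf[CMSG_SPACE(sizeof(int))]; } u;
    struct msghdr msg = {
        .msg_iov = &iov, .msg_iovlen = 1,
        .msg_control = u.buf, .msg_controllen = sizeof u.buf,
    };
    struct cmsghdr *cmsg = CMSG_FIRSTHDR(&msg);

    cmsg->cmsg_level = SOL_SOCKET;
    cmsg->cmsg_type = SCM_RIGHTS;
    cmsg->cmsg_len = CMSG_LEN(sizeof(int));
    memcpy(CMSG_DATA(cmsg), &fd, sizeof(int));
    sendmsg(conn, &msg, 0);
}

/* Create a socketpair, hand one end to each client, then drop both
 * ends so the broker is out of the data path entirely. */
static void pair_clients(int client_a, int client_b)
{
    int sp[2];

    socketpair(AF_UNIX, SOCK_STREAM, 0, sp);
    send_fd(client_a, sp[0]);
    send_fd(client_b, sp[1]);
    close(sp[0]);
    close(sp[1]);
}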

Best regards, Ilya Maximets.

Thread overview: 38+ messages
2021-03-17 20:25 Ilya Maximets
2021-03-17 20:25 ` [dpdk-dev] [PATCH 1/4] net/virtio: fix interrupt unregistering for listening socket Ilya Maximets
2021-03-25  8:32   ` Maxime Coquelin
2021-04-07  7:21     ` Xia, Chenbo
2021-03-17 20:25 ` [dpdk-dev] [RFC 2/4] vhost: add support for SocketPair Broker Ilya Maximets
2021-03-17 20:25 ` [dpdk-dev] [RFC 3/4] net/vhost: " Ilya Maximets
2021-03-17 20:25 ` [dpdk-dev] [RFC 4/4] net/virtio: " Ilya Maximets
2021-03-18 17:52 ` [dpdk-dev] [RFC 0/4] SocketPair Broker support for vhost and virtio-user Stefan Hajnoczi
2021-03-18 19:47   ` Ilya Maximets [this message]
2021-03-18 20:14     ` Ilya Maximets
2021-03-19 14:16       ` Stefan Hajnoczi
2021-03-19 15:37         ` Ilya Maximets
2021-03-19 16:01           ` Stefan Hajnoczi
2021-03-19 16:02           ` Marc-André Lureau
2021-03-19  8:51     ` Marc-André Lureau
2021-03-19 11:25       ` Ilya Maximets
2021-03-19 14:05     ` Stefan Hajnoczi
2021-03-19 15:29       ` Ilya Maximets
2021-03-19 17:21         ` Stefan Hajnoczi
2021-03-23 17:57           ` Adrian Moreno
2021-03-23 18:27             ` Ilya Maximets
2021-03-23 20:54               ` Billy McFall
2021-03-24 12:05                 ` Stefan Hajnoczi
2021-03-24 13:11                   ` Ilya Maximets
2021-03-24 15:07                     ` Stefan Hajnoczi
2021-03-25  9:35                     ` Stefan Hajnoczi
2021-03-25 11:00                       ` Ilya Maximets
2021-03-25 16:43                         ` Stefan Hajnoczi
2021-03-25 17:58                           ` Ilya Maximets
2021-03-30 15:01                             ` Stefan Hajnoczi
2021-03-19 14:39 ` Stefan Hajnoczi
2021-03-19 16:11   ` Ilya Maximets
2021-03-19 16:45     ` Ilya Maximets
2021-03-24 20:56       ` Maxime Coquelin
2021-03-24 21:39         ` Ilya Maximets
2021-03-24 21:51           ` Maxime Coquelin
2021-03-24 22:17             ` Ilya Maximets
2023-06-30  3:45 ` Stephen Hemminger
