DPDK patches and discussions
From: Maxime Coquelin <maxime.coquelin@redhat.com>
To: Matan Azrad <matan@mellanox.com>, Xiao Wang <xiao.w.wang@intel.com>
Cc: dev@dpdk.org, Adrian Moreno <amorenoz@redhat.com>
Subject: Re: [dpdk-dev] [PATCH v1 3/4] vhost: improve device ready definition
Date: Fri, 19 Jun 2020 14:04:53 +0200
Message-ID: <7a84992e-ce53-9832-3157-55524335d15e@redhat.com>
In-Reply-To: <631a3cf3-f21a-b292-b475-93552d8f73e8@redhat.com>

Hi Matan,

On 6/19/20 9:41 AM, Maxime Coquelin wrote:
> 
> 
> On 6/18/20 6:28 PM, Matan Azrad wrote:
>> Some guest drivers may not configure disabled virtio queues.
>>
>> In this case, the vhost management never triggers the vDPA device
>> configuration because it waits for the device to be ready.
> 
> This is not vDPA-only; even with the SW datapath, the application's
> new_device callback never gets called.
> 
>> The current ready state means that all the virtio queues should be
>> configured regardless of the enablement status.
>>
>> In order to support this case, this patch changes the ready state:
>> The device is ready when at least 1 queue pair is configured and
>> enabled.
>>
>> So, now, the vDPA driver will be configured when the first queue pair is
>> configured and enabled.
>>
>> Also, the queue state operation is changed to follow these rules:
>> 	1. A queue becomes ready (enabled and fully configured) -
>> 		set_vring_state(enabled).
>> 	2. A queue becomes not ready - set_vring_state(disabled).
>> 	3. A queue stays ready and a VHOST_USER_SET_VRING_ENABLE message was
>> 		handled - set_vring_state(enabled).
>>
>> The parallel operations for the application are adjusted too.
>>
>> Signed-off-by: Matan Azrad <matan@mellanox.com>
>> ---
>>  lib/librte_vhost/vhost_user.c | 51 ++++++++++++++++++++++++++++---------------
>>  1 file changed, 33 insertions(+), 18 deletions(-)
>>
>> diff --git a/lib/librte_vhost/vhost_user.c b/lib/librte_vhost/vhost_user.c
>> index b0849b9..cfd5f27 100644
>> --- a/lib/librte_vhost/vhost_user.c
>> +++ b/lib/librte_vhost/vhost_user.c
>> @@ -1295,7 +1295,7 @@
>>  {
>>  	bool rings_ok;
>>  
>> -	if (!vq)
>> +	if (!vq || !vq->enabled)
>>  		return false;
>>  
>>  	if (vq_is_packed(dev))
>> @@ -1309,24 +1309,27 @@
>>  	       vq->callfd != VIRTIO_UNINITIALIZED_EVENTFD;
>>  }
>>  
>> +#define VIRTIO_DEV_NUM_VQS_TO_BE_READY 2u
>> +
>>  static int
>>  virtio_is_ready(struct virtio_net *dev)
>>  {
>>  	struct vhost_virtqueue *vq;
>>  	uint32_t i;
>>  
>> -	if (dev->nr_vring == 0)
>> +	if (dev->nr_vring < VIRTIO_DEV_NUM_VQS_TO_BE_READY)
>>  		return 0;
>>  
>> -	for (i = 0; i < dev->nr_vring; i++) {
>> +	for (i = 0; i < VIRTIO_DEV_NUM_VQS_TO_BE_READY; i++) {
>>  		vq = dev->virtqueue[i];
>>  
>>  		if (!vq_is_ready(dev, vq))
>>  			return 0;
>>  	}
>>  
>> -	VHOST_LOG_CONFIG(INFO,
>> -		"virtio is now ready for processing.\n");
>> +	if (!(dev->flags & VIRTIO_DEV_READY))
>> +		VHOST_LOG_CONFIG(INFO,
>> +			"virtio is now ready for processing.\n");
>>  	return 1;
>>  }
>>  
>> @@ -1970,8 +1973,6 @@ static int vhost_user_set_vring_err(struct virtio_net **pdev __rte_unused,
>>  	struct virtio_net *dev = *pdev;
>>  	int enable = (int)msg->payload.state.num;
>>  	int index = (int)msg->payload.state.index;
>> -	struct rte_vdpa_device *vdpa_dev;
>> -	int did = -1;
>>  
>>  	if (validate_msg_fds(msg, 0) != 0)
>>  		return RTE_VHOST_MSG_RESULT_ERR;
>> @@ -1980,15 +1981,6 @@ static int vhost_user_set_vring_err(struct virtio_net **pdev __rte_unused,
>>  		"set queue enable: %d to qp idx: %d\n",
>>  		enable, index);
>>  
>> -	did = dev->vdpa_dev_id;
>> -	vdpa_dev = rte_vdpa_get_device(did);
>> -	if (vdpa_dev && vdpa_dev->ops->set_vring_state)
>> -		vdpa_dev->ops->set_vring_state(dev->vid, index, enable);
>> -
>> -	if (dev->notify_ops->vring_state_changed)
>> -		dev->notify_ops->vring_state_changed(dev->vid,
>> -				index, enable);
>> -
>>  	/* On disable, rings have to be stopped being processed. */
>>  	if (!enable && dev->dequeue_zero_copy)
>>  		drain_zmbuf_list(dev->virtqueue[index]);
>> @@ -2622,11 +2614,13 @@ typedef int (*vhost_message_handler_t)(struct virtio_net **pdev,
>>  	struct virtio_net *dev;
>>  	struct VhostUserMsg msg;
>>  	struct rte_vdpa_device *vdpa_dev;
>> +	bool ready[VHOST_MAX_VRING];
>>  	int did = -1;
>>  	int ret;
>>  	int unlock_required = 0;
>>  	bool handled;
>>  	int request;
>> +	uint32_t i;
>>  
>>  	dev = get_device(vid);
>>  	if (dev == NULL)
>> @@ -2668,6 +2662,10 @@ typedef int (*vhost_message_handler_t)(struct virtio_net **pdev,
>>  		VHOST_LOG_CONFIG(DEBUG, "External request %d\n", request);
>>  	}
>>  
>> +	/* Save ready status for all the VQs before message handle. */
>> +	for (i = 0; i < VHOST_MAX_VRING; i++)
>> +		ready[i] = vq_is_ready(dev, dev->virtqueue[i]);
>> +
> 
> This big array can be avoided if you save the ready status in the
> virtqueue once the message has been handled.
> 
>>  	ret = vhost_user_check_and_alloc_queue_pair(dev, &msg);
>>  	if (ret < 0) {
>>  		VHOST_LOG_CONFIG(ERR,
>> @@ -2802,6 +2800,25 @@ typedef int (*vhost_message_handler_t)(struct virtio_net **pdev,
>>  		return -1;
>>  	}
>>  
>> +	did = dev->vdpa_dev_id;
>> +	vdpa_dev = rte_vdpa_get_device(did);
>> +	/* Update ready status. */
>> +	for (i = 0; i < VHOST_MAX_VRING; i++) {
>> +		bool cur_ready = vq_is_ready(dev, dev->virtqueue[i]);
>> +
>> +		if ((cur_ready && request == VHOST_USER_SET_VRING_ENABLE &&
>> +				i == msg.payload.state.index) ||
> 
> Couldn't we remove the above condition? Aren't the callbacks already
> called in the set_vring_enable handler?
> 
>> +				cur_ready != ready[i]) {
>> +			if (vdpa_dev && vdpa_dev->ops->set_vring_state)
>> +				vdpa_dev->ops->set_vring_state(dev->vid, i,
>> +								(int)cur_ready);
>> +
>> +			if (dev->notify_ops->vring_state_changed)
>> +				dev->notify_ops->vring_state_changed(dev->vid,
>> +							i, (int)cur_ready);
>> +		}
>> +	}
> 
> I think we should move this into a dedicated function, which we would
> call in every message handler that can modify the ready state.
> 
> Doing so, we would not have to assume the master sent us a disable
> request for the queue beforehand, and we would also have proper
> synchronization if the request uses the reply-ack feature, as the
> master could assume the backend is no longer processing the ring once
> the reply-ack is received.
> 
>>  	if (!(dev->flags & VIRTIO_DEV_RUNNING) && virtio_is_ready(dev)) {
>>  		dev->flags |= VIRTIO_DEV_READY;
>>  
>> @@ -2816,8 +2833,6 @@ typedef int (*vhost_message_handler_t)(struct virtio_net **pdev,
>>  		}
>>  	}
>>  
>> -	did = dev->vdpa_dev_id;
>> -	vdpa_dev = rte_vdpa_get_device(did);
>>  	if (vdpa_dev && virtio_is_ready(dev) &&
>>  			!(dev->flags & VIRTIO_DEV_VDPA_CONFIGURED) &&
>>  			msg.request.master == VHOST_USER_SET_VRING_CALL) {
> 
> Shouldn't check on SET_VRING_CALL above be removed?
> 

Thinking about it again, I think the ready state should include whether or
not the queue is enabled. And as soon as a request impacting the ring
addresses or the call or kick FDs is handled, we should reset the ready
value and notify the state change for the impacted queue. Then, once the
request is handled, we can send state change updates for any queues whose
state changed.

Doing that, we don't have to assume the Vhost-user master will have sent
the disable request before doing the state change. And if it did, the
'not ready' update won't be sent twice to the driver or application.

In case I am not clear enough, I have prototyped this idea (only
compile-tested). If it works for you, feel free to add it in your
series.

Thanks,
Maxime


diff --git a/lib/librte_vhost/vhost.h b/lib/librte_vhost/vhost.h
index df98d15de6..48e8fcfbc0 100644
--- a/lib/librte_vhost/vhost.h
+++ b/lib/librte_vhost/vhost.h
@@ -150,6 +150,7 @@ struct vhost_virtqueue {
        /* Backend value to determine if device should started/stopped */
        int                     backend;
        int                     enabled;
+       bool                    ready;
        int                     access_ok;
        rte_spinlock_t          access_lock;

diff --git a/lib/librte_vhost/vhost_user.c b/lib/librte_vhost/vhost_user.c
index ea9cd107b9..f3cda536c6 100644
--- a/lib/librte_vhost/vhost_user.c
+++ b/lib/librte_vhost/vhost_user.c
@@ -228,6 +228,87 @@ vhost_backend_cleanup(struct virtio_net *dev)
        dev->postcopy_listening = 0;
 }

+
+static bool
+vq_is_ready(struct virtio_net *dev, struct vhost_virtqueue *vq)
+{
+       bool rings_ok;
+
+       if (!vq)
+               return false;
+
+       if (vq_is_packed(dev))
+               rings_ok = vq->desc_packed && vq->driver_event &&
+                       vq->device_event;
+       else
+               rings_ok = vq->desc && vq->avail && vq->used;
+
+       return rings_ok &&
+              vq->kickfd != VIRTIO_UNINITIALIZED_EVENTFD &&
+              vq->callfd != VIRTIO_UNINITIALIZED_EVENTFD &&
+              vq->enabled;
+}
+
+static int
+virtio_is_ready(struct virtio_net *dev)
+{
+       struct vhost_virtqueue *vq;
+       uint32_t i;
+
+       if (dev->nr_vring == 0)
+               return 0;
+
+       for (i = 0; i < 2; i++) {
+               vq = dev->virtqueue[i];
+
+               if (!vq_is_ready(dev, vq))
+                       return 0;
+       }
+
+       VHOST_LOG_CONFIG(INFO,
+               "virtio is now ready for processing.\n");
+       return 1;
+}
+
+static void
+vhost_user_update_vring_state(struct virtio_net *dev, int idx)
+{
+       struct vhost_virtqueue *vq = dev->virtqueue[idx];
+       struct rte_vdpa_device *vdpa_dev;
+       int did;
+       bool was_ready = vq->ready;
+
+       vq->ready = vq_is_ready(dev, vq);
+       if (was_ready == vq->ready)
+               return;
+
+       if (dev->notify_ops->vring_state_changed)
+               dev->notify_ops->vring_state_changed(dev->vid, idx, vq->ready);
+
+       did = dev->vdpa_dev_id;
+       vdpa_dev = rte_vdpa_get_device(did);
+       if (vdpa_dev && vdpa_dev->ops->set_vring_state)
+               vdpa_dev->ops->set_vring_state(dev->vid, idx, vq->ready);
+}
+
+static void
+vhost_user_update_vring_state_all(struct virtio_net *dev)
+{
+       uint32_t i;
+
+       for (i = 0; i < dev->nr_vring; i++)
+               vhost_user_update_vring_state(dev, i);
+}
+
+static void
+vhost_user_invalidate_vring(struct virtio_net *dev, int index)
+{
+       struct vhost_virtqueue *vq = dev->virtqueue[index];
+
+       vring_invalidate(dev, vq);
+       vhost_user_update_vring_state(dev, index);
+}
+
 /*
  * This function just returns success at the moment unless
  * the device hasn't been initialised.
@@ -841,7 +922,7 @@ vhost_user_set_vring_addr(struct virtio_net **pdev, struct VhostUserMsg *msg,
         */
        memcpy(&vq->ring_addrs, addr, sizeof(*addr));

-       vring_invalidate(dev, vq);
+       vhost_user_invalidate_vring(dev, msg->payload.addr.index);

        if ((vq->enabled && (dev->features &
                                (1ULL << VHOST_USER_F_PROTOCOL_FEATURES))) ||
@@ -1267,7 +1348,7 @@ vhost_user_set_mem_table(struct virtio_net **pdev, struct VhostUserMsg *msg,
                         * need to be translated again as virtual addresses have
                         * changed.
                         */
-                       vring_invalidate(dev, vq);
+                       vhost_user_invalidate_vring(dev, i);

                        dev = translate_ring_addresses(dev, i);
                        if (!dev) {
@@ -1290,46 +1371,6 @@ vhost_user_set_mem_table(struct virtio_net **pdev, struct VhostUserMsg *msg,
        return RTE_VHOST_MSG_RESULT_ERR;
 }

-static bool
-vq_is_ready(struct virtio_net *dev, struct vhost_virtqueue *vq)
-{
-       bool rings_ok;
-
-       if (!vq)
-               return false;
-
-       if (vq_is_packed(dev))
-               rings_ok = vq->desc_packed && vq->driver_event &&
-                       vq->device_event;
-       else
-               rings_ok = vq->desc && vq->avail && vq->used;
-
-       return rings_ok &&
-              vq->kickfd != VIRTIO_UNINITIALIZED_EVENTFD &&
-              vq->callfd != VIRTIO_UNINITIALIZED_EVENTFD;
-}
-
-static int
-virtio_is_ready(struct virtio_net *dev)
-{
-       struct vhost_virtqueue *vq;
-       uint32_t i;
-
-       if (dev->nr_vring == 0)
-               return 0;
-
-       for (i = 0; i < dev->nr_vring; i++) {
-               vq = dev->virtqueue[i];
-
-               if (!vq_is_ready(dev, vq))
-                       return 0;
-       }
-
-       VHOST_LOG_CONFIG(INFO,
-               "virtio is now ready for processing.\n");
-       return 1;
-}
-
 static void *
 inflight_mem_alloc(const char *name, size_t size, int *fd)
 {
@@ -1599,6 +1640,10 @@ vhost_user_set_vring_call(struct virtio_net **pdev, struct VhostUserMsg *msg,
        if (vq->callfd >= 0)
                close(vq->callfd);

+       vq->callfd = VIRTIO_UNINITIALIZED_EVENTFD;
+
+       vhost_user_update_vring_state(dev, file.index);
+
        vq->callfd = file.fd;

        return RTE_VHOST_MSG_RESULT_OK;
@@ -1847,15 +1892,16 @@ vhost_user_set_vring_kick(struct virtio_net **pdev, struct VhostUserMsg *msg,
         * the ring starts already enabled. Otherwise, it is enabled via
         * the SET_VRING_ENABLE message.
         */
-       if (!(dev->features & (1ULL << VHOST_USER_F_PROTOCOL_FEATURES))) {
+       if (!(dev->features & (1ULL << VHOST_USER_F_PROTOCOL_FEATURES)))
                vq->enabled = 1;
-               if (dev->notify_ops->vring_state_changed)
-                       dev->notify_ops->vring_state_changed(
-                               dev->vid, file.index, 1);
-       }

        if (vq->kickfd >= 0)
                close(vq->kickfd);
+
+       vq->kickfd = VIRTIO_UNINITIALIZED_EVENTFD;
+
+       vhost_user_update_vring_state(dev, file.index);
+
        vq->kickfd = file.fd;

        if (vq_is_packed(dev)) {
@@ -1953,6 +1999,10 @@ vhost_user_get_vring_base(struct virtio_net **pdev,
        msg->size = sizeof(msg->payload.state);
        msg->fd_num = 0;

+       /*
+        * No need to call vhost_user_invalidate_vring here,
+        * device is destroyed.
+        */
        vring_invalidate(dev, vq);

        return RTE_VHOST_MSG_RESULT_REPLY;
@@ -1970,8 +2020,7 @@ vhost_user_set_vring_enable(struct virtio_net **pdev,
        struct virtio_net *dev = *pdev;
        int enable = (int)msg->payload.state.num;
        int index = (int)msg->payload.state.index;
-       struct rte_vdpa_device *vdpa_dev;
-       int did = -1;
+       struct vhost_virtqueue *vq = dev->virtqueue[index];

        if (validate_msg_fds(msg, 0) != 0)
                return RTE_VHOST_MSG_RESULT_ERR;
@@ -1980,20 +2029,13 @@ vhost_user_set_vring_enable(struct virtio_net **pdev,
                "set queue enable: %d to qp idx: %d\n",
                enable, index);

-       did = dev->vdpa_dev_id;
-       vdpa_dev = rte_vdpa_get_device(did);
-       if (vdpa_dev && vdpa_dev->ops->set_vring_state)
-               vdpa_dev->ops->set_vring_state(dev->vid, index, enable);
-
-       if (dev->notify_ops->vring_state_changed)
-               dev->notify_ops->vring_state_changed(dev->vid,
-                               index, enable);
-
        /* On disable, rings have to be stopped being processed. */
        if (!enable && dev->dequeue_zero_copy)
-               drain_zmbuf_list(dev->virtqueue[index]);
+               drain_zmbuf_list(vq);
+
+       vq->enabled = enable;

-       dev->virtqueue[index]->enabled = enable;
+       vhost_user_update_vring_state(dev, index);

        return RTE_VHOST_MSG_RESULT_OK;
 }
@@ -2332,7 +2374,7 @@ vhost_user_iotlb_msg(struct virtio_net **pdev, struct VhostUserMsg *msg,
                                        imsg->size);

                        if (is_vring_iotlb(dev, vq, imsg))
-                               vring_invalidate(dev, vq);
+                               vhost_user_invalidate_vring(dev, i);
                }
                break;
        default:
@@ -2791,6 +2833,8 @@ vhost_user_msg_handler(int vid, int fd)
                return -1;
        }

+       vhost_user_update_vring_state_all(dev);
+
        if (!(dev->flags & VIRTIO_DEV_RUNNING) && virtio_is_ready(dev)) {
                dev->flags |= VIRTIO_DEV_READY;

@@ -2808,8 +2852,7 @@ vhost_user_msg_handler(int vid, int fd)
        did = dev->vdpa_dev_id;
        vdpa_dev = rte_vdpa_get_device(did);
        if (vdpa_dev && virtio_is_ready(dev) &&
-                       !(dev->flags & VIRTIO_DEV_VDPA_CONFIGURED) &&
-                       msg.request.master == VHOST_USER_SET_VRING_CALL) {
+                       !(dev->flags & VIRTIO_DEV_VDPA_CONFIGURED)) {
                if (vdpa_dev->ops->dev_conf)
                        vdpa_dev->ops->dev_conf(dev->vid);
                dev->flags |= VIRTIO_DEV_VDPA_CONFIGURED;

