* [dpdk-dev] [PATCH v2 1/8] vhost: announce VIRTIO_F_IN_ORDER support
2018-06-25 15:17 [dpdk-dev] [PATCH v2 0/8] support VIRTIO_F_IN_ORDER feature Marvin Liu
@ 2018-06-25 15:17 ` Marvin Liu
2018-06-26 7:54 ` Maxime Coquelin
2018-06-25 15:17 ` [dpdk-dev] [PATCH v2 2/8] net/virtio: add VIRTIO_F_IN_ORDER definition Marvin Liu
` (7 subsequent siblings)
8 siblings, 1 reply; 22+ messages in thread
From: Marvin Liu @ 2018-06-25 15:17 UTC (permalink / raw)
To: maxime.coquelin, tiwei.bie; +Cc: zhihong.wang, dev, Marvin Liu
Devices that always use descriptors in the same order in which they
have been made available can offer the VIRTIO_F_IN_ORDER feature. If
negotiated, this knowledge allows a device to notify the virtio driver
about the use of a batch of buffers by writing only the used ring index.

The vhost-user device supports this feature by default. If vhost
dequeue zero copy is enabled, VIRTIO_F_IN_ORDER should be disabled, as
vhost can't assure that descriptors returned from the NIC are in order.
Signed-off-by: Marvin Liu <yong.liu@intel.com>
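As an illustrative aside (not part of the patch): an application that
knows its backend cannot guarantee ordering could also clear the bit
explicitly through the existing rte_vhost_driver_disable_features()
API. A minimal sketch, assuming this series is applied; the socket
path is arbitrary:

    /* Hypothetical application-side sketch: mask out IN_ORDER after
     * registering the vhost-user socket. VIRTIO_F_IN_ORDER (bit 35)
     * is the definition introduced below. */
    rte_vhost_driver_register("/tmp/sock0", 0);
    rte_vhost_driver_disable_features("/tmp/sock0",
                                      1ULL << VIRTIO_F_IN_ORDER);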
diff --git a/lib/librte_vhost/socket.c b/lib/librte_vhost/socket.c
index 0399c37bc..d63031747 100644
--- a/lib/librte_vhost/socket.c
+++ b/lib/librte_vhost/socket.c
@@ -853,6 +853,12 @@ rte_vhost_driver_register(const char *path, uint64_t flags)
vsocket->supported_features = VIRTIO_NET_SUPPORTED_FEATURES;
vsocket->features = VIRTIO_NET_SUPPORTED_FEATURES;
+ /* Dequeue zero copy can't assure descriptors returned in order */
+ if (vsocket->dequeue_zero_copy) {
+ vsocket->supported_features &= ~(1ULL << VIRTIO_F_IN_ORDER);
+ vsocket->features &= ~(1ULL << VIRTIO_F_IN_ORDER);
+ }
+
if (!(flags & RTE_VHOST_USER_IOMMU_SUPPORT)) {
vsocket->supported_features &= ~(1ULL << VIRTIO_F_IOMMU_PLATFORM);
vsocket->features &= ~(1ULL << VIRTIO_F_IOMMU_PLATFORM);
diff --git a/lib/librte_vhost/vhost.h b/lib/librte_vhost/vhost.h
index 58c425a5c..7539b0621 100644
--- a/lib/librte_vhost/vhost.h
+++ b/lib/librte_vhost/vhost.h
@@ -191,6 +191,13 @@ struct vhost_msg {
#define VIRTIO_F_VERSION_1 32
#endif
+/*
+ * Available and used descs are in same order
+ */
+#ifndef VIRTIO_F_IN_ORDER
+#define VIRTIO_F_IN_ORDER 35
+#endif
+
/* Features supported by this builtin vhost-user net driver. */
#define VIRTIO_NET_SUPPORTED_FEATURES ((1ULL << VIRTIO_NET_F_MRG_RXBUF) | \
(1ULL << VIRTIO_F_ANY_LAYOUT) | \
@@ -214,7 +221,8 @@ struct vhost_msg {
(1ULL << VIRTIO_NET_F_GUEST_ECN) | \
(1ULL << VIRTIO_RING_F_INDIRECT_DESC) | \
(1ULL << VIRTIO_RING_F_EVENT_IDX) | \
- (1ULL << VIRTIO_NET_F_MTU) | \
+ (1ULL << VIRTIO_NET_F_MTU) | \
+ (1ULL << VIRTIO_F_IN_ORDER) | \
(1ULL << VIRTIO_F_IOMMU_PLATFORM))
--
2.17.0
* Re: [dpdk-dev] [PATCH v2 1/8] vhost: announce VIRTIO_F_IN_ORDER support
2018-06-25 15:17 ` [dpdk-dev] [PATCH v2 1/8] vhost: announce VIRTIO_F_IN_ORDER support Marvin Liu
@ 2018-06-26 7:54 ` Maxime Coquelin
0 siblings, 0 replies; 22+ messages in thread
From: Maxime Coquelin @ 2018-06-26 7:54 UTC (permalink / raw)
To: Marvin Liu, tiwei.bie; +Cc: zhihong.wang, dev
On 06/25/2018 05:17 PM, Marvin Liu wrote:
> Devices that always use descriptors in the same order in which they
> have been made available can offer the VIRTIO_F_IN_ORDER feature. If
> negotiated, this knowledge allows a device to notify the virtio driver
> about the use of a batch of buffers by writing only the used ring index.
>
> The vhost-user device supports this feature by default. If vhost
> dequeue zero copy is enabled, VIRTIO_F_IN_ORDER should be disabled, as
> vhost can't assure that descriptors returned from the NIC are in order.
>
> Signed-off-by: Marvin Liu <yong.liu@intel.com>
Reviewed-by: Maxime Coquelin <maxime.coquelin@redhat.com>
Thanks,
Maxime
* [dpdk-dev] [PATCH v2 2/8] net/virtio: add VIRTIO_F_IN_ORDER definition
2018-06-25 15:17 [dpdk-dev] [PATCH v2 0/8] support VIRTIO_F_IN_ORDER feature Marvin Liu
2018-06-25 15:17 ` [dpdk-dev] [PATCH v2 1/8] vhost: announce VIRTIO_F_IN_ORDER support Marvin Liu
@ 2018-06-25 15:17 ` Marvin Liu
2018-06-26 7:58 ` Maxime Coquelin
2018-06-25 15:17 ` [dpdk-dev] [PATCH v2 3/8] net/virtio-user: add mrg_rxbuf and in_order vdev parameters Marvin Liu
` (6 subsequent siblings)
8 siblings, 1 reply; 22+ messages in thread
From: Marvin Liu @ 2018-06-25 15:17 UTC (permalink / raw)
To: maxime.coquelin, tiwei.bie; +Cc: zhihong.wang, dev, Marvin Liu
If VIRTIO_F_IN_ORDER has been negotiated, the driver will use
descriptors in ring order: starting from offset 0 in the table and
wrapping around at the end of the table. Also introduce the
use_inorder_[rt]x flags for selecting the IN_ORDER [RT]x handlers.
Signed-off-by: Marvin Liu <yong.liu@intel.com>
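A minimal sketch (illustrative only; inorder_next_idx is a
hypothetical name, not code from this patch) of what ring-order
consumption means in practice: the next descriptor index is a simple
increment with power-of-two wraparound, rather than a desc.next chain
walk:

    /* Queue sizes are powers of two, so wrapping at the end of the
     * table reduces to a mask. */
    static inline uint16_t
    inorder_next_idx(uint16_t idx, uint16_t nentries)
    {
            return (idx + 1) & (nentries - 1);
    }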
diff --git a/drivers/net/virtio/virtio_pci.h b/drivers/net/virtio/virtio_pci.h
index a28ba8339..77f805df6 100644
--- a/drivers/net/virtio/virtio_pci.h
+++ b/drivers/net/virtio/virtio_pci.h
@@ -121,6 +121,12 @@ struct virtnet_ctl;
#define VIRTIO_TRANSPORT_F_START 28
#define VIRTIO_TRANSPORT_F_END 34
+/*
+ * Inorder feature indicates that all buffers are used by the device
+ * in the same order in which they have been made available.
+ */
+#define VIRTIO_F_IN_ORDER 35
+
/* The Guest publishes the used index for which it expects an interrupt
* at the end of the avail ring. Host should ignore the avail->flags field. */
/* The Host publishes the avail index for which it expects a kick
@@ -233,6 +239,8 @@ struct virtio_hw {
uint8_t modern;
uint8_t use_simple_rx;
uint8_t use_simple_tx;
+ uint8_t use_inorder_rx;
+ uint8_t use_inorder_tx;
uint16_t port_id;
uint8_t mac_addr[ETHER_ADDR_LEN];
uint32_t notify_off_multiplier;
--
2.17.0
* [dpdk-dev] [PATCH v2 3/8] net/virtio-user: add mrg_rxbuf and in_order vdev parameters
2018-06-25 15:17 [dpdk-dev] [PATCH v2 0/8] support VIRTIO_F_IN_ORDER feature Marvin Liu
2018-06-25 15:17 ` [dpdk-dev] [PATCH v2 1/8] vhost: announce VIRTIO_F_IN_ORDER support Marvin Liu
2018-06-25 15:17 ` [dpdk-dev] [PATCH v2 2/8] net/virtio: add VIRTIO_F_IN_ORDER definition Marvin Liu
@ 2018-06-25 15:17 ` Marvin Liu
2018-06-27 6:53 ` Maxime Coquelin
2018-06-25 15:17 ` [dpdk-dev] [PATCH v2 4/8] net/virtio: free IN_ORDER descriptors before device start Marvin Liu
` (5 subsequent siblings)
8 siblings, 1 reply; 22+ messages in thread
From: Marvin Liu @ 2018-06-25 15:17 UTC (permalink / raw)
To: maxime.coquelin, tiwei.bie; +Cc: zhihong.wang, dev, Marvin Liu
Add parameters for configuring the VIRTIO_NET_F_MRG_RXBUF and
VIRTIO_F_IN_ORDER feature bits. If a feature is disabled, the
corresponding unsupported feature bit is also updated.
Signed-off-by: Marvin Liu <yong.liu@intel.com>
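For example, the new devargs could be exercised from testpmd as
follows (an illustrative command line; the core list, socket path and
queue count are arbitrary):

    ./testpmd -l 0-1 -n 4 \
        --vdev 'net_virtio_user0,path=/tmp/sock0,queues=1,mrg_rxbuf=0,in_order=0' \
        -- -i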
diff --git a/drivers/net/virtio/virtio_user/virtio_user_dev.c b/drivers/net/virtio/virtio_user/virtio_user_dev.c
index 360a92eae..34a7f53ac 100644
--- a/drivers/net/virtio/virtio_user/virtio_user_dev.c
+++ b/drivers/net/virtio/virtio_user/virtio_user_dev.c
@@ -375,7 +375,8 @@ virtio_user_dev_setup(struct virtio_user_dev *dev)
int
virtio_user_dev_init(struct virtio_user_dev *dev, char *path, int queues,
- int cq, int queue_size, const char *mac, char **ifname)
+ int cq, int queue_size, const char *mac, char **ifname,
+ int mrg_rxbuf, int in_order)
{
pthread_mutex_init(&dev->mutex, NULL);
snprintf(dev->path, PATH_MAX, "%s", path);
@@ -420,6 +421,16 @@ virtio_user_dev_init(struct virtio_user_dev *dev, char *path, int queues,
dev->device_features = VIRTIO_USER_SUPPORTED_FEATURES;
}
+ if (!mrg_rxbuf) {
+ dev->device_features &= ~(1ull << VIRTIO_NET_F_MRG_RXBUF);
+ dev->unmask_features |= (1ull << VIRTIO_NET_F_MRG_RXBUF);
+ }
+
+ if (!in_order) {
+ dev->device_features &= ~(1ull << VIRTIO_F_IN_ORDER);
+ dev->unmask_features |= (1ull << VIRTIO_F_IN_ORDER);
+ }
+
if (dev->mac_specified)
dev->device_features |= (1ull << VIRTIO_NET_F_MAC);
else {
diff --git a/drivers/net/virtio/virtio_user/virtio_user_dev.h b/drivers/net/virtio/virtio_user/virtio_user_dev.h
index a1da5f56c..b97eec075 100644
--- a/drivers/net/virtio/virtio_user/virtio_user_dev.h
+++ b/drivers/net/virtio/virtio_user/virtio_user_dev.h
@@ -48,7 +48,8 @@ int is_vhost_user_by_type(const char *path);
int virtio_user_start_device(struct virtio_user_dev *dev);
int virtio_user_stop_device(struct virtio_user_dev *dev);
int virtio_user_dev_init(struct virtio_user_dev *dev, char *path, int queues,
- int cq, int queue_size, const char *mac, char **ifname);
+ int cq, int queue_size, const char *mac, char **ifname,
+ int mrg_rxbuf, int in_order);
void virtio_user_dev_uninit(struct virtio_user_dev *dev);
void virtio_user_handle_cq(struct virtio_user_dev *dev, uint16_t queue_idx);
uint8_t virtio_user_handle_mq(struct virtio_user_dev *dev, uint16_t q_pairs);
diff --git a/drivers/net/virtio/virtio_user_ethdev.c b/drivers/net/virtio/virtio_user_ethdev.c
index 0c02c6851..e9a5039aa 100644
--- a/drivers/net/virtio/virtio_user_ethdev.c
+++ b/drivers/net/virtio/virtio_user_ethdev.c
@@ -358,8 +358,12 @@ static const char *valid_args[] = {
VIRTIO_USER_ARG_QUEUE_SIZE,
#define VIRTIO_USER_ARG_INTERFACE_NAME "iface"
VIRTIO_USER_ARG_INTERFACE_NAME,
-#define VIRTIO_USER_ARG_SERVER_MODE "server"
+#define VIRTIO_USER_ARG_SERVER_MODE "server"
VIRTIO_USER_ARG_SERVER_MODE,
+#define VIRTIO_USER_ARG_MRG_RXBUF "mrg_rxbuf"
+ VIRTIO_USER_ARG_MRG_RXBUF,
+#define VIRTIO_USER_ARG_IN_ORDER "in_order"
+ VIRTIO_USER_ARG_IN_ORDER,
NULL
};
@@ -462,6 +466,8 @@ virtio_user_pmd_probe(struct rte_vdev_device *dev)
uint64_t cq = VIRTIO_USER_DEF_CQ_EN;
uint64_t queue_size = VIRTIO_USER_DEF_Q_SZ;
uint64_t server_mode = VIRTIO_USER_DEF_SERVER_MODE;
+ uint64_t mrg_rxbuf = 1;
+ uint64_t in_order = 1;
char *path = NULL;
char *ifname = NULL;
char *mac_addr = NULL;
@@ -561,6 +567,24 @@ virtio_user_pmd_probe(struct rte_vdev_device *dev)
goto end;
}
+ if (rte_kvargs_count(kvlist, VIRTIO_USER_ARG_MRG_RXBUF) == 1) {
+ if (rte_kvargs_process(kvlist, VIRTIO_USER_ARG_MRG_RXBUF,
+ &get_integer_arg, &mrg_rxbuf) < 0) {
+ PMD_INIT_LOG(ERR, "error to parse %s",
+ VIRTIO_USER_ARG_MRG_RXBUF);
+ goto end;
+ }
+ }
+
+ if (rte_kvargs_count(kvlist, VIRTIO_USER_ARG_IN_ORDER) == 1) {
+ if (rte_kvargs_process(kvlist, VIRTIO_USER_ARG_IN_ORDER,
+ &get_integer_arg, &in_order) < 0) {
+ PMD_INIT_LOG(ERR, "error to parse %s",
+ VIRTIO_USER_ARG_IN_ORDER);
+ goto end;
+ }
+ }
+
if (rte_eal_process_type() == RTE_PROC_PRIMARY) {
struct virtio_user_dev *vu_dev;
@@ -577,7 +601,8 @@ virtio_user_pmd_probe(struct rte_vdev_device *dev)
else
vu_dev->is_server = false;
if (virtio_user_dev_init(hw->virtio_user_dev, path, queues, cq,
- queue_size, mac_addr, &ifname) < 0) {
+ queue_size, mac_addr, &ifname, mrg_rxbuf,
+ in_order) < 0) {
PMD_INIT_LOG(ERR, "virtio_user_dev_init fails");
virtio_user_eth_dev_free(eth_dev);
goto end;
@@ -655,4 +680,6 @@ RTE_PMD_REGISTER_PARAM_STRING(net_virtio_user,
"cq=<int> "
"queue_size=<int> "
"queues=<int> "
- "iface=<string>");
+ "iface=<string>"
+ "mrg_rxbuf=<0|1>"
+ "in_order=<0|1>");
--
2.17.0
* Re: [dpdk-dev] [PATCH v2 3/8] net/virtio-user: add mrg_rxbuf and in_order vdev parameters
2018-06-25 15:17 ` [dpdk-dev] [PATCH v2 3/8] net/virtio-user: add mrg_rxbuf and in_order vdev parameters Marvin Liu
@ 2018-06-27 6:53 ` Maxime Coquelin
0 siblings, 0 replies; 22+ messages in thread
From: Maxime Coquelin @ 2018-06-27 6:53 UTC (permalink / raw)
To: Marvin Liu, tiwei.bie; +Cc: zhihong.wang, dev
On 06/25/2018 05:17 PM, Marvin Liu wrote:
> Add parameters for configuring the VIRTIO_NET_F_MRG_RXBUF and
> VIRTIO_F_IN_ORDER feature bits. If a feature is disabled, the
> corresponding unsupported feature bit is also updated.
>
> Signed-off-by: Marvin Liu <yong.liu@intel.com>
>
> diff --git a/drivers/net/virtio/virtio_user/virtio_user_dev.c b/drivers/net/virtio/virtio_user/virtio_user_dev.c
> index 360a92eae..34a7f53ac 100644
> --- a/drivers/net/virtio/virtio_user/virtio_user_dev.c
> +++ b/drivers/net/virtio/virtio_user/virtio_user_dev.c
> @@ -375,7 +375,8 @@ virtio_user_dev_setup(struct virtio_user_dev *dev)
>
> int
> virtio_user_dev_init(struct virtio_user_dev *dev, char *path, int queues,
> - int cq, int queue_size, const char *mac, char **ifname)
> + int cq, int queue_size, const char *mac, char **ifname,
> + int mrg_rxbuf, int in_order)
> {
> pthread_mutex_init(&dev->mutex, NULL);
> snprintf(dev->path, PATH_MAX, "%s", path);
> @@ -420,6 +421,16 @@ virtio_user_dev_init(struct virtio_user_dev *dev, char *path, int queues,
> dev->device_features = VIRTIO_USER_SUPPORTED_FEATURES;
> }
>
> + if (!mrg_rxbuf) {
> + dev->device_features &= ~(1ull << VIRTIO_NET_F_MRG_RXBUF);
> + dev->unmask_features |= (1ull << VIRTIO_NET_F_MRG_RXBUF);
> + }
> +
> + if (!in_order) {
> + dev->device_features &= ~(1ull << VIRTIO_F_IN_ORDER);
> + dev->unmask_features |= (1ull << VIRTIO_F_IN_ORDER);
> + }
> +
> if (dev->mac_specified)
> dev->device_features |= (1ull << VIRTIO_NET_F_MAC);
> else {
> diff --git a/drivers/net/virtio/virtio_user/virtio_user_dev.h b/drivers/net/virtio/virtio_user/virtio_user_dev.h
> index a1da5f56c..b97eec075 100644
> --- a/drivers/net/virtio/virtio_user/virtio_user_dev.h
> +++ b/drivers/net/virtio/virtio_user/virtio_user_dev.h
> @@ -48,7 +48,8 @@ int is_vhost_user_by_type(const char *path);
> int virtio_user_start_device(struct virtio_user_dev *dev);
> int virtio_user_stop_device(struct virtio_user_dev *dev);
> int virtio_user_dev_init(struct virtio_user_dev *dev, char *path, int queues,
> - int cq, int queue_size, const char *mac, char **ifname);
> + int cq, int queue_size, const char *mac, char **ifname,
> + int mrg_rxbuf, int in_order);
> void virtio_user_dev_uninit(struct virtio_user_dev *dev);
> void virtio_user_handle_cq(struct virtio_user_dev *dev, uint16_t queue_idx);
> uint8_t virtio_user_handle_mq(struct virtio_user_dev *dev, uint16_t q_pairs);
> diff --git a/drivers/net/virtio/virtio_user_ethdev.c b/drivers/net/virtio/virtio_user_ethdev.c
> index 0c02c6851..e9a5039aa 100644
> --- a/drivers/net/virtio/virtio_user_ethdev.c
> +++ b/drivers/net/virtio/virtio_user_ethdev.c
> @@ -358,8 +358,12 @@ static const char *valid_args[] = {
> VIRTIO_USER_ARG_QUEUE_SIZE,
> #define VIRTIO_USER_ARG_INTERFACE_NAME "iface"
> VIRTIO_USER_ARG_INTERFACE_NAME,
> -#define VIRTIO_USER_ARG_SERVER_MODE "server"
> +#define VIRTIO_USER_ARG_SERVER_MODE "server"
> VIRTIO_USER_ARG_SERVER_MODE,
> +#define VIRTIO_USER_ARG_MRG_RXBUF "mrg_rxbuf"
> + VIRTIO_USER_ARG_MRG_RXBUF,
> +#define VIRTIO_USER_ARG_IN_ORDER "in_order"
> + VIRTIO_USER_ARG_IN_ORDER,
> NULL
> };
>
> @@ -462,6 +466,8 @@ virtio_user_pmd_probe(struct rte_vdev_device *dev)
> uint64_t cq = VIRTIO_USER_DEF_CQ_EN;
> uint64_t queue_size = VIRTIO_USER_DEF_Q_SZ;
> uint64_t server_mode = VIRTIO_USER_DEF_SERVER_MODE;
> + uint64_t mrg_rxbuf = 1;
> + uint64_t in_order = 1;
> char *path = NULL;
> char *ifname = NULL;
> char *mac_addr = NULL;
> @@ -561,6 +567,24 @@ virtio_user_pmd_probe(struct rte_vdev_device *dev)
> goto end;
> }
>
> + if (rte_kvargs_count(kvlist, VIRTIO_USER_ARG_MRG_RXBUF) == 1) {
> + if (rte_kvargs_process(kvlist, VIRTIO_USER_ARG_MRG_RXBUF,
> + &get_integer_arg, &mrg_rxbuf) < 0) {
> + PMD_INIT_LOG(ERR, "error to parse %s",
> + VIRTIO_USER_ARG_MRG_RXBUF);
> + goto end;
> + }
> + }
> +
> + if (rte_kvargs_count(kvlist, VIRTIO_USER_ARG_IN_ORDER) == 1) {
> + if (rte_kvargs_process(kvlist, VIRTIO_USER_ARG_IN_ORDER,
> + &get_integer_arg, &in_order) < 0) {
> + PMD_INIT_LOG(ERR, "error to parse %s",
> + VIRTIO_USER_ARG_IN_ORDER);
> + goto end;
> + }
> + }
> +
> if (rte_eal_process_type() == RTE_PROC_PRIMARY) {
> struct virtio_user_dev *vu_dev;
>
> @@ -577,7 +601,8 @@ virtio_user_pmd_probe(struct rte_vdev_device *dev)
> else
> vu_dev->is_server = false;
> if (virtio_user_dev_init(hw->virtio_user_dev, path, queues, cq,
> - queue_size, mac_addr, &ifname) < 0) {
> + queue_size, mac_addr, &ifname, mrg_rxbuf,
> + in_order) < 0) {
Doesn't the indent change looks weird here?
> PMD_INIT_LOG(ERR, "virtio_user_dev_init fails");
> virtio_user_eth_dev_free(eth_dev);
> goto end;
> @@ -655,4 +680,6 @@ RTE_PMD_REGISTER_PARAM_STRING(net_virtio_user,
> "cq=<int> "
> "queue_size=<int> "
> "queues=<int> "
> - "iface=<string>");
> + "iface=<string>"
> + "mrg_rxbuf=<0|1>"
> + "in_order=<0|1>");
>
Maybe the new options should be documented?
Regards,
Maxime
* [dpdk-dev] [PATCH v2 4/8] net/virtio: free IN_ORDER descriptors before device start
2018-06-25 15:17 [dpdk-dev] [PATCH v2 0/8] support VIRTIO_F_IN_ORDER feature Marvin Liu
` (2 preceding siblings ...)
2018-06-25 15:17 ` [dpdk-dev] [PATCH v2 3/8] net/virtio-user: add mrg_rxbuf and in_order vdev parameters Marvin Liu
@ 2018-06-25 15:17 ` Marvin Liu
2018-06-27 6:58 ` Maxime Coquelin
2018-06-25 15:17 ` [dpdk-dev] [PATCH v2 5/8] net/virtio: extract common part for IN_ORDER functions Marvin Liu
` (4 subsequent siblings)
8 siblings, 1 reply; 22+ messages in thread
From: Marvin Liu @ 2018-06-25 15:17 UTC (permalink / raw)
To: maxime.coquelin, tiwei.bie; +Cc: zhihong.wang, dev, Marvin Liu
Add a new function for freeing IN_ORDER descriptors. Descriptors are
allocated and freed sequentially when the IN_ORDER feature is
negotiated, so there is no need to maintain a chain of free
descriptors; updating the index is enough.
Signed-off-by: Marvin Liu <yong.liu@intel.com>
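Conceptually (a simplified sketch, not code from this patch; the
head_idx[], last_idx and n names are hypothetical), in-order reclaim
replaces per-chain walks with a single index update:

    /* Chain-based reclaim: one call per chain head, each walking
     * the desc.next links. */
    for (i = 0; i < n; i++)
            vq_ring_free_chain(vq, head_idx[i]);

    /* In-order reclaim: descriptors were used sequentially, so one
     * call frees the whole run ending at last_idx. */
    vq_ring_free_inorder(vq, last_idx, n);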
diff --git a/drivers/net/virtio/virtio_rxtx.c b/drivers/net/virtio/virtio_rxtx.c
index 92fab2174..0bca29855 100644
--- a/drivers/net/virtio/virtio_rxtx.c
+++ b/drivers/net/virtio/virtio_rxtx.c
@@ -47,6 +47,13 @@ virtio_dev_rx_queue_done(void *rxq, uint16_t offset)
return VIRTQUEUE_NUSED(vq) >= offset;
}
+void
+vq_ring_free_inorder(struct virtqueue *vq, uint16_t desc_idx, uint16_t num)
+{
+ vq->vq_free_cnt += num;
+ vq->vq_desc_tail_idx = desc_idx & (vq->vq_nentries - 1);
+}
+
void
vq_ring_free_chain(struct virtqueue *vq, uint16_t desc_idx)
{
diff --git a/drivers/net/virtio/virtqueue.c b/drivers/net/virtio/virtqueue.c
index a7d0a9cbe..56a77cc71 100644
--- a/drivers/net/virtio/virtqueue.c
+++ b/drivers/net/virtio/virtqueue.c
@@ -74,6 +74,14 @@ virtqueue_rxvq_flush(struct virtqueue *vq)
desc_idx = used_idx;
rte_pktmbuf_free(vq->sw_ring[desc_idx]);
vq->vq_free_cnt++;
+ } else if (hw->use_inorder_rx) {
+ desc_idx = (uint16_t)uep->id;
+ dxp = &vq->vq_descx[desc_idx];
+ if (dxp->cookie != NULL) {
+ rte_pktmbuf_free(dxp->cookie);
+ dxp->cookie = NULL;
+ }
+ vq_ring_free_inorder(vq, desc_idx, 1);
} else {
desc_idx = (uint16_t)uep->id;
dxp = &vq->vq_descx[desc_idx];
diff --git a/drivers/net/virtio/virtqueue.h b/drivers/net/virtio/virtqueue.h
index 14364f356..26518ed98 100644
--- a/drivers/net/virtio/virtqueue.h
+++ b/drivers/net/virtio/virtqueue.h
@@ -306,6 +306,8 @@ virtio_get_queue_type(struct virtio_hw *hw, uint16_t vtpci_queue_idx)
#define VIRTQUEUE_NUSED(vq) ((uint16_t)((vq)->vq_ring.used->idx - (vq)->vq_used_cons_idx))
void vq_ring_free_chain(struct virtqueue *vq, uint16_t desc_idx);
+void vq_ring_free_inorder(struct virtqueue *vq, uint16_t desc_idx,
+ uint16_t num);
static inline void
vq_update_avail_idx(struct virtqueue *vq)
--
2.17.0
* [dpdk-dev] [PATCH v2 5/8] net/virtio: extract common part for IN_ORDER functions
2018-06-25 15:17 [dpdk-dev] [PATCH v2 0/8] support VIRTIO_F_IN_ORDER feature Marvin Liu
` (3 preceding siblings ...)
2018-06-25 15:17 ` [dpdk-dev] [PATCH v2 4/8] net/virtio: free IN_ORDER descriptors before device start Marvin Liu
@ 2018-06-25 15:17 ` Marvin Liu
2018-06-27 6:59 ` Maxime Coquelin
2018-06-25 15:17 ` [dpdk-dev] [PATCH v2 6/8] net/virtio: support IN_ORDER Rx and Tx Marvin Liu
` (3 subsequent siblings)
8 siblings, 1 reply; 22+ messages in thread
From: Marvin Liu @ 2018-06-25 15:17 UTC (permalink / raw)
To: maxime.coquelin, tiwei.bie; +Cc: zhihong.wang, dev, Marvin Liu
The IN_ORDER virtio-user Tx function supports Tx checksum offloading
and TSO, which are also supported by the normal Tx function. So
extract the common part into a separate function.
Signed-off-by: Marvin Liu <yong.liu@intel.com>
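A simplified call-site sketch (illustrative only) of how the normal
Tx path, and in a later patch the in-order Tx path, share the
extracted helper:

    int offload = tx_offload_enabled(vq->hw);

    /* Fills the checksum/TSO fields of the virtio-net header when Tx
     * offloads are enabled; a no-op otherwise. */
    virtqueue_xmit_offload(hdr, cookie, offload);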
diff --git a/drivers/net/virtio/virtio_rxtx.c b/drivers/net/virtio/virtio_rxtx.c
index 0bca29855..e9b1b496e 100644
--- a/drivers/net/virtio/virtio_rxtx.c
+++ b/drivers/net/virtio/virtio_rxtx.c
@@ -246,6 +246,55 @@ tx_offload_enabled(struct virtio_hw *hw)
(var) = (val); \
} while (0)
+static inline void
+virtqueue_xmit_offload(struct virtio_net_hdr *hdr,
+ struct rte_mbuf *cookie,
+ int offload)
+{
+ if (offload) {
+ if (cookie->ol_flags & PKT_TX_TCP_SEG)
+ cookie->ol_flags |= PKT_TX_TCP_CKSUM;
+
+ switch (cookie->ol_flags & PKT_TX_L4_MASK) {
+ case PKT_TX_UDP_CKSUM:
+ hdr->csum_start = cookie->l2_len + cookie->l3_len;
+ hdr->csum_offset = offsetof(struct udp_hdr,
+ dgram_cksum);
+ hdr->flags = VIRTIO_NET_HDR_F_NEEDS_CSUM;
+ break;
+
+ case PKT_TX_TCP_CKSUM:
+ hdr->csum_start = cookie->l2_len + cookie->l3_len;
+ hdr->csum_offset = offsetof(struct tcp_hdr, cksum);
+ hdr->flags = VIRTIO_NET_HDR_F_NEEDS_CSUM;
+ break;
+
+ default:
+ ASSIGN_UNLESS_EQUAL(hdr->csum_start, 0);
+ ASSIGN_UNLESS_EQUAL(hdr->csum_offset, 0);
+ ASSIGN_UNLESS_EQUAL(hdr->flags, 0);
+ break;
+ }
+
+ /* TCP Segmentation Offload */
+ if (cookie->ol_flags & PKT_TX_TCP_SEG) {
+ virtio_tso_fix_cksum(cookie);
+ hdr->gso_type = (cookie->ol_flags & PKT_TX_IPV6) ?
+ VIRTIO_NET_HDR_GSO_TCPV6 :
+ VIRTIO_NET_HDR_GSO_TCPV4;
+ hdr->gso_size = cookie->tso_segsz;
+ hdr->hdr_len =
+ cookie->l2_len +
+ cookie->l3_len +
+ cookie->l4_len;
+ } else {
+ ASSIGN_UNLESS_EQUAL(hdr->gso_type, 0);
+ ASSIGN_UNLESS_EQUAL(hdr->gso_size, 0);
+ ASSIGN_UNLESS_EQUAL(hdr->hdr_len, 0);
+ }
+ }
+}
+
static inline void
virtqueue_enqueue_xmit(struct virtnet_tx *txvq, struct rte_mbuf *cookie,
uint16_t needed, int use_indirect, int can_push)
@@ -315,49 +364,7 @@ virtqueue_enqueue_xmit(struct virtnet_tx *txvq, struct rte_mbuf *cookie,
idx = start_dp[idx].next;
}
- /* Checksum Offload / TSO */
- if (offload) {
- if (cookie->ol_flags & PKT_TX_TCP_SEG)
- cookie->ol_flags |= PKT_TX_TCP_CKSUM;
-
- switch (cookie->ol_flags & PKT_TX_L4_MASK) {
- case PKT_TX_UDP_CKSUM:
- hdr->csum_start = cookie->l2_len + cookie->l3_len;
- hdr->csum_offset = offsetof(struct udp_hdr,
- dgram_cksum);
- hdr->flags = VIRTIO_NET_HDR_F_NEEDS_CSUM;
- break;
-
- case PKT_TX_TCP_CKSUM:
- hdr->csum_start = cookie->l2_len + cookie->l3_len;
- hdr->csum_offset = offsetof(struct tcp_hdr, cksum);
- hdr->flags = VIRTIO_NET_HDR_F_NEEDS_CSUM;
- break;
-
- default:
- ASSIGN_UNLESS_EQUAL(hdr->csum_start, 0);
- ASSIGN_UNLESS_EQUAL(hdr->csum_offset, 0);
- ASSIGN_UNLESS_EQUAL(hdr->flags, 0);
- break;
- }
-
- /* TCP Segmentation Offload */
- if (cookie->ol_flags & PKT_TX_TCP_SEG) {
- virtio_tso_fix_cksum(cookie);
- hdr->gso_type = (cookie->ol_flags & PKT_TX_IPV6) ?
- VIRTIO_NET_HDR_GSO_TCPV6 :
- VIRTIO_NET_HDR_GSO_TCPV4;
- hdr->gso_size = cookie->tso_segsz;
- hdr->hdr_len =
- cookie->l2_len +
- cookie->l3_len +
- cookie->l4_len;
- } else {
- ASSIGN_UNLESS_EQUAL(hdr->gso_type, 0);
- ASSIGN_UNLESS_EQUAL(hdr->gso_size, 0);
- ASSIGN_UNLESS_EQUAL(hdr->hdr_len, 0);
- }
- }
+ virtqueue_xmit_offload(hdr, cookie, offload);
do {
start_dp[idx].addr = VIRTIO_MBUF_DATA_DMA_ADDR(cookie, vq);
@@ -621,6 +628,15 @@ virtio_update_packet_stats(struct virtnet_stats *stats, struct rte_mbuf *mbuf)
}
}
+static inline void
+virtio_rx_stats_updated(struct virtnet_rx *rxvq, struct rte_mbuf *m)
+{
+ VIRTIO_DUMP_PACKET(m, m->data_len);
+
+ rxvq->stats.bytes += m->pkt_len;
+ virtio_update_packet_stats(&rxvq->stats, m);
+}
+
/* Optionally fill offload information in structure */
static int
virtio_rx_offload(struct rte_mbuf *m, struct virtio_net_hdr *hdr)
@@ -773,12 +789,9 @@ virtio_recv_pkts(void *rx_queue, struct rte_mbuf **rx_pkts, uint16_t nb_pkts)
continue;
}
- VIRTIO_DUMP_PACKET(rxm, rxm->data_len);
+ virtio_rx_stats_updated(rxvq, rxm);
rx_pkts[nb_rx++] = rxm;
-
- rxvq->stats.bytes += rxm->pkt_len;
- virtio_update_packet_stats(&rxvq->stats, rxm);
}
rxvq->stats.packets += nb_rx;
--
2.17.0
* [dpdk-dev] [PATCH v2 6/8] net/virtio: support IN_ORDER Rx and Tx
2018-06-25 15:17 [dpdk-dev] [PATCH v2 0/8] support VIRTIO_F_IN_ORDER feature Marvin Liu
` (4 preceding siblings ...)
2018-06-25 15:17 ` [dpdk-dev] [PATCH v2 5/8] net/virtio: extract common part for IN_ORDER functions Marvin Liu
@ 2018-06-25 15:17 ` Marvin Liu
2018-06-27 8:18 ` Maxime Coquelin
2018-06-25 15:17 ` [dpdk-dev] [PATCH v2 7/8] net/virtio: add IN_ORDER Rx/Tx into selection Marvin Liu
` (2 subsequent siblings)
8 siblings, 1 reply; 22+ messages in thread
From: Marvin Liu @ 2018-06-25 15:17 UTC (permalink / raw)
To: maxime.coquelin, tiwei.bie; +Cc: zhihong.wang, dev, Marvin Liu
IN_ORDER Rx function can support merge-able feature. Descriptors
allocation and free will be done in bulk.
Virtio dequeue logic:
dequeue_burst_rx(burst mbufs)
for (each mbuf b) {
if (b need merge) {
merge remained mbufs
add merged mbuf to return mbufs list
} else {
add mbuf to return mbufs list
}
}
if (last mbuf c need merge) {
dequeue_burst_rx(required mbufs)
merge last mbuf c
}
refill_avail_ring_bulk()
update_avail_ring()
return mbufs list
IN_ORDER Tx function can support offloading features. Packets can't be
transmitted by IN_ORDER Tx will be handled by normal Tx.
Virtio enqueue logic:
xmit_cleanup(used descs)
for (each xmit mbuf b) {
if (b can inorder xmit) {
add mbuf b to inorder burst list
continue
} else {
xmit inorder burst list
xmit mbuf b with normal xmit
}
}
if (inorder burst list not empty) {
xmit inorder burst list
}
update_avail_ring()
Signed-off-by: Marvin Liu <yong.liu@intel.com>
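For reference, the test that decides whether a packet can take the
batched in-order route, condensed from the Tx path in the diff below
(the header alignment check is omitted for brevity); packets failing
it fall back to the normal per-packet enqueue:

    /* The virtio-net header must be pushable into the headroom of a
     * single-segment, non-shared, direct mbuf. */
    int can_inorder_xmit =
            (vtpci_with_feature(hw, VIRTIO_F_ANY_LAYOUT) ||
             vtpci_with_feature(hw, VIRTIO_F_VERSION_1)) &&
            rte_mbuf_refcnt_read(txm) == 1 &&
            RTE_MBUF_DIRECT(txm) &&
            txm->nb_segs == 1 &&
            rte_pktmbuf_headroom(txm) >= hdr_size;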
diff --git a/drivers/net/virtio/virtio_ethdev.h b/drivers/net/virtio/virtio_ethdev.h
index bb40064ea..8ab1267b0 100644
--- a/drivers/net/virtio/virtio_ethdev.h
+++ b/drivers/net/virtio/virtio_ethdev.h
@@ -83,9 +83,15 @@ uint16_t virtio_recv_pkts(void *rx_queue, struct rte_mbuf **rx_pkts,
uint16_t virtio_recv_mergeable_pkts(void *rx_queue, struct rte_mbuf **rx_pkts,
uint16_t nb_pkts);
+uint16_t virtio_recv_mergeable_pkts_inorder(void *rx_queue, struct rte_mbuf **rx_pkts,
+ uint16_t nb_pkts);
+
uint16_t virtio_xmit_pkts(void *tx_queue, struct rte_mbuf **tx_pkts,
uint16_t nb_pkts);
+uint16_t virtio_xmit_pkts_inorder(void *tx_queue, struct rte_mbuf **tx_pkts,
+ uint16_t nb_pkts);
+
uint16_t virtio_recv_pkts_vec(void *rx_queue, struct rte_mbuf **rx_pkts,
uint16_t nb_pkts);
diff --git a/drivers/net/virtio/virtio_rxtx.c b/drivers/net/virtio/virtio_rxtx.c
index e9b1b496e..f659200fd 100644
--- a/drivers/net/virtio/virtio_rxtx.c
+++ b/drivers/net/virtio/virtio_rxtx.c
@@ -122,6 +122,44 @@ virtqueue_dequeue_burst_rx(struct virtqueue *vq, struct rte_mbuf **rx_pkts,
return i;
}
+static uint16_t
+virtqueue_dequeue_rx_inorder(struct virtqueue *vq,
+ struct rte_mbuf **rx_pkts,
+ uint32_t *len,
+ uint16_t num)
+{
+ struct vring_used_elem *uep;
+ struct rte_mbuf *cookie;
+ uint16_t used_idx = 0;
+ uint16_t i;
+
+ if (unlikely(num == 0))
+ return 0;
+
+ for (i = 0; i < num; i++) {
+ used_idx = vq->vq_used_cons_idx & (vq->vq_nentries - 1);
+ /* Desc idx same as used idx */
+ uep = &vq->vq_ring.used->ring[used_idx];
+ len[i] = uep->len;
+ cookie = (struct rte_mbuf *)vq->vq_descx[used_idx].cookie;
+
+ if (unlikely(cookie == NULL)) {
+ PMD_DRV_LOG(ERR, "vring descriptor with no mbuf cookie at %u",
+ vq->vq_used_cons_idx);
+ break;
+ }
+
+ rte_prefetch0(cookie);
+ rte_packet_prefetch(rte_pktmbuf_mtod(cookie, void *));
+ rx_pkts[i] = cookie;
+ vq->vq_used_cons_idx++;
+ vq->vq_descx[used_idx].cookie = NULL;
+ }
+
+ vq_ring_free_inorder(vq, used_idx, i);
+ return i;
+}
+
#ifndef DEFAULT_TX_FREE_THRESH
#define DEFAULT_TX_FREE_THRESH 32
#endif
@@ -150,6 +188,83 @@ virtio_xmit_cleanup(struct virtqueue *vq, uint16_t num)
}
}
+/* Cleanup from completed inorder transmits. */
+static void
+virtio_xmit_cleanup_inorder(struct virtqueue *vq, uint16_t num)
+{
+ uint16_t i, used_idx, desc_idx, last_idx;
+ int16_t free_cnt = 0;
+ struct vq_desc_extra *dxp = NULL;
+
+ if (unlikely(num == 0))
+ return;
+
+ for (i = 0; i < num; i++) {
+ struct vring_used_elem *uep;
+
+ used_idx = vq->vq_used_cons_idx & (vq->vq_nentries - 1);
+ uep = &vq->vq_ring.used->ring[used_idx];
+ desc_idx = (uint16_t) uep->id;
+
+ dxp = &vq->vq_descx[desc_idx];
+ vq->vq_used_cons_idx++;
+
+ if (dxp->cookie != NULL) {
+ rte_pktmbuf_free(dxp->cookie);
+ dxp->cookie = NULL;
+ }
+ }
+
+ last_idx = desc_idx + dxp->ndescs - 1;
+ free_cnt = last_idx - vq->vq_desc_tail_idx;
+ if (free_cnt <= 0)
+ free_cnt += vq->vq_nentries;
+
+ vq_ring_free_inorder(vq, last_idx, free_cnt);
+}
+
+static inline int
+virtqueue_enqueue_recv_refill_inorder(struct virtqueue *vq,
+ struct rte_mbuf **cookies,
+ uint16_t num)
+{
+ struct vq_desc_extra *dxp;
+ struct virtio_hw *hw = vq->hw;
+ struct vring_desc *start_dp;
+ uint16_t head_idx, idx, i = 0;
+
+ if (unlikely(vq->vq_free_cnt == 0))
+ return -ENOSPC;
+ if (unlikely(vq->vq_free_cnt < num))
+ return -EMSGSIZE;
+
+ head_idx = vq->vq_desc_head_idx & (vq->vq_nentries - 1);
+ start_dp = vq->vq_ring.desc;
+
+ while (i < num) {
+ idx = head_idx & (vq->vq_nentries - 1);
+ dxp = &vq->vq_descx[idx];
+ dxp->cookie = (void *)cookies[i];
+ dxp->ndescs = 1;
+
+ start_dp[idx].addr =
+ VIRTIO_MBUF_ADDR(cookies[i], vq) +
+ RTE_PKTMBUF_HEADROOM - hw->vtnet_hdr_size;
+ start_dp[idx].len =
+ cookies[i]->buf_len -
+ RTE_PKTMBUF_HEADROOM +
+ hw->vtnet_hdr_size;
+ start_dp[idx].flags = VRING_DESC_F_WRITE;
+
+ vq_update_avail_ring(vq, idx);
+ head_idx++;
+ i++;
+ }
+
+ vq->vq_desc_head_idx += num;
+ vq->vq_free_cnt = (uint16_t)(vq->vq_free_cnt - num);
+ return 0;
+}
static inline int
virtqueue_enqueue_recv_refill(struct virtqueue *vq, struct rte_mbuf *cookie)
@@ -295,9 +410,65 @@ virtqueue_xmit_offload(struct virtio_net_hdr *hdr,
}
}
+static inline void
+virtqueue_enqueue_xmit_inorder(struct virtnet_tx *txvq,
+ struct rte_mbuf **cookies,
+ uint16_t num)
+{
+ struct vq_desc_extra *dxp;
+ struct virtqueue *vq = txvq->vq;
+ struct vring_desc *start_dp;
+ struct virtio_net_hdr *hdr;
+ uint16_t idx;
+ uint16_t head_size = vq->hw->vtnet_hdr_size;
+ int offload;
+ uint16_t i = 0;
+
+ idx = vq->vq_desc_head_idx;
+ start_dp = vq->vq_ring.desc;
+
+ offload = tx_offload_enabled(vq->hw);
+
+ while (i < num) {
+ idx = idx & (vq->vq_nentries - 1);
+ dxp = &vq->vq_descx[idx];
+ dxp->cookie = (void *)cookies[i];
+ dxp->ndescs = 1;
+
+ hdr = (struct virtio_net_hdr *)
+ rte_pktmbuf_prepend(cookies[i], head_size);
+ cookies[i]->pkt_len -= head_size;
+
+ /* if offload disabled, it is not zeroed below, do it now */
+ if (offload == 0) {
+ ASSIGN_UNLESS_EQUAL(hdr->csum_start, 0);
+ ASSIGN_UNLESS_EQUAL(hdr->csum_offset, 0);
+ ASSIGN_UNLESS_EQUAL(hdr->flags, 0);
+ ASSIGN_UNLESS_EQUAL(hdr->gso_type, 0);
+ ASSIGN_UNLESS_EQUAL(hdr->gso_size, 0);
+ ASSIGN_UNLESS_EQUAL(hdr->hdr_len, 0);
+ }
+
+ virtqueue_xmit_offload(hdr, cookies[i], offload);
+
+ start_dp[idx].addr = VIRTIO_MBUF_DATA_DMA_ADDR(cookies[i], vq);
+ start_dp[idx].len = cookies[i]->data_len;
+ start_dp[idx].flags = 0;
+
+ vq_update_avail_ring(vq, idx);
+
+ idx++;
+ i++;
+ };
+
+ vq->vq_free_cnt = (uint16_t)(vq->vq_free_cnt - num);
+ vq->vq_desc_head_idx = idx & (vq->vq_nentries - 1);
+}
+
static inline void
virtqueue_enqueue_xmit(struct virtnet_tx *txvq, struct rte_mbuf *cookie,
- uint16_t needed, int use_indirect, int can_push)
+ uint16_t needed, int use_indirect, int can_push,
+ int in_order)
{
struct virtio_tx_region *txr = txvq->virtio_net_hdr_mz->addr;
struct vq_desc_extra *dxp;
@@ -310,6 +481,7 @@ virtqueue_enqueue_xmit(struct virtnet_tx *txvq, struct rte_mbuf *cookie,
int offload;
offload = tx_offload_enabled(vq->hw);
+
head_idx = vq->vq_desc_head_idx;
idx = head_idx;
dxp = &vq->vq_descx[idx];
@@ -326,6 +498,7 @@ virtqueue_enqueue_xmit(struct virtnet_tx *txvq, struct rte_mbuf *cookie,
* which is wrong. Below subtract restores correct pkt size.
*/
cookie->pkt_len -= head_size;
+
/* if offload disabled, it is not zeroed below, do it now */
if (offload == 0) {
ASSIGN_UNLESS_EQUAL(hdr->csum_start, 0);
@@ -376,11 +549,15 @@ virtqueue_enqueue_xmit(struct virtnet_tx *txvq, struct rte_mbuf *cookie,
if (use_indirect)
idx = vq->vq_ring.desc[head_idx].next;
- vq->vq_desc_head_idx = idx;
- if (vq->vq_desc_head_idx == VQ_RING_DESC_CHAIN_END)
- vq->vq_desc_tail_idx = idx;
vq->vq_free_cnt = (uint16_t)(vq->vq_free_cnt - needed);
+
+ vq->vq_desc_head_idx = idx;
vq_update_avail_ring(vq, head_idx);
+
+ if (!in_order) {
+ if (vq->vq_desc_head_idx == VQ_RING_DESC_CHAIN_END)
+ vq->vq_desc_tail_idx = idx;
+ }
}
void
@@ -435,7 +612,7 @@ virtio_dev_rx_queue_setup_finish(struct rte_eth_dev *dev, uint16_t queue_idx)
struct virtnet_rx *rxvq = &vq->rxq;
struct rte_mbuf *m;
uint16_t desc_idx;
- int error, nbufs;
+ int error, nbufs, i;
PMD_INIT_FUNC_TRACE();
@@ -465,6 +642,24 @@ virtio_dev_rx_queue_setup_finish(struct rte_eth_dev *dev, uint16_t queue_idx)
virtio_rxq_rearm_vec(rxvq);
nbufs += RTE_VIRTIO_VPMD_RX_REARM_THRESH;
}
+ } else if (hw->use_inorder_rx) {
+ if ((!virtqueue_full(vq))) {
+ uint16_t free_cnt = vq->vq_free_cnt;
+ struct rte_mbuf *pkts[free_cnt];
+
+ if (!rte_pktmbuf_alloc_bulk(rxvq->mpool, pkts, free_cnt)) {
+ error = virtqueue_enqueue_recv_refill_inorder(vq,
+ pkts,
+ free_cnt);
+ if (unlikely(error)) {
+ for (i = 0; i < free_cnt; i++)
+ rte_pktmbuf_free(pkts[i]);
+ }
+ }
+
+ nbufs += free_cnt;
+ vq_update_avail_idx(vq);
+ }
} else {
while (!virtqueue_full(vq)) {
m = rte_mbuf_raw_alloc(rxvq->mpool);
@@ -574,6 +769,8 @@ virtio_dev_tx_queue_setup_finish(struct rte_eth_dev *dev,
for (desc_idx = mid_idx; desc_idx < vq->vq_nentries;
desc_idx++)
vq->vq_ring.avail->ring[desc_idx] = desc_idx;
+ } else if (hw->use_inorder_tx) {
+ vq->vq_ring.desc[vq->vq_nentries - 1].next = 0;
}
VIRTQUEUE_DUMP(vq);
@@ -590,6 +787,19 @@ virtio_discard_rxbuf(struct virtqueue *vq, struct rte_mbuf *m)
* successful since it was just dequeued.
*/
error = virtqueue_enqueue_recv_refill(vq, m);
+
+ if (unlikely(error)) {
+ RTE_LOG(ERR, PMD, "cannot requeue discarded mbuf");
+ rte_pktmbuf_free(m);
+ }
+}
+
+static void
+virtio_discard_rxbuf_inorder(struct virtqueue *vq, struct rte_mbuf *m)
+{
+ int error;
+
+ error = virtqueue_enqueue_recv_refill_inorder(vq, &m, 1);
if (unlikely(error)) {
RTE_LOG(ERR, PMD, "cannot requeue discarded mbuf");
rte_pktmbuf_free(m);
@@ -826,6 +1036,191 @@ virtio_recv_pkts(void *rx_queue, struct rte_mbuf **rx_pkts, uint16_t nb_pkts)
return nb_rx;
}
+uint16_t
+virtio_recv_mergeable_pkts_inorder(void *rx_queue,
+ struct rte_mbuf **rx_pkts,
+ uint16_t nb_pkts)
+{
+ struct virtnet_rx *rxvq = rx_queue;
+ struct virtqueue *vq = rxvq->vq;
+ struct virtio_hw *hw = vq->hw;
+ struct rte_mbuf *rxm;
+ struct rte_mbuf *prev;
+ uint16_t nb_used, num, nb_rx;
+ uint32_t len[VIRTIO_MBUF_BURST_SZ];
+ struct rte_mbuf *rcv_pkts[VIRTIO_MBUF_BURST_SZ];
+ int error;
+ uint32_t i, nb_enqueued;
+ uint32_t seg_num;
+ uint32_t seg_res;
+ uint32_t hdr_size;
+ int offload;
+
+ nb_rx = 0;
+ if (unlikely(hw->started == 0))
+ return nb_rx;
+
+ nb_used = VIRTQUEUE_NUSED(vq);
+ nb_used = RTE_MIN(nb_used, nb_pkts);
+ nb_used = RTE_MIN(nb_used, VIRTIO_MBUF_BURST_SZ);
+
+ virtio_rmb();
+
+ PMD_RX_LOG(DEBUG, "used:%d", nb_used);
+
+ nb_enqueued = 0;
+ seg_num = 1;
+ seg_res = 0;
+ hdr_size = hw->vtnet_hdr_size;
+ offload = rx_offload_enabled(hw);
+
+ num = virtqueue_dequeue_rx_inorder(vq, rcv_pkts, len, nb_used);
+
+ for (i = 0; i < num; i++) {
+ struct virtio_net_hdr_mrg_rxbuf *header;
+
+ PMD_RX_LOG(DEBUG, "dequeue:%d", num);
+ PMD_RX_LOG(DEBUG, "packet len:%d", len[i]);
+
+ rxm = rcv_pkts[i];
+
+ if (unlikely(len[i] < hdr_size + ETHER_HDR_LEN)) {
+ PMD_RX_LOG(ERR, "Packet drop");
+ nb_enqueued++;
+ virtio_discard_rxbuf_inorder(vq, rxm);
+ rxvq->stats.errors++;
+ continue;
+ }
+
+ header = (struct virtio_net_hdr_mrg_rxbuf *)((char *)rxm->buf_addr +
+ RTE_PKTMBUF_HEADROOM - hdr_size);
+ seg_num = header->num_buffers;
+
+ if (seg_num == 0)
+ seg_num = 1;
+
+ rxm->data_off = RTE_PKTMBUF_HEADROOM;
+ rxm->nb_segs = seg_num;
+ rxm->ol_flags = 0;
+ rxm->vlan_tci = 0;
+ rxm->pkt_len = (uint32_t)(len[i] - hdr_size);
+ rxm->data_len = (uint16_t)(len[i] - hdr_size);
+
+ rxm->port = rxvq->port_id;
+
+ rx_pkts[nb_rx] = rxm;
+ prev = rxm;
+
+ if (offload && virtio_rx_offload(rxm, &header->hdr) < 0) {
+ virtio_discard_rxbuf_inorder(vq, rxm);
+ rxvq->stats.errors++;
+ continue;
+ }
+
+ if (hw->vlan_strip)
+ rte_vlan_strip(rx_pkts[nb_rx]);
+
+ seg_res = seg_num - 1;
+
+ /* Merge remaining segments */
+ while (seg_res != 0 && i < num) {
+ rxm = rcv_pkts[i];
+ rxm->data_off = RTE_PKTMBUF_HEADROOM - hdr_size;
+ rxm->pkt_len = (uint32_t)(len[i]);
+ rxm->data_len = (uint16_t)(len[i]);
+
+ rx_pkts[nb_rx]->pkt_len += (uint32_t)(len[i]);
+ rx_pkts[nb_rx]->data_len += (uint16_t)(len[i]);
+
+ i++;
+
+ if (prev)
+ prev->next = rxm;
+
+ prev = rxm;
+ seg_res -= 1;
+ }
+
+ if (!seg_res) {
+ virtio_rx_stats_updated(rxvq, rx_pkts[nb_rx]);
+ nb_rx++;
+ }
+ }
+
+ /* Last packet still need merge segments */
+ while (seg_res != 0) {
+ uint16_t rcv_cnt = RTE_MIN((uint16_t)seg_res,
+ VIRTIO_MBUF_BURST_SZ);
+
+ prev = rcv_pkts[nb_rx];
+ if (likely(VIRTQUEUE_NUSED(vq) >= rcv_cnt)) {
+ num = virtqueue_dequeue_rx_inorder(vq, rcv_pkts, len,
+ rcv_cnt);
+ uint16_t extra_idx = 0;
+
+ rcv_cnt = num;
+ while (extra_idx < rcv_cnt) {
+ rxm = rcv_pkts[extra_idx];
+ rxm->data_off = RTE_PKTMBUF_HEADROOM - hdr_size;
+ rxm->pkt_len = (uint32_t)(len[extra_idx]);
+ rxm->data_len = (uint16_t)(len[extra_idx]);
+ prev->next = rxm;
+ prev = rxm;
+ rx_pkts[nb_rx]->pkt_len += len[extra_idx];
+ rx_pkts[nb_rx]->data_len += len[extra_idx];
+ extra_idx += 1;
+ };
+ seg_res -= rcv_cnt;
+
+ if (!seg_res) {
+ virtio_rx_stats_updated(rxvq, rx_pkts[nb_rx]);
+ nb_rx++;
+ }
+ } else {
+ PMD_RX_LOG(ERR,
+ "Not enough segments for packet.");
+ virtio_discard_rxbuf_inorder(vq, prev);
+ rxvq->stats.errors++;
+ break;
+ }
+ }
+
+ rxvq->stats.packets += nb_rx;
+
+ /* Allocate new mbuf for the used descriptor */
+
+ if (likely(!virtqueue_full(vq))) {
+ /* free_cnt may include mrg descs */
+ uint16_t free_cnt = vq->vq_free_cnt;
+ struct rte_mbuf *new_pkts[free_cnt];
+
+ if (!rte_pktmbuf_alloc_bulk(rxvq->mpool, new_pkts, free_cnt)) {
+ error = virtqueue_enqueue_recv_refill_inorder(vq, new_pkts,
+ free_cnt);
+ if (unlikely(error)) {
+ for (i = 0; i < free_cnt; i++)
+ rte_pktmbuf_free(new_pkts[i]);
+ }
+ nb_enqueued += free_cnt;
+ } else {
+ struct rte_eth_dev *dev
+ = &rte_eth_devices[rxvq->port_id];
+ dev->data->rx_mbuf_alloc_failed += free_cnt;
+ }
+ }
+
+ if (likely(nb_enqueued)) {
+ vq_update_avail_idx(vq);
+
+ if (unlikely(virtqueue_kick_prepare(vq))) {
+ virtqueue_notify(vq);
+ PMD_RX_LOG(DEBUG, "Notified");
+ }
+ }
+
+ return nb_rx;
+}
+
uint16_t
virtio_recv_mergeable_pkts(void *rx_queue,
struct rte_mbuf **rx_pkts,
@@ -1073,7 +1468,8 @@ virtio_xmit_pkts(void *tx_queue, struct rte_mbuf **tx_pkts, uint16_t nb_pkts)
}
/* Enqueue Packet buffers */
- virtqueue_enqueue_xmit(txvq, txm, slots, use_indirect, can_push);
+ virtqueue_enqueue_xmit(txvq, txm, slots, use_indirect,
+ can_push, 0);
txvq->stats.bytes += txm->pkt_len;
virtio_update_packet_stats(&txvq->stats, txm);
@@ -1092,3 +1488,116 @@ virtio_xmit_pkts(void *tx_queue, struct rte_mbuf **tx_pkts, uint16_t nb_pkts)
return nb_tx;
}
+
+uint16_t
+virtio_xmit_pkts_inorder(void *tx_queue,
+ struct rte_mbuf **tx_pkts,
+ uint16_t nb_pkts)
+{
+ struct virtnet_tx *txvq = tx_queue;
+ struct virtqueue *vq = txvq->vq;
+ struct virtio_hw *hw = vq->hw;
+ uint16_t hdr_size = hw->vtnet_hdr_size;
+ uint16_t nb_used, nb_avail, nb_tx = 0, nb_inorder_pkts = 0;
+ struct rte_mbuf *inorder_pkts[nb_pkts];
+ int error;
+
+ if (unlikely(hw->started == 0 && tx_pkts != hw->inject_pkts))
+ return nb_tx;
+
+ if (unlikely(nb_pkts < 1))
+ return nb_pkts;
+
+ VIRTQUEUE_DUMP(vq);
+ PMD_TX_LOG(DEBUG, "%d packets to xmit", nb_pkts);
+ nb_used = VIRTQUEUE_NUSED(vq);
+
+ virtio_rmb();
+ if (likely(nb_used > vq->vq_nentries - vq->vq_free_thresh))
+ virtio_xmit_cleanup_inorder(vq, nb_used);
+
+ if (unlikely(!vq->vq_free_cnt))
+ virtio_xmit_cleanup_inorder(vq, nb_used);
+
+ nb_avail = RTE_MIN(vq->vq_free_cnt, nb_pkts);
+
+ for (nb_tx = 0; nb_tx < nb_avail; nb_tx++) {
+ struct rte_mbuf *txm = tx_pkts[nb_tx];
+ int slots, need;
+
+ /* Do VLAN tag insertion */
+ if (unlikely(txm->ol_flags & PKT_TX_VLAN_PKT)) {
+ error = rte_vlan_insert(&txm);
+ if (unlikely(error)) {
+ rte_pktmbuf_free(txm);
+ continue;
+ }
+ }
+
+ /* optimize ring usage */
+ if ((vtpci_with_feature(hw, VIRTIO_F_ANY_LAYOUT) ||
+ vtpci_with_feature(hw, VIRTIO_F_VERSION_1)) &&
+ rte_mbuf_refcnt_read(txm) == 1 &&
+ RTE_MBUF_DIRECT(txm) &&
+ txm->nb_segs == 1 &&
+ rte_pktmbuf_headroom(txm) >= hdr_size &&
+ rte_is_aligned(rte_pktmbuf_mtod(txm, char *),
+ __alignof__(struct virtio_net_hdr_mrg_rxbuf))) {
+ inorder_pkts[nb_inorder_pkts] = txm;
+ nb_inorder_pkts++;
+
+ txvq->stats.bytes += txm->pkt_len;
+ virtio_update_packet_stats(&txvq->stats, txm);
+ continue;
+ }
+
+ if (nb_inorder_pkts) {
+ virtqueue_enqueue_xmit_inorder(txvq, inorder_pkts,
+ nb_inorder_pkts);
+ nb_inorder_pkts = 0;
+ }
+
+ slots = txm->nb_segs + 1;
+ need = slots - vq->vq_free_cnt;
+ if (unlikely(need > 0)) {
+ nb_used = VIRTQUEUE_NUSED(vq);
+ virtio_rmb();
+ need = RTE_MIN(need, (int)nb_used);
+
+ virtio_xmit_cleanup_inorder(vq, need);
+
+ need = slots - vq->vq_free_cnt;
+
+ if (unlikely(need > 0)) {
+ PMD_TX_LOG(ERR,
+ "No free tx descriptors to transmit");
+ break;
+ }
+ }
+ /* Enqueue Packet buffers */
+ virtqueue_enqueue_xmit(txvq, txm, slots, 0, 0, 1);
+
+ txvq->stats.bytes += txm->pkt_len;
+ virtio_update_packet_stats(&txvq->stats, txm);
+ }
+
+ /* Transmit all inorder packets */
+ if (nb_inorder_pkts)
+ virtqueue_enqueue_xmit_inorder(txvq, inorder_pkts,
+ nb_inorder_pkts);
+
+ txvq->stats.packets += nb_tx;
+
+ if (likely(nb_tx)) {
+ vq_update_avail_idx(vq);
+
+ if (unlikely(virtqueue_kick_prepare(vq))) {
+ virtqueue_notify(vq);
+ PMD_TX_LOG(DEBUG, "Notified backend after xmit");
+ }
+ }
+
+ VIRTQUEUE_DUMP(vq);
+
+ return nb_tx;
+}
--
2.17.0
* Re: [dpdk-dev] [PATCH v2 6/8] net/virtio: support IN_ORDER Rx and Tx
2018-06-25 15:17 ` [dpdk-dev] [PATCH v2 6/8] net/virtio: support IN_ORDER Rx and Tx Marvin Liu
@ 2018-06-27 8:18 ` Maxime Coquelin
2018-06-27 14:27 ` Liu, Yong
0 siblings, 1 reply; 22+ messages in thread
From: Maxime Coquelin @ 2018-06-27 8:18 UTC (permalink / raw)
To: Marvin Liu, tiwei.bie; +Cc: zhihong.wang, dev
On 06/25/2018 05:17 PM, Marvin Liu wrote:
> IN_ORDER Rx function can support merge-able feature. Descriptors
My understanding of the code is that IN_ORDER Rx function only supports
mergeable feature (and it seems to be confirmed by patch 7).
So maybe change the text to:
"IN_ORDER Rx function requires merge-able feature."
> allocation and free will be done in bulk.
>
> Virtio dequeue logic:
> dequeue_burst_rx(burst mbufs)
> for (each mbuf b) {
> if (b need merge) {
> merge remained mbufs
> add merged mbuf to return mbufs list
> } else {
> add mbuf to return mbufs list
> }
> }
> if (last mbuf c need merge) {
> dequeue_burst_rx(required mbufs)
> merge last mbuf c
> }
> refill_avail_ring_bulk()
> update_avail_ring()
> return mbufs list
>
> IN_ORDER Tx function can support offloading features. Packets can't be
s/can support/supports/
> transmitted by IN_ORDER Tx will be handled by normal Tx.
Can you clarify why some packets can't be transmitted by IN_ORDER Tx?
> Virtio enqueue logic:
> xmit_cleanup(used descs)
> for (each xmit mbuf b) {
> if (b can inorder xmit) {
> add mbuf b to inorder burst list
> continue
> } else {
> xmit inorder burst list
> xmit mbuf b with normal xmit
> }
> }
> if (inorder burst list not empty) {
> xmit inorder burst list
> }
> update_avail_ring()
>
> Signed-off-by: Marvin Liu <yong.liu@intel.com>
>
Other than above clarification needed, the code looks good to me.
Thanks,
Maxime
* Re: [dpdk-dev] [PATCH v2 6/8] net/virtio: support IN_ORDER Rx and Tx
2018-06-27 8:18 ` Maxime Coquelin
@ 2018-06-27 14:27 ` Liu, Yong
2018-06-27 14:32 ` Maxime Coquelin
0 siblings, 1 reply; 22+ messages in thread
From: Liu, Yong @ 2018-06-27 14:27 UTC (permalink / raw)
To: Maxime Coquelin, Bie, Tiwei; +Cc: Wang, Zhihong, dev
> -----Original Message-----
> From: Maxime Coquelin [mailto:maxime.coquelin@redhat.com]
> Sent: Wednesday, June 27, 2018 4:18 PM
> To: Liu, Yong <yong.liu@intel.com>; Bie, Tiwei <tiwei.bie@intel.com>
> Cc: Wang, Zhihong <zhihong.wang@intel.com>; dev@dpdk.org
> Subject: Re: [PATCH v2 6/8] net/virtio: support IN_ORDER Rx and Tx
>
>
>
> On 06/25/2018 05:17 PM, Marvin Liu wrote:
> > IN_ORDER Rx function can support merge-able feature. Descriptors
> My understanding of the code is that IN_ORDER Rx function only supports
> mergeable feature (and it seems to be confirmed by patch 7).
>
> So maybe change the text to:
> "IN_ORDER Rx function requires merge-able feature."
Thanks for pointing this out; the Rx function choice depends on the merge-able feature. Will change the comment in the next version.
>
> > allocation and free will be done in bulk.
> >
> > Virtio dequeue logic:
> > dequeue_burst_rx(burst mbufs)
> > for (each mbuf b) {
> > if (b need merge) {
> > merge remained mbufs
> > add merged mbuf to return mbufs list
> > } else {
> > add mbuf to return mbufs list
> > }
> > }
> > if (last mbuf c need merge) {
> > dequeue_burst_rx(required mbufs)
> > merge last mbuf c
> > }
> > refill_avail_ring_bulk()
> > update_avail_ring()
> > return mbufs list
> >
> > IN_ORDER Tx function can support offloading features. Packets can't be
> s/can support/supports/
>
> > transmitted by IN_ORDER Tx will be handled by normal Tx.
>
> Can you clarify why some packets can't be transmitted by IN_ORDER Tx?
My previous description may not have been clear: all Tx descriptors are written to the avail ring in order.
To keep the code clean, packets that match the "can_push" case are handled by the simple in-order xmit function.
Packets that don't match "can_push" are handled one by one by the existing xmit function.
Thanks,
Marvin
>
> > Virtio enqueue logic:
> > xmit_cleanup(used descs)
> > for (each xmit mbuf b) {
> > if (b can inorder xmit) {
> > add mbuf b to inorder burst list
> > continue
> > } else {
> > xmit inorder burst list
> > xmit mbuf b with normal xmit
> > }
> > }
> > if (inorder burst list not empty) {
> > xmit inorder burst list
> > }
> > update_avail_ring()
> >
> > Signed-off-by: Marvin Liu <yong.liu@intel.com>
> >
>
>
> Other than above clarification needed, the code looks good to me.
>
> Thanks,
> Maxime
* Re: [dpdk-dev] [PATCH v2 6/8] net/virtio: support IN_ORDER Rx and Tx
2018-06-27 14:27 ` Liu, Yong
@ 2018-06-27 14:32 ` Maxime Coquelin
0 siblings, 0 replies; 22+ messages in thread
From: Maxime Coquelin @ 2018-06-27 14:32 UTC (permalink / raw)
To: Liu, Yong, Bie, Tiwei; +Cc: Wang, Zhihong, dev
On 06/27/2018 04:27 PM, Liu, Yong wrote:
>
>
>> -----Original Message-----
>> From: Maxime Coquelin [mailto:maxime.coquelin@redhat.com]
>> Sent: Wednesday, June 27, 2018 4:18 PM
>> To: Liu, Yong <yong.liu@intel.com>; Bie, Tiwei <tiwei.bie@intel.com>
>> Cc: Wang, Zhihong <zhihong.wang@intel.com>; dev@dpdk.org
>> Subject: Re: [PATCH v2 6/8] net/virtio: support IN_ORDER Rx and Tx
>>
>>
>>
>> On 06/25/2018 05:17 PM, Marvin Liu wrote:
>>> IN_ORDER Rx function can support merge-able feature. Descriptors
>> My understanding of the code is that IN_ORDER Rx function only supports
>> mergeable feature (and it seems to be confirmed by patch 7).
>>
>> So maybe change the text to:
>> "IN_ORDER Rx function requires merge-able feature."
>
> Thanks for pointing this out; the Rx function choice depends on the merge-able feature. Will change the comment in the next version.
>
>>
>>> allocation and free will be done in bulk.
>>>
>>> Virtio dequeue logic:
>>> dequeue_burst_rx(burst mbufs)
>>> for (each mbuf b) {
>>> if (b need merge) {
>>> merge remained mbufs
>>> add merged mbuf to return mbufs list
>>> } else {
>>> add mbuf to return mbufs list
>>> }
>>> }
>>> if (last mbuf c need merge) {
>>> dequeue_burst_rx(required mbufs)
>>> merge last mbuf c
>>> }
>>> refill_avail_ring_bulk()
>>> update_avail_ring()
>>> return mbufs list
>>>
>>> IN_ORDER Tx function can support offloading features. Packets can't be
>> s/can support/supports/
>>
>>> transmitted by IN_ORDER Tx will be handled by normal Tx.
>>
>> Can you clarify why some packets can't be transmitted by IN_ORDER Tx?
>
> My previous description may not have been clear: all Tx descriptors are written to the avail ring in order.
> To keep the code clean, packets that match the "can_push" case are handled by the simple in-order xmit function.
> Packets that don't match "can_push" are handled one by one by the existing xmit function.
Thanks! Sounds good with above clarification.
Maxime
> Thanks,
> Marvin
>
>>
>>> Virtio enqueue logic:
>>> xmit_cleanup(used descs)
>>> for (each xmit mbuf b) {
>>> if (b can inorder xmit) {
>>> add mbuf b to inorder burst list
>>> continue
>>> } else {
>>> xmit inorder burst list
>>> xmit mbuf b with normal xmit
>>> }
>>> }
>>> if (inorder burst list not empty) {
>>> xmit inorder burst list
>>> }
>>> update_avail_ring()
>>>
>>> Signed-off-by: Marvin Liu <yong.liu@intel.com>
>>>
>>
>>
>> Other than above clarification needed, the code looks good to me.
>>
>> Thanks,
>> Maxime
* [dpdk-dev] [PATCH v2 7/8] net/virtio: add IN_ORDER Rx/Tx into selection
2018-06-25 15:17 [dpdk-dev] [PATCH v2 0/8] support VIRTIO_F_IN_ORDER feature Marvin Liu
` (5 preceding siblings ...)
2018-06-25 15:17 ` [dpdk-dev] [PATCH v2 6/8] net/virtio: support IN_ORDER Rx and Tx Marvin Liu
@ 2018-06-25 15:17 ` Marvin Liu
2018-06-27 8:19 ` Maxime Coquelin
2018-06-25 15:17 ` [dpdk-dev] [PATCH v2 8/8] net/virtio: announce VIRTIO_F_IN_ORDER support Marvin Liu
2018-06-26 7:56 ` [dpdk-dev] [PATCH v2 0/8] support VIRTIO_F_IN_ORDER feature Maxime Coquelin
8 siblings, 1 reply; 22+ messages in thread
From: Marvin Liu @ 2018-06-25 15:17 UTC (permalink / raw)
To: maxime.coquelin, tiwei.bie; +Cc: zhihong.wang, dev, Marvin Liu
After the IN_ORDER Rx/Tx paths are added, the Rx/Tx path selection
logic needs to be updated.

Rx path selection logic: if IN_ORDER is disabled, the normal Rx path
is selected. If IN_ORDER is enabled and both Rx offloads and the
merge-able feature are disabled, the simple Rx path is selected.
Otherwise, the IN_ORDER Rx path is selected.

Tx path selection logic: if IN_ORDER is disabled, the normal Tx path
is selected. If IN_ORDER is enabled and the merge-able feature is
disabled, the simple Tx path is selected. Otherwise, the IN_ORDER Tx
path is selected.
Signed-off-by: Marvin Liu <yong.liu@intel.com>
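Summarized from the description above:

    Rx: IN_ORDER off                           -> normal / mergeable path
        IN_ORDER on, no Rx csum, no MRG_RXBUF  -> simple path
        IN_ORDER on, otherwise                 -> IN_ORDER path

    Tx: IN_ORDER off                           -> normal path
        IN_ORDER on, MRG_RXBUF off             -> simple path
        IN_ORDER on, MRG_RXBUF on              -> IN_ORDER path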
diff --git a/drivers/net/virtio/virtio_ethdev.c b/drivers/net/virtio/virtio_ethdev.c
index df50a571a..f48a84a11 100644
--- a/drivers/net/virtio/virtio_ethdev.c
+++ b/drivers/net/virtio/virtio_ethdev.c
@@ -1320,6 +1320,11 @@ set_rxtx_funcs(struct rte_eth_dev *eth_dev)
PMD_INIT_LOG(INFO, "virtio: using simple Rx path on port %u",
eth_dev->data->port_id);
eth_dev->rx_pkt_burst = virtio_recv_pkts_vec;
+ } else if (hw->use_inorder_rx) {
+ PMD_INIT_LOG(INFO,
+ "virtio: using inorder mergeable buffer Rx path on port %u",
+ eth_dev->data->port_id);
+ eth_dev->rx_pkt_burst = &virtio_recv_mergeable_pkts_inorder;
} else if (vtpci_with_feature(hw, VIRTIO_NET_F_MRG_RXBUF)) {
PMD_INIT_LOG(INFO,
"virtio: using mergeable buffer Rx path on port %u",
@@ -1335,6 +1340,10 @@ set_rxtx_funcs(struct rte_eth_dev *eth_dev)
PMD_INIT_LOG(INFO, "virtio: using simple Tx path on port %u",
eth_dev->data->port_id);
eth_dev->tx_pkt_burst = virtio_xmit_pkts_simple;
+ } else if (hw->use_inorder_tx) {
+ PMD_INIT_LOG(INFO, "virtio: using inorder Tx path on port %u",
+ eth_dev->data->port_id);
+ eth_dev->tx_pkt_burst = virtio_xmit_pkts_inorder;
} else {
PMD_INIT_LOG(INFO, "virtio: using standard Tx path on port %u",
eth_dev->data->port_id);
@@ -1871,23 +1880,25 @@ virtio_dev_configure(struct rte_eth_dev *dev)
rte_spinlock_init(&hw->state_lock);
- hw->use_simple_rx = 1;
- hw->use_simple_tx = 1;
-
#if defined RTE_ARCH_ARM64 || defined RTE_ARCH_ARM
if (!rte_cpu_get_flag_enabled(RTE_CPUFLAG_NEON)) {
hw->use_simple_rx = 0;
hw->use_simple_tx = 0;
}
#endif
- if (vtpci_with_feature(hw, VIRTIO_NET_F_MRG_RXBUF)) {
- hw->use_simple_rx = 0;
- hw->use_simple_tx = 0;
- }
+ if (vtpci_with_feature(hw, VIRTIO_F_IN_ORDER)) {
+ if (vtpci_with_feature(hw, VIRTIO_NET_F_MRG_RXBUF)) {
+ hw->use_inorder_rx = 1;
+ hw->use_inorder_tx = 1;
+ } else {
+ hw->use_simple_rx = 1;
+ hw->use_simple_tx = 1;
+ }
- if (rx_offloads & (DEV_RX_OFFLOAD_UDP_CKSUM |
- DEV_RX_OFFLOAD_TCP_CKSUM))
- hw->use_simple_rx = 0;
+ if (rx_offloads & (DEV_RX_OFFLOAD_UDP_CKSUM |
+ DEV_RX_OFFLOAD_TCP_CKSUM))
+ hw->use_inorder_rx = 1;
+ }
return 0;
}
--
2.17.0
* Re: [dpdk-dev] [PATCH v2 7/8] net/virtio: add IN_ORDER Rx/Tx into selection
2018-06-25 15:17 ` [dpdk-dev] [PATCH v2 7/8] net/virtio: add IN_ORDER Rx/Tx into selection Marvin Liu
@ 2018-06-27 8:19 ` Maxime Coquelin
2018-06-28 14:11 ` Liu, Yong
0 siblings, 1 reply; 22+ messages in thread
From: Maxime Coquelin @ 2018-06-27 8:19 UTC (permalink / raw)
To: Marvin Liu, tiwei.bie; +Cc: zhihong.wang, dev
On 06/25/2018 05:17 PM, Marvin Liu wrote:
> After the IN_ORDER Rx/Tx paths are added, the Rx/Tx path selection
> logic needs to be updated.
>
> Rx path selection logic: if IN_ORDER is disabled, the normal Rx path
> is selected. If IN_ORDER is enabled and both Rx offloads and the
> merge-able feature are disabled, the simple Rx path is selected.
> Otherwise, the IN_ORDER Rx path is selected.
>
> Tx path selection logic: if IN_ORDER is disabled, the normal Tx path
> is selected. If IN_ORDER is enabled and the merge-able feature is
> disabled, the simple Tx path is selected. Otherwise, the IN_ORDER Tx
> path is selected.
>
> Signed-off-by: Marvin Liu <yong.liu@intel.com>
>
> diff --git a/drivers/net/virtio/virtio_ethdev.c b/drivers/net/virtio/virtio_ethdev.c
> index df50a571a..f48a84a11 100644
> --- a/drivers/net/virtio/virtio_ethdev.c
> +++ b/drivers/net/virtio/virtio_ethdev.c
> @@ -1320,6 +1320,11 @@ set_rxtx_funcs(struct rte_eth_dev *eth_dev)
> PMD_INIT_LOG(INFO, "virtio: using simple Rx path on port %u",
> eth_dev->data->port_id);
> eth_dev->rx_pkt_burst = virtio_recv_pkts_vec;
> + } else if (hw->use_inorder_rx) {
> + PMD_INIT_LOG(INFO,
> + "virtio: using inorder mergeable buffer Rx path on port %u",
> + eth_dev->data->port_id);
> + eth_dev->rx_pkt_burst = &virtio_recv_mergeable_pkts_inorder;
> } else if (vtpci_with_feature(hw, VIRTIO_NET_F_MRG_RXBUF)) {
> PMD_INIT_LOG(INFO,
> "virtio: using mergeable buffer Rx path on port %u",
> @@ -1335,6 +1340,10 @@ set_rxtx_funcs(struct rte_eth_dev *eth_dev)
> PMD_INIT_LOG(INFO, "virtio: using simple Tx path on port %u",
> eth_dev->data->port_id);
> eth_dev->tx_pkt_burst = virtio_xmit_pkts_simple;
> + } else if (hw->use_inorder_tx) {
> + PMD_INIT_LOG(INFO, "virtio: using inorder Tx path on port %u",
> + eth_dev->data->port_id);
> + eth_dev->tx_pkt_burst = virtio_xmit_pkts_inorder;
> } else {
> PMD_INIT_LOG(INFO, "virtio: using standard Tx path on port %u",
> eth_dev->data->port_id);
> @@ -1871,23 +1880,25 @@ virtio_dev_configure(struct rte_eth_dev *dev)
>
> rte_spinlock_init(&hw->state_lock);
>
> - hw->use_simple_rx = 1;
> - hw->use_simple_tx = 1;
> -
> #if defined RTE_ARCH_ARM64 || defined RTE_ARCH_ARM
> if (!rte_cpu_get_flag_enabled(RTE_CPUFLAG_NEON)) {
> hw->use_simple_rx = 0;
> hw->use_simple_tx = 0;
> }
> #endif
> - if (vtpci_with_feature(hw, VIRTIO_NET_F_MRG_RXBUF)) {
> - hw->use_simple_rx = 0;
> - hw->use_simple_tx = 0;
> - }
> + if (vtpci_with_feature(hw, VIRTIO_F_IN_ORDER)) {
> + if (vtpci_with_feature(hw, VIRTIO_NET_F_MRG_RXBUF)) {
> + hw->use_inorder_rx = 1;
> + hw->use_inorder_tx = 1;
> + } else {
> + hw->use_simple_rx = 1;
> + hw->use_simple_tx = 1;
> + }
>
> - if (rx_offloads & (DEV_RX_OFFLOAD_UDP_CKSUM |
> - DEV_RX_OFFLOAD_TCP_CKSUM))
> - hw->use_simple_rx = 0;
> + if (rx_offloads & (DEV_RX_OFFLOAD_UDP_CKSUM |
> + DEV_RX_OFFLOAD_TCP_CKSUM))
> + hw->use_inorder_rx = 1;
Isn't there a problem here if mrg_rxbuf has been negotiated?
> + }
>
> return 0;
> }
>
* Re: [dpdk-dev] [PATCH v2 7/8] net/virtio: add IN_ORDER Rx/Tx into selection
2018-06-27 8:19 ` Maxime Coquelin
@ 2018-06-28 14:11 ` Liu, Yong
0 siblings, 0 replies; 22+ messages in thread
From: Liu, Yong @ 2018-06-28 14:11 UTC (permalink / raw)
To: Maxime Coquelin, Bie, Tiwei; +Cc: Wang, Zhihong, dev
> -----Original Message-----
> From: Maxime Coquelin [mailto:maxime.coquelin@redhat.com]
> Sent: Wednesday, June 27, 2018 4:20 PM
> To: Liu, Yong <yong.liu@intel.com>; Bie, Tiwei <tiwei.bie@intel.com>
> Cc: Wang, Zhihong <zhihong.wang@intel.com>; dev@dpdk.org
> Subject: Re: [PATCH v2 7/8] net/virtio: add IN_ORDER Rx/Tx into selection
>
>
>
> On 06/25/2018 05:17 PM, Marvin Liu wrote:
> > After the IN_ORDER Rx/Tx paths are added, the Rx/Tx path selection
> > logic needs to be updated.
> >
> > Rx path selection logic: if IN_ORDER is disabled, the normal Rx path
> > is selected. If IN_ORDER is enabled and both Rx offloads and the
> > merge-able feature are disabled, the simple Rx path is selected.
> > Otherwise, the IN_ORDER Rx path is selected.
> >
> > Tx path selection logic: if IN_ORDER is disabled, the normal Tx path
> > is selected. If IN_ORDER is enabled and the merge-able feature is
> > disabled, the simple Tx path is selected. Otherwise, the IN_ORDER Tx
> > path is selected.
> >
> > Signed-off-by: Marvin Liu <yong.liu@intel.com>
> >
> > diff --git a/drivers/net/virtio/virtio_ethdev.c b/drivers/net/virtio/virtio_ethdev.c
> > index df50a571a..f48a84a11 100644
> > --- a/drivers/net/virtio/virtio_ethdev.c
> > +++ b/drivers/net/virtio/virtio_ethdev.c
> > @@ -1320,6 +1320,11 @@ set_rxtx_funcs(struct rte_eth_dev *eth_dev)
> > PMD_INIT_LOG(INFO, "virtio: using simple Rx path on port %u",
> > eth_dev->data->port_id);
> > eth_dev->rx_pkt_burst = virtio_recv_pkts_vec;
> > + } else if (hw->use_inorder_rx) {
> > + PMD_INIT_LOG(INFO,
> > + "virtio: using inorder mergeable buffer Rx path on
> port %u",
> > + eth_dev->data->port_id);
> > + eth_dev->rx_pkt_burst = &virtio_recv_mergeable_pkts_inorder;
> > } else if (vtpci_with_feature(hw, VIRTIO_NET_F_MRG_RXBUF)) {
> > PMD_INIT_LOG(INFO,
> > "virtio: using mergeable buffer Rx path on port %u",
> > @@ -1335,6 +1340,10 @@ set_rxtx_funcs(struct rte_eth_dev *eth_dev)
> > PMD_INIT_LOG(INFO, "virtio: using simple Tx path on port %u",
> > eth_dev->data->port_id);
> > eth_dev->tx_pkt_burst = virtio_xmit_pkts_simple;
> > + } else if (hw->use_inorder_tx) {
> > + PMD_INIT_LOG(INFO, "virtio: using inorder Tx path on port %u",
> > + eth_dev->data->port_id);
> > + eth_dev->tx_pkt_burst = virtio_xmit_pkts_inorder;
> > } else {
> > PMD_INIT_LOG(INFO, "virtio: using standard Tx path on port %u",
> > eth_dev->data->port_id);
> > @@ -1871,23 +1880,25 @@ virtio_dev_configure(struct rte_eth_dev *dev)
> >
> > rte_spinlock_init(&hw->state_lock);
> >
> > - hw->use_simple_rx = 1;
> > - hw->use_simple_tx = 1;
> > -
> > #if defined RTE_ARCH_ARM64 || defined RTE_ARCH_ARM
> > if (!rte_cpu_get_flag_enabled(RTE_CPUFLAG_NEON)) {
> > hw->use_simple_rx = 0;
> > hw->use_simple_tx = 0;
> > }
> > #endif
> > - if (vtpci_with_feature(hw, VIRTIO_NET_F_MRG_RXBUF)) {
> > - hw->use_simple_rx = 0;
> > - hw->use_simple_tx = 0;
> > - }
> > + if (vtpci_with_feature(hw, VIRTIO_F_IN_ORDER)) {
> > + if (vtpci_with_feature(hw, VIRTIO_NET_F_MRG_RXBUF)) {
> > + hw->use_inorder_rx = 1;
> > + hw->use_inorder_tx = 1;
> > + } else {
> > + hw->use_simple_rx = 1;
> > + hw->use_simple_tx = 1;
> > + }
> >
> > - if (rx_offloads & (DEV_RX_OFFLOAD_UDP_CKSUM |
> > - DEV_RX_OFFLOAD_TCP_CKSUM))
> > - hw->use_simple_rx = 0;
> > + if (rx_offloads & (DEV_RX_OFFLOAD_UDP_CKSUM |
> > + DEV_RX_OFFLOAD_TCP_CKSUM))
> > + hw->use_inorder_rx = 1;
> Isn't there a problem here if mrg_rxbuf has been negotiated?
>
That's the problem; the selection logic has been changed in v3. Thanks for the review.
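For illustration only, one way the ambiguity could be resolved, as an
editorial sketch reusing the hypothetical hw_flags struct from the note
above (this is not the actual v3 patch): make the Rx decision exactly
once, so the simple and in-order flags stay mutually exclusive.

static void
configure_rx_path(struct hw_flags *hw, int in_order, int mrg_rxbuf,
		  int rx_csum_offload)
{
	if (!in_order)
		return; /* normal Rx path: both flags stay cleared */
	if (mrg_rxbuf || rx_csum_offload) {
		hw->use_inorder_rx = 1;
		hw->use_simple_rx = 0;
	} else {
		hw->use_simple_rx = 1;
		hw->use_inorder_rx = 0;
	}
}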
> > + }
> >
> > return 0;
> > }
> >
^ permalink raw reply [flat|nested] 22+ messages in thread
* [dpdk-dev] [PATCH v2 8/8] net/virtio: announce VIRTIO_F_IN_ORDER support
2018-06-25 15:17 [dpdk-dev] [PATCH v2 0/8] support VIRTIO_F_IN_ORDER feature Marvin Liu
` (6 preceding siblings ...)
2018-06-25 15:17 ` [dpdk-dev] [PATCH v2 7/8] net/virtio: add IN_ORDER Rx/Tx into selection Marvin Liu
@ 2018-06-25 15:17 ` Marvin Liu
2018-06-27 8:20 ` Maxime Coquelin
2018-06-26 7:56 ` [dpdk-dev] [PATCH v2 0/8] support VIRTIO_F_IN_ORDER feature Maxime Coquelin
8 siblings, 1 reply; 22+ messages in thread
From: Marvin Liu @ 2018-06-25 15:17 UTC (permalink / raw)
To: maxime.coquelin, tiwei.bie; +Cc: zhihong.wang, dev, Marvin Liu
Signed-off-by: Marvin Liu <yong.liu@intel.com>
diff --git a/drivers/net/virtio/virtio_ethdev.h b/drivers/net/virtio/virtio_ethdev.h
index 8ab1267b0..69da18d9c 100644
--- a/drivers/net/virtio/virtio_ethdev.h
+++ b/drivers/net/virtio/virtio_ethdev.h
@@ -36,6 +36,7 @@
1ULL << VIRTIO_NET_F_GUEST_ANNOUNCE | \
1u << VIRTIO_RING_F_INDIRECT_DESC | \
1ULL << VIRTIO_F_VERSION_1 | \
+ 1ULL << VIRTIO_F_IN_ORDER | \
1ULL << VIRTIO_F_IOMMU_PLATFORM)
#define VIRTIO_PMD_SUPPORTED_GUEST_FEATURES \
diff --git a/drivers/net/virtio/virtio_user/virtio_user_dev.c b/drivers/net/virtio/virtio_user/virtio_user_dev.c
index 34a7f53ac..91347b324 100644
--- a/drivers/net/virtio/virtio_user/virtio_user_dev.c
+++ b/drivers/net/virtio/virtio_user/virtio_user_dev.c
@@ -371,6 +371,7 @@ virtio_user_dev_setup(struct virtio_user_dev *dev)
1ULL << VIRTIO_NET_F_GUEST_CSUM | \
1ULL << VIRTIO_NET_F_GUEST_TSO4 | \
1ULL << VIRTIO_NET_F_GUEST_TSO6 | \
+ 1ULL << VIRTIO_F_IN_ORDER | \
1ULL << VIRTIO_F_VERSION_1)
int
--
2.17.0
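The hunks above only add bit 35 to the advertised feature masks; whether
IN_ORDER actually takes effect still depends on negotiation. A generic
sketch of how such a mask works (standalone illustration with assumed
names, not the DPDK API):

#include <stdint.h>

#define VIRTIO_F_IN_ORDER 35 /* bit number from the virtio spec */

/* Offer side: advertise IN_ORDER on top of an existing feature set. */
static uint64_t
offer_features(uint64_t base)
{
	return base | (1ULL << VIRTIO_F_IN_ORDER);
}

/* After negotiation, only bits set on both sides remain. */
static int
in_order_negotiated(uint64_t driver_features, uint64_t device_features)
{
	uint64_t negotiated = driver_features & device_features;

	return (negotiated & (1ULL << VIRTIO_F_IN_ORDER)) != 0;
}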
^ permalink raw reply [flat|nested] 22+ messages in thread
* Re: [dpdk-dev] [PATCH v2 8/8] net/virtio: announce VIRTIO_F_IN_ORDER support
2018-06-25 15:17 ` [dpdk-dev] [PATCH v2 8/8] net/virtio: announce VIRTIO_F_IN_ORDER support Marvin Liu
@ 2018-06-27 8:20 ` Maxime Coquelin
0 siblings, 0 replies; 22+ messages in thread
From: Maxime Coquelin @ 2018-06-27 8:20 UTC (permalink / raw)
To: Marvin Liu, tiwei.bie; +Cc: zhihong.wang, dev
On 06/25/2018 05:17 PM, Marvin Liu wrote:
> Signed-off-by: Marvin Liu <yong.liu@intel.com>
>
> diff --git a/drivers/net/virtio/virtio_ethdev.h b/drivers/net/virtio/virtio_ethdev.h
> index 8ab1267b0..69da18d9c 100644
> --- a/drivers/net/virtio/virtio_ethdev.h
> +++ b/drivers/net/virtio/virtio_ethdev.h
> @@ -36,6 +36,7 @@
> 1ULL << VIRTIO_NET_F_GUEST_ANNOUNCE | \
> 1u << VIRTIO_RING_F_INDIRECT_DESC | \
> 1ULL << VIRTIO_F_VERSION_1 | \
> + 1ULL << VIRTIO_F_IN_ORDER | \
> 1ULL << VIRTIO_F_IOMMU_PLATFORM)
>
> #define VIRTIO_PMD_SUPPORTED_GUEST_FEATURES \
> diff --git a/drivers/net/virtio/virtio_user/virtio_user_dev.c b/drivers/net/virtio/virtio_user/virtio_user_dev.c
> index 34a7f53ac..91347b324 100644
> --- a/drivers/net/virtio/virtio_user/virtio_user_dev.c
> +++ b/drivers/net/virtio/virtio_user/virtio_user_dev.c
> @@ -371,6 +371,7 @@ virtio_user_dev_setup(struct virtio_user_dev *dev)
> 1ULL << VIRTIO_NET_F_GUEST_CSUM | \
> 1ULL << VIRTIO_NET_F_GUEST_TSO4 | \
> 1ULL << VIRTIO_NET_F_GUEST_TSO6 | \
> + 1ULL << VIRTIO_F_IN_ORDER | \
> 1ULL << VIRTIO_F_VERSION_1)
>
> int
>
Reviewed-by: Maxime Coquelin <maxime.coquelin@redhat.com>
Thanks,
Maxime
^ permalink raw reply [flat|nested] 22+ messages in thread
* Re: [dpdk-dev] [PATCH v2 0/8] support VIRTIO_F_IN_ORDER feature
2018-06-25 15:17 [dpdk-dev] [PATCH v2 0/8] support VIRTIO_F_IN_ORDER feature Marvin Liu
` (7 preceding siblings ...)
2018-06-25 15:17 ` [dpdk-dev] [PATCH v2 8/8] net/virtio: announce VIRTIO_F_IN_ORDER support Marvin Liu
@ 2018-06-26 7:56 ` Maxime Coquelin
2018-06-26 8:01 ` Liu, Yong
8 siblings, 1 reply; 22+ messages in thread
From: Maxime Coquelin @ 2018-06-26 7:56 UTC (permalink / raw)
To: Marvin Liu, tiwei.bie; +Cc: zhihong.wang, dev
Hi,
On 06/25/2018 05:17 PM, Marvin Liu wrote:
> In the latest virtio spec, a new feature bit, VIRTIO_F_IN_ORDER, was
> introduced. When this feature has been negotiated, the virtio driver
> will use descriptors in ring order: starting from offset 0 in the
> table and wrapping around at the end of the table. Vhost devices will
> always use descriptors in the same order in which they have been made
> available. This can reduce virtio accesses to the used ring.
>
> Based on the updated virtio spec, this series implements an IN_ORDER
> prototype in the virtio driver. Since new [RT]x paths are added to the
> selection, two new parameters, mrg_rx and in_order, are also added to
> the virtio-user vdev parameter list. This allows users to configure
> the feature bits and thereby influence [RT]x path selection.
>
> Performance of virtio user with IN_ORDER feature:
>
> Platform: Purley
> CPU: Intel(R) Xeon(R) Platinum 8160 CPU @ 2.10GHz
> DPDK baseline: 18.05
> Setup: testpmd with vhost vdev + testpmd with virtio vdev
>
> +--------------+----------+----------+---------+
> |Vhost->Virtio |1 Queue |2 Queues |4 Queues |
> +--------------+----------+----------+---------+
> |Inorder |12.0Mpps |24.2Mpps |26.0Mpps |
> |Normal |12.1Mpps |18.5Mpps |18.9Mpps |
> +--------------+----------+----------+---------+
>
> +--------------+----------+----------------+---------+
> |Virtio->Vhost |1 Queue |2 Queues |4 Queues |
> +--------------+----------+----------------+---------+
> |Inorder |13.8Mpps |10.7 ~ 15.2Mpps |11.5Mpps |
> |Normal |13.3Mpps |9.8 ~ 14Mpps |10.5Mpps |
> +--------------+----------+----------------+---------+
>
> +---------+----------+----------------+----------------+
> |Loopback |1 Queue |2 Queues |4 Queues |
> +---------+----------+----------------+----------------+
> |Inorder |7.4Mpps |9.1 ~ 11.6Mpps |10.5 ~ 11.3Mpps |
> +---------+----------+----------------+----------------+
> |Normal |7.5Mpps |7.7 ~ 9.0Mpps |7.6 ~ 7.8Mpps |
> +---------+----------+----------------+----------------+
>
> v2:
> - merge to latest dpdk-net-virtio
> > - do not use indirect descriptors for normal xmit packets
> - update available ring for each descriptor
> - clean up IN_ORDER xmit function
> > - unmask feature bits when in_order or mrg_rxbuf is disabled
> - extract common part between IN_ORDER and normal functions
> - update performance result
>
> Marvin Liu (8):
> vhost: announce VIRTIO_F_IN_ORDER support
> net/virtio: add VIRTIO_F_IN_ORDER definition
> net/virtio-user: add mrg_rxbuf and in_order vdev parameters
> net/virtio: free IN_ORDER descriptors before device start
> net/virtio: extract common part for IN_ORDER functions
> net/virtio: support IN_ORDER Rx and Tx
> net/virtio: add IN_ORDER Rx/Tx into selection
> net/virtio: announce VIRTIO_F_IN_ORDER support
I haven't checked, but I guess the titles don't pass the check-git-log.sh
script as they contain underscores.
> drivers/net/virtio/virtio_ethdev.c | 31 +-
> drivers/net/virtio/virtio_ethdev.h | 7 +
> drivers/net/virtio/virtio_pci.h | 8 +
> drivers/net/virtio/virtio_rxtx.c | 635 ++++++++++++++++--
> .../net/virtio/virtio_user/virtio_user_dev.c | 14 +-
> .../net/virtio/virtio_user/virtio_user_dev.h | 3 +-
> drivers/net/virtio/virtio_user_ethdev.c | 33 +-
> drivers/net/virtio/virtqueue.c | 8 +
> drivers/net/virtio/virtqueue.h | 2 +
> lib/librte_vhost/socket.c | 6 +
> lib/librte_vhost/vhost.h | 10 +-
> 11 files changed, 688 insertions(+), 69 deletions(-)
>
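The saving described above ("reduce virtio accesses to the used ring")
can be made concrete with a minimal sketch (hypothetical structures, not
the DPDK implementation): with IN_ORDER negotiated, a single read of the
device's used index identifies a whole batch of completed buffers, in
exactly the order they were made available, with no per-buffer used-ring
element lookups.

#include <stdint.h>

/* Hypothetical ring bookkeeping for the in-order case. */
struct ring_state {
	uint16_t device_used_idx; /* written by the device */
	uint16_t last_used_idx;   /* driver's reclaim position */
};

/* Buffers last_used_idx .. device_used_idx-1 (mod 2^16) completed in
 * order, so one index read reclaims the whole batch. */
static uint16_t
reclaim_in_order(struct ring_state *r)
{
	uint16_t nb_used = (uint16_t)(r->device_used_idx - r->last_used_idx);

	r->last_used_idx = (uint16_t)(r->last_used_idx + nb_used);
	return nb_used;
}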
^ permalink raw reply [flat|nested] 22+ messages in thread
* Re: [dpdk-dev] [PATCH v2 0/8] support VIRTIO_F_IN_ORDER feature
2018-06-26 7:56 ` [dpdk-dev] [PATCH v2 0/8] support VIRTIO_F_IN_ORDER feature Maxime Coquelin
@ 2018-06-26 8:01 ` Liu, Yong
0 siblings, 0 replies; 22+ messages in thread
From: Liu, Yong @ 2018-06-26 8:01 UTC (permalink / raw)
To: Maxime Coquelin, Bie, Tiwei; +Cc: Wang, Zhihong, dev
> -----Original Message-----
> From: Maxime Coquelin [mailto:maxime.coquelin@redhat.com]
> Sent: Tuesday, June 26, 2018 3:56 PM
> To: Liu, Yong <yong.liu@intel.com>; Bie, Tiwei <tiwei.bie@intel.com>
> Cc: Wang, Zhihong <zhihong.wang@intel.com>; dev@dpdk.org
> Subject: Re: [PATCH v2 0/8] support VIRTIO_F_IN_ORDER feature
>
> Hi,
>
> On 06/25/2018 05:17 PM, Marvin Liu wrote:
> > In the latest virtio spec, a new feature bit, VIRTIO_F_IN_ORDER, was
> > introduced. When this feature has been negotiated, the virtio driver
> > will use descriptors in ring order: starting from offset 0 in the
> > table and wrapping around at the end of the table. Vhost devices
> > will always use descriptors in the same order in which they have
> > been made available. This can reduce virtio accesses to the used ring.
> >
> > Based on the updated virtio spec, this series implements an IN_ORDER
> > prototype in the virtio driver. Since new [RT]x paths are added to
> > the selection, two new parameters, mrg_rx and in_order, are also
> > added to the virtio-user vdev parameter list. This allows users to
> > configure the feature bits and thereby influence [RT]x path selection.
> >
> > Performance of virtio user with IN_ORDER feature:
> >
> > Platform: Purley
> > CPU: Intel(R) Xeon(R) Platinum 8160 CPU @ 2.10GHz
> > DPDK baseline: 18.05
> > Setup: testpmd with vhost vdev + testpmd with virtio vdev
> >
> > +--------------+----------+----------+---------+
> > |Vhost->Virtio |1 Queue |2 Queues |4 Queues |
> > +--------------+----------+----------+---------+
> > |Inorder |12.0Mpps |24.2Mpps |26.0Mpps |
> > |Normal |12.1Mpps |18.5Mpps |18.9Mpps |
> > +--------------+----------+----------+---------+
> >
> > +--------------+----------+----------------+---------+
> > |Virtio->Vhost |1 Queue |2 Queues |4 Queues |
> > +--------------+----------+----------------+---------+
> > |Inorder |13.8Mpps |10.7 ~ 15.2Mpps |11.5Mpps |
> > |Normal |13.3Mpps |9.8 ~ 14Mpps |10.5Mpps |
> > +--------------+----------+----------------+---------+
> >
> > +---------+----------+----------------+----------------+
> > |Loopback |1 Queue |2 Queues |4 Queues |
> > +---------+----------+----------------+----------------+
> > |Inorder |7.4Mpps |9.1 ~ 11.6Mpps |10.5 ~ 11.3Mpps |
> > +---------+----------+----------------+----------------+
> > |Normal |7.5Mpps |7.7 ~ 9.0Mpps |7.6 ~ 7.8Mpps |
> > +---------+----------+----------------+----------------+
> >
> > v2:
> > - merge to latest dpdk-net-virtio
> > - do not use indirect descriptors for normal xmit packets
> > - update available ring for each descriptor
> > - clean up IN_ORDER xmit function
> > - unmask feature bits when in_order or mrg_rxbuf is disabled
> > - extract common part between IN_ORDER and normal functions
> > - update performance result
> >
> > Marvin Liu (8):
> > vhost: announce VIRTIO_F_IN_ORDER support
> > net/virtio: add VIRTIO_F_IN_ORDER definition
> > net/virtio-user: add mrg_rxbuf and in_order vdev parameters
> > net/virtio: free IN_ORDER descriptors before device start
> > net/virtio: extract common part for IN_ORDER functions
> > net/virtio: support IN_ORDER Rx and Tx
> > net/virtio: add IN_ORDER Rx/Tx into selection
> > net/virtio: announce VIRTIO_F_IN_ORDER support
>
> I haven't checked, but I guess the titles don't pass the check-git-log.sh
> script as they contain underscores.
Thanks for the reminder, Maxime. I will replace "IN_ORDER" with "in order", which passes the check.
>
> > drivers/net/virtio/virtio_ethdev.c | 31 +-
> > drivers/net/virtio/virtio_ethdev.h | 7 +
> > drivers/net/virtio/virtio_pci.h | 8 +
> > drivers/net/virtio/virtio_rxtx.c | 635 ++++++++++++++++--
> > .../net/virtio/virtio_user/virtio_user_dev.c | 14 +-
> > .../net/virtio/virtio_user/virtio_user_dev.h | 3 +-
> > drivers/net/virtio/virtio_user_ethdev.c | 33 +-
> > drivers/net/virtio/virtqueue.c | 8 +
> > drivers/net/virtio/virtqueue.h | 2 +
> > lib/librte_vhost/socket.c | 6 +
> > lib/librte_vhost/vhost.h | 10 +-
> > 11 files changed, 688 insertions(+), 69 deletions(-)
> >
^ permalink raw reply [flat|nested] 22+ messages in thread