From: "Hu, Jiayu"
To: "Jiang, Cheng1", "maxime.coquelin@redhat.com", "Xia, Chenbo"
CC: "dev@dpdk.org", "Yang, YvonneX"
Date: Mon, 28 Dec 2020 04:03:43 +0000
Subject: Re: [dpdk-dev] [PATCH v4 2/2] examples/vhost: refactor vhost data path

Hi Cheng,

Some comments are inline.

Thanks,
Jiayu

> -----Original Message-----
> From: Jiang, Cheng1
> Sent: Friday, December 25, 2020 4:07 PM
> To: maxime.coquelin@redhat.com; Xia, Chenbo
> Cc: dev@dpdk.org; Hu, Jiayu; Yang, YvonneX; Jiang, Cheng1
> Subject: [PATCH v4 2/2] examples/vhost: refactor vhost data path
>
> Change the vm2vm data path to batch enqueue for better performance.
> Support latest async vhost API, refactor vhost async data path,
> replase rte_atomicNN_xxx to atomic_XXX and clean some codes.
Typo: replase -> replace >=20 > Signed-off-by: Cheng Jiang > --- > examples/vhost/main.c | 202 +++++++++++++++++++++++++++++++----------- > examples/vhost/main.h | 7 +- > 2 files changed, 154 insertions(+), 55 deletions(-) >=20 > diff --git a/examples/vhost/main.c b/examples/vhost/main.c > index 8d8c3038b..3ea12a474 100644 > --- a/examples/vhost/main.c > +++ b/examples/vhost/main.c > @@ -179,9 +179,18 @@ struct mbuf_table { > struct rte_mbuf *m_table[MAX_PKT_BURST]; > }; >=20 > +struct vhost_bufftable { > + uint32_t len; > + uint64_t pre_tsc; > + struct rte_mbuf *m_table[MAX_PKT_BURST]; > +}; > + > /* TX queue for each data core. */ > struct mbuf_table lcore_tx_queue[RTE_MAX_LCORE]; >=20 > +/* TX queue for each vhost device. */ Every lcore maintains a TX buffer for every vhost device, which is to batch pkts to enqueue for higher performance. I suggest you to update the description of vhost_txbuff above, as it is not very clear. > +struct vhost_bufftable *vhost_txbuff[RTE_MAX_LCORE * > MAX_VHOST_DEVICE]; > + > #define MBUF_TABLE_DRAIN_TSC ((rte_get_tsc_hz() + US_PER_S - 1) \ > / US_PER_S * BURST_TX_DRAIN_US) > #define VLAN_HLEN 4 > @@ -804,39 +813,114 @@ unlink_vmdq(struct vhost_dev *vdev) > } > } >=20 > +static inline void > +free_pkts(struct rte_mbuf **pkts, uint16_t n) > +{ > + while (n--) > + rte_pktmbuf_free(pkts[n]); > +} > + > static __rte_always_inline void > -virtio_xmit(struct vhost_dev *dst_vdev, struct vhost_dev *src_vdev, > +complete_async_pkts(struct vhost_dev *vdev) > +{ > + struct rte_mbuf *p_cpl[MAX_PKT_BURST]; > + uint16_t complete_count; > + > + complete_count =3D rte_vhost_poll_enqueue_completed(vdev->vid, > + VIRTIO_RXQ, p_cpl, > MAX_PKT_BURST); > + if (complete_count) { > + atomic_fetch_sub(&vdev->nr_async_pkts, complete_count); > + free_pkts(p_cpl, complete_count); > + } > +} > + > +static __rte_always_inline void > +sync_virtio_xmit(struct vhost_dev *dst_vdev, struct vhost_dev *src_vdev, > struct rte_mbuf *m) > { > uint16_t ret; > - struct rte_mbuf *m_cpl[1]; >=20 > if (builtin_net_driver) { > ret =3D vs_enqueue_pkts(dst_vdev, VIRTIO_RXQ, &m, 1); > - } else if (async_vhost_driver) { > - ret =3D rte_vhost_submit_enqueue_burst(dst_vdev->vid, > VIRTIO_RXQ, > - &m, 1); > - > - if (likely(ret)) > - dst_vdev->nr_async_pkts++; > - > - while (likely(dst_vdev->nr_async_pkts)) { > - if (rte_vhost_poll_enqueue_completed(dst_vdev- > >vid, > - VIRTIO_RXQ, m_cpl, 1)) > - dst_vdev->nr_async_pkts--; > - } > } else { > ret =3D rte_vhost_enqueue_burst(dst_vdev->vid, VIRTIO_RXQ, > &m, 1); > } >=20 > if (enable_stats) { > - rte_atomic64_inc(&dst_vdev->stats.rx_total_atomic); > - rte_atomic64_add(&dst_vdev->stats.rx_atomic, ret); > + atomic_fetch_add(&dst_vdev->stats.rx_total_atomic, 1); > + atomic_fetch_add(&dst_vdev->stats.rx_atomic, ret); > src_vdev->stats.tx_total++; > src_vdev->stats.tx +=3D ret; > } > } >=20 > +static __rte_always_inline void > +drain_vhost(struct vhost_dev *vdev) > +{ > + uint16_t ret; > + uint64_t queue_id =3D rte_lcore_id() * MAX_VHOST_DEVICE + vdev- > >vid; > + uint16_t nr_xmit =3D vhost_txbuff[queue_id]->len; > + struct rte_mbuf **m =3D vhost_txbuff[queue_id]->m_table; "queue_id" is not a very good name, as it's not the queue id of vhost devic= e, but a buffer index which holds pkts to enqueue. 
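Something like below is what I have in mind for drain_vhost() (only a sketch; "buff_idx" is just an illustrative name, any name that reads as "buffer index" is fine):

	/* index of this lcore's TX buffer for the device, not a vhost queue id */
	uint64_t buff_idx = rte_lcore_id() * MAX_VHOST_DEVICE + vdev->vid;
	uint16_t nr_xmit = vhost_txbuff[buff_idx]->len;
	struct rte_mbuf **m = vhost_txbuff[buff_idx]->m_table;

Similarly, for the vhost_txbuff declaration further up, a comment along these lines would make the intent clearer (wording is only a suggestion):

	/*
	 * Per-lcore, per-vhost-device TX buffer. Packets destined to a vhost
	 * device are batched here and enqueued in bursts for better performance.
	 */
	struct vhost_bufftable *vhost_txbuff[RTE_MAX_LCORE * MAX_VHOST_DEVICE];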
> + > + if (builtin_net_driver) { > + ret =3D vs_enqueue_pkts(vdev, VIRTIO_RXQ, m, nr_xmit); > + } else if (async_vhost_driver) { > + uint32_t cpu_cpl_nr =3D 0; > + uint16_t enqueue_fail =3D 0; > + struct rte_mbuf *m_cpu_cpl[nr_xmit]; > + > + complete_async_pkts(vdev); > + ret =3D rte_vhost_submit_enqueue_burst(vdev->vid, > VIRTIO_RXQ, > + m, nr_xmit, m_cpu_cpl, &cpu_cpl_nr); > + atomic_fetch_add(&vdev->nr_async_pkts, ret - cpu_cpl_nr); > + > + if (cpu_cpl_nr) > + free_pkts(m_cpu_cpl, cpu_cpl_nr); > + > + enqueue_fail =3D nr_xmit - ret; > + if (enqueue_fail) > + free_pkts(&m[ret], nr_xmit - ret); > + } else { > + ret =3D rte_vhost_enqueue_burst(vdev->vid, VIRTIO_RXQ, > + m, nr_xmit); > + } > + > + if (enable_stats) { > + atomic_fetch_add(&vdev->stats.rx_total_atomic, nr_xmit); > + atomic_fetch_add(&vdev->stats.rx_atomic, ret); > + } > + > + if (!async_vhost_driver) > + free_pkts(m, nr_xmit); > +} > + > +static __rte_always_inline void > +drain_vhost_table(void) > +{ > + const uint16_t lcore_id =3D rte_lcore_id(); > + struct vhost_bufftable *vhost_txq; > + struct vhost_dev *vdev; > + uint64_t cur_tsc; > + > + TAILQ_FOREACH(vdev, &lcore_info[lcore_id].vdev_list, > lcore_vdev_entry) { A lcore may have pkts to enqueue for any vhost device, as it's decided by mac address. So drain_vhost_table() shouldn't just process vhost ports allocated to its lcore, but all vhost ports that have un-sent packets. > + if (!vdev->remove) { > + vhost_txq =3D vhost_txbuff[lcore_id * > MAX_VHOST_DEVICE > + + vdev->vid]; > + cur_tsc =3D rte_rdtsc(); > + > + if (unlikely(cur_tsc - vhost_txq->pre_tsc > + > MBUF_TABLE_DRAIN_TSC)) { > + RTE_LOG_DP(DEBUG, VHOST_DATA, > + "Vhost tX queue drained after "tX" -> "TX" > timeout with burst size %u\n", > + vhost_txq->len); > + drain_vhost(vdev); > + vhost_txq->len =3D 0; > + vhost_txq->pre_tsc =3D cur_tsc; > + } > + } > + } > +} > + > /* > * Check if the packet destination MAC address is for a local device. If= so then > put > * the packet on that devices RX queue. If not then return. > @@ -846,7 +930,8 @@ virtio_tx_local(struct vhost_dev *vdev, struct > rte_mbuf *m) > { > struct rte_ether_hdr *pkt_hdr; > struct vhost_dev *dst_vdev; > - > + struct vhost_bufftable *vhost_txq; > + const uint16_t lcore_id =3D rte_lcore_id(); Why use "const"? 
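Going back to the drain_vhost_table() comment above: iterating the global device list rather than the per-lcore list is roughly what I have in mind (untested sketch, reusing the names already in the patch):

	static __rte_always_inline void
	drain_vhost_table(void)
	{
		uint16_t lcore_id = rte_lcore_id();
		struct vhost_bufftable *vhost_txq;
		struct vhost_dev *vdev;
		uint64_t cur_tsc;

		/* walk every vhost device, not only those bound to this lcore */
		TAILQ_FOREACH(vdev, &vhost_dev_list, global_vdev_entry) {
			if (unlikely(vdev->remove))
				continue;

			vhost_txq = vhost_txbuff[lcore_id * MAX_VHOST_DEVICE
						+ vdev->vid];
			cur_tsc = rte_rdtsc();

			if (unlikely(cur_tsc - vhost_txq->pre_tsc
					> MBUF_TABLE_DRAIN_TSC)) {
				RTE_LOG_DP(DEBUG, VHOST_DATA,
					"Vhost TX queue drained after timeout with burst size %u\n",
					vhost_txq->len);
				drain_vhost(vdev);
				vhost_txq->len = 0;
				vhost_txq->pre_tsc = cur_tsc;
			}
		}
	}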
> pkt_hdr =3D rte_pktmbuf_mtod(m, struct rte_ether_hdr *); >=20 > dst_vdev =3D find_vhost_dev(&pkt_hdr->d_addr); > @@ -869,7 +954,19 @@ virtio_tx_local(struct vhost_dev *vdev, struct > rte_mbuf *m) > return 0; > } >=20 > - virtio_xmit(dst_vdev, vdev, m); > + vhost_txq =3D vhost_txbuff[lcore_id * MAX_VHOST_DEVICE + dst_vdev- > >vid]; > + vhost_txq->m_table[vhost_txq->len++] =3D m; > + > + if (enable_stats) { > + vdev->stats.tx_total++; > + vdev->stats.tx++; > + } > + > + if (unlikely(vhost_txq->len =3D=3D MAX_PKT_BURST)) { > + drain_vhost(dst_vdev); > + vhost_txq->len =3D 0; > + vhost_txq->pre_tsc =3D rte_rdtsc(); > + } > return 0; > } >=20 > @@ -940,13 +1037,6 @@ static void virtio_tx_offload(struct rte_mbuf *m) > tcp_hdr->cksum =3D get_psd_sum(l3_hdr, m->ol_flags); > } >=20 > -static inline void > -free_pkts(struct rte_mbuf **pkts, uint16_t n) > -{ > - while (n--) > - rte_pktmbuf_free(pkts[n]); > -} > - > static __rte_always_inline void > do_drain_mbuf_table(struct mbuf_table *tx_q) > { > @@ -979,16 +1069,14 @@ virtio_tx_route(struct vhost_dev *vdev, struct > rte_mbuf *m, uint16_t vlan_tag) >=20 > TAILQ_FOREACH(vdev2, &vhost_dev_list, global_vdev_entry) > { > if (vdev2 !=3D vdev) > - virtio_xmit(vdev2, vdev, m); > + sync_virtio_xmit(vdev2, vdev, m); > } > goto queue2nic; > } >=20 > /*check if destination is local VM*/ > - if ((vm2vm_mode =3D=3D VM2VM_SOFTWARE) && (virtio_tx_local(vdev, > m) =3D=3D 0)) { > - rte_pktmbuf_free(m); > + if ((vm2vm_mode =3D=3D VM2VM_SOFTWARE) && (virtio_tx_local(vdev, > m) =3D=3D 0)) > return; > - } >=20 > if (unlikely(vm2vm_mode =3D=3D VM2VM_HARDWARE)) { > if (unlikely(find_local_dest(vdev, m, &offset, > @@ -1073,19 +1161,6 @@ drain_mbuf_table(struct mbuf_table *tx_q) > } > } >=20 > -static __rte_always_inline void > -complete_async_pkts(struct vhost_dev *vdev, uint16_t qid) > -{ > - struct rte_mbuf *p_cpl[MAX_PKT_BURST]; > - uint16_t complete_count; > - > - complete_count =3D rte_vhost_poll_enqueue_completed(vdev->vid, > - qid, p_cpl, MAX_PKT_BURST); > - vdev->nr_async_pkts -=3D complete_count; > - if (complete_count) > - free_pkts(p_cpl, complete_count); > -} > - > static __rte_always_inline void > drain_eth_rx(struct vhost_dev *vdev) > { > @@ -1095,9 +1170,6 @@ drain_eth_rx(struct vhost_dev *vdev) > rx_count =3D rte_eth_rx_burst(ports[0], vdev->vmdq_rx_q, > pkts, MAX_PKT_BURST); >=20 > - while (likely(vdev->nr_async_pkts)) > - complete_async_pkts(vdev, VIRTIO_RXQ); > - > if (!rx_count) > return; >=20 > @@ -1123,17 +1195,31 @@ drain_eth_rx(struct vhost_dev *vdev) > enqueue_count =3D vs_enqueue_pkts(vdev, VIRTIO_RXQ, > pkts, rx_count); > } else if (async_vhost_driver) { > + uint32_t cpu_cpl_nr =3D 0; > + uint16_t enqueue_fail =3D 0; > + struct rte_mbuf *m_cpu_cpl[MAX_PKT_BURST]; > + > + complete_async_pkts(vdev); > enqueue_count =3D rte_vhost_submit_enqueue_burst(vdev- > >vid, > - VIRTIO_RXQ, pkts, rx_count); > - vdev->nr_async_pkts +=3D enqueue_count; > + VIRTIO_RXQ, pkts, rx_count, > + m_cpu_cpl, &cpu_cpl_nr); > + atomic_fetch_add(&vdev->nr_async_pkts, > + enqueue_count - cpu_cpl_nr); > + if (cpu_cpl_nr) > + free_pkts(m_cpu_cpl, cpu_cpl_nr); > + > + enqueue_fail =3D rx_count - enqueue_count; > + if (enqueue_fail) > + free_pkts(&pkts[enqueue_count], enqueue_fail); > + > } else { > enqueue_count =3D rte_vhost_enqueue_burst(vdev->vid, > VIRTIO_RXQ, > pkts, rx_count); > } >=20 > if (enable_stats) { > - rte_atomic64_add(&vdev->stats.rx_total_atomic, rx_count); > - rte_atomic64_add(&vdev->stats.rx_atomic, enqueue_count); > + 
atomic_fetch_add(&vdev->stats.rx_total_atomic, rx_count); > + atomic_fetch_add(&vdev->stats.rx_atomic, enqueue_count); > } >=20 > if (!async_vhost_driver) > @@ -1202,7 +1288,7 @@ switch_worker(void *arg __rte_unused) >=20 > while(1) { > drain_mbuf_table(tx_q); > - > + drain_vhost_table(); > /* > * Inform the configuration core that we have exited the > * linked list and that no devices are in use if requested. > @@ -1298,6 +1384,7 @@ static int > new_device(int vid) > { > int lcore, core_add =3D 0; > + uint16_t i; > uint32_t device_num_min =3D num_devices; > struct vhost_dev *vdev; > vdev =3D rte_zmalloc("vhost device", sizeof(*vdev), > RTE_CACHE_LINE_SIZE); > @@ -1309,6 +1396,13 @@ new_device(int vid) > } > vdev->vid =3D vid; >=20 > + for (i =3D 0; i < RTE_MAX_LCORE; i++) { > + vhost_txbuff[i * MAX_VHOST_DEVICE + vid] > + =3D rte_zmalloc("vhost bufftable", > + sizeof(struct vhost_bufftable), > + RTE_CACHE_LINE_SIZE); Rte_zmalloc() may fail. Need to handle failure. > + } > + > if (builtin_net_driver) > vs_vhost_net_setup(vdev); >=20 > @@ -1343,12 +1437,15 @@ new_device(int vid) > if (async_vhost_driver) { > struct rte_vhost_async_features f; > struct rte_vhost_async_channel_ops channel_ops; > + > if (strncmp(dma_type, "ioat", 4) =3D=3D 0) { > channel_ops.transfer_data =3D ioat_transfer_data_cb; > channel_ops.check_completed_copies =3D > ioat_check_completed_copies_cb; > + > f.async_inorder =3D 1; > f.async_threshold =3D 256; > + > return rte_vhost_async_channel_register(vid, > VIRTIO_RXQ, > f.intval, &channel_ops); > } > @@ -1392,8 +1489,8 @@ print_stats(__rte_unused void *arg) > tx =3D vdev->stats.tx; > tx_dropped =3D tx_total - tx; >=20 > - rx_total =3D rte_atomic64_read(&vdev- > >stats.rx_total_atomic); > - rx =3D rte_atomic64_read(&vdev->stats.rx_atomic); > + rx_total =3D atomic_load(&vdev- > >stats.rx_total_atomic); > + rx =3D atomic_load(&vdev->stats.rx_atomic); > rx_dropped =3D rx_total - rx; >=20 > printf("Statistics for device %d\n" > @@ -1592,6 +1689,7 @@ main(int argc, char *argv[]) > /* Register vhost user driver to handle vhost messages. */ > for (i =3D 0; i < nb_sockets; i++) { > char *file =3D socket_files + i * PATH_MAX; > + > if (async_vhost_driver) > flags =3D flags | RTE_VHOST_USER_ASYNC_COPY; >=20 > diff --git a/examples/vhost/main.h b/examples/vhost/main.h > index 4317b6ae8..6aa798a3e 100644 > --- a/examples/vhost/main.h > +++ b/examples/vhost/main.h > @@ -8,6 +8,7 @@ > #include >=20 > #include > +#include >=20 > /* Macros for printing using RTE_LOG */ > #define RTE_LOGTYPE_VHOST_CONFIG RTE_LOGTYPE_USER1 > @@ -21,8 +22,8 @@ enum {VIRTIO_RXQ, VIRTIO_TXQ, VIRTIO_QNUM}; > struct device_statistics { > uint64_t tx; > uint64_t tx_total; > - rte_atomic64_t rx_atomic; > - rte_atomic64_t rx_total_atomic; > + atomic_int_least64_t rx_atomic; > + atomic_int_least64_t rx_total_atomic; > }; >=20 > struct vhost_queue { > @@ -51,7 +52,7 @@ struct vhost_dev { > uint64_t features; > size_t hdr_len; > uint16_t nr_vrings; > - uint16_t nr_async_pkts; > + atomic_int_least16_t nr_async_pkts; > struct rte_vhost_memory *mem; > struct device_statistics stats; > TAILQ_ENTRY(vhost_dev) global_vdev_entry; > -- > 2.29.2
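One more note on the rte_zmalloc() comment in new_device() above: the error handling I have in mind is roughly like below (untested sketch; the log message and cleanup details are only placeholders):

	for (i = 0; i < RTE_MAX_LCORE; i++) {
		vhost_txbuff[i * MAX_VHOST_DEVICE + vid]
			= rte_zmalloc("vhost bufftable",
				sizeof(struct vhost_bufftable),
				RTE_CACHE_LINE_SIZE);

		if (vhost_txbuff[i * MAX_VHOST_DEVICE + vid] == NULL) {
			RTE_LOG(INFO, VHOST_DATA,
				"(%d) couldn't allocate memory for vhost TX buffer\n",
				vid);
			/* roll back the buffers already allocated for this vid */
			while (i--)
				rte_free(vhost_txbuff[i * MAX_VHOST_DEVICE + vid]);
			rte_free(vdev);
			return -1;
		}
	}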