DPDK patches and discussions
From: Xuan Ding <xuan.ding@intel.com>
To: dev@dpdk.org, maxime.coquelin@redhat.com, chenbo.xia@intel.com
Cc: jiayu.hu@intel.com, cheng1.jiang@intel.com,
	bruce.richardson@intel.com, sunil.pai.g@intel.com,
	YvonneX.Yang@intel.com, Xuan Ding <xuan.ding@intel.com>
Subject: [dpdk-dev] [PATCH v2] vhost: normalize return type and function name
Date: Thu, 16 Sep 2021 04:34:15 +0000	[thread overview]
Message-ID: <20210916043415.82219-1-xuan.ding@intel.com> (raw)
In-Reply-To: <20210916032517.75827-1-xuan.ding@intel.com>

In some function definitions, place the return type and the function
name on separate lines, to be consistent with the DPDK coding style.
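
For illustration, here is a minimal before/after sketch of the layout this
patch applies. The helper below is a made-up example (it is not a function
from vhost.c) and is shown only to contrast the two styles:

#include <stdint.h>

/* Hypothetical helper, not part of vhost.c; shown for layout only. */

/* Discouraged: return type and function name share a line. */
static int example_get_value_old(int vid, uint64_t *value)
{
	if (value == NULL)
		return -1;
	*value = (uint64_t)vid;
	return 0;
}

/* Preferred by the DPDK coding style: return type on its own line,
 * function name starting the next line.
 */
static int
example_get_value(int vid, uint64_t *value)
{
	if (value == NULL)
		return -1;
	*value = (uint64_t)vid;
	return 0;
}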

Signed-off-by: Xuan Ding <xuan.ding@intel.com>
---
v2:
* Fixed one format issue.
---
 lib/vhost/vhost.c | 15 ++++++++++-----
 1 file changed, 10 insertions(+), 5 deletions(-)

diff --git a/lib/vhost/vhost.c b/lib/vhost/vhost.c
index 24ae1025ea..69e9d229af 100644
--- a/lib/vhost/vhost.c
+++ b/lib/vhost/vhost.c
@@ -1500,7 +1500,8 @@ rte_vhost_get_vdpa_device(int vid)
 	return dev->vdpa_dev;
 }
 
-int rte_vhost_get_log_base(int vid, uint64_t *log_base,
+int
+rte_vhost_get_log_base(int vid, uint64_t *log_base,
 		uint64_t *log_size)
 {
 	struct virtio_net *dev = get_device(vid);
@@ -1514,7 +1515,8 @@ int rte_vhost_get_log_base(int vid, uint64_t *log_base,
 	return 0;
 }
 
-int rte_vhost_get_vring_base(int vid, uint16_t queue_id,
+int
+rte_vhost_get_vring_base(int vid, uint16_t queue_id,
 		uint16_t *last_avail_idx, uint16_t *last_used_idx)
 {
 	struct vhost_virtqueue *vq;
@@ -1543,7 +1545,8 @@ int rte_vhost_get_vring_base(int vid, uint16_t queue_id,
 	return 0;
 }
 
-int rte_vhost_set_vring_base(int vid, uint16_t queue_id,
+int
+rte_vhost_set_vring_base(int vid, uint16_t queue_id,
 		uint16_t last_avail_idx, uint16_t last_used_idx)
 {
 	struct vhost_virtqueue *vq;
@@ -1606,7 +1609,8 @@ rte_vhost_get_vring_base_from_inflight(int vid,
 	return 0;
 }
 
-int rte_vhost_extern_callback_register(int vid,
+int
+rte_vhost_extern_callback_register(int vid,
 		struct rte_vhost_user_extern_ops const * const ops, void *ctx)
 {
 	struct virtio_net *dev = get_device(vid);
@@ -1854,7 +1858,8 @@ rte_vhost_async_channel_unregister_thread_unsafe(int vid, uint16_t queue_id)
 	return 0;
 }
 
-int rte_vhost_async_get_inflight(int vid, uint16_t queue_id)
+int
+rte_vhost_async_get_inflight(int vid, uint16_t queue_id)
 {
 	struct vhost_virtqueue *vq;
 	struct virtio_net *dev = get_device(vid);
-- 
2.17.1



Thread overview: 4+ messages
2021-09-16  3:25 [dpdk-dev] [PATCH] vhost: Normalize " Xuan Ding
2021-09-16  4:34 ` Xuan Ding [this message]
2021-09-16  7:20   ` [dpdk-dev] [PATCH v2] vhost: normalize " Xia, Chenbo
2021-09-28 15:34   ` Maxime Coquelin
