DPDK patches and discussions
From: "Ananyev, Konstantin" <konstantin.ananyev@intel.com>
To: Zoltan Kiss <zoltan.kiss@linaro.org>, "dev@dpdk.org" <dev@dpdk.org>
Subject: Re: [dpdk-dev] [PATCH] examples/vhost: use library routines instead of local copies
Date: Thu, 26 Mar 2015 01:20:47 +0000	[thread overview]
Message-ID: <2601191342CEEE43887BDE71AB97725821407936@irsmsx105.ger.corp.intel.com> (raw)
In-Reply-To: <1427309006-26590-1-git-send-email-zoltan.kiss@linaro.org>



> -----Original Message-----
> From: dev [mailto:dev-bounces@dpdk.org] On Behalf Of Zoltan Kiss
> Sent: Wednesday, March 25, 2015 6:43 PM
> To: dev@dpdk.org
> Cc: Zoltan Kiss
> Subject: [dpdk-dev] [PATCH] examples/vhost: use library routines instead of local copies
> 
> This macro and function are copies from the mbuf library; there is no reason
> to keep them.

NACK
You can't use the RTE_MBUF_INDIRECT macro here.
If you look at the vhost code carefully, you'll realise that we don't use the standard rte_pktmbuf_attach() here:
we attach the mbuf not to another mbuf, but to an external memory buffer passed to us by the virtio device.
Look at attach_rxmbuf_zcp().
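To illustrate, here is a simplified sketch of what attach_rxmbuf_zcp() effectively does to an mbuf (the helper name and parameters are made up for illustration only; the real function also handles the vring descriptor bookkeeping):

	#include <rte_memory.h>
	#include <rte_mbuf.h>

	/*
	 * Sketch: vhost zero-copy points the mbuf at a guest-supplied buffer
	 * directly instead of going through rte_pktmbuf_attach().
	 */
	static void
	zcp_attach_to_guest_buf(struct rte_mbuf *m, void *guest_va,
			phys_addr_t guest_pa, uint16_t guest_len)
	{
		m->buf_addr     = guest_va;	/* guest memory, not another mbuf's data room */
		m->buf_physaddr = guest_pa;
		m->buf_len      = guest_len;
		m->data_off     = RTE_PKTMBUF_HEADROOM;
		m->data_len     = 0;
		/*
		 * No parent mbuf, no refcnt bump on an owner mbuf, and the
		 * IND_ATTACHED_MBUF flag is never set - so a flag-based
		 * RTE_MBUF_INDIRECT(m) check will not recognise such an mbuf
		 * as attached.
		 */
	}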
Though I suppose we can replace pktmbuf_detach_zcp() with rte_pktmbuf_detach() - they do identical things.
BTW, I wonder whether you ever tested your patch.
My guess is that it would cause vhost with '--zero-copy' to crash or corrupt packets straight away.
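To spell that out against the txmbuf_clean_zcp() hunk below (an annotated copy of the patched loop; comments are mine and assume the flag-based RTE_MBUF_INDIRECT()):

	mbuf = __rte_mbuf_raw_alloc(vpool->pool);
	if (likely(RTE_MBUF_INDIRECT(mbuf)))	/* false: the mbuf was never      */
		rte_pktmbuf_detach(mbuf);	/* attached via                   */
						/* rte_pktmbuf_attach(), so the   */
						/* detach is silently skipped     */
	rte_ring_sp_enqueue(vpool->ring, mbuf);	/* and it goes back onto the ring */
						/* with buf_addr still pointing   */
						/* into the guest buffer          */

With the old MBUF_EXT_MEM() check (which compares buf_addr against the mbuf's own embedded data room) the detach did happen; after this patch it no longer does.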

Konstantin

> 
> Signed-off-by: Zoltan Kiss <zoltan.kiss@linaro.org>
> ---
>  examples/vhost/main.c | 38 +++++---------------------------------
>  1 file changed, 5 insertions(+), 33 deletions(-)
> 
> diff --git a/examples/vhost/main.c b/examples/vhost/main.c
> index c3fcb80..1c998a5 100644
> --- a/examples/vhost/main.c
> +++ b/examples/vhost/main.c
> @@ -139,8 +139,6 @@
>  /* Number of descriptors per cacheline. */
>  #define DESC_PER_CACHELINE (RTE_CACHE_LINE_SIZE / sizeof(struct vring_desc))
> 
> -#define MBUF_EXT_MEM(mb)   (RTE_MBUF_FROM_BADDR((mb)->buf_addr) != (mb))
> -
>  /* mask of enabled ports */
>  static uint32_t enabled_port_mask = 0;
> 
> @@ -1538,32 +1536,6 @@ attach_rxmbuf_zcp(struct virtio_net *dev)
>  	return;
>  }
> 
> -/*
> - * Detach an attched packet mbuf -
> - *  - restore original mbuf address and length values.
> - *  - reset pktmbuf data and data_len to their default values.
> - *  All other fields of the given packet mbuf will be left intact.
> - *
> - * @param m
> - *   The attached packet mbuf.
> - */
> -static inline void pktmbuf_detach_zcp(struct rte_mbuf *m)
> -{
> -	const struct rte_mempool *mp = m->pool;
> -	void *buf = RTE_MBUF_TO_BADDR(m);
> -	uint32_t buf_ofs;
> -	uint32_t buf_len = mp->elt_size - sizeof(*m);
> -	m->buf_physaddr = rte_mempool_virt2phy(mp, m) + sizeof(*m);
> -
> -	m->buf_addr = buf;
> -	m->buf_len = (uint16_t)buf_len;
> -
> -	buf_ofs = (RTE_PKTMBUF_HEADROOM <= m->buf_len) ?
> -			RTE_PKTMBUF_HEADROOM : m->buf_len;
> -	m->data_off = buf_ofs;
> -
> -	m->data_len = 0;
> -}
> 
>  /*
>   * This function is called after packets have been transimited. It fetchs mbuf
> @@ -1590,8 +1562,8 @@ txmbuf_clean_zcp(struct virtio_net *dev, struct vpool *vpool)
> 
>  	for (index = 0; index < mbuf_count; index++) {
>  		mbuf = __rte_mbuf_raw_alloc(vpool->pool);
> -		if (likely(MBUF_EXT_MEM(mbuf)))
> -			pktmbuf_detach_zcp(mbuf);
> +		if (likely(RTE_MBUF_INDIRECT(mbuf)))
> +			rte_pktmbuf_detach(mbuf);
>  		rte_ring_sp_enqueue(vpool->ring, mbuf);
> 
>  		/* Update used index buffer information. */
> @@ -1653,8 +1625,8 @@ static void mbuf_destroy_zcp(struct vpool *vpool)
>  	for (index = 0; index < mbuf_count; index++) {
>  		mbuf = __rte_mbuf_raw_alloc(vpool->pool);
>  		if (likely(mbuf != NULL)) {
> -			if (likely(MBUF_EXT_MEM(mbuf)))
> -				pktmbuf_detach_zcp(mbuf);
> +			if (likely(RTE_MBUF_INDIRECT(mbuf)))
> +				rte_pktmbuf_detach(mbuf);
>  			rte_ring_sp_enqueue(vpool->ring, (void *)mbuf);
>  		}
>  	}
> @@ -2149,7 +2121,7 @@ switch_worker_zcp(__attribute__((unused)) void *arg)
>  					}
>  					while (likely(rx_count)) {
>  						rx_count--;
> -						pktmbuf_detach_zcp(
> +						rte_pktmbuf_detach(
>  							pkts_burst[rx_count]);
>  						rte_ring_sp_enqueue(
>  							vpool_array[index].ring,
> --
> 1.9.1


Thread overview: 6+ messages
2015-03-25 18:43 Zoltan Kiss
2015-03-26  1:20 ` Ananyev, Konstantin [this message]
2015-03-26  3:32   ` Ouyang, Changchun
2015-03-26 16:46   ` Zoltan Kiss
2015-03-26 17:34     ` Ananyev, Konstantin
2015-03-26 18:01       ` Zoltan Kiss
