From: Zoltan Kiss <zoltan.kiss@linaro.org>
To: "Ananyev, Konstantin" <konstantin.ananyev@intel.com>,
	 "dev@dpdk.org" <dev@dpdk.org>
Subject: Re: [dpdk-dev] [PATCH] examples/vhost: use library routines instead of local copies
Date: Thu, 26 Mar 2015 18:01:42 +0000
Message-ID: <55144986.6020606@linaro.org>
In-Reply-To: <2601191342CEEE43887BDE71AB97725821407CAA@irsmsx105.ger.corp.intel.com>



On 26/03/15 17:34, Ananyev, Konstantin wrote:
>
>
>> -----Original Message-----
>> From: Zoltan Kiss [mailto:zoltan.kiss@linaro.org]
>> Sent: Thursday, March 26, 2015 4:46 PM
>> To: Ananyev, Konstantin; dev@dpdk.org
>> Subject: Re: [dpdk-dev] [PATCH] examples/vhost: use library routines instead of local copies
>>
>>
>>
>> On 26/03/15 01:20, Ananyev, Konstantin wrote:
>>>
>>>
>>>> -----Original Message-----
>>>> From: dev [mailto:dev-bounces@dpdk.org] On Behalf Of Zoltan Kiss
>>>> Sent: Wednesday, March 25, 2015 6:43 PM
>>>> To: dev@dpdk.org
>>>> Cc: Zoltan Kiss
>>>> Subject: [dpdk-dev] [PATCH] examples/vhost: use library routines instead of local copies
>>>>
>>>> This macro and function were copied from the mbuf library; there is
>>>> no reason to keep them.
>>>
>>> NACK
>>> You can't use the RTE_MBUF_INDIRECT macro here.
>>> If you look at the vhost code carefully, you'll realise that we don't use the standard
>>> rte_pktmbuf_attach() here, as we attach the mbuf not to another mbuf but to an
>>> external memory buffer passed to us by the virtio device.
>>> Look at attach_rxmbuf_zcp().
>> Yes, I think the proper fix is to set the flag in attach_rxmbuf_zcp()
>> and virtio_tx_route_zcp(), then you can use the library macro here.
>
> No, it is not.
> The IND_ATTACHED_MBUF flag indicates that the mbuf is attached to another mbuf, and
> __rte_pktmbuf_prefree_seg() would try to detach it.
> We definitely don't want to set IND_ATTACHED_MBUF here.
I see. Quite confusing how vhost reuses some library code to do something 
slightly different.
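
For reference, the distinction boils down to the two macros below. The
MBUF_EXT_MEM body is the one removed in the patch; the RTE_MBUF_INDIRECT
body is paraphrased from the mbuf headers of this era, so take it as a
sketch rather than the verbatim definition:

    /* vhost's local macro: an address test. True whenever the mbuf's data
     * buffer is not its own, including a buffer borrowed from the guest by
     * attach_rxmbuf_zcp(). */
    #define MBUF_EXT_MEM(mb)      (RTE_MBUF_FROM_BADDR((mb)->buf_addr) != (mb))

    /* library macro: a flag test. True only for an mbuf attached to another
     * mbuf by rte_pktmbuf_attach(), which sets IND_ATTACHED_MBUF. The
     * zero-copy attach never sets that flag, and setting it would send the
     * mbuf down the detach path in __rte_pktmbuf_prefree_seg(), which
     * assumes a parent mbuf that does not exist here. */
    #define RTE_MBUF_INDIRECT(mb) ((mb)->ol_flags & IND_ATTACHED_MBUF)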

> I think there is no need to fix anything here.
>
> Konstantin
>
>>
>>> Though I suppose we can replace pktmbuf_detach_zcp() with rte_pktmbuf_detach() - they do identical things.
>> Yes, the only difference is that the latter does "m->ol_flags = 0" as well.
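
To make the comparison concrete, here is a sketch of the shared detach
logic, assuming the library routine of this era matches the removed
pktmbuf_detach_zcp() (quoted in the patch below) apart from the final
flag reset:

    /* Sketch: restore the mbuf's own data buffer and reset the offsets;
     * rte_pktmbuf_detach() does the same plus the ol_flags reset. */
    static inline void detach_sketch(struct rte_mbuf *m)
    {
    	const struct rte_mempool *mp = m->pool;

    	m->buf_addr = RTE_MBUF_TO_BADDR(m);  /* back to the own data area */
    	m->buf_physaddr = rte_mempool_virt2phy(mp, m) + sizeof(*m);
    	m->buf_len = (uint16_t)(mp->elt_size - sizeof(*m));
    	m->data_off = (RTE_PKTMBUF_HEADROOM <= m->buf_len) ?
    			RTE_PKTMBUF_HEADROOM : m->buf_len;
    	m->data_len = 0;
    	m->ol_flags = 0;  /* the one statement pktmbuf_detach_zcp() lacks */
    }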
>>
>>> BTW, I wonder: did you ever test your patch?
>> Indeed I did not, shame on me. I don't have a KVM setup at hand. This
>> fix was born as a side effect of the cleanup in the library, and I'm
>> afraid I don't have the time right now to create one. Could anyone who
>> has a setup at hand help out by running a quick test? (for the v2 of
>> this patch, which I'll send shortly)
>
>
>>
>> Regards,
>>
>> Zoltan
>>
>>> My guess is it would cause vhost with '--zero-copy' to crash or corrupt the packets straight away.
>>>
>>> Konstantin
>>>
>>>>
>>>> Signed-off-by: Zoltan Kiss <zoltan.kiss@linaro.org>
>>>> ---
>>>>    examples/vhost/main.c | 38 +++++---------------------------------
>>>>    1 file changed, 5 insertions(+), 33 deletions(-)
>>>>
>>>> diff --git a/examples/vhost/main.c b/examples/vhost/main.c
>>>> index c3fcb80..1c998a5 100644
>>>> --- a/examples/vhost/main.c
>>>> +++ b/examples/vhost/main.c
>>>> @@ -139,8 +139,6 @@
>>>>    /* Number of descriptors per cacheline. */
>>>>    #define DESC_PER_CACHELINE (RTE_CACHE_LINE_SIZE / sizeof(struct vring_desc))
>>>>
>>>> -#define MBUF_EXT_MEM(mb)   (RTE_MBUF_FROM_BADDR((mb)->buf_addr) != (mb))
>>>> -
>>>>    /* mask of enabled ports */
>>>>    static uint32_t enabled_port_mask = 0;
>>>>
>>>> @@ -1538,32 +1536,6 @@ attach_rxmbuf_zcp(struct virtio_net *dev)
>>>>    	return;
>>>>    }
>>>>
>>>> -/*
>>>> - * Detach an attached packet mbuf -
>>>> - *  - restore original mbuf address and length values.
>>>> - *  - reset pktmbuf data and data_len to their default values.
>>>> - *  All other fields of the given packet mbuf will be left intact.
>>>> - *
>>>> - * @param m
>>>> - *   The attached packet mbuf.
>>>> - */
>>>> -static inline void pktmbuf_detach_zcp(struct rte_mbuf *m)
>>>> -{
>>>> -	const struct rte_mempool *mp = m->pool;
>>>> -	void *buf = RTE_MBUF_TO_BADDR(m);
>>>> -	uint32_t buf_ofs;
>>>> -	uint32_t buf_len = mp->elt_size - sizeof(*m);
>>>> -	m->buf_physaddr = rte_mempool_virt2phy(mp, m) + sizeof(*m);
>>>> -
>>>> -	m->buf_addr = buf;
>>>> -	m->buf_len = (uint16_t)buf_len;
>>>> -
>>>> -	buf_ofs = (RTE_PKTMBUF_HEADROOM <= m->buf_len) ?
>>>> -			RTE_PKTMBUF_HEADROOM : m->buf_len;
>>>> -	m->data_off = buf_ofs;
>>>> -
>>>> -	m->data_len = 0;
>>>> -}
>>>>
>>>>    /*
>>>>     * This function is called after packets have been transmitted. It fetches mbuf
>>>> @@ -1590,8 +1562,8 @@ txmbuf_clean_zcp(struct virtio_net *dev, struct vpool *vpool)
>>>>
>>>>    	for (index = 0; index < mbuf_count; index++) {
>>>>    		mbuf = __rte_mbuf_raw_alloc(vpool->pool);
>>>> -		if (likely(MBUF_EXT_MEM(mbuf)))
>>>> -			pktmbuf_detach_zcp(mbuf);
>>>> +		if (likely(RTE_MBUF_INDIRECT(mbuf)))
>>>> +			rte_pktmbuf_detach(mbuf);
>>>>    		rte_ring_sp_enqueue(vpool->ring, mbuf);
>>>>
>>>>    		/* Update used index buffer information. */
>>>> @@ -1653,8 +1625,8 @@ static void mbuf_destroy_zcp(struct vpool *vpool)
>>>>    	for (index = 0; index < mbuf_count; index++) {
>>>>    		mbuf = __rte_mbuf_raw_alloc(vpool->pool);
>>>>    		if (likely(mbuf != NULL)) {
>>>> -			if (likely(MBUF_EXT_MEM(mbuf)))
>>>> -				pktmbuf_detach_zcp(mbuf);
>>>> +			if (likely(RTE_MBUF_INDIRECT(mbuf)))
>>>> +				rte_pktmbuf_detach(mbuf);
>>>>    			rte_ring_sp_enqueue(vpool->ring, (void *)mbuf);
>>>>    		}
>>>>    	}
>>>> @@ -2149,7 +2121,7 @@ switch_worker_zcp(__attribute__((unused)) void *arg)
>>>>    					}
>>>>    					while (likely(rx_count)) {
>>>>    						rx_count--;
>>>> -						pktmbuf_detach_zcp(
>>>> +						rte_pktmbuf_detach(
>>>>    							pkts_burst[rx_count]);
>>>>    						rte_ring_sp_enqueue(
>>>>    							vpool_array[index].ring,
>>>> --
>>>> 1.9.1
>>>
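
For readers without the vhost source at hand, here is a much-simplified,
hypothetical sketch of what attach_rxmbuf_zcp() amounts to; descriptor
bookkeeping and the real field arithmetic are elided, and the names are
illustrative only:

    /* Point an mbuf at a guest-supplied buffer. ol_flags is left untouched,
     * so afterwards MBUF_EXT_MEM(m) is true but RTE_MBUF_INDIRECT(m) is not,
     * which is why the patch's substitution breaks the zero-copy path. */
    static void attach_guest_buf_sketch(struct rte_mbuf *m, void *guest_va,
    				    uint64_t guest_pa, uint16_t len)
    {
    	m->buf_addr = guest_va;      /* external memory, not another mbuf */
    	m->buf_physaddr = guest_pa;
    	m->buf_len = len;
    	m->data_off = 0;
    	m->data_len = 0;
    }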


Thread overview: 6+ messages
2015-03-25 18:43 Zoltan Kiss
2015-03-26  1:20 ` Ananyev, Konstantin
2015-03-26  3:32   ` Ouyang, Changchun
2015-03-26 16:46   ` Zoltan Kiss
2015-03-26 17:34     ` Ananyev, Konstantin
2015-03-26 18:01       ` Zoltan Kiss [this message]
