Message-ID: <551437D6.80503@linaro.org>
Date: Thu, 26 Mar 2015 16:46:14 +0000
From: Zoltan Kiss
To: "Ananyev, Konstantin" , "dev@dpdk.org"
References: <1427309006-26590-1-git-send-email-zoltan.kiss@linaro.org> <2601191342CEEE43887BDE71AB97725821407936@irsmsx105.ger.corp.intel.com>
In-Reply-To: <2601191342CEEE43887BDE71AB97725821407936@irsmsx105.ger.corp.intel.com>
Subject: Re: [dpdk-dev] [PATCH] examples/vhost: use library routines instead of local copies
List-Id: patches and discussions about DPDK

On 26/03/15 01:20, Ananyev, Konstantin wrote:
>
>
>> -----Original Message-----
>> From: dev [mailto:dev-bounces@dpdk.org] On Behalf Of Zoltan Kiss
>> Sent: Wednesday, March 25, 2015 6:43 PM
>> To: dev@dpdk.org
>> Cc: Zoltan Kiss
>> Subject: [dpdk-dev] [PATCH] examples/vhost: use library routines instead of local copies
>>
>> This macro and function were copies from the mbuf library, no reason to keep
>> them.
>
> NACK
> You can't use RTE_MBUF_INDIRECT macro here.
> If you'll look at vhost code carefully, you'll realise that we don't use standard rte_pktmbuf_attach() here.
> As we attach mbuf not to another mbuf but to external memory buffer, passed to us by virtio device.
> Look at attach_rxmbuf_zcp().

Yes, I think the proper fix is to set the flag in attach_rxmbuf_zcp() and
virtio_tx_route_zcp(), then you can use the library macro here.

> Though I suppose, we can replace pktmbuf_detach_zcp() , with rte_pktmbuf_detach() - they are doing identical things.

Yes, the only difference is that the latter does "m->ol_flags = 0" as well.

> BTW, I wonder did you ever test your patch?

Indeed I did not, shame on me. I don't have a KVM setup at hand. This fix
was born as a side effect of the cleanup in the library, and I'm afraid I
don't have the time right now to create a KVM setup. Could anyone who has
one at hand help out by running a quick test? (for the v2 of this patch,
which I'll send in shortly)

Regards,

Zoltan

> My guess it would cause vhost with '--zero-copy' to crash or corrupt the packets straightway.
>
> Konstantin
>
>>
>> Signed-off-by: Zoltan Kiss
>> ---
>>  examples/vhost/main.c | 38 +++++---------------------------------
>>  1 file changed, 5 insertions(+), 33 deletions(-)
>>
>> diff --git a/examples/vhost/main.c b/examples/vhost/main.c
>> index c3fcb80..1c998a5 100644
>> --- a/examples/vhost/main.c
>> +++ b/examples/vhost/main.c
>> @@ -139,8 +139,6 @@
>>  /* Number of descriptors per cacheline. */
>>  #define DESC_PER_CACHELINE (RTE_CACHE_LINE_SIZE / sizeof(struct vring_desc))
>>
>> -#define MBUF_EXT_MEM(mb) (RTE_MBUF_FROM_BADDR((mb)->buf_addr) != (mb))
>> -
>>  /* mask of enabled ports */
>>  static uint32_t enabled_port_mask = 0;
>>
>> @@ -1538,32 +1536,6 @@ attach_rxmbuf_zcp(struct virtio_net *dev)
>>  	return;
>>  }
>>
>> -/*
>> - * Detach an attched packet mbuf -
>> - *  - restore original mbuf address and length values.
>> - *  - reset pktmbuf data and data_len to their default values.
>> - *  All other fields of the given packet mbuf will be left intact.
>> - *
>> - * @param m
>> - *   The attached packet mbuf.
>> - */
>> -static inline void pktmbuf_detach_zcp(struct rte_mbuf *m)
>> -{
>> -	const struct rte_mempool *mp = m->pool;
>> -	void *buf = RTE_MBUF_TO_BADDR(m);
>> -	uint32_t buf_ofs;
>> -	uint32_t buf_len = mp->elt_size - sizeof(*m);
>> -	m->buf_physaddr = rte_mempool_virt2phy(mp, m) + sizeof(*m);
>> -
>> -	m->buf_addr = buf;
>> -	m->buf_len = (uint16_t)buf_len;
>> -
>> -	buf_ofs = (RTE_PKTMBUF_HEADROOM <= m->buf_len) ?
>> -			RTE_PKTMBUF_HEADROOM : m->buf_len;
>> -	m->data_off = buf_ofs;
>> -
>> -	m->data_len = 0;
>> -}
>>
>>  /*
>>   * This function is called after packets have been transimited. It fetchs mbuf
>> @@ -1590,8 +1562,8 @@ txmbuf_clean_zcp(struct virtio_net *dev, struct vpool *vpool)
>>
>>  	for (index = 0; index < mbuf_count; index++) {
>>  		mbuf = __rte_mbuf_raw_alloc(vpool->pool);
>> -		if (likely(MBUF_EXT_MEM(mbuf)))
>> -			pktmbuf_detach_zcp(mbuf);
>> +		if (likely(RTE_MBUF_INDIRECT(mbuf)))
>> +			rte_pktmbuf_detach(mbuf);
>>  		rte_ring_sp_enqueue(vpool->ring, mbuf);
>>
>>  		/* Update used index buffer information. */
>> @@ -1653,8 +1625,8 @@ static void mbuf_destroy_zcp(struct vpool *vpool)
>>  	for (index = 0; index < mbuf_count; index++) {
>>  		mbuf = __rte_mbuf_raw_alloc(vpool->pool);
>>  		if (likely(mbuf != NULL)) {
>> -			if (likely(MBUF_EXT_MEM(mbuf)))
>> -				pktmbuf_detach_zcp(mbuf);
>> +			if (likely(RTE_MBUF_INDIRECT(mbuf)))
>> +				rte_pktmbuf_detach(mbuf);
>>  			rte_ring_sp_enqueue(vpool->ring, (void *)mbuf);
>>  		}
>>  	}
>> @@ -2149,7 +2121,7 @@ switch_worker_zcp(__attribute__((unused)) void *arg)
>>  		}
>>  		while (likely(rx_count)) {
>>  			rx_count--;
>> -			pktmbuf_detach_zcp(
>> +			rte_pktmbuf_detach(
>>  				pkts_burst[rx_count]);
>>  			rte_ring_sp_enqueue(
>>  				vpool_array[index].ring,
>> --
>> 1.9.1
>