From: Vijayakumar Muthuvel Manickam
To: "Xie, Huawei"
Cc: "dev@dpdk.org"
Subject: Re: [dpdk-dev] 32 bit virtio_pmd pkt i/o issue
Date: Wed, 9 Jul 2014 14:35:51 -0700

Hi Huawei,

Thanks a lot for pointing out the cause of this issue. I changed the *virtio_net_hdr_mem* member in *struct virtqueue* from (void *) to phys_addr_t, made the corresponding typecast changes in the code, and no longer see the issue.
Below is the diff of my changes:

diff -Naur a/librte_pmd_virtio/virtio_ethdev.c b/librte_pmd_virtio/virtio_ethdev.c
--- a/librte_pmd_virtio/virtio_ethdev.c	2014-02-26 10:07:28.000000000 -0800
+++ b/librte_pmd_virtio/virtio_ethdev.c	2014-07-09 14:16:24.000000000 -0700
@@ -189,7 +189,7 @@
 	PMD_INIT_LOG(DEBUG, "vq->vq_ring_mem: 0x%"PRIx64"\n", (uint64_t)mz->phys_addr);
 	PMD_INIT_LOG(DEBUG, "vq->vq_ring_virt_mem: 0x%"PRIx64"\n", (uint64_t)mz->addr);
 	vq->virtio_net_hdr_mz = NULL;
-	vq->virtio_net_hdr_mem = (void *)NULL;
+	vq->virtio_net_hdr_mem = 0;

 	if (queue_type == VTNET_TQ) {
 		/*
@@ -204,7 +204,7 @@
 			rte_free(vq);
 			return (-ENOMEM);
 		}
-		vq->virtio_net_hdr_mem = (void *)(uintptr_t)vq->virtio_net_hdr_mz->phys_addr;
+		vq->virtio_net_hdr_mem = vq->virtio_net_hdr_mz->phys_addr;
 		memset(vq->virtio_net_hdr_mz->addr, 0, vq_size * sizeof(struct virtio_net_hdr));
 	} else if (queue_type == VTNET_CQ) {
 		/* Allocate a page for control vq command, data and status */
@@ -216,7 +216,7 @@
 			rte_free(vq);
 			return (-ENOMEM);
 		}
-		vq->virtio_net_hdr_mem = (void *)(uintptr_t)vq->virtio_net_hdr_mz->phys_addr;
+		vq->virtio_net_hdr_mem = vq->virtio_net_hdr_mz->phys_addr;
 		memset(vq->virtio_net_hdr_mz->addr, 0, PAGE_SIZE);
 	}

diff -Naur a/librte_pmd_virtio/virtqueue.h b/librte_pmd_virtio/virtqueue.h
--- a/librte_pmd_virtio/virtqueue.h	2014-02-26 10:07:28.000000000 -0800
+++ b/librte_pmd_virtio/virtqueue.h	2014-07-09 14:01:59.000000000 -0700
@@ -134,7 +134,7 @@
 	 */
 	uint16_t vq_used_cons_idx;
 	uint16_t vq_avail_idx;
-	void *virtio_net_hdr_mem; /**< hdr for each xmit packet */
+	phys_addr_t virtio_net_hdr_mem; /**< hdr for each xmit packet */

 	struct vq_desc_extra {
 		void *cookie;
@@ -325,7 +325,7 @@
 	dxp->ndescs = needed;

 	start_dp = txvq->vq_ring.desc;
-	start_dp[idx].addr = (uint64_t)(uintptr_t)txvq->virtio_net_hdr_mem + idx * sizeof(struct virtio_net_hdr);
+	start_dp[idx].addr = txvq->virtio_net_hdr_mem + idx * sizeof(struct virtio_net_hdr);
 	start_dp[idx].len = sizeof(struct virtio_net_hdr);
 	start_dp[idx].flags = VRING_DESC_F_NEXT;
 	idx = start_dp[idx].next;

Thanks,
Vijay


On Wed, Jul 9, 2014 at 6:41 AM, Xie, Huawei wrote:
> This is due to an inappropriate conversion like
>     vq->virtio_net_hdr_mem = (void *)(uintptr_t)vq->virtio_net_hdr_mz->phys_addr;
> Those two types have different widths on 32-bit and 64-bit, which cuts the
> higher 32 bits for a 32-bit app running on a 64-bit system. Will provide a
> fix for this. Don't know if all DPDK examples and libs handle cases like
> this properly.
>
> > -----Original Message-----
> > From: dev [mailto:dev-bounces@dpdk.org] On Behalf Of Vijayakumar Muthuvel
> > Manickam
> > Sent: Wednesday, July 09, 2014 10:30 AM
> > To: dev@dpdk.org
> > Subject: [dpdk-dev] 32 bit virtio_pmd pkt i/o issue
> >
> > Hi,
> >
> > I am using the 32-bit VIRTIO PMD from dpdk-1.6.0r1 and seeing a basic
> > packet I/O issue under some VM configurations when testing with the l2fwd
> > application.
> >
> > The issue is that Tx on the virtio NIC is not working. Packets enqueued
> > by the virtio pmd on the Tx queue are not dequeued by the backend
> > vhost-net for some reason.
> >
> > I confirmed this after seeing that the RX counter on the corresponding
> > vnetx interface on the KVM host is zero.
> >
> > As a result, after enqueuing the first 128 packets (half of the 256 total
> > size), the Tx queue becomes full and no more packets can be enqueued.
> > Each packet uses 2 descriptors in the Tx queue, which allows 128 packets
> > to be enqueued.
> >
> > The issue is not seen when using the 64-bit l2fwd application, which uses
> > the 64-bit virtio pmd.
> >
> > With the 32-bit l2fwd application I see this issue for some combinations
> > of core count and RAM allocated to the VM, but it works in other cases,
> > as below:
> >
> > Failure cases:
> > 8 cores and 16G/12G RAM allocated to the VM
> >
> > Some of the working cases:
> > 8 cores and 8G/9G/10G/11G/13G allocated to the VM
> > 2 cores and any RAM allocation, including 16G and 12G
> >
> > One more observation:
> > By default I reserve 128 2MB hugepages for DPDK. After seeing the above
> > failure scenario, if I just kill l2fwd and reduce the number of hugepages
> > to 64 with the command
> >
> >     echo 64 > /sys/kernel/mm/hugepages/hugepages-2048kB/nr_hugepages
> >
> > the same l2fwd app starts working. I believe the issue has something to
> > do with the physical memzone the virtqueue is allocated from each time.
> >
> > I am using igb_uio.ko built from the x86_64-default-linuxapp-gcc config,
> > and all other dpdk libs built from i686-default-linuxapp-gcc, because my
> > kernel is 64-bit and my application is 32-bit.
> >
> > Below are the details of my setup:
> >
> > Linux kernel : 2.6.32-220.el6.x86_64
> > DPDK version : dpdk-1.6.0r1
> > Hugepages : 128 2MB hugepages
> > DPDK binaries used:
> > * 64-bit igb_uio.ko
> > * 32-bit l2fwd application
> >
> > I'd appreciate it if you could give me some pointers on debugging the
> > issue.
> >
> > Thanks,
> > Vijay
>