From: "Xie, Huawei" <huawei.xie@intel.com>
To: Vijayakumar Muthuvel Manickam <mmvijay@gmail.com>,
"dev@dpdk.org" <dev@dpdk.org>
Subject: Re: [dpdk-dev] 32 bit virtio_pmd pkt i/o issue
Date: Wed, 9 Jul 2014 13:41:23 +0000
Message-ID: <C37D651A908B024F974696C65296B57B0F22D607@SHSMSX101.ccr.corp.intel.com>
In-Reply-To: <CADJ2ZGMTGRKRH367j-UumJOgPo6+ccsBO2hRtm6xexvw-YEeTQ@mail.gmail.com>
This is due to an inappropriate conversion like:

vq->virtio_net_hdr_mem = (void *)(uintptr_t)vq->virtio_net_hdr_mz->phys_addr;

phys_addr is 64 bits wide, while void * (and uintptr_t) is only 32 bits in a
32-bit application, so the cast truncates the upper 32 bits of the physical
address when a 32-bit app runs on a 64-bit system. I will provide a fix for
this. I don't know whether all DPDK examples and libraries handle cases like
this properly.
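
A minimal sketch of the truncation and of one possible shape of the fix:
keep the physical address in a 64-bit field rather than a pointer. The
phys_addr_t typedef mirrors DPDK's; the rest is illustrative, not the
final patch:

    #include <stdint.h>
    #include <stdio.h>

    typedef uint64_t phys_addr_t;   /* 64 bits even on 32-bit DPDK builds */

    int main(void)
    {
            phys_addr_t phys = 0x100001000ULL; /* example address above 4 GB */

            /* Buggy pattern: on a 32-bit build, uintptr_t and void * are
             * 32 bits wide, so this cast silently drops the upper 32 bits. */
            void *truncated = (void *)(uintptr_t)phys;

            /* Fix sketch: store the physical address in a 64-bit field
             * (e.g. make virtio_net_hdr_mem a phys_addr_t, not a void *). */
            phys_addr_t preserved = phys;

            printf("truncated: %p  preserved: 0x%llx\n",
                   truncated, (unsigned long long)preserved);
            return 0;
    }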
> -----Original Message-----
> From: dev [mailto:dev-bounces@dpdk.org] On Behalf Of Vijayakumar Muthuvel
> Manickam
> Sent: Wednesday, July 09, 2014 10:30 AM
> To: dev@dpdk.org
> Subject: [dpdk-dev] 32 bit virtio_pmd pkt i/o issue
>
> Hi,
>
>
> I am using the 32-bit virtio PMD from dpdk-1.6.0r1 and am seeing a basic
> packet I/O issue under some VM configurations when testing with the l2fwd
> application.
>
> The issue is that Tx on the virtio NIC is not working: for some reason,
> packets enqueued by the virtio PMD on the Tx queue are never dequeued by
> the vhost-net backend.
>
> I confirmed this after seeing that the RX counter on the corresponding
> vnetX interface on the KVM host stays at zero.
>
> As a result, after enqueuing the first 128 packets the Tx queue becomes
> full and no more packets can be enqueued: each packet consumes two
> descriptors, so the 256-entry Tx queue holds at most 128 packets.
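
That capacity figure checks out: in the DPDK virtio Tx path each
single-segment packet takes one descriptor for the virtio_net_hdr and one
for the mbuf data. A toy check of the arithmetic, not PMD code:

    #include <stdio.h>

    int main(void)
    {
            int vq_size = 256;      /* Tx queue size reported above */
            int desc_per_pkt = 2;   /* header descriptor + data descriptor */

            printf("max in-flight Tx packets: %d\n",
                   vq_size / desc_per_pkt);  /* prints 128 */
            return 0;
    }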
>
>
>
> The issue is not seen with the 64-bit l2fwd application, which uses the
> 64-bit virtio PMD.
>
>
> With the 32-bit l2fwd application I see the issue only for some
> combinations of cores and RAM allocated to the VM; other combinations
> work, as below:
>
>
> Failure cases:
>
> 8 cores and 16G/12G RAM allocated to the VM
>
> Some of the working cases:
>
> 8 cores and 8G/9G/10G/11G/13G RAM allocated to the VM
>
> 2 cores and any RAM allocation, including 16G and 12G
>
> One more observation: by default I reserve 128 2MB hugepages for DPDK.
> After hitting the failure scenario above, if I just kill l2fwd and reduce
> the number of hugepages to 64 with the command
>
> echo 64 > /sys/kernel/mm/hugepages/hugepages-2048kB/nr_hugepages
>
> the same l2fwd app starts working. I believe the issue has something to do
> with the physical address of the memzone the virtqueue is allocated from
> each time.
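
That observation fits the truncation explanation: with more hugepage memory
reserved, the virtqueue memzone is more likely to land above the 4 GB
physical boundary, which is exactly where a 32-bit pointer cast corrupts
the address. A sketch of a check for this; call it after rte_eal_init(),
and note that the memzone name you pass is hypothetical here, not
necessarily the one the PMD registers:

    #include <inttypes.h>
    #include <stdint.h>
    #include <stdio.h>
    #include <rte_memzone.h>

    /* Look up a memzone by name and warn if its physical address would
     * not survive a round-trip through a 32-bit pointer. */
    static void check_zone_fits_32bit(const char *name)
    {
            const struct rte_memzone *mz = rte_memzone_lookup(name);

            if (mz == NULL) {
                    printf("memzone %s not found\n", name);
                    return;
            }
            if (mz->phys_addr > UINT32_MAX)
                    printf("%s: phys 0x%" PRIx64 " lies above 4 GB and is"
                           " truncated on a 32-bit build\n",
                           mz->name, (uint64_t)mz->phys_addr);
    }

Running this with different hugepage counts should show the zone crossing
the 4 GB line in exactly the failing configurations.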
>
>
>
>
> I am using igb_uio.ko built from the x86_64-default-linuxapp-gcc config
> and all other DPDK libraries built from i686-default-linuxapp-gcc, since
> my kernel is 64-bit while my application is 32-bit.
>
>
>
> Below are the details of my setup:
>
>
>
> Linux kernel : 2.6.32-220.el6.x86_64
>
> DPDK version : dpdk-1.6.0r1
>
> Hugepages : 128 2MB hugepages
>
> DPDK binaries used:
>
> * 64-bit igb_uio.ko
>
> * 32-bit l2fwd application
>
>
> I'd appreciate it if you could give me some pointers on debugging this
> issue.
>
>
> Thanks,
> Vijay