DPDK patches and discussions
* [dpdk-dev] 32 bit virtio_pmd pkt i/o issue
@ 2014-07-09  2:29 Vijayakumar Muthuvel Manickam
  2014-07-09 13:41 ` Xie, Huawei
  0 siblings, 1 reply; 3+ messages in thread
From: Vijayakumar Muthuvel Manickam @ 2014-07-09  2:29 UTC (permalink / raw)
  To: dev

Hi,


I am using the 32-bit virtio PMD from dpdk-1.6.0r1 and am seeing a basic
packet I/O issue under some VM configurations when testing with the l2fwd
application.

The issue is that Tx on the virtio NIC is not working: packets enqueued by
the virtio PMD on the Tx queue are not dequeued by the backend vhost-net for
some reason.

I confirmed this by checking that the RX counter on the corresponding vnetX
interface on the KVM host stays at zero.

As a result, the Tx queue becomes full after the first 128 packets are
enqueued (half of the 256-descriptor queue) and no more packets can be sent.
Since each packet uses 2 descriptors in the Tx queue, only 128 packets fit.
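
For reference, a quick sketch of that arithmetic (a standalone illustration,
not DPDK code; it assumes the default 256-entry Tx ring described above):

#include <stdio.h>

int main(void)
{
        unsigned int vq_size = 256;     /* total Tx descriptors */
        unsigned int descs_per_pkt = 2; /* one for the virtio_net_hdr, one for the packet data */

        /* 256 / 2 = 128 packets fit before the ring is full */
        printf("max packets in flight: %u\n", vq_size / descs_per_pkt);
        return 0;
}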



The issue is not seen with the 64-bit l2fwd application, which uses the
64-bit virtio PMD.


With the 32-bit l2fwd application I see this issue for some combinations of
cores and RAM allocated to the VM, but it works in other cases, as listed
below:


Failure cases:

8 cores and 16G/12G RAM allocated to the VM



Some of the working cases:

8 cores and 8G/9G/10G/11G/13G RAM allocated to the VM

2 cores and any RAM allocation, including 16G and 12G

One more observation: by default I reserve 128 2MB hugepages for DPDK. After
hitting the failure scenario above, if I just kill l2fwd and reduce the
number of hugepages to 64 with the command


echo 64 > /sys/kernel/mm/hugepages/hugepages-2048kB/nr_hugepages


the same l2fwd app starts working. I believe the issue has something to do
with the physical memzone the virtqueue is allocated from each time.




I am using igb_uio.ko built from the x86_64-default-linuxapp-gcc config and
all other DPDK libs built from i686-default-linuxapp-gcc, because my kernel
is 64-bit and my application is 32-bit.



Below are the details of my setup:



Linux kernel : 2.6.32-220.el6.x86_64

DPDK version : dpdk-1.6.0r1

Hugepages : 128 2MB hugepages

DPDK binaries used:

* 64-bit igb_uio.ko

* 32-bit l2fwd application


I'd appreciate it if you could give me some pointers on debugging this issue.


Thanks,
Vijay


* Re: [dpdk-dev] 32 bit virtio_pmd pkt i/o issue
  2014-07-09  2:29 [dpdk-dev] 32 bit virtio_pmd pkt i/o issue Vijayakumar Muthuvel Manickam
@ 2014-07-09 13:41 ` Xie, Huawei
  2014-07-09 21:35   ` Vijayakumar Muthuvel Manickam
  0 siblings, 1 reply; 3+ messages in thread
From: Xie, Huawei @ 2014-07-09 13:41 UTC (permalink / raw)
  To: Vijayakumar Muthuvel Manickam, dev

This is due to an inappropriate conversion like
               vq->virtio_net_hdr_mem = (void *)(uintptr_t)vq->virtio_net_hdr_mz->phys_addr;
The two types have different widths on 32-bit and 64-bit builds, so the cast
cuts off the upper 32 bits of the physical address for a 32-bit app running
on a 64-bit system. I will provide a fix for this. I don't know whether all
DPDK examples and libs handle cases like this properly.
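
A minimal standalone sketch of the truncation (a hypothetical example, not
the DPDK code itself; the address value is made up so that it sits above 4 GB):

#include <inttypes.h>
#include <stdint.h>
#include <stdio.h>

int main(void)
{
        uint64_t phys_addr = 0x3c0000000ULL;          /* physical address above 4 GB */
        void *p = (void *)(uintptr_t)phys_addr;       /* void * / uintptr_t are 32 bits in a 32-bit app */
        uint64_t round_trip = (uint64_t)(uintptr_t)p; /* upper 32 bits are lost */

        printf("original:   0x%" PRIx64 "\n", phys_addr);  /* 0x3c0000000 */
        printf("round trip: 0x%" PRIx64 "\n", round_trip); /* 0xc0000000 on a 32-bit build */
        return 0;
}

That would also explain why the failure depends on the VM's RAM size and
hugepage layout: the truncation only matters when the memzone holding the
virtio_net headers happens to get a physical address above 4 GB.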


* Re: [dpdk-dev] 32 bit virtio_pmd pkt i/o issue
  2014-07-09 13:41 ` Xie, Huawei
@ 2014-07-09 21:35   ` Vijayakumar Muthuvel Manickam
  0 siblings, 0 replies; 3+ messages in thread
From: Vijayakumar Muthuvel Manickam @ 2014-07-09 21:35 UTC (permalink / raw)
  To: Xie, Huawei; +Cc: dev

Hi Huawei,

Thanks a lot for pointing out the cause of this issue.
I changed the *virtio_net_hdr_mem* member in *struct virtqueue* from (void *)
to phys_addr_t, made the necessary typecast changes in the code, and no
longer see the issue after my changes.

Below is the diff of my changes:

diff -Naur a/librte_pmd_virtio/virtio_ethdev.c b/librte_pmd_virtio/virtio_ethdev.c
--- a/librte_pmd_virtio/virtio_ethdev.c 2014-02-26 10:07:28.000000000 -0800
+++ b/librte_pmd_virtio/virtio_ethdev.c 2014-07-09 14:16:24.000000000 -0700
@@ -189,7 +189,7 @@
        PMD_INIT_LOG(DEBUG, "vq->vq_ring_mem:      0x%"PRIx64"\n", (uint64_t)mz->phys_addr);
        PMD_INIT_LOG(DEBUG, "vq->vq_ring_virt_mem: 0x%"PRIx64"\n", (uint64_t)mz->addr);
        vq->virtio_net_hdr_mz  = NULL;
-       vq->virtio_net_hdr_mem = (void *)NULL;
+       vq->virtio_net_hdr_mem = 0;

        if (queue_type == VTNET_TQ) {
                /*
@@ -204,7 +204,7 @@
                        rte_free(vq);
                        return (-ENOMEM);
                }
-               vq->virtio_net_hdr_mem = (void *)(uintptr_t)vq->virtio_net_hdr_mz->phys_addr;
+               vq->virtio_net_hdr_mem = vq->virtio_net_hdr_mz->phys_addr;
                memset(vq->virtio_net_hdr_mz->addr, 0, vq_size * sizeof(struct virtio_net_hdr));
        } else if (queue_type == VTNET_CQ) {
                /* Allocate a page for control vq command, data and status */
@@ -216,7 +216,7 @@
                        rte_free(vq);
                        return (-ENOMEM);
                }
-               vq->virtio_net_hdr_mem = (void *)(uintptr_t)vq->virtio_net_hdr_mz->phys_addr;
+               vq->virtio_net_hdr_mem = vq->virtio_net_hdr_mz->phys_addr;
                memset(vq->virtio_net_hdr_mz->addr, 0, PAGE_SIZE);
        }

diff -Naur a/librte_pmd_virtio/virtqueue.h b/librte_pmd_virtio/virtqueue.h
--- a/librte_pmd_virtio/virtqueue.h     2014-02-26 10:07:28.000000000 -0800
+++ b/librte_pmd_virtio/virtqueue.h     2014-07-09 14:01:59.000000000 -0700
@@ -134,7 +134,7 @@
         */
        uint16_t vq_used_cons_idx;
        uint16_t vq_avail_idx;
-       void     *virtio_net_hdr_mem; /**< hdr for each xmit packet */
+       phys_addr_t virtio_net_hdr_mem; /**< hdr for each xmit packet */

        struct vq_desc_extra {
                void              *cookie;
@@ -325,7 +325,7 @@
        dxp->ndescs = needed;

        start_dp = txvq->vq_ring.desc;
-       start_dp[idx].addr  = (uint64_t)(uintptr_t)txvq->virtio_net_hdr_mem + idx * sizeof(struct virtio_net_hdr);
+       start_dp[idx].addr  = txvq->virtio_net_hdr_mem + idx * sizeof(struct virtio_net_hdr);
        start_dp[idx].len   = sizeof(struct virtio_net_hdr);
        start_dp[idx].flags = VRING_DESC_F_NEXT;
        idx = start_dp[idx].next;



Thanks,
Vijay



