From: Vijayakumar Muthuvel Manickam
To: dev@dpdk.org
Date: Tue, 8 Jul 2014 19:29:50 -0700
Subject: [dpdk-dev] 32 bit virtio_pmd pkt i/o issue

Hi,

I am using the 32-bit virtio PMD from dpdk-1.6.0r1 and I am seeing a basic packet I/O issue under some VM configurations when testing with the l2fwd application.

The issue is that Tx on the virtio NIC is not working: packets enqueued by the virtio PMD on the Tx queue are never dequeued by the vhost-net backend. I confirmed this by observing that the RX counter on the corresponding vnetX interface on the KVM host stays at zero. Since each packet consumes 2 descriptors in the 256-entry Tx queue, the queue fills up after the first 128 packets and nothing more can be enqueued.

The issue is not seen with the 64-bit l2fwd application, which uses the 64-bit virtio PMD. With the 32-bit l2fwd application the failure depends on the cores and RAM allocated to the VM:

Failure cases:
* 8 cores and 16G/12G RAM allocated to the VM

Some of the working cases:
* 8 cores and 8G/9G/10G/11G/13G RAM allocated to the VM
* 2 cores and any RAM allocation, including 16G and 12G

One more observation: by default I reserve 128 2MB hugepages for DPDK. After hitting the failure, if I just kill l2fwd and reduce the number of hugepages to 64 with the command

    echo 64 > /sys/kernel/mm/hugepages/hugepages-2048kB/nr_hugepages

the same l2fwd application starts working. I believe the issue has something to do with the physical memzone the virtqueue is allocated in each time.

I am using igb_uio.ko built from the x86_64-default-linuxapp-gcc config and all other DPDK libraries built from i686-default-linuxapp-gcc, because my kernel is 64-bit and my application is 32-bit.

Below are the details of my setup:

Linux kernel : 2.6.32-220.el6.x86_64
DPDK version : dpdk-1.6.0r1
Hugepages : 128 2MB hugepages
DPDK binaries used:
* 64-bit igb_uio.ko
* 32-bit l2fwd application

I'd appreciate it if you could give me some pointers on debugging this issue.

Thanks,
Vijay
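
P.S. In case it helps with reproducing or debugging: below is a minimal sketch of how one could dump the physical layout of the hugepage segments right after rte_eal_init(), to check whether the memory backing the virtqueue memzone lands above the 4GB physical boundary in the failing 16G/12G configurations (a plausible spot for a 32-bit truncation). The memseg API used here is from DPDK 1.6; dump_physmem_layout() itself is just an illustrative name, not an existing DPDK function.

    #include <stdio.h>
    #include <inttypes.h>
    #include <rte_memory.h>

    /* Print each hugepage memory segment and flag any segment that
     * extends above the 4GB physical boundary. Call after
     * rte_eal_init() has mapped the hugepages. */
    static void
    dump_physmem_layout(void)
    {
        const struct rte_memseg *ms = rte_eal_get_physmem_layout();
        unsigned i;

        for (i = 0; i < RTE_MAX_MEMSEG; i++) {
            if (ms[i].addr == NULL)
                break;
            printf("memseg %u: phys=0x%" PRIx64 " len=%zu%s\n",
                   i, (uint64_t)ms[i].phys_addr, ms[i].len,
                   ms[i].phys_addr + ms[i].len > (1ULL << 32) ?
                   "  <-- above 4GB" : "");
        }
    }

If the failing runs show the virtqueue memzone falling in a segment above 4GB while the working runs do not, that would point at a physical-address truncation somewhere in the 32-bit virtio path.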