DPDK patches and discussions
* [dpdk-dev] 10G Interface used as PCI Pass-Through reports 64bytes / packet
@ 2015-06-26 16:56 Assaad, Sami (Sami)
  2015-06-29  9:18 ` Bruce Richardson
  0 siblings, 1 reply; 3+ messages in thread
From: Assaad, Sami (Sami) @ 2015-06-26 16:56 UTC (permalink / raw)
  To: dev

Hello,

Is it normal for a 10G NIC based on the 82599 Ethernet Controller, configured as PCI pass-through for a virtual machine running DPDK, to report 64 bytes per packet no matter what the actual packet size is?

If so, I'm assuming this is done to improve the performance of passing network traffic to the VM. Is there a way to configure the NIC to report the correct byte count per packet?
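For reference, the figures in question are the standard per-port counters; a minimal sketch of the kind of read involved (port 0 assumed, error handling omitted, field names from rte_ethdev.h):

#include <stdio.h>
#include <inttypes.h>
#include <rte_ethdev.h>

/* Print the average received packet size for one port, derived from the
 * NIC's packet and byte counters. Assumes the port is already initialised
 * and receiving traffic. */
static void
print_avg_rx_size(uint8_t port_id)
{
        struct rte_eth_stats stats;

        rte_eth_stats_get(port_id, &stats);
        if (stats.ipackets > 0)
                printf("port %u: %" PRIu64 " pkts, %" PRIu64 " bytes, "
                       "avg %" PRIu64 " bytes/pkt\n",
                       port_id, stats.ipackets, stats.ibytes,
                       stats.ibytes / stats.ipackets);
}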

Thanks in advance.

Best Regards,
Sami.


* Re: [dpdk-dev] 10G Interface used as PCI Pass-Through reports 64bytes / packet
  2015-06-26 16:56 [dpdk-dev] 10G Interface used as PCI Pass-Through reports 64bytes / packet Assaad, Sami (Sami)
@ 2015-06-29  9:18 ` Bruce Richardson
  2015-06-29 14:02   ` Assaad, Sami (Sami)
  0 siblings, 1 reply; 3+ messages in thread
From: Bruce Richardson @ 2015-06-29  9:18 UTC (permalink / raw)
  To: Assaad, Sami (Sami); +Cc: dev

On Fri, Jun 26, 2015 at 04:56:18PM +0000, Assaad, Sami (Sami) wrote:
> Hello,
> 
> Is it normal for a 10G NIC based on the 82599 Ethernet Controller, configured as PCI pass-through for a virtual machine running DPDK, to report 64 bytes per packet no matter what the actual packet size is?
> 
That would not be expected behaviour, no. AFAIK, the 82599's counters should
behave the same way whether the NIC is passed through to a VM or used on the
host.

> If so, I'm assuming this is done to improve the performance of passing network traffic to the VM. Is there a way to configure the NIC to report the correct byte count per packet?
> 
I'm not sure what you mean here. I can't see how the reporting of byte-counts
would affect performance. Can you clarify what exactly you are seeing, and why
you think there is a performance benefit because of it?
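One way to pin it down: compare the byte counts software sees in the mbufs
against what the hardware counter reports. A rough sketch, assuming a port and
queue already set up as in the example apps (burst size arbitrary):

#include <stdint.h>
#include <rte_ethdev.h>
#include <rte_mbuf.h>

/* Sum packet lengths as seen by software (mbuf pkt_len) so they can be
 * compared against the NIC's byte counter. If the two disagree, the problem
 * is in the counter reporting; if both say 64 bytes, the packets really are
 * arriving truncated. */
static uint64_t
rx_and_count_bytes(uint8_t port_id, uint16_t queue_id)
{
        struct rte_mbuf *bufs[32];
        uint64_t bytes = 0;
        uint16_t nb_rx, i;

        nb_rx = rte_eth_rx_burst(port_id, queue_id, bufs, 32);
        for (i = 0; i < nb_rx; i++) {
                bytes += rte_pktmbuf_pkt_len(bufs[i]);
                rte_pktmbuf_free(bufs[i]);
        }
        return bytes;
}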

/Bruce

> Thanks in advance.
> 
> Best Regards,
> Sami.


* Re: [dpdk-dev] 10G Interface used as PCI Pass-Through reports 64bytes / packet
  2015-06-29  9:18 ` Bruce Richardson
@ 2015-06-29 14:02   ` Assaad, Sami (Sami)
  0 siblings, 0 replies; 3+ messages in thread
From: Assaad, Sami (Sami) @ 2015-06-29 14:02 UTC (permalink / raw)
  To: Bruce Richardson; +Cc: dev

Thanks Bruce for your response.

This is a very odd issue. I'm connecting a Pktgen-DPDK server directly to our application server, which runs a DPDK server/client process architecture. No matter what packet size the NIC on the application server receives, the port counters report 64 bytes per packet.

I initially thought the NIC might be configured in some mode that improves throughput over the SR-IOV links, but based on your response that assumption is wrong. Interestingly, I have tried both an HP and an Intel NIC, DPDK 1.8 and 2.0, and the DPDK example applications, and in every case the NIC ports report 64 bytes/packet. The host is CentOS 6.6 (2.6.32-504.23.4.el6.x86_64). I'm now wondering whether my PCI pass-through is set up properly, yet all the network traffic is handled exactly as expected by the DPDK application.
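To help rule out the pass-through itself, a minimal sanity check of what the PMD sees on the passed-through ports would look something like this (sketch only; assumes EAL initialisation has already succeeded):

#include <stdio.h>
#include <rte_ethdev.h>

/* Confirm which driver claimed the port and that the link is up at the
 * expected speed. A pass-through problem would usually show up here. */
static void
check_port(uint8_t port_id)
{
        struct rte_eth_dev_info info;
        struct rte_eth_link link;

        rte_eth_dev_info_get(port_id, &info);
        rte_eth_link_get_nowait(port_id, &link);
        printf("port %u: driver %s, link %s, %u Mbps\n",
               port_id, info.driver_name,
               link.link_status ? "up" : "down",
               (unsigned int)link.link_speed);
}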

In case the issue is with the VM configuration (which I seriously doubt), I've copied my VM XML here:
<domain type='kvm'>
  <name>vm-sami</name>
  <uuid>1eda9ae3-0155-de14-6e1c-0fbe0aa880f6</uuid>
  <memory unit='KiB'>102400000</memory>
  <currentMemory unit='KiB'>102400000</currentMemory>
  <vcpu placement='static'>46</vcpu>
  <os>
    <type arch='x86_64' machine='rhel6.6.0'>hvm</type>
    <boot dev='hd'/>
  </os>
  <features>
    <acpi/>
    <apic/>
    <pae/>
  </features>
  <cpu mode='custom' match='exact'>
    <model fallback='forbid'>Haswell</model>
    <vendor>Intel</vendor>
    <feature policy='disable' name='rtm'/>
    <feature policy='disable' name='hle'/>
    <feature policy='require' name='vme'/>
    <feature policy='require' name='dtes64'/>
    <feature policy='require' name='invpcid'/>
    <feature policy='require' name='vmx'/>
    <feature policy='require' name='erms'/>
    <feature policy='require' name='xtpr'/>
    <feature policy='require' name='smep'/>
    <feature policy='require' name='pbe'/>
    <feature policy='require' name='est'/>
    <feature policy='require' name='monitor'/>
    <feature policy='require' name='smx'/>
    <feature policy='require' name='abm'/>
    <feature policy='require' name='tm'/>
    <feature policy='require' name='acpi'/>
    <feature policy='require' name='fma'/>
    <feature policy='require' name='osxsave'/>
    <feature policy='require' name='ht'/>
    <feature policy='require' name='dca'/>
    <feature policy='require' name='pdcm'/>
    <feature policy='require' name='pdpe1gb'/>
    <feature policy='require' name='fsgsbase'/>
    <feature policy='require' name='f16c'/>
    <feature policy='require' name='ds'/>
    <feature policy='require' name='invtsc'/>
    <feature policy='require' name='tm2'/>
    <feature policy='require' name='avx2'/>
    <feature policy='require' name='ss'/>
    <feature policy='require' name='bmi1'/>
    <feature policy='require' name='bmi2'/>
    <feature policy='require' name='pcid'/>
    <feature policy='require' name='ds_cpl'/>
    <feature policy='require' name='movbe'/>
    <feature policy='require' name='rdrand'/>
  </cpu>
  <clock offset='utc'/>
  <on_poweroff>destroy</on_poweroff>
  <on_reboot>restart</on_reboot>
  <on_crash>restart</on_crash>
  <devices>
    <emulator>/usr/libexec/qemu-kvm</emulator>
    <disk type='file' device='disk'>
      <driver name='qemu' type='qcow2' cache='none'/>
      <source file='...'/>
      <target dev='vda' bus='virtio'/>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x06' function='0x0'/>
    </disk>
    <controller type='usb' index='0' model='ich9-ehci1'>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x05' function='0x7'/>
    </controller>
    <controller type='usb' index='0' model='ich9-uhci1'>
      <master startport='0'/>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x05' function='0x0' multifunction='on'/>
    </controller>
    <controller type='usb' index='0' model='ich9-uhci2'>
      <master startport='2'/>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x05' function='0x1'/>
    </controller>
    <controller type='usb' index='0' model='ich9-uhci3'>
      <master startport='4'/>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x05' function='0x2'/>
    </controller>
    <interface type='bridge'>
      <mac address='52:54:00:6d:39:c5'/>
      <source bridge='br0'/>
      <model type='virtio'/>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x0'/>
    </interface>
    <serial type='pty'>
      <target port='0'/>
    </serial>
    <console type='pty'>
      <target type='serial' port='0'/>
    </console>
    <input type='tablet' bus='usb'/>
    <input type='mouse' bus='ps2'/>
    <graphics type='vnc' port='-1' autoport='yes'/>
    <sound model='ich6'>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x0'/>
    </sound>
    <video>
      <model type='cirrus' vram='9216' heads='1'/>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x0'/>
    </video>
    <hostdev mode='subsystem' type='pci' managed='yes'>
      <source>
        <address domain='0x0000' bus='0x04' slot='0x00' function='0x0'/>
      </source>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x07' function='0x0'/>
    </hostdev>
    <hostdev mode='subsystem' type='pci' managed='yes'>
      <source>
        <address domain='0x0000' bus='0x04' slot='0x00' function='0x1'/>
      </source>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x08' function='0x0'/>
    </hostdev>
    <memballoon model='virtio'>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x09' function='0x0'/>
    </memballoon>
  </devices>
</domain>

 
Best Regards,
Sami Assaad.

