DPDK usage discussions
* [dpdk-users] DPDK KNI Issue
@ 2015-11-30 22:09 Ilir Iljazi
  0 siblings, 0 replies; 5+ messages in thread
From: Ilir Iljazi @ 2015-11-30 22:09 UTC (permalink / raw)
  To: users

Hi,
I have been having an issue with DPDK KNI whereby I can't send or receive
packets from the KNI interface. I spent about a week trying to figure out
the issue myself, to no avail. Although I did find articles with a similar
signature to mine, none of the proposed solutions solved the problem.

Environment:
Ubuntu Server 14.04
DPDK Package 2.1.0 (Latest)
Network Card: 10GbE (ixgbe driver)

06:00.0 Ethernet controller: Intel Corporation 82599ES 10-Gigabit SFI/SFP+
Network Connection
06:00.1 Ethernet controller: Intel Corporation 82599ES 10-Gigabit SFI/SFP+
Network Connection

06:00.0 (port 0 connected to switch)
06:00.1 (port 1 not connected to switch)

Configuration:
1.) DPDK built without issue
2.) Modules Loaded:

insmod $RTE_TARGET/kmod/igb_uio.ko
insmod $RTE_TARGET/kmod/rte_kni.ko kthread_mode=multiple


3.) Reserved Huge Pages:

echo 4096 >
/sys/devices/system/node/node0/hugepages/hugepages-2048kB/nr_hugepages
echo 4096 >
/sys/devices/system/node/node1/hugepages/hugepages-2048kB/nr_hugepages


4.) Mounted huge page partition

echo ">>> Mounting huge page partition"
mkdir -p /mnt/huge
mount -t hugetlbfs nodev /mnt/huge
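A quick way to verify the reservation and the mount (a generic Linux check, not in the original report):

```shell
# Per-NUMA-node hugepage counts should show 4096 each after the echo commands
cat /sys/devices/system/node/node*/hugepages/hugepages-2048kB/nr_hugepages

# Global view of hugepage accounting
grep -i huge /proc/meminfo

# Confirm the hugetlbfs mount is active
mount | grep hugetlbfs
```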


5.) Interfaces 06:00.0/1 bound to the igb_uio module (option 19 on setup)

Network devices using DPDK-compatible driver
============================================
0000:06:00.0 '82599ES 10-Gigabit SFI/SFP+ Network Connection' drv=igb_uio
unused=
0000:06:00.1 '82599ES 10-Gigabit SFI/SFP+ Network Connection' drv=igb_uio
unused=


6.) Started kni test application:

Command: ./examples/kni/build/app/kni -n 4 -c 0xff -- -p 0x1 -P
--config="(0,5,7)" &

Output:

EAL: PCI device 0000:06:00.0 on NUMA socket -1
EAL:   probe driver: 8086:10fb rte_ixgbe_pmd
EAL:   PCI memory mapped at 0x7fcda5c00000
EAL:   PCI memory mapped at 0x7fcda5c80000
PMD: eth_ixgbe_dev_init(): MAC: 2, PHY: 18, SFP+: 5
PMD: eth_ixgbe_dev_init(): port 0 vendorID=0x8086 deviceID=0x10fb
EAL: PCI device 0000:06:00.1 on NUMA socket -1
EAL:   probe driver: 8086:10fb rte_ixgbe_pmd
EAL:   PCI memory mapped at 0x7fcda5c84000
EAL:   PCI memory mapped at 0x7fcda5d04000
PMD: eth_ixgbe_dev_init(): MAC: 2, PHY: 18, SFP+: 6
PMD: eth_ixgbe_dev_init(): port 1 vendorID=0x8086 deviceID=0x10fb
APP: Port ID: 0
APP: Rx lcore ID: 5, Tx lcore ID: 7
APP: Initialising port 0 ...
PMD: ixgbe_dev_rx_queue_setup(): sw_ring=0x7fcd5c1adcc0
sw_sc_ring=0x7fcd5c1ad780 hw_ring=0x7fcd5c1ae200 dma_addr=0xe5b1ae200
PMD: ixgbe_dev_tx_queue_setup(): sw_ring=0x7fcd5c19b5c0
hw_ring=0x7fcd5c19d600 dma_addr=0xe5b19d600
PMD: ixgbe_set_tx_function(): Using simple tx code path
PMD: ixgbe_set_tx_function(): Vector tx enabled.
PMD: ixgbe_set_rx_function(): Vector rx enabled, please make sure RX burst
size no less than 32.
KNI: pci: 06:00:00  8086:10fb


Checking link status
done
Port 0 Link Up - speed 10000 Mbps - full-duplex
APP: Lcore 1 has nothing to do
APP: Lcore 2 has nothing to do
APP: Lcore 3 has nothing to do
APP: Lcore 4 has nothing to do
APP: Lcore 5 is reading from port 0
APP: Lcore 6 has nothing to do
APP: Lcore 7 is writing to port 0
APP: Lcore 0 has nothing to do


7.) KNI interface configured and brought up:

root@l3sys2-acc2-3329:~/dpdk-2.1.0# ifconfig vEth0 192.168.13.95 netmask
255.255.248.0 up
APP: Configure network interface of 0 up
PMD: ixgbe_set_rx_function(): Vector rx enabled, please make sure RX burst
size no less than 32.

root@l3sys2-acc2-3329:~/dpdk-2.1.0# ifconfig vEth0

vEth0     Link encap:Ethernet  HWaddr 90:e2:ba:55:fd:c4
          inet addr:192.168.13.95  Bcast:192.168.15.255  Mask:255.255.248.0
          inet6 addr: fe80::92e2:baff:fe55:fdc4/64 Scope:Link
          UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
          RX packets:0 errors:0 dropped:0 overruns:0 frame:0
          TX packets:0 errors:0 dropped:8 overruns:0 carrier:0
          collisions:0 txqueuelen:1000
          RX bytes:0 (0.0 B)  TX bytes:0 (0.0 B)

Note also that dmesg is clean, not pointing to any issues:
[ 1770.113952] KNI: /dev/kni opened
[ 1770.561957] KNI: Creating kni...
[ 1770.561973] KNI: tx_phys:      0x0000000e5b1ca9c0, tx_q addr:
0xffff880e5b1ca9c0
[ 1770.561974] KNI: rx_phys:      0x0000000e5b1c8940, rx_q addr:
0xffff880e5b1c8940
[ 1770.561975] KNI: alloc_phys:   0x0000000e5b1c68c0, alloc_q addr:
0xffff880e5b1c68c0
[ 1770.561976] KNI: free_phys:    0x0000000e5b1c4840, free_q addr:
0xffff880e5b1c4840
[ 1770.561977] KNI: req_phys:     0x0000000e5b1c27c0, req_q addr:
0xffff880e5b1c27c0
[ 1770.561978] KNI: resp_phys:    0x0000000e5b1c0740, resp_q addr:
0xffff880e5b1c0740
[ 1770.561979] KNI: mbuf_phys:    0x000000006727dec0, mbuf_kva:
0xffff88006727dec0
[ 1770.561980] KNI: mbuf_va:      0x00007fcd8627dec0
[ 1770.561981] KNI: mbuf_size:    2048
[ 1770.561987] KNI: pci_bus: 06:00:00
[ 1770.599689] igb_uio 0000:06:00.0: (PCI Express:5.0GT/s:Width x8)
[ 1770.599691] 90:e2:ba:55:fd:c4
[ 1770.599777] igb_uio 0000:06:00.0 (unnamed net_device) (uninitialized):
MAC: 2, PHY: 0, PBA No: E68793-006
[ 1770.599779] igb_uio 0000:06:00.0 (unnamed net_device) (uninitialized):
Enabled Features: RxQ: 1 TxQ: 1
[ 1770.599790] igb_uio 0000:06:00.0 (unnamed net_device) (uninitialized):
Intel(R) 10 Gigabit Network Connection


8.) ethtool vEth0 link is detected:

root@l3sys2-acc2-3329:~/dpdk-2.1.0# ethtool vEth0
Settings for vEth0:
Supported ports: [ FIBRE ]
Supported link modes:   10000baseT/Full
Supported pause frame use: No
Supports auto-negotiation: No
Advertised link modes:  10000baseT/Full
Advertised pause frame use: No
Advertised auto-negotiation: No
Speed: 10000Mb/s
Duplex: Full
Port: Other
PHYAD: 0
Transceiver: external
Auto-negotiation: off
Supports Wake-on: d
Wake-on: d
Current message level: 0x00000007 (7)
      drv probe link
Link detected: yes


9.) kernel started with: iommu=pt intel_iommu=on

GRUB_CMDLINE_LINUX="iommu=pt intel_iommu=on console=tty1
console=ttyS1,115200n8"


10.) Disabled virtualization in BIOS per forum recommendation


Situation:
Despite doing everything seemingly correctly, I can't ssh or ping to or from
this interface. I tried running tcpdump on the interface but didn't notice
any traffic. I'm not sure what I'm doing wrong here; if I could get some
support I'd appreciate it. I can provide additional details from the system
if needed.

Thanks!

^ permalink raw reply	[flat|nested] 5+ messages in thread

* Re: [dpdk-users] DPDK KNI Issue
       [not found]     ` <CAPh65stm9JQQDHYA15OiKrpaeAkeDGQSSLmWy_ZUGWkLHW8irA@mail.gmail.com>
@ 2015-12-07  9:58       ` Pattan, Reshma
  0 siblings, 0 replies; 5+ messages in thread
From: Pattan, Reshma @ 2015-12-07  9:58 UTC (permalink / raw)
  To: Ilir Iljazi; +Cc: users

Hi Ilir,

Can you retry the test as suggested below? During KNI testing I observed one odd behavior: if tcpdump is left running on the KNI device and the KNI application is then terminated with Ctrl+C (or by other means), the application does not terminate. In that case we need to kill tcpdump first, then the KNI application. After doing this, the DPDK-bound ports do not receive any traffic on the next run. So I suggest the steps below.

a) Terminate tcpdump and the KNI application.
b) Unbind the DPDK ports from igb_uio.
c) Bind the DPDK ports back to igb_uio.
d) Run the testpmd application and make sure traffic is being received on the ports.
e) If traffic is received, run the KNI application.
f) Run tcpdump on the KNI device.
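The unbind/rebind steps can be done with the bind script shipped in the DPDK 2.x tree (tools/dpdk_nic_bind.py in that release). The PCI addresses below are the ones from the original report; check the exact option spelling against the script's --help:

```shell
# Show current driver bindings
./tools/dpdk_nic_bind.py --status

# Unbind from igb_uio by rebinding to the kernel ixgbe driver
./tools/dpdk_nic_bind.py --bind=ixgbe 0000:06:00.0 0000:06:00.1

# Bind back to igb_uio for DPDK
./tools/dpdk_nic_bind.py --bind=igb_uio 0000:06:00.0 0000:06:00.1
```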

Let me know how it goes.

Thanks,
Reshma
From: Ilir Iljazi [mailto:iljazi@gmail.com]
Sent: Friday, December 4, 2015 5:54 PM
To: Pattan, Reshma
Subject: Re: [dpdk-users] DPDK KNI Issue

Hi, thanks again. I'm using the example application to create the interface:

./examples/kni/build/app/kni -n 4 -c 0xff -- -p 0x1 -P --config="(0,5,7)" &

The interface is created and I'm able to configure it just like any other Linux interface: IP, subnet, default route, etc. I haven't written my application yet because I can't get basic functionality such as connecting and transferring data to work. Hence, the suggested debug logging isn't expected to help, since the routines you mention are never called. Regarding the proc file system monitoring, same answer: there is no application yet.

Thanks,
Ilir

On Fri, Dec 4, 2015 at 11:13 AM, Pattan, Reshma <reshma.pattan@intel.com> wrote:
Hi,

Are you using the DPDK KNI sample application, or do you have your own module with a KNI implementation? If the latter, please share the code and we will look into it.

Can you also check by adding enough logging to the functions below?
a) kni_net_rx_normal and the other kni_net_rx_* functions, to see if they are invoked by DPDK to push packets to the TCP/IP stack.
b) kni_net_tx, to see if it is invoked by the TCP/IP stack to push packets to DPDK.
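Until such printk logging is compiled into the rte_kni module, a lighter-weight proxy (my suggestion, not something KNI does by default) is to watch the kernel ring buffer while a peer generates traffic toward the interface:

```shell
# Clear the ring buffer, then generate traffic toward vEth0 from a peer host
dmesg -c > /dev/null
# ... ping 192.168.13.95 from the peer while this runs ...
sleep 10
dmesg | grep -i kni   # any KNI messages, drops, or stack traces?
```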

Also, you can use the app/proc_info application from the DPDK 2.1 release to check the packet statistics of the DPDK ports and see what is happening.
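If memory serves, the tool builds as dpdk_proc_info and attaches to the running DPDK application as a secondary process; the invocation below is my best recollection of the 2.1 usage, so verify the flags with --help:

```shell
# Query packet statistics for port 0 (portmask 0x1) while the KNI app runs
./build/app/dpdk_proc_info -- -p 0x1 --stats
```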

Thanks,
Reshma

> -----Original Message-----
> From: users [mailto:users-bounces@dpdk.org] On Behalf Of Pattan, Reshma
> Sent: Friday, December 4, 2015 12:28 PM
> To: Ilir Iljazi; users@dpdk.org
> Subject: Re: [dpdk-users] DPDK KNI Issue
>
> Hi,
>
> I had tried KNI ping testing on fedora ,  DPDK2.2 and using one loopback
> connection, it works fine and I tried without steps 9 and 10.
> I am not sure why steps 9 & 10 are needed in your case, but you can try without
> those 2 steps and see the results.
> Also, after you start the ping, make sure there is no core dump in dmesg for KNI
> module.
> If ur running tcpdump with icmp filter try running without filter and first see if
> ARP packets are reaching to KNI or not.
> Also can you check if packet drop stats of kni iface increasing?
>
> Thanks,
> Reshma
>
> > -----Original Message-----
> > From: users [mailto:users-bounces@dpdk.org] On Behalf Of Ilir Iljazi
> > Sent: Thursday, December 3, 2015 9:55 PM
> > To: users@dpdk.org
> > Subject: [dpdk-users] DPDK KNI Issue
> >
> > Hi,
> > I have been having an issue with dpdk kni whereby I cant send and
> > receive packets from the kni interface. I spent about a week trying to
> > figure it out the issue myself to no avail. Although I did find
> > articles with a similar signature to mine none of the proposed solutions helped
> solve the problem.
> >
> > Environment:
> > Ubuntu Server 14.04
> > DPDK Package 2.1.0 (Latest)
> > Network Card: (10Gbe ixgbe driver)
> >
> > 06:00.0 Ethernet controller: Intel Corporation 82599ES 10-Gigabit
> > SFI/SFP+ Network Connection
> > 06:00.1 Ethernet controller: Intel Corporation 82599ES 10-Gigabit
> > SFI/SFP+ Network Connection
> >
> > 06.00.0 (port 0 connected to switch)
> > 06:00.1 (port 1 not connected to switch)
> >
> > Configuration:
> > 1.) DPDK built without issue
> > 2.) Modules Loaded:
> >
> > insmod $RTE_TARGET/kmod/igb_uio.ko
> > insmod $RTE_TARGET/kmod/rte_kni.ko kthread_mode=multiple
> >
> >
> > 3.) Reserved Huge Pages:
> >
> > echo 4096 >
> > /sys/devices/system/node/node0/hugepages/hugepages-
> 2048kB/nr_hugepages
> > echo 4096 >
> > /sys/devices/system/node/node1/hugepages/hugepages-
> 2048kB/nr_hugepages
> >
> >
> > 4.) Mounted huge page partition
> >
> > echo ">>> Mounting huge page partition"
> > mkdir -p /mnt/huge
> > mount -t hugetlbfs nodev /mnt/huge
> >
> >
> > 5.) Interfaces 06:00.0/1 bound to igb uio module (option 19 on setup)
> >
> > Network devices using DPDK-compatible driver
> > ============================================
> > 0000:06:00.0 '82599ES 10-Gigabit SFI/SFP+ Network Connection'
> > drv=igb_uio unused=
> > 0000:06:00.1 '82599ES 10-Gigabit SFI/SFP+ Network Connection'
> > drv=igb_uio unused=
> >
> >
> > 6.) Started kni test application:
> >
> > Command: ./examples/kni/build/app/kni -n 4 -c 0xff -- -p 0x1 -P --
> > config="(0,5,7)" &
> >
> > Output:
> >
> > EAL: PCI device 0000:06:00.0 on NUMA socket -1
> > EAL:   probe driver: 8086:10fb rte_ixgbe_pmd
> > EAL:   PCI memory mapped at 0x7fcda5c00000
> > EAL:   PCI memory mapped at 0x7fcda5c80000
> > PMD: eth_ixgbe_dev_init(): MAC: 2, PHY: 18, SFP+: 5
> > PMD: eth_ixgbe_dev_init(): port 0 vendorID=0x8086 deviceID=0x10fb
> > EAL: PCI device 0000:06:00.1 on NUMA socket -1
> > EAL:   probe driver: 8086:10fb rte_ixgbe_pmd
> > EAL:   PCI memory mapped at 0x7fcda5c84000
> > EAL:   PCI memory mapped at 0x7fcda5d04000
> > PMD: eth_ixgbe_dev_init(): MAC: 2, PHY: 18, SFP+: 6
> > PMD: eth_ixgbe_dev_init(): port 1 vendorID=0x8086 deviceID=0x10fb
> > APP: Port ID: 0
> > APP: Rx lcore ID: 5, Tx lcore ID: 7
> > APP: Initialising port 0 ...
> > PMD: ixgbe_dev_rx_queue_setup(): sw_ring=0x7fcd5c1adcc0
> > sw_sc_ring=0x7fcd5c1ad780 hw_ring=0x7fcd5c1ae200
> dma_addr=0xe5b1ae200
> > PMD: ixgbe_dev_tx_queue_setup(): sw_ring=0x7fcd5c19b5c0
> > hw_ring=0x7fcd5c19d600 dma_addr=0xe5b19d600
> > PMD: ixgbe_set_tx_function(): Using simple tx code path
> > PMD: ixgbe_set_tx_function(): Vector tx enabled.
> > PMD: ixgbe_set_rx_function(): Vector rx enabled, please make sure RX
> > burst size no less than 32.
> > KNI: pci: 06:00:00  8086:10fb
> >
> >
> > Checking link status
> > done
> > Port 0 Link Up - speed 10000 Mbps - full-duplex
> > APP: Lcore 1 has nothing to do
> > APP: Lcore 2 has nothing to do
> > APP: Lcore 3 has nothing to do
> > APP: Lcore 4 has nothing to do
> > APP: Lcore 5 is reading from port 0
> > APP: Lcore 6 has nothing to do
> > APP: Lcore 7 is writing to port 0
> > APP: Lcore 0 has nothing to do
> >
> >
> > 7.) KNI interface configured and brought up:
> >
> > root@l3sys2-acc2-3329:~/dpdk-2.1.0# ifconfig vEth0 192.168.13.95
> > netmask
> > 255.255.248.0 up
> > APP: Configure network interface of 0 up
> > PMD: ixgbe_set_rx_function(): Vector rx enabled, please make sure RX
> > burst size no less than 32.
> >
> > root@l3sys2-acc2-3329:~/dpdk-2.1.0# ifconfig vEth0
> >
> > vEth0     Link encap:Ethernet  HWaddr 90:e2:ba:55:fd:c4
> >           inet addr:192.168.13.95  Bcast:192.168.15.255  Mask:255.255.248.0
> >           inet6 addr: fe80::92e2:baff:fe55:fdc4/64 Scope:Link
> >           UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
> >           RX packets:0 errors:0 dropped:0 overruns:0 frame:0
> >           TX packets:0 errors:0 dropped:8 overruns:0 carrier:0
> >           collisions:0 txqueuelen:1000
> >           RX bytes:0 (0.0 B)  TX bytes:0 (0.0 B)
> >
> > Note also that dmesg is clean not pointing to any issues:
> > [ 1770.113952] KNI: /dev/kni opened
> > [ 1770.561957] KNI: Creating kni...
> > [ 1770.561973] KNI: tx_phys:      0x0000000e5b1ca9c0, tx_q addr:
> > 0xffff880e5b1ca9c0
> > [ 1770.561974] KNI: rx_phys:      0x0000000e5b1c8940, rx_q addr:
> > 0xffff880e5b1c8940
> > [ 1770.561975] KNI: alloc_phys:   0x0000000e5b1c68c0, alloc_q addr:
> > 0xffff880e5b1c68c0
> > [ 1770.561976] KNI: free_phys:    0x0000000e5b1c4840, free_q addr:
> > 0xffff880e5b1c4840
> > [ 1770.561977] KNI: req_phys:     0x0000000e5b1c27c0, req_q addr:
> > 0xffff880e5b1c27c0
> > [ 1770.561978] KNI: resp_phys:    0x0000000e5b1c0740, resp_q addr:
> > 0xffff880e5b1c0740
> > [ 1770.561979] KNI: mbuf_phys:    0x000000006727dec0, mbuf_kva:
> > 0xffff88006727dec0
> > [ 1770.561980] KNI: mbuf_va:      0x00007fcd8627dec0
> > [ 1770.561981] KNI: mbuf_size:    2048
> > [ 1770.561987] KNI: pci_bus: 06:00:00
> > [ 1770.599689] igb_uio 0000:06:00.0: (PCI Express:5.0GT/s:Width x8)
> > [ 1770.599691] 90:e2:ba:55:fd:c4
> > [ 1770.599777] igb_uio 0000:06:00.0 (unnamed net_device) (uninitialized):
> > MAC: 2, PHY: 0, PBA No: E68793-006
> > [ 1770.599779] igb_uio 0000:06:00.0 (unnamed net_device) (uninitialized):
> > Enabled Features: RxQ: 1 TxQ: 1
> > [ 1770.599790] igb_uio 0000:06:00.0 (unnamed net_device) (uninitialized):
> > Intel(R) 10 Gigabit Network Connection
> >
> >
> > 8.) ethtool vEth0 link is detected:
> >
> > root@l3sys2-acc2-3329:~/dpdk-2.1.0# ethtool vEth0
> > Settings for vEth0:
> > Supported ports: [ FIBRE ]
> > Supported link modes:   10000baseT/Full
> > Supported pause frame use: No
> > Supports auto-negotiation: No
> > Advertised link modes:  10000baseT/Full Advertised pause frame use: No
> > Advertised auto-negotiation: No
> > Speed: 10000Mb/s
> > Duplex: Full
> > Port: Other
> > PHYAD: 0
> > Transceiver: external
> > Auto-negotiation: off
> > Supports Wake-on: d
> > Wake-on: d
> > Current message level: 0x00000007 (7)
> >       drv probe link
> > Link detected: yes
> >
> >
> > 9.) kernel started with: iommu=pt intel_iommu=on
> >
> > GRUB_CMDLINE_LINUX="iommu=pt intel_iommu=on console=tty1
> > console=ttyS1,115200n8"
> >
> >
> > 10.) Disabled virtualization in BIOS per forum recommendation
> >
> >
> > Situation:
> > Despite doing everything seemingly correct I cant ssh or ping to and
> > from this interface. I tried starting tcpdump on the interface but didn't notice
> any traffic.
> > I'm not sure what I'm doing wrong here, if I could get some support
> > I'd appreciate it. I can provide additional details from the system if needed.
> >
> > Thanks!



* Re: [dpdk-users] DPDK KNI Issue
  2015-12-04 12:27 ` Pattan, Reshma
@ 2015-12-04 16:15   ` Ilir Iljazi
       [not found]   ` <3AEA2BF9852C6F48A459DA490692831FF89F16@IRSMSX109.ger.corp.intel.com>
  1 sibling, 0 replies; 5+ messages in thread
From: Ilir Iljazi @ 2015-12-04 16:15 UTC (permalink / raw)
  To: Pattan, Reshma; +Cc: users

Thanks for the response. I have tried both with and without steps 9 and 10, to no avail. There is also no core dump generated. Interestingly, however, there are some dropped packets on the interface, but they seem to occur in a burst when the interface is started and do not keep incrementing over the interface's uptime. tcpdump is executed in its most basic mode, without filters:

#tcpdump -vv -i vEth0 

Nothing is coming in and, as far as I can tell, nothing is going out.
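One way to narrow this down is to check whether ARP ever resolves; if it doesn't, ping and ssh can never start. A generic check (not from the thread) would be:

```shell
# Watch ARP frames on the KNI interface while a peer pings 192.168.13.95
tcpdump -e -n -i vEth0 arp

# In another shell, see whether the peer's entry ever appears in the ARP table
arp -n | grep 192.168.13
```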

Ilir

> On Dec 4, 2015, at 6:27 AM, Pattan, Reshma <reshma.pattan@intel.com> wrote:
> 
> Hi,
> 
> I had tried KNI ping testing on fedora ,  DPDK2.2 and using one loopback connection, it works fine and I tried without steps 9 and 10.
> I am not sure why steps 9 & 10 are needed in your case, but you can try without those 2 steps and see the results. 
> Also, after you start the ping, make sure there is no core dump in dmesg for KNI module.
> If ur running tcpdump with icmp filter try running without filter and first see if ARP packets are reaching to KNI or not.
> Also can you check if packet drop stats of kni iface increasing?
> 
> Thanks,
> Reshma
> 
>> -----Original Message-----
>> From: users [mailto:users-bounces@dpdk.org] On Behalf Of Ilir Iljazi
>> Sent: Thursday, December 3, 2015 9:55 PM
>> To: users@dpdk.org
>> Subject: [dpdk-users] DPDK KNI Issue
>> 
>> Hi,
>> I have been having an issue with dpdk kni whereby I cant send and receive
>> packets from the kni interface. I spent about a week trying to figure it out the
>> issue myself to no avail. Although I did find articles with a similar signature to
>> mine none of the proposed solutions helped solve the problem.
>> 
>> Environment:
>> Ubuntu Server 14.04
>> DPDK Package 2.1.0 (Latest)
>> Network Card: (10Gbe ixgbe driver)
>> 
>> 06:00.0 Ethernet controller: Intel Corporation 82599ES 10-Gigabit SFI/SFP+
>> Network Connection
>> 06:00.1 Ethernet controller: Intel Corporation 82599ES 10-Gigabit SFI/SFP+
>> Network Connection
>> 
>> 06.00.0 (port 0 connected to switch)
>> 06:00.1 (port 1 not connected to switch)
>> 
>> Configuration:
>> 1.) DPDK built without issue
>> 2.) Modules Loaded:
>> 
>> insmod $RTE_TARGET/kmod/igb_uio.ko
>> insmod $RTE_TARGET/kmod/rte_kni.ko kthread_mode=multiple
>> 
>> 
>> 3.) Reserved Huge Pages:
>> 
>> echo 4096 >
>> /sys/devices/system/node/node0/hugepages/hugepages-2048kB/nr_hugepages
>> echo 4096 >
>> /sys/devices/system/node/node1/hugepages/hugepages-2048kB/nr_hugepages
>> 
>> 
>> 4.) Mounted huge page partition
>> 
>> echo ">>> Mounting huge page partition"
>> mkdir -p /mnt/huge
>> mount -t hugetlbfs nodev /mnt/huge
>> 
>> 
>> 5.) Interfaces 06:00.0/1 bound to igb uio module (option 19 on setup)
>> 
>> Network devices using DPDK-compatible driver
>> ============================================
>> 0000:06:00.0 '82599ES 10-Gigabit SFI/SFP+ Network Connection' drv=igb_uio
>> unused=
>> 0000:06:00.1 '82599ES 10-Gigabit SFI/SFP+ Network Connection' drv=igb_uio
>> unused=
>> 
>> 
>> 6.) Started kni test application:
>> 
>> Command: ./examples/kni/build/app/kni -n 4 -c 0xff -- -p 0x1 -P --
>> config="(0,5,7)" &
>> 
>> Output:
>> 
>> EAL: PCI device 0000:06:00.0 on NUMA socket -1
>> EAL:   probe driver: 8086:10fb rte_ixgbe_pmd
>> EAL:   PCI memory mapped at 0x7fcda5c00000
>> EAL:   PCI memory mapped at 0x7fcda5c80000
>> PMD: eth_ixgbe_dev_init(): MAC: 2, PHY: 18, SFP+: 5
>> PMD: eth_ixgbe_dev_init(): port 0 vendorID=0x8086 deviceID=0x10fb
>> EAL: PCI device 0000:06:00.1 on NUMA socket -1
>> EAL:   probe driver: 8086:10fb rte_ixgbe_pmd
>> EAL:   PCI memory mapped at 0x7fcda5c84000
>> EAL:   PCI memory mapped at 0x7fcda5d04000
>> PMD: eth_ixgbe_dev_init(): MAC: 2, PHY: 18, SFP+: 6
>> PMD: eth_ixgbe_dev_init(): port 1 vendorID=0x8086 deviceID=0x10fb
>> APP: Port ID: 0
>> APP: Rx lcore ID: 5, Tx lcore ID: 7
>> APP: Initialising port 0 ...
>> PMD: ixgbe_dev_rx_queue_setup(): sw_ring=0x7fcd5c1adcc0
>> sw_sc_ring=0x7fcd5c1ad780 hw_ring=0x7fcd5c1ae200 dma_addr=0xe5b1ae200
>> PMD: ixgbe_dev_tx_queue_setup(): sw_ring=0x7fcd5c19b5c0
>> hw_ring=0x7fcd5c19d600 dma_addr=0xe5b19d600
>> PMD: ixgbe_set_tx_function(): Using simple tx code path
>> PMD: ixgbe_set_tx_function(): Vector tx enabled.
>> PMD: ixgbe_set_rx_function(): Vector rx enabled, please make sure RX burst size
>> no less than 32.
>> KNI: pci: 06:00:00  8086:10fb
>> 
>> 
>> Checking link status
>> done
>> Port 0 Link Up - speed 10000 Mbps - full-duplex
>> APP: Lcore 1 has nothing to do
>> APP: Lcore 2 has nothing to do
>> APP: Lcore 3 has nothing to do
>> APP: Lcore 4 has nothing to do
>> APP: Lcore 5 is reading from port 0
>> APP: Lcore 6 has nothing to do
>> APP: Lcore 7 is writing to port 0
>> APP: Lcore 0 has nothing to do
>> 
>> 
>> 7.) KNI interface configured and brought up:
>> 
>> root@l3sys2-acc2-3329:~/dpdk-2.1.0# ifconfig vEth0 192.168.13.95 netmask
>> 255.255.248.0 up
>> APP: Configure network interface of 0 up
>> PMD: ixgbe_set_rx_function(): Vector rx enabled, please make sure RX burst size
>> no less than 32.
>> 
>> root@l3sys2-acc2-3329:~/dpdk-2.1.0# ifconfig vEth0
>> 
>> vEth0     Link encap:Ethernet  HWaddr 90:e2:ba:55:fd:c4
>>          inet addr:192.168.13.95  Bcast:192.168.15.255  Mask:255.255.248.0
>>          inet6 addr: fe80::92e2:baff:fe55:fdc4/64 Scope:Link
>>          UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
>>          RX packets:0 errors:0 dropped:0 overruns:0 frame:0
>>          TX packets:0 errors:0 dropped:8 overruns:0 carrier:0
>>          collisions:0 txqueuelen:1000
>>          RX bytes:0 (0.0 B)  TX bytes:0 (0.0 B)
>> 
>> Note also that dmesg is clean not pointing to any issues:
>> [ 1770.113952] KNI: /dev/kni opened
>> [ 1770.561957] KNI: Creating kni...
>> [ 1770.561973] KNI: tx_phys:      0x0000000e5b1ca9c0, tx_q addr:
>> 0xffff880e5b1ca9c0
>> [ 1770.561974] KNI: rx_phys:      0x0000000e5b1c8940, rx_q addr:
>> 0xffff880e5b1c8940
>> [ 1770.561975] KNI: alloc_phys:   0x0000000e5b1c68c0, alloc_q addr:
>> 0xffff880e5b1c68c0
>> [ 1770.561976] KNI: free_phys:    0x0000000e5b1c4840, free_q addr:
>> 0xffff880e5b1c4840
>> [ 1770.561977] KNI: req_phys:     0x0000000e5b1c27c0, req_q addr:
>> 0xffff880e5b1c27c0
>> [ 1770.561978] KNI: resp_phys:    0x0000000e5b1c0740, resp_q addr:
>> 0xffff880e5b1c0740
>> [ 1770.561979] KNI: mbuf_phys:    0x000000006727dec0, mbuf_kva:
>> 0xffff88006727dec0
>> [ 1770.561980] KNI: mbuf_va:      0x00007fcd8627dec0
>> [ 1770.561981] KNI: mbuf_size:    2048
>> [ 1770.561987] KNI: pci_bus: 06:00:00
>> [ 1770.599689] igb_uio 0000:06:00.0: (PCI Express:5.0GT/s:Width x8)
>> [ 1770.599691] 90:e2:ba:55:fd:c4
>> [ 1770.599777] igb_uio 0000:06:00.0 (unnamed net_device) (uninitialized):
>> MAC: 2, PHY: 0, PBA No: E68793-006
>> [ 1770.599779] igb_uio 0000:06:00.0 (unnamed net_device) (uninitialized):
>> Enabled Features: RxQ: 1 TxQ: 1
>> [ 1770.599790] igb_uio 0000:06:00.0 (unnamed net_device) (uninitialized):
>> Intel(R) 10 Gigabit Network Connection
>> 
>> 
>> 8.) ethtool vEth0 link is detected:
>> 
>> root@l3sys2-acc2-3329:~/dpdk-2.1.0# ethtool vEth0
>> Settings for vEth0:
>> Supported ports: [ FIBRE ]
>> Supported link modes:   10000baseT/Full
>> Supported pause frame use: No
>> Supports auto-negotiation: No
>> Advertised link modes:  10000baseT/Full
>> Advertised pause frame use: No
>> Advertised auto-negotiation: No
>> Speed: 10000Mb/s
>> Duplex: Full
>> Port: Other
>> PHYAD: 0
>> Transceiver: external
>> Auto-negotiation: off
>> Supports Wake-on: d
>> Wake-on: d
>> Current message level: 0x00000007 (7)
>>      drv probe link
>> Link detected: yes
>> 
>> 
>> 9.) kernel started with: iommu=pt intel_iommu=on
>> 
>> GRUB_CMDLINE_LINUX="iommu=pt intel_iommu=on console=tty1
>> console=ttyS1,115200n8"
>> 
>> 
>> 10.) Disabled virtualization in BIOS per forum recommendation
>> 
>> 
>> Situation:
>> Despite doing everything seemingly correct I cant ssh or ping to and from this
>> interface. I tried starting tcpdump on the interface but didn't notice any traffic.
>> I'm not sure what I'm doing wrong here, if I could get some support I'd
>> appreciate it. I can provide additional details from the system if needed.
>> 
>> Thanks!


* Re: [dpdk-users] DPDK KNI Issue
  2015-12-03 21:54 Ilir Iljazi
@ 2015-12-04 12:27 ` Pattan, Reshma
  2015-12-04 16:15   ` Ilir Iljazi
       [not found]   ` <3AEA2BF9852C6F48A459DA490692831FF89F16@IRSMSX109.ger.corp.intel.com>
  0 siblings, 2 replies; 5+ messages in thread
From: Pattan, Reshma @ 2015-12-04 12:27 UTC (permalink / raw)
  To: Ilir Iljazi, users

Hi,

I tried KNI ping testing on Fedora with DPDK 2.2, using one loopback connection; it works fine, and I tried it without steps 9 and 10.
I am not sure why steps 9 and 10 are needed in your case, but you can try without those two steps and see the results.
Also, after you start the ping, make sure there is no core dump in dmesg for the KNI module.
If you are running tcpdump with an icmp filter, try running without the filter and first see whether ARP packets are reaching KNI at all.
Also, can you check whether the packet drop stats of the KNI interface are increasing?
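These per-interface drop counters can be polled from sysfs; a minimal sketch (generic Linux, not DPDK-specific):

```shell
# Print the KNI interface drop counters once per second for five seconds
for i in 1 2 3 4 5; do
    echo "rx_dropped=$(cat /sys/class/net/vEth0/statistics/rx_dropped)" \
         "tx_dropped=$(cat /sys/class/net/vEth0/statistics/tx_dropped)"
    sleep 1
done
```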

Thanks,
Reshma

> -----Original Message-----
> From: users [mailto:users-bounces@dpdk.org] On Behalf Of Ilir Iljazi
> Sent: Thursday, December 3, 2015 9:55 PM
> To: users@dpdk.org
> Subject: [dpdk-users] DPDK KNI Issue
> 
> Hi,
> I have been having an issue with dpdk kni whereby I cant send and receive
> packets from the kni interface. I spent about a week trying to figure it out the
> issue myself to no avail. Although I did find articles with a similar signature to
> mine none of the proposed solutions helped solve the problem.
> 
> Environment:
> Ubuntu Server 14.04
> DPDK Package 2.1.0 (Latest)
> Network Card: (10Gbe ixgbe driver)
> 
> 06:00.0 Ethernet controller: Intel Corporation 82599ES 10-Gigabit SFI/SFP+
> Network Connection
> 06:00.1 Ethernet controller: Intel Corporation 82599ES 10-Gigabit SFI/SFP+
> Network Connection
> 
> 06.00.0 (port 0 connected to switch)
> 06:00.1 (port 1 not connected to switch)
> 
> Configuration:
> 1.) DPDK built without issue
> 2.) Modules Loaded:
> 
> insmod $RTE_TARGET/kmod/igb_uio.ko
> insmod $RTE_TARGET/kmod/rte_kni.ko kthread_mode=multiple
> 
> 
> 3.) Reserved Huge Pages:
> 
> echo 4096 >
> /sys/devices/system/node/node0/hugepages/hugepages-2048kB/nr_hugepages
> echo 4096 >
> /sys/devices/system/node/node1/hugepages/hugepages-2048kB/nr_hugepages
> 
> 
> 4.) Mounted huge page partition
> 
> echo ">>> Mounting huge page partition"
> mkdir -p /mnt/huge
> mount -t hugetlbfs nodev /mnt/huge
> 
> 
> 5.) Interfaces 06:00.0/1 bound to igb uio module (option 19 on setup)
> 
> Network devices using DPDK-compatible driver
> ============================================
> 0000:06:00.0 '82599ES 10-Gigabit SFI/SFP+ Network Connection' drv=igb_uio
> unused=
> 0000:06:00.1 '82599ES 10-Gigabit SFI/SFP+ Network Connection' drv=igb_uio
> unused=
> 
> 
> 6.) Started kni test application:
> 
> Command: ./examples/kni/build/app/kni -n 4 -c 0xff -- -p 0x1 -P --
> config="(0,5,7)" &
> 
> Output:
> 
> EAL: PCI device 0000:06:00.0 on NUMA socket -1
> EAL:   probe driver: 8086:10fb rte_ixgbe_pmd
> EAL:   PCI memory mapped at 0x7fcda5c00000
> EAL:   PCI memory mapped at 0x7fcda5c80000
> PMD: eth_ixgbe_dev_init(): MAC: 2, PHY: 18, SFP+: 5
> PMD: eth_ixgbe_dev_init(): port 0 vendorID=0x8086 deviceID=0x10fb
> EAL: PCI device 0000:06:00.1 on NUMA socket -1
> EAL:   probe driver: 8086:10fb rte_ixgbe_pmd
> EAL:   PCI memory mapped at 0x7fcda5c84000
> EAL:   PCI memory mapped at 0x7fcda5d04000
> PMD: eth_ixgbe_dev_init(): MAC: 2, PHY: 18, SFP+: 6
> PMD: eth_ixgbe_dev_init(): port 1 vendorID=0x8086 deviceID=0x10fb
> APP: Port ID: 0
> APP: Rx lcore ID: 5, Tx lcore ID: 7
> APP: Initialising port 0 ...
> PMD: ixgbe_dev_rx_queue_setup(): sw_ring=0x7fcd5c1adcc0
> sw_sc_ring=0x7fcd5c1ad780 hw_ring=0x7fcd5c1ae200 dma_addr=0xe5b1ae200
> PMD: ixgbe_dev_tx_queue_setup(): sw_ring=0x7fcd5c19b5c0
> hw_ring=0x7fcd5c19d600 dma_addr=0xe5b19d600
> PMD: ixgbe_set_tx_function(): Using simple tx code path
> PMD: ixgbe_set_tx_function(): Vector tx enabled.
> PMD: ixgbe_set_rx_function(): Vector rx enabled, please make sure RX burst size
> no less than 32.
> KNI: pci: 06:00:00  8086:10fb
> 
> 
> Checking link status
> done
> Port 0 Link Up - speed 10000 Mbps - full-duplex
> APP: Lcore 1 has nothing to do
> APP: Lcore 2 has nothing to do
> APP: Lcore 3 has nothing to do
> APP: Lcore 4 has nothing to do
> APP: Lcore 5 is reading from port 0
> APP: Lcore 6 has nothing to do
> APP: Lcore 7 is writing to port 0
> APP: Lcore 0 has nothing to do
> 
> 
> 7.) KNI interface configured and brought up:
> 
> root@l3sys2-acc2-3329:~/dpdk-2.1.0# ifconfig vEth0 192.168.13.95 netmask
> 255.255.248.0 up
> APP: Configure network interface of 0 up
> PMD: ixgbe_set_rx_function(): Vector rx enabled, please make sure RX burst size
> no less than 32.
> 
> root@l3sys2-acc2-3329:~/dpdk-2.1.0# ifconfig vEth0
> 
> vEth0     Link encap:Ethernet  HWaddr 90:e2:ba:55:fd:c4
>           inet addr:192.168.13.95  Bcast:192.168.15.255  Mask:255.255.248.0
>           inet6 addr: fe80::92e2:baff:fe55:fdc4/64 Scope:Link
>           UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
>           RX packets:0 errors:0 dropped:0 overruns:0 frame:0
>           TX packets:0 errors:0 dropped:8 overruns:0 carrier:0
>           collisions:0 txqueuelen:1000
>           RX bytes:0 (0.0 B)  TX bytes:0 (0.0 B)
> 
> Note also that dmesg is clean not pointing to any issues:
> [ 1770.113952] KNI: /dev/kni opened
> [ 1770.561957] KNI: Creating kni...
> [ 1770.561973] KNI: tx_phys:      0x0000000e5b1ca9c0, tx_q addr:
> 0xffff880e5b1ca9c0
> [ 1770.561974] KNI: rx_phys:      0x0000000e5b1c8940, rx_q addr:
> 0xffff880e5b1c8940
> [ 1770.561975] KNI: alloc_phys:   0x0000000e5b1c68c0, alloc_q addr:
> 0xffff880e5b1c68c0
> [ 1770.561976] KNI: free_phys:    0x0000000e5b1c4840, free_q addr:
> 0xffff880e5b1c4840
> [ 1770.561977] KNI: req_phys:     0x0000000e5b1c27c0, req_q addr:
> 0xffff880e5b1c27c0
> [ 1770.561978] KNI: resp_phys:    0x0000000e5b1c0740, resp_q addr:
> 0xffff880e5b1c0740
> [ 1770.561979] KNI: mbuf_phys:    0x000000006727dec0, mbuf_kva:
> 0xffff88006727dec0
> [ 1770.561980] KNI: mbuf_va:      0x00007fcd8627dec0
> [ 1770.561981] KNI: mbuf_size:    2048
> [ 1770.561987] KNI: pci_bus: 06:00:00
> [ 1770.599689] igb_uio 0000:06:00.0: (PCI Express:5.0GT/s:Width x8)
> [ 1770.599691] 90:e2:ba:55:fd:c4
> [ 1770.599777] igb_uio 0000:06:00.0 (unnamed net_device) (uninitialized):
> MAC: 2, PHY: 0, PBA No: E68793-006
> [ 1770.599779] igb_uio 0000:06:00.0 (unnamed net_device) (uninitialized):
> Enabled Features: RxQ: 1 TxQ: 1
> [ 1770.599790] igb_uio 0000:06:00.0 (unnamed net_device) (uninitialized):
> Intel(R) 10 Gigabit Network Connection
> 
> 
> 8.) ethtool vEth0 link is detected:
> 
> root@l3sys2-acc2-3329:~/dpdk-2.1.0# ethtool vEth0
> Settings for vEth0:
> Supported ports: [ FIBRE ]
> Supported link modes:   10000baseT/Full
> Supported pause frame use: No
> Supports auto-negotiation: No
> Advertised link modes:  10000baseT/Full
> Advertised pause frame use: No
> Advertised auto-negotiation: No
> Speed: 10000Mb/s
> Duplex: Full
> Port: Other
> PHYAD: 0
> Transceiver: external
> Auto-negotiation: off
> Supports Wake-on: d
> Wake-on: d
> Current message level: 0x00000007 (7)
>       drv probe link
> Link detected: yes
> 
> 
> 9.) kernel started with: iommu=pt intel_iommu=on
> 
> GRUB_CMDLINE_LINUX="iommu=pt intel_iommu=on console=tty1
> console=ttyS1,115200n8"
> 
> 
> 10.) Disabled virtualization in BIOS per forum recommendation
> 
> 
> Situation:
> Despite seemingly doing everything correctly, I can't ssh or ping to or from
> this interface. I started tcpdump on the interface but didn't see any
> traffic. I'm not sure what I'm doing wrong; if I could get some support I'd
> appreciate it. I can provide additional details from the system if needed.
> 
> Thanks!

^ permalink raw reply	[flat|nested] 5+ messages in thread

* [dpdk-users] DPDK KNI Issue
@ 2015-12-03 21:54 Ilir Iljazi
  2015-12-04 12:27 ` Pattan, Reshma
  0 siblings, 1 reply; 5+ messages in thread
From: Ilir Iljazi @ 2015-12-03 21:54 UTC (permalink / raw)
  To: users

Hi,
I have been having an issue with DPDK KNI whereby I can't send or receive
packets on the KNI interface. I spent about a week trying to figure out the
issue myself, to no avail. Although I did find articles with a similar
signature to mine, none of the proposed solutions solved the problem.

Environment:
Ubuntu Server 14.04
DPDK 2.1.0 (latest at the time of writing)
Network card: Intel 10GbE (ixgbe driver)

06:00.0 Ethernet controller: Intel Corporation 82599ES 10-Gigabit SFI/SFP+
Network Connection
06:00.1 Ethernet controller: Intel Corporation 82599ES 10-Gigabit SFI/SFP+
Network Connection

06:00.0 (port 0, connected to switch)
06:00.1 (port 1, not connected to switch)

Configuration:
1.) DPDK built without issue
2.) Modules Loaded:

insmod $RTE_TARGET/kmod/igb_uio.ko
insmod $RTE_TARGET/kmod/rte_kni.ko kthread_mode=multiple
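As a quick sanity check at this step (a sketch, not from the original post), the module loads can be confirmed with lsmod before going further; kthread_mode=multiple gives each KNI device its own kernel RX thread:

```shell
# Confirm both DPDK kernel modules are resident after insmod
lsmod | grep -E 'igb_uio|rte_kni'
```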


3.) Reserved Huge Pages:

echo 4096 >
/sys/devices/system/node/node0/hugepages/hugepages-2048kB/nr_hugepages
echo 4096 >
/sys/devices/system/node/node1/hugepages/hugepages-2048kB/nr_hugepages
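To confirm the reservation actually took effect, the per-node counts can be read back from sysfs and the total computed from /proc/meminfo (a sketch; the node0/node1 paths assume this two-socket box):

```shell
# Per-NUMA-node count of reserved 2 MB huge pages
grep -H . /sys/devices/system/node/node*/hugepages/hugepages-2048kB/nr_hugepages
# Total huge-page memory in MB: HugePages_Total pages * 2 MB per page
awk '/^HugePages_Total/ { print $2 * 2, "MB" }' /proc/meminfo
```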


4.) Mounted huge page partition

echo ">>> Mounting huge page partition"
mkdir -p /mnt/huge
mount -t hugetlbfs nodev /mnt/huge
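The mount can be verified, and optionally made persistent across reboots, with something like the following (the fstab line is a suggestion, not part of the original setup):

```shell
# Confirm the hugetlbfs mount is present
mount | grep hugetlbfs
# Optional: persist it across reboots
echo 'nodev /mnt/huge hugetlbfs defaults 0 0' >> /etc/fstab
```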


5.) Interfaces 06:00.0/1 bound to the igb_uio module (option 19 in the setup script)

Network devices using DPDK-compatible driver
============================================
0000:06:00.0 '82599ES 10-Gigabit SFI/SFP+ Network Connection' drv=igb_uio
unused=
0000:06:00.1 '82599ES 10-Gigabit SFI/SFP+ Network Connection' drv=igb_uio
unused=
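The same binding status can be re-queried at any time with the bind script that ships in DPDK 2.1.0's tools/ directory, or read straight from sysfs (the PCI address below is this system's; adjust as needed):

```shell
# Re-print the status table shown above
./tools/dpdk_nic_bind.py --status
# Which kernel driver currently owns the device, per sysfs
basename "$(readlink /sys/bus/pci/devices/0000:06:00.0/driver)"
```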


6.) Started kni test application:

Command: ./examples/kni/build/app/kni -n 4 -c 0xff -- -p 0x1 -P
--config="(0,5,7)" &
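For reference, --config takes one "(port,lcore_rx,lcore_tx)" triple per enabled port; here port 0 is polled by lcore 5 (RX) and drained by lcore 7 (TX), which matches the "Lcore 5 is reading" / "Lcore 7 is writing" lines in the output. A hypothetical two-port variant (both ports enabled via -p 0x3) might look like:

```shell
# Sketch only: two ports, each with its own RX and TX lcore
./examples/kni/build/app/kni -n 4 -c 0xff -- -p 0x3 -P \
    --config="(0,1,2),(1,3,4)"
```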

Output:

EAL: PCI device 0000:06:00.0 on NUMA socket -1
EAL:   probe driver: 8086:10fb rte_ixgbe_pmd
EAL:   PCI memory mapped at 0x7fcda5c00000
EAL:   PCI memory mapped at 0x7fcda5c80000
PMD: eth_ixgbe_dev_init(): MAC: 2, PHY: 18, SFP+: 5
PMD: eth_ixgbe_dev_init(): port 0 vendorID=0x8086 deviceID=0x10fb
EAL: PCI device 0000:06:00.1 on NUMA socket -1
EAL:   probe driver: 8086:10fb rte_ixgbe_pmd
EAL:   PCI memory mapped at 0x7fcda5c84000
EAL:   PCI memory mapped at 0x7fcda5d04000
PMD: eth_ixgbe_dev_init(): MAC: 2, PHY: 18, SFP+: 6
PMD: eth_ixgbe_dev_init(): port 1 vendorID=0x8086 deviceID=0x10fb
APP: Port ID: 0
APP: Rx lcore ID: 5, Tx lcore ID: 7
APP: Initialising port 0 ...
PMD: ixgbe_dev_rx_queue_setup(): sw_ring=0x7fcd5c1adcc0
sw_sc_ring=0x7fcd5c1ad780 hw_ring=0x7fcd5c1ae200 dma_addr=0xe5b1ae200
PMD: ixgbe_dev_tx_queue_setup(): sw_ring=0x7fcd5c19b5c0
hw_ring=0x7fcd5c19d600 dma_addr=0xe5b19d600
PMD: ixgbe_set_tx_function(): Using simple tx code path
PMD: ixgbe_set_tx_function(): Vector tx enabled.
PMD: ixgbe_set_rx_function(): Vector rx enabled, please make sure RX burst
size no less than 32.
KNI: pci: 06:00:00  8086:10fb


Checking link status
done
Port 0 Link Up - speed 10000 Mbps - full-duplex
APP: Lcore 1 has nothing to do
APP: Lcore 2 has nothing to do
APP: Lcore 3 has nothing to do
APP: Lcore 4 has nothing to do
APP: Lcore 5 is reading from port 0
APP: Lcore 6 has nothing to do
APP: Lcore 7 is writing to port 0
APP: Lcore 0 has nothing to do


7.) KNI interface configured and brought up:

root@l3sys2-acc2-3329:~/dpdk-2.1.0# ifconfig vEth0 192.168.13.95 netmask
255.255.248.0 up
APP: Configure network interface of 0 up
PMD: ixgbe_set_rx_function(): Vector rx enabled, please make sure RX burst
size no less than 32.

root@l3sys2-acc2-3329:~/dpdk-2.1.0# ifconfig vEth0

vEth0     Link encap:Ethernet  HWaddr 90:e2:ba:55:fd:c4
          inet addr:192.168.13.95  Bcast:192.168.15.255  Mask:255.255.248.0
          inet6 addr: fe80::92e2:baff:fe55:fdc4/64 Scope:Link
          UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
          RX packets:0 errors:0 dropped:0 overruns:0 frame:0
          TX packets:0 errors:0 dropped:8 overruns:0 carrier:0
          collisions:0 txqueuelen:1000
          RX bytes:0 (0.0 B)  TX bytes:0 (0.0 B)
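One detail worth noting in the output above: the byte counters are zero, yet TX already shows dropped:8. On a KNI interface, TX drops on the kernel side often indicate that frames handed to rte_kni are not being drained by the userspace application (the app-facing ring filling up, or no free mbufs to allocate), so the drop counter itself is a useful probe. A way to watch both directions while generating traffic (ping run from a peer on the same subnet; addresses are from the post):

```shell
# Kernel side of the KNI pair: does anything arrive at all?
tcpdump -ni vEth0 icmp &
# From another host on the 192.168.8.0/21 subnet:
ping -c 3 192.168.13.95
# Re-read the counters afterwards and compare the dropped field
ip -s link show vEth0
```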

Note also that dmesg is clean, not pointing to any issues:
[ 1770.113952] KNI: /dev/kni opened
[ 1770.561957] KNI: Creating kni...
[ 1770.561973] KNI: tx_phys:      0x0000000e5b1ca9c0, tx_q addr:
0xffff880e5b1ca9c0
[ 1770.561974] KNI: rx_phys:      0x0000000e5b1c8940, rx_q addr:
0xffff880e5b1c8940
[ 1770.561975] KNI: alloc_phys:   0x0000000e5b1c68c0, alloc_q addr:
0xffff880e5b1c68c0
[ 1770.561976] KNI: free_phys:    0x0000000e5b1c4840, free_q addr:
0xffff880e5b1c4840
[ 1770.561977] KNI: req_phys:     0x0000000e5b1c27c0, req_q addr:
0xffff880e5b1c27c0
[ 1770.561978] KNI: resp_phys:    0x0000000e5b1c0740, resp_q addr:
0xffff880e5b1c0740
[ 1770.561979] KNI: mbuf_phys:    0x000000006727dec0, mbuf_kva:
0xffff88006727dec0
[ 1770.561980] KNI: mbuf_va:      0x00007fcd8627dec0
[ 1770.561981] KNI: mbuf_size:    2048
[ 1770.561987] KNI: pci_bus: 06:00:00
[ 1770.599689] igb_uio 0000:06:00.0: (PCI Express:5.0GT/s:Width x8)
[ 1770.599691] 90:e2:ba:55:fd:c4
[ 1770.599777] igb_uio 0000:06:00.0 (unnamed net_device) (uninitialized):
MAC: 2, PHY: 0, PBA No: E68793-006
[ 1770.599779] igb_uio 0000:06:00.0 (unnamed net_device) (uninitialized):
Enabled Features: RxQ: 1 TxQ: 1
[ 1770.599790] igb_uio 0000:06:00.0 (unnamed net_device) (uninitialized):
Intel(R) 10 Gigabit Network Connection


8.) ethtool vEth0 link is detected:

root@l3sys2-acc2-3329:~/dpdk-2.1.0# ethtool vEth0
Settings for vEth0:
Supported ports: [ FIBRE ]
Supported link modes:   10000baseT/Full
Supported pause frame use: No
Supports auto-negotiation: No
Advertised link modes:  10000baseT/Full
Advertised pause frame use: No
Advertised auto-negotiation: No
Speed: 10000Mb/s
Duplex: Full
Port: Other
PHYAD: 0
Transceiver: external
Auto-negotiation: off
Supports Wake-on: d
Wake-on: d
Current message level: 0x00000007 (7)
      drv probe link
Link detected: yes


9.) Kernel booted with: iommu=pt intel_iommu=on

GRUB_CMDLINE_LINUX="iommu=pt intel_iommu=on console=tty1
console=ttyS1,115200n8"
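For completeness, a GRUB_CMDLINE_LINUX edit only takes effect after regenerating the GRUB config and rebooting; the running kernel's command line can then be checked directly (a sketch, assuming the stock Ubuntu grub2 layout):

```shell
update-grub    # regenerates /boot/grub/grub.cfg from /etc/default/grub
reboot
# After the reboot, confirm the flag reached the running kernel:
grep -c 'intel_iommu=on' /proc/cmdline    # prints 1 if present
```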


10.) Disabled virtualization in BIOS per forum recommendation


Situation:
Despite seemingly doing everything correctly, I can't ssh or ping to or from
this interface. I started tcpdump on the interface but didn't see any
traffic. I'm not sure what I'm doing wrong; if I could get some support I'd
appreciate it. I can provide additional details from the system if needed.

Thanks!

^ permalink raw reply	[flat|nested] 5+ messages in thread

end of thread, other threads:[~2015-12-07  9:58 UTC | newest]

Thread overview: 5+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2015-11-30 22:09 [dpdk-users] DPDK KNI Issue Ilir Iljazi
2015-12-03 21:54 Ilir Iljazi
2015-12-04 12:27 ` Pattan, Reshma
2015-12-04 16:15   ` Ilir Iljazi
     [not found]   ` <3AEA2BF9852C6F48A459DA490692831FF89F16@IRSMSX109.ger.corp.intel.com>
     [not found]     ` <CAPh65stm9JQQDHYA15OiKrpaeAkeDGQSSLmWy_ZUGWkLHW8irA@mail.gmail.com>
2015-12-07  9:58       ` Pattan, Reshma
