* [dpdk-users] How to test l3fwd?
@ 2016-07-20 20:02 Charlie Li
  2016-07-20 20:06 ` Charlie Li
  0 siblings, 1 reply; 5+ messages in thread

From: Charlie Li @ 2016-07-20 20:02 UTC (permalink / raw)
To: users

Hello,

My setup is dpdk-2.2.0 on Fedora 23 Server with kernel 4.5.7.

I have been testing L2 throughput with l2fwd and an Ixia traffic generator. It works as expected.

Command: ./l2fwd -c 0xf -n 4 -- -p 0x3
Ixia traffic: MAC (Ethernet frames)

Now I am moving on to test L3 throughput with l3fwd, but I cannot start traffic from the Ixia.

Command: ./l3fwd -c 0xf -n 4 -- -p 0x3 --config="(0,0,2)(1,0,3)"
Ixia traffic: IPv4 (IP packets)

My question is: what are the IP addresses of the two ports?

"LPM: Adding route 0x01010100 / 24 (0)
 LPM: Adding route 0x02010100 / 24 (1)"

Does this mean the IP addresses are 1.1.1.0 (netmask 255.255.255.0) for port 0 and 2.1.1.0 (netmask 255.255.255.0) for port 1?

I set up the following two flows, but the Ixia reports "unreachable":

Flow1: Ixia PortA (1.1.1.100) -> DPDK Port0 (1.1.1.0) ...(l3fwd)... DPDK Port1 (2.1.1.0) -> Ixia PortB (2.1.1.100)
       Src IP: 1.1.1.100; Dst IP: 2.1.1.100; Gateway: 1.1.1.0

Flow2: Ixia PortB (2.1.1.100) -> DPDK Port1 (2.1.1.0) ...(l3fwd)... DPDK Port0 (1.1.1.0) -> Ixia PortA (1.1.1.100)
       Src IP: 2.1.1.100; Dst IP: 1.1.1.100; Gateway: 2.1.1.0

Thanks,
Charlie

^ permalink raw reply	[flat|nested] 5+ messages in thread
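As an editor's aside: the hex values in those LPM log lines are the route prefixes in host byte order, so 0x01010100/24 is 1.1.1.0/24 (out port 0) and 0x02010100/24 is 2.1.1.0/24 (out port 1). A minimal sketch decoding them with Python's stdlib `ipaddress` module (an illustration added here, not part of the original mail):

```python
import ipaddress

# l3fwd's built-in LPM routes as logged in hex: prefix -> output port.
routes = {0x01010100: 0, 0x02010100: 1}

for prefix, port in routes.items():
    # ip_network accepts an (integer address, prefix length) tuple.
    net = ipaddress.ip_network((prefix, 24))
    print(f"{net} -> port {port}")
# prints:
# 1.1.1.0/24 -> port 0
# 2.1.1.0/24 -> port 1
```

Note these are forwarding routes, not addresses assigned to the ports; as the rest of the thread shows, generated traffic only needs a destination IP inside one of these subnets.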
* Re: [dpdk-users] How to test l3fwd?
  2016-07-20 20:06 ` Charlie Li
  0 siblings, 1 reply; 5+ messages in thread

From: Charlie Li @ 2016-07-20 20:06 UTC (permalink / raw)
To: users

I am also attaching the full logs from l3fwd and l2fwd.

On Wed, Jul 20, 2016 at 3:02 PM, Charlie Li <charlie.li@gmail.com> wrote:
> Hello,
> [full text of the original message snipped]

-------------- next part --------------
[cli@cli-desktop l3fwd]$ sudo -E ./build/l3fwd -c 0xf -n 4 -- -p 0x3 --config="(0,0,2)(1,0,3)"
EAL: Detected lcore 0 as core 0 on socket 0
EAL: Detected lcore 1 as core 1 on socket 0
EAL: Detected lcore 2 as core 2 on socket 0
EAL: Detected lcore 3 as core 3 on socket 0
EAL: Support maximum 128 logical core(s) by configuration.
EAL: Detected 4 lcore(s)
EAL: VFIO modules not all loaded, skip VFIO support...
EAL: Setting up physically contiguous memory...
EAL: Ask a virtual area of 0x40000000 bytes
EAL: Virtual area found at 0x7f4e80000000 (size = 0x40000000)
EAL: Requesting 1 pages of size 1024MB from socket 0
EAL: TSC frequency is ~2096067 KHz
EAL: Master lcore 0 is ready (tid=5486e8c0;cpuset=[0])
EAL: lcore 1 is ready (tid=53149700;cpuset=[1])
EAL: lcore 2 is ready (tid=52948700;cpuset=[2])
EAL: lcore 3 is ready (tid=52147700;cpuset=[3])
EAL: PCI device 0000:02:00.0 on NUMA socket -1
EAL:   probe driver: 8086:10fb rte_ixgbe_pmd
EAL:   PCI memory mapped at 0x7f4ec0000000
EAL:   PCI memory mapped at 0x7f4ec0080000
PMD: eth_ixgbe_dev_init(): MAC: 2, PHY: 18, SFP+: 5
PMD: eth_ixgbe_dev_init(): port 0 vendorID=0x8086 deviceID=0x10fb
EAL: PCI device 0000:02:00.1 on NUMA socket -1
EAL:   probe driver: 8086:10fb rte_ixgbe_pmd
EAL:   PCI memory mapped at 0x7f4ec0084000
EAL:   PCI memory mapped at 0x7f4ec0104000
PMD: eth_ixgbe_dev_init(): MAC: 2, PHY: 18, SFP+: 6
PMD: eth_ixgbe_dev_init(): port 1 vendorID=0x8086 deviceID=0x10fb
Initializing port 0 ... Creating queues: nb_rxq=1 nb_txq=4...
Address:90:E2:BA:4F:3F:B0, Destination:02:00:00:00:00:00, Allocated mbuf pool on socket 0
LPM: Adding route 0x01010100 / 24 (0)
LPM: Adding route 0x02010100 / 24 (1)
LPM: Adding route IPV6 / 48 (0)
LPM: Adding route IPV6 / 48 (1)
txq=0,0,0 PMD: ixgbe_dev_tx_queue_setup(): sw_ring=0x7f4eb48d69c0 hw_ring=0x7f4eb48d8a00 dma_addr=0x1b48d8a00
PMD: ixgbe_set_tx_function(): Using simple tx code path
PMD: ixgbe_set_tx_function(): Vector tx enabled.
txq=1,1,0 PMD: ixgbe_dev_tx_queue_setup(): sw_ring=0x7f4eb48c4840 hw_ring=0x7f4eb48c6880 dma_addr=0x1b48c6880
PMD: ixgbe_set_tx_function(): Using simple tx code path
PMD: ixgbe_set_tx_function(): Vector tx enabled.
txq=2,2,0 PMD: ixgbe_dev_tx_queue_setup(): sw_ring=0x7f4eb48b26c0 hw_ring=0x7f4eb48b4700 dma_addr=0x1b48b4700
PMD: ixgbe_set_tx_function(): Using simple tx code path
PMD: ixgbe_set_tx_function(): Vector tx enabled.
txq=3,3,0 PMD: ixgbe_dev_tx_queue_setup(): sw_ring=0x7f4eb48a0540 hw_ring=0x7f4eb48a2580 dma_addr=0x1b48a2580
PMD: ixgbe_set_tx_function(): Using simple tx code path
PMD: ixgbe_set_tx_function(): Vector tx enabled.
Initializing port 1 ... Creating queues: nb_rxq=1 nb_txq=4...
Address:90:E2:BA:4F:3F:B1, Destination:02:00:00:00:00:01,
txq=0,0,0 PMD: ixgbe_dev_tx_queue_setup(): sw_ring=0x7f4eb488e2c0 hw_ring=0x7f4eb4890300 dma_addr=0x1b4890300
PMD: ixgbe_set_tx_function(): Using simple tx code path
PMD: ixgbe_set_tx_function(): Vector tx enabled.
txq=1,1,0 PMD: ixgbe_dev_tx_queue_setup(): sw_ring=0x7f4eb487c140 hw_ring=0x7f4eb487e180 dma_addr=0x1b487e180
PMD: ixgbe_set_tx_function(): Using simple tx code path
PMD: ixgbe_set_tx_function(): Vector tx enabled.
txq=2,2,0 PMD: ixgbe_dev_tx_queue_setup(): sw_ring=0x7f4eb4869fc0 hw_ring=0x7f4eb486c000 dma_addr=0x1b486c000
PMD: ixgbe_set_tx_function(): Using simple tx code path
PMD: ixgbe_set_tx_function(): Vector tx enabled.
txq=3,3,0 PMD: ixgbe_dev_tx_queue_setup(): sw_ring=0x7f4eb4857e40 hw_ring=0x7f4eb4859e80 dma_addr=0x1b4859e80
PMD: ixgbe_set_tx_function(): Using simple tx code path
PMD: ixgbe_set_tx_function(): Vector tx enabled.
Initializing rx queues on lcore 0 ...
Initializing rx queues on lcore 1 ...
Initializing rx queues on lcore 2 ... rxq=0,0,0 PMD: ixgbe_dev_rx_queue_setup(): sw_ring=0x7f4eb4847540 sw_sc_ring=0x7f4eb4847000 hw_ring=0x7f4eb4847a80 dma_addr=0x1b4847a80
Initializing rx queues on lcore 3 ...
rxq=1,0,0 PMD: ixgbe_dev_rx_queue_setup(): sw_ring=0x7f4eb48366c0 sw_sc_ring=0x7f4eb4836180 hw_ring=0x7f4eb4836c00 dma_addr=0x1b4836c00
PMD: ixgbe_set_rx_function(): Port[0] doesn't meet Vector Rx preconditions or RTE_IXGBE_INC_VECTOR is not enabled
PMD: ixgbe_set_rx_function(): Rx Burst Bulk Alloc Preconditions are satisfied. Rx Burst Bulk Alloc function will be used on port=0.
PMD: ixgbe_set_rx_function(): Port[1] doesn't meet Vector Rx preconditions or RTE_IXGBE_INC_VECTOR is not enabled
PMD: ixgbe_set_rx_function(): Rx Burst Bulk Alloc Preconditions are satisfied. Rx Burst Bulk Alloc function will be used on port=1.

Checking link status ... done
Port 0 Link Up - speed 10000 Mbps - full-duplex
Port 1 Link Up - speed 10000 Mbps - full-duplex
L3FWD: lcore 1 has nothing to do
L3FWD: entering main loop on lcore 2
L3FWD:  -- lcoreid=2 portid=0 rxqueueid=0
L3FWD: entering main loop on lcore 3
L3FWD:  -- lcoreid=3 portid=1 rxqueueid=0
L3FWD: lcore 0 has nothing to do

-------------- next part --------------
[cli@cli-desktop l2fwd]$ sudo -E ./build/l2fwd -c 0xf -n 4 -- -p 0x3
EAL: Detected lcore 0 as core 0 on socket 0
EAL: Detected lcore 1 as core 1 on socket 0
EAL: Detected lcore 2 as core 2 on socket 0
EAL: Detected lcore 3 as core 3 on socket 0
EAL: Support maximum 128 logical core(s) by configuration.
EAL: Detected 4 lcore(s)
EAL: VFIO modules not all loaded, skip VFIO support...
EAL: Setting up physically contiguous memory...
EAL: Ask a virtual area of 0x40000000 bytes
EAL: Virtual area found at 0x7f3c40000000 (size = 0x40000000)
EAL: Requesting 1 pages of size 1024MB from socket 0
EAL: TSC frequency is ~2096066 KHz
EAL: Master lcore 0 is ready (tid=1a0a8c0;cpuset=[0])
EAL: lcore 1 is ready (tid=2fa700;cpuset=[1])
EAL: lcore 3 is ready (tid=ff2f8700;cpuset=[3])
EAL: lcore 2 is ready (tid=ffaf9700;cpuset=[2])
EAL: PCI device 0000:02:00.0 on NUMA socket -1
EAL:   probe driver: 8086:10fb rte_ixgbe_pmd
EAL:   PCI memory mapped at 0x7f3c80000000
EAL:   PCI memory mapped at 0x7f3c80080000
PMD: eth_ixgbe_dev_init(): MAC: 2, PHY: 18, SFP+: 5
PMD: eth_ixgbe_dev_init(): port 0 vendorID=0x8086 deviceID=0x10fb
EAL: PCI device 0000:02:00.1 on NUMA socket -1
EAL:   probe driver: 8086:10fb rte_ixgbe_pmd
EAL:   PCI memory mapped at 0x7f3c80084000
EAL:   PCI memory mapped at 0x7f3c80104000
PMD: eth_ixgbe_dev_init(): MAC: 2, PHY: 18, SFP+: 6
PMD: eth_ixgbe_dev_init(): port 1 vendorID=0x8086 deviceID=0x10fb
Lcore 0: RX port 0
Lcore 1: RX port 1
Initializing port 0...
PMD: ixgbe_dev_rx_queue_setup(): sw_ring=0x7f3c7e8fed40 sw_sc_ring=0x7f3c7e8fe800 hw_ring=0x7f3c7e8ff280 dma_addr=0x1be8ff280
PMD: ixgbe_dev_tx_queue_setup(): sw_ring=0x7f3c7e8ec640 hw_ring=0x7f3c7e8ee680 dma_addr=0x1be8ee680
PMD: ixgbe_set_tx_function(): Using simple tx code path
PMD: ixgbe_set_tx_function(): Vector tx enabled.
PMD: ixgbe_set_rx_function(): Vector rx enabled, please make sure RX burst size no less than 4 (port=0).
done:
Port 0, MAC address: 90:E2:BA:4F:3F:B0
Initializing port 1...
PMD: ixgbe_dev_rx_queue_setup(): sw_ring=0x7f3c7e8dbc40 sw_sc_ring=0x7f3c7e8db700 hw_ring=0x7f3c7e8dc180 dma_addr=0x1be8dc180
PMD: ixgbe_dev_tx_queue_setup(): sw_ring=0x7f3c7e8c9540 hw_ring=0x7f3c7e8cb580 dma_addr=0x1be8cb580
PMD: ixgbe_set_tx_function(): Using simple tx code path
PMD: ixgbe_set_tx_function(): Vector tx enabled.
PMD: ixgbe_set_rx_function(): Vector rx enabled, please make sure RX burst size no less than 4 (port=1).
done:
Port 1, MAC address: 90:E2:BA:4F:3F:B1

Checking link status ... done
Port 0 Link Up - speed 10000 Mbps - full-duplex
Port 1 Link Up - speed 10000 Mbps - full-duplex
L2FWD: entering main loop on lcore 1
L2FWD:  -- lcoreid=1 portid=1
L2FWD: lcore 2 has nothing to do
L2FWD: entering main loop on lcore 0
L2FWD:  -- lcoreid=0 portid=0

Port statistics ====================================
Statistics for port 0 ------------------------------
Packets sent:                        0
Packets received:                    0
Packets dropped:                     0
Statistics for port 1 ------------------------------
Packets sent:                        0
Packets received:                    0
Packets dropped:                     0
Aggregate statistics ===============================
Total packets sent:                  0
Total packets received:              0
Total packets dropped:               0
====================================================
L2FWD: lcore 3 has nothing to do
* Re: [dpdk-users] How to test l3fwd?
  2016-07-23  1:18 ` Charlie Li
  0 siblings, 1 reply; 5+ messages in thread

From: Charlie Li @ 2016-07-23 1:18 UTC (permalink / raw)
To: users

Never mind - I figured it out.

On Wed, Jul 20, 2016 at 3:06 PM, Charlie Li <charlie.li@gmail.com> wrote:
> I am also attaching the full logs from l3fwd and l2fwd.
> [rest of quoted thread snipped]
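Editor's note on the `--config` option used throughout this thread: each parenthesized triple maps a (port, rx-queue, lcore). A hedged sketch of how such a string decodes (the function name is mine, not from l3fwd's source):

```python
import re

def parse_l3fwd_config(s):
    """Split an l3fwd --config string into (port, rx_queue, lcore) triples."""
    return [tuple(map(int, m)) for m in re.findall(r"\((\d+),\s*(\d+),\s*(\d+)\)", s)]

# "(0,0,2)(1,0,3)": port 0 rx-queue 0 is polled by lcore 2,
# and port 1 rx-queue 0 by lcore 3.
print(parse_l3fwd_config("(0,0,2)(1,0,3)"))  # [(0, 0, 2), (1, 0, 3)]
```

This matches the startup log earlier in the thread, where lcores 2 and 3 enter the main loop and lcores 0 and 1 report "nothing to do".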
* Re: [dpdk-users] How to test l3fwd?
  2016-07-27 17:51 ` Vincent Li
  0 siblings, 1 reply; 5+ messages in thread

From: Vincent Li @ 2016-07-27 17:51 UTC (permalink / raw)
To: Charlie Li; +Cc: users

Could you please share how you figured out your problem? It appears I have the same kind of network link setup, and no packets are being forwarded. There must be something I am missing, and I am wondering whether your solution would apply to my setup.

Thanks!

Vincent

On Fri, Jul 22, 2016 at 6:18 PM, Charlie Li <charlie.li@gmail.com> wrote:
> Never mind - I figured it out.
> [rest of quoted thread snipped]
* Re: [dpdk-users] How to test l3fwd?
  2016-07-27 20:42 ` Vincent Li
  0 siblings, 0 replies; 5+ messages in thread

From: Vincent Li @ 2016-07-27 20:42 UTC (permalink / raw)
To: Charlie Li; +Cc: users

I figured it out too. For future reference, in case anyone runs into the same kind of issue: I used DPDK pktgen as the packet generator, with the setup below.

SYN flood flow:
pktgen box Port0 (random source IP, destination IP 2.1.1.1) -> DPDK Port0 (1.1.1.0) ..(l3fwd).. DPDK Port1 (2.1.1.0) -> pktgen box Linux-driver NIC (IP 2.1.1.1)

1. pktgen box (I used the mTCP DPDK port, so there is an actual dpdk0 interface in Linux; everything else is the same as upstream DPDK. Whether dpdk0 exists in Linux does not matter here):

21: dpdk0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UNKNOWN group default qlen 1000
    link/ether 00:1b:21:50:bc:38 brd ff:ff:ff:ff:ff:ff
    inet 10.0.0.126/24 brd 10.0.0.255 scope global dpdk0
       valid_lft forever preferred_lft forever
    inet6 fe80::21b:21ff:fe50:bc38/64 scope link
       valid_lft forever preferred_lft forever
8: p1p2: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc mq state UP group default qlen 1000
    link/ether 00:1b:21:50:bc:39 brd ff:ff:ff:ff:ff:ff
    inet 2.1.1.1/24 scope global p1p2
       valid_lft forever preferred_lft forever
    inet6 fe80::21b:21ff:fe50:bc39/64 scope link
       valid_lft forever preferred_lft forever

Relevant pktgen load-test file config:

set ip dst 0 2.1.1.1
set mac 0 a0:36:9f:a1:4d:6c   # <== this must be the MAC of the l3fwd DPDK port 0
dst.ip start 0 2.1.1.1
dst.ip min 0 2.1.1.1
dst.ip max 0 2.1.1.1
dst.ip inc 0 0.0.0.1

Start pktgen:

# ./app/app/x86_64-native-linuxapp-gcc/pktgen -c 0x6ff -- -m [2:2-15].0
Pktgen> load <the load test file>
Pktgen> start 0
Pktgen> page range
Pktgen> page main

- Ports 0-0 of 1   <Main Page>   Copyright (c) <2010-2016>, Intel Corporation
Flags:Port      : ------R--------:0
Link State      : <UP-1000-FD>       ----TotalRate----
Pkts/s Max/Rx   : 0/0                0/0
       Max/Tx   : 1488234/1488196    1488200/1488196
MBits/s Rx/Tx   : 0/952              0/952
Broadcast       : 0

2. l3fwd DPDK box:

root@r210:/home/dpdk/mtcp/dpdk-16.04# ./tools/dpdk_nic_bind.py --status
Network devices using DPDK-compatible driver
============================================
0000:01:00.0 'I350 Gigabit Network Connection' drv=igb_uio unused=
0000:01:00.1 'I350 Gigabit Network Connection' drv=igb_uio unused=

Start l3fwd:

# ./build/l3fwd -c 0xf -- -p 0x3 --config="(0,0,0),(0,1,1),(1,0,2),(1,1,3)"

Try a more complete invocation if the above does not work:

# ./build/l3fwd -c 0xf -- -p 0x3 --eth-dest=<dpdk port 0>,<mac of dpdk port 1> --eth-dest=<dpdk port 1>,<target dest mac> -P --config="(0,0,0),(0,1,1),(1,0,2),(1,1,3)"

......................
L3FWD: LPM or EM none selected, default LPM on
Initializing port 0 ... Creating queues: nb_rxq=2 nb_txq=4...
Address:A0:36:9F:A1:4D:6C, Destination:02:00:00:00:00:00, Allocated mbuf pool on socket 0
LPM: Adding route 0x01010100 / 24 (0)
LPM: Adding route 0x02010100 / 24 (1)
LPM: Adding route IPV6 / 48 (0)
LPM: Adding route IPV6 / 48 (1)
txq=0,0,0 PMD: eth_igb_tx_queue_setup(): sw_ring=0x7f21258be940 hw_ring=0x7f21258c0980 dma_addr=0x5a88c0980
txq=1,1,0 PMD: eth_igb_tx_queue_setup(): sw_ring=0x7f21258ac7c0 hw_ring=0x7f21258ae800 dma_addr=0x5a88ae800
txq=2,2,0 PMD: eth_igb_tx_queue_setup(): sw_ring=0x7f212589a640 hw_ring=0x7f212589c680 dma_addr=0x5a889c680
txq=3,3,0 PMD: eth_igb_tx_queue_setup(): sw_ring=0x7f21258884c0 hw_ring=0x7f212588a500 dma_addr=0x5a888a500
Initializing port 1 ... Creating queues: nb_rxq=2 nb_txq=4...
Address:A0:36:9F:A1:4D:6D, Destination:02:00:00:00:00:01,
txq=0,0,0 PMD: eth_igb_tx_queue_setup(): sw_ring=0x7f2125876240 hw_ring=0x7f2125878280 dma_addr=0x5a8878280
txq=1,1,0 PMD: eth_igb_tx_queue_setup(): sw_ring=0x7f21258640c0 hw_ring=0x7f2125866100 dma_addr=0x5a8866100
txq=2,2,0 PMD: eth_igb_tx_queue_setup(): sw_ring=0x7f2125851f40 hw_ring=0x7f2125853f80 dma_addr=0x5a8853f80
txq=3,3,0 PMD: eth_igb_tx_queue_setup(): sw_ring=0x7f212583fdc0 hw_ring=0x7f2125841e00 dma_addr=0x5a8841e00

3. pktgen box Linux-driver port:

# tcpdump -nn -i p1p2 -c 10
tcpdump: verbose output suppressed, use -v or -vv for full protocol decode
listening on p1p2, link-type EN10MB (Ethernet), capture size 65535 bytes
13:37:07.166150 IP 216.145.81.59.26391 > 2.1.1.1.80: Flags [S], seq 3699016466:3699016472, win 8192, length 6
13:37:07.166152 IP 21.170.7.19.11344 > 2.1.1.1.80: Flags [S], seq 3254728108:3254728114, win 8192, length 6
13:37:07.166152 IP 62.211.26.25.5831 > 2.1.1.1.80: Flags [S], seq 2610626014:2610626020, win 8192, length 6
13:37:07.166153 IP 138.128.151.152.16368 > 2.1.1.1.80: Flags [S], seq 301380038:301380044, win 8192, length 6
13:37:07.166153 IP 254.40.13.171.30996 > 2.1.1.1.80: Flags [S], seq 1583422545:1583422551, win 8192, length 6
13:37:07.166154 IP 153.32.214.3.11340 > 2.1.1.1.80: Flags [S], seq 519927713:519927719, win 8192, length 6
13:37:07.166155 IP 167.247.215.107.21169 > 2.1.1.1.80: Flags [S], seq 3713824255:3713824261, win 8192, length 6
13:37:07.166155 IP 236.183.249.37.2446 > 2.1.1.1.80: Flags [S], seq 2407370042:2407370048, win 8192, length 6
13:37:07.166156 IP 64.57.152.132.5826 > 2.1.1.1.80: Flags [S], seq 1338723318:1338723324, win 8192, length 6
13:37:07.166156 IP 245.146.158.211.2438 > 2.1.1.1.80: Flags [S], seq 210449720:210449726, win 8192, length 6

On Wed, Jul 27, 2016 at 10:51 AM, Vincent Li <vincent.mc.li@gmail.com> wrote:
> could you please share how you figured out your problem?
> [rest of quoted thread snipped]
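Editor's note: the fix above boils down to two requirements. The generator must address frames to the l3fwd port's MAC (the `set mac 0 ...` line), and the destination IP must fall inside one of the built-in /24 routes, because l3fwd selects the output port by longest-prefix match on the destination IP. A small sketch of that lookup (an illustration under those assumptions, not l3fwd's actual `rte_lpm` code):

```python
import ipaddress

# l3fwd's default IPv4 routes: network -> output port.
ROUTES = [
    (ipaddress.ip_network("1.1.1.0/24"), 0),
    (ipaddress.ip_network("2.1.1.0/24"), 1),
]

def lookup(dst_ip):
    """Return the output port for dst_ip via longest-prefix match, or None."""
    ip = ipaddress.ip_address(dst_ip)
    best = None
    for net, port in ROUTES:
        if ip in net and (best is None or net.prefixlen > best[0].prefixlen):
            best = (net, port)
    return best[1] if best else None

print(lookup("2.1.1.1"))    # pktgen's destination IP -> port 1
print(lookup("1.1.1.100"))  # Charlie's Flow2 destination -> port 0
```

This is why random *source* IPs are fine in the SYN-flood flow: only the destination IP has to match a route.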
Thread overview: 5+ messages (newest: 2016-07-27 20:42 UTC)
2016-07-20 20:02 [dpdk-users] How to test l3fwd? Charlie Li
2016-07-20 20:06 ` Charlie Li
2016-07-23  1:18 ` Charlie Li
2016-07-27 17:51 ` Vincent Li
2016-07-27 20:42 ` Vincent Li