* [dpdk-dev] [Bug 183] Problem using cloned rte_mbuf buffers with KNI interface
From: bugzilla @ 2019-01-07 15:29 UTC
  To: dev

https://bugs.dpdk.org/show_bug.cgi?id=183

            Bug ID: 183
           Summary: Problem using cloned rte_mbuf buffers with KNI
                    interface
           Product: DPDK
           Version: 18.11
          Hardware: All
                OS: Linux
            Status: CONFIRMED
          Severity: normal
          Priority: Normal
         Component: other
          Assignee: dev@dpdk.org
          Reporter: dinesh.kp78@gmail.com
  Target Milestone: ---

The problem appears in DPDK 18.11.

We have a scenario in which cloned rte_mbuf packets are sent to a kernel
virtual interface via the KNI API. Everything worked fine up to DPDK 18.05,
but after upgrading to DPDK 18.11 we noticed that empty packets are being
delivered via the KNI interface.

Environment setup
--------------------------

dpdk-devbind.py --status-dev net

Network devices using DPDK-compatible driver
============================================
0000:00:0b.0 '82540EM Gigabit Ethernet Controller 100e' drv=igb_uio unused=e1000
0000:00:0c.0 '82540EM Gigabit Ethernet Controller 100e' drv=igb_uio unused=e1000

Network devices using kernel driver
===================================
0000:00:03.0 'Virtio network device 1000' if=eth0 drv=virtio-pci unused=virtio_pci,igb_uio *Active*
0000:00:04.0 'Virtio network device 1000' if=eth1 drv=virtio-pci unused=virtio_pci,igb_uio *Active*
0000:00:05.0 'Virtio network device 1000' if=eth2 drv=virtio-pci unused=virtio_pci,igb_uio *Active*

DPDK kernel modules loaded
--------------------------
lsmod | grep igb_uio
igb_uio                13506  2 
uio                    19259  5 igb_uio
lsmod | grep rte_kni
rte_kni                28122  1 

Red Hat Enterprise Linux 7
uname -a
3.10.0-862.9.1.el7.x86_64 #1 SMP Wed Jun 27 04:30:39 EDT 2018 x86_64 x86_64 x86_64 GNU/Linux


Problem simulation
--------------------------
To simulate the scenario, I modified the kni_ingress() function in
dpdk-18.11/examples/kni/main.c to call rte_pktmbuf_clone() before sending
packets to the KNI interface:

static void
kni_ingress(struct kni_port_params *p)
{
        uint8_t i;
        uint16_t port_id;
        unsigned nb_rx, num;
        uint16_t k;
        uint32_t nb_kni;
        struct rte_mbuf *pkts_burst[PKT_BURST_SZ];
        struct rte_mbuf *pkt;

        if (p == NULL)
                return;

        nb_kni = p->nb_kni;
        port_id = p->port_id;
        for (i = 0; i < nb_kni; i++) {
                /* Burst rx from eth */
                nb_rx = rte_eth_rx_burst(port_id, 0, pkts_burst, PKT_BURST_SZ);
                if (unlikely(nb_rx > PKT_BURST_SZ)) {
                        RTE_LOG(ERR, APP, "Error receiving from eth\n");
                        return;
                }

                /* ----------- clone pkt start ----------- */
                for (k = 0; k < nb_rx; k++) {
                        pkt = pkts_burst[k];
                        /*
                         * Using 'pkt->pool' for the clones is not an
                         * efficient use of memory; a separate pool with no
                         * room reserved for packet data would be enough,
                         * since a clone only needs new metadata plus a
                         * reference to the original data. For this test
                         * simulation it is fine to reuse the same pool.
                         */
                        pkts_burst[k] = rte_pktmbuf_clone(pkt, pkt->pool);
                        rte_pktmbuf_free(pkt);
                }
                /* ----------- clone pkt end ----------- */

                /* Burst tx to kni */
                num = rte_kni_tx_burst(p->kni[i], pkts_burst, nb_rx);
                if (num)
                        kni_stats[port_id].rx_packets += num;

                rte_kni_handle_request(p->kni[i]);
                if (unlikely(num < nb_rx)) {
                        /* Free mbufs not tx to kni interface */
                        kni_burst_free_mbufs(&pkts_burst[num], nb_rx - num);
                        kni_stats[port_id].rx_dropped += nb_rx - num;
                }
        }
}
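
As mentioned in the comment inside the clone loop, a dedicated pool could be
used for the clones instead of pkt->pool. A minimal sketch of such a pool is
below; the pool name and the reuse of the example's NB_MBUF / MEMPOOL_CACHE_SZ
constants are only illustrative and not part of the original report. Because a
clone carries only metadata plus a reference to the original data, the pool
can be created with a data room size of 0.

/* Illustrative only (not part of the original report): a dedicated pool for
 * clone mbufs. A clone holds no payload of its own, so no data room is
 * reserved. NB_MBUF and MEMPOOL_CACHE_SZ are the constants already defined
 * in examples/kni/main.c. */
static struct rte_mempool *clone_pool;

static void
init_clone_pool(void)
{
        clone_pool = rte_pktmbuf_pool_create("clone_pool", NB_MBUF,
                        MEMPOOL_CACHE_SZ, 0 /* priv_size */,
                        0 /* data_room_size */, rte_socket_id());
        if (clone_pool == NULL)
                rte_exit(EXIT_FAILURE, "Could not create clone mbuf pool\n");
}

With such a pool in place, the clone call above would simply become
pkts_burst[k] = rte_pktmbuf_clone(pkt, clone_pool).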



# /tmp/18.11/kni -l 0-1 -n 4 -b 0000:00:03.0 -b 0000:00:04.0 -b 0000:00:05.0 --proc-type=auto -m 512 -- -p 0x1 -P --config="(0,0,1)"
EAL: Detected 4 lcore(s)
EAL: Detected 1 NUMA nodes
EAL: Auto-detected process type: PRIMARY
EAL: Multi-process socket /var/run/dpdk/rte/mp_socket
EAL: Probing VFIO support...
EAL: WARNING: cpu flags constant_tsc=yes nonstop_tsc=no -> using unreliable
clock cycles !
EAL: PCI device 0000:00:03.0 on NUMA socket -1
EAL:   Device is blacklisted, not initializing
EAL: PCI device 0000:00:04.0 on NUMA socket -1
EAL:   Device is blacklisted, not initializing
EAL: PCI device 0000:00:05.0 on NUMA socket -1
EAL:   Device is blacklisted, not initializing
EAL: PCI device 0000:00:0b.0 on NUMA socket -1
EAL:   Invalid NUMA socket, default to 0
EAL:   probe driver: 8086:100e net_e1000_em
EAL: PCI device 0000:00:0c.0 on NUMA socket -1
EAL:   Invalid NUMA socket, default to 0
EAL:   probe driver: 8086:100e net_e1000_em
APP: Initialising port 0 ...
KNI: pci: 00:0b:00       8086:100e

Checking link status
.....done
Port0 Link Up - speed 1000Mbps - full-duplex
APP: ========================
APP: KNI Running
APP: kill -SIGUSR1 8903
APP:     Show KNI Statistics.
APP: kill -SIGUSR2 8903
APP:     Zero KNI Statistics.
APP: ========================
APP: Lcore 1 is writing to port 0
APP: Lcore 0 is reading from port 0
APP: Configure network interface of 0 up
KNI: Configure promiscuous mode of 0 to 1


Bring up the vEth0 interface created by the kni example app
# ifconfig vEth0 up

Dump the content received on the vEth0 interface
# tcpdump -vv -i vEth0
tcpdump: listening on vEth0, link-type EN10MB (Ethernet), capture size 262144
bytes
13:38:31.982085 [|ether]
13:38:32.050576 [|ether]
13:38:32.099805 [|ether]
13:38:32.151790 [|ether]
13:38:32.206755 [|ether]
13:38:32.253135 [|ether]
13:38:32.298773 [|ether]
13:38:32.345555 [|ether]
13:38:32.388859 [|ether]
13:38:32.467562 [|ether]


On sending packets to the "00:0b:00" interface using tcpreplay, I could see
packets with empty content arriving on "vEth0". I have also occasionally seen
the kni example app crash with a segmentation fault.

After analysing the rte_kni net driver, it appears that the physical-to-virtual
address conversion is not done properly, perhaps because of the memory
management changes in recent DPDK versions. (I can also confirm that the
modified kni example works perfectly fine on DPDK 18.02.)
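
To narrow down where the content is lost, one could log the clone's addresses
on the application side just before rte_kni_tx_burst(). The helper below is a
hypothetical diagnostic sketch (not from the original report), built only from
standard rte_mbuf accessors:

/* Hypothetical diagnostic helper: log the clone's virtual data pointer and
 * the direct (parent) mbuf's buffer addresses before handing it to KNI. */
static void
dump_clone_addrs(struct rte_mbuf *clone)
{
        struct rte_mbuf *direct = rte_mbuf_from_indirect(clone);

        RTE_LOG(INFO, APP,
                "clone data=%p len=%u direct buf_addr=%p buf_iova=0x%llx refcnt=%u\n",
                rte_pktmbuf_mtod(clone, void *), clone->data_len,
                direct->buf_addr, (unsigned long long)direct->buf_iova,
                rte_mbuf_refcnt_read(direct));
}

If these values look sane in userspace while tcpdump still shows empty frames,
the problem most likely lies in the kernel-side address translation done by
the rte_kni module, which would be consistent with the --legacy-mem
observation below.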

As a workaround, I used the --legacy-mem switch during kni app start-up. It
seems promising: I could receive and dump the cloned packets without any
issue.

# /tmp/18.11/kni -l 0-1 -n 4 -b 0000:00:03.0 -b 0000:00:04.0 -b 0000:00:05.0 --proc-type=auto --legacy-mem -m 512 -- -p 0x1 -P --config="(0,0,1)"


Could someone confirm whether this is a bug in DPDK 18.11?

Thanks,
Dinesh

