From: Shiv Dev
Date: Mon, 8 Jan 2018 19:05:34 +0530
To: users@dpdk.org
Subject: [dpdk-users] dpdk docker container over VPP

Hi Team,

I am trying to run DPDK apps inside Docker containers. The containers have
vhost-user interfaces connected to VPP. VPP reports zero memory regions for
these vhost-user interfaces, which suggests that the VHOST_USER_SET_MEM_TABLE
message never reached VPP.

Any ideas on how to debug this from the DPDK end? Is there any way to tell
whether the virtio protocol handshake has been initiated?

Regards,
Shiv

VPP status:
-----------
vpp# show vhost-user
..
Interface: VirtualEthernet0/0/0 (ifindex 2)
virtio_net_hdr_sz 12
 features mask (0xffffffff):
 features (0x10008000):
   VIRTIO_NET_F_MRG_RXBUF (15)
   VIRTIO_F_INDIRECT_DESC (28)
  protocol features (0x0)
 socket filename /tmp/sock2.sock type server errno "Success"

 rx placement:
 tx placement: spin-lock
   thread 0 on vring 0

 Memory regions (total 0)

Interface: VirtualEthernet0/0/1 (ifindex 3)
virtio_net_hdr_sz 12
 features mask (0xffffffff):
 features (0x10008000):
   VIRTIO_NET_F_MRG_RXBUF (15)
   VIRTIO_F_INDIRECT_DESC (28)
  protocol features (0x0)
 socket filename /tmp/sock1.sock type server errno "Success"

 rx placement:
 tx placement: spin-lock
   thread 0 on vring 0

 Memory regions (total 0)

DPDK container configuration
----------------------------
I compiled the DPDK app and launch the container with the command below:

sudo docker run -it -v /tmp/sock1.sock:/var/run/usvhost1 \
  -v /tmp/sock2.sock:/var/run/usvhost2 \
  -v /dev/hugepages/:/dev/hugepages dpdk-app-l2fwd

Subsequently, I launch the DPDK app:
./bin/testpmd -l 16-17 -n 4 --log-level=8 --socket-mem=1024,1024 --no-pci \
  --vdev=virtio_user0,path=/var/run/usvhost1,mac=00:00:00:01:01:01 \
  --vdev=virtio_user1,path=/var/run/usvhost2,mac=00:00:00:01:01:02 \
  -- -i --txqflags=0xf00 --disable-hw-vlan

EAL: Master lcore 16 is ready (tid=4a1488c0;cpuset=[16])
EAL: lcore 17 is ready (tid=48824700;cpuset=[17])
EAL: Search driver virtio_user0 to probe device virtio_user0
EAL: Search driver virtio_user1 to probe device virtio_user1
Interactive-mode selected
Warning: NUMA should be configured manually by using --port-numa-config and
  --ring-numa-config parameters along with --numa.
USER1: create a new mbuf pool : n=155456, size=2176, socket=0
USER1: create a new mbuf pool : n=155456, size=2176, socket=1
Configuring Port 0 (socket 0)
Port 0: 00:00:00:01:01:01
Configuring Port 1 (socket 0)
Port 1: 00:00:00:01:01:02
Checking link statuses...
Done
testpmd>
testpmd> show port info all

********************* Infos for port 0 *********************
MAC address: 00:00:00:01:01:01
Driver name: net_virtio_user
Connect to socket: 0
memory allocation on the socket: 0
Link status: up
Link speed: 10000 Mbps
Link duplex: full-duplex
MTU: 1500
Promiscuous mode: enabled
Allmulticast mode: disabled
Maximum number of MAC addresses: 64
Maximum number of MAC addresses of hash filtering: 0
VLAN offload:
  strip off
  filter off
  qinq(extend) off
No flow type is supported.
Max possible RX queues: 1
Max possible number of RXDs per queue: 65535
Min possible number of RXDs per queue: 0
RXDs number alignment: 1
Max possible TX queues: 1
Max possible number of TXDs per queue: 65535
Min possible number of TXDs per queue: 0
TXDs number alignment: 1

********************* Infos for port 1 *********************
MAC address: 00:00:00:01:01:02
Driver name: net_virtio_user
Connect to socket: 0
memory allocation on the socket: 0
Link status: up
Link speed: 10000 Mbps
Link duplex: full-duplex
MTU: 1500
Promiscuous mode: enabled
Allmulticast mode: disabled
Maximum number of MAC addresses: 64
Maximum number of MAC addresses of hash filtering: 0
VLAN offload:
  strip off
  filter off
  qinq(extend) off
No flow type is supported.
Max possible RX queues: 1
Max possible number of RXDs per queue: 65535
Min possible number of RXDs per queue: 0
RXDs number alignment: 1
Max possible TX queues: 1
Max possible number of TXDs per queue: 65535
Min possible number of TXDs per queue: 0
TXDs number alignment: 1
testpmd>
----
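One way to answer the handshake question from the DPDK end is to capture the
traffic on the vhost-user unix socket (for example with
`strace -f -e trace=sendmsg,recvmsg` on the testpmd process) and decode the
12-byte vhost-user message headers the frontend sends. A minimal decoder
sketch, assuming the request codes from the vhost-user protocol specification;
if no SET_MEM_TABLE (request 5) ever appears on the socket, the memory table
was never sent by the virtio-user frontend:

```python
import struct

# Master-to-slave request codes from the vhost-user protocol spec
# (subset relevant to the initial handshake).
VHOST_USER_REQUESTS = {
    1: "GET_FEATURES",
    2: "SET_FEATURES",
    3: "SET_OWNER",
    5: "SET_MEM_TABLE",
    8: "SET_VRING_NUM",
    9: "SET_VRING_ADDR",
    10: "SET_VRING_BASE",
    12: "SET_VRING_KICK",
    13: "SET_VRING_CALL",
    15: "GET_PROTOCOL_FEATURES",
    16: "SET_PROTOCOL_FEATURES",
    18: "SET_VRING_ENABLE",
}

def decode_header(data):
    """Decode a vhost-user message header: request (u32), flags (u32)
    and payload size (u32), all little-endian, 12 bytes total."""
    request, flags, size = struct.unpack_from("<III", data)
    name = VHOST_USER_REQUESTS.get(request, "UNKNOWN(%d)" % request)
    return name, flags, size

# Example: a SET_MEM_TABLE header announcing a 40-byte payload.
hdr = struct.pack("<III", 5, 0x1, 40)
print(decode_header(hdr))  # ('SET_MEM_TABLE', 1, 40)
```

Feeding the raw bytes captured from strace through `decode_header` shows how
far the handshake got: seeing only GET_FEATURES/SET_OWNER but no SET_MEM_TABLE
would match the "Memory regions (total 0)" state VPP reports above.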
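A second thing worth ruling out: the `docker run` above bind-mounts the socket
*files* rather than their directory. A bind mount pins the inode, so if VPP
restarts and re-creates /tmp/sock1.sock, the path inside the container still
points at the old, dead socket and connects are refused. A small probe sketch
(the function name and paths are illustrative, not from the original post) to
run inside the container:

```python
import socket

def probe_vhost_socket(path):
    """Return True if a server is accepting connections on the given
    unix socket path, False otherwise. Connection refused on a
    bind-mounted socket file often means the server re-created the
    socket after the container started, leaving the mount stale."""
    s = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
    try:
        s.connect(path)
        return True
    except OSError:
        return False
    finally:
        s.close()

# Inside the container this would be the mounted path, e.g.:
# probe_vhost_socket("/var/run/usvhost1")
```

If the probe fails, re-launching the container (or mounting the directory
containing the sockets instead of the individual files) is a quick way to
re-establish the connection before digging further into the handshake.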