DPDK usage discussions
From: "Loftus, Ciara" <ciara.loftus@intel.com>
To: "gmzhang76@gmail.com" <gmzhang76@gmail.com>,
	"ovs-discuss@openvswitch.org" <ovs-discuss@openvswitch.org>
Cc: "users@dpdk.org" <users@dpdk.org>
Subject: Re: [dpdk-users] [ovs-discuss] ovs-dpdk crash when use vhost-user in docker
Date: Tue, 21 Aug 2018 08:05:44 +0000	[thread overview]
Message-ID: <74F120C019F4A64C9B78E802F6AD4CC278FF8DFF@IRSMSX106.ger.corp.intel.com> (raw)
In-Reply-To: <CAFvVKm6V9sN8PVw9+WF4h63ctmVpXTh4EHCm6uy9doibkkyKuQ@mail.gmail.com>

Hi,

I am cc-ing the DPDK users' list, as the SEGV originates in the DPDK vHost code and somebody there might be able to help too.
Could you provide more information about your environment, please? E.g. OVS & DPDK versions, hugepage configuration, etc.
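For reference, a hedged sketch of how that information might be collected on the host. The command names assume a standard OVS/DPDK install; the `dpdk_version` column exists in OVS 2.8 and later.

```shell
# Gather the environment details requested above (paths/names may differ locally).
ovs-vswitchd --version                      # OVS version
ovs-vsctl get Open_vSwitch . dpdk_version   # DPDK version linked into OVS (OVS 2.8+)
grep -i huge /proc/meminfo                  # hugepage totals, free pages, page size
```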

Thanks,
Ciara

From: ovs-discuss-bounces@openvswitch.org [mailto:ovs-discuss-bounces@openvswitch.org] On Behalf Of 张广明
Sent: Monday, August 20, 2018 12:06 PM
To: ovs-discuss@openvswitch.org
Subject: [ovs-discuss] ovs-dpdk crash when use vhost-user in docker

Hi,

   I used ovs-dpdk as the bridge and ran l2fwd in a container. When l2fwd was started, ovs-dpdk crashed.

My commands are:

    docker run -it --privileged --name=dpdk-docker  -v /dev/hugepages:/mnt/huge -v /usr/local/var/run/openvswitch:/var/run/openvswitch dpdk-docker

./l2fwd -c 0x06 -n 4  --socket-mem=1024  --no-pci --vdev=net_virtio_user0,mac=00:00:00:00:00:05,path=/var/run/openvswitch/vhost-user0  --vdev=net_virtio_user1,mac=00:00:00:00:00:01,path=/var/run/openvswitch/vhost-user1 -- -p 0x3
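For reference, the socket paths in those --vdev arguments correspond to vhost-user ports that would typically be created on the OVS side with something like the following sketch (the bridge name br0 is an assumption):

```shell
# Hedged sketch of the matching OVS-side setup; bridge name "br0" is assumed.
# With type=dpdkvhostuser, OVS creates the sockets in its run directory
# (/usr/local/var/run/openvswitch here), which is what the docker -v mount exposes.
ovs-vsctl add-br br0 -- set bridge br0 datapath_type=netdev
ovs-vsctl add-port br0 vhost-user0 -- set Interface vhost-user0 type=dpdkvhostuser
ovs-vsctl add-port br0 vhost-user1 -- set Interface vhost-user1 type=dpdkvhostuser
```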



The crash log:



Program terminated with signal 11, Segmentation fault.

#0  0x0000000000445828 in malloc_elem_alloc ()

Missing separate debuginfos, use: debuginfo-install glibc-2.17-196.el7_4.2.x86_64 keyutils-libs-1.5.8-3.el7.x86_64 krb5-libs-1.15.1-8.el7.x86_64 libcap-ng-0.7.5-4.el7.x86_64 libcom_err-1.42.9-10.el7.x86_64 libgcc-4.8.5-16.el7_4.1.x86_64 libpcap-1.5.3-9.el7.x86_64 libselinux-2.5-12.el7.x86_64 numactl-libs-2.0.9-6.el7_2.x86_64 openssl-libs-1.0.2k-8.el7.x86_64 pcre-8.32-17.el7.x86_64 zlib-1.2.7-17.el7.x86_64

(gdb) bt

#0  0x0000000000445828 in malloc_elem_alloc ()

#1  0x0000000000445e5d in malloc_heap_alloc ()

#2  0x0000000000444c74 in rte_zmalloc ()

#3  0x00000000006c16bf in vhost_new_device ()

#4  0x00000000006bfaf4 in vhost_user_add_connection ()

#5  0x00000000006beb88 in fdset_event_dispatch ()

#6  0x00007f613b288e25 in start_thread () from /usr/lib64/libpthread.so.0

#7  0x00007f613a86b34d in clone () from /usr/lib64/libc.so.6



My OVS version is 2.9.1 and my DPDK version is 17.11.3.





Thanks







Thread overview: 4+ messages
     [not found] <CAFvVKm6V9sN8PVw9+WF4h63ctmVpXTh4EHCm6uy9doibkkyKuQ@mail.gmail.com>
2018-08-21  8:05 ` Loftus, Ciara [this message]
2018-08-21  8:17   ` O Mahony, Billy
2018-08-21  8:59     ` 张广明
2018-08-22  4:16       ` 张广明
