DPDK usage discussions
From: 张广明 <gmzhang76@gmail.com>
To: billy.o.mahony@intel.com
Cc: ciara.loftus@intel.com, ovs-discuss@openvswitch.org, users@dpdk.org
Subject: Re: [dpdk-users] [ovs-discuss] ovs-dpdk crash when use vhost-user in docker
Date: Wed, 22 Aug 2018 12:16:56 +0800
Message-ID: <CAFvVKm4eX1ykbK6y+XO1q_pEXkg578ZoMbSQbAVnzkm_DYJ--Q@mail.gmail.com>
In-Reply-To: <CAFvVKm5KZKNsZp8ftFbw32B=Hp-iFyHBHd_Vsx9TPbVDzWygLQ@mail.gmail.com>

Hi,

     This issue is resolved. The cause was a missing --file-prefix parameter
when running l2fwd.
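
For reference, a sketch of the corrected invocation: it is the same l2fwd
command as before, with a distinct EAL --file-prefix added so that the
hugepage files the container creates do not collide with the ones the host
ovs-vswitchd already owns (the prefix value "l2fwd" below is just an
example; any name that differs from the OVS one should do):

    ./l2fwd -c 0x06 -n 4 --socket-mem=1024 --no-pci --file-prefix=l2fwd \
        --vdev=net_virtio_user0,mac=00:00:00:00:00:05,path=/var/run/openvswitch/vhost-user0 \
        --vdev=net_virtio_user1,mac=00:00:00:00:00:01,path=/var/run/openvswitch/vhost-user1 \
        -- -p 0x3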

Thanks, Billy and Ciara.

张广明 <gmzhang76@gmail.com> wrote on Tuesday, August 21, 2018 at 4:59 PM:

> Hi Ciara and Billy,
>
> Thanks for your reply
>
> The default huge page size that I used is 1 GB:
> [root@localhost openvswitch]# cat /proc/cmdline
> BOOT_IMAGE=/vmlinuz-3.10.0-514.el7.x86_64 root=/dev/mapper/centos-root ro
> crashkernel=auto iommu=pt intel_iommu=on default_hugepagesz=1G
> hugepagesz=1G hugepages=2 rd.lvm.lv=centos/root rd.lvm.lv=centos/swap
> rd.lvm.lv=centos/usr rhgb
>
> The number of huge pages is 4:
> [root@localhost openvswitch]# cat /proc/meminfo | grep Huge
> AnonHugePages:     14336 kB
> HugePages_Total:       4
> HugePages_Free:        2
> HugePages_Rsvd:        0
> HugePages_Surp:        0
> Hugepagesize:    1048576 kB
>
>
> My OVS DPDK configuration is:
> [root@localhost openvswitch]# ovs-vsctl --no-wait get Open_vSwitch .
> other_config
> {dpdk-init="true", dpdk-socket-mem="2048,0", pmd-cpu-mask="0x01"}
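>
> (For reference, a sketch of how these keys are set with ovs-vsctl; the
> values are exactly those shown above, and dpdk-socket-mem is in MB per
> NUMA node:)
>
>     ovs-vsctl --no-wait set Open_vSwitch . other_config:dpdk-init=true
>     ovs-vsctl --no-wait set Open_vSwitch . other_config:dpdk-socket-mem="2048,0"
>     ovs-vsctl --no-wait set Open_vSwitch . other_config:pmd-cpu-mask=0x01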
>
> My OVS configuration is:
> [root@localhost openvswitch]# ovs-vsctl show
> d2b6062a-4d6f-46f6-8fa4-66dca6b06c96
>     Manager "tcp:192.168.15.18:6640"
>         is_connected: true
>     Bridge br-router
>         Port "p2p1"
>             Interface "p2p1"
>                 type: dpdk
>                 options: {dpdk-devargs="0000:01:00.0"}
>         Port patch-gtp
>             Interface patch-gtp
>                 type: patch
>                 options: {peer=patch-router}
>         Port br-router
>             Interface br-router
>                 type: internal
>     Bridge "br0"
>         Controller "tcp:192.168.15.18:6633"
>             is_connected: true
>         fail_mode: secure
>         Port "p1p1"
>             Interface "p1p1"
>                 type: dpdk
>                 options: {dpdk-devargs="0000:03:00.0"}
>         Port patch-router
>             Interface patch-router
>                 type: patch
>                 options: {peer=patch-gtp}
>         Port "br0"
>             Interface "br0"
>                 type: internal
>         Port "vhost-user1"
>             Interface "vhost-user1"
>                 type: dpdkvhostuser
>         Port "vhost-user0"
>             Interface "vhost-user0"
>                 type: dpdkvhostuser
>     Bridge br-vxlan
>         Port br-vxlan
>             Interface br-vxlan
>                 type: internal
>
>
> The Docker run command is:
>
>    docker run -it --privileged --name=dpdk-docker  -v
> /dev/hugepages:/mnt/huge -v
> /usr/local/var/run/openvswitch:/var/run/openvswitch dpdk-docker
>
> ./l2fwd -c 0x06 -n 4  --socket-mem=1024  --no-pci
> --vdev=net_virtio_user0,mac=00:00:00:00:00:05,path=/var/run/openvswitch/vhost-user0
>  --vdev=net_virtio_user1,mac=00:00:00:00:00:01,path=/var/run/openvswitch/vhost-user1
> -- -p 0x3
> More detail from the core dump:
>
> Program terminated with signal 11, Segmentation fault.
> #0  0x0000000000443c9c in find_suitable_element (bound=0, align=64,
> flags=0, size=6272, heap=0x7fbc461f2a1c) at
> /home/gmzhang/work/dpdk-stable-17.11.3/lib/librte_eal/common/malloc_heap.c:134
> 134 if (check_hugepage_sz(flags, elem->ms->hugepage_sz))
> Missing separate debuginfos, use: debuginfo-install
> glibc-2.17-196.el7_4.2.x86_64 keyutils-libs-1.5.8-3.el7.x86_64
> krb5-libs-1.15.1-8.el7.x86_64 libcap-ng-0.7.5-4.el7.x86_64
> libcom_err-1.42.9-10.el7.x86_64 libgcc-4.8.5-16.el7_4.1.x86_64
> libpcap-1.5.3-9.el7.x86_64 libselinux-2.5-12.el7.x86_64
> numactl-libs-2.0.9-6.el7_2.x86_64 openssl-libs-1.0.2k-8.el7.x86_64
> pcre-8.32-17.el7.x86_64 zlib-1.2.7-17.el7.x86_64
> (gdb) bt
> #0  0x0000000000443c9c in find_suitable_element (bound=0, align=64,
> flags=0, size=6272, heap=0x7fbc461f2a1c) at
> /home/gmzhang/work/dpdk-stable-17.11.3/lib/librte_eal/common/malloc_heap.c:134
> #1  malloc_heap_alloc (heap=heap@entry=0x7fbc461f2a1c, type=type@entry=0x0,
> size=size@entry=6272, flags=flags@entry=0, align=64, align@entry=1,
> bound=bound@entry=0) at
> /home/gmzhang/work/dpdk-stable-17.11.3/lib/librte_eal/common/malloc_heap.c:166
> #2  0x000000000044312a in rte_malloc_socket (type=type@entry=0x0,
> size=size@entry=6272, align=align@entry=0, socket_arg=<optimized out>,
> socket_arg@entry=-1) at
> /home/gmzhang/work/dpdk-stable-17.11.3/lib/librte_eal/common/rte_malloc.c:91
> #3  0x00000000004431d1 in rte_zmalloc_socket (socket=-1, align=0,
> size=6272, type=0x0) at
> /home/gmzhang/work/dpdk-stable-17.11.3/lib/librte_eal/common/rte_malloc.c:126
> #4  rte_zmalloc (type=type@entry=0x0, size=size@entry=6272,
> align=align@entry=0) at
> /home/gmzhang/work/dpdk-stable-17.11.3/lib/librte_eal/common/rte_malloc.c:135
> #5  0x00000000006bec48 in vhost_new_device () at
> /home/gmzhang/work/dpdk-stable-17.11.3/lib/librte_vhost/vhost.c:311
> #6  0x00000000006bd685 in vhost_user_add_connection (fd=fd@entry=66,
> vsocket=vsocket@entry=0x1197560) at
> /home/gmzhang/work/dpdk-stable-17.11.3/lib/librte_vhost/socket.c:224
> #7  0x00000000006bdbf6 in vhost_user_server_new_connection (fd=66, fd@entry=54,
> dat=dat@entry=0x1197560, remove=remove@entry=0x7fbbafffe9dc) at
> /home/gmzhang/work/dpdk-stable-17.11.3/lib/librte_vhost/socket.c:284
> #8  0x00000000006bc48c in fdset_event_dispatch (arg=0xc1ace0
> <vhost_user+8192>) at
> /home/gmzhang/work/dpdk-stable-17.11.3/lib/librte_vhost/fd_man.c:308
> #9  0x00007fbc450fee25 in start_thread () from /usr/lib64/libpthread.so.0
> #10 0x00007fbc446e134d in clone () from /usr/lib64/libc.so.6
> (gdb) fr 0
> #0  0x0000000000443c9c in find_suitable_element (bound=0, align=64,
> flags=0, size=6272, heap=0x7fbc461f2a1c) at
> /home/gmzhang/work/dpdk-stable-17.11.3/lib/librte_eal/common/malloc_heap.c:134
> 134 if (check_hugepage_sz(flags, elem->ms->hugepage_sz))
> (gdb) p elem->ms
> $1 = (const struct rte_memseg *) 0x7fa4f3ebb01c
> (gdb) p *elem->ms
> Cannot access memory at address 0x7fa4f3ebb01c
> (gdb) p *elem
> $2 = {heap = 0x7fa4f3eeda1c, prev = 0x0, free_list = {le_next = 0x0,
> le_prev = 0x7fa4f3eeda7c}, ms = 0x7fa4f3ebb01c, state = ELEM_FREE, pad = 0,
> size = 1073439232}
> (gdb)  disassemble 0x0000000000443c9c
> Dump of assembler code for function malloc_heap_alloc:
> => 0x0000000000443c9c <+156>: mov    0x18(%rax),%rax
>    0x0000000000443ca0 <+160>: test   %r15d,%r15d
>    0x0000000000443ca3 <+163>: je     0x443d7c <malloc_heap_alloc+380>
>    0x0000000000443ca9 <+169>: cmp    $0x10000000,%rax
>    0x0000000000443caf <+175>: je     0x443d25 <malloc_heap_alloc+293>
> ---Type <return> to continue, or q <return> to quit---q
> Quit
> (gdb) info reg rax
> rax            0x7fa4f3ebb01c 140346443673628
>
> Is the dpdk-socket-mem too small?
>
> Thanks
>
>
>
> O Mahony, Billy <billy.o.mahony@intel.com> wrote on Tuesday, August 21, 2018 at 4:17 PM:
>
>> Hi,
>>
>>
>>
>> One thing to look out for with DPDK < 18.05 is that you need to use 1GB
>> huge pages (and no more than eight of them) to use virtio. I'm not sure
>> that is the issue you have, as I don't remember it causing a seg fault,
>> but it is certainly worth checking.
>>
>>
>>
>> If that does not work, please send the info Ciara refers to, as well as
>> the ovs-vsctl interface config for the OVS vhost backend.
>>
>>
>>
>> Thanks,
>>
>> Billy
>>
>>
>>
>> From: ovs-discuss-bounces@openvswitch.org
>> [mailto:ovs-discuss-bounces@openvswitch.org] On Behalf Of Loftus, Ciara
>> Sent: Tuesday, August 21, 2018 9:06 AM
>> To: gmzhang76@gmail.com; ovs-discuss@openvswitch.org
>> Cc: users@dpdk.org
>> Subject: Re: [ovs-discuss] ovs-dpdk crash when use vhost-user in docker
>>
>>
>>
>> Hi,
>>
>>
>>
>> I am cc-ing the DPDK users’ list as the SEGV originates in the DPDK vHost
>> code and somebody there might be able to help too.
>>
>> Could you provide more information about your environment, please? E.g. OVS
>> & DPDK versions, hugepage configuration, etc.
>>
>>
>>
>> Thanks,
>>
>> Ciara
>>
>>
>>
>> From: ovs-discuss-bounces@openvswitch.org
>> [mailto:ovs-discuss-bounces@openvswitch.org] On Behalf Of 张广明
>> Sent: Monday, August 20, 2018 12:06 PM
>> To: ovs-discuss@openvswitch.org
>> Subject: [ovs-discuss] ovs-dpdk crash when use vhost-user in docker
>>
>>
>>
>> Hi,
>>
>>
>>
>>    I used ovs-dpdk as the bridge and l2fwd in a container. When l2fwd
>> was run, ovs-dpdk crashed.
>>
>>
>>
>> My command is:
>>
>>
>>
>>     docker run -it --privileged --name=dpdk-docker  -v
>> /dev/hugepages:/mnt/huge -v
>> /usr/local/var/run/openvswitch:/var/run/openvswitch dpdk-docker
>>
>> ./l2fwd -c 0x06 -n 4  --socket-mem=1024  --no-pci
>> --vdev=net_virtio_user0,mac=00:00:00:00:00:05,path=/var/run/openvswitch/vhost-user0
>>  --vdev=net_virtio_user1,mac=00:00:00:00:00:01,path=/var/run/openvswitch/vhost-user1
>> -- -p 0x3
>>
>>
>>
>> The crash log:
>>
>>
>>
>> Program terminated with signal 11, Segmentation fault.
>>
>> #0  0x0000000000445828 in malloc_elem_alloc ()
>>
>> Missing separate debuginfos, use: debuginfo-install
>> glibc-2.17-196.el7_4.2.x86_64 keyutils-libs-1.5.8-3.el7.x86_64
>> krb5-libs-1.15.1-8.el7.x86_64 libcap-ng-0.7.5-4.el7.x86_64
>> libcom_err-1.42.9-10.el7.x86_64 libgcc-4.8.5-16.el7_4.1.x86_64
>> libpcap-1.5.3-9.el7.x86_64 libselinux-2.5-12.el7.x86_64
>> numactl-libs-2.0.9-6.el7_2.x86_64 openssl-libs-1.0.2k-8.el7.x86_64
>> pcre-8.32-17.el7.x86_64 zlib-1.2.7-17.el7.x86_64
>>
>> (gdb) bt
>>
>> #0  0x0000000000445828 in malloc_elem_alloc ()
>>
>> #1  0x0000000000445e5d in malloc_heap_alloc ()
>>
>> #2  0x0000000000444c74 in rte_zmalloc ()
>>
>> #3  0x00000000006c16bf in vhost_new_device ()
>>
>> #4  0x00000000006bfaf4 in vhost_user_add_connection ()
>>
>> #5  0x00000000006beb88 in fdset_event_dispatch ()
>>
>> #6  0x00007f613b288e25 in start_thread () from /usr/lib64/libpthread.so.0
>>
>> #7  0x00007f613a86b34d in clone () from /usr/lib64/libc.so.6
>>
>>
>>
>> My OVS version is 2.9.1 and my DPDK version is 17.11.3.
>>
>> Thanks
>


Thread overview: 4+ messages
     [not found] <CAFvVKm6V9sN8PVw9+WF4h63ctmVpXTh4EHCm6uy9doibkkyKuQ@mail.gmail.com>
2018-08-21  8:05 ` Loftus, Ciara
2018-08-21  8:17   ` O Mahony, Billy
2018-08-21  8:59     ` 张广明
2018-08-22  4:16       ` 张广明 [this message]
