From: 张广明
Date: Wed, 22 Aug 2018 12:16:56 +0800
To: billy.o.mahony@intel.com
Cc: ciara.loftus@intel.com, ovs-discuss@openvswitch.org, users@dpdk.org
Subject: Re: [dpdk-users] [ovs-discuss] ovs-dpdk crash when use vhost-user in docker

Hi,

This issue is resolved. The cause was that I missed the --file-prefix EAL parameter when running l2fwd inside the container.

Thanks Billy and Ciara
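For the archive, the working invocation now looks roughly like this; the only change is the added --file-prefix EAL option, and the prefix value "container" is just an example name (anything different from the prefix used by ovs-vswitchd will do):

./l2fwd -c 0x06 -n 4 --socket-mem=1024 --no-pci --file-prefix=container \
    --vdev=net_virtio_user0,mac=00:00:00:00:00:05,path=/var/run/openvswitch/vhost-user0 \
    --vdev=net_virtio_user1,mac=00:00:00:00:00:01,path=/var/run/openvswitch/vhost-user1 \
    -- -p 0x3

With that in place the vswitch no longer crashes when l2fwd connects to the vhost-user ports.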
On Tue, 21 Aug 2018 at 16:59, 张广明 wrote:

> Hi, Ciara and Billy
>
> Thanks for your reply.
>
> The default huge page size that I use is 1 GB:
>
> [root@localhost openvswitch]# cat /proc/cmdline
> BOOT_IMAGE=/vmlinuz-3.10.0-514.el7.x86_64 root=/dev/mapper/centos-root ro
> crashkernel=auto iommu=pt intel_iommu=on default_hugepagesz=1G
> hugepagesz=1G hugepages=2 rd.lvm.lv=centos/root rd.lvm.lv=centos/swap
> rd.lvm.lv=centos/usr rhgb
>
> The huge page count is 4:
>
> [root@localhost openvswitch]# cat /proc/meminfo | grep Huge
> AnonHugePages:     14336 kB
> HugePages_Total:       4
> HugePages_Free:        2
> HugePages_Rsvd:        0
> HugePages_Surp:        0
> Hugepagesize:    1048576 kB
>
> My OVS-DPDK configuration is:
>
> [root@localhost openvswitch]# ovs-vsctl --no-wait get Open_vSwitch . other_config
> {dpdk-init="true", dpdk-socket-mem="2048,0", pmd-cpu-mask="0x01"}
>
> My OVS bridge configuration is:
>
> [root@localhost openvswitch]# ovs-vsctl show
> d2b6062a-4d6f-46f6-8fa4-66dca6b06c96
>     Manager "tcp:192.168.15.18:6640"
>         is_connected: true
>     Bridge br-router
>         Port "p2p1"
>             Interface "p2p1"
>                 type: dpdk
>                 options: {dpdk-devargs="0000:01:00.0"}
>         Port patch-gtp
>             Interface patch-gtp
>                 type: patch
>                 options: {peer=patch-router}
>         Port br-router
>             Interface br-router
>                 type: internal
>     Bridge "br0"
>         Controller "tcp:192.168.15.18:6633"
>             is_connected: true
>         fail_mode: secure
>         Port "p1p1"
>             Interface "p1p1"
>                 type: dpdk
>                 options: {dpdk-devargs="0000:03:00.0"}
>         Port patch-router
>             Interface patch-router
>                 type: patch
>                 options: {peer=patch-gtp}
>         Port "br0"
>             Interface "br0"
>                 type: internal
>         Port "vhost-user1"
>             Interface "vhost-user1"
>                 type: dpdkvhostuser
>         Port "vhost-user0"
>             Interface "vhost-user0"
>                 type: dpdkvhostuser
>     Bridge br-vxlan
>         Port br-vxlan
>             Interface br-vxlan
>                 type: internal
>
> The Docker run command is:
>
> docker run -it --privileged --name=dpdk-docker -v /dev/hugepages:/mnt/huge \
>     -v /usr/local/var/run/openvswitch:/var/run/openvswitch dpdk-docker
>
> ./l2fwd -c 0x06 -n 4 --socket-mem=1024 --no-pci \
>     --vdev=net_virtio_user0,mac=00:00:00:00:00:05,path=/var/run/openvswitch/vhost-user0 \
>     --vdev=net_virtio_user1,mac=00:00:00:00:00:01,path=/var/run/openvswitch/vhost-user1 \
>     -- -p 0x3
>
> A more detailed core dump message:
>
> Program terminated with signal 11, Segmentation fault.
> #0  0x0000000000443c9c in find_suitable_element (bound=0, align=64, flags=0, size=6272, heap=0x7fbc461f2a1c)
>     at /home/gmzhang/work/dpdk-stable-17.11.3/lib/librte_eal/common/malloc_heap.c:134
> 134         if (check_hugepage_sz(flags, elem->ms->hugepage_sz))
> Missing separate debuginfos, use: debuginfo-install
> glibc-2.17-196.el7_4.2.x86_64 keyutils-libs-1.5.8-3.el7.x86_64
> krb5-libs-1.15.1-8.el7.x86_64 libcap-ng-0.7.5-4.el7.x86_64
> libcom_err-1.42.9-10.el7.x86_64 libgcc-4.8.5-16.el7_4.1.x86_64
> libpcap-1.5.3-9.el7.x86_64 libselinux-2.5-12.el7.x86_64
> numactl-libs-2.0.9-6.el7_2.x86_64 openssl-libs-1.0.2k-8.el7.x86_64
> pcre-8.32-17.el7.x86_64 zlib-1.2.7-17.el7.x86_64
> (gdb) bt
> #0  0x0000000000443c9c in find_suitable_element (bound=0, align=64, flags=0, size=6272, heap=0x7fbc461f2a1c)
>     at /home/gmzhang/work/dpdk-stable-17.11.3/lib/librte_eal/common/malloc_heap.c:134
> #1  malloc_heap_alloc (heap=heap@entry=0x7fbc461f2a1c, type=type@entry=0x0, size=size@entry=6272,
>     flags=flags@entry=0, align=64, align@entry=1, bound=bound@entry=0)
>     at /home/gmzhang/work/dpdk-stable-17.11.3/lib/librte_eal/common/malloc_heap.c:166
> #2  0x000000000044312a in rte_malloc_socket (type=type@entry=0x0, size=size@entry=6272, align=align@entry=0,
>     socket_arg=<optimized out>, socket_arg@entry=-1)
>     at /home/gmzhang/work/dpdk-stable-17.11.3/lib/librte_eal/common/rte_malloc.c:91
> #3  0x00000000004431d1 in rte_zmalloc_socket (socket=-1, align=0, size=6272, type=0x0)
>     at /home/gmzhang/work/dpdk-stable-17.11.3/lib/librte_eal/common/rte_malloc.c:126
> #4  rte_zmalloc (type=type@entry=0x0, size=size@entry=6272, align=align@entry=0)
>     at /home/gmzhang/work/dpdk-stable-17.11.3/lib/librte_eal/common/rte_malloc.c:135
> #5  0x00000000006bec48 in vhost_new_device ()
>     at /home/gmzhang/work/dpdk-stable-17.11.3/lib/librte_vhost/vhost.c:311
> #6  0x00000000006bd685 in vhost_user_add_connection (fd=fd@entry=66, vsocket=vsocket@entry=0x1197560)
>     at /home/gmzhang/work/dpdk-stable-17.11.3/lib/librte_vhost/socket.c:224
> #7  0x00000000006bdbf6 in vhost_user_server_new_connection (fd=66, fd@entry=54, dat=dat@entry=0x1197560,
>     remove=remove@entry=0x7fbbafffe9dc)
>     at /home/gmzhang/work/dpdk-stable-17.11.3/lib/librte_vhost/socket.c:284
> #8  0x00000000006bc48c in fdset_event_dispatch (arg=0xc1ace0 <...>)
>     at /home/gmzhang/work/dpdk-stable-17.11.3/lib/librte_vhost/fd_man.c:308
> #9  0x00007fbc450fee25 in start_thread () from /usr/lib64/libpthread.so.0
> #10 0x00007fbc446e134d in clone () from /usr/lib64/libc.so.6
> (gdb) fr 0
> #0  0x0000000000443c9c in find_suitable_element (bound=0, align=64, flags=0, size=6272, heap=0x7fbc461f2a1c)
>     at /home/gmzhang/work/dpdk-stable-17.11.3/lib/librte_eal/common/malloc_heap.c:134
> 134         if (check_hugepage_sz(flags, elem->ms->hugepage_sz))
> (gdb) p elem->ms
> $1 = (const struct rte_memseg *) 0x7fa4f3ebb01c
> (gdb) p *elem->ms
> Cannot access memory at address 0x7fa4f3ebb01c
> (gdb) p *elem
> $2 = {heap = 0x7fa4f3eeda1c, prev = 0x0, free_list = {le_next = 0x0, le_prev = 0x7fa4f3eeda7c},
>      ms = 0x7fa4f3ebb01c, state = ELEM_FREE, pad = 0, size = 1073439232}
> (gdb) disassemble 0x0000000000443c9c
> Dump of assembler code for function malloc_heap_alloc:
> => 0x0000000000443c9c <+156>:   mov    0x18(%rax),%rax
>    0x0000000000443ca0 <+160>:   test   %r15d,%r15d
>    0x0000000000443ca3 <+163>:   je     0x443d7c
>    0x0000000000443ca9 <+169>:   cmp    $0x10000000,%rax
>    0x0000000000443caf <+175>:   je     0x443d25
> ---Type <return> to continue, or q <return> to quit---q
> Quit
> (gdb) info reg rax
> rax            0x7fa4f3ebb01c      140346443673628
>
> Is the dpdk-socket-mem too small?
>
> Thanks
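Following up on my own question above: as far as I can tell, dpdk-socket-mem was not too small. Without --file-prefix, l2fwd in the container and ovs-vswitchd on the host run as two independent DPDK primary processes using the same default "rte" file prefix, so they create the same hugepage map files on the shared hugetlbfs mount and presumably remap each other's memory, which would explain the inaccessible elem->ms above. The two processes can be told apart by the map files they create, e.g.:

ls /dev/hugepages/
# the default prefix creates rtemap_<n>; l2fwd started with --file-prefix=container creates containermap_<n> instead

(The "container" prefix is only the example name used in the corrected command near the top of this mail.)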
> On Tue, 21 Aug 2018 at 16:17, O Mahony, Billy wrote:
>
>> Hi,
>>
>> One thing to look out for with DPDK < 18.05 is that you need to use 1 GB
>> huge pages (and no more than eight of them) to use virtio. I'm not sure
>> whether that is the issue you are hitting, as I don't remember it causing
>> a seg fault, but it is certainly worth checking.
>>
>> If that does not work, please send the info Ciara refers to as well as
>> the ovs-vsctl interface config for the OVS vhost backend.
>>
>> Thanks,
>> Billy
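For anyone reading this in the archive: the vhost backend config Billy asks about here can be dumped with a plain ovs-vsctl query, e.g.:

ovs-vsctl list Interface vhost-user0

In my case the --file-prefix fix was enough, so I did not need to dig further down this path.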
>> From: ovs-discuss-bounces@openvswitch.org [mailto:ovs-discuss-bounces@openvswitch.org] On Behalf Of Loftus, Ciara
>> Sent: Tuesday, August 21, 2018 9:06 AM
>> To: gmzhang76@gmail.com; ovs-discuss@openvswitch.org
>> Cc: users@dpdk.org
>> Subject: Re: [ovs-discuss] ovs-dpdk crash when use vhost-user in docker
>>
>> Hi,
>>
>> I am cc-ing the DPDK users' list as the SEGV originates in the DPDK vHost
>> code and somebody there might be able to help too.
>> Could you provide more information about your environment please? e.g.
>> OVS & DPDK versions, hugepage configuration, etc.
>>
>> Thanks,
>> Ciara
>>
>> From: ovs-discuss-bounces@openvswitch.org [mailto:ovs-discuss-bounces@openvswitch.org] On Behalf Of 张广明
>> Sent: Monday, August 20, 2018 12:06 PM
>> To: ovs-discuss@openvswitch.org
>> Subject: [ovs-discuss] ovs-dpdk crash when use vhost-user in docker
>>
>> Hi,
>>
>> I used OVS-DPDK as the bridge and ran l2fwd in a container. When l2fwd
>> was started, ovs-dpdk crashed.
>>
>> My commands are:
>>
>> docker run -it --privileged --name=dpdk-docker -v /dev/hugepages:/mnt/huge \
>>     -v /usr/local/var/run/openvswitch:/var/run/openvswitch dpdk-docker
>>
>> ./l2fwd -c 0x06 -n 4 --socket-mem=1024 --no-pci \
>>     --vdev=net_virtio_user0,mac=00:00:00:00:00:05,path=/var/run/openvswitch/vhost-user0 \
>>     --vdev=net_virtio_user1,mac=00:00:00:00:00:01,path=/var/run/openvswitch/vhost-user1 \
>>     -- -p 0x3
>>
>> The crash log:
>>
>> Program terminated with signal 11, Segmentation fault.
>> #0  0x0000000000445828 in malloc_elem_alloc ()
>> Missing separate debuginfos, use: debuginfo-install
>> glibc-2.17-196.el7_4.2.x86_64 keyutils-libs-1.5.8-3.el7.x86_64
>> krb5-libs-1.15.1-8.el7.x86_64 libcap-ng-0.7.5-4.el7.x86_64
>> libcom_err-1.42.9-10.el7.x86_64 libgcc-4.8.5-16.el7_4.1.x86_64
>> libpcap-1.5.3-9.el7.x86_64 libselinux-2.5-12.el7.x86_64
>> numactl-libs-2.0.9-6.el7_2.x86_64 openssl-libs-1.0.2k-8.el7.x86_64
>> pcre-8.32-17.el7.x86_64 zlib-1.2.7-17.el7.x86_64
>> (gdb) bt
>> #0  0x0000000000445828 in malloc_elem_alloc ()
>> #1  0x0000000000445e5d in malloc_heap_alloc ()
>> #2  0x0000000000444c74 in rte_zmalloc ()
>> #3  0x00000000006c16bf in vhost_new_device ()
>> #4  0x00000000006bfaf4 in vhost_user_add_connection ()
>> #5  0x00000000006beb88 in fdset_event_dispatch ()
>> #6  0x00007f613b288e25 in start_thread () from /usr/lib64/libpthread.so.0
>> #7  0x00007f613a86b34d in clone () from /usr/lib64/libc.so.6
>>
>> My OVS version is 2.9.1 and my DPDK version is 17.11.3.
>>
>> Thanks
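P.S. If anyone who hits the same backtrace really does need a larger vswitch memory pool, the other_config key dumped above can also be written, e.g. (illustrative values; I believe ovs-vswitchd has to be restarted for the change to take effect):

ovs-vsctl --no-wait set Open_vSwitch . other_config:dpdk-socket-mem="4096,0"

For me the pool size was fine and the missing --file-prefix was the whole story.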