From mboxrd@z Thu Jan 1 00:00:00 1970
Return-Path:
Received: by dpdk.org (Postfix, from userid 33)
	id 4B7E11B1E6; Wed, 5 Dec 2018 04:01:37 +0100 (CET)
From: bugzilla@dpdk.org
To: dev@dpdk.org
Date: Wed, 05 Dec 2018 03:01:37 +0000
X-Bugzilla-Reason: AssignedTo
X-Bugzilla-Type: new
X-Bugzilla-Watch-Reason: None
X-Bugzilla-Product: DPDK
X-Bugzilla-Component: ethdev
X-Bugzilla-Version: unspecified
X-Bugzilla-Keywords:
X-Bugzilla-Severity: critical
X-Bugzilla-Who: he.qiao17@zte.com.cn
X-Bugzilla-Status: CONFIRMED
X-Bugzilla-Resolution:
X-Bugzilla-Priority: Normal
X-Bugzilla-Assigned-To: dev@dpdk.org
X-Bugzilla-Target-Milestone: ---
X-Bugzilla-Flags:
X-Bugzilla-Changed-Fields: bug_id short_desc product version rep_platform
	op_sys bug_status bug_severity priority component assigned_to reporter
	target_milestone
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: quoted-printable
X-Bugzilla-URL: http://bugs.dpdk.org/
Auto-Submitted: auto-generated
X-Auto-Response-Suppress: All
MIME-Version: 1.0
Subject: [dpdk-dev] [Bug 116] Single-port, multi-core and multi-queue mode
	(open RSS), when configuring IP, may cause dpdk coredump
X-BeenThere: dev@dpdk.org
X-Mailman-Version: 2.1.15
Precedence: list
List-Id: DPDK patches and discussions
X-List-Received-Date: Wed, 05 Dec 2018 03:01:37 -0000

https://bugs.dpdk.org/show_bug.cgi?id=116

            Bug ID: 116
           Summary: Single-port, multi-core and multi-queue mode (open
                    RSS), when configuring IP, may cause dpdk coredump
           Product: DPDK
           Version: unspecified
          Hardware: x86
                OS: Linux
            Status: CONFIRMED
          Severity: critical
          Priority: Normal
         Component: ethdev
          Assignee: dev@dpdk.org
          Reporter: he.qiao17@zte.com.cn
  Target Milestone: ---

Coredump stack information:

Thread 153 "lcore-slave-1" received signal SIGSEGV, Segmentation fault.
#0  0x00007ffff37b570d in ixgbe_rxq_rearm (rxq=0x7ffd4e5ed680)
    at ixgbe_rxtx_vec_sse.c:98
#1  0x00007ffff37b6740 in _recv_raw_pkts_vec (rxq=0x7ffd4e5ed680,
    rx_pkts=0x926db0, nb_pkts=32, split_packet=0x7fff227f5820 "")
    at ixgbe_rxtx_vec_sse.c:290
#2  0x00007ffff37b743b in ixgbe_recv_scattered_pkts_vec
    (rx_queue=0x7ffd4e5ed680, rx_pkts=0x926db0, nb_pkts=144)
    at ixgbe_rxtx_vec_sse.c:502
#3  0x0000000000515bd1 in rte_eth_rx_burst (port_id=0 '\000', queue_id=0,
    rx_pkts=0x926db0, nb_pkts=144) at rte_ethdev.h:2659
#4  0x000000000051c1b4 in app_lcore_io_rx

My DPDK IO rx/tx queue configuration is as follows:

    --rx (0,0,1),(0,1,9),(0,2,17) --tx (0,1)

Each core is bound to exactly one queue, so multiple queues are polled in
parallel.

When an IP address is configured on the interface, rte_kni_handle_request()
receives the RTE_KNI_REQ_CFG_NETWORK_IF message from the kernel and calls
rte_eth_dev_stop(). That call clears all queue state on the current port,
but some of those queues are still being polled by other cores at that
moment:

rte_eth_dev_stop()
  -> ixgbe_dev_stop()
    -> ixgbe_dev_clear_queues()
      -> ixgbe_rx_queue_release_mbufs()

void __attribute__((cold))
ixgbe_dev_clear_queues(struct rte_eth_dev *dev)
{
        ...
        for (i = 0; i < dev->data->nb_rx_queues; i++) {
                struct ixgbe_rx_queue *rxq = dev->data->rx_queues[i];
                ...
        }
}

static inline void
_ixgbe_rx_queue_release_mbufs_vec(struct ixgbe_rx_queue *rxq)
{
        ...
        /* set all entries to NULL */
        memset(rxq->sw_ring, 0, sizeof(rxq->sw_ring[0]) * rxq->nb_rx_desc);
}

The sw_ring entries are thus zeroed while ixgbe_rxq_rearm() on another core
is still dereferencing them, which is what triggers the segfault. The cause
of the coredump is understood, but I do not know how to solve it gracefully.

-- 
You are receiving this mail because:
You are the assignee for the bug.