From: Weifeng LI
To: dev@dpdk.org, shahafs@mellanox.com, matan@mellanox.com, viacheslavo@mellanox.com, chas3@att.com
Cc: liweifeng2@huawei.com, 863348577@qq.com, Weifeng Li, zhaohui8@huawei.com
Message-ID: <465ecde5-a65a-782e-4f46-3e084e54c609@126.com>
Date: Fri, 30 Oct 2020 00:42:10 +0800
Subject: [dpdk-dev] Segfault when eal thread executing mlx5 nic's lsc event

Hi,

I am using the DPDK bonding PMD over mlx5. There is a segmentation fault while starting the bond port. It happens because the EAL interrupt thread is processing an LSC interrupt at the same time that slave_configure() is executing rte_eth_dev_rss_reta_update(), and rte_eth_dev_rss_reta_update() also touches the mlx5 flow list.

I have also found an earlier discussion of this issue: https://mails.dpdk.org/archives/dev/2019-March/125929.html

Does the mlx5 flow list need a lock to protect it?

int
slave_configure(struct rte_eth_dev *bonded_eth_dev,
		struct rte_eth_dev *slave_eth_dev)
{
	...
	/* Start device */
	errval = rte_eth_dev_start(slave_eth_dev->data->port_id);
	if (errval != 0) {
		RTE_BOND_LOG(ERR, "rte_eth_dev_start: port=%u, err (%d)",
				slave_eth_dev->data->port_id, errval);
		return -1;
	}

	/* If RSS is enabled for bonding, synchronize RETA */
	if (bonded_eth_dev->data->dev_conf.rxmode.mq_mode & ETH_MQ_RX_RSS) {
		int i;
		struct bond_dev_private *internals;

		internals = bonded_eth_dev->data->dev_private;

		for (i = 0; i < internals->slave_count; i++) {
			if (internals->slaves[i].port_id ==
					slave_eth_dev->data->port_id) {
				errval = rte_eth_dev_rss_reta_update(
						slave_eth_dev->data->port_id,
						&internals->reta_conf[0],
						internals->slaves[i].reta_size);
				if (errval != 0) {
					RTE_BOND_LOG(WARNING,
						"rte_eth_dev_rss_reta_update on slave port %d fails (err %d)."
						" RSS Configuration for bonding may be inconsistent.",
						slave_eth_dev->data->port_id, errval);
				}
				break;
			}
		}
	}