From: "Tan, Jianfeng"
To: Kyle Larose, dev@dpdk.org
Cc: huawei.xie@intel.com, yuanhan.liu@linux.intel.com
Date: Fri, 2 Sep 2016 14:55:43 +0800
Subject: Re: [dpdk-dev] virtio kills qemu VM after stopping/starting ports

Hi Kyle,

On 9/2/2016 4:53 AM, Kyle Larose wrote:
> Hello everyone,
>
> In my own testing, I recently stumbled across an issue where I could get
> qemu to exit when sending traffic to my application. To do this, I simply
> needed to do the following:
>
> 1) Start my virtio interfaces
> 2) Send some traffic into/out of the interfaces
> 3) Stop the interfaces
> 4) Start the interfaces
> 5) Send some more traffic
>
> At this point, I would lose connectivity to my VM. Further investigation
> revealed qemu exiting with the following log:
>
> 2016-09-01T15:45:32.119059Z qemu-kvm: Guest moved used index from 5 to 1
>
> I found the following bug report against qemu, reported by a user of
> DPDK: https://bugs.launchpad.net/qemu/+bug/1558175
>
> That thread seems to have stalled out, so I think we probably should
> deal with the problem within DPDK itself. Either way, later in the bug
> report chain, we see a link to this patch to DPDK:
> http://dpdk.org/browse/dpdk/commit/?id=9a0615af774648. The submitter of
> the bug report claims that this patch fixes the problem. Perhaps it does.

I once got a chance to analyze the bug you refer to here. The patch
(http://dpdk.org/browse/dpdk/commit/?id=9a0615af774648) does not fix that
bug. The root cause of that bug is: when the DPDK application gets killed,
nobody tells the vhost backend to stop. So when the DPDK app is restarted,
those hugepages are reused and initialized to all zero, and unavoidably
the "idx" fields in that memory are reset to 0. I have written a patch
that notifies the vhost backend to stop when the DPDK app is killed
suddenly (more accurately, when /dev/uioX gets closed), and that patch
will be sent out soon.

That patch does not fix your problem either, since you did not kill the
app. (I should not mix details of that bug into this thread; my point is
only that they are different problems.)
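To make the failure concrete: the fatal log in both cases comes from a
consistency check in qemu's virtqueue code. Below is a paraphrased sketch,
reconstructed from memory of qemu's hw/virtio/virtio.c, so the exact code
will differ between versions:

    /* Paraphrase of qemu's virtqueue_num_heads(); not verbatim. */
    static int virtqueue_num_heads(VirtQueue *vq, unsigned int idx)
    {
        /* avail->idx lives in guest memory; 'idx' is qemu's private
         * record of the last avail index it consumed.  If the guest's
         * ring memory is zeroed or re-initialized behind qemu's back,
         * avail->idx jumps backwards and this unsigned subtraction
         * wraps around. */
        uint16_t num_heads = vring_avail_idx(vq) - idx;

        if (num_heads > vq->vring.num) {
            error_report("Guest moved used index from %u to %u",
                         idx, vring_avail_idx(vq));
            exit(1); /* the VM dies here */
        }
        return num_heads;
    }

With qemu's last consumed index at 5 and a freshly re-initialized ring
advertising avail->idx = 1, the subtraction 1 - 5 wraps to 65532, which
exceeds the ring size, so qemu exits with exactly the "Guest moved used
index from 5 to 1" message quoted above (despite the wording, the check
is on the avail index, not the used index).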
> However, it introduces a new problem: if I remove the patch, I cannot
> reproduce the problem. So that leads me to believe that it has caused a
> regression.
>
> To summarize the patch's changes, it basically changes the
> virtio_dev_stop function to flag the device as stopped, and stops the
> device when closing/uninitializing it. However, there is a seemingly
> unintended side-effect.
>
> In virtio_dev_start, we have the following block of code:
>
>     /* On restart after stop do not touch queues */
>     if (hw->started)
>         return 0;
>
>     /* Do final configuration before rx/tx engine starts */
>     virtio_dev_rxtx_start(dev);
>
>     ....
>
> Prior to the patch, if an interface were stopped then started, without
> restarting the application, the queues would be left as-is, because
> hw->started would be set to 1.

Yes, my previous patch did break this behavior (stopping and re-starting
the device used to leave the queues as-is), and that leads to the problem
here, which you propose to fix by restoring the old behavior. But is that
the right behavior to follow? On the one hand, when we stop the virtio
device, should we notify the vhost backend to stop too? Currently we just
flag the device as stopped. On the other hand, many PMDs now mix the
device initialization/configuration code into their dev_start functions;
that is to say, we re-configure the device every time we start it (maybe
to make sure that any changed configuration takes effect). Then what
happens if the number of virtio queues is increased between a stop and a
start?

Thanks,
Jianfeng

> Now, calling stop sets hw->started to 0, which means the next call to
> start will "touch the queues". This is the unintended side-effect that
> causes the problem.
>
> I made a change locally to break the state of the device into two:
> started and opened. The device starts out neither started nor opened.
> If the device is accepting packets, it is started. If the device has
> set up its queues, it is opened. Stopping the device does not close the
> device. This allows me to change the check above to:
>
>     if (hw->opened) {
>         hw->started = 1;
>         return 0;
>     }
>
> Then, if I stop and start the device, it does not reinitialize the
> queues. I have no problem. I can restart ports as much as I want, and
> the system keeps running. Traffic flows when they've restarted as well,
> which is always a plus. ☺
>
> Some background:
> - I tested against DPDK 16.04 and DPDK 16.07.
> - I'm using virtio NICs.
> - CPU: Intel(R) Xeon(R) CPU E5-2650 v2 @ 2.60GHz
> - Host OS: CentOS Linux release 7.1.1503 (Core)
> - Guest OS: CentOS Linux release 7.2.1511 (Core)
> - Qemu-kvm version: 1.5.3-86.el7_1.6
>
> I plan on submitting a patch to fix this tomorrow. Let me know if
> anyone has any thoughts about this, or a better way to fix it.
>
> Thanks,
>
> Kyle
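For readers following the thread, here is a minimal sketch of the
started/opened split Kyle describes above. The hw->opened field and the
reduced function bodies are illustrative assumptions, not the code
actually in the tree; virtio_dev_rxtx_start() is the existing queue-setup
call quoted earlier:

    /* Sketch only: track "queues set up" separately from "accepting
     * packets" so that stop/start cycles never re-run queue setup. */
    struct virtio_hw {
        uint8_t started; /* device is currently accepting packets */
        uint8_t opened;  /* queues have been set up at least once */
        /* ... remaining fields elided ... */
    };

    static int
    virtio_dev_start(struct rte_eth_dev *dev)
    {
        struct virtio_hw *hw = dev->data->dev_private;

        /* On restart after stop, do not touch the queues:
         * re-initializing them would reset the ring indices underneath
         * the vhost backend. */
        if (hw->opened) {
            hw->started = 1;
            return 0;
        }

        /* First start: do final configuration before the rx/tx engine
         * starts. */
        virtio_dev_rxtx_start(dev);
        hw->opened = 1;
        hw->started = 1;
        return 0;
    }

    static void
    virtio_dev_stop(struct rte_eth_dev *dev)
    {
        struct virtio_hw *hw = dev->data->dev_private;

        /* Stop accepting packets, but leave hw->opened set so a later
         * start does not reinitialize the queues. */
        hw->started = 0;
    }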