Date: Tue, 5 Sep 2017 12:45:17 +0800
From: Tiwei Bie
To: Maxime Coquelin
Cc: dev@dpdk.org, yliu@fridaylinux.org, jfreiman@redhat.com, mst@redhat.com, vkaplans@redhat.com, jasowang@redhat.com
Message-ID: <20170905044516.GC31895@debian-ZGViaWFuCg>
References: <20170831095023.21037-1-maxime.coquelin@redhat.com> <20170831095023.21037-4-maxime.coquelin@redhat.com>
In-Reply-To: <20170831095023.21037-4-maxime.coquelin@redhat.com>
Subject: Re: [dpdk-dev] [PATCH 03/21] vhost: protect virtio_net device struct

On Thu, Aug 31, 2017 at 11:50:05AM +0200, Maxime Coquelin wrote:
> virtio_net device might be accessed while being reallocated
> in case of NUMA awareness. This case might be theoretical,
> but it will be needed anyway to protect vrings pages against
> invalidation.
>
> The virtio_net devs are now protected with a readers/writers
> lock, so that before reallocating the device, it is ensured
> that it is not being referenced by the processing threads.
>
[...]
>
> +struct virtio_net *
> +get_device(int vid)
> +{
> +	struct virtio_net *dev;
> +
> +	rte_rwlock_read_lock(&vhost_devices[vid].lock);
> +
> +	dev = __get_device(vid);
> +	if (unlikely(!dev))
> +		rte_rwlock_read_unlock(&vhost_devices[vid].lock);
> +
> +	return dev;
> +}
> +
> +void
> +put_device(int vid)
> +{
> +	rte_rwlock_read_unlock(&vhost_devices[vid].lock);
> +}
> +

This patch introduces a per-device rwlock that has to be acquired
unconditionally in the data path. So for each vhost device, the IO
threads of the different queues will need to acquire/release this lock
for every enqueue and dequeue operation, which will cause cache
contention when multiple queues are enabled and handled by different
cores. With this patch alone, I saw a ~7% performance drop when
enabling 6 queues in a 64-byte iofwd loopback test.
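To make the contention point a bit more concrete, below is roughly what
every burst ends up doing after this change. It is only a simplified
sketch, not the actual patch code: the struct vhost_device layout, the
virtio_dev_rx() prototype and the wrapper name are assumptions for
illustration.

#include <stdint.h>
#include <rte_branch_prediction.h>
#include <rte_mbuf.h>
#include <rte_rwlock.h>

/* Simplified/assumed declarations, only to make the sketch self-contained. */
struct virtio_net;
struct vhost_device {
	struct virtio_net *dev;
	rte_rwlock_t lock;
};
extern struct vhost_device vhost_devices[];
extern uint16_t virtio_dev_rx(struct virtio_net *dev, uint16_t queue_id,
			      struct rte_mbuf **pkts, uint16_t count);

/*
 * Roughly what each enqueue burst (and symmetrically each dequeue burst)
 * turns into: a read lock/unlock on the single per-device rwlock, no
 * matter which queue -- and therefore which core -- is doing the work.
 */
static uint16_t
enqueue_burst_sketch(int vid, uint16_t queue_id,
		     struct rte_mbuf **pkts, uint16_t count)
{
	struct virtio_net *dev;
	uint16_t nb_tx;

	/*
	 * Atomic read-modify-write on vhost_devices[vid].lock: all queues
	 * of the same device hit this one cache line from their cores.
	 */
	rte_rwlock_read_lock(&vhost_devices[vid].lock);

	dev = vhost_devices[vid].dev;
	if (unlikely(dev == NULL)) {
		rte_rwlock_read_unlock(&vhost_devices[vid].lock);
		return 0;
	}

	nb_tx = virtio_dev_rx(dev, queue_id, pkts, count);

	/* Second atomic RMW on the same cache line for this burst. */
	rte_rwlock_read_unlock(&vhost_devices[vid].lock);

	return nb_tx;
}

Is there any way to avoid introducing this lock to the data path?

Best regards,
Tiwei Bie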