From mboxrd@z Thu Jan  1 00:00:00 1970
Date: Thu, 16 Mar 2017 14:21:22 +0800
From: Yuanhan Liu
To: Kevin Traynor
Cc: maxime.coquelin@redhat.com, dev@dpdk.org, stable@dpdk.org
Message-ID: <20170316062122.GN18844@yliu-dev.sh.intel.com>
In-Reply-To: <1489605049-18686-1-git-send-email-ktraynor@redhat.com>
Subject: Re: [dpdk-stable] [PATCH] vhost: fix virtio_net cache sharing of broadcast_rarp

On Wed, Mar 15, 2017 at 07:10:49PM +0000, Kevin Traynor wrote:
> The virtio_net structure is used in both the enqueue and dequeue
> datapaths. broadcast_rarp is checked with cmpset in the dequeue
> datapath regardless of whether descriptors are available or not.
>
> It is observed that in some cases where dequeue and enqueue are
> performed by different cores and no packets are available on the
> dequeue datapath (i.e. uni-directional traffic), the frequent checking
> of broadcast_rarp in dequeue causes performance degradation for the
> enqueue datapath.
>
> In OVS the issue can cause a uni-directional performance drop of up
> to 15%.
>
> Fix that by moving broadcast_rarp to a different cache line in the
> virtio_net struct.

Thanks, but I'm a bit confused. The drop looks like it is caused by
cache false sharing, but I don't see anything that would lead to false
sharing: there is no write to the cache line that broadcast_rarp
belongs to. Or is the "volatile" type the culprit here?

Speaking of that, I had actually considered turning "broadcast_rarp"
into a plain "int" or "uint16_t" to make it more lightweight. The
reason I used an atomic type was to send exactly one broadcast RARP
packet once a SEND_RARP request is received; otherwise, we may send
more than one RARP packet when MQ is involved. But I don't think we
have to be that accurate: it's tolerable if a few extra RARPs are
sent. After all, I saw 4 SEND_RARP requests (aka 4 RARP packets) the
last time I tried vhost-user live migration. I don't quite remember
why it was 4, though.

That said, I think it would also resolve the performance issue if you
changed "rte_atomic16_t" to "uint16_t", without moving the field?

	--yliu
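
For context, the check being discussed is the RARP-injection test in
rte_vhost_dequeue_burst() (lib/librte_vhost/virtio_net.c). Below is a
paraphrased sketch, not the verbatim source, and the neighbor fields
are made up for illustration. One plausible mechanism for the slowdown:
rte_atomic16_cmpset() compiles to a locked cmpxchg, which pulls the
cache line into the dequeue core in exclusive state on every call,
whether or not the compare succeeds.

#include <stdint.h>
#include <rte_atomic.h>
#include <rte_branch_prediction.h>

/* Illustrative layout only -- the neighbor fields are invented. */
struct virtio_net_sketch {
	uint64_t features;             /* read by enqueue path each burst */
	uint32_t flags;                /* read by enqueue path each burst */
	rte_atomic16_t broadcast_rarp; /* cmpset by dequeue path each call */
};

/* Runs on every dequeue call, even when the avail ring is empty. */
static inline int
pending_rarp(struct virtio_net_sketch *dev)
{
	return unlikely(rte_atomic16_cmpset(
		(volatile uint16_t *)&dev->broadcast_rarp.cnt, 1, 0));
}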
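
The patch's approach, roughly: make sure broadcast_rarp no longer
shares a cache line with fields the enqueue path touches every burst.
Reordering the struct members or forcing alignment both achieve this;
the sketch below uses explicit alignment for clarity (again with
invented neighbor fields):

#include <stdint.h>
#include <rte_atomic.h>
#include <rte_memory.h>  /* __rte_cache_aligned */

struct virtio_net_sketch {
	uint64_t features;
	uint32_t flags;
	/* ... other enqueue-side hot fields ... */

	/* On its own cache line: the dequeue-side cmpset no longer
	 * invalidates the line the enqueue side is reading from. */
	rte_atomic16_t broadcast_rarp __rte_cache_aligned;
};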
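
And the alternative floated at the end of the mail, sketched: make
broadcast_rarp a plain uint16_t and test it with an ordinary load,
storing only when a RARP is actually pending. In the common case the
line is only read, so it can stay shared between cores. The cost is
exactly the relaxed accuracy discussed above: with MQ, two queues can
race past the check and both send a RARP.

#include <stdint.h>
#include <rte_branch_prediction.h>

struct virtio_net_plain {
	uint64_t features;
	uint32_t flags;
	uint16_t broadcast_rarp;  /* plain type, no locked RMW */
};

static inline int
pending_rarp_plain(struct virtio_net_plain *dev)
{
	if (unlikely(dev->broadcast_rarp)) {
		dev->broadcast_rarp = 0;  /* racy; duplicate RARPs tolerated */
		return 1;
	}
	return 0;
}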