From: "Zhou, Danny"
To: "Michael Marchetti", "dev@dpdk.org"
Thread-Topic: overcommitting CPUs
Date: Tue, 26 Aug 2014 16:42:23 +0000
Subject: Re: [dpdk-dev] overcommitting CPUs

I have a prototype that works on Niantic to enable NIC rx interrupts and allow switching between interrupt and polling mode according to the real traffic load on the rx queue. It is designed for DPDK power management, and it can apply to CPU resource sharing as well. It only works in a non-virtualized environment at the moment. The prototype also optimizes the DPDK interrupt notification mechanism to user space in order to minimize latency. Basically, it looks like a user-space NAPI. (A rough sketch of the switching loop is appended below the quoted message.)

The downside of this solution is that packet latency grows: it becomes the combination of interrupt latency, CPU wakeup latency from C3/C6 to C0, cache warm-up latency, and OS scheduling latency. It can also drop packets under burst traffic on >40G NICs. In other words, the latency is non-deterministic, which makes it unsuitable for packet-latency-sensitive scenarios.

> -----Original Message-----
> From: dev [mailto:dev-bounces@dpdk.org] On Behalf Of Michael Marchetti
> Sent: Wednesday, August 27, 2014 12:27 AM
> To: dev@dpdk.org
> Subject: [dpdk-dev] overcommitting CPUs
>
> Hi, has there been any consideration to introduce a non-spinning network
> driver (interrupt based), for the purpose of overcommitting CPUs in a
> virtualized environment? This would obviously have reduced high-end
> performance but would allow for increased guest density (sharing of
> physical CPUs) on a host.
>
> I am interested in adding support for this kind of operation, is there
> any interest in the community?
>
> Thanks,
>
> Mike.
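
[Sketch referenced above. This is only a hypothetical illustration of the interrupt/polling switch, not the actual prototype code: it is written against the rx-interrupt APIs (rte_eth_dev_rx_intr_enable/disable, rte_eth_dev_rx_intr_ctl_q, rte_epoll_wait) that only appeared in DPDK later, it omits EAL and port setup (the port would need intr_conf.rxq enabled), and BURST_SIZE / IDLE_THRESHOLD are invented tuning knobs.]

#include <stdint.h>
#include <rte_ethdev.h>
#include <rte_interrupts.h>
#include <rte_mbuf.h>

#define BURST_SIZE      32
#define IDLE_THRESHOLD  300   /* empty polls before dropping back to interrupt mode */

static void
rx_poll_or_sleep(uint16_t port_id, uint16_t queue_id)
{
        struct rte_mbuf *pkts[BURST_SIZE];
        struct rte_epoll_event ev;
        uint32_t idle = 0;

        /* Register this rx queue's interrupt with the per-thread epoll instance. */
        rte_eth_dev_rx_intr_ctl_q(port_id, queue_id, RTE_EPOLL_PER_THREAD,
                                  RTE_INTR_EVENT_ADD, NULL);

        for (;;) {
                uint16_t nb = rte_eth_rx_burst(port_id, queue_id, pkts, BURST_SIZE);

                if (nb > 0) {
                        idle = 0;
                        for (uint16_t i = 0; i < nb; i++)
                                rte_pktmbuf_free(pkts[i]);  /* placeholder for real work */
                        continue;
                }

                if (++idle < IDLE_THRESHOLD)
                        continue;               /* queue still "hot": keep busy-polling */

                /*
                 * Queue has been quiet for a while: arm the rx interrupt and block
                 * until traffic arrives (a real implementation would re-check the
                 * queue once after arming to close the race window).
                 */
                rte_eth_dev_rx_intr_enable(port_id, queue_id);
                rte_epoll_wait(RTE_EPOLL_PER_THREAD, &ev, 1, -1);
                rte_eth_dev_rx_intr_disable(port_id, queue_id);
                idle = 0;                       /* woken up: resume polling mode */
        }
}

While traffic is flowing this behaves like a normal rte_eth_rx_burst() polling loop; only when the queue stays empty past the threshold does the core go to sleep, which is exactly where the extra interrupt, wakeup, cache warm-up and scheduling latency mentioned above comes from.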