From: "Wiles, Keith"
To: Lin XU, "dev@dpdk.org"
Date: Wed, 27 May 2015 14:50:48 +0000
Subject: Re: [dpdk-dev] proposal: raw packet send and receive API for PMD driver

On 5/26/15, 11:18 PM, "Lin XU" wrote:

> I think it is very important to decouple
> PMD drivers from the DPDK framework.
>
> (1) Currently, the rte_mbuf struct is too simple, and it is hard to support
> complex applications such as IPsec, flow control, etc. This key struct
> should be extensible to support customer-defined management headers and
> hardware offloading features.

I was wondering if adding something like M_EXT support for external storage to the DPDK mbuf would be more reasonable. IMO, decoupling the PMDs from DPDK would likely impact performance, and I would prefer not to let that happen. The drivers are written for performance, but they did start out as normal FreeBSD/Linux drivers. Most of the core code in the Intel drivers is shared with other systems.

> (2) To support more NICs.
>
> So, I think it is time to add a new API for the PMDs (in a non-radical way),
> so that developers can add initial callback functions in a PMD for various
> upper-layer protocol procedures.

We have one callback now, I think, but what callbacks do you need? The only callback I can think of is one that lets a stack know when it can release its hold on the data, once it has been transmitted, that it was keeping for retry purposes.