From mboxrd@z Thu Jan 1 00:00:00 1970
Date: Wed, 29 Nov 2017 18:26:07 +0530
From: Jerin Jacob
To: "Ma, Liang"
Cc: dev@dpdk.org, "Van Haaren, Harry", "Richardson, Bruce",
 "Jain, Deepak K", "Mccarthy, Peter"
Message-ID: <20171129125605.GA24298@jerin>
References: <1511522632-139652-1-git-send-email-liang.j.ma@intel.com>
 <20171124205532.GA5197@jerin>
 <20171129121954.GA23464@sivswdev01.ir.intel.com>
In-Reply-To: <20171129121954.GA23464@sivswdev01.ir.intel.com>
MIME-Version: 1.0
Content-Type: text/plain; charset=us-ascii
Content-Disposition: inline
User-Agent: Mutt/1.9.1 (2017-09-22)
Subject: Re: [dpdk-dev] [RFC PATCH 0/7] RFC:EventDev OPDL PMD
List-Id: DPDK patches and discussions

-----Original Message-----
> Date: Wed, 29 Nov 2017 12:19:54 +0000
> From: "Ma, Liang"
> To: Jerin Jacob
> CC: dev@dpdk.org, "Van Haaren, Harry", "Richardson, Bruce",
>  "Jain, Deepak K", "Mccarthy, Peter"
> Subject: Re: [RFC PATCH 0/7] RFC:EventDev OPDL PMD
> User-Agent: Mutt/1.5.20 (2009-06-14)
>
> Hi Jerin,
>    Many thanks for your comments.
> Please check my comments below.
>
> On 25 Nov 02:25, Jerin Jacob wrote:
> > -----Original Message-----
> > > Date: Fri, 24 Nov 2017 11:23:45 +0000
> > > From: liang.j.ma@intel.com
> > > To: jerin.jacob@caviumnetworks.com
> > > CC: dev@dpdk.org, harry.van.haaren@intel.com, bruce.richardson@intel.com,
> > >  deepak.k.jain@intel.com, john.geary@intel.com
> > > Subject: [RFC PATCH 0/7] RFC:EventDev OPDL PMD
> > > X-Mailer: git-send-email 2.7.5
> > >
> > > From: Liang Ma
> >
> > Thanks Liang Ma for the RFC.
> >
> > > The OPDL (Ordered Packet Distribution Library) eventdev is a specific
> > > implementation of the eventdev API. It is particularly suited to packet
> > > processing workloads that have high throughput and low latency
> > > requirements. All packets follow the same path through the device.
> > > The order in which packets flow is determined by the order in which
> > > queues are set up. Packets are left on the ring until they are
> > > transmitted. As a result packets do not go out of order.
> > >
> > > Features:
> > >
> > > The OPDL eventdev implements a subset of the features of the eventdev API:
> > >
> > > Queues
> > >  * Atomic
> > >  * Ordered (Parallel is supported, as parallel is a subset of Ordered)
> > >  * Single-Link
> > >
> > > Ports
> > >  * Load balanced (for Atomic, Ordered, Parallel queues)
> > >  * Single Link (for single-link queues)
> > >
> > > Single Port Queue
> > >
> > > It is possible to create a Single Port Queue using
> > > RTE_EVENT_QUEUE_CFG_SINGLE_LINK. Packets dequeued from this queue do
> > > not need to be re-enqueued (as is the case with an ordered queue). The
> > > purpose of this queue is to allow for asynchronous handling of packets
> > > in the middle of a pipeline. Ordered queues in the middle of a pipeline
> > > cannot delete packets.
> > >
> > > Queue Dependencies
> > >
> > > As stated, the order in which packets travel through queues is static
> > > in nature.
> > > They go through the queues in the order the queues are set up at
> > > initialisation with rte_event_queue_setup(). For example, if an
> > > application sets up 3 queues, Q0, Q1, Q2, and has 4 associated ports,
> > > P0, P1, P2 and P3, then packets must be:
> > >
> > > * Enqueued onto Q0 (typically through P0), then
> > > * Dequeued from Q0 (typically through P1), then
> > > * Enqueued onto Q1 (also through P1), then
> > > * Dequeued from Q1 (typically through P2), then
> > > * Enqueued onto Q2 (also through P2), then
> > > * Dequeued from Q2 (typically through P3) and then transmitted on the
> > >   relevant eth port
> > >
> > > Limitations
> > >
> > > The opdl implementation has a number of limitations. These limitations
> > > are due to the static nature of the underlying queues. It is because of
> > > this that the implementation can achieve such high throughput and low
> > > latency.
> > >
> > > The following list is a comprehensive outline of what is supported and
> > > of the limitations/restrictions imposed by the opdl pmd:
> > >
> > > - The order in which packets move between queues is static and fixed
> > >   (dynamic scheduling is not supported).
> > > - The NEW and RELEASE op types are not explicitly supported. RX (first
> > >   enqueue) implicitly adds NEW event types, and TX (final dequeue)
> > >   implicitly does RELEASE event types.
> > > - All packets follow the same path through device queues.
> > > - Flows within queues are NOT supported.
> > > - Event priority is NOT supported.
> > > - Once the device is stopped, all inflight events are lost. Applications
> > >   should clear all inflight events before stopping it.
> > > - Each port can only be associated with one queue.
> > > - Each queue can have multiple ports associated with it.
> > > - Each worker core has to dequeue the maximum burst size for that port.
> > > - For performance reasons, the rte_event flow_id should not be updated
> > >   once the packet is enqueued on RX.
> >
> > Some top-level comments,
> >
> > # How does the application know this PMD has the above limitations?
> >
> > I think we need to add more RTE_EVENT_DEV_CAP_* capabilities to depict
> > these constraints. On the same note, I believe this PMD is "radically"
> > different from the other SW/HW PMDs, in which case we cannot write a
> > portable application using this PMD, and there is no point in
> > abstracting it as an eventdev PMD. Could you please work out the new
> > capabilities required to enable this PMD. If it needs too many
> > capability flags to express this PMD's behaviour, we might need a
> > different library for it, as that defeats the purpose of portable
> > eventdev applications.
> >
> Agree with improving the capability information by adding more detail
> with RTE_EVENT_DEV_CAP_*.

Please submit the patches for the new caps required by this PMD to depict
the constraints. That is the only way an application can know the
constraints for a given PMD.

> While the OPDL is designed around a different load-balancing
> architecture, that of load-balancing across pipeline stages where a
> consumer is only working on a single stage, this does not necessarily
> mean that it is completely incompatible with other eventdev
> implementations. Although it is true that an application written to use
> one of the existing eventdevs probably won't work nicely with the OPDL
> eventdev, the converse situation should work ok. That is, an application
> written as a pipeline using the OPDL eventdev for load balancing should
> work without changes with the generic SW implementation, and there
> should be no reason why it should not also work with other HW
> implementations in DPDK too.
>
> The OPDL PMD implements a subset of the eventdev API's functionality. I
> demonstrated OPDL at this year's PRC DPDK summit and got some early
> feedback from potential users.
> Most of them would like to use it under the existing API (aka eventdev)
> rather than under another new API/lib. That makes it easier for
> potential users to swap to an existing SW/HW eventdev PMD.

Perfect. Let's have one application then, so it will make it easy to swap
SW/HW eventdev PMDs.

> > # We should not add yet another PMD-specific example application in
> > the examples area, like "examples/eventdev_pipeline_opdl_pmd". We are
> > working on making examples/eventdev/pipeline_sw_pmd work on both HW
> > and SW.
> >
> We would agree here that we don't need a proliferation of example
> applications. However this is a different architecture (not a dynamic
> packet scheduler but rather a static pipeline work distributor), and as
> such perhaps we should have a sample app that demonstrates each
> contrasting architecture.

I agree, we need a sample application. Why not change the existing
examples/eventdev/pipeline_sw_pmd to make it work, as we are addressing
pipelining here. Let's write the application based on THE USE CASE, not
specific to a PMD. PMD-specific applications won't scale.

> > # We should not add new PMD-specific test cases in the
> > test/test/test_eventdev_opdl.c area. I think the existing PMD-specific
> > test cases can be moved to the respective driver area, and each driver
> > can do a self-test by passing some command line arguments to the vdev.
> >
> We simply followed the existing test structure here. Would it be
> confusing to have another variant of example test code; is this done
> anywhere else? Also, would there be a chance that DTS would miss running
> the tests, or not like having to run them using a different method?
> However, we would defer to the consensus here. Could you elaborate on
> your concerns with having another test file in the test area?

PMD-specific test cases won't scale. It defeats the purpose of the common
framework. Cryptodev fell into that trap earlier, and then they fixed it.
For the DTS case, I think it can still be verified through vdev command
line arguments to the new PMD.
What do you think?

> > # Do you have relative performance numbers against the existing SW PMD?
> > Meaning, how much does it improve any specific use case WRT the
> > existing SW PMD. That should be a metric to justify the need for a new
> > PMD.
> >
> Yes, we definitely have the numbers. Given the limitations (ref. the
> cover letter), OPDL can achieve a 3X-5X schedule rate (on a Xeon 2699 v4
> platform) compared with the standard SW PMD, and with no need for a
> schedule core. This is the core value of the OPDL PMD. For certain use
> cases, "static pipeline" and "strong order", OPDL is very useful,
> efficient, and generic across processor archs.

Sounds good.

> > # There could be another SW driver from another vendor like ARM. So I
> > think it is important to define the need for another SW PMD and how
> > many limitations/new capabilities it needs to define to fit into the
> > eventdev framework.
> >
> To summarise: OPDL is designed for certain use cases, where performance
> increases dramatically. Also, OPDL can fall back to the standard SW PMD
> seamlessly. That definitely fits into the eventdev API.