From mboxrd@z Thu Jan 1 00:00:00 1970
Date: Thu, 29 Jun 2017 12:47:28 +0530
From: Jerin Jacob
To: "Hunt, David"
Cc: Harry van Haaren, dev@dpdk.org, Gage Eads, Bruce Richardson
Message-ID: <20170629071726.GC5893@jerin>
References: <1492768299-84016-1-git-send-email-harry.van.haaren@intel.com>
 <1492768299-84016-2-git-send-email-harry.van.haaren@intel.com>
 <20170510141202.GA8431@jerin>
 <6c53f05d-2dd2-5b83-2eab-bcecd93bea82@intel.com>
 <20170627093506.GB14276@jerin>
 <72d1d150-a119-2b23-642c-484ca658c4b3@intel.com>
MIME-Version: 1.0
Content-Type: text/plain; charset=us-ascii
Content-Disposition: inline
In-Reply-To: <72d1d150-a119-2b23-642c-484ca658c4b3@intel.com>
User-Agent: Mutt/1.8.3 (2017-05-23)
Subject: Re: [dpdk-dev] [PATCH 1/3] examples/eventdev_pipeline: added sample app
List-Id: DPDK patches and discussions

-----Original Message-----
> Date: Tue, 27 Jun 2017 14:12:20 +0100
> From: "Hunt, David"
> To: Jerin Jacob
> CC: Harry van Haaren, dev@dpdk.org, Gage Eads, Bruce Richardson
> Subject: Re: [dpdk-dev] [PATCH 1/3] examples/eventdev_pipeline: added
>  sample app
> User-Agent: Mozilla/5.0 (Windows NT 6.3; WOW64; rv:45.0) Gecko/20100101
>  Thunderbird/45.8.0
>
> Hi Jerin:
>
> On 27/6/2017 10:35 AM, Jerin Jacob wrote:
> > -----Original Message-----
> > > Date: Mon, 26 Jun 2017 15:46:47 +0100
> > > From: "Hunt, David"
> > > To: Jerin Jacob, Harry van Haaren
> > > CC: dev@dpdk.org, Gage Eads, Bruce Richardson
> > > Subject: Re: [dpdk-dev] [PATCH 1/3] examples/eventdev_pipeline: added
> > >  sample app
> > > User-Agent: Mozilla/5.0 (Windows NT 6.3; WOW64; rv:45.0) Gecko/20100101
> > >  Thunderbird/45.8.0
> > >
> > > Hi Jerin,
> >
> > Hi David,
> >
> > Looks like you have sent the old version. The below-mentioned comments
> > are not addressed in v2.
>
> Oops. Glitch in the Matrix. I've just pushed a V3 with the changes.
>
> > > I'm assisting Harry on the sample app, and have just pushed up a V2 patch
> > > based on your feedback. I've addressed most of your suggestions, comments
> > > below. There may still be a couple of outstanding questions that need
> > > further discussion.
> >
> > A few general comments:
> > 1) Nikhil/Gage's proposal on an ethdev Rx to eventdev adapter will change
> >    the major portion (eventdev setup and producer()) of this application.
> > 2) Producing one lcore's worth of packets really can't be shown as an
> >    example eventdev application, as it will be pretty bad on a low-end
> >    machine. At the least, the application infrastructure should not be
> >    the limit.
> >
> > Considering the above points, should we wait for the Rx adapter to be
> > completed first? I would like to show this as a real-world application
> > that uses eventdev.
> >
> > Thoughts?
> >
> > On the same note:
> > Can we finalize the Rx adapter proposal? I can work on v1 of the patch and
> > the common code if Nikhil or Gage don't have the bandwidth. Let me know?
> >
> > Last follow-up:
> > http://dpdk.org/ml/archives/dev/2017-June/068776.html
>
> I had a quick chat with Harry, and wonder if we'd be as well to merge the
> app as it is now, and as the new frameworks become available, the app can be
> updated to make use of them? I feel it would be better to have something out
> there for people to play with than waiting for 17.11.

I agree with your concern. How about renaming the test and doc to be specific
to the SW PMD for now, and then, once we fix the known issues with HW eventdev
+ ethdev (Rx adapter) integration, renaming the application back to a generic
eventdev example?

> Also, if you have bandwidth to patch the app for your desired use cases,
> that would be a good contribution.
I'd only be guessing for some of it :)

> > > Regards,
> > > Dave
> > >
> > > On 10/5/2017 3:12 PM, Jerin Jacob wrote:
> > > > -----Original Message-----
> > > > > Date: Fri, 21 Apr 2017 10:51:37 +0100
> > > > > From: Harry van Haaren
> > > > > To: dev@dpdk.org
> > > > > CC: jerin.jacob@caviumnetworks.com, Harry van Haaren, Gage Eads,
> > > > >  Bruce Richardson
> > > > > Subject: [PATCH 1/3] examples/eventdev_pipeline: added sample app
> > > > > X-Mailer: git-send-email 2.7.4
> > > > >
> > > > > This commit adds a sample app for the eventdev library.
> > > > > The app has been tested with DPDK 17.05-rc2, hence this
> > > > > release (or later) is recommended.
> > > > >
> > > > > The sample app showcases a pipeline processing use-case,
> > > > > with event scheduling and processing defined per stage.
> > > > > The application receives traffic as normal, with each
> > > > > packet traversing the pipeline. Once the packet has
> > > > > been processed by each of the pipeline stages, it is
> > > > > transmitted again.
> > > > >
> > > > > The app provides a framework to utilize cores for a single
> > > > > role or multiple roles. Examples of roles are the RX core,
> > > > > TX core, Scheduling core (in the case of the event/sw PMD),
> > > > > and worker cores.
> > > > >
> > > > > Various flags are available to configure numbers of stages,
> > > > > cycles of work at each stage, type of scheduling, number of
> > > > > worker cores, queue depths etc. For a full explanation,
> > > > > please refer to the documentation.
> > > > >
> > > > > Signed-off-by: Gage Eads
> > > > > Signed-off-by: Bruce Richardson
> > > > > Signed-off-by: Harry van Haaren
> > > >
> > > > Thanks for the example application to share the SW view.
> > > > I could make it run on HW after some tweaking (not optimized, though).
> > > >
> > > > [...]
> > > > > +#define MAX_NUM_STAGES 8
> > > > > +#define BATCH_SIZE 16
> > > > > +#define MAX_NUM_CORE 64
> > > >
> > > > How about RTE_MAX_LCORE?
> > >
> > > Core usage in the sample app is held in a uint64_t. Adding arrays would be
> > > possible, but I feel that the extra effort would not give that much benefit.
> > > I've left it as is for the moment, unless you see any strong requirement to
> > > go beyond 64 cores?
> >
> > I think it is OK. Again, with the service core infrastructure this will change.
> >
> > > > > +
> > > > > +static unsigned int active_cores;
> > > > > +static unsigned int num_workers;
> > > > > +static unsigned long num_packets = (1L << 25); /* do ~32M packets */
> > > > > +static unsigned int num_fids = 512;
> > > > > +static unsigned int num_priorities = 1;
> > > >
> > > > Looks like it's not used.
> > >
> > > Yes. Removed.
> > >
> > > > > +static unsigned int num_stages = 1;
> > > > > +static unsigned int worker_cq_depth = 16;
> > > > > +static int queue_type = RTE_EVENT_QUEUE_CFG_ATOMIC_ONLY;
> > > > > +static int16_t next_qid[MAX_NUM_STAGES+1] = {-1};
> > > > > +static int16_t qid[MAX_NUM_STAGES] = {-1};
> > > >
> > > > Moving all fastpath-related variables under a cache-aligned structure
> > > > will help.
> > >
> > > I tried a few different combinations of this, and saw no gains, some losses.
> > > So I will leave it as is for the moment, if that's OK.
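Just to illustrate the kind of grouping meant above, an untested sketch using
the variable names from the patch (not code from the patch itself; assumes
rte_memory.h for __rte_cache_aligned):

struct fastpath_params {
	unsigned int num_fids;
	unsigned int num_stages;
	unsigned int worker_cq_depth;
	int queue_type;
	int16_t next_qid[MAX_NUM_STAGES + 1];
	int16_t qid[MAX_NUM_STAGES];
} __rte_cache_aligned;

static struct fastpath_params fp;

Everything the workers read per burst then sits on its own cache line, away
from the slow-path globals; whether that shows up in the numbers will depend
on the PMD and the platform, as noted above.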
> > I think the ones used in the fast path are better allocated from huge pages
> > using rte_malloc().
> >
> > > > > +static int worker_cycles;
> > > > > +static int enable_queue_priorities;
> > > > > +
> > > > > +struct prod_data {
> > > > > +	uint8_t dev_id;
> > > > > +	uint8_t port_id;
> > > > > +	int32_t qid;
> > > > > +	unsigned num_nic_ports;
> > > > > +};
> > >
> > > Yes, I saw a percent or two gain when this plus the following two data
> > > structs were cache aligned.
> >
> > Looks like it's not fixed in v2. Looks like you have sent the old version.
> >
> > > > > +
> > > > > +	return 0;
> > > > > +}
> > > > > +
> > > > > +static int
> > > > > +producer(void)
> > > > > +{
> > > > > +	static uint8_t eth_port;
> > > > > +	struct rte_mbuf *mbufs[BATCH_SIZE];
> > > > > +	struct rte_event ev[BATCH_SIZE];
> > > > > +	uint32_t i, num_ports = prod_data.num_nic_ports;
> > > > > +	int32_t qid = prod_data.qid;
> > > > > +	uint8_t dev_id = prod_data.dev_id;
> > > > > +	uint8_t port_id = prod_data.port_id;
> > > > > +	uint32_t prio_idx = 0;
> > > > > +
> > > > > +	const uint16_t nb_rx = rte_eth_rx_burst(eth_port, 0, mbufs, BATCH_SIZE);
> > > > > +	if (++eth_port == num_ports)
> > > > > +		eth_port = 0;
> > > > > +	if (nb_rx == 0) {
> > > > > +		rte_pause();
> > > > > +		return 0;
> > > > > +	}
> > > > > +
> > > > > +	for (i = 0; i < nb_rx; i++) {
> > > > > +		ev[i].flow_id = mbufs[i]->hash.rss;
> > > >
> > > > Prefetching mbufs[i+1] may help here?
> > >
> > > I tried, didn't make much difference.
> >
> > OK.
> >
> > > > > +		ev[i].op = RTE_EVENT_OP_NEW;
> > > > > +		ev[i].sched_type = queue_type;
> > > >
> > > > The value of RTE_EVENT_QUEUE_CFG_ORDERED_ONLY != RTE_SCHED_TYPE_ORDERED,
> > > > so we cannot assign queue_type to .sched_type.
> > > >
> > > > I think one option to avoid the translation in the application is to:
> > > > - Remove RTE_EVENT_QUEUE_CFG_ALL_TYPES and RTE_EVENT_QUEUE_CFG_*_ONLY
> > > > - Introduce a new RTE_EVENT_DEV_CAP_ flag to denote the
> > > >   RTE_EVENT_QUEUE_CFG_ALL_TYPES capability
> > > > - Add sched_type to struct rte_event_queue_conf. If the capability flag
> > > >   is not set, then the implementation takes the sched_type value for
> > > >   the queue.
> > > >
> > > > Any thoughts?
> > >
> > > Not sure here; would it be OK for the moment, and we can work on a patch
> > > in the future?
> >
> > OK
> >
> > > > > +
> > > > > +		if (tx_core[lcore_id] && (tx_single ||
> > > > > +		    rte_atomic32_cmpset(&tx_lock, 0, 1))) {
> > > > > +			consumer();
> > > >
> > > > Does consumer() need to come in this pattern? I am thinking that if the
> > > > event is from the last stage, then consumer() could be called from
> > > > worker().
> > > >
> > > > I think the above scheme works better when the _same_ worker code needs
> > > > to run in both of the cases where
> > > > 1) the ethdev HW is capable of enqueuing packets to the same txq from
> > > >    multiple threads
> > > > 2) the ethdev is not capable of doing so.
> > > >
> > > > So the above cases can be addressed at configuration time, where we link
> > > > the queues to ports:
> > > > case 1) link all workers to the last queue
> > > > case 2) link only one worker to the last queue
> > > > and keep the common worker code.
> > > >
> > > > The HW implementation has functional and performance issues if "two"
> > > > ports are assigned to one lcore for dequeue. The above scheme fixes
> > > > that problem too.
> > >
> > > Can we have a bit more discussion on this item? Is this needed for this
> > > sample app, or can we perhaps work a patch for this later? Harry?
> >
> > As explained above, is there any issue in keeping consumer() for the last
> > stage?
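To make the two link-time configurations above concrete, a rough sketch
(tx_queue_id, cons_port_id and mt_safe_txq are hypothetical names, not from
the patch):

uint8_t tx_q = tx_queue_id;	/* the queue feeding TX */
unsigned int i;

if (mt_safe_txq) {
	/* case 1: link every worker port to the TX queue; whichever worker
	 * dequeues from the last stage calls consumer() */
	for (i = 0; i < num_workers; i++)
		rte_event_port_link(dev_id, worker_data[i].port_id,
				&tx_q, NULL, 1);
} else {
	/* case 2: link only the single consumer port to the TX queue,
	 * so only one lcore ever touches the txq */
	rte_event_port_link(dev_id, cons_port_id, &tx_q, NULL, 1);
}

Either way the worker loop itself stays identical; only the queue-to-port
links change.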
>
> I would probably see this as a future enhancement as per my initial comments
> above. Any hardware or new framework additions are welcome as future patches
> to the app.
>
> > > > > +			rte_atomic32_clear((rte_atomic32_t *)&tx_lock);
> > > > > +		}
> > > > > +}
> > > > > +
> > > > > +static int
> > > > > +worker(void *arg)
> > > > > +{
> > > > > +	struct rte_event events[BATCH_SIZE];
> > > > > +
> > > > > +	struct worker_data *data = (struct worker_data *)arg;
> > > > > +	uint8_t dev_id = data->dev_id;
> > > > > +	uint8_t port_id = data->port_id;
> > > > > +	size_t sent = 0, received = 0;
> > > > > +	unsigned lcore_id = rte_lcore_id();
> > > > > +
> > > > > +	while (!done) {
> > > > > +		uint16_t i;
> > > > > +
> > > > > +		schedule_devices(dev_id, lcore_id);
> > > > > +
> > > > > +		if (!worker_core[lcore_id]) {
> > > > > +			rte_pause();
> > > > > +			continue;
> > > > > +		}
> > > > > +
> > > > > +		uint16_t nb_rx = rte_event_dequeue_burst(dev_id, port_id,
> > > > > +				events, RTE_DIM(events), 0);
> > > > > +
> > > > > +		if (nb_rx == 0) {
> > > > > +			rte_pause();
> > > > > +			continue;
> > > > > +		}
> > > > > +		received += nb_rx;
> > > > > +
> > > > > +		for (i = 0; i < nb_rx; i++) {
> > > > > +			struct ether_hdr *eth;
> > > > > +			struct ether_addr addr;
> > > > > +			struct rte_mbuf *m = events[i].mbuf;
> > > > > +
> > > > > +			/* The first worker stage does classification */
> > > > > +			if (events[i].queue_id == qid[0])
> > > > > +				events[i].flow_id = m->hash.rss % num_fids;
> > > >
> > > > Not sure why we need to do this (shrinking the flows) in worker() in a
> > > > queue-based pipeline. If a PMD has any specific requirement on num_fids,
> > > > I think we can move this to the configuration stage, or the PMD can
> > > > choose an optimum fid internally to avoid the modulus-operation tax in
> > > > the fast path in all PMDs.
> > > >
> > > > Does struct rte_event_queue_conf.nb_atomic_flows help here?
> > >
> > > In my tests the modulus makes very little difference in the throughput. And
> > > I think it's good to have a way of varying the number of flows for testing
> > > different scenarios, even if it's not the most performant.
> >
> > Not sure.
> >
> > > > > +
> > > > > +			events[i].queue_id = next_qid[events[i].queue_id];
> > > > > +			events[i].op = RTE_EVENT_OP_FORWARD;
> > > >
> > > > Missing events[i].sched_type. The HW PMD does not work with this.
> > > > I think we can use a similar scheme to next_qid for next_sched_type.
> > >
> > > Done. Added events[i].sched_type = queue_type.
> > >
> > > > > +
> > > > > +			/* change mac addresses on packet (to use mbuf data) */
> > > > > +			eth = rte_pktmbuf_mtod(m, struct ether_hdr *);
> > > > > +			ether_addr_copy(&eth->d_addr, &addr);
> > > > > +			ether_addr_copy(&eth->s_addr, &eth->d_addr);
> > > > > +			ether_addr_copy(&addr, &eth->s_addr);
> > > >
> > > > IMO, we can make the packet-processing code a "static inline function"
> > > > so different worker types can reuse it.
> > >
> > > Done. Moved out to a work() function.
> >
> > I think the mac swap should be done in the last stage, not on each forward,
> > i.e. with the existing code, a 2-stage forward leaves the addresses in the
> > original order.
> >
> > > > > +
> > > > > +			/* do a number of cycles of work per packet */
> > > > > +			volatile uint64_t start_tsc = rte_rdtsc();
> > > > > +			while (rte_rdtsc() < start_tsc + worker_cycles)
> > > > > +				rte_pause();
> > > >
> > > > Ditto.
> > >
> > > Done. Moved out to a work() function.
> > >
> > > > I think all worker-specific variables like "worker_cycles" can be moved
> > > > into one structure and used from there.
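As a reference for that refactor, a minimal sketch of a shared work() helper
(a sketch based on the snippets above, not the code from v3):

static inline void
work(struct rte_mbuf *m, uint64_t cycles)
{
	struct ether_hdr *eth;
	struct ether_addr addr;

	/* change mac addresses on packet (to use mbuf data) */
	eth = rte_pktmbuf_mtod(m, struct ether_hdr *);
	ether_addr_copy(&eth->d_addr, &addr);
	ether_addr_copy(&eth->s_addr, &eth->d_addr);
	ether_addr_copy(&addr, &eth->s_addr);

	/* burn a configurable number of cycles per packet */
	uint64_t start_tsc = rte_rdtsc();
	while (rte_rdtsc() < start_tsc + cycles)
		rte_pause();
}

Every worker variant can then call work(m, worker_cycles) without duplicating
the swap/busy-wait logic.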
> > > > > > > > > + } > > > > > + uint16_t nb_tx = rte_event_enqueue_burst(dev_id, port_id, > > > > > + events, nb_rx); > > > > > + while (nb_tx < nb_rx && !done) > > > > > + nb_tx += rte_event_enqueue_burst(dev_id, port_id, > > > > > + events + nb_tx, > > > > > + nb_rx - nb_tx); > > > > > + sent += nb_tx; > > > > > + } > > > > > + > > > > > + if (!quiet) > > > > > + printf(" worker %u thread done. RX=%zu TX=%zu\n", > > > > > + rte_lcore_id(), received, sent); > > > > > + > > > > > + return 0; > > > > > +} > > > > > + > > > > > +/* > > > > > + * Parse the coremask given as argument (hexadecimal string) and fill > > > > > + * the global configuration (core role and core count) with the parsed > > > > > + * value. > > > > > + */ > > > > > +static int xdigit2val(unsigned char c) > > > > multiple instance of "xdigit2val" in DPDK repo. May be we can push this > > > > as common code. > > > Sure, that's something we can look at in a separate patch, now that it's > > > being used more and more. > > make sense. > > > > > > > +{ > > > > > + int val; > > > > > + > > > > > + if (isdigit(c)) > > > > > + val = c - '0'; > > > > > + else if (isupper(c)) > > > > > + val = c - 'A' + 10; > > > > > + else > > > > > + val = c - 'a' + 10; > > > > > + return val; > > > > > +} > > > > > + > > > > > + > > > > > +static void > > > > > +usage(void) > > > > > +{ > > > > > + const char *usage_str = > > > > > + " Usage: eventdev_demo [options]\n" > > > > > + " Options:\n" > > > > > + " -n, --packets=N Send N packets (default ~32M), 0 > > > implies no limit\n" > > > > > + " -f, --atomic-flows=N Use N random flows from 1 to N > > > (default 16)\n" > > > > I think, this parameter now, effects the application fast path code.I > > > think, > > > > it should eventdev configuration para-mater. > > > See above comment on num_fids > > > > > > > > + " -s, --num_stages=N Use N atomic stages (default > > > 1)\n" > > > > > + " -r, --rx-mask=core mask Run NIC rx on CPUs in core > > > mask\n" > > > > > + " -w, --worker-mask=core mask Run worker on CPUs in core > > > mask\n" > > > > > + " -t, --tx-mask=core mask Run NIC tx on CPUs in core > > > mask\n" > > > > > + " -e --sched-mask=core mask Run scheduler on CPUs in core > > > mask\n" > > > > > + " -c --cq-depth=N Worker CQ depth (default 16)\n" > > > > > + " -W --work-cycles=N Worker cycles (default 0)\n" > > > > > + " -P --queue-priority Enable scheduler queue > > > prioritization\n" > > > > > + " -o, --ordered Use ordered scheduling\n" > > > > > + " -p, --parallel Use parallel scheduling\n" > > > > IMO, all stage being "parallel" or "ordered" or "atomic" is one mode of > > > > operation. It is valid have to any combination. We need to express that in > > > > command like > > > > example: > > > > 3 stage with > > > > O->A->P > > > How about we add an option that specifies the mode of operation for each > > > stage in a string? Maybe have a '-m' option (modes) e.g. '-m appo' for 4 > > > stages with atomic, parallel, paralled, ordered. Or maybe reuse your > > > test-eventdev parameter style? > > Any scheme is fine. > > > > > > > + " -q, --quiet Minimize printed output\n" > > > > > + " -D, --dump Print detailed statistics before > > > exit" > > > > > + "\n"; > > > > > + fprintf(stderr, "%s", usage_str); > > > > > + exit(1); > > > > > +} > > > > > + > > > > [...] 
> > > > > +			rx_single = (popcnt == 1);
> > > > > +			break;
> > > > > +		case 't':
> > > > > +			tx_lcore_mask = parse_coremask(optarg);
> > > > > +			popcnt = __builtin_popcountll(tx_lcore_mask);
> > > > > +			tx_single = (popcnt == 1);
> > > > > +			break;
> > > > > +		case 'e':
> > > > > +			sched_lcore_mask = parse_coremask(optarg);
> > > > > +			popcnt = __builtin_popcountll(sched_lcore_mask);
> > > > > +			sched_single = (popcnt == 1);
> > > > > +			break;
> > > > > +		default:
> > > > > +			usage();
> > > > > +		}
> > > > > +	}
> > > > > +
> > > > > +	if (worker_lcore_mask == 0 || rx_lcore_mask == 0 ||
> > > > > +	    sched_lcore_mask == 0 || tx_lcore_mask == 0) {
> > > > > +
> > > > > +	/* Q creation - one load balanced per pipeline stage*/
> > > > > +
> > > > > +	/* set up one port per worker, linking to all stage queues */
> > > > > +	for (i = 0; i < num_workers; i++) {
> > > > > +		struct worker_data *w = &worker_data[i];
> > > > > +		w->dev_id = dev_id;
> > > > > +		if (rte_event_port_setup(dev_id, i, &wkr_p_conf) < 0) {
> > > > > +			printf("Error setting up port %d\n", i);
> > > > > +			return -1;
> > > > > +		}
> > > > > +
> > > > > +		uint32_t s;
> > > > > +		for (s = 0; s < num_stages; s++) {
> > > > > +			if (rte_event_port_link(dev_id, i,
> > > > > +					&worker_queues[s].queue_id,
> > > > > +					&worker_queues[s].priority,
> > > > > +					1) != 1) {
> > > > > +				printf("%d: error creating link for port %d\n",
> > > > > +						__LINE__, i);
> > > > > +				return -1;
> > > > > +			}
> > > > > +		}
> > > > > +		w->port_id = i;
> > > > > +	}
> > > > > +	/* port for consumer, linked to TX queue */
> > > > > +	if (rte_event_port_setup(dev_id, i, &tx_p_conf) < 0) {
> > > >
> > > > If the ethdev supports MT txq support, then this port can be linked to
> > > > a worker too. Something to consider for the future.
> > >
> > > Sure. No change for now.
> >
> > OK
>
> Just to add a comment on the remaining points above: we would hope that
> none of them are blockers for the merge of the current version, as they can
> be patched in the future as the infrastructure changes.
>
> Rgds,
> Dave.