From mboxrd@z Thu Jan  1 00:00:00 1970
Date: Tue, 4 Sep 2018 13:42:55 +0530
From: Pavan Nikhilesh
To: "Rao, Nikhil", jerin.jacob@caviumnetworks.com,
 santosh.shukla@caviumnetworks.com, anoob.joseph@caviumnetworks.com
Cc: dev@dpdk.org
Message-ID: <20180904081254.GA22964@ltp-pvn>
References: <20180831104040.26678-1-pbhagavatula@caviumnetworks.com>
 <20180831104040.26678-2-pbhagavatula@caviumnetworks.com>
MIME-Version: 1.0
Content-Type: text/plain; charset=us-ascii
Content-Disposition: inline
In-Reply-To:
User-Agent: Mutt/1.10.1 (2018-07-13)
Subject: Re: [dpdk-dev] [PATCH 2/2] app/test-eventdev: add Tx adapter support
List-Id: DPDK patches and discussions

On Tue, Sep 04, 2018 at 11:07:15AM +0530, Rao, Nikhil wrote:
> Hi Pavan,
>
> Few comments below.
>
> On 8/31/2018 4:10 PM, Pavan Nikhilesh wrote:
> > Convert existing Tx service based pipeline to Tx adapter based APIs and
> > simplify worker functions.
> >
> > Signed-off-by: Pavan Nikhilesh
> > ---
> >  app/test-eventdev/test_pipeline_atq.c    | 216 +++++++++++---------
> >  app/test-eventdev/test_pipeline_common.c | 193 ++++++------------
> >  app/test-eventdev/test_pipeline_common.h |  43 ++--
> >  app/test-eventdev/test_pipeline_queue.c  | 238 ++++++++++++-----------
> >  4 files changed, 322 insertions(+), 368 deletions(-)
> >
> > diff --git a/app/test-eventdev/test_pipeline_atq.c b/app/test-eventdev/test_pipeline_atq.c
> >
> > -static int
> > +static __rte_noinline int
> >  pipeline_atq_worker_multi_stage_burst_fwd(void *arg)
> >  {
> >  	PIPELINE_WORKER_MULTI_STAGE_BURST_INIT;
> > -	const uint8_t nb_stages = t->opt->nb_stages;
> > -	const uint8_t tx_queue = t->tx_service.queue_id;
> > +	const uint8_t *tx_queue = t->tx_evqueue_id;
> >
> >  	while (t->done == false) {
> >  		uint16_t nb_rx = rte_event_dequeue_burst(dev, port, ev,
> > @@ -253,9 +235,10 @@ pipeline_atq_worker_multi_stage_burst_fwd(void *arg)
> >
> >  			if (cq_id == last_queue) {
> >  				w->processed_pkts++;
> > -				ev[i].queue_id = tx_queue;
> > +				ev[i].queue_id = tx_queue[ev[i].mbuf->port];
> >  				pipeline_fwd_event(&ev[i],
> >  						RTE_SCHED_TYPE_ATOMIC);
> > +
> Unintentional newline ?

Will remove in next version.
> >  			} else {
> >  				ev[i].sub_event_type++;
> >  				pipeline_fwd_event(&ev[i],
> >
> >  static int
> >
> > @@ -317,23 +296,25 @@ pipeline_atq_eventdev_setup(struct evt_test *test, struct evt_options *opt)
> >  	int nb_ports;
> >  	int nb_queues;
> >  	uint8_t queue;
> > -	struct rte_event_dev_info info;
> > -	struct test_pipeline *t = evt_test_priv(test);
> > -	uint8_t tx_evqueue_id = 0;
> > +	uint8_t tx_evqueue_id[RTE_MAX_ETHPORTS] = {0};
> >  	uint8_t queue_arr[RTE_EVENT_MAX_QUEUES_PER_DEV];
> >  	uint8_t nb_worker_queues = 0;
> > +	uint8_t tx_evport_id = 0;
> > +	uint16_t prod = 0;
> > +	struct rte_event_dev_info info;
> > +	struct test_pipeline *t = evt_test_priv(test);
> >
> >  	nb_ports = evt_nr_active_lcores(opt->wlcores);
> >  	nb_queues = rte_eth_dev_count_avail();
> >
> > -	/* One extra port and queueu for Tx service */
> > -	if (t->mt_unsafe) {
> > -		tx_evqueue_id = nb_queues;
> > -		nb_ports++;
> > -		nb_queues++;
> > +	/* One queue for Tx service */
> > +	if (!t->internal_port) {
> See comment about struct test_pipeline::internal_port in the
> test_pipeline_common.h review below.
> > +		RTE_ETH_FOREACH_DEV(prod) {
> > +			tx_evqueue_id[prod] = nb_queues;
> > +			nb_queues++;
> > +		}
> >  	}
> >
> >
> > @@ -388,14 +371,11 @@ pipeline_atq_eventdev_setup(struct evt_test *test, struct evt_options *opt)
> >  		.new_event_threshold = info.max_num_events,
> >  	};
> >
> > -	if (t->mt_unsafe) {
> > +	if (!t->internal_port) {
> >  		ret = pipeline_event_port_setup(test, opt, queue_arr,
> >  				nb_worker_queues, p_conf);
> >  		if (ret)
> >  			return ret;
> > -
> > -		ret = pipeline_event_tx_service_setup(test, opt, tx_evqueue_id,
> > -				nb_ports - 1, p_conf);
> >  	} else
> >  		ret = pipeline_event_port_setup(test, opt, NULL, nb_queues,
> >  				p_conf);
> > @@ -424,14 +404,17 @@ pipeline_atq_eventdev_setup(struct evt_test *test, struct evt_options *opt)
> >  	 * stride = 1
> >  	 *
> >  	 * event queue pipelines:
> > -	 *	eth0 -> q0
> > -	 *			} (q3->tx) Tx service
> > -	 *	eth1 -> q1
> > +	 *	eth0 -> q0 \
> > +	 *		    q3->tx
> > +	 *	eth1 -> q1 /
> >  	 *
> >  	 * q0,q1 are configured as stated above.
> >  	 * q3 configured as SINGLE_LINK|ATOMIC.
> >  	 */
> >  	ret = pipeline_event_rx_adapter_setup(opt, 1, p_conf);
> > +	if (ret)
> > +		return ret;
> > +	ret = pipeline_event_tx_adapter_setup(opt, p_conf);
> pipeline_event_tx_adapter_setup() creates a tx adapter per eth port,
> that doesn't match the preceding diagram.

I will fix in next version.
> >
> > diff --git a/app/test-eventdev/test_pipeline_common.c b/app/test-eventdev/test_pipeline_common.c
> > index a54068df3..7f858e23f 100644
> > --- a/app/test-eventdev/test_pipeline_common.c
> > +++ b/app/test-eventdev/test_pipeline_common.c
> > @@ -5,58 +5,6 @@
> >
> >
> > @@ -215,7 +160,6 @@ pipeline_ethdev_setup(struct evt_test *test, struct evt_options *opt)
> >  {
> >  	uint16_t i;
> >  	uint8_t nb_queues = 1;
> > -	uint8_t mt_state = 0;
> >  	struct test_pipeline *t = evt_test_priv(test);
> >  	struct rte_eth_rxconf rx_conf;
> >  	struct rte_eth_conf port_conf = {
> > @@ -238,13 +182,13 @@ pipeline_ethdev_setup(struct evt_test *test, struct evt_options *opt)
> >  		return -ENODEV;
> >  	}
> >
> > +	t->internal_port = 0;
> >  	RTE_ETH_FOREACH_DEV(i) {
> >  		struct rte_eth_dev_info dev_info;
> >  		struct rte_eth_conf local_port_conf = port_conf;
> > +		uint32_t caps = 0;
> >
> >  		rte_eth_dev_info_get(i, &dev_info);
> > -		mt_state = !(dev_info.tx_offload_capa &
> > -				DEV_TX_OFFLOAD_MT_LOCKFREE);
> >  		rx_conf = dev_info.default_rxconf;
> >  		rx_conf.offloads = port_conf.rxmode.offloads;
> >
> > @@ -279,11 +223,9 @@ pipeline_ethdev_setup(struct evt_test *test, struct evt_options *opt)
> >  			return -EINVAL;
> >  		}
> >
> > -		t->mt_unsafe |= mt_state;
> > -		t->tx_service.tx_buf[i] =
> > -			rte_malloc(NULL, RTE_ETH_TX_BUFFER_SIZE(BURST_SIZE), 0);
> > -		if (t->tx_service.tx_buf[i] == NULL)
> > -			rte_panic("Unable to allocate Tx buffer memory.");
> > +		rte_event_eth_tx_adapter_caps_get(opt->dev_id, i, &caps);
> > +		if ((caps & RTE_EVENT_ETH_TX_ADAPTER_CAP_INTERNAL_PORT))
> > +			t->internal_port = 1;
> See comment about struct test_pipeline::internal_port below.
> >  		rte_eth_promiscuous_enable(i);
> >  	}
> >
> > @@ -295,7 +237,6 @@ pipeline_event_port_setup(struct evt_test *test, struct evt_options *opt,
> >  		uint8_t *queue_arr, uint8_t nb_queues,
> >  		const struct rte_event_port_conf p_conf)
> >  {
> > -	int i;
> >  	int ret;
> >  	uint8_t port;
> >  	struct test_pipeline *t = evt_test_priv(test);
> > @@ -321,10 +262,9 @@ pipeline_event_port_setup(struct evt_test *test, struct evt_options *opt,
> >  				0) != nb_queues)
> >  			goto link_fail;
> >  	} else {
> > -		for (i = 0; i < nb_queues; i++) {
> > -			if (rte_event_port_link(opt->dev_id, port,
> > -					&queue_arr[i], NULL, 1) != 1)
> > -				goto link_fail;
> > +		if (rte_event_port_link(opt->dev_id, port, queue_arr,
> > +				NULL, nb_queues) != nb_queues) {
> > +			goto link_fail;
> >  		}
> Minor, isn't it possible to replace the if (queue_arr == NULL) {} else
> {} block with a single call to rte_event_port_link() ?

Will simplify in the next version.

> >
> > diff --git a/app/test-eventdev/test_pipeline_common.h b/app/test-eventdev/test_pipeline_common.h
> > index 9cd6b905b..b8939db81 100644
> > --- a/app/test-eventdev/test_pipeline_common.h
> > +++ b/app/test-eventdev/test_pipeline_common.h
> > @@ -14,6 +14,7 @@
> >  #include
> >  #include
> >  #include
> > +#include
> >  #include
> >  #include
> >  #include
> > @@ -26,6 +27,9 @@
> >  #include "evt_options.h"
> >  #include "evt_test.h"
> >
> > +#define PIPELINE_TX_ADPTR_ENQ 0x1
> > +#define PIPELINE_TX_ADPTR_FWD 0x2
> > +
> I don't see a reference to these defines.

These are artifacts from a different design, will remove.

> >  struct test_pipeline;
> >
> >  struct worker_data {
> > @@ -35,30 +39,19 @@ struct worker_data {
> >  	struct test_pipeline *t;
> >  } __rte_cache_aligned;
> >
> > -struct tx_service_data {
> > -	uint8_t dev_id;
> > -	uint8_t queue_id;
> > -	uint8_t port_id;
> > -	uint32_t service_id;
> > -	uint64_t processed_pkts;
> > -	uint16_t nb_ethports;
> > -	struct rte_eth_dev_tx_buffer *tx_buf[RTE_MAX_ETHPORTS];
> > -	struct test_pipeline *t;
> > -} __rte_cache_aligned;
> > -
> >  struct test_pipeline {
> >  	/* Don't change the offset of "done". Signal handler use this memory
> >  	 * to terminate all lcores work.
> >  	 */
> >  	int done;
> >  	uint8_t nb_workers;
> > -	uint8_t mt_unsafe;
> Can we also replace references to mt_unsafe in comments ?

Will clean up in the next version.
> > +	uint8_t internal_port;
> Shouldn't internal_port be a per eth device flag ? Or is there an
> assumption that all eth devices will be such that the eventdev PMD's
> internal port capability will be set ?
>

The current app design doesn't support both models together. I will add
a check to quit when both types of PMD are detected.

> > diff --git a/app/test-eventdev/test_pipeline_queue.c b/app/test-eventdev/test_pipeline_queue.c
> > index 2e0d93d99..e1153573b 100644
> > --- a/app/test-eventdev/test_pipeline_queue.c
> > +++ b/app/test-eventdev/test_pipeline_queue.c
> > @@ -15,7 +15,7 @@ pipeline_queue_nb_event_queues(struct evt_options *opt)
> >  	return (eth_count * opt->nb_stages) + eth_count;
> >  }
> >
> > -static int
> > +static __rte_noinline int
> >  pipeline_queue_worker_single_stage_tx(void *arg)
> >  {
> >  	PIPELINE_WORKER_SINGLE_STAGE_INIT;
> > @@ -29,9 +29,12 @@ pipeline_queue_worker_single_stage_tx(void *arg)
> >  	}
> >
> >  	if (ev.sched_type == RTE_SCHED_TYPE_ATOMIC) {
> > -		pipeline_tx_pkt(ev.mbuf);
> > +		rte_event_eth_tx_adapter_txq_set(ev.mbuf, 0);
> > +		rte_event_eth_tx_adapter_enqueue(dev, port,
> > +				&ev, 1);
> Do we need a retry loop for enqueue above and at other usages in this
> patch ?

Will add it in the next version.

> >  		w->processed_pkts++;
> >  	} else {
> > +
> new line is intentional ?

> >  		ev.queue_id++;
> >  		pipeline_fwd_event(&ev, RTE_SCHED_TYPE_ATOMIC);
> >  		pipeline_event_enqueue(dev, port, &ev);
> > @@ -41,11 +44,11 @@ pipeline_queue_worker_single_stage_tx(void *arg)
> >  	return 0;
> >  }
> >
> Nikhil

Thanks for the review.
Pavan.