From: Thomas Monjalon
To: Vipin Varghese, john.mcnamara@intel.com, marko.kovacevic@intel.com
Cc: dev@dpdk.org, shreyansh.jain@nxp.com, amol.patel@intel.com, sanjay.padubidri@intel.com
Date: Mon, 28 Jan 2019 02:30:32 +0100
Subject: Re: [dpdk-dev] [PATCH v5 2/2] doc: add guide for debug and troubleshoot
Message-ID: <4205846.5ZvrQ4I3CE@xps>
In-Reply-To: <20190121104144.67365-3-vipin.varghese@intel.com>

Hi,

I feel this doc will be updated to provide a complete debug checklist,
and will hopefully become useful to many users.

One general comment about documentation:
it is better to wrap lines logically, for example,
always start sentences at the beginning of a new line.
It will make further update patches simpler to review.
A few more nits below.

21/01/2019 11:41, Vipin Varghese:
> +.. _debug_troubleshoot_via_pmd:

No need of such anchor.

> +
> +Debug & Troubleshoot guide via PMD
> +==================================

Why "via PMD"? Do we use PMD for troubleshooting?
Or is it dedicated to troubleshooting the PMD behaviour?

> +
> +DPDK applications can be designed to run as single thread simple stage to
> +multiple threads with complex pipeline stages. These application can use poll

applications

> +mode devices which helps in offloading CPU cycles. A few models are help

A colon would be nice at the end of the line before the list.

> +
> + * single primary
> + * multiple primary
> + * single primary single secondary
> + * single primary multiple secondary
> +
> +In all the above cases, it is a tedious task to isolate, debug and understand
> +odd behaviour which occurs randomly or periodically. The goal of guide is to
> +share and explore a few commonly seen patterns and behaviour. Then, isolate
> +and identify the root cause via step by step debug at various processing
> +stages.

I don't understand how this introduction is related to "via PMD"
in the title.

> +
> +Application Overview
> +--------------------
> +
> +Let us take up an example application as reference for explaining issues and
> +patterns commonly seen. The sample application in discussion makes use of
> +single primary model with various pipeline stages. The application uses PMD
> +and libraries such as service cores, mempool, pkt mbuf, event, crypto, QoS
> +and eth.

"pkt mbuf" can be called simply mbuf,
but event, crypto and eth should be eventdev, cryptodev and ethdev.

> +
> +The overview of an application modeled using PMD is shown in
> +:numref:`dtg_sample_app_model`.
> +
> +.. _dtg_sample_app_model:
> +
> +.. figure:: img/dtg_sample_app_model.*
> +
> +   Overview of pipeline stage of an application
> +
> +Bottleneck Analysis
> +-------------------
> +
> +To debug the bottleneck and performance issues the desired application

missing comma after "issues"?

> +is made to run in an environment matching as below

colon missing

> +
> +#. Linux 64-bit|32-bit
> +#. DPDK PMD and libraries are used

Isn't it always the case with DPDK?

> +#. Libraries and PMD are either static or shared. But not both

Strange assumption. Why would it be both?

> +#. Machine flag optimizations of gcc or compiler are made constant

What do you mean?

> +
> +Is there mismatch in packet rate (received < send)?
> +~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
> +
> +RX Port and associated core :numref:`dtg_rx_rate`.
> +
> +.. _dtg_rx_rate:
> +
> +.. figure:: img/dtg_rx_rate.*
> +
> +   RX send rate compared against Received rate

RX send ?

> +
> +#. Are generic configuration correct?

Are -> Is

> +   - What is port Speed, Duplex? rte_eth_link_get()
> +   - Are packets of higher sizes are dropped? rte_eth_get_mtu()

are dropped -> dropped

> +   - Are only specific MAC received? rte_eth_promiscuous_get()
> +
> +#. Are there NIC specific drops?
> +   - Check rte_eth_rx_queue_info_get() for nb_desc and scattered_rx
> +   - Is RSS enabled? rte_eth_dev_rss_hash_conf_get()
> +   - Are packets spread on all queues? rte_eth_dev_stats()
> +   - If stats for RX and drops updated on same queue? check receive thread
> +   - If packet does not reach PMD? check if offload for port and queue
> +     matches to traffic pattern send.
> +
> +#. If problem still persists, this might be at RX lcore thread
> +   - Check if RX thread, distributor or event rx adapter? these may be
these may be > + processing less than required > + - Is the application is build using processing pipeline with RX stage? If is build -> built > + there are multiple port-pair tied to a single RX core, try to debug by > + using rte_prefetch_non_temporal(). This will intimate the mbuf in cache > + is temporary. I stop nit-picking review here. Marko, John, please could you check english grammar? Thanks