From: Fuji Nafiul
Date: Thu, 16 May 2024 15:17:49 +0600
Subject: pdump framework highest throughput hit bottleneck at 2.8 million pps
To: users@dpdk.org, dev@dpdk.org

I modified the dpdkdump project's code (which uses the pdump framework) to scale to the highest throughput possible. I enabled the pdump framework (which installs a hook on the rx/tx path in the background) with a large rte_ring, and then fanned the captured packets out to several rte_rings (I tried up to 10), each continuously polled by a separate process running on its own core that writes the packets out.
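For reference, the fan-out stage looks roughly like the sketch below. The names (fan_out_loop, NB_WORKERS, the ring variables) are placeholders for illustration only, not the actual dpdkdump code; it relies only on the standard rte_ring burst API.

/* Illustrative sketch of the fan-out stage described above. MAIN_RING is the
 * ring that rte_pdump fills; worker_rings[] are the per-writer rings. Whole
 * bursts are spread round-robin so the fan-out core only does one dequeue and
 * one enqueue per burst. */
#include <rte_ring.h>
#include <rte_mbuf.h>

#define BURST_SIZE 64
#define NB_WORKERS 10

static void
fan_out_loop(struct rte_ring *main_ring,
             struct rte_ring *worker_rings[NB_WORKERS])
{
    struct rte_mbuf *burst[BURST_SIZE];
    unsigned int next = 0;

    for (;;) {
        unsigned int n = rte_ring_dequeue_burst(main_ring,
                                                (void **)burst,
                                                BURST_SIZE, NULL);
        if (n == 0)
            continue;

        /* Round-robin whole bursts over the writer rings. */
        unsigned int sent = rte_ring_enqueue_burst(worker_rings[next],
                                                   (void **)burst, n, NULL);
        next = (next + 1) % NB_WORKERS;

        /* Free (and, in the real code, count) whatever the writer ring could
         * not take, otherwise the mbuf pool is exhausted and pdump starts
         * dropping at its end. */
        for (unsigned int i = sent; i < n; i++)
            rte_pktmbuf_free(burst[i]);
    }
}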
When my primary application handled around 1 million pps in each direction, I saw around 2 million pps (in+out) arriving in the main rte_ring. But when I increased the load to 2 million pps in each direction, I only got around 2.6-2.8 million pps from the hook, even though I should be seeing around 4 million pps (in+out). I am seeing a large "ring full" count, but is that because my fan-out is too slow, or does pdump itself have a bottleneck? Please let me know.

I took a look at dumpcap, which aims to capture at 10 Gbit/s, but as far as I understand it needs to run as a primary process, whereas I need a secondary DPDK application that can capture around 5-10 million pps.
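Going back to the "ring full" question above, this is roughly how I am trying to see where the backlog builds up (same placeholder ring names as in the first sketch; illustrative only): if the main pdump ring stays near full while the worker rings stay near empty, the fan-out core is the bottleneck; if the worker rings fill up, the writers are too slow.

/* Periodic occupancy report for the main pdump ring and the worker rings. */
#include <stdio.h>
#include <rte_ring.h>

static void
report_ring_levels(struct rte_ring *main_ring,
                   struct rte_ring *worker_rings[], unsigned int nb_workers)
{
    printf("main ring: %u used / %u free\n",
           rte_ring_count(main_ring), rte_ring_free_count(main_ring));
    for (unsigned int i = 0; i < nb_workers; i++)
        printf("worker %u: %u used / %u free\n", i,
               rte_ring_count(worker_rings[i]),
               rte_ring_free_count(worker_rings[i]));
}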