Subject: Re: [PATCH v2] net/memif: fix buffer overflow in zero copy Rx
From: Mihai Brodschi
To: Ferruh Yigit, Jakub Grajciar
Cc: dev@dpdk.org, stable@dpdk.org, Mihai Brodschi
Date: Sat, 31 Aug 2024 16:38:20 +0300
Message-ID: <39430e7d-161e-40c3-bbb0-a7cd5de0b7cf@broadcom.com>

Hi Ferruh,

Apologies for the late response.
I've run some performance tests for the two proposed solutions. In the tables
below, the rte_memcpy results correspond to this patch; the 2xpktmbuf_alloc
results correspond to the other proposed solution (a rough sketch of the
2xpktmbuf_alloc refill is at the bottom of this mail).

bash commands:
server# ./dpdk-testpmd --vdev=net_memif0,id=1,role=server,bsize=1024,rsize=8 --single -l --file=test1 -- --nb-cores --txq --rxq --burst -i
client# ./dpdk-testpmd --vdev=net_memif0,id=1,role=client,bsize=1024,rsize=8,zero-copy=yes --single -l --file=test2 -- --nb-cores --txq --rxq --burst -i

testpmd commands:
client: testpmd> start
server: testpmd> start tx_first

CPU: AMD EPYC 7713P
RAM: DDR4-3200
OS: Debian 12
DPDK: 22.11.1
SERVER_CORES=72,8,9,10,11
CLIENT_CORES=76,12,13,14,15

Results:
===================================================================
|                          |   1 CORE   |  2 CORES   |  4 CORES   |
===================================================================
| unpatched burst=32       |  9.95 Gbps | 19.24 Gbps |  36.4 Gbps |
-------------------------------------------------------------------
| 2xpktmbuf_alloc burst=32 |  9.86 Gbps | 18.88 Gbps |  36.6 Gbps |
-------------------------------------------------------------------
| 2xpktmbuf_alloc burst=31 |  9.17 Gbps | 18.69 Gbps |  35.1 Gbps |
-------------------------------------------------------------------
| rte_memcpy burst=32      |  9.54 Gbps | 19.10 Gbps |  36.6 Gbps |
-------------------------------------------------------------------
| rte_memcpy burst=31      |  9.39 Gbps | 18.53 Gbps |  35.5 Gbps |
===================================================================

CPU: Intel Core i7-14700HX
RAM: DDR5-5600
OS: Ubuntu 24.04.1
DPDK: 23.11.1
SERVER_CORES=0,1,3,5,7
CLIENT_CORES=8,9,11,13,15

Results:
===================================================================
|                          |   1 CORE   |  2 CORES   |  4 CORES   |
===================================================================
| unpatched burst=32       | 15.52 Gbps | 27.35 Gbps |  46.8 Gbps |
-------------------------------------------------------------------
| 2xpktmbuf_alloc burst=32 | 15.49 Gbps | 27.68 Gbps |  46.4 Gbps |
-------------------------------------------------------------------
| 2xpktmbuf_alloc burst=31 | 14.98 Gbps | 26.75 Gbps |  45.2 Gbps |
-------------------------------------------------------------------
| rte_memcpy burst=32      | 15.99 Gbps | 28.44 Gbps |  49.3 Gbps |
-------------------------------------------------------------------
| rte_memcpy burst=31      | 14.85 Gbps | 27.32 Gbps |  46.3 Gbps |
===================================================================

On 19/07/2024 12:03, Ferruh Yigit wrote:
> On 7/8/2024 12:45 PM, Ferruh Yigit wrote:
>> On 7/8/2024 4:39 AM, Mihai Brodschi wrote:
>>>
>>>
>>> On 07/07/2024 21:46, Mihai Brodschi wrote:
>>>>
>>>>
>>>> On 07/07/2024 18:18, Mihai Brodschi wrote:
>>>>>
>>>>>
>>>>> On 07/07/2024 17:05, Ferruh Yigit wrote:
>>>>>>
>>>>>> My expectation is the numbers should be like the following:
>>>>>>
>>>>>> Initially:
>>>>>> size = 256
>>>>>> head = 0
>>>>>> tail = 0
>>>>>>
>>>>>> In the first refill:
>>>>>> n_slots = 256
>>>>>> head = 256
>>>>>> tail = 0
>>>>>>
>>>>>> Subsequent run where 32 slots are used:
>>>>>> head = 256
>>>>>> tail = 32
>>>>>> n_slots = 32
>>>>>> rte_pktmbuf_alloc_bulk(mq, buf[head & mask], n_slots);
>>>>>> head & mask = 0
>>>>>> // So it fills the first 32 elements of the buffer, which is in bounds
>>>>>>
>>>>>> This will continue as above; the combination of filling only the gap and
>>>>>> masking the head with 'mask' provides the wrapping required.
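To make the failure mode concrete, here is a tiny standalone model of the
indexing in the walkthrough quoted above. It is only a simplification, not the
actual eth_memif_rx_zc code: ring_size, head, tail and burst are stand-ins for
the driver's state, and the bulk allocation is modelled as a plain write of
n_slots entries starting at head & mask.

#include <stdio.h>

int main(void)
{
    const unsigned int ring_size = 256;   /* ring size, power of two */
    const unsigned int mask = ring_size - 1;
    const unsigned int burst = 15;        /* with 32, nothing is printed */
    unsigned int head = 0, tail = 0;

    for (int iter = 0; iter < 64; iter++) {
        /* refill: a single contiguous bulk allocation starting at
         * head & mask, mirroring the rte_pktmbuf_alloc_bulk() call
         * in the quoted walkthrough */
        unsigned int n_slots = ring_size - (head - tail);
        unsigned int start = head & mask;

        if (start + n_slots > ring_size)
            printf("iter %d: %u entries written at buffers[%u] spill %u "
                   "past the end of a %u-entry array\n",
                   iter, n_slots, start, start + n_slots - ring_size,
                   ring_size);

        head += n_slots;
        tail += burst;   /* the consumer then processes one burst */
    }
    return 0;
}

With a burst of 32 (or any power of two) the start index lands back on 0
exactly at the array boundary, so the single allocation never crosses it;
with 15 it eventually starts at index 255 and spills 14 mbuf pointers past
the end of the array, which is the overflow this patch addresses.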
>>>>>
>>>>> If I understand correctly, this works only if eth_memif_rx_zc always processes
>>>>> a number of packets which is a power of 2, so that the ring's head always wraps
>>>>> around at the end of a refill loop, never in the middle of it.
>>>>> Is there any reason this should be the case?
>>>>> Maybe the tests don't trigger the crash because this condition holds true for them?
>>>>
>>>> Here's how to reproduce the crash on DPDK stable 23.11.1, using testpmd:
>>>>
>>>> Server:
>>>> # ./dpdk-testpmd --vdev=net_memif0,id=1,role=server,bsize=1024,rsize=8 --single-file-segments -l2,3 --file-prefix test1 -- -i
>>>>
>>>> Client:
>>>> # ./dpdk-testpmd --vdev=net_memif0,id=1,role=client,bsize=1024,rsize=8,zero-copy=yes --single-file-segments -l4,5 --file-prefix test2 -- -i
>>>> testpmd> start
>>>>
>>>> Server:
>>>> testpmd> start tx_first
>>>> testpmd> set burst 15
>>>>
>>>> At this point, the client crashes with a segmentation fault.
>>>> Before the burst is set to 15, its default value is 32.
>>>> If the receiver processes packets in bursts of size 2^N, the crash does not occur.
>>>> Setting the burst size to any power of 2 works; anything else crashes.
>>>> After applying this patch, the crashes are completely gone.
>>>
>>> Sorry, this might not crash with a segmentation fault. To confirm the mempool is
>>> corrupted, please compile DPDK with debug=true and the c_args -DRTE_LIBRTE_MEMPOOL_DEBUG.
>>> You should see the client panic when changing the burst size to a value that is not
>>> a power of 2. This also works on the latest main branch.
>>>
>>
>> Hi Mihai,
>>
>> Right, if the buffer size is not a multiple of the burst size, the issue is valid.
>> And since there is a requirement for the buffer size to be a power of two, the
>> burst size should be as well.
>> I assume this issue was not caught before because the default burst size is 32.
>>
>> Can you please share the performance impact of the change for the two
>> possible solutions we discussed above?
>>
>> The other option is to add this as a limitation of memif zero copy, but
>> that won't be good for usability.
>>
>> We can decide based on the performance numbers.
>>
>>
>
> Hi Jakub,
>
> Do you have any comment on this?
>
> I think we should either document this as a limitation of the driver, or
> fix it, and if so we need to decide on the fix.
>
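For reference, the 2xpktmbuf_alloc variant measured above boils down to
splitting the refill at the array boundary into two contiguous bulk
allocations. The sketch below is only an illustration of that idea, not the
actual proposed diff: buffers, ring_size and head are stand-ins for the
per-queue state, and error handling is reduced to freeing the first batch.
The rte_memcpy rows in the tables correspond to the approach taken by this
patch itself.

#include <rte_common.h>
#include <rte_mbuf.h>
#include <rte_mempool.h>

/* Sketch: refill n_slots descriptors starting at (head & mask), splitting
 * the bulk allocation in two when it would cross the end of buffers[]. */
static int
refill_split(struct rte_mempool *mp, struct rte_mbuf **buffers,
             uint32_t ring_size, uint32_t head, uint32_t n_slots)
{
        uint32_t start = head & (ring_size - 1);
        uint32_t first = RTE_MIN(n_slots, ring_size - start);
        int ret;

        ret = rte_pktmbuf_alloc_bulk(mp, &buffers[start], first);
        if (ret < 0)
                return ret;

        if (n_slots > first) {
                /* wrapped: fill the remainder from index 0; on failure,
                 * release the first batch so the caller can treat the
                 * whole refill as not done */
                ret = rte_pktmbuf_alloc_bulk(mp, &buffers[0], n_slots - first);
                if (ret < 0)
                        rte_pktmbuf_free_bulk(&buffers[start], first);
        }
        return ret;
}

Both candidates end up with the same freshly allocated mbuf pointers in
buffers[]; the difference is only the cost of handling the wrap, which is
what the tables above try to capture.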