From: "Zhou, YidingX" <yidingx.zhou@intel.com>
To: Ferruh Yigit <ferruh.yigit@amd.com>
Cc: "dev@dpdk.org" <dev@dpdk.org>,
"Burakov, Anatoly" <anatoly.burakov@intel.com>,
"He, Xingguang" <xingguang.he@intel.com>,
"stable@dpdk.org" <stable@dpdk.org>,
Stephen Hemminger <stephen@networkplumber.org>
Subject: RE: [PATCH v2] net/pcap: fix timeout of stopping device
Date: Mon, 5 Dec 2022 01:58:49 +0000 [thread overview]
Message-ID: <DM5PR1101MB21077968B2257D58DF4D0CFD85189@DM5PR1101MB2107.namprd11.prod.outlook.com> (raw)
In-Reply-To: <6b94e8ff-caa6-53e1-7810-1daf35cf9a7d@amd.com>
> >>>>> On Tue, 6 Sep 2022 16:05:11 +0800 Yiding Zhou
> >>>>> <yidingx.zhou@intel.com> wrote:
> >>>>>
> >>>>>> The pcap file is synchronized to the disk when the device is stopped.
> >>>>>> This takes a long time if the file is large, and it causes the
> >>>>>> 'detach sync request' to time out when the device is closed in a
> >>>>>> multi-process scenario.
> >>>>>>
> >>>>>> This commit fixes the issue by using an alarm handler to release the dumper.
> >>>>>>
> >>>>>> Fixes: 0ecfb6c04d54 ("net/pcap: move handler to process private")
> >>>>>> Cc: stable@dpdk.org
> >>>>>>
> >>>>>> Signed-off-by: Yiding Zhou <yidingx.zhou@intel.com>
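
For illustration only, a minimal sketch of the alarm-based idea described in the
commit message above, assuming DPDK's rte_eal_alarm_set() and libpcap's
pcap_dump_close(); the surrounding function names are hypothetical and this is
not the actual patch:

#include <pcap/pcap.h>
#include <rte_alarm.h>

/* Alarm callback: flush and close the dumper outside the device
 * stop/close path, so a large capture file being synced to disk does
 * not block the multi-process detach handshake. */
static void
pcap_dumper_release(void *arg)
{
        pcap_dump_close((pcap_dumper_t *)arg);
}

/* Hypothetical stop-path hook: hand the dumper over to the alarm
 * thread instead of closing it synchronously. */
static int
defer_dumper_close(pcap_dumper_t *dumper)
{
        /* 1 us delay: run as soon as the alarm thread picks it up. */
        return rte_eal_alarm_set(1, pcap_dumper_release, dumper);
}
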
> >>>>>
> >>>>>
> >>>>> I think you need to redesign the handshake if this is the case.
> >>>>> Forcing a 30 second delay at the end of all uses of pcap is not acceptable.
> >>>>
> >>>> @Zhang, Qi Z Do we need to redesign the handshake to fix this?
> >>>
> >>> Hi, Ferruh
> >>> Sorry for the late reply.
> >>> I did not receive your email on Oct 6; I got your comments from patchwork.
> >>>
> >>> "Can you please provide more details on multi-process communication
> >>> and call trace, to help us think about a solution to address this
> >>> issue in a more generic way (not just for pcap but for any case
> >>> device close takes more than multi-process timeout)?"
> >>>
> >>> I will try to explain this issue with a sequence diagram; I hope it is
> >>> displayed correctly in the mail.
> >>>
> >>> thread of       intr thread of        intr thread of            thread of
> >>> secondary          secondary              primary                primary
> >>>     |                  |                     |                      |
> >>>     |                  |                     |                      |
> >>> rte_eal_hotplug_remove
> >>> rte_dev_remove
> >>> eal_dev_hotplug_request_to_primary
> >>> rte_mp_request_sync ----------------------------------------------->|
> >>>                                                 handle_secondary_request
> >>>                                              |<---------------------|
> >>>                                 __handle_secondary_request
> >>>                            eal_dev_hotplug_request_to_secondary
> >>>                        |<-------------------- rte_mp_request_sync
> >>>             handle_primary_request
> >>>            __handle_primary_request
> >>>            local_dev_remove  (this will take a long time)
> >>>            rte_mp_reply -------------------->|
> >>>                                      local_dev_remove
> >>>     |<-------------------------------------- rte_mp_reply
> >>>
> >>> The 'local_dev_remove()' call in the secondary process (shown in the
> >>> diagram above) performs a pcap file synchronization operation.
> >>> When the pcap file is very large, this takes a lot of time (in my test,
> >>> a 100G file takes 20+ seconds).
> >>> This causes the processing of the hotplug message to time out.
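
For context, a minimal sketch (not DPDK source) of how such a synchronous
multi-process request is issued with the public rte_mp_request_sync() API; the
message name is made up, and the 5 second value matches the hardcoded timeout
mentioned further down in this thread:

#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <time.h>
#include <rte_eal.h>

/* Issue a synchronous MP request with a fixed timeout, the way the
 * hotplug handshake does.  If the peer's handler (here, the secondary's
 * local_dev_remove()) needs longer than 'ts', the request times out
 * even though the peer eventually finishes the work. */
static int
send_hotplug_like_request(void)
{
        struct rte_mp_msg req;
        struct rte_mp_reply reply;
        struct timespec ts = { .tv_sec = 5, .tv_nsec = 0 };
        int ret;

        memset(&req, 0, sizeof(req));
        memset(&reply, 0, sizeof(reply));
        snprintf(req.name, sizeof(req.name), "example_hotplug"); /* hypothetical */

        ret = rte_mp_request_sync(&req, &reply, &ts);
        free(reply.msgs);       /* allocated by EAL when replies arrive */
        return ret;
}
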
> >>
> >> Hi Yiding,
> >>
> >> Thanks for the information,
> >>
> >> Right now the timeout for all MP operations is hardcoded in the code as
> >> 5 seconds.
> >> Do you think it would work to have an API to set a custom timeout,
> >> something like `rte_mp_timeout_set()`, and to call it from pdump?
> >>
> >> This would give a generic solution for similar cases, not just for pcap.
> >> But my concern is whether this is too much of a multi-process internal
> >> detail to expose for updating; @Anatoly may comment on this.
> >
> > Hi Ferruh,
> > For the pdump case specifically, the timeout needed depends on the pcap
> > file size and on other system factors, such as the filesystem type and the
> > amount of system memory.
> > It may be difficult to predict a specific value to set.
>
> It doesn't have to be a specific value.
>
> The point here is to have a multi-process API to set the timeout, instead of
> putting a hardcoded timeout in the pcap PMD.
OK, I understood.
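
To make the suggestion concrete, a hypothetical sketch; rte_mp_timeout_set()
does not exist in DPDK today, and the prototype below is only the kind of API
being discussed, not a real symbol:

/* Hypothetical prototype, not an existing DPDK function. */
int rte_mp_timeout_set(unsigned int seconds);

/* A caller such as pdump or the pcap PMD could then raise the
 * multi-process timeout before an operation known to be slow, e.g.
 * closing a pcap device whose capture file still has to be flushed. */
static int
pcap_before_detach(unsigned int expected_sync_seconds)
{
        return rte_mp_timeout_set(expected_sync_seconds);
}
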
Thread overview: 19+ messages
2022-08-25 7:20 [PATCH] net/pcap: reduce time for " Yiding Zhou
2022-08-25 10:09 ` Ferruh Yigit
2022-08-25 11:17 ` Zhou, YidingX
2022-08-25 12:21 ` Ferruh Yigit
2022-08-29 11:50 ` Zhou, YidingX
2022-08-31 16:42 ` Stephen Hemminger
2022-09-01 7:40 ` Zhou, YidingX
2022-09-06 8:05 ` [PATCH v2] net/pcap: fix timeout of " Yiding Zhou
2022-09-06 14:57 ` Stephen Hemminger
2022-09-06 16:21 ` Zhou, YidingX
2022-09-21 7:14 ` Zhou, YidingX
2022-10-03 15:00 ` Ferruh Yigit
2022-11-22 9:25 ` Zhou, YidingX
2022-11-22 17:28 ` Stephen Hemminger
2022-12-02 10:22 ` Zhou, YidingX
2022-11-29 14:11 ` Ferruh Yigit
2022-12-02 10:13 ` Zhou, YidingX
2022-12-02 11:19 ` Ferruh Yigit
2022-12-05 1:58 ` Zhou, YidingX [this message]