From: "Dey, Souvik" <sodey@rbbn.com>
To: Ferruh Yigit <ferruh.yigit@intel.com>,
Yangchao Zhou <zhouyates@gmail.com>,
"dev@dpdk.org" <dev@dpdk.org>
Subject: Re: [dpdk-dev] [PATCH v2] kni: fix possible kernel crash with va2pa
Date: Tue, 18 Jun 2019 04:06:55 +0000 [thread overview]
Message-ID: <BN7PR03MB38921E84199B20E7A47C5155CDEA0@BN7PR03MB3892.namprd03.prod.outlook.com> (raw)
In-Reply-To: <b36dbd18-ec57-1597-f8c9-9ded17f2647e@intel.com>
Hi Yigit,
I was hitting a kernel crash at skb_put() (skb_over_panic) with DPDK 18.11 on a 4.19.28 kernel. While investigating I found this patch, and it does resolve the issue. But then I saw your comment on the patch, and that makes me hesitant to take it. Are any improvements/fixes planned for this issue, and in which version? Is there any performance impact from the patch below? As this issue is blocking our release, any input as soon as possible would be greatly appreciated.
--
Regards,
Souvik
From: dev <dev-bounces@dpdk.org> On Behalf Of Ferruh Yigit
Sent: Friday, March 22, 2019 4:49 PM
To: Yangchao Zhou <zhouyates@gmail.com>; dev@dpdk.org
Subject: Re: [dpdk-dev] [PATCH v2] kni: fix possible kernel crash with va2pa
On 3/12/2019 9:22 AM, Yangchao Zhou wrote:
> va2pa depends on the physical-to-virtual address offset of the current
> mbuf. It may therefore compute the wrong physical address for the next
> mbuf when that mbuf was allocated in another hugepage segment.
>
> In rte_mempool_populate_default(), the attempt to allocate the whole
> block of contiguous memory can fail. In that case memory is reserved in
> several memzones that have different physical-to-virtual address
> offsets. rte_mempool_populate_default() is used by
> rte_pktmbuf_pool_create().
>
> Signed-off-by: Yangchao Zhou <zhouyates@gmail.com<mailto:zhouyates@gmail.com>>
> ---
> v2: Add an explanation that causes this problem.
> Use m->next to store physical address.
<...>
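For context, the conversion helper being replaced in kernel/linux/kni/kni_net.c looked roughly like the sketch below (paraphrased, field names as in rte_kni_common.h of that era; treat it as illustrative rather than an exact copy):

/*
 * Pre-patch helper (paraphrased): translate a virtual address using the
 * VA-to-PA offset of mbuf 'm'.
 */
static void *
va2pa(void *va, struct rte_kni_mbuf *m)
{
	/*
	 * Only valid when 'va' lives in the same physically contiguous
	 * segment as 'm'. If m->next points into a different hugepage
	 * segment, that segment can have a different offset, so the
	 * computed physical address is bogus and the kernel side later
	 * dereferences garbage.
	 */
	return (void *)((unsigned long)va -
			((unsigned long)m->buf_addr -
			 (unsigned long)m->buf_physaddr));
}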
> @@ -481,7 +486,7 @@ kni_net_rx_lo_fifo_skb(struct kni_dev *kni)
> uint32_t ret;
> uint32_t len;
> uint32_t i, num_rq, num_fq, num;
> - struct rte_kni_mbuf *kva;
> + struct rte_kni_mbuf *kva, *_kva;
> void *data_kva;
> struct sk_buff *skb;
> struct net_device *dev = kni->net_dev;
> @@ -545,8 +550,11 @@ kni_net_rx_lo_fifo_skb(struct kni_dev *kni)
> if (!kva->next)
> break;
>
> - kva = pa2kva(va2pa(kva->next, kva));
> + _kva = kva;
> + kva = pa2kva(kva->next);
> data_kva = kva2data_kva(kva);
> + /* Convert physical address to virtual address */
> + _kva->next = pa2va(_kva->next, kva);
> }
> }
>
'kni_net_rx_lo_fifo()' also needs to be updated, at a minimum to restore
the 'next' fields, because it fills 'kni->free_q'; without the conversion
back to virtual addresses, userspace will crash.
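A minimal sketch of what that would mean for the segment-chain walk in
'kni_net_rx_lo_fifo()' (hypothetical, simply mirroring the hunk above; the
real function body differs, but the same store-then-restore pattern would
apply):

	while (kva->next) {
		_kva = kva;
		/* kva->next holds a physical address at this point */
		kva = pa2kva(kva->next);
		/* data copy into the alloc mbuf omitted from this sketch */
		data_kva = kva2data_kva(kva);
		/* convert back to a virtual address before this mbuf is
		 * returned to userspace via kni->free_q, so the free path
		 * does not follow a physical pointer */
		_kva->next = pa2va(_kva->next, kva);
	}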
<...>
> @@ -550,7 +563,7 @@ rte_kni_tx_burst(struct rte_kni *kni, struct rte_mbuf **mbufs, unsigned num)
> unsigned int i;
>
> for (i = 0; i < num; i++)
> - phy_mbufs[i] = va2pa(mbufs[i]);
> + phy_mbufs[i] = va2pa_all(mbufs[i]);
>
> ret = kni_fifo_put(kni->rx_q, phy_mbufs, num);
There is a problem here.
When the fifo 'kni->rx_q' is full, 'rte_kni_tx_burst' will send fewer mbufs
than requested, and the application then has to handle the remaining
packets, most probably by freeing them. But now some of those packets have
a physical address in their 'next' field, which will cause the application
to crash.
I don't really know how to solve this.
Perhaps getting the free count from 'kni->rx_q' and converting only that
many mbufs (va2pa_all) to physical addresses could work, but I can't
estimate the performance impact of that.
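A rough sketch of that suggestion (hypothetical; 'kni_fifo_free_count()' is
the existing fifo helper in librte_kni, 'va2pa_all' comes from this patch,
and the performance impact is exactly the open question):

unsigned int
rte_kni_tx_burst(struct rte_kni *kni, struct rte_mbuf **mbufs,
		unsigned int num)
{
	unsigned int ret, i;

	/* Cap 'num' to the free slots in rx_q so that every mbuf whose
	 * 'next' gets converted to a physical address is actually enqueued;
	 * mbufs beyond 'num' are returned to the caller untouched and can
	 * be freed safely. Userspace is the only producer of rx_q, so the
	 * free count cannot shrink between the check and the put. */
	num = RTE_MIN(kni_fifo_free_count(kni->rx_q), num);

	void *phy_mbufs[num];

	for (i = 0; i < num; i++)
		phy_mbufs[i] = va2pa_all(mbufs[i]);

	ret = kni_fifo_put(kni->rx_q, phy_mbufs, num);

	/* ... rest of the existing function unchanged ... */
	return ret;
}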
Thread overview: 19+ messages
2019-02-28 7:30 [dpdk-dev] [PATCH] " Yangchao Zhou
2019-03-06 17:31 ` Ferruh Yigit
2019-06-14 18:41 ` Dey, Souvik
2019-03-12 9:22 ` [dpdk-dev] [PATCH v2] " Yangchao Zhou
2019-03-19 18:35 ` Ferruh Yigit
2019-03-19 18:35 ` Ferruh Yigit
2019-03-22 20:49 ` Ferruh Yigit
2019-03-22 20:49 ` Ferruh Yigit
2019-06-18 4:06 ` Dey, Souvik [this message]
2019-06-18 15:48 ` Stephen Hemminger
2019-06-25 15:04 ` [dpdk-dev] [PATCH v3] " Yangchao Zhou
2019-07-02 20:07 ` [dpdk-dev] [v3] " Junxiao Shi
2019-07-10 20:11 ` Ferruh Yigit
2019-07-10 20:40 ` yoursunny
2019-07-10 21:23 ` Ferruh Yigit
2019-07-10 23:52 ` yoursunny
2019-07-10 20:09 ` [dpdk-dev] [PATCH v3] " Ferruh Yigit
2019-07-11 7:46 ` Ferruh Yigit
2019-07-15 20:50 ` Thomas Monjalon