DPDK usage discussions
From: JeongHwan Kim <kjh.kernel.kr@gmail.com>
To: Stephen Hemminger <stephen@networkplumber.org>
Cc: users@dpdk.org
Subject: Re: [dpdk-users] Attaching global memory pointer to mbuf and sending it to ethernet port.
Date: Fri, 3 Jan 2020 13:52:19 +0900	[thread overview]
Message-ID: <CAE-TMpDcqEuOB6dmNoFuCn0N08-GN06FzjO2RQ+1y=M=oCED7Q@mail.gmail.com> (raw)
In-Reply-To: <CAE-TMpA+BedGbdefKg3asLFJij0u427wJaqsWG8vTm-ptWcCBg@mail.gmail.com>

On Fri, Jan 3, 2020 at 11:26 AM, JeongHwan Kim <kjh.kernel.kr@gmail.com> wrote:

>
> On Fri, Jan 3, 2020 at 11:01 AM, JeongHwan Kim <kjh.kernel.kr@gmail.com> wrote:
>
>> No, I'm using only a single process, which contains several threads.
>>
>> On Fri, Jan 3, 2020 at 10:20 AM, Stephen Hemminger
>> <stephen@networkplumber.org> wrote:
>>
>>> On Fri, 3 Jan 2020 09:11:46 +0900
>>> JeongHwan Kim <kjh.kernel.kr@gmail.com> wrote:
>>>
>>> > Hi, everyone
>>> >
>>> > I'd like to send the contents of a global memory region through an
>>> > Ethernet port without copying the memory into an mbuf.
>>> > So I attached a pointer to the global memory as the mbuf's buffer
>>> > address, but a segfault occurs when I send it to the Ethernet port.
>>> > I'm using DPDK 19.11 and an EAL configuration with the
>>> > "--iova-mode=pa" option.
>>> > My HW platform is the NXP LS2088A.
>>> >
>>> > My routine is like this:
>>> >
>>> > uint16_t buf_len = 100;
>>> > struct rte_mbuf *m = rte_pktmbuf_alloc(mbuf_pool);
>>> > struct rte_mbuf_ext_shared_info *shinfo =
>>> >     rte_pktmbuf_ext_shinfo_init_helper(buf_addr, &buf_len,
>>> >                                        free_cb, fcb_arg);
>>> > rte_pktmbuf_attach_extbuf(m, buf_addr, buf_iova, buf_len, shinfo);
>>> > rte_pktmbuf_reset_headroom(m);
>>> > ...
>>> > and when I sent "m" to the Ethernet port, the result was:
>>> >
>>> > Thread 4 "lcore-slave-1" received signal SIGSEGV, Segmentation fault.
>>> > [Switching to Thread 0xffffbdb0c910 (LWP 10614)]
>>> > 0x0000aaaaaab73f58 in eth_mbuf_to_sg_fd ()
>>> >
>>> > The buffer was created as follows:
>>> >
>>> > user_mz = rte_memzone_reserve_aligned("user_mz", size, rte_socket_id(),
>>> >                                       RTE_MEMZONE_1GB | RTE_MEMZONE_IOVA_CONTIG,
>>> >                                       RTE_CACHE_LINE_SIZE);
>>> > buf_addr = user_mz->addr;
>>> > buf_iova = user_mz->iova;
>>> >
>>> > Please give me a hint about this problem.
>>> > Thanks in advance.
>>> >
>>> > Jeong-Hwa Kim.
>>>
>>> Are you using primary/secondary process model?
>>> Is the memory zone being created in the primary process?
>>>
>>
> I'm not using the primary/secondary process model.
> Only the primary process exists, and it consists of several threads.
>

I found a weird situation.
The first packet I sent was OK; it carried the address of the first part of
the memory zone.
But the next packet, whose buffer address is offset from the first by the
buffer length, failed.
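
For reference, here is a minimal sketch (not my exact code) of carving the
memzone into per-packet chunks, each with its own shared-info area; the
names CHUNK_SZ, ext_free_cb, and mbuf_for_chunk are illustrative. Note that
rte_pktmbuf_ext_shinfo_init_helper() stores the shinfo at the tail of the
region it is given and shrinks *buf_len to make room for it, so the next
chunk has to start at a multiple of the full chunk size, not at the
shrunken length:

#include <rte_common.h>
#include <rte_mbuf.h>
#include <rte_memzone.h>

#define CHUNK_SZ 128  /* bytes reserved per packet inside the memzone */

/* The memzone is never freed per packet, so the callback is a no-op. */
static void ext_free_cb(void *addr, void *opaque)
{
        RTE_SET_USED(addr);
        RTE_SET_USED(opaque);
}

static struct rte_mbuf *
mbuf_for_chunk(struct rte_mempool *mp, const struct rte_memzone *mz,
               unsigned int idx)
{
        void *buf_addr = RTE_PTR_ADD(mz->addr, (size_t)idx * CHUNK_SZ);
        rte_iova_t buf_iova = mz->iova + (size_t)idx * CHUNK_SZ;
        uint16_t buf_len = CHUNK_SZ;  /* the helper expects a uint16_t * */
        struct rte_mbuf_ext_shared_info *shinfo;
        struct rte_mbuf *m = rte_pktmbuf_alloc(mp);

        if (m == NULL)
                return NULL;

        /* One shinfo per chunk; it is placed at the tail of the chunk
         * and buf_len is reduced to the space left in front of it. */
        shinfo = rte_pktmbuf_ext_shinfo_init_helper(buf_addr, &buf_len,
                                                    ext_free_cb, NULL);
        if (shinfo == NULL) {  /* chunk too small to hold the shinfo */
                rte_pktmbuf_free(m);
                return NULL;
        }

        rte_pktmbuf_attach_extbuf(m, buf_addr, buf_iova, buf_len, shinfo);

        /* The payload already lives in the memzone, so point straight
         * at it instead of reserving headroom. */
        m->data_off = 0;
        m->data_len = buf_len;
        m->pkt_len = buf_len;
        return m;
}

Whether this alone removes the crash in eth_mbuf_to_sg_fd() is another
matter (the dpaa2 driver may have extra requirements for externally
attached buffers), but it keeps the chunk offsets and shinfo placement
consistent.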

Thread overview: 5+ messages
2020-01-03  0:11 JeongHwan Kim
2020-01-03  1:20 ` Stephen Hemminger
2020-01-03  2:01   ` JeongHwan Kim
2020-01-03  2:26     ` JeongHwan Kim
2020-01-03  4:52       ` JeongHwan Kim [this message]
