From: Ferruh Yigit <ferruh.yigit@intel.com>
To: "Július Milan" <jmilan.dev@gmail.com>
Cc: dev@dpdk.org, jgrajcia@cisco.com
Subject: Re: [dpdk-dev] [PATCH] net/memif: enable loopback
Date: Wed, 5 Feb 2020 16:53:14 +0000 [thread overview]
Message-ID: <68844faf-9d42-ea58-3404-d7e0fd454b93@intel.com> (raw)
In-Reply-To: <20200205164405.GA26554@vbox>
On 2/5/2020 4:44 PM, Július Milan wrote:
> On Wed, Feb 05, 2020 at 04:00:19PM +0000, Ferruh Yigit wrote:
>> On 2/5/2020 3:41 PM, Július Milan wrote:
>>> With this patch it is possible to connect two DPDK memifs into a
>>> loopback, i.e. when they have the same id and different roles, for
>>> example:
>>> "--vdev=net_memif0,role=master,id=0"
>>> "--vdev=net_memif1,role=slave,id=0"
>>
>> Overall this looks like a good idea, but it causes a crash in testpmd on
>> exit. Can you please check?
>>
> Thank you,
> Do you mean this message?
> "EAL: Error: Invalid memory"
No.
> If not, how can I reproduce it?
Start testpmd [1], quit it [2], and you will get a crash [3]. Full log in [4].
[1]
./build/app/testpmd --no-pci --vdev=net_memif0,role=master,id=0
--vdev=net_memif1,role=slave,id=0 --vdev=net_memif1,role=slave,id=1 --log-level
"pmd*:debug" -- --no-mlockall -i
[2]
testpmd> quit
[3]
Segmentation fault (core dumped)
[4]
$ ./build/app/testpmd --no-pci --vdev=net_memif0,role=master,id=0
--vdev=net_memif1,role=slave,id=0 --vdev=net_memif1,role=slave,id=1 --log-level
"pmd*:debug" -- --no-mlockall -i
EAL: Detected 96 lcore(s)
EAL: Detected 2 NUMA nodes
EAL: Multi-process socket /var/run/dpdk/rte/mp_socket
EAL: Selected IOVA mode 'PA'
EAL: Probing VFIO support...
EAL: VFIO support initialized
rte_pmd_memif_probe(): Initialize MEMIF: net_memif0.
memif_socket_create(): Memif listener socket /run/memif.sock created.
rte_pmd_memif_probe(): Initialize MEMIF: net_memif1.
Interactive-mode selected
Warning: NUMA should be configured manually by using --port-numa-config and
--ring-numa-config parameters along with --numa.
testpmd: create a new mbuf pool <mbuf_pool_socket_0>: n=779456, size=2176, socket=0
testpmd: preferred mempool ops selected: ring_mp_mc
testpmd: create a new mbuf pool <mbuf_pool_socket_1>: n=779456, size=2176, socket=1
testpmd: preferred mempool ops selected: ring_mp_mc
Configuring Port 0 (socket 0)
Port 0: 3A:45:AA:E6:9A:93
Configuring Port 1 (socket 0)
memif_listener_handler(): /run/memif.sock: Connection request accepted.
memif_msg_send_from_queue(): Sent msg type 2.
memif_connect_slave(): Memif socket: /run/memif.sock connected.
memif_msg_receive(): Received msg type: 2.
memif_msg_receive_hello(): Connecting to DPDK 20.02.0-rc1.
memif_msg_send_from_queue(): Sent msg type 3.
memif_msg_receive(): Received msg type: 3.
memif_msg_send_from_queue(): Sent msg type 1.
memif_msg_receive(): Received msg type: 1.
memif_msg_send_from_queue(): Sent msg type 4.
memif_msg_receive(): Received msg type: 4.
memif_msg_send_from_queue(): Sent msg type 1.
memif_msg_receive(): Received msg type: 1.
memif_msg_send_from_queue(): Sent msg type 5.
memif_msg_receive(): Received msg type: 5.
memif_msg_send_from_queue(): Sent msg type 1.
memif_msg_receive(): Received msg type: 1.
memif_msg_send_from_queue(): Sent msg type 5.
memif_msg_receive(): Received msg type: 5.
memif_msg_send_from_queue(): Sent msg type 1.
memif_msg_receive(): Received msg type: 1.
memif_msg_send_from_queue(): Sent msg type 6.
memif_msg_receive(): Received msg type: 6.
memif_connect(): Connected.
memif_msg_receive_connect(): Remote interface net_memif1 connected.
memif_msg_send_from_queue(): Sent msg type 7.
memif_msg_receive(): Received msg type: 7.
memif_connect(): Connected.
memif_msg_receive_connected(): Remote interface net_memif0 connected.
Port 1: BA:50:9A:75:5E:9F
Checking link statuses...
Done
Error during enabling promiscuous mode for port 0: Operation not supported - ignore
Error during enabling promiscuous mode for port 1: Operation not supported - ignore
testpmd> quit
Stopping port 0...
Stopping ports...
Done
Stopping port 1...
Stopping ports...
Done
Shutting down port 0...
Closing ports...
memif_msg_receive(): Received msg type: 8.
memif_msg_receive_disconnect(): Disconnect received: Device closed
memif_disconnect(): Disconnected.
memif_msg_receive(): Invalid message size.
memif_msg_send_from_queue(): sendmsg fail: Broken pipe.
memif_disconnect(): Unexpected message(s) in message queue.
memif_disconnect(): Disconnected.
memif_msg_send_from_queue(): Sent msg type 0.
Segmentation fault (core dumped)
>
>>>
>>> Fixes: 09c7e63a71 ("net/memif: introduce memory interface PMD")
>>>
>>> Signed-off-by: Július Milan <jmilan.dev@gmail.com>
>>> ---
>>> drivers/net/memif/memif_socket.c | 17 ++++-------------
>>> 1 file changed, 4 insertions(+), 13 deletions(-)
>>>
>>> diff --git a/drivers/net/memif/memif_socket.c b/drivers/net/memif/memif_socket.c
>>> index ad5e30b96..552d3bec1 100644
>>> --- a/drivers/net/memif/memif_socket.c
>>> +++ b/drivers/net/memif/memif_socket.c
>>> @@ -203,7 +203,7 @@ memif_msg_receive_init(struct memif_control_channel *cc, memif_msg_t *msg)
>>>  		dev = elt->dev;
>>>  		pmd = dev->data->dev_private;
>>>  		if (((pmd->flags & ETH_MEMIF_FLAG_DISABLED) == 0) &&
>>> -		    pmd->id == i->id) {
>>> +		    (pmd->id == i->id) && (pmd->role == MEMIF_ROLE_MASTER)) {
>>>  			/* assign control channel to device */
>>>  			cc->dev = dev;
>>>  			pmd->cc = cc;
>>> @@ -965,20 +965,11 @@ memif_socket_init(struct rte_eth_dev *dev, const char *socket_filename)
>>>  	}
>>>  	pmd->socket_filename = socket->filename;
>>>  
>>> -	if (socket->listener != 0 && pmd->role == MEMIF_ROLE_SLAVE) {
>>> -		MIF_LOG(ERR, "Socket is a listener.");
>>> -		return -1;
>>> -	} else if ((socket->listener == 0) && (pmd->role == MEMIF_ROLE_MASTER)) {
>>> -		MIF_LOG(ERR, "Socket is not a listener.");
>>> -		return -1;
>>> -	}
>>> -
>>>  	TAILQ_FOREACH(elt, &socket->dev_queue, next) {
>>>  		tmp_pmd = elt->dev->data->dev_private;
>>> -		if (tmp_pmd->id == pmd->id) {
>>> -			MIF_LOG(ERR, "Memif device with id %d already "
>>> -				"exists on socket %s",
>>> -				pmd->id, socket->filename);
>>> +		if (tmp_pmd->id == pmd->id && tmp_pmd->role == pmd->role) {
>>> +			MIF_LOG(ERR, "Two interfaces with the same id (%d) can "
>>> +				"not have the same role.", pmd->id);
>>>  			return -1;
>>>  		}
>>>  	}
>>>
>>