From: David Marchand
Date: Wed, 10 Jul 2024 17:06:30 +0200
Subject: Re: [PATCH v3 1/3] net/virtio_user: convert cq descriptor IOVA address to Virtual address
To: Srujana Challa, Jerin Jacob Kollanukkaran
Cc: dev@dpdk.org, maxime.coquelin@redhat.com, chenbox@nvidia.com, ndabilpuram@marvell.com, vattunuru@marvell.com, Thomas Monjalon, Ferruh Yigit
In-Reply-To: <20240703100353.2243038-2-schalla@marvell.com>
References: <20240229132919.2186118-2-schalla@marvell.com> <20240703100353.2243038-1-schalla@marvell.com> <20240703100353.2243038-2-schalla@marvell.com>
List-Id: DPDK patches and discussions

Hello Srujana, Jerin,

On Wed, Jul 3, 2024 at 12:04 PM Srujana Challa wrote:
>
> This patch modifies the code to convert descriptor buffer IOVA
> addresses to virtual addresses during the processing of shadow
> control queue when IOVA mode is PA. This change enables Virtio-user
> to operate with IOVA as the descriptor buffer address.
>
> Signed-off-by: Srujana Challa

This patch triggers a segfault in testpmd when using virtio-user
(server) + vhost-user (client). It was caught in OVS unit tests running
in GHA virtual machines; in such an environment, with no IOMMU, the
IOVA mode is forced to PA.

It can be reproduced with two testpmd instances in an environment
without an IOMMU (such as a VM), or alternatively by forcing
--iova-mode=pa on the command line:

$ rm -f vhost-user; gdb ./build/app/dpdk-testpmd -ex 'run -l 0-2 --in-memory --socket-mem=512 --single-file-segments --no-pci --file-prefix virtio --vdev=net_virtio_user,path=vhost-user,queues=2,server=1 -- -i'
...
EAL: Selected IOVA mode 'PA'
...
vhost_user_start_server(): (vhost-user) waiting for client connection...

$ ./build/app/dpdk-testpmd -l 0,3-4 --in-memory --socket-mem=512 --single-file-segments --no-pci --file-prefix vhost-user --vdev net_vhost,iface=vhost-user,client=1 -- -i
...
EAL: Selected IOVA mode 'PA'
...
VHOST_CONFIG: (vhost-user) virtio is now ready for processing.

On the virtio-user side:

Thread 1 "dpdk-testpmd" received signal SIGSEGV, Segmentation fault.
0x0000000002f956ab in virtio_user_handle_ctrl_msg_split (dev=0x11a01a8d00,
    vring=0x11a01a8aa0, idx_hdr=0)
    at ../drivers/net/virtio/virtio_user/virtio_user_dev.c:942
942             if (hdr->class == VIRTIO_NET_CTRL_MQ &&
(gdb) bt
#0  0x0000000002f956ab in virtio_user_handle_ctrl_msg_split (dev=0x11a01a8d00, vring=0x11a01a8aa0, idx_hdr=0)
    at ../drivers/net/virtio/virtio_user/virtio_user_dev.c:942
#1  0x0000000002f95d06 in virtio_user_handle_cq_split (dev=0x11a01a8d00, queue_idx=4)
    at ../drivers/net/virtio/virtio_user/virtio_user_dev.c:1087
#2  0x0000000002f95dba in virtio_user_handle_cq (dev=0x11a01a8d00, queue_idx=4)
    at ../drivers/net/virtio/virtio_user/virtio_user_dev.c:1104
#3  0x0000000002f79af7 in virtio_user_notify_queue (hw=0x11a01a8d00, vq=0x11a0181e00)
    at ../drivers/net/virtio/virtio_user_ethdev.c:278
#4  0x0000000002f45408 in virtqueue_notify (vq=0x11a0181e00)
    at ../drivers/net/virtio/virtqueue.h:525
#5  0x0000000002f45bf0 in virtio_control_queue_notify (vq=0x11a0181e00, cookie=0x0)
    at ../drivers/net/virtio/virtio_ethdev.c:227
#6  0x0000000002f404a5 in virtio_send_command_split (cvq=0x11a0181e60, ctrl=0x7fffffffc850, dlen=0x7fffffffc84c, pkt_num=1)
    at ../drivers/net/virtio/virtio_cvq.c:158
#7  0x0000000002f407a7 in virtio_send_command (cvq=0x11a0181e60, ctrl=0x7fffffffc850, dlen=0x7fffffffc84c, pkt_num=1)
    at ../drivers/net/virtio/virtio_cvq.c:224
#8  0x0000000002f45af7 in virtio_set_multiple_queues_auto (dev=0x4624b80, nb_queues=1)
    at ../drivers/net/virtio/virtio_ethdev.c:192
#9  0x0000000002f45b99 in virtio_set_multiple_queues (dev=0x4624b80, nb_queues=1)
    at ../drivers/net/virtio/virtio_ethdev.c:210
#10 0x0000000002f4ad2d in virtio_dev_start (dev=0x4624b80)
    at ../drivers/net/virtio/virtio_ethdev.c:2385
#11 0x0000000000aa4336 in rte_eth_dev_start (port_id=0)
    at ../lib/ethdev/rte_ethdev.c:1752
#12 0x00000000005984f7 in eth_dev_start_mp (port_id=0)
    at ../app/test-pmd/testpmd.c:642
#13 0x000000000059ddb7 in start_port (pid=65535)
    at ../app/test-pmd/testpmd.c:3269
#14 0x00000000005a0eea in main (argc=2, argv=0x7fffffffdfe0)
    at ../app/test-pmd/testpmd.c:4644
(gdb) l
937             /* locate desc for status */
938             idx_status = i;
939             n_descs++;
940
941             hdr = virtio_user_iova2virt(vring->desc[idx_hdr].addr);
942             if (hdr->class == VIRTIO_NET_CTRL_MQ &&
943                 hdr->cmd == VIRTIO_NET_CTRL_MQ_VQ_PAIRS_SET) {
944                     uint16_t queues, *addr;
945
946                     addr = virtio_user_iova2virt(vring->desc[idx_data].addr);
(gdb) p hdr
$1 = (struct virtio_net_ctrl_hdr *) 0x0

So virtio_user_iova2virt() returned NULL for the header descriptor
address, and the unchecked dereference at virtio_user_dev.c:942 faults.

We need someone from Marvell to fix this issue.
The next option is to revert the whole series (reverting was discussed
and agreed before Maxime went off, in the event this series would
trigger any issue).

-- 
David Marchand