* 32-bit virtio failing on DPDK v23.11.1 (and tags)
@ 2024-08-28 21:27 Chris Brezovec (cbrezove)
2024-09-03 14:43 ` Chris Brezovec (cbrezove)
0 siblings, 1 reply; 5+ messages in thread
From: Chris Brezovec (cbrezove) @ 2024-08-28 21:27 UTC (permalink / raw)
To: dev, maxime.coquelin; +Cc: common-dpio-core-team(mailer list)
[-- Attachment #1.1: Type: text/plain, Size: 3006 bytes --]
Hi Maxime,
My name is Chris Brezovec; we met at the DPDK Summit last year and talked about some 32-bit virtio issues we were seeing at Cisco. There was also a back-and-forth between you and Dave Johnson at Cisco last September regarding the same issue. I have attached some of the email chain from that conversation, which resulted in this commit being made to DPDK v23.11 (https://github.com/DPDK/dpdk/commit/8c41645be010ec7fa0df4f6c3790b167945154b4).
We recently picked up the v23.11.1 DPDK release and found that 32-bit virtio is broken again, while 64-bit virtio still works. We are seeing CVQ timeouts: the PMD receives no response from the host, which causes the port to fail to start. We were able to reproduce the issue with testpmd. After tracing through the virtio changes made during development of the v23.x releases, we believe the following rework commit introduced the failure (https://github.com/DPDK/dpdk/commit/a632f0f64ffba3553a18bdb51a670c1b603c0ce6).
We have also tested v23.07, v23.11, v23.11.2-rc2, and v24.07; they all show the same issue when running testpmd in 32-bit mode.
We were hoping you might be able to take a quick look at the two commits to see if something obvious was missed in the refactor. I suspect there may be a location or two in the code that should be using the VIRTIO_MBUF_ADDR() macro (or similar) but was overlooked.
Regards,
ChrisB
This is some of the testpmd output seen on v23.11.2-rc2:
LD_LIBRARY_PATH=/home/rmelton/scratch/dpdk-v23.11.2-rc2.git/build/lib /home/rmelton/scratch/dpdk-v23.11.2-rc2.git/build/app/dpdk-testpmd -l 2-3 -a 0000:07:00.0 --log-level pmd.net.iavf.*,8 --log-level lib.eal.*,8 --log-level=lib.eal:info --log-level=lib.eal:debug --log-level=lib.ethdev:info --log-level=lib.ethdev:debug --log-level=lib.virtio:warning --log-level=lib.virtio:info --log-level=lib.virtio:debug --log-level=pmd.*:debug --iova-mode=pa -- -i
— snip —
virtio_send_command(): vq->vq_desc_head_idx = 0, status = 255, vq->hw->cvq = 0x76d9acc0 vq = 0x76d9ac80
virtio_send_command_split(): vq->vq_queue_index = 2
virtio_send_command_split(): vq->vq_free_cnt=64
vq->vq_desc_head_idx=0
virtio_dev_promiscuous_disable(): Failed to disable promisc
Failed to disable promiscuous mode for device (port 0): Resource temporarily unavailable
Error during restoring configuration for device (port 0): Resource temporarily unavailable
virtio_dev_stop(): stop
Fail to start port 0: Resource temporarily unavailable
Done
virtio_send_command(): vq->vq_desc_head_idx = 0, status = 255, vq->hw->cvq = 0x76d9acc0 vq = 0x76d9ac80
virtio_send_command_split(): vq->vq_queue_index = 2
virtio_send_command_split(): vq->vq_free_cnt=64
vq->vq_desc_head_idx=0
virtio_dev_promiscuous_enable(): Failed to enable promisc
Error during enabling promiscuous mode for port 0: Resource temporarily unavailable - ignore
[-- Attachment #1.2: Type: text/html, Size: 7948 bytes --]
[-- Attachment #2: Re- Commit broke 32-bit testpmd app.eml --]
[-- Type: application/octet-stream, Size: 32884 bytes --]
Message-ID: <dd2dd416-f1f9-f779-4f6a-105fd6a7ab6c@redhat.com>
Date: Wed, 20 Sep 2023 15:05:37 +0200
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:102.0) Gecko/20100101
Thunderbird/102.13.0
Subject: Re: Commit broke 32-bit testpmd app
From: Maxime Coquelin <maxime.coquelin@redhat.com>
To: "Roger Melton (rmelton)" <rmelton@cisco.com>,
"Dave Johnson (davejo)" <davejo@cisco.com>,
"dev@dpdk.org" <dev@dpdk.org>,
"Sampath Peechu (speechu)" <speechu@cisco.com>, chenbo.xia@outlook.com
Cc: "Malcolm Bumgardner (mbumgard)" <mbumgard@cisco.com>,
"Chris Brezovec (cbrezove)" <cbrezove@cisco.com>,
David Marchand <david.marchand@redhat.com>
On 9/20/23 09:35, Maxime Coquelin wrote:
> Hi,
>
> I tried to reproduce without success(see attached log).
>
> I fail to reproduce because buf_iova fits into 32 bits in my case:
> (gdb) p /x *tx_pkts[0]
>
> $4 = {
> cacheline0 = 0x77b19ec0,
> buf_addr = 0x77b19f40,
> buf_iova = 0x49519f40,
> rearm_data = 0x77b19ed0,
>
> However, looking at your report, something like this would work for you:
>
> diff --git a/drivers/net/virtio/virtqueue.h
> b/drivers/net/virtio/virtqueue.h
> index 9d4aba11a3..38efbc517a 100644
> --- a/drivers/net/virtio/virtqueue.h
> +++ b/drivers/net/virtio/virtqueue.h
> @@ -124,7 +124,7 @@ virtqueue_store_flags_packed(struct
> vring_packed_desc *dp,
> * (virtio-pci and virtio-user).
> */
> #define VIRTIO_MBUF_ADDR(mb, vq) \
> - ((uint64_t)(*(uintptr_t *)((uintptr_t)(mb) +
> (vq)->mbuf_addr_offset)))
> + (*(uint64_t *)((uintptr_t)(mb) + (vq)->mbuf_addr_offset))
>
>
> The problem is that it would likely break Virtio-user in 32-bit mode, as
> this is how it was initially implemented; it got fixed a few years ago,
> as David hinted to me:
>
> commit 260aae9ad9621e3e758f1443abb8fcbc25ece07c
> Author: Jianfeng Tan <jianfeng.tan@intel.com>
> Date: Wed Apr 19 02:30:33 2017 +0000
>
> net/virtio-user: fix address on 32-bit system
>
> virtio-user cannot work on 32-bit system as higher 32-bit of the
> addr field (64-bit) in the desc is filled with non-zero value
> which should not happen for a 32-bit system.
>
> In case of virtio-user, we use buf_addr of mbuf to fill the
> virtqueue desc addr. This is a regression bug. For 32-bit system,
> the first 4 bytes of mbuf is buf_addr, with following 8 bytes for
> buf_phyaddr. With below wrong definition, both buf_addr and lower
> 4 bytes buf_phyaddr are obtained to fill the virtqueue desc.
> #define VIRTIO_MBUF_ADDR(mb, vq) \
> (*(uint64_t *)((uintptr_t)(mb) + (vq)->offset))
>
> Fixes: 25f80d108780 ("net/virtio: fix packet corruption")
> Cc: stable@dpdk.org
>
> Signed-off-by: Jianfeng Tan <jianfeng.tan@intel.com>
> Acked-by: Yuanhan Liu <yuanhan.liu@linux.intel.com>
>
> If my understanding is correct, on 32 bits, when mbuf->buf_addr is used
> (Virtio-user), we need to mask out the higher 4 bytes, while when using
> Virtio-pci we need the full 64 bits (as the physical addresses used as
> IOVA on the guest are 64 bits).
I posted a fix aiming to make it work for both Virtio-user and Virtio-PCI
32-bit builds while not impacting 64-bit performance. Could you please
have a try and report feedback by replying to the patch?
Regards,
Maxime
> Regards,
> Maxime
>
> On 9/13/23 15:24, Roger Melton (rmelton) wrote:
>> +Chris Brezovec
>>
>> Hi Maxime,
>>
>> Chris from our team is attending the DPDK Summit in Dublin this week.
>> If you have some time available, we'd appreciate it if he could meet
>> with you to discuss the 32bit virtio issue we are seeing.
>>
>> Regards,
>> Roger Melton
>>
>> On 9/6/23 2:57 PM, Dave Johnson (davejo) wrote:
>>>
>>> Hi Maxime,
>>>
>>> This email is regarding the following commit:
>>>
>>> https://github.com/DPDK/dpdk/commit/ba55c94a7ebc386d2288d6578ed57aad6cb92657
>>>
>>> A query had been sent previously on this topic (see below) indicating
>>> this commit appears to have broken the 32-bit testpmd app and
>>> impacted one of our products that runs as a 32-bit DPDK application.
>>> We consequently backed the commit out of our product but would prefer
>>> to get a proper fix. In the earlier exchange, you had asked whether we
>>> were using virtio-pci or virtio-user (we are using virtio-pci) and
>>> asked for logs, which Sampath provided. It’s been a while, so let me
>>> know if you need me to resend those logs or any other information.
>>>
>>> FWIW, I reproduced this using testpmd and noticed that this part of
>>> the change seems to be the interesting piece (in
>>> drivers/net/virtio/virtqueue.h):
>>>
>>> /**
>>>  * Return the IOVA (or virtual address in case of virtio-user) of mbuf
>>>  * data buffer.
>>>  *
>>>  * The address is firstly casted to the word size (sizeof(uintptr_t))
>>>  * before casting it to uint64_t. This is to make it work with different
>>>  * combination of word size (64 bit and 32 bit) and virtio device
>>>  * (virtio-pci and virtio-user).
>>>  */
>>> #define VIRTIO_MBUF_ADDR(mb, vq) \
>>>     ((uint64_t)(*(uintptr_t *)((uintptr_t)(mb) + (vq)->mbuf_addr_offset)))
>>>
>>> If I revert just this part of the changeset (by changing
>>> VIRTIO_MBUF_ADDR to return buf_iova, matching what it had used
>>> previously), then 32-bit testpmd is able to receive traffic again:
>>>
>>> #define VIRTIO_MBUF_ADDR(mb, vq) (mb->buf_iova)
>>>
>>> Looking at the address produced by each of these, I see the address
>>> is the same except that the casting results in the upper bits getting
>>> cleared:
>>>
>>> Address from patch (nonworking case) = 0x58e7c900
>>>
>>> Address using buf_iova (working case) = 0x158e7c900
>>>
>>> ::
>>>
>>> Address from patch (nonworking case) = 0x58e7bfc0
>>>
>>> Address using buf_iova (working case) = 0x158e7bfc0
>>>
>>> ::
>>>
>>> Address from patch (nonworking case) = 0x58e7b680
>>>
>>> Address using buf_iova (working case) = 0x158e7b680
>>>
>>> ::
>>>
>>> Regards, Dave
>>>
>>> *From: *Sampath Peechu (speechu) <speechu@cisco.com>
>>> *Date: *Monday, January 30, 2023 at 3:29 PM
>>> *To: *Maxime Coquelin <maxime.coquelin@redhat.com>,
>>> chenbo.xia@intel.com <chenbo.xia@intel.com>, dev@dpdk.org <dev@dpdk.org>
>>> *Cc: *Roger Melton (rmelton) <rmelton@cisco.com>, Malcolm Bumgardner
>>> (mbumgard) <mbumgard@cisco.com>
>>> *Subject: *Re: Commit broke 32-bit testpmd app
>>>
>>> Hi Maxime,
>>>
>>> Could you please let us know if you got a chance to look at the
>>> debugs logs I provided?
>>>
>>> Thanks,
>>>
>>> Sampath
>>>
>>> *From: *Sampath Peechu (speechu) <speechu@cisco.com>
>>> *Date: *Tuesday, December 6, 2022 at 1:08 PM
>>> *To: *Maxime Coquelin <maxime.coquelin@redhat.com>,
>>> chenbo.xia@intel.com <chenbo.xia@intel.com>, dev@dpdk.org <dev@dpdk.org>
>>> *Cc: *Roger Melton (rmelton) <rmelton@cisco.com>
>>> *Subject: *Re: Commit broke 32-bit testpmd app
>>>
>>> Hi Maxime,
>>>
>>> Did you get a chance to look into this?
>>>
>>> Please let me know if you need anything else.
>>>
>>> Thanks,
>>>
>>> Sampath
>>>
>>> *From: *Sampath Peechu (speechu) <speechu@cisco.com>
>>> *Date: *Wednesday, November 23, 2022 at 5:15 PM
>>> *To: *Maxime Coquelin <maxime.coquelin@redhat.com>,
>>> chenbo.xia@intel.com <chenbo.xia@intel.com>, dev@dpdk.org <dev@dpdk.org>
>>> *Cc: *Roger Melton (rmelton) <rmelton@cisco.com>
>>> *Subject: *Re: Commit broke 32-bit testpmd app
>>>
>>> Hi Maxime,
>>>
>>> I’m attaching the following for reference.
>>>
>>> * Instructions for Centos8 test setup
>>> * Diffs between the working and non-working versions (working
>>> version has the problem commit backed out)
>>> * Working logs (stats show that ping packets from neighbor VM can be
>>> seen with both 64-bit and 32-bit apps)
>>> * Non-working logs (stats show that ping packets from neighbor VM
>>> are seen with 64-bit app but NOT seen with 32-bit app)
>>>
>>> ============================
>>>
>>> $ sudo ./usertools/dpdk-devbind.py --status
>>>
>>> Network devices using DPDK-compatible driver
>>>
>>> ============================================
>>>
>>> 0000:07:00.0 'Virtio network device 1041' drv=igb_uio unused=
>>>
>>> 0000:08:00.0 'Virtio network device 1041' drv=igb_uio unused=
>>>
>>> Network devices using kernel driver
>>>
>>> ===================================
>>>
>>> 0000:01:00.0 'Virtio network device 1041' if=enp1s0 drv=virtio-pci
>>> unused=igb_uio *Active*
>>>
>>> …
>>>
>>> ===========================
>>>
>>> Thanks,
>>>
>>> Sampath
>>>
>>> *From: *Maxime Coquelin <maxime.coquelin@redhat.com>
>>> *Date: *Tuesday, November 22, 2022 at 4:24 AM
>>> *To: *Sampath Peechu (speechu) <speechu@cisco.com>,
>>> chenbo.xia@intel.com <chenbo.xia@intel.com>, dev@dpdk.org <dev@dpdk.org>
>>> *Cc: *Roger Melton (rmelton) <rmelton@cisco.com>
>>> *Subject: *Re: Commit broke 32-bit testpmd app
>>>
>>> Hi,
>>>
>>> In my initial reply (see below), I also asked if you had logs to share.
>>> And wondered whether it happens with Virtio PCI or Virtio-user?
>>>
>>> Regards,
>>> Maxime
>>>
>>> On 11/16/22 00:30, Sampath Peechu (speechu) wrote:
>>> > ++ dev@dpdk.org <mailto:dev@dpdk.org <mailto:dev@dpdk.org>>
>>> >
>>> > *From: *Maxime Coquelin <maxime.coquelin@redhat.com>
>>> > *Date: *Tuesday, November 15, 2022 at 3:19 AM
>>> > *To: *Sampath Peechu (speechu) <speechu@cisco.com>,
>>> chenbo.xia@intel.com
>>> > <chenbo.xia@intel.com>
>>> > *Cc: *Roger Melton (rmelton) <rmelton@cisco.com>
>>> > *Subject: *Re: Commit broke 32-bit testpmd app
>>> >
>>> > Hi Sampath,
>>> >
>>> >
>>> > Please add dev@dpdk.org, the upstream mailing list, if this is related
>>> > to the upstream DPDK project.If it is using RHEL DPDK package, please
>>> > use the appropriate support channels.
>>> >
>>> > On 11/14/22 23:55, Sampath Peechu (speechu) wrote:
>>> > > Hi Virtio Maintainers team,
>>> > >
>>> > > This email is regarding the following commit.
>>> > >
>>> > >
>>> >
>>> https://github.com/DPDK/dpdk/commit/ba55c94a7ebc386d2288d6578ed57aad6cb92657 <https://github.com/DPDK/dpdk/commit/ba55c94a7ebc386d2288d6578ed57aad6cb92657> <https://github.com/DPDK/dpdk/commit/ba55c94a7ebc386d2288d6578ed57aad6cb92657 <https://github.com/DPDK/dpdk/commit/ba55c94a7ebc386d2288d6578ed57aad6cb92657>>
>>> > >
>>> > > The above commit appears to have broken the 32-bit testpmd app (and
>>> > > consequently impacted one of our products that runs as a 32-bit
>>> DPDK
>>> > > app). The 64-bit testpmd app does not appear to be impacted though.
>>> >
>>> > We'll need some logs to understand what is going on.
>>> > Does it happen with virtio-pci or virtio-user?
>>> >
>>> > Regards,
>>> > Maxime
>>> >
>>> > > With the commit in place, we didn’t see any packets going
>>> through at
>>> > > all. After backing out the commit and rebuilding the 32-bit
>>> testpmd app
>>> > > in our test setup, we were able to pass traffic as expected.
>>> > >
>>> > > Could you please let us know if this is a known issue? And if
>>> there is a
>>> > > fix available for it?
>>> > >
>>> > > Thank you,
>>> > >
>>> > > Sampath Peechu
>>> > >
>>> > > Cisco Systems
>>> > >
>>> >
>>>
>>
^ permalink raw reply [flat|nested] 5+ messages in thread
* Re: 32-bit virtio failing on DPDK v23.11.1 (and tags)
2024-08-28 21:27 32-bit virtio failing on DPDK v23.11.1 (and tags) Chris Brezovec (cbrezove)
@ 2024-09-03 14:43 ` Chris Brezovec (cbrezove)
2024-09-06 9:15 ` Maxime Coquelin
2024-11-02 15:59 ` Chris Brezovec (cbrezove)
0 siblings, 2 replies; 5+ messages in thread
From: Chris Brezovec (cbrezove) @ 2024-09-03 14:43 UTC (permalink / raw)
To: dev, maxime.coquelin; +Cc: Roger Melton (rmelton), Walt Robinson (walrobin)
[-- Attachment #1: Type: text/plain, Size: 3522 bytes --]
Hi Maxime / others,
I am just following up to see if you have had a chance to look at what I previously sent, and whether you have any ideas regarding the issue.
Thanks in advance!
-ChrisB
From: Chris Brezovec (cbrezove) <cbrezove@cisco.com>
Date: Wednesday, August 28, 2024 at 5:27 PM
To: dev@dpdk.org <dev@dpdk.org>, maxime.coquelin@redhat.com <maxime.coquelin@redhat.com>
Cc: common-dpio-core-team(mailer list) <common-dpio-core-team@cisco.com>
Subject: 32-bit virtio failing on DPDK v23.11.1 (and tags)
Hi Maxime,
My name is Chris Brezovec; we met and talked about some 32-bit virtio issues we were seeing at Cisco during the DPDK summit last year. There was also a back-and-forth between you and Dave Johnson at Cisco last September regarding the same issue. I have attached some of the email chain from that conversation, which resulted in this commit being made to DPDK v23.11 (https://github.com/DPDK/dpdk/commit/8c41645be010ec7fa0df4f6c3790b167945154b4).
We recently picked up the v23.11.1 DPDK release and saw that 32-bit virtio is not working again, while 64-bit virtio still works. We are seeing CVQ timeouts - the PMD receives no response from the host, which prevents the port from starting. We were able to reproduce the issue using testpmd. We have traced through the virtio changes made during development of the v23.xx DPDK releases, and believe the following rework commit introduced the failure (https://github.com/DPDK/dpdk/commit/a632f0f64ffba3553a18bdb51a670c1b603c0ce6).
We have also tested v23.07, v23.11, v23.11.2-rc2, and v24.07, and they all exhibit the same issue when running in 32-bit mode with testpmd.
We were hoping you might be able to take a quick look at the two commits and see if there is something obvious missing in the refactor work that might have caused this issue. I am thinking there might be a location or two in the code that should be using VIRTIO_MBUF_ADDR() or a similar macro.
Regards,
ChrisB
This is some of the testpmd output seen on v23.11.2-rc2:
LD_LIBRARY_PATH=/home/rmelton/scratch/dpdk-v23.11.2-rc2.git/build/lib /home/rmelton/scratch/dpdk-v23.11.2-rc2.git/build/app/dpdk-testpmd -l 2-3 -a 0000:07:00.0 --log-level pmd.net.iavf.*,8 --log-level lib.eal.*,8 --log-level=lib.eal:info --log-level=lib.eal:debug --log-level=lib.ethdev:info --log-level=lib.ethdev:debug --log-level=lib.virtio:warning --log-level=lib.virtio:info --log-level=lib.virtio:debug --log-level=pmd.*:debug --iova-mode=pa -- -i
— snip —
virtio_send_command(): vq->vq_desc_head_idx = 0, status = 255, vq->hw->cvq = 0x76d9acc0 vq = 0x76d9ac80
virtio_send_command_split(): vq->vq_queue_index = 2
virtio_send_command_split(): vq->vq_free_cnt=64
vq->vq_desc_head_idx=0
virtio_dev_promiscuous_disable(): Failed to disable promisc
Failed to disable promiscuous mode for device (port 0): Resource temporarily unavailable
Error during restoring configuration for device (port 0): Resource temporarily unavailable
virtio_dev_stop(): stop
Fail to start port 0: Resource temporarily unavailable
Done
virtio_send_command(): vq->vq_desc_head_idx = 0, status = 255, vq->hw->cvq = 0x76d9acc0 vq = 0x76d9ac80
virtio_send_command_split(): vq->vq_queue_index = 2
virtio_send_command_split(): vq->vq_free_cnt=64
vq->vq_desc_head_idx=0
virtio_dev_promiscuous_enable(): Failed to enable promisc
Error during enabling promiscuous mode for port 0: Resource temporarily unavailable - ignore
[-- Attachment #2: Type: text/html, Size: 10890 bytes --]
* Re: 32-bit virtio failing on DPDK v23.11.1 (and tags)
2024-09-03 14:43 ` Chris Brezovec (cbrezove)
@ 2024-09-06 9:15 ` Maxime Coquelin
2024-11-02 15:59 ` Chris Brezovec (cbrezove)
1 sibling, 0 replies; 5+ messages in thread
From: Maxime Coquelin @ 2024-09-06 9:15 UTC (permalink / raw)
To: Chris Brezovec (cbrezove), dev
Cc: Roger Melton (rmelton), Walt Robinson (walrobin)
Hello Chris,
On 9/3/24 16:43, Chris Brezovec (cbrezove) wrote:
> Hi Maxime / others,
>
> I am just following up to see if you have had a chance to look at what
> I previously sent, and whether you have any ideas regarding the issue.
It seems there are not a lot of people testing 32-bit builds with
Virtio, if it has been broken since v23.03.
As it looks important to you, could you please work on setting up a CI?
For the issue itself, nothing catches my eye for now. I will continue to
have a look.
Regards,
Maxime
> Thanks in advance!
>
> -ChrisB
>
> *From: *Chris Brezovec (cbrezove) <cbrezove@cisco.com>
> *Date: *Wednesday, August 28, 2024 at 5:27 PM
> *To: *dev@dpdk.org <dev@dpdk.org>, maxime.coquelin@redhat.com
> <maxime.coquelin@redhat.com>
> *Cc: *common-dpio-core-team(mailer list) <common-dpio-core-team@cisco.com>
> *Subject: *32-bit virtio failing on DPDK v23.11.1 (and tags)
>
> Hi Maxime,
>
> My name is Chris Brezovec; we met and talked about some 32-bit virtio
> issues we were seeing at Cisco during the DPDK summit last year. There
> was also a back and forth between you and Dave Johnson at Cisco last
> September regarding the same issue. I have attached some of the email
> chain from that conversation that resulted in this commit being made to
> dpdk v23.11
> (https://github.com/DPDK/dpdk/commit/8c41645be010ec7fa0df4f6c3790b167945154b4 <https://github.com/DPDK/dpdk/commit/8c41645be010ec7fa0df4f6c3790b167945154b4>).
>
> We recently picked up the v23.11.1 DPDK release and saw that 32-bit
> virtio is not working again, but 64-bit virtio is working. We are
> noticing CVQ timeouts - PMD receives no response from host and this
> leads to failure of the port to start. We were able to recreate this
> issue using testpmd. We have done some tracing through the virtio
> changes made during the development of the v23.xx DPDK release, and
> believe we have identified the following rework commit to have caused a
> failure
> (https://github.com/DPDK/dpdk/commit/a632f0f64ffba3553a18bdb51a670c1b603c0ce6 <https://github.com/DPDK/dpdk/commit/a632f0f64ffba3553a18bdb51a670c1b603c0ce6>).
>
> We have also tested v23.07, v23.11, v23.11.2-rc2, v24.07 and they all
> seem to see the same issue when running in 32-bit mode using testpmd.
>
> We were hoping you might be able to take a quick look at the two commits
> and see if there might be something obvious missing in the refactor work
> that might have caused this issue. I am thinking there might be a location
> or two in the code that should be using the VIRTIO_MBUF_ADDR() or
> similar macro that might have been missed.
>
> Regards,
>
> ChrisB
>
> This is some of the testpmd output seen on v23.11.2-rc2:
>
> LD_LIBRARY_PATH=/home/rmelton/scratch/dpdk-v23.11.2-rc2.git/build/lib
> /home/rmelton/scratch/dpdk-v23.11.2-rc2.git/build/app/dpdk-testpmd -l
> 2-3 -a 0000:07:00.0 --log-level pmd.net.iavf.*,8 --log-level lib.eal.*,8
> --log-level=lib.eal:info --log-level=lib.eal:debug
> --log-level=lib.ethdev:info --log-level=lib.ethdev:debug
> --log-level=lib.virtio:warning --log-level=lib.virtio:info
> --log-level=lib.virtio:debug --log-level=pmd.*:debug --iova-mode=pa -- -i
>
> — snip —
>
> virtio_send_command(): vq->vq_desc_head_idx = 0, status = 255,
> vq->hw->cvq = 0x76d9acc0 vq = 0x76d9ac80
>
> virtio_send_command_split(): vq->vq_queue_index = 2
>
> virtio_send_command_split(): vq->vq_free_cnt=64
>
> vq->vq_desc_head_idx=0
>
> virtio_dev_promiscuous_disable(): Failed to disable promisc
>
> Failed to disable promiscuous mode for device (port 0): Resource
> temporarily unavailable
>
> Error during restoring configuration for device (port 0): Resource
> temporarily unavailable
>
> virtio_dev_stop(): stop
>
> Fail to start port 0: Resource temporarily unavailable
>
> Done
>
> virtio_send_command(): vq->vq_desc_head_idx = 0, status = 255,
> vq->hw->cvq = 0x76d9acc0 vq = 0x76d9ac80
>
> virtio_send_command_split(): vq->vq_queue_index = 2
>
> virtio_send_command_split(): vq->vq_free_cnt=64
>
> vq->vq_desc_head_idx=0
>
> virtio_dev_promiscuous_enable(): Failed to enable promisc
>
> Error during enabling promiscuous mode for port 0: Resource temporarily
> unavailable - ignore
>
* Re: 32-bit virtio failing on DPDK v23.11.1 (and tags)
2024-09-03 14:43 ` Chris Brezovec (cbrezove)
2024-09-06 9:15 ` Maxime Coquelin
@ 2024-11-02 15:59 ` Chris Brezovec (cbrezove)
2024-11-07 7:17 ` Chris Brezovec (cbrezove)
1 sibling, 1 reply; 5+ messages in thread
From: Chris Brezovec (cbrezove) @ 2024-11-02 15:59 UTC (permalink / raw)
To: dev, maxime.coquelin; +Cc: Roger Melton (rmelton), Walt Robinson (walrobin)
[-- Attachment #1: Type: text/plain, Size: 5210 bytes --]
Hi Maxime / team,
I have been going through the 12+ virtio commits between the last known working version and the first place we noticed this being broken. It does appear to be a change in this commit: https://github.com/DPDK/dpdk/commit/a632f0f64ffba3553a18bdb51a670c1b603c0ce6
I focused on the virtio_alloc_queue_headers() and virtio_free_queue_headers() functions. I think I have narrowed it down to the hdr_mem setting. The following change seems to work in my test environment (which is a little limited).
I was hoping you could look at these changes and hopefully help get a fix in for 24.11.
Regards,
-ChrisB
---
drivers/net/virtio/virtqueue.c | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/drivers/net/virtio/virtqueue.c b/drivers/net/virtio/virtqueue.c
index 6f419665f1..fc7f7a9c55 100644
--- a/drivers/net/virtio/virtqueue.c
+++ b/drivers/net/virtio/virtqueue.c
@@ -344,7 +344,7 @@ virtio_alloc_queue_headers(struct virtqueue *vq, int numa_node, const char *name
if (vq->hw->use_va)
*hdr_mem = (uintptr_t)(*hdr_mz)->addr;
else
- *hdr_mem = (uintptr_t)(*hdr_mz)->iova;
+ *hdr_mem = (*hdr_mz)->iova;
return 0;
}
--
2.35.6
From: Chris Brezovec (cbrezove) <cbrezove@cisco.com>
Date: Tuesday, September 3, 2024 at 10:43 AM
To: dev@dpdk.org <dev@dpdk.org>, maxime.coquelin@redhat.com <maxime.coquelin@redhat.com>
Cc: Roger Melton (rmelton) <rmelton@cisco.com>, Walt Robinson (walrobin) <walrobin@cisco.com>
Subject: Re: 32-bit virtio failing on DPDK v23.11.1 (and tags)
Hi Maxime / others,
I am just following up to see if you have had a chance to look at what I previously sent, and whether you have any ideas regarding the issue.
Thanks in advance!
-ChrisB
From: Chris Brezovec (cbrezove) <cbrezove@cisco.com>
Date: Wednesday, August 28, 2024 at 5:27 PM
To: dev@dpdk.org <dev@dpdk.org>, maxime.coquelin@redhat.com <maxime.coquelin@redhat.com>
Cc: common-dpio-core-team(mailer list) <common-dpio-core-team@cisco.com>
Subject: 32-bit virtio failing on DPDK v23.11.1 (and tags)
Hi Maxime,
My name is Chris Brezovec; we met and talked about some 32-bit virtio issues we were seeing at Cisco during the DPDK summit last year. There was also a back-and-forth between you and Dave Johnson at Cisco last September regarding the same issue. I have attached some of the email chain from that conversation, which resulted in this commit being made to DPDK v23.11 (https://github.com/DPDK/dpdk/commit/8c41645be010ec7fa0df4f6c3790b167945154b4).
We recently picked up the v23.11.1 DPDK release and saw that 32-bit virtio is not working again, while 64-bit virtio still works. We are seeing CVQ timeouts - the PMD receives no response from the host, which prevents the port from starting. We were able to reproduce the issue using testpmd. We have traced through the virtio changes made during development of the v23.xx DPDK releases, and believe the following rework commit introduced the failure (https://github.com/DPDK/dpdk/commit/a632f0f64ffba3553a18bdb51a670c1b603c0ce6).
We have also tested v23.07, v23.11, v23.11.2-rc2, and v24.07, and they all exhibit the same issue when running in 32-bit mode with testpmd.
We were hoping you might be able to take a quick look at the two commits and see if there is something obvious missing in the refactor work that might have caused this issue. I am thinking there might be a location or two in the code that should be using VIRTIO_MBUF_ADDR() or a similar macro.
Regards,
ChrisB
This is some of the testpmd output seen on v23.11.2-rc2:
LD_LIBRARY_PATH=/home/rmelton/scratch/dpdk-v23.11.2-rc2.git/build/lib /home/rmelton/scratch/dpdk-v23.11.2-rc2.git/build/app/dpdk-testpmd -l 2-3 -a 0000:07:00.0 --log-level pmd.net.iavf.*,8 --log-level lib.eal.*,8 --log-level=lib.eal:info --log-level=lib.eal:debug --log-level=lib.ethdev:info --log-level=lib.ethdev:debug --log-level=lib.virtio:warning --log-level=lib.virtio:info --log-level=lib.virtio:debug --log-level=pmd.*:debug --iova-mode=pa -- -i
— snip —
virtio_send_command(): vq->vq_desc_head_idx = 0, status = 255, vq->hw->cvq = 0x76d9acc0 vq = 0x76d9ac80
virtio_send_command_split(): vq->vq_queue_index = 2
virtio_send_command_split(): vq->vq_free_cnt=64
vq->vq_desc_head_idx=0
virtio_dev_promiscuous_disable(): Failed to disable promisc
Failed to disable promiscuous mode for device (port 0): Resource temporarily unavailable
Error during restoring configuration for device (port 0): Resource temporarily unavailable
virtio_dev_stop(): stop
Fail to start port 0: Resource temporarily unavailable
Done
virtio_send_command(): vq->vq_desc_head_idx = 0, status = 255, vq->hw->cvq = 0x76d9acc0 vq = 0x76d9ac80
virtio_send_command_split(): vq->vq_queue_index = 2
virtio_send_command_split(): vq->vq_free_cnt=64
vq->vq_desc_head_idx=0
virtio_dev_promiscuous_enable(): Failed to enable promisc
Error during enabling promiscuous mode for port 0: Resource temporarily unavailable - ignore
[-- Attachment #2: Type: text/html, Size: 15311 bytes --]
* Re: 32-bit virtio failing on DPDK v23.11.1 (and tags)
2024-11-02 15:59 ` Chris Brezovec (cbrezove)
@ 2024-11-07 7:17 ` Chris Brezovec (cbrezove)
0 siblings, 0 replies; 5+ messages in thread
From: Chris Brezovec (cbrezove) @ 2024-11-07 7:17 UTC (permalink / raw)
To: dev, maxime.coquelin; +Cc: Roger Melton (rmelton), Walt Robinson (walrobin)
[-- Attachment #1: Type: text/plain, Size: 5728 bytes --]
Maxime,
Do you think you might be able to look at the info below? It probably got lost in the many emails this past week / weekend.
Kind regards,
-ChrisB
From: Chris Brezovec (cbrezove) <cbrezove@cisco.com>
Date: Saturday, November 2, 2024 at 12:00 PM
To: dev@dpdk.org <dev@dpdk.org>, maxime.coquelin@redhat.com <maxime.coquelin@redhat.com>
Cc: Roger Melton (rmelton) <rmelton@cisco.com>, Walt Robinson (walrobin) <walrobin@cisco.com>
Subject: Re: 32-bit virtio failing on DPDK v23.11.1 (and tags)
Hi Maxime / team,
I have been going through the 12+ virtio commits between the last known working version and the first place we noticed this being broken. It does appear to be a change in this commit: https://github.com/DPDK/dpdk/commit/a632f0f64ffba3553a18bdb51a670c1b603c0ce6
I focused on the virtio_alloc_queue_headers() and virtio_free_queue_headers() functions. I think I have narrowed it down to the hdr_mem setting. The following change seems to work in my test environment (which is a little limited).
I was hoping you could look at these changes and hopefully help get a fix in for 24.11.
Regards,
-ChrisB
---
drivers/net/virtio/virtqueue.c | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/drivers/net/virtio/virtqueue.c b/drivers/net/virtio/virtqueue.c
index 6f419665f1..fc7f7a9c55 100644
--- a/drivers/net/virtio/virtqueue.c
+++ b/drivers/net/virtio/virtqueue.c
@@ -344,7 +344,7 @@ virtio_alloc_queue_headers(struct virtqueue *vq, int numa_node, const char *name
if (vq->hw->use_va)
*hdr_mem = (uintptr_t)(*hdr_mz)->addr;
else
- *hdr_mem = (uintptr_t)(*hdr_mz)->iova;
+ *hdr_mem = (*hdr_mz)->iova;
return 0;
}
--
2.35.6
From: Chris Brezovec (cbrezove) <cbrezove@cisco.com>
Date: Tuesday, September 3, 2024 at 10:43 AM
To: dev@dpdk.org <dev@dpdk.org>, maxime.coquelin@redhat.com <maxime.coquelin@redhat.com>
Cc: Roger Melton (rmelton) <rmelton@cisco.com>, Walt Robinson (walrobin) <walrobin@cisco.com>
Subject: Re: 32-bit virtio failing on DPDK v23.11.1 (and tags)
Hi Maxime / others,
I am just following up to see if you have had a chance to look at what I previously sent, and whether you have any ideas regarding the issue.
Thanks in advance!
-ChrisB
From: Chris Brezovec (cbrezove) <cbrezove@cisco.com>
Date: Wednesday, August 28, 2024 at 5:27 PM
To: dev@dpdk.org <dev@dpdk.org>, maxime.coquelin@redhat.com <maxime.coquelin@redhat.com>
Cc: common-dpio-core-team(mailer list) <common-dpio-core-team@cisco.com>
Subject: 32-bit virtio failing on DPDK v23.11.1 (and tags)
Hi Maxime,
My name is Chris Brezovec; we met and talked about some 32-bit virtio issues we were seeing at Cisco during the DPDK summit last year. There was also a back-and-forth between you and Dave Johnson at Cisco last September regarding the same issue. I have attached some of the email chain from that conversation, which resulted in this commit being made to DPDK v23.11 (https://github.com/DPDK/dpdk/commit/8c41645be010ec7fa0df4f6c3790b167945154b4).
We recently picked up the v23.11.1 DPDK release and saw that 32-bit virtio is not working again, while 64-bit virtio still works. We are seeing CVQ timeouts - the PMD receives no response from the host, which prevents the port from starting. We were able to reproduce the issue using testpmd. We have traced through the virtio changes made during development of the v23.xx DPDK releases, and believe the following rework commit introduced the failure (https://github.com/DPDK/dpdk/commit/a632f0f64ffba3553a18bdb51a670c1b603c0ce6).
We have also tested v23.07, v23.11, v23.11.2-rc2, and v24.07, and they all exhibit the same issue when running in 32-bit mode with testpmd.
We were hoping you might be able to take a quick look at the two commits and see if there is something obvious missing in the refactor work that might have caused this issue. I am thinking there might be a location or two in the code that should be using VIRTIO_MBUF_ADDR() or a similar macro.
Regards,
ChrisB
This is some of the testpmd output seen on v23.11.2-rc2:
LD_LIBRARY_PATH=/home/rmelton/scratch/dpdk-v23.11.2-rc2.git/build/lib /home/rmelton/scratch/dpdk-v23.11.2-rc2.git/build/app/dpdk-testpmd -l 2-3 -a 0000:07:00.0 --log-level pmd.net.iavf.*,8 --log-level lib.eal.*,8 --log-level=lib.eal:info --log-level=lib.eal:debug --log-level=lib.ethdev:info --log-level=lib.ethdev:debug --log-level=lib.virtio:warning --log-level=lib.virtio:info --log-level=lib.virtio:debug --log-level=pmd.*:debug --iova-mode=pa -- -i
— snip —
virtio_send_command(): vq->vq_desc_head_idx = 0, status = 255, vq->hw->cvq = 0x76d9acc0 vq = 0x76d9ac80
virtio_send_command_split(): vq->vq_queue_index = 2
virtio_send_command_split(): vq->vq_free_cnt=64
vq->vq_desc_head_idx=0
virtio_dev_promiscuous_disable(): Failed to disable promisc
Failed to disable promiscuous mode for device (port 0): Resource temporarily unavailable
Error during restoring configuration for device (port 0): Resource temporarily unavailable
virtio_dev_stop(): stop
Fail to start port 0: Resource temporarily unavailable
Done
virtio_send_command(): vq->vq_desc_head_idx = 0, status = 255, vq->hw->cvq = 0x76d9acc0 vq = 0x76d9ac80
virtio_send_command_split(): vq->vq_queue_index = 2
virtio_send_command_split(): vq->vq_free_cnt=64
vq->vq_desc_head_idx=0
virtio_dev_promiscuous_enable(): Failed to enable promisc
Error during enabling promiscuous mode for port 0: Resource temporarily unavailable - ignore
[-- Attachment #2: Type: text/html, Size: 17170 bytes --]
end of thread, other threads:[~2024-11-07 7:17 UTC | newest]
Thread overview: 5+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2024-08-28 21:27 32-bit virtio failing on DPDK v23.11.1 (and tags) Chris Brezovec (cbrezove)
2024-09-03 14:43 ` Chris Brezovec (cbrezove)
2024-09-06 9:15 ` Maxime Coquelin
2024-11-02 15:59 ` Chris Brezovec (cbrezove)
2024-11-07 7:17 ` Chris Brezovec (cbrezove)
This is a public inbox, see mirroring instructions
for how to clone and mirror all data and code used for this inbox;
as well as URLs for NNTP newsgroup(s).