Age | Commit message | Author | Files | Lines
2025-10-08 | ceph: add checking of wait_for_completion_killable() return value | Viacheslav Dubeyko | 1 | -1/+4
The Coverity Scan service has detected a call to wait_for_completion_killable() without checking the return value in ceph_lock_wait_for_completion() [1]. The CID 1636232 defect contains the explanation: "If the function returns an error value, the error value may be mistaken for a normal value. In ceph_lock_wait_for_completion(): Value returned from a function is not checked for errors before being used. (CWE-252)". The patch adds a check of the wait_for_completion_killable() return value and returns the error code from ceph_lock_wait_for_completion(). [1] https://scan5.scan.coverity.com/#/project-view/64304/10063?selectedIssue=1636232 Signed-off-by: Viacheslav Dubeyko <Slava.Dubeyko@ibm.com> Reviewed-by: Alex Markuze <amarkuze@redhat.com> Signed-off-by: Ilya Dryomov <idryomov@gmail.com>
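For reference, the pattern the report asks for looks roughly like this (a minimal sketch with a placeholder name, not the actual ceph_lock_wait_for_completion() hunk):

#include <linux/completion.h>

/*
 * Sketch only: propagate the error from wait_for_completion_killable()
 * instead of ignoring it.
 */
static int example_wait_killable(struct completion *done)
{
        int err;

        /* returns 0 on completion, -ERESTARTSYS on a fatal signal */
        err = wait_for_completion_killable(done);
        if (err)
                return err;

        return 0;
}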
2025-10-08 | ceph: make ceph_start_io_*() killable | Max Kellermann | 3 | -26/+49
This allows killing processes that wait for a lock when one process is stuck waiting for the Ceph server. This is similar to the NFS commit 38a125b31504 ("fs/nfs/io: make nfs_start_io_*() killable"). [ idryomov: drop comment on include, formatting ] Signed-off-by: Max Kellermann <max.kellermann@ionos.com> Reviewed-by: Ilya Dryomov <idryomov@gmail.com> Signed-off-by: Ilya Dryomov <idryomov@gmail.com>
2025-10-08 | libceph: Use HMAC-SHA256 library instead of crypto_shash | Eric Biggers | 3 | -58/+26
Use the HMAC-SHA256 library functions instead of crypto_shash. This is simpler and faster. Signed-off-by: Eric Biggers <ebiggers@kernel.org> Reviewed-by: Viacheslav Dubeyko <Slava.Dubeyko@ibm.com> Signed-off-by: Ilya Dryomov <idryomov@gmail.com>
2025-10-08 | tracing: Fix irqoff tracers on failure of acquiring calltime | Steven Rostedt | 1 | -13/+10
The functions irqsoff_graph_entry() and irqsoff_graph_return() both call func_prolog_dec() that will test if the data->disable is already set and if not, increment it and return. If it was set, it returns false and the caller exits. The caller of this function must decrement the disable counter, but misses doing so if the calltime fails to be acquired. Instead of exiting out when calltime is NULL, change the logic to do the work if it is not NULL and still do the clean up at the end of the function if it is NULL. Cc: stable@vger.kernel.org Cc: Masami Hiramatsu <mhiramat@kernel.org> Cc: Mathieu Desnoyers <mathieu.desnoyers@efficios.com> Link: https://lore.kernel.org/20251008114943.6f60f30f@gandalf.local.home Fixes: a485ea9e3ef3 ("tracing: Fix irqsoff and wakeup latency tracers when using function graph") Reported-by: Sasha Levin <sashal@kernel.org> Closes: https://lore.kernel.org/linux-trace-kernel/20251006175848.1906912-2-sashal@kernel.org/ Signed-off-by: Steven Rostedt (Google) <rostedt@goodmis.org>
2025-10-08 | tracing: Fix wakeup tracers on failure of acquiring calltime | Steven Rostedt | 1 | -10/+6
The functions wakeup_graph_entry() and wakeup_graph_return() both call func_prolog_preempt_disable() that will test if the data->disable is already set and if not, increment it and disable preemption. If it was set, it returns false and the caller exits. The caller of this function must decrement the disable counter, but misses doing so if the calltime fails to be acquired. Instead of exiting out when calltime is NULL, change the logic to do the work if it is not NULL and still do the clean up at the end of the function if it is NULL. Cc: stable@vger.kernel.org Cc: Masami Hiramatsu <mhiramat@kernel.org> Cc: Mathieu Desnoyers <mathieu.desnoyers@efficios.com> Link: https://lore.kernel.org/20251008114835.027b878a@gandalf.local.home Fixes: a485ea9e3ef3 ("tracing: Fix irqsoff and wakeup latency tracers when using function graph") Reported-by: Sasha Levin <sashal@kernel.org> Closes: https://lore.kernel.org/linux-trace-kernel/20251006175848.1906912-1-sashal@kernel.org/ Signed-off-by: Steven Rostedt (Google) <rostedt@goodmis.org>
2025-10-08 | tracing/osnoise: Replace kmalloc + copy_from_user with memdup_user_nul | Thorsten Blum | 1 | -7/+4
Replace kmalloc() followed by copy_from_user() with memdup_user_nul() to simplify and improve osnoise_cpus_write(). Remove the manual NUL-termination. No functional changes intended. Cc: Masami Hiramatsu <mhiramat@kernel.org> Cc: Mathieu Desnoyers <mathieu.desnoyers@efficios.com> Link: https://lore.kernel.org/20251001130907.364673-2-thorsten.blum@linux.dev Signed-off-by: Thorsten Blum <thorsten.blum@linux.dev> Signed-off-by: Steven Rostedt (Google) <rostedt@goodmis.org>
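The conversion described follows the usual memdup_user_nul() pattern; a minimal sketch with placeholder names, not the actual osnoise_cpus_write() code:

#include <linux/err.h>
#include <linux/slab.h>
#include <linux/string.h>

/*
 * Sketch only: memdup_user_nul() allocates, copies from userspace and
 * NUL-terminates in one call.
 */
static ssize_t example_write(const char __user *ubuf, size_t count)
{
        char *buf;

        buf = memdup_user_nul(ubuf, count);
        if (IS_ERR(buf))
                return PTR_ERR(buf);

        /* ... parse buf here ... */

        kfree(buf);
        return count;
}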
2025-10-08 | io_uring/zcrx: increment fallback loop src offset | Pavel Begunkov | 1 | -0/+1
Don't forget to adjust the source offset in io_copy_page(), otherwise it'll be copying into the same location in some cases for highmem setups. Fixes: e67645bb7f3f4 ("io_uring/zcrx: prepare fallback for larger pages") Signed-off-by: Pavel Begunkov <asml.silence@gmail.com> Signed-off-by: Jens Axboe <axboe@kernel.dk>
2025-10-08 | io_uring/zcrx: fix overshooting recv limit | Pavel Begunkov | 1 | -0/+4
It's reported that sometimes a zcrx request can receive more than was requested. It's caused by io_zcrx_recv_skb() adjusting desc->count for all received buffers including frag lists, but then doing recursive calls to process frag list skbs, which leads to desc->count double accounting and underflow. Reported-and-tested-by: Matthias Jasny <matthiasjasny@gmail.com> Fixes: 6699ec9a23f85 ("io_uring/zcrx: add a read limit to recvzc requests") Cc: stable@vger.kernel.org Signed-off-by: Pavel Begunkov <asml.silence@gmail.com> Signed-off-by: Jens Axboe <axboe@kernel.dk>
2025-10-08 | loop: remove redundant __GFP_NOWARN flag | Pedro Demarchi Gomes | 1 | -1/+1
GFP_NOWAIT already includes __GFP_NOWARN, so let's remove the redundant __GFP_NOWARN. Signed-off-by: Pedro Demarchi Gomes <pedrodemargomes@gmail.com> Reviewed-by: Yu Kuai <yukuai3@huawei.com> Signed-off-by: Jens Axboe <axboe@kernel.dk>
2025-10-08 | s390/uv: Fix comment of uv_find_secret() function | Thomas Huth | 1 | -2/+2
The uv_get_secret_metadata() function has been removed some months ago, so we should not mention it in the comment anymore. Fixes: a42831f0b74dc ("s390/uv: Remove uv_get_secret_metadata function") Signed-off-by: Thomas Huth <thuth@redhat.com> Acked-by: Claudio Imbrenda <imbrenda@linux.ibm.com> Signed-off-by: Heiko Carstens <hca@linux.ibm.com> Signed-off-by: Alexander Gordeev <agordeev@linux.ibm.com>
2025-10-08 | selftests: netfilter: query conntrack state to check for port clash resolution | Florian Westphal | 1 | -17/+41
Jakub reported this self test flaking occasionally (fails, but passes on re-run) on debug kernels. This is because the test checks for elapsed time to determine if both connections were established in parallel. Rework this to no longer depend on timing. Use busywait helper to check that both sockets have moved to established state and then query the conntrack engine for the two entries. Reported-by: Jakub Kicinski <kuba@kernel.org> Closes: https://lore.kernel.org/netfilter-devel/20250926163318.40d1a502@kernel.org/ Fixes: 117e149e26d1 ("selftests: netfilter: test nat source port clash resolution interaction with tcp early demux") Signed-off-by: Florian Westphal <fw@strlen.de>
2025-10-08 | selftests: netfilter: nft_fib.sh: fix spurious test failures | Florian Westphal | 1 | -5/+8
Jakub reports spurious failure of the nft_fib.sh test. This is caused by a subtle bug inherited when I moved a faulty ping from one test case to another. nft_fib.sh not only checks that the fib expression matched, it also records the number of matches and then validates we have the expected count. When I did this it was under the assumption that we would have 0 to n matching packets. In case of the failure, the entry has n+1 matching packets. This happens because the ping_unreachable helper uses "ping -c 1 -w 1", instead of the intended "-W". -w alters the meaning of -c (count), namely, it is then treated as the number of wanted *replies* instead of "number of packets to send". So, in some cases, ping -c 1 -w 1 ends up sending two packets which then makes the test fail due to the higher-than-expected packet count. Fix the actual bug (s/-w/-W) and also change the error handling: 1. Show the number of expected packets in the error message 2. Always try to delete the key from the set. Else, the later test that makes sure we don't have unexpected keys in there will always fail as well. Reported-by: Jakub Kicinski <kuba@kernel.org> Closes: https://lore.kernel.org/netfilter-devel/20250927090709.0b3cd783@kernel.org/ Fixes: 98287045c979 ("selftests: netfilter: move fib vrf test to nft_fib.sh") Signed-off-by: Florian Westphal <fw@strlen.de>
2025-10-08 | bridge: br_vlan_fill_forward_path_pvid: use br_vlan_group_rcu() | Eric Woudstra | 1 | -1/+1
net/bridge/br_private.h:1627 suspicious rcu_dereference_protected() usage! other info that might help us debug this: rcu_scheduler_active = 2, debug_locks = 1 7 locks held by socat/410: #0: ffff88800d7a9c90 (sk_lock-AF_INET){+.+.}-{0:0}, at: inet_stream_connect+0x43/0xa0 #1: ffffffff9a779900 (rcu_read_lock){....}-{1:3}, at: __ip_queue_xmit+0x62/0x1830 [..] #6: ffffffff9a779900 (rcu_read_lock){....}-{1:3}, at: nf_hook.constprop.0+0x8a/0x440 Call Trace: lockdep_rcu_suspicious.cold+0x4f/0xb1 br_vlan_fill_forward_path_pvid+0x32c/0x410 [bridge] br_fill_forward_path+0x7a/0x4d0 [bridge] Use the correct helper; the non-_rcu variant requires the RTNL mutex. Fixes: bcf2766b1377 ("net: bridge: resolve forwarding path for VLAN tag actions in bridge devices") Signed-off-by: Eric Woudstra <ericwouds@gmail.com> Signed-off-by: Florian Westphal <fw@strlen.de>
2025-10-08 | netfilter: nft_objref: validate objref and objrefmap expressions | Fernando Fernandez Mancera | 1 | -0/+39
Referencing a synproxy stateful object from OUTPUT hook causes kernel crash due to infinite recursive calls: BUG: TASK stack guard page was hit at 000000008bda5b8c (stack is 000000003ab1c4a5..00000000494d8b12) [...] Call Trace: __find_rr_leaf+0x99/0x230 fib6_table_lookup+0x13b/0x2d0 ip6_pol_route+0xa4/0x400 fib6_rule_lookup+0x156/0x240 ip6_route_output_flags+0xc6/0x150 __nf_ip6_route+0x23/0x50 synproxy_send_tcp_ipv6+0x106/0x200 synproxy_send_client_synack_ipv6+0x1aa/0x1f0 nft_synproxy_do_eval+0x263/0x310 nft_do_chain+0x5a8/0x5f0 [nf_tables nft_do_chain_inet+0x98/0x110 nf_hook_slow+0x43/0xc0 __ip6_local_out+0xf0/0x170 ip6_local_out+0x17/0x70 synproxy_send_tcp_ipv6+0x1a2/0x200 synproxy_send_client_synack_ipv6+0x1aa/0x1f0 [...] Implement objref and objrefmap expression validate functions. Currently, only NFT_OBJECT_SYNPROXY object type requires validation. This will also handle a jump to a chain using a synproxy object from the OUTPUT hook. Now when trying to reference a synproxy object in the OUTPUT hook, nft will produce the following error: synproxy_crash.nft: Error: Could not process rule: Operation not supported synproxy name mysynproxy ^^^^^^^^^^^^^^^^^^^^^^^^ Fixes: ee394f96ad75 ("netfilter: nft_synproxy: add synproxy stateful object support") Reported-by: Georg Pfuetzenreuter <georg.pfuetzenreuter@suse.com> Closes: https://bugzilla.suse.com/1250237 Signed-off-by: Fernando Fernandez Mancera <fmancera@suse.de> Reviewed-by: Pablo Neira Ayuso <pablo@netfilter.org> Signed-off-by: Florian Westphal <fw@strlen.de>
2025-10-08 | can: m_can: fix CAN state in system PM | Marc Kleine-Budde | 1 | -4/+3
A suspend/resume cycle on a down interface results in the interface coming up in Error Active state. A suspend/resume cycle on an up interface will always result in Error Active state, regardless of the actual CAN state. During suspend, only set running interfaces to CAN_STATE_SLEEPING. During resume only touch the CAN state of running interfaces. For wakeup sources, set the CAN state depending on the Protocol Status Register (PSR); for non-wakeup-source interfaces m_can_start() will do the same. Fixes: e0d1f4816f2a ("can: m_can: add Bosch M_CAN controller support") Reviewed-by: Markus Schneider-Pargmann <msp@baylibre.com> Link: https://patch.msgid.link/20250929-m_can-fix-state-handling-v4-4-682b49b49d9a@pengutronix.de Signed-off-by: Marc Kleine-Budde <mkl@pengutronix.de>
2025-10-08 | can: m_can: m_can_chip_config(): bring up interface in correct state | Marc Kleine-Budde | 1 | -1/+1
In some SoCs (observed on the STM32MP15) the M_CAN IP core keeps the CAN state and CAN error counters over an internal reset cycle. An external reset is not always possible, due to the shared reset with the other CAN core. This caused the core to not always be in Error Active state when bringing up the controller. Instead of always setting the CAN state to Error Active in m_can_chip_config(), fix this by reading and decoding the Protocol Status Register (PSR) and setting the CAN state accordingly. Fixes: e0d1f4816f2a ("can: m_can: add Bosch M_CAN controller support") Reviewed-by: Markus Schneider-Pargmann <msp@baylibre.com> Link: https://patch.msgid.link/20250929-m_can-fix-state-handling-v4-3-682b49b49d9a@pengutronix.de Signed-off-by: Marc Kleine-Budde <mkl@pengutronix.de>
2025-10-08 | can: m_can: m_can_handle_state_errors(): fix CAN state transition to Error Active | Marc Kleine-Budde | 1 | -21/+32
The CAN Error State is determined by the receive and transmit error counters. The CAN error counters decrease when reception/transmission is successful, so that a status transition back to the Error Active status is possible. This transition is not handled by m_can_handle_state_errors(). Add the missing detection of the Error Active state to m_can_handle_state_errors() and extend the handling of this state in m_can_handle_state_change(). Fixes: e0d1f4816f2a ("can: m_can: add Bosch M_CAN controller support") Fixes: cd0d83eab2e0 ("can: m_can: m_can_handle_state_change(): fix state change") Reviewed-by: Markus Schneider-Pargmann <msp@baylibre.com> Link: https://patch.msgid.link/20250929-m_can-fix-state-handling-v4-2-682b49b49d9a@pengutronix.de Signed-off-by: Marc Kleine-Budde <mkl@pengutronix.de>
2025-10-08 | can: m_can: m_can_plat_remove(): add missing pm_runtime_disable() | Marc Kleine-Budde | 1 | -1/+1
Commit 227619c3ff7c ("can: m_can: move runtime PM enable/disable to m_can_platform") moved the PM runtime enable from the m_can core driver into the m_can_platform. That patch forgot to move the pm_runtime_disable() to m_can_plat_remove(), so that unloading the m_can_platform driver causes an "Unbalanced pm_runtime_enable!" error message. Add the missing pm_runtime_disable() to m_can_plat_remove() to fix the problem. Cc: Patrik Flykt <patrik.flykt@linux.intel.com> Fixes: 227619c3ff7c ("can: m_can: move runtime PM enable/disable to m_can_platform") Reviewed-by: Markus Schneider-Pargmann <msp@baylibre.com> Link: https://patch.msgid.link/20250929-m_can-fix-state-handling-v4-1-682b49b49d9a@pengutronix.de Signed-off-by: Marc Kleine-Budde <mkl@pengutronix.de>
2025-10-08 | can: gs_usb: gs_make_candev(): populate net_device->dev_port | Celeste Liu | 1 | -0/+1
The gs_usb driver supports USB devices with more than 1 CAN channel. In old kernels before 3.15, it used net_device->dev_id to distinguish different channels in userspace, which was done in commit acff76fa45b4 ("can: gs_usb: gs_make_candev(): set netdev->dev_id"). But since 3.15, the correct way is populating net_device->dev_port. And according to the documentation, if a network device supports multiple interfaces, the lack of net_device->dev_port SHALL be treated as a bug. Fixes: acff76fa45b4 ("can: gs_usb: gs_make_candev(): set netdev->dev_id") Cc: stable@vger.kernel.org Signed-off-by: Celeste Liu <uwu@coelacanthus.name> Link: https://patch.msgid.link/20250930-gs-usb-populate-net_device-dev_port-v1-1-68a065de6937@coelacanthus.name Signed-off-by: Marc Kleine-Budde <mkl@pengutronix.de>
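The change described amounts to setting the channel index on the new field as well; a sketch with assumed names, not the actual gs_make_candev() hunk:

#include <linux/netdevice.h>

/*
 * Sketch only: set both the legacy dev_id and the preferred dev_port
 * to the CAN channel index.
 */
static void example_set_channel_index(struct net_device *netdev, u8 channel)
{
        netdev->dev_id = channel;       /* kept for existing userspace */
        netdev->dev_port = channel;     /* preferred field since 3.15 */
}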
2025-10-08 | can: gs_usb: increase max interface to U8_MAX | Celeste Liu | 1 | -12/+10
This issue was found by Runcheng Lu when developing the HSCanT USB to CAN FD converter [1]. The original developers may have had only a 3-interface device to test, so they wrote 3 here and left it for a future change. During the HSCanT development, we actually used 4 interfaces, so the limitation of 3 is not enough now. But just increasing it by one is not future-proof. Since the channel index type in gs_host_frame is u8, just make canch[] a flexible array with a u8 index, so it is naturally constrained by U8_MAX, and avoid statically allocating 256 pointers for every gs_usb device. [1]: https://github.com/cherry-embedded/HSCanT-hardware Fixes: d08e973a77d1 ("can: gs_usb: Added support for the GS_USB CAN devices") Reported-by: Runcheng Lu <runcheng.lu@hpmicro.com> Cc: stable@vger.kernel.org Reviewed-by: Vincent Mailhol <mailhol@kernel.org> Signed-off-by: Celeste Liu <uwu@coelacanthus.name> Link: https://patch.msgid.link/20250930-gs-usb-max-if-v5-1-863330bf6666@coelacanthus.name Signed-off-by: Marc Kleine-Budde <mkl@pengutronix.de>
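A rough sketch of the flexible-array idea with placeholder struct names (the real gs_usb structures differ):

#include <linux/overflow.h>
#include <linux/slab.h>
#include <linux/types.h>

struct example_can_channel;

/*
 * A u8 channel count plus a flexible array bounds the device to
 * U8_MAX channels without statically allocating 256 pointers.
 */
struct example_usb_dev {
        u8 channel_cnt;
        struct example_can_channel *canch[];    /* flexible array member */
};

static struct example_usb_dev *example_alloc(u8 nch)
{
        struct example_usb_dev *dev;

        /* allocate exactly as many channel slots as the device reports */
        dev = kzalloc(struct_size(dev, canch, nch), GFP_KERNEL);
        if (dev)
                dev->channel_cnt = nch;
        return dev;
}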
2025-10-08 | crypto: skcipher - Fix reqsize handling | T Pratham | 1 | -0/+2
Commit afddce13ce81d ("crypto: api - Add reqsize to crypto_alg") introduced cra_reqsize field in crypto_alg struct to replace type specific reqsize fields. It looks like this was introduced specifically for ahash and acomp from the commit description as subsequent commits add necessary changes in these alg frameworks. However, this is being recommended for use in all crypto algs [1] instead of setting reqsize using crypto_*_set_reqsize(). Using cra_reqsize in skcipher algorithms, hence, causes memory corruptions and crashes as the underlying functions in the algorithm framework have not been updated to set the reqsize properly from cra_reqsize. [2] Add proper set_reqsize calls in the skcipher init function to properly initialize reqsize for these algorithms in the framework. [1]: https://lore.kernel.org/linux-crypto/aCL8BxpHr5OpT04k@gondor.apana.org.au/ [2]: https://gist.github.com/Pratham-T/24247446f1faf4b7843e4014d5089f6b Fixes: afddce13ce81d ("crypto: api - Add reqsize to crypto_alg") Fixes: 52f641bc63a4 ("crypto: ti - Add driver for DTHE V2 AES Engine (ECB, CBC)") Signed-off-by: T Pratham <t-pratham@ti.com> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
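A minimal sketch of the kind of init hook described, with an assumed per-request context struct; not the actual framework or TI DTHE driver code:

#include <crypto/internal/skcipher.h>

/* assumed per-request driver context; the real one differs */
struct example_reqctx {
        int placeholder;
};

/*
 * Sketch of the described fix: set the request size explicitly in the
 * skcipher init hook so the framework reserves room for the driver's
 * per-request context.
 */
static int example_skcipher_init(struct crypto_skcipher *tfm)
{
        crypto_skcipher_set_reqsize(tfm, sizeof(struct example_reqctx));
        return 0;
}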
2025-10-07 | Input: atmel_mxt_ts - allow reset GPIO to sleep | Marek Vasut | 1 | -1/+1
The reset GPIO is not toggled in any critical section where it couldn't sleep, allow the reset GPIO to sleep. This allows the driver to operate reset GPIOs connected to I2C GPIO expanders. Signed-off-by: Marek Vasut <marek.vasut@mailbox.org> Link: https://lore.kernel.org/r/20251005023335.166483-1-marek.vasut@mailbox.org Cc: stable@vger.kernel.org Signed-off-by: Dmitry Torokhov <dmitry.torokhov@gmail.com>
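Allowing the reset line to sleep usually means switching to the _cansleep GPIO accessors; a sketch under that assumption, with made-up delays, not necessarily the exact hunk in this patch:

#include <linux/delay.h>
#include <linux/gpio/consumer.h>

/*
 * The _cansleep accessors allow the reset line to sit behind an I2C
 * GPIO expander, whose accesses may sleep.
 */
static void example_hw_reset(struct gpio_desc *reset_gpio)
{
        gpiod_set_value_cansleep(reset_gpio, 1);        /* assert reset */
        msleep(1);
        gpiod_set_value_cansleep(reset_gpio, 0);        /* release reset */
        msleep(100);                                    /* let the controller boot */
}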
2025-10-07 | net: pse-pd: tps23881: Fix current measurement scaling | Thomas Wismer | 1 | -1/+1
The TPS23881 improves on the TPS23880 with current sense resistors reduced from 255 mOhm to 200 mOhm. This has a direct impact on the scaling of the current measurement. However, the latest TPS23881 data sheet from May 2023 still shows the scaling of the TPS23880 model. Fixes: 7f076ce3f1733 ("net: pse-pd: tps23881: Add support for power limit and measurement features") Signed-off-by: Thomas Wismer <thomas.wismer@scs.ch> Acked-by: Kory Maincent <kory.maincent@bootlin.com> Link: https://patch.msgid.link/20251006204029.7169-2-thomas@wismer.xyz Signed-off-by: Jakub Kicinski <kuba@kernel.org>
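As a back-of-the-envelope illustration of why the sense resistance changes the scaling (an assumed conversion model for illustration only, not driver code):

#include <linux/math64.h>
#include <linux/types.h>

/*
 * Assumed model: the ADC reports the voltage across the sense
 * resistor, so the same raw reading maps to a larger current with
 * 200 mOhm (TPS23881) than with 255 mOhm (TPS23880), i.e. the scale
 * changes by a factor of 255/200.
 */
static u64 example_uamps(u32 sense_uvolts, u32 rsense_mohm)
{
        /* I [uA] = V [uV] * 1000 / R [mOhm] */
        return div_u64((u64)sense_uvolts * 1000, rsense_mohm);
}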
2025-10-07 | net/mlx5: fix pre-2.40 binutils assembler error | Arnd Bergmann | 1 | -1/+1
Old binutils versions require a slightly stricter syntax for the .arch_extension directive and fail with the extra semicolon: /tmp/cclfMnj9.s:656: Error: unknown architectural extension `simd;' Drop the semicolon to make it work with all supported toolchain versions. Link: https://lore.kernel.org/all/20251001163655.GA370262@ax162/ Reported-by: Paolo Abeni <pabeni@redhat.com> Reported-by: Naresh Kamboju <naresh.kamboju@linaro.org> Suggested-by: Nathan Chancellor <nathan@kernel.org> Fixes: fd8c8216648c ("net/mlx5: Improve write-combining test reliability for ARM64 Grace CPUs") Signed-off-by: Arnd Bergmann <arnd@arndb.de> Reviewed-by: Nathan Chancellor <nathan@kernel.org> Reviewed-by: Patrisious Haddad <phaddad@nvidia.com> Link: https://patch.msgid.link/20251006115640.497169-1-arnd@kernel.org Signed-off-by: Jakub Kicinski <kuba@kernel.org>
2025-10-07 | mm: hugetlb: avoid soft lockup when mprotect to large memory area | Yang Shi | 1 | -0/+2
When calling mprotect() to a large hugetlb memory area in our customer's workload (~300GB hugetlb memory), soft lockup was observed: watchdog: BUG: soft lockup - CPU#98 stuck for 23s! [t2_new_sysv:126916] CPU: 98 PID: 126916 Comm: t2_new_sysv Kdump: loaded Not tainted 6.17-rc7 Hardware name: GIGACOMPUTING R2A3-T40-AAV1/Jefferson CIO, BIOS 5.4.4.1 07/15/2025 pstate: 20400009 (nzCv daif +PAN -UAO -TCO -DIT -SSBS BTYPE=--) pc : mte_clear_page_tags+0x14/0x24 lr : mte_sync_tags+0x1c0/0x240 sp : ffff80003150bb80 x29: ffff80003150bb80 x28: ffff00739e9705a8 x27: 0000ffd2d6a00000 x26: 0000ff8e4bc00000 x25: 00e80046cde00f45 x24: 0000000000022458 x23: 0000000000000000 x22: 0000000000000004 x21: 000000011b380000 x20: ffff000000000000 x19: 000000011b379f40 x18: 0000000000000000 x17: 0000000000000000 x16: 0000000000000000 x15: 0000000000000000 x14: 0000000000000000 x13: 0000000000000000 x12: 0000000000000000 x11: 0000000000000000 x10: 0000000000000000 x9 : ffffc875e0aa5e2c x8 : 0000000000000000 x7 : 0000000000000000 x6 : 0000000000000000 x5 : fffffc01ce7a5c00 x4 : 00000000046cde00 x3 : fffffc0000000000 x2 : 0000000000000004 x1 : 0000000000000040 x0 : ffff0046cde7c000 Call trace:   mte_clear_page_tags+0x14/0x24   set_huge_pte_at+0x25c/0x280   hugetlb_change_protection+0x220/0x430   change_protection+0x5c/0x8c   mprotect_fixup+0x10c/0x294   do_mprotect_pkey.constprop.0+0x2e0/0x3d4   __arm64_sys_mprotect+0x24/0x44   invoke_syscall+0x50/0x160   el0_svc_common+0x48/0x144   do_el0_svc+0x30/0xe0   el0_svc+0x30/0xf0   el0t_64_sync_handler+0xc4/0x148   el0t_64_sync+0x1a4/0x1a8 Soft lockup is not triggered with THP or base page because there is cond_resched() called for each PMD size. Although the soft lockup was triggered by MTE, it should be not MTE specific. The other processing which takes long time in the loop may trigger soft lockup too. So add cond_resched() for hugetlb to avoid soft lockup. Link: https://lkml.kernel.org/r/20250929202402.1663290-1-yang@os.amperecomputing.com Fixes: 8f860591ffb2 ("[PATCH] Enable mprotect on huge pages") Signed-off-by: Yang Shi <yang@os.amperecomputing.com> Tested-by: Carl Worth <carl@os.amperecomputing.com> Reviewed-by: Christoph Lameter (Ampere) <cl@gentwo.org> Reviewed-by: Catalin Marinas <catalin.marinas@arm.com> Acked-by: David Hildenbrand <david@redhat.com> Acked-by: Oscar Salvador <osalvador@suse.de> Reviewed-by: Anshuman Khandual <anshuman.khandual@arm.com> Reviewed-by: Dev Jain <dev.jain@arm.com> Cc: Muchun Song <muchun.song@linux.dev> Cc: Will Deacon <will@kernel.org> Cc: <stable@vger.kernel.org> Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
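The fix pattern described is simply a cond_resched() inside the long per-huge-page loop; a sketch with placeholder logic, not the actual hugetlb_change_protection() change:

#include <linux/mm.h>
#include <linux/sched.h>

/*
 * A long per-huge-page loop needs a cond_resched() so a ~300GB range
 * cannot monopolize the CPU.
 */
static void example_walk_huge_range(unsigned long start, unsigned long end,
                                    unsigned long huge_page_size)
{
        unsigned long addr;

        for (addr = start; addr < end; addr += huge_page_size) {
                /* ... update protection for the huge page at addr ... */
                cond_resched();
        }
}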
2025-10-07 | fsnotify: pass correct offset to fsnotify_mmap_perm() | Ryan Roberts | 1 | -1/+2
fsnotify_mmap_perm() requires a byte offset for the file about to be mmap'ed. But it is called from vm_mmap_pgoff(), which has a page offset. Previously the conversion was done incorrectly so let's fix it, being careful not to overflow on 32-bit platforms. Discovered during code review. Link: https://lkml.kernel.org/r/20251003155238.2147410-1-ryan.roberts@arm.com Fixes: 066e053fe208 ("fsnotify: add pre-content hooks on mmap()") Signed-off-by: Ryan Roberts <ryan.roberts@arm.com> Reviewed-by: Kiryl Shutsemau <kas@kernel.org> Cc: Amir Goldstein <amir73il@gmail.com> Cc: David Hildenbrand <david@redhat.com> Cc: Liam Howlett <liam.howlett@oracle.com> Cc: Lorenzo Stoakes <lorenzo.stoakes@oracle.com> Cc: Michal Hocko <mhocko@suse.com> Cc: Mike Rapoport <rppt@kernel.org> Cc: Suren Baghdasaryan <surenb@google.com> Cc: Vlastimil Babka <vbabka@suse.cz> Cc: <stable@vger.kernel.org> Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
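The conversion being fixed is essentially a widen-before-shift; a minimal sketch, assuming the usual PAGE_SHIFT relationship between page offset and byte offset:

#include <linux/mm.h>
#include <linux/types.h>

/*
 * Widen the page offset before shifting so the byte offset cannot
 * overflow on 32-bit platforms, where unsigned long is 32 bits but
 * loff_t is 64.
 */
static loff_t example_pgoff_to_bytes(unsigned long pgoff)
{
        return (loff_t)pgoff << PAGE_SHIFT;
}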
2025-10-07 | mm/ksm: fix flag-dropping behavior in ksm_madvise | Jakub Acs | 2 | -1/+2
syzkaller discovered the following crash: (kernel BUG) [ 44.607039] ------------[ cut here ]------------ [ 44.607422] kernel BUG at mm/userfaultfd.c:2067! [ 44.608148] Oops: invalid opcode: 0000 [#1] SMP DEBUG_PAGEALLOC KASAN NOPTI [ 44.608814] CPU: 1 UID: 0 PID: 2475 Comm: reproducer Not tainted 6.16.0-rc6 #1 PREEMPT(none) [ 44.609635] Hardware name: QEMU Standard PC (i440FX + PIIX, 1996), BIOS rel-1.16.3-0-ga6ed6b701f0a-prebuilt.qemu.org 04/01/2014 [ 44.610695] RIP: 0010:userfaultfd_release_all+0x3a8/0x460 <snip other registers, drop unreliable trace> [ 44.617726] Call Trace: [ 44.617926] <TASK> [ 44.619284] userfaultfd_release+0xef/0x1b0 [ 44.620976] __fput+0x3f9/0xb60 [ 44.621240] fput_close_sync+0x110/0x210 [ 44.622222] __x64_sys_close+0x8f/0x120 [ 44.622530] do_syscall_64+0x5b/0x2f0 [ 44.622840] entry_SYSCALL_64_after_hwframe+0x76/0x7e [ 44.623244] RIP: 0033:0x7f365bb3f227 Kernel panics because it detects UFFD inconsistency during userfaultfd_release_all(). Specifically, a VMA which has a valid pointer to vma->vm_userfaultfd_ctx, but no UFFD flags in vma->vm_flags. The inconsistency is caused in ksm_madvise(): when the user calls madvise() with MADV_UNMERGEABLE on a VMA that is registered for UFFD in MINOR mode, it accidentally clears all flags stored in the upper 32 bits of vma->vm_flags. Assuming x86_64 kernel build, unsigned long is 64-bit and unsigned int and int are 32-bit wide. This setup causes the following mishap during the &= ~VM_MERGEABLE assignment. VM_MERGEABLE is a 32-bit constant of type unsigned int, 0x8000'0000. After ~ is applied, it becomes 0x7fff'ffff unsigned int, which is then promoted to unsigned long before the & operation. This promotion fills upper 32 bits with leading 0s, as we're doing unsigned conversion (and even for a signed conversion, this wouldn't help as the leading bit is 0). & operation thus ends up AND-ing vm_flags with 0x0000'0000'7fff'ffff instead of intended 0xffff'ffff'7fff'ffff and hence accidentally clears the upper 32-bits of its value. Fix it by changing the `VM_MERGEABLE` constant to unsigned long, using the BIT() macro. Note: other VM_* flags are not affected: This only happens to the VM_MERGEABLE flag, as the other VM_* flags are all constants of type int and after ~ operation, they end up with leading 1 and are thus converted to unsigned long with leading 1s. Note 2: After commit 31defc3b01d9 ("userfaultfd: remove (VM_)BUG_ON()s"), this is no longer a kernel BUG, but a WARNING at the same place: [ 45.595973] WARNING: CPU: 1 PID: 2474 at mm/userfaultfd.c:2067 but the root-cause (flag-drop) remains the same. [akpm@linux-foundation.org: rust bindgen wasn't able to handle BIT(), from Miguel] Link: https://lore.kernel.org/oe-kbuild-all/202510030449.VfSaAjvd-lkp@intel.com/ Link: https://lkml.kernel.org/r/20251001090353.57523-2-acsjakub@amazon.de Fixes: 7677f7fd8be7 ("userfaultfd: add minor fault registration mode") Signed-off-by: Jakub Acs <acsjakub@amazon.de> Signed-off-by: Miguel Ojeda <miguel.ojeda.sandonis@gmail.com> Acked-by: David Hildenbrand <david@redhat.com> Acked-by: SeongJae Park <sj@kernel.org> Tested-by: Alice Ryhl <aliceryhl@google.com> Tested-by: Miguel Ojeda <miguel.ojeda.sandonis@gmail.com> Cc: Xu Xin <xu.xin16@zte.com.cn> Cc: Chengming Zhou <chengming.zhou@linux.dev> Cc: Peter Xu <peterx@redhat.com> Cc: Axel Rasmussen <axelrasmussen@google.com> Cc: <stable@vger.kernel.org> Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
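The integer-promotion mishap described above can be reproduced in plain userspace C (an illustration of the C semantics only, not kernel code):

#include <stdint.h>
#include <stdio.h>

/* Clearing a 32-bit flag from a 64-bit flags word wipes the upper
 * 32 bits, while a 64-bit flag constant does not. */
#define FLAG_U32        ((uint32_t)0x80000000u) /* like the old VM_MERGEABLE */
#define FLAG_U64        ((uint64_t)1 << 31)     /* like BIT(31) as unsigned long */

int main(void)
{
        uint64_t flags = 0xffffffff7fffffffULL; /* upper bits hold other state */

        uint64_t a = flags & ~FLAG_U32; /* ~FLAG_U32 zero-extends to 0x7fffffff */
        uint64_t b = flags & ~FLAG_U64; /* upper 32 bits stay set */

        printf("32-bit flag: %#llx\n", (unsigned long long)a); /* 0x7fffffff */
        printf("64-bit flag: %#llx\n", (unsigned long long)b); /* 0xffffffff7fffffff */
        return 0;
}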
2025-10-07 | mm/damon/vaddr: do not repeat pte_offset_map_lock() until success | SeongJae Park | 1 | -6/+2
DAMON's virtual address space operation set implementation (vaddr) calls pte_offset_map_lock() inside the page table walk callback function. This is for reading and writing page table accessed bits. If pte_offset_map_lock() fails, it retries by returning the page table walk callback function with ACTION_AGAIN. pte_offset_map_lock() can continuously fail if the target is a pmd migration entry, though. Hence it could cause an infinite page table walk if the migration cannot be done until the page table walk is finished. This indeed caused a soft lockup when CPU hotplugging and DAMON were running in parallel. Avoid the infinite loop by simply not retrying the page table walk. DAMON is promising only a best-effort accuracy, so missing access to such pages is no problem. Link: https://lkml.kernel.org/r/20250930004410.55228-1-sj@kernel.org Fixes: 7780d04046a2 ("mm/pagewalkers: ACTION_AGAIN if pte_offset_map_lock() fails") Signed-off-by: SeongJae Park <sj@kernel.org> Reported-by: Xinyu Zheng <zhengxinyu6@huawei.com> Closes: https://lore.kernel.org/20250918030029.2652607-1-zhengxinyu6@huawei.com Acked-by: Hugh Dickins <hughd@google.com> Cc: <stable@vger.kernel.org> [6.5+] Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
2025-10-07 | mm/rmap: fix soft-dirty and uffd-wp bit loss when remapping zero-filled mTHP subpage to shared zeropage | Lance Yang | 1 | -5/+10
When splitting an mTHP and replacing a zero-filled subpage with the shared zeropage, try_to_map_unused_to_zeropage() currently drops several important PTE bits. For userspace tools like CRIU, which rely on the soft-dirty mechanism for incremental snapshots, losing the soft-dirty bit means modified pages are missed, leading to inconsistent memory state after restore. As pointed out by David, the more critical uffd-wp bit is also dropped. This breaks the userfaultfd write-protection mechanism, causing writes to be silently missed by monitoring applications, which can lead to data corruption. Preserve both the soft-dirty and uffd-wp bits from the old PTE when creating the new zeropage mapping to ensure they are correctly tracked. Link: https://lkml.kernel.org/r/20250930081040.80926-1-lance.yang@linux.dev Fixes: b1f202060afe ("mm: remap unused subpages to shared zeropage when splitting isolated thp") Signed-off-by: Lance Yang <lance.yang@linux.dev> Suggested-by: David Hildenbrand <david@redhat.com> Suggested-by: Dev Jain <dev.jain@arm.com> Acked-by: David Hildenbrand <david@redhat.com> Reviewed-by: Dev Jain <dev.jain@arm.com> Acked-by: Zi Yan <ziy@nvidia.com> Reviewed-by: Liam R. Howlett <Liam.Howlett@oracle.com> Reviewed-by: Harry Yoo <harry.yoo@oracle.com> Cc: Alistair Popple <apopple@nvidia.com> Cc: Baolin Wang <baolin.wang@linux.alibaba.com> Cc: Barry Song <baohua@kernel.org> Cc: Byungchul Park <byungchul@sk.com> Cc: Gregory Price <gourry@gourry.net> Cc: "Huang, Ying" <ying.huang@linux.alibaba.com> Cc: Jann Horn <jannh@google.com> Cc: Joshua Hahn <joshua.hahnjy@gmail.com> Cc: Lorenzo Stoakes <lorenzo.stoakes@oracle.com> Cc: Mariano Pache <npache@redhat.com> Cc: Mathew Brost <matthew.brost@intel.com> Cc: Peter Xu <peterx@redhat.com> Cc: Rakie Kim <rakie.kim@sk.com> Cc: Rik van Riel <riel@surriel.com> Cc: Ryan Roberts <ryan.roberts@arm.com> Cc: Usama Arif <usamaarif642@gmail.com> Cc: Vlastimil Babka <vbabka@suse.cz> Cc: Yu Zhao <yuzhao@google.com> Cc: <stable@vger.kernel.org> Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
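The described fix boils down to carrying the two bits over from the old PTE; a sketch using the generic PTE helpers, assuming a present old PTE, not the actual try_to_map_unused_to_zeropage() code:

#include <linux/pgtable.h>

static pte_t example_carry_tracking_bits(pte_t newpte, pte_t oldpte)
{
        if (pte_soft_dirty(oldpte))
                newpte = pte_mksoft_dirty(newpte);
        if (pte_uffd_wp(oldpte))
                newpte = pte_mkuffd_wp(newpte);
        return newpte;
}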
2025-10-07 | mm/thp: fix MTE tag mismatch when replacing zero-filled subpages | Lance Yang | 2 | -19/+4
When both THP and MTE are enabled, splitting a THP and replacing its zero-filled subpages with the shared zeropage can cause MTE tag mismatch faults in userspace. Remapping zero-filled subpages to the shared zeropage is unsafe, as the zeropage has a fixed tag of zero, which may not match the tag expected by the userspace pointer. KSM already avoids this problem by using memcmp_pages(), which on arm64 intentionally reports MTE-tagged pages as non-identical to prevent unsafe merging. As suggested by David[1], this patch adopts the same pattern, replacing the memchr_inv() byte-level check with a call to pages_identical(). This leverages existing architecture-specific logic to determine if a page is truly identical to the shared zeropage. Having both the THP shrinker and KSM rely on pages_identical() makes the design more future-proof, IMO. Instead of handling quirks in generic code, we just let the architecture decide what makes two pages identical. [1] https://lore.kernel.org/all/ca2106a3-4bb2-4457-81af-301fd99fbef4@redhat.com Link: https://lkml.kernel.org/r/20250922021458.68123-1-lance.yang@linux.dev Fixes: b1f202060afe ("mm: remap unused subpages to shared zeropage when splitting isolated thp") Signed-off-by: Lance Yang <lance.yang@linux.dev> Reported-by: Qun-wei Lin <Qun-wei.Lin@mediatek.com> Closes: https://lore.kernel.org/all/a7944523fcc3634607691c35311a5d59d1a3f8d4.camel@mediatek.com Suggested-by: David Hildenbrand <david@redhat.com> Acked-by: Zi Yan <ziy@nvidia.com> Acked-by: David Hildenbrand <david@redhat.com> Acked-by: Usama Arif <usamaarif642@gmail.com> Reviewed-by: Catalin Marinas <catalin.marinas@arm.com> Reviewed-by: Wei Yang <richard.weiyang@gmail.com> Cc: Alistair Popple <apopple@nvidia.com> Cc: andrew.yang <andrew.yang@mediatek.com> Cc: Baolin Wang <baolin.wang@linux.alibaba.com> Cc: Barry Song <baohua@kernel.org> Cc: Byungchul Park <byungchul@sk.com> Cc: Charlie Jenkins <charlie@rivosinc.com> Cc: Chinwen Chang <chinwen.chang@mediatek.com> Cc: Dev Jain <dev.jain@arm.com> Cc: Domenico Cerasuolo <cerasuolodomenico@gmail.com> Cc: Gregory Price <gourry@gourry.net> Cc: "Huang, Ying" <ying.huang@linux.alibaba.com> Cc: Hugh Dickins <hughd@google.com> Cc: Johannes Weiner <hannes@cmpxchg.org> Cc: Joshua Hahn <joshua.hahnjy@gmail.com> Cc: Kairui Song <ryncsn@gmail.com> Cc: Kalesh Singh <kaleshsingh@google.com> Cc: Liam Howlett <liam.howlett@oracle.com> Cc: Lorenzo Stoakes <lorenzo.stoakes@oracle.com> Cc: Mariano Pache <npache@redhat.com> Cc: Mathew Brost <matthew.brost@intel.com> Cc: Matthew Wilcox (Oracle) <willy@infradead.org> Cc: Mike Rapoport <rppt@kernel.org> Cc: Palmer Dabbelt <palmer@rivosinc.com> Cc: Rakie Kim <rakie.kim@sk.com> Cc: Rik van Riel <riel@surriel.com> Cc: Roman Gushchin <roman.gushchin@linux.dev> Cc: Ryan Roberts <ryan.roberts@arm.com> Cc: Samuel Holland <samuel.holland@sifive.com> Cc: Shakeel Butt <shakeel.butt@linux.dev> Cc: Suren Baghdasaryan <surenb@google.com> Cc: Yu Zhao <yuzhao@google.com> Cc: <stable@vger.kernel.org> Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
2025-10-07 | memcg: skip cgroup_file_notify if spinning is not allowed | Shakeel Butt | 2 | -10/+23
Generally memcg charging is allowed from all the contexts including NMI where even spinning on spinlock can cause locking issues. However one call chain was missed during the addition of memcg charging from any context support. That is try_charge_memcg() -> memcg_memory_event() -> cgroup_file_notify(). The possible function call tree under cgroup_file_notify() can acquire many different spin locks in spinning mode. Some of them are cgroup_file_kn_lock, kernfs_notify_lock, pool_workqueue's lock. So, let's just skip cgroup_file_notify() from memcg charging if the context does not allow spinning. An alternative approach was also explored where instead of skipping cgroup_file_notify(), we defer the memcg event processing to irq_work [1]. However it adds complexity and it was decided to keep things simple until we need more memcg events with !allow_spinning requirement. Link: https://lore.kernel.org/all/5qi2llyzf7gklncflo6gxoozljbm4h3tpnuv4u4ej4ztysvi6f@x44v7nz2wdzd/ [1] Link: https://lkml.kernel.org/r/20250922220203.261714-1-shakeel.butt@linux.dev Fixes: 3ac4638a734a ("memcg: make memcg_rstat_updated nmi safe") Signed-off-by: Shakeel Butt <shakeel.butt@linux.dev> Acked-by: Michal Hocko <mhocko@suse.com> Closes: https://lore.kernel.org/all/20250905061919.439648-1-yepeilin@google.com/ Cc: Alexei Starovoitov <ast@kernel.org> Cc: Johannes Weiner <hannes@cmpxchg.org> Cc: Kumar Kartikeya Dwivedi <memxor@gmail.com> Cc: Muchun Song <muchun.song@linux.dev> Cc: Peilin Ye <yepeilin@google.com> Cc: Roman Gushchin <roman.gushchin@linux.dev> Cc: Tejun Heo <tj@kernel.org> Cc: <stable@vger.kernel.org> Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
2025-10-07 | lib/test_kho: use kho_preserve_vmalloc instead of storing addresses in fdt | Mike Rapoport (Microsoft) | 1 | -12/+29
KHO test stores physical addresses of the preserved folios directly in fdt. Use kho_preserve_vmalloc() instead of it and kho_restore_vmalloc() to retrieve the addresses after kexec. This makes the test more scalable from one side and adds tests coverage for kho_preserve_vmalloc() from the other. Link: https://lkml.kernel.org/r/20250921054458.4043761-5-rppt@kernel.org Signed-off-by: Mike Rapoport (Microsoft) <rppt@kernel.org> Reviewed-by: Pratyush Yadav <pratyush@kernel.org> Cc: Alexander Graf <graf@amazon.com> Cc: Baoquan He <bhe@redhat.com> Cc: Changyuan Lyu <changyuanl@google.com> Cc: Chris Li <chrisl@kernel.org> Cc: Jason Gunthorpe <jgg@nvidia.com> Cc: Pasha Tatashin <pasha.tatashin@soleen.com> Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
2025-10-07 | kho: add support for preserving vmalloc allocations | Mike Rapoport (Microsoft) | 2 | -0/+309
A vmalloc allocation is preserved using binary structure similar to global KHO memory tracker. It's a linked list of pages where each page is an array of physical address of pages in vmalloc area. kho_preserve_vmalloc() hands out the physical address of the head page to the caller. This address is used as the argument to kho_vmalloc_restore() to restore the mapping in the vmalloc address space and populate it with the preserved pages. [pasha.tatashin@soleen.com: free chunks using free_page() not kfree()] Link: https://lkml.kernel.org/r/mafs0a52idbeg.fsf@kernel.org [akpm@linux-foundation.org: coding-style cleanups] Link: https://lkml.kernel.org/r/20250921054458.4043761-4-rppt@kernel.org Signed-off-by: Mike Rapoport (Microsoft) <rppt@kernel.org> Reviewed-by: Pratyush Yadav <pratyush@kernel.org> Cc: Alexander Graf <graf@amazon.com> Cc: Baoquan He <bhe@redhat.com> Cc: Changyuan Lyu <changyuanl@google.com> Cc: Chris Li <chrisl@kernel.org> Cc: Jason Gunthorpe <jgg@nvidia.com> Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
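The layout described reads roughly like the following (placeholder names and fields, not the actual KHO structures):

#include <linux/types.h>

/*
 * Each chunk is one page holding an array of physical addresses of
 * preserved vmalloc pages plus a link to the next chunk; the physical
 * address of the first chunk is what kho_preserve_vmalloc() hands
 * back to the caller.
 */
struct example_kho_vmalloc_chunk {
        phys_addr_t next_chunk;         /* 0 terminates the list */
        unsigned int nr_entries;        /* used slots in phys[] */
        phys_addr_t phys[];             /* preserved page addresses */
};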
2025-10-07 | kho: replace kho_preserve_phys() with kho_preserve_pages() | Mike Rapoport (Microsoft) | 3 | -17/+17
to make it clear that KHO operates on pages rather than on a random physical address. The kho_preserve_pages() will be also used in upcoming support for vmalloc preservation. Link: https://lkml.kernel.org/r/20250921054458.4043761-3-rppt@kernel.org Signed-off-by: Mike Rapoport (Microsoft) <rppt@kernel.org> Reviewed-by: Pratyush Yadav <pratyush@kernel.org> Reviewed-by: Jason Gunthorpe <jgg@nvidia.com> Cc: Alexander Graf <graf@amazon.com> Cc: Baoquan He <bhe@redhat.com> Cc: Changyuan Lyu <changyuanl@google.com> Cc: Chris Li <chrisl@kernel.org> Cc: Pasha Tatashin <pasha.tatashin@soleen.com> Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
2025-10-07 | kho: check if kho is finalized in __kho_preserve_order() | Mike Rapoport (Microsoft) | 1 | -29/+26
Patch series "kho: add support for preserving vmalloc allocations", v5. Following the discussion about preservation of memfd with LUO [1] these patches add support for preserving vmalloc allocations. Any KHO uses case presumes that there's a data structure that lists physical addresses of preserved folios (and potentially some additional metadata). Allowing vmalloc preservations with KHO allows scalable preservation of such data structures. For instance, instead of allocating array describing preserved folios in the fdt, memfd preservation can use vmalloc: preserved_folios = vmalloc_array(nr_folios, sizeof(*preserved_folios)); memfd_luo_preserve_folios(preserved_folios, folios, nr_folios); kho_preserve_vmalloc(preserved_folios, &folios_info); This patch (of 4): Instead of checking if kho is finalized in each caller of __kho_preserve_order(), do it in the core function itself. Link: https://lkml.kernel.org/r/20250921054458.4043761-1-rppt@kernel.org Link: https://lkml.kernel.org/r/20250921054458.4043761-2-rppt@kernel.org Link: https://lore.kernel.org/all/20250807014442.3829950-30-pasha.tatashin@soleen.com [1] Signed-off-by: Mike Rapoport (Microsoft) <rppt@kernel.org> Reviewed-by: Pratyush Yadav <pratyush@kernel.org> Cc: Alexander Graf <graf@amazon.com> Cc: Baoquan He <bhe@redhat.com> Cc: Changyuan Lyu <changyuanl@google.com> Cc: Chris Li <chrisl@kernel.org> Cc: Jason Gunthorpe <jgg@nvidia.com> Cc: Pasha Tatashin <pasha.tatashin@soleen.com> Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
2025-10-07 | MAINTAINERS, .mailmap: update Umang's email address | Umang Jain | 2 | -1/+2
Map my new professional email address in .mailmap and also update the IMX283 driver entry with the same. At the same time, update my role to a reviewer for the IMX283 driver. Link: https://lkml.kernel.org/r/20250929074025.84407-1-uajain@igalia.com Signed-off-by: Umang Jain <uajain@igalia.com> Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
2025-10-07 | drm/nouveau: fix bad ret code in nouveau_bo_move_prep | Shuhao Fu | 1 | -1/+1
In `nouveau_bo_move_prep`, if `nouveau_mem_map` fails, an error code should be returned. Currently, it returns zero even if vmm addr is not correctly mapped. Cc: stable@vger.kernel.org Reviewed-by: Petr Vorel <pvorel@suse.cz> Signed-off-by: Shuhao Fu <sfual@cse.ust.hk> Fixes: 9ce523cc3bf2 ("drm/nouveau: separate buffer object backing memory from nvkm structures") Signed-off-by: Danilo Krummrich <dakr@kernel.org>
2025-10-07 | smb: client: Return directly after a failed genlmsg_new() in cifs_swn_send_register_message() | Markus Elfring | 1 | -5/+2
* Return directly after a failed call of the function “genlmsg_new” at the beginning. * Delete the label “fail”, which became unnecessary with this refactoring. Signed-off-by: Markus Elfring <elfring@users.sourceforge.net> Signed-off-by: Steve French <stfrench@microsoft.com>
2025-10-07 | smb: client: Use common code in cifs_do_create() | Markus Elfring | 1 | -2/+2
Use a label once more so that a bit of common code can be better reused at the end of this function implementation. Signed-off-by: Markus Elfring <elfring@users.sourceforge.net> Acked-by: Enzo Matsumiya <ematsumiya@suse.de> Reviewed-by: David Howells <dhowells@redhat.com> Signed-off-by: Steve French <stfrench@microsoft.com>
2025-10-07 | drm/amd/display: Incorrect Mirror Cositing | Jesse Agate | 1 | -5/+5
[WHY] hinit/vinit are incorrect in the case of mirroring. [HOW] Cositing sign must be flipped when image is mirrored in the vertical or horizontal direction. Cc: Mario Limonciello <mario.limonciello@amd.com> Cc: Alex Deucher <alexander.deucher@amd.com> Cc: stable@vger.kernel.org Reviewed-by: Samson Tam <samson.tam@amd.com> Signed-off-by: Jesse Agate <jesse.agate@amd.com> Signed-off-by: Brendan Leder <breleder@amd.com> Signed-off-by: Alex Hung <alex.hung@amd.com> Signed-off-by: Alex Deucher <alexander.deucher@amd.com>
2025-10-07 | drm/amd/display: Enable Dynamic DTBCLK Switch | Fangzhi Zuo | 1 | -0/+4
[WHAT] Since dcn35, DTBCLK can be disabled when no DP2 sink connected for power saving purpose. Cc: Mario Limonciello <mario.limonciello@amd.com> Cc: Alex Deucher <alexander.deucher@amd.com> Cc: stable@vger.kernel.org Reviewed-by: Aurabindo Pillai <aurabindo.pillai@amd.com> Signed-off-by: Fangzhi Zuo <Jerry.Zuo@amd.com> Signed-off-by: Alex Hung <alex.hung@amd.com> Signed-off-by: Alex Deucher <alexander.deucher@amd.com>
2025-10-07 | drm/amdgpu: Report individual reset error | Lijo Lazar | 1 | -10/+15
If reinitialization of one of the GPUs fails after reset, it logs failure on all subsequent GPUs even though they have resumed successfully. A sample log where only the device at 0000:95:00.0 had a failure - amdgpu 0000:15:00.0: amdgpu: GPU reset(19) succeeded! amdgpu 0000:65:00.0: amdgpu: GPU reset(19) succeeded! amdgpu 0000:75:00.0: amdgpu: GPU reset(19) succeeded! amdgpu 0000:85:00.0: amdgpu: GPU reset(19) succeeded! amdgpu 0000:95:00.0: amdgpu: GPU reset(19) failed amdgpu 0000:e5:00.0: amdgpu: GPU reset(19) failed amdgpu 0000:f5:00.0: amdgpu: GPU reset(19) failed amdgpu 0000:05:00.0: amdgpu: GPU reset(19) failed amdgpu 0000:15:00.0: amdgpu: GPU reset end with ret = -5 To avoid confusion, report the error for each device separately and return the first error as the overall result. Signed-off-by: Lijo Lazar <lijo.lazar@amd.com> Reviewed-by: Asad Kamal <asad.kamal@amd.com> Signed-off-by: Alex Deucher <alexander.deucher@amd.com>
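The reporting pattern described amounts to logging per device and keeping the first error; a sketch with placeholder types, not the actual amdgpu reset code:

#include <linux/device.h>
#include <linux/list.h>

/* placeholder per-device bookkeeping, not amdgpu's */
struct example_gpu {
        struct list_head node;
        struct device *dev;
        int reset_err;          /* 0 on success, -errno on failure */
};

static int example_report_reset(struct list_head *devices)
{
        struct example_gpu *gpu;
        int ret = 0;

        list_for_each_entry(gpu, devices, node) {
                if (gpu->reset_err) {
                        dev_err(gpu->dev, "GPU reset failed (%d)\n", gpu->reset_err);
                        if (!ret)
                                ret = gpu->reset_err;   /* keep the first error */
                } else {
                        dev_info(gpu->dev, "GPU reset succeeded\n");
                }
        }

        return ret;
}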
2025-10-07 | drm/amdgpu: partially revert "revert to old status lock handling v3" | Christian König | 4 | -68/+105
The CI systems are pointing out list corruptions, so we still need to fix something here. Keep the asserts, but revert the lock changes for now. Fixes: 59e4405e9ee2 ("drm/amdgpu: revert to old status lock handling v3") Signed-off-by: Christian König <christian.koenig@amd.com> Acked-by: Alex Deucher <alexander.deucher@amd.com> Signed-off-by: Alex Deucher <alexander.deucher@amd.com>
2025-10-07 | drm/amd/display: Fix unsafe uses of kernel mode FPU | Ard Biesheuvel | 6 | -7/+56
The point of isolating code that uses kernel mode FPU in separate compilation units is to ensure that even implicit uses of, e.g., SIMD registers for spilling occur only in a context where this is permitted, i.e., from inside a kernel_fpu_begin/end block. This is important on arm64, which uses -mgeneral-regs-only to build all kernel code, with the exception of such compilation units where FP or SIMD registers are expected to be used. Given that the compiler may invent uses of FP/SIMD anywhere in such a unit, none of its code may be accessible from outside a kernel_fpu_begin/end block. This means that all callers into such compilation units must use the DC_FP start/end macros, which must not occur there themselves. For robustness, all functions with external linkage that reside there should call dc_assert_fp_enabled() to assert that the FPU context was set up correctly. Fix this for the DCN35, DCN351 and DCN36 implementations. Cc: Austin Zheng <austin.zheng@amd.com> Cc: Jun Lei <jun.lei@amd.com> Cc: Harry Wentland <harry.wentland@amd.com> Cc: Leo Li <sunpeng.li@amd.com> Cc: Rodrigo Siqueira <siqueira@igalia.com> Cc: Alex Deucher <alexander.deucher@amd.com> Cc: "Christian König" <christian.koenig@amd.com> Cc: amd-gfx@lists.freedesktop.org Cc: dri-devel@lists.freedesktop.org Signed-off-by: Ard Biesheuvel <ardb@kernel.org> Signed-off-by: Alex Deucher <alexander.deucher@amd.com> Cc: stable@vger.kernel.org Signed-off-by: Alex Deucher <alexander.deucher@amd.com>
2025-10-07 | drm/amd/pm: Disable VCN queue reset on SMU v13.0.6 due to regression | Jesse.Zhang | 1 | -2/+1
Disable VCN reset capability for the program 4 as it's causing regressions. Fixes: 9d20f37a106f ("drm/amd/pm: Add VCN reset support for SMU v13.0.6") Reviewed-by: Lijo Lazar <lijo.lazar@amd.com> Signed-off-by: Jesse Zhang <jesse.zhang@amd.com> Signed-off-by: Alex Deucher <alexander.deucher@amd.com>
2025-10-07 | drm/amdgpu: Fix general protection fault in amdgpu_vm_bo_reset_state_machine | Jesse.Zhang | 1 | -1/+1
After GPU reset with VRAM loss, a general protection fault occurs during user queue restoration when accessing vm_bo->vm after spinlock release in amdgpu_vm_bo_reset_state_machine. The root cause is that vm_bo points to the last entry from the list_for_each_entry loop, but this becomes invalid after the spinlock is released. Accessing vm_bo->vm at this point leads to memory corruption. Crash log shows: [ 326.981811] Oops: general protection fault, probably for non-canonical address 0x4156415741e58ac8: 0000 [#1] SMP NOPTI [ 326.981820] CPU: 13 UID: 0 PID: 1035 Comm: kworker/13:3 Tainted: G E 6.16.0+ #25 PREEMPT(voluntary) [ 326.981826] Tainted: [E]=UNSIGNED_MODULE [ 326.981827] Hardware name: Gigabyte Technology Co., Ltd. X870E AORUS PRO ICE/X870E AORUS PRO ICE, BIOS F3i 12/19/2024 [ 326.981831] Workqueue: events amdgpu_userq_restore_worker [amdgpu] [ 326.981999] RIP: 0010:amdgpu_vm_assert_locked+0x16/0x70 [amdgpu] [ 326.982094] Code: 00 00 00 00 90 90 90 90 90 90 90 90 90 90 90 90 90 90 90 90 0f 1f 44 00 00 48 85 ff 74 45 48 8b 87 80 03 00 00 48 85 c0 74 40 <48> 8b b8 80 01 00 00 48 85 ff 74 3b 8b 05 0c b7 0e f0 85 c0 75 05 [ 326.982098] RSP: 0018:ffffaa91c2a6bc20 EFLAGS: 00010206 [ 326.982100] RAX: 4156415741e58948 RBX: ffff9e8f013e8330 RCX: 0000000000000000 [ 326.982102] RDX: 0000000000000005 RSI: 000000001d254e88 RDI: ffffffffc144814a [ 326.982104] RBP: ffffaa91c2a6bc68 R08: 0000004c21a25674 R09: 0000000000000001 [ 326.982106] R10: 0000000000000001 R11: dccaf3f2f82863fc R12: ffff9e8f013e8000 [ 326.982108] R13: ffff9e8f013e8000 R14: 0000000000000000 R15: ffff9e8f09980000 [ 326.982110] FS: 0000000000000000(0000) GS:ffff9e9e79995000(0000) knlGS:0000000000000000 [ 326.982112] CS: 0010 DS: 0000 ES: 0000 CR0: 0000000080050033 [ 326.982114] CR2: 000055ed6c9caa80 CR3: 0000000797060000 CR4: 0000000000750ef0 [ 326.982116] PKRU: 55555554 Reviewed-by: Christian König <christian.koenig@amd.com> Signed-off-by: Jesse Zhang <Jesse.Zhang@amd.com> Signed-off-by: Alex Deucher <alexander.deucher@amd.com>
2025-10-07 | drm/amdgpu: Check swus/ds for switch state save | Lijo Lazar | 1 | -8/+15
For saving switch state, check if the GPU is having SWUS/DS architecture. Otherwise, skip saving. Reported-by: Roman Elshin <roman.elshin@gmail.com> Closes: https://gitlab.freedesktop.org/drm/amd/-/issues/4602 Fixes: 1dd2fa0e00f1 ("drm/amdgpu: Save and restore switch state") Signed-off-by: Lijo Lazar <lijo.lazar@amd.com> Acked-by: Alex Deucher <alexander.deucher@amd.com> Signed-off-by: Alex Deucher <alexander.deucher@amd.com>
2025-10-07 | drm/amdkfd: Fix two comments in kfd_ioctl.h | Felix Kuehling | 1 | -2/+2
Queue read and write pointers are "to KFD", not "from KFD". Suggested-by: Robert Liu <robert.liu@amd.com> Signed-off-by: Felix Kuehling <felix.kuehling@amd.com> Reviewed-by: Alex Deucher <alexander.deucher@amd.com> Reviewed-by: Robert Liu <robert.liu@amd.com> Signed-off-by: Alex Deucher <alexander.deucher@amd.com>
2025-10-07 | drm/amd/pm: Avoid interface mismatch messaging | Lijo Lazar | 3 | -2/+5
PMFW interface version is not used by some IP implementations like SMU v13.0.6/12, instead rely on PMFW version checks. Avoid the log if interface version is not used. Signed-off-by: Lijo Lazar <lijo.lazar@amd.com> Reviewed-by: Asad Kamal <asad.kamal@amd.com> Signed-off-by: Alex Deucher <alexander.deucher@amd.com>
2025-10-07 | drm/amdgpu: Merge amdgpu_vm_set_pasid into amdgpu_vm_init | Jesse.Zhang | 3 | -56/+24
As KFD no longer uses a separate PASID, the global amdgpu_vm_set_pasid() function is no longer necessary. Merge its functionality directly into amdgpu_vm_init() to simplify code flow and eliminate redundant locking. v2: remove superfluous check, adjust amdgpu_vm_fini and remove amdgpu_vm_set_pasid (Christian) v3: drop amdgpu_vm_assert_locked (Christian) Closes: https://gitlab.freedesktop.org/drm/amd/-/issues/4614 Fixes: 59e4405e9ee2 ("drm/amdgpu: revert to old status lock handling v3") Reviewed-by: Christian König <christian.koenig@amd.com> Suggested-by: Christian König <christian.koenig@amd.com> Signed-off-by: Jesse Zhang <Jesse.Zhang@amd.com> Signed-off-by: Alex Deucher <alexander.deucher@amd.com>