path: root/kernel/power
Age         Commit message  [Author, files changed, lines -/+]
2024-02-08  PM: EM: Change debugfs configuration to use runtime EM table data  [Lukasz Luba, 1 file, -8/+59]
Dump the runtime EM table values, which can be modified over time. In order to do that, allocate a chunk of debug memory that can later be freed automatically thanks to devm_kcalloc(). This design accommodates the fact that the EM table memory can change after an EM update, so the debug code cannot use the pointer obtained during the initialization phase. Reviewed-by: Dietmar Eggemann <dietmar.eggemann@arm.com> Tested-by: Dietmar Eggemann <dietmar.eggemann@arm.com> Signed-off-by: Lukasz Luba <lukasz.luba@arm.com> Signed-off-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com>
2024-02-08  PM: EM: Optimize em_cpu_energy() and remove division  [Lukasz Luba, 1 file, -4/+3]
The Energy Model (EM) can be modified at runtime, which brings new possibilities. em_cpu_energy() is called by the Energy Aware Scheduler (EAS) in its hot path. The energy calculation uses the power value for a given performance state (ps) and the CPU busy time as a percentage for that frequency. It is possible to avoid the division by 'scale_cpu' at runtime, because the EM is updated whenever a new max-capacity CPU is set in the system. Use that feature and do the needed division during the calculation of the coefficient 'ps->cost'. That enhanced 'ps->cost' value can then simply be multiplied by the utilization:

  pd_nrg = ps->cost * \Sum cpu_util

to get the energy needed for the whole Performance Domain (PD). With this optimization and the earlier removal of map_util_freq(), em_cpu_energy() should run faster on the Big CPU by 1.43x and on the Little CPU by 1.69x (RockPi 4B board). Reviewed-by: Dietmar Eggemann <dietmar.eggemann@arm.com> Tested-by: Dietmar Eggemann <dietmar.eggemann@arm.com> Signed-off-by: Lukasz Luba <lukasz.luba@arm.com> Signed-off-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com>
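A minimal sketch of the idea, with illustrative names and simplified types rather than the exact kernel/power/energy_model.c code: the division by 'scale_cpu' is folded into the cost coefficient when the table is (re)built, so the EAS hot path is reduced to one multiplication per performance domain.

  /* Assumed, simplified performance-state layout (not the real struct). */
  struct ps_sketch {
  	unsigned long power;	/* power at this performance state */
  	unsigned long freq;	/* frequency of this state */
  	unsigned long cost;	/* precomputed coefficient */
  };

  /* Done only when the EM table or the max CPU capacity changes;
   * 64-bit and fixed-point scaling details omitted for brevity.
   */
  static void em_update_cost_sketch(struct ps_sketch *ps,
  				  unsigned long max_freq,
  				  unsigned long scale_cpu)
  {
  	ps->cost = ps->power * max_freq / (ps->freq * scale_cpu);
  }

  /* EAS hot path: pd_nrg = ps->cost * \Sum cpu_util, no division. */
  static unsigned long em_pd_energy_sketch(const struct ps_sketch *ps,
  					 unsigned long sum_util)
  {
  	return ps->cost * sum_util;
  }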
2024-02-08  PM: EM: Support late CPUs booting and capacity adjustment  [Lukasz Luba, 1 file, -0/+124]
Add the infrastructure needed to handle CPUs that boot late, which might change the capacity values of the CPUs already present. With these changes, a new CPU that tries to register an EM triggers the needed re-calculations for the EMs of the other CPUs. Thanks to that, the em_perf_state::performance values are aligned with the CPU capacity information once all CPUs have finished booting and registering their EMs. Reviewed-by: Dietmar Eggemann <dietmar.eggemann@arm.com> Tested-by: Dietmar Eggemann <dietmar.eggemann@arm.com> Signed-off-by: Lukasz Luba <lukasz.luba@arm.com> Signed-off-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com>
2024-02-08  PM: EM: Add performance field to struct em_perf_state and optimize  [Lukasz Luba, 1 file, -0/+27]
Performance doesn't scale linearly with frequency, and it may also differ between workloads. Some CPUs are designed to be particularly good at certain applications, e.g. image or video processing, while other CPUs are better at different ones. When those different types of CPUs are combined in one SoC, they should be modeled properly to get the most out of the hardware in the Energy Aware Scheduler (EAS). The Energy Model (EM) provides the power vs. performance curves to EAS, but assumes that the CPU capacity is fixed and scales linearly with the frequency. This patch allows the curve to be adjusted on the 'performance' axis as well. Code speed optimization: removing map_util_freq() avoids one division and one multiplication in the EAS hot code path. Reviewed-by: Dietmar Eggemann <dietmar.eggemann@arm.com> Tested-by: Dietmar Eggemann <dietmar.eggemann@arm.com> Signed-off-by: Lukasz Luba <lukasz.luba@arm.com> Signed-off-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com>
2024-02-08  PM: EM: Introduce em_dev_update_perf_domain() for EM updates  [Lukasz Luba, 1 file, -0/+44]
Add API function em_dev_update_perf_domain() which allows the EM to be changed safely. Concurrent updaters are serialized with a mutex and the removal of memory that will not be used any more is carried out with the help of RCU. Reviewed-by: Dietmar Eggemann <dietmar.eggemann@arm.com> Tested-by: Dietmar Eggemann <dietmar.eggemann@arm.com> Signed-off-by: Lukasz Luba <lukasz.luba@arm.com> Signed-off-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com>
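A hedged sketch of the update scheme described above (names and structure layout are illustrative, not the exact implementation): writers serialize on a mutex, the scheduler reads the table under RCU, and the old table is reclaimed only after a grace period.

  static DEFINE_MUTEX(em_pd_mutex);	/* assumed writer lock */

  int em_update_table_sketch(struct em_perf_domain *pd,
  			   struct em_perf_table *new_table)
  {
  	struct em_perf_table *old_table;

  	mutex_lock(&em_pd_mutex);
  	old_table = rcu_dereference_protected(pd->em_table,
  					lockdep_is_held(&em_pd_mutex));
  	rcu_assign_pointer(pd->em_table, new_table);
  	mutex_unlock(&em_pd_mutex);

  	/* Readers still holding the old pointer finish before the free. */
  	kfree_rcu(old_table, rcu);
  	return 0;
  }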
2024-02-08  PM: EM: Add functions for memory allocations for new EM tables  [Lukasz Luba, 1 file, -5/+33]
The runtime-modified EM table can be provided by drivers. Create a mechanism that allows device drivers to safely allocate and free the table. The same table can be used by EAS in task scheduler code paths, so make sure the memory is not freed when the device driver module is unloaded. Reviewed-by: Dietmar Eggemann <dietmar.eggemann@arm.com> Tested-by: Dietmar Eggemann <dietmar.eggemann@arm.com> Signed-off-by: Lukasz Luba <lukasz.luba@arm.com> Signed-off-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com>
2024-02-08  PM: EM: Introduce runtime modifiable table  [Lukasz Luba, 1 file, -0/+53]
The new runtime table can be populated with new power data to better reflect the actual efficiency of the device, e.g. a CPU. The power can vary over time, e.g. due to SoC temperature changes: higher temperature can increase the power values. For longer-running scenarios, such as games or camera use, when other devices (e.g. GPU, ISP) are also in use, the CPU power can change. The new EM framework is able to address this issue and change the EM data at runtime safely. Reviewed-by: Dietmar Eggemann <dietmar.eggemann@arm.com> Tested-by: Dietmar Eggemann <dietmar.eggemann@arm.com> Signed-off-by: Lukasz Luba <lukasz.luba@arm.com> Signed-off-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com>
2024-02-08  PM: EM: Split the allocation and initialization of the EM table  [Lukasz Luba, 1 file, -22/+33]
Split the process of allocation and data initialization for the EM table. The upcoming changes for modifiable EM will use it. This change is not expected to alter the general functionality. Reviewed-by: Dietmar Eggemann <dietmar.eggemann@arm.com> Tested-by: Dietmar Eggemann <dietmar.eggemann@arm.com> Signed-off-by: Lukasz Luba <lukasz.luba@arm.com> Signed-off-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com>
2024-02-08  PM: EM: Check if the get_cost() callback is present in em_compute_costs()  [Lukasz Luba, 1 file, -1/+1]
Subsequent changes will introduce a case in which 'cb->get_cost' may not be set in em_compute_costs(), so add a check to ensure that it is not NULL before attempting to dereference it. Reviewed-by: Dietmar Eggemann <dietmar.eggemann@arm.com> Tested-by: Dietmar Eggemann <dietmar.eggemann@arm.com> Signed-off-by: Lukasz Luba <lukasz.luba@arm.com> Signed-off-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com>
2024-02-08  PM: EM: Introduce em_compute_costs()  [Lukasz Luba, 1 file, -29/+43]
Move the EM costs computation code into a new dedicated function, em_compute_costs(), that can be reused in other places in the future. This change is not expected to alter the general functionality. Reviewed-by: Dietmar Eggemann <dietmar.eggemann@arm.com> Tested-by: Dietmar Eggemann <dietmar.eggemann@arm.com> Signed-off-by: Lukasz Luba <lukasz.luba@arm.com> Signed-off-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com>
2024-02-08  PM: EM: Find first CPU active while updating OPP efficiency  [Lukasz Luba, 1 file, -2/+9]
The Energy Model might be updated at runtime and the energy efficiency of each OPP may change. Thus, the cpufreq framework also needs to be updated and aligned with the new values. In order to do that, use the first active CPU from the Performance Domain. This is needed because the first CPU in the cpumask might be offline when this code path is run. Reviewed-by: Hongyan Xia <hongyan.xia2@arm.com> Reviewed-by: Dietmar Eggemann <dietmar.eggemann@arm.com> Tested-by: Dietmar Eggemann <dietmar.eggemann@arm.com> Signed-off-by: Lukasz Luba <lukasz.luba@arm.com> Signed-off-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com>
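A small sketch of the CPU selection described above (helper name assumed): any active CPU from the performance domain span is good enough for querying cpufreq.

  static int em_get_active_cpu_sketch(struct em_perf_domain *pd)
  {
  	/* The first CPU in the mask may be offline, so intersect the
  	 * PD span with the active mask instead of using cpumask_first().
  	 */
  	int cpu = cpumask_first_and(em_span_cpus(pd), cpu_active_mask);

  	return cpu < nr_cpu_ids ? cpu : -ENODEV;
  }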
2024-02-08  PM: EM: Extend em_cpufreq_update_efficiencies() argument list  [Lukasz Luba, 1 file, -6/+4]
In order to prepare the code for the modifiable EM perf_state table, make em_cpufreq_update_efficiencies() take a pointer to the EM table as its second argument and modify it to use that new argument instead of the 'table' member of dev->em_pd. No functional impact. Reviewed-by: Hongyan Xia <hongyan.xia2@arm.com> Reviewed-by: Dietmar Eggemann <dietmar.eggemann@arm.com> Tested-by: Dietmar Eggemann <dietmar.eggemann@arm.com> Signed-off-by: Lukasz Luba <lukasz.luba@arm.com> Signed-off-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com>
2024-02-08  PM: EM: Add missing newline for the message log  [Lukasz Luba, 1 file, -1/+1]
Add the missing newline to the log string in the error code path. Reviewed-by: Hongyan Xia <hongyan.xia2@arm.com> Reviewed-by: Dietmar Eggemann <dietmar.eggemann@arm.com> Tested-by: Dietmar Eggemann <dietmar.eggemann@arm.com> Signed-off-by: Lukasz Luba <lukasz.luba@arm.com> Signed-off-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com>
2024-02-05  PM: hibernate: Add support for LZ4 compression for hibernation  [Nikhil V, 3 files, -3/+41]
Extend the support for LZ4 compression to be used with hibernation. The main idea is that different compression algorithms have different characteristics and hibernation may benefit from having a choice between them: a default algorithm with a higher compression rate but slower compression/decompression, and a secondary algorithm that is faster at compression/decompression but has a lower compression rate. The LZ4 algorithm has better decompression speeds than LZO, which reduces the hibernation image restore time. As per test results:

                                      LZO            LZ4
  Size before compression (bytes)     682696704      682393600
  Size after compression (bytes)      146502402      155993547
  Decompression rate                  335.02 MB/s    501.05 MB/s
  Restore time                        4.4s           3.8s

LZO is the default compression algorithm used for hibernation. Enable CONFIG_HIBERNATION_COMP_LZ4 to set the default compressor to LZ4. Signed-off-by: Nikhil V <quic_nprakash@quicinc.com> Signed-off-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com>
2024-02-05  PM: hibernate: Move to crypto APIs for LZO compression  [Nikhil V, 4 files, -28/+132]
Currently, LZO is the only compression algorithm available for hibernation and it is used through the existing LZO library calls. However, there is no flexibility to switch to other algorithms which provide better results. The main idea is that different compression algorithms have different characteristics and hibernation may benefit when it uses alternate algorithms. Moving to crypto-based APIs lays a foundation for using other compression algorithms for hibernation. There are no functional changes introduced by this approach. Signed-off-by: Nikhil V <quic_nprakash@quicinc.com> Signed-off-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com>
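A hedged sketch of what a crypto-API-based compression call looks like (helper name assumed, error handling trimmed); the algorithm is selected by name, which is what makes swapping in another compressor straightforward.

  static int hib_compress_sketch(const char *algo, const u8 *src,
  			       unsigned int slen, u8 *dst,
  			       unsigned int *dlen)
  {
  	struct crypto_comp *cc = crypto_alloc_comp(algo, 0, 0);
  	int ret;

  	if (IS_ERR(cc))
  		return PTR_ERR(cc);

  	/* *dlen carries the destination size in and the result size out. */
  	ret = crypto_comp_compress(cc, src, slen, dst, dlen);
  	crypto_free_comp(cc);
  	return ret;
  }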
2024-02-05  PM: hibernate: Rename lzo* to make it generic  [Nikhil V, 1 file, -60/+60]
Rename lzo* identifiers to generic names, except for the lzo_xxx() APIs. This is used in the next patch, where we move to crypto-based APIs for compression. There are no functional changes introduced by this approach. Signed-off-by: Nikhil V <quic_nprakash@quicinc.com> Signed-off-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com>
2024-02-05  PM: sleep: stats: Use locking in dpm_save_failed_dev()  [Rafael J. Wysocki, 1 file, -0/+5]
Because dpm_save_failed_dev() may be called simultaneously by multiple failing device PM functions, the state of the suspend_stats fields updated by it may become inconsistent. Prevent that from happening by using a lock in dpm_save_failed_dev(). Signed-off-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com> Reviewed-by: Stanislaw Gruszka <stanislaw.gruszka@linux.intel.com> Reviewed-by: Ulf Hansson <ulf.hansson@linaro.org>
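A minimal sketch of the change (lock name assumed): the update happens inside one short, non-sleeping critical section, which is all this error path needs.

  static DEFINE_SPINLOCK(dpm_stats_lock);	/* assumed name */

  void dpm_save_failed_dev_sketch(const char *name)
  {
  	spin_lock(&dpm_stats_lock);
  	/* ... record 'name' and bump the failed-device index ... */
  	spin_unlock(&dpm_stats_lock);
  }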
2024-02-05  PM: sleep: stats: Define suspend_stats next to the code using it  [Rafael J. Wysocki, 3 files, -19/+66]
It is not necessary to define struct suspend_stats in a header file and the suspend_stats variable in the core device system-wide PM code. They both can be defined in kernel/power/main.c, next to the sysfs and debugfs code accessing suspend_stats, which can be static. Modify the code in question in accordance with the above observation and replace the static inline functions manipulating suspend_stats with regular ones defined in kernel/power/main.c. While at it, move the enum suspend_stat_step to the end of suspend.h which is a more suitable place for it. No intentional functional impact. Signed-off-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com> Reviewed-by: Ulf Hansson <ulf.hansson@linaro.org>
2024-02-05  PM: sleep: stats: Use unsigned int for success and failure counters  [Rafael J. Wysocki, 1 file, -3/+3]
Change the type of the "success" and "fail" fields in struct suspend_stats to unsigned int, because they cannot be negative. No intentional functional impact. Signed-off-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com> Reviewed-by: Stanislaw Gruszka <stanislaw.gruszka@linux.intel.com> Reviewed-by: Ulf Hansson <ulf.hansson@linaro.org>
2024-02-05  PM: sleep: stats: Use an array of step failure counters  [Rafael J. Wysocki, 2 files, -25/+27]
Instead of using a set of individual struct suspend_stats fields representing suspend step failure counters, use an array of counters indexed by enum suspend_stat_step for this purpose, which allows dpm_save_failed_step() to increment the appropriate counter automatically, so that its callers don't need to do that directly. It also allows suspend_stats_show() to carry out a loop over the counters array to print their values. Because the counters cannot become negative, use unsigned int for representing them. The only user-observable impact of this change is a different ordering of entries in the suspend_stats debugfs file which is not expected to matter. Signed-off-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com> Reviewed-by: Stanislaw Gruszka <stanislaw.gruszka@linux.intel.com> Reviewed-by: Ulf Hansson <ulf.hansson@linaro.org>
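A hedged sketch of the resulting layout (enum and array names follow the changelog, not necessarily the final code): one array indexed by the step enum replaces the individual fields, so a single helper can do all of the accounting.

  static unsigned int suspend_step_failures[SUSPEND_NR_STEPS];	/* assumed bound */

  void dpm_save_failed_step_sketch(enum suspend_stat_step step)
  {
  	/* One increment; callers no longer touch per-step fields. */
  	suspend_step_failures[step]++;
  }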
2024-02-05  PM: sleep: stats: Use array of suspend step names  [Rafael J. Wysocki, 1 file, -32/+18]
Replace suspend_step_name() in the suspend statistics code with an array of suspend step names which has fewer lines of code and less overhead. While at it, remove two unnecessary line breaks in suspend_stats_show() and adjust some white space in there to the kernel coding style for a more consistent code layout. No intentional functional impact. Signed-off-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com> Reviewed-by: Stanislaw Gruszka <stanislaw.gruszka@linux.intel.com> Reviewed-by: Ulf Hansson <ulf.hansson@linaro.org>
2023-12-20  PM: hibernate: Repair excess function parameter description warning  [Randy Dunlap, 1 file, -1/+0]
Function swsusp_close() does not have any parameters, so remove the description of parameter @exclusive to prevent this warning:

  swap.c:1573: warning: Excess function parameter 'exclusive' description in 'swsusp_close'

Signed-off-by: Randy Dunlap <rdunlap@infradead.org> [ rjw: Subject edits ] Signed-off-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com>
2023-12-20  PM: sleep: Remove obsolete comment from unlock_system_sleep()  [Kevin Hao, 1 file, -16/+0]
With the freezer changes introduced by commit f5d39b020809 ("freezer,sched: Rewrite core freezer logic"), the comment in unlock_system_sleep() has become obsolete, there is no need to retain it. Signed-off-by: Kevin Hao <haokexin@gmail.com> Signed-off-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com>
2023-12-19  PM: hibernate: Use kmap_local_page() in copy_data_page()  [Chen Haonan, 1 file, -6/+6]
kmap_atomic() has been deprecated in favor of kmap_local_page(). kmap_atomic() disables page faults and preemption (the latter only for !PREEMPT_RT kernels). The code between the mapping and un-mapping in this patch does not depend on the above-mentioned side effects, so simply replace kmap_atomic() with kmap_local_page(). Signed-off-by: Chen Haonan <chen.haonan2@zte.com.cn> [ rjw: Subject edits ] Signed-off-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com>
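A hedged sketch of the replacement pattern (surrounding copy logic elided): kmap_local_page() mappings are released in reverse order and, unlike kmap_atomic(), do not disable page faults or preemption.

  static void copy_data_page_sketch(struct page *dst_page,
  				  struct page *src_page)
  {
  	void *src = kmap_local_page(src_page);
  	void *dst = kmap_local_page(dst_page);

  	copy_page(dst, src);
  	kunmap_local(dst);
  	kunmap_local(src);
  }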
2023-12-15  PM: hibernate: Enforce ordering during image compression/decompression  [Hongchen Zhang, 1 file, -19/+19]
An S4 (suspend to disk) test on the LoongArch 3A6000 platform sometimes fails with the following error message in the dmesg log:

  Invalid LZO compressed length

That happens because, when compressing/decompressing the image, the synchronization between the control thread and the compress/decompress/crc thread is based on a relaxed ordering interface, which is unreliable, and the following situation may occur:

  CPU 0                                    CPU 1
  save_image_lzo                           lzo_compress_threadfn
                                             atomic_set(&d->stop, 1);
    atomic_read(&data[thr].stop)
    data[thr].cmp = data[thr].cmp_len;
                                             WRITE data[thr].cmp_len

Then CPU 0 gets a stale cmp_len and writes it to disk. During resume from S4, the wrong cmp_len is loaded. To maintain data consistency between the two threads, use the acquire/release variants of the atomic set and read operations. Fixes: 081a9d043c98 ("PM / Hibernate: Improve performance of LZO/plain hibernation, checksum image") Cc: All applicable <stable@vger.kernel.org> Signed-off-by: Hongchen Zhang <zhanghongchen@loongson.cn> Co-developed-by: Weihao Li <liweihao@loongson.cn> Signed-off-by: Weihao Li <liweihao@loongson.cn> [ rjw: Subject rewrite and changelog edits ] Signed-off-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com>
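A hedged sketch of the intended pairing (field names follow the changelog; the real code sleeps on a wait queue rather than spinning): the producer publishes the length before setting the stop flag with release semantics, and the consumer's acquire read guarantees it then observes the up-to-date length.

  /* compress/decompress/crc thread */
  data->cmp_len = out_len;		/* publish the result first */
  atomic_set_release(&data->stop, 1);	/* then signal completion */

  /* control thread (simplified) */
  while (!atomic_read_acquire(&data->stop))
  	cpu_relax();
  data->cmp = data->cmp_len;		/* guaranteed to see the store above */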
2023-12-15  PM: hibernate: Avoid missing wakeup events during hibernation  [Chris Feng, 2 files, -2/+10]
Wakeup events that occur in the hibernation process's hibernation_platform_enter() cannot wake up the system. Although the current hibernation framework will execute part of the recovery process after a wakeup event occurs, it ultimately performs a shutdown operation because the system does not check the return value of hibernation_platform_enter(). In short, if a wakeup event occurs before putting the system into the final low-power state, it will be missed. To solve this problem, check the return value of hibernation_platform_enter(). When it returns -EAGAIN or -EBUSY (indicating the occurrence of a wakeup event), execute the hibernation recovery process, discard the previously saved image, and ultimately return to the working state. Signed-off-by: Chris Feng <chris.feng@mediatek.com> [ rjw: Rephrase the message printed when going back to the working state ] Signed-off-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com>
2023-12-11  PM: hibernate: Do not initialize error in snapshot_write_next()  [Li zeming, 1 file, -1/+1]
The error variable in snapshot_write_next() gets a value before it is used, so don't initialize it to 0 upfront. Signed-off-by: Li zeming <zeming@nfschina.com> [ rjw: Subject and changelog rewrite ] Signed-off-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com>
2023-12-11  PM: hibernate: Do not initialize error in swap_write_page()  [Li zeming, 1 file, -1/+1]
'error' first receives the function result before it is used, and it does not need to be assigned a value during definition. Signed-off-by: Li zeming <zeming@nfschina.com> [ rjw: Subject rewrite ] Signed-off-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com>
2023-12-11  PM: hibernate: Drop unnecessary local variable initialization  [Wang chaodong, 1 file, -1/+1]
It is not necessary to initialize the error variable in create_basic_memory_bitmaps(), because it is only read after being assigned a value. Signed-off-by: Wang chaodong <chaodong@nfschina.com> [ rjw: Subject and changelog rewrite ] Signed-off-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com>
2023-10-31  Merge tag 'pm-6.7-rc1' of git://git.kernel.org/pub/scm/linux/kernel/git/rafael/linux-pm  [Linus Torvalds, 2 files, -11/+9]
Pull power management updates from Rafael Wysocki: "These add new hardware support (new Qualcomm SoC versions in cpufreq, RK3568/RK3588 in devfreq), extend the OPP (operating performance points) framework, improve cpufreq governors, fix issues and clean up code (most of the changes are in cpufreq and devfreq). Specifics:

 - Add support for several Qualcomm SoC versions and other similar changes (Christian Marangi, Dmitry Baryshkov, Luca Weiss, Neil Armstrong, Richard Acayan, Robert Marko, Rohit Agarwal, Stephan Gerhold and Varadarajan Narayanan)
 - Clean up the tegra cpufreq driver (Sumit Gupta)
 - Use of_property_read_reg() to parse "reg" in pmac32 driver (Rob Herring)
 - Add support for TI's am62p5 SoC (Bryan Brattlof)
 - Make ARM_BRCMSTB_AVS_CPUFREQ depend on !ARM_SCMI_CPUFREQ (Florian Fainelli)
 - Update Kconfig to mention i.MX7 as well (Alexander Stein)
 - Revise global turbo disable check in intel_pstate (Srinivas Pandruvada)
 - Carry out initialization of sg_cpu in the schedutil cpufreq governor in one loop (Liao Chang)
 - Simplify the condition for storing 'down_threshold' in the conservative cpufreq governor (Liao Chang)
 - Use fine-grained mutex in the userspace cpufreq governor (Liao Chang)
 - Move is_managed indicator in the userspace cpufreq governor into a per-policy structure (Liao Chang)
 - Rebuild sched-domains when removing cpufreq driver (Pierre Gondois)
 - Fix buffer overflow detection in trans_stats() (Christian Marangi)
 - Switch to dev_pm_opp_find_freq_(ceil/floor)_indexed() APIs to support specific devices like UFS which handle multiple clocks through the OPP (Operating Performance Point) framework (Manivannan Sadhasivam)
 - Add perf support to the Rockchip DFI (DDR Monitor Module) devfreq-event driver:
     * Generalize rockchip-dfi.c to support new RK3568/RK3588 using different DDR type (Sascha Hauer)
     * Convert DT binding document format to yaml (Sascha Hauer)
     * Add perf support for DFI (a unit suitable for measuring DDR utilization) to rockchip-dfi.c to extend DFI usage (Sascha Hauer)
 - Add locking to the OPP handling code in the Mediatek CCI devfreq driver, because the voltage of a shared OPP might be changed by multiple drivers (Mark Tseng, Dan Carpenter)
 - Use device_get_match_data() in the Samsung Exynos PPMU devfreq-event driver (Rob Herring)
 - Extend support for the opp-level beyond required-opps (Ulf Hansson)
 - Add dev_pm_opp_find_level_floor() (Krishna chaitanya chundru)
 - dt-bindings: Allow opp-peak-kBps for kryo CPUs, support Qualcomm Krait SoCs and document named opp-microvolt property (Bjorn Andersson, Dmitry Baryshkov and Christian Marangi)
 - Fix -Wunsequenced warning in _of_add_opp_table_v1() (Nathan Chancellor)
 - General cleanup of OPP code (Viresh Kumar)
 - Use __get_safe_page() rather than touching the list in hibernation snapshot code (Brian Geffon)
 - Fix symbol export for _SIMPLE_ variants of _PM_OPS() (Raag Jadav)
 - Clean up sync_read handling in snapshot_write_next() (Brian Geffon)
 - Fix kerneldoc comments for swsusp_check() and swsusp_close() to better match code (Christoph Hellwig)
 - Downgrade BIOS locked limits pr_warn() in the Intel RAPL power capping driver to pr_debug() (Ville Syrjälä)
 - Change the minimum python version for the intel_pstate_tracer utility from 2.7 to 3.6 (Doug Smythies)"

* tag 'pm-6.7-rc1' of git://git.kernel.org/pub/scm/linux/kernel/git/rafael/linux-pm: (82 commits)
  dt-bindings: cpufreq: qcom-hw: document SM8650 CPUFREQ Hardware
  cpufreq: arm: Kconfig: Add i.MX7 to supported SoC for ARM_IMX_CPUFREQ_DT
  cpufreq: qcom-nvmem: add support for IPQ8064
  cpufreq: qcom-nvmem: also accept operating-points-v2-krait-cpu
  cpufreq: qcom-nvmem: drop pvs_ver for format a fuses
  dt-bindings: cpufreq: qcom-cpufreq-nvmem: Document krait-cpu
  cpufreq: qcom-nvmem: add support for IPQ6018
  dt-bindings: cpufreq: qcom-cpufreq-nvmem: document IPQ6018
  cpufreq: qcom-nvmem: Add MSM8909
  cpufreq: qcom-nvmem: Simplify driver data allocation
  powercap: intel_rapl: Downgrade BIOS locked limits pr_warn() to pr_debug()
  cpufreq: stats: Fix buffer overflow detection in trans_stats()
  dt-bindings: devfreq: event: rockchip,dfi: Add rk3588 support
  dt-bindings: devfreq: event: rockchip,dfi: Add rk3568 support
  dt-bindings: devfreq: event: convert Rockchip DFI binding to yaml
  PM / devfreq: rockchip-dfi: add support for RK3588
  PM / devfreq: rockchip-dfi: account for multiple DDRMON_CTRL registers
  PM / devfreq: rockchip-dfi: make register stride SoC specific
  PM / devfreq: rockchip-dfi: Add perf support
  PM / devfreq: rockchip-dfi: give variable a better name
  ...
2023-10-28  PM: hibernate: Drop unused snapshot_test argument  [Jan Kara, 3 files, -11/+11]
The snapshot_test argument is now unused in swsusp_close() and load_image_and_restore(), so drop it. CC: linux-pm@vger.kernel.org Acked-by: Christoph Hellwig <hch@lst.de> Acked-by: "Rafael J. Wysocki" <rafael@kernel.org> Acked-by: Christian Brauner <brauner@kernel.org> Signed-off-by: Jan Kara <jack@suse.cz> Link: https://lore.kernel.org/r/20230927093442.25915-17-jack@suse.cz Signed-off-by: Christian Brauner <brauner@kernel.org>
2023-10-28  PM: hibernate: Convert to bdev_open_by_dev()  [Jan Kara, 1 file, -15/+16]
Convert hibernation code to use bdev_open_by_dev(). CC: linux-pm@vger.kernel.org Acked-by: Christoph Hellwig <hch@lst.de> Acked-by: "Rafael J. Wysocki" <rafael@kernel.org> Acked-by: Christian Brauner <brauner@kernel.org> Signed-off-by: Jan Kara <jack@suse.cz> Link: https://lore.kernel.org/r/20230927093442.25915-16-jack@suse.cz Signed-off-by: Christian Brauner <brauner@kernel.org>
2023-10-09  Merge back earlier system-wide PM changes for v6.7.  [Rafael J. Wysocki, 2 files, -11/+9]
2023-10-04  PM: hibernate: Fix copying the zero bitmap to safe pages  [Pavankumar Kondeti, 1 file, -2/+2]
The following crash is observed 100% of the time during resume from hibernation on an x86 QEMU system:

  [   12.931887] ? __die_body+0x1a/0x60
  [   12.932324] ? page_fault_oops+0x156/0x420
  [   12.932824] ? search_exception_tables+0x37/0x50
  [   12.933389] ? fixup_exception+0x21/0x300
  [   12.933889] ? exc_page_fault+0x69/0x150
  [   12.934371] ? asm_exc_page_fault+0x26/0x30
  [   12.934869] ? get_buffer.constprop.0+0xac/0x100
  [   12.935428] snapshot_write_next+0x7c/0x9f0
  [   12.935929] ? submit_bio_noacct_nocheck+0x2c2/0x370
  [   12.936530] ? submit_bio_noacct+0x44/0x2c0
  [   12.937035] ? hib_submit_io+0xa5/0x110
  [   12.937501] load_image+0x83/0x1a0
  [   12.937919] swsusp_read+0x17f/0x1d0
  [   12.938355] ? create_basic_memory_bitmaps+0x1b7/0x240
  [   12.938967] load_image_and_restore+0x45/0xc0
  [   12.939494] software_resume+0x13c/0x180
  [   12.939994] resume_store+0xa3/0x1d0

The commit being fixed introduced a bug in copying the zero bitmap to safe pages. A temporary bitmap is allocated with the PG_ANY flag in prepare_image() to make a copy of the zero bitmap after the unsafe pages are marked. Freeing this temporary bitmap with PG_UNSAFE_KEEP later results in an inconsistent state of unsafe pages. Since the free bit is left as-is for this temporary bitmap after the free, these pages are treated as unsafe pages when they are allocated again. This results in an incorrect calculation of the number of pages pre-allocated for the image:

  nr_pages = (nr_zero_pages + nr_copy_pages) - nr_highmem - allocated_unsafe_pages;

The allocated_unsafe_pages count ends up higher than the actual value, which results in running short of pages in safe_pages_list. Hence the crash is observed in get_buffer() due to a NULL pointer access of safe_pages_list. Fix this issue by creating the temporary zero bitmap from safe pages (free bit not set) so that the corresponding free bits can be cleared while freeing this bitmap. Fixes: 005e8dddd497 ("PM: hibernate: don't store zero pages in the image file") Suggested-by: Brian Geffon <bgeffon@google.com> Signed-off-by: Pavankumar Kondeti <quic_pkondeti@quicinc.com> Reviewed-by: Brian Geffon <bgeffon@google.com> Signed-off-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com>
2023-09-26  PM: hibernate: fix the kerneldoc comment for swsusp_check() and swsusp_close()  [Christoph Hellwig, 1 file, -2/+2]
The comments for both swsusp_check() and swsusp_close() don't actually describe what they are doing. Just removing the comments would probably be better, but as the file is full of useless kerneldoc comments for non-exported symbols, this fits in better with the style. Signed-off-by: Christoph Hellwig <hch@lst.de> Signed-off-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com>
2023-09-26  PM: hibernate: Clean up sync_read handling in snapshot_write_next()  [Brian Geffon, 1 file, -5/+1]
In snapshot_write_next(), sync_read is set and unset in three different spots unnecessarily. As a result there is a subtle bug where the first page after the meta data has been loaded unconditionally sets sync_read to 0. If this first PFN was actually a highmem page, then the returned buffer will be the global "buffer," and the page needs to be loaded synchronously. That is, I'm not sure we can always assume the following to be safe:

  handle->buffer = get_buffer(&orig_bm, &ca);
  handle->sync_read = 0;

Because get_buffer() can call get_highmem_page_buffer() which can return 'buffer'. The easiest way to address this is to just set sync_read before snapshot_write_next() returns if handle->buffer == buffer. Signed-off-by: Brian Geffon <bgeffon@google.com> Fixes: 8357376d3df2 ("[PATCH] swsusp: Improve handling of highmem") Cc: All applicable <stable@vger.kernel.org> [ rjw: Subject and changelog edits ] Signed-off-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com>
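A hedged sketch of the resulting logic at the end of snapshot_write_next() (simplified, not the verbatim fix): synchronous reading is requested exactly when the returned buffer is the global bounce buffer used for highmem pages.

  handle->buffer = get_buffer(&orig_bm, &ca);
  if (IS_ERR(handle->buffer))
  	return PTR_ERR(handle->buffer);
  /* Only the shared highmem bounce buffer needs a synchronous read. */
  handle->sync_read = (handle->buffer == buffer);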
2023-09-21  PM: hibernate: Use __get_safe_page() rather than touching the list  [Brian Geffon, 1 file, -4/+6]
We found at least one situation where the safe pages list was empty and get_buffer() would gladly try to use a NULL pointer. Signed-off-by: Brian Geffon <bgeffon@google.com> Fixes: 8357376d3df2 ("[PATCH] swsusp: Improve handling of highmem") Cc: All applicable <stable@vger.kernel.org> Signed-off-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com>
2023-09-12  PM: hibernate: Fix the exclusive get block device in test_resume mode  [Chen Yu, 1 file, -6/+6]
Commit 5904de0d735b ("PM: hibernate: Do not get block device exclusively in test_resume mode") fixes a hibernation issue under test_resume mode. That commit is supposed to open the block device in non-exclusive mode when in test_resume. However, the code does the opposite, which is against its description. In summary, the swap device should only be opened exclusively by swsusp_check(), with its corresponding *close(), and only in non-test_resume mode. This is to avoid the race condition in which different processes scribble on the device at the same time. All the other cases should use non-exclusive mode. Fix it by really disabling exclusive mode under test_resume. Fixes: 5904de0d735b ("PM: hibernate: Do not get block device exclusively in test_resume mode") Closes: https://lore.kernel.org/lkml/000000000000761f5f0603324129@google.com/ Reported-by: Pengfei Xu <pengfei.xu@intel.com> Signed-off-by: Chen Yu <yu.c.chen@intel.com> Tested-by: Chenzhou Feng <chenzhoux.feng@intel.com> Signed-off-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com>
2023-09-12  PM: hibernate: Rename function parameter from snapshot_test to exclusive  [Chen Yu, 2 files, -8/+10]
Several functions rely on snapshot_test to decide whether to open the resume device exclusively. However, there is no strict connection between snapshot_test and the open mode. Rename the 'snapshot_test' input parameter to 'exclusive' to better reflect the use case. No functional change is expected. Signed-off-by: Chen Yu <yu.c.chen@intel.com> Signed-off-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com>
2023-09-01  Merge tag 'tty-6.6-rc1' of git://git.kernel.org/pub/scm/linux/kernel/git/gregkh/tty  [Linus Torvalds, 1 file, -1/+1]
Pull tty/serial driver updates from Greg KH: "Here is the big set of tty and serial driver changes for 6.6-rc1. Lots of cleanups in here this cycle, and some driver updates. Short summary is:

 - Jiri's continued work to make the tty code and apis be a bit more sane with regards to modern kernel coding style and types
 - cpm_uart driver updates
 - n_gsm updates and fixes
 - meson driver updates
 - sc16is7xx driver updates
 - 8250 driver updates for different hardware types
 - qcom-geni driver fixes
 - tegra serial driver change
 - stm32 driver updates
 - synclink_gt driver cleanups
 - tty structure size reduction

All of these have been in linux-next this week with no reported issues. The last bit of cleanups from Jiri and the tty structure size reduction came in last week, a bit late but as they were just style changes and size reductions, I figured they should get into this merge cycle so that others can work on top of them with no merge conflicts"

* tag 'tty-6.6-rc1' of git://git.kernel.org/pub/scm/linux/kernel/git/gregkh/tty: (199 commits)
  tty: shrink the size of struct tty_struct by 40 bytes
  tty: n_tty: deduplicate copy code in n_tty_receive_buf_real_raw()
  tty: n_tty: extract ECHO_OP processing to a separate function
  tty: n_tty: unify counts to size_t
  tty: n_tty: use u8 for chars and flags
  tty: n_tty: simplify chars_in_buffer()
  tty: n_tty: remove unsigned char casts from character constants
  tty: n_tty: move newline handling to a separate function
  tty: n_tty: move canon handling to a separate function
  tty: n_tty: use MASK() for masking out size bits
  tty: n_tty: make n_tty_data::num_overrun unsigned
  tty: n_tty: use time_is_before_jiffies() in n_tty_receive_overrun()
  tty: n_tty: use 'num' for writes' counts
  tty: n_tty: use output character directly
  tty: n_tty: make flow of n_tty_receive_buf_common() a bool
  Revert "tty: serial: meson: Add a earlycon for the T7 SoC"
  Documentation: devices.txt: Fix minors for ttyCPM*
  Documentation: devices.txt: Remove ttySIOC*
  Documentation: devices.txt: Remove ttyIOC*
  serial: 8250_bcm7271: improve bcm7271 8250 port
  ...
2023-08-25  Merge branches 'pm-sleep', 'pm-qos' and 'powercap'  [Rafael J. Wysocki, 2 files, -40/+156]
Merge system-wide power management changes and power capping updates for 6.6-rc1:

 - Add device PM helpers to allow a device to remain powered-on during system-wide transitions (Ulf Hansson).
 - Rework hibernation memory snapshotting to avoid storing pages filled with zeros in hibernation image files (Brian Geffon).
 - Add check to make sure that CPU latency QoS constraints do not use negative values (Clive Lin).
 - Optimize rp->domains memory allocation in the Intel RAPL power capping driver (xiongxin).
 - Remove recursion while parsing zones in the arm_scmi power capping driver (Cristian Marussi).

* pm-sleep:
  PM: sleep: Add helpers to allow a device to remain powered-on
  PM: hibernate: don't store zero pages in the image file

* pm-qos:
  PM: QoS: Add check to make sure CPU latency is non-negative

* powercap:
  powercap: intel_rapl: Optimize rp->domains memory allocation
  powercap: arm_scmi: Remove recursion while parsing zones
2023-08-22  PM: QoS: Add check to make sure CPU latency is non-negative  [Clive Lin, 1 file, -2/+7]
CPU latency should never be negative; a negative value would become incorrectly high when converted to an unsigned data type. Commit 8d36694245f2 ("PM: QoS: Add check to make sure CPU freq is non-negative") makes sure CPU frequency is non-negative to fix incorrect behavior in frequency QoS. Add an analogous check to make sure CPU latency is non-negative so as to prevent this problem from happening in CPU latency QoS. Signed-off-by: Clive Lin <clive.lin@mediatek.com> [ rjw: Changelog edits ] Signed-off-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com>
2023-08-20  Merge commit b320441c04c9 ("Merge tag 'tty-6.5-rc7' of git://git.kernel.org/pub/scm/linux/kernel/git/gregkh/tty") into tty-next  [Greg Kroah-Hartman, 1 file, -1/+1]
We need the serial-core fixes in here as well. Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
2023-08-14  Merge back system-wide sleep material for v6.6.  [Rafael J. Wysocki, 1 file, -38/+149]
2023-08-07  PM: hibernate: fix resume_store() return value when hibernation not available  [Vlastimil Babka, 1 file, -1/+1]
On a laptop with hibernation set up but not actively used, and with secure boot and lockdown enabled in the kernel, 6.5-rc1 gets stuck on boot with the following repeated messages:

  A start job is running for Resume from hibernation using device /dev/system/swap (24s / no limit)
  lockdown_is_locked_down: 25311154 callbacks suppressed
  Lockdown: systemd-hiberna: hibernation is restricted; see man kernel_lockdown.7
  ...

Checking the resume code leads to commit cc89c63e2fe3 ("PM: hibernate: move finding the resume device out of software_resume") which inadvertently changed the return value from resume_store() to 0 when !hibernation_available(). This apparently translates to userspace write() returning 0 as the number of bytes written, and userspace looping indefinitely in the attempt to write the intended value. Fix this by returning the full number of bytes that were to be written, as that's what was done before the commit. Fixes: cc89c63e2fe3 ("PM: hibernate: move finding the resume device out of software_resume") Signed-off-by: Vlastimil Babka <vbabka@suse.cz> Reviewed-by: Christoph Hellwig <hch@lst.de> Acked-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org> Signed-off-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com>
2023-07-25  tty: sysrq: switch sysrq handlers from int to u8  [Jiri Slaby, 1 file, -1/+1]
The passed parameter to sysrq handlers is a key (a character). So change the type from 'int' to 'u8'. Let it specifically be 'u8' for two reasons: * unsigned: unsigned values come from the upper layers (devices) and the tty layer assumes unsigned on most places, and * 8-bit: as that what's supposed to be one day in all the layers built on the top of tty. (Currently, we use mostly 'unsigned char' and somewhere still only 'char'. (But that also translates to the former thanks to -funsigned-char.)) Signed-off-by: Jiri Slaby (SUSE) <jirislaby@kernel.org> Cc: Richard Henderson <richard.henderson@linaro.org> Cc: Ivan Kokshaysky <ink@jurassic.park.msu.ru> Cc: Matt Turner <mattst88@gmail.com> Cc: Huacai Chen <chenhuacai@kernel.org> Cc: WANG Xuerui <kernel@xen0n.name> Cc: Thomas Bogendoerfer <tsbogend@alpha.franken.de> Cc: Michael Ellerman <mpe@ellerman.id.au> Cc: Nicholas Piggin <npiggin@gmail.com> Cc: Christophe Leroy <christophe.leroy@csgroup.eu> Cc: "David S. Miller" <davem@davemloft.net> Cc: Maarten Lankhorst <maarten.lankhorst@linux.intel.com> Cc: Maxime Ripard <mripard@kernel.org> Cc: Thomas Zimmermann <tzimmermann@suse.de> Cc: David Airlie <airlied@gmail.com> Cc: Daniel Vetter <daniel@ffwll.ch> Cc: Jason Wessel <jason.wessel@windriver.com> Cc: Daniel Thompson <daniel.thompson@linaro.org> Cc: Douglas Anderson <dianders@chromium.org> Cc: "Rafael J. Wysocki" <rafael@kernel.org> Cc: Len Brown <len.brown@intel.com> Cc: Pavel Machek <pavel@ucw.cz> Cc: "Paul E. McKenney" <paulmck@kernel.org> Cc: Frederic Weisbecker <frederic@kernel.org> Cc: Neeraj Upadhyay <quic_neeraju@quicinc.com> Cc: Joel Fernandes <joel@joelfernandes.org> Cc: Josh Triplett <josh@joshtriplett.org> Cc: Boqun Feng <boqun.feng@gmail.com> Cc: Steven Rostedt <rostedt@goodmis.org> Cc: Mathieu Desnoyers <mathieu.desnoyers@efficios.com> Cc: Lai Jiangshan <jiangshanlai@gmail.com> Cc: Zqiang <qiang.zhang1211@gmail.com> Acked-by: Thomas Zimmermann <tzimmermann@suse.de> # DRM Acked-by: WANG Xuerui <git@xen0n.name> # loongarch Acked-by: Paul E. McKenney <paulmck@kernel.org> Acked-by: Daniel Thompson <daniel.thompson@linaro.org> Link: https://lore.kernel.org/r/20230712081811.29004-3-jirislaby@kernel.org Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
2023-07-24  PM: hibernate: don't store zero pages in the image file  [Brian Geffon, 1 file, -38/+149]
On ChromeOS we've observed a considerable number of in-use pages filled with zeros. Today with hibernate it's entirely possible that saveable pages are just zero filled. Since we're already copying pages word-by-word in do_copy_page it becomes almost free to determine if a page was completely filled with zeros. This change introduces a new bitmap which will track these zero pages. If a page is zero it will not be included in the saved image, instead to track these zero pages in the image file we will introduce a new flag which we will set on the packed PFN list. When reading back in the image file we will detect these zero page PFNs and rebuild the zero page bitmap. When the image is being loaded through calls to write_next_page if we encounter a zero page we will silently memset it to 0 and then continue on to the next page. Given the implementation in snapshot_read_next/snapshot_write_next this change will be transparent to non-compressed/compressed and swsusp modes of operation. To provide some concrete numbers from simple ad-hoc testing, on a device which was lightly in use we saw that: PM: hibernation: Image created (964408 pages copied, 548304 zero pages) Of the approximately 6.2GB of saveable pages 2.2GB (36%) were just zero filled and could be tracked entirely within the packed PFN list. The savings would obviously be much lower for lzo compressed images, but even in the case of compression not copying pages across to the compression threads will still speed things up. It's also possible that we would see better overall compression ratios as larger regions of "real data" would improve the compressibility. Finally, such an approach could dramatically improve swsusp performance as each one of those zero pages requires a write syscall to reload, by handling it as part of the packed PFN list we're able to fully avoid that. Signed-off-by: Brian Geffon <bgeffon@google.com> [ rjw: Whitespace adjustments, removal of redundant parentheses ] Signed-off-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com>
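A hedged sketch of how zero detection can ride along with the word-by-word copy (illustrative helper; the real do_copy_page() differs in details): the page is copied anyway, and a single OR-accumulator decides whether its PFN goes into the zero bitmap instead of the image.

  static bool copy_and_check_zero_sketch(long *dst, const long *src)
  {
  	long acc = 0;
  	int i;

  	for (i = 0; i < PAGE_SIZE / sizeof(long); i++) {
  		dst[i] = src[i];
  		acc |= src[i];	/* stays 0 only for an all-zero page */
  	}
  	return acc == 0;	/* true: track via the zero bitmap, skip in the image */
  }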
2023-07-14  Merge branches 'pm-sleep' and 'pm-qos'  [Rafael J. Wysocki, 2 files, -2/+8]
Merge a PM QoS fix and a hibernation fix for 6.5-rc2:

 - Unbreak the /sys/power/resume interface after recent changes (Azat Khuzhin).
 - Allow PM_QOS_DEFAULT_VALUE to be used with frequency QoS (Chungkai Yang).

* pm-sleep:
  PM: hibernate: Fix writing maj:min to /sys/power/resume

* pm-qos:
  PM: QoS: Restore support for default value on frequency QoS
2023-07-11  PM: QoS: Restore support for default value on frequency QoS  [Chungkai Yang, 1 file, -2/+7]
Commit 8d36694245f2 ("PM: QoS: Add check to make sure CPU freq is non-negative") makes sure CPU freq is non-negative to avoid a negative value being converted to an unsigned data type. However, when the value is PM_QOS_DEFAULT_VALUE, pm_qos_update_target() specifically uses c->default_value, which is set to FREQ_QOS_MIN/MAX_DEFAULT_VALUE when cpufreq_policy_alloc() is executed, to handle this case. Add a check for PM_QOS_DEFAULT_VALUE to let the default setting work, which fixes this problem. Fixes: 8d36694245f2 ("PM: QoS: Add check to make sure CPU freq is non-negative") Link: https://lore.kernel.org/lkml/20230626035144.19717-1-Chung-kai.Yang@mediatek.com/ Link: https://lore.kernel.org/lkml/20230627071727.16646-1-Chung-kai.Yang@mediatek.com/ Link: https://lore.kernel.org/lkml/CAJZ5v0gxNOWhC58PHeUhW_tgf6d1fGJVZ1x91zkDdht11yUv-A@mail.gmail.com/ Signed-off-by: Chungkai Yang <Chung-kai.Yang@mediatek.com> Cc: 6.0+ <stable@vger.kernel.org> # 6.0+ Signed-off-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com>
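A hedged sketch of the restored validity check (helper name assumed): negative requests are still rejected, but the PM_QOS_DEFAULT_VALUE marker is allowed through so the constraint falls back to c->default_value.

  static bool freq_qos_value_valid_sketch(s32 value)
  {
  	return value >= 0 || value == PM_QOS_DEFAULT_VALUE;
  }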
2023-07-11  PM: hibernate: Fix writing maj:min to /sys/power/resume  [Azat Khuzhin, 1 file, -0/+1]
resume_store() first calls lookup_bdev() and then tries to handle maj:min, but it does not reset the error beforehand, so writing maj:min results in ENOENT:

  # echo 259:2 >| /sys/power/resume
  bash: echo: write error: No such file or directory

This should also fix hibernation via systemd, since it uses this method. Fixes: 1e8c813b083c4 ("PM: hibernate: don't use early_lookup_bdev in resume_store") Signed-off-by: Azat Khuzhin <a3at.mail@gmail.com> Reviewed-by: Christoph Hellwig <hch@lst.de> [ rjw: Subject edits ] Signed-off-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com>
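A hedged sketch of the fix in resume_store() (simplified; surrounding code omitted): the error left over from lookup_bdev() is cleared once the maj:min form parses successfully.

  error = lookup_bdev(name, &dev);
  if (error) {
  	unsigned int maj, min;

  	if (sscanf(name, "%u:%u", &maj, &min) == 2) {
  		dev = MKDEV(maj, min);
  		error = 0;	/* the reset that was missing */
  	}
  }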