Introduction

A Linux system in an enterprise environment runs a large number of processes, both system and application, at any given point in time. These processes consume the compute resources of the underlying hardware depending on the current load on the system and on their priorities (nice values). Occasionally a process consumes an excessive amount of CPU and adversely affects the performance of the system as a whole: other processes have to wait for the CPU hog to finish before they are granted their share of the system's processing capacity.

The cpulimit utility addresses exactly this problem. As the name implies, cpulimit limits the CPU usage of a process; its main aim is to prevent a process from running for more than a specified time ratio. It does not change the nice value (the process priority) but instead throttles the actual CPU usage of the process, adapting dynamically and quickly to the overall system load. In this article we will demonstrate how to install and use this utility to limit the CPU usage of processes on our Linux systems.
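
For reference, here is the general shape of the two most common invocations. The PID and script name below are placeholders, and the exact set of options can vary between builds, so check cpulimit --help on your system.

cpulimit --limit=25 --pid=1234       # throttle an already running process (here the hypothetical PID 1234) to roughly 25% of one core
cpulimit --limit=25 ./heavy_job.sh   # launch a command and keep it under the cap from the start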

Installation
Installing cpulimit is a fairly straightforward process.

On apt-based systems, execute the following command:

apt install cpulimit

On yum-based systems the cpulimit package is provided by the EPEL repository, so EPEL needs to be enabled first; on CentOS this is typically done by running yum install epel-release. Once the EPEL repository is enabled and available, execute the following command:

[root@linuxnix ~]# yum install cpulimit -y
Loaded plugins: fastestmirror
Loading mirror speeds from cached hostfile
* base: centos.usonyx.net
* epel: mirrors.aliyun.com
* extras: centos.usonyx.net
* nux-dextop: li.nux.ro
* updates: centos.usonyx.net
Resolving Dependencies
--> Running transaction check
---> Package cpulimit.x86_64 1:0.2-1.20151118gitf4d2682.el7 will be installed
--> Finished Dependency Resolution

Dependencies Resolved

================================================================================
Package Arch Version Repository Size
================================================================================
Installing:
cpulimit x86_64 1:0.2-1.20151118gitf4d2682.el7 epel 16 k

Transaction Summary
================================================================================
Install 1 Package

Total download size: 16 k
Installed size: 26 k
Downloading packages:
cpulimit-0.2-1.20151118gitf4d2682.el7.x86_64.rpm | 16 kB 00:00:00
Running transaction check
Running transaction test
Transaction test succeeded
Running transaction
Installing : 1:cpulimit-0.2-1.20151118gitf4d2682.el7.x86_64 1/1
Verifying : 1:cpulimit-0.2-1.20151118gitf4d2682.el7.x86_64 1/1

Installed:
cpulimit.x86_64 1:0.2-1.20151118gitf4d2682.el7

Complete!
[root@linuxnix ~]#
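
Regardless of which package manager you used, a quick way to confirm that the tool is available is to check that the binary is on the PATH:

which cpulimit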

Testing and demonstration
Now that cpulimit is installed on our system, let's put it to use and validate its functionality. To do this we need a process that consumes a lot of CPU, so I've written the script below, which is essentially an infinite while loop and will end up using an entire CPU core.

[root@linuxnix ~]# cat shoot_cpu.bash
#!/bin/bash

while true
do
:
done
[root@linuxnix ~]#
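
As an aside, if you prefer not to write a script at all, a quick alternative for pegging a single CPU core is the yes utility with its output discarded:

yes > /dev/null &

Either approach gives us a convenient CPU hog to experiment on; the rest of this demonstration uses the script above.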

To run the script, let's first make it executable.

[root@linuxnix ~]# chmod +x shoot_cpu.bash
[root@linuxnix ~]# ls -l shoot_cpu.bash
-rwxr-xr-x. 1 root root 36 Sep 5 16:23 shoot_cpu.bash
[root@linuxnix ~]#

Now let’s execute the script with the & argument next to it so that it runs in the background and we get our prompt back as soon as the script execution commences.

[root@linuxnix ~]# ./shoot_cpu.bash &
[1] 1748
[root@linuxnix ~]#

Our little script with its infinite while loop is now running.

[root@linuxnix ~]# ps -ef | grep 174[8]
root 1748 1474 99 16:24 pts/0 00:00:36 /bin/bash ./shoot_cpu.bash
[root@linuxnix ~]#

Now let’s check the CPU usage for this process using the top command

[root@linuxnix ~]# top -p 1748
top - 16:26:36 up 23 min, 1 user, load average: 0.81, 0.29, 0.14
Tasks: 1 total, 1 running, 0 sleeping, 0 stopped, 0 zombie
%Cpu(s):100.0 us, 0.0 sy, 0.0 ni, 0.0 id, 0.0 wa, 0.0 hi, 0.0 si, 0.0 st
KiB Mem : 1881196 total, 1300368 free, 137024 used, 443804 buff/cache
KiB Swap: 2097148 total, 2097148 free, 0 used. 1556400 avail Mem

PID USER PR NI VIRT RES SHR S %CPU %MEM TIME+ COMMAND
1748 root 20 0 113172 1188 1004 R 99.7 0.1 1:37.49 shoot_cpu.bash

As you may have observed, the CPU utilization has hit 100% and all of it is being consumed by the script we executed. Now we will use cpulimit to restrict the CPU utilization of this process to 20%.

[root@linuxnix ~]# cpulimit -l 20 -p 1748 &
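
Here -l specifies the percentage of CPU the process is allowed to use and -p the PID of the target process; the trailing & keeps cpulimit itself running in the background while it throttles the target. As an alternative to attaching to an already running PID, most builds of cpulimit can also launch and limit a command in a single step, for example:

cpulimit -l 20 ./shoot_cpu.bash &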

Let’s check the CPU utilization of the process again using the top command like we did earlier.

[root@linuxnix ~]# top -p 1748
top - 16:45:57 up 43 min, 1 user, load average: 0.72, 0.99, 0.80
Tasks: 1 total, 1 running, 0 sleeping, 0 stopped, 0 zombie
%Cpu(s): 19.7 us, 0.3 sy, 0.0 ni, 80.0 id, 0.0 wa, 0.0 hi, 0.0 si, 0.0 st
KiB Mem : 1881196 total, 1291248 free, 140576 used, 449372 buff/cache
KiB Swap: 2097148 total, 2097148 free, 0 used. 1550700 avail Mem

PID USER PR NI VIRT RES SHR S %CPU %MEM TIME+ COMMAND
1748 root 20 0 113172 1188 1004 R 19.6 0.1 20:23.00 shoot_cpu.bash

As you can see in the above output, the CPU usage of shoot_cpu.bash has dropped to 19.6%, which is very close to the 20% limit we set. The remaining CPU capacity is now free for other processes to use.
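
Once you are done experimenting, remember to clean up the two background jobs. Assuming the job numbers from the session above (%1 for the script and %2 for cpulimit), something like this will do:

kill %1    # stop the shoot_cpu.bash loop
kill %2    # stop cpulimit as well, in case it did not exit on its own once its target disappeared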

Conclusion

cpulimit can prove to be a valuable tool in your toolkit when you work with systems whose processes are frequently constrained for compute resources. It can also be used to observe how application processes behave when they are given limited CPU capacity.
