
Top – Linux System Metrics: Statistics And Interpretation

Background

An important part of running Linux servers is being able to identify and interpret the performance metrics the operating system provides.  It is usually (but not always) easy to identify performance bottlenecks.  Many a system administrator has gotten the monitoring alert that something is wrong.  With a bit of experience it should be fairly easy to identify, fix, and prevent the problem before the owner or stakeholder hears about it from a downstream alert or an angry client.

‘top’

‘top’ is probably the most used utility for this task.  It shows the time, uptime, logged-in users, load average, task counts, CPU usage, memory usage, and per-process statistics.  In my experience this data points to the issue 99% of the time.  Here is an example ‘top’ output from a low-power virtual machine:

top - 02:57:33 up 37 days, 9:52, 3 users, load average: 0.00, 0.00, 0.00
Tasks: 80 total, 1 running, 79 sleeping, 0 stopped, 0 zombie
Cpu(s): 0.0%us, 0.0%sy, 0.0%ni,100.0%id, 0.0%wa, 0.0%hi, 0.0%si, 0.0%st
Mem: 534332k total, 456008k used, 78324k free, 4236k buffers
Swap: 1048568k total, 132972k used, 915596k free, 169448k cached

  PID USER      PR  NI  VIRT  RES  SHR S %CPU %MEM    TIME+  COMMAND
    1 root      40   0 23588  964  536 S    0  0.2  3:38.65 init
    2 root      40   0     0    0    0 S    0  0.0  0:00.00 kthreadd
    3 root      RT   0     0    0    0 S    0  0.0  0:00.06 migration/0
    4 root      20   0     0    0    0 S    0  0.0  0:00.00 ksoftirqd/0
    5 root      20   0     0    0    0 S    0  0.0  0:00.07 events/0
    6 root      20   0     0    0    0 S    0  0.0  0:00.00 khelper

Load Average

Load average is usually a very trustworthy metric of how the system as a whole is doing.  I say usually because I have experienced situations where the load average belies system health, and I will cover that below.  If your system must respond in real time you do not want it to get backlogged with CPU demand.  If you do not need real-time response then, based on experience, your load should not exceed 4-8x the number of cores for any appreciable amount of time.  Beyond that point the CPU demand, coupled with the kernel's own overhead for context switches and interrupts, can cascade out of control and just about freeze the system.

A load average represents the average number of processes demanding CPU time over a given time frame.  If a single program demands 100% CPU time for 1 minute, the load average is 1.00 for that minute.  If a single program uses 100% of the CPU for 0.1 minutes, the load average works out to roughly 0.10 over a one-minute period.  The three load averages given by ‘top’ cover 1, 5, and 15 minute intervals.  As a general rule of thumb you do not want your load average to exceed the number of CPU cores.  Here are some examples:

  1. load average: 1.00, 0.20, 0.06 – one program at 100% for the last minute, otherwise generally idle; should be fine for a 1+ core system
  2. load average: 0.00, 0.50, 2.50 – idle for the last minute, a bit of load over the last 5 minutes, something intensive within the last 15 minutes; questionable for a 2+ core system
  3. load average: 2.15, 3.20, 1.50 – constant load; problematic for a single core system, pushing the limits of a 2 core system, probably fine for a 3+ core system
  4. load average: 6.25, 9.76, 8.44 – serious load; could see major delays on anything less than 8 cores, I’d recommend 16 cores for this load.
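
If you want to check these figures outside of ‘top’, the kernel exposes them directly.  A minimal sketch using standard tools (output formatting can vary slightly by distribution):

nproc                # number of CPU cores available to the system
cat /proc/loadavg    # 1, 5, and 15 minute load averages, runnable/total tasks, and the last PID used
uptime               # the same load averages in a one-line summary

Comparing the first field of /proc/loadavg against the nproc count is an easy check to script into monitoring.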

Exception: the one case where I have seen a healthy system with crazy load averages (e.g. 45.00, 56.00, 47.00 on a quad core) has been on machines running Tomcat servers.  They appear to have processes that demand CPU time but do not actually use the CPU for some reason.  The system was very responsive (low CPU usage) yet, according to the load, it should have been hung up.  If the load points to problems it is always good to make a web, database, or other application request to confirm things really are slower than they should be.  A sluggish shell generally points to lagging performance.
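
A likely explanation, as an aside of my own: the Linux load average counts not only tasks wanting CPU but also tasks in uninterruptible sleep (state ‘D’), typically blocked on disk, NFS, or certain locks, and those inflate the load without using any CPU.  A quick way to spot them with standard ‘ps’ options:

ps -eo pid,stat,wchan:20,comm | awk '$2 ~ /^D/'    # tasks in uninterruptible sleep, which count toward the load average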

Tasks

The task statistics show the total, running, sleeping, stopped, and zombie processes.  The total is simply the number of processes the kernel currently has loaded.  Running processes are those currently executing or waiting for a turn on a CPU.  Sleeping processes are alive but ‘blocking’ at the system level, waiting for a reason to run.  Stopped processes are ones that have been ‘paused’ via a signal.  Zombie processes are those which have exited but whose parent process has not acknowledged the exit.  Zombies used to be quite common years ago but nowadays rarely show up in widely used applications.  I view any zombies as a sign of problems unless dealing with an application that routinely creates them, which is very rare.
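
If ‘top’ does report zombies, a quick sketch for finding them and the parent that is failing to reap them (standard procps ‘ps’ options):

ps -eo pid,ppid,stat,comm | awk '$3 ~ /Z/'    # zombies and their parent PIDs (PPID)

Killing a zombie itself does nothing since it is already dead; fixing or restarting the parent process (the PPID) is what clears them.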

Generally, unless there is runaway spawning of programs, the number of tasks will be of little concern.  Each process requires a small amount of CPU time for the kernel to check whether it needs to run.  On modern processors even several hundred programs require a fairly minuscule amount of management overhead.  If for some reason there are many thousands of processes, managing them can begin to take a noticeable amount of CPU.  There can also be significant CPU usage if many hundreds of programs wake up many hundreds of times per second, even if they go almost directly back to sleep.
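
To see whether either situation applies, the process count and the context-switch rate are easy to sample (vmstat ships with the procps package):

ps -e --no-headers | wc -l    # process count, roughly the same figure as the Tasks 'total' in top
vmstat 1 5                    # the 'cs' (context switches) and 'in' (interrupts) columns show how busy the scheduler is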

CPU Statistics

The CPU usage statistics are broken up into eight categories.

  • us – user:  User space programs
  • sy – system:  Kernel usage
  • ni – nice: User space programs running at a reduced (‘nice’) priority; normal-priority work is scheduled ahead of these
  • id – idle: Percent of cycles not being used
  • wa – wait: Percent of cycles not used due to IO waits, e.g. hard drive access
  • hi – hardware interrupt: Percent of cycles spent in hardware interrupt code
  • si – software interrupt: Percent of cycles spent in software interrupt code
  • st – stolen time: Percent of cycles this machine wanted but the hypervisor gave to other virtual machines (only meaningful when your system is itself a virtual machine)

If you have a single core, or are not listing all CPUs, these percentages equate to total system capacity. Pressing the ‘1’ key expands the CPU list to show the stats for each individual CPU. In combined mode on a quad core system, 25% CPU usage equates to a single core at full usage; in expanded mode that one CPU would show as 100%.

‘us’ or user time is generally the one to keep an eye on.  If you are running at full capacity you may need to upgrade or spread the workload.  If things are slow and only the ‘wa’ (wait) value is high, then the bottleneck is IO (network, disk, etc.).  If ‘st’ is high the VM host may be too busy; think about spreading out your VMs.  ‘sy’ (system) and ‘hi’ (hardware interrupts) may be higher than normal during heavy network loads.
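
To capture these numbers outside an interactive session, a couple of common options (mpstat comes from the optional sysstat package, so it may not be installed by default):

top -b -n 1 | head -5    # batch mode: print the same header once and exit, handy for scripts and cron
mpstat -P ALL 1 3        # per-CPU us/sy/wa/st breakdown, similar to pressing '1' in top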

Memory

The memory statistics show the system-wide usage of your available memory.  The metrics are ‘total, used, free, buffers, cached’.  ‘Cached’ is printed on the swap line but is unrelated to swap.

  • Total – The total amount of RAM available to the system
  • Used – The total amount of RAM allocated by the system.  This includes ‘buffers’ and ‘cached’.
  • Free – The total amount of pure unused RAM in the system.
  • Buffers – The total amount of RAM the kernel is using for block device buffers
  • Cached – The total amount of cached data.  This data will be freed whenever there is not enough ‘free’ RAM in the system.

The major thing to keep in mind is that ‘cached’ memory is almost as good as ‘free’ memory.  If you have 8GB of RAM, 7.5GB used, 7GB in cache, and 0.5GB free, you effectively have 7.5GB of available RAM.  The cache fills up with recently accessed disk blocks and the like to make future access faster, but it is given up the moment more RAM is needed.  The more that is cached the better disk performance will be, but you cannot just go by the ‘free’ value.
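
A rough way to compute that ‘effectively free’ figure directly from /proc/meminfo (on newer kernels the MemAvailable line is a more accurate estimate, but it does not exist on older systems):

awk '/^(MemFree|Buffers|Cached):/ {sum += $2} END {printf "%d MB effectively free\n", sum/1024}' /proc/meminfo
grep MemAvailable /proc/meminfo    # the kernel's own estimate, if present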

Swap should generally not be actively used if a system requires real-time performance.  Swap pushes memory pages out to storage devices to make more free RAM.  Storage devices are very slow compared to RAM, so constant swapping will affect system performance heavily.
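
The ‘used’ figure on the swap line only tells you swap has been allocated at some point; what actually hurts is active paging.  A quick check (vmstat ships with procps, swapon with util-linux):

vmstat 1 5    # sustained nonzero 'si' (swap-in) or 'so' (swap-out) values mean the system is actively paging
swapon -s     # summary of configured swap devices and how much of each is in use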

Process Statistics

The process statistics section shows details for each individual process under the header line: PID USER PR NI VIRT RES SHR S %CPU %MEM TIME+ COMMAND

  • PID – the process ID.  This is important to know when you want to send signals to a process (kill, for example); see the example after this list.
  • User – this is the username or user id of the user running the process
  • PR – this is the priority of the process
  • NI – this is the ‘nice’ level of the process
  • VIRT – this shows the ‘total’ RAM addressable by the process.  It may be very large because it includes all mapped libraries, mapped buffers (e.g. video), and shared memory.
  • RES – resident usage.  This is the physical RAM actually used by the program itself and the library components it has touched, and it is what the %MEM field is based on.
  • SHR – shared usage.  This represents the RAM that is actually shareable.  A mapped library will show up under VIRT and SHR while only the components in use will be added to RES.
  • S – state.  This can be ‘D’ (uninterruptible sleep, e.g. waiting on disk access), ‘R’ (running), ‘S’ (sleeping), ‘T’ (stopped or being traced), or ‘Z’ (zombie)
  • %CPU, %MEM – the CPU usage since the last screen refresh and the share of physical RAM, respectively.
  • TIME+ – the total amount of CPU time used since the process started.  On a quad core machine a constantly running 4-thread program can accumulate almost 4 seconds of CPU time per second.
  • Command – the name of the command that was run.
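
As a sketch of acting on what the process table shows (the PID 12345 below is a placeholder, not taken from the output above):

ps -eo pid,user,%cpu,%mem,comm --sort=-%mem | head -5    # the top memory consumers, similar to sorting top by %MEM
kill -TERM 12345       # ask a process to exit cleanly (SIGTERM); escalate to kill -KILL only as a last resort
renice +10 -p 12345    # raise the nice level (lower the priority) of a process hogging CPU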

Conclusion

‘top’ is one of the most important tools in the Linux system administrator’s toolbox.  Learning to interpret its output is key to successfully troubleshooting problems and assessing system and process health.
