Diagnosing and reallocating Linux disk space


Diagnosing Linux disk space issues (and reallocating space as necessary)

Purpose

This document explains how to check disk space on a Linux host and reallocate space from the volume group as necessary.

Prerequisites

  • A user with access to the host in question via SSH or console
  • Root access via su or sudo
  • The filesystem is on LVM and the volume group has some unallocated space (a quick check is shown below)
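
If you are unsure whether the volume group has any space left to hand out, a quick check once logged in (a sketch; vgs is part of the same LVM2 tools as the vgdisplay used later on this page) is:

[root@myserver<prod>:~]# vgs vg_local

Look at the VFree column: anything non-zero can later be allocated to a logical volume. The same information appears in the vgdisplay output further down.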


Process

Log in to the host (via SSH or console)

Become "root" with either su or sudo

[joeuser@myserver<prod>:~]# sudo su -
[sudo] password for joeuser:


Check the filesystems for free space (in megabytes)

[root@myserver<prod>:~]# df -m
Filesystem                         1M-blocks  Used Available Use% Mounted on
devtmpfs                                7922     0      7922   0% /dev
tmpfs                                   7938     0      7938   0% /dev/shm
tmpfs                                   7938    17      7922   1% /run
tmpfs                                   7938     0      7938   0% /sys/fs/cgroup
/dev/mapper/vg_local-root               2038    73      1966   4% /
/dev/mapper/vg_local-usr                4086  2360      1727  58% /usr
/dev/sda1                                495   254       242  52% /boot
/dev/mapper/vg_local-opt                4086    61      4026   2% /opt
/dev/mapper/vg_local-home              28662 24174      4489  85% /home
/dev/mapper/vg_local-var               57334  1471     55864   3% /var
/dev/mapper/vg_local-var_tmp            2038    47      1992   3% /var/tmp
/dev/mapper/vg_local-var_log            4086   116      3971   3% /var/log
/dev/mapper/vg_local-var_log_audit      1014  1014         1 100% /var/log/audit
/dev/mapper/vg_local-tmp                1014    40       975   4% /tmp

In some cases you may have enough space but have run out of inodes (too many files). You can check this as well:

[root@myserver<prod>:~]# df -i
Filesystem                           Inodes IUsed    IFree IUse% Mounted on
devtmpfs                            2027893   450  2027443    1% /dev
tmpfs                               2032080     1  2032079    1% /dev/shm
tmpfs                               2032080   756  2031324    1% /run
tmpfs                               2032080    17  2032063    1% /sys/fs/cgroup
/dev/mapper/vg_local-root           1048576  1099  1047477    1% /
/dev/mapper/vg_local-usr            2097152 56526  2040626    3% /usr
/dev/sda1                            256000   315   255685    1% /boot
/dev/mapper/vg_local-opt            2097152     6  2097146    1% /opt
/dev/mapper/vg_local-home           9192328    70  9192258    1% /home
/dev/mapper/vg_local-var           29360128 27052 29333076    1% /var
/dev/mapper/vg_local-var_tmp        1048576     3  1048573    1% /var/tmp
/dev/mapper/vg_local-var_log        2097152   140  2097012    1% /var/log
/dev/mapper/vg_local-var_log_audit  1048576   125  1048451    1% /var/log/audit
/dev/mapper/vg_local-tmp             524288    11   524277    1% /tmp

In this case the issue is actual disk space rather than inodes, which is the more common situation.
From the df output above, we can see that the full filesystem is /var/log/audit
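
On a host with many mounts it can be quicker to sort the df output by the Use% column than to eyeball it; a quick sketch using standard coreutils:

[root@myserver<prod>:~]# df -m | sort -k5 -nr | head -5

The header line sorts to the bottom, so the first few lines shown are the filesystems closest to full.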

Check out that location

[root@myserver<prod>:~]# cd /var/log/audit
[root@myserver<prod>:audit]# du -ms *
6       audit.log
9       audit.log.1
9       audit.log.10
9       audit.log.100
9       audit.log.101
9       audit.log.102
9       audit.log.103
9       audit.log.104
9       audit.log.105
9       audit.log.106
9       audit.log.107
9       audit.log.108
9       audit.log.109
9       audit.log.11
9       audit.log.110
9       audit.log.111
9       audit.log.112
9       audit.log.113
9       audit.log.114
9       audit.log.115
9       audit.log.116
9       audit.log.117
9       audit.log.118
9       audit.log.119
9       audit.log.12
9       audit.log.120
9       audit.log.121
9       audit.log.13
9       audit.log.14
9       audit.log.15
9       audit.log.16
9       audit.log.17
9       audit.log.18
etc etc
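
With this many entries, sorting the sizes and counting the files beats scrolling; a quick sketch using the same du output piped through sort, plus a simple file count:

[root@myserver<prod>:audit]# du -ms * | sort -n | tail -5
[root@myserver<prod>:audit]# ls | wc -l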

So there are plenty of files in here. They are numerous and uncompressed, so the filesystem is simply full.
Let's check what is mounted at that location

[root@myserver<prod>:audit]# mount | grep audit
/dev/mapper/vg_local-var_log_audit on /var/log/audit type xfs (rw,relatime,seclabel,attr2,inode64,noquota)

The good news is that this is part of an LVM volume group, as shown by the /dev/mapper/vg_xxxxx device name.
Assuming the volume group has some space to spare, we can reallocate some of it to this volume.
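
To see at a glance how the logical volumes sit on top of the physical disks, lsblk is also useful (a sketch; NAME, SIZE, TYPE and MOUNTPOINT are standard lsblk columns):

[root@myserver<prod>:audit]# lsblk -o NAME,SIZE,TYPE,MOUNTPOINT

Logical volumes appear with TYPE lvm, nested under the partition that holds the physical volume.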

Let's check our volume group

[root@myserver<prod>:audit]# vgdisplay vg_local
  --- Volume group ---
  VG Name               vg_local
  System ID
  Format                lvm2
  Metadata Areas        2
  Metadata Sequence No  16
  VG Access             read/write
  VG Status             resizable
  MAX LV                0
  Cur LV                10
  Open LV               10
  Max PV                0
  Cur PV                2
  Act PV                2
  VG Size               109.50 GiB
  PE Size               4.00 MiB
  Total PE              28033
  Alloc PE / Size       26880 / 105.00 GiB
  Free  PE / Size       1153 / 4.50 GiB
  VG UUID               l4GCci-d22C-gB0B-VxGI-o2dU-iTL6-rzAt8H

The "Free PE / Size" row near the bottom shows we have 4.5 GiB unallocated, and the full volume only had 1 GiB to begin with.
We can allocate some of that free space to it, so let's give it an extra gigabyte.
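
As a quick sanity check, the unallocated space is simply the free extents multiplied by the extent size, both taken from the vgdisplay output above:

[root@myserver<prod>:audit]# echo "$((1153 * 4)) MiB"
4612 MiB

4612 MiB is the 4.50 GiB reported on the "Free PE / Size" line.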

[root@myserver<prod>:audit]# lvresize -L +1G /dev/mapper/vg_local-var_log_audit
  Size of logical volume vg_local/var_log_audit changed from 1.00 GiB (256 extents) to 2.00 GiB (512 extents).
  Logical volume vg_local/var_log_audit successfully resized.

That looks good, so how's our space now?

[root@myserver<prod>:audit]# df -m
Filesystem                         1M-blocks  Used Available Use% Mounted on
devtmpfs                                7922     0      7922   0% /dev
tmpfs                                   7938     0      7938   0% /dev/shm
tmpfs                                   7938    17      7922   1% /run
tmpfs                                   7938     0      7938   0% /sys/fs/cgroup
/dev/mapper/vg_local-root               2038    73      1966   4% /
/dev/mapper/vg_local-usr                4086  2360      1727  58% /usr
/dev/sda1                                495   254       242  52% /boot
/dev/mapper/vg_local-opt                4086    61      4026   2% /opt
/dev/mapper/vg_local-home              28662 24174      4489  85% /home
/dev/mapper/vg_local-var               57334  1471     55864   3% /var
/dev/mapper/vg_local-var_tmp            2038    47      1992   3% /var/tmp
/dev/mapper/vg_local-var_log            4086   116      3971   3% /var/log
/dev/mapper/vg_local-var_log_audit      1014  1014         1 100% /var/log/audit
/dev/mapper/vg_local-tmp                1014    40       975   4% /tmp

Oops, still full? That's because we also need to resize the filesystem itself. Again, we're in luck: it's an XFS filesystem, which can be grown while mounted (ext4 also supports online growing).
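
As an aside, this two-step dance can be avoided: lvresize accepts -r (--resizefs), which grows the filesystem as part of the same operation, and on an ext4 filesystem the equivalent online-grow command would be resize2fs rather than xfs_growfs. A sketch of both, neither of which was run on this host:

[root@myserver<prod>:audit]# lvresize -r -L +1G /dev/mapper/vg_local-var_log_audit
[root@myserver<prod>:audit]# resize2fs /dev/mapper/vg_local-var_log_audit

Since this volume is XFS and has already been extended, we'll grow the filesystem with xfs_growfs: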

[root@myserver<prod>:audit]# xfs_growfs /var/log/audit
meta-data=/dev/mapper/vg_local-var_log_audit isize=512    agcount=4, agsize=65536 blks
         =                       sectsz=512   attr=2, projid32bit=1
         =                       crc=1        finobt=1, sparse=1, rmapbt=0
         =                       reflink=1
data     =                       bsize=4096   blocks=262144, imaxpct=25
         =                       sunit=0      swidth=0 blks
naming   =version 2              bsize=4096   ascii-ci=0, ftype=1
log      =internal log           bsize=4096   blocks=2560, version=2
         =                       sectsz=512   sunit=0 blks, lazy-count=1
realtime =none                   extsz=4096   blocks=0, rtextents=0
data blocks changed from 262144 to 524288

Let's check the space again...

[root@myserver<prod>:audit]# df -m
Filesystem                         1M-blocks  Used Available Use% Mounted on
devtmpfs                                7922     0      7922   0% /dev
tmpfs                                   7938     0      7938   0% /dev/shm
tmpfs                                   7938    17      7922   1% /run
tmpfs                                   7938     0      7938   0% /sys/fs/cgroup
/dev/mapper/vg_local-root               2038    73      1966   4% /
/dev/mapper/vg_local-usr                4086  2360      1727  58% /usr
/dev/sda1                                495   254       242  52% /boot
/dev/mapper/vg_local-opt                4086    61      4026   2% /opt
/dev/mapper/vg_local-home              28662 24174      4489  85% /home
/dev/mapper/vg_local-var               57334  1471     55864   3% /var
/dev/mapper/vg_local-var_tmp            2038    47      1992   3% /var/tmp
/dev/mapper/vg_local-var_log            4086   114      3973   3% /var/log
/dev/mapper/vg_local-var_log_audit      2038  1022      1017  51% /var/log/audit
/dev/mapper/vg_local-tmp                1014    40       975   4% /tmp

Much better. Our filesystem now has double the space it had previously. However, I would still look at the underlying issue and ensure the log files are being compressed and rotated on an appropriate schedule.
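
For audit logs specifically, rotation is normally handled by auditd itself rather than logrotate, and the settings live in /etc/audit/auditd.conf. A quick way to review them (a sketch; the parameter names are standard auditd.conf settings):

[root@myserver<prod>:audit]# grep -E '^(max_log_file|num_logs|max_log_file_action)' /etc/audit/auditd.conf

Here max_log_file is the size in megabytes at which auditd rotates the log, num_logs is how many log files it keeps, and max_log_file_action controls what happens when that limit is reached (ROTATE being the usual choice). Note that auditd does not compress rotated logs on its own.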


Other References

For a slightly more in-depth view, see also Resizing_a_Linux_partition_without_reboot