DB Node Patching Error: Total amount of free space in the volume group is less than 2GB

We faced the following issue while patching an Exadata DB node.

While running dbnodeupdate.sh, we received the following warning.

./dbnodeupdate.sh -u -l /u01/19625719/Infrastructure/11.2.3.3.1/ExadataDatabaseServer/p18876946_112331_Linux-x86-64.zip -v
Continue ? [Y/n]
y
(*) 2015-02-03 19:33:28: Unzipping helpers (/u01/19625719/Infrastructure/ExadataDBNodeUpdate/3.60/dbupdate-helpers.zip) to /opt/oracle.SupportTools/dbnodeupdate_helpers
(*) 2015-02-03 19:33:28: Initializing logfile /var/log/cellos/dbnodeupdate.log
(*) 2015-02-03 19:33:29: Collecting system configuration details. This may take a while...
(*) 2015-02-03 19:33:56: Validating system details for known issues and best practices. This may take a while...
(*) 2015-02-03 19:33:56: Checking free space in /u01/19625719/Infrastructure/11.2.3.3.1/ExadataDatabaseServer/iso.stage.030215193245
(*) 2015-02-03 19:33:57: Unzipping /u01/19625719/Infrastructure/11.2.3.3.1/ExadataDatabaseServer/p18876946_112331_Linux-x86-64.zip to /u01/19625719/Infrastructure/11.2.3.3.1/ExadataDatabaseServer/iso.stage.030215193240, this may take a while
(*) 2015-02-03 19:34:08: Original /etc/yum.conf moved to /etc/yum.conf.030215193240, generating new yum.conf
(*) 2015-02-03 19:34:08: Generating Exadata repository file /etc/yum.repos.d/Exadata-computenode.repo

Warning: Total amount of free space in the volume group is less than 2GB. Unable to make a snapshot

Warning: Backup functionality will not be provided by dbnodeupdate.sh and need to be made separately

Issue:

There is not enough free space (at least 2GB) in the volume group for a backup snapshot to be taken.

This is because the logical volume associated with the /u01 file system was most likely extended manually to increase usable space, leaving very little free space at the volume group level.

This may be seen on one or more nodes.
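The warning appears because dbnodeupdate.sh creates an LVM snapshot as its rollback backup, and a snapshot is carved out of unallocated space in the volume group. As a rough illustration only (the snapshot name, size and origin LV used by the tool may differ), a snapshot is created roughly like this and fails when the VG has no free extents:

# lvcreate -L 1G -s -n dbnode_snap /dev/VGExaDb/LVDbSys1

You can confirm how much unallocated space the volume group has before patching. In our environment the volume group is VGExaDb, and the VFree column needs to show at least 2GB:

# vgs VGExaDb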

Solution:

To resolve this, we need to reduce the size of the LV associated with /u01 so that space is released back to the volume group. In this case we reduced it from 739GB to 725GB.

1. Find the LV name associated with /u01

[root@db01 ~]# df -h
Filesystem            Size  Used Avail Use% Mounted on
/dev/mapper/VGExaDb-LVDbSys1
                       30G   24G  5.0G  83% /
/dev/sda1             494M   38M  431M   9% /boot
/dev/mapper/VGExaDb-LVDbOra1
                      734G  516G  162G  77% /u01
tmpfs                 215G   59G  157G  28% /dev/shm
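Here the LV backing /u01 is LVDbOra1 in volume group VGExaDb. Note that df reports the size of the file system; to see the size of the logical volume itself (739GB in our case) you can also list the LVs with a standard LVM command:

# lvs VGExaDb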

2. Back up all data on /u01 to protect against data loss during the resize procedure.
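As one possible approach (the /backup destination below is only an example; use any location outside /u01 with enough free space):

# tar -czpf /backup/u01_backup.tar.gz /u01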

3. Stop all applications utilizing /u01. In most cases this means stopping the databases and CRS, then unmounting the /u01 file system.
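For example, stopping the clusterware stack on this node (which also brings down the database instances) and then checking that no process still holds /u01 open could look like this; the Grid home path is an assumption, so substitute your own:

# /u01/app/11.2.0.4/grid/bin/crsctl stop crs
# fuser -m /u01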

#umount /u01


4. Run a file system check on the unmounted volume

[root@db01 /]# e2fsck -f /dev/mapper/VGExaDb-LVDbOra1
e2fsck 1.39 (29-May-2006)
Pass 1: Checking inodes, blocks, and sizes
Pass 2: Checking directory structure
Pass 3: Checking directory connectivity
Pass 4: Checking reference counts
Pass 5: Checking group summary information
DBORA: 4324615/98172928 files (8.8% non-contiguous), 112484897/196345856 blocks

5. Reduce the size of the file system to 725 GB

[root@db01 /]# resize2fs /dev/mapper/VGExaDb-LVDbOra1 725G
resize2fs 1.39 (29-May-2006)
Resizing the filesystem on /dev/mapper/VGExaDb-LVDbOra1 to 190054400 (4k) blocks.
The filesystem on /dev/mapper/VGExaDb-LVDbOra1 is now 190054400 blocks long.

6. Now reduce the size of the LV to 725 GB (never below the new file system size, or data will be lost)

[root@db02 /]#lvreduce -L 725G /dev/mapper/VGExaDb-LVDbOra1
  WARNING: Reducing active logical volume to 725.00 GB
  THIS MAY DESTROY YOUR DATA (filesystem etc.)
Do you really want to reduce LVDbOra1? [y/n]: y
  Reducing logical volume LVDbOra1 to 725.00 GB
  Logical volume LVDbOra1 successfully resized
  
7. Now mount the resized /u01 file system.

#mount -t ext3 /dev/mapper/VGExaDb-LVDbOra1 /u01
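You can then verify the new file system size:

# df -h /u01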

8. Execute the vgs command to confirm that free space is now available in the volume group

[root@db01 ~]# vgs
  VG      #PV #LV #SN Attr   VSize   VFree
  VGExaDb   1   4   0 wz--n- 833.98G 25.98G

9. Now you can go ahead with DB node patching using the dbnodeupdate.sh utility.
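In our case this meant re-running the same command as before; with free space now available in the volume group, the snapshot warning should no longer be reported:

./dbnodeupdate.sh -u -l /u01/19625719/Infrastructure/11.2.3.3.1/ExadataDatabaseServer/p18876946_112331_Linux-x86-64.zip -v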

You can also refer to Oracle Support Document ID 1644872.1.

Note: Please treat this article as a reference only, as some output has been truncated for security reasons.
