
[Important][AWS] How to shrink an AWS EBS volume that has an xfs file system!!

by METAVERSE STORY 2024. 11. 13.

I have an EBS volume of 50 GB and I want to decrease it to 20 GB. The OS is CentOS 7 with an XFS file system. I have followed this link, but it is more specific to ext4 and Ubuntu; can someone tell me how to proceed for the XFS file system type?

asked Jan 23, 2018 at 7:49
Venkata S S Krishna Manikeswar
  •  
    I wonder if doing it in two steps would work: 1) reduce the file system size 2) Reduce the EBS volume size. You can create a copy of your current volume and try that out before you do it on a production volume. 
    – Tim
     Commented Jan 23, 2018 at 8:09

2 Answers


You may be able to change the XFS file system size, but you cannot decrease the size of an EBS volume.

You will need to create a new EBS volume, attach it to your instance, create a file system on the new EBS volume and then copy (migrate) the files from the old EBS volume to the new EBS volume.
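For illustration only, a minimal AWS CLI sketch of that flow; the size, volume type, Availability Zone and IDs below are placeholders, not values from this question:

# Create a smaller replacement volume in the same AZ as the instance (placeholder values)
aws ec2 create-volume --size 20 --volume-type gp3 --availability-zone ap-northeast-2a
# Attach it to the instance as a secondary device, then partition it, mkfs, and copy the data over
aws ec2 attach-volume --volume-id <new-volume-id> --instance-id <instance-id> --device /dev/sdf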

answered Jan 23, 2018 at 8:39
John Hanley

Here are instructions (not exactly basic ones) on how to do this:

  1. Spin up an EC2 instance that will be your "Dupe" server. An mX.large should be sufficient. Once you can connect to it, shut it down, as you will need to attach some volumes.
  2. Create an EBS volume that will be the "New" volume. I suggest labeling both new and old volumes in the AWS console.
  3. Shut down the EC2 instance "TooBig" that you want to shrink. Hopefully you can do without it for 5-10 minutes. Depending on your volume size, it shouldn't take longer than that.
  4. Detach volume "Old" from "TooBig".
  5. Attach volume "Old" to "Dupe"
  6. Attach volume "New" to "Dupe".
  7. Start "Dupe". You will need to check that the volumes are mounted in this order. If not you'll have to modify the attached script.

-> /dev/nvme0n1p1 (this is your "Dupe" boot partition)

-> /dev/nvme1n1p1 (this should be your "Old" EBS)

-> /dev/nvme2n1p1 (this should be your "New" EBS)

Run blkid to get all your block IDs. You will need to add them to the script below.

"blkidold" is the block id of your "Old" EBS.

#!/bin/bash

# Partition the new disk (interactive fdisk session: create a single Linux partition)
fdisk /dev/nvme2n1
sleep 1
# Create an XFS file system on the new partition
mkfs.xfs /dev/nvme2n1p1
# Mount the old volume read-only and the new volume read-write
mkdir -p /mnt/nvme1n1p1 /mnt/nvme2n1p1
mount -o ro /dev/nvme1n1p1 /mnt/nvme1n1p1
mount /dev/nvme2n1p1 /mnt/nvme2n1p1
# Copy everything with a level-0 xfsdump piped into xfsrestore
xfsdump -l0 -J - /mnt/nvme1n1p1 | xfsrestore -J - /mnt/nvme2n1p1
umount /mnt/*
# "blkidold" is a placeholder: substitute the UUID of your "Old" EBS (from blkid)
xfs_admin -U blkidold /dev/nvme1n1p1
xfs_admin -U blkidold /dev/nvme2n1p1
# Reinstall the boot loader on the new disk
chroot / grub2-install /dev/nvme2n1
  8. Stop "Dupe". Detach "Old" and "New".
  9. Attach "New" to "TooBig".

You should be good to go.


 

 

Source: amazon web services - How to decrease the size of AWS EBS volume having xfs file system - Server Fault

 

===========================================

 

 

 

1. Prerequisites
Create one Amazon Linux EC2 instance, then delete only the server and keep its disk (that disk is NEW DATA).
A snapshot of the original disk is required, for the data migration (that disk is OLD DATA).

2. Procedure
Create an EC2 instance for the data migration (temp EC2).
Attach OLD DATA and NEW DATA to the temp EC2 (a CLI sketch follows below).
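A minimal CLI sketch of this attach step, assuming placeholder IDs (substitute the snapshot, volume and instance IDs from your own account) and an Availability Zone in ap-northeast-2 that matches the temp EC2:

# Create the OLD DATA volume from the snapshot of the original disk
aws ec2 create-volume --snapshot-id <original-snapshot-id> --availability-zone ap-northeast-2a
# Attach OLD DATA and NEW DATA to the temp EC2 as secondary devices
aws ec2 attach-volume --volume-id <old-data-volume-id> --instance-id <temp-ec2-id> --device /dev/sdf
aws ec2 attach-volume --volume-id <new-data-volume-id> --instance-id <temp-ec2-id> --device /dev/sdg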


[root@ip-10-0-2-232 ~]# lsblk

NAME          MAJ:MIN RM SIZE RO TYPE MOUNTPOINTS
nvme0n1        259:0    0   8G  0 disk
├─nvme0n1p1   259:1    0   8G  0 part /
├─nvme0n1p127 259:2    0   1M  0 part
└─nvme0n1p128 259:3    0  10M  0 part /boot/efi
nvme1n1        259:4    0  20G  0 disk
├─nvme1n1p1   259:5    0  20G  0 part
├─nvme1n1p127 259:6    0   1M  0 part
└─nvme1n1p128 259:7    0  10M  0 part
nvme2n1        259:8    0  10G  0 disk
├─nvme2n1p1   259:9    0  10G  0 part
├─nvme2n1p127 259:10   0   1M  0 part
└─nvme2n1p128 259:11   0  10M  0 part

nvme1n1 is OLD DATA and nvme2n1 is NEW DATA.



## Check the UUIDs
[root@ip-10-0-2-232 ~]# blkid

/dev/nvme0n1p1: LABEL="/" UUID="4852a7fc-c9bb-4adb-868c-1d581e504784" BLOCK_SIZE="4096" TYPE="xfs" PARTLABEL="Linux" PARTUUID="b3ec81c5-2c1b-4de9-b486-153198edb40e"
/dev/nvme0n1p128: SEC_TYPE="msdos" UUID="8521-A91D" BLOCK_SIZE="512" TYPE="vfat" PARTLABEL="EFI System Partition" PARTUUID="b9adbfb3-c8b3-4ab4-a2ed-27328001f274"
/dev/nvme0n1p127: PARTLABEL="BIOS Boot Partition" PARTUUID="9262e49d-6c5b-4e93-a5fe-07c4f4b43e4f"

/dev/nvme2n1p127: PARTLABEL="BIOS Boot Partition" PARTUUID="9262e49d-6c5b-4e93-a5fe-07c4f4b43e4f"
/dev/nvme2n1p1: LABEL="/" UUID="4852a7fc-c9bb-4adb-868c-1d581e504784" BLOCK_SIZE="4096" TYPE="xfs" PARTLABEL="Linux" PARTUUID="b3ec81c5-2c1b-4de9-b486-153198edb40e"
/dev/nvme2n1p128: SEC_TYPE="msdos" UUID="8521-A91D" BLOCK_SIZE="512" TYPE="vfat" PARTLABEL="EFI System Partition" PARTUUID="b9adbfb3-c8b3-4ab4-a2ed-27328001f274"

/dev/nvme1n1p127: PARTLABEL="BIOS Boot Partition" PARTUUID="b96ba8ec-c3b3-4ee3-95e9-8a2fca3897df"
/dev/nvme1n1p1: LABEL="/" UUID="7951bdaa-ebaf-456e-b87a-9ac58dec1fd0" BLOCK_SIZE="4096" TYPE="xfs" PARTLABEL="Linux" PARTUUID="9ca9624e-302f-4930-b9fa-32c0f0286e19"
/dev/nvme1n1p128: SEC_TYPE="msdos" UUID="5202-6EA7" BLOCK_SIZE="512" TYPE="vfat" PARTLABEL="EFI System Partition" PARTUUID="eb30c2f0-3796-4964-914f-1bb56900e3e9"


Here you can see the UUID of each disk. Note that nvme0n1p1 and nvme2n1p1 share the same UUID, presumably because both disks come from the same Amazon Linux image.
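As an optional check, the UUIDs of every partition can also be listed side by side in one table:

lsblk -o NAME,FSTYPE,UUID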



## Create mount points
[root@ip-10-0-2-232 ~]# mkdir /mnt/new_data
[root@ip-10-0-2-232 ~]# mkdir /mnt/old_data




## Mount
[root@ip-10-0-2-232 ~]# mount /dev/nvme1n1p1 /mnt/old_data/
[root@ip-10-0-2-232 ~]# mount /dev/nvme2n1p1 /mnt/new_data/
mount: /mnt/new_data: wrong fs type, bad option, bad superblock on /dev/nvme2n1p1, missing codepage or helper program, or other error.


OLD DATA mounted fine, but NEW DATA threw an error.
This is caused by the duplicate UUID we saw above.

To mount NEW DATA, an extra option has to be passed.

[root@ip-10-0-2-232 ~]# mount -o nouuid /dev/nvme2n1p1 /mnt/new_data/

Passing -o nouuid makes this particular mount skip the duplicate-UUID check.
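An optional sanity check that both file systems are now mounted where expected:

df -hT /mnt/old_data /mnt/new_data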


[root@ip-10-0-2-232 ~]# xfsdump -l0 -J - /mnt/old_data | xfsrestore -J - /mnt/new_data

xfsdump: using file dump (drive_simple) strategy
xfsrestore: using file dump (drive_simple) strategy
xfsdump: version 3.1.11 (dump format 3.0)
xfsrestore: version 3.1.11 (dump format 3.0)
xfsdump: level 0 dump of ip-10-0-2-232.ap-northeast-2.compute.internal:/mnt/old_data
xfsdump: dump date: Tue Nov 12 01:22:21 2024
xfsdump: session id: 795cde68-fff5-4474-8c14-36a375c14448
xfsdump: session label: ""
xfsrestore: searching media for dump
xfsdump: ino map phase 1: constructing initial dump list
xfsdump: ino map phase 2: skipping (no pruning necessary)
xfsdump: ino map phase 3: skipping (only one dump stream)
xfsdump: ino map construction complete
xfsdump: estimated dump size: 6172019584 bytes
xfsdump: creating dump session media file 0 (media 0, file 0)
xfsdump: dumping ino map
xfsdump: dumping directories
xfsrestore: examining media file 0
xfsrestore: dump description:
xfsrestore: hostname: ip-10-0-2-232.ap-northeast-2.compute.internal
xfsrestore: mount point: /mnt/old_data
xfsrestore: volume: /dev/nvme1n1p1
xfsrestore: session time: Tue Nov 12 01:22:21 2024
xfsrestore: level: 0
xfsrestore: session label: ""
xfsrestore: media label: ""
xfsrestore: file system id: 7951bdaa-ebaf-456e-b87a-9ac58dec1fd0
xfsrestore: session id: 795cde68-fff5-4474-8c14-36a375c14448
xfsrestore: media id: 17b537c4-bd23-4656-9478-6d9965e467e0
xfsrestore: searching media for directory dump
xfsrestore: reading directories
xfsdump: dumping non-directory files
xfsrestore: 7864 directories and 81698 entries processed
xfsrestore: directory post-processing
xfsrestore: restoring non-directory files
xfsdump: ending media file
xfsdump: media file size 6081518992 bytes
xfsdump: dump size (non-dir files) : 6029213392 bytes
xfsdump: dump complete: 75 seconds elapsed
xfsdump: Dump Status: SUCCESS
xfsrestore: restore complete: 76 seconds elapsed
xfsrestore: Restore Status: SUCCESS

This command takes a full (level-0) xfsdump of old_data and pipes it into xfsrestore, copying the entire contents over to new_data.
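As an optional spot check, the two trees can be compared after the restore; any output would indicate paths that differ:

diff <(cd /mnt/old_data && find . | sort) <(cd /mnt/new_data && find . | sort)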






## Step 1) Boot disk work
[root@ip-10-0-2-232 ~]# mount /dev/nvme2n1p128 /mnt/new_data/boot/efi/

Since nvme2n1p128 is the boot-related (EFI) partition, mount it under boot/efi of the new root.

[root@ip-10-0-2-232 ~]# cd /mnt/new_data/boot/efi/
[root@ip-10-0-2-232 efi]# cd EFI/amzn/
[root@ip-10-0-2-232 amzn]# vi grub.cfg

Before (NEW disk)
search.fs_uuid 4852a7fc-c9bb-4adb-868c-1d581e504784 root
set no_modules=y
set prefix=($root)'/boot/grub2'
configfile $prefix/grub.cfg

After (NEW disk)
search.fs_uuid 7951bdaa-ebaf-456e-b87a-9ac58dec1fd0 root
set no_modules=y
set prefix=($root)'/boot/grub2'
configfile $prefix/grub.cfg
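This points GRUB at the OLD data UUID, which the new XFS partition will be given in Step 3 below. If you prefer not to edit the file by hand, the same substitution can be made with sed (both UUIDs are taken from the blkid output above):

sed -i 's/4852a7fc-c9bb-4adb-868c-1d581e504784/7951bdaa-ebaf-456e-b87a-9ac58dec1fd0/' /mnt/new_data/boot/efi/EFI/amzn/grub.cfg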







## Step 2) Modify fstab
[root@ip-10-0-2-232 ~]# vi /mnt/new_data/etc/fstab


Before (NEW disk)
UUID=7951bdaa-ebaf-456e-b87a-9ac58dec1fd0     /           xfs    defaults,noatime  1   1
UUID=5202-6EA7        /boot/efi       vfat    defaults,noatime,uid=0,gid=0,umask=0077,shortname=winnt,x-systemd.automount 0 2

After (NEW disk)
UUID=7951bdaa-ebaf-456e-b87a-9ac58dec1fd0     /           xfs    defaults,noatime  1   1
UUID=8521-A91D        /boot/efi       vfat    defaults,noatime,uid=0,gid=0,umask=0077,shortname=winnt,x-systemd.automount 0 2


These UUIDs are looked up at boot. The root (/) entry keeps the OLD data UUID (the new XFS partition will be given that UUID in Step 3), while the /boot/efi entry is changed to the UUID of the NEW disk's own EFI partition.
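The same edit as a non-interactive one-liner, if preferred (the 5202-6EA7 and 8521-A91D values come from the blkid output above):

sed -i 's/5202-6EA7/8521-A91D/' /mnt/new_data/etc/fstab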

[root@ip-10-0-2-232 ~]# umount /mnt/new_data/boot/efi
[root@ip-10-0-2-232 ~]# umount /mnt/new_data





## Step 3) Change the UUID (do this AFTER unmounting!!)
[root@ip-10-0-2-232 ~]# xfs_admin -U 7951bdaa-ebaf-456e-b87a-9ac58dec1fd0 /dev/nvme2n1p1
Clearing log and setting UUID
writing all SBs
new UUID = 7951bdaa-ebaf-456e-b87a-9ac58dec1fd0
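To confirm the change took effect (optional check, run while the partition is still unmounted):

xfs_admin -u /dev/nvme2n1p1
# should print: UUID = 7951bdaa-ebaf-456e-b87a-9ac58dec1fd0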






## Step 4) Swap the disks
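A minimal sketch of the remaining swap, assuming placeholder IDs and that NEW DATA will become the root volume of a stopped instance that replaces the deleted server (/dev/xvda here; use the root device name your instance expects):

# Stop the temp EC2, then detach both volumes
aws ec2 detach-volume --volume-id <old-data-volume-id>
aws ec2 detach-volume --volume-id <new-data-volume-id>
# Attach NEW DATA as the root device of the target instance and boot it
aws ec2 attach-volume --volume-id <new-data-volume-id> --instance-id <target-instance-id> --device /dev/xvda
aws ec2 start-instances --instance-ids <target-instance-id>

Afterwards, confirm the instance boots and that the smaller volume is mounted as /.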

 

 

 

 
