How to Grow a File System in AIX
bash-3.00$ lsvg db06vg
VOLUME GROUP: db06vg VG IDENTIFIER: 00caf8cd00004c00000001043dae214d
VG STATE: active PP SIZE: 64 megabyte(s)
VG PERMISSION: read/write TOTAL PPs: 7820 (500480 megabytes)
MAX LVs: 256 FREE PPs: 2 (128 megabytes)
LVs: 1 USED PPs: 7818 (500352 megabytes)
OPEN LVs: 1 QUORUM: 6
TOTAL PVs: 10 VG DESCRIPTORS: 10
STALE PVs: 0 STALE PPs: 0
ACTIVE PVs: 10 AUTO ON: yes
MAX PPs per VG: 32512
MAX PPs per PV: 2032 MAX PVs: 16
LTG size (Dynamic): 256 kilobyte(s) AUTO SYNC: no
HOT SPARE: no BB POLICY: relocatable
bash-3.00$
bash-3.00$ lsvg -p db06vg
db06vg:
PV_NAME PV STATE TOTAL PPs FREE PPs FREE DISTRIBUTION
vpath16 active 782 0 00..00..00..00..00
vpath36 active 782 0 00..00..00..00..00
vpath37 active 782 0 00..00..00..00..00
vpath42 active 782 0 00..00..00..00..00
vpath56 active 782 0 00..00..00..00..00
vpath63 active 782 1 00..00..00..00..01
vpath17 active 782 0 00..00..00..00..00
vpath18 active 782 0 00..00..00..00..00
vpath19 active 782 1 00..00..00..00..01
vpath20 active 782 0 00..00..00..00..00
bash-3.00$ lsvpcfg vpath16
vpath16 (Avail pv db06vg) 50029479 = hdisk20 (Avail ) hdisk84 (Avail )
bash-3.00$ lsvpcfg vpath36
vpath36 (Avail pv db06vg) 00429467 = hdisk40 (Avail ) hdisk104 (Avail )
bash-3.00$ lsvpcfg vpath37
vpath37 (Avail pv db06vg) 00529467 = hdisk41 (Avail ) hdisk105 (Avail )
bash-3.00$ lsvpcfg vpath42
vpath42 (Avail pv db06vg) 10229467 = hdisk46 (Avail ) hdisk110 (Avail )
bash-3.00$ lsvpcfg vpath56
vpath56 (Avail pv db06vg) 30229467 = hdisk60 (Avail ) hdisk124 (Avail )
bash-3.00$ lsvpcfg vpath63
vpath63 (Avail pv db06vg) 60229467 = hdisk67 (Avail ) hdisk131 (Avail )
bash-3.00$ lsvpcfg vpath17
vpath17 (Avail pv db06vg) 50129479 = hdisk21 (Avail ) hdisk85 (Avail )
bash-3.00$ lsvpcfg vpath18
vpath18 (Avail pv db06vg) 50229479 = hdisk22 (Avail ) hdisk86 (Avail )
bash-3.00$ lsvpcfg vpath19
vpath19 (Avail pv db06vg) 50329479 = hdisk23 (Avail ) hdisk87 (Avail )
bash-3.00$ lsvpcfg vpath20
vpath20 (Avail pv db06vg) 50429479 = hdisk24 (Avail ) hdisk88 (Avail )
bash-3.00$
Separate the PVs by location: serial numbers ending in 29479 are on the ESS800 in Cplace; those ending in 29467 are on the ESS800 in Chouse.
bash-3.00$ lsvpcfg vpath16
vpath16 (Avail pv db06vg) 50029479 = hdisk20 (Avail ) hdisk84 (Avail )
bash-3.00$ lsvpcfg vpath17
vpath17 (Avail pv db06vg) 50129479 = hdisk21 (Avail ) hdisk85 (Avail )
bash-3.00$ lsvpcfg vpath18
vpath18 (Avail pv db06vg) 50229479 = hdisk22 (Avail ) hdisk86 (Avail )
bash-3.00$ lsvpcfg vpath19
vpath19 (Avail pv db06vg) 50329479 = hdisk23 (Avail ) hdisk87 (Avail )
bash-3.00$ lsvpcfg vpath20
vpath20 (Avail pv db06vg) 50429479 = hdisk24 (Avail ) hdisk88 (Avail )
bash-3.00$ lsvpcfg vpath36
vpath36 (Avail pv db06vg) 00429467 = hdisk40 (Avail ) hdisk104 (Avail )
bash-3.00$ lsvpcfg vpath37
vpath37 (Avail pv db06vg) 00529467 = hdisk41 (Avail ) hdisk105 (Avail )
bash-3.00$ lsvpcfg vpath42
vpath42 (Avail pv db06vg) 10229467 = hdisk46 (Avail ) hdisk110 (Avail )
bash-3.00$ lsvpcfg vpath56
vpath56 (Avail pv db06vg) 30229467 = hdisk60 (Avail ) hdisk124 (Avail )
bash-3.00$ lsvpcfg vpath63
vpath63 (Avail pv db06vg) 60229467 = hdisk67 (Avail ) hdisk131 (Avail )
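Rather than running lsvpcfg once per device, the same grouping can be scripted. A minimal sketch: the device list below is copied from the lsvpcfg output above (vpath name and serial only); on the real server one would pipe `lsvpcfg` through the same awk filter instead of using a captured string.

```shell
# Captured lsvpcfg output for db06vg (vpath name ... serial); the serial is
# the last field, and its 5-digit suffix identifies the storage server.
lsvpcfg_output='vpath16 (Avail pv db06vg) 50029479
vpath17 (Avail pv db06vg) 50129479
vpath18 (Avail pv db06vg) 50229479
vpath19 (Avail pv db06vg) 50329479
vpath20 (Avail pv db06vg) 50429479
vpath36 (Avail pv db06vg) 00429467
vpath37 (Avail pv db06vg) 00529467
vpath42 (Avail pv db06vg) 10229467
vpath56 (Avail pv db06vg) 30229467
vpath63 (Avail pv db06vg) 60229467'

# PVs on the ESS800 in Cplace (serials ending 29479)
echo "$lsvpcfg_output" | awk '$NF ~ /29479$/ {print $1}'

# PVs on the ESS800 in Chouse (serials ending 29467)
echo "$lsvpcfg_output" | awk '$NF ~ /29467$/ {print $1}'
```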
Unmirror the VG from the PVs in Chouse (in order to ensure that the "primary" copy of the VG data stays on the PVs in Cplace).
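The unmirror step can be done from smit or directly on the command line. A minimal sketch, assuming the Chouse vpaths identified above; the AIX-only command is built as a string here so the PV list is explicit, and it would be run as root on the server itself.

```shell
# Remove the mirror copy that lives on the Chouse PVs, leaving the primary
# copy of db06vg on the Cplace PVs. (AIX-only; built as a string for clarity.)
chouse_pvs="vpath36 vpath37 vpath42 vpath56 vpath63"
cmd="unmirrorvg db06vg $chouse_pvs"
echo "$cmd"
# On the AIX server, as root, run the command above.
```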



bash-3.00# lsvg -l db06vg
db06vg:
LV NAME TYPE LPs PPs PVs LV STATE MOUNT POINT
db06lv jfs2 3909 3909 5 open/syncd /db06
bash-3.00# lsvg -p db06vg
db06vg:
PV_NAME PV STATE TOTAL PPs FREE PPs FREE DISTRIBUTION
vpath16 active 782 0 00..00..00..00..00
vpath36 active 782 782 157..156..156..156..157
vpath37 active 782 782 157..156..156..156..157
vpath42 active 782 782 157..156..156..156..157
vpath56 active 782 782 157..156..156..156..157
vpath63 active 782 782 157..156..156..156..157
vpath17 active 782 0 00..00..00..00..00
vpath18 active 782 0 00..00..00..00..00
vpath19 active 782 1 00..00..00..00..01
vpath20 active 782 0 00..00..00..00..00
bash-3.00#
Notice that the “FREE PPs” and “TOTAL PPs” values for the PVs on the ESS800 in Chouse are equal, i.e., those PVs have no allocated PPs.
If there are no free PVs on the server already, add at least 2 disks to the server (1 from the ESS800 in Chouse and 1 from the ESS800 in Cplace).
bash-3.00# lspv | grep vpath | wc -l
78
bash-3.00# cfgmgr
bash-3.00# lspv | grep vpath | wc -l
82
(Above, 4 disks have been added to the server – 2 from each storage server.)
bash-3.00# lsvpcfg
.
.
.
vpath78 (Avail pv ) 10829479 = hdisk316 (Avail ) hdisk318 (Avail ) hdisk324 (Avail ) hdisk326 (Avail )
vpath79 (Avail pv ) 10929479 = hdisk317 (Avail ) hdisk319 (Avail ) hdisk325 (Avail ) hdisk327 (Avail )
vpath80 (Avail ) 00D29467 = hdisk320 (Avail ) hdisk322 (Avail ) hdisk328 (Avail ) hdisk330 (Avail )
vpath81 (Avail ) 00E29467 = hdisk321 (Avail ) hdisk323 (Avail ) hdisk329 (Avail ) hdisk331 (Avail )
bash-3.00#
Add the PVs to the VG
bash-3.00# extendvg db06vg vpath78
bash-3.00# extendvg db06vg vpath80
0516-1254 extendvg: Changing the PVID in the ODM.
bash-3.00#
(The 0516-1254 message is informational – extendvg is recording the new disk's PVID in the ODM.)
bash-3.00# lsvg -p db06vg
db06vg:
PV_NAME PV STATE TOTAL PPs FREE PPs FREE DISTRIBUTION
vpath16 active 782 0 00..00..00..00..00
vpath36 active 782 782 157..156..156..156..157
vpath37 active 782 782 157..156..156..156..157
vpath42 active 782 782 157..156..156..156..157
vpath56 active 782 782 157..156..156..156..157
vpath63 active 782 782 157..156..156..156..157
vpath17 active 782 0 00..00..00..00..00
vpath18 active 782 0 00..00..00..00..00
vpath19 active 782 1 00..00..00..00..01
vpath20 active 782 0 00..00..00..00..00
vpath78 active 782 782 157..156..156..156..157
vpath80 active 782 782 157..156..156..156..157
bash-3.00#
Grow the logical volume by the required amount, e.g., 5GB.
bash-3.00# df -g
Filesystem GB blocks Free %Used Iused %Iused Mounted on
/dev/db06lv 244.31 13.76 95% 36 1% /db06
bash-3.00# smit lv
[Figure: smit panel increasing the size of the db06lv logical volume by 80 logical partitions]
The “80” in the figure above is derived as follows: each PP in the VG db06vg is 64 MB (see the output of the “lsvg db06vg” command). Thus, to grow the LV by 5 GB, 80 PPs are needed (64 MB × 80 = 5120 MB = 5 GB).
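The arithmetic above can be checked with a line of shell:

```shell
# Each PP in db06vg is 64 MB (from "lsvg db06vg"), so a 5 GB grow needs:
pp_size_mb=64
grow_gb=5
pps_needed=$(( grow_gb * 1024 / pp_size_mb ))
echo "$pps_needed"   # 80 PPs
```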

bash-3.00# lsvg -p db06vg
db06vg:
PV_NAME PV STATE TOTAL PPs FREE PPs FREE DISTRIBUTION
vpath16 active 782 0 00..00..00..00..00
vpath36 active 782 782 157..156..156..156..157
vpath37 active 782 782 157..156..156..156..157
vpath42 active 782 782 157..156..156..156..157
vpath56 active 782 782 157..156..156..156..157
vpath63 active 782 782 157..156..156..156..157
vpath17 active 782 0 00..00..00..00..00
vpath18 active 782 0 00..00..00..00..00
vpath19 active 782 0 00..00..00..00..00
vpath20 active 782 0 00..00..00..00..00
vpath78 active 782 703 157..156..77..156..157
vpath80 active 782 782 157..156..156..156..157
(It is clear from the above output that all the PVs on the ESS800 in Chouse – vpaths 36, 37, 42, 56, 63 and 80 – are still completely unallocated.)
Grow the filesystem on the db06lv LV.
bash-3.00# df -g
Filesystem GB blocks Free %Used Iused %Iused Mounted on
/dev/db06lv 244.31 13.76 95% 36 1% /db06
bash-3.00# chfs -a size=+10485760 /db06
Filesystem size changed to 522846208
bash-3.00# df -g
Filesystem GB blocks Free %Used Iused %Iused Mounted on
/dev/db06lv 249.31 18.76 93% 36 1% /db06
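Note the units in the chfs command: with no unit suffix, the size increment is interpreted in 512-byte blocks (newer AIX levels also accept suffixes such as +5G). A quick sanity check that +10485760 blocks really is the intended 5 GB grow:

```shell
# chfs -a size=+N interprets a bare N as 512-byte blocks.
blocks_per_gb=$(( 1024 * 1024 * 1024 / 512 ))   # 2097152 blocks per GB
grow_blocks=$(( 5 * blocks_per_gb ))
echo "$grow_blocks"   # 10485760, matching the chfs command above
# The reported new size, 522846208 blocks, is about 249.3 GB,
# matching the df -g output.
```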
(Re)mirror the Volume Group
bash-3.00# smit vg

Using the PVs on the ESS800 in Chouse . . .
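A command-line equivalent of the smit panel, sketched under the assumption that all six Chouse PVs (including the newly added vpath80) are the mirror targets; the -S flag makes mirrorvg return immediately and sync in the background. The AIX-only command is built as a string here so the PV list is explicit.

```shell
# Mirror db06vg onto the Chouse PVs; -S defers the sync to the background.
# (AIX-only; built as a string for clarity.)
chouse_pvs="vpath36 vpath37 vpath42 vpath56 vpath63 vpath80"
cmd="mirrorvg -S db06vg $chouse_pvs"
echo "$cmd"
# On the AIX server, as root, run the command above.
```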



Check that the mirroring/syncing is in progress . . .
bash-3.00# ps -ef | grep syncvg
root 2675470 1 0 18:47:36 pts/0 0:00 /bin/ksh /usr/sbin/syncvg -v db06vg
One final check . . . (the ratio of LPs to PPs is 1:2, which implies mirroring)
bash-3.00# lsvg -l db06vg
db06vg:
LV NAME TYPE LPs PPs PVs LV STATE MOUNT POINT
db06lv jfs2 3989 7978 12 open/stale /db06
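The 1:2 ratio can be read straight off the lsvg -l output above:

```shell
# From "lsvg -l db06vg": 3989 LPs now map to 7978 PPs.
lps=3989
pps=7978
copies=$(( pps / lps ))
echo "$copies"   # 2 -> two physical copies of every logical partition
```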
Same number of used and free PPs on both sets of PVs (Chouse versus Cplace)
bash-3.00# lsvg -p db06vg
db06vg:
PV_NAME PV STATE TOTAL PPs FREE PPs FREE DISTRIBUTION
vpath16 active 782 0 00..00..00..00..00
vpath36 active 782 0 00..00..00..00..00
vpath37 active 782 0 00..00..00..00..00
vpath42 active 782 0 00..00..00..00..00
vpath56 active 782 0 00..00..00..00..00
vpath63 active 782 0 00..00..00..00..00
vpath17 active 782 0 00..00..00..00..00
vpath18 active 782 0 00..00..00..00..00
vpath19 active 782 0 00..00..00..00..00
vpath20 active 782 0 00..00..00..00..00
vpath78 active 782 703 157..156..77..156..157
vpath80 active 782 703 157..156..77..156..157