Increasing an ASM diskgroup on AIX by growing the underlying LV ---- an unconventional approach
First, the environment: AIX 7.1, Oracle/ASM 11.2.0.3, single instance. In a test environment, the flashdg diskgroup in ASM needed to be grown from its current 10 GB to 35 GB. The diskgroup was created on an LV carved out of a VG as a raw device. Since this is only a test environment, just the AIX administrator was notified, and he directly increased the PP count of fsflashdglv through smit lv. Afterwards the LV was visibly larger at the OS level, but flashdg1 had not grown. The whole process is described below. ----- This is an unconventional approach; the conventional one would be to create a new LV and grow flashdg by adding it as a new disk (add disk).
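For reference, the conventional add-disk route would look roughly like the sketch below. The LV name fsflashdg2lv and the PP count are hypothetical, made up for illustration:

```shell
# As root: carve a new raw LV out of datavg (name and size are hypothetical)
mklv -y fsflashdg2lv -t raw datavg 200
chown grid:asmadmin /dev/rfsflashdg2lv
chmod 660 /dev/rfsflashdg2lv

# As grid: add the raw device to the diskgroup; ASM rebalances onto it
sqlplus / as sysasm <<'EOF'
alter diskgroup FLASHDG1 add disk '/dev/rfsflashdg2lv';
EOF
```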
After the system administrator grew fsflashdglv:

root@seven1:/.root>lsvg -l datavg
datavg:
LV NAME       TYPE     LPs  PPs  PVs  LV STATE    MOUNT POINT
fsoraapplv    jfs2     280  280  1    open/syncd  /oraapp
loglv00       jfs2log  1    1    1    open/syncd  N/A
fsdatadglv    raw      160  160  1    open/syncd  N/A
fsdatadg1lv   raw      240  240  2    open/syncd  N/A
fsflashdglv   raw      280  280  2    open/syncd  N/A
grid@seven1:/home/grid>asmcmd lsdg
State    Type    Rebal  Sector  Block       AU  Total_MB  Free_MB  Req_mir_free_MB  Usable_file_MB  Offline_disks  Voting_files  Name
MOUNTED  EXTERN  N         512   4096  1048576     51200    15238                0            15238              0             N  DATADG/
MOUNTED  EXTERN  N         512   4096  1048576     10240     4057                0             4057              0             N  FLASHDG1/
grid@seven1:/home/grid>
The flashdg1 diskgroup had not grown.
Use Oracle's kfod tool to list the ASM disk information:

grid@seven1:/home/grid>kfod disk=all
--------------------------------------------------------------------------------
 Disk          Size Path                                     User     Group
================================================================================
   1:      30720 Mb /dev/rfsdatadg1lv                        grid     asmadmin
   2:      20480 Mb /dev/rfsdatadglv                         grid     asmadmin
   3:      35840 Mb /dev/rfsflashdglv                        grid     asmadmin
--------------------------------------------------------------------------------
ORACLE_SID ORACLE_HOME
================================================================================
     +ASM /oraapp/grid/gridhome
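The 35840 Mb that kfod reports is consistent with the LV's PP count. Assuming datavg uses 128 MB physical partitions (the PP SIZE is not shown in the output above, so this is an assumption), the 280 PPs of fsflashdglv come out to exactly 35840 MB:

```shell
# Assumed: 'lsvg datavg' would report PP SIZE: 128 megabyte(s)
pp_size_mb=128
pp_count=280                        # PPs of fsflashdglv from 'lsvg -l datavg'
lv_size_mb=$((pp_size_mb * pp_count))
echo "${lv_size_mb}"                # 35840, i.e. 35 GB
```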
The LV is already 35 GB at the OS level. Now look at the ASM disk header block:

grid@seven1:/home/grid>kfed read /dev/rfsflashdglv aun=0
kfbh.endian: 0 ; 0x000: 0x00
kfbh.hard: 130 ; 0x001: 0x82
kfbh.type: 1 ; 0x002: KFBTYP_DISKHEAD
kfbh.datfmt: 1 ; 0x003: 0x01
kfbh.block.blk: 0 ; 0x004: blk=0
kfbh.block.obj: 2147483648 ; 0x008: disk=0
kfbh.check: 3587268014 ; 0x00c: 0xd5d15dae
kfbh.fcn.base: 0 ; 0x010: 0x00000000
kfbh.fcn.wrap: 0 ; 0x014: 0x00000000
kfbh.spare1: 0 ; 0x018: 0x00000000
kfbh.spare2: 0 ; 0x01c: 0x00000000
kfdhdb.driver.provstr: ORCLDISK ; 0x000: length=8
kfdhdb.driver.reserved[0]: 0 ; 0x008: 0x00000000
kfdhdb.driver.reserved[1]: 0 ; 0x00c: 0x00000000
kfdhdb.driver.reserved[2]: 0 ; 0x010: 0x00000000
kfdhdb.driver.reserved[3]: 0 ; 0x014: 0x00000000
kfdhdb.driver.reserved[4]: 0 ; 0x018: 0x00000000
kfdhdb.driver.reserved[5]: 0 ; 0x01c: 0x00000000
kfdhdb.compat: 168820736 ; 0x020: 0x0a100000
kfdhdb.dsknum: 0 ; 0x024: 0x0000
kfdhdb.grptyp: 1 ; 0x026: KFDGTP_EXTERNAL
kfdhdb.hdrsts: 3 ; 0x027: KFDHDR_MEMBER
kfdhdb.dskname: FLASHDG1_0000 ; 0x028: length=13
kfdhdb.grpname: FLASHDG1 ; 0x048: length=8
kfdhdb.fgname: FLASHDG1_0000 ; 0x068: length=13
kfdhdb.capname: ; 0x088: length=0
kfdhdb.crestmp.hi: 33006034 ; 0x0a8: HOUR=0x12 DAYS=0xe MNTH=0x8 YEAR=0x7de
kfdhdb.crestmp.lo: 3537131520 ; 0x0ac: USEC=0x0 MSEC=0x116 SECS=0x2d MINS=0x34
kfdhdb.mntstmp.hi: 33006410 ; 0x0b0: HOUR=0xa DAYS=0x1a MNTH=0x8 YEAR=0x7de
kfdhdb.mntstmp.lo: 1403252736 ; 0x0b4: USEC=0x0 MSEC=0xfc SECS=0x3a MINS=0x14
kfdhdb.secsize: 512 ; 0x0b8: 0x0200
kfdhdb.blksize: 4096 ; 0x0ba: 0x1000
kfdhdb.ausize: 1048576 ; 0x0bc: 0x00100000
kfdhdb.mfact: 113792 ; 0x0c0: 0x0001bc80
kfdhdb.dsksize: 10240 ; 0x0c4: 0x00002800
kfdhdb.pmcnt: 2 ; 0x0c8: 0x00000002
kfdhdb.fstlocn: 1 ; 0x0cc: 0x00000001
kfdhdb.altlocn: 2 ; 0x0d0: 0x00000002
kfdhdb.f1b1locn: 2 ; 0x0d4: 0x00000002
kfdhdb.redomirrors[0]: 0 ; 0x0d8: 0x0000
kfdhdb.redomirrors[1]: 0 ; 0x0da: 0x0000
kfdhdb.redomirrors[2]: 0 ; 0x0dc: 0x0000
kfdhdb.redomirrors[3]: 0 ; 0x0de: 0x0000
kfdhdb.dbcompat: 168820736 ; 0x0e0: 0x0a100000
kfdhdb.grpstmp.hi: 33006034 ; 0x0e4: HOUR=0x12 DAYS=0xe MNTH=0x8 YEAR=0x7de
kfdhdb.grpstmp.lo: 3536962560 ; 0x0e8: USEC=0x0 MSEC=0x71 SECS=0x2d MINS=0x34
kfdhdb.vfstart: 0 ; 0x0ec: 0x00000000
kfdhdb.vfend: 0 ; 0x0f0: 0x00000000
kfdhdb.spfile: 0 ; 0x0f4: 0x00000000
kfdhdb.spfflg: 0 ; 0x0f8: 0x00000000
kfdhdb.ub4spare[0]: 0 ; 0x0fc: 0x00000000
kfdhdb.ub4spare[1]: 0 ; 0x100: 0x00000000
kfdhdb.ub4spare[2]: 0 ; 0x104: 0x00000000
kfdhdb.ub4spare[3]: 0 ; 0x108: 0x00000000
kfdhdb.ub4spare[4]: 0 ; 0x10c: 0x00000000
kfdhdb.ub4spare[5]: 0 ; 0x110: 0x00000000
kfdhdb.ub4spare[6]: 0 ; 0x114: 0x00000000
kfdhdb.ub4spare[7]: 0 ; 0x118: 0x00000000
kfdhdb.ub4spare[8]: 0 ; 0x11c: 0x00000000
kfdhdb.ub4spare[9]: 0 ; 0x120: 0x00000000
kfdhdb.ub4spare[10]: 0 ; 0x124: 0x00000000
kfdhdb.ub4spare[11]: 0 ; 0x128: 0x00000000
kfdhdb.ub4spare[12]: 0 ; 0x12c: 0x00000000
kfdhdb.ub4spare[13]: 0 ; 0x130: 0x00000000
kfdhdb.ub4spare[14]: 0 ; 0x134: 0x00000000
kfdhdb.ub4spare[15]: 0 ; 0x138: 0x00000000
kfdhdb.ub4spare[16]: 0 ; 0x13c: 0x00000000
kfdhdb.ub4spare[17]: 0 ; 0x140: 0x00000000
kfdhdb.ub4spare[18]: 0 ; 0x144: 0x00000000
kfdhdb.ub4spare[19]: 0 ; 0x148: 0x00000000
kfdhdb.ub4spare[20]: 0 ; 0x14c: 0x00000000
kfdhdb.ub4spare[21]: 0 ; 0x150: 0x00000000
kfdhdb.ub4spare[22]: 0 ; 0x154: 0x00000000
kfdhdb.ub4spare[23]: 0 ; 0x158: 0x00000000
kfdhdb.ub4spare[24]: 0 ; 0x15c: 0x00000000
kfdhdb.ub4spare[25]: 0 ; 0x160: 0x00000000
kfdhdb.ub4spare[26]: 0 ; 0x164: 0x00000000
kfdhdb.ub4spare[27]: 0 ; 0x168: 0x00000000
kfdhdb.ub4spare[28]: 0 ; 0x16c: 0x00000000
kfdhdb.ub4spare[29]: 0 ; 0x170: 0x00000000
kfdhdb.ub4spare[30]: 0 ; 0x174: 0x00000000
kfdhdb.ub4spare[31]: 0 ; 0x178: 0x00000000
kfdhdb.ub4spare[32]: 0 ; 0x17c: 0x00000000
kfdhdb.ub4spare[33]: 0 ; 0x180: 0x00000000
kfdhdb.ub4spare[34]: 0 ; 0x184: 0x00000000
kfdhdb.ub4spare[35]: 0 ; 0x188: 0x00000000
kfdhdb.ub4spare[36]: 0 ; 0x18c: 0x00000000
kfdhdb.ub4spare[37]: 0 ; 0x190: 0x00000000
kfdhdb.ub4spare[38]: 0 ; 0x194: 0x00000000
kfdhdb.ub4spare[39]: 0 ; 0x198: 0x00000000
kfdhdb.ub4spare[40]: 0 ; 0x19c: 0x00000000
kfdhdb.ub4spare[41]: 0 ; 0x1a0: 0x00000000
kfdhdb.ub4spare[42]: 0 ; 0x1a4: 0x00000000
kfdhdb.ub4spare[43]: 0 ; 0x1a8: 0x00000000
kfdhdb.ub4spare[44]: 0 ; 0x1ac: 0x00000000
kfdhdb.ub4spare[45]: 0 ; 0x1b0: 0x00000000
kfdhdb.ub4spare[46]: 0 ; 0x1b4: 0x00000000
kfdhdb.ub4spare[47]: 0 ; 0x1b8: 0x00000000
kfdhdb.ub4spare[48]: 0 ; 0x1bc: 0x00000000
kfdhdb.ub4spare[49]: 0 ; 0x1c0: 0x00000000
kfdhdb.ub4spare[50]: 0 ; 0x1c4: 0x00000000
kfdhdb.ub4spare[51]: 0 ; 0x1c8: 0x00000000
kfdhdb.ub4spare[52]: 0 ; 0x1cc: 0x00000000
kfdhdb.ub4spare[53]: 0 ; 0x1d0: 0x00000000
kfdhdb.acdb.aba.seq: 0 ; 0x1d4: 0x00000000
kfdhdb.acdb.aba.blk: 0 ; 0x1d8: 0x00000000
kfdhdb.acdb.ents: 0 ; 0x1dc: 0x0000
kfdhdb.acdb.ub2spare: 0 ; 0x1de: 0x0000
grid@seven1:/home/grid>
The disk header block shows that ASM still records this disk as 10 GB (kfdhdb.dsksize: 10240), not 35 GB. So I started to wonder: is there some additional step needed on the Grid side?
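When you only care about the recorded size, the header read can be filtered instead of scanning the full dump (same device as above):

```shell
kfed read /dev/rfsflashdglv aun=0 | grep dsksize
# kfdhdb.dsksize: 10240  -> ASM still records 10 GB for this disk
```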
I asked around among friends without getting an answer; finally an expert colleague told me to try resizing the disk. It is only a test environment anyway, so why not:

SQL> alter diskgroup FLASHDG1 resize disk FLASHDG1_0000 size 35840M;
Diskgroup altered.
That seemed to work; check right away:

grid@seven1:/home/grid>asmcmd lsdg
State    Type    Rebal  Sector  Block       AU  Total_MB  Free_MB  Req_mir_free_MB  Usable_file_MB  Offline_disks  Voting_files  Name
MOUNTED  EXTERN  N         512   4096  1048576     51200    15238                0            15238              0             N  DATADG/
MOUNTED  EXTERN  N         512   4096  1048576     35840    29657                0            29657              0             N  FLASHDG1/
grid@seven1:/home/grid>
OK, flashdg1 now sees the full 35 GB.
On 10.2.0.4 and later, if a diskgroup is dismounted after new disks are added to it, and attempting to mount the diskgroup then fails with ORA-15042: ASM disk is missing right after the add disk took place, this post may serve as a reference.
Tue Feb 12 17:33:59 2013
NOTE: X->S down convert bast on F1B3 bastCount=2
Wed Feb 13 04:06:38 2013 < ALTER DISKGROUP DG1 ADD DISK
'/dev/mapper/t1_asm03p1',
'/dev/mapper/t1_asm04p1',
'/dev/mapper/t1_asm05p1',
'/dev/mapper/t1_asm06p1'
rebalance power 4
Wed Feb 13 04:06:38 2013
NOTE: reconfiguration of group 1/0x53bffa1 (DG1), full=1
Wed Feb 13 04:06:39 2013
NOTE: initializing header on grp 1 disk DG1_0026
NOTE: initializing header on grp 1 disk DG1_0027
NOTE: initializing header on grp 1 disk DG1_0028
NOTE: initializing header on grp 1 disk DG1_0029
NOTE: cache opening disk 26 of grp 1: DG1_0026 path:/dev/mapper/t1_asm03p1
NOTE: cache opening disk 27 of grp 1: DG1_0027 path:/dev/mapper/t1_asm04p1
NOTE: cache opening disk 28 of grp 1: DG1_0028 path:/dev/mapper/t1_asm05p1
NOTE: cache opening disk 29 of grp 1: DG1_0029 path:/dev/mapper/t1_asm06p1
NOTE: PST update: grp = 1
NOTE: requesting all-instance disk validation for group=1
Wed Feb 13 04:06:39 2013
NOTE: disk validation pending for group 1/0x53bffa1 (DG1)
Wed Feb 13 04:06:40 2013
NOTE: requesting all-instance membership refresh for group=1
Wed Feb 13 04:06:40 2013
NOTE: membership refresh pending for group 1/0x53bffa1 (DG1)
Here you can see that the diskgroup mount proceeded as far as the recovering COD for group 1/0x3a2c35d6 (DG) stage, at which point a logically corrupt block was found:

WARNING: cache read a corrupted block gn=1 dsk=0 blk=2817 from disk 0
NOTE: a corrupted block was dumped to the trace file
ERROR: cache failed to read dsk=0 blk=2817 from disk(s): 0

and this bad block triggered ORA-15196: invalid ASM block header [kfc.c:8281] [endian_kfbh] [2147483648] [2817] [173 != 1].
Here 2817 is the block number of the failing ASM metadata block, and 173 is the value actually read at the endian_kfbh position; in 173 != 1, the 1 is the value that should be stored there. Because a bogus byte-order value was read from the block's endian_kfbh field, ASM raised this internal error (reported here as ORA-15196).
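To dump the suspect block with kfed, the physical block number must first be mapped to kfed's aun/blkn coordinates. The arithmetic below is a sketch assuming the common 1 MB AU and 4 KB metadata block sizes (matching the kfed header fields shown earlier in this post); the device path in the final comment is a placeholder:

```shell
# Map physical metadata block 2817 on disk 0 to an (AU, block-in-AU) pair,
# assuming ausize=1048576 and blksize=4096 => 256 blocks per AU.
blk=2817
blocks_per_au=$((1048576 / 4096))
aun=$((blk / blocks_per_au))
blkn=$((blk % blocks_per_au))
echo "aun=${aun} blkn=${blkn}"   # aun=11 blkn=1
# Then: kfed read /dev/<disk0_path> aun=$aun blkn=$blkn
```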
The COD in recovering COD for group 1/0x3a2c35d6 (DG) refers to ASM metadata file number 4, the Continuing Operation Directory (COD). Metadata file 4 records operations that cannot be completed within a single metadata block, so that they can be recovered if the ASM instance crashes mid-operation; examples are creating, deleting and resizing files. Within file number 4, blkn=1 is KFBTYP_COD_RB, the rollback data, and the blocks after it are KFBTYP_COD_DATA.
The operations (opcodes) that can be rolled back include:
1 - Create a file
2 - Delete a file
3 - Resize a file
4 - Drop alias entry
5 - Rename alias entry
6 - Rebalance space COD
7 - Drop disks force
8 - Attribute drop
9 - Disk Resync
10 - Disk Repair Time
11 - Volume create
12 - Volume delete
13 - Attribute directory creation
14 - Set zone attributes
15 - User drop
Every time an ASM diskgroup attempts to mount, it reads the data in file number 4 (the COD) to guarantee that each recorded operation is either completed or rolled back.
When this kind of metadata corruption appears in ASM file number 4 (the COD), repair generally requires setting internal events by hand and attempting to manually patch the ASM metadata.
When you hit this kind of incident, it is recommended to back up the first 100 MB of each ASM disk (the disk header and initial metadata) immediately, preserving the scene so that it is intact when professional recovery staff step in.
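A minimal sketch of such a backup, using dd against the raw LV from the first part of this post (adjust the device and output path to your own system):

```shell
# Back up the first 100 MB (header plus initial ASM metadata) of an ASM disk
dd if=/dev/rfsflashdglv of=/backup/rfsflashdglv.first100m bs=1048576 count=100
```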
If you cannot resolve it yourself, the ASKMACLEAN professional Oracle database recovery team can help you recover the data.