lab-time: upgrading grid infrastructure (gi) from 12.1 to 18c – the final version
In an earlier blogpost, I was playing around in an unsupported way to upgrade my lab Grid Infrastructure from 12.1 to 18c. The problem there was that the 18c software was not officially available for on-premises installations. Now it is! During my holidays I had a little time to play around with it, and this is how I upgraded my cluster.
Reading the documentation, it seems very easy (and it really is): unzip the software and run gridSetup.sh. But I wouldn't write a blogpost if I didn't encounter something, would I?
Installation
Software staging
Create the new directories
[root@labvmr01n01 ~]# mkdir -p /u01/app/18.0.0/grid
[root@labvmr01n01 ~]# chown -R grid:oinstall /u01/app/18.0.0
[root@labvmr01n01 ~]#
And this has to be done on all the nodes.
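If you have passwordless SSH as root between the nodes, a small loop saves some typing. A minimal sketch, assuming the node names used in this lab (adjust the list to your cluster):

# hypothetical loop - run as root on the first node, the other nodes reachable over ssh
for node in labvmr01n02 labvmr01n03 labvmr01n04; do
  ssh ${node} "mkdir -p /u01/app/18.0.0/grid && chown -R grid:oinstall /u01/app/18.0.0"
done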
Unzipping the software has to be done as the owner of the Grid Infrastructure home, on the first node only:
[grid@labvmr01n01 grid]$ unzip -qd /u01/app/18.0.0/grid /ora18csoft/V974952-01.zip
[grid@labvmr01n01 grid]$
Prechecks
Perform some very basic quick checks to ensure the cluster is healthy.
Check your current release and version:
[root@labvmr01n01 ~]# crsctl query crs releaseversion
Oracle High Availability Services release version on the local node is [12.1.0.2.0]
[root@labvmr01n01 ~]#
[root@labvmr01n01 ~]# crsctl query crs softwareversion
Oracle Clusterware version on node [labvmr01n01] is [12.1.0.2.0]
[root@labvmr01n01 ~]#
Determine your active version:
[root@labvmr01n01 ~]# crsctl query crs activeversion -f
Oracle Clusterware active version on the cluster is [12.1.0.2.0]. The cluster upgrade state is [NORMAL]. The cluster active patch level is [3544584551].
[root@labvmr01n01 ~]#
Remark: my active patch level is 3544584551. This is important, because the installer checks whether patch 21255373 is installed in your current home. It is a full rolling patch which is applied using opatchauto; I did not have any issues applying it in my environment, so I won't cover that here.
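If you want to verify that upfront, a quick check from the existing 12.1 home should do. A sketch, using the OPatch that ships with the 12.1 GI home in this lab:

[grid@labvmr01n01 ~]$ /u01/app/12.1.0/grid/OPatch/opatch lspatches | grep 21255373

No output would mean the patch is missing and needs to be applied with opatchauto first.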
And a quick check that we can talk to CRS:
[root@labvmr01n01 ~]# /u01/app/12.1.0/grid/bin/crsctl check crs
CRS-4638: Oracle High Availability Services is online
CRS-4537: Cluster Ready Services is online
CRS-4529: Cluster Synchronization Services is online
CRS-4533: Event Manager is online
[root@labvmr01n01 ~]#
And then I usually gather some evidence.
[root@labvmr01n01 ~]# /u01/app/12.1.0/grid/bin/crsctl stat res -t > /root/crsstatus.txt
[root@labvmr01n01 ~]#
That way, I can always refer back to “what was the output again?”.
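You can capture more than just the resource status, of course; a few extra commands I would typically add to the evidence file. A sketch, using the 12.1 home path from this lab:

[root@labvmr01n01 ~]# /u01/app/12.1.0/grid/bin/crsctl query crs activeversion -f >> /root/crsstatus.txt
[root@labvmr01n01 ~]# /u01/app/12.1.0/grid/bin/olsnodes -n -s -t >> /root/crsstatus.txt
[root@labvmr01n01 ~]# /u01/app/12.1.0/grid/bin/crsctl query css votedisk >> /root/crsstatus.txt
[root@labvmr01n01 ~]# /u01/app/12.1.0/grid/bin/ocrcheck >> /root/crsstatus.txt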
Cluvfy
As the Grid Infrastructure owner, run cluvfy in pre-install mode:
/u01/app/18.0.0/grid/runcluvfy.sh stage -pre crsinst -upgrade -rolling -src_crshome /u01/app/12.1.0/grid -dest_crshome /u01/app/18.0.0/grid -dest_version 18.0.0.0.0 -fixup -verbose |
The outcome in my case:
Pre-check for cluster services setup was unsuccessful.

Checks did not pass for the following nodes:
        labvmr01n04,labvmr01n03,labvmr01n02,labvmr01n01

Failures were encountered during execution of CVU verification request "stage -pre crsinst".

Verifying Swap Size ...FAILED
labvmr01n04: PRVF-7573 : Sufficient swap size is not available on node "labvmr01n04"
             [Required = 7.6739GB (8046680.0KB) ; Found = 1023.9961MB (1048572.0KB)]

labvmr01n03: PRVF-7573 : Sufficient swap size is not available on node "labvmr01n03"
             [Required = 7.6739GB (8046680.0KB) ; Found = 1023.9961MB (1048572.0KB)]

labvmr01n02: PRVF-7573 : Sufficient swap size is not available on node "labvmr01n02"
             [Required = 7.6739GB (8046680.0KB) ; Found = 1023.9961MB (1048572.0KB)]

labvmr01n01: PRVF-7573 : Sufficient swap size is not available on node "labvmr01n01"
             [Required = 7.6739GB (8046680.0KB) ; Found = 1023.9961MB (1048572.0KB)]

CVU operation performed:      stage -pre crsinst
Date:                         Aug 27, 2018 4:20:31 PM
CVU home:                     /u01/app/18.0.0/grid/
User:                         grid
[grid@labvmr01n01 grid]$
Apart from the swap size failure, which I accept in my lab, this looks good to me, so good to go.
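If you do want to satisfy the swap check, for instance outside a lab, adding a swap file is the quickest route. A sketch, run as root on every node, assuming roughly 8GB of free space on the root filesystem:

[root@labvmr01n01 ~]# dd if=/dev/zero of=/swapfile bs=1M count=8192
[root@labvmr01n01 ~]# chmod 600 /swapfile
[root@labvmr01n01 ~]# mkswap /swapfile
[root@labvmr01n01 ~]# swapon /swapfile
[root@labvmr01n01 ~]# echo "/swapfile swap swap defaults 0 0" >> /etc/fstab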
Setup
As written earlier, doing the upgrade is simple: unzip the software and run gridSetup.sh. When you have a response file, you can use it to do a silent upgrade; otherwise, just use the GUI.
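For completeness, a silent run would look roughly like this (a sketch; the response file name is hypothetical, it would be one you saved from an earlier pass through the wizard):

[grid@labvmr01n01 ~]$ /u01/app/18.0.0/grid/gridSetup.sh -silent -responseFile /home/grid/gi_18c_upgrade.rsp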
I have an X server available and a stable network, so this time I did it interactively:
[grid@labvmr01n01 ~]$ export DISPLAY=:1
[grid@labvmr01n01 ~]$ /u01/app/18.0.0/grid/gridSetup.sh
Then the installer window pops up:
OUI reads my mind and pre-selected the correct radio button in my case: “Upgrade Oracle Grid Infrastructure”.
You need to verify that all your nodes are listed. My 4 nodes are correctly listed and I have SSH key equivalence already set up from my previous installation.
I’m fine with the default Oracle Base and my Software Location (GI home) matches the directory in which the software has been unzipped previously.
Let’s be a little lazy and check if it works, so I entered my root password so Oracle can run the root.sh and other configuration scripts for me.
I actually like this option for bigger clusters. You can define batches in which the root and configuration scripts will be executed. As you will see later in the wizard, OUI gives you the choice to execute the scripts now or at a later moment in time. I can think of some use cases in which this comes in useful, so let's try it: I created 2 batches.
You won't escape it: the OUI will do some pre-checks as well.
There we go. After unpacking the 18c software on my first node, the installer thinks /u01 is too small. It actually is not: this is my lab, I will remove the 12.1 software afterwards anyhow, and I have the necessary space available. So in this particular case it is safe to ignore. In a production environment I would not continue; it would be better to extend the /u01 filesystem. But again, it's a lab and I know it fits, so I could ignore this one safely.
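A quick sanity check before ignoring such a warning could look like this (a sketch; the mount point and home paths are the ones used in this lab):

[root@labvmr01n01 ~]# df -h /u01
[root@labvmr01n01 ~]# du -sh /u01/app/12.1.0 /u01/app/18.0.0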
Mandatory confirmation
And off we go. I usually save my response file for later use.
And the installer takes off. Of course I forgot to take the initial screenshot, but the next one is interesting.
How nice: the installer asks permission before it uses the root password.
This kicks off the root scripts (the first batch in my case).
And here it asks if you want to run the scripts on the second batch as well. I want this, so “Execute now”.
And finally my upgrade succeeded! Yay!
Post tasks
First things first: verify that all went well.
[root@labvmr01n01 ~]# which crsctl
/u01/app/18.0.0/grid/bin/crsctl
[root@labvmr01n01 ~]# crsctl query crs releaseversion
Oracle High Availability Services release version on the local node is [18.0.0.0.0]
[root@labvmr01n01 ~]# crsctl query crs softwareversion
Oracle Clusterware version on node [labvmr01n01] is [18.0.0.0.0]
[root@labvmr01n01 ~]# crsctl query crs activeversion -f
Oracle Clusterware active version on the cluster is [18.0.0.0.0]. The cluster upgrade state is [NORMAL]. The cluster active patch level is [70732493].
[root@labvmr01n01 ~]# crsctl check crs
CRS-4638: Oracle High Availability Services is online
CRS-4537: Cluster Ready Services is online
CRS-4529: Cluster Synchronization Services is online
CRS-4533: Event Manager is online
[root@labvmr01n01 ~]#
That looks good to me.
The easiest way to see if everything is running again:
[grid@labvmr01n01 ~]$ /u01/app/18.0.0/grid/bin/crsctl stat res -t
--------------------------------------------------------------------------------
Name           Target  State        Server                   State details
--------------------------------------------------------------------------------
Local Resources
--------------------------------------------------------------------------------
ora.ACFS.VOL_ACFS.advm
               ONLINE  ONLINE       labvmr01n01              STABLE
               ONLINE  ONLINE       labvmr01n02              STABLE
               ONLINE  ONLINE       labvmr01n03              STABLE
               ONLINE  ONLINE       labvmr01n04              STABLE
ora.ACFS.dg
               ONLINE  ONLINE       labvmr01n01              STABLE
               ONLINE  ONLINE       labvmr01n02              STABLE
               ONLINE  ONLINE       labvmr01n03              STABLE
               ONLINE  ONLINE       labvmr01n04              STABLE
ora.ASMNET1LSNR_ASM.lsnr
               ONLINE  ONLINE       labvmr01n01              STABLE
               ONLINE  ONLINE       labvmr01n02              STABLE
               ONLINE  ONLINE       labvmr01n03              STABLE
               ONLINE  ONLINE       labvmr01n04              STABLE
ora.DATA.GHCHKPT.advm
               OFFLINE OFFLINE      labvmr01n01              STABLE
               OFFLINE OFFLINE      labvmr01n02              STABLE
               OFFLINE OFFLINE      labvmr01n03              STABLE
               OFFLINE OFFLINE      labvmr01n04              STABLE
ora.DATA.dg
               ONLINE  ONLINE       labvmr01n01              STABLE
               ONLINE  ONLINE       labvmr01n02              STABLE
               ONLINE  ONLINE       labvmr01n03              STABLE
               ONLINE  ONLINE       labvmr01n04              STABLE
ora.LISTENER.lsnr
               ONLINE  ONLINE       labvmr01n01              STABLE
               ONLINE  ONLINE       labvmr01n02              STABLE
               ONLINE  ONLINE       labvmr01n03              STABLE
               ONLINE  ONLINE       labvmr01n04              STABLE
ora.RECO.dg
               ONLINE  ONLINE       labvmr01n01              STABLE
               ONLINE  ONLINE       labvmr01n02              STABLE
               ONLINE  ONLINE       labvmr01n03              STABLE
               ONLINE  ONLINE       labvmr01n04              STABLE
ora.acfs.vol_acfs.acfs
               ONLINE  ONLINE       labvmr01n01              mounted on /acfs,STABLE
               ONLINE  ONLINE       labvmr01n02              mounted on /acfs,STABLE
               ONLINE  ONLINE       labvmr01n03              mounted on /acfs,STABLE
               ONLINE  ONLINE       labvmr01n04              mounted on /acfs,STABLE
ora.data.ghchkpt.acfs
               OFFLINE OFFLINE      labvmr01n01              volume /opt/oracle/rhp_images/chkbase is unmounted,STABLE
               OFFLINE OFFLINE      labvmr01n02              volume /opt/oracle/rhp_images/chkbase is unmounted,STABLE
               OFFLINE OFFLINE      labvmr01n03              volume /opt/oracle/rhp_images/chkbase is unmounted,STABLE
               OFFLINE OFFLINE      labvmr01n04              volume /opt/oracle/rhp_images/chkbase is unmounted,STABLE
ora.helper
               OFFLINE OFFLINE      labvmr01n01              STABLE
               OFFLINE OFFLINE      labvmr01n02              STABLE
               OFFLINE OFFLINE      labvmr01n03              STABLE
               OFFLINE OFFLINE      labvmr01n04              STABLE
ora.net1.network
               ONLINE  ONLINE       labvmr01n01              STABLE
               ONLINE  ONLINE       labvmr01n02              STABLE
               ONLINE  ONLINE       labvmr01n03              STABLE
               ONLINE  ONLINE       labvmr01n04              STABLE
ora.ons
               ONLINE  ONLINE       labvmr01n01              STABLE
               ONLINE  ONLINE       labvmr01n02              STABLE
               ONLINE  ONLINE       labvmr01n03              STABLE
               ONLINE  ONLINE       labvmr01n04              STABLE
ora.proxy_advm
               ONLINE  ONLINE       labvmr01n01              STABLE
               ONLINE  ONLINE       labvmr01n02              STABLE
               ONLINE  ONLINE       labvmr01n03              STABLE
               ONLINE  ONLINE       labvmr01n04              STABLE
--------------------------------------------------------------------------------
Cluster Resources
--------------------------------------------------------------------------------
ora.LISTENER_SCAN1.lsnr
      1        ONLINE  ONLINE       labvmr01n01              STABLE
ora.LISTENER_SCAN2.lsnr
      1        ONLINE  ONLINE       labvmr01n02              STABLE
ora.LISTENER_SCAN3.lsnr
      1        ONLINE  ONLINE       labvmr01n03              STABLE
ora.MGMTLSNR
      1        ONLINE  ONLINE       labvmr01n01              169.254.21.36 192.168.123.1,STABLE
ora.asm
      1        ONLINE  ONLINE       labvmr01n01              STABLE
      2        ONLINE  ONLINE       labvmr01n02              STABLE
      3        ONLINE  ONLINE       labvmr01n03              STABLE
      4        ONLINE  ONLINE       labvmr01n04              STABLE
ora.cdbrac.db
      1        ONLINE  ONLINE       labvmr01n01              Open,HOME=/u01/app/oracle/product/12.1.0/dbhome_1,STABLE
      2        ONLINE  ONLINE       labvmr01n02              Open,HOME=/u01/app/oracle/product/12.1.0/dbhome_1,STABLE
ora.cdbrac.raclab_pdb_ha_test.svc
      1        ONLINE  ONLINE       labvmr01n01              STABLE
      2        ONLINE  ONLINE       labvmr01n02              STABLE
ora.cvu
      1        ONLINE  ONLINE       labvmr01n04              STABLE
ora.gns
      1        ONLINE  ONLINE       labvmr01n01              STABLE
ora.gns.vip
      1        ONLINE  ONLINE       labvmr01n01              STABLE
ora.labvmr01n01.vip
      1        ONLINE  ONLINE       labvmr01n01              STABLE
ora.labvmr01n02.vip
      1        ONLINE  ONLINE       labvmr01n02              STABLE
ora.labvmr01n03.vip
      1        ONLINE  ONLINE       labvmr01n03              STABLE
ora.labvmr01n04.vip
      1        ONLINE  ONLINE       labvmr01n04              STABLE
ora.mgmtdb
      1        ONLINE  ONLINE       labvmr01n01              Open,STABLE
ora.qosmserver
      1        ONLINE  ONLINE       labvmr01n04              STABLE
ora.rhpserver
      1        OFFLINE OFFLINE                               STABLE
ora.scan1.vip
      1        ONLINE  ONLINE       labvmr01n01              STABLE
ora.scan2.vip
      1        ONLINE  ONLINE       labvmr01n02              STABLE
ora.scan3.vip
      1        ONLINE  ONLINE       labvmr01n03              STABLE
--------------------------------------------------------------------------------
[grid@labvmr01n01 ~]$
The next steps were to:
- enable the GHCHKPT ADVM volume and its ACFS filesystem
- enable and start RHP (Rapid Home Provisioning)
In my case they were not enabled by default. You can choose: either you do it in the brand new fancy asmca and click around (in the settings box, you can enter the root password, which makes life a little easier), or you use the command line. It's up to you.
Right-click the GHCHKPT volume and click “enable on all nodes”.
And in my case it threw an error:
“CRS-2501: Resource ‘ora.DATA.GHCHKPT.advm’ is disabled”
now what …
OK… back to the CLI then, because I didn't find a quick way to do it in the asmca interface.
First check the volumes:
[grid@labvmr01n01 ~]$ srvctl config volume
Diskgroup name: ACFS
Volume name: VOL_ACFS
Volume device: /dev/asm/vol_acfs-159
Volume is enabled.
Volume is individually enabled on nodes:
Volume is individually disabled on nodes:
Diskgroup name: DATA
Volume name: GHCHKPT
Volume device: /dev/asm/ghchkpt-436
Volume is disabled.
Volume is individually enabled on nodes:
Volume is individually disabled on nodes:
[grid@labvmr01n01 ~]$
Now we know the device name and we can enable it:
[grid@labvmr01n01 ~]$ srvctl enable volume -device /dev/asm/ghchkpt-436
[grid@labvmr01n01 ~]$
When we retry the same operation in the asmca GUI, it succeeds.
Then you can ask the interface to show the acfs mount command:
and it tells you exactly what to do
So basically … you should mount the filesystem yourself. That’s ok for me.
So back to the CLI:
[root@labvmr01n01 ~]# /u01/app/18.0.0/grid/bin/srvctl start filesystem -d /dev/asm/ghchkpt-436
PRCA-1138 : failed to start one or more file system resources:
CRS-2501: Resource 'ora.data.ghchkpt.acfs' is disabled
[root@labvmr01n01 ~]#
*sigh* OK, OK… this error is simple: enable it and then start it:
[root@labvmr01n01 ~]# /u01/app/18.0.0/grid/bin/srvctl enable filesystem -d /dev/asm/ghchkpt-436
[root@labvmr01n01 ~]# /u01/app/18.0.0/grid/bin/srvctl start filesystem -d /dev/asm/ghchkpt-436
[root@labvmr01n01 ~]# /u01/app/18.0.0/grid/bin/srvctl status filesystem -d /dev/asm/ghchkpt-436
ACFS file system /opt/oracle/rhp_images/chkbase is mounted on nodes labvmr01n01,labvmr01n02,labvmr01n03,labvmr01n04
[root@labvmr01n01 ~]#
and then it works.
Then RHP still needs to be done. For RHP, you have to use the CLI as the grid user:
[grid@labvmr01n01 ~]$ /u01/app/18.0.0/grid/bin/srvctl start rhpserver
[grid@labvmr01n01 ~]$ /u01/app/18.0.0/grid/bin/srvctl status rhpserver
Rapid Home Provisioning Server is enabled
Rapid Home Provisioning Server is running on node labvmr01n03
[grid@labvmr01n01 ~]$
After doing all this, everything is online:
[grid@labvmr01n01 ~]$ /u01/app/18.0.0/grid/bin/crsctl stat res -t
--------------------------------------------------------------------------------
Name           Target  State        Server                   State details
--------------------------------------------------------------------------------
Local Resources
--------------------------------------------------------------------------------
ora.ACFS.VOL_ACFS.advm
               ONLINE  ONLINE       labvmr01n01              STABLE
               ONLINE  ONLINE       labvmr01n02              STABLE
               ONLINE  ONLINE       labvmr01n03              STABLE
               ONLINE  ONLINE       labvmr01n04              STABLE
ora.ACFS.dg
               ONLINE  ONLINE       labvmr01n01              STABLE
               ONLINE  ONLINE       labvmr01n02              STABLE
               ONLINE  ONLINE       labvmr01n03              STABLE
               ONLINE  ONLINE       labvmr01n04              STABLE
ora.ASMNET1LSNR_ASM.lsnr
               ONLINE  ONLINE       labvmr01n01              STABLE
               ONLINE  ONLINE       labvmr01n02              STABLE
               ONLINE  ONLINE       labvmr01n03              STABLE
               ONLINE  ONLINE       labvmr01n04              STABLE
ora.DATA.GHCHKPT.advm
               ONLINE  ONLINE       labvmr01n01              STABLE
               ONLINE  ONLINE       labvmr01n02              STABLE
               ONLINE  ONLINE       labvmr01n03              STABLE
               ONLINE  ONLINE       labvmr01n04              STABLE
ora.DATA.dg
               ONLINE  ONLINE       labvmr01n01              STABLE
               ONLINE  ONLINE       labvmr01n02              STABLE
               ONLINE  ONLINE       labvmr01n03              STABLE
               ONLINE  ONLINE       labvmr01n04              STABLE
ora.LISTENER.lsnr
               ONLINE  ONLINE       labvmr01n01              STABLE
               ONLINE  ONLINE       labvmr01n02              STABLE
               ONLINE  ONLINE       labvmr01n03              STABLE
               ONLINE  ONLINE       labvmr01n04              STABLE
ora.RECO.dg
               ONLINE  ONLINE       labvmr01n01              STABLE
               ONLINE  ONLINE       labvmr01n02              STABLE
               ONLINE  ONLINE       labvmr01n03              STABLE
               ONLINE  ONLINE       labvmr01n04              STABLE
ora.acfs.vol_acfs.acfs
               ONLINE  ONLINE       labvmr01n01              mounted on /acfs,STABLE
               ONLINE  ONLINE       labvmr01n02              mounted on /acfs,STABLE
               ONLINE  ONLINE       labvmr01n03              mounted on /acfs,STABLE
               ONLINE  ONLINE       labvmr01n04              mounted on /acfs,STABLE
ora.data.ghchkpt.acfs
               ONLINE  ONLINE       labvmr01n01              mounted on /opt/oracle/rhp_images/chkbase,STABLE
               ONLINE  ONLINE       labvmr01n02              mounted on /opt/oracle/rhp_images/chkbase,STABLE
               ONLINE  ONLINE       labvmr01n03              mounted on /opt/oracle/rhp_images/chkbase,STABLE
               ONLINE  ONLINE       labvmr01n04              mounted on /opt/oracle/rhp_images/chkbase,STABLE
ora.helper
               ONLINE  ONLINE       labvmr01n01              STABLE
               ONLINE  ONLINE       labvmr01n02              STABLE
               ONLINE  ONLINE       labvmr01n03              STABLE
               ONLINE  ONLINE       labvmr01n04              STABLE
ora.net1.network
               ONLINE  ONLINE       labvmr01n01              STABLE
               ONLINE  ONLINE       labvmr01n02              STABLE
               ONLINE  ONLINE       labvmr01n03              STABLE
               ONLINE  ONLINE       labvmr01n04              STABLE
ora.ons
               ONLINE  ONLINE       labvmr01n01              STABLE
               ONLINE  ONLINE       labvmr01n02              STABLE
               ONLINE  ONLINE       labvmr01n03              STABLE
               ONLINE  ONLINE       labvmr01n04              STABLE
ora.proxy_advm
               ONLINE  ONLINE       labvmr01n01              STABLE
               ONLINE  ONLINE       labvmr01n02              STABLE
               ONLINE  ONLINE       labvmr01n03              STABLE
               ONLINE  ONLINE       labvmr01n04              STABLE
--------------------------------------------------------------------------------
Cluster Resources
--------------------------------------------------------------------------------
ora.LISTENER_SCAN1.lsnr
      1        ONLINE  ONLINE       labvmr01n01              STABLE
ora.LISTENER_SCAN2.lsnr
      1        ONLINE  ONLINE       labvmr01n02              STABLE
ora.LISTENER_SCAN3.lsnr
      1        ONLINE  ONLINE       labvmr01n03              STABLE
ora.MGMTLSNR
      1        ONLINE  ONLINE       labvmr01n01              169.254.21.36 192.168.123.1,STABLE
ora.asm
      1        ONLINE  ONLINE       labvmr01n01              STABLE
      2        ONLINE  ONLINE       labvmr01n02              STABLE
      3        ONLINE  ONLINE       labvmr01n03              STABLE
      4        ONLINE  ONLINE       labvmr01n04              STABLE
ora.cdbrac.db
      1        ONLINE  ONLINE       labvmr01n01              Open,HOME=/u01/app/oracle/product/12.1.0/dbhome_1,STABLE
      2        ONLINE  ONLINE       labvmr01n02              Open,HOME=/u01/app/oracle/product/12.1.0/dbhome_1,STABLE
ora.cdbrac.raclab_pdb_ha_test.svc
      1        ONLINE  ONLINE       labvmr01n01              STABLE
      2        ONLINE  ONLINE       labvmr01n02              STABLE
ora.cvu
      1        ONLINE  ONLINE       labvmr01n04              STABLE
ora.gns
      1        ONLINE  ONLINE       labvmr01n01              STABLE
ora.gns.vip
      1        ONLINE  ONLINE       labvmr01n01              STABLE
ora.labvmr01n01.vip
      1        ONLINE  ONLINE       labvmr01n01              STABLE
ora.labvmr01n02.vip
      1        ONLINE  ONLINE       labvmr01n02              STABLE
ora.labvmr01n03.vip
      1        ONLINE  ONLINE       labvmr01n03              STABLE
ora.labvmr01n04.vip
      1        ONLINE  ONLINE       labvmr01n04              STABLE
ora.mgmtdb
      1        ONLINE  ONLINE       labvmr01n01              Open,STABLE
ora.qosmserver
      1        ONLINE  ONLINE       labvmr01n04              STABLE
ora.rhpserver
      1        ONLINE  ONLINE       labvmr01n03              STABLE
ora.scan1.vip
      1        ONLINE  ONLINE       labvmr01n01              STABLE
ora.scan2.vip
      1        ONLINE  ONLINE       labvmr01n02              STABLE
ora.scan3.vip
      1        ONLINE  ONLINE       labvmr01n03              STABLE
--------------------------------------------------------------------------------
[grid@labvmr01n01 ~]$
Yay!
Gotcha
The biggest gotcha during this upgrade was basically the ADVM volume and ACFS filesystem which weren't enabled by default. Is this a problem? Not really. Just something to take into account and to check/verify. It also depends on whether you want this or not.
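A quick way to verify this after the upgrade (a sketch, reusing the device name from this lab):

[grid@labvmr01n01 ~]$ srvctl config volume
[grid@labvmr01n01 ~]$ srvctl status filesystem -d /dev/asm/ghchkpt-436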
Something else I noticed.
I did not document this here as such, but in order to perform the upgrade (coming from 12.1.0.2), you need 23.5GB of usable free space in the diskgroup used by the cluster, in my case the +DATA diskgroup. To free up that space, (still on 12.1) I moved the GIMR (MGMTDB) out of ASM and put it into an ACFS filesystem:
[grid@labvmr01n01 ~]$ cd /acfs/GIMR/_MGMTDB/datafile/
[grid@labvmr01n01 datafile]$ ls -la
total 2097288
drwxr-x--- 2 grid oinstall       4096 Aug 27 22:06 .
drwxr-x--- 3 grid oinstall       4096 Aug 27 22:06 ..
-rw-r----- 1 grid oinstall      16384 Aug 27 17:25 o1_mf_sysaux_fr85zpjv_.dbf
-rw-r----- 1 grid oinstall      16384 Aug 27 17:25 o1_mf_sysgridh_fr85zpk6_.dbf
-rw-r----- 1 grid oinstall 2147491840 Aug 27 17:25 o1_mf_sysmgmtd_fr85zpj1_.dbf
-rw-r----- 1 grid oinstall      16384 Aug 27 17:25 o1_mf_sysmgmtd_fr85zpkl_.dbf
-rw-r----- 1 grid oinstall      16384 Aug 27 17:25 o1_mf_system_fr85zpjh_.dbf
-rw-r----- 1 grid oinstall      16384 Aug 27 17:25 o1_mf_users_fr85zpky_.dbf
[grid@labvmr01n01 datafile]$
[grid@labvmr01n01 datafile]$ df -h /acfs/
Filesystem             Size  Used Avail Use% Mounted on
/dev/asm/vol_acfs-159  9.0G  2.4G  6.7G  27% /acfs
[grid@labvmr01n01 datafile]$
So far so good. But why does the upgrade need 23GB then? You would be surprised (or not)… it's the GIMR.
[grid@labvmr01n01 ~]$ asmcmd lsdg
State    Type    Rebal  Sector  Logical_Sector  Block       AU  Total_MB  Free_MB  Req_mir_free_MB  Usable_file_MB  Offline_disks  Voting_files  Name
MOUNTED  EXTERN  N         512             512   4096  1048576     10239      840                0             840              0             N  ACFS/
MOUNTED  EXTERN  N         512             512   4096  1048576     30717     1327                0            1327              0             Y  DATA/
MOUNTED  EXTERN  N         512             512   4096  1048576     10239     5816                0            5816              0             N  RECO/
[grid@labvmr01n01 ~]$
you see who’s back?
[grid@labvmr01n01 ~]$ asmcmd
ASMCMD> cd +DATA
ASMCMD> ls -l
Type      Redund  Striped  Time             Sys  Name
                                            Y    ASM/
                                            Y    CDBRAC/
                                            Y    _MGMTDB/
                                            Y    labvmr01-clu/
PASSWORD  UNPROT  COARSE   MAR 24 15:00:00  N    orapwasm => +DATA/ASM/PASSWORD/pwdasm.256.971624423
PASSWORD  UNPROT  COARSE   AUG 27 20:00:00  N    orapwasm_backup => +DATA/ASM/PASSWORD/pwdasm.260.985294029
ASMCMD> cd _MGMTDB/
ASMCMD> ls -l
Type  Redund  Striped  Time  Sys  Name
                             Y    7471E8972D0817FEE0534101000A2C36/
                             Y    CONTROLFILE/
                             Y    DATAFILE/
                             Y    ONLINELOG/
                             Y    PARAMETERFILE/
                             Y    TEMPFILE/
ASMCMD> du
Used_MB      Mirror_used_MB
  21530               21530
ASMCMD> du DATAFILE/
Used_MB      Mirror_used_MB
   1081                1081
ASMCMD> du 7471E8972D0817FEE0534101000A2C36/
Used_MB      Mirror_used_MB
  17508               17508
ASMCMD>
The GIMR is playing Houdini. But you have freed up my “old” GIMR location, didn't you?
[grid@labvmr01n01 ~]$ cd /acfs/GIMR/_MGMTDB/datafile/
[grid@labvmr01n01 datafile]$ ls -la
total 2097288
drwxr-x--- 2 grid oinstall       4096 Aug 27 22:06 .
drwxr-x--- 3 grid oinstall       4096 Aug 27 22:06 ..
-rw-r----- 1 grid oinstall      16384 Aug 27 17:25 o1_mf_sysaux_fr85zpjv_.dbf
-rw-r----- 1 grid oinstall      16384 Aug 27 17:25 o1_mf_sysgridh_fr85zpk6_.dbf
-rw-r----- 1 grid oinstall 2147491840 Aug 27 17:25 o1_mf_sysmgmtd_fr85zpj1_.dbf
-rw-r----- 1 grid oinstall      16384 Aug 27 17:25 o1_mf_sysmgmtd_fr85zpkl_.dbf
-rw-r----- 1 grid oinstall      16384 Aug 27 17:25 o1_mf_system_fr85zpjh_.dbf
-rw-r----- 1 grid oinstall      16384 Aug 27 17:25 o1_mf_users_fr85zpky_.dbf
[grid@labvmr01n01 datafile]$
What did you think… the old files are still there. I wonder whether the same behaviour appears when you keep the GIMR in its default location (in ASM), but in case you played around with it like I did: take this into account.
Cleanup
A common thing to forget: getting rid of the old 12.1 home. But that I will cover in another blogpost.
As always: questions, remarks? Find me on Twitter @vanpupi.