Friday, July 26, 2013

Oracle 11.2 PSU Patch GI + RDBMS - Auto

Applying a PSU patch in an environment with Oracle Grid Infrastructure (standalone GI) plus Oracle RDBMS, using opatchauto.

opatch auto automates the complete patch application for the whole environment, whether GI, RAC, or RDBMS. A response file must be created, and the tool must be run as root. Patching environments like this can also be done manually, step by step, but I believe that should only be attempted very carefully by a DBA with enough experience and knowledge, since the procedure must be followed to the letter or the patch will fail. Below is an image of the workflow performed by opatch auto, taken from Oracle's OPatch Auto Workflow documentation.

Environment


OS Version: Oracle Linux 4.7
Oracle Version: Oracle 11.2.0.3
Grid OS Owner: grid
RDBMS OS Owner: oracle
RDBMS BASE: /oracle/app/oracle
RDBMS HOME: /oracle/app/oracle/product/11.2.0.3/db_1
GI BASE: /grid/app/grid
GI HOME: /grid/app/11.2.0.3/grid
Instances: +ASM , orcl

Prerequisites

OPatch Version
Check the patch documentation for the minimum OPatch version required:

$ $ORACLE_HOME/OPatch/opatch version
 

Validate the Inventory for Each Home

$ $ORACLE_HOME/OPatch/opatch lsinventory -oh <oracle_home>

Download and extract the patch bundle

The patch must be accessible to the Oracle install group and must not be staged under /tmp.
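As a sketch of that staging step (the stage path and the oinstall group name are assumptions; adjust them to your site):

```shell
# Stage the patch outside /tmp, readable by the Oracle install group.
# STAGE_DIR and the "oinstall" group are assumptions; adjust per site.
STAGE_DIR=${STAGE_DIR:-$HOME/patch_stage}
mkdir -p "$STAGE_DIR"
chgrp oinstall "$STAGE_DIR" 2>/dev/null || true  # ignore if the group differs
chmod 775 "$STAGE_DIR"
# Then extract the bundle there, e.g.:
#   unzip p16619892_112030_<platform>.zip -d "$STAGE_DIR"
echo "stage ready: $STAGE_DIR"
```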

Stop Enterprise Manager

Stop the Enterprise Manager DB Console service:

$ORACLE_HOME/bin/emctl stop dbconsole

Configure a response file

This step is mandatory for the patch installation. During the installation you are asked about OCM (Oracle Configuration Manager), and since the installation runs in silent mode the answer must be supplied in a response file.
Below I generate a response file that skips OCM configuration: at the first prompt I leave the e-mail address empty and press Enter, then confirm.

[grid@os-indaia-orcl-prod ~]$ $ORACLE_HOME/OPatch/ocm/bin/emocmrsp -no_banner -output $ORACLE_HOME/config.rsp
Provide your email address to be informed of security issues, install and
initiate Oracle Configuration Manager. Easier for you if you use your My
Oracle Support Email address/User Name.
Visit http://www.oracle.com/support/policies.html for details.
Email address/User Name:

You have not provided an email address for notification of security issues.
Do you wish to remain uninformed of security issues ([Y]es, [N]o) [N]:  y
The OCM configuration response file (/grid/app/11.2.0.3/grid/config.rsp) was successfully created.
 

Best Practices

  • Take a full backup of the database.
  • Take a backup of the Oracle binary installation (the Oracle Home).
  • Take a backup of the inventory.
  • Check for conflicts between the existing patches and the new bundle.
  • Record database information about previously applied bundles and components.
  • Save a list of invalid objects.
  • Save a list of invalid indexes.
  • Make sure there is enough free space in the Oracle Home to apply the patch.
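The free-space item above is easy to script; a minimal sketch, assuming a 1 GiB threshold (the patch README states the real requirement) and falling back to /tmp when ORACLE_HOME is unset:

```shell
# Abort early if the target home lacks room for the patch.
check_free_space() {
    dir=$1 min_mb=$2
    # df -P keeps one line per file system, which makes awk extraction reliable.
    free_kb=$(df -Pk "$dir" | awk 'NR==2 {print $4}')
    [ "$free_kb" -ge $(( min_mb * 1024 )) ]
}

# 1024 MiB is an assumed threshold, not the official figure.
if check_free_space "${ORACLE_HOME:-/tmp}" 1024; then
    echo "enough free space to apply the patch"
else
    echo "free up space under ${ORACLE_HOME:-/tmp} before patching" >&2
fi
```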

Installing

For this installation we will apply the patch to both products, GI and RDBMS, simultaneously. Apart from stopping the EM DB Console, no other service needs to be stopped; opatch auto takes care of the whole process.

# opatch auto <patch_location> -ocmrf <response_file>

[root@os-indaia-orcl-prod /]# /grid/app/11.2.0.3/grid/OPatch/opatch auto /oracle_patch -ocmrf /grid/app/11.2.0.3/grid/config.rsp

Executing /grid/app/11.2.0.3/grid/perl/bin/perl /grid/app/11.2.0.3/grid/OPatch/crs/patch11203.pl -patchdir / -patchn oracle_patch -ocmrf /grid/app/11.2.0.3/grid/config.rsp -paramfile /grid/app/11.2.0.3/grid/crs/install/crsconfig_params

/grid/app/11.2.0.3/grid/crs/install/crsconfig_params
/grid/app/11.2.0.3/grid/crs/install/s_crsconfig_defs

This is the main log file: /grid/app/11.2.0.3/grid/cfgtoollogs/opatchauto2013-07-26_22-46-53.log
This file will show your detected configuration and all the steps that opatchauto attempted to do on your system: /grid/app/11.2.0.3/grid/cfgtoollogs/opatchauto2013-07-26_22-46-53.report.log

2013-07-26 22:46:53: Starting Oracle Restart Patch Setup

Using configuration parameter file: /grid/app/11.2.0.3/grid/crs/install/crsconfig_params

patch //oracle_patch/16619898/custom/server/16619898  apply successful for home  /oracle/app/oracle/product/11.2.0.3/db_1
patch //oracle_patch/16619892  apply successful for home  /oracle/app/oracle/product/11.2.0.3/db_1

Successfully unlock /grid/app/11.2.0.3/grid

patch //oracle_patch/16619898  apply successful for home  /grid/app/11.2.0.3/grid
patch //oracle_patch/16619892  apply successful for home  /grid/app/11.2.0.3/grid

CRS-4123: Oracle High Availability Services has been started.
Failed to start resources from  database home /oracle/app/oracle/product/11.2.0.3/db_1


In my case the database failed to start at this step, so I started it manually with SQL*Plus.

Post Installation

Once the database (or databases) has been started, run the following step on every DB:


[oracle@os-indaia-orcl-prod admin]$ sqlplus /nolog

SQL*Plus: Release 11.2.0.3.0 Production on Fri Jul 26 23:34:58 2013

Copyright (c) 1982, 2011, Oracle.  All rights reserved.

SQL> connect / as sysdba
Connected.

SQL> @catbundle_PSU_ORCL_APPLY.sql
...
...
...
SQL> SPOOL off
SQL> SET echo off
Check the following log file for errors:
/oracle/app/oracle/cfgtoollogs/catbundle/catbundle_PSU_ORCL_APPLY_2013Jul26_23_35_44.log


As a best practice, I always run @?/rdbms/admin/utlrp.sql and a stop/start of each instance before releasing the environment, but that is a judgment call; it depends on the patch.

Validation

To confirm that the installation succeeded, let's check the data dictionary and the binaries:

select action_time, version, COMMENTS, BUNDLE_SERIES
from registry$history;

ACTION_TIME             VERSION        COMMENTS         BUNDLE_SERIES
---------------------------- -------------- -------------------- --------------
18-SEP-11 06.02.59.465921 PM 11.2.0.3        Patchset 11.2.0.2.0  PSU
01-JUN-13 07.10.20.940999 PM 11.2.0.3        Patchset 11.2.0.2.0  PSU
26-JUL-13 11.35.45.217246 PM 11.2.0.3        Patchset 11.2.0.2.0  PSU


$ opatch lsinventory -bugs_fixed

This returns a long list of fixed bugs, so I leave it out here.

In conclusion, a complete patch application driven by opatch auto has become very simple; anyone who lived through 8i or even 9i patching will be pleasantly surprised by a situation like this. In a future post I want to walk through this same patch in a slightly more complex scenario, where we will run the opatch auto steps manually.

Please don't hesitate to correct me or leave a comment if you find any mistakes in this article.

Cheers.

Sunday, July 21, 2013

Linux - Logical Partitions

Introduction

LVM is used to create logical partitions that can span multiple physical hard disks. Online LV and VG resizing, RAID volumes, mirroring, and striping are some of LVM's features. Good documentation about LVM can be found in Logical Volume Manager (Linux).

Architecture

The following image describes the relationship between the LVM components:
- Physical drives are divided into partitions.
- Physical Volumes are made from partitions.
- Volume Groups contain Physical Volumes.
- Volume Groups are divided into Logical Volumes.
- Logical Volumes are the base for file systems.



Implementing

 

Verify that lvm2 is installed

[root@os-indaia-12c ~]# yum list lvm2
Loaded plugins: refresh-packagekit, security
Installed Packages
lvm2.x86_64                                                2.02.98-9.el6                                                 @anaconda-OracleLinuxServer-201302251503.x86_64/6.4

Listing Physical Drives

Use fdisk -l to list the physical drives available for use. (The output below is truncated.)

[root@os-indaia-12c yum.repos.d]# fdisk -l

Disk /dev/sdb: 8589 MB, 8589934592 bytes
255 heads, 63 sectors/track, 1044 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0x00000000


Disk /dev/sdc: 8589 MB, 8589934592 bytes
255 heads, 63 sectors/track, 1044 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0x00000000

Creating Partitions

Use fdisk to create a disk partition.
[root@os-indaia-12c yum.repos.d]# fdisk /dev/sdb
Device contains neither a valid DOS partition table, nor Sun, SGI or OSF disklabel
Building a new DOS disklabel with disk identifier 0xd968e05e.
Changes will remain in memory only, until you decide to write them.
After that, of course, the previous content won't be recoverable.

Warning: invalid flag 0x0000 of partition table 4 will be corrected by w(rite)

WARNING: DOS-compatible mode is deprecated. It's strongly recommended to
         switch off the mode (command 'c') and change display units to
         sectors (command 'u').

Command (m for help): n
Command action
   e   extended
   p   primary partition (1-4)
p
Partition number (1-4): 1
First cylinder (1-1044, default 1):
Using default value 1
Last cylinder, +cylinders or +size{K,M,G} (1-1044, default 1044):
Using default value 1044

Command (m for help): w
The partition table has been altered!

Calling ioctl() to re-read partition table.
Syncing disks.

Create Physical Volumes

Use pvcreate to create physical volumes. This associates each partition we created earlier with a Physical Volume.
[root@os-indaia-12c yum.repos.d]# pvcreate /dev/sdb1
  Physical volume "/dev/sdb1" successfully created
[root@os-indaia-12c yum.repos.d]# pvcreate /dev/sdc1
  Physical volume "/dev/sdc1" successfully created

Listing Physical Volumes

We can verify the PVs we created with pvdisplay. (The output below was captured after the VG and LV had been created, which is why extents show as allocated.)

[root@os-indaia-12c ~]# pvdisplay
  --- Physical volume ---
  PV Name               /dev/sdb1
  VG Name               oracle_vg
  PV Size               8.00 GiB / not usable 1.35 MiB
  Allocatable           yes (but full)
  PE Size               4.00 MiB
  Total PE              2047
  Free PE               0
  Allocated PE          2047
  PV UUID               QK3nS5-ZREV-na8F-jZIw-06Ph-PJxD-pZsv1N
  
  --- Physical volume ---
  PV Name               /dev/sdc1
  VG Name               oracle_vg
  PV Size               8.00 GiB / not usable 1.35 MiB
  Allocatable           yes
  PE Size               4.00 MiB
  Total PE              2047
  Free PE               1022
  Allocated PE          1025
  PV UUID               pQz7Kq-Wbwy-GaPc-FbBr-hzvd-C6eq-m9bfEB

  

Create the Volume Group

We can use vgcreate to create the volume group. A single volume group can contain one or more physical volumes. The command below creates a VG called oracle_vg from the two physical volumes /dev/sdb1 and /dev/sdc1.
[root@os-indaia-12c yum.repos.d]# vgcreate  oracle_vg /dev/sdb1 /dev/sdc1
  Volume group "oracle_vg" successfully created

Listing VGs

The vgdisplay command can be used to list all the VGs on the system and their attributes.
[root@os-indaia-12c yum.repos.d]# vgdisplay
  --- Volume group ---
  VG Name               oracle_vg
  System ID            
  Format                lvm2
  Metadata Areas        2
  Metadata Sequence No  1
  VG Access             read/write
  VG Status             resizable
  MAX LV                0
  Cur LV                0
  Open LV               0
  Max PV                0
  Cur PV                2
  Act PV                2
  VG Size               15.99 GiB
  PE Size               4.00 MiB
  Total PE              4094
  Alloc PE / Size       0 / 0  
  Free  PE / Size       4094 / 15.99 GiB
  VG UUID               gKwhC0-kriN-dkbH-7sjp-NPGm-d8u1-age5VN
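As a quick sanity check, the vgdisplay numbers are self-consistent: the VG size is just the extent count times the extent size.

```shell
# From the output above: Total PE = 4094, PE Size = 4 MiB.
total_pe=4094
pe_mib=4
vg_mib=$(( total_pe * pe_mib ))
echo "${vg_mib} MiB"   # 16376 MiB, i.e. ~15.99 GiB, matching "VG Size"
```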

Creating Logical Volume

The following command (lvcreate) creates a 12G LV called ora_home_lv.
[root@os-indaia-12c yum.repos.d]# lvcreate -n ora_home_lv   --size 12G oracle_vg
  Logical volume "ora_home_lv" created

Listing LVs

[root@os-indaia-12c ~]# lvdisplay
  --- Logical volume ---
  LV Path                /dev/oracle_vg/ora_home_lv
  LV Name                ora_home_lv
  VG Name                oracle_vg
  LV UUID                fTUjS3-V3P7-cwNi-qnQS-3j0R-AmhZ-fQJ9z6
  LV Write Access        read/write
  LV Creation host, time os-indaia-12c, 2013-07-21 15:38:54 -0300
  LV Status              available
  # open                 1
  LV Size                12.00 GiB
  Current LE             3072
  Segments               2
  Allocation             inherit
  Read ahead sectors     auto
  - currently set to     256
  Block device           252:2

Formatting

Format the new logical volume with an ext3 file system:
[root@os-indaia-12c yum.repos.d]# mkfs.ext3 /dev/oracle_vg/ora_home_lv
mke2fs 1.41.12 (17-May-2010)
Filesystem label=
OS type: Linux
Block size=4096 (log=2)
Fragment size=4096 (log=2)
Stride=0 blocks, Stripe width=0 blocks
786432 inodes, 3145728 blocks
157286 blocks (5.00%) reserved for the super user
First data block=0
Maximum filesystem blocks=3221225472
96 block groups
32768 blocks per group, 32768 fragments per group
8192 inodes per group
Superblock backups stored on blocks:
    32768, 98304, 163840, 229376, 294912, 819200, 884736, 1605632, 2654208

Writing inode tables: done                           
Creating journal (32768 blocks): done
Writing superblocks and filesystem accounting information: done

This filesystem will be automatically checked every 20 mounts or
180 days, whichever comes first.  Use tune2fs -c or -i to override.

Mounting the File System

[root@os-indaia-12c yum.repos.d]# mkdir /u01 && mount -t ext3 /dev/oracle_vg/ora_home_lv /u01
[root@os-indaia-12c yum.repos.d]# df -h
Filesystem            Size  Used Avail Use% Mounted on
/dev/mapper/root_vg-lv_root
                       12G  8.4G  2.6G  77% /
tmpfs                 2.0G  304K  2.0G   1% /dev/shm
/dev/sda1             485M  101M  359M  22% /boot
/dev/sr0               55M   55M     0 100% /media/VBOXADDITIONS_4.2.10_84104
/dev/mapper/oracle_vg-ora_home_lv
                       12G  159M   12G   2% /u01
[root@os-indaia-12c yum.repos.d]#

Finally, add the following line to /etc/fstab so the mount point comes back after every boot:

/dev/oracle_vg/ora_home_lv /u01                 ext3    defaults        0 0
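For reference, the fields of that entry are, in order: device, mount point, file-system type, mount options, dump flag, and fsck pass number (a commented restatement of the same line):

```
# device                     mount  type  options   dump  fsck-pass
/dev/oracle_vg/ora_home_lv   /u01   ext3  defaults  0     0
```

The two zeros mean the file system is skipped by dump backups and by boot-time fsck ordering; running mount -a afterwards is a quick way to test the entry before rebooting.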

Sources:

http://linuxconfig.org/linux-lvm-logical-volume-manager
http://www.howtoforge.com/linux_lvm