Thursday, March 15, 2012

Oracle ASM RPM Link

Just wanted to make your search for ASM rpms easier.

http://www.oracle.com/technetwork/server-storage/linux/downloads/rhel5-084877.html 

Oracle Virtual Private Databases (VPD)

Oracle VPD is a great feature for implementing data masking and data privacy at no extra cost. Privacy rules are implemented as simple database policies, built with Oracle's built-in PL/SQL packages, and they are not hard to use.
For masking, VPD offers exactly one option: returning NULL values; no other masking styles are available. Masking can be applied to a subset of rows or to specific columns, and it requires no change to the end-user client application; the same policy works whether the data is accessed directly in the database or through an application. DBMS_RLS is the main package used to implement the database policy.

Sample Scripts

The sample function below can be used to identify the users who DO need visibility into the real data. Everyone else will see NULL values by default.


CREATE OR REPLACE FUNCTION ggate.pf_ssn (oowner IN VARCHAR2, ojname IN VARCHAR2)
RETURN VARCHAR2 AS
  d_predicate VARCHAR2(2000);
BEGIN
  --d_predicate := '1=2';
  IF (SYS_CONTEXT('USERENV','SESSION_USER') = upper('GGATE')) or
     (SYS_CONTEXT('USERENV','SESSION_USER') = upper('SYSTEM')) or
     (SYS_CONTEXT('USERENV','SESSION_USER') = upper('SYS')) THEN
    -- privileged users: no predicate, they see the real values
    d_predicate := NULL;
  ELSE
    --d_predicate := 'colvpd=sys_context(''work'',''pf_ssn'')';
    -- everyone else: predicate is never true, so the protected column returns NULL
    d_predicate := '1=2';
  END IF;
  RETURN d_predicate;
END pf_ssn;
/

Now enable the policy. That is all it takes to implement simple masking:


BEGIN
  DBMS_RLS.ADD_POLICY (object_schema         => 'GGATE',
                       object_name           => 'WORK',
                       policy_name           => 'sp_ssn',
                       function_schema       => 'GGATE',
                       policy_function       => 'pf_ssn',
                       statement_types       => 'select',
                       sec_relevant_cols     => 'ssn',
                       sec_relevant_cols_opt => DBMS_RLS.ALL_ROWS);
END;
/
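To verify the behavior, a quick check like the one below can be run. This sketch assumes GGATE.WORK has an SSN column and that some other, non-privileged user has SELECT on the table; the user name is just a placeholder.

-- Connected as GGATE (or SYSTEM/SYS): real SSN values are returned
SELECT ssn FROM ggate.work WHERE ROWNUM <= 3;

-- Connected as any other user (for example a hypothetical APPUSER):
-- the rows are still returned, but the SSN column comes back as NULL
SELECT ssn FROM ggate.work WHERE ROWNUM <= 3;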


If the policy needs to be dropped....

BEGIN
  DBMS_RLS.DROP_POLICY (object_schema => 'GGATE',
                        object_name   => 'WORK',
                        policy_name   => 'sp_ssn');
END;
/



Data encryption with the Oracle Advanced Security (ASO) option


Oracle TDE (Transparent Data Encryption) is one of the Oracle ASO features.
Advanced Security offers the following three main options:
- Transparent Data Encryption
- Network Encryption
- Strong Authentication

Risks without encryption:
- Datafile contents can be viewed with a hex editor or a simple command such as: strings users.dbf | grep SSN
- Access to a memory dump exposes data
- Lost or stolen backups expose private data unless they are protected by the master key
- Lack of compliance from an auditing perspective

Why TDE?
- To protect data files, Oracle Database provides transparent data encryption. TDE encrypts sensitive table data stored in data files and, to prevent unauthorized decryption, stores the encryption keys in a security module external to the database.
- No application changes are required.



TDE Overview
- Key-based access control system
- The wallet file resides outside the database and is protected by a password (a setup sketch follows this list)
- The wallet file holds the master key; the master key has no relationship to the wallet password
- Table keys are stored in the data dictionary
- TDE is not used for access control
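As an illustration of the wallet, a minimal 11g-style file-based setup might look like the following; the wallet directory is only an example path.

# sqlnet.ora on the database server: points the instance at the wallet directory
ENCRYPTION_WALLET_LOCATION =
  (SOURCE = (METHOD = FILE)
    (METHOD_DATA = (DIRECTORY = /u01/app/oracle/admin/ORCL/wallet)))

-- Create the wallet and master key, run as a user with ALTER SYSTEM privilege
ALTER SYSTEM SET ENCRYPTION KEY IDENTIFIED BY "WalletPassword123";
-- On subsequent instance startups the wallet must be opened explicitly
ALTER SYSTEM SET ENCRYPTION WALLET OPEN IDENTIFIED BY "WalletPassword123";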



Encryption Algorithms
- 3DES168 (Triple DES - Data Encryption Standard)
- AES128 (AES - Advanced Encryption Standard)
- AES192 (default)
- AES256

Encryption Levels
- Tablespace level
  CREATE TABLESPACE ... ENCRYPTION USING ... (see the sketch after this list)
  Generally better performance
- Column level
  ALTER TABLE employees MODIFY (salary ENCRYPT);
  ALTER TABLE employees MODIFY (salary ENCRYPT USING 'AES256');
  ALTER TABLE employees MODIFY (salary DECRYPT);
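For the tablespace-level option, a sketch of the syntax; the disk group, tablespace name and size are examples only, and the wallet must already be open:

CREATE TABLESPACE secure_data
  DATAFILE '+DATADG' SIZE 100M
  ENCRYPTION USING 'AES256'
  DEFAULT STORAGE (ENCRYPT);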
Key Management options
- HSM (Hardware Security Module)
  An HSM provides storage for the master key
  Cryptographic operations are performed inside the HSM, avoiding use of server memory for them
  The Advanced Security network encryption option is recommended to encrypt the traffic between the database server and the HSM
  An HSM cannot be used for tablespace encryption, encrypted exports or encrypted RMAN backups
- Filesystem
  A regular filesystem or ACFS can be used for key (wallet) management
What is SALT?
- TDE applies a salt by default.
- Salt is random data added to the value before encryption, so the encrypted value differs even when the input data is the same.
- Adding salt to an encrypted column:
  ALTER TABLE employees MODIFY (salary ENCRYPT SALT);
Performance Considerations
- Performance is affected only when selecting from or inserting into an encrypted column; there is no reduction in performance for such operations on other columns, even in a table containing encrypted columns.
- If an index is used on an encrypted column, only exact-match searches can benefit from the index; if the query instead uses a LIKE predicate, a full table scan occurs. Note that only columns encrypted without salt can be indexed (see the sketch below).
- The redo log impact of a full table update on a large table should also be kept in mind. If transparent data encryption is being enabled on a very large table, the redo log size might need to be increased to accommodate the operation.
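A sketch of indexing an encrypted column; the table and column names are examples:

-- only NO SALT encrypted columns can be indexed
ALTER TABLE employees MODIFY (ssn ENCRYPT NO SALT);
CREATE INDEX employees_ssn_idx ON employees (ssn);

-- an equality predicate can use the index
SELECT * FROM employees WHERE ssn = '123-45-6789';
-- a LIKE predicate on the encrypted column falls back to a full table scan
SELECT * FROM employees WHERE ssn LIKE '123%';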
Using Data Pump
- To create or read an export dump in the Data Pump directory, the user needs read/write permission on that directory.
- Exporting encrypted data without a password raises: ORA-39173: Encrypted data has been stored unencrypted in dump file set.
  strings export.dmp | grep sensitive
- Use the ENCRYPTION_PASSWORD parameter on the export so that the data in the dump file stays encrypted (a sketch follows this list).
- Supply the same password when importing the dump.
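A sketch of the export/import calls; the directory objects, dump file name, schema and password are placeholders:

expdp system/password DIRECTORY=expdp_dir DUMPFILE=secure.dmp SCHEMAS=ggate ENCRYPTION_PASSWORD=MyDumpPassword
impdp system/password DIRECTORY=impdp_dir DUMPFILE=secure.dmp ENCRYPTION_PASSWORD=MyDumpPassword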

Backups
- Use "SET ENCRYPTION ON" in RMAN to encrypt the backups as well (a sketch follows this list).
- With encryption: datafiles of encrypted tablespaces are passed on without change.
- Without encryption: backups are decrypted, compressed and re-encrypted.
- Tablespace encryption: encrypted columns are treated as if they were not encrypted, so double encryption happens.
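A sketch of password-based backup encryption in RMAN; the password and tag are placeholders, and wallet-based encryption (CONFIGURE ENCRYPTION FOR DATABASE ON) is an alternative:

RMAN> SET ENCRYPTION ON IDENTIFIED BY 'BackupPassword123' ONLY;
RMAN> BACKUP DATABASE TAG 'enc_full';

# on restore, supply the same password first
RMAN> SET DECRYPTION IDENTIFIED BY 'BackupPassword123';
RMAN> RESTORE DATABASE;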

Oracle Virtual Private Database
- Enables us to create security policies that control database access
- Fine granularity applied directly on database objects
- Brings security, simplicity and flexibility
- Columns and rows display NULL values where the data is not meant to be seen
- Create policies, attach them to objects, and you are done


Oracle cross platform migration





Cross-Platform Oracle
Database Migration and Upgrade to 11g R2

Authors      :         Lokesh Krishnaswamy
                            




Purpose:

            The purpose of this document is to detail the steps involved in migrating databases across heterogeneous platforms. Oracle databases running on different flavors of UNIX and Linux, such as Solaris and Linux, can be migrated to the AIX operating system. Challenges around endian format, database size and availability of the Oracle datafiles arise when migrating to the AIX platform, and this document can be used as a means to overcome them. It also details how to use Oracle's transportable tablespaces feature to upgrade to a higher version of Oracle running on a different operating system than the source, along with the scripts needed at the various stages.


Migration Strategy Matrix

Source OS      | Source Endian | Target OS  | Target Endian | Action Needed
Solaris 64-bit | Big           | AIX 64-bit | Big           | Endian conversion not needed. Perform the upgrade steps from this document.
Linux 64-bit   | Little        | AIX 64-bit | Big           | Endian conversion needed. Use this document in full.


Database size consideration

Databases of different sizes can pose a challenge, both in terms of the storage speed available for the operation and the criticality of the database itself. These factors have to be weighed when choosing the migration strategy.
As a general rule the following can be used (a quick way to check the current database size follows the table).

Size   | Approach
<50 GB | Perform a Data Pump export/import; do not use transportable tablespaces
>50 GB | Ideal to use transportable tablespaces
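A simple way to gauge the datafile footprint of the source database, offered as a convenience:

select round(sum(bytes)/1024/1024/1024, 1) as size_gb
from dba_data_files;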


Migration and Upgrade Steps
1.      Get the source and the target database platform information by running the following query

SET LINESIZE 100 
COL PLATFORM_NAME FOR A40 
SELECT A.platform_id, A.platform_name, B.endian_format 
FROM   v$database A, v$transportable_platform B 
WHERE  B.platform_id (+) = A.platform_id; 


2. Check the database characterset on source and the target. They should be the same.

select * from database_properties where property_name like '%CHARACTERSET';


3. Verify the database options and components

select * from v$option;
select * from dba_registry;

4. Create a database link on the target database

connect system/password
create database link ttslink using '<TNS ENTRY of DATABASE NAME>';

Do not forget to add the corresponding entry to the tnsnames.ora file on the target database server (a sample entry follows).
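A minimal tnsnames.ora entry might look like the following; the host, port and service name are placeholders:

<DATABASE NAME> =
  (DESCRIPTION =
    (ADDRESS = (PROTOCOL = TCP)(HOST = source-host)(PORT = 1521))
    (CONNECT_DATA = (SERVICE_NAME = <DATABASE NAME>))
  )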

5.  Create the impdp/expdp directories on both source and target databases

create directory impdp_dir as '/bckp/<DATABASE NAME>/impdp';
create directory expdp_dir as '/bckp/<DATABASE NAME>/expdp';

6.  Pre-create on the target database all the tablespaces that exist in the source database, each with a minimal size (for example 10M). A generator sketch follows.
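One way to generate these statements is to spool them from the source; the +DATADG disk group and the 10M size are assumptions to be adjusted for the target storage:

set heading off feedback off linesize 500
spool tts_create_ts.sql
select 'CREATE TABLESPACE ' || tablespace_name ||
       ' DATAFILE ''+DATADG'' SIZE 10M;'
from dba_tablespaces
where tablespace_name not in ('SYSTEM','SYSAUX')
and contents = 'PERMANENT';
spool off

Run the spooled tts_create_ts.sql on the target database.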

7. Import metadata required for TTS using database link on the target server.

impdp system/password DIRECTORY=impdp_dir LOGFILE=dp_userimp.log NETWORK_LINK=ttslink FULL=y INCLUDE=USER,ROLE,ROLE_GRANT,PROFILE

8. On the target database, set the default tablespace to SYSTEM (not any user tablespace), so that the user tablespaces can be dropped in the next step.

select property_value from database_properties where property_name='DEFAULT_PERMANENT_TABLESPACE';

PROPERTY_VALUE
--------------
USERS

alter database default tablespace system;


9.  Drop all tablespaces on the target database except SYSTEM and SYSAUX using the following script

set heading off feedback off trimspool on linesize 500
spool tts_drop_ts.sql
prompt /* ===================== */
prompt /* Drop user tablespaces */
prompt /* ===================== */

select 'DROP TABLESPACE ' || tablespace_name || ' INCLUDING CONTENTS AND DATAFILES;' from dba_tablespaces
where tablespace_name not in ('SYSTEM','SYSAUX')
and contents = 'PERMANENT';

spool off

Run the spooled tts_drop_ts.sql on the target database.


10.  On the source database run the self-containment check and make sure there are no violations

declare
  checklist varchar2(4000);
  i number := 0;
begin
  for ts in
    (select tablespace_name
     from dba_tablespaces
     where tablespace_name not in ('SYSTEM','SYSAUX')
     and contents = 'PERMANENT')
  loop
    if (i=0) then
      checklist := ts.tablespace_name;
    else
      checklist := checklist||','||ts.tablespace_name;
    end if;
    i := 1;
  end loop;
  dbms_tts.transport_set_check(checklist,TRUE,TRUE);
end;
/

select * from transport_set_violations;

11.  Do a metadata export on the source database.

expdp system/password DIRECTORY=ttsdir LOGFILE=dp_fullexp_meta.log DUMPFILE=dp_full.dmp FULL=y CONTENT=METADATA_ONLY EXCLUDE=USER,ROLE,ROLE_GRANT,PROFILE



12. Make all the source database tablespaces read only

Script to make the tablespaces read only

set heading off feedback off trimspool on linesize 500
spool tts_tsro.sql
prompt /* ==================================== */
prompt /* Make all user tablespaces READ ONLY  */
prompt /* ==================================== */

select 'ALTER TABLESPACE ' || tablespace_name || ' READ ONLY;' from dba_tablespaces
where tablespace_name not in ('SYSTEM','SYSAUX')
and contents = 'PERMANENT';
spool off

Run the spooled tts_tsro.sql on the source database.

13. Capture the sequence values

set heading off feedback off trimspool on escape off
set long 1000 linesize 1000 pagesize 0
col SEQDDL format A300
spool tts_create_seq.sql
prompt /* ========================= */
prompt /* Drop and create sequences */
prompt /* ========================= */

select regexp_replace(
dbms_metadata.get_ddl('SEQUENCE',sequence_name,sequence_owner),
'^.*(CREATE SEQUENCE.*CYCLE).*$',
'DROP SEQUENCE "'||sequence_owner||'"."'||sequence_name
||'";'||chr(10)||'\1;') SEQDDL
from dba_sequences
where sequence_owner not in
(select name
from system.logstdby$skip_support
where action=0)
;
spool off


14. Generate the PAR files for the Data Pump TTS export (run on the source) and TTS import (run on the target) using the following scripts.

REM
REM Create TTS Data Pump export and import PAR files
REM
set feedback off trimspool on
set serveroutput on size 1000000
REM
REM Data Pump parameter file for TTS export
REM
spool dp_ttsexp.par
declare
tsname varchar(30);
i number := 0;
begin
dbms_output.put_line('directory=ttsdir');
dbms_output.put_line('dumpfile=dp_tts.dmp');
dbms_output.put_line('logfile=dp_ttsexp.log');
dbms_output.put_line('transport_full_check=no');
dbms_output.put('transport_tablespaces=');
for ts in
(select tablespace_name from dba_tablespaces
where tablespace_name not in ('SYSTEM','SYSAUX')
and contents = 'PERMANENT'
order by tablespace_name)
loop
if (i!=0) then
dbms_output.put_line(tsname||',');
end if;
i := 1;
tsname := ts.tablespace_name;
end loop;
dbms_output.put_line(tsname);
dbms_output.put_line('');
end;
/
spool off


REM
REM Data Pump parameter file for TTS import
REM
spool dp_ttsimp.par
declare
fname varchar(513);
i number := 0;
begin
dbms_output.put_line('directory=ttsdir');
dbms_output.put_line('dumpfile=dp_tts.dmp');
dbms_output.put_line('logfile=dp_ttsimp.log');
dbms_output.put('transport_datafiles=');
for df in
(select file_name from dba_tablespaces a, dba_data_files b
where a.tablespace_name = b.tablespace_name
and a.tablespace_name not in ('SYSTEM','SYSAUX')
and contents = 'PERMANENT'
order by a.tablespace_name)
loop
if (i!=0) then
dbms_output.put_line(''''||fname||''',');
end if;
i := 1;
fname := df.file_name;
end loop;
dbms_output.put_line(''''||fname||'''');
dbms_output.put_line('');
end;
/
spool off


15. Export the tablespaces on the source database using the script generated from the previous step

expdp system/password PARFILE=dp_ttsexp.par

Contents of the dp_ttsexp.par file as run on the <DATABASE NAME> database:
directory=expdp_dir
dumpfile=dp_tts.dmp
logfile=dp_ttsexp.log
transport_full_check=no
transport_tablespaces=IDM_DATA,
IDM_INDEXES,
LCRM9_DATA,
LCRM9_INDEX,
P4SECDB_DATA,
P4SECDB_INDEX,
TOOLS,
USERS

16. Copy the datapump export files to the target server

scp dp_full.dmp dp_tts.dmp target:/tmp

17. The endian format conversion should be done on the source system using the following RMAN CONVERT command. The db_file_name_convert parameter should point either to a new staging directory or to an NFS mount point used to stage the converted files.

Source system
convert tablespace 'TOOLS','USERS','IDM_DATA','IDM_INDEXES','LCRM9_DATA','LCRM9_INDEX','P4SECDB_DATA','P4SECDB_INDEX'
            to platform="AIX-Based Systems (64-bit)"
            db_file_name_convert='/db01/oradata/DRMLNX/','/bckp/DRMLNX/stage/'
parallelism 4;

18.  scp the converted files from the source system to the stage directory on the target system.

This step can be skipped if an NFS mount point is shared between the systems.
Generate the scp commands by logging into the source database system and running the following query (note that the commands must copy the converted files from the staging directory, not the original datafiles).

select 'scp /bckp/<DATABASE NAME>/stage/'||substr(file_name,20,50)||' oracle@sj1asm044:/bckp/<DATABASE NAME>/aixstage/'||substr(file_name,20,50)
from dba_data_files
/

The output can be saved into a shell script and executed from the source system.

19. Import the tablespace metadata into the target by running the following command on the target system. This import references the staged files in the NFS directory and makes them part of the database; once they are part of the database, the datafiles will be moved into the target ASM.

impdp system/password PARFILE=dp_ttsimp.par


20. On the target database the tablespaces will be read only. Bring the tablespaces online, then make them read write, and finally take them offline so that the datafiles can be copied to the destination ASM storage.

Script to make tablespaces online

set heading off feedback off trimspool on linesize 500
spool tts_tson.sql
prompt /* ==================================== */
prompt /* Make all user tablespaces online     */
prompt /* ==================================== */

select 'ALTER TABLESPACE ' || tablespace_name || ' ONLINE;' from dba_tablespaces
where tablespace_name not in ('SYSTEM','SYSAUX')
and contents = 'PERMANENT';
spool off


Script to make tablespaces offline

set heading off feedback off trimspool on linesize 500
spool tts_tsoff.sql
prompt /* ==================================== */
prompt /* Make all user tablespaces offline    */
prompt /* ==================================== */

select 'ALTER TABLESPACE ' || tablespace_name || ' OFFLINE;' from dba_tablespaces
where tablespace_name not in ('SYSTEM','SYSAUX')
and contents = 'PERMANENT';
spool off

Script to make tablespaces read write

set heading off feedback off trimspool on linesize 500
spool tts_tsrw.sql
prompt /* ==================================== */
prompt /* Make all user tablespaces READ WRITE */
prompt /* ==================================== */

select 'ALTER TABLESPACE ' || tablespace_name || ' READ WRITE;' from dba_tablespaces
where tablespace_name not in ('SYSTEM','SYSAUX')
and contents = 'PERMANENT';
spool off

Run the generated scripts at the SQL prompt in this order:
@tts_tson.sql
@tts_tsrw.sql
@tts_tsoff.sql

21. Copy the staged files from the NFS directory into the target ASM using RMAN.

All the RMAN copy commands can be generated by logging into the source system and spooling the output of the following query.

select 'copy datafile '||''''||'<NFS directory>/'||substr(file_name,22,50)||''''||' to '||''''||'+DATADG/oradata/<DATABASE NAME>/'||substr(file_name,22,50)||''''||';'
from dba_data_files
where tablespace_name not in ('SYSTEM','SYSAUX')
/

Now run the generated commands on the target system from an RMAN session.

The generated commands can be split into multiple files so that they can run in parallel.
Example

rman target /
copy datafile '/bckp/<DATABASE NAME>/aixstage/idm_data01.dbf' to '+DATADG/oradata/<DATABASE NAME>/idm_data01.dbf';


22. Rename the datafiles of the tablespaces to the target ASM location.

set heading off feedback off trimspool on linesize 500
spool tts_ren.sql
select 'alter database rename file '''||file_name||''' to ''+DATADG/oradata/<db sid>'||substr(file_name,13,50)||''';'
from dba_data_files
where tablespace_name not in ('SYSTEM','SYSAUX');
spool off

Run the spooled tts_ren.sql while the tablespaces are still offline.


23. Bring all the tablespaces back online. Run the following at the SQL prompt.

@tts_tson.sql



24. Verify the new file paths for all the tablespaces by running the following query.

select file_name from dba_data_files where tablespace_name not in ('SYSTEM','SYSAUX')
/

25. Import all the remaining metadata into the target system using the following command. Double-check that the tablespaces are in read write mode before running it (a quick status check follows the command).

impdp system/password DIRECTORY=ttsdir LOGFILE=dp_fullimp.log DUMPFILE=dp_full.dmp FULL=y
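A quick status check, for convenience:

select tablespace_name, status from dba_tablespaces
where tablespace_name not in ('SYSTEM','SYSAUX');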

26. Create the sequences on the target system using the tts_create_seq.sql script generated on the source system in step 13. Copy the script to the target and run it at the SQL prompt.

@tts_create_seq.sql

27. Compile invalid objects on the target system.

@?/rdbms/admin/utlrp.sql

28.  Verify the object counts between the source and target database systems.

select count(*), owner from dba_objects group by owner;

If any of the schemas do not match, query the individual schemas for the object_type counts:

select count(*), object_type from dba_objects where owner = '<OWNER NAME>' group by object_type
/


29. Verify the datafiles on target database for block corruption using the dbv utility.

dbv FILE=+DATADG/oradata/<DATABASE NAME>/tool01.dbf