
Logminer Fails With ORA-01284, ORA-00308

Problem Description:
Whenever you run LogMiner in the mining database (a database different from the source database; the source database is the one in which the redo or archived redo logs are generated, while the mining database is the one in which you analyze those logs) and try to add a logfile with DBMS_LOGMNR.ADD_LOGFILE, it fails with ORA-01284, ORA-00308, and ORA-27047.

SQL> EXECUTE DBMS_LOGMNR.ADD_LOGFILE( -
LOGFILENAME => '/export/home/oracle/o1_mf_1_61856_48637xkh_.arc', -
OPTIONS => DBMS_LOGMNR.NEW);

BEGIN DBMS_LOGMNR.ADD_LOGFILE( LOGFILENAME => '/export/home/oracle/o1_mf_1_61856_48637xkh_.arc', OPTIONS => DBMS_LOGMNR.NEW); END;

*
ERROR at line 1:
ORA-01284: file /export/home/oracle/o1_mf_1_61856_48637xkh_.arc cannot be
opened
ORA-00308: cannot open archived log
'/export/home/oracle/o1_mf_1_61856_48637xkh_.arc'
ORA-27047: unable to read the header block of file
Additional information: 2
ORA-06512: at "SYS.DBMS_LOGMNR", line 68
ORA-06512: at line 1

But if you look at the mining database's disk, the archived redo log file actually exists:
SQL> !ls -l /export/home/oracle/o1_mf_1_61856_48637xkh_.arc
-rwxrwxrwx 1 oracle oinstall 24671232 Jul 21 00:38 /export/home/oracle/o1_mf_1_61856_48637xkh_.arc

Cause of The Problem:
To run Oracle LogMiner successfully, several requirements must be satisfied. One of them is that the source database and the mining database must be running on the same hardware platform.

The database block sizes of the mining instance and the log source database must also be the same.

If either of these differs, LogMiner will not work, and using it will produce errors ORA-01284, ORA-00308, and ORA-27047.
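A quick way to compare the two databases (assuming Oracle 10g or later, where V$DATABASE exposes PLATFORM_NAME) is to run the same two checks on both sides:

```sql
-- Run on BOTH the source and the mining database; the results must match.
SELECT platform_name FROM v$database;   -- hardware platform
SELECT value FROM v$parameter
 WHERE name = 'db_block_size';          -- database block size
```

If either query returns different values on the two databases, mining on that machine will fail as shown above.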

Solution of The Problem:
Run the mining database on the same hardware platform as the source database. If you do not have a machine of the same platform available, remote mining is not possible; in that case, do the mining on the source database itself.

Related Post:-

How Oracle Logminer Is Used To Analyze Logfiles

Any change to the database is recorded in the online redo logfiles. If your database is in archivelog mode, the online redo log files are archived, and those archived logs also contain the database changes. With these logfiles we can analyze any change made to the database.
 
We can say that an undo entry provides the values of data as they were before a change, and a redo entry provides the values after the change. Because changes to undo segments are themselves database changes, they also generate redo entries. So we can get both kinds of information from the online redo logs, and later from the archived logs.

So from the online redo logs and from the archived redo logs we can get the database's redo and undo information. But online and archived logs have an unpublished format and are not human-readable. With the DBMS_LOGMNR package we can analyze redo log files and get the undo and redo information back in a human-readable format.

Another use of LogMiner is to investigate the database's past. With Flashback Query we can see prior values of a record in a table at some point in the past, but only as far back as the UNDO_RETENTION parameter allows (often as short as 30 minutes on an OLTP database). So to analyze older activity on the database, LogMiner is a good choice.
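For contrast, here is a minimal Flashback Query sketch (the table name TEST3 and the 20-minute interval are illustrative only):

```sql
-- Works only while the undo is still within UNDO_RETENTION;
-- beyond that window you need LogMiner instead.
SELECT * FROM test3
AS OF TIMESTAMP (SYSTIMESTAMP - INTERVAL '20' MINUTE);
```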

In this post I will show the step-by-step procedure for using LogMiner.

1)Ensure that supplemental logging is on, at least at the minimal level
To work with LogMiner you must have supplemental logging enabled on the source database at a minimum level. By default, Oracle Database does not provide any supplemental logging, which means that by default LogMiner is not usable. To make it usable you need to turn it on.

You can check whether supplemental logging is on or off with the following command:
SQL> select SUPPLEMENTAL_LOG_DATA_MIN from v$database;
SUPPLEME
--------
NO

To enable it at the minimal level:
SQL> ALTER DATABASE ADD SUPPLEMENTAL LOG DATA;
Database altered.

2)Install the DBMS_LOGMNR package.
The next step is to install the DBMS_LOGMNR package. If it is already installed, skip this step. You can install the package by running the $ORACLE_HOME/rdbms/admin/dbmslm.sql script. If you created your database with DBCA, this script was run automatically, so you can skip this step. However, if you created the database manually with the CREATE DATABASE ... command, you must run the script before using LogMiner. That is,
SQL>@$ORACLE_HOME/rdbms/admin/dbmslm.sql

3)Grant the EXECUTE_CATALOG_ROLE role.

Grant EXECUTE_CATALOG_ROLE to the user who will do the mining. Here the user is ARJU.

SQL>GRANT EXECUTE_CATALOG_ROLE TO ARJU;

4)Create the synonym. ARJU creates a public synonym:

CREATE PUBLIC SYNONYM DBMS_LOGMNR FOR SYS.DBMS_LOGMNR;

All four steps above are needed only once.

5)Specify the scope of the mining.
Now decide which files you will analyze. Depending on your scenario you may be interested in the archived redo log files or the online redo log files. Suppose you have a recent problem in your database, so you are interested in the online redo log files. You can see the current online redo logs with,
SQL> SELECT distinct member LOGFILENAME FROM V$LOGFILE;

LOGFILENAME
--------------------------------------------------------------------------------
/oradata2/data1/dbase/redo01.log
/oradata2/data1/dbase/redo03.log
/oradata2/data1/dbase/redo3.log

Sometimes, you want to mine the redo log file that was most recently archived.

SQL> SELECT NAME FROM V$ARCHIVED_LOG
WHERE FIRST_TIME = (SELECT MAX(FIRST_TIME) FROM V$ARCHIVED_LOG);
NAME
--------------------------------------------------------------------------------
/oradata2/flash_recovery_area/DBASE/archivelog/2008_08_03/o1_mf_1_839_49bodr0k_.
arc
With DBMS_LOGMNR.ADD_LOGFILE, specify the files that you want to mine.
Here I have added the online redo log files redo01.log and redo03.log.
SQL>BEGIN DBMS_LOGMNR.ADD_LOGFILE
('/oradata2/data1/dbase/redo01.log');
DBMS_LOGMNR.ADD_LOGFILE
('/oradata2/data1/dbase/redo03.log');
END;
/
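To verify which files are registered for the session before starting it, you can query the V$LOGMNR_LOGS view:

```sql
-- Lists every logfile added with DBMS_LOGMNR.ADD_LOGFILE so far
SELECT filename FROM v$logmnr_logs;
```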

6)Start the LogMiner session and specify a dictionary.
To start the LogMiner session:

BEGIN
DBMS_LOGMNR.START_LOGMNR
(options =>
dbms_logmnr.dict_from_online_catalog);
END;
/

The OPTIONS parameter tells Oracle Database to read the dictionary information needed to translate object names from the online catalog when LogMiner starts.
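If you only care about a particular time window, START_LOGMNR also accepts STARTTIME and ENDTIME parameters, and the COMMITTED_DATA_ONLY option filters out rolled-back and in-flight transactions. A sketch (the dates below are illustrative only):

```sql
BEGIN
  DBMS_LOGMNR.START_LOGMNR(
    STARTTIME => TO_DATE('03-08-2008 02:00:00','DD-MM-YYYY HH24:MI:SS'),
    ENDTIME   => TO_DATE('03-08-2008 04:00:00','DD-MM-YYYY HH24:MI:SS'),
    OPTIONS   => DBMS_LOGMNR.DICT_FROM_ONLINE_CATALOG +
                 DBMS_LOGMNR.COMMITTED_DATA_ONLY);
END;
/
```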

7)Check contents from V$LOGMNR_CONTENTS view.
To see what DML or DDL happened to table TEST3, and when, we can query V$LOGMNR_CONTENTS:
SQL>select username, to_char(timestamp,'mm/dd/yy hh24:mi:ss') timestamp,
seg_type_name, seg_name, table_space, session# SID, serial# , operation
from v$logmnr_contents
where table_name = 'TEST3';

USERNAM TIMESTAMP SEG_TY SEG_NAME TABLE_SPAC SID SERIAL# OPERATION
------- ----------------- ------ ---------- ---------- ---------- ---------- --------------------------------
08/03/08 02:57:35 TABLE TEST3 153 3272 DDL
08/03/08 02:57:41 TABLE TEST3 SYSTEM 153 3272 INSERT
08/03/08 02:57:55 TABLE TEST3 SYSTEM 153 3272 UPDATE
08/03/08 03:01:35 TABLE TEST3 SYSTEM 153 3272 UPDATE

We can get SQL_UNDO and SQL_REDO information by,

SQL> select sql_undo, sql_redo
from v$logmnr_contents
where table_name = 'TEST3' and OPERATION='UPDATE';

SQL_UNDO
--------------------------------------------------
update "SYS"."TEST3" set "A" = '3' where "A" = '9' and ROWID = 'AAAOKVAABAAAP8qAAA';
update "SYS"."TEST3" set "A" = '9' where "A" = '10' and ROWID = 'AAAOKVAABAAAP8qAAA';

SQL_REDO
--------------------------------------------------
update "SYS"."TEST3" set "A" = '9' where "A" = '3' and ROWID = 'AAAOKVAABAAAP8qAAA';
update "SYS"."TEST3" set "A" = '10' where "A" = '9' and ROWID = 'AAAOKVAABAAAP8qAAA';
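One practical use of SQL_UNDO is to back out a bad change: running the mined undo statement reverses the corresponding update. A sketch reusing the first undo row above (verify the ROWID really is the row you mean before running anything like this):

```sql
-- Reverses the mined update by applying its SQL_UNDO statement
UPDATE "SYS"."TEST3" SET "A" = '3'
 WHERE "A" = '9' AND ROWID = 'AAAOKVAABAAAP8qAAA';
COMMIT;
```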

8)End the LogMiner session.

Use the DBMS_LOGMNR.END_LOGMNR procedure.
SQL> BEGIN
DBMS_LOGMNR.END_LOGMNR;
END;
/

PL/SQL procedure successfully completed.

Related Post:-

STEPS TO USE LOGMINER

Many of us are curious about remote mining. You might want to avoid mining on the source (production) database itself, and that is where remote mining comes in. Remote mining means that the database that analyzes the redo or archived redo log files is not the same as the source database that generated them.

To use LogMiner on a remote database, transfer the redo or archived redo log files to be analyzed, as well as the mining dictionary, to the remote database.

The following section demonstrates the task step by step.

Scenario: Online redo log file redo01 needs to be analyzed on the remote machine jupiter.
Step 1- Extract the LogMiner dictionary (On Source Database)
On the source, that is the production server, build the dictionary with,
SQL>EXECUTE DBMS_LOGMNR_D.BUILD(OPTIONS=> DBMS_LOGMNR_D.STORE_IN_REDO_LOGS);
PL/SQL procedure successfully completed.

In an archivelog-mode database, executing this procedure adds the dictionary information to the online redo logs, which in turn go into the archived logs.

This redo log file must have been created before the redo log file that you want to analyze, but should be as recent as possible.

You can find out which redo log files contain the start and end of the extracted dictionary by querying the V$ARCHIVED_LOG view:
SQL> SELECT NAME FROM V$ARCHIVED_LOG WHERE DICTIONARY_BEGIN='YES';
NAME
--------------------------------------------------------------------------------
/oradata1/ARJU/archivelog/2008_08_24/o1_mf_1_1284_4c23hcd6_.arc

SQL> SELECT NAME FROM V$ARCHIVED_LOG WHERE DICTIONARY_END='YES';
NAME
--------------------------------------------------------------------------------
/oradata1/ARJU/archivelog/2008_08_24/o1_mf_1_1284_4c23hcd6_.arc

Step 2- Transfer the logminer dictionary and log for analysis into the mining database.

SQL> !scp /oradata1/ARJU/archivelog/2008_08_24/o1_mf_1_1284_4c23hcd6_.arc oracle@jupiter:
Password:
o1_mf_1_1284_4c23hcd 100% |*************************************************| 9465 KB 00:01

Then transfer the redo log itself. Depending on your requirement you can transfer an archived log or an online redo log. To list the archived logs completed within a given time range, you can query, for example,
select NAME from v$archived_log where completion_time > SYSDATE-1;
In this example I will analyze an online redo log file.

SQL> !scp /oradata1/arju/ARJU/redo01.log oracle@jupiter:
Password:
redo03.log 100% |*************************************************| 51200 KB 00:08


Step 3- Specify the files for analysis (In mining/remote database)

Here specify the logfile that needs to be mined, plus the dictionary files.
SQL>EXECUTE DBMS_LOGMNR.ADD_LOGFILE( LOGFILENAME => '/export/home/oracle/redo01.log', OPTIONS => DBMS_LOGMNR.ADDFILE);

SQL>EXECUTE DBMS_LOGMNR.ADD_LOGFILE( LOGFILENAME => '/export/home/oracle/o1_mf_1_1284_4c23hcd6_.arc', OPTIONS => DBMS_LOGMNR.ADDFILE);

Step 4- Start the LogMiner session (In mining database)

Do not specify the DICTFILENAME option here. Execute the following,
SQL> EXECUTE DBMS_LOGMNR.START_LOGMNR(OPTIONS => DBMS_LOGMNR.DICT_FROM_REDO_LOGS);

Step 5- Query V$LOGMNR_CONTENTS:
For the queries, you can follow the same steps as in How Oracle Logminer Is Used To Analyze Logfiles.

Step 6- End the LogMiner Session:
EXECUTE DBMS_LOGMNR.END_LOGMNR;

