Oracle Total Archive Log Size and Used Rate for ASM

Generally, an Oracle database's archive log location is either a file system or an ASM disk group. If the archive log location is ASM, the database is usually a RAC architecture (a standalone instance can also use ASM, and either architecture may place its archive logs on a file system or on ASM, so this is not absolute).

If the archive log location is a file system, we have to use OS commands to get the total disk size of that location and then calculate the archive log used rate. (For the total archive log size itself, we can simply sum it from the view v$archived_log; no OS command is needed.) If the archive log location is ASM, it's much easier: the total disk size is available from the view v$asm_diskgroup, and the total archive log size can be obtained from either v$asm_file or v$archived_log.
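For the file-system case, the idea can be sketched outside the database; this is a minimal illustration only, and the archive log destination `/u01/arch` mentioned in the comment is a made-up example ("/" is queried so the sketch runs anywhere):

```python
import shutil

def used_rate(path):
    """Return (total_gb, used_gb, used_pct) for the file system holding path."""
    usage = shutil.disk_usage(path)  # total, used, free in bytes
    total_gb = round(usage.total / 1024**3, 2)
    used_gb = round(usage.used / 1024**3, 2)
    return total_gb, used_gb, round(usage.used / usage.total * 100, 2)

# In practice you would pass the archive log destination, e.g. "/u01/arch"
# (hypothetical path); "/" is used here only so the sketch runs anywhere.
print(used_rate("/"))
```

This measures the whole file system, not just the archive logs on it, which is exactly why summing v$archived_log is still needed for the used size.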

Thus in this blog note I'll use two approaches to get the Oracle total archive log size and used rate, specifically for ASM.

Approach 1

We use the views v$archived_log and v$asm_diskgroup to get them.

Firstly, acquire the total archive log size via v$archived_log; take a look at the following SQL query:

select substr(name, 2, instr(name, '/', 1)-2) as disk_name,
       round((sum(blocks*block_size)/1024/1024/1024), 2) as arc_used_gb
from v$archived_log
where standby_dest = 'NO'
and deleted = 'NO'
and name like '+%'
group by substr(name, 2, instr(name, '/', 1)-2);

As you can see, the previous SQL only works for ASM: column name holds the full archive log path (starting with '+'), so we use the functions substr() and instr() to extract the ASM disk group name from it, then sum the size of all archive logs in that disk group.
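The substr()/instr() extraction simply takes the text between the leading '+' and the first '/'. A quick sketch of the same logic (the sample path below is made up for illustration):

```python
def diskgroup_of(name: str) -> str:
    """Mimic substr(name, 2, instr(name, '/', 1) - 2):
    the disk group name sits between the leading '+' and the first '/'."""
    return name[1:name.index("/")]

# A hypothetical ASM archive log path:
print(diskgroup_of("+DATA/ORCL/ARCHIVELOG/2024_01_01/thread_1_seq_100.257.1001"))
# -> DATA
```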

Secondly, we can get the ASM disk group's actual usable size from v$asm_diskgroup (here we only consider disk groups whose redundancy type is 'NORMAL' or 'HIGH', so the raw size is divided by 2 or 3 respectively):

select ad.name,
       case type
         when 'NORMAL' then round(ad.total_mb/1024/2, 2)
         when 'HIGH'   then round(ad.total_mb/1024/3, 2)
       end act_total_gb
from v$asm_diskgroup ad;

Finally, we can combine the two into the complete SQL query below:

with al as
(select substr(name, 2, instr(name, '/', 1)-2) as disk_name,
        round((sum(blocks*block_size)/1024/1024/1024), 2) as arc_used_gb
 from v$archived_log
 where standby_dest = 'NO'
 and deleted = 'NO'
 and name like '+%'
 group by substr(name, 2, instr(name, '/', 1)-2)
),
atg as
(select ad.name,
        al.arc_used_gb,
        case type
          when 'NORMAL' then round(ad.total_mb/1024/2, 2)
          when 'HIGH'   then round(ad.total_mb/1024/3, 2)
        end act_total_gb
 from v$asm_diskgroup ad, al
 where ad.name = al.disk_name
)
select name as ASM_DISK_NAME,
       arc_used_gb as ARC_USED_GB,
       round(arc_used_gb/act_total_gb, 4)*100 as PERCENT_SPACE_USED
from atg;

For example, running the above query against an Oracle RAC test environment gives a result like this:

ASM_DISK_NAME                  ARC_USED_GB PERCENT_SPACE_USED
------------------------------ ----------- ------------------
DATA                                 15.85               31.7
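The arithmetic behind this row can be checked by hand. Assuming the DATA disk group has 100 GB of raw capacity with NORMAL redundancy, so 50 GB usable (this raw size is an assumption consistent with, but not stated by, the output above):

```python
# Assumed raw size of the DATA disk group (NORMAL redundancy => divide by 2).
total_mb = 100 * 1024
act_total_gb = round(total_mb / 1024 / 2, 2)   # usable capacity: 50.0 GB

arc_used_gb = 15.85                            # from the query output above

# Same expression as the final SELECT: round(arc_used_gb/act_total_gb, 4)*100
percent_space_used = round(arc_used_gb / act_total_gb, 4) * 100
print(act_total_gb, percent_space_used)        # 31.7, matching the row above
```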

Approach 2

We use the views v$asm_file and v$asm_diskgroup to acquire them.

The type column of v$asm_file identifies the file type, and archived logs appear with type = 'ARCHIVELOG'.

Hence we check the total archive log size like this:

select group_number,
       redundancy,
       round(sum(space)/1024/1024/1024, 2) used_gb
from v$asm_file
where type in ('ARCHIVELOG')
group by group_number,
         redundancy;

Note that v$asm_file's space column counts raw space including the mirror copies, so using the file's redundancy type, the next SQL query computes the actual total archive log size.

with af as
(select group_number,
        redundancy,
        round(sum(space)/1024/1024/1024, 2) used_gb
 from v$asm_file
 where type in ('ARCHIVELOG')
 group by group_number,
          redundancy
)
select af.group_number,
       case af.redundancy
         when 'MIRROR' then af.used_gb/2
         when 'HIGH'   then af.used_gb/3
       end act_used_gb
from af;

Eventually, the complete SQL query is as below:

with af as
(select group_number,
        redundancy,
        round(sum(space)/1024/1024/1024, 2) used_gb
 from v$asm_file
 where type in ('ARCHIVELOG')
 group by group_number,
          redundancy
),
aug as
(select af.group_number,
        case af.redundancy
          when 'MIRROR' then af.used_gb/2
          when 'HIGH'   then af.used_gb/3
        end act_used_gb
 from af
)
select ad.name as ASM_DISK_NAME,
       aug.act_used_gb as ARC_USED_GB,
       round(af.used_gb/(ad.total_mb/1024), 4)*100 as PERCENT_SPACE_USED
from v$asm_diskgroup ad, af, aug
where ad.group_number = af.group_number
and af.group_number = aug.group_number;

Based on the same Oracle RAC test environment, we get the following query result.

ASM_DISK_NAME                  ARC_USED_GB PERCENT_SPACE_USED
------------------------------ ----------- ------------------
DATA                                16.045              32.09
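Working backwards from this row: with MIRROR (two-way) redundancy, an ARC_USED_GB of 16.045 implies the raw space sum from v$asm_file was about 32.09 GB, and with the same assumed 100 GB raw disk group size the percentage also lines up (the 100 GB figure is an assumption carried over from the earlier example, not something the output states):

```python
# Raw space consumed in the disk group, including the mirror copy
# (reconstructed from the output above, since 32.09 / 2 = 16.045).
used_gb = 32.09
act_used_gb = used_gb / 2            # MIRROR redundancy: halve the raw space

total_gb = 100.0                     # assumed raw disk group size, as before
percent_space_used = round(used_gb / total_gb, 4) * 100
print(act_used_gb, percent_space_used)
```

Note that the final query's percentage deliberately divides raw used space by raw total space, so the mirroring factor cancels out.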

It's not surprising that the results of the two approaches differ slightly: the space column of v$asm_file reflects allocated space, which includes ASM metadata and allocation-unit rounding, so it is a bit larger than the logical archive log bytes summed from v$archived_log.

P.S.: If you try to fold the CASE expression into the same query level as the aggregation, as in the following SQL, you'll get the error "ORA-00934: group function is not allowed here", because an aggregate like sum() cannot appear inside a GROUP BY expression; the aggregation must happen first (in a subquery or CTE) and the CASE applied afterwards, as done above.

with af as
(select group_number,
        case redundancy
          when 'MIRROR' then round(sum(space)/1024/1024/1024, 2)/2
          when 'HIGH'   then round(sum(space)/1024/1024/1024, 2)/3
        end act_used_gb,
        round(sum(space)/1024/1024/1024, 2)used_gb
 from v$asm_file
 where type in ('ARCHIVELOG')
 group by group_number,
          case redundancy
            when 'MIRROR' then round(sum(space)/1024/1024/1024, 2)/2
            when 'HIGH'   then round(sum(space)/1024/1024/1024, 2)/3
          end
)
select ad.name as ASM_DISK_NAME,
       af.act_used_gb as ARC_USED_GB,
       round(af.used_gb/(ad.total_mb/1024), 4)*100 as PERCENT_SPACE_USED
from v$asm_diskgroup ad, af
where ad.group_number = af.group_number;

when 'MIRROR' then round(sum(space)/1024/1024/1024, 2)/2
                                     *
ERROR at line 12:
ORA-00934: group function is not allowed here

You can get all of these SQL queries from here.
