SET LINESIZE 150
SET PAGESIZE 150
SET SERVEROUTPUT OFF
ALTER SESSION SET max_dump_file_size = 'UNLIMITED';
ALTER SESSION SET tracefile_identifier = '10046';
ALTER SESSION SET events '10046 trace name context forever,level 12';
select count(*) from all_objects;
ALTER SESSION SET events '10046 trace name context off';
SELECT value FROM v$diag_info WHERE name = 'Default Trace File';
This generates a trace file whose name ends in “…10046.trc” (matching the tracefile_identifier we set), for example “C:\APP\ADMINISTRATOR\VIRTUAL\diag\rdbms\ora19c\ora19c\trace\ora19c_ora_20224_10046.trc”.
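If v$diag_info is not available (for example on an older release), the trace file name can also be derived by joining v$session to v$process; a sketch for the current session (the column names are standard, but the exact file-name format can vary by platform):

```sql
-- Build the expected trace file name for the current session from the
-- instance name, the server process ID, and our tracefile_identifier.
SELECT LOWER(i.instance_name) || '_ora_' || p.spid || '_10046.trc' AS trace_file_name
FROM   v$instance i, v$process p, v$session s
WHERE  p.addr = s.paddr
AND    s.sid  = SYS_CONTEXT('USERENV', 'SID');
```

The directory itself is reported by the 'Diag Trace' row of v$diag_info.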
EARLY WARNING: at the end of May my old (unchanged since September 2012) website jlcomp.demon.co.uk will be taken off the internet. I may update and post a very small number of the articles on my blog eventually, but if there’s anything you want to keep from it, copy it now.
04:40PM 2020-04-04 Twitter Web App
WARNING: by end May the DEMON domain (with my old website jlcomp.demon.co.uk) will disappear. If there’s anything you want from the website copy it now.
When I heard this bad news I asked him the reason; he replied that the licence for the DEMON sub-domain was about to expire. So if there is anything you want to keep from his old website, it's time to copy it now.
So far I’ve written two blog notes about how to optimize SQL with a LIKE expression (here and here). A couple of days ago I built another interesting case involving two different columns in the WHERE clause of a SQL statement: the former a LIKE expression and the latter an equality predicate. This post focuses on how to optimize that kind of SQL query.
Here is a demo (the same as in my previous two posts) that creates a test table and quickly inserts 3e5 (300,000) rows of random data.
CREATE TABLE person
( id   NUMBER GENERATED ALWAYS AS IDENTITY
, name VARCHAR2(6) NOT NULL
, sex  VARCHAR2(1) NOT NULL
, flag VARCHAR2(1)
, pwd  VARCHAR2(6)
, CONSTRAINT person_pk PRIMARY KEY(id)
);

INSERT /*+ APPEND */ INTO person (name, sex, flag, pwd)
SELECT DBMS_RANDOM.string('A', 6) AS name
     , CASE MOD(ROWNUM, 2) WHEN 0 THEN 'F'
                           WHEN 1 THEN 'M'
       END AS sex
     , CASE MOD(ROWNUM, 3) WHEN 0 THEN 'T'
                           WHEN 1 THEN NULL
                           WHEN 2 THEN 'F'
       END AS flag
     , DBMS_RANDOM.string('p', 6) AS pwd
FROM   dual
CONNECT BY LEVEL <= 3e5;

COMMIT;

EXEC DBMS_STATS.gather_table_stats(ownname => NULL, tabname => 'PERSON');
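Before tuning anything it's worth confirming that the generated data looks as intended; a quick sanity check (my own addition, not part of the original demo):

```sql
-- Check the distribution of the two low-cardinality columns: "sex" should
-- split roughly 50/50 and "flag" roughly into thirds (one third being NULL).
SELECT sex, flag, COUNT(*) AS cnt
FROM   person
GROUP  BY sex, flag
ORDER  BY sex, flag;
```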
To judge whether a running SQL query is efficient, it's best to check its row-source execution plan as acquired from memory. The simple SQL query is as follows.
SELECT * FROM person
WHERE name LIKE 'qw%'
AND sex = 'F';
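The row-source execution plan "acquired from memory" can be displayed with DBMS_XPLAN.display_cursor immediately after running the statement; a minimal sketch (SERVEROUTPUT must be off, as set at the top of the script):

```sql
-- GATHER_PLAN_STATISTICS collects the actual row counts (A-Rows) that the
-- ALLSTATS LAST format then reports alongside the optimizer's estimates (E-Rows).
SELECT /*+ GATHER_PLAN_STATISTICS */ *
FROM   person
WHERE  name LIKE 'qw%'
AND    sex  = 'F';

SELECT * FROM TABLE(DBMS_XPLAN.display_cursor(NULL, NULL, 'ALLSTATS LAST'));
```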
Now I will use the following three types of case to optimize the preceding SQL query.
creating two separate normal indexes for the two columns
creating a normal index and a bitmap index for the two columns
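The first two cases can be sketched as follows (the index names are my own assumptions, not taken from the post):

```sql
-- Case 1: two separate B-tree indexes, one per predicate column.
CREATE INDEX idx_person_name ON person (name);
CREATE INDEX idx_person_sex  ON person (sex);

-- Case 2: keep the B-tree index on "name", but replace the index on the
-- low-cardinality column "sex" with a bitmap index.
DROP INDEX idx_person_sex;
CREATE BITMAP INDEX idx_person_sex_bm ON person (sex);
```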
In my previous post I introduced four cases showing how to optimize SQL with a LIKE expression on Oracle Database. Even so, I didn't fully explore some of the more important details of those optimization techniques, particularly in cases 2 and 4, as follows.
This post focuses on how to optimize several SQL statements with LIKE expressions on Oracle Database. First, let me quote some important points about the pattern of the LIKE condition from the 20c documentation.
The LIKE conditions specify a test involving pattern matching. Whereas the equality operator (=) exactly matches one character value to another, the LIKE conditions match a portion of one character value to another by searching the first value for the pattern specified by the second.
The pattern can contain special pattern-matching characters:
An underscore (_) in the pattern matches exactly one character (as opposed to one byte in a multibyte character set) in the value.
A percent sign (%) in the pattern can match zero or more characters (as opposed to bytes in a multibyte character set) in the value. The pattern ‘%’ cannot match a null.
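These rules are easy to verify against a literal; a small sketch:

```sql
-- 'qwerty' is six characters long: '%' matches any run of characters
-- (including none), while each '_' must match exactly one character.
SELECT CASE WHEN 'qwerty' LIKE 'qw%'    THEN 'match' ELSE 'no match' END AS pct
     , CASE WHEN 'qwerty' LIKE 'qw____' THEN 'match' ELSE 'no match' END AS four_underscores
     , CASE WHEN 'qwerty' LIKE 'qw___'  THEN 'match' ELSE 'no match' END AS three_underscores
FROM   dual;
-- Expected: pct = 'match', four_underscores = 'match', three_underscores = 'no match'.
```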
In general, a LIKE expression can appear in four forms in a SQL query statement. Take, for example, some combinations of ‘%’ with the character string ‘qw’.
A few days ago Nigel Bayliss, in particular, gave me some good ideas (via email back and forth) that had never crossed my mind, which is why I'm writing this blog note today. Here's my test demo (running on 19.3) creating the table “staff”.
I've written two notes (here and here) about how to optimize Top-N and pagination queries on Oracle Database, and today's note is the last in the series. From time to time a strange bug appears with descending indexes, which I mentioned in the previous note, so here I can only discuss several SQL queries using the (default) ascending index. Here's my demo, creating a test table and quickly inserting some random data into all of its columns.
After I tweeted my previous blog post, Jonathan Lewis, Andrew Sayer, Patrick Jolliffe (Oracle gurus from around the world) and Tiger Liu (China) unexpectedly gave me many nice comments and valuable opinions. I therefore added some material to that blog note, marked [Update Mar 04, 2020: ……] (at that moment I was still unsure whether the behaviour was a bug), and at the same time struck through the old, incorrect analysis with a conspicuous strikethrough.
Following their good suggestions I've also run two interesting tests:
index full scan VS index range scan
descending index oddity (bug) VS ascending index (default)
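For reference, the two index variants under comparison can be sketched like this (the table and index names are my own assumptions, not from the post):

```sql
-- The default ascending index.
CREATE INDEX idx_t_id ON t (id);

-- A descending index; internally Oracle implements it as a function-based
-- index on SYS_OP_DESCEND(id), which is where the odd behaviour can creep in.
CREATE INDEX idx_t_id_desc ON t (id DESC);
```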
At the end of last month I wrote a blog note about classic pagination. At that moment I merely mentioned three approaches (ROWNUM, ROW_NUMBER() and OFFSET … FETCH …) to pagination queries and never compared their execution plans, which is the purpose of today's note as well.
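As a reminder, the three approaches look roughly like this when fetching rows 11–20 (a sketch against the “person” demo table; the ordering column is an assumption):

```sql
-- (1) Classic ROWNUM pagination: ROWNUM must be filtered in an outer query
--     because it is assigned before ORDER BY in the same query block.
SELECT *
FROM  ( SELECT p.*, ROWNUM AS rn
        FROM  ( SELECT * FROM person ORDER BY id ) p
        WHERE ROWNUM <= 20 )
WHERE rn >= 11;

-- (2) The ROW_NUMBER() analytic function.
SELECT *
FROM  ( SELECT p.*, ROW_NUMBER() OVER (ORDER BY id) AS rn
        FROM   person p )
WHERE rn BETWEEN 11 AND 20;

-- (3) The ANSI row-limiting clause (available since 12c).
SELECT *
FROM   person
ORDER  BY id
OFFSET 10 ROWS FETCH NEXT 10 ROWS ONLY;
```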
At the end of last month I introduced classic pagination with the ROWNUM query on Oracle Database. In today's blog note I'll continue by showing how to keep the ROWNUM query's result stable within an interval range, based on the first two methods (ROWNUM, ROW_NUMBER) from that note.
To vividly simulate a case resembling a real production system, I first need to build a test table named “person” with 1e7 (10,000,000) rows of random data. Here's a snippet of the SQL code.
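The snippet is essentially the earlier “person” demo scaled up; a sketch assuming the same column layout (the actual code in the post may differ):

```sql
-- Generate 1e7 rows: a cartesian join of two smaller row generators
-- (1e4 x 1e3) avoids the memory cost of one huge CONNECT BY.
INSERT /*+ APPEND */ INTO person (name, sex, flag, pwd)
SELECT DBMS_RANDOM.string('A', 6)
     , CASE MOD(ROWNUM, 2) WHEN 0 THEN 'F' ELSE 'M' END
     , CASE MOD(ROWNUM, 3) WHEN 0 THEN 'T' WHEN 1 THEN NULL ELSE 'F' END
     , DBMS_RANDOM.string('p', 6)
FROM   (SELECT LEVEL FROM dual CONNECT BY LEVEL <= 1e4)
     , (SELECT LEVEL FROM dual CONNECT BY LEVEL <= 1e3);

COMMIT;
```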