Monday, July 30, 2018

Oracle create View with Parameter

View with Parameter
1-virtual private database context
2-global package variable
3-Lookup Tables



#1 Virtual Private Database Context
I will use the SYS_CONTEXT function in the WHERE clause as a parameter to filter the data returned by the view's query.

The first step is creating a context that will handle the session variables:
 CREATE OR REPLACE CONTEXT MY_CTX USING CTX_PKG
 ACCESSED GLOBALLY;

Second, I will create the CTX_PKG package that the context uses to handle session parameters.
The package contains three procedures:
a-SET_SESSION_ID sets a unique identifier for the session, which I will use in my context.
b-CLEAR_SESSION clears the session identifier from my context.
c-SET_CONTEXT sets a variable's value in my context.

Package Specification
 CREATE OR REPLACE PACKAGE CTX_PKG
 IS
   PROCEDURE SET_SESSION_ID (IN_SESSION_ID VARCHAR2);
   PROCEDURE CLEAR_SESSION (IN_SESSION_ID VARCHAR2);
   PROCEDURE SET_CONTEXT (IN_NAME VARCHAR2, IN_VALUE VARCHAR2);
 END CTX_PKG;


Package Body
 CREATE OR REPLACE PACKAGE BODY CTX_PKG
 IS
   GC$SESSION_ID  VARCHAR2 (100);
   PROCEDURE SET_SESSION_ID (IN_SESSION_ID VARCHAR2)
   IS
   BEGIN
    GC$SESSION_ID := IN_SESSION_ID;
    DBMS_SESSION.SET_IDENTIFIER (IN_SESSION_ID);
  END;
   PROCEDURE CLEAR_SESSION (IN_SESSION_ID VARCHAR2)
   IS
   BEGIN
    DBMS_SESSION.SET_IDENTIFIER (IN_SESSION_ID);
    DBMS_SESSION.CLEAR_IDENTIFIER;
   END;
   PROCEDURE SET_CONTEXT (IN_NAME VARCHAR2, IN_VALUE VARCHAR2)
   IS
   BEGIN
    DBMS_SESSION.SET_CONTEXT ('MY_CTX',
                 IN_NAME,
                 IN_VALUE,
                 USER,
                 GC$SESSION_ID);
   END;
 END CTX_PKG;

Now let's test the context and the package:
 BEGIN
   CTX_PKG.SET_SESSION_ID ('222');
   CTX_PKG.SET_CONTEXT ('my_name', 'Mahmoud A. El-Sayed');
   CTX_PKG.SET_CONTEXT ('my_age', '26 YO');
 END;

Now I have set two context variables, my_name and my_age.
To query these variables I will use the SYS_CONTEXT function:
 SELECT SYS_CONTEXT ('MY_CTX', 'my_name'), SYS_CONTEXT ('MY_CTX', 'my_age')
  FROM DUAL;

The output will be


After verifying that the context and package work correctly, let's now create a view on the EMP table in the SCOTT schema to get the employees in a department.
 CREATE OR REPLACE VIEW EMP_IN_DEPARTMENT
 AS
   SELECT *
    FROM EMP
   WHERE DEPTNO = SYS_CONTEXT ('MY_CTX', 'deptno');

To filter the view by employees in department 20 only, execute this PL/SQL block first:
 BEGIN
   CTX_PKG.SET_SESSION_ID ('222');
   CTX_PKG.SET_CONTEXT ('deptno', '20');
 END;

Now let's run a SELECT statement against the EMP_IN_DEPARTMENT view and see the result:
 SELECT * FROM EMP_IN_DEPARTMENT;

The output result is like below
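When a session has finished with its parameters, the CLEAR_SESSION procedure defined above can be used to detach the client identifier, so the values set globally for that identifier are no longer visible to the session. A usage sketch based on the package above (the NULL result is the expected behavior, not captured output):

```sql
BEGIN
  -- Detach this session from identifier '222'; the MY_CTX values set
  -- for that identifier stop being visible in this session.
  CTX_PKG.CLEAR_SESSION ('222');
END;
/

-- Returns NULL after the identifier has been cleared.
SELECT SYS_CONTEXT ('MY_CTX', 'deptno') FROM DUAL;
```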



#2 Global Package Variables
I will use global package variables in the WHERE clause as parameters to filter the data returned by the view's query.

I will create a package that holds all the global variables which I will use as parameters in the view.
Package Specification
CREATE OR REPLACE PACKAGE GLB_VARIABLES
IS
   GN$DEPTNO   EMP.DEPTNO%TYPE;

   PROCEDURE SET_DEPTNO (
      IN_DEPTNO EMP.DEPTNO%TYPE);

   FUNCTION GET_DEPTNO
      RETURN EMP.DEPTNO%TYPE;
END;

Package Body
 CREATE OR REPLACE PACKAGE BODY GLB_VARIABLES
 IS
   PROCEDURE SET_DEPTNO (
    IN_DEPTNO EMP.DEPTNO%TYPE)
   IS
   BEGIN
    GN$DEPTNO := IN_DEPTNO;
   END;
   FUNCTION GET_DEPTNO
    RETURN EMP.DEPTNO%TYPE
   IS
   BEGIN
    RETURN GN$DEPTNO;
   END;
 END;

Let's now create a view that filters its data by the global variables in the GLB_VARIABLES package:
 CREATE OR REPLACE VIEW EMP_IN_DEPARTMENT2
 AS
   SELECT *
    FROM EMP
   WHERE DEPTNO = GLB_VARIABLES.GET_DEPTNO;

Now let's run a SELECT statement against the EMP_IN_DEPARTMENT2 view and see the result:
 EXEC GLB_VARIABLES.SET_DEPTNO(20);
 SELECT * FROM EMP_IN_DEPARTMENT2;

The output result is like below


3-Lookup Tables
Another solution is to create a lookup table that stores the view parameters, and build the view based on the data stored in that lookup table.
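A minimal sketch of this approach, with hypothetical table, view, and column names (not from the original post): each session stores its parameter values in the lookup table, keyed by its client identifier, and the view reads the parameter back from that table.

```sql
-- Hypothetical lookup table holding one row per session and parameter.
CREATE TABLE VIEW_PARAMETERS
(
  SESSION_ID   VARCHAR2 (100),
  PARAM_NAME   VARCHAR2 (30),
  PARAM_VALUE  VARCHAR2 (100)
);

-- The view reads its filter value from the lookup table, keyed by the
-- current session's client identifier.
CREATE OR REPLACE VIEW EMP_IN_DEPARTMENT3
AS
  SELECT E.*
    FROM EMP E
   WHERE E.DEPTNO = (SELECT P.PARAM_VALUE
                       FROM VIEW_PARAMETERS P
                      WHERE P.SESSION_ID =
                               SYS_CONTEXT ('USERENV', 'CLIENT_IDENTIFIER')
                        AND P.PARAM_NAME = 'DEPTNO');

-- Usage: store the parameter for this session, then query the view.
INSERT INTO VIEW_PARAMETERS
     VALUES (SYS_CONTEXT ('USERENV', 'CLIENT_IDENTIFIER'), 'DEPTNO', '20');

SELECT * FROM EMP_IN_DEPARTMENT3;
```

Unlike the global package variable approach, the parameter survives across sessions if the table rows are kept, so stale rows should be cleaned up when the session ends.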

Thursday, July 26, 2018

analytic function


This article provides a clear, thorough introduction to analytic functions and their various options through a series of simple yet concept-building examples. The article is intended for SQL coders who might not be using analytic functions due to unfamiliarity with their cryptic syntax or uncertainty about their logic of operation. Often I see that people tend to reinvent the features provided by analytic functions with native join and sub-query SQL. This article assumes familiarity with basic Oracle SQL, sub-queries, joins and group functions. Based on that familiarity, it builds the concept of analytic functions through a series of examples.
It is true that whatever an analytic function does can be done by native SQL, with join and sub-queries. But the same routine done by analytic function is always faster, or at least as fast, when compared to native SQL. Moreover, I am not considering here the amount of time that is spent in coding the native SQLs, testing, debugging and tuning them.

The general syntax of analytic function is:

Function(arg1,..., argn) OVER ( [PARTITION BY <...>] [ORDER BY <....>] [<window_clause>] )

<window_clause> is like "ROWS <window specification>" or "RANGE <window specification>"
All the keywords will be dealt with in detail as we walk through the examples. The script for creating the schema (SCOTT) on which the example queries of this article run can be found in ORACLE_HOME/sqlplus/demo/demobld.sql of any standard Oracle installation.

How are analytic functions different from group or aggregate functions?
SELECT deptno,
COUNT(*) DEPT_COUNT
FROM emp
WHERE deptno IN (20, 30)
GROUP BY deptno;

DEPTNO                 DEPT_COUNT            
---------------------- ----------------------
20                     5                     
30                     6                     

2 rows selected
Query-1
Consider the Query-1 and its result. Query-1 returns departments and their employee count. Most importantly it groups the records into departments in accordance with the GROUP BY clause. As such any non-"group by" column is not allowed in the select clause.

SELECT empno, deptno,
COUNT(*) OVER (PARTITION BY
deptno) DEPT_COUNT
FROM emp
WHERE deptno IN (20, 30);

     EMPNO     DEPTNO DEPT_COUNT
---------- ---------- ----------
      7369         20          5
      7566         20          5
      7788         20          5
      7902         20          5
      7876         20          5
      7499         30          6
      7900         30          6
      7844         30          6
      7698         30          6
      7654         30          6
      7521         30          6

11 rows selected.
Query-2
Now consider the analytic function query (Query-2) and its result. Note the repeating values of DEPT_COUNT column.

This brings out the main difference between aggregate and analytic functions. Though analytic functions give aggregate result they do not group the result set. They return the group value multiple times with each record. As such any other non-"group by" column or expression can be present in the select clause, for example, the column EMPNO in Query-2.

Analytic functions are computed after all joins, WHERE clause, GROUP BY and HAVING are computed on the query. The main ORDER BY clause of the query operates after the analytic functions. So analytic functions can only appear in the select list and in the main ORDER BY clause of the query.
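One practical consequence of this ordering, sketched here against the same EMP table: since the WHERE clause is evaluated before analytic functions, you cannot filter on an analytic result directly; wrap the computation in an inline view first.

```sql
-- WHERE dept_count > 5 would be invalid inside the inner query, because
-- the analytic COUNT is computed after the WHERE clause. Compute it in
-- an inline view, then filter on the resulting column.
SELECT empno, deptno, dept_count
  FROM (SELECT empno, deptno,
               COUNT (*) OVER (PARTITION BY deptno) dept_count
          FROM emp)
 WHERE dept_count > 5;
```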

In the absence of any PARTITION BY or <window_clause> inside the OVER( ) portion, the function acts on the entire record set returned by the WHERE clause. Note the results of Query-3 and compare them with the result of the aggregate function query, Query-4.

SELECT empno, deptno,
COUNT(*) OVER ( ) CNT
FROM emp
WHERE deptno IN (10, 20)
ORDER BY 2, 1;

     EMPNO     DEPTNO        CNT
---------- ---------- ----------
      7782         10          8
      7839         10          8
      7934         10          8
      7369         20          8
      7566         20          8
      7788         20          8
      7876         20          8
      7902         20          8
Query-3
SELECT COUNT(*) FROM emp
WHERE deptno IN (10, 20);

  COUNT(*)
----------
         8
Query-4
How to break the result set in groups or partitions?
It might be obvious from the previous example that the clause PARTITION BY is used to break the result set into groups. PARTITION BY can take any non-analytic SQL expression.

Some functions support the <window_clause> inside the partition to further limit the records they act on. In the absence of any <window_clause> analytic functions are computed on all the records of the partition clause.

The functions SUM, COUNT, AVG, MIN, MAX are the common analytic functions the result of which does not depend on the order of the records.

Functions like LEAD, LAG, RANK, DENSE_RANK, ROW_NUMBER, FIRST, FIRST_VALUE, LAST and LAST_VALUE depend on the order of the records. In the next example we will see how to specify that.

How to specify the order of the records in the partition?
The answer is simple: by the ORDER BY clause inside the OVER( ) clause. This is different from the ORDER BY clause of the main query, which comes after the WHERE clause. In this section we introduce each of the very useful functions LEAD, LAG, RANK, DENSE_RANK, ROW_NUMBER, FIRST, FIRST_VALUE, LAST and LAST_VALUE, and show how each depends on the order of the records.

The general syntax of specifying the ORDER BY clause in analytic function is:

ORDER BY <sql_expr> [ASC or DESC] NULLS [FIRST or LAST]

The syntax is self-explanatory.

ROW_NUMBER, RANK and DENSE_RANK
All three of these functions assign integer values to the rows depending on their order. That is the reason for grouping them together.

ROW_NUMBER( ) gives a running serial number to a partition of records. It is very useful in reporting, especially in places where different partitions have their own serial numbers. In Query-5, the function ROW_NUMBER( ) is used to give separate sets of running serial numbers to the employees of departments 10 and 20 based on their HIREDATE.

SELECT empno, deptno, hiredate,
ROW_NUMBER( ) OVER (PARTITION BY
deptno ORDER BY hiredate
NULLS LAST) SRLNO
FROM emp
WHERE deptno IN (10, 20)
ORDER BY deptno, SRLNO;

EMPNO  DEPTNO HIREDATE       SRLNO
------ ------- --------- ----------
  7782      10 09-JUN-81          1
  7839      10 17-NOV-81          2
  7934      10 23-JAN-82          3
  7369      20 17-DEC-80          1
  7566      20 02-APR-81          2
  7902      20 03-DEC-81          3
  7788      20 09-DEC-82          4
  7876      20 12-JAN-83          5

8 rows selected.
Query-5 (ROW_NUMBER example)
RANK and DENSE_RANK both assign a rank to the records based on some column value or expression. In case of a tie of two records at position N, RANK assigns both records position N, skips position N+1, and gives the next record position N+2, while DENSE_RANK also assigns both records position N but does not skip position N+1.

Query-6 shows the usage of both RANK and DENSE_RANK. For DEPTNO 20 there are two contenders for the first position (EMPNO 7788 and 7902). Both RANK and DENSE_RANK declare them joint toppers. RANK then skips the next position, 2, and the next employee, EMPNO 7566, is given position 3. For DENSE_RANK there are no such gaps.

SELECT empno, deptno, sal,
RANK() OVER (PARTITION BY deptno
ORDER BY sal DESC NULLS LAST) RANK,
DENSE_RANK() OVER (PARTITION BY
deptno ORDER BY sal DESC NULLS
LAST) DENSE_RANK
FROM emp
WHERE deptno IN (10, 20)
ORDER BY 2, RANK;

EMPNO  DEPTNO   SAL  RANK DENSE_RANK
------ ------- ----- ----- ----------
  7839      10  5000     1          1
  7782      10  2450     2          2
  7934      10  1300     3          3
  7788      20  3000     1          1
  7902      20  3000     1          1
  7566      20  2975     3          2
  7876      20  1100     4          3
  7369      20   800     5          4

8 rows selected.
Query-6 (RANK and DENSE_RANK example)
LEAD and LAG
LEAD has the ability to compute an expression on the next rows (rows which are going to come after the current row) and return the value to the current row. The general syntax of LEAD is shown below:

LEAD (<sql_expr>, <offset>, <default>) OVER (<analytic_clause>)

<sql_expr> is the expression to compute from the leading row.
<offset> is the index of the leading row relative to the current row.
<offset> is a positive integer with default 1.
<default> is the value to return if the <offset> points to a row outside the partition range.

The syntax of LAG is similar except that the offset for LAG goes into the previous rows.
Query-7 and its result show simple usage of LAG and LEAD function.

SELECT deptno, empno, sal,
LEAD(sal, 1, 0) OVER (PARTITION BY deptno ORDER BY sal DESC NULLS LAST) NEXT_LOWER_SAL,
LAG(sal, 1, 0) OVER (PARTITION BY deptno ORDER BY sal DESC NULLS LAST) PREV_HIGHER_SAL
FROM emp
WHERE deptno IN (10, 20)
ORDER BY deptno, sal DESC;

 DEPTNO  EMPNO   SAL NEXT_LOWER_SAL PREV_HIGHER_SAL
------- ------ ----- -------------- ---------------
     10   7839  5000           2450               0
     10   7782  2450           1300            5000
     10   7934  1300              0            2450
     20   7788  3000           3000               0
     20   7902  3000           2975            3000
     20   7566  2975           1100            3000
     20   7876  1100            800            2975
     20   7369   800              0            1100

8 rows selected.
Query-7 (LEAD and LAG)
FIRST VALUE and LAST VALUE function
The general syntax is:

FIRST_VALUE(<sql_expr>) OVER (<analytic_clause>)

The FIRST_VALUE analytic function picks the first record from the partition after doing the ORDER BY. The <sql_expr> is computed on the columns of this first record and results are returned. The LAST_VALUE function is used in similar context except that it acts on the last record of the partition.

-- How many days after the first hire of each department were the next
-- employees hired?

SELECT empno, deptno, hiredate - FIRST_VALUE(hiredate)
OVER (PARTITION BY deptno ORDER BY hiredate) DAY_GAP
FROM emp
WHERE deptno IN (20, 30)
ORDER BY deptno, DAY_GAP;

     EMPNO     DEPTNO    DAY_GAP
---------- ---------- ----------
      7369         20          0
      7566         20        106
      7902         20        351
      7788         20        722
      7876         20        756
      7499         30          0
      7521         30          2
      7698         30         70
      7844         30        200
      7654         30        220
      7900         30        286

11 rows selected.
Query-8 (FIRST_VALUE)
FIRST and LAST function
The FIRST function (or more properly the KEEP FIRST function) is used in a very special situation. Suppose we rank a group of records and find several records tied for the first rank. Now we want to apply an aggregate function to the records of that first rank. KEEP FIRST enables that.

The general syntax is:

Function( ) KEEP (DENSE_RANK FIRST ORDER BY <expr>) OVER (<partitioning_clause>)

Please note that FIRST and LAST are the only functions that deviate from the general syntax of analytic functions: they do not have the ORDER BY inside the OVER clause, and neither do they support any <window_clause>. The ranking done in FIRST and LAST is always DENSE_RANK. The query below shows the usage of the FIRST function. The LAST function is used in a similar context to perform computations on the last-ranked records.

-- How does each employee's salary compare with the average salary of the
-- first-year hires of their department?

SELECT empno, deptno, TO_CHAR(hiredate,'YYYY') HIRE_YR, sal,
TRUNC(
AVG(sal) KEEP (DENSE_RANK FIRST
ORDER BY TO_CHAR(hiredate,'YYYY') )
OVER (PARTITION BY deptno)
     ) AVG_SAL_YR1_HIRE
FROM emp
WHERE deptno IN (20, 10)
ORDER BY deptno, empno, HIRE_YR;

     EMPNO     DEPTNO HIRE        SAL AVG_SAL_YR1_HIRE
---------- ---------- ---- ---------- ----------------
      7782         10 1981       2450             3725
      7839         10 1981       5000             3725
      7934         10 1982       1300             3725
      7369         20 1980        800              800
      7566         20 1981       2975              800
      7788         20 1982       3000              800
      7876         20 1983       1100              800
      7902         20 1981       3000              800

8 rows selected.
Query-9 (KEEP FIRST)
How to specify the Window clause (ROW type or RANGE type windows)?
Some analytic functions (AVG, COUNT, FIRST_VALUE, LAST_VALUE, MAX, MIN and SUM among the ones we discussed) can take a window clause to further sub-partition the result and apply the analytic function. An important feature of the windowing clause is that it is dynamic in nature.

The general syntax of the <window_clause> is

[ROWS or RANGE] BETWEEN <start_expr> AND <end_expr>

<start_expr> can be any one of the following:
UNBOUNDED PRECEDING
CURRENT ROW
<sql_expr> PRECEDING or FOLLOWING
<end_expr> can be any one of the following:
UNBOUNDED FOLLOWING
CURRENT ROW
<sql_expr> PRECEDING or FOLLOWING
For ROW type windows the definition is in terms of row numbers before or after the current row. So for ROW type windows <sql_expr> must evaluate to a positive integer.

For RANGE type windows the definition is in terms of values before or after the current row's ORDER BY value. We will take this up in detail later.

The ROWS and RANGE keywords cannot appear together in one OVER clause. The window clause is defined in terms of the current row, but it may or may not include the current row: the window can start and end before the current row, or start and end after it. The only restriction is that the start point cannot come after the end point. If a point of the window is left unspecified, the default is UNBOUNDED PRECEDING for <start_expr> and CURRENT ROW for <end_expr>.

If the end point is the current row, a shorthand syntax in terms of only the start point can be used:
[ROWS or RANGE] [<start_expr> PRECEDING or UNBOUNDED PRECEDING]

[ROWS or RANGE] CURRENT ROW is also allowed, but it is redundant: in this case the function behaves as a single-row function and acts only on the current row.

ROW Type Windows
For analytic functions with ROW type windows, the general syntax is:

Function( ) OVER (PARTITION BY <expr1> ORDER BY <expr2,..> ROWS BETWEEN <start_expr> AND <end_expr>)
or
Function( ) OVER (PARTITION BY <expr1> ORDER BY <expr2,..> ROWS [<start_expr> PRECEDING or UNBOUNDED PRECEDING])

For ROW type windows the windowing clause is in terms of record numbers.
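A common real-life use of a ROW type window, sketched here against the same EMP table, is a running total: the window ROWS BETWEEN UNBOUNDED PRECEDING AND CURRENT ROW accumulates from the first row of the partition up to the current row.

```sql
-- Running total of salaries within each department, in hire-date order.
SELECT empno, deptno, hiredate, sal,
       SUM (sal) OVER (PARTITION BY deptno
                       ORDER BY hiredate
                       ROWS BETWEEN UNBOUNDED PRECEDING
                            AND CURRENT ROW) run_total
  FROM emp
 ORDER BY deptno, hiredate;
```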

The query Query-10 has no apparent real-life description (except column FROM_PU_TO_C), but the various windowing clauses are illustrated by a COUNT(*) function. The count simply shows the number of rows inside the window definition. Note the build-up of the count for each column for the year 1981.

The column FROM_P3_TO_F1 shows an example where start point of the window is before the current row and end point of the window is after current row. This is a 5 row window; it shows values less than 5 during the beginning and end.

-- The query below has no apparent real-life description (except
-- column FROM_PU_TO_C) but is remarkable in illustrating the various
-- windowing clauses by a COUNT(*) function.

SELECT empno, deptno, TO_CHAR(hiredate, 'YYYY') YEAR,
COUNT(*) OVER (PARTITION BY TO_CHAR(hiredate, 'YYYY')
ORDER BY hiredate ROWS BETWEEN 3 PRECEDING AND 1 FOLLOWING) FROM_P3_TO_F1,
COUNT(*) OVER (PARTITION BY TO_CHAR(hiredate, 'YYYY')
ORDER BY hiredate ROWS BETWEEN UNBOUNDED PRECEDING AND CURRENT ROW) FROM_PU_TO_C,
COUNT(*) OVER (PARTITION BY TO_CHAR(hiredate, 'YYYY')
ORDER BY hiredate ROWS BETWEEN 3 PRECEDING AND 1 PRECEDING) FROM_P2_TO_P1,
COUNT(*) OVER (PARTITION BY TO_CHAR(hiredate, 'YYYY')
ORDER BY hiredate ROWS BETWEEN 1 FOLLOWING AND 3 FOLLOWING) FROM_F1_TO_F3
FROM emp
ORDER BY hiredate;

 EMPNO  DEPTNO YEAR FROM_P3_TO_F1 FROM_PU_TO_C FROM_P2_TO_P1 FROM_F1_TO_F3
------ ------- ---- ------------- ------------ ------------- -------------
  7369      20 1980             1            1             0             0
  7499      30 1981             2            1             0             3
  7521      30 1981             3            2             1             3
  7566      20 1981             4            3             2             3
  7698      30 1981             5            4             3             3
  7782      10 1981             5            5             3             3
  7844      30 1981             5            6             3             3
  7654      30 1981             5            7             3             3
  7839      10 1981             5            8             3             2
  7900      30 1981             5            9             3             1
  7902      20 1981             4           10             3             0
  7934      10 1982             2            1             0             1
  7788      20 1982             2            2             1             0
  7876      20 1983             1            1             0             0

14 rows selected.
Query-10 (ROW type windowing example)
The column FROM_PU_TO_C shows an example where the start point of the window is before the current row and the end point of the window is the current row. This column alone has some real-world significance: it can be thought of as the yearly employee build-up of the organization as each employee is hired.

The column FROM_P2_TO_P1 shows an example where start point of the window is before the current row and end point of the window is before the current row. This is a 3 row window and the count remains constant after it has got 3 previous rows.

The column FROM_F1_TO_F3 shows an example where start point of the window is after the current row and end point of the window is after the current row. This is a reverse of the previous column. Note how the count declines during the end.

RANGE Windows
For RANGE windows the general syntax is the same as that of ROWS:

Function( ) OVER (PARTITION BY <expr1> ORDER BY <expr2> RANGE BETWEEN <start_expr> AND <end_expr>)
or
Function( ) OVER (PARTITION BY <expr1> ORDER BY <expr2> RANGE [<start_expr> PRECEDING or UNBOUNDED PRECEDING])

For <start_expr> or <end_expr> we can use UNBOUNDED PRECEDING, CURRENT ROW, or <sql_expr> PRECEDING or FOLLOWING. However, for RANGE type windows <sql_expr> must evaluate to a value compatible with the ORDER BY expression <expr2>.

<sql_expr> is a logical offset. It must be a constant or expression that evaluates to a positive numeric value or an interval literal. Only one ORDER BY expression is allowed.

If <sql_expr> evaluates to a numeric value, then the ORDER BY expr must be a NUMBER or DATE datatype. If <sql_expr> evaluates to an interval value, then the ORDER BY expr must be a DATE datatype.

Note the example (Query-11) below which uses RANGE windowing. The important thing here is that the size of the window in terms of the number of records can vary.

-- For each employee in departments 20 and 30, give the count of employees
-- earning at most half of their salary (CNT_LT_HALF) and the count of
-- employees earning at least half again more than their salary (CNT_MT_HALF).

SELECT deptno, empno, sal,
Count(*) OVER (PARTITION BY deptno ORDER BY sal RANGE
BETWEEN UNBOUNDED PRECEDING AND (sal/2) PRECEDING) CNT_LT_HALF,
COUNT(*) OVER (PARTITION BY deptno ORDER BY sal RANGE
BETWEEN (sal/2) FOLLOWING AND UNBOUNDED FOLLOWING) CNT_MT_HALF
FROM emp
WHERE deptno IN (20, 30)
ORDER BY deptno, sal;

 DEPTNO  EMPNO   SAL CNT_LT_HALF CNT_MT_HALF
------- ------ ----- ----------- -----------
     20   7369   800           0           3
     20   7876  1100           0           3
     20   7566  2975           2           0
     20   7788  3000           2           0
     20   7902  3000           2           0
     30   7900   950           0           3
     30   7521  1250           0           1
     30   7654  1250           0           1
     30   7844  1500           0           1
     30   7499  1600           0           1
     30   7698  2850           3           0

11 rows selected.
Query-11 (RANGE type windowing example)
Order of computation and performance tips
Defining the PARTITION BY and ORDER BY clauses on indexed columns (ordered in accordance with the PARTITION BY clause and then the ORDER BY clause of the analytic function) will provide optimum performance. For Query-5, for example, a composite index on the (deptno, hiredate) columns will prove effective.

It is advisable to always use CBO for queries using analytic functions. The tables and indexes should be analyzed and optimizer mode should be CHOOSE.

Even in the absence of indexes, analytic functions provide acceptable performance, but they need to sort the data to compute the partition and order by clauses. If a query contains multiple analytic functions, sorting and partitioning on two different columns should be avoided if neither column is indexed.

Conclusion
The aim of this article is not to make the reader force analytic functions into every other complex SQL. It is meant for the SQL coder who has been avoiding analytic functions till now, even in complex analytic queries, painstakingly reinventing the same features with native SQL and join queries. Its job is done if such a person finds analytic functions clear, understandable and usable after going through the article, and starts using them.
» Shouvik Basu's blog
Comments
Permalink Submitted by Stephen (not verified) on Fri, 2005-08-26 07:14.
Most of the examples that I have come across deal with simple dates. What about when you have a data set that contains records with timestamps and you would like to roll them up to the second and then look at a sliding window to find, say the busiest 5 minutes of the day? It is trivial to create the query to aggregate the data into per-second blocks but I cannot figure out the syntax to put a window around it.

Thanks.

useful topic
Permalink Submitted by abuleen on Mon, 2009-01-12 04:28.
This is very useful topic,
But I have simple problem I can not solve,
I want the id(or any column) of the row before the current row?
Solution
Permalink Submitted by probal1u on Mon, 2012-04-09 04:41.
Hi abuleen, try this :

select empno,ename,deptno,lag(empno,1,999999) over (order by empno nulls last) prev_id
from emp
order by empno
dynamic window range
Permalink Submitted by Veslar on Thu, 2010-09-16 02:58.
Hi Stephen,

try this > > >

create table trt (timepoint timestamp,random_str varchar2(7));

insert into trt values(to_timestamp('16.09.2010 07:05:51:781000','DD.MM.YYYY HH24:MI:SS:FF6'),'piNX');
insert into trt values(to_timestamp('16.09.2010 07:09:11:453000','DD.MM.YYYY HH24:MI:SS:FF6'),'EKzU');
insert into trt values(to_timestamp('16.09.2010 07:09:16:515000','DD.MM.YYYY HH24:MI:SS:FF6'),'rnZg');
insert into trt values(to_timestamp('16.09.2010 07:09:18:890000','DD.MM.YYYY HH24:MI:SS:FF6'),'LFEy');
insert into trt values(to_timestamp('16.09.2010 07:09:24:187000','DD.MM.YYYY HH24:MI:SS:FF6'),'BNZf');
insert into trt values(to_timestamp('16.09.2010 07:09:26:937000','DD.MM.YYYY HH24:MI:SS:FF6'),'FXFD');
insert into trt values(to_timestamp('16.09.2010 07:09:29:140000','DD.MM.YYYY HH24:MI:SS:FF6'),'Esgz');
insert into trt values(to_timestamp('16.09.2010 07:09:30:921000','DD.MM.YYYY HH24:MI:SS:FF6'),'wolR');
insert into trt values(to_timestamp('16.09.2010 07:09:32:218000','DD.MM.YYYY HH24:MI:SS:FF6'),'AijN');
insert into trt values(to_timestamp('16.09.2010 07:09:33:500000','DD.MM.YYYY HH24:MI:SS:FF6'),'PUot');
insert into trt values(to_timestamp('16.09.2010 07:09:34:625000','DD.MM.YYYY HH24:MI:SS:FF6'),'nIUX');
insert into trt values(to_timestamp('16.09.2010 07:09:35:796000','DD.MM.YYYY HH24:MI:SS:FF6'),'DGTf');
insert into trt values(to_timestamp('16.09.2010 07:09:36:750000','DD.MM.YYYY HH24:MI:SS:FF6'),'eEYe');
insert into trt values(to_timestamp('16.09.2010 07:09:37:828000','DD.MM.YYYY HH24:MI:SS:FF6'),'mHYt');
insert into trt values(to_timestamp('16.09.2010 07:09:38:859000','DD.MM.YYYY HH24:MI:SS:FF6'),'buWA');
insert into trt values(to_timestamp('16.09.2010 07:09:39:968000','DD.MM.YYYY HH24:MI:SS:FF6'),'yllr');
insert into trt values(to_timestamp('16.09.2010 07:09:45:359000','DD.MM.YYYY HH24:MI:SS:FF6'),'WJua');
insert into trt values(to_timestamp('16.09.2010 07:09:48:328000','DD.MM.YYYY HH24:MI:SS:FF6'),'Zsrp');
insert into trt values(to_timestamp('16.09.2010 07:09:50:656000','DD.MM.YYYY HH24:MI:SS:FF6'),'WtDJ');
commit;

SELECT TO_CHAR(timepoint,'DD.MM.YYYY HH24:MI:SS.FF3') "Start_bussiest_5sec_interval"
FROM
  (SELECT timepoint,
    DENSE_RANK() over (order by forw_dens_per_5sec DESC,end_time_5sec-timepoint,timepoint) rank
  FROM
    (SELECT timepoint,
      COUNT(*) over (order by timepoint range BETWEEN CURRENT row AND interval '5' second following ) forw_dens_per_5sec,
      MAX(timepoint) over (order by timepoint range BETWEEN CURRENT row AND interval '5' second following ) end_time_5sec
    FROM trt
    )
  )
WHERE rank=1;

Regards

Veslar

Correction
Permalink Submitted by pakkiaraj on Thu, 2010-12-30 03:19.
The output of your query is: Start_bussiest_5sec_interval=
16.09.2010 07:09:35.796. But I guess this is not the start of the busiest 5 sec interval. I guess it should be = 16.09.2010 07:09:30.921. If I am correct then your query should be modified here:

DENSE_RANK() over (order by forw_dens_per_5sec DESC,end_time_5sec-timepoint DESC,timepoint) rank.

Regards.
Pakkia
Explanation
Permalink Submitted by Veslar on Sat, 2011-07-02 06:35.
Try this and you'll see why it is so, as I wrote the first time > > >

SELECT *
FROM
(SELECT
DENSE_RANK() over (order by forw_dens_per_5sec DESC,end_time_5sec-timepoint,timepoint) rank,
s.*,
end_time_5sec-timepoint dff
FROM
(SELECT timepoint,
COUNT(*) over (order by timepoint range BETWEEN CURRENT row AND interval '5' second following ) forw_dens_per_5sec,
MAX(timepoint) over (order by timepoint range BETWEEN CURRENT row AND interval '5' second following ) end_time_5sec
FROM trt
) s
)ss
--WHERE rank=1
;

The density is 5 records per interval for both (my and your solution) but mine has minor diff between first member and last one (order by forw_dens_per_5sec DESC,end_time_5sec-timepoint,timepoint).

Permalink Submitted by Sateesh (not verified) on Mon, 2005-08-22 14:40.
When I was stuck with the oracle documentation reading about analytic functions, being described highly formally, I got this
excellent article describing complex things in really simple terms with examples.

Permalink Submitted by Dwarak (not verified) on Thu, 2005-02-03 18:32.
Hey, why don't u write a book on this.
This article is really cool.

Permalink Submitted by Russell (not verified) on Thu, 2005-04-14 18:34.
I was stuck with some Oracle Analytical function and was looking for some help. Bumped across this article.

Really neat! Keep up the good work.

Permalink Submitted by bill gfroehrer (not verified) on Tue, 2005-06-14 08:20.
Hi Shouvik,

I was dusting off my OLAP stuff (been a while) seeking a solution to a data loading issue. Your article "Analytic functions by Example" helped me zero in on the solution.

Gratitude and a "Thousand At-A-Boys" to you.

Keep up the good work!

BG...

Permalink Submitted by Luke Curran (not verified) on Wed, 2005-06-08 09:11.
Excellent article. I personally learn best with simple examples to demonstrate potentially complex concepts ... and this article was perfect. Thanks.

Permalink Submitted by usha (not verified) on Sun, 2005-07-10 22:46.
Thanx a lot!! This doc really helped me a lot.
One can get a clear idea regarding analytical functions if he/she go thru this page once..
Usha

Permalink Submitted by Manoj Pathak (not verified) on Wed, 2005-04-27 15:44.
Excellent!! it is a great privilege to work with you.

Permalink Submitted by Vadi (not verified) on Fri, 2005-05-06 12:50.
-- How many days after the first hire of each department were the next employees hired?
SELECT empno, deptno, hiredate ? FIRST_VALUE(hiredate)
OVER (PARTITION BY deptno ORDER BY hiredate) DAY_GAP
FROM emp
WHERE deptno IN (20, 30)
ORDER BY deptno, DAY_GAP;

I think there is an error in this query. It does not return the same results as shown in the example.
Thanks in advance

correction
Permalink Submitted by santoshshinde (not verified) on Fri, 2006-09-22 10:54.
Quote:
A great article with everything put together about getting started with analytical functions.
Regarding correction:

Replace the '?' with '-'. I think that will give the difference in dates to calculate the days.

SELECT empno, deptno, hiredate - FIRST_VALUE(hiredate)
OVER (PARTITION BY deptno ORDER BY hiredate) DAY_GAP
FROM emp
WHERE deptno IN (20, 30)
ORDER BY deptno, DAY_GAP;
Permalink Submitted by Ivor Oorloff (not verified) on Fri, 2005-04-29 05:33.
I was trying to use the dense_rank function to tune a query but really couldn't understand the Oracle doco - your page helped me make the breakthrough and my query runs instantly now whereas it took an hour before.

This is an excellent
Permalink Submitted by Karen Sullivan (not verified) on Tue, 2005-10-25 10:23.
This is an excellent treatment of analytic functions. Other analytic function articles just didn't cut it for me. After reading this, I was able to quickly solve an SQL problem.

I've added your URL to our Business Unit's Web Site.

Thank you.

Really Enthralling
Permalink Submitted by Arindam Mukherjee (not verified) on Tue, 2006-03-21 04:12.
Hello,

What a presentation you have had in this web page. I appreciate your intuition and noble efforts. Looking forward to seeing your article on how to write well-tuned SQL.
May GOD bless you!!

Regards,
Arindam Mukherjee

Analytic Functions - Awesome article
Permalink Submitted by Prakash Rai (not verified) on Sat, 2006-10-07 23:26.
Shouvik -

Like many other readers I am your real fan.

I have been looking to explore the analytic functions, and this article with examples is just a great start. It wouldn't amaze me to Google your name and find some rare gems about Oracle.

Keep up the great work!

Regards,
Prakash
Analytic Functions - Great
Permalink Submitted by B Adhvaryu (not verified) on Fri, 2006-11-24 11:56.
Shouvik,

This is one of the best articles I have ever found to start up with analytic functions.

I would say just keep it up and forward the same related articles.

Regards,
Ba

Very Great Effort - Add more here with time
Permalink Submitted by Aasif (not verified) on Fri, 2006-12-15 00:15.
This is a really great effort. It helped me a lot to clear up my ambiguities. Please add more to it with the passage of time as you learn/experience more about these functions.

Best Regards,
Aasif
execllent effort
Permalink Submitted by mirahmad on Fri, 2007-03-09 04:42.
This is a very good article for people who don't know how to use the functions for analytical work.

awesome
Permalink Submitted by kang on Wed, 2008-03-12 00:05.
awesome.
Thanks.

is this a bug?
Permalink Submitted by kang on Wed, 2008-03-12 19:27.
create table emp (
empno varchar2(10),
deptno varchar2(10),
sal number(10)
)

insert into emp(empno,deptno,sal) values('1','10',101);
insert into emp(empno,deptno,sal) values('2','20',102);
insert into emp(empno,deptno,sal) values('3','20',103);
insert into emp(empno,deptno,sal) values('4','10',104);
insert into emp(empno,deptno,sal) values('5','30',105);
insert into emp(empno,deptno,sal) values('6','30',106);
insert into emp(empno,deptno,sal) values('7','30',107);
insert into emp(empno,deptno,sal) values('8','40',108);
insert into emp(empno,deptno,sal) values('9','30',109);
insert into emp(empno,deptno,sal) values('10','30',110);
insert into emp(empno,deptno,sal) values('11','30',100);

SELECT empno, deptno, sal,
last_value(sal)
OVER (PARTITION BY deptno order by sal desc) col1,
first_value(sal)
OVER (PARTITION BY deptno order by sal asc) col2,
first_value(sal)
OVER (PARTITION BY deptno order by sal desc) col3,
last_value(sal)
OVER (PARTITION BY deptno order by sal asc) col4
FROM emp

col2, col3 return what I expect.

I don't know why col1 and col4 return the these kinds of results.

Caveat with LAST_VALUE function (answering Kang's concern)
Permalink Submitted by shouvikb on Tue, 2008-03-18 00:44.
If anyone has Metalink Access, read Doc ID 696344.992.

Broadly speaking,
All analytic functions with ORDER BY operate on a default window - a subset of contiguous rows (according to the analytic ORDER BY clause) within the partition. The default windowing clause is "RANGE BETWEEN UNBOUNDED PRECEDING AND CURRENT ROW". This affects the LAST_VALUE.

To get the results as you want you need to have.
(PARTITION BY deptno order by sal desc ROWS
BETWEEN UNBOUNDED PRECEDING AND UNBOUNDED FOLLOWING).

Hope this helps,
Shouvik Basu
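Applied to Kang's query above, the corrected col1 would read (a sketch using the emp table Kang created):

```sql
SELECT empno, deptno, sal,
       LAST_VALUE(sal)
       OVER (PARTITION BY deptno ORDER BY sal DESC
             ROWS BETWEEN UNBOUNDED PRECEDING AND UNBOUNDED FOLLOWING) col1
FROM emp;
```

With the explicit window, col1 returns the lowest sal in each deptno partition for every row, instead of stopping at the current row.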

Query -11 column CNT_MT_HALF
Permalink Submitted by jayesh_nazre on Wed, 2008-03-26 11:12.
I am finding it hard to understand this; can someone explain how it is counting the last column?
Based on what I understand, "CNT_LT_HALF" divides the current row's sal and compares it with the preceding sal row values, and that looks fine (note the count does not include the current row, so it is 'less than', not 'less than or equal to').
However, when I try to apply the same to the "CNT_MT_HALF" column:
say, take the current row's sal/2, e.g. 800/2 = 400;
now within dept = 20, if you compare this value with all the following sal values, then for the first row the "CNT_MT_HALF" column should have a value of "4" and not "3". I ran the query in an Oracle database and it matches the article's output ("3"). I know I am understanding it wrong; can someone explain?

Way to do a running total by date.
Permalink Submitted by ccheilig on Mon, 2008-10-13 11:24.
I had a really slow SQL query that had columns of effectivedate, amount, and running total to date.
By using an analytic function I was able to turn a 90-second query into a < 2-second query with the following syntax.

SELECT effectivedate, amount, SUM(amount) OVER (PARTITION BY sysdate ORDER BY effectivedate)
FROM mytable
ORDER BY effectivedate DESC

I think this little snippet will be useful in many applications.
Truly enjoyed your article, which gave me the idea to try it this way! Thanks so much!

Analytic Function
Permalink Submitted by zhopka on Tue, 2008-11-11 10:20.
Am I doing something wrong?
I have table a and table b, the relation between them is one to many. In table a I have column tax, in table b I have unit price and qty, what I need to get by my query is avg tax, total (sum) tax and sum of price*qty. I have problem with avg_tax_chg and doc_$_tot when join tables. Can somebody help?
select a.doc_num, a.tax_chg,
avg(a.tax_chg) over ( ) avg_tax_chg,
sum(a.tax_chg) over (partition by a.ord_tp_cd,a.final_dt,a.stat_cd) tot_tax_chg,
sum(b.qty*b.unit_prc) over (partition by s.del_doc_num) doc_$_tot
from a, b
where a.doc_num = b.doc_num

Thanks
Recursive Sum
Permalink Submitted by muratyildiz on Sat, 2008-11-29 20:02.
I want a recursive summary using analytic functions.
How can I do it?
Thanks
For example:

A TABLE    B TABLE           SQL RESULT MUST BE
CODE       CODE    VALUES    CODE    SUM()
-----      --------------    ------------------
1          10000   200       1       600
10         10001   300       10      600
100        10010   100       100     600
1000       2000    200       1000    500
10000                        10000   200
10001                        10001   300
1001                         1001    100
10010                        10010   100
2                            2       200
20                           ...
200
2000
...
This topic is however quite
Permalink Submitted by swainprafullaranjan on Tue, 2008-12-09 03:03.
This topic is however quite informative and constructive .
But there are some explainations in which still abscruity is there
So in the next topic please try to give the proper and cleare explaination

This Article is a Model Example of Analytical Writing Clarity
Permalink Submitted by harrysmall3 on Fri, 2008-12-19 18:27.
RE:"This topic is however quite"

Mr. Swainprafullaranjan, the only obscurity with the text on this page lay within the content of your comment. You provide incomplete sentences with the majority of words mis-spelled and absolutely no reference to what part of the article text that you find unclear or improper .

Had you written your thoughts truly as just a comment I would have refrained from mine. Since you felt obliged to lecture the author on improvement before writing further topics, I feel absolutely obliged to provide the same suggestion to you before critiquing any future article, post, or comment on this site.

Shouvic Basu effectively conveyed in his article, completely, what his topic statement intended- to provide ".. a clear, thorough concept of analytic functions and its various options by a series of simple yet concept building examples".

The flow of his presentation demonstrates a unique ability that I would coin as "precision in detail" of the subject matter. The quantity of detail and selection of organization in my opinion produced an article of many abilities - readability, learnability, capability... and enjoyability - a rare treat in reference material work.

I believe this view is the shared consensus amongst his readers. I can relate exactly to the response by Luke Curran: "Excellent article. I personally learn best with simple examples to demonstrate potentially complex concepts ... and this article was perfect. Thanks. "

Shouvic, I appreciate with great admiration your knowledge sharing and look forward to all future topics that you chose to cover. I have been working extensively with analytic functions on my current work projects and this article filled in a lot of background for me as well as new areas to further this study.

As an author of a chess novel myself, there were periods (painstaking times) in my writing when, in retrospect, had I possessed the same ability to convey detail with such precision as you, I believe I would not be dyeing as much grey hair today!

Best Regards,
Harry

Thanks for a Very Useful Article
Permalink Submitted by Dipali Vithalani on Sat, 2009-01-03 07:21.
Hi,

I decided to learn analytical functions conceptually and luckily I got this article. Its very useful. The way of treating the subject is very effective. While reading, I tried them practically and now I am quite confident about their usage.

Thanks a lot for such a useful article!

Regards,
Dipali.

It is very helpful
Permalink Submitted by cpsundar on Fri, 2009-02-27 14:48.
It gives quick understanding of analytical function.
Great work.

Thanks for your article.

Awesome
Permalink Submitted by holdingbe on Thu, 2009-04-02 09:53.
Thanks a lot for such a useful article!

Very Good Overview
Permalink Submitted by Vackar on Tue, 2009-11-03 08:40.
That was one of the best articles I've read in terms of how to explain an otherwise rather complicated topic.

Great work!!

YTD Calculation using analytical functions
Permalink Submitted by tssr_2001 on Thu, 2010-02-04 03:41.
Hi,

I am using Oracle Database 10gR1.
Please help me to write the following query.

I want to calculate YTD (year-to-date): I have to sum up monthly, then sum up yearly.
E.g.: I have to calculate YTD as follows:

GL Curr Day Amount Month YTD
5805 45454 1-Jan 5.23 5.23 5.23
5805 45454 2-Jan -4.52 0.71 5.94
5805 45454 3-Jan 25.3 26.01 31.95
5805 45454 4-Jan 10.53 36.54 68.49
5805 45454 5-Jan -1.88 34.66 103.15

Here in this example, we have columns named gl, curr, day, amount, month, year, taken from one transaction table.
Now, to calculate YTD (year-to-date), I have to calculate the month first and then the year.

Step 1: Month wise summation :
I have to calculate sum for each day of the month.

From above example for month wise summation:
GL Curr Day Amount Month
5805 45454 1-Jan 5.23 5.23 ( sum of jan1 = 5.23 )
5805 45454 2-Jan -4.52 0.71 ( sum of jan2 = 5.23 + (-4.52) = 0.71 )
5805 45454 3-Jan 25.3 26.01 ( sum of jan3 = 0.71 + 25.3 = 26.01 )
5805 45454 4-Jan 10.53 36.54 ( sum of jan4 = 26.01 + 10.53 = 36.54 )
5805 45454 5-Jan -1.88 34.66 ( sum of jan5 = 36.54 + (-1.88) = 34.66 )

Step 2: Year wise summation : YTD Calculation :
We use the Step 1 output to perform this Step 2 process, i.e., the YTD calculation.
So we do the year-wise summation over the Step 1 month-wise data (the above output data).

Again, from above example for year wise summation:
GL Curr Day Amount Month YTD
5805 45454 1-Jan 5.23 5.23 5.23 ( ytd = 5.23 )
5805 45454 2-Jan -4.52 0.71 5.94 ( ytd = 5.23 + 0.71 = 5.94)
5805 45454 3-Jan 25.3 26.01 31.95 ( ytd = 5.94 + 26.01 = 31.95 )
5805 45454 4-Jan 10.53 36.54 68.49 ( ytd = 31.95 + 36.54 = 68.49 )
5805 45454 5-Jan -1.88 34.66 103.15 ( ytd = 68.49 + 34.66 = 103.15 )

So for the year-to-date calculation, we have to sum all the dates for a month and then sum all the months to get the YTD.
How can I do this YTD using analytic functions (i.e., I have to sum up for every row)?

Thanks.

answer to tssr_2001
Permalink Submitted by pakkiaraj on Thu, 2010-12-30 00:46.
Hi,

select b.*,
       sum(month) over (partition by gl order by day) year
from (select gl, curr, day, amount,
             sum(amount) over (partition by gl order by day) month
      from temp1_p) b;

This article is very nice. Hats off to ORAFAQ!

Good explanation of Analytical Functions
Permalink Submitted by bhavin_rudani on Sat, 2010-02-20 04:41.
Thanks for the details on Analytical Functions

Great teacher!
Permalink Submitted by sadenwala on Fri, 2010-03-26 13:32.
With 16 years of Oracle experience, I would just stay away from the "partition by" and "over" clauses, because I never really understood what they did. Your article is an eye opener for me. Yeah, I promise I will buy your book if you write one!!

THANK YOU!!

Thanks for the Post
Permalink Submitted by panyamravi on Tue, 2010-03-30 09:08.
It's nice to see a detailed explanation about the Analytical Functions by considering our old "EMP" database as the source.

Good Article on analytical functions
Permalink Submitted by shivakrishnas on Fri, 2010-06-25 07:09.
Thank you for giving clear explanation about analytical functions. Very useful.

Very good article for analytical functions
Permalink Submitted by prashantgauda on Mon, 2010-11-15 03:23.
Yesterday, I was dealing with some of the queries asked by my brother which I find very cumbersome with group by and long query.

I searched for analytical functions and believe me the content in your post is so easy to understand that I quickly solved his query and protected my image of knowing a bit more than him. :)

Anyway, I have suggested him to stick to your examples for basic understanding.

Thanks for very good post.

Thanks,
Prashant Gauda

How are analytic functions different from group or aggregate functions
Permalink Submitted by ramth85 on Mon, 2010-11-22 03:13.
Fine answer!

Thank you very much...
Permalink Submitted by gknauss on Mon, 2011-11-28 07:33.
I came across this article back in 2009 and printed it out (wanted to make sure that if the post was ever removed, I would still have this vital information). I have referred back to these examples many times as a reference; they have helped me numerous times in solving a problem. Just wanted to let you know that an article you wrote in 2004 is still being used today. Thank you.

Kindly clear me about the rows and ranges, please?
Permalink Submitted by seetha_ta on Tue, 2011-12-27 05:39.
Hi

It's very easy to follow your article, but I am not aware of the ROWS and RANGE concept.
In your example, what does "3 PRECEDING AND 1 FOLLOWING" refer to, and how does it work in the syntax below? I tried a lot to get it, but in vain.
Kindly clarify it for me; it would be very helpful.

SELECT empno, deptno, TO_CHAR(hiredate, 'YYYY') YEAR,
COUNT(*) OVER (PARTITION BY TO_CHAR(hiredate, 'YYYY')
ORDER BY hiredate ROWS BETWEEN 3 PRECEDING AND 1 FOLLOWING) FROM_P3_TO_F1
FROM emp
ORDER BY hiredate

analytic functions - partition order
Permalink Submitted by qietok on Tue, 2012-03-20 05:12.
Hi,

Thanks for great blog :-) I want to ask, if it is possible to do some kind of partition ordering. I know how to order records in partitions created by partition by and order clause, I can assign row numbering specific for partition. What I want is to order these partitions according to first record in that partition.

Is this possible? :)

Thanks.

Need help for rank based dates
Permalink Submitted by debamishra on Wed, 2012-12-26 02:49.
Hi,

I can't find any documentation to create two ranks based on single field, like rank() over (partition by field order by date where date > sysdate) rank_1, rank() over (partition by field order by date where date <= sysdate) rank_2.

Is it possible?
One of my reports has a
Permalink Submitted by kimzplaze on Sun, 2012-12-30 21:00.
One of my reports has a requirement to derive statistics by "last of something," where the conditions differ from case to case. The example on partition by has helped me a lot. I thought I would have to do that report entirely in PL/SQL :) This page has definitely saved me a lot of time.

Thanks for helping a newbie SQL user like me to gain some levels :)
clarification required
Permalink Submitted by rupika on Tue, 2013-01-29 04:06.
Hi,

You have said that:

"The main ORDER BY clause of the query operates after the analytic functions. So analytic functions can only appear in the select list and in the main ORDER BY clause of the query."

Can you please give example of analytic function being used in the main order by clause.
That would be a great help from your side.

TIA

clarification given
Permalink Submitted by Singha on Wed, 2013-02-20 11:09.
You can just have a look at the Query-5 example!
The SRLNO field is what you ask for.

SELECT empno, deptno, hiredate, ROW_NUMBER( ) OVER (PARTITION BY deptno ORDER BY hiredate NULLS LAST) >>> SRLNO <<< -- this is the select list
FROM emp
WHERE deptno IN (10, 20)
ORDER BY deptno, >>> SRLNO <<<; -- this is the main order by clause

Sunday, May 20, 2018

Oracle error FRM-40656

I faced a problem when I updated the header of a form, which gave me:
FRM-40656: Record has been updated by another user. Requery to see changes.
I defined an ON-LOCK trigger containing only NULL,
and my problem was solved.
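For reference, the ON-LOCK trigger body is literally a single null statement. Note that this suppresses the form's default record locking, so concurrent updates are no longer detected:

```sql
-- Forms ON-LOCK trigger: do nothing instead of the default lock/requery check
BEGIN
  NULL;
END;
```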
Regards,
 Yasser

Wednesday, May 2, 2018

Oracle file not found /opt/oracle/product/11.2.0/dbhome_1/oc4j/j2ee/oc4j_applications/applications/em.ear

Situation

Solution

Monday, April 30, 2018

ORA-01033

The "ORA-01033: ORACLE initialization or shutdown in progress." error can also happen when Oracle is attempting startup or shutdown and is "hanging" on a resource such as a failed disk, writing to redo, etc.

Monday, March 26, 2018

Oracle work with sql plus

Using SQL*Plus
SQL*Plus is a command-line tool that provides access to the Oracle RDBMS.

SQL*Plus enables you to:

Enter SQL*Plus commands to configure the SQL*Plus environment
Start up and shut down an Oracle database
Connect to an Oracle database
Enter and execute SQL commands and PL/SQL blocks
Format and print query results
SQL*Plus is available on several platforms. In addition, it has a web-based user interface, iSQL*Plus.

SQL*Plus is a client terminal program that allows users to interact with an Oracle server to manipulate data and data structures. Users type SQL statements into SQL*Plus, which sends them to the Oracle server. The Oracle server then validates and executes the statements on its databases. The query results are returned to SQL*Plus and displayed to the user. Besides sending SQL statements to the server, SQL*Plus also saves them into a local buffer and allows users to view and change the statements. The following figure illustrates the process.

After you log in to SQL*Plus, at the SQL prompt you can begin typing any SQL command. Upon hitting return (i.e., the Enter key) the SQL prompt changes to line-number prompts. When you are finished typing a command, type / or RUN to execute it. Alternatively, a semicolon at the end of the SQL command executes the command immediately after hitting return. In addition to SQL commands, /, and RUN, you can also execute SQL*Plus file commands.
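For instance, a multi-line statement entered at the prompt might look like this (a hypothetical session; emp is the usual demo table):

```
SQL> SELECT empno, ename
  2  FROM emp
  3  WHERE deptno = 10;
```

Because the statement ends with a semicolon, it runs immediately; without the semicolon, the statement would simply be left in the buffer, to be executed later with / or RUN.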

SQL*PLUS Commands Quick Reference

The table below shows SQL*Plus commands available in the command-line interface. Not all commands or command parameters are shown.

How to …    SQL*Plus Command
Log in to SQL*Plus    SQLPLUS [ { username[/password][@connect_identifier] | / }  [ AS { SYSDBA | SYSOPER } ] | /NOLOG ]
List help topics available in SQL*Plus    HELP [ INDEX | topic ]
Execute host commands    HOST [ command ]
Show SQL*Plus system variables or environment settings    SHOW { ALL | ERRORS | USER | system_variable | … }
Alter SQL*Plus system variables or environment settings    SET system_variable value
Start up a database    STARTUP PFILE = filename  [ MOUNT [ dbname ] | NOMOUNT | … ]
Connect to a database    CONNECT [ username [ /password ] [ @connect_identifier ] [ / AS { SYSOPER | SYSDBA } ] ]
List column definitions for a table, view, or synonym, or specifications for a function or procedure    DESCRIBE [ schema. ] object
Edit contents of the SQL buffer or a file    EDIT [ filename [ .ext ] ]
Get a file and load its contents into the SQL buffer    GET filename [ .ext ] [ LIST | NOLIST ]
Save contents of the SQL buffer to a file    SAVE filename [ .ext ] [ CREATE | REPLACE | APPEND ]
List contents of the SQL buffer    LIST [ n | n m | n LAST | … ]
Delete contents of the SQL buffer    DEL [ n | n m | n LAST | … ]
Add new lines following current line in the SQL buffer    INPUT [ text ]
Append text to end of current line in the SQL buffer    APPEND text
Find and replace first occurrence of a text string in current line of the SQL buffer    CHANGE sepchar old [ sepchar [ new [ sepchar ] ] ] (sepchar can be any non-alphanumeric character such as "/" or "!")
Capture query results in a file and, optionally, send contents of file to default printer    SPOOL [ filename [ .ext ] [ CREATE | REPLACE | APPEND ] | OFF | OUT ]
Run SQL*Plus statements stored in a file    @ { url | filename [ .ext ] } [ arg… ] or START filename [ .ext ] [ arg… ] (.ext can be omitted if the filename extension is .sql)
Execute commands stored in the SQL buffer    /
List and execute commands stored in the SQL buffer    RUN
Execute a single PL/SQL statement or run a stored procedure    EXECUTE statement
Disconnect from a database    DISCONNECT
Shut down a database    SHUTDOWN [ ABORT | IMMEDIATE | NORMAL | … ]
Log out of SQL*Plus    { EXIT | QUIT }  [ SUCCESS | FAILURE | WARNING | … ] [ COMMIT | ROLLBACK ]


SQL*Plus file commands allow you to execute commands (or programs) stored in an external file, input or output data from/to a file, and save SQL commands typed during the current session.

Some SQL*Plus file commands are:

SAVE filename. This allows you to save the buffer contents into a file.
START filename. This allows you to execute a batch of SQL statements stored in a file.
SPOOL filename. This allows you to save SQL statements together with their outputs to a file.
GET filename. This retrieves a file and places it into the buffer.
@ filename. This allows you to execute SQL statements and PL/SQL procedures stored in a file.
Recall that the previously executed commands (in the current SQL*Plus session) are stored in the local buffer. One way to change an SQL statement in the buffer is by using the line editor. The following is a list of line edit commands.

LIST or L–Lists the contents of the buffer
LIST n or L n–Lists the contents of line number n in the buffer and makes the line current
LIST * or L *–Lists the current line
LIST m n–Lists lines m through n
Append text or A text–Adds text to the end of the current line (e.g., "A ," adds a comma to the end of the line)
INPUT or I–Adds one or more lines after the current line so you can begin adding the text.
CHANGE /text–Deletes text from the current line
CHANGE /oldtext/newtext–Replaces oldtext with newtext in the current line
DEL — Deletes the current line
Besides the line editor, you can also use the vi editor if you are a fan of Unix editors.

To invoke the vi editor, type EDIT at the SQL prompt. Multiple SQL commands can be typed in the vi editor; end each SQL command (except the last one) with a semicolon. After exiting the editor, type / or RUN to execute the commands.

Run SQL statements in a batch
To run SQL commands in a batch, you can put all your SQL commands into a text file and execute these commands in this file in SQL*PLUS.

Use your favorite editor to type in your SQL queries into a text file.
For Example,

$ more table.sql
DROP TABLE employee
/

commit
/

CREATE TABLE employee (
empno INTEGER NOT NULL,
name VARCHAR2(50) NOT NULL,
sal REAL NOT NULL,
primary key (empno))
/

INSERT INTO employee VALUES (1, 'Jack', 6000);
INSERT INTO employee VALUES (2, 'Tom',  6000);
INSERT INTO employee VALUES (3, 'John', 6000);
INSERT INTO employee VALUES (4, 'Jane', 6000);

UPDATE employee SET sal=500 WHERE name='Jack'
/

CREATE INDEX test_index on employee(sal)
/
$

Connect into SQL*Plus, and run the batch of commands. For example, assume that you name the SQL file as table.sql.
SQL> START table.sql

Output results
You can record your SQL command outputs to a file for output or editing purpose.
SQL> SPOOL <your file name>

For example,

SQL> SPOOL myoutput.out

All SQL commands and their outputs after this command are written into the file myoutput.out that by default is stored in the current working directory where you invoked SQL*Plus.

To end recording, use the following command:
SQL> SPOOL OFF

DUAL and select the current time
DUAL is the dummy table, mostly used to view the results of functions and calculations. The built-in function SYSDATE returns a DATE value containing the current date and time on your system. (Note that Oracle uses a client-server architecture and SQL*Plus is the client: SYSDATE is evaluated on the machine running the Oracle server, so it may NOT match the clock of the client machine you are working from.)

For example,

SQL> SELECT TO_CHAR(SYSDATE, 'Dy DD-Mon-YYYY HH24:MI:SS') as "Current Date/Time" FROM DUAL;
Result:

Current Date/Time
------------------------
Mon 15-Jul-2002 10:01:29
TO_CHAR is a function to format a value.
DUAL is a built-in relation in Oracle which serves as a dummy relation to put in the FROM clause when nothing else is appropriate. For example, try "SELECT 2+2 FROM DUAL;"
To format a number attribute in a dollar format, use COLUMN <attribute> FORMAT <format>:
       SQL> COLUMN salary FORMAT $999,999
To indicate the displayed width of a character string attribute, use COLUMN <attribute> FORMAT A<width>. For example, to set the width of the name attribute to 8 characters:
       SQL> COLUMN name FORMAT A8
If a name is longer than 8 characters, the remainder is displayed on the second line (or several lines).

The set command can be used to change the default number of lines per page (14) and the number of characters per line (80).
For example, to set the number of lines per page to 60, use the following command:

   SQL> SET PAGESIZE 60
All formatting remains active until it is cleared or reset, or until you exit from SQL*Plus.
   SQL> CLEAR COLUMNS
If you forget a specific SQL command, you can enter
SQL> HELP <the SQL command>;

You could also find out all commands by entering:

SQL> HELP INDEX

Sometimes when the output looks fuzzy or incomplete, you can try the following settings:
SQL> SET SERVEROUTPUT ON
SQL> SET ARRAYSIZE 1

Thursday, March 1, 2018

CONNECT BY LEVEL

CONNECT BY specifies a condition that identifies the relationship between parent rows and child rows of the hierarchy. Combined with LEVEL against DUAL, it is also a handy row generator, as in the following example that fills a test table with 1000 rows:

CREATE TABLE t (
rid NUMBER(12),
col1 VARCHAR2(30),
col2 VARCHAR2(300));

CREATE SEQUENCE seq_t_rid;

INSERT INTO t
SELECT seq_t_rid.NEXTVAL, dbms_crypto.randombytes(15), dbms_crypto.randombytes(150)
FROM DUAL
CONNECT BY LEVEL <= 1000;
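The same row-generator idiom works on its own, without inserting anywhere; for example, this produces the numbers 1 through 5 from DUAL:

```sql
SELECT LEVEL AS n
FROM dual
CONNECT BY LEVEL <= 5;
```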

Thursday, February 1, 2018

Oracle Types of Triggers

Types of Triggers

This section describes the different types of triggers:

Row Triggers and Statement Triggers

When you define a trigger, you can specify the number of times the trigger action is to be run:
  • Once for every row affected by the triggering statement, such as a trigger fired by an UPDATE statement that updates many rows
  • Once for the triggering statement, no matter how many rows it affects

Row Triggers

A row trigger is fired each time the table is affected by the triggering statement. For example, if an UPDATE statement updates multiple rows of a table, a row trigger is fired once for each row affected by the UPDATE statement. If a triggering statement affects no rows, a row trigger is not run.
Row triggers are useful if the code in the trigger action depends on data provided by the triggering statement or rows that are affected. For example, Figure 22-3 illustrates a row trigger that uses the values of each row affected by the triggering statement.

Statement Triggers

A statement trigger is fired once on behalf of the triggering statement, regardless of the number of rows in the table that the triggering statement affects, even if no rows are affected. For example, if a DELETE statement deletes several rows from a table, a statement-level DELETE trigger is fired only once.
Statement triggers are useful if the code in the trigger action does not depend on the data provided by the triggering statement or the rows affected. For example, use a statement trigger to:
  • Make a complex security check on the current time or user
  • Generate a single audit record
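A sketch of the audit case (emp and audit_log are hypothetical tables):

```sql
CREATE OR REPLACE TRIGGER emp_delete_audit
AFTER DELETE ON emp
BEGIN
  -- fires exactly once per DELETE statement, however many rows it removes
  INSERT INTO audit_log (event_desc, fired_at)
  VALUES ('DELETE issued against EMP', SYSDATE);
END;
/
```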

BEFORE and AFTER Triggers

When defining a trigger, you can specify the trigger timing: whether the trigger action is to be run before or after the triggering statement. BEFORE and AFTER apply to both statement and row triggers.
BEFORE and AFTER triggers fired by DML statements can be defined only on tables, not on views. However, triggers on the base tables of a view are fired if an INSERT, UPDATE, or DELETE statement is issued against the view. BEFORE and AFTER triggers fired by DDL statements can be defined only on the database or a schema, not on particular tables.

BEFORE Triggers

BEFORE triggers run the trigger action before the triggering statement is run. This type of trigger is commonly used in the following situations:
  • When the trigger action determines whether the triggering statement should be allowed to complete. Using a BEFORE trigger for this purpose, you can eliminate unnecessary processing of the triggering statement and its eventual rollback in cases where an exception is raised in the trigger action.
  • To derive specific column values before completing a triggering INSERT or UPDATE statement.
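A sketch of the second case, deriving a column value in a BEFORE row trigger (the emp columns sal and comm are assumed):

```sql
CREATE OR REPLACE TRIGGER emp_derive_comm
BEFORE INSERT OR UPDATE OF sal ON emp
FOR EACH ROW
BEGIN
  -- supply a default commission of 10% of salary when none was given
  IF :NEW.comm IS NULL THEN
    :NEW.comm := :NEW.sal * 0.10;
  END IF;
END;
/
```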

AFTER Triggers

AFTER triggers run the trigger action after the triggering statement is run.

Trigger Type Combinations

Using the options listed previously, you can create four types of row and statement triggers:
  • BEFORE statement trigger
    Before executing the triggering statement, the trigger action is run.
  • BEFORE row trigger
    Before modifying each row affected by the triggering statement and before checking appropriate integrity constraints, the trigger action is run, if the trigger restriction was not violated.
  • AFTER statement trigger
    After executing the triggering statement and applying any deferred integrity constraints, the trigger action is run.
  • AFTER row trigger
    After modifying each row affected by the triggering statement and possibly applying appropriate integrity constraints, the trigger action is run for the current row provided the trigger restriction was not violated. Unlike BEFORE row triggers, AFTER row triggers lock rows.
You can have multiple triggers of the same type for the same statement for any given table. For example, you can have two BEFORE statement triggers for UPDATE statements on the employees table. Multiple triggers of the same type permit modular installation of applications that have triggers on the same tables. Also, Oracle materialized view logs use AFTER row triggers, so you can design your own AFTER row trigger in addition to the Oracle-defined AFTER row trigger.
You can create as many triggers of the preceding different types as you need for each type of DML statement (INSERT, UPDATE, or DELETE).
See Also:
Oracle Database Application Developer's Guide - Fundamentals for examples of trigger applications

INSTEAD OF Triggers

INSTEAD OF triggers provide a transparent way of modifying views that cannot be modified directly through DML statements (INSERT, UPDATE, and DELETE). These triggers are called INSTEAD OF triggers because, unlike other types of triggers, Oracle fires the trigger instead of executing the triggering statement.
You can write normal INSERT, UPDATE, and DELETE statements against the view and the INSTEAD OF trigger is fired to update the underlying tables appropriately. INSTEAD OF triggers are activated for each row of the view that gets modified.
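A minimal sketch of this mechanism follows, using hypothetical `departments` and `employees` base tables. The trigger body decides how an INSERT against the join view maps onto the base tables:

```sql
-- Hypothetical join view over two base tables
CREATE OR REPLACE VIEW dept_emp_v AS
  SELECT d.department_id, d.department_name, e.employee_id, e.last_name
  FROM   departments d JOIN employees e
         ON d.department_id = e.department_id;

-- Oracle fires this trigger instead of running the INSERT on the view
CREATE OR REPLACE TRIGGER dept_emp_v_ins
  INSTEAD OF INSERT ON dept_emp_v
  FOR EACH ROW
BEGIN
  -- Route the new row to the employees base table; this sketch assumes
  -- the referenced department already exists
  INSERT INTO employees (employee_id, last_name, department_id)
  VALUES (:NEW.employee_id, :NEW.last_name, :NEW.department_id);
END;
```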

Modify Views

Modifying views can have ambiguous results:
  • Deleting a row in a view could either mean deleting it from the base table or updating some values so that it is no longer selected by the view.
  • Inserting a row in a view could either mean inserting a new row into the base table or updating an existing row so that it is projected by the view.
  • Updating a column in a view that involves joins might change the semantics of other columns that are not projected by the view.
Object views present additional problems. For example, a key use of object views is to represent master/detail relationships. This operation inevitably involves joins, but modifying joins is inherently ambiguous.
As a result of these ambiguities, there are many restrictions on which views are modifiable. An INSTEAD OF trigger can be used on object views as well as relational views that are not otherwise modifiable.
A view is inherently modifiable if data can be inserted, updated, or deleted without using INSTEAD OF triggers and if it conforms to the restrictions listed as follows. Even if the view is inherently modifiable, you might want to perform validations on the values being inserted, updated, or deleted. INSTEAD OF triggers can also be used in this case. Here the trigger code performs the validation on the rows being modified and, if they are valid, propagates the changes to the underlying tables.
INSTEAD OF triggers also enable you to modify object view instances on the client-side through OCI. To modify an object materialized by an object view in the client-side object cache and flush it back to the persistent store, you must specify INSTEAD OF triggers, unless the object view is inherently modifiable. However, it is not necessary to define these triggers for just pinning and reading the view object in the object cache.

Views That Are Not Modifiable

If the view query contains any of the following constructs, the view is not inherently modifiable and you therefore cannot perform inserts, updates, or deletes on the view:
  • Set operators
  • Aggregate functions
  • GROUP BY, CONNECT BY, or START WITH clauses
  • The DISTINCT operator
  • Joins (however, some join views are updatable)
If a view contains pseudocolumns or expressions, you can only update the view with an UPDATE statement that does not refer to any of the pseudocolumns or expressions.
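The expression restriction can be illustrated with a short sketch (the view and table names are hypothetical):

```sql
-- A view with one real column and one expression column
CREATE OR REPLACE VIEW emp_pay_v AS
  SELECT employee_id, salary, salary * 12 AS annual_salary
  FROM   employees;

-- Allowed: the UPDATE touches only real columns
UPDATE emp_pay_v SET salary = 5000 WHERE employee_id = 100;

-- Not allowed: updating the expression column raises an error
-- (typically ORA-01733: virtual column not allowed here)
-- UPDATE emp_pay_v SET annual_salary = 60000 WHERE employee_id = 100;
```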

INSTEAD OF Triggers on Nested Tables

You cannot modify the elements of a nested table column in a view directly with the TABLE clause. However, you can do so by defining an INSTEAD OF trigger on the nested table column of the view. The triggers on the nested tables fire if a nested table element is updated, inserted, or deleted and handle the actual modifications to the underlying tables.

Triggers on System Events and User Events

You can use triggers to publish information about database events to subscribers. Applications can subscribe to database events just as they subscribe to messages from other applications. These database events can include:
  • System events
    • Database startup and shutdown
    • Data Guard role transitions
    • Server error message events
  • User events
    • User logon and logoff
    • DDL statements (CREATE, ALTER, and DROP)
    • DML statements (INSERT, DELETE, and UPDATE)
Triggers on system events can be defined at the database level or schema level. The DBMS_AQ package is one example of using database triggers to perform certain actions. For example, a database shutdown trigger is defined at the database level:
CREATE TRIGGER register_shutdown
  ON DATABASE
  SHUTDOWN
BEGIN
  ...
  DBMS_AQ.ENQUEUE(...);
  ...
END;
    
Triggers on DDL statements or logon/logoff events can also be defined at the database level or schema level. Triggers on DML statements can be defined on a table or view. A trigger defined at the database level fires for all users, and a trigger defined at the schema or table level fires only when the triggering event involves that schema or table.

Event Publication

Event publication uses the publish-subscribe mechanism of Oracle Streams Advanced Queuing. A queue serves as a message repository for subjects of interest to various subscribers. Triggers use the DBMS_AQ package to enqueue a message when specific system or user events occur.

Event Attributes

Each event allows the use of attributes within the trigger text. For example, the database startup and shutdown triggers have attributes for the instance number and the database name, and the logon and logoff triggers have attributes for the user name. You can specify a function with the same name as an attribute when you create a trigger if you want to publish that attribute when the event occurs. The attribute's value is then passed to the function or payload when the trigger fires. For triggers on DML statements, the old and new column values are available through the :OLD and :NEW qualifiers.

System Events

System events that can fire triggers are related to instance startup and shutdown and error messages. Triggers created on startup and shutdown events have to be associated with the database. Triggers created on error events can be associated with the database or with a schema.
  • STARTUP triggers fire when the database is opened by an instance. Their attributes include the system event, instance number, and database name.
  • SHUTDOWN triggers fire just before the server starts shutting down an instance. You can use these triggers to make subscribing applications shut down completely when the database shuts down. For abnormal instance shutdown, these triggers cannot be fired. The attributes of SHUTDOWN triggers include the system event, instance number, and database name.
  • SERVERERROR triggers fire when a specified error occurs, or when any error occurs if no error number is specified. Their attributes include the system event and error number.
  • DB_ROLE_CHANGE triggers fire when a role transition (failover or switchover) occurs in a Data Guard configuration. The trigger notifies users when a role transition occurs, so that client connections can be processed on the new primary database and applications can continue to run.
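As a sketch of a system-event trigger, the following SERVERERROR trigger logs every error raised on the database; the `error_log` table is hypothetical, while `ora_server_error` is a standard event attribute function:

```sql
CREATE OR REPLACE TRIGGER log_server_errors
  AFTER SERVERERROR ON DATABASE
BEGIN
  -- ora_server_error(1) returns the error number at the top of the error stack
  INSERT INTO error_log (err_no, occurred_on)
  VALUES (ora_server_error(1), SYSDATE);
END;
```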

User Events

User events that can fire triggers are related to user logon and logoff, DDL statements, and DML statements.
Triggers on LOGON and LOGOFF Events
LOGON and LOGOFF triggers can be associated with the database or with a schema. Their attributes include the system event and user name, and they can specify simple conditions on USERID and USERNAME.
  • LOGON triggers fire after a successful logon of a user.
  • LOGOFF triggers fire at the start of a user logoff.
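A minimal logon-event sketch, assuming a hypothetical `logon_audit` table:

```sql
CREATE OR REPLACE TRIGGER audit_logons
  AFTER LOGON ON DATABASE
BEGIN
  -- ora_login_user is the standard event attribute function for the user name
  INSERT INTO logon_audit (username, logon_time)
  VALUES (ora_login_user, SYSDATE);
END;
```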
Triggers on DDL Statements
DDL triggers can be associated with the database or with a schema. Their attributes include the system event, the type of schema object, and its name. They can specify simple conditions on the type and name of the schema object, as well as functions like USERID and USERNAME. DDL triggers include the following types of triggers:
  • BEFORE CREATE and AFTER CREATE triggers fire when a schema object is created in the database or schema.
  • BEFORE ALTER and AFTER ALTER triggers fire when a schema object is altered in the database or schema.
  • BEFORE DROP and AFTER DROP triggers fire when a schema object is dropped from the database or schema.
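The DDL trigger types above can be combined in one trigger. The following sketch records DDL activity in the current schema; the `ddl_audit` table is hypothetical, and the `ora_*` names are standard event attribute functions:

```sql
CREATE OR REPLACE TRIGGER track_ddl
  AFTER CREATE OR ALTER OR DROP ON SCHEMA
BEGIN
  INSERT INTO ddl_audit (event, object_type, object_name, performed_on)
  VALUES (ora_sysevent,        -- e.g. 'CREATE', 'ALTER', 'DROP'
          ora_dict_obj_type,   -- e.g. 'TABLE', 'INDEX'
          ora_dict_obj_name,   -- name of the affected object
          SYSDATE);
END;
```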
Triggers on DML Statements
DML triggers for event publication are associated with a table. They can be either BEFORE or AFTER triggers that fire for each row on which the specified DML operation occurs. You cannot use INSTEAD OF triggers on views to publish events related to DML statements. Instead, you can publish events using BEFORE or AFTER triggers for the DML operations on a view's underlying tables that are caused by INSTEAD OF triggers.
The attributes of DML triggers for event publication include the system event and the columns defined by the user in the SELECT list. They can specify simple conditions on the type and name of the schema object, as well as functions (such as UID, USER, USERENV, and SYSDATE), pseudocolumns, and columns. The columns can be prefixed by :OLD and :NEW for old and new values. Triggers on DML statements include the following triggers:
  • BEFORE INSERT and AFTER INSERT triggers fire for each row inserted into the table.
  • BEFORE UPDATE and AFTER UPDATE triggers fire for each row updated in the table.
  • BEFORE DELETE and AFTER DELETE triggers fire for each row deleted from the table.

Trigger Execution

A trigger is in either of two distinct modes:
  • Enabled: An enabled trigger runs its trigger action if a triggering statement is issued and the trigger restriction (if any) evaluates to true.
  • Disabled: A disabled trigger does not run its trigger action, even if a triggering statement is issued and the trigger restriction (if any) would evaluate to true.
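Switching a trigger between the two modes is done with ALTER statements; the trigger and table names below are illustrative:

```sql
ALTER TRIGGER audit_emp_deletes DISABLE;      -- suspend one trigger
ALTER TRIGGER audit_emp_deletes ENABLE;       -- reactivate it

ALTER TABLE employees DISABLE ALL TRIGGERS;   -- suspend every trigger on a table
ALTER TABLE employees ENABLE ALL TRIGGERS;
```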
For enabled triggers, Oracle automatically performs the following actions:
  • Oracle runs triggers of each type in a planned firing sequence when more than one trigger is fired by a single SQL statement. First, statement level triggers are fired, and then row level triggers are fired.
  • Oracle performs integrity constraint checking at a set point in time with respect to the different types of triggers and guarantees that triggers cannot compromise integrity constraints.
  • Oracle provides read-consistent views for queries and constraints.
  • Oracle manages the dependencies among triggers and schema objects referenced in the code of the trigger action.
  • Oracle uses two-phase commit if a trigger updates remote tables in a distributed database.
  • Oracle fires multiple triggers in an unspecified, random order, if more than one trigger of the same type exists for a given statement; that is, triggers of the same type for the same statement are not guaranteed to fire in any specific order.

The Execution Model for Triggers and Integrity Constraint Checking

A single SQL statement can potentially fire up to four types of triggers:
  • BEFORE row triggers
  • BEFORE statement triggers
  • AFTER row triggers
  • AFTER statement triggers
A triggering statement or a statement within a trigger can cause one or more integrity constraints to be checked. Also, triggers can contain statements that cause other triggers to fire (cascading triggers).
Oracle uses the following execution model to maintain the proper firing sequence of multiple triggers and constraint checking:
  1. Run all BEFORE statement triggers that apply to the statement.
  2. Loop for each row affected by the SQL statement.
    1. Run all BEFORE row triggers that apply to the statement.
    2. Lock and change row, and perform integrity constraint checking. (The lock is not released until the transaction is committed.)
    3. Run all AFTER row triggers that apply to the statement.
  3. Complete deferred integrity constraint checking.
  4. Run all AFTER statement triggers that apply to the statement.
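The statement-versus-row ordering in steps 1 and 2 can be observed with a pair of sketch triggers on a hypothetical `employees` table; with server output enabled, an UPDATE prints the statement-trigger message once, then the row-trigger message once per affected row:

```sql
CREATE OR REPLACE TRIGGER emp_upd_stmt
  BEFORE UPDATE ON employees
BEGIN
  DBMS_OUTPUT.PUT_LINE('BEFORE statement trigger: once per statement');
END;
/
CREATE OR REPLACE TRIGGER emp_upd_row
  BEFORE UPDATE ON employees
  FOR EACH ROW
BEGIN
  DBMS_OUTPUT.PUT_LINE('BEFORE row trigger: once per affected row');
END;
/
```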
The definition of the execution model is recursive. For example, a given SQL statement can cause a BEFORE row trigger to be fired and an integrity constraint to be checked. That BEFORE row trigger, in turn, might perform an update that causes an integrity constraint to be checked and an AFTER statement trigger to be fired. The AFTER statement trigger causes an integrity constraint to be checked. In this case, the execution model runs the steps recursively, as follows:
Original SQL statement issued.
  1. BEFORE row triggers fired.
    1. AFTER statement triggers fired by UPDATE in BEFORE row trigger.
      i. Statements of AFTER statement triggers run.
      ii. Integrity constraint checked on tables changed by AFTER statement triggers.
    2. Statements of BEFORE row triggers run.
    3. Integrity constraint checked on tables changed by BEFORE row triggers.
  2. SQL statement run.
  3. Integrity constraint from SQL statement checked.
There are two exceptions to this recursion:
  • When a triggering statement modifies one table in a referential constraint (either the primary key or foreign key table), and a triggered statement modifies the other, only the triggering statement will check the integrity constraint. This allows row triggers to enhance referential integrity.
  • Statement triggers fired due to DELETE CASCADE and DELETE SET NULL are fired before and after the user DELETE statement, not before and after the individual enforcement statements. This prevents those statement triggers from encountering mutating errors.
An important property of the execution model is that all actions and checks done as a result of a SQL statement must succeed. If an exception is raised within a trigger, and the exception is not explicitly handled, all actions performed as a result of the original SQL statement, including the actions performed by fired triggers, are rolled back. Thus, integrity constraints cannot be compromised by triggers. The execution model takes into account integrity constraints and disallows triggers that violate declarative integrity constraints.
For example, in the previously outlined scenario, suppose that the integrity constraint is violated. As a result of this violation, all changes made by the SQL statement, the fired BEFORE row trigger, and the fired AFTER statement trigger are rolled back.
Note:
Although triggers of different types are fired in a specific order, triggers of the same type for the same statement are not guaranteed to fire in any specific order. For example, all BEFORE row triggers for a single UPDATE statement may not always fire in the same order. Design your applications so they do not rely on the firing order of multiple triggers of the same type.

Data Access for Triggers

When a trigger is fired, the tables referenced in the trigger action might be currently undergoing changes by SQL statements in other users' transactions. In all cases, the SQL statements run within triggers follow the common rules used for standalone SQL statements. In particular, if an uncommitted transaction has modified values that a trigger being fired either needs to read (query) or write (update), then the SQL statements in the body of the trigger being fired use the following guidelines:
  • Queries see the current read-consistent materialized view of referenced tables and any data changed within the same transaction.
  • Updates wait for existing data locks to be released before proceeding.

Storage of PL/SQL Triggers

Oracle stores PL/SQL triggers in compiled form, just like stored procedures. When a CREATE TRIGGER statement commits, the compiled PL/SQL code, called P code (for pseudocode), is stored in the database and the source code of the trigger is flushed from the shared pool.
See Also:
Oracle Database PL/SQL User's Guide and Reference for more information about compiling and storing PL/SQL code

Execution of Triggers

Oracle runs a trigger internally using the same steps used for procedure execution. The only subtle difference is that a user has the right to fire a trigger if he or she has the privilege to run the triggering statement. Other than this, triggers are validated and run the same way as stored procedures.