DBA Blogs

Upload files directly to static application files

Tom Kyte - Wed, 2024-05-15 23:26
Hi, I have a use case in which I want the end user to upload files from the UI and have them stored directly in the application's Static Application Files section. Are there any APIs available for this? Thanks, Tushar
Categories: DBA Blogs

Call shell script using stored procedure/function

Tom Kyte - Wed, 2024-05-15 23:26
Hi, I want to create a stored procedure/function that calls a shell script; the shell script will contain a command to copy a file from one location on the DB server to another location on the DB server. I tried this using a scheduler job and it works fine, but I don't want to use a scheduler job. I want to use a procedure/function to call the shell script. Request your help on how to call a shell script from a stored procedure/function. Regards, GirishR
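One common alternative to a scheduler job is a Java stored procedure that shells out to the OS. A minimal sketch, assuming Java is installed in the database and a DBA has granted OS-command permissions; the names HostCmd and run_os_command are illustrative, not a built-in API:

<code>-- One-off grant by a DBA (grantee is the owning schema, here hypothetical):
-- EXEC DBMS_JAVA.GRANT_PERMISSION('GIRISHR','SYS:java.io.FilePermission','<<ALL FILES>>','execute')

CREATE OR REPLACE AND COMPILE JAVA SOURCE NAMED "HostCmd" AS
import java.io.*;
public class HostCmd {
  public static int run(String cmd) throws Exception {
    Process p = Runtime.getRuntime().exec(cmd);
    p.waitFor();              // wait for the command to finish
    return p.exitValue();     // 0 = success
  }
}
/

-- PL/SQL call specification exposing the Java method
CREATE OR REPLACE FUNCTION run_os_command (p_cmd IN VARCHAR2) RETURN NUMBER
AS LANGUAGE JAVA NAME 'HostCmd.run(java.lang.String) return int';
/

-- Usage: copy a file between two locations on the DB server
DECLARE
  l_rc NUMBER;
BEGIN
  l_rc := run_os_command('/bin/cp /u01/src/file.dat /u01/dest/file.dat');
END;
/</code>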
Categories: DBA Blogs

Finding purchases on 10+ consecutive days

Tom Kyte - Wed, 2024-05-15 23:26
I'm trying to use match_recognize() to find purchases made by each customer on 10+ consecutive days. A day being the next calendar date. For example, if customer 1 made 2 purchases on 10-MAY-2024 at 1300 hours and 1400 hours, this would not be 2 consecutive days; it would be considered 1 day. Whereas if customer 1 made a purchase on 10-MAY-2024 at 23:59:59 and on 11-MAY-2024 at 00:00:00, this would be considered 2 consecutive days since the calendar date has changed, although it's not 24 hours after the first purchase on 10-MAY-2024 at 23:59:59. Based on my test case below and sample data, I appear to be finding the following streak of days, and I am unsure why:

CUSTOMER_ID FIRST_NAME LAST_NAME START_DATE  END_DATE    CONSECUTIVE_DAYS
2           Jane       Smith     15-JAN-2023 20-JAN-2023 6

As you can see this is only 6 consecutive days, not 10 or more, therefore I thought the match_recognize() would have filtered this out. Is this something match_recognize can detect? If so, how? If not, can you suggest a workaround?

<code>ALTER SESSION SET NLS_TIMESTAMP_FORMAT = 'DD-MON-YYYY';

CREATE TABLE customers (CUSTOMER_ID, FIRST_NAME, LAST_NAME) AS
SELECT 1, 'Ann', 'Aaron' FROM DUAL UNION ALL
SELECT 2, 'Jane', 'Smith' FROM DUAL UNION ALL
SELECT 3, 'Bonnie', 'Winterbottom' FROM DUAL UNION ALL
SELECT 4, 'Sandy', 'Herring' FROM DUAL UNION ALL
SELECT 5, 'Roz', 'Doyle' FROM DUAL;

create table purchases(
  ORDER_ID NUMBER GENERATED BY DEFAULT AS IDENTITY (START WITH 1) NOT NULL,
  customer_id number,
  PRODUCT_ID NUMBER,
  QUANTITY NUMBER,
  purchase_date timestamp
);

insert into purchases (customer_id, product_id, quantity, purchase_date)
select 2 customer_id, 102 product_id, 2 quantity, TIMESTAMP '2024-04-03 00:00:00' + INTERVAL '18' HOUR + ((LEVEL-1) * INTERVAL '1 00:00:01' DAY TO SECOND) * -1 + ((LEVEL-1) * interval '0.007125' second) as purchase_date from dual connect by level <= 15
UNION all
select 1, 101, 1, DATE '2024-03-08' + INTERVAL '14' HOUR + ((LEVEL-1) * INTERVAL '1 00:00:00' DAY TO SECOND) * -1 from dual connect by level <= 5
UNION ALL
select 3, 103, 3, DATE '2024-02-08' + INTERVAL '15' HOUR + ((LEVEL-1) * INTERVAL '0 23:59:59' DAY TO SECOND) * -1 from dual connect by level <= 5
UNION all
select 2, 102,1, date '2023-07-29' + level * interval '1' day from dual connect by level <= 12
union all
select 2, 103,1, date '2023-08-29' + level * interval '1' day from dual connect by level <= 15
union all
select 2, 104,1, date '2023-11-11' + level * interval '1' day from dual connect by level <= 9
union all
select 4, 103,(3*LEVEL), TIMESTAMP '2023-06-01 05:18:03' + numtodsinterval ( (LEVEL -1) * 1, 'day' ) + numtodsinterval ( LEVEL * 37, 'minute' ) + numtodsinterval ( LEVEL * 3, 'second' ) FROM dual CONNECT BY LEVEL <= 4
UNION ALL
SELECT 3, 102, 4,TIMESTAMP '2022-12-22 21:44:35' + NUMTODSINTERVAL ( ...
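For reference, a minimal sketch (assuming the tables above) of one way to express the 10+ consecutive-day rule: collapse purchases to one row per customer per calendar day first, then pattern-match over the days, so time-of-day cannot skew the count:

<code>SELECT *
FROM  (SELECT DISTINCT customer_id, TRUNC(purchase_date) AS purchase_day
       FROM   purchases)
MATCH_RECOGNIZE (
  PARTITION BY customer_id
  ORDER BY purchase_day
  MEASURES FIRST(purchase_day) AS start_date,
           LAST(purchase_day)  AS end_date,
           COUNT(*)            AS consecutive_days
  PATTERN ( strt nxt{9,} )   -- 1 starting day + 9 or more following = 10+ days
  DEFINE nxt AS purchase_day = PREV(purchase_day) + 1
);</code>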
Categories: DBA Blogs

Dependent package not invalidated and recompiled, thus providing bad results

Tom Kyte - Wed, 2024-05-15 23:26
Hi Connor/Chris, I am struggling to reason about the case attached in the LiveSQL session. P1 spec ('get_field_value' just returns the value of 'field' from the record given):

<code>CREATE OR REPLACE PACKAGE p1 AS
  field_default CONSTANT varchar2(4) := '0000';
  TYPE typ_rec IS RECORD ( field varchar2(4) default field_default );
  function get_field_value(p_rec typ_rec) return varchar2;
end;</code>

P2 body:

<code>CREATE OR REPLACE PACKAGE BODY p2 AS
  function get_field_value return varchar2 as
    l_rec p1.typ_rec;
  begin
    return p1.get_field_value(l_rec);
  end;
end;
/</code>

Now we should get '0000' no matter how we get to the record.field value:

<code>DECLARE
  l_rec p1.typ_rec;
BEGIN
  dbms_output.put_line(p1.get_field_value(l_rec));
  dbms_output.put_line(p2.get_field_value());
END;
/</code>

However, now if we prepend a new constant to P1:

<code>CREATE OR REPLACE PACKAGE p1 AS
  dummy CONSTANT varchar2(4) := 'XXXX';
  field_default CONSTANT varchar2(4) := '0000';
  TYPE typ_rec IS RECORD ( field varchar2(4) default field_default );
  function get_field_value(p_rec typ_rec) return varchar2;
end;</code>

P2 is not invalidated and starts to return 'XXXX' as the value for the field. It looks like it stored the index of the constant from P1 to be used, and now it happily returns the incorrect value. If one recompiles P2 manually, it starts to return the correct result of '0000' again. You can imagine the fun one has when the values of 200 constants are suddenly offset. I tried to find some explanation of this behaviour in the docs, but to no avail.
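A quick mitigation sketch, based on the observation above that manual recompilation restores the correct value: after replacing the P1 spec, check and recompile its dependents explicitly:

<code>-- P2 can still show VALID here even though it now returns the wrong constant
SELECT object_name, object_type, status
FROM   user_objects
WHERE  object_name IN ('P1', 'P2');

-- Force P2 to pick up the new constant layout in P1
ALTER PACKAGE p2 COMPILE BODY;</code>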
Categories: DBA Blogs

CSV output using SQL*Plus spool is much slower than expdp

Tom Kyte - Wed, 2024-05-15 23:26
Dear friends, I am trying to export CSV from a partitioned table (300 GB); it takes 3 hours (only one table), using the following code:

<code>set term off
set feed off
set colsep '|'
set echo off
set feedback off
set linesize 1000
set pagesize 0
set trimspool on
SET TERMOUT OFF
spool '/backup/csv/Mytable.csv'
SELECT /*+ parallel */ /*csv*/ col1 || '|' || col2 || '|' || col3 FROM MySchema.MyTable ;
spool off
exit;</code>

But when I export (expdp) all schema tables and their data (3 TB), it takes only 20 minutes! Why is expdp so fast compared to SQL*Plus spool? What is a fast method for CSV output from an Oracle table? Regards, Siya AK. Hardware: 4 TB RAM + 196-core CPU + NVMe disks, Oracle 11g R2.
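For what it's worth, a minimal sketch of the spool settings that usually dominate CSV speed, assuming SQL*Plus 12.2+ (note the /*csv*/ hint above is a SQLcl directive and does nothing in SQL*Plus; on 11g R2, manual concatenation with a large ARRAYSIZE is about the best spool can do):

<code>set pagesize 0 feedback off termout off trimspool on
set arraysize 5000                        -- fetch many rows per round trip
set rowprefetch 200                       -- 12.2+
set markup csv on delimiter | quote off   -- 12.2+ native CSV formatting
spool /backup/csv/Mytable.csv
SELECT col1, col2, col3 FROM MySchema.MyTable;
spool off</code>

Part of the gap is structural: expdp writes a binary dump via parallel server processes, while a spool is one client formatting and writing text row by row.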
Categories: DBA Blogs

Sequence issue

Tom Kyte - Wed, 2024-05-15 23:26
Hi Tom, not sure about the category of this question, but I will explain. We came across an issue. In the pre-staging server environment, the sequence column generates an incremented value as per the last record updated in a table via a procedure (which is what we want and is fine). The issue: in production (a distributed environment) we saw that, for one of the cases, the most recently inserted record has a lower sequence value than the previous one. As the sequence is incremental, it should generate the highest value for the last updated record. We are using GoldenGate in production, wherein sequence values are taken as odd numbers on one server and even numbers on the other. What could be the scenario? Is it because multiple instances of that server use the same table (distributed server)? But that shouldn't be the issue, I guess, because server replication should not create wrong data. Note: all commits are in place after DML in the procedure. Are there any pros/cons of using the sequence CACHE or NOORDER keywords in Oracle which may be causing this issue? Is there an issue with using sequences at all? How do we rectify this issue, given that 100s of procedures use that sequence functionality? Are there any gaps which can be covered while using sequence-generated values? Kindly confirm.
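A sketch of the property likely in play (an assumption based on the odd/even description above, which matches the common GoldenGate active-active setup):

<code>-- Server A (odd values):
CREATE SEQUENCE order_seq START WITH 1 INCREMENT BY 2 CACHE 1000 NOORDER;
-- Server B (even values):
CREATE SEQUENCE order_seq START WITH 2 INCREMENT BY 2 CACHE 1000 NOORDER;

-- With CACHE (and NOORDER, the default), each database hands out values from
-- its own pre-allocated batch, so "inserted later" does not imply "higher
-- sequence value" across servers. ORDER only guarantees ordering within a
-- single (RAC) database, at a performance cost, not across two databases
-- replicated by GoldenGate.</code>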
Categories: DBA Blogs

Is it possible to create a private user under a DB schema?

Tom Kyte - Wed, 2024-05-15 23:26
We are writing to discuss an operational challenge that we are currently facing with the integration of Oracle Integration Cloud (OIC) and Oracle Autonomous Database within our organization. Our setup utilizes OIC as the primary integration layer, in conjunction with the Autonomous Database for staging data, performing validations, and executing other data derivation tasks. Our infrastructure includes multiple OIC environments, each configured to connect to the same Autonomous Database but utilizing distinct database schemas. These schemas primarily contain custom tables and packages essential for enforcing our business rules. During the integration process, particularly when invoking subroutines, it is necessary to specify the database name along with the package or procedure name, as detailed in the Oracle documentation (https://docs.oracle.com/en/cloud/paas/integration-cloud/atp-adapter/invoke-stored-procedure-page.html). We encounter significant challenges when migrating integrations between different OIC environments due to the requirement of manually updating the schema name in each database activity to match the target environment's schema. This process is not only time-consuming but also prone to errors, impacting our efficiency and operational continuity. In previous discussions with the Oracle Support team, the suggestion was made to utilize separate databases with identical schema names to circumvent this issue. However, due to resource constraints, expanding beyond our current setup of one database for production and another for non-production environments is not feasible. Given these circumstances, we are reaching out to inquire if there might be an alternative solution or workaround that could facilitate a more streamlined migration process between OIC environments without the need for manual updates. Any suggestions or guidance you could provide would be greatly appreciated
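Not an Oracle-endorsed pattern, but one workaround sometimes used in this situation (a sketch with hypothetical names): keep a fixed "API" schema that every OIC connection references, and re-point it at the environment-specific schema with synonyms, so migrated integrations never need the schema name edited:

<code>-- Fixed schema name referenced by all OIC database activities
CREATE USER oic_api IDENTIFIED BY "Chang3Me#2024";
GRANT CREATE SESSION TO oic_api;

-- Per environment, forward to the real schema (run as a DBA with CREATE ANY SYNONYM)
CREATE OR REPLACE SYNONYM oic_api.stage_pkg FOR env1_schema.stage_pkg;
GRANT EXECUTE ON env1_schema.stage_pkg TO oic_api;</code>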
Categories: DBA Blogs

How to retrieve a single hierarchy from multiple hierarchies

Tom Kyte - Wed, 2024-05-15 23:26
Hi, <b>There are several examples of hierarchical queries using the employees-manager example. Recently I came across a hierarchical scenario where the table was storing multiple hierarchies. Example data:</b>

<code>CREATE TABLE example (
  Title   VARCHAR2(50),
  ID      NUMBER,
  Link_id NUMBER
);</code>

<code>INSERT INTO example (Title, ID, Link_id) VALUES ('A', 1, NULL);
INSERT INTO example (Title, ID, Link_id) VALUES ('B', 2, 1);
INSERT INTO example (Title, ID, Link_id) VALUES ('C', 3, 2);
INSERT INTO example (Title, ID, Link_id) VALUES ('D', 4, 3);
INSERT INTO example (Title, ID, Link_id) VALUES ('E', 5, NULL);
INSERT INTO example (Title, ID, Link_id) VALUES ('F', 6, 5);
INSERT INTO example (Title, ID, Link_id) VALUES ('G', 7, 6);
INSERT INTO example (Title, ID, Link_id) VALUES ('H', 8, NULL);
INSERT INTO example (Title, ID, Link_id) VALUES ('I', 9, NULL);
INSERT INTO example (Title, ID, Link_id) VALUES ('J', 10, 9);</code>

Title | ID | Link_id
A     | 1  | null
B     | 2  | 1
C     | 3  | 2
D     | 4  | 3
E     | 5  | null
F     | 6  | 5
G     | 7  | 6
H     | 8  | null
I     | 9  | null
J     | 10 | 9

I wanted to retrieve the whole hierarchy given any node, i.e. passing ID 3 should return: A B C D. I wrote the following function to get the root id:

<code>create or replace FUNCTION find_root(
  p_id IN example.id%TYPE
) RETURN example.id%TYPE
AS
  v_given_id example.id%TYPE := p_id;
  v_root_id  example.id%TYPE;
BEGIN
  LOOP
    SELECT Link_id INTO v_root_id FROM example WHERE id = v_given_id;
    IF v_root_id IS NULL THEN
      EXIT; -- Exit the loop if Link_id is null
    ELSE
      v_given_id := v_root_id; -- Update v_given_id with Link_id
    END IF;
  END LOOP;
  RETURN v_given_id;
EXCEPTION
  WHEN NO_DATA_FOUND THEN
    RETURN NULL; -- Return NULL if no record found for the given id
  WHEN OTHERS THEN
    RETURN NULL; -- Handle other errors by returning NULL
END;</code>

<b>The following stored procedure will then call the above function and return the hierarchy based on the id returned by the function:</b>

<code>create or replace PROCEDURE get_hierarchy(
  p_given_id IN example.id%TYPE,
  hmm OUT SYS_REFCURSOR
) IS
  v_root_id example.id%TYPE;
BEGIN
  SELECT find_root(p_given_id) INTO v_root_id FROM dual;
  -- Check if the root ID is not null
  IF v_root_id IS NOT NULL THEN
    -- Use explicit cursor declaration
    OPEN hmm FOR
      SELECT Title, ID, Link_id
      FROM example
      START WITH example.id = v_root_id
      CONNECT BY PRIOR example.id = example.link_id;
  ELSE
    -- Use RAISE_APPLICATION_ERROR for customized error messages
    RAISE_APPLICATION_ERROR(-20001, 'No record foun...
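For comparison, a minimal sketch (assuming the example table above) that does both steps in one statement: the subquery walks up from the given node to its root, and the outer query walks down from that root:

<code>SELECT title, id, link_id
FROM   example
START WITH id = (SELECT id
                 FROM   example
                 WHERE  link_id IS NULL       -- keep only the root of the walk
                 START WITH id = 3            -- the given node
                 CONNECT BY PRIOR link_id = id)
CONNECT BY PRIOR id = link_id;</code>

For ID 3 this starts with A (the root reached from C) and returns A, B, C, D.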
Categories: DBA Blogs

FAST out-of-place materialized view refresh problem

Tom Kyte - Wed, 2024-05-15 23:26
I encountered a problem when forcing the refresh procedure on a materialized view with the combination: refresh_method = 'F' and out_of_place = true.

<code>DBMS_MVIEW.REFRESH('FOO_MV', out_of_place => true, atomic_refresh => false, method => 'F');</code>

For the past few days, I have made many different attempts and tests to force a situation in which the MV is refreshed using the combination refresh-method = FAST and out-of-place = TRUE, but only succeeded in achieving these combinations:

refresh-method = COMPLETE and out-of-place = TRUE
refresh-method = FAST and out-of-place = FALSE

Therefore, my main question is: <b>Are there any internal restrictions or conditions that must be met in order to perform a FAST out-of-place refresh?</b> After reviewing the official documentation (https://docs.oracle.com/en/database/oracle/oracle-database/12.2/dwhsg/refreshing-materialized-views.html#GUID-51191C38-D52F-4A4D-B6FF-E631965AD69A) I have not found anything that would prevent such a combination from succeeding in my case. It is even clearly stated that out-of-place should work with any refresh method, with FAST preferred first. Below I attach a script setting up and demonstrating the problem I am facing. Due to limited privileges in the LiveSQL tool, I recommend running the script on a local database.

<code>
-- Clean Workspace
DROP TABLE FOO;
DROP MATERIALIZED VIEW FOO_MV;

-- Create the Base Table FOO
CREATE TABLE FOO (
  product_id NUMBER PRIMARY KEY,
  product_name VARCHAR2(100),
  product_price NUMBER(10, 2)
);

-- Insert Sample Data into FOO table
INSERT INTO FOO (product_id, product_name, product_price) VALUES (1, 'Widget A', 19.99);
INSERT INTO FOO (product_id, product_name, product_price) VALUES (2, 'Gizmo B', 29.99);
COMMIT;

-- Create Materialized View Log
CREATE MATERIALIZED VIEW LOG ON FOO WITH ROWID, PRIMARY KEY, SEQUENCE;

-- Create simple Materialized View
CREATE MATERIALIZED VIEW FOO_MV
BUILD DEFERRED
REFRESH FAST ON DEMAND
AS SELECT product_id, product_name, product_price FROM FOO;

-- Drop PK on MV prebuilt table to meet out-of-place refresh requirements
ALTER TABLE FOO_MV DROP PRIMARY KEY;

-- Enable Advanced statistics collection
EXEC DBMS_MVIEW_STATS.SET_MVREF_STATS_PARAMS ('FOO_MV','ADVANCED',30);

-- Initial COMPLETE refresh of the Materialized View (out-of-place)
EXEC DBMS_MVIEW.REFRESH('FOO_MV', out_of_place => true, atomic_refresh => false, method => 'C');

-- Insert incremental sample data into FOO table
INSERT INTO FOO (product_id, product_name, product_price) VALUES (3, 'Gadget X', 49.99);
INSERT INTO FOO (product_id, product_name, product_price) VALUES (4, 'Widget B', 24.99);
COMMIT;

-- Incremental FAST refresh of the Materialized View (HERE IS THE PROBLEM => despite the fact that out-of-place flag is true, the MV is refreshed in-place)
EXEC DBMS_MVIEW.REFRESH('FOO_MV', out_of_place => true, atomic_refresh => false, method => 'F...
Categories: DBA Blogs

Testing DEFAULT ON NULL FOR UPDATE in 23ai

Hemant K Chitale - Thu, 2024-05-09 10:25

Testing a new feature in 23ai that allows you to define a DEFAULT value for a column if a user/front-end/application sends a NULL:


[oracle@oel9 ~]$ sqlplus hemant/hemant@freepdb1

SQL*Plus: Release 23.0.0.0.0 - Production on Thu May 9 14:54:25 2024
Version 23.4.0.24.05

Copyright (c) 1982, 2024, Oracle.  All rights reserved.


Connected to:
Oracle Database 23ai Free Release 23.0.0.0.0 - Develop, Learn, and Run for Free
Version 23.4.0.24.05

SQL> create table my_new_employees(
  2  employee_id number(12) primary key,
  3  employee_name varchar2(48),
  4  department_id number(12)
  5  )
  6  /

Table created.

SQL>
SQL> insert into my_new_employees
  2  values (1,'Hemant',NULL)
  3  /

1 row created.

SQL>
SQL> select * from my_new_employees;

EMPLOYEE_ID EMPLOYEE_NAME                                    DEPARTMENT_ID
----------- ------------------------------------------------ -------------
          1 Hemant

SQL>
SQL> update my_new_employees
  2  set department_id=100  -- setting a non-NULL value
  3  where employee_id=1
  4  /

1 row updated.

SQL> select * from my_new_employees;

EMPLOYEE_ID EMPLOYEE_NAME                                    DEPARTMENT_ID
----------- ------------------------------------------------ -------------
          1 Hemant                                                     100

SQL>
SQL> alter table my_new_employees
  2  modify (department_id default on null for insert and update 512);

Table altered.

SQL> insert into my_new_employees
  2  values (2,'Larry');    -- I am not specifying a value for DEPARTMENT_ID 
insert into my_new_employees
            *
ERROR at line 1:
ORA-00947: not enough values
Help: https://docs.oracle.com/error-help/db/ora-00947/


SQL> insert into my_new_employees
  2  values(2,'Larry', NULL); -- I explicitly specify NULL for DEPARTMENT_ID

1 row created.

SQL> select * from my_new_employees order by 1;

EMPLOYEE_ID EMPLOYEE_NAME                                    DEPARTMENT_ID
----------- ------------------------------------------------ -------------
          1 Hemant                                                     100
          2 Larry                                                      512  -- it got set to 512 ON INSERT

SQL>
SQL> update my_new_employees
  2  set employee_name = 'Hemant Chitale', department_id=NULL -- I explicitly specify NULL for DEPARTMENT_ID
  3  where employee_id=1
  4  /

1 row updated.

SQL> select * from my_new_employees order by 1;

EMPLOYEE_ID EMPLOYEE_NAME                                    DEPARTMENT_ID
----------- ------------------------------------------------ -------------
          1 Hemant Chitale                                             512  -- it got set to 512 ON UPDATE
          2 Larry                                                      512

SQL>
SQL> commit;

Commit complete.

SQL> select * from my_new_employees order by 1;

EMPLOYEE_ID EMPLOYEE_NAME                                    DEPARTMENT_ID
----------- ------------------------------------------------ -------------
          1 Hemant Chitale                                             512
          2 Larry                                                      512

SQL>


So, now the column has a proper DEFAULT value when an UPDATE statement sends a NULL.  This allows the developer to avoid having to write a Front-End Value Check or a Database Table Trigger to convert an incoming NULL to a DEFAULT value.
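As an aside, a sketch on the assumption that 23ai accepts the same clause inline in CREATE TABLE, which would avoid the later ALTER for new tables:

create table my_new_employees(
employee_id number(12) primary key,
employee_name varchar2(48),
department_id number(12) default on null for insert and update 512
)
/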

Categories: DBA Blogs

Parameters supplied to a @script.sql in SQL*Plus

Tom Kyte - Tue, 2024-05-07 19:26
I have done a little searching and not found anything that speaks to this, so I thought I would ask the experts. Recently I had to build an SQL script to be run in SQL*Plus, and this script invoked another with the @ usage, passing in a derived value as an argument (received as &1 in the 2nd-level code). I wondered if I was facing a misunderstanding with scope, and was hoping you could tell the world for sure. Test case (no tables or other types involved): 1st-level SQL

<code>set pages 0 feed off head off verify off trims on
variable roll_qtr1 VARCHAR2(6)
exec :roll_qtr1 := to_char(sysdate,'yyyy')||'0'||to_char(sysdate,'q');
col filename1 new_val filename1
SELECT 'test_file_'||:roll_qtr1||'.dat' filename1 FROM dual;
--spool &filename1
@ get_file_data.sql :roll_qtr1
--spool off</code>

2nd-level SQL (@ get_file_data.sql from above)

<code>set pages 0 feed off head off verify on lines 9000 colsep ','
variable parm_qtr varchar2(6)
exec :parm_qtr := '&1';
SELECT :parm_qtr FROM dual;</code>

Now, removing the single quotes from the &1 in the 2nd-level SQL gets the value I expect, whereas the code as it is gives:

BEGIN :parm_qtr := <b>':roll_qtr1'</b>; END;
*
ERROR at line 1:
ORA-06502: PL/SQL: numeric or value error: character string buffer too small
ORA-06512: at line 1

<b>What gives?</b> It passed the actual bind variable name instead of the value? This says to me that running an SQL file with "@" is very much like an <i>include</i> command in a C program, or running a KSH file in UNIX using the "." notation for execution: everything happens as if it's all in one process/session. Wouldn't this negate the value of command-line parameters within the SQL*Plus session? Is there a by-reference vs. by-value thing going on? I would sure value a chance to learn the right understanding.
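For reference, a minimal sketch of the usual fix, given that @ performs textual inclusion (arguments are passed as literal text, not as binds): resolve the bind into a substitution variable with COLUMN ... NEW_VALUE, exactly as the 1st-level script already does for filename1, and pass that instead:

<code>col qtr1 new_val qtr1
SELECT :roll_qtr1 qtr1 FROM dual;   -- bind value -> substitution variable

@ get_file_data.sql &qtr1           -- passes the evaluated text, not the bind's name</code>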
Categories: DBA Blogs

SQL*Loader free file format data load

Tom Kyte - Tue, 2024-05-07 19:26
We have many files available in our Linux mount point and want to load them into a table. The requirement is to load all the files (no header, a different set of columns in each file, comma-separated, newline-terminated records). <b>Sample file data:</b>

files1.csv
1,"Test1"
2,"Test2"
-----
files2.csv
1,123,"Case0"
2,456,"MyName"
-----
files3.csv
1234234,"2024-01-01","foo"
5894234,"2024-02-01","foo3"

I'm looking for a way to load these files into a single table as given below. Is there a way we can achieve this using SQL*Loader? <b>Oracle Table:</b>

Create table generic_files(
  COLUMN_1 VARCHAR2(4000),
  COLUMN_2 VARCHAR2(4000),
  COLUMN_3 VARCHAR2(4000),
  FILE_NAME VARCHAR2(4000),
  INSERT_DT DATE default SYSDATE)

COLUMN_1 | COLUMN_2   | COLUMN_3 | FILE_NAME  | INSERT_DT (will have sysdate)
1        | Test1      | null     | files1.csv |
2        | Test2      | null     | files1.csv |
1        | 123        | Case0    | files2.csv |
2        | 456        | MyName   | files2.csv |
1234234  | 2024-01-01 | foo      | files3.csv |
5894234  | 2024-02-01 | foo3     | files3.csv |
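A minimal control-file sketch under those assumptions (no header, comma-separated, at most three data columns; TRAILING NULLCOLS leaves missing trailing columns NULL). Classic SQL*Loader has no built-in pseudo-column for the input file name, so here it is a per-file CONSTANT:

<code>LOAD DATA
APPEND INTO TABLE generic_files
FIELDS TERMINATED BY ',' OPTIONALLY ENCLOSED BY '"'
TRAILING NULLCOLS
( COLUMN_1
, COLUMN_2
, COLUMN_3
, FILE_NAME CONSTANT 'files1.csv'   -- hypothetical: regenerate per input file
)</code>

Invoked once per file, e.g. sqlldr control=generic.ctl data=files1.csv; a small shell loop over the files can rewrite the CONSTANT and call sqlldr for each one.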
Categories: DBA Blogs

PLW-07204: conversion away from column type may result in sub-optimal query plan

Tom Kyte - Tue, 2024-05-07 19:26
I have been wondering for a long time why a select like "SELECT COUNT(*) INTO l_count FROM foo WHERE c_now >= ts + 1" in the following example issues a PLW-07204 warning but for example "SELECT COUNT(*) INTO l_count FROM foo WHERE ts < c_now - 1" does not. <code> CREATE TABLE foo (ts DATE, tx VARCHAR2(30)); INSERT INTO foo VALUES (SYSDATE - 1, 'foo'); INSERT INTO foo VALUES (SYSDATE - 2, 'bar'); COMMIT; ALTER SESSION SET PLSQL_WARNINGS = 'ENABLE:7204'; CREATE OR REPLACE FUNCTION new_foo RETURN NUMBER IS c_now CONSTANT DATE := SYSDATE; l_count NUMBER; BEGIN SELECT COUNT(*) INTO l_count FROM foo WHERE c_now >= ts + 1; -- SELECT COUNT(*) INTO l_count FROM foo WHERE ts < c_now - 1; RETURN l_count; END; / SELECT * FROM user_errors WHERE name = 'NEW_FOO'; </code>
Categories: DBA Blogs

Oracle GoldenGate 23ai: Powering Real-Time Data Integration 

DBASolved - Mon, 2024-05-06 22:25

Oracle GoldenGate has long been the go-to solution for real-time data integration, and with the release of Oracle GoldenGate 23ai, […]

The post Oracle GoldenGate 23ai: Powering Real-Time Data Integration  appeared first on DBASolved.

Categories: DBA Blogs

Unlocking the Power of AI Vector Search in Oracle Database 23ai

DBASolved - Mon, 2024-05-06 21:46

Oracle has recently, May 2, 2024, unveiled Oracle Database 23ai, packed with innovative AI capabilities that are transforming the way […]

The post Unlocking the Power of AI Vector Search in Oracle Database 23ai appeared first on DBASolved.

Categories: DBA Blogs

Testing DB_FLASHBACK_LOG_DEST in 23ai

Hemant K Chitale - Mon, 2024-05-06 09:38

Pre-23ai, Database Flashback Logs were always created in the DB_RECOVERY_FILE_DEST location (i.e., the well-known "Fast Recovery Area", also known as the "FRA").

However, these would share space with ArchiveLogs (if you've configured ArchiveLogs to the same location) and RMAN Backups (if you aren't specifically writing RMAN Backups to a different location).

23ai introduced the DB_FLASHBACK_LOG_DEST (and corresponding DB_FLASHBACK_LOG_DEST_SIZE) parameter to allow you to separate your Database Flashback Logs to a dedicated location.

The 23ai New Features documentation has this to say:

In previous releases, you could store flashback database logs only in the fast recovery area. Now you can optionally designate a separate location for flashback logging. For example, if you have write-intensive database workloads, then flashback database logging can slow down the database if the fast recovery area is not fast enough. In this scenario, you can now choose to write the flashback logs to faster disks. Using a separate destination also eliminates the manual administration to manage the free space in the fast recovery area.

Managing flashback database logs outside the fast recovery area lowers the operational costs related to space management and guarantees the best performance for workloads that are typically impacted by flashback logging on traditional storage.


And it provides a link to the documentation on the parameter.


You might think that DB_FLASHBACK_LOG_DEST is now completely independent of DB_RECOVERY_FILE_DEST.


Here is my test run where I configured DB_FLASHBACK_LOG_DEST  without configuring DB_RECOVERY_FILE_DEST :




sh-4.4$ cd /opt/oracle
sh-4.4$ mkdir FBL
sh-4.4$ mkdir FRA
sh-4.4$ sqlplus / as sysdba

SQL*Plus: Release 23.0.0.0.0 - Production on Sun May 5 10:26:26 2024
Version 23.4.0.24.05

Copyright (c) 1982, 2024, Oracle.  All rights reserved.


Connected to:
Oracle Database 23ai Free Release 23.0.0.0.0 - Develop, Learn, and Run for Free
Version 23.4.0.24.05

SQL> shutdown immediate;
Database closed.
Database dismounted.
ORACLE instance shut down.
SQL> startup mount
ORACLE instance started.

Total System Global Area 1603726344 bytes
Fixed Size                  5360648 bytes
Variable Size             402653184 bytes
Database Buffers         1191182336 bytes
Redo Buffers                4530176 bytes
Database mounted.
SQL> show parameter spfile;

NAME                                 TYPE        VALUE
------------------------------------ ----------- ------------------------------
spfile                               string      /opt/oracle/product/23ai/dbhom
                                                 eFree/dbs/spfileFREE.ora
SQL>
SQL> alter database archivelog;

Database altered.

SQL> alter database open;

Database altered.

SQL> show parameter db_flashback

NAME                                 TYPE        VALUE
------------------------------------ ----------- ------------------------------
db_flashback_log_dest                string
db_flashback_log_dest_size           big integer 0
SQL> alter system set db_flashback_log_dest_size=10G;

System altered.

SQL> alter system  set db_flashback_log_dest='/opt/oracle/FBL';

System altered.

SQL> create restore point MY_FIRST_RP ;

Restore point created.

SQL> alter system archive log current;

System altered.

SQL>
SQL> create table x as select * from cdb_objects;

Table created.

SQL> insert into x select * from x;

141420 rows created.

SQL> delete x;

282840 rows deleted.

SQL> commit;

Commit complete.

SQL> alter system archive log current;

System altered.

SQL>
SQL> alter database flashback on;  -- only here I enable Flashback

Database altered.

==============================================
alert log messages :
2024-05-05T10:38:35.262274+00:00
alter database flashback on
2024-05-05T10:38:35.423698+00:00
Allocating 8388608 bytes in shared pool for flashback generation buffer.
Allocated 8388608 bytes in shared pool for flashback generation buffer
Flashback Database Enabled at SCN 3124894
===============================================

SQL> create restore point MY_FIRST_RP;  -- testing if I can create another RP with the same name
create restore point MY_FIRST_RP
*
ERROR at line 1:
ORA-38778: Restore point 'MY_FIRST_RP' already exists.
Help: https://docs.oracle.com/error-help/db/ora-38778/


SQL> drop restore point MY_FIRST_RP;

Restore point dropped.

SQL> create restore point MY_FIRST_RP;

Restore point created.

SQL> drop table x;

Table dropped.

SQL> create table x as select * from cdb_objects;

Table created.

SQL>
SQL> alter system archive log current;

System altered.

SQL> delete x;

141420 rows deleted.

SQL> insert into x select * from cdb_objects;

141421 rows created.

SQL> commit;

Commit complete.

SQL> alter system archive log current;

System altered.

SQL>
SQL> select substr(name,1,32), scn, time from v$restore_point;  -- identify the RP that has been created

SUBSTR(NAME,1,32)
--------------------------------------------------------------------------------------------------------------------------------
       SCN TIME
---------- ---------------------------------------------------------------------------
MY_FIRST_RP
   3124955 05-MAY-24 10.39.30.000000000 AM


SQL> select * from v$flashback_database_log;  -- identify the FBDB Logs Size

OLDEST_FLASHBACK_SCN OLDEST_FL RETENTION_TARGET FLASHBACK_SIZE ESTIMATED_FLASHBACK_SIZE     CON_ID
-------------------- --------- ---------------- -------------- ------------------------ ----------
             3124893 05-MAY-24             1440      419430400                        0          0

SQL> select * from v$flashback_log_dest; -- identify the FB Log Dest (why isn't the view named V$FLASHBACK_DATABASE_LOG_DEST ?)

NAME
------------------------------------------------------------------------------------------------------------------------------------
SPACE_LIMIT SPACE_USED NUMBER_OF_FILES     CON_ID
----------- ---------- --------------- ----------
/opt/oracle/FBL
 1.0737E+10  419430400               2          0


SQL>
SQL> !sh
sh-4.4$ cd /opt/oracle/FBL
sh-4.4$ du -sh *
401M    FREE
sh-4.4$ cd FREE
sh-4.4$ ls
flashback
sh-4.4$ cd flashback
sh-4.4$ ls -l
total 409620
-rw-r----- 1 oracle oinstall 209723392 May  5 10:41 o1_mf_m3grfc8t_.flb
-rw-r----- 1 oracle oinstall 209723392 May  5 10:38 o1_mf_m3grfg1v_.flb
sh-4.4$
sh-4.4$ cd $ORACLE_HOME/dbs
sh-4.4$ ls -l arch1*
-rw-r----- 1 oracle oinstall  98164736 May  5 10:31 arch1_2_1167168121.dbf
-rw-r----- 1 oracle oinstall 106480640 May  5 10:33 arch1_3_1167168121.dbf
-rw-r----- 1 oracle oinstall  37506048 May  5 10:40 arch1_4_1167168121.dbf
-rw-r----- 1 oracle oinstall  52515840 May  5 10:40 arch1_5_1167168121.dbf
sh-4.4$
sh-4.4$ exit
exit

SQL> select count(*) from x;

  COUNT(*)
----------
    141421

SQL> shutdown immediate;
Database closed.
Database dismounted.
ORACLE instance shut down.
SQL>
SQL> startup mount;
ORACLE instance started.

Total System Global Area 1603726344 bytes
Fixed Size                  5360648 bytes
Variable Size             419430400 bytes
Database Buffers         1174405120 bytes
Redo Buffers                4530176 bytes
Database mounted.
SQL> flashback database to restore point MY_FIRST_RP;   -- try to Flashback the Database
flashback database to restore point MY_FIRST_RP
*
ERROR at line 1:
ORA-38760: This database instance failed to turn on flashback database
Help: https://docs.oracle.com/error-help/db/ora-38760/


============================================
alert log messages :
2024-05-05T10:45:28.380285+00:00
Successful mount of redo thread 1, with mount id 1440201864
2024-05-05T10:45:28.380506+00:00
Allocating 8388608 bytes in shared pool for flashback generation buffer.
Allocated 5807328 bytes in shared pool for flashback generation buffer
RVWR could not begin generation of flashback log data because
DB_RECOVERY_FILE_DEST is not set.
2024-05-05T10:45:28.392865+00:00
Errors in file /opt/oracle/diag/rdbms/free/FREE/trace/FREE_rvwr_2343.trc:
ORA-38776: cannot begin flashback generation - recovery area is disabled
2024-05-05T10:45:28.392899+00:00
WARNING: Cannot open the flashback thread for this instance due to the above error.
WARNING: Flashback thread open failed - to resolve this, either correct the reported error or turn off database flashbac
k.
2024-05-05T10:45:28.393060+00:00
Database mounted in Exclusive Mode
Lost write protection mode set to "auto"
Completed: ALTER DATABASE   MOUNT
2024-05-05T10:46:04.458087+00:00
flashback database to restore point MY_FIRST_RP
ORA-38760 signalled during: flashback database to restore point MY_FIRST_RP...
2024-05-05T10:50:43.887137+00:00
==============================================


Explanation of the Error :
===========================
38776, 00000, "cannot begin flashback generation - recovery area is disabled"
// *Cause: During a database mount, the RVWR process discovered that the
//         recovery area was disabled.  DB_RECOVERY_FILE_DEST must have
//         been set null or removed from the INIT.ORA file while the database
//         was unmounted.
// *Action: Flashback database requires the recovery area to be enabled.
//          Either enable the recovery area by setting the
//          DB_RECOVERY_FILE_DEST and DB_RECOVERY_FILE_DEST_SIZE initialization
//          parameters, or turn off flashback database with the
//          ALTER DATABASE FLASHBACK OFF command.



So, Oracle 
1.  allows me to create a Restore Point
2.  generates Flashback Log
3.  confirms that they exist
BUT DOES NOT ALLOW ME TO FLASHBACK THE DATABASE.

DB_RECOVERY_FILE_DEST is still mandatory (even if you have ArchiveLogs being written elsewhere -- my ArchiveLogs were going to the default location under $ORACLE_HOME/dbs).

If you look at the documentation (column "Required" in Table 5-4) it does say that DB_RECOVERY_FILE_DEST is a required parameter -- although it doesn't explicitly say that this is required for the FLASHBACK LOGS.  Also, my database was happy to generate ArchiveLogs in another, default, location ($ORACLE_HOME/dbs).

The CREATE RESTORE POINT command didn't give me any warning that the Restore Point would not be usable -- I would treat this as a Bug.
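Based on the ORA-38776 text above, the sketch of the fix is simply to define a recovery area as well, even though the Flashback Logs themselves continue to go to DB_FLASHBACK_LOG_DEST (the /opt/oracle/FRA directory created at the start of this test was intended for this; the size here is an arbitrary assumption):

SQL> alter system set db_recovery_file_dest_size=20G;
SQL> alter system set db_recovery_file_dest='/opt/oracle/FRA';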

Categories: DBA Blogs

Simple MySQL Range Scan Test

Bobby Durrett's DBA Blog - Fri, 2024-05-03 15:00

I was trying to tune a MySQL query this week. I ran the same query against Oracle with the same data and got a much faster runtime on Oracle. I couldn’t get MySQL to do a range scan on the column that Oracle was doing it on. So, I just started barely scratching the surface with a simple test of when MySQL will use an index versus a full table scan in a range query. In my test MySQL always uses an index except on extreme out of range conditions. This is funny because in my real problem query it was the opposite. But I might as well document what I found for what it’s worth. I haven’t blogged much lately.

Here is my testcase and its output:

https://www.bobbydurrettdba.com/uploads/mysqlindexuserangequeries.zip

This is on 8.0.26 as part of an AWS Aurora MySQL RDS instance with 2 cores and 16 gigabytes of RAM.

I created a simple test table and put 10485760 rows in it:

create table test 
(a integer NOT NULL AUTO_INCREMENT,
b integer,
PRIMARY KEY (a));

The value of b is always 1 and a ranges from 1 to 10878873.

This query uses an index range scan:

select
sum(b)
from
test
where
a > -2147483648;

This query uses a full table scan:

select
sum(b)
from
test
where
a > -2147483649;

The full scan is slightly faster.

Somehow, when the constant is 2147483650 units away from the smallest value of a, the MySQL optimizer suddenly thinks you need a full scan (perhaps not coincidentally, -2147483649 is just below -2147483648, the minimum value of a signed INT, so the constant falls outside the column's type range).
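If you want to check the plan choice directly, EXPLAIN shows the cutover (a sketch against the test table above; FORMAT=TREE is available in MySQL 8.0.16+):

explain format=tree
select sum(b) from test where a > -2147483648;  -- index range scan on PRIMARY

explain format=tree
select sum(b) from test where a > -2147483649;  -- full table scan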

There are a million more tests I could do, varying any number of things, but I thought I might as well put this out there. I’m not really any the wiser, but it is a type of test that might be worth mentioning.

Bobby

Categories: DBA Blogs

Video on monitoring (a single instance) DataGuard Standby for a RAC (2-node) Cluster

Hemant K Chitale - Wed, 2024-05-01 04:03

 I've just uploaded a new video on monitoring DataGuard where the Standby is a Single Instance environment for a 2-node RAC Cluster.


The scripts used in the demo are in this ZIP (script files with extension TXT).

Categories: DBA Blogs

CPU Utilization

Tom Kyte - Tue, 2024-04-30 13:46
Hi Tom, We run a multi-user OLTP system on an Exadata Quarter Rack (Linux version). Though the system response time is consistent and the system is performing well, we observed run queues with CPU utilization at 70% on both nodes. What could be the reason? My understanding has always been that run queues form only if system utilization exceeds 100%. But in this case the CPU on both nodes is 65% utilized and 30% is free. But maybe my understanding is flawed. Could you please explain the concept of CPU utilization and run queues vis-à-vis CPU count, especially for an OLTP workload?
Categories: DBA Blogs

Read consistency accross cursors in one procedure

Tom Kyte - Tue, 2024-04-30 13:46
I am looking for read consistency across multiple cursors in a packaged procedure. In the past I have opened the cursors that I wanted to be consistent at the start of the procedure, used them, and closed them at the end. I am starting to think that the time the first cursor takes to open and resolve its result set is making the subsequent cursor inconsistent, although this seems to have worked 99% of the time. Example:

DECLARE
  CURSOR Cur1 IS SELECT SUM(X) FROM A WHERE SummaryFlag = 'N';
  CURSOR Cur2 IS SELECT ROWID FROM A WHERE SummaryFlag = 'N';
BEGIN
  OPEN Cur1;
  OPEN Cur2;
  ...
  FOR Rows IN Cur1 ... UPDATE ASummary ...
  ...
  FOR Rows IN Cur2 ... UPDATE A SET SummaryFlag = 'Y' WHERE RowId = Cur2.ROWID;

I have had a few occasions where the summary table does not contain the information that has now been flagged as summarized. Does opening the cursors one right after the other guarantee a consistent result set, and if not, why? Will using "SET TRANSACTION ISOLATION LEVEL SERIALIZABLE" fix this? How can I set my isolation level and rollback segment at the same time? Thanks in advance.
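A minimal sketch of the serializable approach asked about: SET TRANSACTION must be the first statement of the transaction, after which every query in it reads from the same snapshot, however long the first cursor takes:

<code>DECLARE
  CURSOR Cur1 IS SELECT SUM(X) total FROM A WHERE SummaryFlag = 'N';
  CURSOR Cur2 IS SELECT ROWID rid FROM A WHERE SummaryFlag = 'N';
BEGIN
  COMMIT;  -- end any open transaction so SET TRANSACTION comes first
  SET TRANSACTION ISOLATION LEVEL SERIALIZABLE;
  OPEN Cur1;
  OPEN Cur2;   -- both cursors now see one consistent snapshot
  -- ... process and update as before ...
  CLOSE Cur1;
  CLOSE Cur2;
  COMMIT;
END;
/</code>

Note that serializable transactions can raise ORA-08177 ("can't serialize access for this transaction") under concurrent updates and need retry logic.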
Categories: DBA Blogs
