DBA Blogs

Question from USER399341 for Database In-Memory Office Hours - 2021-11-24

Tom Kyte - Wed, 2021-11-24 08:06
The following question was asked, but got attached to the Nov 24th session, which was re-scheduled to this Nov 18th session: Question from USER399341 for Database In-Memory Office Hours - 2021-11-24. I have a table with all transaction amounts (both positive and negative). I have to find a combination of these amounts that equates to a given sum transaction amount stored in another table. There will be tens of thousands of transactions in the first table. I am using Oracle 19c. Appreciate your help.
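This is the classic subset-sum problem, which recursive subquery factoring can express directly, though the search space grows combinatorially, so tens of thousands of rows will need a cap on combination size or an approximate approach. A minimal sketch, assuming hypothetical tables amounts(id, amt) and targets(total):

```sql
-- Hypothetical schema: amounts(id, amt) holds the transactions,
-- targets(total) holds the sums to match. Combinations are built in
-- ascending id order to avoid generating duplicates; practical only
-- for small row counts without further pruning.
with combos (ids, last_id, running_sum) as (
  select to_char(id), id, amt
  from   amounts
  union all
  select c.ids || ',' || a.id, a.id, c.running_sum + a.amt
  from   combos c
  join   amounts a on a.id > c.last_id
)
select c.ids, c.running_sum
from   combos c
join   targets t on t.total = c.running_sum;
```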
Categories: DBA Blogs

Unnest a nested table with the extracted data in single row

Tom Kyte - Tue, 2021-11-23 13:46
Hi, I have a nested table column whose element type has three attributes. I have four entries for the same ID within the nested column. I want to unnest this table and get the entries as separate columns in a single row. How do I do it? Below is the code (the element constructor is TEST; the collection constructor is TESTS): <code>create or replace TYPE test AS OBJECT ( point NUMBER(3), latitude NUMBER(10), longitude NUMBER(10) );
/
create or replace TYPE tests IS TABLE OF test;
/
CREATE TABLE test_table (
  id       NUMBER(3,0),
  location tests
) NESTED TABLE location STORE AS locations;

INSERT INTO test_table (id, location) VALUES
  (161, tests(test(0, 4009716, 50056416), test(1, 4324450, 51769233),
              test(2, 5570283, 51604983), test(3, 5845666, 49989300)));
INSERT INTO test_table (id, location) VALUES
  (162, tests(test(0, 4862133, 43994149), test(1, 3183550, 43960533),
              test(2, 3970383, 45314300), test(3, 5032600, 44909200)));</code> Expected output: <code>ID  POINT1 LATITUDE1 LONGITUDE1 POINT2 LATITUDE2 LONGITUDE2 POINT3 LATITUDE3 LONGITUDE3 POINT4 LATITUDE4 LONGITUDE4
--- ------ --------- ---------- ------ --------- ---------- ------ --------- ---------- ------ --------- ----------
161 0      4009716   50056416   1      4324450   51769233   2      5570283   51604983   3      5845666   49989300
162 0      4862133   43994149   1      3183550   43960533   2      3970383   45314300   3      5032600   44909200</code>
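One sketch of the unnest-and-pivot, assuming the DDL above and that each ID has exactly four points (a PIVOT IN-list must be fixed at parse time). Note the point value ends up encoded in the generated column names rather than as separate POINTn columns:

```sql
-- Flatten the collection with TABLE(), then pivot on the point number.
select *
from (
  select t.id, l.point, l.latitude, l.longitude
  from   test_table t
  cross join table(t.location) l
)
pivot (
  max(latitude)  as latitude,
  max(longitude) as longitude
  for point in (0 as p1, 1 as p2, 2 as p3, 3 as p4)
);
```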
Categories: DBA Blogs

Json_Transform in Oracle 21c

Tom Kyte - Tue, 2021-11-23 13:46
Team, given this JSON, how do I increment the quantity by two at each level? I tried the below using JSON_TRANSFORM but ended up with an error. Is that possible using the JSON_TRANSFORM function? Kindly help. <code>demo@XEPDB1> select json_serialize(y pretty) y
  2  from t
  3  where x =2;

Y
----------------------------------------
{
  "produce" :
  [
    {
      "fruit" : "apple",
      "quantity" : 10
    },
    {
      "fruit" : "orange",
      "quantity" : 15
    }
  ]
}

demo@XEPDB1> select json_serialize(
  2     json_transform( y, set '$.produce[*].quantity' =
  3       '$.produce[*].quantity' + 2 ) pretty )
  4  from t
  5  where x =2;
   '$.produce[*].quantity' + 2 ) pretty )
   *
ERROR at line 3:
ORA-01722: invalid number

demo@XEPDB1></code> I was able to get this done using JSON object types from PL/SQL. I would like to know if the same can be done using the JSON_TRANSFORM function in SQL. <code>demo@XEPDB1> create or replace function update_json( p_input json )
  2    return json
  3  as
  4    l_data  json_object_t;
  5    l_size  number := 0;
  6    l_ele   json_element_t;
  7    l_array json_array_t;
  8    l_obj   json_object_t;
  9    l_qty   number := 0;
 10  begin
 11    l_data := json_object_t( p_input );
 12
 13    if l_data.has('produce') then
 14      l_ele := l_data.get('produce');
 15      if l_ele.is_array then
 16        l_size := l_ele.get_size()-1;
 17        l_array := json_array_t( l_ele );
 18        for i in 0..l_size
 19        loop
 20          l_obj := treat( l_array.get(i) as json_object_t );
 21          l_qty := l_obj.get_number('quantity');
 22          l_obj.put( 'quantity', l_qty+2 );
 23        end loop;
 24      end if;
 25    end if;
 26    return l_data.to_json;
 27  end;
 28  /

Function created.

demo@XEPDB1> select json_serialize(y)
  2  from t t1
  3  where x =2;

JSON_SERIALIZE(Y)
--------------------------------------------------------------------------------------
{"produce":[{"fruit":"apple","quantity":10},{"fruit":"orange","quantity":15}]}

demo@XEPDB1> select update_json(y)
  2  from t t1
  3  where x =2;

UPDATE_JSON(Y)
--------------------------------------------------------------------------------
{"produce":[{"fruit":"apple","quantity":12},{"fruit":"orange","quantity":17}]}

demo@XEPDB1></code>
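The ORA-01722 arises because the right-hand side '$.produce[*].quantity' + 2 is parsed as a string literal plus a number, not as a path expression, so Oracle tries to convert the literal to a number. One pure-SQL workaround sketch rebuilds the document with JSON_TABLE and JSON_ARRAYAGG, assuming the same table t(x, y) from the question:

```sql
-- Shred the array, bump each quantity, and reassemble the document.
select json_object(
         'produce' value json_arrayagg(
           json_object('fruit'    value jt.fruit,
                       'quantity' value jt.quantity + 2)
           order by jt.rn)
       ) as y
from   t,
       json_table(t.y, '$.produce[*]'
         columns (rn       for ordinality,
                  fruit    varchar2(30) path '$.fruit',
                  quantity number       path '$.quantity')) jt
where  t.x = 2
group  by t.x;
```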
Categories: DBA Blogs

SQL Server Migration Assistant for Oracle

Hemant K Chitale - Sat, 2021-11-20 08:08

Here's a video demo of the SQL Server Migration Assistant for Oracle: https://youtu.be/zNTF1ncr45g


The tool is available for download here

Categories: DBA Blogs

How to change the file name dynamically for an external table creation (ex ABC_YYYYMMDDHH24MISS)

Tom Kyte - Fri, 2021-11-19 18:06
Team, <i>the version of Oracle we use is "Oracle Database 19c Enterprise Edition Release 19.0.0.0.0".</i> Let's consider our basic scott.emp table for this scenario, limited to a few columns. The external table creation script goes something like this: <code>CREATE TABLE employee
( empno NUMBER(4,0),
  ename VARCHAR2(10),
  job   VARCHAR2(9),
  mgr   NUMBER(4,0)
)
ORGANIZATION EXTERNAL
( TYPE oracle_loader
  DEFAULT DIRECTORY scott
  ACCESS PARAMETERS
  ( RECORDS DELIMITED BY NEWLINE
    SKIP 1
    FIELDS TERMINATED BY ","
    MISSING FIELD VALUES ARE NULL
    REJECT ROWS WITH ALL NULL FIELDS )
  LOCATION ( 'EMPLOYEE_20211116203018.csv' )
)
REJECT LIMIT UNLIMITED;</code> The file name passed here is 'EMPLOYEE_20211116203018.csv'. Its structure is EMPLOYEE_YYYYMMDDHH24MISS: YYYY = 2021, MM = 11, DD = 16, HH24MISS = 203018. If the file name had a suffix of only YYYYMMDD, this could have been handled with dynamic SQL (EXECUTE IMMEDIATE), building the name with <code>select 'EMPLOYEE_'||to_char(sysdate,'YYYYMMDD')||'.csv' from dual</code> and substituting it into the LOCATION clause of the CREATE TABLE above. But unfortunately the file name includes HH24MISS along with YYYYMMDD, and the timestamp is not a fixed value; it may vary at least in minutes and seconds. Any code snippet to handle the timestamp as well in the external table creation script is most welcome and appreciated. Regards, Satyam Reddy.
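Rather than re-creating the table each time, one common approach is to alter the external table's LOCATION at run time. A sketch, assuming the loading job can determine the actual file name (here it is simply rebuilt from the current timestamp for illustration; in practice it would come from the job that wrote the file or from a directory listing):

```sql
declare
  l_file varchar2(128);
begin
  -- Hypothetical: in a real job, l_file would be the known name of the
  -- file that was just delivered, not a freshly generated timestamp.
  l_file := 'EMPLOYEE_' || to_char(systimestamp, 'YYYYMMDDHH24MISS') || '.csv';

  -- Point the existing external table at the new file.
  execute immediate
    'alter table employee location (''' || l_file || ''')';

  -- Subsequent queries against EMPLOYEE now read the new file.
end;
/
```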
Categories: DBA Blogs

Move data to warehouse

Tom Kyte - Thu, 2021-11-18 23:46
Looking for the best way to move data from an Oracle 19c OLTP system (size 13 TB) to an Oracle 19c warehouse DB. We currently use the standby database to fetch the modified rows and add them to the warehouse system, but this process takes too much time.
Categories: DBA Blogs

dblink blocked

Tom Kyte - Thu, 2021-11-18 23:46
Hi, I have a simple SQL like <code>SELECT max(case when day=:1 then tra else 0 end) tra, max(case when tra=1 and day<:2 then day else '0' end) lasttradingday FROM settlement.t_calendar@dl_otcdb WHERE day <= :3</code>. The query often hangs, but I cannot find any blocking session in v$session. After killing the session, the query works when run again. How could it be blocked? The query runs on 19c, and the remote database is 11g.
Categories: DBA Blogs

RMAN Recover Database

Tom Kyte - Thu, 2021-11-18 23:46
Hi Team, I have a disaster recovery scenario using RMAN. RMAN backup settings:
- Database is backed up using RMAN incremental level 0 every Sunday
- Database is backed up using RMAN incremental level 1 daily (Mon-Sat)
- The archive log is backed up every 15 minutes
- Controlfile autobackup is configured in RMAN
- SPFILE autobackup is configured in RMAN
- Backup sets are stored on separate backup server storage (SBT)
- Environment is running on Windows x64
- Virtual machine is backed up daily

Scenario:
1. Sunday level 0 database backup was successful (01:00H)
2. Monday level 1 database backup was successful (01:00H)
3. Tuesday level 1 database backup was successful (01:00H)
4. All archivelog backups (15-minute interval) were successful
5. At around 10AM Tuesday the server crashed, and it turned out that the restore point of the server/machine was Monday.
6. The machine was restored from the Monday backup. At this point, the database is back to its Monday state.

My question: since the database is restored back to its Monday state, is it possible to roll the database forward to Tuesday, since the date of failure is Tuesday? What would be the strategy? Thank you AskTom.
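In principle the restored datafiles are just an older copy, and RMAN can roll them forward with the Tuesday level 1 backup plus archived logs, provided the backup pieces on the SBT media are still accessible and an SBT channel is configured. A rough sketch of the sequence (details such as whether RESETLOGS is required depend on the state of the restored controlfile and online logs):

```
RMAN> STARTUP NOMOUNT;
RMAN> RESTORE CONTROLFILE FROM AUTOBACKUP;  # pick up the latest (Tuesday) controlfile
RMAN> ALTER DATABASE MOUNT;
RMAN> RECOVER DATABASE;                     # applies Tuesday level 1, then archived logs
RMAN> ALTER DATABASE OPEN RESETLOGS;
```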
Categories: DBA Blogs

Oracle Groundbreakers APAC Virtual Tour 2021: “Automatic Indexing: An Update On Improvements and New Capabilities”

Richard Foote - Thu, 2021-11-18 17:43
I’m very excited to have another opportunity to present a somewhat expanded version of my new presentation “Automatic Indexing: An Update On Improvements and New Capabilities” at the upcoming APACOUC Oracle Groundbreakers APAC Virtual Tour 2021. This massive virtual conference runs from 22 November through to 11 December and features some of the very best […]
Categories: DBA Blogs

How to return result of a query in chunks of 100 rows at a time.

Tom Kyte - Thu, 2021-11-18 05:46
Hi Tom, I have a requirement wherein users want that on the screen where we display the result set of a query, only 100 rows should be shown at a time; when the user clicks a button on the screen (named Next), the next 100 rows should be displayed from the query result, and so on until the result ends. Can I handle this scenario in PL/SQL/SQL? Many thanks to you for providing such an interface to us. Regards, Shreem
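A common SQL-only sketch uses the 12c row-limiting clause (OFFSET/FETCH), with the page number supplied by the application as a bind variable; the table, columns, and bind name are assumptions, and a deterministic ORDER BY is required so pages don't overlap:

```sql
-- :page is 0-based; each click on "Next" increments it by 1.
select empno, ename, job
from   emp
order  by empno
offset :page * 100 rows
fetch  next 100 rows only;
```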
Categories: DBA Blogs

Mapping system GENERATED PARTITION names

Tom Kyte - Thu, 2021-11-18 05:46
I recently converted a few tables to use INTERVAL partitioning, which worked fine. Since the conversion, the partition names have changed to SYS_####. Unbeknownst to me, there were some applications referencing partitions by name (as ridiculous as it sounds), e.g. SELECT count(*) FROM table PARTITION (P_MMDDYYYY), which no longer works. Is there some sort of function or procedure that I can give these developers that can map a system-generated partition name back to the format P_MMDDYYYY? I'm getting blowback because some still insist on using the partition name.
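Two sketches: the PARTITION FOR clause addresses a partition by a value it contains, avoiding names entirely, and the *_TAB_PARTITIONS dictionary view maps each system-generated name to its HIGH_VALUE boundary expression (the table name here is an assumption):

```sql
-- Address a partition by a date it contains; no partition name needed.
select count(*) from my_table partition for (date '2021-11-18');

-- Map SYS_P... names to their upper-boundary expressions.
select partition_name, high_value
from   user_tab_partitions
where  table_name = 'MY_TABLE'
order  by partition_position;
```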
Categories: DBA Blogs

Find query causing NOLOGGING

Tom Kyte - Thu, 2021-11-18 05:46
Hello team, RMAN's REPORT UNRECOVERABLE is showing that we have NOLOGGING operations in the database. Is there a way to find out which query performed the NOLOGGING operation? Thanks for your help.
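V$DATAFILE records when each datafile last received unrecoverable (NOLOGGING) changes, which at least narrows the time window and object set to search in AWR/ASH for the responsible statement. A sketch:

```sql
-- Which files took NOLOGGING changes, and when?
select df.file#, df.name, df.unrecoverable_time
from   v$datafile df
where  df.unrecoverable_time is not null
order  by df.unrecoverable_time desc;
```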
Categories: DBA Blogs

Ways to find why a SQL Plan Baseline is not being used

Tom Kyte - Thu, 2021-11-18 05:46
Hi Chris/Connor. We have a production 12c database (12.2.0.1) and have enabled a particular SQL plan baseline for a high-load query, but for some reason it's not being used. The baseline is ENABLED and FIXED, and the REPRODUCED column reads YES, but I can't see a date in LAST_EXECUTED or LAST_REPRODUCED, and neither can I see the plan name in the Note section when executing DBMS_XPLAN.DISPLAY_CURSOR. The plan is the only one in the baseline and plan history. I have checked that the tables and indexes named there exist and are valid (and visible, for the indexes), no remote tables are referenced, and the SQL is executed from the same schema specified in PARSING_SCHEMA_NAME of DBA_SQL_PLAN_BASELINES. I want to know what options I have to determine why the plan is not being used (i.e. a data dictionary view that shows why a particular plan was rejected, or similar). Does the optimizer record somewhere its decision not to use a SQL plan baseline? Thanks in advance for your help.
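One way to dig further is to display the stored baseline plan and compare it line by line with the cursor's actual plan: if the optimizer can no longer reproduce the baseline plan from its stored hints (for example after an index or statistics change), it silently falls back to the best-cost plan. A sketch; the SQL handle is a placeholder to be taken from DBA_SQL_PLAN_BASELINES:

```sql
select *
from   table(
         dbms_xplan.display_sql_plan_baseline(
           sql_handle => 'SQL_1234567890abcdef',  -- hypothetical handle
           format     => 'typical'));
```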
Categories: DBA Blogs

Row lock contention

Tom Kyte - Thu, 2021-11-18 05:46
Hi Tom, this is regarding a locking issue in the database. One fine Monday morning, a client asked me to check slowness in the system reported by a customer on Saturday. The ASH report for the issue period looks like this: <code>Blocking Sid (Inst)  % Activity  Event Caused                   % Event  User  Program
1132, 3397( 2)       9.12        enq: TX - row lock contention  9.12     ** NOT FOUND ** BLOCKING SESSION NOT FOUND</code> From the report it is clear that there was row lock contention for almost 2 hours (the same UPDATE query was present in AWR for almost 2 hours). So I ran the query below against DBA_HIST_ACTIVE_SESS_HISTORY to check further: <code>SELECT DISTINCT a.sql_id,
       TO_CHAR(a.sample_time,'YYYY-MM-DD HH24:MI:SS') sample_time,
       a.blocking_session, d.object_name
FROM   dba_hist_active_sess_history a, gv$sql s, dba_objects d
WHERE  a.sql_id = s.sql_id
AND    a.blocking_session IS NOT NULL
AND    a.user_id <> 0
AND    a.current_obj# = d.object_id
AND    a.sample_time BETWEEN TO_TIMESTAMP('06.11.2021 13:00:00', 'dd.mm.yyyy hh24:mi:ss')
                         AND TO_TIMESTAMP('06.11.2021 16:40:59', 'dd.mm.yyyy hh24:mi:ss')
AND    a.event = 'enq: TX - row lock contention'
ORDER  BY sample_time DESC;</code> The output confirmed that the blocking session is 1132 (as shown in the ASH report). Now when I queried the same table for session ID 1132, sample output is below: <code>session_id  sql_id
1132        aykcpznxu0k8c
1132        cgj479fjtszs0
1132        8ryy5pw5gjbn6
1132        0pcgv72dynnfv
1132        2npxyk4vq7575</code> So here I am stuck, and my questions are: 1. Since multiple SQLs ran under the same session ID, how do I uniquely identify the blocking SQL? 2. All the above SQLs for session ID 1132 are SELECT statements. Can a SELECT statement cause a lock if the query runs for 30 minutes and the underlying data changes in between? Thanks
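One way to narrow down which of the blocker's SQLs held the lock is to self-join ASH on the sample, so you see what session 1132 was executing at the exact moments it was blocking someone; a sketch against DBA_HIST_ACTIVE_SESS_HISTORY:

```sql
-- w = the waiting sessions, b = the blocker's samples at the same instant.
select b.sample_time, b.sql_id, b.event, b.session_state
from   dba_hist_active_sess_history w
join   dba_hist_active_sess_history b
  on   b.session_id      = w.blocking_session
 and   b.instance_number = w.blocking_inst_id
 and   b.sample_id       = w.sample_id
 and   b.snap_id         = w.snap_id
where  w.event = 'enq: TX - row lock contention'
order  by b.sample_time;
```

Note that the blocking transaction may be idle (not sampled) while holding the lock, which is why ASH reports sometimes show "BLOCKING SESSION NOT FOUND".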
Categories: DBA Blogs

What is the proper way to clear active/current corrupted redo log group

Tom Kyte - Tue, 2021-11-16 17:06
Hi Tom, I wanted to know the proper way to clear an active/current corrupted redo log group. From what I know, you need to do a point-in-time recovery to before the corruption happened. Is this the best way, or is there another way to clear a corrupted active/current redo log group without doing a PITR?
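For an INACTIVE group there is a standard command; for an ACTIVE or CURRENT group the redo is still needed for crash recovery, so clearing it generally means accepting data loss via incomplete recovery, as the question suggests. A sketch of the inactive case (the group number is an assumption):

```sql
-- Only safe for an INACTIVE group. The UNARCHIVED keyword leaves a gap
-- in the archived-log stream, so take a full backup immediately afterwards
-- (and any standby databases must be re-synchronized).
alter database clear unarchived logfile group 3;
```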
Categories: DBA Blogs

Database links across Kerberized databases

Tom Kyte - Tue, 2021-11-16 17:06
Hi, we have 2 databases (db-1 and db-2) and 2 users (user-1 and user-2) set up on both databases. Our databases use only Kerberized connections. On db-1 there is a database link created under user-1 (db-link) that points to db-2. There is a procedure on db-1 under user-1 that accesses a table in db-2 (pseudo code below): <code>create procedure load_data_across_db_link as
begin
  ...
  insert into user-1.table(cols)
  select cols from user-1.table@db-link;
end;</code> When the Java application connects to db-1 using user-1's credential cache (since the DB is Kerberized) and executes the above procedure, it works and data is populated into the table. But when the Java application connects using user-2's credential cache, the procedure gives the error "ORA-04052: error occurred when looking up remote object". We believe this could be because, when the procedure is executed by user-2, it doesn't have access to user-1's credential cache to reach the object over the remote link. So we created another DB link with the same name (db-link) under user-2, but even then we get the same error. How can we set up the accounts such that the Java application connects as user-2 and executes the procedure owned by user-1 to load the data across the DB link? (Please note that we have granted user-2 rights to perform DML on user-1-owned objects, and execute permission on the above-mentioned procedure.) Edit: when we try to connect to user-2 using "proxy" access in SQL Developer and run the package, we get "ORA-12662: proxy ticket retrieval failed". (Our DBA said that we won't be able to use proxy access to test database links, so we have always deployed the code and then run it using the user-1 (or user-2) account.)
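One angle to consider: a connected-user database link authenticates at the remote side with the invoking session's own credentials, which is exactly what fails here when user-2 drives user-1's procedure. A fixed-user link carries its own credentials, so it works regardless of who invokes the definer-rights procedure; whether password-authenticated links are permitted in a Kerberos-only environment is a site policy question. A sketch (names and password are hypothetical):

```sql
-- Created under user-1, the owner of the procedure. The link supplies
-- its own remote credentials instead of the invoker's Kerberos ticket.
create database link db_link
  connect to remote_user identified by "remote_password"
  using 'DB2_TNS_ALIAS';
```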
Categories: DBA Blogs

Deferrable Constraints

Tom Kyte - Tue, 2021-11-16 17:06
I recently implemented constraints on a system as follows: i) all primary keys were declared as not deferrable; ii) all foreign keys were declared as deferrable initially deferred. The reasoning behind having the primary keys not deferrable was that a deferrable constraint does not enforce uniqueness until commit time, and I get the impression that lookups using the index on a deferrable constraint will be slower than lookups using the index on a normal constraint, because uniqueness is not enforced on deferrable constraints. The reasoning for having the foreign keys deferrable initially deferred was to allow easier programming in the application: there is no need to alter the constraints to be deferred or to worry about the order of inserts into the tables. Is this a sensible guideline to follow for using deferred integrity constraints?
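A minimal sketch of the two declarations described above (table and constraint names are hypothetical); note that Oracle polices a deferrable primary key with a non-unique index, which is the basis of the lookup-cost concern:

```sql
create table parent (
  id number
    constraint parent_pk primary key not deferrable  -- unique index, checked per statement
);

create table child (
  id        number constraint child_pk primary key not deferrable,
  parent_id number
    constraint child_parent_fk references parent (id)
    deferrable initially deferred  -- checked at COMMIT; insert order doesn't matter
);
```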
Categories: DBA Blogs

Formula to get exact years of experience of one employee (date delta)

Tom Kyte - Mon, 2021-11-15 22:46
Sir, I want to get the exact experience of an employee. Is there any formula to calculate it? We can't consider each month as 30 days or each year as 365 days. I want to know how many years, how many months, and how many days of experience a particular employee has. Using MONTHS_BETWEEN we can calculate the total number of months, then by dividing by 12 we can get the number of years and months. But how do we calculate the number of days left over? We could also calculate the hours, minutes, and seconds out of that time, but how?
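One sketch of the whole-months-then-leftover-days approach described above: take whole months with MONTHS_BETWEEN, anchor them back onto the hire date with ADD_MONTHS, and the date difference gives the remaining days (the hire date here is a made-up example):

```sql
with d as (
  select date '2015-03-10' as hire_date,  -- hypothetical hire date
         trunc(sysdate)    as as_of
  from   dual
)
select trunc(months_between(as_of, hire_date) / 12)     as years,
       mod(trunc(months_between(as_of, hire_date)), 12) as months,
       as_of - add_months(hire_date,
                          trunc(months_between(as_of, hire_date))) as days
from   d;
```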
Categories: DBA Blogs
