Tom Kyte

These are the most recently asked questions on Ask Tom

sqlplus 'set feedback off' is not working as expected with Oracle client 19.14

Tue, 2023-01-31 05:46
Hi Tom, I have recently upgraded a couple of databases (12.1 -> 19.17) and the remote Oracle client (19.3 -> 19.14) on the application servers connecting to the upgraded databases. There is only one case where 'set feedback off' is not working as expected: when I commit from the SQL*Plus 19.14 client against one particular upgraded 19.17 database. How can I trace SQL*Plus to find the reason for the unexpected behaviour of 'set feedback off'? Here is the test case I produced (German client messages translated to English). <code>
######### Connect to a database ( test1 ) on the same db_host and using the same db_oracle_home, where "SET FEEDBACK OFF" works as EXPECTED.
infa64@dwhsas15> sqlplus test_user@dwhpx-scan:1521/test1

SQL*Plus: Release 19.0.0.0.0 - Production on Wed Jan 25 09:47:34 2023
Version 19.14.0.0.0
Copyright (c) 1982, 2021, Oracle. All rights reserved.
Enter password:
Last Successful login time: Wed Jan 25 2023 09:46:38 +01:00

Connected to:
Oracle Database 19c Enterprise Edition Release 19.0.0.0.0 - Production
Version 19.17.0.0.0

SQL> commit;
Commit complete.
SQL> set feedback off
SQL> commit;
SQL> exit
Disconnected from Oracle Database 19c Enterprise Edition Release 19.0.0.0.0 - Production
Version 19.17.0.0.0

######### Connect to a different database ( test2 ) on the same db_host and using the same db_oracle_home, where "SET FEEDBACK OFF" is NOT working as EXPECTED.
infa64@dwhsas15> sqlplus test_user@dwhpx-scan:1521/sim

SQL*Plus: Release 19.0.0.0.0 - Production on Wed Jan 25 09:48:01 2023
Version 19.14.0.0.0
Copyright (c) 1982, 2021, Oracle. All rights reserved.
Enter password:
Last Successful login time: Wed Jan 25 2023 06:01:50 +01:00

Connected to:
Oracle Database 19c Enterprise Edition Release 19.0.0.0.0 - Production
Version 19.17.0.0.0

SQL> commit;
Commit complete.
SQL> set feedback off
SQL> commit;
Commit complete.   --<< UNEXPECTED >>--
SQL>
</code>
Categories: DBA Blogs

How to convert a collection into json_object

Tue, 2023-01-31 05:46
Hi, is it possible to convert a collection into a json_object? I have a collection named code1 of type codelist, a table of VARCHAR2(2): code1 := codelist('AB','CD','EF'); I want the above in JSON format as below. Is it possible? Please help. <code>{ "codelist" : ["AB","CD","EF"] }</code>
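A hedged sketch of one way to do this in 18c/19c, assuming CODELIST is a SQL-level (not package-local) collection type: pass the collection through TABLE() and aggregate with JSON_ARRAYAGG inside JSON_OBJECT.
<code>
-- Assumes: CREATE TYPE codelist AS TABLE OF VARCHAR2(2);
SELECT JSON_OBJECT(
         'codelist' VALUE JSON_ARRAYAGG(t.column_value)
       ) AS result
FROM   TABLE(codelist('AB','CD','EF')) t;
-- expected shape: {"codelist":["AB","CD","EF"]}
</code>
Note that valid JSON uses double quotes around the strings, unlike the sample in the question.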
Categories: DBA Blogs

ODBC Transparent Gateway

Tue, 2023-01-31 05:46
Hi, I had Oracle <b>12c Enterprise on Windows 64</b> and MariaDB 10.2 on Ubuntu. I have been using the ODBC transparent gateway to send data from Oracle to MariaDB. I am in the process of transitioning to <b>Oracle Enterprise 19c on Solaris</b> and MariaDB 10.4 on RHEL 8. It looks like there are no native MariaDB ODBC drivers for Solaris. What is your recommendation for connecting MariaDB (RHEL) to Oracle 19 (Solaris)? Under the current licensing model, is only the ODBC Transparent Gateway covered by regular licensing? This data transfer happens once a day (via dbms_scheduler). At the maximum, we will have 2000 records transferred between these DBMSes. What is the best approach? Thanks very much.
Categories: DBA Blogs

Poorly Performing Query after RLS / VPD Policy

Mon, 2023-01-30 11:26
We have an old query that was running fine (though it could use improvement in general), but after a VPD policy was put in place it started to run long and not complete. The difference is between 1 and 1.5 hours versus not completing after running for well over 48 hours. After digging into the VPD policy: for this application ID they did not want to apply the policy, so the predicate being added was just "1 = 1". From what I have experienced (and after doing some research), I haven't found anything suggesting that 1 = 1 could really affect a query. Yet when we run the explain plan in an environment where the new policies are not present, we get a different plan once "1 = 1" is added into the query. In a nutshell, this query selects against a large detail table, but it must first check whether the detail has been completed (subquery against the vw_header_detail view) and then check whether the order was modified in the window of time we are looking at (a second subquery with EXISTS clauses against 4 different tables). Note: these are all views that reference tables. If it is completed and was modified in the time window, it goes back to the detail table, which is very large, and pulls all the applicable rows for that seq_id.
<code>
SELECT *
FROM   vw_detail dtl
WHERE  EXISTS (SELECT 1
               FROM   vw_header_detail sts1
               WHERE  ((sts1.posted_date IS NOT NULL) OR (sts1.secondary_posted_date IS NOT NULL))
               AND    sts1.seq_id = dtl.seq_id
               AND    sts1.c_status IN ('P','D','C','G','K','A')
               AND    sts1.p_status IN ('S','P','F','O'))
AND   (EXISTS (SELECT 1
               FROM   vw_header hdr
               WHERE  NVL(update_datetime, :out_of_range) BETWEEN :from_date AND :to_date
               AND    hdr.seq_id = dtl.seq_id)
    OR EXISTS (SELECT 1
               FROM   vw_detail dtl2
               WHERE  NVL(update_datetime, :out_of_range) BETWEEN :from_date AND :to_date
               AND    dtl2.seq_id = dtl.seq_id)
    OR EXISTS (SELECT 1
               FROM   vw_secondary_header sech
               WHERE  NVL(update_datetime, :out_of_range) BETWEEN :from_date AND :to_date
               AND    sech.seq_id = dtl.seq_id)
    OR EXISTS (SELECT 1
               FROM   vw_secondary_detail secd
               WHERE  NVL(update_datetime, :out_of_range) BETWEEN :from_date AND :to_date
               AND    secd.seq_id = dtl.seq_id))
ORDER BY dtl.seq_id, dtl.line_number
</code> The existing query above will return this explain plan: <code>
------------------------------------------------------------------------------------------------------------------
| Id  | Operation           | Name    | Rows  | Bytes | Cost (%CPU)| Time     |
------------------------------------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT    |         |    21 |  9282 |  6282K  (1)| 00:04:06 |
|   1 |  SORT ORDER BY      |         |    21 |  9282 |  6282K  (1)| 00:04:06 |
|   2 |   NESTED LOOPS SEMI |         |    21 |  9282 |  6282K  (1)| 00:04:06 |
|   3 |    NESTED LOOPS     |         |    21 |  9240 |  6281K  (1)| 00:04:06 |
|   4 |     VIEW            | VW_SQ_1 |   945K|  6463K|  6278K  (1)| 00:04:06 |
|   5 |      HASH UNIQUE    ...
</code>
Categories: DBA Blogs

Generate N TIMESTAMPs

Mon, 2023-01-30 11:26
I need to create some test data: N timestamps between two times, for every 15-minute interval, including the start time and less than the end time. N would be defined as a DBMS_RANDOM value between a low number and a high number. I want to run something like get_dates(TIMESTAMP '2023-01-25 03:00:00', TIMESTAMP '2023-01-27 13:30:00', 10, 25). Any help would be greatly appreciated.
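A hedged PL/SQL sketch of the idea (the get_dates routine itself and a target table test_times(ts TIMESTAMP) are assumptions): walk the range in 15-minute slots and, per slot, insert a DBMS_RANDOM-chosen count of random timestamps that fall inside that slot.
<code>
DECLARE
  l_slot TIMESTAMP := TIMESTAMP '2023-01-25 03:00:00';
  l_end  TIMESTAMP := TIMESTAMP '2023-01-27 13:30:00';
  l_n    PLS_INTEGER;
BEGIN
  WHILE l_slot < l_end LOOP
    l_n := TRUNC(DBMS_RANDOM.VALUE(10, 26));   -- N in 10..25 inclusive
    FOR i IN 1 .. l_n LOOP
      -- random offset in [0, 15 minutes), so start time possible, end excluded
      INSERT INTO test_times (ts)
      VALUES (l_slot + NUMTODSINTERVAL(DBMS_RANDOM.VALUE(0, 15 * 60), 'SECOND'));
    END LOOP;
    l_slot := l_slot + INTERVAL '15' MINUTE;
  END LOOP;
  COMMIT;
END;
/
</code>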
Categories: DBA Blogs

SYS.DBMS_NETWORK_ACL_ADMIN

Mon, 2023-01-30 11:26
ORA-04063: package body "SYS.DBMS_NETWORK_ACL_ADMIN" has errors, how do I reinstall this package?
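Before reinstalling anything, it is worth seeing what the errors actually are and whether a simple recompile clears them; a hedged first-aid sketch (run as SYS):
<code>
ALTER PACKAGE SYS.DBMS_NETWORK_ACL_ADMIN COMPILE BODY;

SELECT line, position, text
FROM   dba_errors
WHERE  owner = 'SYS'
AND    name  = 'DBMS_NETWORK_ACL_ADMIN'
ORDER  BY sequence;
</code>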
Categories: DBA Blogs

Bind variables in Live SQL

Mon, 2023-01-30 11:26
Dear Tom, how can I create a bind variable in Oracle Live SQL? <code><variable variable_name data_type;> for example <variable dusko number;></code> just doesn't work; I get an "Unsupported command" message... Thanks and best regards
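For reference, the standard SQL*Plus/SQLcl form has no angle brackets (those mark placeholders in the docs); whether Live SQL accepts the client-side VARIABLE command at all is a separate question:
<code>
VARIABLE dusko NUMBER
EXEC :dusko := 42
PRINT dusko
</code>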
Categories: DBA Blogs

Grouping and updating column with equal parts

Mon, 2023-01-30 11:26
Hi, I have a table that holds about 2 million records. No column in the table can be used to divide the rows equally. When the data is initially loaded into the table, the bucket_number column is null. After the data is loaded, I would like to divide the total number of records into equal parts and update the bucket_number column accordingly. Later the buckets will be processed in parallel by another batch process. I could use NTILE to bucketize the data, but I want to persist the bucket number in the table. What is the best way to bucketize the data and update the column in Oracle?
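A hedged sketch of persisting NTILE output (the table name big_table and the bucket count of 8 are both assumptions): compute the bucket per ROWID in a subquery, then MERGE it back into the column.
<code>
MERGE INTO big_table t
USING (SELECT ROWID AS rid,
              NTILE(8) OVER (ORDER BY ROWID) AS bkt
       FROM   big_table) s
ON (t.ROWID = s.rid)
WHEN MATCHED THEN UPDATE SET t.bucket_number = s.bkt;
COMMIT;
</code>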
Categories: DBA Blogs

Oracle Materialized View - adding a Where clause

Mon, 2023-01-30 11:26
I have a materialized view in my own schema which is based on a table in another schema. However, I want to modify the materialized view to add a where clause which never becomes true, e.g. WHERE 1=2. Can I alter the existing materialized view to add this where clause to the MView query? Will an ALTER command work, or do I have to drop and recreate the MView (which I am reluctant to do)?
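For what it is worth, ALTER MATERIALIZED VIEW has no clause for changing the defining query, so a sketch of the drop-and-recreate route (object names hypothetical) looks like:
<code>
DROP MATERIALIZED VIEW my_mv;

CREATE MATERIALIZED VIEW my_mv AS
SELECT *
FROM   other_schema.base_table
WHERE  1 = 2;   -- never true, so the MView stays empty
</code>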
Categories: DBA Blogs

Trying to Connect to a Local Oracle inside a Container

Mon, 2023-01-30 11:26
I have a Docker container with Oracle Database Enterprise Edition (I followed the basic instructions on container-registry.oracle.com) and I'm trying to create a Node.js application to connect to the Oracle Database inside the container (I tried the "how to" basics on node-oracledb.readthedocs.io), but without success. My connection object is the following (using the same credentials and connectString, I'm able to connect in SQL Developer): <code> { user: 'SYS', password: 'Paulo@oracle123', connectString: 'localhost/OracleSID ' } </code> I've been receiving the following error: <code>
Error: DPI-1047: Cannot locate a 64-bit Oracle Client library: "The specified module could not be found".
See https://oracle.github.io/node-oracledb/INSTALL.html for help
Node-oracledb installation instructions: https://oracle.github.io/node-oracledb/INSTALL.html
You must have 64-bit Oracle Client libraries in your PATH environment variable.
If you do not have Oracle Database on this computer, then install the Instant Client Basic or Basic Light package from
https://www.oracle.com/database/technologies/instant-client/winx64-64-downloads.html
A Microsoft Visual Studio Redistributable suitable for your Oracle client library version must be available.
    at process.processTicksAndRejections (node:internal/process/task_queues:95:5)
    at async connectToDb (D:\PersonalDev\Migrate Oracle to MSSQL\index.js:10:18) { errorNum: 0, offset: 0 }
</code> But I couldn't figure it out. Please HELP!!
Categories: DBA Blogs

Directing trigger output to queue

Mon, 2023-01-30 11:26
We currently use triggers on a subset of tables in our transactional database (on-premises) to capture row / column level changes and insert them into a separate table. With the availability of the DBMS_CLOUD package for our product version, we are exploring the option to rewrite some of the existing triggers and push the row / column changes to a native queueing service in the cloud instead of writing to another table in the database. The idea is then to have an asynchronous process subscribe to the queue and push the data to a downstream data store. The developer blog below has some very relevant information, but it is written for Autonomous Database and Oracle Cloud. https://blogs.oracle.com/developers/post/back-to-the-database-part-3-publishing-database-changes-to-a-stream Has anyone had success using the DBMS_CLOUD package on 19c to work with streaming / queueing services on Azure / AWS? In the documentation (https://docs.oracle.com/en/database/oracle/oracle-database/19/arpls/DBMS_CLOUD.html), I see that it can interact with an object store, but unlike the documentation for Autonomous Database there is no mention of REST API capabilities. Is that an undocumented feature for 19c? I am looking for some clarity before embarking on a proof of concept. Thank you.
Categories: DBA Blogs

Materialize Hint used in almost all queries

Mon, 2023-01-30 11:26
Hello, thanks to this forum a lot of my questions have been answered. I noticed that most of the queries in an application I am working on use the MATERIALIZE hint in their subqueries. Our queries heavily use CTEs and rely on multiple joins, and our query performance is not great. In OEM I noticed all these queries have high user I/O, and I am guessing it's because all the queries against tables of hundreds of millions of rows are materialized. Is that true? Is it better to keep the hint there rather than remove it? When should I REALLY be using the MATERIALIZE hint? The queries also use "where column1 = CASE WHEN this THEN 1 WHEN THAT THEN 2 WHEN OTHER_THAT THEN 3" and SELECT NVL(this, NVL(that,99)), column, column FROM table;. Does the above impact performance? If so, how can I improve it? I also noticed use of the "use_concat materialize" hint combination. For some of the queries, when I execute them without "use_concat" they are almost 300% faster. Under what conditions should this hint be used? What would help with performance?
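For readers unfamiliar with the hint, this is the shape the question describes (table and column names are made up): the hint asks the optimizer to write the CTE result to a temporary table instead of merging or inlining it, which is mainly useful when the CTE is referenced more than once.
<code>
WITH big_cte AS (
  SELECT /*+ MATERIALIZE */ t.id, t.amount
  FROM   some_large_table t
  WHERE  t.status = 'OPEN'
)
SELECT b1.id, b1.amount
FROM   big_cte b1
JOIN   big_cte b2 ON b2.id = b1.id;   -- CTE read twice, computed once
</code>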
Categories: DBA Blogs

Different number of where condition function call between direct and indirect calls

Fri, 2023-01-20 13:06
For code and test output, please see the Live SQL link: https://livesql.oracle.com/apex/livesql/s/onh4jsczpzaa2bp9t0r9yfkxh When called with exec test_proc(1000, 1, 5), the output shows 500 vs. 77. Why is InDirect Count = 77 calls? <code>
Statement processed.
---------- Compare test_proc(1000, 1, 5) Function-Calls -------------
Direct Count = 500
InDirect Count = 77
</code> In the following calls, we can see that between test_proc(183, 0, 5) and test_proc(184, 0, 5) the InDirect Count changed from 50 to 51, but we can also see that between test_proc(186, 1, 5) and test_proc(187, 1, 5) the InDirect Count changed from 50 to 51. Why such a difference in call counts?
Categories: DBA Blogs

Recursion - Why is anchor value not repeated

Thu, 2023-01-19 18:46
In the <b>Hierarchical Queries: Databases for Developers</b> first recursion example, we run the query for the initial case in the recursive loop: <code> select employee_id, first_name, last_name, manager_id from employees where manager_id is null </code> before a union all: <code>union all</code> and then the part that repeats until it hits its termination condition: <code> select e.employee_id, e.first_name, e.last_name, e.manager_id from org_chart oc join employees e on e.manager_id = oc.employee_id ) </code> My question is: <b>Why is the initial anchor value not repeated?</b> Is it not searching for cases where manager_id is null on every pass? I would have expected the root to be repeated between every other entry of the data.
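Putting the two fragments from the course back together may make the behaviour easier to see: the anchor branch runs exactly once to seed the result, and each subsequent iteration reads only the rows the previous iteration produced, not the anchor again.
<code>
WITH org_chart (employee_id, first_name, last_name, manager_id) AS (
  -- anchor: evaluated once
  SELECT employee_id, first_name, last_name, manager_id
  FROM   employees
  WHERE  manager_id IS NULL
  UNION ALL
  -- recursive branch: joins only to the previous iteration's rows
  SELECT e.employee_id, e.first_name, e.last_name, e.manager_id
  FROM   org_chart oc
  JOIN   employees e ON e.manager_id = oc.employee_id
)
SELECT * FROM org_chart;
</code>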
Categories: DBA Blogs

Write a SQL Query to Calculate New Salaries

Thu, 2023-01-19 00:46
Assume that the company you work in wants to increase the salaries by 20% this year, and also wants to add 1000 as bonus to each salary after 20% increase. Based on this information, write a SQL query that returns the new salaries from the EMPLOYEES table. Your query should return the employee_id, salary and new_salary columns. (new_salary should be the alias for the calculated salary)
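A sketch of the expected answer: the arithmetic is salary * 1.2 + 1000 (the raise first, then the bonus), aliased as requested.
<code>
SELECT employee_id,
       salary,
       salary * 1.2 + 1000 AS new_salary
FROM   employees;
</code>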
Categories: DBA Blogs

Rebuilding text index

Thu, 2023-01-19 00:46
Hello. I created a text index: <code>CREATE INDEX IDXC_SEARCH_FULL_NAME ON T_SEARCH (main.name_full) INDEXTYPE IS CTXSYS.context PARAMETERS ('datastore main.name_search_ds lexer main.name_search_lexer wordlist main.name_search_wl section group main.name_search_sg stoplist CTXSYS.empty_stoplist storage main.name_store sync (ON COMMIT)')</code> After that, <code>SELECT dbms_metadata.get_ddl(object_type => 'INDEX', NAME => 'IDXC_SEARCH_FULL_NAME', SCHEMA => 'MAIN') FROM dual;</code> returns the correct PARAMETERS string, and it is also shown in <code>SELECT parameters FROM dba_indexes WHERE index_name = 'IDXC_SEARCH_FULL_NAME';</code> Then, when I do <code>ALTER INDEX IDXC_SEARCH_FULL_NAME REBUILD</code>, the parameters disappear: <code>SELECT parameters FROM dba_indexes WHERE index_name = 'IDXC_SEARCH_FULL_NAME';</code> returns NULL. Why is that? Should I add PARAMETERS with REPLACE when rebuilding, even if I don't change the parameters?
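One hedged way to carry the settings through a rebuild is to restate them explicitly with REPLACE, reusing the preference names from the original CREATE:
<code>
ALTER INDEX IDXC_SEARCH_FULL_NAME REBUILD
PARAMETERS ('REPLACE
             datastore main.name_search_ds
             lexer main.name_search_lexer
             wordlist main.name_search_wl
             section group main.name_search_sg
             stoplist CTXSYS.empty_stoplist
             storage main.name_store
             sync (ON COMMIT)');
</code>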
Categories: DBA Blogs

Fast refreshable mviews with JOIN operators - using PK?

Thu, 2023-01-19 00:46
I have a question about fast refreshable materialized views. At the moment, the issue is that I cannot do any maintenance on the mview base tables (i.e. the tables that have an mview log defined on them). Our materialized views join multiple tables together, and because of that the mview log must be created with ROWID (not possible via PK). Now, the problem that ROWID poses: whenever I move base table rows (whatever method I use, table redefinition perhaps), my mviews are no longer fast refreshable. Since a complete refresh takes more time than simply dropping and recreating, I have used the drop/recreate approach, but that means downtime, since the mview objects are missing from the data dictionary for a certain amount of time. I am curious: will 23c bring any materialized view improvements compared to 19c? Specifically, fast refresh using PK when the mview contains joins. At the moment, fast refresh is only possible with ROWIDs when the mview contains joins. If no such improvement has been made yet, what is the outlook; is there hope that it will be done someday? From a technical point of view, is it really that difficult to implement? PK vs ROWID: yes, they are different things, but they share the same purpose; with a PK you only pay the extra index access cost compared to ROWID. Raul
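For context, this is the documented 19c requirement the question refers to: a join-only mview is fast refreshable with WITH ROWID logs and the ROWID of every base table in the select list. A minimal sketch with hypothetical tables t1 and t2:
<code>
CREATE MATERIALIZED VIEW LOG ON t1 WITH ROWID;
CREATE MATERIALIZED VIEW LOG ON t2 WITH ROWID;

CREATE MATERIALIZED VIEW mv_join
REFRESH FAST ON DEMAND AS
SELECT t1.ROWID AS t1_rid,   -- each base table's ROWID must be selected
       t2.ROWID AS t2_rid,
       t1.a, t2.b
FROM   t1, t2
WHERE  t2.t1_id = t1.id;
</code>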
Categories: DBA Blogs

Bulk DELETE in chunks

Thu, 2023-01-19 00:46
Hi, we have a requirement to delete rows from a large table in chunks of, say, 100, commit, then proceed with the next 100. Without looping, the delete I have written is below: <code>delete from large_table where code=888 and id in (select id from small_table OFFSET 100 ROWS FETCH NEXT 100 ROWS ONLY);</code> Is this the right approach? If not, how should a single delete statement be written? Thanks, Girish
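One thing worth noting: OFFSET will not behave as hoped here, because once a pass deletes its rows the offset starts skipping rows that should be next. A hedged sketch of the usual chunked-delete loop (table and column names taken from the question) caps each pass with ROWNUM and repeats until nothing is left:
<code>
BEGIN
  LOOP
    DELETE FROM large_table
    WHERE  code = 888
    AND    id IN (SELECT id FROM small_table)
    AND    ROWNUM <= 100;          -- at most 100 rows per pass
    EXIT WHEN SQL%ROWCOUNT = 0;
    COMMIT;                        -- commit each chunk
  END LOOP;
  COMMIT;
END;
/
</code>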
Categories: DBA Blogs

Best practice to delete a large volume of data

Thu, 2023-01-19 00:46
Hi, I have a use case where my table contains ~19 million records and performance is now slow. The table contains old data that I can easily remove. I have two possibilities: either prepare delete queries with a limit, or partition the table (say, monthly) and then drop the old partitions. As far as I know, with delete queries the space will not be reclaimed automatically, but with partition dropping it will. Kindly suggest the better way for this scenario.
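A hedged sketch of the partition route, with hypothetical names: monthly interval partitioning lets old data go as a near-instant metadata operation, and dropping a partition releases its space, unlike a row-by-row DELETE.
<code>
CREATE TABLE events_p (
  id         NUMBER,
  created_at DATE NOT NULL
)
PARTITION BY RANGE (created_at)
INTERVAL (NUMTOYMINTERVAL(1, 'MONTH'))
(PARTITION p0 VALUES LESS THAN (DATE '2022-01-01'));

-- later: drop the partition holding a given month
ALTER TABLE events_p DROP PARTITION FOR (DATE '2022-01-15') UPDATE GLOBAL INDEXES;
</code>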
Categories: DBA Blogs

Datapump Export - Table Partition

Thu, 2023-01-19 00:46
Hey guys, we are attempting to export selected partitions of a table and would like to filter the records for each partition. Could you please explain how we can achieve this using one parameter file? Below is an example of the parameter file we are using. For example, I have a table in my schema (mytable) with 20 partitions and I want to export 2 of them (SYS_P9095 and SYS_P9096). The where clause must be different for each partition; my question is, how do I accomplish this? The query parameter in the current parameter file (listed below) applies to all exported partitions. <code>
--Contents of parameter file
dumpfile=my_dump.dmp
logfile=my_dump.log
job_name=my_exp_dump
compression=all
parallel=4
cluster=N
tables='ORAGUY.MYTABLE:SYS_P9095', 'ORAGUY.MYTABLE:SYS_P9096'
query='ORAGUY.MYTABLE:"WHERE SELECT_FLAG <= 20"'
</code>
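Since QUERY is scoped to a table rather than a partition, one hedged workaround is to run one export per partition, each with its own parameter file and filter. A sketch of the first file (the second would name SYS_P9096 and its own WHERE clause; file names are hypothetical):
<code>
dumpfile=my_dump_p9095.dmp
logfile=my_dump_p9095.log
job_name=my_exp_p9095
compression=all
cluster=N
tables='ORAGUY.MYTABLE:SYS_P9095'
query='ORAGUY.MYTABLE:"WHERE SELECT_FLAG <= 20"'
</code>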
Categories: DBA Blogs
