Feed aggregator

Many BLOB fields in table

Tom Kyte - 2 hours 5 min ago
Hi Tom, Is it advisable to have many BLOB columns in a table? (We are planning to have 20.) Each BLOB column will store a small amount of data per row, and the table may have 1.5 million rows. What are the disadvantages (if any) of having many BLOB columns in a table?
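One data point worth checking when weighing many small BLOB columns: with the default ENABLE STORAGE IN ROW, a LOB of up to about 3,964 bytes is stored inline in the row itself, so 20 small BLOBs largely behave like 20 extra columns, and the question becomes ordinary row-length arithmetic. A rough sketch with assumed, hypothetical sizes (the 500-byte average and 36-byte locator overhead are placeholders, not measured values):

```python
# Back-of-the-envelope sizing for 20 small inline BLOB columns.
# Assumptions (hypothetical): each BLOB averages 500 bytes; inline storage
# adds roughly 36 bytes of LOB locator overhead per column; 8 KB blocks.
INLINE_LIMIT = 3964      # documented max size for inline (in-row) LOB storage
BLOCK_SIZE = 8192

n_blobs = 20
avg_blob_bytes = 500
locator_overhead = 36

row_lob_bytes = n_blobs * (avg_blob_bytes + locator_overhead)
print("LOB bytes per row:", row_lob_bytes)
print("Row spans multiple blocks:", row_lob_bytes > BLOCK_SIZE)

# Table-level estimate for 1.5 million rows, LOB data only:
total_gb = 1_500_000 * row_lob_bytes / 1024**3
print(f"Approx LOB data: {total_gb:.1f} GB")
```

Substitute measured per-column lengths (e.g. from DBMS_LOB.GETLENGTH) before drawing conclusions; once the combined row length exceeds the block size, row chaining becomes the main cost.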
Categories: DBA Blogs

Identifying candidate tables that can be shrunk to reclaim space

Tom Kyte - 2 hours 5 min ago
Is there a SQL script I can use to determine whether a table is a good candidate to be shrunk or moved to reclaim space? I'm not looking to use Segment Advisor; I want something in SQL. My ultimate goal is to automate this task in PL/SQL (without reinventing the wheel) by identifying tables, enabling/disabling row movement and any foreign keys, and reporting on the time taken and space reclaimed. Below is a test case with a CLOB, but many of my tables will have one or more foreign keys. <code>
create table t1(c1 clob) lob (c1) store as basicfile;

insert into t1 select lpad('X', 4000) from dual connect by level <= 1000;

select bytes/1024/1024 from user_segments
 where segment_name = (select segment_name from user_lobs
                        where table_name = 'T1' and column_name = 'C1');

BYTES/1024/1024
              9

alter table t1 enable row movement;

delete from t1 where rownum <= 900;

alter table t1 modify lob (c1) (shrink space);

select bytes/1024/1024 from user_segments
 where segment_name = (select segment_name from user_lobs
                        where table_name = 'T1' and column_name = 'C1');

BYTES/1024/1024
          .9375
</code>
Categories: DBA Blogs

Effect of Oracle Resource Manager

Tom Kyte - 2 hours 5 min ago
Below are three Resource Manager plan directives: <code>
begin
  DBMS_RESOURCE_MANAGER.create_plan_directive (
    plan                     => 'EQ_ALLOCATION',
    group_or_subplan         => 'STAGE',
    mgmt_p2                  => 100,
    parallel_degree_limit_p1 => 2);
  DBMS_RESOURCE_MANAGER.create_plan_directive (
    plan                     => 'EQ_ALLOCATION',
    group_or_subplan         => 'APPLICATION',
    mgmt_p1                  => 100,
    parallel_degree_limit_p1 => 6);
  DBMS_RESOURCE_MANAGER.create_plan_directive (
    plan                     => 'EQ_ALLOCATION',
    group_or_subplan         => 'OTHER_GROUPS',
    comment                  => '50% Priority for other operation on 2nd level',
    mgmt_p2                  => 50,
    parallel_degree_limit_p1 => 1);
end;
/
</code> Below is the code to associate the resource directives with schemas: <code>
BEGIN
  -- Assign users to consumer groups
  DBMS_RESOURCE_MANAGER_PRIVS.grant_switch_consumer_group(
    grantee_name   => 'USER1',
    consumer_group => 'STAGE',
    grant_option   => FALSE);
  DBMS_RESOURCE_MANAGER.set_initial_consumer_group('USER1', 'STAGE');

  DBMS_RESOURCE_MANAGER_PRIVS.grant_switch_consumer_group(
    grantee_name   => 'USER2',
    consumer_group => 'MART',
    grant_option   => FALSE);
  DBMS_RESOURCE_MANAGER.set_initial_consumer_group('USER2', 'MART');
END;
/
</code> As per the Oracle documentation, resources left unused at the mgmt_p1 level will be given to mgmt_p2 processes if needed. Questions: 1) How do I check the CPU count/SGA/PGA being used by a particular session? 2) Will Oracle Resource Manager work from the beginning of the session, or will it kick in only once CPU reaches the 100% utilization mark? I tried a POC locally: I put a max limit on parallel degree, say 5, and executed a query with a parallel(10) hint. A few executions ran with 5 threads; others ran with 10 threads, even though the consumer group showed the new plan I had implemented. 3) If an mgmt_p1 process kicks in while mgmt_p2 processes are in session at 100% CPU utilization, how will Oracle make resources available for the mgmt_p1 process? 4) What can I do to push CPU utilization to the 100% mark?
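For background on the mgmt_p1/mgmt_p2 question: multi-level CPU allocation works top-down, where each level divides only the CPU left unconsumed by higher levels. A simplified model of that rule (the group names match the plan above, but the percentages and demand figures below are hypothetical, and real Resource Manager only throttles under CPU contention):

```python
# Simplified model of Resource Manager multi-level CPU allocation.
# Level 1 percentages apply to total CPU; level 2 percentages apply only to
# whatever level-1 groups leave unused. All figures are hypothetical.
def allocate(levels, total_cpu=100.0):
    """levels: list of dicts {group: (pct, demand)} per management level."""
    remaining = total_cpu
    grants = {}
    for level in levels:
        available = remaining        # CPU left over from higher levels
        for group, (pct, demand) in level.items():
            share = available * pct / 100.0
            used = min(share, demand)  # a group cannot use more than it demands
            grants[group] = used
            remaining -= used
    return grants

# Level 1: APPLICATION entitled to 100% but only demanding 60% of CPU.
# Level 2: STAGE and OTHER_GROUPS split the leftover 40% evenly.
plan = [
    {"APPLICATION": (100, 60)},
    {"STAGE": (50, 100), "OTHER_GROUPS": (50, 100)},
]
print(allocate(plan))  # {'APPLICATION': 60.0, 'STAGE': 20.0, 'OTHER_GROUPS': 20.0}
```

The key consequence: a busy mgmt_p1 group starves lower levels, while an idle one lets its allocation trickle down, which is the behavior the questions above are probing.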
Categories: DBA Blogs

Bad dates in table like 00-000-0000

Tom Kyte - 2 hours 5 min ago
We have a table with an END_DATE column defined as <code>
Column_Name  Data_Type  Nullable  Data_Default
END_DATE     DATE       Yes       (null)
</code> During the period 24-oct-2007 07:36:04 to 23-sep-2019 11:41:41, a number (but not all) of the rows have a dump century of 10,000 for END_DATE: <code>select dump(end_date), end_date, to_char(end_date,'dd-mon-yyyy hh24:mi:ss') from rate where rate_id = 8076496;</code> <code>
DUMP(END_DATE)                       END_DATE   TO_CHAR(END_DATE,'DD-MON-YYYY HH24:MI:SS')
Typ=12 Len=7: 200,122,12,31,1,1,1    31-DEC-22  00-000-0000 00:00:00
</code> while others have the expected century, 2000: <code>
DUMP(END_DATE)                       END_DATE   TO_CHAR(END_DATE,'DD-MON-YYYY HH24:MI:SS')
Typ=12 Len=7: 120,120,12,31,1,1,1    31-DEC-20  31-DEC-2020 00:00:00
</code> The problem went away after 23-sep-2019 11:41:41, but a customer who queries for rows in the period 24-oct-2007 07:36:04 to 23-sep-2019 11:41:41 encounters errors processing the "00-000-0000 00:00:00" values. While we could fix the problematic rows with SQL, the customer wants us to tell them why the problem could occur in the first place. So the question is: how could this erroneous century get into an Oracle database?
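Context that may help frame the question: Oracle stores a DATE in seven excess-notation bytes (century+100, year-in-century+100, month, day, hour+1, minute+1, second+1). Decoding the two dumps shows the bad row literally contains year 10022, which the four-digit year mask cannot render, hence the zeros. A small decoder sketch:

```python
def decode_oracle_date(b):
    """Decode the 7 bytes shown by DUMP() for an Oracle DATE column."""
    century, year, month, day, hour, minute, second = b
    return (
        (century - 100) * 100 + (year - 100),  # full year
        month, day, hour - 1, minute - 1, second - 1,
    )

# The problematic row: century byte 200 decodes to year 10022.
print(decode_oracle_date((200, 122, 12, 31, 1, 1, 1)))  # (10022, 12, 31, 0, 0, 0)
# A healthy row: century byte 120 decodes to year 2020.
print(decode_oracle_date((120, 120, 12, 31, 1, 1, 1)))  # (2020, 12, 31, 0, 0, 0)
```

A stored year of 10022 is outside the documented DATE range, which is why it survives at the byte level but breaks TO_CHAR; how it was written in the first place is exactly the question for Tom.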
Categories: DBA Blogs

Ecoute Installation and Introduction for Transcription - Step by Step

Pakistan's First Oracle Blog - Tue, 2023-06-06 19:09
Following are step-by-step instructions on how to install the Ecoute AI tool for live transcription, which provides real-time transcripts for both microphone input and speaker output. There are many use cases for Ecoute, in job-interview training for example.



Commands used in the video to install Ecoute on Windows:

Prerequisites:

  • Chocolatey
  • Python >=3.8.0
  • An OpenAI API key
  • Windows OS (Not tested on others)
  • FFmpeg

Step 1: Install Chocolatey

Open PowerShell as administrator and run the following in the PowerShell window:

Set-ExecutionPolicy Bypass -Scope Process -Force; [System.Net.ServicePointManager]::SecurityProtocol = [System.Net.ServicePointManager]::SecurityProtocol -bor 3072; iex ((New-Object System.Net.WebClient).DownloadString('https://community.chocolatey.org/install.ps1'))

Step 2: Install FFmpeg, an audio and video converter

choco install ffmpeg

Step 3: Clone ecoute repo

git clone https://github.com/SevaSk/ecoute
cd ecoute
pip install -r requirements.txt

Step 4: Get a free OpenAI API key from https://platform.openai.com/account/api-keys

Step 5: Create a keys.py file in the ecoute folder and insert the following into that file:

OPENAI_API_KEY="API KEY"

Step 6: Run the ecoute AI

python main.py --api

Reference: https://github.com/SevaSk/ecoute
Categories: DBA Blogs

OGG Veridata Bundle Patch 12.2.1.4.220831 (PS4 BP6)

Michael Dinh - Mon, 2023-06-05 23:26

Something as simple as patching made convoluted by unclear documentation.

Ignacio from Oracle support was a great help by offering a Zoom session.

https://blogs.oracle.com/dataintegration/post/oracle-goldengate-veridata-12214220831-is-now-available

33953823 – OGG Veridata Bundle Patch 12.2.1.4.220831 (PS4 BP6) (Server+Agent)
============================================================
From README.txt

3. Verify the OUI Inventory.
============================================================

OPatch needs access to a valid OUI inventory to apply patches.

Note: ORACLE_HOME must be set (refer to section "2. Pre-Installation Instructions")
before running the commands below:

Validate the OUI inventory with the following commands:

$ opatch lsinventory -jre $ORACLE_HOME/oracle_common/jdk/jre

Note:
Make sure the JDK version you use is the certified version for your product.

If the command errors out, contact Oracle Support and work to validate
and verify the inventory setup before proceeding.

[oracle@localhost ~]$ $ORACLE_HOME/OPatch/opatch lsinventory -help

-jre
This option tells OPatch to use JRE (java) from the
specified location instead of the default location
under Oracle Home

============================================================
### NOTE:
============================================================

NO PROCESSES ARE RUNNING AND ONLY WLS AND VERIDATA HAVE BEEN INSTALLED

veridata_12.2.1.4.0
wls_infra_12.2.1.4.0

============================================================
### START PATCHING
============================================================

[oracle@localhost patch]$ pwd
/vagrant/software/patch
[oracle@localhost patch]$ ls -l
total 60584
-rwxrwxrwx. 1 vagrant vagrant 54494225 Jun 6 02:36 p28186730_1394212_Generic.zip
-rwxrwxrwx. 1 vagrant vagrant 7539013 Jun 6 02:36 p33953823_122140_Generic.zip
[oracle@localhost patch]$ unzip -qo p33953823_122140_Generic.zip; echo $?
0
[oracle@localhost patch]$ ls -l
total 60860
drwxrwxrwx. 1 vagrant vagrant 0 Sep 15 2022 33953823
-rwxrwxrwx. 1 vagrant vagrant 268073 Sep 20 2022 oracle-goldengate-veridata-release-notes_12.2.1.4.220831.pdf
-rwxrwxrwx. 1 vagrant vagrant 54494225 Jun 6 02:36 p28186730_1394212_Generic.zip
-rwxrwxrwx. 1 vagrant vagrant 7539013 Jun 6 02:36 p33953823_122140_Generic.zip
-rwxrwxrwx. 1 vagrant vagrant 11408 Sep 21 2022 README.txt
[oracle@localhost patch]$ export ORACLE_HOME=/opt/oracle/wls
[oracle@localhost patch]$ $ORACLE_HOME/OPatch/opatch version
OPatch Version: 13.9.4.2.1

OPatch succeeded.
[oracle@localhost patch]$ $ORACLE_HOME/OPatch/opatch lspatches
There are no Interim patches installed in this Oracle Home "/opt/oracle/wls".

OPatch succeeded.
[oracle@localhost patch]$ cd 33953823
[oracle@localhost 33953823]$ pwd
/vagrant/software/patch/33953823
[oracle@localhost 33953823]$

[oracle@localhost 33953823]$ $ORACLE_HOME/OPatch/opatch apply /vagrant/software/patch/33953823
Oracle Interim Patch Installer version 13.9.4.2.1
Copyright (c) 2023, Oracle Corporation. All rights reserved.


Oracle Home : /opt/oracle/wls
Central Inventory : /opt/oracle/oraInventory
from : /opt/oracle/wls/oraInst.loc
OPatch version : 13.9.4.2.1
OUI version : 13.9.4.0.0
Log file location : /opt/oracle/wls/cfgtoollogs/opatch/opatch2023-06-06_02-46-11AM_1.log


OPatch detects the Middleware Home as "/opt/oracle/wls"

Verifying environment and performing prerequisite checks…
OPatch continues with these patches: 33953823

Do you want to proceed? [y|n]
y
User Responded with: Y
All checks passed.

Please shutdown Oracle instances running out of this ORACLE_HOME on the local system.
(Oracle Home = '/opt/oracle/wls')


Is the local system ready for patching? [y|n]
y
User Responded with: Y
Backing up files…
Applying interim patch '33953823' to OH '/opt/oracle/wls'
ApplySession: Optional component(s) [ oracle.veridata.agent.core, 12.2.1.4.0 ] , [ oracle.veridata.agent.core, 12.2.1.4.0 ] not present in the Oracle Home or a higher version is found.

Patching component oracle.fmw.upgrade.veridata, 12.2.1.4.0…

Patching component oracle.veridata.web.core, 12.2.1.4.0…

Patching component oracle.veridata.web.core, 12.2.1.4.0…

Patching component oracle.rcu.veridata, 12.2.1.4.0…
Patch 33953823 successfully applied.
Log file location: /opt/oracle/wls/cfgtoollogs/opatch/opatch2023-06-06_02-46-11AM_1.log

OPatch succeeded.
[oracle@localhost 33953823]$

[oracle@localhost 33953823]$ $ORACLE_HOME/OPatch/opatch lsinventory
Oracle Interim Patch Installer version 13.9.4.2.1
Copyright (c) 2023, Oracle Corporation. All rights reserved.


Oracle Home : /opt/oracle/wls
Central Inventory : /opt/oracle/oraInventory
from : /opt/oracle/wls/oraInst.loc
OPatch version : 13.9.4.2.1
OUI version : 13.9.4.0.0
Log file location : /opt/oracle/wls/cfgtoollogs/opatch/opatch2023-06-06_02-48-44AM_1.log


OPatch detects the Middleware Home as "/opt/oracle/wls"

Lsinventory Output file location : /opt/oracle/wls/cfgtoollogs/opatch/lsinv/lsinventory2023-06-06_02-48-44AM.txt

--------------------------------------------------------------------------------
Local Machine Information::
Hostname: localhost
ARU platform id: 226
ARU platform description:: Linux x86-64


Interim patches (1) :

Patch 33953823 : applied on Tue Jun 06 02:46:42 UTC 2023
Unique Patch ID: 24943512
Patch description: "One-off"
Created on 14 Sep 2022, 19:25:44 hrs PST8PDT
Bugs fixed:
34322822, 34202990, 33979317, 34275308, 30257704, 34028901, 33885974
33750454, 33719877, 33627880, 32585259, 32133466, 28279315, 33188570
20403129, 32852132, 32286962, 32821397, 32461542, 32213540, 32322787
32717596, 32313798, 30533210, 32531882, 32258415, 32486366, 32050877
32250963, 32249623, 32113971, 30351843, 32348306, 29376272, 30608181
30771003, 30811737, 30923601, 31126930, 31152591, 31290595, 31344851
31401520, 31442050, 31568607, 30778093, 21276396, 30712451, 25418342
30392409, 30425385, 30558507



--------------------------------------------------------------------------------

OPatch succeeded.
[oracle@localhost 33953823]$

Source Code for Creating LLM on Laptop

Pakistan's First Oracle Blog - Mon, 2023-06-05 21:34

This is a collection of open-source Python code snippets that you can use to download, fine-tune, and generate content with an LLM on your laptop:

import tensorflow as tf
import numpy as np
import os
import json
import random
import time
import argparse

with open(os.path.join(args.model_path, "hyperparams.json"), "r") as f:
    hyperparams = json.load(f)

# Build the estimator (assumes `model_fn` and `args` come from other snippets).
model = tf.compat.v1.estimator.Estimator(
    model_fn=model_fn,
    model_dir=args.output_path,
    params=hyperparams,
    config=tf.compat.v1.estimator.RunConfig(
        save_checkpoints_steps=5000,
        keep_checkpoint_max=10,
        save_summary_steps=5000))

#Define the input function for the dataset

def input_fn(mode):
    dataset = tf.data.TextLineDataset(args.dataset_path)
    dataset = dataset.repeat()
    dataset = dataset.shuffle(buffer_size=10000)
    dataset = dataset.batch(args.batch_size)
    # Truncate each line to the model's context window (fixed: was "hparams").
    dataset = dataset.map(lambda x: tf.strings.substr(x, 0, hyperparams["n_ctx"]))
    # TF1-style one-shot iterator, consistent with the tf.compat.v1 usage above.
    iterator = tf.compat.v1.data.make_one_shot_iterator(dataset)
    return iterator.get_next()


#Define the training function

def train():
    for epoch in range(args.epochs):
        model.train(input_fn=lambda: input_fn(tf.estimator.ModeKeys.TRAIN))
        print(f"Epoch {epoch+1} completed.")

#Start the training

train()

#Define the command-line arguments

parser = argparse.ArgumentParser()
# The training snippets above also reference these arguments, so they are
# declared here as well (defaults are placeholders).
parser.add_argument("--model_path", type=str, required=True)
parser.add_argument("--dataset_path", type=str, required=True)
parser.add_argument("--output_path", type=str, required=True)
parser.add_argument("--batch_size", type=int, default=8)
parser.add_argument("--epochs", type=int, default=1)
parser.add_argument("--length", type=int, default=110)
parser.add_argument("--temperature", type=float, default=0.8)
args = parser.parse_args()

#Load the fine-tuned model

with open(os.path.join(args.model_path, "hyperparams.json"), "r") as f:
    hyperparams = json.load(f)

# Build a predict-mode estimator (renamed to avoid shadowing model_fn itself).
predict_model_fn = model_fn(hyperparams, tf.estimator.ModeKeys.PREDICT)
model = tf.compat.v1.estimator.Estimator(
    model_fn=predict_model_fn, model_dir=args.model_path, params=hyperparams)

#Define the generation function


def generate_text(length, temperature):
    # Assumes a `tokenizer` object is provided by another snippet.
    start_token = "<|startoftext|>"
    tokens = tokenizer.convert_tokens_to_ids([start_token])
    token_length = len(tokens)
    while token_length < length:
        prediction_input = np.array(tokens[-hyperparams["n_ctx"]:])
        output = list(model.predict(input_fn=lambda: [[prediction_input]]))[0]["logits"]
        logits = output[-1] / temperature
        logits = logits - np.max(logits)  # subtract max for numerical stability
        probs = np.exp(logits) / np.sum(np.exp(logits))
        token = np.random.choice(range(hyperparams["n_vocab"]), p=probs)
        tokens.append(token)
        token_length += 1
    # Decode once after the loop instead of on every iteration.
    output_text = tokenizer.convert_ids_to_tokens(tokens)
    output_text = "".join(output_text).replace("▁", " ")
    output_text = output_text.replace(start_token, "")
    return output_text

#Generate text

text = generate_text(args.length, args.temperature)
print(text)
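The heart of generate_text is temperature-scaled softmax sampling. A dependency-free sketch of just that step, with hypothetical logits:

```python
import math
import random

def sample_with_temperature(logits, temperature, rng=random.random):
    """Scale logits by 1/temperature, softmax them, then sample an index."""
    scaled = [l / temperature for l in logits]
    m = max(scaled)                             # subtract max for stability
    exps = [math.exp(l - m) for l in scaled]
    total = sum(exps)
    probs = [e / total for e in exps]
    r, cum = rng(), 0.0                         # inverse-CDF sampling
    for i, p in enumerate(probs):
        cum += p
        if r < cum:
            return i
    return len(probs) - 1

# Low temperature sharpens the distribution toward the arg-max logit
# (a fixed rng value is passed here to make the demo deterministic).
print(sample_with_temperature([1.0, 5.0, 2.0], temperature=0.1,
                              rng=lambda: 0.5))  # 1
```

Higher temperatures flatten the distribution and increase output diversity, which is exactly what the `--temperature` argument above controls.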
Categories: DBA Blogs

Oracle 23c Deprecated Parameters that could Affect Data Security

Pete Finnigan - Mon, 2023-06-05 10:26
Let us have a brief look at the Oracle 23c database parameters marked as deprecated that can in some respect be related to security. Here are the parameters marked as deprecated in 23c: SQL> col name for....[Read More]

Posted by Pete On 05/06/23 At 11:24 AM

Categories: Security Blogs

SQL-profile

Tom Kyte - Fri, 2023-06-02 09:46
SQL Profiles are fundamentally additional optimizer statistics at the SQL statement level, generated by the SQL Tuning Advisor to fill gaps for the Oracle optimizer. My question is: can these additional, finer optimizer statistics within profiles be shared/used by other similar SQL statements? Or does only the specific SQL statement for which the SQL profile was generated benefit?
Categories: DBA Blogs

ETL and replication

Tom Kyte - Fri, 2023-06-02 09:46
We have a system in which reference data (about 100 tables) is created and maintained by a set of SMEs in a database set aside for this purpose. On frequent (but not regular) occasions, the schema is cloned to another schema which acts as the source for distribution to other databases. The reason for the second schema is so the SMEs are not interrupted for the duration of the distribution to twenty-odd other databases/schemas. We have significant problems with the hand-coded, hand-maintained ETL process. For instance, when a new column is added to a reference table, someone must remember to add it to the ETL, and it is not infrequently forgotten. The dependency order of foreign keys can also bollix the process. This distribution is on demand and not continuous, so it doesn't appear that regular database replication would work. Do you have any suggestions? Second question: is it possible for deferred foreign keys and unique constraints to result in a non-ACID-compliant data condition? Thanks in advance. In response to your request for clarification: yes, the "ETL" is a clone of the base schema to the secondary schema. In this case there is no transformation; it is a literal clone. There was a time when there were massive transformations, and the term just stuck.
Categories: DBA Blogs

Configuring TLS for connections from Oracle database to another server

Tom Kyte - Fri, 2023-06-02 09:46
Hello, how do we configure TLS for connections from an Oracle database to another server (e.g. a mailing server)? We have an on-premises SMTP email server to which the database pushes the emails to be sent out. All internal emails are working perfectly, but external emails (e.g. to Gmail) are failing with ORA-29024: Certificate validation failure. The TLS wallet has been created on the database server, and the SMTP email server certificates (still valid) have been added to this wallet. The code has also been updated with the wallet location and wallet password. The mailing team confirmed they did not need our server certificate. At this point we are not sure what we are missing in our setup. We'd appreciate it if you could point us to documentation or a video, or advise us on how to figure it out. Thank you
Categories: DBA Blogs

Large table and small table behave differently when defining cursors

Tom Kyte - Fri, 2023-06-02 09:46
Hi Tom, I ran two tests with cursors in Oracle 19.3. In Test 1, why does print x work normally while print y fails? In Test 2, why does print y get halfway through before failing? I got background from: https://asktom.oracle.com/pls/apex/asktom.search?tag=ora-08103object-no-longer-exists <b>[Test1]</b> <code>
SQL> select count(*) from dba_objects;

  COUNT(*)
----------
     13770

create table tx as select * from dba_objects where rownum < 11;
create table ty as select * from dba_objects;

variable x refcursor
variable y refcursor

declare
begin
  open :x for select * from tx;
  open :y for select * from ty;
end;
/

drop table tx;
drop table ty;

SQL> print x

OWNER     OBJECT_NAME      ....
--------- ----------------
SYS       ICOL$
SYS       I_USER1
SYS       CON$
SYS       UNDO$
SYS       C_COBJ#
SYS       I_OBJ#
SYS       PROXY_ROLE_DATA$
SYS       I_IND1
SYS       I_CDEF2
SYS       I_OBJ5

10 rows selected.

SQL> print y
ERROR:
ORA-08103: object no longer exists

no rows selected
</code> <b>[Test2]</b> <code>
create table tx as select * from dba_objects where rownum < 11;
create table ty as select * from dba_objects;

variable x refcursor
variable y refcursor

declare
  nomeaning tx%rowtype;
begin
  open :x for select * from tx;
  open :y for select * from ty;
  fetch :y into nomeaning;
end;
/

drop table tx;
drop table ty;

SQL> print x

OWNER     OBJECT_NAME      ....
--------- ----------------
SYS       ICOL$
SYS       I_USER1
SYS       CON$
SYS       UNDO$
SYS       C_COBJ#
SYS       I_OBJ#
SYS       PROXY_ROLE_DATA$
SYS       I_IND1
SYS       I_CDEF2
SYS       I_OBJ5

10 rows selected.

SQL> print y

OWNER     OBJECT_NAME      ....
--------- ----------------
...
...
SYS       SYS_LOB0000006212C00003$$
ERROR:
ORA-08103: object no longer exists

6210 rows selected.
</code>
Categories: DBA Blogs

New DB_DEVELOPER_ROLE in Oracle 23c Introduction

Pakistan's First Oracle Blog - Thu, 2023-06-01 20:44

This video introduces the DB_DEVELOPER_ROLE in Oracle Database 23c.



Categories: DBA Blogs

Table TTL and foreign key relationships

Tom Kyte - Wed, 2023-05-31 02:46
I would like my schema to delete rows automatically after a certain number of days, so I was planning to use a statement like this: <code> ALTER TABLE MY_TABLE USING TTL 5 days </code> However, I'm unable to find specifics in the TTL documentation about how FK relationships are handled in this scenario. Will TTL automatically ignore the FK relationships when it reaches that date and expire the rows anyway? Or will TTL fail, as a delete would, because the FK relationships exist and need to be removed first?
Categories: DBA Blogs

INITIALLY DEFERRED FOREIGN KEY constraint doesn't work

Tom Kyte - Tue, 2023-05-30 08:26
Hi, I would like to create an initially deferred FK constraint as below. The problem is that the constraint cars_service_car_id_fk is validated after the DELETE statement, not at the end of the transaction (COMMIT). I don't understand this behavior... <code>
DROP TABLE cars;
CREATE TABLE cars(
  car_id INTEGER
, name   VARCHAR2(20)
, CONSTRAINT cars_pk PRIMARY KEY(car_id)
);

DROP TABLE cars_service;
CREATE TABLE cars_service(
  service_id INTEGER
, car_id     INTEGER
, CONSTRAINT cars_service_pk PRIMARY KEY(service_id)
, CONSTRAINT cars_service_car_id_fk FOREIGN KEY (car_id)
    REFERENCES cars(car_id) <b>DEFERRABLE INITIALLY DEFERRED</b>
);

INSERT INTO cars(car_id, name) VALUES(1, 'Volvo');
INSERT INTO cars_service(service_id, car_id) VALUES(1, 1);
COMMIT;

DELETE FROM cars WHERE car_id = 1;

-- And now I'm getting the error too early...
<b>DELETE FROM cars WHERE car_id = 1
Error report -
ORA-02292:</b>
</code>
Categories: DBA Blogs

Pooled timeout with 19c

Tom Kyte - Mon, 2023-05-29 14:06
I am facing a "Pooled connection request timed out" issue since I upgraded the database to 19c and moved the application to the cloud (TAS) from a VM. There is no fixed scenario in which this happens. No code changes have been made except replacing DataAccess.dll with ManagedDataAccess.dll. SGA is set to 2 GB. TAS memory is 2 GB. Could you suggest what configuration has to be set up or checked to support pooling? I am a developer and can delegate it to a DBA to apply such configuration on the database.
Categories: DBA Blogs

MY_WALLET_DIRECTORY in TNSNAMES not working as expected

Tom Kyte - Mon, 2023-05-29 14:06
Hello Chris/Connor, I am trying to use the MY_WALLET_DIRECTORY option in the TNSNAMES file in order to avoid having to specify WALLET_LOCATION in the SQLNET.ORA file. However, it fails with "ORA-01017: invalid username/password; logon denied". Tracing SQLNET (from the client side) reveals that it never attempts to access the wallet when it is specified via MY_WALLET_DIRECTORY, but it works fine when used conventionally (as WALLET_LOCATION in SQLNET.ORA). Can you please help me figure out how to get this working? MOS note <b>The Impact of the Sqlnet Settings on Database Security (sqlnet.ora Security Parameters and Wallet Location) (Doc ID 1240824.1)</b> appears to say that using MY_WALLET_DIRECTORY is supported (excerpt below): <code>
#8. Client settings.
OCI Clients are using the same sqlnet.ora and wallet settings as the database server. However, a very good hint to use, when multiple wallets should be used by the same client and using multiple TNS_ADMIN environment variables can become a real pain (one example would be when using ODP.NET applications requiring multiple SSL certificates to logon to multiple services, especially on application servers), then this could be easily solved using a tnsnames.ora syntax similar to the one below:

<tns_alias> =
  (DESCRIPTION =
    (ADDRESS = (PROTOCOL = TCPS)(HOST = <machine_name>)(PORT = <port_number>))
    (CONNECT_DATA = (SERVICE_NAME = <service_name>))
    (SECURITY = (MY_WALLET_DIRECTORY = <some_wallet_directory>)))
</code> Below is my attempt. First, to prove that the wallet is set up correctly with WALLET_LOCATION specified in SQLNET.ORA: <code>
C:\Users\JoeBlog\TNSHome\ScottWallet>copy ..\sqlnet.ora .
        1 file(s) copied.

C:\Users\JoeBlog\TNSHome\ScottWallet>set TNS_ADMIN=C:\Users\JoeBlog\TNSHome\ScottWallet

C:\Users\JoeBlog\TNSHome\ScottWallet>tnsping DB_SCOTT

TNS Ping Utility for 64-bit Windows: Version 19.0.0.0.0 - Production on 11-JUN-2021 15:02:41

Copyright (c) 1997, 2019, Oracle. All rights reserved.

Used parameter files:
C:\Users\JoeBlog\TNSHome\ScottWallet\sqlnet.ora

Used TNSNAMES adapter to resolve the alias
Attempting to contact (DESCRIPTION = (ADDRESS = (PROTOCOL = TCP)(HOST = dbscan.oracle.world)(PORT = 1521)) (CONNECT_DATA = (SERVER = DEDICATED) (SERVICE_NAME = dbserv.oracle.world.com)) (SECURITY = (MY_WALLET_DIRECTORY = C:\Users\JoeBlog\TNSHome\ScottWallet)))
OK (70 msec)

C:\Users\JoeBlog\TNSHome\ScottWallet>sqlplus /@DB_SCOTT

SQL*Plus: Release 19.0.0.0.0 - Production on Fri Jun 11 15:02:51 2021
Version 19.3.0.0.0

Copyright (c) 1982, 2019, Oracle. All rights reserved.

Last Successful login time: Fri Jun 11 2021 10:03:06 +01:00

Connected to:
Oracle Database 19c EE Extreme Perf Release 19.0.0.0.0 - Production
Version 19.10.0.0.0

SQL> show user
USER is "SCOTT"
SQL> exit
Disconnected from Oracle Database 19c EE Extreme Perf Release 19.0.0.0.0 - Production
Version 19.10.0.0.0
</code> Now, when I try to use only MY_WALLET_DI...
Categories: DBA Blogs

Windows-Based SQL*Plus Scripting Issue

Tom Kyte - Mon, 2023-05-29 14:06
I will preface this by saying I have not tested SQL*Plus scripts on Windows using script arguments in a number of years, so I can't categorically state when this stopped working on that platform. That said, the issue described does not occur on any *nix system. Additionally, this is the SQL*Plus from the Instant Client, not the Full Client. For Windows-based SQL*Plus scripting, if you use script arguments in a script such as the following (executed as "sqlplus <username>/<password>@<tns_alias> @test.sql 1"): <code> select to_number( '&1' ) from dual; </code> you get this: <code>
SQL> select to_number( '&1' )
  2  from dual;
select to_number( '&1' )
                   *
ERROR at line 1:
ORA-01722: invalid number
</code> as it does not translate '&1' into the first script argument. If you add "set define '&'" to the options setting up the SQL*Plus environment, then it runs as expected: <code>
old   1: select to_number( '&1' )
new   1: select to_number( '1' )

TO_NUMBER('1')
--------------
             1

1 row selected.
</code> My question is: why is it mandatory only on Windows to purposefully set DEFINE to its default value ('&')? I struggled with this for many weeks, as it never occurred to me that I would need to set DEFINE to its default value; it makes no sense to do so. Only after re-reading the SQL*Plus User's Guide for the thousandth time did I say "what the heck," and as soon as I did, it worked. Either this needs to be fixed or the documentation needs to address it, because in theory SQL*Plus scripts, short of shelling out with HOST, should be portable.
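For readers less familiar with DEFINE: SQL*Plus replaces &1, &2, ... with the corresponding script arguments before the statement reaches the parser. A toy model of that substitution (not the actual SQL*Plus implementation):

```python
import re

def substitute(script, args, define_char="&"):
    """Toy model of SQL*Plus DEFINE substitution: &1, &2, ... -> script args."""
    def repl(match):
        return args[int(match.group(1)) - 1]
    return re.sub(re.escape(define_char) + r"(\d+)", repl, script)

script = "select to_number( '&1' ) from dual;"
print(substitute(script, ["1"]))  # select to_number( '1' ) from dual;
# With DEFINE effectively off, no substitution happens, the literal string
# '&1' reaches to_number(), and the result is ORA-01722.
```

This also illustrates why the failure mode is an invalid-number error rather than a syntax error: the statement is perfectly valid SQL, just with the wrong literal inside it.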
Categories: DBA Blogs
