Feed aggregator

DDL Script of partition in oracle

Tom Kyte - Fri, 2020-08-07 23:06
Hi Tom, how can I get the DDL scripts of a table partition and an index partition in Oracle? Thanks, Leon.
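
A minimal sketch of one common approach uses DBMS_METADATA.GET_DDL, which emits the partition clauses as part of the table or index DDL (SCOTT, SALES, and SALES_IDX are hypothetical names):

SET LONG 1000000 PAGESIZE 0

-- Table DDL, including all partition definitions
SELECT dbms_metadata.get_ddl('TABLE', 'SALES', 'SCOTT') FROM dual;

-- Index DDL, including its partition definitions
SELECT dbms_metadata.get_ddl('INDEX', 'SALES_IDX', 'SCOTT') FROM dual;
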
Categories: DBA Blogs

Online table redefinition

Tom Kyte - Fri, 2020-08-07 23:06
The function DBMS_REDEFINITION.COPY_TABLE_DEPENDENTS copies the dependent objects of the base table onto the interim table. What is the use of it? And what will happen if I do not run this step?
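
In short, COPY_TABLE_DEPENDENTS clones the base table's dependent objects (indexes, triggers, constraints, privileges, and so on) onto the interim table so they survive the switch-over; if you skip it, the redefined table ends up without those dependents after FINISH_REDEF_TABLE and you'd have to recreate each one manually. A minimal sketch (SCOTT, EMP, and EMP_INT are hypothetical names):

DECLARE
  l_errors PLS_INTEGER;
BEGIN
  DBMS_REDEFINITION.COPY_TABLE_DEPENDENTS(
    uname            => 'SCOTT',
    orig_table       => 'EMP',
    int_table        => 'EMP_INT',
    copy_indexes     => DBMS_REDEFINITION.CONS_ORIG_PARAMS,
    copy_triggers    => TRUE,
    copy_constraints => TRUE,
    copy_privileges  => TRUE,
    ignore_errors    => FALSE,
    num_errors       => l_errors);  -- OUT: number of objects that failed to copy
  DBMS_OUTPUT.PUT_LINE('errors: ' || l_errors);
END;
/
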
Categories: DBA Blogs

Mining Statspack and AWR Repositories for Historical Trends

Pythian Group - Fri, 2020-08-07 16:43

There are various ways that we can approach the analysis of a past database performance problem. The initial steps usually differ based on the problem’s scope. Is it limited to a certain SQL/process/task, or is it perceived as a database-wide slowdown? Additionally, is it occurring at the moment, or is it an event that occurred in the past?

In case the starting scope of analysis is database-wide, mining the Automatic Workload Repository (AWR) is one of the methods we can use to obtain historical performance trends. However, not all customers have access to it, either because it requires the Diagnostics Pack license on Enterprise Edition, or because they are running Oracle Standard Edition, where it’s not present.

In such cases, we can still use Statspack as a free alternative to AWR, even though it’s not as sophisticated. One of Statspack’s shortcomings is that it doesn’t store Active Session History data, which we could otherwise use to drill down into the activity of particular sessions over time. With Statspack, we’re missing this session-level granularity.

In this post, I’m going to present a script I use to get an overview of a database’s workload dynamics by querying the Statspack repository. There’s also an AWR counterpart, as I’ll mention later in the post.

statspack_load_trends.sql

1. Script’s Properties

Let’s summarize the script’s main properties:

  1. It queries the Statspack repository directly and doesn’t create any (temporary) objects, nor relies on any pre-created Statspack reports.
  2. We can use it to analyze a Statspack repository imported from another DB, even one containing data with a different DBID than the DB hosting the repository.
  3. We can analyze a period spanning instance restart(s). The script considers only adjacent snapshot pairs having the same startup_time value. In this case, “adjacent” denotes two snapshots which belong to the same [DBID, instance number] pair, and whose SNAP_IDs are closest to each other when ordered (see the sketch after this list). That’s just to emphasize that the difference between two consecutive SNAP_IDs is not always one (think RAC with cached sequence values, an instance restart, or pressure on the shared pool).
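
To illustrate that pairing logic (a sketch only, not the actual script), adjacent snapshots sharing the same startup_time can be matched with an analytic function against stats$snapshot:

SELECT dbid, instance_number, snap_id,
       LAG(snap_id) OVER (
         PARTITION BY dbid, instance_number, startup_time
         ORDER BY snap_id) AS begin_snap_id,
       snap_time,
       LAG(snap_time) OVER (
         PARTITION BY dbid, instance_number, startup_time
         ORDER BY snap_id) AS begin_snap_time
FROM   stats$snapshot;

Rows whose begin_snap_id comes back NULL (the first snapshot after each instance startup) have no valid pair and are skipped.
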
2. Purpose

The script provides a quick high-level overview of the DB workload dynamics. It reports a combination of historical OS utilization statistics (stats$osstat), system time model statistics (stats$sys_time_model), and aggregated instance-level statistics (stats$sysstat) for a given period. Currently, it doesn’t query stats$system_event for wait event details. Several time-based metrics are presented in the form of Average Active Sessions (AAS), calculated by simply dividing the observed time-based metric by the elapsed time in the observed period.

You can download the script here, and its AWR counterpart is available here. Before running the AWR version, make sure the database has the Diagnostics Pack license. The following explanation applies to both scripts.

Note: I got the idea for this script from various AWR mining scripts in Chapter 5 (Sizing Exadata) from the “Oracle Exadata Recipes: A Problem-Solution Approach” book. Additionally, the idea to calculate and display CPU core-normalized values for some of the CPU usage statistics originates from John Beresniewicz’s AWR1page project.

3. Calculating CPU Capacity

The script considers the number of CPU cores, and not threads (in case hyperthreading is enabled), to calculate the number of CPU seconds between two Statspack snapshots. Various publications explain the reasons for this approach, but to summarize: depending on the workload type, hyperthreading can provide up to approximately 30% higher CPU throughput compared to non-hyperthreaded mode.

When the number of processes running on CPU approaches the number of CPU cores, the system might become CPU saturated/over-subscribed. At that point, its response time becomes unpredictable, and additional workload decreases the amount of useful work performed by the system.

4. Usage

The script produces a very wide output, so I recommend spooling it out to a file for easier viewing. Because Statspack tables have public synonyms, any user that has permission to select from the repository tables can run it.

Note: I’ve seen the script fail with errors like “ORA-00936: missing expression,” or “ORA-01722: invalid number” when used on databases running with cursor_sharing=FORCE. To avoid the error, I included the /*+ cursor_sharing_exact*/ hint in the script’s SELECT statement. Setting cursor_sharing=EXACT at the session-level is also a valid alternative.

SQL> spool load_trends.txt
SQL> @statspack_load_trends.sql

First, we provide the DBID and instance number we want to analyze. If we don’t provide an instance number, all of the instances for the provided DBID are considered:

Instances in this Statspack schema
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
   DB Id   |Inst Num|DB Name     |Instance    |Host
-----------|--------|------------|------------|-------------
 1558102526|       1|ORCL        |orcl1       |ol7-122-rac1
 1558102526|       2|ORCL        |orcl2       |ol7-122-rac2


Enter DBID to analyze - default "1558102526" :   /* enter DBID to analyze */
Enter instance number or "all" to analyze all instances for DBID = 1558102526 - default "all" : /* report data for a specific RAC instance or all of them */

Next, we provide a time range to analyze.

5. Sample Output

Let’s check what the output looks like. Due to its width, and to fit the blog format, it’s segmented into several sections. Also, due to some (temporary) blog formatting limitations, I recommend viewing wide code sections by clicking “Open code in new window.”

  • “CPU #Cores;#Threads”: The number of CPU cores and threads (in case of enabled hyperthreading) reported by the OS.
  • “Tot CPU Time Avail [Cores] (s)”: The number of CPU seconds available between the two analyzed snapshots, based on the number of CPU cores (e.g. 24 cores x 3,600 elapsed seconds = 86,400 CPU seconds).
Instance|Snap ID        |Begin Snap     |End Snap       |Elapsed|            CPU|     Tot CPU Time|
  Number|start-end      |Time           |Time           |   Mins|#Cores;#Threads|Avail [Cores] (s)|
--------|---------------|---------------|---------------|-------|---------------|-----------------|
       1|19195-19196    |16-APR-20 17:00|16-APR-20 18:00|  59.98|         24; 48|        86,376.00|
        |19196-19197    |16-APR-20 18:00|16-APR-20 19:00|  60.00|         24; 48|        86,400.00|
        |19197-19198    |16-APR-20 19:00|16-APR-20 20:00|  60.00|         24; 48|        86,400.00|
        |19198-19199    |16-APR-20 20:00|16-APR-20 21:00|  60.02|         24; 48|        86,424.00|
        |19199-19200    |16-APR-20 21:00|16-APR-20 22:00|  59.98|         24; 48|        86,376.00|

Note: One hour between snapshots is probably excessive, but that’s beyond the scope of this post.

Time Model Statistics: stats$sys_time_model

The next section reports the time spent in the database in the form of Average Active Sessions (AAS). For completeness, and to better understand what the figures represent, I’m including how various statistics from stats$sys_time_model are related:

                DB time = CPU + Wait time spent by foreground sessions
background elapsed time = CPU + Wait time spent by background sessions
                 DB CPU = CPU time spent by foreground sessions
    background cpu time = CPU time spent by background sessions

Considering the above, we can calculate the AAS figures as follows:

       Total AAS = ("DB time" + "background elapsed time")/elapsed_time
Total AAS on CPU = ("DB CPU" + "background cpu time")/elapsed_time

FG AAS = "DB time" / elapsed_time
BG AAS = "background elapsed time" / elapsed_time

FG AAS on CPU = "DB CPU" / elapsed_time
BG AAS on CPU = "background cpu time" / elapsed_time

Total AAS in wait = "Total AAS" - "Total AAS on CPU"

FG AAS in wait = ("DB time" - "DB CPU") / elapsed_time
BG AAS in wait = ("background elapsed time" - "background cpu time") / elapsed_time
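
As a rough illustration of how these figures can be derived (a simplified sketch, not the actual script: it assumes a single instance and DBID, ignores restarts, and assumes the usual Statspack layout where stats$sys_time_model holds cumulative values in microseconds keyed by stat_id, with stats$time_model_statname mapping stat_id to stat_name):

SELECT s.snap_id,
       ROUND(
         (tm.value - LAG(tm.value) OVER (ORDER BY s.snap_id)) / 1e6              -- delta of cumulative "DB time", in seconds
         / ((s.snap_time - LAG(s.snap_time) OVER (ORDER BY s.snap_id)) * 86400)  -- elapsed seconds between snapshots
       , 1) AS fg_aas
FROM   stats$snapshot s
       JOIN stats$sys_time_model tm
         ON  tm.snap_id         = s.snap_id
         AND tm.dbid            = s.dbid
         AND tm.instance_number = s.instance_number
       JOIN stats$time_model_statname sn
         ON  sn.stat_id = tm.stat_id
WHERE  sn.stat_name = 'DB time'
ORDER  BY s.snap_id;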

Columns reporting CPU-related figures display two values: the “usual” AAS value, and the “core-normalized Average Active Sessions” value, labeled with the acronym “NPC”. If the core-normalized value approaches (or even crosses) the value of “1”, the system could potentially be CPU saturated:

  • “AAS [FG+BG]”: The number of AAS considering foreground and background sessions.
  • “AAS [FG]”: The number of AAS considering foreground sessions only.
  • “AAS [BG]”: The number of AAS considering background sessions only.

Continuing with the CPU component of the three metrics mentioned above:

  • “AAS on CPU [FG+BG]”: The number of AAS on CPU considering foreground and background sessions, followed by core-normalized AAS on CPU (NPC).
  • “AAS on CPU [FG]”: Same as above, though only considering foreground sessions.
  • “AAS on CPU [BG]”: Same as above, though only considering background sessions.

The “AAS wait” figures follow the same principle as the above-mentioned columns, and “AAS RMAN CPU” represents AAS performing “RMAN CPU time (backup/restore)” activities.

Returning to our sample output:

      AAS|      AAS|      AAS|AAS on CPU| AAS on CPU|AAS on  |AAS on CPU|AAS on  |AAS on CPU| AAS wait| AAS wait| AAS wait|             |
  [FG+BG]|     [FG]|     [BG]|[FG+BG]   |[FG+BG] NPC|CPU [FG]|  [FG] NPC|CPU [BG]|[BG] NPC  |  [FG+BG]|     [FG]|     [BG]|AAS RMAN CPU |
---------|---------|---------|----------|-----------|--------|----------|--------|----------|---------|---------|---------|-------------|
     96.4|     94.3|      2.0|       8.5|        0.4|     8.3|       0.3|     0.2|       0.0|     87.9|     86.0|      1.9|          0.0|
     32.9|     31.6|      1.3|      10.3|        0.4|    10.1|       0.4|     0.2|       0.0|     22.5|     21.4|      1.1|          0.0|
     59.4|     58.9|      0.6|      23.3|        1.0|    23.2|       1.0|     0.1|       0.0|     36.2|     35.7|      0.5|          0.0|
     13.3|     12.9|      0.5|       5.8|        0.2|     5.7|       0.2|     0.1|       0.0|      7.5|      7.1|      0.4|          0.0|
     23.0|     22.2|      0.8|       6.0|        0.3|     5.9|       0.2|     0.1|       0.0|     17.0|     16.3|      0.7|          0.0|

The first line reports 94.3 foreground AAS, out of which only 8.3 were on CPU, and 86 were in various waits. Looking at the third line, the situation changes, as out of the 58.9 AAS, 23.3 were on CPU, and 35.7 in waits. Checking the per-CPU-core-normalized value, we see it reports 1, which means the machine might be approaching or has already crossed CPU saturation. We also see that there was no RMAN activity occurring during that time. Background processes also spent most of their time in waits, rather than on CPU.

For convenience, the script also displays the number of seconds consumed by foreground sessions, breaking them down into CPU and wait components and reporting the relative percentages. This is basically the same information we saw in the previous section, just expressed as time instead of AAS:

  DB Time (s)            DB CPU (s)                      |
[FG CPU+WAIT]  =           [FG CPU] +          [FG WAIT] |
---------------------------------------------------------|
    339,555.62 =      29,924.51  9% +     309,631.11 91% |
    113,683.70 =      36,469.52 32% +      77,214.18 68% |
    211,880.46 =      83,404.47 39% +     128,475.99 61% |
     46,325.13 =      20,692.78 45% +      25,632.35 55% |
     79,966.07 =      21,274.38 27% +      58,691.69 73% |

OS Statistics: stats$osstat

Next, OS statistics from stats$osstat are displayed. “Tot OS Load@end_snap” is the OS load recorded at the time of the end snapshot’s creation. The other four columns represent Average Active Processes (AAP), which is simply the measured time of each named statistic divided by the elapsed time in the observed period. As above, the core-normalized value is also reported here for the BUSY statistic (the sum of USER+SYS). The meaning is the same: if the value approaches 1, the system might be CPU saturated.

In our sample report, the third line reports 23.9 processes on CPU, or 1 per CPU core (that’s considering all the OS processes, not only Oracle’s). That also correlates with the “AAS on CPU [FG+BG]” figure in the third line we saw in the above snippet. Because in this particular case the machine is dedicated to one Oracle instance, it used all of the available CPU:

       Tot OS|           |AAP OS  |   AAP OS|   AAP OS|   AAP OS|
Load@end_snap|AAP OS BUSY|BUSY NPC|     USER|      SYS|   IOWAIT|
-------------|-----------|--------|---------|---------|---------|
         37.0|        9.2|     0.4|      8.2|      0.9|     12.1|
         76.8|       10.8|     0.4|      9.8|      0.9|      5.3|
          9.4|       23.9|     1.0|     22.9|      0.9|      2.3|
          4.3|        6.2|     0.3|      5.7|      0.5|      1.4|
          4.8|        6.4|     0.3|      5.6|      0.7|      4.7|

System Statistics: stats$sysstat

Finally, stats$sysstat reports various system statistics. I won’t describe their meaning because that’s beyond the scope of this post. It’s worth noting that apart from “Logons Current,” almost all other statistics are expressed in units of work per second. The only exceptions are statistics related to parallel operations. Because their usage usually pertains to “heavy-duty” DDL/DML tasks, we don’t expect to see many such operations per second. Thus, the whole snapshot interval seems a more appropriate time-frame to report the number of occurrences of such events.

    Logons|        |      User|            |SQL*Net roundtrips|SQL*Net roundtrips|   Bytes received via|     Bytes sent via|   Bytes received via|
   Current|Logons/s|   calls/s|  Executes/s|  to/from client/s|  to/from dblink/s|SQL*Net from client/s|SQL*Net to client/s|SQL*Net from dblink/s|
----------|--------|----------|------------|------------------|------------------|---------------------|-------------------|---------------------|
     556.0|     9.5|     872.9|       692.2|             723.0|           4,575.6|          1,846,238.6|        6,305,967.2|          1,177,004.3|
     527.0|    16.2|   1,008.0|       639.4|             828.5|           5,773.2|          2,462,067.1|        7,760,807.5|          1,453,024.0|
     607.0|    18.5|     738.8|       588.3|             556.1|           5,618.1|          1,986,647.1|        3,644,026.9|          1,448,627.4|
     427.0|     9.2|     873.3|       910.0|             716.4|           5,972.3|          2,691,244.6|        4,067,039.1|          1,532,389.7|
     418.0|     7.4|     719.9|       627.8|             588.5|           7,471.6|          2,564,916.7|        3,773,344.1|          1,852,806.9|

   
     Bytes sent via|Cluster wait|Session logical|      DB block|    Consistent| Consistent reads |  Physical|  Physical|Physical read|Physical write|
SQL*Net to dblink/s|      time/s|        reads/s|     changes/s|   changes/sec|undo rec applied/s|   reads/s|  writes/s|IO requests/s| IO requests/s|
-------------------|------------|---------------|--------------|--------------|------------------|----------|----------|-------------|--------------|
          576,510.8|         0.0|      339,009.8|      31,353.8|       3,062.0|           4,002.4|  47,349.4|   1,879.1|      2,621.5|         448.5|
          726,935.7|         0.0|      487,469.9|      48,874.4|         487.3|             563.9|  31,277.7|   2,127.6|      5,021.6|         526.8|
          707,648.6|         0.0|      343,665.8|      38,862.0|         379.4|             362.9|  37,057.7|     777.7|      1,949.2|         265.2|
          751,698.7|         0.0|      288,724.2|      26,163.7|         618.0|             435.8|  14,001.6|     823.3|        828.5|         274.1|
          940,096.3|         0.0|      335,631.4|      24,500.0|         198.5|             211.9|  53,625.8|     638.2|      2,451.6|         227.5|
     


    Parses|      Hard|     Parse|     Parse|      User|       User|     Redo size|          Redo|      Rollback changes|
   total/s|  parses/s|describe/s|failures/s| commits/s|rollbacks/s|       bytes/s|      writes/s|undo records applied/s|
----------|----------|----------|----------|----------|-----------|--------------|--------------|----------------------|
     142.0|     103.9|       0.0|       0.0|     158.2|        0.8|  46,951,095.1|         137.8|                   0.1|
     143.4|     100.1|       0.0|       0.3|     155.0|        0.8|  49,017,168.1|         170.3|                   1.2|
     135.9|      89.2|       0.0|       0.1|     143.3|        0.8|  11,513,858.2|         149.7|                   0.1|
     141.1|     109.8|       0.0|       0.0|     284.4|        0.8|   9,513,089.1|         226.2|                   0.1|
     123.0|      93.4|       0.0|       0.0|     175.5|        0.9|   7,462,206.6|         169.3|                   0.3|
     

     
           Queries|    DML statements|     PX oper not|PX oper downgraded|PX oper downgraded|PX oper downgraded|PX oper downgraded|PX oper downgraded	   
parallelized/Total|parallelized/Total|downgraded/Total|   to serial/Total|75 to 99 pct/Total|50 to 75 pct/Total|25 to 50 pct/Total| 1 to 25 pct/Total	   
------------------|------------------|----------------|------------------|------------------|------------------|------------------|------------------	   
           1,912.0|               3.0|         1,989.0|              38.0|               0.0|               1.0|               4.0|               1.0	   
           2,450.0|               6.0|         2,551.0|              10.0|               0.0|               0.0|               0.0|               0.0	   
           2,477.0|              13.0|         2,584.0|               9.0|               0.0|               0.0|               0.0|               0.0	   
           1,553.0|               3.0|         1,646.0|               9.0|               0.0|               0.0|               0.0|               0.0	   
           1,390.0|               2.0|         1,487.0|               8.0|               0.0|               0.0|               0.0|               0.0	   

6. Visualizing Results

When comparing a period where there was an issue to one where the database was running fine, or simply when checking for trends, it’s more convenient to plot the results. That’s an easy way to get an overview of how certain metrics changed over time, or how they compare across nodes of a RAC database.

To ease that task, the two scripts contain two ways of formatting columns: one for plotting/charting purposes, and one that displays column headings on two lines for a more user-friendly format (the one used in the descriptions above). Depending on your needs, uncomment the appropriate block of column-formatting commands in the script. You can plot the results using a third-party utility, such as Tableau, which I used to produce the graphs in the following sections.

CPU Usage Distribution and Limits Across Nodes

The graph below represents the average number of OS processes on CPU, normalized to the CPU core count, for each of the three nodes of a RAC system. As noted above, when the normalized value per core crosses the value of “1,” the host might be oversubscribed on CPU. Nodes 2 and 3 are usually below the value of 1. However, spikes in usage on node 1 might require further investigation. Also, there seems to be an imbalance in CPU usage across nodes:

[Figure: Number of OS processes on CPU, normalized to the CPU core count]

Breakdown of Foreground and Background Sessions on CPU and Wait Components, Expressed as AAS

When investigating a problematic time period, we can quickly get a high-level overview of the relation between CPU and waits experienced by foreground and background sessions:

[Figure: Average Active Sessions breakdown into foreground and background CPU and wait components]

Observing Waits After Applying a Fix to Reduce Them

After applying “a fix” on April 16th, the time spent waiting by foreground sessions decreased substantially. CPU demand also decreased.

[Figure: Foreground CPU vs. waits]

 

Comparing the Number of AAS on CPU Normalized to CPU Core Count at the OS and DB Level on a Three-Node RAC DB

The observed DB isn’t using all or even most of the CPU available on the nodes, and there’s “something else” using it at the OS level. That’s visible on the graph for node 1 on May 6th between 7 and 9 PM, where CPU usage at the OS level increased, but that was not the case for DB sessions. Additionally, because we have the normalized per-CPU-core value displayed, we can see that node 1 crosses the value of 1 quite often.

[Figure: Comparison of Average Active Sessions between OS and DB level]

Logons per Second and Number of Logged-in Sessions per Node

RAC nodes 1 and 2 already have a relatively high number of logons per second at ~5, whereas node 3 has an even higher rate at ~23. Additionally, there’s a large imbalance in the number of established sessions on node 3 compared to nodes 1 and 2. Because each node has 8 physical cores (not visible from the graphs below), the ~2500 established sessions represent a potential risk should too many of them try to become active at the same time. Overall, it seems a connection-pooling review is in order for this system.

[Figure: Session trends]

Here’s a final example from a different server, but still related to connection management: detecting a large drop in the number of established sessions and correlating it with the number of logons per second:

[Figure: Session trends]

7. Other Mining Tools

Some time ago, Maris Elsins published a post describing a set of handy scripts to mine the AWR repository. Make sure to check it out!

To conclude, a number of free mining utilities are available. These are mostly created for AWR, but some also cover Statspack. Some of them parse a pre-created set of AWR/Statspack reports; others connect directly to the database and extract/analyze data from there. The script presented in this post might not offer the same functionality as those utilities. However, for some of my use cases, it complemented them by providing a customized set of pre-calculated workload-related figures.

Categories: DBA Blogs

openssl: strange error... (at first glance)

Dietrich Schroff - Fri, 2020-08-07 14:13
Some days ago I wanted to check the certificate of an IP address. No big deal - so I did:
schroff@zerberus:~$ openssl s_client -showcerts  -connect 82.165.229.87.87:443 

140011908769088:error:2008F002:BIO routines:BIO_lookup_ex:system
lib:../crypto/bio/b_addr.c:726:Name or service not known
connect:errno=22
So I opened Google to find a solution.
But on Google I found nothing really helpful.


The answer was very easy:
Had I read the command line carefully, I would have detected my error:

THE IP ADDRESS WAS INVALID

I had written an IPv4 address with five groups of numbers instead of four...

After using a correct IPv4 address, the command worked as expected:
schroff@zerberus:~$ openssl s_client -showcerts  -connect 82.165.229.87:443 #
CONNECTED(00000003)
Can't use SSL_get_servername
depth=2 C = US, O = DigiCert Inc, OU = www.digicert.com, CN = DigiCert Global Root CA
verify return:1
depth=1 C = US, O = DigiCert Inc, OU = www.digicert.com, CN = GeoTrust RSA CA 2018
verify return:1
depth=0 C = DE, ST = Rheinland-Pfalz, L = Montabaur, O = 1&1 Mail & Media GmbH, CN = gmx.net
verify return:1
---

Total weight selection of rows

Tom Kyte - Fri, 2020-08-07 04:46
I have 1000 cubes of weight between 360 and 430 grams. I need to choose a set of cubes which will weigh 50,000 grams. Do you have any idea how to do it in SQL or PL/SQL? <code>CREATE TABLE CUBES ( ID VARCHAR2(100 CHAR), WEIGHT NUMBER(10,3) ); BEGIN FOR i IN 1..1000 LOOP INSERT INTO cubes VALUES (i, round(dbms_random.value( 360, 430), 3)); END LOOP; COMMIT; END;</code>
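
A greedy running total (a sketch, not guaranteed to hit the target exactly) gets close in a single pass; landing on exactly 50,000 grams is a subset-sum problem that generally needs a search on top of this:

SELECT id, weight, running_weight
FROM  (SELECT id, weight,
              SUM(weight) OVER (ORDER BY weight, id) AS running_weight
       FROM   cubes)
WHERE  running_weight <= 50000
ORDER  BY running_weight;
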
Categories: DBA Blogs

private database link public synonym

Tom Kyte - Fri, 2020-08-07 04:46
Tom, What I am attempting to do is create a private database link, then create a public synonym for that link. I need the link to run procedure A on the remote database, i.e. Execute immediate procedure a @remote.database. The reasoning behind the private link is a security issue dictated by the higher offices. Is this possible? Is there a better way to do it?
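
A sketch of the usual pattern (all names hypothetical). Note that the synonym points at the remote object through the link, not at the link itself, and that a private link resolves only for its owner: other users reaching the public synonym need their own link of the same name, or a public one.

-- As the link owner
CREATE DATABASE LINK remote_db
  CONNECT TO app_user IDENTIFIED BY app_password
  USING 'REMOTE_TNS_ALIAS';

-- Public synonym for the remote procedure, resolved through the link
CREATE PUBLIC SYNONYM proc_a FOR app_user.proc_a@remote_db;

BEGIN
  proc_a;   -- runs the procedure on the remote database
END;
/
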
Categories: DBA Blogs

Checklist While Troubleshooting Workload Errors in Kubernetes

Pakistan's First Oracle Blog - Fri, 2020-08-07 02:21

Following is a checklist for troubleshooting workload/application errors in Kubernetes:

1- First, check how many nodes there are.

2- Check what namespaces are present.

3- Check which namespace the faulty application is in.

4- Check which deployment the faulty app belongs to.

5- Check which replicaset (if any) is part of that deployment.

6- Check which pods are part of that replicaset.

7- Check which services are part of that namespace.

8- Check which service corresponds to the deployment where our faulty application is.

9- Make sure the label selectors from the deployment to the pod template are correct.

10- Ensure the label selectors from the service to the deployment are correct.

11- Check that any service name referenced in a deployment is correct. For example, if a webserver pod refers to a database host through an env entry in its pod template, that value must match the database's service name.

12- Check that the ports are correct in ClusterIP or NodePort services.

13- Check that the status of the pod is Running.

14- Check the logs of the pods and containers.

I hope that helps, and feel free to add any step or thought in the comments. Thanks.

Categories: DBA Blogs

grid19c_upgrade_has.rsp

Michael Dinh - Thu, 2020-08-06 21:16

This response file was generated from the GUI.

The values can be 1, 2, 4, 8, 16, 32, or 64 MB, depending on the specific disk group compatibility level.
Larger AU sizes typically provide performance advantages for data warehouse applications that use large sequential reads.

Which Is The Grow Up Factor When We Pass From AUsize Of 1 To AUsize Of 4? (Doc ID 1961116.1)

gridSetup.sh -silent -applyRU $PATCH_DIR/31305339 -responseFile ~/grid19c_upgrade_has.rsp -ignorePrereqFailure

--- grid19c_upgrade_has.rsp ---
oracle.install.responseFileVersion=/oracle/install/rspfmt_crsinstall_response_schema_v19.0.0
*** INVENTORY_LOCATION=<FILL IN PATH LOCATION>
*** ORACLE_BASE=<FILL IN PATH LOCATION>
oracle.install.option=UPGRADE
oracle.install.crs.config.scanType=LOCAL_SCAN
oracle.install.crs.config.ClusterConfiguration=STANDALONE
oracle.install.crs.config.configureAsExtendedCluster=false
oracle.install.crs.config.gpnp.configureGNS=false
oracle.install.crs.config.autoConfigureClusterNodeVIP=false
oracle.install.crs.config.gpnp.gnsOption=CREATE_NEW_GNS
oracle.install.crs.configureGIMR=false
oracle.install.asm.configureGIMRDataDG=false
oracle.install.crs.config.useIPMI=false
oracle.install.asm.diskGroup.AUSize=1
oracle.install.asm.gimrDG.AUSize=1
oracle.install.asm.configureAFD=false
oracle.install.crs.configureRHPS=false
oracle.install.crs.config.ignoreDownNodes=false
oracle.install.config.managementOption=NONE
oracle.install.config.omsPort=0
oracle.install.crs.rootconfig.executeRootScript=false

Loading CLOB data (more than 32k char) into Oracle DB through Apex

Tom Kyte - Thu, 2020-08-06 10:26
Hi, I am currently working on developing an apex web application which requires capturing CLOB data and saving it to the DB. I am using the Rich Text Editor to capture the content and noticed that only a max of 32k characters can be loaded through the page item. When I try to push more than 32k characters, no data is sent to the DB. Please suggest a way to capture data of more than 32k characters through an apex page item. Oracle DB version: 12c, Apex version: 20.1. Kindly let me know if I am missing any details. Thanks, Murugananth
Categories: DBA Blogs

Are WITH READ ONLY and WITH CHECK OPTION syntaxes used for views and tables?

Tom Kyte - Thu, 2020-08-06 10:26
Hi, I am not sure whether or not the WITH READ ONLY and WITH CHECK OPTION syntaxes are used for views and tables only. I read the Oracle Help Center page at https://docs.oracle.com/en/database/oracle/oracle-database/20/sqlrf/CREATE-VIEW.html#GUID-61D2D2B4-DACC-4C7C-89EB-7E50D9594D30. It says: WITH READ ONLY: Specify WITH READ ONLY to indicate that the table or view cannot be updated. WITH CHECK OPTION: Specify WITH CHECK OPTION to indicate that Oracle Database prohibits any changes to the table or view that would produce rows that are not included in the subquery. When used in the subquery of a DML statement, you can specify this clause in a subquery in the FROM clause but not in a subquery in the WHERE clause. I hope to receive your response. Thanks, Kris.
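
For what it's worth, both clauses belong to view syntax (CREATE VIEW, and inline views in DML statements); CREATE TABLE doesn't accept them. A quick sketch against a hypothetical emp table:

-- No DML at all is allowed through this view
CREATE VIEW emp_ro AS
  SELECT * FROM emp
  WITH READ ONLY;

-- DML is allowed only if the resulting rows stay visible through the view
CREATE VIEW emp_dept10 AS
  SELECT * FROM emp WHERE deptno = 10
  WITH CHECK OPTION;

-- Fails with ORA-01402: the new row would fall outside the view's WHERE clause
INSERT INTO emp_dept10 (empno, deptno) VALUES (9999, 20);
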
Categories: DBA Blogs

ORA-01031: insufficient privileges, cannot login as any user

Tom Kyte - Thu, 2020-08-06 10:26
I ran the below command and restarted the docker container. <code>alter system set processes = 1 scope = spfile;</code> After this I am not able to log in to the DB at all. <code> root@30b2f9030f89:/u01/app/oracle/product/11.2.0/xe/bin# sqlplus /nolog SQL*Plus: Release 11.2.0.2.0 Production on Wed Aug 5 14:59:14 2020 Copyright (c) 1982, 2011, Oracle. All rights reserved. SQL> conn / as sysdba ERROR: ORA-01031: insufficient privileges </code> I tried to resolve it with the below command and tried logging in again, but still get the same error: orapwd file=filename password=password entries=100 I also tried editing the /u01/app/oracle/product/11.2.0/xe/dbs/spfileXE.ora file to manually assign processes=100 and back to processes=1; this didn't help. Can someone guide me here please?
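
One common recovery path (a sketch; exact steps vary by setup): ORA-01031 on "conn / as sysdba" usually means OS authentication is failing, so first make sure the OS user belongs to the OSDBA group (typically dba). Once connected as SYSDBA, rebuild the parameter file with a sane processes value instead of hand-editing the binary spfile:

SQL> create pfile='/tmp/initXE.ora' from spfile;
-- edit /tmp/initXE.ora and set processes=100, then:
SQL> startup pfile='/tmp/initXE.ora'
SQL> create spfile from pfile='/tmp/initXE.ora';
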
Categories: DBA Blogs

Can we use RETURNING CLAUSE along with CURRENT OF clause in update statement.

Tom Kyte - Thu, 2020-08-06 10:26
Hi Tom, I am using the below update statement in my procedure to return a few columns that are getting updated by the update statement. <code>UPDATE DUMMY_TABLE SET DUMMY_STATUS = 'ABC' WHERE CURRENT OF DUMMY_CURSOR RETURNING DUMMY_FIELD1, DUMMY_FIELD2 BULK COLLECT INTO TAB_FIELD1, TAB_FIELD2;</code> The above code works if I am not using the CURRENT OF clause, but it gives an error when used like above... I want to know if I am making any syntax error here, or if this is not possible at all. Note: I cannot remove the CURRENT OF clause from the update statement, and I still have to return the columns that are getting updated. Thanks
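
One workaround (a sketch reusing the question's DUMMY_* names; I haven't verified the behavior on every version) is to carry the ROWID in the cursor and update by ROWID instead of CURRENT OF. The update still touches exactly one row per fetch, so a plain RETURNING ... INTO suffices and no BULK COLLECT is needed:

DECLARE
  CURSOR dummy_cursor IS
    SELECT rowid AS rid, d.*
    FROM   dummy_table d
    FOR UPDATE;
  tab_field1 dummy_table.dummy_field1%TYPE;
  tab_field2 dummy_table.dummy_field2%TYPE;
BEGIN
  FOR r IN dummy_cursor LOOP
    UPDATE dummy_table
    SET    dummy_status = 'ABC'
    WHERE  rowid = r.rid
    RETURNING dummy_field1, dummy_field2 INTO tab_field1, tab_field2;
  END LOOP;
END;
/
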
Categories: DBA Blogs

Different Ways to Access Oracle Cloud Infrastructure

Pakistan's First Oracle Blog - Thu, 2020-08-06 09:00

This is a quick rundown of the different ways you can access the ever-improving Oracle Cloud Infrastructure (OCI). Most types of Oracle Cloud Infrastructure resources have a unique, Oracle-assigned identifier called an Oracle Cloud ID (OCID).

You can access Oracle Cloud Infrastructure using the Console (a browser-based interface) or the REST API. To access the Console, you must use a supported browser. You can go to the sign-in page, where you will be prompted to enter your cloud tenant, your user name, and your password. The Oracle Cloud Infrastructure APIs are typical REST APIs that use HTTPS requests and responses.

All Oracle Cloud Infrastructure API requests must be signed for authentication purposes. All Oracle Cloud Infrastructure API requests must support HTTPS and SSL protocol TLS 1.2. Oracle Cloud Infrastructure provides a number of Software Development Kits (SDKs) and a Command Line Interface (CLI) to facilitate development of custom solutions.

Software Development Kits (SDKs)

Build and deploy apps that integrate with Oracle Cloud Infrastructure services. Each SDK provides the tools you need to develop an app, including code samples and documentation to create, test, and troubleshoot. In addition, if you want to contribute to the development of the SDKs, they are all open source and available on GitHub.

  • SDK for Java
  • SDK for Python
  • SDK for TypeScript and JavaScript
  • SDK for .NET
  • SDK for Go
  • SDK for Ruby

Command Line Interface (CLI)

The CLI provides the same core capabilities as the Oracle Cloud Infrastructure Console and provides additional commands that can extend the Console's functionality. The CLI is convenient for developers or anyone who prefers the command line to a GUI.

Categories: DBA Blogs

Case and Aggregate bug

Jonathan Lewis - Thu, 2020-08-06 06:43

The following description of a bug appeared on the Oracle Developer Community forum a little while ago – on an upgrade from 12c to 19c a query started producing wrong results on a simple call to the avg() function. In fact it turned out to be a bug introduced in 12.2.0.1.

The owner of the thread posted a couple of zip files to build a test case – but I had to make a couple of edits, and change the nls_numeric_characters to ‘,.’ in order to get past a formatting error on a call to the to_timestamp() function. I’ve stripped the example to a minimum, and translated the column names from German (which was presumably the source of the nls_numeric_characters issue) to make the bug easier to demonstrate and play with.

First the basic data – you’ll notice that I’ve tested this on 12.1.0.2, 12.2.0.1 and 19.3.0.0 to find out when the bug appeared:

rem
rem     Script:         case_aggregate_bug.sql
rem     Author:         Jonathan Lewis
rem     Dated:          Aug 2020
rem     Purpose:        
rem
rem     Last tested
rem             19.3.0.0
rem             12.2.0.1
rem             12.1.0.2
rem

create table test(
        case_col        varchar2(11), 
        duration        number(*,0), 
        quarter         varchar2(6), 
        q2h_knum_b      varchar2(10)
   )
/

insert into test values('OK',22,'1.2020','AB1234');
insert into test values('OK',39,'1.2020','AB1234');
insert into test values('OK',30,'1.2020','AB1234');
insert into test values('OK',48,'1.2020','AB1234');
commit;

execute dbms_stats.gather_table_stats(user,'test')

create or replace force view v_test
as 
select 
        q2h_knum_b,
        case 
                when b.case_col not like 'err%'
                        then b.duration 
        end     duration,
        case 
                when b.case_col not like 'err%' 
                        then 1 
                        else 0 
        end     status_ok
from
        test b
where
        substr(b.quarter, -4) = 2020
;


break on report
compute avg of duration on report
select * from v_test;

---------------------------------------------

Q2H_KNUM_B   DURATION  STATUS_OK
---------- ---------- ----------
AB1234             22          1
AB1234             39          1
AB1234             30          1
AB1234             48          1
           ----------
avg             34.75


I’ve created a table, loaded some data, gathered stats, then created a view over the table. The view includes a couple of columns that use a simple case expression, and both expressions are based in the same way on the same base column (this may, or may not, be significant in what’s coming). I’ve then run off a simple query with a couple of SQL*Plus commands to report the actual content of the view with the average of the duration column – which is 34.75.

So now we run a couple of queries against the view which aggregate the data down to a single row – including the avg() of the duration – using the coalesce() function – rather than the older nvl() function – to convert any nulls to zero.


select
        coalesce(count(duration), 0)    duration_count,
        coalesce(median(duration), 0)   duration_med,
        coalesce(avg(duration), 0)      duration_avg,
        coalesce(sum(status_ok), 0)     ok_count
from
        v_test  v1
where
        instr('AB1234', q2h_knum_b) > 0
/

---------------------------------

DURATION_COUNT DURATION_MED DURATION_AVG   OK_COUNT
-------------- ------------ ------------ ----------
             4         34.5            0          4

You’ll notice that the duration_avg is reported as zero (this would be the same if I used nvl(), and it would be null if I omitted the coalesce()). This is clearly incorrect. This was the output from 19.3; 12.2 gives the same result, while 12.1.0.2 reports the average correctly as 34.75.

There are several ways in which you can modify this query to get the right average – here’s one: just put the ok_count column first in the select list:


select
        coalesce(sum(status_ok), 0)     ok_count,
        coalesce(count(duration), 0)    duration_count,
        coalesce(median(duration), 0)   duration_med,
        coalesce(avg(duration), 0)      duration_avg
from
        v_test  v1
where
        instr('AB1234', q2h_knum_b) > 0
/

---------------------------------

  OK_COUNT DURATION_COUNT DURATION_MED DURATION_AVG
---------- -------------- ------------ ------------
         4              4         34.5        34.75


There’s no obvious reason why the error should occur, but there’s a little hint about what may be happening in the Column Projection Information from the execution plan. The basic plan is the same in both cases, so I’ll show it only once; it’s followed by two versions of the projection information (restricted to operation 1), which I’ve formatted to improve readability:

Plan hash value: 2603667166

---------------------------------------------------------------------------
| Id  | Operation          | Name | Rows  | Bytes | Cost (%CPU)| Time     |
---------------------------------------------------------------------------
|   0 | SELECT STATEMENT   |      |       |       |     2 (100)|          |
|   1 |  SORT GROUP BY     |      |     1 |    20 |            |          |
|*  2 |   TABLE ACCESS FULL| TEST |     1 |    20 |     2   (0)| 00:00:01 |
---------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------

   2 - filter((INSTR('AB1234',"Q2H_KNUM_B")>0 AND
              TO_NUMBER(SUBSTR("B"."QUARTER",(-4)))=2020))

Column Projection Information (Operation 1 only):  (Wrong result)
-----------------------------------------------------------------
PERCENTILE_CONT(.5) WITHIN GROUP ( ORDER BY CASE  WHEN "B"."CASE_COL" NOT LIKE 'err%' THEN "B"."DURATION" END)[22],
COUNT(CASE  WHEN "B"."CASE_COL" NOT LIKE 'err%' THEN "B"."DURATION" END)[22], 
SUM  (CASE  WHEN "B"."CASE_COL" NOT LIKE 'err%' THEN 1 ELSE 0 END)[22], 
SUM  (CASE  WHEN "B"."CASE_COL" NOT LIKE 'err%' THEN "B"."DURATION" END)[22]



Column Projection Information (Operation 1 only):  (Right result)
-----------------------------------------------------------------
PERCENTILE_CONT(.5) WITHIN GROUP ( ORDER BY CASE  WHEN "B"."CASE_COL" NOT LIKE 'err%' THEN "B"."DURATION" END)[22],
COUNT(CASE  WHEN "B"."CASE_COL" NOT LIKE 'err%' THEN "B"."DURATION" END)[22], 
SUM  (CASE  WHEN "B"."CASE_COL" NOT LIKE 'err%' THEN "B"."DURATION" END)[22], 
SUM  (CASE  WHEN "B"."CASE_COL" NOT LIKE 'err%' THEN 1 ELSE 0 END)[22]

As you can see, to report avg() Oracle has projected sum() and count().

When we get the right result the sum() for duration appears immediately after the count().

When we get the wrong result the sum() for ok_count comes between the count() and sum() for duration.

This makes me wonder whether Oracle is somehow just losing track of the sum() for duration and therefore dividing null by the count().

This is purely conjecture, of course, and may simply be a coincidence – particularly since 12.1.0.2 gets the right result and shows exactly the same projection information.

Readers are left to experiment with other variations to see if they can spot other suggestive patterns.

 

Re-Register ArchiveLogs and Backups

Hemant K Chitale - Thu, 2020-08-06 05:34
If you, as the DBA, or someone else, as the SysAdmin, has had to delete and restore or relocate ArchiveLogs or RMAN Backups to another mount point, you'd find that Oracle can no longer identify them.

This would also happen if you run a CREATE CONTROLFILE -- all information about RMAN Backups and ArchiveLogs that was formerly in the controlfile is "lost" because your database starts with a controlfile that has information only about DataFiles and Online RedoLogs.

How do you re-register ArchiveLogs and RMAN Backups ?

I begin this demo with 3 ArchiveLogs (Sequence#50 to #52) and one BackupSet created today.


SQL> l
1 select sequence#, name from v$archived_log
2 where dest_id=1
3 and (completion_time > sysdate-1 OR first_time > sysdate-1)
4* order by sequence#
SQL> /

SEQUENCE# NAME
---------- --------------------------------------------------
50 /opt/oracle/archivelog/ORCLCDB/1_50_1036108814.dbf
51 /opt/oracle/archivelog/ORCLCDB/1_51_1036108814.dbf
52 /opt/oracle/archivelog/ORCLCDB/1_52_1036108814.dbf

SQL>
RMAN> list archivelog all completed after "sysdate-1";

using target database control file instead of recovery catalog
List of Archived Log Copies for database with db_unique_name ORCLCDB
=====================================================================

Key Thrd Seq S Low Time
------- ---- ------- - ---------
144 1 50 A 31-JUL-20
Name: /opt/oracle/archivelog/ORCLCDB/1_50_1036108814.dbf

145 1 51 A 06-AUG-20
Name: /opt/oracle/archivelog/ORCLCDB/1_51_1036108814.dbf

146 1 52 A 06-AUG-20
Name: /opt/oracle/archivelog/ORCLCDB/1_52_1036108814.dbf


RMAN>
RMAN> list backup completed after "sysdate-1";


List of Backup Sets
===================


BS Key Type LV Size Device Type Elapsed Time Completion Time
------- ---- -- ---------- ----------- ------------ ---------------
17 Full 790.52M DISK 00:00:23 06-AUG-20
BP Key: 17 Status: AVAILABLE Compressed: NO Tag: TAG20200806T173252
Piece Name: /opt/oracle/FRA/ORCLCDB/ORCLCDB/backupset/2020_08_06/o1_mf_nnndf_TAG20200806T173252_hlqml4dv_.bkp
List of Datafiles in backup set 17
File LV Type Ckp SCN Ckp Time Abs Fuz SCN Sparse Name
---- -- ---- ---------- --------- ----------- ------ ----
1 Full 5356172 06-AUG-20 NO /opt/oracle/oradata/ORCLCDB/system01.dbf

BS Key Type LV Size Device Type Elapsed Time Completion Time
------- ---- -- ---------- ----------- ------------ ---------------
18 Full 18.02M DISK 00:00:01 06-AUG-20
BP Key: 18 Status: AVAILABLE Compressed: NO Tag: TAG20200806T173317
Piece Name: /opt/oracle/FRA/ORCLCDB/ORCLCDB/autobackup/2020_08_06/o1_mf_s_1047749597_hlqmly7z_.bkp
SPFILE Included: Modification time: 06-AUG-20
SPFILE db_unique_name: ORCLCDB
Control File Included: Ckp SCN: 5356230 Ckp time: 06-AUG-20

RMAN>



Sometime later, these get deleted and I can no longer find them.




RMAN> crosscheck archivelog all;

released channel: ORA_DISK_1
allocated channel: ORA_DISK_1
channel ORA_DISK_1: SID=64 device type=DISK
validation succeeded for archived log
archived log file name=/opt/oracle/archivelog/ORCLCDB/1_48_1036108814.dbf RECID=139 STAMP=1047211353
validation succeeded for archived log
archived log file name=/opt/oracle/archivelog/ORCLCDB/1_49_1036108814.dbf RECID=142 STAMP=1047211650
validation failed for archived log
archived log file name=/opt/oracle/archivelog/ORCLCDB/1_50_1036108814.dbf RECID=144 STAMP=1047749103
validation failed for archived log
archived log file name=/opt/oracle/archivelog/ORCLCDB/1_51_1036108814.dbf RECID=145 STAMP=1047749332
validation failed for archived log
archived log file name=/opt/oracle/archivelog/ORCLCDB/1_52_1036108814.dbf RECID=146 STAMP=1047749334
Crosschecked 5 objects


RMAN>
RMAN> crosscheck backup completed after "sysdate-1";

using channel ORA_DISK_1
crosschecked backup piece: found to be 'EXPIRED'
backup piece handle=/opt/oracle/FRA/ORCLCDB/ORCLCDB/backupset/2020_08_06/o1_mf_nnndf_TAG20200806T173252_hlqml4dv_.bkp RECID=17 STAMP=1047749572
crosschecked backup piece: found to be 'AVAILABLE'
backup piece handle=/opt/oracle/FRA/ORCLCDB/ORCLCDB/autobackup/2020_08_06/o1_mf_s_1047749597_hlqmly7z_.bkp RECID=18 STAMP=1047749598
Crosschecked 2 objects


RMAN>

oracle19c>pwd
/opt/oracle/archivelog/ORCLCDB
oracle19c>ls -ltr |tail -2
-rw-r-----. 1 oracle oinstall 6656 Jul 31 12:02 1_48_1036108814.dbf
-rw-r-----. 1 oracle oinstall 203776 Jul 31 12:07 1_49_1036108814.dbf
oracle19c>
oracle19c>pwd
/opt/oracle/FRA/ORCLCDB/ORCLCDB/backupset/2020_08_06
oracle19c>ls -ltr
total 0
oracle19c>

RMAN> list expired backup;

using target database control file instead of recovery catalog

List of Backup Sets
===================


BS Key Type LV Size Device Type Elapsed Time Completion Time
------- ---- -- ---------- ----------- ------------ ---------------
17 Full 790.52M DISK 00:00:23 06-AUG-20
BP Key: 17 Status: EXPIRED Compressed: NO Tag: TAG20200806T173252
Piece Name: /opt/oracle/FRA/ORCLCDB/ORCLCDB/backupset/2020_08_06/o1_mf_nnndf_TAG20200806T173252_hlqml4dv_.bkp
List of Datafiles in backup set 17
File LV Type Ckp SCN Ckp Time Abs Fuz SCN Sparse Name
---- -- ---- ---------- --------- ----------- ------ ----
1 Full 5356172 06-AUG-20 NO /opt/oracle/oradata/ORCLCDB/system01.dbf

RMAN> delete expired backup;

allocated channel: ORA_DISK_1
channel ORA_DISK_1: SID=269 device type=DISK

List of Backup Pieces
BP Key BS Key Pc# Cp# Status Device Type Piece Name
------- ------- --- --- ----------- ----------- ----------
17 17 1 1 EXPIRED DISK /opt/oracle/FRA/ORCLCDB/ORCLCDB/backupset/2020_08_06/o1_mf_nnndf_TAG20200806T173252_hlqml4dv_.bkp

Do you really want to delete the above objects (enter YES or NO)? YES
deleted backup piece
backup piece handle=/opt/oracle/FRA/ORCLCDB/ORCLCDB/backupset/2020_08_06/o1_mf_nnndf_TAG20200806T173252_hlqml4dv_.bkp RECID=17 STAMP=1047749572
Deleted 1 EXPIRED objects


RMAN>



So, ArchiveLogs from Sequence#50 to Sequence#52 have been deleted and so has today's backupset (although the controlfile autobackup is still present).

I ask the SysAdmin to restore the missing files. He restores them to a different mount point -- under /NEWFS.
I then re-register them.



oracle19c>pwd
/NEWFS/archivelog/ORCLCDB
oracle19c>ls -l
total 4432
-rw-r-----. 1 oracle oinstall 1720832 Aug 6 17:25 1_50_1036108814.dbf
-rw-r-----. 1 oracle oinstall 2808320 Aug 6 17:28 1_51_1036108814.dbf
-rw-r-----. 1 oracle oinstall 1536 Aug 6 17:28 1_52_1036108814.dbf
oracle19c>

oracle19c>pwd
/NEWFS/FRA/ORCLCDB/ORCLCDB/backupset/2020_08_06
oracle19c>ls -l
total 809504
-rw-r-----. 1 oracle oinstall 828932096 Aug 6 17:33 o1_mf_nnndf_TAG20200806T173252_hlqml4dv_.bkp
oracle19c>

SQL> alter database register physical logfile '/NEWFS/archivelog/ORCLCDB/1_50_1036108814.dbf';

Database altered.

SQL> alter database register physical logfile '/NEWFS/archivelog/ORCLCDB/1_51_1036108814.dbf';

Database altered.

SQL> alter database register physical logfile '/NEWFS/archivelog/ORCLCDB/1_52_1036108814.dbf';

Database altered.

SQL>
RMAN> list archivelog all completed after "sysdate-1";

using target database control file instead of recovery catalog
List of Archived Log Copies for database with db_unique_name ORCLCDB
=====================================================================

Key Thrd Seq S Low Time
------- ---- ------- - ---------
148 1 50 A 31-JUL-20
Name: /NEWFS/archivelog/ORCLCDB/1_50_1036108814.dbf

144 1 50 X 31-JUL-20
Name: /opt/oracle/archivelog/ORCLCDB/1_50_1036108814.dbf

149 1 51 A 06-AUG-20
Name: /NEWFS/archivelog/ORCLCDB/1_51_1036108814.dbf

145 1 51 X 06-AUG-20
Name: /opt/oracle/archivelog/ORCLCDB/1_51_1036108814.dbf

150 1 52 A 06-AUG-20
Name: /NEWFS/archivelog/ORCLCDB/1_52_1036108814.dbf

146 1 52 X 06-AUG-20
Name: /opt/oracle/archivelog/ORCLCDB/1_52_1036108814.dbf

147 1 53 A 06-AUG-20
Name: /opt/oracle/archivelog/ORCLCDB/1_53_1036108814.dbf


RMAN>
RMAN> crosscheck archivelog all completed after "sysdate-1";

allocated channel: ORA_DISK_1
channel ORA_DISK_1: SID=285 device type=DISK
validation succeeded for archived log
archived log file name=/NEWFS/archivelog/ORCLCDB/1_50_1036108814.dbf RECID=148 STAMP=1047752869
validation failed for archived log
archived log file name=/opt/oracle/archivelog/ORCLCDB/1_50_1036108814.dbf RECID=144 STAMP=1047749103
validation succeeded for archived log
archived log file name=/NEWFS/archivelog/ORCLCDB/1_51_1036108814.dbf RECID=149 STAMP=1047752894
validation failed for archived log
archived log file name=/opt/oracle/archivelog/ORCLCDB/1_51_1036108814.dbf RECID=145 STAMP=1047749332
validation succeeded for archived log
archived log file name=/NEWFS/archivelog/ORCLCDB/1_52_1036108814.dbf RECID=150 STAMP=1047752905
validation failed for archived log
archived log file name=/opt/oracle/archivelog/ORCLCDB/1_52_1036108814.dbf RECID=146 STAMP=1047749334
validation succeeded for archived log
archived log file name=/opt/oracle/archivelog/ORCLCDB/1_53_1036108814.dbf RECID=147 STAMP=1047751145
Crosschecked 7 objects


RMAN> delete expired archivelog all;

released channel: ORA_DISK_1
allocated channel: ORA_DISK_1
channel ORA_DISK_1: SID=285 device type=DISK
List of Archived Log Copies for database with db_unique_name ORCLCDB
=====================================================================

Key Thrd Seq S Low Time
------- ---- ------- - ---------
144 1 50 X 31-JUL-20
Name: /opt/oracle/archivelog/ORCLCDB/1_50_1036108814.dbf

145 1 51 X 06-AUG-20
Name: /opt/oracle/archivelog/ORCLCDB/1_51_1036108814.dbf

146 1 52 X 06-AUG-20
Name: /opt/oracle/archivelog/ORCLCDB/1_52_1036108814.dbf


Do you really want to delete the above objects (enter YES or NO)? YES
deleted archived log
archived log file name=/opt/oracle/archivelog/ORCLCDB/1_50_1036108814.dbf RECID=144 STAMP=1047749103
deleted archived log
archived log file name=/opt/oracle/archivelog/ORCLCDB/1_51_1036108814.dbf RECID=145 STAMP=1047749332
deleted archived log
archived log file name=/opt/oracle/archivelog/ORCLCDB/1_52_1036108814.dbf RECID=146 STAMP=1047749334
Deleted 3 EXPIRED objects


RMAN>
RMAN> list archivelog all completed after "sysdate-1";

List of Archived Log Copies for database with db_unique_name ORCLCDB
=====================================================================

Key Thrd Seq S Low Time
------- ---- ------- - ---------
148 1 50 A 31-JUL-20
Name: /NEWFS/archivelog/ORCLCDB/1_50_1036108814.dbf

149 1 51 A 06-AUG-20
Name: /NEWFS/archivelog/ORCLCDB/1_51_1036108814.dbf

150 1 52 A 06-AUG-20
Name: /NEWFS/archivelog/ORCLCDB/1_52_1036108814.dbf

147 1 53 A 06-AUG-20
Name: /opt/oracle/archivelog/ORCLCDB/1_53_1036108814.dbf


RMAN>
--- note that ArchiveLog 53 is a new one that has been generated recently, in the default location

RMAN> catalog start with '/NEWFS/FRA';

searching for all files that match the pattern /NEWFS/FRA

List of Files Unknown to the Database
=====================================
File Name: /NEWFS/FRA/ORCLCDB/ORCLCDB/backupset/2020_08_06/o1_mf_nnndf_TAG20200806T173252_hlqml4dv_.bkp

Do you really want to catalog the above files (enter YES or NO)? YES
cataloging files...
cataloging done

List of Cataloged Files
=======================
File Name: /NEWFS/FRA/ORCLCDB/ORCLCDB/backupset/2020_08_06/o1_mf_nnndf_TAG20200806T173252_hlqml4dv_.bkp

RMAN> list backup completed after "sysdate-1";


List of Backup Sets
===================


BS Key Type LV Size Device Type Elapsed Time Completion Time
------- ---- -- ---------- ----------- ------------ ---------------
18 Full 18.02M DISK 00:00:01 06-AUG-20
BP Key: 18 Status: AVAILABLE Compressed: NO Tag: TAG20200806T173317
Piece Name: /opt/oracle/FRA/ORCLCDB/ORCLCDB/autobackup/2020_08_06/o1_mf_s_1047749597_hlqmly7z_.bkp
SPFILE Included: Modification time: 06-AUG-20
SPFILE db_unique_name: ORCLCDB
Control File Included: Ckp SCN: 5356230 Ckp time: 06-AUG-20

BS Key Type LV Size Device Type Elapsed Time Completion Time
------- ---- -- ---------- ----------- ------------ ---------------
19 Full 790.52M DISK 00:00:23 06-AUG-20
BP Key: 19 Status: AVAILABLE Compressed: NO Tag: TAG20200806T173252
Piece Name: /NEWFS/FRA/ORCLCDB/ORCLCDB/backupset/2020_08_06/o1_mf_nnndf_TAG20200806T173252_hlqml4dv_.bkp
List of Datafiles in backup set 19
File LV Type Ckp SCN Ckp Time Abs Fuz SCN Sparse Name
---- -- ---- ---------- --------- ----------- ------ ----
1 Full 5356172 06-AUG-20 NO /opt/oracle/oradata/ORCLCDB/system01.dbf

RMAN> crosscheck backup completed after "sysdate-1";

using channel ORA_DISK_1
crosschecked backup piece: found to be 'AVAILABLE'
backup piece handle=/opt/oracle/FRA/ORCLCDB/ORCLCDB/autobackup/2020_08_06/o1_mf_s_1047749597_hlqmly7z_.bkp RECID=18 STAMP=1047749598
crosschecked backup piece: found to be 'AVAILABLE'
backup piece handle=/NEWFS/FRA/ORCLCDB/ORCLCDB/backupset/2020_08_06/o1_mf_nnndf_TAG20200806T173252_hlqml4dv_.bkp RECID=19 STAMP=1047753112
Crosschecked 2 objects


RMAN>



To re-register ArchiveLogs, I used the SQL command "ALTER DATABASE REGISTER PHYSICAL LOGFILE". RMAN is then able to identify these ArchiveLogs as well. I then used CROSSCHECK and DELETE EXPIRED in RMAN to remove the information about the old location of the same ArchiveLogs.

To re-register RMAN Backups, I used the RMAN command "CATALOG START WITH", and then used "CROSSCHECK" to confirm that the BackupPiece(s) is/are available.




Categories: DBA Blogs

Oracle 11g on AWS RDS Will Be Force Upgraded in Coming Months

Pakistan's First Oracle Blog - Thu, 2020-08-06 00:51
To make a long story short: if you have Oracle 11g running on AWS RDS, then start thinking about, planning, and implementing its upgrade to a later version, preferably Oracle 19c.

This is what AWS has to say about this:

Oracle has announced the end date of support for Oracle Database version 11.2.0.4 as December 31, 2020, after which Oracle Support will no longer release Critical Patch Updates for this database version. Amazon RDS for Oracle will end support for Oracle Database version 11.2.0.4 Standard Edition 1 (SE1) for License Included (LI) model on October 31, 2020. For the Bring Your Own License (BYOL) model, Amazon RDS for Oracle will end the support for Oracle Database version 11.2.0.4 for all editions on December 31, 2020. All 11.2.0.4 SE1 LI instances will be automatically upgraded to 19c starting on November 1, 2020. Likewise, the 11.2.0.4 BYOL instances will be automatically upgraded to 19c starting on January 1, 2021. We highly recommend you upgrade your existing Amazon RDS for Oracle 11.2.0.4 DB instances and validate your applications before the automatic upgrades begin. 

The bit which probably would apply to most of enterprise customers who are running Oracle 11g with BYOL license is this:

January 1, 2021: Amazon RDS for Oracle starts automatic upgrades of DB instances restored from snapshots to 19c.
Instead of leaving it to the last minute, it's better to upgrade sooner. There are lots of things which need to be taken into consideration for this upgrade, within and outside of the database. If you need a hand with that, feel free to reach out.
Categories: DBA Blogs

Configure a MySQL Marketplace service for the new Tanzu Application Service on Kubernetes using Container Services Manager for VMware Tanzu

Pas Apicella - Thu, 2020-08-06 00:35
The following post shows how to configure a MySQL service in the new Tanzu Application Service (beta version 0.3.0). For instructions on how to install the Container Services Manager for VMware Tanzu (KSM), see the post below.

http://www.clue2solve.io/tanzu/2020/07/14/install-ksm-and-configure-the-cf-marketplace.html
Steps
It's assumed you have already installed KSM into your Kubernetes cluster, as shown below. If not, please refer to the documentation to get this done first.


$ kubectl get all -n ksm
NAME READY STATUS RESTARTS AGE
pod/ksm-chartmuseum-78d5d5bfb-2ggdg 1/1 Running 0 15d
pod/ksm-ksm-broker-6db696894c-blvpp 1/1 Running 0 15d
pod/ksm-ksm-broker-6db696894c-mnshg 1/1 Running 0 15d
pod/ksm-ksm-daemon-587b6fd549-cc7sv 1/1 Running 1 15d
pod/ksm-ksm-daemon-587b6fd549-fgqx5 1/1 Running 1 15d
pod/ksm-postgresql-0 1/1 Running 0 15d

NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
service/ksm-chartmuseum ClusterIP 10.100.200.107 <none> 8080/TCP 15d
service/ksm-ksm-broker LoadBalancer 10.100.200.229 10.195.93.188 80:30086/TCP 15d
service/ksm-ksm-daemon LoadBalancer 10.100.200.222 10.195.93.179 80:31410/TCP 15d
service/ksm-postgresql ClusterIP 10.100.200.213 <none> 5432/TCP 15d
service/ksm-postgresql-headless ClusterIP None <none> 5432/TCP 15d

NAME READY UP-TO-DATE AVAILABLE AGE
deployment.apps/ksm-chartmuseum 1/1 1 1 15d
deployment.apps/ksm-ksm-broker 2/2 2 2 15d
deployment.apps/ksm-ksm-daemon 2/2 2 2 15d

NAME DESIRED CURRENT READY AGE
replicaset.apps/ksm-chartmuseum-78d5d5bfb 1 1 1 15d
replicaset.apps/ksm-ksm-broker-6db696894c 2 2 2 15d
replicaset.apps/ksm-ksm-broker-8645dfcf98 0 0 0 15d
replicaset.apps/ksm-ksm-daemon-587b6fd549 2 2 2 15d

NAME READY AGE
statefulset.apps/ksm-postgresql 1/1 15d

1. Let's start by getting the broker IP address which, when the service is installed with a LoadBalancer type, can be retrieved as shown below:

$ kubectl get service ksm-ksm-broker -n ksm -o=jsonpath='{@.status.loadBalancer.ingress[0].ip}'
10.195.93.188

2. Upgrade your Helm release by running the following, using the IP address from above:

$ export BROKER_IP=$(kubectl get service ksm-ksm-broker -n ksm -o=jsonpath='{@.status.loadBalancer.ingress[0].ip}')
$ helm upgrade ksm ./ksm -n ksm --reuse-values \
            --set cf.brokerUrl="http://$BROKER_IP" \
            --set cf.brokerName=KSM \
            --set cf.apiAddress="https://api.system.run.haas-210.pez.pivotal.io" \
            --set cf.username="admin" \
            --set cf.password="admin-password"
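
As a quick sanity check (my addition, not part of the original steps), confirm the upgrade bumped the release revision:

$ helm history ksm -n ksm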

3. Next, we configure the ksm CLI. You can download the CLI from here.

configure-ksm-cli.sh

# The ksm CLI reads its target endpoint and credentials from these variables.
export KSM_IP=$(kubectl get service ksm-ksm-daemon -n ksm -o=jsonpath='{@.status.loadBalancer.ingress[0].ip}')
export KSM_TARGET=http://$KSM_IP:$(kubectl get svc ksm-ksm-daemon -n ksm -o=jsonpath='{@.spec.ports[0].port}')
export KSM_USER=admin
# The daemon's admin password is stored in the ksm-ksm-daemon secret.
export KSM_PASSWORD=$(kubectl get secret -n ksm ksm-ksm-daemon -o=jsonpath='{@.data.SECURITY_USER_PASSWORD}' | base64 --decode)

4. Verify the ksm CLI is configured correctly:

$ ksm version
Client Version [0.10.80]
Server Version [0.10.80]

5. Create the KSM service account and ClusterRoleBinding using the following YAML:

ksm-sa.yml

---
apiVersion: v1
kind: ServiceAccount
metadata:
  name: ksm-admin
  namespace: kube-system
---
apiVersion: rbac.authorization.k8s.io/v1beta1
kind: ClusterRoleBinding
metadata:
  name: ksm-cluster-admin
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: cluster-admin
subjects:
  - kind: ServiceAccount
    name: ksm-admin
    namespace: kube-system

Apply it as follows:

$ kubectl apply -f ksm-sa.yml
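
To confirm the service account and binding exist (a hedged extra check, not in the original steps):

$ kubectl get serviceaccount ksm-admin -n kube-system
$ kubectl get clusterrolebinding ksm-cluster-admin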

6. You need a cluster credentials file to register a Kubernetes cluster with KSM and set it as the default. That file can be generated with the following script.

cluster-creds.sh

# Path to the local kubeconfig file this script parses.
export kube_config="/Users/papicella/.kube/config"

# Name of the currently targeted cluster, taken from the current-context entry.
cluster=`grep current $kube_config|sed "s/ //g"|cut -d ":" -f 2`

echo "Using cluster $cluster"

# API server URL of that cluster.
export server=`grep -B 2 "name: $cluster" $kube_config \
  |grep server|sed "s/ //g"|sed "s/^[^:]*://g"`

# Base64-encoded CA certificate data of that cluster.
export certificate=`grep -B 2 "name: $cluster" $kube_config \
  |grep certificate|sed "s/ //g"|sed "s/.*://"`

# Token of the ksm-admin service account created in step 5.
export secret_name=$(kubectl get serviceaccount ksm-admin \
   --namespace=kube-system -o jsonpath='{.secrets[0].name}')

export secret_val=$(kubectl --namespace=kube-system get secret $secret_name \
   -o jsonpath='{.data.token}')

export secret_val=$(echo ${secret_val} | base64 --decode)

# Write the credentials file KSM expects.
cat > cluster-creds.yaml << EOF
token: ${secret_val}
server: ${server}
caData: ${certificate}
EOF

echo ""
echo "ready to roll!!!!"
echo ""

Before running this script, make sure you have targeted the K8s cluster you intend to register. You can verify that with the following command:

$ kubectl config current-context
tas4k8s
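
If the grep/sed parsing in the script doesn't match your kubeconfig layout, the same values can be pulled straight from kubectl. A sketch of an alternative, not part of the original script:

export server=$(kubectl config view --minify --raw -o jsonpath='{.clusters[0].cluster.server}')
export certificate=$(kubectl config view --minify --raw -o jsonpath='{.clusters[0].cluster.certificate-authority-data}')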
 
7. Now that we have a "cluster-creds.yaml" file, we can go ahead and register the Kubernetes cluster with KSM as follows:

$ ksm cluster register ksm-svcs ./cluster-creds.yaml
$ ksm cluster set-default ksm-svcs

Verify as follows:

$ ksm cluster list
CLUSTER NAME   IP ADDRESS                                         DEFAULT
ksm-svcs       https://tas4k8s.run.haas-210.pez.pivotal.io:8443   true

8. Now we can go ahead and create a marketplace offering for MySQL. To do that, we will use the Bitnami MySQL chart, as shown below.

$ git clone https://github.com/bitnami/charts.git
$ cd ./charts/bitnami/mysql

** Create bind.yaml as follows. This is required so that a service binding from Tanzu Application Service injects the JSON our applications expect at bind time. **

$ cat bind.yaml
template: |
  local filterfunc(j) = std.length(std.findSubstr("mysql", j.name)) > 0;
  local s1 = std.filter(filterfunc, $.services);
  {
    hostname: s1[0].status.loadBalancer.ingress[0].ip,
    name: s1[0].name,
    jdbcUrl: "jdbc:mysql://" + self.hostname + "/my_db?user=" + self.username + "&password=" + self.password + "&useSSL=false",
    uri: "mysql://" + self.username + ":" + self.password + "@" + self.hostname + ":" + self.port + "/my_db?reconnect=true",
    password: $.secrets[0].data['mysql-root-password'],
    port: 3306,
    username: "root"
  }
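
For context, my reading of this template (not spelled out in the original post): at bind time KSM evaluates the Jsonnet against the deployed chart's services and secrets, and the resulting JSON is what a bound application sees in its VCAP_SERVICES. Once an app is bound (step 16 below), you can inspect what was injected with:

$ cf env my-springboot-app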

$ helm package .
$ cd ..
$ ksm offer save ./mysql ./mysql/mysql-6.14.7.tgz

Verify MySQL is now part of the offer list as follows:
  
$ ksm offer list
MARKETPLACE NAME   INCLUDED CHARTS   VERSION   PLANS
rabbitmq           rabbitmq          6.18.1    [persistent ephemeral]
mysql              mysql             6.14.7    [default]

9. Now we need to log in as an ADMIN user.

Verify you are logged in as admin user using the CF CLI:

$ cf target
api endpoint:   https://api.system.run.haas-210.pez.pivotal.io
api version:    2.151.0
user:           admin
org:            system
space:          development

10. At this point, you can see the KSM service broker registered with TAS4K8s as follows:

$ cf service-brokers
Getting service brokers as admin...

name   url
KSM    http://10.195.93.188

11. Enable access to the MySQL service as follows

$ cf enable-service-access mysql

Verify it's enabled:

$ cf service-access
Getting service access as admin...
broker: KSM
   service    plan         access   orgs
   mysql      default      all
   rabbitmq   ephemeral    all
   rabbitmq   persistent   all

12. At this point, it's best to log out of admin and log back in as a non-admin user.

$ cf target
api endpoint:   https://api.system.run.haas-210.pez.pivotal.io
api version:    2.151.0
user:           pas
org:            apples-org
space:          development

13. Create a MySQL service as follows. I am passing in some JSON to indicate that my K8s cluster supports the LoadBalancer service type, so it is used as part of the service creation.

$ cf create-service mysql default pas-mysql -c '{"service":{"type":"LoadBalancer"}}'
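
While the chart deploys, you can also poll the status of the individual service instance with the standard CF CLI:

$ cf service pas-mysql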

14. Check that the service has been created correctly; it will take a few minutes.

$ cf services
Getting services in org apples-org / space development as pas...

name        service   plan      bound apps          last operation     broker   upgrade available
pas-mysql   mysql     default   my-springboot-app   create succeeded   KSM      no

15. Your service is created in its own K8s namespace, though that may not be the case at some point in the future.

$ kubectl get all -n ksm-2e526124-11a3-4d38-966c-b3ffd45471d7
NAME                            READY   STATUS    RESTARTS   AGE
pod/k-wqo5mubw-mysql-master-0   1/1     Running   0          15d
pod/k-wqo5mubw-mysql-slave-0    1/1     Running   0          15d

NAME                             TYPE           CLUSTER-IP       EXTERNAL-IP     PORT(S)          AGE
service/k-wqo5mubw-mysql         LoadBalancer   10.100.200.12    10.195.93.192   3306:30563/TCP   15d
service/k-wqo5mubw-mysql-slave   LoadBalancer   10.100.200.130   10.195.93.191   3306:31982/TCP   15d

NAME                                       READY   AGE
statefulset.apps/k-wqo5mubw-mysql-master   1/1     15d
statefulset.apps/k-wqo5mubw-mysql-slave    1/1     15d
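
For a quick smoke test outside of TAS, you can connect to the master directly over its LoadBalancer IP. A sketch, assuming the Bitnami chart's secret name and key (k-wqo5mubw-mysql / mysql-root-password, matching the output above) and a local mysql client:

$ export NS=ksm-2e526124-11a3-4d38-966c-b3ffd45471d7
$ export MYSQL_PWD=$(kubectl get secret -n $NS k-wqo5mubw-mysql \
    -o jsonpath='{.data.mysql-root-password}' | base64 --decode)
$ mysql -h 10.195.93.192 -u root my_db     # mysql reads the password from MYSQL_PWD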

16. At this point, we can test the new MySQL service using a Spring Boot application.

The following GitHub repo can be used for that. Ignore its steps to create a service, as you have already done that.
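
If you already have an app pushed, wiring it to the new service instance is the standard CF flow (the app name here is a placeholder):

$ cf bind-service my-springboot-app pas-mysql
$ cf restage my-springboot-app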
Finally, to define service plans, see the links below.

More Information
Container Services Manager (KSM)

Tanzu Application Service for Kubernetes

Categories: Fusion Middleware

How to copy the csv files from the shared path to HP-Unix server Data base directory

Tom Kyte - Wed, 2020-08-05 16:06
Hi Sir, I'm using the Oracle 9i database on an HP-UX system. The directory name is "XX_DATA" and the shared network path is "\\mkees01\public\Details.csv". Could you please help me by providing the code for the below scenario: how do I copy the file Details.csv from the shared network path to the Oracle database directory? Regards, Sankar
Categories: DBA Blogs

JSON_OBJECT throws error in Stored Procedure

Tom Kyte - Wed, 2020-08-05 16:06
Dear Team, I am trying to use the JSON functions in a stored procedure, and TOAD throws the syntax error: Found 'value', an alias is not allowed here (reserved for XMLCOLATTVAL, XMLFOREST and XMLATTRIBUTES only). Below is the query that I use; however, when I run it outside the stored procedure, it works fine. SELECT JSON_OBJECT('empid' value '1', 'name' value 'Tom', 'address' value '23333') INTO emp_json FROM dual; Could you please help me identify the cause of the error. Thank you.
Categories: DBA Blogs

Best practice to delete rows with a CLOB column

Tom Kyte - Wed, 2020-08-05 16:06
Environment: Oracle 12.1.0.2 on Exadata. I have a table with 30 columns, one of which is a CLOB, that contains about 26 million rows. I have a purge job (PL/SQL packaged procedure) that DELETEs rows from nine (9) other tables based on a list of IDs from the driver table that contains the CLOB. I save the list of IDs in a global temporary table and use that to delete the associated rows from the other tables, which is done in the correct order to maintain all the FK relationships. I am running a test today that has identified about 98,000 IDs to be purged, but I have about 5 million that need to be purged to 'catch up', and going forward I'll have about 10,000 per day to purge. The DELETE on the other 9 tables runs very quickly. The table with the CLOB column is taking many times longer than the others, like several hours as opposed to several minutes. I'm guessing there is a better and more efficient way to remove the rows from the table containing the CLOB and to regain the storage used by the deleted CLOBs. I'm currently issuing just a 'DELETE FROM <table> WHERE ID IN (SELECT ID FROM <gtt>)', where ID is the PK. I would appreciate any suggestions and the order of operational steps to accomplish this purge, both for the 'catch up' and the daily run of 10,000. Thanks in advance for all the help!! Much appreciated! -gary
Categories: DBA Blogs
