Feed aggregator
Index Hints
I’ve lost count of the number of times I’ve reminded people that hinting (correctly) is hard. Even the humble /*+ index() */ hint and its close relatives are open to misunderstanding and accidental misuse, leading to complaints that “Oracle is ignoring my hint”.
Strange though it may seem, I’m still not 100% certain of what some of the basic index hints are supposed to do, and even the “hint report” in the most recent versions of dbms_xplan.display_xxx() hasn’t told me everything I’d like to know. So if you think you know all about hints and indexing this blog note is for you.
I’ll start with a brief, and approximate, timeline for the basic index hints – starting from 8.0
For completeness I’ve included the more exotic index-related hints in the list (without a version), and I’ve even highlighted the rarely seen use_nl_with_index() hint to remind myself to raise a rhetorical question about it at the end of this piece.
In this list you’ll notice that the only hint originally available directed the optimizer to access a table by index, but in 8.1 that changed so that we could
- tell the optimizer about indexes it should not use
- specify whether the index access should use the index in ascending or descending order
- use an index fast full scan.
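As a quick syntax reminder, a sketch of those hints in use (t1 and t1_i1 are placeholder table and index names, and the version attributions follow the timeline above):

select /*+ index(t1 t1_i1) */      count(*) from t1;   -- use the named index (8.0)
select /*+ no_index(t1 t1_i1) */   count(*) from t1;   -- don't use the named index (8.1)
select /*+ index_desc(t1 t1_i1) */ count(*) from t1;   -- use the index in descending order (8.1)
select /*+ index_ffs(t1 t1_i1) */  count(*) from t1;   -- index fast full scan (8.1)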
In 9i Oracle then introduced the index skip scan, with the option to specify whether the skip scan should be in ascending or descending order. The index_ss hint seems to be no more than a synonym for the index_ss_asc hint (or should that be the other way round?); as far as I can tell the index_ss() hint will not produce a descending skip scan.
You’ll note that there’s no hint to block an index skip scan, until the hint no_index_ss() appears in 10g along with the no_index_ffs() hint to block the index fast full scan. Since 10g Oracle has got better at introducing both the “positive” and “negative” versions of a hint whenever it introduces any hints for new optimizer mechanisms.
Finally we get to 11g and if you search MOS you may still be able to find the bug note (4323868.8) that introduced the index_rs_asc() and index_rs_desc() hints for index range scan ascending and descending.
From MOS Doc 4323868.8: “This fix adds new hints to enforce that an index is selected only if a start/stop keys (predicates) are used: INDEX_RS_ASC INDEX_RS_DESC”
This was necessary because by this time the index() hint allowed the optimizer to decide for itself how to use an index and it was quite difficult to force it to use the strategy you really wanted.
It’s still a source of puzzlement to me that an explicit index() hint will sometimes be turned into an index_rs_asc() hint when you check the Outline Information that a call to dbms_xplan.display_xxx() reports as the hints the optimizer would use to reproduce the plan, while there are other times that an explicit index_rs_asc() hint will be turned into a basic index() hint (which might not reproduce the original plan)!
Here’s a little surprise that could only reveal itself in the 19c hint report – unless you were willing to read your way carefully through a 10053 (CBO) trace file in earlier versions of Oracle. It comes from a little investigation of the index_ffs() hint that I’ve kept repeating over the last 20 years.
rem
rem     Script:         c_indffs.sql
rem     Dated:          March 2001
rem     Author:         Jonathan Lewis
rem

create table t1
nologging
as
select
        rownum                  id,
        rpad(mod(rownum,50),10) small_vc,
        rpad('x',50)            padding
from
        all_objects
where
        rownum <= 3000
;

alter table t1 modify id not null;

create index t_i1 on t1(id);
create index t_i2 on t1(small_vc,id);

set autotrace traceonly explain

select count(small_vc) from t1 where id > 2750;

select /*+ index(t1) */ count(small_vc) from t1 where id > 2750;

select /*+ index_ffs(t1) */ count(small_vc) from t1 where id > 2750;

select /*+ index_ffs(t1) no_index(t1) */ count(small_vc) from t1 where id > 2750;

set autotrace off
I’ve created a table with two indexes, and then enabled autotrace to get the execution plans for 4 queries that vary only in their hinting. Here’s the plan (on 19.3, with my settings for system stats) for the first query:
------------------------------------------------------------------------------
| Id  | Operation             | Name | Rows  | Bytes | Cost (%CPU)| Time     |
------------------------------------------------------------------------------
|   0 | SELECT STATEMENT      |      |     1 |    15 |     3   (0)| 00:00:01 |
|   1 |  SORT AGGREGATE       |      |     1 |    15 |            |          |
|*  2 |   INDEX FAST FULL SCAN| T_I2 |   250 |  3750 |     3   (0)| 00:00:01 |
------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------
   2 - filter("ID">2750)
It’s an index fast full scan on the t_i2 (two-column) index. If I add an index() hint to this query, will that allow Oracle to continue using the index fast full scan, or will it force Oracle into some other path? Here’s the plan for the query hinted with index(t1):
----------------------------------------------------------------------------------------------
| Id  | Operation                            | Name | Rows  | Bytes | Cost (%CPU)| Time     |
----------------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT                     |      |     1 |    15 |     5   (0)| 00:00:01 |
|   1 |  SORT AGGREGATE                      |      |     1 |    15 |            |          |
|   2 |   TABLE ACCESS BY INDEX ROWID BATCHED| T1   |   250 |  3750 |     5   (0)| 00:00:01 |
|*  3 |    INDEX RANGE SCAN                  | T_I1 |   250 |       |     2   (0)| 00:00:01 |
----------------------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------
   3 - access("ID">2750)
The optimizer has chosen an index range scan on the (single-column) t_i1 index. Since this path costs more than the index fast full scan it would appear that the index() hint does not allow the optimizer to consider an index fast full scan. So we might decide that an index_ffs() hint is appropriate to secure the plan we want – and here’s the plan we get with that hint:
------------------------------------------------------------------------------
| Id  | Operation             | Name | Rows  | Bytes | Cost (%CPU)| Time     |
------------------------------------------------------------------------------
|   0 | SELECT STATEMENT      |      |     1 |    15 |     3   (0)| 00:00:01 |
|   1 |  SORT AGGREGATE       |      |     1 |    15 |            |          |
|*  2 |   INDEX FAST FULL SCAN| T_I2 |   250 |  3750 |     3   (0)| 00:00:01 |
------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------
   2 - filter("ID">2750)
As expected we get the index fast full scan we wanted. But we might want to add belts and braces – let’s include a no_index() hint to make sure that the optimizer doesn’t consider any other strategy for using an index. Since we’ve seen that the index() hint isn’t associated with the index fast full scan path it seems reasonable to assume that the no_index() is also not associated with the index fast full scan path. Here’s the plan we get from the final variant of my query with index_ffs(t1) no_index(t1):
------------------------------------------------------------------------------
| Id  | Operation             | Name | Rows  | Bytes | Cost (%CPU)| Time     |
------------------------------------------------------------------------------
|   0 | SELECT STATEMENT      |      |     1 |    15 |     3   (0)| 00:00:01 |
|   1 |  SORT AGGREGATE       |      |     1 |    15 |            |          |
|*  2 |   INDEX FAST FULL SCAN| T_I2 |   250 |  3750 |     3   (0)| 00:00:01 |
------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------
   2 - filter("ID">2750)

Hint Report (identified by operation id / Query Block Name / Object Alias):
Total hints for statement: 2 (U - Unused (2))
---------------------------------------------------------------------------
   2 -  SEL$1 / T1@SEL$1
         U -  index_ffs(t1) / hint conflicts with another in sibling query block
         U -  no_index(t1) / hint conflicts with another in sibling query block
The query has produced the execution plan we wanted – but only by accident. The hint report (which, by default, is the version that reports only the erroneous or unused hints) tells us that both hints have been ignored because they each conflict with some other hint in a “sibling” query block. In this case they’re conflicting with each other.
So the plan we got was our original unhinted plan – which made it look as if we’d done exactly the right thing to ensure that we’d made the plan completely reproducible. Such (previously invisible) errors can easily lead to complaints about the optimizer ignoring hints.
The Main Event

The previous section was about an annoying little inconsistency in the way in which the “negative” version of a hint may not correspond exactly to the “positive” version. There’s a more worrying issue to address when you try to be more precise in your use of basic index hints.
We’ve seen that an index() hint could mean almost anything other than an index fast full scan, while a no_index() hint (probably) blocks all possible uses of an index, but would you expect an index_rs_asc() hint to produce a skip scan, or an index_ss_asc() hint to produce a range scan? Here’s another old script of mine to create some data and test some hints:
rem
rem     Script:         skip_scan_anomaly.sql
rem     Author:         Jonathan Lewis
rem     Dated:          Jan 2009
rem

create table t1
as
with generator as (
        select  --+ materialize
                rownum  id
        from    all_objects
        where   rownum <= 3000  -- > hint to avoid wordpress format issue
)
select
        mod(rownum,300)                                 addr_id300,
        mod(rownum,200)                                 addr_id200,
        mod(rownum,100)                                 addr_id100,
        mod(rownum,50)                                  addr_id050,
        trunc(sysdate) + trunc(mod(rownum,2501)/3)      effective_date,
        lpad(rownum,10,'0')                             small_vc,
        rpad('x',050)                                   padding
--      rpad('x',100)                                   padding
from
        generator       v1,
        generator       v2
where
        rownum <= 250000 -- > hint to avoid wordpress format issue
;

create index t1_i1   on t1(effective_date);
create index t1_i300 on t1(addr_id300, effective_date);
create index t1_i200 on t1(addr_id200, effective_date);
create index t1_i100 on t1(addr_id100, effective_date);
create index t1_i050 on t1(addr_id050, effective_date);
I’ve created a table with rather more indexes than I’ll be using. The significant indexes are t1_i1(effective_date) and t1_i050(addr_id050, effective_date). The former will be available for range scans, the latter for skip scans, when I test queries with predicates only on effective_date.
Choice of execution path can be affected by the system stats, so I need to point out that I’ve set mine with the following code:
begin
        dbms_stats.set_system_stats('MBRC',16);
        dbms_stats.set_system_stats('MREADTIM',10);
        dbms_stats.set_system_stats('SREADTIM',5);
        dbms_stats.set_system_stats('CPUSPEED',500);
exception
        when others then null;
end;
/
And I’ll start with a couple of “baseline” queries and execution plans:
explain plan for
select
        small_vc
from    t1
where   effective_date >  to_date('&m_start_date','dd-mon-yyyy')
and     effective_date <= to_date('&m_end_date'  ,'dd-mon-yyyy')
;

select * from table(dbms_xplan.display(format=>'hint_report'));

alter index t1_i1 invisible;

explain plan for
select
        /*+ index(t1) */
        small_vc
from    t1
where   effective_date >  to_date('&m_start_date','dd-mon-yyyy')
and     effective_date <= to_date('&m_end_date'  ,'dd-mon-yyyy')
;
You’ll notice that part way through the script I’ve made the t1_i1 index invisible, and it will stay that way for a couple more tests. Here are the first two execution plans:
Unhinted
--------------------------------------------------------------------------
| Id  | Operation         | Name | Rows  | Bytes | Cost (%CPU)| Time     |
--------------------------------------------------------------------------
|   0 | SELECT STATEMENT  |      |  1500 | 28500 |   428   (9)| 00:00:01 |
|*  1 |  TABLE ACCESS FULL| T1   |  1500 | 28500 |   428   (9)| 00:00:01 |
--------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------
   1 - filter("EFFECTIVE_DATE"<=TO_DATE(' 2021-02-26 00:00:00', 'syyyy-mm-dd hh24:mi:ss') AND
              "EFFECTIVE_DATE">TO_DATE(' 2021-02-22 00:00:00', 'syyyy-mm-dd hh24:mi:ss'))

Hinted with index(t1)
-----------------------------------------------------------------------------------------------
| Id  | Operation                           | Name    | Rows  | Bytes | Cost (%CPU)| Time     |
-----------------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT                    |         |  1500 | 28500 |  1558   (1)| 00:00:01 |
|   1 |  TABLE ACCESS BY INDEX ROWID BATCHED| T1      |  1500 | 28500 |  1558   (1)| 00:00:01 |
|*  2 |   INDEX SKIP SCAN                   | T1_I050 |  1500 |       |    52   (0)| 00:00:01 |
-----------------------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------
   2 - access("EFFECTIVE_DATE">TO_DATE(' 2021-02-22 00:00:00', 'syyyy-mm-dd hh24:mi:ss') AND
              "EFFECTIVE_DATE"<=TO_DATE(' 2021-02-26 00:00:00', 'syyyy-mm-dd hh24:mi:ss'))
       filter("EFFECTIVE_DATE"<=TO_DATE(' 2021-02-26 00:00:00', 'syyyy-mm-dd hh24:mi:ss') AND
              "EFFECTIVE_DATE">TO_DATE(' 2021-02-22 00:00:00', 'syyyy-mm-dd hh24:mi:ss'))

Hint Report (identified by operation id / Query Block Name / Object Alias):
Total hints for statement: 1
---------------------------------------------------------------------------
   1 -  SEL$1 / T1@SEL$1
           -  index(t1)
I’ve managed to rig the data and system stats so that the first path is a full tablescan; then, when I add the generic index(t1) hint, Oracle recognises and uses the hint in the best possible way, picking the lowest cost index skip scan.
A variation I won’t show here – if I change the hint to index_rs_asc(t1) the optimizer recognizes there is no (currently visible) index that could be used for an index range scan and does a full tablescan, reporting the hint as unused. It won’t try to substitute a skip scan for a range scan.
What happens if I now try the index_ss(t1) hint without specifying an index? Firstly with the t1_i1 index still invisible, then after making t1_i1 visible again:
explain plan for
select
        /*+ index_ss(t1) */
        small_vc
from    t1
where   effective_date >  to_date('&m_start_date','dd-mon-yyyy')
and     effective_date <= to_date('&m_end_date'  ,'dd-mon-yyyy')
;

select * from table(dbms_xplan.display(format=>'hint_report'));
Here are the two execution plans, first when t1_i1(effective_date) is still invisible:
-----------------------------------------------------------------------------------------------
| Id  | Operation                           | Name    | Rows  | Bytes | Cost (%CPU)| Time     |
-----------------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT                    |         |  1500 | 28500 |  1558   (1)| 00:00:01 |
|   1 |  TABLE ACCESS BY INDEX ROWID BATCHED| T1      |  1500 | 28500 |  1558   (1)| 00:00:01 |
|*  2 |   INDEX SKIP SCAN                   | T1_I050 |  1500 |       |    52   (0)| 00:00:01 |
-----------------------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------
   2 - access("EFFECTIVE_DATE">TO_DATE(' 2021-02-22 00:00:00', 'syyyy-mm-dd hh24:mi:ss') AND
              "EFFECTIVE_DATE"<=TO_DATE(' 2021-02-26 00:00:00', 'syyyy-mm-dd hh24:mi:ss'))
       filter("EFFECTIVE_DATE"<=TO_DATE(' 2021-02-26 00:00:00', 'syyyy-mm-dd hh24:mi:ss') AND
              "EFFECTIVE_DATE">TO_DATE(' 2021-02-22 00:00:00', 'syyyy-mm-dd hh24:mi:ss'))

Hint Report (identified by operation id / Query Block Name / Object Alias):
Total hints for statement: 1
---------------------------------------------------------------------------
   1 -  SEL$1 / T1@SEL$1
           -  index_ss(t1)
As you might expect the optimizer has picked the t1_i050 index for a skip scan. (There are 3 other candidates for the skip scan, but since they have more distinct values for their leading column they all turn out to have a higher cost than t1_i050.)
So let’s make the t1_i1 index visible and see what the plan looks like:
---------------------------------------------------------------------------------------------
| Id  | Operation                           | Name  | Rows  | Bytes | Cost (%CPU)| Time     |
---------------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT                    |       |  1500 | 28500 |   521   (1)| 00:00:01 |
|   1 |  TABLE ACCESS BY INDEX ROWID BATCHED| T1    |  1500 | 28500 |   521   (1)| 00:00:01 |
|*  2 |   INDEX RANGE SCAN                  | T1_I1 |  1500 |       |     6   (0)| 00:00:01 |
---------------------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------
   2 - access("EFFECTIVE_DATE">TO_DATE(' 2021-02-22 00:00:00', 'syyyy-mm-dd hh24:mi:ss') AND
              "EFFECTIVE_DATE"<=TO_DATE(' 2021-02-26 00:00:00', 'syyyy-mm-dd hh24:mi:ss'))

Hint Report (identified by operation id / Query Block Name / Object Alias):
Total hints for statement: 1 (U - Unused (1))
---------------------------------------------------------------------------
   1 -  SEL$1 / T1@SEL$1
         U -  index_ss_asc(t1)
The optimizer picks an index range scan using the t1_i1 index, and reports the hint as unused! For years I told myself that an index skip scan was derived as a small collection of range scans, so an index range scan was technically a “degenerate” skip scan, i.e. one where the “small collection” consisted of exactly one element. Oracle 19c finally told me I was wrong – the optimizer is ignoring the hint.
The fact that it’s a sloppy hint and you could have been more precise is irrelevant – if the optimizer won’t do a skip scan when you specify a range scan it shouldn’t do a range scan when you specify a skip scan (personal opinion).
We should check, of course, that a precisely targeted skip scan hint works before complaining too loudly – would index_ss(t1 t1_i050), or index_ss(t1 t1_i300), work when there’s a competing index that could produce a lower cost range scan? The answer is yes.
explain plan for
select
        /*+ index_ss(t1 t1_i050) */
        small_vc
from    t1
where   effective_date >  to_date('&m_start_date','dd-mon-yyyy')
and     effective_date <= to_date('&m_end_date'  ,'dd-mon-yyyy')
;

select * from table(dbms_xplan.display(format=>'hint_report'));

-----------------------------------------------------------------------------------------------
| Id  | Operation                           | Name    | Rows  | Bytes | Cost (%CPU)| Time     |
-----------------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT                    |         |  1500 | 28500 |  1558   (1)| 00:00:01 |
|   1 |  TABLE ACCESS BY INDEX ROWID BATCHED| T1      |  1500 | 28500 |  1558   (1)| 00:00:01 |
|*  2 |   INDEX SKIP SCAN                   | T1_I050 |  1500 |       |    52   (0)| 00:00:01 |
-----------------------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------
   2 - access("EFFECTIVE_DATE">TO_DATE(' 2021-02-22 00:00:00', 'syyyy-mm-dd hh24:mi:ss') AND
              "EFFECTIVE_DATE"<=TO_DATE(' 2021-02-26 00:00:00', 'syyyy-mm-dd hh24:mi:ss'))
       filter("EFFECTIVE_DATE"<=TO_DATE(' 2021-02-26 00:00:00', 'syyyy-mm-dd hh24:mi:ss') AND
              "EFFECTIVE_DATE">TO_DATE(' 2021-02-22 00:00:00', 'syyyy-mm-dd hh24:mi:ss'))

Hint Report (identified by operation id / Query Block Name / Object Alias):
Total hints for statement: 1
---------------------------------------------------------------------------
   1 -  SEL$1 / T1@SEL$1
           -  index_ss(t1 t1_i050)
If you specify a suitable index in the index_ss() hint then the optimizer will use it and won’t switch to the index range scan. You can, of course, specify the index by description rather than name, so the hint /*+ index_ss(t1 (addr_id050, effective_date)) */ would have been equally valid and obeyed.
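For reference, the description form of that test would look like this (the same query as before, with the hint quoted in the previous sentence):

explain plan for
select
        /*+ index_ss(t1 (addr_id050, effective_date)) */
        small_vc
from    t1
where   effective_date >  to_date('&m_start_date','dd-mon-yyyy')
and     effective_date <= to_date('&m_end_date'  ,'dd-mon-yyyy')
;

select * from table(dbms_xplan.display(format=>'hint_report'));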
How much do you know?

I’ll finish off with a rhetorical question, which I’ll introduce with this description taken from the 19c SQL Tuning Guide section 9.2.1.6:
The related USE_NL_WITH_INDEX(table index) hint instructs the optimizer to join the specified table to another row source with a nested loops join using the specified table as the inner table. The index is optional. If no index is specified, then the nested loops join uses an index with at least one join predicate as the index key.
An intuitive response to this hint would be to assume that most people expect nested loops to use index unique scans or range scans into the second table. So what would your initial expectation be about the validity of use_nl_with_index() if the only way the index could be used was with an index skip scan, or a full scan, or a fast full scan? What if there were two join predicates and there’s a path which could do a nested loop if it used two indexes to do an index join (index_join()) or an index bitmap conversion (index_combine())? Come to that, how confident are you that the hint will work if the index specified is a bitmap index?
Summary

It’s important to be as accurate and thorough as possible when using hints. Even when a hint is documented you may find that you can ask “what if” questions about the hint and find that the only way to get answers to your questions is to do several experiments.
If you’re going to put hints into production code, take at least a little time to say to yourself:
“I know what I want and expect this hint to do; are there any similar actions that it might also be allowed to trigger, and how could I check if I need to allow for them or block them?”
Footnote: This journey of rediscovery was prompted by an email from Kaley Crum who supplied me with an example of Oracle using an index skip scan when it had been hinted to do an index range scan.
RMAN's CATALOG command
The CATALOG START WITH command allows you to update the RMAN Repository with information about backup pieces (or archivelogs) in the specified location.
For example, if older backups have already been purged from RMAN but are now restored from tape, they can be made visible to RMAN with the CATALOG START WITH command.
Another case would be if you relocate backups to an alternate filesystem or diskgroup and the RMAN repository needs to be updated to identify the new location.
If you copy a backup to another server and then restore the controlfile from a different backup, you can have the controlfile updated with information about the copied backups using this command.
You can also take a backup from a Primary database and catalog it to a Standby (e.g. when you want to update a Standby which is significantly lagging). Oracle also allows you to catalog a backup taken on a Standby into the Primary if the backup is not already available on the Primary.
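In outline, the relevant command forms are as follows (the paths and file names here are purely illustrative):

RMAN> catalog start with '/new/location/';                     -- catalog everything found under a path
RMAN> catalog backuppiece '/new/location/backup_piece.bkp';    -- catalog a single backup piece
RMAN> catalog archivelog '/new/location/1_119_1036108814.dbf'; -- catalog a single archived log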
A few demonstrations:
Demonstration 1 : Relocated Backup Set / BackupPiece for Datafile Backup(s)
SQL> select file#, name, checkpoint_change#
2 from v$datafile
3 where name = '/opt/oracle/oradata/ORCLCDB/users01.dbf'
4 /
FILE# NAME CHECKPOINT_CHANGE#
---------- ------------------------------------------------ ------------------
7 /opt/oracle/oradata/ORCLCDB/users01.dbf 7583758
SQL>
oracle19c>sqlplus '/ as sysdba'
SQL*Plus: Release 19.0.0.0.0 - Production on Mon Jan 25 22:18:20 2021
Version 19.3.0.0.0
Copyright (c) 1982, 2019, Oracle. All rights reserved.
Connected to:
Oracle Database 19c Enterprise Edition Release 19.0.0.0.0 - Production
Version 19.3.0.0.0
SQL> exit
Disconnected from Oracle Database 19c Enterprise Edition Release 19.0.0.0.0 - Production
Version 19.3.0.0.0
oracle19c>rman target /
Recovery Manager: Release 19.0.0.0.0 - Production on Mon Jan 25 22:18:26 2021
Version 19.3.0.0.0
Copyright (c) 1982, 2019, Oracle and/or its affiliates. All rights reserved.
connected to target database: ORCLCDB (DBID=2778483057)
RMAN> list backup of datafile 7;
using target database control file instead of recovery catalog
List of Backup Sets
===================
BS Key Type LV Size Device Type Elapsed Time Completion Time
------- ---- -- ---------- ----------- ------------ ---------------
42 Full 229.31M DISK 00:00:26 14-NOV-20
BP Key: 42 Status: AVAILABLE Compressed: YES Tag: TAG20201114T162700
Piece Name: /opt/oracle/FRA/ORCLCDB/ORCLCDB/backupset/2020_11_14/o1_mf_nnndf_TAG20201114T162700_htz56nnc_.bkp
List of Datafiles in backup set 42
File LV Type Ckp SCN Ckp Time Abs Fuz SCN Sparse Name
---- -- ---- ---------- --------- ----------- ------ ----
7 Full 7343626 14-NOV-20 NO /opt/oracle/oradata/ORCLCDB/users01.dbf
BS Key Type LV Size Device Type Elapsed Time Completion Time
------- ---- -- ---------- ----------- ------------ ---------------
53 Full 229.31M DISK 00:00:26 25-JAN-21
BP Key: 53 Status: AVAILABLE Compressed: YES Tag: TAG20210125T221421
Piece Name: /opt/oracle/FRA/ORCLCDB/ORCLCDB/backupset/2021_01_25/o1_mf_nnndf_TAG20210125T221421_j0xnky0z_.bkp
List of Datafiles in backup set 53
File LV Type Ckp SCN Ckp Time Abs Fuz SCN Sparse Name
---- -- ---- ---------- --------- ----------- ------ ----
7 Full 7583529 25-JAN-21 NO /opt/oracle/oradata/ORCLCDB/users01.dbf
RMAN>
-- Datafile 7 is currently at a higher SCN (7583758) than the latest backup as of 25-Jan-21
RMAN> crosscheck backup of datafile 7;
allocated channel: ORA_DISK_1
channel ORA_DISK_1: SID=288 device type=DISK
allocated channel: ORA_DISK_2
channel ORA_DISK_2: SID=46 device type=DISK
crosschecked backup piece: found to be 'EXPIRED'
backup piece handle=/opt/oracle/FRA/ORCLCDB/ORCLCDB/backupset/2021_01_25/o1_mf_nnndf_TAG20210125T221421_j0xnky0z_.bkp RECID=53 STAMP=1062800062
Crosschecked 1 objects
crosschecked backup piece: found to be 'EXPIRED'
backup piece handle=/opt/oracle/FRA/ORCLCDB/ORCLCDB/backupset/2020_11_14/o1_mf_nnndf_TAG20201114T162700_htz56nnc_.bkp RECID=42 STAMP=1056472020
Crosschecked 1 objects
RMAN>
----- both backups are no longer available on disk
oracle19c>pwd
/var/tmp/For_Restore
oracle19c>ls -l
total 318016
-rw-r-----. 1 oracle oinstall 9194496 Jan 25 22:14 o1_mf_annnn_TAG20210125T221418_j0xnkv4w_.bkp
-rw-r-----. 1 oracle oinstall 4457984 Jan 25 22:14 o1_mf_annnn_TAG20210125T221418_j0xnkvdk_.bkp
-rw-r-----. 1 oracle oinstall 2251776 Jan 25 22:14 o1_mf_annnn_TAG20210125T221418_j0xnkwy7_.bkp
-rw-r-----. 1 oracle oinstall 62976 Jan 25 22:15 o1_mf_annnn_TAG20210125T221517_j0xnmoj0_.bkp
-rw-r-----. 1 oracle oinstall 240459776 Jan 25 22:14 o1_mf_nnndf_TAG20210125T221421_j0xnky0z_.bkp
-rw-r-----. 1 oracle oinstall 69206016 Jan 25 22:14 o1_mf_nnndf_TAG20210125T221421_j0xnkym5_.bkp
oracle19c>
----- backups of 25-Jan have been restored from Tape to /var/tmp/For_Restore
RMAN> catalog start with '/var/tmp/For_Restore';
searching for all files that match the pattern /var/tmp/For_Restore
List of Files Unknown to the Database
=====================================
File Name: /var/tmp/For_Restore/o1_mf_annnn_TAG20210125T221418_j0xnkv4w_.bkp
File Name: /var/tmp/For_Restore/o1_mf_annnn_TAG20210125T221418_j0xnkvdk_.bkp
File Name: /var/tmp/For_Restore/o1_mf_annnn_TAG20210125T221418_j0xnkwy7_.bkp
File Name: /var/tmp/For_Restore/o1_mf_annnn_TAG20210125T221517_j0xnmoj0_.bkp
File Name: /var/tmp/For_Restore/o1_mf_nnndf_TAG20210125T221421_j0xnky0z_.bkp
File Name: /var/tmp/For_Restore/o1_mf_nnndf_TAG20210125T221421_j0xnkym5_.bkp
Do you really want to catalog the above files (enter YES or NO)? YES
cataloging files...
cataloging done
List of Cataloged Files
=======================
File Name: /var/tmp/For_Restore/o1_mf_annnn_TAG20210125T221418_j0xnkv4w_.bkp
File Name: /var/tmp/For_Restore/o1_mf_annnn_TAG20210125T221418_j0xnkvdk_.bkp
File Name: /var/tmp/For_Restore/o1_mf_annnn_TAG20210125T221418_j0xnkwy7_.bkp
File Name: /var/tmp/For_Restore/o1_mf_annnn_TAG20210125T221517_j0xnmoj0_.bkp
File Name: /var/tmp/For_Restore/o1_mf_nnndf_TAG20210125T221421_j0xnky0z_.bkp
File Name: /var/tmp/For_Restore/o1_mf_nnndf_TAG20210125T221421_j0xnkym5_.bkp
RMAN>
RMAN> list backup of datafile 7;
List of Backup Sets
===================
BS Key Type LV Size Device Type Elapsed Time Completion Time
------- ---- -- ---------- ----------- ------------ ---------------
42 Full 229.31M DISK 00:00:26 14-NOV-20
BP Key: 42 Status: EXPIRED Compressed: YES Tag: TAG20201114T162700
Piece Name: /opt/oracle/FRA/ORCLCDB/ORCLCDB/backupset/2020_11_14/o1_mf_nnndf_TAG20201114T162700_htz56nnc_.bkp
List of Datafiles in backup set 42
File LV Type Ckp SCN Ckp Time Abs Fuz SCN Sparse Name
---- -- ---- ---------- --------- ----------- ------ ----
7 Full 7343626 14-NOV-20 NO /opt/oracle/oradata/ORCLCDB/users01.dbf
BS Key Type LV Size
------- ---- -- ----------
53 Full 229.31M
List of Datafiles in backup set 53
File LV Type Ckp SCN Ckp Time Abs Fuz SCN Sparse Name
---- -- ---- ---------- --------- ----------- ------ ----
7 Full 7583529 25-JAN-21 NO /opt/oracle/oradata/ORCLCDB/users01.dbf
Backup Set Copy #2 of backup set 53
Device Type Elapsed Time Completion Time Compressed Tag
----------- ------------ --------------- ---------- ---
DISK 00:00:26 25-JAN-21 YES TAG20210125T221421
List of Backup Pieces for backup set 53 Copy #2
BP Key Pc# Status Piece Name
------- --- ----------- ----------
64 1 AVAILABLE /var/tmp/For_Restore/o1_mf_nnndf_TAG20210125T221421_j0xnky0z_.bkp
Backup Set Copy #1 of backup set 53
Device Type Elapsed Time Completion Time Compressed Tag
----------- ------------ --------------- ---------- ---
DISK 00:00:26 25-JAN-21 YES TAG20210125T221421
List of Backup Pieces for backup set 53 Copy #1
BP Key Pc# Status Piece Name
------- --- ----------- ----------
53 1 EXPIRED /opt/oracle/FRA/ORCLCDB/ORCLCDB/backupset/2021_01_25/o1_mf_nnndf_TAG20210125T221421_j0xnky0z_.bkp
RMAN>
----- Now RMAN finds that there is one more backup in /var/tmp/For_Restore
----- RMAN also identifies that Backup Set 53 actually has 2 copies -- Copy#2 being in /var/tmp/For_Restore
----- The BackupSet is 53 but the BackupPiece is 53 at the FRA location and 64 for the Copy at /var/tmp/For_Restore
----- So, the CATALOG command has added this copy as a new BackupPiece in the Repository
RMAN> crosscheck backup of datafile 7;
using channel ORA_DISK_1
using channel ORA_DISK_2
crosschecked backup piece: found to be 'EXPIRED'
backup piece handle=/opt/oracle/FRA/ORCLCDB/ORCLCDB/backupset/2020_11_14/o1_mf_nnndf_TAG20201114T162700_htz56nnc_.bkp RECID=42 STAMP=1056472020
crosschecked backup piece: found to be 'AVAILABLE'
backup piece handle=/var/tmp/For_Restore/o1_mf_nnndf_TAG20210125T221421_j0xnky0z_.bkp RECID=64 STAMP=1062800572
Crosschecked 1 objects
crosschecked backup piece: found to be 'EXPIRED'
backup piece handle=/opt/oracle/FRA/ORCLCDB/ORCLCDB/backupset/2021_01_25/o1_mf_nnndf_TAG20210125T221421_j0xnky0z_.bkp RECID=53 STAMP=1062800062
Crosschecked 2 objects
RMAN> delete expired backup of datafile 7;
using channel ORA_DISK_1
using channel ORA_DISK_2
List of Backup Pieces
BP Key BS Key Pc# Cp# Status Device Type Piece Name
------- ------- --- --- ----------- ----------- ----------
42 42 1 1 EXPIRED DISK /opt/oracle/FRA/ORCLCDB/ORCLCDB/backupset/2020_11_14/o1_mf_nnndf_TAG20201114T162700_htz56nnc_.bkp
53 53 1 1 EXPIRED DISK /opt/oracle/FRA/ORCLCDB/ORCLCDB/backupset/2021_01_25/o1_mf_nnndf_TAG20210125T221421_j0xnky0z_.bkp
Do you really want to delete the above objects (enter YES or NO)? YES
deleted backup piece
backup piece handle=/opt/oracle/FRA/ORCLCDB/ORCLCDB/backupset/2021_01_25/o1_mf_nnndf_TAG20210125T221421_j0xnky0z_.bkp RECID=53 STAMP=1062800062
Deleted 1 EXPIRED objects
deleted backup piece
backup piece handle=/opt/oracle/FRA/ORCLCDB/ORCLCDB/backupset/2020_11_14/o1_mf_nnndf_TAG20201114T162700_htz56nnc_.bkp RECID=42 STAMP=1056472020
Deleted 1 EXPIRED objects
RMAN> list backup of datafile 7;
List of Backup Sets
===================
BS Key Type LV Size Device Type Elapsed Time Completion Time
------- ---- -- ---------- ----------- ------------ ---------------
53 Full 229.31M DISK 00:00:26 25-JAN-21
BP Key: 64 Status: AVAILABLE Compressed: YES Tag: TAG20210125T221421
Piece Name: /var/tmp/For_Restore/o1_mf_nnndf_TAG20210125T221421_j0xnky0z_.bkp
List of Datafiles in backup set 53
File LV Type Ckp SCN Ckp Time Abs Fuz SCN Sparse Name
---- -- ---- ---------- --------- ----------- ------ ----
7 Full 7583529 25-JAN-21 NO /opt/oracle/oradata/ORCLCDB/users01.dbf
RMAN>
----- after running CROSSCHECK and DELETE EXPIRED, RMAN now identifies that Backupset 53 has only one BackupPiece at /var/tmp/For_Restore
----- Any attempt to RESTORE DATAFILE 7 would now use this BackupPiece
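A quick way to confirm which piece RMAN would read, without actually restoring anything, is the VALIDATE option of the RESTORE command (a sketch; it reads the backup and reports the piece it would use):

RMAN> restore datafile 7 validate;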
Demonstration 2 : Relocated ArchiveLog and Backup of ArchiveLog
RMAN> list archivelog from sequence 119 until sequence 119;
List of Archived Log Copies for database with db_unique_name ORCLCDB
=====================================================================
Key Thrd Seq S Low Time
------- ---- ------- - ---------
286 1 119 A 25-JAN-21
Name: /opt/oracle/archivelog/ORCLCDB/1_119_1036108814.dbf
RMAN> list backup of archivelog from sequence 119 until sequence 119;
List of Backup Sets
===================
BS Key Size Device Type Elapsed Time Completion Time
------- ---------- ----------- ------------ ---------------
51 2.15M DISK 00:00:01 25-JAN-21
BP Key: 51 Status: AVAILABLE Compressed: YES Tag: TAG20210125T221418
Piece Name: /opt/oracle/FRA/ORCLCDB/ORCLCDB/backupset/2021_01_25/o1_mf_annnn_TAG20210125T221418_j0xnkwy7_.bkp
List of Archived Logs in backup set 51
Thrd Seq Low SCN Low Time Next SCN Next Time
---- ------- ---------- --------- ---------- ---------
1 119 7582383 25-JAN-21 7583492 25-JAN-21
RMAN>
RMAN> crosscheck archivelog from sequence 119 until sequence 119;
released channel: ORA_DISK_1
released channel: ORA_DISK_2
allocated channel: ORA_DISK_1
channel ORA_DISK_1: SID=288 device type=DISK
allocated channel: ORA_DISK_2
channel ORA_DISK_2: SID=46 device type=DISK
validation failed for archived log
archived log file name=/opt/oracle/archivelog/ORCLCDB/1_119_1036108814.dbf RECID=286 STAMP=1062800057
Crosschecked 1 objects
RMAN> crosscheck backup of archivelog from sequence 119 until sequence 119;
using channel ORA_DISK_1
using channel ORA_DISK_2
crosschecked backup piece: found to be 'EXPIRED'
backup piece handle=/opt/oracle/FRA/ORCLCDB/ORCLCDB/backupset/2021_01_25/o1_mf_annnn_TAG20210125T221418_j0xnkwy7_.bkp RECID=51 STAMP=1062800060
Crosschecked 1 objects
RMAN>
----- The CROSSCHECK command finds that both the ArchiveLog and its backup are missing
RMAN> catalog start with '/var/tmp/ArchLogs_Restore/';
searching for all files that match the pattern /var/tmp/ArchLogs_Restore/
List of Files Unknown to the Database
=====================================
File Name: /var/tmp/ArchLogs_Restore/1_119_1036108814.dbf
File Name: /var/tmp/ArchLogs_Restore/o1_mf_annnn_TAG20210125T221418_j0xnkwy7_.bkp
Do you really want to catalog the above files (enter YES or NO)? YES
cataloging files...
cataloging done
List of Cataloged Files
=======================
File Name: /var/tmp/ArchLogs_Restore/1_119_1036108814.dbf
File Name: /var/tmp/ArchLogs_Restore/o1_mf_annnn_TAG20210125T221418_j0xnkwy7_.bkp
RMAN> crosscheck archivelog from sequence 119 until sequence 119;
released channel: ORA_DISK_1
released channel: ORA_DISK_2
allocated channel: ORA_DISK_1
channel ORA_DISK_1: SID=288 device type=DISK
allocated channel: ORA_DISK_2
channel ORA_DISK_2: SID=46 device type=DISK
validation succeeded for archived log
archived log file name=/var/tmp/ArchLogs_Restore/1_119_1036108814.dbf RECID=299 STAMP=1062801628
Crosschecked 1 objects
validation failed for archived log
archived log file name=/opt/oracle/archivelog/ORCLCDB/1_119_1036108814.dbf RECID=286 STAMP=1062800057
Crosschecked 1 objects
RMAN> crosscheck backup of archivelog from sequence 119 until sequence 119;
using channel ORA_DISK_1
using channel ORA_DISK_2
crosschecked backup piece: found to be 'EXPIRED'
backup piece handle=/opt/oracle/FRA/ORCLCDB/ORCLCDB/backupset/2021_01_25/o1_mf_annnn_TAG20210125T221418_j0xnkwy7_.bkp RECID=51 STAMP=1062800060
Crosschecked 1 objects
crosschecked backup piece: found to be 'AVAILABLE'
backup piece handle=/var/tmp/ArchLogs_Restore/o1_mf_annnn_TAG20210125T221418_j0xnkwy7_.bkp RECID=66 STAMP=1062801628
Crosschecked 1 objects
RMAN>
RMAN> delete expired backup of archivelog from sequence 119 until sequence 119;
using channel ORA_DISK_1
using channel ORA_DISK_2
List of Backup Pieces
BP Key BS Key Pc# Cp# Status Device Type Piece Name
------- ------- --- --- ----------- ----------- ----------
51 51 1 1 EXPIRED DISK /opt/oracle/FRA/ORCLCDB/ORCLCDB/backupset/2021_01_25/o1_mf_annnn_TAG20210125T221418_j0xnkwy7_.bkp
Do you really want to delete the above objects (enter YES or NO)? YES
deleted backup piece
backup piece handle=/opt/oracle/FRA/ORCLCDB/ORCLCDB/backupset/2021_01_25/o1_mf_annnn_TAG20210125T221418_j0xnkwy7_.bkp RECID=51 STAMP=1062800060
Deleted 1 EXPIRED objects
RMAN> crosscheck backup of archivelog from sequence 119 until sequence 119;
using channel ORA_DISK_1
using channel ORA_DISK_2
crosschecked backup piece: found to be 'AVAILABLE'
backup piece handle=/var/tmp/ArchLogs_Restore/o1_mf_annnn_TAG20210125T221418_j0xnkwy7_.bkp RECID=66 STAMP=1062801628
Crosschecked 1 objects
RMAN>
----- After I CROSSCHECK in the new (restored) location, RMAN finds the ArchiveLog and its backup
----- I can DELETE the EXPIRED backup
----- (note that the missing ArchiveLog /opt/oracle/archivelog/ORCLCDB/1_119_1036108814.dbf is no longer listed as the CROSSCHECK had already marked it as "validation failed")
Demonstration 3 : Datafile Backup from Standby available at Primary
----- Backup of Datafile 7 taken at the Standby
RMAN> backup as compressed backupset datafile 7 format '/var/tmp/For_Primary/datafile_7.bkp';
Starting backup at 25-JAN-21
allocated channel: ORA_DISK_1
channel ORA_DISK_1: SID=264 device type=DISK
channel ORA_DISK_1: starting compressed full datafile backup set
channel ORA_DISK_1: specifying datafile(s) in backup set
input datafile file number=00007 name=/opt/oracle/oradata/STDBYDB/users01.dbf
channel ORA_DISK_1: starting piece 1 at 25-JAN-21
channel ORA_DISK_1: finished piece 1 at 25-JAN-21
piece handle=/var/tmp/For_Primary/datafile_7.bkp tag=TAG20210125T225828 comment=NONE
channel ORA_DISK_1: backup set complete, elapsed time: 00:00:01
Finished backup at 25-JAN-21
Starting Control File and SPFILE Autobackup at 25-JAN-21
piece handle=/opt/oracle/FRA/STDBYDB/STDBYDB/autobackup/2021_01_25/o1_mf_s_1062802630_j0xq4pmm_.bkp comment=NONE
Finished Control File and SPFILE Autobackup at 25-JAN-21
RMAN>
----- The backup is then copied over to the Primary Server
RMAN> catalog start with '/var/tmp/From_Standby/';
searching for all files that match the pattern /var/tmp/From_Standby/
List of Files Unknown to the Database
=====================================
File Name: /var/tmp/From_Standby/datafile_7.bkp
Do you really want to catalog the above files (enter YES or NO)? YES
cataloging files...
cataloging done
List of Cataloged Files
=======================
File Name: /var/tmp/From_Standby/datafile_7.bkp
RMAN> list backup of datafile 7;
List of Backup Sets
===================
BS Key Type LV Size Device Type Elapsed Time Completion Time
------- ---- -- ---------- ----------- ------------ ---------------
60 Full 229.31M DISK 00:00:26 25-JAN-21
BP Key: 70 Status: AVAILABLE Compressed: YES Tag: TAG20210125T221421
Piece Name: /var/tmp/For_Restore/o1_mf_nnndf_TAG20210125T221421_j0xnky0z_.bkp
List of Datafiles in backup set 60
File LV Type Ckp SCN Ckp Time Abs Fuz SCN Sparse Name
---- -- ---- ---------- --------- ----------- ------ ----
7 Full 7583529 25-JAN-21 NO /opt/oracle/oradata/ORCLCDB/users01.dbf
BS Key Type LV Size Device Type Elapsed Time Completion Time
------- ---- -- ---------- ----------- ------------ ---------------
62 Full 1.18M DISK 00:00:00 25-JAN-21
BP Key: 73 Status: AVAILABLE Compressed: YES Tag: TAG20210125T225828
Piece Name: /var/tmp/From_Standby/datafile_7.bkp
List of Datafiles in backup set 62
File LV Type Ckp SCN Ckp Time Abs Fuz SCN Sparse Name
---- -- ---- ---------- --------- ----------- ------ ----
7 Full 7591636 25-JAN-21 NO /opt/oracle/oradata/ORCLCDB/users01.dbf
RMAN>
----- The Primary now recognises that there are 2 distinct backups of datafile 7
----- The one in /var/tmp/For_Restore is as of CheckPoint SCN 7583529 (it has a new BS Key and BackupPiece as I had deleted and re-cataloged it for this, third, demo)
----- The one from the Standby at /var/tmp/From_Standby is at CheckPoint SCN 7591636 -- which is a higher SCN as it is a more recent backup
----- I can actually use the backup from the Standby and Restore to the Primary
RMAN> sql 'alter database datafile 7 offline';
sql statement: alter database datafile 7 offline
RMAN> restore datafile 7;
Starting restore at 25-JAN-21
using channel ORA_DISK_1
using channel ORA_DISK_2
channel ORA_DISK_1: starting datafile backup set restore
channel ORA_DISK_1: specifying datafile(s) to restore from backup set
channel ORA_DISK_1: restoring datafile 00007 to /opt/oracle/oradata/ORCLCDB/users01.dbf
channel ORA_DISK_1: reading from backup piece /var/tmp/From_Standby/datafile_7.bkp
channel ORA_DISK_1: piece handle=/var/tmp/From_Standby/datafile_7.bkp tag=TAG20210125T225828
channel ORA_DISK_1: restored backup piece 1
channel ORA_DISK_1: restore complete, elapsed time: 00:00:01
Finished restore at 25-JAN-21
RMAN> sql 'alter database datafile 7 online';
sql statement: alter database datafile 7 online
RMAN-00571: ===========================================================
RMAN-00569: =============== ERROR MESSAGE STACK FOLLOWS ===============
RMAN-00571: ===========================================================
RMAN-03009: failure of sql command on default channel at 01/25/2021 23:02:55
RMAN-11003: failure during parse/execution of SQL statement: alter database datafile 7 online
ORA-01113: file 7 needs media recovery
ORA-01110: data file 7: '/opt/oracle/oradata/ORCLCDB/users01.dbf'
RMAN> recover datafile 7;
Starting recover at 25-JAN-21
using channel ORA_DISK_1
using channel ORA_DISK_2
starting media recovery
media recovery complete, elapsed time: 00:00:00
Finished recover at 25-JAN-21
RMAN> sql 'alter database datafile 7 online';
sql statement: alter database datafile 7 online
RMAN>
----- So, when datafile 7 is corrupt at the Primary, I take it OFFLINE and then issue a RESTORE command
----- RMAN automatically identifies that, of the two backups, the 'From_Standby/datafile_7.bkp' copy is more recent
----- So, the Backup from the Standby can be Restored to the Primary and the datafile brought ONLINE
----- RECOVERY is still required because the Primary database is currently at a higher SCN than the backup of that datafile from the Standby
----- So, the RECOVER command applies to Datafile 7 all the Redo for SCNs higher than 7591636
----- For the duration when I had datafile 7 OFFLINE I had stopped Database Recovery at the Standby
Thus, there are different uses for the CATALOG START WITH command in RMAN.
Amazon Comprehend | Natural Language Processing (NLP) On AWS
AWS uses Amazon Comprehend for natural language processing (NLP) tasks. It uses ML to find insights and relationships in text. To work with Amazon Comprehend, no machine learning experience is required. Natural Language Processing (NLP) is an approach for computers to understand, analyze, and extract meaning from textual data in a smart and useful way. […]
The post Amazon Comprehend | Natural Language Processing (NLP) On AWS appeared first on Oracle Trainings for Apps & Fusion DBA.
Introduction To Deep Learning On AWS
Nowadays Machine Learning and Artificial Intelligence are generating a lot of buzz. But have you looked at AWS deep learning? Deep learning is a developing field that is turning many heads in the current business scene. AWS has brought a new angle to deep learning with Amazon Machine Images (AMIs) explicitly intended for AI. Deep learning […]
The post Introduction To Deep Learning On AWS appeared first on Oracle Trainings for Apps & Fusion DBA.
Microk8s: publishing the dashboard (reachable from remote/internet)
If you enable the dashboard on a microk8s cluster (or single node) you can follow this tutorial: https://microk8s.io/docs/addon-dashboard
The problem is, the command

microk8s kubectl port-forward -n kube-system service/kubernetes-dashboard 10443:443

has to be re-executed every time you restart the node you use to access the dashboard.
A better configuration can be done this way: run the following command and change type: ClusterIP --> type: NodePort

kubectl -n kube-system edit service kubernetes-dashboard

# Please edit the object below. Lines beginning with a '#' will be ignored,
# and an empty file will abort the edit. If an error occurs while saving this file will be
# reopened with the relevant failures.
#
apiVersion: v1
kind: Service
metadata:
annotations:
kubectl.kubernetes.io/last-applied-configuration: |
{"apiVersion":"v1","kind":"Service","metadata":{"annotations":{},"labels":{"k8s-app":"kubernetes-dashboard"},"name":"kubernetes-dashboard","namespace":"kube-system"},"spec":{"ports":[{"port":443,"targetPort":8443}],"selector":{"k8s-app":"kubernetes-dashboard"}}}
creationTimestamp: "2021-01-22T21:19:24Z"
labels:
k8s-app: kubernetes-dashboard
name: kubernetes-dashboard
namespace: kube-system
resourceVersion: "3599"
selfLink: /api/v1/namespaces/kube-system/services/kubernetes-dashboard
uid: 19496d44-c454-4f55-967c-432504e0401b
spec:
clusterIP: 10.152.183.81
clusterIPs:
- 10.152.183.81
ports:
- port: 443
protocol: TCP
targetPort: 8443
selector:
k8s-app: kubernetes-dashboard
sessionAffinity: None
type: ClusterIP
status:
loadBalancer: {}

Then run
root@ubuntu:/home/ubuntu# kubectl -n kube-system get service kubernetes-dashboard
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
kubernetes-dashboard NodePort 10.152.183.81 <none> 443:30713/TCP 4m14s
After that you can access the dashboard over the node port shown after the 443: – in my case https://zigbee:30713
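If you prefer a non-interactive change instead of the editor session, a one-liner like this should do the same thing (a sketch, assuming the same service name and namespace as above):

kubectl -n kube-system patch service kubernetes-dashboard -p '{"spec":{"type":"NodePort"}}'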
Eleven Table Tennis: Basics
Assuming you are an IRL player who wants to get as close to the real thing as possible, this is what I’d recommend:
Make sure you have enough space to play
The green box is your playing space. It should be a square of 2.50 m X 2.50 m ideally. Make sure to leave some space at the front, so you can reach balls close to the net and even a little across the net. Otherwise you may become a victim of ghost serves. Leave enough room at the sides – some opponents play angled, just like IRL.
If you don’t have enough space for this setup – maybe you shouldn’t play multiplayer mode then. You can still have fun, playing against the ballmachine or against the AI. Actually, I think it’s worth the money even in that case.
Use the discord channel

The Eleven TT community is on this discord channel: https://discord.gg/s8EbXWG
I recommend you register there and use the same or a similar name as the name you have in the game. For example, I’m Uwe on discord and uwe. in the game (because the name uwe was already taken). This is handy to get advice from more experienced players, also the game developers are there. They are very responsive and keen to improve Eleven TT even more, according to your feedback.
There’s a preview version presently, that has improved tracking functionality. You can just ask the developers there to get you this preview version. I did, and I find it better than the regular version, especially for fast forehand strokes.
Set up your paddle

When you have the Sanlaki paddle adapter (as recommended in the previous post), go to the menu and then to Paddle Settings:

Click on Paddle Position and select the Sanlaki Adapter:

As an IRL player, you may start with an Advanced Paddle Surface:

See how that works for you. Bounciness translates to the speed of your blade. An OFF ++ blade would be maximum bounciness. Spin is self-explanatory. You have no tackiness attribute, though. Throw Coefficient translates to the sponge thickness. The higher that value, the thicker the sponge.
Serving

This takes some time to get used to. You need to press the trigger on the left controller to first “produce” a ball, then you throw it up and press the trigger again to release the ball. It took me a while to practice that, and I still sometimes fail to release the ball as smoothly as I would like.
What I like very much: You have a built-in arbiter, who makes sure your serve is legal according to the ITTF rules. That is applied for matches in multiplayer mode as well as for matches in single player mode. But not in free hit mode! Check out the Serve Practice:

If a serve is illegal, it tells you what went wrong:


I recommend you practice with the AI opponent in single player mode for a while. It has spin lock on by default, which means it will never produce any side spin. I find that unrealistic. After some practicing against the AI in single player mode, you’re ready for matches in multiplayer mode against other human opponents.
Microk8s: No such file or directory: '/var/snap/microk8s/1908/var/kubernetes/backend.backup/info.yaml' while joining a cluster
Kubernetes cluster with microk8s on raspberry pi
If you want to join a node and you get the following error:
microk8s join 192.168.178.57:25000/6a3ce1d2f0105245209e7e5e412a7e54

Contacting cluster at 192.168.178.57
Traceback (most recent call last):
File "/snap/microk8s/1908/scripts/cluster/join.py", line 967, in <module>
join_dqlite(connection_parts)
File "/snap/microk8s/1908/scripts/cluster/join.py", line 900, in join_dqlite
update_dqlite(info["cluster_cert"], info["cluster_key"], info["voters"], hostname_override)
File "/snap/microk8s/1908/scripts/cluster/join.py", line 818, in update_dqlite
with open("{}/info.yaml".format(cluster_backup_dir)) as f:
FileNotFoundError: [Errno 2] No such file or directory: '/var/snap/microk8s/1908/var/kubernetes/backend.backup/info.yaml'
This error happens if you have not enabled DNS on your nodes.
So just run "microk8s.enable dns" on every machine:
microk8s.enable dns
Enabling DNS
Applying manifest
serviceaccount/coredns created
configmap/coredns created
deployment.apps/coredns created
service/kube-dns created
clusterrole.rbac.authorization.k8s.io/coredns created
clusterrolebinding.rbac.authorization.k8s.io/coredns created
Restarting kubelet
Adding argument --cluster-domain to nodes.
Configuring node 192.168.178.57
Adding argument --cluster-dns to nodes.
Configuring node 192.168.178.57
Restarting nodes.
Configuring node 192.168.178.57
DNS is enabled
And after that the join will work like expected:
root@ubuntu:/home/ubuntu# microk8s join 192.168.178.57:25000/ed3f57a3641581964cad43f0ceb2b526
Contacting cluster at 192.168.178.57
Waiting for this node to finish joining the cluster. ..
root@ubuntu:/home/ubuntu# kubectl get nodes
NAME STATUS ROLES AGE VERSION
ubuntu Ready <none> 3m35s v1.20.1-34+97978f80232b01
zigbee Ready <none> 37m v1.20.1-34+97978f80232b01
Google Cloud Services and Tools
Google Cloud Services is a set of Computing, Networking, Storage, Big Data, Machine Learning, and Management services offered by Google which runs on the same cloud infrastructure that Google uses internally for YouTube, Gmail, and other end-user products. Want to know more about the tools and services offered by Google Cloud? Read the blog post […]
The post Google Cloud Services and Tools appeared first on Oracle Trainings for Apps & Fusion DBA.
Introduction To Amazon Lex | Conversational AI for Chatbots
Amazon Lex is a service for building conversational interfaces into any application using voice and text. Amazon Lex provides the advanced deep learning functionalities of automatic speech recognition (ASR) for converting speech to text, and natural language understanding (NLU) to recognize the intent of the text, to enable you to build applications with highly engaging […]
The post Introduction To Amazon Lex | Conversational AI for Chatbots appeared first on Oracle Trainings for Apps & Fusion DBA.
Introduction To Amazon SageMaker Built-in Algorithms
Amazon SageMaker provides a suite of built-in algorithms to help data scientists and machine learning practitioners get started on training and deploying machine learning models quickly. Want to know more about the Amazon SageMaker Built-in Algorithms? Read the blog post at https://k21academy.com/awsml12 to learn more. The blog post covers: • What Is Amazon SageMaker and […]
The post Introduction To Amazon SageMaker Built-in Algorithms appeared first on Oracle Trainings for Apps & Fusion DBA.
Partner Webcast – Hitchhikers Guide to Oracle Cloud (Part 2)
We share our skills to maximize your revenue!
Announcing SLOB 2.5.3
This is just a quick blog post to inform readers that SLOB 2.5.3 is now available at the following webpage: click here.
SLOB 2.5.3 is a bug fix release. One of the fixed bugs has to do with how SLOB sessions get connected to RAC instances. SLOB users can surely connect to the SCAN service, but for more repeatable testing I advise SLOB 2.5.3 and SQL*Net services configured one per RAC node. This manner of connectivity establishes affinity between schemas and RAC nodes. For example, repeatability is improved if sessions performing SLOB Operations against, say, user7’s schema connect to the same RAC node each time you iterate through your testing.
The following is cut and pasted from SLOB/misc/sql_net/README:
The tnsnames.ora in this directory offers an example of
service names that will allow the user to test RAC with
repeatable results. Connecting SLOB sessions to the round
robin SCAN listener will result in SLOB sessions connecting
to random RAC nodes. This is acceptable but not optimal and
can result in varying run results due to slight variations
in sessions per RAC node from one test to another.
As of SLOB 2.5.3, runit.sh uses the SQLNET_SERVICE_BASE and
SQLNET_SERVICE_MAX slob.conf parameters to sequentially
affinity SLOB threads (Oracle sessions) to numbered service
names. For example:
SQLNET_SERVICE_BASE=rac
SQLNET_SERVICE_MAX=8
With these assigned values, runit.sh will connect the first
SLOB thread to rac1 then rac2 and so forth until rac8 after
which the connection rotor loops back to rac1. This manner
of RAC affinity testing requires either a single SLOB
schema (see SLOB Single Schema Model in the documentation)
or 8 SLOB schemas to align properly with the value assigned
to SQLNET_SERVICE_MAX. The following command will connect
32 SLOB threads (Oracle sessions) to each RAC node in an
8-node RAC configuration given the tnsnames.ora example
file in this directory:
$ sh ./runit.sh -s 8 -t 32
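The example file itself isn’t reproduced here, but a minimal sketch of the first two entries might look like this (host names are hypothetical, and the pattern continues through rac8):

rac1 =
  (DESCRIPTION =
    (ADDRESS = (PROTOCOL = TCP)(HOST = rac-node1)(PORT = 1521))
    (CONNECT_DATA = (SERVER = DEDICATED)(SERVICE_NAME = rac1))
  )

rac2 =
  (DESCRIPTION =
    (ADDRESS = (PROTOCOL = TCP)(HOST = rac-node2)(PORT = 1521))
    (CONNECT_DATA = (SERVER = DEDICATED)(SERVICE_NAME = rac2))
  )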
Find sku_no values from the table which does not have any records for ven_type='P'
Troubleshooting heavy hash joins
Spooling data to .csv file via SQL Plus
Datapump in Oracle ADB using SQL Developer Web
If you have a small schema in the Oracle Cloud Autonomous Database, you can actually run DataPump from SQL Developer Web. DATA_PUMP_DIR maps to a DBFS mount inside the Oracle Database.
Logged in to my Oracle ADB as "ADMIN"
I check if DATA_PUMP_DIR exists and I find that it is in DBFS:
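(The screenshot doesn’t reproduce here; the equivalent dictionary query is:)

select directory_name, directory_path
from   dba_directories
where  directory_name = 'DATA_PUMP_DIR';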
I run a PLSQL Block to export the HEMANT schema using the DBMS_DATAPUMP API :
After I drop the two tables in the schema, I run the import using the DBMS_DATAPUMP API and then refresh the list of Tables owned by "HEMANT":
This method is a quick way of using the Autonomous Database itself when you don't have an external Object Store to hold the Datapump file. So, I'd use this only for very small schemas as the dump is itself loaded into DBFS.
The PLSQL Code is :
REM Based on Script posted by Dick Goulet, posted to oracle-l@freelists.org
REM With modifications by me.
REM Hemant K Chitale
REM Export schema "HEMANT"
declare
    h1          NUMBER := 0;              -- DataPump job handle
    h2          varchar2(1000);           -- final job state / error message
    ex          boolean := TRUE;
    fl          number := 0;
    schema_exp  varchar2(1000) := 'in(''HEMANT'')';
    f_name      varchar2(50)  := 'My_DataPump';
    dp_mode     varchar2(100) := 'export';
    blksz       number := 0;
    SUCCESS_WITH_INFO exception;
begin
    -- remove any pre-existing log file of the same name
    utl_file.fgetattr('DATA_PUMP_DIR', dp_mode||'.log', ex, fl, blksz);
    if (ex = TRUE) then
        utl_file.fremove('DATA_PUMP_DIR', dp_mode||'.log');
    end if;
    -- define a schema-mode export job, with dump and log files in DATA_PUMP_DIR
    h1 := dbms_datapump.open (operation => 'EXPORT', job_mode => 'SCHEMA', job_name => upper(dp_mode)||'_EXP', version => 'COMPATIBLE');
    dbms_datapump.set_parallel(handle => h1, degree => 2);
    dbms_datapump.add_file(handle => h1, filename => f_name||'.dmp%U', directory => 'DATA_PUMP_DIR', filetype => DBMS_DATAPUMP.KU$_FILE_TYPE_DUMP_FILE);
    dbms_datapump.add_file(handle => h1, filename => f_name||'.log', directory => 'DATA_PUMP_DIR', filetype => DBMS_DATAPUMP.KU$_FILE_TYPE_LOG_FILE);
    dbms_datapump.set_parameter(handle => h1, name => 'KEEP_MASTER', value => 0);
    dbms_datapump.set_parameter(handle => h1, name => 'INCLUDE_METADATA', value => 1);
    -- restrict the job to the HEMANT schema
    dbms_datapump.metadata_filter(handle => h1, name => 'SCHEMA_EXPR', value => schema_exp);
    dbms_datapump.start_job(handle => h1, skip_current => 0, abort_step => 0);
    dbms_datapump.wait_for_job(handle => h1, job_state => h2);
exception
    when SUCCESS_WITH_INFO THEN NULL;
    when others then
        h2 := sqlerrm;
        if (h1 != 0) then
            dbms_datapump.stop_job(h1, 1, 0, 0);
        end if;
        dbms_output.put_line(h2);
end;
REM Import schema "HEMANT"
declare
    h1          NUMBER := 0;              -- DataPump job handle
    h2          varchar2(1000);           -- final job state / error message
    ex          boolean := TRUE;
    fl          number := 0;
    schema_exp  varchar2(1000) := 'in(''HEMANT'')';
    f_name      varchar2(50)  := 'My_DataPump';
    dp_mode     varchar2(100) := 'import';
    blksz       number := 0;
    SUCCESS_WITH_INFO exception;
begin
    -- remove any pre-existing log file of the same name
    utl_file.fgetattr('DATA_PUMP_DIR', dp_mode||'.log', ex, fl, blksz);
    if (ex = TRUE) then
        utl_file.fremove('DATA_PUMP_DIR', dp_mode||'.log');
    end if;
    -- define a schema-mode import job reading the dump file created by the export
    h1 := dbms_datapump.open (operation => 'IMPORT', job_mode => 'SCHEMA', job_name => upper(dp_mode)||'_EXP');
    dbms_datapump.set_parallel(handle => h1, degree => 2);
    dbms_datapump.add_file(handle => h1, filename => f_name||'.dmp%U', directory => 'DATA_PUMP_DIR', filetype => DBMS_DATAPUMP.KU$_FILE_TYPE_DUMP_FILE);
    dbms_datapump.add_file(handle => h1, filename => f_name||'.log', directory => 'DATA_PUMP_DIR', filetype => DBMS_DATAPUMP.KU$_FILE_TYPE_LOG_FILE);
    dbms_datapump.set_parameter(handle => h1, name => 'KEEP_MASTER', value => 0);
    -- skip any tables that already exist
    dbms_datapump.set_parameter(handle => h1, name => 'TABLE_EXISTS_ACTION', value => 'SKIP');
    dbms_datapump.metadata_filter(handle => h1, name => 'SCHEMA_EXPR', value => schema_exp);
    dbms_datapump.start_job(handle => h1, skip_current => 0, abort_step => 0);
    dbms_datapump.wait_for_job(handle => h1, job_state => h2);
exception
    when SUCCESS_WITH_INFO THEN NULL;
    when others then
        h2 := sqlerrm;
        if (h1 != 0) then
            dbms_datapump.stop_job(h1, 1, 0, 0);
        end if;
        dbms_output.put_line(h2);
end;
Again, I emphasise that this is only for small dumps.
Oracle 19c Automatic Indexing: Non-Equality Predicates Part II (Let’s Spend The Night Together)
MicroK8s: Kubernetes on raspberry pi - get nodes= NotReady
On my little Kubernetes cluster with MicroK8s I got this problem:
kubectl get nodes
NAME STATUS ROLES AGE VERSION
zigbee NotReady <none> 59d v1.19.5-34+b1af8fc278d3ef
ubuntu Ready <none> 59d v1.19.6-34+e6d0076d2a0033
The solution was:
kubectl describe node zigbee
and in the output I found:
Events:
  Type     Reason                   Age                From        Message
  ----     ------                   ----               ----        -------
  Normal   Starting                 18m                kube-proxy  Starting kube-proxy.
  Normal   Starting                 14m                kubelet     Starting kubelet.
  Warning  SystemOOM                14m                kubelet     System OOM encountered, victim process: influx, pid: 3256628
  Warning  InvalidDiskCapacity      14m                kubelet     invalid capacity 0 on image filesystem
  Normal   NodeHasNoDiskPressure    14m (x2 over 14m)  kubelet     Node zigbee status is now: NodeHasNoDiskPressure
  Normal   NodeHasSufficientPID     14m (x2 over 14m)  kubelet     Node zigbee status is now: NodeHasSufficientPID
  Normal   NodeHasSufficientMemory  14m (x2 over 14m)  kubelet     Node zigbee status is now: NodeHasSufficientMemory

Hmmm - so running additional databases and processes outside of Kubernetes is not such a good idea.
But as a fast solution: I ejected the SD card, did a resize and added swap on my laptop, and put the SD card back into the Raspberry Pi...
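For reference, the usual Ubuntu commands to create and enable a swapfile look like this (the size is illustrative; note that Kubernetes nodes traditionally run with swap disabled, so verify this works with your MicroK8s setup):

sudo fallocate -l 2G /swapfile   # allocate the swap file
sudo chmod 600 /swapfile         # restrict permissions as required by swapon
sudo mkswap /swapfile            # format it as swap
sudo swapon /swapfile            # enable it for the current boot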
Need help working with PL/SQL FOR LOOP
Historical question about the definition of the constraining table in the Oracle documentation