Performance for Full table scan ( Huge data ) [message #166924] Mon, 10 April 2006 06:23
rawat_me
Messages: 45
Registered: September 2005
Member
Hi Oracle gurus,
We have a huge data warehouse system where the historic table X holds about 60M rows (on most days).
Extracting this data from the table takes around 2 hours every time. There is no WHERE clause;
we are extracting all the records from table X.

Is there any method to enhance the performance?

Thanks
rawat
Re: Performance for Full table scan ( Huge data ) [message #166926 is a reply to message #166924] Mon, 10 April 2006 06:44
kmsalih
Messages: 197
Registered: March 2006
Senior Member
Hi,

Use a parallel hint (n is the degree of parallelism):

select /*+ PARALLEL(t, n) */ * from your_table t;

Rgds
SALIH KM
Re: Performance for Full table scan ( Huge data ) [message #166930 is a reply to message #166926] Mon, 10 April 2006 07:06
Mahesh Rajendran
Messages: 10707
Registered: March 2002
Location: oracleDocoVille
Senior Member
Account Moderator
>> extracting this data from the table
It depends on how you 'extract' the data and what you are going to do with it.
You can use bulk processing (depending on the interface you use).
For example, with the SQL*Plus arraysize setting:

scott@9i > set timing on
scott@9i > show arraysize
arraysize 15
scott@9i > set autotrace traceonly
scott@9i > select * from emp;

458752 rows selected.

Elapsed: 00:00:38.49

Execution Plan
----------------------------------------------------------
   0      SELECT STATEMENT Optimizer=CHOOSE
   1    0   TABLE ACCESS (FULL) OF 'EMP'




Statistics
----------------------------------------------------------
          1  recursive calls
          1  db block gets
      33128  consistent gets
          0  physical reads
        120  redo size
   23120761  bytes sent via SQL*Net to client
     337068  bytes received via SQL*Net from client
      30585  SQL*Net roundtrips to/from client
          0  sorts (memory)
          0  sorts (disk)
     458752  rows processed

scott@9i > /

458752 rows selected.

Elapsed: 00:00:39.11

Execution Plan
----------------------------------------------------------
   0      SELECT STATEMENT Optimizer=CHOOSE
   1    0   TABLE ACCESS (FULL) OF 'EMP'




Statistics
----------------------------------------------------------
          0  recursive calls
          0  db block gets
      33101  consistent gets
          0  physical reads
          0  redo size
   23120761  bytes sent via SQL*Net to client
     337068  bytes received via SQL*Net from client
      30585  SQL*Net roundtrips to/from client
          0  sorts (memory)
          0  sorts (disk)
     458752  rows processed

scott@9i > set arraysize 150
scott@9i > /

458752 rows selected.

Elapsed: 00:00:25.30

Execution Plan
----------------------------------------------------------
   0      SELECT STATEMENT Optimizer=CHOOSE
   1    0   TABLE ACCESS (FULL) OF 'EMP'




Statistics
----------------------------------------------------------
          0  recursive calls
          0  db block gets
       5748  consistent gets
          0  physical reads
          0  redo size
   18138736  bytes sent via SQL*Net to client
      34293  bytes received via SQL*Net from client
       3060  SQL*Net roundtrips to/from client
          0  sorts (memory)
          0  sorts (disk)
     458752  rows processed
Re: Performance for Full table scan ( Huge data ) [message #167026 is a reply to message #166930] Mon, 10 April 2006 22:37
rleishman
Messages: 3728
Registered: October 2005
Location: Melbourne, Australia
Senior Member
If you are executing the extraction from a remote host, you could try running it on the Oracle server machine, zipping the file, and then FTPing it to the remote host.

Also, speak to your DBA about the db_file_multiblock_read_count parameter.
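For illustration only (a sketch; the value 128 is an example, and the useful maximum depends on your block size and OS I/O limits, so check with your DBA first), the parameter can be inspected and changed at session level:

```sql
-- Check the current value (SQL*Plus)
SHOW PARAMETER db_file_multiblock_read_count

-- Raise it for the current session only; affects the cost and speed
-- of multiblock reads during full table scans
ALTER SESSION SET db_file_multiblock_read_count = 128;
```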


Ross Leishman
Re: Performance for Full table scan ( Huge data ) [message #167035 is a reply to message #166926] Mon, 10 April 2006 23:12
rawat_me
Messages: 45
Registered: September 2005
Member
Hi All,
Thanks for the different solutions. The arraysize option is quite appealing,
but we are using Pro*C as the interface to extract records from the database.
Is it possible in Pro*C to set the array size before executing
the query?
Thanks
Rawat
Re: Performance for Full table scan ( Huge data ) [message #167058 is a reply to message #167035] Tue, 11 April 2006 02:22
rleishman
Messages: 3728
Registered: October 2005
Location: Melbourne, Australia
Senior Member
With Pro*C, you fetch cursors into C host arrays, so the effective array size is simply the size of the arrays you declare and fetch into. Check the Pro*C/C++ Precompiler manual for more info.
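A minimal sketch of the idea, assuming a hypothetical table X(id, name); it needs the Oracle precompiler (proc) and a database connection to build and run, and error handling is omitted for brevity:

```c
/* Pro*C array fetch sketch: each FETCH fills up to BATCH rows
   instead of one, cutting round-trips the same way arraysize
   does in SQL*Plus. Table and column names are placeholders. */
EXEC SQL INCLUDE sqlca;

#define BATCH 500

EXEC SQL BEGIN DECLARE SECTION;
int  ids[BATCH];          /* host arrays sized to the batch */
char names[BATCH][31];
EXEC SQL END DECLARE SECTION;

void extract_all(void)
{
    EXEC SQL DECLARE c CURSOR FOR
        SELECT id, name FROM x;
    EXEC SQL OPEN c;

    for (;;) {
        EXEC SQL FETCH c INTO :ids, :names;  /* array fetch */
        /* sqlca.sqlerrd[2] holds the cumulative row count;
           process the rows fetched in this batch here */
        if (sqlca.sqlcode == 1403)           /* ORA-01403: no more data */
            break;
    }
    EXEC SQL CLOSE c;
}
```

Larger batches mean fewer SQL*Net round-trips, which is exactly where the arraysize demonstration above got its speed-up.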

Ross Leishman