Feed aggregator

opt_estimate catalogue

Jonathan Lewis - Fri, 2019-10-04 04:10

This is just a list of the notes I’ve written about the opt_estimate() hint (with a quick syntax reminder following the list).

  • opt_estimate – using the hint to affect index calculations: index_scan and index_filter
  • opt_estimate 2 – applying the hint to nested loop joins, options: nlj_index_scan and nlj_index_filter
  • opt_estimate 3 – a couple of little-known options for the hint, “group_by” and “having”.
  • opt_estimate 4 – applying the hint at the query block level: particularly useful for CTEs (“with subquery”) and non-mergeable views.
  • opt_estimate 5 – a story of failure: trying to use opt_estimate to push predicates into a union all view.
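
As that reminder, here is an illustrative sketch only – the table name (t1), index name (t1_i1) and scaling values are invented, and the full set of options and adjustments is covered in the notes above:

select
        /*+
                opt_estimate(table, t1, scale_rows=10)
                opt_estimate(index_scan, t1, t1_i1, scale_rows=0.01)
        */
        id
from    t1
where   id > 0;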

I have a couple more drafts on the topic awaiting completion, but if you know of any other articles that would be a good addition to the list feel free to reference them in the comments.

 

Urdu in AWS

Pakistan's First Oracle Blog - Fri, 2019-10-04 00:24
Urdu is arguably one of the most beautiful and poetic languages on the planet. AWS Translate now supports Urdu along with 31 other languages, which is awesome.



AWS Translate is growing by leaps and bounds and has matured quite a lot over the last few months. There are now hundreds of supported translations and it's now available in all regions.


Amazon Translate is a text translation service that uses advanced machine learning technologies to provide high-quality translation on demand. You can use Amazon Translate to translate unstructured text documents or to build applications that work in multiple languages.


Amazon Translate provides translation between a source language (the input language) and a target language (the output language). A source language-target language combination is known as a language pair.


As with other AWS products, there are no contracts or minimum commitments for using Amazon Translate.



Categories: DBA Blogs

How to do a quick health check of AWS RDS database

Pakistan's First Oracle Blog - Fri, 2019-10-04 00:16
Just because the database is on AWS RDS, it doesn't mean that it won't run slow or get stuck. So when your users complain about the slowness of your RDS database, do the following quick health check:
1- From the AWS console, in the RDS section, go to your database and then to the Logs and Events tab. From the logs, check the alert log for Oracle, the error log for SQL Server, the postgres log for PostgreSQL, and the error log for MySQL. Check for any errors or warnings and proceed accordingly for that database engine.
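
For the Oracle engine you can also read the alert log with plain SQL through the rdsadmin package that AWS installs in RDS Oracle databases. This is only a sketch – the file name below is an example, as the alert log is named after your DB (alert_<DB_NAME>.log):

-- example: pull only the ORA- lines from the alert log of a database called ORCL
-- BDUMP is the RDS directory object pointing at the trace/alert log location
select  text
from    table(rdsadmin.rds_file_util.read_text_file('BDUMP', 'alert_ORCL.log'))
where   text like '%ORA-%';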


2- If you don't see any errors or warnings, or if you want to check further anyway, then first check which database instance type you are using. For example, for one of my test Oracle databases it is db.r4.4xlarge.


Go to https://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/Concepts.DBInstanceClass.html and check specifications of this instance type.

For instance, for db.r4.4xlarge it is:

Instance Class:          db.r4.4xlarge
vCPU:                    16
ECU:                     53
Memory (GiB):            122
VPC Only:                Yes
EBS Optimized:           Yes
Max. Bandwidth (Mbps):   3,500
Network Performance:     Up to 10 Gbps


So this db.r4.4xlarge has a max bandwidth (throughput) of 437.5 MB/s  (3500 Mbps/8 = 437.5 MB/s). The throughput limit is separate for read and write, which means you’ll get 437.5 MB/s for read and 437.5 MB/s for write.


3- Now go to the Monitoring tab of this RDS instance in your console and check Read Throughput and Write Throughput to see if your instance is hitting the above threshold (437.5 MB/s in this case). If it is, then you know that IO is the issue and you may need to either tune the SQL responsible or increase the instance size.


4- Similarly, from the same Monitoring tab, check CPU usage, freeable memory and free storage space to make sure no other threshold is being reached.
5- Also check the Disk Queue Depth, which is the number of IO requests waiting to be serviced. Time spent waiting in the queue is a component of latency and service time. Ideally a disk queue depth of 15 or less is fine, but if you notice latency greater than 10 milliseconds accompanied by a high disk queue depth, that could be the cause of your performance issues.


6- Last but not least, reviewing and tuning your SQL is the biggest optimization gain you can achieve, whether your database is on RDS or not.


Hope it helps.
Categories: DBA Blogs

How to rename an Oracle scheduler job

Flavio Casetta - Thu, 2019-10-03 14:07
Categories: DBA Blogs

How Can you Use Audience Targeting to Show your Ads?

VitalSoftTech - Thu, 2019-10-03 10:10

Audience targeting has been a crucial tool in an advertiser’s arsenal for a long time. You can use audience targeting to show your ads to the exact people you’re targeting. What’s more, you can show it to them at the exact time you want. As advertisers, we believe that this is one of the core […]

The post How Can you Use Audience Targeting to Show your Ads? appeared first on VitalSoftTech.

Categories: DBA Blogs

Online Wine Retailer Fine-Tunes Inventory and Speeds Delivery with Oracle Cloud

Oracle Press Releases - Thu, 2019-10-03 09:00
Blog
Online Wine Retailer Fine-Tunes Inventory and Speeds Delivery with Oracle Cloud

By Guest Author, Oracle—Oct 3, 2019

Enjoying wine may be an age-old pleasure, but making sure online buyers get the right wine delivered promptly is a very modern concern.

Vinomofo was founded in 2011 in an Adelaide garage with a focus on making great wines available to wine lovers and helping great wine makers grow. The company now serves half a million wine buyers in Australia, New Zealand, and Singapore. Vinomofo initially focused on marketing, customer interaction and sales while relying on a third party to handle warehousing, logistics, stock levels and distribution.

While the service from the third-party supplier was great, it was not a scalable solution. Vinomofo decided it had to take back control of the back-office tasks, but with the help of a technology partner.

In light of this strategic change, it turned to Oracle Warehouse Management Cloud (WMS), giving Vinomofo a better handle on key data such as inventory levels and reporting. The Oracle cloud-based system lets the company check key performance indicators (KPIs) to make sure employees and the system itself meet expectations.

These automated processes enable Vinomofo to focus on the wine itself and the company’s 550,000 strong member base, rather than worrying about the logistics and back of house operations. Since using WMS, Vinomofo has been able to deliver wine three times faster—we’ll raise a glass to that!

It also allowed Vinomofo to start a new “click-and-collect” service in its Melbourne distribution center within three weeks of using the software. The system will also enable customers to mix their own cases, something Vinomofo is excited to launch within the next 12 months.

Use of the Oracle technology has both improved the accuracy of inventory stock checks and made it easier to offer same-day shipping.

According to Krista Diez-Simson, CFO and COO of Vinomofo, “Oracle cloud has enabled us to focus on quality and curation whilst warehousing and distribution now take care of themselves.”

Watch the Vinomofo Video

In this video, Krista Diez-Simson, CFO and COO of Vinomofo, shares how Oracle Cloud is helping the company focus on what it does best—delivering wine.


 

Read More Oracle Cloud Customer Stories

Vinomofo is one of the thousands of customers from around the world on its journey to cloud. Read about others in Stories from Oracle Cloud: Business Successes

Trace Files

Jonathan Lewis - Thu, 2019-10-03 07:38

A recent blog note by Martin Berger about reading trace files in 12.2 popped up in my twitter timeline yesterday and reminded me of a script I wrote a while ago to create a simple view I could query to read the tracefile generated by the current session while the session was still connected. You either have to create the view and a public synonym through the SYS schema, or you have to use the SYS schema to grant select privileges on several dynamic performance views to the user so that the user can create the view in their own schema. For my scratch database I tend to create the view in the SYS schema.

Script to be run by SYS:

rem
rem     Script: read_trace_122.sql
rem     Author: Jonathan Lewis
rem     Dated:  Sept 2018
rem
rem     Last tested
rem             12.2.0.1

create or replace view my_trace_file as
select 
        *
from 
        v$diag_trace_file_contents
where
        (adr_home, trace_filename) = (
                select
                --      substr(tracefile, 1, instr(tracefile,'/',-1)-1),
                        substr(
                                substr(tracefile, 1, instr(tracefile,'/',-1)-1),
                                1,
                                instr(
                                        substr(tracefile, 1, instr(tracefile,'/',-1)),
                                        'trace'
                                ) - 2
                        ),
                        substr(tracefile, instr(tracefile,'/',-1)+1) trace_filename
                from 
                        v$process
                where   addr = (
                                select  paddr
                                from    v$session
                                where   sid = (
                                        sys_context('userenv','sid')
                                        -- select sid from v$mystat where rownum = 1
                                        -- select dbms_support.mysid from dual
                                )
                        )
        )
;


create public synonym my_trace_file for sys.my_trace_file;
grant select on my_trace_file to {some role};

Alternatively, the privileges you could grant to a user from SYS so that they could create their own view:


grant select on v_$process to some_user;
grant select on v_$session to some_user;
grant select on v_$diag_trace_file_contents to some_user;
and optionally one of:
        grant select on v_$mystat to some_user;
        grant execute on dbms_support to some_user;
                but dbms_support is no longer installed by default.

The references to package dbms_support and view v$mystat are historic ones I have lurking in various scripts from the days when the session id (SID) wasn’t available in any simpler way.

Once the view exists and is available, you can enable some sort of tracing from your session then query the view to read back the trace file. For example, here’s a simple “self-reporting” (it’s going to report the trace file that it causes) script that I’ve run from 12.2.0.1 as a demo:


alter system flush shared_pool;
alter session set sql_trace = true;

set linesize 180
set trimspool on
set pagesize 60

column line_number      format  999,999
column piece            format  a150    
column plan             noprint
column cursor#          noprint

break on plan skip 1 on cursor# skip 1

select
        line_number,
        line_number - row_number() over (order by line_number) plan,
        substr(payload,1,instr(payload,' id=')) cursor#,
        substr(payload, 1,150) piece
from
        my_trace_file
where
        file_name = 'xpl.c'
order by
        line_number
/

alter session set sql_trace = false;

The script flushes the shared pool to make sure that it’s going to trigger some recursive SQL then enables a simple SQL trace. The query then picks out all the lines in the trace file generated by code in the Oracle source file xpl.c (execution plans seems like a likely guess) which happens to pick out all the STAT lines in the trace (i.e. the ones showing the execution plans).

I’ve used the “tabibitosan” method to identify all the lines that belong to a single execution plan by assuming that they will be consecutive lines in the output starting from a line which includes the text ” id=1 “ (the surrounding spaces are important), but I’ve also extracted the bit of the line which includes the cursor number (STAT #nnnnnnnnnnnnnnn) because two plans may be dumped one after the other if multiple cursors close at the same time. There is still a little flaw in the script because sometimes Oracle will run a sys-recursive statement in the middle of dumping a plan to turn an object_id into an object_name, and this will cause a break in the output.

The result of the query is to extract all the execution plans in the trace file and print them in the order they appear – here’s a sample of the output:


LINE_NUMBER PIECE
----------- ------------------------------------------------------------------------------------------------------------------------------------------------------
         38 STAT #140392790549064 id=1 cnt=0 pid=0 pos=1 obj=18 op='TABLE ACCESS BY INDEX ROWID BATCHED OBJ$ (cr=3 pr=0 pw=0 str=1 time=53 us cost=4 size=113 card
         39 STAT #140392790549064 id=2 cnt=0 pid=1 pos=1 obj=37 op='INDEX RANGE SCAN I_OBJ2 (cr=3 pr=0 pw=0 str=1 time=47 us cost=3 size=0 card=1)'


         53 STAT #140392790535800 id=1 cnt=1 pid=0 pos=1 obj=0 op='MERGE JOIN OUTER (cr=5 pr=0 pw=0 str=1 time=95 us cost=2 size=178 card=1)'
         54 STAT #140392790535800 id=2 cnt=1 pid=1 pos=1 obj=4 op='TABLE ACCESS CLUSTER TAB$ (cr=3 pr=0 pw=0 str=1 time=57 us cost=2 size=138 card=1)'
         55 STAT #140392790535800 id=3 cnt=1 pid=2 pos=1 obj=3 op='INDEX UNIQUE SCAN I_OBJ# (cr=2 pr=0 pw=0 str=1 time=11 us cost=1 size=0 card=1)'
         56 STAT #140392790535800 id=4 cnt=0 pid=1 pos=2 obj=0 op='BUFFER SORT (cr=2 pr=0 pw=0 str=1 time=29 us cost=0 size=40 card=1)'
         57 STAT #140392790535800 id=5 cnt=0 pid=4 pos=1 obj=73 op='TABLE ACCESS BY INDEX ROWID TAB_STATS$ (cr=2 pr=0 pw=0 str=1 time=10 us cost=0 size=40 card=1)
         58 STAT #140392790535800 id=6 cnt=0 pid=5 pos=1 obj=74 op='INDEX UNIQUE SCAN I_TAB_STATS$_OBJ# (cr=2 pr=0 pw=0 str=1 time=8 us cost=0 size=0 card=1)'


         84 STAT #140392791412824 id=1 cnt=1 pid=0 pos=1 obj=20 op='TABLE ACCESS BY INDEX ROWID BATCHED ICOL$ (cr=4 pr=0 pw=0 str=1 time=25 us cost=2 size=54 card
         85 STAT #140392791412824 id=2 cnt=1 pid=1 pos=1 obj=42 op='INDEX RANGE SCAN I_ICOL1 (cr=3 pr=0 pw=0 str=1 time=23 us cost=1 size=0 card=2)'


         94 STAT #140392790504512 id=1 cnt=2 pid=0 pos=1 obj=0 op='SORT ORDER BY (cr=7 pr=0 pw=0 str=1 time=432 us cost=6 size=374 card=2)'
         95 STAT #140392790504512 id=2 cnt=2 pid=1 pos=1 obj=0 op='HASH JOIN OUTER (cr=7 pr=0 pw=0 str=1 time=375 us cost=5 size=374 card=2)'
         96 STAT #140392790504512 id=3 cnt=2 pid=2 pos=1 obj=0 op='NESTED LOOPS OUTER (cr=4 pr=0 pw=0 str=1 time=115 us cost=2 size=288 card=2)'
         97 STAT #140392790504512 id=4 cnt=2 pid=3 pos=1 obj=19 op='TABLE ACCESS CLUSTER IND$ (cr=3 pr=0 pw=0 str=1 time=100 us cost=2 size=184 card=2)'
         98 STAT #140392790504512 id=5 cnt=1 pid=4 pos=1 obj=3 op='INDEX UNIQUE SCAN I_OBJ# (cr=2 pr=0 pw=0 str=1 time=85 us cost=1 size=0 card=1)'
         99 STAT #140392790504512 id=6 cnt=0 pid=3 pos=2 obj=75 op='TABLE ACCESS BY INDEX ROWID IND_STATS$ (cr=1 pr=0 pw=0 str=2 time=8 us cost=0 size=52 card=1)'
        100 STAT #140392790504512 id=7 cnt=0 pid=6 pos=1 obj=76 op='INDEX UNIQUE SCAN I_IND_STATS$_OBJ# (cr=1 pr=0 pw=0 str=2 time=7 us cost=0 size=0 card=1)'
        101 STAT #140392790504512 id=8 cnt=0 pid=2 pos=2 obj=0 op='VIEW  (cr=3 pr=0 pw=0 str=1 time=47 us cost=3 size=43 card=1)'
        102 STAT #140392790504512 id=9 cnt=0 pid=8 pos=1 obj=0 op='SORT GROUP BY (cr=3 pr=0 pw=0 str=1 time=44 us cost=3 size=15 card=1)'
        103 STAT #140392790504512 id=10 cnt=0 pid=9 pos=1 obj=31 op='TABLE ACCESS CLUSTER CDEF$ (cr=3 pr=0 pw=0 str=1 time=21 us cost=2 size=15 card=1)'
        104 STAT #140392790504512 id=11 cnt=1 pid=10 pos=1 obj=30 op='INDEX UNIQUE SCAN I_COBJ# (cr=2 pr=0 pw=0 str=1 time=11 us cost=1 size=0 card=1)'


        116 STAT #140392791480168 id=1 cnt=4 pid=0 pos=1 obj=0 op='SORT ORDER BY (cr=3 pr=0 pw=0 str=1 time=62 us cost=3 size=858 card=13)'
        117 STAT #140392791480168 id=2 cnt=4 pid=1 pos=1 obj=21 op='TABLE ACCESS CLUSTER COL$ (cr=3 pr=0 pw=0 str=1 time=24 us cost=2 size=858 card=13)'
        118 STAT #140392791480168 id=3 cnt=1 pid=2 pos=1 obj=3 op='INDEX UNIQUE SCAN I_OBJ# (cr=2 pr=0 pw=0 str=1 time=11 us cost=1 size=0 card=1)'


        126 STAT #140392789565328 id=1 cnt=1 pid=0 pos=1 obj=14 op='TABLE ACCESS CLUSTER SEG$ (cr=3 pr=0 pw=0 str=1 time=21 us cost=2 size=68 card=1)'
        127 STAT #140392789565328 id=2 cnt=1 pid=1 pos=1 obj=9 op='INDEX UNIQUE SCAN I_FILE#_BLOCK# (cr=2 pr=0 pw=0 str=1 time=12 us cost=1 size=0 card=1)'


        135 STAT #140392789722208 id=1 cnt=1 pid=0 pos=1 obj=18 op='TABLE ACCESS BY INDEX ROWID BATCHED OBJ$ (cr=3 pr=0 pw=0 str=1 time=22 us cost=3 size=51 card=
        136 STAT #140392789722208 id=2 cnt=1 pid=1 pos=1 obj=36 op='INDEX RANGE SCAN I_OBJ1 (cr=2 pr=0 pw=0 str=1 time=16 us cost=2 size=0 card=1)'


        153 STAT #140392792055264 id=1 cnt=1 pid=0 pos=1 obj=68 op='TABLE ACCESS BY INDEX ROWID HIST_HEAD$ (cr=3 pr=0 pw=0 str=1 time=25 us)'
        154 STAT #140392792055264 id=2 cnt=1 pid=1 pos=1 obj=70 op='INDEX RANGE SCAN I_HH_OBJ#_INTCOL# (cr=2 pr=0 pw=0 str=1 time=19 us)'

If you want to investigate further, the “interesting” columns in the underlying view are probably: section_name, component_name, operation_name, file_name, and function_name. The possible names of functions, files, etc. vary with the trace event you’ve enabled.
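
For example, a quick (and purely illustrative) way of seeing which values appear in the current trace file before deciding what to filter on:

select  distinct
        section_name, component_name, operation_name, file_name, function_name
from
        my_trace_file
order by
        section_name, component_name, operation_name, file_name, function_name
;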

 

Accenture and Oracle Help Insurers Meet IFRS 17 and LDTI Accounting Standards

Oracle Press Releases - Wed, 2019-10-02 15:00
Press Release
Accenture and Oracle Help Insurers Meet IFRS 17 and LDTI Accounting Standards
New solution enables insurers to more effectively digitize their financial information and improve financial reporting of their insurance contracts

NEW YORK—Oct 2, 2019

Accenture (NYSE: ACN) and Oracle today unveiled an integrated technology and services offering that enables insurers to digitize their financial information and meet accounting reporting standards for the International Accounting Standard Board’s (IASB) IFRS 17 and US GAAP Long Duration Targeted Improvements (LDTI).
 
Combining Oracle’s IFRS 17 and Finance solutions with Accenture’s deep expertise and capabilities, the offering includes preconfigured templates, rules and reports to help insurers address the complexity associated with these accounting standards and ensure compliance.
 
Effective Jan. 1, 2022, IFRS 17 provides new standards for how insurers recognize revenue tied to their insurance contracts. Similarly, LDTI changes the assumptions insurers use to measure the liability of future policy benefits for traditional insurance contracts.
 
Many industry observers consider these new accounting standards to be the most challenging financial reporting changes for insurers in decades, requiring innovative approaches using new technology. Together, Accenture and Oracle have created a solution that drives benefits beyond the finance function to the entire enterprise. Recent Accenture research revealed that CFOs who fully harness the power of data and new technologies can drive value, improve efficiency and empower CEOs with strategic insights.
 
“The changes required from IFRS 17 provide an opportunity for insurers to capitalize on their rich repositories of data and derive more value from advanced analytics,” said Steve Culp, a senior managing director at Accenture and global head of the company’s Finance & Risk practice. “This joint offering provides the first steps on the journey to a full digitization of finance, enabling insurance CFOs to drive the enterprise strategy beyond the borders of the finance function.”
 
Sonny Singh, general manager and senior vice president, Oracle Financial Services, added, “IFRS 17/LDTI Standards are a historic change to insurance revenue recognition requirements. It presents a unique opportunity for insurers to align and transform disparate business processes and gain greater operational efficiency and insight. Oracle’s modern integrated solution, combined with Accenture’s comprehensive services, will allow adoption of IFRS 17 standards and establish the baseline for a wider insurance modernization initiative.”

The new solution is built into Accenture myConcerto, a fully integrated digital platform that brings together Accenture’s most disruptive thinking, leading industry solutions and Oracle technologies to drive enterprise transformation. It is one of the latest offerings developed through the longstanding Accenture and Oracle alliance, which has spanned more than 25 years. Accenture has been one of Oracle’s leading systems integration partners globally, with more than 54,000 Oracle-skilled consultants around the world who help clients accelerate digital transformation by implementing Oracle-based business solutions and new business processes that develop and evolve as their digital business grows. Accenture is a Global Cloud Elite and Platinum level member of Oracle PartnerNetwork (OPN) and is certified as an Oracle Cloud Excellence Implementer.
 
More information about the offering can be found here.

Contact Info
Judi Palmer
Oracle
+1 650 784 7901
judi.palmer@oracle.com
Michael McGinn
Accenture
+1 917 452 9458
m.mcginn@accenture.com
About Accenture

Accenture is a leading global professional services company, providing a broad range of services and solutions in strategy, consulting, digital, technology and operations. Combining unmatched experience and specialized skills across more than 40 industries and all business functions—underpinned by the world’s largest delivery network—Accenture works at the intersection of business and technology to help clients improve their performance and create sustainable value for their stakeholders. With 492,000 people serving clients in more than 120 countries, Accenture drives innovation to improve the way the world works and lives. Visit us at www.accenture.com.

About Oracle Financial Services

Oracle Financial Services provides solutions for retail banking, corporate banking, payments, asset management, life insurance, annuities and healthcare payers. With our comprehensive set of integrated digital and data platforms, banks and insurers are empowered to deliver next generation financial services. Our intelligent and open digital solutions enable customer-centric transformation, support collaborative innovation and drive efficiency. Our data and analytical platforms help financial institutions drive customer insight, integrated risk and finance, fight financial crime and comply with regulations. To learn more about Oracle Financial Services visit our website. Click here to learn more about Oracle IFRS 17 solutions.

Oracle PartnerNetwork

Oracle PartnerNetwork (OPN) is Oracle’s partner program that provides partners with a differentiated advantage to develop, sell and implement Oracle solutions. OPN offers resources to train and support specialized knowledge of Oracle’s products and solutions and has evolved to recognize Oracle’s growing product portfolio, partner base and business opportunity. Key to the latest enhancements to OPN is the ability for partners to be recognized and rewarded for their investment in Oracle Cloud. Partners engaging with Oracle will be able to differentiate their Oracle Cloud expertise and success with customers through the OPN Cloud program—an innovative program that complements existing OPN program levels with tiers of recognition and progressive benefits for partners working with Oracle Cloud. To find out more visit: http://www.oracle.com/partners.

Talk to a Press Contact

Judi Palmer

  • +1 650 784 7901

Michael McGinn

  • +1 917 452 9458

The future of work

RDBMS Insight - Wed, 2019-10-02 10:48

The people behind the American tech industry blog TechDirt have put together an anthology of stories about the future of work called (appropriately) Working Futures, and I’m excited to have a story in it.

My story, “The Auditor and the Exorcist,” was heavily influenced by the years I spent working remotely for Oracle. It tells a near-future story of working in a world with a social credit system that depends on weak AI. Of course, poor security, hacking, and bugs are all present in the future, too…

Quick summary: Pat is stuck in a soul-deadening job as a social credit auditor. Her thoroughly modern home suddenly shows every sign of being haunted. Pat doesn’t believe in ghosts, but the only thing that seems to restore her home to normalcy is the help of an online exorcist. Is Pat’s house really haunted… or is something more sinister going on?

The book is now available in both ebook and paperback format at Amazon, and if you have a Kindle Unlimited subscription, you can read it for free. Plus, you can repost any of the stories in it (non-commercially): the stories in the anthology are all released under CC license. My story is released as CC-BY-NC-ND, meaning anyone’s free to recopy it non-commercially, but not to re-release it commercially or to remix without permission.

Image credit: Remixed from a public domain image found on publicdomainimages.net. Feel free to reuse/remix further.

Categories: DBA Blogs

_cursor_obsolete_threshold

Jonathan Lewis - Wed, 2019-10-02 08:39

At the recent Trivadis Performance Days in Zurich, Chris Antognini answered a question that had been bugging me for some time. Why would Oracle want to set the default value of _cursor_obsolete_threshold to a value like 8192 in 12.2 ?

In 11.2.0.3 the parameter was introduced with the default value 100; then in 11.2.0.4, continuing into 12.1, the default value increased to 1,024 – what possible reason could anyone have for thinking that 8192 was a good idea ?

The answer is PDBs – specifically the much larger number of PDBs a single CDB can (theoretically) support in 12.2.

In fact a few comments, and the following specific explanation, are available on MoS in Doc ID 2431353.1 “High Version Counts For SQL Statements (>1024) Post Upgrade To 12.2 and Above Causing Database Slow Performance”:

The default value of _cursor_obsolete_threshold is increased heavily (8192 from 1024) from 12.2 onwards in order to support 4096 PDBs which was only 252 PDBs till 12.1. This parameter value is the maximum limit for obsoleting the parent cursors in an multitenant environment and cannot be increased beyond 8192.

Having said, this is NOT applicable for non-CDB environment and hence for those databases, this parameter should be set to 12.1 default value manually i.e. 1024. The default value of 1024 holds good for non-CDB environment and the same parameter can be adjusted case-to-case basis should there be a problem.

It’s all about PDBs – more precisely, it’s all about CDBs running a huge number of PDBs, which is not necessarily the way that many companies are likely to use PDBs. So if you’re a fairly typical company running a handful of PDBs in a single CDB then it’s probably a good idea to set the parameter down to the 12.1 value of 1024 (and for bad applications I’d consider going even lower) – and this MOS note actually makes that an official recommendation.
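
For example – and this is only a sketch, since changing an underscore parameter on a production system is something to agree with Oracle Support first – the value could be put back to the 12.1 default with something like:

alter system set "_cursor_obsolete_threshold" = 1024 scope = spfile;
-- not a dynamically modifiable parameter, so an instance restart is needed for the new value to take effect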

Impact analysis

What’s the worst that could happen if you actually have many PDBs all executing the same application and that application has a few very popular and frequently executed statements? Chris Antognini described a model he’d constructed and some tests he’d done to show the effects. The following code is a variation on his work. It addresses the following question:

If you have an application that repeatedly issues (explicitly or implicitly) parse calls but doesn’t take advantage of the session cursor cache, it has to search the library cache by hash_value / sql_id for the parent cursor, then walk the chain of child cursors looking for the right child. What’s the difference in the work done if this “soft parse” has to walk the list to child number 8,191 instead of finding the right cursor at child number 0?

Here’s the complete code for the test:


create table t1 as
select 1 id from dual
/

alter table t1 add constraint t1_pk primary key (id)
/

spool cursor_obsolete.lst

alter system flush shared_pool;
alter system flush shared_pool;

set serveroutput off
select /*+ index(t1) */ id from t1 where id > 0;
select * from table(dbms_xplan.display_cursor);

execute snap_my_stats.start_snap
execute snap_my_stats.start_snap

declare
        m_id number;
begin
        for i in 100+1..100+8192 loop
                execute immediate 'alter session set optimizer_index_cost_adj = ' || i ;
                select /*+ index(t1) */ id into m_id from t1 where id > 0;
        end loop;
end;
/

set serveroutput on
execute snap_my_stats.end_snap

column sql_text format a60
select sql_id, child_number, loaded_versions, executions, sql_text from v$sql where sql_text like 'SELECT%T1%' order by child_number;

prompt  ===============
prompt  Low child reuse
prompt  ===============

set serveroutput off
execute snap_my_stats.start_snap

declare
        m_id number;
begin
        for i in 100+1..100+1024 loop
                execute immediate 'alter session set optimizer_index_cost_adj = ' || i ;
                select /*+ index(t1) */ id into m_id from t1 where id > 0;
        end loop;
end;
/

set serveroutput on
execute snap_my_stats.end_snap

prompt  ================
prompt  High child reuse
prompt  ================

set serveroutput off
execute snap_my_stats.start_snap

declare
        m_id number;
begin
        for i in 7168+1..7168+1024 loop
                execute immediate 'alter session set optimizer_index_cost_adj = ' || i ;
                select /*+ index(t1) */ id into m_id from t1 where id > 0;
        end loop;
end;
/

set serveroutput on
execute snap_my_stats.end_snap

spool off

I’ve created a table with just one row and given it a primary key. My testing query is going to be very short and simple: a query hinted to return that one row by a primary key index range scan.

I’ve flushed the shared pool (twice) to minimise fringe contention from pre-existing information, then executed the statement to populate the dictionary cache and some library cache information and to check the execution plan.

The call to the package snap_my_stats is my standard method for reporting changes in v$mystat across the test. I’ve called the start_snap procedure twice in a row to make sure that its first load doesn’t add some noise to the statistics that we’re trying to capture.

The test runs in three parts.

  • First I loop 8192 times executing the same statement, but with a different value for the optimizer_index_cost_adj for each execution – this gives me the limit of 8192 child cursors, each reporting “Optimizer Mismatch” as the reason for not sharing. I’ve run a query against v$sql after this to check that I have 8192 child cursors – you’ll need to make sure your shared pool is a few hundred megabytes if you want to be sure of keeping them all in memory.
  • The second part of the test simply repeats the loop, but only for the first 1,024 child cursors. At this point the child cursors exist, so the optimizer should be doing “soft” parses rather than hard parses.
  • The final part of the test repeats the loop again, but only for the last 1,024 child cursors. Again they should exist and be usable, so the optimizer should again be doing “soft” parses rather than hard parses.

What I’m looking for is the extra work it takes for Oracle to find the right child cursor when there’s a very long chain of child cursors. From my memory of dumping the library cache in older versions of Oracle, the parent will point to a “segmented array” of pointers to child cursors, and each segment of the array will consist of 16 pointers, plus a pointer to the next segment. So if you have to find child cursor 8191 you will have to follow 512 segment pointers and check up to 16 child pointers per segment (roughly 8,700 pointers in total) before you find the child you want – and you’re probably holding a mutex (or latch) while doing so.

One peripheral question to ask, of course, is whether Oracle keeps appending to the segmented array, or whether it uses a “pushdown” approach when allocating a new segment so that newer child cursors are near the start of the array (i.e. will searching for child cursor 0 be the cheapest search or the most expensive one?).

And the results, limited to just the second and third parts, with just a couple of small edits are as follows:


host sdiff -w 120 -s temp1.txt temp2.txt >temp.txt

===============                                            |    ================
Low child reuse                                            |    High child reuse
===============                                            |    ================

Interval:-  0 seconds                                      |    Interval:-  6 seconds

opened cursors cumulative                      2,084       |    opened cursors cumulative                      2,054
recursive calls                                6,263       |    recursive calls                                6,151
recursive cpu usage                               33       |    recursive cpu usage                              570
session logical reads                          1,069       |    session logical reads                          1,027
CPU used when call started                        33       |    CPU used when call started                       579
CPU used by this session                          37       |    CPU used by this session                         579
DB time                                           34       |    DB time                                          580
non-idle wait count                               16       |    non-idle wait count                                5
process last non-idle time                         1       |    process last non-idle time                         6
session pga memory                           524,288       |    session pga memory                            65,536
enqueue requests                                  10       |    enqueue requests                                   3
enqueue releases                                  10       |    enqueue releases                                   3
consistent gets                                1,069       |    consistent gets                                1,027
consistent gets from cache                     1,069       |    consistent gets from cache                     1,027
consistent gets pin                            1,039       |    consistent gets pin                            1,024
consistent gets pin (fastpath)                 1,039       |    consistent gets pin (fastpath)                 1,024
consistent gets examination                       30       |    consistent gets examination                        3
consistent gets examination (fastpath)            30       |    consistent gets examination (fastpath)             3
logical read bytes from cache              8,757,248       |    logical read bytes from cache              8,413,184
calls to kcmgcs                                    5       |    calls to kcmgcs                                    3
calls to get snapshot scn: kcmgss              1,056       |    calls to get snapshot scn: kcmgss              1,026
table fetch by rowid                              13       |    table fetch by rowid                               1
rows fetched via callback                          6       |    rows fetched via callback                          1
index fetch by key                                 9       |    index fetch by key                                 1
index scans kdiixs1                            1,032       |    index scans kdiixs1                            1,024
session cursor cache hits                         14       |    session cursor cache hits                          0
cursor authentications                         1,030       |    cursor authentications                         1,025
buffer is not pinned count                     1,066       |    buffer is not pinned count                     1,026
parse time cpu                                    23       |    parse time cpu                                   558
parse time elapsed                                29       |    parse time elapsed                               556
parse count (total)                            2,076       |    parse count (total)                            2,052
parse count (hard)                                11       |    parse count (hard)                                 3
execute count                                  1,050       |    execute count                                  1,028
bytes received via SQL*Net from client         1,484       |    bytes received via SQL*Net from client         1,486

Two important points to note:

  • the CPU utilisation goes up from 0.33 seconds to 5.7 seconds.
  • the number of hard parses is essentially zero – this is all about the cost of searching for the right child cursor in a long chain, not about re-optimising statements.

You might wonder about the 2,048-ish parse count (total) – don’t forget that we do an “execute immediate” to change the optimizer_index_cost_adj on each pass through the loop. That’s probably why the parse count doubles, although the “alter session” doesn’t then report as an “execute count”.

The third call to a statement is often an important one – it’s often the first one that doesn’t need “cursor authentication”, so I ran a similar test executing the last two loops a second time – there was no significant change in the CPU or parse activity between the 2nd and 3rd executions of each cursor. For completeness I also ran a test where the loop for the last 1,024 child cursors ran before the loop for the first 1,024 child cursors. Again this made no significant difference to the results – the low-numbered child cursors take less CPU to find than the high-numbered child cursors.

Bottom line

The longer the chain of child cursors the more time (elapsed and CPU) you spend searching for the correct child; and when a parent is allowed 8,192 child cursors the extra time can become significant. I would claim that the ca. 5 seconds difference in CPU time appearing in this test corresponds purely to an extra 5 milliseconds per execution spent walking an extra (roughly) 7,000 steps down the chain.

If you have a well-behaved application that uses the session cursor cache effectively, or uses “held cursors”, then you may not be worried by very long chains of child cursors. But I have seen many applications where cursor caching is not used and every statement execution from the client turns into a parse call (usually implicit) followed by a hunt through the library cache and walk along the child chain. These applications will not scale well if they are cloned to multiple PDBs sharing the same CDB.

Footnote 1

The odd thing about this “cursor obsolete” feature is that I have a distinct memory that when PDBs were introduced at an ACE Director’s meeting a few years ago the first thought that crossed my mind was about the potential for someone running multiple copies of the same application as separate PDBs seeing a lot of library cache latch contention or cursor mutex contention because any popular statement would now be hitting the same parent cursor from multiple PDBs. I think the casual (i.e. neither formal, nor official) response I got when I raised the point was that the calculation of the sql_id in future releases would take the con_id into consideration. It seems that that idea fell by the wayside.

Footnote 2

If you do see a large number of child cursors for a single parent then you will probably end up looking at v$sql_shared_cursor for the sql_id to see if that gives you some good ideas about why a particular statement has generated so many child cursors. For a list of explanations of the different reasons captured in this view, MOS Doc ID 296377.1 “Troubleshooting: High Version Count Issues” is a useful reference.
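
As a starting point, something as simple as the following (a sketch – supply the sql_id you are investigating through the substitution variable) returns one row per child cursor, with a Y in each column recording a reason why that child could not be shared:

select  *
from    v$sql_shared_cursor
where   sql_id = '&m_sql_id'
order by
        child_number
;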

Oracle Buys CrowdTwist

Oracle Press Releases - Wed, 2019-10-02 05:48
Press Release
Oracle Buys CrowdTwist
Extends Oracle CX with Leading Cloud-Native Omni-Channel Loyalty Solution

Redwood Shores, Calif.—Oct 2, 2019

Oracle today announced it signed an agreement to acquire CrowdTwist, the leading cloud-native customer loyalty solution to empower brands to offer personalized customer experiences. The solution offers over 100 out-of-the-box engagement paths, providing rapid time-to-value for marketers to develop a more complete view of the customer. Upon the close of the acquisition, the CrowdTwist team will join the Oracle Customer Experience (CX) Cloud organization.

In today’s omni-channel environment, customer retention is critical to every business and loyalty is the unifying component for a brand across all marketing initiatives. Yet, many loyalty products offer static, single-channel, spend-and-get programs loaded with services and customization, creating difficult to adapt programs which are costly to implement and maintain. CrowdTwist’s cloud loyalty solution offers adaptable programs tailored to the needs of the customer to offer discounts, points, check-ins or rewards.

Together, Oracle and CrowdTwist will enable organizations of all sizes to deliver personalized engagement and extend loyalty and reward programs to a brand’s most beneficial customer behaviors. The combination offers several benefits to customers:

  • Tightly couple loyalty programs with the orchestration of B2C brands and customers through existing integration with Oracle Responsys.
  • Heighten customer intelligence and the value of loyalty program data through future integration with Oracle CX Unity.
  • Utilize loyalty and retention data to optimize B2B marketing campaigns via expanded integrations with Oracle Eloqua.
  • Deliver comprehensive, end-to-end loyalty view at transaction and store levels by extending current integrations with Oracle’s leading industry-specific retail, hospitality and food & beverage applications.

“Oracle is taking a unique approach to the customer data platform space, enabling the application of intelligence across every customer touchpoint,” said Rob Tarkoff, Executive Vice President, Oracle. “CrowdTwist’s leading loyalty platform will significantly augment Oracle CX’s ability to help our customers build more meaningful relationships with their customers.”

“The combination with Oracle validates CrowdTwist’s world-class technology, employees and customers,” said Scott Matthews, CEO of CrowdTwist. “We could not have found a more compelling partner to advance our vision of customer loyalty at scale.”

More information about this announcement is available at www.oracle.com/crowdtwist.

Contact Info
Deborah Hellinger
Oracle Corporate Communications
+1.212.508.7935
deborah.hellinger@oracle.com
Ken Bond
Oracle Investor Relations
+1.650.607.0349
ken.bond@oracle.com
About Oracle

The Oracle Cloud offers a complete suite of integrated applications for Sales, Service, Marketing, Human Resources, Finance, Supply Chain and Manufacturing, plus Highly Automated and Secure Generation 2 Infrastructure featuring the Oracle Autonomous Database. For more information about Oracle (NYSE: ORCL), please visit us at www.oracle.com.

Trademarks

Oracle and Java are registered trademarks of Oracle and/or its affiliates. Other names may be trademarks of their respective owners.

Oracle is currently reviewing the existing CrowdTwist product roadmap and will be providing guidance to customers in accordance with Oracle’s standard product communication policies. Any resulting features and timing of release of such features as determined by Oracle’s review of CrowdTwist’s product roadmap are at the sole discretion of Oracle. All product roadmap information, whether communicated by CrowdTwist or by Oracle, does not represent a commitment to deliver any material, code, or functionality, and should not be relied upon in making purchasing decisions. It is intended for information purposes only, and may not be incorporated into any contract.

Cautionary Statement Regarding Forward-Looking Statements

This document contains certain forward-looking statements about Oracle and CrowdTwist, including statements that involve risks and uncertainties concerning Oracle’s proposed acquisition of CrowdTwist, anticipated customer benefits and general business outlook. When used in this document, the words “anticipates”, “can”, “will”, “look forward to”, “expected” and similar expressions and any other statements that are not historical facts are intended to identify those assertions as forward-looking statements. Any such statement may be influenced by a variety of factors, many of which are beyond the control of Oracle or CrowdTwist, that could cause actual outcomes and results to be materially different from those projected, described, expressed or implied in this document due to a number of risks and uncertainties. Potential risks and uncertainties include, among others, the possibility that the transaction will not close or that the closing may be delayed, the anticipated synergies of the combined companies may not be achieved after closing, the combined operations may not be successfully integrated in a timely manner, if at all, general economic conditions in regions in which either company does business may deteriorate and/or Oracle or CrowdTwist may be adversely affected by other economic, business, and/or competitive factors. Accordingly, no assurances can be given that any of the events anticipated by the forward-looking statements will transpire or occur, or if any of them do so, what impact they will have on the results of operations or financial condition of Oracle or CrowdTwist. You are cautioned to not place undue reliance on forward-looking statements, which speak only as of the date of this document. Neither Oracle nor CrowdTwist is under any duty to update any of the information in this document.

Talk to a Press Contact

Deborah Hellinger

  • +1.212.508.7935

Ken Bond

  • +1.650.607.0349

MOS Auto Responses : What’s my problem with them?

Tim Hall - Wed, 2019-10-02 02:54

A couple of days ago I took to Twitter with a rather “incendiary” tweet caused by my frustration with MOS. It’s not about the specific SR or issue itself. It’s more a frustration with MOS generally and the way they handle some requests, specifically the automatic responses. I’ll explain.

The Moaning
  • I had an issue.
  • I Googled and didn’t find too much in the way of help.
  • I opened a SR about the issue, including an image to help explain my issue.
  • During that process it suggested some other stuff I might want to look at, one of which was quite interesting, but none of which were actually relevant. No problems I thought. At least I’ve learned something…
  • Next thing I get some emails about updates to my call. I logged in to find these 4 responses.
[Screenshots: Response 1, Response 2, Response 3, Response 4]
  • I was really angry about the auto-responses, and unloaded on Twitter using some rather “choice language”…

I totally understand a request for more information. The response of, “Please upload the RDA/TFA/AHF file”, is common and understandable on many occasions. It does annoy me more than a little when you are asking a general question that is not specific to your software version, but you still have to upload it. Whatever…

So why did I lose the plot this time?

  • There are 4 messages, instead of one consolidated message. I hate that. It’s annoying. I just know that someone is running a report saying, “Look, we’ve done 1 gazillion responses this month”, but it’s all generated crap! This should have been one concise and clear request for additional information.
  • Just look at that second response. Are you kidding me? Loads of rubbish I don’t need to know and repetition of the first message. If I sent this sort of message to my users I’d be marched out of the building. If you think this is acceptable, please quit your job now! You have no place in a role that is even remotely user-facing.
  • How do you think people are going to respond to this? It makes me angry and I kind-of know what I’m doing. How do you expect some junior member of staff to respond to this? I’ll tell you how. They will ignore it, never fix the issue and think “Oracle is crap”. Thanks! Just what we need. I asked a colleague to look at it and their response was, “It’s like they don’t want you to continue with the request”. See?
  • People pay a lot of money for support, and this is what you are presented with? Really?

I’ve now deleted the tweet. I was *not* asked to delete it, and if I had been I definitely would not have, but I decided to because it was gathering too much momentum, such is the general feeling about Oracle Support, and it was not meant to be me grandstanding. It was just genuine frustration with a service my company is paying money for!

I’m a fan of automation. I understand wanting to streamline the SR process, and if automation can help, that’s great, but this is not the way to do it!

What should it look like?

It’s just my opinion, but I think something like this would be reasonable.

We need more information to continue. Please run the following Trace File Analyzer (TFA) commands and upload the files.

1) Run this command on the Agent target machine and answer the questions when prompted.

./tfactl diagcollect -srdc emtbsmetric

2) Enable debug on the OMS server using this command.

./tfactl diagcollect -srdc emdebugon

Repeat the actions in EM that you are trying to diagnose, then disable debug on the OMS server using this command.

./tfactl diagcollect -srdc emdebugoff

If you need more information about TFA or manual file collection for this issue, check out DOC ID 2279135.1.

If you would like to read more about the My Oracle Support automatic troubleshooting, check out Doc ID 1929376.1.

A single message that asks for the relevant information, and gives links if you need something more. That gets the job done, isn’t scary to new people and isn’t going to cause me to lose it on Twitter.

Feedback from Oracle

You may have noticed this post in my feed for a couple of days, but when you clicked on it, it was password protected. That’s because I wrote the post to provide some better feedback than my initial tweet, but delayed the publication while I waited for some feedback from Oracle. I was put in contact with the Vice President, Global Customer Support and the Sr. Director, DB-EM Proactive Support. Their respective responses were as follows. I’ve left out their names as not all folks like being name-checked.

“Hi Tim, Just reviewed your blog post and agree that the auto-responses are verbose. Adding our DB proactive lead who will follow up with you directly on planned next steps.”

Vice President, Global Customer Support

“Hi Tim, I have reviewed your blog regarding your experiences with SR automation. I want to thank you for providing this feedback. Direct feedback from users of SR automation is extremely important and valuable. We take the effectiveness of our SR automation very seriously. Our intention is to provide a streamlined support experience which allows us to identify information, up front in the SR, that will result in the shortest resolution time. There is a balance between casting a wide net to ensure we receive all diagnostic data required vs. the ease of consuming/executing the request to get that data. Admittedly, we don’t always strike the correct balance.   

Regarding the case described in your blog, I agree that our diagnostic messaging should be more concise and consumable. I also appreciate your thoughts on using collectors, such as TFA, to simplify the instructions. We have a plan to address this specific automation flow to eliminate superfluous information and provide a clear message around what is required and how to obtain that information. Additionally, I will incorporate your feedback into our review process, which is conducted on an on-going basis for our automation flows. Please feel free to contact me if you have any other feedback or suggestions. As I said, this kind of feedback is appreciated and always welcomed.”

Sr. Director, DB-EM Proactive Support

The whole Twitter episode wasn’t my finest moment, but if nothing else I’m glad the message got through to the correct people. Of course, all of this is just words unless something substantial happens. Please don’t let us down!

To everyone else out there, please continue to add your own constructive feedback on all things (in life). There’s no point complaining about a problem, if you’ve never actually raised it. I think of it like voting. If you didn’t bother to vote, I don’t really think you are entitled to moan about the outcome.

Cheers

Tim…

PS. Comments are disabled.


Java Development with Autonomous Transaction Processing Dedicated (ATP-D)

Kuassi Mensah - Tue, 2019-10-01 18:20
Oracle Autonomous Transaction Processing Dedicated (ATP-D) is a database cloud service that lets you run a private database service on dedicated Exadata infrastructure within the Oracle Public Cloud. The goal of this blog article is to help you, as a Java developer or architect, build and deploy fast, scalable, and reliable Java applications with ATP-D, using plain Java, Java Servlets, or Java Microservices with WebLogic, Helidon, WebSphere, Liberty, Tomcat, WildFly (JBoss), Spring, and so on.



Please read the full blog post @ https://medium.com/oracledevs/java-development-with-autonomous-transaction-processing-dedicated-atp-d-f0355a2f9abd

Oracle JDBC drivers on Maven Central

Kuassi Mensah - Tue, 2019-10-01 18:04


At last!
Yes, you asked for it, and with some delay (better late than ..), we did it!
Maven Central becomes a distribution center for the Oracle JDBC drivers. We started with the latest release 19.3.0.0 but will soon add previous and supported releases.
Read the full post @ https://medium.com/oracledevs/oracle-jdbc-drivers-on-maven-central-64fcf724d8b

Oracle Cloud: my first VM

Dietrich Schroff - Tue, 2019-10-01 14:15
After some problems with signing up, I created my first VM inside the Oracle Cloud:






 and then a short stop for provisioning:


 And finally:


The machine runs and a login can be done with:

schroff@zerberus:~/.ssh$ ssh 130.61.89.226 -l opc
[opc@myVmInstanceDS ~]$ df -h
Filesystem      Size  Used Avail Use% Mounted on
devtmpfs        459M     0  459M   0% /dev
tmpfs           486M     0  486M   0% /dev/shm
tmpfs           486M   13M  473M   3% /run
tmpfs           486M     0  486M   0% /sys/fs/cgroup
/dev/sda3        39G  1,9G   37G   5% /
/dev/sda1       200M  9,7M  191M   5% /boot/efi
tmpfs            98M     0   98M   0% /run/user/1000
[opc@myVmInstanceDS ~]$

Volume Icon Missing? Get it Back! (Windows 7, 8, 10)

VitalSoftTech - Tue, 2019-10-01 10:01

So you’re using your Windows PC, and you want to turn the volume up to better hear the characters in the movie you’re watching, only to discover the missing volume icon. Where did it go? Before you start stressing out about a potential virus or some malware infecting your PC, take a look at this […]

The post Volume Icon Missing? Get it Back! (Windows 7, 8, 10) appeared first on VitalSoftTech.

Categories: DBA Blogs

SKY Brasil Sees a 90% Reduction in Time to Market with Oracle Cloud

Oracle Press Releases - Tue, 2019-10-01 09:00
Blog
SKY Brasil Sees a 90% Reduction in Time to Market with Oracle Cloud

By Peter Schutt, Senior Director, Oracle—Oct 1, 2019

All throughout Brazil, in indigenous areas, remote cities, and low-income communities, SKY Brasil is providing satellite service. More than five million customers—nearly 30% of Brasil’s paying TV subscribers—rely on SKY to distribute digital programming across the nation.

Since SKY’s inception in 1996, the telecommunications and media industry has experienced tremendous change. The growing demand for seamless, personalized service and content requires providers to constantly innovate both their technology and content. To meet this demand, SKY set out to build targeted marketing strategies adapted to satisfy the needs of each individual customer.

SKY turned to Oracle Autonomous Data Warehouse to provide a high-performing, secure data warehouse that would enable the company to perform real-time marketing analytics. The self-driving autonomous database was set up and in production in 90% less time than SKY’s previous on-premises environment.

The autonomous database, which tunes, patches, and maintains itself with zero downtime, enabled SKY’s IT team to dedicate 90% of its time to more strategic data modeling. SKY also saw 60% cost savings by no longer procuring and managing hardware equipment and software licenses in a data center.

SKY’s marketing team can now run campaigns on demand with the flexibility to scale at any time. With the power of Autonomous Data Warehouse on Oracle Cloud Infrastructure, SKY is delivering exact offers to the right customers at the right time, propelling the business to the leading edge of telecommunications.

“In addition to the high performance of Oracle Cloud Infrastructure, the adoption of Autonomous Data Warehouse has streamlined processes and enabled us to reach our customers with the right offering at the right time,” said André Nazare, IT Director of SKY Brasil.

Watch the SKY Brasil Video

In this video, SKY Brasil’s CIO, Alberto Camardelli, shares how SKY Brasil is transforming telecom with Oracle Autonomous Data Warehouse.


Read More Oracle Cloud Customer Stories

SKY Brasil is one of the thousands of customers on its journey to cloud. Read about others in Stories from Oracle Cloud: Business Successes

Free Oracle Cloud: 10. Running SQLcl and Datapump from the Compute Instance (VM) to ATP

Dimitri Gielis - Tue, 2019-10-01 05:30
This post is part of a series of blog posts on the Best and Cheapest Oracle APEX hosting: Free Oracle Cloud.

In this post, we will install and use SQLcl and Datapump from the Compute Instance (VM) connecting to our Oracle Database in the Autonomous Transaction Processing (ATP) Cloud.
Although I use SQL Developer most of the time to connect to the database, I find it important to be able to use command-line tools too, as they are fast and easy to automate.
In the previous post, we installed the Oracle Cloud command line tools on our own machine, but for the Oracle tools I prefer to install them on our Compute Instance in the Cloud. Especially when we want to automate something, it's easier to do this from another machine in the cloud. It also makes it easier to follow, as we only have to focus on how to install the Oracle tools on Linux.

Oracle Instant Client

In order to connect to an Oracle database from a machine, we will use the Oracle Instant Client software. You can download the software for the different operating systems, but as our VM is running Oracle Linux we can install it with just a few commands:

First, update yum so it knows where to find the Oracle software:

yum install oracle-release-el7

Next, we can search for the Oracle Instant Client version we need:

yum search oracle-instant


We want to install the Oracle Instant Client version that matches the system we want to connect to. For the Free Oracle Database on ATP, that is Oracle Database Release 18.4, so we will pick Oracle Instant Client 18.5. To be honest, I typically take the latest version of the software, but when I tried that, the Oracle Instant Client complained that the libraries were not compatible with the version we wanted to connect to. I always thought you could use newer versions of the Oracle tools against older databases, but apparently that is no longer the case (at least not during my tests). Anyway, it's good to have the tool version match the version of the database you connect to.
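If you want to double-check the version of the database you are connecting to, a quick query (run from SQL Developer for now, or from SQLcl once it's installed) does the trick:

SELECT banner FROM v$version;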

Install the Instant Client basic and tools packages:

yum install oracle-instantclient18.5-basic.x86_64
yum install oracle-instantclient18.5-tools.x86_64


As a last step we set some environment variables:

export PATH=/usr/lib/oracle/18.5/client64/bin:$PATH
export LD_LIBRARY_PATH=/usr/lib/oracle/18.5/client64/lib
export TNS_ADMIN=/usr/lib/oracle/18.5/client64/lib/network/admin
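Note that these exports only last for the current shell session; to make them permanent you could, for example, append them to the opc user's ~/.bash_profile:

cat >> ~/.bash_profile <<'EOF'
export PATH=/usr/lib/oracle/18.5/client64/bin:$PATH
export LD_LIBRARY_PATH=/usr/lib/oracle/18.5/client64/lib
export TNS_ADMIN=/usr/lib/oracle/18.5/client64/lib/network/admin
EOF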

That's it! We can now use the Oracle tools. Note there's also a SQL*Plus package, which allows you to connect from the command line to the database, but I prefer to use SQLcl as it has some cool features for Oracle APEX (e.g. exporting your app). Download SQLcl now.
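If you also want the classic SQL*Plus client, it should be available from the same yum channel (the package name below is assumed to follow the same pattern as the basic and tools packages):

yum install oracle-instantclient18.5-sqlplus.x86_64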

Before we move on to installing SQLcl, make sure you still have the credentials (wallet) file we used when connecting with SQL Developer to our database. Just like with SQL Developer, we also need this with SQLcl to connect to our database. As a reminder here's the screenshot I'm talking about:


Upload both the SQLcl and Credentials zip file to the Compute Instance (VM):

scp -i .ssh/oraclecloud /Users/dgielis/Downloads/wallet_DBDIMI.zip opc@132.145.215.55:/tmp

scp -i .ssh/oraclecloud /Users/dgielis/Downloads/sqlcl-19.2.1.206.1649.zip opc@132.145.215.55:/tmp


Connect to your VM and unzip the files:

ssh -i .ssh/oraclecloud opc@132.145.215.55

unzip /tmp/wallet_DBDIMI.zip -d /usr/lib/oracle/18.5/client64/lib/network/admin

unzip /tmp/sqlcl-19.2.1.206.1649.zip -d /opt


Before we can run SQLcl we also need to make sure we have Java installed, as SQLcl depends on it:

yum install java

To make it easier to run SQLcl from anywhere we will create a symbolic link:

ln -s /opt/sqlcl/bin/sql /usr/lib/oracle/18.5/client64/bin/sql

Now we are ready to connect to our database on ATP:

sql admin@dbdimi_high


There we go... we can connect from our VM to our ATP database.

The next thing we want to do is export the data from our ATP database. We will use Datapump that came with the installation of the tools.

Run the command to export the schema CLOUD:

expdp admin@dbdimi_high \
exclude=index,cluster,indextype,materialized_view,materialized_view_log,materialized_zonemap,db_link \
data_options=group_partition_table_data  \
parallel=1 \
schemas=cloud \
dumpfile=export%u.dmp


So where did this export go? To the default DATA_PUMP_DIR directory, which we don't have direct access to... but we can list the files in that directory with:

SELECT * FROM DBMS_CLOUD.LIST_FILES('DATA_PUMP_DIR');


Remember my previous blog post about the Object Storage, in which we set up a Backups bucket?
Oracle allows you to connect your Object Storage to your ATP database and that is exactly what we will do further on :)

We will use the same user we created earlier for CLI. In order to connect to ATP we need to set up an Auth Token. Go to the User Details of cliUser and click the Auth Tokens:


Click the Generate Token button:


There's the token... you only see it once, so make sure to copy it:
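If you prefer the command line over the console, the auth token can presumably also be generated with the OCI CLI we configured in the previous post (the user OCID below is a placeholder):

oci iam auth-token create --user-id ocid1.user.oc1..<cliUser-ocid> --description "ATP DBMS_CLOUD credential"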


Next, connect to your ATP database and run the script to add the credentials to the ATP database:

begin
  dbms_cloud.create_credential(
    credential_name => 'DEF_CRED_NAME'
    , username => 'cliUser'
    , password => 'Frx}R9lD0O}dIgZRGs{:'
  );
end;
/


Now that the DBMS_CLOUD package has credentials, we can do other calls with this package.
To add the Datapump export files to the Object Storage, we can use the PUT_OBJECT procedure.

I created a small script to take all the files from the DATA_PUMP_DIR and put them in the backups Bucket in the Object Storage:

begin
  for r in (select object_name, bytes
              from dbms_cloud.list_files('DATA_PUMP_DIR'))
  loop
   dbms_cloud.put_object(credential_name => 'DEF_CRED_NAME',
     object_uri => 'https://objectstorage.us-ashburn-1.oraclecloud.com/n/id9u4qbhnjxj/b/backups/o/'||r.object_name,
     directory_name => 'DATA_PUMP_DIR',
     file_name => r.object_name);
  end loop;     
end;
/

And when we check our bucket, we see the Datapump export files! Yay!


We also want to export our Oracle APEX apps. In some projects, I use the APEXExport utility, but now we will use SQLcl to export our APEX app 101:

apex export 101

In real life I typically create a few scripts which I can run one-by-one or combined in a general backup script. The script will export the Oracle schemas, the APEX apps and save the files to another location, in our case the Object Storage.

vi make_backup.sh


 Here are the details of the scripts which are called in the main backup script:
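The screenshots show my actual scripts. Purely as an illustration (the names, paths, and the password variable below are hypothetical), a minimal make_backup.sh could look roughly like this:

#!/bin/bash
# Illustrative sketch only - adjust names, paths and credentials to your own setup
export PATH=/usr/lib/oracle/18.5/client64/bin:$PATH
export LD_LIBRARY_PATH=/usr/lib/oracle/18.5/client64/lib
export TNS_ADMIN=/usr/lib/oracle/18.5/client64/lib/network/admin

# 1. Datapump export of the CLOUD schema (written to DATA_PUMP_DIR inside ATP)
expdp admin/${ADMIN_PWD}@dbdimi_high \
  schemas=cloud \
  dumpfile=export%u.dmp \
  exclude=index,cluster,indextype,materialized_view,materialized_view_log,materialized_zonemap,db_link \
  data_options=group_partition_table_data \
  parallel=1

# 2. Export APEX application 101 with SQLcl
echo "apex export 101" | sql admin/${ADMIN_PWD}@dbdimi_high

# 3. Copy the dump files from DATA_PUMP_DIR to the Object Storage bucket
#    (put_dumpfiles.sql would contain the DBMS_CLOUD.PUT_OBJECT loop shown earlier)
sql admin/${ADMIN_PWD}@dbdimi_high @put_dumpfiles.sql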


You can schedule this script with crontab, for example, every day at 2AM:
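As an illustration, the crontab entry for a daily run at 2 AM could look like this (the script path is an assumption):

0 2 * * * /home/opc/scripts/make_backup.sh >> /home/opc/scripts/make_backup.log 2>&1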


The above is just an example of what you can do to automate your backups. You have to decide how frequently you want to do those backups.


If you want to move your existing Oracle database and APEX apps to the Oracle Cloud, the steps are similar to above. You upload your Datapump export file to your Object Storage. Next, run the Data Pump Import with the dump file parameter set to the list of file URLs on your Cloud Object Storage and the credential parameter set to the name of the credential you created earlier. For example:

impdp admin/password@dbdimi_high \  
     directory=data_pump_dir \  
     credential=def_cred_name \
     dumpfile=https://objectstorage.us-ashburn-1.oraclecloud.com/n/adwc/b/adwc_user/o/export%u.dmp \
     parallel=1 \
     partition_options=merge \
     transform=segment_attributes:n \
     transform=dwcs_cvt_iots:y transform=constraint_use_default_index:y \
exclude=index,cluster,indextype,materialized_view,materialized_view_log,materialized_zonemap,db_link

Next, you would create your APEX workspace and import the APEX apps.
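As a rough sketch of that import step (the workspace name CLOUD and the file name f101.sql are assumptions, not from the original post), the exported application could be installed from SQLcl like this:

declare
  l_workspace_id number;
begin
  -- look up the target workspace (the name is a hypothetical example)
  select workspace_id
    into l_workspace_id
    from apex_workspaces
   where workspace = 'CLOUD';
  apex_application_install.set_workspace_id(l_workspace_id);
  apex_application_install.set_application_id(101);
  apex_application_install.generate_offset;
end;
/
@f101.sql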


In the next post, we dive again in Oracle APEX and how to send emails from your APEX app.

Categories: Development

Migrating Oracle database from windows to ODA

Yann Neuhaus - Tue, 2019-10-01 02:47

Recently I have been working on an interesting customer project where I had to migrate Windows Oracle Standard Edition databases to ODA. The ODAs are X7-2M models running version 18.5, which comes with Red Hat Enterprise Linux 6.10 (Santiago). Both the Windows source databases and the target ODA databases are running PSU 11.2.0.4.190115, but this would definitely also work for Oracle 12c and Oracle 18c databases. The databases are licensed with Standard Edition, so migrating through Data Guard was not possible. In this blog I would like to share the experience I gained on this topic as well as the method and steps I used to successfully migrate those databases.

Limitations

Windows and Linux platforms being of the same endianness, I initially thought it would be no more complicated than simply duplicating the Windows database to an ODA instance using the last backup. ODA databases are OMF databases, so it could not be easier, as no convert parameter is needed.
After creating a single instance database on the ODA, exporting the current database pfile and adapting it for the ODA, and creating the needed TNS connections, I ran a single RMAN duplicate command:

RMAN> run {
2> set newname for database to new;
3> duplicate target database to 'ODA_DBNAME' backup location '/u99/app/oracle/backup';
4> }

Note: If the database is huge, for example more than a terabyte, and your SGA is small, you might want to increase it. A bigger SGA will lower the restore time; a minimum of 50 GB would be a good compromise. Also, if your ODA is from the ODA X7 family, you will benefit from the NVMe technology. In my experience, duplicating a 1.5 TB database, with the backup stored locally, did not take more than 40 minutes.
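For example, before starting the duplicate you could simply raise the SGA in the auxiliary instance pfile (the values below are just an illustration):

# excerpt of the auxiliary instance pfile
*.sga_target=50G
*.sga_max_size=50G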

I was more than happy to see the first duplication step complete successfully:

Finished restore at 17-JUL-2019 16:45:10

And I was expecting the same for the next recovery part.

Unfortunately, this didn't end as expected and I quickly got the following restore errors:

Errors in memory script
RMAN-03015: error occurred in stored script Memory Script
RMAN-06136: ORACLE error from auxiliary database: ORA-01507: database not mounted
ORA-06512: at "SYS.X$DBMS_RCVMAN", line 13661
ORA-06512: at line 1
RMAN-03015: error occurred in stored script Memory Script
RMAN-20000: abnormal termination of job step
RMAN-11003: failure during parse/execution of SQL statement: alter database recover logfile '/u03/app/oracle/fast_recovery_area/ODA_DBNAME_RZA/archivelog/2019_07_17/o1_mf_1_25514_glyf3yd3_.arc'
RMAN-11001: Oracle Error:
ORA-10562: Error occurred while applying redo to data block (file# 91, block# 189)
ORA-10564: tablespace DBVISIT
ORA-01110: data file 91: '/u02/app/oracle/oradata/ODA_DBNAME_RZA/ODA_DBNAME_RZA/datafile/o1_mf_dbvisit_glyczqcj_.dbf'
ORA-10561: block type 'TRANSACTION MANAGED DATA BLOCK', data object# 501874
ORA-00600: internal error code, arguments: [4502], [0], [], [], [], [], [], [], [], [], [], [] RMAN-00571: ===========================================================
RMAN-00569: =============== ERROR MESSAGE STACK FOLLOWS ===============
RMAN-00571: ===========================================================
RMAN-03002: failure of Duplicate Db command at 07/17/2019 16:45:32
RMAN-05501: aborting duplication of target database

While troubleshooting the problem I realized that migrating a database from Windows to Linux might not be so simple. The following Oracle Doc IDs describe the problem:
Restore From Windows To Linux using RMAN Fails (Doc ID 2003327.1)
Cross-Platform Database Migration (across same endian) using RMAN Transportable Database (Doc ID 1401921.1)
RMAN DUPLICATE/RESTORE/RECOVER Mixed Platform Support (Doc ID 1079563.1)

The problem comes from the fact that recovering redo transactions between the Windows and Linux platforms is not supported unless the database is a standby database. For a regular (non-standby) database, the only possibility would be to go through a cold backup, which in my case was impossible given the database size, the time needed to execute a backup, and the short maintenance window.

Looking for another solution and doing further tests, I found one, which I am going to describe in the next steps.

Restoring the database from the last backup

In order to restore the database, I ran the following steps.

  1. Start the ODA instance in no mount :

  2. SQL> startup nomount

  3. Restore the last available control file from backup with rman :

  4. RMAN> connect target /
     
    RMAN> restore controlfile from '/mnt/backupNFS/oracle/ODA_DBNAME/20190813_233004_CTL_ODA_DBNAME_1179126808_S2864_P1.BCK';

  5. Mount the database :

  6. SQL> alter database mount;

  7. Catalog the backup path :

  8. RMAN> connect target /
     
    RMAN> catalog start with '/mnt/backupNFS/oracle/ODA_DBNAME';

  9. And finally restore the database :

  10. RMAN> connect target /
     
    RMAN> run {
    2> set newname for database to new;
    3> restore database;
    4> switch datafile all;
    5> }

Convert the primary database to a physical standby database

In order to be able to recover the database we will convert the primary database to a physical standby one.

  1. We can check the actual status and see that our database is a primary one in mounted state :

  2. SQL> select status,instance_name,database_role,open_mode from v$database,v$Instance;
     
    STATUS INSTANCE_NAME DATABASE_ROLE OPEN_MODE
    ------------ ---------------- ---------------- --------------------
    MOUNTED ODA_DBNAME PRIMARY MOUNTED

  3. We will convert the database to a physical standby

  4. SQL> alter database convert to physical standby;
     
    Database altered.

  5. We need to restart the database.

  6. SQL> shutdown immediate
     
    SQL> startup mount

  7. We can check new database status

  8. SQL> select status,instance_name,database_role,open_mode from v$database,v$Instance;
     
    STATUS INSTANCE_NAME DATABASE_ROLE OPEN_MODE
    ------------ ---------------- ---------------- --------------------
    MOUNTED ODA_DBNAME PHYSICAL STANDBY MOUNTED

Get the current SCN of the Windows database

We are now ready to recover the database, and the application can be stopped. The next steps will be executed during the maintenance window. The Windows database listener can be stopped to make sure there are no new connections.
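On the Windows server this can be done, for example, with the listener control utility (assuming the default listener name):

lsnrctl stop LISTENER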

  1. We will make sure there is no existing application session on the database :

  2. SQL> set linesize 300
    SQL> set pagesize 500
    SQL> col machine format a20
    SQL> col service_name format a20
     
    SQL> select SID, serial#, username, machine, process, program, status, service_name, logon_time from v$session where username not in ('SYS', 'PUBLIC') and username is not null order by status, username;

  3. We will create a restore point :

  4. SQL> create restore point for_migration_14082019;
     
    Restore point created.

  5. We will get the last online log transactions archived :

  6. SQL> ALTER SYSTEM ARCHIVE LOG CURRENT;
     
    System altered.

  7. We will retrieve the SCN corresponding to the restore point :

  8. SQL> col scn format 999999999999999
     
    SQL> select scn from v$restore_point where lower(name)='for_migration_14082019';
     
    SCN
    ----------------
    13069540631

  9. We will back up the last archive log. This will be executed on the Windows database using our dbi services internal DMK tool (https://www.dbi-services.com/offering/products/dmk-management-kit/) :

  10. servicedbi@win_srv:E:\app\oracle\local\dmk_custom\bin\ [ODA_DBNAME] ./rman_backup_ODA_DBNAME_arc.bat
     
    E:\app\oracle\local\dmk_custom\bin>powershell.exe -command "E:\app\oracle\local\dmk_ha\bin\check_primary.ps1 ODA_DBNAME 'dmk_rman.ps1 -s ODA_DBNAME -t bck_arc.rcv -c E:\app\oracle\admin\ODA_DBNAME\etc\rman.cfg
     
    [OK]::KSBL::RMAN::dmk_dbbackup::ODA_DBNAME::bck_arc.rcv
     
    Logfile is : E:\app\oracle\admin\ODA_DBNAME\log\ODA_DBNAME_bck_arc_20190814_141754.log
     
    RMAN return Code: 0
    2019-08-14_02:19:01::check_primary.ps1::MainProgram ::INFO ==> Program completed

Recover the database

The database can now be recovered up to our SCN 13069540631.

  1. We will first need to catalog new archive log backups :

  2. RMAN> connect target /
     
    RMAN> catalog start with '/mnt/backupNFS/oracle/ODA_DBNAME';

  3. And recover the database till SCN 13069540632 :

  4. RMAN> connect target /
     
    RMAN> run {
    2> set until scn 13069540632;
    3> recover database;
    4> }
     
    archived log file name=/u03/app/oracle/fast_recovery_area/ODA_DBNAME_RZA/archivelog/2019_08_14/o1_mf_1_30098_go80084r_.arc RECID=30124 STAMP=1016289320
    archived log file name=/u03/app/oracle/fast_recovery_area/ODA_DBNAME_RZA/archivelog/2019_08_14/o1_mf_1_30099_go80084x_.arc thread=1 sequence=30099
    channel default: deleting archived log(s)
    archived log file name=/u03/app/oracle/fast_recovery_area/ODA_DBNAME_RZA/archivelog/2019_08_14/o1_mf_1_30099_go80084x_.arc RECID=30119 STAMP=1016289320
    archived log file name=/u03/app/oracle/fast_recovery_area/ODA_DBNAME_RZA/archivelog/2019_08_14/o1_mf_1_30100_go8008bg_.arc thread=1 sequence=30100
    channel default: deleting archived log(s)
    archived log file name=/u03/app/oracle/fast_recovery_area/ODA_DBNAME_RZA/archivelog/2019_08_14/o1_mf_1_30100_go8008bg_.arc RECID=30121 STAMP=1016289320
    media recovery complete, elapsed time: 00:00:02
    Finished recover at 14-AUG-2019 14:35:23

  5. We can check the alert log and see that recovering has been performed until SCN 13069540632 :

  6. oracle@ODA02:/u02/app/oracle/oradata/ODA_DBNAME_RZA/ODA_DBNAME_RZA/datafile/ [ODA_DBNAME] taa
    ORA-279 signalled during: alter database recover logfile '/u03/app/oracle/fast_recovery_area/ODA_DBNAME_RZA/archivelog/2019_08_14/o1_mf_1_30098_go80084r_.arc'...
    alter database recover logfile '/u03/app/oracle/fast_recovery_area/ODA_DBNAME_RZA/archivelog/2019_08_14/o1_mf_1_30099_go80084x_.arc'
    Media Recovery Log /u03/app/oracle/fast_recovery_area/ODA_DBNAME_RZA/archivelog/2019_08_14/o1_mf_1_30099_go80084x_.arc
    ORA-279 signalled during: alter database recover logfile '/u03/app/oracle/fast_recovery_area/ODA_DBNAME_RZA/archivelog/2019_08_14/o1_mf_1_30099_go80084x_.arc'...
    alter database recover logfile '/u03/app/oracle/fast_recovery_area/ODA_DBNAME_RZA/archivelog/2019_08_14/o1_mf_1_30100_go8008bg_.arc'
    Media Recovery Log /u03/app/oracle/fast_recovery_area/ODA_DBNAME_RZA/archivelog/2019_08_14/o1_mf_1_30100_go8008bg_.arc
    Wed Aug 14 14:35:23 2019
    Incomplete Recovery applied until change 13069540632 time 08/14/2019 14:13:46
    Media Recovery Complete (ODA_DBNAME)
    Completed: alter database recover logfile '/u03/app/oracle/fast_recovery_area/ODA_DBNAME_RZA/archivelog/2019_08_14/o1_mf_1_30100_go8008bg_.arc'

  7. We can check the new ODA database current SCN :

  8. SQL> col current_scn format 999999999999999
     
    SQL> select current_scn from v$database;
     
    CURRENT_SCN
    ----------------
    13069540631

Convert database to primary again

The database can now be converted back to primary.

SQL> alter database activate standby database;
 
Database altered.


SQL> select status,instance_name,database_role,open_mode from v$database,v$Instance;
 
STATUS INSTANCE_NAME DATABASE_ROLE OPEN_MODE
------------ ---------------- ---------------- --------------------
MOUNTED ODA_DBNAME PRIMARY MOUNTED

At this step, if the Windows source database were running version 11.2.0.3, we could upgrade the new ODA database to 11.2.0.4 following the common Oracle database upgrade process.

And finally we can open our database, which has now been migrated from Windows to Linux.


SQL> alter database open;
 
Database altered.


SQL> select status,instance_name,database_role,open_mode from v$database,v$Instance;
 
STATUS INSTANCE_NAME DATABASE_ROLE OPEN_MODE
------------ ---------------- ---------------- --------------------
OPEN ODA_DBNAME PRIMARY READ WRITE


oracle@ODA02:/u02/app/oracle/oradata/ODA_DBNAME_RZA/ODA_DBNAME_RZA/datafile/ [ODA_DBNAME] ODA_DBNAME
********* dbi services Ltd. *********
STATUS : OPEN
DB_UNIQUE_NAME : ODA_DBNAME_RZA
OPEN_MODE : READ WRITE
LOG_MODE : ARCHIVELOG
DATABASE_ROLE : PRIMARY
FLASHBACK_ON : NO
FORCE_LOGGING : YES
VERSION : 11.2.0.4.0
*************************************

Post-migration steps

A few post-migration steps need to be executed.

Recreate the redo logs

The redo logs are still stamped with the Windows path and have therefore been created in the $ORACLE_HOME/dbs folder. In this step we will create new OMF ones.

  1. Checking current online log members :

  2. SQL> set linesize 300
    SQL> set pagesize 500
    SQL> col member format a100
     
    SQL> select a.GROUP#, b.member, a.status, a.bytes/1024/1024 MB from v$log a, v$logfile b where a.GROUP#=b.GROUP#;
     
    GROUP# MEMBER STATUS MB
    ---------- ---------------------------------------------------------------------------------------------------- ---------------- ----------
    6 /u01/app/oracle/product/11.2.0.4/dbhome_1/dbs/I:FAST_RECOVERY_AREAODA_DBNAME_SITE1ONLINELOGREDO_6_1.LOG UNUSED 500
    6 /u01/app/oracle/product/11.2.0.4/dbhome_1/dbs/I:FAST_RECOVERY_AREAODA_DBNAME_SITE1ONLINELOGREDO_6_2.LOG UNUSED 500
    5 /u01/app/oracle/product/11.2.0.4/dbhome_1/dbs/I:FAST_RECOVERY_AREAODA_DBNAME_SITE1ONLINELOGREDO_5_2.LOG UNUSED 500
    5 /u01/app/oracle/product/11.2.0.4/dbhome_1/dbs/I:FAST_RECOVERY_AREAODA_DBNAME_SITE1ONLINELOGREDO_5_1.LOG UNUSED 500
    4 /u01/app/oracle/product/11.2.0.4/dbhome_1/dbs/I:FAST_RECOVERY_AREAODA_DBNAME_SITE1ONLINELOGREDO_4_2.LOG UNUSED 500
    4 /u01/app/oracle/product/11.2.0.4/dbhome_1/dbs/I:FAST_RECOVERY_AREAODA_DBNAME_SITE1ONLINELOGREDO_4_1.LOG UNUSED 500
    3 /u01/app/oracle/product/11.2.0.4/dbhome_1/dbs/I:FAST_RECOVERY_AREAODA_DBNAME_SITE1ONLINELOGREDO_3_2.LOG UNUSED 500
    3 /u01/app/oracle/product/11.2.0.4/dbhome_1/dbs/I:FAST_RECOVERY_AREAODA_DBNAME_SITE1ONLINELOGREDO_3_1.LOG UNUSED 500
    2 /u01/app/oracle/product/11.2.0.4/dbhome_1/dbs/I:FAST_RECOVERY_AREAODA_DBNAME_SITE1ONLINELOGREDO_2_2.LOG UNUSED 500
    2 /u01/app/oracle/product/11.2.0.4/dbhome_1/dbs/I:FAST_RECOVERY_AREAODA_DBNAME_SITE1ONLINELOGREDO_2_1.LOG UNUSED 500
    1 /u01/app/oracle/product/11.2.0.4/dbhome_1/dbs/I:FAST_RECOVERY_AREAODA_DBNAME_SITE1ONLINELOGREDO_1_2.LOG CURRENT 500
    1 /u01/app/oracle/product/11.2.0.4/dbhome_1/dbs/I:FAST_RECOVERY_AREAODA_DBNAME_SITE1ONLINELOGREDO_1_1.LOG CURRENT 500

  3. Drop the first unused redo log group keeping only one :

  4. SQL> alter database drop logfile group 6;
     
    Database altered.
     
    SQL> alter database drop logfile group 5;
     
    Database altered.
     
    SQL> alter database drop logfile group 4;
     
    Database altered.
     
    SQL> alter database drop logfile group 3;
     
    Database altered.
     
    SQL> alter database add logfile group 3 size 500M;
     
    Database altered.

  5. Create the recent dropped group again :

  6. SQL> alter database add logfile group 3 size 500M;
     
    Database altered.
     
    SQL> alter database add logfile group 4 size 500M;
     
    Database altered.
     
    SQL> alter database add logfile group 5 size 500M;
     
    Database altered.
     
    SQL> alter database add logfile group 6 size 500M;
     
    Database altered.

  7. Drop the last unused redo log group and create it again :

  8. SQL> alter database drop logfile group 2;
     
    Database altered.
     
    SQL> alter database add logfile group 2 size 500M;
     
    Database altered.

  9. Execute a switch log file and checkpoint so the current redo group becomes unused :

  10. SQL> alter system switch logfile;
     
    System altered.
     
    SQL> alter system checkpoint;
     
    System altered.

  11. Drop it and create it again :

  12. SQL> alter database drop logfile group 1;
     
    Database altered.
     
    SQL> alter database add logfile group 1 size 500M;
     
    Database altered.

  13. Check redo group members :

  14. SQL> select a.GROUP#, b.member, a.status, a.bytes/1024/1024 MB from v$log a, v$logfile b where a.GROUP#=b.GROUP#;
     
    GROUP# MEMBER STATUS MB
    ---------- ---------------------------------------------------------------------------------------------------- ---------------- ----------
    3 /u03/app/oracle/redo/ODA_DBNAME_RZA/onlinelog/o1_mf_3_go81rj4t_.log INACTIVE 500
    3 /u02/app/oracle/oradata/ODA_DBNAME_RZA/redo/ODA_DBNAME_RZA/onlinelog/o1_mf_3_go81rjqn_.log INACTIVE 500
    4 /u03/app/oracle/redo/ODA_DBNAME_RZA/onlinelog/o1_mf_4_go81ron1_.log UNUSED 500
    4 /u02/app/oracle/oradata/ODA_DBNAME_RZA/redo/ODA_DBNAME_RZA/onlinelog/o1_mf_4_go81rp6o_.log UNUSED 500
    5 /u03/app/oracle/redo/ODA_DBNAME_RZA/onlinelog/o1_mf_5_go81rwhs_.log UNUSED 500
    5 /u02/app/oracle/oradata/ODA_DBNAME_RZA/redo/ODA_DBNAME_RZA/onlinelog/o1_mf_5_go81rx1g_.log UNUSED 500
    6 /u03/app/oracle/redo/ODA_DBNAME_RZA/onlinelog/o1_mf_6_go81s1rk_.log UNUSED 500
    6 /u02/app/oracle/oradata/ODA_DBNAME_RZA/redo/ODA_DBNAME_RZA/onlinelog/o1_mf_6_go81s2bx_.log UNUSED 500
    2 /u03/app/oracle/redo/ODA_DBNAME_RZA/onlinelog/o1_mf_2_go81sgdf_.log CURRENT 500
    2 /u02/app/oracle/oradata/ODA_DBNAME_RZA/redo/ODA_DBNAME_RZA/onlinelog/o1_mf_2_go81sgxd_.log CURRENT 500
    1 /u03/app/oracle/redo/ODA_DBNAME_RZA/onlinelog/o1_mf_1_go81vpls_.log UNUSED 500
    1 /u02/app/oracle/oradata/ODA_DBNAME_RZA/redo/ODA_DBNAME_RZA/onlinelog/o1_mf_1_go81vq4v_.log UNUSED 500

  15. Delete the wrong previous redo log members files :

  16. oracle@ODA02:/u02/app/oracle/oradata/ODA_DBNAME_RZA/ODA_DBNAME_RZA/datafile/ [ODA_DBNAME] cdh
     
    oracle@ODA02:/u01/app/oracle/product/11.2.0.4/dbhome_1/ [ODA_DBNAME] cd dbs
     
    oracle@ODA02:/u01/app/oracle/product/11.2.0.4/dbhome_1/dbs/ [ODA_DBNAME] ls -ltrh *REDO*.LOG
    -rw-r----- 1 oracle asmadmin 501M Aug 14 14:59 I:FAST_RECOVERY_AREAODA_DBNAME_SITE1ONLINELOGREDO_6_2.LOG
    -rw-r----- 1 oracle asmadmin 501M Aug 14 14:59 I:FAST_RECOVERY_AREAODA_DBNAME_SITE1ONLINELOGREDO_6_1.LOG
    -rw-r----- 1 oracle asmadmin 501M Aug 14 14:59 I:FAST_RECOVERY_AREAODA_DBNAME_SITE1ONLINELOGREDO_5_2.LOG
    -rw-r----- 1 oracle asmadmin 501M Aug 14 14:59 I:FAST_RECOVERY_AREAODA_DBNAME_SITE1ONLINELOGREDO_5_1.LOG
    -rw-r----- 1 oracle asmadmin 501M Aug 14 14:59 I:FAST_RECOVERY_AREAODA_DBNAME_SITE1ONLINELOGREDO_4_2.LOG
    -rw-r----- 1 oracle asmadmin 501M Aug 14 14:59 I:FAST_RECOVERY_AREAODA_DBNAME_SITE1ONLINELOGREDO_4_1.LOG
    -rw-r----- 1 oracle asmadmin 501M Aug 14 14:59 I:FAST_RECOVERY_AREAODA_DBNAME_SITE1ONLINELOGREDO_3_2.LOG
    -rw-r----- 1 oracle asmadmin 501M Aug 14 14:59 I:FAST_RECOVERY_AREAODA_DBNAME_SITE1ONLINELOGREDO_3_1.LOG
    -rw-r----- 1 oracle asmadmin 501M Aug 14 14:59 I:FAST_RECOVERY_AREAODA_DBNAME_SITE1ONLINELOGREDO_2_2.LOG
    -rw-r----- 1 oracle asmadmin 501M Aug 14 14:59 I:FAST_RECOVERY_AREAODA_DBNAME_SITE1ONLINELOGREDO_2_1.LOG
    -rw-r----- 1 oracle asmadmin 501M Aug 14 15:05 I:FAST_RECOVERY_AREAODA_DBNAME_SITE1ONLINELOGREDO_1_2.LOG
    -rw-r----- 1 oracle asmadmin 501M Aug 14 15:05 I:FAST_RECOVERY_AREAODA_DBNAME_SITE1ONLINELOGREDO_1_1.LOG
     
    oracle@ODA02:/u01/app/oracle/product/11.2.0.4/dbhome_1/dbs/ [ODA_DBNAME] rm *REDO*.LOG

Recreate the temp file
  1. Checking the current temp file, we can see that the path is still the Windows one :

  2. SQL> set linesize 300
    SQL> col name format a100
     
    SQL> select b.name, b.status, b.bytes/1024/1024 MB, a.name from v$tablespace a, v$tempfile b where a.TS#=b.TS#;
     
    NAME STATUS MB NAME
    ---------------------------------------------------------------------------------------------------- ------- ---------- -------------------------------------------
    F:\ORADATA\ODA_DBNAME\TEMPORARY_DATA_1.DBF ONLINE 8192 TEMPORARY_DATA

  3. We can check that the default temporary tablespace is TEMPORARY_DATA

  4. SQL> col property_value format a50
     
    SQL> select property_name, property_value from database_properties where property_name like '%DEFAULT%TABLESPACE%';
     
    PROPERTY_NAME PROPERTY_VALUE
    ------------------------------ --------------------------------------------------
    DEFAULT_TEMP_TABLESPACE TEMPORARY_DATA
    DEFAULT_PERMANENT_TABLESPACE USER_DATA

  5. Let’s create a new temp tablespace and make it the default one

  6. SQL> create temporary tablespace TEMP tempfile size 8G;
     
    Tablespace created.
     
    SQL> alter database default temporary tablespace TEMP;
     
    Database altered.
     
    SQL> select property_name, property_value from database_properties where property_name like '%DEFAULT%TABLESPACE%';
     
    PROPERTY_NAME PROPERTY_VALUE
    ------------------------------ --------------------------------------------------
    DEFAULT_TEMP_TABLESPACE TEMP
    DEFAULT_PERMANENT_TABLESPACE USER_DATA

  7. Drop previous TEMPORARY_DATA tablespace

  8. SQL> drop tablespace TEMPORARY_DATA including contents and datafiles;
     
    Tablespace dropped.
     
    SQL> select b.file#, b.name, b.status, b.bytes/1024/1024 MB, a.name from v$tablespace a, v$tempfile b where a.TS#=b.TS#;
     
    FILE# NAME STATUS MB NAME
    ---------- ---------------------------------------------------------------------------------------------------- ------- ----------
    3 /u02/app/oracle/oradata/ODA_DBNAME_RZA/ODA_DBNAME_RZA/datafile/o1_mf_temp_go83m1tp_.tmp ONLINE 8192 TEMP

  9. Create TEMPORARY_DATA tablespace again and make it the default one :

  10. SQL> create temporary tablespace TEMPORARY_DATA tempfile size 8G;
     
    Tablespace created.
     
    SQL> select b.file#, b.name, b.status, b.bytes/1024/1024 MB, a.name from v$tablespace a, v$tempfile b where a.TS#=b.TS#;
     
    FILE# NAME STATUS MB NAME
    ---------- ---------------------------------------------------------------------------------------------------- ------- ----------
    1 /u02/app/oracle/oradata/ODA_DBNAME_RZA/ODA_DBNAME_RZA/datafile/o1_mf_temporar_go83wfd7_.tmp ONLINE 8192 TEMPORARY_DATA
    3 /u02/app/oracle/oradata/ODA_DBNAME_RZA/ODA_DBNAME_RZA/datafile/o1_mf_temp_go83m1tp_.tmp ONLINE 8192 TEMP
     
    SQL> alter database default temporary tablespace TEMPORARY_DATA;
     
    Database altered.
     
    SQL> select property_name, property_value from database_properties where property_name like '%DEFAULT%TABLESPACE%';
     
    PROPERTY_NAME PROPERTY_VALUE
    ------------------------------ --------------------------------------------------
    DEFAULT_TEMP_TABLESPACE TEMPORARY_DATA
    DEFAULT_PERMANENT_TABLESPACE USER_DATA

  11. And finally drop the intermediate temp tablespace :

  12. SQL> drop tablespace TEMP including contents and datafiles;
     
    Tablespace dropped.
     
    SQL> select b.file#, b.name, b.status, b.bytes/1024/1024 MB, a.name from v$tablespace a, v$tempfile b where a.TS#=b.TS#;
     
    FILE# NAME STATUS MB NAME
    ---------- ---------------------------------------------------------------------------------------------------- ------- ----------
    1 /u02/app/oracle/oradata/ODA_DBNAME_RZA/ODA_DBNAME_RZA/datafile/o1_mf_temporar_go83wfd7_.tmp ONLINE 8192 TEMPORARY_DATA

  13. An appropriate max size can be given to the newly created temp tablespace

  14. SQL> alter database tempfile '/u02/app/oracle/oradata/ODA_DBNAME_RZA/ODA_DBNAME_RZA/datafile/o1_mf_temporar_go83wfd7_.tmp' autoextend on maxsize 31G;
     
    Database altered.

  15. Remove the wrong temp file stored in $ORACLE_HOME/dbs

  16. oracle@ODA02:/u01/app/oracle/product/11.2.0.4/dbhome_1/dbs/ [ODA_DBNAME] ls -ltr
    -rw-r--r-- 1 oracle oinstall 2851 May 15 2009 init.ora
    -rw-r--r-- 1 oracle oinstall 64 Jul 25 08:10 initODA_DBNAME.ora.old
    -rw-r----- 1 oracle oinstall 2048 Jul 25 08:10 orapwODA_DBNAME
    -rw-r--r-- 1 oracle oinstall 67 Jul 25 08:31 initODA_DBNAME.ora
    -rw-r----- 1 oracle asmadmin 8589942784 Aug 14 08:14 F:ORADATAODA_DBNAMETEMPORARY_DATA_1.DBF
    -rw-rw---- 1 oracle asmadmin 1544 Aug 14 14:59 hc_ODA_DBNAME.dat
    -rw-r----- 1 oracle asmadmin 43466752 Aug 14 15:48 snapcf_ODA_DBNAME.f
     
    oracle@RZA-ODA02:/u01/app/oracle/product/11.2.0.4/dbhome_1/dbs/ [ODA_DBNAME] rm F:ORADATAODA_DBNAMETEMPORARY_DATA_1.DBF

Apply specific ODA parameters

The following ODA-specific parameters can be applied to the newly created instance.


SQL> alter system set "_datafile_write_errors_crash_instance"=false scope=spfile;
 
System altered.
 
SQL> alter system set "_db_writer_coalesce_area_size"=16777216 scope=spfile;
 
System altered.
 
SQL> alter system set "_disable_interface_checking"=TRUE scope=spfile;
 
System altered.
 
SQL> alter system set "_ENABLE_NUMA_SUPPORT"=FALSE scope=spfile;
 
System altered.
 
SQL> alter system set "_FILE_SIZE_INCREASE_INCREMENT"=2143289344 scope=spfile;
 
System altered.
 
SQL> alter system set "_gc_policy_time"=0 scope=spfile;
 
System altered.
 
SQL> alter system set "_gc_undo_affinity"=FALSE scope=spfile;
 
System altered.
 
SQL> alter system set db_block_checking='FULL' scope=spfile;
 
System altered.
 
SQL> alter system set db_block_checksum='FULL' scope=spfile;
 
System altered.
 
SQL> alter system set db_lost_write_protect='TYPICAL' scope=spfile;
 
System altered.
 
SQL> alter system set sql92_security=TRUE scope=spfile;
 
System altered.
 
SQL> alter system set use_large_pages='only' scope=spfile;
 
System altered.

The “_fix_control” parameter is specific to Oracle 12c and is not compatible with Oracle 11g. See Doc ID 2145105.1.

Register database in grid

After applying specific ODA instance parameters, we can register the database in the grid and start it with the grid.


oracle@ODA02:/u01/app/oracle/product/11.2.0.4/dbhome_1/dbs/ [ODA_DBNAME] srvctl add database -d ODA_DBNAME_RZA -o /u01/app/oracle/product/11.2.0.4/dbhome_1 -c SINGLE -i ODA_DBNAME -x RZA-ODA02 -m ksbl.local -p /u02/app/oracle/oradata/ODA_DBNAME_RZA/dbs/spfileODA_DBNAME.ora -r PRIMARY -s OPEN -t IMMEDIATE -n ODA_DBNAME -j "/u02/app/oracle/oradata/ODA_DBNAME_RZA,/u03/app/oracle"
 
SQL> shutdown immediate
Database closed.
Database dismounted.
ORACLE instance shut down.
 
oracle@ODA02:/u01/app/oracle/product/11.2.0.4/dbhome_1/dbs/ [ODA_DBNAME] srvctl start database -d ODA_DBNAME_RZA
 
oracle@ODA02:/u01/app/oracle/product/11.2.0.4/dbhome_1/dbs/ [ODA_DBNAME] srvctl status database -d ODA_DBNAME_RZA
Instance ODA_DBNAME is running on node rza-oda02
 
oracle@ODA02:/u01/app/oracle/product/11.2.0.4/dbhome_1/dbs/ [ODA_DBNAME] ODA_DBNAME
********* dbi services Ltd. *********
STATUS : OPEN
DB_UNIQUE_NAME : ODA_DBNAME_RZA
OPEN_MODE : READ WRITE
LOG_MODE : ARCHIVELOG
DATABASE_ROLE : PRIMARY
FLASHBACK_ON : NO
FORCE_LOGGING : YES
VERSION : 11.2.0.4.0
*************************************

We can check that everything works properly:

oracle@ODA02:/u01/app/oracle/product/11.2.0.4/dbhome_1/dbs/ [ODA_DBNAME] srvctl stop database -d ODA_DBNAME_RZA
 
oracle@ODA02:/u01/app/oracle/product/11.2.0.4/dbhome_1/dbs/ [ODA_DBNAME] srvctl status database -d ODA_DBNAME_RZA
Instance ODA_DBNAME is not running on node rza-oda02
 
oracle@ODA02:/u01/app/oracle/product/11.2.0.4/dbhome_1/dbs/ [ODA_DBNAME] ODA_DBNAME
********* dbi services Ltd. *********
STATUS : STOPPED
*************************************
 
oracle@ODA02:/u01/app/oracle/product/11.2.0.4/dbhome_1/dbs/ [ODA_DBNAME] srvctl start database -d ODA_DBNAME_RZA
 
oracle@ODA02:/u01/app/oracle/product/11.2.0.4/dbhome_1/dbs/ [ODA_DBNAME] srvctl status database -d ODA_DBNAME_RZA
Instance ODA_DBNAME is running on node rza-oda02
 
oracle@ODA02:/u01/app/oracle/product/11.2.0.4/dbhome_1/dbs/ [ODA_DBNAME] ODA_DBNAME
********* dbi services Ltd. *********
STATUS : OPEN
DB_UNIQUE_NAME : ODA_DBNAME_RZA
OPEN_MODE : READ WRITE
LOG_MODE : ARCHIVELOG
DATABASE_ROLE : PRIMARY
FLASHBACK_ON : NO
FORCE_LOGGING : YES
VERSION : 11.2.0.4.0
*************************************

Conclusion

Going through a physical standby database, I was able to successfully migrate the Windows databases to Linux on the ODA. I migrated source 11.2.0.4 databases, and also an 11.2.0.3 database by adding an upgrade step to the process.

Cet article Migrating Oracle database from windows to ODA est apparu en premier sur Blog dbi services.

How to enlarge an #Exasol database by adding a node

The Oracle Instructor - Mon, 2019-09-30 08:46

Adding a cluster node will not only increase the available storage capacity but also the total compute power of your cluster. This scale-out is quite a common operation for Exasol customers.

My example shows how to change an existing 2+1 cluster into a 3+0 cluster. Before you can enlarge the database with an active node, this node has to be a reserve node first. See here how to add a reserve to a 2+0 cluster. Of course you can add another reserve node to change from 3+0 to 3+1 afterwards. See here if you wonder why you may want to have a reserve node at all.

Initial state – reserve node is present

I start with a 2+1 cluster – 2 active nodes and 1 reserve node:

For later comparison, let’s look at the distribution of rows of one of my tables:

The rows are roughly evenly distributed across the two active nodes.
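For reference, the distribution shown in the screenshot can be checked with a query along these lines (the schema and table names are assumptions):

SELECT IPROC() AS node_id, COUNT(*) AS row_cnt
FROM my_schema.my_table
GROUP BY IPROC();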

Before you continue, it would be a good idea to take a backup on a remote archive volume now – just in case.

Shutdown database before volume modification

A data volume used by a database cannot be modified while that database is up, so shut it down first:

After going to the Storage branch in EXAoperation, click on the data volume:

Then click on Edit:

Decrease volume redundancy to 1

Change the redundancy from 2 to 1, then click Apply:

Why is the redundancy reduced from 2 to 1 here? Let’s try to explain that. Initially, I had 2 active nodes with a volume using redundancy 2:

A and B are master segments while A’ and B’ are mirrored segments. If I could add a node to this volume keeping the existing segments, it would look like this:

Of course this would be a bad idea. The redundancy is reduced to 1 before the new node is added to the volume:

Only distributed master segments with no mirrors at first. Then the redundancy is again increased to 2:

This way, every master segment can be mirrored on a neighbor node. That’s why the redundancy needs to be reduced to 1.

Add new node to volume

After having decreased the volume redundancy to 1, click Edit on the volume detail page again and add n13 as a new master node to the volume and click Apply:

Increase redundancy to 2

Now click Edit again and increase the redundancy to 2:

The state of the volume now shows as RECOVERING – don’t worry, it just means that the mirrored segments are being created.

Enlarge the database

Now click on the database link on the EXASolution screen:

Select the Action Enlarge and click Submit:

Enter 1 and click Apply:

The database detail page looks like this now:

Reorganize

Technically, this is a 3+0 cluster now – but the third node doesn’t contain any data yet. If we look at the same table as before, we see that no rows are on the new node:

To change that, a REORGANIZE needs to be done. Either on the database layer, on schema layer or on table layer. Most easy to perform is REORGANIZE DATABASE:

Took me about 10 minutes on my tiny database. That command re-distributes every table across all cluster nodes and can be time-consuming with high data volume. While a table is being reorganized, it is locked against DML. You can monitor the ongoing reorganization by selecting from EXA_DBA_PROFILE_RUNNING in another session.
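For reference, the statements involved would look roughly like this (REORGANIZE also exists on schema and table level):

REORGANIZE DATABASE;

-- in a second session, watch the progress:
SELECT * FROM EXA_DBA_PROFILE_RUNNING;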

Final state

Let’s check the distribution of the previous table again:

As you can see above, now there are rows on the added node. Also EXAoperation confirms that the new node is not empty any more:

On a larger database, you would see that the volume usage per node is lower than before and that every node holds roughly the same amount of data. For failsafety, you could add another reserve node now.

Summary of steps
  1. Add a reserve node (if not yet existing)
  2. Take a backup on a remote archive volume
  3. Shutdown database
  4. Decrease volume redundancy to 1
  5. Add former reserve node as new master node to the volume
  6. Increase redundancy to 2
  7. Enlarge database by 1 active node
  8. Reorganize
  9. Add another reserve node (optionally)
Categories: DBA Blogs
