Feed aggregator

OOW19 Review: Oracle Analytics Deep Dive

Rittman Mead Consulting - Wed, 2019-09-25 05:41

In my previous blog I outlined the global Oracle news, such as the Always Free Tier, the new datacenter plan and the set of new tools for Data Science. Today's blog is dedicated to all the news announced regarding Oracle Analytics in any of its flavours: Cloud, Server or Applications.

Oracle Analytics Server

OAS is the long-awaited replacement for OBIEE 12c on-premises and promises functional parity with OAC. The current official ETA is Fiscal Year 2020, and it will be available to customers as a free upgrade. With OAS, all customers still on-premises will get the following benefits:

  • Almost 1-to-1 feature parity with OAC
  • Complete compatibility with OAC
  • Simplified cloud migration and better support for hybrid deployments

A related announcement for on-premises customers regards licensing: there is only a single license to purchase for OAS, which includes all features; no separate option for Mobile or Self-Service Data Visualization is needed!

Oracle Analytics for Applications

This represents the new incarnation of BIApps, completely redesigned specifically for Fusion Apps. Like its predecessor, OAX (this is the acronym) is a packaged, ready-to-use solution with pre-built ETLs and analytics content like the RPD, dashboards, analyses and KPIs. Under the covers it uses Oracle Autonomous Data Warehouse and Oracle Data Integrator Cloud. OAX is also extensible, by bringing additional datasets into ADW and extending the semantic model and catalog.

Oracle Analytics Cloud

Several enhancements were announced, especially during Gabby Rubin's (VP of Oracle Analytics Product Management) Strategy & Roadmap Session. New features will be available in most of the areas of the tool, including the core of the centralized reporting: the RPD.

Data Preparation

New options will be available in the Data Preparation/Enrichment phase such as:

  • Custom Enrichments based on a pre-existing set of values, e.g. enriching PRODUCT_ID with fields coming from a standard Product dimension. This is an interesting idea to enable standardization of dimensions across reporting without forcing people to write SQL or to know where the standard information comes from.
  • Forced Enrichments/Masking: as Administrators, we could enforce some transformations, like obfuscation of fields that may contain sensitive data such as credit card numbers.

Natural Language Generation

The Natural Language view is already present in the current version of OAC; there is a plan to enhance this visualization by adding more options to the settings panel for grouping and trending analysis.


Spatial Analytics in OAC

A few weeks ago I wrote about Oracle Spatial Studio, a tool designed to provide advanced visual Spatial Analytics. This tool will remain and progress over time; OAC will not cover all the specific use cases of Spatial Studio. However, OAC will enhance its own spatial capabilities, for example:

  • Providing accurate information about row geo-location: e.g. how many rows were correctly located, how many errors occurred, and menus to fix the value-to-location association.
  • Providing spatial functions in the front end: an end user will easily be able to calculate the distance between points on a map by writing a simple Logical SQL statement. This option will probably appear in the RPD first (check the Twitter thread below).

Yeah!Now you can natively perform #spatialanalytics on #oac! #geospatial functions are available in the RPD and can be exposed to #OracleDataVisualization! pic.twitter.com/g5q3Lf9CiG

— Francesco Tisiot (@FTisiot) September 16, 2019

As you can see, calculating the distance will just be a matter of having the correct dataset and calling a GeometryDistance function.
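
The exact syntax was not shown during the session, so the following Logical SQL is only a hypothetical sketch based on the tweet above; the subject area and column names are made up for illustration:

SELECT
   "Store"."Store Name",
   GeometryDistance("Store"."Location", "Customer"."Location") AS "Distance"
FROM "Retail - Sales"

If the function ends up exposed in the RPD as announced, an analysis or DV project could then reference the resulting logical column like any other measure.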

Connectivity and Security

One of OAC's missions is to become the analytics platform on top of any type of datasource. The plan is to expand the list of connectors and of security/configuration options like SSL or Kerberos. There is also a roadmap to extend the Data Gateway capabilities to query non-Oracle databases.

Modeling capabilities

In OAC we were used to either the classic RPD approach or the self-service Data Sets. There is news coming for both approaches:

  • A new cloud web-based Modeler with the objective of functional parity with the Admin Tool, so capable of handling more complex designs than the current light data modeler. I believe this will also be an effort to adapt the RPD development process to current standards of concurrent development, versioning and storage format.
  • A new Self Service Data Model solution to build light self-service models, allowing end users to evolve datasets into proper models that are shareable and optimized for reporting.

I like the idea of allowing both the top-down (centralized) and the bottom-up (self-service) approach to data modeling. This gives clients flexibility in their analytical approach while still allowing centralized rules (e.g. a single source of truth) to be enforced when needed.

Unified User Experience and Layout Customizations

As of now, the old "Answers and Dashboards" and the new "Data Visualization Projects" are almost completely separate products, each with its own home page and layout. In the next releases the two worlds will start combining, with a single home page and a similar look and feel.

In other news, highly requested by end users is the possibility of customizing almost any option of the layout: from font type and size to the colors of any object visible in a project.

Machine Learning Integration

As discussed in the previous OOW review post, in the future OAC will be able to use models built in other tools such as Oracle Machine Learning in the Autonomous Data Warehouse or Oracle Data Science. This provides an end-to-end Data Science story from Data Analyst to Data Scientist, all with a simple, secure, highly configurable and performant toolset.


As you can see, a lot of news is coming across various aspects of the tool: on-premises functional parity, a new packaged solution for Fusion Apps, and a host of features enhancing OAC functionality and customization options.

What do you think? Is this the right direction? Do you feel there is something missing?

Categories: BI & Warehousing

MobaXterm 12.2

Tim Hall - Wed, 2019-09-25 02:06

In another “the rest of the world ceases to exist in the lead up to OpenWorld” moment, I missed the release of MobaXterm 12.2.

The downloads and changelog are in the usual places.

For Windows users who, like me, spend most of the day connecting to machines via SSH, this is the best tool I’ve found.

Cheers

Tim…


500 blog posts (in 12 years)

Dietrich Schroff - Tue, 2019-09-24 15:30
Last week I published my 500th post - not really "run like hell", but I think better than nothing.
Here are some statistics:

I think I will continue for another 500. The exciting question is whether I will find enough topics that I am interested in...

The Product Concept: What does it mean in 2019?

VitalSoftTech - Tue, 2019-09-24 10:41

If you’ve ever taken a marketing course, you’ve likely heard about the “product concept.” However, there may be many who are unaware of what it is and what it entails. The advertising lexicon is getting broader by the day, and with each passing minute, more words for different concepts are being introduced. Which brings us […]

The post The Product Concept: What does it mean in 2019? appeared first on VitalSoftTech.

Categories: DBA Blogs

PFCLScan - Version 3.0

Pete Finnigan - Tue, 2019-09-24 09:26
We are very excited to announce that we are currently working on version 3.0 of PFCLScan, our flagship database security scanner for the Oracle database. We will be ready for sale in September and this development is going really....[Read More]

Posted by Pete On 11/07/19 At 03:33 PM

Categories: Security Blogs

Basic VMware Harbor Registry usage for Pivotal Container Service (PKS)

Pas Apicella - Tue, 2019-09-24 01:25
VMware Harbor Registry is an enterprise-class registry server that stores and distributes container images. Harbor allows you to store and manage images for use with Enterprise Pivotal Container Service (Enterprise PKS).

In this simple example we show what you need at a minimum to get an image from Harbor deployed onto your PKS cluster. First, we need the following to be able to run this basic demo.

Required Steps

1. PKS installed with Harbor Registry tile added as shown below


2. VMware Harbor Registry integrated with Enterprise PKS as per the link below. The most important step is "Import the CA Certificate Used to Sign the Harbor Certificate and Key to BOSH". You must complete that prior to creating a PKS cluster.

https://docs.pivotal.io/partners/vmware-harbor/integrating-pks.html

3. A PKS cluster created. You must have completed step #2 before you create the cluster

https://docs.pivotal.io/pks/1-4/create-cluster.html

$ pks cluster oranges

Name:                     oranges
Plan Name:                small
UUID:                     21998d0d-b9f8-437c-850c-6ee0ed33d781
Last Action:              CREATE
Last Action State:        succeeded
Last Action Description:  Instance provisioning completed
Kubernetes Master Host:   oranges.run.yyyy.bbbb.pivotal.io
Kubernetes Master Port:   8443
Worker Nodes:             4
Kubernetes Master IP(s):  1.1.1.1
Network Profile Name:

4. Docker Desktop Installed on your local machine



Steps

1. First let's log into Harbor and create a new project. Make sure you record your username and password you have assigned for the project. In this example I make the project public.




Details

  • Project Name: cto_apj
  • Username: pas
  • Password: ****

2. Next, in order to be able to connect to our registry from our local laptop, we will need to trust its certificate.

The VMware Harbor registry isn't running on a public domain and is using a self-signed certificate, so we need to trust that self-signed certificate on our Mac OSX client, given I am using Docker for Mac. This link shows how to add a self-signed registry certificate to Linux and Mac clients.

https://blog.container-solutions.com/adding-self-signed-registry-certs-docker-mac

You can download the self-signed cert from Pivotal Ops Manager as shown below.


With all that in place, the following command is all I need to run.

$ sudo security add-trusted-cert -d -r trustRoot -k /Library/Keychains/System.keychain ca.crt

3. Now let's log in to the registry using a command as follows

$ docker login harbor.haas-bbb.yyyy.pivotal.io -u pas
Password:
Login Succeeded

4. I have an image sitting on Docker Hub itself, so let's tag it and then push it to our VMware Harbor registry as shown below

 $ docker tag pasapples/customer-api:latest harbor.haas-bbb.yyyy.io/cto_apj/customer-api:latest
 $ docker push harbor.haas-bbb.yyyy.io/cto_apj/customer-api:latest


5. Now let's create a new secret for accessing the container registry

$ kubectl create secret docker-registry regcred --docker-server=harbor.haas-bbb.yyyy.io --docker-username=pas --docker-password=**** --docker-email=papicella@pivotal.io

6. Now let's deploy this image to our PKS cluster using a deployment YAML file as follows

customer-api.yaml

apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: customer-api
spec:
  replicas: 1
  template:
    metadata:
      labels:
        app: customer-api
    spec:
      containers:
        - name: customer-api
          image: harbor.haas-206.pez.pivotal.io/cto_apj/customer-api:latest
          ports:
            - containerPort: 8080

---
apiVersion: v1
kind: Service
metadata:
  name: customer-api-service
  labels:
    name: customer-api-service
spec:
  ports:
    - port: 80
      targetPort: 8080
      protocol: TCP
  selector:
    app: customer-api
  type: LoadBalancer

7. Deploy as follows

$ kubectl create -f customer-api.yaml

8. You should see the POD and SERVICE running as follows

$ kubectl get pods | grep customer-api
customer-api-7b8fcd5778-czh46                    1/1     Running   0          58s

$ kubectl get svc | grep customer-api
customer-api-service            LoadBalancer   10.100.2.2    10.195.1.1.80.5   80:31156/TCP 


More Information

PKS Release Notes 1.4
https://docs.pivotal.io/pks/1-4/release-notes.html

VMware Harbor Registry
https://docs.vmware.com/en/VMware-Enterprise-PKS/1.4/vmware-harbor-registry/GUID-index.html

Categories: Fusion Middleware

Impact of Force Logging

Bobby Durrett's DBA Blog - Mon, 2019-09-23 17:29

I am working on upgrading an Oracle database from 11.2.0.4 to 19c and migrating it from HP Unix to Linux. This 15-terabyte database is too large to copy from the old to the new system during our normal weekend downtime window. It also has a ton of weekend batch updates that overlap the normal weekend change window so it would be best for our business processing if the cut over from the old to the new system was as fast as possible.

I want to use GoldenGate to minimize the downtime for the cutover using an approach similar to what is described in this Oracle document:

Zero Downtime Database Upgrade Using Oracle GoldenGate

You start GoldenGate collecting changes on the current production system and then take your time copying the 15 TB of data from the old to new system. Once you are done with the initial load you apply the changes that happened in the meanwhile. Finally, you cut over to the new system. You could even switch the direction of the replication to push changes on the new production system back to the old system to allow for a mid-week back out several days after your upgrade. Pretty cool. A teammate of mine successfully used this approach on an important database some years back.

But the database that I am working on now, unlike the one that my colleague worked on, has a lot of tables set to nologging. Under the right conditions inserts into tables set to nologging are not written to the redo logs and will be missed by GoldenGate. This Oracle article recommends setting your database to FORCE LOGGING so GoldenGate will not miss any updates:

In order to ensure that the required redo information is contained in the Oracle redo logs for segments being replicated, it is important to override any NOLOGGING operations which would prevent the required redo information from being generated. If you are replicating the entire database, enable database force logging mode.

Oracle GoldenGate Performance Best Practices

We could also switch all our application tables and partitions in the source system to logging, but we have so many that I think we would just set the whole database to force logging.
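
For reference, switching the database itself is a one-line change each way; these are standard commands and not specific to this system:

-- enable force logging for the whole database
ALTER DATABASE FORCE LOGGING;

-- verify the current setting
SELECT force_logging FROM v$database;

-- back it out if needed
ALTER DATABASE NO FORCE LOGGING;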

But the big question which I touched on in my previous post is whether force logging will slow down our weekend batch processing so much that we miss our deadlines for weekend processing to complete and affect our business in a negative way. The more I investigate it the more convinced I am that force logging will have minimal impact on our weekend jobs. This is an unexpected and surprising result. I really thought that our batch processing relied heavily on nologging writes to get the performance they need. It makes me wonder why we are using nologging in the first place. It would be a lot better for backup and recovery to have all our inserts logged to the redo logs. Here is a nice Oracle Support document that lays out the pros and cons of using nologging:

The Gains and Pains of Nologging Operations (Doc ID 290161.1)

I have an entry in my notes for this upgrade project dated 8/26/19 in which I wrote “Surely force logging will bog the … DB down”. Now I think the opposite. So, what changed my mind? It started with the graph from the previous post:

Graph From Previous Post with Little Direct Writes I/O

I was really surprised that the purple line was so low compared to the other two. But I felt like I needed to dig deeper to make sure that I was not just misunderstanding these metrics. The last thing I want to do is make some production change that slows down our weekend processes that already struggle to meet their deadlines. I was not sure what other metrics to look at since I could not find something that directly measures non-logged writes. But then I got the idea of using ASH data.

In my “Fast way to copy data into a table” post I said that to copy data quickly between two Oracle tables “you want everything done nologging, in parallel, and using direct path”. I may have known then and forgotten but working on this now has me thinking about the relationship between these three ways of speeding up inserts into tables. I think there are the following two dependencies:

  • Nologging requires direct path
  • Parallel requires direct path

Oracle document “Check For Logging / Nologging On DB Object(s) (Doc ID 269274.1)” says the first one. For the second one, if you have a target table set to parallel degree > 1 and you enable parallel DML, you get direct path writes when you insert into the target table.
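
To make those two dependencies concrete, here is a rough sketch with made-up table names; neither statement comes from our actual batch jobs:

-- nologging only skips redo when the insert is direct path, e.g. via the APPEND hint
INSERT /*+ APPEND */ INTO target_table SELECT * FROM source_table;
COMMIT;

-- with parallel DML enabled and a parallel target table, a plain insert also goes direct path
ALTER SESSION ENABLE PARALLEL DML;
INSERT INTO target_table SELECT * FROM source_table;
COMMIT;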

From all this I got the idea to look for direct path write waits in the ASH views. I could use ASH to identify insert statements that are using direct path. Then I could check that the target tables or partitions are set to nologging. Then I would know they are doing non-logged writes even if I did not have a metric that said so directly.

directwritesql.sql looked at all the SQL statements that had direct write waits over the entire 6 weeks of our AWR history. The output looks like this:

select
sql_id,count(*) active
from DBA_HIST_ACTIVE_SESS_HISTORY a
where
event = 'direct path write'
group by sql_id
order by active desc;

SQL_ID            ACTIVE
------------- ----------
2pfzwmtj41guu         99
g11qm73a4w37k         88
6q4kuj30agxak         58
fjxzfp4yagm0w         53
bvtzn333rp97k         39
6as226jb93ggd         38
0nx4fsb5gcyzb         36
6gtnb9t0dfj4w         31
3gatgc878pqxh         31
cq433j04qgb18         25

These numbers startled me because they were so low. Each entry in DBA_HIST_ACTIVE_SESS_HISTORY represents 10 seconds of activity. So over 6 weeks our top direct path write waiter waited 990 seconds. Given that we have batch processes running full out for a couple of days every weekend 990 seconds over 6 weekends is nothing.

I took the top SQL ids and dumped out the SQL text to see what tables they were inserting into. Then I queried the LOGGING column of dba_tables and dba_tab_partitions to see which insert was going into a table or partition set to nologging.

select logging,table_name
from dba_tables
where owner='MYOWNER' and
table_name in
(
... tables inserted into ...
)
order by table_name;


select logging,table_name,count(*) cnt
from dba_tab_partitions
where table_owner='MYOWNER' and
table_name in
(
... tables inserted into ...
)
group by logging,table_name
order by table_name,cnt desc;

This simple check for LOGGING or NOLOGGING status eliminated several of the top direct path write waiters. This process reduced the list of SQL ids down to three top suspects:

SQL_ID            ACTIVE
------------- ----------
cq433j04qgb18         25
71sr61v1rmmqc         17
0u0drxbt5qtqk         11

These are all inserts that are not logged. Notice that the most active one has 250 seconds of direct path write waits over the past 6 weeks. Surely enabling force logging could not cause more than about that much additional run time over the same length of time.

I got the idea of seeing what percentage of the total ASH time was direct path write waits for each of these SQL statements. In every case it was small:

cq433j04qgb18

TOTAL_SAMPLE_COUNT DW_SAMPLE_COUNT DW_SAMPLE_PCT
------------------ --------------- -------------
              2508              25    .996810207
 
71sr61v1rmmqc

TOTAL_SAMPLE_COUNT DW_SAMPLE_COUNT DW_SAMPLE_PCT
------------------ --------------- -------------
              1817              17    .935608145

0u0drxbt5qtqk

TOTAL_SAMPLE_COUNT DW_SAMPLE_COUNT DW_SAMPLE_PCT
------------------ --------------- -------------
              8691              11    .126567714

TOTAL_SAMPLE_COUNT was all the samples for that SQL_ID value for the past 6 weeks. DW_SAMPLE_COUNT is the count of those samples that were direct path write waits, the same figure we already talked about. DW_SAMPLE_PCT is the percentage of the total samples that were direct write wait events. They were all around 1% or lower, which means that write I/O time was only about 1% of the entire run time of these inserts. The rest was query processing, best I can tell.
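
A query along these lines produces those three columns for one SQL_ID at a time; this is a reconstruction of the idea, not the exact script I ran:

select
count(*) total_sample_count,
sum(case when event = 'direct path write' then 1 else 0 end) dw_sample_count,
100*sum(case when event = 'direct path write' then 1 else 0 end)/count(*) dw_sample_pct
from DBA_HIST_ACTIVE_SESS_HISTORY
where sql_id = 'cq433j04qgb18';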

Also I used my sqlstat3 script to look at the average run time for these inserts:

SQL_ID        PLAN_HASH_VALUE END_INTERVAL_TIME         EXECUTIONS_DELTA Elapsed Average ms CPU Average ms IO Average ms Cluster Average ms Application Average ms Concurrency Average ms Average buffer gets Average disk reads Average rows processed
------------- --------------- ------------------------- ---------------- ------------------ -------------- ------------- ------------------ ---------------------- ---------------------- ------------------- ------------------ ----------------------
71sr61v1rmmqc      3333358322 01-SEP-19 12.00.46.381 PM                1         2768879.28         892080    207085.624                  0                      0                  3.817             9028323            1045428               19337954
71sr61v1rmmqc      3333358322 08-SEP-19 10.00.43.551 AM                0         264428.594          98840     28257.339                  0                      0                  3.657              177736             143345                      0
71sr61v1rmmqc      3333358322 08-SEP-19 11.00.49.648 AM                1          2352509.9         767440    160933.191                  0                      0                      0             8729437             791837               19110340
71sr61v1rmmqc      3333358322 15-SEP-19 11.00.03.027 AM                1         3090070.21         904310    190593.062                  0                      0                  2.192             9095421             949579               19470026

SQL_ID        PLAN_HASH_VALUE END_INTERVAL_TIME         EXECUTIONS_DELTA Elapsed Average ms CPU Average ms IO Average ms Cluster Average ms Application Average ms Concurrency Average ms Average buffer gets Average disk reads Average rows processed
------------- --------------- ------------------------- ---------------- ------------------ -------------- ------------- ------------------ ---------------------- ---------------------- ------------------- ------------------ ----------------------
0u0drxbt5qtqk       382840242 01-SEP-19 02.00.23.436 AM                1         29281391.6        3211050    16624311.7                  0                      0              99532.905            37076159           14440303               24479240
0u0drxbt5qtqk       382840242 08-SEP-19 02.00.11.424 AM                1         3871668.37         424670    2563007.61                  0                      0               1236.003             4622248            2457057                2468983
0u0drxbt5qtqk       382840242 15-SEP-19 03.00.12.349 AM                0         5161808.16         615520    3358994.55                  0                      0              20656.365             6251060            2801828                      0
0u0drxbt5qtqk       382840242 15-SEP-19 04.00.33.661 AM                1         2412910.02         240650    1741053.89                  0                      0                699.717             3050529            1542895                4638794

SQL_ID        PLAN_HASH_VALUE END_INTERVAL_TIME         EXECUTIONS_DELTA Elapsed Average ms CPU Average ms IO Average ms Cluster Average ms Application Average ms Concurrency Average ms Average buffer gets Average disk reads Average rows processed
------------- --------------- ------------------------- ---------------- ------------------ -------------- ------------- ------------------ ---------------------- ---------------------- ------------------- ------------------ ----------------------
cq433j04qgb18      1267318024 02-SEP-19 10.00.57.871 PM                1          77132.892          51180     10719.692                  0                      0                  2.003              460346              47055                 772468
cq433j04qgb18      1267318024 03-SEP-19 10.00.55.601 PM                1         116064.154          68350      9808.483                  0                      0              15746.609              911571              20422                1256808
cq433j04qgb18      1267318024 04-SEP-19 10.00.31.071 PM                1         106594.074          64030      6328.462                  0                      0              15603.102              777779              14945                1561172
cq433j04qgb18      1267318024 05-SEP-19 10.00.13.265 PM                0          44435.247          31810      2760.438                  0                      0                365.132              139637               5111                 257770
cq433j04qgb18      1267318024 09-SEP-19 10.00.18.346 PM                1         791385.928         324050    171504.931                  0                      0               7484.358             6430665             600703               14262960
cq433j04qgb18      1267318024 10-SEP-19 10.00.29.224 PM                0         1685763.14         676210    304045.354                  0                      0                283.296            11884045             838290               16268667
cq433j04qgb18      1267318024 11-SEP-19 10.01.00.800 PM                0         369644.825         172120     42679.357                  0                      0                  3.929             2443772             151369                3901044
cq433j04qgb18      1267318024 12-SEP-19 10.00.28.499 PM                0          30381.614          25630      1191.884                  0                      0                 422.55               98580               3389                 184812
cq433j04qgb18      1267318024 13-SEP-19 10.00.07.502 PM                0         173286.567         109990     11461.865                  0                      0                 359.37             1475324              63073                2360818
cq433j04qgb18      1267318024 16-SEP-19 10.00.52.175 PM                1         190203.822          93680     47585.666                  0                      0                122.658             1221886             348327                2955258

These queries run at most a couple of hours. If direct path writes are 1% of their total run time, I estimated that force logging would add about 1% to the elapsed time or about 2 minutes per execution.

The final step was to try to run one of these top nologging I/O inserts in a test environment with and without force logging to see if the test matches the expected performance slowdown. I was not able to run 0u0drxbt5qtqk without setting up a more elaborate test with the development team. My test of cq433j04qgb18 ran considerably faster with force logging than without it so I think other factors were hiding whatever effect force logging had. But 71sr61v1rmmqc had some nice results that matched my estimates well. This is on a Delphix clone of production so the data was up to date with prod but the underlying I/O was slower.

71sr61v1rmmqc results running 5 times normal 5 times force logging

The individual run times are in seconds and the averages are listed in seconds and in minutes. I ran the insert 5 times with no force logging and 5 times with it alternating. I dropped the primary key and unique index of the target table to keep from getting constraint errors. I rolled back the insert each time. It averaged about 1.2 minutes more out of 40 minutes of run time which is about a 3% increase. My estimate from ASH was about 1% so this test matches that well.

The final test remains. In some upcoming production weekend, I will put in a change to flip the database to force logging and see how it goes. My tests were run on a test system with a different storage system and with no other activity. We might see different results on a heavily loaded system with a queue for the CPU. But, after all this analysis and testing I feel confident that we won’t be able to tell that force logging is enabled. Unfortunately, we sometimes have performance issues anyway due to plan changes or data volume so the force logging switch might get blamed. But I feel confident enough to push for the final test and I think we ultimately will pass that test and be able to use force logging to enable GoldenGate to support a short cut over time for our migration and upgrade project.

Bobby

P.S. A good question came in as a comment about direct path write waits and asynchronous I/O. The system I am testing on does not support async I/O because it is HP Unix and a filesystem. This older blog post talks a bit about async and direct I/O on HP-UX:

https://www.bobbydurrettdba.com/2013/04/26/db_writer_processes-dbwr_io_slaves-with-no-asynch-io-on-hp-ux/

So, your mileage may vary (YMMV) if you do these same queries on a system with asynchronous writes. Linux filesystems support async writes and on HP-UX our RAC system on ASM supports it. It is one of the challenges of writing blog posts. Other people may be in different situations than I am.

Categories: DBA Blogs

Comprehensive TensorFlow.js Example

Andrejus Baranovski - Mon, 2019-09-23 14:45
I have implemented an app which includes TensorFlow.js API usage. First I will walk you through the app functionality and then will dive into implementation details. This app implements a business report execution time prediction use case (this time in JavaScript), which was explained in my previous post — Report Time Execution Prediction with Keras and TensorFlow.

Read more in my Towards Data Science post.


OOW 2019 Review: Free Tier, New Datacenters and a New End-to-End Path for Analytics and Data Science

Rittman Mead Consulting - Mon, 2019-09-23 09:41

In the Oracle world, last week was "the week", with Oracle Openworld 2019 happening in San Francisco. A week full of exciting news, some of it associated with words like "Free", never heard before in any Oracle-related topic. This blog post will go into detail on some of the news, with a special focus on the Analytics and Data Science topics.

Oracle Cloud Free Tier

Let's start with the big news: Oracle Cloud Free Tier! A set of services that can ALWAYS be used for free, which includes Oracle's best offerings in the database space like ATP (Autonomous Transaction Processing) and ADW (Autonomous Data Warehouse), as well as Compute, Storage and additional services for networking, monitoring and notifications.


This is huge news in the Oracle ecosystem since it enables everyone to start using the products without needing a credit card! The Always Free services can also be used in conjunction with the 30-day Free Trial (with its associated $300 in credits) to experience the full set of Oracle products without spending a single cent.


An additional interesting point (compared to Oracle's previous developer licensing models) is that there is nothing in the licensing terms blocking any customer from using the free tier for production usage! This means that, if the resources provided satisfy the business requirements, anyone could potentially run production applications directly on the free tier! And, in cases where an upscale is needed, Oracle will provide a very easy option to switch from a free instance to a paid one.

However, as one can expect, the free tier has limitations; for example, the databases will allow only 1 OCPU and 20GB of storage each. On top of the technical limitations, there is no support and no SLA for any of the products in the free tier. This means, for example, that in case of problems you'll not be able to open a ticket with Oracle Support. Something definitely to ponder when implementing a production system.

OCI Gen2 and New Datacenters

During his Keynote, Larry Ellison also announced the plan to launch 20 new OCI Gen2 datacenters in the next year! An average of a new datacenter every 23 days!

Taken from Oracle documentation

This is very impressive and, as mentioned during the Keynote, will mean Oracle overtaking Amazon in the number of datacenters. A particular mention also needs to be given to OCI Gen2, the new generation of Oracle Cloud Infrastructure. The first generation of OCI's mission was pay per usage: offering a set of services available on demand and payable by the hour. OCI Gen2 adds the autonomous features to Gen1: services are now self-managed, self-patched, self-monitored and self-secured, with no downtime required. OCI Gen2 removes a huge set of activities from the hands of administrators, taking human error out of the equation.

Analytics & Data Science

I had a talk on how to Become a Data Scientist with Oracle Analytics Cloud. The topic of Analytics & Data Science was really what interested me most, and my expectation for exciting news was met.

A whole new set of products will be shipped soon, making the whole Data Science experience more continuous and pervasive across Oracle products. Let's have a look at the news; I'll try to add links to the relevant sessions I attended.

  • Oracle Machine Learning: ML will be more pervasive in the Oracle Database; the Oracle Machine Learning Notebooks will be capable of handling R and Python in addition to SQL. All of this will be available on the Cloud Databases including ADW. A new Spark option is also coming, enabling Machine Learning on the BDA.
  • Oracle Data Science: This is a brand new product for Data Science collaboration, work in team on projects, with sharing and versioning options available out of the box.
  • Oracle Data Catalog: Again a new product, aimed at creating inventories of a company's data assets and making them searchable and easily usable by business users or data scientists.
  • Oracle Analytics Cloud: A lot of new announcements for this product which is mature and consolidated in the market, like Natural Language Generation or enhancements in the Data Enrichment, which I'll address in a separate blog post.

An interesting feature is AutoML, available both in Oracle Machine Learning and Oracle Data Science, which removes some barriers to Data Science by automating most of the steps in Machine Learning model creation, such as model and feature selection and hyper-parameter tuning.

Taken from Oracle ML presentation

You might notice several enhancements in different products. However, the key indicator of Oracle's Data Science maturity is the fact that all of the products above can be easily combined! Oracle Data Science will use Oracle Machine Learning optimizations when running on supported platforms, on top of datasets easily discoverable via Oracle Data Catalog. Machine Learning models developed with OML or ODS can then be exposed and used by Oracle Analytics Cloud. This provides an end-to-end path from data scientist to data analyst and business user, all within the same toolset and on top of the best hardware (with support for GPUs coming)!


All in all a great OOW full of exciting news: no more barriers to accessing Oracle Cloud with the free tier, 20 new datacenters coming in the next year, and a set of tools to perform Data Science, from the Analyst to the Data Scientist, in a collaborative and extremely performant way! If you want more news regarding Oracle Analytics Cloud, don't miss my next blog post (edit: available here)!

Categories: BI & Warehousing

Rolling Upgrade of a Galera Cluster with ClusterControl

Yann Neuhaus - Mon, 2019-09-23 03:29
Rolling Upgrade is easy

In this blog, I will show you how easy it is with Clustercontrol to perform a Galera cluster “Rolling Upgrade” without any loss of service.
Let’s say we want to upgrade from Percona XtraDB Cluster version 5.6 to 5.7.
The same procedure can be used to upgrade MariaDB (from 10.1 to 10.2 or 10.3)

Prerequisites

First of all, make sure that on your Galera cluster all nodes are synchronized.
From the Dashboard, on the Overview tab, in the Galera Nodes window, the figures in the "Last Committed" column must be identical on all 3 nodes.

Then disable the “cluster & node auto-recovery” from the GUI, either by clicking on both icons until they turn red or by temporarily setting the 2 following parameters on the clustercontrol server in the
/etc/cmon.d/cmon_N.cnf file (N stands for Cluster ID):
– enable_cluster_autorecovery=0
– enable_node_autorecovery=0
Don’t forget to restart the cmon service:
systemctl restart cmon
This is very important, even crucial, otherwise Clustercontrol will keep trying to restart the Galera node when you stop it during the upgrade process.

Now we have to put the first Galera node in maintenance mode for one hour.


The Cluster status bar should now look as follows.

Cluster Upgrade

Log on to the first master node using your favorite terminal emulator, “putty” or “MobaXterm”, open 2 sessions and stop the Percona service on the first node.
# service mysql status
SUCCESS! MySQL (Percona XtraDB Cluster) running (19698)
# service mysql stop
Shutting down MySQL (Percona XtraDB Cluster).. SUCCESS!
# service mysql status
ERROR! MySQL (Percona XtraDB Cluster) is not running

Now remove the existing Percona XtraDB Cluster and Percona XtraBackup packages.
#[root@node1 yum.repos.d]# yum remove percona-xtrabackup* Percona-XtraDB-Cluster*
Loaded plugins: fastestmirror, ovl
Setting up Remove Process
Resolving Dependencies
--> Running transaction check
---> Package Percona-XtraDB-Cluster-56.x86_64 1:5.6.44-28.34.1.el6 will be erased
---> Package Percona-XtraDB-Cluster-client-56.x86_64 1:5.6.44-28.34.1.el6 will be erased
---> Package Percona-XtraDB-Cluster-galera-3.x86_64 0:3.34-1.el6 will be erased
---> Package Percona-XtraDB-Cluster-server-56.x86_64 1:5.6.44-28.34.1.el6 will be erased
---> Package Percona-XtraDB-Cluster-shared-56.x86_64 1:5.6.44-28.34.1.el6 will be erased
---> Package percona-xtrabackup.x86_64 0:2.3.10-1.el6 will be erased
--> Finished Dependency Resolution
Removed:
Percona-XtraDB-Cluster-56.x86_64 1:5.6.44-28.34.1.el6 Percona-XtraDB-Cluster-client-56.x86_64 1:5.6.44-28.34.1.el6 Percona-XtraDB-Cluster-galera-3.x86_64 0:3.34-1.el6
Percona-XtraDB-Cluster-server-56.x86_64 1:5.6.44-28.34.1.el6 Percona-XtraDB-Cluster-shared-56.x86_64 1:5.6.44-28.34.1.el6 percona-xtrabackup.x86_64 0:2.3.10-1.el6
Complete!

Install the new packages
#[root@node1 yum.repos.d]# yum install Percona-XtraDB-Cluster-57
Resolving Dependencies
--> Running transaction check
---> Package Percona-XtraDB-Cluster-57.x86_64 0:5.7.26-31.37.1.el6 will be installed
--> Processing Dependency: Percona-XtraDB-Cluster-client-57 = 5.7.26-31.37.1.el6 for package: Percona-XtraDB-Cluster-57-5.7.26-31.37.1.el6.x86_64
--> Processing Dependency: Percona-XtraDB-Cluster-server-57 = 5.7.26-31.37.1.el6 for package: Percona-XtraDB-Cluster-57-5.7.26-31.37.1.el6.x86_64
--> Running transaction check
---> Package Percona-XtraDB-Cluster-client-57.x86_64 0:5.7.26-31.37.1.el6 will be installed
---> Package Percona-XtraDB-Cluster-server-57.x86_64 0:5.7.26-31.37.1.el6 will be installed
--> Processing Dependency: Percona-XtraDB-Cluster-shared-57 = 5.7.26-31.37.1.el6 for package: Percona-XtraDB-Cluster-server-57-5.7.26-31.37.1.el6.x86_64
--> Processing Dependency: percona-xtrabackup-24 >= 2.4.12 for package: Percona-XtraDB-Cluster-server-57-5.7.26-31.37.1.el6.x86_64
--> Processing Dependency: qpress for package: Percona-XtraDB-Cluster-server-57-5.7.26-31.37.1.el6.x86_64
--> Running transaction check
---> Package Percona-XtraDB-Cluster-shared-57.x86_64 0:5.7.26-31.37.1.el6 will be installed
---> Package percona-xtrabackup-24.x86_64 0:2.4.15-1.el6 will be installed
---> Package qpress.x86_64 0:11-1.el6 will be installed
--> Finished Dependency Resolution
Dependencies Resolved
Installed:
Percona-XtraDB-Cluster-57.x86_64 0:5.7.26-31.37.1.el6
Dependency Installed:
Percona-XtraDB-Cluster-client-57.x86_64 0:5.7.26-31.37.1.el6 Percona-XtraDB-Cluster-server-57.x86_64 0:5.7.26-31.37.1.el6
Percona-XtraDB-Cluster-shared-57.x86_64 0:5.7.26-31.37.1.el6 percona-xtrabackup-24.x86_64 0:2.4.15-1.el6
qpress.x86_64 0:11-1.el6
Complete!

Start the node outside the cluster (in standalone mode) by setting the wsrep_provider variable to none.
$ ps -edf|grep -i mysql
$ mysqld --user=mysql --wsrep-provider='none'

Now run mysql_upgrade in the second session.
[root@node1 /]# mysql_upgrade -u root -p
Enter password:
Checking if update is needed.
Checking server version.
Running queries to upgrade MySQL server.
Checking system database.
mysql.columns_priv OK
mysql.db OK
mysql.engine_cost OK
mysql.user OK
Upgrading the sys schema.
Checking databases.
sys.sys_config OK
Upgrade process completed successfully.
Checking if update is needed

When the upgrade is over, stop the mysqld process.
You can either kill the mysqld process ID or use mysqladmin shutdown with the MySQL root user credentials.
$ mysqladmin shutdown -uroot -p
Now you can restart the upgraded node to join the Galera cluster in the first session.
$ service mysql start
Starting Percona-Xtradb-server.190612 13:04:33 mysqld_safe Logging to '/var/log/mysql/mysqld.log'.
190612 13:04:33 mysqld_safe Starting mysqld daemon with databases from /var/lib/mysql
.. SUCCESS!

Post-exec tasks

From the GUI, disable the maintenance mode and check the new version by logging in to the instance.
[root@node1 /]# mysql -u root -p
Enter password:
Welcome to the MySQL monitor. Commands end with ; or \g.
Your MySQL connection id is 64
Server version: 5.7.27-30-57 Percona XtraDB Cluster (GPL), Release rel30, Revision 64987d4, WSREP version 31.39, wsrep_31.39
Once the first node is upgraded and back in the cluster, check that it has fully resynchronized before moving on to the next one.
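
A quick way to check this (not part of the original procedure) is to query the standard Galera status variables on the upgraded node:

SHOW GLOBAL STATUS LIKE 'wsrep_local_state_comment';  -- should report Synced
SHOW GLOBAL STATUS LIKE 'wsrep_cluster_size';         -- should be back to 3
SHOW GLOBAL STATUS LIKE 'wsrep_ready';                -- should be ON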

Now you can repeat exactly the same procedure for the other nodes of the cluster. At the end, Clustercontrol should display the same version for all nodes.

Conclusion

Rolling upgrade of a Galera cluster with Clustercontrol is really easy and fast, with little or no impact on the service.

The article Rolling Upgrade of a Galera Cluster with ClusterControl appeared first on Blog dbi services.

Free Oracle Cloud: 7. Setup a web server on the Virtual Machine

Dimitri Gielis - Sun, 2019-09-22 05:42
This post is part of a series of blog posts on the Best and Cheapest Oracle APEX hosting: Free Oracle Cloud.

In this blog post we will configure a web server on our Compute VM Instance. This allows us to host some websites and have a custom URL for our Oracle APEX instance and applications.

Let's start by connecting to our VM:

From a Terminal connect to your Oracle Cloud VM:

ssh -i ssh_key opc@public_ip

The first thing we do is change to the root user; as we want to install a web server, it's easier to do it as root. Alternatively, you can add sudo in front of every command.

We logged in as the OPC user; to become the ROOT user we do:

sudo su

Although it doesn't really have anything to do with setting up a web server, I do want to share this... The first thing I always like to do on a machine is get the system updated, so the latest software is being used. To do this, run the following command:

yum update

It will take some time the first time, but after a couple of minutes you should see that all packages were updated:

The purpose of this post is to install a web server so that when we type in a certain domain, it will arrive at our machine. As web server, I typically choose between Apache and Nginx. Which one to choose is a hard one... if you search Google for "Apache vs Nginx" you can start reading ;) Since last year I have started to use Nginx for all my new systems; before that I always used Apache.

Following steps show how you install the Nginx web server and run it:

yum install nginx


Now we need to start the web server:

systemctl start nginx

To see if Nginx is successfully running, do:

systemctl status nginx

You should see something like:


The next thing we have to do is open the firewall on the Linux box, so incoming connections are allowed. The first line will open HTTP, the second HTTPS and then we reload the firewall:

firewall-cmd --permanent --zone=public --add-service=http
firewall-cmd --permanent --zone=public --add-service=https
firewall-cmd --reload

Opening the firewall on the Linux box is not enough. Oracle added some extra security layers around the VM (Compute Instance). We have to allow HTTP and HTTPS access to our machine in the Oracle Firewall too.

Click the Virtual Cloud Network link:


Click the Security Lists link:


And add two Ingress Rules, one for HTTP and one for HTTPS:


As Source add 0.0.0.0/0 so everything can connect to it and as destination port you specify the first time 80 (for HTTP) and the second time 443 (for HTTPS):


Once both Ingress Rules are added, your list looks like this:


Now you can navigate in a browser to your Public IP and you should see:


Now that we have a web server running and it's accessible through the IP address, we know things are working. Most of the time however you don't want to access your server through an IP address, rather you want people to use a domain name. To access my Free Oracle Cloud server for example I want to use the dgielis.com domain.

The first step is to specify, at your domain provider, the IP address of your Oracle Cloud VM (Compute) Instance for the A record. I typically also set up some CNAME records so any sub-domain will work too. For example I could point apex.dgielis.com to the Oracle APEX Builder.


Now that the domain points to our VM, we have to make sure our web server listens to this domain and knows what to do. We will need to configure Nginx for this dgielis.com domain.

Here are the steps to do this (do this in your Terminal which is connected to your VM):

vi /etc/nginx/conf.d/dgielis.com.conf

# Add following to the file (change dgielis.com by your domain):
server {
    listen         80;
    listen         [::]:80;
    server_name    dgielis.com www.dgielis.com;
    root           /usr/share/nginx/html/dgielis.com;
    index          index.html;
    try_files $uri /index.html;
}

# Create a directory where your website resides:
mkdir /usr/share/nginx/html/dgielis.com

# Add an index.html file to the directory
# Quickest way is vi or cp an index.html in this folder
# Or develop your site locally first and when ready upload with scp
# scp -i .ssh/oraclecloud /Users/dgielis/site.zip opc@132.145.215.55:/tmp

# to test Nginx configuration
nginx -t 

# to restart Nginx
nginx -s reload

# note: in case nginx doesn't restart, kill the nginx process and try to restart again
ps -ef | grep nginx
kill ~pid~

When you go in your browser to your domain name, it should show your website!

This website runs over HTTP, but these days it's recommended to use HTTPS for your sites. Let's set up HTTPS for our website by using Let's Encrypt, a free service for SSL certificates.

First we have to install some supporting packages:

yum install certbot python2-certbot-nginx  # not necessary
yum install python27-python-pip
scl enable python27 bash
pip install certbot
pip install setuptools --upgrade
pip install certbot-nginx

Once the supporting packages are there, we can run certbot to setup the SSL certificates for our domain:

certbot --nginx

After completion of the wizard, you should see something like below:


During the Certbot wizard which configures LetsEncrypt, it asks if you want to redirect all HTTP access to HTTPS and I would answer Yes here.


Now, regardless of whether you use HTTP or HTTPS on dgielis.com, you will always end up with HTTPS.
HTTPS is better for Google and better for security, so there's no reason not to do it :)



If we want to use our Nginx web server as a reverse proxy for our APEX environment, we can do that by adapting our /etc/nginx/conf.d/dgielis.com.conf file (see the location sections):

vi /etc/nginx/conf.d/dgielis.com.conf

Add following to the server:

  location /ords/ {
    proxy_pass your_apex_url/ords/;
    proxy_set_header Origin "" ;
    proxy_set_header X-Forwarded-Host $host:$server_port;
    proxy_set_header X-Real-IP $remote_addr;
    proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    proxy_set_header X-Forwarded-Proto $scheme;
    proxy_connect_timeout       600;
    proxy_send_timeout          600;
    proxy_read_timeout          600;
    send_timeout                600;
  }

  location /i/ {
    proxy_pass your_apex_url/i/;
    proxy_set_header X-Forwarded-Host $host;
    proxy_set_header X-Real-IP $remote_addr;
    proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
  }

The final configuration file looks like this:


There's one other configuration change I would suggest you do straight away: increase the maximum request body size (the client_max_body_size directive) in Nginx. Add that line to nginx.conf:

vi /etc/nginx/nginx.conf


Test and reload the configuration of Nginx:

nginx -t
nginx -s reload

When going in a browser to https://dgielis.com/ords/ we arrive at Oracle APEX:


There's a lot of optimisation we can do on the reverse proxy. To gain performance we can put the images folder of APEX on the Nginx server, as Nginx is super fast in transferring static files. We can add more redirects, for example that apex.dgielis.com goes to the APEX builder and app.dgielis.com goes to one of our custom APEX apps. As this post is already long, I'm not including that in here.

Once Oracle provides vanity urls, we don't need to do the above, and the URL will point directly to ATP.

Update 26-SEP-2019 (thanks Kris Rice): Note that setting the Origin would negate any CORS info that ORDS is enforcing. That could be a security issue for some people. Oracle is looking into the ability to have your ORDS running on your own Compute VM (the webserver we just setup), which would solve the issue. The vanity URLs would not have the CORS issue either.

In the next post we will use this server to add an on-premises version of APEX Office Print (AOP), so we have everything we need to export data from the database in the format we want, for example in Excel and PDF.

Categories: Development

Free Oracle Cloud: 6. Create a VM Instance (Compute Cloud)

Dimitri Gielis - Fri, 2019-09-20 20:23
This post is part of a series of blog posts on the Best and Cheapest Oracle APEX hosting: Free Oracle Cloud.

In this post we will set up a virtual machine with Oracle Linux. If you need a machine in the Cloud, this service is for you... and you can't beat the price, it's FREE!

So why would you need a machine in the Cloud? Anything that you can do on your own machine, you can do on your machine in the cloud. You can work with co-workers on the same app, test applications, host a PHP app or WordPress blog, or you name it. You have a full Oracle Linux machine where you can do whatever you want.

So let's get started setting up this machine, but first let's do one step on our own machine, as it's a prerequisite to get started: create an SSH key pair. The link explains how to create a key pair on Windows, Linux and OSX. You can see your SSH key pair as a more secure way to access your machine in the cloud than a username/password.

As I'm on OSX here's the command I use:

ssh-keygen -t rsa -N "" -b 2048 -C "oraclecloud2019" -f oraclecloud2019


This will generate two files:


Now you are all set to setup your VM instance in the Oracle Cloud.

When you login to your Oracle Cloud account, select the Create a VM instance:


Give your instance a name:


When you scroll down, you see the section where you have to upload the public SSH key file (the one ending in .pub):


Hit the Create button and you are done... You will see the orange icon on the left stating it's being provisioned:


Once the provisioning is done the icon on the left will turn green and it's ready to be used:


In case you don't want to wait for the provisioning, you can always come back later by clicking the left top hamburger icon, and in the Compute section click on Instances:


You will get an overview of your instances, see the status and go to the details from there:


Now if you want to connect to your VM in the Oracle Cloud you can use Putty (Windows) or SSH (Linux, OSX). The command looks like this:

ssh -i ssh_key opc@public_ip

The OPC user is the one you use to connect to an Oracle Compute (VM) Instance.

In the following screenshot you see me connecting to my Oracle Cloud VM (for security reasons I used another ssh key, normally it would have been oraclecloud2019 but as I exposed that with my screenshot I setup another one):


There you go, now you have a full system running in the Cloud. In the next post we will set up a web server on this machine and configure a custom domain.
Categories: Development

Oracle 19c : Point-In-Time Recovery in a PDB

Yann Neuhaus - Fri, 2019-09-20 11:51

Point-In-Time Recovery is also possible in a multitenant environment. As in a non-CDB, a recovery catalog can be used or not. In this blog we will see how to recover a dropped tablespace in a PDB. We will also see the importance of using a recovery catalog or not.
A PITR of a PDB does not affect the remaining PDBs. That means that while doing a PITR on one PDB, people can keep using the other PDBs. In this blog we are using an Oracle 19c database with local undo mode enabled.

SQL> 
  1  SELECT property_name, property_value
  2  FROM   database_properties
  3* WHERE  property_name = 'LOCAL_UNDO_ENABLED'

PROPERTY_NAME        PROPE
-------------------- -----
LOCAL_UNDO_ENABLED   TRUE
SQL>

SELECT con_id, tablespace_name FROM   cdb_tablespaces WHERE  tablespace_name LIKE 'UNDO%';

    CON_ID TABLESPACE_NAME
---------- ------------------------------
         3 UNDOTBS1
         4 UNDOTBS1
         1 UNDOTBS1

SQL>

We suppose that:
-We have a tablespace named MYTABPDB2 (a sketch of how such a tablespace could be created is shown below)
-We have a valid backup of the whole database
-A recovery catalog is not used
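
For completeness, such a tablespace could have been created in PDB2 with something like the statement below; the datafile path simply follows the layout visible later in the restore output:

CREATE TABLESPACE mytabpdb2
  DATAFILE '/u01/app/oracle/oradata/ORCL/pdb2/mytabpdb201.dbf' SIZE 100M;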

Now connecting to the PDB2, let’s drop a tablespace after creating a restore point.

SQL> show con_name;

CON_NAME
------------------------------
PDB2

SQL> create restore point myrestpoint;

Restore point created.

SQL>
SQL> drop tablespace mytabpdb2 including contents and datafiles;

Tablespace dropped.

SQL>

And now let’s perform a PITR to the restore point myrestpoint

1- Connect to the root container

[oracle@oraadserver ~]$ rman target /

[oracle@oraadserver ~]$ rman target /

Recovery Manager: Release 19.0.0.0.0 - Production on Fri Sep 20 13:07:07 2019
Version 19.3.0.0.0

Copyright (c) 1982, 2019, Oracle and/or its affiliates.  All rights reserved.

connected to target database: ORCL (DBID=1546409981)

RMAN>

2- Close the PDB

RMAN> ALTER PLUGGABLE DATABASE PDB2 close;

using target database control file instead of recovery catalog
Statement processed

RMAN>

3- Do the PITR

RMAN> run
{
  SET TO RESTORE POINT myrestpoint;
   RESTORE PLUGGABLE DATABASE pdb2;
   RECOVER PLUGGABLE DATABASE pdb2;
}2> 3> 4> 5> 6>

executing command: SET until clause

Starting restore at 20-SEP-19
allocated channel: ORA_DISK_1
channel ORA_DISK_1: SID=54 device type=DISK

channel ORA_DISK_1: starting datafile backup set restore
channel ORA_DISK_1: specifying datafile(s) to restore from backup set
channel ORA_DISK_1: restoring datafile 00013 to /u01/app/oracle/oradata/ORCL/pdb2/system01.dbf
channel ORA_DISK_1: restoring datafile 00014 to /u01/app/oracle/oradata/ORCL/pdb2/sysaux01.dbf
channel ORA_DISK_1: restoring datafile 00015 to /u01/app/oracle/oradata/ORCL/pdb2/undotbs01.dbf
channel ORA_DISK_1: restoring datafile 00016 to /u01/app/oracle/oradata/ORCL/pdb2/users01.dbf
channel ORA_DISK_1: reading from backup piece /u01/app/oracle/fast_recovery_area/ORCL/92359E387C754644E0531502A8C02C00/backupset/2019_09_20/o1_mf_nnndf_TAG20190920T141945_gr9jzry9_.bkp
channel ORA_DISK_1: piece handle=/u01/app/oracle/fast_recovery_area/ORCL/92359E387C754644E0531502A8C02C00/backupset/2019_09_20/o1_mf_nnndf_TAG20190920T141945_gr9jzry9_.bkp tag=TAG20190920T141945
channel ORA_DISK_1: restored backup piece 1
channel ORA_DISK_1: restore complete, elapsed time: 00:00:07
Finished restore at 20-SEP-19

Starting recover at 20-SEP-19
current log archived
using channel ORA_DISK_1


starting media recovery
media recovery complete, elapsed time: 00:00:01

Finished recover at 20-SEP-19

RMAN>

4- Open the PDB on resetlogs mode

RMAN> alter pluggable DATABASE  pdb2 open resetlogs;

Statement processed

RMAN>

I did not get any error from RMAN, but when looking at the alert log file, I see the following errors:

PDB2(4):Pluggable database PDB2 dictionary check beginning
PDB2(4):Tablespace 'MYTABPDB2' #7 found in data dictionary,
PDB2(4):but not in the controlfile. Adding to controlfile.
PDB2(4):File #25 found in data dictionary but not in controlfile.
PDB2(4):Creating OFFLINE file 'MISSING00025' in the controlfile.
PDB2(4):Pluggable Database PDB2 Dictionary check complete
PDB2(4):Database Characterset for PDB2 is AL32UTF8

There seems to be an issue with the recovery of the MYTABPDB2 tablespace. Connected to PDB2, I can check:

SQL> select FILE_NAME,TABLESPACE_NAME from dba_data_files where TABLESPACE_NAME='MYTABPDB2';

FILE_NAME
--------------------------------------------------------------------------------
TABLESPACE_NAME
------------------------------
/u01/app/oracle/product/19.0.0/dbhome_3/dbs/MISSING00025
MYTABPDB2

The tablespace was not recovered as expected.
What happened? This behaviour is actually expected according to Doc ID 2435452.1, which states:
"If the point in time recovery of the pluggable database is performed without the catalog, then it is expected to fail."

As we are not using a recovery catalog, the backup information is stored in the control file, and the current control file is no longer aware of data file 25.
As specified in the document, we have to use a recovery catalog.
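
Before switching to a catalog, the problem can be confirmed quickly (an extra check, not part of the original run; the output will of course differ per environment):

SQL> -- data file 25 now points at the MISSING00025 placeholder created during the dictionary check
SQL> select file#, name, status from v$datafile where file# = 25;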

Now let's connect to a recovery catalog and redo the same PITR.
After connecting to the catalog we take a full backup. Then we drop the tablespace and run the same recovery commands again, this time connected to the catalog, using a point in time just before the tablespace was dropped.

[oracle@oraadserver trace]$ rman catalog rman/rman@rmancat

Recovery Manager: Release 19.0.0.0.0 - Production on Fri Sep 20 15:28:29 2019
Version 19.3.0.0.0

Copyright (c) 1982, 2019, Oracle and/or its affiliates.  All rights reserved.

connected to recovery catalog database

RMAN> connect target /

connected to target database: ORCL (DBID=1546409981)
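
The catalog creation, registration and full backup summarized above are not shown in the output; they would look roughly like this (a sketch, assuming a catalog owner schema rman already exists in the catalog database; CREATE CATALOG is only needed once):

RMAN> create catalog;
RMAN> register database;
RMAN> backup database plus archivelog;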

After closing PDB2 we run the following block:

RMAN> run
{
  SET UNTIL TIME "to_date('20-SEP-2019 15:27:00','DD-MON-YYYY HH24:MI:SS')";
   RESTORE PLUGGABLE DATABASE pdb2;
   RECOVER PLUGGABLE DATABASE pdb2;
}
2> 3> 4> 5> 6>
executing command: SET until clause

Starting restore at 20-SEP-19
using channel ORA_DISK_1

channel ORA_DISK_1: starting datafile backup set restore
channel ORA_DISK_1: specifying datafile(s) to restore from backup set
channel ORA_DISK_1: restoring datafile 00013 to /u01/app/oracle/oradata/ORCL/pdb2/system01.dbf
channel ORA_DISK_1: restoring datafile 00014 to /u01/app/oracle/oradata/ORCL/pdb2/sysaux01.dbf
channel ORA_DISK_1: restoring datafile 00015 to /u01/app/oracle/oradata/ORCL/pdb2/undotbs01.dbf
channel ORA_DISK_1: restoring datafile 00016 to /u01/app/oracle/oradata/ORCL/pdb2/users01.dbf
channel ORA_DISK_1: restoring datafile 00026 to /u01/app/oracle/oradata/ORCL/pdb2/mytabpdb201.dbf
channel ORA_DISK_1: reading from backup piece /u01/app/oracle/fast_recovery_area/ORCL/92359E387C754644E0531502A8C02C00/backupset/2019_09_20/o1_mf_nnndf_TAG20190920T152554_gr9nws0x_.bkp
channel ORA_DISK_1: piece handle=/u01/app/oracle/fast_recovery_area/ORCL/92359E387C754644E0531502A8C02C00/backupset/2019_09_20/o1_mf_nnndf_TAG20190920T152554_gr9nws0x_.bkp tag=TAG20190920T152554
channel ORA_DISK_1: restored backup piece 1
channel ORA_DISK_1: restore complete, elapsed time: 00:00:15

datafile 26 switched to datafile copy
input datafile copy RECID=5 STAMP=1019489668 file name=/u01/app/oracle/oradata/ORCL/pdb2/mytabpdb201.dbf
Finished restore at 20-SEP-19
starting full resync of recovery catalog
full resync complete

Starting recover at 20-SEP-19
using channel ORA_DISK_1


starting media recovery
media recovery complete, elapsed time: 00:00:01

Finished recover at 20-SEP-19

RMAN>

We then open PDB2 with RESETLOGS and verify with SQL*Plus:

SQL> select FILE_NAME,TABLESPACE_NAME from dba_data_files where TABLESPACE_NAME='MYTABPDB2';

FILE_NAME
--------------------------------------------------------------------------------
TABLESPACE_NAME
------------------------------
/u01/app/oracle/oradata/ORCL/pdb2/mytabpdb201.dbf
MYTABPDB2


SQL>

And this time the PITR works fine. The tablespace was restored.

Conclusion

As seen in this blog, it is recommended to use a recovery catalog when performing PITR operations in a multitenant environment.

The post Oracle 19c : Point-In-Time Recovery in a PDB appeared first on Blog dbi services.

OpenWorld and Code One 2019 : The Journey Home

Tim Hall - Fri, 2019-09-20 09:06

I got up at a reasonable time and got caught up with blog posts, then it was time to check out and get the BART to the airport. Bag drop was empty, because the rest of the planet was waiting at security. After what felt like an eternity I was through security and sat down and waited for my plane…

We boarded the flight from San Francisco to Amsterdam on time and didn’t have a significant wait for the departure slot, so the captain said we would arrive early. No luck with a spare seat on this flight. The guy next to me was about my size, but wasn’t making an effort to stay in his space. There was some serious man-spreading going on. I ended up spending most of the flight leaning into the aisle and pulling my arm across my body, so my left elbow feels knackered now. Doing that for 11 hours is not fun. I managed to watch the following films.

  • The Shape of Water – I love this film. I’ve seen it a load of times.
  • Rocketman – I wasn’t feeling this at the start. I’m not big on musicals, and I didn’t like the stuff when he was a kid. Once Taron Egerton started playing him it was cool. I kind-of forgot he wasn’t Elton John. If you can get past the start, it’s worth a go!
  • The Accountant – I liked it. Ben Affleck doing deadpan and expressionless is the perfect role for him.
  • John Wick: Chapter 3 – Parabellum – I got up to the final sequence, so I’m not sure how it ends. Pretty much the same as the previous films, which I liked. Just crazy fight scenes with loads of guns.

There was one bit of the flight that was odd. The in-flight entertainment died, then we hit some turbulence. Cue me deciding it was linked and we were all going to die… Pretty soon the turbulence stopped, then after about 10 minutes the screens rebooted…

I had quite a long wait at Schiphol. About 3 hours. That was pretty dull, but what are you going to do?

The flight from Amsterdam to Birmingham was delayed by a few minutes, then there was the issue of people trying to board with 15 pieces of hand luggage and a donkey. I had my bag on my feet. Luckily it was only an hour flight.

I was originally planning to get the train home, but I was so tired I got a taxi. The driver was a nice guy and we had a chat about his kids and future plans, which is a lot nicer than listening to me drone on…

I’m now home and started doing the washing…

I’ll do a wrap-up post tomorrow, with some thoughts about the event…

Cheers

Tim…

OpenWorld and Code One 2019 : The Journey Home was first posted on September 20, 2019 at 3:06 pm.

Free Oracle Cloud: 5. Setup APEX in ATP and create first APEX app

Dimitri Gielis - Thu, 2019-09-19 18:29
This post is part of a series of blog posts on the Best and Cheapest Oracle APEX hosting: Free Oracle Cloud.

Login to your Oracle Cloud Account. Once in, navigate on the left hamburger icon to Autonomous Transaction Processing and select the Autonomous Database. In the overview screen click the Service Console:


Make sure you allow pop-ups as it opens a new tab.
You will get a graphical overview; here you want to click on the Development link.


There you find the link to Oracle APEX:


When you click the link the first time, it might take a bit of time as APEX is being configured behind the scenes. You first enter your cloud account password.


As it recognises that you are logging in for the first time, it will ask whether you want to create a new Workspace. A Workspace is a space where your Oracle APEX applications live.

Click the button and it will prompt you to create the Workspace.


Hit the Create Workspace button and you are good to go :)


In case you didn't click Create Workspace the first time, I also want to show you how to create a new Workspace later on.

When you click the Oracle APEX link in the Development dashboard again, it recognises that Oracle APEX is already set up and it will ask which workspace you want to log in to. You want to log in to the INTERNAL workspace with the ADMIN user and your cloud account password (the same credentials as with SQL Developer Web).


You will arrive in the Instance Administration screen. Compared to the on-premises version of Oracle APEX, the functionality is a bit more limited as the Oracle Autonomous Database is pre-configured and managed by Oracle. So you can't tinker, for example, with the Security settings or provisioning of workspaces.


The first thing you want to do is create a new Workspace, a place where you will build your Oracle APEX app. In the Manage Workspaces menu, click on Create Workspace:


A modal dialog will appear where you can enter a new or existing database user, password and workspace name. In the following screenshot I want Oracle APEX to create a new database user called CLOUD linked to the workspace CLOUD.


Once that is provisioned, you can sign out of the INTERNAL workspace and sign in to the newly created workspace. As the workspace name you enter the one you created above, the username is the database user, and the password is the one you provided when creating the workspace.


And voila... we are in Oracle APEX and we can start to create our APEX applications.



I'm not adding all the screenshots for creating a first Oracle APEX app in this blog post, but if you are new to APEX and want to see step by step how to start, check out the Oracle APEX Tutorial on YouTube by Caleb Curry. In the next video he walks you through creating an APEX app from a file.


There's one thing that is important when you are using the Oracle Cloud and want to do REST requests: only HTTPS requests are allowed, so make sure you always use https.

The same goes if you want to use APEX Office Print in the free Oracle Autonomous Database Cloud to export your data in the format you want (PDF, Excel, Word, Powerpoint, HTML, ...): use the HTTPS URL https://api.apexofficeprint.com or the HTTPS URL of your own environment.
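
As a quick illustration, a REST call from PL/SQL with APEX_WEB_SERVICE just needs an https URL; the endpoint below is only a placeholder:

declare
  l_response clob;
begin
  -- any HTTPS endpoint will do; this URL is just an example
  l_response := apex_web_service.make_rest_request(
                  p_url         => 'https://example.com/api/data',
                  p_http_method => 'GET');
  dbms_output.put_line(substr(l_response, 1, 200));
end;
/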

Another thing to know is that you can't send emails out of the Free Oracle ATP Cloud with apex_mail. In case you need to send out email, I would recommend integrating your Oracle APEX app with Mailgun, MailChimp or others.

Update on 26-SEP-2019: you can actually use APEX_MAIL, but you first have to configure the mail server as specified in the documentation.
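
As I understand the documentation, that configuration boils down to setting the SMTP instance parameters as the ADMIN user, roughly like this (host and credentials are placeholders; check the documentation for the exact steps on ATP):

begin
  apex_instance_admin.set_parameter('SMTP_HOST_ADDRESS', 'smtp.example.com');   -- placeholder host
  apex_instance_admin.set_parameter('SMTP_USERNAME',     'apex@example.com');   -- placeholder user
  apex_instance_admin.set_parameter('SMTP_PASSWORD',     'my_secret_password'); -- placeholder password
  commit;
end;
/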

In this post we configured Oracle APEX, so we can create our APEX apps. One of the things many people want to do is set up a custom domain, so that for example mydomain.com links to an APEX app. Oracle calls these 'vanity URLs'; they are not yet available, but they are coming very soon. No worries though, you can already set up a custom URL yourself :) In the next post I will talk about setting up a Compute instance in the free Oracle Cloud, which makes our APEX hosting more complete: we can have an on-premises AOP install, a web server to support custom URLs, etc.
Categories: Development

The Top 10 Biggest Sports Brands (2019)

VitalSoftTech - Thu, 2019-09-19 10:03

The sportswear market is one of the most profitable and lucrative niches in the entire world. Some of the biggest sports brands have successfully capitalized on the demand for sports apparel and equipment, turning them into internationally acknowledged top sports brands. It is also incredibly competitive, and newer and more innovative designs are manufactured each […]

The post The Top 10 Biggest Sports Brands (2019) appeared first on VitalSoftTech.

Categories: DBA Blogs

Free Oracle Cloud: 4. Connecting with SQL Developer Web to ATP

Dimitri Gielis - Wed, 2019-09-18 17:58
This post is part of a series of blog posts on the Best and Cheapest Oracle APEX hosting: Free Oracle Cloud.

Login to your Oracle Cloud Account. Once in, navigate on the left hamburger icon to Autonomous Transaction Processing and select the Autonomous Database. In the overview screen click the Service Console:


Make sure you allow pop-ups as it opens a new tab.
You will get a graphical overview; here you want to click on the Development link.


There you find the link to SQL Developer Web:


You can log in with the same credentials as in SQL Developer (Desktop): username admin and your cloud account password.


Then you arrive at SQL Developer Web. The first time you log in you get an introduction tour to get used to the interface, but it's very similar to the SQL Developer Desktop version.


You might wonder why you would use SQL Developer Web at all. I sometimes don't have direct access to the database machine, but I do have access to the Oracle APEX interface; in that case I use SQL Workshop, but if SQL Developer Web is available I would use that instead. So for any customer in the cloud that doesn't allow a direct database connection, I can still reach the database through SQL Developer Web :)
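
Once you are in the worksheet, anything you would normally run in SQL Developer works the same way. A couple of quick sanity checks, for example:

-- check the database version and which container/user you are connected as
select banner_full from v$version;

select sys_context('userenv', 'con_name')     as container,
       sys_context('userenv', 'current_user') as db_user
from   dual;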

At Oracle Open World (OOW) I also heard that SQL Developer Web will ship with ORDS 19.3, so at that time we can use this nice interface on-premises too.
Categories: Development

OpenShift 4.1 Partner Introduction Day

Yann Neuhaus - Wed, 2019-09-18 12:57

Today was the OpenShift Partner Introduction Day by Red Hat. The event took place in Lausanne and was attended by people with different backgrounds.

After presenting the Red Hat company, the speaker explained what OpenShift is and why people should adopt it.
The key takeaway is that OpenShift is trusted enterprise Kubernetes.

With OpenShift we get, for example:
- automated, full-stack installation from the container host to application services
- seamless Kubernetes deployment to any cloud or on-premises environment
- autoscaling of cloud resources
- one-click updates for platform, services and applications

The new features in version 4.1 were presented, and the speaker also showed the Red Hat OpenShift business value.

The notions of Ansible Playbooks, Operators, CRI-O, Helm, etc. were also explained.
The speaker also gave a short demonstration of creating a small project with OpenShift.
Below, the speaker during the demonstration.
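
Just as a rough illustration of what such a demo looks like with the oc command line (these are not the speaker's exact commands; the cluster URL and token are placeholders):

# log in to the cluster and create a small sample project
oc login https://api.mycluster.example.com:6443 --token=<token>
oc new-project demo-project
oc new-app nodejs~https://github.com/sclorg/nodejs-ex
oc get pods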

This was a very interesting event. It stayed at a general level and helped people understand where OpenShift sits in the containerization landscape. Keep in mind, though, that there are a lot of components to understand when using OpenShift.

The post OpenShift 4.1 Partner Introduction Day appeared first on Blog dbi services.

Free Oracle Cloud: 3. Connecting with SQL Developer (Desktop) to ATP

Dimitri Gielis - Tue, 2019-09-17 19:34
This post is part of a series of blog posts on the Best and Cheapest Oracle APEX hosting: Free Oracle Cloud.

In the previous post we set up our Oracle database in the Free Autonomous Oracle Database Cloud.
Now we want to connect to this database from our machine with SQL Developer.

Login to your Oracle Cloud Account. In case you forgot the link, you can always go back to your emails: in the "Your Oracle Cloud Account is Fully Provisioned" email you will find a button to sign in.

Typically it's in this format: console.DATA_CENTER_YOU_PICKED.oraclecloud.com/?tenant=YOUR_CLOUD_ACCOUNT&provider=OracleIdentityCloudService.
So in my case this is: https://console.us-ashburn-1.oraclecloud.com/?tenant=dimi&provider=OracleIdentityCloudService

Once in, navigate on the left hamburger icon to Autonomous Transaction Processing and select the Autonomous Database (see previous blog post - last two screenshots). In the overview screen click the DB Connection button.


A modal dialog appears to download the credentials (wallet). Click the Download button:


The modal dialog changes and you need to enter a password and hit download:


Now we will make a new connection in SQL Developer. Right click on the Oracle Connections and click the New Connection... link:


Enter a name of your choice. In the User Info section enter admin as the Username and, as the password, the password you entered when you set up your database (previous blog post).
For Connection Type select Cloud Wallet. A details section will open where, as the Configuration File, you can select the file you downloaded before.


Hit the Test button and see if you receive a Success message. Once that is fine, you can save and connect.

The admin user is a special user in the Oracle Cloud. You can see it as the sys/system user, with many of the same rights. So you can query for example dba_users to see all the users/schemas in the database:
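
A query along these lines (just an illustration) does the trick:

select username, account_status, created
from   dba_users
order  by created desc;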


So now you are all set to play with your database. You can set up new users, create tables, load and query data, etc.

In this post we saw how to connect from the SQL Developer Desktop version to the Oracle Cloud Database. In the next post I will show how to connect with SQL Developer Web, a web-based alternative to SQL Developer (Desktop).

Categories: Development

VirtualBox 6.0.12

Tim Hall - Tue, 2019-09-17 11:35

I know I’ve been distracted with the lead up to OpenWorld and Code One 2019, but how did I miss this release? VirtualBox 6.0.12 arrived two weeks ago.

The downloads and changelog are in the usual places.

Being a reckless type, I downloaded it and installed it on my Windows 10 laptop this morning. I’ve got a live demo in 2 hours!

The install was fine and my Vagrant VMs start with no problems. More extensive testing and installations on Oracle Linux and macOS hosts will happen when I get home, but so far so good!

Cheers

Tim…

VirtualBox 6.0.12 was first posted on September 17, 2019 at 5:35 pm.
