Feed aggregator

Retailers Turn to Latest Oracle Demand Forecasting Service to Optimize Inventory

Oracle Press Releases - Mon, 2019-02-04 07:00
Press Release
Retailers Turn to Latest Oracle Demand Forecasting Service to Optimize Inventory
Built-in artificial intelligence and intuitive dashboards help retailers prevent overstocking and boost customer satisfaction

Redwood Shores, Calif.—Feb 4, 2019

Retailers can now improve inventory management through a single view of demand throughout their entire product lifecycle with the next generation Oracle Retail Demand Forecasting Cloud Service. With built-in machine learning, artificial intelligence and decision science, the offering enables retailers to gain pervasive value across retail processes, allowing for optimal planning strategies, decreased operational costs, and enhanced customer satisfaction. In addition, modern, intuitive dashboards improve operational agility and workflows, adapting immediately to new information to improve inventory outcomes.

The offering is part of Oracle’s Platform for Modern Retail, built on the cloud-native platform and aligned to the Oracle Retail Planning and Optimization portfolio. Learn more about Oracle Retail Demand Forecasting Cloud Service here.

“As customer trends continue to evolve faster than ever before, it’s imperative that retailers move quickly to optimize inventory and demand. Too little inventory and customers are dissatisfied. Too much and retailers have a bottom line problem that leads to unprofitable discounting,” said Jeff Warren, vice president, Oracle Retail. “We have distilled over 15 years of forecasting experience across hundreds of retailers worldwide into a comprehensive and modern solution that maximizes the forecast accuracy for the entire product lifecycle. Our customers asked, and we delivered.”

For example, the offering was evaluated by a major specialty retailer against 2.2M units sold over the 2018 holiday season, representing over $480M in revenue. With the forecast accuracy improvements, the retailer was able to achieve the same sales with at least 345K fewer units of inventory. In tandem, the retailer improved 70 percent of forecasts using completely automated next-generation forecasting data science. These results gave the retailer the confidence to decrease safety stock by 10 percent, reduce overall inventory by 30 percent and improve in-stock rates by 10 percent through smarter placement of the same inventory – all while delivering the same level of service to customers.

“As unified commerce sales grow, the ability to support all four business activities (demand planning, supply planning, inventory planning, and sales and operations execution/merchandising, inventory and operations execution) across all sales channels becomes even more important. A 2017 Gartner survey of supply chain executives highlighted the importance organizations place on their planning capabilities.” Of the “top three investment areas from 2016 through 2017, 36% of retail respondents cited upgrading their demand management capabilities,” wrote Gartner experts Mike Griswold and Alex Pradhan. Source:  Gartner Market Guide for Retail Forecasting and Replenishment Solutions, December 31, 2018

Maximizing Forecast Accuracy Throughout the Product Lifecycle

With the next generation Oracle Retail Demand Forecasting Cloud Service, retailers can:

  • Tailor approaches for short and long lifecycle products, maximizing forecast accuracy for the entire product lifecycle
  • Adapt to recent trends, seasonality, out-of-stocks, and promotions; and reflect retailers’ unique demand drivers, delivering better customer experience from engagement to sale, to fulfillment
  • Leverage dashboard views to support day-in-the-life forecasting workflows such as forecast overview, forecast scorecard, exceptions and forecast approvals
  • Gain transparency across the entire supply chain that enables analytical processes and end-users to understand and engage with the forecast, increasing inventory productivity
  • Coordinate and simulate demand-driven outcomes using forecasts that adapt immediately to new information and without a dependency on batch processes, driving operational agility
Contact Info
Kristin Reeves
Oracle
925-787-6744
kris.reeves@oracle.com
About Oracle Retail

Oracle is the modern platform for retail. Oracle provides retailers with a complete, open, and integrated platform for best-of-breed business applications, cloud services, and hardware that are engineered to work together. Leading fashion, grocery, and specialty retailers use Oracle solutions to accelerate from best practice to next practice, drive operational agility and refine the customer experience. For more information, visit our website at www.oracle.com/retail.

About Oracle

The Oracle Cloud offers complete SaaS application suites for ERP, HCM and CX, plus best-in-class database Platform as a Service (PaaS) and Infrastructure as a Service (IaaS) from data centers throughout the Americas, Europe and Asia. For more information about Oracle (NYSE:ORCL), please visit us at www.oracle.com.

Trademarks

Oracle and Java are registered trademarks of Oracle and/or its affiliates. Other names may be trademarks of their respective owners.

[Blog] [Solved] Oracle GoldenGate: Bidirectional Replication Issue

Online Apps DBA - Mon, 2019-02-04 06:49

Installed GoldenGate with an Oracle 11.2.0.4 database and configured all the bidirectional parameters, but still facing an Oracle GoldenGate bidirectional replication issue? Worry not! Visit https://k21academy.com/goldengate33 and consider our new blog covering: ✔ What bi-directional replication is and its capabilities ✔ Issues in bi-directional replication ✔ Cause and solution for the issue […]

The post [Blog] [Solved] Oracle GoldenGate: Bidirectional Replication Issue appeared first on Oracle Trainings for Apps & Fusion DBA.

Categories: APPS Blogs

ORA-600 [ossnet_assign_msgid_1] on Exadata

Syed Jaffar - Mon, 2019-02-04 06:10
On an Exadata system with Oracle v12.1, a MERGE statement with parallelism was frequently failing with the ORA errors below:

ORA-12805: parallel query server died unexpectedly
ORA-06512

A quick look in the alert.log showed that an ORA-600 had been raised:

ORA-00600: internal error code, arguments: [ossnet_assign_msgid_1], [], []

The best and easiest way to diagnose any ORA-600 error is to use the ORA-600 lookup tool available on MOS.

In our case, with a large hash join, the following MOS note helped to fix the issue:

On Exadata Systems large hash joins can fail with ORA-600 [OSSNET_ASSIGN_MSGID_1] (Doc ID 2254344.1)

Cause:
On Exadata systems, large hash joins can fail with ORA-600 [OSSNET_ASSIGN_MSGID_1]; the root cause is often a too-small default value for _smm_auto_min_io_size and _smm_auto_max_io_size.

The workaround to fix the issue is to set the following underscore (_) parameters:

_smm_auto_max_io_size = 2048
_smm_auto_min_io_size = 256
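
A minimal sketch of applying the workaround is shown below. This assumes the change is made under Oracle Support's guidance (underscore parameters should not be set without it) and that an instance restart follows to pick up the spfile change:

-- Sketch only: set the workaround values in the spfile, then restart the instance
ALTER SYSTEM SET "_smm_auto_max_io_size" = 2048 SCOPE = SPFILE;
ALTER SYSTEM SET "_smm_auto_min_io_size" = 256  SCOPE = SPFILE;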

In some cases, the MOS notes below help to fix the ORA-600 [ossnet_assign_msgid_1] error.

ORA-600 [ossnet_assign_msgid_1] (Doc ID 1522389.1)
Bug 14512766 : ORA-600 [OSSNET_ASSIGN_MSGID_1] DURING RMAN CONVERSION

Consistent device paths for block volumes - New Feature - Jan 2019 - Oracle Cloud Infrastructure

Senthil Rajendran - Mon, 2019-02-04 04:53
New Feature : Consistent device paths for block volumes
Services : Block Volume
Release Date : Jan 2019

With this feature you can now select a device path that will remain consistent across instance reboots. Although this is an optional feature, using the device path is recommended, as you can refer to the volume when creating partitions, creating file systems and mounting file systems; you can also specify this path in the /etc/fstab file to mount the volume automatically at instance boot.

Linux operating system images released by Oracle prior to November 2018 are not able to use this feature. Windows-based, custom and partner images are not supported.

To verify whether consistent device path support is available on your instance, log in to your environment and run "ll /dev/oracleoci/oraclevd*". If you see a list of devices, it is supported; if you get the message "no such file or directory", it is not supported.

Screenshot showing output for listing attached devices on instance using consistent device paths



Attaching a device path in the console is done simply by selecting a device path for the block volume. Once attached, you can verify the block volume from the summary page:

Device Path : /dev/oracleoci/oraclevdb


After attaching the device, you can create a partition from the operating system using the device path:

fdisk /dev/oracleoci/oraclevdb                # create a partition on the attached volume
mkfs.ext3 /dev/oracleoci/oraclevdb1           # build an ext3 file system on the new partition
mkdir /oradata                                # create the mount point
mount /dev/oracleoci/oraclevdb1 /oradata      # mount the file system
# /etc/fstab entry for automatic mounting at boot:
/dev/oracleoci/oraclevdb1   /oradata    ext3    defaults,_netdev,noatime  0  2





Cross Field Form Validation in Oracle JET

Andrejus Baranovski - Mon, 2019-02-04 03:09
JET keeps evolving, and in the latest versions the toolkit provides improved support for cross-field form validation. It is much easier to implement validation than it was before. I will show it in this example.

Example of the data entry form. Validation logic:

- Invoice Date before Payment Due Date and Payment Date
- Payment Due Date before Payment Date


Example when two fields fail validation:


JET provides a component called validation group. A form can be wrapped in this component to identify whether any validation errors are reported there. For example, when calling a JS function, before proceeding with the function code we can check whether the validation group contains errors:


An input field can be assigned a custom validator function:


Example of the validation function code where the cross-field validation logic is implemented - we compare the field value with the other fields. If the validation rule condition is false, a validation error is thrown:


Example of the function code where the validation group is checked for errors. If there are errors in the current validation group, the errors are displayed and the first field with an error is focused:


Download sample code from my GitHub repo.

Documentum – Process Builder Installation Fails

Yann Neuhaus - Mon, 2019-02-04 01:25

A couple of weeks ago, at a customer site, I received an incident from the application team regarding an error that occurred when installing Process Builder. The error message was:
“The Process Engine license has not been enabled or is invalid in the ‘RADEV’ repository.
The Process Engine license must be enabled to use the Process Builder.
Please see your system administrator.”

The error appears when selecting the repository:

Before investigating this incident I had to learn more about Process Builder, as it is usually managed by the application team.
In fact, Documentum Process Builder is software for creating business process templates, used to formalize the steps required to complete a business process such as an approval process; the goal is to extend the basic functionality of Documentum Workflow Manager.
It is a client application that can be installed on any computer, but before installing Process Builder you need to prepare your Content Server and repository by installing the Process Engine, because the CS handles check-in, check-out, versioning and archiving, and all processes created are saved in the repository… Hmm, so maybe the issue is that my Content Server or repository is not well configured?

To rule out the client side, I asked the application team to confirm the docbroker and port configured in C:\Documentum\Config\dfc.properties.

From the Content Server side, we used the Process Engine installer, which installs the Process Engine on all repositories served by the Content Server, deploys the bpm.ear file on the Java Method Server and installs the DAR files on each repository.

So let’s check the installation:

1. The BPM url http://Server:9080/bpm/modules.jsp is reachable:

2. No error in the bpm log file $JBOSS_HOME/server/DctmServer_MethodServer/logs/bpm-runtime.log.

3. BPM and XCP DARs are correctly installed in the repository:

select r_object_id, object_name, r_creation_date from dmc_dar where object_name in ('BPM', 'xcp');
080f42a480026d98 BPM 8/29/2018 10:43:35
080f42a48002697d xcp 8/29/2018 10:42:11

4. The Process Engine module is missing from the docbase configuration:

	API> retrieve,c,dm_docbase_config
	...
	3c0f42a480000103
	API> dump,c,l
	...
	USER ATTRIBUTES

		object_name                : RADEV
		title                      : RADEV Repository
	...
	SYSTEM ATTRIBUTES

		r_object_id                : 3c0f42a480000103
		r_object_type              : dm_docbase_config
		...
		r_module_name           [0]: Snaplock
								[1]: Archive Service
								[2]: CASCADING_AUTO_DELEGATE
								[3]: MAX_AUTO_DELEGATE
								[4]: Collaboration
		r_module_mode           [0]: 0
								[1]: 0
								[2]: 0
								[3]: 1
								[4]: 3

We know the root cause of this incident now :D
To resolve the issue, add the Process Engine module to the docbase config:

API>fetch,c,docbaseconfig
API>append,c,l,r_module_name
Process Engine
API>append,c,l,r_module_mode
3
API>save,c,l

Check after update:

	API> retrieve,c,dm_docbase_config
	...
	3c0f42a480000103
	API> dump,c,l
	...
	USER ATTRIBUTES

		object_name                : RADEV
		title                      : RADEV Repository
	...
	SYSTEM ATTRIBUTES

		r_object_id                : 3c0f42a480000103
		r_object_type              : dm_docbase_config
		...
		r_module_name           [0]: Snaplock
								[1]: Archive Service
								[2]: CASCADING_AUTO_DELEGATE
								[3]: MAX_AUTO_DELEGATE
								[4]: Collaboration
								[5]: Process Engine
		r_module_mode           [0]: 0
								[1]: 0
								[2]: 0
								[3]: 1
								[4]: 3
								[5]: 3
		...

Then I asked the application team to retry the installation, and the issue was resolved.

No manual docbase configuration is mentioned in the Process Engine Installation Guide, so I guess the Process Engine installer should do it automatically.
I will install a new environment in the next few days/weeks and will keep you informed if there is any news ;)

Cet article Documentum – Process Builder Installation Fails est apparu en premier sur Blog dbi services.

Batch Scheduler Integration Questions

Anthony Shorten - Sun, 2019-02-03 21:57

One of the most common questions I get from partners is around batch scheduling and execution. Oracle Utilities Application Framework has a flexible set of methods for managing, executing and monitoring batch processes. The alternatives available are as follows:

  • Third Party Scheduler Integration. If the site has an investment in a third party batch scheduler, used to define schedules and execute product batch processes alongside non-product processes at an enterprise level, then the Oracle Utilities Application Framework includes a set of command line utilities, via scripts, that can be invoked by a wide range of third party schedulers to execute the process. This allows scheduling to be managed by the third party scheduler and the scripts to be used to execute and manage product batch processes. The scripts return standard return codes that the scheduler can use to determine next actions if necessary. For details of the command line utilities refer to the Server Administration Guide supplied with your version of the product.
  • Oracle Scheduler Integration. The Oracle Utilities Application Framework provides a dedicated API to allow implementations to use the Oracle DBMS Scheduler, included in all editions of the database, as a local or enterprise-wide scheduler (a minimal sketch follows this list). The advantage of this is that the scheduler is already included in your existing database license and has inbuilt management capabilities provided via the base functionality of Oracle Enterprise Manager (12+) (via Scheduler Central) and also via Oracle SQL Developer. Oracle uses this scheduler in the Oracle Utilities SaaS Cloud solutions. Customers of those cloud services can use the interface provided by the included Oracle Utilities Cloud Service Foundation to manage their schedules, or use the provided REST based scheduler API to execute schedules and/or processes from a third party scheduler. For more details of the scheduler interface refer to the Batch Scheduler Integration (Doc Id: 2138193.1) whitepaper available from My Oracle Support.
  • Online Submission. The Oracle Utilities Application Framework provides a development and testing tool to execute individual batch processes from the online system. It is basic and only supports execution of individual processes (not groups of jobs like the alternatives do). This online submission capability is designed for cost effective developer and non-production testing, if desired, and is not supported for production use. For more details, refer to the online documentation provided with the version of the product you are using.
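
As a minimal, hypothetical sketch of the DBMS Scheduler approach (the program name, script path and schedule below are placeholders and not the product's dedicated API, which should be used in a real implementation), a batch script could be registered and scheduled like this:

-- Hypothetical sketch only: register an external batch script and schedule it nightly.
BEGIN
  DBMS_SCHEDULER.CREATE_PROGRAM(
    program_name   => 'BATCH_BILLING_PRG',
    program_type   => 'EXECUTABLE',
    program_action => '/u01/app/product/bin/submitjob.sh',  -- placeholder script path
    enabled        => TRUE);

  DBMS_SCHEDULER.CREATE_JOB(
    job_name        => 'BATCH_BILLING_NIGHTLY',
    program_name    => 'BATCH_BILLING_PRG',
    repeat_interval => 'FREQ=DAILY; BYHOUR=22',
    enabled         => TRUE);
END;
/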

Note: For customers of legacy versions of Oracle Utilities Customer Care and Billing, a basic workflow based scheduler was provided for development and testing purposes. This interface is not supported for production use and one of the alternatives outlined above should be used instead.

All of the above methods use the same architecture for executing batch processes (though some have additional features that need to be enabled). For details of each of the configurations, refer to the Server Administration Guide supplied with your version of the product.

When asked about which technology should be used I tend to recommend the following:

  • If you have an existing investment in a third party scheduler that you want to retain, then use the command line interface. This will retain your existing investment and you can integrate across products or even integrate non-product batch work such as backups from the same scheduler.
  • If you do not have an existing scheduler, then consider using the DBMS Scheduler provided with the database. It is likely your DBAs are already using it for their tasks and it is used by a lot of Oracle products already. The advantage of this scheduler is that you already have the license somewhere in your organization. It can be deployed locally within the product database or remotely as an enterprise wide solution. It has a lot of good features and Oracle Utilities uses this scheduler as a foundation of our cloud implementations. If you are on the cloud, then use the provided interface in Oracle Utilities Cloud Service Foundation, and if you have an external scheduler, use the REST based Scheduler API. If you are on-premise, then use the Oracle Enterprise Manager (12+) interface (Scheduler Central) in preference to the SQL Developer interface (though the latter is handy for developers). Oracle also ships a command line interface to the scheduler objects if you prefer PL/SQL-style administration.

Note: Scheduler Central in Oracle Enterprise Manager is included in the base functionality for Oracle Enterprise Manager and does not require any additional packs.

  • I would only recommend using online submission for demonstrations, development and perhaps testing (where you are not using Oracle Utilities Testing Accelerator or do not have the scheduler implemented). It has very limited support and will only execute individual processes.

 

Java 11: JEP 333 ZGC A Scalable Low-Latency Garbage Collector

Dietrich Schroff - Sat, 2019-02-02 14:34
After I found the strange "No-Op Garbage Collector", I was keen to see whether there are some other new GC features in Java 11.


There is another JEP, number 333. If you look here, the goals are:
  • GC pause times should not exceed 10ms
  • Handle heaps ranging from relatively small (a few hundreds of megabytes) to very large (many terabytes) in size
  • No more than 15% application throughput reduction compared to using G1
  • Lay a foundation for future GC features and optimizations leveraging colored pointers and load barriers
  • Initially supported platform: Linux/x64
Inside JEP 333 some performance numbers are provided:
Below are typical GC pause times from the same benchmark. ZGC manages to stay well below the 10ms goal. Note that exact numbers can vary (both up and down, but not significantly) depending on the exact machine and setup used.
(Lower is better)
ZGC
avg: 1.091ms (+/-0.215ms)
95th percentile: 1.380ms
99th percentile: 1.512ms
99.9th percentile: 1.663ms
99.99th percentile: 1.681ms
max: 1.681ms

G1
avg: 156.806ms (+/-71.126ms)
95th percentile: 316.672ms
99th percentile: 428.095ms
99.9th percentile: 543.846ms
99.99th percentile: 543.846ms
max: 543.846ms
This looks very promising. But within the limitations you can read that it will take some more time until this can be used:
The initial experimental version of ZGC will not have support for class unloading. The ClassUnloading and ClassUnloadingWithConcurrentMark options will be disabled by default. Enabling them will have no effect.
Also, ZGC will initially not have support for JVMCI (i.e. Graal). An error message will be printed if the EnableJVMCI option is enabled.
These limitations will be addressed at a later stage in this project.

Nevertheless: you can use this GC with the command line arguments

-XX:+UnlockExperimentalVMOptions -XX:+UseZGC

For more information take a look here: https://wiki.openjdk.java.net/display/zgc/Main


SELECT FOR UPDATE SKIP LOCKED

Tom Kyte - Fri, 2019-02-01 16:46
Hi Team Have a scenario to select a particular set of rows from a table for further processing. We need to ensure that multi users do not work on the same set of rows. We use SELECT FOR UPDATE SKIP LOCKED in order to achieve this. EG:a simp...
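
As general context for the question, a minimal sketch of the pattern (the table and column names below are hypothetical, not from the original question):

-- Hypothetical example: each worker session only sees rows
-- that are not already locked by another session.
select id, payload
from   work_queue
where  status = 'NEW'
for update skip locked;
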
Categories: DBA Blogs

Sending e-mail! -- Oracle 8i specific response

Tom Kyte - Fri, 2019-02-01 16:46
How can I automatically send personalized email to clients registered in my portal www.intrainternet.com, using the information stored in our Oracle 8i database?
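
As general context (the mail host and addresses below are placeholders, not details from the question), a minimal PL/SQL sketch using the UTL_SMTP package, which is available in Oracle 8i:

-- Hypothetical sketch: send a simple e-mail from PL/SQL with UTL_SMTP.
DECLARE
  c utl_smtp.connection;
BEGIN
  c := utl_smtp.open_connection('mailhost.example.com', 25);  -- placeholder SMTP server
  utl_smtp.helo(c, 'example.com');
  utl_smtp.mail(c, 'portal@example.com');
  utl_smtp.rcpt(c, 'client@example.com');
  utl_smtp.open_data(c);
  utl_smtp.write_data(c, 'Subject: Welcome' || utl_tcp.crlf || utl_tcp.crlf);
  utl_smtp.write_data(c, 'Hello, this is a personalized message.' || utl_tcp.crlf);
  utl_smtp.close_data(c);
  utl_smtp.quit(c);
END;
/
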
Categories: DBA Blogs

Fusion and in-app Guides

Duncan Davies - Fri, 2019-02-01 05:00

I’ve heard a couple of clients recently mentioning that they’d like to have some kind of in-app guide setup to walk their self-service users through common tasks in Fusion, so I thought I’d investigate.

There are quite a few companies that operate in the same area, here are the ones that I found with a short googling session:

Service      Cost
Appcues      Varies
Iridize      Acquired by Oracle, now known as Oracle Guided Learning
MyGuide      $1-3/user/month
Pendo        Varies
Toonimo      Request a Quote
Userlane     Request a Quote
WalkMe       Request a Quote
Whatfix      Request a Quote

There are also a couple of open source alternatives, Joyride and Bootstrap Tour. Although they're free to use, you're going to need to write code to get anything up and running, so they'd be significantly higher maintenance.

Over the next few weeks I’ll investigate some of these options and post the results.

Recover dropped tables with Virtual Access Restore in #Exasol

The Oracle Instructor - Fri, 2019-02-01 04:34

The technique of recovering only certain objects from an ordinary backup is called Virtual Access Restore. It means you create a database from the backup that contains only the minimum elements needed to access the objects you request. This database is then removed afterwards.

Let’s see an example. This is my initial setup:

EXAoperation Database page

One database in a 2+1 cluster. Yes it’s tiny because it lives on my notebook in VirtualBox. See here how you can get that too.

It uses the data volume v0000 and I took a backup into the archive volume v0002 already.

EXAoperation volumes

I have a schema named RETAIL there with the table SALES:

RETAIL.SALES

By mistake, that table gets dropped:

drop table

And I’m on AUTOCOMMIT, otherwise this could be rolled back in Exasol. Virtual Access Restore to the rescue!

First I need another data volume:

second data volume

Notice the size of the new volume: it is smaller than the overall size of the backup and of the “production database”! I did that to prove that space is not much of a concern here.

Then I add a second database to the cluster that uses that volume. The connection port (8564) must be different from the port used by the first database and the DB RAM in total must not exceed the licensed size, which is limited to 4 GB RAM in my case:

second database

I did not start that database because for the restore procedure it has to be down anyway. Clicking on the DB Name and then on the Backups button gets me here:

Foreign database backups

No backup shown yet because I didn’t take any backups with exa_db2. Clicking on Show foreign database backups:

Backup choice

The Expiration date must be empty for a Virtual Access Restore, so I just remove it and click Apply. Then I select the Restore Type as Virtual Access and click Restore:

Virtual Access Restore

This will automatically start the second database:

Two databases in one cluster

I connect to exa_db2 with EXAplus, where the Schema Browser gives me the DDL for the table SALES:

ExaPlus Schema Browser get DDL

I take that to exa_db1 and run it there, which gives me the table back, but empty. Next I create a connection from exa_db1 to exa_db2 and import the table:

create connection exa_db2 
to '192.168.43.11..13:8564' 
user 'sys' identified by 'exasol';

import into retail.sales 
from exa at exa_db2 
table retail.sales;

This took about 2 minutes:

Import

The second database and then the second data volume can now be dropped. Problem solved!

 

Categories: DBA Blogs

Continuing the Journey

Steven Chan - Fri, 2019-02-01 03:50

Greetings, EBS Technology Blog readers!

Speaking from my own experience as an Oracle E-Business Suite customer, this blog served as my go-to place for information regarding Oracle E-Business Suite Technology. So I understand first-hand the importance of this blog to you.

We in the EBS Technology Product Management and Development Teams are grateful to Steven for his leadership in the continuing refinement and usability of Oracle E-Business Suite, and his pioneering use of this blog to better keep in touch with you, our customers. Personally, I could not have asked for a better mentor. Wishing you all the best, Steven! And, hoping our paths cross again soon and often.

On behalf of the whole team, let me stress our continued commitment to the blog, and intention of operating with Steven's original guiding principle in mind: to bring you the information you need, when you need it. And ultimately, to keep improving how we do this.

Steven's new path has left us with some rather large shoes to fill, so going forward, this blog will bring you the distinctive individual voices of a number of highly experienced experts from our team who will be giving you their own unique insights into what we have delivered. Over the next few weeks, Kevin and I will be re-introducing you to some of our existing and frequent contributors and introducing you to new blog authors.

Our key goal is to continue to provide you, our readers and customers, the very latest news direct from Oracle E-Business Suite Development. And last but by no means least, we look forward to hearing your comments and feedback as we continue the journey of this blog.

Categories: APPS Blogs

Installing Spinnaker on Pivotal Container Service (PKS) with NSX-T running on vSphere

Pas Apicella - Thu, 2019-01-31 19:47
I decided to install Spinnaker on my vSphere PKS installation, into one of my clusters. Here is how I did it, step by step.

1. You will need PKS installed, which I have on vSphere with PKS 1.2 using NSX-T. Here is a screenshot of that showing the Ops Manager UI.


Make sure your PKS plans have these checkboxes enabled; without them checked, Spinnaker will not install using the Helm chart we will be using below.


2. In my setup I created a datastore which will be used by my K8s cluster. This is optional; you can set up PVCs however you see fit.



3. Now it's assumed you have a K8s cluster, which I have as shown below. I used the PKS CLI to create a small cluster of 1 master node and 3 worker nodes.

$ pks cluster lemons

Name:                     lemons
Plan Name:                small
UUID:                     19318553-472d-4bb5-9783-425ce5626149
Last Action:              CREATE
Last Action State:        succeeded
Last Action Description:  Instance provisioning completed
Kubernetes Master Host:   lemons.haas-65.pez.pivotal.io
Kubernetes Master Port:   8443
Worker Nodes:             3
Kubernetes Master IP(s):  10.y.y.y
Network Profile Name:

4. Create a StorageClass as follows; notice how we reference our vSphere datastore named "k8s" as per step 2.

$ kubectl create -f storage-class-vsphere.yaml

Note: storage-class-vsphere.yaml defined as follows

apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: fast
provisioner: kubernetes.io/vsphere-volume
parameters:
  datastore: k8s
  diskformat: thin
  fstype: ext3

5. Set this Storage Class as the default

$ kubectl patch storageclass fast -p '{"metadata": {"annotations":{"storageclass.kubernetes.io/is-default-class":"true"}}}'

Verify

papicella@papicella:~$ kubectl get storageclass
NAME             PROVISIONER                    AGE
fast (default)   kubernetes.io/vsphere-volume   14h

6. Install helm as shown below

$ kubectl create -f rbac-config.yaml
$ helm init --service-account tiller
$ kubectl create clusterrolebinding add-on-cluster-admin --clusterrole=cluster-admin --serviceaccount=kube-system:default
$ sleep 10
$ helm ls

Note: rbac-config.yaml defined as follows

apiVersion: v1
kind: ServiceAccount
metadata:
  name: tiller
  namespace: kube-system
---
apiVersion: rbac.authorization.k8s.io/v1beta1
kind: ClusterRoleBinding
metadata:
  name: tiller
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: cluster-admin
subjects:
  - kind: ServiceAccount
    name: tiller
    namespace: kube-system

7. Install Spinnaker into your K8s cluster as follows:

$ helm install --name myspinnaker stable/spinnaker --timeout 6000 --debug

If everything worked

papicella@papicella:~$ kubectl get pods
NAME                                  READY     STATUS      RESTARTS   AGE
myspinnaker-install-using-hal-gbd96   0/1       Completed   0          14m
myspinnaker-minio-5d4c999f8b-ttm7f    1/1       Running     0          14m
myspinnaker-redis-master-0            1/1       Running     0          14m
myspinnaker-spinnaker-halyard-0       1/1       Running     0          14m
spin-clouddriver-7b8cd6f964-ksksl     1/1       Running     0          12m
spin-deck-749c84fd77-j2t4h            1/1       Running     0          12m
spin-echo-5b9fd6f9fd-k62kd            1/1       Running     0          12m
spin-front50-6bfffdbbf8-v4cr4         1/1       Running     1          12m
spin-gate-6c4959fc85-lj52h            1/1       Running     0          12m
spin-igor-5f6756d8d7-zrbkw            1/1       Running     0          12m
spin-orca-5dcb7d79f7-v7cds            1/1       Running     0          12m
spin-rosco-7cb8bd4849-c44wg           1/1       Running     0          12m

8. Once the Helm command completes, you will see output as follows:

1. You will need to create 2 port forwarding tunnels in order to access the Spinnaker UI:
  export DECK_POD=$(kubectl get pods --namespace default -l "cluster=spin-deck" -o jsonpath="{.items[0].metadata.name}")
  kubectl port-forward --namespace default $DECK_POD 9000

2. Visit the Spinnaker UI by opening your browser to: http://127.0.0.1:9000

To customize your Spinnaker installation. Create a shell in your Halyard pod:

  kubectl exec --namespace default -it myspinnaker-spinnaker-halyard-0 bash

For more info on using Halyard to customize your installation, visit:
  https://www.spinnaker.io/reference/halyard/

For more info on the Kubernetes integration for Spinnaker, visit:
  https://www.spinnaker.io/reference/providers/kubernetes-v2/

9. Go ahead and run these commands to connect to the Spinnaker UI from your localhost:

$ export DECK_POD=$(kubectl get pods --namespace default -l "cluster=spin-deck" -o jsonpath="{.items[0].metadata.name}")
$ kubectl port-forward --namespace default $DECK_POD 9000
Forwarding from 127.0.0.1:9000 -> 9000
Forwarding from [::1]:9000 -> 9000

10. Browse to http://127.0.0.1:9000



More Information

Spinnaker
https://www.spinnaker.io/

Pivotal Container Service
https://pivotal.io/platform/pivotal-container-service


Categories: Fusion Middleware

Configuration Management for Oracle Utilities

Anthony Shorten - Thu, 2019-01-31 18:45

An updated series of whitepapers is now available for managing configuration and code in Oracle Utilities products, whether the implementation is on-premise, hybrid or on the Oracle Utilities SaaS Cloud. It has been updated for the latest Oracle Utilities Application Framework release. The series highlights the generic tools, techniques and practices available for use in Oracle Utilities products. The series is split into a number of documents:

  • Concepts. Overview of the series and the concept of Configuration Management for Oracle Utilities products.
  • Environment Management. Establishing and managing environments for use on-premise, hybrid and on the Oracle Utilities SaaS Cloud. There are some practices and techniques discussed to reduce implementation costs.
  • Version Management. Understanding the inbuilt and third party integration for managing individual versions of individual extension assets. There is a discussion of managing code on the Oracle Utilities SaaS Cloud.
  • Release Management. Understanding the inbuilt release management capabilities for creating extension releases and accelerators.
  • Distribution. Installation advice for releasing extensions across the environments on-premise, hybrid and Oracle Utilities SaaS Cloud.
  • Change Management. A generic change management process to approve extension releases including assessment criteria.
  • Configuration Status. The information available for reporting state of extension assets.
  • Defect Management. A generic defect management process to handle defects in the product and extensions.
  • Implementing Fixes. A process and advice on implementing single fixes individually or in groups.
  • Implementing Upgrades. The common techniques and processes for implementing upgrades.
  • Preparing for the Cloud. Common techniques and assets that need to be migrated prior to moving to the Oracle Utilities SaaS Cloud.

For more information and for the whitepaper associated with these topics refer to the Configuration Management Series (Doc Id: 560401.1) available from My Oracle Support.

Introducing Fishbowl’s XML Feed Connector for Google Cloud Search

Last November, Google released Cloud Search with third-party connectivity. While not a direct replacement for the Google Search Appliance (GSA), Google Cloud Search is Google’s next-generation search platform and is an excellent option for many existing GSA customers whose appliances are nearing (or past) their expiration. Fishbowl is aiming to make the transition for GSA customers even easier with our new XML Feed Connector for Google Cloud Search.

One of the GSA’s features was the ability to index content via custom feeds using the GSA Feeds Protocol. Custom XML feed files containing content, URLs, and metadata could be pushed directly to the GSA through a feed client, and the GSA would parse and index the content in those files. The XML Feed Connector for Google Cloud Search brings this same functionality to Google’s next-generation search platform, allowing GSA customers to continue to use their existing XML feed files with Cloud Search.

Our number one priority with the XML Feed Connector was to ensure that users would be able to use the exact same XML feed files they used with the GSA, with no modifications to the files required. These XML feed files can provide content either by pointing to a URL to be crawled, or by directly providing the text, HTML, or compressed content in the XML itself. For URLs, the GSA’s built-in web crawler would retrieve the content; however, Google Cloud Search has no built-in crawling capabilities. But fear not, as our XML Feed Connector will handle URL content retrieval before sending the content to Cloud Search for indexing. It will also extract the title and metadata from any HTML page or PDF document retrieved via the provided URL, allowing the metadata to be used for relevancy, display, and filtering purposes. For content feeds using base-64 compressed content, the connector will also handle decompression and extraction of content for indexing.

In order to queue feeds for indexing, we’ve implemented the GSA’s feed client functionality, allowing feed files to be pushed to the Connector through a web port. The same scripts and web forms you used with the GSA will work here. You can configure the HTTP listener port and restrict the Connector to only accept files from certain IP addresses.

Another difference between the GSA and Google Cloud Search is how they handle metadata. The GSA would accept and index any metadata provided for an item, but Cloud Search requires you to specify and register a structured data schema that defines the metadata fields that will be accepted. There are tighter restrictions on names of metadata fields in Cloud Search, so we implemented the ability to map metadata names between those in your feed files and those uploaded to Cloud Search. For example, let’s say your XML feed file has a metadata field titled “document_title”. Cloud Search does not allow for underscores in metadata definitions, so you could register your schema with the metadata field “documenttitle”, then using the XML Feed Connector, map the XML field “document_title” to the Cloud Search field “documenttitle”.

Here is a full rundown of the supported features in the XML Feed Connector for Google Cloud Search:

  • Full, incremental, and metadata-and-url feed types
  • Grouping of records
  • Add and delete actions
  • Mapping of metadata
  • Feed content:
    • Text content
    • HTML content
    • Base 64 binary content
    • Base 64 compressed content
    • Retrieve content via URL
    • Extract HTML title and meta tags
    • Extract PDF title and metadata
  • Basic authentication to retrieve content from URLs
  • Configurable HTTP feed port
  • Configurable feed source IP restrictions

Of course, you don’t have to have used the GSA to benefit from the XML Feed Connector. As previously mentioned, Google Cloud Search does not have a built-in web crawler, and the XML Feed Connector can be given a feed file with URLs to retrieve content from and index. Feeds are especially helpful for indexing html content that cannot be traversed using a traditional web/spidering approach such as web applications, web-based content libraries, or single-page applications. If you’d like to learn more about Google Cloud Search or the XML Feed Connector, please contact us.

Fishbowl Solutions is a Google Cloud Partner and authorized Cloud Search reseller.

The post Introducing Fishbowl’s XML Feed Connector for Google Cloud Search appeared first on Fishbowl Solutions.

Categories: Fusion Middleware, Other

SQL Profile not used on slightly different query

Bobby Durrett's DBA Blog - Thu, 2019-01-31 15:09

Last week I was asked to help with a performance problem that looked a lot like a problem I fixed in July with a SQL Profile. The query whose plan I fixed back in July was modified by a minor application change over the weekend. A single column that was already in the select clause was added to another part of the select clause. As a result, the SQL_ID for the new query was different than the one for the July query. The SQL Profile from July associated SQL_ID 2w9nb7yvu91g0 with PLAN_HASH_VALUE 1178583502, but since the SQL_ID was now 43r1v8v6fc52q the SQL Profile was no longer used. At first, I thought I would have to redo the work I did in July to create a SQL Profile for the new query. Then I realized that the plan I used in July would work with the new SQL_ID so all I did was create a SQL Profile relating SQL_ID 43r1v8v6fc52q with PLAN_HASH_VALUE 1178583502 and the problem was solved. This is an 11.2.0.3 database running on the HP-UX Itanium platform. Here is a post from 2013 explaining how to create a SQL Profile: url. I thought it would be helpful to use this post to go over the steps that I went through with the July incident and how I originally generated the good plan. Then I wanted to make some comments about the various ways I come up with good plans for SQL Profiles by either generating a new better plan or by finding an older existing better one. Lastly, I wanted to talk about how a given good plan can be used for a variety of similar SQL statements.
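
As a rough sketch of that step (assuming the coe_xfr_sql_profile.sql script from My Oracle Support is used; the exact prompts and generated file name can differ, and DBMS_SQLTUNE.IMPORT_SQL_PROFILE is an alternative), attaching the existing plan to the new SQL_ID looks something like this:

-- Sketch only: generate and run a SQL Profile script for SQL_ID 43r1v8v6fc52q
-- forcing PLAN_HASH_VALUE 1178583502 (coe_xfr_sql_profile.sql is downloaded from MOS).
SQL> @coe_xfr_sql_profile.sql 43r1v8v6fc52q 1178583502
-- The script writes a file such as coe_xfr_sql_profile_43r1v8v6fc52q_1178583502.sql;
-- running that generated script creates the SQL Profile:
SQL> @coe_xfr_sql_profile_43r1v8v6fc52q_1178583502.sql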

The problem query that I worked on in July and many of the other SQL statements that I tune with SQL Profiles have bind variables in their where clauses. Usually the optimizer generates the plan for a query with bind variables once based on the values of the bind variables at that time. Then, unless the plan is flushed out of the shared pool, the query continues to run on the same plan even if it is horribly inefficient for other bind variable values. There is a feature that will cause the optimizer to run different plans based on the bind variable values in some cases but the SQL statements that I keep running into do not seem to use that feature. Since the query I worked on in July had bind variables I assumed that it was a typical case of a plan that worked well for one set of bind variables and that was terribly slow for another set. So, I had to find a set of bind variable values that made the query slow and figure out a better plan for those values. I used my bind2.sql script to extract the bind variable values for the problem query when I was working on the problem in July.

After extracting the bind variables, I used an AWR report to figure out which part of the plan contributed the most to the run time of the query so that I knew which bind variable value was causing the slowdown. Using an AWR report in this way only works if you do not have a bunch of slow SQL statements running at the same time. In this case the problem query 2w9nb7yvu91g0 was dominating the activity on the database with 62.19% of the total elapsed time. If there were a bunch of SQL Statements at the top of this list with similar percent of total values, it might be hard to use the AWR report to find information about this one query.

Since the activity for 2w9nb7yvu91g0 was 87.19% CPU I looked for the segments with the most logical reads. Logical reads are reads from memory, so they consume CPU and not disk I/O. In the graph below the segment for the S_ACCNT_POSTN table has 88.60% of the logical reads so most likely this segment caused the slowness of the query’s plan.

I looked at the plan for 2w9nb7yvu91g0 to see where the most heavily read table was used. This would probably be the source of the slow query performance. I found that it was doing a range scan of an index for the S_ACCNT_POSTN table that had the column POSITION_ID as its first column. This made me suspect that the plan was using the wrong index. If an index was used to retrieve many rows from the table that could take a long time. I did a count on all the rows in the table grouping by POSITION_ID and found that most rows had a specific value for that column. I replaced the actual POSITION_ID values with VALUE1, VALUE2, etc. below to hide the real values.

POSITION_ID            CNT
--------------- ----------
VALUE1             2075039
VALUE2               17671
VALUE3                8965
VALUE4                5830
VALUE5                5502
VALUE6                5070
VALUE7                4907
VALUE8                4903

Next, I verified that the query had an equal condition that related a bind variable to the POSITION_ID column of the problem table. This made me suspect that the plan in the shared pool was generated with a bind variable value for POSITION_ID other than VALUE1. So, that plan would work well for whatever value was used to create it. POSITION_ID would be equal to that value for a small percentage of the rows in the table. But, running the query in SQL*Plus with POSITION_ID=’VALUE1′ caused the optimizer to choose a plan that made sense given that this condition was true for most of the rows in the table. The PLAN_HASH_VALUE for the new plan was 1178583502.

I tested 1178583502 against a variety of possible bind variable values by using an outline hint in SQL*Plus scripts to force that plan no matter which values I tested against. I extracted the outline hint by running the query with POSITION_ID=’VALUE1′ and using this dbms_xplan call:

select * from table(dbms_xplan.display_cursor(null,null,'OUTLINE'));

Then I just added the outline hint to a copy of the same SQL*Plus script and tried various combinations of bind variable values as constants in the where clause just as I had tried VALUE1 for POSITION_ID. I used the values that I had extracted using bind2.sql. After verifying that the new plan worked with a variety of possible bind variable values, I used a SQL Profile to force 2w9nb7yvu91g0 to use 1178583502 and the problem was resolved.

I have just described how I created the original July SQL Profile by running a version of the problem query replacing the bind variables with constants that I knew would cause the original plan to run for a long time. The optimizer chose a better plan for this set of constants than the one locked into the shared pool for the original query. I used the PLAN_HASH_VALUE for this plan to create a SQL Profile for the July query. This is like an approach that I documented in two earlier blog posts. In 2014 I talked about using a hint to get a faster plan in memory so I could use it in a SQL Profile. In 2017 I suggested using an outline hint in the same way. In both of those cases I ran the problem query with hints and verified that it was faster with the hints. Then I used a SQL Profile to force the better PLAN_HASH_VALUE onto the problem query. So, in all these cases the key is to generate a better plan in any way possible so that it is in memory and then create a SQL Profile based on it. A lot of times we have queries that have run on a better plan in the past and we just apply a SQL Profile that forces the better plan that is already in the system. My December, 2018 post documents this type of situation. But the 2014 and 2017 blog posts that I mentioned above and the July 2018 example that I just described all are similar in that we had to come up with a new plan that the query had never used and then force it onto the SQL statement using a SQL Profile.

The incidents in January and July and the cases where I added hints all lead me to wonder how different one SQL statement can be from another and still share the same plan. The problem last week showed that two queries with slightly different select clauses could still use the same plan. The other cases show that you can add hints or run the statement with bind variables replaced with constants. In the January case I did not have to go back through the analysis that I did in July because I could quickly force the existing plan from the July query onto the January one. The January problem also shows the limits of SQL Profiles. The slightest change to a SQL statement causes a SQL Profile to be ignored, even though the plan would still work for the new SQL statement. But in the January case the ability to use the same plan for slightly different queries made it easy to create a new SQL Profile.

Bobby

Categories: DBA Blogs

A New Chapter, Redux

Steven Chan - Thu, 2019-01-31 11:54

The new team is in place now, so it's time to bow out.

My 21 years at Oracle have been more-fulfilling than I would have ever imagined. All of you – my mentors, colleagues, staff, and faithful readers – have helped me grow professionally and personally. Some of you have done even more, by contributing to my life in deep and profound ways. You have my eternal gratitude.

Life is long and the world small. I hope that our paths will cross again.

Categories: APPS Blogs

Descending Problem

Jonathan Lewis - Thu, 2019-01-31 09:34

I’ve written in the past about oddities with descending indexes ( here, here, and here, for example) but I’ve just come across a case where I may have to introduce a descending index that really shouldn’t need to exist. As so often happens it’s at the boundary where two Oracle features collide. I have a table that handles data for a large number of customers, who record a reasonable number of transactions per year, and I have a query that displays the most recent transactions for a customer. Conveniently the table is partitioned by hash on the customer ID, and I have an index that starts with the customer_id and transaction_date columns. So here’s my query or, to be a little more accurate, the client’s query – simplified and camouflaged:


select  /*+ gather_plan_statistics */
        *
from    (
             select
                    v1.*,
                    rownum rn
             from   (
                             select   /*
                                         no_eliminate_oby
                                         index_rs_desc(t1 (customer_id, transaction_date))
                                      */
                                      t1.*
                             from     t1
                             where    customer_id = 50
                             and      transaction_date >= to_date('1900-01-01','yyyy-mm-dd')
                             order by transaction_date DESC
                ) v1
                where  rownum <= 10 -- > comment to avoid WordPress format issue
         )
where    rn >= 1
;

You’ll notice some hinting – the /*+ gather_plan_statistics */ will allow me to report the rowsource execution stats when I pull the plan from memory, and the hints in the inline view (which I’ve commented out in the above) will force a particular execution plan – walking through the index on (customer_id, transaction_date) in descending order.

If I create t1 as a simple (non-partitioned) heap table I get the following plan unhinted (I’ve had to edit a “less than or equal to” symbol to avoid a WordPress format issue):

----------------------------------------------------------------------------------------------------------------
| Id  | Operation                       | Name  | Starts | E-Rows | Cost (%CPU)| A-Rows |   A-Time   | Buffers |
----------------------------------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT                |       |      1 |        |    14 (100)|     10 |00:00:00.01 |      14 |
|*  1 |  VIEW                           |       |      1 |     10 |    14   (0)|     10 |00:00:00.01 |      14 |
|*  2 |   COUNT STOPKEY                 |       |      1 |        |            |     10 |00:00:00.01 |      14 |
|   3 |    VIEW                         |       |      1 |     10 |    14   (0)|     10 |00:00:00.01 |      14 |
|   4 |     TABLE ACCESS BY INDEX ROWID | T1    |      1 |    340 |    14   (0)|     10 |00:00:00.01 |      14 |
|*  5 |      INDEX RANGE SCAN DESCENDING| T1_I1 |      1 |     10 |     3   (0)|     10 |00:00:00.01 |       4 |
----------------------------------------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------
   1 - filter("RN">=1)
   2 - filter(ROWNUM .LE. 10)
   5 - access("CUSTOMER_ID"=50 AND "TRANSACTION_DATE" IS NOT NULL AND "TRANSACTION_DATE">=TO_DATE('
              1900-01-01 00:00:00', 'syyyy-mm-dd hh24:mi:ss'))


Notice the descending range scan of the index – just as I wanted it – the minimal number of buffer visits, and only 10 rows (and rowids) examined from the table. But what happens if I recreate t1 as a hash-partitioned table with local index – here’s the new plan, again without hinting the SQL:


----------------------------------------------------------------------------------------------------------------------------------------------------------
| Id  | Operation                                      | Name  | Starts | E-Rows | Cost (%CPU)| A-Rows |   A-Time   | Buffers |  OMem |  1Mem | Used-Mem |
----------------------------------------------------------------------------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT                               |       |      1 |        |   207 (100)|     10 |00:00:00.01 |     138 |       |       |          |
|*  1 |  VIEW                                          |       |      1 |     10 |   207   (1)|     10 |00:00:00.01 |     138 |       |       |          |
|*  2 |   COUNT STOPKEY                                |       |      1 |        |            |     10 |00:00:00.01 |     138 |       |       |          |
|   3 |    VIEW                                        |       |      1 |    340 |   207   (1)|     10 |00:00:00.01 |     138 |       |       |          |
|*  4 |     SORT ORDER BY STOPKEY                      |       |      1 |    340 |   207   (1)|     10 |00:00:00.01 |     138 |  2048 |  2048 | 2048  (0)|
|   5 |      PARTITION HASH SINGLE                     |       |      1 |    340 |   206   (0)|    340 |00:00:00.01 |     138 |       |       |          |
|   6 |       TABLE ACCESS BY LOCAL INDEX ROWID BATCHED| T1    |      1 |    340 |   206   (0)|    340 |00:00:00.01 |     138 |       |       |          |
|*  7 |        INDEX RANGE SCAN                        | T1_I1 |      1 |    340 |     4   (0)|    340 |00:00:00.01 |       3 |       |       |          |
----------------------------------------------------------------------------------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------
   1 - filter("RN">=1)
   2 - filter(ROWNUM .LE. 10)
   4 - filter(ROWNUM .LE. 10)
   7 - access("CUSTOMER_ID"=50 AND "TRANSACTION_DATE">=TO_DATE(' 1900-01-01 00:00:00', 'syyyy-mm-dd hh24:mi:ss') AND "TRANSACTION_DATE" IS NOT NULL)

Even though the optimizer has recognised that it will be visiting a single partition through a local index it has not chosen a descending index range scan, though it has used the appropriate index; so it has fetched all the relevant rows from the table in the wrong order then sorted them, discarding all but the top 10. We’ve done 138 buffer visits (which would turn into disk I/Os, and far more of them, in the production system).

Does this mean that the optimizer can’t use the descending index when the table is partitioned – or that somehow the costing has gone wrong? Here’s the plan with the hints in place to see what happens when we demand a descending range scan:


----------------------------------------------------------------------------------------------------------------------
| Id  | Operation                             | Name  | Starts | E-Rows | Cost (%CPU)| A-Rows |   A-Time   | Buffers |
----------------------------------------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT                      |       |      1 |        |   207 (100)|     10 |00:00:00.01 |       8 |
|*  1 |  VIEW                                 |       |      1 |     10 |   207   (1)|     10 |00:00:00.01 |       8 |
|*  2 |   COUNT STOPKEY                       |       |      1 |        |            |     10 |00:00:00.01 |       8 |
|   3 |    VIEW                               |       |      1 |    340 |   207   (1)|     10 |00:00:00.01 |       8 |
|   4 |     PARTITION HASH SINGLE             |       |      1 |    340 |   206   (0)|     10 |00:00:00.01 |       8 |
|   5 |      TABLE ACCESS BY LOCAL INDEX ROWID| T1    |      1 |    340 |   206   (0)|     10 |00:00:00.01 |       8 |
|*  6 |       INDEX RANGE SCAN DESCENDING     | T1_I1 |      1 |    340 |     4   (0)|     16 |00:00:00.01 |       3 |
----------------------------------------------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------
   1 - filter("RN">=1)
   2 - filter(ROWNUM .LE. 10)
   6 - access("CUSTOMER_ID"=50 AND "TRANSACTION_DATE" IS NOT NULL AND "TRANSACTION_DATE">=TO_DATE('
              1900-01-01 00:00:00', 'syyyy-mm-dd hh24:mi:ss'))

The optimizer is happy to oblige with the descending range scan – we can see that we’ve visited only 8 buffers, and fetched only 10 rows from the table. The cost, however, hasn’t made any allowance for the limited range scan. Check back to the plan for the simple (non-partitioned) table and you’ll see that the optimizer did allow for the reduced range scan. So the problem here is a costing one – we have to hint the index range scan if we want Oracle to limit the work it does.

You might notice, by the way that the number of rowids returned in the index range scan descending operation is 16 rather than 10 – a little variation that didn’t show up when the table wasn’t partitioned. I don’t know why this happened, but when I changed the requirement to 20 rows the range scan returned 31 rowids, when I changed it to 34 rows the range scan returned 46 rows, and a request for 47 rows returned 61 index rowids – you can see the pattern, the number of rowids returned by the index range scan seems to be 1 + 15*N.

Footnote:

If you want to avoid hinting the code (or adding an SQL patch) you need only re-create the index with the transaction_date column declared as descending (“desc”), at which point the optimizer automatically chooses the correct strategy and the run-time engine returns exactly 10 rowids and doesn’t need to do any sorting. But who wants to create a descending index when they don’t really need it !
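
For reference, a sketch of that alternative against the test script below – drop the original index and re-create it with the date column declared descending, mirroring the original definition:

drop index t1_i1;

create index t1_i1 on t1(customer_id, transaction_date desc)
local
nologging
;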

If you want to reproduce the experiments, here’s the script to create my test data.


rem
rem     Script:         pt_ind_desc_bug.sql
rem     Author:         Jonathan Lewis
rem     Dated:          Dec 2018
rem     Purpose:        
rem
rem     Last tested 
rem             18.3.0.0
rem             12.2.0.1
rem             12.1.0.2
rem

create table t1 (
        customer_id,
        transaction_date,
        small_vc,
        padding 
)
partition by hash(customer_id) partitions 4
nologging
as
with generator as (
        select 
                rownum id
        from dual 
        connect by 
                level <= 1e4 -- > comment to avoid WordPress format issue
)
select
        mod(rownum,128)                         customer_id,
        (trunc(sysdate) - 1e6) + rownum         transaction_date,
        lpad(rownum,10,'0')                     v1,
        lpad('x',100,'x')                       padding
from
        generator       v1,
        generator       v2
where
        rownum <= 1e6 -- > comment to avoid WordPress format issue
;

create index t1_i1 on t1(customer_id, transaction_date) 
local 
nologging
;

begin
        dbms_stats.gather_table_stats(
                ownname     => null,
                tabname     => 'T1',
                method_opt  => 'for all columns size 1'
        );
end;
/

I’ve run this test on 12.1.0.2, 12.2.0.1, and 18.3.0.0 – the behaviour is the same in all three versions.

Italian Oracle User Group Tech Days 2019

Yann Neuhaus - Wed, 2019-01-30 15:28

The Italian Oracle User Group (ITOUG) is an independent group of Oracle enthusiasts and experts who work together as volunteers to promote technical knowledge sharing in Italy.

Here the ITOUG Board members:
ITOUG Board

This year ITOUG Tech Days take place in Milan on 30th January and in Rome on 1st February. Two different streams for each event:
– Database
– Analytics and Big Data
Today I participated in the event in Milan.
But before talking about that, ITOUG Tech Days started with the speakers’ dinner on Tuesday evening in Milan: aperitif, good Italian food and very nice people.
ITOUG Speakers Dinner

On Wednesday morning, we all met at Oracle Italia in Cinisello Balsamo (MI):
ITOUG Milan

After the welcome message by some ITOUG Board members:
ITOUG Welcome  Msg
the sessions finally started. I attended the following ones from the Database stream:

- “Instrumentation 2.0: Performance is a feature” by Lasse Jenssen from Norway
Lasse
We have to understand what’s going on in a system; performance is a feature and we need instrumentation. Oracle End-to-End metrics, new tags in 12c, v$sql_monitor, dbms_monitor… And work in progress for instrumentation 3.0 with ElasticSearch, LogStash and Kibana.

- “Hash Join Memory Optimization” by one of the ITOUG Board member, Donatello Settembrino
Donatello
How hash joins work and how to improve PGA consumption and performance. Examples of partitioning (to exclude useless data), (full) partition-wise joins (to use fewer resources) and parallelism. Differences between right-deep join trees and left-deep join trees, and the concept of bushy join trees in 12cR2.

- “Design your databases using Oracle SQL Developer Data Modeler” by Heli Helskyaho from Finland
Heli
Oracle SQL Developer Data Modeler, with SQL Developer or in standalone mode, to design your database. It uses Subversion, integrated in the tool, for version control and management. It also has support for other databases, MySQL for example. And it's free.

- “Bringing your Oracle Database alive with APEX” by Dimitri Gielis from Belgium
Dimitri
Two things to learn from this session:
1) Use Oracle Application Express to design and develop a web application.
2) And Quick SQL to create database objects and build a data model
And all that in a very fast way.

- “Blockchain beyond the Hype” by one of the ITOUG Board member, Paolo Gaggia
Paolo
The evolution of blockchain from bitcoin to new enterprise-oriented implementations, and some interesting use cases.

Every session was very interesting: thanks to the great and amazing speakers (experts working on Oracle technologies, Oracle ACEs, Oracle ACE Directors…) for sharing their knowledge.

Follow the Italian Oracle User Group on Twitter (IT_OUG) and see you at the next ITOUG event!

Cet article Italian Oracle User Group Tech Days 2019 est apparu en premier sur Blog dbi services.

Pages

Subscribe to Oracle FAQ aggregator