Feed aggregator

Storing JSON file in the database

Tom Kyte - Wed, 2019-02-06 06:26
I created a table with two JSON columns. CREATE TABLE USER.JSON_DOCS ( id RAW(16) NOT NULL, file CLOB, CONSTRAINT json_docs_pk PRIMARY KEY (id), CONSTRAINT json_docs_chk CHECK (file IS JSON) ); Then I tried to store a JSON in...
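For context, here is a minimal sketch of an insert against the table above (my own illustration, not part of the question; SYS_GUID() supplies the RAW(16) key, and the IS JSON check constraint rejects anything that is not well-formed JSON):

INSERT INTO json_docs (id, file)
VALUES (SYS_GUID(), '{"name": "example", "created": "2019-02-06"}');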
Categories: DBA Blogs

View that can be used to get last SQL elapsed time

Tom Kyte - Wed, 2019-02-06 06:26
Hello Gentlemen, We currently don't have OEM tool. Is there a data dictionary view that can be queried that will provide the last elapsed time for a particular query that has an elapsed time of less than 5 seconds? v$sql view has a column called...
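For context, here is a hedged sketch of the kind of query that can answer this (my own illustration; ELAPSED_TIME in v$sql is the cumulative elapsed time in microseconds across all executions, so dividing by EXECUTIONS gives a per-execution average):

SELECT sql_id,
       elapsed_time / GREATEST(executions, 1) / 1e6 AS avg_elapsed_seconds,
       last_active_time
FROM   v$sql
WHERE  sql_text LIKE '%some identifying text%';  -- placeholder predicate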
Categories: DBA Blogs

Create Databases from Automatic Daily Backups - New Feature - Jan 2019 - Oracle Cloud Infrastructure

Senthil Rajendran - Wed, 2019-02-06 03:03
New Feature - Create Databases from Automatic Daily Backups
Services : Block Volume
Release Month : Jan 2019

The Automatic Daily Backup feature lets you create automated daily backups of OCI DB System databases. With this feature, you can now create a new DB System from one of these automated backups.

Here is a DB System created in OCI; you can see that an automated backup is running:

Once the backup is completed, you will see that it is ready for the refresh.

From the backup section, select Create Backup and fill in the required details.

Once you start the database creation process, you will see the restore process begin immediately.

The DB System will show your database in the provisioning stage.

Once the restore is done, your new database will be up and running.

This is indeed a cool feature. I bet your DBA will love it; it frees up time to look into other useful things.

New OA Framework 12.2.7 Update 2 Now Available

Steven Chan - Tue, 2019-02-05 18:00

Web-based content in Oracle E-Business Suite Release 12 runs on the Oracle Application Framework (also known as OA Framework, OAF, or FWK) user interface libraries and infrastructure.

We periodically release updates to Oracle Application Framework to fix performance, security, and stability issues.

These updates are provided in cumulative Release Update Packs, and cumulative Bundle Patches that can be applied on top of the Release Update Packs. In this context, cumulative means that the latest RUP or Bundle Patch contains everything released earlier.

The latest OAF update for Oracle E-Business Suite Release 12.2.7 is now available:

Oracle Application Framework (FWK) Release 12.2.7 Bundle 2 (Patch 28963259:R12.FWK.C)

Where is this update documented?

Instructions for installing this OAF Release Update Pack are in the following My Oracle Support knowledge document:

Who should apply this patch?

All Oracle E-Business Suite Release 12.2.7 users should apply this patch. Future OAF patches for EBS Release 12.2.7 will require this patch as a prerequisite. 

What's new in this update?

This bundle patch includes fixes for the following issues:

  • When the LOV search icon is clicked, the link ‘Skip navigation elements to page contents’ takes focus.
  • When adding attachments, an attachment title longer than 256 multibyte characters raises an error.

Related Articles

Categories: APPS Blogs

HP Color LaserJet MFP on Ubuntu: Scan Error during device I/O (code=9)

Dietrich Schroff - Tue, 2019-02-05 13:57
I decided to use a multi function printer including scanner and fax with my ubuntu systems.
First step was to download the hplip package from HP:
https://developers.hp.com/hp-linux-imaging-and-printing/gethplip

The installation process worked like a charm:

bash ./hplip-3.19.1.run

But running the scan utility ends with the following error:


$ hp-scan

HP Linux Imaging and Printing System (ver. 3.19.1)
Scan Utility ver. 2.2

Copyright (c) 2001-15 HP Development Company, LP
This software comes with ABSOLUTELY NO WARRANTY.
This is free software, and you are welcome to distribute it
under certain conditions. See COPYING file for more details.

warning: No destinations specified. Adding 'file' destination by default.
Using device hpaio:/net/HP_ColorLaserJet_MFP_M278-M281?ip=192.168.178.200
Opening connection to device...
error: SANE: Error during device I/O (code=9)

After searching around, the solution was the following: "Install the plugin"!

$ hp-plugin

HP Linux Imaging and Printing System (ver. 3.19.1)
Plugin Download and Install Utility ver. 2.1

Copyright (c) 2001-15 HP Development Company, LP
This software comes with ABSOLUTELY NO WARRANTY.
This is free software, and you are welcome to distribute it
under certain conditions. See COPYING file for more details.

Checking for network connection...
Downloading plug-in from:
Plugin is not accessible. Trying to download it from fallback location: [https://developers.hp.com/sites/default/files/hplip-3.19.1-plugin.run]
Receiving digital keys: /usr/bin/gpg --homedir /home/esther/.hplip/.gnupg --no-permission-warning --keyserver pgp.mit.edu --recv-keys 0x4ABA2F66DBD5A95894910E0673D770CDA59047B9
Creating directory plugin_tmp
Verifying archive integrity... All good.
Uncompressing HPLIP 3.19.1 Plugin Self Extracting Archive..............................................................

HP Linux Imaging and Printing System (ver. 3.19.1)
Plugin Installer ver. 3.0

Copyright (c) 2001-15 HP Development Company, LP
This software comes with ABSOLUTELY NO WARRANTY.
This is free software, and you are welcome to distribute it
under certain conditions. See COPYING file for more details.

Plug-in version: 3.19.1
Installed HPLIP version: 3.19.1
Number of files to install: 64


Done.
 Plug-in installation successful

Done.

After that, running hp-scan immediately creates a PNG file with the scan:
 hp-scan

HP Linux Imaging and Printing System (ver. 3.19.1)
Scan Utility ver. 2.2

Copyright (c) 2001-15 HP Development Company, LP
This software comes with ABSOLUTELY NO WARRANTY.
This is free software, and you are welcome to distribute it
under certain conditions. See COPYING file for more details.

warning: No destinations specified. Adding 'file' destination by default.
Using device hpaio:/net/HP_ColorLaserJet_MFP_M278-M281?ip=192.168.178.200
Opening connection to device...

Resolution: 300dpi
Mode: gray
Compression: JPEG
Scan area (mm):
  Top left (x,y): (0.000000mm, 0.000000mm)
  Bottom right (x,y): (215.899994mm, 296.925995mm)
  Width: 215.899994mm
  Height: 296.925995mm
Destination(s): file
Output file:
warning: File destination enabled with no output file specified.
Setting output format to PNG for greyscale mode.
warning: Defaulting to '/home/esther/Downloads/hpscan001.png'.

Warming up...


Scanning...
Reading data: [*************************************************************************************************************************] 100%  8.5 MB    
Read 8.5 MB from scanner.
Closing device.

Outputting to destination 'file':

Done.

This is really fast. But if you want a GUI, just use xsane...

Keeping a column constant

Tom Kyte - Tue, 2019-02-05 12:26
Hi, Oracle Database 12c Enterprise Edition Release 12.1.0.2.0 - 64bit. We have a need to keep certain values in a table constant in a cloned database. We store email addresses in a table, and use these for generating email commu...
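For context, one common approach is to overwrite the sensitive values right after the clone is opened; a minimal sketch only, with hypothetical table and column names:

-- Run in the cloned database immediately after opening it.
UPDATE email_recipients                        -- hypothetical table
SET    email_address = 'noreply@example.com';  -- hypothetical column
COMMIT;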
Categories: DBA Blogs

Reintroducing Robert Farrington

Steven Chan - Tue, 2019-02-05 09:22

It is my pleasure to reintroduce you to Robert Farrington, a long-time member of the Oracle E-Business Suite Technology documentation team whose two main (and rather diverse, which he likes!) current areas of coverage are Oracle Application Framework and EBS on Oracle Cloud. In addition, he writes and edits many My Oracle Support knowledge documents. As an Oracle E-Business Suite customer, you have been reading Robert's work for many years.

In addition, Robert has been a key contributor to this blog and our sister Oracle E-Business Suite and Oracle Cloud blog as both editor and author for many years. I look forward to collaborating with Robert to continue to bring you our latest announcements and tips in a timely manner and easy to read but informative style.

Here are a few of Robert's recent blog posts:

Related Articles
Categories: APPS Blogs

What are custom and generic plans in PostgreSQL?

Yann Neuhaus - Tue, 2019-02-05 08:28

I have already written a post about prepared statements in PostgreSQL some time ago. What I did not mention in that post is the concept of generic and custom plans. So let's have a look at those.

As always, we start with creating a demo table and populate that table with some sample data:

pgbench=# create table demo ( a int, b text );
CREATE TABLE
pgbench=# insert into demo select i, 'aaa' from generate_series (1,100) i;
INSERT 0 100
pgbench=# insert into demo select i, 'bbb' from generate_series (101,200) i;
INSERT 0 100
pgbench=# insert into demo select i, 'ccc' from generate_series (201,300) i;
INSERT 0 100
pgbench=# analyze demo;
ANALYZE

Now that we have some data we can prepare a statement we would like to execute with various values:

pgbench=# prepare my_stmt as select * from demo where b = $1;
PREPARE

Btw: You can check for the currently available prepared statements in your session by querying the pg_prepared_statements catalog view:

pgbench=# select * from pg_prepared_statements;
  name   |                      statement                      |         prepare_time          | parameter_types | from_sql 
---------+-----------------------------------------------------+-------------------------------+-----------------+----------
 my_stmt | prepare my_stmt as select * from demo where b = $1; | 2019-02-05 13:15:39.232521+01 | {text}          | t

When we explain (analyze) that statement, what do we see?

pgbench=# explain (analyze) execute my_stmt ( 'aaa' );
                                            QUERY PLAN                                            
--------------------------------------------------------------------------------------------------
 Seq Scan on demo  (cost=0.00..5.75 rows=100 width=8) (actual time=0.111..0.230 rows=100 loops=1)
   Filter: (b = 'aaa'::text)
   Rows Removed by Filter: 200
 Planning time: 0.344 ms
 Execution time: 0.285 ms
(5 rows)

In the “Filter” line of the execution plan we can see the real value (‘aaa’) we passed to our prepared statement. When you see that, it is a so-called custom plan. When PostgreSQL goes for a custom plan, it means the statement is re-planned for the provided set of parameters. When you execute it a few more times:

pgbench=# explain (analyze) execute my_stmt ( 'aaa' );
                                            QUERY PLAN                                            
--------------------------------------------------------------------------------------------------
 Seq Scan on demo  (cost=0.00..5.75 rows=100 width=8) (actual time=0.045..0.158 rows=100 loops=1)
   Filter: (b = 'aaa'::text)
   Rows Removed by Filter: 200
 Planning time: 0.243 ms
 Execution time: 0.225 ms
(5 rows)

pgbench=# explain (analyze) execute my_stmt ( 'aaa' );
                                            QUERY PLAN                                            
--------------------------------------------------------------------------------------------------
 Seq Scan on demo  (cost=0.00..5.75 rows=100 width=8) (actual time=0.035..0.123 rows=100 loops=1)
   Filter: (b = 'aaa'::text)
   Rows Removed by Filter: 200
 Planning time: 0.416 ms
 Execution time: 0.173 ms
(5 rows)

pgbench=# explain (analyze) execute my_stmt ( 'aaa' );
                                            QUERY PLAN                                            
--------------------------------------------------------------------------------------------------
 Seq Scan on demo  (cost=0.00..5.75 rows=100 width=8) (actual time=0.036..0.124 rows=100 loops=1)
   Filter: (b = 'aaa'::text)
   Rows Removed by Filter: 200
 Planning time: 0.195 ms
 Execution time: 0.178 ms
(5 rows)

pgbench=# explain (analyze) execute my_stmt ( 'aaa' );
                                            QUERY PLAN                                            
--------------------------------------------------------------------------------------------------
 Seq Scan on demo  (cost=0.00..5.75 rows=100 width=8) (actual time=0.035..0.126 rows=100 loops=1)
   Filter: (b = 'aaa'::text)
   Rows Removed by Filter: 200
 Planning time: 0.192 ms
 Execution time: 0.224 ms
(5 rows)

pgbench=# explain (analyze) execute my_stmt ( 'aaa' );
                                            QUERY PLAN                                            
--------------------------------------------------------------------------------------------------
 Seq Scan on demo  (cost=0.00..5.75 rows=100 width=8) (actual time=0.038..0.130 rows=100 loops=1)
   Filter: (b = $1)
   Rows Removed by Filter: 200
 Planning time: 0.191 ms
 Execution time: 0.183 ms
(5 rows)

… you will see that the “Filter” line changes from displaying the actual parameter to a placeholder. Now we have a generic plan. This generic plan will no longer change for the lifetime of the prepared statement, no matter which value you pass in:

pgbench=# explain (analyze) execute my_stmt ( 'bbb' );
                                            QUERY PLAN                                            
--------------------------------------------------------------------------------------------------
 Seq Scan on demo  (cost=0.00..5.75 rows=100 width=8) (actual time=0.096..0.219 rows=100 loops=1)
   Filter: (b = $1)
   Rows Removed by Filter: 200
 Planning time: 0.275 ms
 Execution time: 0.352 ms
(5 rows)

pgbench=# explain (analyze) execute my_stmt ( 'ccc' );
                                            QUERY PLAN                                            
--------------------------------------------------------------------------------------------------
 Seq Scan on demo  (cost=0.00..5.75 rows=100 width=8) (actual time=0.090..0.132 rows=100 loops=1)
   Filter: (b = $1)
   Rows Removed by Filter: 200
 Planning time: 0.084 ms
 Execution time: 0.204 ms
(5 rows)

pgbench=# explain (analyze) execute my_stmt ( null );
                                           QUERY PLAN                                           
------------------------------------------------------------------------------------------------
 Seq Scan on demo  (cost=0.00..5.75 rows=100 width=8) (actual time=0.033..0.033 rows=0 loops=1)
   Filter: (b = $1)
   Rows Removed by Filter: 300
 Planning time: 0.018 ms
 Execution time: 0.051 ms
(5 rows)

When you take a look at the source code (src/backend/utils/cache/plancache.c) you will see why it changes after 5 executions:

/*
 * choose_custom_plan: choose whether to use custom or generic plan
 *
 * This defines the policy followed by GetCachedPlan.
 */
static bool
choose_custom_plan(CachedPlanSource *plansource, ParamListInfo boundParams)
{
        double          avg_custom_cost;

        /* One-shot plans will always be considered custom */
        if (plansource->is_oneshot)
                return true;

        /* Otherwise, never any point in a custom plan if there's no parameters */
        if (boundParams == NULL)
                return false;
        /* ... nor for transaction control statements */
        if (IsTransactionStmtPlan(plansource))
                return false;

        /* See if caller wants to force the decision */
        if (plansource->cursor_options & CURSOR_OPT_GENERIC_PLAN)
                return false;
        if (plansource->cursor_options & CURSOR_OPT_CUSTOM_PLAN)
                return true;

        /* Generate custom plans until we have done at least 5 (arbitrary) */
        if (plansource->num_custom_plans < 5)
                return true;

Even if we change the data and analyze the table again, we will still get a generic plan once PostgreSQL has gone for it:

pgbench=# insert into demo select i, 'ddd' from generate_series (201,210) i;
INSERT 0 10
pgbench=# insert into demo select i, 'ee' from generate_series (211,211) i;
INSERT 0 1
pgbench=# analyze demo;
ANALYZE
pgbench=# select b,count(*) from demo group by b order by b;
  b  | count 
-----+-------
 aaa |   100
 bbb |   100
 ccc |   100
 ddd |    10
 ee  |     1
(5 rows)

pgbench=# explain (analyze) execute my_stmt ( 'ddd' );
                                           QUERY PLAN                                           
------------------------------------------------------------------------------------------------
 Seq Scan on demo  (cost=0.00..5.88 rows=78 width=8) (actual time=0.147..0.151 rows=10 loops=1)
   Filter: (b = $1)
   Rows Removed by Filter: 300
 Planning time: 0.293 ms
 Execution time: 0.190 ms
(5 rows)

The situation changes when we have much more data, the data is not uniformly distributed, and we have an index on the column “b”:

pgbench=# truncate demo;
TRUNCATE TABLE
pgbench=# insert into demo select i, 'aaa' from generate_series (1,1000000) i;
INSERT 0 1000000
pgbench=# insert into demo select i, 'bbb' from generate_series (1000001,2000000) i;
INSERT 0 1000000
pgbench=# insert into demo select i, 'ccc' from generate_series (2000001,3000000) i;
INSERT 0 1000000
pgbench=# insert into demo select i, 'eee' from generate_series (3000001,3000010) i;
INSERT 0 10
pgbench=# create index i1 on demo (b);
CREATE INDEX
pgbench=# select b,count(*) from demo group by b order by b;
  b  |  count  
-----+---------
 aaa | 1000000
 bbb | 1000000
 ccc | 1000000
 eee |      10
(4 rows)

pgbench=# prepare my_stmt as select * from demo where b = $1;
PREPARE

No matter how often we execute the following statement (which asks for ‘eee’), we never get a generic plan:

pgbench=# explain (analyze) execute my_stmt ('eee');
                                                QUERY PLAN                                                
----------------------------------------------------------------------------------------------------------
 Index Scan using i1 on demo  (cost=0.43..4.45 rows=1 width=8) (actual time=0.054..0.061 rows=10 loops=1)
   Index Cond: (b = 'eee'::text)
 Planning time: 0.249 ms
 Execution time: 0.106 ms
(4 rows)

-----> REPEAT THAT HOW OFTEN YOU WANT BUT AT LEAST 10 TIMES

pgbench=# explain (analyze) execute my_stmt ('eee');
                                                QUERY PLAN                                                
----------------------------------------------------------------------------------------------------------
 Index Scan using i1 on demo  (cost=0.43..4.45 rows=1 width=8) (actual time=0.054..0.061 rows=10 loops=1)
   Index Cond: (b = 'eee'::text)
 Planning time: 0.249 ms
 Execution time: 0.106 ms

This is because the custom plan (which includes the costs for re-planning) is always cheaper than the generic plan (which does not include the costs for re-planning) when we have a data distribution like that. The documentation is very clear about that: “Using EXECUTE values which are rare in columns with many duplicates can generate custom plans that are so much cheaper than the generic plan, even after adding planning overhead, that the generic plan might never be used”.
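If you need to get back to custom plans once PostgreSQL has settled on a generic plan, one simple option is to drop and re-create the prepared statement, which resets the counter of custom plans (a minimal sketch; note that newer PostgreSQL versions, 12 onwards, additionally provide the plan_cache_mode parameter to force custom or generic plans):

pgbench=# deallocate my_stmt;
DEALLOCATE
pgbench=# prepare my_stmt as select * from demo where b = $1;
PREPARE

The next five executions will then again be planned as custom plans.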

Hope that helps.

The article What are custom and generic plans in PostgreSQL? first appeared on Blog dbi services.

Utilities Can Now Manage Diverse, Distributed Energy Resources with Oracle

Oracle Press Releases - Tue, 2019-02-05 07:00
Press Release
Utilities Can Now Manage Diverse, Distributed Energy Resources with Oracle Oracle Network Management, featuring built-in AI, continuously optimizes performance for a more resilient, efficient grid

DISTRIBUTECH, New Orleans, LA.—Feb 5, 2019

Energy distribution is no longer a linear equation. As new energy and data sources continue to grow, utilities must take a more holistic approach to better understand and manage resources and engage increasingly active customers at the edge of the grid. The expanded Oracle Utilities Network Management System (NMS) addresses this market need with a new Distributed Energy Resource Management (DERM) module that enables utilities to monitor situations in real-time and proactively optimize their broader network in concert with this explosion of emerging energy resources, including solar, wind, electric vehicles and more.

By eliminating data and application silos and giving operators real-time visibility and control across all grid and pipeline assets in a single platform, operators can leverage the most comprehensive, real-time model to ensure a more resilient and efficient grid. For example, applying built-in artificial intelligence (AI) and machine learning to a growing library of data, including Advanced Metering Infrastructure (AMI), weather forecasts, SCADA and IoT device interaction, NMS enables grid operators to reliably predict future storms and alter supply and demand. Coupled with enhanced mobile tools, field crews can quickly garner the insights they need to limit the impact of outages for customers and speed restoration efforts.

“Utilities today are facing a perfect storm of evolving distributed energy resources across their networks, a barrage of data, and ever-savvy customers who are looking to their utility to be a trusted advisor in this changing energy journey,” said Dan Byrnes, SVP of product development, Oracle Utilities. “As such, resources can no longer be managed in isolation. With the latest innovations in NMS, utilities can improve long-term business planning around infrastructure investments and maintenance schedules, launch new revenue-generating customer-focused energy services, and proactively optimize operational performance.”

End-to-End Visibility and Control Across Resources

With the new enhancements to Oracle Utilities Network Management System, utilities reap the benefits of:

  • Automated Grid Optimization: Offers complete visibility and control from distribution to the connected customer including distributed energy resource management, D-SCADA functionality, and IoT device interactions. This allows utilities to balance supply and demand all in one place, in both real-time and continuously looking ahead to proactively mitigate grid constraints before they happen.
  • Deep Insights for Automated Decision Making: By ingesting data from a variety of sources and harnessing the power of AI and analytics, utilities can understand past weather patterns and predict future outages for automated decision making.
  • Extended Field Crew Access: Leveraging existing mobile device capabilities, field crews can now pull up outage maps, trend data and performance analysis, even in offline mode for faster, more efficient outage resolution.
  • Improved User Experience: Intuitive order-based workflows, simplified tab-based windows, and the ability to have multiple events open simultaneously deliver an enriched user experience with no compromise to safety, reliability and performance.

Oracle Utilities NMS combines market-leading outage management technology with distribution management and advanced analytics to empower utilities to maximize grid operation capabilities, shorten outage duration and optimize distribution, all while keeping their customers engaged and informed. To learn more, see the new Oracle NMS system in action today at DistribuTECH booth #10315 or visit here.

Contact Info
Kristin Reeves
Oracle
+1.925.787.6744
kris.reeves@oracle.com
Molly Hardy-Knowles
Hill+Knowlton Strategies
+1.713.752.1931
molly.hardy-knowles@hkstrategies.com
About Oracle

The Oracle Cloud offers complete SaaS application suites for ERP, HCM and CX, plus best-in-class database Platform as a Service (PaaS) and Infrastructure as a Service (IaaS) from data centers throughout the Americas, Europe and Asia. For more information about Oracle (NYSE:ORCL), please visit us at www.oracle.com.

Trademarks

Oracle and Java are registered trademarks of Oracle and/or its affiliates. Other names may be trademarks of their respective owners.

Talk to a Press Contact

Kristin Reeves

  • +1.925.787.6744

Molly Hardy-Knowles

  • +1.713.752.1931

Oracle Linux hangs after “probing EDD” in Oracle Cloud

XTended Oracle SQL - Tue, 2019-02-05 03:38

Just a short note: if your imported Oracle Linux image hangs on boot in the Oracle Cloud, set GRUB_DISABLE_UUID="true" in /etc/default/grub.

Categories: Development

Unique Indexes Force Hints To Be “Ignored” Part I (What’s Really Happening)

Richard Foote - Tue, 2019-02-05 03:15
As I was compiling my new “Oracle Diagnostics and Performance Tuning” seminar, I realised there were quite a number of indexing performance issues I haven’t discussed here previously. The following is a classic example of what difference a Unique Index can have over a Non-Unique index, while also covering the classic myth that Oracle sometimes […]
Categories: DBA Blogs

JDeveloper 12c IDE Performance Boost

Andrejus Baranovski - Tue, 2019-02-05 02:14
There is a way to optimize JDeveloper 12c IDE performance by disabling some of the features you are not using.

I was positively surprised by the improved JDeveloper responsiveness after turning off some of the features. The ADF BC, Task Flow, and ADF Faces wizards started to respond noticeably faster. A simple change and a big performance gain, awesome.

One of the strongest JDeveloper performance improvements comes from disabling the TopLink feature. Ironically, TopLink is an abandoned product (12.1.3 was the last release). I remember back in 2006 TopLink was very promising and almost became the default platform for the ADF Model. One of my old blog posts related to TopLink: External Transaction Service in Oracle TopLink. But luckily it was overshadowed by ADF BC.

These are the features I disabled in my JDeveloper to get performance gain:

Documentum – MigrationUtil – 2 – Change Docbase Name

Yann Neuhaus - Tue, 2019-02-05 02:01

This is the second episode of the MigrationUtil series; today we will change the Docbase Name. If you missed the first one, you can find it here. I did this change on Documentum CS 16.4 with an Oracle database, on the same docbase I already used to change the docbase ID.
My goal is to do both changes on the same docbase because that’s what I will need in the future.

So, we will be interested in the docbase RepoTemplate, changing its name to repository1.

1. Migration preparation

I will not give an overview of the MigrationUtil here, as I already did in the previous blog.
1.a Update the config.xml file
Below is the updated version of the config.xml file to change the Docbase Name:

[dmadmin@vmtestdctm01 ~]$ cat $DOCUMENTUM/product/16.4/install/external_apps/MigrationUtil/config.xml
<?xml version="1.0" encoding="UTF-8"?>
<!DOCTYPE properties SYSTEM "http://java.sun.com/dtd/properties.dtd">
<properties>
<comment>Database connection details</comment>
<entry key="dbms">oracle</entry> <!-- This would be either sqlserver, oracle, db2 or postgres -->
<entry key="tgt_database_server">vmtestdctm01</entry> <!-- Database Server host or IP -->
<entry key="port_number">1521</entry> <!-- Database port number -->
<entry key="InstallOwnerPassword">install164</entry>
<entry key="isRCS">no</entry>    <!-- set it to yes, when running the utility on secondary CS -->

<!-- <comment>List of docbases in the machine</comment> -->
<entry key="DocbaseName.1">RepoTemplate</entry>

<!-- <comment>docbase owner password</comment> -->
<entry key="DocbasePassword.1">install164</entry>
...
<entry key="ChangeDocbaseName">yes</entry>
<entry key="NewDocbaseName.1">repository1</entry>
...

Set all other entries to no.
The tool will use the information above and load more from the server.ini file.

2. Before the migration (optional)

– Get docbase map from the docbroker:

[dmadmin@vmtestdctm01 ~]$ dmqdocbroker -t vmtestdctm01 -c getdocbasemap
dmqdocbroker: A DocBroker Query Tool
dmqdocbroker: Documentum Client Library Version: 16.4.0000.0185
Targeting port 1489
**************************************************
**     D O C B R O K E R    I N F O             **
**************************************************
Docbroker host            : vmtestdctm01
Docbroker port            : 1490
Docbroker network address : INET_ADDR: 02 5d2 c0a87a01 vmtestdctm01 192.168.122.1
Docbroker version         : 16.4.0000.0248  Linux64
**************************************************
**     D O C B A S E   I N F O                  **
**************************************************
--------------------------------------------
Docbase name        : RepoTemplate
Docbase id          : 1000600
Docbase description : Template Repository
Govern docbase      : 
Federation name     : 
Server version      : 16.4.0000.0248  Linux64.Oracle
Docbase Roles       : Global Registry
...

– Create a document in the docbase:
Create an empty file:

touch /home/dmadmin/DCTMChangeDocbaseExample.docx

Create the document in the repository using idql:

create dm_document object
SET title = 'DCTM Change Docbase Document Example',
SET subject = 'DCTM Change Docbase Document Example',
set object_name = 'DCTMChangeDocbaseExample.docx',
SETFILE '/home/dmadmin/DCTMChangeDocbaseExample.docx' with CONTENT_FORMAT= 'msw12';

Result:

object_created  
----------------
090f449880001125
(1 row affected)

Note the r_object_id.

3. Execute the migration

3.a Stop the Docbase and the Docbroker

$DOCUMENTUM/dba/dm_shutdown_RepoTemplate
$DOCUMENTUM/dba/dm_stop_DocBroker

3.b Update the database name in the server.ini file
This is a workaround to avoid the error below:

Database Details:
Database Vendor:oracle
Database Name:DCTMDB
Databse User:RepoTemplate
Database URL:jdbc:oracle:thin:@vmtestdctm01:1521/DCTMDB
ERROR...Listener refused the connection with the following error:
ORA-12514, TNS:listener does not currently know of service requested in connect descriptor

In fact, the tool treats the database name as a database service name and puts “/” in the URL instead of “:”. The best workaround I found is to update the database_conn value in the server.ini file and put the service name instead of the database name.
Check the tnsnames.ora and note the service name; in my case it is dctmdb.local.

[dmadmin@vmtestdctm01 ~]$ cat $ORACLE_HOME/network/admin/tnsnames.ora 
DCTMDB =
  (DESCRIPTION =
    (ADDRESS = (PROTOCOL = TCP)(HOST = vmtestdctm01)(PORT = 1521))
    (CONNECT_DATA =
      (SERVER = DEDICATED)
      (SERVICE_NAME = dctmdb.local)
    )
  )

Make the change in the server.ini file:

[dmadmin@vmtestdctm01 ~]$ vi $DOCUMENTUM/dba/config/RepoTemplate/server.ini
...
[SERVER_STARTUP]
docbase_id = 1000600
docbase_name = RepoTemplate
server_config_name = RepoTemplate
database_conn = dctmdb.local
database_owner = RepoTemplate
database_password_file = /app/dctm/product/16.4/dba/config/RepoTemplate/dbpasswd.txt
service = RepoTemplate
root_secure_validator = /app/dctm/product/16.4/dba/dm_check_password
install_owner = dmadmin
...

Don’t worry, we will roll back this change before starting the docbase ;)

3.c Execute the MigrationUtil script

[dmadmin@vmtestdctm01 ~]$ $DM_HOME/install/external_apps/MigrationUtil/MigrationUtil.sh

Welcome... Migration Utility invoked.
 
Skipping Docbase ID Changes...
Skipping Host Name Change...
Skipping Install Owner Change...
Skipping Server Name Change...

Changing Docbase Name...
Created new log File: /app/dctm/product/16.4/product/16.4/install/external_apps/MigrationUtil/MigrationUtilLogs/DocbaseNameChange.log
Finished changing Docbase Name...

Skipping Docker Seamless Upgrade scenario...
Migration Utility completed.

No error was encountered here, but that doesn’t mean everything is OK… Please check the log file:

[dmadmin@vmtestdctm01 ~]$ cat /app/dctm/product/16.4/product/16.4/install/external_apps/MigrationUtil/MigrationUtilLogs/DocbaseNameChange.log
Start: 2019-02-01 19:32:10.631
Changing Docbase Name
=====================

DocbaseName: RepoTemplate
New DocbaseName: repository1
Retrieving server.ini path for docbase: RepoTemplate
Found path: /app/dctm/product/16.4/dba/config/RepoTemplate/server.ini

Database Details:
Database Vendor:oracle
Database Name:dctmdb.local
Databse User:RepoTemplate
Database URL:jdbc:oracle:thin:@vmtestdctm01:1521/dctmdb.local
Successfully connected to database....

Processing Database Changes...
Created database backup File '/app/dctm/product/16.4/product/16.4/install/external_apps/MigrationUtil/MigrationUtilLogs/DocbaseNameChange_DatabaseRestore.sql'
select r_object_id,object_name from dm_sysobject_s where r_object_type = 'dm_docbase_config' and object_name = 'RepoTemplate'
update dm_sysobject_s set object_name = 'repository1' where r_object_id = '3c0f449880000103'
select r_object_id,docbase_name from dm_docbaseid_map_s where docbase_name = 'RepoTemplate'
update dm_docbaseid_map_s set docbase_name = 'repository1' where r_object_id = '440f449880000100'
select r_object_id,file_system_path from dm_location_s where file_system_path like '%RepoTemplate%'
update dm_location_s set file_system_path = '/app/dctm/product/16.4/data/repository1/content_storage_01' where r_object_id = '3a0f44988000013f'
...
update dm_job_s set target_server = 'repository1.RepoTemplate@vmtestdctm01' where r_object_id = '080f4498800003e0'
...
select i_stamp from dmi_vstamp_s where i_application = 'dmi_dd_attr_info'
...
Successfully updated database values...
...
Backed up '/app/dctm/product/16.4/dba/dm_start_RepoTemplate' to '/app/dctm/product/16.4/product/16.4/install/external_apps/MigrationUtil/MigrationUtilLogs/dm_start_RepoTemplate_docbase_RepoTemplate.backup'
Updated dm_startup script.
Renamed '/app/dctm/product/16.4/dba/dm_start_RepoTemplate' to '/app/dctm/product/16.4/dba/dm_start_repository1'
Backed up '/app/dctm/product/16.4/dba/dm_shutdown_RepoTemplate' to '/app/dctm/product/16.4/product/16.4/install/external_apps/MigrationUtil/MigrationUtilLogs/dm_shutdown_RepoTemplate_docbase_RepoTemplate.backup'
Updated dm_shutdown script.
Renamed '/app/dctm/product/16.4/dba/dm_shutdown_RepoTemplate' to '/app/dctm/product/16.4/dba/dm_shutdown_repository1'
WARNING...File /app/dctm/product/16.4/dba/config/RepoTemplate/rkm_config.ini doesn't exist. RKM is not configured
Finished processing File changes...

Processing Directory Changes...
Renamed '/app/dctm/product/16.4/data/RepoTemplate' to '/app/dctm/product/16.4/data/repository1'
Renamed '/app/dctm/product/16.4/dba/config/RepoTemplate' to '/app/dctm/product/16.4/dba/config/repository1'
Renamed '/app/dctm/product/16.4/dba/auth/RepoTemplate' to '/app/dctm/product/16.4/dba/auth/repository1'
Renamed '/app/dctm/product/16.4/share/temp/replicate/RepoTemplate' to '/app/dctm/product/16.4/share/temp/replicate/repository1'
Renamed '/app/dctm/product/16.4/share/temp/ldif/RepoTemplate' to '/app/dctm/product/16.4/share/temp/ldif/repository1'
Renamed '/app/dctm/product/16.4/server_uninstall/delete_db/RepoTemplate' to '/app/dctm/product/16.4/server_uninstall/delete_db/repository1'
Finished processing Directory Changes...
...
Processing Services File Changes...
Backed up '/etc/services' to '/app/dctm/product/16.4/product/16.4/install/external_apps/MigrationUtil/MigrationUtilLogs/services_docbase_RepoTemplate.backup'
ERROR...Couldn't update file: /etc/services (Permission denied)
ERROR...Please update services file '/etc/services' manually with root account
Finished changing docbase name 'RepoTemplate'

Finished changing docbase name....
End: 2019-02-01 19:32:23.791

This error is justified… Let’s update the services file manually.

3.d Change the service
As root, change the service name:

[root@vmtestdctm01 ~]$ vi /etc/services
...
repository1				49402/tcp               # DCTM repository native connection
repository1_s       	49403/tcp               # DCTM repository secure connection

3.e Change back the Database name in the server.ini file

[dmadmin@vmtestdctm01 ~]$ vi $DOCUMENTUM/dba/config/repository1/server.ini
...
[SERVER_STARTUP]
docbase_id = 1000600
docbase_name = repository1
server_config_name = RepoTemplate
database_conn = DCTMDB
...

3.f Start the Docbroker and the Docbase

$DOCUMENTUM/dba/dm_launch_DocBroker
$DOCUMENTUM/dba/dm_start_repository1

3.g Check the docbase log

[dmadmin@vmtestdctm01 ~]$ tail -5 $DOCUMENTUM/dba/log/RepoTemplate.log
...
2019-02-01T19:43:15.677455	16563[16563]	0000000000000000	[DM_WORKFLOW_I_AGENT_START]info:  "Workflow agent master (pid : 16594, session 010f449880000007) is started sucessfully."
IsProcessAlive: Process ID 0 is not > 0
2019-02-01T19:43:15.677967	16563[16563]	0000000000000000	[DM_WORKFLOW_I_AGENT_START]info:  "Workflow agent worker (pid : 16595, session 010f44988000000a) is started sucessfully."
IsProcessAlive: Process ID 0 is not > 0
2019-02-01T19:43:16.680391	16563[16563]	0000000000000000	[DM_WORKFLOW_I_AGENT_START]info:  "Workflow agent worker (pid : 16606, session 010f44988000000b) is started sucessfully." 

You may be saying that the log name is still RepoTemplate.log ;) Yes! That’s because, in my case, the docbase name and the server name were the same before I changed the docbase name:

[dmadmin@vmtestdctm01 ~]$ vi $DOCUMENTUM/dba/config/repository1/server.ini
[SERVER_STARTUP]
docbase_id = 1000600
docbase_name = repository1
server_config_name = RepoTemplate
database_conn = DCTMDB
database_owner = RepoTemplate
database_password_file = /app/dctm/product/16.4/dba/config/repository1/dbpasswd.txt
service = repository1
root_secure_validator = /app/dctm/product/16.4/dba/dm_check_password
install_owner = dmadmin

Be patient, in the next episode we will see how we can change the server name :)

4. After the migration (optional)

Get docbase map from the docbroker:

[dmadmin@vmtestdctm01 ~]$ dmqdocbroker -t vmtestdctm01 -c getdocbasemap
dmqdocbroker: A DocBroker Query Tool
dmqdocbroker: Documentum Client Library Version: 16.4.0000.0185
Targeting port 1489
**************************************************
**     D O C B R O K E R    I N F O             **
**************************************************
Docbroker host            : vmtestdctm01
Docbroker port            : 1490
Docbroker network address : INET_ADDR: 02 5d2 c0a87a01 vmtestdctm01 192.168.122.1
Docbroker version         : 16.4.0000.0248  Linux64
**************************************************
**     D O C B A S E   I N F O                  **
**************************************************
--------------------------------------------
Docbase name        : repository1
Docbase id          : 1000600
Docbase description : Template Repository
Govern docbase      : 
Federation name     : 
Server version      : 16.4.0000.0248  Linux64.Oracle
Docbase Roles       : Global Registry
...

It’s not very nice to keep the old description of the docbase… Use the idql request below to change it:

Update dm_docbase_config object set title='Renamed Repository' where object_name='repository1';

Check after change:

[dmadmin@vmtestdctm01 ~]$ dmqdocbroker -t vmtestdctm01 -c getdocbasemap
...
Docbase name        : repository1
Docbase id          : 1000600
Docbase description : Renamed Repository
...

Check the document created before the migration:
r_object_id : 090f449880001125

API> dump,c,090f449880001125
...
USER ATTRIBUTES

  object_name                     : DCTMChangeDocbaseExample.docx
  title                           : DCTM Change Docbase Document Example
  subject                         : DCTM Change Docbase Document Example
...
5. Conclusion

Well, the tool works, but as you saw, we needed a workaround to make the change. That’s not great; I hope it will be fixed in future versions.
In the next episode I will change the server config name. See you there ;)

The article Documentum – MigrationUtil – 2 – Change Docbase Name first appeared on Blog dbi services.

In-transit encryption for boot and block volumes - New Feature - Jan 2019 - Oracle Cloud Infrastructure

Senthil Rajendran - Tue, 2019-02-05 01:15
New Feature : In-transit encryption for boot and block volumes
Services : Block Volume
Release Month : Jan 2019

Data is often considered less secure when in movement. It could be moving between two servers, two data centers, two services, between cloud and on-premises, or between two cloud providers. Wherever data is moving, data protection methods should be implemented for critical in-transit data. While organizations care more about data at rest, protecting sensitive data in transit should also be given high importance, as attackers keep finding new methods to steal data.

Encryption is the best way to protect data in transit. This is done by encrypting the data before sending it, authenticating the endpoints, and decrypting it once it is received.

The OCI Block Volume service encrypts all block volumes, and their backups, at rest using the Advanced Encryption Standard (AES) algorithm with 256-bit encryption. Data moving between the instance and the block volume is transferred over an internal, highly secure network. With this feature announcement, that transfer can now also be encrypted for paravirtualized volume attachments on virtual machines.

Optionally, you can use encryption keys managed by the Key Management service for volume encryption. If no such key is specified, an Oracle-provided encryption key is used instead; this applies to both data at rest and data in transit.

As shown above, when you specify a key for the block volume at creation time, the same key will be used for in-transit encryption as well.


[FREE Live Masterclass] Cloud Security Using Oracle IDCS: Career Path & What to Learn

Online Apps DBA - Tue, 2019-02-05 00:00

[FREE Live Masterclass] Oracle Identity Cloud Service allows both on-premises and cloud resources to be secured from a single set of controls… Sounds interesting? Then get ready and plan your journey towards Oracle Identity Cloud Service and become an IDCS expert! Visit: https://k21academy.com/idcs02 & engage yourself in our Free Masterclass on Cloud Security Using […]

The post [FREE Live Masterclass] Cloud Security Using Oracle IDCS: Career Path & What to Learn appeared first on Oracle Trainings for Apps & Fusion DBA.

Categories: APPS Blogs

Python cx_Oracle 7.1's Connection Fix-up Callback Improves Application Scalability

Christopher Jones - Mon, 2019-02-04 18:09

cx_Oracle logo

cx_Oracle 7.1, the extremely popular Oracle Database interface for Python, is now Production on PyPI.

Another great release of cx_Oracle is available from PyPI, this time with a focus on session pooling. There were also a number of incremental improvements and fixes, all detailed in the release notes.

Session Pooling

When applications use a lot of connections for short periods, Oracle recommends using a session pool for efficiency. The session pool is a pool of connections to Oracle Database. (For all practical purposes, a 'session' is the same as a 'connection'). Many applications set some kind of state in connections (e.g. using ALTER SESSION commands to set the date format, or a time zone) before executing the 'real' application SQL. Pooled connections will retain this state after they have been released back to the pool with conn.close() or pool.release(), and the next user of the connection will see the same state. However, because the number of connections in a pool can vary over time, or connections in the pool can be recreated, there is no guarantee a subsequent pool.acquire() call will return a database connection that has any particular state. In previous versions of cx_Oracle, any ALTER SESSION commands had to be run after each and every pool.acquire() call. This added load and reduced system efficiency.

In cx_Oracle 7.1, a new cx_Oracle.SessionPool() option 'sessionCallback' reduces configuration overhead, as featured in the three scenarios shown below. Further details on session callbacks can be found in my post about the equivalent feature set in node-oracledb.

Scenario 1: All Connections Should Have the Same State

When all connections in a pool should have exactly the same state, you can set sessionCallback to a Python function:

def InitSession(conn, requestedTag):
    cursor = conn.cursor()
    cursor.execute("alter session ....")

pool = cx_Oracle.SessionPool(un, pw, connectstring,
                             sessionCallback=InitSession, threaded=True)
. . .

The function InitSession will be called whenever a pool.acquire() call selects a newly created database connection in the pool that has not been used before. It will not be called if the connection in the pool was previously used by the application. It is called before pool.acquire() returns. The big advantage is that it saves the cost of altering session state if a previous user of the connection has already set it. Also the current caller of pool.acquire() can always assume the correct state is set.

If you need to execute more than one SQL statement in the callback, use a PL/SQL block to reduce round-trips between Python and the database:

def InitSession(conn, requestedTag):
    cursor = conn.cursor()
    cursor.execute("""
        begin
            execute immediate
                'alter session set nls_date_format = ''YYYY-MM-DD''
                                   nls_language = AMERICAN';
            -- other SQL statements could be put here
        end;""")

The requestedTag parameter is shown in the next section.

Scenario 2: Connections Need Different State

When callers of pool.acquire() need different session states, for example if they need different time zones, then session tagging can be used in conjunction with sessionCallback. See SessionCallback.py for a runnable example.

A tag is a semi-arbitrary string that you assign to connections before you release them to the pool. Typically a tag represents the session state you have set in the connection. Note that when cx_Oracle is using Oracle Client 12.2 (or later) libraries then tags are multi-property and must be in the form of one or more "name=value" pairs, separated by a semi-colon. You can choose the property names and values.

Subsequent pool.acquire() calls may request a connection be returned that has a particular tag already set, for example:

conn = pool.acquire(tag="NLS_DATE_FORMAT=SIMPLE")

This will do one of:

  • Select an existing connection in the pool that has the requested tag. In this case, the sessionCallback function is NOT called.

  • Select a new, previously unused connection in the pool (which will have no tag) and call the sessionCallback function.

  • Select a previously used connection with a different tag. The existing session state and tag are cleared, and the sessionCallback function is called.

An optional matchanytag parameter can be used:

conn = pool.acquire(tag="TIME_ZONE=MST", matchanytag=True)

In this case, a connection that has a different tag may be selected from the pool (if a match can't be found) and the sessionCallback function will be invoked.

When the callback is executed, it can compare the requested tag with the tag that the connection currently has. It can then set the desired connection state and update the connection's tag to represent that state. The connection is then returned to the application by the pool.acquire() call:

def InitSession(conn, requestedTag):

    # Display the requested and actual tags
    print("InitSession(): requested tag=%r, actual tag=%r" % (requestedTag, conn.tag))

    # Compare the requested and actual tags and set some state . . .
    cursor = conn.cursor()
    cursor.execute("alter session ....")

    # Assign the requested tag to the connection so that when the connection
    # is closed, it will automatically be retagged
    conn.tag = requestedTag

The sessionCallback function is always called before pool.acquire() returns.

The underlying Oracle Session Pool tries to optimally select a connection from the pool. Overall, a pool.acquire() call will try to return a connection which has the requested tag string or tag properties, therefore avoiding invoking the sessionCallback function.

Scenario 3: Using Database Resident Connection Pooling (DRCP)

When using Oracle client libraries 12.2 (or later) the sessionCallback can alternatively be a PL/SQL procedure. Instead of setting sessionCallback to a Python function, set it to a string containing the name of a PL/SQL procedure, for example:

pool = cx_Oracle.SessionPool(un, pw, connectstring,
                             sessionCallback="myPlsqlCallback", threaded=True)

The procedure has the declaration:

PROCEDURE myPlsqlCallback (
    requestedTag IN VARCHAR2,
    actualTag    IN VARCHAR2
);

For an example PL/SQL callback, see SessionCallbackPLSQL.py.

The PL/SQL procedure is called only when the properties in the requested connection tag do not match the properties in the actual tag of the connection that was selected from the pool. The callback can then change the state before pool.acquire() returns to the application.
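For illustration, here is a minimal sketch of what the body of such a procedure could look like (the session setting below is an assumption for the example, not the code of the SessionCallbackPLSQL.py sample):

CREATE OR REPLACE PROCEDURE myPlsqlCallback (
    requestedTag IN VARCHAR2,
    actualTag    IN VARCHAR2
) AS
BEGIN
    -- Compare the requested and actual tags here and set only the state
    -- that differs; this sketch unconditionally sets a date format.
    EXECUTE IMMEDIATE
        'alter session set nls_date_format = ''YYYY-MM-DD''';
END;
/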

When DRCP connections are being used, invoking the PL/SQL callback procedure does not need round-trips between Python and the database. In comparison, a complex (or badly coded) Python callback function could require lots of round-trips, depending on how many ALTER SESSION or other SQL statements it executes.

A PL/SQL callback can also be used without DRCP; in this case invoking the callback requires just one round-trip.

Summary

cx_Oracle 7.1 is a solid release which should particularly please session pool users.

cx_Oracle References

Home page: oracle.github.io/python-cx_Oracle/index.html

Installation instructions: cx-oracle.readthedocs.io/en/latest/installation.html

Documentation: cx-oracle.readthedocs.io/en/latest/index.html

Release Notes: cx-oracle.readthedocs.io/en/latest/releasenotes.html

Source Code Repository: github.com/oracle/python-cx_Oracle

Facebook group: https://www.facebook.com/groups/418337538611212/

Questions: github.com/oracle/python-cx_Oracle/issues

Microsoft Azure: Regions and Availability Zones

Dietrich Schroff - Mon, 2019-02-04 13:07
Before running a VM or any other service inside Microsoft Azure, let's take a look at where the servers can be placed:
This looks much the same as the regions at AWS.
Here is a closer view of Europe:

Each region has several availability zones (the same wording as AWS):
The definitions given by Microsoft Azure:

Regions
A region is a set of datacenters deployed within a latency-defined perimeter and connected through a dedicated regional low-latency network.
With more global regions than any other cloud provider, Azure gives customers the flexibility to deploy applications where they need to. Azure is generally available in 42 regions around the world, with plans announced for 12 additional regions.

Availability Zones
Availability Zones are physically separate locations within an Azure region. Each Availability Zone is made up of one or more datacenters equipped with independent power, cooling, and networking.
Availability Zones allow customers to run mission-critical applications with high availability and low-latency replication.

EBS 12.2.8 VM Virtual Appliance Now Available

Steven Chan - Mon, 2019-02-04 10:50

We are pleased to announce that the Oracle E-Business Suite Release 12.2.8 VM Virtual Appliance is now available from the Oracle Software Delivery Cloud.

You can use this appliance to create an Oracle E-Business Suite 12.2.8 Vision demonstration instance on a single, unified virtual machine containing both the database tier and the application tier.

Use with Oracle VM Manager or Oracle VM VirtualBox

This virtual appliance can be imported into Oracle VM Manager to deploy an Oracle E-Business Suite Linux 64-bit environment on compatible server-class machines running Oracle VM Server. It can also be imported into Oracle VM VirtualBox to create a virtual machine on a desktop PC or laptop.

Note: This virtual appliance is for on-premises use only. If you're interested in running Oracle E-Business Suite on Oracle Cloud, see Getting Started with Oracle E-Business Suite on Oracle Cloud (My Oracle Support Knowledge Document 2066260.1) for more information.

EBS Technology Stack Components

The Oracle E-Business Suite 12.2.8 VM virtual appliance delivers the full software stack, including the Oracle Linux 6.10 (64-bit) operating system, Oracle E-Business Suite, and additional required technology components.

The embedded technology components and their versions are listed below:

  • RDBMS Oracle Home: 12.1.0.2
  • Application Code Level:
      • Oracle E-Business Suite 12.2.8 Release Update Pack (Doc ID 2393248.1)
      • R12.AD.C.Delta.10 and R12.TXK.C.Delta.10 (Doc ID 1617461.1)
      • Consolidated technology patches for database and application tier techstack components (Doc ID 1594274.1)
      • Oracle E-Business Suite Data Removal Tool (DRT) patches (Doc ID 2388237.1)
  • Oracle Forms and Reports: 10.1.2.3
  • WebLogic Server: 10.3.6
  • Web Tier: 11.1.1.9
  • JDK: 1.7.0_201
  • Java Plugin: J2SE 1.7 Critical Patch Update (CPU) October 2018 (Doc ID 2445688.1)

REST Services Availability

With Oracle VM Virtual Appliance for Oracle E-Business Suite Release 12.2.8, the required setup tasks for enabling Oracle E-Business Suite REST services provided through Oracle E-Business Suite Integrated SOA Gateway are already preconfigured. This means that Oracle E-Business Suite integration interface definitions published in Oracle Integration Repository, a component in Oracle E-Business Suite Integrated SOA Gateway, are available for REST service deployment.

Note that PL/SQL APIs, Java Bean Services, Application Module Services, Open Interface Tables and Views and Concurrent Programs are all REST-enabled in this VM virtual appliance.

References

Related Articles
Categories: APPS Blogs

Retailers Turn to Latest Oracle Demand Forecasting Service to Optimize Inventory

Oracle Press Releases - Mon, 2019-02-04 07:00
Press Release
Retailers Turn to Latest Oracle Demand Forecasting Service to Optimize Inventory Built-in artificial intelligence and intuitive dashboards help retailers prevent overstocking and boost customer satisfaction

Redwood Shores, Calif.—Feb 4, 2019

Retailers can now improve inventory management through a single view of demand throughout their entire product lifecycle with the next generation Oracle Retail Demand Forecasting Cloud Service. With built-in machine learning, artificial intelligence and decision science, the offering enables retailers to gain pervasive value across retail processes, allowing for optimal planning strategies, decreased operational costs, and enhanced customer satisfaction. In addition, modern, intuitive dashboards improve operational agility and workflows, adapting immediately to new information to improve inventory outcomes.

The offering is part of Oracle’s Platform for Modern Retail, built on a cloud-native platform and aligned to the Oracle Retail Planning and Optimization portfolio. Learn more about Oracle Retail Demand Forecasting Cloud Service here.

“As customer trends continue to evolve faster than ever before, it’s imperative that retailers move quickly to optimize inventory and demand. Too little inventory and customers are dissatisfied. Too much and retailers have a bottom line problem that leads to unprofitable discounting,” said Jeff Warren, vice president, Oracle Retail. “We have distilled over 15 years of forecasting experience across hundreds of retailers worldwide into a comprehensive and modern solution that maximizes the forecast accuracy for the entire product lifecycle. Our customers asked, and we delivered.”

For example, the offering was evaluated by a major specialty retailer against 2.2M units sold over the 2018 holiday season, representing over $480M in revenue. With the forecast accuracy improvements, the retailer was able to achieve the same sales with at least 345K fewer units of inventory. In tandem, the retailer improved 70 percent of forecasts using completely automated next-generation forecasting data science. These results gave the retailer the confidence to decrease safety stock by 10 percent, reduce overall inventory by 30 percent and improve in-stock rates by 10 percent through smarter placement of the same inventory–all while delivering the same level of service to customers.

“As unified commerce sales grow, the ability to support all four business activities (demand planning, supply planning, inventory planning, and sales and operations execution/merchandising, inventory and operations execution) across all sales channels becomes even more important. A 2017 Gartner survey of supply chain executives highlighted the importance organizations place on their planning capabilities.” Of the “top three investment areas from 2016 through 2017, 36% of retail respondents cited upgrading their demand management capabilities,” wrote Gartner experts Mike Griswold and Alex Pradhan. Source:  Gartner Market Guide for Retail Forecasting and Replenishment Solutions, December 31, 2018

Maximizing Forecast Accuracy Throughout the Product Lifecycle

With the next generation Oracle Retail Demand Forecasting Cloud Service, retailers can:

  • Tailor approaches for short and long lifecycle products, maximizing forecast accuracy for the entire product lifecycle
  • Adapt to recent trends, seasonality, out-of-stocks, and promotions; and reflect retailers’ unique demand drivers, delivering better customer experience from engagement to sale, to fulfillment
  • Leverage dashboard views to support day-in-the-life forecasting workflows such as forecast overview, forecast scorecard, exceptions and forecast approvals
  • Gain transparency across the entire supply chain that enables analytical processes and end-users to understand and engage with the forecast, increasing inventory productivity
  • Coordinate and simulate demand-driven outcomes using forecasts that adapt immediately to new information and without a dependency on batch processes, driving operational agility
Contact Info
Kristin Reeves
Oracle
925-787-6744
kris.reeves@oracle.com
About Oracle Retail

Oracle is the modern platform for retail. Oracle provides retailers with a complete, open, and integrated platform for best-of-breed business applications, cloud services, and hardware that are engineered to work together. Leading fashion, grocery, and specialty retailers use Oracle solutions to accelerate from best practice to next practice, drive operational agility and refine the customer experience. For more information, visit our website at www.oracle.com/retail.

About Oracle

The Oracle Cloud offers complete SaaS application suites for ERP, HCM and CX, plus best-in-class database Platform as a Service (PaaS) and Infrastructure as a Service (IaaS) from data centers throughout the Americas, Europe and Asia. For more information about Oracle (NYSE:ORCL), please visit us at www.oracle.com.

Trademarks

Oracle and Java are registered trademarks of Oracle and/or its affiliates. Other names may be trademarks of their respective owners.

Talk to a Press Contact

Kristin Reeves

  • 925-787-6744

[Blog] [Solved] Oracle GoldenGate: Bidirectional Replication Issue

Online Apps DBA - Mon, 2019-02-04 06:49

Installed GoldenGate software with an Oracle 11.2.0.4 database and configured all the bidirectional parameters, but still facing an Oracle GoldenGate bidirectional replication issue… Worry not! We are here. Visit: https://k21academy.com/goldengate33 and consider our new blog covering: ✔ What is Bi-Directional Replication and its Capabilities ✔ Issues in Bi-Directional Replication ✔ Cause and Solution for the Issue […]

The post [Blog] [Solved] Oracle GoldenGate: Bidirectional Replication Issue appeared first on Oracle Trainings for Apps & Fusion DBA.

Categories: APPS Blogs
