Feed aggregator

Latest ACE Technical Articles: March 2019

OTN TechBlog - Tue, 2019-04-09 11:05

What does it take to spend countless hours hunched over a keyboard pounding out code, only to hunch yet again to write a technical article? Beyond the necessary technical skill and expertise, my guess is that it also takes massive quantities of coffee. I have no hard evidence to support that theory other than the long lines at any Starbucks within range of any developer conference. I didn't ask these ACEs how much coffee they consumed as they wrote these articles. They may not drink coffee at all. But the articles listed below are clear evidence that these fine people had the energy and the inclination to transfer their expertise and experience onto the page, where you can absorb it.

So pour yourself a cup of whatever keeps you going and soak up some of what these ACEs are serving.

Oracle ACE Director Nassyam Basha
Database Expert, eProseed
Oracle ACE Director Syed Jaffar Hussain
Author, Speaker, Oracle Evangelist, Award-winning DBA
Oracle ACE David Fitzjarrell
Oracle Database Administrator, Pinnacol Assurance
Oracle ACE Michael Gangler
Database Expert, eProseed
Oracle ACE Associate Jian Jiang
Yunqu Tech
Oracle ACE Associate Bin Hong
Senior MySQL DBA, Shanghai Action Information Technology Co., Ltd.

New Bookmark for the Journey

Steven Chan - Tue, 2019-04-09 09:01

Contributing Author: Robert Farrington

Greetings, EBS Technology Blog readers!

As some of you may have noticed, on Friday, April 5, 2019, the URL of this blog changed to https://blogs.oracle.com/ebstech. The change was made to better reflect its reason for existing: to provide unmatched insider coverage of the wide range of technology used by Oracle E-Business Suite.

This update is the latest stage in the evolution of the EBS Technology Blog that Steven Chan started in 2006. Since Steven updated you at the end of January 2019 about his extended leave, we've continued to provide a comprehensive range of technology updates from our ever-growing team of contributors.

You will be pleased to hear that our blog infrastructure support team has made the change seamlessly, with a custom configuration that transparently redirects to our new URL. We do recommend, however, that you update your bookmarks with the new URL.

RSS Feed

If you are a subscriber to our RSS feed, this too will continue to function in the same way, with no action needed by you. And if you have not subscribed, you are invited to do so by clicking on the orange RSS icon below and towards the right of the blog title banner. The feed is a great way for you to get automated notifications of the articles published on this blog, conveniently and at no cost.

Get in Touch

Please don't hesitate to let us know how we are doing on any aspect of this blog - what you'd like to see more of, or any other suggestions. You can comment directly on this article, or drop us an email.

Categories: APPS Blogs

Oracle Enables Utilities to Embrace a Customer-Centric Future

Oracle Press Releases - Tue, 2019-04-09 06:00
Press Release
Oracle Enables Utilities to Embrace a Customer-Centric Future
New enhancements to CIS offerings empower utilities to improve operations while providing personalized insights to better engage customers

CS WEEK, Phoenix, Arizona—Apr 9, 2019

The utility industry is undergoing a massive evolution as utility customers become more active consumers and producers. The explosion of data from a multitude of devices, including meters, IoT sensors, and smart home appliances, requires utilities to rethink old business and billing models to meet the demands of this new utility economy.

As such, Oracle continues to enhance its leading customer solutions platform to deliver the tools water, gas, and energy utilities need to address this changing market dynamic. With a modern user interface, deep analytics, and built-in integrations with operational device data management and customer self-service capabilities, utilities can better engage customers at scale. For example, they will be able to tap user interface innovations to target customers for new programs based on personalized information on their utility usage. The new features are now available across Oracle’s customer solutions, including Customer to Meter and Customer Cloud Service (CCS).

“In this new utility economy, the customer is transitioning from a billable account to a participant in the utility business model. With changes in rates, regulations, and production, customers are looking for more information regarding their usage and bill and how they can lessen the impact of both,” said Hillary Martin, senior director, Oracle Utilities. “Antiquated customer systems simply don’t allow utilities the flexibility and intelligence they need in this changing market paradigm. With our continued updates, Oracle is bringing utilities to the leading edge of innovation.”

Driving Customer Innovation at Scale

Customer Cloud Service brings together Oracle Utilities’ market-leading customer information system (CIS) and advanced meter data management capabilities into a single meter-to-cash-to-customer solution on a shared database and technology stack. By bringing metering, customer account and revenue data into a common view, enriched by deep analytics and data visualization tools, utilities can drive greater business value from the increasing amounts of data and convert it into actionable business insights.

Built on the Oracle Cloud and delivered as the industry’s first fully supported Software as a Service (SaaS) solution, CCS features automated patching, upgrades, security, and standard SaaS services—freeing up valuable utility resources. With CCS, utilities gain a complete view of their customers and significant cost reductions compared to managing two separate systems. 

With these new innovations, CCS and Customer to Meter utility customers can benefit from:

  • User Experience Enhancements: Guided by decades of working closely with customers on their CIS initiatives and a decade of Opower behavioral science research and design best practices, Oracle customer solutions feature a new, clean and modern interface and improvements to the ways end users interact with data, insights, content and context. With guided workflows and intuitive design, even brand-new customer service representatives can deliver the same level of service as long-term call center experts.
  • Advanced Analytics: With the exponential growth in volumes and sources of data, utilities have an ever-expanding opportunity to derive value from that information to support better decisions across the business. With pre-built analytics embedded in both Oracle Utilities customer solutions, users can drive insight throughout the customer lifecycle. This allows utilities to visualize data clearly across systems and slice and dice to discover fresh insights that support customer engagement, revenue management, and more. With Oracle, utilities have the power of data science and machine learning at their fingertips, in pre-packaged solutions designed specifically for utility needs.
  • Operational Device Management: enables utilities to track meter and device inventory and monitor performance from within the same system capturing meter reads and customer accounts. This customer-device-data integration in a single solution not only reduces technology costs for utilities but also provides a more complete, “single source of truth” picture across an increasingly distributed network. This helps utilities move beyond the meter to leverage data across all relevant devices so they can explore and scale to new connected home services and more.
  • Digital Self Service: delivers a crucial component in providing modern customer experiences. By integrating Oracle Utilities Digital Self Service (DSS) and Oracle customer solutions, utilities can deliver superior service across channels and devices while driving down operating costs. In addition, based on a decade of user insights and design experience from Opower, Oracle DSS provides a strong web foundation to launch additional customer engagement programs. 
 

“Utility distribution is no longer a linear equation where a customer uses a resource and then is billed for it. With the industry’s most connected, end-to-end customer platforms, Oracle is enabling utilities to become trusted advisors in this new journey while modernizing operations for an evolving customer-centric utility future,” added Martin.

Contact Info
Kristin Reeves
Oracle
+1.925.787.6744
kris.reeves@oracle.com
About Oracle

The Oracle Cloud offers a complete suite of integrated applications for Sales, Service, Marketing, Human Resources, Finance, Supply Chain and Manufacturing, plus Highly Automated and Secure Generation 2 Infrastructure featuring the Oracle Autonomous Database. For more information about Oracle (NYSE: ORCL), please visit us at www.oracle.com.

Trademarks

Oracle and Java are registered trademarks of Oracle and/or its affiliates. Other names may be trademarks of their respective owners.

Talk to a Press Contact

Kristin Reeves

  • +1.925.787.6744

How To Create Compute (Linux or Windows) in Oracle Cloud (OCI)

Online Apps DBA - Tue, 2019-04-09 05:41

How To Create Compute (Linux or Windows) in Oracle Cloud (OCI)

Compute, Instance and Machine are used interchangeably and represent VM or BM shapes in Oracle’s Gen2 Cloud (OCI)! Thinking, ✔ How To Create a Linux or Windows Machine in OCI? ✔ What do the various options like AD, Image Source, Shape, or FD mean? […]

The post How To Create Compute (Linux or Windows) in Oracle Cloud (OCI) appeared first on Oracle Trainings for Apps & Fusion DBA.

Categories: APPS Blogs

PostgreSQL 12: Copying replication slots

Yann Neuhaus - Tue, 2019-04-09 04:44

The concept of replication slots was introduced in PostgreSQL 9.4 and was created to prevent a primary instance from deleting WAL that a replica still needs to apply. That could happen when you have a network interruption or the replica is down for another reason. With replication slots you can prevent that, with the downside that your master could fill up your disk if the interruption lasts too long. This concept of a “physical replication slot” was later extended so that you can also create “logical replication slots”, which are used in logical replication and made it into PostgreSQL 10. Now, with PostgreSQL 12 in active development, another great feature has made it into PostgreSQL core: copying replication slots.

What might that be good for? Let's assume the following scenario:

  • You want to attach two replicas to your master instance
  • You want both replicas to use a physical replication slot
  • You want to build both replicas from the same basebackup and to start at the same position

What you can do in PostgreSQL is to create a base backup that will create a physical replication slot:

postgres@pgbox:/home/postgres/ [PGDEV] psql -X -c "select * from pg_replication_slots" postgres
 slot_name | plugin | slot_type | datoid | database | temporary | active | active_pid | xmin | catalog_xmin | restart_lsn | confirmed_flush_lsn 
-----------+--------+-----------+--------+----------+-----------+--------+------------+------+--------------+-------------+---------------------
(0 rows)

postgres@pgbox:/home/postgres/ [PGDEV] mkdir -p /var/tmp/basebackup
postgres@pgbox:/home/postgres/ [PGDEV] pg_basebackup --create-slot --slot myslot --write-recovery-conf -D /var/tmp/basebackup/
postgres@pgbox:/home/postgres/ [PGDEV] psql -X -c "select * from pg_replication_slots" postgres
 slot_name | plugin | slot_type | datoid | database | temporary | active | active_pid | xmin | catalog_xmin | restart_lsn | confirmed_flush_lsn 
-----------+--------+-----------+--------+----------+-----------+--------+------------+------+--------------+-------------+---------------------
 myslot    |        | physical  |        |          | f         | f      |            |      |              | 0/2000000   | 
(1 row)

(Please note that there is no recovery.conf anymore in PostgreSQL 12, so the recovery parameters have been added to postgresql.auto.conf.)

The replication slot will not be dropped after pg_basebackup finishes and you can use it to attach a new replica. But before doing that: as of PostgreSQL 12 you can copy the slot and then attach the second replica to the copied slot, so both replicas will start at the same position:

postgres@pgbox:/home/postgres/ [PGDEV] psql -X -c "select pg_copy_physical_replication_slot('myslot','myslot2')" postgres
 pg_copy_physical_replication_slot 
-----------------------------------
 (myslot2,)
(1 row)

postgres@pgbox:/home/postgres/ [PGDEV] psql -X -c "select * from pg_replication_slots" postgres
 slot_name | plugin | slot_type | datoid | database | temporary | active | active_pid | xmin | catalog_xmin | restart_lsn | confirmed_flush_lsn 
-----------+--------+-----------+--------+----------+-----------+--------+------------+------+--------------+-------------+---------------------
 myslot    |        | physical  |        |          | f         | f      |            |      |              | 0/8000000   | 
 myslot2   |        | physical  |        |          | f         | f      |            |      |              | 0/8000000   | 
(2 rows)

As you can see both replication slots have the same value for “restart_lsn”. This will make it very easy to use the basebackup for the two replicas and start them from the same position:

postgres@pgbox:/home/postgres/ [PGDEV] mkdir -p /var/tmp/replica1
postgres@pgbox:/home/postgres/ [PGDEV] mkdir -p /var/tmp/replica2
postgres@pgbox:/home/postgres/ [PGDEV] cp -pr /var/tmp/basebackup/* /var/tmp/replica1/
postgres@pgbox:/home/postgres/ [PGDEV] cp -pr /var/tmp/basebackup/* /var/tmp/replica2/
postgres@pgbox:/home/postgres/ [PGDEV] sed -i 's/myslot/myslot2/g' /var/tmp/replica2/postgresql.auto.conf 
postgres@pgbox:/home/postgres/ [PGDEV] echo "port=8888" >> /var/tmp/replica1/postgresql.auto.conf 
postgres@pgbox:/home/postgres/ [PGDEV] echo "port=8889" >> /var/tmp/replica2/postgresql.auto.conf 
postgres@pgbox:/home/postgres/ [PGDEV] chmod o-rwx /var/tmp/replica1
postgres@pgbox:/home/postgres/ [PGDEV] chmod o-rwx /var/tmp/replica2

What happened here:

  • Restore the same basebackup to the new replica locations
  • Change the slot to use for the second replica to our copied slot name
  • Change the ports of both replicas because we are running on the same host
  • Fix the permissions so pg_ctl will not complain

That’s it. We can startup both replicas:

postgres@pgbox:/home/postgres/ [PGDEV] pg_ctl -D /var/tmp/replica1/ start
postgres@pgbox:/home/postgres/ [PGDEV] pg_ctl -D /var/tmp/replica2/ start
postgres@pgbox:/home/postgres/ [PGDEV] psql -X -p 8888 -c "select pg_is_in_recovery()" postgres
 pg_is_in_recovery 
-------------------
 t
(1 row)

postgres@pgbox:/home/postgres/ [PGDEV] psql -X -p 8889 -c "select pg_is_in_recovery()" postgres
 pg_is_in_recovery 
-------------------
 t
(1 row)

Quite easy, and we can confirm that both replicas are at the same location as before:

postgres@pgbox:/home/postgres/ [PGDEV] psql -X -c "select * from pg_replication_slots" postgres
 slot_name | plugin | slot_type | datoid | database | temporary | active | active_pid | xmin | catalog_xmin | restart_lsn | confirmed_flush_lsn 
-----------+--------+-----------+--------+----------+-----------+--------+------------+------+--------------+-------------+---------------------
 myslot    |        | physical  |        |          | f         | t      |      15622 |      |              | 0/9000148   | 
 myslot2   |        | physical  |        |          | f         | t      |      15632 |      |              | 0/9000148   | 
(2 rows)

You can also copy logical replication slots, of course. Nice, thanks to all involved.
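For completeness, a quick sketch of the logical case using pg_copy_logical_replication_slot (the slot names here are illustrative, and test_decoding is the example output plugin that ships with PostgreSQL):

postgres@pgbox:/home/postgres/ [PGDEV] psql -X -c "select pg_create_logical_replication_slot('my_logical_slot','test_decoding')" postgres
postgres@pgbox:/home/postgres/ [PGDEV] psql -X -c "select pg_copy_logical_replication_slot('my_logical_slot','my_logical_slot2')" postgres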

The article PostgreSQL 12: Copying replication slots appeared first on Blog dbi services.

JSON path expression syntax error

Tom Kyte - Tue, 2019-04-09 03:46
Hi TOM, I am trying to use the json_value function in a for loop as below, and I am getting the error 'PL/SQL: ORA-40442: JSON path expression syntax error'. Could you please help me with a solution? Thanks. set serveroutput on; DECLARE p_dn_list CLO...
Categories: DBA Blogs

Parallel recursive WITH Enhancements

Tom Kyte - Tue, 2019-04-09 03:46
Team, I was reading about the Parallel recursive WITH clause enhancements in the 12.2 database from the link below: https://docs.oracle.com/en/database/oracle/oracle-database/12.2/sqlrf/SELECT.html#GUID-CFA006CA-6FF1-4972-821E-6996142A51C...
Categories: DBA Blogs

Attempting primary / standby using the same SCAN listener name and configuration on both clusters.

Tom Kyte - Tue, 2019-04-09 03:46
Hi Tom, I've never gotten through to you in 20 years, so hopefully you'll be available as I could use some sound advice (you're so in demand, I gave up about 10 years ago :-) Here are the particulars: * We're planning a migration from data cen...
Categories: DBA Blogs

Check db_links from several databases, from one server via db_links to these servers

Tom Kyte - Tue, 2019-04-09 03:46
Hello, I need to check db_links from all of my servers and I want to check them from a procedure or view on a statistics db server. The statistics db server has db_links to all of my dbs. Is that possible? If it is possible, how can I do this?
Categories: DBA Blogs

Why the same query performs much slower in PL/SQL procedure/package than directly querying

Tom Kyte - Tue, 2019-04-09 03:46
Dear Tom, I have a complex query joining multiple tables which takes about 10 to 15 seconds to return results. There are 3 layers (2 sub queries) of the "FROM" to get the final results. However, if I create a procedure to run the same codes, whic...
Categories: DBA Blogs

To create multiple instances and databases on same unix machine

Tom Kyte - Tue, 2019-04-09 03:46
I have a requirement to create 2 instances and 2 databases on the same unix machine. Do I need multiple homes? I think not, because it is the same version (11g). I already created one instance/database using OUI. Are these steps right for creating the second one?...
Categories: DBA Blogs

Cube Viewer - A new way of analyzing operational data in OUAF

Anthony Shorten - Mon, 2019-04-08 19:23

In past releases of Oracle Utilities Application Framework, Query Zones have been a flexible way of displaying lists of information, with flexible filters and dynamic interaction including creating and saving views of the lists for reuse. In Oracle Utilities Application Framework V4.3.0.6.0 and above, we introduced the Cube Viewer, which extends the query model to support pivot-style analysis and visualization of operational data. The capability extends the ConfigTools (aka Task Optimization) capability to allow implementations to define cubes and provide interactivity for end users on operational data.

The Cube Viewer brings together a number of ConfigTools objects to build an interactive visualization with the following capabilities:

  • Toolbar. An interactive toolbar to decide the view of the cube to be shown by the user. This includes saving a view including the criteria for reuse.
  • Settings. The view and criteria/filters to use on the data set to help optimize the analysis. For example, you might want to see the raw data, a pivot grid, a line chart or a bar chart. You can modify the dimensions shown and even add rules for how certain values are highlighted using formats.
  • Filters. You can decide the filters and values shown in the grid within the selection criteria.
  • View. The above configuration results in a number of views of the data.

An example of the Cube Viewer is shown below:

Example Cube Viewer

The Cube Viewer has many features that allow configuration to optimize and highlight critical data whilst allowing users to interact with the information presented. In summary, the key features are:

  • Flexible View Configuration. It is possible to use the configuration at runtime to determine the subset of the data to analyze and the display format as a saved view. As with query portals, views can be saved and reused. These views can be Private, Shared (within an Access Group) or Public.
  • Formatting Support. To emphasize particular data values, it is possible at runtime to alter their display using simple rules. For example:

Example formatting

  • Visual and Analytical Views. The data to be shown can be expressed in a number of view formats including a variety of graph styles, in grid format and/or raw format. This allows users to interpret the data according to their preferences.
  • Configurable using ConfigTools. The Cube Viewer uses and extends existing ConfigTools objects to allow greater flexibility and configuration control. This allows existing resources who have skills in ConfigTools to build and maintain cubes.
  • Comparison Feature. Allows different selection criteria sets to be used for comparison purposes. This allows for difference comparison between two sets of data.
  • Save View as "Snapshot". It is possible to isolate data using the interactive elements of the Cube Viewer to find the data you want to analyze. Once found, you can save the configuration, filters, etc. for later recall, very similar to the concept of a "Snapshot". For example, if you find some data that needs attention, you can save the view and then reuse it to show others later if necessary.
  • Function Support. In the details, additional functions such as Average Value, Count, Maximum Value, Median Value, Minimum Value, Standard Deviation and Sum are supported at the row and column levels. For example:

Example Functions

Cube Views may be available with each product (refer to the documentation shipped with the product), and Cube Views can be configured by implementers and reused across users as necessary. Over the next few weeks a step-by-step guide will be published here and in other locations to show the basic process and some best practices for building a Cube Viewer.

Welcome to Oracle’s corporate security blog

Oracle Security Team - Mon, 2019-04-08 10:31

Hi, all! My name is Mary Ann Davidson and I am the Chief Security Officer for Oracle. I’m the first contributor to our relaunched corporate security blog. Having many different security voices contribute to this blog will help our customers understand the breadth of security at Oracle - across multiple organizations and multiple lines of business. Security at Oracle is way too big (and too important) to be constrained to one person or even one organization. This blog entry will describe how security is organized at Oracle and what my organization does specifically.

When I joined Oracle (and before I started working in security), we were just beginning to build “Oracle Financials” – at the time, general ledger, purchasing and payables applications - which have long since expanded to a huge portfolio of business applications. Since then, we’ve continued to grow: more business applications, middleware, operating systems, engineered systems, industry solutions (e.g., construction, retail, hospitality, financial services) and of course, many clouds (infrastructure as a service (IaaS), platform as a service (PaaS) and Software as a Service (SaaS) - business applications we run for our customers). Plus databases, of course!

The amount of diversity we have in terms of our product and service portfolio has a significant impact from a security perspective. The first one is pretty obvious: nobody can be an expert on absolutely everything in security (and even if one person were an expert on everything, there isn’t enough time for one person to be responsible for securing absolutely everything, everywhere in Oracle). The second is also obvious: security must be a cultural value, because you can never hire enough security experts to look over the shoulders of everyone else to make sure they are doing whatever they do securely. As a result, Oracle has adopted a decentralized security model, albeit with corporate security oversight: “trust, but verify.”

With regard to our core business, security expertise remains in development. By that I mean that development organizations are responsible for the security-worthiness of what they design and build, and in particular, security has to be “built in, not bolted on” since security doesn’t work well (if at all) as an afterthought. (As I used to say when I worked in construction management in the US Navy, “You can’t add rebar after the concrete has set.”)

Security oversight falls under the following main groups at Oracle: Global Physical Security (facility security, investigations, executive protection, etc.), Global Information Security (the “what” we do as a company in terms of corporate security policies, including compliance, forensic investigations, etc.), Corporate Security Architecture (review and approval prior to systems going live to ensure they are securely architected), and Global Product Security, which is my team.

I mentioned I am the CSO for Oracle but really, that would be better categorized as “Chief Security Assurance Officer.” What does assurance at Oracle encompass? In essence, that everything we build – hardware and software products, services, and consulting engagements – has security built in and maintains a lifecycle of security. In order to do that, my team has developed an extensive program – from “what” we do to “how” we do it – including verifying that “we did what we are supposed to do.” The “what” includes secure coding and secure development standards, covering not only “don’t do X,” but “here’s how to do Y.” We train many people in development organizations on these standards (e.g., not only developers but quality assurance (QA) people and doc writers, some of whom write code samples that we obviously want to reflect secure coding practice). We have more extensive, tailored training tracks, as well. Our secure development requirements also include architectural risk analysis (ARA), since people building systems (or even features within systems) need to think about the type of threats the system will be subjected to and design with those threats in mind.  These programs and activities are collectively known as Oracle Software Security Assurance (OSSA).

One of the ways we decentralize security is by identifying and appointing people in development and consulting organizations to be our “security boots on the ground.” Specifically, we have around 60 senior Security Leads and over 1,700 Security Points Of Contact (SPOCs) that implement Oracle Software Security Assurance programs across a multiplicity of development organizations and consulting.

Development teams are required to use various security analysis and testing tools, including both static and dynamic analysis, to triage the security bugs found and to attempt to fix the worst issues the quickest. We use a lot of different tools to do this, since no one tool works equally well for all types of code. We also build tools in-house to help us find security problems (e.g., a static analysis tool called Parfait, built by Oracle Labs, which we optimize for use within Oracle). Other tools are developed by the ethical hacking team (EHT), e.g., the wonderfully-named SQL*Splat, which fuzzes PL/SQL code.

The EHT’s job is to attempt to break our products and services before “real” bad guys do, and in particular to capture “larger lessons learned” from the results of the EHT’s work, so we can share those observations (e.g., via a new coding standard or an automated tool) across multiple teams in development. I’m also pleased to note that the EHT’s skills are so popular that a number of development groups in Oracle have stood up their own EHTs.

My team also includes people who manage security vulnerabilities: the SecAlert team, who manage the release of our quarterly Critical Patch Updates (CPUs) and the Security Alert program, as well as engaging with the security researcher community.

Lastly, we have a team of security evaluators who take selected products and services through international Common Criteria (ISO-15408) and U.S. Federal Information Processing Standard (FIPS) 140 certifications: another way we “trust, but verify.”

Security assurance is not only increasingly important – think of the bazillions of Internet of Things devices as people insist on implanting sensors in absolutely everything - but increasingly asked about by customers who want to know “how did you build – and manage – this product or service?” That is another reason we make sure we can measure what teams across the company are doing or not doing in assurance and help uplift those who need to do better.

In the future, we will be publishing more blog entries to discuss the respective roles of security oversight teams, as well as the security work of operational and development teams. “Many voices” will illustrate the breadth and depth of security at Oracle, and how seriously we take it. On a personal note, I look forward to reading about the great work my many valued colleagues at Oracle are doing to continue to make security rock solid, and a core cultural value.

ACE Director

Bar Solutions - Mon, 2019-04-08 09:03

It has now been a week since I was promoted to ACE Director. I want to thank everybody for their wishes. Directly after being promoted I was off to Dublin for the UKOUG Ireland conference, which is why I didn't respond to every tweet individually. Please don't think I didn't like the tweets or wishes on LinkedIn; I just didn't have the time and the opportunity to respond.
I especially want to thank Jim Czuprynski and Alex Nuijten for nominating me. And Bryn Llewellyn and Maria Colgan for supporting me from the Oracle side.
I also want to thank my employer (Qualogy) for granting me the opportunity to be very active in the community by sending me to all the usergroups all over the world.
And of course a big thanks to the ACE Program for accepting me.

I hope I will be able to maintain this title for a long, long time.

Video : JSON_MERGEPATCH Function in Oracle Database 19c

Tim Hall - Mon, 2019-04-08 05:25

Today’s video is a run through the JSON_MERGEPATCH function, which was introduced in Oracle 19c.

For those that don’t do video, this was based on the following article.
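If you don't do video either, here's a minimal taste of the function (the values are illustrative; following RFC 7386 merge-patch semantics, a null in the patch removes a key and new keys are added):

select json_mergepatch(
         '{"name":"banana","quantity":10}',
         '{"quantity":null,"price":1.5}'
       ) as merged
from   dual;

MERGED
-----------------------------
{"name":"banana","price":1.5}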

The star of today’s video is Rene Antunez.

Cheers

Tim…


Describe Upgrade

Jonathan Lewis - Mon, 2019-04-08 05:02

Here’s an odd little change between Oracle versions that could have a stunning impact on application performance if the thing that generates your client code happens to use an unlucky selection of constructs. It’s possible to demonstrate the effect remarkably easily – you just have to describe a table, doing it lots of times to make it easy to see what’s happening.


create table t1 as select * from all_objects
where rownum = 1
;

set heading off
set feedback off
set pagesize 0
set linesize 156
set trimspool on
set termout off
set serveroutput on

execute snap_rowcache.start_snap
execute snap_libcache.start_snap

start start_10000
-- start_1 contains "describe t1"

set termout on
set serveroutput on

spool temp
execute snap_rowcache.end_snap
execute snap_libcache.end_snap

spool off

The start_10000 script is my mechanism for running a simple piece of code many times, and as the comment following it says, all I’m doing is repeating “describe t1”. The calls to “snap” something are examples of procedures I use to find the changes recorded in various dynamic performance views over short periods of time (there’s an example of the code for v$mystat here) In this case, as the names suggest, the snapshots record the changes in v$rowcache (the dictionary cache) and v$librarycache (the library cache). I’ve put a simple variant of the code at the end of the blog note so you don’t have to do all the preparation if you want to run a quick test for yourself.

Here are the results I get when running the test in Oracle 18.3.0.0

---------------------------------
Dictionary Cache - 05-Apr 19:00:00
Interval:-      27 seconds
---------------------------------
Parameter                 Sub# Usage Fixed    Gets  Misses   Scans  Misses    Comp    Mods Flushes
---------                ----- ----- -----    ----  ------   -----  ------    ----    ---- -------
dc_objects                         0     0 260,051       0       0       0       0       0       0
dc_users                           0     0 260,000       0       0       0       0       0       0
---------------------------------
Library Cache - 05-Apr 19:00:00
Interval:-      27 seconds
---------------------------------
Type      Description              Gets        Hits Ratio        Pins        Hits Ratio   Invalid    Reload
-----     -----                    ----        ---- -----        ----        ---- -----   -------    ------
NAMESPACE TABLE/PROCEDURE        10,003      10,003   1.0     280,028     280,028   1.0         0         0

Before showing you the corresponding figures from 12.2.0.1 I’ll just point out that in version 18.3.0.0 of Oracle the structure of the view all_objects gives me a table of 26 columns. Think about that and the 10,000 describes while looking at the numbers above, then look at the corresponding 12.2.0.1 results:

---------------------------------
Dictionary Cache - 05-Apr 19:00:00 
Interval:-      28 seconds
---------------------------------
Parameter                 Usage Fixed    Gets  Misses   Scans  Misses    Comp    Mods Flushes
---------                 ----- -----    ----  ------   -----  ------    ----    ---- -------
dc_users                      0     0       2       0       0       0       0       0       0
dc_objects                    0     0       9       0       0       0       0       0       0

---------------------------------
Library Cache - 05-Apr 19:04:17
Interval:-      28 seconds
---------------------------------
Type      Description              Gets        Hits Ratio        Pins        Hits Ratio   Invalid    Reload
-----     -----                    ----        ---- -----        ----        ---- -----   -------    ------
NAMESPACE TABLE/PROCEDURE        10,005      10,005   1.0      20,018      20,018   1.0         0         0

The internal mechanism of the “describe” call has changed between 12.2.0.1 and 18.3.0.0.

For each describe in 18.3, for each column in the table, you see a “get” on dc_users and dc_objects in v$rowcache, and you see one “get” on the TABLE/PROCEDURE namespace in v$librarycache and (2 + number of columns) “pins”. In 12.2.0.1 there are no gets on the dictionary cache and only one get and two pins in the library cache for each describe.
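To check that arithmetic against the 18.3 figures above: 26 columns × 10,000 describes accounts for the 260,000 gets on dc_users (and, give or take a few recursive calls, the 260,051 on dc_objects), while (2 + 26) pins × 10,001 library cache gets comes to exactly the 280,028 pins reported.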

As a couple of extra checks I modified the test to query from a 12c client to an 18c server, then from an 18c client to a 12c server. The big numbers appeared in the former test (i.e. when the server is 18c) and the small numbers in the latter (when the server is 12c). I also tried a couple of other variations on the theme:

  • If the table t1 doesn’t exist when I run the test then there are no gets on the row cache, and I see 2 gets and pins (with hits) on the library cache per describe.
  • If I run the test using “describe other_schema.t1” the pins (and hits) on the library cache go up by 1 per describe
  • If I execute “alter session set current_schema = other_schema” so that “describe t1” is actually describing a table in another schema the pins (and hits) on the library cache go up by 1 per describe
  • If I run the test in the SYS schema, 18c behaves like 12c! But SYS is often a little weird compared to other schemas

Enabling the 10051 trace, I can see that both versions report an OPI call type = 119: “V8 Describe Any” for each call to “describe”. And that’s really where this blog started, and why lots of people might need to be aware (at least in the short term) of this change in behaviour across versions.

Welcome to the Real World.

My demonstration is clearly silly – no-one does hundreds of describes per second in a real application, surely. Well, not deliberately, and not necessarily with the intent to do a full describe, but sometimes n-tier development environments end up generating code that does things you might not expect. One such example is the way that JDBC can handle a requirement of the form:

insert into tableX( id, ......) values (my_sequence.nextval, ...) returning id into ?;

In the course of handling this requirement one of the coding strategies available to JDBC ends up executing the type 119 “V8 Describe Any” call. Imagine the effect this has when you have a couple of dozen concurrent sessions totalling a couple of million single row inserts per hour. The competition for library cache pins and row cache gets is massive – and the memory structures involved are all protected by mutexes. So when a client of mine recently upgraded their system from 11.2.0.4 to 18.3.0.0 they saw “library cache: mutex X” waits change from a few hundred seconds per hour to tens of thousands of seconds, and “row cache mutex” leaping up from nowhere in the “Top timed events” to reporting even more thousands of seconds of wait time per hour.

The actual impact of this issue will depend very strongly on how much use you (or your ORM) makes of this construct. The problem may be particularly bad for my client because of the very large number of concurrent executions of a very small number of distinct statements that all address the same table. For low concurrency, or for a wide range of different tables and statements, you may not see so much contention.

If you are seeing contention for “row cache mutex” and “library cache: mutex X”, then a quick corroborative test (if you are licensed for the performance and diagnostic packs) is to check the top_level_call# and top_level_call_name from v$active_session_history:

select
        top_level_call#, top_level_call_name, count(*)
from    v$active_session_history
group by
        top_level_call#, top_level_call_name
order by
        count(*)
;

If (119, ‘V8 Describe Any’) shows up as a significant fraction of the total then you’re probably looking at this issue.

Java is not my strong point – but here’s a trivial piece of standalone Java that you can use to demonstrate the issue if you’re familiar with running Java on the server. There are a few notes inline to explain necessary preparatory steps and code changes:


/*
        To create a class file, you need to execute
        javac temptest2.java

        This generates file temptest2.class
        If this is successful then execute
        java temptest2 {number of iterations} {commit frequency}

        e.g.
        java temptest2 10000 10

        To be able to compile, you need a CLASSPATH environment variable
        e.g. export CLASSPATH=$CLASSPATH:$ORACLE_HOME/jdbc/lib/ojdbc8.jar

        For java to see the created class the CLASSPATH must also include 
        the holding directory
        e.g. export CLASSPATH=$CLASSPATH:/mnt/working

        Example combined setting:
        export CLASSPATH=$CLASSPATH:$ORACLE_HOME/jdbc/lib/ojdbc8.jar:/mnt/working

        A schema will need to be created to hold two objects,
        And the connection string in the code below will need to be modified -
        and the listener has to be started and the database registered to it.

        Database objects:
        -----------------
        create sequence s1;

        create table test(
                id number, 
                code varchar2(32), 
                descr varchar2(32), 
                insert_user varchar2(32),
                insert_date date
        );

*/


import java.sql.*;
import oracle.jdbc.OracleDriver;
import java.util.Date;

public class temptest2
{
  public static void main (String arr[]) throws Exception
  {
    DriverManager.registerDriver(new oracle.jdbc.OracleDriver());
    Connection con = DriverManager.getConnection
          ("jdbc:oracle:thin:@localhost:1521:or18","test_user","test");

    Integer iters = new Integer(arr[0]);
    Integer commitCnt = new Integer(arr[1]);

    con.setAutoCommit(false);
    doInserts( con, iters.intValue(), commitCnt.intValue() );

    con.commit();
    con.close();
  }

  static void doInserts(Connection con, int count, int commitCount )
  throws Exception
  {

    int  rowcnt = 0;
    int  committed = 0;
    long start = new Date().getTime();

    for (int i = 0; i < count; i++ ) {
      PreparedStatement ps = con.prepareStatement(
           "insert into test (id, code, descr, insert_user, insert_date) " +
                     "values (s1.nextval,?,?, user, sysdate)",
           new String[]{"id"}
      );
      ps.setString(1,"PS - code" + i);
      ps.setString(2,"PS - desc" + i);
      ps.executeUpdate();

      ResultSet rs = ps.getGeneratedKeys();
      int x = rs.next() ? rs.getInt(1) : 0;
      System.out.println(x);
                
      rowcnt++;
      ps.close();

      if ( rowcnt == commitCount )
      {
        con.commit();
        rowcnt = 0;
        committed++;
      }
    }
    long end = new Date().getTime();
    con.commit();
    System.out.println
    ("pstatement " + count + " times in " + (end - start) + " milli seconds committed = "+committed);
  }
}

/*
 *
 * Sample from trace file after setting events 10046 and 10051:
 *
 * OPI CALL: type=119 argc= 7 cursor=  0 name=V8 Describe Any
 * OPI CALL: type=94 argc=38 cursor=  0 name=V8 Bundled Exec
 * PARSE #140693461998224:c=0,e=15,p=0,cr=0,cu=0,mis=0,r=0,dep=0,og=1,plh=3884345238,tim=1368184246
 * EXEC #140693461998224:c=0,e=135,p=0,cr=0,cu=4,mis=0,r=1,dep=0,og=1,plh=3884345238,tim=1368184411
 * OPI CALL: type=105 argc= 2 cursor=  0 name=Cursor close all
 * CLOSE #140693461998224:c=0,e=15,dep=0,type=3,tim=1368185231
 * OPI CALL: type=14 argc= 0 cursor=  0 name=COMMIT
 * XCTEND rlbk=0, rd_only=0, tim=1368185291
 * OPI CALL: type=119 argc= 7 cursor=  0 name=V8 Describe Any
 * OPI CALL: type=94 argc=38 cursor=  0 name=V8 Bundled Exec
 * PARSE #140693461998224:c=0,e=20,p=0,cr=0,cu=0,mis=0,r=0,dep=0,og=1,plh=3884345238,tim=1368187929
 * EXEC #140693461998224:c=0,e=162,p=0,cr=0,cu=4,mis=0,r=1,dep=0,og=1,plh=3884345238,tim=1368188141
 * OPI CALL: type=105 argc= 2 cursor=  0 name=Cursor close all
 * CLOSE #140693461998224:c=0,e=6,dep=0,type=3,tim=1368189336
 * OPI CALL: type=14 argc= 0 cursor=  0 name=COMMIT
 * XCTEND rlbk=0, rd_only=0, tim=1368189373
 *
*/


You’ll notice that I’ve prepared, executed and closed a statement inside a loop. The problem wouldn’t happen if I prepared the statement before the loop and closed it after the loop, doing nothing but the execute inside the loop; but the code is simply modelling the “single row processing” effect that typically appears through ORMs.
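For comparison, here’s a sketch of the restructured loop – my variation, not part of the original test – assuming the same class, imports and test table as temptest2 above:

  static void doInsertsPreparedOnce(Connection con, int count) throws Exception
  {
    // Prepare once: the "V8 Describe Any" call (and its library cache and
    // row cache activity) happens for this single prepare, not once per row.
    PreparedStatement ps = con.prepareStatement(
         "insert into test (id, code, descr, insert_user, insert_date) " +
                   "values (s1.nextval,?,?, user, sysdate)",
         new String[]{"id"}
    );

    for (int i = 0; i < count; i++ ) {
      ps.setString(1,"PS - code" + i);
      ps.setString(2,"PS - desc" + i);
      ps.executeUpdate();

      ResultSet rs = ps.getGeneratedKeys();
      int x = rs.next() ? rs.getInt(1) : 0;
      rs.close();
    }

    ps.close();     // close once, after the loop
    con.commit();
  }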

You’ll have to decide for yourself how to take snapshots of the dynamic performance views while this code is running, and how to enable tracing – but anyone who wants to fiddle with the code is probably better at coding Java than I am – so it’s left as an exercise to the reader (I used a logon trigger for the traces, and snap_rowcache and snap_libcache from another session).

There is a certain cruel irony to this issue.  For years I have been telling people that

    insert into my_table(id, ...) values(my_sequence.nextval,...) returning id into :bind1;

is more efficient than:

    select my_sequence.nextval into :bind1 from dual;
    insert into my_table(id,.....) values(:bind1, ...);

If, at present, you’re using Hibernate as your ORM it generates code that does the (inefficient, bad practice) latter and you won’t see the “describe” problem.

Footnote

If you want a very simple SQL*Plus script to see the effect – and have privileges to query v$rowcache and v$library_cache – here’s a hundred describes with a little wrapper to show the effect:

rem
rem     The schema running this script must not be SYS
rem     but must be granted select on v_$rowcache and
rem     v_$librarycache. For the results to be clearly
rem     visible the test needs to be run while virtually
rem     nothing else is running on the instance.
rem
rem     In 18.3.0.0 every describe seems to access 
rem     dc_users, dc_objects, and pin the library cache
rem     once for every column in the table described
rem     (plus a fixed "overhead" of 2 pins and one get)
rem
rem     When run by SYS the counts fall back to the
rem     12.2 numbers - i.e. only two pins and one get
rem     on the library cache with no accesses to the
rem     dictionary cache
rem
rem     The excess gets and pins disappear in 19.2,
rem     though the pin count on the library cache
rem     goes up to 4 per describe.
rem

drop table t1 purge;
create table t1(
        n1 number,
        n2 number,
        n3 number,
        n4 number,
        n5 number,
        n6 number,
        n7 number
)
;


-- variant create table t1 as select * from all_objects where rownum = 1;


set serveroutput off
set linesize 167
set trimspool on


spool temp_desc
select namespace, gets, pins from v$librarycache where namespace = 'TABLE/PROCEDURE';
select parameter, gets from v$rowcache where parameter in ('dc_users','dc_objects') and subordinate# is null;
spool off

set termout off

describe t1
describe t1
describe t1
describe t1
describe t1
describe t1
describe t1
describe t1
describe t1
describe t1

describe t1
describe t1
describe t1
describe t1
describe t1
describe t1
describe t1
describe t1
describe t1
describe t1

describe t1
describe t1
describe t1
describe t1
describe t1
describe t1
describe t1
describe t1
describe t1
describe t1

describe t1
describe t1
describe t1
describe t1
describe t1
describe t1
describe t1
describe t1
describe t1
describe t1

describe t1
describe t1
describe t1
describe t1
describe t1
describe t1
describe t1
describe t1
describe t1
describe t1

describe t1
describe t1
describe t1
describe t1
describe t1
describe t1
describe t1
describe t1
describe t1
describe t1

describe t1
describe t1
describe t1
describe t1
describe t1
describe t1
describe t1
describe t1
describe t1
describe t1

describe t1
describe t1
describe t1
describe t1
describe t1
describe t1
describe t1
describe t1
describe t1
describe t1

describe t1
describe t1
describe t1
describe t1
describe t1
describe t1
describe t1
describe t1
describe t1
describe t1

describe t1
describe t1
describe t1
describe t1
describe t1
describe t1
describe t1
describe t1
describe t1
describe t1

set termout on
set serveroutput on

spool temp_desc append

select namespace, gets, pins from v$librarycache where namespace = 'TABLE/PROCEDURE';
select parameter, gets from v$rowcache where parameter in ('dc_users','dc_objects') and subordinate# is null;

spool off



set doc off
doc

Sample output from 18.3
=======================
NAMESPACE                                                              GETS       PINS
---------------------------------------------------------------- ---------- ----------
TABLE/PROCEDURE                                                       27449      71108


PARAMETER                              GETS
-------------------------------- ----------
dc_users                              17341
dc_objects                           115830


NAMESPACE                                                              GETS       PINS
---------------------------------------------------------------- ---------- ----------
TABLE/PROCEDURE                                                       27555      72017


PARAMETER                              GETS
-------------------------------- ----------
dc_users                              18041
dc_objects                           116533


Note change in rowcache gets - one per column per describe on each parameter.
Note change in library cache pins - (one per column + 2) per describe.

Sample output from 12.2
=======================
NAMESPACE                  GETS       PINS
-------------------- ---------- ----------
TABLE/PROCEDURE           13393      20318


PARAMETER                              GETS
-------------------------------- ----------
dc_users                               4889
dc_objects                            31413


NAMESPACE                  GETS       PINS
-------------------- ---------- ----------
TABLE/PROCEDURE           13504      20539


PARAMETER                              GETS
-------------------------------- ----------
dc_users                               4889
dc_objects                            31416


No change in v$rowcache
Only the same single get and the "+2" pins per describe in v$librarycache

#


The good news is that I sent this text to a colleague who has access to Oracle 19, and the problem goes away (almost completely) – there are just two extra pins on the library cache in Oracle 19 compared to Oracle 12, and no gets on the rowcache dc_users and dc_objects. This suggests that it’s a known issue (even though there’s no visible bug report, and the problem is still present in 18.5), so it may be possible to get a backport of the version 19 code for version 18 fairly quickly. If not, the best temporary workaround is probably to bypass the ORM and manually code for a function call that processes an anonymous PL/SQL block – but I haven’t tested this yet.

There is a JDBC cursor cache available – and if this were enabled then the prepared statement that was closed by the code would be kept open by the JDBC cache and Oracle would not receive the repeated “describe” calls. Unfortunately it seems that there’s a cursor leak (still) in the JDBC caching algorithm that will lead to sessions hitting Oracle error “ORA-01000: maximum open cursors exceeded.”
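For reference, the Oracle JDBC implicit statement cache is normally switched on per connection like this (a fragment only, and – given the cursor-leak caveat above – something to test carefully before relying on it):

    import oracle.jdbc.OracleConnection;

    // With the implicit cache enabled, ps.close() returns the statement to
    // the driver's cache instead of releasing it, so re-preparing identical
    // SQL re-uses the cached (already described) statement.
    OracleConnection ocon = (OracleConnection) con;
    ocon.setImplicitCachingEnabled(true);
    ocon.setStatementCacheSize(50);    // cache size is a tuning choice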

Acknowledgements.

I’d particularly like to thank Lasse Jenssen who spent a couple of hours with me (when he could have been attending some interesting sessions) at the OUG Ireland conference a few days ago, working through various tests and strategies to pin down the problem and attempt to circumvent it. (Any criticism of the Java code above should, nevertheless, be aimed at me.)

 

Partitioning -- 15 : Online Modification of Partitioning Type (Strategy)

Hemant K Chitale - Mon, 2019-04-08 03:59
Oracle 18c introduces the ability to convert a Partitioned Table from one Type to another -- e.g. from Hash Partitioning to Range Partitioning.  This is effectively a change of the Partitioning strategy for a table without actually having to manually rebuild the table.

I start with a Hash Partitioned Table.

SQL> create table customers(customer_id number, customer_name varchar2(200), customer_city_code number)
2 partition by hash (customer_id) partitions 4;

Table created.

SQL> select partitioning_type from user_part_tables
2 where table_name = 'CUSTOMERS'
3 /

PARTITION
---------
HASH

SQL> select partition_name from user_tab_partitions
2 where table_name = 'CUSTOMERS'
3 /

PARTITION_NAME
--------------------------------------------------------------------------------
SYS_P221
SYS_P222
SYS_P223
SYS_P224

SQL>
SQL> insert into customers
2 select dbms_random.value(1,1000001), dbms_random.string('X',25), mod(rownum,5)
3 from dual
4 connect by level < 1000001
5 /

1000000 rows created.

SQL> commit;

Commit complete.

SQL> exec dbms_stats.gather_table_stats('','CUSTOMERS');

PL/SQL procedure successfully completed.

SQL> select partition_name, num_rows
2 from user_tab_partitions
3 where table_name = 'CUSTOMERS'
4 /

PARTITION_NAME NUM_ROWS
---------------- ----------
SYS_P221 250090
SYS_P222 249563
SYS_P223 250018
SYS_P224 250329

SQL>


I now want to convert this Hash Partitioned Table to a Range Partitioned Table online.

SQL> alter table customers
2 modify
3 partition by range (customer_id)
4 (partition P_100K values less than (100001),
5 partition P_200K values less than (200001),
6 partition P_300K values less than (300001),
7 partition P_400K values less than (400001),
8 partition P_500K values less than (500001),
9 partition P_600K values less than (600001),
10 partition P_700K values less than (700001),
11 partition P_800K values less than (800001),
12 partition P_900K values less than (900001),
13 partition P_1MIL values less than (1000001),
14 partition P_2MIL values less than (2000001),
15 partition P_MAXVALUE values less than (MAXVALUE))
16 online;

Table altered.

SQL>
SQL> select partitioning_type
2 from user_part_tables
3 where table_name = 'CUSTOMERS'
4 /

PARTITION
---------
RANGE

SQL> exec dbms_stats.gather_table_stats('','CUSTOMERS');

PL/SQL procedure successfully completed.

SQL> col high_value format a12
SQL> select partition_name, high_value, num_rows
2 from user_tab_partitions
3 where table_name = 'CUSTOMERS'
4 order by partition_position
5 /

PARTITION_NAME HIGH_VALUE NUM_ROWS
---------------- ------------ ----------
P_100K 100001 100116
P_200K 200001 99604
P_300K 300001 99941
P_400K 400001 100048
P_500K 500001 99841
P_600K 600001 99920
P_700K 700001 100081
P_800K 800001 100024
P_900K 900001 100123
P_1MIL 1000001 100302
P_2MIL 2000001 0
P_MAXVALUE MAXVALUE 0

12 rows selected.

SQL>


The Hash Partitioned Table is now converted to a Range Partitioned Table.  The number of Partitions has been changed.  And the operation was performed online, with the ONLINE keyword added to the ALTER TABLE ... statement.  The UPDATE INDEXES clause can also be used to update existing Indexes on the Table; a sketch of that clause follows.
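As a sketch only (not executed above, and the index name customers_ndx is hypothetical), the UPDATE INDEXES clause goes after the ONLINE keyword – here converting an existing index to a LOCAL partitioned index during the conversion:

alter table customers
  modify
  partition by range (customer_id)
  (partition P_100K values less than (100001),
   partition P_MAXVALUE values less than (MAXVALUE))
  online
  update indexes (customers_ndx local);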





Categories: DBA Blogs

The Power of Functional Programming

OTN TechBlog - Mon, 2019-04-08 02:36

Oracle has added support for serverless computing on the cloud, enabling developers to leverage programming languages that support functional programming, like Kotlin. Oracle Functions can be thought of as Function-as-a-Service (FaaS).

I will attempt to throw some light on Functional Programming in this blog.

What is Functional Programming?

Functional Programming paradigm can be equated to the mathematical equivalent of y = fn(x).

Mathematical definition:

A function is a process or a relation that associates each element x of a set X, the domain of the function, to a single element y of another set Y (possibly the same set), the codomain of the function.


How do functions benefit programmers?

Functions have certain properties that make them favorable, especially when you want your code to work seamlessly in a multi-threaded, concurrent environment. Some of their notable properties are:

  • Functions are deterministic (often loosely called idempotent): calling a function multiple times with the same input yields the same output.
  • Functions can be chained. For example,

    Given two functions f : X → Y and g : Y → Z such that the domain of g is the codomain of f, their composition is the function g ∘ f : X → Z defined by

    (g ∘ f)(x) = g(f(x)).

  • Function composition is associative: if one of (h ∘ g) ∘ f and h ∘ (g ∘ f) is defined, then the other is also defined, and they are equal. Thus, one writes h ∘ g ∘ f = (h ∘ g) ∘ f = h ∘ (g ∘ f).

These properties enforce immutability in the way the functions are written. For example, in Java streams, only variables that are final (or effectively final) can be utilized inside the anonymous functions used in streams. This makes functions easy to use with parallel streams.
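To make the chaining property concrete in Kotlin (a sketch: the standard library has no built-in composition operator, so we define one here):

// g ∘ f: apply f first, then g
infix fun <X, Y, Z> ((Y) -> Z).compose(f: (X) -> Y): (X) -> Z = { x -> this(f(x)) }

fun main() {
    val increment: (Int) -> Int = { it + 1 }
    val double: (Int) -> Int = { it * 2 }

    val doubleAfterIncrement = double compose increment   // double ∘ increment
    println(doubleAfterIncrement(5))                      // double(increment(5)) = 12
}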

Functional Programming Constructs

Traditionally, functions take parameters such as primitive types. However, functions in Functional Programming can consume other functions as well. These functions are called higher-order functions. As in Python, functions in Kotlin are first-class citizens - they can be assigned to variables and passed around as parameters. The type of a function is a function type, which is indicated with a parenthesized parameter type list and an arrow to the return type. Consider this function:

fun safeDivide(numerator: Int, denominator: Int) =
    if (denominator == 0) 0.0 else numerator.toDouble() / denominator

It takes two Int parameters and returns a Double, so its type is (Int, Int) -> Double. We can reference the function itself by prefixing its name with ::, and we can assign it to a variable (whose type would normally be inferred, but we show the type signature for demonstration):

val f: (Int, Int) -> Double = ::safeDivide

When you have a variable or parameter of function type (sometimes called a function reference), you can call it as if it were an ordinary function, and that will cause the referenced function to be called:

val quotient = f(3, 0)  // returns 0.0 thanks to the zero check

Kotlin

Kotlin is an open source, cross-platform, statically typed, general-purpose programming language with type inference. Kotlin is designed to be fully interoperable with Java, and the JVM version of its standard library depends on the Java Class Library, but type inference allows its syntax to be more concise. Kotlin mainly targets the JVM, but also compiles to JavaScript or native code (via LLVM). Kotlin is sponsored by JetBrains and Google through the Kotlin Foundation.

Let us look at a sample problem that I have solved in traditional OOP (object-oriented programming) and see how we can solve it in FP (functional programming).

Functional Programming in Action

Let us now see functional programming in action. I will be demonstrating functional programming using Kotlin.

Sample Problem

Suppose you need to design a system that facilitates navigation of the Mars terrain (simplified to a grid). The system is given the upper-right corner coordinates of the grid (the lower left being implied as 0,0). It is also given the position of the rover on the grid in the form (x, y, d), where x and y are positions on the x and y axes of the grid and d is the direction in which the rover is facing, one of (N - 'North', S - 'South', E - 'East', W - 'West'). The system is also given a set of instructions that the rover should use to navigate the grid: a character sequence with values (L - turn left by 90º, R - turn right by 90º, M - move to the next grid position without changing direction).

Sample Input 

5 5

1 2 N

LMLMLMLMM

Expected Output

1 3 N

Design

You have a Rover, a Plateau (the terrain) and a Main object that will invoke and instruct the Rover.

The Main initializes the Plateau.

Code - OOP

My implementation of the Rover class in OOP is available on GitHub.
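As a rough illustration - class and member names here are my own assumptions, not necessarily those in the repository - a mutable, OOP-style Rover might look like this:

// A mutable, OOP-style rover: state changes in place as instructions arrive.
class Rover(private var x: Int, private var y: Int, private var direction: Char) {

    fun execute(instructions: String) {
        for (c in instructions) {
            when (c) {
                'L' -> turnLeft()
                'R' -> turnRight()
                'M' -> move()
            }
        }
    }

    private fun turnLeft() {
        direction = when (direction) {
            'N' -> 'W'
            'W' -> 'S'
            'S' -> 'E'
            else -> 'N'
        }
    }

    private fun turnRight() {
        direction = when (direction) {
            'N' -> 'E'
            'E' -> 'S'
            'S' -> 'W'
            else -> 'N'
        }
    }

    private fun move() {
        when (direction) {
            'N' -> y++
            'S' -> y--
            'E' -> x++
            'W' -> x--
        }
    }

    override fun toString() = "$x $y $direction"
}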

Code - FP

Below is the functional version of the same code, also available on GitHub.
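As a hedged sketch of the functional approach (again with names of my own choosing): the position is an immutable value, and fold threads it through the instruction sequence instead of mutating state.

data class Position(val x: Int, val y: Int, val direction: Char)

// Turning is a lookup in an immutable map; moving copies the position
// with one coordinate changed. No state is ever mutated.
val left  = mapOf('N' to 'W', 'W' to 'S', 'S' to 'E', 'E' to 'N')
val right = mapOf('N' to 'E', 'E' to 'S', 'S' to 'W', 'W' to 'N')

fun step(p: Position, instruction: Char): Position = when (instruction) {
    'L' -> p.copy(direction = left.getValue(p.direction))
    'R' -> p.copy(direction = right.getValue(p.direction))
    'M' -> when (p.direction) {
        'N' -> p.copy(y = p.y + 1)
        'S' -> p.copy(y = p.y - 1)
        'E' -> p.copy(x = p.x + 1)
        else -> p.copy(x = p.x - 1)
    }
    else -> p
}

// fold threads the position through the instructions: the result of each
// step becomes the accumulator for the next.
fun navigate(start: Position, instructions: String): Position =
    instructions.fold(start, ::step)

fun main() {
    // The sample input from above: rover at (1, 2) facing North.
    println(navigate(Position(1, 2, 'N'), "LMLMLMLMM"))
    // Position(x=1, y=3, direction=N) - the expected "1 3 N"
}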

Explanation

You can see how concise this code looks compared to the OOP approach used to solve the same problem here.

I have used fold to reduce the sequence of instructions to a single final position, which is the expected destination.

Fold takes an accumulator and an element of the list (or map), combines them, and returns the result as the accumulator for the next element, and so on, until the last element has been processed.

Fold is similar to reduce, except that it takes an explicit initial value, whereas reduce uses the first element as the initial accumulator. Fold can therefore also be used when the accumulator is of a different type than the elements of the list or map, as the short comparison below shows.
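A quick illustration of the difference (my own example, not from the original post):

val xs = listOf(1, 2, 3)
xs.fold(10) { acc, x -> acc + x }  // 16: starts from the supplied initial value
xs.reduce { acc, x -> acc + x }    // 6: the first element seeds the accumulator
xs.fold("") { acc, x -> acc + x }  // "123": accumulator is a String, elements are Ints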

The crux of the problem is converting the sequence of instructions given as a string to a position on the grid.

So, the instruction sequence 'MML' tells the rover to move two spaces in whichever direction it is facing and then turn left.

  • Split the string (ins) into a character sequence.
  • Pass the initial position of the Rover to fold.
  • For each character instruction ('L', 'R' or 'M'), turn the rover left, turn it right, or move it, respectively.

More on Kotlin's functional constructs is available here.


Oracle VM Server: OVM CLI via ssh admin@localhost -p 10000 does not work

Dietrich Schroff - Sat, 2019-04-06 04:57

In my Oracle VM server installation the ovmcli refuses to work:
ssh -l admin localhost -p 10000 

Connection to localhost closed by remote host.

Connection to localhost closed.
There are some workarounds, like:
ssh -oKexAlgorithms=+diffie-hellman-group1-sha1 -oHostKeyAlgorithms=+ssh-dss -l admin localhost -p 10000
But the result is the same.

Here is the debug output from ssh -v:
[root@oraVMManager mnt]# ssh -l admin localhost -p 10000  -v
OpenSSH_7.4p1, OpenSSL 1.0.2k-fips  26 Jan 2017
debug1: Reading configuration data /etc/ssh/ssh_config
debug1: /etc/ssh/ssh_config line 58: Applying options for *
debug1: Connecting to localhost [::1] port 10000.
debug1: Connection established.
debug1: permanently_set_uid: 0/0
debug1: key_load_public: No such file or directory
debug1: identity file /root/.ssh/id_rsa type -1
debug1: key_load_public: No such file or directory
debug1: identity file /root/.ssh/id_rsa-cert type -1
debug1: key_load_public: No such file or directory
debug1: identity file /root/.ssh/id_dsa type -1
debug1: key_load_public: No such file or directory
debug1: identity file /root/.ssh/id_dsa-cert type -1
debug1: key_load_public: No such file or directory
debug1: identity file /root/.ssh/id_ecdsa type -1
debug1: key_load_public: No such file or directory
debug1: identity file /root/.ssh/id_ecdsa-cert type -1
debug1: key_load_public: No such file or directory
debug1: identity file /root/.ssh/id_ed25519 type -1
debug1: key_load_public: No such file or directory
debug1: identity file /root/.ssh/id_ed25519-cert type -1
debug1: Enabling compatibility mode for protocol 2.0
debug1: Local version string SSH-2.0-OpenSSH_7.4
debug1: Remote protocol version 2.0, remote software version ${POM.ARTIFACTID}-${POM.VERSION}
debug1: no match: ${POM.ARTIFACTID}-${POM.VERSION}
debug1: Authenticating to localhost:10000 as 'admin'
debug1: SSH2_MSG_KEXINIT sent
debug1: SSH2_MSG_KEXINIT received
debug1: kex: algorithm: diffie-hellman-group1-sha1
debug1: kex: host key algorithm: ssh-dss
debug1: kex: server->client cipher: aes128-cbc MAC: hmac-sha1 compression: none
debug1: kex: client->server cipher: aes128-cbc MAC: hmac-sha1 compression: none
debug1: kex: diffie-hellman-group1-sha1 need=20 dh_need=20
debug1: kex: diffie-hellman-group1-sha1 need=20 dh_need=20
debug1: sending SSH2_MSG_KEXDH_INIT
debug1: expecting SSH2_MSG_KEXDH_REPLY
debug1: Server host key: ssh-dss SHA256:E6FYsSD9om4ChxJT17vBGUyqHmz3kLLAIxxJZlYjJCM
debug1: Host '[localhost]:10000' is known and matches the DSA host key.
debug1: Found key in /root/.ssh/known_hosts:1
debug1: rekey after 4294967296 blocks
debug1: SSH2_MSG_NEWKEYS sent
debug1: expecting SSH2_MSG_NEWKEYS
debug1: SSH2_MSG_NEWKEYS received
debug1: rekey after 4294967296 blocks
debug1: SSH2_MSG_SERVICE_ACCEPT received
debug1: Authentications that can continue: password,publickey
debug1: Next authentication method: publickey
debug1: Offering RSA public key: /root/.ssh/admin
debug1: Server accepts key: pkalg ssh-rsa blen 279
debug1: Authentication succeeded (publickey).
Authenticated to localhost ([::1]:10000).
debug1: channel 0: new [client-session]
debug1: Entering interactive session.
debug1: pledge: network
debug1: Sending environment.
debug1: Sending env LANG = de_DE.UTF-8
debug1: channel 0: free: client-session, nchannels 1
Connection to localhost closed by remote host.
Connection to localhost closed.
Transferred: sent 3224, received 1432 bytes, in 0.0 seconds
Bytes per second: sent 2461309.8, received 1093236.9
debug1: Exit status -1

The solution was:
Upgrade the OVM Manager (take a look here)

And then:
ssh -l admin localhost -p 10000  -v
SHA256:pidDB23XNyVHE55Q7GJ+9uqJvBfoR3B1lm02gdYeus8
@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@
@    WARNING: REMOTE HOST IDENTIFICATION HAS CHANGED!     @
@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@
IT IS POSSIBLE THAT SOMEONE IS DOING SOMETHING NASTY!
Someone could be eavesdropping on you right now (man-in-the-middle attack)!
It is also possible that a host key has just been changed.
The fingerprint for the RSA key sent by the remote host is
SHA256:pidDB23XNyVHE55Q7GJ+9uqJvBfoR3B1lm02gdYeus8.
Please contact your system administrator.
Add correct host key in /root/.ssh/known_hosts to get rid of this message.
Offending DSA key in /root/.ssh/known_hosts:1
RSA host key for [localhost]:10000 has changed and you have requested strict checking.
Host key verification failed.
Ok - this was not expected, but the upgrade changed the host key.
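If you do not want to edit the known_hosts file by hand, ssh-keygen's -R option removes all keys for a given host (assuming the default known_hosts location; note the bracketed host:port form used for non-standard ports):

ssh-keygen -R "[localhost]:10000"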

But after removing that line from the known_hosts file:
[root@oraVMManager mnt]# ssh -l admin localhost -p 10000  
OVM> list server
Command: list server
Status: Success
Time: 2019-01-25 05:52:14,849 EST
Data:
  id:18:e2:a6:9d:5c:b6:48:3a:9b:d2:b0:0f:56:7e:ab:e9  name:oraclevm
OVM> ?
     add
     create
     delete
     edit
     embeddedcreate
     embeddeddelete
     embeddededit
     exit
     help
     list
     remove
     set
     show
     showallcustomcmds
     showclisession
     showcustomcmds
     showobjtypes
     showversion




PostgreSQL 12: generated columns

Yann Neuhaus - Sat, 2019-04-06 04:17

PostgreSQL 12 will finally bring a feature other database systems have already had for quite some time: generated columns. What exactly is that, and what does it look like in PostgreSQL? As usual, let's start with a simple test setup.

We begin with a simple table containing two columns:

postgres=# create table t1 ( a int, b text );
CREATE TABLE
postgres=# insert into t1 values (1,'aaa');
INSERT 0 1
postgres=# select * from t1;
 a |  b  
---+-----
 1 | aaa
(1 row)

postgres=# 

A generated column is not a “real” column because its value is computed:

postgres=# alter table t1 add column c int generated always as (a*2) stored;
ALTER TABLE
postgres=# select * from t1;
 a |  b  | c 
---+-----+---
 1 | aaa | 2
(1 row)

postgres=# \d t1
                              Table "public.t1"
 Column |  Type   | Collation | Nullable |              Default               
--------+---------+-----------+----------+------------------------------------
 a      | integer |           |          | 
 b      | text    |           |          | 
 c      | integer |           |          | generated always as (a * 2) stored

The keyword “stored” means that the column is stored on disk. A future version will probably also bring a “virtual” keyword, which instructs PostgreSQL not to store the data on disk but to compute it whenever it is read rather than when it is written.

What you can also see here is that you can refer to other columns of the same table in the computation of the generated column. But this is not a requirement:

postgres=# alter table t1 add column d int generated always as (3*2) stored;
ALTER TABLE
postgres=# \d t1
                              Table "public.t1"
 Column |  Type   | Collation | Nullable |              Default               
--------+---------+-----------+----------+------------------------------------
 a      | integer |           |          | 
 b      | text    |           |          | 
 c      | integer |           |          | generated always as (a * 2) stored
 d      | integer |           |          | generated always as (3 * 2) stored

Referencing columns of other tables is not possible, and it is not possible to reference another generated column:

postgres=# alter table t1 add column d int generated always as (c*2) stored;
ERROR:  cannot use generated column "c" in column generation expression
DETAIL:  A generated column cannot reference another generated column.

Directly updating such a column does, of course, not work either:

postgres=# update t1 set d=5;
psql: ERROR:  column "d" can only be updated to DEFAULT
DETAIL:  Column "d" is a generated column.
postgres=# update t1 set d=default;
UPDATE 1
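One thing the examples above do not show explicitly: when a referenced base column changes, PostgreSQL recomputes the stored value automatically. A quick check (my own test output, not from the original post; remember that c is generated as a * 2):

postgres=# update t1 set a = 10;
UPDATE 1
postgres=# select a, c from t1;
 a  | c  
----+----
 10 | 20
(1 row)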

What happens when we try to create a generated column that uses a volatile function?

postgres=# alter table t1 add column e int generated always as (random()) stored;
psql: ERROR:  generation expression is not immutable
postgres=# 

That does not work either: only immutable expressions can be used here. This, however, works:

postgres=# alter table t1 add column e text generated always as (md5(b)) stored;
ALTER TABLE
postgres=# \d t1
                               Table "public.t1"
 Column |  Type   | Collation | Nullable |               Default               
--------+---------+-----------+----------+-------------------------------------
 a      | integer |           |          | 
 b      | text    |           |          | 
 c      | integer |           |          | generated always as (a * 2) stored
 d      | integer |           |          | generated always as (3 * 2) stored
 e      | text    |           |          | generated always as (md5(b)) stored

Nice feature. The documentation is here.

Cet article PostgreSQL 12: generated columns est apparu en premier sur Blog dbi services.

Pages

Subscribe to Oracle FAQ aggregator