OTN TechBlog


Using YAML for Build Configuration in Oracle Developer Cloud

Thu, 2019-05-16 16:34

In the April release, we introduced support for YAML-based build configuration in Oracle Developer Cloud. This blog will introduce you to scripting YAML-based build configurations in Developer Cloud.

Before I explain how to create your first build configuration using YAML on Developer Cloud, let’s take a look at a few things.

Why do we need YAML configuration?

A YAML-based build configuration allows you to define and configure build jobs by creating YAML files that you can push to the Git repository where the application code that the build job will be building resides.

This allows you to version your build configuration and keep the older versions, should you ever need to refer back to them.  This is different from user interface-based build job configuration where once changes are saved there is no way to refer back to an older version.

Is YAML replacing the User Interface based build configuration in Developer Cloud?

No, we aren’t replacing the existing UI-based build configuration in Developer Cloud with YAML. In fact, YAML-based build configuration is an alternative to it. Both configuration methods will co-exist going forward.

Are YAML and User Interface-based build configurations interchangeable in Developer Cloud?

No, not at the moment. What this means for the user is that a build job configuration created as a YAML file will always exist as and can only be edited as a YAML file. A build job created or defined through the user interface will not be available as a YAML file for editing.

Now let’s move on to the fun part: scripting our first YAML-based build job configuration to build and push a Docker image to a Docker registry.

 

Set Up the Git Repository for a YAML-Based Build

To start, create a Git repository in your Developer Cloud project and then create a .ci-build folder in that repository. This is where the YAML build configuration file will reside. For this blog, I named the Git repository NodeJSDocker, but you can name it whatever you want.

In the Project Home page, under the Repositories tab, click the +Create button to create a new repository.

 

Enter the repository Name and a Description, leave the default values for everything else, and click the Create button.

 

 

In the NodeJSDocker Git repository root, use the +File button and create three new files: Main.js, package.json, and Dockerfile.  Take a look at my NodeJS Microservice for Docker blog for the code snippets that are required for these files.

Your Git repository should look like this.

 

Create a YAML file in the .ci-build folder in the Git repository. The .ci-build folder should always be in the root of the repository.

In the file name field, enter .ci-build/my_first_yaml_build.yml, where .ci-build is the folder and my_first_yaml_build.yml is the YAML file that defines the build job configuration. Then add the code snippet below and click the Commit button.

Notice that the structure of the YAML file is very similar to the tabs for the Build Job configuration. The root mapping in the build job configuration YAML is “job”, which consists of “name”, “vm-template”, “git”, “steps”, and “settings”. The following list describes each of these:

  • “name”: Identifies the name of the build job and must be unique within the project.
  • “vm-template”: Identifies the VM template that is used for building this job.
  • “git”: Defines the Oracle Developer Git repository url, branch, and repo-name.
  • “steps”: Defines the build steps. In YAML, we support all the same build steps as we support in a UI-based build job.
  • “settings”: Defines the build job settings, such as the discard-old policy shown in the snippet below.

 

In the code snippet below, we define the build configuration that builds the Docker image and pushes it to the DockerHub registry. To do this, we need to include the Docker Login, Docker Build, and Docker Push build steps in the steps mapping.

Note:

For the Docker Login step, you’ll need to include your password. However, storing your password in plain text in a readable file, such as in a YAML file, is definitely not a good idea. The solution is to use the named password functionality in Oracle Developer Cloud.

To define a named password for the Docker registry, click the Project Administration tab in the left navigation bar and then click the Build tile, as shown below.

 

In the Named Password section, click the +Create button.

 

Enter the Name and the Password for the Named Password. You’ll refer to it in the build job. Click the Create button and it will be stored.

You’ll be able to refer to this Named Password in the YAML build job configuration by using #{DOCKER_HUB}.

 

docker-build: Under source, put DOCKERFILE and, if the Dockerfile does not reside in the root of the Git repository, include the mapping that defines the path to it. Enter the image-name (required) and version-tag information.

docker-push: You do not need the registry-host entry if you plan to use DockerHub or Quay.io. Otherwise, provide the registry host. Enter the image-name (required) and version-tag information.

Similarly, for docker-login, you do not need the registry-host entry if you plan to use DockerHub or Quay.io.

job:
  name: MyFirstYAMLJob
  vm-template: Docker
  git:
    - url: "https://alex.admin@devinstance4wd8us2-wd4devcs8us2.uscom-central-1.oraclecloud.com/devinstance4wd8us2-wd4devcs8us2/s/devinstance4wd8us2-wd4devcs8us2_featuredemo_8401/scm/NodeJSDocker.git"
      branch: master
      repo-name: origin
  steps:
    - docker-login:
        username: "abhinavshroff" # required
        password: "#{DOCKER_HUB}" # required
    - docker-build:
        source: "DOCKERFILE"
        image:
          image-name: "abhinavshroff/nodejsmicroservice" # required
          version-tag: "latest"
    - docker-push:
        image:
          image-name: "abhinavshroff/nodejsmicroservice" # required
          version-tag: "latest"
  settings:
    - discard-old:
        days-to-keep-build: 5
        builds-to-keep: 10
        days-to-keep-artifacts: 5
        artifacts-to-keep: 10

Right after you commit the YAML file in the .ci-build folder of the repository, a job named MyFirstYAMLJob will be created in the Builds tab. Notice that the name of the job that is created matches the name of the job you defined in the my_first_yaml_build.yml file.

Click the MyFirstYAMLJob link and then, on the Builds page, click the Configure button. The Git tab will open, showing the my_first_yaml_build.yml file in the .ci-build folder of the NodeJSDocker.git repository. Click the Edit File button and edit the YAML file.

 

After you finish editing and commit the changes, return to the Builds tab and click the Build Job link. Then click the Build Now button.

 

When the build job executes, it builds the Docker image and then pushes it to DockerHub.

You’ll also be able to create and configure pipelines using YAML. To learn more about creating and configuring build jobs and pipelines using YAML, see the documentation.

To learn more about other new features in Oracle Developer Cloud, take a look at the What's New in Oracle Developer Cloud Service document and explore the links it provides to our product documentation. If you have any questions, you can reach us on the Developer Cloud Slack channel or in the online forum.

Happy Coding!

ACE-Organized Meet-Ups: May 17-June 13, 2019

Thu, 2019-05-16 05:00
The meet-ups below were organized by the folks in the photos, but those people won't necessarily present the content. And in many cases the events consist of multiple sessions. For additional detail on each event, please click the links provided.
 

Oracle ACE Christian Pfundtner
CEO, DB Masters GmbH
Austria


Host Organization: DB Masters
Friday, May 17, 2019
MA01 - Veranstaltungszentrum 
1220 Vienna, Stadlauerstraße 56 
 

Oracle ACE Laurent Leturgez
President/CTO, Premiseo
Lille, France


Host Organization: Paris Province Oracle Meetup
Monday, May 20, 2019
6:30pm - 8:30pm
Easyteam
39 Rue du Faubourg Roubaix
Lille, France
 

Oracle ACE Associate Mathias Magnusson
CEO, Evil Ape
Nacka, Sweden

 
Host Organization: Stockholm Oracle
Thursday, May 23, 2019
6:00pm - 8:00pm
(See link for location details)
 

Oracle ACE Ahmed Aboulnaga
Principal, Attain
Washington D.C.

 
Host Organization: Oracle Fusion Middleware & Oracle PaaS of NOVA
Tuesday, May 28, 2019
4:00pm - 6:00pm
Reston Regional Library
11925 Bowman Towne Dr.
Reston, VA
 

Oracle ACE Richard Martens
Co-Owner, SMART4Solutions B.V.
Tilburg, Netherlands

 
Host Organization: ORCLAPEX-NL
Wednesday, May 29, 2019
5:30pm - 9:30pm
Oracle Netherlands
Hertogswetering 163-167,
Utrecht, Netherlands
 

Oracle ACE Associate José Rodrigues
Business Manager for BPM & WebCenter, Link Consulting
Lisbon, Portugal

 
Host Organization: Oracle Developer Meetup Lisbon
Thursday, May 30, 2019
6:30pm - 8:30pm
Auditorio Link Consulting
Avenida Duque Ávila, 23
Lisboa
 

Oracle ACE Director Rita Nuñez
Consultora IT Sr, Tecnix Solutions
Argentina

 
Host Organization: Oracle Users Group of Argentina (AROUG)
Thursday, June 13, 2019
Aula Magna UTN.BA - Medrano 951
 
Additional Resources

Podcast: Do Bloody Anything: The Changing Role of the DBA

Wed, 2019-05-15 05:00

In August of 2018 we did a program entitled Developer Evolution: What’s Rocking Roles in IT. That program focused primarily on the forces that are reshaping the role of the software developer. In this program we shift the focus to the DBA -- the Database Administrator -- and the evolve-or-perish choices that face those in that role.

Bringing their insight to the discussion is an international panel of experts who represent years of DBA experience, and some of the forces that are transforming that role.

The Panelists

In alphabetical order

Maria Colgan
Master Product Manager, Oracle Database
San Francisco, California


 “Security, especially as people move more towards cloud-based models, is something DBAs should get a deeper knowledge in.”

 

Oracle ACE Director Julian Dontcheff
Managing Director/Master Technology Architect, Accenture
Helsinki, Finland

 

"Now that Autonomous Database is here, I see several database administrators being scared that somehow all their routine tasks will be replaced and they will have very little to do. As if doing the routine stuff is the biggest joy in their lives."

 

Oracle ACE Director Tim Hall
DBA, Developer, Author, and Trainer
Birmingham, United Kingdom


 “I never want to do something twice if I can help it. I want to find a way of automating it. If the database will do that for me, that’s awesome.”

 

Oracle ACE Director Lucas Jellema
CTO/Consulting IT Architect, AMIS
Rotterdam,Netherlands


 “By taking heed of what architects are coming up with, and how applications and application landscapes are organized and how the data plays a part in that, I think DBAs can prepare themselves and play a part in putting it all together in a meaningful way.”

 

Oracle ACE Director Brendan Tierney
Principal Consultant, Oralytics
Dublin, Ireland


"Look beyond what you're doing in your cubicles with your blinkers on. See what's going on across all IT departments. What are the business needs? How is data being used? Where can you contribute to that to deliver better business value?"

 

Gerald Venzl
Master Product Manager, Oracle Cloud, Database, and Server Technologies
San Francisco, California

 

"When you talk to anybody outside the administrative roles -- DBA or Unix Admin -- they will tell you that those people are essentially the folks that always say no. That's not very productive."

 

Additional Resources

ACEs at Riga DevDays - May 29-31

Tue, 2019-05-14 05:00

If you find yourself wandering the Baltic states late in May, why not make your way to Riga, Latvia and drop in on the Riga Dev Days? Held May 29-31 at the Cinema Kino Citadele in Riga, the 3-day DevDays event features 40 speakers, including these members of the Oracle ACE Program.

Oracle ACE Director Christian Antognini
Senior Principal Consultant and Partner, Trivadis AG
Monte Carasso, Switzerland

 

Oracle ACE Director Martin Bach
Principal Consultant, Accenture Enkitec Group
Germany

 

Oracle ACE Director Heli Helskyaho
CEO, Miracle Finland Oy
Finland

 

Oracle ACE Director Oren Nakdimon
Database Expert, Moovit
Acre, Israel

 

Oracle ACE Director Franck Pachot
Data Engineer, CERN
Lausanne, Switzerland

 

Oracle ACE Øyvind Isene
Consultant, Sysco AS
Oslo, Norway

 

Oracle ACE Piet De Visser
Independent Oracle Database Consultant
The Hague, Netherlands

 
Related Resources

Get Started with Autonomous Database and SQLcl in No Time Using Cloud Developer Image

Fri, 2019-05-10 22:46

In this blog post, I describe how to use a free trial for Oracle Cloud and the recently released Oracle Linux-based Cloud Developer Image to provision an Autonomous Transaction Processing Database and connect to it via SQLcl, all in a matter of minutes.

Think of the Cloud Developer Image as a Swiss army knife for Cloud developers. It has a ton of tools pre-installed, including:

Languages and Oracle Database Connectors
  • Java Platform Standard Edition (Java SE) 8
  • Python 3.6 and cx_Oracle 7
  • Node.js 10 and node-oracledb
  • Go 1.12
  • Oracle Instant Client 18.5
Oracle Cloud Infrastructure Client Tools
  • Oracle Cloud Infrastructure CLI
  • Python, Java, Go and Ruby Oracle Cloud Infrastructure SDKs
  • Terraform and Oracle Cloud Infrastructure Terraform Provider
  • Oracle Cloud Infrastructure Utilities
Other
  • Oracle Container Runtime for Docker
  • Extra Packages for Enterprise Linux (EPEL) via Yum
  • GUI Desktop with access via VNC Server

Here are the steps to provision a fresh Autonomous Transaction Processing Database and connect to it via SQLcl.

Steps
  1. Launch the Cloud Developer Image from the Console
  2. Log in to the instance running the Cloud Developer Image via ssh
  3. Set up OCI cli
  4. Create Autonomous Transaction Processing Database using CLI
  5. Download Wallet using CLI
  6. Install and configure SQLcl
  7. Connect to the database
1. Launch Cloud Developer Image

Log in to the Console. If you don't already have an ssh key pair, make sure you generate one first by following the documentation. Launch the image by choosing Marketplace under Solutions, Platforms and Edge via the hamburger menu and clicking on Oracle Cloud Developer Image.

Click Launch Instance. Review the terms and conditions, click Launch Instance again. Paste in your ssh public key and click Create. Once the image is running, make a note of the IP address.

2. Set up the OCI client tools

Connect to your newly launched image from your local computer via ssh:

ssh -i <path to your ssh keys> opc@<IP address>

Once logged in, run oci setup config and follow the directions, providing the necessary OCIDs as described in the documentation on Required Keys and OCIDs.

$ oci setup config

Remember to upload your API key by following the instructions in the same documentation. If you accepted all the defaults during the oci client setup, the public key to upload is the output of this:

$ cat /home/opc/.oci/oci_api_key_public.pem

3. Create Autonomous Transaction Processing Database using the OCI CLI

A few of the next commands require the compartment-id as input, so it's helpful to have a shorthand ready. Get its value and store it in an environment variable by calling the metadata service via oci-metadata:

$ export C=`oci-metadata -g compartmentid --value-only`

Next, create the Autonomous Database. Be sure to provide your own admin password.

$ oci db autonomous-database create --compartment-id $C --db-name myadb --cpu-core-count 1 --data-storage-size-in-tbs 1 --admin-password "<YOUR PASSWORD>"

You should see output similar to:

{ "data": { "compartment-id": "ocid1.tenancy.oc1..aaaaaalskdjflsdkjflsdjflsdkflsjdflksjjfqntfkzizeeikohha4oa", "connection-strings": null, "cpu-core-count": 1, "data-storage-size-in-tbs": 1, "db-name": "myadb", "db-version": null, "db-workload": "OLTP", "defined-tags": {}, "display-name": "autonomousdatabase20190511024732", "freeform-tags": {}, "id": "ocid1.autonomousdatabase.oc1.iad.abuwcljrgx2kosiudoisdufoidsufoidsufodsfkdkdd3zprxjzsouzq", "license-model": "BRING_YOUR_OWN_LICENSE", "lifecycle-details": null, "lifecycle-state": "PROVISIONING", "service-console-url": null, "time-created": "2019-05-11T02:47:32.745000+00:00", "used-data-storage-size-in-tbs": null }, "etag": "a133c7fa" }

Export the Database ID in an environment variable as that will come in handy later.

export DB_ID=`oci db autonomous-database list --compartment-id $C | jq -r '.data[] | select( ."db-name" == "myadb" ).id'`

Wait for the database to be in the AVAILABLE state. You can check the database state with the following command; initially, it will return PROVISIONING:

$ oci db autonomous-database get --autonomous-database-id $DB_ID | jq -r '.data["lifecycle-state"]'
AVAILABLE

For me, it took about 6 minutes for the database to become available after executing the create command.
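
If you'd rather not poll by hand, a simple shell loop can do the waiting. This is just a sketch that reuses the get command above; adjust the sleep interval to taste:

$ while [ "$(oci db autonomous-database get --autonomous-database-id $DB_ID | jq -r '.data["lifecycle-state"]')" != "AVAILABLE" ]; do sleep 30; done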

Download Wallet using CLI

$ oci db autonomous-database generate-wallet --autonomous-database-id $DB_ID --password <YOUR PASSWORD> --file wallet.zip

Set TNS_ADMIN and extract wallet.zip

$ export TNS_ADMIN="`cat /etc/ld.so.conf.d/oracle-instantclient.conf`/network/admin"
$ sudo -E unzip ~/wallet.zip -d $TNS_ADMIN

Install and configure SQLcl

Install SQLcl by temporarily enabling the ol7_ociyum_config repo. Then, run the sqlcl.sh that was installed in /etc/profile.d to add the sql command to your PATH.

$ sudo yum install -y --enablerepo=ol7_ociyum_config sqlcl
$ source /etc/profile.d/sqlcl.sh

Start SQLcl in /nolog mode and point it to the wallet.zip you downloaded earlier using the set cloudconfig command.

$ sql /nolog

SQLcl: Release 19.1 Production on Fri May 10 00:24:29 2019

Copyright (c) 1982, 2019, Oracle. All rights reserved.

SQL> set cloudconfig /home/opc/wallet.zip
Operation is successfully completed.
Operation is successfully completed.
Using temp directory:/tmp/oracle_cloud_config2842421108875448254

Connect to the database

Connect to your Autonomous Database as the admin user. For the service name, use one of the entries in $TNS_ADMIN/tnsnames.ora. Each ADB is created with a high, medium, and low service.

SQL> connect admin/<YOUR PASSWORD>@myadb_high
Connected.
SQL> select sysdate from dual;

SYSDATE
---------
11-MAY-19

SQL>

Conclusion

The Oracle Linux-based Cloud Developer Image comes with a wealth of developer tools pre-installed, reducing the time it takes to get started with Oracle Cloud and Autonomous Database. In this blog post, I showed how you can provision an Autonomous Database and get connected to it in a matter of minutes. The fact that the Cloud Developer Image already has the important bits pre-installed, including the OCI client tools and Oracle Instant Client, makes completing this task a breeze.

Latest Blog Posts from Oracle ACEs: April 28 - May 4, 2019

Thu, 2019-05-09 05:00

The chances of having a movie theater to yourself these days are slim. But while the rest of the world is focused on learning the fates of various Marvel characters in the latest Avengers epic, the members of the Oracle ACE program listed below demonstrated super will power last week by devoting their screen time to hammering out these blog posts.  The least you can do to reward that kind of effort is to take a look, right?

 

Oracle ACE Directors

Oracle ACE Director Opal Alapat
Vision Team Practice Lead, interRel Consulting
Arlington, TX

 
Oracle ACEs

Oracle ACE Ahmed Aboulnaga
Principal, Attain
Washington D.C.

 

Oracle ACE Anju Garg
Corporate Trainer, Author, Speaker, Blogger
New Delhi, India

 

Oracle ACE Bert Scalzo
Technical Product Manager: Databases
Flower Mound, Texas

 

Oracle ACE Eduardo Legatti
Administrador de Banco de Dados - DBA, SYDLE
Belo Horizonte, Brazil

 

Oracle ACE Fabio Prado
Instrutor, Oramaster Treinamentos em Bancos de Dados
Sao Paulo, Brazil

 

Oracle ACE Jhonata Lamim
Senior Oracle Consultant, Exímio IT Solutions
Brusque, Brazil

 

Oracle ACE Leonardo Gonzalez Cruz
SOA Architect, Services & Processes Solutions
Mexico

 

Oracle ACE Marcelo Ochoa
System Lab Manager, Facultad de Ciencias Exactas - UNICEN
Buenos Aires, Argentina

 

Oracle ACE Peter Scott
Principal/Owner, Sandwich Analytics
Marcillé-la-Ville, France

 

Oracle ACE Ricardo Giampaoli
EPM Architect Consultant, The Hackett Group
Malahide, Ireland

 

Oracle ACE Rodrigo De Souza
Solutions Architect, Innive Inc.
Tampa, Florida

 

Oracle ACE Wataru Morohashi
Solution Architect, Hewlett-Packard Japan, Ltd.
Tokyo, Japan

 
Oracle ACE Associates

Oracle ACE Associate Abigail Giles-Haigh
Chief Data Science Officer, Vertice
United Kingdom

 

Oracle ACE Associate Diana Robete
Team Lead/Senior Database Administrator, First4 Database Partners Inc
Calgary, Canada

 

Oracle ACE Associate Emad Al-Mousa
Senior IT Consultant, Saudi Aramco
Saudi Arabia

 

Oracle ACE Associate Emiliano Fusaglia
Principal Oracle RAC DBA/Data Architect, Trivadis
Lausanne, Switzerland

 

Oracle ACE Associate Eugene Fedorenko
Senior Architect, Flexagon
De Pere, Wisconsin

 

Oracle ACE Associate Flora Barriele
Oracle Database Administrator, Etat de Vaud
Lausanne, Switzerland

 

Oracle ACE Associate Heema Satapathy
Senior Principal Consultant, BIAS Corporation
United States

 

Oracle ACE Associate Lykle Thijssen
Principal Architect, eProseed
Utrecht, Netherlands

 

Oracle ACE Associate Omar Shubeilat
Cloud Solution Architect EPM, PrimeQ (ANZ)
Sydney, Australia

 

Oracle ACE Associate Roy Salazar
Senior Oracle Database Consultant, Pythian
Costa Rica

 

Oracle ACE Associate Mark Daynes
Managing Director, Beyond Systems Ltd
Manchester, United Kingdom

 
Additional Resources

Spotlight on Oracle ACE Director Ruben Rodriguez

Wed, 2019-05-08 05:00

Oracle ACE Director Ruben Rodriguez is a Cloud and Mobile Solution Specialist with avanttic Consultoría Tecnológica in Madrid, Spain. He graduated from the Universidad Alfonso X El Sabio in Madrid in 2011 with a degree in Computer Science, then made his way through a variety of IT jobs in Spain and the UK before landing at avanttic in 2015. Ruben first entered the Oracle ACE program in October 2017 and was confirmed as an Oracle ACE Director in November 2018. Active in the community, Ruben is a blogger and frequent conference speaker. In December 2019 Packt Publishing will publish Professional Oracle Mobile, written by Ruben and co-author Soham Dasgupta.

Watch the video and get the story from Ruben himself.

Additional Resources

Latest Blog Posts from Oracle ACEs: April 21-27, 2019

Tue, 2019-05-07 05:00

Blogs in bloom...

Winter is mostly a memory, spring is in the air, and people naturally want to... sit inside and crank out a bunch of blog posts! These members of the Oracle ACE Program resisted the temptation to enjoy some fresh air and sunshine so they could share some of their expertise with you. Take it in.

ACE Directors

Oracle ACE Director Franck Pachot
Data Engineer, CERN
Lausanne, Switzerland

 

Oracle ACE Director Julian Dontcheff
Managing Director/Master Technology Architect, Accenture
Helsinki, Finland

 

Oracle ACE Director Kamran Agayev A.
Oracle DBA Expert, Azercell
Azerbaijan

 

Oracle ACE Director Richard Foote
Director/Principal Consultant, Richard Foote Consulting Pty Ltd
Canberra, Australia

 

Oracle ACE Director Timo Hahn
Principal Software Architect, virtual 7 GmbH
Germany

 
Oracle ACEs

Oracle ACE Bert Scalzo
Technical Product Manager: Databases
Flower Mound, Texas

 

Oracle ACE Dirk Nachbar
Senior Consultant, Trivadis AG
Bern, Switzerland

 

Oracle ACE Eduardo Legatti
Administrador de Banco de Dados - DBA, SYDLE
Belo Horizonte, Brazil

 

Oracle ACE Emrah Mete
Solution Architect/Data Engineer, Turkcell Technology
Istanbul, Turkey

 

Oracle ACE Fabio Prado
Instrutor, Oramaster Treinamentos em Bancos de Dados
Sao Paulo, Brazil

 

Oracle ACE Kyle Goodfriend
Vice President, Planning & Analytics, Accelytics Inc.
Columbus, Ohio

 

Oracle ACE Martien van den Akker
Contractor: Fusion MiddleWare Implementation Specialist, Immigratie- en Naturalisatiedienst (IND)
The Hague, Netherlands

 

Oracle ACE Scott Wesley
Systems Consultant/Trainer, SAGE Computing Services
Perth, Australia

 

Oracle ACE Sean Stuber
Database Analyst, American Electric Power
Columbus, Ohio

 
ACE Associates

Oracle ACE Associate Adrian Png
Senior Consultant/Database Administrator, Insum
Canada

 

Oracle ACE Associate Alfredo Abate
Senior Oracle Systems Architect, Brake Parts Inc LLC
McHenry, Illinois

 

Oracle ACE Associate Dayalan Punniyamoorthy
Oracle EPM Consultant, Vertical Edge Consulting Group
Bengaluru, India

 

Oracle ACE Associate Diana Robete
Team Lead/Senior Database Administrator, First4 Database Partners Inc
Calgary, Canada

 

Oracle ACE Associate Emad Al-Mousa
Senior IT Consultant, Saudi Aramco
Saudi Arabia

 

Oracle ACE Associate Lisandro Fernigrini
Senior Software Developer/DBA, Kapsch TrafficCom
Argentina

 

Oracle ACE Associate Oliver Pyka
Senior Database Consultant
Germany
 

 

Oracle ACE Simo Vilmunen
Technical Architect, Uponor
Toronto, Canada

 
Additional Resources

Articles by Oracle ACEs - April 2019

Thu, 2019-05-02 05:00

Who you gonna ask?

While the phrase "wildly famous" may not apply to the Oracle ACE program members listed here, each has their own following, and each has earned a reputation for sharing experience and expertise. And let's face it, if you have a question about Oracle APEX, or about Autonomous Transaction Processing, are you going to ask one of the Kardashians? I don't think so.

Better you should ask one of these people, or read one of their freshly-written articles.

Oracle ACE Director Alex Nuijten
Director/Senior Oracle Developer, allAPEX
Oosterhout, Netherlands

 

Oracle ACE Director Alex Zaballa
Infrastructure Senior Principal, Accenture Brasil
São Paulo Area, Brazil

 

Oracle ACE Director Paul Guerin
Database Service Delivery Leader, Hewlett-Packard
Philippines

 

Oracle ACE Umair Mansoob
Senior Database Architect, Sirius Computer Solutions
Skokie, Illinois

 

Oracle ACE Borys Neselovskyi
Solution Architect, OPITZ Consulting
Dortmund, Germany

 

Oracle ACE Emad Al-Mousa
Senior IT Consultant, Saudi Aramco
Dhahran, Saudi Arabia

 

Oracle ACE Mathias Magnusson
CEO, Evil Ape
Stockholm, Sweden

 
Related Resources

Oracle ACE Sessions at the Great Lakes Oracle Conference (GLOC)

Tue, 2019-04-30 05:00

On May 15-16, 2019 the Northeast Ohio Oracle Users Group will present the Great Lakes Oracle Conference in the historic Cleveland Public Hall, just about a ten minute walk from the Rock and Roll Hall of Fame and Museum, seen in the photo above.

The following members of the Oracle ACE Program will present sessions at GLOC. So if you're in the neighborhood, come on down.

For more information: Great Lakes Oracle Conference

 

Oracle ACE Director Gary Crisci
Principal Architect, General Electric
Norwalk, Connecticut

 

Oracle ACE Director Janice Griffin
Senior Sales Engineer, Quest Software
Longmont, Colorado

 

Oracle ACE Director Cary Millsap
Vice President, User Experience Services and Solutions, Cintra Software and Services
Dallas, Texas

 

Oracle ACE Director Scott Spendolini
Vice President, Viscosity North America
Austin, Texas

 

Mike Gangler
Senior Database Specialist / Database Architect, Secure-24
Southfield, Michigan

 

Oracle ACE Michael Messina
Senior Managing Consultant, Rolta-AdvizeX
Owensburg, Indiana

 

Oracle ACE Anuj Mohan
Technical Account Manager, Data Intensity, LLC
Covington, Kentucky

 

Oracle ACE Anton Nielsen
Vice President, Insum Solutions
Boston, Massachusetts

 

Oracle ACE Michel Schildmeijer
Lead Software Architect for Justis, SSC-I DJI
Gouda, Netherlands

 

 

Related Content

Two New “Dive Into Containers and Cloud Native” Podcasts

Thu, 2019-04-25 16:48

Oracle Cloud Native Services cover container orchestration and management, build pipelines, infrastructure as code, streaming and real-time telemetry. Join Kellsey Ruppel and me for two new podcasts about these services.

In the first podcast, you can learn more about three services for containers: Container Engine for Kubernetes, Cloud Infrastructure Registry, and Container Pipelines.

In the second podcast, you can learn more about Resource Manager for infrastructure as code, Streaming for event-based architectures, Monitoring for real-time telemetry, and Notifications for real-time alerts based on infrastructure changes.

You can find these and other podcasts at the Oracle Cloud Café. Please take a few minutes to listen in and share any feedback you may have.

Presentation Persuasion: Calls for Proposals for Upcoming Events

Thu, 2019-04-25 05:00

Sure, you've got solid technical chops, and you share your knowledge through your blog, articles, and videos. But if you want to walk it like you talk it, you have to get yourself in front of a live audience and keep them awake for about an hour. If you do it right, who knows? You might just spend the time after your session signing autographs and posing for selfies with your new fans. The first step in accomplishing all that is to respond to calls for proposals for conferences, meet-ups, and other live events like these:

  • AUSOUG Webinar Series 2019
    Ongoing series of webinars hosted by the Australian Oracle User Group. No CFP deadline posted.
     
  • NCOAUG Training Day 2019
    CFP Deadline: May 17, 2019
    North Central Oracle Applications User Group
    Location: Oakbrook Terrace, Ill
    Event: August 1, 2019
     
  • MakeIT Conference
    CFP Deadline: May 17, 2019

    Organized by the Slovenian Oracle User Group (SIOUG).
    Event: October 14-15, 2019
     
  • HrOUG 2019
    CFP Deadline: May 27, 2019
    Organized by the Croatian Oracle Users Group
    Event: October 15-18, 2019
     
  • DOAG 2019 Conference and Exhibition
    CFP Deadline: June 3, 2019
    Organized by the Deutsche Oracle Anwendergruppe (German Oracle Users Group)
    Location: Nürnberg, Germany
    Event: November 19-20, 2019

Good luck!

Related Content

Deploying A Micronaut Microservice To The Cloud

Tue, 2019-04-23 10:17

So you've finally done it. You created a shiny new microservice. You've written tests that pass, run it locally, and everything works great. Now it's time to deploy, and you're ready to jump to the cloud. That may seem intimidating, but honestly there's no need to worry. Deploying your Micronaut application to the Oracle Cloud is really quite easy and there are several options to choose from. In this post I'll show you a few of those options, and by the time you're done reading it you'll be ready to get your app up and running.

If you haven't yet created an application, feel free to check out my last post and use that code to create a simple app that uses GORM to interact with an Oracle ATP instance. For this post I'll assume you followed that post, and any assets that I refer to will reflect that assumption. Once you've created your Micronaut application you'll need to create a runnable JAR file. With Micronaut, creating a runnable JAR is as easy as using ./gradlew assemble or ./mvnw package (depending on which build automation tool your project uses). Creating the artifact will take a bit longer than you're probably used to if you haven't used Micronaut before. That's because Micronaut precompiles all necessary metadata for Dependency Injection so that it can minimize/reduce runtime reflection to obtain that metadata. Once your task completes you will have a runnable JAR file in the build/libs directory of your project. You can launch your application locally by running java -jar /path/to/your.jar. So to launch the JAR created from the previous blog post, I set some environment variables and run:
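
(A minimal sketch of such a launch with placeholder values; the variable names are illustrative, so use whatever names your application's configuration expects.)

$ export TNS_ADMIN=/path/to/wallet
$ export DATASOURCE_USERNAME=<YOUR SCHEMA USER>
$ export DATASOURCE_PASSWORD=<YOUR PASSWORD>
$ java -jar build/libs/complete-0.1.jar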

Which results in the application running locally:

So far, pretty easy. But we want to do more than launch a JAR file locally. We want to run it in the cloud, so let's see what that takes. The first method I want to look at is more of a "traditional" approach: launching a simple compute instance and deploying the JAR file.

Creating A Virtual Network

If this is your first time creating a compute instance you'll need to set up virtual networking.  If you have a network ready to go, skip down to "Creating An Instance" below. 

Your instance needs to be associated with a virtual network in the Oracle Cloud. Virtual cloud networks (hereafter referred to as VCNs) can be pretty complicated, but as a developer you need to know enough about them to make sure that your app is secure and accessible from the internet. To get started creating a VCN, either click "Create a virtual cloud network" from the dashboard:

Or select "Networking" -> "Virtual Cloud Networks" from the sidebar menu and then click "Create Virtual Cloud Network" on the VCN overview page:

In the "Create Virtual Cloud Network" dialog, populate a name and choose the option "Create Virtual Cloud Network Plus Related Resources" and click "Create Virtual Cloud Network" at the bottom of the dialog:

The "related resources" here refers to the necessary Internet Gateways, Route Table, Subnets and related Security Lists for the network. The security list by default will allow SSH, but not much else, so we'll edit that once the VCN is created.  When everything is complete, you'll receive confirmation:

Close the dialog and back on the VCN overview page, click on the name of the new VCN to view details:

On the details page for the VCN, choose a subnet and click on the Security List to view it:

On the Security List details page, click on "Edit All Rules":

And add a new rule that will expose port 8080 (the port that our Micronaut application will run on) to the internet:

Make sure to save the rules and close out. This VCN is now ready to be associated with an instance running our Micronaut application.

Creating An Instance

To get started with an Oracle Cloud compute instance log in to the cloud dashboard and either select "Create a VM instance":

Or choose "Compute" -> "Instances" from the sidebar and click "Create Instance" on the Instance overview page:

In the "Create Instance" dialog you'll need to populate a few values and make some selections. It seems like a long form, but there aren't many changes necessary from the default values for our simple use case. The first part of the form requires us to name the instance, select an Availability Domain, OS and instance type:

 

The next section asks for the instance shape and boot volume configuration, both of which I leave as the default. At this point I select a public key that I can use later on to SSH in to the machine:

Finally, select a VCN that is internet accessible with port 8080 open:

Click "Create" and you'll be taken to the instance details page where you'll notice the instance in a "Provisioning" state.  Once the instance has been provisioned, take note of the public IP address:

Deploying Your Application To The New Instance

Using the instance public IP address, SSH in via the private key associated with the public key used to create the instance:
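
(A sketch; substitute your own key path and the public IP address noted earlier.)

$ ssh -i /path/to/private_key opc@<public IP>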

We're almost ready to deploy our application, we just need a few things.  First, we need a JDK.  I like to use SDKMAN for that, so I first install SDKMAN, then use it to install the JDK with sdk install java 8.0.212-zulu and confirm the installation:

We'll also need to open port 8080 on the instance firewall so that our instance will allow the traffic:
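
On Oracle Linux this amounts to a couple of firewalld commands (a sketch, assuming the default firewalld setup on the instance):

$ sudo firewall-cmd --permanent --add-port=8080/tcp
$ sudo firewall-cmd --reload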

We can now upload our application to the instance with SCP:
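
(A sketch of the upload; the JAR name follows the earlier post, and the scripts are described next.)

$ scp -i /path/to/private_key -r build/libs/complete-0.1.jar wallet/ env.sh run.sh opc@<public IP>:~/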

I've copied the JAR file, my Oracle ATP wallet and 2 simple scripts to help me out. The first script sets some environment variables:
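
(A sketch of what env.sh might contain; the values are placeholders, and the wallet path matches the move we do below.)

#!/bin/bash
# env.sh - export the connection settings the app reads at startup
export TNS_ADMIN=/wallet
export DATASOURCE_USERNAME=<YOUR SCHEMA USER>
export DATASOURCE_PASSWORD=<YOUR PASSWORD>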

The second script is what we'll use to launch the application:
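
(A sketch; the JAR name is illustrative.)

#!/bin/bash
# run.sh - start the Micronaut app; it listens on port 8080 by default
java -jar complete-0.1.jar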

Next, move the wallet directory from the user home directory to the root with sudo mv wallet/ /wallet and source the environment variables with . ./env.sh. Now run the application with ./run.sh:

And hit the public IP in your browser to confirm the app is running and returning data as expected!

You've just deployed your Micronaut application to the Oracle Cloud! Of course, a manual VM install is just one method for deployment and isn't very maintainable long term for many applications, so in future posts we'll look at some other options for deploying that fit in the modern application development cycle.


Latest Blog Posts from Oracle ACEs: April 14-20, 2019

Tue, 2019-04-23 10:06

In writing the blog posts listed below, the endgame for the Oracle ACE program members is simple: sharing their experience and expertise with the community. That doesn't make them superheroes, but you have to marvel at their willingness to devote time and energy to helping others.

Here's what they used their powers to produce for the week of April 14-20, 2019.

 

Oracle ACE Director Francisco Munoz Alvarez
CEO, CloudDB
Sydney, Australia

 

Oracle ACE Director Ludovico Caldara
Computing Engineer, CERN
Nyon, Switzerland

 

Oracle ACE Director Martin D'Souza
Director of Innovation, Insum Solutions
Alberta, Canada

 

Oracle ACE Director Opal Alapat
Vision Team Practice Lead, interRel Consulting
Arlington, Texas

 

Oracle ACE Director Syed Jaffar Hussain
CTO, eProseed
Riyadh, Saudi Arabia

 

Oracle ACE Alfredo Krieg
Senior Principal Consultant, Viscosity North America
Dallas, Texas

 

Oracle ACE Marco Mischke
Team Lead, Database Projects, Robotron Datenbank-Software GmbH
Dresden, Germany

 

Oracle ACE Noriyoshi Shinoda
Database Consultant, Hewlett Packard Enterprise Japan
Tokyo, Japan

 

 

Oracle ACE Patrick Jolliffe
Manager, Li & Fung Limited
Hong Kong

 

Oracle ACE Phil Wilkins
Senior Consultant, Capgemini
Reading, United Kingdom

 

Oracle ACE Zaheer Syed
Oracle Application Specialist, Tabadul
Riyadh, Saudi Arabia

 

Oracle ACE Associate Batmunkh Moltov
Chief Technology Officer, Global Data Engineering Co.
Ulaanbaatar, Mongolia

 

Oracle ACE Associate Flora Barriele
Oracle Database Administrator, Etat de Vaud
Lausanne, Switzerland

 

 

Related Resources

Automating DevSecOps for Java Apps with Oracle Developer Cloud

Mon, 2019-04-22 11:32

Looking to improve your application's security? Automating vulnerability reporting helps you prevent attacks that leverage known security problems in code that you use. In this blog we'll show you how to achieve this with Oracle's Developer Cloud.

Most developers rely on third party libraries when developing applications. This helps them reduce the overall development timelines by providing working code for specific needs. But are you sure that the libraries you are using are secure? Are you keeping up to date with the latest reports about security vulnerabilities that were found in those libraries? What about apps that you developed a while back and are still running but might be using older versions of libraries that don't contain the latest security fixes?

DevSecOps aims to integrate security aspects into the DevOps cycle, ideally automating security checks as part of the dev-to-release lifecycle. The latest release of Oracle Developer Cloud Service - Oracle's cloud-based DevOps and Agile team platform - includes a new capability to integrate security checks into your DevOps pipelines.

Relying on the public National Vulnerability Database, the new dependency vulnerability analyzer scans the libraries used in your application against the database of known issues and flags any security risks your app might have based on this data. The current version of DevCS supports this for any Maven-based Java project, leveraging the pom files as the source of truth for the list of libraries used in your code.

Vulnerability Analyzer Step

When running the check, you can specify your level of tolerance for issues - for example, defining that you are OK with low-risk issues, but not with medium- or high-risk vulnerabilities. When a check finds issues, you can fail the build pipeline, send notifications, and add an issue to the issue tracking system provided for free with Developer Cloud.

Check out this demo video to see the process in action.

Having these types of vulnerability scans applied to your platform can save you from situations where hackers leverage publicly known issues and out-of-date libraries to break into your systems. These checks can be part of your regular build cycle, and can also be scheduled to run on a regular basis against systems that have already been deployed - to verify that they stay up to date with the latest security checks.

 

Economics and Innovations of Serverless

Fri, 2019-04-19 13:08

The term serverless has been one of the biggest mindset changes since the term cloud, and learning how to “think serverless” should be part of every developer's cloud-native journey. This is why one of Oracle's 10 Predictions for Developers in 2019 is “The Economics of Serverless Drives Innovation on Multiple Fronts”. Let's unpack what we mean by economics and innovation while covering a few common misconceptions.

The Economics

Cost is only part of the story

I often hear “cost reduction” as a key driver of serverless architectures. Everyone wants to save money and be a hero for their organization. Why pay for a full time server when you can pay per function millisecond? The ultimate panacea of utility computing — pay for exactly what you need and no more. This is only part of the story.

Economics is a broad term for the production, distribution, and consumption of things. Serverless is about producing software. And software is about using computers as leverage to produce non-linear value. Facebook (really MySpace) leveraged software to change the way the world connected. Uber leveraged software to transform the transportation industry. Netflix leveraged software to change the way the world consumed movies. Software is transforming every major company in every major industry, and for most, is now at the heart of how they deliver value to end users. So why the fuss about serverless?

Serverless is About Driving Non-Linear Value

Because serverless is ultimately about driving non-linear business value which can fundamentally change the economics of your business. I’ve talked about this many times, but Ben nails it — “serverless is a ladder. You’re climbing to some nirvana where you get to deliver pure business value with no overhead.”

Pundits point out that “focus on business value” has been said many times over the years, and they’re right. But every software architecture cycle learns from past cycles and incorporates new ways to achieve this goal of greater focus, which is why serverless is such an important cycle to watch. It effectively incorporates the promise (and best) of cloud with the promise (and learnings) of SOA.

Ultimately the winning businesses reduce overhead while increasing value to their customers by empowering their developers. That’s why the economics are too compelling to ignore. Not because your CRON job server goes from $30 to $0.30/month (although a nice use case), but because creating a culture of innovation and focus on driving business value is a formula for success.

So we can’t ignore the economics. Let’s move to the innovations.

The Innovations

The tech industry is in constant motion. Apps, infrastructure, and the delivery process drive each other forward together in a ping-pong fashion. Here are a few of the key areas to watch that are contributing to forward movement in the innovation cycle, as illustrated in the “Digital Trialectic”:

Depth of Services

The web is fundamentally changing how we deliver services. We're moving towards an “everything-as-a-service” world where important bits of functionality can be consumed by simply calling an API. Programming is changing, and this is driven largely by the depth of available services that solve problems which once consumed developers' working hours.

Twilio now removes the need for SMS, voice, and now email (acquired Sendgrid) code and infrastructure. Google’s Cloud Vision API removes the need for complex object and facial detection code and infrastructure. AWS’s Ground Station removes the need for satellite communications code and infrastructure (finally?), and Oracle’s Autonomous Database replaces your existing Oracle Database code and infrastructure.

Pizzas, weather, maps, automobile data, cats – you have an endless list of things accessible across simple API calls.

Open Source

As always, serverless innovation is happening in the world of open source as well, and many of these projects end up as part of the list of services above. The Fn Project is fully open source code my team is working on which will allow anyone to run their own serverless infrastructure on any cloud, starting with Functions-as-a-service and moving towards things like workflow as well. Come say hi in our Slack.

But you can get to serverless faster with the managed Fn service, Oracle Functions. And there are other great industry efforts as well including Knative by Google, OpenFaas by Alex Ellis, and OpenWhisk by IBM.

All of these projects focus mostly on the compute aspect of a serverless architecture. There are many projects that aim to make other areas easier such as storage, networking, security, etc, and all will eventually have their own managed service counterparts to complete the picture. The options are a bit bewildering, which is where standards can help.

Standards

With a paradox of choice emerging in serverless, standards aim to ease the pain in providing common interfaces across projects, vendors, and services. The most active forum driving these standards is the Serverless Working Group, a subgroup of the Cloud Native Compute Foundation. Like cats and dogs living together, representatives from almost every major vendor and many notable startups and end users have been discussing how to “harmonize” the quickly-moving serverless space. CloudEvents has been the first major output from the group, and it’s a great one to watch. Join the group during the weekly meetings, or face-to-face at any of the upcoming KubeCon’s.

Expect workflow, function signatures, and other important aspects of serverless to come next. My hope is that the group can move quickly enough to keep up with the quickly-moving space and have a material impact on the future of serverless architectures, further increasing the focus on business value for developers at companies of all sizes.

A Final Word

We’re all guilty of skipping to the end in long posts. So here’s the net net: serverless is the next cycle of software architecture, its roots and learnings coming from best-of SOA and cloud. Its aim is to change the way in which software is produced by allowing developers to focus on business value, which in turn drives non-linear business value. The industry is moving quickly with innovation happening through the proliferation of services, open source, and ultimately standards to help harmonize this all together.

Like anything, the best way to get started is to just start. Pick your favorite cloud, and start using functions. You can either install Fn manually or sign up for early access to Oracle Functions.

If you don’t have an Oracle Cloud account, take a free trial today.

Creating A Microservice With Micronaut, GORM And Oracle ATP

Thu, 2019-04-18 12:56

Over the past year, the Micronaut framework has become extremely popular. And for good reason, too. It's a pretty revolutionary framework for the JVM world that uses compile-time dependency injection and AOP without any runtime reflection. That means huge gains in startup time, runtime performance, and memory consumption. But it's not enough to just be performant; a framework has to be easy to use and well documented. The good news is, Micronaut is both of these. And it's fun to use and works great with Groovy, Kotlin and GraalVM. In addition, the people behind Micronaut understand the direction that the industry is heading and have built the framework with that direction in mind. This means that things like serverless and cloud deployments are easy, and there are features that provide direct support for them.

In this post we'll look at how to create a Microservice with Micronaut which will expose a "Person" API. The service will utilize GORM which is a "data access toolkit" - a fancy way of saying it's a really easy way to work with databases (from traditional RDBMS to MongoDB, Neo4J and more). Specifically, we'll utilize GORM for Hibernate to interact with an Oracle Autonomous Transaction Processing DB. Here's what we'll be doing:

  1. Create the Micronaut application with Groovy support
  2. Configure the application to use GORM connected to an ATP database.
  3. Create a Person model
  4. Create a Person service to perform CRUD operations on the Person model
  5. Create a controller to interact with the Person service

First things first, make sure you have an Oracle ATP instance up and running. Luckily, that's really easy to do and this post by my boss Gerald Venzl will show you how to set up an ATP instance in less than 5 minutes. Once you have a running instance, grab a copy of your Client Credentials "Wallet" and unzip it somewhere on your local system.

Before we move on to the next step, create a new schema in your ATP instance and create a single table using the following DDL:
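
(The DDL is a short sketch like the following; the column names are assumptions based on the GORM conventions discussed below, where GORM expects the implicit ID and version columns in addition to the mapped properties.)

CREATE TABLE person (
    id         NUMBER GENERATED ALWAYS AS IDENTITY PRIMARY KEY,
    version    NUMBER(19) NOT NULL,
    first_name VARCHAR2(50) NOT NULL,
    last_name  VARCHAR2(50) NOT NULL,
    is_cool    NUMBER(1) DEFAULT 0 NOT NULL
);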

You're now ready to move on to the next step, creating the Micronaut application.

Create The Micronaut Application

If you've never used it before, you'll need to install Micronaut, which includes a helpful CLI for scaffolding certain elements (like the application itself and controllers) as you work with your application. Once you've confirmed the install, run the following command to generate your basic application:
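
(A sketch of the command; the application name is illustrative, and --lang groovy gives us Groovy support.)

$ mn create-app complete --lang groovy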

Take a look inside that directory to see what the CLI has generated for you. 

As you can see, the CLI has generated a Gradle build script, a Dockerfile and some other config files as well as a `src` directory. That directory looks like this:

At this point you can import the application into your favorite IDE, so do that now. The next step is to generate a controller:
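
(Again via the CLI; this scaffolds a PersonController with a stub endpoint.)

$ mn create-controller Person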

We'll make one small adjustment to the generated controller, so open it up and add the `@CompileStatic` annotation to the controller. It should look like this once you're done:
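
(A sketch of the generated controller plus the added annotation.)

import groovy.transform.CompileStatic
import io.micronaut.http.HttpStatus
import io.micronaut.http.annotation.Controller
import io.micronaut.http.annotation.Get

@CompileStatic
@Controller("/person")
class PersonController {

    // stub endpoint generated by the CLI; returns 200 OK with an empty body
    @Get("/")
    HttpStatus index() {
        HttpStatus.OK
    }
}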

Now run the application using `gradle run` (we can also use the Gradle wrapper with `./gradlew run`) and our application will start up and be available via the browser or a simple curl command to confirm that it's working.  You'll see the following in your console once the app is ready to go:

Give it a shot:
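
(For example, from another terminal; -i prints the status line so we can see the response code.)

$ curl -i http://localhost:8080/person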

We aren't returning any content, but we can see the '200 OK' which means the application received the request and returned the appropriate response.

To make things easier for development and testing the app locally I like to create a custom Run/Debug configuration in my IDE (IntelliJ IDEA) and point it at a custom Gradle task. We'll need to pass in some System properties eventually, and this enables us to do that when launching from the IDE. Create a new task in `build.gradle` named `myTask` that looks like so:
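
(A sketch of such a task; the main class name assumes the CLI's default scaffold, so adjust it to your package.)

task myTask(type: JavaExec) {
    classpath = sourceSets.main.runtimeClasspath
    main = 'complete.Application'
    // forward the -D properties from the Run/Debug configuration to the app
    systemProperties System.properties
}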

Now create a custom Run/Debug configuration that points at this task and add the VM options that we'll need later on for the Oracle DB connection:

Here are the properties we'll need to populate for easier copy/pasting:
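
(A sketch of those VM options; the property keys are assumptions that must match how your application.yml references them, and the TNS_ADMIN-in-URL form requires a recent JDBC driver.)

-Ddatasources.default.url=jdbc:oracle:thin:@myadb_high?TNS_ADMIN=/path/to/wallet
-Ddatasources.default.username=<YOUR SCHEMA USER>
-Ddatasources.default.password=<YOUR PASSWORD>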

Let's move to the next step and get the application ready to talk to ATP!

Configure The Application For GORM and ATP

Before we can configure the application we need to make sure we have the Oracle JDBC drivers available. Download them, create a directory called `libs` in the root of your application and place them there.  Make sure that you have the following JARs in the `libs` directory:

Modify the `dependencies` block in your `build.gradle` file so that the Oracle JDBC JARs and the `micronaut-hibernate-gorm` artifact are included as dependencies:
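
(A sketch of the additions; the JAR names vary by driver version, and the GORM artifact coordinate shown is the Micronaut 1.x one.)

dependencies {
    // Oracle JDBC driver and wallet-support JARs dropped into libs/
    compile files('libs/ojdbc8.jar', 'libs/ucp.jar', 'libs/oraclepki.jar',
                  'libs/osdt_core.jar', 'libs/osdt_cert.jar')
    // GORM for Hibernate integration with Micronaut
    compile "io.micronaut.configuration:micronaut-hibernate-gorm"
}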

Now let's modify the file located at `src/main/resources/application.yml` to configure the datasource and Hibernate.  
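
(A sketch of that configuration following GORM's Grails-style dataSource conventions; the placeholder expressions are assumptions that pick up the -D properties from the custom run task.)

dataSource:
    pooled: true
    dbCreate: none
    url: ${datasources.default.url}
    driverClassName: oracle.jdbc.driver.OracleDriver
    username: ${datasources.default.username}
    password: ${datasources.default.password}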

Our app is now ready to talk to ATP via GORM, so it's time to create a service, model and some controller methods! We'll start with the model.

Creating A Model

GORM models are super easy to work with. They're just POGOs (Plain Old Groovy Objects) with some special annotations that help identify them as model entities and provide validation via the Bean Validation API. Let's create our `Person` model object by adding a Groovy class called 'Person.groovy' in a new directory called `model`. Populate the model as such:
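
(A sketch of the model; the 5-50 character constraint on firstName matches the validation error we'll see later, and isCool maps to the IS_COOL column in the DDL.)

import grails.gorm.annotation.Entity
import javax.validation.constraints.NotNull
import javax.validation.constraints.Size

@Entity
class Person {

    @NotNull
    @Size(min = 5, max = 50)
    String firstName

    @NotNull
    String lastName

    Boolean isCool = false
}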

Take note of a few items here. We've annotated the class with @Entity (`grails.gorm.annotation.Entity`) so GORM knows that this is an entity it needs to manage. Our model has 3 properties: firstName, lastName and isCool. If you look back at the DDL we used to create the `person` table above you'll notice that we have two additional columns that aren't addressed in the model: ID and version. The ID column is implicit with a GORM entity and the version column is auto-managed by GORM to handle optimistic locking on entities. You'll also notice a few annotations on the properties which are used for data validation as we'll see later on.

We can start the application up again at this point and we'll see that GORM has identified our entity and Micronaut has configured the application for Hibernate:

Let's move on to creating a service.

Creating A Service

I'm not going to lie to you. If you're waiting for things to get difficult here, you're going to be disappointed. Creating the service that we're going to use to manage `Person` CRUD operations is really easy to do. Create a Groovy class called `PersonService` in a new directory called `service` and populate it with the following:
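
(A sketch using a GORM data service; the exact signatures are assumptions, but note the two findAll variants the controller relies on below.)

import grails.gorm.services.Service

@Service(Person)
abstract class PersonService {

    abstract Person save(Person person)

    abstract List<Person> findAll()

    abstract List<Person> findAll(Map args)

    abstract Person get(Serializable id)

    abstract void delete(Serializable id)
}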

That's literally all it takes. This service is now ready to handle operations from our controller. GORM is smart enough to take the method signatures that we've provided here and implement the methods. The nice thing about using an abstract class approach (as opposed to using the interface approach) is that we can manually implement the methods ourselves if we have additional business logic that requires us to do so.

There's no need to restart the application here, as we've made no changes that would be visible at this point. We're going to need to modify our controller for that, so let's create one!

Creating A Controller

Let's modify the `PersonController` that we created earlier to give us some endpoints that we can use to do some persistence operations. First, we'll need to inject our PersonService into the controller. This too is straightforward: simply include the following just inside our class declaration:
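
(A sketch; `@Inject` comes from `javax.inject`.)

@Inject
PersonService personService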

The first step in our controller should be a method to save a `Person`.  Let's add a method annotated with `@Post` to handle this and within the method we'll call the `PersonService.save()` method.  If things go well, we'll return the newly created `Person`, if not we'll return a list of validation errors. Note that Micronaut will bind the body of the HTTP request to the `person` argument of the controller method meaning that inside the method we'll have a fully populated `Person` bean to work with.
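
(A sketch of that method; the io.micronaut.http imports are omitted, and the ValidationException type and error shaping are assumptions that vary by GORM version.)

@Post("/save")
HttpResponse save(@Body Person person) {
    try {
        return HttpResponse.ok(personService.save(person))
    }
    catch (ValidationException e) {
        // return the default message for each violated constraint with a 422
        List errors = e.errors.allErrors.collect { it.defaultMessage }
        return HttpResponse.status(HttpStatus.UNPROCESSABLE_ENTITY).body(errors)
    }
}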

If we start up the application we are now able to persist a `Person` via the `/person/save` endpoint:
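
(A sample request; the name values are made up.)

$ curl -i -X POST -H "Content-Type: application/json" \
    -d '{"firstName": "Kenneth", "lastName": "Powers", "isCool": true}' \
    http://localhost:8080/person/save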

Note that we've received a 200 OK response here with an object containing our `Person`.  However, if we tried the operation with some invalid data, we'd receive some errors back:

Since our model (very strangely) requires that a `Person`'s `firstName` be between 5 and 50 characters, we receive a 422 Unprocessable Entity response that contains an array of validation errors.

Now we'll add a `/list` endpoint that users can hit to list all of the `Person` objects stored in the ATP instance. We'll set it up with two optional parameters that can be used for pagination.
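A sketch of that endpoint follows; the parameter names `max` and `offset` match GORM's standard pagination arguments, and `@Nullable` (from `javax.annotation`) makes them optional:

```groovy
@Get("/list")
List<Person> list(@Nullable Integer max, @Nullable Integer offset) {
    // collect only the pagination args the caller actually supplied
    Map args = [:]
    if (max != null) args.max = max
    if (offset != null) args.offset = offset
    // an empty map is 'falsy' in Groovy, so this picks the no-arg signature
    args ? personService.findAll(args) : personService.findAll()
}
```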

Remember that our `PersonService` had two signatures for the `findAll` method: one that accepts no parameters and another that accepts a `Map`.  The `Map` signature can be used to pass additional parameters, like those used for pagination.  So calling `/person/list` without any parameters will give us all `Person` objects:
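For example (output illustrative):

```
$ curl -s http://localhost:8080/person/list
[{"id":1,"firstName":"Theodore","lastName":"Logan","isCool":true}, ...]
```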

Or we can get a subset via the pagination params like so:
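Again illustrative, using GORM's `max` and `offset` arguments:

```
$ curl -s "http://localhost:8080/person/list?offset=2&max=2"
```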

We can also add a `/person/get` endpoint to get a `Person` by ID:
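A sketch, assuming the ID is passed as a path variable (the original route shape may differ):

```groovy
@Get("/get/{id}")
Person get(Long id) {
    personService.get(id)
}
```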

And a `/person/delete` endpoint to delete a `Person`:
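And a similar sketch for delete, returning 204 No Content on success (again, the route shape is an assumption):

```groovy
@Delete("/delete/{id}")
HttpResponse delete(Long id) {
    personService.delete(id)
    HttpResponse.noContent()
}
```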

Summary

We've seen here that Micronaut is a simple but powerful way to create performant microservice applications, and that data persistence via Hibernate/GORM is easy to accomplish when using an Oracle ATP backend.  Your feedback is very important to me, so please feel free to comment below or interact with me on Twitter (@recursivecodes).

If you'd like to take a look at the entire application, you can view it or clone it via GitHub.

Oracle ACEs at APEX Connect 2019, May 7-9 in Bonn

Thu, 2019-04-18 11:36

APEX Connect 2019, the annual conference organized by DOAG (the German Oracle User Group), will be held May 7-9, 2019 in Bonn, Germany. The event features a wide selection of sessions covering APEX, SQL and PL/SQL, and JavaScript. Among the session speakers are the following members of the Oracle ACE Program:

Oracle ACE Director Niels de Bruijn
Business Unit Manager APEX, MT AG
Cologne, Germany

Oracle ACE Director Roel Hartman
Director/Senior APEX Developer, APEX Consulting
Apeldoorn, Netherlands

Oracle ACE Director Heli Helskyaho
CEO, Miracle Finland Oy
Finland

Oracle ACE Director John Edward Scott
Founder, APEX Evangelists
West Yorkshire, United Kingdom

Oracle ACE Director Kamil Stawiarski
Owner/Partner, ORA-600
Warsaw, Poland

Oracle ACE Director Martin Widlake
Database Architect and Performance Specialist, ORA600
Essex, United Kingdom

Oracle ACE Alan Arentsen
Senior Oracle Developer, Arentsen Database Consultancy
Breda, Netherlands

Oracle ACE Tobias Arnhold
Freelance APEX Developer, Tobias Arnhold IT Consulting
Germany

Oracle ACE Dietmar Aust
Owner, OPAL UG
Cologne, Germany

Oracle ACE Kai Donato
Senior Consultant for Oracle APEX Development, MT AG
Cologne, Germany

Oracle ACE Daniel Hochleitner
Freelance Oracle APEX Developer and Consultant
Regensburg, Germany

Oracle ACE Oliver Lemm
Business Unit Manager, MT AG
Cologne, Germany

Oracle ACE Richard Martens
Co-Owner, SMART4Solutions B.V.
Tilburg, Netherlands

Oracle ACE Robert Marz
Principal Technical Architect, its-people GmbH
Frankfurt, Germany

Oracle ACE Matt Mulvaney
Senior Development Consultant, Explorer UK LTD
Leeds, United Kingdom

Oracle ACE Christian Rokitta
Managing Partner, iAdvise
Breda, Netherlands

Oracle ACE Philipp Salvisberg
Senior Principal Consultant, Trivadis AG
Zürich, Switzerland

Oracle ACE Sven-Uwe Weller
Syntegris Information Solutions GmbH
Germany

Oracle ACE Associate Carolin Hagemann
Hagemann IT Consulting
Hamburg, Germany

Oracle ACE Associate Moritz Klein
Senior APEX Consultant, MT AG
Frankfurt, Germany


Developers Decide One Cloud Isn’t Enough

Wed, 2019-04-17 08:00

Introduction

Developers have significantly greater choice today than even a few years ago when considering where to build, test, and host their services and applications, deciding which clouds to move existing on-premises workloads to, and choosing which of the multitude of open source projects to leverage. So why, in this new era of empowered developers and expanding choice, have so many organizations pursued a single-cloud strategy? In recent years, the proliferation of new cloud native open source projects and of cloud service providers adding capacity, functionality, tools, resources, and services has brought better performance, different cost models, and more choice for developers and DevOps engineers, while increasing competition among providers. This is leading to a new era of cloud choice, in which the norm will be a multi-cloud and hybrid cloud model.

As new cloud native design and development technologies like Kubernetes, serverless computing, and the maturing discipline of microservices emerge, they help accelerate, simplify, and expand deployment and development options. Users can leverage these new technologies with their existing designs and deployments, and the flexibility they afford expands users' options to run on many different platforms. Given this rapidly changing cloud landscape, it is not surprising that hybrid cloud and multi-cloud strategies are being adopted by an increasing number of companies today.

For a deeper dive into Prediction #7 of the 10 Predictions for Developers in 2019 offered by Siddhartha Agarwal, “Developers Decide One Cloud Isn’t Enough”, we look at the growing trend for companies and developers to choose more than one cloud provider. We’ll examine a few of the factors they consider, the needs determined by a company’s place in the development cycle, business objectives, and level of risk tolerance, and predict how certain choices will trend in 2019 and beyond.


Different Strokes

We are in a heterogeneous IT world today. A plethora of choices and use cases, coupled with widely varying technical and business needs and approaches to solving them, gives rise to different solutions. No two are exactly the same, but development projects today typically fall within the following scenarios.

A. Born-in-the-cloud development – these projects suffer little to no constraint imposed by existing applications; it is highly efficient and cost-effective to begin design in the cloud. They naturally leverage containers and new open source development tools such as serverless platforms (https://fnproject.io/) or service meshes (e.g., Istio). A decade ago, startup costs based on datacenter needs alone were a serious barrier to entry for budding tech companies – cloud computing has completely changed this.

B. On-premises development moving to the cloud – enterprises in this category have many more factors to consider. Java teams, for example, are rapidly adopting frameworks like Helidon and GraalVM to help them move to a microservice architecture and migrate applications to the cloud. But will greenfield development projects start only in the cloud? Do they migrate legacy workloads to the cloud? How do they balance existing investments with new opportunities? And what about the interface between on-premises and cloud?

C. Remaining mostly on premises but moving some services to the cloud – options are expanding for those in this category. A hybrid cloud approach has been expanding, and we predict it will continue to expand over at least the next few years. The cloud native stacks available on premises now mirror the cloud native stacks in the cloud, enabling a new generation of hybrid cloud use cases. An integrated and supported cloud native framework that spans on-premises and cloud options delivers choice once again. Security, privacy, and latency concerns will dictate some of these projects' unique needs.


If It Ain’t Broke, Don’t Fix It?

IT investments are real. Inertia can be hard to overcome. Let’s look at the main reasons for not distributing workloads across multiple clouds.  

  • Economy of scale tops the list: most cloud providers offer discounts for customers who go all in, and larger workloads on one cloud provide negotiating leverage.
  • Development staff familiarity with one chosen platform makes it easier to bring on and train new developers, shortening their ramp time to productivity.
  • Custom features or functionality unique to the main cloud provider may need to be removed or redesigned when moving to another platform. Even on supposedly open platforms, developers must be aware of the not-so-obvious features that impact portability.
  • Geographical location of datacenters, driven by privacy and/or latency concerns in less well-served areas of the world, may also inhibit choice or force uncomfortable trade-offs.
  • Risk mitigation is another significant factor, as enterprises seek to balance conflicting business needs with associated risks. When resources are scarce, lean development teams often need to choose between taking on new development work and modernizing legacy applications.

Change is Gonna Do You Good

These are valid concerns, but as dev teams look more deeply into the robust services and offerings emerging today, the trend is to diversify.

The most frequently cited concern is that of vendor lock-in. This counter-argument to economy of scale says that the more difficult it is to move your workloads off of one provider, the less motivated that vendor is to help reduce your cost of operations. For SMBs (small to mid-sized businesses) without a ton of leverage in comparison to large enterprises, this can be significant. Ensuring portability of workloads is important. A comprehensive cloud native infrastructure is imperative here – one that includes container orchestration but also streaming, CI/CD, and observability and analysis (e.g., Prometheus and Grafana). Containers and Kubernetes deliver portability, provided your cloud vendor uses unmodified open source code. In this model, a developer can develop their web application on their laptop, push it into a CI/CD system on one cloud, and leverage another cloud for managed Kubernetes to run their container-based app. However, the minute you start using specific APIs from the underlying platform, moving to another platform becomes much more difficult. AWS Lambda is one of many examples.

Mergers, acquisitions, changing business plans or practices, or other unforeseen events may impact a business at a time when it is not equipped to deal with them. Having the flexibility to move with changing circumstances, and not being rushed into decisions, is also important. Consider, for example, an organization that uses an on-premises PaaS, such as OpenShift, merging with another organization that has leveraged the public cloud across IaaS, PaaS, and SaaS. It's important to choose interoperable technologies to anticipate these scenarios.

Availability is another reason cited by customers. A thoughtfully designed multi-cloud architecture not only offers potential negotiating power as mentioned above, but also allows for failover in case of outages, DDoS attacks, local catastrophes, and the like. Larger cloud providers with massive resources, many datacenters, and multiple availability domains offer a clear advantage here, but it also behooves the consumer to distribute risk not only across datacenters, but also across several providers.

Another important set of factors is related to cost and ROI. Running the same workload on multiple cloud providers to compare cost and performance can help achieve business goals, and also help inform design practices.  

Adopting open source technologies enables businesses to choose where to run their applications based on the criteria they deem most important, be they technical, cost, business, compliance, or regulatory concerns. Moving to open source thus opens up the possibility to run applications on any cloud. That is, any CNCF-certified Kubernetes managed cloud service can safely run Kubernetes – so enterprises can take advantage of this key benefit to drive a multi-cloud strategy.

The trend in 2019 is moving strongly in the direction of design practices that support all aspects of a business’s goals, with the best offers, pricing and practices from multiple providers. This direction makes enterprises more competitive – maximally productive, cost-effective, secure, available, and flexible regarding platform choice.


Design for Flexibility

Though having a multi-cloud strategy seems to be the growing trend, it does come with some inherent challenges. To address issues like interoperability among multiple providers and establishing depth of expertise with a single cloud provider, we’re seeing an increased use of different technologies that help to abstract away some of the infrastructure interoperability hiccups. This is particularly important to developers, who seek the best available technologies that fit their specific needs.

Serverless computing seeks to reduce the awareness of any notion of infrastructure. Consider it similar to water or electricity utilities – once you have attached your own minimal home infrastructure to the endpoint offered by the utility, you simply turn on the tap or light switch, and pay for what you consume. The service scales automatically – for all intents and purposes, you may consume as much output of the utility or service as desired, and the bill goes up and down accordingly. When you are not consuming the service, there is no (or almost no) overhead.  

Development teams are picking cloud vendors based on capabilities they need. This is especially true in SaaS. SaaS is a cloud-based software delivery model with payment based on usage, rather than license or support-based pricing. The SaaS provider develops, maintains and updates the software, along with the hardware, middleware, application software, and security. SaaS customers can more easily predict total cost of ownership with greater accuracy. The more modern, complete SaaS solutions also allow for greater ease of configuration and personalization, and offer embedded analytics, data portability, cloud security, support for emerging technologies, and connected, end-to-end business processes.

Serverless computing not only provides simplicity through abstraction of infrastructure, its design patterns also promote the use of third-party managed services whenever possible. This provides flexibility and allows you to choose the best solution for your problem from the growing suite of products and services available in the cloud, from software-defined networking and API gateways, to databases and managed streaming services. In this design paradigm, everything within an application that is not purely business logic can be efficiently outsourced.

More and more companies are finding it increasingly easy to connect elements together with Serverless functionality for the desired business logic and design goals. Serverless deployments talking to multiple endpoints can run almost anywhere; serverless becomes the “glue” that is used to make use of the best services available, from any provider.

Because serverless deployments can run anywhere, even across multiple cloud platforms, flexibility of choice expands even further, making serverless arguably the best design option for those who value portability and openness.


Summary

There are many pieces required to deliver a successful multi-cloud approach. Modern developers use specific criteria to validate if a particular cloud is “open” and whether or not it supports a multi-cloud approach. Does it have the ability to

  • extract/export data without incurring significant expense or overhead?
  • be deployed either on-premises or in the public cloud, including for custom applications, integrations between applications, etc.?
  • monitor and manage applications that might reside on-premises or in other clouds from a single console, with the ability to aggregate monitoring/management data?

And does it have a good set of APIs that provides access to everything available in the UI? Does it expose all the business logic and data required by the application? Does it have SSO capability across applications?

The CNCF (Cloud Native Computing Foundation) has over 400 cloud provider, user, and supporter members, and its working groups and CloudEvents specification engage these and thousands more in the ongoing mission to make cloud native computing ubiquitous and to allow engineers to make high-impact changes frequently and predictably with minimal toil.

We predict this trend will continue well beyond 2019 as CNCF drives adoption of this paradigm by fostering and sustaining an ecosystem of open source, vendor-neutral projects, and democratizing state-of-the-art patterns to make these innovations accessible for everyone.

Oracle is a platinum member of CNCF, along with 17 other major cloud providers. We are serious about our commitment to open source, open development practices, and sharing our expertise via technical tutorials, talks at meetups and conferences, and helping businesses succeed. Learn more and engage with us at cloudnative.oracle.com, and we’d love to hear if you agree with the predictions expressed in this post. 

Podcast: On the Highway to Helidon

Tue, 2019-04-16 23:00

Are you familiar with Project Helidon? It’s an open source Java microservices framework introduced by Oracle in September of 2018.  As Helidon project lead Dmitry Kornilov explains in his article Helidon Takes Flight, "It’s possible to build microservices using Java EE, but it’s better to have a framework designed from the ground up for building microservices."

Helidon consists of a lightweight set of libraries that require no application server and can be used in Java SE applications. While these libraries can be used separately, using them in combination provides developers with a solid foundation on which to build microservices.

In this program we'll dig into Project Helidon with a panel that consists of two people who are actively engaged in the project and two community leaders who have used Helidon in development projects and have also organized Helidon-focused meetups.

This program was recorded on Friday, March 8, 2019. So let’s journey through time and space and get to the conversation. Just press play in the widget.

The Panelists

Dmitry Kornilov
Senior Software Development Manager, Oracle; Project Lead, Project Helidon
Prague, Czech Republic

Tomas Langer
Consulting Member of Technical Staff, Oracle; Member of the Project Helidon Team
Prague, Czech Republic

Oracle ACE Associate José Rodrigues
Principal Consultant and Business Analyst, Link Consulting; Co-Organizer, Oracle Developer Meetup Lisbon
Lisbon, Portugal

Oracle ACE Phil Wilkins
Senior Consultant, Capgemini; Co-Organizer, Oracle Developer Meetup London
Reading, UK

