Feed aggregator

Jami (Gnu Ring) review

RDBMS Insight - Tue, 2019-09-10 15:35

An unavoidable fact of database support life is webconferences with clients or users. Most of the time, we’re more interested in what’s going on onscreen than in each other’s faces. But every now and then we need to have a face-to-face. Skype is popular, but I recently had the chance to try out a FOSS alternative with better security: Jami.

Jami (formerly Gnu Ring) is a FOSS alternative to Skype that advertises a great featureset and some terrific privacy features. I suggested to a small group that we try it out for an upcoming conference call.

Just going by its specs, Jami (https://jami.net/) looks amazing. It’s free, open-source software that’s available on all the major platforms, including all the major Linux distros. It boasts the following advantages over Skype and many other Skype alternatives:

  • Distributed: Uniquely, there aren’t any central servers. Jami uses distributed hash table technology to distribute directory functions, authentication, and encryption across all devices connected to it.
  • Secure: All communications are end-to-end encrypted.
  • FOSS: Jami’s licensed under a GPLv3+ license, is a GNU package and a project of the Free Software Foundation.
  • Ad-free: If you’re not paying for commercial software, then you are the product. Not so with Jami, which is non-commercial and ad-free. Jami is developed and maintained by Savoir Faire Linux, a Canadian open-source consulting company.

And its listed features include pretty much everything you’d use Skype for: text messaging, voice calls, video calls, file and photo sharing, even video conference calls.

I wanted to use it for a video conference call, and my group was willing to give it a try. I had high hopes for this FOSS Skype alternative.


Jami is available for: Windows, Linux, OS X, iOS, Android, and Android TV. (Not all clients support all features; there’s a chart in the wiki.) I tried the OS X and iOS variants.

First, I installed Jami on OS X and set it up. The setup was straightforward, although I had to restart Jami after setting up my account, in order for it to find that account.

Adding contacts

One particularly cool feature of Jami is that your contact profile is stored locally, not centrally. Your profile’s unique identifier is a cumbersomely long 40-digit hexadecimal string, such as “7a639b090e1ab9b9b54df02af076a23807da7299” (not an actual Jami account afaik). According to the documentation, you can also register a username for your account, such as “natalkaroshak”.

Contacts are listed as hex strings. Unfortunately, I wasn’t able to actually find any of my group using their registered usernames, nor were they able to find me under my username. We had to send each other 40-digit hex strings, and search for the exact hex strings in Jami, in order to find each other.

The only way to add a contact, once you’ve located them, is to interact with them, e.g. by sending a text or making a call. This was mildly annoying when trying to set up my contact list a day ahead of the conference call.

Once I’d added the contacts, some of them showed up in my contact list with their profile names… and some of them didn’t, leaving me guessing which hex string corresponded to which member of my group.

Sending messages, texts, emojis

Sending and receiving Skype-style messages and emojis worked very well in Jami. Group chat isn’t available.

Making and taking calls

The documented process for a conference call in Jami is pretty simple: call one person, then add the other participants to the call.

Only the Linux and Windows versions currently support making conference calls. Another member of our group tried to make the conference call. As soon as I answered his incoming call, my Jami client crashed. So I wasn’t able to actually receive a call using Jami for OS X.

The caller and one participant were able to hear each other briefly, before the caller’s Jami crashed as well.

Linking another device to the same account

I then tried installing Jami on my iPhone. Again, the installation went smoothly, and this let me try another very cool feature of Jami.

In Jami, your account information is all stored in a folder on your device. There’s no central storage. Password creation is optional, because you don’t log in to any server when you join Jami. If you do create a password, you can (1) register a username with the account and (2) use the same account on another device.

The process of linking my iPhone’s Jami to the same account I used with my OSX Jami was very smooth. In the OSX install, I generated an alphanumeric PIN, entered the PIN into my device, and entered the account password. I may have mis-entered the first alphanumeric PIN, because it worked on the second try.

Unfortunately, my contacts from the OSX install didn’t appear in the iOS install, even though they were linked to the same account. I had to re-enter the 40-digit hex strings and send a message to each conference call participant.

Making calls on iOS

The iOS client doesn’t support group calling, but I tried video calling one person. We successfully connected. However, that’s where the success ended. I could see the person I called, but was unable to hear her. And she couldn’t see OR hear me. After a few minutes, the video of the other party froze up too.


Jami looked very promising, but didn’t actually work.

All of the non-call stuff worked: installation, account creation, adding contacts (though having to use the 40-digit hex codes is a big drawback), linking my account to another device.

But no one in my group was able to successfully make a video call that lasted longer than a few seconds. The best result was that two people could hear each other for a couple of seconds.

Jami currently has 4.5/5 stars on alternativeto.net. I have to speculate that most of the reviews are from Linux users, and that the technology is more mature on Linux. For OSX and iOS, Jami’s not a usable alternative to Skype yet.

Big thanks to my writing group for gamely trying Jami with me!

Categories: DBA Blogs

Oracle Cloud: Sign up failed... [3] & solved

Dietrich Schroff - Tue, 2019-09-10 13:44
Finally (see my attempts here and here) I was able to sign up to Oracle Cloud.
What did the trick?

I got help from Oracle support:
So I used my Gmail address and this worked:

and then:

Let's see how this cloud works compared to Azure and AWS.

Red Hat Forum Zürich, 2019, some impressions

Yann Neuhaus - Tue, 2019-09-10 07:34

The Red Hat Forum 2019 in Zürich is currently ongoing, and people have just finished lunch before the more technical sessions start.

As expected, a lot revolves around OpenShift 4 and automation with Ansible. As dbi is a Red Hat advanced business partner, we took the opportunity to be present with a booth to get in touch with our existing customers and to meet new people:

All the partners got their logo on a huge wall at the entrance to the event:

As the event is getting more and more popular, Red Hat moved to a great, huge location, the StageOne in Zürich Oerlikon, so all of the 850 participants found their space.

There is even space for some fun stuff:

Important as well: the catering was excellent:

The merger with IBM was an important topic, and Red Hat again stated several times: Red Hat will stay Red Hat. Let’s see what happens here; not all people trust this statement. All in all it is a great atmosphere here in Oerlikon: great people to discuss with, interesting topics, great organization and a lot of “hybrid cloud”. Technology is moving fast and Red Hat is trying to stay at the front. From a partner perspective the Forum is a great chance to meet the right people within Red Hat, no matter what topic you want to discuss: technology, marketing, training, whatever. I am pretty sure we will attend the next forum as well.

Cet article Red Hat Forum Zürich, 2019, some impressions est apparu en premier sur Blog dbi services.

[Video] 7 Things Every Oracle Apps DBA or Architect Must know for Cloud

Online Apps DBA - Tue, 2019-09-10 06:39

7 Things Every Oracle Apps DBA or Architect Must Know in order to Build, Manage & Migrate EBS (R12) on Oracle’s Gen 2 Cloud, that is, Oracle Cloud Infrastructure (OCI). These 7 things include: ✔ Deployment Options On Oracle Cloud ✔ The architecture of EBS (R12) on OCI ✔ Cloud Tools i.e. EBS Cloud Manager, Cloud […]

The post [Video] 7 Things Every Oracle Apps DBA or Architect Must know for Cloud appeared first on Oracle Trainings for Apps & Fusion DBA.

Categories: APPS Blogs

Introducing Accelerated Database Recovery with SQL Server 2019

Yann Neuhaus - Tue, 2019-09-10 06:01

SQL Server 2019 RC1 was released a few weeks ago, and it is time to start blogging about my favorite core engine features that will ship with the next version of SQL Server. Things should not be completely different in the RTM, so let’s introduce accelerated database recovery (aka ADR), which is mainly designed to solve an annoying issue that most SQL Server DBAs have probably faced at least once: long-running transactions that impact the overall recovery time. As a reminder, with current versions of SQL Server, database recovery time is tied to the largest transaction at the moment of the crash. This is even more true in highly critical environments, where it may have a huge impact on service or application availability, and ADR is another feature that may certainly help.

Image from Microsoft documentation

In order to allow a very fast rollback and recovery process, the SQL Server team completely redesigned the database engine recovery process, and the interesting point is that they introduced row-versioning to achieve it. Row-versioning, however, has existed since SQL Server 2005 through the RCSI and SI isolation levels, and in my opinion it is good news that such capabilities are (finally) extended to address long recovery times.

Anyway, I performed some tests to get an idea of the benefit of ADR and of its impact on the workload as well. Firstly, I performed a recovery test without ADR: after initiating a long-running transaction, I simply crashed my SQL Server instance. I used an AdventureWorks database with the dbo.bigTransactionHistory table, which is big enough (I think) to get a relevant result.

The activation of ADR is per database, meaning that row-versioning is also managed locally, per database. This allows better workload isolation compared to using the global tempdb version store as in previous SQL Server versions.

USE AdventureWorks_dbi;

ALTER DATABASE AdventureWorks_dbi SET READ_COMMITTED_SNAPSHOT ON;

ALTER DATABASE AdventureWorks_dbi SET ACCELERATED_DATABASE_RECOVERY = ON;


The dbo.bigtransactionHistory table has only one clustered primary key …

EXEC sp_helpindex 'dbo.bigTransactionHistory';


… with 158’272’243 rows and about 2GB of data

EXEC sp_spaceused 'dbo.bigTransactionHistory';


I simulated a long running transaction with the following update query that touches every row of the dbo.bigTransactionHistory table to get a relevant impact on the recovery process duration time.


UPDATE dbo.bigTransactionHistory
SET Quantity = Quantity + 1;


The related transaction wrote a lot of records into the transaction log, as shown below:

SELECT
	DB_NAME(database_id) AS [db_name],
	total_log_size_in_bytes / 1024 / 1024 AS size_MB,
	used_log_space_in_percent AS [used_%]
FROM sys.dm_db_log_space_usage;


The sys.dm_tran_* and sys.dm_exec_* DMVs may be helpful to dig into the transaction detail including the transaction start time and log used in the transaction log:

SELECT
   GETDATE() AS [Current Time],
   [des].[login_name] AS [Login Name],
   DB_NAME ([dtdt].database_id) AS [Database Name],
   [dtdt].[database_transaction_begin_time] AS [Transaction Begin Time],
   [dtdt].[database_transaction_log_bytes_used] / 1024 / 1024 AS [Log Used MB],
   [dtdt].[database_transaction_log_bytes_reserved] / 1024 / 1024 AS [Log Reserved MB],
   SUBSTRING([dest].text, [der].statement_start_offset/2 + 1,(CASE WHEN [der].statement_end_offset = -1 THEN LEN(CONVERT(nvarchar(max),[dest].text)) * 2 ELSE [der].statement_end_offset END - [der].statement_start_offset)/2) as [Query Text]
FROM
   sys.dm_tran_database_transactions [dtdt]
   INNER JOIN sys.dm_tran_session_transactions [dtst] ON  [dtst].[transaction_id] = [dtdt].[transaction_id]
   INNER JOIN sys.dm_exec_sessions [des] ON  [des].[session_id] = [dtst].[session_id]
   INNER JOIN sys.dm_exec_connections [dec] ON   [dec].[session_id] = [dtst].[session_id]
   LEFT OUTER JOIN sys.dm_exec_requests [der] ON [der].[session_id] = [dtst].[session_id]
   OUTER APPLY sys.dm_exec_sql_text ([der].[sql_handle]) AS [dest]


The restart of my SQL Server instance kicked off the AdventureWorks_dbi database recovery process. It took about 6 minutes in my case:

EXEC sp_readerrorlog 0, 1, N'AdventureWorks_dbi'


Digging further into the SQL Server error log, I noticed that phase 2 (redo) and phase 3 (undo) of the recovery process took most of the time (as expected).

However, if I performed the same test with ADR enabled for the AdventureWorks_dbi database …

USE AdventureWorks_dbi;

ALTER DATABASE AdventureWorks_dbi SET ACCELERATED_DATABASE_RECOVERY = ON;


… and dug again into the SQL Server error log:

Well, the output above is pretty different but clear and unequivocal: there is a tremendous improvement in recovery time here. The SQL Server error log indicates that the redo phase took 0ms and the undo phase 119ms. I also tested different variations of long transactions and of the amount of log generated in the transaction log (4.5GB, 9.1GB and 21GB), without and with ADR. With the latter, database recovery remained fast irrespective of the transaction log size, as shown below:

But there is no free lunch when enabling ADR, because it is a row-versioning based process which may have an impact on the workload. I was curious to compare the performance of my update queries between scenarios including no row-versioning (default), row-versioning with RCSI only, ADR only, and finally both RCSI and ADR enabled. I performed all my tests on a virtual machine with a quad-core Intel® Core™ i7-6600U CPU @ 2.6GHz and 8GB of RAM. SQL Server memory is capped at 6GB. The underlying storage for the SQL Server data files is a Samsung 850 EVO 1TB SSD.

Here is the first test I performed: the same update as before, which touches every row of the dbo.bigTransactionHistory table:


UPDATE dbo.bigTransactionHistory
SET Quantity = Quantity + 1;


And here are the results with the different scenarios:

Please don’t focus too strongly on the values here, because they will depend on your context, but the result answers the following question: does the activation of ADR have an impact on the workload and, if yes, is it of the same order of magnitude as RCSI / SI? The results are self-explanatory.

Then I decided to continue my tests by increasing the impact of the long-running transaction with additional updates on the same data, in order to stress the version store a little.


UPDATE dbo.bigTransactionHistory
SET Quantity = Quantity + 1;
UPDATE dbo.bigTransactionHistory
SET Quantity = Quantity + 1;
UPDATE dbo.bigTransactionHistory
SET Quantity = Quantity + 1;
UPDATE dbo.bigTransactionHistory
SET Quantity = Quantity + 1;


Here are the new results:

This time ADR seems to have a bigger impact than RCSI in my case. Regardless of the exact values of this test, the key point is that we have to be aware that enabling ADR will have an impact on the workload.

After performing this bunch of tests, it’s time to get the big picture of the ADR design, with several components per database including a persisted version store (PVS), a Logical Revert, a sLog and a cleaner process. In this blog post I would like to focus on the PVS component, which acts as a persistent version store for the database concerned. In other words, with ADR, tempdb will no longer be used to store row versions. The interesting point is that RCSI / SI row-versioning continues to be handled through the PVS if ADR is enabled, according to my tests.

A new column named is_accelerated_database_recovery_on has been added to the sys.databases system view. In my case both RCSI and ADR are enabled in the AdventureWorks_dbi database.

SELECT
	name AS [database_name],
	is_read_committed_snapshot_on,
	is_accelerated_database_recovery_on
FROM sys.databases
WHERE database_id = DB_ID();


The sys.dm_tran_version_store_space_usage DMV displays the total space in tempdb used by the version store for each database whereas the new sys.dm_tran_persistent_version_store_stats DMV provides information related to the new PVS created with the ADR activation.


UPDATE dbo.bigTransactionHistory
SET Quantity = Quantity + 1;

SELECT
	DB_NAME(database_id) AS [db_name],
	persistent_version_store_size_kb / 1024 AS pvs_MB
FROM sys.dm_tran_persistent_version_store_stats;

SELECT
	reserved_page_count / 128 AS reserved_MB
FROM sys.dm_tran_version_store_space_usage;


After running my update query, I noticed the PVS in the AdventureWorks_dbi database was used rather than the version store in tempdb.

So, getting rid of the version store in tempdb seems to be a good idea and probably more scalable per database, but according to my tests, and without drawing any conclusions yet, it may lead to performance considerations … let’s see in the future what happens …

In addition, from a storage perspective, because SQL Server no longer uses tempdb as the version store, my curiosity led me to see what happens behind the scenes and how the PVS interacts with the data pages where row-versioning comes into play. Let’s do some experiments:

Let’s create a dbo.bigTransactionHistory_row_version table from the dbo.bigTransactionHistory table with less data:

USE AdventureWorks_dbi;

DROP TABLE IF EXISTS [dbo].[bigTransactionHistory_row_version];

SELECT TOP 1 *
INTO [dbo].[bigTransactionHistory_row_version]
FROM [dbo].[bigTransactionHistory];


Now, let’s have a look at the data page that belongs to my dbo.bigTransactionHistory_row_version table, with page ID 499960 in my case:

DBCC TRACEON (3604, -1);
DBCC PAGE (AdventureWorks_dbi, 1, 499960, 3);


Versioning info exists in the header, but obviously the version pointer is set to Null because there is no additional row version to maintain in this case; I just inserted one row.

Let’s update the only row that exists in the table as follows:

UPDATE [dbo].[bigTransactionHistory_row_version]
SET Quantity = Quantity + 1


The version pointer has been updated (though I’m not sure the information is consistent here; at least the values displayed are odd). Another interesting point is that there is more information than the initial 14 bytes we might expect for keeping track of the pointer. There are also an extra 21 bytes at the end of the row, as shown above. On the other hand, the sys.dm_db_index_physical_stats() DMF has been updated to reflect PVS information with new columns inrow_*, total_inrow_* and offrow_*, which may help in understanding some of the PVS internals.

SELECT *
FROM sys.dm_db_index_physical_stats(
	DB_ID(), OBJECT_ID('dbo.bigTransactionHistory_row_version'),
	NULL, NULL, 'DETAILED');


Indeed, referring to the above output and correlating it with what I found inside the data page, I would assume the extra 21 bytes stored in the row reflect a diff of the previous row (something I still need to confirm; note the in_row_diff_version_record_count and in_row_version_payload_size_in_bytes columns).

Furthermore, if I perform the update operation again on the same data, the storage strategy seems to switch to an off-row mode, if I refer again to the sys.dm_db_index_physical_stats() DMF output:

Let’s go back to the DBCC PAGE output to confirm this assumption:

Indeed, the extra payload has disappeared and only the 14-byte pointer remains, which has been updated accordingly.

Finally, if I perform multiple updates of the same row, SQL Server should keep the off-row storage and should create inside it a chain of version pointers and their corresponding values.


UPDATE [dbo].[bigTransactionHistory_row_version]
SET Quantity = Quantity + 1
GO 100000


My assumption is verified by taking a look at the previous DMVs. The persistent version store size has increased from ~16MB to ~32MB, and we still have 1 version record in off-row mode, meaning there is still one version pointer that references the off-row structure for my record.

Finally, let’s introduce the cleaner component. As with the tempdb version store, cleanup of old row versions is achieved by an asynchronous process that cleans page versions that are no longer needed. It wakes up periodically, but we can force it by executing the sp_persistent_version_cleanup stored procedure.

Referring to one of my first tests, the PVS size is about 8GB.


UPDATE dbo.bigTransactionHistory
SET Quantity = Quantity + 1;

UPDATE dbo.bigTransactionHistory
SET Quantity = Quantity + 1;
UPDATE dbo.bigTransactionHistory
SET Quantity = Quantity + 1;
UPDATE dbo.bigTransactionHistory
SET Quantity = Quantity + 1;
SELECT
	DB_NAME(database_id) AS [db_name],
	persistent_version_store_size_kb / 1024 AS pvs_MB
FROM sys.dm_tran_persistent_version_store_stats;

-- Running the PVS cleanup process
EXEC sp_persistent_version_cleanup;


According to my tests, the cleanup task took around 6 minutes for the entire PVS, but it was not a blocking process at all. As an ultimate test, I executed in parallel an update query that touched every row of the same table, and it was not blocked by the cleaner, as shown below:

This is a process I need to investigate further. Other posts are coming as well, covering other ADR components.

See you!

Cet article Introducing Accelerated Database Recovery with SQL Server 2019 est apparu en premier sur Blog dbi services.

Using Web Worker for Long Tasks in Oracle JET

Andrejus Baranovski - Tue, 2019-09-10 02:42
A JavaScript app runs in a single thread. This means that if there is a long-running, resource-intensive operation, the thread will be blocked and the page will stay frozen until the operation completes. Obviously this is not user-friendly, and such behavior should be avoided. We can use Web Workers to run long-running operations in separate threads without blocking the main thread. Code running in a Web Worker doesn't have access to the UI DOM, which means the logic coded in a Web Worker should not be directly related to the UI.

The sample app contains commented-out code in dashboard.js. This code blocks the main thread for 10 seconds; if you uncomment it, you will see that the app becomes frozen for 10 seconds:

The Web Worker is set up in dashboard.js. A Web Worker is a separate JS file, which is used to create the Worker object. The API allows sending and receiving messages, so we can communicate to and from the Web Worker (start a new task and receive a message when the task is completed):

Web Worker code — onmessage is invoked when a message arrives from the main thread; postMessage sends a message back to the main thread:
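The pattern described above can be sketched as follows. This is a browser-only sketch, not code from the sample app: the file name heavyTask.js, the iteration count and the message shape are all made-up examples.

```
// heavyTask.js -- the worker file (name is an assumption, not from the sample app).
// Runs in its own thread: no DOM access, communication only via messages.
onmessage = (event) => {
  let sum = 0;
  for (let i = 0; i < event.data.iterations; i += 1) {
    sum += i; // stand-in for a long, CPU-intensive task
  }
  postMessage(sum); // send the result back to the main thread
};

// dashboard.js -- main-thread side: start the task and react to the result.
const worker = new Worker('heavyTask.js');
worker.onmessage = (event) => {
  console.log('Result from worker:', event.data); // UI stayed responsive meanwhile
};
worker.postMessage({ iterations: 100000000 }); // kick off the long task
```

While the worker loops, the main thread keeps handling UI events, which is exactly the behavior the blocked-for-10-seconds demo lacks.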

The sample app is available on GitHub repo.

Azure Advisor And Fixing Errors

Jeff Moss - Mon, 2019-09-09 17:23

Azure can be configured to send you advisor reports detailing things that are not quite right in your environment. The advisor is not necessarily always right, but it’s sensible to review the output periodically, even if it relates to non-production environments.

A few issues popped up on an advisor report on my recent travels, and although you can just use the entries in the report on the portal to target the offending resources, I thought it might be helpful to write some PowerShell to identify the offending resources as an alternative.

Secure transfer to storage accounts should be enabled

This error shows up similar to this on the report:

Fairly obvious what this means really – the storage account has a setting which is currently set to allow insecure transfers (via http rather than https) – an example looks like this under the Configuration blade of the Storage Account:

The advisor highlights this and the solution is to just set the toggle to Enabled for “Secure transfer required” and press save.

To identify all the storage accounts which have this issue use the following:

Get-AzStorageAccount | where {$_.EnableHttpsTrafficOnly -eq $False}

This gives output similar to the following (redacted):

PS Azure:> Get-AzStorageAccount | where {$_.EnableHttpsTrafficOnly -eq $False}

StorageAccountName ResourceGroupName Location    SkuName      Kind    AccessTier CreationTime         ProvisioningState EnableHttpsTrafficOnly
------------------ ----------------- ----------- ------------ ------- ---------- -------------------- ----------------- ----------------------
XXXXXXXXXXXXXXXXXX AAAAAAAAAAAAAAA northeurope Standard_LRS Storage 9/6/19 9:51:53 PM Succeeded False
YYYYYYYYYYYYYYYYYY AAAAAAAAAAAAAAA northeurope Standard_LRS Storage 6/26/19 3:29:38 PM Succeeded False
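If several accounts are affected, the same filter can drive the fix in bulk. A sketch (Set-AzStorageAccount flips the “Secure transfer required” toggle; run against your own subscription with care):

```
# Enable "Secure transfer required" on every storage account that still allows HTTP
Get-AzStorageAccount | Where-Object { $_.EnableHttpsTrafficOnly -eq $False } | ForEach-Object {
    Set-AzStorageAccount -ResourceGroupName $_.ResourceGroupName `
                         -Name $_.StorageAccountName `
                         -EnableHttpsTrafficOnly $true
}
```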
An Azure Active Directory administrator should be provisioned for SQL servers

This one appears like the following in the advisor output:

As a long-term Oracle guy I’m no SQL Server expert, so I can’t quite see why this is an issue if you have a SQL Server authenticated administrative user active – no doubt a friendly SQL DBA will chime in and explain.

To fix this navigate to the SQL Server in question and the Active Directory admin blade and select “Set admin”, choose a user from the Active Directory and press Save.
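The same fix can be scripted. A sketch, where the resource group, server and user are made-up examples:

```
# Set an Azure AD administrator on a (hypothetical) SQL Server
Set-AzSqlServerActiveDirectoryAdministrator `
    -ResourceGroupName "myResourceGroup" `
    -ServerName "mysqlserver2" `
    -DisplayName "clark.kent@dailyplanet.com"
```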

To find all SQL Servers affected by this I wrote the following Powershell:

$sqlservers = Get-AzResource -ResourceType Microsoft.Sql/servers
foreach ($sqlserver in $sqlservers)
{
    $ADAdmin = Get-AzureRmSqlServerActiveDirectoryAdministrator -ServerName $sqlserver.Name -ResourceGroupName $sqlserver.ResourceGroupName
    "AD Administrator:" + $ADAdmin.DisplayName + "/" + $ADAdmin.ObjectId
}

This returns output similar to the following (redacted):

AD Administrator:clark.kent@dailyplanet.com/fe93c742-d83c-2b4c-bc38-83bc34def38c
AD Administrator:/
AD Administrator:clark.kent@dailyplanet.com/fe93c742-d83c-2b4c-bc38-83bc34def38c
AD Administrator:clark.kent@dailyplanet.com/fe93c742-d83c-2b4c-bc38-83bc34def38c

From the above you can see that mysqlserver2 has no AD Administrator and will be showing up on the advisor report.

Enable virtual machine backup to protect your data from corruption and accidental deletion

This one appears like the following in the advisor output:

To fix this, navigate to the Backup blade on the VM Resource in question and set the appropriate settings to enable the backup.
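Enabling the backup can also be scripted. A sketch assuming an existing Recovery Services vault and backup policy; all names here are made up:

```
# Protect a (hypothetical) VM with an existing vault's policy
$vault  = Get-AzRecoveryServicesVault -ResourceGroupName "myResourceGroup" -Name "myVault"
$policy = Get-AzRecoveryServicesBackupProtectionPolicy -VaultId $vault.ID -Name "DefaultPolicy"
Enable-AzRecoveryServicesBackupProtection `
    -ResourceGroupName "myResourceGroup" `
    -Name "myVM2" `
    -Policy $policy `
    -VaultId $vault.ID
```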

To identify VMs where this issue is evident use the following Powershell:

$VMs = Get-AzVM
foreach ($VM in $VMs)
{
    "VM: " + $VM.Name
    $RecoveryServicesVaults = Get-AzRecoveryServicesVault
    foreach ($RecoveryServicesVault in $RecoveryServicesVaults)
    {
        Get-AzRecoveryServicesBackupContainer -VaultID $RecoveryServicesVault.ID -ContainerType "AzureVM" -Status "Registered" -FriendlyName $VM.Name
    }
}

This gives results similar to the following, allowing you to see VMs where no backup is enabled:

VM: myVM1

FriendlyName                   ResourceGroupName    Status               ContainerType
------------                   -----------------    ------               -------------
myVM1                          myResourceGroup      Registered           AzureVM
myVM1                          myResourceGroup      Registered           AzureVM
myVM1                          myResourceGroup      Registered           AzureVM
VM: myVM2
VM: myVM3
myVM3                          myResourceGroup      Registered           AzureVM
myVM3                          myResourceGroup      Registered           AzureVM
myVM3                          myResourceGroup      Registered           AzureVM

What you can see from the above is that myVM1 and myVM3 both have registered backups, unlike myVM2, which has none; therefore myVM2 needs backup enabling.

OpenWorld 2019: I'm Speaking!

Jim Marion - Mon, 2019-09-09 16:02

Are you ready for OpenWorld 2019? Have you built your schedule? If not, I suggest you get right on it! The best sessions are standing room only. If you don't build your schedule, then you have to wait outside each session until the last few minutes, whereas preregistered attendees walk right in, taking the best seats.

As you plan your PeopleSoft-focused trip to OpenWorld, I recommend starting with Rebekah Jackson's OpenWorld Preview video:

Next, visit peoplesoftoow.com for a complete list of PeopleSoft sessions at OpenWorld. You definitely want to make sure the Roadmap sessions are on your agenda.

Before leaving home, be sure to register for the Quest PeopleSoft reception being held Monday night at the Epic Steakhouse. Details are available on the Quest Community Site.

And finally, where will you find me? In pretty much all of the PeopleTools sessions listed in the PeopleTools Program Guide. More specifically, Sarah and I will be leading the session Getting the Most Out of PeopleSoft PeopleTools: Tips and Techniques on Wednesday, Sep 18, 11:15 a.m. - 12:00 p.m. in Moscone West - Room 2002.

We look forward to seeing you in San Francisco!

Basic Replication -- 2b : Elements for creating a Materialized View

Hemant K Chitale - Mon, 2019-09-09 09:02
Continuing the previous post: what happens when there is an UPDATE to the source table?
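As a reminder, the setup from the previous post would have looked along these lines. This is a sketch: the object names match the queries below, but the exact options used in the original post may differ.

```
-- Creating the materialized view log is what produces the MLOG$_SOURCE_TABLE
-- (and RUPD$_SOURCE_TABLE) tables queried below.
CREATE MATERIALIZED VIEW LOG ON source_table;

-- A fast-refreshable materialized view over the source table.
CREATE MATERIALIZED VIEW mv_of_source
  REFRESH FAST ON DEMAND
  AS SELECT * FROM source_table;
```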

SQL> select * from source_table;

        ID DATA_ELEMENT_1  DATA_ELEMENT_2  DATE_COL
---------- --------------- --------------- ---------
         1 First           One             18-AUG-19
         3 Third           Three           18-AUG-19
         4 Fourth          Four            18-AUG-19

SQL> select * from mlog$_source_table;

no rows selected

SQL> select * from rupd$_source_table;

no rows selected

SQL> update source_table
2 set data_element_2 = 'Updated', date_col=sysdate
3 where id=4;

1 row updated.

SQL> select * from rupd$_source_table;

no rows selected

SQL> commit;

Commit complete.

SQL> select * from rupd$_source_table;

no rows selected

SQL> select * from mlog$_source_table;

        ID SNAPTIME$ D O
---------- --------- - -
         4 01-JAN-00 U U


So, it is clear that UPDATES, too, go to the MLOG$ table.

What about multi-row operations ?

SQL> update source_table set id=id+100;

3 rows updated.

SQL> select * from rupd$_source_table;

no rows selected

SQL> select * from mlog$_source_table;

        ID SNAPTIME$ D O
---------- --------- - -
         4 01-JAN-00 U U
         1 01-JAN-00 D O
       101 01-JAN-00 I N
         3 01-JAN-00 D O
       103 01-JAN-00 I N
         4 01-JAN-00 D O
       104 01-JAN-00 I N

7 rows selected.


Wow ! Three rows updated in the Source Table translated to 6 rows in the MLOG$ table ! Each updated row was represented by a DMLTYPE$$='D' and OLD_NEW$$='O' entry followed by a DMLTYPE$$='I' and OLD_NEW$$='N' entry. So that should mean "delete the old row from the materialized view and insert the new row into the materialized view" ??

(For the time being, we'll ignore SNAPTIME$$ being '01-JAN-00').

So an UPDATE to the Source Table of a Materialized View can be expensive during the UPDATE (as it creates two entries in the MLOG$ table) and for subsequent refreshes as well !

What happens when I refresh the Materialized View ?

SQL> execute dbms_session.session_trace_enable;

PL/SQL procedure successfully completed.

SQL> execute dbms_mview.refresh('MV_OF_SOURCE');

PL/SQL procedure successfully completed.

SQL> execute dbms_session.session_trace_disable;

PL/SQL procedure successfully completed.


The session trace file shows these operations (I've excluded a large number of recursive SQLs and SQLs that were sampling the data for optimisation of execution plans):

update "HEMANT"."MLOG$_SOURCE_TABLE"
set snaptime$$ = :1
where snaptime$$ > to_date('2100-01-01:00:00:00','YYYY-MM-DD:HH24:MI:SS')


select 1 from "HEMANT"."MLOG$_SOURCE_TABLE"
where snaptime$$ > :1
and ((dmltype$$ IN ('I', 'D')) or (dmltype$$ = 'U' and old_new$$ in ('U', 'O')
and sys.dbms_snapshot_utl.vector_compare(:2, change_vector$$) = 1))
and rownum = 1


select dmltype$$, count(*) cnt from "HEMANT"."MLOG$_SOURCE_TABLE"
where snaptime$$ > :1 and snaptime$$ <= :2
group by dmltype$$ order by dmltype$$

delete from "HEMANT"."MLOG$_SOURCE_TABLE"
where snaptime$$ <= :1

and this being the refresh (merge update) of the target MV
WHERE "SNAPTIME$$" > :1 AND ("DMLTYPE$$" != 'I'))


So, we see a large number of intensive operations against the MLOG$ Materialized View Log object.

And on the MV, there is a DELETE followed by a MERGE (UPDATE/INSERT).

Two takeaways :
1.  Updating the Source Table of a Materialized View can have noticeable overheads
2.  Refreshing a Materialized View takes some effort on the part of the database

(Did you notice the strange year 2100 date in the update of the MLOG$ table?)
Categories: DBA Blogs

POUG Conference 2019

Yann Neuhaus - Mon, 2019-09-09 03:47

POUG (Pint with Oracle users group) organized its annual conference on 6-7th September in Wroclaw, at the New Horizons Cinema.

My abstract about “MySQL 8.0 Community: Ready for GDPR?” was accepted, so I had the opportunity to be there.


My talk was planned for the first day. The new MySQL 8.0 version introduces several improvements in security, and these are the main points I discussed:
– Encryption of Redo/Undo and Binary/Relay log files, which enriches the existing datafile encryption
– Some password features such as:
* Password Reuse Policy, to prevent a user from always reusing the same passwords
* Password Verification Policy, to require the current password before changing it
* the validate_password Component (which replaces the old validate_password Plugin), to define a secure password policy through some system variables and 3 different levels
– The new caching_sha2_password plugin, which lets you manage authentication in a faster and more secure way
– SQL Roles, to simplify user access rights management

Here are some interesting sessions that I attended.

Keep them out of the database!

How do you prevent unwanted connections from getting access to your database? Flora Barrièle and Martin Berger explained some possibilities.
The following methods have limitations:
– Filter through a firewall, because we have to involve the network team
– Use a dedicated listener for each instance, because it's difficult to manage with a big number of databases and environments
To solve these issues we can use instead:
– Connection Manager (a sort of listener with, in addition, a set of rules to define the source, service, activity, destination)
– Access Control List (ACL, a new functionality of Oracle 12.2 which is used to protect PDBs and associated services)
– Logon triggers
– Audit and reports
In conclusion, different solutions exist. First of all we have to know our ecosystem and our environments before deciding what to put in place. Then we should make it as simple as possible, test it, and check what is best for our specific situation.

The MacGyver approach

Lothar Flatz explained an approach for analyzing what's wrong with a query and fixing it when we don't have a lot of time.
The first step is to optimize, and for this we have to know how the optimizer works. Then we can enforce new plans (inserting hints, changing statement text, …) and look for the outline.
Sometimes it's not easy. Lothar's session ended with this quote: "Performance optimization is not magic: it's based on knowledge and facts".

From transportable tablespaces to pluggable databases

Franck Pachot showed different ways to transport data in different Oracle versions:
– Simple logical move through export/import -> slow
– Logical move including direct-path with Data Pump export/import -> flexible, but slow
– Physical transport with RMAN duplicate -> fast, but not cross-versions
– Transportable Tablespaces which provides a mix between logical move (for metadata) and physical transport (for application/user data) -> fast and flexible (cross-versions)
– Physical transport through PDB clone -> fast, efficient, ideal in a multi-tenant environment
– Full Transportable Tablespaces to move user tablespaces and other objects such as roles, users, … -> flexible, ideal to export from 11gR2 to 12c and from non-CDB to multi-tenant, no need to run scripts on the dictionary

Data Guard new features

The Oracle MAA (Maximum Availability Architectures) describes 4 HA reference architectures in order to align Oracle capabilities with customer Service Level requirements. Oracle Data Guard can match Silver, Gold and Platinum reference architectures.
Pieter Van Puymbroeck (Oracle Product Manager for Data Guard) talked about following new 19c features:
– Flashback operations are propagated automatically to the standby (requirements: configure standby for flashback database and in MOUNT state first, set DB_FLASHBACK_RETENTION_TARGET)
– Restore points are automatically propagated from the primary to the standby
– On the Active Data Guard standby, the database buffer cache state is preserved during a role change
– Multi-Instance Redo Apply (parallel redo log apply in RAC environments)
– Observe-Only mode to test fast-start failover without having any impact on the production database
– New commands such as “show configuration lag;” to check all members, and to export/import the Broker configuration

Discussion Panel

In the form of a discussion animated by Kamil Stawiarski, and with funny but serious exchanges with the audience, some Oracle Product Managers and other Oracle specialists talked about one of most topical subject today: Cloud vs on-prem. Automation, Exadata Cloud at Customer, Oracle documentation and log files and much more…

Networking moments

Lots of networking moments during this conference: a game in the city center, a speakers dinner, lunch time at the conference, the party in the Grey Music Club.

As usual it was a real pleasure to share knowledge and meet old friends and new faces.
Thanks to Luiza, Kamil and the ORA-600 Database Whisperers for their warm welcome and for the perfect organization of the event.

A suggestion? Don’t miss it next year!

Cet article POUG Conference 2019 est apparu en premier sur Blog dbi services.

Oracle GoldenGate Microservices Upgrade – 12.3.0.x/18.1.0.x to 19c

DBASolved - Sun, 2019-09-08 16:45

Oracle GoldenGate Microservices has been out for a few years now. Many customers in many different industries have pursued the architecture and use it in many different use-cases and architectures. But what do you do when you want to upgrade your Oracle GoldenGate Microservices Architecture?

In a previous post, I wrote about how to upgrade Oracle GoldenGate Microservices using the GUI or HTML5 approach – Upgrading GoldenGate Microservices Architecture – GUI Based (January 2018). Today, many of the steps are exactly the same as they were a year ago. The good news is that Oracle has documented the process a bit more clearly in the latest upgrade document (here).

So why a new post on upgrading the architecture? Over the last few days, I’ve been looking into a problem that has been reported by customers. This problem affects the upgrade process, not so much in how to do the upgrade but when the upgrade is done.

In a nutshell, the upgrade process for Oracle GoldenGate Microservices is done in these few steps:

1. Download the latest version of Oracle GoldenGate Microservices -> In this case: (here); however, this approach will work with as well.
2. Upload the software, if needed, to a staging area on the server where Oracle GoldenGate Microservices is running. Ideally, you should be upgrading from OGG 12c (12.3.x) or 18c (18.1.x).
3. Unzip the downloaded zip file to a temporary folder in the staging area
4. Execute runInstaller from the directory in the staging area. This will start the Oracle Universal Installer for Oracle GoldenGate.
5. Within the installation process, provide the Oracle GoldenGate Home for the Software Location.
6. Click Install to begin the installation into a New Oracle GoldenGate Home.

Note: At this point, you should have two Oracle GoldenGate Microservices Homes. One for the older version and one for the 19c version.

7. Login to the ServiceManager
8. Under Deployments -> select ServiceManager
9. Under Deployment Details -> select the pencil icon. This will open the edit field for the GoldenGate Home.
10. Edit the GoldenGate Home -> change to the new Oracle GoldenGate Microservices Home then click Apply.
This will force the ServiceManager to reboot.

At this point, you may be asking yourself, I’ve done everything but the ServiceManager has not come back up. What is going on?

If you have configured the ServiceManager as a daemon, you can try to start the ServiceManager by using the systemctl commands.

systemctl start OracleGoldenGate


This command will just return with nothing of note. In order to find out if it started successfully or not, check the status of the service.

systemctl status OracleGoldenGate
OracleGoldenGate.service - Oracle GoldenGate Service Manager
   Loaded: loaded (/etc/systemd/system/OracleGoldenGate.service; enabled; vendor preset: disabled)
   Active: failed (Result: start-limit) since Sun 2019-09-08 21:27:59 UTC; 2s ago
  Process: 3430 ExecStart=/opt/app/oracle/product/12.3.0/oggcore_1/bin/ServiceManager (code=killed, signal=SEGV)
 Main PID: 3430 (code=killed, signal=SEGV)

Sep 08 21:27:59 OGG12c219cUpgrade systemd[1]: Unit OracleGoldenGate.service entered failed state.
Sep 08 21:27:59 OGG12c219cUpgrade systemd[1]: OracleGoldenGate.service failed.
Sep 08 21:27:59 OGG12c219cUpgrade systemd[1]: OracleGoldenGate.service holdoff time over, scheduling restart.
Sep 08 21:27:59 OGG12c219cUpgrade systemd[1]: Stopped Oracle GoldenGate Service Manager.
Sep 08 21:27:59 OGG12c219cUpgrade systemd[1]: start request repeated too quickly for OracleGoldenGate.service
Sep 08 21:27:59 OGG12c219cUpgrade systemd[1]: Failed to start Oracle GoldenGate Service Manager.
Sep 08 21:27:59 OGG12c219cUpgrade systemd[1]: Unit OracleGoldenGate.service entered failed state.
Sep 08 21:27:59 OGG12c219cUpgrade systemd[1]: OracleGoldenGate.service failed.


As you can tell the ServiceManager has failed to start. Why is this?

If you look at the output of the last systemctl status command, you see that the service is still referencing the old Oracle GoldenGate Microservices home.

Now the question becomes, how do I fix this?

The solution here is simple. Go to the deployment home for the ServiceManager and look under the bin directory. You will see the registerServiceManager.sh script. Edit this script and change the variable OGG_HOME to match the new Oracle GoldenGate Home for 19c.
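If you prefer to script that edit, a quick sketch (not from the original post) is below. To stay safe it operates on a stand-in copy under /tmp; point SM_BIN at the real deployment bin directory (e.g. /opt/app/oracle/gg_deployments/ServiceManager/bin) when doing it for real.

```shell
#!/bin/sh
# Sketch: rewrite the OGG_HOME assignment to the new 19c home.
SM_BIN=${SM_BIN:-/tmp/sm_demo}
NEW_HOME=/opt/app/oracle/product/19.1.0/oggcore_1

# Stand-in for the shipped script (only the line we care about);
# skipped if the file already exists.
mkdir -p "$SM_BIN"
[ -f "$SM_BIN/registerServiceManager.sh" ] || \
  echo 'OGG_HOME="/opt/app/oracle/product/12.3.0/oggcore_1"' \
    > "$SM_BIN/registerServiceManager.sh"

# Back up first, then rewrite the OGG_HOME line in place.
cp "$SM_BIN/registerServiceManager.sh" "$SM_BIN/registerServiceManager.sh.bak"
sed -i "s|^OGG_HOME=.*|OGG_HOME=\"$NEW_HOME\"|" "$SM_BIN/registerServiceManager.sh"

# Quick visual check of the result.
grep '^OGG_HOME=' "$SM_BIN/registerServiceManager.sh"
```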

$ cd /opt/app/oracle/gg_deployments/ServiceManager/bin
$ ls
$ vi registerServiceManager.sh


# Check if this script is being run as root user
if [[ $EUID -ne 0 ]]; then
  echo "Error: This script must be run as root."
  exit 1
fi

# OGG Software Home location
OGG_HOME="/opt/app/oracle/product/12.3.0/oggcore_1"   # <— Change to reflect new OGG_HOME

With the registerServiceManager.sh file edited, go back and re-run the file as the root user.

# cd /opt/app/oracle/gg_deployments/ServiceManager/bin
# ./registerServiceManager.sh
Copyright (c) 2017, Oracle and/or its affiliates. All rights reserved.
     Oracle GoldenGate Install As Service Script
Running OracleGoldenGateInstall.sh…

With the service now updated, you can start and check the service.

# systemctl start OracleGoldenGate
# systemctl status OracleGoldenGate
OracleGoldenGate.service - Oracle GoldenGate Service Manager
   Loaded: loaded (/etc/systemd/system/OracleGoldenGate.service; enabled; vendor preset: disabled)
   Active: active (running) since Sun 2019-09-08 21:39:58 UTC; 2s ago
 Main PID: 21946 (ServiceManager)
    Tasks: 13
   CGroup: /system.slice/OracleGoldenGate.service
           └─21946 /opt/app/oracle/product/19.1.0/oggcore_1/bin/ServiceManager

Sep 08 21:39:58 OGG12c219cUpgrade systemd[1]: Started Oracle GoldenGate Service Manager.
Sep 08 21:39:58 OGG12c219cUpgrade ServiceManager[21946]: 2019-09-08T21:39:58.509+0000 INFO | Configuring user authorization secure store path as '/opt/app/oracle/gg_deployments/Serv...ureStore/'.
Sep 08 21:39:58 OGG12c219cUpgrade ServiceManager[21946]: 2019-09-08T21:39:58.510+0000 INFO | Configuring user authorization as ENABLED.
Sep 08 21:39:58 OGG12c219cUpgrade ServiceManager[21946]: Oracle GoldenGate Service Manager for Oracle
Sep 08 21:39:58 OGG12c219cUpgrade ServiceManager[21946]: Version OGGCORE_19.
Sep 08 21:39:58 OGG12c219cUpgrade ServiceManager[21946]: Copyright (C) 1995, 2019, Oracle and/or its affiliates. All rights reserved.
Sep 08 21:39:58 OGG12c219cUpgrade ServiceManager[21946]: Linux, x64, 64bit (optimized) on May  8 2019 18:17:50
Sep 08 21:39:58 OGG12c219cUpgrade ServiceManager[21946]: Operating system character set identified as UTF-8.
Hint: Some lines were ellipsized, use -l to show in full.

At this point, you can now log back into the ServiceManager and confirm that the upgrade was done successfully.

Note: If you have your ServiceManager configured to be manually started and stopped, then you will need to edit the startSM.sh and stopSM.sh files. The OGG_HOME has to be changed in these files as well.



Finding databases on each SQL Server using Powershell

Jeff Moss - Sat, 2019-09-07 06:07

A client had a requirement this week to list out the SQL Servers in their Azure cloud environment and the databases installed on each of them. The reason for the requirement was to find SQL Servers that no longer had any databases on them so they could be considered for removal.

Essentially, the script gathers a list of SQL Server resources, loops through them, and counts and itemises the databases on each, excluding the master database since that's not relevant to the requirement.

I wrote the following PowerShell:

$sqlservers = Get-AzResource -ResourceType Microsoft.Sql/servers
foreach ($sqlserver in $sqlservers) {
     $databases = Get-AzResource -ResourceType Microsoft.Sql/servers/databases|Where-Object {$_.Name -notlike "*/master"}|Where-Object {$_.Name -like ($sqlserver.Name + "/*")}
     "Database Count:" + $databases.Count
     ">>>" + $databases.Name
}

Which returns the following type of output (amended for privacy):

Database Count:0
Database Count:1
Database Count:1
Database Count:3
>>>mytestsqlserver4/mydatabase3 mytestsqlserver4/mydatabase4 mytestsqlserver4/mydatabase5
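The same count-and-itemise logic can also be sketched with plain shell tools over the Azure CLI's TSV output. In this illustrative sketch (not the author's script) the az call is replaced by a here-string of sample "server/database" resource names taken from the output above, so the filtering itself can be run anywhere:

```shell
#!/bin/sh
# Sketch only. In real use, the list below would come from:
#   az resource list --resource-type Microsoft.Sql/servers/databases \
#       --query [].name --output tsv
dbs='mytestsqlserver1/master
mytestsqlserver4/master
mytestsqlserver4/mydatabase3
mytestsqlserver4/mydatabase4
mytestsqlserver4/mydatabase5'

for server in mytestsqlserver1 mytestsqlserver4; do
  # Keep this server's databases; drop master, which always exists.
  matches=$(printf '%s\n' "$dbs" | grep "^$server/" | grep -v '/master$')
  count=$(printf '%s\n' "$matches" | grep -c .)
  echo "$server Database Count:$count"
  [ -n "$matches" ] && echo ">>>$matches"
done
```

A server whose only database is master reports a count of 0, matching the "candidate for removal" requirement.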

Oracle Cloud: Sign up failed... [2]

Dietrich Schroff - Fri, 2019-09-06 14:36
After my failed registration for Oracle Cloud, I very quickly got an email from Oracle support with the following requirements:
So I tried once again with a Firefox "private" window - but this failed again.
The next idea was to use a completely freshly installed browser: so I tried with a fresh Google Chrome.
But the error still remained:
Let's hope Oracle support has another trick which will get me onto Oracle Cloud.


There is a tiny link "click here" just above the blue button. I have to use this link with the verification code provided by Oracle support.
But then the error is:
I checked this with a VISA and a MASTERCARD. Neither of them worked...

UPDATE 2: see here how the problem was solved.

Oracle OpenWorld and Code One 2019

Tim Hall - Fri, 2019-09-06 02:40

It’s nearly time for the madness to start again. This will be my 14th trip to San Francisco for OpenWorld, and however many it is since Java One and Code One got wrapped up into this…

  • Flights booked : ✔
  • Hotel booked : ✔
  • ESTA approved : ✔
  • Irrational fear of flying and general anxiety : ✔
  • 80 lbs weight loss : ❌
  • Talk complete : ❌
  • Denial : ✔

At the moment the scheduled stuff looks like this.

Friday :

  • 03:00 UK time : Start the trip over to SF. I know I said I would never do this again, and I know what the consequences will be…
  • Evening SF time : Groundbreaker Ambassador Dinner

Saturday : Day : ACE Director Briefing

Sunday :

  • Day : Groundbreaker Ambassador Briefing
  • Evening : Oracle ACE Dinner

Tuesday :

Session ID: DEV1314
The Seven Deadly Sins of SQL
Date: 17th Sept 2019
Time: 11:30 – 12:15

Wednesday :

Session ID: DEV6013
Embracing Constant Technical Innovation in Our Daily Life
Date: 18th Sept 2019
Time: 16:00 – 16:45
Panel: Gustavo Gonzalez, Sven Bernhardt, Debra Lilley, Francisco Munoz Alvarez, Me

Thursday : Fly home.

Friday : Arrive home, have a post-conference breakdown and promise myself I’ll never do it again…

In addition to those I have to schedule in the following:

  • A shift on the Groundbreakers Hub, but I’m not sure what day or what demo yet. I’ll probably hang around there a lot anyway.
  • Meet a photographer to get some photos done. I’ve told them they’ve got to be tasteful and “only above the waist”.
  • Spend some time annoying everyone on the demo grounds. I know Kris and Jeff are desperate to see me. It’s the highlight of their year!
  • Stalk Wim Coekaerts, whilst maintaining an air of ambivalence, so as not to give the game away. Can anyone else hear Bette Midler singing “Wind Beneath My Wings”? No? Just me?

There’s a whole bunch of other stuff too, but I’ve not got through all my emails yet. Just looking at this is giving me the fear. So much for my year off conferences…

See you there!



Oracle OpenWorld and Code One 2019 was first posted on September 6, 2019 at 8:40 am.

What does EP Stand for? A Simple Answer!

VitalSoftTech - Thu, 2019-09-05 06:52

If you’re an independent artist or a beginner musician in the competitive music industry, you’ve probably heard of the acronym EP before, but the real question is: what does EP stand for in music? Many new independent artists and musicians have now entered the market, and the music industry has become more competitive than ever. […]

The post What does EP Stand for? A Simple Answer! appeared first on VitalSoftTech.


September 27 Arizona Oracle User Group Meeting

Bobby Durrett's DBA Blog - Wed, 2019-09-04 10:30

The Arizona Oracle User Group (AZORA) is cranking up its meeting schedule again now that the blazing hot summer is starting to come to an end. Our next meeting is Friday, September 27, 2019 from 12:00 PM to 4:00 PM MST.

Here is the Meetup link: Meetup

Thank you to Republic Services for allowing us to meet in their fantastic training rooms.

Thanks also to OneNeck IT Solutions for sponsoring our lunch.

OneNeck’s Biju Thomas will speak about three highly relevant topics:

  • Oracle’s Autonomous Database — “What’s the Admin Role?”
  • Oracle Open World #OOW 19 Recap
  • Let’s Talk AI, ML, and DL

I am looking forward to learning something new about these areas of technology. We work in a constantly evolving IT landscape so learning about the latest trends can only help us in our careers. Plus, it should be interesting and fun.

I hope to see you there.



Changing the Search Page Operator Version 2

Jim Marion - Tue, 2019-09-03 22:38

In 2011, just after PeopleTools 8.50 released, I wrote the post Changing the Search Page Operator. In that post, I demonstrated how to Monkey Patch PeopleSoft to do something you can't do with core PeopleTools: change the default advanced search page operator from Begins With to Between. A lot has changed since I wrote that initial post:

  • PeopleSoft switched from net.ContentLoader to net2.ContentLoader,
  • PeopleSoft released Branding System Options, which supports global JavaScript injection,
  • We began using RequireJS to manage JavaScript dependencies, and
  • The default user experience switched from Classic to Fluid.

Let's create a new version. Before writing any code, let's discuss that last bullet point. This post will focus on Classic. Why? Two reasons:

  1. Fluid doesn't use traditional search pages built from search record metadata and
  2. Roughly 95% of the components in PeopleSoft are still Classic.

This new version of the code will take advantage of Branding System Options and JavaScript dependency management. Our scenario will use the Job Data component (Workforce Administration > Job Information > Job Data). We will cause the Name search operator to default to Between:

Let's start by creating JavaScript definitions for each library. Download the following libraries directly from their sources:

In your PeopleSoft system online, navigate to PeopleTools > Portal > Branding > Branding Objects. Switch to the JavaScript tab and create a JavaScript definition for each item. So that your names match our RequireJS configuration, use the names JSM_JQUERY_JS and JSM_REQUIRE_JS. For compatibility reasons, we should also protect our version of jQuery from any other versions of jQuery that may be loaded by PeopleTools. To do this, we create a library named JSM_PRIVATE_JQ_JS that contains the following code:
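The exact body of JSM_PRIVATE_JQ_JS is shown in the original post; as a stand-in, the standard pattern looks roughly like this (the `jsm` namespace name is an assumption, and `jQuery.noConflict(true)` is jQuery's documented API for releasing both globals):

```javascript
// Hedged sketch of JSM_PRIVATE_JQ_JS -- not the post's exact code.
// Idea: hand the jQuery that JSM_JQUERY_JS just loaded to a private
// namespace, restoring whatever $/jQuery PeopleTools itself put on the page.
var root = (typeof window !== 'undefined') ? window : globalThis;
root.jsm = root.jsm || {};
if (root.jQuery && typeof root.jQuery.noConflict === 'function') {
  // noConflict(true) releases both `$` and the `jQuery` global.
  root.jsm.$ = root.jQuery.noConflict(true);
}
```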

Next we need a RequireJS configuration that tells RequireJS how to locate each library we intend to use. I named mine JSM_REQUIREJS_CONFIG_JS, but this name is less important because we will select it from a prompt when configuring Branding System Options. Here is our RequireJS configuration:
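The configuration itself appears in the original post; a minimal stand-in looks like the following, using RequireJS's documented "declare a global `require` object before require.js loads" form. The path values here are placeholders — in PeopleTools they would be the script URLs generated for the JSM_* JavaScript definitions, not literal names:

```javascript
// Hedged sketch of JSM_REQUIREJS_CONFIG_JS -- not the post's exact code.
// RequireJS reads a pre-existing global `require` object as configuration.
var require = {
  paths: {
    // Module id -> URL (placeholders; PeopleTools generates the real URLs).
    jquery: 'JSM_PRIVATE_JQ_JS_URL_PLACEHOLDER',
    searchop: 'JSM_SEARCHOP_JS_URL_PLACEHOLDER'
  }
};
```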

Note: I snuck an extra library into the RequireJS configuration. Can you figure out what it is? I will be demonstrating this extra library at my session for OpenWorld 2019. Don't worry about removing it, however. As long as we don't reference it, RequireJS will never attempt to load it.

We must create one more JavaScript library to "listen" to page changes, waiting for PeopleSoft to load an advanced search page. Create a library containing the following code. As with the previous one, the name isn't as important because we will select it from a list of values during configuration. But in case you are struggling with a creative name, I named mine JSM_SEARCHOP_JS.

Your list of JavaScript files should now look something like:

After uploading our libraries, we can turn our attention to configuration. Navigate to PeopleTools > Portal > Branding > Branding System Options. In the Additional JavaScript Objects section (this should really be named libraries), insert the following three libraries in order. Order matters: we first load RequireJS, we then configure RequireJS, and finally we use RequireJS.


About the code that makes all of this happen (JSM_SEARCHOP_JS)... It is incredibly similar to the first version. One important difference is that this version is loaded globally, whereas the prior iteration was locally scoped to the component. We therefore include a component-specific test. The %FormName Meta-HTML in our JavaScript helps us derive the HTML element that contains the component name. The fieldMap variable contains the mapping between component names and fields that should be changed.

Will this work in Fluid? Unfortunately, no. Fluid does not use search record metadata to generate search pages. It can (with a little work), but not in the same fashion. Fluid also doesn't support branding system options Additional JavaScript Objects. JavaScript automation is still possible, but requires a different approach (Event Mapping to inject, different variables, etc).

Are you interested in learning more about PeopleTools, JavaScript, and HTML? Attend one of our courses online or schedule a live in-person training session.

Azure Active Directory (AAD)

Jeff Moss - Tue, 2019-09-03 16:57
Find the ID of an existing user:

PS Azure:> $azureaduser=$(az ad user list --filter "userPrincipalName eq 'Fred.Smith@acme.com'" --query [].objectId --output tsv)
PS Azure:> $azureaduser

Show all users in AAD:

az ad user list --query [].userPrincipalName

London March 2020: “Oracle Indexing Internals and Best Practices” and “Oracle Performance Diagnostics and Tuning” Seminars !!

Richard Foote - Tue, 2019-09-03 06:44
It’s with great excitement that I announce I’ll finally be returning to London, UK in March 2020 to run both of my highly acclaimed seminars. The dates and registration links are as follows: 23-24 March 2020: “Oracle Indexing Internals and Best Practices” seminar – Tickets and Registration Link 25-26 March 2020: “Oracle Performance Diagnostics and […]

How are Rich Media Ads Different from Other Ad Formats?

VitalSoftTech - Tue, 2019-09-03 05:36

If you have clicked here, you’re probably wondering how rich media ads are different from other ad formats. You’ve probably heard the term “rich media ads” at least once in your foray across the advertising realm. But do you know how these interactive ads are different from other ad layouts? If you’re new to advertising, […]

The post How are Rich Media Ads Different from Other Ad Formats? appeared first on VitalSoftTech.


