Restoring default vRealize Orchestrator 6 (vRO) workflows after installing to an external database

I’ve always deployed vRealize Orchestrator (vRO) (previously known as vCenter Orchestrator (vCO)) using the embedded database, but since vRO is becoming more and more critical in our environment I’ve decided to explore deploying it in an active/active cluster.  In order to do this you need to install vRO to an external database.  I haven’t looked into the possibility of clustering the embedded Postgres database.

I admit that I haven’t read all of the vRO documentation (with vSphere 6 recently coming out there is a lot to read), but I did skim the parts related to installing to an external database and didn’t see anything regarding losing the default workflows that are included with vRO.

When you first deploy vRO you have the following workflows:


After you configure vRO to use an external database you will have the following workflows:


Oh dear.

So how do you get the missing workflows back?  Well, if you knew this would happen, you would have probably backed up all of the workflows and wouldn’t be reading this post.

Backing up the workflows

Here is how to backup all of the workflows:

  1. Select Design from the drop down menu.
  2. Select the Packages tab.
  3. Select the “Export Package” icon.
  4. Save the package.  Do this for each package.
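If you’d rather script the exports, vRO also exposes a REST API.  The sketch below only builds and prints the curl commands rather than running them; the host name and package names are placeholders, and the endpoint path and Accept header are my assumptions based on the vRO REST API, so verify them against your own instance before relying on this:

```shell
# Dry run: build and print one curl command per package instead of executing it.
# VRO_HOST and the package names are placeholders for your environment.
VRO_HOST="vro.example.local"
for pkg in com.vmware.library.vcenter com.vmware.library.workflow; do
  cmd="curl -k -u vcoadmin:vcoadmin -H 'Accept: application/zip' -o ${pkg}.package https://${VRO_HOST}:8281/vco/api/packages/${pkg}/"
  echo "$cmd"
done
```

Once you’ve reviewed the printed commands, you can paste them back into the shell to perform the actual downloads.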


Restoring the workflows

After you’ve configured the external database, you can proceed with restoring the workflows.

  1. Select Design from the drop down menu.
  2. Select the Packages tab.
  3. Select the “Import package” icon.


Import the first package.  I don’t know how to export/import all the packages at once.  I’ll update this post if I find out.


It doesn’t take too long to import the packages.  The vCenter package takes the longest.

Restoring the workflows from another vRO instance

What if you didn’t back up the workflows?  Well, you’d probably just re-deploy and back them up, but you can also restore the workflows from another vRO instance.  You can also use this method to move workflows between vRO instances.

Deploy another vRO instance.  It may be one you are planning to use as the second node in a cluster, or one deployed just to grab the workflows from.  Here I’ve deployed another vRO instance, configured it to use DHCP, and will delete it once it’s no longer needed.

On the vRO instance you’re restoring the workflows to:

  1. Select Design from the drop down menu.
  2. Select the Packages tab.
  3. Select the “Get remote package” icon.


Enter the information for the new vRO instance.  Since I’ve done zero configuration on it, it has the default username/password of vcoadmin/vcoadmin:


Accept the certificate.

Here you can see that there is only a single package.  This is because I already imported all but one of the original packages.  This is nice because you can work your way through restoring each of the workflows and not have to remember which ones you’ve already imported:


Go ahead and select Import and you’ll see a screen that’s different from the ones we’ve seen before:


Select “Synchronize!” to import the final package.  Now you should have all of your original workflows back.

Open vSwitch on a nested KVM server running on ESXi

For this post I figured I’d attempt something that I have zero experience with.  I’ve been meaning to look into Open vSwitch for a while so I decided to deploy it running on a nested KVM server running inside of ESXi.  I deployed the environment using both Ubuntu 14.10 Server and CentOS 7.  Since Ubuntu turned out to be easier, I’m going to discuss it in this post and possibly do another post using CentOS 7.

Create the KVM server

We want to start off by creating a VM on the ESXi host that will be our KVM server.  As William Lam describes in this post, we need to expose hardware-assisted virtualization to the VM.  I’m also going to give the VM 4 vCPUs as I’ve seen the CPU load spike to 100% when using 1 or 2 vCPUs while running nested VMs.  Depending on how many VMs you plan to run and their configuration, you will need to specify the proper memory and hard disk sizes.  Here is what my configuration looks like:


If you’ve ever run VMs on a nested ESXi host, you’re probably aware that the portgroup our KVM VM is connected to needs to have specific portgroup security options set in order for the KVM VMs to access the network.  I’ve found that both Promiscuous mode and Forged transmits need to be enabled.  I created a new portgroup called “vlan3_mgmt_ovs” and modified the security settings:
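For reference, the same settings can be applied from the ESXi shell with esxcli.  The snippet below only builds and prints the command so you can review it first; the portgroup name matches mine, and you should double-check the flag names against your ESXi version:

```shell
# Dry run: print the esxcli invocation that would enable promiscuous mode
# and forged transmits on the portgroup (run the printed command on the ESXi host).
PG="vlan3_mgmt_ovs"
cmd="esxcli network vswitch standard portgroup policy security set --portgroup-name=${PG} --allow-promiscuous=true --allow-forged-transmits=true"
echo "$cmd"
```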


During the installation of the Ubuntu OS, the only package I chose to install was the SSH Server.  After the installation, go ahead and update the system by running:

sudo apt-get update
sudo apt-get upgrade

I then set a static IP by editing /etc/network/interfaces to look like:


Install KVM

Install KVM and some other utilities that we will need.

sudo apt-get install qemu-kvm libvirt-bin ubuntu-vm-builder bridge-utils cpu-checker vnc4server virtinst

Verify that KVM is okay:

chris@kvm-ubuntu:~$ kvm-ok
INFO: /dev/kvm exists
KVM acceleration can be used

Add our user to the KVM group:

sudo adduser `id -un` kvm

Create an Open vSwitch

First, watch this great video: Introduction to Open vSwitch (OVS).

Install Open vSwitch by running:

sudo apt-get install openvswitch-switch openvswitch-common

Create an Open vSwitch named ovsbr0:

sudo ovs-vsctl add-br ovsbr0

Display the new switch:

chris@kvm-ubuntu:~$ sudo ovs-vsctl show
    Bridge "ovsbr0"
        Port "ovsbr0"
            Interface "ovsbr0"
                type: internal
    ovs_version: "2.1.3"

If you’re connected to the VM via ssh, you will want to jump over to the VM’s console because we are going to make some networking changes that will disconnect you from ssh.

Move eth0 over to our new Open vSwitch ovsbr0:

sudo ovs-vsctl add-port ovsbr0 eth0

At this point your ssh connection should be dead.  You can confirm that eth0 is now on the bridge by running:

chris@kvm-ubuntu:~$ sudo ovs-vsctl show
    Bridge "ovsbr0"
        Port "ovsbr0"
            Interface "ovsbr0"
                type: internal
        Port "eth0"
            Interface "eth0"
    ovs_version: "2.1.3"

Re-configure the KVM VM’s networking

The following set of commands will get our VM back online:

Remove the IP from eth0:

sudo ifconfig eth0 0

Move the IP to ovsbr0 (substitute your own address and netmask):

sudo ifconfig ovsbr0 <ip-address> netmask <netmask>

Delete the default route if necessary:

sudo route del default

Add the default route back using ovsbr0 (substitute your own gateway):

sudo route add default gw <gateway-ip> dev ovsbr0
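Putting those steps together, the whole cutover is just a few lines.  The snippet below only prints the commands so you can review them before running; the addresses are placeholders for my lab values:

```shell
# Dry run: print the four commands that move the IP from eth0 to ovsbr0.
IP="192.168.3.10"; MASK="255.255.255.0"; GW="192.168.3.1"   # placeholders
cmds="sudo ifconfig eth0 0
sudo ifconfig ovsbr0 ${IP} netmask ${MASK}
sudo route del default
sudo route add default gw ${GW} dev ovsbr0"
echo "$cmds"
```

Run these from the VM’s console, not over ssh, since the first command drops your connection.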

At this point you should be able to ssh back into the VM, but if you reboot, the IP will be back on eth0, so let’s modify /etc/network/interfaces so that the networking comes up correctly on boot.

I grabbed most of this info from

Here is my modified /etc/network/interfaces file:

auto eth0
allow-ovsbr0 eth0
iface eth0 inet manual
    ovs_bridge ovsbr0
    ovs_type OVSPort
    pre-up ifconfig $IFACE up
    post-down ifconfig $IFACE down

auto ovsbr0
allow-ovs ovsbr0
iface ovsbr0 inet static
    ovs_type OVSBridge
    ovs_ports eth0
    address <ip-address>
    netmask <netmask>
    gateway <gateway-ip>
    pre-up ifconfig $IFACE up
    post-down ifconfig $IFACE down
    dns-search vmware.local

Reboot the VM and verify that the networking looks good.  It should look like this:


Notice how my IP is on ovsbr0 and there is no IP configured for eth0.

Create a KVM network that our guest VM can use

When I created a KVM guest VM using the method that I’m going to use, I couldn’t find a way to tell the VM to use our Open vSwitch ovsbr0, so I’m going to create a KVM network that maps to ovsbr0.

Create the XML file “ovs-network.xml” (replace the UUID with your own):

<network>
  <name>ovs-network</name>
  <uuid>REPLACE-WITH-UUID</uuid>
  <forward mode='bridge'/>
  <bridge name='ovsbr0'/>
  <virtualport type='openvswitch'/>
</network>

I generated the UUID above by running a utility called uuidgen.  You can install it by running:

sudo apt-get install uuid
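Putting it together, you can generate the file with a fresh UUID in one go.  This is a sketch; the fallback to /proc/sys/kernel/random/uuid covers systems where uuidgen isn’t installed:

```shell
# Write ovs-network.xml with a newly generated UUID.
UUID=$(uuidgen 2>/dev/null || cat /proc/sys/kernel/random/uuid)
cat > ovs-network.xml <<EOF
<network>
  <name>ovs-network</name>
  <uuid>${UUID}</uuid>
  <forward mode='bridge'/>
  <bridge name='ovsbr0'/>
  <virtualport type='openvswitch'/>
</network>
EOF
```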

Create the new network

sudo virsh net-define ovs-network.xml

Start the network and configure it to autostart

sudo virsh net-start ovs-network
sudo virsh net-autostart ovs-network

Verify the state of the new network:

sudo virsh net-list
 Name                 State      Autostart     Persistent
 default              active     yes           yes
 ovs-network          active     yes           yes

Install a guest VM on the KVM server

Before we can create a VM, we have to apply the fix described in bug 1393842.

Now we can start creating some shell variables that define our VM.  I’ve already copied the ISO images into the /var/lib/libvirt/images folder at this point.  I believe you can place them elsewhere, but you may have to make some modifications to SELinux to allow KVM to access them.

os="--os-variant=rhel7 --disk path=/var/lib/libvirt/images/CentOS-7.0-1406-x86_64-Minimal.iso,device=cdrom"
net="--network network=ovs-network"
disk="--disk /var/lib/libvirt/images/vm1.img,size=5"
gr="--graphics vnc,listen=<listen-ip>"
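The virt-install command below also references $src, $cpu, $ram, and $name, whose definitions appear to have been eaten by the blog formatting.  These stand-ins are my guesses at plausible values, not the originals:

```shell
# Hypothetical reconstructions of the variables lost from the original post.
src=""                 # install source is already covered by the cdrom disk in $os
cpu="--vcpus=1"
ram="--ram=1024"
name="vm1"
echo "${name} ${cpu} ${ram}"
```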

Create the VM:

sudo virt-install $os $net $disk $src $gr $cpu $ram --hvm --noautoconsole --name=$name

You should see:

Starting install…
Creating domain…                                                                                                                                |    0 B     00:00
Domain installation still in progress. You can reconnect to
the console to complete the installation process.

Let’s see what VNC display it is running on:

sudo virsh vncdisplay vm1

We should now be able to access our VM via VNC.  I’m using TightVNC.  Above we saw that the VM is running on display 0.  This corresponds with VNC port 5900.  If the VM said it was running on display :1, the VNC port would be 5901.
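The display-to-port mapping is just 5900 plus the display number, which is easy to compute if you’re scripting this:

```shell
# Convert a virsh vncdisplay value like ":1" to its TCP port (5900 + N).
display=":1"
port=$((5900 + ${display#:}))
echo "$port"   # prints 5901
```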


You should see the install wizard:


The VM was able to get a DHCP address from my network:


After the installation you will need to reboot the VM.  You’ll find that it doesn’t actually come back up and if you list the available VMs, you’ll see that it’s powered off:

sudo virsh list --all
 Id    Name                           State
 -     vm1                            shut off

Start it back up by running:

sudo virsh start vm1
Domain vm1 started

After installation the VM is able to ping the internet:



We started with a fresh ESXi VM, installed and configured KVM and Open vSwitch, and then installed a VM that has full network connectivity.  There is a lot of info on the web, but I couldn’t find anything for this exact scenario so I hope it helps someone avoid a lot of the issues that I ran into.  For example, even though I run a lot of nested ESXi hosts and have to configure the portgroup security, I completely forgot about it in this case.  My nested KVM guest couldn’t communicate with my network and I couldn’t understand why.  I ran tcpdump and saw ARP requests being sent out of the VM, but nothing on my network was seeing them.  I finally realized why and was deeply ashamed.

William Lam’s blog post uses a different way of creating the KVM guest than virt-install.  I’m pretty sure that his method allows you to specify the Open vSwitch network without having to define a KVM network.  In the interface section you would just need to specify the Open vSwitch information:

<interface type='bridge'>
  <mac address='52:54:aa:00:f0:51'/>
  <source bridge='ovsbr0'/>
  <virtualport type='openvswitch'/>
</interface>

Also, if you’re going to create your KVM VM’s disk ahead of time with the qemu-img command, make sure you specify the preallocation=metadata parameter or the KVM VM’s OS will see the disk as something extremely small and you won’t be allowed to proceed with the install.  Here is how you can specify the parameter:

qemu-img create -f qcow2 -o preallocation=metadata /var/lib/libvirt/images/vm1.img 5G


I read a lot of pages to make this work.  Thanks to everyone below for providing the information I cobbled this together from.