Deepak Nadig Anantha

SDN Service Function Chaining with ONOS and Devstack


Service function chaining (SFC) is a mechanism that overrides the typical destination-based forwarding in IP networks, steering traffic along a path other than the one chosen by routing-table lookups (conceptually related to policy-based routing). This is a guide to creating a service function chaining infrastructure using the ONOS SDN controller and Devstack.


Start by updating your Ubuntu Box.

sudo apt-get update
sudo apt-get -y install git

We’ll enable passwordless sudo for the user ubuntu. In your terminal:

sudo visudo

Add the following line at the end of the file (assuming the user is ubuntu):

ubuntu ALL=(ALL) NOPASSWD:ALL

Alternatively, you can create a separate user and do the same for that user; you’ll then have to switch to that user before proceeding with the install.
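A quick way to confirm the change took effect (a sketch: `sudo -n` fails instead of prompting when a password would still be required):

```shell
# Check whether passwordless sudo is in effect for the current user.
# "sudo -n" (non-interactive) fails rather than prompting for a password.
if sudo -n true 2>/dev/null; then
    echo "passwordless sudo: OK"
else
    echo "passwordless sudo: NOT configured"
fi
```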

Install OVS with NSH Support

If you are installing OVS on Ubuntu 14.04, you can manually install OVS using a script available HERE.
On the other hand, if you are running Ubuntu 16.04, add the Xenial Cloud Archive for OpenStack Newton as follows:

sudo add-apt-repository cloud-archive:newton
sudo apt-get update
sudo apt-get upgrade -y 
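The Newton cloud archive provides Open vSwitch 2.6.x. A small helper to sanity-check a detected version against that minimum (a sketch; the `version_ge` helper and the 2.6 floor are assumptions of this example, and in practice the first argument would come from `ovs-vsctl --version`):

```shell
# Hypothetical helper: compare two dotted-decimal versions using sort -V.
# Returns success (0) when $1 >= $2.
version_ge() {
    [ "$(printf '%s\n%s\n' "$2" "$1" | sort -V | head -n1)" = "$2" ]
}

# In practice the detected version would come from:
#   ovs-vsctl --version | awk 'NR==1 {print $NF}'
if version_ge "2.6.1" "2.6.0"; then
    echo "OVS version OK"
fi
```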

ONOS Installation

Next, install ONOS as follows:

sudo apt-get update

cd; mkdir Downloads Applications
cd Downloads
wget -nc
tar -zxvf apache-karaf-3.0.5.tar.gz -C ../Applications/

sudo apt-get install software-properties-common -y
sudo add-apt-repository ppa:webupd8team/java
sudo apt-get update
sudo apt-get install oracle-java8-installer oracle-java8-set-default -y

# Ubuntu 14.04 ONLY:
# ------------------------
sudo apt-get purge maven maven2 maven3
sudo apt-add-repository ppa:andrei-pozolotin/maven3
sudo apt-get update
sudo apt-get install maven3
# ------------------------

# If running Ubuntu 16.04:
sudo apt-get install maven

Log out and log back in so that environment variables such as JAVA_HOME and those for Maven are set.
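After logging back in, the build toolchain can be sanity-checked with a short sketch that simply reports which prerequisites are on the PATH:

```shell
# Report whether each build prerequisite is available on the PATH.
for tool in java mvn git; do
    if command -v "$tool" >/dev/null 2>&1; then
        echo "$tool: found"
    else
        echo "$tool: missing"
    fi
done
```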

Pull ONOS from Gerrit.

cd; git clone -b onos-1.7

Edit ~/.bashrc and add the ONOS environment variables at the end of the file.

# ONOS Environment Variables
export ONOS_ROOT=~/onos
source $ONOS_ROOT/tools/dev/bash_profile
# Set ONOS_IP to reflect the right interface IP address 
ONOS_IP="$(ifconfig eth0 | grep "inet addr" | awk -F'[: ]+' '{print $4 }')"

Source the new environment variables with source ~/.bashrc

Note1: Replace eth0 with the right interface.
Note2: You can edit onos/tools/test/cells/local and remove proxyarp if you have ARP problems with a hardware switch.
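The ONOS_IP line above parses the legacy ifconfig output format. The same grep/awk pipeline can be checked offline against a canned sample (the address below is made up; on systems without ifconfig, `ip -4 addr show eth0` can be parsed instead):

```shell
# Canned two-line ifconfig excerpt; the address is a placeholder.
sample='eth0      Link encap:Ethernet  HWaddr 00:11:22:33:44:55
          inet addr:192.168.1.10  Bcast:192.168.1.255  Mask:255.255.255.0'

# Same extraction used for ONOS_IP: split fields on runs of ':' or ' ',
# so on the "inet addr:" line field 4 is the IPv4 address.
echo "$sample" | grep "inet addr" | awk -F'[: ]+' '{print $4}'
# → 192.168.1.10
```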

Build and package ONOS using:

cd ~/onos
mvn clean install

Once ONOS is built successfully, create a screen or tmux session with two tabs. In the first tab run:

ok clean

ONOS should start up and you should see the following output:

Removing data directories...
Existing ONOS Karaf uses version different from 1.7.2-SNAPSHOT; forcing clean install...
Removing existing ONOS Karaf, apps, data and config directories...
Unpacking /home/ubuntu/Downloads/apache-karaf-3.0.5.tar.gz to /home/ubuntu/Applications...
Adding ONOS feature repository...
Adding ONOS boot features standard,ssh,webconsole,onos-api,onos-core,onos-incubator,onos-cli,onos-rest,onos-gui...
Branding as ONOS...
Creating local cluster configs for IP
Copying package configs...
Staging builtin apps...
Customizing apps to be auto-activated: drivers,openflow,fwd,proxyarp,mobility...
Welcome to Open Network Operating System (ONOS)!
     ____  _  ______  ____
    / __ \/ |/ / __ \/ __/
   / /_/ /    / /_/ /\ \
   \____/_/|_/\____/___/

Mailing lists:

Come help out! Find out how at:

Hit '<tab>' for a list of available commands
and '[cmd] --help' for help on a specific command.
Hit '<ctrl-d>' or type 'system:shutdown' or 'logout' to shutdown ONOS.


In the second tab run tl to inspect the ONOS log information.

Setup ONOS SFC Apps

Once we have ONOS running successfully, we can activate the apps necessary for SFC. Input the following at the ONOS prompt.

feature:install onos-openflow
feature:install onos-openflow-base
feature:install onos-ovsdatabase
feature:install onos-ovsdb-base
feature:install onos-drivers-ovsdb
feature:install onos-ovsdb-provider-host
feature:install onos-app-vtn-onosfw
externalportname-set -n onos_port2

app activate org.onosproject.ovsdb-base
app activate org.onosproject.ovsdbhostprovider
app activate org.onosproject.ovsdb
app activate org.onosproject.vtn 
app activate org.onosproject.openstacknode 
app activate org.onosproject.openstackswitching 

Devstack Installation

Start by cloning Devstack with:

cd; git clone -b stable/mitaka 
# OR download the newton release with -b stable/newton
cd devstack

If you are installing stable/mitaka, create a local.conf file with the following contents:




disable_service n-net
enable_service q-svc
enable_service q-dhcp
enable_service q-l3
enable_service q-meta
enable_service q-agt
enable_service neutron
disable_service tempest

#enable_plugin networking-sfc stable/mitaka
#enable_plugin networking-onos stable/mitaka

Build with:

./stack.sh
Install networking-sfc and networking-onos plugins:

cd /opt/stack/
git clone -b stable/mitaka
git clone -b stable/mitaka

cd /opt/stack/networking-onos
sudo python setup.py install

cd /opt/stack/networking-sfc
sudo python setup.py install

Update the setup.cfg file at /opt/stack/networking-sfc to include:

networking_sfc.sfc.drivers =
    onos =
networking_sfc.flowclassifier.drivers =
    onos =

Uncomment the last two lines in devstack/local.conf, then unstack (./unstack.sh) and stack (./stack.sh) again.


Note: If Devstack complains about VTN issues, check if ONOS is running and that the SFC apps above are activated.

Note: If you have a problem with liberasurecode, install it using sudo apt-get install liberasurecode-dev

Configuring Devstack to Enable SFC


Check if /etc/neutron/plugins/ml2/conf_onos.ini exists, and if it does, verify that it's updated with the correct ONOS URL, username, and password. You can verify the file contents with:

sudo grep "^[^#]" /etc/neutron/plugins/ml2/conf_onos.ini 

The file should look like the listing below; if not, update it with the appropriate information.

If the file does NOT exist, copy the file conf_onos.ini from /opt/stack/networking-onos/etc/ to /etc/neutron/plugins/ml2/

sudo cp /opt/stack/networking-onos/etc/conf_onos.ini /etc/neutron/plugins/ml2/

Edit the file to update the correct ONOS URL, username and password.
Use http://:8181/onos/vtn for the URL, and the ONOS username and password (onos/rocks by default, unless a change is made in the local cell). The file should look like the listing below:

#Configuration options for ONOS driver

# (StrOpt) ONOS ReST interface URL. This is a mandatory field.
url_path = http://:8181/onos/vtn

# (StrOpt) Username for authentication. This is a mandatory field.
username = onos

# (StrOpt) Password for authentication. This is a mandatory field.
password = rocks


Next, we update ml2_conf.ini at /etc/neutron/plugins/ml2/ to ensure that mechanism_drivers is set to include onos_ml2, and that the right ONOS credentials and ONOS_IP are in place.
Check the existing configuration using:

sudo grep "^[^#]" /etc/neutron/plugins/ml2/ml2_conf.ini
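The grep pattern "^[^#]" used here simply keeps lines whose first character is not #, i.e. the active (uncommented) settings. For example:

```shell
# Keep only uncommented, non-empty lines: a line must start with a
# character other than '#' to match the pattern.
printf '%s\n' '# a comment' 'mechanism_drivers = onos_ml2,logger' '' \
    | grep "^[^#]"
# → mechanism_drivers = onos_ml2,logger
```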

Note: Generally, when we enable the networking-sfc and networking-onos plug-ins, the conf_onos.ini and ml2_conf.ini are automatically configured to include the right mechanism drivers and credentials. The configuration looks as follows:

tenant_network_types = vxlan
extension_drivers = port_security
type_drivers = local,flat,vlan,gre,vxlan
mechanism_drivers = onos_ml2,logger
path_mtu = 1500
username = onos
password = rocks
url_path = http://:8181/onos/vtn


Next, update /opt/stack/neutron/neutron.egg-info/entry_points.txt to include the ONOS ml2.mechanism_drivers and service_plugins.
The configuration looks as follows:

onos_ml2 = networking_onos.plugins.ml2.driver:ONOSMechanismDriver
onos_router = networking_onos.plugins.l3.driver:ONOSL3Plugin

You can always check if the above has been added using:

grep "^[^#]" /opt/stack/neutron/neutron.egg-info/entry_points.txt | grep onos
onos_ml2 = networking_onos.plugins.ml2.driver:ONOSMechanismDriver
onos_router = networking_onos.plugins.l3.driver:ONOSL3Plugin

Neutron DNS

Set up the correct dnsmasq DNS servers to be used as forwarders: edit /etc/neutron/dhcp_agent.ini and include the following DNS servers:

dnsmasq_dns_servers =,

Restart the q-dhcp service (From the screen stack session).

Restart Neutron

Attach to the screen session using:

screen -x -r stack

Restart all services from q-svc to n-cpu. Restart the q-svc service with the new configs as shown below:

/usr/local/bin/neutron-server \
    --config-file /etc/neutron/neutron.conf \
    --config-file /etc/neutron/plugins/ml2/ml2_conf.ini \
    --config-file /etc/neutron/plugins/ml2/conf_onos.ini \
    & echo $! >/opt/stack/status/stack/; fg || echo "q-svc failed to start" | tee "/opt/stack/status/stack/q-svc.failure"
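The restart line backgrounds the server, records its PID in a status file, then brings it back to the foreground. The same pattern is sketched below with a stand-in command and a hypothetical status path (note that fg needs interactive job control; in scripts, wait is the equivalent):

```shell
# Background a stand-in long-running command (sleep), record its PID,
# then wait on it, mirroring the q-svc restart pattern above.
mkdir -p /tmp/stack-status
sleep 1 & echo $! > /tmp/stack-status/q-svc.pid
wait
echo "recorded PID: $(cat /tmp/stack-status/q-svc.pid)"
```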

Set Open vSwitch Manager

Set up the OVS manager to point to the ONOS controller.

sudo ovs-vsctl set-manager tcp::6640

Verify that the OVS is connected to the ONOS controller using:

sudo ovs-vsctl show
    Manager "tcp::6640"
        is_connected: true

Enable CLI access to OpenStack:

source devstack/openrc admin

Check the neutron network list using:

neutron net-list

If you are able to see the default private and public networks, ensure that the same networks are visible through ONOS VTN:


You should see JSON output with information about the above networks, like the listing below:

{
    "networks": [
        {
            "id": "fd154095-002a-4e60-bfb9-fbe344bf30bb",
            "name": "public",
            "admin_state_up": true,
            "status": "ACTIVE",
            "shared": false,
            "tenant_id": "611a92cc1aa94a9d940609e1208f7919",
            "router:external": true,
            "provider:network_type": "null",
            "provider:physical_network": "null",
            "provider:segmentation_id": "1090"
        },
        {
            "id": "bded5878-6d9a-4922-a84b-8e71e73db7d4",
            "name": "private",
            "admin_state_up": true,
            "status": "ACTIVE",
            "shared": false,
            "tenant_id": "22805ff134f448b09223698c6a249e90",
            "router:external": false,
            "provider:network_type": "null",
            "provider:physical_network": "null",
            "provider:segmentation_id": "1039"
        }
    ]
}
Creating a Service Function Chain (SFC)

Create a Ubuntu Glance Image

Download and create an Ubuntu image that can be used for creating host instances in Devstack. You can find the Ubuntu Cloud Images HERE.

wget -c
# Create the ubuntu image using Glance API as:
glance image-create --name ubuntu --disk-format qcow2 --container-format bare --file xenial-server-cloudimg-amd64-disk1.img
# You can verify that the newly created image is available using
glance image-list

Create a new Neutron Network

Next, we shall create a new Neutron network called sfc, a subnet named sfcSubNet, and attach a router named sfcRouter to this network.

neutron net-create sfc
neutron subnet-create sfc --name sfcSubNet
neutron router-create sfcRouter
neutron router-interface-add sfcRouter sfcSubNet

Create ports and update port security

Create Neutron ports using the neutron port-create command and update all created ports to use --no-security-groups and --port-security-enabled=False.

neutron port-create --name p01 sfc
neutron port-create --name p02 sfc
neutron port-create --name p03 sfc
neutron port-create --name p04 sfc
neutron port-create --name p05 sfc
neutron port-create --name p06 sfc

To update port security use:

neutron port-update p01 --no-security-groups --port-security-enabled=False
neutron port-update p02 --no-security-groups --port-security-enabled=False
neutron port-update p03 --no-security-groups --port-security-enabled=False
neutron port-update p04 --no-security-groups --port-security-enabled=False
neutron port-update p05 --no-security-groups --port-security-enabled=False
neutron port-update p06 --no-security-groups --port-security-enabled=False
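Since all six ports get identical flags, the updates can also be generated with a loop. The sketch below only prints the commands so they can be reviewed first; pipe its output to sh to execute them:

```shell
# Generate (but do not run) one port-update command per port p01..p06.
for p in p01 p02 p03 p04 p05 p06; do
    echo "neutron port-update $p --no-security-groups --port-security-enabled=False"
done
```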

Create VM Instances

We will create three VMs and attach the created ports to these VMs. First, create an SSH key and associate it with each VM instance, so that we'll be able to log in to the VM instances using the key.

nova keypair-add sfcKey > sfcKey.pem
chmod 600 sfcKey.pem

Create and bring up three Ubuntu instances as follows:

nova boot --image ubuntu --flavor m1.small --nic net-name=private \
          --nic port-id=$(neutron port-list |grep p01 |awk '{print $2}') \
          --nic port-id=$(neutron port-list |grep p06 |awk '{print $2}') \
          --key-name sfcKey --availability-zone nova head
nova boot --image ubuntu --flavor m1.small --nic net-name=private \
          --nic port-id=$(neutron port-list |grep p02 |awk '{print $2}') \
          --nic port-id=$(neutron port-list |grep p03 |awk '{print $2}') \
          --key-name sfcKey --availability-zone nova sf1
nova boot --image ubuntu --flavor m1.small --nic net-name=private \
          --nic port-id=$(neutron port-list |grep p04 |awk '{print $2}') \
          --nic port-id=$(neutron port-list |grep p05 |awk '{print $2}') \
          --key-name sfcKey --availability-zone nova sf2
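The embedded neutron port-list | grep | awk pipelines resolve a port name to its UUID by parsing the CLI's table output: the UUID sits in the second whitespace-separated field, right after the leading |. Checked offline on a made-up row:

```shell
# Canned 'neutron port-list' row; the UUID and MAC are placeholders.
sample='| 1a2b3c4d-0000-1111-2222-333344445555 | p01 | fa:16:3e:00:00:01 | {...} |'
echo "$sample" | grep p01 | awk '{print $2}'
# → 1a2b3c4d-0000-1111-2222-333344445555
```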

Also, update port security for all the ports in the private network (assuming that the three VMs were assigned IPs by the DHCP server).

neutron port-update --no-security-groups  --port-security-enabled=False \
        $(neutron port-list |grep |awk '{print $2}')
neutron port-update --no-security-groups  --port-security-enabled=False \
        $(neutron port-list |grep |awk '{print $2}')
neutron port-update --no-security-groups  --port-security-enabled=False \
        $(neutron port-list |grep |awk '{print $2}')

Floating IPs

Create and associate floating IPs with each instance so that we can ssh into the VMs without using ip netns exec.

nova floating-ip-create public
nova floating-ip-create public
nova floating-ip-create public

nova floating-ip-associate head $(nova floating-ip-list |grep 'public' | sed -n '1p'| awk '{print $4}')
nova floating-ip-associate sf1 $(nova floating-ip-list |grep 'public' | sed -n '2p'| awk '{print $4}')
nova floating-ip-associate sf2 $(nova floating-ip-list |grep 'public' | sed -n '3p'| awk '{print $4}')
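The association commands pick the Nth floating IP by combining grep (filter the pool), sed -n 'Np' (take the Nth matching row), and awk (field 4 holds the address once the table's | separators are counted). Checked offline on made-up rows:

```shell
# Two canned 'nova floating-ip-list' rows; IDs and addresses are placeholders.
printf '%s\n' \
    '| id-1 | 172.24.4.3 | - | - | public |' \
    '| id-2 | 172.24.4.4 | - | - | public |' \
    | grep 'public' | sed -n '2p' | awk '{print $4}'
# → 172.24.4.4
```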

Create the SFC

Set up the SFC by creating port pairs, port pair groups, a flow classifier, and the port chain.

# Creating Port Pairs and Port Groups
neutron port-pair-create PP1 --ingress p02 --egress p03
neutron port-pair-group-create --port-pair PP1 PPG1

neutron port-pair-create PP2 --ingress p04 --egress p05
neutron port-pair-group-create --port-pair PP2 PPG2

# Creating the Flow Classifier and the Port Chain
neutron flow-classifier-create --source-ip-prefix --destination-ip-prefix --logical-source-port p01 FC1
neutron port-chain-create --port-pair-group PPG1 --port-pair-group PPG2  --flow-classifier FC1 PC1
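Once created, each SFC object can be listed for verification with the networking-sfc CLI. The sketch below only prints the relevant list commands; pipe its output to sh to run them against the live deployment:

```shell
# Print the verification commands for the SFC objects created above.
for cmd in port-pair-list port-pair-group-list flow-classifier-list port-chain-list; do
    echo "neutron $cmd"
done
```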
