
Introducing Turku: Cloud-Friendly Backups for Your Infrastructure

It’s a topic many people don’t like to think about: backups. In addition to making sure your cloud environments are correctly deployed, highly available, secured and monitored, you need to make sure they are backed up for disaster recovery purposes. (And no, replication is not the same as backups.)

Canonical’s IS team is responsible for thousands of machines and instances, and over the years we have been a part of the shift from statically-deployed bare-metal environments to embracing dynamic environment deployments on private and public clouds. As this shift has occurred, we’ve needed to adjust our thinking about how to back up these environments. This has led to the development of Turku, a decentralized, cloud- and Juju-friendly backup system.

Old and Busted

Traditional backup systems tend to follow a similar deployment workflow:

  1. A backup agent is installed on the machine to be backed up.
  2. A centralized server is configured with information about the client machine, what to back up, and when.
  3. At scheduled times, the server connects to the client agent and performs backups.

This workflow has several disadvantages. Primarily, it relies on a centralized configuration mechanism. This may be fine if you only have a few static machines to back up, but the act of manually configuring backups on a backup server does not scale well.

In addition, most backup systems require ingress access to the machine to be backed up. While this may seem logical at first, it becomes a problem when the concept of the service unit is no longer tightly coupled to a machine’s hostname or IP. Not to mention the security aspect of allowing one machine direct access to all of your infrastructure.

Most of our environments are deployed via Juju, which abstracts the concept of networking, especially for services which are not at the front-end layer and do not have floating IPs. Your typical database or store unit is never going to have a floating IP, in most cases is not reachable from most of our networks, and in some cases isn’t even likely to be in the same location tomorrow. Having a backup server reach this sort of unit is usually just not possible.

New Hotness

After struggling with the limitations of these sorts of backup systems, Canonical’s IS team put together a decentralized, cloud-friendly backup system called Turku. Turku takes a different approach to backup management:

  1. A backup agent (turku-agent) is installed on the machine to be backed up. This can be installed manually, or be deployed as a Juju subordinate service.
  2. The agent is configured with the location and API key of an API server (turku-api), and with sources of data to be backed up (where, when, what to exclude, how long to keep snapshots, etc). In a Juju subordinate charm setup, this is as easy as the master charm dropping configuration files in /etc/turku-agent/sources.d/ and running turku-update-config.
  3. The agent registers itself with the API server and sends its configuration. It then regularly checks in with the API server (every 5 minutes by default). If the API server determines it’s time for a backup (using scheduling data provided by the agent), it tells the agent to check in with a particular storage unit (turku-storage).
  4. The agent checks in with the storage unit by SSHing to it, using a unit-specific public key relayed from the agent to the storage unit via the API server. This SSH session includes a reverse tunnel to a local Turku-specific rsync daemon on the agent machine.
  5. The storage unit connects to this rsync daemon over the reverse tunnel and rsyncs the scheduled data modules. It then handles snapshotting of the data. The preferred method is using attic, a deduplication program, but it can also use hardlink trees or even no snapshotting, depending on the nature of the source of data to be backed up.
  6. Storage units occasionally expire snapshots using retention policies, again, as configured by the agent.
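To make step 2 concrete, here is a sketch of what dropping a source definition might look like. The file name, field names and values are illustrative assumptions, not Turku's documented schema — check the Turku documentation for the real format:

```shell
# Hypothetical source definition; field names here are illustrative,
# not Turku's canonical schema.
cat > /tmp/postgres-dumps.json <<'EOF'
{
    "postgres-dumps": {
        "path": "/var/backups/postgresql",
        "exclude": ["*.tmp"],
        "frequency": "daily",
        "retention": "last 5 days, earliest of 1 month, earliest of 2 months"
    }
}
EOF

# Sanity-check the JSON before installing it.
python3 -m json.tool < /tmp/postgres-dumps.json > /dev/null && echo "valid JSON"

# On a real agent machine you would then install it and re-register:
#   sudo mv /tmp/postgres-dumps.json /etc/turku-agent/sources.d/
#   sudo turku-update-config
```

In a Juju deployment, the master charm would write this file for you; the manual version above is just to show how little configuration lives on the agent side.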

Turku components

This workflow gives most of the power to the client, and avoids needing to configure a centralized server every time a client unit is added or removed. In almost all situations, no configuration is needed on any server systems. And because of the reverse tunnel, no ingress access is required to each client machine; only egress to the API server and storage units is required.

Schedule and retention information is defined in the agent using natural language expressions. For example, a typical daily backup source may be configured with the schedule “daily, 0200-1400”. As we have thousands of machines being backed up, we found that it’s best to configure the source with as wide a schedule as possible, to allow the API server’s scheduler the most freedom to determine when a backup should start. In most cases, service units are not time-constrained, so most schedule definitions are simply “daily”.

You can be specific, such as “sunday, 0630” for a weekly run, and the API scheduler will try to be as accommodating as possible, but again, it’s recommended to be as open as possible when it comes to backup times.

Similarly, a typical retention definition is “last 5 days, earliest of 1 month, earliest of 2 months”. For example, if today is December 15 and a backup is made, 7 snapshots would exist: December 11 through 15, December 1 and November 1.
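The retention example above can be sanity-checked with ordinary date arithmetic. This sketch assumes GNU date and simply reproduces the December 15 scenario; it is an illustration of the retention rules, not Turku's actual parser:

```shell
# "last 5 days" keeps the 5 most recent daily snapshots; each
# "earliest of N months" rule keeps the first snapshot of that month.
TODAY="2015-12-15"
for d in 4 3 2 1 0; do
    date -d "$TODAY - $d days" +%Y-%m-%d     # last 5 days: Dec 11 .. Dec 15
done
date -d "$TODAY" +%Y-%m-01                    # earliest of 1 month: Dec 1
date -d "$TODAY - 1 month" +%Y-%m-01          # earliest of 2 months: Nov 1
```

Seven dates come out, matching the seven snapshots in the example.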


A backup system is useless if you can’t be confident you can restore the data. When Turku was handed over to the IS Operations team to begin deployment and migrations from our previous backup systems, the first thing they did was test restores in a variety of situations. They came up with some interesting scenarios and helped improve usability of the restore mechanism.

When doing a restore, you usually don’t want to restore in place. At first this seems counter-intuitive, but in a disaster recovery situation, it’s usually a matter of getting data from a previous point in time and re-integrating it with the live data in some way, depending on the exact nature of the disaster.

You may remember from above that the Turku agent runs its own local rsync daemon which is served over the reverse SSH tunnel. Most of this daemon’s modules are read-only sources of data to be backed up, but it also includes a writable restore module. When you run “turku-agent-ping --restore” on the machine to restore data to, it connects to the storage unit and establishes the reverse tunnel as normal, but then just sits there until cancelled. You then log into the storage unit, pick a snapshot to restore, and rsync it to the writable module over the tunnel. (As the tunnel ports and credentials are randomized, “turku-agent-ping --restore” helpfully gives you a sample rsync invocation using the actual port/credentials.) This is one of the only times you’ll need to log into a Turku infrastructure machine, but it gives the administrator the most flexibility, especially in a time of crisis.
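Laid out as commands, the restore dance looks roughly like this. The snapshot path, module name and port are placeholders for illustration; turku-agent-ping prints the real randomized port and credentials when you run it:

```shell
# On the machine you are restoring TO: open the reverse tunnel and wait.
turku-agent-ping --restore
# ...prints a sample rsync invocation with the real randomized
# port/credentials, then holds the tunnel open until cancelled.

# On the storage unit: pick a snapshot (this path is hypothetical) and
# push it back through the tunnel into the writable restore module.
rsync -av /srv/turku/snapshots/myhost/postgres-dumps/2015-12-14/ \
    rsync://restoreuser@localhost:PORT/restore/
```

From there it is up to the administrator to re-integrate the restored data with the live data.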


Turku is designed for easy scale-out when deployed via Juju. turku-api is a standard Django application and can be easily horizontally scaled through juju add-unit turku-api (though in practice we’ve had thousands of units checking in to a pair of turku-api units with almost no load). turku-storage is also horizontally scalable, which is more important as your backup infrastructure grows. To expand storage, you can simply add more block storage to an existing turku-storage unit (they’re managed in an LVM volume on each unit), or add more units with juju add-unit turku-storage, plus block storage.
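In practice, growing a Turku deployment under Juju looks something like the following. The volume group and device names here are illustrative examples, not Turku defaults:

```shell
# Grow an existing storage unit by adding a block device to its
# LVM volume group (names are examples):
juju ssh turku-storage/0 "sudo pvcreate /dev/vdc && sudo vgextend turku /dev/vdc"

# Or scale horizontally with a whole new storage unit (attach block
# storage to it afterwards):
juju add-unit turku-storage

# The API server is a stateless Django app and scales the same way:
juju add-unit turku-api
```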

When more storage is added to Turku, either through raw block storage or new storage units, the API scheduler automatically takes care of proportionally allocating new agents/sources depending on the storage split. For example, if you have two 5TB turku-storage units, one of which is half full and the other is empty, a new registered source will be twice as likely to be assigned to the empty storage unit. When storage units reach an 80% high water mark, they stop accepting new sources altogether, but will continue to back up existing registered sources. Actively rebalancing storage units is not currently supported as the proportional registration system plus the high water mark is sufficient for most situations, but it is planned for the future.
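The placement policy described above amounts to weighting by free space with an 80% cut-off. As a toy illustration of that policy (not Turku's actual scheduler code), the relative weights can be computed with awk:

```shell
# Columns: unit, capacity (TB), used (TB). Units at or above the 80%
# high-water mark get weight 0; the rest are weighted by free space,
# so the empty 5TB unit is twice as likely to receive a new source
# as the half-full one.
printf '%s\n' \
  "storage-a 5 2.5" \
  "storage-b 5 0" \
  "storage-c 5 4.5" |
awk '{
  free = $2 - $3
  weight = ($3 / $2 >= 0.8) ? 0 : free
  printf "%s weight=%g\n", $1, weight
}'
```

A new source would then be assigned to a unit chosen at random in proportion to these weights.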

Current Status

Most of our backup infrastructure has been migrated to Turku, which has been in operation for approximately 6 months. We’re releasing the code in the state we have been using it, but this is a very early public release. Documentation will be ported over from our internal wikis, and it’s possible there is code or functionality specific to our infrastructure (though unlikely, as Turku was developed with the goal of eventually being open sourced).

Please take a look at the Turku project page on Launchpad, download the software, take a look, file bugs… We’re excited to hear from you!

N.B.: Turku is a city on the southwest coast of Finland at the mouth of the Aura River, in the region of Finland Proper. To the Swedes it is known as Åbo; its Latin name is Aboa. One of Canonical’s first server naming schemes over 10 years ago was Antarctic bases, and our first backup server was aboa, named after the Finnish research station. Backup systems since then have tended to be a play on the name Aboa.

Juju – It’s About Building Relationships!

Juju - Build Relationships
I admit it, I was excited about Juju before I even fully understood it. But isn’t that part of the magic of anything called Juju?

I’m new to Canonical; I’m less than a month into my tenure here. But I’m already super excited to have joined. What has me so excited? The company’s vision, and the relationships I’ll build with fun, smart people while I’m here.

You see, Canonical isn’t focused on traditional enterprise computing, and it’s not entirely about the modern cloud, either. Canonical is looking toward the next phase of how we, technology people, interact to accomplish things. And sometimes, before the next way of doing things has become the norm, it can be really confusing. Many of us tend to think of new ideas as something we already know, even when they’re not.

Remember when virtualization was new? Just the concept was nearly impossible for people to wrap their heads around. You mean I now have 20 servers in one box, but I can’t see them? It’s not unreasonable for people to initially be confused and skeptical. The difficulty people initially had with understanding a disruptive technology like virtualization is the same they might have understanding a disruptive technology like Juju from Canonical.

So, why did I join Canonical? Because I knew they were doing something cool, and I wanted to be a part of it. More specifically, I wanted to tell the story of what that is. That gets us back to Juju. Juju is cool, it’s forward-looking, and it’s awesome. But it’s also largely misunderstood in the year of computing, 2015. People want to think it’s something they already know, and really, it’s not.

Juju is about relationships. It’s about building relationships between your applications. It’s about building relationships between the people you work with in technology, from developer to operator to IT architect. It’s about building the relationships between the software and the people, themselves. Basically, if there’s a relationship involved, Juju handles it.

What do we mean by relationships? Well, if you think about an application, it has dependencies, things like talking to a database, or a partner application, or a second copy of itself for high availability. Traditionally, you’ve installed the application and then manually created the relationship between it and the database it needs, or its partner application, etc.

But what if you could draw all of those relationships on something like a whiteboard, and then just click deploy? And voila! Your relationships aren’t just a picture any more, they’re reality. That’s Juju.

What about the people aspect of the relationships that Juju builds? Well, think about the traditional lifecycle of writing an application, or deploying a new, multi-tiered application solution. Typically, an IT architect might design the solution at a high level, working with development, operations, network and storage administrators. After the design has been completed, the team goes about making it happen. That could involve multiple iterations of the solution design, each time, passing the workload back and forth, and in many cases, starting some, or all, of the project over, installing, reinstalling, reconfiguring any number of pieces of software.

Imagine if you could simply pass the whiteboard around the room. Sketch out what you need, let your colleague add their part, update the dependencies you’ve diagrammed, and pass it on. Then, you could just click to deploy the entire whiteboard. Test, analyze, identify issues, pass the whiteboard back, update the solution, and deploy the whiteboard again. And, again, voila! Your solution is now designed, tested, and deployed, ready for production. That’s the power of Juju.

Juju isn’t just about developers, administrators and operators. It isn’t just about application deployment, configuration and management. Juju is about all those things, and, most importantly, the relationships between them. It’s about every person and every application involved in designing, developing, and deploying complex applications in your private or public cloud.

Juju reduces the friction of relationships. It reduces the complexity in building relationships. It reduces the time it takes for relationships to deliver something awesome.

Juju is all about relationships. Start building yours today at jujucharms.com

FCM#100-1 is OUT!

Full Circle, the independent magazine for the Ubuntu Linux community, is proud to announce the release of our ninety-ninth issue.

This month:
* Command & Conquer
* How-To : LaTeX, LibreOffice, and Programming JavaScript
* Graphics : Inkscape.
* Chrome Cult
* Linux Labs: Customizing GRUB
* Ubuntu Phones
* Review: Meizu MX4 and BQ Aquaris E5
* Book Review: How Linux Works
* Ubuntu Games: Brutal Doom, and Dreamfall Chapters
plus: News, Arduino, Q&A, and soooo much more.

Get it while it’s hot!

We now have several issues available for download on Google Play/Books. If you like Full Circle, please leave a review.

AND: We have a Pushbullet channel which we hope will make it easier to automatically receive FCM on launch day.

Conducting Systems & Services: An Evening About Orchestration

Orchestration of containers is one of the hottest topics in devops. Prior to DockerCon this year, we thought it would be a great idea to bring in some of the thought leaders in this area and have a discussion of the challenges that face devops today, and how solving large-scale problems can help drive innovation in the cloud. So we found some folks from Juju, Docker, Kubernetes, Mesos, and Netflix; threw them up on a panel and got them talking about it. As you would expect, one of the first things discussed was the use of the word “orchestration” in the first place. Is it even the right term?

Our panelists are Adrian Cockroft (formerly of Netflix/Sun), Ben Saller (Ubuntu), Jeff Lindsey (Docker and a variety of other OSS projects), Bill Farner (Mesos), and Brian Grant (Kubernetes). Our moderator is John Willis (Docker). Thanks go out to the folks at Yelp for hosting us, and we hope to continue to repeat these sorts of discussions at other devops conferences.

ODS Video: Making Large-scale Data Centre Deployment Easy With MAAS

Why has cloud computing been so successful? Arguably, it’s the 70s-style pay-as-you-go model of computing that gives companies a low-cost way to build a cloud from zero. More importantly, it’s how the cloud solves the problem of quick machine deployment in fast-paced business environments: provisioning a new PC in minutes.

But what if you could unlock the potential in your bare metal and manage it just like you can in the cloud, allowing you to define CPU and RAM requirements as well as location and access credentials? With MAAS you can.

In this, the latest of our OpenStack Summit keynotes, hear from Christian Reis, VP of Hyperscale and Storage, how MAAS is unlocking the potential in bare metal.

Join our Ubuntu OpenStack Fundamentals Training Course to learn about working with MAAS, Juju and Landscape, our industry-leading cloud toolset. Created by Canonical’s Engineering team, the course is a classroom-based combination of lectures and lab work. Learn more about the program here, and don’t forget to register for our upcoming dates in Amsterdam, Chicago, and Washington DC.

Spreedbox – most private video chat and file exchange

This is a guest post by Struktur AG.

Today, most organizations use online services for communication and often have confidential data shared and stored with service providers. Just think of Google®, Skype®, WebEx®, GoToMeeting®, BlueJeans® and many others. You may already have used these services to video conference, share files, keep your address books and stay in touch. All great features, without a doubt. But where is your data? Who has access to it? Confidential conversations, files, videos and personal contacts are uploaded and shared with these service providers without having an adequate service and privacy agreement that meets your requirements in privacy and confidentiality.

Your Data, Your Control 

This is where the Spreedbox comes in. The Spreedbox allows you to take back ownership of your data. The Spreedbox empowers you to operate your own secure audio/video chat, messaging and file sharing service with the highest standards of control and security for your own data. It is your own video meeting and file sharing service that can be available on computers, mobile phones and tablets through the Internet, or limited to an intranet. Your data always stays on your Spreedbox. Make a call, invite your friends and clients, and collaborate in closed groups through video/audio, text messaging, and document and file sharing. You can access your private data in an easy-to-use web interface with PC, Android and iOS devices.

Open Source

The Spreedbox software is free and published under the open-source AGPL license, which gives you the right to examine, share and modify it. An international community of software engineers and volunteer contributors developed the server software, and you are invited to get involved, too.

Better-and-better-and-better Security

The Spreedbox runs the Snappy Ubuntu Core Linux operating system, providing world-leading security through transactional components with rigorous application isolation. It is the smallest and safest Linux OS ever. With our secure algorithms, the high-speed ARM 4-core CPU and an off-the-silicon secure hardware key generator (TRNG), the Spreedbox features outstanding cryptographic strength, well above any industry standard.

Use One, Deploy Many

The Spreedbox is instantly ready to use. You only need one Spreedbox to securely meet with up to 6 people in a session where you can use video/audio, share files and collaborate on documents. You can have up to 10 group sessions in parallel each with 6 attendees. This accounts for an awesome capacity of real-time meetings being held by up to 60 people at any given time – all with one Spreedbox.

One Last Thing

As the Spreedbox is your own cloudless service you can video chat and share globally 24 hours a day, 365 days a year. There are no fees, no subscriptions and no running costs.*

*Electricity and Internet connectivity is on your account.


Our friends at Struktur have used Snappy on top of an Odroid to build the Spreedbox. They are very active in the Snappy and open source community, and as such we wanted to thank them. If you are worried about your privacy, check out their Kickstarter campaign.

Juju & Kubernetes: The power of components


While dogfooding my own work, I decided it was time to upgrade my distributed docker services into the shiny Kubernetes charms now that 1.0 landed last week. I’ve been running my own “production” (I say in air quotes, because my 20 or so microservices aren’t mission critical – if my RSS reader tanks, life will go on!) services with some of the charm concepts I’ve posted about over the last 4 months.

It’s time to really flex the Kubernetes work we’ve done, fire up the latest and greatest, and start to really feel the burn of a long-running Kubernetes cluster as upgrades happen and unforeseen behaviors start to bubble up to the surface.


One of the things I knew right away is that our provided charm bundle was overkill for what I wanted to do. I really only needed 2 nodes, and by using colocation for the services I could attain this really easily. We spent a fair amount of time deliberating about how to encapsulate the topology of a Kubernetes cluster, and what that would look like with the mix-and-match components one could reasonably deploy with.

Node 1

  • ETCD (running solo, I like to live dangerously)
  • Kubernetes-Master

Node 2

  • Docker
  • Kubernetes Node (the artist formerly known as a minion)

Did you know: The Kubernetes project retired the minion title from their nodes and have re-labeled them as just ‘node’?

Why is this super cool?

I’m excited to say that our attention to requirements has made this ecosystem super simple to decompose and re-assemble in a manner that fits your needs. I’m even considering contributing a single-server bundle that will stuff all the component services onto a single machine. This lowers the cost of entry even further for people looking to just kick the tires and get a feel for Kubernetes.

Right now our entire stack consumes a bare minimum of 4 units.

  • 1x ETCD node
  • 2x Docker/Kubernetes Nodes
  • 1x Kubernetes-Master node

This distributed system is more along the lines of what I would recommend starting your staging system with: scaling ETCD to 3 nodes for quorum and HA/failover, and scaling your Kubernetes nodes as required, leaving the Kubernetes master to handle only the API load of client interfacing and ecosystem management.
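Assuming the service names used in this post's bundle, moving from the two-node toy to that staging shape is just a couple of add-unit calls:

```shell
# Grow etcd to a 3-unit quorum:
juju add-unit etcd -n 2

# Add Kubernetes node capacity; the kubernetes charm is a subordinate
# of docker in this topology, so each new docker unit becomes a new node:
juju add-unit docker
```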

I’m willing to eat this compute space on my node, as I have a rather small deployment topology, and Kubernetes is fairly intelligent with placement of services once a host starts to reach capacity.

What does this look like in bundle format?

Note, I’m using my personal branch for the Docker charm, as it has a UFS filesystem fix that resolves some disk space concerns that hasn’t quite landed in the Charm Store yet due to a rejected review. This will be updated to reflect the Store charm once that has landed.

series: trusty
services:
  kubernetes:
    charm: "cs:~kubernetes/trusty/kubernetes-6"
    annotations:
      "gui-x": "1109"
      "gui-y": "122.20509601567676"
  "kubernetes-master":
    charm: "cs:~kubernetes/trusty/kubernetes-master-6"
    num_units: 1
    annotations:
      "gui-x": "1442.49658203125"
      "gui-y": "355.5472438428252"
    to:
      - "0"
  docker:
    charm: "cs:~lazypower/trusty/docker-15"
    num_units: 1
    annotations:
      "gui-x": "1459"
      "gui-y": "116.79493450190131"
    to:
      - "1"
  etcd:
    charm: "cs:trusty/etcd-0"
    num_units: 1
    annotations:
      "gui-x": "1111.94580078125"
      "gui-y": "506.0163547899872"
    to:
      - "0"
relations:
  - - "kubernetes-master:etcd"
    - "etcd:client"
  - - "kubernetes:etcd"
    - "etcd:client"
  - - "kubernetes:docker-host"
    - "docker:juju-info"
  - - "kubernetes-master:minions-api"
    - "kubernetes:api"
machines:
  "0":
    series: trusty
    constraints: "arch=amd64 mem=1g"
  "1":
    series: trusty
    constraints: "arch=amd64 cpu-cores=2 mem=2g"


Deploy Today

juju quickstart https://gist.githubusercontent.com/chuckbutler/f9218cc74ef8cfa07205/raw/3dd5a12a7d17b7d9c1b07d6a3b5b2f868681bdf4/bundle.yaml 

Deploy Happy!

Voting begins for OpenStack Tokyo talks

We’ve submitted several talks to the OpenStack Summit in Tokyo. We’ve listed them all below with links to where to vote for each talk, so if you think they are interesting, please vote for them!


Creating business value with cross cloud infrastructure

Speaker: Mark Shuttleworth
Track: Enterprise IT Strategies

Building an OpenStack cloud is becoming easy. Delivering value to a business with cloud services in production, at scale, to an enterprise-class SLA needs knowledge and experience. Mark Shuttleworth will discuss how Ubuntu, as the leading Linux for cloud computing, is taking knowledge and experience from OpenStack, AWS, Azure and Google to build technologies that deliver robust, integrated cloud services.

Vote Now


Supporting workload diversity in OpenStack

Speaker: Mark Baker
Track: Targeting Apps for OpenStack Clouds

It is the workloads that matter. Building cloud infrastructure may be interesting for many, but the value it delivers is derived from the business applications that run in it. There are potentially a huge range of business applications that might run in OpenStack: some cloud native, many monolithic; some on Windows, others in RHEL, and increasing numbers in Ubuntu and CentOS. Diversity can create complexity. Issues of support, maintenance, backup, upgrade, monitoring, alerting, licensing, compliance and audit become amplified the more diverse the workloads are. Yet successful clouds should consume most workloads, so the problems need to be understood and addressed. This talk will look at some of the current challenges of supporting workload diversity in OpenStack today, how end users are managing them, and how they can be addressed by the community in the future.

Vote Now


Building an agile business for Asia market with OpenStack

Speaker: Mark Baker (Canonical), Yih Leong Sun, Dave Pitzely (Comcast)
Track: Enterprise IT Strategies

In a previous talk at the Vancouver summit, “Enabling Business Agility with OpenStack”, we shared a few strategies for integrating OpenStack into an organisation, including selecting enterprise workloads, gaining developer acceptance, etc. This talk extends the previous one by focusing on the business aspect in the Asia market. The audience will learn how to take advantage of the growing OpenStack community in order to create a business case that meets regional market requirements, and understand the challenges in evaluating and deploying OpenStack. This presentation is brought to you by the OpenStack Enterprise Working Group.

Vote Now


Sizing Your OpenStack Cloud

Speakers: Arturo Suarez & Victor Estival
Track: Planning your OpenStack Cloud

Sizing your OpenStack environment is not an easy task. In this session we will cover how to size your controller nodes to host all your OpenStack services, as well as your Ceph nodes, and how to overcommit CPU and memory, based on real use cases and experiences. VMs? Containers? Bare metal? How do I scale? We will cover different approaches to sizing, and we will also talk about the most common bottlenecks that you might find while deploying an OpenStack cloud.

Vote Now


Deploying OpenStack in Multi-arch environments

Speakers: Victor Estival & Ryan Beisner
Track: Compute

In this session we will demonstrate how to deploy an OpenStack environment on an IBM POWER8 server and on ARM processors. We will talk about the differences between deployments across the different architectures (Intel, POWER and ARM), and discuss multi-arch deployments and their advantages.

Vote Now


High performance servers without the event loop

Speaker: David Cheney

Conventional wisdom suggests that high performance servers require native threads, or more recently, event loops.

Neither solution is without its downside. Threads carry a high overhead in terms of scheduling cost and memory footprint. Event loops ameliorate those costs, but introduce their own requirements for a complex callback driven style.

A common refrain when talking about Go is that it’s a language that works well on the server: static binaries, powerful concurrency, and high performance.

This talk focuses on the last two items, how the language and the runtime transparently let Go programmers write highly scalable network servers without having to worry about thread management or blocking I/O.

The goal of this talk is to introduce the following features of the language and the runtime:

  • Escape Analysis
  • Stack management
  • Processes and threads vs goroutines
  • Integrated network poller

These four features work in concert to build an argument for the suitability of Go as a language for writing high performance servers.

Vote Now


Container networking

Speakers: Chen Liang & Hua Zhang
Track: Networking

Containers are emerging as a lightweight alternative to hypervisors. A lot of great work has been done to simplify container-to-container communication, such as Neutron, Fan, Kubernetes, SocketPlane, Dragonflow, etc. What are the main characteristics of container technology? What design challenges do those characteristics bring us? What are the main technical differences between those great container networking solutions? All of these are main topics in this session. Beyond that, we will also share some of our rough thoughts on making networking work best for containers.

Vote Now


Competing with Amazon and winning

Speakers: Arturo Suarez, Victor Estival
Track: Telco Strategy

IT departments of companies of any size and any industry have been losing workloads to Amazon Web Services and the like over the last decade. OpenStack is the vehicle to compete with the public clouds for workloads, but there are several factors to consider when building and operating it in order to stay competitive and win. In this session we will walk through some of the factors (cost, automation, etc.) that made AWS and other public clouds successful, and how to apply them to your OpenStack cloud. Then we will focus on our competitive edge: what we should do to win.
Vote Now


Unlocking OpenStack for Service Providers

Speakers: Arturo Suarez, Omar Lara
Track: Public and Hybrid Clouds

Is OpenStack commercially viable for service providers?
Any service provider looking to develop its cloud solution business wants a service that can be brought to market quickly and cost effectively. It needs to provide differentiation, and to be able to scale as the service grows.
How do you achieve that? Build or buy? Or some combination?
In this session we will go through some of the challenges we faced when creating OpenStack based Cloud Service Providers in the early days and how we would do some things differently.
Vote Now


Copy & Paste Your OpenStack Cloud Topology

Speaker: Ryan Beisner
Track: Products, tools and services

A discussion and live demonstration, covering the use of service modeling techniques to inspect an existing OpenStack deployment and re-deploy a new cloud with the same service, network, storage and machine topology. This talk aims to demonstrate that modeling and reproducing a cloud can help an organization test specific scenarios, stage production upgrades, train staff and develop more rapidly and more consistently.
Vote Now


The Reproducible, Scalable, Observable, Reliable, Usable, Testable, Manageable Cloud

Speaker: Ryan Beisner
Track: User Stories

This talk discusses a proven and open-source approach to describing each of your OpenStack deployments using a simple service modeling language to make it manageable, discoverable, repeatable, observable, testable, redeployable and ultimately more reliable.

A live demonstration and walk-through will illustrate a simple method to consistently define, describe, share or re-use your cloud’s attributes, component configurations, package versions, network topologies, API service placements, storage placement, machine placement and more.

Vote Now


Deep dive OvS on Ubuntu

Speaker: Hui Xiang
Track: Networking

As more people start to use Open vSwitch (OvS), a great deal of work has been done to integrate it with OpenStack, and many new areas are being implemented and explored. Here we would like to share our experience making OvS work best on Ubuntu.

1. Performance tuning on OvS
– What affects OvS performance, i.e. MTU, TCP options.
– Current status of DPDK integration for fast packet processing.
2. What happens when upgrading Open vSwitch and the Ubuntu kernel
– Possible issues that cause worse behavior after upgrading.
3. A deep dive into Open vSwitch for better debugging
– The theory behind debugging OvS effectively.
4. A quick look at the Open vSwitch flow implementation
– What’s happening to support OVN.
Vote Now


OpenStack as a Proven and Open-Source Hyper-Converged Infrastructure

Speakers: Takenori Matsumoto (Canonical), Ikuo Kumagai (BitIsle), Yuki Kitajima (Mellanox)
Track: User Stories

Many datacenter providers are looking for a proven, open-source hyper-converged OpenStack solution so that they can provide a simple, low-cost, high-performance and rapidly deployed OpenStack environment. To achieve this goal, the following topics become key considerations.
– Infrastructure should be as simple as possible.
– Using commodity hardware and open-source software.
– Using tools to enable rapid and easy deployment.
– High density.
– Ensuring performance and network bandwidth.

To address these challenges, in this session we will share best practices and lessons learned from using OpenStack as a hyper-converged infrastructure built on high-end technologies.
The key concept of this PoC is to evaluate OpenStack as a hyper-converged infrastructure with the following key technologies.

* Network
40G/56G physical network
VXLAN offload capability
DVR or Some SDN

* Storage
PCI Express SSD
Ceph RDMA (Remote Direct Memory Access)

* Deploy
Ubuntu Juju/MAAS, publishing the charm bundle as an executable whitepaper.

The agenda covered in this session is below.
1. System architecture overview.
2. Performance tests and verification methods.
3. Best practices and lessons learned.

Vote Now


Testing Beyond the Gate – Openstack Tailgate

Speakers: Malini Kamalambal (Rackspace), Steve Heyman (Rackspace), Ryan Beisner (Canonical.com), James Scollard (Cisco), Gema Gomez-Solano (Canonical), Jason Bishop (HDS)
Track: Community

This talk aims to discuss the objectives, motivations and actions of the new #openstack-tailgate group. Initially comprised of QA, CI and system integrator staff from multiple organizations, this effort formed during the Liberty summit to take testing to the next level: functional validation of a production OpenStack cloud. The tailgate group intends to focus on enhancing existing test coverage, utilizing and improving existing frameworks, converging the testing practices across organizations into a community based effort, and potentially spinning off new useful tools or projects in alignment with this mission.

We are not starting with predetermined tools; we are starting with the problem of testing a production OpenStack cloud composed of projects from the big tent, and figuring out how to test it effectively.

Vote Now


Testing Beyond the Gate – Validating Production Clouds

Speakers: Malini Kamalambal (Rackspace), Steve Heyman (Rackspace), Ryan Beisner (Canonical.com), James Scollard (Cisco), Gema Gomez-Solano (Canonical), Jason Bishop (HDS)
Track: Operations

Join QA and operations staff from Rackspace, Canonical, Cisco Systems, Hitachi Data Systems, DreamHost and other companies as they discuss and share their specific OpenStack cloud validation approaches. This talk aims to cover a wide range of toolsets, testing approaches and validation methodologies as they relate to production systems and functional test environments. Attendees can expect to gain a high-level understanding of how these organizations currently address common functional cloud validation issues, and perhaps leverage that knowledge to improve their own processes.

Vote Now



High performance, super dense system containers with OpenStack Nova and LXD

Speakers: James Page
Track: Related OSS Projects 

LXD is the container-based hypervisor led by Canonical, providing management of full system containers on Linux-based operating systems.

Combined with OpenStack, LXD presents a compelling proposition for managing hyper-dense, container-based workloads via a cloud API, without the overhead of running full KVM-based virtual machines.

This talk aims to cover the current status of LXD and its integration with OpenStack via the Nova Compute LXD driver, the roadmap for both projects as we move towards the next LTS release (16.04) of Ubuntu, and a full demonstration of deploying and benchmarking a workload on top of an LXD-based OpenStack cloud.

Attendees can expect to learn about the differences between system and application containers, how the LXD driver integrates containers into OpenStack Nova and Neutron, and how to effectively deploy workloads on top of system-container-based clouds.

Vote Now


Why Top-of-Rack Whitebox Switches Are the Best Place to Run NFV Applications

Speakers: Scott Boynton
Track: Network

Since network switches began, they have been designed with a common criterion: packets should stay in the switching ASIC and never be sent to the CPU. Due to the latency involved, such events should happen only by exception and be avoided at all costs. As a result, switches have small CPUs with only enough memory to hold two images, and small channels between the CPU and the switching ASIC. This works well if the switch is only meant to be a switch. However, in the world of the Open Compute Project and whitebox switches, a switch can be so much more. Switches are no longer closed systems where you can only see the command line of the network operating system and just perform switching. Whitebox switches are produced by marrying common server components with high-powered switching ASICs, loading a Linux OS, and running a network operating system (NOS) as an application. The user can choose not only hardware from multiple providers, but also the Linux distribution and the NOS that best match their environment. Commands can be issued from the Linux prompt or the NOS prompt and, most importantly, other applications can be installed alongside the NOS.

This new switch design opens up the ability to architect data center networks with higher scale and more efficient utilization of existing resources. Putting NFV applications on a top of rack switch allows direct access to servers in the rack keeping the traffic local and with lower latency. Functions like load balancing, firewalls, virtual switching, SDN agents, and even disaster recovery images are more efficient with smaller zones to manage and in rack communications to the servers they are managing. The idea of putting NFV applications on large powerful centralized servers to scale is replaced with a distributed model using untapped resources in the switches. As a server, the switch is capable of collecting analytics, managing server health, or even acting as a disaster recovery platform. Being able to recover images from a drive in the switch instead of from a storage array across the network will not only be faster but lower the expensive bandwidth required to meet recovery times.

Whitebox switch manufacturers are already delivering switches with more powerful CPUs, memory, and SSD drives. They have considerably more bandwidth between the CPU and the switching ASIC so applications running in secure containers on the Linux OS can process packets the same way they would on a separate server. With expanded APIs in the NOS, many of the functions can be performed directly between applications without even touching the PCI or XFI bus for even more performance.

A new design for how to distribute NFV applications is here today with an opportunity to stop wasting money on task specific devices in the network. With an NFV optimized whitebox switch, top of rack functionality is taken to new heights.
Vote Now


Deploying OpenStack from Source to Scalable Multi-Node Environments

Speakers: Corey Bryant
Track: Products, Tools & Services

OpenStack is a complex system with many moving parts. DevStack has provided a solid foundation for developers and CI to test OpenStack deployments from source, and has been an essential part of the gating process since OpenStack’s inception.
DevStack typically presents a single-node OpenStack deployment, which has testing limitations as it lacks the complexities of real, scalable, multi-node OpenStack deployments.
Ubuntu now addresses the complexity of multi-node service orchestration of OpenStack deployments and has the ability to deploy OpenStack from source rather than from binary packages.
Come and hear about how we’ve implemented this feature for Ubuntu OpenStack, how to use it yourself, and even see a live deployment of OpenStack Mitaka from source!

Vote Now


Containers for the Hosting Industry

Speakers: Omar Lara, Arturo Suarez
Track: Enterprise IT Strategies

The market for selling virtual machines in the cloud is reaching a break-even point. With the introduction of containers, we will show how it is more profitable for the hosting industry to take advantage of high-density schemes that enable more attractive economies of scale in this sector.

We will discuss LXC/LXD deployment capabilities for this industry, and how to generate value with OpenStack and the LXD “lightervisor” to face new challenges in provisioning services such as messaging/communication, collaboration/storage and web hosting workloads.

Vote Now


Deploying tip of master in production on a public cloud

Speakers: Marco Ceppi
Track: Planning your OpenStack cloud

Over the past six months I’ve been deploying and managing a public, production-grade OpenStack deployment built from the latest commits on the master branch of each component. This was not an easy process: as if deploying the latest development release, for a production cloud no less, wasn’t challenge enough, the entire effort has been carried out by me alone. In this session I’ll dive into the decisions I made when designing my OpenStack cloud, pitfalls I encountered when performing my first deployments, and lessons learned during the design, execution, maintenance, and upgrades of my OpenStack cloud.

Vote Now


Building a CI/CD pipeline for your OpenStack cloud

Speakers: Marco Ceppi, Corey Bryant
Track: Planning your OpenStack cloud

Maintaining and upgrading an OpenStack deployment is a time-consuming and potentially perilous adventure. For years, software developers have been using continuous integration and continuous delivery tools to validate, test, and deploy their code. By applying those same ideologies, your OpenStack cloud can benefit from that process too. In this session we’ll go over ways this can be achieved, applicable to everything from the smallest deployments to full-scale OpenStack clouds. We’ll discuss how existing techniques and tools can be leveraged to produce a stable OpenStack deployment that is updated regularly based on a continuous testing cycle.

Vote Now


Life on the edge – deploying tip of master in production for a public cloud

Speakers: Marco Ceppi
Track: User Stories

Take a walk on the other side. The goal of this cloud was simple: could you deploy the latest master development branch of OpenStack? Could you deploy that same setup in a production system? Now how about as a public cloud? In this talk I walk through my process for achieving this, from the first deployment, to testing and validation of a staging version, to continuously updating production with the latest versions. This was a multi-month journey to get right, and it led to some interesting findings on how to maintain and grow an OpenStack deployment over time.

In this session I’ll walk through my thought process in picking components and tools, issues I ran into, lessons I’ve learned, and discuss the future of a cloud deployed from the bleeding edge.

Vote Now


Time to Upgrade

Speakers: Billy Olsen
Track: Planning your OpenStack cloud

Your OpenStack cloud has been deployed on a Long Term Support release such as Ubuntu 14.04 using the Icehouse release. Now that Icehouse is out of support from the core OpenStack developers and has moved into the hands of the distros, it’s time to start planning the move from Icehouse to Juno, Kilo, Liberty or beyond. Upgrading a live cloud is no trivial task, and this talk aims to walk you through the dos, don’ts, and how-tos for planning and implementing your cloud upgrade.

Vote Now


Charm your DevOps and Build Cloud-Native apps using Juju

Speakers: Ramnath Sai Sagar (Mellanox), Brian Fromme (Canonical)
Track: Targeting Apps for OpenStack Clouds

DevOps teams are interested in building applications that are scalable, portable, resilient, and easily updatable. Oftentimes, one looks to the cloud to achieve this. However, it is not as easy as simply lifting and shifting your application to the cloud, or splitting your application into smaller containers or VMs. The key to a cloud-native application is microservices, which can require a complete rewrite of the application from scratch. Alternatively, one could leverage existing microservices that are efficient, scalable and easily deployable. Juju is a next-generation, open-source universal modelling framework that allows developers to reuse existing microservices via Charms and configure them using simple relations. Join us in this presentation to see how Juju allows cloud-native tools, such as Docker, to be deployed, and how Mellanox’s Efficient Cloud Interconnect helps these applications achieve the highest performance with Juju.

Vote Now


Have container. Need network? Ubuntu LXD+MidoNet

Speakers: Antoni Segura Puimedon (Midokura.com), Mark Wenning (Canonical)
Track: Networking

The LXD hypervisor from Canonical represents a pure-container approach to virtual machines. In our session, we describe how LXD connects with open-source MidoNet to deliver a distributed and highly available software-defined network for containers. The combination brings Ubuntu OpenStack container usage to the next level in networking performance and resilience.

Vote Now


Putting the D in LXD: Migration of Linux Containers

Speakers: Tycho Andersen
Track: Compute

lxc move c1 host2:. In 18 characters, you can live-migrate containers between hosts. LXD makes using this powerful and complex technology very simple, and very fast. In this talk, I’ll give a short history of the underlying migration technology, CRIU, describe a few optimizations that LXD is doing in this space to make things fast, and discuss future areas of work in both CRIU and LXD to support a larger class of applications and make things even faster.

Although this talk will include some specific examples and high-level strategy for LXD, it will be useful for anyone interested in general container migration via CRIU. In particular, I will discuss limitations and common pitfalls, so that interested users can gauge the technology’s usefulness for their applications.

Vote Now


Better Living Through Containers: LXD with OpenStack

Speakers: Tycho Andersen
Track: Compute

In this talk I’ll cover the experience operators and users will have when using LXD as their Nova-compute driver. For operators this includes access to much better density and potentially significant cost savings for particular workloads. Users will see more rapid provisioning and access to more capacity but may experience some limitations compared to KVM. I will examine these limitations and discuss how nova-compute-lxd works around them today, as well as discuss what the kernel community is doing to lift these restrictions in the future. Finally, I’ll give a few examples of workloads that will benefit from LXD’s performance and density advantages.

Vote Now

ODS Video: Driving quality control into OpenStack cloud development

How do we make sure Ubuntu offers the best possible ecosystem of both hardware and software components for OpenStack? Chris Kenyon talks about how we’re driving quality into the OpenStack deployment journey in the latest of our OpenStack Summit keynotes.

As the cloud landscape matures, enterprise customers are looking for cost-effective, resilient and, above all, validated cloud solutions that they can be sure will integrate and operate effectively. For many, Ubuntu OpenStack offers them these assurances. Kenyon discusses some of the common interoperability and integration issues and explains how the OpenStack Interoperability Lab, alongside tools such as Autopilot and Juju, are driving quality into the OpenStack deployment experience to provide a reliable ‘push-button’ deployment journey.

Click here to find out how Canonical’s OpenStack Interoperability Lab (OIL) tests, validates and guarantees a host of easy-to-deploy cloud components on behalf of the enterprise.

GPS navigation for Ubuntu Phone

uNAV is a turn-by-turn GPS navigation app for the Ubuntu Touch OS. It is 100% GPL, and powered by OpenStreetMap and OSRM.

“I could show you a few screenshots, and I could tell you how it’s working. Or, I could show you me driving a random route [with it].”