This is a guest post by Kyle Fazzari, Software Engineer. If you would like to contribute a guest post, please contact email@example.com
This is the third blog post in this series about ROS production. In the previous post we came up with a simple ROS prototype. In this post we’ll package that prototype as a snap. For justifications behind why we’re doing this, please see the first post in the series.
We know from the previous post that our prototype consists of a single launch file that we wrote, contained within our prototype ROS package. Turning this into a snap is very straightforward, so let’s get started! Remember that this is also a video series: feel free to watch the video version of this post.
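As a reminder, that launch file simply brings up the Kobuki base alongside the random-walker app. A rough sketch of what it might contain (the exact include paths come from the kobuki packages and may differ from what we wrote in the previous post):

```xml
<!-- prototype.launch: illustrative sketch, not necessarily the exact file -->
<launch>
  <!-- Bring up the Kobuki base -->
  <include file="$(find kobuki_node)/launch/minimal.launch"/>
  <!-- Start the random walker, which avoids obstacles and cliffs -->
  <include file="$(find kobuki_random_walker)/launch/safe_random_walker_app.launch"/>
</launch>
```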
This post will assume the following:
Create the snap
The first step toward a new snap is to create the snapcraft.yaml. Put that in the root of the workspace we created in the previous post:
Do as it says, and make that file look something like this:
name: my-turtlebot-snap # This needs to be a unique name
version: '0.1'
summary: Turtlebot ROS Demo
description: |
  Demo of Turtlebot randomly wandering around, avoiding obstacles and cliffs.
grade: stable
confinement: devmode

parts:
  prototype-workspace:
    plugin: catkin
    rosdistro: kinetic
    catkin-packages: [prototype]

apps:
  system:
    command: roslaunch prototype prototype.launch --screen
    plugs: [network, network-bind]
    daemon: simple
Let’s digest that section by section.
name: my-turtlebot-snap
version: '0.1'
summary: Turtlebot ROS Demo
description: |
  Demo of Turtlebot randomly wandering around, avoiding obstacles and cliffs.
This is the basic metadata that all snaps require. These fields are fairly self-explanatory. The only thing I want to point out specifically here is that the name must be globally unique among all snaps. If you’re following this tutorial, you might consider appending your developer name to the end of this example.
grade: stable
confinement: devmode
grade can be either stable or devel. If it’s devel, the store will prevent you from releasing into one of the two stable channels (stable and candidate, specifically); think of it as a safety net to prevent accidental releases. If it’s stable, you can release it anywhere.
confinement can be strict, devmode, or classic. strict enforces confinement, whereas devmode allows all accesses, even those that would be disallowed under strict confinement (and logs accesses that would otherwise be disallowed for your reference). classic is even less confined than devmode, in that it doesn’t even get private namespaces anymore (among other things). There is more extensive documentation on confinement available.
I personally always use strict confinement unless I know for sure that the thing I’m snapping won’t run successfully under confinement, in which case I’ll use devmode. I typically avoid classic unless I never intend for the app to run confined. In this case, I know from experience this snap won’t run confined as-is, and will require devmode for now (more on that later).
parts:
  prototype-workspace:
    plugin: catkin
    rosdistro: kinetic
    catkin-packages: [prototype]
You learned about this in the Snapcraft tour, but I’ll cover it again real quick. Snapcraft is responsible for taking many disparate parts and orchestrating them all into one cohesive snap. You tell it the parts that make up your snap, and it takes care of the rest. Here, we tell Snapcraft that we have a single part called prototype-workspace. We specify that it builds with Catkin, and also specify that we’re using Kinetic here (as opposed to Jade, or the default, Indigo). Finally, we specify the packages in this workspace that we want included in the snap. In our case, we only have one: that prototype package we created in the previous post.
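Under the hood, Snapcraft runs rosdep against the workspace to work out what each package needs, and rosdep in turn reads each package’s package.xml. Our prototype package’s manifest would therefore look something like this (an illustrative sketch; the maintainer, version, and license fields are placeholders):

```xml
<?xml version="1.0"?>
<package>
  <name>prototype</name>
  <version>0.1.0</version>
  <description>Turtlebot random-walker prototype</description>
  <maintainer email="you@example.com">You</maintainer>
  <license>BSD</license>

  <buildtool_depend>catkin</buildtool_depend>

  <!-- These run dependencies are what rosdep resolves and
       Snapcraft pulls into the snap alongside roscore -->
  <run_depend>kobuki_node</run_depend>
  <run_depend>kobuki_random_walker</run_depend>
</package>
```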
apps:
  system:
    command: roslaunch prototype prototype.launch --screen
    plugs: [network, network-bind]
    daemon: simple
This is where things get a little interesting. When we build this snap, it will include a complete ROS system: roscpp, roslib, roscore, roslaunch, your ROS workspace, etc. It’s a standalone unit: you’re in total control of how the user interacts with it. You exercise that control via the apps keyword, where you expose specific commands to the user. Here, we specify that this snap has a single app, called system. The command that this app actually runs within the snap is the roslaunch invocation we got from the previous post. We use plugs to specify that it requires network access (read more about interfaces), and finally specify that it’s a simple daemon. That means this app will begin running as soon as the snap is installed, and also run upon boot. All this, and the user doesn’t even need to know that this snap uses ROS!
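One practical consequence of daemon: simple is that snapd manages the app as a systemd service, named following snapd’s snap.<snap-name>.<app-name> convention. So once the snap is installed, you can inspect or follow the daemon with the usual systemd tooling, roughly:

```
$ sudo systemctl status snap.my-turtlebot-snap.system.service
$ sudo journalctl -u snap.my-turtlebot-snap.system.service
```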
That’s actually all we need to make our prototype into a snap. Let’s create the snap itself:
$ cd ~/workspace
$ snapcraft
That will take a few minutes. You’ll see Snapcraft fetch rosdep, which is then used to determine the dependencies of the ROS packages in the workspace. In our case that’s just prototype, which you’ll recall from the previous post depends upon kobuki_node and kobuki_random_walker. It then pulls those down and puts them into the snap along with roscore. Finally, it builds the requested packages in the workspace, and installs them into the snap as well. At the end, you’ll have your snap.
Test the snap
Even though we’re planning on using this snap on Ubuntu Core, snaps run on classic Ubuntu as well. This is an excellent way to ensure that our snap runs as expected before moving on to Ubuntu Core. Since we already have our machine set up to communicate with the Turtlebot, we can try it out right here. The only hitch is that /dev/kobuki isn’t covered by any interface on classic Ubuntu (we can make this work for Ubuntu Core, though; more on that later). That’s why we used devmode as the confinement type in our snap. We’ll install it with devmode here:
$ sudo snap install --devmode path/to/my.snap
Right after this completes (give it a second for our app to fire up), you should hear the robot sing and begin moving. Once you remove the snap it’ll stop moving:
$ sudo snap remove my-turtlebot-snap
How easy is that? If you put that in the store, anyone with a Turtlebot (no ROS required) could snap install it and it would immediately begin moving just like it did for you. In fact, why don’t we put it in the store right now?
Put the snap in the store
Step 1: Tell Snapcraft who you are
We’re about to use Snapcraft to register and upload a snap using the store account you created when satisfying the prerequisites. For that to happen, you need to sign in with Snapcraft:
$ snapcraft login
Step 2: Register the snap name
Snap names are globally unique, so only one developer can register and publish a snap with a given name. Before you can publish the snap, you need to make sure that snap name is registered to you (note that this corresponds to the name field in the snapcraft.yaml we created a few minutes ago):
$ snapcraft register <my snap name>
Assuming that name is available, you can proceed to upload it.
Step 3: Release the snap
In the tour you learned that there are four channels available by default. In order of increasing stability, these channels are edge, beta, candidate, and stable. This snap isn’t quite perfect yet since it still requires devmode, so let’s release it on the beta channel:
$ snapcraft push path/to/my.snap --release=beta
Once the upload and automated reviews finish successfully, anyone in the world can install your snap on the computer controlling their Turtlebot as simply as:
$ sudo snap install --beta --devmode my-turtlebot-snap
In the next post in this series, we’ll discuss how to obtain real confined access to the Turtlebot’s udev symlink on Ubuntu Core by creating a gadget snap, moving toward our goal of having a final image with this snap pre-installed and ready to ship.
Original source here.
The open saucy Ubuntu is considering dumping the Thunderbird mail app because users tend to favour web-based mail services instead.
Firefox is a wonderful open source web browser. As a result, it comes pre-loaded on many Linux-based operating systems, such as Ubuntu and Fedora. Yeah, some people choose to install Chromium or Chrome instead, but Mozilla’s offering remains a staple in the Linux community.
Android malware capable of accessing smartphone users’ location and sending it to cyberattackers remained undetected in the Google Play store for three years, according to a security company.
Certified Ubuntu images are now available in the Oracle Bare Metal Cloud Services, providing developers with compute options ranging from single to 16 OCPU virtual machines (VMs) to high-performance, dedicated bare metal compute instances. This is in addition to the image already offered on Oracle Compute Cloud Service and maintains the ability for enterprises to add Canonical-backed Ubuntu Advantage Support and Systems Management. Oracle and Canonical customers now have access to the latest Ubuntu features, compliance accreditations and security updates.
“Oracle and Canonical have collaborated to ensure the optimal devops experience using Ubuntu on the Oracle Cloud Compute Cloud Service and Bare Metal Cloud Services. By combining the elasticity and ease of deployment on Oracle Cloud Platform, users can immediately reap the benefit of high-performance, high availability and cost-effective infrastructure services,” says Sanjay Sinha, Vice President, Platform Products, Oracle.
“Ubuntu has been growing on Oracle’s Compute Cloud Service, and the same great experience is now available to Enterprise Developers on its Bare Metal Cloud Services,” said Udi Nachmany, Head of Public Cloud at Canonical. “Canonical and Oracle engineering teams will continue to collaborate extensively to deliver a consistent and optimized Ubuntu experience across any relevant Oracle offerings.”
Canonical continually maintains, tests and updates certified Ubuntu images, making the latest versions available on the Oracle Cloud Marketplace within minutes of their official release by Canonical. For all Ubuntu LTS versions, Canonical provides maintenance and security updates for five years.
A lot of changes have been happening under Canonical’s roof, and here is another one: the Ubuntu 17.10 release will ship with Wayland by default, marking the exit of the X.org server.
Over the last 4 months I have blogged 4 times about the enablement of GPUs in Kubernetes. Each time I did so, I spent several days building and destroying clusters until it was just right, making the experience as fluid as possible for adventurous readers.
It was not the easiest task, as the environments were different (cloud, bare metal) and the hardware was different (g2.xlarge instances have old K20s, p2 instances have K80s, and I had a 1060GTX at home, but on a consumer-grade Intel NUC…). As a result, I also spent several hours supporting people to set up clusters. Usually with success, but I must admit some environments have been challenging.
Thankfully the team at Canonical in charge of developing the Canonical Distribution of Kubernetes have productized GPU integration and made it so easy to use that it would just be a shame not to talk about it.
And as happiness of course never comes alone, I was lucky enough to be allocated 3 brand new, production-grade Pascal P5000s by our nVidia friends. I could have installed these in my playful rig to replace the 1060GTX boards. But this would have shown little gratitude for the exceptional gift I received from nVidia. Instead, I decided to go for a full blown “production grade” bare metal cluster, which will allow me to replicate most of the environments customers and partners have. I chose to go for 3x Dell T630 servers, which can be GPU enabled and are very capable machines. I received them a couple of weeks ago, and…
What it was in the past
If you remember the other posts, the sequence was:
Overall, on top of the Kubernetes installation, even with all the scripting in the world, no less than 30 to 45 minutes were lost performing the specific maintenance for GPU enablement.
How is it now?
I am happy to say that the requests of the community have been heard loud and clear. As of Kubernetes 1.6.1, and the matching GA release of the Canonical Distribution of Kubernetes, the new experience is:
Yes, you read that correctly: single-command deployment of a GPU-enabled Kubernetes cluster.
Since 1.6.1, the charms will now:
You don’t believe me? Fair enough. Watch me…
For the following, you’ll need:
and for the files, cloning the repo:
Putting it to the test
In the cloud
Deploying in the cloud is trivial. Once Juju is installed and your credentials are added,
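For reference, the whole thing presumably boils down to something along these lines (bundle name as published for the Canonical Distribution of Kubernetes at the time; substitute your own cloud and region):

```
$ juju bootstrap aws
$ juju deploy canonical-kubernetes
```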
I was able to capture the moment where it is installing CUDA so you can see it… When it’s done:
That’s it, you can see the K80 from the p2.xlarge instance. I didn’t do anything about it, it was completely automated. This is Kubernetes on GPU steroids.
The important option in the bundle file we deployed is:
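Judging by the behaviour described, this is the allow-privileged setting on the Kubernetes master and worker charms. A sketch of the relevant bundle fragment (illustrative; check the charm documentation for the exact option names):

```yaml
applications:
  kubernetes-worker:
    options:
      allow-privileged: "true"   # or "auto": enable only when GPUs are detected
```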
If you want to prevent privileged containers until absolutely necessary, you can use the tag “auto”, which will only activate them if GPUs are detected.
On Bare Metal
Obviously there is a little more to do on Bare Metal, and I will refer you to my previous posts to understand how to set MAAS up & running. This assumes it is already working.
Adding the T630 to MAAS is a breeze. If you don’t change the default iDRAC username/password (root/calvin), the only thing you have to do is connect them to a network (a specific VLAN for management is preferred, of course), set the IP address, and add them to MAAS with an IPMI power type.
Wait for a few minutes… You will see at some point that the charm is now installing CUDA drivers. At the end,
That’s it, my 2 cards are in there: 1060GTX and P5000. Again, no user interaction. How awesome is this?
Note that the interesting aspects are not only that it automated the GPU enablement, but also that the bundle files (the yaml content) are essentially the same, but for the machine constraints we set.
Having some fun with GPUs
If you follow me you know I’ve been playing with Tensorflow, so that would be a use case, but I actually wanted to get some raw fun with them! One of my readers mentioned bitcoin mining once, so I decided to go for it.
This chart will let you configure how many nodes, and how many GPUs per node, you want to use. Then you can also tweak the miner. For now, it only works in ETH-only mode. Don’t forget to create a values.yaml file to
By default, you’ll get the 3 worker nodes, with 2 GPUs (this is to work on my rig at home)
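A minimal values.yaml overriding those defaults might look like the following (the key names here are illustrative placeholders; check the chart’s own values.yaml for the real ones):

```yaml
# values.yaml: hypothetical overrides for the miner chart
workerNodes: 3     # number of nodes to mine on
gpusPerNode: 2     # GPUs claimed on each node
minerMode: "eth"   # ETH-only mode, per the chart's current limitation
```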
What did I learn from it? Well,
I recognize that 3 months ago, running Kubernetes with GPUs wasn’t a trivial job. It was possible, but you needed to really want it.
Today, if you are looking for CUDA workloads, I challenge you to find anything easier than the Canonical Distribution of Kubernetes to run that, on Bare Metal or in the cloud. It is literally so trivial to make it work that it’s boring. Exactly what you want from infrastructure.
GPUs are the new normal. Get used to it.
So, let me know of your use cases, and I will put this cluster to work on something a little more useful for mankind than a couple of ETH!
I am always happy to do some skunk work, and if you combine GPUs and Kubernetes, you’ll just be targeting my 2 favorite things in the compute world. Shoot me a message @SaMnCo_23!
This is a guest post by Peter Kirwan, technology journalist. If you would like to contribute a post, please contact firstname.lastname@example.org
Anyone who doubts that governments are closing in on hardware vendors in a bid to shut down IoT security vulnerabilities needs to catch up with the Federal Trade Commission’s recent lawsuit against D-Link.
The FTC’s 14-page legal complaint accuses the Taiwan-based company of putting consumers at risk by inadequately securing routers and IP cameras.
In this respect, this FTC lawsuit looks much the same as previous ones that held tech vendors to account for security practices that failed to live up to marketing rhetoric.
The difference this time around is that the FTC’s lawsuit includes a pointed reference to reports that D-Link’s devices were compromised by the same kind of IoT botnets that took down US-based Dyn and European service providers in late 2016.
In one way, this isn’t so surprising. In the wake of these recent attacks, the question of how we secure vast numbers of connected devices has rapidly moved up the agenda. (You can read our white paper on this, here.) In December 2016, for example, after analysing the sources of the Dyn attack, Allison Nixon, director of research at the security firm Flashpoint, pointed to the need for new approaches:
“We must look at this problem with fresh eyes and a sober mind, and ask ourselves what the Internet is going to look like when the professionals muscle out the amateurs and take control of extremely large attack power that already threatens our largest networks.”
In recent years, the way in which the FTC interprets its responsibility to protect US consumers from deceptive practices has evolved. It has already established itself as a guardian of digital privacy. Now, it seems, the FTC may be interested in preventing the disruption that accompanies large-scale DDoS attacks.
D-Link, which describes its security policies as “robust”, has pledged to fight the FTC’s case in court. The company argues that the FTC needs to prove that “actual consumers suffered or are likely to suffer actual substantial injuries”. To fight its corner, D-Link has hired a public interest law firm which accuses the FTC of “unchecked regulatory overreach”.
By contrast, the FTC believes it simply needs to demonstrate that D-Link has misled customers by claiming that its products are secure, while failing to take “reasonable steps” to secure its devices. The FTC claims that this is “unfair or deceptive” under US law.
But who defines what counts as “reasonable steps” when it comes to the security of connected devices?
The FTC’s lawsuit argues that D-Link failed to protect against flaws which the Open Web Application Security Project (OWASP) “has ranked among the most critical and widespread application vulnerabilities since at least 2007”.
The FTC might just as easily have pointed to its own guidelines, published over two years ago. In the words of Stephen Cobb, senior security researcher at the security firm ESET: “Companies failing to heed the agency’s IoT guidance. . . should not be surprised if they come under scrutiny. Bear in mind that any consumer or consumer advocacy group can request an FTC investigation.”
The FTC has already established that consumers have a right to expect that vendors will take reasonable steps to ensure that their devices are not used to spy on them or steal their identity.
If the FTC succeeds against D-Link, consumers may also think it reasonable that their devices should be protected against botnets, too.
Of course, any successful action by the FTC will only be relevant to IoT devices sold and installed in the US. But the threat of an FTC investigation certainly will get the attention of hardware vendors who operate internationally and need to convince consumers that they can be trusted on security.
NodeSource is releasing a distribution of its enterprise-level, commercially supported NSolid Node.js runtime that works with Docker-friendly Alpine Linux. NSolid for Alpine Linux is intended to work with Alpine’s small footprint and security capabilities, said Joe McCann, NodeSource CEO.
Unitas Global, the leading enterprise hybrid cloud solution provider, and Canonical, the company behind Ubuntu, the leading operating system for container, cloud, scale-out, and hyperscale computing, announced they will provide a new fully managed and hosted OpenStack private cloud to enterprise clients around the world.
This partnership, developed in response to growing enterprise demand to consume open source infrastructure, OpenStack and Kubernetes, without the need to build in-house development or operations capabilities, will enable enterprise organizations to focus on strategic Digital Transformation initiatives rather than day to day infrastructure management.
This partnership along with Unitas Global’s large ecosystem of system integrators and partners will enable customers to choose an end to end infrastructure solution to design, build, and integrate custom private cloud infrastructure based on OpenStack. It can then be delivered as a fully-managed solution anywhere in the world allowing organisations to easily consume the private cloud resources they need without building and operating the cloud itself.
Private cloud solutions provide predictable performance, security, and the ability to customize the underlying infrastructure. This new joint offering combines Canonical’s powerful automated deployment software and infrastructure operations with Unitas Global’s infrastructure and guest level managed services in data centers globally.
“Canonical and Unitas Global combine automated, customizable OpenStack software alongside fully-managed private cloud infrastructure providing enterprise clients with a simplified approach to cloud integration throughout their business environment,” explains Grant Kirkwood, CTO and Founder, Unitas Global. “We are very excited to partner with Canonical to bring this much-needed solution to market, enabling enhanced growth and success for our clients around the world.”
“By partnering with Unitas Global, we are able to deliver a flexible and affordable solution for enterprise cloud integration utilizing cutting-edge software built on fully-managed infrastructure,” said Arturo Suarez, BootStack Product Manager, Canonical. “At Canonical, it is our mission to drive technological innovation throughout the enterprise marketplace by making flexible, open source software available for simplified consumption wherever needed, and we are looking forward to working side-by-side with Unitas Global to deliver upon this promise.”
To learn more about Unitas Global, visit.
For more information about Canonical BootStack, visit.