
ROS production: our prototype as a snap [3/5]

This is a guest post by Kyle Fazzari, Software Engineer. If you would like to contribute a guest post, please contact ubuntu-devices@canonical.com

This is the third blog post in this series about ROS production. In the previous post we came up with a simple ROS prototype. In this post we’ll package that prototype as a snap. For justifications behind why we’re doing this, please see the first post in the series.

We know from the previous post that our prototype consists of a single launch file that we wrote, contained within our prototype ROS package. Turning this into a snap is very straightforward, so let’s get started! Remember that this is also a video series: feel free to watch the video version of this post.

Prerequisites

This post will assume the following:

  • You’ve followed the previous posts in this series
  • You know what snaps are, and have taken the tour
  • You have a store account at http://myapps.developer.ubuntu.com
  • You have a recent Snapcraft installed (2.28 is the latest as of this writing)

Create the snap

The first step toward a new snap is to create the snapcraft.yaml. Put that in the root of the workspace we created in the previous post:

$ cd ~/workspace
$ snapcraft init
Created snap/snapcraft.yaml. Edit the file to your liking or run `snapcraft` to get started

Do as it says, and make that file look something like this:

name: my-turtlebot-snap # This needs to be a unique name
version: '0.1'
summary: Turtlebot ROS Demo
description: |
  Demo of Turtlebot randomly wandering around, avoiding obstacles and cliffs.
grade: stable
confinement: devmode

parts:
  prototype-workspace:
    plugin: catkin
    rosdistro: kinetic
    catkin-packages: [prototype]

apps:
  system:
    command: roslaunch prototype prototype.launch --screen
    plugs: [network, network-bind]
    daemon: simple

Let’s digest that section by section.

name: my-turtlebot-snap
version: '0.1'
summary: Turtlebot ROS Demo
description: |
  Demo of Turtlebot randomly wandering around, avoiding obstacles and cliffs.

This is the basic metadata that all snaps require. These fields are fairly self-explanatory. The only thing I want to point out specifically here is that the name must be globally unique among all snaps. If you’re following this tutorial, you might consider appending your developer name to the end of this example.

grade: stable
confinement: devmode

grade can be either stable or devel. If it’s devel, the store will prevent you from releasing into one of the two stable channels (stable and candidate, specifically); think of it as a safety net to prevent accidental releases. If it’s stable, you can release it anywhere.

confinement can be strict, devmode, or classic. strict enforces confinement, whereas devmode allows all accesses, even those that would be disallowed under strict confinement (and logs accesses that would otherwise be disallowed for your reference). classic is even less confined than devmode, in that it doesn’t even get private namespaces anymore (among other things). There is more extensive documentation on confinement available.

I personally always use strict confinement unless I know for sure that the thing I’m snapping won’t run successfully under confinement, in which case I’ll use devmode. I typically avoid classic unless I never intend for the app to run confined. In this case, I know from experience this snap won’t run confined as-is, and will require devmode for now (more on that later).

parts:
  prototype-workspace:
    plugin: catkin
    rosdistro: kinetic
    catkin-packages: [prototype]

You learned about this in the Snapcraft tour, but I’ll cover it again real quick. Snapcraft is responsible for taking many disparate parts and orchestrating them all into one cohesive snap. You tell it the parts that make up your snap, and it takes care of the rest. Here, we tell Snapcraft that we have a single part called prototype-workspace. We specify that it builds with Catkin, and also specify that we’re using Kinetic here (as opposed to Jade, or the default, Indigo). Finally, we specify the packages in this workspace that we want included in the snap. In our case, we only have one: that prototype package we created in the previous post.
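Purely as a hypothetical aside: if the workspace later grew to contain more packages, they would simply be listed on the same part (my_other_package below is an invented name, not part of this prototype):

parts:
  prototype-workspace:
    plugin: catkin
    rosdistro: kinetic
    catkin-packages: [prototype, my_other_package]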

apps:
  system:
    command: roslaunch prototype prototype.launch --screen
    plugs: [network, network-bind]
    daemon: simple

This is where things get a little interesting. When we build this snap, it will include a complete ROS system: roscpp, roslib, roscore, roslaunch, your ROS workspace, etc. It’s a standalone unit: you’re in total control of how the user interacts with it. You exercise that control via the apps keyword, where you expose specific commands to the user. Here, we specify that this snap has a single app, called system. The command that this app actually runs within the snap is the roslaunch invocation we got from the previous post. We use plugs to specify that it requires network access (read more about interfaces), and finally specify that it’s a simple daemon. That means this app will begin running as soon as the snap is installed, and also run upon boot. All this, and the user doesn’t even need to know that this snap uses ROS!
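Note that nothing limits us to a single app, either. Purely as an illustration (the prototype doesn’t need this), a second entry could expose the same launch file as a command the user runs by hand rather than as a daemon:

apps:
  system:
    command: roslaunch prototype prototype.launch --screen
    plugs: [network, network-bind]
    daemon: simple
  run:
    command: roslaunch prototype prototype.launch --screen
    plugs: [network, network-bind]

Since snapd namespaces app commands under the snap name, a user would invoke that second app as my-turtlebot-snap.run.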

That’s actually all we need to make our prototype into a snap. Let’s create the snap itself:

$ cd ~/workspace
$ snapcraft

That will take a few minutes. You’ll see Snapcraft fetch rosdep, which is then used to determine the dependencies of the ROS packages in the workspace. This is only prototype in our case, which you’ll recall from the previous post depends upon kobuki_node and kobuki_random_walker. It then pulls those down and puts them into the snap along with roscore. Finally, it builds the requested packages in the workspace, and installs them into the snap as well. At the end, you’ll have your snap.

Test the snap

Even though we’re planning on using this snap on Ubuntu Core, snaps run on classic Ubuntu as well. This is an excellent way to ensure that our snap runs as expected before moving on to Ubuntu Core. Since we already have our machine set up to communicate with the Turtlebot, we can try it out right here. The only hitch is that /dev/kobuki isn’t covered by any interface on classic Ubuntu (we can make this work for Ubuntu Core, though; more on that later). That’s why we used devmode as the confinement type in our snap. We’ll install it with devmode here:

$ sudo snap install --devmode path/to/my.snap

Right after this completes (give it a second for our app to fire up), you should hear the robot sing and begin moving. Once you remove the snap it’ll stop moving:

$ sudo snap remove my-turtlebot-snap
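As an aside, while the snap is installed the daemon can be inspected like any other systemd service. A quick sketch, assuming snapd’s usual snap.<snap-name>.<app-name> unit naming:

$ systemctl status snap.my-turtlebot-snap.system
$ sudo journalctl -u snap.my-turtlebot-snap.system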

How easy is that? If you put that in the store, anyone with a Turtlebot (no ROS required) could snap install it and it would immediately begin moving just like it did for you. In fact, why don’t we put it in the store right now?

Put the snap in the store

Step 1: Tell Snapcraft who you are

We’re about to use Snapcraft to register and upload a snap using the store account you created when satisfying the prerequisites. For that to happen, you need to sign in with Snapcraft:

$ snapcraft login

Step 2: Register the snap name

Snap names are globally unique, so only one developer can register and publish a snap with a given name. Before you can publish the snap, you need to make sure that snap name is registered to you (note that this corresponds to the name field in the snapcraft.yaml we created a few minutes ago):

$ snapcraft register <my snap name>

Assuming that name is available, you can proceed to upload it.

Step 3: Release the snap

In the tour you learned that there are four channels available by default. In order of increasing stability, these channels are edge, beta, candidate, and stable. This snap isn’t quite perfect yet since it still requires devmode, so let’s release it on the beta channel:

$ snapcraft push path/to/my.snap --release=beta

Once the upload and automated reviews finish successfully, anyone in the world can install your snap on the computer controlling their Turtlebot as simply as:

$ sudo snap install --beta --devmode my-turtlebot-snap
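Installed snaps keep tracking the channel they came from, so once you push a newer revision to beta, users pick it up with a standard refresh (or automatically in the background):

$ sudo snap refresh my-turtlebot-snap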

In the next post in this series, we’ll discuss how to obtain real confined access to the Turtlebot’s udev symlink on Ubuntu Core by creating a gadget snap, moving toward our goal of having a final image with this snap pre-installed and ready to ship.


Ubuntu might retire Thunderbird

The open saucy Ubuntu is considering dumping the Thunderbird mail app because users tend to favour webmail services instead.
Ubuntu 17.10 may not include a default desktop email app at all; Thunderbird is currently Ubuntu’s default email client.
However, apparently Linux geeks use webmail providers like Google, Yahoo!, Outlook, or ProtonMail and really can’t be bothered with mail apps these days.
An email posted to the Ubuntu desktop mailing list noted that the company switched Ubuntu’s default email client from Evolution to Thunderbird six years ago, and that it is time to take another look.
The idea is to get people talking about killing off Thunderbird and the kinds of apps that are offered in Ubuntu by default.
‘This discussion is not suggesting email apps are bunk, it simply asks if one needs to be there by default.’
GNOME’s Michael Catanzaro suggests that, for an ideal ‘pure GNOME’ experience, a distro shouldn’t ship an email client by default because, right now, there isn’t one that’s both well-maintained and well-integrated with GNOME.
The question is though whether enough desktop Ubuntu users find Thunderbird useful enough for one to be given a prized slot on the install disk.

Source: http://www.fudzilla.com/news/43433-ubuntu-might-retire-thunderbird
Submitted by: Arnfried Walbrecht

Mozilla Firefox web browser may no longer be supported on your Linux computer

Firefox is a wonderful open source web browser. As a result, it comes pre-loaded on many Linux-based operating systems, such as Ubuntu and Fedora. Yeah, some people choose to install Chromium or Chrome instead, but Mozilla’s offering remains a staple in the Linux community.
Unfortunately, it has been revealed that the Firefox web browser will no longer be compatible with some computers running a Linux-based operating system. You see, Mozilla has dropped support for certain Intel and AMD processors.
Buried in the release notes for the all-new Firefox 53, Mozilla drops the following bombshell:
“Ended Firefox Linux support for processors older than Pentium 4 and AMD Opteron”.
It is important to note that we are talking about some very old processors here, so many users won’t be impacted. With that said, Linux-based operating systems are often popular for those with ancient hardware, so there certainly will be some computers that are affected.

Source: https://betanews.com/2017/04/20/mozilla-firefox-linux-intel-amd/
Submitted by: Arnfried Walbrecht

Location tracking Android spyware found in Google Play store

Android malware capable of accessing smartphone users’ location and sending it to cyberattackers remained undetected in the Google Play store for three years, according to a security company.
Discovered by IT security researchers at Zscaler, the SMSVova Android spyware poses as a system update in the Play Store and has been downloaded between one million and five million times since it first appeared in 2014.
The app claims to give users access to the latest Android system updates, but it’s actually malware designed to compromise the victims’ smartphone and provide the users’ exact location in real time.
Researchers became suspicious of the application, partly because of a string of negative reviews complaining that the app doesn’t update the Android OS, causes phones to run slowly, and drains battery life. Other indicators that led to Zscaler looking into the app included blank screenshots on the store page and no proper description of what the app actually does.
Indeed, the only information the store page provided about the ‘System Update’ app is that it ‘updates and enables special location’ features. It doesn’t tell the user what it’s really doing: sending location information to a third party, a tactic which it exploits to spy on targets.

Source: http://www.zdnet.com/article/location-tracking-android-spyware-found-in-google-play-store/
Submitted by: Arnfried Walbrecht

Certified Ubuntu Images available on Oracle Bare Metal Cloud Service

  • Developers are offered options for where to run demanding workloads or less compute-intensive applications, in a highly available cloud environment.
  • Running development and production on Certified Ubuntu can simplify operations and reduce engineering costs

Certified Ubuntu images are now available in the Oracle Bare Metal Cloud Services, providing developers with compute options ranging from single to 16 OCPU virtual machines (VMs) to high-performance, dedicated bare metal compute instances. This is in addition to the image already offered on Oracle Compute Cloud Service and maintains the ability for enterprises to add Canonical-backed Ubuntu Advantage Support and Systems Management. Oracle and Canonical customers now have access to the latest Ubuntu features, compliance accreditations and security updates.

“Oracle and Canonical have collaborated to ensure the optimal devops experience using Ubuntu on the Oracle Compute Cloud Service and Bare Metal Cloud Services. By combining the elasticity and ease of deployment on Oracle Cloud Platform, users can immediately reap the benefit of high-performance, high availability and cost-effective infrastructure services,” says Sanjay Sinha, Vice President, Platform Products, Oracle.

“Ubuntu has been growing on Oracle’s Compute Cloud Service, and the same great experience is now available to Enterprise Developers on its Bare Metal Cloud Services,” said Udi Nachmany, Head of Public Cloud at Canonical. “Canonical and Oracle engineering teams will continue to collaborate extensively to deliver a consistent and optimized Ubuntu experience across any relevant Oracle offerings.”

Canonical continually maintains, tests and updates certified Ubuntu images, making the latest versions available on the Oracle Cloud Marketplace within minutes of their official release by Canonical. For all Ubuntu LTS versions, Canonical provides maintenance and security updates for five years.

Ubuntu 17.10 To Have Wayland Display Server As Default

A lot of changes have been happening under Canonical’s roof and there is another one. The Ubuntu 17.10 release will ship with Wayland by default, marking the exit of the X.org server.
Spotted by OMG! Ubuntu, the said change was confirmed by Ubuntu desktop engineering manager Will Cooke. However, Canonical is yet to make an official announcement.
Ubuntu getting Wayland isn’t a surprise for many, but an expected move after Unity 8 was ditched for GNOME. Canonical has already scrapped plans to further develop its home-grown alternative to Wayland, Mir, which would have powered Unity in the future. The Linux distro follows Fedora, which began shipping Wayland as the default user session with the release of Fedora 25.
The GNOME desktop coming back to Ubuntu already has a Wayland implementation. However, X.org may still ship with Ubuntu as an optional session, in case people need backward compatibility.

Source: https://fossbytes.com/ubuntu-17-10-wayland-default-server/
Submitted by: Arnfried Walbrecht

How we commoditized GPUs for Kubernetes

Over the last 4 months I have blogged 4 times about the enablement of GPUs in Kubernetes. Each time I did so, I spent several days building and destroying clusters until it was just right, making the experience as fluid as possible for adventurous readers.

It was not the easiest task as the environments were different (cloud, bare metal), the hardware was different (g2.xlarge instances have old K20s, p2 instances have K80s, and I had a 1060GTX at home, but on a consumer-grade Intel NUC…). As a result, I also spent several hours supporting people to set up clusters. Usually with success, but I must admit some environments have been challenging.

Thankfully the team at Canonical in charge of developing the Canonical Distribution of Kubernetes have productized GPU integration and made it so easy to use that it would just be a shame not to talk about it.

And as happiness never comes alone, I was lucky enough to be allocated 3 brand new, production-grade Pascal P5000 cards by our nVidia friends. I could have installed these in my playful rig to replace the 1060GTX boards. But this would have shown little gratitude for the exceptional gift I received from nVidia. Instead, I decided to go for a full blown “production grade” bare metal cluster, which will allow me to replicate most of the environments customers and partners have. I chose to go for 3x Dell T630 servers, which can be GPU enabled and are very capable machines. I received them a couple of weeks ago, and…


Please don’t mind the cables, I don’t have a rack…

There we are! Ready for some awesomeness?

What it was in the past

If you remember the other posts, the sequence was:

  1. Deploy a “normal” K8s cluster with Juju;
  2. Add a CUDA charm and relate it to the right group of Kubernetes workers;
  3. Connect to each node, activate privileged containers, and add the experimental-nvidia-gpu tag to the kubelet. Restart kubelet;
  4. Connect to the API server, add the experimental-nvidia-gpu tag and restart the API server;
  5. Test that the drivers were installed OK and made available in k8s with Juju and Kubernetes commands.

Overall, on top of the Kubernetes installation, and with all the scripting in the world, no less than 30 to 45 minutes were lost performing the GPU-specific maintenance.
It is better than having no GPUs, but it is often too much for cluster operators who want an instant solution.

How is it now?

I am happy to say that the requests of the community have been heard loud and clear. As of Kubernetes 1.6.1, and the matching GA release of the Canonical Distribution of Kubernetes, the new experience is:

  1. Deploy a normal K8s cluster with Juju

Yes, you read that correctly: a single-command deployment of a GPU-enabled Kubernetes cluster.

Since 1.6.1, the charms will now:

  • Watch for GPU availability every 5 minutes. For clouds like GCE, where GPUs can be added on the fly to instances, this makes sure that no GPU will ever be forgotten;
  • If one or more GPUs are detected on a worker, the latest and greatest CUDA drivers will be installed on the node, and the kubelet reconfigured and restarted automagically;
  • The worker will then communicate its new state to the master, which will in return also reconfigure the API server and accept GPU workloads;
  • In case you have a mixed cluster with some nodes with GPUs and others without, only the right nodes will attempt to install CUDA and accept privileged containers.

You don’t believe me? Fair enough. Watch me…

Requirements

For the following, you’ll need:

  • Basic understanding of the Canonical toolbox: Ubuntu, Juju, MAAS…
  • Basic understanding of Kubernetes
  • A little bit of Helm at the end

and for the files, cloning the repo:

git clone https://github.com/madeden/blogposts
cd blogposts/k8s-ethereum

Putting it to the test

In the cloud

Deploying in the cloud is trivial. Once Juju is installed and your credentials are added,

juju bootstrap aws/us-east-1
juju deploy src/bundles/k8s-1cpu-3gpu-aws.yaml
watch -c juju status --color

Now wait…

Model    Controller     Cloud/Region   Version
default  aws-us-east-1  aws/us-east-1  2.2-beta2

App                    Version  Status       Scale  Charm              Store       Rev  OS      Notes
easyrsa                3.0.1    active       1      easyrsa            jujucharms  8    ubuntu
etcd                   2.3.8    active       1      etcd               jujucharms  29   ubuntu
flannel                0.7.0    active       2      flannel            jujucharms  13   ubuntu
kubernetes-master      1.6.1    waiting      1      kubernetes-master  jujucharms  17   ubuntu  exposed
kubernetes-worker-cpu  1.6.1    active       1      kubernetes-worker  jujucharms  22   ubuntu  exposed
kubernetes-worker-gpu           maintenance  3      kubernetes-worker  jujucharms  22   ubuntu  exposed

Unit                      Workload     Agent      Machine  Public address  Ports           Message
easyrsa/0*                active       idle       0/lxd/0  10.0.201.114                    Certificate Authority connected.
etcd/0*                   active       idle       0        52.91.177.229   2379/tcp        Healthy with 1 known peer
kubernetes-master/0*      waiting      idle       0        52.91.177.229   6443/tcp        Waiting for kube-system pods to start
  flannel/0*              active       idle                52.91.177.229                   Flannel subnet 10.1.4.1/24
kubernetes-worker-cpu/0*  active       idle       1        34.207.180.182  80/tcp,443/tcp  Kubernetes worker running.
  flannel/1               active       idle                34.207.180.182                  Flannel subnet 10.1.29.1/24
kubernetes-worker-gpu/0   maintenance  executing  2        54.146.144.181                  (install) Installing CUDA
kubernetes-worker-gpu/1   maintenance  executing  3        54.211.83.217                   (install) Installing CUDA
kubernetes-worker-gpu/2*  maintenance  executing  4        54.237.248.219                  (install) Installing CUDA

Machine  State    DNS             Inst id              Series  AZ          Message
0        started  52.91.177.229   i-0d71d98b872d201f5  xenial  us-east-1a  running
0/lxd/0  started  10.0.201.114    juju-29e858-0-lxd-0  xenial              Container started
1        started  34.207.180.182  i-04f2b75f3ab88f842  xenial  us-east-1a  running
2        started  54.146.144.181  i-0113e8a722778330c  xenial  us-east-1a  running
3        started  54.211.83.217   i-07c8c81f5e4cad6be  xenial  us-east-1a  running
4        started  54.237.248.219  i-00ae437291c88210f  xenial  us-east-1a  running

Relation      Provides               Consumes               Type
certificates  easyrsa                etcd                   regular
certificates  easyrsa                kubernetes-master      regular
certificates  easyrsa                kubernetes-worker-cpu  regular
certificates  easyrsa                kubernetes-worker-gpu  regular
cluster       etcd                   etcd                   peer
etcd          etcd                   flannel                regular
etcd          etcd                   kubernetes-master      regular
cni           flannel                kubernetes-master      regular
cni           flannel                kubernetes-worker-cpu  regular
cni           flannel                kubernetes-worker-gpu  regular
cni           kubernetes-master      flannel                subordinate
kube-dns      kubernetes-master      kubernetes-worker-cpu  regular
kube-dns      kubernetes-master      kubernetes-worker-gpu  regular
cni           kubernetes-worker-cpu  flannel                subordinate
cni           kubernetes-worker-gpu  flannel                subordinate

I was able to capture the moment where it is installing CUDA so you can see it… When it’s done:

juju ssh kubernetes-worker-gpu/0 "sudo nvidia-smi"
Tue Apr 18 08:50:23 2017
+-----------------------------------------------------------------------------+
| NVIDIA-SMI 375.51                 Driver Version: 375.51                    |
|-------------------------------+----------------------+----------------------+
| GPU  Name        Persistence-M| Bus-Id        Disp.A | Volatile Uncorr. ECC |
| Fan  Temp  Perf  Pwr:Usage/Cap|         Memory-Usage | GPU-Util  Compute M. |
|===============================+======================+======================|
|   0  Tesla K80           Off  | 0000:00:1E.0     Off |                    0 |
| N/A   52C    P0    67W / 149W |      0MiB / 11439MiB |     98%      Default |
+-------------------------------+----------------------+----------------------+

+-----------------------------------------------------------------------------+
| Processes:                                                       GPU Memory |
|  GPU       PID  Type  Process name                               Usage      |
|=============================================================================|
|  No running processes found                                                 |
+-----------------------------------------------------------------------------+
Connection to 54.146.144.181 closed.

That’s it, you can see the K80 from the p2.xlarge instance. I didn’t do anything about it, it was completely automated. This is Kubernetes on GPU steroids.

The important option in the bundle file we deployed is:

options:
  "allow-privileged": "true"

If you want to prevent privileged containers until absolutely necessary, you can use the tag “auto”, which will only activate them if GPUs are detected.
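In a bundle, that option sits on the Kubernetes applications. A minimal sketch of what the GPU worker entry could look like (assuming the ~containers charms used by the Canonical Distribution of Kubernetes; the bundles in the repository above are the authoritative reference):

kubernetes-worker-gpu:
  charm: cs:~containers/kubernetes-worker
  num_units: 3
  options:
    "allow-privileged": "auto"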

On Bare Metal

Obviously there is a little more to do on Bare Metal, and I will refer you to my previous posts to understand how to set MAAS up & running. This assumes it is already working.

Adding the T630 to MAAS is a breeze. If you don’t change the default iDRAC username and password (root/calvin), the only thing you have to do is connect them to a network (a specific VLAN for management is preferred of course), set the IP address, and add them to MAAS with an IPMI power type.


Adding the nodes into MAAS

Then commission the nodes as you would with any other. This time, you won’t need to press the power button like I had to with the NUC cluster: MAAS will trigger the machines via the IPMI card directly, request a PXE boot, and register the nodes, all fully automagically. Once that is done, tag them “gpu” so they are easy to recognize.


Details about the T630 in MAAS 

Then

juju bootstrap maas
juju deploy src/bundles/k8s-1cpu-3gpu.yaml
watch -c juju status --color

Wait for a few minutes… You will see at some point that the charm is now installing CUDA drivers. At the end,

Model    Controller  Cloud/Region  Version
default  k8s         maas          2.1.2.1

App                    Version  Status  Scale  Charm              Store       Rev  OS      Notes
easyrsa                3.0.1    active  1      easyrsa            jujucharms  8    ubuntu
etcd                   2.3.8    active  1      etcd               jujucharms  29   ubuntu
flannel                0.7.0    active  5      flannel            jujucharms  13   ubuntu
kubernetes-master      1.6.1    active  1      kubernetes-master  jujucharms  17   ubuntu  exposed
kubernetes-worker-cpu  1.6.1    active  1      kubernetes-worker  jujucharms  22   ubuntu  exposed
kubernetes-worker-gpu  1.6.1    active  3      kubernetes-worker  jujucharms  22   ubuntu  exposed

Unit                      Workload  Agent  Machine  Public address  Ports           Message
easyrsa/0*                active    idle   0/lxd/0  172.16.0.8                      Certificate Authority connected.
etcd/0*                   active    idle   0        172.16.0.4      2379/tcp        Healthy with 1 known peer
kubernetes-master/0*      active    idle   0        172.16.0.4      6443/tcp        Kubernetes master running.
  flannel/1               active    idle            172.16.0.4                      Flannel subnet 10.1.9.1/24
kubernetes-worker-cpu/0*  active    idle   1        172.16.0.5      80/tcp,443/tcp  Kubernetes worker running.
  flannel/0*              active    idle            172.16.0.5                      Flannel subnet 10.1.20.1/24
kubernetes-worker-gpu/0   active    idle   2        172.16.0.6      80/tcp,443/tcp  Kubernetes worker running.
  flannel/2               active    idle            172.16.0.6                      Flannel subnet 10.1.91.1/24
kubernetes-worker-gpu/1   active    idle   3        172.16.0.7      80/tcp,443/tcp  Kubernetes worker running.
  flannel/4               active    idle            172.16.0.7                      Flannel subnet 10.1.19.1/24
kubernetes-worker-gpu/2*  active    idle   4        172.16.0.3      80/tcp,443/tcp  Kubernetes worker running.
  flannel/3               active    idle            172.16.0.3                      Flannel subnet 10.1.15.1/24

Machine  State    DNS         Inst id              Series  AZ
0        started  172.16.0.4  br68gs               xenial  default
0/lxd/0  started  172.16.0.8  juju-5a80fa-0-lxd-0  xenial
1        started  172.16.0.5  qkrh4t               xenial  default
2        started  172.16.0.6  4y74eg               xenial  default
3        started  172.16.0.7  w3pgw7               xenial  default
4        started  172.16.0.3  se8wy7               xenial  default

Relation      Provides               Consumes               Type
certificates  easyrsa                etcd                   regular
certificates  easyrsa                kubernetes-master      regular
certificates  easyrsa                kubernetes-worker-cpu  regular
certificates  easyrsa                kubernetes-worker-gpu  regular
cluster       etcd                   etcd                   peer
etcd          etcd                   flannel                regular
etcd          etcd                   kubernetes-master      regular
cni           flannel                kubernetes-master      regular
cni           flannel                kubernetes-worker-cpu  regular
cni           flannel                kubernetes-worker-gpu  regular
cni           kubernetes-master      flannel                subordinate
kube-dns      kubernetes-master      kubernetes-worker-cpu  regular
kube-dns      kubernetes-master      kubernetes-worker-gpu  regular
cni           kubernetes-worker-cpu  flannel                subordinate
cni           kubernetes-worker-gpu  flannel                subordinate

And now:

juju ssh kubernetes-worker-gpu/0 "sudo nvidia-smi"
Tue Apr 18 06:08:35 2017
+-----------------------------------------------------------------------------+
| NVIDIA-SMI 375.51                 Driver Version: 375.51                    |
|-------------------------------+----------------------+----------------------+
| GPU  Name        Persistence-M| Bus-Id        Disp.A | Volatile Uncorr. ECC |
| Fan  Temp  Perf  Pwr:Usage/Cap|         Memory-Usage | GPU-Util  Compute M. |
|===============================+======================+======================|
|   0  GeForce GTX 106...  Off  | 0000:04:00.0     Off |                  N/A |
| 28%   37C    P0    28W / 120W |      0MiB /  6072MiB |      0%      Default |
+-------------------------------+----------------------+----------------------+
|   1  Quadro P5000        Off  | 0000:83:00.0     Off |                  Off |
|  0%   43C    P0    39W / 180W |      0MiB / 16273MiB |      2%      Default |
+-------------------------------+----------------------+----------------------+

+-----------------------------------------------------------------------------+
| Processes:                                                       GPU Memory |
|  GPU       PID  Type  Process name                               Usage      |
|=============================================================================|
|  No running processes found                                                 |
+-----------------------------------------------------------------------------+

That’s it, my 2 cards are in there: 1060GTX and P5000. Again, no user interaction. How awesome is this?

Note that the interesting aspects are not only that it automated the GPU enablement, but also that the bundle files (the yaml content) are essentially the same, but for the machine constraints we set.
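Once the cluster reports GPUs, a pod consumes them through the alpha GPU resource that Kubernetes 1.6 exposes. A minimal sketch of a smoke test (depending on how the node exposes the drivers, the NVIDIA libraries may also need to be mounted into the container):

apiVersion: v1
kind: Pod
metadata:
  name: gpu-smoke-test
spec:
  restartPolicy: Never
  containers:
  - name: cuda
    image: nvidia/cuda:8.0-runtime
    command: ["nvidia-smi"]
    resources:
      limits:
        alpha.kubernetes.io/nvidia-gpu: 1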

Having some fun with GPUs

If you follow me you know I’ve been playing with Tensorflow, so that would be a use case, but I actually wanted to get some raw fun with them! One of my readers mentioned bitcoin mining once, so I decided to go for it.

I made a quick and dirty Helm Chart for an Ethereum Miner, along with a simple rig monitoring system called ethmon.

This chart will let you configure how many nodes and how many GPUs per node you want to use. Then you can also tweak the miner. For now, it only works in ETH-only mode. Don’t forget to create a values.yaml file to

  • add your own wallet (if you keep the default you’ll actually pay me, which is fine 🙂 but not necessarily your purpose),
  • update the ingress xip.io endpoint to match the public IP of one of your workers or use your own DNS
  • Adjust the number of workers and GPUs per node

then

cd ~
git clone https://github.com/madeden/charts.git
cd charts
helm init
helm install claymore --name claymore --values /path/to/yourvalues.yaml
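For reference, the values.yaml could look roughly like the following. The key names here are purely illustrative (invented for this sketch); the chart’s own values.yaml in the repository is the authoritative list:

# illustrative only; check the chart's values.yaml for the real keys
wallet: "0xYourEthereumWalletAddress"
workers: 3
gpuPerNode: 2
ingress:
  host: claymore.203.0.113.10.xip.io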

By default, you’ll get the 3 worker nodes, with 2 GPUs (this is to work on my rig at home)


KubeUI with the miners deployed
Monitoring interface (ethmon)

You can also track it here with nice graphs.

What did I learn from it? Well,

  • I really need to work on my per-card tuning here! The P5000 and the 1060GTX deliver the same performance, and they also match my Quadro M4000. This is not right (or there is a cap somewhere). But I’m a newbie, I’ll get better.
  • It’s probably not worth it money-wise. This would make me less than $100/month with this cluster, less than my electricity bill to run it.
  • There is a LOT of room for Monero mining on the CPU! I run at less than a core for the 6 workers.
  • I’ll probably update it to run fewer workers, but with all the GPUs allocated to them.
  • But it was very fun to make. And now apparently I need to do Monero, which is supposedly ASIC-resistant and should be more profitable. Stay tuned 😉

Conclusion

Three months ago, I admit, running Kubernetes with GPUs wasn’t a trivial job. It was possible, but you needed to really want it.

Today, if you are looking for CUDA workloads, I challenge you to find anything easier than the Canonical Distribution of Kubernetes to run that, on Bare Metal or in the cloud. It is literally so trivial to make it work that it’s boring. Exactly what you want from infrastructure.

GPUs are the new normal. Get used to it.

So, let me know of your use cases, and I will put this cluster to work on something a little more useful for mankind than a couple of ETH!

I am always happy to do some skunk work, and if you combine GPUs and Kubernetes, you’ll just be targeting my 2 favorite things in the compute world. Shoot me a message @SaMnCo_23!

FTC & D-Link

This is a guest post by Peter Kirwan, technology journalist. If you would like to contribute a post, please contact ubuntu-devices@canonical.com


Anyone who doubts that governments are closing in on hardware vendors in a bid to shut down IoT security vulnerabilities needs to catch up with the Federal Trade Commission’s recent lawsuit against D-Link.

The FTC’s 14-page legal complaint accuses the Taiwan-based company of putting consumers at risk by inadequately securing routers and IP cameras.

In this respect, this FTC lawsuit looks much the same as previous ones that held tech vendors to account for security practices that failed to live up to marketing rhetoric.

The difference this time around is that the FTC’s lawsuit includes a pointed reference to reports that D-Link’s devices were compromised by the same kind of IoT botnets that took down US-based Dyn and European service providers in late 2016.

In one way, this isn’t so surprising. In the wake of these recent attacks, the question of how we secure vast numbers of connected devices has rapidly moved up the agenda. (You can read our white paper on this, here.) In December 2016, for example, after analysing the sources of the Dyn attack, Allison Nixon, director of research at the security firm Flashpoint, pointed to the need for new approaches:

“We must look at this problem with fresh eyes and a sober mind, and ask ourselves what the Internet is going to look like when the professionals muscle out the amateurs and take control of extremely large attack power that already threatens our largest networks.”

In recent years, the way in which the FTC interprets its responsibility to protect US consumers from deceptive practices has evolved. It has already established itself as a guardian of digital privacy. Now, it seems, the FTC may be interested in preventing the disruption that accompanies large-scale DDoS attacks.

D-Link, which describes its security policies as “robust”, has pledged to fight the FTC’s case in court. The company argues that the FTC needs to prove that “actual consumers suffered or are likely to suffer actual substantial injuries”. To fight its corner, D-Link has hired a public interest law firm which accuses the FTC of “unchecked regulatory overreach”.

By contrast, the FTC believes it simply needs to demonstrate that D-Link has misled customers by claiming that its products are secure, while failing to take “reasonable steps” to secure its devices. The FTC claims that this is “unfair or deceptive” under US law.

But who defines what counts as “reasonable steps” when it comes to the security of connected devices?

The FTC’s lawsuit argues that D-Link failed to protect against flaws which the Open Web Application Security Project (OWASP) “has ranked among the most critical and widespread application vulnerabilities since at least 2007”.

The FTC might just as easily have pointed to its own guidelines, published over two years ago. In the words of Stephen Cobb, senior security researcher at the security firm ESET: “Companies failing to heed the agency’s IoT guidance. . . should not be surprised if they come under scrutiny. Bear in mind that any consumer or consumer advocacy group can request an FTC investigation.”

The FTC has already established that consumers have a right to expect that vendors will take reasonable steps to ensure that their devices are not used to spy on them or steal their identity.

If the FTC succeeds against D-Link, consumers may also think it reasonable that their devices should be protected against botnets, too.

Of course, any successful action by the FTC will only be relevant to IoT devices sold and installed in the US. But the threat of an FTC investigation certainly will get the attention of hardware vendors who operate internationally and need to convince consumers that they can be trusted on security.

Hardened Node.js distro comes to Docker-friendly Alpine Linux

NodeSource is releasing a distribution of its enterprise-level, commercially supported NSolid Node.js runtime that works with Docker-friendly Alpine Linux. NSolid for Alpine Linux is intended to work with Alpine’s small footprint and security capabilities, said Joe McCann, NodeSource CEO.
With the NSolid Node.js runtime, the company accommodates three critical enterprise technologies: the Linux kernel, Docker containers, and Node.js server-side JavaScript applications.
Containers using Alpine require no more than 8MB, and a disk installation takes up as little as about 130MB. There has been a rise in Alpine Linux Docker distributions because of Alpine’s tiny footprint, McCann said. The Alpine kernel also offers security enhancements preventing a class of zero-day and other vulnerabilities. Users get a secure option for running Node apps in containers, said McCann.
Built around the musl library and BusyBox utilities, Alpine is a noncommercial, general-purpose Linux distribution intended for power users.

Source: http://www.infoworld.com/article/3190606/javascript/hardened-nodejs-distro-comes-to-docker-friendly-alpine-linux.html
Submitted by: Arnfried Walbrecht

Unitas Global and Canonical provide Fully-Managed Enterprise OpenStack

Unitas Global, the leading enterprise hybrid cloud solution provider, and Canonical, the company behind Ubuntu, the leading operating system for container, cloud, scale-out, and hyperscale computing, announced they will provide a new fully managed and hosted OpenStack private cloud to enterprise clients around the world.

This partnership, developed in response to growing enterprise demand to consume open source infrastructure, OpenStack and Kubernetes, without the need to build in-house development or operations capabilities, will enable enterprise organizations to focus on strategic Digital Transformation initiatives rather than day to day infrastructure management.

This partnership, along with Unitas Global’s large ecosystem of system integrators and partners, will enable customers to choose an end-to-end infrastructure solution to design, build, and integrate custom private cloud infrastructure based on OpenStack. It can then be delivered as a fully-managed solution anywhere in the world, allowing organisations to easily consume the private cloud resources they need without building and operating the cloud itself.

Private cloud solutions provide predictable performance, security, and the ability to customize the underlying infrastructure. This new joint offering combines Canonical’s powerful automated deployment software and infrastructure operations with Unitas Global’s infrastructure and guest level managed services in data centers globally.

“Canonical and Unitas Global combine automated, customizable OpenStack software alongside fully-managed private cloud infrastructure providing enterprise clients with a simplified approach to cloud integration throughout their business environment,” explains Grant Kirkwood, CTO and Founder, Unitas Global. “We are very excited to partner with Canonical to bring this much-needed solution to market, enabling enhanced growth and success for our clients around the world.”

“By partnering with Unitas Global, we are able to deliver a flexible and affordable solution for enterprise cloud integration utilizing cutting-edge software built on fully-managed infrastructure,” said Arturo Suarez, BootStack Product Manager, Canonical. “At Canonical, it is our mission to drive technological innovation throughout the enterprise marketplace by making flexible, open source software available for simplified consumption wherever needed, and we are looking forward to working side-by-side with Unitas Global to deliver upon this promise.”
