Category Archives: Work

Work-related posts - Linux, Fedora, RHEL, CentOS, Docker... all things IT :)

Containers: Resurrection vs. Reincarnation

This post is to share that I am coining a new analogy about containers, cloud, and orchestration. Everyone uses Pets vs. Cattle; some use Ants vs. Elephants. These animal analogies are fine, and they describe how YOU behave toward your machines/containers when something bad happens.

But it's also important to look at things from the other side - what actually happens to the app? How does it feel? How does it perceive the world around itself?

Resurrection

In the good old Mode 1 world, things were simple. Some app died, so you went and resurrected it back to life. Sure, the PID was different, but it was the same machine, same environment, same processes around it. Feels like home...

Reincarnation

Then the cloud and container world appeared and people realised they don't want to bring dead things back to life (it might have something to do with all the scary zombie movies, I think). And so in container orchestration you just get rid of things that appear to be dead and bring new ones to life. Your app is reincarnated instead of resurrected.

Resurrection vs. Reincarnation.

Reincarnation is not completely new in the IT world - it was already used in MINIX many years ago :). But I am coining this new analogy for the containers context. Obviously, it's up to you now to share the wisdom and make sure people know who was the original prophet!

Forget resurrection, reincarnation is the way to go!


Kubernetes Persistent Storage Hell

We've started to work on a rather complex application recently with my team at Red Hat. We all agreed it would be best to use containers, Kubernetes, and Vagrant to make our development (and testing) environment easy to set up (and to be cool, obviously).

Our application consists of multiple components; the ones important for this post are MongoDB and something we can call a worker. The reason for MongoDB is clear - we are working with JSONs and need to store them somewhere. A worker takes data, does some work on it, and writes the result to the DB. There are multiple types of workers and they need to share some data. We also need to be able to scale (that's why we use containers!), which also requires shared storage. We want both storage locations to be local paths (especially for the Vagrant use case).

Sounds easy, right? But it's not. Here is the config object situation:

kubernetes/worker-volume.yaml
kubernetes/worker-claim.yaml
kubernetes/mongo-volume.yaml
kubernetes/mongo-claim.yaml

The way you work with volumes in Kubernetes is that you define a PersistentVolume object stating capacity, access mode, and host path (still talking about local storage). Then you define a PersistentVolumeClaim with access mode and capacity. Kubernetes then automagically maps these two - i.e. it randomly matches a claim to a volume that provides the correct mode and enough capacity.
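To make the mechanism concrete, here is a minimal sketch of such a pair (the capacity and access mode are made-up values; the claim name and host path are the ones that appear later in this post):

apiVersion: v1
kind: PersistentVolume
metadata:
  name: mongo-volume
spec:
  capacity:
    storage: 5Gi
  accessModes:
    - ReadWriteOnce
  hostPath:
    path: "/media/mongo-data"
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: myclaim-1
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 5Gi

Notice there is no reference from the claim to a particular volume - that is exactly what bites us below.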

You might be able to see the problem now, but if not, here it is: if you have 2 volumes and 2 claims (as we have), there is no way to be sure which claim will get which volume. You might not care when you first start your app, because the directories you provided for the volumes will probably be empty. But what if you restart the app? Or the Vagrant box (and thus the app)? You cannot be sure which volume will be assigned to which claim.

This leads to an inconsistent state where the volume meant for MongoDB can be assigned to the worker's claim and vice versa.

I've found 2 related issues on GitHub - https://github.com/.../issues/14908 and https://github.com/.../pull/17056 - which, once implemented and merged, should fix this. But is there a workaround?

Hell yeah! And it's pretty simple. Instead of defining a PersistentVolumeClaim object and using the persistentVolumeClaim key in a replication controller, you can use hostPath directly in the RC. This is what the patch looked like:

diff --git a/kubernetes/mongodb-controller.yaml b/kubernetes/mongodb-controller.yaml
index ffdd5f3..9d7bbe2 100644
--- a/kubernetes/mongodb-controller.yaml
+++ b/kubernetes/mongodb-controller.yaml
@@ -23,5 +23,5 @@ spec:
           mountPath: /data/db
       volumes:
         - name: mongo-persistent-storage
-          persistentVolumeClaim:
-            claimName: myclaim-1
+          hostPath:
+            path: "/media/mongo-data"
diff --git a/kubernetes/worker-controller.yaml b/kubernetes/worker-controller.yaml
index 51181df..f62df47 100644
--- a/kubernetes/worker-controller.yaml
+++ b/kubernetes/worker-controller.yaml
@@ -44,5 +44,6 @@ spec:
           mountPath: /data
       volumes:
         - name: worker-persistent-storage
-          persistentVolumeClaim:
-            claimName: myclaim-2
+          hostPath:
+            path: "/media/worker-data"

The important bits of the Kubernetes config then look like this:

...
   volumeMounts:
     - name: mongo-persistent-storage
       mountPath: /data/db
 volumes:
   - name: mongo-persistent-storage
     hostPath:
       path: "/media/mongo-data"
...

Mapping service ports to nodes in Kubernetes

Kubernetes is a great project and cool/hot technology. Although it made me hate JSON (and YAML), I still enjoy exploring the possibilities it brings to application deployment.

It's also the base for an even more awesome project called OpenShift (*cough* shameless plug included *cough*).

Anyway, I ran into a problem where I needed to expose port(s) of my application to the outer world (i.e. from the Vagrant box to my host) and struggled to find a solution quickly.

Normally, when you are on the machine where Kubernetes runs, you'd do something like this:

[vagrant@centos7-adb ~]$ kubectl get services | grep flower
flower-service component=flower app=taskQueue,component=flower 10.254.126.210 5555/TCP

IOW, I just listed all running services and grepped for flower. I can now take the IP and port from there and use curl to get the contents provided by that service. This uses the Kubernetes virtual network to get to the endpoint.

I can also do this:

[vagrant@centos7-adb ~]$ kubectl get endpoints | grep flower
flower-service 172.17.0.7:5555

which gets me directly to the container's IP and port.

But this all happens in my Vagrant box (as you can see from the CLI prompt). This setup is good for places like Google Cloud or AWS where you get load balancing and port forwarding for free. But what if I just want to access my app on the VM IP address?

Well, you take your Kubernetes service config/artefact/JSON/YAML and modify it a bit. By default, Kubernetes services are set to the "ClusterIP" mode, where they are accessible only in the ways shown above. You'll want to change the type to "NodePort".

This will "use a cluster IP, but also expose the service on a port on each node of the cluster (the same port on each node)" according to docs.

apiVersion: v1
kind: Service
metadata:
  labels:
    component: flower
  name: flower-service
spec:
  type: NodePort
  ports:
    - port: 5555
      nodePort: 31000
  selector:
    app: taskQueue
    component: flower

By default, type NodePort will give you a random port in the range 30000-32767. You can also pick a specific port from this range (as you can see above). Well, that's it. You only need to know the IP of the machine and the given/specified port.

[vagrant@centos7-adb vagrant]$ kubectl describe service flower-service | grep "^NodePort"
NodePort: <unnamed> 31000/TCP
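With that, I can reach the service from my host through the VM's address (the IP below is a placeholder - use whatever address your Vagrant box got):

$ curl http://$VM_IP:31000/  # $VM_IP = your Vagrant box's address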

This is particularly useful when you are developing (with a VM, as in the use case described above), or if you have a testing instance in the cloud (where load balancers are not available) and want to expose the app easily without having to fiddle with too many other pieces.

What the hell does Nulecule try to solve?

There are some statements on the Project Atomic website about Nulecule. There is also some information in the Nulecule GitHub repository. I've spent a big portion of DockerCon explaining Nulecule to various people, and here is what my explanation boiled down to:

1. Parameterization

As a deployer of a multi-container application, I want a simple way to parameterize orchestration manifests.

  • e.g. if you look at kubernetes/examples, you can see "change this and that in the JSON configs to make this app work in your environment" in every other example.

2. Reusability

As a developer of a multi-container application, I want to use existing components (like databases) in my application so that I don't have to burn my time on something that already exists.

  • e.g. as someone creating a voting app (borrowing from DockerCon :) ), I want to use Redis and PostgreSQL components and only add a frontend and a worker to the equation.

3. Multiple Orchestration Providers

As a developer of a multi-container application, I want to enable deployment to multiple orchestration providers for my application so that users can easily migrate.

4. Distribution

As an enterprise consumer of a multi-container application, I want to avoid any out-of-band transport layer for the application and its metadata.

  • e.g. I use Docker images and have a private registry set up. Instead of having to set up another authenticated webserver and figure out how to verify what I pull as tarballs or plain text, I will package every piece of the puzzle into a Docker image and distribute it through the registry.
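To make this less abstract, here is a rough sketch of what a Nulecule file ties together: parameterized components with per-provider artifacts, all shipped inside a Docker image. The field names follow the spec as I remember it and all the values are made up, so treat it purely as an illustration of the four points above:

---
specversion: 0.0.2
id: voting-app

metadata:
  name: Voting App sketch

graph:
  # A reused component, pulled in as another containerized app (points 2 and 4)
  - name: redis
    source: "docker://registry.example.com/some/redis-atomicapp"

  # Our own component with parameterized manifests (point 1)
  - name: frontend
    params:
      - name: db_host
        default: redis
    artifacts:
      # One set of artifacts per orchestration provider (point 3)
      kubernetes:
        - file://artifacts/kubernetes/frontend-service.yaml
        - file://artifacts/kubernetes/frontend-rc.yaml
      openshift:
        - file://artifacts/openshift/frontend-template.yaml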

The next and very valid question is: "How well did we tackle these challenges?" That's up to you to figure out and tell us, and ideally help us fix the bits that you struggle with.

Fedora Developer Portal – how to contribute

The great thing about working at Red Hat is being part of the Fedora community, and as a part of this group of enthusiastic people you get close to cool, interesting, awesome, innovative, important... projects. And you get close to them while they are at the beginning, when you can still influence where they go.

One of those projects is the Fedora Developer Portal, which will help developers start their projects on Fedora. It will help you figure out what languages, frameworks, or databases are available in our distribution, and how to use Docker, Vagrant, or the Copr build system to package, distribute, and deploy your projects. There is already content ready to help you with setting things up for Arduino. More content is in preparation, and even more is waiting for you to come and join the project!

My contribution to the project so far has been making it easy for you to contribute. I helped the guys with the contribution guidelines and I created a Docker image which lets you run the website locally so that you can review your contributions.

This is what you can also find in the README.md of the website project:

$ sudo docker run -it --rm developerportal/devel
[sudo] password for vpavlin: 
Previous HEAD position was 702f2a3... move logo to static directory
Switched to branch 'master'
Your branch is up-to-date with 'origin/master'.
Already up-to-date.
Configuration file: /opt/developerportal/website/_config.yml
/home/dp/.gem/ruby/gems/jekyll-lunr-js-search-0.3.0/lib/jekyll_lunr_js_search/version.rb:3: warning: already initialized constant Jekyll::LunrJsSearch::VERSION
/opt/developerportal/website/_plugins/jekyll_lunr_js_search.rb:245: warning: previous definition of VERSION was here
 Source: /opt/developerportal/website
 Destination: /opt/developerportal/website/_site
 Generating... 
 Lunr: Creating search index...
 Build Warning: Layout 'page' requested in content/fedora_features/Fedora23_Self_contained_Changes.md does not exist.
 Build Warning: Layout 'page' requested in content/fedora_features/Fedora23_System_Wide_Changes.md does not exist.
...
 Lunr: Index ready (lunr.js v0.4.5)
 done.
 Auto-regeneration: enabled for '/opt/developerportal/website'
Configuration file: /opt/developerportal/website/_config.yml
 Server address: http://172.17.0.5:8080/
 Server running... press ctrl-c to stop.

You can see a server address - that's what you need to copy to your browser to view the page.


Now you have the local dev instance running. What if you want to display your changes? First, you clone the content repository.

$ git clone https://github.com/developer-portal/content

Then you will have to modify the run command a bit - specifically, add a volume mount (replace $PWD/content with the path to the cloned content repository):

$ sudo docker run -it --rm -v $PWD/content:/opt/developerportal/website/content developerportal/devel

OK, now what if you don't want to contribute to the content of the portal, but rather want to help the guys make the website itself awesome? The approach is the same as above. First, you clone the website repository.

$ git clone https://github.com/developer-portal/website

Then you run the container with the mount for the website instead of the content.

$ sudo docker run -it --rm -v $PWD/website:/opt/developerportal/website developerportal/devel

Jekyll is used to render the website and its content, and it's set up so that whenever you edit any file, the website re-renders itself and you can simply refresh the browser when it's finished.

The rest is easy - you change whatever you want, push to your fork on GitHub, and submit a pull request. Once it's reviewed, your changes will appear on the web. Yay!

Ok, my job is done here. Now it's your turn to contribute and promote it further!:)

How to (be a) man on Atomic Host

One major thing missing on Atomic Host is manual pages. Not a terrible thing - you can always google for them, right? But what if you cannot? Then there is the Fedora Tools Docker image. Try this:

-bash-4.3$ alias man="sudo atomic run vpavlin/fedora-tools man"
-bash-4.3$ man systemd

You should see the manual page for systemd. Thinking about it, that's it. Nothing more you need to know about it. Simple :)


Running git on Atomic Host with Fedora Tools image

I added the Fedora Tools image to the Fedora-Dockerfiles repository, as you might know from my earlier post. I'd like to introduce you to one use case for this image - git.

When I started to work more on Docker images, I started using Atomic Hosts for testing, as they boot faster and are easier to set up than classic installations. The problem was getting data into those VMs running Atomic Host, as git was not present. That's where I first really appreciated the tools image.

bash-4.3# yum
bash: yum: command not found
bash-4.3# git
bash: git: command not found
bash-4.3# atomic run fedora/tools
[root@localhost /]# cd /host/home/vagrant/
[root@localhost vagrant]# git clone https://github.com/fedora-cloud/Fedora-Dockerfiles
Cloning into 'Fedora-Dockerfiles'...
remote: Counting objects: 2189, done.
remote: Compressing objects: 100% (9/9), done.
remote: Total 2189 (delta 3), reused 0 (delta 0), pack-reused 2180
Receiving objects: 100% (2189/2189), 915.13 KiB | 901.00 KiB/s, done.
Resolving deltas: 100% (1014/1014), done.
Checking connectivity... done.
[root@localhost vagrant]# exit
bash-4.3# ls
Fedora-Dockerfiles sync

It's simple, right? You can see there is neither yum/dnf nor git on the host, but still, I was able to clone the repository from GitHub very easily. The important thing to notice is the path I cd'ed into: /host/home/vagrant. You can see the /host prefix there. That's where the host's filesystem is mounted and where I can access and modify it.

You can review the docker run command for the tools image, e.g. with this command:

bash-4.3# docker inspect --format='{{.Config.Labels.RUN}}' vpavlin/fedora-tools
docker run -it --name NAME --privileged --ipc=host --net=host --pid=host -e HOST=/host -e NAME=NAME -e IMAGE=IMAGE -v /run:/run -v /var/log:/var/log -v /etc/localtime:/etc/localtime -v /:/host IMAGE

Obviously, you can do more than just clone the repo - you can run commit, push, checkout, or anything else the same way.

Fedora Tools Docker image

I got a request from my colleagues asking if there is something like the Red Hat Atomic Enterprise Tools container image available for Fedora or CentOS. The answer was no, there isn't, so I started to work on it. I'd like to tell you what it is and why I invest my time in it.

First of all, the Fedora Tools image is meant to be used mostly on Atomic Host, as there is no way there to install missing tools with yum or dnf. We could create tons of small images, each containing a single tool. But that would a) make it hard for users to find all the tools, b) consume more space than a single image if you decide to use many (or all) of them, and c) be hard to maintain.

These 3 reasons led us to create a single image containing a big number of tools important to sysadmins, performance analysts, or just users who need man pages on Atomic Host. This image is pretty big (more than 1 GB), but it can be pretty useful.

The current version of the Dockerfile can be found in the Fedora-Dockerfiles repository. You can find the list of additional packages (beyond what's already in the base image) starting on line 13.
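If you are curious what the Dockerfile boils down to, here is a stripped-down sketch - the package list below is hypothetical, so check the real file for the full set:

FROM fedora

# A few examples of the kind of tools the image carries (illustrative, not the real list)
RUN dnf install -y man-db man-pages git strace tcpdump sos && dnf clean all

# The RUN label tells the atomic command how to start the container
# (shortened here - see the full label in the docker inspect output in the git post above)
LABEL RUN="docker run -it --name NAME --privileged -v /:/host IMAGE"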

The basic information on how to use the Fedora Tools Docker image can be found in the README file, and I hope to provide more how-tos here soon :).

I've set up an automated build as vpavlin/fedora-tools under my namespace on Docker Hub. To try the image, you can do:

atomic run vpavlin/fedora-tools

Enjoy;-)

How I almost blew InstallFest 2015

You know how it goes: lots of work, a bit of inattention, and invitations to several events (almost) at once. This cocktail of circumstances meant that for quite a long time I was convinced InstallFest 2015 was happening next weekend (i.e. March 14 and 15). So there you are on Thursday evening, scrolling through Twitter, and suddenly there's a mention that the badges are already prepared for Saturday. Saturday? Like, this Saturday? Hmm...


And indeed it was! Oh well, you go to bed telling yourself: "I'll do the slides tomorrow at work, it'll take no time." Except at work somebody keeps bugging you, wanting something, so you get nothing done. Fine then, at home, in the evening. Except you head out for food and beer instead. Well, at least you buy your ticket right after that ;). OK then, the slides will get hacked together in the morning before departure. In the morning you crawl out of bed, stare at an empty presentation, and ask yourself: "What did I actually want to tell these people when I submitted this talk?" You slap something together and then spend half the trip polishing it and wondering what was going through your head that morning.

So, slides sorted - what about a demo? Hmm, knowing how these things go, there won't be time for a demo, and if there is, it will end up being a different one anyway, depending on the questions. Hardly worth preparing ;). And I was right, it wasn't!

Clapperboard, action...

"Dovolte mi, abych vás přivítal na své přednášce. Na úvod se vás chci zeptat...ale co to plácám. Tak já se asi představím, co?" Jak vidíte, začal jsem zkušeně, tedy chci říct zmateně. Ovšem tu otázku jsem položil: "Kdo jste slyšeli o Dockeru před tím, než jste si přečetli název téhle přednášky?" Skoro všichni, fajn. "Kdo jste si ho nainstaloval?" zněla další otázka - asi 4 ruce. Uff, to zase budu plácat kraviny. Tak a poslední dotaz: "Ok, kdo jste používali kontejnery ještě před Dockerem?" Tři ruce, sakra, tak tyhle lidi ignorovat, když se budou ptát..ti jsou určitě chytří a ví toho víc než já!

As I suspected at the beginning, there was no time for a proper demo. After all, my only "proper" demo is the one I described in the article Running services with Docker and systemd. So take a look at it and try the demo yourselves ;) Maybe it will break for you too, as it surely would have for me.

Also, as has become tradition, the talk devolved into a Q&A session where I got tricky questions and provided essentially unrelated answers. (I'm getting better and better at it!) But I have to say I enjoyed it with you, InstallFest folks. We had a nice chat. And on top of that, none of you asked me about networking, which I really appreciate!

Honestly, the slides alone probably won't tell you much, but here they are - there are some links at the end, so maybe those will be useful. The talk was apparently being recorded, so as soon as the video is available, I'll add it here. And now good night - I'm going to watch a movie, now that Student Agency has finally refreshed their selection, and I'll probably take a nap too. Thanks once more for coming!

EDIT:

As I found out, the video was streamed directly to YouTube, so here is the recording of the talk:

My Docker Helpers

I work with Docker almost all the time in my job at Red Hat. Building, running, inspecting containers... Writing the same long commands every time you want to run a container or get its IP starts to annoy you quickly. That's why I started writing small helpers in the form of bash functions which are loaded through .bashrc and thus can be used from the command line easily.

You can find them in my docker-tools repository, but let me introduce them a bit.

docker-rmi-none

If you load/import/build images often, you end up having a bunch of <none>-named images in your docker images output. The above command removes them all.
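It boils down to something like this - a sketch, the real function lives in the docker-tools repo:

docker-rmi-none() {
    # Dangling images are exactly those shown as <none>; list their IDs and remove them
    docker rmi $(docker images --filter "dangling=true" -q)
}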

docker-rm-all

I use this mostly in VMs where I am limited in terms of disk space - every container, especially when you test e.g. whether yum install works, eats some space, and this command lets you remove them all quickly.
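Again, nothing fancy under the hood - roughly:

docker-rm-all() {
    # Force-remove all containers, running or stopped
    docker rm -f $(docker ps -aq)
}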

$ dr fedora
docker run --name tmp0 -it --rm fedora bash
$ dr fedora cat /etc/os-release
docker run --name tmp0 -it --rm fedora cat /etc/os-release
NAME=Fedora
...

This dr command is probably my favourite. It runs bash in the given image with the arguments I use the most. You can also specify a command to run if you wish.
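The function itself can be as simple as this sketch (not the exact code from the repo):

dr() {
    local image=$1
    shift
    # Default to bash when no command is given
    [ $# -gt 0 ] || set -- bash
    docker run --name tmp0 -it --rm "$image" "$@"
}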

dl [PATH_TO_]IMAGE

A simple alias for the docker load command, with the advantage of being able to load from a default directory - you can just give it a file name and it looks in the predefined folder.
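A sketch of how that can look - the default directory here (~/docker-images) is just an assumption, pick your own:

dl() {
    local file=$1
    # If the argument is not an existing file, look in the predefined folder
    [ -f "$file" ] || file=~/docker-images/"$file"
    docker load -i "$file"
}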

de [CONTAINER] [CMD]

The most awesome thing about using functions instead of just aliases is that you can add whatever logic you like. So my de command (representing docker exec) can be called with a container id/name and a command - same as docker exec. But it can also be called without a command, which then defaults to bash, and also without a container id/name, which defaults to the last entry in the docker ps output. If you want to skip specifying the container but still want to use a different command than bash, use the following syntax:

de "" rpm -qa
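Put together, the logic looks roughly like this (again a sketch, not the exact code from the repo):

de() {
    local container=$1
    [ $# -gt 0 ] && shift
    # No container given? Take the last entry in the docker ps output
    [ -n "$container" ] || container=$(docker ps -q | tail -n 1)
    # No command given? Default to bash
    [ $# -gt 0 ] || set -- bash
    docker exec -it "$container" "$@"
}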

I don't use the next command as often as those above, but I still like it a lot - it lets you print the IP address of any container. If the container id/name is not specified, it uses the same logic as de.

di [CONTAINER]
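A minimal version, assuming the same fallback logic as de:

di() {
    # Default to the last entry in the docker ps output, as de does
    local container=${1:-$(docker ps -q | tail -n 1)}
    # Print the container's IP from its inspect data
    docker inspect --format '{{.NetworkSettings.IPAddress}}' "$container"
}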

The last command I have on my list at the moment is dk, and you can maybe guess - yes, it's docker kill, and it provides the same fallback logic as the two above.

dk [CONTAINER]

Do you have more aliases/ideas? Let me know - I am happy to make my list richer!