14 May

Manage the Docker Swarm nodes – new Ansible 2.8 modules developed by me!

Ansible 2.8 introduces a huge update for Docker Swarm modules

There are several good things about open source projects. But there is one which makes the good projects grow quickly – if you miss a feature and have some programming skills, you can write your own extension and then propose the change to be merged into the official repository. This is how my journey as a community Ansible developer started.

Lately, I have been working more and more with Docker Swarm clusters and wanted to automate some tasks. Unfortunately, with Ansible 2.7 there were only three modules available – docker_swarm, docker_swarm_service and docker_secret. The tasks I wanted to automate required features such as reading the Swarm configuration (partially covered by docker_swarm), reading the node status, and changing the node configuration. I had three options – use the shell module and execute CLI commands locally on the remote host, wait for someone to provide the module, or write the module myself.

I may not be a professional developer, but I had a great teacher when I started coding in the last class of primary school, and I have some experience in programming. You can’t really be an engineer if you cannot write scripts or simple apps. I think it is no surprise that I decided to test myself in a professional project 😛

In the previous post, I presented the docker_swarm_facts module, which I co-authored. Let me now show you the Swarm node related modules fully developed and now maintained by me. All of them will be available in the Ansible 2.8 release in the next few weeks, but you can test them already using the devel branch code on GitHub!

Tell me about my Docker Swarm node…

Before you start changing the configuration of any device or software, you really should get and store some information about its current state. To read the essentials about a Docker Swarm node you need the docker_node_facts module. The module inherits all the default options of the Docker modules, like the TLS configuration. You can execute it locally using the Docker socket or remotely using the Docker API. Remember that Docker by default listens only on a local socket, so you need to reconfigure your daemon to support remote management. And one of the most important requirements – you need to run the module on a Docker Swarm manager node.

    - name: Get the swarm node facts
      docker_node_facts:
        self: yes
        docker_host: "tcp://172.10.10.10:2376"
        tls: true

By default, Ansible will open an SSH connection to the remote host and execute the module there, connecting to the local Docker socket. The host is usually an inventory entry. Using the API requires providing the URL of the management interface as the docker_host parameter. You can still execute the module on a remote host (the same one as in docker_host or a different one), but usually, in such a case, the play is set with connection: local and runs on the localhost. Understanding this difference is crucial for running any Swarm module and avoiding problems.
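
For example, a minimal sketch of a play that runs on the control machine and talks to the remote API (the manager address is the same placeholder as in the example above) could look like this:

- name: Read Swarm node facts over the Docker API
  hosts: localhost
  connection: local
  tasks:
    - name: Get the facts about the manager node itself
      docker_node_facts:
        self: yes
        docker_host: "tcp://172.10.10.10:2376"
        tls: true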

You also need to tell the module which Swarm nodes you want the manager to return information about. The default behavior is returning facts about all Swarm nodes. If you know the names of the nodes you are interested in, you can provide them as a list in the name parameter. Setting the self: yes option tells the module you want facts only about the node the module communicates with.
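
For example, a short sketch asking the manager only about selected nodes (the node names are hypothetical):

- name: Get the facts about selected Swarm nodes
  docker_node_facts:
    name:
      - swarm-node-01
      - swarm-node-02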

The module output matches the output of the docker node inspect CLI command. The nodes key in the returned structure contains a list of dictionaries, where each element matches the CLI command output for one node. Its structure may vary depending on the version of the Docker daemon and the Docker API – you need to check the documentation for details.
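
As a quick sketch of consuming that structure, you can register the result and loop over the nodes key (the Status.State key below assumes the docker node inspect layout mentioned above; connection options are omitted for brevity):

- name: Get the facts about all Swarm nodes
  docker_node_facts:
  register: output

- name: Show the state reported for each node
  debug:
    msg: "{{ item.Status.State }}"
  loop: "{{ output.nodes }}"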

Changing the node configuration

The docker_node module allows changing some parameters of the Swarm node configuration, like the node role or availability. The first parameter defines whether the node is a manager or a worker. Initially, you define the role when you add a node to the cluster. The availability defines how the Swarm cluster will use the node for work distribution. The three allowed states are active (accepting new containers), pause (operating the existing containers but not accepting new ones) and drain (not accepting new containers; existing ones will restart on another node). The last state is useful during upgrades or when you want to remove a node from the cluster.
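
For example, draining a node before maintenance or promoting it to a manager are one-task jobs (the hostname matches the examples below):

- name: Drain the node before maintenance
  docker_node:
    hostname: mynode
    availability: drain

- name: Promote the node to a manager
  docker_node:
    hostname: mynode
    role: manager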

The docker_node module also allows changing the labels assigned to the Swarm node. Labels are useful for management and automated work distribution. Labels are key-value pairs. The default module behavior is merging the labels provided in the playbook with those already assigned to the node, updating the value of a label if it is already defined.

- name: Merge node labels and new labels
  docker_node:
    hostname: mynode
    labels:
      key: value

To override the default behavior and replace the existing labels with the new ones, you must set labels_state: replace. To remove all assigned labels you also have to use this option, but without providing any new labels.
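
So a task like this (with the same example label as above) throws away the existing labels and assigns only the new set:

- name: Replace all node labels with a new set
  docker_node:
    hostname: mynode
    labels:
      key: value
    labels_state: replace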

- name: Remove all labels assigned to node
  docker_node:
    hostname: mynode
    labels_state: replace

If you want to remove only specific labels, you need to provide the list of label keys in the labels_to_remove parameter.
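
For example, removing a single label by its key:

- name: Remove a selected label from the node
  docker_node:
    hostname: mynode
    labels_to_remove:
      - key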

11 Mar

Using Docker Swarm? You’re gonna love Ansible 2.8!

Ansible 2.8 introduces a huge update for Docker Swarm modules

Ansible is kind of an icon among automation platforms. It is owned by RedHat but available as a free product under a public license, developed both by RedHat employees and the community. Docker itself is an icon of containerization. If you use containers, you know that automation is the key to simplifying management of a dockerized infrastructure. In Ansible 2.x many modules covering Docker operations have been introduced, but Docker Swarm was not really covered. However, this is going to change soon! There is a huge update coming with the Ansible 2.8 release!

Lately, I started missing some features in Ansible that would allow me to perform some operations on Docker Swarm clusters. And I try to avoid using the command or shell modules as much as possible – running CLI commands on a remote host is like asking for trouble. I decided to fill this gap myself, and as a result I can say I am now an Ansible community developer, author, and maintainer of a few modules – docker_swarm_facts, docker_node_facts, docker_host_facts, docker_node; and co-author of docker_swarm and the ansible.docker.swarm library.

In the next posts, I want to show you how those modules work. However, if you are looking forward to using them, I strongly advise you to give them a try now and report any bugs you find, so we can fix them before the Ansible 2.8 release. You can find them in the devel branch of the Ansible repository.

Read More

06 Jun

Dynamic inventory from GIT on AWX

In many deployments, people do not run Ansible playbooks directly via the command line. In the long term, it is not flexible and cannot provide proper permission granularity or trusted code control. So people use Ansible Tower or its free upstream version – AWX. Every playbook, no matter how you run it, needs an inventory definition. Sometimes you use static files with a list of hosts; other times you use a script that dynamically provides the inventory for the playbook. We then call it a dynamic inventory.

If you keep your Ansible project with all scripts and playbooks in a Git repository, you can import it to AWX. You can also maintain dynamic inventory scripts as part of your project and use them to build the AWX inventory.

Read More

23 Nov

Dynamic VIRL inventory for Ansible playbooks

Ansible is one of the powerful tools providing automation of recurring tasks. In the current world, it is impossible to manage infrastructure manually in an efficient way. Many people still do this, but the world has already changed and we need to progress, otherwise our business will be cost-ineffective. You can provide a static inventory – a list of the devices where you want to execute the playbook. But in dynamic environments, such as Cisco VIRL simulations, you don’t want to edit the inventory file manually. That is why I use a Python script that generates a dynamic VIRL inventory for Ansible playbooks for me.

Read More

28 Jul

AWS Lambda guide part IV – API Gateway and Lambda without S3

AWS Lambda Tutorial: I will show you how to create or import your Python application to Lambda, use an S3 bucket, add an S3 trigger for Lambda, and more!

It is time for some final tuning of my small certificate signing service. In previous parts, I showed you what the AWS Lambda service is and how to import a simple Python application into a serverless microservice. I also connected the Lambda function to the S3 storage service, where I put certificates and key files. Then I added a trigger to the function, so the Lambda function executes automatically every time someone uploads a new CSR file with a certificate request to the S3 bucket. Now I will show you not only how to make this function serverless but also storageless, using API Gateway. It is not a standard approach, but in some scenarios it might be interesting. So we will connect API Gateway and Lambda without an S3 backend for keys and certificates.

Read More

04 Jul

AWS Lambda guide part III – Adding S3 trigger in Lambda function

AWS Lambda Tutorial: I will show you how to create or import your Python application to Lambda, use an S3 bucket, add an S3 trigger for Lambda, and more!

This is the third part of the AWS Lambda tutorial. In previous chapters I presented the small Python app I created for signing certificate requests and imported it to the AWS Lambda service (check AWS Lambda guide part I – Import your Python application to Lambda). Then I modified the code so that, instead of using references to static local files, we can read from and write to an S3 bucket (check AWS Lambda guide part II – Access to S3 service from Lambda function). Now let’s move forward and add an S3 trigger to the Lambda function.

We can always execute a Lambda function manually, either from the web panel or using the CLI. We can also execute it from another application if required. But microservices are often triggered by events. In this article I will show you how to automatically sign a certificate using my Lambda function when a request file is uploaded to the S3 bucket. Let me show you how to program an S3 trigger in Lambda.

Read More

26 Jun

AWS Lambda guide part II – Access to S3 service from Lambda function

AWS Lambda Tutorial: I will show you how to create or import your Python application to Lambda, use an S3 bucket, add an S3 trigger for Lambda, and more!

In the previous chapter I talked a little about what AWS Lambda is and the idea behind serverless computing. Furthermore, I presented a small Python application I wrote to sign certificate requests using my CA authority certificate (how to create one you can find in my post How to act as your own local CA and sign certificate request from ASA). Then, after importing the sandboxed Python environment (required because of the non-standard library used for SSL; the whole procedure is described in my post How to create Python sandbox archive for AWS Lambda) and a small change in the code, we managed to execute it in Lambda. I also mentioned that we can use other AWS services in our code, for example access to the S3 service from Lambda.

As you remember, the initial version of my application had static paths to all files and assumed it could open them from folders on a local hard drive. If you run a function in Lambda, you need a place where you can store files. This place is AWS S3. In this chapter I show you how to use the S3 service in a function on Lambda. We will use the boto3 library, which you can install locally on your computer using pip.

Read More

15 Jun

AWS Lambda guide part I – Import your Python application to Lambda

AWS Lambda Tutorial: I will show you how to create or import your Python application to Lambda, use an S3 bucket, add an S3 trigger for Lambda, and more!

I lately started playing with AWS Lambda for a few reasons. I became interested in serverless architecture and ways to save money while running apps, and I finally wanted to learn Python. I’m a network engineer, not a software developer. I like cloud computing and see it as an important part of the market now. So that was an opportunity for me to learn something new. Now I want to share my knowledge with you and show you how to import your Python application to Lambda.

In my tutorial I want to show you that Lambda and programming are something interesting that you can use for everyday work, whatever you do. Of course, Lambda tutorials are already available on the Internet, but they show you how to make a new application from scratch. I want to show you how to import your own small Python application to Lambda, the required changes to the code, the Python environment, the testing approach, and finally how to expand it using other AWS services. This post is just the first chapter!

Read More

09 Jun

How to create Python sandbox archive for AWS Lambda

AWS Lambda and Python

AWS Lambda now contains 1067 Python libraries that we can use in our programs. The number is big and small at the same time. It should give us flexibility in writing apps, but at the same time it is a limitation – there are many non-standard libraries that are better replacements for the default ones. I will show you how to create a Python application sandbox and then a ZIP archive for AWS Lambda that will contain libraries not available by default, so you can use them in your serverless application.

Using this application, I’ve generated a list of available libraries for Python 2.7, and you can check the list here.

The idea of serverless applications is that we don’t have access to the operating system. We just run our code in its own sandbox. Therefore, we can’t simply install a new package if we miss one. The solution is to provide a ZIP archive with the code of our application and a Python environment that has all the non-standard libraries inside. Let me show you how to do this.

Read More