14 Nov

Should vendors ask for a license to enable the API?

My friend let me use one of his older, unused servers in his data center so I have a place to run some virtual machines for my automation projects. As you may expect, there is no SLA for this server, no redundancy, and limited storage. For me, it is still better to have a lab running 24/7 there than to use my desktop PC or pay for resources in the public cloud. The server runs the ESXi 6.0 hypervisor with a free license installed.

The latest release of the VMware modules for Ansible prompted me to develop a playbook that backs up a virtual machine from the datastore on a remote server to my home storage. This quite simple and straightforward idea led to some unexpected problems and questions: where should the limit of features available in free licenses be? Should automation, or at least some of its tasks, be blocked in free versions?
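
The playbook in the post is built around the new VMware modules. Purely as an illustration of the general idea, and not the actual approach from the post, a hedged sketch of copying a single VM file from a standalone ESXi datastore to local storage over the hypervisor's HTTP file interface could look like this (host, datastore and VM names are made up):

    # Illustrative sketch only: fetch one file belonging to a VM from the
    # datastore of a standalone ESXi host using its HTTP file interface.
    - name: Copy one VM file from the remote ESXi datastore
      hosts: localhost
      gather_facts: no
      vars:
        esxi_host: esxi-lab.example.com      # hypothetical ESXi host
        datastore: datastore1                # hypothetical datastore name
        vm_folder: lab-vm01                  # folder of the VM on the datastore
        vm_file: lab-vm01.vmx
      tasks:
        - name: Download the file over HTTPS
          get_url:
            url: "https://{{ esxi_host }}/folder/{{ vm_folder }}/{{ vm_file }}?dcPath=ha-datacenter&dsName={{ datastore }}"
            dest: "/backup/{{ vm_folder }}/{{ vm_file }}"
            url_username: root
            url_password: "{{ esxi_password }}"   # supplied elsewhere, e.g. from a vault
            validate_certs: no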

Read More
05 Nov

Ansible can’t read some facts from Juniper devices

Juniper automation with Ansible

It is really amazing how fast Ansible has been developing lately. Stable versions are released more often and contain more of the changes required by IT professionals. Many of them fill the gap between two worlds – developers and operations engineers. Unfortunately, some modules are not catching up as fast as they should, which causes problems when developing even simple tasks. I experienced this while working on a playbook example for my latest press articles for 'IT Professional' magazine. The default Ansible junos_facts module couldn't correctly read the JunOS version on some devices, usually those running older firmware releases. This can be a real problem if task execution depends on the firmware version of the router or switch.

Besides the official modules and the many roles available in the Ansible Galaxy repository, many vendors have developed their own modules and make them available for free. In many cases this should be considered a better, more secure approach, as long as the vendor repository is still maintained. In my situation it was the easiest workaround for my problem.
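
A minimal sketch of that workaround could look like this, assuming the Juniper.junos role from Ansible Galaxy is installed (the exact fact path may differ between role versions):

    # Hedged sketch: gather facts with Juniper's own module instead of the
    # core junos_facts module; connection details come from the inventory.
    - name: Gather facts using the vendor module
      hosts: junos_devices
      connection: local
      gather_facts: no
      roles:
        - Juniper.junos
      tasks:
        - name: Collect device facts with Juniper's module
          juniper_junos_facts:

        - name: Show the firmware version the module reports
          debug:
            var: ansible_facts.junos.version   # key layout may vary by role version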

Read More
08 Oct

REST API in VMware Workstation 15

VMware Workstation

If you have a small home lab or use virtualization on your desktop PC or laptop, you must have heard about VMware Workstation – a hosted hypervisor that runs on x64 versions of Windows and Linux. It is a really good product for all engineers and enthusiasts who do not have, or do not need, dedicated server-class hardware for their work. You can even run the ESXi hypervisor as a VMware Workstation virtual machine. What you could not do was manage the configuration and virtual machines in a programmable way – you had to do everything manually via the GUI. Not anymore! The gap is filled by the REST API in the VMware Workstation 15 release that hit the market in late September.

The REST API is limited to 20 operations, including the most essential ones, and matches the features of the API in VMware Fusion 10. It covers VM management, VM power management, as well as host and guest virtual networking. Let's take a quick look at how it works.
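
As a quick, hedged sketch of the idea (assuming the API service has been started with the bundled vmrest utility, listens on its default port 8697 and already has credentials configured), listing the registered VMs from an Ansible task could look like this:

    # Illustrative sketch: query the Workstation REST API with Ansible's uri module.
    - name: Query the VMware Workstation REST API
      hosts: localhost
      gather_facts: no
      tasks:
        - name: List registered virtual machines
          uri:
            url: http://127.0.0.1:8697/api/vms
            method: GET
            user: api-user                  # hypothetical credentials set up for vmrest
            password: api-password
            force_basic_auth: yes
            return_content: yes
          register: vm_list

        - name: Show the raw response
          debug:
            var: vm_list.json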

Read More
24 Sep

If you build containers, Alpine Linux is your friend

This post is related to Docker and automation

Every container image must start from a parent image or a base image (scratch). The parent image is the image you base your own image on. The base image is like a completely empty container that you need to fill with content. In most cases, though, you will use another image as a parent, and you want it to be as minimal as possible. Alpine Linux is your friend – remember this name and use it as much as possible.

Read More
28 Aug

Let Jenkins build the Jenkins image

Jenkins and Docker - flexible environment

A properly designed and implemented automation system requires its own infrastructure. You need components such as a code and configuration repository, agents responsible for executing the automated tests, acceptance testing software, and a tool to define the automation processes. A leader in the last category is Jenkins. You can use it to automate building, testing and deployment processes, and many of its features are available as plugins. In my previous post, I recommended running the infrastructure components as Docker containers, and that includes Jenkins as well. You can use the official images available on Docker Hub, but you will soon find that those images are missing many components, so you need to build your own. Of course not manually – let Jenkins build the Jenkins image!
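
The post describes letting Jenkins itself do the build; as a separate, hedged sketch of how such a custom image could then be wired into a compose file (directory names and tags are illustrative, and a local jenkins/ directory with a Dockerfile extending the official image is assumed):

    # Illustrative docker-compose.yml: run Jenkins from a locally built image so
    # that missing plugins and tools can be baked in, instead of using the stock image as-is.
    version: '3'
    services:
      jenkins:
        build: ./jenkins                     # assumed directory with a Dockerfile based on jenkins/jenkins:lts
        image: local/custom-jenkins:latest   # illustrative tag for the built image
        ports:
          - "8080:8080"
          - "50000:50000"
        volumes:
          - jenkins_home:/var/jenkins_home   # persist jobs and configuration
    volumes:
      jenkins_home: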

Read More
20 Aug

Run Jenkins in the container

Jenkins and Docker - flexible environment

The more you work with automation, the more you will like containers. They fit and scale well in the CI/CD model and can be easily managed. The whole automation infrastructure should be flexible, easy to maintain and extendable – containers fit perfectly into this model. So why not start by putting Jenkins in a container?
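
A minimal sketch of that starting point, using the official image from Docker Hub (the ports and volume name below are just the common defaults):

    # Minimal docker-compose.yml sketch: run the official Jenkins image with
    # persistent storage for its home directory.
    version: '3'
    services:
      jenkins:
        image: jenkins/jenkins:lts
        ports:
          - "8080:8080"       # web UI
          - "50000:50000"     # inbound agent connections
        volumes:
          - jenkins_home:/var/jenkins_home
        restart: unless-stopped
    volumes:
      jenkins_home: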

Read More
05 Jul

Repository with Docker templates

Don't be afraid of the source code

Everyone who has worked on any project knows that sites like GitHub, GitLab or StackOverflow are full of open-source examples, functions, issue solutions and so on. I have used them as well. It is not obligatory, but I feel it is fair to share something I create that may be useful for others. After all, I know what I was looking for in the past. It does not matter whether we are creating a Python script, a Java application or Docker templates.

I decided to create a small project called DockerTemplates. At the moment, the repository contains docker-compose files for various products I set up using Docker containers. I prefer using docker-compose for everything, even if a project consists of only one container; I find it the easiest way to maintain a consistent approach across multiple systems or projects. You will find the docker-compose configuration files in folders named after the application, and in each folder there is separate documentation describing the configuration details implemented in each file.

The initial release of this repository contains configuration files for the following applications:

  • AWX
  • Jenkins
  • GitLab-CE
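
For example, a sketch of a GitLab-CE service definition (not necessarily the repository's actual file; the hostname and port mappings are illustrative) might look like this:

    # Illustrative docker-compose.yml for GitLab-CE with persistent configuration,
    # log and data directories.
    version: '3'
    services:
      gitlab:
        image: gitlab/gitlab-ce:latest
        hostname: gitlab.example.com
        ports:
          - "80:80"
          - "443:443"
          - "2222:22"        # remap SSH so it does not clash with the host
        volumes:
          - gitlab_config:/etc/gitlab
          - gitlab_logs:/var/log/gitlab
          - gitlab_data:/var/opt/gitlab
        restart: unless-stopped
    volumes:
      gitlab_config:
      gitlab_logs:
      gitlab_data: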

The repository accepts pull requests from any registered GitLab user, so it is easy to participate and contribute your own configuration. Please check the requirements described in the CONTRIBUTING.md guideline. You can also email files directly to me or leave a comment.

18 Jun

Automated scripts can send commands faster than the RP can process them

Juniper automation with Ansible

When I was writing one of my Ansible playbooks I faced an interesting situation. We all keep forgetting that automated tasks execute commands faster than we do from the CLI, and we do not take this into consideration when we write playbooks. Time is only a problem when playbook execution takes too much time.

On a Juniper SRX you can enable VPN debugging using the traceoptions feature for the IKE and IPsec processes. By default, it stores logs for all VPNs configured on the device either in the /var/log/kmd file or in the one specified in the traceoptions configuration. JunOS 11.4R3 brought an additional enhancement that limits debugging to a single VPN tunnel specified by local and peer IPs. You can turn it on with the request security ike debug-enable command. This is very handy because in most cases you want to troubleshoot just the non-operational connection.

There is a slight difference between the output of those two methods in the log file. If you use traceoptions, you will see pretty much the standard Juniper log output:

[Mar 25 13:12:12]IPSec SA done callback called for sa-cfg VPN-V201 local:10.0.201.3, remote:10.0.201.4 IKEv1 with status Timed out

If you decide to troubleshoot using the second method, you will get output like this:

[Mar 25 13:20:12][10.0.201.3 <-> 10.0.201.4] IPSec SA done callback called for sa-cfg VPN-V201 local:10.0.201.3, remote:10.0.201.4 IKEv1 with status Timed out

The log now contains both peers' IP addresses. This is really handy for my playbook: because one of the tasks needs to extract specific information from the log output, I can easily identify the interesting lines using the peer IP addresses.

The Playbook

There are two tasks in my playbook, divided by a 60-second pause. In the first task, I send a set of three commands to the router; I use the juniper_junos_command module here, but it works the same with the module available in Ansible. I send the three commands together to save time, and I also provide the variables as parameters – they are gathered in previous tasks.

    - name: Prepare and start VPN debugging on remote device
      juniper_junos_command:
        commands:
          - "request security ike debug-disable"
          - "clear log VPN"
          - "request security ike debug-enable local {{ sa_tunnel_local_gateway }} remote {{ sa_tunnel_remote_gateway }}"
        format: xml
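
The 60-second pause between the two tasks is not shown in the snippets; a minimal version of it, using Ansible's built-in pause module, could look like this:

    - name: Wait for the VPN debug logs to be generated
      pause:
        seconds: 60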

Then, after the one-minute pause required to generate the logs, I run another task where I simply read the log file via a CLI command.

    - name: Collect VPN logs
      juniper_junos_command:
        commands:
          - "show log VPN"
        format: xml
      register: result_VPN_log_xml

The problem

If you run those commands in this order from the CLI, you will get logs from troubleshooting the single VPN specified by the IP parameters. But if you execute them from the Ansible playbook, you will not get the peer-specific log output. It looks like the request command is ignored by the router. If you enable command logging on the router, you will see the commands arriving via NETCONF as well, so they do reach the device. You can also check the debugging status directly from the router CLI. If no peer-specific debugging is running, you will get the following output:

show security ike debug-status
Enabled
flag: all
level: 5

When the debug is running, you will get information about its level and the peers' addresses:

show security ike debug-status
Enabled
flag: all
level: 7
Local IP: 10.0.201.3, Remote IP: 10.0.201.4

Because the delivery of the commands themselves was not a problem, one of the workarounds could be a playbook where I send only one command in each task. This workaround works, but it is not optimal. Remember that a connection to the router is opened for each task and closed when the task completes. It takes time, and the more commands you want to send, the longer it takes to finish the playbook.

Don't send commands too fast

I asked JTAC for help with this case. It turned out not to be a standard customer problem: it was verified in their lab and at some point the Engineering team was asked for support. I received the answer I like best – "it is not a bug, it is a feature"!

Because there is no output when you execute any of the three commands from the first task, the Ansible module sends the next command almost without any gap after the previous one. As JunOS does not acknowledge that it has finished the first request command, the second one comes in and gets ignored because the first one is not finished yet. Remember, you can never issue commands via the CLI as fast as a script can send them.

I still want to send multiple commands in one task to save time, so here is another way to mitigate the problem:

    - name: Prepare and start VPN debugging on remote device
      juniper_junos_command:
        commands:
          - "request security ike debug-disable"
          - "clear log VPN"
          - "show system uptime"
          - "request security ike debug-enable local {{ sa_tunnel_local_gateway }} remote {{ sa_tunnel_remote_gateway }}"
        format: xml

To slow down the task a little, I put an additional command before the previously ignored request command. It does not take much time to execute, and it generates output that I simply ignore.

There is no documentation on how long the gap between commands should be, and it will likely differ depending on the platform.

 

06 Jun

Dynamic inventory from GIT on AWX


In many deployments, people do not run Ansible playbooks directly from the command line. In the long term, this is not flexible and does not provide proper permission granularity or trusted code control. So people use Ansible Tower or its free counterpart – AWX. Every playbook, no matter how you run it, needs an inventory definition. Sometimes you use static files with a list of hosts; other times you use a script that dynamically provides the inventory for the playbook. We then call it a dynamic inventory.
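
For contrast, the static case is just a file with a list of hosts. A small sketch in Ansible's YAML inventory format (host names and addresses are made up) could look like this:

    # Hypothetical static inventory file - the simplest case, before replacing it
    # with a dynamic inventory script kept in the same project.
    all:
      children:
        routers:
          hosts:
            edge-r1:
              ansible_host: 192.0.2.1
            edge-r2:
              ansible_host: 192.0.2.2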

If you keep your Ansible project with all its scripts and playbooks in a GIT repository, you can import it into AWX. You can also maintain dynamic inventory scripts as part of the project and use them to build the AWX inventory.

Read More

07 May

Why I use VMware Photon OS for Docker

The Photon OS Project provides a lightweight, container-oriented operating system you can run as a VM

One of my readers asked me what platform I use for my Docker containers in the lab. He assumed it is one of the public cloud platforms providing such a service. This is not true. I do run a few containers in the public cloud, but it is not cost-optimal for lab tests, especially given the way I sometimes work, when there might be a gap of a few hours between tests. I run all the containers locally on my own infrastructure, which does not consist of a few racks in my basement – actually, it is very small, so it has to be resource-optimal. I decided that, for now, the VMware Photon OS Project is the best solution for me.

Read More