
Red Hat Ansible Tower Performance Improvements between 3.6 and 3.7


As one of our customers pointed out, "job events are not showing in Tower UI", a significant performance issue for users trying to follow job status updates. To make viewing real-time job status updates in Red Hat Ansible Tower more responsive, we applied the following performance improvements.

 

Performance Improvements

Between the 3.6 and 3.7 releases, there have been significant performance advancements to improve event processing, job running performance and the user interface. This work was done in conjunction with our customers and the Red Hat Scale and Performance team. These include:

  • Added notable performance improvements to event processing to drastically speed up stdout ingestion.
  • Updated Ansible Tower to no longer rely on RabbitMQ for clustering and event distribution. Redis is added as a new dependency for event handling.
  • Improved performance in the User Interface for various job views when many simultaneous users are logged into Ansible Tower.
  • Improved job run performance and the write speed of stdout for running playbooks and parallel jobs through optimization of the job dependency/scheduling algorithm.
  • Fixed event processing for inventories with very large numbers of hosts to prevent Ansible Tower slow down.
  • Improved running jobs to no longer block associated inventory updates.

Setup & Performance Tests Covered

To better understand the delay between job run time and job results processing time, and to run performance benchmarks, we created an HA Tower cluster setup comparable to an average production setup.

Setup

 

                Node-A                           Node-B
Tower Version   Red Hat Ansible Tower v3.6.2     Red Hat Ansible Tower v3.7.1
CPU             X5650 @ 2.67GHz 24 Core          X5650 @ 2.67GHz 24 Core
RAM             98GB                             98GB

* VM images are on separate dedicated disks

For proper benchmarking, we needed a number of job events large enough to compare the performance improvements made between the versions. To reach that goal, we generated 100k job events by creating an inventory file with 100 fake hosts (with “ansible_connection: local”) and a job that generated 100 events per host. Launching 10 such jobs concurrently over that inventory produced 100k events, which was sufficient for our needs.
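To illustrate the approach, here is a minimal sketch of such an inventory and event-generating playbook. The host names, file names and the debug loop are our own assumptions for illustration; they are not the exact files used in the benchmark:

# fake_hosts.yml - inventory with 100 local "fake" hosts (hypothetical names)
all:
  hosts:
    host[001:100]:
  vars:
    ansible_connection: local

# generate_events.yml - produces roughly 100 job events per host
---
- hosts: all
  gather_facts: false
  tasks:
    - name: Emit many small job events
      debug:
        msg: "event {{ item }}"
      loop: "{{ range(0, 100) | list }}"

Launching this playbook 10 times concurrently against the inventory above yields on the order of 100k job events.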

 

Performance Results

Job duration

[Figure: job duration, Ansible Tower v3.6.2 vs. v3.7.1]

The duration for Ansible Tower to complete 10 concurrent jobs remains the same on v3.6.2 and v3.7.1, which shows no regression in job run performance.

Job events processing duration

[Figure: job events processing duration, Ansible Tower v3.6.2 vs. v3.7.1]

The duration it takes Ansible Tower to process all of the job results has improved significantly: an 82.56% speedup of event processing on Ansible Tower v3.7.1 compared to v3.6.2. This is a major improvement and leads to a drastically improved user experience.

Job events lag

[Figure: job events lag, Ansible Tower v3.6.2 vs. v3.7.1]

The additional time required to process all the job results after the job finishes has been reduced by 95.77% in Ansible Tower v3.7.1 compared to v3.6.2. Again, a major performance improvement is clearly visible here, delivering a smoother user experience around job events.

 

Takeaways & Where to go Next 

With the performance improvements available starting in Ansible Tower v3.7.1, Ansible Tower can now process all job results with far less additional time beyond the job run time itself. This helps users view job status updates in the Ansible Tower dashboard in a near real-time manner, leading to a much smoother and more pleasant user experience.

If you're interested in detailed information on Ansible Tower, then the Ansible Tower Documentation is a must-read. To download and install the latest version, please visit the Ansible Tower Installation Guide, and to view the release notes of recent Ansible Tower releases, please visit Release notes 3.7.x and Release notes 3.6.x. If you are interested in more details about the Red Hat Ansible Automation Platform, be sure to check out our e-books.


Developing and Testing Ansible Roles with Molecule and Podman - Part 1


One of the beauties of the Red Hat Ansible Automation Platform is that the language used to describe automation is readable not only by a few dedicated experts, but by almost anyone across the IT ecosystem. That means all IT professionals can take part in the automation, enabling cross-team collaboration and really driving automation as a culture inside an organization. With so many people contributing to the automation, it is crucial to test the automation content in depth. So when you’re developing new Ansible content like playbooks, roles and collections, it’s a good idea to test the content in a test environment before using it to automate production infrastructure. Testing ensures the automation works as designed and avoids unpleasant surprises down the road.

Testing automation content is often a challenge, since it requires the deployment of specific testing infrastructure as well as setting up the testing conditions to ensure the tests are relevant. Molecule is a complete testing framework that helps you develop and test Ansible roles, which allows you to focus on the content instead of focusing on managing testing infrastructure.

According to its official documentation, Molecule is a project:

 “designed to aid in the development and testing of Ansible roles. It encourages an approach that results in consistently developed roles that are well-written, easily understood and maintained.”

Molecule allows you to test your role with many instances, ensuring it works properly across different combinations of operating systems and virtualization environments. Without it, you would have to provision and maintain a testing environment separately. You would also have to configure connectivity to those instances and ensure they are clean and ready before every test. Molecule manages those aspects for you in an automated and repeatable manner.

In this two part series, we will use Molecule to develop and test a new Ansible role. The first article will guide you through installing and configuring Molecule. In Part 2, we will use Molecule to aid with the role development.

If this role is part of a Collection, use this approach to develop and “unit” test the role. In a future article, we’ll see how to use Molecule to run integrated tests in a Collection.

Molecule uses drivers to provision testing instances using different technologies, including Linux containers, virtual machines and cloud providers. By default, it comes with three drivers pre-installed: the Docker and Podman drivers to manage containers, and the Delegated driver, which allows you to customize your integration. Drivers for other providers are available through the open source community.

In this article, we will use the Podman driver to develop and test a new role using Linux containers. Podman is a lightweight container engine for Linux that does not require a running daemon, and allows execution of containers in “rootless” mode for increased security. 

By using Molecule with the Podman driver, we will develop and test a new Ansible role from scratch. This basic role deploys a web application supported by the Apache web server. It must run on Red Hat Enterprise Linux (RHEL) 8 or Ubuntu 20.04 operating systems.

This example shows a common scenario where a role is expected to work on different versions of operating systems. Using Podman and Linux containers allows us to create many instances to test the role with the specific required versions. Since containers are lightweight, they also allow us to quickly iterate over the role functionality while developing it. Using containers for testing roles is applicable in this situation because the role is configuring the running Linux instances only. To test other provisioning scenarios or cloud infrastructure, we can use the delegated driver or another appropriate driver provided by the community.

 

What do you need?

To follow this tutorial, use a physical or virtual machine running Linux with Python 3 and Podman installed. For these examples, we’re running RHEL 8.2. You also need Podman configured to run rootless containers. The installation of Podman is out of the scope of this blog, so please refer to the official documentation for more information. To install Podman on RHEL 8, you can also check the RHEL 8 container documentation.
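Before installing Molecule, a quick sanity check of the environment can save time later. A minimal check, assuming you are logged in as a regular (non-root) user so Podman runs rootless, could look like this:

$ python3 --version
$ podman --version
$ id -un     # should print your regular user, not root, so containers run rootless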

 

Getting Started

Molecule is available as a Python package and thus can be installed via pip. As a first step, we create a dedicated Python environment for our Molecule installation, and install it there:

$ mkdir molecule-blog
$ cd molecule-blog
$ python3 -m venv molecule-venv
$ source molecule-venv/bin/activate
(molecule-venv) $ pip install "molecule[lint]"

Note that we installed Molecule with the “lint” option. By using this option, pip also installed the “yamllint” and “ansible-lint” tools that allow you to use Molecule to perform static code analysis of your role, ensuring it complies with Ansible coding standards.

The installation downloads all of the dependencies from the Internet, including Ansible. Verify the installed version:

$ molecule --version
molecule 3.0.4
   ansible==2.9.10 python==3.6

Next, let’s use the “molecule” command to initialize a new Ansible role.

 

Initializing a New Ansible Role

Generally speaking, when developing a new Ansible role, you initialize it by running the “ansible-galaxy role init” command. In this case, we instead use “molecule” to initialize the new role. By doing this, you’ll have the same role structure provided by the “ansible-galaxy” command plus the basic boilerplate code required to run Molecule tests.

By default, Molecule uses the Docker driver to execute tests. Since we want to execute tests using “podman”, we need to specify the driver name using the option “--driver-name=podman” when initializing the role with “molecule”. 

Switch back to the “molecule-blog” directory and initialize the new role “mywebapp” with this command: 

$ molecule init role mywebapp --driver-name=podman
--> Initializing new role mywebapp...
Initialized role in /home/ricardo/molecule-blog/mywebapp successfully.

Molecule created the structure for your new role in a directory named “mywebapp”. Switch into this directory and check the content created by Molecule:

$ cd mywebapp
$ tree
.
├── defaults
│   └── main.yml
├── files
├── handlers
│   └── main.yml
├── meta
│   └── main.yml
├── molecule
│   └── default
│       ├── converge.yml
│       ├── INSTALL.rst
│       ├── molecule.yml
│       └── verify.yml
├── README.md
├── tasks
│   └── main.yml
├── templates
├── tests
│   ├── inventory
│   └── test.yml
└── vars
    └── main.yml
 
10 directories, 12 files

Molecule includes its configuration files under the “molecule” subdirectory. When initializing a new role, Molecule adds a single scenario named “default”. Later, you can add more scenarios to test different conditions. For this tutorial, we’ll use the “default” scenario.
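If you later want to test different conditions in a separate scenario, Molecule can generate that boilerplate too. As a rough sketch for Molecule 3, run this from inside the role directory (the scenario name is our own example, and the exact options may differ between versions, so check “molecule init scenario --help”):

$ molecule init scenario extra-scenario --driver-name=podman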

Verify the basic configuration in the file “molecule/default/molecule.yml”:

$ cat molecule/default/molecule.yml 
---
dependency:
  name: galaxy
driver:
  name: podman
platforms:
  - name: instance
    image: docker.io/pycontribs/centos:7
    pre_build_image: true
provisioner:
  name: ansible
verifier:
  name: ansible

As per our requirements, this file specifies the Podman driver for tests. It also defines a default platform “instance” using the container image “docker.io/pycontribs/centos:7” that you’ll change later.

Unlike Molecule v2, Molecule v3 does not specify a linter by default. Open the configuration file “molecule/default/molecule.yml” using your favorite editor to include the lint configuration at the end:

$ vi molecule/default/molecule.yml
...
verifier:
  name: ansible
lint: |
  set -e
  yamllint .
  ansible-lint .

Save and close the configuration file. Run “molecule lint” from the project root to lint the entire project:

$ molecule lint

This command returns a few errors because the file “meta/main.yml” is missing some required values. Fix these issues by editing the file “meta/main.yml”, adding “author”, “company”, “license” and “platforms”, and removing the blank line at the end. Without comments - for brevity - the “meta/main.yml” looks like this:

$ vi meta/main.yml
galaxy_info:
  author: Ricardo Gerardi
  description: Mywebapp role deploys a sample web app 
  company: Red Hat 
  license: MIT 
  min_ansible_version: 2.9
  platforms:
  - name: rhel
    versions:
    - 8 
  - name: ubuntu
    versions:
    - 20.04
  galaxy_tags: []
dependencies: []

Now re-lint the project and verify that there are no errors this time.

$ molecule lint
--> Test matrix
└── default
    ├── dependency
    └── lint
    
--> Scenario: 'default'
--> Action: 'dependency'
Skipping, missing the requirements file.
Skipping, missing the requirements file.
--> Scenario: 'default'
--> Action: 'lint'
--> Executing: set -e
yamllint .
ansible-lint . 

The role is initialized and the basic molecule configuration is in place. Let’s set up the test instances next.

 

Setting up Instances

By default, Molecule defines a single instance named “instance” using the “centos:7” image. According to our requirements, we want to ensure our role works with RHEL 8 and Ubuntu 20.04. In addition, because this role starts the Apache web server as a system service, we need to use container images that enable “systemd”.

Red Hat provides an official Universal Base Image for RHEL 8, which enables “systemd”: 

  • registry.access.redhat.com/ubi8/ubi-init

For Ubuntu, there are no official “systemd”-enabled images, so we’ll use an image maintained by Jeff Geerling from the Ansible open source community:

  • geerlingguy/docker-ubuntu2004-ansible

To enable the “systemd” instances, modify the “molecule/default/molecule.yml” configuration file, remove the “centos:7” instance and add the two new instances.

$ vi molecule/default/molecule.yml
---
dependency:
  name: galaxy
driver:
  name: podman
platforms:
  - name: rhel8
    image: registry.access.redhat.com/ubi8/ubi-init
    tmpfs:
      - /run
      - /tmp
    volumes:
      - /sys/fs/cgroup:/sys/fs/cgroup:ro
    capabilities:
      - SYS_ADMIN
    command: "/usr/sbin/init"
    pre_build_image: true
  - name: ubuntu
    image: geerlingguy/docker-ubuntu2004-ansible
    tmpfs:
      - /run
      - /tmp
    volumes:
      - /sys/fs/cgroup:/sys/fs/cgroup:ro
    capabilities:
      - SYS_ADMIN
    command: "/lib/systemd/systemd"
    pre_build_image: true
provisioner:
  name: ansible
verifier:
  name: ansible
lint: |
  set -e
  yamllint .
  ansible-lint .

With these parameters, we’re mounting the temporary filesystems “/run” and “/tmp”, as well as the “cgroup” volume, for each instance. We’re also enabling the “SYS_ADMIN” capability, which is required to run a container with systemd.

Also, if you’re following this tutorial on a RHEL 8 machine with SELinux enabled - as it should be - you need to set the “container_manage_cgroup” boolean to true to allow containers to run systemd. See the RHEL 8 documentation for more details:

sudo setsebool -P container_manage_cgroup 1

Molecule uses an Ansible Playbook to provision these instances. Modify and add parameters for provisioning by modifying the “provisioner” dictionary in the “molecule/default/molecule.yml” configuration file. It accepts the same configuration options provided in an Ansible configuration file “ansible.cfg”. For example, update the provisioner configuration by adding a “defaults” section. Set the Python interpreter to “auto_silent” to prevent warnings. Enable the “profile_tasks”, “timer”, and “yaml” callback plugins to output profiling information with the playbook output. Then, add the “ssh_connection” section and disable SSH pipelining because it does not work with Podman:

provisioner:
  name: ansible
  config_options:
    defaults:
      interpreter_python: auto_silent
      callback_whitelist: profile_tasks, timer, yaml
    ssh_connection:
      pipelining: false

Save the configuration file and create the instances by running “molecule create” from the role root directory:

$ molecule create

Molecule runs the provisioning playbook and creates both instances. You can check the instances by running “molecule list”:

$ molecule list
Instance Name    Driver Name    Provisioner Name    Scenario Name    Created    Converged
---------------  -------------  ------------------  ---------------  ---------  -----------
rhel8            podman         ansible             default          true       false
ubuntu           podman         ansible             default          true       false

You can also verify that both containers are running in Podman:

$ podman ps
CONTAINER ID  IMAGE                                                   COMMAND               CREATED             STATUS                 PORTS  NAMES
2e2f14eaa37b  docker.io/geerlingguy/docker-ubuntu2004-ansible:latest  /lib/systemd/syst...  About a minute ago  Up About a minute ago         ubuntu
2ce0a0ea8692  registry.access.redhat.com/ubi8/ubi-init:latest         /usr/sbin/init        About a minute ago  Up About a minute ago         rhel8

While developing the role, Molecule uses the running instances to test it. In case a test fails, or an error causes an irreversible change that requires you to start over, delete these instances by running “molecule destroy” and recreate them with “molecule create” at any time.

 

Takeaways and Where to go Next

Now that you’ve successfully installed, configured and used Molecule to set up new testing instances, you are ready to apply it to help develop and test the Ansible role.

In Part 2 of this blog series, we will use Molecule to aid with the role development. If you cannot wait until then and want to dive deeper into the topic of role development and testing, or Ansible automation in general, we’ve collected some resources in the meantime:

 

*Red Hat provides no expressed support claims to the correctness of this code. All content is deemed unsupported unless otherwise specified

Ansible Certified Content Collection for Chocolatey


It’s a constant battle to keep your Windows estate updated and secure. Using Red Hat Ansible Automation Platform and Chocolatey, you can easily keep your software up-to-date and react quickly to bug fixes, security issues and 0-days on dozens, hundreds or thousands of nodes.

We’re going to take you through three simple steps to show you how simple it is to deploy and update software using Chocolatey and Ansible.

 

Before We Start: Windows Prerequisites

Ansible uses WinRM by default to communicate with Windows machines. Therefore, we need to ensure it is enabled by running `Enable-PSRemoting` on the remote Windows computer.

For production use, we recommend enabling HTTPS for WinRM.

The code examples shown below are all using the user ‘ansible’ as the default. If you are using a different username, make sure you change it!

 

Step 1: Configure Ansible to use Chocolatey

We need to install the Chocolatey collection so that Ansible can use it. The Chocolatey Ansible Content Collection is called chocolatey.chocolatey and is maintained by the Chocolatey team. To install the Collection, and therefore the win_chocolatey modules, on your Ansible server, run:

ansible-galaxy collection install chocolatey.chocolatey

That’s all there is to it! Ansible can now work with Chocolatey using the modules in the Collection.

 

Step 2: Install software on a remote computer

Now that we have the win_chocolatey module installed, we can go ahead and install or manage software on our remote computers.

Let’s create a file called `install_notepadplusplus.yml` with the following contents:

---
- hosts: all
  gather_facts: false
  vars_prompt:
    - name: password
      prompt: "Enter the password for the node"
  vars:
      ansible_user: ansible
      ansible_password: "{{ password }}"
      ansible_connection: winrm
      ansible_winrm_transport: ntlm
      ansible_winrm_server_cert_validation: ignore
  tasks:
      - name: Install Notepad++ version 7.8
        win_chocolatey:
          name: notepadplusplus
          version: '7.8'

Run `ansible-playbook install_notepadplusplus.yml -i <ip address>,` (note the comma after the IP address) to install Notepad++ on your remote computer. Note that we are not installing the latest version in this example, as we will update to that in the next step.

Once installed, open Notepad++ and press `F1` to ensure we have installed the requested version. 

 

Step 3: Update software on a remote computer

To ensure you always have the latest version of software installed on your computers, you can use Chocolatey to upgrade them. We’ll upgrade to the latest version of Notepad++.

Create a file called `upgrade_notepadplusplus.yml` with the following contents:

---
- hosts: all
  gather_facts: false
  vars_prompt:
    - name: password
      prompt: "Enter the password for the node"
  vars:
    ansible_user: ansible
    ansible_password: "{{ password }}"
    ansible_connection: winrm
    ansible_winrm_transport: ntlm
    ansible_winrm_server_cert_validation: ignore
  tasks:
    - name: Install latest Notepad++
      win_chocolatey:
        name: notepadplusplus
        state: latest

Run `ansible-playbook upgrade_notepadplusplus.yml -i <ip address>,` (note the comma after the IP address) to update, or install, the latest Notepad++ on your remote computer. Once installed, open Notepad++ and press `F1` to ensure we have installed the latest version.

 

Next Steps

While we have only worked with one remote computer in this blog post, Ansible allows you to replicate this across dozens, hundreds and thousands of remote computers.
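For example, instead of passing a single IP address on the command line, you can point the same playbooks at an inventory file that lists many Windows hosts. The host names below are placeholders; adjust them and the connection details for your environment:

# windows_inventory.ini - hypothetical Windows hosts
[windows]
win-node-01.example.com
win-node-02.example.com
win-node-03.example.com

[windows:vars]
ansible_user=ansible
ansible_connection=winrm
ansible_winrm_transport=ntlm
ansible_winrm_server_cert_validation=ignore

Running `ansible-playbook upgrade_notepadplusplus.yml -i windows_inventory.ini` then upgrades Notepad++ on every host in the group, prompting once for the password.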

Now that you have the Ansible Chocolatey modules installed, you can install, uninstall, update and manage packages on your computers. Other modules in the Chocolatey Ansible Content Collection give you the ability to manage the configuration, features and sources for Chocolatey itself. You can find more information on the Ansible Galaxy Chocolatey collection page.
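As a brief sketch of what that can look like, tasks like these could be added to the plays shown earlier; the feature name is a standard Chocolatey feature, and the source URL is a placeholder for your own internal repository:

    - name: Remember arguments used on install when upgrading packages
      win_chocolatey_feature:
        name: useRememberedArgumentsForUpgrades
        state: enabled

    - name: Add an internal Chocolatey package source (placeholder URL)
      win_chocolatey_source:
        name: internal_repo
        source: https://packages.example.com/chocolatey
        state: present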

Chocolatey has a recommended architecture for organizations, which includes setting up an internal repository. To speed up that process, there is a Quick Deployment Environment that allows you to be up and running with an internal repository with useful packages already loaded, Jenkins for automation and Chocolatey Central Management for reporting in around two hours.

For package management on Windows, Chocolatey is the package manager of choice. Working in harmony with Ansible, you can use it to update and manage your Windows computers in a similar way as you would with Linux.

If you want to see Chocolatey live, head over to the recordings from last year's AnsibleFest presentation: Simplify Windows Software packaging and Automation with Chocolatey. And if you want to learn more about the Red Hat Ansible Automation Platform, make sure to register for the upcoming AnsibleFest.

Getting Started With OSPFV2 Resource Modules


With the increasing size and complexity of modern enterprise networks, the demand for simplifying network management becomes more intense. The introduction of resource modules with Ansible 2.9 provides users a path to easier network management, especially across multiple product vendors.

In the past, we’ve already covered resource modules for VLAN management and for ACLs. However, simplifying network management is not limited to local network setups: Open Shortest Path First (OSPFv2) is a protocol used to distribute IP routing information throughout a single Autonomous System (AS). It is used in larger network setups, as the Wikipedia page aptly observes:

OSPF is a widely used IGP in large enterprise networks. IS-IS, another LSR-based protocol, is more common in large service provider networks.

Managing OSPFv2 manually on a network device can be a very difficult and tedious task, and it needs to be performed carefully, as the manual process is prone to human error.

This blog post goes through the OSPFV2 resource module for the VyOS network platform. We will walk through several examples and describe the use cases for each state parameter and how we envision these being used in real-world scenarios.

 

OSPFv2 resource modules example: Vyos

The goal of OSPFv2 resource modules is to make sure configurations are consistently applied across the infrastructure with less effort. It simplifies management and makes it faster and easier to scale without worrying about the actual implementation details of the network platforms working under the hood.

In October of 2019, as part of Red Hat Ansible Engine 2.9, the Ansible Network Automation team introduced the concept of resource modules to make network automation easier and more consistent for those automating various network platforms in production.

Ansible Content refers to Ansible Playbooks, modules, module utilities and plugins: basically all of the Ansible tools that users utilize to create their automation. The OSPFv2 resource module is distributed as part of Ansible Content Collections. To learn more about Ansible Content Collections, you can check out two of our blogs: Getting Started with Ansible Content Collections and The Future of Ansible Content Delivery.

Let’s have a closer look at how the OSPFv2 resource modules work. As an example, we pick the vyos_ospfv2 resource module. In this blog, we’ll be using a VyOS router with version 1.1.8 (helium) for all the configuration management specific operations. Also, to better showcase the effect of the modules, we will start with some OSPF version 2 specific attributes already configured. Check out the linked listing for further details.

 

Accessing and using the VyOS Collection

To download the VyOS Collection, refer to Automation Hub (fully supported, requires a Red Hat Ansible Automation Platform subscription) or Ansible Galaxy (upstream community supported):
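From either source, installing the collection itself is a single ansible-galaxy command, for example:

$ ansible-galaxy collection install vyos.vyos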

To learn more about how to configure downloading via the ansible.cfg file or requirements.yml file, please refer to the blog, “Hands On With Ansible Collections.”

Before we get started, let’s quickly explain the rationale behind the naming of the network resource modules. Notice that for resource modules that configure OSPF routes, the newly added modules are named based on the IP version. This was done so that those using existing network modules would not have their Ansible Playbooks stop working and would have sufficient time to migrate to the new network automation modules.

A module to configure OSPFv2 is also available for the following supported platforms:

The OSPFV2 resource module provides the same level of functionality that a user can achieve when configuring manually on the VyOS device, with all the advantages of Ansible, plus the added edge of Ansible facts gathering and the resource module approach, which is more closely aligned with network professionals' day-to-day work.

 

Use Case: OSPFv2 configuration changes

Using state gathered - Building an Ansible inventory

Resource modules allow the user to read in existing network configuration and convert it into a structured data model. The state: gathered is the equivalent of gathering Ansible Facts for this specific resource. This example will read in the existing network configuration and store it as a flat file.

Here is an Ansible Playbook example of using state: gathered and storing the result as YAML into host_vars. If you are new to Ansible Inventory and want to learn about group_vars and host_vars, please refer to the documentation here.

---
- name: convert configured OSPFV2 resource to structured data
  hosts: vyos
  vars:
    inventory_dir: "lab_inventory"
    inventory_hostname: "vyos"
  gather_facts: false
  tasks:

  - name: Use the OSPFV2 resource module to gather the current config
    vyos.vyos.vyos_ospfv2:
      state: gathered
    register: ospfv2

  - name: Create inventory directory
    file:
      path: "{{ inventory_dir }}/host_vars/{{ inventory_hostname }}"
      state: directory

  - name: Write the OSPFV2 configuration to a file
    copy:
      content: "{{ {'ospfv2': ospfv2['gathered']} | to_nice_yaml }}"
      dest: "{{ inventory_dir }}/host_vars/{{ inventory_hostname }}/ospfv2.yaml"

Execute the Ansible Playbook with the ansible-playbook command:

$ ansible-playbook example.yml

Here is the data structure that was created by the gathered operation reading in a brownfield configuration:

$ cat nw_inventory/host_vars/vyos/ospfv2.yaml
ospfv2:
  areas:
  - area_id: '2'
    area_type:
      normal: true
    authentication: plaintext-password
    shortcut: enable
  - area_id: '4'
    area_type:
      stub:
        default_cost: 20
        set: true

You can check out the full detailed listing of the output of this example in the state: gathered reference gist.

 

Using state merged - Pushing configuration changes

The merged state takes your Ansible configuration data (i.e., Ansible variables) and merges it into the network device’s running configuration. This does not affect existing configuration that is not specified in your Ansible configuration data. Let’s walk through an example.

We will modify the flat-file created in the first example with a configuration to be merged. Here are the most important pieces:

areas:
 - area_id: '2'
   area_type:
     normal: true
   authentication: "plaintext-password"
   shortcut: 'enable'
 - area_id: '3'
   area_type:
     nssa:
       set: true

Now let’s create an Ansible Playbook to merge this new configuration into the network device’s running configuration:

---
- name: Merged state play
  hosts: vyos
  gather_facts: false
  tasks:
    - name: Merge OSPFV2 config with device existing OSPFV2 config
      vyos.vyos.vyos_ospfv2:
        state: merged
        config: "{{ ospfv2 }}"

Execute the Ansible Playbook with the ansible-playbook command:

$ ansible-playbook merged.yml

And, once we run the respective Merge play, all of the provided parameters will be configured on the VyOS router with Ansible changed=True.

Note the network configuration after the merge operation:

vyos@vyos:~$ show configuration commands | grep ospf
set protocols ospf area 2 area-type 'normal'
set protocols ospf area 2 authentication 'plaintext-password'
set protocols ospf area 2 shortcut 'enable'
set protocols ospf area 3 area-type 'nssa'
set protocols ospf area 4 area-type stub default-cost '20'

Note that this listing only shows a few highlights; the full listing is available in the merged gist.

Let’s take a look at what has changed through this operation: if we go through the device output, there are a few observations:

  • Attribute area with area_id ‘3’ got added to the OSPF areas list. 
  • The redistribute and parameter attribute got configured for OSPF.
  • If there was an already configured OSPF with AREA and the user wanted to update any parameter for that particular AREA, then the user can also use the Merged state to update the AREA under OSPFV2.

With the second run, the respective Merge play runs again and the Ansible charm of idempotency comes into the picture. If nothing has changed, the play run results in changed=False, which confirms to the user that all of the provided configurations in the play are already configured on the VyOS device.

 

Using state replaced - Replacing configuration 

If the user wants to entirely re-configure the pre-configured OSPFV2 on the VyOS device with the provided OSPFV2 configuration, then the resource module's replaced state comes into the picture.

The scope of the replaced operation depends on the individual OSPF processes. In the case of VyOS, only a single process is supported. As a result, the replaced state acts similarly to the overridden state, and for that reason a dedicated overridden state is not required with the VyOS modules. Other network platforms that support multiple OSPFV2 processes do have the overridden state operation.

Using the overridden state, a user can override all OSPFV2 resource attributes with user-provided OSPFV2 configuration. Since this resource module state overrides all pre-existing attributes of the resource module, the overridden state should be used cautiously, as OSPFV2 configurations are very important; if all the configurations are mistakenly replaced with the play input configuration, it might create unnecessary issues for the network administrators. 

In this scenario, OSPF with some number of AREAs is already configured on the VyOS device, and now the user wants to update the AREA list with a new set of AREAs and discard all of the already configured OSPF AREAs. Here, the resource module's replaced state is an ideal choice and, as the name suggests, the replaced state will replace the existing OSPF AREA list with the new set of AREAs given as input by the user.

If a user tries to configure any new OSPFV2 AREA/attribute that’s not already pre-configured on the device, it’ll act as a merged state and the vyos_ospfv2 module will try to configure the OSPF AREAs given as input by the user inside the replace play.

We will modify the flat-file created in the first example:

areas:
 - area_id: '2'
   area_type:
     normal: true
   authentication: "plaintext-password"
   shortcut: 'enable'
 - area_id: '4'
   area_type:
     stub:
      default_cost: 20

Check out the full input config structure if you want to learn more details.

Again, we create an Ansible Playbook to apply this new configuration to the network device’s running configuration, this time with the replaced state:

---
- name: Replaced state play
  hosts: vyos
  gather_facts: false
  tasks:
    - name: Replace OSPFV2 config with device existing OSPFV2 config
      vyos.vyos.vyos_ospfv2:
        state: replaced
        config: "{{ ospfv2 }}"

Once we run the respective Replaced play, all of the provided parameters will override all the existing OSPFv2 resource specific config on the VyOS router with Ansible changed=True.

The network device configuration after the Replaced operation:

vyos@vyos:~$ show configuration commands | grep ospf
set protocols ospf area 2 area-type 'normal'
set protocols ospf area 2 authentication 'plaintext-password'
set protocols ospf area 2 shortcut 'enable'
set protocols ospf area 4 area-type stub default-cost '20'
set protocols ospf area 4 network '192.0.2.0/24'

Check out the corresponding gist for more details.

If we dig into the above output, we note the following changes:

  • Replaced negates all of the pre-existing OSPFV2 resource-specific attributes and deletes those configurations that are not present inside the replaced play. In the above example, ospfv2 area-id 3 got deleted.
  • For the OSPFV2 configurations that are pre-existing and also in the play, vyos_ospfv2 replaced state will try to delete/negate all the pre-existing OSPFV2 config and then configure the new OSPFV2 config as mentioned in the play.
  • For any non-existing OSPFV2-specific attribute, the replaced state configures the OSPFV2 in the same manner as the merged state. In the above example, a new network address was configured for OSPFv2 area-id 4.

With the second run of the above play, there are no changes reported, which satisfies Ansible idempotency.

 

Using state deleted - Delete configuration 

Now that we’ve talked about how we can configure OSPFV2-specific attributes on the VyOS device by using the vyos_ospfv2 resource module's merged and replaced states, it’s time to talk about how we can delete the pre-configured OSPFV2 attributes and what level of granularity the deleted operational state offers the user.

Deleting ALL OSPFV2 config in one go deletes all the pre-configured OSPFV2-specific attributes from the VyOS device. That said, this is a very critical delete operation; if not used judiciously, it has the power to delete all pre-configured OSPFV2 and can leave a production router with no OSPFV2 attributes configured at all.

Let’s create an Ansible Playbook to delete all the OSPFV2 configuration from the network device’s running configuration:

---
- name: Deleted state play
  hosts: vyos
  gather_facts: false
  tasks:
    - name: Delete ALL OSPFV2 config
      vyos.vyos.vyos_ospfv2:
        state: deleted

After we execute the playbook, the network device configuration has changed:

vyos@vyos:~$ show configuration commands | grep ospf
vyos@vyos:~$

Make sure to look at the full listing of the changed values. If we dig into the above output briefly, we can see that all the ospfv2 resource-specific config has been removed from the network configuration.

 

Using state rendered - Development and working offline

Ansible renders the provided configuration in the task in the device-native format (for example, VyOS CLI). Ansible returns this rendered configuration in the rendered key in the result. Note this state does not communicate with the network device and can be used offline.

To have a config to be rendered, modify the YAML file created in the first scenario. For example, for the vyos_ospfv2 module, you can add a few more attributes to show that we change the data model yet again.

areas:
 - area_id: '2'
   area_type:
     normal: true
   authentication: "plaintext-password"

See the full listing in the corresponding rendered gist.

We create a playbook to execute this:

---
- name: Rendered state play
  hosts: vyos
  gather_facts: false
  tasks:
    - name: Render the provided configuration
      vyos.vyos.vyos_ospfv2:
        config: "{{ ospfv2 }}"
        state: rendered

This produces the following output:

"rendered": [
       "set protocols ospf log-adjacency-changes 'detail'",
       "set protocols ospf max-metric router-lsa administrative",
       "set protocols ospf max-metric router-lsa on-shutdown 10",

Check out the corresponding gist for more details.

If we dig into the above output, we can see that nothing on the device has changed at all; the rendered state does not even require establishing a connection to an actual network device.

 

Using state parsed - Development and working offline

Ansible parses the configuration from the running_config option into Ansible structured data in the parsed key in the result. Note this does not gather the configuration from the network device, so this state can be used offline.

As the config to be parsed, we take configuration in device-native format:

set protocols ospf area 2 area-type 'normal'
set protocols ospf area 2 authentication 'plaintext-password'
set protocols ospf area 2 shortcut 'enable'
set protocols ospf area 4 area-type stub default-cost '20'
set protocols ospf area 4 network '192.0.2.0/24'
set protocols ospf area 4 range 192.0.3.0/24 cost '10'

The playbook to parse this configuration is:

---
- name: Parsed state play
  hosts: vyos
  gather_facts: false
  tasks:
    - name: Parse the provided OSPFV2 configuration
      vyos.vyos.vyos_ospfv2:
        running_config:
          "set protocols ospf area 2 area-type 'normal'
           set protocols ospf area 2 authentication 'plaintext-password'
           set protocols ospf area 2 shortcut 'enable'"
        state: parsed

Executing the playbook generates the following output:

"parsed": {
        "areas": [
            {
                "area_id": "2",
                "area_type": {
                    "normal": true
                },
                "authentication": "plaintext-password",
                "shortcut": "enable"
            }
        ]
}
...

If we dig into the above output, we can see that nothing on the device has changed at all; the parsed operation does not even require establishing a connection to an actual network device.
Note: the input to be parsed is provided as the value of the running_config key.

 

Takeaways & Next Steps

As shown above, with the help of the resource modules, management of OSPFV2 resource-specific configurations can be greatly simplified. Users don't need to bother much about OSPFV2 implementation details for each platform; they can just enter the actual data. By using the merged, replaced and overridden states, we allow much more flexibility for network engineers to adopt automation in incremental steps. The other operations, like gathered, rendered and parsed, allow better, user-friendly handling of the facts and the data managed within these tasks.

If you want to learn more about the Red Hat Ansible Automation Platform and network automation, you can check out these resources:

 

*Red Hat provides no expressed support claims to the correctness of this code. All content is deemed unsupported unless otherwise specified

Network Automation at AnsibleFest 2020


This year, we are adapting our signature automation event, AnsibleFest, into a free virtual experience to connect our communities with a wider audience and to collaborate to solve problems. Seasoned pros and brand new Ansiblings alike can find answers and guidance for Red Hat Ansible Automation Platform, the enterprise solution for building and operating automation at scale. We’re giving our attendees an inside peek into exactly what to expect from each channel. Let’s take a closer look at what is to come from the network channel at AnsibleFest 2020.

 

Network Automation at AnsibleFest

Gone are the days of hand-typing commands into network devices one by one. Manage your network infrastructure using Ansible throughout the entire development and production life cycle. This AnsibleFest channel focuses on network automation topics for everyone from module and Collection developers to playbook writers, and is geared towards network and cloud engineers and operators. The channel has a good mix of community, customers, partners and Red Hatters, and aims to provide something for everyone.

Attendees will learn how network automation can no longer be a “point tool”, but instead must be part of a holistic automation strategy that spans IT teams. Although Ansible was built as a DIY tool, it needs to be a focal point of the IT infrastructure in order for automation to be successful.

Attendees can expect a little bit of everything from this channel because “network automation” isn’t just automating switches and routers anymore. It now also means automating cloud networking, understanding how inventory and IPAM factor into automation, all the way to automating multi-vendor multi-team environments at scale.

The move to Ansible Content Collections is now in full swing, and multiple sessions detail how to develop, build, publish and use network resource modules as part of Collections. Here are a few sessions that you can expect to see in the network channel: 

  • “Getting Started with Network Resource Modules” by Gomathi Selvi Srinivasan, Red Hat
  • “Automating IPAM in Cloud: Ansible + Netbox” by William Collins, Humana
  • “Automate Your Network: Introducing the Modern Approach to Enterprise Network Management” by John Capobianco, Canadian House of Commons

 

What's Next?

  • Register today for the AnsibleFest 2020 virtual experience
  • Check out the session catalog to see what sessions to expect at the event

Automation Architect Channel at AnsibleFest 2020


As we continue to expand all the insightful content that our attendees can expect from AnsibleFest 2020, we are excited to share with you our Automation Architect channel. Here is a sneak peek of exactly what to expect from the Automation Architect channel at AnsibleFest 2020.

 

Automation Architect Channel

Automation has become a key discipline in large IT organizations, but introducing automation to new areas is likely to bring both technical and non-technical challenges. As organizations focus on building end-to-end automation solutions and increasing the automation footprint, Automation Architects will play a pivotal role as the interface with both technologists and business owners.

In this track, you will learn more about Ansible best practices for building your organization’s automation architecture, how to best collaborate with the business it serves and how it can help in broader corporate initiatives, such as your cloud journey. Whether you are an Enterprise or Automation Architect today or are interested in developing the skills for this career path, you will learn the best practices to successfully implement an automation initiative at scale.

Understand how you can use and share automation assets and how customers automate across hybrid, scalable infrastructures. Learn about integrating the Red Hat Ansible Automation Platform with other components in the Red Hat portfolio, including Red Hat Enterprise Linux and Red Hat OpenShift, as well as products from key partners. Hear from your peers about the mindset changes they achieved as they bridged operational silos with automation. And of course, learn about what’s new for Ansible Automation Platform and where this product is headed.  

Here is a sampling of a few key talks in this track: 

  • Re-imagining Agentless Automation Architectures 
  • Securing your deployment of Ansible Automation: Reference architecture and best practices
  • Free their minds: Driving culture change and workforce transformation.
  • Customer talks including ones from Comcast and IBM 

Telco Mini Channel at AnsibleFest 2020


As we adapt AnsibleFest into a free virtual experience this year, we wanted to share with our automation lovers what to expect. Seasoned pros and brand new Ansiblings alike can find answers and guidance for Red Hat Ansible Automation Platform, the enterprise solution for building and operating automation at scale. We are giving our attendees an inside peek of exactly what to expect from each channel. Let’s take a closer look at what is to come from the network-telco mini channel at AnsibleFest 2020.

 

Network-Telco Automation at AnsibleFest

Telecommunication service providers have extremely critical and complex workflows that require specialized attention for automation. The network is no longer isolated to the data center, but extends to the enterprise and now the edge, each of which has specific requirements.

This is the first time Telco as an industry or use case has been specifically highlighted as part of its own channel at AnsibleFest. Data center automation has long been a use case for Ansible automation, but as Telco workloads move to the edge, so does the need to automate the enterprise, branch offices and entry points for end users.

Attendees can expect to hear about targeted use cases for Telecommunications customers, partners, and vendors. Topics include closed loop network automation, data modeling recommendations and NFV container management. Track participants will learn how Ansible is leveraged in Telecommunications/Service Provider networks, from the datacenter to containerized NFV to 5G radios.

Here are a few talks that you can expect to see in the network-telco mini channel:

  • “Automation with Ansible: A New Dimension to Network Engineering” by Dekia Black, Cox Communications
  • “5G Network Orchestration and Management with Ansible Automation” by Tony O’Brien, IBM
  • “Greenfield and Brownfield Closed Loop Automation for Service Providers” by Randy Levensalor, CableLabs

 

What's Next?

  • Register today for the AnsibleFest 2020 virtual experience
  • Check out the session catalog to see what sessions to expect at the event

Developer Channel at AnsibleFest 2020


As a developer, have you ever made a change that takes down an entire Kubernetes production cluster, requiring you to rebuild all YAML and automation scripts to get production back up?  Have you ever wanted to create reproducible, self-contained environments that can be run locally or in production? Welcome to the new AnsibleFest Developer Channel! Here you can learn how Ansible is critical to the journey of the developer as an open-source software configuration management, provisioning and application-deployment tool that enables infrastructure as code.  

 

Ansible Developer Channel

Many themes will be presented in the Ansible Developer Channel, including Kubernetes operations, Red Hat Ansible Automation Platform use cases, as well as execution speed and development efficiency considerations. You can learn how Ansible can streamline Kubernetes Day 2 Operations, where monitoring, maintenance and troubleshooting come into play and the application moves from a development project to an actual strategic advantage for your business. You will also learn how Ansible execution environments solve problems for developers using Ansible Automation Platform and how to create self-contained environments that can be run locally or in production Red Hat Ansible Tower deployments. In addition, you can learn how to optimize execution speed and development efficiency with three types of Operator SDKs, including Go, Helm and Ansible. These topics and many more will be presented in a compelling Ansible Developer Channel.

 

Building Out Your Expertise as a Developer

The Ansible Developer Channel provides a diverse set of topics to build your expertise. You will learn how CI workflows using GitHub Actions can manage numerous Ansible-based projects. You can also learn to implement effective GitOps workflows using Git version control as your system’s “source of truth”. For scenarios where execution speed is a major consideration, you will learn how caching and reusing Python resources at the host level can achieve this goal. We welcome you to join us for a developer-oriented set of sessions in the Ansible Developer Channel, including the sessions listed below:

  • Simplifying Kubernetes with Ansible
  • Creating and Using Ansible Execution Environment
  • Fast vs. Easy: Operators by the Numbers
  • Continuous Testing With Molecule, Ansible and GitHub Actions
  • Operations by Pull Request: An Ansible GitOps Story
  • How to Speed Up Your Modules

What's Next?

  • Register today for the AnsibleFest 2020 virtual experience
  • Check out the session catalog to see what sessions to expect at the event

Operations Channel at AnsibleFest 2020


AnsibleFest 2020 is right around the corner and we could not be more excited. This year we have some great content in each of our channels. Here is a preview of what attendees can expect from the Operations channel at AnsibleFest.

 

Operations Channel

This channel will take operators on an automation journey through the technical operations lifecycle and show how the Ansible Automation Platform is at the center of your automation goals. Learn how to get your automation moving with Certified Content Collections, then scale out with execution environments and tune performance. Once you are running at scale, we have tools to show you which teams are using automation and how much it is saving you, using Analytics and some real-world examples.

You should leave with some great examples and walkthroughs of infrastructure automation, from operating systems to public cloud, and learn how you can leverage Ansible Automation Platform to foster cross-functional team collaboration and empower your whole organization with the automation it needs.

There will be something for everyone. You’ll get to hear from customers, Red Hatters and our partners. Also pick up some tips for your server deployments, performance and cluster management. 

 

Operations Channel Highlights

  • Running of the Automation - Dylan Silva
  • How to introduce automation to your colleagues in IT Security (and what's in for you) - Massimo Ferrari
  • Performance Tuning Ansible Tower - Scale Up vs. Scale Out - Chase Hoffman; Matthew Jones | Red Hat

 

What's Next?

  • Register today for the AnsibleFest 2020 virtual experience
  • Check out the session catalog to see what sessions to expect at the event

Security Channel at AnsibleFest 2020


Security automation is an area that encompasses different practices, such as investigation & response, security compliance, hardening, etc. While security is a prominent topic now more than ever, all of these activities also greatly benefit from automation. 

For the second year at AnsibleFest, we will have a channel dedicated to security automation. We talked with channel Lead Massimo Ferrari to learn more about the security automation channel and the sessions within it. 

 

Security Channel

The sessions in this channel will show you how to introduce and consume Red Hat Ansible Automation Platform at different stages of maturity of your security organization, as well as how to use it to share processes across cross-functional teams. Sessions include guidance from customers, Red Hat subject matter experts and certified partners.

 

What will Attendees learn?

The target audience is security professionals who want to learn how Ansible can support and simplify their activities, and automation experts tasked with expanding the footprint of their automation practice and support security teams in their organization. This track is focused on customer stories and technical guidance on response & remediation, security operations and vulnerability management use cases. 

Content is suitable for both automation veterans and security professionals who want to learn how to leverage Ansible in security environments to support activities like incident investigation and response, compliance enforcement and system hardening. They will learn how Ansible can support SecOps and security analysis teams to streamline and accelerate the activities they perform every day. 

 

Channel Highlights

This year, you’ll have the opportunity to learn how automation is a foundational technology for a successful DevSecOps initiative as well as how Ansible can be used to automate container security and endpoint management platforms.

Additionally, for the second year, we’re running our enhanced Ansible security automation workshop that will allow you to practice how Ansible can be used to automate different security platforms to create more efficient and streamlined investigation and remediation processes. Workshops have limited space and fill up quickly, so make sure to register for AnsibleFest 2020 so you can sign up for workshops!

 

What's Next?

  • Register today for the AnsibleFest 2020 virtual experience
  • Check out the session catalog to see what sessions to expect at the event

Developing and Testing Ansible Roles with Molecule and Podman - Part 2


Molecule is a complete testing framework that helps you develop and test Ansible roles, which allows you to focus on role content instead of focusing on managing testing infrastructure. In the first part of this series, we’ve successfully installed, configured and used Molecule to set up new testing instances.

Now that the instances are running, let’s start developing the new role and apply Molecule to ensure it runs according to the specifications.

This basic role deploys a web application supported by the Apache web server. It must support Red Hat Enterprise Linux (RHEL) 8 and Ubuntu 20.04.

 

Developing the Ansible Role with Molecule

Molecule helps in the development stage by allowing you to “converge” the instances with the role content. You can test each step without worrying about managing the instances and the test environment. It provides quick feedback, allowing you to focus on the role content and ensuring it works on all platforms.

In the first part of this series, we initialized a new role “mywebapp”. If you’re not there yet, switch to the role directory “mywebapp” and add the first task, installing the Apache package “httpd” using the “package” Ansible module. Edit the file “tasks/main.yml” and include this task:

$ vi tasks/main.yml
---
# tasks file for mywebapp
- name: Ensure httpd installed
  package:
    name: "httpd"
    state: present

Save the file and “converge” the instances by running “molecule converge”. The “converge” command applies the current version of the role to all the running container instances. Molecule “converge” does not restart the instances if they are already running. It tries to converge those instances by making their configuration match the desired state described by the role currently being tested.

$ molecule converge
... TRUNCATED OUTPUT ... 
   TASK [mywebapp : Ensure httpd installed] ***************************************
    Saturday 27 June 2020  08:45:01 -0400 (0:00:00.060)       0:00:04.277 *********
fatal: [ubuntu]: FAILED! => {"changed": false, "msg": "No package matching 'httpd' is available"}
    changed: [rhel8]
... TRUNCATED OUTPUT ... 

Notice that the current version worked well on the RHEL8 instance, but failed for the Ubuntu instance. By using Molecule, you can quickly evaluate the result of your tasks on all platforms and verify whether the role works according to the requirements! In this example, however, the task failed because Ubuntu does not have a package named “httpd”. For that platform, the package name is “apache2”.

So let’s modify the role to include variables with the correct package name for each platform. Start with RHEL8 by adding a file “RedHat.yaml” under the “vars” sub-directory with this content:

$ vi vars/RedHat.yaml
---
httpd_package: httpd

Save this file and add the corresponding file “vars/Debian.yaml” for Ubuntu:

$ vi vars/Debian.yaml
---
httpd_package: apache2

Save this file and modify the “tasks/main.yml” file to include these variable files according to the OS family identified by Ansible via the system fact variable “ansible_os_family”. We also have to include a task to update the package cache for systems in the “Debian” family, since their package manager requires a refreshed cache before installing packages. Finally, update the install task to use the variable “httpd_package” that you defined in the variables files:

$ vi tasks/main.yml
- name: Include OS-specific variables.
  include_vars: "{{ ansible_os_family }}.yaml"
- name: Ensure package cache up-to-date
  apt:
    update_cache: yes
    cache_valid_time: 3600
  when: ansible_os_family == "Debian"
- name: Ensure httpd installed
  package:
    name: "{{ httpd_package }}"
    state: present

Save this file, and “converge” the instances again to ensure it works this time:

$ molecule converge
... TRUNCATED OUTPUT ... 
   TASK [mywebapp : Ensure httpd installed] ***************************************
    Saturday 27 June 2020  08:59:13 -0400 (0:00:07.338)       0:00:12.925 *********
    ok: [rhel8]
    changed: [ubuntu]
... TRUNCATED OUTPUT ...

Because the package was already installed in the RHEL8 instance, Ansible returned the status “OK” and it did not make any changes. It installed the package correctly in the Ubuntu instance this time.

We have installed the package - but the naming problem also exists for the service itself: it is named differently on RHEL and Ubuntu. So let’s add a service name variable to each variables file. Start with RHEL8:

$ vi vars/RedHat.yaml
---
httpd_package: httpd
httpd_service: httpd

Save this file and then edit the file “vars/Debian.yaml” for Ubuntu:

$ vi vars/Debian.yaml
---
httpd_package: apache2
httpd_service: apache2

Save the file and add the new task at the end of the “tasks/main.yml” file:

$ vi tasks/main.yml
- name: Ensure httpd svc started
  service:
    name: "{{ httpd_service }}"
    state: started
    enabled: yes

Save the file and “converge” the instances again to start the Apache httpd service:

$ molecule converge
... TRUNCATED OUTPUT ... 
   TASK [mywebapp : Ensure httpd svc started] *************************************
    Saturday 27 June 2020  09:34:38 -0400 (0:00:06.776)       0:00:17.233 *********
    changed: [ubuntu]
    changed: [rhel8]
... TRUNCATED OUTPUT ...

Let’s add a final task to create some content for the web application. Each platform requires the HTML files to be owned by a different group. Add a new variable to each variables file to define the group name:

$ vi vars/RedHat.yaml
---
httpd_package: httpd
httpd_service: httpd
httpd_group: apache

Save this file then edit the file “vars/Debian.yaml” for Ubuntu:

$ vi vars/Debian.yaml
---
httpd_package: apache2
httpd_service: apache2
httpd_group: www-data

Save the file and add the new task at the end of the “tasks/main.yml” file:

$ vi tasks/main.yml
- name: Ensure HTML Index
  copy:
    dest: /var/www/html/index.html
    mode: 0644
    owner: root
    group: "{{ httpd_group }}"
    content: "{{ web_content }}"

This task allows the role user to specify the content by using the variable “web_content” when calling the role. Add a default value to this variable in case the user does not specify it:

$ vi defaults/main.yml
---
# defaults file for mywebapp
web_content: There's a web server here

Save this file and “converge” the instances one more time to add the content:

$ molecule converge
... TRUNCATED OUTPUT ... 
   TASK [mywebapp : Ensure HTML Index] ********************************************
    Saturday 27 June 2020  09:50:11 -0400 (0:00:03.261)       0:00:17.753 *********
    changed: [rhel8]
    changed: [ubuntu]
... TRUNCATED OUTPUT ...

At this time, both instances are converged. Manually verify that the role worked by using the molecule login command to log into one of the instances and running the “curl” command to get the content:

$ molecule login -h rhel8
[root@2ce0a0ea8692 /]# curl http://localhost
There's a web server here 
[root@2ce0a0ea8692 /]# exit

You used Molecule to aid with the role development, ensuring that it works properly across multiple platforms at each step of the way.

Next, let’s automate the verification process.

 

Verifying the Role with Molecule

In addition to helping you converge the instances to aid with role development, Molecule can also automate the testing process by executing a verification task. To verify the results of your playbook, Molecule can use either the “testinfra” framework or Ansible itself.
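
For reference, the verifier is selected in the scenario configuration file “molecule/default/molecule.yml”. A minimal sketch of that section, assuming the rest of the file is as generated in the first part of this series, looks like this:

verifier:
  name: ansible

Switching the name to “testinfra” would run Python-based tests instead; the rest of this post assumes the Ansible verifier.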

Let’s use an Ansible Playbook to verify the results of this new role. By default, Molecule provides a basic verifier playbook “molecule/default/verify.yml” as a starting point. This playbook contains the basic required structure but does not do any useful verification. Update this playbook to test the role results by using Ansible’s “uri” module to obtain the content from the running web server and the “assert” module to ensure it’s the correct content:

$ vi molecule/default/verify.yml 
---
# This is an example playbook to execute Ansible tests.
- name: Verify
  hosts: all
  vars:
    expected_content: "There's a web server here"
  tasks:
  - name: Get index.html
    uri:
      url: http://localhost
      return_content: yes
    register: this
    failed_when: "expected_content not in this.content"
  - name: Ensure content type is text/html
    assert:
      that:
      - "'text/html' in this.content_type"
  - name: Debug results
    debug:
      var: this.content

Save and close this file. Verify the results by running “molecule verify”:

$ molecule verify
... TRUNCATED OUTPUT ... 
   TASK [Ensure content type is text/html] ****************************************
    Saturday 27 June 2020  10:03:18 -0400 (0:00:03.131)       0:00:07.255 *********
    ok: [rhel8] => {
        "changed": false,
        "msg": "All assertions passed"
    }
    ok: [ubuntu] => {
        "changed": false,
        "msg": "All assertions passed"
    }
... TRUNCATED OUTPUT ... 
Verifier completed successfully.

Molecule runs the verifier playbook against all instances ensuring the results match the expected values.

You can change the default values for the test by editing the converge playbook to update the “web_content” variable:

$ vi molecule/default/converge.yml
---
- name: Converge
  hosts: all
  tasks:
    - name: "Include mywebapp"
      include_role:
        name: "mywebapp"
      vars:
         web_content: "New content for testing only"

Then, update the “expected_content” variable in the verifier playbook:

$ vi molecule/default/verify.yml 
---
# This is an example playbook to execute Ansible tests.
- name: Verify
  hosts: all
  vars:
    expected_content: "New content for testing only"
  tasks:

Converge the instances one more time to update the web server content, then verify the results:

$ molecule converge
... TRUNCATED OUTPUT ... 
   TASK [mywebapp : Ensure HTML Index] ********************************************
    Saturday 27 June 2020  10:09:34 -0400 (0:00:03.331)       0:00:19.607 *********
    changed: [rhel8]
    changed: [ubuntu]
... TRUNCATED OUTPUT ... 
$ molecule verify
... TRUNCATED OUTPUT ... 
   TASK [Debug results] ***********************************************************
    Saturday 27 June 2020  10:10:15 -0400 (0:00:00.299)       0:00:10.142 *********
    ok: [rhel8] => {
        "this.content": "New content for testing only"
    }
    ok: [ubuntu] => {
        "this.content": "New content for testing only"
    }
... TRUNCATED OUTPUT ... 
Verifier completed successfully.

Using the verifier, you can define a playbook to execute checks and ensure the role produces the required results.

In the final step, let’s put it all together with automated tests.

 

Automating the Complete Test Workflow

Now that all of the pieces are in place, you can automate the complete testing workflow using the command “molecule test”.

Unlike “molecule converge”, which aids with role development, the goal of “molecule test” is to provide an automated and repeatable environment to ensure the role works according to its specifications. Therefore, the test process destroys and re-creates the instances for every test.

By default, “molecule test” executes these steps in order:

  1. Install required dependencies
  2. Lint the project
  3. Destroy existing instances
  4. Run a syntax check
  5. Create instances
  6. Prepare instances (if required)
  7. Converge instances by applying the role tasks
  8. Check the role for idempotence
  9. Verify the results using the defined verifier
  10. Destroy the instances

You can change these steps by adding the “test_sequence” dictionary with the required steps to the Molecule configuration file. For additional information, consult the official documentation.
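
As an illustration only, a trimmed-down sequence could be declared in “molecule/default/molecule.yml” like the sketch below; the exact steps you keep are up to you:

scenario:
  test_sequence:
    - destroy
    - create
    - converge
    - idempotence
    - verify
    - destroy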

Execute the test scenario:

$ molecule test
... TRUNCATED OUTPUT ... 
--> Test matrix
└── default
    ├── dependency
    ├── lint
    ├── cleanup
    ├── destroy
    ├── syntax
    ├── create
    ├── prepare
    ├── converge
    ├── idempotence
    ├── side_effect
    ├── verify
    ├── cleanup
    └── destroy
    
--> Scenario: 'default'
... TRUNCATED OUTPUT ... 

If the test workflow fails at any point, the command returns a non-zero status code. You can use that return code to automate the process or integrate Molecule with CI/CD workflows.
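
For example, a minimal CI step might simply run the scenario and let the shell propagate the exit code; the role path below is illustrative:

#!/bin/sh
set -e                  # abort the pipeline on any non-zero exit code

cd roles/mywebapp       # illustrative path to the role under test
molecule test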

 

Conclusion

Now that you’ve successfully applied Molecule to develop and test a role that is well written and works reliably across different environments, you can integrate it into your development cycle to produce high standard roles consistently without worrying about the testing infrastructure.

Molecule helps during the role development process by providing constant feedback, which ensures your role works as designed each step of the way.

For more advanced scenarios, Molecule supports additional drivers that allow you to test roles with different platforms, virtualization and cloud providers.

Finally, you can integrate Molecule with CI/CD workflows to automate the complete testing process for your Ansible roles.

For more information about Molecule and Ansible, consult the following resources:

*Red Hat provides no expressed support claims to the correctness of this code. All content is deemed unsupported unless otherwise specified

Introducing the VMware REST Ansible Content Collection


The VMware Ansible modules as part of the current community.vmware Collection are extremely popular. According to GitHub, it's the second most forked Collection[1], just after community.general. The VMware modules and plugins for Ansible have benefited from a stream of contributions from dozens of users. Many IT infrastructure engineers rely on managing their VMware infrastructure by means of a simple Ansible Playbook. The vast majority of the current VMware modules are built on top of a dependent Python library called pyVmomi, the Python bindings for the SOAP-based vSphere API.

 

Why a new VMware Ansible Content Collection?

VMware introduced the vSphere REST API with vSphere 6.0 and later releases, and it will likely replace the existing SOAP SDK used in the community.vmware Collection.

Since the REST API’s initial release, vSphere support for the REST API has only improved. Furthermore, there is no longer a need for any dependent Python packages. Rather than reworking the existing VMware modules in the community.vmware Collection, a new set of modules built specifically for interacting with the VMware REST API is now available in the newly created vmware.vmware_rest Collection.

If you compare the modules that use the VMware vSphere (SOAP) API to the ones using the REST API, you’ll notice the REST API modules are not yet feature complete, as this is an early release of the Collection. For example, there is currently no way to create a cluster or a folder using the modules in the vmware.vmware_rest Collection, but the REST API already provides everything needed to manage VMware guests, with more coverage planned as the Collection evolves.

 

Using the VMware REST API

In order to understand how the new modules function against the new REST API, let’s take a look at the REST API itself first. For example, the com.vmware.vcenter.vm.power API endpoint changes the power state of a VM. It's equivalent to the following sample URL: https://vcenter.test/rest/vcenter/vm/$vm/power
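
As a quick illustration (not required when using the Ansible modules themselves), a VM could be powered off directly against that endpoint with a call along these lines, assuming an authenticated session ID in $SESSION and a VM MORef ID in $VM:

curl -k -X POST \
  -H "vmware-api-session-id: $SESSION" \
  "https://vcenter.test/rest/vcenter/vm/$VM/power/stop"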

With the vCenter 7.0 release, 723 total REST endpoints are exposed, which can be discovered using the following curl command:

$ curl -k https://vcenter.test/rest/com/vmware/vapi/metadata/cli/command|jq -r ".[][].path"|uniq|wc -l
723

The VMware REST APIs are documented in the Swagger 2.0 format. You can find the JSON files on your vCenter node in the following directory path:

root@vcenter [ /etc/vmware-vapi/apiexplorer/json ]# ls -lh
total 3.3M
-rw-r--r-- 1 vapiEndpoint users  145 Aug 31 15:37 api.json
-rw-r--r-- 1 vapiEndpoint users 396K Aug 31 15:36 appliance.json
-rw-r--r-- 1 vapiEndpoint users 153K Aug 31 15:36 cis.json
-rw-r--r-- 1 vapiEndpoint users 272K Aug 31 15:37 content.json
-rw-r--r-- 1 vapiEndpoint users 395K Aug 31 15:36 esx.json
-rw-r--r-- 1 vapiEndpoint users 153K Aug 31 15:36 stats.json
-rw-r--r-- 1 vapiEndpoint users 176K Aug 31 15:37 vapi.json
-rw-r--r-- 1 vapiEndpoint users 1.8M Aug 31 15:36 vcenter.json

To summarize, the vmware.vmware_rest Collection has all these REST endpoints ready to be consumed with the descriptions in a well documented format.

 

Building the vmware_rest  Collection

The modules contained in this Collection are generated using a tool called vmware_rest_code_generator, which was developed and open sourced by the Ansible team. It loads the Swagger files and then auto-generates a module for each resource, producing more than 300 modules this way. You’ll notice that not every module has been released to the Collection. To start small, we are only generating modules for a subset of the exposed endpoints: those associated with guest management use cases. We may expand and extend the number of modules over time.

 

Using the vmware_rest Collection

The following tasks retrieve the list of existing VMs, shut them down, and then delete them:

- name: Collect the list of the existing VM
  vcenter_vm_info:
  register: existing_vms
  until: existing_vms is not failed

- name: Turn off the VM
  vcenter_vm_power:
    state: stop
    vm: '{{ item.vm }}'
  with_items: "{{ existing_vms.value }}"
  ignore_errors: yes

- name: Delete some VM
  vcenter_vm:
    state: absent
    vm: '{{ item.vm }}'
  with_items: "{{ existing_vms.value }}"

Refer to the following gist file for more information: https://gist.github.com/goneri/6afd05397390cf5a0976f3611814949a

 

Downloading the vmware_rest Collection

The goal of this early release is to get as much community feedback as possible.

The Collection is available on Ansible Galaxy, and requires the following:

  • Ansible 2.9 or later 
  • Python 3.6 or later
  • The aiohttp package

Use the ansible-galaxy command to retrieve the Collection:

# ansible-galaxy collection install vmware.vmware_rest

If you use a virtualenv, you can install aiohttp with the following command:

# pip install aiohttp

Otherwise, you will need to download and install the python3-aiohttp package.

To read the module documentation, use the ansible-doc command. For example, to read documentation for the vcenter_cluster_info module, refer to the following command:

# ansible-doc -t module vmware.vmware_rest.vcenter_cluster_info

 

vCenter-Managed Object Reference ID

If you are already using the community.vmware Collection, the main difference is that the newer modules rely on the MORef ID to identify the elements instead of the name of the object. For example, if the user creates a datacenter called dc1, the MORef ID using the new modules will be datacenter-2. The community.vmware Collection uses the name and the folder instead.

By using the MORef ID directly, the module is able to interact with the resource without any time consuming preliminary look up.
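
If you only know a VM by its name, one way to resolve its MORef ID is to filter the vcenter_vm_info output registered earlier, as in this sketch (the VM name "myvm" is illustrative):

- name: Resolve the MORef ID of a VM from its name
  set_fact:
    myvm_moref: "{{ (existing_vms.value | selectattr('name', 'equalto', 'myvm') | list | first).vm }}"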

 

How can I contribute?

Because the modules are auto-generated, GitHub pull requests should be raised against the code generator itself, not against the resulting Collection contents.

Don't hesitate to report any issues on the GitHub project at https://github.com/ansible-collections/vmware_rest/issues.

 

Learn more!

Come hear from the Automation developers at AnsibleFest 2020, which is a free virtual event this year. Specifically, to learn more about VMware, check out the talk entitled “Manage your guests with REST-based modules for VMware vSphere” by my colleague Abhijeet Kasurde, covering all things cloud. We hope to see you there!

 

Reference:

1. The forks per collection can be found programmatically by accessing the Github API: https://api.github.com/orgs/ansible-collections/repos.  This can be sorted, for example:

curl -s https://api.github.com/orgs/ansible-collections/repos|jq -r -c --sort-keys '.|sort_by(.forks)|reverse|.[]|[.name, .forks]'

Customer Spotlights at AnsibleFest 2020


AnsibleFest 2020 will be here before we know it, and we cannot wait to connect with everyone in October. We have some great content lined up for this year’s virtual experience and that includes some amazing customer spotlights. This year you will get to hear from CarMax, Blue Cross Blue Shield of NC, T-Mobile, PRA International and CEPSA. These customers are using Ansible in a variety of ways, and we hope you connect to their incredible stories of teamwork and transformative automation.

 

Customer Spotlights

Benjamin Blizard, a Network Engineer at T-Mobile, will explore how T-Mobile transformed from a disparate organization with difficulty enforcing standards to a collaborative group of engineers working from repeatable templates and processes. T-Mobile, a major telecommunications provider, uses Ansible Automation Platform to standardize processes across their organization. Ben will show how automation supports T-Mobile’s compliance standards, data integrity, and produces speed and efficiency for network teams. 

 

What Next?

Join us for AnsibleFest 2020 to hear more customers like T-Mobile talk about their automation journeys. Make sure to register today and check out the session catalog that lists all the content we have prepared for you this year. We look forward to connecting with you Oct. 13-14.

The Network CLI is Dead, Long Live XML! (just kidding, it’s an Ansible+NETCONF+YANG Deep Dive)


Now that I've startled you, no, the network CLI isn’t going away anytime soon, nor are people going to start manipulating XML directly for their network configuration data. What I do want to help you understand is how Ansible can now be used as an interface for automating the pushing and pulling of configuration data (via NETCONF) in a structured way (via YANG data models) without having to truly learn either of these complex concepts. All you have to understand is how to use the Ansible Content Collection shown below, which abstracts away the technical implementation details that have burdened network operators and engineers for years.

 

Setting the stage

Before we even start talking about NETCONF and YANG, our overall goal is for the network to leverage configuration data in a structured manner. This makes network automation much more predictable and reliable when ensuring operational state. NETCONF and YANG are the low-level pieces of the puzzle, but we are making them easier to use via well-known Ansible means and methodologies.

What we believe as Ansible developers is that NETCONF and YANG aren't (and shouldn't be) the ultimate goals for network automation engineers. You should not need to memorize complex YANG data models or NETCONF RPC commands. Those should be implementation details that are already solved elsewhere, such as in Ansible. The real goal is to treat network configuration data as platform-agnostic key-value pairs in a standardized schema. NETCONF and YANG fit the bill here, with Ansible as the primary interface for users.

Here’s the problem though: NETCONF and YANG are incredibly complex and somewhat unapproachable concepts for network engineers, so their adoption is still somewhat rare. If we can take NETCONF and YANG and make Ansible the presentation layer for users, this becomes much more approachable to network engineers who may not be able to (or have the desire to) become Python programmers.

So let’s do this in an Ansible “friendly” way, using Ansible content such as Ansible modules, roles and plugins wrapped up into a Collection. You don’t have to worry about the YANG model itself or the actual XML that’s being passed. It can all be represented by YAML.

 

Types of YANG Implementations

Although the YANG schema is indeed a standard, how the data itself is represented may vary depending on the implementation method. Therefore, there are YANG models that are defined and maintained by specific network vendors, and models that are defined and maintained by the community that are vendor neutral.

  • Vendor defined: models published and maintained by individual network vendors for their own platforms
  • Vendor neutral: community-maintained models such as OpenConfig: https://github.com/openconfig/public/tree/master/release/models

Therefore, it can become extremely difficult to manage multiple YANG implementations depending on the platform being automated. The goal is to leverage Ansible to standardize and normalize the management of the YANG data models on behalf of the user.

 

How does NETCONF factor in?

The NETCONF protocol is an IETF standard for managing configuration as well as retrieving state data from a NETCONF-enabled device. It uses SSH as the transport to send RPC requests and receives the responses in XML format. The protocol defines standard RPC operations like edit-config, get, get-config, copy-config, delete-config, lock, unlock and many others to manage the remote device. Ansible has supported NETCONF as a standard connection method for quite some time, and the connection plugin is now provided by the ansible.netcommon Collection for Ansible 2.9.10 and later.
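
As a reminder of how that connection is selected, the group variables for a NETCONF-managed IOS XR device might look like the following sketch; the credentials and the network_os value are illustrative only:

# group_vars/iosxr.yml - illustrative values
ansible_connection: ansible.netcommon.netconf
ansible_network_os: cisco.iosxr.iosxr
ansible_user: admin
ansible_password: admin
ansible_port: 830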

 

YANG to NETCONF mapping

[Figure: hierarchy of a YANG model mapped to the corresponding NETCONF XML RPC payload]

The above figure represents how the hierarchy of the YANG model maps to that of the NETCONF XML RPC. The <edit-config> and <running> tags represent the RPC request to edit the running datastore with the configuration given in the XML payload on the device.

The data hierarchy in the payload, starting with the interfaces tag followed by the interface tag, is defined in the YANG model: interfaces is defined as a container node, and interface is its child list node (since there are multiple interfaces to be configured).

Thus, by parsing the YANG model, the Network Management System (NMS) can programmatically generate and/or validate the NETCONF XML RPC structure to be sent to the device.

 With this context, let's now move on to talk about the Ansible YANG Content Collection. While developing this Collection, we adhered to the following requirements:

  • Functions with all variants of YANG models
  • Abstracts YANG related complexities from the user via Ansible modules
  • Uses structured data in JSON format as input/output
  • Fetches YANG models from a network appliance at runtime (if supported)
  • Renders skeleton structured data (JSON, XML, YANG tree) for given YANG model

Based on these guidelines, let’s go through the modules supported in this Collection.

 

The community.yang.fetch module

This module fetches the YANG model and its dependent YANG models from the device using the NETCONF get schemas capability, if supported, and optionally stores the YANG files locally on disk.

The task to get the list of supported YANG models on the remote appliance:

- name: Fetch the list of supported yang model names
  community.yang.fetch:

Example: https://gist.github.com/ganeshrn/f45d34602058aa5eeacc188b14f05206

The output of this task, run against Cisco IOS XR version 6.1.3, can be found in the linked gist file.

To fetch a given YANG model, identified by the name option, along with its dependencies, refer to the following example:

- name: Fetch the given yang model and its dependencies from the remote host
  community.yang.fetch:
    name: Cisco-IOS-XR-ifmgr-cfg
    dir: "{{ playbook_dir }}/{{ inventory_hostname }}/yang_files"
  register: result

Example: https://gist.github.com/ganeshrn/b03ebddce9579ea8749a7889d638a2a5

The fetched YANG models are stored at the location given by the dir option.

ls iosxr01/yang_files/
Cisco-IOS-XR-ifmgr-cfg.yang     Cisco-IOS-XR-types.yang

Fetch all the YANG models supported by the remote host by assigning the all value to the name option:

- name: Fetch all the yang models and store them in the dir location
  community.yang.fetch:
    name: all
    dir: "{{ playbook_dir }}/{{ inventory_hostname }}/yang_files"
  register: result

Example: https://gist.github.com/ganeshrn/d82648f74cd217b724cf0ecb7ddeaa94

The YANG models are fetched and stored at the location given by the dir option.

 

The community.yang.get module

This module uses the NETCONF get RPC to fetch configuration data from the remote host and render it in JSON format (as per RFC7951 which defines JSON encoding of data modelled with YANG).

To fetch a subset of the running configuration, render it in JSON format and save it to a file, use the tasks below:

- name: get interface configuration using cisco iosxr yang model
  community.yang.get:
    filter: |
      <interface-configurations xmlns="http://cisco.com/ns/yang/Cisco-IOS-XR-ifmgr-cfg">
        <interface-configuration>
          <interface-name>GigabitEthernet0/0/0/0</interface-name>
        </interface-configuration>
      </interface-configurations>
    file: "{{ playbook_dir }}/{{inventory_hostname}}/yang_files/Cisco-IOS-XR-ifmgr-cfg.yang"
    search_path: "{{ playbook_dir }}/{{inventory_hostname}}/yang_files"
  register: result

- name: copy json config to file
  copy:
    content: "{{ result['json_data'] | to_nice_json }}"
    dest: "{{playbook_dir}}/{{inventory_hostname}}/config/interfaces.json"

Example: https://gist.github.com/ganeshrn/3fe905aef0556fbfa7b8c2e466d1c0c5

The filter option in the community.yang.get task refers to the subset of the configuration that should be fetched from the remote host; it currently supports XML format and will soon support JSON format as well. The filter structure can be derived from the output of the community.yang.generate_spec task, which I’ll explain later. The file option corresponds to the YANG model that the configuration adheres to. The search_path points to the directory location in which all the dependent YANG files are stored. The copy task copies the retrieved configuration in JSON format into the file path provided by the dest option.

After the task is run, the sample contents of the interfaces.json file are:

$cat iosxr01/config/interfaces.json

{
   "Cisco-IOS-XR-ifmgr-cfg:interface-configurations": {
       "interface-configuration": [
           {
               "active": "act",
               "description": "manually configured",
               "interface-name": "GigabitEthernet0/0/0/0"
           }
       ]
   }
}

 

The community.yang.configure module

This module takes the JSON configuration as input (as per RFC 7951, which defines JSON encoding of data modelled with YANG), pre-validates the config with the corresponding YANG model, and converts input JSON configuration to XML payload to be pushed on the remote host using the netconf_config module.

To demonstrate this task, we will first modify the interface config fetched using the community.yang.get task earlier:

$cat iosxr01/config/interfaces.json

{
   "Cisco-IOS-XR-ifmgr-cfg:interface-configurations": {
       "interface-configuration": [
           {
               "active": "act",
               "description": "configured using Ansible YANG collection",
               "interface-name": "GigabitEthernet0/0/0/0"
           }
       ]
   }
}

After updating the file, push the configuration using the configure module:

- name: configure interface using structured data in JSON format
  community.yang.configure:
    config: "{{ lookup('file', './iosxr01/config/interfaces.json') | to_json }}"
    file: "{{ playbook_dir }}/{{inventory_hostname}}/yang_files/Cisco-IOS-XR-ifmgr-cfg.yang"
    search_path: "{{ playbook_dir }}/{{inventory_hostname}}/yang_files"
  register: result

Example: https://gist.github.com/ganeshrn/89137bf3303aaea3e8907e1b638f0eba

The config option accepts the value in JSON format, and the file lookup plugin is used to read the content of the updated file. The file option corresponds to the YANG model that the configuration adheres to. The search_path points to the directory location in which all the dependent YANG files are stored. This task reads the YANG file and pre-validates the value of the config option against the constraints defined in the YANG model before the configuration is pushed to the device.

The combination of community.yang.get and community.yang.configure is particularly useful in brownfield deployments to retrieve the current running configuration on the device, update it to the required values and push back onto the device.

 

The community.yang.generate_spec module

Handcrafting JSON and/or XML data manually is not straightforward, nor is it an activity that is realistic for an end user. The user should be able to easily generate the configuration data structure from a given YANG model, along with the corresponding JSON schema, XML schema and YANG tree representation (as per RFC 8340) of the model. This is particularly useful if the given hierarchy is not yet configured on the device or it is a greenfield deployment.

To generate the reference JSON, XML, and YANG tree schema, run the following task:

- name: generate spec from cisco iosxr interface config data and store it in file
  community.yang.generate_spec:
    file: "{{ playbook_dir }}/{{inventory_hostname}}/yang_files/Cisco-IOS-XR-ifmgr-cfg.yang"
    search_path: "{{ playbook_dir }}/{{inventory_hostname}}/yang_files"
    json_schema:
      path: "{{ playbook_dir }}/{{ inventory_hostname }}/spec/Cisco-IOS-XR-ifmgr-cfg.json"
    xml_schema:
      path: "{{ playbook_dir }}/{{ inventory_hostname }}/spec/Cisco-IOS-XR-ifmgr-cfg.xml"
      defaults: True
    tree_schema:
      path: "{{ playbook_dir }}/{{ inventory_hostname }}/spec/Cisco-IOS-XR-ifmgr-cfg.tree"

Example: https://gist.github.com/ganeshrn/f52b984a106247e6e3e843be4a3cc7c2

The file option corresponds to the YANG model for which the JSON, XML and YANG tree schemas should be generated. The search_path points to the directory location in which all the dependent YANG files are stored. The optional path option within the json_schema, xml_schema and tree_schema options identifies the file where the generated schema is to be stored. When the defaults option is set to true under json_schema or xml_schema, the schemas are generated with the default values for data as defined in the YANG model.

After running the task, the sample contents within the files are:

Resources and Getting Started

Getting Started With AWS Ansible Module Development and Community Contribution


We often hear from cloud admins and developers that they’re interested in giving back to Ansible and using their knowledge to benefit the community, but they don’t know how to get started.  Lots of folks may even already be carrying new Ansible modules or plugins in their local environments, and are looking to get them included upstream for broader use.

Luckily, it doesn’t take much to get started as an Ansible contributor. If you’re already using the Ansible AWS modules, there are many ways to use your existing knowledge, skills and experience to contribute. If you need some ideas on where to contribute, take a look at the following:

  • Creating integration tests: Creating missing tests for modules is a great way to get started, and integration tests are just Ansible tasks!
  • Module porting: If you’re familiar with the boto3 Python library, there’s also a backlog of modules that need to be ported from boto2 to boto3.
  • Repository issue triage: And of course there’s always open Github issues and pull requests. Testing bugs or patches and providing feedback on your use cases and experiences is very valuable.

The AWS Ansible Content Collections

Starting with Ansible 2.10, the AWS modules have been migrated out of the Ansible GitHub repo and into two new Collection repositories.

The Ansible-maintained Collection, (amazon.aws) houses the modules, plugins, and module utilities that are managed by the Ansible Cloud team and are included in the downstream Red Hat Ansible Automation Platform product.

The Community Collection (community.aws) houses the modules and plugins that are supported by the Ansible community.  New modules and plugins developed by the community should be proposed to community.aws. Content in this Collection that is stable and meets other acceptance criteria has the potential to be promoted and migrated into amazon.aws.

For more information about how to contribute to any of the Ansible-maintained Collections, including the AWS Collections, refer to the Contributing to Ansible-maintained Collections section on docs.ansible.com.

 

AWS module development basics

For starters, make sure you’ve read the Guidelines for Ansible Amazon AWS module development section of the Ansible Developer Guide.  Some things to keep in mind:

If the module needs to poll an API and wait for a particular status to be returned before proceeding, add a waiter to the waiters.py file in the amazon.aws collection rather than writing a loop inside your module. For example, the ec2_vpc_subnet module supports a wait parameter. When true, this instructs the module to wait for the resource to be in an expected state before returning. The module code for this looks like the following:

if module.params['wait']:
    handle_waiter(conn, module, 'subnet_exists', {'SubnetIds': [subnet['id']]}, start_time)

And the corresponding waiter:

        "SubnetExists": {
            "delay": 5,
            "maxAttempts": 40,
            "operation": "DescribeSubnets",
            "acceptors": [
                {
                    "matcher": "path",
                    "expected": True,
                    "argument": "length(Subnets[]) > `0`",
                    "state": "success"
                },
                {
                    "matcher": "error",
                    "expected": "InvalidSubnetID.NotFound",
                    "state": "retry"
                },
            ]
        },

This polls the EC2 API for describe_subnets(SubnetIds=[subnet['id']]) until the list of returned Subnets is greater than zero before proceeding. If an error of InvalidSubnetID.NotFound is returned, this is an expected response and the waiter code will continue.

Use paginators when boto returns paginated results and build the result from the .build_full_result() method of the paginator, rather than writing loops.
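
A minimal sketch of that pattern, assuming a boto3 EC2 client and an illustrative vpc_id filter, looks like this:

# Build the complete result set without writing a pagination loop
paginator = client.get_paginator('describe_subnets')
result = paginator.paginate(
    Filters=[{'Name': 'vpc-id', 'Values': [vpc_id]}],
).build_full_result()
subnets = result['Subnets']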

Be sure to handle both ClientError and BotoCoreError in your except blocks.

except (botocore.exceptions.ClientError, botocore.exceptions.BotoCoreError) as e:
    module.fail_json_aws(e, msg="Couldn't create subnet")

All new modules should support check_mode if at all possible.

Ansible strives to provide idempotency. Sometimes though, this is inconsistent with the way that AWS services operate. Think about how users will interact with the service through Ansible tasks, and what will happen if they run the same task multiple times.  What API calls will be made?  What changed status will be reported by Ansible on subsequent task executions?

Whenever possible, avoid hardcoding data in modules. Sometimes it’s unavoidable, but if your contribution includes a hardcoded list of instance types or a hard-coded partition, this will likely be brought up in code review - for example, arn:aws: will not match the GovCloud or China regions, and your module will not work for users in these regions. If you’ve already determined there’s no reasonable way to avoid hard-coding something, please mention your findings in the pull request.

 

Module Utilities

There’s a substantial collection of module_utils available for working with AWS located in the amazon.aws collection:

$ ls plugins/module_utils/
acm.py  batch.py  cloudfront_facts.py  cloud.py  core.py  direct_connect.py  ec2.py  elb_utils.py  elbv2.py  iam.py  __init__.py  rds.py  s3.py  urls.py  waf.py  waiters.py

Of particular note, module_utils/core.py contains AnsibleAWSModule(), which is the required base class for all new modules. This provides some nice helpers like client() setup, the fail_json_aws() method (which will convert boto exceptions into nice error messages and handle error message type conversion for Python2 and Python3), and the class will handle boto library import checks for you.
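
To make that concrete, here is a minimal, hypothetical module skeleton built on AnsibleAWSModule; the argument spec and the describe_vpcs() call are illustrative only and do not correspond to a real module in either Collection:

#!/usr/bin/python
try:
    import botocore
except ImportError:
    pass  # the import check is handled by AnsibleAWSModule

from ansible_collections.amazon.aws.plugins.module_utils.core import AnsibleAWSModule


def main():
    module = AnsibleAWSModule(
        argument_spec=dict(
            name=dict(type='str', required=True),
        ),
        supports_check_mode=True,
    )

    client = module.client('ec2')  # boto3 client with retry handling configured

    try:
        response = client.describe_vpcs()
    except (botocore.exceptions.ClientError, botocore.exceptions.BotoCoreError) as e:
        module.fail_json_aws(e, msg="Couldn't describe VPCs")

    module.exit_json(changed=False, vpcs=response.get('Vpcs', []))


if __name__ == '__main__':
    main()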

AWS APIs tend to use and return Camel case values, while Ansible prefers Snake case.  Helpers for converting between these are available in amazon.aws.module_utils.ec2, including ansible_dict_to_boto3_filter_list(), boto3_tag_list_to_ansible_dict(), and a number of tag and policy related functions.

 

Integration Tests

The AWS Collections primarily rely on functional integration tests to exercise module and plugin code by creating, modifying, and deleting resources on AWS. Test suites are located in the Collection repository that contains the module being tested.  The preferred style is a role named for the module, with one test suite per module. Sometimes it makes sense to combine the tests for more than one module into a single test suite, such as when a tightly coupled service dependency exists. These will generally be named for the primary module or service being tested.  For example, *_info modules may share a test with the service they provide information for. An aliases file in the root of the test directory controls various settings, including which tests are aliased to that test role.

tests/integration/targets/ecs_cluster$ ls
aliases  defaults  files  meta  tasks

tests/integration/targets/ecs_cluster$ cat aliases 
cloud/aws
ecs_service_info
ecs_task
ecs_taskdefinition
ecs_taskdefinition_info
unsupported

In this case, several modules are combined into one test, because an ecs_cluster must be created before an ecs_taskdefinition can be created. There is a strong dependency here.

You may also notice that ECS is not currently supported in the Ansible CI environment.  There are a few reasons that could be, but the most common one is that we don’t allow unrestricted resource usage in the CI AWS account. We have to create IAM policies that allow the minimum possible access for the test coverage. Other reasons for tests being unsupported might be because the module needs resources that we don’t have available in CI, such as a federated identity provider. See the CI Policies and Terminator Lambda section below for more information.

Another test suite status you might see is unstable. That means the test has been observed to have a high rate of transient failures. Common reasons include needing to wait for the resource to reach a given state before proceeding or tests taking too long to run and exceeding the test timer. These may require refactoring of module code or tests to be more stable and reliable. Unstable tests only get run when the module they cover is modified and may be retried if they fail. If you find you enjoy testing, this is a great area to get started in!

Integration tests should generally check the following tasks or functions both with and without check mode:

  • Resource creation
  • Resource creation again (idempotency)
  • Resource modification
  • Resource modification again (idempotency)
  • Resource deletion
  • Resource deletion (of a non-existent resource)

Use module_defaults for credentials when creating your integration test task file, rather than duplicating these parameters for every task. Values specified in module_defaults can be overridden per task if you need to test how the module handles bad credentials, missing region parameters, etc.

- name: set connection information for aws modules and run tasks
  module_defaults:
    group/aws:
      aws_access_key: "{{ aws_access_key }}"
      aws_secret_key: "{{ aws_secret_key }}"
      security_token: "{{ security_token | default(omit) }}"
      region: "{{ aws_region }}"

  block:

  - name: Test Handling of Bad Region
    ec2_instance:
      region: "us-nonexistent-7"
      ... params …

  - name: Do Something
    ec2_instance:
      ... params ...

  - name: Do Something Else
    ec2_instance:
      ... params ...

Integration tests should make use of blocks with test tasks in one or more blocks and a final always: block that deletes all resources created by the tests.
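
A minimal sketch of that structure, reusing the module_defaults pattern above (the security group name and the resource_prefix variable provided by ansible-test are illustrative), could look like this:

- module_defaults:
    group/aws:
      aws_access_key: "{{ aws_access_key }}"
      aws_secret_key: "{{ aws_secret_key }}"
      security_token: "{{ security_token | default(omit) }}"
      region: "{{ aws_region }}"

  block:
    - name: Create a security group for the test
      ec2_group:
        name: "{{ resource_prefix }}-sg"
        description: Integration test security group
        state: present

  always:
    - name: Delete the security group, even if earlier tasks failed
      ec2_group:
        name: "{{ resource_prefix }}-sg"
        state: absent
      ignore_errors: yes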

 

Unit Tests

While most modules are tested with integration tests, sometimes this is just not feasible.  An example is when testing AWS Direct Connect. The community.aws.aws_direct_connect* modules can be used to establish a network transit link between AWS and a private data center. This is not a task that can be done simply or repeatedly in a CI test system. For modules that cannot practically be integration tested, we do require unit tests for inclusion into any AWS Ansible Collection.  The placebo Python library provides a nice mechanism for recording and mocking boto3 API responses and is preferred to writing and maintaining AWS fixtures when possible.

 

CI Policies and Terminator Lambda

The Ansible AWS CI environment has safeguards and specific tooling to ensure resources are properly restricted, and that test resources are cleaned up in a reasonable amount of time. These tools live in the aws-terminator repository. There are three main sections of this repository to be aware of:

  1. The aws/policy/ directory
  2. The aws/terminator/ directory
  3. The hacking/ directory

The aws/policy/ directory contains IAM policies used by the Ansible CI service. We generally attempt to define the minimum AWS IAM Actions and Resources necessary to execute comprehensive integration test coverage. For example, rather than enabling ec2:*, we have multiple statement IDs (Sids) that specify different actions for different resource specifications.

We permit ec2:DescribeImages fairly broadly in the region our CI runs in:

    Resource:
      - "*"
    Condition:
      StringEquals:
        ec2:Region:
          - '{{ aws_region }}'

But we are more restrictive on which instance types can be started or run via CI:

  - Sid: AllowEc2RunInstancesInstanceType
    Effect: Allow
    Action:
      - ec2:RunInstances
      - ec2:StartInstances
    Resource:
      - arn:aws:ec2:us-east-1:{{ aws_account_id }}:instance/*
    Condition:
      StringEquals:
        ec2:InstanceType:
          - t2.nano
          - t2.micro
          - t3.nano
          - t3.micro
          - m1.large  # lowest cost instance type with EBS optimization supported

The aws/terminator/ directory contains the terminator application, which we deploy to AWS Lambda.  This acts as a cleanup service in the event that any CI job fails to remove resources that it creates.  Information about writing a new terminator class can be found in the terminator’s README.

The hacking/ directory contains a playbook and two sets of policies that are intended for contributors to use with their own AWS accounts.  The aws_config/setup-iam.yml playbook creates IAM policies and associates them with two IAM groups. These groups can then be associated with your own appropriate user:

  • ansible-integration-ci: This group mirrors the permissions used by the AWS Collections CI
  • ansible-integration-unsupported: The group assigns additional permissions on top of the 'CI' permissions required to run the 'unsupported' tests

Usage information to deploy these groups and policies to your AWS user is documented in the setup-iam.yml playbook.

 

Testing Locally

You’ve now written your code and your test cases, but you’d like to run your tests locally before pushing to GitHub and sending the change through CI.  Great!  You’ll need credentials for an AWS account and a few setup steps. 

Ansible includes a CLI utility to run integration tests.  You can either set up a boto profile in your environment or use a credentials config file to authenticate to AWS.  A sample config file is provided by the ansible-test application included with Ansible.  Copy this file to tests/integration/cloud-config-aws.ini in your local checkout of the collection repository and fill in your AWS account details for @ACCESS_KEY, @SECRET_KEY, @SECURITY_TOKEN, @REGION.

NOTE: Both AWS Collection repositories have a tests/.gitignore file that will ignore this file path when checking in code, but you should always be vigilant when storing AWS credentials to disk or in a repository directory.

If you already have Ansible installed  on your local machine, ansible-test should already be in your PATH.  If not, you can run it from a local checkout of the Ansible project.

git clone https://github.com/ansible/ansible.git
cd ansible/
source hacking/env-setup

You will also need to ensure that any Collection dependencies are installed and accessible in your COLLECTIONS_PATHS.  Collection dependencies are listed in the tests/requirements.yml file in the Collection and can be installed with the ansible-galaxy collection install command.
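
For example, something along these lines installs the listed dependencies into your default Collections path (the -p value is illustrative):

ansible-galaxy collection install -r tests/requirements.yml -p ~/.ansible/collections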

You can now run integration tests from the Collection repository:

cd ~/src/collections/ansible_collections/amazon/aws
ansible-test integration ec2_group

Tests that are unstable or unsupported will not be executed by default.  To run these types of tests, there are additional flags you can pass to ansible-test:

ansible-test integration ec2_group --allow-unstable  --allow-unsupported 

If you prefer to run the tests in a container, there is a default test image that ansible-test can automatically retrieve and run that contains the necessary Python libraries for AWS tests.  This can be pulled and run by providing the --docker flag.  (Docker must already be installed and configured on your local system.)

ansible-test integration ec2_group --allow-unstable  --allow-unsupported --docker

The test container image ships with all Ansible-supported versions of Python.  To specify a particular Python version, such as 3.7, test with:

ansible-test integration ec2_group --allow-unstable  --allow-unsupported --docker --python 3.7

NOTE: Integration tests will create real resources in the specified AWS account subject to AWS pricing for the resource and region.  Existing tests should make every effort to remove resources at the end of the test run, but make sure to check that all created resources are successfully deleted after executing a test suite to prevent billing surprises.  This is especially recommended when developing new test suites or adding new resources not already covered by the test’s always cleanup block.  

NOTE: Be cautious when working with IAM, security groups, and other access controls that have the potential to expose AWS account access or resources.

 

Submitting a Change

When your change is ready to submit, open a pull request (PR) in the GitHub repository for the appropriate AWS Collection.  Shippable CI will automatically run tests and report the results back to the PR.  If your change is for a new module or tests new AWS resources or actions, you may see permissions failures in the test.  In that case, you will also need to open a PR in the mattclay/aws-terminator repository to add IAM permissions and possibly a Terminator class to support testing the new functionality, as described in the CI Policies and Terminator Lambda section of this post.  Members of the Ansible AWS community will triage and review your contribution, and provide any feedback they have on the submission.  

 

Next Steps and Resources

Contributing to open source projects can be daunting at first, but hopefully this blog post provides a good technical resource on how to contribute to the AWS Ansible Collections. If you need assistance with your contribution along the way, you can find the Ansible AWS community on Freenode IRC in channel #ansible-aws.

Congratulations and welcome, you are now a contributor to the Ansible project!


AnsibleFest 2020 Live Q&A


We are less than a week away from AnsibleFest 2020! We can’t wait to connect with you and help you connect with other automation lovers. We have some great content lined up for this year’s virtual experience and that includes some amazing Live Q&A Sessions. This year, you will be able to get your questions answered from Ansible experts, Red Hatters and Ansible customers. Let’s dive into what you can expect. 

 

Tuesday, October 13

11am

Live Q&A: Get all your network automation questions answered with Brad Thornton, Iftikhar Khan and Sean Cavanaugh

In this session, a panel of experts discuss a wide range of use cases around network automation.  They will talk about the Red Hat Ansible Automation Platform and the product direction including Ansible Network Collections, resource modules and managing network devices in a GitOps model. Bring your questions for the architects and learn more about how Red Hat is helping organizations operationalize automation in their network while bridging gaps between different IT infrastructure teams.

 

Live Q&A: Bridging traditional, container, and edge platforms through automation with Joe Fitzgerald, Ashesh Badani, and Stefanie Chiras

Join this panel discussion, moderated by Kelly Fitzpatrick (Redmonk), to hear from Joe Fitzgerald, Ashesh Badani, and Stefanie Chiras about using automation to connect traditional, container, and edge platforms.

 

Live Q&A: Efficiently scaling automation across your organization

Automating the enterprise requires breaking down silos and empowering self-service. But how do you scale your automation efficiently across your organization? Join this "ask the expert" session where you will learn about the key training that can accelerate your automation use and how to advance your organization's goals through guided mentorship, and have the opportunity to ask a Red Hat expert technical and non-technical questions about enterprise automation.

12pm 

Live Q&A: Ansible Diversity & Inclusion working group with Bianca Henderson, Thanh Nguyet Vo, Alicia Cozine, Jill Rouleau and Carol Chen

The Diversity and Inclusion working group will answer your questions about D&I in the Ansible community and offer ways for you to get involved.

12:30pm 

Live Q&A: Common Connections - Automation in the Public Sector 

Automation is the cornerstone of cloud, datacenter consolidation and IT modernization.  No matter what aspect of IT you are engaging with, automating manual processes is proven to increase reliability by reducing manual errors, decrease the time to deliver IT assets, and is key to building self-service capabilities for your enterprise. More recently, we've seen an interesting confluence of technologies bringing automation to bear on solving issues of security scanning, remediation and documentation within the public sector.  This session is public sector focused, and you'll be able to ask the speakers any question, from beginner to advanced, on all things automation.

3:30pm

Live Q&A: Automation Architects and the automation strategy in today's organizations

The use of automation has emerged from a single task-based solution to a strategic imperative, where many companies are looking to implement full end-to-end automation strategies.  Along with this shift comes the need for an “Automation Architect” role to work with the business and the technology team to define and implement an automation strategy.  This session is an open Q&A session with Chad Ferman, an architect looking after automation for Exxon Mobil.

 

Live Q&A: Building Ansible Content Collections and using Automation Hub 

Get all your questions answered on building Ansible Content Collections, why they matter and how to share, consume, use and access content hosted in Automation Hub.



Wednesday, October 14

11am

Live Q&A: Open forum about connecting traditional, container and edge platforms through management and automation with Joe Fernandes, Dave Lindquist and Tom Anderson

Join Joe Fernandes, Dave Lindquist and Tom Anderson in an open discussion about delivering the next generation of enterprise infrastructure at scale with automation as the bridge and what Red Hat is doing to deliver on multi-cloud management and automation. 

 

Live Q&A: Open forum about the future of Automation with Ansible Engineers Jason McKerr, Aaron Withrow, Adam Miller and Richard Henshall

A panel of engineering leaders will discuss the future of Red Hat Ansible Automation Platform and how they are working to deliver a flexible platform for end-to-end automation. This panel will be moderated by Tim Cramer, Vice President, Software Engineering at Red Hat.

 

Live Q&A: Get all your security automation questions answered with Sumit Jaiswal, Faz Sadeghi and Roland Wolters

We have assembled a panel of experts to answer any of your questions as they relate to automating security across your security operations center. Some topics for discussion include the numerous integrations and certified collections with Red Hat's security partners, and how this can help you get started or optimize your security practice with Ansible security automation.  This panel can also answer questions relating to where your security team is on their automation journey, including how security teams can move from beginner, to automating Security Information and Event Management (SIEM), all the way to Security Orchestration, Automation, and Response (SOAR). Join us at 11am  EST on Wednesday, October 14 to engage with our panel and get all of your security automation questions answered.

12pm

Live Q&A: Applying open source principles to diversity and inclusion with Sam Knuth, Allie DeVolder and Koren Townsend

Open source principles apply to much more than code. Tune in as Red Hat Diversity and Inclusion Community leaders share how they used Red Hat’s open approach to foster inclusivity across the company and in their communities. Come with questions about how you can bring your open source expertise to inclusion initiatives that you feel passionate about and that would enhance diversity in your organization.

12:30pm

Live Q&A: Common Connections - Automation patterns and trends in the Telco vertical with ATT, Bell and Vodafone 

Ansible has an ever increasing footprint in the Telco industry. From automating backbone networks to provisioning services and making the life of operations teams a lot more efficient, Ansible has been a successful automation framework due to its short learning curve, its “batteries included” approach and its extensive community.

The panel will address experience with deploying Ansible and Ansible’s impact on cultural transformation within teams, in addition to emerging automation patterns and trends within the Telco industry. The panel will discuss CI/CD strategies and infrastructure as code enablement through Ansible. 

3:30pm 

Live Q&A: Red Hat Advanced Cluster Management for Kubernetes with Jimmy Alvarez, Loic Avenel and Jeff Brent

Do you have questions about managing your Kubernetes clusters? Want to learn more about how you can manage your cluster and application life cycle, along with security and compliance of your entire Kubernetes domain? Red Hat Advanced Cluster Management for Kubernetes provides a single view to manage your Kubernetes clusters—from Red Hat OpenShift deployed on premise and in public clouds, as well as clusters from public cloud providers like AWS, Google, IBM and Microsoft Azure. Join the Red Hat product team as they answer your questions around Kubernetes management.

 

Live Q&A: Automation in a hybrid cloud environment with John Wadleigh

As you move to a cloud platform, whether private or public, you may find that you are still supporting on-premises infrastructure and provisioning, and your application workloads are spread across both cloud and on-prem. Ansible Automation Platform provides a centralized place for management, but what is the role of dynamic inventory and how have recent enhancements to Ansible Tower made it easier and more valuable to implement? This "ask the expert" session will provide an overview of dynamic inventories and an opportunity to ask an Ansible expert technical and non-technical questions about automation in a hybrid cloud environment.




What’s Next? 

If you have not already registered for AnsibleFest, go register today. Curious about the other content we have to offer at AnsibleFest? Check out the session catalog. We look forward to connecting with you next week and answering all your questions!

Culture at AnsibleFest 2020


At Red Hat, we’ve long recognized that the power of collaboration enables communities to achieve more together than individuals can accomplish on their own. Developing an organizational culture that empowers communities to flourish and collaborate -- whether in an open source community or for an internal community of practice -- isn’t always straightforward. This year at AnsibleFest, the Culture topic aims to demystify some of these areas by sharing the stories, practices, and examples that can get you on your path to better collaboration. 

 

Culture at AnsibleFest: “Open” for participation

Because we recognize that culture is not a “one size fits all” topic, we’ve made sure to sprinkle nearly every track at AnsibleFest with relevant content to help every type of Ansible user (or manager of Ansible users!) participate in developing healthy cultures and communities of automation inside their organizations. 

Whether you’re interested in contributing to open source communities, learning how others have grown the use of Ansible inside their departments or organizations, or if you’re simply interested in building healthy, diverse, inclusive communities, inside or outside the workplace -- the Culture (cross) Channel at AnsibleFest has you covered. 

 

Be a Cultural Catalyst for Automation

We’re looking forward to seeing you at AnsibleFest and discussing the practices and lessons that have contributed to improving cultures of automation around the world, shared directly from organizations and open source communities using Ansible. We hope they will inspire you and your colleagues to catalyze the change you’d love to see in your own organizations and workplaces.

Here are a few of the sessions you’ll see in various tracks at AnsibleFest that are culture-focused -- but be sure to check out the full AnsibleFest catalog for more sessions with a bit of that culture flavor. 

  • Making the business case for contributing to open source
  • Free their minds: Driving culture change and workplace transformation
  • How to introduce automation to your colleagues in IT security (and what’s in it for you)
  • Live Q&A: Applying open source principles to diversity and inclusion with Sam Knuth, Allie DeVolder, and Koren Townsend
  • Demystifying contributor culture: IRC, mailing lists, and netiquette for the 21st century

IT Leader Channel at AnsibleFest 2020


Whether you have automated different domains within your business or are just getting started, creating a roadmap to automation that can be passed between teams and understood at different levels is critical to any automation strategy. 

We’ve brought back the IT Decision Maker track at AnsibleFest this year after its debut in 2019, featuring sessions that help uplevel the conversation about automation, create consensus between teams and get automation goals accomplished faster. 

 

What you can expect

There are a variety of sessions in the IT Decision Maker track. A few focus on specific customer use cases and how those customers adopted and implemented Ansible. These sessions are great companions to our customer keynotes, including those from CarMax and PRA Health Sciences, which will dive into their Ansible implementations at a technical level. This track aims to cover the many constituents of automation within a business and how to bring the right teams together to extend your automation to these stakeholders. 

Newcomers to AnsibleFest will get a lot out of this track, as many of the sessions are aimed at those with a beginner’s level knowledge of Ansible Automation Platform and its hosted services. Those looking to get started with automation in the hybrid cloud will also get a lot out of this track, as it covers the variety of technology platforms that Ansible can automate - including those from Red Hat partners.

 

What you will learn

The target audience includes team leads who are actively building or growing their cross-functional automation teams. Many of the sessions in this track focus on automation culture and bringing together diverse points of view; we’re featuring a special session on creating consensus between automation decision makers and implementers. We also have a live Q&A session that will highlight the press announcements made on day one of AnsibleFest - you won’t want to miss it!

This track is focused on automation patterns and trends seen in production customers using Red Hat Ansible Automation Platform, along with interactive Q&A sessions that will dive into AnsibleFest technology announcements. 

 

What next?

There is still time to register for AnsibleFest (it's free!). If you are interested in learning what other content will be available at AnsibleFest this year, take a look at the session catalog.

Deep Dive: ACL Configuration Management Using Ansible Network Automation Resource Modules


In October 2019 as part of the Red Hat Ansible Engine 2.9 release, the Ansible Network Automation team introduced the first resource modules. These opinionated network modules make network automation easier and more consistent for those automating various network platforms in production. The goal for resource modules is to avoid creating and maintaining overly complex Jinja2 templates for rendering and pushing network configuration. 

This blog post covers the newly released ios_acls resource module and how to automate the manual processes associated with switch and router ACL configuration. These network automation modules are used for configuring routers and switches from popular vendors such as (but not limited to) Arista, Cisco, Juniper, and VyOS. The access control lists (ACLs) network resource modules can read the ACL configuration from the network, and provide the ability to modify it and push changes back to the network device. I’ll walk through several examples, describe the use cases for each state parameter (including three newly released state types), and show how these are used in real world scenarios.

 

The Certified Content Collection

This blog uses the cisco.ios Collection maintained by the Ansible team, but there are other platforms that also have ACL resource modules, such as arista.eos, junipernetworks.junos, and vyos.vyos.

Before starting, let’s quickly explain the rationale behind the naming of the network resource modules. The newly added ACLs modules will be plural (eos_acls, ios_acls, junos_acls, nxos_acls, iosxr_acls).  The older singular form modules (e.g. ios_acl, nxos_acl) will be deprecated over time. This naming change was done so that those using existing network modules would not have their Ansible Playbooks stop working and have sufficient time to migrate to the new network automation modules.
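Before running any of the examples below, the relevant Content Collection needs to be available on the control node. Here is a minimal sketch of a requirements file; the file name and the optional extra collections are conventions and assumptions on our part, not something prescribed by this post:

# requirements.yml  (install with: ansible-galaxy collection install -r requirements.yml)
collections:
  - name: cisco.ios                # provides cisco.ios.ios_acls used throughout this post
  - name: arista.eos               # optional: arista.eos.eos_acls
  - name: vyos.vyos                # optional: vyos.vyos.vyos_firewall_rules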

 

Platform support

This module is also available for the following Ansible-maintained platforms on both Automation Hub (supported) and Galaxy (community):

Platform       | Full Collection path (namespace.collection.module) | Automation Hub Link (requires subscription) | Ansible Galaxy Link
Arista EOS     | arista.eos.eos_acls                                 | Automation Hub                              | Galaxy
Cisco IOS      | cisco.ios.ios_acls                                  | Automation Hub                              | Galaxy
Cisco IOSXR    | cisco.iosxr.iosxr_acls                              | Automation Hub                              | Galaxy
Cisco NXOS     | cisco.nxos.nxos_acls                                | Automation Hub                              | Galaxy
Juniper JunOS  | junipernetworks.junos.junos_acls                    | Automation Hub                              | Galaxy
VyOS           | vyos.vyos.vyos_firewall_rules                       | Automation Hub                              | Galaxy



Getting started - Managing the ACL configuration with Ansible

An access control list (ACL) provides rules that are applied to the port numbers and/or IP addresses permitted to transit or reach a network device. The order of access control entries (ACEs) is critical, because the ACE sequence decides which rules are applied to inbound and outbound network traffic.

An ACL resource module provides the same level of functionality that a user can achieve when configuring manually on the Cisco IOS device, but combined with Ansible facts gathering and the resource module approach, it is more closely aligned with how network professionals work day to day.
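Resource module content also plugs into normal fact gathering. As a quick illustration (a minimal sketch, not part of the original walkthrough), the same structured ACL data can be collected through the platform facts module by requesting the acls network resource:

    - name: Gather only the ACLs resource facts from the device
      cisco.ios.ios_facts:
        gather_subset: min
        gather_network_resources:
          - acls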

I’ll be using an IOS router with version 15.6(3)M2 for all of the configuration in this post. Below is the initial state of the router’s ACL configuration; there are already active ACLs configured on the device.

Network device configuration

cisco#sh access-lists
Extended IP access list 110
  10 deny icmp 192.0.2.0 0.0.0.255 192.0.3.0 0.0.0.255 traceroute dscp ef ttl eq 10
  20 deny tcp host 198.51.100.0 host 198.51.110.0 eq telnet ack
Extended IP access list test_acl
  10 deny tcp 192.0.2.0 0.0.0.255 192.0.3.0 0.0.0.255 eq www fin option traceroute ttl eq 10
IPv6 access list R1_TRAFFIC
  deny tcp any eq www any eq telnet ack dscp af11 sequence 10

 

Using state gathered - Building an Ansible inventory

Resource modules allow the user to read in an existing network configuration and convert it into a structured data model. Using state: gathered is the equivalent of gathering Ansible facts for this specific resource. This example will read in the existing network configuration and store it as a flat file.

Ansible Playbook Example

Here is an Ansible Playbook example of using state: gathered and storing the result as YAML into host_vars.  If you are new to the concept of Ansible inventory and want to learn more about group_vars and host_vars, please refer to the Ansible User Guide: Inventory.

---
- name: convert configured ACLs to structured data
  hosts: cisco
  gather_facts: false
  tasks:


    - name: Use the ACLs resource module to gather the current config
      cisco.ios.ios_acls:
        state: gathered
      register: acls

    - name: Create inventory directory
      file:
        path: "{{ inventory_dir }}/host_vars/{{ inventory_hostname }}"
        state: directory

    - name: Write the ACL configuration to a file
      copy:
        content: "{{ {'acls': acls['gathered']} | to_nice_yaml }}"
        dest: "{{ inventory_dir }}/host_vars/{{ inventory_hostname }}/acls.yaml"

Execute the Ansible Playbook with the ansible-playbook command:

ansible-playbook example.yml
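The play above targets a cisco inventory group. If you do not already have one defined, here is a minimal sketch of a matching YAML inventory; the address and credentials are illustrative assumptions, only the lab_inventory directory and rtr2 hostname come from this post:

# lab_inventory/hosts.yml
all:
  children:
    cisco:
      hosts:
        rtr2:
          ansible_host: 192.0.2.10
      vars:
        ansible_network_os: cisco.ios.ios
        ansible_connection: ansible.netcommon.network_cli
        ansible_user: admin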

Examine File contents 

Here is the data structure that was created from reading in an existing configuration:

$ cat lab_inventory/host_vars/rtr2/acls.yaml
acls:
- acls:
     - aces:
         - destination:
             address: 192.0.3.0
             wildcard_bits: 0.0.0.255
           dscp: ef
           grant: deny
           protocol: icmp
           protocol_options:
             icmp:
               traceroute: true
           sequence: 10
           source:
             address: 192.0.2.0
             wildcard_bits: 0.0.0.255
           ttl:
             eq: 10
         - destination:
             host: 198.51.110.0
             port_protocol:
               eq: telnet
           grant: deny
           protocol: tcp
           protocol_options:
             tcp:
               ack: true
           sequence: 20
           source:
             host: 198.51.100.0
       acl_type: extended
       name: '110'
     - aces:
         - destination:
             address: 192.0.3.0
             port_protocol:
                 eq: www
             wildcard_bits: 0.0.0.255
           grant: deny
           option:
               traceroute: true
           protocol: tcp
           protocol_options:
               tcp:
                   fin: true
           sequence: 10
           source:
             address: 192.0.2.0
             wildcard_bits: 0.0.0.255
           ttl:
               eq: 10
       acl_type: extended
       name: test_acl
  afi: ipv4
- acls:
     - aces:
         - destination:
             any: true
             port_protocol:
               eq: telnet
           dscp: af11
           grant: deny
           protocol: tcp
           protocol_options:
             tcp:
               ack: true
           sequence: 10
           source:
             any: true
             port_protocol:
               eq: www
       name: R1_TRAFFIC
  afi: ipv6

In the above output (and future reference):

  • afi refers to address family identifier, either IPv4 or IPv6
  • acls refers to access control lists; each entry in the list is a dictionary describing one ACL, including its name, type and ACEs
  • aces refers to access control entries, the specific rules and their sequence numbers (a short sketch of working with this structure follows below)
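Once the ACLs are available as structured data, they can be inspected like any other Ansible variable. Here is a minimal sketch (assuming the acls variable registered by the gathered play above) that simply reports every ACL name per address family:

    - name: Report each ACL found in the gathered facts
      ansible.builtin.debug:
        msg: "{{ item.0.afi }} ACL: {{ item.1.name }}"
      loop: "{{ acls.gathered | subelements('acls') }}"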

 

Using state merged - Pushing configuration changes

The merged state takes your Ansible configuration data (for example, Ansible variables) and merges it into the network device’s running configuration. This does not affect any existing configuration that is not specified in your Ansible configuration data. Let’s walk through an example.

Modify stored file

We will modify the flat file created in the first example. We will then create an Ansible Playbook to merge this new configuration into the network device’s running configuration.

Reference link: https://gist.githubusercontent.com/justjais/bb2a65c373ab4e64d1eeb47bc425c613/raw/056d2a6a44910863cbbbf38cad2273435574db84/Merged.txt

acls:
- afi: ipv4
  acls:
   - name: std_acl
     acl_type: standard
     aces:
       - grant: deny
         source:
           address: 192.168.1.200
       - grant: deny
         source:
           address: 192.168.2.0
           wildcard_bits: 0.0.0.255
   - name: 110
     aces:
       - grant: deny
         sequence: 10
         protocol_options:
           icmp:
             traceroute: true
         source:
           address: 192.0.2.0
           wildcard_bits: 0.0.0.255
         destination:
           address: 192.0.3.0
           wildcard_bits: 0.0.0.255
         dscp: ef
         ttl:
           eq: 10
       - grant: deny
         protocol_options:
           tcp:
             ack: true
         source:
           host: 198.51.100.0
         destination:
           host: 198.51.110.0
           port_protocol:
             eq: telnet
   - name: test
     acl_type: extended
     aces:
       - grant: deny
         protocol_options:
           tcp:
             fin: true
         source:
           address: 192.0.2.0
           wildcard_bits: 0.0.0.255
         destination:
           address: 192.0.3.0
           wildcard_bits: 0.0.0.255
           port_protocol:
             eq: www
         option:
           traceroute: true
         ttl:
           eq: 10
   - name: 123
     aces:
       - grant: deny
         protocol_options:
           tcp:
             ack: true
         source:
           address: 198.51.100.0
           wildcard_bits: 0.0.0.255
         destination:
           address: 198.51.101.0
           wildcard_bits: 0.0.0.255
           port_protocol:
             eq: telnet
         tos:
           service_value: 12
       - grant: deny
         protocol_options:
           tcp:
             ack: true
         source:
           address: 192.0.3.0
           wildcard_bits: 0.0.0.255
         destination:
           address: 192.0.4.0
           wildcard_bits: 0.0.0.255
           port_protocol:
             eq: www
         dscp: ef
         ttl:
           lt: 20
- afi: ipv6
  acls:
   - name: R1_TRAFFIC
     aces:
       - grant: deny
         protocol_options:
           tcp:
             ack: true
         source:
           any: true
           port_protocol:
             eq: www
         destination:
           any: true
           port_protocol:
             eq: telnet
         dscp: af11

Ansible Playbook Example

---
- name: Merged state play
  hosts: cisco
  gather_facts: false
  tasks:
    - name: Merge ACLs config with device existing ACLs config
      cisco.ios.ios_acls:
        state: merged
        config: "{{ acls }}"

Once we run the merge play, all of the provided parameters are configured on the Cisco IOS router, and Ansible reports changed=True.

Network device configuration

cisco#sh access-lists
Standard IP access list std_acl
   10 deny   192.168.1.200
   20 deny   192.168.2.0, wildcard bits 0.0.0.255
Extended IP access list 110
   10 deny icmp 192.0.2.0 0.0.0.255 192.0.3.0 0.0.0.255 traceroute dscp ef ttl eq 10
   20 deny tcp host 198.51.100.0 host 198.51.110.0 eq telnet ack
Extended IP access list 123
   10 deny tcp 198.51.100.0 0.0.0.255 198.51.101.0 0.0.0.255 eq telnet ack tos 12
   20 deny tcp 192.0.3.0 0.0.0.255 192.0.4.0 0.0.0.255 eq www ack dscp ef ttl lt 20
Extended IP access list test
   10 deny tcp 192.0.2.0 0.0.0.255 192.0.3.0 0.0.0.255 eq www fin option traceroute ttl eq 10
IPv6 access list R1_TRAFFIC
   deny tcp any eq www any eq telnet ack dscp af11 sequence 10

If we dig slightly into the device output, we make the following observations:

  • Based on the afi value, the module decides whether to configure IP or IPv6 access lists. 
  • The ‘acl_type’ key is required for named ACLs.
  • For ACLs identified by a number rather than a name, the ‘acl_type’ is derived from the platform’s documented ACL number ranges (e.g. Standard = 1–99 and 1300–1999, Extended = 100–199 and 2000–2699, etc.).
  • If the sequence number is not mentioned in an ACE, it is configured based on the order provided in the play. 
  • On a second run, the merge play runs again and Ansible’s idempotency comes into the picture: if nothing has changed, the run results in changed=False, which confirms that all of the configuration provided in the play is already present on the IOS device (see the check-mode sketch below).
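A related tip (our suggestion, not something covered in the original post): because the resource modules support check mode, running the same play with --check --diff is a convenient way to preview exactly what merged would change before pushing anything. Assuming the merge play above is saved as merged.yml:

ansible-playbook merged.yml --check --diff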

 

Using state replaced - Pushing configuration changes

The replaced parameter enforces the data model on the network device for each configured ACL/ACE. If we modify any of the ACL/ACEs, it will enforce all the parameters this resource module is aware of. To think of this another way, the replaced parameter is aware of all the commands that should and shouldn’t be there.

For this scenario, an ACL with some ACEs is already configured on the Cisco IOS device, and now the user wants to update the ACL with a new set of ACEs and discard all of the already configured ones. The resource module’s replaced state will replace the ACL’s existing ACEs with the new set of ACEs given as input by the user.

Ref gist link: https://gist.githubusercontent.com/justjais/bb2a65c373ab4e64d1eeb47bc425c613/raw/056d2a6a44910863cbbbf38cad2273435574db84/Replaced.txt

acls:
- afi: ipv4
  acls:
   - name: 110
     aces:
       - grant: deny
         protocol_options:
           tcp:
             syn: true
         source:
           address: 192.0.2.0
           wildcard_bits: 0.0.0.255
         destination:
           address: 192.0.3.0
           wildcard_bits: 0.0.0.255
           port_protocol:
             eq: www
         dscp: ef
         ttl:
           eq: 10
   - name: 150
     aces:
       - grant: deny
         sequence: 20
         protocol_options:
           tcp:
             syn: true
         source:
           address: 198.51.100.0
           wildcard_bits: 0.0.0.255
           port_protocol:
             eq: telnet
         destination:
           address: 198.51.110.0
           wildcard_bits: 0.0.0.255
           port_protocol:
             eq: telnet
         dscp: ef
         ttl:
           eq: 10

Ansible Playbook Example

---
- name: Replaced state play
  hosts: cisco
  gather_facts: false
  tasks:
    - name: Replace ACLs config with device existing ACLs config
      cisco.ios.ios_acls:
        state: replaced
        config: "{{ acls }}"

With the above play, the user is replacing the ACEs of the extended ACL 110 with the provided configuration and also configuring the new extended ACL 150 with its ACEs.

Before running the replaced play, the network device configuration is:

cisco#sh access-lists
Standard IP access list std_acl
   10 deny   192.168.1.200
   20 deny   192.168.2.0, wildcard bits 0.0.0.255
Extended IP access list 110
   10 deny icmp 192.0.2.0 0.0.0.255 192.0.3.0 0.0.0.255 traceroute dscp ef ttl eq 10
   20 deny tcp host 198.51.100.0 host 198.51.110.0 eq telnet ack
Extended IP access list 123
   10 deny tcp 198.51.100.0 0.0.0.255 198.51.101.0 0.0.0.255 eq telnet ack tos 12
   20 deny tcp 192.0.3.0 0.0.0.255 192.0.4.0 0.0.0.255 eq www ack dscp ef ttl lt 20
Extended IP access list test
   10 deny tcp 192.0.2.0 0.0.0.255 192.0.3.0 0.0.0.255 eq www fin option traceroute ttl eq 10
IPv6 access list R1_TRAFFIC
   deny tcp any eq www any eq telnet ack dscp af11 sequence 10

When the replaced play runs, the following commands are sent:

- ip access-list extended 110
- no 10
- no 20
- deny tcp 192.0.2.0 0.0.0.255 192.0.3.0 0.0.0.255 eq www syn dscp ef ttl eq 10
- ip access-list extended 150
- 20 deny tcp 198.51.100.0 0.0.0.255 eq telnet 198.51.110.0 0.0.0.255 eq telnet syn  dscp ef ttl eq 10

After running the replaced play, the network device configuration is:

cisco#sh access-lists
Standard IP access list std_acl
   10 deny   192.168.1.200
   20 deny   192.168.2.0, wildcard bits 0.0.0.255
Extended IP access list 110
   10 deny tcp 192.0.2.0 0.0.0.255 192.0.3.0 0.0.0.255 eq www syn dscp ef ttl eq 10
Extended IP access list 123
   10 deny tcp 198.51.100.0 0.0.0.255 198.51.101.0 0.0.0.255 eq telnet ack tos 12
   20 deny tcp 192.0.3.0 0.0.0.255 192.0.4.0 0.0.0.255 eq www ack dscp ef ttl lt 20
Extended IP access list 150
   20 deny tcp 198.51.100.0 0.0.0.255 eq telnet 198.51.110.0 0.0.0.255 eq telnet syn dscp ef ttl eq 10
Extended IP access list test
   10 deny tcp 192.0.2.0 0.0.0.255 192.0.3.0 0.0.0.255 eq www fin option traceroute ttl eq 10
IPv6 access list R1_TRAFFIC
   deny tcp any eq www any eq telnet ack dscp af11 sequence 10

If we examine the output briefly, we can make the following observations:

  • replaced negates all of the pre-existing ACEs under the input ACL and then applies the configuration provided as input in the play. The same behaviour can be seen in the commands output above for numbered ACL 110, where the pre-existing ACEs at sequence 10 and 20 are negated first before the newer ACE configuration is applied.
  • For the new extended ACL 150, since it wasn’t already configured on the device, the module applies the ACE configuration provided as input in the play. One thing to note here is that the play sets the sequence value to 20, so the ACE is configured at sequence 20 instead of 10, which would have been the case if no sequence value had been provided by the user.

On a second run of the above play, changed comes back as false, which satisfies Ansible idempotency.

 

Using state overridden - Pushing configuration changes

For this example, we will mix it up slightly. Pretend you are a user making a bespoke change on the network device (a change made outside of automation). The state: overridden will circle back, enforce the data model (configuration policy enforcement) and remove the bespoke change.

If the user wants to entirely re-configure the ACLs already present on the Cisco IOS device, then the resource module’s overridden state is the most appropriate. When using the overridden state, a user can override all ACLs with the user-provided ACLs.

To show the difference between replaced and overridden state working, we will be using the same play that we used for the replaced scenario, keeping the pre-existing configuration the same as well.

Ref gist link: https://gist.githubusercontent.com/justjais/bb2a65c373ab4e64d1eeb47bc425c613/raw/056d2a6a44910863cbbbf38cad2273435574db84/Overridden.txt

ACLs configuration:

acls:
- afi: ipv4
  acls:
   - name: 110
     aces:
       - grant: deny
         sequence: 20
         protocol_options:
           tcp:
             ack: true
         source:
           address: 198.51.100.0
           wildcard_bits: 0.0.0.255
           port_protocol:
             eq: telnet
         destination:
           address: 198.51.110.0
           wildcard_bits: 0.0.0.255
           port_protocol:
             eq: www
         dscp: ef
         ttl:
           eq: 10
   - name: 150
     aces:
       - grant: deny
         sequence: 10
         protocol_options:
           tcp:
             syn: true
         source:
           address: 198.51.100.0
           wildcard_bits: 0.0.0.255
           port_protocol:
             eq: telnet
         destination:
           address: 198.51.110.0
           wildcard_bits: 0.0.0.255
           port_protocol:
             eq: telnet
         dscp: ef
         ttl:
           eq: 10

Ansible Playbook Example

---
- name: Overridden state play
  hosts: cisco
  gather_facts: false
  tasks:
    - name: Override ACLs config with device existing ACLs config
      cisco.ios.ios_acls:
        state: overridden
        config: "{{ acls }}"

With the above play, the user is overriding the entire ACL configuration: only the provided extended ACLs 110 and 150 remain, and every other pre-existing ACL is removed.

Before running the overridden play, the network device configuration is:

cisco#sh access-lists
Standard IP access list std_acl
   10 deny   192.168.1.200
   20 deny   192.168.2.0, wildcard bits 0.0.0.255
Extended IP access list 110
   10 deny icmp 192.0.2.0 0.0.0.255 192.0.3.0 0.0.0.255 traceroute dscp ef ttl eq 10
   20 deny tcp host 198.51.100.0 host 198.51.110.0 eq telnet ack
Extended IP access list 123
   10 deny tcp 198.51.100.0 0.0.0.255 198.51.101.0 0.0.0.255 eq telnet ack tos 12
   20 deny tcp 192.0.3.0 0.0.0.255 192.0.4.0 0.0.0.255 eq www ack dscp ef ttl lt 20
Extended IP access list test
   10 deny tcp 192.0.2.0 0.0.0.255 192.0.3.0 0.0.0.255 eq www fin option traceroute ttl eq 10
IPv6 access list R1_TRAFFIC
   deny tcp any eq www any eq telnet ack dscp af11 sequence 10

When the overridden play runs, the following commands are sent:

- no ip access-list standard std_acl
- no ip access-list extended 110
- no ip access-list extended 123
- no ip access-list extended 150
- no ip access-list extended test
- no ipv6 access-list R1_TRAFFIC
- ip access-list extended 150
- 10 deny tcp 198.51.100.0 0.0.0.255 eq telnet 198.51.110.0 0.0.0.255 eq telnet syn dscp ef ttl eq 10
- ip access-list extended 110
- 20 deny tcp 198.51.100.0 0.0.0.255 eq telnet 198.51.110.0 0.0.0.255 eq www ack dscp ef ttl eq 10

After running the overridden play, the network device configuration is:

cisco#sh access-lists
Extended IP access list 110
   20 deny tcp 198.51.100.0 0.0.0.255 eq telnet 198.51.110.0 0.0.0.255 eq www ack dscp ef ttl eq 10
Extended IP access list 150
   10 deny tcp 198.51.100.0 0.0.0.255 eq telnet 198.51.110.0 0.0.0.255 eq telnet syn dscp ef ttl eq 10

Again, if we examine the overridden play output:

  • overridden negates all of the pre-existing ACLs and deletes any configuration that is not present in the provided config.
  • For ACL configurations that are pre-existing and also in the play, the ios_acls overridden state will delete/negate all of the pre-existing ACEs and then configure the new ACEs as mentioned in the play.
  • For any ACLs that do not yet exist, the overridden state configures the ACL in the same manner as merged.

Now that we have covered how to configure ACLs and ACEs on the Cisco IOS device using the ios_acls resource module’s merged, replaced and overridden states, it’s time to talk about how to delete pre-configured ACLs and ACEs, and what level of granularity the deleted state offers the user.

Using state deleted - Deleting configuration changes

If the user wants to delete ACLs that are pre-configured on the Cisco IOS device, the resource module’s deleted state is the one to use.

 

Method 1: Delete individual ACLs by name or number (that is, when the user needs to delete specific ACLs configured under IPv4 or IPv6).

Ref gist link: https://gist.githubusercontent.com/justjais/bb2a65c373ab4e64d1eeb47bc425c613/raw/056d2a6a44910863cbbbf38cad2273435574db84/Deleted.txt

ACLs that need to be deleted

acls:
- afi: ipv4
  acls:
    - name: test
      acl_type: extended
    - name: 110
    - name: 123
- afi: ipv6
  acls:
    - name: R1_TRAFFIC

Ansible Playbook Example

---
- name: Deleted state play
  hosts: cisco
  gather_facts: false
  tasks:
    - name: Delete ACLs based on ACL number
      cisco.ios.ios_acls:
        state: deleted
        config: "{{ acls }}"

Before running the deleted play, the network device configuration is:

cisco#sh access-lists
Standard IP access list std_acl
   10 deny   192.168.1.200
   20 deny   192.168.2.0, wildcard bits 0.0.0.255
Extended IP access list 110
   10 deny icmp 192.0.2.0 0.0.0.255 192.0.3.0 0.0.0.255 traceroute dscp ef ttl eq 10
   20 deny tcp host 198.51.100.0 host 198.51.110.0 eq telnet ack
Extended IP access list 123
   10 deny tcp 198.51.100.0 0.0.0.255 198.51.101.0 0.0.0.255 eq telnet ack tos 12
   20 deny tcp 192.0.3.0 0.0.0.255 192.0.4.0 0.0.0.255 eq www ack dscp ef ttl lt 20
Extended IP access list test
   10 deny tcp 192.0.2.0 0.0.0.255 192.0.3.0 0.0.0.255 eq www fin option traceroute ttl eq 10
IPv6 access list R1_TRAFFIC
   deny tcp any eq www any eq telnet ack dscp af11 sequence 10

When the delete-by-ACL play runs, the following commands are sent:

- no ip access-list extended test
- no ip access-list extended 110
- no ip access-list extended 123
- no ipv6 access-list R1_TRAFFIC

After running the deleted play, the network device configuration is:

cisco#sh access-lists
Standard IP access list std_acl
   10 deny   192.168.1.200
   20 deny   192.168.2.0, wildcard bits 0.0.0.255
cisco#

 

Method 2: Delete ACLs based on their AFI (Address Family Identifier), that is, when the user needs to delete all of the ACLs configured under IPv4 or IPv6.

Ref gist link: https://gist.githubusercontent.com/justjais/bb2a65c373ab4e64d1eeb47bc425c613/raw/8c65946eae561ff569cfc5398879c51598ae050c/Deleted_by_AFI

Ansible Playbook Example

---
- name: Deleted state play
  hosts: cisco
  gather_facts: false
  tasks:
    - name: Delete ALL IPV4 configured ACLs
      cisco.ios.ios_acls:
        config:
          - afi: ipv4
        state: deleted

Before running the deleted play, the network device configuration is:

cisco#sh access-lists
Standard IP access list std_acl
   10 deny   192.168.1.200
   20 deny   192.168.2.0, wildcard bits 0.0.0.255
Extended IP access list 110
   10 deny icmp 192.0.2.0 0.0.0.255 192.0.3.0 0.0.0.255 traceroute dscp ef ttl eq 10
   20 deny tcp host 198.51.100.0 host 198.51.110.0 eq telnet ack
Extended IP access list 123
   10 deny tcp 198.51.100.0 0.0.0.255 198.51.101.0 0.0.0.255 eq telnet ack tos 12
   20 deny tcp 192.0.3.0 0.0.0.255 192.0.4.0 0.0.0.255 eq www ack dscp ef ttl lt 20
Extended IP access list test
   10 deny tcp 192.0.2.0 0.0.0.255 192.0.3.0 0.0.0.255 eq www fin option traceroute ttl eq 10
IPv6 access list R1_TRAFFIC
   deny tcp any eq www any eq telnet ack dscp af11 sequence 10

When the delete-by-AFI play runs, the following commands are sent:

- no ip access-list standard std_acl
- no ip access-list extended test
- no ip access-list extended 110
- no ip access-list extended 123
- no ip access-list extended test

After running the deleted play, the network device configuration is:

cisco#sh access-lists
IPv6 access list R1_TRAFFIC
   deny tcp any eq www any eq telnet ack dscp af11 sequence 10
cisco#

 

Method 3: Delete ALL ACLs at once

Note: this is a very critical delete operation; if not used judiciously, it will remove all pre-configured ACLs from the device.

Ref gist link: https://gist.githubusercontent.com/justjais/bb2a65c373ab4e64d1eeb47bc425c613/raw/056d2a6a44910863cbbbf38cad2273435574db84/Deleted_wo_config.txt

Ansible Playbook Example

---
- name: Deleted state play
  hosts: cisco
  gather_facts: false
  tasks:
    - name: Delete ALL configured ACLs w/o passing any config
      cisco.ios.ios_acls:
        state: deleted

Before running the deleted play, the network device configuration is:

cisco#sh access-lists
Standard IP access list std_acl
   10 deny   192.168.1.200
   20 deny   192.168.2.0, wildcard bits 0.0.0.255
Extended IP access list 110
   10 deny icmp 192.0.2.0 0.0.0.255 192.0.3.0 0.0.0.255 traceroute dscp ef ttl eq 10
   20 deny tcp host 198.51.100.0 host 198.51.110.0 eq telnet ack
Extended IP access list 123
   10 deny tcp 198.51.100.0 0.0.0.255 198.51.101.0 0.0.0.255 eq telnet ack tos 12
   20 deny tcp 192.0.3.0 0.0.0.255 192.0.4.0 0.0.0.255 eq www ack dscp ef ttl lt 20
Extended IP access list test
   10 deny tcp 192.0.2.0 0.0.0.255 192.0.3.0 0.0.0.255 eq www fin option traceroute ttl eq 10
IPv6 access list R1_TRAFFIC
   deny tcp any eq www any eq telnet ack dscp af11 sequence 10

When the delete-all play runs, the following commands are sent:

- no ip access-list standard std_acl
- no ip access-list extended test
- no ip access-list extended 110
- no ip access-list extended 123
- no ip access-list extended test
- no ipv6 access-list R1_TRAFFIC

After running the deleted play, the network device configuration is:

cisco#sh access-lists
cisco#
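Because running state: deleted without any config removes every ACL on the device, one defensive pattern worth considering (our suggestion, not something the module requires) is to gate the task behind an explicit extra variable so the wipe cannot happen by accident:

    - name: Delete ALL configured ACLs (runs only when explicitly confirmed)
      cisco.ios.ios_acls:
        state: deleted
      when: confirm_acl_wipe | default(false) | bool

The deletion then only happens when the play is invoked with something like ansible-playbook deleted_all.yml -e confirm_acl_wipe=true (the playbook file name here is just an illustrative assumption).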

 

Using state rendered - Development and working offline

The rendered state transforms the provided structured data model into platform-specific CLI commands. This state does not require a connection to the end device. For this example, it will render the provided data model into Cisco IOS syntax commands.

Ref gist link: https://gist.githubusercontent.com/justjais/bb2a65c373ab4e64d1eeb47bc425c613/raw/8c65946eae561ff569cfc5398879c51598ae050c/Rendered.txt

ACLs Config that needs to be rendered

acls:
- afi: ipv4
  acls:
   - name: 110
     aces:
       - grant: deny
         sequence: 10
         protocol_options:
           tcp:
             syn: true
         source:
           address: 192.0.2.0
           wildcard_bits: 0.0.0.255
         destination:
           address: 192.0.3.0
           wildcard_bits: 0.0.0.255
           port_protocol:
             eq: www
         dscp: ef
         ttl:
           eq: 10
   - name: 150
     aces:
       - grant: deny
         protocol_options:
           tcp:
             syn: true
         source:
           address: 198.51.100.0
           wildcard_bits: 0.0.0.255
           port_protocol:
             eq: telnet
         destination:
           address: 198.51.110.0
           wildcard_bits: 0.0.0.255
           port_protocol:
             eq: telnet
         dscp: ef
         ttl:
           eq: 10	

Ansible Playbook Example

---
- name: Rendered state play
  hosts: cisco
  gather_facts: false
  tasks:
    - name: Render the provided configuration
      cisco.ios.ios_acls:
        config: "{{ acls }}"
        state: rendered

Running the play with state: rendered produces the following results:

"rendered": [
   "ip access-list extended 110",
   "10 deny tcp 192.0.2.0 0.0.0.255 192.0.3.0 0.0.0.255 eq www syn dscp ef ttl eq 10",
   "ip access-list extended 150",
   "deny tcp 198.51.100.0 0.0.0.255 eq telnet 198.51.110.0 0.0.0.255 eq telnet syn dscp ef ttl eq 10"
]

NOTE: The rendered state does not change any configuration on the device.
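Since rendered never touches the device, it pairs well with a local file write so the generated commands can be peer reviewed offline. A minimal sketch that extends the play above; the register name and destination path are illustrative assumptions:

    - name: Render the provided configuration
      cisco.ios.ios_acls:
        config: "{{ acls }}"
        state: rendered
      register: rendered_result

    - name: Save the generated CLI commands to a local file for review
      ansible.builtin.copy:
        content: "{{ rendered_result.rendered | join('\n') }}"
        dest: "./{{ inventory_hostname }}_rendered_acls.cfg"
      delegate_to: localhost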

 

Using state parsed - Development and working offline

This state reads the configuration from the running_config option and transforms it into structured data (JSON). This is helpful if you have offline configurations, such as a backup text file, and want to transform them into structured data. It is useful for experimenting, troubleshooting, or offline creation of a source of truth for your data models.

Ref gist link: https://gist.githubusercontent.com/justjais/bb2a65c373ab4e64d1eeb47bc425c613/raw/8c65946eae561ff569cfc5398879c51598ae050c/Parsed.txt

ACLs Config that needs to be Parsed

Ansible Playbook Example

---
- name: Parsed state play
  hosts: cisco
  gather_facts: false
  tasks:
    - name: Parse the provided ACLs configuration
      cisco.ios.ios_acls:
        running_config: |
          ipv6 access-list R1_TRAFFIC
          deny tcp any eq www any eq telnet ack dscp af11
        state: parsed

Running the play with state: parsed produces the following results:

"parsed": [
       {
           "acls": [
               {
                   "aces": [
                       {
                           "destination": {
                               "any": true,
                               "port_protocol": {
                                   "eq": "telnet"
                               }
                           },
                           "dscp": "af11",
                           "grant": "deny",
                           "protocol_options": {
                               "tcp": {
                                   "ack": true
                               }
                           },
                           "source": {
                               "any": true,
                               "port_protocol": {
                                   "eq": "www"
                               }
                           }
                       }
                   ],
                   "name": "R1_TRAFFIC"
               }
           ],
           "afi": "ipv6"
       }
   ]
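The same parsed workflow also works against a saved configuration backup instead of an inline string. A minimal sketch (the backup file path and registered variable name are illustrative assumptions):

    - name: Parse ACLs from an offline configuration backup
      cisco.ios.ios_acls:
        running_config: "{{ lookup('file', 'backups/rtr2_running.cfg') }}"
        state: parsed
      register: parsed_acls

The parsed_acls variable then contains the same structured data shown above and can be written out to host_vars exactly like the gathered example earlier in this post.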

 

Conclusion

The ACLs resource modules provide an easy way for network engineers to begin automating access lists on multiple network platforms. While some configuration can remain static on network devices, ACLs might need constant updates and verification. These resource modules allow users to adopt automation in incremental steps, which makes adoption easier for organizations. As soon as you have transformed your ACLs into structured data, any resource module from any network platform can consume it. Imagine reading in ACLs from your Cisco IOS box and transforming them into Cisco IOS-XR commands. 

Do you want to meet the blog authors? Come attend virtual AnsibleFest 2020! We have an entire track dedicated to Ansible Network Automation. Did we mention it is free? Sign up here.

Are you new to Ansible Network Automation? Check out our Getting Started Page!

Do you want some free training? Check out Ansible Automation Technical Workshops.

Or just come chat with us on Slack.

Best of Fest: AnsibleFest 2020


Thank you to everyone who joined us over the past two days for the AnsibleFest 2020 virtual experience. We had such a great time connecting with Ansible lovers across the globe. In case you missed some of it (or all of it), we have some event highlights to share with you! If you want to go see what you may have missed, all the AnsibleFest 2020 content will be available on demand for a year. 

 

Community Updates

This year at AnsibleFest 2020, Ansible Community Architect Robyn Bergeron kicked things off with her keynote on Tuesday morning. We heard how, with Ansible Content Collections, it’s easier than ever to use Ansible the way you want or need to, as a contributor or an end user. Ansible 2.10 is now available, and Robyn explained how the feedback loop got us there. If you want to hear more about the Ansible community project, go watch Robyn’s keynote on demand.

 

Product Updates

Ansible’s own Richard Henshall talked about the Red Hat Ansible Automation Platform product updates and new releases. In 2018, we unveiled the Ansible certified partner program and now we have over 50 platforms certified. We are bridging traditional platforms, containers and edge with a new integration between Red Hat Advanced Cluster Management for Kubernetes and Ansible Automation Platform. Learn more about the new integration from our press release. This year at AnsibleFest, we also introduced private Automation Hub, where users can now manage and curate Ansible content privately, from trusted sources. You can learn more about this and other Ansible Automation Platform updates from our press release. You can also listen to Richard’s full keynote in the AnsibleFest platform on demand now.

 

Channel Content

AnsibleFest 2020 showcased six channels of content, with something for everyone. Some popular talks included the Ansible Automation Platform Roadmap, Managing your own Private Ansible Content, Ansible Automation Platform Technical Content Strategy, How to manage your Ansible automation and how your automation is performing with analytics, and much more! All 70+ breakout sessions are available on demand now.

 

We hope everyone enjoyed our first virtual AnsibleFest. Thank you to all our attendees who helped make AnsibleFest 2020 the largest and most successful AnsibleFest to date. To see more highlights of the event, you can visit the AnsibleFest homepage. Don’t forget, all the content will be available until October 2021, so you can go back and watch the content whenever you would like. Thank you for connecting with us this year and happy automating!
