Channel: Ansible Collaborative

Ansible Automates 2020


Today, the operational role of IT is obvious. The rapid developments enabled by automation create genuine business value. The results that can be achieved by automation have a direct link to a company’s business goals. 

As a CTO or CIO, you sometimes need help articulating this to stakeholders and translating your IT department's performance into business-prioritized KPIs. Most see efficiency gains and reductions in cost and risk, for example. Automation is clearly an executive-level issue.

At first, Ansible was a classic tool used for specific automation tasks. Ansible helps your team automate routine work so that they can instead focus on what you want to do. The platform enables you to structure work by automating your processes.

 

Automation is a journey - start yours at Ansible Automates 2020

The global, all-day digital event – Ansible Automates 2020 – takes place on June 10. The event provides inspiration as to how the automation journey can be accelerated and taken to the next level. And no, we’re not going to discuss functionality and technology all day. We want to highlight the cultural and behavioral changes that are linked to the trend towards greater automation. 

For organizations to achieve the best results, they need to focus on the new tools and tactics at their disposal. At Ansible Automates 2020, we’ll be deep diving at a consultative level. We will discuss both the human and the business aspects of automation.

Ansible Automates 2020 takes place in a more digital, and more unusual, time than ever before. With so many unprecedented challenges emerging in the IT sphere, automation can provide increased value and much sought-after solutions for teams. The cost efficiencies associated with automation are a top priority right now, given the economic challenges that many people are facing.

Automation is not a destination; it’s a journey. At Ansible Automates 2020, we’ll be hosting a roundtable with four clients from our broad spectrum of fields. They will discuss and share their respective automation journeys specific to their use cases, key issues and challenges. What would they do differently if they started the same journey again today? What are the challenges they face now? There is enormous value in knowledge sharing, which is a key principle in open source technology. 

 

Automated change is coming

Those of us who have worked in IT a little longer than we might admit can probably recall some of the old ways we used to get things done. We tested all of our code ourselves and documented everything we did, from start to finish. But in the early 2000s, creating a more efficient process became an increasing priority, and dedicated teams for testing and documentation were added to the department. Automation has emerged as a similar watershed moment, an integral part of the ongoing technological evolution.

Now, striving to automate all manual processes has become second nature, with many organizations accelerating the pace. This has resulted in an increase in the number of change journeys. Almost all of those who see the positive effects of automation undergo a deeper behavioral change. 

 

Advice to those who want to get started with automation:

At Ansible Automates 2020, you will connect with stories on how to get started with automation at more than just a technical level. The event will help you start thinking about important considerations, such as: 

1) Start by asking yourself the right questions. What do you want to achieve, both from organizational and business perspectives? What are your business goals? Where are you now?

2) Vendors such as Red Hat will always add new, exciting technology to the mix. As automation becomes increasingly relevant, its areas of application and the creativity around it grow, which is fantastic. The important thing is to keep coming back to the questions above: what do you need? 

3) The community exists because they want to help! Set a plan for how to best use the Ansible community, as well as the open source community in general. 

Join us on June 10

 

And, if you have your own automation story to tell, submit your story for AnsibleFest Virtual Experience. Call for proposals is open through July 15.


Tolerable Ansible


Ansible Playbooks are very easy to read and their linear execution makes it simple to understand what will happen while a playbook is executing. Unfortunately, in some circumstances, the things you need to automate may not function in a linear fashion. For example, I was once asked to perform the following tasks with Ansible:

  • Notify an external patching system to patch a Windows target
  • Wait until the patching process was completed before moving on with the remaining playbook tasks

While the request sounded simple, upon further investigation it would prove more challenging for the following reasons:

  • The system patched the server asynchronously from the call. i.e. the call into the patching system would simply put the target node into a queue to be patched 
  • The patching process itself could last for several hours
  • As part of the patching process the system would reboot no fewer than two times but with an unspecified maximum depending on the patches which need to be applied
  • Due to the specific implementation of the patching system the only reliable way to tell if patching was completed was by interrogating a registry entry on the client
  • If the patching took too long to complete additional actions needed to be taken

Due to the asynchronous nature, long running process and an unspecified number of reboots of the patching system, it was challenging to make Ansible correctly initiate and monitor the process. For example, if the machine was rebooted while Ansible tried to check the registry the playbook would fail. Or, if the reboot took too long, Ansible might continue with the playbook prematurely and fail to connect to the client. Fortunately, as you will see in this blog post there are features within Ansible which make it possible to handle these error conditions to achieve the desired effects for cases like this.

 

Setting Up A Simple Test

In the following examples, we are going to write some Ansible playbooks which will:

  • Look for the presence of a specific file on a target machine
  • Handle any number of non-Ansible initiated reboots
  • Timeout after some number of retries (which we will keep low for these tests)
  • Perform one of two actions based on the results of the monitoring (in our cases we will either fail or print a debug message)

These examples are designed to run on Linux machines but the concepts we are using would apply the same way for Windows tasks. 
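
For the Windows case, a check along these lines could hypothetically use the win_reg_stat module in the same until/retries pattern we build below; this is only a sketch, and the registry path, value name and expected value are placeholders rather than details from a real patching product:

    - name: Check the patching status registry entry
      win_reg_stat:
        path: HKLM:\SOFTWARE\ExamplePatchTool
        name: PatchComplete
      register: patch_status
      # Keep polling until the (hypothetical) completion flag is set
      until: patch_status.exists and patch_status.value == 1
      retries: 2
      delay: 1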

At the root of our test we will see if a file exists. So let's start with a playbook like this:

---
- name: Try to survive and detect a reboot
  hosts: target_node
  gather_facts: False
  tasks:
    - name: Check for the file
      file:
        path: "/tmp/john"
        state: file
      register: my_file
    - name: Post Task
      debug:
        msg: "This task is the end of our monitoring"

With the file present, our playbook completes successfully:

PLAY [Test file] *******************************************************************
TASK [Check for the file] *******************************************************************
ok: [192.168.0.26]
TASK [Post Task] *******************************************************************
ok: [192.168.0.26] => {
    "msg": "This task is the end of our monitoring"
}
PLAY RECAP *******************************************************************
192.168.0.26               : ok=4    changed=0    unreachable=0    failed=0

But when we run this playbook with the file missing the task fails with an error:

TASK [Check for the file] *******************************************************************
fatal: [192.168.0.26]: FAILED! => {"changed": false, "msg": "file (/tmp/john) is absent, cannot continue", "path": "/tmp/john", "state": "absent"}

 

Using a Loop to Wait

The first modification to our playbook will be to use a loop to allow the file check to wait for the file to show up. To do this, we will add some parameters to the “Check for the file” task:

    - name: Check for the file
      file:
        path: "/tmp/john"
        state: file
      register: my_file
      # Keep trying until we find the file
      until: my_file is succeeded
      retries: 2
      delay: 1

This tells Ansible that we want to retry this step up to two times with a one second delay between each check and, if the registered my_file variable was successful, we can be done with this step. This time, when we run our playbook without a file the specific task looks different already:

TASK [Check for the file] *******************************************************************
FAILED - RETRYING: Check for the file (2 retries left).
FAILED - RETRYING: Check for the file (1 retries left).
fatal: [192.168.0.31]: FAILED! => {"attempts": 2, "changed": false, "msg": "file (/tmp/john) is absent, cannot continue", "path": "/tmp/john", "state": "absent"}

We can see Ansible try twice to find the file and then fail. Now, if we run our playbook and create the file it's looking for while the task is looping, the output becomes:

TASK [Check for the file] *******************************************************************
FAILED - RETRYING: Check for the file (2 retries left)
ok: [192.168.0.31]

After that task, the final task will run which means our monitoring is now working as expected. However, note what happens in our task if I simulate a reboot of our server while the loop is monitoring for the file:

TASK [Check for the file] *******************************************************************
FAILED - RETRYING: Check for the file (2 retries left)
fatal: [192.168.0.31]: UNREACHABLE! => {"changed": false, "msg": "Failed to connect to the host via ssh: Shared connection to 192.168.0.31 closed.\r\n", "unreachable": true}

In this case we get a connection failure and Ansible terminates the execution on this node. Since we are only running on one node this also ends our play. To help with this, there was a recent feature added to Ansible called ignore_unreachable. This allows us to continue a playbook even if a host has become unreachable. Let's modify our check task to include this parameter:

    - name: Check for the file
      file:
        path: "/tmp/john"
        state: file
      register: my_file
      until: my_file is succeeded
      # Keep trying until we find the file
      retries: 2
      delay: 1
      # It is ok if we can’t connect to the server during this task
      ignore_unreachable: True

If we run the last test with the machine reboot in the loop we now get these results:

TASK [Check for the file] *******************************************************************
FAILED - RETRYING: Check for the file (2 retries left)
fatal: [192.168.0.31]: UNREACHABLE! => {"changed": false, "msg": "Failed to connect to the host via ssh: ssh: connect to host 192.168.0.31 port 22: Connection refused", "skip_reason": "Host 192.168.0.31 is unreachable", "unreachable": true}
TASK [Post Task] *******************************************************************
ok: [192.168.0.31] => {
    "msg": "This task is the end of our monitoring"
}
PLAY RECAP *******************************************************************
192.168.0.31               : ok=2    changed=0    unreachable=1    failed=0    skipped=1    rescued=0    ignored=0

Note the skipped and the unreachable in the play recap at the end of the output.

 

Looping Over A Loop

With the configuration above, Ansible will now ignore the connection error for that step and continue to run the playbook. This is both good and bad for us. It’s bad because we made it to the final task indicating a successful patch (or in our example a found file) but we never actually put the file on the system (so we got a false positive). However, this is good because it means we can keep writing Ansible to further handle the error conditions. 

Next, we are going to make Ansible retry the logic to find the file after a reboot. To do this, we have to loop over the “Check for the file” task... but we already are. To perform a loop of a loop we are going to leverage the include_tasks module. Our main playbook will now look like this:

---
- name: Try to survive and detect a reboot
  hosts: target_node
  gather_facts: False
  tasks:
    - include_tasks: run_check_test.yml
    - name: Post Task
      debug:
        msg: "This task is the end of our monitoring"

And the included file (run_check_test.yml) will perform our check and then conditionally include the same file again:

---
# Check for my file in a loop
- name: Check for the file
  file:
    path: "/tmp/john"
    state: file
  register: my_file
  until: my_file is succeeded
  # Keep trying until we find the file
  retries: 2
  delay: 1
  # It is ok if we can’t connect to the server during this task
  ignore_unreachable: True
 
 
# if I didn’t find the file or my target host was unreachable
# run again
- include_tasks: run_check_test.yml
  when:
    - my_file is not succeeded or my_file is unreachable

If you are familiar with programming you may realize that this could potentially make an infinite loop if the file is never found. To prevent an infinite loop we will add another task and an additional check on the include within our include file to limit the total number of tries we have:

---
- name: Check for the file
  file:
    path: "/tmp/john"
    state: file
  register: my_file
  until: my_file is succeeded
  # Keep trying until we find the file
  retries: 2
  delay: 1
  # It is ok if we can’t connect to the server during this task
  ignore_unreachable: True
 
- set_fact:
    safety_counter: "{{ safety_counter | default('2') | int - 1 }}"
 
- include_tasks: run_check_test.yml
  when:
    - (safety_counter | int > 0)
    - my_file is not succeeded or my_file is unreachable

With this change we are guaranteed to never run our "outer loop" more than two times. Again, this is a low number specifically for our testing. Let's run our updated code without a reboot and without the file:

TASK [include_tasks] *******************************************************************
included: run_check_test.yml for 192.168.0.31
TASK [Check for the file] *******************************************************************
FAILED - RETRYING: Check for the file (2 retries left).
FAILED - RETRYING: Check for the file (1 retries left).
fatal: [192.168.0.31]: FAILED! => {"attempts": 2, "changed": false, "msg": "file (/tmp/john) is absent, cannot continue", "path": "/tmp/john"}
PLAY RECAP *******************************************************************
192.168.0.31               : ok=2    changed=0    unreachable=0    failed=1    skipped=0    rescued=0    ignored=0

Did you expect that to happen? We got a failure in the until loop and Ansible bailed out instead of running the include file again. To prevent Ansible from failing on our loop step, we will add the ignore_errors parameter to our file check task:

- name: Check for the file
  file:
    path: "/tmp/john"
    state: file
  register: my_file
  until: my_file is succeeded
  # Keep trying until we find the file
  retries: 2
  delay: 1
  # It is ok if we can’t connect to the server during this task
  ignore_unreachable: True
  # If this step fails we want to continue processing so we loop
  ignore_errors: True

With this modification, running again with a reboot and no file, we now get this:

TASK [include_tasks] *******************************************************************
included: run_check_test.yml for 192.168.0.31
TASK [Check for the file] *******************************************************************
FAILED - RETRYING: Check for the file (2 retries left).
FAILED - RETRYING: Check for the file (1 retries left).
fatal: [192.168.0.31]: UNREACHABLE! => {"changed": false, "msg": "Failed to connect to the host via ssh: ssh: connect to host 192.168.0.31 port 22: Connection refused", "skip_reason": "Host 192.168.0.31 is unreachable", "unreachable": true}
TASK [set_fact] *******************************************************************
ok: [192.168.0.31]
TASK [include_tasks] *******************************************************************
included: run_check_test.yml for 192.168.0.31
TASK [Check for the file] *******************************************************************
fatal: [192.168.0.31]: UNREACHABLE! => {"changed": false, "msg": "Failed to connect to the host via ssh: ssh: connect to host 192.168.0.31 port 22: Connection refused", "skip_reason": "Host 192.168.0.31 is unreachable", "unreachable": true}
TASK [set_fact] *******************************************************************
ok: [192.168.0.31]
TASK [include_tasks] *******************************************************************
skipping: [192.168.0.31]
TASK [Post Task] *******************************************************************
ok: [192.168.0.31] => {
    "msg": "This task is the end of our monitoring"
}
PLAY RECAP *******************************************************************
192.168.0.31               : ok=6    changed=0    unreachable=2    failed=0    skipped=3    rescued=0    ignored=0

 

Slowing Things Down

You can't see the timing in the blog, but once the server was down from the reboot the loop executed extremely fast. Since the node was down, we looped around to the file check while the node was still rebooting, so the task immediately hit the ignore_unreachable condition, burning through our safety_counter in no time at all. To prevent this from happening we can add a sleep step in our include file and only execute it if the node is down:

- name: Sleep if the host was unreachable
  pause:
    seconds: 3
  when: my_file is unreachable
  delegate_to: localhost

This will tell Ansible to pause for three seconds if it found the target node was unreachable. A server in a production environment may take more time than three seconds to start so this number should be adjusted accordingly for your environment. 
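
An alternative to a fixed pause, not used in this example but worth mentioning, is the wait_for_connection module, which keeps polling the target until Ansible can connect again or a timeout expires:

- name: Wait for the host to come back after a reboot
  wait_for_connection:
    # Give the host a head start before the first connection attempt
    delay: 10
    # Give up if the host is not back within five minutes
    timeout: 300
  when: my_file is unreachable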

Another issue you might have spotted in our last output was that our last task ran indicating that we successfully found our file (which we did not). To handle this condition we want an error message to indicate that the patching was not completed within our retries. To achieve this we are going to add a conditional fail step in our main playbook after the loop. This could be an include_role or anything else you need to handle a failure in patching.

---
- name: Try to survive and detect a reboot
  hosts: target_node
  gather_facts: False
  tasks:
    - include_tasks: run_check_test.yml
    # Fail if we:
    #     never registered my_file or
    #     we failed to find the file or
    #     our loop ended with the server in an unreachable state
    - name: Fail if we didn't find the file
      fail:
        msg: "Patching failed within the timeframe specified"
      when: my_file is not defined or my_file is not succeeded or my_file is unreachable

    - name: Post Task
      debug:
        msg: "This task is the end of our monitoring"

 

The Completed Product

With all of these steps in this order, Ansible can now meet all our initial requirements:

  • It can launch a patching system
  • It can monitor something on the target node
  • It can survive an unspecified number of reboots
  • We can take additional actions if patching failed

Here are our final files with some additional in-line comments.

main.yml:

---
- name: Try to survive and detect a reboot
  hosts: all
  gather_facts: False
  tasks:
    # inside the include we will call the same include to perform a loop
    - include_tasks: run_check_test.yml
    # The my_file variable comes from the include; it's a registered variable from a task
    # Here we are going to force a failure if:
    #     we didn't get to run for some reason
    #     we didn't succeed
    #     we were unable to reach the target
    - name: Fail if we didn't get the file
      fail:
        msg: "It really failed"
      when: my_file is not defined or my_file is not succeeded or my_file is unreachable
    # Otherwise we can move on to any remaining tasks we have
    - name: Post task
      debug:
        msg: "This is the post task"

And our run_check_test.yml:

# Perform our sample file check
- name: Check for the file
  file:
    path: "/tmp/john"
    state: file
  register: my_file
  # As long as we are connected, keep trying until we find the file
  until: my_file is succeeded
  # If the machine is up, the retries keep looping, looking for the file and pausing for the delay
  retries: 2
  delay: 1
  # This setting will not mark the machine as failed if it's unreachable
  ignore_unreachable: True
  # We also need to ignore errors in case the file just does not exist on the server yet and we run out of retries
  ignore_errors: True
# If the machine was not available for the last step pause so we don't just cruise through the number of retries
- name: Sleep if the host was unreachable
  pause:
    seconds: 3
  when: my_file is unreachable
  delegate_to: localhost
# decrement a safety counter so we don't end up in an infinite loop
- set_fact:
    safety_counter: "{{ safety_counter | default('5') | int - 1 }}"
# Loop if:
#    we still have safety retries
#    we didn't find the file or we were unable to connect
- include_tasks: run_check_test.yml
  when:
    - (safety_counter | int > 0)
    - my_file is not succeeded or my_file is unreachable

To see an example of running our completed playbook while our target machine is rebooted twice before the file is created, check out this listing.

 

Takeaways and where to go next

Ansible is a great platform for performing automation. By putting together a few fundamental concepts, we can make Ansible extremely tolerant of unpredictable behaviour from client machines caused by processes running outside of our Ansible automation.

If you want to learn more about the Red Hat Ansible Automation Platform:

Now Available: Red Hat-Maintained Content Collections on Automation Hub


Today marks an important milestone for Red Hat Ansible Automation Platform subscribers: The initial release of Red Hat-maintained Ansible Content Collections has been published to Automation Hub for automating select platforms from Arista, AWS, Cisco, IBM, Juniper, Splunk and more. The addition of these 17 Red Hat-maintained Collections on Automation Hub brings the total number to 47 Collections certified and published since September 2019. Finally, we are thrilled to have Ansible Collections for automating Red Hat Insights and Red Hat Satellite included as part of this release as well.

Why is this significant? First, it is important to understand that the Ansible project has recently completed an effort to decouple the Ansible executable from most of the content, and all migrated content now resides in new upstream repositories on GitHub. This change has had a ripple effect on backend development, testing, publishing, and maintenance of Ansible content. The good news is that high quality features can now be delivered more quickly, asynchronously from Ansible releases. 

Today’s announcement highlights the successful culmination of the following: 

  1. Migration of Ansible-maintained content from Ansible project to Collections. 
  2. Releasing new features and functionality since Ansible 2.9, without having to wait until Ansible 2.10.
  3. Publishing of these fully supported Collections to Automation Hub for Ansible subscribers to download and install, while continuing to enable the community to contribute via Ansible Galaxy.

The end result is a set of fully supported Ansible Content Collections that include new feature enablement tested against stable Ansible releases. No longer do Red Hat customers need to wait 6-8 months for Ansible releases in order to consume new or updated Platform content. This also allows for the underlying Ansible execution engine to track for much longer release cadences in order to maintain stability, while content can be released at its own cadence.

A Quick History, and Moving Forward

We announced the foundational pieces that allow developers to create Ansible Content Collections at AnsibleFest Atlanta 2019, as part of the Ansible 2.9 upstream release and subsequently in the Red Hat Ansible Automation Platform product release. The writing was on the wall: the Ansible project was about to undergo extreme change, and Collections was the path forward. There is no doubt community and developer contributions have made Ansible what it is today, but in order to continue to expand and scale, a disaggregated approach to “execution vs. content” had to be taken for the Ansible open source project to continue on its growth path. This change was necessary and inevitable; Ansible had 2,000+ outstanding pull requests and 4,000+ open issues. The project was becoming difficult to triage, manage, and update, given that most approved changes had to go through a small group of project maintainers, who were often seen as gatekeepers unable to cope with the flood of contributions, issues, and requests. With these changes in place, the Ansible project maintainers can now focus on hardening, security, and performance improvements, while module, plugin, and role developers can release updated content at their own development cadence.

As my colleague Brad Thornton said in an Ansible team chat channel, “I went on to tell my friends how excited I am about our Collections release and how we are trying to split the ‘how Ansible works’ from the ‘what Ansible can do,’ and built a team around that vision.”

We hear similar asks frequently from our enterprise customers - they want to worry less about the underlying “plumbing” and more about how to implement what Ansible enables across an IT organization. An enterprise automation strategy needs to span multiple domains, functions, and departments, and Red Hat Ansible Automation Platform was designed to enable teams to get automating quickly and efficiently. 

Collections Added

The following Collections are now available, and each is written, tested and maintained by Red Hat:

  • amazon.aws - Amazon AWS collection
  • ansible.netcommon - Ansible Netcommon collection
  • ansible.posix - Ansible Posix collection
  • ansible.tower - Ansible Tower collection
  • arista.eos - Arista EOS collection
  • cisco.asa - Cisco ASA collection
  • cisco.ios - Cisco IOS collection
  • cisco.iosxr - Cisco IOS XR collection
  • cisco.nxos - Cisco NX OS collection
  • frr.frr - Free Range Routing collection
  • ibm.qradar - IBM Qradar collection
  • junipernetworks.junos - Juniper JunOS collection
  • openvswitch.openvswitch - OpenvSwitch collection
  • redhat.insights - Red Hat Insights collection
  • redhat.satellite - Red Hat Satellite collection
  • splunk.es - Splunk Enterprise Security collection
  • vyos.vyos - VyOS collection

Full information about these collections can be found in the following Knowledgebase Article: https://access.redhat.com/articles/4993781

These Collections are fully supported by Red Hat and available with a Red Hat Ansible Automation Platform subscription. The equivalent open source versions are also available on Ansible Galaxy for community consumption. We are extremely excited to have Red Hat Insights and Red Hat Satellite Collections published as well, which provide Red Hat Insights and Red Hat Satellite customers end-to-end support with an Ansible subscription. We are planning for Collections to soon have their own extended support lifecycles to provide enterprises with content that is low risk and maintains backward compatibility. More information on this will be provided later this year.
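
As a pointer for getting started, downloading these Collections with the ansible-galaxy CLI generally requires Automation Hub to be configured as a Galaxy server. A minimal ansible.cfg sketch might look like the following; the token is a placeholder generated from your Automation Hub account, and the URLs reflect the documented endpoints at the time of writing:

[galaxy]
server_list = automation_hub, release_galaxy

[galaxy_server.automation_hub]
url=https://cloud.redhat.com/api/automation-hub/
auth_url=https://sso.redhat.com/auth/realms/redhat-external/protocol/openid-connect/token
token=<your Automation Hub token>

[galaxy_server.release_galaxy]
url=https://galaxy.ansible.com/

With that in place, a Collection from the list above can be installed with, for example, ansible-galaxy collection install amazon.aws.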

Please note that each of these Collections has many interesting new features. Stay tuned for follow-up deep dive blog posts that go into more detail on many popular network, security, and cloud platforms.

Summary / Wrap Up

This release marks a milestone that addresses many challenges we’ve heard over the past few years. We believe with recent upstream project changes, combined with downstream product enhancements, Red Hat customers benefit from the following:

  • Flexibility on feature development and release, while maintaining compatibility and stability against current Ansible versions

Decoupling content from the executable also means decoupling the support roadmap of the content from the executable, enabling content to be supported immediately on the current Ansible releases (and tested on future ones).

  • Better clarity on what content is fully supported vs. certified vs. community supported

A good rule of thumb is if the content is downloaded from Automation Hub it is either fully supported (Red Hat-maintained) or certified (partner-maintained), which means that Red Hat Support may be engaged to assist with any issues via a Red Hat Ansible Automation Platform subscription. Please note that content on Ansible Galaxy is community supported.

  • Strengthening developer communities without the “red tape” or project overhead

Ansible community contributors were already organizing themselves to best align with their interests, and they now have more freedom to contribute without lag or waiting for merges into the project. Collections themselves can now become their own sub-communities, without the overhead.

 

Resources and More Information

  • Register now for a webinar on July 14, 2020 that goes into detail around the supported collections, and an update from Ansible Engineering on the collections framework in general. The webinar will be recorded and posted on the Ansible Webinars and Training page.

  • If you are an active community contributor, a new addition to the Community Developer Guide, “Contributing to Ansible-maintained Collections” provides clarification on where and how to publish issues, and criteria for consideration.

  • Read the press release formally announcing the newly added Ansible Content Collections to Automation Hub as well as the components that were recently added to Red Hat Ansible Automation Platform.

  • More information on Partner Certified and Red Hat Supported Ansible Collections is available via the knowledgebase article.

Adding integration tests to Ansible Content Collections


In the previous installment of our "let us create the best Ansible Content Collection ever" saga, we covered the DigitalOcean-related content migration process. What we ended up with was a fully functioning Ansible Content Collection that unfortunately had no tests. But not for long; we will be adding an integration test for the droplet module.

 

We do not need tests, right?

If we were able to write perfect code all of the time, there would be no need for tests. But unfortunately, this is not how things work in real life. Any modestly useful software has deadlines attached, which usually means that developers need to strike a compromise between polish and delivery speed.

For us, the Ansible Content Collections authors, having a semi-decent Collection of integration tests has two main benefits:

  1. We know that the tested code paths function as expected and produce desired results.
  2. We can catch the breaking changes in the upstream product that we are trying to automate.

The second point is especially crucial in the Ansible world, where one team of developers is usually responsible for the upstream product, and a separate group maintains Ansible content.

With the "why integration tests" behind us, we can focus our attention on how to write them.

 

Setting up the environment

If you would like to follow along, you will need to have Ansible 2.9 or later installed. You will also need to clone the DigitalOcean Ansible Content Collection. The following commands will set up the environment:

$ mkdir -p ~/digital_ocean/ansible_collections/digital_ocean
$ cd ~/digital_ocean/ansible_collections/digital_ocean
$ git clone \
    https://github.com/xlab-si/digital_ocean.digital_ocean.git \
    digital_ocean
$ cd digital_ocean
$ export ANSIBLE_COLLECTIONS_PATHS=~/digital_ocean
$ ansible-doc digital_ocean.digital_ocean.droplet

If the last command printed the droplet module documentation, you are all set.

 

Manually testing Ansible modules

The most straightforward integration test for an Ansible module is a playbook that has two tasks. The first task executes the operation and the second task validates the results of the first task.

For example, to test that the droplet module created an instance with the correct parameters, we could use the following playbook.yaml file:

---
- hosts: localhost
  gather_facts: false
  name: Put DigitalOcean's droplet module through its paces

  tasks:
    - name: Create a new droplet
      digital_ocean.digital_ocean.droplet:
        oauth_token: "{{ do_api_token }}"
        name: test-droplet
        size: s-1vcpu-1gb
        region: fra1
        image: centos-8-x64
        unique_name: true
        tags: [ ansible, test, tags ]
      register: result

    - assert:
        that:
          - result is success
          - result is changed
          - "result.data.droplet.name == 'test-droplet'"
          - "result.data.droplet.size_slug == 's-1vcpu-1gb'"
          - "result.data.droplet.region.slug == 'fra1'"
          - "result.data.droplet.image.slug == 'centos-8-x64'"
          - "result.data.droplet.tags == ['ansible', 'test', 'tags']"
          - "result.data.droplet.status == 'active'"

To keep our DigitalOcean API token secure, we will place it in a separate file called vars.yaml:

---
do_api_token: 1a2b3c4d5e6f

Make sure you replace the API token with a real one. You can generate one in the API section of the DigitalOcean console.

When we run the ansible-playbook -e @vars.yaml playbook.yaml command, Ansible will print something like this to the terminal:

PLAY [Put DigitalOcean's droplet module through its paces] **********

TASK [Create a new droplet] *****************************************
changed: [localhost]

TASK [assert] *******************************************************
ok: [localhost] => {
	"changed": false,
	"msg": "All assertions passed"
}
PLAY RECAP **********************************************************
localhost        : ok=2       changed=1  unreachable=0  failed=0
                   skipped=0  rescued=0  ignored=0

The main workhorse of the previous example is the assert Ansible module. Each assert's condition is an Ansible test, and the assert task will fail if any of the listed conditionals evaluates to false.

There are a few other things that we should test: parameter handling, check mode and idempotence, to name a few. We excluded those tests from the blog post for brevity, but feel free to check the full playbook.yaml for more details.
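
To give a flavour of one of those omitted checks, an idempotence test could, as a rough sketch, simply repeat the droplet task and assert that the second run reports no change (unique_name: true is what makes the repeated run a no-op):

    - name: Create the same droplet again (idempotence check)
      digital_ocean.digital_ocean.droplet:
        oauth_token: "{{ do_api_token }}"
        name: test-droplet
        size: s-1vcpu-1gb
        region: fra1
        image: centos-8-x64
        unique_name: true
        tags: [ ansible, test, tags ]
      register: repeat_result

    - assert:
        that:
          - repeat_result is success
          - repeat_result is not changed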

And while manually testing modules is simple, it does not scale to more than a few modules. Usually, we would need to write a script that runs all of the tests. But luckily, Ansible comes bundled with a tool aptly called ansible-test that can do this for us.

 

Automate the automation tests

The ansible-test tool knows how to perform a wide variety of testing-related tasks, from linting module documentation and code to running unit and integration tests. But before we can use it, we must prepare a directory structure for it:

$ mkdir -p tests/integration/targets/droplet/tasks

We know that the directory structure is quite heavily nested, but there is a logical explanation for all these directories:

  1. The tests/integration directory is where all things related to integration tests live.
  2. The tests/integration/targets directory contains all our test cases. Each test case is a barebones Ansible role.
  3. The tests/integration/targets/droplet is the test case that we will be adding today. And since each test case is an Ansible role, it needs to have a tasks subdirectory containing a main.yml file.

Now we can start populating our tests/integration/targets/droplet/tasks/main.yml file. Because we already have the playbook for manually testing the droplet module, creating the main.yml file is as simple as copying the tasks from the playbook.
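
Putting the pieces together, the layout of the test target (a sketch showing only the files mentioned so far) looks like this:

tests/
└── integration/
    └── targets/
        └── droplet/
            └── tasks/
                └── main.yml    # the tasks copied from playbook.yaml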

As for the API token, we can copy the vars.yaml file content to tests/integration/integration_config.yml and ansible-test will pass any variables that are defined to our test cases.

And now we are ready to run the tests by executing the following command:

$ ansible-test integration

All that we need to do now is save the changes. But make sure you DO NOT commit the tests/integration/integration_config.yml file since it contains our DigitalOcean credentials.
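
One low-tech way to enforce that rule, offered here as a suggestion rather than something from the original setup, is to ignore the file in git:

$ echo "tests/integration/integration_config.yml" >> .gitignore
$ git add .gitignore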

To give our future selves some hints about the configuration options, we will create a template file, containing placeholders for real values. We will name this file integration_config.yml.template and populate it with the following content:

---
do_api_token: ${DO_API_TOKEN}

And we are done. Bye!

You want to see more, you say? I guess we could look at the GitHub Actions integration for the grand finale. Are you interested? Ok, let’s do it!

 

Integrating with CI/CD

Tests are useless if no one is running them. And since we all know that you cannot trust a programmer to run them locally, we will instead run them on the GitHub-provided CI/CD service.

It turns out that all we need to get things going is the following .github/workflows/test.yaml file:

name: Run DigitalOcean Ansible Integration Tests
on: [ push ]
jobs:
  integration:
    runs-on: ubuntu-latest
    defaults:
      run:
        working-directory: ansible_collections/digital_ocean/digital_ocean

    steps:
      - name: Clone the repo
        uses: actions/checkout@v2
        with:
          path: ansible_collections/digital_ocean/digital_ocean

      - name: Set up Python 3.7
        uses: actions/setup-python@v2
        with:
          python-version: 3.7

      - name: Install Ansible
        run: pip install ansible

      - name: Configure integration test run
        env:
          DO_API_TOKEN: ${{ secrets.DO_API_TOKEN }}
        run: |
          ./tests/utils/render.sh \
            tests/integration/integration_config.yml.template \
            > tests/integration/integration_config.yml

      - name: Run the integration tests
        run: ansible-test integration --python 3.7

The only exciting step in the workflow is the fourth one. It is responsible for creating the configuration file that contains our DigitalOcean API token. Consult the render.sh script for the gory details of template rendering.
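
The script itself is not reproduced here, but as a rough sketch, and assuming the template only uses ${VAR}-style placeholders as shown earlier, an equivalent could be a thin wrapper around envsubst from the gettext package:

#!/usr/bin/env bash
# Usage: render.sh <template-file>
# Replace ${VAR} placeholders with values from the environment and
# write the rendered result to stdout.
set -euo pipefail
envsubst < "$1"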

And where is the token stored? In GitHub's repository secrets storage. The official documentation lives here.

xlab blog 1

Once we have our secrets in place and workflow description committed, we can push our changes to GitHub and enjoy some well-deserved Jenkins cinema.

xlab blog 2

 

Is there more?

We have just scratched the surface when it comes to testing. And while having integration tests for modules is a great start, there are other things that we should test if we are serious about creating a robust Ansible Content Collection.

If you want to learn more about:

  1. testing the built-in documentation,
  2. linting the modules,
  3. writing unit tests,
  4. preparing integration tests for other kinds of Ansible plugins, and
  5. integrating with other CI/CD providers,

make sure to check out our upcoming webinar about Ansible testing.

Cheers!

Automating Red Hat Satellite with Ansible


Red Hat Satellite is a great tool to automate deployment, provisioning, patching and configuration of your infrastructure, but how can you automate Satellite itself?

Using the Red Hat Ansible Automation Platform and the Satellite Ansible Content Collection, of course!

Since you’re already tuning in, you probably don’t need convincing that automation is great; it helps enable easier collaboration, better accountability and easier reproducibility. But have you already heard about Collections?

We'll show you how you can use the Satellite Ansible Content Collection to manage your Satellite installations via Ansible.

What is the Satellite Ansible Content Collection?

The Satellite Ansible Content Collection is, as you might have guessed already, a set of Ansible modules and plugins to interact with Red Hat Satellite.

These modules are an evolution of the foreman and katello modules previously available in Ansible itself, as those have been deprecated since Ansible 2.8 and are scheduled for removal in 2.12. Due to the use of a Satellite-specific library, the old modules would not work properly in plain Foreman setups and often lacked features that were not present in Red Hat Satellite. At the same time, using the modules together with Satellite wasn't easy either, as the library only supported specific Satellite releases and you had to find the right library version for your Satellite installation yourself.

Over the past year, the community sat together, cleaned up the modules, created tests and documentation, and finally ported the modules to a Satellite independent library.

Today, we cover many core Satellite workflows and examples. We would also love your feedback to extend to other workflows like OpenSCAP and Remote Execution.

Where can the Collection be downloaded?

You can download the redhat.satellite Collection from Automation Hub (requires Ansible Automation Platform subscription) immediately (along with the updated Satellite 6.7.z erratum), or wait for the forthcoming ansible-collection-redhat-satellite RPM from the Satellite 6.8 repositories later this year. When installing from Automation Hub, you’ll also have to make sure you have the latest apypie Python library now available from the Satellite repository.

The community can download and contribute to the corresponding unsupported upstream theforeman.foreman Collection from Ansible Galaxy as well.

For information on how to configure downloading via the ansible.cfg or requirements.yml files, please refer to the blog entitled "Hands On With Ansible Collections".
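
For a concrete sketch of the requirements.yml route (assuming Automation Hub is already configured as a Galaxy server, as that blog describes), the Collection can be listed like this:

---
collections:
  - name: redhat.satellite

and then installed with ansible-galaxy collection install -r requirements.yml.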

How can the modules be used?

Usually you’ll find one module per Satellite entity (Organization, Location, Host Group, etc.) or action (Repository Sync, Content Upload, etc.). Each module takes a set of common parameters:

  • server_url: the URL of your Satellite instance (e.g. https://satellite.example.com)
  • username: the login of the user that will be used for API authentication (e.g. admin)
  • password: the password of said user (e.g. changeme)
  • validate_certs: whether or not to validate the TLS certificates the server presents

For example, if you’re about to create a new domain, the task in your Ansible playbook will look like this:

- name: create example.org domain
  redhat.satellite.domain:
    name: example.org
    state: present
    server_url: https://satellite.example.com
    username: admin
    password: changeme

That’s it! All modules follow the same basic calling convention and you’re set up using them in your environment. Now is a good time to look through the list of available modules and start writing playbooks for the most common workflows.

Examples

The previous example was quite short. Here are a few real world examples of how we use the modules today. For the sake of readability, the server_url, username and password parameters were omitted.

Enable and sync a repository from the CDN and add it to a Content View

One very common workflow is to sync content from the Red Hat CDN and then publish it to the clients. For that, the following steps need to happen:

  1. The repository set needs to be enabled, which will create all the necessary products in Satellite. This is a step that needs to happen once.
  2. The repository needs to be synced. This will usually happen regularly either by executing the workflow from Tower on a schedule or by creating a Sync Plan in Satellite. We show the scheduled variant here.
  3. A content view needs to exist and contain the repository in question, so that clients can consume it.
  4. The content view needs to be published to get the newly synced content.
- hosts: localhost
  vars:
    content_view: RHEL
    product: "Red Hat Enterprise Linux Server"
    repo: "Red Hat Enterprise Linux 7 Server (RPMs)"
    repo_variants:
      - releasever: "7Server"
        basearch: "x86_64"
    organization: ACME
  tasks:
    - name: "Enable {{ repo }} repository"
      redhat.satellite.repository_set:
        name: "{{ repo }}"
        product: "{{ product }}"
        repositories: "{{ repo_variants }}"
        organization: "{{ oragnization }}"
        state: enabled
    - name: "Sync {{ repo }} repository"
      redhat.satellite.sync:
        repository: "{{ repo }}"
        product: "{{ product }}"
        organization: "{{ organization }}"
    - name: "Create RHEL ContentView"
      redhat.satellite.content_view:
        name: "{{ content_view }}"
        repositories:
          - name: "{{ repo }}"
            product: "{{ product }}"
        organization: "{{ organization }}"
        state: present
    - name: "Publish RHEL content view"
      redhat.satellite.content_view_version:
        content_view: "{{ content_view }}"
        organization: "{{ organization }}"

Create Lifecycle Environment and Activation Key

Another common workflow is to organize system updates in Lifecycle Environments. This allows clients to vary on patching cadence and enables the use of a set of machines as a testing environment. To achieve that, we first create a Lifecycle Environment and then an Activation Key that “points” to that Lifecycle Environment. Now when a system is set up, utilizing this Activation Key in the registration step will allow the admin to assign the system directly into the correct environment.

- hosts: localhost
  vars:
    activation_key: rhel
    lifecycle_env: Test
    content_view: RHEL
    subscriptions:
      - name: "Red Hat Enterprise Linux"
    organization: ACME
  tasks:
    - name: "Create {{ lifecycle_env }} LCE"
      redhat.satellite.lifecycle_environment:
        name: "{{ lifecycle_env }}"
        prior: "Library"
        organization: "{{ organization }}"
        state: present
    - name: "Create {{ activation_key }}-{{ lifecycle_env }} Activation Key"
      redhat.satellite.activation_key:
        name: "{{ activation_key }}-{{ lifecycle_env }}"
        lifecycle_environment: "{{ lifecycle_env }}"
        content_view: "{{ content_view }}"
        subscriptions: "{{ subscriptions }}"
        organization: "{{ organization }}"
        state: present

Takeaways/Going Forward

Now is the best time to try out the Collection - we’d love to hear about the workflows that you implement with it and especially the ones that you’re still missing so we can make the Collection even better!

If you want to learn more, check out the following resources:

Ansible security automation resource modules


Security professionals are increasingly adopting automation as a way to help unify security operations into structured workflows that can reduce operational complexity, human error, time to respond and can be integrated into existing SIEM (Security Information and Event Management) or SOAR (Security Orchestration Automation and Response) platforms.

In October of 2019 the Ansible network automation team introduced the concept of resource modules:

So what exactly is a “resource module?” Sections of a device’s configuration can be thought of as a resource provided by that device. Network resource modules are intentionally scoped to configure a single resource and can be combined as building blocks to configure complex network services.

Keep in mind that the first network automation modules could either execute arbitrary commands on target devices, or read in the device configuration from a file and deploy it. These modules were quite generic and provided no fine-tuning of certain services or resources.

In contrast, resource modules can make network automation easier and more consistent for those automating multiple platforms in production by avoiding large configuration file templates covering all kinds of configuration. Instead they focus on the task at hand, providing separate building blocks which can be used to describe complex configurations.

The same principle can be applied in the security space, and we started exploring the possibility with Ansible security automation.

 

Resource modules in security automation

In security automation, many Collections already have more refined modules targeting use cases or workflows of the corresponding target environment. Therefore, there is little standardization or generic abstraction in terms of product agnostic resources.

For example, if you have a closer look at our investigation enrichment blog post, you will see that while we used a certain amount of modules, those were usually very product specific and didn’t offer much in terms of generic resources.

At the same time, security automation does cover many tasks where resource modules can add a lot of value. Whether it is granting and denying access to networks via Access Control Lists (ACLs) or policies, the management of rules in IDPS systems or the log forwarding of nodes to a central SIEM: all those tasks are often executed on a well-defined resource across multiple products, which makes these tasks good candidates to be helped by resource modules.

 

Security automation resource modules for Access Control Lists

Following this line of thought we have started to introduce ACLs resource modules within Ansible. ACLs can help provide a first layer of security when applied to interfaces, or globally as access rules, as they permit or deny traffic flows in firewalls.

Within an ACL, the order of Access Control Entries (ACEs) is crucial, since appliances decide whether traffic is allowed or denied based on the ACE sequence. Given this background, an ACL resource module provides the same level of functionality that a user can achieve when manually changing the configuration on a corresponding device. However, the ACL resource module comes with the advantages of Ansible:

  • Automating things using Ansible can accelerate the time to become productive.
  • Ansible is powerful and users can automate a wide variety of tasks, at both the user and enterprise level. This helps to orchestrate the complete app lifecycle, including the ACLs, and makes security automation part of the app deployment process and the entire technical business process.
  • Ansible has agentless architecture which uses the native communication protocols of the managed target nodes. This avoids the need to introduce and install new software and new security protocols in the managed environments.
  • Last but not least, with the help of Ansible’s fact gathering, the data structures of managed nodes can be collected and made accessible in an efficient manner.

Please note that the naming convention for the new ACL resource modules uses the plural form instead of singular: “acls” instead of “acl”. If the platform you’re automating has modules with both names, the plural form of the module is the newer one corresponding to the resource module initiative. The singular form of the module will likely be deprecated in a future release. This distinction was introduced to ease the transition to resource modules and avoid disruption of the current automated workflows.

 

Example: Cisco ASA ACLs

A good way to understand the new ACLs resource modules is via an example. For this, let’s have a look at the Cisco ASA Collection which targets the Cisco Adaptive Security Appliance family of security devices. In this Collection you will find a module called asa_acls which is the resource module to manage named or numbered ACLs on ASA devices.

As an example, let's first check the current configuration. For that, we can use the capability of the module to gather the existing ACLs configuration:

---
- name: Get structured data
  hosts: cisco
  gather_facts: false
  collections:
   - cisco.asa

  tasks:
  - name: Gather facts
    asa_acls:
      state: gathered
    register: gather
  - name: Output data
    debug:
      var: gather

The output will be something along the lines of:

- acls:
   - aces:
       - destination:
           address: 192.0.3.0
           netmask: 255.255.255.0
           port_protocol:
                 eq: www
         grant: deny
…

Note that the output generated this way is purely focused on the resource at hand - ACLs. This is in contrast to generic fact gathering, where more data is provided, making it difficult to keep an overview and handle the data subsequently.

Given the configuration at hand, let’s assume for the sake of this example that we analyze the gathered configuration, and want to make a change to it. The next configuration looks like:

- acls:
  - name: global_access
    acl_type: extended
    aces:
    - grant: deny
      line: 1
      protocol_options:
        tcp: true
      source:
        address: 192.0.4.0
        netmask: 255.255.255.0
        port_protocol:
          eq: telnet
      destination:
        address: 192.0.5.0
        netmask: 255.255.255.0
        port_protocol:
          eq: www

This configuration describes that access from a defined source to a target is denied. Note that this entire definition is mostly product agnostic and can be used with other systems as well.

Given that this description is available as the variable acls, it can be deployed with the asa_acls module:

---
- name: Replace ACLs device configuration
  hosts: cisco
  gather_facts: false
  collections:
   - cisco.asa

  tasks:
  - name: Replaces device configuration of listed acls
    asa_acls:
      config: "{{ acls }}"
      state: replaced

As you see it is possible to apply an existing resource description to an existing device. Resource modules allow the user to read in existing configuration and convert that into a structured data model. These data models can be used as a base to further deploy changed configuration on the target nodes.
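
As an aside, and assuming the registered result exposes the facts under the gathered key as resource modules typically do, the structured data from the earlier example could also be saved for review or later reuse:

  - name: Save the gathered ACLs for later reuse
    copy:
      content: "{{ gather.gathered | to_nice_yaml }}"
      dest: ./asa_acls_gathered.yml
    delegate_to: localhost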

 

Takeaways and where to go next

Security professionals are in need of unification of their operational workflows. Automation helps - even more so if the platform it is running on provides a simpler means to control otherwise rather complex structures. The Ansible security automation resource modules provided are a building block in standardizing automation actions.

If you want to follow up on this topic, there are many steps you can take next:

Also we are planning to publish a follow up blog post providing a more deep-dive view into the cisco_acls module and its capabilities. Stay tuned!

Simplifying secrets management with CyberArk and Red Hat Ansible Automation Platform


Access credentials and secrets are a crucial piece of today’s infrastructure management: if they get compromised, the environment itself is at risk. Thus some time ago, back at about version 3.5.1, the idea of a secrets management system was introduced into Ansible Tower, one of the components of our Red Hat Ansible Automation Platform. What this essentially means is that Ansible Tower has a credential store where it will encrypt at-rest secrets that you need in order to log in to a remote host, authenticate with a cloud endpoint or pull content from a version control system. 

We have always needed secrets in order to log in and then configure a remote resource. We do this every day with usernames and passwords. Ansible Tower has a very secure built-in mechanism for providing this capability, but some may see that as an additional security island or bespoke to the enterprise direction. In this blog post, I will highlight the Ansible way of solving the “security island” problem and propose a solution using Ansible credential plugins integration via CyberArk Conjur. Conjur is an API addressable vault where you store access and authorization information instead of having the secrets stored in Ansible Tower. An integration like this solves many enterprise privilege access problems, like least privileged access and password resets, as well as storing important auditing information centrally.

This blog post provides a step-by-step guide on how to deploy a container-based CyberArk Conjur application and integrate it with Ansible Tower.

 

Deploy a container host

To get started, launch yourself a new Linux instance. In our example we use a Red Hat Enterprise Linux 8. If you don’t have access to RHEL 8 yet, check out the RHEL 8 developer program. Otherwise, you may have issues using some of the commands or getting access to content within these instructions. Once your Linux instance is accessible, log into it and gain root privileges to run the following sequence of commands that will prepare the host with all the software dependencies.

Note: Install CyberArk Conjur and Ansible Tower onto a single host if you prefer an all-in-one solution. Please refer to the installation steps on docs.ansible.com if you don’t have an Ansible Tower system at your disposal. I’d also recommend reading through this article for details on the usage of Podman.

# dnf -y install python3 git podman
# curl -L "https://raw.githubusercontent.com/sheeshkebab/podman-compose-1/devel/podman_compose.py" -o /usr/local/bin/podman-compose
# chmod +x /usr/local/bin/podman-compose
# alternatives --install /usr/bin/podman-compose podman-compose /usr/local/bin/podman-compose 1
# cd /var/tmp

 

Set up Conjur

The setup of Conjur is rather simple, and others have already described it perfectly. Thus, complete these steps right up to the subheading “Integrating Ansible”. If you follow the guide correctly, you will end up with a development version of Conjur running inside a container and a service account / password configured within.
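Although the linked guide covers the policy details, it helps to know roughly what the resulting secret layout looks like. A minimal, hypothetical Conjur policy providing the db/host1/user and db/host1/pass variables referenced later in this post could look like this - treat it as a sketch, your actual policy from the guide may differ:

- !policy
  id: db
  body:
    # variables holding the service account name and its password
    - !variable host1/user
    - !variable host1/pass

After loading such a policy and populating the two variables with the account name (service01 in our example) and its password, they can be referenced from Ansible Tower via the secret identifiers db/host1/user and db/host1/pass.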

I performed a few additional steps by setting up a load balancer instance that exposes my Conjur service to the outside world with a valid SSL certificate and DNS name.

 

Take it for a spin

Now that you have a working containerised Conjur service, you are ready to integrate it with Ansible Tower. Let's start by defining a credential within Ansible Tower that allows us to authenticate against the Conjur service API.

To create a credential in Ansible Tower, click on “Credentials” in the left navigation bar and create a new one. Fill in the details as shown below and note the “Credential Type”.

  • The Conjur URL needs to point to your Conjur installation. If you have installed all on one host, this is localhost. In my case it references my public load balancer. Conjur listens on port 8443.
  • The API key can be found within the contents of your 'ansible.out' file that was created during the installation.

Hailstone blog 1

Once your credential data has been entered, click the blue Test button and populate the SECRET IDENTIFIER as per the screenshot below. Note: the secret identifier matches what you set up earlier in your Conjur policy - specifically, the location of the username "service01". I've given my Conjur account a very simple annotation, "Conjur-API-Account"; we will need this again soon.

hailstone blog 2

If you get a green box, things are configured correctly. Adding it as per the screenshot should just work; if you get a red box indicating failures, check that firewall rules and routing between your Conjur and Ansible Tower systems are configured correctly and confirm that you are using the correct secret identifier.

 

Add a new credential into Ansible Tower

Let’s put the freshly created Conjur credential to use and create a new credential of type “Machine” that authenticates via Conjur. In the left navigation bar of Ansible Tower, click on “Credentials”, create a new one of type “Machine” and give it a name and organization. Machine credentials perform the authentication against automated endpoints. 

In our example the username will be ‘service01’ because it matches the key stored at db/host1/user in Conjur. 

The password field will be auto-populated via an API call against your Conjur service. Select the 'magnifying glass' within the password field of your new credential and locate your CyberArk credential that was created earlier.

hailstone blog 3

Populate the metadata field with db/host1/pass so that the password returned from our API lookup against Conjur matches our ‘service01’ account.

hailstone blog 4

Finally, click OK and then save your machine credential. You are done configuring the credential within Ansible Tower. 

To test out the entire integration and automated password lookup, build another Linux host and add the ‘service01’ account as a user with the same password. This would obviously work equally well if your host was connected to a centralised LDAP directory.

# useradd -m service01
# passwd service01 < 'contents from db/host1/pass'

Once that is done, add your test host to Ansible Tower’s inventory and run a ping module test.

A ping test is ideal because it will confirm that Ansible Tower was able to connect to your Linux host, authenticate and confirm that it is up. The ping test should output a result of type pong.
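If you prefer to run this check from a playbook rather than an ad hoc job, a minimal sketch could look like the following - note that the group name conjur_test is an assumption; use whatever name you gave your test host in the Ansible Tower inventory:

---
- name: Verify the Conjur-backed machine credential
  hosts: conjur_test
  gather_facts: false

  tasks:
    - name: Check connectivity and authentication as service01
      ping:

Run it as a job template with the machine credential created above; a successful run returns pong for the test host.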

CyberArk’s security solutions provide much greater capabilities than what is referenced in this blog. Feel free to explore further.

 

Takeaways and where to go next

Many organizations have enterprise vaults for storing passwords and credentials. Integrating with Ansible Tower means that you can avoid having another island for storing credentials. Plus, you will keep your security department happy by not introducing a new process into the authentication and access model.

If you want to learn more about CyberArk’s solution or the Red Hat Ansible Automation Platform, read one of the following articles:

Happy automating!

Deep dive on Cisco ASA resource modules


Recently, we published our thoughts on resource modules applied to the use cases targeted by the Ansible security automation initiative. The principle is well known from the network automation space and we follow the established path. While the last blog post covered a few basic examples, we’d like to show more detailed use cases and how those can be solved with resource modules.

This blog post goes in depth into the new Cisco ASA Content Collection, which was already introduced in the previous article. We will walk through several examples and describe the use cases and how we envision the Collection being used in real world scenarios.

 

The Cisco ASA Certified Content Collection: what is it about?

The Cisco ASA Content Collection provides the means to automate the Cisco Adaptive Security Appliance family of security devices - Cisco ASA for short, hence the name. With their focus on firewall and network security, these devices are well known in the market.

The aim of the Collection is to integrate the Cisco ASA devices into automated security workflows. For this, the Collection provides modules to automate generic commands and config interaction with the devices as well as resource oriented automation of access control lists (ACLs) and object groups (OGs).

 

How to install the Cisco ASA Certified Ansible Content Collection

The Cisco ASA Collection is available to Red Hat Ansible Automation Platform customers at Automation Hub, our software as a service offering on cloud.redhat.com and a place for Red Hat subscribers to quickly find and use content that is supported by Red Hat and our technology partners.

Read more about Automation Hub in the blog post “Getting Started with Automation Hub”. There you will also learn how to configure your Ansible command line tools to access Automation Hub for collection downloads.

Once that is done, the Collection is easily installed:

ansible-galaxy collection install cisco.asa

Alternatively you can also find the collection in Ansible Galaxy, our open source hub for sharing content in the community.

 

What’s in the Cisco ASA Content Collection?

The focus of the Collection is on the mentioned modules (and the plugins supporting them): there are three modules for basic interaction, asa_facts, asa_cli and asa_config. If you are familiar with other networking and firewall Collections and modules of Ansible, you will recognize this pattern: these three modules provide the simplest way of interacting with networking and firewall solutions. Using them, general data can be received, arbitrary commands can be sent and configuration sections can be managed.

While these modules already provide great value for environments where the devices are not automated at all, the focus of this blog article is on the other modules in the Collection: the resource modules asa_ogs and asa_acls. Being resource modules, they have a limited scope but enable users of the Collection to focus on that particular resource without being distracted by other content or configuration items. They also enable simpler cross-product automation since other Collections follow the same pattern.

If you take a closer look, you will find two more modules: asa_og and asa_acl. As mentioned in our first blog post about security automation resource modules, those are deprecated modules which were previously used to configure OGs and ACLs. They are superseded by the resource modules.

 

Connect to Cisco ASA, the Collection way

The Collection supports network_cli as a connection type. Together with the network OS cisco.asa.asa, a username and a password, you are good to go. To get started quickly, you can simply provide these details as part of the variables in the inventory:

[asa01]
host_asa.example.com

[asa01:vars]
ansible_user=admin
ansible_ssh_pass=password
ansible_become=true
ansible_become_method=ansible.netcommon.enable
ansible_become_pass=become_password
ansible_connection=ansible.netcommon.network_cli
ansible_network_os=cisco.asa.asa
ansible_python_interpreter=python

Note that in a production environment those variables should be supplied in a secure way, for example with the help of Ansible Tower credentials.

 

Use Case: ACLs

After all this is set up, we are ready to dive into the actual Collection and how it can be used. For the first use case, we want to look at managing ACLs within ASA. Before we dive into Ansible Playbook examples, let’s quickly discuss what ASA ACLs are and what an automation practitioner should be aware of.

ASA access lists are created globally and are then applied with the "access-group" command. They can be applied either inbound or outbound. There are a few things users should be aware of with respect to access lists on the Cisco ASA firewall:

  • When a user creates an ACL for a higher to lower security level, i.e. outbound traffic, the source IP address is the address of the host or the network (not the NAT translated one).
  • When a user creates an ACL for a lower to higher security level, i.e. inbound traffic, the destination IP address has to be one of the following two:
    • The translated address for any ASA version before 8.3.
    • The address for ASA 8.3 and newer.
  • The access-list is always checked before NAT translation.

Additionally, changing ACLs can become very complex quickly. It is not only about the configuration itself, but also the intent of the automation practitioner: should a new ACL just be added to the existing configuration? Or should it replace it? And what about merging them?

The answer to these questions usually depends on the environment and situation the change is deployed in. The different ways of changing ACLs are noted here and in the Cisco ASA Content Collection as “states”: different ways to deploy changes to ACLs.

The ACLs module knows the following states:

  • Gathered
  • Merged
  • Overridden
  • Replaced
  • Deleted
  • Rendered
  • Parsed

In this use case discussion, we will have a look at all of them, though not always in full detail. However, we will provide links to full code listings for the interested readers.

Please note that while we usually use network addresses for the source and destination examples, other values like network object-groups are also possible.
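For example, an ACE that references a network object group instead of a network address could be described roughly like this. This is a sketch: the group test_og_network is made up and would need to exist on the device (see the OGs use case below), and the object_group key should be double-checked against the asa_acls module documentation of your Collection version:

- acls:
   - name: test_og_access
     acl_type: extended
     aces:
       - grant: deny
         line: 1
         protocol_options:
           tcp: true
         source:
           object_group: test_og_network
         destination:
           address: 192.0.3.0
           netmask: 255.255.255.0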

 

State Gathered: Populating an inventory with configuration data

Given that resource modules allow reading in existing network configuration and converting it into structured data models, the state "gathered" is the equivalent of gathering Ansible facts for this specific resource. That is helpful if specific configuration pieces should be reused as variables later on. Another use case is to read in the existing network configuration and store it as a flat file. This flat file can be committed to a git repository on a scheduled basis, effectively tracking the current configuration and changes of your security tooling.

To showcase how to store existing configuration as a flat file, let’s take the following device configuration:

ciscoasa# sh access-list
access-list cached ACL log flows: total 0, denied 0 (deny-flow-max 4096)
            alert-interval 300
access-list test_access; 2 elements; name hash: 0x96b5d78b
access-list test_access line 1 extended deny tcp 192.0.2.0 255.255.255.0 192.0.3.0 255.255.255.0 eq www log default (hitcnt=0) 0xdc46eb6e
access-list test_access line 2 extended deny icmp 198.51.100.0 255.255.255.0 198.51.110.0 255.255.255.0 alternate-address log errors interval 300 (hitcnt=0) 0x68f0b1cd
access-list test_R1_traffic; 1 elements; name hash: 0x2c20a0c
access-list test_R1_traffic line 1 extended deny tcp 2001:db8:0:3::/64 eq www 2001:fc8:0:4::/64 eq telnet (hitcnt=0) 0x11821a52

To gather and store the content as mentioned above, we need to first gather the data from each device, then create a directory structure mapping our devices and then store the configuration there, in our case as YAML files. The following playbook does exactly that. Note the parameter state: gathered in the first task.

---
- name: Convert ACLs to structured data
  hosts: asa
  gather_facts: false
  tasks:

    - name: Gather facts
      cisco.asa.asa_acls:
        state: gathered
      register: gather

    - name: Create inventory directory
      become: true
      delegate_to: localhost
      file:
        path: "{{ inventory_dir }}/host_vars/{{ inventory_hostname }}"
        state: directory

    - name: Write each resource to a file
      become: true
      delegate_to: localhost
      copy:
        content: "{{ gather['gathered'][0] | to_nice_yaml }}"
        dest: "{{ inventory_dir }}/host_vars/{{ inventory_hostname }}/acls.yaml"

The state “gathered” only collects existing data. In contrast to most other states, it does not change any configuration. The resulting data structure from reading in a brownfield configuration can be seen below:

$ cat lab_inventory/host_vars/ciscoasa/acls.yaml
- acls:
   - aces:
       - destination:
           address: 192.0.3.0
           netmask: 255.255.255.0
           port_protocol:
                 eq: www
         grant: deny
         line: 1
         log: default
         protocol: tcp
         protocol_options:
           tcp: true
         source:
           address: 192.0.2.0
           netmask: 255.255.255.0
...

You can find the full detailed listing of all the commands and outputs of this example in the state: gathered reference gist.

 

State Merged: Add/Update configuration

After the first, non-changing state, we now have a look at a state which changes the target configuration: "merged". This is also the default state for any of the available resource modules, because it simply adds or updates the configuration provided by the user. Plain and simple.

For example, let’s take the following existing device configuration:

ciscoasa# sh access-list
access-list cached ACL log flows: total 0, denied 0 (deny-flow-max 4096)
            alert-interval 300
access-list test_access; 1 elements; name hash: 0x96b5d78b
access-list test_access line 1 extended deny tcp 192.0.2.0 255.255.255.0 192.0.3.0 255.255.255.0 eq www log debugging interval 300 (hitcnt=0) 0xdc46eb6e

Let us assume we want to deploy the configuration which we stored as a flat-file in the gathered example. Note that the content of the flat file is basically one variable called “acls”. Given this flat file and the variable name, we can use the following playbook to deploy the configuration on a device:

---
- name: Merged state play
  hosts: cisco
  gather_facts: false
  collections:
   - cisco.asa

  tasks:
    - name: Merge ACLs config with device existing ACLs config
      asa_acls:
        state: merged
        config: "{{ acls }}"

Once we run this merge play, all of the provided parameters are pushed and configured on the Cisco ASA appliance.

Afterwards, the network device configuration is changed:

ciscoasa# sh access-list
access-list cached ACL log flows: total 0, denied 0 (deny-flow-max 4096)
            alert-interval 300
access-list test_access; 2 elements; name hash: 0x96b5d78b
access-list test_access line 1 extended deny tcp 192.0.2.0 255.255.255.0 192.0.3.0 255.255.255.0 eq www log default (hitcnt=0) 0xdc46eb6e
access-list test_access line 2 extended deny icmp 198.51.100.0 255.255.255.0 198.51.110.0 255.255.255.0 alternate-address log errors interval 300 (hitcnt=0) 0x68f0b1cd
access-list test_R1_traffic; 1 elements; name hash: 0x2c20a0c
access-list test_R1_traffic line 1 extended deny tcp 2001:db8:0:3::/64 eq www 2001:fc8:0:4::/64 eq telnet (hitcnt=0) 0x11821a52

All the changes we described in the playbook with the resource modules are now in place in the device configuration.

If we dig into the device output, we can make the following observations:

  • The merge play configured 2 ACLs:
    • test_access, configured with 2 Access Control Entries (ACEs)
    • test_R1_traffic, configured with only 1 ACE
  • test_access is an IPv4 ACL. For its first ACE we specified the line number as 1, while for the second ACE we only specified the name, which is the only required parameter. All other parameters are optional and can be chosen depending on the particular ACE policies. Note, however, that it is considered best practice to configure the line number if we want to avoid an ACE being appended as the last entry of an ACL.
  • test_R1_traffic is an IPv6 ACL.
  • As there weren’t any pre-existing ACLs on this device, all the play configurations have been added. If there had been pre-existing ACLs and the play contained the same ACL with either different ACEs or the same ACEs with different configurations, the merge operation would have updated the existing ACL configuration with the newly provided one.

Another benefit of automation shows when we run the same merge play a second time: Ansible’s idempotency comes into the picture! The play run results in "changed=False", which confirms that all of the configurations provided in the play are already present on the Cisco ASA device.

You can find the full detailed listing of all the commands and outputs of this example in the state: merged reference gist.

 

State Replaced: Old out, new in

Another typical situation is when a device is already configured with an ACL with existing ACEs, and the automation practitioner wants to update the ACL with a new set of ACEs while entirely discarding all the already configured ones.

In this scenario the state "replaced" is an ideal choice: as the name suggests, the replaced state replaces an ACL's existing ACEs with the new set of ACEs given as input by the user. If the user configures any new ACLs that are not already present on the device, the module acts as in the merged state and configures the ACL ACEs given as input in the replace play.

Let’s take the following brownfield configuration:

ciscoasa# sh access-list
access-list cached ACL log flows: total 0, denied 0 (deny-flow-max 4096)
            alert-interval 300
access-list test_access; 2 elements; name hash: 0x96b5d78b
access-list test_access line 1 extended deny tcp 192.0.2.0 255.255.255.0 192.0.3.0 255.255.255.0 eq www log default (hitcnt=0) 0xdc46eb6e
access-list test_access line 2 extended deny icmp 198.51.100.0 255.255.255.0 198.51.110.0 255.255.255.0 alternate-address log errors interval 300 (hitcnt=0) 0x68f0b1cd
access-list test_R1_traffic; 1 elements; name hash: 0x2c20a0c
access-list test_R1_traffic line 1 extended deny tcp 2001:db8:0:3::/64 eq www 2001:fc8:0:4::/64 eq telnet (hitcnt=0) 0x11821a52

Now we assume we want to configure a new ACL named “test_global_access”, and we want to replace the already existing “test_access” ACL configuration with a new source and destination IP. The corresponding ACL configuration for our new desired state is:

- acls:
   - name: test_access
     acl_type: extended
     aces:
       - grant: deny
         line: 1
         protocol: tcp
         protocol_options:
           tcp: true
         source:
           address: 192.0.3.0
           netmask: 255.255.255.0
         destination:
           address: 192.0.4.0
           netmask: 255.255.255.0
           port_protocol:
             eq: www
         log: default
   - name: test_global_access
     acl_type: extended
     aces:
       - grant: deny
         line: 1
         protocol_options:
           tcp: true
         source:
           address: 192.0.4.0
           netmask: 255.255.255.0
           port_protocol:
             eq: telnet
         destination:
           address: 192.0.5.0
           netmask: 255.255.255.0
           port_protocol:
             eq: www

Note that the definition is again effectively contained in the variable “acls” - which we can reference as a value for the “config” parameter of the asa_acls module just as we did in the last example. Only the value for the state parameter is different this time:

---
- name: Replaced state play
  hosts: cisco
  gather_facts: false
  collections:
   - cisco.asa

  tasks:
    - name: Replace ACLs config with device existing ACLs config
      asa_acls:
        state: replaced
        config: "{{ acls }}"

After running the playbook, the network device configuration has changed as intended: the old configuration was replaced with the new one. In cases where there was no corresponding configuration in place to be replaced, the new one was added:

ciscoasa# sh access-list
access-list cached ACL log flows: total 0, denied 0 (deny-flow-max 4096)
            alert-interval 300
access-list test_access; 1 elements; name hash: 0x96b5d78b
access-list test_access line 1 extended deny tcp 192.0.3.0 255.255.255.0 192.0.4.0 255.255.255.0 eq www log default (hitcnt=0) 0x7ab83be2
access-list test_R1_traffic; 1 elements; name hash: 0x2c20a0c
access-list test_R1_traffic line 1 extended deny tcp 2001:db8:0:3::/64 eq www 2001:fc8:0:4::/64 eq telnet (hitcnt=0) 0x11821a52
access-list test_global_access; 1 elements; name hash: 0xaa83124c
access-list test_global_access line 1 extended deny tcp 192.0.4.0 255.255.255.0 eq telnet 192.0.5.0 255.255.255.0 eq www (hitcnt=0) 0x243cead5

Note that the ACL test_R1_traffic was not modified or removed in this example!

You can find the full detailed listing of all the commands and outputs of this example in the state: replaced reference gist.

 

State Overridden: Drop what is not needed

As noted in the last example, ACLs which are not explicitly mentioned in the definition remain untouched. But what if there is a need to reconfigure all existing and pre-configured ACLs with the input ACL ACEs configuration, including those that are not mentioned? This is where the state "overridden" comes into play.

If you take the same brownfield environment from the last example and deploy the same ACL definition against it, but this time switch the state to "overridden", the resulting configuration of the device looks quite different:

Brownfield device configuration before deploying the ACLs:

ciscoasa# sh access-list
access-list cached ACL log flows: total 0, denied 0 (deny-flow-max 4096)
            alert-interval 300
access-list test_access; 2 elements; name hash: 0x96b5d78b
access-list test_access line 1 extended deny tcp 192.0.2.0 255.255.255.0 192.0.3.0 255.255.255.0 eq www log default (hitcnt=0) 0xdc46eb6e
access-list test_access line 2 extended deny icmp 198.51.100.0 255.255.255.0 198.51.110.0 255.255.255.0 alternate-address log errors interval 300 (hitcnt=0) 0x68f0b1cd
access-list test_R1_traffic; 1 elements; name hash: 0x2c20a0c
access-list test_R1_traffic line 1 extended deny tcp 2001:db8:0:3::/64 eq www 2001:fc8:0:4::/64 eq telnet (hitcnt=0) 0x11821a52

Device configuration after deploying the ACLs via the resource module just like last time, but this time with state "overridden":

ciscoasa# sh access-list
access-list cached ACL log flows: total 0, denied 0 (deny-flow-max 4096)
            alert-interval 300
access-list test_access; 1 elements; name hash: 0x96b5d78b
access-list test_access line 1 extended deny tcp 192.0.3.0 255.255.255.0 192.0.4.0 255.255.255.0 eq www log default (hitcnt=0) 0x7ab83be2
access-list test_global_access; 1 elements; name hash: 0xaa83124c
access-list test_global_access line 1 extended deny tcp 192.0.4.0 255.255.255.0 eq telnet 192.0.5.0 255.255.255.0 eq www (hitcnt=0) 0x243cead5

Note that this time the listing is considerably shorter - the ACL test_R1_traffic was dropped since it was not explicitly mentioned in the ACL definition which was deployed. This showcases the difference between the "replaced" and "overridden" states.
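For reference, the play that produces this result is identical to the replaced-state play except for the value of the state parameter:

---
- name: Overridden state play
  hosts: cisco
  gather_facts: false
  collections:
   - cisco.asa

  tasks:
    - name: Override all device ACLs with the provided ACLs config
      asa_acls:
        state: overridden
        config: "{{ acls }}"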

You can find the full detailed listing of all the commands and outputs of this example in the state: overridden reference gist.

 

State Deleted: Remove what is not wanted

Another, more obvious use case is the deletion of existing ACLs on the device, which is implemented in the "deleted" state. In that case the input is the name of the ACL to be deleted, and the corresponding delete operation removes that ACL entirely by deleting all of the ACEs configured under it.

As an example, let’s take our brownfield configuration already used in the other examples. To delete the ACL test_access, we name it in the input variable:

- acls:
   - name: test_access

The playbook looks just like the ones in the other examples, only with the parameter and value state: deleted (a minimal sketch is shown at the end of this section). After executing it, the configuration of the device is:

ciscoasa# sh access-list
access-list cached ACL log flows: total 0, denied 0 (deny-flow-max 4096)
            alert-interval 300
access-list test_R1_traffic; 1 elements; name hash: 0x2c20a0c
access-list test_R1_traffic line 1 extended deny tcp 2001:db8:0:3::/64 eq www 2001:fc8:0:4::/64 eq telnet (hitcnt=0) 0x11821a52

The output is clearly shorter than the previous configuration since an entire ACL is missing.

You can find the full detailed listing of all the commands and outputs of this example in the state: deleted reference gist.
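For completeness, a minimal deleted-state play could look like this, assuming the ACL name shown above is available in the variable acls:

---
- name: Deleted state play
  hosts: cisco
  gather_facts: false
  collections:
   - cisco.asa

  tasks:
    - name: Delete the listed ACLs from the device
      asa_acls:
        state: deleted
        config: "{{ acls }}"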

 

State Rendered and State Parsed: For development and offline work

There are two more states currently available: "rendered" and "parsed". Both are special in that they are not meant to be used in production environments, but rather during development of your playbooks and device configuration. They do not change the device configuration - instead, they output what would be changed, in different formats.

The state "rendered" returns a listing of the commands that would be executed to apply the provided configuration. Given the configuration used above against our brownfield device, the returned values look like this:

"rendered": [
   "access-list test_access line 1 extended deny tcp 192.0.2.0 255.255.255.0 192.0.3.0 255.255.255.0 eq www log default",
   "access-list test_access line 2 extended deny icmp 198.51.100.0 255.255.255.0 198.51.110.0 255.255.255.0 alternate-address log errors",
   "access-list test_R1_traffic line 1 extended deny tcp 2001:db8:0:3::/64 eq www 2001:fc8:0:4::/64 eq telnet"
]

You can find the full detailed listing of all the commands and outputs of this example in the state: rendered reference gist.

State "parsed" acts similarly, but instead of returning the commands that would be executed, it returns the configuration as a JSON structure, which can be reused in subsequent automation tasks or by other programs. See the full detailed listing of all the commands and outputs of the parsed example in the state: parsed reference gist.
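As an illustration, a parsed-state task could look roughly like the sketch below. Note that it works on a provided configuration instead of the live device; the running_config parameter and the backup file name are assumptions that should be checked against the module documentation:

    - name: Parse an offline ASA ACL configuration into structured data
      asa_acls:
        running_config: "{{ lookup('file', 'backup/ciscoasa_acls.cfg') }}"
        state: parsed
      register: parsed_acls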

 

Use Case: OGs

As mentioned before, the Ansible Content Collection also supports a second resource: object groups. Think of networks, users, security groups, protocols, services and the like. The resource module can be used to define them or alter their definition. Much like the ACLs resource module, the basic workflow defines them via a variable structure and then deploys them in a way identified by the state parameter. The states are basically the same ones the ACLs resource module understands.

Due to this similarity, we will not go into further details here but instead refer to the different state examples mentioned above.
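To still give an idea of the data model, a minimal merged-state sketch for a network object group might look like the following. The keys reflect our reading of the asa_ogs module documentation and should be verified there; the group name and addresses are made up:

---
- name: Merged OGs state play
  hosts: cisco
  gather_facts: false
  collections:
   - cisco.asa

  tasks:
    - name: Merge the provided object group configuration
      asa_ogs:
        state: merged
        config:
          - object_type: network
            object_groups:
              - name: test_og_network
                description: Hosts allowed to reach the database tier
                network_object:
                  host:
                    - 192.0.2.1
                    - 192.0.2.2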

From a security perspective, however, the object group resource module is crucial: in a modern IT environment, communication relations are not only defined by IP addresses but also by the types of objects involved. It is important for security practitioners to be able to abstract those types into object groups and address their communication relations in ACLs later on.

This also explains why we picked these two resource modules to start with: they work closely hand in hand and together pave the way for an automated security approach using the family of Cisco ASA devices.

 

Takeaways and going forward

The Cisco ASA Content Collection can be of great use to security practitioners in need of automation and unification of their operational workflows around the family of Cisco ASA devices. The resource modules help as building blocks in standardizing automation actions, even more so when products of other vendors are part of the IT security environment.

If you want to follow up on this topic, here are some next steps:

 


Make Consistent Enterprise Automation a Reality with Ansible Content for AIX and IBM i


As we navigate through unprecedented times, the spotlight is on enhancing IT resilience and ensuring business continuity. We see that enterprises are experiencing shifts in market conditions and automation can be a key to rapidly responding to changes. With many enterprises having hybrid IT and multiple operating system environments, each with its own tooling and processes, implementing a consistent automation strategy to help scale and maximize impact has been a challenge. This is where Red Hat Ansible Automation Platform can help, by more easily enabling automation across different IT environments.

Red Hat Ansible Automation Platform provides automation in areas that span across development, DevOps, compute, network, storage, applications, security, and Internet of Things (IoT). A common request we at IBM had been getting from our users was for Ansible Automation support of AIX and IBM i operating systems. Red Hat and IBM are pleased to announce the general availability of Red Hat Ansible Certified Content for IBM Power Systems.  Red Hat Ansible certification involves Red Hat testing the Collections developed by IBM and a commitment to provide enterprise support. The Collections for AIX and IBM i are maintained and supported by IBM.

Ansible content for AIX and IBM i helps enable IBM Power Systems users to integrate these operating systems into their existing Ansible-based enterprise automation approach. Red Hat Ansible Certified Content for IBM Power Systems, delivered with Red Hat Ansible Automation Platform, is designed to provide easy to use modules that can accelerate the automation of operating system configuration management. Users can also take advantage of the open source Ansible community-provided content (i.e. no enterprise support available) to automate hybrid cloud operations on IBM Power Systems.

 

Red Hat Ansible Certified Content for AIX and IBM i configuration management

Ansible users have requested a rich set of modules for AIX and IBM i configuration management. To that end, we’ve introduced several certified Ansible modules to automate operations such as patching (e.g., service packs and PTFs), user and group management, boot management, running commands and SQL queries, and managing object authority. More information can be found here.

 

Ansible community-provided content for cloud operations on IBM Power Systems

With AIX and IBM i applications running in a hybrid cloud environment, you can also leverage Ansible community-provided modules (i.e. no enterprise support available) to automate cloud operations such as deploying virtual machines, creating networks and storage volumes, working with flavors, and more. Here are some resources that will help you get started quickly:

  • Community provided content for AIX and IBM i
  • Check out this example to provision IBM Power Systems in IBM Cloud

We hope these new capabilities help you to break down traditional technology silos and make a consistent enterprise automation strategy a reality!

 

*This blog was co-written by Tom Anderson, VP of Product Management for Ansible

Getting Started with IBM QRadar and Red Hat Ansible Automation Platform


IBM Security QRadar is a Security Information and Event Management (SIEM) solution that helps security teams accurately detect and prioritize threats across the organization, providing intelligent insights that enable organizations to respond quickly and reduce the impact of incidents. By consolidating log events and network flow data from thousands of devices, endpoints, users and applications distributed throughout your network, QRadar correlates all of this information and aggregates related events into single alerts to accelerate incident analysis and remediation. 

 

Ansible and QRadar, better together

Ansible is the open and powerful language security teams can use to interoperate across the various security technologies involved in their day-to-day activities.

Customers can take advantage of the IBM QRadar Content Collection to create sophisticated security workflows through the automation of the following functionalities:

  • Log sources configuration
  • Offense rules enablement
  • Offense management

Ansible allows security organizations to integrate QRadar into automated security processes, enabling them to automate QRadar configuration deployments in recurring situations like automated test environments, but also in large scale deployments where similar tasks have to be rolled out and managed across multiple nodes.

Security practitioners can automate investigation activities, enabling QRadar to programmatically access new data sources. They also have the ability to enable and disable correlation rules to support incident prioritization in more complex security workflows.

Furthermore, users can leverage Ansible to change the priority of an offense, its ownership and track activities in its note field directly as part of automated processes.

 

The IBM Security QRadar Content Collection

The integration of QRadar into a security environment automated with Red Hat Ansible Automation Platform is done through the Collection ibm.qradar. To use the Collection, it first needs to be installed, for example via:

$ ansible-galaxy collection install ibm.qradar
Process install dependency map
Starting collection install process
Installing 'ibm.qradar:1.0.1' to '/home/liquidat/.ansible/collections/ansible_collections/ibm/qradar'

For more information on how to use and install Ansible Content Collections, check out our blog post Hands on with Ansible collections from our Ajay Chenampara.

As of today, the Collection contains multiple modules and two plugins. The plugins provide the core functionality to connect to QRadar in the first place: QRadar provides a rich REST API to interact with, and the Collection uses this to execute various tasks. The plugins manage the authentication and the handling of the REST API calls.

The modules are built around the typical use cases of QRadar and follow the usage patterns of QRadar. Notable modules are:

  • deploy - Trigger a qradar configuration deployment 
  • log_source_management - Manage Log Sources in QRadar
  • offense_action - Take action on a QRadar Offense
  • offense_info - Obtain information about one or many QRadar Offenses
  • offense_note - Create or update a QRadar Offense Note
  • rule - Manage state of QRadar Rules
  • rule_info - Obtain information about one or many QRadar Rules

Using the modules in the Ansible Content Collection

To give a better idea of how to use the Collection, we will illustrate a simple example. After the installation of the collection mentioned above, we need to make sure that Ansible is capable of authenticating to QRadar. This can be ensured by a corresponding inventory entry of a QRadar instance:

qradar ansible_user=admin ansible_httpapi_pass="Ansible1!" ansible_connection=httpapi ansible_httpapi_use_ssl=yes ansible_network_os=ibm.qradar.qradar

As mentioned, communication with QRadar is done via REST API, so ansible_connection has to be set to httpapi. The connection should also use SSL (ansible_httpapi_use_ssl), and we need to provide a username and password via ansible_user and ansible_httpapi_pass, respectively. Last but not least we set the network os to QRadar: ansible_network_os=ibm.qradar.qradar

After the inventory is set up to talk to QRadar, we can execute the first playbook. For example, if we want to deactivate an existing rule inside QRadar, we can write a playbook that in the first task uses the module rule_info to query the existing rule, and in the second task deactivates the rule using the rule module:

---
- name: Change QRadar rule state
  hosts: qradar
  collections:
    - ibm.qradar

  tasks:
    - name: get info about qradar rule
      rule_info:
        name: "Potential DDoS Against Single Host (TCP)"
      register: rule_info

    - name: disable rule by id
      rule:
        state: disabled
        id: "{{ rule_info.rules[0]['id'] }}"

Another typical example is log sources management: imagine that during an investigation the log information of a given source needs to be added to the SIEM for further investigation. This can be done with the module log_source_management:

---
- name: Add CISCO ASA log source to QRadar
  hosts: qradar
  collections:
    - ibm.qradar

  tasks:
    - name: Add CISCO ASA remote logging to QRadar
      log_source_management:
        name: "CISCO ASA source"
        type_name: "Cisco Adaptive Security Appliance (ASA)"
        state: present
        description: "CISCO ASA log source"
        identifier: 11.22.33.44

In this example the new log source “CISCO ASA source” is configured, and all logs coming from the IP “11.22.33.44” of the type “Cisco Adaptive Security Appliance (ASA)” are put into that log source.

 

Enabling security automation use cases: investigation enrichment

The real power of integrating QRadar with Red Hat Ansible Automation Platform shows when we use it in typical security automation use cases. Let’s take the task of investigation enrichment as an example: security practitioners often have to investigate suspicious behavior, and as part of this they gather more information from affected or related systems. Doing this manually can be repetitive and time-consuming. The Ansible Content Collections developed as part of the Ansible security automation initiative can help to overcome these challenges, as we have already shown in our dedicated blog post Getting started with Ansible security automation: investigation enrichment.

In that blog post we showed how QRadar as a SIEM is a crucial part of the security environment and how Ansible automates the corresponding tasks: log sources from various systems can be automatically added or removed as needed, enabling security analysts to view information the moment they need it - and removing the logs when the investigation is done. Note that adding or removing log sources is usually only a part of larger automation processes supporting the security practitioners. They can also be created in advance and be part of a library of predefined automation processes ready to be consumed when needed. Together with Ansible Tower, access to the elements of such a library can be controlled with typical enterprise governance mechanisms like RBAC.
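As a sketch of the "removing the logs when the investigation is done" step, the same log_source_management module can be used with state absent. The parameters mirror the creation example from above and should be verified against the module documentation:

---
- name: Remove CISCO ASA log source from QRadar
  hosts: qradar
  collections:
    - ibm.qradar

  tasks:
    - name: Remove the CISCO ASA remote logging source again
      log_source_management:
        name: "CISCO ASA source"
        type_name: "Cisco Adaptive Security Appliance (ASA)"
        identifier: 11.22.33.44
        state: absent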

 

Takeaways and going forward

IBM Security QRadar helps security teams accurately detect and prioritize threats across the organization. Using the Ansible Content Collection for IBM QRadar, customers are able to integrate QRadar into larger security automation processes like investigation enrichment and to automate sophisticated security workflows.

As next steps there are plenty of resources to follow up on the topic:

Centralize your Automation Logs with Ansible Tower and Splunk Enterprise


For many IT teams, automation is a core component these days. But automation is not something on its own - it is part of a puzzle and needs to interact with the surrounding IT. So one way to grade automation is how well it integrates with other tooling of the IT ecosystem - like the central logging infrastructure. After all, through central logging the IT team can quickly survey what is happening, where, and what state it is in.

The Red Hat Ansible Automation Platform is a solution to build and operate automation at scale. As part of the platform, Ansible Tower integrates well with external logging solutions, such as Splunk, and it is easy to set that up. In this blog post we will demonstrate how to perform the necessary configurations in both Splunk and Ansible Tower to let them work well together.

 

Setup of Splunk

The first step is to get Splunk up and running. You can download a Splunk RPM after you register yourself at the Splunk home page.

After the registration, download the rpm and perform the installation:

$ rpm -ivh splunk-8.0.3-a6754d8441bf-linux-2.6-x86_64.rpm
warning: splunk-8.0.3-a6754d8441bf-linux-2.6-x86_64.rpm: Header V4 RSA/SHA256 Signature, key ID b3cd4420: NOKEY
Verifying...                    ################################# [100%]
Preparing...                    ################################# [100%]
Updating / installing...
   1:splunk-8.0.3-a6754d8441bf  ################################# [100%]
complete

After the installation is complete, execute the command below to start the service and make the necessary settings.

$ /opt/splunk/bin/splunk start --accept-license

Accept the terms, set the username and password, and wait for the service to start.

All preliminary checks passed.

Starting splunk server daemon (splunkd)...
Done
                                                       	[  OK  ]

Waiting for web server at http://127.0.0.1:8000 to be available... Done


If you get stuck, we're here to help.
Look for answers here: http://docs.splunk.com

The Splunk web interface is at http://splunk-server:8000

Access the web interface and enter the username and password. 

 

Configuring Data Input with Red Hat Ansible Content Collections

To receive the Ansible Tower logs in Splunk, we need to create a TCP data input. To do that, we will use the Splunk Enterprise Security Content Collection, available on Automation Hub as part of the Red Hat-maintained Content Collections release.

This Collection has been created to support Splunk Enterprise Security, a security product delivered as an add-on application for Splunk Enterprise that extends it with Security Information and Event Management (SIEM) functionality. Splunk Enterprise Security leverages many capabilities of the underlying platform; hence, despite having been developed for security automation use cases, most of the modules in this Collection can be used to support Day 0 and Day 1 IT operations use cases as well. If you want to read more about how the Ansible Content Collections developed as part of the Ansible security automation initiative can help to overcome security operations challenges, check out our blog post Getting started with Ansible security automation: investigation enrichment from our Roland Wolters.

The Splunk Enterprise Security Content Collection has the following modules as of today:

  • adaptive_response_notable_event - Manage Splunk Enterprise Security Notable Event Adaptive Responses
  • correlation_search - Manage Splunk Enterprise Security Correlation Searches
  • correlation_search_info - Obtain information about Splunk Enterprise Security Correlation Searches
  • data_input_monitor - Manage Splunk Data Inputs of type Monitor
  • data_input_network - Manage Splunk Data Inputs of type TCP or UDP

If you want to learn more about collections in general and how to get started with them, check out our blog post Hands on with Ansible collections from our Ajay Chenampara.

Coming back to our use case, we will use the data_input_network module. First let's install the Collection splunk.es:

$ ansible-galaxy collection install splunk.es
Process install dependency map
Starting collection install process
Installing 'splunk.es:1.0.0' to '/root/.ansible/collections/ansible_collections/splunk/es'

After the installation of the Collection, the next step is to create our inventory:

$ cat inventory.ini
[splunk]
splunk.customer.com 

[splunk:vars]
ansible_network_os=splunk.es.splunk
ansible_user=USER
ansible_httpapi_pass=PASS
ansible_httpapi_port=8089
ansible_httpapi_use_ssl=yes
ansible_httpapi_validate_certs=True
ansible_connection=httpapi

Note that we set the connection type to httpapi: the communication with Splunk Enterprise Security takes place via REST API. Also, remember to adjust the authentication, port and certificate data according to your environment.

Next let's create the playbook which will set up the input network:

$ cat splunk_with_collections.yml
---
- name: Splunk Data Input
  hosts: splunk
  gather_facts: False
  collections:
    - splunk.es

  tasks:
    - name: create splunk_data_input_network 
      splunk.es.data_input_network:
        name: "9199"
        protocol: "tcp"
        source: "http:tower_logging_collections"
        sourcetype: "httpevent"
        state: "present"

Let's run the playbook to create the input network:

$ ansible-playbook -i inventory.ini splunk_with_collections.yml

 

Validating Data Input

To validate that our data input was created, in the Splunk web interface click on Settings > Data inputs > TCP. Verify that the TCP port is listed with source type "httpevent", as in the screenshot below:

Splunk blog 1

We can also validate the data input by checking that port 9199 is open and accepts connections:

$  telnet splunk.customer.com 9199
Trying 1.2.3.4...
Connected to splunk.customer.com.
Escape character is '^]'.

 

Configuring Ansible Tower 

The activity stream logs in Ansible Tower record actions such as the creation and deletion of objects within Ansible Tower. For more information and details, check out the documentation.

After Splunk is all set up, let’s dive into Ansible Tower and connect both tools with each other! First we are going to configure Ansible Tower to send logs to the data input in Splunk. For this, we enter the Ansible Tower settings: there, pick “System” and click “Logging”. This opens an overview of the logging configuration of Ansible Tower, as shown below. In there, we specify the URL for Splunk as well as the URL context "/services/collector/event". Also, we have to provide the port, here 9199, and select the right aggregator type, here Splunk. Now select the protocol TCP, click the “Save” button and then, to verify our configuration, the “Test” button.

Splunk blog 2
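If you prefer to drive this configuration through the Ansible Tower API instead of the UI, a sketch using the uri module could look like the following. The LOG_AGGREGATOR_* setting names are taken from the Tower settings API and should be verified against your Tower version; the host name and credential variable are placeholders:

---
- name: Configure Ansible Tower logging to Splunk via the API
  hosts: localhost
  gather_facts: false

  tasks:
    - name: Patch the Tower logging settings
      uri:
        url: "https://tower.customer.com/api/v2/settings/logging/"
        method: PATCH
        user: admin
        password: "{{ tower_admin_password }}"
        force_basic_auth: true
        validate_certs: false
        body_format: json
        body:
          LOG_AGGREGATOR_HOST: "https://splunk.customer.com/services/collector/event"
          LOG_AGGREGATOR_PORT: 9199
          LOG_AGGREGATOR_TYPE: splunk
          LOG_AGGREGATOR_PROTOCOL: tcp
          LOG_AGGREGATOR_ENABLED: true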

 

Viewing the logs in Splunk

Now that Ansible Tower is all set up, let’s head back to Splunk and check if the logs are making their way there. In Splunk home, click on “Search & Reporting”. In “What to Search” pick “Data Summary”. A window will open up, where you can click on the “Sources” column:

Splunk blog 3

Click on the source http:tower_logging_collection; this will take us to the Search screen, where it is possible to view the records received from Ansible Tower:

splunk blog

If all is working fine, you should see the last log events received from Ansible Tower, showing that the two tools are now properly connected. Congratulations!

But we don’t want to stop there: after all, logging is all about analyzing the incoming information and making sense of it. So let’s create a filter: click on the field you’d like to filter on and then pick "Add to search".

splunk blog 5

After that, the search field will be filled with our filter.

splunk blog 6

 

Creating a simple dashboard

In this example, we will create a simple graph of the events generated by Ansible Tower.

We will follow the previous steps on how to create a filter, but this time we will filter on the event field, leaving the search field like this:

source="http:tower_logging_collection"| spath event | search event=*

With "event = *" all events are filtered.  After that click on the "All Fields" button on the left side menu, select the event field and click on exit. That done, click on Visualization and then select the Pivot option, in the window select "Selected Fields (1)" and click OK.

splunk blog 7

In this window, we keep the filter as "All time". In "Split Columns" select event and then "Add To Table". After that we already have a view of the information separated into columns, with each column named after an event and showing its number of appearances in the logs.

splunk blog 8

After viewing the information in columns, click "Save As" and select "Dashboard Panel". In "Dashboard" select "New", and in "Dashboard Title" define the name you want for the dashboard; this name will generate the dashboard ID. In Panel Title and Model Title, define the name of this search, for example all_events, then click Save and View Dashboard.

splunk blog 9

In the following screen, click on Edit in the upper right menu. Then, in the all_events panel, click on "Select Visualization", choose the visualization you want - in this example we select "Bar Chart" - and click "Save".

splunk blog 10

Now that we have our dashboard with a chart listing all events, repeat the process of creating filters, and when saving each search, select the existing dashboard to add new panels to it.

After creating some panels and adding them to the existing dashboard, we will have a visualization like this:

splunk blog 11

To use more advanced features of integrating Ansible Tower with Splunk, see the splunk.es (Splunk Enterprise Security) Collection, which allows you to configure data inputs and correlation searches, among other features.

 

Takeaways and where to go next

In this post, we demonstrated how to send the Ansible Tower usage logs to Splunk to enable a centralized view of all events generated by Ansible Tower. That way we can create graphs from various pieces of information, such as the number of playbooks that failed or succeeded, the modules most used in the executed playbooks, and so on.

If you're interested in detailed views across your entire automation environment, have a look at Automation Analytics on cloud.redhat.com. Also, my previous blog post about Red Hat Ansible Tower Monitoring: Using Prometheus + Node Exporter + Grafana might be of interest here. 

And if you want to know more about the Red Hat Ansible Automation Platform:

Bringing Order to the Cloud: Day 2 Operations in AWS with Ansible


Cloud environments do not lend themselves to manual management or interference, and only thrive when they are well automated. Many cloud environments are created and deployed from a known definition/template, but what do you do on day 2? In this blog post, we will cover some of the top day 2 operations use cases available through our Red Hat Certified Ansible Content Collection for AWS (requires a Red Hat Ansible Automation Platform subscription) or from Ansible Galaxy (community supported).

 

Let’s manage some clouds!

No matter the road that led you to managing a cloud environment, you’ll likely have run into the ever-scaling challenge of maintaining cloud-based services over time. Cloud environments do not operate the same ways the old datacenter-based infrastructures did. Coupled with the ease of access for just about anyone to deploy services, you’ll have a potential recipe for years of unlimited maintenance headaches.

The good news is that there is one way to bring order to all the cloud-based chaos: Ansible. In this blog post we will explore common day 2 operations use cases for Amazon Web Services using the amazon.aws Ansible Certified Content Collection. For more information on how to use Ansible Content Collections, check out our blog post for Getting Started With Ansible Content Collections.

 

Snapshotting

Snapshotting is a common operation during maintenance windows. The example below demonstrates a simple snapshotting action on an EC2 instance using the ec2_snapshot module from the amazon.aws Collection.

    - name: take a snapshot of the instance to create an image
      amazon.aws.ec2_snapshot:
        instance_id: "{{ instance_id }}"
        device_name: /dev/xvda
        state: present
      register: setup_snapshot

In this example, all we need to know is the instance ID and the device name you want to snapshot. Note the register line (we’ll come back to that). “That’s great,” I can hear you saying. “Excellent, a bunch of code that does what I can do with the click of a button.” Indeed, so let’s explore an application for this.

Say you have an instance that needs patching during a maintenance window. Each instance should be snapshotted, patched, verified; then the snapshot should be cleared - standard fare. Next, imagine that you actually have over a hundred EC2 instances in need of patching. Now imagine that a few Ansible tasks were able to accomplish that entire procedure, including clearing the snapshot. Now we’re talking!
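As a rough sketch of that flow - treat the module return values, the target group name and the package step as assumptions to adapt to your environment - the procedure could look like this:

---
- name: Patch EC2 instances with a safety snapshot
  hosts: ec2_patch_targets
  gather_facts: false

  tasks:
    - name: Snapshot the root volume before patching
      amazon.aws.ec2_snapshot:
        instance_id: "{{ instance_id }}"
        device_name: /dev/xvda
        state: present
      delegate_to: localhost
      register: setup_snapshot

    - name: Apply all pending updates on the instance
      ansible.builtin.package:
        name: '*'
        state: latest
      become: true

    - name: Remove the snapshot once patching has been verified
      amazon.aws.ec2_snapshot:
        snapshot_id: "{{ setup_snapshot.snapshot_id }}"
        state: absent
      delegate_to: localhost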

You may not love the ever-growing number of EC2 instances out there, but at least you can rest assured that your patching operations can be scaled to match. Next, let’s explore another use for the ec2_snapshot module.

 

AMI Creation

AMI Management can become quite a challenge without automated workflows to assist, especially when managing otherwise identical AMIs in multiple regions. Let’s look at a few different AMI-related tasks that may spark ideas for how to simplify your daily image maintenance, starting with generating a new AMI from an EC2 instance snapshot.

- name: AMIs
  block:
    - name: take a snapshot of the instance to create an image
      amazon.aws.ec2_snapshot:
        instance_id: "{{ instance_id }}"
        device_name: /dev/xvda
        state: present
      register: setup_snapshot
    - name: create an image from the instance
      amazon.aws.ec2_ami:
        instance_id: "{{ instance_id }}"
        state: present
        name: "acme_{{ ec2_ami_name }}_ami"
        description: "{{ ec2_ami_description }}"
        tags:
          Name: "acme_{{ ec2_ami_name }}_ami"
        wait: yes
        root_device_name: /dev/xvd

In fewer than 20 lines, you too can automate AMI creation! How else can this apply? It has long been standard practice to create virtual machine templates from a gold standard. If you are maintaining your baseline configurations with Ansible Tower, you can add this step to a Workflow Job Template and set it to a schedule. This process would ensure you have up-to-date AMIs to deploy instances from as often as that scheduled workflow runs.

 

AMI Lookup

If you’ve ever tried to deploy an instance using an automation tool, you’ve probably found yourself hopelessly wading through the sea of available AMIs to find “the right one,” only to find out that there’s a different AMI ID for every single global region. If this sounds like you, you’re not alone. Also, good news - Ansible can help with this too. Let’s start with the following code snippet.

    - name: Get a list of our AMIs
      amazon.aws.ec2_ami_info:
        filters:
          architecture: x86_64
          virtualization-type: hvm
          root-device-type: ebs
          name: "acme_*_ami"
      register: amis

    - name: Pick the first AMI ID returned in the previous step 
      set_fact:
        image_id: "{{ (amis.images|first).image_id }}"

This will pull a list of the AMIs we created that match the virtualization and root device types, using search criteria that match our AMI naming scheme. It then sets the AMI ID of the first AMI in the list as a fact. But Amazon may not always return the list in a consistent order - so what should we do? 

    - name: Get a list of Amazon HVM AMIs
      amazon.aws.ec2_ami_info:
        filters:
          architecture: x86_64
          virtualization-type: hvm
          root-device-type: ebs
          name: "acme_*_ami"
      register: amis

    - name: And select the most recent one
      set_fact:
        image_id: "{{ (amis.images | sort(attribute='creation_date') | last).image_id }}"

In this example, we sort the results by creation_date and set the fact to the ID of the most recent AMI (the last in the list). This is a much more useful example in the real world. Let’s tie this back to our previous two examples. In conjunction with using Ansible to deploy instances, you can reasonably set up a system that will always have fresh AMIs ready for provisioning, and a provisioning workflow that will always take the most recent AMI. 
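Tying this together, a provisioning task that consumes the image_id fact from the lookup above might look like this. It is a sketch: the subnet, key pair, instance type and instance name are placeholders, and depending on your Collection versions the ec2_instance module may live in amazon.aws or community.aws:

    - name: Launch an instance from the most recent acme AMI
      amazon.aws.ec2_instance:
        name: "acme-app-01"
        image_id: "{{ image_id }}"
        instance_type: t3.micro
        key_name: acme-keypair
        vpc_subnet_id: subnet-0123456789abcdef0
        state: present
        tags:
          provisioned_by: ansible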

 

Elastic Load Balancers

There are a host of ways to use and configure ELBs. For the sake of demonstrating what is possible with Ansible, let’s take a fairly simple action: adding a listener.

    - name: Deploy listeners with health checks on 80 and 443
      amazon.aws.ec2_elb_lb:
        name: "{{ lb_name }}"
        state: present
        zones: "{{ ec2_zones }}"
        listeners:
          - protocol: http
            load_balancer_port: 80
            instance_port: 80
          - protocol: https
            load_balancer_port: 443
            instance_port: 443
        health_check:
            ping_protocol: http 
            ping_port: 80
            ping_path: "/healthcheck.html"
            response_timeout: 5
            interval: 30
            unhealthy_threshold: 2
            healthy_threshold: 10

“But that’s a day 1 task!” While true, it is also important that services like ELBs have standardized and consistent configurations. A listener definition like the above can be paired with application deployment workflows to ensure that the load balancer configuration will always stay up to date with each passing release, and can be kept consistent with all other load balancer configurations. From day 2 onward, you will be able to query your load balancers by their definitions and easily (and with much lower risk) deploy changes.

 

VPC Management

Our final day 2 management example focuses on VPCs. As with the ELB example, it is imperative that VPCs are deployed and maintained from a definition, and that definition is kept up to date. While there are multitudes of reasons for this, a good one is that you can do useful things like this:

    - name: Add IPv6 CIDR to existing VPC
      amazon.aws.ec2_vpc_net:
        state: present
        cidr_block: "{{ vpc_cidr }}"
        name: "{{ vpc_name }}"
        ipv6_cidr: true
      register: vpc_info

Now, would you needlessly start adding IPv6 to your network definitions just because? Of course not! But what’s important to understand is that from day 2 onward, you have the capability to make incremental, even large changes with simple updates to your cloud infrastructure definitions. After executing the above, there would be a host of options available to you, many of which would require little more than minor code changes to existing definitions.
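
As an illustration of such an incremental change, the task below adds a subnet to the same VPC. This is only a sketch: the ec2_vpc_subnet module is shown here, the registered vpc_info return is assumed to expose the VPC ID as vpc.id, and the CIDR and availability zone are made-up values.

    - name: Add a subnet to the VPC defined above (illustrative values)
      amazon.aws.ec2_vpc_subnet:
        state: present
        vpc_id: "{{ vpc_info.vpc.id }}"   # assumes the return shape of the previous task
        cidr: 10.0.10.0/24                # assumption for this sketch
        az: us-east-1a                    # assumption for this sketch
        tags:
          Name: "{{ vpc_name }}_subnet"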

 

Takeaways and where to go next

In this blog, we covered some typical day 2 cloud management operations with Ansible, ranging from AMI creation to full VPC management. We hope you found this blog useful! More importantly, we hope it inspired you to start thinking about your cloud management in a different way. Check out the getting started guide when you're ready to dive in!

And if you want to know more about the Red Hat Ansible Automation Platform:

 

*This blog was co-written by Jill Rouleau, Sr Software Engineer on the Ansible Cloud Engineering team

Automating Mitigation of the F5 BIG-IP TMUI RCE Security Vulnerability Using Ansible Tower (CVE-2020-5902)


On June 30, 2020, a security vulnerability affecting multiple BIG-IP platforms from F5 Networks was made public with a CVSS score of 10 (Critical). Due to the significance of the vulnerability, network administrators are advised to mitigate this issue in a timely manner. Doing so manually is tricky, especially if many devices are involved. Because F5 BIG-IP and BIG-IQ are certified with the Red Hat Ansible Automation Platform, we can use it to tackle the issue.

This post provides one way of temporarily mitigating CVE-2020-5902 via Ansible Tower without upgrading the BIG-IP platform. Upgrading to a fixed software version is the recommended remediation, but larger customers like service providers might struggle to do so on short notice, as they may have to go through a lengthy internal validation process. For those situations, an automated mitigation can be a reasonable stopgap until an upgrade can be performed.

 

Background of the vulnerability

The vulnerability is described in K52145254 of the F5 Networks support knowledgebase:

The Traffic Management User Interface (TMUI), also referred to as the Configuration utility, has a Remote Code Execution (RCE) vulnerability in undisclosed pages.

The same article describes the serious impact:

This vulnerability allows for unauthenticated attackers, or authenticated users, with network access to the Configuration utility, through the BIG-IP management port and/or self IPs, to execute arbitrary system commands, create or delete files, disable services, and/or execute arbitrary Java code.

The mitigation can be performed on the command line via the F5 traffic management shell (TMSH) or remotely via the F5 iControl REST interface.

 

Mitigation using Ansible

Ansible can help in automating a temporary workaround across multiple BIG-IP devices. As an example, a playbook is included below which, when executed from within Ansible Tower, has been shown to successfully mitigate this security vulnerability. The following factors need to be considered:

  • The provided Ansible Playbook requires editing a file using F5’s traffic management shell (TMSH).
  • Editing this file through bash does not persist after a reboot; furthermore, doing so is not supported by F5, because the file should be edited via TMSH.
  • For those customers that have existing running instances of F5 BIG-IP or need to automate the creation or deletion of F5 instances, running the Ansible Playbook is still required.
  • Running this playbook does persist after a BIG-IP reboot.
  • An upgrade to a software version that does not contain the permanent fix will leave the system vulnerable again. Therefore, the playbook should be re-run after such an upgrade.
  • The provided playbook was written specifically for Ansible Tower and serves as an example of how the mitigation can be carried out. The playbook is provided as-is and is only provided for guidance. Customers are advised to write their own playbooks to mitigate the issue. Red Hat makes no claim of official support for this playbook.

Playbook Details

In order to successfully run the referenced playbook, you'll need to provide login credentials to the F5 BIG-IP instances. For example, the variables that define the server, user ID and password fields need to be set with the details of an authorized administrative user on the F5 BIG-IP. 
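
For illustration, one way to supply those values is through host variables (or, more typically in Ansible Tower, a machine credential). The snippet below is a hypothetical example; the host name, IP address and vaulted variable are placeholders.

# host_vars/bigip01.yml (hypothetical example)
ansible_host: 192.0.2.10                          # management IP of the BIG-IP
ansible_user: admin                               # authorized administrative user
ansible_ssh_pass: "{{ vault_bigip_password }}"    # placeholder for a vaulted or Tower-provided secret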

The tasks in the playbook connect to TCP port 8443 of the management IP address of the F5 BIG-IP. If you are patching an on-premises F5 instance, connecting to TCP port 443 is required instead.

Although there are two methods for restarting the HTTP daemon, due to a current known issue, please use the TMSH method. This is why the playbook does not use the bigip_command module for the restart step.

The referenced playbook contains three tasks, which do the following:

  • The first task, “Editing HTTPD”, makes the changes necessary to mitigate the vulnerability. 
  • The second task, “Saving HTTPD change”, saves the change to disk.
  • The third task, “Restarting HTTPD daemon with tmsh”, restarts the service to make the configuration active.

Please Note:

---
- name: Mitigate CVE-2020-5902
  hosts: all
  connection: local
  gather_facts: false

  tasks:
    - name: Editing HTTPD
      raw: curl -ku "{{ansible_user}}":"{{ansible_ssh_pass}}" -k https://"{{ansible_host}}":8443/mgmt/tm/sys/httpd -H content-type:application/json -X PATCH -d '{"include":"\n <LocationMatch \\\";\\\">\n Redirect 404 /\n </LocationMatch>\n <LocationMatch \\\"hsqldb\\\">\n Redirect 404 /\n </LocationMatch>\n "}'
    - name: Saving HTTPD change
      bigip_command:
        commands: save sys config
        provider:
          server: "{{ansible_host}}"
          user: "{{ansible_user}}"
          password: "{{ansible_ssh_pass}}"
          server_port: 8443 # port 8443 for public cloud, port 443 for on-prem
          validate_certs: false

    - name: Restarting HTTPD daemon with tmsh      
      raw: curl -u "{{ansible_user}}":"{{ansible_ssh_pass}}" -k https://"{{ansible_host}}":8443/mgmt/tm/util/bash  -H "Content-type:application/json" -d "{\"command\":\"run\", \"utilCmdArgs\":\"-c 'killall -9 httpd;tmsh restart /sys service httpd'\"}"
      ignore_errors: True
      register: httpd_restart
      # curl exits with code 52 (empty reply) because httpd is killed and restarted mid-request, so treat any other exit code as a failure
      failed_when: "httpd_restart.rc != 52"

See the full version of the playbook, including comments and more in-depth details, here.

 

Validating that the Playbook Succeeded

A mitigation that has not been verified should be treated as no mitigation. So let's check that we have been successful:

  1. From the command line of the F5 BIG-IP, issue the following command:  tmsh edit sys httpd all-properties 
  2. Verify the section after include contains the following: 
include "<LocationMatch \";\">
Redirect 404 /</LocationMatch><LocationMatch \"hsqldb\">
Redirect 404 /</LocationMatch>
"

If this is the case, the mitigation was successful. Close this file (without saving any edits) with a :q! command.
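
If you would rather verify remotely as well, the following rough sketch reuses the same provider settings as the mitigation playbook to read the httpd include statement back and check for the mitigation string. The assertion assumes the output appears in the first stdout entry and is illustrative only.

    - name: Read back the current httpd include statement
      bigip_command:
        commands: list sys httpd include
        provider:
          server: "{{ ansible_host }}"
          user: "{{ ansible_user }}"
          password: "{{ ansible_ssh_pass }}"
          server_port: 8443   # port 8443 for public cloud, port 443 for on-prem
          validate_certs: false
      register: httpd_include

    - name: Fail if the mitigation string is missing (illustrative check)
      assert:
        that:
          - "'LocationMatch' in httpd_include.stdout[0]"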

Download the F5 BIG-IP Ansible Content Collection

The provided playbook is written with the assumption that the Ansible Tower 3.7 installation is utilizing the included Ansible Engine 2.9. Therefore, the F5 Ansible modules are already included as part of the installation. 

If you are a community developer using the open source Ansible distribution, refer to the latest modules available for F5 BIG-IP and BIG-IQ from the f5networks.f5_modules Ansible Content Collection available on Automation Hub (fully supported, requires a Red Hat Ansible Automation Platform subscription) or Ansible Galaxy (upstream community supported):

For more information on how to use and install Ansible Content Collections, check out our blog post Hands on with Ansible Collections by Ajay Chenampara.

Finally, the F5 Ansible Content Collection also includes modules to patch and/or upgrade the BIG-IP directly such as bigip_software_image, bigip_software_install, bigip_config and more.

Takeaways and where to go next

Remediating vulnerabilities in network devices is crucial, and in this blog we showed how Ansible can help, using the F5 BIG-IP TMUI RCE security vulnerability (CVE-2020-5902) as an example. 

If you want to know more about the Red Hat Ansible Automation Platform:

Manage Red Hat Enterprise Linux like a Boss with Red Hat Ansible Content Collection for Red Hat Insights


Running IT environments means facing many challenges at the same time: security, performance, availability and stability are critical for the successful operation of today’s data centers. IT managers and their teams of administrators, operators and architects are well advised to move from a reactive, “fire-fighting” mode to a proactive approach where systems are continuously scanned and improvements are applied before critical situations come up. Red Hat Insights routinely analyzes Red Hat Enterprise Linux systems for security/vulnerability, compliance, performance, availability and stability threats, and based on the results, can provide guidance on how to improve daily operations. Insights is included with your Red Hat Enterprise Linux subscription and located at cloud.redhat.com.

We recently announced a new Red Hat Ansible Content Collection for Insights, an integration designed to make it easier for Insights users to manage Red Hat Enterprise Linux and to automate tasks on those systems using Ansible. The Ansible Content Collection for Insights is ideal for customers that have large Red Hat Enterprise Linux estates that require initial deployment and ongoing management of the Insights client. 

In this blog, we will look at how this integration with Ansible takes care of key tasks via included Ansible Roles, Modules and Plugins, such as:

  • Deploy the required packages to Red Hat Enterprise Linux instances on the cloud and on-premises 
  • Register systems to Insights
  • Provide custom facts to Ansible like the system ID, which can be reused in future automation tasks

Downloading the Collection

In order to add the Insights content to your playbooks, you can download the Collection from either Automation Hub or Ansible Galaxy directly, or through the ansible-galaxy CLI tool. Automation Hub contains the certified content for Red Hat Ansible Automation Platform customers, while Ansible Galaxy contains the experimental community version.

To download the Collection, refer to Automation Hub (fully supported, requires a Red Hat Ansible Automation Platform subscription) or Ansible Galaxy (upstream community supported):

Automation Hub Collection: redhat.insights

Ansible Galaxy Collection: redhatinsights.insights

For information about how to configure downloading via the ansible.cfg file or requirements.yml file, please refer to the blog, “Hands On With Ansible Collections.”
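
As a minimal illustration, a collections/requirements.yml entry for the Automation Hub Collection could look like the sketch below; pointing ansible-galaxy at Automation Hub itself is still configured in ansible.cfg, as described in that blog.

---
collections:
  - name: redhat.insights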

 

Deploying the Insights Client

Before you can start analyzing your systems with Insights, you must deploy the Insights Client to each server. Deploying any client or agent with Ansible makes the process painless. Even easier, integrate it into a provisioning workflow in Ansible Tower and you’ll never have to think about it again. A playbook to deploy the Insights Client with the provided role would look like the following: 

---
- name: deploy insights client to rhel servers
  hosts: rhel
  tasks:
  - include_role:
      name: redhat.insights.insights_client

By the way, for purposes of this blog, all examples will be using the fully qualified Collection name (FQCN) for content downloaded from Automation Hub.

It doesn’t stop there. The Insights Client can also report tags back to cloud.redhat.com to help you organize your servers. We’ll take a look at why this is really important in the next section, but for now, let’s focus on how to do it. Let’s say you want to indicate the environment, site, application group and default IP. Tags can be deployed by the Insights Client role by providing a variable called insights_tags. Let’s look at an example:

---
- name: deploy insights client with tags
  hosts: rhel
  tasks:
  - include_role:
      name: redhat.insights.insights_client
    vars:
      insights_tags:
        env: prod
        site: rdu
        app: ecomm
        default_ip: "{{ ansible_default_ipv4.address }}"

As you can see in the example, tags can be statically defined, computed from a fact or set using the value of another variable. If you are provisioning your server with Ansible Tower, you may have additional data about the cluster, vpc or network where the server is located. This type of data makes for great inputs for defining your Insights tags.
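
As a hedged variation of the earlier play, tags could be computed from such data. The tower_cluster and tower_vpc variables below are hypothetical and stand in for whatever extra variables your provisioning workflow actually provides.

---
- name: deploy insights client with tags from provisioning data
  hosts: rhel
  tasks:
  - include_role:
      name: redhat.insights.insights_client
    vars:
      insights_tags:
        env: prod
        cluster: "{{ tower_cluster }}"   # hypothetical extra var from the provisioning workflow
        vpc: "{{ tower_vpc }}"           # hypothetical extra var from the provisioning workflow
        default_ip: "{{ ansible_default_ipv4.address }}"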

 

Inventory: Who is What and What is Where

Now that you have the Insights Client deployed and properly tagged, you should have a nice consolidated list of your Red Hat Enterprise Linux servers on cloud.redhat.com. This list can be used as an Ansible inventory with the Insights Inventory Plugin. If you are new to inventory plugins, do not confuse them with inventory scripts. While they seemingly produce the same result, they work very differently. To use the inventory plugin, we need to define an inventory file as a YAML file. In this inventory file, the different options for the plugin are specified. The most basic usage of the plugin will simply populate a list of all of your systems registered to Insights.

#insights.yml
---
plugin: redhat.insights.insights

There are a few very important things to note about what is happening in the background. First, the default Ansible configuration will automatically enable this plugin. Nothing additional needs to be set in a configuration file. The default INVENTORY_ENABLED setting includes a plugin called auto. Save yourself some unnecessary configuration and let auto do the work. The second very important detail is the name of the file; it must end with insights.yml. This is a design pattern for inventory plugins: you can prefix the file name with something like prod.insights.yml, but the file name MUST end with insights.yml. Finally, to authenticate to Insights, set the environment variables INSIGHTS_USER and INSIGHTS_PASSWORD in your shell or via a Custom Credential Type in Ansible Tower. Push this file to your source control repository and then create an inventory source in Ansible Tower, choosing “Sourced from Project” as the type. Ansible Tower won’t show the file in the drop down, but you can type in the name of your file and click it to create the source. Next, let’s look at how to pull additional information into your inventory to make it more meaningful.

In the example inventory below, we use the data returned by the plugin to create additional variables and groups. As noted above, the file name must still end in insights.yml; prefix it, like prod.insights.yml, if you’d like to differentiate.

#insights.yml
---
plugin: redhat.insights.insights
get_patches: True
get_tags: True
compose:
  ansible_host: "{{ insights_tags['insights-client']['default_ip'] }}"
groups:
  patching: insights_patching.enabled
  bug_patch: insights_patching.rhba_count > 0
  security_patch: insights_patching.rhsa_count > 0
  enhancement_patch: insights_patching.rhea_count > 0
keyed_groups:
  - key: insights_tags['insights-client']['env']
    prefix: env
  - key: insights_tags['insights-client']['site']
    prefix: site
  - key: insights_tags['insights-client']['app']
    prefix: app

Insights knows much more about our systems, and we can use that information to make our inventory more useful. The get_patches option will add host variables indicating the number of security, bug fix and enhancement patches the system is missing, and the get_tags option will allow creation of host variables based on the tags that we deployed with the insights_client role. 

This inventory file will create inventory groups for each category of patches as well as groups for each different environment, site and application group. If we need to apply security patches to all of the production servers running the ecomm application, an inventory pattern will select those hosts for us: set the hosts line of your playbook to env_prod:&app_ecomm, or to env_prod:&app_ecomm:&security_patch to limit the run to hosts that are actually missing security patches.
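
To make that concrete, here is a minimal sketch of a play targeting such a pattern. Intersecting with the security_patch group and using the yum module's security flag are illustrative choices, not part of the Insights Collection itself.

---
- name: apply security errata to production ecomm servers
  hosts: env_prod:&app_ecomm:&security_patch
  become: true
  tasks:
    - name: install outstanding security updates only
      yum:
        name: '*'
        security: true
        state: latest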

 

Where to go from here?

In this blog, we walked through automating system registration with the Insights Client role, followed by using cloud.redhat.com as an inventory source for running your Insights remediation playbooks and other general automation. More proactive management means fewer late-night outages, fewer failed upgrades, better security, better performance and more uninterrupted weekends. Red Hat Ansible Automation Platform and Insights put the right tools in your hands to proactively manage your environment at scale, and it couldn’t be simpler. 

Feel free to visit these other resources for more information:

Securing Tower Installer Passwords


One of the crucial pieces of the Red Hat Ansible Automation Platform is Ansible Tower. Ansible Tower helps scale IT automation, manage complex deployments and speed up productivity. A strength of Ansible Tower is its simplicity, which also extends to the installation routine: when installed as a non-container version, a simple script is used to read in variables from an initial configuration to deploy Ansible Tower. The same script and initial configuration can even be re-used to extend the setup and add, for example, more cluster nodes.

However, part of this initial configuration is the set of passwords for the database, Ansible Tower itself and so on. In many online examples, these passwords are stored in plain text. One question I frequently get as a Red Hat Consultant is how to protect this information. A common solution is to simply remove the file after you complete the installation of Ansible Tower. But there are reasons you may want to keep the file around. In this article, I will present another way to protect the passwords in your installation files.

 

Ansible Tower’s setup.sh

For some quick background, setup.sh is the script used to install Ansible Tower and is provided in both the regular and bundled installer. The setup.sh script only performs a couple of tasks, such as validating that Ansible is installed on the local system and setting up the installer logs; but most importantly, it launches Ansible to handle the installation of Ansible Tower. An inventory file can be specified to the installer using the -i parameter or, if unspecified, the default provided inventory file (which sits alongside setup.sh) is used. In the first section of the inventory file, we have groups to specify the servers that Ansible Tower and the database will be installed on:

[tower]
localhost ansible_connection=local

[database]

And, after those group specifications, there are variables that can be used to set the connections and passwords, and this is where you would normally enter your plain text passwords, such as:

[all:vars]
admin_password='T0w3r123!'

pg_host=''
pg_port=''

pg_database='awx'
pg_username='awx'
pg_password='DB_Pa55w0rd!'

In the example above, these passwords are displayed as plain text. Many clients I have worked with are not comfortable with leaving their passwords in plain text within the inventory file for security reasons. Once Ansible Tower is installed, this file can be safely removed, but if you ever need to modify your installation to add a node to a cluster or add/remove inventory groups, this file will need to be regenerated. Likewise, if you want to use the backup and restore functions of setup.sh, you will also need the inventory file with all of the passwords as it was originally installed.

 

Vault to the Rescue

Since the installer is using Ansible to install Ansible Tower, we can leverage some Ansible concepts to secure our passwords. Specifically, we will use Ansible vault to have an encrypted password instead of a plain text password. If you are not familiar with Ansible vault, it is a program shipped with Red Hat Ansible Automation Platform itself and is a mechanism to encrypt and decrypt data. It can be used against individual strings or it can encrypt an entire file. In our example, we will encrypt individual strings as passwords. This will be beneficial if you end up committing your inventory file into a source control management tool. The SCM will be able to show you individual passwords that were changed in a commit versus just being able to say an encrypted file changed (but not being able to show which password within the encrypted file changed).

To start, we are going to encrypt our admin password with the following command (fields in <> indicate input to ansible-vault):

$ ansible-vault encrypt_string --stdin-name admin_password
New Vault password: 
Confirm New Vault password: 
Reading plaintext input from stdin. (ctrl-d to end input)
T0w3r123!
admin_password: !vault |
          $ANSIBLE_VAULT;1.1;AES256
          66663534356430343166356461373464336332343439393731363365303063353032373564623537
          3466663861633936366463346135656130306538376637320a303738653264333737343463613366
          31396336633730323639303436653330386536363838646161653562373631323766346431663461
          6536646264633563660a343163303334336164376339363161373662613137633436393263376631
          3539
Encryption successful

In this example, we are running ansible-vault and asking it to encrypt a string. We’ve told ansible-vault that this variable will be called admin_password and it will have a value of T0w3r123! (what we would have entered into our inventory file). In the example, we used a password of ‘password’ to encrypt these values. In a production environment, a much stronger password should be used to perform your vault encryption. In the output of the command, after the two ctrl-d inputs, our encrypted variable is displayed on the screen. We will take this output and put it into a file called passwords.yml next to our inventory file. After encrypting the second value, pg_password, our passwords.yml file looks like this:

---
admin_password: !vault |
          $ANSIBLE_VAULT;1.1;AES256
          66663534356430343166356461373464336332343439393731363365303063353032373564623537
          3466663861633936366463346135656130306538376637320a303738653264333737343463613366
          31396336633730323639303436653330386536363838646161653562373631323766346431663461
          6536646264633563660a343163303334336164376339363161373662613137633436393263376631
          3539
pg_password: !vault |
          $ANSIBLE_VAULT;1.1;AES256
          65633239383761336539313437643733323235366337653164383934303563643464626562633865
          3130313231666531613131633736386134343664373039620a336237393631333532373066343135
          65316431626630633965623134623133353635376236306538653230363038333661623236376330
          3664346237396139610a376536373132313237653239353832623433663230393464343331356561
          3435

Now that we have our completed passwords.yml file, we have to tell the installer to load the passwords from this file and also to prompt us for the vault password to decrypt the value. To do this we will add three parameters to our setup.sh command. The first option is -e@passwords.yml, which is a standard syntax to tell Ansible to load variables from a specified file name (in this case passwords.yml). The second option will be --, which will tell the setup.sh script that any following options should be passed on to Ansible instead of being processed by setup.sh. The final option will be --ask-vault-pass, which tells Ansible to prompt us for the password to be able to decrypt the vault secrets. All together our setup command will become:

$ ./setup.sh -e@passwords.yml -- --ask-vault-pass

If you normally add arguments to setup.sh, they will need to be merged into this command structure: arguments to setup.sh go before the --, and any arguments you want to pass to Ansible go after the --.

When running setup.sh with these options you will now be prompted to enter the vault password before the Ansible installer begins:

$ ./setup.sh -e@passwords.yml -- --ask-vault-pass
Using /etc/ansible/ansible.cfg as config file
Vault password: 

PLAY [tower:database:instance_group_*:isolated_group_*] ******************************************************************************************

Here I have to enter my weak vault password of ‘password’ for the decryption process to work. 

This technique will work even if you leave the blank password variables in the inventory file because of the variable precedence from Ansible. The highest precedence any variable can take comes from extra_vars (which is the -e option we added to the installer), so values in our vault file will override any values specified in the inventory file.

Using this method allows you to keep the inventory file and password files on disk or in an SCM and not have plain text passwords contained within them. 

 

Another Solution

Another option you could take if you only wanted a single inventory file would be to convert the existing ini inventory file into a yaml based inventory. This would allow you to embed the variables as vault encrypted values directly. While doing that is beyond the scope of this article, an example inventory.yml file might look similar to this:

all:
  children:
    database: {}
    tower:
      hosts:
        localhost:
  vars:
    admin_password: !vault |
        $ANSIBLE_VAULT;1.1;AES256
        66663534356430343166356461373464336332343439393731363365303063353032373564623537
        3466663861633936366463346135656130306538376637320a303738653264333737343463613366
        31396336633730323639303436653330386536363838646161653562373631323766346431663461
        6536646264633563660a343163303334336164376339363161373662613137633436393263376631
        3539
    ansible_connection: local
    pg_database: awx
    pg_host: ''
    pg_password: !vault |
        $ANSIBLE_VAULT;1.1;AES256
        65633239383761336539313437643733323235366337653164383934303563643464626562633865
        3130313231666531613131633736386134343664373039620a336237393631333532373066343135
        65316431626630633965623134623133353635376236306538653230363038333661623236376330
        3664346237396139610a376536373132313237653239353832623433663230393464343331356561
        3435
    pg_port: ''
    pg_sslmode: prefer
    pg_username: awx
    rabbitmq_cookie: cookiemonster
    rabbitmq_password: ''
    rabbitmq_username: tower
    tower_package_name: ansible-tower
    tower_package_release: '1'
    tower_package_version: 3.6.3

Using a file like this, setup.sh could then be called as:

$ ./setup.sh -i inventory.yml -- --ask-vault-pass

Using this method will require more work when upgrading Ansible Tower, as any field changes in the provided inventory file will need to be reflected in your yaml inventory, whereas the previous method only requires that any new password fields added to the inventory file also be added to the passwords.yml file.

 

Takeaways/Going Forward

As part of the Red Hat Ansible Automation Platform, Ansible Tower can be a cornerstone of an enterprise automation strategy. As part of that, many customers wish to secure the Ansible Tower installer passwords. Leveraging basic Ansible concepts, this can easily be done.

If you want to follow up on this and other Ansible related topics, you can find more information here:

*Red Hat provides no expressed support claims to the correctness of this blog. All content is deemed unsupported unless otherwise specified.


Ansible Workshops, Value for partners


The Red Hat Ansible Automation Platform makes IT automation simple and powerful. In line with the fast growing adoption and community, we want Red Hat’s business partners and customers to be familiar with the Red Hat Ansible Automation Platform. Of course, there are lots of resources for learning about Ansible out there: books, blogs, tutorials and training. But the people at Red Hat working behind the scenes on Ansible created something especially useful: the Red Hat Ansible Automation Platform workshops! 

As a Red Hat partner, no matter if you are planning to run an Ansible demo, train your internal staff or deliver a workshop to get your customers started with Ansible, the Ansible workshops are the way to go! Instead of creating your own workshop framework and content, you can focus on delivering Ansible enablement with consistent messaging through tested and curated exercises created by Red Hat. Using consistent, scalable content following best practices allows you to concentrate on your main business, building solutions for your customers and enabling the customer teams on the corresponding technology.

 

The Ansible Workshops

The Ansible workshops provide you with everything you need to successfully run workshops, including presentations, guided exercises and dedicated lab environments for every attendee. The workshops take around 3-6 hours to complete, depending on the attendees’ exposure to automation and the managed technologies. They cover Ansible Engine and Ansible Tower, of course, but depending on the workshop type, other technologies like virtual network devices, security-related products and even Microsoft Windows can be part of the workshops as well. 

The workshops are designed to be run by an instructor, who will lecture and assist students. They are used extensively for internal training at Red Hat, for enabling our business partners and introducing our customers to Red Hat Ansible Automation Platform. We use them regularly to deliver hands-on labs at events like the Red Hat Summits in San Francisco, Boston and even virtually with great success. With the workshops being available to Red Hat partners, you can start using them in the same way today.  

The workshop content is hosted on github.com/ansible/workshops and is, of course, open source. There, you’ll find everything needed for learning and teaching Ansible: the provisioner, a fully Ansible-based tool to deploy lab environments to the Amazon cloud, and guided exercises that will lead your workshop attendees through the corresponding workshop. Currently, the following workshops are available:

 

Ansible Red Hat Enterprise Linux Workshop

Getting started with Ansible Engine and Ansible Tower for Linux automation

Ansible Network Automation Workshop

Learn how to use Ansible for automating network technologies using platforms like Arista, Cisco and Juniper, as examples

Ansible F5 Workshop

A specialized workshop teaching automation of F5 BIG-IP devices

Ansible Security Automation

A recent addition focused on automation of security tools like Check Point Firewall, IBM QRadar and the IDS Snort 

Ansible Windows Automation Workshop

Ansible’s strength is the broad range of supported IT technologies; this one gives an intro on automation of Microsoft Windows

 

How to deploy the workshops from Red Hat Product Demo system 

With the introduction out of the way, a brief walk-through to show you how to deploy and use the workshops should be in order. As a Red Hat business partner, you can use the Red Hat Product Demo system (RHPDS), a resource for deploying demos and workshops covering our products. RHPDS provides a service catalog that allows you to order the Ansible workshops via self-service. To do so, after accessing RHPDS, navigate to the “Workshops” folder in the catalog, where you’ll find the Ansible Workshops. 

workshop blog image 1

Choose the one you are interested in and click it. The following page provides information and links pointing to the workshop content (like slide decks and exercises). To actually deploy a workshop, click “Order”. This will take you to the next page where you have to agree to some disclaimers and put in your company name. After choosing the AWS region (usually the one you are located in) there is one last question: “Number of Attendees”. Enter the number of attendees (1-50) and RHPDS will deploy a dedicated lab for each one!

ansible workshop blog image 2

To finish the order process, click “Submit”. As deployment takes some time, refill on coffee or whatever beverage you prefer. You will get email notifications to the address connected to your RHPDS account informing you about the state of the deployment. The final “deployment completed” email contains information about the environment lifetime and a link to access the lab environments.

workshop blog image 3

The workshop environment is now successfully deployed.

Note: The runtime of the workshop environments is limited to 2 days and can’t be extended. So after 48 hours, the entire environment is decommissioned automatically.

 

Giving attendees access to lab environments

Following the link takes you to the landing page for your very own workshop. This page has three sections:

  • Ansible Automation Workshops - Links to the lab guide with the exercises and an introduction presentation
  • Workshop Resources - some helpful pointers
  • Workbench Information - the most important part, the actual information to access the dedicated environments

How you run a workshop is entirely up to you. But at some point, you have to give attendees access to their lab environments. This is where the “Workbench Information” comes into play. It’s basically a list of student names that can fold out to show specific access information: URLs, hostnames and user credentials for the web UI of an Ansible Tower instance, for SSH access, and something else: “VS Code access”, which is a VS Code-like application running in your browser. Your users can use it to open a terminal (no SSH client needed) for command line work and to create and edit files.

workshop image 4

To get your attendees started with the hands-on labs, hand out student names (student1 - student<N>) and the URL to the landing page. Tell them to look up the access information in the workbench information and off they go. The attendees can then follow the lab guide, which leads them through the environment and the corresponding exercises. 

 

Deploy the Workshops without RHPDS Access

As mentioned, the Ansible Workshops live on Github. If you don’t have access to RHPDS and want to deploy a devel version or want to contribute, you can deploy the workshops into your own AWS account. You basically need a Linux system running at least Ansible Engine v2.9.0 and your Amazon AWS account access keys. 

 

Prepare the control node

The first step is to install Ansible and Boto (needed by Ansible’s AWS modules) and then clone the Ansible Workshop Github repository:

$ sudo yum install ansible python3-boto python3-boto3
$ git clone https://github.com/ansible/workshops.git
$ cd workshops/provisioner/

You also have to set up authentication with AWS. For all AWS-related modules, you can either specify your access and secret key as environment variables, as module arguments or in a credentials file. Let’s use the latter so your keys don’t end up in your bash history. Create this file with your access key and secret key:

$ cat ~/.aws/credentials

[default]
aws_access_key_id = ABCDEFGHIJKLMNOP
aws_secret_access_key = ABCDEFGHIJKLMNOP/ABCDEFGHIJKLMNOP

 

Configure the Deployment

The Ansible Workshops are deployed by a tool appropriately called “provisioner”, which lives in the Github repository and does the cloud automation entirely with Ansible. The provisioner is configured via a vars file that defines all details of the deployment, like a name, student count, flavor and so on. 

The directory ./sample_workshops/ holds sample vars files. The easiest way to get started is to make a copy of a vars file for the required workshop type and then customize it. For example, to deploy the Red Hat Enterprise Linux Workshop, at a bare minimum you have to change ec2_name_prefix, admin_password and workshop_dns_zone:

---
# region where the nodes will live
ec2_region: us-east-1
# name prefix for all the VMs
ec2_name_prefix: myworkshoptest
# amount of work benches to provision
student_total: 2
# workshop is put into rhel mode
workshop_type: rhel
## Optional Variables
# password used for student account on control node
admin_password: AhNuuf6u
# turn DNS on for control nodes, and set to type in valid_dns_type
dns_type: aws
# Sets the Route53 DNS zone to use for the S3 website
workshop_dns_zone: myroute53.org
# this will install Ansible Tower on all control nodes
towerinstall: true

Note: workshop_dns_zone must point to a domain hosted in the AWS Route53 DNS management service. You have to create one if you don’t have one yet.

As the last preparation step, you’ll need to provide the provisioner with an Ansible Tower license file. Request an evaluation subscription if you don’t have an active subscription. Afterwards, download the license file and rename it to tower_license.json. 

 

Deploying a Workshop

Now the only thing left is to run the top-level playbook and hand it the file with the variable definitions:

$ ansible-playbook provision_lab.yml -e@sample-vars-rhel.yml

After the provisioner has finished its job without error, you’ll get information about how to access the workshop:

PROVISIONER SUMMARY
   *******************
   - Workshop name is myworkshoptest
   - Instructor inventory is located at  /home/grieger/workshops/provisioner/myworkshoptest/instructor_inventory.txt
   - Private key is located at /home/grieger/workshops/provisioner/myworkshoptest/myworkshoptest-private.pem
   - Website created at http://myworkshoptest.rhdemo.io
   - Auto-Assignment website located at http://myworkshoptest.gritest.rhdemo.io

The most important line is “- Website created at…”. Following this link will take you to the landing page as described above giving access to the lab environments. From here, just follow the same process as laid out already. 

 

Running a Successful Workshop

Now you should have a good idea of the Ansible workshops, but running a successful workshop, automation event or technical enablement requires more than this. Think up front about the timeframe and what you want to achieve. Do you want to get your customer excited about automation and Ansible? Do you want to get your consultants started in Ansible? Or is it going to be a larger event where the hands-on part is the icing on the cake?

Answering these questions will give you some directions on how to design and deliver for your occasion. Consider the Ansible Workshops hands-on lab environments, the lab guides and the collateral as ingredients that you can add to your automation menu. At Red Hat, we have a lot of experience using this content for delivering successful events; here are some of the main points:

  • Prepare an agenda, interweave presentations and hands-on labs, and make use of the modular character of the Workshops. This makes for a great experience: attendees will remember the meaningful interactions and overall value of the event, and even better, avoid long days filled with 8 hours of presentations.
  • Plan the execution meticulously, especially when delivering to more than 10-15 attendees. Prepare in advance how to assign the student names; this is crucial to prevent attendees ending up on each other’s lab environments. For example, you can put numbers on desks or hand out paper-slips with student numbers. For virtual or large events, make sure to read below about the new attendance feature.
  • Make room for discussion, but keep the time and agenda in mind! This is important for any meeting or event, but especially important for hands-on sessions you mix with presentations or panels. There will likely be a few attendees who are unable to complete the assignment, but make it clear in the beginning this may happen due to the advanced nature of the exercises.
  • Have enough people to help out if problems arise. We have good experience with roughly one facilitator for 15-20 attendees.

Getting Virtual

In these tough times, consider running workshops virtually as we did for Red Hat Summit 2020. The main takeaway is that you have to plan in even more detail, since you can’t just walk over to help someone out. And if attendees don’t know how to access their lab environments or the guides, they will drop out as quickly as they signed up. So think about:

  • How to convey access data and how to keep important information accessible to the attendees (maybe in the chat of your conferencing tool or on an external website).
  • How to engage with your attendees: you will have to present something, sure. But the strongest value will be for people to learn something new and to get hands-on experience. Use the options offered by your conference tooling like Q&A, be prepared to pick up questions through audio and have links to advanced information ready.
  • Try to lighten the mood… it gets pretty quiet in the virtual world when people start to go through labs. Prepare to tell a few brief stories about automation, kick off a morning session with virtual coffee and ask about their favorite beverages in the poll section, and implement any further engagement opportunities to make the experience fun.
  • Have enough helpers available. In virtual environments the ratio might be different, roughly one facilitator per 10-15 attendees.
  • Make sure to follow general best practices, like clear audio and a nice background, and make sure one or two members of your team always keep the camera on! This makes for a much more personal experience.

Tips for larger Audiences

When hosting larger audiences, especially in a virtual environment, getting attendees onto their assigned lab environments can become a big issue. This is where the attendance feature comes into play: It provides a URL where attendees enter their email address as an identifier and get an environment assigned. No need to hand out paper slips or tell attendees to look up their student number on some HTML table anymore. 

To use the feature, you take the URL of the landing page as described above and prepend it with “login”, e.g. when the URL you have been given via email is “http://2f54.rhdemo.io” you give your attendees “http://login.2f54.rhdemo.io”. This will lead them to the same landing page, but this time under “Workbench Information”, they will not see the access information for all students but this form:

workshop blog 4

After filling in their name and email address (used solely as an identifier), they will get one of the lab environments assigned and receive access information for it. The environment is now locked and the next attendee will get the next one available. The attendance feature even includes an admin part: as the facilitator you can append “list.php” so the URL looks like “http://login.2f54.rhdemo.io/list.php”. The page will ask for the workshop password and then provide an overview of the lab environments complete with whom they are assigned to:

workshop blog 5

By clicking “Delete” you can free up an environment for re-assignment.

 

Be Part of the Workshops Community!

A lot of Red Hatters contribute to the workshops, and we have a growing number of Red Hat business partners and individuals contributing as well. Forming a larger community around the workshop development is what Open Source is all about, after all!

When you get started with the workshops, don’t hesitate to raise Github issues if you identify areas that can be improved or you run into issues. And of course, feel free to fork the repository, improve the workshops and hand in Pull Requests.

 

Final Words and key takeaways

Ansible workshops can be a crucial tool in running an Ansible demo, training your internal staff or delivering a workshop to get your customers started with Ansible. Especially for Red Hat partners, there is a big potential here to get customers interested in Red Hat Ansible Automation Platform, paving the way for new business.

Not sure where to go next?  

*Red Hat provides no expressed support claims to the correctness of this blog. All content is deemed unsupported unless otherwise specified.

Automating Security with CyberArk and Red Hat Ansible Automation Platform


Proper privilege management is crucial with automation. Automation has the power to perform multiple functions across many different systems. When automation is deployed enterprise-wide, across sometimes siloed teams and functions, enterprise credential management can simplify adoption of automation — even complex authentication processes can be integrated into the setup seamlessly, while adding additional security in managing and handling those credentials.

Depending on how they are defined, Ansible Playbooks may require access to credentials and secrets that have wide access to organizational systems. These credentials are necessary for reaching systems and IT resources to accomplish automation tasks, but they’re also a very attractive target for bad actors. In particular, they are tempting targets for advanced persistent threat (APT) intruders. Gaining access to these credentials could give the attacker the keys to the entire organization.

Most breaches involve stolen credentials, and APT intruders prefer to leverage privileged accounts like administrators, service accounts with domain privileges, and even local admin or privileged user accounts.

You’re probably familiar with the traditional attack flow: compromise an environment, escalate privilege, move laterally, continue to escalate, then own and exfiltrate. It works, but it also requires a lot of work and a lot of time. According to the Mandiant Report, median dwell time for an exploit, while well down from over 400 days in 2011, remained over 50 days in 2019. However, if you can steal privileged passwords or the API keys to a сloud environment, the next step is complete compromise. Put yourself into an attacker’s shoes: what would be more efficient? 

While Ansible Tower, one of the components of Red Hat Ansible Automation Platform, introduced built-in credentials and secret management capabilities, some may have the need for tighter integration with the enterprise management strategy. CyberArk works with Ansible Automation Platform, automating privileged access management (PAM), which involves the policies, processes and tools that monitor and protect privileged users and credentials.

 

Why Privileged Access Management Matters 

Technologies like cloud infrastructure, virtualization and containerization are being adopted by organizations and their development teams alongside DevOps practices, making security practices based on identity and access management critical. Identity and access management isn't just about employees; it includes managing secrets and access granted to applications and infrastructure resources as well.

A PAM solution ideally handles the following key tasks for your organization:

  • Continuously scan an environment to detect privileged accounts and credentials. 
  • Add accounts to a pending list to validate privileges.
  • Perform automated discovery of privileged accounts and credentials.
  • Provide protected control points to prevent credential exposure and isolate critical assets.
  • Record privileged sessions for audit and forensic purposes.
  • View privileged activity by going directly to specified activities and even keystrokes.
  • Detect anomalous behavior aiming to bypass or circumvent privileged controls, and alert SOC and IT admins to such anomalies.
  • Suspend or terminate privileged sessions automatically based on risk score and activity type.
  • Initiate automatic credential rotation based on risk in the case of compromise or theft.

The common theme in the preceding functions is automation. There’s a reason for that: Automation is not just a “nice to have” feature. It’s absolutely essential to PAM. Large organizations may have thousands of resources that need privileged access, and tens of thousands of employees who may need various levels of privilege to get their work done. Even smaller organizations need to monitor and scale privileged access as they grow. Automated PAM solutions handle the trivial aspects of identity and access management so your team can focus on business goals and critical threats. 

Automation is what you use to:

  • Onboard and discover powerful secrets, where you auto-discover secrets, put them in a designated vault and trigger rotation, just to be on the safe side.
  • Apply compliance standards, such as auto-disabling certain network interfaces. 
  • Harden devices via OS- and network-level controls — like blocking SSH connections as root.
  • Track and maintain configurations.

And, of course, automation becomes indispensable in the remediation and response (R&R) stage. When you’re under attack, the absolute worst-case scenario is having to undertake manual R&R. We’ve seen many times — as you probably have — that it puts security and operations teams at odds with each other, and makes both of these look at development as a source of continuous trouble. 

Security can, and should, exist as code. Integrating Ansible with CyberArk implements security-as-code, which allows security, operations and developers to work in sync as your “first responder” group, giving them the time and peace of mind to meaningfully respond to the threat — and likely to find a way to prevent it from recurring.

 

Automatically Respond to Threats

For most teams, keeping a constant watch on every detail of privileged access is unsustainable and hard to scale. The default reaction is often to simply lock down access, making growth and development difficult. PAM automation can make responding to threats much more scalable. Your team can focus on setting identity and access parameters, and let automated tools apply those rules to daily access needs. 

For example, Ansible Automation Platform, working with CyberArk Response Manager (CARM), can respond to threats automatically by managing users, security policies and credentials based on preconfigured parameters. CARM is part of the CyberArk PAM Collection, developed as part of the Ansible security automation initiative. 

At a high level, the CARM algorithm works like this:

1. An event is detected. For example:
  • A user leaves the company
  • User credentials get compromised
  • An email address gets compromised
2. An automated workflow is triggered
3. A credential is retrieved to authenticate CyberArk
4. The relevant module is invoked:
  • cyberark_user
  • cyberark_policy
  • cyberark_account
  • cyberark_credential
5. A remediation is performed through the module

Depending on the specifics of the detected threat and the CyberArk platform configuration, the security action might be to, for example:

  • Reset a user’s credentials or disable the user so that the user must reset their password.
  • Enhance or relax a security policy or workflow.
  • Trigger a credential rotation, in which a vaulted credential is rotated.

As your environment goes about its daily business of deploying, testing and updating payloads, as well as processing and maintaining data, security operators can use Ansible to automatically call CARM to perform the required security actions. 

Automating threat responses that previously required human intervention now serves as the basis for proactive defense in depth.

Credential retrieval is the first step in many scenarios using Ansible and CARM. This step is performed by the cyberark_credential module of the cyberark.pas Collection. The module can receive credentials from the Central Credential Provider. That way, we can obviate the need to hard code the credential in the environment:

- name: credential retrieval basic
  cyberark_credential:
    api_base_url: "http://10.10.0.1"
    app_id: "TestID"
    query: "Safe=test;UserName=admin"

As can be seen in this example, a target URL needs to be provided in addition to the application ID authorized for retrieving the credential. 

The central parameter is the query: it contains the details of the object actually being queried, in this case the “UserName” and “Safe”. The query parameters depend on the use case, and possible values are “Folder”, “Object”, “Address”, “Database” and “PolicyID”. 

If you are more familiar with the CyberArk API, here is the actual URI request that is created out of these parameter values:

{ api_base_url }"/AIMWebService/api/Accounts?AppId="{ app_id }"&Query="{ query }

The return value of the module contains — among other information — the actual credentials, and can be reused in further automation steps.
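
As a sketch of that reuse, the tasks below register the module output and store the secret for later steps. The result.Content key is an assumption based on the Central Credential Provider response format; adjust it to whatever your instance actually returns.

- name: credential retrieval basic
  cyberark_credential:
    api_base_url: "http://10.10.0.1"
    app_id: "TestID"
    query: "Safe=test;UserName=admin"
  register: retrieved

- name: reuse the returned secret in a later step
  set_fact:
    target_password: "{{ retrieved.result.Content }}"   # key name is an assumption, see above
  no_log: true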

A more production-level approach is to also encrypt the communication to the API via client certificates:

- name: credential retrieval advanced
  cyberark_credential:
    api_base_url: "https://components.cyberark.local"
    validate_certs: yes
    client_cert: /etc/pki/ca-trust/source/client.pem
    client_key: /etc/pki/ca-trust/source/priv-key.pem
    app_id: "TestID"
    query: "Safe=test;UserName=admin"
    connection_timeout: 60
    query_format: Exact
    fail_request_on_password_change: True
    reason: "requesting credential for Ansible deployment"

Now, let’s look at an example where the detected “bad” event requires rotation of account credentials. With the help of the cyberark_account module, we can change the credentials of the compromised account. The module supports account object creation, deletion and modification using the PAS Web Services SDK.

    - name: Rotate credential via reconcile and provide new password
      cyberark_account:
        identified_by: "address,username"
        safe: "Domain_Admins"
        address: "prod.cyberark.local"
        username: "admin"
        platform_id: WinDomain
        platform_account_properties:
            LogonDomain: "PROD"
        secret_management:
            new_secret: "Ama123ah12@#!Xaamdjbdkl@#112"
            management_action: "reconcile"
            automatic_management_enabled: true
        state: present
        cyberark_session: "{{ cyberark_session }}"

In this example, we changed the password for the user “admin”. Note that the authentication is handled via the cyberark_session value, which is usually obtained from the cyberark_authentication module.
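
For completeness, a hedged sketch of establishing that session might look like the following; the URL and credential variables are placeholders. On success, the module sets the cyberark_session fact that is consumed by the task above.

- name: Logon to CyberArk to establish a session
  cyberark_authentication:
    api_base_url: "https://components.cyberark.local"   # placeholder URL
    validate_certs: true
    username: "ansible_bot"                             # placeholder service account
    password: "{{ cyberark_bot_password }}"             # placeholder vaulted variable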

 

Next Steps

In this post, you learned how CyberArk and Ansible Automation Platform joined forces to protect privileged accounts, which are the most attractive targets to APT intruders. The integration not only makes your system and data more secure, but it also increases the degree of automation.

If you want to test this solution, install the CyberArk PAS solution and get busy configuring CARM.

If you prefer, you can get in touch with CyberArk experts and the Red Hat Ansible team for advice specific to your case. You can schedule a CyberArk demo or CyberArk DNA risk assessment, which provides a detailed view of your environment to help locate privileged accounts and credentials and discover system vulnerabilities you might not otherwise be aware of. You can also download a free Ansible Automation Platform evaluation.

To dive a bit deeper into the Ansible/CyberArk realm on your own, here are some useful resources:

Using an Inventory Plugin from a Collection in Ansible Tower


Many IT environments grow more and more complex. It is more important than ever that an automation solution always has the most up to date information about what nodes are present and need to be automated. To answer this challenge, the Red Hat Ansible Automation Platform uses inventories: lists of managed nodes.

In its simplest form, inventories can be static files. This is ideal when getting started with Ansible, but as the automation is scaled, a static inventory file is not enough anymore:

  1. How do we update and maintain a list of all of our managed nodes if something changes, or if workloads are spun up or torn down?
  2. How do we classify our infrastructure so that we can be more selective in what managed nodes we automate against?

The answer to both of these questions is to use a dynamic inventory: a script or a plugin that will go to a source of truth and discover the nodes that need to be managed. It will also automatically classify the nodes by putting them into groups, which can be used to more selectively target devices when automating with Ansible.
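As a simple illustration of that selective targeting, a play can be limited to one of the groups created by the dynamic inventory. This is only a sketch, and the group name "webservers" is a hypothetical example of a group your inventory source might produce:

- name: Automate only the discovered web servers
  hosts: webservers        # hypothetical group created by the dynamic inventory
  gather_facts: false
  tasks:
    - name: Check that the discovered hosts are reachable
      ping: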

Inventory plugins allow Ansible users to use external platforms to dynamically discover target hosts and use those platforms as a source of truth for their Ansible inventory. Common sources of truth include AWS EC2, Google GCP and Microsoft Azure, but there are a number of other inventory plugins available with Ansible.

Ansible Tower ships with a number of inventory plugins that work out of the box. These include the cloud examples mentioned earlier as well as VMware vCenter, Red Hat OpenStack Platform and  Red Hat Satellite. To use these inventory plugins, credentials need to be added that can query the source platform. Afterwards, the inventory plugins can be used as a source for an inventory in Ansible Tower. 

There are additional inventory plugins available, which are not shipped with Ansible Tower, but which are written by the Ansible community. With the move to Red Hat Ansible Content Collections, these inventory plugins are being packaged as part of the corresponding Collections.

In this example, we are having a look at the ServiceNow inventory plugin. ServiceNow is a very popular IT Service Management platform, and customers often use the ServiceNow CMDB to store details of all of their devices. A CMDB can provide additional context to automation, for example the server owner, service level (production/non-production) and patch & maintenance windows. The corresponding Ansible inventory plugin can be used to query the ServiceNow CMDB and is delivered as part of the servicenow.servicenow Collection, available on Ansible Galaxy.
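If you want to experiment with the plugin locally before wiring it into Ansible Tower, the Collection can be installed from Galaxy with the standard ansible-galaxy command:

$ ansible-galaxy collection install servicenow.servicenow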

 

Git Repository 

To use an inventory plugin from a Collection in Ansible Tower, we need to source it from a Project. A Project within Ansible Tower is the integration of a source control repository like a git repository. In Ansible Tower, projects are used to pull Ansible Playbooks but also variables and inventories. 

The contents of my source control repository are very simple:

├── collections
│   └── requirements.yml
└── servicenow.yml

The servicenow.yml file contains the details for the inventory plugin. In our case, we  specify the correct table in the ServiceNow CMDB that we want to use. We also select the fields we want to add as host_vars and some information on the groups that we want it to create.

$ cat servicenow.yml
plugin: servicenow.servicenow.now
table: cmdb_ci_linux_server
fields: [ip_address,fqdn,host_name,sys_class_name,name,os]
keyed_groups:
  - key: sn_sys_class_name | lower
    prefix: ''
    separator: ''
  - key: sn_os | lower
    prefix: ''
    separator: ''

Note that no details of the ServiceNow instance that we want to connect to or any credentials are defined here. Those will be configured within Ansible Tower later on.

The collections/requirements.yml file is needed so that Ansible Tower can download the Collection and therefore the inventory plugin. Otherwise, we would have to install and maintain the Collection on all of our Ansible Tower nodes manually.

$ cat collections/requirements.yml
---
collections:
- name: servicenow.servicenow

Once we have pushed this configuration into the source control repository, we can create a project in Ansible Tower referencing the repository. Below is an example that links Ansible Tower to my GitHub repository. Note the SCM URL. We can optionally specify a credential if the repository is private, and also specify a specific branch, tag or commit to pull from.

plugin blog image 1

 

Create the ServiceNow Credential

As mentioned, the configuration in our repository does not include credentials to use with ServiceNow, or the definition of the ServiceNow instance to speak to. Thus we will create a credential in Ansible Tower to define those values. Looking at the documentation for the ServiceNow inventory plugin, we can see that there are a number of environment variables that we can set to define the connection details. For example:

= username
        The ServiceNow user account, it should have rights to read cmdb_ci_server (default), or table specified by SN_TABLE

        set_via:
          env:
          - name: SN_USERNAME

In this case, if the SN_USERNAME environment variable is set then the inventory plugin will use it as the user account to connect to ServiceNow.

The other variables we need to set are SN_INSTANCE and SN_PASSWORD.
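With the Collection installed locally, these environment variables also give us a quick way to sanity-check the plugin configuration outside of Ansible Tower before setting anything up there. The instance name and credentials below are placeholders:

$ export SN_INSTANCE=dev12345.service-now.com
$ export SN_USERNAME=admin
$ export SN_PASSWORD='********'
$ ansible-inventory -i servicenow.yml --graph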

However, in Ansible Tower, there is no credential type for ServiceNow where we can enter these details. Luckily for such use cases, Ansible Tower allows us to define custom credential types. You can read more about custom credentials in our “Ansible Tower Feature Spotlight: Custom Credentials” by Bill Nottingham.

In our case, the input configuration for a custom credential for ServiceNow is as follows:

fields:
  - id: SN_USERNAME
    type: string
    label: Username
  - id: SN_PASSWORD
    type: string
    label: Password
    secret: true
  - id: SN_INSTANCE
    type: string
    label: Snow Instance
required:
  - SN_USERNAME
  - SN_PASSWORD
  - SN_INSTANCE

The credentials will be exposed as environment variables of the same name. This is described in the injector configuration:

env:
  SN_INSTANCE: '{{ SN_INSTANCE }}'
  SN_PASSWORD: '{{ SN_PASSWORD }}'
  SN_USERNAME: '{{ SN_USERNAME }}'

With the custom credential type defined, we can now add a ServiceNow credential and set the instance, username and password as shown:

plugin blog image 2

 

Create the Inventory

The final step is to create the inventory within Ansible Tower. We need a name - here ServiceNow: 

plugin blog image 3

With the inventory created, we can now attach a source to it. Here we specify the Project that we created earlier and enter the path to our inventory YAML file in the source control repository - in this case, servicenow.yml in the root of the project. We also need to associate our ServiceNow credential.

plugin blog image 4

To test the setup, we can try syncing with the source. Pressing the button “Sync all” does just that. If everything was configured correctly, the hosts should be imported into the inventory:

plugin blog image 5

Note that the groups we requested were also created.

 

Summary and going forward

In this example, we have shown how to use inventory plugins from Collections within  Ansible Tower using the ServiceNow inventory plugin. We have also securely defined the credentials to authenticate to our ServiceNow instance. Sourcing an inventory plugin from a Project is not exclusive to third party or custom plugins either: this is a valid method for modifying the behaviour of some of the built-in inventory plugins as well. These capabilities enable the Ansible Automation Platform to seamlessly integrate with existing tools while automating IT environments of growing complexity. 

If you want to follow up on this and other Ansible related topics, you can find more information here:

*Red Hat provides no expressed support claims to the correctness of this code. All content is deemed unsupported unless otherwise specified

AnsibleFest 2020 - The Biggest AnsibleFest EVER


It is almost that time of year again for everyone’s favorite automation event! 2020 has given us our fair share of change (and then some). But we’re not just facing new challenges. We’re adapting to them and innovating to overcome them together. We’re distributed yet we’re connected -- connected to new technologies, to new ways of working, and most importantly, to each other.

This year’s AnsibleFest is now a virtual experience, and we are using this opportunity to engage and collaborate with Ansible users across the globe. It will be a free virtual experience where our communities can connect to a wider audience to collaborate and solve problems. The venue may be different this year, but it is still the same AnsibleFest you know and love.

 

Keynotes

This year we have a great lineup of keynote speakers. We have brought together a group of people rich with Ansible knowledge, tapped to share meaningful insights with you right at home:

  • Richard Henshall, Senior Manager for Product Management - Ansible Product Updates
  • Matt Jones, Ansible Senior Principal Software Engineer - The Future of Automation
  • Chris Wright, Red Hat CTO - Automation at the Edge
  • Robyn Bergeron, Senior Principal Community Architect - Ansible Community Update
  • Red Hatters Colin McNaughton and Walter Bentley - This Ansible Automation Platform demo will be bananas!
  • Red Hatters Dylan Silva, Adam Miller and Brad Thornton - Ansible Automation of the Future demo 

 

Breakout Sessions

This year, we will be breaking out our sessions into six different persona tracks. You can customize your AnsibleFest experience based on your role as it relates to automation, or choose tracks in areas where you want to grow your automation skills. We will offer breakout tracks centered around IT leaders, network, security, operations, developer and automation architect. Wondering what and who you will be connecting with in each track? Let us help answer that... 

IT Leaders

  • Connect with the automation journey
  • Connect with competitive edge
  • Connect with automation leaders 

Network

  • Connect to automated network management 
  • Connect to multi-vendor integrations
  • Connect to solve challenges 

Security

  • Connect to integrated, multi-vendor  security
  • Connect to rapid incident  response 
  • Connect to better-coordinated teams

Operations

  • Connect to automated operations  
  • Connect to crush complexity 
  • Connect to efficiency 

Developer

  • Connect with automation you can use 
  • Connect to the automation community
  • Connect to progress skills 

Automation Architect

  • Connect to build your automation vision
  • Connect to business value 
  • Connect to culture change 

 

Get Registered

Now is the time to register for one of the most highly anticipated automation events of the year. Keep an eye on the Ansible Twitter page - we will be releasing a poll in the coming month to allow you to vote for this year’s AnsibleFest T-shirt design! 

AnsibleFest 2020 is your opportunity to hear from Ansible practitioners and experts, ask questions, and collaborate and network with people across the globe. So what are you waiting for? Register now.

Automating Mitigation of the Microsoft (CVE-2020-1350) Security Vulnerability in Windows Domain Name System Using Ansible Tower


On July 14, 2020, a Critical Remote Code Execution (RCE) vulnerability in Windows DNS Server was disclosed that is classified as a ‘wormable’ vulnerability and has a CVSS base score of 10.0. This issue results from a flaw in Microsoft’s DNS server role implementation and affects all Windows Server versions. Non-Microsoft DNS servers are not affected.

Updates addressing this vulnerability are available. However, in some use cases, applying the update quickly might not be practical: in many enterprises, even hotfixes need to run through a series of tests that take time. For such cases, a registry-based workaround is available that also requires restarting the DNS service. However, doing so manually is time-consuming and prone to error, especially if many servers are involved. For customers with the Red Hat Ansible Automation Platform, a playbook has been written to automate the workaround.

 

Background of the vulnerability

The vulnerability is described in CVE-2020-1350.

Wormable vulnerabilities have the potential to spread via malware between vulnerable computers without user interaction. Windows DNS Server is a core networking component. While this vulnerability is not currently known to be used in active attacks, it is essential that customers apply Windows updates to address this vulnerability as soon as possible.

The mitigation can be performed by editing the Windows registry and restarting the DNS service. Guidance for this workaround can be found at KB4569509: Guidance for DNS Server Vulnerability CVE-2020-1350.

Also check out the related blog post of the Microsoft Security Response Center.

 

Mitigation using Ansible

Ansible can help in automating a temporary workaround across multiple Windows DNS servers. As an example, a playbook is included below which, when executed from within Ansible Tower, has been shown to successfully mitigate this security vulnerability. The following factors need to be considered:

  • The provided Ansible Playbook requires making changes to the Windows registry.
  • This playbook will first make a backup of the HKLM registry and will save this backup to the root of the C: drive. It is suggested that this location be changed to an off-box share.
  • The provided playbook was written specifically for Ansible Tower and serves as an example of how the mitigation can be carried out. The playbook is provided as-is and is only provided for guidance. Customers are advised to write their own playbooks to mitigate the issue. Red Hat makes no claim of official support for this playbook.

Playbook Details

In order to successfully run the referenced playbook, you'll need to run it against a Windows server that has the DNS Server role installed and running. The credentials should have administrative permissions, and if using WinRM as the connection method, the authentication should be “credssp” or “kerberos”. Documentation for configuring Windows servers for WinRM authentication can be found at Windows Remote Management in the Ansible documentation.
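As a sketch of what those connection settings could look like in a YAML inventory (hostname, port and transport are illustrative placeholders; in Ansible Tower the user name and password would normally come from a machine credential rather than the inventory):

all:
  children:
    windows_dns_servers:
      hosts:
        dns01.example.com:                             # placeholder hostname
      vars:
        ansible_connection: winrm
        ansible_port: 5986
        ansible_winrm_transport: credssp               # or kerberos
        ansible_winrm_server_cert_validation: ignore   # lab/testing setups only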

The referenced playbook contains three tasks which each provide the following:

  • The first task “Backing up the registry settings for HKLM” makes a backup of the HKLM registry key. 
  • The second task “Changing registry settings for DNS parameters” makes a change to the registry to restrict the size of the largest inbound TCP-based DNS response packet that's allowed.
  • The third task “restarting DNS service” restarts the service to make the configuration active.

This playbook is also idempotent: it can be run multiple times and will result in the same outcome. As such, it can be run to validate that servers have the workaround in place.

---
- name: Mitigating Microsoft CVE-2020-1350.
  hosts: all
  gather_facts: False

  tasks:
    - name: Backing up Registry settings for HKLM
      win_command: REG EXPORT HKLM C:\HKLM.Reg /y
      register: reg_save

    - name: Changing registry settings for DNS parameters
      win_regedit:
        path: HKLM:\SYSTEM\CurrentControlSet\Services\DNS\Parameters
        name: TcpReceivePacketSize 
        data: 0xFF00
        type: dword

    - name: restarting DNS service
      win_service:
        name: DNS
        state: restarted
        start_mode: auto

The most recent version of this playbook is available in a GitHub repository.
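Outside of Ansible Tower, a playbook like this could also be launched directly from the command line; a minimal sketch with placeholder file names:

$ ansible-playbook -i windows_dns_inventory.yml mitigate_cve_2020_1350.yml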

 

Validating that the Playbook Succeeded

A mitigation that has not been verified should be treated as no mitigation, so let’s check that we have been successful:

  1. From the GUI interface of the Windows server, open the registry with the command “regedit” 
  2. Navigate to HKLM\SYSTEM\CurrentControlSet\Services\DNS\Parameters and validate that “TcpReceivePacketSize” has a value of “0xff00” 

This can also be validated with the following Ansible Playbook, which checks that the TcpReceivePacketSize value exists and is set to “0xff00”.

---
- name: Validating mitigation of Microsoft CVE-2020-1350.
  hosts: all
  gather_facts: False
  tasks:
    - name: Checking status...
      win_reg_stat:
        path: HKLM:\SYSTEM\CurrentControlSet\Services\DNS\Parameters
        name: TcpReceivePacketSize
      register: current_value
    - name: Current value is
      debug:
        msg: "{{ current_value }}"

A successful mitigation will show the following:

TASK [Current value is] *****
ok: [13.57.x.x] => {
    "msg": {
        "changed": false, 
        "exists": true, 
        "failed": false, 
        "raw_value": 65280, 
        "type": "REG_DWORD", 
        "value": 65280
    }
}
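If a hard pass/fail result is preferred over reading the debug output, an assert task could be appended to the validation play; a short sketch based on the win_reg_stat return values shown above:

    - name: Fail if the workaround is not in place
      assert:
        that:
          - current_value.exists
          - current_value.value == 65280          # 0xFF00
        fail_msg: "TcpReceivePacketSize is not set to 0xFF00 - the registry workaround is missing"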

 

Takeaways and where to go next

Remediating vulnerabilities in network devices and servers is crucial, and in this blog we showed how Ansible can help with that given the current example of the “CVE-2020-1350 | Windows DNS Server Remote Code Execution Vulnerability”. 

If you want to know more about the Ansible Automation Platform:


*Red Hat provides no expressed support claims to the correctness of this code. All content is deemed unsupported unless otherwise specified