Thursday, January 27, 2022

How to Use the Terraform Command Line Interface (CLI) on Ubuntu


Terraform is a framework for building and configuring infrastructure as code, with a command-line interface and a domain-specific language (DSL). Terraform can manage existing and popular service providers as well as custom in-house solutions to build and configure complete distributed data centers.


Monday, January 24, 2022

Getting Started with Ansible.utils Collection for Playbook Creators: Part 1

Via Ansible Blog by Ashwini Mhatre


The Ansible ansible.utils collection includes a variety of plugins that aid in the management, manipulation, and visibility of data for the Ansible playbook developer. The most common use case for this collection is when you want to work with the complex data structures present in an Ansible playbook or inventory, or returned from modules. See each plugin's documentation page for detailed examples of how these utilities can be used in tasks. In this two-part blog series, we will give an overview of the collection in part one and walk through a detailed example use case in part two.

 

Plugins inside ansible.utils 

Plugins are pieces of code that augment Ansible's core functionality. They execute on the control node and provide options and extensions for the core features of Red Hat Ansible Automation Platform. The ansible.utils collection includes:

  • Filter plugins
  • Lookup plugins
  • Test plugins
  • Modules

 

Filter plugins

Filter plugins manipulate data. With the right filter you can extract a particular value, transform data types and formats, perform mathematical calculations, split and concatenate strings, insert dates and times, and do much more. Ansible Automation Platform uses the standard filters shipped with Jinja2 and adds some specialized filter plugins. You can create custom Ansible filters as plugins. Please refer to the docs for more information.

The ansible.utils filter plugins include the following (a usage sketch follows the list):

ansible.utils.from_xml

Convert a given XML string to a native Python dictionary.

ansible.utils.get_path

Retrieve the value in a variable using a path

ansible.utils.index_of

Find the indices of items in a list matching some criteria

ansible.utils.param_list_compare

Generate the final param list combining/comparing base and provided parameters.

ansible.utils.to_paths

Flatten a complex object into a dictionary of paths and values

ansible.utils.to_xml

Convert a given JSON string to XML

ansible.utils.usable_range

Expand the usable IP addresses

ansible.utils.validate

Validate data with provided criteria
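As a quick illustration, here is a minimal, hedged sketch of how two of these filters could be used in a task; the sample data, values, and fact names are purely illustrative:

  - name: Define sample structured data
    ansible.builtin.set_fact:
      interfaces:
        GigabitEthernet1:
          enabled: true
          mtu: 1500

  - name: Pull one value out of the nested structure with get_path
    ansible.builtin.debug:
      msg: "{{ interfaces | ansible.utils.get_path('GigabitEthernet1.mtu') }}"

  - name: Flatten the structure into a dictionary of paths and values
    ansible.builtin.debug:
      msg: "{{ interfaces | ansible.utils.to_paths }}"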

 

Lookup plugins

Lookup plugins are an Ansible-specific extension to the Jinja2 templating language. You can use lookup plugins to access data from outside sources (files, databases, key/value stores, APIs, and other services) within your playbooks. Like all templating, lookups execute and are evaluated on the Ansible Automation Platform control machine. Ansible makes the data returned by a lookup plugin available using the standard templating system. You can use lookup plugins to load variables or templates with information from external sources. You can also create custom lookup plugins. Please refer to the docs for more information.

The ansible.utils lookup plugins include the following (a usage sketch follows below):

ansible.utils.get_path

Retrieve the value in a variable using a path

ansible.utils.index_of

Find the indices of items in a list matching some criteria

ansible.utils.to_paths

Flatten a complex object into a dictionary of paths and values

ansible.utils.validate

Validate data with provided criteria

Note: In ansible.utils some plugins were implemented as both filter and lookup plugins to give the playbook developer flexibility depending on their use case.
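Because these are lookups, they can be used anywhere a Jinja2 expression is evaluated. A minimal sketch with illustrative data:

  - name: Find the index of the first list element greater than 20
    ansible.builtin.set_fact:
      idx: "{{ lookup('ansible.utils.index_of', [10, 20, 30], 'gt', 20) }}"

  - name: Read a value by path using the lookup form of get_path
    ansible.builtin.debug:
      msg: "{{ lookup('ansible.utils.get_path', {'a': {'b': 5}}, 'a.b') }}"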

 

Test plugins

Test plugins evaluate template expressions and return a value of True or False. With test plugins you can create conditionals to implement the logic of your tasks, blocks, plays, playbooks, and roles. Ansible Automation Platform uses the standard tests shipped as part of Jinja, and adds some specialized test plugins. Please refer to the docs for more information.

The ansible.utils test plugins include the following (a usage sketch follows the list):

ansible.utils.in_any_network

Test if an IP or network falls in any network

ansible.utils.in_network

Test if an IP address falls in the network

ansible.utils.in_one_network

Test if an IP address belongs to any one of the networks in the list

ansible.utils.ip

Test if something is an IP address or network

ansible.utils.ip_address

Test if something is an IP address

ansible.utils.ipv4

Test if something is an IPv4 address or network

ansible.utils.ipv4_address

Test if something is an IPv4 address

ansible.utils.ipv4_hostmask

Test if an address is a valid hostmask

ansible.utils.ipv4_netmask

Test if an address is a valid netmask

ansible.utils.ipv6

Test if something is an IPv6 address or network

ansible.utils.ipv6_address

Test if something is an IPv6 address

ansible.utils.ipv6_ipv4_mapped

Test if something appears to be an IPv6-to-IPv4 mapped address

ansible.utils.ipv6_sixtofour

Test if something appears to be a 6to4 address

ansible.utils.ipv6_teredo

Test if something appears to be an IPv6 teredo address

ansible.utils.loopback

Test if an IP address is a loopback

ansible.utils.mac

Test if something appears to be a valid MAC address

ansible.utils.multicast

Test for a multicast IP address

ansible.utils.private

Test if an IP address is private

ansible.utils.public

Test if an IP address is public

ansible.utils.reserved

Test for a reserved IP address

ansible.utils.resolvable

Test if an IP or name can be resolved via /etc/hosts or DNS

ansible.utils.subnet_of

Test if a network is a subnet of another network

ansible.utils.supernet_of

Test if a network is a supernet of another network

ansible.utils.unspecified

Test for an unspecified IP address

ansible.utils.validate

Validate data with provided criteria
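These tests slot naturally into when conditionals. A minimal, hedged sketch with illustrative addresses:

  - name: Warn when an address is outside the expected network
    ansible.builtin.debug:
      msg: "{{ node_ip }} is outside 10.0.0.0/8"
    when: node_ip is not ansible.utils.in_network('10.0.0.0/8')
    vars:
      node_ip: 192.168.1.10

  - name: Report private addresses
    ansible.builtin.debug:
      msg: "{{ node_ip }} is a private address"
    when: node_ip is ansible.utils.private
    vars:
      node_ip: 192.168.1.10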

 

Modules

Modules are the main building blocks of Ansible playbooks. Although we do not generally speak of "module plugins", a module is a type of plugin. For a developer-focused description of the differences between modules and other plugins, see Modules and plugins: what is the difference?. Please refer to the docs for more information. 

The ansible.utils modules include the following (a usage sketch follows the list):

ansible.utils.cli_parse

Parse CLI output or text using a variety of parsers

ansible.utils.fact_diff

Find the difference between currently set facts

ansible.utils.update_fact

Update currently set facts

ansible.utils.validate

Validate data with provided criteria
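As a small illustration of the module side, here is a hedged sketch that uses update_fact to change one value inside an existing fact; the fact and path are illustrative:

  - name: Create a sample fact
    ansible.builtin.set_fact:
      interface:
        name: GigabitEthernet1
        enabled: true

  - name: Disable the interface inside the fact
    ansible.utils.update_fact:
      updates:
        - path: interface.enabled
          value: false
    register: updated

  # update_fact returns an updated copy; it does not modify the original fact in place
  - name: The updated copy is returned under the original key
    ansible.builtin.debug:
      msg: "{{ updated.interface }}"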

 

Accessing and using the ansible.utils Collection

To download the ansible.utils collection, refer to Automation hub (fully supported, requires a Red Hat Ansible Automation Platform subscription) or Ansible Galaxy (upstream).

To learn more about how to configure downloading via the ansible.cfg file or requirements.yml file, please refer to the blog, Hands On With Ansible Collections.

The ansible.utils collection is also available in the supported execution environment, along with its required Python libraries. Please refer to the docs for more details about execution environments.

 

Different use cases of Utils

As we know, ansible.utils has a variety of plugins with various use cases. The following are the most common uses of ansible.utils:

  • Validating business logic before pushing configurations, using the validate and test plugins
  • Auditing architectural disposition and layouts, using the test plugins
  • Managing complex data structures in Ansible playbooks, using the get_path and to_paths plugins
  • Conducting minor checks related to network addresses, using the test plugins
  • Assessing operational state, using the cli_parse and validate plugins

Note: We will cover operational state assessment using the cli_parse and validate plugins in depth in part two of this blog.

 

Future scope

Here are some additional ansible.utils capabilities that are on the horizon:

  • ipaddr filter plugin support:
      • The ipaddr filter is designed to provide an interface to the netaddr Python package from within Ansible.
      • It can operate on strings or lists of items, test various data to check if they are valid IP addresses, and manipulate the input data to extract requested information.
      • ipaddr() works with both IPv4 and IPv6 addresses in various forms.
      • There are also additional functions available to manipulate IP subnets and MAC addresses.
      • We are currently working on supporting the ipaddr filter as part of the ansible.utils collection.
  • Support for more validation engines in the ansible.utils.validate plugin:
      • Currently the validate plugin supports only the ansible.utils.jsonschema validation engine, but there are plans to add more.
  • Support for different filter plugins to manipulate input data:
      • Recursive plugins: remove_keys, replace_keys, keep_keys

Contributing to this collection

This collection is intended for plugins that are not platform or discipline specific. Simple plugin examples should be generic in nature. More complex examples can include real world platform modules to demonstrate the utility of the plugin in a playbook.

We welcome community contributions to this collection. If you find problems, please open an issue or create a PR against the ansible.utils collection repository. See Contributing to Ansible-maintained collections for complete details. See the Ansible Community Guide for details on contributing to Ansible.

 

Takeaways and next steps

  • The ansible.utils plugins make the playbook writing experience simple and smooth
  • ansible.utils plugins are fast, since they execute locally on the control node
  • They are easy to understand, code, use, and integrate with other modules
  • Because ansible.utils is a plugin ecosystem, it is easy to add new plugins to it

To learn more about the ansible.utils collection, you can check out part two of this blog, where we cover the operational state assessment use case using the cli_parse and validate plugins in depth.


Getting Started with Ansible.utils Collection for Playbook Creators: Part 2

Via Ansible Blog by Ashwini Mhatre

Use Case: Operational state assessment using ansible.utils collection

The ansible.utils collection offers a variety of plugins that we can use for operational state assessment of network devices. I gave an overview of the collection in part one of this two-part blog series. If you have not reviewed part one, I recommend you do so, since I will build on that information here. In this part, we will see how the ansible.utils collection can be useful for operational state assessment as an example use case.

In general, a state assessment workflow has the following steps:

  • Retrieve (Source of Truth)
      • Collect the current operational state from the remote host.
      • Convert it into normalized structured data, which can be in JSON, YAML, or any other format.
      • Store it as an inventory variable.
  • Validate
      • Define the desired state criteria in a standards-based format, for example as a JSON schema.
      • Retrieve the operational state at runtime.
      • Validate the current state data against the predefined criteria to identify any deviation.
  • Remediate
      • Implement the required configuration changes to correct drift.
      • Report on the change as an audit trail.

 

How can ansible.utils collection help in this workflow?

The ansible.utils collection makes it easier to retrieve and parse the data so it can then be further assessed from a structured format.

 

Retrieving operational state in structured format using ansible.utils.cli_parse

This module is available in the ansible.utils collection. It has a variety of parsers which help parse CLI or text output, and it can work with multiple types of remote hosts: network devices, Linux, or Windows. It supports multiple parsing engines, and it is extensible, which means you can create your own parsing engine. A single task can run a command, parse the output, and set facts.

Before the utils collection was available, we would need to write three different tasks to run a command, parse the output from the command, and set facts. Thanks to cli_parse, we now have only one task, which returns structured data from the show command output.

Let's see an example of ansible.utils.cli_parse task:

  tasks:
    - name: Run a command and parse results
      ansible.utils.cli_parse:
        command: show interfaces
        parser:
          name: ansible.utils.xxxx
        set_fact: interfaces

In this task we need to provide a command that will execute on the device. The parser, a sub-plugin of the cli_parse plugin, parses the command output. set_fact stores the converted structure in the interfaces key, which we can then refer to elsewhere in our playbook.

The above task performs the following operations:

  • Run the command on the device 
  • Parse using the 'xxxx' engine 
  • Use a default template folder 
  • Set parsed data as fact
  • Return command output as stdout

Currently, the ansible.utils.cli_parse plugin has the following parsers:

  • ansible.utils.textfsm: Python module for parsing semi-formatted text
  • ansible.utils.ttp: Template-based parsing, low regex use, Jinja-like DSL
  • ansible.netcommon.native: Internal Jinja, regex, and YAML, with no additional third-party libraries required
  • ansible.netcommon.ntc_templates: Predefined textfsm templates packaged as a Python library
  • ansible.netcommon.pyats: Cisco Test Automation & Validation Solution (11 OSs/2500 parsers)
  • ansible.utils.from_xml: Convert XML to JSON using xmltodict

All of the generic parsers are part of the ansible.utils collection and all network-related parsers are part of the ansible.netcommon collection.
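To make the generic task above concrete, here is a hedged sketch with one real parser filled in; it assumes a Cisco IOS target and that the ntc-templates Python library is installed where the task runs:

  - name: Run show interfaces and parse with ntc_templates
    ansible.utils.cli_parse:
      command: show interfaces
      parser:
        name: ansible.netcommon.ntc_templates
      set_fact: interfaces

  - name: Work with the parsed, structured result
    ansible.builtin.debug:
      var: interfaces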

 

Validating structured data and reporting errors using ansible.utils.validate

The ansible.utils.validate module is a new module available as part of the ansible.utils collection which works with all platforms. It has extensible engine support and currently works with the jsonschema validation engine, which uses the jsonschema Python library underneath. It is a single task that reads structured data and validates it against the data model provided in the task, reporting success or errors depending on whether the data is valid per the schema.

Let's see an example of ansible.utils.validate task:

  tasks:
    - name: Validate structured data
      ansible.utils.validate:
        data: "{{ input_data }}"
        criteria:
          - "{{ lookup('file', './criteria.json') | from_json }}"
        engine: ansible.utils.xxxx

In this task we need to provide the data, which must be structured. criteria takes a list of criteria; since we are currently using jsonschema, the criteria are in JSON format. engine is a sub-plugin of the top-level validate plugin; here it is ansible.utils.jsonschema. Again, you can write your own engine, as it is extensible.

The above task performs the following operations:

  • Reads the input JSON data and the criteria for the data (the schema)
  • Validates using the 'xxxx' engine
  • Returns a list of errors if the data does not conform to the schema criteria

Currently, the ansible.utils.validate plugin supports the following validation engine:

  • ansible.utils.jsonschema: Python module to validate JSON data against a schema
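Here is a hedged, self-contained sketch of a validate task with the jsonschema engine and inline criteria; the data and schema are illustrative:

  - name: Validate that the reported MTU is at least 1500
    ansible.utils.validate:
      data:
        name: GigabitEthernet1
        mtu: 1500
      criteria:
        - type: object
          properties:
            mtu:
              type: number
              minimum: 1500
          required:
            - mtu
      engine: ansible.utils.jsonschema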

Now let's use the above plugins from ansible.utils in an actual scenario. In this example, we will see how to use ansible.utils to fetch BGP operational state data, validate it against a predefined JSON schema, and remediate configuration drift when it is detected.

For this scenario, consider three CSRv routers running Cisco IOS XE. All of them are BGP neighbors of each other, and each advertises three networks.


Let's check the running configuration and operational state data related to BGP.


Let's check the CSRv1 node and execute the command show running-config | section bgp. As you can see, it has two neighbors configured, both with the same remote AS, so they are iBGP neighbors. The neighbors are activated, and soft reconfiguration inbound is enabled on them. This node also advertises three networks.

Now let's execute the command show bgp summary.


The above screenshot shows that the neighbor relationships with the other two nodes are established and that the current node is receiving three prefixes from each of the other two nodes.

Now let's validate this using the routing table entries by executing the command show ip route bgp.


The above screenshot shows the routing table entries from node 1. As you can see, this node is aware of six routes, with the next hops being the respective BGP neighbors advertising them.

Similarly we have configured CSRv2 and CSRv3.

Now let's check the playbooks which we are using in this example with detailed steps.

Check out this code if you want to learn more details.

Playbooks are divided into two parts:

  • Gather facts and store them in a YAML file as the source of truth (SOT)
  • Validate structured data against the SOT and rectify drift if it is detected

 

1. Gather facts and store them in a YAML file as the SOT

Let's check the content of facts.yaml.


In the first task, we gather bgp_global and bgp_address_family facts from the target devices. In the second task, we store them in a flat file under the host_vars directory. These files will act as the source of truth (SOT) for the BGP configuration on the target devices.
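The screenshot is not reproduced here; a minimal sketch of what facts.yaml could look like for Cisco IOS XE targets follows, where the host group, module choice, and file layout are assumptions:

  - name: Gather BGP facts and store them as the source of truth
    hosts: routers
    gather_facts: false
    tasks:
      - name: Gather bgp_global and bgp_address_family resource facts
        cisco.ios.ios_facts:
          gather_network_resources:
            - bgp_global
            - bgp_address_family

      # ansible_network_resources holds the resource facts gathered above
      - name: Store the facts in a flat file under host_vars
        ansible.builtin.copy:
          content: "{{ ansible_network_resources | to_nice_yaml }}"
          dest: "host_vars/{{ inventory_hostname }}.yaml"
        delegate_to: localhost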

Let's run the above playbook with the command ansible-navigator run playbooks/facts.yaml. (See the ansible-navigator docs for more details.)

How does this data look after execution of the playbook? Let's check playbooks/host_vars/csrv-1.yaml.


 

2. Validate structured data against SOT and rectify drift if it is detected

In this step we will inspect BGP operational state data for all the nodes in our topology and then determine if they are running as expected or if there is any configuration drift.

Now let's see the playbooks/verify.yaml playbook which will validate and rectify drift if it is present.


In the first task, we use the ansible.utils.cli_parse plugin to execute the show ip route bgp command on the target device and then pass the output of this command to the pyats parser.

The pyats parser then converts the output to structured data which is stored in the result variable.

In the next task, we pass the value of the result variable, along with a predefined schema, to the ansible.utils.validate plugin. The plugin then compares the data against the schema (the criteria) using the jsonschema engine. Each node has a schema file that defines the prefix ranges it receives from the other two neighbors.
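The playbook screenshot is not reproduced here; a hedged sketch of those two tasks could look like the following, where the schema file path is an assumption:

  - name: Run the show command and parse the output with pyATS
    ansible.utils.cli_parse:
      command: show ip route bgp
      parser:
        name: ansible.netcommon.pyats
      set_fact: result

  - name: Validate the parsed data against this node's schema
    ansible.utils.validate:
      data: "{{ result }}"
      criteria:
        - "{{ lookup('file', 'schemas/' + inventory_hostname + '.json') | from_json }}"
      engine: ansible.utils.jsonschema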

As we saw from the topology and the CLI, these nodes are supposed to receive six routes in total (three from each of the neighbor nodes). These prefixes are represented as patterns in the schema, along with other properties like source_protocol, route_preference, metric, and active state.

The schema also sets additionalProperties to false and defines the minimum and maximum number of properties as six. This ensures that validating against the schema always tells us whether the devices are receiving exactly the set of routes they are supposed to receive.

Following is an example of a schema file for node CSRv1.

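The original screenshot of the schema is not reproduced here; the sketch below is a simplified reconstruction based on the constraints just described, and the exact prefixes, key names, and values are assumptions:

{
    "type": "object",
    "patternProperties": {
        "^10\\.(2|3)\\.[1-3]\\.0/24$": {
            "type": "object",
            "properties": {
                "source_protocol": { "const": "bgp" },
                "route_preference": { "const": 200 },
                "metric": { "const": 0 },
                "active": { "const": true }
            }
        }
    },
    "additionalProperties": false,
    "minProperties": 6,
    "maxProperties": 6
}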

Let's check the remaining tasks of verify.yaml.

If the schema validation in the second task fails, the playbook enters the rescue section. This is where we use the BGP resource modules to enforce the SOT that we previously saved in YAML files onto the target devices. The end result is the remediation of any configuration drift that caused the operational state validation to fail.
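A hedged sketch of that structure follows: the validate task sits inside a block, and the rescue section re-applies the stored facts through the BGP resource modules. The module names assume Cisco IOS XE targets, and the bgp_global and bgp_address_family variables are assumed to come from the host_vars files written in step 1:

  - name: Validate operational state, remediate on failure
    block:
      - name: Validate the parsed route data against this node's schema
        ansible.utils.validate:
          data: "{{ result }}"
          criteria:
            - "{{ lookup('file', 'schemas/' + inventory_hostname + '.json') | from_json }}"
          engine: ansible.utils.jsonschema
    rescue:
      - name: Configure BGP global from the SOT
        cisco.ios.ios_bgp_global:
          config: "{{ bgp_global }}"
          state: merged

      - name: Configure BGP address family from the SOT
        cisco.ios.ios_bgp_address_family:
          config: "{{ bgp_address_family }}"
          state: merged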

If we execute the verify.yaml playbook with the ansible-navigator run playbooks/verify.yaml command, since we have not yet made changes to any of the target devices, we see that they are working as expected and schema validation passes. (See the ansible-navigator docs for more details.)


Let's manually introduce erroneous changes on all devices, then run the same playbook again and see how it behaves. To introduce the erroneous changes, let's remove the routes the nodes are advertising.


Now we have made changes in all the routers. Let's run the playbook again with ansible-navigator run playbooks/verify.yaml.

This time the schema validation fails, the remediation tasks are executed, and they add back the missing prefixes on all three nodes. Let's take a detailed look at this.


The first task, as usual, fetches the output of the show command and converts it to structured data.

The second task fails because the schema validation fails with multiple errors: the data doesn't match the constraints defined in the schema. This causes the remediation tasks to execute one by one.

After the remediation tasks are complete, the task that configures the BGP global attributes shows no changes, because we did not make any changes to them.

The second remediation task is where the BGP address family module detects the drift and reconfigures the missing prefixes.

As we can see in the commands sent to all the target devices, the playbook adds back the routes that were deleted.

If we run this playbook once again, it will be idempotent and report no changes, thereby indicating that everything is working as expected.

In a production environment, this playbook can be triggered by external events or scheduled as a periodic job in Red Hat Ansible Automation Platform's Automation controller to ensure compliance with the expected operational state.

 

Takeaways & Next Steps

As shown above, the ansible.utils collection: 

  • Makes operational state assessment easier, complementing Ansible Automation Platform's configuration management capabilities.
  • Acts as a single entry point for gathering the operational state of the entire inventory.
  • Provides a standardized way to define and validate the operational criteria as a structured data model.
  • Adds the steps for operational state assessment as a workflow template in Automation controller, which can trigger other events, like running a playbook for automated remediation or reporting to an external tool.

To learn more about the Red Hat Ansible Automation Platform and network automation, you can check out these resources:


Thursday, January 13, 2022

How to get access to download older Oracle versions?

Via Upgrade your Database – NOW! by Mike.Dietrich

Today a friend asked me whether I know where to download Oracle 10g. Not a typo where I mixed 9 and 0 on my keyboard. So what if … How to get access to download older Oracle versions?

What's on oracle.com and eDelivery

In the Software Delivery Cloud (formerly known as eDelivery) you will find the following base releases of the Oracle database:


And the other source for downloads is oracle.com. But there, you will currently be able to download only Oracle Database 19c and Oracle Database 21c.

But there is no sight …

The post How to get access to download older Oracle versions? appeared first on Upgrade your Database - NOW!.


Wednesday, January 12, 2022

Inoreader Tutorial: This is how to take back control of your newsfeed!

Via Inoreader blog by Andrey Lyubenov

Congratulations! You are all set to take back control of your newsfeed with Inoreader! Thank you for being with us.

Consuming quality content is getting more complicated these days – the information is fragmented, and there are many publishers, platforms, and content creators. 

With Inoreader, you will be in a comfortable position: you will no longer have to waste time browsing and searching for content – now, the content comes to you in your Inoreader account.

Having your favorite sources organized in feeds within Inoreader is excellent: you'll no longer have to browse through countless websites, blogs, or research journals.

So, it is time to start. Go through our onboarding series and unleash the full power of Inoreader!

First Steps – how to start

Organize and customize feeds  

Discover relevant content with Inoreader

Building reading habits

The post Inoreader Tutorial: This is how to take back control of your newsfeed! appeared first on Inoreader blog.

New article matched for in your Blogger! rule

Testing Wide Production 2020

Unexpected Grants To Public

Via OraFAQ Forum - RDF feed by Gogetter
Hi all,

For audit reasons, we are not allowed to grant rights to PUBLIC. But for a few days now, a privilege keeps being granted automatically.
     SELECT *
       FROM dba_tab_privs
      WHERE grantee = 'PUBLIC'
        AND owner = 'XXXXXXXXXX';

     GRANTEE    OWNER      TABLE_NAME                     GRANTOR    PRIVILEGE  GRA HIE COM TYPE  INH
     ---------- ---------- ------------------------------ ---------- ---------- --- --- --- ----- ---
     PUBLIC     XXXXXXXXXX ST00001VQR0xGSIqbgUwYKF6yqCA=  XXXXXXXXXX EXECUTE    NO  NO  NO  TYPE  NO
Is there a way to find the procedure or package which causes the grant?

If I revoke the privilege, it is granted again in the following days with another "table" name.

thanks for your help
regards
Rudi
