Okay, here's a comprehensive Ansible tutorial, designed to take you from beginner to expert. This will be a lengthy document, so I'll break it down into chapters. We'll cover syntax, examples, explanations, scenarios, and practical use cases. I'll also highlight some of the quirks and common pitfalls.
Ansible: From Beginner to Expert
Table of Contents
- 1: Introduction to Ansible
  - What is Ansible?
  - Why Ansible? (Benefits and Use Cases)
  - Ansible Architecture
  - Key Concepts: Control Node, Managed Nodes, Inventory, Playbooks, Modules, Tasks, Roles
  - Idempotency
  - Ansible vs. Other Configuration Management Tools
- 2: Setting Up Your Ansible Environment
  - Installing Ansible (RHEL, CentOS, Fedora, Ubuntu, Debian, macOS)
  - Configuring the Control Node
  - Creating and Managing the Inventory File
  - Configuring SSH Access to Managed Nodes
  - Testing Connectivity
- 3: Ansible Playbooks: The Heart of Automation
  - Playbook Structure: Plays, Tasks, Handlers
  - YAML Syntax Essentials
  - Basic Playbook Examples: Installing Packages, Starting Services, Creating Files
  - Using Modules: yum, apt, service, file, copy, template, user, group, cron, and more
  - Understanding Module Arguments and Return Values
  - Running Playbooks: the ansible-playbook command
  - Controlling Playbook Execution: --limit, --start-at-task, --step, --check, --diff
- 4: Variables: Making Playbooks Dynamic
  - Variable Precedence
  - Defining Variables: In Inventory Files, In Playbooks, In Included Files (vars_files), Using vars_prompt
  - Magic Variables (Facts): ansible_hostname, ansible_os_family, ansible_default_ipv4, etc.
  - Registering Variables: Capturing Output from Tasks
  - Using Variables in Templates
- 5: Conditionals and Loops: Adding Logic to Your Playbooks
  - Conditionals: the when statement (Basic Conditionals, Using Facts in Conditionals, Multiple Conditions)
  - Loops: the loop statement (Basic Loops, Looping Over Lists and Dictionaries, Using with_items and with_dict (Legacy))
  - Combining Conditionals and Loops
- 6: Templates: Dynamic Configuration Files
  - Jinja2 Templating Engine
  - Template Syntax: Variables, Control Structures, Filters
  - Creating Templates
  - Using the template Module
  - Common Template Filters: default, upper, lower, strftime, to_json, to_nice_json
  - Template Examples: Configuring Web Servers, Databases, and More
- 7: Roles: Organizing and Reusing Your Automation
  - Role Directory Structure: tasks, handlers, vars, defaults, templates, files, meta
  - Creating Roles
  - Using Roles in Playbooks
  - Role Dependencies
  - Sharing Roles with Ansible Galaxy
- 8: Handlers: Responding to Changes
  - What are Handlers?
  - Defining Handlers
  - Notifying Handlers
  - Handler Execution Order
  - Common Handler Use Cases: Restarting Services, Reloading Configuration
- 9: Ansible Vault: Securing Sensitive Data
  - What is Ansible Vault?
  - Creating and Encrypting Vault Files
  - Using Vault Files in Playbooks
  - Providing the Vault Password: --ask-vault-pass, --vault-password-file
  - Vault Commands: create, encrypt, decrypt, rekey, edit, view
- 10: Advanced Ansible Techniques
  - Delegation: delegate_to
  - Rolling Updates: serial
  - Error Handling: ignore_errors, block, rescue, always
  - Asynchronous Tasks: async, poll
  - Using Dynamic Inventory
  - Working with Cloud Providers (AWS, Azure, GCP)
  - Using Ansible Collections
- 11: Practical Use Cases and Scenarios
  - Beginner: Basic System Updates, User and Group Management, File Management
  - Intermediate: Web Server Deployment (LAMP Stack), Database Configuration, Monitoring Agent Installation
  - Advanced: Continuous Integration/Continuous Deployment (CI/CD) Pipeline, Orchestrating Microservices, Automating Cloud Infrastructure
  - Expert: Building Custom Modules, Developing Ansible Collections, Contributing to the Ansible Community
- 12: Troubleshooting Ansible
  - Common Errors and How to Fix Them
  - Debugging Playbooks: --verbose, --step, --check, --diff
  - Using Logs
  - Best Practices for Writing Robust Playbooks
- 13: Ansible Best Practices
  - Idempotency
  - Using Roles for Reusability
  - Using Variables Effectively
  - Securing Sensitive Data with Vault
  - Testing Your Playbooks
  - Version Control
- Appendix: Ansible Module Reference (Common Modules)
Let's start with 1: Introduction to Ansible.
1: Introduction to Ansible
What is Ansible?
Ansible is an open-source automation tool used for configuration management, application deployment, task automation, and IT orchestration. It simplifies complex tasks by using a human-readable language (YAML) to define automation processes.
Why Ansible? (Benefits and Use Cases)
- Simple: Uses YAML, which is easy to learn and understand.
- Agentless: Doesn't require any software to be installed on managed nodes. It uses SSH or WinRM for communication.
- Powerful: Can automate a wide range of tasks, from basic system administration to complex application deployments.
- Flexible: Can be used to manage a variety of systems, including Linux, Windows, and network devices.
- Idempotent: Ensures that tasks are only executed if necessary, preventing unintended changes.
- Use Cases:
- Configuration Management: Ensuring systems are in a desired state.
- Application Deployment: Deploying applications to multiple servers.
- Task Automation: Automating repetitive tasks, such as user creation or log rotation.
- Orchestration: Coordinating complex workflows across multiple systems.
- Cloud Provisioning: Automating the creation and management of cloud resources.
Ansible Architecture
- Control Node: The machine where Ansible is installed and from which playbooks are executed.
- Managed Nodes: The systems that are being managed by Ansible.
- Inventory: A list of managed nodes, organized into groups.
- Playbooks: YAML files that define the automation tasks to be performed.
- Modules: Reusable units of code that perform specific tasks.
- Tasks: Individual steps within a playbook that call modules.
- Plugins: Extensions that enhance Ansible's functionality (e.g., connection plugins, lookup plugins).
Key Concepts:
- Control Node: The central machine where Ansible is installed and from which you run your playbooks. Typically a Linux machine, but can also be macOS.
- Managed Nodes: The servers, network devices, or other systems that you are configuring with Ansible. These are the targets of your automation.
- Inventory: A file (or dynamic script) that lists your managed nodes and organizes them into groups. This allows you to target specific sets of machines with your playbooks.
  - Example:
    ```ini
    # /home/admin/inventory
    [webservers]
    web1.example.com
    web2.example.com

    [databases]
    db1.example.com

    [all:vars]
    ansible_user=admin
    ansible_ssh_private_key_file=/home/admin/.ssh/id_rsa
    ```
- Playbooks: YAML files that define the steps Ansible will take to automate your tasks. Playbooks are the core of Ansible automation.
  - Example:
    ```yaml
    # deploy_web.yml
    ---
    - name: Deploy a website
      hosts: webservers
      become: true  # Run tasks with sudo privileges
      tasks:
        - name: Install nginx
          yum:
            name: nginx
            state: present
        - name: Start nginx
          service:
            name: nginx
            state: started
            enabled: true
    ```
- Modules: Reusable, self-contained units of code that perform specific tasks. Ansible ships with hundreds of modules for managing everything from files and users to packages and services. You can also write your own custom modules.
  - Examples: yum, apt, service, file, copy, template, user, group, cron
- Tasks: Individual steps within a playbook. Each task calls a specific Ansible module to perform an action.
- Roles: A way to organize and reuse playbooks. Roles allow you to break down complex automation tasks into smaller, more manageable units. A role typically includes tasks, handlers, variables, templates, and files.
Idempotency:
A crucial concept in Ansible. Idempotency means that running a playbook multiple times will have the same result as running it once. Ansible achieves this by checking the current state of the system before making any changes. If the desired state is already achieved, Ansible will skip the task. This prevents unintended changes and ensures consistency.
- Example: If you use the yum module to install a package, Ansible will only install the package if it's not already installed.
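To see this in practice, run the playbook below twice: the first run reports the task as changed, the second reports ok because the desired state already holds (the package and host group are just placeholders):

```yaml
---
- name: Demonstrate idempotency
  hosts: webservers
  become: true
  tasks:
    - name: Ensure nginx is installed
      yum:
        name: nginx
        state: present   # already satisfied on the second run, so nothing changes
```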
Ansible vs. Other Configuration Management Tools:
- Ansible: Agentless, simple, YAML-based, push-based.
- Chef: Agent-based, Ruby-based, more complex.
- Puppet: Agent-based, DSL-based, more complex.
- SaltStack: Agent-based (can be agentless), Python-based, fast.
Ansible's agentless architecture and simplicity make it a popular choice for many organizations.
2: Setting Up Your Ansible Environment
Installing Ansible
The installation process varies depending on your operating system. Here are instructions for common platforms:
RHEL, CentOS, Fedora:
```bash
sudo dnf install ansible -y   # Recommended for RHEL 8 and later, Fedora
# OR
sudo yum install ansible -y   # For older CentOS/RHEL versions
```
Ubuntu, Debian:
```bash
sudo apt update
sudo apt install software-properties-common   # If you don't have add-apt-repository
sudo add-apt-repository --yes ppa:ansible/ansible
sudo apt update
sudo apt install ansible -y
```
macOS:
First, install Homebrew:
```bash
/bin/bash -c "$(curl -fsSL https://raw.githubusercontent.com/Homebrew/install/HEAD/install.sh)"
```
Then, install Ansible:
```bash
brew install ansible
```
Python's pip (generally not recommended for system-level Ansible):
```bash
pip install ansible
```
Note: Using pip might lead to dependency conflicts with system packages. It's generally better to use your system's package manager. If you do use pip, it's highly recommended to use a virtual environment.
Configuring the Control Node
After installation, you might want to configure some global settings. This is done in the ansible.cfg file. The default location is /etc/ansible/ansible.cfg, but you can also create a local ansible.cfg in your project directory, which will override the global settings.

Common settings to configure:
- inventory: Specifies the location of your inventory file.
- roles_path: Specifies the directory where Ansible will look for roles.
- collections_paths: Specifies the directory where Ansible will look for collections.
- private_key_file: Specifies the SSH private key to use for authentication.
- host_key_checking: Set to False to disable host key checking (not recommended for production).

Example:
```ini
# /etc/ansible/ansible.cfg
[defaults]
inventory = /home/admin/inventory
roles_path = /home/admin/roles
collections_paths = /opt/ansible/collections
private_key_file = /home/admin/.ssh/id_rsa
host_key_checking = False   ; Only for testing!
```
Creating and Managing the Inventory File
The inventory file lists your managed nodes and organizes them into groups. It can be a simple text file in INI format or a YAML file.
INI Format:
```ini
# /home/admin/inventory
[webservers]
web1.example.com
web2.example.com

[databases]
db1.example.com

[monitoring]
mon1.example.com

[backups]
backup.example.com

[all:vars]
ansible_user=admin
ansible_ssh_private_key_file=/home/admin/.ssh/id_rsa
```
YAML Format:
```yaml
# /home/admin/inventory.yml
all:
  hosts:
    web1.example.com:
    web2.example.com:
    db1.example.com:
    mon1.example.com:
    backup.example.com:
  vars:
    ansible_user: admin
    ansible_ssh_private_key_file: /home/admin/.ssh/id_rsa
webservers:
  hosts:
    web1.example.com:
    web2.example.com:
databases:
  hosts:
    db1.example.com:
monitoring:
  hosts:
    mon1.example.com:
backups:
  hosts:
    backup.example.com:
```
Inventory Variables:
You can define variables at the host or group level in the inventory file. These variables will be available in your playbooks.
- ansible_host: Specifies the IP address or hostname of the managed node.
- ansible_user: Specifies the username to use for SSH authentication.
- ansible_password: Specifies the password to use for SSH authentication (not recommended).
- ansible_ssh_private_key_file: Specifies the path to the SSH private key file.
- ansible_connection: Specifies the connection type (e.g., ssh, winrm).
- ansible_port: Specifies the SSH port.
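For example, a single inventory line can carry several of these variables at once; the addresses, port, and user below are placeholders:

```ini
[webservers]
web1.example.com ansible_host=10.0.0.11 ansible_port=2222
web2.example.com ansible_host=10.0.0.12 ansible_user=deploy ansible_connection=ssh
```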
Configuring SSH Access to Managed Nodes
Ansible typically uses SSH for communication with managed nodes. You need to ensure that you have SSH access configured correctly. The most common method is to use SSH keys.
Generate an SSH key pair on the control node:
```bash
ssh-keygen -t rsa -b 4096 -f /home/admin/.ssh/id_rsa -N ""
```
Copy the public key to the authorized_keys file on each managed node:
```bash
ssh-copy-id -i /home/admin/.ssh/id_rsa.pub admin@web1.example.com
ssh-copy-id -i /home/admin/.ssh/id_rsa.pub admin@web2.example.com
ssh-copy-id -i /home/admin/.ssh/id_rsa.pub admin@db1.example.com
ssh-copy-id -i /home/admin/.ssh/id_rsa.pub admin@mon1.example.com
ssh-copy-id -i /home/admin/.ssh/id_rsa.pub admin@backup.example.com
```
Replace admin with the appropriate username on each managed node.

Alternatively, use Ansible itself to copy the key (bootstrapping):
```yaml
# bootstrap.yml (Run this *before* other playbooks)
---
- name: Copy SSH key to managed nodes
  hosts: all
  become: true
  tasks:
    - name: Copy public key
      authorized_key:
        user: admin   # Replace with the actual user
        key: "{{ lookup('file', '/home/admin/.ssh/id_rsa.pub') }}"
```
Run this playbook once to set up SSH keys. You'll likely need to provide a password the first time.
Testing Connectivity
After setting up your environment, it's essential to test connectivity to your managed nodes. You can use the ansible command with the ping module:
```bash
ansible all -m ping
```
This command will attempt to connect to all hosts in your inventory and execute the ping module. If successful, you should see output similar to this:
```
web1.example.com | SUCCESS => {
    "changed": false,
    "ping": "pong"
}
web2.example.com | SUCCESS => {
    "changed": false,
    "ping": "pong"
}
db1.example.com | SUCCESS => {
    "changed": false,
    "ping": "pong"
}
mon1.example.com | SUCCESS => {
    "changed": false,
    "ping": "pong"
}
backup.example.com | SUCCESS => {
    "changed": false,
    "ping": "pong"
}
```
If you encounter errors, double-check your inventory file, SSH configuration, and network connectivity.
Great! Let's dive into 3: Ansible Playbooks: The Heart of Automation.
3: Ansible Playbooks: The Heart of Automation
Playbook Structure: Plays, Tasks, Handlers
Ansible playbooks are YAML files that define the automation tasks you want to perform. They are structured into plays, tasks, and handlers.
- Play: A play defines the target hosts and the tasks to be executed on those hosts. A playbook can contain multiple plays.
- Task: A task is a single action that you want to perform on the target hosts. Each task calls an Ansible module.
- Handler: A handler is a special type of task that is only executed when notified by another task. Handlers are typically used to restart services or reload configuration files.
Here's a basic example:
```yaml
---
- name: Update web servers
  hosts: webservers
  become: true   # Run tasks with sudo privileges
  tasks:
    - name: Update apt cache
      apt:
        update_cache: yes
      when: ansible_os_family == "Debian"

    - name: Install nginx
      yum:
        name: nginx
        state: present
      when: ansible_os_family == "RedHat"

    - name: Start nginx
      service:
        name: nginx
        state: started
        enabled: true
```
YAML Syntax Essentials
YAML (YAML Ain't Markup Language) is a human-readable data serialization format. Ansible playbooks are written in YAML. Here are some essential YAML syntax rules:
- Indentation: YAML uses indentation to define the structure of the document. Use spaces, not tabs! Consistent indentation is crucial. Typically, two spaces are used for indentation.
- Lists: Lists are denoted by a hyphen (-).
- Dictionaries (Mappings): Dictionaries are key-value pairs.
- Comments: Comments start with a hash symbol (#).
- Booleans: Booleans can be true, false, yes, no, on, off.
- Strings: Strings can be enclosed in single or double quotes, or not quoted at all (unless they contain special characters).
- Multiple Documents: A YAML file can contain multiple documents, separated by ---.
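Here's a short snippet that exercises each of these rules (the keys and values are arbitrary):

```yaml
# A comment
---
packages:              # a list
  - nginx
  - php
server:                # a dictionary (mapping)
  name: web1
  port: 8080
  enabled: true        # a boolean
  greeting: "hello"    # a quoted string
```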
Basic Playbook Examples: Installing Packages, Starting Services, Creating Files
Let's look at some basic playbook examples:
Installing Packages:
```yaml
---
- name: Install packages
  hosts: all
  become: true
  tasks:
    - name: Install httpd (RedHat)
      yum:
        name: httpd
        state: present
      when: ansible_os_family == "RedHat"

    - name: Install apache2 (Debian)
      apt:
        name: apache2
        state: present
      when: ansible_os_family == "Debian"
```
Starting Services:
```yaml
---
- name: Start services
  hosts: all
  become: true
  tasks:
    - name: Start httpd (RedHat)
      service:
        name: httpd
        state: started
        enabled: true
      when: ansible_os_family == "RedHat"

    - name: Start apache2 (Debian)
      service:
        name: apache2
        state: started
        enabled: true
      when: ansible_os_family == "Debian"
```
Creating Files:
```yaml
---
- name: Create files
  hosts: all
  become: true
  tasks:
    - name: Create /tmp/hello.txt
      file:
        path: /tmp/hello.txt
        state: touch
        mode: "0644"   # quote modes so YAML doesn't mangle the leading zero
```
Using Modules: yum, apt, service, file, copy, template, user, group, cron, and more

Ansible modules are the building blocks of your playbooks. Each module performs a specific task. Here are some commonly used modules:
- yum: Manages packages on Red Hat-based systems.
- apt: Manages packages on Debian-based systems.
- service: Manages services.
- file: Manages files and directories.
- copy: Copies files from the control node to managed nodes.
- template: Creates files from Jinja2 templates.
- user: Manages user accounts.
- group: Manages groups.
- cron: Manages cron jobs.
Refer to the Ansible documentation for a complete list of modules and their options: https://docs.ansible.com/
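Modules aren't limited to playbooks; you can also invoke them ad hoc with the ansible command. A couple of quick examples (the group and package names are placeholders):

```bash
# Install nginx on every host in the webservers group, with privilege escalation
ansible webservers -m yum -a "name=nginx state=present" --become

# Create a directory on all hosts using the file module
ansible all -m file -a "path=/tmp/demo state=directory mode=0755" --become
```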
Understanding Module Arguments and Return Values
Each module has its own set of arguments that you can use to configure its behavior. The arguments are specified as key-value pairs within the module definition.
Example:
```yaml
- name: Create a user
  user:
    name: testuser
    password: "$6$rounds=656000$..."   # Hashed password
    groups: wheel
    state: present
```
Modules also return values, which you can use in subsequent tasks. The return values are stored in a dictionary. You can access the return values using the register keyword.

Example:
```yaml
- name: Run a command
  command: /usr/bin/uptime
  register: uptime_output

- name: Print the output
  debug:
    msg: "Uptime: {{ uptime_output.stdout }}"
```
Running Playbooks: the ansible-playbook command

To run a playbook, use the ansible-playbook command:
```bash
ansible-playbook my_playbook.yml
```
You can specify the inventory file using the -i option:
```bash
ansible-playbook -i /home/admin/inventory my_playbook.yml
```
If you need to provide a vault password, use the --ask-vault-pass option or the --vault-password-file option:
```bash
ansible-playbook my_playbook.yml --ask-vault-pass
ansible-playbook my_playbook.yml --vault-password-file=/path/to/vault_password.txt
```
Controlling Playbook Execution: --limit, --start-at-task, --step, --check, --diff

Ansible provides several options to control the execution of your playbooks:
- --limit: Limits the playbook execution to specific hosts or groups.
  ansible-playbook my_playbook.yml --limit webservers
- --start-at-task: Starts the playbook execution at a specific task.
  ansible-playbook my_playbook.yml --start-at-task "Install nginx"
- --step: Executes the playbook in step-by-step mode, prompting you to confirm each task before it is executed.
  ansible-playbook my_playbook.yml --step
- --check: Runs the playbook in "check mode," which simulates the changes that would be made without actually making them. This is useful for testing your playbooks.
  ansible-playbook my_playbook.yml --check
- --diff: Shows the differences that would be made by the playbook. This is useful for reviewing changes before applying them. Works best with modules that support diffs (like template).
  ansible-playbook my_playbook.yml --diff
Alright, let's proceed to 4: Variables: Making Playbooks Dynamic.
4: Variables: Making Playbooks Dynamic
Variable Precedence
Ansible has a well-defined order of precedence for variables. When a variable is defined in multiple places, the value from the source with higher precedence will be used. Here's the order from lowest to highest precedence:
1. Role defaults: variables defined in a role's defaults/main.yml file.
2. Inventory group vars: variables defined for a group in the inventory file (or in group_vars files).
3. Inventory host vars: variables defined for a host in the inventory file (or in host_vars files).
4. Host facts: variables gathered from the managed node (e.g., ansible_os_family, ansible_hostname).
5. Play vars: variables defined directly in the playbook using the vars keyword.
6. Play vars_prompt: variables defined using the vars_prompt keyword (interactive prompts).
7. Play vars_files: variables defined in files included using the vars_files keyword.
8. Role vars: variables defined in a role's vars/main.yml file.
9. Block vars (only for tasks inside the block): variables defined within a block using the vars keyword.
10. Task vars (only for the task): variables defined directly within a task using the vars keyword.
11. include_vars: variables loaded using the include_vars task.
12. set_fact and registered vars: variables set with the set_fact module or created with the register keyword.
13. Role and include params: variables passed to a role or include when it is used in a playbook.
14. Extra vars (the -e command line option): variables passed on the command line; these always win.

Lookup plugins (e.g., file, env) sit outside this chain: they are evaluated whenever the expression that uses them runs.
Understanding variable precedence is crucial for avoiding conflicts and ensuring that your playbooks behave as expected.
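As a quick illustration of the top of that chain, the play var below sets a value, and an extra var passed with -e overrides it (http_port is just a placeholder variable):

```yaml
# precedence_demo.yml
---
- name: Show variable precedence
  hosts: all
  vars:
    http_port: 8080          # play var
  tasks:
    - name: Print the effective value
      debug:
        msg: "http_port is {{ http_port }}"
```
```bash
ansible-playbook precedence_demo.yml                      # prints 8080
ansible-playbook precedence_demo.yml -e "http_port=9090"  # extra var wins: prints 9090
```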
Defining Variables:
Let's explore the different ways to define variables in Ansible:
In Inventory Files:
You can define variables at the host or group level in the inventory file.
# /home/admin/inventory [webservers] web1.example.com http_port=80 web2.example.com http_port=8080 [databases] db1.example.com db_name=myapp [all:vars] ansible_user=admin ansible_ssh_private_key_file=/home/admin/.ssh/id_rsaIn this example,
http_portis defined forweb1.example.comandweb2.example.com, anddb_nameis defined fordb1.example.com.ansible_userandansible_ssh_private_key_fileare defined for all hosts.In Playbooks:
You can define variables directly in the playbook using the
varskeyword.--- - name: Deploy website hosts: webservers become: true vars: http_port: 80 document_root: /var/www/html tasks: - name: Install nginx yum: name: nginx state: present - name: Configure nginx template: src: nginx.conf.j2 dest: /etc/nginx/nginx.confIn this example,
http_portanddocument_rootare defined for thewebserversgroup.In Included Files:
You can define variables in separate files and include them in your playbook using the
vars_fileskeyword.--- - name: Deploy website hosts: webservers become: true vars_files: - vars/web_vars.yml tasks: - name: Install nginx yum: name: nginx state: present - name: Configure nginx template: src: nginx.conf.j2 dest: /etc/nginx/nginx.conf# vars/web_vars.yml http_port: 80 document_root: /var/www/htmlThis is useful for organizing your variables and keeping your playbooks clean.
Using
vars_prompt:The
vars_promptkeyword allows you to prompt the user for input when the playbook is executed. This is useful for sensitive information that you don't want to store in your playbook or inventory file.--- - name: Configure database hosts: databases become: true vars_prompt: - name: db_password prompt: "Enter the database password" private: true # Don't display the password on the screen tasks: - name: Configure database connection template: src: db.conf.j2 dest: /etc/myapp/db.confMagic Variables (Facts):
Ansible gathers information about the managed nodes and stores it in variables called "facts." These facts are automatically available in your playbooks.
Some commonly used facts:
ansible_hostname: The hostname of the managed node.ansible_os_family: The operating system family (e.g., "RedHat", "Debian").ansible_distribution: The operating system distribution (e.g., "CentOS", "Ubuntu").ansible_distribution_version: The operating system distribution version.ansible_default_ipv4: A dictionary containing information about the default IPv4 address.ansible_architecture: The system architecture (e.g., "x86_64").ansible_processor_vcpus: The number of virtual CPUs.ansible_memtotal_mb: The total memory in MB.
You can access these facts in your playbooks using the
{{ }}syntax.Example:
--- - name: Print system information hosts: all tasks: - name: Print hostname debug: msg: "Hostname: {{ ansible_hostname }}" - name: Print OS family debug: msg: "OS Family: {{ ansible_os_family }}" - name: Print IP address debug: msg: "IP Address: {{ ansible_default_ipv4.address }}"Registering Variables: Capturing Output from Tasks
The
registerkeyword allows you to capture the output of a task and store it in a variable. This is useful for tasks that return information that you need to use in subsequent tasks.Example:
--- - name: Run a command hosts: all become: true tasks: - name: Get disk usage command: df -h / register: disk_usage - name: Print disk usage debug: msg: "Disk Usage: {{ disk_usage.stdout }}"In this example, the output of the
df -h /command is stored in thedisk_usagevariable. You can then access the standard output usingdisk_usage.stdout, the standard error usingdisk_usage.stderr, and the return code usingdisk_usage.rc.Using Variables in Templates
Variables are commonly used in Jinja2 templates to create dynamic configuration files. You can access variables in templates using the
{{ }}syntax.Example:
# nginx.conf.j2 server { listen {{ http_port }}; root {{ document_root }}; index index.html; }--- - name: Configure nginx hosts: webservers become: true vars: http_port: 80 document_root: /var/www/html tasks: - name: Create nginx configuration file template: src: nginx.conf.j2 dest: /etc/nginx/nginx.conf
Excellent! Let's proceed to 5: Conditionals and Loops: Adding Logic to Your Playbooks.
5: Conditionals and Loops: Adding Logic to Your Playbooks
Conditionals:
whenstatementThe
whenstatement allows you to conditionally execute tasks based on certain conditions. The condition must evaluate to a boolean value (true or false).Basic Conditionals:
--- - name: Install nginx hosts: webservers become: true tasks: - name: Install nginx (RedHat) yum: name: nginx state: present when: ansible_os_family == "RedHat" - name: Install nginx (Debian) apt: name: nginx state: present when: ansible_os_family == "Debian"In this example, the
yumtask will only be executed on Red Hat-based systems, and theapttask will only be executed on Debian-based systems.Using Facts in Conditionals:
You can use any fact in your conditionals.
--- - name: Configure firewall hosts: all become: true tasks: - name: Start firewalld service: name: firewalld state: started enabled: true when: ansible_os_family == "RedHat" and ansible_distribution_version >= "7"This example starts the
firewalldservice only on Red Hat-based systems with a version greater than or equal to 7.Multiple Conditions:
You can combine multiple conditions using logical operators:
and: Both conditions must be true.or: At least one condition must be true.not: Negates the condition.
--- - name: Configure web server hosts: webservers become: true tasks: - name: Install php yum: name: php state: present when: ansible_os_family == "RedHat" and ansible_distribution_version >= "8" or ansible_distribution == "Fedora"This example installs PHP on Red Hat-based systems with a version greater than or equal to 8, or on Fedora systems.
Using Registered Variables in Conditionals:
You can use the results of a registered variable in a
whencondition.--- - name: Check if a file exists hosts: all tasks: - name: Stat a file stat: path: /etc/nginx/nginx.conf register: nginx_config - name: Create a backup of the file copy: src: /etc/nginx/nginx.conf dest: /etc/nginx/nginx.conf.bak when: nginx_config.stat.existsThis example checks if the
/etc/nginx/nginx.conffile exists using thestatmodule. If the file exists (i.e.,nginx_config.stat.existsis true), it creates a backup of the file.
Loops:
loopstatementThe
loopstatement allows you to repeat a task multiple times.Basic Loops:
--- - name: Create users hosts: all become: true tasks: - name: Create user accounts user: name: "{{ item }}" state: present loop: - user1 - user2 - user3This example creates three user accounts:
user1,user2, anduser3. Theitemvariable represents the current element in the loop.Looping Over Lists and Dictionaries:
You can loop over lists of dictionaries.
--- - name: Create users with specific attributes hosts: all become: true tasks: - name: Create user accounts user: name: "{{ item.name }}" state: present groups: "{{ item.groups }}" loop: - name: user1 groups: wheel - name: user2 groups: users - name: user3 groups: audioIn this example, each user is created with specific attributes (name and groups) defined in the list of dictionaries.
Using
with_items,with_dict(Legacy):with_itemsandwith_dictare older loop constructs that are still supported but are generally superseded by theloopkeyword.with_items: Loops over a list.--- - name: Install packages hosts: all become: true tasks: - name: Install packages yum: name: "{{ item }}" state: present with_items: - httpd - php - mariadb-serverwith_dict: Loops over a dictionary.--- - name: Create users hosts: all become: true tasks: - name: Create user accounts user: name: "{{ item.key }}" password: "{{ item.value }}" state: present with_dict: user1: "$6$..." user2: "$6$..." user3: "$6$..."
While these still work,
loopis generally preferred for its clarity and flexibility.Combining Conditionals and Loops:
You can combine conditionals and loops to create more complex automation tasks.
--- - name: Configure web servers hosts: webservers become: true tasks: - name: Install php modules yum: name: "{{ item }}" state: present loop: - php-mysqlnd - php-gd - php-xml when: ansible_os_family == "RedHat" and ansible_distribution_version >= "8"This example installs PHP modules only on Red Hat-based systems with a version greater than or equal to 8.
Another example, using a
blockfor more complex conditional logic:--- - name: Configure web servers hosts: webservers become: true tasks: - name: Configure web server based on OS block: - name: Install Apache on Debian apt: name: apache2 state: present - name: Configure Apache on Debian template: src: apache2.conf.j2 dest: /etc/apache2/apache2.conf when: ansible_os_family == "Debian" rescue: - name: Notify admin about failure mail: to: admin@example.com subject: "Apache configuration failed on Debian" body: "Check the logs on {{ ansible_hostname }}" always: - name: Ensure web server is running service: name: "{{ 'apache2' if ansible_os_family == 'Debian' else 'httpd' }}" state: started enabled: yesThis example uses a
blockwithwhen,rescue, andalwayssections to handle different scenarios based on the operating system.
Okay, let's move on to 6: Templates: Dynamic Configuration Files.
6: Templates: Dynamic Configuration Files
Jinja2 Templating Engine
Jinja2 is a powerful and flexible templating engine for Python. Ansible uses Jinja2 to create dynamic configuration files. Templates are plain text files that contain variables and control structures. When Ansible processes a template, it replaces the variables with their actual values and executes the control structures to generate the final configuration file.
Template Syntax: Variables, Control Structures, Filters
Jinja2 templates use a specific syntax for variables, control structures, and filters:
- Variables: Variables are enclosed in double curly braces
{{ variable_name }}. - Control Structures: Control structures are enclosed in curly braces and percent signs
{% control_structure %}. - Comments: Comments are enclosed in curly braces and hash signs
{# comment #}. - Filters: Filters modify the output of variables. They are applied using the pipe symbol
|.
Here are some common Jinja2 syntax elements:
Variables:
<h1>Welcome to {{ ansible_hostname }}</h1> <p>This server is running {{ ansible_os_family }} {{ ansible_distribution_version }}</p>Control Structures:
ifstatement:{% if ansible_os_family == "RedHat" %} <p>This is a Red Hat-based system.</p> {% elif ansible_os_family == "Debian" %} <p>This is a Debian-based system.</p> {% else %} <p>This is an unknown system.</p> {% endif %}forloop:<ul> {% for item in my_list %} <li>{{ item }}</li> {% endfor %} </ul>
Filters:
default(value): Returns the variable's value if it's defined; otherwise, returns the default value.<h1>Welcome to {{ site_name | default("My Website") }}</h1>upper(): Converts the string to uppercase.<h1>{{ ansible_hostname | upper }}</h1>lower(): Converts the string to lowercase.<h1>{{ ansible_hostname | lower }}</h1>strftime(format): Formats a date and time value.<p>Current time: {{ now | strftime("%Y-%m-%d %H:%M:%S") }}</p>to_json(): Converts a variable to a JSON string.<pre>{{ my_variable | to_json }}</pre>to_nice_json(indent): Converts a variable to a nicely formatted JSON string with the specified indentation.<pre>{{ my_variable | to_nice_json(2) }}</pre>join(separator): Joins a list of strings with the specified separator.<p>Groups: {{ groups | join(", ") }}</p>hash(algorithm): Generates a hash of the variable using the specified algorithm (e.g., "sha256").<p>Password hash: {{ my_password | hash('sha256') }}</p>
Creating Templates
To create a template, simply create a plain text file with the
.j2extension. The file should contain the configuration settings with variables and control structures.Example:
# /templates/nginx.conf.j2 server { listen {{ http_port }}; server_name {{ server_name }}; root {{ document_root }}; index index.html; location / { try_files $uri $uri/ =404; } }Using the
templateModuleThe
templatemodule is used to create files from Jinja2 templates. It takes the following arguments:src: The path to the template file on the control node.dest: The path to the destination file on the managed node.owner: The owner of the file.group: The group of the file.mode: The permissions of the file.
Example:
--- - name: Configure nginx hosts: webservers become: true vars: http_port: 80 server_name: example.com document_root: /var/www/html tasks: - name: Create nginx configuration file template: src: nginx.conf.j2 dest: /etc/nginx/nginx.conf owner: root group: root mode: 0644 notify: restart nginx handlers: - name: restart nginx service: name: nginx state: restartedCommon Template Filters:
default,upper,lower,strftime,to_json,to_nice_jsonWe've already covered these above in the "Template Syntax" section.
Template Examples: Configuring Web Servers, Databases, and More
Let's look at some template examples:
Configuring a Web Server (Nginx):
# /templates/nginx.conf.j2 user nginx; worker_processes auto; error_log /var/log/nginx/error.log; pid /run/nginx.pid; events { worker_connections 1024; } http { log_format main '$remote_addr - $remote_user [$time_local] "$request" ' '$status $body_bytes_sent "$http_referer" ' '"$http_user_agent" "$http_x_forwarded_for"'; access_log /var/log/nginx/access.log main; sendfile on; tcp_nopush on; tcp_nodelay on; keepalive_timeout 65; types_hash_max_size 2048; include /etc/nginx/mime.types; default_type application/octet-stream; server { listen {{ http_port }}; server_name {{ server_name }}; root {{ document_root }}; index index.html index.htm; location / { try_files $uri $uri/ =404; } } }Configuring a Database (MySQL):
# /templates/my.cnf.j2 [mysqld] datadir=/var/lib/mysql socket=/var/lib/mysql/mysql.sock log-error=/var/log/mysqld.log pid-file=/run/mysqld/mysqld.pid user=mysql # Disable networking skip-networking # Disabling symbolic-links is recommended to prevent assorted security risks symbolic-links=0 [mysqld_safe] log-error=/var/log/mysqld.log pid-file=/run/mysqld/mysqld.pidConfiguring a Monitoring Agent (Prometheus Node Exporter):
# /templates/node_exporter.service.j2 [Unit] Description=Prometheus Node Exporter Wants=network-online.target After=network-online.target [Service] User=node_exporter Group=node_exporter Type=simple ExecStart=/usr/local/bin/node_exporter {{ node_exporter_flags }} [Install] WantedBy=multi-user.target
Okay, let's move on to 7: Roles: Organizing and Reusing Your Automation.
7: Roles: Organizing and Reusing Your Automation
Role Directory Structure: tasks, handlers, vars, defaults, templates, files, meta
Roles are a way to organize and reuse your Ansible playbooks. A role is a directory structure with specific subdirectories for tasks, handlers, variables, templates, files, and metadata.
Here's the typical directory structure of an Ansible role:
my_role/ ├── defaults/ │ └── main.yml # Default variables for the role ├── files/ │ └── ... # Static files to be copied to managed nodes ├── handlers/ │ └── main.yml # Handlers for the role ├── meta/ │ └── main.yml # Metadata about the role (author, dependencies, etc.) ├── tasks/ │ └── main.yml # Main tasks for the role ├── templates/ │ └── ... # Jinja2 templates to be used by the role └── vars/ └── main.yml # Variables for the roletasks/main.yml: This file contains the main tasks that the role will perform.handlers/main.yml: This file contains handlers that can be notified by tasks in the role.vars/main.yml: This file contains variables that are specific to the role. These variables have higher precedence than the default variables defined indefaults/main.yml.defaults/main.yml: This file contains default variables for the role. These variables have the lowest precedence.templates/: This directory contains Jinja2 templates that can be used by the role.files/: This directory contains static files that can be copied to the managed nodes.meta/main.yml: This file contains metadata about the role, such as the author, description, and dependencies.
Creating Roles
You can create a role manually by creating the directory structure and the necessary files. Alternatively, you can use the ansible-galaxy command to create a role skeleton:
```bash
ansible-galaxy init my_role
```
This command will create the basic directory structure for the role.
Using Roles in Playbooks
To use a role in a playbook, use the roles keyword:
```yaml
---
- name: Configure web servers
  hosts: webservers
  become: true
  roles:
    - my_role
```
You can also pass variables to a role:
```yaml
---
- name: Configure web servers
  hosts: webservers
  become: true
  roles:
    - role: my_role
      vars:
        http_port: 8080
```
You can also specify a role multiple times with different variables:
```yaml
---
- name: Configure web servers
  hosts: webservers
  become: true
  roles:
    - role: my_role
      vars:
        http_port: 80
        server_name: example.com
    - role: my_role
      vars:
        http_port: 443
        server_name: secure.example.com
```
Role Dependencies

A role can depend on other roles. To specify role dependencies, use the dependencies keyword in the meta/main.yml file:
```yaml
# meta/main.yml
---
dependencies:
  - role: common
  - role: nginxinc.nginx
```
When you include a role with dependencies in a playbook, Ansible will automatically resolve and execute the dependencies before executing the role itself.
Sharing Roles with Ansible Galaxy
Ansible Galaxy is a repository for sharing Ansible roles. You can upload your roles to Ansible Galaxy to share them with the community.
To upload a role to Ansible Galaxy, you need to create an account on https://galaxy.ansible.com/ and then use the ansible-galaxy command to import the role:
```bash
ansible-galaxy role init my_role   # If you haven't already
cd my_role
git init
git add .
git commit -m "Initial commit"
# Create a GitHub repository for your role
git remote add origin git@github.com:your_username/my_role.git
git push -u origin main
ansible-galaxy role import your_username my_role
```
Replace your_username with your Ansible Galaxy username and my_role with the name of your role. You'll need to link your GitHub account to your Ansible Galaxy account.

Role Variables: Defaults vs. Vars vs. Extra Vars
Understanding the different types of variables and their precedence is crucial for creating flexible and configurable roles.
defaults/main.yml:- Purpose: Provides default values for variables.
- Precedence: Lowest. These values are easily overridden.
- Use Case: Define sensible defaults that can be customized as needed.
Example:
# roles/my_role/defaults/main.yml --- http_port: 80 enable_ssl: false
vars/main.yml:- Purpose: Defines variables that are specific to the role.
- Precedence: Higher than defaults.
- Use Case: Define variables that are essential for the role's functionality and are less likely to be overridden.
Example:
# roles/my_role/vars/main.yml --- package_name: nginx config_file: /etc/nginx/nginx.conf
Extra Vars (-e):
- Purpose: Variables passed on the command line using the -e option.
- Precedence: Highest; extra vars override every other variable source, including registered vars and set_fact.
- Use Case: Override any other variable definitions for specific scenarios or environments.
Example:
ansible-playbook my_playbook.yml -e "http_port=8080 enable_ssl=true"
Dynamic Role Inclusion with
include_roleandimport_roleAnsible provides two ways to include roles:
include_roleandimport_role. Understanding the difference is important for managing complex playbooks.include_role:- Behavior: Includes the role dynamically at runtime.
- Variable Scope: Variables defined in the role are not available until the role is executed.
- Use Case: When you need to conditionally include a role based on a variable or fact.
Example:
# tasks/main.yml --- - name: Include my_role if condition is met include_role: name: my_role when: ansible_os_family == "RedHat"
import_role:- Behavior: Imports the role statically at playbook parsing time.
- Variable Scope: Variables defined in the role are available immediately.
- Use Case: When you need to include a role unconditionally and want its variables to be available throughout the playbook.
Example:
# tasks/main.yml --- - name: Import my_role import_role: name: my_role
Role Tags
Tags allow you to selectively run parts of a playbook or role. You can apply tags to roles and tasks within roles.
Applying Tags to Roles:
# playbook.yml --- - name: Configure web servers hosts: webservers become: true roles: - role: my_role tags: - web - nginxApplying Tags to Tasks within Roles:
# roles/my_role/tasks/main.yml --- - name: Install nginx yum: name: nginx state: present tags: - install - nginx - name: Configure nginx template: src: nginx.conf.j2 dest: /etc/nginx/nginx.conf tags: - config - nginxRunning Playbooks with Tags:
ansible-playbook playbook.yml --tags nginx,configThis will only run the tasks and roles that are tagged with
nginxorconfig.
Role Handlers and Dependencies: A Cohesive Approach
Roles can have dependencies on other roles, and they can also define handlers that are triggered by tasks in other roles. This allows you to create complex automation workflows that are well-organized and maintainable.
Example:
# roles/my_role/meta/main.yml --- dependencies: - role: common tags: [ 'common' ]This specifies that
my_roledepends on thecommonrole.
Testing Roles with Molecule
Molecule is a testing framework for Ansible roles. It allows you to automate the process of testing your roles, ensuring that they work as expected.
Install Molecule:
```bash
pip install molecule
```
Create a Molecule Scenario:
```bash
molecule init scenario --role my_role
```
Configure Molecule (e.g., molecule/default/molecule.yml):
```yaml
---
dependency:
  name: galaxy
driver:
  name: docker
platforms:
  - name: instance
    image: "rockylinux:8"
    pre_build_image: true
verifier:
  name: ansible
```
Run Molecule Tests:
```bash
molecule test
```
Best Practices for Role Development
- Keep roles focused and modular.
- Use variables to make roles configurable.
- Document your roles thoroughly.
- Test your roles with Molecule.
- Follow Ansible best practices for idempotency and error handling.
8: Handlers: Responding to Changes
What are Handlers?
Handlers are special tasks that are only executed when notified by other tasks. They are typically used to restart services or reload configuration files after a change has been made. In a role, handlers live in the handlers/main.yml file; in a standalone playbook they go under a play's handlers: section.

Defining Handlers
Handlers are defined like regular tasks, but they have a name that can be used to notify them.
```yaml
# handlers/main.yml
---
- name: restart nginx
  service:
    name: nginx
    state: restarted
```
Notifying Handlers
To notify a handler, use the notify keyword in a task.
```yaml
---
- name: Configure nginx
  hosts: webservers
  become: true
  tasks:
    - name: Create nginx configuration file
      template:
        src: nginx.conf.j2
        dest: /etc/nginx/nginx.conf
        owner: root
        group: root
        mode: 0644
      notify: restart nginx
```
In this example, the restart nginx handler will be notified if the template task changes the /etc/nginx/nginx.conf file.

Handler Execution Order
Handlers are executed in the order they are defined in the handlers section (or handlers/main.yml), not in notification order. A handler is only notified when the task that references it actually reports a change, and it runs only once per play even if it is notified multiple times: Ansible collects all the notifications and executes the handlers at the end of the play. This ensures that services are only restarted or reloaded once, even if multiple configuration files have been changed.
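For example, both template tasks below notify the same handler, yet nginx is restarted only once, at the end of the play. If you need pending handlers to run earlier, the meta: flush_handlers action forces them to fire at that point (the template names are illustrative):

```yaml
---
- name: Update nginx configuration
  hosts: webservers
  become: true
  tasks:
    - name: Deploy main config
      template:
        src: nginx.conf.j2
        dest: /etc/nginx/nginx.conf
      notify: restart nginx

    - name: Deploy virtual host config
      template:
        src: vhost.conf.j2
        dest: /etc/nginx/conf.d/vhost.conf
      notify: restart nginx

    - name: Run any pending handlers right now (optional)
      meta: flush_handlers

  handlers:
    - name: restart nginx
      service:
        name: nginx
        state: restarted
```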
Handlers are commonly used to:
Restart Services: Restart a service after a configuration file has been changed.
# handlers/main.yml --- - name: restart nginx service: name: nginx state: restartedReload Configuration: Reload a service's configuration without restarting it. This is often faster and less disruptive than a full restart.
# handlers/main.yml --- - name: reload nginx service: name: nginx state: reloadedRunning Custom Scripts: Execute a custom script after a specific event.
# handlers/main.yml --- - name: run database migration command: /opt/myapp/migrate.sh
Example: Complete Role with Handlers
Here's a complete example of a role that uses handlers to configure Nginx:
nginx_role/ ├── defaults/ │ └── main.yml ├── handlers/ │ └── main.yml ├── meta/ │ └── main.yml ├── tasks/ │ └── main.yml └── templates/ └── nginx.conf.j2# defaults/main.yml --- http_port: 80 server_name: example.com document_root: /var/www/html# handlers/main.yml --- - name: restart nginx service: name: nginx state: restarted# tasks/main.yml --- - name: Install nginx yum: name: nginx state: present - name: Create nginx configuration file template: src: nginx.conf.j2 dest: /etc/nginx/nginx.conf owner: root group: root mode: 0644 notify: restart nginx - name: Ensure document root exists file: path: "{{ document_root }}" state: directory owner: nginx group: nginx mode: 0755# templates/nginx.conf.j2 server { listen {{ http_port }}; server_name {{ server_name }}; root {{ document_root }}; index index.html; location / { try_files $uri $uri/ =404; } }# meta/main.yml --- galaxy_info: author: Your Name description: Configures Nginx web server license: MIT min_ansible_version: 2.9 platforms: - name: EL versions: - 7 - 8 galaxy_tags: - web - nginx
9: Ansible Vault: Securing Sensitive Data
What is Ansible Vault?
Ansible Vault is a feature that allows you to encrypt sensitive data in your Ansible playbooks and variables files. This is crucial for protecting passwords, API keys, and other confidential information that you don't want to store in plain text. Ansible Vault uses AES256 encryption to secure your data.
Creating and Encrypting Vault Files
To create a new Vault file, use the
ansible-vault createcommand:ansible-vault create secrets.ymlThis command will prompt you for a password. After entering the password, you can edit the file and add your sensitive data. The file will be automatically encrypted when you save it.
You can also encrypt an existing file using the
ansible-vault encryptcommand:ansible-vault encrypt secrets.ymlThis command will also prompt you for a password.
Using Vault Files in Playbooks
To use a Vault file in a playbook, you need to specify the file in the
vars_filessection:--- - name: Deploy application hosts: webservers become: true vars_files: - secrets.yml tasks: - name: Configure database connection template: src: db.conf.j2 dest: /etc/myapp/db.confWhen you run the playbook, Ansible will prompt you for the Vault password.
Providing the Vault Password:
--ask-vault-pass, --vault-password-file

You can provide the Vault password in two ways:
- --ask-vault-pass: This option will prompt you for the password when you run the playbook.
  ansible-playbook deploy.yml --ask-vault-pass
- --vault-password-file: This option allows you to specify a file that contains the Vault password. This is useful for automating playbook execution.
  ansible-playbook deploy.yml --vault-password-file=/path/to/vault_password.txt

Important: Make sure the Vault password file is properly secured (e.g., permissions set to 600) to prevent unauthorized access.
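A minimal sketch of that setup, with an arbitrary path for the password file:

```bash
# Store the vault password in a file only the current user can read
echo 'MyVaultPassword' > ~/.vault_pass.txt
chmod 600 ~/.vault_pass.txt

# Reference it when running playbooks
ansible-playbook deploy.yml --vault-password-file ~/.vault_pass.txt
```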
Vault Commands:
create,encrypt,decrypt,rekey,edit,viewAnsible Vault provides several commands for managing Vault files:
create: Creates a new encrypted Vault file.ansible-vault create secrets.ymlencrypt: Encrypts an existing file.ansible-vault encrypt secrets.ymldecrypt: Decrypts a Vault file.ansible-vault decrypt secrets.ymlrekey: Changes the Vault password.ansible-vault rekey secrets.ymledit: Edits an encrypted Vault file. The file will be decrypted when you open it and re-encrypted when you save it.ansible-vault edit secrets.ymlview: Displays the contents of an encrypted Vault file.ansible-vault view secrets.yml
Example: Using Vault to Secure a Database Password
Create a Vault file:
ansible-vault create db_secrets.yml# db_secrets.yml (encrypted) db_password: "SuperSecretPassword123"Use the Vault file in a playbook:
--- - name: Configure database hosts: databases become: true vars_files: - db_secrets.yml tasks: - name: Configure database connection template: src: db.conf.j2 dest: /etc/myapp/db.confRun the playbook:
ansible-playbook configure_db.yml --ask-vault-passor
ansible-playbook configure_db.yml --vault-password-file=/path/to/vault_password.txtTemplate file:
# db.conf.j2 database_user = myapp database_password = {{ db_password }}
Okay, let's move on to 10: Advanced Ansible Techniques.
10: Advanced Ansible Techniques
Delegation:
delegate_toThe
delegate_tokeyword allows you to execute a task on a different host than the one specified in thehostssection of the play. This is useful for tasks that need to be executed on a specific server, such as a database server or a load balancer.Example:
--- - name: Configure web servers hosts: webservers become: true tasks: - name: Create database backup command: /opt/backup/backup_db.sh delegate_to: backup.example.comIn this example, the
backup_db.shscript will be executed on thebackup.example.comserver, even though the play is targeting thewebserversgroup.You can also use facts to dynamically determine the delegation target:
--- - name: Configure web servers hosts: webservers become: true tasks: - name: Create database backup command: /opt/backup/backup_db.sh delegate_to: "{{ groups['databases'][0] }}" # Delegate to the first host in the 'databases' groupRolling Updates:
serialThe
serialkeyword allows you to perform rolling updates, which means updating a subset of your hosts at a time. This is useful for minimizing downtime during deployments.Example:
--- - name: Deploy application hosts: webservers become: true serial: 2 # Update two hosts at a time tasks: - name: Stop web server service: name: nginx state: stopped - name: Update application code copy: src: /path/to/new/code dest: /var/www/html - name: Start web server service: name: nginx state: startedIn this example, Ansible will update two web servers at a time. You can also use a percentage to specify the number of hosts to update at a time:
serial: "20%" # Update 20% of the hosts at a timeError Handling:
ignore_errors,block,rescue,alwaysAnsible provides several ways to handle errors in your playbooks:
ignore_errors: This keyword allows you to ignore errors for a specific task.--- - name: Run a command that might fail hosts: all become: true tasks: - name: Run a command command: /path/to/command ignore_errors: trueWarning: Use
ignore_errorswith caution. It can mask underlying problems and make it difficult to troubleshoot your playbooks.block,rescue,always: These keywords allow you to define a block of tasks with error handling.block: Contains the tasks that you want to execute.rescue: Contains tasks that will be executed if any of the tasks in theblockfail.always: Contains tasks that will be executed regardless of whether the tasks in theblocksucceed or fail.
--- - name: Configure web server hosts: webservers become: true tasks: - name: Configure web server block: - name: Install nginx yum: name: nginx state: present - name: Create nginx configuration file template: src: nginx.conf.j2 dest: /etc/nginx/nginx.conf owner: root group: root mode: 0644 rescue: - name: Notify admin about failure mail: to: admin@example.com subject: "Nginx configuration failed" body: "Check the logs on {{ ansible_hostname }}" always: - name: Ensure web server is running service: name: nginx state: started enabled: trueIn this example, if any of the tasks in the
blockfail, the tasks in therescuesection will be executed. The tasks in thealwayssection will always be executed, regardless of whether the tasks in theblocksucceed or fail.
Asynchronous Tasks:
async,pollThe
asynckeyword allows you to run tasks asynchronously, which means that Ansible will not wait for the task to complete before moving on to the next task. This is useful for long-running tasks that you don't want to block the playbook execution.The
pollkeyword specifies how often Ansible should check the status of the asynchronous task.Example:
--- - name: Run a long-running command hosts: all become: true tasks: - name: Run a command asynchronously command: /path/to/long_running_command async: 45 # Run for a maximum of 45 seconds poll: 5 # Check the status every 5 seconds register: long_running_task - name: Print the status of the task debug: msg: "Status: {{ long_running_task }}" when: long_running_task.finishedIn this example, the
long_running_commandwill be executed asynchronously. Ansible will check the status of the task every 5 seconds for a maximum of 45 seconds. The status of the task will be stored in thelong_running_taskvariable.Using Dynamic Inventory
Dynamic inventory allows you to automatically discover and manage your infrastructure. Instead of manually maintaining an inventory file, you can use a script or plugin to dynamically generate the inventory from a cloud provider, a CMDB, or another source.
Ansible supports a variety of dynamic inventory plugins for different cloud providers and other systems. To use a dynamic inventory plugin, you need to configure it and then specify the plugin in the
inventorysetting in youransible.cfgfile.Example (using the AWS EC2 dynamic inventory plugin):
Install the boto3 library:
```bash
pip install boto3
```
Create an AWS credentials file:
# ~/.aws/credentials [default] aws_access_key_id = YOUR_ACCESS_KEY aws_secret_access_key = YOUR_SECRET_KEYCreate an EC2 dynamic inventory file:
# ec2.ini [ec2] regions = us-east-1,us-west-2 destination_variable = public_dns_nameConfigure
ansible.cfg:[defaults] inventory = /path/to/ec2.py,/path/to/ec2.iniTest the dynamic inventory:
ansible all -m ping
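Note that the ec2.py/ec2.ini script shown above is the older, script-based approach; recent Ansible releases instead ship an aws_ec2 inventory plugin in the amazon.aws collection. A minimal sketch of that style, with the region and grouping key as examples:

```yaml
# inventory.aws_ec2.yml -- the file name must end in aws_ec2.yml or aws_ec2.yaml
plugin: amazon.aws.aws_ec2
regions:
  - us-east-1
keyed_groups:
  - key: tags.Environment   # e.g. group hosts by an "Environment" tag
    prefix: env
```

You can check what it discovers with `ansible-inventory -i inventory.aws_ec2.yml --graph`.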
Working with Cloud Providers (AWS, Azure, GCP)
Ansible provides modules for managing resources on various cloud providers, including AWS, Azure, and GCP. These modules allow you to automate the creation, configuration, and management of cloud resources.
To use these modules, you need to install the appropriate libraries and configure your credentials. Refer to the Ansible documentation for specific instructions for each cloud provider.
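As a rough sketch only (module arguments vary by provider and collection version), launching an EC2 instance with the amazon.aws collection looks something like this; the AMI ID, key pair, and region are placeholders:

```yaml
---
- name: Provision a cloud instance
  hosts: localhost
  connection: local
  gather_facts: false
  tasks:
    - name: Launch an EC2 instance
      amazon.aws.ec2_instance:
        name: demo-web-01
        region: us-east-1
        instance_type: t3.micro
        image_id: ami-0123456789abcdef0   # placeholder AMI
        key_name: my-keypair
        state: present
        tags:
          Environment: demo
```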
Using Ansible Collections
Ansible Collections are a way to package and distribute Ansible content, including roles, modules, plugins, and playbooks. Collections make it easier to share and reuse Ansible content.
To install a collection, use the
ansible-galaxy collection installcommand:ansible-galaxy collection install community.generalTo use a collection in a playbook, you need to specify the fully qualified collection name (FQCN) for the module or plugin:
```yaml
---
- name: Use a module from a collection
  hosts: all
  become: true
  tasks:
    - name: Create a file
      ansible.builtin.file:   # FQCN; the file module ships in the ansible.builtin collection
        path: /tmp/myfile
        state: touch
```
What are Ansible Collections?
Ansible Collections are a standardized way to package and distribute Ansible content, including roles, modules, plugins, and playbooks. They provide a more organized and manageable way to share and reuse Ansible automation.
Benefits of Using Collections
- Organization: Collections provide a clear structure for organizing Ansible content.
- Reusability: Collections make it easier to share and reuse Ansible content across different projects.
- Versioning: Collections are versioned, allowing you to manage dependencies and ensure compatibility.
- Distribution: Collections can be distributed through Ansible Galaxy or private repositories.
- Dependency Management: Collections can declare dependencies on other collections, simplifying dependency management.
Collection Structure
A collection has a specific directory structure:
my_collection/ ├── galaxy.yml # Collection metadata ├── plugins/ │ ├── modules/ # Custom modules │ ├── lookup/ # Custom lookup plugins │ └── ... # Other plugin types ├── roles/ # Roles ├── playbooks/ # Playbooks ├── README.md # Collection documentation └── ...galaxy.yml: This file contains metadata about the collection, such as the name, version, author, and dependencies.plugins/: This directory contains custom modules, plugins, and other extensions.roles/: This directory contains roles.playbooks/: This directory contains playbooks.README.md: This file contains documentation for the collection.
Creating a Collection
You can create a collection using the
`ansible-galaxy collection init` command:

```bash
ansible-galaxy collection init my_namespace.my_collection
```

This command will create the basic directory structure for the collection (the argument must be in `namespace.collection` form).
The `galaxy.yml` File

The `galaxy.yml` file contains metadata about the collection. Here's an example:

```yaml
### required
namespace: my_namespace
name: my_collection
version: 1.0.0

### optional but highly recommended
readme: README.md
authors:
  - Your Name <your.email@example.com>
description: A short description of the collection.
license:
  - MIT
dependencies:
  community.general: ">=2.0.0"
```

Key fields:
- `namespace`: The namespace for the collection (e.g., your organization's name).
- `name`: The name of the collection.
- `version`: The version of the collection.
- `readme`: The path to the README file.
- `authors`: A list of authors.
- `description`: A short description of the collection.
- `license`: The license for the collection.
- `dependencies`: A dictionary of collection dependencies.
Using Collections in Playbooks
To use a module or plugin from a collection in a playbook, you need to specify the fully qualified collection name (FQCN):
```yaml
---
- name: Use a module from a collection
  hosts: all
  become: true
  tasks:
    - name: Create a file
      my_namespace.my_collection.file:  # FQCN for the file module shipped in this collection
        path: /tmp/myfile
        state: touch
```

Installing Collections
You can install collections from Ansible Galaxy or a private repository using the
`ansible-galaxy collection install` command:

```bash
ansible-galaxy collection install my_namespace.my_collection
```

You can also specify a version:

```bash
ansible-galaxy collection install my_namespace.my_collection:1.0.0
```

Managing Collection Dependencies
Collections can declare dependencies on other collections in the
`galaxy.yml` file. When you install a collection, Ansible Galaxy will automatically install its dependencies.

Building and Publishing Collections
To build a collection, use the
`ansible-galaxy collection build` command:

```bash
ansible-galaxy collection build
```

This command will create a `.tar.gz` file containing the collection.

To publish a collection to Ansible Galaxy, use the `ansible-galaxy collection publish` command:

```bash
ansible-galaxy collection publish my_namespace-my_collection-1.0.0.tar.gz
```

You will need to create an account on Ansible Galaxy and obtain an API key to publish collections.
Collection Best Practices
- Use a consistent naming convention for your collections.
- Document your collections thoroughly.
- Use version control to manage your collections.
- Test your collections with Molecule.
- Follow Ansible best practices for idempotency and error handling.
- Keep collections focused and modular.
11: Practical Use Cases and Scenarios
This chapter will present practical use cases and scenarios, progressing from beginner to expert level.
Beginner:
Basic System Updates:
Scenario: You need to update all packages on your managed nodes.
Playbook:
```yaml
---
- name: Update all packages
  hosts: all
  become: true
  tasks:
    - name: Update apt cache (Debian)
      apt:
        update_cache: yes
      when: ansible_os_family == "Debian"

    - name: Update all packages (Debian)
      apt:
        name: "*"
        state: latest
      when: ansible_os_family == "Debian"

    - name: Update all packages (RedHat)
      yum:
        name: "*"
        state: latest
      when: ansible_os_family == "RedHat"
```

Explanation: This playbook updates all packages on all managed nodes. It uses the `apt` module for Debian-based systems and the `yum` module for Red Hat-based systems. The `*` specifies that all packages should be updated.

User and Group Management:
Scenario: You need to create a new user account on all managed nodes.
Playbook:
```yaml
---
- name: Create a new user account
  hosts: all
  become: true
  vars:
    new_user: testuser
  tasks:
    - name: Create the user account
      user:
        name: "{{ new_user }}"
        state: present
        groups: users
```

Explanation: This playbook creates a new user account named `testuser` on all managed nodes. It uses the `user` module to create the account.

File Management:
Scenario: You need to create a new file on all managed nodes.
Playbook:
```yaml
---
- name: Create a new file
  hosts: all
  become: true
  vars:
    new_file: /tmp/myfile.txt
  tasks:
    - name: Create the file
      file:
        path: "{{ new_file }}"
        state: touch
        mode: 0644
```

Explanation: This playbook creates a new file named `/tmp/myfile.txt` on all managed nodes. It uses the `file` module to create the file.
Intermediate:
Web Server Deployment (LAMP Stack):
Scenario: You need to deploy a LAMP (Linux, Apache, MySQL, PHP) stack on your managed nodes.
Roles:
- `apache`: Installs and configures Apache.
- `mysql`: Installs and configures MySQL.
- `php`: Installs and configures PHP.
Playbook:
```yaml
---
- name: Deploy LAMP stack
  hosts: webservers
  become: true
  roles:
    - apache
    - mysql
    - php
```

Explanation: This playbook deploys a LAMP stack on the `webservers` group. It uses three roles to install and configure Apache, MySQL, and PHP.

Database Configuration:
Scenario: You need to configure a database server with specific settings.
Playbook:
```yaml
---
- name: Configure database server
  hosts: databases
  become: true
  vars:
    db_name: myapp
    db_user: myappuser
    db_password: "SuperSecretPassword123"
  tasks:
    - name: Create database
      mysql_db:
        name: "{{ db_name }}"
        state: present
      delegate_to: localhost  # Run on the control node

    - name: Create user
      mysql_user:
        name: "{{ db_user }}"
        password: "{{ db_password }}"
        priv: "{{ db_name }}.*:ALL"
        state: present
      delegate_to: localhost
```

Explanation: This playbook configures a database server with specific settings. It uses the `mysql_db` and `mysql_user` modules to create the database and user. The `delegate_to: localhost` directive runs the tasks on the control node, which therefore needs the MySQL client library installed; note that when delegating like this, the modules connect to MySQL on the control node unless you also set `login_host` (and login credentials) pointing at the database server. In practice, the password should be protected with Ansible Vault rather than stored in plain text.

Monitoring Agent Installation:
Scenario: You need to install a monitoring agent (e.g., Prometheus Node Exporter) on all managed nodes.
Playbook:
```yaml
---
- name: Install monitoring agent
  hosts: all
  become: true
  tasks:
    - name: Download node exporter
      get_url:
        url: https://github.com/prometheus/node_exporter/releases/download/v1.3.1/node_exporter-1.3.1.linux-amd64.tar.gz
        dest: /tmp/node_exporter.tar.gz

    - name: Extract node exporter
      unarchive:
        src: /tmp/node_exporter.tar.gz
        dest: /usr/local/bin
        remote_src: yes                        # the archive was downloaded to the managed node
        extra_opts: ['--strip-components=1']   # drop the versioned top-level directory from the tarball
        creates: /usr/local/bin/node_exporter

    - name: Create node exporter user
      user:
        name: node_exporter
        system: yes
        createhome: no

    - name: Create node exporter service
      template:
        src: node_exporter.service.j2
        dest: /etc/systemd/system/node_exporter.service
        owner: root
        group: root
        mode: 0644
      notify: restart node exporter

    - name: Enable and start node exporter
      service:
        name: node_exporter
        state: started
        enabled: true

  handlers:
    - name: restart node exporter
      systemd:
        name: node_exporter
        state: restarted
        daemon_reload: yes
```

Explanation: This playbook installs the Prometheus Node Exporter on all managed nodes. It downloads the binary, extracts it (`remote_src: yes` is required because the archive already sits on the managed node), creates a user account, creates a systemd service file, and starts the service.
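The playbook references a `node_exporter.service.j2` template that is not shown above. A minimal sketch of what it might contain (the unit file content is an assumption, not part of the original):

```ini
# templates/node_exporter.service.j2 -- minimal sketch, contents are assumed
[Unit]
Description=Prometheus Node Exporter
After=network-online.target

[Service]
User=node_exporter
ExecStart=/usr/local/bin/node_exporter

[Install]
WantedBy=multi-user.target
```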
Advanced:
Continuous Integration/Continuous Deployment (CI/CD) Pipeline:
Scenario: You need to automate the deployment of your application using a CI/CD pipeline.
Tools:
- Jenkins
- Git
- Ansible
Workflow:
- Developer commits code to Git.
- Jenkins triggers a build.
- Jenkins runs tests.
- If tests pass, Jenkins triggers an Ansible playbook to deploy the application to the target environment.
Ansible Playbook:
```yaml
---
- name: Deploy application
  hosts: webservers
  become: true
  tasks:
    - name: Stop web server
      service:
        name: nginx
        state: stopped

    - name: Update application code
      git:
        repo: https://github.com/your_username/your_application.git
        dest: /var/www/html
        version: "{{ git_commit }}"  # Passed from Jenkins

    - name: Start web server
      service:
        name: nginx
        state: started
```

Explanation: This playbook deploys the application to the `webservers` group. It stops the web server, updates the application code from a Git repository, and starts the web server. The `git_commit` variable is passed from Jenkins, allowing you to deploy specific versions of your application.
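How `git_commit` actually reaches the playbook depends on your Jenkins job; a typical shell step might pass it as an extra variable (the playbook name, inventory path, and environment variable wiring are assumptions about your setup):

```bash
# Example Jenkins shell step: deploy the commit that was just built and tested
ansible-playbook deploy.yml -i inventory -e "git_commit=${GIT_COMMIT}"
```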
Orchestrating Microservices:

Scenario: You need to orchestrate the deployment and management of a microservices architecture.
Tools:
- Docker
- Docker Compose
- Ansible
Workflow:
- Create Docker images for each microservice.
- Create a Docker Compose file to define the relationships between the microservices.
- Use Ansible to deploy the Docker Compose file to the target environment.
Ansible Playbook:
```yaml
---
- name: Deploy microservices
  hosts: all
  become: true
  tasks:
    - name: Copy docker-compose.yml
      copy:
        src: docker-compose.yml
        dest: /opt/myapp/docker-compose.yml

    - name: Start microservices
      command: docker-compose up -d
      args:
        chdir: /opt/myapp
```

Explanation: This playbook deploys the microservices to the target environment. It copies the `docker-compose.yml` file to the server and then uses the `docker-compose up -d` command to start the microservices.
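As an alternative to shelling out with the `command` module, the `community.docker` collection provides Compose modules. A hedged sketch, assuming a recent `community.docker` collection is installed and Docker Compose v2 is available on the target host:

```yaml
- name: Start microservices with the docker_compose_v2 module
  community.docker.docker_compose_v2:
    project_src: /opt/myapp   # directory containing docker-compose.yml
    state: present
```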
Automating Cloud Infrastructure:

Scenario: You need to automate the creation and management of cloud resources (e.g., AWS EC2 instances, Azure VMs).
Tools:
- Ansible
- Cloud Provider SDK (e.g., boto3 for AWS)
Playbook:
```yaml
---
- name: Create EC2 instance
  hosts: localhost
  connection: local
  tasks:
    - name: Create EC2 instance
      ec2:
        key_name: my_keypair
        instance_type: t2.micro
        image: ami-0c55b99d5fe933c73  # Replace with your AMI
        region: us-east-1
        wait: yes
        group: launch-wizard-1
        count: 1
      register: ec2

    - name: Add new instance to host group
      add_host:
        hostname: "{{ item.public_ip }}"
        groupname: new_instances
      loop: "{{ ec2.instances }}"
```

Explanation: This playbook creates an EC2 instance in AWS. It uses the `ec2` module to create the instance (in current Ansible releases this functionality lives in the `amazon.aws.ec2_instance` module). The `add_host` module adds the new instance to a host group, allowing you to manage it with Ansible.
Expert:
Building Custom Modules:
Scenario: You need to perform a task that is not supported by any existing Ansible module.
Solution: Create a custom module.
Steps:
- Create a Python script that performs the task.
- Add the necessary Ansible module metadata to the script.
- Place the script in the
`library` directory in your Ansible project.
- Use the module in your playbook.
Example:
```python
#!/usr/bin/python
import os
import shutil

from ansible.module_utils.basic import AnsibleModule


def main():
    module = AnsibleModule(
        argument_spec=dict(
            name=dict(type='str', required=True),
            state=dict(type='str', default='present', choices=['present', 'absent'])
        )
    )

    name = module.params['name']
    state = module.params['state']

    result = dict(changed=False, message='')

    if state == 'present':
        if not os.path.exists(name):
            os.makedirs(name)
            result['changed'] = True
            result['message'] = 'Created directory %s' % name
        else:
            result['message'] = 'Directory %s already exists' % name
    elif state == 'absent':
        if os.path.exists(name):
            shutil.rmtree(name)
            result['changed'] = True
            result['message'] = 'Removed directory %s' % name
        else:
            result['message'] = 'Directory %s does not exist' % name

    module.exit_json(**result)


if __name__ == '__main__':
    main()
```
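Assuming the script above is saved as `library/make_dir.py` in your project (the filename, and therefore the module name, is an assumption), using it from a playbook would look roughly like this:

```yaml
# Sketch only: assumes the custom module is saved as library/make_dir.py
- name: Use the custom module
  hosts: all
  become: true
  tasks:
    - name: Ensure a directory exists via the custom module
      make_dir:
        name: /opt/myapp/data
        state: present
```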
Developing Ansible Collections:

Scenario: You want to package and distribute your custom modules, roles, and playbooks as an Ansible Collection.
Steps:
- Create a collection directory structure.
- Add your modules, roles, and playbooks to the appropriate directories.
- Create a
`galaxy.yml` file with metadata about the collection.
- Build the collection using the `ansible-galaxy collection build` command.
- Upload the collection to Ansible Galaxy using the `ansible-galaxy collection publish` command.
Contributing to the Ansible Community:
Scenario: You want to contribute to the Ansible project by submitting bug fixes, new features, or documentation improvements.
Steps:
- Fork the Ansible repository on GitHub.
- Create a new branch for your changes.
- Make your changes and commit them to your branch.
- Submit a pull request to the Ansible repository.
Okay, let's move on to 12: Troubleshooting Ansible.
12: Troubleshooting Ansible
Common Errors and How to Fix Them
Here are some common Ansible errors and how to fix them:
"Host key verification failed."
Cause: Ansible is unable to verify the SSH host key of the managed node.
Solution:
Disable host key checking (not recommended for production):
```ini
# ansible.cfg
[defaults]
host_key_checking = False
```

Add the host key to your `known_hosts` file:

```bash
ssh-keyscan hostname >> ~/.ssh/known_hosts
```

Use the `known_hosts` module to add the host key to the `known_hosts` file on the control node:

```yaml
---
- name: Add host key to known_hosts
  hosts: all
  gather_facts: false
  connection: local
  tasks:
    - name: Add host key
      known_hosts:
        name: "{{ inventory_hostname }}"
        key: "{{ lookup('pipe', 'ssh-keyscan ' + inventory_hostname) }}"
```
"Authentication failed."
Cause: Ansible is unable to authenticate to the managed node.
Solution:
- Verify that you have SSH access configured correctly.
- Verify that the
`ansible_user` and `ansible_ssh_private_key_file` variables are set correctly in your inventory file (see the example entry after this list).
- Verify that the SSH key is not passphrase-protected, or that `ssh-agent` is running and has the key loaded if it is.
- If using passwords, ensure
`ansible_password` is set (though key-based auth is preferred).
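For reference, a hedged sketch of what those inventory entries might look like (hostname, user, and key path are illustrative):

```ini
# inventory.ini
[webservers]
web1.example.com ansible_user=deploy ansible_ssh_private_key_file=~/.ssh/deploy_key
```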
"Module not found."
Cause: Ansible is unable to find the specified module.
Solution:
- Verify that the module name is spelled correctly.
- Verify that the module is installed on the control node.
- If using a custom module, verify that it is located in the
`library` directory in your Ansible project.
- If using a module from a collection, ensure the collection is installed and the FQCN is used.
"Syntax error in playbook."
Cause: There is a syntax error in your playbook.
Solution:
- Use a YAML validator or Ansible's built-in syntax check to find syntax errors (see the commands after this list).
- Pay close attention to indentation and spacing.
- Verify that all variables are defined correctly.
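For example, Ansible's built-in syntax check catches most YAML and playbook-structure errors without running anything, and `yamllint` (a separate `pip install`) goes further on style:

```bash
# Validate playbook structure without executing any tasks
ansible-playbook my_playbook.yml --syntax-check

# Optional: lint the YAML itself
yamllint my_playbook.yml
```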
"Variable not defined."
Cause: You are trying to use a variable that has not been defined.
Solution:
- Verify that the variable is defined in your inventory file, playbook, or included file, or guard against it being undefined (see the sketch after this list).
- Verify that the variable name is spelled correctly.
- Check variable precedence to ensure the correct value is being used.
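Two common guards, shown with an illustrative variable name (`my_setting` is an assumption, not from the original):

```yaml
- name: Use a fallback value when the variable is missing
  debug:
    msg: "{{ my_setting | default('fallback') }}"

- name: Only run the task when the variable exists
  debug:
    msg: "{{ my_setting }}"
  when: my_setting is defined
```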
"Task failed."
Cause: A task in your playbook failed.
Solution:
- Examine the output of the task to determine the cause of the failure.
- Verify that all required packages are installed on the managed node.
- Verify that all required services are running on the managed node.
- Check the logs on the managed node for errors.
- Use
`ignore_errors` with caution, and only when you understand the potential consequences.
"Idempotency issues."
Cause: A task is not idempotent, meaning that it makes changes every time it is run, even if the desired state has already been achieved.
Solution:
- Use the `creates` or `removes` options in the `command` module to ensure that the command is only executed if necessary.
- Use the `stat` module to check the state of the system before making any changes.
- Use the `changed_when` keyword to define when a task should be considered changed (see the sketch after this list).
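A minimal sketch of these techniques (paths and commands are illustrative, not from the original):

```yaml
- name: Initialize the application only once
  command: /opt/myapp/bin/initialize
  args:
    creates: /opt/myapp/.initialized   # skipped if this file already exists

- name: Report "changed" only when the sync actually did something
  command: /opt/myapp/bin/sync
  register: sync_result
  changed_when: "'updated' in sync_result.stdout"
```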
Debugging Playbooks:
`--verbose`, `--step`, `--check`, `--diff`

Ansible provides several options for debugging playbooks:

- `--verbose`: Increases the verbosity of the output. This can be useful for seeing more detailed information about what Ansible is doing. `-vvvv` gives the most verbose output. Example: `ansible-playbook my_playbook.yml --verbose`
- `--step`: Executes the playbook in step-by-step mode, prompting you to confirm each task before it is executed. Example: `ansible-playbook my_playbook.yml --step`
- `--check`: Runs the playbook in "check mode," which simulates the changes that would be made without actually making them. This is useful for testing your playbooks. Example: `ansible-playbook my_playbook.yml --check`
- `--diff`: Shows the differences that would be made by the playbook. This is useful for reviewing changes before applying them. Example: `ansible-playbook my_playbook.yml --diff`
Using Logs
Ansible modules log their invocations to the system log on the managed nodes. The location of the system log varies depending on your operating system:

- RHEL, CentOS, Fedora: `/var/log/messages` or `/var/log/syslog`
- Ubuntu, Debian: `/var/log/syslog`

You can also configure Ansible to write its own log on the control node by setting the `log_path` option in your `ansible.cfg` file:

```ini
[defaults]
log_path = /var/log/ansible.log
```
Best Practices for Writing Robust Playbooks
- Use Roles: Organize your playbooks into reusable roles.
- Use Variables: Make your playbooks dynamic by using variables.
- Use Conditionals: Add logic to your playbooks to handle different scenarios.
- Use Handlers: Respond to changes by restarting services or reloading configuration files.
- Use Vault: Secure sensitive data with Ansible Vault.
- Test Your Playbooks: Use check mode and diff mode to test your playbooks before applying them.
- Use Version Control: Store your playbooks in a version control system (e.g., Git).
- Write Idempotent Tasks: Ensure that your tasks are idempotent.
- Handle Errors: Use
`ignore_errors`, `block`, `rescue`, and `always` to handle errors gracefully.
- Document Your Playbooks: Add comments to your playbooks to explain what they do.
Okay, let's move on to 13: Ansible Best Practices.
13: Ansible Best Practices
This chapter summarizes the key best practices for using Ansible effectively.
Idempotency
- Definition: Ensure that your tasks are idempotent, meaning that running a playbook multiple times will have the same result as running it once.
- How to Achieve It:
- Use the `creates` or `removes` options in the `command` module.
- Use the `stat` module to check the state of the system before making any changes.
- Use the `changed_when` keyword to define when a task should be considered changed.
- Use modules designed for idempotency (e.g., `yum`, `apt`, `file`, `service`).
Using Roles for Reusability
- Benefits:
- Organizes your playbooks into reusable units.
- Makes your playbooks easier to understand and maintain.
- Allows you to share your automation with others.
- Best Practices:
- Follow the standard role directory structure.
- Use descriptive names for your roles.
- Document your roles.
- Use role dependencies to manage complex relationships.
Using Variables Effectively
- Benefits:
- Makes your playbooks more dynamic and flexible.
- Allows you to customize your automation for different environments.
- Best Practices:
- Define variables in a consistent manner.
- Use descriptive names for your variables.
- Understand variable precedence.
- Use `vars_files` to organize your variables.
- Use `vars_prompt` for sensitive information that you don't want to store in your playbook (see the sketch after this list).
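A minimal sketch of both techniques (the file path and variable names are illustrative):

```yaml
- hosts: all
  vars_files:
    - vars/common.yml          # shared, non-sensitive variables
  vars_prompt:
    - name: admin_password     # asked interactively, never stored in the playbook
      prompt: "Enter the admin password"
      private: yes
  tasks:
    - name: Show a value loaded from vars/common.yml
      debug:
        msg: "{{ app_name | default('undefined') }}"
```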
Securing Sensitive Data with Vault
- Importance: Protect sensitive information like passwords and API keys.
- Best Practices:
- Use Ansible Vault to encrypt your sensitive data.
- Store your Vault password in a secure location.
- Use the `--vault-password-file` option to automate playbook execution (example after this list).
- Rotate your Vault password regularly.
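For example (the password file path is illustrative; keep it readable only by you and out of version control):

```bash
ansible-playbook site.yml --vault-password-file ~/.vault_pass.txt
```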
Testing Your Playbooks
- Importance: Ensure that your playbooks work as expected before applying them to production systems.
- Best Practices:
- Use check mode (
`--check`) to simulate the changes that would be made by the playbook.
- Use diff mode (`--diff`) to review the changes before applying them.
- Test your playbooks in a staging environment before applying them to production.
- Use a CI/CD pipeline to automate the testing process.
Version Control
- Importance: Track changes to your playbooks and collaborate with others.
- Best Practices:
- Store your playbooks in a version control system (e.g., Git).
- Use branches to manage different versions of your playbooks.
- Use pull requests to review changes before merging them into the main branch.
- Tag releases of your playbooks.
Writing Idempotent Tasks
- Importance: Ensure that your tasks only make changes when necessary.
- Best Practices:
- Use modules designed for idempotency.
- Use the `creates` or `removes` options in the `command` module.
- Use the `stat` module to check the state of the system before making any changes.
- Use the `changed_when` keyword to define when a task should be considered changed.
Handling Errors
- Importance: Handle errors gracefully to prevent your playbooks from failing unexpectedly.
- Best Practices:
- Use `ignore_errors` with caution.
- Use `block`, `rescue`, and `always` to define error handling blocks (see the sketch after this list).
- Log errors to a file or send notifications to administrators.
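A minimal sketch of a `block`/`rescue`/`always` structure (the task contents are illustrative):

```yaml
- name: Attempt an upgrade with a safety net
  block:
    - name: Run the upgrade step
      command: /opt/myapp/bin/upgrade
  rescue:
    - name: Roll back if the upgrade failed
      command: /opt/myapp/bin/rollback
  always:
    - name: Always re-enable monitoring
      service:
        name: myapp-monitor
        state: started
```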
Documenting Your Playbooks
- Importance: Make your playbooks easier to understand and maintain.
- Best Practices:
- Add comments to your playbooks to explain what they do.
- Use descriptive names for your tasks and variables.
- Create a README file for each role.
- Use a consistent style for your playbooks.
Okay, let's add a special chapter dedicated to Ansible Navigator:
13A: Ansible Navigator – A Modern CLI for Ansible
What is Ansible Navigator?
Ansible Navigator is a command-line interface (CLI) tool designed to improve the Ansible development and execution experience. It provides a container-based execution environment, a text-based user interface (TUI), and integration with Ansible Content Collections. It's designed to replace and enhance the functionality of
`ansible-playbook`, `ansible`, and `ansible-doc`.

Why Use Ansible Navigator?
- Consistent Execution Environment: Uses Execution Environments (EEs), which are container images that bundle Ansible, its dependencies, and any required collections. This ensures consistent execution across different environments.
- Improved User Interface: Offers a TUI for navigating playbooks, tasks, and variables, making it easier to understand and debug your automation.
- Collection Management: Simplifies the management of Ansible Content Collections.
- Integration with Ansible Automation Platform: Designed to work seamlessly with Red Hat Ansible Automation Platform.
- Replaces `ansible-doc`: Provides an interactive way to explore module documentation.
- Policy Enforcement: Can enforce policies defined in your organization, ensuring compliance.
Installing Ansible Navigator
Ansible Navigator is typically installed using
`pip`:

```bash
pip install ansible-navigator
```

You'll also need to have `podman` or `docker` installed, as Ansible Navigator relies on containerization. `podman` is generally preferred on Red Hat-based systems.

```bash
sudo dnf install podman   # RHEL, CentOS, Fedora
sudo apt install podman   # Ubuntu, Debian
```

Configuring Ansible Navigator
Ansible Navigator is configured using a YAML file, typically located at
`~/.ansible-navigator.yml` or in the current working directory.

Here's a basic example:

```yaml
# ~/.ansible-navigator.yml
---
ansible-navigator:
  execution-environment:
    container-engine: podman
    enabled: true
    image: quay.io/ansible/ansible-execution-environment:latest
  mode: interactive
```

Key configuration options:

- `execution-environment.container-engine`: Specifies the container engine to use (`podman` or `docker`).
- `execution-environment.enabled`: Enables or disables the use of execution environments.
- `execution-environment.image`: Specifies the container image to use for the execution environment. `quay.io/ansible/ansible-execution-environment:latest` is a common starting point.
- `mode`: Specifies the execution mode (`interactive` or `stdout`). `interactive` provides the TUI.
- `logging.level`: Sets the log level (e.g., `debug`, `info`, `warning`, `error`).
- `logging.file`: Specifies the path to the log file.
Using Ansible Navigator
Ansible Navigator provides several subcommands:
- `ansible-navigator run`: Runs an Ansible playbook.
- `ansible-navigator images`: Manages execution environment images.
- `ansible-navigator collections`: Manages Ansible Content Collections.
- `ansible-navigator doc`: Displays module documentation.
- `ansible-navigator config`: Manages Ansible Navigator configuration.
Running a Playbook:
```bash
ansible-navigator run my_playbook.yml
```

This command will run the `my_playbook.yml` playbook using the configured execution environment. If `mode: interactive` is set, it will open the TUI, allowing you to step through the playbook, view task output, and inspect variables.

Exploring Module Documentation:

```bash
ansible-navigator doc ansible.builtin.copy
```

This command will display the documentation for the `ansible.builtin.copy` module in the TUI.

Using `stdout` Mode:

If you prefer to run Ansible Navigator without the TUI, you can set `mode: stdout` in your configuration file or use the `--mode stdout` command-line option:

```bash
ansible-navigator run my_playbook.yml --mode stdout
```

This will run the playbook and display the output in the standard output stream, similar to `ansible-playbook`.

Execution Environments (EEs)
Execution Environments are container images that provide a consistent and isolated environment for running Ansible playbooks. They bundle Ansible, its dependencies, and any required collections.
You can create your own custom execution environments using the
`ansible-builder` tool (installed from the `ansible-builder` Python package).

Example: Creating a Custom Execution Environment

Install `ansible-builder`:

```bash
pip install ansible-builder
```

Create an `execution-environment.yml` file:

```yaml
# execution-environment.yml
---
version: 1
dependencies:
  galaxy: requirements.yml
  python: requirements.txt
```

Create a `requirements.yml` file for Ansible Galaxy dependencies:

```yaml
# requirements.yml
---
collections:
  - community.general
  - amazon.aws
```

Create a `requirements.txt` file for Python dependencies:

```
# requirements.txt
boto3
```

Build the execution environment context:

```bash
ansible-builder create --file execution-environment.yml --container-runtime podman
```

Build the container image from the generated context directory:

```bash
podman build -t my_custom_ee context/
```

Use the custom execution environment in Ansible Navigator:

```yaml
# ~/.ansible-navigator.yml
---
ansible-navigator:
  execution-environment:
    container-engine: podman
    enabled: true
    image: my_custom_ee
  mode: interactive
```
Advanced Usage
- Using Ansible Navigator with Ansible Automation Platform: Ansible Navigator is designed to integrate seamlessly with Red Hat Ansible Automation Platform, allowing you to manage and execute your automation from a centralized platform.
- Enforcing Policies: Ansible Navigator can enforce policies defined in your organization, ensuring that your automation complies with security and compliance requirements.
- Customizing the TUI: You can customize the appearance and behavior of the TUI by modifying the configuration file.
Troubleshooting Ansible Navigator
"Container engine not found."
Cause: Ansible Navigator is unable to find the specified container engine (
`podman` or `docker`).

Solution:
- Verify that the container engine is installed and running.
- Verify that the `container-engine` setting is set correctly in your configuration file.
"Execution environment image not found."
Cause: Ansible Navigator is unable to find the specified execution environment image.
Solution:
- Verify that the image exists.
- Verify that the execution environment `image` setting is set correctly in your configuration file.
- Try pulling the image manually using `podman pull` or `docker pull`.
"Permission denied."
Cause: Ansible Navigator does not have the necessary permissions to access the container engine or the execution environment image.
Solution:
- Verify that your user account has the necessary permissions to use the container engine.
- Try running Ansible Navigator with `sudo`.
Okay, let's create the Appendix: Ansible Module Reference (Common Modules).
Appendix: Ansible Module Reference (Common Modules)
This appendix provides a brief overview of some of the most commonly used Ansible modules. For a complete list of modules and their options, refer to the Ansible documentation: https://docs.ansible.com/
`apt`: Manages packages on Debian-based systems.

Options:

- `name`: The name of the package to install.
- `state`: The desired state of the package (`present`, `absent`, `latest`).
- `update_cache`: Whether to update the apt cache before installing the package (`yes`, `no`).
- `autoremove`: Whether to automatically remove unused dependencies (`yes`, `no`).

Example:

```yaml
- name: Install nginx
  apt:
    name: nginx
    state: present
    update_cache: yes
```
`yum`: Manages packages on Red Hat-based systems.

Options:

- `name`: The name of the package to install.
- `state`: The desired state of the package (`present`, `absent`, `latest`).
- `enablerepo`: Enable the specified repository.
- `disablerepo`: Disable the specified repository.

Example:

```yaml
- name: Install httpd
  yum:
    name: httpd
    state: present
```
`service`: Manages services.

Options:

- `name`: The name of the service.
- `state`: The desired state of the service (`started`, `stopped`, `restarted`, `reloaded`).
- `enabled`: Whether the service should be enabled on boot (`yes`, `no`).

Example:

```yaml
- name: Start nginx
  service:
    name: nginx
    state: started
    enabled: yes
```
`file`: Manages files and directories.

Options:

- `path`: The path to the file or directory.
- `state`: The desired state of the file or directory (`file`, `absent`, `directory`, `touch`, `link`, `hard`).
- `owner`: The owner of the file or directory.
- `group`: The group of the file or directory.
- `mode`: The permissions of the file or directory.
- `recurse`: Whether to recursively set permissions on directories.

Example:

```yaml
- name: Create a directory
  file:
    path: /var/www/html
    state: directory
    owner: nginx
    group: nginx
    mode: 0755
```
`copy`: Copies files from the control node to managed nodes.

Options:

- `src`: The path to the source file on the control node.
- `dest`: The path to the destination file on the managed node.
- `owner`: The owner of the file.
- `group`: The group of the file.
- `mode`: The permissions of the file.
- `content`: The content of the file (instead of copying from a source file).

Example:

```yaml
- name: Copy a file
  copy:
    src: /path/to/myfile.txt
    dest: /tmp/myfile.txt
    owner: root
    group: root
    mode: 0644
```
`template`: Creates files from Jinja2 templates.

Options:

- `src`: The path to the template file on the control node.
- `dest`: The path to the destination file on the managed node.
- `owner`: The owner of the file.
- `group`: The group of the file.
- `mode`: The permissions of the file.

Example:

```yaml
- name: Create a configuration file from a template
  template:
    src: myconfig.conf.j2
    dest: /etc/myconfig.conf
    owner: root
    group: root
    mode: 0644
```
`user`: Manages user accounts.

Options:

- `name`: The name of the user account.
- `state`: The desired state of the user account (`present`, `absent`).
- `password`: The password for the user account (hashed).
- `groups`: A list of groups that the user should be a member of.
- `system`: Whether the user is a system account (`yes`, `no`).
- `createhome`: Whether to create the user's home directory (`yes`, `no`).

Example:

```yaml
- name: Create a user account
  user:
    name: testuser
    password: "$6$rounds=656000$..."
    groups: users
    state: present
```
`group`: Manages groups.

Options:

- `name`: The name of the group.
- `state`: The desired state of the group (`present`, `absent`).
- `system`: Whether the group is a system group (`yes`, `no`).

Example:

```yaml
- name: Create a group
  group:
    name: mygroup
    state: present
    system: yes
```
`cron`: Manages cron jobs.

Options:

- `name`: A descriptive name for the cron job.
- `job`: The command to be executed by the cron job.
- `minute`: The minute of the hour when the cron job should be executed.
- `hour`: The hour of the day when the cron job should be executed.
- `day`: The day of the month when the cron job should be executed.
- `month`: The month of the year when the cron job should be executed.
- `weekday`: The weekday when the cron job should be executed.
- `user`: The user that the cron job should be executed as.

Example:

```yaml
- name: Create a cron job
  cron:
    name: "Daily backup"
    job: "/opt/backup/backup.sh"
    minute: "0"
    hour: "2"
    user: root
```
`command`: Executes a command on the managed node.

Options:

- `cmd`: The command to be executed.
- `creates`: A file that, if it exists, prevents the command from being executed.
- `removes`: A file that, if it does not exist, prevents the command from being executed.
- `chdir`: Change into this directory before running the command.

Example:

```yaml
- name: Run a command
  command: /usr/bin/uptime
```
`shell`: Executes a shell command on the managed node. Similar to `command`, but executes the command through a shell, allowing for shell features like pipes and redirects.

Options: Same as `command`.

Example:

```yaml
- name: Run a shell command
  shell: ls -l /tmp | grep myfile.txt
```
`get_url`: Downloads a file from a URL.

Options:

- `url`: The URL of the file to download.
- `dest`: The path to the destination file on the managed node.
- `checksum`: Expected checksum for the file.

Example:

```yaml
- name: Download a file
  get_url:
    url: https://example.com/myfile.txt
    dest: /tmp/myfile.txt
```
`unarchive`: Unarchives a file on the managed node.

Options:

- `src`: The path to the archive file.
- `dest`: The path to the destination directory on the managed node.
- `creates`: A file that, if it exists, prevents the archive from being extracted.
- `remote_src`: Whether the source file is already on the managed node rather than the control node (`yes`, `no`).

Example:

```yaml
- name: Unarchive a file
  unarchive:
    src: /tmp/myfile.tar.gz
    dest: /opt/myapp
    creates: /opt/myapp/myfile
    remote_src: yes
```