A Complete Guide to Dynamic Inventory with AWX, AWS, GCP, and Azure
Table of Contents
- Introduction to Ansible Dynamic Inventory
- Understanding Dynamic Inventory Concepts
- Static vs Dynamic Inventory
- Dynamic Inventory Scripts and Plugins
- AWS Dynamic Inventory
- GCP Dynamic Inventory
- Azure Dynamic Inventory
- AWX and Dynamic Inventory
- Custom Dynamic Inventory Scripts
- Best Practices and Troubleshooting
- Advanced Use Cases
- Security Considerations
1. Introduction to Ansible Dynamic Inventory
Dynamic inventory is a powerful feature in Ansible that allows you to automatically discover and manage infrastructure resources from cloud providers, virtualization platforms, and other external sources. Instead of manually maintaining static inventory files, dynamic inventory scripts query external APIs to generate up-to-date inventory information.
graph TB
A[Ansible Controller] --> B[Dynamic Inventory Plugin]
B --> C[Cloud Provider API]
C --> D[AWS EC2]
C --> E[GCP Compute Engine]
C --> F[Azure Virtual Machines]
B --> G[Generated Inventory]
G --> H[Host Groups]
G --> I[Host Variables]
G --> J[Connection Details]
A --> K[Playbook Execution]
K --> L[Target Hosts]
Why Use Dynamic Inventory?
- Automatic Discovery: Automatically discover new instances without manual intervention
- Real-time Updates: Always have current infrastructure state
- Scalability: Handle large, dynamic environments efficiently
- Cloud Integration: Native integration with major cloud providers
- Reduced Maintenance: No need to manually update inventory files
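All of these benefits rest on one simple contract: when Ansible invokes an inventory source with `--list`, it expects JSON describing group memberships plus per-host variables under `_meta.hostvars`. A minimal sketch of that structure (host names and addresses are made up for illustration):

```python
import json

def build_inventory():
    # The smallest useful dynamic inventory: one group, one host,
    # with connection details supplied via _meta.hostvars.
    return {
        "_meta": {
            "hostvars": {
                "web01.example.com": {"ansible_host": "10.0.1.10"},
            }
        },
        "webservers": {
            "hosts": ["web01.example.com"],
            "vars": {"http_port": 80},
        },
    }

print(json.dumps(build_inventory(), indent=2))
```

Every plugin and custom script in this guide ultimately produces a document of this shape.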
2. Understanding Dynamic Inventory Concepts
Core Components
classDiagram
class DynamicInventory {
+plugin_name: string
+cache_settings: dict
+host_filters: list
+group_by: list
+compose: dict
+keyed_groups: list
}
class InventoryPlugin {
+parse()
+verify_file()
+get_hosts()
+get_groups()
}
class CloudAPI {
+authenticate()
+list_instances()
+get_metadata()
}
DynamicInventory --> InventoryPlugin
InventoryPlugin --> CloudAPI
Key Terms
- Inventory Plugin: Python module that interfaces with external systems
- Host Variables: Dynamic attributes assigned to hosts
- Groups: Logical collections of hosts based on attributes
- Caching: Storing inventory data to reduce API calls
- Filters: Criteria to include/exclude hosts
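Two of these terms, `keyed_groups` and `compose`, come up with every plugin below, so it helps to see what a keyed group does stripped of Jinja2: it is just `prefix + separator + attribute value`, lightly sanitized. A plain-Python emulation (the sanitization here is simplified; real plugins normalize additional invalid characters):

```python
def keyed_group(prefix, value, separator="_"):
    # Emulates a keyed_groups entry: join prefix and attribute value,
    # replacing characters that are invalid in group names (simplified).
    return f"{prefix}{separator}{value}".replace(".", "_").replace("-", "_")

host = {"tags": {"Environment": "production"}, "instance_type": "t3.micro"}
groups = [
    keyed_group("env", host["tags"]["Environment"]),
    keyed_group("type", host["instance_type"]),
]
print(groups)  # -> ['env_production', 'type_t3_micro']
```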
3. Static vs Dynamic Inventory
Comparison Table
| Aspect | Static Inventory | Dynamic Inventory |
|---|---|---|
| Maintenance | Manual updates required | Automatic updates |
| Scalability | Limited by manual effort | Highly scalable |
| Accuracy | Can become outdated | Always current |
| Setup Complexity | Simple | Requires configuration |
| Performance | Fast | May have API latency |
| Cloud Integration | Manual mapping | Native integration |
When to Use Each
flowchart TD
A[Infrastructure Type?] --> B[Static/Stable]
A --> C[Dynamic/Cloud]
B --> D[Use Static Inventory]
C --> E[Use Dynamic Inventory]
D --> F[Small environments, rarely changing infrastructure, on-premise servers]
E --> G[Cloud environments, auto-scaling groups, container orchestration, frequent deployments]
4. Dynamic Inventory Scripts and Plugins
Inventory Plugin Configuration
Create an inventory configuration file:
# inventory.yml
plugin: auto
cache: true
cache_plugin: jsonfile
cache_timeout: 3600
cache_connection: /tmp/ansible_cache
Note that fact caching is configured separately, in the [defaults] section of ansible.cfg rather than in the inventory file:
# ansible.cfg ([defaults])
fact_caching = jsonfile
fact_caching_connection = /tmp/ansible_facts_cache
fact_caching_timeout = 86400
Plugin Types
graph LR
A[Inventory Plugins] --> B[Cloud Providers]
A --> C[Virtualization]
A --> D[Container Platforms]
A --> E[Network Devices]
B --> F[AWS EC2]
B --> G[GCP Compute]
B --> H[Azure Resource Manager]
C --> I[VMware vSphere]
C --> J[OpenStack]
D --> K[Docker]
D --> L[Kubernetes]
E --> M[Cisco]
E --> N[Juniper]
Basic Plugin Structure
# custom_inventory_plugin.py
from ansible.plugins.inventory import BaseInventoryPlugin, Constructable
class InventoryModule(BaseInventoryPlugin, Constructable):
NAME = 'custom_inventory'
def verify_file(self, path):
"""Verify this is a valid inventory file"""
return path.endswith(('custom_inventory.yml', 'custom_inventory.yaml'))
def parse(self, inventory, loader, path, cache=True):
"""Parse the inventory"""
super().parse(inventory, loader, path, cache)
config = self._read_config_data(path)
# Your custom logic here
self._populate_inventory(config)
def _populate_inventory(self, config):
"""Populate inventory with hosts and groups"""
# Implementation details
pass
5. AWS Dynamic Inventory
Architecture Overview
graph TB
subgraph "AWS Environment"
A[EC2 Instances]
B[Auto Scaling Groups]
C[RDS Instances]
D[ELB/ALB]
E[Tags]
end
subgraph "Ansible Controller"
F[AWS Inventory Plugin]
G[Boto3 Library]
H[AWS Credentials]
end
subgraph "Generated Inventory"
I[Instance Groups]
J[Region Groups]
K[Tag-based Groups]
L[Host Variables]
end
F --> G
G --> A
G --> B
G --> C
G --> D
F --> I
F --> J
F --> K
F --> L
AWS Inventory Configuration
# aws_inventory.yml
plugin: amazon.aws.aws_ec2
regions:
- us-east-1
- us-west-2
# Authentication
aws_access_key_id: "{{ vault_aws_access_key }}"
aws_secret_access_key: "{{ vault_aws_secret_key }}"
# Filters
filters:
instance-state-name: running
tag:Environment:
- production
- staging
# Grouping
keyed_groups:
- key: tags.Environment
prefix: env
- key: instance_type
prefix: type
- key: placement.region
prefix: region
# Host variables
compose:
ansible_host: public_ip_address
ansible_user: "'ec2-user'"
environment: tags.Environment
instance_size: instance_type
# Cache settings
cache: true
cache_plugin: jsonfile
cache_connection: /tmp/aws_inventory_cache
cache_timeout: 300
Advanced AWS Filtering
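A pattern that recurs in `compose` blocks, including the advanced example below, is falling back from a public to a private address. Stripped of Jinja2, the logic is a simple or-fallback (field names follow the `aws_ec2` plugin's hostvars):

```python
def pick_ansible_host(instance):
    # Equivalent of the Jinja2 expression
    #   ansible_host: public_ip_address | default(private_ip_address, true)
    # Instances without a public IP (e.g. in private subnets) fall back
    # to their private address.
    return instance.get("public_ip_address") or instance.get("private_ip_address")

assert pick_ansible_host({"public_ip_address": "3.91.1.5",
                          "private_ip_address": "10.0.0.5"}) == "3.91.1.5"
assert pick_ansible_host({"public_ip_address": None,
                          "private_ip_address": "10.0.0.5"}) == "10.0.0.5"
```

Note the second `true` argument to `default`: without it, Jinja2 only falls back when the variable is undefined, not when it is `None` or empty.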
# aws_advanced_filters.yml
plugin: amazon.aws.aws_ec2
# Complex filters
filters:
# Multiple instance states
instance-state-name:
- running
- pending
# Tag-based filtering
tag:Team: devops
tag:Project: web-app
# Instance type filtering
instance-type:
- t3.micro
- t3.small
- m5.large
# Include/exclude by tag
include_filters:
- tag:Managed: ansible
exclude_filters:
- tag:DoNotManage: true
# Custom grouping strategies
keyed_groups:
- key: tags.Team | default('unassigned')
prefix: team
separator: '_'
- key: tags.Environment + '_' + tags.Application
prefix: app
- key: placement.availability_zone
prefix: az
# Dynamic host variables
compose:
ansible_host: |
public_ip_address if public_ip_address
else private_ip_address
ec2_state: state.name
ec2_arch: architecture
vpc_id: vpc_id
subnet_id: subnet_id
# Custom facts
is_web_server: "'web' in (tags.Role | default(''))"
is_database: "'db' in (tags.Role | default(''))"
AWS IAM Permissions
{
"Version": "2012-10-17",
"Statement": [
{
"Effect": "Allow",
"Action": [
"ec2:DescribeInstances",
"ec2:DescribeImages",
"ec2:DescribeKeyPairs",
"ec2:DescribeSecurityGroups",
"ec2:DescribeAvailabilityZones",
"ec2:DescribeRegions",
"ec2:DescribeVpcs",
"ec2:DescribeSubnets",
"rds:DescribeDBInstances",
"elasticloadbalancing:DescribeLoadBalancers"
],
"Resource": "*"
}
]
}
Testing AWS Inventory
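The CLI checks below are the first stop; in CI pipelines it can also help to validate the JSON programmatically. A hypothetical validator for the `--list` structure (group and host names are illustrative):

```python
def validate_inventory(data):
    """Sanity-check dynamic inventory JSON as produced by --list."""
    if not isinstance(data, dict) or "hostvars" not in data.get("_meta", {}):
        return False
    for group, body in data.items():
        if group == "_meta" or not isinstance(body, dict):
            continue
        # Group hosts must be a list of hostname strings.
        if not all(isinstance(h, str) for h in body.get("hosts", [])):
            return False
    return True

sample = {"_meta": {"hostvars": {}}, "webservers": {"hosts": ["web01"]}}
print(validate_inventory(sample))  # -> True
```

Pipe `ansible-inventory --list` output through a check like this to catch a malformed source before a playbook run does.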
# Test inventory configuration
ansible-inventory -i aws_inventory.yml --list
# Test specific group
ansible-inventory -i aws_inventory.yml --graph
# Debug inventory issues
ansible-inventory -i aws_inventory.yml --list -vvv
6. GCP Dynamic Inventory
GCP Architecture
graph TB
subgraph "Google Cloud Platform"
A[Compute Engine Instances]
B[Instance Groups]
C[Zones/Regions]
D[Labels]
E[Metadata]
end
subgraph "Ansible Controller"
F[GCP Inventory Plugin]
G[Google Cloud SDK]
H[Service Account Key]
end
subgraph "Inventory Structure"
I[Zone Groups]
J[Label Groups]
K[Instance Type Groups]
L[Custom Groups]
end
F --> G
G --> A
G --> B
G --> C
F --> I
F --> J
F --> K
F --> L
GCP Inventory Configuration
# gcp_inventory.yml
plugin: google.cloud.gcp_compute
projects:
- my-project-id
- another-project-id
zones:
- us-central1-a
- us-central1-b
- europe-west1-b
# Authentication
auth_kind: serviceaccount
service_account_file: /path/to/service-account.json
# Filters
filters:
- status = RUNNING
- labels.environment = production
- machineType : e2-micro
# Grouping
keyed_groups:
- key: labels.environment
prefix: env
- key: zone
prefix: zone
- key: machineType
prefix: machine_type
# Host composition
compose:
ansible_host: networkInterfaces[0].accessConfigs[0].natIP | default(networkInterfaces[0].networkIP)
gcp_image: disks[0].licenses[0]
gcp_machine_type: machineType
gcp_metadata: metadata.items
gcp_network: networkInterfaces[0].network
gcp_subnetwork: networkInterfaces[0].subnetwork
gcp_tags: tags.items
gcp_labels: labels
# Hostnames
hostnames:
- name
- public_ip
- private_ip
# Variables
vars:
gcp_project: project
gcp_zone: zone
GCP Service Account Setup
# Create service account
gcloud iam service-accounts create ansible-inventory \
--description="Service account for Ansible dynamic inventory" \
--display-name="Ansible Inventory"
# Grant necessary permissions
gcloud projects add-iam-policy-binding PROJECT_ID \
--member="serviceAccount:ansible-inventory@PROJECT_ID.iam.gserviceaccount.com" \
--role="roles/compute.viewer"
# Create and download key
gcloud iam service-accounts keys create ~/ansible-gcp-key.json \
--iam-account=ansible-inventory@PROJECT_ID.iam.gserviceaccount.com
Advanced GCP Configuration
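The region grouping used in the configuration below leans on GCP's zone naming, where a zone is the region plus a letter suffix (e.g. `us-central1-a`). The Jinja2 expression `zone.split('-')[:-1] | join('-')` is, in plain Python:

```python
def region_from_zone(zone):
    # GCP zones are '<region>-<letter>'; dropping the last dash-separated
    # segment leaves the region name.
    return "-".join(zone.split("-")[:-1])

assert region_from_zone("us-central1-a") == "us-central1"
assert region_from_zone("europe-west1-b") == "europe-west1"
```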
# gcp_advanced.yml
plugin: google.cloud.gcp_compute
# Multiple projects (the plugin accepts a list of project IDs;
# per-project credentials require separate inventory files)
projects:
- production-project
- staging-project
auth_kind: serviceaccount
service_account_file: /keys/inventory-key.json
# Complex filtering
filters:
- "status = RUNNING"
- "labels.team:ops"
- "NOT labels.temporary:true"
- "zone:(us-central1-a OR us-central1-b)"
# Advanced grouping
keyed_groups:
- key: labels.application + '-' + labels.environment
prefix: app
separator: '_'
- key: zone.split('-')[:-1] | join('-')
prefix: region
- key: machineType.split('/')[-1].split('-')[0]
prefix: series
# Conditional host variables
compose:
# Choose appropriate IP
ansible_host: |
(networkInterfaces[0].accessConfigs[0].natIP)
if (networkInterfaces[0].accessConfigs)
else (networkInterfaces[0].networkIP)
# Instance metadata
startup_script: metadata.items | selectattr('key', 'match', 'startup-script') | map(attribute='value') | first | default('')
# Network information
vpc_name: networkInterfaces[0].network.split('/')[-1]
subnet_name: networkInterfaces[0].subnetwork.split('/')[-1]
# Derived facts
is_preemptible: scheduling.preemptible | default(false)
has_external_ip: networkInterfaces[0].accessConfigs | default([]) | length > 0
7. Azure Dynamic Inventory
Azure Architecture
graph TB
subgraph "Azure Cloud"
A[Virtual Machines]
B[Resource Groups]
C[Subscriptions]
D[Tags]
E[Scale Sets]
end
subgraph "Ansible Controller"
F[Azure Inventory Plugin]
G[Azure SDK]
H[Service Principal]
end
subgraph "Inventory Organization"
I[Subscription Groups]
J[Resource Group Groups]
K[Location Groups]
L[Tag Groups]
end
F --> G
G --> A
G --> B
G --> C
F --> I
F --> J
F --> K
F --> L
Azure Inventory Configuration
# azure_inventory.yml
plugin: azure.azcollection.azure_rm
auth_source: auto
# Subscription filtering
include_vm_resource_groups:
- production-rg
- staging-rg
exclude_vm_resource_groups:
- temp-rg
# Instance filtering
include_vmss: true
exclude_powerstate:
- stopping
- stopped
- deallocated
# Grouping
keyed_groups:
- key: tags.Environment
prefix: env
- key: location
prefix: location
- key: resource_group
prefix: rg
- key: os_disk.os_type
prefix: os
# Host variables
compose:
ansible_host: public_ipv4_addresses | first | default(private_ipv4_addresses | first)
ansible_user: |
'azureuser' if os_disk.os_type == 'Linux'
else 'Administrator'
azure_location: location
azure_resource_group: resource_group
azure_vm_size: vm_size
azure_os_type: os_disk.os_type
azure_tags: tags
# Conditional groups
conditional_groups:
web_servers: "'web' in (tags.Role | default(''))"
db_servers: "'database' in (tags.Role | default(''))"
linux_servers: "os_disk.os_type == 'Linux'"
windows_servers: "os_disk.os_type == 'Windows'"
Azure Service Principal Setup
# Create service principal
az ad sp create-for-rbac --name ansible-inventory \
--role Reader \
--scopes /subscriptions/SUBSCRIPTION_ID
# Output will include:
# {
# "appId": "APP_ID",
# "displayName": "ansible-inventory",
# "name": "APP_ID",
# "password": "PASSWORD",
# "tenant": "TENANT_ID"
# }
Azure Authentication Methods
# Method 1: Environment variables
# Set these in your environment:
# AZURE_CLIENT_ID=your_app_id
# AZURE_SECRET=your_password
# AZURE_SUBSCRIPTION_ID=your_subscription_id
# AZURE_TENANT=your_tenant_id
plugin: azure.azcollection.azure_rm
auth_source: env
# Method 2: Credential file
plugin: azure.azcollection.azure_rm
auth_source: credential_file
profile: default
# Method 3: Managed Identity (Azure VMs)
plugin: azure.azcollection.azure_rm
auth_source: msi
# Method 4: Azure CLI
plugin: azure.azcollection.azure_rm
auth_source: cli
Advanced Azure Features
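Several expressions below pull the VM family and core count out of the size string. Azure sizes follow a `Tier_Family<cores><features>_version` shape (e.g. `Standard_D2s_v3`); a hypothetical parser showing the idea (real SKU naming has more variants, so treat this as a sketch):

```python
import re

def parse_vm_size(vm_size):
    # Split 'Standard_D2s_v3' into tier, family letter(s), and core count.
    # Simplified: not every Azure size follows this exact shape.
    m = re.match(r"^(?P<tier>[^_]+)_(?P<family>[A-Za-z]+)(?P<cores>\d+)", vm_size)
    if m is None:
        return None
    return {"tier": m["tier"], "family": m["family"], "cores": int(m["cores"])}

print(parse_vm_size("Standard_D2s_v3"))  # -> {'tier': 'Standard', 'family': 'D', 'cores': 2}
```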
# azure_advanced.yml
plugin: azure.azcollection.azure_rm
# Multiple subscriptions
subscriptions:
- subscription-id-1
- subscription-id-2
# Batch processing for large environments
batch_fetch_size: 100
max_record_count: 1000
# Include additional Azure resources
include_vm_scale_set_instances: true
include_powerstate: true
# Advanced filtering
conditional_groups:
high_availability: "availability_set != ''"
managed_disks: "storage_profile.os_disk.managed_disk != None"
premium_storage: "'Premium' in storage_profile.os_disk.managed_disk.storage_account_type"
# Complex grouping
keyed_groups:
# Group by VM family (e.g. D2s from Standard_D2s_v3)
- key: vm_size.split('_')[1]
prefix: series
# Group by disk type
- key: storage_profile.os_disk.managed_disk.storage_account_type
prefix: storage
# Multi-dimensional grouping
- key: location + '_' + tags.Environment
prefix: loc_env
# Rich host variables
compose:
# Network information
private_ip: private_ipv4_addresses[0] | default('')
public_ip: public_ipv4_addresses[0] | default('')
# VM specifications
vm_cores: vm_size | regex_replace('^.*?_[A-Za-z]+(\d+).*$', '\1') | int
vm_memory_gb: |
{
'Standard_B1s': 1,
'Standard_B2s': 4,
'Standard_D2s_v3': 8
}.get(vm_size, 0)
# Cost optimization flags
has_accelerated_networking: network_profile.network_interfaces[0].enable_accelerated_networking | default(false)
boot_diagnostics_enabled: diagnostics_profile.boot_diagnostics.enabled | default(false)
8. AWX and Dynamic Inventory
AWX Dynamic Inventory Workflow
sequenceDiagram
participant U as User
participant A as AWX
participant I as Inventory Source
participant C as Cloud Provider
participant D as Database
U->>A: Create Inventory Source
A->>I: Configure Source Parameters
I->>C: Sync Request
C-->>I: Return Host Data
I->>D: Store Inventory Data
A->>U: Sync Complete
U->>A: Run Job Template
A->>D: Fetch Current Inventory
A->>A: Execute Playbook
AWX Inventory Sources
graph LR
A[AWX Tower/Controller] --> B[Inventory Sources]
B --> C[Cloud Providers]
B --> D[Version Control]
B --> E[Custom Scripts]
B --> F[Red Hat Satellite]
C --> G[AWS EC2]
C --> H[GCP Compute]
C --> I[Azure RM]
C --> J[OpenStack]
D --> K[Git Repository]
D --> L[SVN Repository]
E --> M[Python Script]
E --> N[Executable]
F --> O[Satellite 6]
Creating AWS Inventory Source in AWX
- Navigate to Inventories
- Create New Inventory
- Add Inventory Source
# AWX AWS Source Configuration
Name: AWS Production Inventory
Source: Amazon EC2
Credential: AWS Credential
Regions: us-east-1, us-west-2
Instance Filters: tag:Environment=production AND instance-state-name=running
Update on Launch: ✓
Overwrite: ✓
Overwrite Variables: ✓
Source Variables:
keyed_groups:
- key: tags.Role
prefix: role
- key: placement.region
prefix: region
compose:
ansible_host: public_ip_address | default(private_ip_address, true)
env: tags.Environment
AWX Inventory Source Variables
# AWS EC2 Source Variables
plugin: amazon.aws.aws_ec2
regions:
- us-east-1
- us-west-2
# Credentials are injected automatically by the attached AWX credential,
# which exports AWS_ACCESS_KEY_ID / AWS_SECRET_ACCESS_KEY into the job
# environment; they do not need to be set here.
filters:
instance-state-name: running
tag:Managed: awx
keyed_groups:
- key: tags.Team
prefix: team
- key: instance_type
prefix: type
- key: placement.region
prefix: region
compose:
ansible_host: public_ip_address | default(private_ip_address, true)
ansible_user: |
{
'ubuntu': 'ubuntu',
'rhel': 'ec2-user',
'centos': 'centos'
}.get(tags.OS | default(''), 'ec2-user')
cache: true
cache_plugin: memory
cache_timeout: 300
AWX Custom Inventory Script
#!/usr/bin/env python3
# custom_awx_inventory.py
import json
import sys
import requests
from argparse import ArgumentParser
class CustomInventory:
def __init__(self):
self.inventory = {
'_meta': {
'hostvars': {}
}
}
# Parse command line arguments
parser = ArgumentParser()
parser.add_argument('--list', action='store_true')
parser.add_argument('--host', action='store')
self.args = parser.parse_args()
def get_inventory(self):
"""Main inventory logic"""
if self.args.list:
return self.list_inventory()
elif self.args.host:
return self.get_host_vars(self.args.host)
else:
return {}
def list_inventory(self):
"""Return full inventory"""
# Your custom logic here
hosts = self.fetch_hosts_from_api()
for host in hosts:
# Add to appropriate groups
self.add_host_to_group(host['name'], host['group'])
# Set host variables
self.inventory['_meta']['hostvars'][host['name']] = {
'ansible_host': host['ip'],
'ansible_user': host['user'],
'custom_var': host['metadata']
}
return self.inventory
def fetch_hosts_from_api(self):
"""Fetch hosts from external API"""
# Example API call
response = requests.get('https://api.example.com/hosts')
return response.json()
def add_host_to_group(self, hostname, groupname):
"""Add host to group"""
if groupname not in self.inventory:
self.inventory[groupname] = {
'hosts': [],
'vars': {}
}
self.inventory[groupname]['hosts'].append(hostname)
def get_host_vars(self, hostname):
"""Return variables for specific host"""
return self.inventory['_meta']['hostvars'].get(hostname, {})
if __name__ == '__main__':
inventory = CustomInventory()
print(json.dumps(inventory.get_inventory(), indent=2))
AWX Inventory Sync Automation
# sync_inventory_playbook.yml
---
- name: Sync AWX Inventories
hosts: localhost
connection: local
vars:
awx_host: "https://awx.example.com"
awx_username: "{{ vault_awx_username }}"
awx_password: "{{ vault_awx_password }}"
tasks:
- name: Get AWX token
uri:
url: "{{ awx_host }}/api/v2/tokens/"
method: POST
user: "{{ awx_username }}"
password: "{{ awx_password }}"
force_basic_auth: yes
status_code: 201
register: awx_token
- name: Sync AWS inventory sources
uri:
url: "{{ awx_host }}/api/v2/inventory_sources/{{ item }}/update/"
method: POST
headers:
Authorization: "Bearer {{ awx_token.json.token }}"
status_code: 202
register: awx_sync_results
loop:
- 1 # AWS Production
- 2 # AWS Staging
- 3 # AWS Development
- name: Wait for sync completion
uri:
url: "{{ awx_host }}/api/v2/inventory_updates/{{ item.json.inventory_update }}/"
headers:
Authorization: "Bearer {{ awx_token.json.token }}"
register: sync_status
until: sync_status.json.status in ['successful', 'failed']
retries: 30
delay: 10
loop: "{{ awx_sync_results.results }}"
9. Custom Dynamic Inventory Scripts
Custom Script Architecture
graph TB
A[External System] --> B[API/Database]
B --> C[Custom Inventory Script]
C --> D[JSON Output]
D --> E[Ansible]
subgraph "Script Components"
F[Data Fetching]
G[Data Processing]
H[Group Creation]
I[Host Variables]
J[Caching Logic]
end
C --> F
F --> G
G --> H
G --> I
G --> J
Basic Custom Script Template
#!/usr/bin/env python3
"""
Custom Ansible Dynamic Inventory Script
"""
import json
import argparse
import sys
from typing import Dict, List, Any
class CustomInventory:
def __init__(self):
self.inventory: Dict[str, Any] = {
'_meta': {
'hostvars': {}
}
}
self.read_cli_args()
def read_cli_args(self):
parser = argparse.ArgumentParser()
parser.add_argument('--list', action='store_true',
help='List all hosts')
parser.add_argument('--host', action='store',
help='Get variables for specific host')
self.args = parser.parse_args()
def get_inventory_data(self) -> Dict[str, Any]:
"""Main method to get inventory data"""
if self.args.list:
return self.list_inventory()
elif self.args.host:
return self.get_host_vars(self.args.host)
else:
return {}
def list_inventory(self) -> Dict[str, Any]:
"""Generate full inventory"""
# Fetch data from your source
hosts_data = self.fetch_hosts()
# Process each host
for host_data in hosts_data:
hostname = host_data['name']
# Add to groups
for group in host_data.get('groups', []):
self.add_host_to_group(hostname, group)
# Set host variables
self.inventory['_meta']['hostvars'][hostname] = {
'ansible_host': host_data.get('ip'),
'ansible_user': host_data.get('user', 'root'),
'ansible_port': host_data.get('port', 22),
**host_data.get('variables', {})
}
return self.inventory
def fetch_hosts(self) -> List[Dict[str, Any]]:
"""Fetch hosts from external source"""
# Override this method with your data source logic
return [
{
'name': 'web01.example.com',
'ip': '192.168.1.10',
'user': 'ubuntu',
'groups': ['webservers', 'production'],
'variables': {
'nginx_port': 80,
'ssl_enabled': True
}
},
{
'name': 'db01.example.com',
'ip': '192.168.1.20',
'user': 'ubuntu',
'groups': ['databases', 'production'],
'variables': {
'mysql_port': 3306,
'replication_role': 'master'
}
}
]
def add_host_to_group(self, hostname: str, groupname: str):
"""Add host to inventory group"""
if groupname not in self.inventory:
self.inventory[groupname] = {
'hosts': [],
'vars': {}
}
if hostname not in self.inventory[groupname]['hosts']:
self.inventory[groupname]['hosts'].append(hostname)
def get_host_vars(self, hostname: str) -> Dict[str, Any]:
"""Return variables for specific host"""
return self.inventory['_meta']['hostvars'].get(hostname, {})
def main():
inventory = CustomInventory()
result = inventory.get_inventory_data()
print(json.dumps(result, indent=2))
if __name__ == '__main__':
main()
Database-backed Inventory Script
#!/usr/bin/env python3
"""
Database-backed Dynamic Inventory Script
"""
import json
import argparse
import sqlite3
import logging
import sys
from typing import Dict, List, Any, Optional
class DatabaseInventory:
def __init__(self, db_path: str = '/etc/ansible/inventory.db'):
self.db_path = db_path
self.inventory: Dict[str, Any] = {
'_meta': {
'hostvars': {}
}
}
self.setup_logging()
self.read_cli_args()
def setup_logging(self):
logging.basicConfig(
level=logging.INFO,
format='%(asctime)s - %(levelname)s - %(message)s',
handlers=[
logging.FileHandler('/var/log/ansible_inventory.log'),
logging.StreamHandler()
]
)
self.logger = logging.getLogger(__name__)
def read_cli_args(self):
parser = argparse.ArgumentParser()
parser.add_argument('--list', action='store_true')
parser.add_argument('--host', action='store')
parser.add_argument('--refresh', action='store_true',
help='Refresh cache from database')
self.args = parser.parse_args()
def get_db_connection(self) -> sqlite3.Connection:
"""Get database connection"""
try:
conn = sqlite3.connect(self.db_path)
conn.row_factory = sqlite3.Row
return conn
except sqlite3.Error as e:
self.logger.error(f"Database connection failed: {e}")
raise
def fetch_hosts_from_db(self) -> List[Dict[str, Any]]:
"""Fetch hosts from database"""
query = """
SELECT
h.hostname,
h.ip_address,
h.ansible_user,
h.ansible_port,
h.environment,
h.role,
h.os_type,
h.enabled,
GROUP_CONCAT(g.group_name) as groups,
GROUP_CONCAT(hv.variable_name || '=' || hv.variable_value) as variables
FROM hosts h
LEFT JOIN host_groups hg ON h.id = hg.host_id
LEFT JOIN groups g ON hg.group_id = g.id
LEFT JOIN host_variables hv ON h.id = hv.host_id
WHERE h.enabled = 1
GROUP BY h.id
"""
with self.get_db_connection() as conn:
cursor = conn.execute(query)
rows = cursor.fetchall()
hosts = []
for row in rows:
host_data = {
'name': row['hostname'],
'ip': row['ip_address'],
'user': row['ansible_user'] or 'root',
'port': row['ansible_port'] or 22,
'groups': row['groups'].split(',') if row['groups'] else [],
'variables': {}
}
# Parse variables
if row['variables']:
for var_pair in row['variables'].split(','):
if '=' in var_pair:
key, value = var_pair.split('=', 1)
host_data['variables'][key] = value
# Add implicit groups
if row['environment']:
host_data['groups'].append(f"env_{row['environment']}")
if row['role']:
host_data['groups'].append(f"role_{row['role']}")
if row['os_type']:
host_data['groups'].append(f"os_{row['os_type']}")
hosts.append(host_data)
return hosts
def list_inventory(self) -> Dict[str, Any]:
"""Generate inventory from database"""
try:
hosts_data = self.fetch_hosts_from_db()
for host_data in hosts_data:
hostname = host_data['name']
# Add to groups
for group in host_data['groups']:
if group: # Skip empty groups
self.add_host_to_group(hostname, group)
# Set host variables
self.inventory['_meta']['hostvars'][hostname] = {
'ansible_host': host_data['ip'],
'ansible_user': host_data['user'],
'ansible_port': host_data['port'],
**host_data['variables']
}
self.logger.info(f"Generated inventory for {len(hosts_data)} hosts")
return self.inventory
except Exception as e:
self.logger.error(f"Failed to generate inventory: {e}")
return self.inventory
def add_host_to_group(self, hostname: str, groupname: str):
"""Add host to group"""
if groupname not in self.inventory:
self.inventory[groupname] = {
'hosts': [],
'vars': {}
}
if hostname not in self.inventory[groupname]['hosts']:
self.inventory[groupname]['hosts'].append(hostname)
def get_host_vars(self, hostname: str) -> Dict[str, Any]:
"""Get variables for specific host"""
# Generate full inventory first
self.list_inventory()
return self.inventory['_meta']['hostvars'].get(hostname, {})
def main():
try:
inventory = DatabaseInventory()
if inventory.args.list:
result = inventory.list_inventory()
elif inventory.args.host:
result = inventory.get_host_vars(inventory.args.host)
else:
result = {}
print(json.dumps(result, indent=2))
except Exception as e:
logging.error(f"Script failed: {e}")
sys.exit(1)
if __name__ == '__main__':
main()
API-based Inventory Script
#!/usr/bin/env python3
"""
API-based Dynamic Inventory Script
"""
import json
import argparse
import requests
import time
import os
from typing import Dict, List, Any
from urllib.parse import urljoin
class APIInventory:
def __init__(self):
self.api_base_url = os.environ.get('INVENTORY_API_URL', 'https://api.example.com')
self.api_token = os.environ.get('INVENTORY_API_TOKEN')
self.cache_file = '/tmp/ansible_inventory_cache.json'
self.cache_timeout = int(os.environ.get('CACHE_TIMEOUT', '300'))
self.inventory: Dict[str, Any] = {
'_meta': {
'hostvars': {}
}
}
self.read_cli_args()
def read_cli_args(self):
parser = argparse.ArgumentParser()
parser.add_argument('--list', action='store_true')
parser.add_argument('--host', action='store')
parser.add_argument('--refresh-cache', action='store_true',
help='Force cache refresh')
self.args = parser.parse_args()
def get_api_headers(self) -> Dict[str, str]:
"""Get API request headers"""
headers = {
'Content-Type': 'application/json',
'User-Agent': 'Ansible-Dynamic-Inventory/1.0'
}
if self.api_token:
headers['Authorization'] = f'Bearer {self.api_token}'
return headers
def fetch_from_api(self, endpoint: str) -> Dict[str, Any]:
"""Fetch data from API"""
url = urljoin(self.api_base_url, endpoint)
headers = self.get_api_headers()
try:
response = requests.get(url, headers=headers, timeout=30)
response.raise_for_status()
return response.json()
except requests.RequestException as e:
raise Exception(f"API request failed: {e}")
def is_cache_valid(self) -> bool:
"""Check if cache is still valid"""
if not os.path.exists(self.cache_file):
return False
cache_age = time.time() - os.path.getmtime(self.cache_file)
return cache_age < self.cache_timeout
def load_from_cache(self) -> Dict[str, Any]:
"""Load inventory from cache"""
try:
with open(self.cache_file, 'r') as f:
return json.load(f)
except (IOError, json.JSONDecodeError):
return {}
def save_to_cache(self, data: Dict[str, Any]):
"""Save inventory to cache"""
try:
with open(self.cache_file, 'w') as f:
json.dump(data, f, indent=2)
except IOError:
# Cache save failure shouldn't break inventory
pass
def fetch_hosts_from_api(self) -> List[Dict[str, Any]]:
"""Fetch hosts from API with caching"""
# Check cache first
if not self.args.refresh_cache and self.is_cache_valid():
cached_data = self.load_from_cache()
if cached_data:
return cached_data.get('hosts', [])
# Fetch from API
try:
# Get hosts
hosts_response = self.fetch_from_api('/api/v1/hosts')
hosts = hosts_response.get('hosts', [])
# Get groups
groups_response = self.fetch_from_api('/api/v1/groups')
groups = {g['id']: g for g in groups_response.get('groups', [])}
# Enrich hosts with group information
for host in hosts:
host_groups = []
for group_id in host.get('group_ids', []):
if group_id in groups:
host_groups.append(groups[group_id]['name'])
host['groups'] = host_groups
# Cache the results
cache_data = {
'timestamp': time.time(),
'hosts': hosts
}
self.save_to_cache(cache_data)
return hosts
except Exception as e:
# Fallback to cache if API fails
cached_data = self.load_from_cache()
if cached_data:
return cached_data.get('hosts', [])
raise e
def list_inventory(self) -> Dict[str, Any]:
"""Generate inventory from API"""
hosts_data = self.fetch_hosts_from_api()
for host_data in hosts_data:
hostname = host_data['hostname']
# Add to groups
for group in host_data.get('groups', []):
self.add_host_to_group(hostname, group)
# Add automatic groups based on attributes
if host_data.get('environment'):
self.add_host_to_group(hostname, f"env_{host_data['environment']}")
if host_data.get('role'):
self.add_host_to_group(hostname, f"role_{host_data['role']}")
if host_data.get('datacenter'):
self.add_host_to_group(hostname, f"dc_{host_data['datacenter']}")
# Set host variables
host_vars = {
'ansible_host': host_data.get('ip_address'),
'ansible_user': host_data.get('ssh_user', 'root'),
'ansible_port': host_data.get('ssh_port', 22),
}
# Add custom variables
if 'variables' in host_data:
host_vars.update(host_data['variables'])
self.inventory['_meta']['hostvars'][hostname] = host_vars
return self.inventory
def add_host_to_group(self, hostname: str, groupname: str):
"""Add host to group"""
if groupname not in self.inventory:
self.inventory[groupname] = {
'hosts': [],
'vars': {}
}
if hostname not in self.inventory[groupname]['hosts']:
self.inventory[groupname]['hosts'].append(hostname)
def get_host_vars(self, hostname: str) -> Dict[str, Any]:
"""Get variables for specific host"""
# Generate full inventory to get host vars
self.list_inventory()
return self.inventory['_meta']['hostvars'].get(hostname, {})
def main():
try:
inventory = APIInventory()
if inventory.args.list:
result = inventory.list_inventory()
elif inventory.args.host:
result = inventory.get_host_vars(inventory.args.host)
else:
result = {}
print(json.dumps(result, indent=2))
except Exception as e:
# Return empty inventory on error to avoid breaking Ansible
print(json.dumps({}))
exit(1)
if __name__ == '__main__':
main()
10. Best Practices and Troubleshooting
Performance Optimization
graph TB
A[Performance Optimization] --> B[Caching Strategies]
A --> C[Filtering Techniques]
A --> D[Parallel Processing]
A --> E[API Rate Limiting]
B --> F[Memory Cache]
B --> G[File Cache]
B --> H[Redis Cache]
C --> I[Tag Filters]
C --> J[Region Filters]
C --> K[State Filters]
D --> L[Concurrent API Calls]
D --> M[Thread Pools]
E --> N[Request Throttling]
E --> O[Exponential Backoff]
Caching Best Practices
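Every file-based cache plugin boils down to the same freshness test: compare the cache file's modification time against the configured timeout. A minimal sketch of that check (paths and timeout values are illustrative):

```python
import os
import time

def cache_is_fresh(path, timeout_seconds=3600):
    # Mirrors jsonfile-style cache behaviour: the cache is valid while the
    # file's mtime falls inside the timeout window; otherwise re-query the API.
    if not os.path.exists(path):
        return False
    return (time.time() - os.path.getmtime(path)) < timeout_seconds
```

The `cache_timeout` values in the configurations below feed exactly this kind of comparison.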
# ansible.cfg
[inventory]
cache = True
cache_plugin = jsonfile
cache_timeout = 3600
cache_connection = ~/.ansible/tmp
# For Redis caching
cache_plugin = redis
cache_timeout = 3600
cache_connection = localhost:6379:0
# Memory caching (default)
cache_plugin = memory
cache_timeout = 300INIError Handling Template
#!/usr/bin/env python3
"""
Robust Dynamic Inventory with Error Handling
"""
import json
import sys
import logging
import time
from typing import Dict, Any, Optional
class RobustInventory:
def __init__(self):
self.setup_logging()
self.inventory = {'_meta': {'hostvars': {}}}
self.max_retries = 3
self.retry_delay = 2
def setup_logging(self):
"""Setup logging configuration"""
log_format = '%(asctime)s - %(name)s - %(levelname)s - %(message)s'
logging.basicConfig(
level=logging.INFO,
format=log_format,
handlers=[
logging.FileHandler('/var/log/ansible_inventory.log'),
logging.StreamHandler(sys.stderr)
]
)
self.logger = logging.getLogger(__name__)
def retry_on_failure(self, func, *args, **kwargs):
"""Retry function with exponential backoff"""
for attempt in range(self.max_retries):
try:
return func(*args, **kwargs)
except Exception as e:
if attempt == self.max_retries - 1:
self.logger.error(f"Final attempt failed: {e}")
raise
wait_time = self.retry_delay * (2 ** attempt)
self.logger.warning(f"Attempt {attempt + 1} failed: {e}. Retrying in {wait_time}s")
time.sleep(wait_time)
def safe_get_inventory(self) -> Dict[str, Any]:
"""Safely get inventory with fallback"""
try:
return self.retry_on_failure(self.get_inventory_data)
except Exception as e:
self.logger.error(f"All attempts failed: {e}")
# Return minimal inventory to prevent Ansible failure
return self.get_fallback_inventory()
def get_inventory_data(self) -> Dict[str, Any]:
"""Override this method with your inventory logic"""
raise NotImplementedError("Implement inventory logic here")
def get_fallback_inventory(self) -> Dict[str, Any]:
"""Return fallback inventory when all else fails"""
return {
'_meta': {
'hostvars': {}
},
'unreachable': {
'hosts': [],
'vars': {
'ansible_connection': 'local'
}
}
}
def validate_inventory(self, inventory: Dict[str, Any]) -> bool:
"""Validate inventory structure"""
required_keys = ['_meta']
for key in required_keys:
if key not in inventory:
self.logger.error(f"Missing required key: {key}")
return False
if 'hostvars' not in inventory['_meta']:
self.logger.error("Missing hostvars in _meta")
return False
return True

Debugging Configuration
# debug_inventory.yml
plugin: amazon.aws.aws_ec2
# Note: some keys below (e.g. `debug`, `boto_config`) are illustrative;
# check the documented options for your amazon.aws collection version
debug: true
# Retry settings for the underlying boto3 client
aws_config:
  retries:
    max_attempts: 3
# Relax boto3 parameter validation while debugging
boto_config:
  parameter_validation: false
# Cache for debugging
cache: true
cache_plugin: jsonfile
cache_timeout: 3600
cache_connection: /tmp/debug_cache
# Simple filters for testing
filters:
instance-state-name: running
# Basic grouping
keyed_groups:
- key: instance_type
prefix: type
compose:
ansible_host: public_ip_address | default(private_ip_address)
debug_info: |
"Instance: " + instance_id +
" Type: " + instance_type +
" State: " + state.nameYAMLCommon Issues and Solutions
| Issue | Symptoms | Solution |
|---|---|---|
| API Rate Limits | HTTP 429 errors | Implement exponential backoff |
| Timeout Errors | Inventory sync failures | Increase timeout values |
| Authentication | 401/403 errors | Verify credentials and permissions |
| Large Inventories | Slow performance | Use filtering and caching |
| Network Issues | Connection errors | Add retry logic |
| Invalid JSON | Parse errors | Validate JSON output |
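The "Invalid JSON" row in the table above is the easiest to guard against programmatically: parse a script's stdout and check for the minimum structure Ansible expects (a stdlib-only sketch; `check_inventory_output` is a hypothetical helper, not an Ansible API):

```python
import json

def check_inventory_output(raw):
    """Validate the JSON a dynamic inventory script printed to stdout."""
    try:
        data = json.loads(raw)
    except json.JSONDecodeError as e:
        return False, f"invalid JSON: {e}"
    if not isinstance(data, dict):
        return False, "top level must be an object"
    # Ansible requires _meta.hostvars for efficient single-pass parsing
    hostvars = data.get("_meta", {}).get("hostvars")
    if not isinstance(hostvars, dict):
        return False, "missing _meta.hostvars"
    return True, "ok"
```

Run it against `./my_inventory.py --list` output in CI before the script ever reaches a playbook.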
Testing Dynamic Inventory
# Test inventory output
ansible-inventory -i inventory.yml --list
# Test specific host
ansible-inventory -i inventory.yml --host web01
# Graph view
ansible-inventory -i inventory.yml --graph
# Debug mode
ansible-inventory -i inventory.yml --list -vvv
# Test with specific group
ansible -i inventory.yml webservers -m ping
# Validate JSON structure
ansible-inventory -i inventory.yml --list | jq .
# Performance testing
time ansible-inventory -i inventory.yml --list > /dev/null

Monitoring and Alerting
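The playbook below drives monitoring through AWX; the core health check, comparing live host counts against an expected baseline, can also be expressed in a few lines of Python (the 80% threshold mirrors the playbook and is a policy choice, not a fixed rule):

```python
def inventory_health(inventory, expected_hosts, threshold=0.8):
    """Return (host_count, healthy) for a parsed `--list` inventory dict."""
    # every managed host appears once under _meta.hostvars
    host_count = len(inventory.get("_meta", {}).get("hostvars", {}))
    return host_count, host_count >= expected_hosts * threshold
```

Feed it `json.loads()` of `ansible-inventory --list` output and alert when `healthy` is false.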
# monitoring_playbook.yml
---
- name: Monitor Dynamic Inventory Health
hosts: localhost
vars:
  # AWX IDs below are placeholders; substitute your own inventory
  # source and inventory IDs
  inventory_sources:
    - name: AWS Production
      source_id: 1
      inventory_id: 1
      expected_hosts: 50
    - name: GCP Staging
      source_id: 2
      inventory_id: 2
      expected_hosts: 10
tasks:
  - name: Trigger inventory source sync
    uri:
      url: "{{ awx_url }}/api/v2/inventory_sources/{{ item.source_id }}/update/"
      method: POST
      headers:
        Authorization: "Bearer {{ awx_token }}"
      status_code: 202
    loop: "{{ inventory_sources }}"
  - name: Fetch current host counts
    uri:
      url: "{{ awx_url }}/api/v2/inventories/{{ item.inventory_id }}/hosts/"
      headers:
        Authorization: "Bearer {{ awx_token }}"
    loop: "{{ inventory_sources }}"
    register: host_queries
  - name: Report low host counts
    debug:
      msg: "{{ item.item.name }} has {{ item.json.count }} hosts (expected: {{ item.item.expected_hosts }})"
    when: item.json.count < item.item.expected_hosts * 0.8
    loop: "{{ host_queries.results }}"
  - name: Send alert if host count is low
    mail:
      to: ops@example.com
      subject: "Low host count in {{ item.item.name }}"
      body: "Expected {{ item.item.expected_hosts }}, got {{ item.json.count }}"
    when: item.json.count < item.item.expected_hosts * 0.8
    loop: "{{ host_queries.results }}"

11. Advanced Use Cases
Multi-Cloud Inventory Management
graph TB
subgraph "Multi-Cloud Architecture"
A[Unified Inventory] --> B[AWS Plugin]
A --> C[GCP Plugin]
A --> D[Azure Plugin]
A --> E[On-Premise Plugin]
B --> F[AWS Resources]
C --> G[GCP Resources]
D --> H[Azure Resources]
E --> I[Datacenter Resources]
end
subgraph "Inventory Composition"
J[Group Merging]
K[Variable Precedence]
L[Cross-Cloud Tagging]
M[Unified Naming]
end
A --> J
A --> K
A --> L
A --> M

Disaster Recovery Automation
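The playbook below sizes the DR region at 80% of primary capacity. The arithmetic reduces to one comparison, shown here in plain Python (the 0.8 ratio is this example's policy choice, not a fixed convention):

```python
def dr_scaling_needed(primary_hosts, dr_hosts, ratio=0.8):
    """Return (needs_scaling, required) for a DR capacity check."""
    required = int(primary_hosts * ratio)  # DR target derived from primary
    return dr_hosts < required, required
```

With 50 primary hosts and 30 DR hosts, the required capacity is 40, so scaling is triggered.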
# dr_inventory_sync.yml
---
- name: Disaster Recovery Inventory Sync
hosts: localhost
vars:
primary_region: us-east-1
dr_region: us-west-2
tasks:
- name: Get primary region inventory
uri:
url: "{{ awx_url }}/api/v2/inventories/{{ primary_inventory_id }}/hosts/"
headers:
Authorization: "Bearer {{ awx_token }}"
register: primary_hosts
- name: Check DR region capacity
uri:
url: "{{ awx_url }}/api/v2/inventories/{{ dr_inventory_id }}/hosts/"
headers:
Authorization: "Bearer {{ awx_token }}"
register: dr_hosts
- name: Calculate required DR resources
set_fact:
required_instances: "{{ (primary_hosts.json.count * 0.8) | int }}"
available_instances: "{{ dr_hosts.json.count }}"
- name: Trigger DR scaling if needed
uri:
url: "{{ awx_url }}/api/v2/job_templates/{{ dr_scaling_template }}/launch/"
method: POST
headers:
Authorization: "Bearer {{ awx_token }}"
body_format: json
body:
extra_vars:
target_capacity: "{{ required_instances }}"
when: (available_instances | int) < (required_instances | int)

GitOps Integration
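The inventory script below reads `infrastructure/inventory.yml` from a Git repository. A minimal example of the structure it expects, shown here as the Python dict that `yaml.safe_load()` would return (all values are placeholders):

```python
# Equivalent of a minimal infrastructure/inventory.yml after yaml.safe_load()
config = {
    "version": "1.4.0",  # surfaced on every host as config_version
    "environments": {
        "prod": {
            "services": {
                "web": {
                    "instances": [
                        {
                            "hostname": "web01",
                            "ip_address": "10.0.1.10",
                            "user": "deploy",
                            "variables": {"role": "frontend"},
                        },
                    ],
                },
            },
        },
    },
}
```

From this, the script derives the groups `env_prod` and `prod_web`, and sets `ansible_host` to `10.0.1.10` for `web01`.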
#!/usr/bin/env python3
"""
GitOps Dynamic Inventory
Integrates with Git repositories for infrastructure as code
"""
import json
import git
import yaml
import os
from typing import Dict, List, Any
class GitOpsInventory:
def __init__(self):
self.inventory = {'_meta': {'hostvars': {}}}
self.repo_url = os.environ.get('GITOPS_REPO_URL')
self.repo_path = '/tmp/gitops-inventory'
self.inventory_file = 'infrastructure/inventory.yml'
def clone_or_update_repo(self):
"""Clone or update the GitOps repository"""
if os.path.exists(self.repo_path):
repo = git.Repo(self.repo_path)
repo.remotes.origin.pull()
else:
git.Repo.clone_from(self.repo_url, self.repo_path)
def load_infrastructure_config(self) -> Dict[str, Any]:
"""Load infrastructure configuration from Git"""
config_path = os.path.join(self.repo_path, self.inventory_file)
with open(config_path, 'r') as f:
return yaml.safe_load(f)
def build_inventory_from_config(self):
"""Build inventory from GitOps configuration"""
self.clone_or_update_repo()
config = self.load_infrastructure_config()
# Process environments
for env_name, env_config in config.get('environments', {}).items():
env_group = f"env_{env_name}"
# Process services in environment
for service_name, service_config in env_config.get('services', {}).items():
service_group = f"{env_name}_{service_name}"
# Process instances
for instance in service_config.get('instances', []):
hostname = instance['hostname']
# Add to groups
self.add_host_to_group(hostname, env_group)
self.add_host_to_group(hostname, service_group)
# Set host variables
self.inventory['_meta']['hostvars'][hostname] = {
'ansible_host': instance.get('ip_address'),
'ansible_user': instance.get('user', 'root'),
'environment': env_name,
'service': service_name,
'config_version': config.get('version', 'unknown'),
**instance.get('variables', {})
}
def add_host_to_group(self, hostname: str, groupname: str):
"""Add host to group"""
if groupname not in self.inventory:
self.inventory[groupname] = {'hosts': [], 'vars': {}}
if hostname not in self.inventory[groupname]['hosts']:
self.inventory[groupname]['hosts'].append(hostname)
def main():
inventory = GitOpsInventory()
inventory.build_inventory_from_config()
print(json.dumps(inventory.inventory, indent=2))
if __name__ == '__main__':
main()

12. Security Considerations
Security Architecture
graph TB
subgraph "Security Layers"
A[Authentication] --> B[Authorization]
B --> C[Encryption]
C --> D[Audit Logging]
D --> E[Network Security]
end
subgraph "Credential Management"
F[HashiCorp Vault]
G[AWS Secrets Manager]
H[Azure Key Vault]
I[GCP Secret Manager]
end
subgraph "Network Security"
J[VPN/Private Networks]
K[Firewall Rules]
L[API Gateway]
M[Rate Limiting]
end
A --> F
A --> G
A --> H
A --> I
E --> J
E --> K
E --> L
E --> M

Secure Credential Management
# vault_integration.yml
---
- name: Secure Dynamic Inventory with Vault
hosts: localhost
vars:
# Read Vault coordinates from the environment; defining
# vault_token as "{{ vault_token }}" would recurse
vault_url: "{{ lookup('env', 'VAULT_ADDR') }}"
vault_token: "{{ lookup('env', 'VAULT_TOKEN') }}"
tasks:
- name: Retrieve AWS credentials from Vault
hashivault_read:
secret: aws/credentials/ansible
key: access_key
register: aws_access_key
- name: Retrieve AWS secret from Vault
hashivault_read:
secret: aws/credentials/ansible
key: secret_key
register: aws_secret_key
- name: Create secure inventory configuration
template:
src: aws_inventory.yml.j2
dest: /etc/ansible/aws_inventory.yml
mode: '0600'
owner: ansible
vars:
aws_access_key_id: "{{ aws_access_key.value }}"
aws_secret_access_key: "{{ aws_secret_key.value }}"

IAM Least Privilege Policies
{
"Version": "2012-10-17",
"Statement": [
{
"Sid": "AnsibleInventoryMinimal",
"Effect": "Allow",
"Action": [
"ec2:DescribeInstances",
"ec2:DescribeInstanceStatus",
"ec2:DescribeRegions",
"ec2:DescribeAvailabilityZones"
],
"Resource": "*",
"Condition": {
"StringEquals": {
"aws:RequestedRegion": ["us-east-1", "us-west-2"]
}
}
},
{
"Sid": "DenyUnauthorizedActions",
"Effect": "Deny",
"NotAction": [
"ec2:Describe*"
],
"Resource": "*"
}
]
}

Encryption in Transit and at Rest
# encrypted_inventory.yml
plugin: amazon.aws.aws_ec2
# AWS API endpoints use TLS; verify certificates on every call
validate_certs: true
# Encrypted cache (illustrative: `encrypted_jsonfile` is not a stock
# Ansible cache plugin; stock options include jsonfile, memory, redis)
cache: true
cache_plugin: encrypted_jsonfile
cache_timeout: 3600
cache_connection: ~/.ansible/encrypted_cache
cache_encryption_key: "{{ vault_cache_key }}"
# Secure credential passing
aws_access_key_id: "{{ vault_aws_access_key }}"
aws_secret_access_key: "{{ vault_aws_secret_key }}"
# Audit logging (illustrative; not a built-in aws_ec2 option)
audit_log: true
audit_log_file: /var/log/ansible/inventory_audit.log

Network Security Configuration
# network_security.yml
---
- name: Configure Network Security for Dynamic Inventory
hosts: ansible_controllers
tasks:
- name: Configure firewall rules
firewalld:
port: "{{ item }}"
permanent: yes
state: enabled
immediate: yes
loop:
- 443/tcp # HTTPS for cloud APIs
- 22/tcp # SSH for remote hosts
- name: Restrict API access to specific networks
lineinfile:
path: /etc/ansible/ansible.cfg
regexp: '^inventory_ignore_extensions'
line: 'inventory_ignore_extensions = .pyc, .pyo, .orig, .ini, .cfg, .retry'
- name: Configure proxy for cloud API access
blockinfile:
path: /etc/environment
block: |
https_proxy=https://proxy.company.com:8080
http_proxy=http://proxy.company.com:8080
no_proxy=localhost,127.0.0.1,internal.company.com

Audit and Compliance
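The audit script below attaches a checksum to every log entry so tampering is detectable. The key detail is `sort_keys=True`: serializing with sorted keys makes the hash independent of dict insertion order, so identical details always yield the identical checksum:

```python
import hashlib
import json

def audit_checksum(details):
    """Stable SHA-256 over a dict, insensitive to key order."""
    return hashlib.sha256(
        json.dumps(details, sort_keys=True).encode()
    ).hexdigest()
```

Verifiers can recompute the checksum from the logged `details` field and compare it against the stored one.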
#!/usr/bin/env python3
"""
Audit-enabled Dynamic Inventory
Logs all inventory operations for compliance
"""
import json
import logging
import hashlib
import os
from datetime import datetime
from typing import Dict, Any
class AuditInventory:
def __init__(self):
self.setup_audit_logging()
self.inventory = {'_meta': {'hostvars': {}}}
def setup_audit_logging(self):
"""Setup audit logging"""
audit_logger = logging.getLogger('inventory_audit')
audit_logger.setLevel(logging.INFO)
handler = logging.FileHandler('/var/log/ansible/inventory_audit.log')
formatter = logging.Formatter(
'%(asctime)s - %(levelname)s - %(message)s'
)
handler.setFormatter(formatter)
audit_logger.addHandler(handler)
self.audit_logger = audit_logger
def audit_log(self, action: str, details: Dict[str, Any]):
"""Log audit information"""
audit_entry = {
'timestamp': datetime.utcnow().isoformat(),
'action': action,
'user': os.environ.get('USER', 'unknown'),
'details': details,
'checksum': self.calculate_checksum(details)
}
self.audit_logger.info(json.dumps(audit_entry))
def calculate_checksum(self, data: Dict[str, Any]) -> str:
"""Calculate checksum for audit trail"""
return hashlib.sha256(
json.dumps(data, sort_keys=True).encode()
).hexdigest()
def fetch_inventory_with_audit(self):
"""Fetch inventory with full audit trail"""
try:
self.audit_log('inventory_fetch_start', {
'source': 'dynamic_inventory',
'method': 'api_call'
})
# Your inventory fetching logic here
inventory_data = self.get_inventory_data()
self.audit_log('inventory_fetch_success', {
'host_count': len(inventory_data.get('_meta', {}).get('hostvars', {})),
'group_count': len([k for k in inventory_data.keys() if k != '_meta'])
})
return inventory_data
except Exception as e:
self.audit_log('inventory_fetch_error', {
'error': str(e),
'error_type': type(e).__name__
})
raise
def main():
inventory = AuditInventory()
result = inventory.fetch_inventory_with_audit()
print(json.dumps(result, indent=2))
if __name__ == '__main__':
main()

Zero Trust Implementation
# zero_trust_inventory.yml
plugin: amazon.aws.aws_ec2
# Always verify TLS certificates on API calls
validate_certs: true
# Minimal permissions
filters:
# Only instances tagged as managed by Ansible
tag:ManagedBy: ansible
# Only running instances
instance-state-name: running
# Host verification
compose:
# Verify host certificates
ansible_ssh_common_args: '-o StrictHostKeyChecking=yes -o UserKnownHostsFile=/etc/ansible/known_hosts'
# Use certificate-based authentication; compose values are Jinja
# expressions, so build the path by concatenation rather than "{{ }}"
ansible_ssh_private_key_file: "'/etc/ansible/keys/' ~ tags.Environment ~ '_key.pem'"
# Force encrypted connections
ansible_ssh_extra_args: '-o PasswordAuthentication=no -o ChallengeResponseAuthentication=no'
# Continuous verification
keyed_groups:
- key: tags.SecurityLevel
prefix: security
- key: tags.ComplianceStatus
prefix: compliance
# Exclude non-compliant hosts
exclude_filters:
- tag:ComplianceStatus: non-compliant
- tag:SecurityLevel: untrusted

Conclusion
This comprehensive guide has covered Ansible Dynamic Inventory from beginner to expert level, including:
- Fundamentals: Understanding dynamic inventory concepts and architecture
- Cloud Providers: Deep dive into AWS, GCP, and Azure implementations
- AWX Integration: Enterprise-level inventory management
- Custom Solutions: Building your own inventory scripts and plugins
- Best Practices: Performance optimization, troubleshooting, and monitoring
- Advanced Use Cases: Multi-cloud, service discovery, and GitOps integration
- Security: Comprehensive security considerations and implementations
Key Takeaways
- Start Simple: Begin with basic cloud provider plugins before moving to custom solutions
- Performance Matters: Implement caching and filtering for large environments
- Security First: Always follow least privilege principles and audit your inventory access
- Monitor and Alert: Set up monitoring for inventory health and performance
- Document Everything: Maintain clear documentation for your inventory configurations
Next Steps
- Practice: Set up a test environment and try different inventory configurations
- Automation: Integrate dynamic inventory into your CI/CD pipelines
- Monitoring: Implement comprehensive monitoring and alerting
- Security: Regular security audits and credential rotation
- Optimization: Continuously optimize performance and reduce API costs
Resources for Further Learning
- Ansible Official Documentation
- AWS Ansible Collection
- GCP Ansible Collection
- Azure Ansible Collection
- AWX Project
Community and Support
- Ansible Community: Join the Ansible community for support and discussions
- GitHub: Contribute to open-source inventory plugins
- Stack Overflow: Get help with specific technical issues
- Reddit: r/ansible for community discussions and tips
This guide provides a solid foundation for mastering Ansible Dynamic Inventory across all major cloud platforms and use cases. Remember to always test configurations in non-production environments first and follow your organization’s security and compliance requirements.