Play Recap and Return Values
Last updated: June 29, 2025
My Experience with Ansible Recap and Return Values
In my early days with Ansible, I often found myself puzzled by the colorful output at the end of playbook runs and wondered how to actually make use of the data that Ansible modules were returning. I'd see green "ok" and yellow "changed" statuses, but didn't fully grasp what they meant or how to leverage them for more advanced automation.
Once I understood the Play Recap and Return values, it was like having a new superpower—suddenly I could build much more intelligent playbooks that could make decisions based on the results of previous tasks. This knowledge transformed how I automate both Linux and Windows environments, allowing me to create more robust, self-healing automation workflows.
In this post, I'll share my journey with understanding and using Ansible Play Recap and Return values, with practical examples that have helped me in real-world automation scenarios.
Understanding the Ansible Play Recap
When you run an Ansible playbook, the final output includes a "PLAY RECAP" section that summarizes the execution results for each host. This recap is your at-a-glance view of what happened during playbook execution. Let's look at a typical Play Recap output:
PLAY RECAP *************************************************************************
web-server1 : ok=5 changed=2 unreachable=0 failed=0 skipped=1 rescued=0 ignored=0
db-server1 : ok=4 changed=1 unreachable=0 failed=0 skipped=2 rescued=0 ignored=0
windows-server1 : ok=6 changed=3 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
In my early days, this was just a colorful summary. But understanding each of these states has helped me troubleshoot and build more resilient automation.
The Different Status Types in Play Recap
Ok Status
The ok count indicates tasks that ran successfully and didn't need to make any changes. Think of it as Ansible saying, "I checked this, and it's already in the desired state."
- name: Ensure Apache is installed
  ansible.builtin.yum:
    name: httpd
    state: present
  # If Apache is already installed, this will report as "ok"
When I run my weekly compliance checks, seeing a high "ok" count is reassuring—it means my systems are already in the state I want them to be.
Changed Status
The changed count shows tasks that successfully made modifications to the system. This is one of the most important indicators in the Play Recap—it tells you what Ansible actually did.
- name: Ensure configuration file has correct content
  ansible.builtin.template:
    src: apache.conf.j2
    dest: /etc/httpd/conf/httpd.conf
  # If the file needed to be updated, this will report as "changed"
I find the changed status particularly useful for audit purposes. When I need to document what changes were made during a maintenance window, I pay close attention to this metric.
Unreachable Status
The unreachable count indicates hosts that Ansible couldn't connect to, typically due to network issues, SSH/WinRM problems, or DNS errors.
- name: Configure Windows firewall
  community.windows.win_firewall_rule:
    name: HTTP
    localport: 80
    action: allow
    direction: in
    protocol: tcp
    state: present
  # If WinRM isn't working, this host will be marked as "unreachable"
I've learned that unreachable hosts deserve immediate attention—they often indicate infrastructure problems that need addressing before any automation can succeed.
Failed Status
The failed count shows tasks that attempted to run but encountered errors. These require investigation.
- name: Start a service that doesn't exist
  ansible.builtin.service:
    name: nonexistent_service
    state: started
  # This will report as "failed" since the service doesn't exist
In my production playbooks, I build in proper error handling for potential failure points. Failed tasks are often opportunities to improve your automation's robustness.
Skipped Status
The skipped count indicates tasks that were skipped due to conditional execution.
- name: Install IIS on Windows
  ansible.windows.win_feature:
    name: Web-Server
    state: present
  when: ansible_os_family == "Windows"
  # This will be skipped on Linux hosts
In cross-platform playbooks, I rely heavily on skipped tasks. They allow me to write a single playbook that works on both Linux and Windows systems.
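To make that concrete, here's a minimal cross-platform sketch of the pattern, reusing the httpd and Web-Server examples from above; each task simply skips on the platforms where it doesn't apply:
- name: Install Apache on RedHat-family Linux hosts
  ansible.builtin.yum:
    name: httpd
    state: present
  when: ansible_os_family == "RedHat"

- name: Install IIS on Windows hosts
  ansible.windows.win_feature:
    name: Web-Server
    state: present
  when: ansible_os_family == "Windows"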
Rescued Status
The rescued count shows tasks that failed but were caught and handled by a rescue block.
- name: Try some operations with rescue handling
  block:
    - name: This might fail
      ansible.builtin.command: /bin/false
  rescue:
    - name: Recovery task
      ansible.builtin.debug:
        msg: "We caught the failure and handled it!"
  # If the command fails but rescue succeeds, this counts as "rescued"
I find rescue blocks invaluable for creating self-healing automation. For example, my application deployment playbooks can automatically restore from backups if a deployment fails.
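A simplified sketch of that pattern looks like this; the deploy and restore scripts and the backup path are hypothetical stand-ins for whatever your deployment actually uses:
- name: Deploy the application, restoring the last backup if it fails
  block:
    - name: Run the deployment (hypothetical script)
      ansible.builtin.command: /opt/myapp/bin/deploy.sh
  rescue:
    - name: Restore the previous release from backup (hypothetical path)
      ansible.builtin.command: /opt/myapp/bin/restore.sh /backups/myapp-last-good.tar.gz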
Ignored Status
The ignored count represents tasks that failed but were explicitly ignored using the ignore_errors: true directive.
- name: Try to check a service that might not exist yet
  ansible.builtin.service:
    name: custom_app
    state: started
  ignore_errors: true
  # If this fails, the error is ignored and the playbook continues
I use ignore_errors sparingly, but it's helpful for non-critical checks or when a failure doesn't necessarily mean the overall automation should stop.
Working with Return Values
Beyond the Play Recap, each Ansible module returns structured data that you can capture and use in subsequent tasks. This is where Ansible's power truly shines.
Capturing Return Values with Register
The register keyword allows you to store the complete output from a task in a variable:
- name: Get server uptime
  ansible.builtin.command: uptime
  register: system_uptime

- name: Show the captured output
  ansible.builtin.debug:
    msg: "System uptime: {{ system_uptime.stdout }}"
Common Return Values
Most Ansible modules return some common values. Here are the ones I use frequently:
Changed Status
- name: Create a file
  ansible.builtin.file:
    path: /tmp/test_file
    state: touch
  register: file_result

- name: Take action if file was created or modified
  ansible.builtin.debug:
    msg: "The file was just created or modified"
  when: file_result.changed
This has been incredibly useful for my configuration management workflows. I can trigger service restarts only when configuration files actually change, avoiding unnecessary disruptions.
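As a hedged sketch of that pattern (reusing the Apache template from earlier), I register the template result and restart only when the file actually changed; in larger playbooks a handler with notify achieves the same thing more cleanly:
- name: Deploy Apache configuration
  ansible.builtin.template:
    src: apache.conf.j2
    dest: /etc/httpd/conf/httpd.conf
  register: apache_conf

- name: Restart Apache only if the configuration changed
  ansible.builtin.service:
    name: httpd
    state: restarted
  when: apache_conf is changed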
Return Code (rc)
Command and shell modules return an exit code that indicates success (0) or failure (non-zero):
- name: Check if a process is running
  ansible.builtin.shell: pgrep nginx
  register: process_check
  ignore_errors: true

- name: Start nginx if not running
  ansible.builtin.service:
    name: nginx
    state: started
  when: process_check.rc != 0
I've built entire health check systems using return codes. My monitoring playbooks can query application status and take remedial action based on these values.
stdout and stderr
Command outputs are captured in stdout and stderr:
- name: Get disk space
  ansible.builtin.shell: df -h
  register: disk_space

- name: Parse and evaluate disk usage
  ansible.builtin.debug:
    msg: "Available disk space: {{ disk_space.stdout_lines }}"
I often use this technique for parsing command outputs to make decisions. For example, I might extract version numbers or configuration values from application outputs.
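For instance, a version check might look like this sketch; nginx really does print its version to stderr rather than stdout, and what you do with the extracted number is up to you:
- name: Get the installed nginx version
  ansible.builtin.command: nginx -v
  register: nginx_version
  changed_when: false

- name: Show the extracted version number
  ansible.builtin.debug:
    msg: "nginx version is {{ nginx_version.stderr | regex_search('[0-9]+\\.[0-9]+\\.[0-9]+') }}"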
Powerful Windows Example: Working with PowerShell Return Values
When working with Windows hosts, I leverage PowerShell's ability to return structured objects:
- name: Get Windows service status
  ansible.windows.win_shell: |
    # Convert the Status enum to its name so the JSON contains 'Running' rather than a number
    Get-Service -Name 'MSSQLSERVER' |
      Select-Object -Property Name, @{ Name = 'Status'; Expression = { $_.Status.ToString() } } |
      ConvertTo-Json
  register: sql_service_info

- name: Ensure SQL Server is running if it's installed
  ansible.windows.win_service:
    name: MSSQLSERVER
    state: started
  when: sql_service_info.rc == 0 and (sql_service_info.stdout | from_json).Status != 'Running'
This approach lets me work with complex Windows data and build sophisticated automation flows for Windows environments.
Linux Example: Advanced Usage with Conditionals
In Linux environments, I often use return values to make dynamic decisions:
- name: Check available disk space
  ansible.builtin.shell: df -h / | grep -v Filesystem | awk '{print $5}' | sed 's/%//'
  register: disk_usage
  changed_when: false

- name: Run disk cleanup if usage above 80%
  ansible.builtin.shell: find /var/log -name "*.log" -mtime +7 -delete
  when: disk_usage.stdout | int > 80
This kind of automated maintenance has saved me countless hours of manual work and prevented storage-related outages.
Understanding the Flow of Return Values
To better visualize how return values are processed in an Ansible playbook, I've created this sequence diagram that shows the flow of information:
This diagram helps me understand the execution flow and makes it easier to design complex playbooks that react to system states.
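In playbook form, the flow boils down to this: a module runs on the target host, its JSON result travels back to the controller, register stores it per host, and later tasks read that variable. A minimal sketch, using an illustrative systemctl is-active check on sshd:
- name: Run a check on the target host
  ansible.builtin.command: systemctl is-active sshd
  register: sshd_state
  changed_when: false
  failed_when: false  # treat "inactive" as data rather than a task failure

- name: Inspect the full return structure
  ansible.builtin.debug:
    var: sshd_state

- name: React to the result in a later task
  ansible.builtin.service:
    name: sshd
    state: started
  when: sshd_state.rc != 0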
Best Practices I've Learned
Through years of working with Ansible return values, I've developed these best practices:
1. Always Handle Errors Appropriately
- name: Critical database backup
  ansible.builtin.shell: pg_dump -U postgres mydatabase > /backups/db.sql
  register: backup_result
  failed_when: backup_result.rc != 0
  ignore_errors: true  # keep going so the notification task below still runs

- name: Notify team if backup fails
  ansible.builtin.mail:
    subject: "CRITICAL: Database backup failed"
    to: "{{ alert_email }}"
    body: "The database backup failed with: {{ backup_result.stderr }}"
  when: backup_result is failed
2. Use changed_when to Control Changed Status
By default, command and shell modules always report as "changed". You can make them more accurate:
- name: Check NTP synchronization
  ansible.builtin.command: chronyc tracking
  register: chrony_result
  changed_when: false  # This is a read-only operation
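When a command genuinely can make changes, changed_when can also key off the output instead of being hard-coded; here's a hedged sketch where myapp-ctl and its "applied" message are hypothetical stand-ins for whatever your tooling actually prints:
- name: Apply pending application configuration (hypothetical CLI)
  ansible.builtin.command: myapp-ctl apply
  register: apply_result
  changed_when: "'applied' in apply_result.stdout"  # report changed only when the tool says it changed something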
3. Utilize failed_when for Custom Failure Conditions
Sometimes success isn't just about return codes:
- name: Check API health
  ansible.builtin.uri:
    url: https://api.example.com/health
    return_content: yes
  register: api_health
  failed_when: "'healthy' not in api_health.content"
4. Leverage JSON Return Values
Many modules return structured data you can navigate:
- name: Get user info
  ansible.builtin.user:
    name: webapp
    state: present
  register: user_info

- name: Show user home directory
  ansible.builtin.debug:
    msg: "User home directory is {{ user_info.home }}"
5. Use "when" Statements with Return Values for Conditional Execution
- name: Check if config exists
  ansible.builtin.stat:
    path: /etc/app/config.ini
  register: config_file

- name: Create default config if missing
  ansible.builtin.template:
    src: config.ini.j2
    dest: /etc/app/config.ini
  when: not config_file.stat.exists
Common Pitfalls to Avoid
Through my experience, I've encountered several common pitfalls:
1. Forgetting that registered variables are per-host - In a multi-host play, each host has its own copy of the registered variable.
2. Not accounting for undefined values - Always use the default filter or test with is defined when accessing potentially undefined values (see the sketch after this list).
3. Overlooking the structure of return data - Use debug to inspect the complete structure of return values if you're unsure.
4. Ignoring errors that should be handled - Use ignore_errors sparingly and prefer block/rescue for better error handling.
5. Not checking command return codes - Always verify command success with rc == 0 checks when using shell or command modules.
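As a quick sketch of the undefined-values point, the default filter and the is defined test keep templates and conditionals from blowing up; app_port and its 8080 fallback are made up, and backup_result refers back to the earlier backup example:
- name: Use a fallback when a variable may not be set
  ansible.builtin.debug:
    msg: "Application port is {{ app_port | default(8080) }}"

- name: Only read a registered result that actually exists
  ansible.builtin.debug:
    msg: "Backup exit code was {{ backup_result.rc }}"
  when: backup_result is defined and backup_result.rc == 0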
Conclusion
Understanding Ansible Play Recap and Return values has been foundational to my success with infrastructure automation. What started as merely colorful output at the end of playbook runs has become a critical tool in my automation strategy, enabling me to build intelligent, reactive workflows.
The ability to capture task results and make decisions based on them transforms Ansible from a simple configuration management tool into a powerful automation platform capable of complex, state-aware operations across both Linux and Windows environments.
I encourage you to dive deeper into return values in your own playbooks. Start by registering module results, exploring the data structure with debug tasks, and gradually building more sophisticated conditional logic. Before long, you'll be building self-healing, intelligent automation that can adapt to the state of your systems.