Getting started with Ansible is not that simple. If you’re starting from scratch, this post could help you. Even today, the documentation is incomplete and there are some versioning mix-ups. Here’s what I accomplished in a day or two.
Environment: Amazon Linux on EC2 (a CentOS-like distribution), t2.small.
Installation:
```shell
# I tried the other methods. Didn't go as smooth as this.
$ sudo yum install python-pip
$ sudo yum install python-devel
$ sudo pip install ansible
```
Paths:
I found no real guidance on this, so I just went ahead and placed everything in /etc/ansible (I work with the default ec2-user). So far it’s OK (the path, not the user). Here’s a full ‘find’ of my files:
```
./ansible.cfg                     # just one line...
./site.yml                        # nothing in here
./filebeat.pb.yml                 # first and only playbook!
./hosts                           # this is a mighty script!
./roles                           # this is how you bundle tasks/steps
./roles/filebeat                  # this role installs 'filebeat', a log drain
./roles/filebeat/defaults
./roles/filebeat/defaults/main.yml    # variables
./roles/filebeat/handlers
./roles/filebeat/handlers/main.yml    # will restart the service once in the end
                                      # instead of after any action that asks
./roles/filebeat/tasks            # this is the work
./roles/filebeat/tasks/main.yml   # main just includes the others in order
./roles/filebeat/tasks/start.yml
./roles/filebeat/tasks/configure.yml
./roles/filebeat/tasks/install.yml
./roles/filebeat/templates        # i've also put static files here
./roles/filebeat/templates/etc
./roles/filebeat/templates/etc/filebeat
./roles/filebeat/templates/etc/filebeat/COMODORSADomainValidationSecureServerCA.crt
./roles/filebeat/templates/etc/filebeat/conf.d
./roles/filebeat/templates/etc/filebeat/conf.d/audience.yml
./roles/filebeat/templates/etc/filebeat/conf.d/nodejs.yml
./roles/filebeat/templates/etc/filebeat/conf.d/spark.yml
./roles/filebeat/templates/etc/filebeat/filebeat.yml
./roles/filebeat/templates/etc/filebeat/ssl.crt
./roles/filebeat/templates/etc/yum.repos.d
./roles/filebeat/templates/etc/yum.repos.d/elastic.repo
```
Inventory
This example shows a dynamic inventory for Amazon EC2. It means that ‘/etc/ansible/hosts’ is a script that takes ‘–list’ as a parameter. In my case, I used a rudimentary Python script that simply outputs the instances divided into groups according to a combination of two EC2 tags: <environment>-<role>. Instead of placing AWS API creds in the script or on the host, I attached an IAM role to the machine. Based on this. I’m no Python expert, but here goes:
```python
#!/usr/bin/python
import argparse

import boto.ec2
import simplejson as json

PROD_VPC = 'vpc-nomoreshortnames'
REGIONS = ['us-east-1', 'us-west-2']


def get_ec2_instances_into_hash(region, hash):
    ec2_conn = boto.ec2.connect_to_region(region)
    reservations = ec2_conn.get_all_reservations()
    for res in reservations:
        for inst in res.instances:
            if inst.state == 'running':
                if inst.vpc_id == PROD_VPC:  # '==', not 'is': this is a string comparison
                    ip = inst.private_ip_address  # I use VPN here
                else:
                    ip = inst.ip_address
                env_tag = inst.tags.get('Environment', 'no-env')
                role_tag = inst.tags.get('Role', 'no-role')
                key = env_tag + '-' + role_tag
                if key not in hash:
                    hash[key] = {'hosts': [], 'vars': {}}
                hash[key]['hosts'].append(ip)
                # I have different SSH keys on each set of hosts and all are available locally
                hash[key]['vars']['ansible_ssh_private_key_file'] = '~/.ssh/' + str(inst.key_name) + '.pem'


def main():
    parser = argparse.ArgumentParser()
    parser.add_argument('--list', help='list inventory', action='store_true')
    args = parser.parse_args()
    instances = {}
    for r in REGIONS:
        get_ec2_instances_into_hash(r, instances)
    print json.dumps(instances, sort_keys=True, indent=4 * ' ')


if __name__ == '__main__':
    main()
```
The point here is the format of the JSON output. As long as you produce this format, you can use whatever script you like.
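To make that format concrete, here is a minimal sketch (in Python 3, separate from the inventory script above) of the shape Ansible expects back from a dynamic inventory’s ‘–list’ call: a JSON object mapping each group name to its hosts and optional group variables. The group name echoes the ones used in this post; the addresses and key file are made up for illustration.

```python
import json

# Hypothetical minimal inventory: one group, two hosts, one group var.
inventory = {
    'smoketest-websockets': {
        'hosts': ['10.0.1.12', '10.0.1.13'],
        'vars': {
            'ansible_ssh_private_key_file': '~/.ssh/smoketest.pem',
        },
    },
}

# This is what the script would print when called with --list.
print(json.dumps(inventory, sort_keys=True, indent=4))
```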
ansible.cfg
Just this, for smooth SSH-ing:
```ini
[defaults]
host_key_checking = False
```
But I had to add this to ~/.bashrc:
```shell
export ANSIBLE_CONFIG=/etc/ansible/ansible.cfg
```
Playbooks
I started by looking at Ansible Galaxy, and specifically at this example. It didn’t work for me at all, but the adjustments I had to make were really small, so it’s a good starting point to learn from. If I’m missing anything further down, just go back to Steven’s code.
I don’t like the explanations I found about playbooks; I think the example I eventually arrived at explains them quite well. Please refer to the ‘find’ output above to know where to place the files. This is how I run the playbook on a single group of hosts:
```shell
# From /etc/ansible I run
$ ansible-playbook filebeat.pb.yml --limit smoketest-websockets
```
The first thing that is read is the playbook file ‘filebeat.pb.yml’; this is “the” playbook. All it does is install and configure one service I use to ship (I like to say drain) logs. So here’s the playbook:
```yaml
---
- hosts: smoketest-backend
  become: yes
  become_user: root
  roles:
    - { role: filebeat, filebeat_env: smoketest, filebeat_role: backend }

- hosts: smoketest-websockets
  become: yes
  become_user: root
  roles:
    - { role: filebeat, filebeat_env: smoketest, filebeat_role: websockets }
```
In short: ‘hosts’ is the group we pull out of the hosts script’s output. If you don’t want to pull everything from one hosts script and then use just a little of it, you can create different inventory scripts, for example staging, production, tests, and use them with the parameter ‘-i staging’. The two ‘become’ statements let me connect with my default ec2-user but run the installation commands as root. As you can see, the ‘roles‘ array allows you to run more than one role per playbook, but this is a very small example. We run the filebeat role with two parameters, filebeat_env and filebeat_role. I specified all of the parameters used in this role in the defaults yaml (roles/filebeat/defaults/main.yml):
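For context, ‘-i’ also accepts a plain static inventory file instead of a script, so a hypothetical ‘staging’ file (INI format, with made-up addresses and key file) could be as simple as:

```ini
[staging-websockets]
10.0.1.12 ansible_ssh_private_key_file=~/.ssh/staging.pem
10.0.1.13 ansible_ssh_private_key_file=~/.ssh/staging.pem
```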
```yaml
---
filebeat_ssl_certificate: true
filebeat_logzio_token: secret_token_for_this_nice_service
filebeat_logstash_host: listener.logz.io:5015
filebeat_logstash_tls_certificate_authorities: /etc/pki/tls/certs/COMODORSADomainValidationSecureServerCA.crt
filebeat_env: env_not_set
filebeat_role: role_not_set
```
I can use these parameters anywhere in the role.
The run starts from ‘roles/filebeat/tasks/main.yml‘. All it does is include the other task files:
```yaml
---
- include: start.yml
- include: configure.yml
- include: install.yml
```
They will run in the order you see them.
Start:
I will probably remove this later, but since I use IP addresses without DNS, I like to print out the hostnames along the way to help me debug, and that’s all this file does, eventually. There must be a smarter way…
```yaml
---
- name: Get name
  shell: hostname
  register: hostname
  # we restart anyway
  notify: Restart Service | filebeat

- name: Print name
  local_action: command echo {{ item }}
  with_items: '{{ hostname.stdout_lines }}'
```
What it does is run the ‘hostname’ command, register the command’s output for debugging and such, and call notify, which I’ll talk about later. The second part takes the recorded output of the ‘hostname’ command and prints it out locally, on the server I’m running from. This looks bad and also has an undesired side effect of registering a “change” on every run, which is not idempotent. You’ll see it in the output. OK for debugging, bad for production.
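One smarter way, as a sketch: Ansible gathers facts by default, so each host’s hostname is already available as the `ansible_hostname` fact, and the `debug` module prints it without registering a change on every run:

```yaml
- name: Print name
  debug: var=ansible_hostname
```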
Configure:
I’m not going to explain much:
```yaml
---
- name: Upload SSL | filebeat
  template:
    src: etc/filebeat/COMODORSADomainValidationSecureServerCA.crt
    dest: /etc/pki/tls/certs/COMODORSADomainValidationSecureServerCA.crt
    owner: root
    group: root
    mode: 0600
  notify: Restart Service | filebeat
  tags:
    - configuration
    - template-configuration
    - filebeat

- name: Upload yum repo file | filebeat
  template:
    src: etc/yum.repos.d/elastic.repo
    dest: /etc/yum.repos.d/elastic.repo
    owner: root
    group: root
    mode: 0644
  tags:
    - configuration
    - yum-repo
    - filebeat

- name: Create /etc/filebeat directory
  file: path=/etc/filebeat/conf.d state=directory mode=0755

- name: Upload main config | filebeat
  template:
    src: etc/filebeat/filebeat.yml
    dest: /etc/filebeat/filebeat.yml
    owner: ec2-user
    group: ec2-user
    mode: 0664
  notify: Restart Service | filebeat
  tags:
    - configuration
    - template-configuration
    - filebeat

- name: Upload nodejs logs config | filebeat
  template:
    src: etc/filebeat/conf.d/nodejs.yml
    dest: /etc/filebeat/conf.d/nodejs.yml
    owner: ec2-user
    group: ec2-user
    mode: 0664
  notify: Restart Service | filebeat
  when: ( filebeat_role != 'audience' ) and ( filebeat_role != 'spark' )
  tags:
    - configuration
    - template-configuration
    - filebeat

- name: Upload audience logs config | filebeat
  template:
    src: etc/filebeat/conf.d/audience.yml
    dest: /etc/filebeat/conf.d/audience.yml
    owner: ec2-user
    group: ec2-user
    mode: 0664
  notify: Restart Service | filebeat
  when: filebeat_role == 'audience'
  tags:
    - configuration
    - template-configuration
    - filebeat
```
Just a few highlights:
- Templates are good even for simply copying files, but the directories need to exist upfront. I used the file module for that (with state=directory it works like ‘mkdir -p’). Templates obviously allow you to embed {{ variables }} and even ERB-like code (Ansible’s templating is actually Jinja2). The role’s variables are accessible here too.
- ‘when’ is the way to put a condition on an action; the action will run only if the condition evaluates to true. As you can see, inside ‘when’ variables are used as-is, with no dollar signs or braces of any kind.
- ‘notify’ calls a handler. Handlers are there to do the restart (in this case) just once, at the end, and only if someone asked for it. If you look at the ugly ‘start’ code, you can see I’m calling it anyway, just because I needed it to overcome an unrelated issue.
- yumrepo: there is already a module for managing repo files, but it was still in beta and I couldn’t get it to work. So instead I just copied the repo file into place.
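The handlers file itself isn’t shown above; as a minimal sketch, roles/filebeat/handlers/main.yml could look like the following (the handler’s name has to match the notify string exactly):

```yaml
---
- name: Restart Service | filebeat
  service: name=filebeat state=restarted
```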
Here are the rest of the files (I’m pretty much done explaining).
Install:
roles/filebeat/tasks/install.yml
```yaml
---
- name: Install Packages | yum
  yum: name=filebeat state=latest
  tags:
    - filebeat
    - software-installation
    - using-yum

- name: Start with system | filebeat
  service: name=filebeat enabled=yes state=restarted
```
Templates:
Just a partial example, to show how I used variables in the template (roles/filebeat/templates/etc/filebeat/filebeat.yml):
```yaml
filebeat:
  prospectors:
    # EB Activity Log
    - paths:
        - /var/log/eb-activity.log
      encoding: plain
      input_type: log
      fields:
        logzio_codec: plain
        token: {{ filebeat_logzio_token }}
        environment: {{ filebeat_env }}
        role: {{ filebeat_role }}
```
If you’ve forgotten how to run it, scroll back up, or just run “ansible-playbook playbook-file.yml” from the ‘/etc/ansible’ dir.