I recently gave a talk about Prometheus and how to get started at Cloud Native Aarhus Meetup #6.
Cloud Native Aarhus is one of the most fantastic communities I have been a part of!
The shared vision and love for cloud native among the people is just staggering.
If you want to join us and talk about everything Cloud Native, you can sign up on our Meetup page or join our Slack and come chat with us!
The talk was named “Prometheus 101 – How to get started” and covers how to get started with the most awesome monitoring solution out there.
It is important to understand the components within Prometheus and the architecture of the complete system, which can be found here.
The demo repo is open source and available here: https://github.com/steiniche/prometheus.
Thanks to vegasbrianc for the original repo and idea.
One of the cool things about the demo repo is that it only requires docker and docker-compose to try it out!
The day after the meetup, Grafana 5.0 was released; the repo now uses it, and it works without any hassle on your part.
I received so much positive feedback about the presentation, so if you were at the meetup and you are reading this I want to say thank you very much, I really appreciate it.
Maybe there will be a Prometheus 201 talk some day where we deep dive into the awesome details about this fantastic monitoring system.
One of the things I think people should dive deep into is PromQL, the query language used by Prometheus.
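As a small taste of what such a deep dive might cover, here are a couple of PromQL queries. The metric names are the conventional example names from the Prometheus documentation, not anything specific to the demo repo:

```promql
# Per-second rate of HTTP requests over the last 5 minutes
rate(http_requests_total[5m])

# 95th percentile request latency per handler, computed from a histogram metric
histogram_quantile(0.95,
  sum(rate(http_request_duration_seconds_bucket[5m])) by (le, handler))
```

The `rate()` over a range vector and the `histogram_quantile()` aggregation are the two idioms almost every Prometheus setup ends up using.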
Another thing is the service discovery capability which works out of the box with container clustering management systems such as Kubernetes.
Until then I hope to see you out there at the meetups here in Denmark and maybe hear your awesome talk!
Ansible vs Docker in the delivery pipeline.
I believe there is a place for both Ansible and Docker, and we have something to show for it! In my current team, one of our goals is: all our infrastructure as code in git. There are many ways of doing this, but we are currently using Ansible and Docker. There are not many resources online telling you how Ansible and Docker can benefit each other, but Ansible’s own site actually has three key points about this: https://www.ansible.com/docker. These key points are: flexibility, auditability, and ubiquity.
Ubiquity is what makes the combination of Ansible and Docker viable for us. Even though containers are awesome, they cannot run in total isolation. What I mean by this is that before we can run Docker containers on a machine, we need to install Docker itself. There can be requirements about how the network of the machine should be set up, and sometimes we need to configure security aspects of the machine, e.g. firewalls.
We are running a setup without a central Docker registry; instead, containers are distributed to the machines from a central tool: Ansible.
An easy analogy is the one the image shows: Ansible prepares and moves containers, and the containers contain the application.
Our environment is a mix of Windows and CentOS boxes. We are currently not running Docker on the Windows boxes.
But with Ansible we can actually configure both Windows and Linux systems from one central place.
An example workflow from our setup:
Linux: provision the machine for Docker -> move the Docker container containing the web application to the machine -> run the container.
Windows: provision the machine -> configure it for a web application -> deploy the web application.
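The Linux half of that workflow could be sketched as an Ansible playbook along these lines. The module names (`yum`, `service`, `copy`, `shell`) are real Ansible core modules, but the inventory group, image name, file paths, and ports are made-up placeholders:

```yaml
# Hypothetical sketch: provision a CentOS host for Docker, then ship and run
# the web application container without a registry (via docker save/load).
- hosts: linux_web            # made-up inventory group name
  become: yes
  tasks:
    - name: Install Docker
      yum:
        name: docker
        state: present

    - name: Ensure the Docker daemon is running
      service:
        name: docker
        state: started
        enabled: yes

    - name: Copy the saved container image to the machine
      copy:
        src: webapp.tar       # image exported beforehand with 'docker save'
        dest: /tmp/webapp.tar

    - name: Load and run the web application container
      shell: |
        docker load -i /tmp/webapp.tar
        docker run -d --name webapp -p 80:8080 webapp:latest
```

Distributing the image with `copy` plus `docker load` is what a registry-less setup like ours boils down to; with a registry you would swap the last two tasks for a `docker pull`-style step.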
By using Ansible we have a central place for both provisioning and configuring all of our machines. We are currently using PowerShell for Windows and Python for Linux; however, Ansible is equipped with a fairly large set of core modules which can fulfill most requirements. We have found Ansible easy to integrate into our tool chain and, all in all, easy to work with. You can really feel that this is a tool by developers, for developers doing operations.
There is a point to be made about Kubernetes, and the like, where you have a platform for running your containers on a set of Kubernetes hosts. But you would still need to configure the Kubernetes hosts themselves, and what better tool to use than Ansible!
To summarize: if you have a goal like ours, all infrastructure as code, Ansible and Docker are a great tool combination for making it a reality.
This will be one of the more technical posts, and I really think it could help someone else.
I think it is a very normal situation to be in: you have outgrown your one repository to rule them all.
And you know that you are not Google.
This post will explain moving a folder from one git repository to another and the amazing tool: git filter-branch.
I had a pretty interesting task the other day, namely moving a folder from one git repository to another.
The folder was ready to live by itself, as it had grown its own ecosystem with a complete test suite and did not have any ties to the main repository.
There was just one condition: we did not want to lose the history of all the files in this folder.
I got the task of figuring out how we could move the content and history, and luckily for me I am not the first one to try to tackle this problem.
I pieced the gist below together from various blogs and Stack Overflow posts, but this Stack Overflow post helped me the most.
Enough rambling, let's look at some git commands.
To the git commands
Here be dragons!
I think git is an amazing tool and you should be able to do this kind of crazy thing as well, but please be careful!
The interesting part in the gist above is:
git filter-branch --subdirectory-filter node/ -- --all
This can actually do the one thing we normally do not like in configuration management: rewrite history.
However, in our case rewriting the git history is exactly what we want, because it will trim the repository to only contain the content and history of our node/ folder.
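The effect of the rewrite can be tried end to end on a throwaway repository. Here is a minimal, self-contained sketch; all file and folder names are made up for the demo:

```shell
set -e
export FILTER_BRANCH_SQUELCH_WARNING=1  # newer git versions warn before filter-branch

# Build a small demo repo with a node/ folder plus an unrelated top-level file.
git init -q old-repo
cd old-repo
git config user.email demo@example.com
git config user.name "Demo"

mkdir node
echo "console.log('hi');" > node/app.js
echo "unrelated" > README.md
git add .
git commit -qm "initial commit"

echo "// more" >> node/app.js
git commit -qam "change inside node/"

# Keep only the history that touches node/ and make node/ the new repo root.
git filter-branch --subdirectory-filter node/ -- --all

ls               # app.js now sits at the root; README.md is gone
git log --oneline
```

From here, pointing the trimmed clone at a new empty remote and pushing is all that remains to finish the move.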
However, please be careful when using git filter-branch, as it will in fact rewrite history, and if you push it to the master branch of the wrong repository you are going to have to revert.
Hope this helped you out in your git endeavors.