ECS Container monitoring using cAdvisor

28 Nov, 2020


In this post, I will explain how to monitor Docker containers running on an ECS cluster. Even though AWS CloudWatch is the preferred tool for monitoring and collecting container metrics, some scenarios call for an alternative solution.

cAdvisor is an open-source project for understanding the resource usage of running containers.

Metrics collected using cAdvisor can be analyzed in its own web UI or exported to various storage drivers. Here I will explain how to use cAdvisor to collect metrics from ECS and ship them to Prometheus for further use.

Prometheus is a widely used open-source tool for monitoring and alerting. It collects metrics from targets and triggers alerts based on the evaluation of conditions and rules.

But we have CloudWatch?

Yes, CloudWatch may be the easiest solution for collecting ECS metrics. But I was already using Prometheus to store and alert on metrics from various other systems, so I needed a way to export ECS metrics to Prometheus and leverage my well-tested and trusted monitoring and alerting ecosystem.
Running cAdvisor

cAdvisor can be run either as a Docker container or standalone. Here it runs as an ECS task, since an ECS cluster for scheduling and running Docker containers already exists.
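
For reference, cAdvisor can also be started with plain Docker outside ECS. The following is a minimal sketch that mirrors the volume mounts used in the task definition below; adjust the image tag and mounts to your environment:

docker run -d \
  --name=cadvisor \
  --publish=8080:8080 \
  --volume=/:/rootfs:ro \
  --volume=/var/run:/var/run:rw \
  --volume=/sys:/sys:ro \
  --volume=/var/lib/docker/:/var/lib/docker:ro \
  google/cadvisor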

Create an ECS cluster and a task definition to start with.
An ECS cluster creation guide is available here: https://docs.aws.amazon.com/AmazonECS/latest/developerguide/create_cluster.html

Follow this AWS guide to create the task definition: https://docs.aws.amazon.com/AmazonECS/latest/developerguide/create-task-definition.html

A sample task definition is provided below for reference.

{
  "ipcMode": null,
  "executionRoleArn": "arn:aws:iam::123456789012:role/TaskExecutionRole",
  "containerDefinitions": [
    {
      "dnsSearchDomains": null,
      "environmentFiles": null,
      "logConfiguration": null,
      "entryPoint": null,
      "portMappings": [
        {
          "hostPort": 8080,
          "protocol": "tcp",
          "containerPort": 8080
        }
      ],
      "command": null,
      "linuxParameters": null,
      "cpu": 0,
      "environment": [],
      "resourceRequirements": null,
      "ulimits": null,
      "dnsServers": null,
      "mountPoints": [
        {
          "readOnly": true,
          "containerPath": "/rootfs",
          "sourceVolume": "root"
        },
        {
          "readOnly": false,
          "containerPath": "/var/run",
          "sourceVolume": "var_run"
        },
        {
          "readOnly": true,
          "containerPath": "/sys",
          "sourceVolume": "sys"
        },
        {
          "readOnly": true,
          "containerPath": "/var/lib/docker",
          "sourceVolume": "var_lib_docker"
        }
      ],
      "workingDirectory": null,
      "secrets": null,
      "dockerSecurityOptions": null,
      "memory": 256,
      "memoryReservation": null,
      "volumesFrom": [],
      "stopTimeout": null,
      "image": "google/cadvisor",
      "startTimeout": null,
      "firelensConfiguration": null,
      "dependsOn": null,
      "disableNetworking": null,
      "interactive": null,
      "healthCheck": null,
      "essential": true,
      "links": null,
      "hostname": null,
      "extraHosts": null,
      "pseudoTerminal": null,
      "user": null,
      "readonlyRootFilesystem": null,
      "dockerLabels": {
        "PROMETHEUS_EXPORTER_PORT": "8080",
        "PROMETHEUS_EXPORTER_JOB_NAME": "prometheus-ecs-discovery"
      },
      "systemControls": null,
      "privileged": null,
      "name": "cadvisor"
    }
  ],
  "placementConstraints": [],
  "memory": "256",
  "taskRoleArn": "arn:aws:iam::123456789012:role/DefaultTaskRole",
  "compatibilities": [
    "EC2"
  ],
  "taskDefinitionArn": "arn:aws:ecs:us-east-1:123456789012:task-definition/cAdvisor:1",
  "family": "cAdvisor",
  "requiresAttributes": [
    {
      "targetId": null,
      "targetType": null,
      "value": null,
      "name": "com.amazonaws.ecs.capability.task-iam-role"
    },
    {
      "targetId": null,
      "targetType": null,
      "value": null,
      "name": "com.amazonaws.ecs.capability.docker-remote-api.1.18"
    },
    {
      "targetId": null,
      "targetType": null,
      "value": null,
      "name": "ecs.capability.task-eni"
    }
  ],
  "pidMode": null,
  "requiresCompatibilities": [
    "EC2"
  ],
  "networkMode": "awsvpc",
  "cpu": "512",
  "revision": 4,
  "status": "ACTIVE",
  "inferenceAccelerators": null,
  "proxyConfiguration": null,
  "volumes": [
    {
      "fsxWindowsFileServerVolumeConfiguration": null,
      "efsVolumeConfiguration": null,
      "name": "root",
      "host": {
        "sourcePath": "/"
      },
      "dockerVolumeConfiguration": null
    },
    {
      "fsxWindowsFileServerVolumeConfiguration": null,
      "efsVolumeConfiguration": null,
      "name": "var_run",
      "host": {
        "sourcePath": "/var/run"
      },
      "dockerVolumeConfiguration": null
    },
    {
      "fsxWindowsFileServerVolumeConfiguration": null,
      "efsVolumeConfiguration": null,
      "name": "sys",
      "host": {
        "sourcePath": "/sys"
      },
      "dockerVolumeConfiguration": null
    },
    {
      "fsxWindowsFileServerVolumeConfiguration": null,
      "efsVolumeConfiguration": null,
      "name": "var_lib_docker",
      "host": {
        "sourcePath": "/var/lib/docker/"
      },
      "dockerVolumeConfiguration": null
    }
  ]
}
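
Assuming the task definition JSON is saved to a local file (the filename here is hypothetical), it can be registered with the AWS CLI. Note that console exports like the sample above include read-only fields such as taskDefinitionArn, revision, status, compatibilities and requiresAttributes, which must be removed before registering:

# Register the task definition from a local JSON file
aws ecs register-task-definition \
    --cli-input-json file://cadvisor-task-definition.json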

Create a new service using this task definition.

Follow the AWS guide on service creation: https://docs.aws.amazon.com/AmazonECS/latest/developerguide/create-service.html

It is important to choose the DAEMON service type, as cAdvisor needs to run on every EC2 instance in the ECS cluster.
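
A roughly equivalent AWS CLI call is sketched below. The cluster name, subnet and security group IDs are placeholders, and a network configuration is required because the task definition uses awsvpc network mode:

# Create a DAEMON service so one cAdvisor task runs on every container instance
aws ecs create-service \
    --cluster my-ecs-cluster \
    --service-name cadvisor \
    --task-definition cAdvisor \
    --scheduling-strategy DAEMON \
    --network-configuration 'awsvpcConfiguration={subnets=[subnet-0123456789abcdef0],securityGroups=[sg-0123456789abcdef0]}'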

Create an Application Load Balancer for the cAdvisor service, which listens on port 8080, and attach the ALB to the service. This step is optional and only needed if you want to access cAdvisor directly:
https://docs.aws.amazon.com/AmazonECS/latest/developerguide/service-load-balancing.html

Once the cAdvisor task is started, the Web UI can be accessed using the ALB DNS name.

Shipping metrics to Prometheus

To ship the metrics exposed by cAdvisor, the cAdvisor endpoints need to be added as scrape targets in Prometheus.

As cAdvisor runs as a container, the IP addresses of the cAdvisor endpoints are assigned dynamically and change whenever a task is restarted. This requires Prometheus to discover and register the targets dynamically.

Prometheus Amazon ECS discovery (https://github.com/teralytics/prometheus-ecs-discovery) discovers these dynamic endpoints and registers them with Prometheus by writing the list of cAdvisor endpoints to a file. Prometheus can then use the file_sd_config option to read targets from that file. A sample Prometheus config is provided below:

- job_name: ecs
  honor_timestamps: true
  metrics_path: /metrics
  scheme: http
  file_sd_configs:
  - files:
    - /var/lib/prometheus/discovery/ecs_file_sd.yml
    refresh_interval: 1m
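
For illustration, the generated target file uses Prometheus' standard file_sd format, roughly as sketched below. The IP addresses are hypothetical and the exact set of labels depends on the prometheus-ecs-discovery version:

- targets:
  - 10.0.1.23:8080
  - 10.0.2.57:8080
  labels:
    job: prometheus-ecs-discovery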

By default it relies on the PROMETHEUS_EXPORTER_PORT docker label to find the port cAdvisor is listening on. This label can be customized by passing the -config.port-label option to Prometheus Amazon ECS discovery.

To read the ECS details, AWS credentials can be provided as environment variables. Alternatively, an AWS role ARN can be passed using the -config.role-arn option.
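
A minimal invocation might look like the sketch below. The -config.write-to flag name is taken from the project's README; verify it against the version you are running:

# Credentials come from the standard AWS environment variables or an instance profile
export AWS_REGION=us-east-1
./prometheus-ecs-discovery \
    -config.write-to /var/lib/prometheus/discovery/ecs_file_sd.yml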

Full configuration options can be found at https://github.com/teralytics/prometheus-ecs-discovery/blob/master/README.md

Once Prometheus registers these endpoints, they can be found on the Targets page in Prometheus. The metrics exported by cAdvisor are prefixed with “container_” by default.
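
As a quick sanity check, a PromQL query along these lines (a sketch; the job label matches the sample scrape config above) should return per-container CPU usage once the targets are being scraped:

# Per-container CPU usage over the last 5 minutes
rate(container_cpu_usage_seconds_total{job="ecs"}[5m])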

Jino John

DevOps Engineer

Jino has 19 years of IT experience working with both small and large companies. He started his career as a Linux engineer and has looked after IT systems in various financial technology companies. Jino has extensive experience in designing and implementing AWS solutions and is an AWS Certified DevOps Engineer – Professional.

Incremental backups with rsync and hard links

13 Nov, 2020


In this post I am going to describe a way to build a simple incremental backup solution using rsync and hard links. You may already be familiar with rsync but for anyone who is not, rsync is a command-line tool commonly used on Linux and other UNIX-like operating systems to copy and synchronise directories. I will assume some prior knowledge of rsync in this post so if you have not used it before there may be some parts that confuse you!

A bit of background

Before we go into the details, you should understand how files are stored on the filesystem and how hard links work.
All files and directories are represented in the filesystem by an inode number, which is the filesystem’s internal identity for the file. If you run ls -li in a directory you can see the inode numbers listed on the left:
[user1@backupbox dir1]$ ls -li
total 128
33839002 -rw-rw-r--. 1 user1 user1 12942 Oct  2 16:14 file1
33839003 -rw-rw-r--. 1 user1 user1 14106 Oct  2 16:14 file2
33839004 -rw-rw-r--. 1 user1 user1 19360 Oct  2 16:14 file3
33839005 -rw-rw-r--. 1 user1 user1 17093 Oct  2 16:14 file4
33839006 -rw-rw-r--. 1 user1 user1 16094 Oct  2 16:14 file5
A “file” as we see it by path and filename is in fact a reference to the inode and is often referred to as a “link”. When you create a hard link from one file to another you are creating a separate reference (link) from a new filename to the same inode number. This is different from a “soft” or “symbolic” link (symlink) which is a reference from one location to another path in the filesystem. You can see the difference in the output of ls -li:
[user1@backupbox dir1]$ ls -li
total 64
33839002 -rw-r--r--. 2 user1 user1 12942 Oct  2 16:14 file1
33839003 -rw-r--r--. 2 user1 user1 14106 Oct  2 16:14 file2
33839002 -rw-r--r--. 2 user1 user1 12942 Oct  2 16:14 hardlink1
33839003 -rw-r--r--. 2 user1 user1 14106 Oct  2 16:14 hardlink2
33695760 lrwxrwxrwx. 1 user1 user1     5 Oct  2 16:15 symlink1 -> file1
33695762 lrwxrwxrwx. 1 user1 user1     5 Oct  2 16:15 symlink2 -> file2
When you edit the original file the changes are also visible in the hard-linked version:
[user1@backupbox dir1]$ ls -li
total 8
33839002 -rw-r--r--. 2 user1 user1 47 Oct  2 16:19 file1
33839002 -rw-r--r--. 2 user1 user1 47 Oct  2 16:19 hardlink1
[user1@backupbox dir1]$ cat file1
This is file1
[user1@backupbox dir1]$ cat hardlink1
This is file1
[user1@backupbox dir1]$ echo "an extra line" >>file1
[user1@backupbox dir1]$ cat file1
This is file1
an extra line
[user1@backupbox dir1]$ cat hardlink1
This is file1
an extra line
And if you edit the hard-linked file the changes are seen in the original file:
[user1@backupbox dir1]$ echo "another extra line" >>hardlink1
[user1@backupbox dir1]$ cat file1
This is file1
an extra line
another extra line
[user1@backupbox dir1]$ cat hardlink1
This is file1
an extra line
another extra line
Changing the ownership and permissions also affects both files:
[user1@backupbox dir1]$ ls -li
total 8
33839002 -rw-r--r--. 2 user1 user1 47 Oct  2 16:19 file1
33839002 -rw-r--r--. 2 user1 user1 47 Oct  2 16:19 hardlink1
[user1@backupbox dir1]$ sudo chown root.root file1
[user1@backupbox dir1]$ ls -li
total 8
33839002 -rw-r--r--. 2 root  root  47 Oct  2 16:19 file1
33839002 -rw-r--r--. 2 root  root  47 Oct  2 16:19 hardlink1
[user1@backupbox dir1]$ sudo chmod 0666 hardlink1
[user1@backupbox dir1]$ ls -li
total 8
33839002 -rw-rw-rw-. 2 root  root  47 Oct  2 16:19 file1
33839002 -rw-rw-rw-. 2 root  root  47 Oct  2 16:19 hardlink1
Now if we delete the original file we will see that the hard link still exists and the file content remains intact. In contrast a symlink pointing to the original file will no longer be valid:
[user1@backupbox dir1]$ ls -li
total 8
33839002 -rw-r--r--. 2 user1 user1 47 Oct  2 16:19 file1
33839002 -rw-r--r--. 2 user1 user1 47 Oct  2 16:19 hardlink1
33695760 lrwxrwxrwx. 1 user1 user1  5 Oct  2 16:15 symlink1 -> file1
[user1@backupbox dir1]$ rm -f file1
[user1@backupbox dir1]$ ls -li
total 4
33839002 -rw-r--r--. 1 user1 user1 47 Oct  2 16:19 hardlink1
33695760 lrwxrwxrwx. 1 user1 user1  5 Oct  2 16:15 symlink1 -> file1
[user1@backupbox dir1]$ cat hardlink1
This is file1
an extra line
another extra line
[user1@backupbox dir1]$ cat symlink1
cat: symlink1: No such file or directory
We can even create another hard link and delete the existing one and the data still remains intact:
[user1@backupbox dir1]$ ls -li
total 4
33839002 -rw-r--r--. 1 user1 user1 47 Oct  2 16:19 hardlink1
[user1@backupbox dir1]$ ln hardlink1 newlink1
[user1@backupbox dir1]$ ls -li
total 8
33839002 -rw-r--r--. 2 user1 user1 47 Oct  2 16:19 hardlink1
33839002 -rw-r--r--. 2 user1 user1 47 Oct  2 16:19 newlink1
[user1@backupbox dir1]$ rm hardlink1
[user1@backupbox dir1]$ ls -li
total 4
33839002 -rw-r--r--. 1 user1 user1 47 Oct  2 16:19 newlink1
[user1@backupbox dir1]$ cat newlink1
This is file1
an extra line
another extra line
When you delete a file using the rm command, or any other method, what you are actually doing is just removing the link to the inode. This is why the function that deletes a file in languages such as C and PHP is called “unlink”. When all links to an inode have been removed, the inode itself is deleted. As long as there is at least one link pointing to it, the inode and its data remain intact.
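
You can inspect the link count directly with stat. This is a minimal sketch using GNU coreutils, where %i prints the inode number, %h the number of hard links and %n the file name; the output shown matches the final state of the example above:

[user1@backupbox dir1]$ stat -c '%i %h %n' newlink1
33839002 1 newlink1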

So what does this have to do with rsync and incremental backups?

Let’s say we want to create a mirror of a remote directory /home/data from a server named server1 into a local directory /backup/server1. Typically we would do something like this:
rsync -av --delete server1:/home/data/ /backup/server1/

We would then run the same command again each time we wanted to update the mirror with the latest changes from the server.
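
For example, a cron entry like this hypothetical one would refresh the mirror every night at 02:00:

0 2 * * * rsync -av --delete server1:/home/data/ /backup/server1/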

To implement a basic incremental backup system we might consider making a local copy of the previous backup before starting the rsync:

[user1@backupbox dir1]$ cp -a /backup/server1/ /backup/server1Old/

Then we update our mirror from the remote server:

[user1@backupbox dir1]$ rsync -av --delete server1:/home/data/ /backup/server1/

Obviously this isn’t very efficient in either time or space so we could improve this by using hard links instead, which can be done by adding the -l argument to the cp command:

# Create a hard-linked clone of the current backup
cp -al /backup/server1 /backup/server1Old
# update our mirror from the remote server
rsync -av --delete server1:/home/data/ /backup/server1/
The previous backup is preserved in /backup/server1Old, while /backup/server1 contains the entire new backup yet only uses the space required for the new and changed files. This is an efficient way to implement incremental backups, but it still has its limitations, especially when dealing with large numbers of files.

To improve things further we can use a feature in rsync which enables us to efficiently create hard-linked copies of a directory’s contents with only the changed files taking up space on disk. The rsync feature we need is the --link-dest argument.

Taking this as a starting point:

server1:/home/data: Remote source directory

/backup/server1New: Destination for a new backup. Does not yet exist

/backup/server1Old: Existing previous backup

The result we want in /backup/server1New is that all unchanged files are hard links to the existing files in /backup/server1Old and only the changed files are copied from the remote server and take up space in the new backup.

This is exactly what the --link-dest argument does for us. It performs a normal rsync from server1:/home/data to /backup/server1New, but if a file does not exist in /backup/server1New it will look at the same relative path under /backup/server1Old to see if the file has changed. If the file in /backup/server1Old is the same as the file on the remote server then, instead of copying it over, rsync will create a hard link from the file in /backup/server1Old into /backup/server1New.

To use this we just add the “old” directory as the --link-dest argument to our rsync command:

rsync -av --link-dest /backup/server1Old server1:/home/data/ /backup/server1New/

Here we can see the old backup directory’s contents:

[user1@backupbox ~]$ ls -lRi /backup/server1Old/
/backup/server1Old/:
total 0
68876 drwxrwxr-x. 3 user1 user1 53 Oct  2 17:30 files
 
/backup/server1Old/files:
total 72
33651935 drwxrwxr-x. 2 user1 user1    42 Oct  2 17:30 bar
   68882 -rw-rw-r--. 1 user1 user1 28883 Oct  2 17:30 foo1
   68883 -rw-rw-r--. 1 user1 user1 27763 Oct  2 17:30 foo2
   68884 -rw-rw-r--. 1 user1 user1 10487 Oct  2 17:30 foo3
 
/backup/server1Old/files/bar:
total 76
33695759 -rw-rw-r--. 1 user1 user1 32603 Oct  2 17:30 bar1
33838984 -rw-rw-r--. 1 user1 user1 15318 Oct  2 17:30 bar2
33839003 -rw-rw-r--. 1 user1 user1 26122 Oct  2 17:30 bar3

On the server we then modify a file:

[user1@server1 files]$ echo "Hello world" >/home/data/files/foo3

Now we run our incremental backup command:

[user1@backupbox ~]$ rsync -av --link-dest=/backup/server1Old server1:/home/data/ /backup/server1New/
receiving incremental file list
created directory /backup/server1New
files/foo3
 
sent 136 bytes  received 272 bytes  816.00 bytes/sec
total size is 130,701  speedup is 320.35

We can see from the rsync output that only the changed file has been copied, but if we list the contents of the new directory we can see it contains all of the files:

[user1@backupbox ~]$ ls -lRi /backup/server1New/
/backup/server1New/:
total 0
101051460 drwxrwxr-x. 3 user1 user1 53 Oct  2 17:30 files
 
/backup/server1New/files:
total 64
    68885 drwxrwxr-x. 2 user1 user1    42 Oct  2 17:30 bar
    68882 -rw-rw-r--. 2 user1 user1 28883 Oct  2 17:30 foo1
    68883 -rw-rw-r--. 2 user1 user1 27763 Oct  2 17:30 foo2
101051461 -rw-rw-r--. 1 user1 user1    12 Oct  2 17:40 foo3
 
/backup/server1New/files/bar:
total 76
33695759 -rw-rw-r--. 2 user1 user1 32603 Oct  2 17:30 bar1
33838984 -rw-rw-r--. 2 user1 user1 15318 Oct  2 17:30 bar2
33839003 -rw-rw-r--. 2 user1 user1 26122 Oct  2 17:30 bar3
If you compare the inode numbers to the listing of /backup/server1Old above you will see that only the modified file and the directories have different inode numbers.

Using du we can also see that the second backup takes up less space on disk:

[user1@backupbox ~]$ du -chs /backup/server1*
140K	/backup/server1New
12K	/backup/server1Old
152K	total

Putting it all together

Here is an example script that can be used to create daily incremental backups of a directory. Each backup is stored in a directory named after today’s date and it will look for yesterday’s backup to create the hard links:

#!/bin/bash
 
# The source path to backup. Can be local or remote.
SOURCE=servername:/source/dir/
# Where to store the incremental backups
DESTBASE=/backup/servername_data
 
# Where to store today's backup
DEST="$DESTBASE/$(date +%Y-%m-%d)"
# Where to find yesterday's backup
YESTERDAY="$DESTBASE/$(date -d yesterday +%Y-%m-%d)/"
 
# Use yesterday's backup as the incremental base if it exists
OPTS=""
if [ -d "$YESTERDAY" ]
then
	OPTS="--link-dest $YESTERDAY"
fi
 
# Run the rsync
rsync -av $OPTS "$SOURCE" "$DEST"
The beauty of doing your backups this way is that each daily backup is a full mirror of the remote directory. This means there is no complex logic required to find the latest version of a file or to find a file from a specific date: just go to the directory named with the date you want and open the file as normal. Each backup directory is completely independent of the others, so if you need to free up some space you can delete any of the backups that you no longer require. Removing a backup will not impact the backups before or after it; a simple rm -rf is all you need!
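
A retention policy then becomes a one-liner. This sketch assumes GNU coreutils and an arbitrary 30-day retention; it keeps the 30 newest dated directories and deletes the rest, relying on the YYYY-MM-DD names sorting chronologically:

# Delete all but the 30 most recent daily backups
ls -d /backup/servername_data/????-??-?? | head -n -30 | xargs -r rm -rf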

Limitations

As with every backup solution, this one has its limitations and you must choose a method that fits your particular use case. Here are a few examples of limitations in this solution:

  • Changes in permissions or ownership on a source file mean the file counts as a new file, so it will be copied again even if its contents have not changed. There are options in rsync to control this behaviour (see the sketch after this list).
  • If you move or rename a file on the source server it will count as a new file and will be copied in full, even if its contents have not changed and it still has the same inode number.
  • Directories themselves cannot be hard linked on most filesystems, so this is not supported by rsync. For most use cases this is not a problem, but if you have an enormous number of directories in the backup they will start to take a noticeable amount of space on the backup disk.
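
When a file is unexpectedly recopied, rsync's --itemize-changes (-i) flag helps diagnose which attribute broke the hard-link match, and flags such as --no-perms are one way to relax the comparison. A minimal sketch:

# -i prints a per-file change summary, showing whether content,
# permissions or ownership triggered the copy
rsync -avi --link-dest=/backup/server1Old server1:/home/data/ /backup/server1New/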

Conclusion

When it comes to using rsync for backups this is only the tip of the iceberg. There are many different options that control the behaviour of the backup process and how it determines which files to copy, link or delete. Further information about rsync can be found on its website: https://rsync.samba.org/.
Richard Gooding

Technical Lead

Richard has a varied history in development, devops and databases so he is always comfortable on either side of the dev/ops fence. His past experience includes web and email hosting, software testing, building desktop and mobile apps, managing large Cassandra clusters, building and running large-scale distributed applications and more.
