Hey there! Let’s talk about logging in your production system. It can be a real pain to manage logs when things get big, but don’t worry, I’ve got a solution for you! In this post, I’ll show you how to use the `roura/fluentd-s3` Docker image to easily send your app logs to Amazon S3. It’s perfect for long-term storage, analysis, and keeping things compliant.
## Why S3 for Logs?
Amazon S3 is awesome for storing logs. Here’s why:
- Super Durable: Your logs will be safe with 99.999999999% durability (that’s 11 nines, folks!).
- Easy on the Wallet: You only pay for what you use, and the storage costs are low.
- Scales Like Crazy: You’ll never run out of space with S3’s virtually unlimited storage.
- Plays Nice with AWS: It works seamlessly with AWS analytics services like Athena and Glue.
- Smart Lifecycle Policies: You can set up rules to move old logs to cheaper storage or delete them automatically.
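For example, a lifecycle configuration like this one (the prefix and day counts are just placeholders to adapt) moves logs to Glacier after 90 days and deletes them after a year:

```json
{
  "Rules": [
    {
      "ID": "archive-then-expire-logs",
      "Filter": { "Prefix": "logs/" },
      "Status": "Enabled",
      "Transitions": [
        { "Days": 90, "StorageClass": "GLACIER" }
      ],
      "Expiration": { "Days": 365 }
    }
  ]
}
```

You can apply it with `aws s3api put-bucket-lifecycle-configuration --bucket your-log-bucket --lifecycle-configuration file://lifecycle.json`.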
## Getting Started with roura/fluentd-s3
The `roura/fluentd-s3` image is a pre-configured Fluentd setup that collects your logs and ships them to S3. It’s super easy to use and still gives you plenty of options to customize.
### Basic Usage
To start collecting logs with the `roura/fluentd-s3` image, just run this command:

```shell
docker run -d \
  -p 24224:24224 \
  -e AWS_ACCESS_KEY_ID=your_access_key \
  -e AWS_SECRET_ACCESS_KEY=your_secret_key \
  -e S3_BUCKET=your-log-bucket \
  -e S3_REGION=us-east-1 \
  roura/fluentd-s3
```
Boom! Your Fluentd instance is up and running, ready to receive logs.
## Sending Logs from Docker Containers
To send logs from your Docker containers to Fluentd, use the Fluentd logging driver:
```shell
docker run --log-driver=fluentd \
  --log-opt fluentd-address=localhost:24224 \
  your-application-image
```
If you’re using Docker Compose, add this to your service definition. One gotcha: the Docker daemon on the host, not your container, resolves `fluentd-address`, so a Compose service name like `fluentd` won’t resolve there. With the port published, point it at `localhost:24224` instead:

```yaml
services:
  your-app:
    image: your-application-image
    logging:
      driver: fluentd
      options:
        fluentd-address: localhost:24224
```
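Putting both pieces together, a minimal Compose sketch might look like this (image names, credentials, and bucket are placeholders; `fluentd-address` points at the port published on the host because the Docker daemon, not the app container, makes the connection):

```yaml
services:
  fluentd:
    image: roura/fluentd-s3
    ports:
      - "24224:24224"
    environment:
      AWS_ACCESS_KEY_ID: your_access_key
      AWS_SECRET_ACCESS_KEY: your_secret_key
      S3_BUCKET: your-log-bucket
      S3_REGION: us-east-1

  your-app:
    image: your-application-image
    depends_on:
      - fluentd
    logging:
      driver: fluentd
      options:
        fluentd-address: localhost:24224
```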
## Configuring Things
The `roura/fluentd-s3` image is super flexible. You can customize it using environment variables without touching any config files.
### Essential Config
| Variable | Description | Required? |
|---|---|---|
| `AWS_ACCESS_KEY_ID` | AWS access key with S3 write permissions | Yes |
| `AWS_SECRET_ACCESS_KEY` | AWS secret access key | Yes |
| `S3_BUCKET` | S3 bucket name where logs will be stored | Yes |
| `S3_REGION` | AWS region for the S3 bucket (e.g., `us-east-1`) | Yes |
### Advanced Config
| Variable | Description | Default |
|---|---|---|
| `FLUENTD_PORT` | Port for Fluentd to listen on | `24224` |
| `FLUENTD_BIND` | IP address to bind Fluentd to | `0.0.0.0` |
| `S3_PATH` | Path prefix in the S3 bucket | `logs/` |
| `BUFFER_PATH` | Path for Fluentd buffer files | `/var/log/fluentd` |
| `TIME_SLICE_FORMAT` | Format for time slices | `%Y%m%d` |
| `TIME_SLICE_WAIT` | Wait time before uploading time slices | `15m` |
## Organizing Logs in S3
By default, logs are stored in S3 using a time-based directory structure, but you can customize it using the `S3_PATH` and `TIME_SLICE_FORMAT` variables.
For example, to organize logs by app and date:
```shell
docker run -d \
  -p 24224:24224 \
  -e AWS_ACCESS_KEY_ID=your_access_key \
  -e AWS_SECRET_ACCESS_KEY=your_secret_key \
  -e S3_BUCKET=your-log-bucket \
  -e S3_REGION=us-east-1 \
  -e S3_PATH=logs/my-application/ \
  -e TIME_SLICE_FORMAT=%Y/%m/%d \
  roura/fluentd-s3
```
This will create a structure like:

```
your-log-bucket/
└── logs/
    └── my-application/
        └── 2023/
            └── 05/
                └── 15/
                    └── logs.gz
```
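Since `TIME_SLICE_FORMAT` takes strftime-style tokens, you can preview the prefix a given format will produce with plain `date` before deploying anything (the `S3_PATH` value here just mirrors the example above):

```shell
# Render the S3 key prefix that S3_PATH + TIME_SLICE_FORMAT would produce today
S3_PATH="logs/my-application/"
TIME_SLICE_FORMAT="%Y/%m/%d"
prefix="${S3_PATH}$(date -u +"$TIME_SLICE_FORMAT")"
echo "$prefix"   # e.g. logs/my-application/2023/05/15
```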
## Fine-tuning Performance

### Buffer Settings
The `TIME_SLICE_WAIT` parameter controls how long Fluentd waits before uploading logs to S3. The default is 15 minutes, which is a good balance between reducing API calls and getting logs to S3 quickly.
If you want more real-time log delivery, you can reduce this value:
```shell
docker run -d \
  # ... other parameters ...
  -e TIME_SLICE_WAIT=5m \
  roura/fluentd-s3
```
### Local Buffer Storage
By default, Fluentd buffers logs to `/var/log/fluentd` before uploading them to S3. If you’re dealing with a ton of logs, mount a volume there so the buffer survives container restarts and you don’t lose data:
```shell
docker run -d \
  # ... other parameters ...
  -v /path/on/host:/var/log/fluentd \
  roura/fluentd-s3
```
## Keeping Things Secure

### IAM Permissions
Instead of using your AWS root credentials, create an IAM user with minimal permissions:
```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": [
        "s3:PutObject",
        "s3:GetObject",
        "s3:ListBucket"
      ],
      "Resource": [
        "arn:aws:s3:::your-log-bucket",
        "arn:aws:s3:::your-log-bucket/*"
      ]
    }
  ]
}
```
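One way to wire that up (the user and policy names below are hypothetical, and the `aws` steps need real credentials, so they’re shown as comments):

```shell
# Write the policy to a file and sanity-check the JSON before using it
cat > /tmp/fluentd-s3-policy.json <<'EOF'
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": ["s3:PutObject", "s3:GetObject", "s3:ListBucket"],
      "Resource": [
        "arn:aws:s3:::your-log-bucket",
        "arn:aws:s3:::your-log-bucket/*"
      ]
    }
  ]
}
EOF
python3 -m json.tool /tmp/fluentd-s3-policy.json > /dev/null && echo "policy OK"

# Then create a dedicated user and attach the policy (hypothetical names):
#   aws iam create-user --user-name fluentd-log-shipper
#   aws iam put-user-policy --user-name fluentd-log-shipper \
#     --policy-name fluentd-s3-write \
#     --policy-document file:///tmp/fluentd-s3-policy.json
```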
### Using IAM Roles with ECS/EKS
If you’re running on AWS ECS or EKS, you can use IAM roles instead of hardcoded credentials:
```shell
docker run -d \
  -p 24224:24224 \
  -e S3_BUCKET=your-log-bucket \
  -e S3_REGION=us-east-1 \
  roura/fluentd-s3
```
The container will automatically pick up credentials from the instance or task IAM role.
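On EKS specifically, the usual pattern is IAM Roles for Service Accounts (IRSA): annotate the service account that runs the Fluentd pod with a role ARN. The ARN below is a placeholder, and the role’s trust policy must reference your cluster’s OIDC provider:

```yaml
apiVersion: v1
kind: ServiceAccount
metadata:
  name: fluentd-s3
  annotations:
    eks.amazonaws.com/role-arn: arn:aws:iam::123456789012:role/fluentd-s3-logs
```

On ECS, the equivalent is setting `taskRoleArn` in your task definition.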