- Prerequisites
- Overview of the workflow
- Step-by-step implementation
- 1) Create S3 Bucket and set lifecycle (delete after 7 days)
- 2) Create an IAM Policy and user with the least necessary access
- 3) Install and configure AWS CLI on the server
- 4) Sample script: upload_and_clean.sh
- 5) Scheduling automatic execution (Cron or systemd timer)
- 6) Checking and ensuring the success of the operation
- Advanced options and improvements
- Important security tips
- Monitoring, alerting, and disaster recovery (DR) testing
- Comparison of options: awscli vs rclone vs s3fs
- Why is choosing the right location important?
- Operational example: Setup for a trading VPS
- Summary and final points
- Sample files and resources
- Frequently Asked Questions
Prerequisites
To implement this backup bot, you need to have a basic server and AWS environment ready. Check and provide the following:
- SSH access to the DirectAdmin server with sudo/root privileges.
- AWS CLI installed (v2 preferred) or a tool such as rclone.
- An S3 bucket in AWS with a suitable name and region.
- An IAM user with access limited to that bucket, or an IAM Role (if running on EC2).
- Enough space on the server to temporarily store backups.
- A lifecycle policy on the bucket that deletes objects after 7 days.
Overview of the workflow
The simplified process flow is as follows:
- DirectAdmin creates a backup and saves it to a local folder (e.g. /home/admin/backups).
- The script (bot) runs on a schedule and uploads the backups to S3.
- After confirming the upload, the script removes local files older than 2 days.
- An S3 lifecycle rule deletes each object 7 days after creation.
Step-by-step implementation
1) Create S3 Bucket and set lifecycle (delete after 7 days)
You can create a bucket from the AWS console or use the AWS CLI. Example bucket creation command (example region = eu-central-1):
aws s3api create-bucket --bucket my-da-backups-bucket --region eu-central-1 --create-bucket-configuration LocationConstraint=eu-central-1

Example lifecycle policy (file lifecycle.json):

{"Rules":[{"ID":"ExpireBackupsAfter7Days","Prefix":"","Status":"Enabled","Expiration":{"Days":7},"NoncurrentVersionExpiration":{"NoncurrentDays":7}}]}

Applying the policy with the CLI:

aws s3api put-bucket-lifecycle-configuration --bucket my-da-backups-bucket --lifecycle-configuration file://lifecycle.json

Recommendations: to reduce storage costs, consider storage classes such as STANDARD_IA or INTELLIGENT_TIERING. If versioning is enabled, the NoncurrentVersionExpiration rule above expires old versions as well.
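After applying the policy, you can read the configuration back to confirm it was stored (bucket name as in the example above):

```shell
# Prints the lifecycle rules currently attached to the bucket.
aws s3api get-bucket-lifecycle-configuration --bucket my-da-backups-bucket
```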
2) Create an IAM Policy and user with the least necessary access
Example of a policy limited to a bucket (JSON file):
{"Version":"2012-10-17","Statement":[{"Effect":"Allow","Action":["s3:PutObject","s3:GetObject","s3:ListBucket","s3:DeleteObject","s3:PutObjectAcl"],"Resource":["arn:aws:s3:::my-da-backups-bucket","arn:aws:s3:::my-da-backups-bucket/*"]}]}

Security tip: prefer an IAM Role over static access keys if your server is an EC2 instance. Also consider restricting access by source IP or other conditions.
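Wiring this up with the CLI might look like the following sketch (the user name and policy file name are examples, and the commands require admin credentials):

```shell
# Create a dedicated IAM user, attach the bucket-limited policy above
# (saved locally as backup-policy.json), and issue an access key for it.
aws iam create-user --user-name da-backup
aws iam put-user-policy --user-name da-backup --policy-name da-backup-s3 --policy-document file://backup-policy.json
aws iam create-access-key --user-name da-backup
```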
3) Install and configure AWS CLI on the server
Installation example for Debian/Ubuntu and CentOS/RHEL:
sudo apt update && sudo apt install -y awscli
sudo yum install -y awscli

Configure a profile with an access key if needed:

aws configure --profile da-backup

If you are using an IAM Role on EC2, no credential configuration is needed.
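Before wiring the profile into the script, you can confirm it authenticates by asking STS who you are:

```shell
# Prints the account ID and ARN the profile authenticates as.
aws sts get-caller-identity --profile da-backup
```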
4) Sample script: upload_and_clean.sh
This script uploads all backups to S3 and then deletes local files that are older than 2 days, but only if they are confirmed to exist in S3. (Place the code below in an appropriate path and make it executable.)
#!/bin/bash
set -euo pipefail
IFS=$'\n\t'
# Configuration
BACKUP_DIR="/home/admin/backups"
BUCKET="my-da-backups-bucket"
S3_PREFIX="directadmin"
AWS_PROFILE="da-backup"
LOGFILE="/var/log/da_s3_backup.log"
DAYS_LOCAL=2
# Helpers
log() { echo "$(date '+%F %T') $*" | tee -a "$LOGFILE"; }
aws_cmd() {
if [ -z "${AWS_PROFILE:-}" ]; then
aws "$@"
else
aws --profile "$AWS_PROFILE" "$@"
fi
}
# Upload files
log "Starting upload from $BACKUP_DIR to s3://$BUCKET/$S3_PREFIX"
# Note: wrapped in `if !` because with `set -e` the script would exit
# before a separate `$?` check could ever run.
if ! aws_cmd s3 cp "$BACKUP_DIR" "s3://$BUCKET/$S3_PREFIX/" --recursive --only-show-errors --storage-class STANDARD --sse AES256; then
log "Error: upload to S3 failed."
exit 1
fi
log "Upload completed."
# Remove local files older than DAYS_LOCAL if present in S3
log "Searching for local files older than $DAYS_LOCAL days"
find "$BACKUP_DIR" -type f -mtime +"$DAYS_LOCAL" -print0 | while IFS= read -r -d '' file; do
base=$(basename "$file")
s3key="$S3_PREFIX/$base"
if aws_cmd s3api head-object --bucket "$BUCKET" --key "$s3key" >/dev/null 2>&1; then
log "Removing local file: $file (found in S3)"
rm -f "$file" && log "Deleted: $file" || log "Error deleting: $file"
else
log "S3 object not found: s3://$BUCKET/$s3key — not removed locally."
fi
done
log "Operation finished."

Tips: save the script as, for example, /usr/local/bin/upload_and_clean.sh and make it executable with chmod +x. Adjust BACKUP_DIR to match your DirectAdmin backup path.
5) Scheduling automatic execution (Cron or systemd timer)
To run daily at 02:00 with cron:
0 2 * * * /usr/local/bin/upload_and_clean.sh >> /var/log/da_s3_backup.log 2>&1

If you prefer, build a systemd timer instead for better control and logging.
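A minimal systemd service/timer pair might look like the following sketch (unit names are assumptions; the files are generated into a scratch directory so you can review them before installing):

```shell
# Sketch: generate a systemd service + timer pair for the backup script.
# Review the files, then copy them into /etc/systemd/system and run:
#   systemctl daemon-reload && systemctl enable --now da-s3-backup.timer
UNIT_DIR="$(mktemp -d)"

cat > "$UNIT_DIR/da-s3-backup.service" <<'EOF'
[Unit]
Description=Upload DirectAdmin backups to S3

[Service]
Type=oneshot
ExecStart=/usr/local/bin/upload_and_clean.sh
EOF

cat > "$UNIT_DIR/da-s3-backup.timer" <<'EOF'
[Unit]
Description=Daily DirectAdmin S3 backup at 02:00

[Timer]
OnCalendar=*-*-* 02:00:00
Persistent=true

[Install]
WantedBy=timers.target
EOF

echo "Unit files written to $UNIT_DIR"
```

Persistent=true makes systemd run the job at the next boot if the server was off at 02:00, which plain cron does not do.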
6) Checking and ensuring the success of the operation
Check the logs in /var/log/da_s3_backup.log and verify the files in S3.

To list the files in S3:

aws s3 ls s3://my-da-backups-bucket/directadmin/

Check the lifecycle rule from the console or with the CLI to make sure deletion will occur after 7 days.
Advanced options and improvements
- Multipart upload for large files: the AWS CLI handles this automatically, but for very large files you can tune concurrency settings (e.g. s3.max_concurrent_requests).
- Use SSE-KMS for encryption with an enterprise KMS key: pass --sse aws:kms --sse-kms-key-id "arn:aws:kms:..." to aws s3 cp.
- Enable S3 Object Lock and versioning if you need protection against unwanted deletion.
- If outbound traffic is costly, place the server near the S3 region or on directly connected infrastructure.
- Add a notification on upload failure, using AWS SNS or a webhook to send a message to Slack or email.
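The notification idea in the last bullet can be sketched as a small wrapper; the SNS topic ARN and the Slack webhook URL below are placeholders, not real resources:

```shell
# Sketch of a failure notification for the backup script.
# The SNS topic ARN and the Slack webhook URL are placeholders.

build_alert_message() {
  # Compose a short alert from host name, script name, and exit code.
  printf 'Backup failed on %s: %s exited with code %s' "$1" "$2" "$3"
}

notify_failure() {
  msg=$(build_alert_message "$(hostname)" "upload_and_clean.sh" "$1")
  echo "$msg"
  # Option 1: AWS SNS (topic ARN is a placeholder):
  # aws sns publish --topic-arn "arn:aws:sns:eu-central-1:111111111111:backup-alerts" --message "$msg"
  # Option 2: Slack incoming webhook (URL is a placeholder):
  # curl -s -X POST -H 'Content-Type: application/json' \
  #   -d "{\"text\": \"$msg\"}" "https://hooks.slack.com/services/..."
}

# Usage in cron or at the end of the script:
# /usr/local/bin/upload_and_clean.sh || notify_failure $?
```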
Important security tips
Some key tips for keeping backups safe:
- Least privilege: Use minimal permissions for IAM.
- Avoid static keys: Don't store access keys in a text file; if you're on EC2, use an IAM Role.
- Encryption: Use server-side encryption (SSE) or pre-upload encryption (gpg or rclone encryption).
- Logging: Enable S3 and CloudTrail access logs to record every operation.
Monitoring, alerting, and disaster recovery (DR) testing
Recommendations to ensure continuous operation:
- Regularly test the process of restoring backups from S3 to the server and include this testing in your monthly schedule.
- Use S3 Inventory and CloudWatch to monitor upload rates and errors.
- Set an alert if an error or upload failure occurs.
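A restore drill along the lines of the first recommendation might look like this sketch; the object key is hypothetical, and verify_archive checks that a downloaded .tar.gz lists cleanly before you trust it:

```shell
# Sketch of a monthly restore drill. Bucket and prefix match this guide;
# the object key passed to restore_drill is hypothetical.

verify_archive() {
  # Succeeds only if the .tar.gz archive can be listed without errors.
  tar -tzf "$1" >/dev/null 2>&1
}

restore_drill() {
  key="$1"
  restore_dir=$(mktemp -d)
  aws s3 cp "s3://my-da-backups-bucket/$key" "$restore_dir/" || return 1
  verify_archive "$restore_dir/$(basename "$key")"
}

# Example (key is hypothetical):
# restore_drill "directadmin/user.admin.tar.gz" && echo "Restore drill passed"
```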
Comparison of options: awscli vs rclone vs s3fs
AWS CLI: Simple, popular, and convenient. Supports AWS lifecycle and encryption.
rclone: More features for sync, client-side encryption, and bandwidth control. Suitable if you need to limit bandwidth or encrypt before uploading.
s3fs: Mounts the bucket as a filesystem, but may have lower performance and stability for large backup files.
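For comparison, the upload step of this guide might look like this with rclone (the remote name s3remote is an assumption you would define with rclone config):

```shell
# Copy local backups to S3 via rclone, capping bandwidth at 10 MB/s.
# "s3remote" is a hypothetical remote configured with `rclone config`.
rclone copy /home/admin/backups s3remote:my-da-backups-bucket/directadmin --bwlimit 10M
```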
Why is choosing the right location important?
Choosing the right server location reduces upload time and outbound traffic costs: the closer the server is to the S3 region, the lower the latency and the shorter the backup window.
If you need extra protection, dedicated connectivity, or specific performance characteristics, infrastructure providers with multiple locations and such features can help.
Operational example: Setup for a trading VPS
- Choose a location close to the exchange or trading server to reduce latency.
- Enable DDoS protection for the relevant server.
- Move DirectAdmin backups to a nearby S3 to reduce backup time.
Summary and final points
Building the bot involves three main parts: secure upload to S3, deletion of local files older than 2 days after the upload is confirmed, and an S3 lifecycle configuration for 7-day retention.
Use IAM with minimal access, encryption, and appropriate monitoring. Always test the restore process to ensure recovery.
Sample files and resources
- The sample lifecycle.json and the upload_and_clean.sh script are available in the text above.
- AWS S3 lifecycle documentation
- AWS CLI Documentation