De-mystifying AWS S3 Usage

AWS S3 is a fantastic resource for cloud object storage! The only complaint I often hear is about the lack of transparency around current usage. Although you can see the monthly costs, it's sometimes pretty hard to see exactly where you're using space most heavily.


Because S3 is an object storage engine, your files are not stored hierarchically or registered centrally, as they would be in a traditional file system, which essentially means that each file stored on S3 is an individual entity. Although you can see them in a pseudo-hierarchical view on the S3 dashboard, for instance, this is actually just a rendering of each file's path metadata. This means that there is no built-in traversal of files and directories, and no easily accessible way to see total space usage for a given bucket or bucket folder.


AWS CLI to the rescue! Using the modern CLI, we are able to query our buckets for more information.

NOTE: If this is your first time using the CLI, you need to follow the setup instructions, which cover how to install and authorize it.
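For reference, authorizing the CLI typically just means running aws configure with an access key pair (all of the values below are placeholders):

$ aws configure
AWS Access Key ID [None]: <your-access-key-id>
AWS Secret Access Key [None]: <your-secret-access-key>
Default region name [None]: eu-west-1
Default output format [None]: json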

$ aws s3 ls s3://<bucket-name>
PRE dir1/  
PRE dir2/  
PRE dir3/  

This isn't massively useful just yet; luckily, there is a --recursive option that will loop through every object in the bucket!

$ aws s3 ls s3://<bucket-name> --recursive
2016-08-11 12:44:38          0 dir1/  
2016-10-04 10:15:37          0 dir1/images/  
2016-10-04 10:20:09      21394 dir1/images/load_balancer_dns.png  
2016-10-04 10:20:08      40650 dir1/images/load_balancer_icon.png  
2016-10-04 10:20:09      19940 dir1/images/load_balancer_page.png  

We’re getting somewhere! Next we can apply some grep magic to exclude non-file results:

$ aws s3 ls s3://<bucket-name> --recursive | grep -v -E "(Bucket: |Prefix: |LastWriteTime|^$|--)"
2016-08-11 12:44:38          0 dir1/  
2016-10-04 10:15:37          0 dir1/images/  
2016-10-04 10:20:09      21394 dir1/images/load_balancer_dns.png  
2016-10-04 10:20:08      40650 dir1/images/load_balancer_icon.png  
2016-10-04 10:20:09      19940 dir1/images/load_balancer_page.png  

The next step is to total up the sizes from our results. For this we can use the handy awk command, summing the size column (the third field) and converting bytes to megabytes:

$ aws s3 ls s3://<bucket-name> --recursive | grep -v -E "(Bucket: |Prefix: |LastWriteTime|^$|--)" | awk 'BEGIN {total=0}{total+=$3}END{print total/1024/1024" MB"}'
2631.37 MB  

And there you have it: a single-line command to calculate the total AWS S3 space usage for a bucket or bucket folder.
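The same pipeline works for a single folder; just add the prefix to the S3 path (the bucket name is a placeholder, the folder is taken from the listing above):

$ aws s3 ls s3://<bucket-name>/dir1/images/ --recursive | grep -v -E "(Bucket: |Prefix: |LastWriteTime|^$|--)" | awk 'BEGIN {total=0}{total+=$3}END{print total/1024/1024" MB"}'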

…and now to delete some stuff we don’t really need anymore!
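If you do go on to delete things, aws s3 rm takes the same --recursive flag, and a --dryrun pass first will show what would be removed without actually deleting anything (bucket and prefix below are placeholders):

$ aws s3 rm s3://<bucket-name>/dir1/images/ --recursive --dryrun
$ aws s3 rm s3://<bucket-name>/dir1/images/ --recursive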

Automatic Backup of MSSQL Server to Amazon S3

In this article we will explain how to set up automatic backups of Microsoft SQL Server to Amazon S3. The method described here works well for all editions of MSSQL Server, including Microsoft SQL Server Express Edition.

We use Microsoft SQL Server Management Studio to generate the backup script.

If you do not have this tool installed, you can download and install it from Microsoft's website. If you are familiar with T-SQL, you can write the backup script manually or use an existing one.

Open SQL Server Management Studio, expand Databases and select the database you want to backup. Right-click the database, point to Tasks, and then click Back Up.

SQL Server Management Studio: right-click the database and choose Tasks -> Back Up

The Back Up Database dialog box will appear. Here you can configure various backup settings; a detailed description of the options is available in the SQL Server documentation.

The Back Up Database dialog allows you to configure various backup settings

Note that we changed the default destination to the Z: drive; this is a virtual drive that points to the Amazon S3 bucket. Check out these simple instructions to learn how to mount an S3 bucket as a Windows drive.

After you configure the backup, click Script -> Script Action to File and save the generated script to a file.

Script Action to File: save the generated SQL script to a file

The following script was generated:

BACKUP DATABASE [master] TO  DISK = N'Z:\mssql-backup\full-backup.bak' 
WITH  DESCRIPTION = N'Automatic backup to Amazon S3 (_full_)', 
NOFORMAT, NOINIT,  NAME = N'master-Full Database Backup', SKIP, NOREWIND, NOUNLOAD,  STATS = 10

Let’s see this script in action. Open Command Prompt and type the following command if you are using Windows Authentication:

sqlcmd -S .\SQLEXPRESS -i c:\scripts\full-db-backup.sql

Or type the following command if you are using SQL Server Authentication:

sqlcmd -U BackupUser -P password -S .\SQLEXPRESS -i c:\scripts\full-db-backup.sql

You should see output similar to this:

Microsoft Windows [Version 6.1.7601]
Copyright (c) 2009 Microsoft Corporation.  All rights reserved.

C:\Users\Administrator>sqlcmd -S .\SQLEXPRESS -i c:\scripts\full-db-backup.sql
11 percent processed.
20 percent processed.
31 percent processed.
40 percent processed.
52 percent processed.
61 percent processed.
70 percent processed.
81 percent processed.
90 percent processed.
Processed 352 pages for database 'master', file 'master' on file 5.
100 percent processed.
Processed 2 pages for database 'master', file 'mastlog' on file 5.
BACKUP DATABASE successfully processed 354 pages in 0.659 seconds (4.394 MB/sec).
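At this point it is worth confirming that the .bak file actually landed on the S3-backed drive; a plain directory listing does the job (path taken from the backup script above):

dir Z:\mssql-backup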


OK, now let's create the following batch file, c:\scripts\run-mssql-backup.cmd:

@echo off
sqlcmd -S .\SQLEXPRESS -i c:\scripts\full-db-backup.sql
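If you would like the scheduled runs to leave a trace, one simple variant, assuming a c:\logs folder exists, is to append sqlcmd's output to a log file:

@echo off
sqlcmd -S .\SQLEXPRESS -i c:\scripts\full-db-backup.sql >> c:\logs\mssql-backup.log 2>&1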

And finally, type the following command to add a new task to the Task Scheduler:

schtasks /Create /TN mssql-full-backup /SC DAILY /TR c:\scripts\run-mssql-backup.cmd /ST 01:00

This command creates a scheduled task “mssql-full-backup” to run c:\scripts\run-mssql-backup.cmd starting at 01:00 every day. You will see output similar to this:

SUCCESS: The scheduled task "mssql-full-backup" has successfully been created.
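To confirm that the task was registered, or to kick off a run immediately instead of waiting until 01:00, schtasks can query and run it by name:

schtasks /Query /TN mssql-full-backup
schtasks /Run /TN mssql-full-backup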

Congratulations, automated backup to Amazon S3 is configured.

How Do I Convert My SSL Certificate File To PEM Format?

Most Certificate Authorities (CAs) issue certificates in PEM format. PEM certificates typically have extensions like .pem, .crt, .cer, and .key.

The PEM format uses the header and footer lines -----BEGIN CERTIFICATE----- and -----END CERTIFICATE-----.

Other certificate formats include the DER/Binary, P7B/PKCS#7, and PFX/PKCS#12 formats. The AeroFS Appliance requires a certificate in PEM format in step 9 of the appliance setup. This certificate will be used to ensure secure transactions between your appliance and users' web browsers.

Converting Your Existing Certificate To PEM Format

If your certificate is not in PEM format, you can convert it to the PEM format using the following OpenSSL commands:

Convert DER to PEM

openssl x509 -inform der -in certificate.cer -out certificate.pem

Convert P7B to PEM

openssl pkcs7 -print_certs -in certificate.p7b -out certificate.pem

Convert PFX to PEM

openssl pkcs12 -in certificate.pfx -out certificate.pem -nodes
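If you need the certificate and the private key as two separate PEM files, the same pkcs12 command can split the PFX; the output file names here are only placeholders:

openssl pkcs12 -in certificate.pfx -clcerts -nokeys -out certificate.pem
openssl pkcs12 -in certificate.pfx -nocerts -nodes -out private.key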

Alternatively, you can use an online SSL converter tool.

Removing Passphrase From Existing Private Key File

If you try to upload a passphrase-protected private key file, you will get a “key is invalid” error message. To fix this, you will need to remove the passphrase from your private key file and upload the passphrase-free private key file to your appliance. You can remove the passphrase as follows:

1. Run openssl rsa -in example.key -out example.nocrypt.key

2. Enter your passphrase.
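As a quick sanity check, you can verify the decrypted key and confirm it matches the certificate by comparing the modulus of each; if the two MD5 hashes are identical, the key and certificate belong together (file names follow the examples above):

openssl rsa -in example.nocrypt.key -check -noout
openssl x509 -noout -modulus -in certificate.pem | openssl md5
openssl rsa -noout -modulus -in example.nocrypt.key | openssl md5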


