Installing SFTP/SSH Server on Windows using OpenSSH

Natively from Microsoft 

Working with Cygwin (so we can use passwordless login and rsync backups from a Windows to a Linux box):

Recently, Microsoft has released an early version of OpenSSH for Windows. You can use the package to set up an SFTP/SSH server on Windows.

Installing SFTP/SSH Server

  • Download the latest OpenSSH for Windows binaries
  • Extract the package to C:\Program Files\OpenSSH
  • As the Administrator, install SSHD and ssh-agent services:
    powershell.exe -ExecutionPolicy Bypass -File install-sshd.ps1
  • As the Administrator, generate server keys and restrict access to them by running the following commands from the C:\Program Files\OpenSSH directory:
    .\ssh-keygen.exe -A
    powershell.exe -ExecutionPolicy Bypass -Command ". .\FixHostFilePermissions.ps1 -Confirm:$false"
  • Allow incoming connections to SSH server in Windows Firewall:
    • Either run the following PowerShell command (Windows 8 and 2012 or newer only), as the Administrator:
      New-NetFirewallRule -Name sshd -DisplayName 'OpenSSH Server (sshd)' -Service sshd -Enabled True -Direction Inbound -Protocol TCP -Action Allow
    • or go to Control Panel > System and Security > Windows Firewall > Advanced Settings > Inbound Rules and add a new rule for the sshd service (or port 22).

Tools to check website, DNS and email hosting status

DNS health check:

Website check:

Check IP whitelist:

Azure speed test:

AWS speedtest:

Linode speed test:

Free & Public DNS Servers (Valid January 2018)

Provider Primary DNS Server Secondary DNS Server
Comodo Secure DNS
OpenDNS Home
Norton ConnectSafe
Alternate DNS
Hurricane Electric


Automating Backups using AWS Lambda

When thinking about cloud services, backup is often not taken into consideration, because we hear the words “ephemeral”, “self-healing” and “repositories” so much.

Sometimes, applications need to be backed up in order to achieve an RTO that suits the customer’s business objectives.

Automation is key here, so with this document we want to show how easy it is to set up a fully automated system, using Lambda functions written in Python and scheduled on a daily basis to fulfil the requirements.

Last but not least, storage usage is also important here. If you have hundreds of snapshots and you don’t delete them appropriately, you end up with terabytes of old, useless data. Removing snapshots based on a retention policy is therefore a very important part of this process too.

Let’s first define the prerequisites to get this working properly in our AWS account:

Setup IAM Permissions

  • Go to Services, IAM, Create a new Role
  • Write the name (ebs-lambda-worker)
  • Select AWS Lambda
  • Don’t select any policy, click Next, and Create Role.
  • Select the new role, and click Create Role Policy
  • Go to Custom Policy, click Select
  • Write a Policy Name, (snapshot-policy), and paste the content of the following gist.

  • What we’ve just done is allow this role to create/delete snapshots, create tags and modify snapshot attributes. We have also granted permissions to describe EC2 instances and write logs to CloudWatch.
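The gist itself is not reproduced here, but a policy matching the permissions just described would look roughly like this (a sketch inferred from the description above, not the original gist):

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": [
        "ec2:CreateSnapshot",
        "ec2:DeleteSnapshot",
        "ec2:CreateTags",
        "ec2:ModifySnapshotAttribute",
        "ec2:ResetSnapshotAttribute",
        "ec2:Describe*"
      ],
      "Resource": "*"
    },
    {
      "Effect": "Allow",
      "Action": [
        "logs:CreateLogGroup",
        "logs:CreateLogStream",
        "logs:PutLogEvents"
      ],
      "Resource": "arn:aws:logs:*:*:*"
    }
  ]
}
```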

Create Lambda Backup Function

This first function will back up every instance in our account, in the region where we deploy the Lambda function, that has a “Backup” or “backup” tag key. There is no need to specify a value here.

Before creating the function, I would like to briefly explain what it does. The script searches for all instances that have a “Backup” or “backup” tag. Once we have the list of instances, we get all the EBS volumes attached to each one, in order to build the list of volumes to be backed up. It also looks for a “Retention” tag key, whose value is used as the retention period in days. If there is no tag with that name, a default of 7 days is used for each EBS volume.

After creating each snapshot, the function adds a “DeleteOn” tag to it, indicating when it will be deleted; the date is computed from the Retention value, and another Lambda function, explained later in this document, acts on it.

Steps to create the function:

  • Go to Services, Lambda, and click Create a Lambda Function
  • Skip the blueprint screen
  • Write a name for it (ebs-backup-worker)
  • Select Python 2.7 as a Runtime option
  • Paste the code below
  • Select the previously created IAM role (ebs-lambda-worker)
  • Click Next and Create Function
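The code referenced in the steps is not reproduced here; based on the behaviour described above, it might be sketched as follows (an illustrative sketch, not the original code: the tag names “Backup”/“backup”, “Retention”, “DeleteOn” and the 7-day default come from the text, while the boto3 wiring is an assumption):

```python
import datetime

DEFAULT_RETENTION_DAYS = 7  # used when an instance has no "Retention" tag

def retention_days(tags):
    """Return the value of the "Retention" tag, or the 7-day default."""
    for tag in tags or []:
        if tag.get('Key') == 'Retention':
            return int(tag['Value'])
    return DEFAULT_RETENTION_DAYS

def delete_on(tags, today=None):
    """Compute the "DeleteOn" tag value (YYYY-MM-DD) from the retention."""
    today = today or datetime.date.today()
    return (today + datetime.timedelta(days=retention_days(tags))).isoformat()

def lambda_handler(event, context):
    import boto3  # imported here so the helpers above work without AWS
    ec2 = boto3.client('ec2')
    # Find every instance carrying a "Backup" or "backup" tag key.
    reservations = ec2.describe_instances(
        Filters=[{'Name': 'tag-key', 'Values': ['Backup', 'backup']}]
    )['Reservations']
    for reservation in reservations:
        for instance in reservation['Instances']:
            tags = instance.get('Tags', [])
            # Snapshot each attached EBS volume and tag it with "DeleteOn".
            for mapping in instance.get('BlockDeviceMappings', []):
                if 'Ebs' not in mapping:
                    continue
                snapshot = ec2.create_snapshot(
                    VolumeId=mapping['Ebs']['VolumeId'],
                    Description='Backup of ' + instance['InstanceId'])
                ec2.create_tags(
                    Resources=[snapshot['SnapshotId']],
                    Tags=[{'Key': 'DeleteOn', 'Value': delete_on(tags)}])
```

The date helpers are plain Python, so the retention logic can be tested without touching AWS.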

Create Lambda Prune Snapshot Function

Our snapshots are created successfully by our previous function but, as explained at the beginning of this document, we need to remove them when they are not needed anymore.

Reminder: By default, every instance will be backed up if it has a “Backup” or “backup” tag. Also, if no “Retention” tag is added, snapshots for the instance will be removed after a week, as the backup Lambda function adds the “DeleteOn” tag key to each snapshot with the specific date when it must be deleted.

Using the same steps as before, create the function (ebs-backup-prune)

Use the following code:
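A prune function matching the description might look like the following (an illustrative sketch, not the original snippet: the “DeleteOn” tag comes from the text, the boto3 wiring is an assumption):

```python
import datetime

def due_for_deletion(snapshot, today=None):
    """True when the snapshot's "DeleteOn" tag is today or in the past."""
    today = today or datetime.date.today()
    for tag in snapshot.get('Tags', []):
        if tag['Key'] == 'DeleteOn':
            year, month, day = map(int, tag['Value'].split('-'))
            return datetime.date(year, month, day) <= today
    return False  # no "DeleteOn" tag: never delete

def lambda_handler(event, context):
    import boto3  # imported here so due_for_deletion stays testable offline
    ec2 = boto3.client('ec2')
    # Only consider our own snapshots that carry a "DeleteOn" tag.
    snapshots = ec2.describe_snapshots(
        OwnerIds=['self'],
        Filters=[{'Name': 'tag-key', 'Values': ['DeleteOn']}])['Snapshots']
    for snapshot in snapshots:
        if due_for_deletion(snapshot):
            ec2.delete_snapshot(SnapshotId=snapshot['SnapshotId'])
```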

You will end up with something like this:

So, you now have 2 working functions that will back up EBS volumes into snapshots and remove them when “DeleteOn” says so. Now it is time to automate this using Lambda’s Event sources feature.

Schedule our Functions

Both functions need to run at least once a day. To do that, we need to:

  • Go to Services, Lambda, click on the function name
  • Click on Event sources
  • Click on Add event source
  • Select Scheduled Event
  • Type Name: backup-daily or remove-daily based on the function you are scheduling
  • Schedule expression: rate(1 day)
  • Click Submit

Integration of pmacct with ElasticSearch and Kibana

In this post I want to show a solution based on a script (pmacct-to-elasticsearch) that I made to gather data from pmacct and visualize them using Kibana/ElasticSearch. It’s far from being the state of the art of IP accounting solutions, but it may be used as a starting point for further customizations and developments.

I plan to write another post with some ideas to integrate pmacct with the canonical ELK stack (ElasticSearch/Logstash/Kibana). As usual, add my RSS feed to your reader or follow me on Twitter to stay updated!

The big picture

This is the big picture of the proposed solution:

pmacct-to-elasticsearch - The big picture

There are 4 main actors: the pmacct daemons (we already saw how to install and configure them), which collect accounting data; pmacct-to-elasticsearch, which reads pmacct’s output, processes it and sends it to ElasticSearch, where data are stored and organized into indices; and, at last, Kibana, which is used to chart them on a web frontend.

The starting point of this tutorial is the scenario previously viewed in the Installing pmacct on a fresh Ubuntu setup post.

UPDATE: I made some changes to the original text (that was about Kibana 4 Beta 2) since Kibana 4 has been officially released

In the first part of this post I’ll cover a simple setup of both ElasticSearch 1.4.4 and Kibana 4.

In the second part I’ll show how to integrate pmacct-to-elasticsearch with the other components.

Setup of ElasticSearch and Kibana

This is a quick guide to setup the aforementioned programs in order to have a working scenario for my goals: please strongly consider security and scalability issues before using it for a real production environment. You can find everything you need on the ElasticSearch web site.


Install Java (Java 8 update 20 or later, or Java 7 update 55 or later are recommended at time of writing for ElasticSearch 1.4.4):

# apt-get install openjdk-7-jre


Install ElasticSearch from its APT repository

# wget -qO - | sudo apt-key add -
# add-apt-repository "deb stable main"
# apt-get update && apt-get install elasticsearch

… and (optionally) configure it to automatically start on boot:

# update-rc.d elasticsearch defaults 95 10

Since this post covers only a simple setup, tuning and advanced configuration are out of its scope, but it is advisable to consider the official configuration guide for any production-ready setup.
Just change a network parameter to be sure that ES does not listen on any public socket; edit the /etc/elasticsearch/elasticsearch.yml file and set:
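The exact parameter is not shown here; assuming the standard network.host directive of ElasticSearch 1.x, binding it to the loopback interface only would look like:

```yaml
# /etc/elasticsearch/elasticsearch.yml
# Listen on localhost only, so no public socket is opened
network.host: 127.0.0.1
```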

Finally, start it:

# service elasticsearch start

Wait a few seconds; then, if everything is OK, you can check its status with an HTTP query:

# curl http://localhost:9200/?pretty
{
  "status" : 200,
  "name" : "Wild Thing",
  "cluster_name" : "elasticsearch",
  "version" : {
    "number" : "1.4.4",
    "build_hash" : "c88f77ffc81301dfa9dfd81ca2232f09588bd512",
    "build_timestamp" : "2015-02-19T13:05:36Z",
    "build_snapshot" : false,
    "lucene_version" : "4.10.3"
  },
  "tagline" : "You Know, for Search"
}

Kibana 4

Download and install the right version of Kibana 4, depending on your architecture (here I used the x64):

# cd /opt
# curl -O
# tar -zxvf kibana-4.0.0-linux-x64.tar.gz

By default, Kibana listens on port 5601 for the web front-end: again, for this simple setup it’s OK, but be sure to protect your server using a firewall and/or a reverse proxy like Nginx.

Run it (here I put it in background and redirect its output to /var/log/kibana4.log):

# /opt/kibana-4.0.0-linux-x64/bin/kibana > /var/log/kibana4.log &

Wait some seconds until it starts, then point your browser at http://YOUR_IP_ADDRESS:5601 to check that everything is fine.

pmacct-to-elasticsearch configuration

Now that all the programs we need are up and running we can focus on pmacct-to-elasticsearch setup.

pmacct-to-elasticsearch is designed to read JSON output from pmacct daemons, process it and store it into ElasticSearch. It works with both memory and print plugins and can optionally perform manipulations on the data (such as adding fields based on the values of others).

pmacct-to-elasticsearch Data flow

Install git, download the repository from GitHub and install it:

# apt-get install git
# cd /usr/local/src/
# git clone
# cd pmacct-to-elasticsearch/
# ./install

Now it’s time to configure pmacct-to-elasticsearch to send some records to ElasticSearch. Configuration details can be found in the file.

In the last post an instance of pmacctd was configured, with a memory plugin named plugin1 that performed aggregation on a socket basis (src host:port / dst host:port / protocol):

plugins: memory[plugin1]

imt_path[plugin1]: /var/spool/pmacct/plugin1.pipe
aggregate[plugin1]: etype, proto, src_host, src_port, dst_host, dst_port

In order to have pmacct-to-elasticsearch process plugin1’s output, we need to create the homonymous pmacct-to-elasticsearch configuration file, /etc/p2es/plugin1.conf; default values already point pmacct-to-elasticsearch to the local instance of ElasticSearch (URL = http://localhost:9200), so we just need to set the destination index name and type:

{
    "ES_IndexName": "example-%Y-%m-%d",
    "ES_Type": "socket"
}

Since this is a memory plugin, we also need to schedule a crontab task to consume data from the in-memory table and pass them to pmacct-to-elasticsearch, so edit the /etc/cron.d/pmacct-to-elasticsearch file and add the line:

*/5 *  * * *     root  pmacct -l -p /var/spool/pmacct/plugin1.pipe -s -O json -e | pmacct-to-elasticsearch plugin1

Everything is now ready to have the first records inserted into ElasticSearch: if you don’t want to wait for the crontab task to run, execute the above command from the command line, then query ElasticSearch to show the records:

# curl http://localhost:9200/example-`date +%F`/socket/_search?pretty
{
  "hits" : {
    "total" : 6171,
    "max_score" : 1.0,
    "hits" : [ {
      "_index" : "example-2014-12-15",
      "_type" : "socket",
      "_id" : "AUo910oSOUAYMzMu9bxU",
      "_score" : 1.0,
      "_source": { "packets": 1, "ip_dst": "",
                   "@timestamp": "2014-12-15T19:32:02Z", "bytes": 256,
                   "port_dst": 56529, "etype": "800", "port_src": 53,
                   "ip_proto": "udp", "ip_src": "" }
    } ]
  }
}

(the `date +%F` is used here to obtain the current date in the format used by the index name, that is YYYY-MM-DD)

Just to try the configuration for a print plugin, edit the /etc/pmacct/pmacctd.conf configuration file, change the plugins line and add the rest:

plugins: memory[plugin1], print[plugin2]

print_output_file[plugin2]: /var/lib/pmacct/plugin2.json
print_output[plugin2]: json
print_trigger_exec[plugin2]: /etc/p2es/triggers/plugin2
print_refresh_time[plugin2]: 60
aggregate[plugin2]: proto, src_port
aggregate_filter[plugin2]: src portrange 0-1023

Then, prepare the p2es configuration file for pmacct-to-elasticsearch execution for this plugin (/etc/p2es/plugin2.conf):

{
    "ES_IndexName": "example-%Y-%m-%d",
    "ES_Type": "source_port",
    "InputFile": "/var/lib/pmacct/plugin2.json"
}

Here, pmacct-to-elasticsearch is instructed to read from /var/lib/pmacct/plugin2.json, the file where pmacctd daemon writes to.

As you can see from the pmacctd plugin2 configuration above, a trigger is needed in order to run pmacct-to-elasticsearch: /etc/p2es/triggers/plugin2. Just add a link to the default_trigger script and it’s done:

# cd /etc/p2es/triggers/
# ln -s default_trigger plugin2

Now you can restart pmacct daemons in order to load the new configuration for plugin2:

# service pmacct restart

or, if you preferred not to install my pmacct System V initscript:

# killall -INT pmacctd -w ; pmacctd -f /etc/pmacct/pmacctd.conf -D

After the daemon has finished writing the output file (/var/lib/pmacct/plugin2.json), it runs the trigger which, in turn, executes pmacct-to-elasticsearch with the right argument (plugin2) and detaches it.

Wait a minute, then query ElasticSearch from command line:

# curl http://localhost:9200/example-`date +%F`/source_port/_search?pretty

From now on it’s just a matter of customizations and visualization in Kibana. The official Kibana 4 Quick Start guide can help you to create visualizations and graphs. Remember, the name of the index used in these examples follows the [example-]YYYY-MM-DD daily pattern.


Time series indices tend to grow and fill up disk space, so a rotation policy may be useful to delete data older than a specific date.

The Curator tool and its Delete command can help you in this:

# apt-get install python-pip
# pip install elasticsearch-curator

Once installed, test it using the right arguments…

# curator --dry-run delete indices --prefix example- --timestring %Y-%m-%d --older-than 1 --time-unit days
2014-12-15 19:04:13,026 INFO      Job starting...
2014-12-15 19:04:13,027 INFO      DRY RUN MODE.  No changes will be made.
2014-12-15 19:04:13,031 INFO      DRY RUN: Deleting indices...
2014-12-15 19:04:13,035 INFO      example-2014-12-15 is within the threshold period (1 days).
2014-12-15 19:04:13,035 INFO      DRY RUN: Speficied indices deleted.
2014-12-15 19:04:13,036 INFO      Done in 0:00:00.020131.

… and, eventually, schedule it in the pmacct-to-elasticsearch crontab file (/etc/cron.d/pmacct-to-elasticsearch), setting the desired retention period:

# m h dom mon dow user  command
0 1  * * *     root  curator delete indices --prefix example- --timestring \%Y-\%m-\%d --older-than 30 --time-unit days

Of course, you can use Curator for many other management and optimization tasks too, but they are out of the scope of this post.

Git Memo – Repository Backup

file-system copy

A git repository can be safely copied to another directory, or put in an archive file. This is a very basic way of keeping a backup of your repo.

cp -r myrepo backup_copy
tar -czf backup_copy.tgz myrepo

But such a frozen copy cannot be updated.

git bundle

Git bundle lets you pack the references of your repository into a single file, but unlike the tar command above, the bundle is a recognized git source: you can fetch, pull or clone from it.

$ git bundle create /path/to/mybundle master branch2 branch3

or to get all refs

$ git bundle create /path/to/mybundle --all

You can then check what is in your bundle:

$ git bundle list-heads /path/to/mybundle

After moving the bundle to any location, you can get a new repository with:

$ git clone -b master /path/to/mybundle newrepo

If you want to continue your development in the first repository, you can do as follows:

$ git tag lastsent master
#.... do some commits
$ git bundle create /path/to/newcommitbundle lastsent..master

Then send it by email or any dumb transport, and in the repository on the other side:

$ git bundle verify /path/to/newcommitbundle
$ git pull /path/to/newcommitbundle master

More details in git-bundle(1) Manual Page and Git’s Little Bundle of Joy

There is a discussion on Stack Overflow: Backup a Local Git Repository

gcrypt remote

Joey Hess’s gcrypt remote offers a great solution for backing up to an insecure location.

You should first install git-remote-gcrypt from his repository

Then create the encrypted remote by pushing to it:

git remote add cryptremote gcrypt::rsync://
git push cryptremote master

More information in the GitHub repository

and in Joey Hess: fully encrypted git repositories with gcrypt and git-annex special remote: gcrypt

git mirror

It is more interesting to use git itself to do backups.

You can mirror your repository to a bare repository with:

git clone --mirror myrepo repo_backup.git

repo_backup.git has the same branches as myrepo and an “origin” remote pointing to myrepo. You can update the copy by running git fetch origin from inside repo_backup.git.

mirror remote

In the previous solution the backup repository tracks its origin, but the original repository knows nothing about its backup. This can be solved by using a mirror remote instead.

You first create an empty bare repository:

git init --bare /path/to/reposave.git

Then in myrepo:

git remote add --mirror=push backup file:///path/to/reposave.git
git push backup

The backup repository will be updated by repeating the last command.

If you want to be able to recover from a bad push to the backup, you may want the backup to keep a reflog. As reflogs are disabled by default in bare repositories, you should enable them in reposave.git:

git config core.logAllRefUpdates true

live mirror

Sometimes you may want a mirror to experiment with a dangerous operation. In this case the mirror is not a backup; it is better to think of the origin as the backup. A clone --mirror is not the solution, since it creates a bare repository. A simple clone is not either, even though it has all the references, because it has only one local branch; the other branches are just registered as remotes/origin/name.

As you want to experiment in this repository, you probably don’t even want to track the origin: if you succeed, you will replace the original repository; if you fail, you will simply remove the experimental one.

You first do a simple clone:

git clone myrepo /path/to/experimental

Then create all missing branches:

cd /path/to/experimental
git branch --no-track branch2 remotes/origin/branch2
git branch --no-track branch3 remotes/origin/branch3
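With many branches, creating them one by one gets tedious, and the same steps can be scripted. Below is a self-contained sketch that builds a throwaway origin with two extra branches, clones it, and recreates every remote branch locally (paths and branch names are illustrative):

```shell
set -e
tmp=$(mktemp -d)

# Build a small origin repository with two extra branches.
git init -q "$tmp/myrepo"
cd "$tmp/myrepo"
git -c user.email=me@example.com -c user.name=me \
    commit -q --allow-empty -m "initial commit"
git branch branch2
git branch branch3

# A plain clone only creates one local branch (the checked-out one).
git clone -q "$tmp/myrepo" "$tmp/experimental"
cd "$tmp/experimental"

# Recreate every remote-tracking branch as a local, non-tracking branch.
for ref in $(git for-each-ref --format='%(refname)' refs/remotes/origin); do
    name=${ref#refs/remotes/origin/}
    case "$name" in HEAD|master|main) continue ;; esac
    git branch --no-track "$name" "$ref"
done

git branch   # now lists branch2, branch3 and the default branch
```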

Delete the refs to origin:

git branch -rd origin/master origin/branch2 \
    origin/branch3 origin/HEAD
git remote remove origin

And play with experimental.