Installing SFTP/SSH Server on Windows using OpenSSH

Natively from Microsoft https://winscp.net/eng/docs/guide_windows_openssh_server 

Working with Cygwin (so we can use passwordless SSH and rsync backups from a Windows box to a Linux box): https://www.mls-software.com/files/setupssh-7.6p1-1.exe

Recently, Microsoft has released an early version of OpenSSH for Windows. You can use the package to set up an SFTP/SSH server on Windows.

Installing SFTP/SSH Server

  • Download the latest OpenSSH for Windows binaries (package OpenSSH-Win64.zip or OpenSSH-Win32.zip)
  • Extract the package to C:\Program Files\OpenSSH
  • As the Administrator, install the sshd and ssh-agent services:
    powershell.exe -ExecutionPolicy Bypass -File install-sshd.ps1
  • As the Administrator, generate server keys and restrict access to them, by running the following commands from the C:\Program Files\OpenSSH directory:
    .\ssh-keygen.exe -A
    powershell.exe -ExecutionPolicy Bypass -Command ". .\FixHostFilePermissions.ps1 -Confirm:$false"
  • Allow incoming connections to SSH server in Windows Firewall:
    • Either run the following PowerShell command (Windows 8 and 2012 or newer only), as the Administrator:
      New-NetFirewallRule -Name sshd -DisplayName 'OpenSSH Server (sshd)' -Service sshd -Enabled True -Direction Inbound -Protocol TCP -Action Allow
    • or go to Control Panel > System and Security > Windows Firewall > Advanced Settings > Inbound Rules and add a new rule for the sshd service (or port 22).

Tools to check website – DNS – email hosting status

DNS health check:

Website check:

Check IP whitelist:

Azure speed test:

AWS speedtest:

Linode speed test: https://www.linode.com/speedtest

https://www.lifewire.com/internet-speed-test-sites-2626177

Free & Public DNS Servers (Valid January 2018)

https://servers.opennic.org/

Provider             Primary DNS Server   Secondary DNS Server
Level3               209.244.0.3          209.244.0.4
Verisign             64.6.64.6            64.6.65.6
Google               8.8.8.8              8.8.4.4
Quad9                9.9.9.9              149.112.112.112
DNS.WATCH            84.200.69.80         84.200.70.40
Comodo Secure DNS    8.26.56.26           8.20.247.20
OpenDNS Home         208.67.222.222       208.67.220.220
Norton ConnectSafe   199.85.126.10        199.85.127.10
GreenTeamDNS         81.218.119.11        209.88.198.133
SafeDNS              195.46.39.39         195.46.39.40
OpenNIC              69.195.152.204       23.94.60.240
SmartViper           208.76.50.50         208.76.51.51
Dyn                  216.146.35.35        216.146.36.36
FreeDNS              37.235.1.174         37.235.1.177
Alternate DNS        198.101.242.72       23.253.163.53
Yandex.DNS           77.88.8.8            77.88.8.1
UncensoredDNS        91.239.100.100       89.233.43.71
Hurricane Electric   74.82.42.42
puntCAT              109.69.8.51

 

Automating Backups using AWS Lambda

https://medium.com/cognitoiq/automating-backups-using-aws-lambda-baa013fdffc7

When thinking about Cloud Services, backup is something that is often not taken into consideration, because we hear words like “ephemeral”, “self-healing” and “repositories” so often.

Sometimes, applications need to be backed up in order to achieve an RTO (Recovery Time Objective) that suits the customer’s business objectives.

Automation is key here, so in this document we want to show how easy it is to set up a fully automated system using Lambda functions written in Python and scheduled on a daily basis to fulfil the requirements.

Last but not least, storage usage is also important here. If you have hundreds of snapshots and you don’t delete them appropriately, you end up with terabytes of old, useless data. Removing snapshots based on a retention policy is a very important part of this process too.

Let’s define first the pre-requisites to have this properly working in our AWS account:

Setup IAM Permissions

  • Go to Services, IAM, Create a new Role
  • Write the name (ebs-lambda-worker)
  • Select AWS Lambda
  • Don’t select any policy, click Next, and Create Role.
  • Select the new role, and click Create Role Policy
  • Go to Custom Policy, click Select
  • Write a Policy Name (snapshot-policy) and paste the content of the following gist.

https://gist.githubusercontent.com/fernandohonig/12caf85034d91a4746eb/raw/2ed89a1f70ea84d739cedcb7ae3785c1e7d0d957/gistfile1.txt

  • What we’ve just done is allow this role to create and delete snapshots, create tags, and modify snapshot attributes. We have also allowed it to describe EC2 instances and to use CloudWatch Logs.
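If you prefer to script this step rather than use the console, a boto3 sketch along the following lines should work. The policy document here is only reconstructed from the description in the bullet above; the linked gist remains the authoritative version.

# Hypothetical boto3 sketch: attach an inline policy to the ebs-lambda-worker role.
# The policy document is reconstructed from the description above, not copied from the gist.
import json
import boto3

iam = boto3.client("iam")

policy = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": [
                "ec2:CreateSnapshot",
                "ec2:DeleteSnapshot",
                "ec2:CreateTags",
                "ec2:ModifySnapshotAttribute",
                "ec2:ResetSnapshotAttribute",
                "ec2:Describe*"
            ],
            "Resource": "*"
        },
        {
            "Effect": "Allow",
            "Action": [
                "logs:CreateLogGroup",
                "logs:CreateLogStream",
                "logs:PutLogEvents"
            ],
            "Resource": "arn:aws:logs:*:*:*"
        }
    ]
}

iam.put_role_policy(
    RoleName="ebs-lambda-worker",
    PolicyName="snapshot-policy",
    PolicyDocument=json.dumps(policy)
)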

Create Lambda Backup Function

This first function backs up every instance in the region where the Lambda function is deployed that has a “Backup” or “backup” tag key. There is no need to set a value for the tag.

Before creating the function I would like to briefly explain what it does. The script searches for all instances that have a “Backup” or “backup” tag. Once we have the list of instances, we collect all the EBS volumes attached to each instance, which gives us the list of volumes to be backed up. The script also looks for a “Retention” tag key, whose value is used as the retention period in days; if there is no tag with that name, a default of 7 days is used for each volume.

After creating each snapshot, the function adds a “DeleteOn” tag to it indicating when it should be deleted, based on the Retention value. The actual deletion is performed by another Lambda function that we explain later in this document.

Steps to create the function:

  • Go to Services, Lambda, and click Create a Lambda Function
  • Skip the blueprint screen
  • Write a name for it (ebs-backup-worker)
  • Select Python 2.7 as a Runtime option
  • Paste the code below
  • Select the previously created IAM role (ebs-lambda-worker)
  • Click Next and Create Function

https://gist.githubusercontent.com/fernandohonig/1a6921af7e1735f89f91/raw/e8d9eae135b4902669c5f4275d2a8dfe2cf5c04d/gistfile1.txt
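The gist linked above is the author’s complete function. As a rough illustration of the logic just described (and only that — this is not the gist’s code), a minimal boto3 sketch could look like the following:

# Minimal sketch of the backup logic described above (not the linked gist's code).
# Pagination and error handling are omitted for brevity.
import datetime
import boto3

ec2 = boto3.client("ec2")

def lambda_handler(event, context):
    # Find instances carrying a "Backup" or "backup" tag key
    reservations = ec2.describe_instances(
        Filters=[{"Name": "tag-key", "Values": ["Backup", "backup"]}]
    )["Reservations"]

    for reservation in reservations:
        for instance in reservation["Instances"]:
            tags = {t["Key"]: t["Value"] for t in instance.get("Tags", [])}
            retention_days = int(tags.get("Retention", 7))  # default: 7 days
            delete_on = (
                datetime.date.today() + datetime.timedelta(days=retention_days)
            ).strftime("%Y-%m-%d")

            # Snapshot every EBS volume attached to the instance
            for mapping in instance.get("BlockDeviceMappings", []):
                if "Ebs" not in mapping:
                    continue
                volume_id = mapping["Ebs"]["VolumeId"]
                snapshot = ec2.create_snapshot(
                    VolumeId=volume_id,
                    Description="Backup of %s (%s)" % (instance["InstanceId"], volume_id),
                )
                # Tag the snapshot so the prune function knows when to delete it
                ec2.create_tags(
                    Resources=[snapshot["SnapshotId"]],
                    Tags=[{"Key": "DeleteOn", "Value": delete_on}],
                )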

Create Lambda Prune Snapshot Function

Our snapshots are now created successfully by the previous function but, as explained at the beginning of this document, we need to remove them when they are no longer needed.

Reminder: by default, every instance that has a “Backup” or “backup” tag will be backed up. Also, if no “Retention” tag is added, snapshots for the instance will be removed after a week, because the backup Lambda function adds a “DeleteOn” tag key to each snapshot with the specific date on which it must be deleted.

Using the same steps as before, create the function (ebs-backup-prune)

Use the following code:

https://gist.githubusercontent.com/fernandohonig/3eed6cd31a76e8ba7199/raw/c733ff7f91ebe7679f0d5eb0300a87a37311e714/gistfile1.txt
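Again, the gist above is the author’s version. A minimal sketch of the pruning logic it is described to implement (assuming boto3, and ignoring pagination) might look like this:

# Minimal sketch of the prune logic described above (not the linked gist's code).
import datetime
import boto3

ec2 = boto3.client("ec2")

def lambda_handler(event, context):
    today = datetime.date.today().strftime("%Y-%m-%d")

    # Only look at snapshots owned by this account that carry a "DeleteOn" tag
    snapshots = ec2.describe_snapshots(
        OwnerIds=["self"],
        Filters=[{"Name": "tag-key", "Values": ["DeleteOn"]}],
    )["Snapshots"]

    for snapshot in snapshots:
        tags = {t["Key"]: t["Value"] for t in snapshot.get("Tags", [])}
        # Delete snapshots whose DeleteOn date (YYYY-MM-DD) is today or earlier
        if tags.get("DeleteOn", today) <= today:
            ec2.delete_snapshot(SnapshotId=snapshot["SnapshotId"])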


So, you now have two working functions that will back up EBS volumes into snapshots and remove them on the date the “DeleteOn” tag specifies. Now it is time to automate this using Lambda’s Event sources feature.

Schedule our Functions

We need to run both of them at least once a day. To do that (a scripted alternative using boto3 is sketched after this list), we need to:

  • Go to Services, Lambda, click on the function name
  • Click on Event sources
  • Click on Add event source
  • Select Scheduled Event
  • Type Name: backup-daily or remove-daily based on the function you are scheduling
  • Schedule expression: rate(1 day)
  • Click Submit
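If you prefer to script the scheduling instead of clicking through the console, a boto3 sketch along these lines should work. Using a CloudWatch Events rule is my assumption of the equivalent API path (the console’s Scheduled Event source uses the same rate(1 day) expression); repeat it with remove-daily and ebs-backup-prune for the second function.

# Sketch: schedule a Lambda function once a day via a CloudWatch Events rule.
# Function and rule names are the examples used in this document.
import boto3

events = boto3.client("events")
awslambda = boto3.client("lambda")

# Create (or update) the daily schedule rule
rule_arn = events.put_rule(
    Name="backup-daily",
    ScheduleExpression="rate(1 day)",
)["RuleArn"]

function_arn = awslambda.get_function(
    FunctionName="ebs-backup-worker"
)["Configuration"]["FunctionArn"]

# Allow CloudWatch Events to invoke the function, then attach it as a target
awslambda.add_permission(
    FunctionName="ebs-backup-worker",
    StatementId="backup-daily-event",
    Action="lambda:InvokeFunction",
    Principal="events.amazonaws.com",
    SourceArn=rule_arn,
)
events.put_targets(
    Rule="backup-daily",
    Targets=[{"Id": "1", "Arn": function_arn}],
)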

Integration of pmacct with ElasticSearch and Kibana

https://blog.pierky.com/integration-of-pmacct-with-elasticsearch-and-kibana/

In this post I want to show a solution based on a script (pmacct-to-elasticsearch) that I made to gather data from pmacct and visualize them using Kibana/ElasticSearch. It’s far from being the state of the art of IP accounting solutions, but it may be used as a starting point for further customizations and developments.

I plan to write another post with some ideas to integrate pmacct with the canonical ELK stack (ElasticSearch/Logstash/Kibana).

The big picture

This is the big picture of the proposed solution:

(Figure: pmacct-to-elasticsearch – the big picture)

There are four main actors: the pmacct daemons (we already saw how to install and configure them), which collect accounting data; pmacct-to-elasticsearch, which reads pmacct’s output, processes it and sends it to ElasticSearch, where data are stored and organized into indices; and, finally, Kibana, which is used to chart them on a web frontend.


The starting point of this tutorial is the scenario previously viewed in the Installing pmacct on a fresh Ubuntu setup post.

UPDATE: I made some changes to the original text (which was about Kibana 4 Beta 2) since Kibana 4 has been officially released.

In the first part of this post I’ll cover a simple setup of both ElasticSearch 1.4.4 and Kibana 4.

In the second part I’ll show how to integrate pmacct-to-elasticsearch with the other components.

Setup of ElasticSearch and Kibana

This is a quick guide to set up the aforementioned programs in order to have a working scenario for my goals: please strongly consider security and scalability issues before using it in a real production environment. You can find everything you need on the ElasticSearch web site.

Dependencies

Install Java (Java 8 update 20 or later, or Java 7 update 55 or later, is recommended at the time of writing for ElasticSearch 1.4.4):

# apt-get install openjdk-7-jre

ElasticSearch

Install ElasticSearch from its APT repository

# wget -qO - https://packages.elasticsearch.org/GPG-KEY-elasticsearch | sudo apt-key add -
# add-apt-repository "deb http://packages.elasticsearch.org/elasticsearch/1.4/debian stable main"
# apt-get update && apt-get install elasticsearch

… and (optionally) configure it to automatically start on boot:

# update-rc.d elasticsearch defaults 95 10

Since this post covers only a simple setup, tuning and advanced configuration are out of its scope, but it is advisable to consider the official configuration guide for any production-ready setup.
Just change a network parameter to be sure that ES does not listen on any public socket; edit the /etc/elasticsearch/elasticsearch.yml file and set

network.host: 127.0.0.1

Finally, start it:

# service elasticsearch start

Wait a few seconds; then, if everything is OK, you can check its status with an HTTP query:

# curl http://localhost:9200/?pretty
{
  "status" : 200,
  "name" : "Wild Thing",
  "cluster_name" : "elasticsearch",
  "version" : {
    "number" : "1.4.4",
    "build_hash" : "c88f77ffc81301dfa9dfd81ca2232f09588bd512",
    "build_timestamp" : "2015-02-19T13:05:36Z",
    "build_snapshot" : false,
    "lucene_version" : "4.10.3"
  },
  "tagline" : "You Know, for Search"
}

Kibana 4

Download and install the right version of Kibana 4, depending on your architecture (here I used the x64):

# cd /opt
# curl -O https://download.elasticsearch.org/kibana/kibana/kibana-4.0.0-linux-x64.tar.gz
# tar -zxvf kibana-4.0.0-linux-x64.tar.gz

By default, Kibana listens on 0.0.0.0:5601 for the web front-end: again, for this simple setup it’s OK, but be sure to protect your server using a firewall and/or a reverse proxy like Nginx.

Run it (here I put it in background and redirect its output to /var/log/kibana4.log):

# /opt/kibana-4.0.0-linux-x64/bin/kibana > /var/log/kibana4.log &

Wait a few seconds until it starts, then point your browser at http://YOUR_IP_ADDRESS:5601 to check that everything is fine.

pmacct-to-elasticsearch configuration

Now that all the programs we need are up and running we can focus on pmacct-to-elasticsearch setup.

pmacct-to-elasticsearch is designed to read JSON output from pmacct daemons, to process it and to store it into ElasticSearch. It works with both memory and print plugins and, optionally, it can perform manipulations on data (such as to add fields on the basis of other values).

(Figure: pmacct-to-elasticsearch data flow)

Install git, download the repository from GitHub and install it:

# apt-get install git
# cd /usr/local/src/
# git clone https://github.com/pierky/pmacct-to-elasticsearch.git
# cd pmacct-to-elasticsearch/
# ./install

Now it’s time to configure pmacct-to-elasticsearch to send some records to ElasticSearch. Configuration details can be found in the CONFIGURATION.md file.

In the last post an instance of pmacctd was configured, with a memory plugin named plugin1 that was performing aggregation on a socket basis (src host:port / dst host:port / protocol):

plugins: memory[plugin1]

imt_path[plugin1]: /var/spool/pmacct/plugin1.pipe
aggregate[plugin1]: etype, proto, src_host, src_port, dst_host, dst_port

In order to have pmacct-to-elasticsearch process plugin1’s output, we need to create the pmacct-to-elasticsearch configuration file of the same name, /etc/p2es/plugin1.conf; the default values already point pmacct-to-elasticsearch to the local instance of ElasticSearch (URL = http://localhost:9200), so we just need to set the destination index name and type:

{
    "ES_IndexName": "example-%Y-%m-%d",
    "ES_Type": "socket"
}

Since this is a memory plugin, we also need to schedule a crontab task to consume data from the in-memory table and pass them to pmacct-to-elasticsearch, so edit the /etc/cron.d/pmacct-to-elasticsearch file and add the line:

*/5 *  * * *     root  pmacct -l -p /var/spool/pmacct/plugin1.pipe -s -O json -e | pmacct-to-elasticsearch plugin1

Everything is now ready to have the first records inserted into ElasticSearch: if you don’t want to wait for the crontab task to run, execute the above command from the command line, then query ElasticSearch to show the records:

# curl http://localhost:9200/example-`date +%F`/socket/_search?pretty
{
  ...
  "hits" : {
    "total" : 6171,
    "max_score" : 1.0,
    "hits" : [ {
      "_index" : "example-2014-12-15",
      "_type" : "socket",
      "_id" : "AUo910oSOUAYMzMu9bxU",
      "_score" : 1.0,
      "_source": { "packets": 1, "ip_dst": "172.16.1.15",
                   "@timestamp": "2014-12-15T19:32:02Z", "bytes": 256,
                   "port_dst": 56529, "etype": "800", "port_src": 53, 
                   "ip_proto": "udp", "ip_src": "8.8.8.8" }
      }
      ...
    ]
  }
}

(the `date +%F` is used here to obtain the current date in the format used by the index name, that is, YYYY-MM-DD)

Just to try the configuration for a print plugin, edit the /etc/pmacct/pmacctd.conf configuration file, change the plugins line and add the rest:

plugins: memory[plugin1], print[plugin2]

print_output_file[plugin2]: /var/lib/pmacct/plugin2.json
print_output[plugin2]: json
print_trigger_exec[plugin2]: /etc/p2es/triggers/plugin2
print_refresh_time[plugin2]: 60
aggregate[plugin2]: proto, src_port
aggregate_filter[plugin2]: src portrange 0-1023

Then, prepare the p2es configuration file for pmacct-to-elasticsearch execution for this plugin (/etc/p2es/plugin2.conf):

{
    "ES_IndexName": "example-%Y-%m-%d",
    "ES_Type": "source_port",
    "InputFile": "/var/lib/pmacct/plugin2.json"
}

Here, pmacct-to-elasticsearch is instructed to read from /var/lib/pmacct/plugin2.json, the file where pmacctd daemon writes to.

As you can see from the pmacctd plugin2 configuration above, a trigger is needed in order to run pmacct-to-elasticsearch: /etc/p2es/triggers/plugin2. Just add a link to the default_trigger script and it’s done:

# cd /etc/p2es/triggers/
# ln -s default_trigger plugin2

Now you can restart pmacct daemons in order to load the new configuration for plugin2:

# service pmacct restart

or, if you preferred not to install my pmacct System V initscript:

# killall -INT pmacctd -w ; pmacctd -f /etc/pmacct/pmacctd.conf -D

After the daemon has finished writing the output file (/var/lib/pmacct/plugin2.json), it runs the trigger which, in turn, executes pmacct-to-elasticsearch with the right argument (plugin2) and detaches it.

Wait a minute, then query ElasticSearch from command line:

# curl http://localhost:9200/example-`date +%F`/source_port/_search?pretty

From now on it’s just a matter of customizations and visualization in Kibana. The official Kibana 4 Quick Start guide can help you to create visualizations and graphs. Remember, the name of the index used in these examples follows the [example-]YYYY-MM-DD daily pattern.

Housekeeping

Time series indices tend to grow and fill up disk space, so a rotation policy may be useful to delete data older than a specific date.

The Curator tool and its Delete command can help you in this:

# apt-get install python-pip
# pip install elasticsearch-curator

Once installed, test it using the right arguments…

# curator --dry-run delete indices --prefix example- --timestring %Y-%m-%d --older-than 1 --time-unit days
2014-12-15 19:04:13,026 INFO      Job starting...
2014-12-15 19:04:13,027 INFO      DRY RUN MODE.  No changes will be made.
2014-12-15 19:04:13,031 INFO      DRY RUN: Deleting indices...
2014-12-15 19:04:13,035 INFO      example-2014-12-15 is within the threshold period (1 days).
2014-12-15 19:04:13,035 INFO      DRY RUN: Speficied indices deleted.
2014-12-15 19:04:13,036 INFO      Done in 0:00:00.020131.

… and, eventually, schedule it in the pmacct-to-elasticsearch crontab file (/etc/cron.d/pmacct-to-elasticsearch), setting the desired retention period:

# m h dom mon dow user  command
...
0 1  * * *     root  curator delete indices --prefix example- --timestring \%Y-\%m-\%d --older-than 30 --time-unit days
#EOF

Of course, you can use Curator for many other management and optimization tasks too, but they are out of the scope of this post.