How Do I Convert My SSL Certificate File To PEM Format?

https://support.aerofs.com/hc/en-us/articles/205007260-How-Do-I-Convert-My-SSL-Certificate-File-To-PEM-Format-

Most Certificate Authorities (CAs) issue certificates in PEM format. PEM certificates typically have extensions such as .pem, .crt, .cer, and .key.

The PEM format uses the header and footer lines -----BEGIN CERTIFICATE----- and -----END CERTIFICATE-----.
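For illustration, the body of a PEM file is simply Base64-encoded certificate data wrapped in those header and footer lines (the Base64 content is elided here):

-----BEGIN CERTIFICATE-----
MII...(Base64-encoded certificate data)...
-----END CERTIFICATE-----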

Other certificate formats include the DER/Binary, P7B/PKCS#7, and PFX/PKCS#12 formats. The AeroFS Appliance requires a certificate in PEM format in step 9 of the appliance setup. This certificate will be used to ensure secure transactions between your appliance and users’ web browsers.

Converting Your Existing Certificate To PEM Format

If your certificate is not in PEM format, you can convert it to the PEM format using the following OpenSSL commands:

Convert DER to PEM

openssl x509 -inform der -in certificate.cer -out certificate.pem

Convert P7B to PEM

openssl pkcs7 -print_certs -in certificate.p7b -out certificate.pem

Convert PFX to PEM

openssl pkcs12 -in certificate.pfx -out certificate.pem -nodes
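Note that a PFX file bundles the certificate and its private key together, and the -nodes flag writes that private key into certificate.pem unencrypted, so handle the output file carefully. The command will prompt for the PFX import password if one is set.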

Alternatively, you can use an online SSL converter tool such as https://decoder.link/converter.

Removing Passphrase From Existing Private Key File

If you try to upload a passphrase-protected private key file, you will get a “key is invalid” error message. To fix this, you will need to remove the passphrase from your private key file and upload the passphrase-free private key file to your appliance. You can remove the passphrase as follows:

1. Run openssl rsa -in example.key -out example.nocrypt.key

2. Enter your passphrase.
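Note that openssl rsa applies to RSA keys; as a sketch for an EC key, the analogous command would be openssl ec -in example.key -out example.nocrypt.key. In either case the resulting file holds the key unencrypted, so restrict its file permissions.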

 

 


Using some best practices for SQL Server in Azure VM (IaaS) to get MAX performance

https://sqlwithmanoj.com/2016/06/30/using-some-best-practices-for-sql-server-in-azure-vm-iaas-to-get-max-performance/

I was setting up an environment on an Azure Premium Storage (G-Series) server for my project and was not getting the performance I was expecting. The comparison was against an existing database server set up on an on-premises VM.

Microsoft Azure Premium Storage delivers high-performance, low-latency disk support for VMs running I/O-intensive workloads. It uses SSDs (solid-state drives) to store your data and a caching capability to provide high throughput and low latency. You can configure the disk caching policy on Premium Storage disks as ReadOnly, ReadWrite, or None, based upon your workload and type of data processing.

The highest Premium Storage disk type, P30, provides a disk size of 1 TB with 5,000 IOPS and throughput of up to 200 MB/s.
Here are some of the best practices [suggested by Microsoft] to ensure that your SQL Server instance is optimized to its maximum on an Azure Virtual Machine (a few of them are illustrated in the T-SQL sketch after the list):

1. Use Premium Storage. Standard storage is only recommended for dev/test.

2. Keep the storage account and SQL Server VM in the same region.

3. Format the data disks using an NTFS 64-KB allocation unit size for data and log files.

4. Stripe multiple Azure data disks to get MAX IOPS and throughput.

5. Avoid using operating system or temporary disks for database storage or logging.

6. Enable read caching on the disk(s) hosting the data files and TempDB.

7. Do not enable caching on disk(s) hosting the log file.

8. Set up SQL Server TempDB on the D: drive, which is an SSD (solid-state disk), to get maximum throughput when handling heavier workloads.

9. Enable SQL Server database page compression.

10. Enable instant file initialization for data files to reduce the time required for initial file allocation. Grant the SQL Server (MSSQLSERVER) service account the SE_MANAGE_VOLUME_NAME privilege by adding it to the Perform Volume Maintenance Tasks security policy (see Database File Initialization in the SQL Server documentation).

11. Disable autoshrink and limit autogrowth on the database.

12. Move all databases and error log & trace file directories to data disks, including system databases.

13. Enable the Lock Pages in Memory Option (Windows level). This security setting determines which accounts can use a process to keep data in physical memory, which prevents the system from paging the data to virtual memory on disk. Exercising this privilege could significantly affect system performance by decreasing the amount of available random access memory (RAM).

14. Set MAX SQL Server memory to 64GB.

15. Run a full index REBUILD (rather than REORGANIZE) and UPDATE STATISTICS with the FULLSCAN option on all tables in the database.

16. And last but not least, tune your queries. Badly written queries and indexes will be slow even on highly optimized systems 🙂
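A minimal T-SQL sketch of items 9, 11, 14, and 15 above, assuming a hypothetical database YourDb with logical data file YourDb_Data and a table dbo.YourTable (all placeholder names; pick the memory cap for your VM size):

-- 14. Cap SQL Server memory at 64 GB (value is in MB)
EXEC sp_configure 'show advanced options', 1;
RECONFIGURE;
EXEC sp_configure 'max server memory (MB)', 65536;
RECONFIGURE;

-- 11. Disable autoshrink and limit autogrowth to a fixed increment
ALTER DATABASE YourDb SET AUTO_SHRINK OFF;
ALTER DATABASE YourDb MODIFY FILE (NAME = YourDb_Data, FILEGROWTH = 256MB);

-- 9. Rebuild indexes with page compression (the REBUILD also covers item 15)
ALTER INDEX ALL ON dbo.YourTable REBUILD WITH (DATA_COMPRESSION = PAGE);

-- 15. Refresh statistics with a full scan
UPDATE STATISTICS dbo.YourTable WITH FULLSCAN;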

RDS/SQL SERVER BACKUP AND RESTORE TO/FROM S3

http://cloudbasic.net/documentation/rds-sql-server-backup-restore-s3/

Backup database to S3

  1. Under the RDS Dashboard, create a new option group with the “SQLSERVER_BACKUP_RESTORE” option.
  2. Update your RDS instance to use the newly created option group.
  3. Open SQL Server Management Studio, connect to the RDS database, and execute the following to kick off the backup:
USE [msdb]
GO

DECLARE   @return_value int

EXEC  @return_value = [dbo].[rds_backup_database]
      @source_db_name = 'your_database_name',
      @S3_arn_to_backup_to = 'arn:aws:s3:::your-bucket-name/folder/db.bak',
      @KMS_master_key_arn = NULL,
      @overwrite_S3_backup_file = NULL

SELECT    'Return Value' = @return_value

GO

To check the progress of the backup, run the following:

USE [msdb]
GO

DECLARE   @return_value int

EXEC  @return_value = [dbo].[rds_task_status]
      @db_name = 'your_database_name',
      @task_id = <<<found in result of previous query>>>

SELECT    'Return Value' = @return_value

GO

Restore database from S3 .bak file

exec msdb.dbo.rds_restore_database 
    @restore_db_name='database_name', 
    @s3_arn_to_restore_from='arn:aws:s3:::bucket_name/folder/file_name_and_extension';
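The restore also runs asynchronously, so it can be monitored with the same rds_task_status procedure used above. If you don't know the task ID, calling the procedure with only the database name should return the tasks for that database, e.g.:

exec msdb.dbo.rds_task_status @db_name='database_name';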

For more information, including how to export an encrypted backup and restore a database from an encrypted S3 backup file, visit:

http://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/SQLServer.Procedural.Importing.html

Best Practices for SQL Server in Azure

http://www.oakwoodsys.com/best-practices-sql-server-azure/

Most of the typical items that DBAs will hit in tuning a server for SQL Server hold true in Azure. The three major spots you will usually focus on are OS configurations, storage configurations, and SQL Server instance configurations. These are generalizations and, as your workload demands, there may be variations that are necessary, but those are best covered case by case and are out of scope for this post.

OS Configurations

When configuring an OS for SQL Server, these are the items that you should hit:

  • Provision separate service accounts for all SQL Server services (database engine, SQL Server Agent, Integration Services, etc.). This is a security best practice and may be considered domain-level work if you need the accounts to be able to reach domain locations.
  • Add the database engine service account to the following user rights (a quick T-SQL verification query follows this list):
    • Perform volume maintenance tasks – this enables instant file initialization, which can yield 4-5x performance improvements on tasks such as data file growth (it has no impact on transaction logs; those *must* be zero-initialized), database creation, and database restores.
    • Lock pages in memory – this prevents the OS from paging out SQL Server’s buffer pool.
  • Change power settings from Balanced to High performance. The CPU improvements didn’t appear to be as drastic as in other documented cases (private clouds where the hypervisor and VMs were under the same administrative team), but based on my Geekbench testing you can still squeak out an ~8% performance improvement with this setting (which was a bit surprising to me, actually). This may be fluky, but it was consistent over ~10 tests at varying times of the day.
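As a quick check that both user rights took effect, newer SQL Server builds surface them through DMVs; a minimal verification sketch (these columns require roughly SQL Server 2016 SP1 or later):

-- Instant file initialization status, per service
SELECT servicename, instant_file_initialization_enabled
FROM sys.dm_server_services;

-- LOCK_PAGES here means Lock pages in memory is in effect
SELECT sql_memory_model_desc
FROM sys.dm_os_sys_info;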

Storage Configuration

Azure storage is extremely simple to configure and use. That simplicity does, however, come at a price: you give up the ability to make extremely fine-grained configuration decisions based on your workload. That said, for most purposes Azure storage is perfectly fine, as long as you configure it properly. Here’s the list of best-practice configuration guidelines for Azure storage for SQL Server:

  • Use data disks with no caching and no geo-redundancy
  • There are two options for scaling IO in Azure: Storage pools or simple volumes on singular data disks with file groups containing files on multiple volumes
    • The SQLCAT team has benchmarked the performance of the two. The files/filegroups option yields better scaling of IOPS, but it comes at the cost of more management overhead, and unavailability of any one of the disks can make the database unavailable.
  • Set allocation units for the volumes which will hold SQL datafiles to 64kb
  • Use storage pools, not Windows’ software RAID. IOPS and throughput do not scale with Windows’ software RAID, which can yield erratic, poorly performing results.
    • Caps to be aware of:
      VM Size | Data Disks | Max IOPS | Bandwidth
      A3      | 8          | 4000     | 400 Mbps
      A4      | 16         | 8000     | 800 Mbps
      A6      | 8          | 4000     | 1000 Mbps
      A7      | 16         | 8000     | 2000 Mbps
      A8      | 16         | 8000     | 40 Gbps
      A9      | 16         | 8000     | 40 Gbps
    • The overall IOPS cap for a subscription is 20,000
  • There are a couple common ways that you can configure your storage pools:
    • One large storage pool with all of your disks in it and then placing all of your database files on that
    • Segregate out IO needs by category. Kinda like:
      • 4x data disks – TempDB
      • 4x data disks – Transaction Logs
      • 8x data disks – Data files
  • The number of columns on a virtual disk should be equal to the number of physical disks in the pool (6 disks = 6 columns; this allows IO requests to be evenly distributed across all data disks in the pool)
  • The interleave value on virtual disks should be 256 KB for all workloads
  • Linear scaling starts breaking down around 4 disks, and writes scale better than reads

You may use the following PowerShell commands to verify the configuration of your virtual disks:

Get-VirtualDisk | ft FriendlyName, ResiliencySettingName,
    @{Expression={$_.Interleave / 1KB}; Label="Interleave(KB)"},
    NumberOfColumns, NumberOfDataCopies,
    @{Expression={$_.Size / 1GB}; Label="Size(GB)"},
    @{Expression={$_.FootprintOnPool / 1GB}; Label="PoolFootprint(GB)"} -AutoSize
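If the pool was built to the guidelines above, the output should show an Interleave(KB) of 256 and a NumberOfColumns equal to the number of data disks in the pool.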

On an interesting note: Azure storage is a shared environment, which means there are going to be some behind-the-scenes things that happen and may not necessarily be what you want. If your systems are under heavy use, you will likely never notice this, but for new systems and for performance-testing benchmark machines, you are going to want to warm up the disks (for roughly 20 minutes). Last, but not least: ALWAYS test your throughput. I recommend SQLIO when you just want to generally test the performance of the storage subsystem. It’s out of scope for what I want to cover here, but the application and really good documentation on it are available from Microsoft.

SQL Server Instance Configuration

There are instance-level configuration options that generally benefit SQL Server. These apply regardless of where your SQL Server instance is located, but this is what you’ll want to hit (a consolidated T-SQL sketch follows the list):

  • Install only the minimum feature set for the instance
  • Enable option Advanced > optimize for ad hoc workloads
  • Change option Advanced > cost threshold for parallelism to 50. SQL Server’s default setting for this is just flat-out too low. If you are tuning a pre-existing instance, you’ll notice a lot of CXPACKET waits in your wait statistics
  • Change option Advanced > max degree of parallelism to a value equal to the number of cores/vCPUs (up to 8). 0 defaults to all available logical processors, up to 64.
  • Enable option Database Settings > Compress backup. There is really no reason not to compress your backups.
  • Split your TempDB up across multiple data files. Use the same number of data files as you have processors, up to 8. These all need to be the same size, because SQL Server uses a weighted round-robin when leveraging multiple data files. The goal is to reduce contention on the GAM, SGAM, and PFS pages for TempDB.
  • Change option Memory > minimum server memory (in MB) to a value ~2048-4096 lower than the maximum server memory value
  • Change option Memory > maximum server memory (in MB) according to the server’s total RAM:
    Server Total RAM (GB) | Max server memory (MB)
    7                     | 5120
    14                    | 11264
    28                    | 24576
    56                    | 52224
    112                   | 108544
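As a minimal sketch of the instance-level options above, assuming a hypothetical 28 GB / 4-vCPU VM and a placeholder TempDB path (adjust the values to your size):

-- Instance-wide options
EXEC sp_configure 'show advanced options', 1;
RECONFIGURE;
EXEC sp_configure 'optimize for ad hoc workloads', 1;
EXEC sp_configure 'cost threshold for parallelism', 50;
EXEC sp_configure 'max degree of parallelism', 4;   -- number of cores/vCPUs, up to 8
EXEC sp_configure 'backup compression default', 1;
EXEC sp_configure 'min server memory (MB)', 20480;  -- ~4 GB below max
EXEC sp_configure 'max server memory (MB)', 24576;  -- 28 GB server, per the table above
RECONFIGURE;

-- One equally sized TempDB data file per processor, up to 8 (repeat per file)
ALTER DATABASE tempdb
ADD FILE (NAME = tempdev2, FILENAME = 'T:\TempDB\tempdev2.ndf', SIZE = 1024MB, FILEGROWTH = 256MB);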

Conclusion

That’s all folks! That should get you going on configuring an Azure VM to be home to your lovely little SQL Server instance(s). Happy clouding!