CloudFormation managed by CodeCommit and CodePipeline demo.

One of the biggest advantages of using cloud platforms is the automation of resource management, be it creation, change, or termination. This not only helps keep deployments similar and standardized, avoiding wildly dissimilar environments that would be difficult to manage, but also allows self-service by customers, decreasing the time to deploy lab, QA, and other environments. Going further with a service catalog tool (which, by the way, AWS provides with some additional integration), it also makes it quick to bring up an external customer environment.

For cloud architects and operators, CloudFormation (Infrastructure as Code) makes setting up and changing resources quicker. Those changes are traceable via CloudTrail and can be controlled via policies stating who can use which templates and resources. In smaller environments, CloudFormation templates can be managed by a small team in a local repository such as a file share or local disk, or remotely on S3 (which can even use bucket versioning to store a small number of versions if the environment changes rarely, though that is not the best use of S3).

Once CloudFormation gets more use and the team grows to more than a few members, a versioning tool such as Git or CodeCommit helps a lot with traceability, rollback, and troubleshooting. Every committed change carries a description and comments on its intent and targeted resources.

Going one step further, CodePipeline can be used to monitor changes (commits) in CodeCommit repositories and start CloudFormation template updates, or even the creation of a new stack. If a company already uses CodeCommit and CodePipeline to manage code, using them for CloudFormation management is a similar process. This demo shows a small setup with a single commit user creating and updating a small VPC.

CodeCommit setup

CodeCommit is the Git-based code versioning and repository management tool in AWS. It should be familiar to GitHub users from software development and collaboration projects. To get started we will need a CodeCommit user with SSH RSA keys so access to the repository is private and secured via the Unix command line. Note that AWS CodeCommit also allows HTTPS connections using a different setup, and both methods are supported on Windows as well. See the AWS documentation for reference.

Creating a new IAM user:

  1. Go to the IAM console and create a new user with Programmatic access, which allows CLI management. Click Next.
  2. Click on Attach existing policies directly so we can add a standard AWS policy to the user. There are three policies granting different permissions, and for this demo we will use the AWSCodeCommitFullAccess one. For production environments this should be more restrictive, probably the PowerUser type, where repositories cannot be deleted.
  3. Move to the next screen, review the policy, and create the new IAM user. Make sure you download the CSV with the Access key ID and Secret access key to set up Git access.
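For reference, a minimal sketch of such a restrictive policy (names and scope here are illustrative; AWS also ships a managed AWSCodeCommitPowerUser policy along these lines) could allow everything but deny repository deletion explicitly:

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "AllowCodeCommit",
      "Effect": "Allow",
      "Action": "codecommit:*",
      "Resource": "*"
    },
    {
      "Sid": "DenyRepositoryDeletion",
      "Effect": "Deny",
      "Action": "codecommit:DeleteRepository",
      "Resource": "*"
    }
  ]
}
```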

Setting up git:

  1. Make sure you have Git installed on your system. Installation on different platforms should be easy. Once done, we should be able to invoke it as
iMac:~ aws$ git version
git version 2.15.2 (Apple Git-101.1)
iMac:~ aws$ 
  2. Let us run ssh-keygen to create a new key pair that will be used with the new IAM user. At the end of the process, the cat command displays the public key so we can add it to the user profile.
iMac:~ aws$ 
iMac:~ aws$ ssh-keygen
Generating public/private rsa key pair.
Enter file in which to save the key (/Users/aws/.ssh/id_rsa): /Users/aws/.ssh/codecommit_rsa
Enter passphrase (empty for no passphrase): 
Enter same passphrase again: 
Your identification has been saved in /Users/aws/.ssh/codecommit_rsa.
Your public key has been saved in /Users/aws/.ssh/codecommit_rsa.pub.
The key fingerprint is:
SHA256:Nxw13oUv9UBv8ci90iuflrTKspzZH21vptLfS4rXGS0 aws@iMac.local
The key's randomart image is:
+---[RSA 2048]----+
|            o..o.|
|           o +o=+|
|          . . ++*|
|         . .  o.+|
|        S +  . +.|
|         . .  Eo+|
|             oo=B|
|          ..B.*BB|
|           *+*=X=|
+----[SHA256]-----+
iMac:~ aws$ 
iMac:~ aws$ 
iMac:~ aws$ cat /Users/aws/.ssh/codecommit_rsa.pub
ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABAQDE3oy5PHVHVVVITzJusWhSo2ihFNPtx7+mBypHX1TJThAIUmSK8oJtyYa1ncuDCV5P8rdxv3lgVIr6mTbxr6RY3v/FECxwt1uf4CiKaGwaJsMglPPRw8NneMTIRbhoV1Kae7gfokMmZc9g3MDtv+sipM2z6ZlVqnJqDISiNaYK6RY0eDW8X2KLYL0rgPoIEERUjjr+iAhcLMr0lkb6IHFMDGc+onbGPCKZ/W3/zP8TOeG83tG/xA3CiGTG/LC31TK6zRwKl5g7WhoUfaAE3KkrHz398a9Nm8q6NLIAgr2HguEFz9TaLbqmpphYi/p+lpByGYGzLd0nTS+HbKD3XjWd aws@iMac.local
iMac:~ aws$ 
  3. Moving back to the IAM user properties screen, we need to add the newly generated public key to the user's profile. In the third tab, Security Credentials, click on Upload SSH public key and paste the contents of the key shown above.

  4. Now back on the local machine, we will set up the private key for SSH access. Go to the .ssh directory to add a new configuration. The file will be named config and should contain the lines:
Host git-codecommit.*.amazonaws.com
  User ****USER-KEY-ID****
  IdentityFile ~/.ssh/codecommit_rsa

Replace the user key ID field with the one generated in your IAM user's SSH credentials. If you used a different name for the key file other than codecommit_rsa, just replace it with yours. Also, protect the file with chmod 600 config. Once done, the .ssh directory should look like:

iMac:.ssh aws$ pwd
/Users/aws/.ssh
iMac:.ssh aws$ ls -la
total 64
drwx------   6 aws  staff   204 Jul 25 08:24 .
drwxr-xr-x+ 26 aws  staff   884 Jul 18 14:37 ..
-rw-------   1 aws  staff  1679 Jul 25 07:47 codecommit_rsa
-rw-r--r--   1 aws  staff   396 Jul 25 07:47 codecommit_rsa.pub
-rw-------   1 aws  staff   101 Jul 25 08:24 config
-rw-r--r--   1 aws  staff  3844 Jun 18 14:22 known_hosts
iMac:.ssh aws$ 
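The config-file step above can also be scripted; this is a minimal sketch (the User value below is a placeholder for the SSH key ID that IAM generates when you upload the public key):

```shell
# Append a CodeCommit host entry to ~/.ssh/config and restrict its permissions.
# APKAEXAMPLEKEYID is a placeholder; substitute your IAM SSH key ID.
mkdir -p ~/.ssh
cat >> ~/.ssh/config <<'EOF'
Host git-codecommit.*.amazonaws.com
  User APKAEXAMPLEKEYID
  IdentityFile ~/.ssh/codecommit_rsa
EOF
chmod 600 ~/.ssh/config
```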
  5. Test the SSH connection using ssh git-codecommit.us-east-1.amazonaws.com
iMac:.ssh aws$ ssh git-codecommit.us-east-1.amazonaws.com
The authenticity of host 'git-codecommit.us-east-1.amazonaws.com (52.94.229.29)' can't be established.
RSA key fingerprint is SHA256:eLMY1j0DKA4uvDZcl/KgtIayZANwX6t8+8isPtotBoY.
Are you sure you want to continue connecting (yes/no)? yes
Warning: Permanently added 'git-codecommit.us-east-1.amazonaws.com,52.94.229.29' (RSA) to the list of known hosts.
You have successfully authenticated over SSH. You can use Git to interact with AWS CodeCommit. Interactive shells are not supported.
Connection to git-codecommit.us-east-1.amazonaws.com closed by remote host.
Connection to git-codecommit.us-east-1.amazonaws.com closed.

The -v parameter helps with troubleshooting connection and permission errors. Please make sure the key file name matches its reference in the config file. Now return to the folder where the repository will be created.

iMac:.ssh aws$ cd ~/Documents/Demos/CodeCommit-Pipeline/
iMac:CodeCommit-Pipeline aws$ ls -la
total 0
drwxr-xr-x  4 aws  staff  136 Jul 25 07:58 .
drwxr-xr-x  7 aws  staff  238 Jul 25 07:58 ..
drwxr-xr-x@ 7 aws  staff  238 Jul 25 06:42 CloudFormation automation.mindnode
-rw-r--r--@ 1 aws  staff  326 Jul 25 07:58 VPC-Demo.yml
iMac:CodeCommit-Pipeline aws$ 

CodeCommit repository setup and git connection

A CodeCommit repository can now be created and linked with Git. To start, access the AWS CodeCommit console under Developer Tools. Click on Get Started if this is the first repository, and enter a name and description. After confirmation that the repository has been created, we can select which types of notifications to receive. Leave it as is and select an existing test topic or create a new one.
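The same repository can alternatively be created from the AWS CLI (a sketch, assuming credentials are configured and using the demo names):

```shell
# Create the demo repository from the command line instead of the console.
aws codecommit create-repository \
    --repository-name CloudFormation \
    --repository-description "CloudFormation templates demo repo"
```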

Once saved, it is time to connect the repository to Git. Alternatively, one could work directly in the console, adding and reviewing files, which is a great way to explore CodeCommit later. For now, let's stick with the regular Git tool. Select SSH as the connection method and your OS. All three prerequisites have been completed by now, so we can move on.

Select and copy the Git command from the AWS console and run it at your command prompt. The example below shows the connection to the repository, the new directory created, and a message stating it is empty for now. I also moved a simple template for a VPC, the file VPC-Demo.yml in YAML format, into the repository directory.

You can create a similar YAML file for the CloudFormation test by using this sample:

AWSTemplateFormatVersion: "2010-09-09"
Description: VPC in North Virginia
#
#
Resources:
#
# VPC
#
  MyVPC:
    Type: AWS::EC2::VPC
    Properties:
      CidrBlock: "10.17.0.0/16"
      InstanceTenancy: default
      Tags:
        - Key: Name
          Value: MyVPC
        - Key: Environment
          Value: Testing
#
# END

Now, for connecting the repository:

iMac:CodeCommit-Pipeline aws$ pwd
/Users/aws/Documents/Demos/CodeCommit-Pipeline
iMac:CodeCommit-Pipeline aws$ git clone ssh://git-codecommit.us-east-1.amazonaws.com/v1/repos/CloudFormation
Cloning into 'CloudFormation'...
warning: You appear to have cloned an empty repository.
iMac:CodeCommit-Pipeline aws$ ls -la
total 0
drwxr-xr-x  5 aws  staff  170 Jul 25 10:49 .
drwxr-xr-x  7 aws  staff  238 Jul 25 10:49 ..
drwxr-xr-x  3 aws  staff  102 Jul 25 10:49 CloudFormation
drwxr-xr-x@ 7 aws  staff  238 Jul 25 06:42 CloudFormation automation.mindnode
-rw-r--r--@ 1 aws  staff  326 Jul 25 07:58 VPC-Demo.yml
iMac:CodeCommit-Pipeline aws$ mv VPC-Demo.yml CloudFormation
iMac:CodeCommit-Pipeline aws$ cd CloudFormation
iMac:CloudFormation aws$ ls -la
total 0
drwxr-xr-x   4 aws  staff  136 Jul 25 10:49 .
drwxr-xr-x   4 aws  staff  136 Jul 25 10:49 ..
drwxr-xr-x  10 aws  staff  340 Jul 25 10:49 .git
-rw-r--r--@  1 aws  staff  326 Jul 25 07:58 VPC-Demo.yml
iMac:CloudFormation aws$ 

Going back to the AWS console, we can confirm the repository is still empty, so let's try a few Git commands to update it.

The first is git status, which should show a change in the local repository and the new file VPC-Demo.yml:

iMac:CloudFormation aws$ git status
On branch master

No commits yet

Untracked files:
  (use "git add <file>..." to include in what will be committed)

VPC-Demo.yml

nothing added to commit but untracked files present (use "git add" to track)

Now we can add all files, and recheck status using git add -A and git status commands.

iMac:CloudFormation aws$ git add -A
iMac:CloudFormation aws$ git status
On branch master

No commits yet

Changes to be committed:
  (use "git rm --cached <file>..." to unstage)

new file:   VPC-Demo.yml

iMac:CloudFormation aws$ 

Now that the VPC-Demo.yml file is added to the repo, it can be committed using the git commit command. Running it with the -m parameter is non-interactive; otherwise you will be presented with the default text editor.

Sync the new file to the repository with the example below and refresh the AWS console screen to confirm the repository has it.

iMac:CloudFormation aws$ git commit -m "Adding the first version of CloudFormation template to the repo" 
[master (root-commit) 43df3a8] Adding the first version of CloudFormation template to the repo
 Committer: AWS <aws@iMac.local>
Your name and email address were configured automatically based
on your username and hostname. Please check that they are accurate.
You can suppress this message by setting them explicitly. Run the
following command and follow the instructions in your editor to edit
your configuration file:

git config --global --edit

After doing this, you may fix the identity used for this commit with:

git commit --amend --reset-author

 1 file changed, 20 insertions(+)
 create mode 100644 VPC-Demo.yml
iMac:CloudFormation aws$ 
iMac:CloudFormation aws$ git status
On branch master
Your branch is based on 'origin/master', but the upstream is gone.
  (use "git branch --unset-upstream" to fixup)

nothing to commit, working tree clean
iMac:CloudFormation aws$ 
iMac:CloudFormation aws$ git diff
iMac:CloudFormation aws$ 
iMac:CloudFormation aws$ git remote
origin
iMac:CloudFormation aws$ git push
Warning: Permanently added the RSA host key for IP address '52.94.233.146' to the list of known hosts.
Counting objects: 3, done.
Delta compression using up to 8 threads.
Compressing objects: 100% (2/2), done.
Writing objects: 100% (3/3), 459 bytes | 459.00 KiB/s, done.
Total 3 (delta 0), reused 0 (delta 0)
To ssh://git-codecommit.us-east-1.amazonaws.com/v1/repos/CloudFormation
 * [new branch]      master -> master
iMac:CloudFormation aws$ git status
On branch master
Your branch is up to date with 'origin/master'.

nothing to commit, working tree clean
iMac:CloudFormation aws$ 

If you click on the demo file you should see its content and be able to edit it directly on AWS console. Going to the Commits submenu at the left you should be able to see the first commit information, the master branch, description and time completed.

Getting back to the terminal or prompt, let us try a file change, adding a subnet to the VPC, checking the differences, and committing and pushing the changes to the master branch of the repository.

Here is the updated file if you want to update the CloudFormation template. It only adds one subnet and one route table, and associates the two.

AWSTemplateFormatVersion: "2010-09-09"
Description: VPC in North Virginia
#
#
Resources:
#
#
# VPC
#
#
  MyVPC:
    Type: AWS::EC2::VPC
    Properties:
      CidrBlock: "10.17.0.0/16"
      InstanceTenancy: default
      Tags:
        - Key: Name
          Value: MyVPC
        - Key: Environment
          Value: Testing
#
#
# Subnets
#
#
  MySubnet1a:
    Type: AWS::EC2::Subnet
    Properties:
      AvailabilityZone: us-east-1a
      CidrBlock: "10.17.1.0/24"
      Tags:
        - Key: Name
          Value: MySubnet1a
      VpcId: !Ref MyVPC
#
#
# Route Tables
#
#
  MyRouteTable:
    Type: AWS::EC2::RouteTable
    Properties:
      VpcId: !Ref MyVPC
#
#
# Route Tables Associations
#
#
  MySubnet1aRouteTableAssociation:
    Type: AWS::EC2::SubnetRouteTableAssociation
    Properties:
      RouteTableId: !Ref MyRouteTable
      SubnetId: !Ref MySubnet1a
#
#
#
# END
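Before committing a change like this, the template can be checked with the CLI's built-in validation (a sketch, assuming configured AWS credentials; note this checks syntax, not whether the resources will actually create successfully):

```shell
# Syntax-check the template locally before pushing it through the pipeline.
aws cloudformation validate-template --template-body file://VPC-Demo.yml
```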

Now that we have an edited file, let's check a few Git commands with CodeCommit and finally commit and push the changes to the master branch of the repository. These are just the few Git commands needed to check and review changed code before updating the repository. Note that the git diff command shows added lines with a '+'.

iMac:CloudFormation aws$ git status
On branch master
Your branch is up to date with 'origin/master'.


nothing to commit, working tree clean
iMac:CloudFormation aws$ 
iMac:CloudFormation aws$ 
iMac:CloudFormation aws$ # AFTER EDITING THE FILE
iMac:CloudFormation aws$ 
iMac:CloudFormation aws$ git status
On branch master
Your branch is up to date with 'origin/master'.
Changes not staged for commit:
  (use "git add <file>..." to update what will be committed)
  (use "git checkout -- <file>..." to discard changes in working directory)

	modified:   VPC-Demo.yml
no changes added to commit (use "git add" and/or "git commit -a")
iMac:CloudFormation aws$ 
iMac:CloudFormation aws$ 
iMac:CloudFormation aws$ git diff
diff --git a/VPC-Demo.yml b/VPC-Demo.yml
index 0a9a970..9752611 100644
--- a/VPC-Demo.yml
+++ b/VPC-Demo.yml
@@ -4,7 +4,9 @@ Description: VPC in North Virginia
 #
 Resources:
 #
+#
 # VPC
+#
 #
   MyVPC:
     Type: AWS::EC2::VPC
@@ -17,4 +19,39 @@ Resources:
         - Key: Environment
           Value: Testing
 #
+#
+# Subnets
+#
+#
+  MySubnet1a:
+    Type: AWS::EC2::Subnet
+    Properties:
+      AvailabilityZone: us-east-1a
+      CidrBlock: "10.17.1.0/24"
+      Tags:
+        - Key: Name
+          Value: MySubnet1a
+      VpcId: !Ref MyVPC
+#
+#
+# Route Tables
+#
+#
+  MyRouteTable:
+    Type: AWS::EC2::RouteTable
+    Properties:
+      VpcId: !Ref MyVPC
+#
+#
+# Route Tables Associations
+#
+#
+  MySubnet1aRouteTableAssociation:
+    Type: AWS::EC2::SubnetRouteTableAssociation
+    Properties:
+      RouteTableId: !Ref MyRouteTable
+      SubnetId: !Ref MySubnet1a
+#
+#
+#
 # END
iMac:CloudFormation aws$ 
iMac:CloudFormation aws$ git commit -a -m "Adding subnets to the VPC"
[master e5e3d34] Adding subnets to the VPC
 Committer: AWS <aws@iMac.local>
Your name and email address were configured automatically based
on your username and hostname. Please check that they are accurate.
You can suppress this message by setting them explicitly. Run the
following command and follow the instructions in your editor to edit
your configuration file:


git config --global --edit
After doing this, you may fix the identity used for this commit with:


git commit --amend --reset-author
 1 file changed, 37 insertions(+)
iMac:CloudFormation aws$ git push
Counting objects: 3, done.
Delta compression using up to 8 threads.
Compressing objects: 100% (2/2), done.
Writing objects: 100% (3/3), 595 bytes | 595.00 KiB/s, done.
Total 3 (delta 0), reused 0 (delta 0)
To ssh://git-codecommit.us-east-1.amazonaws.com/v1/repos/CloudFormation
   43df3a8..e5e3d34  master -> master
iMac:CloudFormation aws$ 

Back in the AWS console, refreshing the Commits page, you should see the new commit, the time of its completion, and the short commit ID at the end of the page. If you click on it, CodeCommit shows some of its compare tools.

When working with teams this will involve multiple branches and pull requests, which are outside the scope of this demo for now. Now, let's start the CodePipeline setup to automate CloudFormation execution of those changes.

CodePipeline

The AWS CodePipeline tool allows staging software delivery in several ways, from modest to complex projects. In this demo we will initially set up a two-stage process to monitor the master branch of the CloudFormation repository in CodeCommit and trigger a CloudFormation stack create/update as needed. It works whether changes are made in the repository itself via the CodeCommit editor, or in a text editor on the local repository copy and then committed and pushed back to the repository.

Before creating a new pipeline, we need a new CloudFormation role with PowerUser policy. Go to IAM and select create new Role:

  • In the AWS Service tab, select CloudFormation
  • Click on the Permissions button to move on, then search for and add the policy named PowerUserAccess. An important note here is that this role can have advanced permission boundaries, not applicable in this demo.
  • Review the role name and click on Create role. As a suggestion, I used CustomCloudFormationPowerUser. Now we can go on to create a new pipeline.
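The same role can be sketched with the CLI (assuming configured credentials; the trust policy lets the CloudFormation service assume the role, and the file name is just a convention here):

```shell
# Trust policy allowing the CloudFormation service to assume the role.
cat > cfn-trust.json <<'EOF'
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Principal": { "Service": "cloudformation.amazonaws.com" },
      "Action": "sts:AssumeRole"
    }
  ]
}
EOF
aws iam create-role --role-name CustomCloudFormationPowerUser \
    --assume-role-policy-document file://cfn-trust.json
aws iam attach-role-policy --role-name CustomCloudFormationPowerUser \
    --policy-arn arn:aws:iam::aws:policy/PowerUserAccess
```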

To start up a new pipeline, go to AWS CodePipeline main window to create a new one. Note that one does not need all 6 steps in every project.

Step 1 – Name the pipeline CloudFormationPipeline

Step 2 – Choose CodeCommit as the source, select the demo repository CloudFormation and the branch master. Leave the detection method as Amazon CloudWatch.

Step 3 – Build, select No Build

Step 4 – Deploy

  • Choose AWS CloudFormation as the Deployment provider
    • Action mode: Create or update a stack
    • Stack name: CFNAuto
    • Template file: VPC-Demo.yml, matching our previous template name
    • Blank configuration file
    • No changes in Capabilities
    • Role name: CustomCloudFormationPowerUser, the role we set up in IAM before starting CodePipeline

Step 5 – A new role for CodePipeline should be created, named Custom-AWS-CodePipeline-Service. We are just adding custom wording to the role name to make it easier to identify non-standard roles when cleaning up.

Step 6 – Review the pipeline information and click on Create pipeline to complete it.

Immediately after the pipeline creation we should see the Source stage running, shown as an in-progress blue bar for a minute, and then a green succeeded message if everything went well. At this point the pipeline should move from the Source stage to the Staging one. If the CloudFormation template is valid and the stack creation was successful, we should get another green message.

Click on View Pipeline history and you should see a first build of our demo VPC pipeline.

We can check how the CloudFormation process went by entering its console and selecting the Events tab. All going well, it shows the VPC creation first, followed by the route table and subnet, finishing with the associations. This demo runs entirely in the US-EAST-1 (North Virginia) region, so mind this when describing resources in the CloudFormation template; otherwise the subnet Availability Zone will not match the VPC and the whole stack will roll back.
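The same checks can be made from the CLI (a sketch, assuming configured credentials and the names used in this demo):

```shell
# Poll the pipeline's stage states and list the stack's events.
aws codepipeline get-pipeline-state --name CloudFormationPipeline
aws cloudformation describe-stack-events --stack-name CFNAuto
```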

As a last verification, going into the VPC service console we should see the new VPC and subnet created along with the new route table.

The new VPC (along with other previous test and default VPC for the region)

The new subnet. Note that its route table does not (and should not) match the default VPC route table if we successfully associated the two. The main route table should allow all intra-VPC communication, but having a custom table can be useful when adding internet, NAT, or VPN connections for a subnet.

With this simple automation demo done, we can move on to adding changes to the CloudFormation template via Git updates.

Updating the CloudFormation stack via CodeCommit Git tools

So when the infrastructure needs a change, all we have to do is edit the template and commit the change to our Git tool. The advantage here is that versioning and change history are kept, including who made each change, and later we can introduce approvals into the change flow. First things first: let us try adding a new subnet to the VPC, in the us-east-1b AZ with the 10.17.2.0/24 CIDR.

Edit your template file adding the following lines in the subnet section:

  MySubnet1b:
    Type: AWS::EC2::Subnet
    Properties:
      AvailabilityZone: us-east-1b
      CidrBlock: "10.17.2.0/24"
      Tags:
        - Key: Name
          Value: MySubnet1b
      VpcId: !Ref MyVPC

And under Route Tables Associations section:

  MySubnet1bRouteTableAssociation:
    Type: AWS::EC2::SubnetRouteTableAssociation
    Properties:
      RouteTableId: !Ref MyRouteTable
      SubnetId: !Ref MySubnet1b

Now back to our terminal let us check Git status and commit the update.

iMac:CloudFormation aws$ 
iMac:CloudFormation aws$ git status
On branch master
Your branch is up to date with 'origin/master'.


Changes not staged for commit:
  (use "git add <file>..." to update what will be committed)
  (use "git checkout -- <file>..." to discard changes in working directory)

	modified:   VPC-Demo.yml

no changes added to commit (use "git add" and/or "git commit -a")
iMac:CloudFormation aws$ 
iMac:CloudFormation aws$ 
iMac:CloudFormation aws$ git add -A
iMac:CloudFormation aws$ git commit -m "Adding a second subnet to the VPC"
[master c735027] Adding a second subnet to the VPC
 Committer: AWS <aws@iMac.local>
Your name and email address were configured automatically based
on your username and hostname. Please check that they are accurate.
You can suppress this message by setting them explicitly. Run the
following command and follow the instructions in your editor to edit
your configuration file:

	git config --global --edit

After doing this, you may fix the identity used for this commit with:

	git commit --amend --reset-author

 1 file changed, 16 insertions(+)
iMac:CloudFormation aws$ 
iMac:CloudFormation aws$ 
iMac:CloudFormation aws$ git push
Counting objects: 3, done.
Delta compression using up to 8 threads.
Compressing objects: 100% (2/2), done.
Writing objects: 100% (3/3), 336 bytes | 336.00 KiB/s, done.
Total 3 (delta 1), reused 0 (delta 0)
To ssh://git-codecommit.us-east-1.amazonaws.com/v1/repos/CloudFormation
   7ae00a2..c735027  master -> master
iMac:CloudFormation aws$ 

With everything running well, we should see the changes in the CloudFormation console, and likewise the previous route table should now have the new subnet associated with it.

CloudFormation events

VPC route table associations

Pipeline history

This gives a small demo of three AWS tools working together to manage infrastructure as code. As one last step in this demo, we can add one more stage for authorization into the workflow.

Go back to the CodePipeline console, select CloudFormationPipeline if it is not shown automatically, and click on the Edit button. Just below the Source stage there is a '+ Stage' box where we can add another stage. Click on it and name it Auth. For the Action category, choose the 'Approval' type, and add an action name such as 'FirstApprover' with the type Manual approval. Choose the previous 'Notify me' topic for this region, and under Comments enter something appropriate like "Please review and approve the changes to the VPC". Save the pipeline change.

Once done, click on the Release change button, even with no changes in the template file. We should see the usual Source stage running for a few minutes and the Auth stage holding for approval with a Review button. If the SNS topic was set up, an email with a link should be sent as well. For now, click on the Review button and approve the change. Enter a comment before approving, and the pipeline should move to the final Staging stage with no changes in the VPC or in the CloudFormation stack.
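Approvals can also be recorded from the CLI (a sketch; the stage and action names are the ones chosen above, and the token placeholder must be replaced with the value returned by get-pipeline-state):

```shell
# Approve the pending manual-approval action from the command line.
# The --token value must be read from `aws codepipeline get-pipeline-state`.
aws codepipeline put-approval-result \
    --pipeline-name CloudFormationPipeline \
    --stage-name Auth \
    --action-name FirstApprover \
    --result summary="Reviewed and approved",status=Approved \
    --token REPLACE_WITH_APPROVAL_TOKEN
```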

Cleaning up the demo

To avoid the risk of incurring costs, if you want to clean the lab up, it can be done in a few simple steps:

  • Delete the CloudFormation stack
  • Delete the CodePipeline pipeline
  • Delete the CodeCommit repository
  • Delete the CodeCommit IAM user
  • Remove the local Git repository
  • Check for an S3 bucket containing the template versions for CloudFormation and CodePipeline
  • The new service roles named after “custom” prefix can also be deleted
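Most of the steps above have CLI equivalents (a sketch, assuming the demo names; deleting the IAM user still requires detaching its policies and keys first, and the local path is the one used in this demo):

```shell
# Tear down the demo resources (stack first, then the tooling around it).
aws cloudformation delete-stack --stack-name CFNAuto
aws codepipeline delete-pipeline --name CloudFormationPipeline
aws codecommit delete-repository --repository-name CloudFormation
rm -rf ~/Documents/Demos/CodeCommit-Pipeline/CloudFormation
```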

Conclusion

This is far from the most complete use of CloudFormation automation with the AWS coding tools, but it should give you a head start in exploring them. As a more elaborate exercise, this lab can be expanded with additional CodeCommit users, simulating a larger admin group, and, above all, with different branches and environments: say, deploying a test or dev environment into another VPC and/or account before committing to production.

To learn more about Git and CodeCommit, the AWS documentation helps you get started with step-by-step setup guides and tutorials, and the excellent free online book Pro Git by Scott Chacon and Ben Straub dives deep into Git.

 

Edit: 2018–08–06 — I’ve made a not so quick video for commuters (47 min) showing similar setup working > https://youtu.be/9RENcc8PZTk — hope it is helpful to you!


Migrating a VM to AWS using VM Import/Export

This is a small lab to learn the AWS VM Import/Export service, for a friend. It is likely needed in many cloud migration scenarios, though it is not the only option. The AWS documentation provides a good guide for this migration, but I wanted to actually do it a few times and see the outputs, especially the import process, and learn how a typical Linux VM (and perhaps a Windows server later) with a dynamic IP address and no cloud-init would behave.

For the process, mind the requirements in terms of OS versions, kernels, disk interface types, etc. AWS is pretty broad, supporting a lot of Linux and Windows combinations, but sometimes the latest versions aren't covered. Also, FreeBSD is completely absent here, but I'd bet most admins would have fun launching it from the Marketplace and moving the load manually!

AWS instructions site

Main documentation page is here > https://docs.aws.amazon.com/vm-import/latest/userguide/what-is-vmimport.html?shortFooter=true

It is important to know the supported OS and kernel versions before starting, as I stumbled on an "incompatible kernel" error much later on my first tries, after adding an Ubuntu 17.x image; the most recent supported version is 16.x. I retried using CentOS 7.4.

And the instructions and method are here > https://docs.aws.amazon.com/vm-import/latest/userguide/vmimport-image-import.html?shortFooter=true

I won't detail it here, but we need to use the AWS CLI to create a role with an associated policy to be able to run the import command, plus a ready S3 bucket for it. Any bucket should do, and for lab purposes you can make it less expensive by using reduced-redundancy storage settings.
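For reference, the role's trust policy follows the shape documented by AWS for VM Import (the service role is expected to be named vmimport, with sts:Externalid set to vmimport); a sketch of trust-policy.json:

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Principal": { "Service": "vmie.amazonaws.com" },
      "Action": "sts:AssumeRole",
      "Condition": {
        "StringEquals": { "sts:Externalid": "vmimport" }
      }
    }
  ]
}
```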

I used VMware Fusion with a regular CentOS 7.4 install, with just the server package set, to make the OVA smaller before upload. I suspect I could make it smaller with a minimal set of software installed, but I was unsure whether lacking the network package groups would make the AWS reset fail. It should not, as the requirements pages suggest, right?

For the first try, I took an Ubuntu image and made an OVF package/folder instead of an OVA, and of course it failed much later in the process (uploading 1 GiB took 6 hours using SCP). I changed to CentOS expecting the installation to be smaller, but it was a little bit bigger, by approximately 50 MiB. Using CentOS 7.4 and exporting as OVA worked fine. I will retry with Ubuntu later, but I need to install 16.x instead of the unsupported 17.x.

Moving image to S3 for import

I used SCP to copy the OVA image to a temporary Amazon Linux instance I had running in the us-east-1 region. A simple scp copy got the file to the instance, from where I could then copy it to the S3 bucket (third command below). Note that I did not use an S3 role for this instance; instead I configured an IAM user's access key ID and secret access key with read/write rights to the bucket. Both ways should work.

iMac:Downloads rodrigo$ scp -i EC2-US-EAST-1.pem /Desktop/CentOS1.ova/ ec2-user@54.152.35.169:

CentOS1.ova 100% 1044MB 28.5KB/s 10:26:16

iMac:Downloads rodrigo$

iMac:Downloads rodrigo$ ssh -i EC2-US-EAST-1.pem ec2-user@54.152.35.169

Last login: Fri Apr 27 19:22:31 2018 from 177.141.138.65

       __|  __|_  )
       _|  (     /   Amazon Linux AMI
      ___|\___|___|

https://aws.amazon.com/amazon-linux-ami/2018.03-release-notes/

1 package(s) needed for security, out of 5 available

Run “sudo yum update” to apply all updates.

-bash: warning: setlocale: LC_CTYPE: cannot change locale (UTF-8): No such file or directory

ec2-user@ip-172-31-59-187 $ aws s3 cp CentOS1.ova s3://besparked-vm-import/

Completed 1.0 GiB/1.0 GiB (62.8 MiB/s) with 1 file(s) remaining

upload: ./CentOS1.ova to s3://besparked-vm-import/CentOS1.ova

ec2-user@ip-172-31-59-187 $

ec2-user@ip-172-31-59-187 $

Editing the containers.json file

The documentation is clear here: before running the import command, a JSON file is needed to point to the bucket and the OVA file. It seems this could be done in a single command line, but I haven't found references to it; something to review later. For now, make sure to use the correct bucket name, file name, and prefix.

ec2-user@ip-172-31-59-187 $ nano containers.json


[{
    "Description": "Ubuntu OVA",
    "Format": "ova",
    "UserBucket": {
        "S3Bucket": "besparked-vm-import",
        "S3Key": "CentOS1.ova"
    }
}]

Importing the OVA archive as an AMI image

The following command reads the archive location from the JSON document and starts the import process. Later we can check on the import through the describe option. Outputs at several stages are pasted below.

ec2-user@ip-172-31-59-187 $ aws ec2 import-image --description "CentOS" --license-type BYOL --disk-containers file://containers.json
{
    "Status": "active",
    "LicenseType": "BYOL",
    "Description": "CentOS",
    "SnapshotDetails": [
        {
            "UserBucket": {
                "S3Bucket": "besparked-vm-import",
                "S3Key": "CentOS1.ova"
            },
            "DiskImageSize": 0.0,
            "Format": "OVA"
        }
    ],
    "Progress": "2",
    "StatusMessage": "pending",
    "ImportTaskId": "import-ami-0306b6f28c3708613"
}

ec2-user@ip-172-31-59-187 $ aws ec2 describe-import-image-tasks --import-task-ids import-ami-0306b6f28c3708613
{
    "ImportImageTasks": [
        {
            "Status": "active",
            "LicenseType": "BYOL",
            "Description": "CentOS",
            "SnapshotDetails": [],
            "Progress": "2",
            "StatusMessage": "pending",
            "ImportTaskId": "import-ami-0306b6f28c3708613"
        }
    ]
}

ec2-user@ip-172-31-59-187 $ aws ec2 describe-import-image-tasks --import-task-ids import-ami-0306b6f28c3708613
{
    "ImportImageTasks": [
        {
            "Status": "active",
            "LicenseType": "BYOL",
            "Description": "CentOS",
            "SnapshotDetails": [
                {
                    "Status": "active",
                    "UserBucket": {
                        "S3Bucket": "besparked-vm-import",
                        "S3Key": "CentOS1.ova"
                    },
                    "DiskImageSize": 652312064.0,
                    "Format": "VMDK"
                }
            ],
            "Progress": "28",
            "StatusMessage": "converting",
            "ImportTaskId": "import-ami-0306b6f28c3708613"
        }
    ]
}

ec2-user@ip-172-31-59-187 $ aws ec2 describe-import-image-tasks --import-task-ids import-ami-0306b6f28c3708613

{
    "ImportImageTasks": [
        {
            "Status": "active",
            "LicenseType": "BYOL",
            "Description": "CentOS",
            "SnapshotDetails": [
                {
                    "Status": "completed",
                    "UserBucket": {
                        "S3Bucket": "besparked-vm-import",
                        "S3Key": "CentOS1.ova"
                    },
                    "DiskImageSize": 652312064.0,
                    "Format": "VMDK"
                }
            ],
            "Progress": "34",
            "StatusMessage": "updating",
            "ImportTaskId": "import-ami-0306b6f28c3708613"
        }
    ]
}

ec2-user@ip-172-31-59-187 $ aws ec2 describe-import-image-tasks --import-task-ids import-ami-0306b6f28c3708613

{
    "ImportImageTasks": [
        {
            "Status": "active",
            "LicenseType": "BYOL",
            "Description": "CentOS",
            "Platform": "Linux",
            "Architecture": "x86_64",
            "SnapshotDetails": [
                {
                    "Status": "completed",
                    "DeviceName": "/dev/sda1",
                    "DiskImageSize": 652312064.0,
                    "UserBucket": {
                        "S3Bucket": "besparked-vm-import",
                        "S3Key": "CentOS1.ova"
                    },
                    "Format": "VMDK"
                }
            ],
            "Progress": "58",
            "StatusMessage": "booting",
            "ImportTaskId": "import-ami-0306b6f28c3708613"
        }
    ]
}

ec2-user@ip-172-31-59-187 $ aws ec2 describe-import-image-tasks --import-task-ids import-ami-0306b6f28c3708613

{
    "ImportImageTasks": [
        {
            "Status": "active",
            "LicenseType": "BYOL",
            "Description": "CentOS",
            "Platform": "Linux",
            "Architecture": "x86_64",
            "SnapshotDetails": [
                {
                    "Status": "completed",
                    "DeviceName": "/dev/sda1",
                    "Format": "VMDK",
                    "DiskImageSize": 652312064.0,
                    "SnapshotId": "snap-01907d3b13638c26c",
                    "UserBucket": {
                        "S3Bucket": "besparked-vm-import",
                        "S3Key": "CentOS1.ova"
                    }
                }
            ],
            "Progress": "84",
            "StatusMessage": "preparing ami",
            "ImportTaskId": "import-ami-0306b6f28c3708613"
        }
    ]
}

ec2-user@ip-172-31-59-187 $

ec2-user@ip-172-31-59-187 $ aws ec2 describe-import-image-tasks --import-task-ids import-ami-0306b6f28c3708613

{
    "ImportImageTasks": [
        {
            "Status": "completed",
            "LicenseType": "BYOL",
            "Description": "CentOS",
            "ImageId": "ami-0c93e58db64cb1d23",
            "Platform": "Linux",
            "Architecture": "x86_64",
            "SnapshotDetails": [
                {
                    "Status": "completed",
                    "DeviceName": "/dev/sda1",
                    "Format": "VMDK",
                    "DiskImageSize": 652312064.0,
                    "SnapshotId": "snap-01907d3b13638c26c",
                    "UserBucket": {
                        "S3Bucket": "besparked-vm-import",
                        "S3Key": "CentOS1.ova"
                    }
                }
            ],
            "ImportTaskId": "import-ami-0306b6f28c3708613"
        }
    ]
}

ec2-user@ip-172-31-59-187 $

The message above indicates the process is done and a new private AMI is in the account’s image list.
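
Rather than eyeballing the full JSON on every poll, the Status and Progress fields can be extracted programmatically. A small sketch that parses the describe-import-image-tasks output shape shown above (the sample payload below is hard-coded for illustration):

```python
import json

def summarize(doc: str) -> str:
    """Return a one-line summary per import task."""
    tasks = json.loads(doc)["ImportImageTasks"]
    lines = []
    for t in tasks:
        # Progress/StatusMessage are absent once the task completes.
        progress = t.get("Progress", "100")
        message = t.get("StatusMessage", "done")
        lines.append(f'{t["ImportTaskId"]}: {t["Status"]} {progress}% ({message})')
    return "\n".join(lines)

# Sample payload mirroring one of the outputs pasted above.
payload = """
{
    "ImportImageTasks": [
        {
            "Status": "active",
            "Progress": "58",
            "StatusMessage": "booting",
            "ImportTaskId": "import-ami-0306b6f28c3708613"
        }
    ]
}
"""
print(summarize(payload))  # import-ami-0306b6f28c3708613: active 58% (booting)
```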

Testing access to the instance

Go to the EC2 section of the AWS console, under AMIs, check that the new image is there, and use the Launch button to create a small instance based on the imported OVA. Here I would use mostly default settings the first time, but I would like to explore larger instances with more disks, instance-store or EBS ones.

On the last page the security key pair can be selected, but it won’t work as it does with regular Amazon images; the reason is that the process does not run cloud-init on the instance, if I am not missing anything. Anyway, if you have a regular account that is allowed to SSH, you can try using the instance’s public or private IP address.

ec2-user@ip-172-31-59-187 $ ssh rmonteiro@54.158.55.32

rmonteiro@54.158.55.32's password:

Last login: Sat Apr 28 08:33:01 2018 from 54.152.35.169

rmonteiro@centos1 $

rmonteiro@centos1 $

To where now

Having this completed, there are a lot of variations one can try, such as using more VM disks before packing the OVA, using a fixed IP address, adding IPv6, etc. I would also say to try a Windows Server version supported in the requirements.

Last, there are other methods, such as vCenter- and Hyper-V-assisted migrations, or live migrations, which are more complex but useful in production environments with a large server fleet. On the same main documentation page you can find the main instructions.

AWS VPC VPN tutorial

A VPC VPN in Amazon Web Services is a private connection from your local (company) network to an AWS VPC (Virtual Private Cloud). It is one of the most common methods to start deploying services on the cloud.

The AWS VPN allows a company network to be extended to the cloud infrastructure and to use several services such as Storage Gateway (to expand storage and tape library capacity), Elastic File System (EFS), and Active Directory integration/federation. The VPN is also used to back up Direct Connect links, which are AWS’s way of connecting customer networks via high-speed, deterministic links through providers. The VPN would take over via dynamic routing if the Direct Connect link goes down. There are also use cases for sending non-production traffic via VPN instead of Direct Connect so it does not take up production bandwidth.

In this tutorial I will show how to set up a VPN using a home network pretending to be a company network, with the software VPN pfSense at our end. It will use VMWare Fusion on a Mac to run the virtual VPN server and a simple NAT setup (port forwarding for the VPN communication). The virtual network emulating a company network will be segregated from the home network by VMWare’s network features and will use static routing, although most production deployments with hardware VPNs would use BGP for dynamic routing, prefix propagation and filtering. A similar setup can be built using VirtualBox and Linux/Windows hosts as well.

Using the AWS VPN is one way of building an internet VPN, but it is also possible to use virtual routers and VPN software, such as the Cisco CSR 1000v in AWS (launched via the AWS Marketplace) paired with a local VMWare deployment. High availability in the cloud is a bit more complex but doable, and the reason for deploying this way may be company and information security policy. It works the same way with other security appliance and network vendors.

How it works

This tutorial will set up a Virtual Private Cloud (VPC), which is a simulated data centre, and two subnets inside it. Those subnets are similar to VLANs, although not the same. One subnet will be public and will host a Linux instance we will call Bastion, used to access another Linux host on the private subnet for testing. The private subnet will be the one simulating the DC extension and won’t have an internet connection initially. It will have a static route to the on-prem virtual LAN so the two can communicate via the VPN. In some deployments, private subnets need access to the internet so instances can be updated or have packages downloaded; this is common with AWS Linux AMI deployments. For other servers this might not be needed or approved, with updates and management done inside the company.

The public subnet needs an Internet Gateway (IGW) attached to the VPC and an associated route table with an added route to the whole IPv4 address space for this tutorial (it is possible to set up IPv6 for this deployment as well). The private network will need another route table with an entry to the local on-prem network, which will be 192.168.180.0/24. If this private subnet had to communicate with the internet, a NAT Gateway would be needed, in addition to a route to it.
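
The address plan can be sanity-checked before touching the console. A short sketch using Python’s ipaddress module with the ranges from this tutorial:

```python
import ipaddress

vpc     = ipaddress.ip_network("10.0.0.0/16")       # VPN-VPC
public  = ipaddress.ip_network("10.0.1.0/24")       # public subnet
private = ipaddress.ip_network("10.0.2.0/24")       # private subnet
onprem  = ipaddress.ip_network("192.168.180.0/24")  # on-prem LAN

# Both subnets must be carved out of the VPC block...
assert public.subnet_of(vpc) and private.subnet_of(vpc)
# ...they must not overlap each other...
assert not public.overlaps(private)
# ...and the on-prem range must not collide with the VPC,
# or the static routes on each side would be ambiguous.
assert not onprem.overlaps(vpc)
print("address plan OK")
```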

This setup requires that the local internet provider allows inbound connections on UDP/4500 and a fixed IP address for the VPN endpoint. Since the home network’s public IP address does not change often (it happens when devices are rebooted or the cable network is down for some time), this is not a problem. Sometimes providers block TCP/80 and a few other ports so home subscribers do not set up web servers, but usually UDP/4500 is open; it is good to check before starting.

Diagram

Setup of the home network NAT

For this tutorial I am using an Apple Airport Time Capsule behind my carrier’s device, with a regular NAT setup sharing the internet connection to home devices. The same can be accomplished with most wireless and home routers, and of course with enterprise hardware appliances such as routers.

If we were doing a company deployment on real production networks, we probably would not need to create a virtual network and do port forwarding, because a dedicated internet connection would be in place. But for this setup, and to keep the home network segregated and undisturbed, we will use NAT for the VPN’s public-facing interface; otherwise we would have to attach the VPN directly to my internet provider (and use dedicated hardware for that).

So let’s enable port forwarding to the 192.168.1.201 host on port UDP/4500. This will be the VPN server’s public-facing interface address. In real deployments we would use a regular global unicast address; here it will be hidden behind port forwarding from the IP address my provider has given to the Airport.

That is all we need on the home network; after saving the configuration, the device restarts and the VPN can be built.

VPC setup

Let’s start a new VPC setup and its components. Please note that the VPC is a basic building block of almost anything in AWS and is a somewhat long subject, but we will stick to a simple setup.

Adding a new VPC

Going to the AWS Console under Services, choose:

  • I am using “us-east-1”, North Virginia region but this should work on all regions
  • Network and Content Delivery > VPC
  • Enter in Your VPCs and click on Create VPC
  • Name it “VPN-VPC”, add the 10.0.0.0/16 as the address block, leave no IPv6 at this time and use default tenancy
  • Go to the Subnets section on the left
  • Click on Create Subnet to start the public one. Enter a name like “VPN-Public” and select the previously created VPC, which is named “VPN-VPC”. Be careful not to select your region’s default VPC. The availability zone should not matter much, but use the first one in this region, “us-east-1a”, and use 10.0.1.0/24 as the prefix.
  • Click again on Create Subnet to create the private subnet. Name it “VPN-Private”, select the VPN VPC, the same availability zone and the network prefix 10.0.2.0/24. At this point these are two very similar subnets, private to this VPC.
  • Go to the Internet Gateways section and click on the Create internet gateway to create an IGW for this VPC. Name it “VPN-IGW” and click on Create button.
  • Back on the IGW list, select just the newly created IGW, then go to the Actions button and attach it to the VPN VPC. Once this is done, you should have the State as “attached” in green.
  • Let’s make the public subnet actually public now by adding a route table and a route to the internet. Go to the Route Tables section, click on Create Route Table, name it “VPN-Public-RTB” and make sure to select the VPN VPC before clicking on the Yes, Create button.
  • When done, you should have the new route table selected. Go to the Routes tab and see that we have the “10.0.0.0/16” local VPC route as default, so subnets inside a VPC can reach each other, a standard for AWS VPCs. Click on the Edit button followed by the Add another route button. Enter the “0.0.0.0/0” destination, select the recently created IGW from the target list and save the changes. Click on Subnet Associations to the right and on the Edit button, select the “10.0.1.0/24” subnet, which is our public one, and save the changes to associate it with this route table. Now, any instance launched in this subnet with a public IP address can access the internet and vice-versa. That will allow using our Bastion host later.
  • We will come back here to set up another route table and associate it with the private subnet, so instances on it can reach the VPN and the on-prem network.

We should have a similar set up as:

Here, the VPC is set up with a few test subnets, an Internet Gateway and route tables.


List of subnets and their configurations for this VPC.


Initial route table for public subnet


These are the public route table entries for the internet and for traffic internal to the VPC.


So now we have a basic VPC setup for hosting servers in AWS on a public network; let us jump to the VPN setup.

VPN setup in the VPC

Setting up a VPN in an AWS VPC is done in three steps, one for each configuration element.

Customer Gateway

The Customer Gateway (CGW) is where the VPN is terminated at the on-prem network, usually a VPN device or router, but it can also be software running on a physical or virtual host, which is the case here.

On the VPC Dashboard, in the major section VPN Connections/Customer Gateways, click on Create Customer Gateway and name it “On-Prem-CGW”. For routing, choose Static because we are not using BGP here, and enter the public IP address of the home router/access point. That is the global unicast/public IP address, static or dynamic, given by the provider. Note that it is not 192.168.1.201, which is the destination address for the port forwarding. Using 203.0.113.64 as an example, that is the IP address for the CGW.

Virtual Private Gateway

The next step is creating a VPG, which terminates our VPN on the VPC side in AWS. This is a scalable, redundant service providing dual tunnels, and most of the time it is enough for a VPN connection, although one can set up additional resilience with different combinations of additional endpoints.

Go to the Virtual Private Gateways section and create a VPG via Create Virtual Private Gateway, naming it “AWS-VPG” and using the Amazon default ASN. Once it is created, associate it with the VPC using the Actions button and the Attach to VPC option. Choose your VPC here, being careful not to select the default VPC. The attaching process begins with the state in orange while processing. After a few minutes, the state should be green, confirming the attachment is complete.

VPN Connections

Last step is creating the VPN itself using both end points just created. Go to the VPN Connections section on the left and click on Create VPN Connection. Enter the following information:

  • Name tag: “VPN to on-prem”
  • Virtual Private Gateway: choose the recent created VPG
  • Customer Gateway: choose Existing and select the ID of the CGW just created.
  • Routing Options: choose Static here and enter the network address range for the on-prem network. In this tutorial it is 192.168.180.0/24; on real networks it could be a large set of prefixes, depending on how they are allocated. Using dynamic routing with BGP would make this simpler for bigger networks, and that is the usual way.
  • Under Tunnel Options, you can specify both tunnels’ pre-shared keys and CIDRs for more control over them, but for this lab, leave them blank so AWS generates this information for us.

Once this page is saved, the VPN creation process starts, taking a few minutes before returning to the connection list page. The VPN is never initiated from the AWS side, so AWS won’t keep trying to reach the on-prem network. We will see, when setting up the VPN endpoint, how to bring the tunnel up from our end.

AWS VPN sample configuration once complete.


One last action here is to download the configuration hint file for pfSense. Click on the Download Configuration button, select pfSense near the end of the list and save the text file for later use.

The private subnet route to the on-prem LAN

The private subnet needs to know how to route to the on-prem LAN, so we need to go back to the Route Tables section, create a “VPN-Private-RTB” route table for the VPN VPC and, on the Routes tab, add a route to 192.168.180.0/24 with the VGW as the target. Save the changes and confirm the route table is associated with the private subnet.

Preparing test VM and instances

Setting up a local test machine

A local virtual machine on any platform can be used to test the VPN connection to AWS, as well as to manage the pfSense VPN server. I will run a quick macOS setup here, but it can be done with any Linux/Windows and Firefox/Chrome combination. While the method differs between the VMWare Workstation, vCenter and Fusion platforms, and likewise for Hyper-V and VirtualBox, a virtual network helps show the concept of isolation from the home network, where I could have personal VMs running as well. This is optional when just testing; without logical network separation, both the LAN and WAN interfaces of pfSense would sit on the same logical network (broadcast domain/VLAN), although with different network addresses.

VMWare Fusion Professional provides some additional network capabilities, here an isolated virtual network for the demo.


This network will not provide DHCP services for the tutorial, so all test machines, clients or servers, should have a fixed IP address in the 192.168.180.0/24 prefix with 192.168.180.10 as the default gateway. This will be the internal (LAN) address of the pfSense VPN server. Since 10.0.0.0/16 falls under the default route, no additional manual route needs to be added.

Preparing a test instance in AWS private subnet

We should launch a test instance using Amazon Linux, so go to the AWS Console under the Compute section, EC2, select Instances on the left pane and click on the Launch Instance button, following the steps below:

  • Use the regular older Amazon Linux AMI
  • General purpose t2.micro instance size
  • Now, for the VPC, choose the test VPC in the Network line, and the private subnet in it
  • No changes in storage, and no tags needed
  • Create a new Security Group that allows inbound SSH connections from the on-prem network prefix 192.168.180.0/24 and ICMP from all networks, 0.0.0.0/0. For production deployments, follow InfoSec and company requirements for a more restricted environment; of course, new services should be allowed to the subnet or server groups as appropriate (e.g. file server access, web access).
  • Review and launch the instance. It should take 2 to 5 minutes to be ready. Write down the private IP address; here I am using 10.0.2.232.
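
The security group above boils down to two inbound rules. As a sketch, the same matching logic can be written out, which helps reason about what the group will and will not allow (the rule shapes here are illustrative, not an AWS API):

```python
import ipaddress

# Inbound rules from the step above: SSH from on-prem only, ICMP from anywhere.
RULES = [
    {"protocol": "tcp",  "port": 22,   "source": ipaddress.ip_network("192.168.180.0/24")},
    {"protocol": "icmp", "port": None, "source": ipaddress.ip_network("0.0.0.0/0")},
]

def allowed(protocol: str, port, src_ip: str) -> bool:
    """Return True if an inbound packet matches any rule."""
    ip = ipaddress.ip_address(src_ip)
    return any(
        r["protocol"] == protocol
        and r["port"] in (None, port)
        and ip in r["source"]
        for r in RULES
    )

assert allowed("tcp", 22, "192.168.180.101")   # SSH from the on-prem LAN
assert not allowed("tcp", 22, "203.0.113.64")  # SSH from the internet: blocked
assert allowed("icmp", None, "10.0.1.5")       # ping from anywhere
print("security group checks pass")
```
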
Security Group example for the instance


Configuring pfSense VM for the VPN

Downloading pfSense ISO

pfSense Community Edition can be downloaded from the pfSense Community Edition portal; for this demo I used the version 2.4.3 CD Image (ISO) Installer for the AMD64 architecture. It should be around 300 MiB in size, and you can verify the SHA-256 checksum of the .gz file. Note that macOS will usually unpack it automatically with default Safari settings.

Once ISO is downloaded, launch VMWare Fusion Library window and create a new VM with the following settings:

  • Operating System: FreeBSD 11 64-bit
  • Legacy BIOS
  • In Customize settings:
    • Save As “VPN-Server” in the appropriate folder. I usually run all VMs in the macOS shared folder that is kept outside Time Machine backup and where I can launch VMs from different user accounts.
    • Select the Network Adapter, which should be the only one at this moment, and change the network setting from “Share with my Mac” to “Bridged Networking/Autodetect”, which will connect this interface to your main home LAN and make it internet-accessible. Remember the NAT/port forwarding of UDP/4500 to the VPN server? Here is where we connect those. pfSense calls this the WAN interface.
    • Add a second network interface and connect it to the AWS-onprem private network. This will be the LAN Interface.
    • Increase the memory from 256 MiB to 2048 MiB and from 1 CPU to 2
    • Remove the camera and sound card.

Turn on the virtual machine and, once the FreeBSD boot is complete, the pfSense installer screen will show up:

  • After accepting the use license choose “Install pfSense”
  • Use default key map for most deployments
  • Keep auto FS install using UFS
  • Hold on while the packages are copied and expanded into the virtual server.
  • Select the < No > button to skip going to the system shell.
  • Finally, allow the server to restart

Once pfSense has restarted, the initial setup screen will appear. The WAN and LAN interfaces should be correctly placed; one indication is the IPv6 address assigned to the WAN interface, since IPv6 is turned on in the AirPort router.

pfSense setup screen with proposed IP addresses and network interfaces.


Now, let us update the IP addresses for this server. Type “2” then “1” to configure the WAN interface. Use these settings:

  • No DHCP use
  • IPv4 address 192.168.1.201
  • Use a /24 prefix for this network
  • Add 192.168.1.1 as the default gateway
  • For IPv6, leave the DHCP configuration on; we can disable it later
  • If it asks about HTTP fallback for configuration, confirm with Y and complete the interface configuration
  • Now, for the internal LAN address, type “2” then “2” on the main menu
  • Enter the agreed IP, 192.168.180.10
  • Use prefix 24 again
  • Press <ENTER> for no default gateway
  • On the IPv6 address setup, press <ENTER> again to disable it
  • When asked about the DHCP server, leave it turned off
  • The last message should state that the server can be managed from http://192.168.180.10

The server initially has no blocking rule on its internal LAN firewall, to allow the initial setup and to avoid locking out administrators; this safety can be turned off in the pfSense preferences, which increases security and helps conform with InfoSec. It is also possible to enable the SSH daemon for remote access, go into the shell, halt/restart the system and use other features from the text management screen. For now, go to the management virtual machine and test connectivity to pfSense.

pfSense set up

Go back to the test VM in the private LAN and launch a browser to http://192.168.180.10, then enter the initial credentials: user “admin”, password “pfsense” (all lower case). Follow the 9-step wizard.

  • In step 2, enter local host network settings:
    • Hostname: vpn-server
    • Domain: lab.inc
    • DNS entries such as 8.8.8.8 and 8.8.4.4
    • Override DNS off
  • In Step 3 you can keep the original NTP time server or change it, and select a local timezone
  • In Step 4 all network configuration is already set, so no changes are needed here; same for Step 5
  • In Step 6, type a new admin password.
  • Step 7 will reload with the new configuration. The wizard should quickly jump to Step 9 with a green line. Click on “click here to continue” to the webConfigurator, read the notice and accept it

You should now see the Dashboard screen, where you can check many system statistics and functions. From the System menu it is possible to enable SSH/HTTPS and load certificates, a normal security function in production.

We can now start the IPSec VPN configuration in pfSense.

Configure IPSec VPN

IPSec set up

Go to the VPN > IPsec menu in pfSense. Under the Tunnels tab we will configure phase 1 and phase 2 for each of the tunnels. AWS keeps the second tunnel as a backup of the first for outages such as scheduled maintenance or regular failures, which are uncommon.

Click on the Add P1 button. On “General Information”, we need to add the other tunnel endpoint’s IP address from the configuration hint file we saved earlier when creating the VPN. Another change needed is adding a description to the tunnel, such as “Tunnel 1”.

On “Phase 1 Proposal (Authentication)”, leave the current settings as they are and copy and paste the pre-shared key into the appropriate field.

On “Phase 1 Proposal (Encryption Algorithm)” we can use stronger settings. Select AES 256 bits, with hash algorithm SHA256 and DH Group 24 (2048(sub 256) bit). The other settings in the advanced configuration are fine. Save the configuration so we can start the Phase 2 setup.

Click on “Show P2 entries”, then on the new entry, and use the following settings. For the “Local Network”, the default option of LAN subnet should capture our test range 192.168.180.0/24; in production with several networks, they need to be listed here. The “Remote Network” must be changed to list the entire AWS VPC range, 10.0.0.0/16.

For Phase 2 Proposal (SA/Key Exchange), let’s use AES 256 bits only, with SHA256 as the hash algorithm and PFS key group 16 (4096 bit). We can use 10.0.2.1 as the ping IP address in the advanced ping host field.

Disable “rekey” as instructed in the AWS configuration hint.

Save the configuration and note that pfSense asks to apply it. Do so using the appropriate button and wait for the confirmation notice. Repeat the configuration for tunnel 2 using its data.

IPSec, Security Associations and tunnel status


Firewall setup

The firewall in pfSense must be set to allow traffic back and forth between the on-prem LAN and the AWS VPN. Go to Firewall > Aliases to create network aliases before we touch the rules.

Click on the Add button, enter a name such as “AWSVPC”, set the type to Network, enter the range 10.0.0.0/16 and add a description. Repeat the process to create an “OnPrem” alias for the local range 192.168.180.0/24.

Now we can add rules to the firewall. Go to Firewall > Rules and click on an Add button (either one works). Select the IPsec interface here. For the Source and Destination, use “Single host or alias” with AWSVPC and OnPrem respectively. Change the protocol from TCP to Any. Add a description and save the rule.

Click on the LAN tab to add the return rule towards the AWS VPC. Select the Add button, confirming the LAN interface is selected and any protocol is allowed, just like the rule above. In the source and destination fields, use the same information as above but reversed, to allow traffic from the on-prem network to reach the AWS VPC range.

For this lab one additional step is necessary: disabling the blocking of private IP addresses (RFC 1918) on the WAN interface. Usually this is turned on by default, but since this lab has traffic arriving over the VPN from 10.0.0.0/16, we would not be able to initiate traffic from the AWS VPC. Even the allow rule would not work right away, because the default blocking takes precedence. Once I clicked on the gear icon of the private IP address rule on the WAN firewall interface, landed in a general preferences page and disabled it, it started working. Then I enabled the blocking again, yet traffic from the VPC to the local LAN was not blocked, even after a pfSense restart. So, something to investigate; maybe redo the whole pfSense server to see if I missed a step or whether the issue reproduces. Something to keep an eye on.

// Note, a few minutes later: the private IP address blocking must stay disabled. While it worked for a few minutes with it enabled, eventually I had to turn it off for this lab //

Testing VPN connection

After those last configurations, you should be able to go to the command prompt or shell of the on-prem test machine and try pinging the instance with the ping 10.0.2.232 command; the results should be similar to:

Rodrigos-Mac-3: rodrigomonteiro$ ping 10.0.2.232
PING 10.0.2.232 (10.0.2.232): 56 data bytes
64 bytes from 10.0.2.232: icmp_seq=0 ttl=253 time=148.420 ms
64 bytes from 10.0.2.232: icmp_seq=1 ttl=253 time=267.706 ms
64 bytes from 10.0.2.232: icmp_seq=2 ttl=253 time=148.204 ms
64 bytes from 10.0.2.232: icmp_seq=3 ttl=253 time=149.026 ms
64 bytes from 10.0.2.232: icmp_seq=4 ttl=253 time=347.856 ms
64 bytes from 10.0.2.232: icmp_seq=5 ttl=253 time=147.870 ms
^C
--- 10.0.2.232 ping statistics ---
6 packets transmitted, 6 packets received, 0.0% packet loss
round-trip min/avg/max/stddev = 147.870/201.514/347.856/78.625 ms
Rodrigos-Mac-3: rodrigomonteiro$

Another connectivity test is trying to SSH into the machine as:

Rodrigos-Mac-3: rodrigomonteiro$ ssh 10.0.2.232

The authenticity of host '10.0.2.232 (10.0.2.232)' can't be established.

ECDSA key fingerprint is SHA256:vEe+p41QsPLNQ2Dp3QSbExYfBVgOVofK6qgLfJyWhn4.

Are you sure you want to continue connecting (yes/no)? ^C

Rodrigos-Mac-3: rodrigomonteiro$

If you add the region’s key pair to your test machine, it should be possible to access the instance in the private network and ping back to the on-prem LAN. First, create a new key file in your terminal with the touch key.pem command.

Strip read access from group and others if using Linux/macOS with the command chmod og-rw key.pem, so only the file owner can read and write it. Edit the file with the command nano key.pem and paste the text of the private key used for this instance; you should have used this key pair when launching the test instance. Type <Control-X> to exit, confirm saving, and keep the same file name if asked.
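
The permission step matters because OpenSSH refuses private keys that group or others can read. The same file setup can be sketched in a few lines, here tightening to owner-only read/write (0o600, slightly stricter than og-rw since it also drops any group/other execute bit); the file name key.pem follows the steps above:

```python
import os
import stat

KEY = "key.pem"  # key file name from the steps above

# Create the empty file, then restrict it to owner read/write only.
open(KEY, "w").close()
os.chmod(KEY, 0o600)

mode = stat.S_IMODE(os.stat(KEY).st_mode)
assert mode == 0o600, oct(mode)
print("permissions:", oct(mode))  # permissions: 0o600
```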

Now test access to the instance with ssh -i key.pem ec2-user@10.0.2.232 command.

Rodrigos-Mac-3: rodrigomonteiro$ ssh -i key.pem ec2-user@10.0.2.232

       __|  __|_  )
       _|  (     /   Amazon Linux AMI
      ___|\___|___|

https://aws.amazon.com/amazon-linux-ami/2017.09-release-notes/

-bash: warning: setlocale: LC_CTYPE: cannot change locale (UTF-8): No such file or directory

ec2-user@ip-10-0-2-232 $

Assuming all went fine, we should be at a regular Amazon Linux shell, ready to work. This subnet and instance will not have access to the internet. You can now test a ping from the AWS VPC machine back to the LAN, since we fixed the firewall blocking. Running ping 192.168.180.101 allowed me to ping a test machine, and even ssh to it, since the service is enabled on macOS.

So with this we end this very basic VPN setup, which allows future labs where we can connect Active Directory and Storage Gateway, use virtual machine migration services and more. I hope to do a few of them from now on and link back here as a prerequisite.

Bonus – list of tasks and planning

One activity that helps in planning and documenting is a mind map; below is the one used for this tutorial. I hope it is useful.

A simple “Hello World” application in PHP

Create the first dev version, 0.9, as index.php in a local folder

  • Create the HelloWorld folder
  • Add the first version of index.php
  • Enter the HelloWorld folder
  • Create the application bundle with the zip command: zip -r ../helloworld-dev-0.9.zip .
  • The bundle will be created in the folder above the project.
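
The bundling step above can also be scripted. Elastic Beanstalk expects the application files at the root of the archive, which is why the zip command is run from inside the project folder; the sketch below mirrors that with a hypothetical one-line index.php:

```python
import os
import zipfile

# Hypothetical minimal project layout.
os.makedirs("HelloWorld", exist_ok=True)
with open("HelloWorld/index.php", "w") as f:
    f.write("<?php echo 'Hello World'; ?>\n")

# Equivalent of: cd HelloWorld && zip -r ../helloworld-dev-0.9.zip .
with zipfile.ZipFile("helloworld-dev-0.9.zip", "w") as z:
    for root, _, files in os.walk("HelloWorld"):
        for name in files:
            path = os.path.join(root, name)
            # Store paths relative to the project folder so files
            # land at the archive root, as EB expects.
            z.write(path, os.path.relpath(path, "HelloWorld"))

print(zipfile.ZipFile("helloworld-dev-0.9.zip").namelist())  # ['index.php']
```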

We will not use a VPC (Virtual Private Cloud) in this demo

Create the EB environment to upload the first PHP application package

  • Go to the EB service page
  • Click on Create New Application
  • Enter the application name: Hello World
  • Enter the description: Hello World application in PHP
  • Click on Create Web Server
  • Under Environment Type, choose PHP as the Predefined configuration and “Load balancing, auto scaling” as the Environment Type.
  • Choose Upload your own to send the package via the browser, pick the bundle and upload it.
  • Under Deployment Preferences, leave the default options in this example. Rolling will update 30% of the application at a time, so the application is not switched to a new environment or version all at once.
  • Click on Next
  • Under Environment Information, change the name to helloworld-dev-xzy and verify the availability of the environment’s public name in this AWS region with the Check Availability button. Note that we use ‘xzy’ as a suffix in the environment name to make it unique, since other environments named ‘helloworld-dev’ may already exist in the same AWS region. A possible production practice would be to use an internal development code that does not reveal information about the project externally. Enter a description: Hello World application in PHP, dev environment.
  • For additional resources we will keep the defaults, without creating a new VPC or RDS database instances.
  • Configuration Details:
    • Instance Type: deixe o padrão como t1.micro, mas em projetos normais pode-se usar maior capacidade. A vantagem aqui é poder crescer verticalmente se necessário (scale up).
    • Escolha uma chave padrão para as instâncias EC2 desta região. Ela tem que estar previamente criada nesta versão do EB.
    • Email address para as notificações
    • Em Application health check URL use /index.php como página de teste para o Elastic Load Balancer. Em aplicações comuns, isto pode ser um arquivo estático HTML.
    • Rolling updates type continua o padrão Rolling based on Health
    • Cross zone load balacing: seguiremos com múltiplas AZs
    • Connection Draining: deve ficar ligado o que permite um tempo até as conexões para uma instância terminarem e sejam migradas para outra instância nova com diferente versão ou ambiente da aplicação. O Draining timout em seguida determina este tempo, que pode ficar em 20 segundos aqui.
    • Health reporting em Enhanced, disponível em instâncias mais recentes no AWS.
    • Root Volume: Aqui é possível determinar o tamanho e tipo dos volumes raiz dependendo da necessidade da aplicação. Em geral os dados transitórios ou códigos ficam nas instâncias enquanto estão rodando, e são efêmeros, ou seja, são destruídos quando a instância ou aplicação são terminados. Os dados relacionados à aplicação geralmente ficam externos em bancos de dados relacionais no RDS (ou interno na empresa em alguns casos), no DynamoDB , S3 ou outros. Por isto, em geral, estes volumes não precisam ser muito grandes.
  • Environment Tags: Tags are useful for classifying the elements EB creates, for administration and for cost reports. In EB the reserved Name tag cannot be used, but as an example this demo can be classified as Service – Elastic Beanstalk.
  • Permissions:
    • Instance Profile: AWS automatically creates one named aws-elasticbeanstalk-ec2-role so the instances can access some AWS services, such as CloudWatch to send logs and SQS to read queues. In production you can customize different instance profiles for different purposes.
    • Service role: the role used by EB that determines which services and permission levels the application uses.
    • The user creating the EB environment here must have IAM permissions to create these Service and Instance roles. Besides the account owner and members of the Administrator group, less privileged users, for example members of a Developers group, can have this permission (iam:CreateServiceLinkedRole) inherited from a group or added manually.
  • Review the information on the next screen and, if everything looks right, click Launch.
  • After a few minutes the new environment will be created. You can see recent events on the Dashboard, follow more details under Events, and alarms and statistics under Health.
  • Under Configuration you can change the number and type of instances, the key pair for access, how updates are performed, and more.
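The console walkthrough above can also be scripted. Below is a minimal AWS CLI sketch, not part of the original demo: the application and environment names come from the steps above, the solution-stack string is a placeholder (valid values come from `aws elasticbeanstalk list-available-solution-stacks`), and a guard skips the calls when no AWS credentials are configured.

```shell
# Sketch of the EB setup above via the AWS CLI. The solution stack name
# is a placeholder; the guard skips the calls without AWS credentials.
if aws sts get-caller-identity >/dev/null 2>&1; then
  aws elasticbeanstalk create-application \
      --application-name "Hello World" \
      --description "Hello World application in PHP"
  # Solution stack names change over time; pick a current one from
  # `aws elasticbeanstalk list-available-solution-stacks`.
  aws elasticbeanstalk create-environment \
      --application-name "Hello World" \
      --environment-name helloworld-dev-xzy \
      --solution-stack-name "64bit Amazon Linux running PHP" \
      --option-settings \
        Namespace=aws:autoscaling:launchconfiguration,OptionName=InstanceType,Value=t1.micro
  result="environment requested"
else
  result="skipped: no AWS credentials"
fi
echo "$result"
```

The `--option-settings` flag mirrors the console's Configuration Details step; any other console option has a namespaced equivalent here.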

Testing an update to the first version

In the dashboard, click Update and deploy, choose the same helloworld-dev-0.9.zip bundle again, enter version 0.9 for this example and click Deploy, leaving the preferences at their defaults for now. EB should upload the bundle again and start replacing the instances in the ‘dev’ environment with the new version. After a few minutes the logs show the operation as complete.

Open Application Versions via the button in the top bar, under ‘Application Versions’. We will see version 0.9 at the top and the original ‘First Release’, their creation dates, the origin of the package or bundle, and the environment where each was deployed.

Clicking Environments on the left shows the dev environment in green and its associated URL.

Checking the AWS services created by EB

Elastic Beanstalk creates several services using CloudFormation, such as EC2 instances, a VPC, Security Groups, an Elastic Load Balancer and S3 buckets as application repositories, so we can inspect what sits behind an EB application.

Open the AWS services menu and, in the Compute section, click EC2. Note that there is an instance created for this service, with the environment description in its name, a public IP address, the Security Group in use, the key pair for access, the Linux instance type, and more.

Click Elastic Block Store on the left and note that an 8 GiB volume was created and attached to this test instance. Although the volume lives in EBS and can have a snapshot taken (useful for creating other applications after customizations), it should not be treated as permanent storage, because the volume is destroyed when the environment or application is deleted. One reason to use this kind of volume is to allow scaling the application vertically by changing the instance type and size.

Finally, click Load Balancing and then Load Balancers to see the ELB created for this application. It shows the VPC it was created in (here, the region's default), the AZs in use, port configuration, the status and health checks of the instance running the application, and many other options.

Access via domain name

Open the AWS services menu, Network and Content Delivery section, and go to Route 53. Click Hosted Zones to see your registered domains.

Create a new type A record named helloworld, pointing it at the first environment created by Elastic Beanstalk. After a few minutes it should be possible to open http://helloworld.rmonteir.name and see the active application version and environment.
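The same record can be created with the AWS CLI; the sketch below is illustrative, with the hosted zone ID, the Elastic Beanstalk alias hosted zone ID and the environment CNAME all as placeholders for your own values, and a credentials guard around the calls.

```shell
# Sketch: create an alias A record pointing at the EB environment.
# ZONEIDPLACEHOLDER, ZEBPLACEHOLDER and the DNS names are placeholders.
if aws sts get-caller-identity >/dev/null 2>&1; then
  cat > change.json <<'EOF'
{
  "Changes": [{
    "Action": "CREATE",
    "ResourceRecordSet": {
      "Name": "helloworld.rmonteir.name",
      "Type": "A",
      "AliasTarget": {
        "HostedZoneId": "ZEBPLACEHOLDER",
        "DNSName": "helloworld-dev-xzy.us-east-1.elasticbeanstalk.com",
        "EvaluateTargetHealth": false
      }
    }
  }]
}
EOF
  aws route53 change-resource-record-sets \
      --hosted-zone-id ZONEIDPLACEHOLDER \
      --change-batch file://change.json
  result="record change submitted"
else
  result="skipped: no AWS credentials"
fi
echo "$result"
```

An alias record, unlike a plain CNAME, can sit at any name in the zone and resolves directly to the ELB's addresses.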

Launching a new environment

Similar to the Blue/Green deployment model, we can create a new environment by cloning the current one and bringing application updates in as new versions. This allows a deployment to be tested before activation and rolled out to a few instances at a time, replacing and activating gradually and causing less disruption. Elastic Beanstalk performs the CNAME record swap on the Elastic Load Balancer.

In the application Dashboard, click Actions > Clone Environment to start the process. At this point it is possible to update the Linux or Windows platform used by EB when new versions are available. Check the availability of the URL again and enter a description. If everything is right, click Clone.

After a few minutes the new environment will be ready and can be tested by clicking its specific URL, http://helloworld-prod-xzy.us-east-1.elasticbeanstalk.com. The version in the ‘prod’ environment is the same 0.9 originally from ‘dev’.

Edit the ‘index.php’ file, changing the text to version 1.0 and environment prod, create a new bundle, and update the ‘prod’ environment to version 1.0 with it. To do so, edit the file in TextWrangler, save it, return to the Terminal and use the zip command to create the new bundle: zip -r ../helloworld-prod-1.0.zip .
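The bundle step can be reproduced end to end. This sketch recreates a minimal index.php (the text is illustrative, not the demo's original file) and zips it from inside the application folder, since EB expects the application files at the root of the archive.

```shell
# Build a minimal EB source bundle with index.php at the archive root.
workdir=$(mktemp -d)
mkdir "$workdir/helloworld"
cat > "$workdir/helloworld/index.php" <<'EOF'
<?php echo "Hello World - environment: prod, version: 1.0"; ?>
EOF
cd "$workdir/helloworld"
# Zip from inside the folder so index.php sits at the bundle root,
# not nested inside a top-level directory.
zip -q -r ../helloworld-prod-1.0.zip .
unzip -l ../helloworld-prod-1.0.zip
```

Zipping the folder itself (instead of its contents) is a common mistake that makes EB deploy an empty site, because the files end up one directory deep in the bundle.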

Back in the EB panel, click Upload and Deploy, select the new bundle, edit the version description to prod 1.0 and click Deploy. When the process finishes, refresh the browser and note that the description now shows environment prod v1.0.

Swapping from the dev environment to prod

Similar to a Blue/Green deployment, we can now switch from the dev environment to prod using the Swap Environment URL function. This makes Route 53 swap the Elastic Load Balancer record from ‘dev’ to ‘prod’.

First open the ‘dev’ environment, then go to the Dashboard again, click Actions > Swap Environment URL and choose the ‘prod’ environment as the destination. The ‘dev’ environment is selected automatically as the one to be replaced because there are only 2 environments. In a deployment with 3 or more environments, you must select the source environment.
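The swap itself is a single API call. A sketch with the AWS CLI, using this demo's environment names as placeholders and a credentials guard:

```shell
# Sketch: swap the CNAMEs of the two EB environments (Blue/Green cutover).
# Environment names are the ones used in this demo; adjust to your own.
if aws sts get-caller-identity >/dev/null 2>&1; then
  aws elasticbeanstalk swap-environment-cnames \
      --source-environment-name helloworld-dev-xzy \
      --destination-environment-name helloworld-prod-xzy
  result="swap requested"
else
  result="skipped: no AWS credentials"
fi
echo "$result"
```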

The record swap in Route 53 happens quickly, and once DNS records and caches have cleared, the new production environment is reached. Just refresh the URL http://helloworld.rmonteir.name and watch the page change after a few minutes.

Checking application logs and status

You can learn more about how the application is doing through the EB logs and CloudWatch. A quick example: open the Logs tab in EB, request the last 100 log lines and open them in a new browser tab. Here you can check details both of the hosts running the applications, Windows or Linux, and of the application stack itself, such as IIS or Apache.

The Health tab shows access statistics such as latency, the number of successful requests (2XX codes) and page-not-found errors.

The Monitoring tab shows application performance data such as CPU usage, average latency and network usage. Each graph can be isolated and enlarged for more detail, and the observation window can be changed to longer or shorter periods. Here you can define alarms on changes in these metrics, which leads us to the next tab.

Alarms summarizes the alarms defined in the Monitoring tab, where they can be tracked and modified as needed.

The Configurations tab has a number of controls for manipulating the environment. For example, under Scaling the default is 1 web server and a maximum of 4, plus the triggers that start a (horizontal) auto-scaling event launching more instances, here when network usage exceeds a certain level for a period of time. Besides scaling, under Configurations we can define upgrades to the web environment, for example new versions of PHP, Python, etc., as well as the instances' operating system, such as a newer Amazon Linux revision. One last important detail: on this page you can add certificates for TLS access on the Elastic Load Balancer, which is recommended for most web applications.

Finishing the demo and shutting down the services

At the end of the demo we can destroy the environments through EB, which deletes all the infrastructure created earlier, including an RDS database if one was provisioned with it. As is more common and recommended, the SQL database or other repository should be isolated from EB and would not be affected.

To start deleting the application, open the main EB menu for the Hello World application, click Actions and select Delete Application. AWS will begin deleting all environments in parallel.

Introducing the AWS Storage Gateway, Virtual Tape Library and Cached Volume

Introducing the AWS Storage Gateway

The AWS Storage Gateway provides solutions similar to the NAS appliances and tape libraries of a traditional data centre, for pure-cloud or hybrid environments. The Storage Gateway flavours can serve storage demands both from the local DC, through cached volumes acting as an iSCSI NAS, and from environments migrated to the cloud with few modifications, the lift-and-shift model. The same goes for the tape library solution, which can serve the virtual DC on AWS as well as the current on-premises one.

The iSCSI volumes are stored in an S3 bucket behind AWS's automated solution, while the virtual tapes, besides living in S3 for fast restores, can be archived to Glacier for retention, lower cost and compliance with archiving policies. It is important to know that a restore from Glacier can take 3 to 5 hours to begin, the time needed to retrieve the tapes from the archive.

As a result, the Storage Gateway can help in scenarios where servers are migrated to the cloud, or built directly there, without major changes to operational workflows. The most common cases are organizations moving environments to AWS, starting their journey and experimenting further with cloud computing.

In this demo we will not use a Virtual Private Cloud or the usual security and segregation controls, so we can focus on the Storage Gateway.

Launching the Windows 2012 server

For the demonstration we will use a Windows Server 2012 virtual instance, acting both as a backup server with VEEAM installed and as the server being backed up. Normally we would use backup agents to back up remote servers to this one, but to simplify the demo everything runs on the same instance. We will also use this server as the Storage Gateway NAS client, demonstrating how volumes can be mounted remotely over iSCSI.

Launching the instance in EC2

Open the AWS console and find EC2 in the Compute section. The dashboard shows any instances already in use in this region, Security Groups, key pairs, etc.

Click Launch Instance and follow the steps below:

  • Choose the Windows Server 2012 R2 Base image from the Free Tier.
  • Under ‘Instance Type’ keep the Free Tier option.
  • Under ‘Configure Instance Details’ confirm the default VPC will be used; normally nothing needs to change here.
  • Under ‘Add Storage’ also keep the default options.
  • Under ‘Add Tags’ we can add tags such as:
    • Name: Backup server
    • Service: Storage Gateway client
    • Environment: Demo
  • Under ‘Configure Security Group’ add an SG named ‘Windows Server’ to help distinguish it from other SGs and for cleanup at the end of the demo. The group allows Windows Remote Desktop access on TCP/3389 from anywhere on the Internet; while that may not be a problem in a quick demo or lab, in production the SG should allow access only from inside the company network and from permitted networks or hosts. If you have a jump box or bastion host, or a group of them, you can restrict server and administration access to those.
  • In ‘Review Instance Launch’ confirm the options are as desired and click the Launch button to start creating the Windows instance.

At this step you must use an existing key pair or create a new one. Click the View Instances button to open the console and the instance list. The console shows instance information, with the private IP address in the VPC, the public IPv4 address and, under ‘Status Checks’, the progress of instance creation. The process takes a few minutes.

To access the Windows Server on AWS you must generate the Administrator password from the private key. To do so, click the ‘Actions’ button and select ‘Get Windows Password’. Select the file with the key, or copy and paste its contents into the window, and click ‘Decrypt Password’. Note the password down for use shortly.

Open a Remote Desktop client and create/save a new connection. After a few seconds the new ‘Administrator’ user environment is prepared and we are at the Windows 2012 desktop.

Preparing the Volume-type Storage Gateway

We will prepare a sample Volume-type Storage Gateway on AWS that will serve as a remote iSCSI volume for the Windows server. Here we use a Windows client, but any compatible client should work, including ESXi servers with remote data stores.

For example, you could have multiple ESXi hosts or clusters working with Storage Gateways for high availability (HA) and vMotion. The limitation is that you would have to use a volume fully backed by local storage instead of the cached type, which effectively undermines using the Storage Gateway for this purpose. It can still serve a lab or tests with small data stores. In any case, testing vCenter and HA is simpler with an NFS share or an iSCSI NAS on the network.

Back to the Storage Gateway: here you can create one or more volumes of up to 16 TiB each, and 32 of them per gateway. These volumes are mirrored asynchronously to S3, and snapshots of them can be the basis of an EBS (Elastic Block Store) volume. This can be useful when moving a server with a large amount of data to the cloud. Go to the ‘Storage’ section and ‘Storage Gateway’ in the AWS console. If gateways already exist, the operations console opens; on first use the welcome page appears instead. In that case click ‘Getting Started’ and choose ‘Volume gateway’ as the gateway type to begin.

This gateway type has two variations, Cached Volumes and Stored Volumes. Although similar, they have different uses. Cached uses less space on the gateway's virtual machine or instance, synchronizing data asynchronously to AWS while keeping the most used data locally to speed up access and reduce the cost of data transfer out of AWS. In this case the cache does not need to match the full space reserved for the volume; for example, 150 GiB of cache can serve a 1 TiB volume. Stored keeps a full local copy of the volume, so all the space reserved for the volume on AWS must also be provisioned locally.

Select ‘Cached volumes’ here and click Next. The next step is choosing where and how to run the Storage Gateway virtual machine or instance. In an on-premises deployment in the company DC you would normally use a VM on VMware or Hyper-V; as of early 2018 AWS supports current ESXi plus Hyper-V 2008 R2 and 2012. For the demo everything runs in the cloud, so we choose ‘Amazon EC2’. In every case there are additional instructions just below. After selecting EC2, click the Launch button to choose an image (AMI) in the AWS Marketplace.

If this is your first access to the Storage Gateway, AWS asks you to accept the terms of service and shows the hourly costs for the chosen region. Click the Continue to subscribe button. There are now a few ways to launch the Storage Gateway instance, notably ‘1-Click Launch’ and ‘Manual Launch’. In this demo we use 1-Click and can review options such as the instance size (m3.xlarge), VPC settings, where the storage gateways can be segregated into their own subnet, and Security Groups. Here the SGs should define and restrict who and what can reach the Storage Gateways, normally only the client servers. An interesting way to do this is to use those servers' Security Groups as the source in the gateway-access Security Groups, provided the client servers use common SGs for outbound traffic. Although we will not detail this in the demo, it is worth considering in production as network/connection-level protection for the Storage Gateways. On top of that, each gateway can use the regular iSCSI access controls.

Click ‘Launch with 1-Click’ to start creating the Storage Gateway. AWS confirms the gateway instance launch has started and offers to open the EC2 instance list. Click the link to follow the instance creation.

While the instance is being created, note its public IP address, which your computer must be able to reach, for example 34.238.116.189. In about 2 minutes the instance should finish initializing.

Return to the Storage Gateway console window and click Next to continue the configuration. Enter your gateway's IP address that is reachable from the browser, in this case the public one. In production we would probably use the private IP, because the gateway would not be open to internet access. Click Connect to gateway.

On the ‘Activate Gateway’ screen, confirm it was launched in the desired region, here US East-1 (North Virginia), select its time zone as GMT -5:00 Eastern Time and enter a name such as ‘VolumeGateway1’. Proceed with Activate gateway. Note that the gateway appears as active, with a warning that there is no space for Cache and Upload Buffer volumes. Both need at least 150 GiB and are normally created when launching the gateway instance, but this can also be done after the gateway is created. Click Save and continue and confirm the gateway is active and has one alert. We will add and attach EBS (Elastic Block Store) volumes through the EC2 console.

Back in the EC2 console, in the ‘Elastic Block Store’ section, we will create 2 volumes of 150 GiB each under ‘Volumes’. Note that 2 volumes already exist, created and attached to this gateway instance. Click Create Volume and change the capacity to 150 GiB for now. The region and AZ (Availability Zone) must be the same ones we are working in (check first in the EC2 console or in the list of volumes already attached to the gateway). Optionally add a tag Name: Storage Gateway cache volume and click Create Volume. Repeat the operation to create a second EBS volume for the Storage Gateway upload buffer. Back in the EBS console you will see the new volumes as available and unattached, with their State in blue. Select them one at a time and attach them to the Storage Gateway instance. You can note the Volume IDs so they are assigned correctly to their functions in the Storage Gateway.
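Creating and attaching the two disks can also be done from the AWS CLI. In this sketch the availability zone, instance ID and device names are placeholders for your own values, and a guard skips the calls without credentials.

```shell
# Sketch: create the two 150 GiB EBS volumes for the gateway and attach
# them. AZ, instance ID and device slots are placeholders.
if aws sts get-caller-identity >/dev/null 2>&1; then
  cache_vol=$(aws ec2 create-volume --size 150 \
      --availability-zone us-east-1a --query VolumeId --output text)
  buffer_vol=$(aws ec2 create-volume --size 150 \
      --availability-zone us-east-1a --query VolumeId --output text)
  aws ec2 wait volume-available --volume-ids "$cache_vol" "$buffer_vol"
  # i-0123456789abcdef0 stands for the gateway instance; /dev/sdf and
  # /dev/sdg are free device slots on it.
  aws ec2 attach-volume --volume-id "$cache_vol" \
      --instance-id i-0123456789abcdef0 --device /dev/sdf
  aws ec2 attach-volume --volume-id "$buffer_vol" \
      --instance-id i-0123456789abcdef0 --device /dev/sdg
  result="volumes attached"
else
  result="skipped: no AWS credentials"
fi
echo "$result"
```

The AZ must match the gateway instance's AZ, just as the console steps above require.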

Return to the Storage Gateway console and click the Edit local disks button to see the list of disks available to this gateway. The two unallocated 150 GiB volumes are now visible. Edit both, assigning one the ‘cache’ function and the other ‘upload buffer’, then save the change. The alarm disappears.

With the gateway selected, create a volume via the Create volume button, making sure the corresponding gateway is chosen, with a capacity of 150 GiB in this case, a new empty volume and an iSCSI target named ‘demo’. Next it is possible to configure CHAP authentication for the iSCSI target, but we will not use it in this demo. Click Skip. Back in the Storage Gateway console, a message confirms the virtual volume was created and on which gateway: You have successfully created your volume vol-093d6f3b6c50625ea on gateway sgw-C728CDAE
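For reference, the same cached volume can be created through the Storage Gateway API; in this sketch the gateway ARN and account ID are placeholders, the network interface is the gateway's private IP from this demo, and the size matches the 150 GiB used above.

```shell
# Sketch: create the cached iSCSI volume on the gateway via the API.
# The gateway ARN (account and gateway ID) is a placeholder.
if aws sts get-caller-identity >/dev/null 2>&1; then
  aws storagegateway create-cached-iscsi-volume \
      --gateway-arn arn:aws:storagegateway:us-east-1:123456789012:gateway/sgw-C728CDAE \
      --volume-size-in-bytes $((150 * 1024 * 1024 * 1024)) \
      --target-name demo \
      --network-interface-id 172.31.95.161 \
      --client-token demo-volume-1
  result="volume requested"
else
  result="skipped: no AWS credentials"
fi
echo "$result"
```

The client token makes the call idempotent: retrying with the same token will not create a second volume.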

Adding the Storage Gateway volume as an iSCSI target in Windows

Back on the Windows Server, use the Windows search function (magnifying glass) and search for iSCSI. As this is the first access, Windows asks whether to enable iSCSI, and right after that the target configuration screen appears.

Enter the private IP address of the Storage Gateway instance (172.31.95.161) and click ‘Quick Connect’ to list the iSCSI targets. Since we did not use CHAP authentication, the services are open to any client. A new window indicating the connection was established shows the target name and the status as ‘connected’. Click OK to close the window.

Now we can see the volume in Windows storage management, create a file system on it and use it normally. Windows Device Manager shows a new 150 GiB disk. Back in the Windows Server search, type ‘Disk Management’ to open it. Confirm the new 150 GiB volume appears and create a file system on it: right-click to bring the disk ‘online’, again to initialize it as GPT, and finally format it as a 150 GiB ReFS volume named ‘Demo’. Create a test folder and file.

Try copying a folder from the Windows Server to the new volume to generate space consumption, data transfer and some logs, for example the Windows folder. Open the command prompt ‘cmd.exe’, run the command xcopy /e /y /r c:\windows d:\windows2 and wait for the copy to finish.

Monitoring Storage Gateway usage

Back in the Storage Gateway console you can check the upload buffer size during or after the copy, and a few seconds later confirm it has dropped to zero, showing the synchronization is complete. In a pure-cloud environment this process is faster than in a hybrid one, where large amounts of data must travel from the DC over the internet/VPN or Direct Connect to AWS. It is also possible, within the Storage Gateway console, to limit the bandwidth used by this service.

While the upload buffer can be ephemeral on faster, lightly used links, we can see the cache volume usage in the ‘Volumes’ section of the Storage Gateway.

Since the lab has seen little use, the cache may still show a very small value, in this example about 7% of the 150 GiB, roughly 10 GiB, about the size of the Windows folder copied earlier.

CloudWatch can show Storage Gateway statistics. There you can see cache performance and volume usage limits, and set alerts, for example when a volume reaches a certain level of use.
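Such an alert can be scripted; below is a sketch of a CloudWatch alarm on the gateway's cache usage, where the SNS topic ARN, account ID and gateway identifiers are placeholders, and an 80% threshold is an illustrative choice.

```shell
# Sketch: alarm when the gateway cache passes 80% used for 5 minutes.
# The SNS topic ARN and gateway identifiers are placeholders.
if aws sts get-caller-identity >/dev/null 2>&1; then
  aws cloudwatch put-metric-alarm \
      --alarm-name storage-gateway-cache-high \
      --namespace AWS/StorageGateway \
      --metric-name CachePercentUsed \
      --dimensions Name=GatewayId,Value=sgw-C728CDAE Name=GatewayName,Value=VolumeGateway1 \
      --statistic Average --period 300 --evaluation-periods 1 \
      --threshold 80 --comparison-operator GreaterThanOrEqualToThreshold \
      --alarm-actions arn:aws:sns:us-east-1:123456789012:storage-alerts
  result="alarm created"
else
  result="skipped: no AWS credentials"
fi
echo "$result"
```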

Virtual Tape Library demo

For this demo we will create a new Storage Gateway of the VTL type, add some virtual tapes and install VEEAM with a demo license.

Return to the Storage Gateway console, click the Create gateway button, and choose Tape gateway with Amazon EC2 as the platform.

We will launch a new EC2 instance as the VTL using the AWS Marketplace, and this time we will add the required volumes while creating the instance. Click Continue to subscribe.

Instead of ‘1-Click Launch’, select ‘Manual Launch’ and click the yellow ‘Launch with EC2 Console’ button for the desired region, in this case ‘US-East-1’, or North Virginia. The EC2 launch console opens at step 2 to choose the instance size. Follow these steps to define the VTL instance:

  • Use the ‘General purpose m4.xlarge’ size and click Next: Configure Instance Details
  • In step 3 we can choose the VPC, availability zone and subnet in which to launch the Virtual Tape Library. For the demo we use the defaults, but in production it is important to consider this in the context of the architecture and how the solution will work: costs, security and resilience.
  • In step 4, Storage, add the two 150 GiB disks that are the minimum required for the Storage Gateway to work. Enable Delete on Termination so they are not left orphaned at the end of the demo when we delete the gateways. In production you might leave this option off for a few reasons. One is swapping the VTL instance for a larger size: you would just attach the EBS volumes to the new instance and reconfigure the iSCSI targets. In a larger environment you may need network-optimized instances for higher performance with a huge backup volume; EBS volumes are in fact network volumes, not local to the instance, so they benefit from a larger instance with higher network throughput.
  • Step 5, tags: you can define tags for the instance name, group and function, to ease billing and identification.
  • Step 6: we can reuse the Security Group created for the volume-type Storage Gateway. The same security considerations apply.
  • Step 7: review the information, choose a key pair for access and launch the instance.

While the instance is prepared, note the public IP for the remote configuration, 34.230.33.103. Again a warning: since this is a demo, we leave the instance publicly reachable over the internet, but in production this access would be restricted to a bastion host or jump box, or to a few administrator hosts on the private network.

When the instance is running, choose to continue, enter its public IP address, configure the time zone (GMT -5:00), give it a name such as ‘TapeLibrary1’ and choose the backup application, in this demo ‘Veeam’. Next the disks are configured for use: select one 150 GiB disk for cache and the other for the upload buffer.

When ready, the next step is creating some tapes for the library. Click Create tapes and choose 5 tapes to start, of 100 GiB, the minimum size, with a 4-uppercase-letter prefix such as ‘DEMO’. Note that tapes can have variable sizes; for compatibility you can use a standard size, for example 800 GiB for an LTO-4. Click Create Tapes.
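The tape creation maps to a single API call. A sketch via the AWS CLI, with the gateway ARN as a placeholder and the sizes and prefix from this demo:

```shell
# Sketch: create the 5 virtual 100 GiB tapes with the DEMO barcode
# prefix. The gateway ARN is a placeholder.
if aws sts get-caller-identity >/dev/null 2>&1; then
  aws storagegateway create-tapes \
      --gateway-arn arn:aws:storagegateway:us-east-1:123456789012:gateway/sgw-12345678 \
      --tape-size-in-bytes $((100 * 1024 * 1024 * 1024)) \
      --client-token demo-tapes-1 \
      --num-tapes-to-create 5 \
      --tape-barcode-prefix DEMO
  result="tapes requested"
else
  result="skipped: no AWS credentials"
fi
echo "$result"
```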

In the ‘Tapes’ section you will see them being created and their final status when ready, plus the suffix generated by AWS as a barcode. Tags can be used on the tapes for classification, cost separation and other uses.

Back in the Tape Library you can manage some of its aspects, change Tags, and modify the virtual tape drive and changer types.

Installing Veeam and testing the backup

You must register on the VEEAM site and download ‘Backup and Replication 9.5’ for this demo. After downloading the ISO image, extract the files to the test d: drive created in the previous demo.

It will be necessary to install MSSQL Express, and possibly some other components, first.

While VEEAM installs, we can add the iSCSI targets through the initiator, similarly to the remote volume in the previous demo. Open iSCSI Initiator Properties and enter the private IP address (inside the VPC) of the VTL. After connecting, 1 media changer and 10 tape drives should be listed. Connect to each one in turn. Since we did not rename them in the Storage Gateway manager, they keep sequential names. Close the iSCSI Initiator.

Open the VEEAM console, which should be a shortcut on the desktop, and perform a local login. If the installation failed to bring up the VEEAM services, select the option to continue the installation anyway and restart the Windows server at the end. The VEEAM services should then start automatically.

To prepare the backup infrastructure you must install the Tape Drive and Media Changer, import the tapes and inventory them:

  • Create the tape library
  • Click import tapes inside the changer
  • Click inventory tapes inside the changer

When finished, click Backup in VEEAM:

  • Accept the default job name and choose the Windows ‘d:’ drive as the backup selection.
    • Create a new ‘Media Pool’
    • Choose the ‘AWS’ tape library. Here the changer can use any free tape and add more automatically. For more complex backups, different tapes and changers can be assigned.
    • Media Set: keep the suggested options to create a new one.
    • Retention: we use the default, not overwriting tapes in this demo.
    • Options: keep the defaults and the backup will start. Parallel backups with more than one tape drive are also possible here.
    • With the Media Pool created, the full backup can run. Select the next options for full and incremental backup, without scheduling and without ejecting the tape at the end of the backup.
    • Select ‘Run the job when I click finish’.
    • The job should take a few minutes to run.

MacOS lab setup – part1: OS, DNS and certificates

MacOS XPTO lab

Virtual machine setup on Fusion 8

  • Create a new machine using an image
  • Select MacOS .DMG downloaded from the Mac App Store
  • Leave the virtual machine settings at their defaults, making sure the network is set up as shared with the host. This keeps the VM relatively isolated, with broadcast, multicast and Bonjour traffic off the main network.

MacOS setup

  • Install MacOS
    • Do not enable iCloud.
    • Use server2 as the hostname in preferences.
    • IP 192.168.115.222/24, GW 192.168.115.2.
    • Install VMWare Tools and restart the VM.
    • Check internet access by opening Safari; it should load the welcome screen and allow access to any site.
    • Log in to the Mac App Store to download MacOS updates, restarting the virtual machine if needed.
  • Install the Server app and update it
    • Open the Mac App Store and search for the Server app to download it. It should be the latest version, unless the Server app is downloaded before MacOS is updated; usually after a MacOS update the Server app also needs an update to stay compatible.

Server initial network set up

  • Open the Server app and pin it to the Dock.
  • Go in the first menu option named server2 to view the main server information and hostname.
  • Click on the Edit Host Name button to change the hostname and domain. Select Internet mode and type the new hostname as ‘server2.xpto.inc’, then click Finish.


  • When asked about DNS enablement, select this option and notice the green light on the left pane confirming it is enabled.
  • The Alert section should have one new entry confirming the server hostname change was successful. Click on the Alert on the left pane, double-click the new entry to view details and clear it as new alert.

DNS setup

  • Go to DNS on the left pane, under the Advanced section. You should see the server enabled as 192.168.115.222, Permissions set to any network access, and the Forwarding server as 192.168.115.2, which is also VMWare Fusion's virtual default gateway address that reaches the real network's DNS (usually a home router in a home lab like this).
  • Note that the Lookups option initially allows access only from clients on the server's network.
  • There should be only one host entry named server2.xpto.inc created by the DNS enablement process when completing the hostname change above.
  • Click on the gear icon and select Show all records option. Two zones, Primary and Reverse should appear.
  • Now, to create the lab company internal domain xpto.inc select the + button and Add primary zone option.
    • Enter xpto.inc and select the Allow zone transfers option so a future backup DNS server can download the zone.
    • Leave the other options as default and select the Create button.
    • Create a new A record for server2.xpto.inc by selecting the + button again and choosing the Machine Record option.
    • Select the new xpto.inc zone, type server2 as the record name, enter the server IP address 192.168.115.222, optionally add a description, and press the Create button.
    • There should be four zones now: the original ‘server’ primary and reverse zones, and the new xpto.inc primary and reverse zones.
    • Select and delete the older zones, ‘server2.xpto.inc’ and ‘222.115.168.192.in-addr.arpa’, as they will not be used. Deleting the primary zone removes its reverse zone automatically.

ScreenShot2016-08-27at7.23.30PM-2016-08-28-06-06.png

    • Create a mail exchanger (MX) record by selecting the + button and the Mail Exchange Record option. Make sure the xpto.inc zone is selected, type the server2.xpto.inc hostname, and leave the priority at the default of zero. Select the Create button.
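The DNS service behind the Server app is BIND, so the zone assembled above corresponds roughly to the zone file sketched below. This is an illustration only: the serial, TTLs, and SOA contact are placeholder values, not what the Server app actually generates.

```
$TTL 3600
xpto.inc.   IN SOA  server2.xpto.inc. admin.xpto.inc. (
                2016082801 ; serial (placeholder)
                3600       ; refresh
                900        ; retry
                1209600    ; expire
                86400 )    ; negative-cache TTL
            IN NS   server2.xpto.inc.
            IN MX   0 server2.xpto.inc.
server2     IN A    192.168.115.222
```

The MX priority of zero matches the default left in the step above, and the A record matches the Machine Record created for server2.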

Certificate creation

    • First, let’s create a certificate request for the server.
      • Open the Certificate Assistant from the Dock and, on the first menu of options, select Request a certificate from an existing CA, then click Continue.
      • In the Certificate Information screen, use the administrator@xpto.inc email address for the certificate owner, server2.xpto.inc as the Common Name, and the same email as before for the certificate authority email address.
      • Select the option to save the certificate request to disk instead of sending it over email.
      • Save the certificate request on the Desktop with the default file name. The request file can be reviewed in any regular text editor.
    • There is no Certificate Authority (CA) yet to issue certificates, so let’s create one.
      • Change the Name to ‘XPTO’s CA’.
      • Make sure the Identity Type is set to Self Signed Root CA.
      • Select the User Certificate as ‘SSL Server’.
      • Type administrator@xpto.inc as the email address.
      • Keep it as the default CA and press the Create button.
    • Finally, use the Certificate Assistant to issue a server certificate from the request created previously.
      • Back on the Certificate Assistant, select the Use your CA to create a certificate for someone else option.
      • In the next window, drag the certificate request file created earlier from the Desktop into the box.
      • Select ‘XPTO’s CA’ in the Issuing CA box. Leave the Make this CA the default option as it is.
      • Press the Create button to complete the process. The Mail app will open to send the certificate; cancel the sending and quit Mail.
      • The Certificate Assistant will show the certificate information. For now, the server2.xpto.inc server certificate is not trusted because its issuer, XPTO’s CA, is not trusted on the server.
      • Open Keychain Access by pressing <Command>-<Space> and typing ‘keychain’ in Spotlight Search.
      • On the left pane, click on the Login pane to list both the root CA and the server certificate.
      • Go to the System pane and confirm that XPTO’s CA is listed there as not trusted, with a red X sign. Double-click it to edit its information and change the Trust setting When Using This Certificate to ‘Always Trust’.
      • After confirming the change with the Administrator password, the red X will turn into a blue + sign.
      • Go back to the Login pane and make sure the server2.xpto.inc certificate is now trusted, showing the blue + sign. Move this certificate to the System pane so it is available system wide.
      • You should see the server2.xpto.inc now as trusted by XPTO’s CA.

ScreenShot2016-08-27at9.39.39PM-2016-08-28-06-06.png

  • Go back to the Certificates pane in the Server app and select Secure services using: server2.xpto.inc – XPTO’s CA. You should see that all services are now using the standard server certificate issued by the XPTO root CA. Once this root CA is also trusted on clients and other servers, client software will trust the server2.xpto.inc certificate.
  • To test the certificate from the server computer, go to the Websites pane on the Server app, turn it on, and click on the Server site (SSL) option. It will open Safari automatically to https://server2.xpto.inc
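For readers who prefer the command line, the same CA-and-certificate flow can be sketched with the openssl CLI. This is a minimal illustration, not what Certificate Assistant does internally; the file names are arbitrary and the subjects reuse the lab values from above.

```shell
# 1) Create a self-signed root CA (the "Self Signed Root CA" step above).
openssl req -x509 -newkey rsa:2048 -nodes -keyout ca.key -out ca.pem \
  -days 365 -subj "/CN=XPTO CA/emailAddress=administrator@xpto.inc"

# 2) Create the server key and certificate request (the CSR saved to the Desktop).
openssl req -newkey rsa:2048 -nodes -keyout server.key -out server.csr \
  -subj "/CN=server2.xpto.inc/emailAddress=administrator@xpto.inc"

# 3) Issue the server certificate from the request, signed by the CA
#    (the "Use your CA to create a certificate for someone else" step).
openssl x509 -req -in server.csr -CA ca.pem -CAkey ca.key -CAcreateserial \
  -out server.pem -days 365

# 4) Verify the chain: the server certificate should validate against the CA.
openssl verify -CAfile ca.pem server.pem
```

The final command should report server.pem: OK, the command-line equivalent of the blue + trust indicator in Keychain Access.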

Upgrade the second node

Attach the new DVD image to this virtual machine on the VMWare host and:

Last login: Wed May 1 08:23:18 on ttys001
macbookpro:~ Rodrigo$ ssh -l osadmin 192.168.1.52
The authenticity of host '192.168.1.52 (192.168.1.52)' can't be established.
RSA key fingerprint is 70:b3:a1:f4:09:b2:a6:23:0e:ee:6d:c7:93:7c:2f:f2.
Are you sure you want to continue connecting (yes/no)? yes
Warning: Permanently added '192.168.1.52' (RSA) to the list of known hosts.
osadmin@192.168.1.52's password:
Command Line Interface is starting up, please wait …

Welcome to the Platform Command Line Interface

VMware Installation:
        2 vCPU: Intel(R) Core(TM) i7 CPU 950 @ 3.07GHz
        Disk 1: 500GB
        8192 Mbytes RAM

admin:utils system upgrade
utils system upgrade cancel
utils system upgrade initiate
utils system upgrade status

admin:utils system upgrade init
admin:utils system upgrade initiate ?

Syntax:
utils system upgrade initiate
utils system upgrade initiate listall

admin:utils system upgrade initiate

Warning: Do not close this window without first canceling the upgrade.

Source:

1) Remote Filesystem via SFTP
2) Remote Filesystem via FTP
3) Local DVD/CD
q) quit

Please select an option (1 - 3 or "q" ): 3
Please enter SMTP Host Server (optional): 192.168.1.41
Please enter Email Destination: ucmadmin@lab.inc
Checking for valid upgrades. Please wait…

Available options and upgrades in "":

1) UCSInstall_UCOS_8.6.2.22900-9.sgn.iso
q) quit

Please select an option (1 - 1 or "q" ): 1
Accessing the file. Please wait…

A system reboot is required when the upgrade process completes or is canceled. This will ensure services affected by the upgrade process are functioning properly.

Downloaded: UCSInstall_UCOS_8.6.2.22900-9.sgn.iso
File version: 8.6.2.22900-9
File checksum: –

Automatically switch versions if the upgrade is successful (yes/no): yes

Start installation (yes/no): yes
The upgrade log is install_log_2013-05-01.18.36.35.log
Upgrading the system. Please wait…
05/01/2013 18:36:36 file_list.sh|Starting file_list.sh|<LVL::Info>
05/01/2013 18:36:36 file_list.sh|Parse argument method=local_dvd|<LVL::Debug>
05/01/2013 18:36:36 file_list.sh|Parse argument source_dir=|<LVL::Debug>
05/01/2013 18:36:36 file_list.sh|Parse argument dest_file=/var/log/install/downloaded_versions|<LVL::Debug>
05/01/2013 18:36:36 file_list.sh|is_upgrade_lock_available: Upgrade lock is available.|<LVL::Debug>
05/01/2013 18:36:36 file_list.sh|is_upgrade_result_available: Result is not available|<LVL::Debug>
05/01/2013 18:36:36 file_list.sh|acquire_upgrade_lock: Lock is free, creating lock (pid: 10262)|<LVL::Debug>
05/01/2013 18:36:36 file_list.sh|Process local CD/DVD request|<LVL::Info>
0