shipping code – The uShip Blog

Effortless AMI Deployments with Chef Infra and Habitat – Part 2

This is Part 2 of a series. Please make sure to read Part 1 before continuing.

Deploying Habitat

In Part 1 of this series, we generated a new “webserver” cookbook, built a Habitat package from it, and pushed that package to Habitat Builder. Now we’re going to deploy a Windows server on Amazon Web Services. This server will load our Habitat package when it’s created, and after the package runs, we should have the default IIS site running.

The first thing we need to do is log in to the AWS Management Console and go to the EC2 Dashboard:

Click on “Launch instance” which will take us to the wizard for launching a server.

Search or scroll down to the image “Microsoft Windows Server 2012 R2 Base” and click “Select” to go to the next screen.

Select an instance size for your server. I’ll use t2.micro to stay in the AWS free tier.

Click the “Next: Configure Instance Details” button. On the next page, select a VPC (or create a new one if you don’t have one already), then scroll to the bottom, add the following to the “User data” field under “Advanced Details”, and click “Next: Add Storage”:

<powershell>
Start-Transcript

# Install Habitat
if ((Get-Command "hab" -ErrorAction SilentlyContinue)) {
    Write-Host "Habitat Installation found"
} else {
    Write-Host "Habitat Installation not found, installing..."
    (New-Object System.Net.WebClient).DownloadString('https://raw.githubusercontent.com/habitat-sh/habitat/master/components/hab/install.ps1') | Out-File install.ps1
    # Install Habitat
    if (Test-Path -Path env:HAB_VERSION) {
        .\install.ps1 -Version $env:HAB_VERSION
    } else {
        .\install.ps1
    }
}

if (!(Test-Path -Path env:HAB_LICENSE)) {
    $env:HAB_LICENSE = "accept-no-persist"
}

# Install supervisor and Habitat Windows Service
Write-Host "Installing Habitat Supervisor and Windows Service..."
hab pkg install core/hab-sup
hab pkg install core/windows-service
hab pkg exec core/windows-service install
[System.Environment]::SetEnvironmentVariable("HAB_LICENSE", "accept", [System.EnvironmentVariableTarget]::Machine)

Write-Host "Finished Installing Habitat Supervisor and Windows Service"

Start-Service -Name "Habitat"

Write-Host "Installing webserver package"
C:\ProgramData\Habitat\hab.exe pkg install uship/webserver
Write-Host "webserver package installed"
Write-Host "Loading webserver service"
C:\ProgramData\Habitat\hab.exe svc load uship/webserver
Write-Host "webserver service loaded"
Stop-Transcript
</powershell>

You can leave the default storage size or adjust it as needed and click “Next: Add Tags” to go to the next step. Feel free to add any tags that you’d like. I’m going to set a “Name” tag so I can easily find the server.

On the next step, create a new security group or select an existing one. You’ll want one that has port 80 open, and also 3389 if you want to be able to remote into it. Click “Review and Launch” and make sure your settings are good. Click the “Launch” button to create the server, and make sure to create or select a keypair before clicking “Launch Instances”. Launching the server will take a few minutes, but once it’s up, retrieve the password using the private key that corresponds to the keypair you selected at launch time and log in remotely to the instance.

To make sure that everything worked properly, we’ll check a couple of things. First, go to the C:\Users\Administrator\Documents directory and open the Powershell transcript.

In the transcript, we can see the Habitat installation, starting the Habitat service, and then loading the webserver package. To check that the Chef run completed successfully, open the Habitat.log file from the C:\hab\svc\windows-service\logs directory. You can see it loading the Habitat supervisor and then running the Chef Client:

2019-11-21 19:33:33,806 - Habitat windows service is starting launcher at: C:\hab\pkgs\core\hab-launcher\12605\20191112144934\bin\hab-launch.exe
2019-11-21 19:33:33,816 - Habitat windows service is starting launcher with args: run --no-color
2019-11-21 19:33:34,216 - hab-sup(MR): core/hab-sup (core/hab-sup/0.90.6/20191112145002)
2019-11-21 19:33:34,216 - hab-sup(MR): Supervisor Member-ID efdc426fe81743deac99d168bbda512e
2019-11-21 19:33:34,216 - hab-sup(MR): Starting gossip-listener on 0.0.0.0:9638
2019-11-21 19:33:34,216 - hab-sup(MR): Starting ctl-gateway on 127.0.0.1:9632
2019-11-21 19:33:34,216 - hab-sup(MR): Starting http-gateway on 0.0.0.0:9631
2019-11-21 19:33:35,145 - Logging configuration file 'C:\hab/sup\default\config\log.yml' not found; using default logging configuration
2019-11-21 19:34:41,087 - hab-sup(AG): The uship/webserver service was successfully loaded
2019-11-21 19:34:44,114 - hab-sup(MR): Starting uship/webserver (uship/webserver/0.0.1/20191115133545)
2019-11-21 19:34:44,137 - webserver.default(UCW): Watching user.toml
2019-11-21 19:34:44,153 - webserver.default(HK): Modified hook content in C:\hab\svc\webserver\hooks\run
2019-11-21 19:34:44,154 - webserver.default(SR): Hooks recompiled
2019-11-21 19:34:44,166 - webserver.default(CF): Created configuration file C:\hab\svc\webserver\config\attributes.json
2019-11-21 19:34:44,166 - webserver.default(CF): Created configuration file C:\hab\svc\webserver\config\bootstrap-config.rb
2019-11-21 19:34:44,166 - webserver.default(CF): Created configuration file C:\hab\svc\webserver\config\client-config.rb
2019-11-21 19:34:44,166 - webserver.default(SR): Initializing
2019-11-21 19:34:45,126 - webserver.default(SV): Starting service as user=win-3bdeq9ruckm$, group=<anonymous>
2019-11-21 19:34:56,767 - webserver.default(O): Starting Chef Client, version 14.11.21
2019-11-21 19:35:02,230 - webserver.default(O): Using policy 'webserver' at revision '835107fe240d0a571c9d2fc7450a88e208b0f04c5c5e8cbd3865c3838439d4b9'
2019-11-21 19:35:02,236 - webserver.default(O): resolving cookbooks for run list: ["webserver::default@0.1.0 (b9bf53c)"]
2019-11-21 19:35:02,349 - webserver.default(O): Synchronizing Cookbooks:
2019-11-21 19:35:02,535 - webserver.default(O): - iis (7.2.0)
2019-11-21 19:35:02,573 - webserver.default(O): - webserver (0.1.0)
2019-11-21 19:35:02,611 - webserver.default(O): - windows (6.0.1)
2019-11-21 19:35:02,611 - webserver.default(O): Installing Cookbook Gems:
2019-11-21 19:35:02,639 - webserver.default(O): Compiling Cookbooks...
2019-11-21 19:35:02,740 - webserver.default(O): Converging 2 resources
2019-11-21 19:35:02,740 - webserver.default(O): Recipe: iis::default
2019-11-21 19:35:02,763 - webserver.default(O): * iis_install[install IIS] action install
2019-11-21 19:35:02,764 - webserver.default(O): * windows_feature[IIS-WebServerRole] action install
2019-11-21 19:35:49,670 - webserver.default(O): * windows_feature_dism[IIS-WebServerRole] action install
2019-11-21 19:35:49,670 - webserver.default(O): - install Windows feature iis-webserverrole
2019-11-21 19:35:51,202 - webserver.default(O): * windows_service[iis] action enable (up to date)
2019-11-21 19:35:51,383 - webserver.default(O): * windows_service[iis] action start (up to date)
2019-11-21 19:35:51,427 - webserver.default(O): Running handlers:
2019-11-21 19:35:51,427 - webserver.default(O): Running handlers complete
2019-11-21 19:35:51,433 - webserver.default(O): Chef Client finished, 3/5 resources updated in 54 seconds

 

We can see that Chef ran the iis::default recipe to install IIS and start the service. Browse to the instance’s public IP address and you should see the default IIS site.
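
If you’d rather verify from a shell than a browser, here’s a quick check you can run on the instance itself (a sketch; it assumes the default site is bound to port 80):

# Expect StatusCode 200 from the default IIS site
(Invoke-WebRequest -Uri 'http://localhost' -UseBasicParsing).StatusCode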

At this point, we’ve shown how we can leverage Habitat and PowerShell user data to bring up a server and configure it without having to fully bootstrap it. In Part 3 of this series, we’ll look at how we can use the Parameter Store in AWS Systems Manager to handle dynamic configuration that was traditionally kept in Chef Vault or Data Bags.

Effortless AMI Deployments with Chef Infra and Habitat – Part 1

Background

At uShip, we’ve been moving to an AMI deployment strategy for standing up the web servers that house our main application. We made the decision as part of a larger strategy to ensure our environments (dev, qa, prod, etc.) were as similar as possible. We figured that if we could build a single AMI that is deployed to every environment, that would be a huge step toward accomplishing environment parity. While the process has mostly been straightforward, we did run into a problem, and the new Effortless Infrastructure pattern from Chef provided an elegant solution.

The Problem

Chef Infra is a great way of managing configuration for servers. One of the biggest reasons that we reached for Chef over something else is its Windows support. While other options have gotten better, Chef had that support back in 2015 when we were evaluating configuration management solutions. Chef’s client/server model allowed us to get visibility into our fleet. However, that visibility comes at a cost.

The cost has to do with bootstrapping nodes into the Chef Server. Traditionally, this process works well: you’d usually have long-lived nodes, and if you wanted to remove one, you could do that manually with chef-server-ctl. With our AMI deployment strategy, we were creating and destroying nodes on every deployment, so we were left with many missing nodes and no easy way of cleaning them up. Before we get into the effortless pattern, let’s look at the traditional way of bootstrapping a node.

Bootstrapping Chef Nodes

In Chef, bootstrapping is the process that installs the Chef Infra Client and sets up the node to communicate with the Chef Server. This can either be done using the knife bootstrap command from your workstation or, in the case of AWS, with a user data script. Here’s an example of what we were using for an unattended bootstrap:

Write-Output "Pull the encrypted_data_bag_secret key from S3"
& "C:/Program Files/Amazon/AWSCLI/bin/aws.exe" s3 cp s3://<my-super-real-s3-bucket>/default-validator.pem C:/chef/
& "C:/Program Files/Amazon/AWSCLI/bin/aws.exe" s3 cp s3://<my-super-real-s3-bucket>/encrypted_data_bag_secret C:/chef/encrypted_data_bag_secret

Write-Output "Create first-boot.json for Chef bootstrap into $environment policy_group"
$firstBoot = @{"policy_name" = "web"; "policy_group" = "$environment" }
Set-Content -Path C:/chef/first-boot.json -Value ($firstBoot | ConvertTo-Json -Depth 10)

Write-Output "Create client.rb file for Chef using a dynamically-generated node name"
$nodeName = "$(hostname)-{0}" -f ( -join ((65..90) + (97..122) | Get-Random -Count 4 | % { [char]$_ }))

$clientrb = @"
chef_server_url 'https://chef-server.example.com/organizations/default'
validation_client_name 'default-validator'
validation_key 'C:/chef/default-validator.pem'
node_name '{0}'
"@ -f $nodeName
Set-Content -Path C:/chef/client.rb -Value $clientrb

Write-Output "Run Chef client first time"
C:/opscode/chef/bin/chef-client.bat -j C:/chef/first-boot.json

 

I’d like to note that we were originally using Chef Vault to store secrets, but there doesn’t appear to be a way for a node to bootstrap itself and then give itself permissions to a vault item, so we’re using encrypted data bags here.
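
For anyone who hasn’t used encrypted data bags, the round trip looks roughly like this; the bag and item names here are hypothetical:

# On a workstation, create an encrypted item with the shared secret:
#   knife data bag create credentials database --secret-file ~/.chef/encrypted_data_bag_secret

# In a recipe, decrypt the item using the secret the user data script pulled down from S3
secret = Chef::EncryptedDataBagItem.load_secret('C:/chef/encrypted_data_bag_secret')
db_creds = data_bag_item('credentials', 'database', secret)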

Assuming that you’ve set up your S3 bucket policy and EC2 instance role, this solution works well to bring up instances. But, as mentioned earlier, if you boot up four new servers in each environment every time you deploy, you’ll have an increasing number of missing nodes. There is a Lambda out on the interwebs for cleaning up nodes in the Chef Server, but this is kind of a pain to do and only addresses the Chef Server; it does nothing for the ones in Chef Automate.
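
For context, manually cleaning up a single terminated node means deleting both its node and client objects, something like this (the node name is hypothetical, and knife is assumed to be configured against your Chef Server):

knife node delete web-i-0abc123 -y
knife client delete web-i-0abc123 -y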

Effortless Infrastructure

If you missed the session from ChefConf 2019, there’s an excellent talk by David Echols about what effortless config is. Essentially, the effortless pattern is a way to build and run your cookbooks as a single, deployable package. It accomplishes this using Habitat, Policyfiles, and Chef Solo. Before reading further, I urge you to check out that video and the track on Learn Chef Rally.

Prerequisites

Chef Workstation
Habitat

Generate a Cookbook

The first thing we need to do is generate a new cookbook. I’m going to deploy a cookbook that sets up IIS on a Windows server but the concepts should be similar if you’re deploying Linux servers.

PS C:\Users\uship\Projects> chef generate cookbook webserver
Generating cookbook webserver
- Ensuring correct cookbook content
- Committing cookbook files to git

Your cookbook is ready. To setup the pipeline, type `cd webserver`, then run `delivery init`

 

Let’s check out the content of the webserver cookbook:

PS C:\Users\uship\Projects> cd webserver
PS C:\Users\uship\Projects\webserver> tree
.
├── CHANGELOG.md
├── LICENSE
├── Policyfile.rb
├── README.md
├── chefignore
├── kitchen.yml
├── metadata.rb
├── recipes
│   └── default.rb
├── spec
│   ├── spec_helper.rb
│   └── unit
│       └── recipes
│           └── default_spec.rb
└── test
    └── integration
        └── default
            └── default_test.rb

7 directories, 11 files

 

To set up IIS, we’re going to leverage the iis cookbook. Add the following to the metadata.rb file:

name 'webserver'
maintainer 'The Authors'
# ...
# source_url 'https://github.com/<insert_org_here>/webserver'

depends 'iis', '~> 7.2.0'

 

We’ll need to go ahead and install the dependencies. For this, we’ll leverage Policyfiles. If you are unfamiliar, they’re basically what replaces Berkshelf and environments/roles. Check out the documentation but you should just need to run the following:

PS C:\Users\uship\Projects\webserver> chef install
Building policy webserver
Expanded run list: recipe[webserver::default]
Caching Cookbooks…
Installing webserver >= 0.0.0 from path
Installing iis 7.2.0
Installing windows 6.0.1

Lockfile written to /Users/uship/Documents/effortless_ami_deployments/webserver/Policyfile.lock.json
Policy revision id: c2746cac28e13e1dae4fa99f4b9f9d56e5b7bf11894f1cce1e8940a2f4de42c3
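
For reference, the Policyfile.rb that `chef generate cookbook` created for us looks roughly like this (a sketch; the generated comments are trimmed):

# Policyfile.rb: describes how Chef Infra should build and run this cookbook

# A name for the policy
name 'webserver'

# Where to find external cookbooks (the iis dependency resolves from here)
default_source :supermarket

# The recipes chef-client will converge, in order
run_list 'webserver::default'

# Use this cookbook from the local working directory
cookbook 'webserver', path: '.'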

 

Now that we have our dependencies installed, let’s update the Chef recipe to install IIS.

#
# Cookbook:: webserver
# Recipe:: default
#
# Copyright:: 2019, The Authors, All Rights Reserved.

include_recipe 'iis'

 

This will install IIS on the server and enable the W3SVC service. At this point, if you boot up a Test Kitchen instance to test and then browse to the IP address, you should see the default Internet Information Services page.

Package the Cookbook

As I said earlier, the effortless infrastructure pattern leverages Habitat to package and run your Chef cookbook like an application. To package this up, we’ll need to habitatize our application and create a basic structure. Note that this is going to be deployed and run on a Windows server so it needs to be built on a Windows box to work properly. If you’re working on Mac or Linux, the concepts are the same but you’d use Bash instead of Powershell for writing your plan. Again, I’ll defer to the Habitat documentation for the specifics.

From the root of your cookbook directory, initialize the Habitat plan, using your origin:

PS C:\Users\uship\Projects\webserver> hab plan init -o uship
» Constructing a cozy habitat for your app…

Ω Creating file: habitat/plan.ps1
`plan.sh` is the foundation of your new habitat. It contains metadata,
dependencies, and tasks.

Ω Creating file: habitat/default.toml
`default.toml` contains default values for `cfg` prefixed variables.

Ω Creating file: habitat/README.md
`README.md` contains a basic README document which you should update.

Ω Creating directory: habitat/config/
`/config/` contains configuration files for your app.

Ω Creating directory: habitat/hooks/
`/hooks/` contains automation hooks into your habitat.

For more information on any of the files:
https://www.habitat.sh/docs/reference/plan-syntax/

→ Using existing file: habitat/../.gitignore (1 lines appended)
≡ An abode for your code is initialized!

 

For the effortless infrastructure, we’ll lean on the Habitat Scaffolding provided by the Habitat core team. You can see what the scaffolding is doing by looking in the repository, but all we need to do is update the habitat/plan.ps1 file:

# This is the name of our Habitat package
$pkg_name="webserver"

# Update this with your origin
$pkg_origin="uship"

# Package version. Typically follows Semantic Versioning
$pkg_version="0.0.1"

# Update this per your preferences
$pkg_maintainer="uShip, Inc. <devops@uship.com>"

# We need these dependencies for our application to run
$pkg_deps=@(
    "core/cacerts"
    "stuartpreston/chef-client" # https://github.com/habitat-sh/habitat/issues/6671
)

# Use the scaffolding-chef-infra scaffolding
$pkg_scaffolding="chef/scaffolding-chef-infra"

# Name of our Policyfile
$scaffold_policy_name="Policyfile"

# Location of the Policyfile. In this case, habitat/../Policyfile.rb
$scaffold_policyfile_path="$PLAN_CONTEXT/../"

 

The last thing we need to do before we can build our Habitat package is update the configuration for the Chef Client that will be running. Habitat uses Toml for configuration and the default config is in habitat/default.toml:

# Use this file to templatize your application’s native configuration files.
# See the docs at https://www.habitat.sh/docs/create-packages-configure/.
# You can safely delete this file if you don’t need it.

# Run the Chef Client every 5 minutes
interval = 300

# Offset the Chef Client runs by 30 seconds
splay = 30

# No offset for the first run
splay_first_run = 0

# Wait for Chef Client run lock file to be deleted
run_lock_timeout = 300

 

Go ahead and remove the habitat/config and habitat/hooks directories as these aren’t needed and tend to cause errors with the build:

PS C:\Users\uship\Projects\webserver> rmdir habitat/config
PS C:\Users\uship\Projects\webserver> rmdir habitat/hooks

 

To build our Habitat package, we’ll enter the Habitat studio. The studio is a clean room which only packages up the dependencies that have been specified and nothing else.

PS C:\Users\uship\Projects\webserver> hab studio enter
WARNING: Using a local Studio. To use a Docker studio, use the -D argument.
hab-studio: Creating Studio at C:\hab\studios\Users--uship--Projects--webserver
» Importing origin key from standard input
≡ Imported public origin key uship-20190919164651.
» Importing origin key from standard input
≡ Imported secret origin key uship-20190919164651.
** The Habitat Supervisor has been started in the background.
** Use 'hab svc start' and 'hab svc stop' to start and stop services.
** Use the 'Get-SupervisorLog' command to stream the Supervisor log.
** Use the 'Stop-Supervisor' to terminate the Supervisor.

hab-studio: Entering Studio at C:\hab\studios\Users--uship--Projects--webserver
[HAB-STUDIO] Habitat:\src>

 

Inside the studio, we’ll run build which will use the default location of the plan file in habitat/plan.ps1:

[HAB-STUDIO] Habitat:\src> build
: Loading C:\hab\studios\Users--uship--Projects--webserver\src\habitat\plan.ps1
webserver: Plan loaded
webserver: Validating plan metadata
webserver: hab-plan-build.ps1 setup
webserver: Using HAB_BIN=C:\hab\pkgs\core\hab-studio\0.83.0\20190712234514\bin\hab\hab.exe for installs, signing, and hashing
webserver: Resolving scaffolding dependencies
» Installing chef/scaffolding-chef-infra
⌂ Determining latest version of chef/scaffolding-chef-infra in the 'stable' channel
→ Using chef/scaffolding-chef-infra/0.16.0/20191028151207
≡ Install of chef/scaffolding-chef-infra/0.16.0/20191028151207 complete with 0 new packages installed.
webserver: Resolved scaffolding dependency 'chef/scaffolding-chef-infra' to C:\hab\studios\Users--uship--Projects--webserver\hab\pkgs\chef\scaffolding-chef-infra\0.16.0\20191028151207
webserver: Loading Scaffolding C:\hab\studios\Users--uship--Projects--webserver\hab\pkgs\chef\scaffolding-chef-infra\0.16.0\20191028151207/lib/scaffolding.ps1
» Installing chef/scaffolding-chef-infra
⌂ Determining latest version of chef/scaffolding-chef-infra in the 'stable' channel
→ Using chef/scaffolding-chef-infra/0.16.0/20191028151207
≡ Install of chef/scaffolding-chef-infra/0.16.0/20191028151207 complete with 0 new packages installed.
webserver: Resolved build dependency 'chef/scaffolding-chef-infra' to C:\hab\studios\Users--uship--Projects--webserver\hab\pkgs\chef\scaffolding-chef-infra\0.16.0\20191028151207
» Installing core/chef-dk/2.5.3/20180416182816
→ Using core/chef-dk/2.5.3/20180416182816
.
.
.
webserver: Preparing to build
webserver: Building
Building policy webserver
Expanded run list: recipe[webserver::default]
Caching Cookbooks...
Installing webserver >= 0.0.0 from path
Using iis 7.2.0
Using windows 6.0.1

Lockfile written to C:/hab/studios/Users--uship--Projects--webserver/src/Policyfile.lock.json
Policy revision id: f8a3f2d55e079328c164d2c0250854348cdb7900e89c4c8e9cbe155825d7635b
webserver: Installing
Exported policy 'webserver' to C:\hab\studios\Users--uship--Projects--webserver\hab\pkgs\uship\webserver\0.0.1\20191114064617

To converge this system with the exported policy, run:
cd C:\hab\studios\Users--uship--Projects--webserver\hab\pkgs\uship\webserver\0.0.1\20191114064617
chef-client -z

Directory: C:\hab\studios\Users--uship--Projects--webserver\hab\pkgs\uship\webserver\0.0.1\20191114064617

Mode   LastWriteTime      Length Name
----   -------------      ------ ----
d----- 11/14/2019 6:47 AM        config
webserver: Writing configuration
webserver: Writing default.toml
d----- 11/14/2019 6:47 AM        hooks
webserver: Creating manifest
webserver: Building package metadata
webserver: Generating package artifact
» Signing C:\hab\studios\Users--uship--Projects--webserver\hab\cache\artifacts\.uship-webserver-0.0.1-20191114064617-x86_64-windows.tar.xz
→ Signing C:\hab\studios\Users--uship--Projects--webserver\hab\cache\artifacts\.uship-webserver-0.0.1-20191114064617-x86_64-windows.tar.xz with uship-20190919164651 to create C:\hab\studios\Users--uship--Projects--webserver\hab\cache\artifacts\uship-webserver-0.0.1-20191114064617-x86_64-windows.hart
≡ Signed artifact C:\hab\studios\Users--uship--Projects--webserver\hab\cache\artifacts\uship-webserver-0.0.1-20191114064617-x86_64-windows.hart.
webserver: hab-plan-build.ps1 cleanup
webserver:
webserver: Source Cache: C:\hab\studios\Users--uship--Projects--webserver\hab\cache\src\webserver-0.0.1
webserver: Installed Path: C:\hab\studios\Users--uship--Projects--webserver\hab\pkgs\uship\webserver\0.0.1\20191114064617
webserver: Artifact: C:\hab\studios\Users--uship--Projects--webserver\src\results\uship-webserver-0.0.1-20191114064617-x86_64-windows.hart
webserver: Build Report: C:\hab\studios\Users--uship--Projects--webserver\src\results\last_build.ps1
webserver: SHA256 Checksum:
webserver: Blake2b Checksum:
webserver:
webserver: I love it when a plan.ps1 comes together.
webserver:

 

If everything is successful, the newly-built package will be in the results directory. Let’s go ahead and push it to the Habitat Bldr Service. We can use the results/last_build.ps1 file to set variables so we don’t need to specify the full path to the artifact. Note that you’ll need to make sure your auth token is set up.
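
If you haven’t generated a token yet, you can create a personal access token from your Builder profile page and export it before uploading. A minimal sketch (the token value is a placeholder):

# Habitat reads the Builder auth token from HAB_AUTH_TOKEN
$env:HAB_AUTH_TOKEN = "<your-builder-token>"

Alternatively, `hab cli setup` will walk you through storing the token in your CLI configuration.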

PS C:\Users\uship\Projects\webserver\results> . .\last_build.ps1
PS C:\Users\uship\Projects\webserver\results> hab pkg upload $pkg_artifact
79 B / 79 B | [=====================================================================================================================================================================================] 100.00 % 654 B/s
→ Using existing public origin key uship-20190919164651.pub
→ Using existing core/cacerts/2019.08.28/20190829172945
→ Using existing stuartpreston/chef-client/14.11.21/20190328012639
↑ Uploading uship-webserver-0.0.1-20191114064617-x86_64-windows.hart
70.89 KB / 70.89 KB | [===========================================================================================================================================================================] 100.00 % 1.45 MB/s
√ Uploaded uship/webserver/0.0.1/20191114064617
≡ Upload of uship/webserver/0.0.1/20191114064617 complete.

 

You should now have a public “webserver” package available in the “unstable” channel of your Habitat origin. In the next part of this blog post series, we’ll build an AMI and deploy our new package to a server using that AMI. If you want to see the code for this, it’s available at https://github.com/uShip/effortless_ami_deployments and the Habitat package is at https://bldr.habitat.sh/#/pkgs/uship/webserver.
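
One note on channels: a fresh upload lands in unstable. When you’re ready for supervisors pulling from the default stable channel to pick it up, you’d promote the release; a sketch using the build identifier from above:

hab pkg promote uship/webserver/0.0.1/20191114064617 stable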

uShip University: Continuing Education for Developers, by Developers

At uShip, our employees, especially software developers and engineers, like to be constantly learning and challenged as it ultimately makes them happier people and co-workers. They also like to know the company supports them exploring new interests, mastering current expertise, and being naturally curious.

So in December 2017, we launched uShip University, a new program for our developers and engineers. It’s an in-house continuing education program run by employees for employees.

No mascots. No expensive textbooks. No tailgating (unless on their own time, of course).

uShip University’s first turnout was amazing. Feedback from developers was that they left excited about learning as a group, gained both personal and technical confidence, and were impressed with the preparation and approach of the course. Course homework naturally resurrected college flashbacks for many.

Courses are prepared and delivered by uShip’s technical mentors from uShip’s long-time in-house mentorship program. Going forward, uShip University will be held semi-monthly with staggered courses so participants can join multiple tracks.

If you’re considering one of the many developer roles open at uShip, there’s no limit to learning.  Come join us!

Distributed Operations: How uShip Built a Culture of Code Ownership for Faster Feature Delivery

Article originally published by VictorOps

The original all-hands-on-deck culture faltered during growth.

Raleigh Schickel, DevOps Manager, has seen uShip evolve from a small team with a few developers, to a larger company with a dev team size of 60 and growing. Initially, everyone was always on-call for their own code. But this culture of ownership changed as the company grew.

“As we hired more people, code ownership centralized, though not intentionally,” says Raleigh. “Developers started expecting Operations to be responsible for the running system, and started working on features and moving on.”

uShip is a continuous deployment shop, and developers are empowered to deploy code at any time. As Operations became more centralized, it was becoming more difficult to determine the cause of application issues.

“Problems that could have had a quick easy fix would lag,” says Raleigh. “This made our Time to Identify and Time to Resolve unnecessarily long. The question became: How can we decrease time to identify and resolve?”

The challenge: how to recreate that early culture of code ownership.

uShip’s developers also expressed their desire to own their own infrastructure and not wait on others. But they didn’t want to be on-call. Raleigh says, this didn’t add up.

“[The developers] are looking for ways to speed up their development processes and time to market, but there are security and operational problems with that. If they change a setting and go to sleep and the service breaks, who deals with it? Who knows the most about it? I don’t know what they did.”

In response to their request, Raleigh convinced the developers to take on-call responsibilities. “Our rationale was this,” says Raleigh. “If you are willing to be responsible for the code you are delivering today, then we can expand access to the infrastructure that we are creating for tomorrow.” They agreed.

VictorOps helps democratize the on-call process.

Now, more than ever, with a team of 60 and growing, uShip needed a better way to manage on-call. uShip had handled incidents and logged communication via email, with Nagios paging the team directly. This process was unwieldy.

They chose VictorOps for intelligent alerting, routing, and incident management. Now this 60+ person team could intelligently and humanely handle incidents that might occur anytime. Raleigh explains:

“Before VictorOps, we were limited to the same four or five people who were on-call all the time and that was a burnout gig. VictorOps allowed us to democratize the on-call process. We spread out the on-call load, which helps build empathy among developers about what other people go through. It allows those people who have traditionally been on-call to step back for a moment and catch their breath.”

The VictorOps developer discounted pricing program enabled uShip to affordably provide accounts for the entire development team.

Creative approaches to on-call rotation schedules.

To manage their on-call schedules, development teams work on a three-month cycle in which each team spends two weeks on call. They are on-call from 6pm until 9am, at which time the Ops team takes over.

uShip’s developers use the VictorOps team set up and scheduling features extensively. Since each team sets its own schedule for its members, they have used their creativity to design complex rotations. For example, they used VictorOps to put themselves on-call in two-hour chunks.

Raleigh especially loves the scheduled override feature because if there is a last-minute schedule change, it’s no problem. If someone on-call gets sick or something happens, they can just create an override instead of having to tweak the on-call schedule.

Devs on-call handle application health and well-defined issues.

Raleigh explains that uShip’s development teams are primarily on-call to monitor application health. They respond to incidents related to questions such as, How many exceptions do we have? Is the marketplace healthy? Do we have enough new listings? Do we have enough new users?

Developers are also on-call for infrastructure issues that have well-defined, simple fixes. “As long as the alert is clear and tells them what is going on, they can go push a button and easily fix a problem,” says Raleigh. “If they have to go reset app servers, we have buttons for that.”

However, if a Linux server is broken and requires intensive troubleshooting, or if a telemetry system is down, an Ops team member handles those incidents; they are not a developer’s responsibility to solve.

Always on-call in their particular area of expertise.

For the most part, developers aren’t part of a time-based on-call rotation. Instead, they are always on-call for their code in their area of expertise. Via the VictorOps Incident Automation Engine, Raleigh set up routing keys that send each alert to the right expert. During feature releases, the responsible dev team goes on-call until everyone is comfortable that the deployment was successful.

“Developers get to think about and understand the whole system in a way that they were not able to before,” says Raleigh. “Their mindset was: of course my code works. Actually, there is a giant system out there that interacts with your code.”

Using Slack to create manual incidents eliminates even more noise.

The dev teams self-organized to have one person from each team on-call at all times in case of a problem. They wrote an app called the Victorbot that allows anybody in Slack to create a manual incident and page the appropriate team via Slack in case of an emergency. “This is another way that VictorOps has helped us only page the right team when we need a response,” says Raleigh.

Devs on-call feel empathy and build even better code.

Raleigh explains why putting devs on-call has been so great for the team. He says, “The devs get a little taste of what it’s like to wake up in the middle of the night and handle the platform. They have shown great desire to make sure the launch of new code is healthy and for being the primary person on-call for it at launch. The best part is that we’re shifting back to the ownership culture.”

Choosing to build features rather than building a huge operations team.

Ultimately, owning code isn’t just a nice-to-have. It enables uShip to put its resources toward development rather than toward supporting increasingly complicated infrastructure, especially as microservices proliferate and require specialization. Raleigh says:

“If you believe in democratizing operations, then developers need to be on-call. Otherwise, if you have 20 microservices and five go down, how many Ops people would you need to put that fire out? It’s a choice. Are you going to pay for developers or are you going to pay for an Ops team? We think our company benefits more from delivering products. The more developers you have, the more you can develop product. It’s just kind of the reality.”

The DevOps team has more time to innovate.

With uShip’s culture shift and devs now available to take on-call, Raleigh’s DevOps team has more time to focus their work on helping the company innovate faster, which means writing code and developing and improving infrastructure. Raleigh says, “At some point in your company’s life, you’re going to take another look at code that was written with ideals in mind, and realize that as volume and traffic increase, it doesn’t always perform very well. Right now, for example, our team is currently focused on writing code to reduce load on the database.

“We’re turning two of my senior dot net developers into SREs so we can focus on this work,” Raleigh continues. “It’s good for everyone involved to be doing this type of work as a means to improve platform performance and mitigate future issues and alerts. We would rather have the team working for the future than firefighting today’s application issues.”

What We’re Reading Between Pushes, Vol. 2

uShip Engineering values a culture of continual learning. A degree may land you an entry-level position, but in software development especially, education is an ongoing process in both formal and informal settings. As such, we encourage our team members to continually stretch the bounds of their knowledge, and we love to dive into the things teammates find interesting enough to share. We’ve decided to publish a sample of what we’ve come across recently; even though much of what we find isn’t new, it may be new to you.

Shaun Martin, Director of Software Development

What Google Taught Me About Scaling Engineering Teams by Edmond Lau
Even though our Product and Engineering orgs are currently in a slower, steady growth phase, there are always ways to improve communication, knowledge sharing and team/department structure at various stages.
For the New Team Lead: The First Six Things You Should Know
We’re entrenched in formalizing leadership training for all our Development Team Leads for the first time, previously relying (quite effectively) on a one-on-one mentorship approach. This post was shared by one of our Team Leads, and it highlights a lot of soft skills and challenges facing new leaders.

You Are Not Paid to Write Code

A thought- and discussion-provoking post from Brave New Geek, this post addresses the tunnel vision we often succumb to as developers, believing our problem space to be 100% unique. Developers are paid to solve problems with as few lines of code as possible, keeping solutions elegant yet simple. To do this, we need to take the time to research and leverage existing frameworks, libraries, open source projects, and other solutions that get us 90% of the way there. Instead of “reinventing the wheel,” we have a tendency to build our special snowflake sedan by reinventing the axles, glovebox, steering column, and A/C.
The Obstacle is the Way by Ryan Holiday (book)
The third book from Ryan Holiday, its title was inspired by an entry in Marcus Aurelius’ Meditations: “The impediment to action advances action. What stands in the way becomes the way.” Problems are inevitable, so when you view each challenge as an opportunity to learn and a chance to practice one or more virtues, your entire perspective on your problems begins to change.

Bill Fienberg, Developer

The Willpower Instinct: How Self-Control Works, Why It Matters, and What You Can Do to Get More of It by Kelly McGonigal (book)
Laughing at myself because it’s taken me months to finish a book that is supposed to help you increase your willpower.

Brent Lewis, Developer

Software Estimation: Demystifying the Black Art by Steve McConnell (book)

Evan Machnic, DevOps Engineer

The Phoenix Project by Gene Kim, Kevin Behr, and George Spafford (book)
This is a nice quick read  about a fictional company that adopts DevOps practices to bring projects under control and save the business. Great read for anyone who works in technology, not just for DevOps.
The DevOps Handbook by Gene Kim, Jez Humble, Patrick Debois, and John Willis (book)
This is a companion book to The Phoenix Project. It builds on the principles in Phoenix and outlines how to put those principles into practice in the real world. If you’re looking to implement DevOps in your company or you want to really understand the Three Ways of DevOps, this is a great read.


Not all <video> is created equal.

Recently at uShip, we launched a new homepage as part of our “Design Language System” (DLS) initiative. The new homepage featured an animated “shipper” and “carrier” having a conversation. This is achieved through the use of a <video> tag, a feature proposed in 2007 by Opera, and iterated greatly throughout the following years. This blog will go through a brief history of the <video> feature, our previous homepage which utilized a video in its design, and the obstacles we came up against in our new homepage.
The Early Days
The video tag was first proposed by Opera, in the age when Flash was still king. Calling for video to become a first class denizen in their manifesto, they quickly ran into issues. The HTML5 Working Group wanted to have at least one high quality video format which all browsers could support. However, back then, there were no known codecs that could satisfy all the current browsers. To put it simply, universal browser compatibility was held back by patents and politics. Even Steve Jobs expressed concern over “patent pools” being assembled to go after “open sourced” codecs such as Theora. This is why we can’t have nice things, or perfect video on home pages.

Reassessing our Homepage
The previous version of the homepage was disjointed, touted advanced features that most users would never encounter, and did not provide a clear message of the uShip platform and brand. In a nutshell, users didn’t know what uShip was or how it could benefit them. The lack of focus and the Frankensteined messaging created confusion for our users.

Our old homepage had a full width video.

Through multiple rounds of concepts and explorations, the visual design team landed on a message and visual style that would anchor the rest of the homepage: “Shippers meet Carriers. Carriers meet Shippers.” A small looping video showing a carrier and shipper conversing with each other adds a personal touch.
Color Range Woes
So everything went well, right? Nope. When we opened the homepage in Internet Explorer/Edge, we were greeted with an abhorrent video.

We get signal. Gray screen turn on.

Notice the difference in the gray backgrounds? This didn’t happen for every single Edge user; it happened for Edge users whose video renderer did not support the full color range of 0-255. In the video driver our developer machines use, the default color range is 16-235 (limited). The “white” background for the video was rendering at around 226. Changing nVidia’s color settings to “Full” resulted in the background rendering at the intended white. But that’s a moot point, as we can’t control hardware color settings from the web! So now that we know the issue, what are we to do?
Adapting
I brought an alternative to our design department: change the video’s background to be within the limited white range. The implications of this for the site’s overall design were less desirable. There were some alternatives we could have explored further (falling back to a GIF, for example), but with our deadline approaching, we needed to implement a solution.
Das Kode
First, we attempted to deliver WebM video. If the browser does not support WebM, we deliver MP4. If the browser does not support either format (which should not happen for our supported browsers), we deliver a message stating that the video is not supported on their current browser.

The actual code (nothing special):
<video autoplay loop autostart class="homepage-video">
    <source src="https://path/to/webm" type="video/webm">
    <source src="https://path/to/mp4" type="video/mp4">
    Your browser doesn't support HTML5 video tag.
</video>
For Internet Explorer/Edge we simply hide the video and replace it with a still image, along the lines of the sketch below.
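
Here’s roughly what that swap can look like; a sketch using user-agent sniffing, where homepage-still is a hypothetical class name for the fallback image (homepage-video comes from the markup above):

// Trident/ matches IE 11; Edge/ matches pre-Chromium Edge
var isIEOrEdge = /Trident\/|Edge\//.test(window.navigator.userAgent);

if (isIEOrEdge) {
    // Hide the video and show the fallback still image instead
    document.querySelector('.homepage-video').style.display = 'none';
    document.querySelector('.homepage-still').style.display = 'block';
}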

Staring Contest!

I’m optimistic that developers will not have to deal with this issue for much longer, as the recently released Microsoft Edge 14 supports WebM video. Of course, many companies like ourselves will continue to support Edge 13 and Internet Explorer 11. For now, we’ll have to consider low-level issues like color range settings, or just avoid (or at least be aware of) HTML5 video.
Run into any other strange/quirky <video> bugs?
Let us know in the comments!

Deconstructing throttle in underscore.js

Ever try to build a rate limiter? underscore.js has one called throttle. And like most utility functions, its source code on the surface looks a bit dense. Teasing it apart piece by piece helps us see how it accomplishes its purpose. In plain language:

“We have a browser event that fires constantly, but we don’t want our event handler function to fire constantly, too — only once per second, max.”

The Big Picture

Imagine we want to build a clever new app that writes to the browser console the user’s mouse cursor coordinates as they move the mouse.

function announceCoordinates (e) {
    console.log('cursor at ' + e.clientX + ', ' + e.clientY);
}
window.addEventListener('mousemove', announceCoordinates);

Brilliant!

Except, as the user wanders about, mousemove fires constantly. Maybe we’re announcing the coordinates too often? It’s arguable. And if instead of console logging we were telling our server what the mouse coordinates are, we’d definitely be sending word to home base way, way too often.

So — we need a rate limiter.

Slowing Our Roll

Let’s rewrite our event binding to take advantage of throttle from underscore.js.

window.addEventListener(
    'mousemove',
    _.throttle(announceCoordinates, 1000)
);

And just like that, our announcements only broadcast once per second.

How It Works

In our original event binding, we configured it to call announceCoordinates. But the second time around, we give it _.throttle(announceCoordinates, 1000). That looks more like we’re calling a function than pointing at one, doesn’t it?

In fact, we are calling throttle here, passing into it two parameters: our function name and the throttle time in milliseconds. It then does its magic and ultimately returns a function. It’s that resulting function that our event binding registers.

Take a look at the source code for throttle and find where it returns the new function. (For the sake of simplicity, the options param and associated logic have been removed.)

_.throttle = function (func, wait) {
    var context, args, result;
    var timeout = null;
    var previous = 0;
    
    var later = function () {
        previous = _.now();
        timeout = null;
        result = func.apply(context, args);
        if (!timeout) context = args = null;
    };
    return function () {
        var now = _.now();
        if (!previous) previous = now;
        var remaining = wait - (now - previous);
        context = this;
        args = arguments;
        if (remaining <= 0 || remaining > wait) {
            if (timeout) {
                clearTimeout(timeout);
                timeout = null;
            }
            previous = now;
            result = func.apply(context, args);
            if (!timeout) context = args = null;
        } else if (!timeout) {
            timeout = setTimeout(later, remaining);
        }
        return result;
    };
};

Yup, line 12.

Calling throttle in essence configures a brand new function that wraps around our original one. This new function is what’s registered to the event handler. And yes, the returned function will get called constantly by our friend mousemove. But dutifully, it protects our logging function and keeps track of when it should fire.

Which leads to …

The Hardest Part

We see above setTimeout is being used inside throttle. Anyone familiar with JavaScript has probably used it at some point, too. “Run this code 5000ms from now!!”, they likely exclaim.

setTimeout by itself can’t rate limit — it would simply delay the inevitable flood of function calls. But in the new wrapper around our function, things get clever. As it gets called over and over and over again, it goes through this routine:

  • Check how much time has passed
  • If enough time has passed, call our function ❤
  • If we still need to wait, set a reminder called later. That’s nothing more than our function in a setTimeout call. It only sets one of these! If that reminder already exists, nothing happens.

Eventually, our function gets called one of two ways:

  • Automatically, by the later reminder, or
  • Directly, if the timing is just right (line 18). And if that happens, the reminder is cleared.

And around and around we go.

Extra Credit

  • Use console.log or web inspector breakpoints to watch the function as it works.
  • Check out the annotated source to see about that mysterious options parameter.
  • Try rewriting throttle in the lovely new ES6 syntax.
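
As a head start on that last exercise, here’s a minimal ES6 sketch of the simplified throttle above (no options handling, and untested against underscore’s own test suite):

const throttle = (func, wait) => {
    let timeout = null;
    let previous = 0;
    let lastThis, lastArgs;

    // Fires the wrapped function and resets the timer state
    const later = () => {
        previous = Date.now();
        timeout = null;
        func.apply(lastThis, lastArgs);
    };

    return function (...args) {
        const now = Date.now();
        if (!previous) previous = now;
        const remaining = wait - (now - previous);
        // Track the latest call's context and arguments, just like the original
        lastThis = this;
        lastArgs = args;
        if (remaining <= 0 || remaining > wait) {
            if (timeout) {
                clearTimeout(timeout);
                timeout = null;
            }
            previous = now;
            func.apply(lastThis, lastArgs);
        } else if (!timeout) {
            timeout = setTimeout(later, remaining);
        }
    };
};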

Originally posted on the author’s personal Medium. Reposted here with permission.

Streamlined, Manual JSON Parsing in Swift

There are plenty of articles out there about how to parse JSON in Swift and what libraries to use. Apple even posted one themselves as I was finishing up this post. What I’d like to talk about here is the way the uShip iOS app handles JSON parsing, which is a variation of Apple’s approach.

Sharing maintainable Swift code is a Challenge

Since its introduction just over two years ago, Apple’s Swift programming language has shown itself to be a language that can be really fun to work with.   uShip’s iOS team chose to adopt the language very early on and in the process has had to learn to deal with some interesting challenges as a result.

One particular challenge comes with adopting third-party libraries. If you’re working with Swift, one thing to consider in adopting a library is how frequently such libraries are kept up to date. It’s important to consider how bad it may be if these libraries stop compiling when you have to move on to a new version of Xcode. And unlike some other development environments, it is extremely challenging to produce and distribute reusable pre-compiled libraries. Instead, it is common for shared code modules to be distributed as source code for others to compile themselves. Even if you use CocoaPods, you’re still having to compile the pods yourself. Another dependency management system, Carthage, claims to support shared binary libraries, but until very recently this was only true if you were sharing the binaries with yourself on a single machine. This all becomes a pretty big issue when the actual source code language is changing drastically over time.

With all this in mind, sometimes it can be a big timesaver to replace a third-party library we’re using with simpler, in-house code that we can easily maintain ourselves.  One place we’ve chosen to do this is in parsing JSON.

The iOS SDK doesn’t parse JSON for you

One thing that may surprise developers from other languages is that there’s no built-in way to instantly parse JSON objects into Data Transfer Objects (strongly typed data structures).  There ARE some third party libraries out there for doing this:  SwiftyJSON, and dankogai’s swift-json, among others.  But as mentioned above, if you’re trying to avoid depending on 3rd party code, it’s worth considering doing it yourself.

As it turns out JSON parsing is not such a bad candidate for a more manual approach.  Throughout the rest of this article, I’ll be sharing with you the technique we use in the uShip app for taking raw JSON data and converting it into strongly typed, Swift structs.  These structs clearly expose the structure and meaning of data from a given endpoint.  They are also easily composable and reusable with other similar endpoints, and can even help you build in sensible default values for missing values in a structure.
How we set up our Swift JSON parsing
Let’s step through the approach the uShip app is currently using.

Create A DTO Protocol

The first major piece of the puzzle was in creating a special protocol for all of our DTOs (Data Transfer Objects) to adopt, which simply specifies that these DTOs must have a constructor that allows them to be built with one of the basic JSON data types (Dictionary, Array, String, Number or Bool).  At uShip all of our APIs provide NSDictionary or NSArray objects, so we’ll focus on those.
public enum JSONType {

    case array(array: [AnyObject])
    case dictionary(dictionary: [String: AnyObject])

    //…

    public static func create(jsonObject: AnyObject?) -> JSONType {

        //…

    }
}

public protocol JSONObjectConvertable {
    init?(JSON: JSONType?)
}
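The body of create(jsonObject:) is elided above, but its job is simply to wrap a raw Foundation object in the matching JSONType case. Here is a minimal sketch of one plausible implementation, assuming a hypothetical invalid catch-all case standing in for whatever the //… hides:

public enum JSONType {

    case array(array: [AnyObject])
    case dictionary(dictionary: [String: AnyObject])
    case invalid //hypothetical catch-all; the real enum elides its remaining cases

    public static func create(jsonObject: AnyObject?) -> JSONType {
        //a JSON object is either a dictionary, an array,
        //or something this sketch doesn't model
        if let dictionary = jsonObject as? [String: AnyObject] {
            return .dictionary(dictionary: dictionary)
        }
        if let array = jsonObject as? [AnyObject] {
            return .array(array: array)
        }
        return .invalid
    }
}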
Create Collection Extensions
The second part was creating a set of extensions on the Swift Dictionary and Array types. These extensions have a set of functions that accept a key or index into the collection and return the strongly typed value associated with it. Through the power of Swift generics, we're able to give all of these functions the exact same name, so all you need to know in order to use them is that one name. We also created one additional override of this function that grabs a collection value and returns it as a type conforming to our DTO protocol (described in the previous section, and sketched later in this post). For example:
public extension Dictionary where Key: ExpressibleByStringLiteral, Value: AnyObject
{
    //looks for a value with the given key
    //if the value exists with the expected type, returns the value as that type
    //returns nil otherwise
    public func jsonValue<T>(_ key: String) -> T?
    {
        return (self[key as! Key] as? T)
    }

    //…

}
If you're not used to Swift generics, that code may be a little difficult to wrap your head around. The function jsonValue<T>(_:) works based on what you assign its return value to. If you assign the result to a String, it will return the dictionary value if that value actually is a string; if it isn't, it will return nil. If you instead assign the result to an NSNumber, it will only return the value if it actually is an NSNumber. If we want to pull out values of a non-object, primitive type like Float or UInt, we need more specialized overrides of this function:
public func jsonValue(_ key: String) -> Float?
{
    return (self[key as! Key] as? NSNumber)?.floatValue
}

public func jsonValue(_ key: String) -> UInt?
{
    return (self[key as! Key] as? NSNumber)?.uintValue
}
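To make that return-type-driven dispatch concrete, here is a small usage example. The dictionary and its keys are made up for illustration, not taken from a real uShip endpoint:

let dictionary: [String: AnyObject] = [
    "username": "marsfan42" as AnyObject,
    "rating": 4.5 as AnyObject
]

let username: String? = dictionary.jsonValue("username") //"marsfan42"
let rating: Float? = dictionary.jsonValue("rating")      //4.5, via the Float override
let oops: UInt? = dictionary.jsonValue("username")       //nil: the value isn't an NSNumber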
Create DTO Types
Finally, we combine the first two pieces within a concrete DTO designed to mirror the expected contents of a JSON object. We do this by creating a struct, having it conform to our DTO protocol, and then, in the protocol-required initializer, using our collection extensions to parse whatever data is passed in.
struct User : JSONObjectConvertable {

    var id : UInt?
    var username : String?

    init?(JSON: JSONType?) {

        guard let JSON = JSON else { return nil }
        guard case .dictionary(let dictionary) = JSON else { return nil }

        id = dictionary.jsonValue("id")
        username = dictionary.jsonValue("username")

    }
}

This shows everything really coming together. Our User DTO adopts the custom JSONObjectConvertable protocol, so it must implement the required initializer. In that initializer, we first ensure that the JSON object we build from is of the expected dictionary type. Finally, we populate the "id" and "username" properties with our jsonValue extension function. Each call to jsonValue resolves to a different version of the function, because we have one version that handles String optionals and one that handles UInt optionals.
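The additional override mentioned back in the collection-extensions section, the one returning a type that conforms to our DTO protocol, is also what makes these structs composable. Below is a sketch of one plausible version of that override, plus a hypothetical Shipment DTO that nests a User. Neither appears in the article's own code, so treat the names as illustrative:

public extension Dictionary where Key: ExpressibleByStringLiteral, Value: AnyObject
{
    //sketch: pulls a nested collection out of the dictionary and hands it
    //to a DTO initializer by way of JSONType.create
    public func jsonValue<T: JSONObjectConvertable>(_ key: String) -> T?
    {
        return T(JSON: JSONType.create(jsonObject: self[key as! Key]))
    }
}

//a hypothetical DTO composing the User DTO from above
struct Shipment : JSONObjectConvertable {

    var title : String?
    var owner : User?

    init?(JSON: JSONType?) {

        guard let JSON = JSON else { return nil }
        guard case .dictionary(let dictionary) = JSON else { return nil }

        title = dictionary.jsonValue("title")
        owner = dictionary.jsonValue("owner") //selects the DTO-returning override

    }
}

Because the DTO-returning override is more constrained, Swift prefers it whenever the property's type conforms to the protocol, so nesting DTOs costs nothing extra at the call site.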
Use the DTOs
Once this DTO is set up, we can put it to use wherever we get data back from a JSON document.  Below we get data from an API endpoint and parse it using one of our DTO types:
let JSONObject = try? JSONSerialization.jsonObject(with: data, options: [])
let JSON = JSONType.create(jsonObject: JSONObject.map { $0 as AnyObject }) //bridge Swift's Any? to AnyObject?
let user = User(JSON: JSON)
That's it. And after you set up one endpoint, adding DTOs for more endpoints gets easier and easier. Most of the work is done by the collection extensions, which are reusable. Our actual networking code is a bit more complex than this, but explaining it is outside the scope of this particular article.
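As one example of that reuse, here is a sketch of a helper that maps a top-level JSON array into DTOs. The endpoint shape is assumed for illustration:

//a sketch, assuming an endpoint that returns a top-level array of user objects
func users(from data: Data) -> [User] {

    let JSONObject = try? JSONSerialization.jsonObject(with: data, options: [])
    let JSON = JSONType.create(jsonObject: JSONObject.map { $0 as AnyObject })

    guard case .array(let array) = JSON else { return [] }

    //flatMap drops any elements that fail to parse
    //(spelled compactMap in Swift 4.1 and later)
    return array.flatMap { User(JSON: JSONType.create(jsonObject: $0)) }
}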

For more details on the code, check out our sample project, which uses this technique to parse JSON data from NASA's API for downloading photos from the Curiosity Mars Rover.

And if you want to try running the project for yourself, you might even see some cool Martian landscapes!

Or Mars rover selfies.

The post Streamlined, Manual JSON Parsing in Swift appeared first on The uShip Blog.

]]>
https://ushipblogsubd.wpengine.com/shipping-code/streamlined-manual-json-parsing-swift/feed/ 1
What We’re Reading https://ushipblogsubd.wpengine.com/shipping-code/what-we-are-reading-1/ https://ushipblogsubd.wpengine.com/shipping-code/what-we-are-reading-1/#respond Tue, 20 Sep 2016 16:24:19 +0000 https://ushipblogsubd.wpengine.com/?p=6419 uShip Engineering values a culture of continual learning. A degree may land you an entry level position, but in software development especially, education is an ongoing process in both formal and informal settings. As such, we implore our team members to continually stretch the bounds of their knowledge, and we love to dive into the... Read More

The post What We’re Reading appeared first on The uShip Blog.

]]>
uShip Engineering values a culture of continual learning. A degree may land you an entry-level position, but in software development especially, education is an ongoing process in both formal and informal settings. As such, we encourage our team members to continually stretch the bounds of their knowledge, and we love to dive into the things teammates find interesting enough to share. Here's a sample of what we've come across in the last month.

Jacob Calder, Developer (Payments)

Why Uber Switched to MySQL from Postgres
A technical explanation of why Uber switched, including discussion of the architecture of both databases. The Postgres community has responded to this here and here.
A Look at How Postgres Executes a Tiny Join
A technical explanation of what actually happens when you perform a join in Postgres. A great way to move beyond knowing how to use a database and start building an understanding of how databases work.

Brent Lewis, Developer (Platform, Front-End specialization)

React: Mixins Considered Harmful
An early look at a potential anti-pattern in React. Heeding this warning is an insightful way to ensure new React code stays maintainable.
How Becoming a Pilot Made Me a Better Programmer
When critical issues appear mid-flight, being able to both read and cross-check instrumentation will save your life – valuable lessons for web development production systems.

Zack Whipkey, Developer (Platform, Back-End specialization)

Influence: Science and Practice
If you work with vendors of third-party tools, it's good to be able to recognize when someone is attempting to sell you something without sound reasoning.
Jeff Atwood – They Have to Be Monsters
A deep look into the ugliness of communication on the internet, especially in public-facing spaces. If you ever write a blog or get into public speaking in cyberspace, it's best to be prepared.

Matt Hayes, Sr. Developer (iOS)

Idea Flow
When development best practices fail, what can you do?  See how one team used detailed metrics and reporting to solve the problems that fell through the cracks.
iWoz : Computer Geek to Cult Icon
The story of Steve Wozniak’s career, in his own words.  From building his first computer to the Apple I to inventing the universal remote control, this gives an interesting perspective on the early days in Silicon Valley and one pretty darn influential hardware engineer.

Shaun Martin, Director of Development

How To Empower Your Employees To Act Like Entrepreneurs
I’m a big fan of the Lean Startup principles and practices, and as uShip grows I want to make sure we maintain the ability to move quickly, take calculated risks and continue on the path to disrupt an outdated industry.
Ego is the Enemy (book)
This is the best book I've read in years. Ryan Holiday is a former assistant to Robert Greene (48 Laws of Power, 33 Strategies of War) and a student of Stoicism. In his fourth book, he shares historical and personal examples of how an unchecked sense of self-importance can cripple the execution of your purpose and mission. Examples include Napoleon, Genghis Khan, Steve Jobs, Bill Belichick, Dov Charney, and John DeLorean, among many others.
The Pragmatic Programmer Quick Reference Guide
Not so much “reading” as it is “constantly referencing”. I refer to a handful of items from this list every week to do a health check on our teams and ensure they’re constantly striving to improve.

The post What We’re Reading appeared first on The uShip Blog.

]]>
https://ushipblogsubd.wpengine.com/shipping-code/what-we-are-reading-1/feed/ 0
Visual Studio+ReSharper-level Productivity in VSCode https://ushipblogsubd.wpengine.com/shipping-code/visual-studioresharper-level-productivity-in-vscode/ https://ushipblogsubd.wpengine.com/shipping-code/visual-studioresharper-level-productivity-in-vscode/#comments Tue, 06 Sep 2016 14:18:04 +0000 https://ushipblogsubd.wpengine.com/?p=6350 Update 2017-05-22: This post originally written for a project.json .NET Core project. It has been edited for a .csproj .NET Core project. Visual Studio Code (aka VSCode) is a lightweight text editor from Microsoft. Many people think just because it is a “text-editor”, they will be missing the features they are used to from an... Read More

The post Visual Studio+ReSharper-level Productivity in VSCode appeared first on The uShip Blog.

]]>
Update 2017-05-22: This post was originally written for a project.json .NET Core project. It has been edited for a .csproj .NET Core project.

Visual Studio Code (aka VSCode) is a lightweight text editor from Microsoft. Many people assume that because it is "just a text editor," they will miss the features they are used to from an IDE like Visual Studio. With the proper configuration, VSCode can be a very powerful tool.

Setup

VSCode doesn't come with the tools necessary to build .NET Core projects by default. The setup below, along with VSCode itself and the .NET Core SDK, gets you the editor, compiler, and extension that bring you closer to an IDE experience.

To install an extension, open the Command Palette (cmd+shift+p), remove the >, and run ext install csharp.

Note: While this tutorial is cross-platform , all given commands are using Mac OS X key bindings. For Windows and Linux, replace cmd with ctrl.

Key Bindings

Command Palette

The most important key binding in VSCode is cmd+shift+p, which brings up the Command Palette, similar to Sublime Text. Why is it so important? Hitting those keys brings up a search box that allows you to start typing a command like “ext” for “Extensions: Install Extensions” or “build” for “Tasks: Run Build Task”.

Shell

You will frequently need to run shell commands within VSCode. ctrl+` toggles an in-editor shell.

ReSharper Bindings

Where would each of us be without alt+enter, ReSharper's quick fix and context actions key binding? Just because you don't have ReSharper doesn't mean your life is over (even though some people might think that). Common ReSharper operations are supported in VSCode, and they can be bound to custom key bindings, which lets us roughly mirror ReSharper's shortcuts in VSCode. Below are the most common ReSharper key bindings I use. To edit your own bindings, use the Command Palette to search for "Preferences: Open Keyboard Shortcuts".

[
    { "key": "alt+enter",       "command": "editor.action.quickFix",                "when": "editorTextFocus" },
    { "key": "cmd+b",           "command": "editor.action.goToDeclaration",         "when": "editorTextFocus" },
    { "key": "alt+f7",          "command": "editor.action.referenceSearch.trigger", "when": "editorTextFocus" },
    { "key": "cmd+shift+alt+n", "command": "workbench.action.showAllSymbols" },
    { "key": "cmd+n",           "command": "workbench.action.quickOpen" },
    { "key": "cmd+shift+n",     "command": "workbench.action.quickOpen" },
    { "key": "cmd+f12",         "command": "workbench.action.gotoSymbol" },
    { "key": "cmd+t l",         "command": "workbench.action.tasks.test" },
    { "key": "cmd+p",           "command": "editor.action.triggerParameterHints" }
]
Command             ReSharper          VSCode default
Quick Fix           alt+enter          cmd+.
Go to anything      cmd+n              cmd+p
Go to symbol        cmd+shift+alt+n    cmd+t
Go to declaration   cmd+b              f12
Go to file          cmd+n              cmd+p
Go to file member   cmd+f12            shift+cmd+o
Parameter info      cmd+p              shift+cmd+space
Find usages         alt+f7             shift+f12
Run all tests       cmd+t l            N/A

VSCode key bindings reference: https://code.visualstudio.com/docs/customization/keybindings
ReSharper key bindings reference: https://www.jetbrains.com/resharper/docs/ReSharper_DefaultKeymap_IDEAscheme.pdf

Building and Debugging .NET Core Applications

This is it. The moment you’ve been waiting for. Using VSCode as an IDE.

Creating a .NET Core Project

VSCode doesn’t have a UI to create new projects, since it is file and folder based. However, we can use the in-editor shell to create a project after creating a folder.

mkdir my_project
code my_project

Note that the above requires the code command to be on your PATH. You can set this up by searching for "PATH" in the Command Palette (on macOS, the command is "Shell Command: Install 'code' command in PATH").

Once we are in VSCode, run the following in the in-editor shell to create a new .NET Core command-line project:

dotnet new console
# Run `dotnet new -h` to see all of your available options.
# Other templates include web (ASP.NET Core MVC), classlib (a class library),
# and test projects such as xunit; exact template names vary by SDK version.

You might see: “Required assets to build and debug are missing from your project. Add them?” Select “Yes”.

Building and Debugging

The building and debugging key bindings are the typical bindings from Visual Studio.

To debug, set a breakpoint and hit F5. It’s really that easy!
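If you accepted the "Required assets" prompt earlier, the C# extension generated a .vscode/tasks.json with a build task wired to those Visual Studio bindings. For reference, a minimal version looks roughly like this; this is a sketch in the 0.1.0 task format used later in this post, and your generated file may differ slightly (for example, by passing the path to your .csproj in args):

{
    "version": "0.1.0",
    "command": "dotnet",
    "isShellCommand": true,
    "args": [],
    "tasks": [
        {
            "taskName": "build",
            "args": [],
            "isBuildCommand": true,
            "problemMatcher": "$msCompile"
        }
    ]
}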

NuGet

Now that we are able to debug a .NET Core application, let’s walk through the common task of adding a NuGet dependency.

VSCode doesn’t come with a NuGet client by default, so let’s install one via ext install vscode-nuget-package-manager.

To install a NuGet package:

  • Open the Command Palette and search for “NuGet Package Manager: Add Package” and hit enter
  • Enter a search term and hit enter (e.g. “json”)
  • Select a package from the list and hit enter (e.g. “Newtonsoft.Json”)
  • Select a version from the list and hit enter (e.g. 9.0.1)
  • Select a project to add the reference to
  • Run dotnet restore in the in-editor shell as prompted by the NuGet extension

Alternatively, you can use the dotnet NuGet commands directly:

dotnet add path/to/your_project package Example.Package -v 1.0

Be aware that not all NuGet packages are compatible with .NET Core. See this awesome list of packages that support .NET Core. Hint: your favorite packages are probably there.

Testing

“The code is not done until the tests run” – A person

Now that we have a .NET Core project with a NuGet package reference, let’s add a test.

Set up

We need to install the following NuGet packages:

  • NUnit
  • NUnit3TestAdapter, at least version 3.8.0-alpha1

The following task entry will have to be added to the "tasks" array in .vscode/tasks.json:

{
	"taskName": "test",
	"args": [],
	"isTestCommand": true,
	"problemMatcher": "$msCompile"
}

Note: With older, project.json-era tooling you may be able to run dotnet new -t xunittest or dotnet new -t nunittest, depending on which version of the dotnet CLI you have installed. The bleeding edge can be installed from the GitHub page.

Running the Test

Now we can add the simplest failing test:

using NUnit.Framework;

[TestFixture]
public class ProgramTests
{
	[Test]
	public void Should_fail()
	{
		Assert.Fail("This is a failure!");
	}
}

Now when we hit cmd+t l, our test will fail!

Debugging the Test

If you prefer to use xUnit (see: dotnet-test-xunit), you can easily run or debug a test by selecting the corresponding option in the editor. Unfortunately, debugging with NUnit isn't quite as simple yet and currently requires a convoluted process; see this GitHub issue that addresses it.

Conclusion

VSCode out-of-the-box won’t give you everything you need to be fully productive with .NET Core, but with some setup you should be up and running in no time. Do you have any VSCode tips and tricks of your own that I didn’t mention? Please comment below and share.


The post Visual Studio+ReSharper-level Productivity in VSCode appeared first on The uShip Blog.

]]>
https://ushipblogsubd.wpengine.com/shipping-code/visual-studioresharper-level-productivity-in-vscode/feed/ 3