Ivan Valle – The uShip Blog

Visual Studio+ReSharper-level Productivity in VSCode
September 6, 2016

Update 2017-05-22: This post was originally written for a project.json .NET Core project. It has been edited for a .csproj .NET Core project.

Visual Studio Code (aka VSCode) is a lightweight text editor from Microsoft. Many people assume that because it is a “text editor,” they will miss the features they are used to from an IDE like Visual Studio. With the proper configuration, VSCode can be a very powerful tool.

Setup

VSCode doesn’t come with the tools necessary to build .NET Core projects out of the box. The following setup will get you the editor, compiler, and extension you need for an experience closer to an IDE.

To install an extension, open the Command Palette (cmd+shift+p), remove the >, and run ext install csharp.

Note: While this tutorial is cross-platform, all given commands use Mac OS X key bindings. For Windows and Linux, replace cmd with ctrl.

Key Bindings

Command Palette

The most important key binding in VSCode is cmd+shift+p, which brings up the Command Palette, similar to Sublime Text. Why is it so important? Hitting those keys brings up a search box that allows you to start typing a command like “ext” for “Extensions: Install Extensions” or “build” for “Tasks: Run Build Task”.

Shell

You will frequently need to run shell commands within VSCode. ctrl+` toggles an in-editor shell.

ReSharper Bindings

Where would each of us be without alt+enter, ReSharper’s quick fix and context actions key binding? Just because you don’t have ReSharper doesn’t mean your life is over (even though some people might think so). Common ReSharper operations are supported in VSCode, and they can be bound to custom key bindings, which lets us roughly mirror ReSharper’s keymap in VSCode. Below are the most common ReSharper key bindings I use. To edit yours, search the Command Palette for “Preferences: Open Keyboard Shortcuts”.

[
	{ "key": "alt+enter",       "command": "editor.action.quickFix",
	                            "when": "editorTextFocus" },
	{ "key": "cmd+b",           "command": "editor.action.goToDeclaration",
	                            "when": "editorTextFocus" },
	{ "key": "alt+f7",          "command": "editor.action.referenceSearch.trigger",
	                            "when": "editorTextFocus" },
	{ "key": "cmd+shift+alt+n", "command": "workbench.action.showAllSymbols" },
	{ "key": "cmd+n",           "command": "workbench.action.quickOpen" },
	{ "key": "cmd+shift+n",     "command": "workbench.action.quickOpen" },
	{ "key": "cmd+f12",         "command": "workbench.action.gotoSymbol" },
	{ "key": "cmd+t l",         "command": "workbench.action.tasks.test" },
	{ "key": "cmd+p",           "command": "editor.action.triggerParameterHints" }
]
Command             ReSharper          VSCode default
Quick Fix           alt+enter          cmd+.
Go to anything      cmd+n              cmd+p
Go to symbol        cmd+shift+alt+n    cmd+t
Go to declaration   cmd+b              f12
Go to file          cmd+n              cmd+p
Go to file member   cmd+f12            shift+cmd+o
Parameter info      cmd+p              shift+cmd+space
Find usages         alt+f7             shift+f12
Run all tests       cmd+t l            N/A

VSCode key bindings reference: https://code.visualstudio.com/docs/customization/keybindings
ReSharper key bindings reference: https://www.jetbrains.com/resharper/docs/ReSharper_DefaultKeymap_IDEAscheme.pdf

Building and Debugging .NET Core Applications

This is it. The moment you’ve been waiting for. Using VSCode as an IDE.

Creating a .NET Core Project

VSCode doesn’t have a UI to create new projects, since it is file and folder based. However, we can use the in-editor shell to create a project after creating a folder.

mkdir my_project
code my_project

Note that the above requires code to be on your PATH. You can set this up by searching for “PATH” in the Command Palette and running “Shell Command: Install 'code' command in PATH”.
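
If you would rather set it up manually, the code CLI ships inside the application bundle. A minimal sketch for macOS, assuming the default install location:

export PATH="$PATH:/Applications/Visual Studio Code.app/Contents/Resources/app/bin"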

Once we are in VSCode, run the following in the in-editor shell to create a new .NET Core command line project:

dotnet new
# Run `dotnet new -h` to see your available options.
# Some templates that are available: Console (the default), Web (ASP.NET Core MVC), Lib (class library), xunittest and nunittest (XUnit and NUnit test projects)

You might see: “Required assets to build and debug are missing from your project. Add them?” Select “Yes”.

Building and Debugging

The building and debugging key bindings are the typical bindings from Visual Studio: cmd+shift+b runs the build task, and F5 starts the debugger.

To debug, set a breakpoint and hit F5. It’s really that easy!
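
Both are driven by the files generated when you accepted the “Required assets” prompt. For reference, here is a sketch of roughly what .vscode/tasks.json looked like for this generation of the tooling (your generated file may differ):

{
	"version": "0.1.0",
	"command": "dotnet",
	"isShellCommand": true,
	"args": [],
	"tasks": [
		{
			"taskName": "build",
			"args": [],
			"isBuildCommand": true,
			"problemMatcher": "$msCompile"
		}
	]
}

Note the top-level "command": "dotnet"; each task name becomes a dotnet subcommand, which we will take advantage of later when adding a test task.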

NuGet

Now that we are able to debug a .NET Core application, let’s walk through the common task of adding a NuGet dependency.

VSCode doesn’t come with a NuGet client by default, so let’s install one via ext install vscode-nuget-package-manager.

To install a NuGet package:

  • Open the Command Palette and search for “NuGet Package Manager: Add Package” and hit enter
  • Enter a search term and hit enter (e.g. “json”)
  • Select a package from the list and hit enter (e.g. “Newtonsoft.Json”)
  • Select a version from the list and hit enter (e.g. 9.0.1)
  • Select a project to add the reference to
  • Run dotnet restore in the in-editor shell as prompted by the NuGet extension

Alternatively, you can use the dotnet NuGet commands directly:

dotnet add path/to/your_project package Example.Package -v 1.0
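
Under the hood, this simply adds a PackageReference to your project file. Roughly, using the placeholder package from the command above:

<ItemGroup>
  <PackageReference Include="Example.Package" Version="1.0" />
</ItemGroup>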

Be aware that not all NuGet packages are compatible with .NET Core. See this awesome list of packages that support .NET Core. Hint: your favorite packages are probably there.

Testing

“The code is not done until the tests run” – A person

Now that we have a .NET Core project with a NuGet package reference, let’s add a test.

Setup

We need to install the following NuGet packages (the CLI commands follow the list):

  • NUnit
  • NUnit3TestAdapter, at least version 3.8.0-alpha1
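
Using the dotnet CLI in the in-editor shell, for example:

dotnet add package NUnit
dotnet add package NUnit3TestAdapter -v 3.8.0-alpha1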

The following task will have to be added to the "tasks" array in .vscode/tasks.json. Combined with the top-level "command": "dotnet" shown earlier, this makes the cmd+t l binding run dotnet test:

{
	"taskName": "test",
	"args": [],
	"isTestCommand": true,
	"problemMatcher": "$msCompile"
}

Note: You may be able to run dotnet new -t xunittest or dotnet new -t nunittest depending on what version of dotnet you have installed. The bleeding edge can be installed from the GitHub page.

Running the Test

Now we can add the simplest failing test:

using NUnit.Framework;

[TestFixture]
public class ProgramTests
{
	[Test]
	public void Should_fail()
	{
		Assert.Fail("This is a failure!");
	}
}

Now when we hit cmd+t l, our test will fail!

Debugging the Test

If you prefer to use xUnit (see: dotnet-test-xunit), you can easily run or debug a test by selecting the corresponding option in the editor. Unfortunately, debugging with NUnit isn’t quite as simple yet and currently requires a convoluted process; see the GitHub issue that tracks this.

Conclusion

VSCode out-of-the-box won’t give you everything you need to be fully productive with .NET Core, but with some setup you should be up and running in no time. Do you have any VSCode tips and tricks of your own that I didn’t mention? Please comment below and share.


Consul Clustering: Our Experience
July 13, 2016

Introduction

Consul is a distributed key-value store, heavily opinionated towards service discovery. If you’re not familiar with the basics of service discovery, Gabriel Schenker, who we’ve had the pleasure of working with, has an excellent introduction to service discovery. In this blog post, we will cover how we set up and expanded our Consul cluster. For more information on utilizing Consul for service discovery, see our blog post that uses Consul, consul-template, and nginx to load balance microservices. Additionally, HashiCorp has produced great documentation for Consul.

The official Consul documentation is excellent, but we had difficulty conceptualizing some of the higher-level concepts of clustered systems. Hopefully a real-world example of deploying and modifying a production Consul cluster can clear up any confusion.

Our Current Setup

[Figure: diagram of our Consul cluster]

  • 2 Linux servers running a Consul server in a Docker container, managed by Chef, an infrastructure automation tool
  • 2 Windows servers running a Consul server as a Windows service, managed by Chef
  • 1 Windows server running a Consul server as a Windows service, managed manually

When we initially created our Consul cluster, it was not recommended to run a Consul server on Windows in production. However, we decided to be adventurous and do it anyway. We have had no problems (that weren’t our fault) and were excited when the bogus warning message was removed.

For deciding how many servers you should have in your cluster, see this deployment table, summarized below. The recommended deployment is 3 or 5 servers, giving you a fault tolerance of 1 or 2 servers respectively. A fault tolerance of 2 allows 2 servers to go down without bringing down the cluster. 5 servers with a fault tolerance of 2 was the perfect number for our needs, but YMMV. If you ever lose more servers than your fault tolerance allows, you risk losing all data stored in the cluster, which could include registered services, key-value pairs, and other vital information. In our experience, it has been far easier to recover a bad cluster by recreating the entire cluster than by attempting to bring it back from the dead.
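
For reference, the core of that deployment table; quorum is a majority of servers, i.e. (n/2) + 1 rounded down:

Servers   Quorum   Fault tolerance
1         1        0
3         2        1
5         3        2
7         4        3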

Setting Up Your First Cluster

We’re going to start off by creating a 3-node cluster, the simplest deployment with a fault tolerance of at least 1.

Docker command for 1st node

docker run --name=consul -d -p 8300-8302:8300-8302 -p 8301:8301/udp -p 8302:8302/udp -p 8400:8400 -p 8500:8500 -p 8600:53/udp -h node1 gliderlabs/consul-server:0.6 -server -bootstrap-expect 3 -ui -advertise 192.168.99.101 -join node1 -join node2 -join node3

Docker command for 2nd node

docker run --name=consul -d -p 8300-8302:8300-8302 -p 8301:8301/udp -p 8302:8302/udp -p 8400:8400 -p 8500:8500 -p 8600:53/udp -h node2 gliderlabs/consul-server:0.6 -server -ui -advertise 192.168.99.102 -join node1 -join node2 -join node3

Docker command for 3rd node

docker run --name=consul -d -p 8300-8302:8300-8302 -p 8301:8301/udp -p 8302:8302/udp -p 8400:8400 -p 8500:8500 -p 8600:53/udp -h node3 gliderlabs/consul-server:0.6 -server -ui -advertise 192.168.99.103 -join node1 -join node2 -join node3

The following configuration differences are important to note:

  • -h NODE_NAME: The NODE_NAME is the hostname of the Docker container, which will be used to uniquely identify the node in the cluster. This is a Docker option, not a Consul option.
  • -advertise NODE_IP: NODE_IP is the IP address of the Docker host. In this example, we follow the pattern 192.168.99.10X, where X is the node’s number.
  • -bootstrap-expect N: N is the number of servers necessary to start an election of a cluster leader. A more detailed description is available here. In our implementation, node2 and node3 do not have the bootstrap-expect flag. node1, since it has a bootstrap-expect value of 3, will wait for there to be 3 nodes in the cluster before starting an election.

Notice that the -join options are identical for all the nodes (they even join themselves). The -join options are a list of the servers that the node can communicate with to join the cluster. While a node will be able to join the cluster by only joining to one existing node in the cluster, a symmetric join list provides for cleaner configuration.

Our cluster is now bootstrapped. Next, we recreate node1 so that its configuration is identical to the other nodes, i.e., without the bootstrap-expect flag.

Docker commands for removing bootstrap on 1st node

docker exec consul consul leave
docker rm consul
docker run --name=consul -d -p 8300-8302:8300-8302 -p 8301:8301/udp -p 8302:8302/udp -p 8400:8400 -p 8500:8500 -p 8600:53/udp -h node1 gliderlabs/consul-server:0.6 -server -ui -advertise 192.168.99.101 -join node1 -join node2 -join node3

Adding More Servers

We wanted to expand our original 3-node cluster into a 5-node cluster to increase our fault tolerance from 1 to 2. We added 2 more servers to our cluster by doing the following:

Docker command for 4th node

docker run --name=consul -d -p 8300-8302:8300-8302 -p 8301:8301/udp -p 8302:8302/udp -p 8400:8400 -p 8500:8500 -p 8600:53/udp -h node4 gliderlabs/consul-server:0.6 -server -ui -advertise 192.168.99.104 -join node1 -join node2 -join node3 -join node4 -join node5

Docker command for 5th node

docker run --name=consul -d -p 8300-8302:8300-8302 -p 8301:8301/udp -p 8302:8302/udp -p 8400:8400 -p 8500:8500 -p 8600:53/udp -h node5 gliderlabs/consul-server:0.6 -server -ui -advertise 192.168.99.105 -join node1 -join node2 -join node3 -join node4 -join node5

Notice that we do not need to include the bootstrap-expect option since we have already bootstrapped the cluster. We also don’t need to update the join list of the original 3 nodes, since new node information is gossiped between all the nodes in the cluster.

Removing Servers

If you want to shrink your cluster size, simply issue a consul leave command on the nodes you want to remove from the cluster.

To downgrade from 5 servers to 3 servers:

Docker command to run on 4th and 5th nodes

docker exec consul consul leave
docker rm consul

Note that this changes your cluster size to 3. This is NOT the equivalent of 2 servers going down in a 5-server cluster. The latter case is a 5-server cluster reaching its maximum fault tolerance, unable to run on just 2 servers. The former case of changing to a 3-server cluster still has a fault tolerance of 1, so running on 2 servers would be acceptable.

Planned vs Unplanned Leaving of the Cluster

Consul nodes can and will leave the cluster for various reasons. The table below describes the difference between a planned and an unplanned leave of the cluster.

[Table: planned vs. unplanned leaving of the cluster]

Note: Agents in the left state will be cleaned up by Consul in a process known as reaping, configured by default to occur every 72 hours.

Testing Things Going Wrong

From the perspective of the other nodes in the cluster, the following command simulates most things that can go wrong (VM dies, network partition, power outage, etc.):

docker rm -f consul

We learned the most about Consul by killing random nodes, guessing what had happened, and then investigating why we were almost always wrong. Such an exercise is vital for knowing how you would act in a disaster recovery situation.
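
While experimenting, it helps to check how the surviving nodes see the cluster. consul members reports each known node and its state (alive, left, or failed):

docker exec consul consul members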

Possible Pitfalls

The following are some issues we ran into with their solutions:

  • Chef check-ins. We use Chef to automate setting up our server infrastructure, including Consul. In order to not risk putting the Consul cluster into a bad state, we had to turn off the Chef automatic check-ins and manually manage the update of every Consul node. A check-in turning on a Consul node while it is not yet properly configured can easily ruin your cluster.
  • Consul node names. On Linux, we run Consul inside Docker containers. By default, Consul uses the hostname of the machine (or container) it is running in as its node name, and container hostnames, by default, are randomly generated by Docker. However, Docker allows you to specify the hostname of the container when running it via the -h option (which we used in our examples). We didn’t have this problem on Windows, since our Windows machines are nicely named (and we aren’t using Windows containers…yet).

Conclusion

Creating a Consul cluster can seem a little daunting at first, but once you familiarize yourself with some of the concepts, it will become a powerful asset of your infrastructure.

This blog post was written by both Ivan Valle and Jake Wilke.


Self-hosting a .NET API: Choosing between OWIN with ASP.NET Web API and ASP.NET Core MVC 1.0
April 29, 2016

Intro

At uShip, our monolithic web application and the services supporting it are written in ASP.NET MVC and Web API, and hosted on IIS. We have begun the journey of transitioning to microservices. One of the important decisions we had to make early on was the choice of hosting model for our microservice APIs. We opted for a self-hosted solution, where we don’t depend on the existence of IIS on a machine; instead, the application creates an instance of an HTTP server within its own process. Self-hosting also opens up the possibility of future Linux deployments. We invested a bit of time investigating OWIN and the new ASP.NET Core offering. Here’s what we found.

OWIN vs ASP.NET Core MVC 1.0

OWIN

OWIN is a specification that describes how web servers and web applications interact; it has been around for quite a while. Building an application on top of OWIN removes the dependency on IIS, allowing self-hosting. Once the application is self-hosted, wrapping it in a Windows service lets Windows help manage the application’s lifecycle.

ASP.NET Core MVC 1.0

ASP.NET Core MVC 1.0 (the framework formerly known as ASP.NET 5 MVC 6) is a new web framework by Microsoft. It is not a successor to ASP.NET Web API 2.2 or MVC 5, the web frameworks built for .NET Framework 4.6 (the latest version of the full .NET Framework). Rather, it is an alternative web framework for code that can run on .NET Core, a re-imagining of the .NET Framework that includes a subset of the full framework and is cross-platform. Web applications built with ASP.NET Core can run on Kestrel, an HTTP server built for ASP.NET Core that allows you to self-host your application.

Below are some of the reasons we chose self-hosting with OWIN instead of with Kestrel and ASP.NET Core:

  • Stability. The ASP.NET Core ecosystem as a whole is not yet stable. At the time of writing, ASP.NET Core is on RC1, but we have already seen code interface changes, project file changes, major tooling changes, and so on. The bleeding edge is too bloody for us to be productive in a codebase our size.
  • Seamless upgrade. .NET Framework 4.6 is the natural upgrade path for us. Our software is more than 10 years old; we have invested lots of time in learning various libraries and use them significantly throughout the entire codebase. A lot of these libraries (e.g., NHibernate) are not yet compatible with ASP.NET Core.
  • Maturity. Nothing beats software that has stood the test of time in production. Along with reliability, there is a plethora of online documentation for OWIN, plus tutorials by people who have already run into the problems that we will inevitably run into.

The below will help you implement your own OWIN self-hosted application. While the implementation is OWIN-specific, the overall idea of self-hosting is very similar and will be much easier to port to ASP.NET Core than an IIS deployment would be.

Terms To Be Familiar With

Below are some terms that you should be familiar with before moving on to the implementation:

  • OWIN: Open Web Interface for .NET. Acts as an adapter between web servers and web applications. This is a specification, not an implementation.
  • Middleware: “plugins” for OWIN, similar to an IHttpModule in IIS. If you are using Web API, middleware runs before the ASP.NET pipeline starts and after it finishes.
  • Katana: Microsoft’s implementation of OWIN, a collection of NuGet packages for developing OWIN applications. A breakdown of the NuGet packages and their purposes is shown in the following diagram.
    [Figure: map of the Katana NuGet packages]
  • Topshelf: An opinionated framework for easily developing Windows services.

Self-hosted Web API Hello, World! Windows service with OWIN and Topshelf

Note: All code is available on GitHub.

  • In Visual Studio, create a new Console Application project called “OwinHelloWorld”
  • Install the Microsoft.AspNet.WebApi.OwinSelfHost NuGet package
  • Install the Topshelf NuGet package
  • Add the following code:
    using Microsoft.Owin.Hosting;
    using Owin;
    using System;
    using System.Web.Http;
    using Topshelf;
    
    namespace OwinHelloWorld
    {
        public class Program
        {
            public static int Main(string[] args)
            {
                return (int) HostFactory.Run(x =>
                {
                    x.Service<OwinService>(s =>
                    {
                        s.ConstructUsing(() => new OwinService());
                        s.WhenStarted(service => service.Start());
                        s.WhenStopped(service => service.Stop());
                    });
                });
            }
        }
    
        public class OwinService
        {
            private IDisposable _webApp;
    
            public void Start()
            {
                _webApp = WebApp.Start<StartOwin>("http://localhost:9000");
            }
    
            public void Stop()
            {
                _webApp.Dispose();
            }
        }
    
        public class StartOwin
        {
            public void Configuration(IAppBuilder appBuilder)
            {
                var config = new HttpConfiguration();
                config.Routes.MapHttpRoute(
                    name: "DefaultApi",
                    routeTemplate: "api/{controller}/{id}",
                    defaults: new { id = RouteParameter.Optional }
                    );
    
                appBuilder.UseWebApi(config);
            }
        }
    
        public class HelloWorldController : ApiController
        {
            public string Get()
            {
                return "Hello, World!";
            }
        }
    }
  • Run the application one of the following ways:
    • Hit F5 in Visual Studio to debug
    • Run the exe to run the application as a regular process. The exe is usually located at SolutionRoot/OwinHelloWorld/bin/Debug/OwinHelloWorld.exe
    • Manage the application as a Windows service
      # Install and start the Windows service
      OwinHelloWorld.exe install start

      # Stop and uninstall the Windows service
      OwinHelloWorld.exe stop uninstall
  • Hit http://localhost:9000/api/helloworld in your browser

Gotchas Encountered While Switching from IIS to a Self-hosted Model

It would be a mistake to assume that you can simply port your IIS-based codebase over to a self-hosted model. Below are some gotchas that you may run into.

    • HttpContext.Current: This will be null. HttpContext is IIS-based and will not be set when self-hosting with OWIN. If you have any code that relies on HttpContext, HttpRequest, or HttpResponse, it will have to be rewritten to handle an HttpRequestMessage or HttpResponseMessage, the HTTP types provided by Web API. Fortunately, we still have access to CallContext, provided by .NET. This class can be used to provide per-request static semantics. We have written an OWIN middleware that gives us the request scope behavior of HttpContext.Current using CallContext (a sketch of registering it follows this list):
      using Microsoft.Owin;
      using System.Runtime.Remoting.Messaging;
      using System.Threading.Tasks;
      
      namespace OwinHelloWorld
      {
          /// <summary>
          /// Sets the current <see cref="IOwinContext"/> for later access via <see cref="OwinCallContext.Current"/>.
          /// Inspiration: https://github.com/neuecc/OwinRequestScopeContext
          /// </summary>
          public class OwinContextMiddleware : OwinMiddleware
          {
              public OwinContextMiddleware(OwinMiddleware next) : base(next)
              {
              }
      
              public override async Task Invoke(IOwinContext context)
              {
                  try
                  {
                      OwinCallContext.Set(context);
                      await Next.Invoke(context);
                  }
                  finally 
                  {
                      OwinCallContext.Remove(context);
                  }
              }
          }
      
          /// <summary>
          /// Helper class for setting and accessing the current <see cref="IOwinContext"/>
          /// </summary>
          public class OwinCallContext
          {
              private const string OwinContextKey = "owin.IOwinContext";
      
              public static IOwinContext Current
              {
                  get { return (IOwinContext) CallContext.LogicalGetData(OwinContextKey); }
              }
      
              public static void Set(IOwinContext context)
              {
                  CallContext.LogicalSetData(OwinContextKey, context);
              }
      
              public static void Remove(IOwinContext context)
              {
                  CallContext.FreeNamedDataSlot(OwinContextKey);
              }
          }
      }
    • HttpRequestMessage.Content.ReadAsStreamAsync().Result: IIS lets you read the request stream multiple times, but by default OWIN does not, nor does it let you reset the stream after reading it once. A common reason people need to read the stream twice is to log the incoming request before the body is deserialized by the framework. We have written an OWIN middleware that copies the request stream into an in-memory buffer to get around this:
      using Microsoft.Owin;
      using System.IO;
      using System.Threading.Tasks;
      
      namespace OwinHelloWorld
      {
          /// <summary>
          /// Buffers the request stream to allow for reading multiple times.
          /// The Katana (OWIN implementation) implementation of request streams
          /// is different than that of IIS.
          /// </summary>
          public class RequestBufferingMiddleware : OwinMiddleware
          {
              public RequestBufferingMiddleware(OwinMiddleware next)
                  : base(next)
              {
              }
      
              // Explanation of why this is necessary: http://stackoverflow.com/a/25607448/4780595
              // Implementation inspiration: http://stackoverflow.com/a/26216511/4780595
              public override Task Invoke(IOwinContext context)
              {
                  var requestStream = context.Request.Body;
                  var requestMemoryBuffer = new MemoryStream();
                  requestStream.CopyTo(requestMemoryBuffer);
                  requestMemoryBuffer.Seek(0, SeekOrigin.Begin);
      
                  context.Request.Body = requestMemoryBuffer;
      
                  return Next.Invoke(context);
              }
          }
      }
    • IIS-specific modules: When investigating the switch from IIS to self-hosted, we discovered we relied on ISAPI_Rewrite, an IIS module that rewrites URLs à la Apache’s .htaccess. If we wanted to keep such behavior, we would need to either write an OWIN middleware that does the same thing or have a reverse proxy do it.
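
For completeness, here is a sketch of wiring both middlewares above into the startup class from the Hello World example. The order matters: buffering must happen before anything downstream reads the request body.

using Owin;
using System.Web.Http;

namespace OwinHelloWorld
{
    public class StartOwin
    {
        public void Configuration(IAppBuilder appBuilder)
        {
            // Register the middlewares from the gotchas above, outermost first.
            appBuilder.Use<RequestBufferingMiddleware>();
            appBuilder.Use<OwinContextMiddleware>();

            var config = new HttpConfiguration();
            config.Routes.MapHttpRoute(
                name: "DefaultApi",
                routeTemplate: "api/{controller}/{id}",
                defaults: new { id = RouteParameter.Optional });

            // Web API runs last, after all of the OWIN middleware.
            appBuilder.UseWebApi(config);
        }
    }
}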

What the future holds

Once the ASP.NET Core MVC 1.0 ecosystem stabilizes, it may be a suitable alternative to building OWIN applications for most teams. But for now, writing a self-hosted application using OWIN might be the best choice for a pre-existing codebase. If you are one of the lucky few on a greenfield project, it is definitely worth building your application with Core in mind for an easy upgrade. For help determining whether your software can be ported to Core, see ApiPort.


Does your REST API need an SDK?
February 18, 2016

Introduction

When integrating with a platform that offers a REST API, a developer sometimes has the option of downloading a client library in their language of choice or writing the HTTP code themselves to integrate with the API directly. You as an API provider should decide early on whether you wish to offer SDKs to your customers.

REST, SOAP, and WSDLs

REST in its simplicity has generally been considered less of a barrier to entry than more cumbersome SOAP APIs, despite not having the assistance of a WSDL to aid in generating client code easily. One of the things that made REST simple was the flexibility in its design. As a consequence, for a long time there was not a common, standardized way to describe a REST API’s interface. Swagger has recently come to fill this void, even offering a tool to autogenerate client SDKs via swagger-codegen. However, these tools for REST APIs are fairly new. Hand-written SDKs have been offered by quite a few API providers for the past several years, their necessity varying from provider to provider.

Case study: Braintree

Braintree, a platform that makes it easy for your application to accept various forms of payment, does not offer a public REST API. Instead, they choose to deliver their platform through language-specific client libraries that they maintain. They have a wonderful write-up here. Things to note in the case of Braintree:

  • Their API is their business, and they need to do anything they can to make sure that clients can integrate easily.
  • They are in a space (payment processing) where security and application correctness are incredibly important.

For Braintree, SDKs make sense. They allow clients to integrate quickly with a complicated workflow and can abstract away things that the client doesn’t necessarily care about. For your API, it might also make sense to have an SDK if your API has non-trivial functionality such as a complicated authorization scheme, binary data serialization, request signing, etc. This is especially true if your clients do not need to customize any of this functionality.

Case study: uShip

We at uShip started our API platform with SOAP (which, to my surprise, looks like it is still in use here o_O). Eventually, we moved to a more RESTful approach, especially in anticipation of releasing native mobile apps. Things to note about our case:

  • Our mobile and web platforms are our business. A majority of external API integrations simply push customers to our platform.
  • Most integrations by third parties are limited in scope (targeted at a small subset of the markets we support) and therefore consume a very limited number of easy-to-consume REST APIs, which makes an SDK less of a necessity.

For uShip, SDKs do not make sense as a necessity yet. Integration against our API via the REST interface is quick and easy. For your API, an SDK does not make sense as a necessity if it would not offer much beyond simple HTTP bindings, especially if you don’t have the developer resources to maintain SDKs in terms of bug fixes, new features, and availability across many platforms. This isn’t to say that you should not offer an SDK: some form of example client interaction is always useful to a prospective integrator, even if that means a developer like me only ends up using your DTOs (which are easy enough to generate automatically).

Automatically generating SDKs

When writing an SDK by hand, most of your code usually ends up looking like the following:

public class NounClient
{
  private final String BASE_PATH = "https://api.example.com";
  private HttpClient _httpClient;

  public NounClient() {
    _httpClient = new HttpClient();
  }

  public Noun getNouns(String filter) {
    String resourcePath = "/nouns?filter=" + filter;
    HttpRequest request = new HttpRequest(BASE_PATH + resourcePath);
    HttpResponse response = _httpClient.execute(request);
    return response.getContent().deserialize(Noun.class);
  }
}

The resource names will differ, and you might add a couple of helper classes that let a client interact with your API more easily, but a lot of it will look like copy-paste. With hand-written SDKs, there is a huge amount of boilerplate code that adds very little value for consumers, which means lots of wasted development effort on your part. As developers, we know things like this can and should be automated.

With the aforementioned tool swagger-codegen, it is possible to take a swagger spec and automatically generate client libraries. The default template can get you pretty far (especially if you lack resources in a particular programming language), but the real power comes in being able to write custom templates to generate the boilerplate. This allows you to spend your time designing an SDK instead of dealing with boring HTTP code. For the longest time we didn’t have any sort of SDK, but having something autogenerated lets us dip our toes in the water. Now we have something that internal (or even external) projects can use to quickly build a prototype application.
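
As a concrete example, generating a C# client from a spec with the swagger-codegen CLI looks roughly like this (the spec URL and output path are placeholders; custom templates are supplied with the -t option):

java -jar swagger-codegen-cli.jar generate \
  -i https://api.example.com/swagger.json \
  -l csharp \
  -o ./generated/csharp-sdk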

Conclusion

Whether you want to offer an SDK depends on the experience you would like your clients to have, and that ultimately comes from how your API was designed; no SDK will make a poorly designed API easy to use. A chatty API leads to chatty code, or at the very least high latency across calls. Covering up a bad API with an SDK inherently makes the client complicated and invites bugs. Regardless of whether an SDK exists, good documentation with pertinent examples of common integration patterns is a must.


From Zero To Swagger: Retrofitting an Existing REST API to an API Specification Using Swagger
November 12, 2015

Introduction

Here at uShip, our web services have gone through quite a change over the last few years. We have gone from SOAP-based services to a RESTful API. Recently, we dipped our toes into Swagger, an API specification framework, to provide more value to our developers and external partners.

Why Swagger?

We originally started to experiment with Swagger because we heard great things about swagger-codegen, a tool that automatically generates client libraries from a Swagger specification. We had an internal SDK for consuming our own APIs that we built by hand in C#. Every time someone had to consume an internal API, someone had to add to this SDK and go through the code review process, as we do with every other line of code. After a while, we started to notice that the manually written code conformed to a pattern that could easily be reproduced by a machine. Combine that with the various other tools that integrate with Swagger specifications, and we just had to try it out!

How we did it

Because we have more than a handful of existing APIs, we thought that manually writing a Swagger specification by hand would not be worth our time. Luckily, we found Swashbuckle, a library that integrates with our .NET Web API implementation to automatically discover APIs and generate a spec. After getting the kinks out of that integration, we had a valid spec and were able to generate usable SDKs in a couple of languages. We were hooked!
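
For classic Web API, the Swashbuckle registration itself is small. A sketch, assuming Swashbuckle 5.x, with placeholder version and title values:

using Swashbuckle.Application;
using System.Web.Http;

public static class SwaggerConfig
{
    public static void Register(HttpConfiguration config)
    {
        config
            .EnableSwagger(c => c.SingleApiVersion("v2", "uShip API")) // spec served at /swagger/docs/v2
            .EnableSwaggerUi();                                        // UI served at /swagger/ui/index
    }
}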

What we learned

Going into this, we simply expected to make use of the tools that Swagger provides. It turns out that we actually learned quite a bit about our API and its implementation.

We weren’t consistent

  • We try to reuse as many objects in our API as possible. Because we’re in the shipping industry, one of our most common reusable inputs is an address that looks like the following:
    {
      "postalCode": "78731",
      "country": "US",
      "type": "Residence"
    }
    

    One of our resources was actually using the following for input:

    {
      "postalCode": "78731",
      "country": "US",
      "type": {
        "value": "Residence",
        "label": "Residence"
      }
    }
    

    This object does indeed exist in our API, but should have only been used for outputs. By the time we found this discrepancy, we already had too many clients consuming the resource to be able to change it.

    We have since added an integration test in our codebase that scans all APIs and verifies that only GETs and PUTs can use the output address.

  • We try very hard to make sure that our APIs and their implementations follow internal documented standards. What we failed to do was enforce all those standards via some form of automated testing. Here is an example:
    using System.Web.Http;
    
    namespace uShip.Api.Controllers
    {
        public class EstimatesController : ApiController
        {
        }
    }
    

    Above is the controller that receives requests for our POST /v2/estimate resource, an API that allows you to calculate a rough estimate of how much it would cost to ship anything from anywhere to anywhere. We try to follow the convention that controllers should be named directly after their resource. We slipped up in this case and named the controller EstimatesController instead of EstimateController.

    What does this incredibly minor inconsistency have to do with Swagger? By default, Swashbuckle will use controller names to generate a set of classes for a client library. Someone wanting to consume the POST /v2/estimate resource through the client library would have to do the following:

    var api = new EstimatesApi();
    api.EstimatesPost(/*POST body*/);
    

    This could confuse the client, especially in a dynamic language.

    Again, we have added an integration test that scans all APIs and verifies that routes match controller names. Since we don’t have to worry about breaking clients when we rename controllers, we were able to make these changes right away.

  • We never thought to follow a pattern when naming the server-side C# classes that are used for deserializing request bodies. If we had a resource called POST /v2/Nouns, we could have any of the following class names:
    • NounInput
    • NounModel
    • NounInputModel
    • PostNounModel
    • NounCreationRequest (not even kidding)
    • ReubenSandwichModel (alright, kidding a little bit on this one)

    The above is not only a nightmare for discoverability when investigating an existing API; it also makes for a terrible situation for Swashbuckle, which reuses the class names for the models it will use in the client libraries. While autocompletion in the IDE somewhat hides this problem, it’s still not very nice for the client to deal with.

    We haven’t written an integration test for this particular issue quite yet, but writing such a test (or anything like it) is trivial to do with Web API; a sketch of the style follows this list.
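
A minimal sketch of this style of convention test, using NUnit and reflection. The specific rule enforced here (request models live in a .Models.Requests namespace and must end in “InputModel”) is a hypothetical stand-in for whatever convention you document:

using System.Linq;
using NUnit.Framework;
using uShip.Api.Controllers;

[TestFixture]
public class RequestModelNamingTests
{
    [Test]
    public void Request_models_should_end_with_InputModel()
    {
        // Scan the API assembly for request models that break the naming rule,
        // so Swashbuckle generates predictable client-side class names.
        var offenders = typeof(EstimatesController).Assembly
            .GetTypes()
            .Where(t => t.Namespace != null && t.Namespace.EndsWith(".Models.Requests"))
            .Where(t => !t.Name.EndsWith("InputModel"))
            .Select(t => t.FullName)
            .ToList();

        Assert.IsEmpty(offenders, "Rename these types: " + string.Join(", ", offenders));
    }
}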

We should have started with an API specification

We wouldn’t have had any (or at least as many) of the mistakes above had we started with some form of an API specification. Having one source of truth for our API would have saved us many headaches compared to our collection of API “definitions” scattered throughout issue tracker tickets, acceptance tests, lacking documentation, and API developer knowledge.
When we started creating our API, API definition languages weren’t very feature-complete. Now, with Swagger being the official API description language of the Open API Initiative, we all have the tools necessary to do things right from the get-go.

Caveat

Even after we created something amazing, we realized that we didn’t create an API specification. What we created was a way for our API to produce a convenient byproduct. Any “specification” we automatically generated would have just been a self-fulfilling prophecy. APIs should be built to spec; specs should not be built from APIs (at least in the long term). We have toyed with the idea of taking more of a design-first approach to building APIs, especially as we start building out APIs for our new microservices. But currently, we are content with what automatically-generated, retrofitted Swagger has given us. You should give it a try!


uShip at HackTX Fall 2015
September 28, 2015

Here at uShip, we love hackathons. Recently, some of us went down to the University of Texas at Austin’s annual hackathon, HackTX, which hosted over 750 students from all over the state. As a company sponsor, we acted as mentors for the student hackers, helping them solve a wide range of problems, from general Android development to setting up a hosting provider and domain name. We even had a chance to talk about our API and watch students integrate with it. We had a blast and enjoyed working with such bright people. It’s great to see this kind of culture presented to developers at an early age, at such a fun event.

Peter Thai, a freshman computer science student at UT Austin, received a prize for hacking on our API.

[Photo: uShip at HackTX Fall 2015]

[Photo: Peter Thai receiving a prize for hacking on our API]

Do you love hackathons as much as we do? We’re hiring!

Implementing API Errors with Web API and FluentValidation
September 10, 2015

In a previous article, we talked about how APIs should return RESTful error responses so that API clients can act on them. There are plenty of articles like this one that cover how to integrate the FluentValidation framework into the Web API pipeline, so we won’t go into the plumbing details. Below is a simple implementation of the RESTful API error response model using these tools.

Validation Options

There are numerous options one has when choosing how to validate input using Web API. A classic option commonly used in Web API tutorials is using attributes for model validation. We started off going this route, but ran into a couple of issues:

  • Some of our validation rules depended on two adjacent properties. We found ourselves constantly overriding the default attribute validation behavior to account for this.
  • In many cases, our input models had shared nested models, but we needed different validation for the nested properties. Adding an attribute to the shared nested model did not allow it to be reused with different validation rules.

FluentValidation is a very flexible validation framework and is perfect for our needs.

WithState

FluentValidation provides an extension method, WithState, for use when building validation rules. It allows you to attach any context you wish to the current rule; whatever object you add to this context will be available to you when the rule fails. Let’s see it in action.

First, define an object that will hold the data we will need when validation fails:
public class ErrorState
{
    public ErrorCode ErrorCode { get; set; }
    public string DocumentationPath { get; set; }
    public string DeveloperMessageTemplate { get; set; }
    public string UserMessage { get; set; }
}

public enum ErrorCode
{
    None = 0,
    Required = 10271992,
    TooShort = 11051992
}

Next, define the validator that takes advantage of the WithState method and uses the aforementioned ErrorState object to encapsulate the type of validation failure:
public class UserInputModelValidator : AbstractValidator<UserInputModel>
{
    public UserInputModelValidator()
    {
        RuleFor(x => x.Username)
            .Must(x => x.Length >= 4)
            .When(x => x.Username != null)
            .WithState(x => new ErrorState
            {
                ErrorCode = ErrorCode.TooShort,
                DeveloperMessageTemplate = "{0} must be at least 4 characters",
                DocumentationPath = "/Usernames",
                UserMessage = "Please enter a username with at least 4 characters"
            });

        RuleFor(x => x.Address.ZipCode)
            .Must(x => x != null)
            .When(x => x.Address != null)
            .WithState(x => new ErrorState
            {
                ErrorCode = ErrorCode.Required,
                DeveloperMessageTemplate = "{0} is required",
                DocumentationPath = "/Addresses",
                UserMessage = "Please enter a Zip Code"
            });
    }
}

Returning a RESTful Response

Before we can return the validation’s result to the client, we must first map it over to our RESTful API error structure.

The following objects are code representations of the JSON that we will send back to the client:
public class ErrorsModel
{
    public IEnumerable<ErrorModel> Errors { get; set; }
}

public class ErrorModel
{
    public ErrorCode ErrorCode { get; set; }
    public string Field { get; set; }
    public string DeveloperMessage { get; set; }
    public string Documentation { get; set; }
    public string UserMessage { get; set; }
}
Notice how ErrorModel is very similar to ErrorState. ErrorsModel is our contract with the outside world: no matter how our validation implementation changes, this class must stay the same. Conversely, ErrorState is free to change as you improve your validation layer; you can add convenience methods, new enums, etc. without worrying about changing the JSON response to the client.

Once we run our validator, it is time for Web API to convert the result into something a client can consume. Below is code that can be placed in an ActionFilter:
private void ThrowFormattedApiResponse(ValidationResult validationResult)
{
    var errorsModel = new ErrorsModel();

    var formattedErrors = validationResult.Errors.Select(x =>
    {
        var errorModel = new ErrorModel();
        var errorState = x.CustomState as ErrorState;
        if (errorState != null)
        {
            errorModel.ErrorCode = errorState.ErrorCode;
            errorModel.Field = x.PropertyName;
            errorModel.Documentation = "https://developer.example.com/docs" + errorState.DocumentationPath;
            errorModel.DeveloperMessage = string.Format(errorState.DeveloperMessageTemplate, x.PropertyName);

            // Can be replaced by translating a localization key instead
            // of just mapping over a hardcoded message
            errorModel.UserMessage = errorState.UserMessage;
        }
        return errorModel;
    });
    errorsModel.Errors = formattedErrors;

    var responseMessage = new HttpResponseMessage(HttpStatusCode.BadRequest)
    {
        Content = new StringContent(JsonConvert.SerializeObject(errorsModel, Formatting.Indented))
    };
    throw new HttpResponseException(responseMessage);
}
Our filter is doing the following:

  • Mapping our ErrorState object inline to the ErrorModel contract that we will be serializing
  • Creating an HTTP response object with the serialized data, wrapping it in an HttpResponseException, and throwing it, which sends the response to the client

Now the client is ready to handle the nicely-structured API response.
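
For example, submitting a too-short username would produce a response along these lines (a sketch; property casing and enum serialization depend on your JsonConvert settings):

HTTP/1.1 400 Bad Request

{
  "Errors": [
    {
      "ErrorCode": 11051992,
      "Field": "Username",
      "DeveloperMessage": "Username must be at least 4 characters",
      "Documentation": "https://developer.example.com/docs/Usernames",
      "UserMessage": "Please enter a username with at least 4 characters"
    }
  ]
}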

Conclusion

FluentValidation is a powerful validation framework whose WithState method is a great entry point for custom logic including generating RESTful error responses. If you would like to see a full implementation, you can clone this GitHub repo. Keep in mind that this is a rough example and will most likely need to be modified to meet your exact needs and current infrastructure.


Actionable RESTful API Errors
August 27, 2015

No matter how hard your API clients try, they will eventually get errors back from your API. apigee has an excellent blog post describing why detailed error messages are incredibly important for API client developers when starting to consume your API. When designed and used appropriately, these errors can be immensely useful to the end users of API client applications.

The Simplest Errors: String Error Messages

HTTP/1.1 400 Bad Request
Content-Type: application/json

{
  "errors": [
    "Zip Code is required",
    "Username must be at least 4 characters"
  ]
}

Above is a common error response from an API. Although it is common, it is not ideal. In the case of form validation, clients may not be able to handle every error case themselves. To handle additional errors, they can catch 400 responses from the API and display the provided message.

[Screenshot: a sample Android application displaying the string error messages]

Pictured is a sample Android application consuming these simple errors. While this method of displaying errors works, it has its downsides:

  • The wording of the user-facing messages has to be chosen carefully to point the user in the right direction.
  • On a long form, users will also have to scroll back and forth between reading the error messages and correcting their mistakes.

More Advanced Errors: Actionable Error Objects

HTTP/1.1 400 Bad Request
Content-Type: application/json

{
  "errors": [
    {
      "errorCode": 10271992,
      "field": "Address.ZipCode",
      "developerMessage": "Address.ZipCode is required",
      "documentation": "https://developer.example.com/docs/Addresses",
      "userMessage": "Please enter a Zip Code"
    },
    {
      "errorCode": 11051992,
      "field": "Username",
      "developerMessage": "Username must be at least 4 characters",
      "documentation": "https://developer.example.com/docs/Usernames",
      "userMessage": "Please enter a username with at least 4 characters"
    }
  ]
}
Above is a much more detailed error response from an API. This JSON response format is ideal, since it gives the developer the power to do something with the errors.
Here’s a breakdown of a possible structure for actionable API errors:

  • ErrorCode: A unique code per type of error (required, too long, profanity filter, etc.).
  • Field: The field in the HTTP request that caused the error to be triggered. This could be a property on a JSON object for a POST request or a query parameter in a GET request. API clients would use this to bind errors back to their application.
  • DeveloperMessage: A plain English message that describes what the error is, aimed to help the developer fix errors in their consumption of the API.
  • Documentation: If applicable, a link to the API documentation that describes where the particular validation rule is defined. This could point to an index of errors, or to a section on the API resource being consumed.
  • UserMessage: A localized message that describes what the error is, to be presented in a UI aimed to help the end user of the application. If a client wanted to do so, they could take advantage of the ErrorCode and Field properties of an error to produce a custom error message that would override the message provided by the API.

Because the context of the source of the error is returned from the API, the API client can catch those 400 error responses and bind them to their UI to tell the user which fields specifically need to be modified.

[Screenshot: the same Android form binding the actionable errors to specific fields]

In this example, the client wanted to provide a very specific message about why a Zip Code is required to register on the application. To accomplish this, the client can parse the response from the API as shown above, looking for an error with the “required” error code (in our case, 10271992) and an Address.ZipCode field. The client still has the option of falling back to the “userMessage” field from the API, as in the username validation case above.
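
A minimal client-side sketch in C# with Json.NET; the models mirror the JSON above, and Json.NET's default case-insensitive matching maps the camelCase fields onto these properties. The override message and the showFieldError callback are hypothetical, client-specific details:

using System;
using Newtonsoft.Json;

public class ErrorModel
{
    public long ErrorCode { get; set; }
    public string Field { get; set; }
    public string UserMessage { get; set; }
}

public class ErrorsModel
{
    public ErrorModel[] Errors { get; set; }
}

public static class ErrorBinder
{
    private const long RequiredCode = 10271992; // the API's "required" error code

    // showFieldError stands in for whatever binds a message to a form field.
    public static void Bind(string responseBody, Action<string, string> showFieldError)
    {
        var parsed = JsonConvert.DeserializeObject<ErrorsModel>(responseBody);
        foreach (var error in parsed.Errors)
        {
            if (error.ErrorCode == RequiredCode && error.Field == "Address.ZipCode")
                // Override the API-provided message for this well-known case.
                showFieldError(error.Field, "A Zip Code is required to match you with nearby service providers.");
            else
                // Fall back to the API's userMessage everywhere else.
                showFieldError(error.Field, error.UserMessage);
        }
    }
}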

Implementation

The pattern outlined by the example JSON response in the previous section could easily be implemented in many languages and API frameworks. In a follow-up post, we will walk through a sample implementation in C# with ASP.NET Web API and FluentValidation.

Conclusion

Your API’s error responses should be designed alongside the rest of your resources. While returning simple string messages is a good starting point, you should always strive for complete error responses to make life easier for your API clients. Following these error response best practices both on the server and client side will ensure that even if the HTTP status code of an API call is not 200, your client will be OK.

The original post can be found on my personal site.
