.net – The uShip Blog
https://ushipblogsubd.wpengine.com

JavaScript for C# Developers
https://ushipblogsubd.wpengine.com/shipping-code/javascript-for-csharp-developers/
Thu, 17 Mar 2016

The post JavaScript for C# Developers appeared first on The uShip Blog.


Overheard at work today:

“…this is why I hate JavaScript so much.”

Sound like something you’d say? Instead of letting the hate flow through you, know that it doesn’t have to be like that. JavaScript is evolving quickly and picking up luxuries that C# has had for years.

Subtle Differences

Before I get into the cooler, newer parts of JavaScript, a few key differences from C# that you should really know about:

  • Equality checking. Use triple equals (===) for common, everyday equality checking (or !== for inequality). Avoid double equals (==) due to some hidden gotchas.
  • Variable declarations. Variables declared with var do not behave how you'd expect. They're function-scoped (and hoisted to the top of the function) rather than block-scoped like locals in C# and most other languages, e.g. vars created inside of for loops and if blocks are still visible beyond the curly braces. The 2015 edition introduces let, which is block-scoped and works much more like C#'s var.
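Both differences can be seen in a few lines (a quick sketch, runnable in any ES2015 environment):

```javascript
// Equality: == coerces types before comparing, === does not.
console.log(0 == "");   // true (!), the empty string coerces to 0
console.log(0 === "");  // false, values of different types are never ===

// Scoping: var is function-scoped and leaks out of blocks;
// let is block-scoped, like a local variable in C#.
for (var i = 0; i < 3; i++) {}
console.log(i);         // 3 (i is still visible here)

for (let j = 0; j < 3; j++) {}
// j is not visible here; referencing it throws a ReferenceError
```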

Language Features

JavaScript went several years between editions: the third edition was released in 1999, the fifth edition in 2009. Not anymore – the sixth edition was published in June 2015, and the committee now plans regular yearly releases. Some of the new features include:

    • LINQ-style methods. Code written with LINQ is generally more declarative and expressive than code that isn't, and the same style is easily within reach when writing JS. Similar functions exist on JS arrays; they're just named differently:
      • map instead of Select
      • filter instead of Where
      • sort instead of OrderBy
// C#
var philly = new MenuItem("philly", "fries", 10.99m);
var reuben = new MenuItem("reuben", "fries", 9.99m);
var pizza = new MenuItem("pizza", "salad", 16.99m);
var menu = new [] { philly, reuben, pizza };

var choices = menu
   .Where(x => x.Side == "fries")
   .OrderBy(x => x.Price)
   .Select(x => x.Name);

// choices => ["reuben", "philly"]
// JS
var philly = { name: "philly", side: "fries", price: 10.99 };
var reuben = { name: "reuben", side: "fries", price: 9.99 };
var pizza = { name: "pizza", side: "salad", price: 16.99 };
var menu = [philly, reuben, pizza];

var choices = menu
   .filter(x => x.side === "fries")
   .sort((x, y) => x.price - y.price)
   .map(x => x.name);

// choices => ["reuben", "philly"]
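One gotcha worth calling out: unlike LINQ's OrderBy, which returns a new sequence, Array.prototype.sort mutates the array in place, and without a comparator it compares elements as strings. A quick sketch:

```javascript
var prices = [10.99, 9.99, 16.99, 100];

prices.sort();
// prices is now [10.99, 100, 16.99, 9.99] -- string comparison!

var sorted = prices.slice().sort((a, b) => a - b);
// sorted => [9.99, 10.99, 16.99, 100] -- numeric, and on a copy
```

Passing `(a, b) => a - b` gives sort the negative/zero/positive number it expects, and slice() keeps the original array intact.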
    • Class syntax. Introduced in ES6, the class syntax makes creating classes look much more familiar to the usual C# style. There are still fundamental differences in how inheritance works in JS vs. C#, but a similar syntax will help smooth some of that over.
// C#
class Address {
    private readonly string _city;
    private readonly string _state;
    private readonly string _zip;

    public Address(string city, string state, string zip) {
        _city = city;
        _state = state;
        _zip = zip;
    }

    public string ToFormattedString() { 
        return _city + ", " + _state + " " + _zip;
    }
}
// JS
class Address {
    constructor(city, state, zip) {
        this.city = city;
        this.state = state;
        this.zip = zip;
    }

    toFormattedString() {
        return this.city + ", " + this.state + " " + this.zip;
    }
}
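Subclassing works through extends and super, which map closely to C#'s : base syntax. A sketch building on the Address class above (BusinessAddress is a hypothetical subclass):

```javascript
class Address {
    constructor(city, state, zip) {
        this.city = city;
        this.state = state;
        this.zip = zip;
    }

    toFormattedString() {
        return this.city + ", " + this.state + " " + this.zip;
    }
}

class BusinessAddress extends Address {
    constructor(company, city, state, zip) {
        super(city, state, zip); // like C#'s base(city, state, zip)
        this.company = company;
    }

    toFormattedString() {
        // super calls the base implementation, like base.ToFormattedString()
        return this.company + ", " + super.toFormattedString();
    }
}

var hq = new BusinessAddress("uShip", "Austin", "TX", "78704");
console.log(hq.toFormattedString()); // "uShip, Austin, TX 78704"
```

Keep in mind this is still prototype-based inheritance under the hood, not C#-style class inheritance.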
    • String interpolation. JavaScript's version of string interpolation uses backtick (`) characters and requires ES6 or later. Use it like so:
// C#
var number = 3;
var size = "large";
return $"I'll take {number} {size} pizzas, please";
// JS
var number = 3;
var size = "large";
return `I'll take ${number} ${size} pizzas, please`;
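Template literals also span multiple lines and accept arbitrary expressions, filling a role similar to C#'s verbatim interpolated strings. A small sketch (the values are made up):

```javascript
var number = 3;
var size = "large";
var price = 8.5;

// The literal keeps the embedded newline, and ${...} can hold any expression.
var order = `Order summary:
${number} ${size} pizzas at $${(number * price).toFixed(2)} total`;

console.log(order);
// Order summary:
// 3 large pizzas at $25.50 total
```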

Development Environment

The development environment and tools available for a language greatly affect your productivity and experience, irrespective of the language itself. Visual Studio, despite being a bit sluggish at times, is a great development environment to work in. Popular JavaScript IDEs tend to be less IDE and more text editor, which makes them feel quicker and more responsive.

  • Powerful IDE with community plugins. What makes VS even better is the plugin ecosystem, ranging from the essential ReSharper to the tiny Hide Main Menu (personal favorite). For JavaScript, Sublime is hugely popular, and more recently Atom by GitHub, which both offer a fantastic set of user-created packages and themes. There’s also the lightweight Visual Studio Code, which supports both C# and JS!
  • ReSharper-style suggestions. Even though JavaScript is an interpreted language, front end developers have recognized the value in catching potential errors before execution. In Sublime, adding the SublimeLinter package (plus a little bit of configuration) gives you static code analysis à la ReSharper. Examples include removing unused variables, removing unreachable code, and requiring a default case in switch statements.
  • Importing. Gone are the days when you had to include a multitude of scripts on the page in a particular order. Much as you reference other files in C#, you can use import statements in your JS files to reference other files, which are then bundled with a tool like webpack or Browserify. Throw NodeRequirer into the mix, an IntelliSense-like plugin for Sublime for finding files, and you'll feel right at home.
  • Package manager. NuGet is handy, but npm is handier.

Testing

JavaScript development is a wild west of roughly thrown together code, with undefined is not a function hiding under every rock, right? Wrong!

    • Unit testing. Popular testing frameworks include Jest and Mocha, with Karma as a kickass test runner – it even has the ability to rerun the test suite when a .js file changes! Test-driven development is popular in .NET communities, and JS developers are starting to embrace it – for example, this Redux tutorial is written in TDD style.
// C#
[Test]
public void Should_return_true_for_valid_prime_number()
{
   var isPrime = _classUnderTest.IsPrimeNumber(37);
   isPrime.Should().Be(true);
}
// JS
describe("isPrimeNumber", function () {
    it("should return true for valid prime number", function () {
        var isPrime = isPrimeNumber(37);
        expect(isPrime).to.be(true);
    });
});

Server-side

JavaScript has historically been known as a client-side language, but Node.js has completely changed that notion.

    • Server / API frameworks. With the addition of a middleware layer like Express or Hapi, you can quickly write robust, fully-featured server and API code.
// C#
public class SandwichesController : ApiController
{
   public ActionResult Index(int id)
   {
       // do stuff with the request and response
   }
}
// JS
var express = require("express");
var app = express();

app.get("/v2/sandwiches/:id", function (request, response) {
   var sandwichId = request.params.id;
   // do stuff with the request and response
});

Conclusion

JavaScript isn’t so bad, right?

Originally posted on the author’s personal blog.

From Zero To Swagger: Retrofitting an Existing REST API to an API Specification Using Swagger
https://ushipblogsubd.wpengine.com/shipping-code/from-zero-to-swagger-retrofitting-an-existing-rest-api-to-an-api-specification-using-swagger/
Thu, 12 Nov 2015

The post From Zero To Swagger: Retrofitting an Existing REST API to an API Specification Using Swagger appeared first on The uShip Blog.

Introduction

Here at uShip, our web services have gone through quite a change over the last few years. We have gone from SOAP based services to a RESTful API. Recently we dipped our toes into Swagger, an API specification framework, to provide more value to our developers and external partners.

Why Swagger?

We originally started to experiment with Swagger because we had heard great things about swagger-codegen, a tool that automatically generates client libraries from a Swagger specification. We had an internal SDK for consuming our own APIs that we had built by hand in C#. Every time a team needed to consume an internal API, someone had to extend this SDK and take the change through code review, as we do with every other line of code. After a while, we noticed that this hand-written code conformed to a pattern that could easily be reproduced by a machine. Combine that with the various other tools that integrate with Swagger specifications, and we just had to try it out!

How we did it

Because we have more than a handful of existing APIs, we thought that manually writing a Swagger specification by hand would not be worth our time. Luckily, we found Swashbuckle, a library that integrates with our .NET Web API implementation to automatically discover APIs and generate a spec. After getting the kinks out of that integration, we had a valid spec and were able to generate usable SDKs in a couple of languages. We were hooked!

What we learned

Going into this, we simply expected to make use of the tools that Swagger provides. It turns out that we actually learned quite a bit about our API and its implementation.

We weren’t consistent

  • We try to reuse as many objects in our API as possible. Because we’re in the shipping industry, one of our most common reusable inputs is an address that looks like the following:
    {
        "postalCode": "78731",
        "country": "US",
        "type": "Residence"
    }

    One of our resources was actually using the following for input:

    {
        "postalCode": "78731",
        "country": "US",
        "type": {
            "value": "Residence",
            "label": "Residence"
        }
    }

    This object does indeed exist in our API, but should have only been used for outputs. By the time we found this discrepancy, we already had too many clients consuming the resource to be able to change it.

    We have since added an integration test in our codebase that scans all APIs and verifies that only GETs and PUTs can use the output address.

  • We try very hard to make sure that our APIs and their implementations follow internal documented standards. What we failed to do was enforce all those standards via some form of automated testing. Here is an example:
    using System.Web.Http;

    namespace uShip.Api.Controllers
    {
        public class EstimatesController : ApiController
        {
        }
    }

    Above is the controller that receives requests for our POST /v2/estimate resource, an API that calculates a rough estimate of how much it would cost to ship anything from anywhere to anywhere. We try to follow the convention that controllers are named directly after their resource. We slipped up in this case and named the controller EstimatesController instead of EstimateController.

    What does this incredibly minor inconsistency have to do with Swagger? By default, Swashbuckle will use controller names to generate a set of classes for a client library. Someone wanting to consume the POST /v2/estimate resource through the client library would have to do the following:

    var api = new EstimatesApi();
    api.EstimatesPost(/*POST body*/);
    

    This could confuse the client, especially in a dynamic language.

    Again, we have added an integration test that scans all APIs and verifies that routes match controller names. Since we don’t have to worry about breaking clients when we rename controllers, we were able to make these changes right away.

  • We never thought to follow a pattern when naming the server-side C# classes that are used for deserializing request bodies. If we had a resource called POST /v2/Nouns, we could have any of the following class names:
    • NounInput
    • NounModel
    • NounInputModel
    • PostNounModel
    • NounCreationRequest (not even kidding)
    • ReubenSandwichModel (alright, kidding a little bit on this one)

    The above is not only a nightmare for discoverability when investigating an existing API, it also makes for a terrible situation for Swashbuckle. Swashbuckle reuses the class names for the models it will use in the client libraries. While autocompletion in the IDE kind of hides this problem, it’s still not very nice for the client to deal with.

    We haven’t written an integration test for this particular issue quite yet, but writing such a test or anything like it is trivial to do with Web API.

We should have started with an API specification

We wouldn’t have had any (or at least as many) of the mistakes as we did above had we started with some form of an API specification. Having one source of truth for our API would have saved us so much headache when compared to our collection of API “definitions” scattered throughout our issue tracker tickets, acceptance tests, lacking documentation, and API developer knowledge.
When we started creating our API, API definition languages weren't very feature-complete. Now, with Swagger being the official API description language of the Open API Initiative, we all have the tools necessary to do things right from the get-go.

Caveat

Even after we created something amazing, we realized that we didn’t create an API specification. What we created was a way for our API to produce a convenient byproduct. Any “specification” we automatically generated would have just been a self-fulfilling prophecy. APIs should be built to spec; specs should not be built from APIs (at least in the long term). We have toyed with the idea of taking more of a design-first approach to building APIs, especially as we start building out APIs for our new microservices. But currently, we are content with what automatically-generated, retrofitted Swagger has given us. You should give it a try!


.NET Web Applications Running in Docker
https://ushipblogsubd.wpengine.com/shipping-code/net-web-applications-running-docker/
Thu, 20 Aug 2015

The post .NET Web Applications Running in Docker appeared first on The uShip Blog.

Here at uShip, we love to try interesting things during our hackathons. Recently, Greg Walker and I decided to try to get one of our front-end solutions up and running in a docker container. Since docker containers have to run on Linux, that meant getting the project running on Mono first.

Installing Mono

To start, we set up an Ubuntu 14.04 virtual machine. The first thing we did once the box was set up was install MonoDevelop, which installed all the Mono dependencies for us. I recommend following the installation instructions provided by the Mono Project: the latest version of Mono in Ubuntu's repositories at the time of this post (3.2.8) was out of date and caused additional problems. Use at least Mono version 4.0.2 and MonoDevelop version 5.9.5. Once you have the official Mono repositories set up, you can install MonoDevelop with the command sudo apt-get update && sudo apt-get install monodevelop.

Building in MonoDevelop

The first step we took was to try to clone our repository in Linux and attempt to build it in MonoDevelop. Unfortunately, one of our test projects refused to load because the project type wasn’t supported. We decided to ignore this error and unload that test project since we weren’t planning on running any tests for this example.


We attempted to build our project in Mono, but hit a few assembly versioning issues. Luckily, these are only warnings that MonoDevelop treats as errors, so we turned that setting off and moved on.


Once we made these changes, our project was building successfully.

Running with Mono

Once we solved the issues with getting the solution building on Linux, it was just a matter of fixing our code to make everything work in the new environment.

To start up a web server to host our MVC project, we used xsp4, which you can install by running sudo apt-get install mono-xsp4. Running the xsp4 command in our project's root directory started a server on port 8080.

Missing Assemblies

The first code-related issue we ran into was that assemblies living in the Global Assembly Cache (GAC) on Windows were not found on Linux. We fixed this by copying the DLLs over and placing them in the bin directory. Another solution is to add them to Mono's GAC, which can be done with the command gacutil -i <assembly>. If you are trying to add any delay-signed assemblies to the GAC, add the -bootstrap option before specifying the assembly.

We fixed several of these missing assembly errors until we started getting new exceptions with our own code in the stack traces. Seeing our methods in the stack traces told us that Mono had begun executing our code!

Loading our Configuration

The next major issue was in the way we locate our external configuration files.

In our code, when we look for our configuration files, we start by calling Server.MapPath("/") to get the root directory. Unfortunately, this doesn't quite work in Mono; instead we needed to change it to Server.MapPath("~"). This works in both Mono and .NET, so it is likely the correct way to do it anyway.

For Linux file systems, letter case matters! We had several places in our code where we looked for files without the proper casing. Solving this was simple, but tracking it down took quite a bit of time stepping through code and very thoroughly analyzing every filename in our code to make sure it matched what was on disk.

Differences in Mono

Now that configuration files were loading, we could actually use our site. We ran into one problem that was due to differences between Mono and Microsoft’s .NET Runtime.

CultureInfo Implementations

Deep in our localization code, we check whether a culture is the InvariantCulture. To do this, we were comparing the culture's ThreeLetterISOLanguageName to a constant string "ivl".

private const string InvariantCultureCode = "ivl";

public static CultureSpecificity Specificity(this CultureInfo culture)
{
    if (culture.ThreeLetterISOLanguageName == InvariantCultureCode)
    {
        return CultureSpecificity.Default;
    }
    //...
}

This works on Windows, where ThreeLetterISOLanguageName is all lowercase. In Mono, however, it is all uppercase, so this check failed when it should have passed. To solve this, we changed the code to compare the culture to CultureInfo.InvariantCulture, which removes the need for both the constant string and the ThreeLetterISOLanguageName.

public static CultureSpecificity Specificity(this CultureInfo culture)
{
    if (culture.Equals(CultureInfo.InvariantCulture))
    {
        return CultureSpecificity.Default;
    }
    //...
}

This is the only difference we encountered when running on Mono, which was a huge surprise to us. We were expecting many more things to not work quite right, but had no clue what might go wrong.

Installing Docker

Setting up docker on our Ubuntu 14.04 virtual machine was straightforward. We followed the docker installation instructions for Linux and got up and running quickly.

The technologies that docker relies on are only available in Linux. Installing docker on Windows or OSX requires running a Linux virtual machine that hosts your containers.

Setting up our Docker Container

Setting up our docker container proved to be the least convoluted part of this experiment. We decided to use the official Mono container from Docker Hub which saved us a bit of time scripting out the installation of Mono in our own container.

Dockerfile

FROM mono:4.0
RUN apt-get update && apt-get -y install mono-xsp4
ADD . /app/
# RUN gacutil -i -bootstrap assembly_1.dll
# RUN gacutil -i -bootstrap assembly_2.dll
WORKDIR /app
EXPOSE 9000
ENTRYPOINT ["xsp4", "--port=9000", "--nonstop"]

What this configuration file is doing:

    • Start FROM the mono:4.0 base image
    • RUN our install command to get our server, mono-xsp4, installed
    • ADD the current directory to /app/. This makes our code available to processes within the container
    • If you’ve decided to store assemblies in Mono’s GAC, be sure to register them in your container
    • Set the current working directory to /app
    • EXPOSE port 9000; this will be the port we expect requests to come in on
    • Finally, our container’s ENTRYPOINT, or what to run when the container starts

Now, we can open a console in our application’s directory and execute docker build, which reads our Dockerfile and builds the image, then docker run to start our web application in a container.
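Concretely, assuming the Dockerfile above sits in the application's root directory, the build-and-run cycle looks something like this (the image name is made up):

```shell
# Build an image from the Dockerfile in the current directory
docker build -t uship-frontend .

# Start a container, mapping host port 8080 to the exposed port 9000
docker run -p 8080:9000 uship-frontend
```

The -p flag publishes the container's EXPOSEd port, so the site is reachable at http://localhost:8080 on the host.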

Conclusion

Getting our application up and running in a docker container was easier than we originally thought it would be. The whole process took Greg and me about 6 hours, most of which we spent figuring out what was wrong with our own code rather than fighting issues with Mono or docker.

We’re looking deeper into this to see whether and how it can be integrated into our development, continuous integration, and deployment processes.

We’re Hiring

Interested in playing with these technologies? We’re hiring!
