Moving from HTTP-Triggered Azure Functions to Web API (Update 2019.07.09)

Fireworks over Chicago

This past weekend was a long one due to the Fourth of July, and despite a weekend filled with cookouts, swimming, fireworks, an anniversary date night, and a trip to St. Louis, I was able to knock off an important task on my side project’s TODO list.

Refactoring HTTP-Triggered Azure Functions into a Web API Service

When I started work on Stock Alerts, I began with the Azure Functions for retrieving quotes and evaluating alert definitions, because they were the most interesting pieces to me.

I then started thinking about the API endpoints that I’d need to support a mobile client. I figured I wouldn’t need too many endpoints to support the very minimal functionality that I was aiming to implement for MVP, and I already had the Azure Functions project, so I figured I’d just stand up a few HTTP-triggered functions for my API. After all, I could always refactor them into their own proper web API project later.

It was midway through implementing authentication that I realized that rather than continuing to try to fit the endpoints that I needed into Azure Functions, it made sense to move the HTTP-triggered functions into their own web API project with a separate app service in Azure sooner rather than later.

So that’s what I did.

I performed the refactor Thursday/Friday, and fiddled with the build and release pipelines in Azure DevOps in my free moments on Friday/Saturday. Monday morning I switched the app to use the new API.

Thankfully the refactoring of the code was fairly simple because my functions, much like good controller methods, were thin – they simply received the HTTP request, deserialized it, performed any necessary request-level validation, and delegated processing to the domain layer which lives in a separate assembly. The controllers in the Web API project that I created ended up being very similar.

I’m now closer to the MVP infrastructure that I mentioned a week ago, depicted below (I’m just missing StockAlerts.WebApp now):

Stock Alerts MVP Infrastructure Resources

I love the feeling of checking items off of my TODO list.

Why I Chose Web API Over HTTP-Triggered Azure Functions

So why did I choose to move my API methods from my Functions project to their own Web API project and service?

A few key reasons:

  1. Inability to configure middleware for Azure Functions
  2. Prefer controller methods over Azure Functions
  3. API usage patterns

Let’s talk about these one-by-one…

ASP.NET Core Middleware

ASP.NET Core gives the developer the ability to plug logic into the request/response pipeline. This logic is referred to as middleware. Andrew Lock has a great post on what it is and how it works in an ASP.NET Core web app.

ASP.NET Core has default middleware that it executes during the normal course of processing a request and response, but it also allows the developer to configure at startup what additional middleware should execute with each request, including custom middleware. Middleware is generally used for performing cross-cutting tasks – things like logging, handling exceptions, rendering views, and performing authentication, to name a few.

Early in my adventures into Azure Functions I learned that the developer doesn’t have the ability to configure the middleware that executes during an HTTP-triggered function invocation. Sure, some folks have rolled their own middleware pattern in Azure Functions (like here and here), but I didn’t want to invest that much effort into building something that an ASP.NET Core Web API gives me for free.

My custom middleware needs aren’t too many: for a typical web API I add custom error-handling middleware and enable authentication middleware.

Though I was able to implement workarounds to accomplish these tasks to work in my functions, they weren’t nearly as clean as accomplishing the same thing with middleware in an ASP.NET Core Web API.

Error Handling

My preferred approach to handling exceptions on recent Web API projects has been to create an ErrorHandlingMiddleware class that catches any unhandled exception during the processing of the request and turns it into the appropriate HTTP response. The code can be found here. Adding it to the pipeline is as simple as one line in Startup.cs:

app.UseMiddleware(typeof(ErrorHandlingMiddleware));
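For context, the linked class follows the standard conventional middleware shape — roughly this (a sketch, not the exact Stock Alerts code; `NotFoundException` here is a hypothetical domain exception used to show the status-code mapping):

```csharp
public class ErrorHandlingMiddleware
{
    private readonly RequestDelegate _next;

    public ErrorHandlingMiddleware(RequestDelegate next) => _next = next;

    public async Task Invoke(HttpContext context)
    {
        try
        {
            await _next(context);
        }
        catch (Exception ex)
        {
            // Map the exception to an appropriate status code and write a JSON error body
            var statusCode = ex is NotFoundException
                ? HttpStatusCode.NotFound
                : HttpStatusCode.InternalServerError;

            context.Response.ContentType = "application/json";
            context.Response.StatusCode = (int)statusCode;
            await context.Response.WriteAsync(JsonConvert.SerializeObject(new { error = ex.Message }));
        }
    }
}
```

Because it wraps the entire downstream pipeline in one try/catch, every controller method gets consistent error responses without any per-method decoration.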

Accomplishing similar functionality in my Azure Functions required an additional NuGet package (PostSharp) and a custom [HandleExceptions] attribute on top of all of my functions. Not terrible, but I’d rather not have the extra package or have to remember to manually decorate my functions to get error handling.

Authentication/Authorization

To turn on token-based authentication/authorization for an ASP.NET Core Web API endpoint, you must configure the authentication, JWT bearer, and authorization options in Startup.cs, add the authentication middleware with app.UseAuthentication();, and decorate your controller methods with the [Authorize] attribute.
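Configured that way, the relevant pieces of Startup.cs look roughly like this (a sketch with placeholder configuration keys, not the actual Stock Alerts settings):

```csharp
public void ConfigureServices(IServiceCollection services)
{
    services.AddAuthentication(JwtBearerDefaults.AuthenticationScheme)
        .AddJwtBearer(options =>
        {
            options.TokenValidationParameters = new TokenValidationParameters
            {
                // "Jwt:Secret" is a placeholder configuration key
                ValidateIssuerSigningKey = true,
                IssuerSigningKey = new SymmetricSecurityKey(
                    Encoding.UTF8.GetBytes(Configuration["Jwt:Secret"])),
                ValidateIssuer = false,
                ValidateAudience = false,
                ClockSkew = TimeSpan.Zero
            };
        });

    services.AddMvc();
}

public void Configure(IApplicationBuilder app)
{
    // Must come before MVC so the ClaimsPrincipal is populated for [Authorize]
    app.UseAuthentication();
    app.UseMvc();
}
```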

To implement token-based authentication/authorization on my Azure Functions, there wasn’t an easy way for me to simply decorate a function with an [Authorize] attribute and let the framework make sure that the user could invoke the function. Instead, for each function I had to use AuthorizationLevel.Anonymous and manually check for a valid ClaimsPrincipal and return new UnauthorizedResult() if there wasn’t one.

It worked, but it wasn’t pretty.

Beyond that, I had trouble getting my Functions implementation to add the Token-Expired header to responses when the auth token had expired. After switching over to Web API, this just works with the configuration I have in place.
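The piece that makes this work is a small JwtBearerEvents hook in the bearer options — a common pattern, sketched here (the Token-Expired header name is a convention the client checks for, not a framework feature):

```csharp
.AddJwtBearer(options =>
{
    options.Events = new JwtBearerEvents
    {
        OnAuthenticationFailed = context =>
        {
            // Let the client distinguish "expired" from "invalid"
            // so it knows to attempt a token refresh
            if (context.Exception is SecurityTokenExpiredException)
                context.Response.Headers.Add("Token-Expired", "true");

            return Task.CompletedTask;
        }
    };
});
```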

Prefer Controllers Over Functions

As I began to add multiple HTTP-triggered functions that manipulated the same server resource, I grouped them into a single file per resource, similar to how controllers are often organized. But even though the methods were grouped like controllers, there were significant differences at the code level that caused me to prefer the controller implementations over the Azure Functions implementations.

Let’s compare the two side-by-side…

Here’s an Azure Function for getting an alert definition by ID:

[FunctionName("GetAlertDefinition")]
[HandleExceptions]
public async Task<IActionResult> GetAlertDefinitionAsync(
    [HttpTrigger(AuthorizationLevel.Anonymous, "get", Route = "alert-definitions/{alertDefinitionId}")] HttpRequest req,
    string alertDefinitionId)
{
    var claimsPrincipal = _authService.GetAuthenticatedPrincipal(req);
    if (claimsPrincipal == null)
        return new UnauthorizedResult();

    var alertDefinition = await _alertDefinitionsService.GetAlertDefinitionAsync(new Guid(alertDefinitionId));

    claimsPrincipal.GuardIsAuthorizedForAppUserId(alertDefinition.AppUserId);

    var resource = _mapper.Map<Resources.Model.AlertDefinition>(alertDefinition);
    return new OkObjectResult(resource);
}

And here’s the analogous controller method for getting an alert definition by ID:

[HttpGet]
[Route("{alertDefinitionId}")]
public async Task<IActionResult> GetAsync(Guid alertDefinitionId)
{
    var alertDefinition = await _alertDefinitionsService.GetAlertDefinitionAsync(alertDefinitionId);

    HttpContext.User.GuardIsAuthorizedForAppUserId(alertDefinition.AppUserId);

    var resource = _mapper.Map<Resources.Model.AlertDefinition>(alertDefinition);
    return new OkObjectResult(resource);
}

Observations:

  1. The controller method is shorter.
  2. The controller method is able to accept ID parameters as GUIDs, avoiding manual conversion.
  3. The controller method declares the HTTP verb and route more cleanly (in my opinion) as method attributes rather than a parameter attribute.
  4. Because I’ve decorated the controller with [Authorize], the controller method avoids the manual authorization logic.
  5. Because I’m using the ErrorHandlingMiddleware, the controller method avoids the extra [HandleExceptions] attribute.
  6. Not illustrated in this example, but the controller method accepts request bodies as entities, avoiding having to manually deserialize the request body from the HttpRequest.
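To illustrate that last point, a POST controller method can accept the deserialized body directly as a parameter (a sketch; the SaveAlertDefinitionAsync service method name is hypothetical):

```csharp
[HttpPost]
public async Task<IActionResult> PostAsync([FromBody] Resources.Model.AlertDefinition resource)
{
    // Model binding has already deserialized the JSON body into the resource
    var alertDefinition = _mapper.Map<AlertDefinition>(resource);

    var saved = await _alertDefinitionsService.SaveAlertDefinitionAsync(alertDefinition);

    return new OkObjectResult(_mapper.Map<Resources.Model.AlertDefinition>(saved));
}
```

In the Azure Functions version, the equivalent method starts with reading the HttpRequest body stream and deserializing it by hand.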

From a purely code aesthetics perspective, I just prefer controller methods over HTTP-triggered functions.

API Usage Pattern

I expect the usage pattern of my API to be fairly uniform across the available endpoints and the traffic to ebb and flow predictably with the number of users using the mobile app. I don’t expect large spikes in traffic to specific endpoints where I would need to be able to scale individual endpoints; if there are large spikes due to a sudden increase in the number of users, I’ll want to scale the whole web API service.

While HTTP-triggered Azure Functions may be the right choice for other use cases, the anticipated usage pattern of the Stock Alerts API aligns much more closely with a Web API service.

I’m still using Azure Functions for pulling stock quotes, evaluating alert definitions, and sending out notifications. Azure Functions are well-suited for these use cases, for the reasons I described here.

Wrapping Up

With this change behind me, I’m ready to continue moving forward working on the mobile app. My mornings the rest of this week will be focused on building the Create Alert Definition screen.

Here’s the repository for the project if you’d like to follow along: https://github.com/jonblankenship/stock-alerts.

Thanks for reading!

-Jon


If you’d like to receive new posts via e-mail, consider joining the newsletter.

Stock Alerts Update 2019.07.03

I’d been meaning to get this update out over the weekend, but a stomach bug visited our house and threw off my schedule. I’d like to get these updates out about once a week going forward, but since this is a side project and I’m working on it for fun in my off hours, I’m not going to sweat it too much.

Also, as I mentioned in my first post, these updates will be pretty informal and unpolished. I just want to talk in detail about some of the things I did in the past week on the project, and what I plan to do in the coming week.

Last Week

Writing

With my announcement last weekend that I’ll be building Stock Alerts in public, I was compelled to write a few extra posts to lay some of the groundwork for the project. I wrote the introductory post, spoke about the features, and laid out the infrastructure.

Naturally, this took some of my time away from development, but I think it was time well spent.

I have other posts that I want to write in the future to cover some of the work I’ve already done (particularly in the API), and I’ll try to work those in in the coming weeks without sacrificing too much dev time.

Create Alert Definition – Stock Search

I’ve been working on the Create Alert Definition screen in the Stock Alerts mobile app. This is where the user defines an alert, including selecting the stock and defining the alert criteria. Specifically, I was focused on the stock selection functionality last week (we’ll talk more about building the alert criteria in a couple weeks).

Here’s a wireframe for these screens:

Stock Alerts Create Alert Definition screen wireframes

I want the stock search feature to function like a typeahead search, allowing the user to type as much or as little of the stock symbol or company name as desired, and when they pause, the system retrieves the search results.

I already had an API endpoint for finding stocks based on a search string; I just needed to add CancellationToken support, which was as simple as adding it to the Azure function signature and plumbing it down to the data access layer:

[FunctionName("FindStocks")]
[HandleExceptions]
public async Task<IActionResult> FindStocksAsync(
    [HttpTrigger(AuthorizationLevel.Anonymous, "get", Route = "stocks")] HttpRequest req,
    CancellationToken cancellationToken,
    ILogger log)

Implementing search on the mobile app side took a bit more work…

Thinking about this from an implementation perspective, my StockSearchPageViewModel needs a SearchString property that receives the input from the textbox, waits a second, and, if there’s no additional input, executes the web request to get the search results from the API, populating a collection of results on the view model to which the view is bound. If additional input arrives from the user while the web request is executing, we need to cancel it and issue a new request.

I can’t (shouldn’t) implement all of this in the SearchString property setter, because you can’t (and shouldn’t want to) make a property setter async. Property setters should be fast and non-blocking. And yet I want to be able to simply bind the Text property of my search box to a property on my view model.

I ended up using NotifyTask from Stephen Cleary’s Nito.Mvvm.Async library, which contains helpers for working with async methods in MVVM. NotifyTask is “essentially an INotifyPropertyChanged wrapper for Task/Task<T>,” as Stephen writes in this SO answer, which helped me quite a bit (the answer refers to NotifyTaskCompletion, which was replaced by NotifyTask).

So here’s my StockSearchPageViewModel implementation:

public class StockSearchPageViewModel : ViewModelBase
{
    private readonly IStocksService _stocksService;
    private CancellationTokenSource _searchCancellationTokenSource;

    public StockSearchPageViewModel(
        IStocksService stocksService,
        INavigationService navigationService, 
        ILogger logger) : base(navigationService, logger)
    {
        _stocksService = stocksService ?? throw new ArgumentNullException(nameof(stocksService));
    }

    private string _searchString;
    public string SearchString
    {
        get => _searchString;
        set
        {
            _searchString = value;

            var newSearchCancellationTokenSource = new CancellationTokenSource();
            if (_searchCancellationTokenSource != null)
                _searchCancellationTokenSource.Cancel();
            _searchCancellationTokenSource = newSearchCancellationTokenSource;

            Stocks = NotifyTask.Create(SearchStocksAsync(_searchCancellationTokenSource));
            RaisePropertyChanged(nameof(SearchString));
        }
    }

    private NotifyTask<List<Stock>> _stocks;
    public NotifyTask<List<Stock>> Stocks
    {
        get => _stocks;
        set
        {
            _stocks = value;
            RaisePropertyChanged(nameof(Stocks));
        }
    }

    private Stock _stock;
    public Stock SelectedStock
    {
        get => _stock;
        set
        {
            _stock = value;
            var navigationParams = new NavigationParameters();
            navigationParams.Add(NavigationParameterKeys.SelectedStock, _stock);
            NavigationService.GoBackAsync(navigationParams);
        }
    }

    private async Task<List<Stock>> SearchStocksAsync(CancellationTokenSource searchCancellationTokenSource)
    {
        if (SearchString.Length >= 1)
        {
            try
            {
                // Debounce: wait for a one-second pause in typing before calling the API
                await Task.Delay(1000, searchCancellationTokenSource.Token);

                if (!searchCancellationTokenSource.IsCancellationRequested)
                {
                    var stocks = await _stocksService.FindStocksAsync(SearchString, searchCancellationTokenSource.Token);
                    return stocks.ToList();
                }
            }
            catch (OperationCanceledException)
            {
                // A newer keystroke canceled this search; fall through and return an empty list
            }
            finally
            {
                searchCancellationTokenSource.Dispose();

                // Only clear the field if a newer search hasn't already replaced it
                if (ReferenceEquals(_searchCancellationTokenSource, searchCancellationTokenSource))
                    _searchCancellationTokenSource = null;
            }
        }

        return new List<Stock>();
    }
}

The view model creates and manages the cancellation token source, and cancels it when necessary, in the SearchString property setter. This is also where we create the NotifyTask, passing it the task returned by SearchStocksAsync(..), which delays one second and calls the search API. The results of the SearchStocksAsync(..) method call are exposed as NotifyTask&lt;List&lt;Stock&gt;&gt; by the Stocks property.

In my StockSearchPage view, I can simply bind to the properties, like so:

<SearchBar Grid.Row="1" Placeholder="Start typing ticker or company name" Text="{Binding SearchString, Mode=TwoWay}"></SearchBar>
<ListView Grid.Row="2" ItemsSource="{Binding Stocks.Result}" SelectedItem="{Binding SelectedStock}">
    <!--snip-->
</ListView>

… and with that, the typeahead stock search seems to be working pretty well.

HttpClientFactory

ASP.NET Core 2.1 introduced HttpClientFactory, which solves some of the problems developers run into when they create too many HttpClients in their projects. Steven Gordon has a nice write-up on HttpClientFactory and the problems it attempts to solve here.

The syntax to configure clients using HttpClientFactory is straightforward. In your ASP.NET Core Startup.cs:

services.AddHttpClient(Apis.SomeApi, c =>
{
    c.BaseAddress = new Uri("https://api.someapi.com");
    c.DefaultRequestHeaders.Add("Accept", "application/json");
});

Unfortunately, since Xamarin.Forms projects target .NET Standard, we can’t use any of the .NET Core goodies like HttpClientFactory. I wanted a similar pattern for configuring and creating my HttpClients in the mobile app, so I took some inspiration from here and created my own poor man’s HttpClientFactory.

Here’s my IHttpClientFactory interface:

public interface IHttpClientFactory
{
    void AddHttpClient(string name, Action<HttpClient> configurationAction);

    HttpClient CreateClient(string name);
}

And here’s my fairly naïve, yet adequate implementation:

public class HttpClientFactory : IHttpClientFactory, IDisposable
{
    private readonly IDictionary<string, Action<HttpClient>> _configurations = new Dictionary<string, Action<HttpClient>>();
    private readonly IDictionary<string, HttpClient> _clients = new Dictionary<string, HttpClient>();

    public void AddHttpClient(string name, Action<HttpClient> configurationAction)
    {
        if (string.IsNullOrWhiteSpace(name)) throw new ArgumentNullException(nameof(name), $"{nameof(name)} must be provided.");
        if (_configurations.ContainsKey(name)) throw new ArgumentException($"A client with the name {name} has already been added.", nameof(name));

        _configurations.Add(name, configurationAction);
    }

    public HttpClient CreateClient(string name)
    {
        if (!_clients.ContainsKey(name))
        {
            if (!_configurations.ContainsKey(name)) throw new ArgumentException($"A client by the name of {name} has not yet been registered. Call {nameof(AddHttpClient)} first.");

            var httpClient = new HttpClient();
            _configurations[name].Invoke(httpClient);
            _clients.Add(name, httpClient);
        }

        return _clients[name];
    }

    public void Dispose()
    {
        foreach (var c in _clients)
        {
            c.Value.Dispose();
        }
    }
}

Finally, the registration of the factory with a single client in my App.xaml.cs:

IHttpClientFactory httpClientFactory = new HttpClientFactory();
httpClientFactory.AddHttpClient(
    MiscConstants.StockAlertsApi, 
    c =>
    {
        c.BaseAddress = new Uri(MiscConstants.StockAlertsApiBaseUri);
        c.DefaultRequestHeaders.Accept.Add(new MediaTypeWithQualityHeaderValue("application/json"));
    });
containerRegistry.RegisterInstance(httpClientFactory);

This gives me a nice way to create and manage my HttpClients in my Xamarin.Forms project, and it will be easy to drop in the real HttpClientFactory if it ever becomes available for Xamarin.Forms projects.
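Consuming the factory from a service class then mirrors the real HttpClientFactory usage pattern (a sketch; the StockAlertsApiClient class and endpoint path are illustrative):

```csharp
public class StockAlertsApiClient
{
    private readonly HttpClient _httpClient;

    public StockAlertsApiClient(IHttpClientFactory httpClientFactory)
    {
        // Same named-client pattern as the .NET Core version
        _httpClient = httpClientFactory.CreateClient(MiscConstants.StockAlertsApi);
    }

    public async Task<string> GetStocksJsonAsync(string searchString, CancellationToken cancellationToken)
    {
        // BaseAddress and default headers were applied by the factory's configuration action
        var response = await _httpClient.GetAsync($"stocks?searchString={searchString}", cancellationToken);
        response.EnsureSuccessStatusCode();
        return await response.Content.ReadAsStringAsync();
    }
}
```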


Last week’s activities also included implementing a web service client base class for handling common tasks when communicating with the API, storing access and refresh tokens on the client, and working out my unauthorized/refresh token flow, but those are topics for another post. This one’s long enough.

This Week

This week’s already about half over, and we’ve got the 4th of July coming up. I plan to continue working on the Create Alert Definition screen, and perhaps by the next time I write I’ll have the functionality for building the alert criteria and saving the alert definition working – we’ll see.

Here’s the repository for the project if you’d like to follow along: https://github.com/jonblankenship/stock-alerts.

Thanks for reading, and Happy Fourth of July!

-Jon



Stock Alerts Infrastructure

We’ve talked about the features that we’ll be implementing in Stock Alerts. Today we’ll look at the infrastructure that will be needed to support those features.

I’ve been working in Azure for several years now, both in my work life and on side projects. Being a primarily .NET developer, it makes sense that Azure is my preferred cloud. One of these days I will probably check out AWS, but for this project we’ll be hosting our backend services in Azure.

Microsoft Azure

More Than Just CRUD

When deciding what to build for this project, I wanted to do something that was a bit more than just a simple CRUD app that consists of an app talking to a web service talking to a database. Stock Alerts will need to continuously monitor the current prices of all stocks for which there are active alert definitions and evaluate whether the alert should be triggered, so we’ll need a process that runs on a regular basis to perform that work. Further, when the process detects that an alert should be triggered, it needs to send out notifications on the user’s preferred channel(s).

For this processing, we’ll use a combination of Azure Functions and Service Bus queues.

Here’s a sequence diagram depicting the retrieving of quotes, evaluation of alert definitions, and sending of notifications:

Stock Alerts Notification Sequence Diagram

Alert Definition Evaluation

The evaluation of the active alert definitions will have a predictable load. On a defined time interval, the system will query the stock data provider for the distinct set of stocks that have active alerts, then iterate through the alert definitions and evaluate each one against the latest data received for its stock.

A timer-triggered Azure Function, which is essentially a CRON job running in Azure, will work nicely for periodically pulling new stock data. Initially, there will be a single function instance to pull the data, but this work can be partitioned out to multiple function instances if/when the need arises. It will then enqueue a message on a service bus queue (alertprocessingqueue) for each active alert indicating that there’s new data and the alert needs to be evaluated.
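A skeleton of such a timer-triggered function might look like this (the CRON schedule, names, and service calls are illustrative, not the actual Stock Alerts code):

```csharp
[FunctionName("RetrieveQuotes")]
public async Task RunAsync(
    [TimerTrigger("0 */5 * * * *")] TimerInfo timer,  // fires every five minutes
    [ServiceBus("alertprocessingqueue", Connection = "ServiceBusConnectionString")] IAsyncCollector<string> queue,
    ILogger log)
{
    // Pull fresh quotes for the distinct set of stocks with active alert definitions
    var quotes = await _quotesService.GetQuotesForActiveAlertsAsync();

    foreach (var quote in quotes)
    {
        // Enqueue one message per active alert definition for that stock
        foreach (var alertDefinitionId in await _alertDefinitionsService.GetActiveAlertDefinitionIdsAsync(quote.Symbol))
        {
            await queue.AddAsync(alertDefinitionId.ToString());
        }
    }
}
```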

A service bus queue-triggered function (EvaluateAlert) will receive the service bus message and perform the evaluation for a single alert definition.
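A sketch of that queue-triggered function (names beyond EvaluateAlert are illustrative):

```csharp
[FunctionName("EvaluateAlert")]
public async Task RunAsync(
    [ServiceBusTrigger("alertprocessingqueue", Connection = "ServiceBusConnectionString")] string message,
    ILogger log)
{
    // Each message carries the ID of a single alert definition to evaluate;
    // if the alert triggers, notification messages are enqueued downstream
    var alertDefinitionId = new Guid(message);
    await _alertEvaluationService.EvaluateAlertDefinitionAsync(alertDefinitionId);
}
```

Because each invocation handles exactly one alert definition, the Functions runtime can fan out under load simply by processing more queue messages in parallel.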

Sending Notifications

The actual notification of users, on the other hand, will likely be characterized by periods of low activity with occasional spikes of high activity. Imagine a very popular stock like AAPL receiving an earnings surprise and opening 5% higher – several alert definitions could be triggered at once and notifications will need to be sent immediately.

Azure Functions will help us with this use case as well – we’ll enqueue notification messages on service bus queues (pushnotificationqueue, for example) when alerts are triggered and service bus queue-triggered functions (SendPushNotification, for example) will respond and send out the notifications. We’ll have a queue for each delivery channel (push, e-mail, SMS), and a function for each as well.

When AAPL spikes and 500 alerts are triggered, 500 messages will be enqueued on service bus queues (assuming each user only has one delivery channel) and 500 functions will be invoked to deliver those notifications.

The Infrastructure

So what Azure resources will be required to support the Stock Alerts features? Here’s a diagram of what we’ll need for the MVP:

Stock Alerts MVP Infrastructure Resources

We’ve got an Azure SQL database to store our users and their preferences, alert definitions and criteria, and stocks and their latest data.

We’ve already talked about the service bus queues, which are used for communicating between the Azure Functions, and we’ve already talked about the Azure Functions as well.

The Stock Alerts API will be an ASP.NET Core Web API service running in Azure, and it will expose endpoints to handle the user and alert definition maintenance as well as authentication.

The Stock Alerts web app, though depicted on the diagram, will actually be implemented post-MVP.

Current State

The above shows the infrastructure as I plan to have it at launch. Below is the current infrastructure I have deployed in Azure:

Stock Alerts Current Infrastructure Resources

All of the API endpoints are currently implemented as HTTP-triggered Azure Functions. I did this because I already had the StockAlerts.Functions project, and I didn’t think there’d be that many HTTP endpoints. As I started implementing the authentication endpoints, I ran into some of the limitations of Azure Functions HTTP endpoints (e.g., you can’t inject logic into the pipeline as you can with middleware in a full-fledged ASP.NET Core Web API), and I increasingly felt like the API endpoints deserved their own project and app service. It’s on my TODO list to move these into their own project and service.

Wrapping Up

I think the most interesting part of the Stock Alerts infrastructure is the use of Azure Functions and Service Bus queues to evaluate alert definitions and send notifications. Azure Functions make sense for these processes because they can be triggered by schedule or service bus queue message (among other methods), and they are easily scaled. Service bus queues are appropriate for the communication between functions because they are highly available, reliable, and fast.

Though one of the key value props of serverless computing is automatic scaling, I don’t have practical experience with scaling Azure Functions during periods of high load. I’ll log and monitor timings from the various functions to ensure that notifications are being delivered in a timely fashion from when they are triggered, which is crucial for Stock Alerts’ primary function.

That’s all for now. Thanks for reading.

-Jon



Stock Alerts Features

In my previous post I revealed my new side project, Stock Alerts, and my intention to build a .NET Core product on the Azure stack, posting regular updates about my work as I go. Before I get down into the weeds in future posts, I thought it might be good to first talk at a higher level about the MVP features I’ll be implementing and the infrastructure that will be needed.

Features

Stock Alerts will consist of a mobile app with a small number of key features. Of course, there will be backend services to support the mobile app functionality, but we’ll talk about the features from the user’s perspective as he/she interacts with the app.

There will also be a web app eventually, but for now we’ll just focus on the mobile app.

Alert Definition Management

Push notification

Stock Alerts users will be able to view, create, edit, and delete alert definitions that define the criteria for a stock alert. In the course of creating an alert definition, the user will search for and select a stock, name the alert, and define the criteria that will trigger the alert.

Initially, the set of stocks available to create alert definitions will be limited, due to daily API call limits set by my data provider (which I’ll talk about in a future post). I’ve considered either supporting only stocks in the S&P 500 at first or allowing the initial users to essentially define the initial universe of stocks by the alert definitions that they create. Still thinking on this one…

For the alert criteria, the API will support multiple rule types (like various price alerts, technical alerts, and fundamental alerts), as well as combining multiple rules into a boolean logic tree, but for MVP the mobile app will only expose the ability to enter 1..n price alerts combined by an AND or OR.

Alert Notifications

Azure Functions

The Stock Alerts backend will evaluate all active alert definitions as it receives new stock data and notify users of triggered alerts via one or more of three methods: push notification, e-mail, and/or SMS message.

The processing of active alert definitions and the sending of notifications via the three channels will be handled by Azure Functions in conjunction with Azure Service Bus queues. Azure Functions are well-suited for these types of tasks – I pay for the compute that I use and they can scale out under load (for example, when AAPL makes a large move and triggers a large number of alerts).

User Registration & Login

The Stock Alerts mobile app will allow new users to register with the app and existing users to log in. To register, a user just needs to provide their e-mail address, a unique username, and their password. After login, web requests will be authenticated via token-based authentication.

User Preferences Management

Users will be able to set their notification preferences in the Stock Alerts app, including the channels that they will be notified on (push notification, e-mail, or SMS), as well as their e-mail address and SMS phone number, if applicable.

Payment/Subscription Management

Stripe logo

Though users will be able to register and set up a limited number of alert definitions for free, I’ll charge users who have more than a few active alert definitions. Stripe is the standard for managing subscriptions and taking online payments, and their API is well-designed. We’ll integrate with Stripe to manage user subscriptions and payments for premium users.

Cross-Platform

Xamarin Forms

The mobile app will target both Android and iOS (and eventually web), and we’ll use Xamarin.Forms to accomplish this. I’m an Android user, so Android will be the first platform I focus on. I can get idea validation and feedback by launching on a single platform, and if there is traction after launch and I want to expand to iOS, I’ll be well-positioned to do so having built the app with Xamarin.Forms.

A web app will probably be post-MVP as well.

Is That All?

The feature set that I’m targeting for MVP is extremely limited by design. There’s enough here to provide value, prove out the idea, and demonstrate interactions throughout the full stack while being small enough to complete within a couple of months at 5-10 hours per week (though writing about it will slow me down some). There will also be plenty of opportunities for enhancements and additional features post-MVP, if I’m so inclined.

When choosing a side project, it’s important to me that it be very small and limited in scope for a few reasons. First, I want something that I can build quickly while I’m motivated and energized about the new idea and lessen the risk of losing interest or becoming bored with it halfway through. I also want to launch as soon as possible to begin the feedback loop and start learning from my users: What do they like/not like? How do they use the product? Will they pay for it? Etc… Finally, by limiting the initial feature set, I focus on just the core features of the product and I don’t waste time building features that my users may not even care about.

So now that we know what features we’ll be building, we’re ready to talk about the infrastructure needed to support those features!

… but that’s a topic for my next post. Thanks for reading.

-Jon



Working on a New Side Project in Public

For the past month or so I’ve been working on a new side project. It’s a small project that will allow me to exercise existing skills, re-sharpen older skills that I’ve not used in a while, and learn some new ones.

I’d like to share it with you…

Working in Public

I’ve been working on my side project every weekday morning for an hour or two before my workday starts and on the occasional weeknight or weekend. I toil away mostly in solitude, except when I solicit feedback or ask for advice from a colleague and buddy who’s working on his own project.

I’ve recently been inspired by a few folks on various podcasts to start working in public. They speak of the benefits of sharing what you’re learning, developing, building with your audience as you’re working on it, an idea that strikes fear in the hearts of those who, like me, are inclined towards self-conscious perfectionism. It’s difficult to put your in-progress work out there for all to see, warts and all.

There’s been an increasing trend towards working in public, particularly among indie makers and developers. I dipped my toe into those waters with my last project, Open Shippers, earlier this year*, and I plan to get back in the pool with this project.

My plan is to post fairly regular (weekly?) updates about what I’m doing on the project:

  • What did I do the prior week?
  • Is there a particular problem I solved, pattern I used, or trick I learned that is worth sharing?
  • What do I plan to work on in the coming week?
  • Any particular challenges worth mentioning?

Things like that.

These will be pretty rough, unpolished posts. Sometimes a particular topic I encounter will be worthy of becoming a more in-depth, polished post, and I’ll put those up from time to time.

Goals

So what are my goals with working on this new project, and doing so in public?

  • Get into the habit of writing regularly.
  • Sharpen old skills and learn new ones.
  • Share what I’m working on and what I’ve learned. Hopefully it is helpful to someone.
  • Demonstrate the process of taking a product from concept to market.
  • Demonstrate a well-architected, working product on the Azure / .NET Core stack.
  • Teach.
  • Learn.
  • Have fun.
  • Make some lunch money if anyone subscribes to the service.

What’s the Project?

Concept

The idea is not particularly novel or interesting, but it’s a good candidate for what I’m trying to achieve with this project.

I’m building a stock alerts app (hereafter referred to as Stock Alerts until I think of a good name for it) that will allow the user to create an alert definition for a given stock based on a set of criteria that they define. When the alert is triggered, the user will be notified according to their preferences by push notification, SMS, and/or e-mail.

“Aren’t there a ton of other stock alert apps out there?” Yes, there are. Some just let you set a simple price alert; others offer many more features. I plan to support complex alert criteria that include not only simple price alerts but eventually technical and fundamental alerts as well, which will differentiate the app from a portion of the market.

I also want to challenge the idea that you have to have a completely novel idea to succeed in building an app that people will pay for. There’s room for multiple competitors in most markets, and oftentimes your app will be the right choice for a segment of the market that’s not currently being served well.

To be clear, I have no illusions of replacing my income with this app, but it will be great fodder for achieving the goals I mentioned above, and it should be a fun little project.

Tech

Here’s some of the technology I’m using:

  • ASP.NET Core Web API and Azure Functions on the backend
  • Azure SQL with EF Core for data access
  • Xamarin.Forms for the mobile app
  • Azure DevOps for build and release pipelines

Current Status

The backend API is coded and running in Azure. I’ll share more detail about the API and how it’s built in future posts, including the overall architecture. I’ve been working on the mobile app for about a week now. I’m able to register/login through the mobile app and display my current alert definitions. Last week I put together some UI wireframes for the Create Alert Definition screens, and I’ll be working on implementing those screens this week.

Wrapping Up

In the spirit of working in public, I’ve made the Stock Alerts repository public: https://github.com/jonblankenship/stock-alerts. As with most projects, there are areas of the codebase that need some work and refinement, but we’ll address those things together down the road.

Thanks for reading! Until next time,

-Jon

(* After spending several months working on Open Shippers, I was burned out on it by the time I launched it in preview in February. I failed to differentiate it from other similar services out there, and I didn’t have any gas left in the tank to work on building an audience and making it successful. Someday I may do a post-mortem on the project. Open Shippers wasn’t a total failure though – I learned quite a bit while working on it, particularly about ASP.NET Core and Blazor, and I’ve reused several things I developed for Open Shippers in more recent projects. The site is still live, and I may revive efforts around it at some point, but I’m content to work on something else for the time being.)


If you’d like to receive new posts via e-mail, consider joining the newsletter.

Setting Environment for EF Core Data Migrations

Most of my side project work is in ASP.NET Core lately, and my main side project uses EF Core for data access with data migrations. My application is deployed in Azure using Azure SQL as the database. When developing and running locally, I hit a LocalDB instance on my machine.

When I need to make changes to the data model, I make the code changes, run Add-Migration Update### and then Update-Database in the package manager console, and my local database happily gets updated. But when I’m ready to make the changes to the Azure SQL database, how can I change the connection string that the EF database context is using?

The ASPNETCORE_ENVIRONMENT Variable

First let’s talk about how an ASP.NET Core application knows where it’s running.

ASP.NET Core uses an environment variable named ASPNETCORE_ENVIRONMENT to control how an application behaves in different environments. The framework supports three environment values out of the box: Development, Staging, and Production. When running locally, my app uses the environment variable set up for me by Visual Studio when I created the project on the Debug tab of my project’s properties:

The ASPNETCORE_ENVIRONMENT variable set on the Debug tab of Project Properties.
The ASPNETCORE_ENVIRONMENT variable in project properties.

On the Application settings tab for my App Service in Azure, this variable is defined and set to Production so the deployed instance of my application knows to use production settings and connection strings:

The ASPNETCORE_ENVIRONMENT variable set on the Application settings tab on the App Service in Azure.

The value of this environment variable can be accessed from within the Startup class by using a constructor that takes an instance of IHostingEnvironment:

public Startup(IConfiguration configuration, IHostingEnvironment environment)

You can then use convenience methods like environment.IsDevelopment() or environment.IsProduction() to switch based on the predefined environments, or access environment.EnvironmentName directly.
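A common place these checks show up is in Startup.Configure, where middleware varies by environment. Here’s a minimal sketch of that pattern (the specific middleware choices below are illustrative, not taken from my project):

```csharp
public void Configure(IApplicationBuilder app, IHostingEnvironment env)
{
    if (env.IsDevelopment())
    {
        // Show detailed error pages only while developing locally
        app.UseDeveloperExceptionPage();
    }
    else
    {
        // In Staging/Production, show a friendly error page instead
        app.UseExceptionHandler("/Error");
    }

    app.UseMvc();
}
```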

ASP.NET Core also supports the use of different appsettings.json files, depending on the value of the ASPNETCORE_ENVIRONMENT environment variable. For example, I have an appsettings.Development.json in my project with overrides for settings and connection strings specific to my development environment. When loading configuration options on startup, the framework knows to look for and use any settings defined in an appsettings.{EnvironmentName}.json file, where {EnvironmentName} matches the value of the ASPNETCORE_ENVIRONMENT environment variable.
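As a concrete illustration, an appsettings.Development.json override might look like this (the key name and connection string below are hypothetical, not my project’s actual values):

```json
{
  "ConnectionStrings": {
    "DefaultConnection": "Server=(localdb)\\MSSQLLocalDB;Database=mydatabase;Trusted_Connection=True;"
  }
}
```

When ASPNETCORE_ENVIRONMENT is Development, any values here win over the matching entries in the base appsettings.json.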

Setting ASPNETCORE_ENVIRONMENT for Data Migrations

All of this is prelude to the point of this post. I’ve never really used the package manager console for much more than, well, managing NuGet packages and running the occasional data migration command. But as it turns out, the Package Manager Console window is a PowerShell shell that can do much more. For now, I just need it to help me target the correct database when running my data migrations.

I can see a list of my application’s DbContext types and where they’re pointing by issuing the EF Core Tools command Get-DbContext in the package manager console. Doing so yields the following output:

PM> Get-DbContext
Microsoft.EntityFrameworkCore.Infrastructure[10403]
      Entity Framework Core 2.2.1-servicing-10028 initialized 'ApplicationDbContext' using provider 'Microsoft.EntityFrameworkCore.SqlServer' with options: None

providerName                            databaseName  dataSource             options
------------                            ------------  ----------             -------
Microsoft.EntityFrameworkCore.SqlServer mydatabase    (localdb)\MSSQLLocalDB None   

I can then set the ASPNETCORE_ENVIRONMENT variable to Production for the context of the package manager window only by issuing the following command:

PM> $Env:ASPNETCORE_ENVIRONMENT = "Production"

All subsequent calls to Update-Database will now run against the database configured for my Production environment. I can double-check to make sure, though, by issuing Get-DbContext again. This time it shows that I’m pointing to my deployed database:

PM> Get-DbContext

providerName                            databaseName dataSource                              options
------------                            ------------ ----------                              -------
Microsoft.EntityFrameworkCore.SqlServer mydatabase   tcp:acme.database.windows.net,1433      None
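One caution: a variable set with $Env: persists for the rest of the Package Manager Console session, so once the production migration is done it’s worth pointing it back at the local environment:

```powershell
PM> $Env:ASPNETCORE_ENVIRONMENT = "Development"
```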

Thanks for reading!

–Jon


If you’d like to receive new posts via e-mail, consider joining the newsletter.

Creating a Loading Overlay for a Composite Blazor View

The home screen of one of my projects is composed of multiple Blazor components that make web service calls in their respective OnInitAsync() methods to retrieve their data.  When loading the page, all of the components render immediately in their unpopulated state; each one then updates and fills out as the various web service calls complete and their data loads.  I’m not a fan of this – I’d rather have a nice overlay with some sort of loading indicator hiding the components while they’re loading that disappears when the last component is ready for display.  This weekend I developed a simple pattern for accomplishing just that.

The full source code for this example can be found here.

What We’re Building

Below is an example of a Blazor page that contains three components, each one simulating a web service call on initialization that completes from one to three seconds after it is initialized.  We see them show up empty at first, then resize, and finally load their text, all at different times.

Blazor page load without loading overlay.
Blazor page load without loading overlay.

Instead, we want our end product to look like the following.  When the page loads we see a nice little loading indicator (I pulled a free one from here) on an overlay that hides the components while they’re doing their business of initializing, resizing, and loading their data before they’re ready to be seen.

Blazor page load with loading overlay.
Blazor page load with loading overlay.

Now that we know what we’re building, let’s see some code…

Blazor Components

Index View and Model

We’ll start with our Blazor components.  Our IndexView.cshtml is the main page, and it looks like this:

@inherits PageLoadingOverlay.App.Features.Home.IndexModel
@page "/"
@layout Layout.MainLayout

<!-- Loading overlay that displays until all individual components have loaded their data -->
<LoadingOverlayView LoadStateManager="@LoadStateManager"></LoadingOverlayView>

<div class="row">
    <div class="col-3 component-div component-1-div">
        <ComponentView LoadStateManager="@LoadStateManager" Number="1" Delay="2000"></ComponentView>
    </div>
    <div class="col-3 component-div component-2-div">
        <ComponentView LoadStateManager="@LoadStateManager" Number="2" Delay="1000"></ComponentView>
    </div>
    <div class="col-3 component-div component-3-div">
        <ComponentView LoadStateManager="@LoadStateManager" Number="3" Delay="3000"></ComponentView>
    </div>
</div>

Note that we have four components in this view: three instances of ComponentView that load data on initialization, and an instance of LoadingOverlayView that hides the other components until they are all loaded. We’ll look at these components shortly.

The corresponding IndexModel for this view has a single property:

public class IndexModel: BlazorComponent
{
    /// <summary>
    /// The <see cref="LoadStateManager"/> for this view
    /// </summary>
    /// <remarks>
    /// Since this is the parent view that contains the loading overlay, we new
    /// up an instance of the <see cref="LoadStateManager"/> here, and pass it
    /// to child components.
    /// </remarks>
    public LoadStateManager LoadStateManager { get; } = new LoadStateManager();
}

We instantiate a new instance of LoadStateManager here, which will manage the loading state of the child components and tell the loading overlay when all components are loaded and it can disappear.

Component View and Model

The three instances of the component that we’re loading have these properties:

public class ComponentModel: BlazorComponent, ILoadableComponent
{
    [Parameter] protected LoadStateManager LoadStateManager { get; set; }

    [Parameter] protected int Number { get; set; }

    [Parameter] protected int Delay { get; set; }
    
    public string Title => $"Component {Number}";

    public string Status { get; private set; }

    public ComponentLoadState LoadState { get; } = new ComponentLoadState();

The LoadStateManager from the parent view is passed to the component, as well as a couple of other parameters (like Number and Delay) that control the component’s behavior.

The component implements interface ILoadableComponent which defines a single property:

/// <summary>
/// Defines a component whose <see cref="ComponentLoadState"/> can be managed by a <see cref="LoadStateManager"/>
/// </summary>
public interface ILoadableComponent
{
    /// <summary>
    /// The load state of a component
    /// </summary>
    ComponentLoadState LoadState { get; }
}

Note that when we implement ILoadableComponent in our component, we go ahead and instantiate a new instance of ComponentLoadState to manage the loading state of this individual component.

Back to our ComponentModel, we want to register our component with the LoadStateManager when its parameters are set, like so:

protected override void OnParametersSet()
{
    base.OnParametersSet();

    // Register this component with the LoadStateManager
    LoadStateManager.Register(this);
}

This call makes the LoadStateManager aware of this component and ensures that it will wait for this component to finish loading before it alerts the parent view that all child components are done loading.

Finally, we have our good friend, OnInitAsync(), which simulates a web service call with a Task.Delay(Delay), sets our Status, and finishes up by telling the component’s LoadState that we’re done loading:

protected override async Task OnInitAsync()
{
    // Simulate a web service call to get data
    await Task.Delay(Delay);

    Status = StringConstants.RandomText;

    // Ok - we're done loading. Notify the LoadStateManager!
    LoadState.OnLoadingComplete();
}

Loading Overlay View and Model

The final view is the LoadingOverlayView:

@inherits LoadingOverlayModel

@if (!LoadStateManager.IsLoadingComplete)
{
    <div class="loading-overlay">
        <div class="loader-container">
            <div class="dot-loader"></div>
            <div class="dot-loader"></div>
            <div class="dot-loader"></div>
        </div>   
    </div>
}

This is essentially a div that covers the main content area with a simple CSS loading animation in the middle that is displayed as long as LoadStateManager.IsLoadingComplete is false. (The CSS for this overlay is not my main focus for this post, but it can be found towards the bottom of the standard site.css that was created for my new project here for those interested.) This is the same LoadStateManager that is instantiated in the IndexModel and passed to the child components.

Here’s the corresponding model:

/// <summary>
/// Model class for <see cref="LoadingOverlayView"/>
/// </summary>
public class LoadingOverlayModel: BlazorComponent
{
    /// <summary>
    /// The <see cref="LoadStateManager"/> for this <see cref="LoadingOverlayModel"/>
    /// </summary>
    [Parameter] protected LoadStateManager LoadStateManager { get; set; }

    protected override async Task OnInitAsync()
    {
        // When LoadStateManager indicates that all components are loaded, notify
        // this component of the state change
        LoadStateManager.LoadingComplete += (sender, args) => StateHasChanged();

        await Task.CompletedTask;
    }
}

In this model, we subscribe to the LoadStateManager.LoadingComplete event, which will fire when all of the components that the LoadStateManager is monitoring have completed loading. When the event fires, we simply need to call StateHasChanged() to alert the component to update itself, since it is bound directly to LoadStateManager.IsLoadingComplete.

Helper Classes

LoadStateManager

As we’ve already mentioned, LoadStateManager manages the load state of a collection of components on a screen. Components register themselves with the LoadStateManager. The LoadStateManager keeps a collection of their ComponentLoadStates and subscribes to each one’s LoadingComplete event, triggering the LoadingComplete event when the last one completes:

/// <summary>
/// Manages the <see cref="ComponentLoadState"/>s for a particular view
/// </summary>
public class LoadStateManager
{
    private readonly ICollection<ComponentLoadState> _componentLoadStates = new List<ComponentLoadState>();
    
    /// <summary>
    /// Gets a value indicating whether all registered components are loaded
    /// </summary>
    public bool IsLoadingComplete => _componentLoadStates.All(c => c.IsLoaded);

    /// <summary>
    /// Registers an <see cref="ILoadableComponent"/> with this <see cref="LoadStateManager"/>
    /// </summary>
    /// <param name="component"></param>
    public void Register(ILoadableComponent component)
    {
        _componentLoadStates.Add(component.LoadState);

        component.LoadState.LoadingComplete += (sender, args) =>
        {
            if (IsLoadingComplete) OnLoadingComplete();
        };
    }

    /// <summary>
    /// Notifies subscribers that all loading is complete for all registered components
    /// </summary>
    public event EventHandler LoadingComplete;
    
    protected void OnLoadingComplete()
    {
        LoadingComplete?.Invoke(this, new EventArgs());
    }
}

ComponentLoadState

Finally we have ComponentLoadState, which represents the load state of an individual component. It exposes an IsLoaded property, a LoadingComplete event, and an OnLoadingComplete() method for storing and communicating the component’s load state:

/// <summary>
/// Represents the load state of an individual component
/// </summary>
public class ComponentLoadState
{
    /// <summary>
    /// Gets a value indicating whether this component is loaded
    /// </summary>
    public bool IsLoaded { get; private set; }

    /// <summary>
    /// Notifies the <see cref="LoadStateManager"/> that this component has completed loading
    /// </summary>
    public event EventHandler LoadingComplete;

    /// <summary>
    /// Invoked by the corresponding component to indicate that it has completed loading
    /// </summary>
    public void OnLoadingComplete()
    {
        IsLoaded = true;
        LoadingComplete?.Invoke(this, new EventArgs());
    }
}

Putting It All Together

Once you have your helper classes and overlay view and spinner created, it’s fairly trivial to add this functionality to a Blazor page. The page view just needs to instantiate the LoadStateManager and pass it to its children, and the child components need to define properties for the LoadStateManager and ComponentLoadState, register with the LoadStateManager, and tell their ComponentLoadState when they’re done loading.

I hope you’ve found this post helpful. You can find the full source code for this sample here. Thanks for reading!

–Jon


If you’d like to receive new posts via e-mail, consider joining the newsletter.

Experiences Converting from Client-Side to Server-Side Blazor

I’ve been using client-side Blazor for a couple of months now on one of my side projects and I’ve become a pretty big fan, because it allows me to write a modern, dynamic web app in C# with minimal JavaScript.  The Blazor docs give a nice synopsis of how this happens here:

1. C# code files and Razor files are compiled into .NET assemblies.
2. The assemblies and the .NET runtime are downloaded to the browser.
3. Blazor uses JavaScript to bootstrap the .NET runtime and configures the runtime to load required assembly references. Document object model (DOM) manipulation and browser API calls are handled by the Blazor runtime via JavaScript interoperability.

With Blazor I’m able to build single-page applications using my preferred language in a natural and enjoyable programming paradigm.

(Disclaimer:  Blazor is still considered an experimental framework by Microsoft, so proceed with caution only if you are brave, daring, and have a penchant for adventure.  You’ve been warned…)

Server-Side Blazor

In late July 2018, the Blazor team shipped release 0.5.0, which introduced server-side Blazor.  Initially I dismissed server-side Blazor, quite content to continue working completely client-side.  But as I saw that it seemed the team was putting quite a bit of emphasis on server-side Blazor and I read about the benefits it promised over client-side Blazor, I became intrigued.

The release notes do a really nice job of explaining what server-side Blazor is and how it works.  In a nutshell, Blazor was designed to be able to be run in a web worker thread separate from the main UI thread, like this:

Blazor running in a separate web worker thread in the browser.

Server-side Blazor leverages this model and stretches the communication over a network connection, using SignalR to send UI updates, raise events, and invoke JavaScript interop calls between the browser and the server, like this:

Server-side Blazor running on the server and communicating with the Browser via SignalR.

The release notes also provide a good breakdown of the benefits and downsides of the server-side model compared to the client-side model.  I’ll highlight a few benefits here that I’ve experienced from the get-go:

  • Faster app load time:  I received early feedback on my client-side Blazor app that it took a long time to load the app initially (on the order of tens of seconds).  This is understandable, as the framework has to ship the entire app, the .NET runtime, and any dependencies down to the client on load.  After switching over to server-side Blazor, my load time went down to sub-second.
  • Much better debugging:  With client-side Blazor there are ways to get basic debugging of the Blazor components working in Chrome developer tools, but it is a far cry from the rich debugging experience that we’re used to in Visual Studio.  I found myself using Debug.WriteLine(..) a lot.  With server-side Blazor, since Blazor component code is running on .NET Core on the server, the Visual Studio debugging tooling just works.
  • Feels like client-side Blazor:  Apart from the improved load time and debugging support, server-side Blazor is almost indistinguishable from client-side Blazor to both the developer and the end-user.  As we’ll see in a moment, apart from a couple of small changes at startup, you develop a server-side Blazor app just like a client-side Blazor app: composing Blazor components in exactly the same way regardless of where they will be running.  And the end-user still has a rich, interactive SPA experience.

Now the downsides to server-side Blazor are what you might expect: increased latency on every UI interaction, no offline support, and scalability limited by the number of client connections that SignalR can manage.  It’s too early for me to speak to the scalability concern, but increased latency is mostly imperceptible to the user given a decent internet connection.

Converting a Solution to Server-Side Blazor

Converting a client-side Blazor solution to server-side Blazor is much easier than you might expect and can be done in a matter of minutes.  (In fact, Suchiman has a demo showing how to dynamically switch between client-side and server-side at runtime based on a querystring parameter.)

A server-side Blazor application consists of two projects, typically named {SolutionName}.App and {SolutionName}.Server, both created by the VS tooling when you create a new server-side Blazor application.  I already had a client-side project that I had named ProjectX.Web.  The first step I took was Add > New Project… > ASP.NET Core Web Application on my solution:

Visual Studio Add New Project dialog - ASP.NET Core Web Application

After entering my desired name and clicking Next, I selected Blazor (Server-side in ASP.NET Core):

New ASP.NET Core Web Application - Blazor (Server-side in ASP.NET Core)

This added two new projects to my solution: ProjectX.App and ProjectX.Server.  Since I already had a ProjectX.Web project representing the client-side piece, I deleted the newly created ProjectX.App project.  I made ProjectX.Server the startup project and added a reference in it to my existing ProjectX.Web.

In my index.html in ProjectX.Web, I replaced this:

<script src="_framework/blazor.webassembly.js"></script>

With this:

<script src="_framework/blazor.server.js"></script>

In ProjectX.Server.Startup.cs, there are two places where it was referencing App.Startup; I changed them to use Web.Startup instead.

public void ConfigureServices(IServiceCollection services)
{
    services.AddServerSideBlazor<ProjectX.Web.Startup>();

    // snip...
}

public void Configure(IApplicationBuilder app, IHostingEnvironment env)
{
    // snip...

    app.UseServerSideBlazor<ProjectX.Web.Startup>();
}

… and that’s all there was to it!  Well, almost…

Gotchas

I ran into a few minor issues that you may or may not encounter, depending on what you’re doing in your application.

  1. Blank Screen After App Load

    After making the changes described above, I built and ran my project.  The loading screen displayed as expected, but then it just displayed a blank white screen with no errors logged to the console.

    Following the guidance in this thread, I tried removing the global.json file that was created when I created my client-side project, which pinned the SDK version to 2.1.3xx.  This didn’t help in my case.  I then checked my .NET Core SDK version by running dotnet --info at the VS Developer Command Prompt.  I was running 2.1.500-preview-009335.  I deleted the preview version of the SDK and then my app started loading and running.

  2. JS Errors Invoking Blazor Component Methods using .invokeMethod

    With my app running now, I started testing some of the functionality.  I have a couple of controls that invoke methods on my Blazor components via JavaScript that I found were no longer working.  A quick peek at the Chrome dev tools console showed the culprit (thank you, Blazor team, for good error messages!):
    Uncaught Error: The current dispatcher does not support synchronous calls from JS to .NET. Use invokeMethodAsync instead. (from blazor.server.js)

    Easy fix: I just changed the couple of spots where I was using .invokeMethod(..) to invoke methods on my Blazor components to .invokeMethodAsync(..), and most of them started working.

  3. JSInvokable Methods Not Marked Virtual

    Even after switching to use .invokeMethodAsync(..), I still had a couple of Blazor component methods that were failing to be invoked from my JS.  I found that the difference between the ones that worked and the ones that didn’t was the virtual keyword on the method declaration.  I added virtual to the methods that weren’t executing and they started working.

That said, sitting here a few days later, I tried removing the virtual keyword from those JSInvokable methods again, and they continue to work.  So I’m not sure why this worked in the first place or if I had another change at that time that actually fixed it (I don’t think so).  Your mileage may vary…

Wrapping Up

Switching my solution from client-side to server-side Blazor was a piece of cake, and the benefits were well worth it.  I can now see the reasons for the recent buzz around server-side Blazor.  If I get to the point where I need to support a larger number of concurrent SignalR connections, I’ll start using Azure SignalR Service and route the communication through it, as described in the Blazor 0.6.0 release notes, but for now I’m content to run it through my app service.


If you’d like to receive new posts via e-mail, consider joining the newsletter.

Adding a Loading Spinner to a Button with Blazor

I’ve been playing with Blazor for several weeks now, and it’s fantastic.  For the uninitiated, Blazor allows you to write .NET code that runs in the browser through the power of WebAssembly (Wasm).  This means that the amount of JavaScript you end up having to write to create a dynamic web UI is significantly reduced, or even eliminated (check out this Tetris clone @mistermag00 wrote, running completely on Blazor with ZERO JavaScript).

Today my objective is much less ambitious than attempting to recreate a classic 1980s-era video game in the browser:  I just want to change the state of a search button to indicate to the user that something is happening while we retrieve their results.  Here’s what we’re shooting for:

Let’s start with the Razor view code:

@if (IsSearching)
{
    <button class="btn btn-primary float-right search disabled" disabled><span class='fa-left fas fa-sync-alt spinning'></span>Searching...</button>
}
else
{
    <button class="btn btn-primary btn-outline-primary float-right search" onclick="@OnSearchAsync">Search</button>
}

Here we have two versions of the button, and we determine which one to display based on an IsSearching boolean property in the Blazor component code.

The first button represents the state of the button while we’re searching.  We set the visual state to disabled with the disabled CSS class and disable clicks with the disabled attribute.  Both buttons have a custom search CSS class which just sets the width of the button so we don’t have a jarring width change when transitioning between states.  I’m using a Font Awesome icon for my spinning icon (so don’t forget the link to their CSS in your HEAD), and animating it with a couple of custom CSS classes that we’ll look at in a minute.

The second button represents the state of the button when we’re not searching.  It has the onclick handler calling into the OnSearchAsync method in my Blazor component code.

Speaking of my Blazor component code, let’s check it out…

public class SearchCriteriaBase : BlazorComponent
{
    protected bool IsSearching { get; set; }

    public async Task OnSearchAsync()
    {
        IsSearching = true;
        StateHasChanged();

        if (CheckIsValid())
        {
            // Make my long-running web service call and do something with the results
        }

        IsSearching = false;
        StateHasChanged();
    }
}

All that’s needed is a simple IsSearching property that I can set to switch between button states while searching.  I just need to remember to call StateHasChanged() to let Blazor know it needs to update the DOM.

And finally, here’s the custom CSS to make the animation happen:

.fas.spinning {
    animation: spin 1s infinite linear;
    -webkit-animation: spin2 1s infinite linear;
}

@keyframes spin {
    from {
        transform: scale(1) rotate(0deg);
    }

    to {
        transform: scale(1) rotate(360deg);
    }
}

@-webkit-keyframes spin2 {
    from {
        -webkit-transform: rotate(0deg);
    }

    to {
        -webkit-transform: rotate(360deg);
    }
}

.fa-left {
    margin-right: 7px;
}

.btn.search {
    width: 10rem;
}

And that’s it. A simple button state transition with no JavaScript, just the way I like it!

Testing Stripe Webhooks in an ASP.NET Core Project

I’m using Stripe for subscription management and payment processing for the SaaS side project I’m currently working on.  Stripe offers webhook functionality that allows you to register callback endpoints on your API, which Stripe will call whenever any one of numerous specified events occurs on their side.

So, for example, in my SaaS I’m offering customers a free 30-day trial period at the beginning of their subscription before they have to provide credit card information.  I can register for the customer.subscription.trial_will_end event with an endpoint of my choice on my API, and Stripe will call that endpoint three days before the end of any of my customers’ trial periods with details about the given customer and their subscription.  I’ll have logic on my side to check to see if we have a credit card for that customer yet, and, if not, send them a friendly e-mail reminding them that their trial is about to expire and they need to enter a credit card if they’d like to continue to use the service.
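To make that concrete, here’s a rough sketch of what a handler for that event might look like. This is illustrative only: the StripeEvent type is reduced to a stub (the real Stripe.NET event carries the full customer/subscription payload), and the two delegates stand in for whatever card-lookup and email abstractions the real service uses — they’re assumptions, not types from my actual project.

```csharp
using System;
using System.Threading.Tasks;

// Reduced stand-in for the event Stripe posts to the webhook endpoint.
public class StripeEvent
{
    public string Type { get; set; }
    public string CustomerId { get; set; }
}

// Sketch of a handler for customer.subscription.trial_will_end.
public class TrialEndingHandler
{
    private readonly Func<string, Task<bool>> _hasCardOnFile;
    private readonly Func<string, Task> _sendTrialEndingReminder;

    public TrialEndingHandler(
        Func<string, Task<bool>> hasCardOnFile,
        Func<string, Task> sendTrialEndingReminder)
    {
        _hasCardOnFile = hasCardOnFile;
        _sendTrialEndingReminder = sendTrialEndingReminder;
    }

    public async Task HandleAsync(StripeEvent stripeEvent)
    {
        if (stripeEvent.Type != "customer.subscription.trial_will_end")
            return; // some other event type; not ours to handle

        // Only remind customers who haven't entered a card yet.
        if (!await _hasCardOnFile(stripeEvent.CustomerId))
            await _sendTrialEndingReminder(stripeEvent.CustomerId);
    }
}
```

In the real service the delegates would be repository and email-service dependencies injected the same way as everything else.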

Stripe offers the ability through their dashboard to send test events to a webhook endpoint.  As I worked on my integration this past week, I ran into a couple of small issues in getting the test events to reach my service running locally on my machine.  So here’s a quick summary of what it took to get the messages flowing.

StripeEventsController

First we need a controller with an endpoint that Stripe will call into.  I’m using a single endpoint to catch all events.  It then delegates the processing of events to an IStripeEventProcessor.  Here’s my endpoint:

[code lang="csharp"]

[Route("api/[controller]")]
public class StripeEventsController : Controller
{
    private readonly IStripeEventProcessor _stripeEventProcessor;
    private readonly IEnvironmentSettings _environmentSettings;

    public StripeEventsController(
        IStripeEventProcessor stripeEventProcessor,
        IEnvironmentSettings environmentSettings)
    {
        _stripeEventProcessor = stripeEventProcessor ?? throw new ArgumentNullException(nameof(stripeEventProcessor));
        _environmentSettings = environmentSettings ?? throw new ArgumentNullException(nameof(environmentSettings));
    }

    [HttpPost]
    public async Task<IActionResult> IndexAsync()
    {
        var json = await new StreamReader(HttpContext.Request.Body).ReadToEndAsync();

        try
        {
            var stripeEvent = StripeEventUtility.ConstructEvent(json,
                Request.Headers["Stripe-Signature"], _environmentSettings.StripeConfiguration.Value.WebhookSigningSecret);

            await _stripeEventProcessor.ProcessAsync(stripeEvent);

            return Ok();
        }
        catch (StripeException)
        {
            return BadRequest();
        }
    }
}

[/code]

Note that I’m passing a WebhookSigningSecret to StripeEventUtility.ConstructEvent(...); this verifies that the event was actually sent by Stripe by checking the “Stripe-Signature” header value.  The webhook signing secret can be obtained from Stripe > Developers > Webhooks > select endpoint > Signing secret.
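For reference, the controller pulls that secret out of an IEnvironmentSettings/StripeConfiguration settings object. The corresponding appsettings fragment might look something like the following — the section and property names here are assumptions for illustration and need to match however your own settings classes are bound:

```json
{
  "StripeConfiguration": {
    "WebhookSigningSecret": "whsec_your_signing_secret_here"
  }
}
```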

Turn Off HTTPS Redirection in Development

I’m using HTTPS Redirection in my project to redirect any non-secure requests to their corresponding HTTPS endpoint.  This caused me to receive an error whenever sending a test event: “Test webhook error: 307.”

Stripe expects to receive a 200-series status code back on all webhook calls, so the 307 Temporary Redirect status was a problem.  To resolve this, I modified my Startup.cs to only use HTTPS Redirection when not in development mode, like so:

[code lang="csharp"]

public void Configure(IApplicationBuilder app, IHostingEnvironment env)
{
    if (env.IsDevelopment())
    {
        // snip…
    }
    else
    {
        // snip…
        app.UseHttpsRedirection(); // <- Moved from outside to inside else block to allow ngrok tunneling for testing Stripe webhooks
    }
    // snip…
    app.UseMvc();
}

[/code]

ngrok

In order for Stripe to send test webhook events to our service, it needs to be able to connect to it.  ngrok is a neat little utility that allows you to expose your locally running web app or service via a public URL.  Download ngrok and install it, following the four getting started steps here.

When we’re ready to test, we’ll start up ngrok with the following command (where 64768 is the port number of your service):

[code]
ngrok http 64768 -host-header="localhost:64768"
[/code]

It’s important to note that my service is configured to be accessible via the following bindings:

[code]
<binding protocol="http" bindingInformation="*:64768:localhost" />
<binding protocol="https" bindingInformation="*:44358:localhost" />
[/code]

You want to specify the non-secure (non-HTTPS) port when starting up ngrok.  It’s also important to specify the host-header flag; if you don’t, you’ll get a 400 Bad Request on all of your test calls.

Upon starting ngrok, you’ll see a status screen indicating (in my case) that it is forwarding https://96c1bf3b.ngrok.io to my localhost:64768.

Configure Stripe Webhook Endpoint and Test

Finally, you need to set up a Stripe webhook that points to your Stripe event handler endpoint exposed publicly via ngrok.  This is done by navigating to the Stripe dashboard > Webhooks and clicking “Add endpoint”.  In my case, the endpoint is the ngrok forwarding URL with the controller route appended (https://96c1bf3b.ngrok.io/api/stripeevents).

Now we test our endpoint by clicking “Send test webhook.”  If all goes as planned, you’ll see a successful response.

You can also fire up http://localhost:4040/inspect/http in your browser and see a nice dashboard where you can inspect and replay all requests made through the ngrok public endpoint.

Congratulations!  With just a few steps you’re now successfully sending test Stripe webhook events to your service running locally.