Create/Edit Screens, Bindable Layouts, and More (Update 2019.07.21)


Since my last update I’ve been focused on building the Create Alert Definition screen in the Stock Alerts mobile app project. It’s been slow but steady progress, and I’m happy to report that both the Create Alert Definition and Edit Alert Definition screens are now functional. (Technically they’re both the same screen, but I’ve been treating them separately from a work management perspective.)

For those just tuning in, these past posts will bring you up to speed on the project, the features I’m building, the infrastructure, etc…

Create Alert Definition Screen

The MVP version of the Create Alert Definition screen is fairly simple – you search for and select a stock, enter one or more criteria for the alert, and click Save. I talked about the search functionality a couple of weeks ago, so the next task was to build the alert criteria.

Create Alert Definition Screen

The API supports complex, multi-level composite criteria, but for this version I’m building a UI that supports just one level of criteria combined with an AND or OR Boolean operator, as demonstrated in the GIF, to keep things simple.



To build the criteria section, I just needed a couple of toggle buttons to switch the composite operator between AND and OR, and a control to list 0..n criteria rules.

For the AND/OR selector, I chose to use the SegmentedButtonGroup control from the FreshEssentials library, which provides a look and feel similar to the iOS segmented control.

And/Or Selector

Integrating it into the view was fairly straightforward:

<freshEssentials:SegmentedButtonGroup OnColor="{StaticResource DarkGrayColor}" OffColor="{StaticResource WhiteColor}" 
                                      SelectedIndex="{Binding SelectedOperatorButtonIndex, Mode=TwoWay}" 
                                      HorizontalOptions="Center" HeightRequest="30" WidthRequest="120" CornerRadius="10">
        <freshEssentials:SegmentedButton Title="AND"></freshEssentials:SegmentedButton>
        <freshEssentials:SegmentedButton Title="OR"></freshEssentials:SegmentedButton>
</freshEssentials:SegmentedButtonGroup>

I bind the SelectedIndex to a SelectedOperatorButtonIndex property on the view model.

public int SelectedOperatorButtonIndex
{
    get => (int)(_alertDefinition.RootCriteria?.Operator ?? CriteriaOperator.And);
    set => _alertDefinition.RootCriteria.Operator = (CriteriaOperator)Enum.ToObject(typeof(CriteriaOperator), value);
}

I considered creating a converter to convert between the CriteriaOperator enum value and its corresponding button index, but this does the job and is fine for now.
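Since the enum's underlying values line up with the button indices, the cast works in both directions. Here's a quick framework-free sketch of that mapping; note that the CriteriaOperator values shown are assumptions, not necessarily the app's actual definition:

```csharp
using System;

// Assumed values — the real CriteriaOperator enum may be defined differently.
public enum CriteriaOperator { And = 0, Or = 1 }

public static class OperatorIndexMapping
{
    // enum -> segmented-button index
    public static int ToIndex(CriteriaOperator op) => (int)op;

    // segmented-button index -> enum, the same conversion as the property setter
    public static CriteriaOperator ToOperator(int index) =>
        (CriteriaOperator)Enum.ToObject(typeof(CriteriaOperator), index);
}
```

If the enum values ever stop matching the button order, that's the point at which a proper value converter would earn its keep.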

Bindable Layouts

Working with the individual criteria would require a little more work.

To support the dynamic adding/removing of criteria, I knew I’d need a control like WPF’s ItemsControl, which binds to a collection of items on the data context and renders a view for each item. I would need something similar for Xamarin.Forms.

I was pleased to learn that Xamarin.Forms now has bindable layouts, which were introduced sometime after the last time I did any Xamarin.Forms work, a couple of years ago. Bindable layouts allow the developer to bind a layout control (any class that derives from Layout<T>, like StackLayout, Grid, etc…) to a collection on the data context to control how the layout is populated, using data templates to define how the items are rendered.

The data binding of bindable layouts is familiar and consistent with other Xamarin.Forms data-bound controls. Let’s see how it works in the EditAlertDefinitionPage view. Here’s the StackLayout that contains the list of criteria:

<StackLayout BindableLayout.ItemsSource="{Binding CriteriaCollection}">
    <BindableLayout.ItemTemplate>
        <DataTemplate>
            <alertDefinitions:CriteriaView BindingContext="{Binding}"></alertDefinitions:CriteriaView>
        </DataTemplate>
    </BindableLayout.ItemTemplate>
</StackLayout>

I only have one DataTemplate defined, but the ability to leverage a DataTemplateSelector to render different DataTemplates will likely come in handy in the future as I add more types of criteria.

CriteriaCollection is an ObservableCollection (observable, since the UI needs to update when the user adds or removes a criteria) of CriteriaViewModels on the view model:

public ObservableCollection<CriteriaViewModel> CriteriaCollection { get; set; } = new ObservableCollection<CriteriaViewModel>();
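ObservableCollection implements INotifyCollectionChanged, which is what lets BindableLayout add and remove child views as items come and go. A minimal, framework-free illustration of the events it raises:

```csharp
using System.Collections.Generic;
using System.Collections.ObjectModel;

public static class ObservableCollectionDemo
{
    public static List<string> Run()
    {
        var criteria = new ObservableCollection<string>();
        var actions = new List<string>();

        // BindableLayout subscribes to this same event to keep the layout in sync.
        criteria.CollectionChanged += (s, e) => actions.Add(e.Action.ToString());

        criteria.Add("Price below 150");
        criteria.Add("Daily % change above 5");
        criteria.RemoveAt(0);

        return actions;   // one entry per change notification
    }
}
```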

CriteriaView is a custom view that simply has a single-row `Grid` to hold the individual controls for each criteria.

Adding and Removing Criteria

The adding and removing of the criteria is handled by methods on the view model:

public ICommand AddCriteriaCommand => new Command(ExecuteAddCriteria);

private void ExecuteAddCriteria()
{
    AddCriteria(new CriteriaViewModel(new AlertCriteria(), NavigationService, Logger));
}

private void AddCriteria(CriteriaViewModel criteriaViewModel)
{
    criteriaViewModel.RemoveCriteria += RemoveCriteria;
    CriteriaCollection.Add(criteriaViewModel);
}

private void RemoveCriteria(object sender, EventArgs e)
{
    var criteriaViewModel = sender as CriteriaViewModel;
    criteriaViewModel.RemoveCriteria -= RemoveCriteria;
    CriteriaCollection.Remove(criteriaViewModel);
}

Since the button to remove a criteria is on the individual criteria views, the ICommand for removal is located on the CriteriaViewModel, which raises an event to alert the parent EditAlertDefinitionPageViewModel to remove the specific criteria from its collection:

public event EventHandler RemoveCriteria;

public ICommand RemoveCriteriaCommand => new Command(ExecuteRemoveCriteria);

private void ExecuteRemoveCriteria(object obj)
{
    RemoveCriteria?.Invoke(this, EventArgs.Empty);
}

I won’t go into too much more detail regarding the specifics of managing the criteria. It’s mostly typical MVVM.
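To see the wiring in isolation, here's a stripped-down sketch of the subscribe-on-add / unsubscribe-on-remove pattern, using hypothetical stand-in classes with no Xamarin.Forms dependencies:

```csharp
using System;
using System.Collections.ObjectModel;

// Hypothetical stand-in for CriteriaViewModel: raises an event when its
// remove command would execute.
public class ChildViewModel
{
    public event EventHandler RemoveRequested;
    public void RequestRemove() => RemoveRequested?.Invoke(this, EventArgs.Empty);
}

// Hypothetical stand-in for the parent page view model.
public class ParentViewModel
{
    public ObservableCollection<ChildViewModel> Children { get; } = new ObservableCollection<ChildViewModel>();

    public void AddChild(ChildViewModel child)
    {
        child.RemoveRequested += OnRemoveRequested;   // subscribe on add
        Children.Add(child);
    }

    private void OnRemoveRequested(object sender, EventArgs e)
    {
        var child = (ChildViewModel)sender;
        child.RemoveRequested -= OnRemoveRequested;   // unsubscribe to avoid leaks
        Children.Remove(child);
    }
}
```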

Saving the Alert Definition

I added simple validation to the view models we've been looking at; it fires when the SaveCommand is executed. If any validation on the alert definition or a child criteria fails, I display a red message at the bottom of the screen and prevent the save from occurring. It's very basic, but good enough for now.

The act of saving the alert definition by executing a POST request to the Stock Alerts API is handled by the AlertDefinitionsService, which is a wrapper around an HttpClient for communicating with the alert-definition-related endpoints on the API.

Edit Alert Definition

Once I had the Create Alert Definition screen working and creating alert definitions, making the modifications to enable the editing and saving of existing alert definitions was fairly straightforward.

Here’s what it took:

  • Make alert definitions on the AlertsPageViewModel selectable, and bind the SelectedItem to the SelectedAlertDefinition property on the view model, which navigates to the EditAlertDefinitionPage, passing the selected AlertDefinition as a navigation parameter (I'm using Prism's navigation service).
  • Modify EditAlertDefinitionPageViewModel.OnNavigatedTo(..) to check for a SelectedAlertDefinition navigation parameter, and if found, call private InitializeForEdit() to set various properties appropriately for edit rather than create mode.
  • Make minor adjustments to CriteriaViewModel appropriate for edit mode.

Edit Alert Definition Screen

At some point I’ll fix the orange color of the selected item in the list view and the sizing bug of the “remove” icon on individual criteria, but not today. We’re functioning, and that’s good enough for now.

Other UI Stuff

I also spent some time this week tweaking the colors and layouts of some of the app’s screens, added a loading/busy indicator to several screens, and made a few other minor UI adjustments.

I’m trying to keep the UI simple and clean.

I’m not a designer, and I’m much stronger on API and backend development, but I do the best I can with the pixels.

Wrapping Up

Well that’s all I’ve got for this week! This coming week I’ll continue to work on the other screens in the app. It shouldn’t take too long to get the app to MVP state.

I’m also working on a post about how I build the criteria rules evaluation engine for Stock Alerts using the specification pattern, so keep an eye out for that in the next week or two.

Thanks for reading!


If you’d like to receive new posts via e-mail, consider joining the newsletter.

Moving from HTTP-Triggered Azure Functions to Web API (Update 2019.07.09)

Fireworks over Chicago

This past weekend was a long one due to the Fourth of July, and despite a weekend filled with cookouts, swimming, fireworks, an anniversary date night, and a trip to St. Louis, I was able to knock off an important task on my side project's TODO list.

Refactoring HTTP-Triggered Azure Functions into a Web API Service

When I started work on Stock Alerts, I began with the Azure Functions for retrieving quotes and evaluating alert definitions, because they were the most interesting pieces to me.

I then started thinking about the API endpoints that I’d need to support a mobile client. I figured I wouldn’t need too many endpoints to support the very minimal functionality that I was aiming to implement for MVP, and I already had the Azure Functions project, so I figured I’d just stand up a few HTTP-triggered functions for my API. After all, I could always refactor them into their own proper web API project later.

It was midway through implementing authentication that I realized that rather than continuing to try to fit the endpoints that I needed into Azure Functions, it made sense to move the HTTP-triggered functions into their own web API project with a separate app service in Azure sooner rather than later.

So that’s what I did.

I performed the refactor Thursday/Friday, and fiddled with the build and release pipelines in Azure DevOps in my free moments on Friday/Saturday. Monday morning I switched the app to use the new API.

Thankfully the refactoring of the code was fairly simple because my functions, much like good controller methods, were thin – they simply received the HTTP request, deserialized it, performed any necessary request-level validation, and delegated processing to the domain layer which lives in a separate assembly. The controllers in the Web API project that I created ended up being very similar.

I’m now closer to the MVP infrastructure that I mentioned a week ago, depicted below (I’m just missing StockAlerts.WebApp now):

Stock Alerts MVP Infrastructure Resources

I love the feeling of checking items off of my TODO list.

Why I Chose Web API Over HTTP-Triggered Azure Functions

So why did I choose to move my API methods from my Functions project to their own Web API project and service?

A few key reasons:

  1. Inability to configure middleware for Azure Functions
  2. Preference for controller methods over Azure Functions
  3. API usage patterns

Let’s talk about these one-by-one…

ASP.NET Core Middleware

ASP.NET Core gives the developer the ability to plug logic into the request/response pipeline. This logic is referred to as middleware. Andrew Lock has a great post on what it is and how it works in an ASP.NET Core web app.

ASP.NET Core has default middleware that it executes during the normal course of processing a request and response, but it also allows the developer to configure at startup what additional middleware should execute with each request, including custom middleware. Middleware is generally used for performing cross-cutting tasks – things like logging, handling exceptions, rendering views, and performing authentication, to name a few.

Early in my adventures into Azure Functions I learned that the developer doesn’t have the ability to configure the middleware that executes during an HTTP-triggered function invocation. Sure, some folks have rolled their own middleware pattern in Azure Functions (like here and here), but I didn’t want to invest that much effort into building something that an ASP.NET Core Web API gives me for free.

My custom middleware needs aren’t too many: for a typical web API I add custom error-handling middleware and enable authentication middleware.

Though I was able to implement workarounds to accomplish these tasks in my functions, they weren't nearly as clean as accomplishing the same thing with middleware in an ASP.NET Core Web API.

Error Handling

My preferred approach to handling exceptions on recent Web API projects has been to create an ErrorHandlingMiddleware class that catches any unhandled exception during the processing of the request and turns it into the appropriate HTTP response. The code can be found here. Adding it to the pipeline is as simple as one line in Startup.cs:

app.UseMiddleware<ErrorHandlingMiddleware>();
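For reference, error-handling middleware of this kind generally takes the following shape (a hedged sketch, not the project's actual class, which lives in the repo):

```csharp
// Sketch only: catches unhandled exceptions from the rest of the pipeline
// and turns them into an HTTP error response.
public class ErrorHandlingMiddleware
{
    private readonly RequestDelegate _next;

    public ErrorHandlingMiddleware(RequestDelegate next) => _next = next;

    public async Task Invoke(HttpContext context)
    {
        try
        {
            await _next(context);   // run the remaining middleware and the endpoint
        }
        catch (Exception ex)
        {
            // Real implementations map exception types to status codes and
            // write a serialized error body; this is the minimal version.
            context.Response.StatusCode = StatusCodes.Status500InternalServerError;
            context.Response.ContentType = "application/json";
            await context.Response.WriteAsync("{\"error\":\"" + ex.Message + "\"}");
        }
    }
}
```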
To accomplish similar functionality in my Azure Functions required an additional NuGet package (PostSharp), a custom attribute, and a [HandleExceptions] attribute on top of all of my functions. Not terrible, but I'd rather not have the extra package and have to remember to manually decorate my functions to get error handling.

Authentication
To turn on token-based authentication/authorization for an ASP.NET Core Web API endpoint, you must configure the authentication, JWT bearer, and authorization options in Startup.cs, add the authentication middleware with app.UseAuthentication();, and decorate your controller methods with the [Authorize] attribute.
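In Startup.cs, that configuration looks roughly like this. This is a sketch assuming the ASP.NET Core 2.x pipeline; the issuer, audience, and signing-key configuration keys are placeholders, not the project's actual values:

```csharp
public void ConfigureServices(IServiceCollection services)
{
    services.AddAuthentication(JwtBearerDefaults.AuthenticationScheme)
        .AddJwtBearer(options =>
        {
            options.TokenValidationParameters = new TokenValidationParameters
            {
                ValidateIssuer = true,
                ValidIssuer = Configuration["Auth:Issuer"],          // placeholder key
                ValidateAudience = true,
                ValidAudience = Configuration["Auth:Audience"],      // placeholder key
                ValidateIssuerSigningKey = true,
                IssuerSigningKey = new SymmetricSecurityKey(
                    Encoding.UTF8.GetBytes(Configuration["Auth:SigningKey"]))
            };
        });

    services.AddMvc();
}

public void Configure(IApplicationBuilder app)
{
    app.UseAuthentication();   // must run before MVC handles the request
    app.UseMvc();
}
```

With that in place, protecting an endpoint is just a matter of the [Authorize] attribute on the controller or action.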

To implement token-based authentication/authorization on my Azure Functions, there wasn’t an easy way for me to simply decorate a function with an [Authorize] attribute and let the framework make sure that the user could invoke the function. Instead, for each function I had to use AuthorizationLevel.Anonymous and manually check for a valid ClaimsPrincipal and return new UnauthorizedResult() if there wasn’t one.

It worked, but it wasn’t pretty.

Beyond that, I had trouble getting it to add the Token-Expired header on responses when the auth token has expired. After switching over to Web API, this just works with the configuration I have in place.
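For what it's worth, the common way to get that header is through JwtBearerEvents when configuring the bearer options. A sketch of the pattern, not necessarily the project's exact configuration:

```csharp
services.AddAuthentication(JwtBearerDefaults.AuthenticationScheme)
    .AddJwtBearer(options =>
    {
        options.Events = new JwtBearerEvents
        {
            OnAuthenticationFailed = context =>
            {
                if (context.Exception is SecurityTokenExpiredException)
                {
                    // Lets the client distinguish "expired" from "invalid"
                    // and kick off its refresh-token flow.
                    context.Response.Headers.Add("Token-Expired", "true");
                }
                return Task.CompletedTask;
            }
        };
    });
```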

Prefer Controllers Over Functions

As I began to add multiple HTTP-triggered functions that manipulated the same server resource, I grouped them into a single file per resource, similar to how controllers are often organized. But even though the methods were grouped like controllers, there were significant differences at the code level that cause me to prefer the controller implementations over the Azure Functions implementations.

Let’s compare the two side-by-side…

Here’s an Azure Function for getting an alert definition by ID:

public async Task<IActionResult> GetAlertDefinitionAsync(
    [HttpTrigger(AuthorizationLevel.Anonymous, "get", Route = "alert-definitions/{alertDefinitionId}")] HttpRequest req,
    string alertDefinitionId)
{
    var claimsPrincipal = _authService.GetAuthenticatedPrincipal(req);
    if (claimsPrincipal == null)
        return new UnauthorizedResult();

    var alertDefinition = await _alertDefinitionsService.GetAlertDefinitionAsync(new Guid(alertDefinitionId));

    var resource = _mapper.Map<Resources.Model.AlertDefinition>(alertDefinition);
    return new OkObjectResult(resource);
}

And here’s the analogous controller method for getting an alert definition by ID:

public async Task<IActionResult> GetAsync(Guid alertDefinitionId)
{
    var alertDefinition = await _alertDefinitionsService.GetAlertDefinitionAsync(alertDefinitionId);

    var resource = _mapper.Map<Resources.Model.AlertDefinition>(alertDefinition);
    return new OkObjectResult(resource);
}


A few differences stand out:

  1. The controller method is shorter.
  2. The controller method is able to accept ID parameters as GUIDs, avoiding manual conversion.
  3. The controller method declares the HTTP verb and route more cleanly (in my opinion) as method attributes rather than a parameter attribute.
  4. Because I’ve decorated the controller with [Authorize], the controller method avoids the manual authorization logic.
  5. Because I’m using the ErrorHandlingMiddleware, the controller method avoids the extra [HandleExceptions] attribute.
  6. Not illustrated in this example, but the controller method accepts request bodies as entities, avoiding having to manually deserialize the request body from the HttpRequest.

From a purely code aesthetics perspective, I just prefer controller methods over HTTP-triggered functions.

API Usage Pattern

I expect the usage pattern of my API to be fairly uniform across the available endpoints and the traffic to ebb and flow predictably with the amount of users using the mobile app. I don’t expect large spikes in traffic to specific endpoints where I would need to be able to scale individual endpoints; if there are large spikes due to a sudden increase in the number of users, I’ll want to scale the whole web API service.

While HTTP-triggered Azure Functions may be the right choice for other use cases, the anticipated usage pattern of the Stock Alerts API aligns much more closely with a Web API service.

I’m still using Azure Functions for pulling stock quotes, evaluating alert definitions, and sending out notifications. Azure Functions are well-suited for these use cases, for the reasons I described here.

Wrapping Up

With this change behind me, I’m ready to continue moving forward working on the mobile app. My mornings the rest of this week will be focused on building the Create Alert Definition screen.

Here’s the repository for the project if you’d like to follow along:

Thanks for reading!



Stock Alerts Update 2019.07.03

I’d been meaning to get this update out over the weekend, but a stomach bug visited our house and threw off my schedule. I’d like to get these updates out about once a week going forward, but since this is a side project and I’m working on it for fun in my off hours, I’m not going to sweat it too much.

Also, as I mentioned in my first post, these updates will be pretty informal and unpolished. I just want to talk in detail about some of the things I did in the past week on the project, and what I plan to do in the coming week.

Last Week


With my announcement last weekend that I'll be building Stock Alerts in public, I was compelled to write a few extra posts to lay some of the groundwork for the project. I wrote the introductory post, spoke about the features, and laid out the infrastructure.

Naturally, this took some of my time away from development, but I think it was time well spent.

I have other posts that I want to write in the future to cover some of the work I’ve already done (particularly in the API), and I’ll try to work those in in the coming weeks without sacrificing too much dev time.

Create Alert Definition – Stock Search

I’ve been working on the Create Alert Definition screen in the Stock Alerts mobile app. This is where the user defines an alert, including selecting the stock and defining the alert criteria. Specifically, I was focused on the stock selection functionality last week (we’ll talk more about building the alert criteria in a couple weeks).

Here’s a wireframe for these screens:

Stock Alerts Create Alert Definition screen wireframes

I want the stock search feature to function like a typeahead search, allowing the user to type as much or as little of the stock symbol or company name as desired, and when they pause, the system retrieves the search results.

I already had an API endpoint for finding stocks based on a search string; I just needed to add CancellationToken support, which was as simple as adding it to the Azure function signature and plumbing it down to the data access layer:

public async Task<IActionResult> FindStocksAsync(
    [HttpTrigger(AuthorizationLevel.Anonymous, "get", Route = "stocks")] HttpRequest req,
    CancellationToken cancellationToken,
    ILogger log)

Implementing search on the mobile app side took a bit more work…

Thinking about this from an implementation perspective, my StockSearchPageViewModel needs a SearchString property that receives the input from the textbox, waits a second, and, if there's no additional input, executes the web request to get the search results from the API, which populates a collection of results on the view model to which the view is bound. If additional input is received from the user while the web request is executing, we need to cancel it and issue a new request.

I can’t (shouldn’t) implement all of this in the SearchString property setter, because you can’t (and shouldn’t want to) make a property setter async. Property setters should be fast and non-blocking. And yet I want to be able to simply bind the Text property of my search box to a property on my view model.

I ended up using NotifyTask from Stephen Cleary’s Nito.Mvvm.Async library, which contains helpers for working with async methods in MVVM. NotifyTask is “essentially an INotifyPropertyChanged wrapper for Task/Task<T>,” as Stephen writes in this SO answer, which helped me quite a bit (the answer refers to NotifyTaskCompletion, which was replaced by NotifyTask).

So here’s my StockSearchPageViewModel implementation:

public class StockSearchPageViewModel : ViewModelBase
{
    private readonly IStocksService _stocksService;
    private CancellationTokenSource _searchCancellationTokenSource;

    public StockSearchPageViewModel(
        IStocksService stocksService,
        INavigationService navigationService, 
        ILogger logger) : base(navigationService, logger)
    {
        _stocksService = stocksService ?? throw new ArgumentNullException(nameof(stocksService));
    }

    private string _searchString;
    public string SearchString
    {
        get => _searchString;
        set
        {
            _searchString = value;

            // Cancel any in-flight search before starting a new one
            var newSearchCancellationTokenSource = new CancellationTokenSource();
            if (_searchCancellationTokenSource != null)
                _searchCancellationTokenSource.Cancel();
            _searchCancellationTokenSource = newSearchCancellationTokenSource;

            Stocks = NotifyTask.Create(SearchStocksAsync(_searchCancellationTokenSource));
        }
    }

    private NotifyTask<List<Stock>> _stocks;
    public NotifyTask<List<Stock>> Stocks
    {
        get => _stocks;
        set
        {
            _stocks = value;
            RaisePropertyChanged(nameof(Stocks));
        }
    }

    private Stock _stock;
    public Stock SelectedStock
    {
        get => _stock;
        set
        {
            _stock = value;
            var navigationParams = new NavigationParameters();
            navigationParams.Add(NavigationParameterKeys.SelectedStock, _stock);
            NavigationService.GoBackAsync(navigationParams);
        }
    }

    private async Task<List<Stock>> SearchStocksAsync(CancellationTokenSource searchCancellationTokenSource)
    {
        if (SearchString.Length >= 1)
        {
            await Task.Delay(1000, searchCancellationTokenSource.Token);
            if (!searchCancellationTokenSource.IsCancellationRequested)
            {
                var stocks = await _stocksService.FindStocksAsync(SearchString, searchCancellationTokenSource.Token);
                _searchCancellationTokenSource = null;
                return stocks.ToList();
            }
        }

        return new List<Stock>();
    }
}

The view model creates and manages the cancellation token source, and cancels it when necessary, in the SearchString property setter. This is also where we create the NotifyTask, passing it a delegate for the SearchStocksAsync(..) method, which delays one second and calls the search API. The results of the SearchStocksAsync(..) method call are exposed as NotifyTask<List<Stock>> by the Stocks property.
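Stripped of the Xamarin.Forms and Prism machinery, the debounce-and-cancel core of that setter can be sketched as a small framework-free class (names and the injected search delegate are illustrative, not from the app):

```csharp
using System;
using System.Threading;
using System.Threading.Tasks;

// Minimal sketch of the debounce-and-cancel pattern used in the SearchString setter.
public class DebouncedSearch
{
    private CancellationTokenSource _cts;
    private readonly Func<string, CancellationToken, Task<string>> _search;
    private readonly int _delayMs;

    public DebouncedSearch(Func<string, CancellationToken, Task<string>> search, int delayMs = 1000)
    {
        _search = search;
        _delayMs = delayMs;
    }

    // Each keystroke cancels the previous pending search and starts a new one.
    public async Task<string> OnInputAsync(string text)
    {
        _cts?.Cancel();
        var cts = _cts = new CancellationTokenSource();
        try
        {
            await Task.Delay(_delayMs, cts.Token);   // wait for the user to pause
            return await _search(text, cts.Token);   // then issue the request
        }
        catch (TaskCanceledException)
        {
            return null;                             // superseded by newer input
        }
    }
}
```

Typing "AA" and then "AAPL" in quick succession cancels the first pending search and only the second one reaches the API.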

In my StockSearchPage view, I can simply bind to the properties, like so:

<SearchBar Grid.Row="1" Placeholder="Start typing ticker or company name" Text="{Binding SearchString, Mode=TwoWay}"></SearchBar>
<ListView Grid.Row="2" ItemsSource="{Binding Stocks.Result}" SelectedItem="{Binding SelectedStock}">

… and with that, the typeahead stock search seems to be working pretty well.


ASP.NET Core 2.1 introduced HttpClientFactory, which solves some of the problems developers run into when they create too many HttpClients in their projects. Steven Gordon has a nice write-up on HttpClientFactory and the problems it attempts to solve here.

The syntax to configure clients using HttpClientFactory is straightforward. In your ASP.NET Core Startup.cs:

services.AddHttpClient(Apis.SomeApi, c =>
{
    c.BaseAddress = new Uri("");
    c.DefaultRequestHeaders.Add("Accept", "application/json");
});
Unfortunately, since Xamarin.Forms projects target .NET Standard, we can’t use any of the .NET Core goodies like HttpClientFactory. I wanted a similar pattern for configuring and creating my HttpClients in the mobile app, so I took some inspiration from here and created my own poor man’s HttpClientFactory.

Here’s my IHttpClientFactory interface:

public interface IHttpClientFactory
{
    void AddHttpClient(string name, Action<HttpClient> configurationAction);

    HttpClient CreateClient(string name);
}

And here’s my fairly naïve, yet adequate implementation:

public class HttpClientFactory : IHttpClientFactory, IDisposable
{
    private readonly IDictionary<string, Action<HttpClient>> _configurations = new Dictionary<string, Action<HttpClient>>();
    private readonly IDictionary<string, HttpClient> _clients = new Dictionary<string, HttpClient>();

    public void AddHttpClient(string name, Action<HttpClient> configurationAction)
    {
        if (string.IsNullOrWhiteSpace(name)) throw new ArgumentNullException(nameof(name), $"{nameof(name)} must be provided.");
        if (_configurations.ContainsKey(name)) throw new ArgumentException($"A client with the name {name} has already been added.", nameof(name));

        _configurations.Add(name, configurationAction);
    }

    public HttpClient CreateClient(string name)
    {
        if (!_clients.ContainsKey(name))
        {
            if (!_configurations.ContainsKey(name)) throw new ArgumentException($"A client by the name of {name} has not yet been registered. Call {nameof(AddHttpClient)} first.");

            var httpClient = new HttpClient();
            _configurations[name](httpClient);   // apply the registered configuration
            _clients.Add(name, httpClient);
        }

        return _clients[name];
    }

    public void Dispose()
    {
        foreach (var c in _clients)
        {
            c.Value.Dispose();
        }
    }
}
Finally, the registration of the factory with a single client in my App.xaml.cs:

IHttpClientFactory httpClientFactory = new HttpClientFactory();
httpClientFactory.AddHttpClient(Apis.StockAlertsApi,
    c =>
    {
        c.BaseAddress = new Uri(MiscConstants.StockAlertsApiBaseUri);
        c.DefaultRequestHeaders.Accept.Add(new MediaTypeWithQualityHeaderValue("application/json"));
    });

This gives me a nice way to create and manage my HttpClients in my Xamarin.Forms project, and it will be easy to drop in the real HttpClientFactory if it ever becomes available for Xamarin.Forms projects.

Last week’s activities also included implementing a web service client base class for handling common tasks when communicating with the API, storing access and refresh tokens on the client, and working out my unauthorized/refresh token flow, but those are topics for another post. This one’s long enough.

This Week

This week’s already about half over, and we’ve got the 4th of July coming up. I plan to continue working on the Create Alert Definition screen, and perhaps by the next time I write I’ll have the functionality for building the alert criteria and saving the alert definition working – we’ll see.

Here’s the repository for the project if you’d like to follow along:

Thanks for reading, and Happy Fourth of July!



Stock Alerts Infrastructure

We’ve talked about the features that we’ll be implementing in Stock Alerts. Today we’ll look at the infrastructure that will be needed to support those features.

I’ve been working in Azure for several years now, both in my work life and on side projects. Being a primarily .NET developer, it makes sense that Azure is my preferred cloud. One of these days I will probably check out AWS, but for this project we’ll be hosting our backend services in Azure.

Microsoft Azure

More Than Just CRUD

When deciding what to build for this project, I wanted to do something that was a bit more than just a simple CRUD app that consists of an app talking to a web service talking to a database. Stock Alerts will need to continuously monitor the current prices of all stocks for which there are active alert definitions and evaluate whether the alert should be triggered, so we’ll need a process that runs on a regular basis to perform that work. Further, when the process detects that an alert should be triggered, it needs to send out notifications on the user’s preferred channel(s).

For this processing, we’ll use a combination of Azure Functions and Service Bus queues.

Here’s a sequence diagram depicting the retrieving of quotes, evaluation of alert definitions, and sending of notifications:

Stock Alerts Notification Sequence Diagram

Alert Definition Evaluation

The evaluation of the active alert definitions will have a predictable load. The system will query the stock data provider on a defined time interval for the distinct set of stocks for which there are active alerts and iterate through the alert definitions and evaluate them against the latest data received from the data provider for that stock.

A timer-triggered Azure Function, which is essentially a CRON job running in Azure, will work nicely for periodically pulling new stock data. Initially, there will be a single function instance to pull the data, but this work can be partitioned out to multiple function instances if/when the need arises. It will then enqueue a message on a service bus queue (alertprocessingqueue) for each active alert indicating that there’s new data and the alert needs to be evaluated.

A service bus queue-triggered function (EvaluateAlert) will receive the service bus message and perform the evaluation for a single alert definition.
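Put together, the two trigger shapes look roughly like this. This is a hedged sketch: the function names, CRON schedule, and message payload are illustrative, not the project's actual code:

```csharp
public static class AlertFunctions
{
    // Timer-triggered: runs on a schedule, pulls fresh quotes, and fans out
    // one queue message per active alert definition.
    [FunctionName("PullQuotes")]
    public static async Task PullQuotesAsync(
        [TimerTrigger("0 */1 * * * *")] TimerInfo timer,                   // every minute
        [ServiceBus("alertprocessingqueue")] IAsyncCollector<string> queue,
        ILogger log)
    {
        // ...retrieve quotes from the data provider, then enqueue work items...
        await queue.AddAsync("{ \"alertDefinitionId\": \"...\" }");
    }

    // Queue-triggered: evaluates a single alert definition per message.
    [FunctionName("EvaluateAlert")]
    public static Task EvaluateAlertAsync(
        [ServiceBusTrigger("alertprocessingqueue")] string message,
        ILogger log)
    {
        // ...evaluate the alert's criteria against the latest stock data...
        return Task.CompletedTask;
    }
}
```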

Sending Notifications

The actual notification of users, on the other hand, will likely be characterized by periods of low activity with occasional spikes of high activity. Imagine a very popular stock like AAPL receiving an earnings surprise and opening 5% higher – several alert definitions could be triggered at once and notifications will need to be sent immediately.

Azure Functions will help us with this use case as well – we’ll enqueue notification messages on service bus queues (pushnotificationqueue, for example) when alerts are triggered and service bus queue-triggered functions (SendPushNotification, for example) will respond and send out the notifications. We’ll have a queue for each delivery channel (push, e-mail, SMS), and a function for each as well.

When AAPL spikes and 500 alerts are triggered, 500 messages will be enqueued on service bus queues (assuming each user only has one delivery channel) and 500 functions will be invoked to deliver those notifications.

The Infrastructure

So what Azure resources will be required to support the Stock Alerts features? Here’s a diagram of what we’ll need for the MVP:

Stock Alerts MVP Infrastructure Resources

We’ve got an Azure SQL database to store our users and their preferences, alert definitions and criteria, and stocks and their latest data.

We’ve already talked about the service bus queues, which are used for communicating between the Azure Functions, and we’ve already talked about the Azure Functions as well.

The Stock Alerts API will be an ASP.NET Core Web API service running in Azure, and it will expose endpoints to handle the user and alert definition maintenance as well as authentication.

The Stock Alerts web app, though depicted on the diagram, will actually be implemented post-MVP.

Current State

The above shows the infrastructure as I plan to have it at launch. Below is the current infrastructure I have deployed in Azure:

Stock Alerts Current Infrastructure Resources

All of the API endpoints are currently implemented as HTTP-triggered Azure functions. I did this because I already had the StockAlerts.Functions project, and I didn’t think there’d be that many HTTP endpoints. As I started implementing the authentication endpoints and I ran into some of the limitations of Azure Functions HTTP endpoints (i.e., you can’t inject logic into the pipeline as you can into the ASP.NET Core middleware for a full-fledged Web API), I increasingly felt like the API endpoints deserved their own project and app service. It’s on my TODO list to move these into their own project and service.

Wrapping Up

I think the most interesting part of the Stock Alerts infrastructure is the use of Azure Functions and Service Bus queues to evaluate alert definitions and send notifications. Azure Functions make sense for these processes because they can be triggered by schedule or service bus queue message (among other methods), and they are easily scaled. Service bus queues are appropriate for the communication between functions because they are highly available, reliable, and fast.

Though one of the key value props of serverless computing is automatic scaling, I don’t have practical experience with scaling Azure Functions during periods of high load. I’ll log and monitor timings from the various functions to ensure that notifications are being delivered in a timely fashion after they are triggered, which is crucial for Stock Alerts’ primary function.
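Since I’ll be logging timings anyway, a trivial helper like the one below shows the kind of measurement I have in mind. This is purely illustrative – it’s not part of the Stock Alerts codebase, and Application Insights can capture much of this automatically:

```csharp
using System;
using System.Diagnostics;

public static class FunctionTimer
{
    // Runs an operation and returns the elapsed milliseconds so the caller
    // can log it and alert on slow notification deliveries.
    public static long MeasureMilliseconds(Action operation)
    {
        var stopwatch = Stopwatch.StartNew();
        operation();
        stopwatch.Stop();
        return stopwatch.ElapsedMilliseconds;
    }
}
```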

That’s all for now. Thanks for reading.


If you’d like to receive new posts via e-mail, consider joining the newsletter.

Stock Alerts Features

In my previous post I revealed my new side project, Stock Alerts, and my intention to build a .NET Core product on the Azure stack, posting regular updates about my work as I go. Before I get down into the weeds in future posts, I thought it might be good to first talk at a higher level about the MVP features I’ll be implementing and the infrastructure that will be needed.


Stocks Alerts will consist of a mobile app with a small number of key features. Of course there will be backend services to support the mobile app functionality, but we’ll talk about the features from the user’s perspective as he/she interacts with the app.

There will also be a web app eventually, but for now we’ll just focus on the mobile app.

Alert Definition Management

Stock Alerts users will be able to view, create, edit, and delete alert definitions that define the criteria for a stock alert. In the course of creating an alert definition, the user will search for and select a stock, name the alert, and define the criteria that will trigger the alert.

Initially, the set of stocks available to create alert definitions will be limited, due to daily API call limits set by my data provider (which I’ll talk about in a future post). I’ve considered either supporting only stocks in the S&P 500 at first or allowing the initial users to essentially define the initial universe of stocks by the alert definitions that they create. Still thinking on this one…

For the alert criteria, the API will support allowing the user to choose from multiple rule types (like various price alerts, technical alerts, and fundamental alerts), as well as combining multiple rules into a boolean logic tree, but for MVP the mobile app will only expose the ability to enter 1..n price alerts combined by an AND or OR.
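As a rough sketch of what that MVP evaluation amounts to, here’s 1..n rules combined by a single AND/OR. The `PriceRule` shape and `AlertEvaluator` are hypothetical illustrations – the real API’s criteria model is a richer boolean logic tree:

```csharp
using System;
using System.Collections.Generic;
using System.Linq;

public enum CompositeOperator { And, Or }

// Hypothetical MVP-style price rule: triggers when the price crosses a level.
public class PriceRule
{
    public decimal TriggerPrice { get; set; }
    public bool TriggerWhenAbove { get; set; }

    public bool IsSatisfiedBy(decimal currentPrice) =>
        TriggerWhenAbove ? currentPrice >= TriggerPrice : currentPrice <= TriggerPrice;
}

public static class AlertEvaluator
{
    // Evaluates 1..n rules combined with a single AND/OR operator (one level deep).
    public static bool Evaluate(CompositeOperator op, IReadOnlyList<PriceRule> rules, decimal currentPrice) =>
        op == CompositeOperator.And
            ? rules.All(r => r.IsSatisfiedBy(currentPrice))
            : rules.Any(r => r.IsSatisfiedBy(currentPrice));
}
```

Supporting the full multi-level tree later is then a matter of letting a composite node contain other composite nodes, not just leaf rules.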

Alert Notifications

The Stock Alerts backend will evaluate all active alert definitions as it receives new stock data and notify users of triggered alerts via one or more of three methods: push notification, e-mail, and/or SMS message.

The processing of active alert definitions and the sending of notifications via the three channels will be handled by Azure Functions in conjunction with Azure Service Bus queues. Azure Functions are well-suited for these types of tasks – I pay for the compute that I use and they can scale out under load (for example, when AAPL makes a large move and triggers a large number of alerts).

User Registration & Login

The Stock Alerts mobile app will allow new users to register with the app and existing users to log in. To register, a user just needs to provide their e-mail address, a unique username, and their password. After login, web requests will be authenticated via token-based authentication.

User Preferences Management

Users will be able to set their notification preferences in the Stock Alerts app, including the channels that they will be notified on (push notification, e-mail, or SMS), as well as their e-mail address and SMS phone number, if applicable.

Payment/Subscription Management

Though users will be able to register and set up a limited number of alert definitions for free, I’ll charge users who have more than a few active alert definitions. Stripe is the standard for managing subscriptions and taking online payments, and their API is well-designed. We’ll integrate with Stripe to manage user subscriptions and payments for premium users.


The mobile app will target both Android and iOS (and eventually web), and we’ll use Xamarin.Forms to accomplish this. I’m an Android user, so Android will be the first platform I focus on. I can get idea validation and feedback by launching on a single platform, and if there is traction after launch and I want to expand to iOS, I’ll be well-positioned to do so having built the app with Xamarin.Forms.

A web app will probably be post-MVP as well.

Is That All?

The feature set that I’m targeting for MVP is extremely limited by design. There’s enough here to provide value, prove out the idea, and demonstrate interactions throughout the full stack while being small enough to complete within a couple of months at 5-10 hours per week (though writing about it will slow me down some). There will also be plenty of opportunities for enhancements and additional features post-MVP, if I’m so inclined.

When choosing a side project, it’s important to me that it be very small and limited in scope for a few reasons. First, I want something that I can build quickly while I’m motivated and energized about the new idea and lessen the risk of losing interest or becoming bored with it halfway through. I also want to launch as soon as possible to begin the feedback loop and start learning from my users: What do they like/not like? How do they use the product? Will they pay for it? Etc… Finally, by limiting the initial feature set, I focus on just the core features of the product and I don’t waste time building features that my users may not even care about.

So now that we know what features we’ll be building, we’re ready to talk about the infrastructure needed to support those features!

… but that’s a topic for my next post. Thanks for reading.


If you’d like to receive new posts via e-mail, consider joining the newsletter.

Working on a New Side Project in Public

For the past month or so I’ve been working on a new side project. It’s a small project that will allow me to exercise existing skills, re-sharpen older skills that I’ve not used in a while, and learn some new ones.

I’d like to share it with you…

Working in Public

I’ve been working on my side project every weekday morning for an hour or two before my workday starts and on the occasional weeknight or weekend. I toil away mostly in solitude, except when I solicit feedback or ask for advice from a colleague and buddy who’s working on his own project.

I’ve recently been inspired by a few folks on various podcasts to start working in public. They speak of the benefits of sharing what you’re learning, developing, building with your audience as you’re working on it, an idea that strikes fear in the hearts of those who, like me, are inclined towards self-conscious perfectionism. It’s difficult to put your in-progress work out there for all to see, warts and all.

There’s been an increasing trend towards working in public, particularly among indie makers and developers. I dipped my toe into those waters with my last project, Open Shippers, earlier this year*, and I plan to get back in the pool with this project.

My plan is to post fairly regular (weekly?) updates about what I’m doing on the project:

  • What did I do the prior week?
  • Is there a particular problem I solved, pattern I used, or trick I learned that is worth sharing?
  • What do I plan to work on in the coming week?
  • Any particular challenges worth mentioning?

Things like that.

These will be pretty rough, unpolished posts. Sometimes a particular topic I encounter will be worthy of becoming a more in-depth, polished post, and I’ll put those up from time to time.


So what are my goals with working on this new project, and doing so in public?

  • Get into the habit of writing regularly.
  • Sharpen old skills and learn new ones.
  • Share what I’m working on and what I’ve learned. Hopefully it is helpful to someone.
  • Demonstrate the process of taking a product from concept to market.
  • Demonstrate a well-architected, working product on the Azure / .NET Core stack.
  • Teach.
  • Learn.
  • Have fun.
  • Make some lunch money if anyone subscribes to the service.

What’s the Project?


The idea is not particularly novel or interesting, but it’s a good candidate for what I’m trying to achieve with this project.

I’m building a stock alerts app (hereafter referred to as Stock Alerts until I think of a good name for it) that will allow the user to create an alert definition for a given stock based on a set of criteria that they define. When the alert is triggered, the user will be notified according to their preferences by push notification, SMS, and/or e-mail.

“Aren’t there a ton of other stock alert apps out there?” Yes, there are. Some just let you set a simple price alert; others offer many more features. I plan to support complex alert criteria that include not only simple price alerts, but eventually also technical and fundamental alerts as well, which will differentiate the app from a portion of the market.

I also want to challenge the idea that you have to have a completely novel idea to succeed in building an app that people will pay for. There’s room for multiple competitors in most markets, and oftentimes your app will be the right choice for a segment of the market that’s not currently being served well.

To be clear, I have no illusions of replacing my income with this app, but it will be great fodder for achieving the goals I mentioned above, and it should be a fun little project.


Here’s some of the technology I’m using:

Current Status

The backend API is coded and running in Azure. I’ll share more detail about the API and how it’s built in future posts, including the overall architecture. I’ve been working on the mobile app for about a week now. I’m able to register/login through the mobile app and display my current alert definitions. Last week I put together some UI wireframes for the Create Alert Definition screens, and I’ll be working on implementing those screens this week.

Wrapping Up

In the spirit of working in public, I’ve made the Stock Alerts repository public: As with most projects, there are areas of the codebase that need some work and refinement, but we’ll address those things together down the road.

Thanks for reading! Until next time,


(* After spending several months working on Open Shippers, I was burned out on it by the time I launched it in preview in February. I failed to differentiate it from other similar services out there, and I didn’t have any gas left in the tank to work on building an audience and making it successful. Someday I may do a post-mortem on the project. Open Shippers wasn’t a total failure though – I learned quite a bit while working on it, particularly about ASP.NET Core and Blazor, and I’ve reused several things I developed for Open Shippers in more recent projects. The site is still live, and I may revive efforts around it at some point, but I’m content to work on something else for the time being.)

If you’d like to receive new posts via e-mail, consider joining the newsletter.

Open Shippers in Limited Preview

For the past few months I’ve been working on Open Shippers, a place for solo makers to build and ship projects in the open, publicly sharing their progress as they take their products from conception to launch and beyond. Makers will post daily standups, log decisions made, and provide general commentary about their projects in real-time while receiving support, feedback, and accountability from the community.

Open Shippers home page.

Yesterday I launched a limited preview of Open Shippers with a post on Indie Hackers:

Open Shippers post on Indie Hackers.

My goal for the next couple of weeks is to gain a handful of beta users on Open Shippers to provide feedback and validate (or invalidate) the idea.

The Story So Far

The Open Shippers story starts last summer…

I’d started getting up early in the morning to devote the first couple of hours before the workday starts to personal pursuits and side project work at the suggestion of a buddy and colleague of mine who was doing the same. Think of the financial aphorism “Pay yourself first,” except applied to time in a day.

Every weekday morning shortly after 5:00 AM I’d join him in a Slack channel. After the obligatory “good morning,” we’d each give our daily standup for our respective projects: what I did yesterday, what I plan to do today, and what’s getting in my way.

I found that this daily routine of sharing my standup with a like-minded individual yielded several benefits:

  • I felt accountable to someone else to ship. Every. Day.
  • I received valuable feedback on the features I was building.
  • I focused more on the things that mattered for MVP and less on those that didn’t.
  • I had a sounding board for things I was considering and decisions I was making.
  • My productivity increased.

Sometime around the beginning of November, it occurred to me that there might be value in a dedicated place for solo makers to post their daily standups and journal their project progress. There already exists a fantastic ecosystem of maker communities online, and, while I benefit from many of them, I hadn’t found a project-focused place to journal my day-to-day progress.

So I got to work.

From the point I first started working on Open Shippers, I recorded my daily standups in a file (in addition to sharing them with my buddy on Slack). It was my intention to start using the application as soon as possible as I built it – to eat my own dog food. Once the database was up, I’d seed it with the prior standups from my file.

November turned into December, and my colleague got busy with work trips and other life priorities that prevented him from joining me in the mornings for a few weeks. I continued my routine of posting my daily standups, and I found that the practice was valuable despite his absence.

Fast-forward to today – I’ve finished the functionality needed to support more users than just myself. Open Shippers is in limited preview.

Next Steps

There are many features on the Open Shippers roadmap, which I’ll publish soon. I’ll continue to develop new features in the open, with daily standups and updates.

I’d welcome any feedback that my readers might have on the site, and, if you’re so inclined, I invite you to join me on Open Shippers in the limited preview to use and hopefully get value from posting your daily standups and interacting with other shippers.

Thanks for reading! What will you ship today?


If you’d like to receive new posts via e-mail, consider joining the newsletter.

Setting Environment for EF Core Data Migrations

Most of my side project work is in ASP.NET Core lately, and my main side project uses EF Core for data access with data migrations. My application is deployed in Azure using Azure SQL as the database. When developing and running locally, I hit a LocalDB instance on my machine.

When I need to make changes to the data model, I make the code changes, run Add-Migration Update### and then Update-Database in package manager console, and my local database happily gets updated. But when I’m ready to make the changes to the SQL Azure database, how can I change the connection string that the EF database context is using?


First let’s talk about how an ASP.NET Core application knows where it’s running.

ASP.NET Core uses an environment variable named ASPNETCORE_ENVIRONMENT to control how an application behaves in different environments. The framework supports three environment values out of the box: Development, Staging, and Production. When running locally, my app uses the environment variable set up for me by Visual Studio when I created the project on the Debug tab of my project’s properties:

The ASPNETCORE_ENVIRONMENT variable set on the Debug tab of Project Properties.
The ASPNETCORE_ENVIRONMENT variable in project properties.

On the Application settings tab for my App Service in Azure, this variable is defined and set to Production so the deployed instance of my application knows to use production settings and connection strings:

The ASPNETCORE_ENVIRONMENT variable set on the Application settings tab on the App Service in Azure.

The value of this environment variable can be accessed from within the Startup class by using a constructor that takes an instance of IHostingEnvironment:

public Startup(IConfiguration configuration, IHostingEnvironment environment)

You can then use convenience members, like environment.IsDevelopment or environment.IsProduction to switch based on the predefined environments, or access environment.EnvironmentName directly.

ASP.NET Core also supports the use of different appsettings.json files, depending on the value of the ASPNETCORE_ENVIRONMENT environment variable. For example, I have an appsettings.Development.json in my project with overrides for settings and connection strings specific to my development environment. When loading configuration options on startup, the framework knows to look for and use any settings defined in an appsettings.{EnvironmentName}.json file, where {EnvironmentName} matches the value of the ASPNETCORE_ENVIRONMENT environment variable.
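For example, an appsettings.Development.json override for the connection string might look like this (the connection string below is illustrative, not my actual config):

```json
{
  "ConnectionStrings": {
    "DefaultConnection": "Server=(localdb)\\MSSQLLocalDB;Database=mydatabase;Trusted_Connection=True;"
  }
}
```

Any setting not overridden here falls back to the value in the base appsettings.json.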

Setting ASPNETCORE_ENVIRONMENT for Data Migrations

All of this is prelude to the point of this post. I’ve never really used package manager console for much more than, well, managing Nuget packages and running the occasional data migration commands. But as it turns out, the Package Manager Console window is a PowerShell shell that can do much more. For now, I just need it to help me target the correct database when running my data migrations.

I can see a list of my application’s DbContext types and where they’re pointing by issuing EF Core Tools command Get-DbContext in package manager console. Doing so yields the following output:

PM> Get-DbContext
      Entity Framework Core 2.2.1-servicing-10028 initialized 'ApplicationDbContext' using provider 'Microsoft.EntityFrameworkCore.SqlServer' with options: None

providerName                            databaseName  dataSource             options
------------                            ------------  ----------             -------
Microsoft.EntityFrameworkCore.SqlServer mydatabase    (localdb)\MSSQLLocalDB None   

I can then set the ASPNETCORE_ENVIRONMENT variable to Production for the context of the package manager window only by issuing the following command:
PM> $env:ASPNETCORE_ENVIRONMENT='Production'


All subsequent calls to Update-Database will be now run against the database configured for my Production environment. I can double-check to make sure, though, by issuing Get-DbContext again. This time it shows that I’m pointing to my deployed database:

PM> Get-DbContext

providerName                            databaseName dataSource                              options
------------                            ------------ ----------                              -------
Microsoft.EntityFrameworkCore.SqlServer mydatabase,1433      None

Thanks for reading!


If you’d like to receive new posts via e-mail, consider joining the newsletter.

Safely Rendering Markdown in Blazor

This week I added the ability to post and properly display markdown content in my Blazor (server-side Blazor, actually… Razor Components) project. Markdown is a lightweight, standardized way of formatting text without having to resort to HTML or depend on a WYSIWYG editor. There’s a nice markdown quick reference here.

Leveraging the Work of Others

My need was simple: display formatted markdown on one screen that was saved as plain text on another. Accomplishing this was almost as simple, thanks to the work of those who have gone before me. I used two libraries:

  • Markdig – “a fast, powerful, CommonMark compliant, extensible Markdown processor for .NET.”
  • HtmlSanitizer – “a .NET library for cleaning HTML fragments and documents from constructs that can lead to XSS attacks.”

I also drew from Ed Charbeneau’s fantastic work on BlazeDown, an experimental markdown editor he wrote using Blazor. He first built it back when Blazor was on release 0.2, and he had to do a little extra work to get it to work due to some deficiencies in Blazor at the time (namely, the inability to render raw HTML). The Blazor team added MarkupString for rendering raw HTML with release 0.5, which made the task of rendering markup much simpler. He revisited BlazeDown with release 0.5.1 of Blazor, and updated the project to use the new feature.

This Example

What I’ll show here is just enough code to meet the requirements that I had in my project – simply render a string of in-memory markdown as HTML on the screen and do it safely (more on that later).

The code for this short sample can be found here.

markdown entered as plain text and rendered as HTML

MarkdownView Component

Part of the power of Blazor is the ability to componentize commonly used controls and logic for easy reuse.  I created a MarkdownView Blazor component that is responsible for safely rendering a string of markdown as HTML.

MarkdownView is just two lines:

@inherits MarkdownModel
@HtmlContent

The corresponding MarkdownModel is as follows:

public class MarkdownModel : BlazorComponent
{
    private string _content;

    [Inject] public IHtmlSanitizer HtmlSanitizer { get; set; }

    [Parameter]
    protected string Content
    {
        get => _content;
        set
        {
            _content = value;
            HtmlContent = ConvertStringToMarkupString(_content);
        }
    }

    public MarkupString HtmlContent { get; private set; }

    private MarkupString ConvertStringToMarkupString(string value)
    {
        if (!string.IsNullOrWhiteSpace(value))
        {
            // Convert markdown string to HTML
            var html = Markdig.Markdown.ToHtml(value, new MarkdownPipelineBuilder().UseAdvancedExtensions().Build());

            // Sanitize HTML before rendering
            var sanitizedHtml = HtmlSanitizer.Sanitize(html);

            // Return sanitized HTML as a MarkupString that Blazor can render
            return new MarkupString(sanitizedHtml);
        }

        return new MarkupString();
    }
}

This is a good simple example to demonstrate a few different concepts. First, I’ve specified a service to inject into the component on instantiation, HtmlSanitizer. We’ll discuss this more in a bit, but for now just know that it is a dependency registered with the IoC container.

Second, I’ve specified a parameter, Content, that is bound to a property on the model of the parent view. This is how I pass a string of markdown into this component.

Third, I’ve exposed an HtmlContent property of type MarkupString. This is the property that will expose the string of markdown converted to a string of HTML that this component will display.

When Content is set, I use a function ConvertStringToMarkupString(..) to convert the string to HTML, sanitize the string of HTML, and return it as a MarkupString.

Usage of the component consists of simply binding it to a string that we want to render:

<MarkdownView Content="@MarkdownContent"/>

Be Safe – Sanitize Your HTML

It’s important to sanitize any user-supplied HTML that you will be rendering back as raw HTML to prevent malicious users from injecting scripts into your app and making it vulnerable to cross-site scripting (XSS) attacks. For this task, I use HtmlSanitizer, an actively-maintained, highly-configurable .NET library. I already showed above how it is injected and used in my MarkdownView component. The only remaining piece is the registration of the HtmlSanitizer with my IoC container in the ConfigureServices method in my Startup class:

services.AddScoped<IHtmlSanitizer, HtmlSanitizer>(x =>
{
    // Configure sanitizer rules as needed here.
    // For now, just use default rules + allow class attributes
    var sanitizer = new Ganss.XSS.HtmlSanitizer();
    sanitizer.AllowedAttributes.Add("class");
    return sanitizer;
});

By making the sanitization of the HTML a part of the MarkdownView component’s logic, I ensure that I won’t forget to sanitize a piece of content as long as I always use the component to render my markdown. It’s also wise to sanitize markdown and HTML on ingress, prior to writing it to storage.

Wrapping Up

This was a pretty short example demonstrating how to add a feature that can have a big impact. The tools available to us in Blazor and a couple of existing libraries made this a pretty simple task, which is one of the reasons I’m so excited about Blazor: the ability to leverage existing .NET libraries directly in the browser directly translates to a number of significant benefits including faster delivery times, smaller codebases, lower total cost of ownership, etc…

Thanks for reading!


If you’d like to receive new posts via e-mail, consider joining the newsletter.

Creating a Loading Overlay for a Composite Blazor View

The home screen of one of my projects is composed of multiple Blazor components, each of which makes a web service call in its OnInitAsync() to retrieve its data. When loading the page, all of the components render immediately in their unpopulated state; each one then updates and fills out as the various web service calls complete and their data loads. I’m not a fan of this – I’d rather have a nice overlay with some sort of loading indicator hiding the components while they’re loading that disappears when the last component is ready for display. This weekend I developed a simple pattern for accomplishing just that.

The full source code for this example can be found here.

What We’re Building

Below is an example of a Blazor page that contains three components, each one simulating a web service call on initialization that completes from one to three seconds after it is initialized.  We see them show up empty at first, then resize, and finally load their text, all at different times.

Blazor page load without loading overlay.
Blazor page load without loading overlay.

Instead, we want our end product to look like the following. When the page loads we see a nice little loading indicator (I pulled a free one from here) on an overlay that hides the components while they’re doing their business of initializing, resizing, and loading their data before they’re ready to be seen.

Blazor page load with loading overlay.
Blazor page load with loading overlay.

Now that we know what we’re building, let’s see some code…

Blazor Components

Index View and Model

We’ll start with our Blazor components.  Our IndexView.cshtml is the main page, and it looks like this:

@inherits PageLoadingOverlay.App.Features.Home.IndexModel
@page "/"
@layout Layout.MainLayout

<!-- Loading overlay that displays until all individual components have loaded their data -->
<LoadingOverlayView LoadStateManager="@LoadStateManager"></LoadingOverlayView>

<div class="row">
    <div class="col-3 component-div component-1-div">
        <ComponentView LoadStateManager="@LoadStateManager" Number="1" Delay="2000"></ComponentView>
    </div>
    <div class="col-3 component-div component-2-div">
        <ComponentView LoadStateManager="@LoadStateManager" Number="2" Delay="1000"></ComponentView>
    </div>
    <div class="col-3 component-div component-3-div">
        <ComponentView LoadStateManager="@LoadStateManager" Number="3" Delay="3000"></ComponentView>
    </div>
</div>

Note that we have four components in this view: three instances of ComponentView that load data on initialization, and an instance of LoadingOverlayView that hides the other components until they are all loaded. We’ll look at these components shortly.

The corresponding IndexModel for this view has a single property:

public class IndexModel : BlazorComponent
{
    /// <summary>
    /// The <see cref="LoadStateManager"/> for this view
    /// </summary>
    /// <remarks>
    /// Since this is the parent view that contains the loading overlay, we new
    /// up an instance of the <see cref="LoadStateManager"/> here, and pass it
    /// to child components.
    /// </remarks>
    public LoadStateManager LoadStateManager { get; } = new LoadStateManager();
}

We instantiate a new instance of LoadStateManager here, which will manage the loading state of the child components and tell the loading overlay when all components are loaded and it can disappear.

Component View and Model

The three instances of the component that we’re loading have these properties:

public class ComponentModel : BlazorComponent, ILoadableComponent
{
    [Parameter] protected LoadStateManager LoadStateManager { get; set; }

    [Parameter] protected int Number { get; set; }

    [Parameter] protected int Delay { get; set; }

    public string Title => $"Component {Number}";

    public string Status { get; private set; }

    public ComponentLoadState LoadState { get; } = new ComponentLoadState();
}

The LoadStateManager from the parent view is passed to the component, as well as a couple of other parameters (like Number and Delay) that control the component’s behavior.

The component implements interface ILoadableComponent which defines a single property:

/// <summary>
/// Defines a component whose <see cref="ComponentLoadState"/> can be managed by a <see cref="LoadStateManager"/>
/// </summary>
public interface ILoadableComponent
{
    /// <summary>
    /// The load state of a component
    /// </summary>
    ComponentLoadState LoadState { get; }
}

Note that when we implement ILoadableComponent in our component, we go ahead and instantiate a new instance of ComponentLoadState to manage the loading state of this individual component.

Back to our ComponentModel, we want to register our component with the LoadStateManager when its parameters are set, like so:

protected override void OnParametersSet()
{
    // Register this component with the LoadStateManager
    LoadStateManager.Register(this);
}
This call makes the LoadStateManager aware of this component and ensures that it will wait for this component to finish loading before it alerts the parent view that all child components are done loading.

Finally, we have our good friend OnInitAsync(), which simulates a web service call with a Task.Delay(Delay), sets our Status, and finishes up by telling the component’s LoadState that we’re done loading:

protected override async Task OnInitAsync()
{
    // Simulate a web service call to get data
    await Task.Delay(Delay);

    Status = StringConstants.RandomText;

    // Ok - we're done loading. Notify the LoadStateManager!
    LoadState.OnLoadingComplete();
}

Loading Overlay View and Model

The final view is the LoadingOverlayView:

@inherits LoadingOverlayModel

@if (!LoadStateManager.IsLoadingComplete)
{
    <div class="loading-overlay">
        <div class="loader-container">
            <div class="dot-loader"></div>
            <div class="dot-loader"></div>
            <div class="dot-loader"></div>
        </div>
    </div>
}
This is essentially a div that covers the main content area with a simple CSS loading animation in the middle that is displayed as long as LoadStateManager.IsLoadingComplete is false. (The CSS for this overlay is not my main focus for this post, but it can be found towards the bottom of the standard site.css that was created for my new project here for those interested.) This is the same LoadStateManager that is instantiated in the IndexModel and passed to the child components.

Here’s the corresponding model:

/// <summary>
/// Model class for <see cref="LoadingOverlayView"/>
/// </summary>
public class LoadingOverlayModel : BlazorComponent
{
    /// <summary>
    /// The <see cref="LoadStateManager"/> for this <see cref="LoadingOverlayModel"/>
    /// </summary>
    [Parameter] protected LoadStateManager LoadStateManager { get; set; }

    protected override async Task OnInitAsync()
    {
        // When LoadStateManager indicates that all components are loaded, notify
        // this component of the state change
        LoadStateManager.LoadingComplete += (sender, args) => StateHasChanged();

        await Task.CompletedTask;
    }
}

In this model, we subscribe to the LoadStateManager.LoadingComplete event, which fires when all of the components that the LoadStateManager is monitoring have completed loading. When the event fires, we simply call StateHasChanged() to prompt the component to re-render, since it is bound directly to LoadStateManager.IsLoadingComplete.

Helper Classes

As we’ve already mentioned, LoadStateManager manages the load state of a collection of components on a screen. Components register themselves with the LoadStateManager, which keeps a collection of their ComponentLoadStates and subscribes to each one’s LoadingComplete event, firing its own LoadingComplete event when the last one completes:

/// <summary>
/// Manages the <see cref="ComponentLoadState"/>s for a particular view
/// </summary>
public class LoadStateManager
{
    private readonly ICollection<ComponentLoadState> _componentLoadStates = new List<ComponentLoadState>();

    /// <summary>
    /// Gets a value indicating whether all registered components are loaded
    /// </summary>
    public bool IsLoadingComplete => _componentLoadStates.All(c => c.IsLoaded);

    /// <summary>
    /// Registers an <see cref="ILoadableComponent"/> with this <see cref="LoadStateManager"/>
    /// </summary>
    /// <param name="component"></param>
    public void Register(ILoadableComponent component)
    {
        _componentLoadStates.Add(component.LoadState);

        component.LoadState.LoadingComplete += (sender, args) =>
        {
            if (IsLoadingComplete) OnLoadingComplete();
        };
    }

    /// <summary>
    /// Notifies subscribers that all loading is complete for all registered components
    /// </summary>
    public event EventHandler LoadingComplete;

    protected void OnLoadingComplete()
    {
        LoadingComplete?.Invoke(this, new EventArgs());
    }
}

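The Register method above accepts an ILoadableComponent, an interface the post doesn’t show explicitly. Based on how Register uses it, the interface presumably looks something like this minimal sketch (the actual definition in the sample source may differ):

```csharp
// Presumed shape of ILoadableComponent, inferred from how Register uses it.
public interface ILoadableComponent
{
    // Each loadable component exposes its own ComponentLoadState
    ComponentLoadState LoadState { get; }
}
```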

Finally, we have ComponentLoadState, which represents the load state of an individual component. It exposes an IsLoaded property, a LoadingComplete event, and an OnLoadingComplete() method for storing and communicating the component’s load state:

/// <summary>
/// Represents the load state of an individual component
/// </summary>
public class ComponentLoadState
{
    /// <summary>
    /// Gets a value indicating whether this component is loaded
    /// </summary>
    public bool IsLoaded { get; private set; }

    /// <summary>
    /// Notifies the <see cref="LoadStateManager"/> that this component has completed loading
    /// </summary>
    public event EventHandler LoadingComplete;

    /// <summary>
    /// Invoked by the corresponding component to indicate that it has completed loading
    /// </summary>
    public void OnLoadingComplete()
    {
        IsLoaded = true;
        LoadingComplete?.Invoke(this, new EventArgs());
    }
}

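To make the interplay between the two helper classes concrete, here’s a small console-style sketch. It assumes the LoadStateManager and ComponentLoadState classes above; FakeComponent is purely illustrative and not from the sample source. The manager’s LoadingComplete event fires only after the last registered component reports in:

```csharp
// Illustrative only: FakeComponent stands in for a real Blazor component.
public class FakeComponent : ILoadableComponent
{
    public ComponentLoadState LoadState { get; } = new ComponentLoadState();
}

public static class LoadStateDemo
{
    public static void Run()
    {
        var manager = new LoadStateManager();
        var first = new FakeComponent();
        var second = new FakeComponent();

        manager.Register(first);
        manager.Register(second);

        manager.LoadingComplete += (sender, args) =>
            Console.WriteLine("All components loaded!");

        first.LoadState.OnLoadingComplete();   // manager is still waiting on 'second'
        second.LoadState.OnLoadingComplete();  // now all states are loaded, so LoadingComplete fires
    }
}
```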
Putting It All Together

Once you have your helper classes and overlay view and spinner created, it’s fairly trivial to add this functionality to a Blazor page. The page view just needs to instantiate the LoadStateManager and pass it to its children, and the child components need to define properties for the LoadStateManager and ComponentLoadState, register with the LoadStateManager, and tell their ComponentLoadState when they’re done loading.
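As a consolidated sketch of those steps, a child component model might look roughly like this (WidgetModel is an illustrative name, not a class from the sample source):

```csharp
// Hypothetical child component model showing the full wiring:
// receive the LoadStateManager, register with it, load, then report completion.
public class WidgetModel : BlazorComponent, ILoadableComponent
{
    // Passed down from the parent page that instantiated the LoadStateManager
    [Parameter] protected LoadStateManager LoadStateManager { get; set; }

    // This component's own load state
    public ComponentLoadState LoadState { get; } = new ComponentLoadState();

    protected override async Task OnInitAsync()
    {
        LoadStateManager.Register(this);

        await Task.Delay(500);  // stand-in for a real data fetch

        LoadState.OnLoadingComplete();
    }
}
```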

I hope you’ve found this post helpful. You can find the full source code for this sample here. Thanks for reading!


If you’d like to receive new posts via e-mail, consider joining the newsletter.