Open Shippers in Limited Preview

For the past few months I’ve been working on Open Shippers (openshippers.io), a place for solo makers to build and ship projects in the open, publicly sharing their progress as they take their products from conception to launch and beyond. Makers will post daily standups, log decisions made, and provide general commentary about their projects in real-time while receiving support, feedback, and accountability from the community.

Open Shippers home page.

Yesterday I launched a limited preview of Open Shippers with a post on Indie Hackers:

Open Shippers post on Indie Hackers.

My goal for the next couple of weeks is to gain a handful of beta users on Open Shippers to provide feedback and validate (or invalidate) the idea.

The Story So Far

The Open Shippers story starts last summer…

At the suggestion of a buddy and colleague of mine who was doing the same, I’d started getting up early in the morning to devote the first couple of hours before the workday to personal pursuits and side project work. Think of the financial aphorism “Pay yourself first,” except applied to the hours in your day.

Every weekday morning shortly after 5:00 AM I’d join him in a Slack channel. After the obligatory “good morning,” we’d each give our daily standup for our respective projects: what I did yesterday, what I plan to do today, and what’s getting in my way.

I found that this daily routine of sharing my standup with a like-minded individual yielded several benefits:

  • I felt accountable to someone else to ship. Every. Day.
  • I received valuable feedback on the features I was building.
  • I focused more on the things that mattered for MVP and less on those that didn’t.
  • I had a sounding board for things I was considering and decisions I was making.
  • My productivity increased.

Sometime around the beginning of November, it occurred to me that there might be value in a dedicated place for solo makers to post their daily standups and journal their project progress. There already exists a fantastic ecosystem of maker communities online, and, while I benefit from many of them, I hadn’t found a project-focused place to journal my day-to-day progress.

So I got to work.

From the moment I first started working on Open Shippers, I recorded my daily standups in a file (in addition to sharing them with my buddy on Slack). My intention was to start using the application as soon as possible while building it – to eat my own dog food. Once the database was up, I’d seed it with the prior standups from my file.

November turned into December, and my colleague got busy with work trips and other life priorities that prevented him from joining me in the mornings for a few weeks. I continued my routine of posting my daily standups, and I found that the practice was valuable despite his absence.

Fast-forward to today – I’ve finished the functionality needed to support more users than just myself. Open Shippers is in limited preview.

Next Steps

There are many features on the Open Shippers roadmap, which I’ll publish soon. I’ll continue to develop new features in the open, with daily standups and updates.

I’d welcome any feedback you might have on the site. And, if you’re so inclined, I invite you to join me on Open Shippers during the limited preview – I hope you’ll get value from posting your daily standups and interacting with other shippers.

Thanks for reading! What will you ship today?

-Jon


If you’d like to receive new posts via e-mail, consider joining the newsletter.

Setting Environment for EF Core Data Migrations

Most of my side project work is in ASP.NET Core lately, and my main side project uses EF Core for data access with data migrations. My application is deployed in Azure using Azure SQL as the database. When developing and running locally, I hit a LocalDB instance on my machine.

When I need to make changes to the data model, I make the code changes, run Add-Migration Update### and then Update-Database in the Package Manager Console, and my local database happily gets updated. But when I’m ready to apply those changes to the Azure SQL database, how can I change the connection string that the EF database context is using?

The ASPNETCORE_ENVIRONMENT Variable

First let’s talk about how an ASP.NET Core application knows where it’s running.

ASP.NET Core uses an environment variable named ASPNETCORE_ENVIRONMENT to control how an application behaves in different environments. The framework supports three environment values out of the box: Development, Staging, and Production. When running locally, my app uses the environment variable set up for me by Visual Studio when I created the project on the Debug tab of my project’s properties:

The ASPNETCORE_ENVIRONMENT variable set on the Debug tab of Project Properties.

On the Application settings tab for my App Service in Azure, this variable is defined and set to Production so the deployed instance of my application knows to use production settings and connection strings:

The ASPNETCORE_ENVIRONMENT variable set on the Application settings tab on the App Service in Azure.

The value of this environment variable can be accessed from within the Startup class by using a constructor that takes an instance of IHostingEnvironment:

public Startup(IConfiguration configuration, IHostingEnvironment environment)

You can then use convenience methods, like environment.IsDevelopment() or environment.IsProduction(), to switch based on the predefined environments, or access environment.EnvironmentName directly.
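
For example, here’s a minimal sketch of branching on the environment (here I’m taking IHostingEnvironment as a parameter to Configure, which works just as well as the constructor injection shown above; the specific middleware calls are only illustrative, not necessarily what my app does):

public void Configure(IApplicationBuilder app, IHostingEnvironment env)
{
    if (env.IsDevelopment())
    {
        // Show detailed error pages only when running locally
        app.UseDeveloperExceptionPage();
    }
    else
    {
        // Use a friendlier error handler everywhere else
        app.UseExceptionHandler("/Error");
    }

    // ... remaining middleware registrations
}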

ASP.NET Core also supports the use of different appsettings.json files, depending on the value of the ASPNETCORE_ENVIRONMENT environment variable. For example, I have an appsettings.Development.json in my project with overrides for settings and connection strings specific to my development environment. When loading configuration options on startup, the framework knows to look for and use any settings defined in an appsettings.{EnvironmentName}.json file, where {EnvironmentName} matches the value of the ASPNETCORE_ENVIRONMENT environment variable.
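
Under the covers, WebHost.CreateDefaultBuilder wires this up for you; conceptually, the configuration sources are layered roughly like this (a simplified sketch of what the framework does, not code you need to add yourself):

var config = new ConfigurationBuilder()
    .SetBasePath(env.ContentRootPath)
    // Base settings shared by all environments
    .AddJsonFile("appsettings.json", optional: true, reloadOnChange: true)
    // Environment-specific overrides, e.g. appsettings.Development.json
    .AddJsonFile($"appsettings.{env.EnvironmentName}.json", optional: true, reloadOnChange: true)
    // Environment variables override anything in the JSON files
    .AddEnvironmentVariables()
    .Build();

Because the environment-specific file is added after appsettings.json, its values win whenever both files define the same key.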

Setting ASPNETCORE_ENVIRONMENT for Data Migrations

All of this is prelude to the point of this post. I’ve never really used the Package Manager Console for much more than, well, managing NuGet packages and running the occasional data migration commands. But as it turns out, the Package Manager Console window is a PowerShell shell that can do much more. For now, I just need it to help me target the correct database when running my data migrations.

I can see a list of my application’s DbContext types and where they’re pointing by issuing the EF Core Tools command Get-DbContext in the Package Manager Console. Doing so yields the following output:

PM> Get-DbContext
Microsoft.EntityFrameworkCore.Infrastructure[10403]
      Entity Framework Core 2.2.1-servicing-10028 initialized 'ApplicationDbContext' using provider 'Microsoft.EntityFrameworkCore.SqlServer' with options: None

providerName                            databaseName  dataSource             options
------------                            ------------  ----------             -------
Microsoft.EntityFrameworkCore.SqlServer mydatabase    (localdb)\MSSQLLocalDB None   

I can then set the ASPNETCORE_ENVIRONMENT variable to Production for the current Package Manager Console session only by issuing the following command:

PM> $Env:ASPNETCORE_ENVIRONMENT = "Production"

All subsequent calls to Update-Database will now run against the database configured for my Production environment. I can double-check to make sure, though, by issuing Get-DbContext again. This time it shows that I’m pointing to my deployed database:

PM> Get-DbContext

providerName                            databaseName dataSource                              options
------------                            ------------ ----------                              -------
Microsoft.EntityFrameworkCore.SqlServer mydatabase   tcp:acme.database.windows.net,1433      None

Thanks for reading!

–Jon


If you’d like to receive new posts via e-mail, consider joining the newsletter.

Safely Rendering Markdown in Blazor

This week I added the ability to post and properly display markdown content in my Blazor (server-side Blazor, actually… Razor Components) project. Markdown is a lightweight, standardized way of formatting text without having to resort to HTML or depend on a WYSIWYG editor. There’s a nice markdown quick reference here.

Leveraging the Work of Others

My need was simple: display formatted markdown on one screen that was saved as plain text on another. Accomplishing this was almost as simple, thanks to the work of those who have gone before me. I used two libraries:

  • Markdig – “a fast, powerful, CommonMark compliant, extensible Markdown processor for .NET.”
  • HtmlSanitizer – “a .NET library for cleaning HTML fragments and documents from constructs that can lead to XSS attacks.”

I also drew from Ed Charbeneau’s fantastic work on BlazeDown, an experimental markdown editor he wrote using Blazor. He first built it back when Blazor was on release 0.2, and he had to do a little extra work to get it working due to some deficiencies in Blazor at the time (namely, the inability to render raw HTML). The Blazor team added MarkupString for rendering raw HTML with release 0.5, which made the task of rendering markup much simpler. He revisited BlazeDown with release 0.5.1 of Blazor and updated the project to use the new feature.

This Example

What I’ll show here is just enough code to meet the requirements that I had in my project – simply render a string of in-memory markdown as HTML on the screen and do it safely (more on that later).

The code for this short sample can be found here.

markdown entered as plain text and rendered as HTML

MarkdownView Component

Part of the power of Blazor is the ability to componentize commonly used controls and logic for easy reuse.  I created a MarkdownView Blazor component that is responsible for safely rendering a string of markdown as HTML.

MarkdownView is just two lines:

@inherits MarkdownModel

@HtmlContent

The corresponding MarkdownModel is as follows:

    public class MarkdownModel: BlazorComponent
    {
        private string _content;

        [Inject] public IHtmlSanitizer HtmlSanitizer { get; set; }

        [Parameter]
        protected string Content
        {
            get => _content;
            set
            {
                _content = value;
                HtmlContent = ConvertStringToMarkupString(_content);
            }
        }

        public MarkupString HtmlContent { get; private set; }

        private MarkupString ConvertStringToMarkupString(string value)
        {
            if (!string.IsNullOrWhiteSpace(value))
            {
                // Convert markdown string to HTML
                var html = Markdig.Markdown.ToHtml(value, new MarkdownPipelineBuilder().UseAdvancedExtensions().Build());

                // Sanitize HTML before rendering
                var sanitizedHtml = HtmlSanitizer.Sanitize(html);

                // Return sanitized HTML as a MarkupString that Blazor can render
                return new MarkupString(sanitizedHtml);
            }

            return new MarkupString();
        }
    }

This is a good simple example to demonstrate a few different concepts. First, I’ve specified a service to inject into the component on instantiation, HtmlSanitizer. We’ll discuss this more in a bit, but for now just know that it is a dependency registered with the IoC container.

Second, I’ve specified a parameter, Content, that is bound to a property on the parent view’s model. This is how I pass a string of markdown into this component.

Third, I’ve exposed an HtmlContent property of type MarkupString. This is the property that will expose the string of markdown converted to a string of HTML that this component will display.

When Content is set, I use a function ConvertStringToMarkupString(..) to convert the string to HTML, sanitize the string of HTML, and return it as a MarkupString.

Usage of the component consists of simply binding it to a string that we want to render:

<MarkdownView Content="@MarkdownContent"/>

Be Safe – Sanitize Your HTML

It’s important to sanitize any user-supplied HTML that you will be rendering back as raw HTML to prevent malicious users from injecting scripts into your app and making it vulnerable to cross-site scripting (XSS) attacks. For this task, I use HtmlSanitizer, an actively maintained, highly configurable .NET library. I already showed above how it is injected and used in my MarkdownView component. The only remaining piece is the registration of the HtmlSanitizer with my IoC container in the ConfigureServices method in my Startup class:

            services.AddScoped<IHtmlSanitizer, HtmlSanitizer>(x =>
            {
                // Configure sanitizer rules as needed here.
                // For now, just use default rules + allow class attributes
                var sanitizer = new Ganss.XSS.HtmlSanitizer();
                sanitizer.AllowedAttributes.Add("class");
                return sanitizer;
            });

By making the sanitization of the HTML part of the MarkdownView component’s logic, I ensure that I won’t forget to sanitize a piece of content as long as I always use the component to render my markdown. It’s also wise to sanitize markdown and HTML on ingress, prior to writing it to storage.
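
For instance, a hypothetical save path could run the user’s input through the same injected sanitizer before persisting it (StandupService, IStandupRepository, and SaveStandupAsync are made-up names here, just to sketch the idea):

    public class StandupService
    {
        private readonly IHtmlSanitizer _htmlSanitizer;
        private readonly IStandupRepository _repository; // hypothetical storage abstraction

        public StandupService(IHtmlSanitizer htmlSanitizer, IStandupRepository repository)
        {
            _htmlSanitizer = htmlSanitizer;
            _repository = repository;
        }

        public async Task SaveStandupAsync(string rawMarkdown)
        {
            // Sanitize on ingress so malicious markup never reaches storage
            var safeContent = _htmlSanitizer.Sanitize(rawMarkdown);

            await _repository.AddAsync(safeContent);
        }
    }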

Wrapping Up

This was a pretty short example demonstrating how to add a feature that can have a big impact. The tools available to us in Blazor and a couple of existing libraries made this a pretty simple task, which is one of the reasons I’m so excited about Blazor: the ability to leverage existing .NET libraries directly in the browser translates to a number of significant benefits, including faster delivery times, smaller codebases, and lower total cost of ownership.

Thanks for reading!

–Jon


If you’d like to receive new posts via e-mail, consider joining the newsletter.

Creating a Loading Overlay for a Composite Blazor View

The home screen of one of my projects is composed of multiple Blazor components that each make a web service call in their respective OnInitAsync()s to retrieve their data.  When the page loads, all of the components render immediately in their unpopulated state; each one then updates and fills out as the various web service calls complete and their data loads.  I’m not a fan of this – I’d rather have a nice overlay with some sort of loading indicator that hides the components while they’re loading and disappears when the last component is ready for display.  This weekend I developed a simple pattern for accomplishing just that.

The full source code for this example can be found here.

What We’re Building

Below is an example of a Blazor page that contains three components, each one simulating a web service call on initialization that completes from one to three seconds after it is initialized.  We see them show up empty at first, then resize, and finally load their text, all at different times.

Blazor page load without loading overlay.

Instead, we want our end product to look like the following.  When the page loads we see a nice little loading indicator (I pulled a free one from here) on an overlay that hides the components while they’re doing their business of initializing, resizing, and loading their data before they’re ready to be seen.

Blazor page load with loading overlay.

Now that we know what we’re building, let’s see some code…

Blazor Components

Index View and Model

We’ll start with our Blazor components.  Our IndexView.cshtml is the main page, and it looks like this:

@inherits PageLoadingOverlay.App.Features.Home.IndexModel
@page "/"
@layout Layout.MainLayout

<!-- Loading overlay that displays until all individual components have loaded their data -->
<LoadingOverlayView LoadStateManager="@LoadStateManager"></LoadingOverlayView>

<div class="row">
    <div class="col-3 component-div component-1-div">
        <ComponentView LoadStateManager="@LoadStateManager" Number="1" Delay="2000"></ComponentView>
    </div>
    <div class="col-3 component-div component-2-div">
        <ComponentView LoadStateManager="@LoadStateManager" Number="2" Delay="1000"></ComponentView>
    </div>
    <div class="col-3 component-div component-3-div">
        <ComponentView LoadStateManager="@LoadStateManager" Number="3" Delay="3000"></ComponentView>
    </div>
</div>

Note that we have four components in this view: three instances of ComponentView that load data on initialization, and an instance of LoadingOverlayView that hides the other components until they are all loaded. We’ll look at these components shortly.

The corresponding IndexModel for this view has a single property:

    public class IndexModel: BlazorComponent
    {
        /// <summary>
        /// The <see cref="LoadStateManager"/> for this view
        /// </summary>
        /// <remarks>
        /// Since this is the parent view that contains the loading overlay, we new
        /// up an instance of the <see cref="LoadStateManager"/> here, and pass it
        /// to child components.
        /// </remarks>
        public LoadStateManager LoadStateManager { get; } = new LoadStateManager();
    }

We instantiate a new instance of LoadStateManager here, which will manage the loading state of the child components and tell the loading overlay when all components are loaded and it can disappear.

Component View and Model

The three instances of the component that we’re loading have these properties:

    public class ComponentModel: BlazorComponent, ILoadableComponent
    {
        [Parameter] protected LoadStateManager LoadStateManager { get; set; }

        [Parameter] protected int Number { get; set; }

        [Parameter] protected int Delay { get; set; }
        
        public string Title => $"Component {Number}";

        public string Status { get; private set; }

        public ComponentLoadState LoadState { get; } = new ComponentLoadState();

The LoadStateManager from the parent view is passed to the component, as well as a couple of other parameters (like Number and Delay) that control the component’s behavior.

The component implements interface ILoadableComponent which defines a single property:

    /// <summary>
    /// Defines a component whose <see cref="ComponentLoadState"/> can be managed by a <see cref="LoadStateManager"/>
    /// </summary>
    public interface ILoadableComponent
    {
        /// <summary>
        /// The load state of a component
        /// </summary>
        ComponentLoadState LoadState { get; }
    }

Note that when we implement ILoadableComponent in our component, we go ahead and instantiate a new instance of ComponentLoadState to manage the loading state of this individual component.

Back to our ComponentModel, we want to register our component with the LoadStateManager when its parameters are set, like so:

        protected override void OnParametersSet()
        {
            base.OnParametersSet();

            // Register this component with the LoadStateManager
            LoadStateManager.Register(this);
        }

This call makes the LoadStateManager aware of this component and ensures that it will wait for this component to finish loading before it alerts the parent view that all child components are done loading.

Finally, we have our good friend, OnInitAsync(), which simulates a web service call with a Task.Delay(Delay), sets our Status, and finishes up by telling the component’s LoadState that we’re done loading:

        protected override async Task OnInitAsync()
        {
            // Simulate a web service call to get data
            await Task.Delay(Delay);

            Status = StringConstants.RandomText;

            // Ok - we're done loading. Notify the LoadStateManager!
            LoadState.OnLoadingComplete();
        }

Loading Overlay View and Model

The final view is the LoadingOverlayView:

@inherits LoadingOverlayModel

@if (!LoadStateManager.IsLoadingComplete)
{
    <div class="loading-overlay">
        <div class="loader-container">
            <div class="dot-loader"></div>
            <div class="dot-loader"></div>
            <div class="dot-loader"></div>
        </div>   
    </div>
}

This is essentially a div that covers the main content area with a simple CSS loading animation in the middle that is displayed as long as LoadStateManager.IsLoadingComplete is false. (The CSS for this overlay is not my main focus for this post, but it can be found towards the bottom of the standard site.css that was created for my new project here for those interested.) This is the same LoadStateManager that is instantiated in the IndexModel and passed to the child components.

Here’s the corresponding model:

    /// <summary>
    /// Model class for <see cref="LoadingOverlayView"/>
    /// </summary>
    public class LoadingOverlayModel: BlazorComponent
    {
        /// <summary>
        /// The <see cref="LoadStateManager"/> for this <see cref="LoadingOverlayModel"/>
        /// </summary>
        [Parameter] protected LoadStateManager LoadStateManager { get; set; }

        protected override async Task OnInitAsync()
        {
            // When LoadStateManager indicates that all components are loaded, notify
            // this component of the state change
            LoadStateManager.LoadingComplete += (sender, args) => StateHasChanged();

            await Task.CompletedTask;
        }
    }

In this model, we subscribe to the LoadStateManager.LoadingComplete event, which will fire when all of the components that the LoadStateManager is monitoring have completed loading. When the event fires, we simply need to call StateHasChanged() to tell the component to update itself, since it is bound directly to LoadStateManager.IsLoadingComplete.

Helper Classes

LoadStateManager

As we’ve already mentioned, LoadStateManager manages the load state of a collection of components on a screen. Components register themselves with the LoadStateManager. The LoadStateManager keeps a collection of their ComponentLoadStates and subscribes to each one’s LoadingComplete event, triggering the LoadingComplete event when the last one completes:

    /// <summary>
    /// Manages the <see cref="ComponentLoadState"/>s for a particular view
    /// </summary>
    public class LoadStateManager
    {
        private readonly ICollection<ComponentLoadState> _componentLoadStates = new List<ComponentLoadState>();
        
        /// <summary>
        /// Gets a value indicating whether all registered components are loaded
        /// </summary>
        public bool IsLoadingComplete => _componentLoadStates.All(c => c.IsLoaded);

        /// <summary>
        /// Registers an <see cref="ILoadableComponent"/> with this <see cref="LoadStateManager"/>
        /// </summary>
        /// <param name="component"></param>
        public void Register(ILoadableComponent component)
        {
            _componentLoadStates.Add(component.LoadState);

            component.LoadState.LoadingComplete += (sender, args) =>
            {
                if (IsLoadingComplete) OnLoadingComplete();
            };
        }

        /// <summary>
        /// Notifies subscribers that all loading is complete for all registered components
        /// </summary>
        public event EventHandler LoadingComplete;
        
        protected void OnLoadingComplete()
        {
            LoadingComplete?.Invoke(this, new EventArgs());
        }
    }

ComponentLoadState

Finally we have ComponentLoadState, which represents the load state of an individual component. It exposes an IsLoaded property, a LoadingComplete event, and an OnLoadingComplete() method for storing and communicating the component’s load state:

    /// <summary>
    /// Represents the load state of an individual component
    /// </summary>
    public class ComponentLoadState
    {
        /// <summary>
        /// Gets a value indicating whether this component is loaded
        /// </summary>
        public bool IsLoaded { get; private set; }

        /// <summary>
        /// Notifies the <see cref="LoadStateManager"/> that this component has completed loading
        /// </summary>
        public event EventHandler LoadingComplete;

        /// <summary>
        /// Invoked by the corresponding component to indicate that it has completed loading
        /// </summary>
        public void OnLoadingComplete()
        {
            IsLoaded = true;
            LoadingComplete?.Invoke(this, new EventArgs());
        }
    }

Putting It All Together

Once you have your helper classes and overlay view and spinner created, it’s fairly trivial to add this functionality to a Blazor page. The page view just needs to instantiate the LoadStateManager and pass it to its children, and the child components need to define properties for the LoadStateManager and ComponentLoadState, register with the LoadStateManager, and tell their ComponentLoadState when they’re done loading.
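
Condensed into one skeleton, a participating child component model looks roughly like this (the class name and LoadDataAsync are placeholders; the real pieces are shown in full above):

    public class MyWidgetModel : BlazorComponent, ILoadableComponent
    {
        [Parameter] protected LoadStateManager LoadStateManager { get; set; }

        public ComponentLoadState LoadState { get; } = new ComponentLoadState();

        protected override void OnParametersSet()
        {
            base.OnParametersSet();

            // Let the LoadStateManager track this component's load state
            LoadStateManager.Register(this);
        }

        protected override async Task OnInitAsync()
        {
            // Placeholder for the component's real data load (e.g. a web service call)
            await LoadDataAsync();

            // Tell the manager this component is done loading
            LoadState.OnLoadingComplete();
        }

        private Task LoadDataAsync() => Task.CompletedTask;
    }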

I hope you’ve found this post helpful. You can find the full source code for this sample here. Thanks for reading!

–Jon


If you’d like to receive new posts via e-mail, consider joining the newsletter.

Experiences Converting from Client-Side to Server-Side Blazor

I’ve been using client-side Blazor for a couple of months now on one of my side projects and I’ve become a pretty big fan, because it allows me to write a modern, dynamic web app in C# with minimal JavaScript.  The Blazor docs give a nice synopsis of how this happens here:

1. C# code files and Razor files are compiled into .NET assemblies.
2. The assemblies and the .NET runtime are downloaded to the browser.
3. Blazor uses JavaScript to bootstrap the .NET runtime and configures the runtime to load required assembly references. Document object model (DOM) manipulation and browser API calls are handled by the Blazor runtime via JavaScript interoperability.

With Blazor I’m able to build single-page applications using my preferred language in a natural and enjoyable programming paradigm.

(Disclaimer:  Blazor is still considered an experimental framework by Microsoft, so proceed with caution only if you are brave, daring, and have a penchant for adventure.  You’ve been warned…)

Server-Side Blazor

In late July 2018, the Blazor team shipped release 0.5.0, which introduced server-side Blazor.  Initially I dismissed server-side Blazor, quite content to continue working completely client-side.  But as I saw the team putting quite a bit of emphasis on server-side Blazor and read about the benefits it promised over client-side Blazor, I became intrigued.

The release notes do a really nice job of explaining what server-side Blazor is and how it works.  In a nutshell, Blazor was designed so that it can run in a web worker thread separate from the main UI thread, like this:

Blazor running in a separate web worker thread in the browser.

Server-side Blazor leverages this model and stretches the communication over a network connection, using SignalR to send UI updates, raise events, and invoke JavaScript interop calls between the browser and the server, like this:

Server-side Blazor running on the server and communicating with the Browser via SignalR.

The release notes also provide a good breakdown of the benefits and downsides of the server-side model compared to the client-side model.  I’ll highlight just a few of the benefits here that I’ve experienced from the get-go:

  • Faster app load time:  I received early feedback on my client-side Blazor app that it took a long time to load the app initially (on the order of tens of seconds).  This is understandable, as the framework has to ship the entire app, the .NET runtime, and any dependencies down to the client on load.  After switching over to server-side Blazor, my load time went down to sub-second.
  • Much better debugging:  With client-side Blazor there are ways to get basic debugging of the Blazor components working in Chrome developer tools, but it is a far cry from the rich debugging experience that we’re used to in Visual Studio.  I found myself using Debug.WriteLine(..) a lot.  With server-side Blazor, since Blazor component code is running on .NET Core on the server, the Visual Studio debugging tooling just works.
  • Feels like client-side Blazor:  Apart from the improved load time and debugging support, server-side Blazor is almost indistinguishable from client-side Blazor to both the developer and the end-user.  As we’ll see in a moment, apart from a couple of small changes at startup, you develop a server-side Blazor app just like a client-side Blazor app: composing Blazor components in exactly the same way regardless of where they will be running.  And the end-user still has a rich, interactive SPA experience.

Now the downsides to server-side Blazor are what you might expect: increased latency on every UI interaction, no offline support, and scalability limited by the number of client connections that SignalR can manage.  It’s too early for me to speak to the scalability concern, but increased latency is mostly imperceptible to the user given a decent internet connection.

Converting a Solution to Server-Side Blazor

Converting a client-side Blazor solution to server-side Blazor is much easier than you might expect and can be done in a matter of minutes.  (In fact, Suchiman has a demo showing how to dynamically switch between client-side and server-side at runtime based on a querystring parameter.)

A server-side Blazor application consists of two projects, typically named {SolutionName}.App and {SolutionName}.Server, both created by the VS tooling when you create a new server-side Blazor application.  I already had a client-side project that I had named ProjectX.Web.  The first step I took was Add > New Project… > ASP.NET Core Web Application on my solution:

Visual Studio Add New Project dialog - ASP.NET Core Web Application

After entering my desired name and clicking Next, I selected Blazor (Server-side in ASP.NET Core):

New ASP.NET Core Web Application - Blazor (Server-side in ASP.NET Core)

This added two new projects to my solution: ProjectX.App and ProjectX.Server.  Since I already had a ProjectX.Web project representing the client-side piece, I deleted the newly created ProjectX.App project.  I made ProjectX.Server the startup project and added a reference in it to my existing ProjectX.Web.

In my index.html in ProjectX.Web, I replaced this:

<script src="_framework/blazor.webassembly.js" />

With this:

<script src="_framework/blazor.server.js" />

In ProjectX.Server’s Startup.cs, there were two places referencing App.Startup; I changed them to use Web.Startup instead:

public void ConfigureServices(IServiceCollection services)
{
    services.AddServerSideBlazor<ProjectX.Web.Startup>();

    // snip...
}

public void Configure(IApplicationBuilder app, IHostingEnvironment env)
{
    // snip...

    app.UseServerSideBlazor<ProjectX.Web.Startup>();
}

… and that’s all there was to it!  Well, almost…

Gotchas

I ran into a few minor issues that you may or may not encounter, depending on what you’re doing in your application.

  1. Blank Screen After App Load

    After making the changes described above, I built and ran my project.  The loading screen displayed as expected, but then it just displayed a blank white screen with no errors logged to the console.

    Following the guidance in this thread, I tried removing the global.json file that was created when I created my client-side project, which pinned the SDK version to 2.1.3xx.  This didn’t help in my case.  I then checked my .NET Core SDK version by running dotnet --info at the VS Developer Command Prompt.  I was running 2.1.500-preview-009335.  I deleted the preview version of the SDK, and then my app started loading and running.

  2. JS Errors Invoking Blazor Component Methods using .invokeMethod

    With my app running now, I started testing some of the functionality.  I have a couple of controls that invoke methods on my Blazor components via JavaScript that I found were no longer working.  A quick peek at the Chrome dev tools console showed the culprit (thank you, Blazor team, for good error messages!):
    Uncaught Error: The current dispatcher does not support synchronous calls from JS to .NET. Use invokeMethodAsync instead. (blazor.server.js)

    Easy fix: I just changed the couple of spots where I was using .invokeMethod(..) to invoke methods on my Blazor components to .invokeMethodAsync(..), and most of them started working.

  3. JSInvokable Methods Not Marked Virtual

    Even after switching to use .invokeMethodAsync(..), I still had a couple of Blazor component methods that were failing to be invoked from my JS.  I found that the difference between the ones that worked and the ones that didn’t was the virtual keyword on the method declaration.  I added virtual to the methods that weren’t executing and they started working.

That said, sitting here a few days later, I tried removing the virtual keyword from those JSInvokable methods again, and they continue to work.  So I’m not sure why this worked in the first place or if I had another change at that time that actually fixed it (I don’t think so).  Your mileage may vary…

Wrapping Up

Switching my solution from client-side to server-side Blazor was a piece of cake, and the benefits were well worth it.  I can now see the reasons for the recent buzz around server-side Blazor.  If I get to the point where I need to support a larger number of concurrent SignalR connections, I’ll start using Azure SignalR Service and route the communication through it, as described in the Blazor 0.6.0 release notes, but for now I’m content to run it through my app service.


If you’d like to receive new posts via e-mail, consider joining the newsletter.

Adding a Loading Spinner to a Button with Blazor

I’ve been playing with Blazor for several weeks now, and it’s fantastic.  For the uninitiated, Blazor allows you to write .NET code that runs in the browser through the power of WebAssembly (Wasm).  This means that the amount of JavaScript you end up having to write to create a dynamic web UI is significantly reduced, or even eliminated (check out this Tetris clone that @mistermag00 wrote running completely on Blazor with ZERO JavaScript).

Today my objective is much less ambitious than attempting to recreate a classic 1980s-era video game in the browser:  I just want to change the state of a search button to indicate to the user that something is happening while we retrieve their results.  Here’s what we’re shooting for:

Let’s start with the Razor view code:

@if (IsSearching)
{
    <button class="btn btn-primary float-right search disabled" disabled><span class='fa-left fas fa-sync-alt spinning'></span>Searching...</button>
}
else
{
    <button class="btn btn-primary btn-outline-primary float-right search" onclick="@OnSearchAsync">Search</button>
}

Here we have two versions of the button, and we determine which one to display based on an IsSearching boolean property in the Blazor component code.

The first button represents the state of the button while we’re searching.  We set the visual state to disabled with the disabled CSS class and disable clicks with the disabled attribute.  Both buttons have a custom search CSS class which just sets the width of the button so we don’t have a jarring width change when transitioning between states.  I’m using a Font Awesome icon for my spinning icon (so don’t forget the link to their CSS in your HEAD), and animating it with a couple of custom CSS classes that we’ll look at in a minute.

The second button represents the state of the button when we’re not searching.  It has the onclick handler calling into the OnSearchAsync method in my Blazor component code.

Speaking of my Blazor component code, let’s check it out…

public class SearchCriteriaBase : BlazorComponent
{
    protected bool IsSearching { get; set; }

    public async Task OnSearchAsync()
    {
        IsSearching = true;
        StateHasChanged();

        if (CheckIsValid())
        {
            // Make my long-running web service call and do something with the results
        }

        IsSearching = false;
        StateHasChanged();
    }
}

All that’s needed is a simple IsSearching property that  I can set to switch between button states while searching.  I just need to remember to call StateHasChanged() to let Blazor know it needs to update the DOM.

And finally, here’s the custom CSS to make the animation happen:

.fas.spinning {
    animation: spin 1s infinite linear;
    -webkit-animation: spin2 1s infinite linear;
}

@keyframes spin {
    from {
        transform: scale(1) rotate(0deg);
    }

    to {
        transform: scale(1) rotate(360deg);
    }
}

@-webkit-keyframes spin2 {
    from {
        -webkit-transform: rotate(0deg);
    }

    to {
        -webkit-transform: rotate(360deg);
    }
}

.fa-left {
    margin-right: 7px;
}

.btn.search {
    width: 10rem;
}

And that’s it. A simple button state transition with no JavaScript, just the way I like it!

Testing Stripe Webhooks in an ASP.NET Core Project

I’m using Stripe for subscription management and payment processing for the SaaS side project I’m currently working on.  Stripe offers webhook functionality that allows you to register callback endpoints on your API with Stripe, which they will call whenever any one of numerous specified events occurs on their side.

So, for example, in my SaaS I’m offering customers a free 30-day trial period at the beginning of their subscription before they have to provide credit card information.  I can register for the customer.subscription.trial_will_end event with an endpoint of my choice on my API, and Stripe will call that endpoint three days before the end of any of my customers’ trial periods with details about the given customer and their subscription.  I’ll have logic on my side to check to see if we have a credit card for that customer yet, and, if not, send them a friendly e-mail reminding them that their trial is about to expire and they need to enter a credit card if they’d like to continue to use the service.
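
To give a feel for what that looks like in code, here’s a rough sketch of how my event processor might branch on that event type (the event type string is Stripe’s; ICustomerService, INotificationService, and their methods are hypothetical stand-ins for my real logic):

    public class StripeEventProcessor : IStripeEventProcessor
    {
        private readonly ICustomerService _customers;           // hypothetical
        private readonly INotificationService _notifications;   // hypothetical

        public StripeEventProcessor(ICustomerService customers, INotificationService notifications)
        {
            _customers = customers;
            _notifications = notifications;
        }

        public async Task ProcessAsync(StripeEvent stripeEvent)
        {
            switch (stripeEvent.Type)
            {
                case "customer.subscription.trial_will_end":
                    // If we don't have a card on file yet, send a friendly
                    // reminder that the trial is about to expire
                    if (!await _customers.HasCardOnFileAsync(stripeEvent))
                    {
                        await _notifications.SendTrialEndingEmailAsync(stripeEvent);
                    }
                    break;

                // ... handle the other event types I've registered for
            }
        }
    }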

Stripe offers the ability through their dashboard to send test events to a webhook endpoint.  As I worked on my integration this past week, I ran into a couple of small issues in getting the test events to reach my service running locally on my machine.  So here’s a quick summary of what it took to get the messages flowing.

StripeEventsController

First we need a controller with an endpoint that Stripe will call into.  I’m using a single endpoint to catch all events.  It then delegates the processing of events to an IStripeEventProcessor.  Here’s my endpoint:

    [Route("api/[controller]")]
    public class StripeEventsController : Controller
    {
        private readonly IStripeEventProcessor _stripeEventProcessor;
        private readonly IEnvironmentSettings _environmentSettings;

        public StripeEventsController(
            IStripeEventProcessor stripeEventProcessor,
            IEnvironmentSettings environmentSettings)
        {
            _stripeEventProcessor = stripeEventProcessor ?? throw new ArgumentNullException(nameof(stripeEventProcessor));
            _environmentSettings = environmentSettings ?? throw new ArgumentNullException(nameof(environmentSettings));
        }

        [HttpPost]
        public async Task<IActionResult> IndexAsync()
        {
            var json = await new StreamReader(HttpContext.Request.Body).ReadToEndAsync();

            try
            {
                var stripeEvent = StripeEventUtility.ConstructEvent(json,
                    Request.Headers["Stripe-Signature"], _environmentSettings.StripeConfiguration.Value.WebhookSigningSecret);

                await _stripeEventProcessor.ProcessAsync(stripeEvent);

                return Ok();
            }
            catch (StripeException)
            {
                return BadRequest();
            }
        }
    }

Note that I’m passing a WebhookSigningSecret to StripeEventUtility.ConstructEvent(..); this verifies that the event was actually sent by Stripe by checking the “Stripe-Signature” header value.  The webhook signing secret can be obtained from Stripe > Developers > Webhooks > select endpoint > Signing secret.

Turn Off HTTPS Redirection in Development

I’m using HTTPS Redirection in my project to redirect any non-secure requests to their corresponding HTTPS endpoint.  This caused me to receive an error whenever sending a test event: “Test webhook error: 307.”

Stripe expects to receive a 200-series status code back on all webhook calls, so the 307 Temporary Redirect status was a problem.  To resolve this, I modified my Startup.cs to only use HTTPS Redirection when not in development mode, like so:

        public void Configure(IApplicationBuilder app, IHostingEnvironment env)
        {
            if (env.IsDevelopment())
            { 
                // snip...
            }
            else
            {
                // snip...
                app.UseHttpsRedirection(); // <- Moved from outside to inside else block to allow ngrok tunneling for testing Stripe webhooks
            }
            // snip...
            app.UseMvc();
        }

ngrok

In order for Stripe to send test webhook events to our service, it needs to be able to connect to it.  ngrok is a neat little utility that allows you to expose your locally running web app or service via a public URL.  Download ngrok and install it, following the four getting started steps here.

When we’re ready to test, we’ll start up ngrok with the following command (where 64768 is the port number of your service):

ngrok http 64768 -host-header="localhost:64768"

It’s important to note that my service is configured to be accessible via the following bindings:

    <binding protocol="http" bindingInformation="*:64768:localhost" />
    <binding protocol="https" bindingInformation="*:44358:localhost" />

You want to specify the non-secure (non-HTTPS) port when starting up ngrok.  It’s also important to specify the host-header flag; if you don’t, you’ll get a 400 Bad Request on all of your test calls:

Upon starting ngrok, you’ll see a screen like the following, indicating (in my case) that it is forwarding https://96c1bf3b.ngrok.io to my localhost:64768: 

Configure Stripe Webhook Endpoint and Test

Finally, you need to set up a Stripe webhook that points to your Stripe event handler endpoint exposed publicly via ngrok.  This is done by navigating to the Stripe dashboard > Webhooks, and clicking “Add endpoint”.  In my case, my endpoint looks like this:

Now we test our endpoint by clicking “Send test webhook.”  If all goes as planned, you’ll see a successful response like the following:

You can also fire up http://localhost:4040/inspect/http in your browser and see a nice dashboard where you can inspect and replay all requests made through the ngrok public endpoint:

Congratulations!  With just a few steps you’re now successfully sending test Stripe webhook events to your service running locally.

Configuring AutoMapper for use with the ASP.NET Core DI Container

I’m generally a strong proponent of using models for communicating across service boundaries that are separate and distinct from my core domain model. In a typical web application that is backed by a database, this usually means that I have a domain model, a storage model, and a view model, and I map between storage model ↔ domain model and between domain model ↔ view model when crossing their respective boundaries. This keeps my domain model isolated, ensures that it is not forced to change due to concerns in another layer, and promotes adherence to the Single Responsibility Principle (a class should have one, and only one, reason to change).

Having multiple models implies that there is some mechanism for mapping between models. My go-to tool for this task is AutoMapper. Configuring AutoMapper for use in an ASP.NET Core project using the ASP.NET Core DI container differs slightly from my prior experiences with it on past projects. Here’s what I did to get up and running…

Install NuGet Packages

Let’s first install the NuGet packages that we’ll need.  I’m using AutoMapper 7.0.1 and AutoMapper.Extensions.Microsoft.DependencyInjection 5.0.1, which provides AutoMapper extensions on IServiceCollection that allow us to wire up AutoMapper in Startup.ConfigureServices(IServiceCollection services).

Create Models

Suppose we have a Stock domain model class that looks like this:

    public class Stock
    {
        private readonly IStockRepository _repository;

        public Stock(IStockRepository repo)
        {
            _repository = repo ?? throw new ArgumentNullException(nameof(repo));
        }

        public string Name { get; set; }

        public string Symbol { get; set; }
    }

Note that we’re injecting an IStockRepository.  (I often have the aggregate roots in my domain models take a reference to their corresponding repository so I can do neat things like stock.SaveAsync(), but that’s another topic…)

For our purposes here it doesn’t really matter what IStockRepository and StockRepository look like – we’ll just leave them empty:

    public interface IStockRepository
    {
    }

    public class StockRepository : IStockRepository
    {
    }

We have a StockViewModel class that roughly corresponds with our Stock domain model and looks like this:

 
    public class StockViewModel
    {
        public string Name { get; set; }

        public string Symbol { get; set; }
    }

We’d also probably have a Stock storage model class and perhaps other corresponding view model representations of a stock, but we’ll keep it to just these two classes to keep things simple.

Create Mapping Profile

Next we’ll create our mapping profile, which is where we’ll configure our specific type mappings.  Usually I’ll create a profile per bounded context per layer I’m mapping.  So, for example, if I have a single bounded context but I map between storage model ↔ domain model and between domain model ↔ view model, I’ll have a ViewModelMappingProfile and a StorageModelMappingProfile.

Our ViewModelMappingProfile in this case looks like this:

 
    public class ViewModelMappingProfile : Profile
    {
        public ViewModelMappingProfile()
        {           
            CreateMap<StockViewModel, Stock>().ConstructUsingServiceLocator();
        }
    }

It derives from AutoMapper.Profile.  I’ve just got a single, simple mapping from StockViewModel to Stock, and I’ve called ConstructUsingServiceLocator() on the mapping to enable the resolution and injection of dependencies into the target.  Comprehensive documentation on configuring individual mappings can be found here.

Configure Services

Next we configure our services and add AutoMapper to the service collection.  In Startup.ConfigureServices(IServiceCollection services) we add the following after services.AddMvc():

    services.AddAutoMapper();

The line above does a couple of things: it registers IMapper with our DI container, and it searches our assembly for classes that inherit from AutoMapper.Profile and automatically loads them.  So our ViewModelMappingProfile gets loaded automatically at startup, and we can confirm that by putting a breakpoint in its constructor.

We also must register with the DI container any dependencies that will be injected in at map time (like our IStockRepository), as well as any type mappings that are configured with ConstructUsingServiceLocator() (like our Stock domain class).  This part is important when working with the ASP.NET Core DI container – other third-party containers that I’ve worked with in the past (such as Unity) will resolve concrete types automatically without requiring an explicit type mapping.  Not so with the ASP.NET Core DI container, and I spent a bit of time figuring this out.  Since we define the mapping to Stock with ConstructUsingServiceLocator(), it must be explicitly registered with the container.

So, we have the following somewhere in our Startup.ConfigureServices(IServiceCollection services) method:

    services.AddScoped<IStockRepository, StockRepository>();
    services.AddTransient<Stock, Stock>();

Map!

With our configuration complete, all that’s left to do is start using our mappings.  We can inject an instance of IMapper into any of our classes that are resolved using the DI container.
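
For example, a Razor page model might take the mapper in its constructor like this (a sketch; the page class name is arbitrary, and Stock is the view model bound to the form):

    public class EditStockModel : PageModel
    {
        private readonly IMapper _mapper;

        public EditStockModel(IMapper mapper)
        {
            _mapper = mapper ?? throw new ArgumentNullException(nameof(mapper));
        }

        [BindProperty]
        public StockViewModel Stock { get; set; }

        // OnPostAsync (shown below) uses _mapper to map the posted view model
        // to the domain model
    }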

Here’s an example usage in an OnPostAsync() method on a Razor page model that maps from a posted view model to the domain model and then saves the data:

        public async Task<IActionResult> OnPostAsync(CancellationToken cancellationToken)
        {
            if (!ModelState.IsValid)
            {
                return Page();
            }

            var stock = _mapper.Map<StockViewModel, Stock>(Stock);
            await stock.SaveAsync(cancellationToken);

            return RedirectToPage($"./Stock?id={stock.Id}");
        }

Wrapping Up

We’ve shown how to configure and start using AutoMapper with the ASP.NET Core DI container, for a quick and easy way to get model mapping with DI working in your ASP.NET Core project.

It’s worth noting, though, that while the core DI container may be adequate for small, simple projects, it’s not as feature-rich as most other third-party DI containers.  More complex projects might require a more robust and capable DI container to meet their specific needs.  (This post on Stack Overflow gives a nice overview of some of its shortcomings.)

Adding Simple Pagination to a Bootstrap Table in ASP.NET Core

I recently needed to add simple pagination to a Bootstrap table on a page in my ASP.NET Core project.  I’m using Razor Pages, but a similar approach will work with MVC Razor views.

My data set contains 270+ rows, and originally I was returning all 270+ rows to the client and displaying them in a table that scrolled well past the end of the page.  I only want to show ten at a time with a simple paging control.  Here’s what the final product will look like:

The following is a standard Bootstrap table generated from items in Model.Stocks from our page model:

<table class="table table-striped table-bordered table-sm table-responsive">
    <thead>
        <tr>
            <th scope="col">Symbol</th>
            <th scope="col">Name</th>
            <th scope="col">Sector</th>
            <th scope="col">Price</th>
        </tr>
    </thead>
    <tbody>
        @foreach (var item in Model.Stocks)
        {
            <tr>
                <th scope="row">@item.Symbol</th>
                <td>@item.Name</td>
                <td>@item.Sector</td>
                <td>@item.Price</td>
            </tr>
        }
    </tbody>
</table>

Here is the page model:

    public class DividendPicksModel : PageModel
    {
        private readonly IStockService _stockService;

        public DividendPicksModel(
            IStockService stockService)
        {
            _stockService = stockService ?? throw new ArgumentNullException(nameof(stockService));
        }

        public PaginatedList<StockViewModel> Stocks { get; private set; }

        public async Task<IActionResult> OnGetAsync(int? pageIndex, CancellationToken cancellation)
        {
            var stocks = await _stockService.GetCachedStockResources(cancellation) ?? new List<Stock>();

            Stocks = PaginatedList<StockViewModel>.Create(stocks.Select(s => new StockViewModel(s)), pageIndex ?? 1, 10, 5);

            return Page();
        }
    }

The page implements

public async Task<IActionResult> OnGetAsync(int? pageIndex, CancellationToken cancellation)

This is responsible for handling GET requests for the page. There’s not too much to it: retrieve the data from a domain service, set the Stocks collection property on the page model, and return the page.

Note that the method takes int? pageIndex as a parameter which indicates which page of the table is being requested by the client. Not passing this parameter will cause the first page of the table to be returned.

The interesting thing to note here is that the Stocks property is not a standard .NET collection. It’s a PaginatedList<StockViewModel>.  PaginatedList<T> derives from List<T> and represents a “page” of our data, and it exposes the properties needed to enable paging functionality for the collection on the view.  Let’s take a look at it:

    public class PaginatedList<T> : List<T>
    {
        private PaginatedList(List<T> items, int count, int pageIndex, int pageSize, int countOfPageIndexesToDisplay)
        {
            PageIndex = pageIndex;
            TotalPages = (int)Math.Ceiling(count / (double)pageSize);

            SetPageIndexesToDisplay(pageIndex, countOfPageIndexesToDisplay);

            AddRange(items);
        }

        public int PageIndex { get; }

        public int TotalPages { get; }

        public bool HasPreviousPage => (PageIndex > 1);

        public bool HasNextPage => (PageIndex < TotalPages);

        public List<PageIndex> PageIndexesToDisplay { get; private set; }

        public static PaginatedList<T> Create(
            IEnumerable<T> source, int pageIndex, int pageSize, int countOfPageIndexesToDisplay = 3)
        {
            var list = source.ToList();
            var count = list.Count;
            var items = list
                .Skip((pageIndex - 1) * pageSize)
                .Take(pageSize).ToList();
            return new PaginatedList<T>(items, count, pageIndex, pageSize, countOfPageIndexesToDisplay);
        }

        private void SetPageIndexesToDisplay(int pageIndex, int countOfPageIndexesToDisplay)
        {
            PageIndexesToDisplay = new List<PageIndex>();
            if (pageIndex > TotalPages - countOfPageIndexesToDisplay + Math.Floor(countOfPageIndexesToDisplay / 2.0m))
            {
                for (var i = Math.Max(TotalPages - countOfPageIndexesToDisplay + 1, 0); i <= TotalPages; i++)
                {
                    PageIndexesToDisplay.Add(new PageIndex(i, i == pageIndex));
                }
            }
            else if (pageIndex < countOfPageIndexesToDisplay - Math.Floor(countOfPageIndexesToDisplay / 2.0m))
            {
                for (var i = 1; i <= Math.Min(countOfPageIndexesToDisplay, TotalPages); i++)
                {
                    PageIndexesToDisplay.Add(new PageIndex(i, i == pageIndex));
                }
            }
            else
            {
                var startIndex = pageIndex - (int)Math.Floor(countOfPageIndexesToDisplay / 2.0m);
                for (var i = startIndex; i <= startIndex + countOfPageIndexesToDisplay - 1; i++)
                {
                    PageIndexesToDisplay.Add(new PageIndex(i, i == pageIndex));
                }
            }

        }
    }

We create a PaginatedList by calling into the static Create method and passing it the collection that will be paged, the index of the page we need to display, the page size, and the number of page indexes we want to display in the pagination control at a time (in the screenshot at the beginning of the post it’s five).

PaginatedList exposes several properties about the page of data that it represents:

  • PageIndex
  • TotalPages
  • HasPreviousPage
  • HasNextPage
  • PageIndexesToDisplay

… and, of course, since it derives from List<T>, we have access to the list of items on our page.

The most complex part of PaginatedList<T> is SetPageIndexesToDisplay(..), which builds the collection of page indices to display in the Bootstrap pagination control based on the current page index and the total count of indices to display.  PageIndex is a simple POCO that contains the index number and a boolean indicating whether that page index is the active (displayed) page:

    public class PageIndex
    {
        public PageIndex(int index, bool isActive)
        {
            Index = index;
            IsActive = isActive;
        }

        public int Index { get; }

        public bool IsActive { get; }
    }

The final piece to pull this all together is the actual pagination control on the view. Immediately below the table in our HTML we have the following:

@{
    var prevDisabled = !Model.Stocks.HasPreviousPage ? "disabled" : "";
    var nextDisabled = !Model.Stocks.HasNextPage ? "disabled" : "";
}
<nav aria-label="Pagination" class="col-12">
    <ul class="pagination justify-content-end">
        <li class="page-item @prevDisabled">
            <a class="page-link" 
                asp-page="./MyPage" 
                asp-route-pageIndex="@(Model.Stocks.PageIndex - 1)"
                aria-label="Previous">
                <span aria-hidden="true">&laquo;</span>
                <span class="sr-only">Previous</span>
            </a>
        </li>
        @foreach (var i in Model.Stocks.PageIndexesToDisplay)
        {
            var activeClass = i.IsActive ? "active" : "";
            
            <li class="page-item @activeClass"><a class="page-link"  
                                        asp-page="./MyPage"
                                        asp-route-pageIndex="@(i.Index)">@i.Index</a></li>
        }
        <li class="page-item @nextDisabled">
            <a class="page-link" 
                asp-page="./MyPage"
                asp-route-pageIndex="@(Model.Stocks.PageIndex + 1)"
                aria-label="Next">
                <span aria-hidden="true">&raquo;</span>
                <span class="sr-only">Next</span>
            </a>
        </li>
    </ul>
</nav>

Here we use the properties exposed by our PaginatedList<StockViewModel> to control how the pagination control is rendered and functions.  We enable/disable the previous/next buttons based on the HasPreviousPage and HasNextPage properties, respectively.  We build the numeric page index buttons from the PageIndexesToDisplay collection, and we set the link appropriately for the given index.

Clicking one of the links in the pagination control requests the page again, passing the given page index, and the PaginatedList<StockViewModel> is rebuilt for the requested page.

Wrapping Up

We’ve explored here a simple, no-frills way to add pagination to a Bootstrap table.  There are a couple of improvements that could be made:

  • Interacting with the pagination control causes a refresh of the entire page.  This could be modified to execute a little JavaScript to hit an endpoint that returns the new page of data and update the table and pagination control in-place without reloading the whole page.
  • This implementation retrieves the entire data set from the service and then applies paging within our app service.  This is fine for my purposes since my dataset is small, but for larger datasets I would push the execution of the paging (skip/take operators) to the database.  (If using Entity Framework, this could be as simple as deferring query execution until after the Skip and Take are applied in PaginatedList<T>.Create(..).)  A sketch of that approach follows this list.
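
Here’s a rough sketch of that second improvement: an extra factory method that would live inside PaginatedList<T>, accepting an IQueryable<T> so that the count and the Skip/Take page execute in the database rather than in memory.

        public static PaginatedList<T> CreateFromQuery(
            IQueryable<T> source, int pageIndex, int pageSize, int countOfPageIndexesToDisplay = 3)
        {
            // With an EF Core IQueryable, this issues a COUNT query...
            var count = source.Count();

            // ...and this issues a single paged SELECT (OFFSET/FETCH) for just the rows we need
            var items = source
                .Skip((pageIndex - 1) * pageSize)
                .Take(pageSize)
                .ToList();

            return new PaginatedList<T>(items, count, pageIndex, pageSize, countOfPageIndexesToDisplay);
        }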

I may consider these enhancements in the future, but for now this implementation suits my needs just fine.

Switching Your Website to Use HTTPS, the Free and Easy Way

Google has announced that with their July 2018 release of Chrome 68 they’ll start marking all non-HTTPS websites as “not secure.”  Thankfully, these days there are several cheap, and even free, options for securing your site’s traffic.  There’s really no reason not to do it.

One such option is to use CloudFlare’s free SSL.  Security expert Troy Hunt has a fantastic 4-part video series called HTTPS Is Easy! that takes you through the simple steps to secure your site.

Fixing Mixed Content Issues on WordPress Sites

While I was able to get a couple of my websites switched to HTTPS with no issues in a matter of minutes, I ran into issues with one of my Azure-hosted WordPress sites.  Much of the site’s functionality broke in the switch and I was now getting a host of mixed content errors, particularly with plugins that I was using, like the following:

The fix for this is simple: install the SSL Insecure Content Fixer plugin.

Once installed, open the Settings for the plugin.  I was able to leave all of the settings at their default except for HTTPS detection.  Change this setting to use HTTP_X_FORWARDED_PROTO (e.g. load balancer, reverse proxy, NginX), as this is the header that CloudFlare uses.

After applying this change, my mixed content errors went away, my site was secured, and I was good to go.