Category Archives: ASP.NET

YAML-defined CI/CD for ASP .NET Core 3.1

By Shahed C on June 24, 2020

This is the twenty-fifth of a new series of posts on ASP .NET Core 3.1 for 2020. In this series, we’ll cover 26 topics over a span of 26 weeks from January through June 2020, titled ASP .NET Core A-Z! To differentiate from the 2019 series, the 2020 series will mostly focus on a growing single codebase (NetLearner!) instead of new unrelated code snippets each week.

Previous post:

NetLearner on GitHub:

In this Article:

Y is for YAML-defined CI/CD for ASP .NET Core

If you haven’t heard of it yet, YAML is yet another markup language. No really, it is: the name originally stood for Yet Another Markup Language, although it is now officially a recursive acronym for “YAML Ain’t Markup Language”. If you need a reference for YAML syntax and how it applies to Azure DevOps Pipelines, check out the official docs:

In the NetLearner repository, check out the sample YAML code:

NOTE: Before using the aforementioned YAML sample in an Azure DevOps project, please replace any placeholder values and rename the file to remove the .txt suffix.

In the context of Azure DevOps, you can use Azure Pipelines with YAML to make it easier for you to set up a CI/CD pipeline for Continuous Integration and Continuous Deployment. This includes steps to build and deploy your app. Pipelines consist of stages, which consist of jobs, which consist of steps. Each step could be a script or a task. In addition to these options, a step can also be a reference to an external template to make it easier to create your pipelines.
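To visualize that hierarchy, here is a minimal, illustrative pipeline with a single stage containing a single job; the stage/job names and steps below are placeholders, not taken from the NetLearner sample:

stages:
- stage: Build
  jobs:
  - job: BuildAndTest
    pool:
      vmImage: 'windows-latest'
    steps:
    - script: dotnet build --configuration Release
      displayName: 'Build the solution'
    - task: DotNetCoreCLI@2
      displayName: 'Run unit tests'
      inputs:
        command: test
        projects: '**/*Tests/*.csproj'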

DevOps Pipeline in YAML

Getting Started With Pipelines

To get started with Azure Pipelines in Azure DevOps:

  1. Log in at: https://dev.azure.com
  2. Create a Project for your Organization
  3. Add a new Build Pipeline under Pipelines | Builds
  4. Connect to your code location, e.g. GitHub repo
  5. Select your repo, e.g. a specific GitHub repository
  6. Configure your YAML
  7. Review your YAML and Run it

From here on forward, you may come back to your YAML here, edit it, save it, and run as necessary. You’ll even have the option to commit your YAML file “azure-pipelines.yml” into your repo, either in the master branch or in a separate branch (to be submitted as a Pull Request that can be merged).

YAML file in Azure DevOps

If you need more help getting started, check out the official docs and Build 2019 content at:

To add pre-written snippets to your YAML, you may use the Task Assistant side panel to insert a snippet directly into your YAML file. This includes tasks for .NET Core builds, Azure App Service deployment and more.

Task Assistant in Azure DevOps

OS/Environment and Runtime

From the sample repo, take a look at the YAML code sample “azure-pipelines.yml.txt”. Near the top, there is a definition for a “pool” with a “vmImage” set to ‘windows-latest’.

pool:
  vmImage: 'windows-latest'

If I had started off with the default YAML pipeline configuration for a .NET Core project, I would probably get a vmImage value set to ‘ubuntu-latest’. This is just one of many possible values. From the official docs on Microsoft-hosted agents, we can see that Microsoft’s agent pool provides at least the following VM images across multiple platforms, e.g.

  • Windows Server 2019 with Visual Studio 2019 (windows-latest OR windows-2019)
  • Windows Server 2016 with Visual Studio 2017 (vs2017-win2016)
  • Ubuntu 18.04 (ubuntu-latest OR ubuntu-18.04)
  • Ubuntu 16.04 (ubuntu-16.04)
  • macOS X Mojave 10.14 (macOS-10.14)
  • macOS X Catalina 10.15 (macOS-latest OR macOS-10.15)

In addition to the OS/Environment, you can also set the .NET Core runtime version. This may come in handy if you need to explicitly set the runtime for your project.

steps:
- task: DotNetCoreInstaller@0
  inputs:
    version: '3.1.0'

Restore and Build

Once you’ve set up your OS/environment and runtime, you can restore (dependencies) and build your project. Restoring dependencies with a command is optional since the Build step will take care of the Restore as well. To build a specific configuration by name, you can set up a variable first to define the build configuration, and then pass in the variable name to the build step.

variables:
  BuildConfiguration: 'Release'
  SolutionPath: 'YOUR_SOLUTION_FOLDER/YOUR_SOLUTION.sln'

steps:
# Optional: 'dotnet restore' is not necessary because the 'dotnet build' command executes restore as well.
#- task: DotNetCoreCLI@2
#  displayName: 'Restore dependencies'
#  inputs:
#    command: restore
#    projects: '**/*.csproj'

- task: DotNetCoreCLI@2
  displayName: 'Build web project'
  inputs:
    command: 'build'
    projects: $(SolutionPath)
    arguments: '--configuration $(BuildConfiguration)'

In the above snippet, the BuildConfiguration is set to ‘Release’ so that the project is built for its ‘Release’ configuration. The displayName is a friendly name in a text string (for any step) that may include variable names as well. This is useful for observing logs and messages during troubleshooting and inspection.

NOTE: You may also use script steps to make use of dotnet commands with parameters you may already be familiar with, if you’ve been using .NET Core CLI Commands. This makes it easier to run steps without having to spell everything out.

variables:
  buildConfiguration: 'Release'

steps:
- script: dotnet restore

- script: dotnet build --configuration $(buildConfiguration)
  displayName: 'dotnet build $(buildConfiguration)'

From the official docs, here are some more detailed steps for restore and build, if you wish to customize your steps and tasks further:

steps:
- task: DotNetCoreCLI@2
  inputs:
    command: restore
    projects: '**/*.csproj'
    feedsToUse: config
    nugetConfigPath: NuGet.config
    externalFeedCredentials: <Name of the NuGet service connection>

Note that you can set your own values for an external NuGet feed to restore dependencies for your project. Once restored, you may also customize your build steps/tasks.

steps:
- task: DotNetCoreCLI@2
  displayName: Build
  inputs:
    command: build
    projects: '**/*.csproj'
    arguments: '--configuration Release'

Unit Testing and Code Coverage

Although unit testing is not required for a project to be compiled and deployed, it is absolutely essential for any real-world application. In addition to running unit tests, you may also want to measure your code coverage for those unit tests. All these are possible via YAML configuration.

From the official docs, here is a snippet to run your unit tests, that is equivalent to a “dotnet test” command for your project:

steps:
- task: DotNetCoreCLI@2
  inputs:
    command: test
    projects: '**/*Tests/*.csproj'
    arguments: '--configuration $(buildConfiguration)'

Also, here is another snippet to collect code coverage:

steps:
- task: DotNetCoreCLI@2
  inputs:
    command: test
    projects: '**/*Tests/*.csproj'
    arguments: '--configuration $(buildConfiguration) --collect "Code coverage"'

Once again, the above snippet uses the “dotnet test” command, but also adds the --collect option to enable the data collector for your test run. The text string value that follows is a friendly name that you can set for the data collector. For more information on “dotnet test” and its options, check out the docs at:

Publish and Deploy

Finally, it’s time to package and deploy your application. In this example, I am deploying my web app to Azure App Service.

- task: DotNetCoreCLI@2
  displayName: 'Publish and zip'
  inputs:
    command: publish
    publishWebProjects: False
    projects: $(SolutionPath)
    arguments: '--configuration $(BuildConfiguration) --output $(Build.ArtifactStagingDirectory)'
    zipAfterPublish: True

- task: AzureWebApp@1
  displayName: 'Deploy Azure Web App'
  inputs:
    azureSubscription: '<REPLACE_WITH_AZURE_SUBSCRIPTION_INFO>'
    appName: <REPLACE_WITH_EXISTING_APP_SERVICE_NAME>
    appType: 'webApp'
    package: $(Build.ArtifactStagingDirectory)/**/*.zip

The above snippet runs a “dotnet publish” command with the proper configuration setting, followed by an output location, e.g. Build.ArtifactStagingDirectory. The value for the output location is one of many predefined build/system variables, e.g. System.DefaultWorkingDirectory, Build.StagingDirectory, Build.ArtifactStagingDirectory, etc. You can find out more about these variables from the official docs:

Note that there is a placeholder text string for the Azure Subscription ID. If you use the Task Assistant panel to add an “Azure App Service Deploy” snippet, you will be prompted to select your Azure Subscription, and a Web App location to deploy to, including deployment slots if necessary.

The PublishBuildArtifacts task uploads the package to a file container, ready for deployment. After your artifacts are ready, a zip file will become available in a named container, e.g. ‘drop’.

# Optional step if you want to deploy to some other system using a Release pipeline or inspect the package afterwards
- task: PublishBuildArtifacts@1
  displayName: 'Publish Build artifacts'
  inputs:
    PathtoPublish: '$(Build.ArtifactStagingDirectory)'
    ArtifactName: 'drop'
    publishLocation: 'Container'

You may use the Azure DevOps portal to inspect the progress of each step and troubleshoot any failed steps. You can also drill down into each step to see the commands that are running in the background, followed by any console messages.

Azure DevOps success messages

NOTE: to set up a release pipeline with multiple stages and optional approval conditions, check out the official docs at:

Triggers, Tips & Tricks

Now that you’ve set up your pipeline, how does this all get triggered? If you’ve taken a look at the sample YAML file, you will notice that it starts with a trigger section, followed by the branch name “master”. This ensures that the pipeline will be triggered every time code is pushed to the corresponding code repository’s master branch. When using a template upon creating the YAML file, this trigger should be automatically included for you.

trigger: 
- master

To include more triggers, you may specify triggers for specific branches to include or exclude.

trigger:
  branches:
    include:
    - master
    - releases/*
    exclude:
    - releases/old*

Finally here are some tips and tricks when using YAML to set up CI/CD using Azure Pipelines:

  • Snippets: when you use the Task Assistant panel to add snippets into your YAML, be careful where you are adding each snippet. It will be inserted wherever your cursor is positioned, so make sure you’ve clicked into the correct location before inserting anything.
  • Order of tasks and steps: Verify that you’ve inserted (or typed) your tasks and steps in the correct order. For example: if you try to deploy an app before publishing it, you will get an error.
  • Indentation: Whether you’re typing your YAML or using the snippets (or some other tool), use proper indentation. You will get syntax errors if the steps and tasks aren’t indented correctly.
  • Proper Runtime/OS: Assign the proper values for the desired runtime, environment and operating system.
  • Publish: Don’t forget to publish before attempting to deploy the build.
  • Artifacts location: Specify the proper location(s) for artifacts when needed.
  • Authorize Permissions: When connecting your Azure Pipeline to your code repository (e.g. GitHub repo) and deployment location (e.g. Azure App Service), you will be prompted to authorize the appropriate permissions. Be aware of what permissions you’re granting.
  • Private vs Public: Both your Project and your Repo can be private or public. If you try to mix and match a public Project with a private Repo, you may get the following warning message: “You selected a private repository, but this is a public project. Go to project settings to change the visibility of the project.” 

References

XML + JSON Output for Web APIs in ASP .NET Core 3.1

By Shahed C on June 22, 2020

This is the twenty-fourth of a new series of posts on ASP .NET Core 3.1 for 2020. In this series, we’ll cover 26 topics over a span of 26 weeks from January through June 2020, titled ASP .NET Core A-Z! To differentiate from the 2019 series, the 2020 series will mostly focus on a growing single codebase (NetLearner!) instead of new unrelated code snippets each week.

Previous post:

NetLearner on GitHub:

In this Article:

X is for XML + JSON Output

XML (eXtensible Markup Language) is a popular document format that has been used for a variety of applications over the years, including Microsoft Office documents, SOAP Web Services, application configuration and more. JSON (JavaScript Object Notation) was derived from object literals of JavaScript, but has also been used for storing data in both structured and unstructured formats, regardless of the language used. In fact, ASP .NET Core applications switched from XML-based .config files to JSON-based .json settings files for application configuration.

Returning XML/JSON format from a Web API

Returning JsonResult and IActionResult

Before we get into XML output for your Web API, let’s start off with JSON output first, and then we’ll get to XML. If you run the Web API sample project in the NetLearner repository, you’ll notice a LearningResourcesController.cs file that represents a “Learning Resources Controller” that exposes API endpoints. These endpoints can serve up both JSON and XML results of Learning Resources, i.e. blog posts, tutorials, documentation, etc.

Run the application and navigate to the following endpoint in an API testing tool, e.g. Postman:

  • https://localhost:44350/api/LearningResources
Sample JSON data in Postman

This triggers a GET request by calling the LearningResourcesController‘s Get() method:

// GET: api/LearningResources
[HttpGet]
public JsonResult Get()
{
   return new JsonResult(_sampleRepository.LearningResources());
}

In this case, the Get() method returns a JsonResult object that serializes a list of Learning Resources. For simplicity, the _sampleRepository object’s LearningResources() method (in SampleRepository.cs) returns a hard-coded list of LearningResource objects. Its implementation here isn’t important, because you would typically retrieve such values from a persistent data store, preferably through some sort of service class.

public List<LearningResource> LearningResources()
{
   ... 
   return new List<LearningResource>
   {
      new LearningResource
      {
         Id= 1,
         Name= "ASP .NET Core Docs",
         Url = "https://docs.microsoft.com/aspnet/core",
         ...
      },
      ... 
   }
}

The JSON result looks like the following, where a list of learning resources are returned:

[
    {
        "id": 1,
        "name": "ASP .NET Core Docs",
        "url": "https://docs.microsoft.com/aspnet/core",
        "resourceListId": 1,
        "resourceList": {
            "id": 1,
            "name": "RL1",
            "learningResources": []
        },
        "contentFeedUrl": null,
        "learningResourceTopicTags": null
    },
    {
        "id": 2,
        "name": "Wake Up And Code!",
        "url": "https://WakeUpAndCode.com",
        "resourceListId": 1,
        "resourceList": {
            "id": 1,
            "name": "RL1",
            "learningResources": []
        },
        "contentFeedUrl": "https://WakeUpAndCode.com/rss",
        "learningResourceTopicTags": null
    }
]

Instead of specifically returning a JsonResult, you could also return a more generic IActionResult, which can still be interpreted as JSON. Run the application and navigate to the following endpoint, which includes the action method “search” followed by a QueryString parameter “fragment” for a partial match.

  • https://localhost:44350/api/LearningResources/search?fragment=Wa
Sample JSON data with search string

This triggers a GET request by calling the LearningResourcesController‘s Search() method, with its fragment parameter set to “Wa” for a partial text search:

// GET: api/LearningResources/search?fragment=Wa
[HttpGet("Search")]
public IActionResult Search(string fragment)
{
   var result = _sampleRepository.GetByPartialName(fragment);
   if (!result.Any())
   {
      return NotFound(fragment);
   }
   return Ok(result);
}

In this case, the GetByPartialName() method returns a List of LearningResource objects that are returned as JSON by default, with an HTTP 200 OK status. In case no results are found, the action method will return a 404 with the NotFound() method.

public List<LearningResource> GetByPartialName(string nameSubstring)
{
   return LearningResources()
      .Where(lr => lr.Name
         .IndexOf(nameSubstring, 0, StringComparison.CurrentCultureIgnoreCase) != -1)
      .ToList();
}

The JSON result looks like the following, which includes any learning resource that partially matches the string fragment provided:

[
    {
        "id": 2,
        "name": "Wake Up And Code!",
        "url": "https://WakeUpAndCode.com",
        "resourceListId": 1,
        "resourceList": {
            "id": 1,
            "name": "RL1",
            "learningResources": []
        },
        "contentFeedUrl": "https://WakeUpAndCode.com/rss",
        "learningResourceTopicTags": null
    }
]

Returning Complex Objects

An overloaded version of the Get() method takes in a “listName” string parameter to filter results by a list name for each learning resource in the repository. Instead of returning a JsonResult or IActionResult, this one returns a complex object (LearningResource) that contains properties that we’re interested in.

// GET api/LearningResources/RL1
[HttpGet("{listName}")]
public LearningResource Get(string listName)
{
   return _sampleRepository.GetByListName(listName);
}

The GetByListName() method in the SampleRepository.cs class simply checks for a learning resource by the listName parameter and returns the first match. Again, the implementation is not particularly important, but it illustrates how you can pass in parameters to get back JSON results.

public LearningResource GetByListName(string listName)
{
   return LearningResources().FirstOrDefault(lr => lr.ResourceList.Name == listName);
}

While the application is running, navigate to the following endpoint:

  • https://localhost:44350/api/LearningResources/RL1
Sample JSON data with property filter

This triggers another GET request by calling the LearningResourcesController‘s overloaded Get() method, with the listName parameter. When passing the list name “RL1”, this returns one item, as shown below:

{
    "id": 1,
    "name": "ASP .NET Core Docs",
    "url": "https://docs.microsoft.com/aspnet/core",
    "resourceListId": 1,
    "resourceList": {
        "id": 1,
        "name": "RL1",
        "learningResources": []
    },
    "contentFeedUrl": null,
    "learningResourceTopicTags": null
}

Another example with a complex result takes in a similar parameter via QueryString and checks for an exact match with a specific property. In this case the Queried() action method calls the repository’s existing GetByListName() method to find a specific learning resource by its matching list name.

// GET: api/LearningResources/queried?listName=RL1
[HttpGet("Queried")]
public LearningResource Queried(string listName)
{
 return _sampleRepository.GetByListName(listName);
}

While the application is running, navigate to the following endpoint:

  • https://localhost:44350/api/LearningResources/Queried?listName=RL1
Sample JSON data with QueryString parameter

This triggers a GET request by calling the LearningResourcesController‘s Queried() method, with the listName parameter. When passing the list name “RL1”, this returns one item, as shown below:

{
    "id": 1,
    "name": "ASP .NET Core Docs",
    "url": "https://docs.microsoft.com/aspnet/core",
    "resourceListId": 1,
    "resourceList": {
        "id": 1,
        "name": "RL1",
        "learningResources": []
    },
    "contentFeedUrl": null,
    "learningResourceTopicTags": null
}

As you can see, the above result is in JSON format for the returned object.

XML Output

Wait a minute… with all these JSON results, when will we get to XML output? Not to worry, there are multiple ways to get XML results while reusing the above code. First, update your Startup.cs file’s ConfigureServices() to include a call to services.AddControllers().AddXmlSerializerFormatters():

public void ConfigureServices(IServiceCollection services)
{
   ...
   services.AddControllers()
    .AddXmlSerializerFormatters();
   ...
}

In Postman, set the request’s Accept header value to “application/xml” before requesting the endpoint, then run the application and navigate to the following endpoint once again:

  • https://localhost:44350/api/LearningResources/RL1
XML-formatted results in Postman without code changes

This should provide the following XML results:

<LearningResource xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xmlns:xsd="http://www.w3.org/2001/XMLSchema">
    <Id>1</Id>
    <Name>ASP .NET Core Docs</Name>
    <Url>https://docs.microsoft.com/aspnet/core</Url>
    <ResourceListId>1</ResourceListId>
    <ResourceList>
        <Id>1</Id>
        <Name>RL1</Name>
        <LearningResources />
    </ResourceList>
</LearningResource>

Since the action method returns a complex object, the result can easily be switched to XML simply by changing the Accept header value. In order to return XML using an IActionResult method, you should also use the [Produces] attribute, which can be set to “application/xml” at the API Controller level.

[Produces("application/xml")]
[Route("api/[controller]")]
[ApiController]
public class LearningResourcesController : ControllerBase
{
   ...
}

Then revisit the following endpoint, calling the Queried() action method with the listName parameter set to “RL1”:

  • https://localhost:44350/api/LearningResources/Queried?listName=RL1

At this point, it is no longer necessary to set the Accept header to “application/xml” (in Postman) during the request, since the [Produces] attribute is given priority over it.

XML-formatted output using Produces attribute

This should produce the following result, with a LearningResource object in XML:

<LearningResource xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xmlns:xsd="http://www.w3.org/2001/XMLSchema">
    <Id>1</Id>
    <Name>ASP .NET Core Docs</Name>
    <Url>https://docs.microsoft.com/aspnet/core</Url>
    <ResourceListId>1</ResourceListId>
    <ResourceList>
        <Id>1</Id>
        <Name>RL1</Name>
        <LearningResources />
    </ResourceList>
</LearningResource>

As for the first Get() method that returns a JsonResult, neither the [Produces] attribute nor the Accept header value can override it to change the result to XML format.

To recap, the order of precedence is as follows (illustrated in the sketch after this list):

  1. public JsonResult Get()
  2. [Produces(“application/…”)]
  3. Accept: “application/…”
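To illustrate, here is a condensed sketch of the controller described above, annotated with which rule wins for each action. This is a restatement of the earlier snippets (including the _sampleRepository field they rely on), not additional NetLearner code:

[Produces("application/xml")]
[Route("api/[controller]")]
[ApiController]
public class LearningResourcesController : ControllerBase
{
    // 1. JsonResult wins: always serialized as JSON,
    //    regardless of [Produces] or the Accept header.
    [HttpGet]
    public JsonResult Get()
    {
        return new JsonResult(_sampleRepository.LearningResources());
    }

    // 2. [Produces] wins next: this IActionResult comes back as XML,
    //    even if the client sends Accept: application/json.
    [HttpGet("Search")]
    public IActionResult Search(string fragment)
    {
        return Ok(_sampleRepository.GetByPartialName(fragment));
    }

    // 3. Without [Produces], the Accept header decides the format
    //    (JSON by default, XML when AddXmlSerializerFormatters() is configured).
}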

References

Worker Service in .NET Core 3.1

By Shahed C on June 17, 2020

This is the twenty-third of a new series of posts on ASP .NET Core 3.1 for 2020. In this series, we’ll cover 26 topics over a span of 26 weeks from January through June 2020, titled ASP .NET Core A-Z! To differentiate from the 2019 series, the 2020 series will mostly focus on a growing single codebase (NetLearner!) instead of new unrelated code snippets each week.

Previous post:

NetLearner on GitHub:

NOTE: The Worker Service sample is a meta project that generates Word documents from blog posts, to auto-generate an ebook from this blog series. You can check out the code in the following experimental subfolder, merged from a branch:

In this Article:

W is for Worker Service

When you think of ASP .NET Core, you probably think of web application backend code, including MVC and Web API. MVC Views and Razor Pages also allow you to use backend code to generate frontend UI with HTML elements. The all-new Blazor goes one step further to allow client-side .NET code to run in a web browser, using WebAssembly. And finally, we now have a template for Worker Service applications.

The new project type was introduced in early previews of ASP .NET Core 3.0 and released with 3.0. Although the project template was initially listed under the Web templates, it has since been relocated one level up in the New Project wizard. This is a great way to create potentially long-running, cross-platform services in .NET Core. This article covers running the service on the Windows operating system.

Cross-platform .NET Core Worker Service

New Worker Service Project

The quickest way to create a new Worker Service project in Visual Studio 2019 is to use the latest template available with .NET Core 3.1. You may also use the appropriate dotnet CLI command.

Launch Visual Studio and select the Worker service template as shown below. After selecting the location, verify the version number (e.g. .NET Core 3.1) to create the worker service project.

Worker Service template in Visual Studio 2019
Worker Service on .NET Core 3.1

To use the Command Line, simply use the following command:

> dotnet new worker -o myproject

where -o is an optional flag to provide the output folder name for the project.

You can learn more about this template at the following location:

Program and BackgroundService

The Program.cs class contains the usual Main() method and a familiar CreateHostBuilder() method. This can be seen in the snippet below:

public class Program
{
   public static void Main(string[] args)
   {
      CreateHostBuilder(args).Build().Run();
   }

   public static IHostBuilder CreateHostBuilder(string[] args) =>
      Host.CreateDefaultBuilder(args)
      .ConfigureServices((hostContext, services) =>
      {
         services.AddHostedService<Worker>();
      });
 }

Things to note:

  1. The Main method calls the CreateHostBuilder() method with any passed parameters, builds it and runs it.
  2. As of ASP .NET Core 3.0, the Web Host Builder has been replaced by a Generic Host Builder. The so-called Generic Host Builder was covered in an earlier blog post in this series.
  3. CreateHostBuilder() creates the host and configures it by calling AddHostedService<T>, where T is an IHostedService, e.g. a worker class that is a child of BackgroundService.

The worker class, Worker.cs, is defined as shown below:

public class Worker : BackgroundService
{
   // ...
 
   protected override async Task ExecuteAsync(CancellationToken stoppingToken)
   {
      // do stuff here
   }
}

Things to note:

  1. The worker class derives from the BackgroundService class, which comes from the namespace Microsoft.Extensions.Hosting
  2. The worker class can then override the ExecuteAsync() method to perform any long-running tasks, as shown in the sketch below.
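For context, here is roughly what a fleshed-out worker might look like, modeled on the default template’s logging loop (a minimal sketch; in the sample project, the DocMaker worker performs its document generation inside this loop instead):

using System;
using System.Threading;
using System.Threading.Tasks;
using Microsoft.Extensions.Hosting;
using Microsoft.Extensions.Logging;

public class Worker : BackgroundService
{
    private readonly ILogger<Worker> _logger;

    public Worker(ILogger<Worker> logger)
    {
        _logger = logger;
    }

    protected override async Task ExecuteAsync(CancellationToken stoppingToken)
    {
        // Keep working until the host signals a shutdown via the cancellation token
        while (!stoppingToken.IsCancellationRequested)
        {
            _logger.LogInformation("Worker running at: {time}", DateTimeOffset.Now);

            // Wait 5 seconds between iterations (the delay also honors cancellation)
            await Task.Delay(5000, stoppingToken);
        }
    }
}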

In the sample project, a utility class (DocEngine.cs) is used to convert a web page (e.g. a blog post or article) into a Word document for offline viewing. Fun fact: when this A-Z series wraps up, the blog posts will be assembled into a free ebook, by using this DocMaker, which uses some 3rd-party NuGet packages to generate the Word document.

Logging in a Worker Service

Logging in ASP .NET Core has been covered in great detail in an earlier blog post in this series. To get a recap, take a look at the following writeup:

To use Logging in your Worker Service project, you may use the following code in your Program.cs class:

using Microsoft.Extensions.Logging;

public static IHostBuilder CreateHostBuilder(string[] args) =>
   Host.CreateDefaultBuilder(args)
   .ConfigureLogging(loggerFactory => loggerFactory.AddEventLog())
   .ConfigureServices((hostContext, services) =>
   {
      services.AddHostedService<Worker>();
   });

  1. Before using the extension method, add its NuGet package to your project:
    • Microsoft.Extensions.Logging.EventLog
  2. Add the appropriate namespace to your code:
    • using Microsoft.Extensions.Logging;
  3. Call the method ConfigureLogging() and call the appropriate logging method, e.g. AddEventLog()

The list of available loggers includes:

  • AddConsole()
  • AddDebug()
  • AddEventLog()
  • AddEventSourceLogger()

The Worker class can then accept an injected ILogger<Worker> object in its constructor:

private readonly ILogger<Worker> _logger;

public Worker(ILogger<Worker> logger)
{
   _logger = logger;
}

Running the Worker Service

NOTE: Run Powershell in Administrator Mode before running the commands below.

Before you continue, add a call to UseWindowsService() in your Program class, or verify that it’s already there. To call UseWindowsService(), the following package must be installed in your project: Microsoft.Extensions.Hosting.WindowsServices

The official announcement and initial document referred to UseServiceBaseLifetime() in an earlier preview. This method was renamed to UseWindowsService() before release.

   public static IHostBuilder CreateHostBuilder(string[] args) =>
      Host.CreateDefaultBuilder(args)
      .UseWindowsService()
      .ConfigureServices((hostContext, services) =>
      {
         services.AddHostedService<Worker>();
      });

According to the code documentation, UseWindowsService() does the following:

  1. Sets the host lifetime to WindowsServiceLifetime
  2. Sets the Content Root
  3. Enables logging to the event log with the application name as the default source name

You can run the Worker Service in various ways:

  1. Build and Debug/Run from within Visual Studio.
  2. Publish to an exe file and run it
  3. Run the sc utility (from Windows\System32) to create a new service

To publish the Worker Service as an exe file with dependencies, run the following dotnet command:

dotnet publish -o C:\path\to\project\pubfolder

The -o parameter can be used to specify the path to a folder where you wish to generate the published files. It could be the path to your project folder, followed by a new subfolder name to hold your published files, e.g. pubfolder. Make a note of your EXE name, e.g. MyProjectName.exe, and exclude the pubfolder from your source control system.

To create a new service, run sc.exe from your System32 folder and pass in the name of the EXE file generated from the publish command.

> C:\Windows\System32\sc.exe create MyServiceName binPath= C:\path\to\project\pubfolder\MyProjectName.exe

When running the sample manually, you should see some logging messages, as shown below:

info: DocMaker.Worker[0]
 Making doc 1 at: 06/09/2020 00:09:52 -04:00
Making your document...
info:  DocMaker.Worker[0]
 Making doc 2 at: 06/09/2020 00:10:05 -04:00
Making your document...

After the service is installed, it should show up in the operating system’s list of Windows Services:

Windows Services, showing custom Worker Service

NOTE: When porting to other operating systems, the call to UseWindowsService() is safe to leave as is. It doesn’t do anything on a non-Windows system.

References

Validation in ASP .NET Core 3.1

By Shahed C on June 15, 2020

This is the twenty-second of a new series of posts on ASP .NET Core 3.1 for 2020. In this series, we’ll cover 26 topics over a span of 26 weeks from January through June 2020, titled ASP .NET Core A-Z! To differentiate from the 2019 series, the 2020 series will mostly focus on a growing single codebase (NetLearner!) instead of new unrelated code snippets each week.

Previous post:

NetLearner on GitHub:

In this Article:

V is for Validation

To build upon a previous post on Forms and Fields in ASP .NET Core, this post covers Validation in ASP .NET Core. When a user submits form field values, proper validation can help build a more user-friendly and secure web application. Instead of coding each view/page individually, you can simply use server-side attributes in your models/viewmodels.

NOTE: As of ASP .NET Core 2.2, validation may be skipped automatically if ASP .NET Core decides that validation is not needed. According to the “What’s New” release notes, this includes primitive collections (e.g. a byte[] array or a Dictionary<string, string> key-value pair collection).

Validation in ASP .NET Core

Validation Attributes

To implement model validation with [Attributes], you will typically use Data Annotations from the System.ComponentModel.DataAnnotations namespace. The list of attributes goes beyond just validation functionality, though. For example, the DataType attribute takes a data type parameter, which is used for inferring the field’s type and for displaying the field on a view/page (but does not provide validation for the field).

Common attributes include the following:

  • Range: lets you specify min-max values, inclusive of min and max
  • RegularExpression: useful for pattern recognition, e.g. phone numbers, zip/postal codes
  • Required: indicates that a field is required
  • StringLength: sets the maximum length for the string entered
  • MinLength: sets the minimum length of an array or string data

From the sample code, here is an example from the LearningResource model class in NetLearner‘s shared library:

public class LearningResource
{
    public int Id { get; set; }

    [DisplayName("Resource")]
    [Required]
    [StringLength(100)]
    public string Name { get; set; }


    [DisplayName("URL")]
    [Required]
    [StringLength(255)]
    [DataType(DataType.Url)]
    public string Url { get; set; }

    public int ResourceListId { get; set; }
    [DisplayName("In List")]
    public ResourceList ResourceList { get; set; }

    [DisplayName("Feed Url")]
    public string ContentFeedUrl { get; set; }

    public List<LearningResourceTopicTag> LearningResourceTopicTags { get; set; }
}

From the above code, you can see that:

  • The value for Name is a required string that must be no longer than 100 characters
  • The value for Url is a required string that must be no longer than 255 characters
  • The value for ContentFeedUrl can be left blank, since it is not marked as [Required]
  • When the DataType is provided (e.g. DataType.Url, Currency, Date, etc), the field is displayed appropriately in the browser, with the proper formatting
  • For numeric values, you can also use the [Range(x,y)] attribute, where x and y sets the minimum and maximum values allowed for the number

Here’s what it looks like in a browser when validation fails:

Validation errors in NetLearner.MVC
Validation errors in NetLearner.Pages
Validation errors in NetLearner.Blazor

The validation rules make it easier for the user to correct their entries before submitting the form.

  • In the above scenario, the “is required” messages are displayed directly in the browser through client-side validation.
  • For field-length restrictions, the client-side form will automatically prevent the entry of string values longer than the maximum threshold
  • If a user attempts to circumvent any validation requirements on the client-side, the server-side validation will automatically catch them.

In the MVC and Razor Pages web projects, the validation messages are displayed with the help of <div> and <span> elements, using asp-validation-summary and asp-validation-for.

NetLearner.Mvc: /Views/LearningResources/Create.cshtml

<div asp-validation-summary="ModelOnly" class="text-danger"></div>
 <div class="form-group">
     <label asp-for="Name" class="control-label"></label>
     <input asp-for="Name" class="form-control" />
     <span asp-validation-for="Name" class="text-danger"></span>
 </div>

NetLearner.Pages: /Pages/LearningResources/Create.cshtml

<div asp-validation-summary="ModelOnly" class="text-danger"></div>
 <div class="form-group">
     <label asp-for="LearningResource.Name" class="control-label"></label>
     <input asp-for="LearningResource.Name" class="form-control" />
     <span asp-validation-for="LearningResource.Name" class="text-danger"></span>
 </div>

In the Blazor project, as described in the official docs, “The DataAnnotationsValidator component attaches validation support using data annotations” and “The ValidationSummary component summarizes validation messages”.

NetLearner.Blazor: /Pages/ResourceDetail.razor

<EditForm Model="@LearningResourceObject" OnValidSubmit="@HandleValidSubmit">
     <DataAnnotationsValidator />
     <ValidationSummary />

For more information on Blazor validation, check out the official documentation at:

Server-Side Validation

Validation occurs before an MVC controller action (or equivalent handler method for Razor Pages) takes over. As a result, you should check to see if the validation has passed before continuing next steps.

e.g. in an MVC controller

[HttpPost]
[ValidateAntiForgeryToken]
public async Task<IActionResult> Create(...)
{
   if (ModelState.IsValid)
   {
      // ... 
      return RedirectToAction(nameof(Index));
   }
   return View(...);
}

e.g. in a Razor Page’s handler code:

public async Task<IActionResult> OnPostAsync()
{
   if (!ModelState.IsValid)
   {
      return Page();
   }

   //... 
   return RedirectToPage(...);
}

Note that ModelState.IsValid is checked both in the Create() action method of an MVC Controller and in the OnPostAsync() handler method of a Razor Page. If IsValid is true, perform actions as desired. If false, reload the current view/page as is.

In the Blazor example, the OnValidSubmit event is triggered by <EditForm> when a form is submitted, e.g.

<EditForm Model="@SomeModel" OnValidSubmit="@HandleValidSubmit">

The method name specified refers to a C# method that handles the form submission when valid.

private async void HandleValidSubmit()
{
   ...
}

Client-Side Validation

It goes without saying that you should always have server-side validation. All the client-side validation in the world won’t prevent a malicious user from sending a GET/POST request to your form’s endpoint. The antiforgery token support in the Form Tag Helper does provide a certain level of protection against cross-site request forgery, but you still need server-side validation. That being said, client-side validation helps to catch the problem before your server receives the request, while providing a better user experience.

When you create a new ASP .NET Core project using one of the built-in templates for MVC or Razor Pages, you should see a shared partial view called _ValidationScriptsPartial.cshtml. This partial view should include references to jQuery unobtrusive validation, as shown below:

<script src="~/lib/jquery-validation-unobtrusive/jquery.validate.unobtrusive.min.js"></script>

If you create a scaffolded controller with views/pages, you should see the following reference at the bottom of your page or view.

e.g. at the bottom of Create.cshtml view

@section Scripts {
   @{await Html.RenderPartialAsync("_ValidationScriptsPartial");}
}

e.g. at the bottom of the Create.cshtml page

@section Scripts {
   @{await Html.RenderPartialAsync("_ValidationScriptsPartial");}
}

Note that the syntax is identical whether it’s an MVC view or a Razor page. If you ever need to disable client-side validation for some reason, that can be accomplished in different ways, whether it’s for an MVC view or a Razor page. (Blazor makes use of the aforementioned EditForm element in ASP .NET Core to include built-in validation, with the ability to track whether a submitted form is valid or invalid.)

From the official docs, the following code should be used within the ConfigureServices() method of your Startup.cs class, to set ClientValidationEnabled to false in your HtmlHelperOptions configuration.

services.AddMvc().AddViewOptions(options =>
{
   if (_env.IsDevelopment())
   {
      options.HtmlHelperOptions.ClientValidationEnabled = false;
   }
});

Also mentioned in the official docs, the following code can be used for your Razor Pages, within the ConfigureServices() method of your Startup.cs class.

services.Configure<HtmlHelperOptions>(o => o.ClientValidationEnabled = false);

Client to Server with Remote Validation

If you need to call a server-side method while performing client-side validation, you can use the [Remote] attribute on a model property. You would then pass it the name of a server-side action method which returns an IActionResult with a true boolean result for a valid field. This [Remote] attribute is available in the Microsoft.AspNetCore.Mvc namespace, from the Microsoft.AspNetCore.Mvc.ViewFeatures NuGet package.

The model property would look something like this:

[Remote(action: "MyActionMethod", controller: "MyControllerName")]
public string MyProperty { get; set; }

In the controller class, (e.g. MyControllerName), you would define an action method with the name specified in the [Remote] attribute parameters, e.g. MyActionMethod. 

[AcceptVerbs("Get", "Post")]
public IActionResult MyActionMethod(...)
{
   if (TestForFailureHere())
   {
      return Json("Invalid Error Message");
   }
   return Json(true);
}

You may notice that if the validation fails, the controller action method returns a JSON response with an appropriate error message in a string. Instead of a text string, you can also use a false, null, or undefined value to indicate an invalid result. If validation has passed, you would use Json(true) to indicate that the validation has passed.

So, when would you actually use something like this? Any scenario where a selection/entry needs to be validated by the server can provide a better user experience by providing a result as the user is typing, instead of waiting for a form submission. For example: imagine that a user is buying online tickets for an event, and selecting a seat number displayed on a seating chart. The selected seat could then be displayed in an input field and then sent back to the server to determine whether the seat is still available or not.
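To make that scenario concrete, a hypothetical SeatNumber property and its matching controller action might look like the following (the class, controller and method names are made up for illustration, assuming the Microsoft.AspNetCore.Mvc namespace is in scope):

public class TicketOrder
{
    // Client-side validation calls SeatsController.VerifySeat() as the user picks a seat
    [Remote(action: "VerifySeat", controller: "Seats")]
    public string SeatNumber { get; set; }
}

public class SeatsController : Controller
{
    [AcceptVerbs("Get", "Post")]
    public IActionResult VerifySeat(string seatNumber)
    {
        // Hypothetical availability check against your reservation data
        if (IsSeatTaken(seatNumber))
        {
            return Json($"Seat {seatNumber} is no longer available.");
        }
        return Json(true);
    }

    private bool IsSeatTaken(string seatNumber)
    {
        // Placeholder: query the data store for an existing reservation here
        return false;
    }
}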

Custom Attributes

In addition to all of the above, you can simply build your own custom attributes. If you take a look at the classes for the built-in attributes, e.g. RequiredAttribute, you will notice that they also extend the same parent class:

  • System.ComponentModel.DataAnnotations.ValidationAttribute

You can do the same thing with your custom attribute’s class definition:

public class MyCustomAttribute: ValidationAttribute 
{
   // ...
}

The parent class, ValidationAttribute, has a virtual IsValid() method that you can override to return whether validation has been calculated successfully (or not).

public class MyCustomAttribute: ValidationAttribute 
{
   // ...
   protected override ValidationResult IsValid(
      object value, ValidationContext validationContext)
   {
      if (TestForFailureHere())
      {
         return new ValidationResult("Invalid Error Message");
      }
      
      return ValidationResult.Success;
   }
}

You may notice that if the validation fails, the IsValid() method returns a ValidationResult() with an appropriate error message in a string. If validation has passed, you would return ValidationResult.Success to indicate that the validation has passed.
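As a concrete (hypothetical) example, here is a custom attribute that rejects non-HTTPS URLs; the attribute name and error message are made up for illustration, assuming System and System.ComponentModel.DataAnnotations are in scope:

public class HttpsUrlAttribute : ValidationAttribute
{
   protected override ValidationResult IsValid(
      object value, ValidationContext validationContext)
   {
      // Treat empty values as valid; combine with [Required] if the field is mandatory
      var url = value as string;
      if (string.IsNullOrEmpty(url))
      {
         return ValidationResult.Success;
      }

      if (!url.StartsWith("https://", StringComparison.OrdinalIgnoreCase))
      {
         return new ValidationResult(
            $"{validationContext.DisplayName} must be an HTTPS URL.");
      }

      return ValidationResult.Success;
   }
}

A model property could then be decorated with [HttpsUrl] alongside the other data annotations shown earlier.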

References

Unit Testing in ASP .NET Core 3.1

By Shahed C on May 25, 2020

This is the twenty-first of a new series of posts on ASP .NET Core 3.1 for 2020. In this series, we’ll cover 26 topics over a span of 26 weeks from January through June 2020, titled ASP .NET Core A-Z! To differentiate from the 2019 series, the 2020 series will mostly focus on a growing single codebase (NetLearner!) instead of new unrelated code snippets each week.

Previous post:

NetLearner on GitHub:

In this Article:

U is for Unit testing

Whether you’re practicing TDD (Test-Driven Development) or writing your tests after your application code, there’s no doubt that unit testing is essential for web application development. When it’s time to pick a testing framework, there are multiple alternatives such as xUnit.net, NUnit and MSTest. This article will focus on xUnit.net because of its popularity (and its similarity to the alternatives).

In a nutshell: a unit test is code you can write to test your application code. Your web application will not have any knowledge of your test project, but your test project will need to have a dependency on the app project that it’s testing.

Unit Testing Project Dependencies

Here are some poll results, from asking 500+ developers about which testing framework they prefer, showing xUnit.net in the lead (from May 2019).

A similar poll on Facebook also showed xUnit.net leading ahead of other testing frameworks. If you need to see the equivalent attributes and assertions, check out the comparison table provided by xUnit.net:

To follow along, take a look at the test projects on Github:

Setting up Unit Testing

The quickest way to set up unit testing for an ASP .NET Core web app project is to create a new test project using a template. This creates a cross-platform .NET Core project that includes one blank test. In Visual Studio 2019, search for “.net core test project” when creating a new project to identify test projects for MSTest, XUnit and NUnit. Select the XUnit project to follow along with the NetLearner samples.

Test Project Templates in Visual Studio 2019

The placeholder unit test class includes a blank test. Typically, you could create a test class for each application class being tested. The simplest unit test usually includes three distinct steps: Arrange, Act and Assert.

  1. Arrange: Set up any variables and objects necessary.
  2. Act: Call the method being tested, passing any parameters needed
  3. Assert: Verify expected results

The unit test project should have a dependency for the app project that it’s testing. In the test project file NetLearner.Mvc.Tests.csproj, you’ll find a reference to NetLearner.Mvc.csproj.

... 
<ItemGroup>
   <ProjectReference Include="..\NetLearner.Mvc\NetLearner.Mvc.csproj" />
</ItemGroup>
...

In the Solution Explorer panel, you should see a project dependency of the reference project.

Project Reference in Unit Testing Project

If you need help adding reference projects using CLI commands, check out the official docs at:

Facts, Theories and Inline Data

When you add a new xUnit test project, you should get a simple test class (UnitTest1) with an empty test method (Test1). This test class should be a public class and the test method should be decorated with a [Fact] attribute. The attribute indicates that this is a test method without any parameters, e.g. Test1().

public class UnitTest1
{
   [Fact]
   public void Test1()
   {
      
   }
}

In the NetLearner Shared Library test project, you’ll see a test class (ResourceListServiceTests.cs) with a series of methods that take 1 or more parameters. Instead of a [Fact] attribute, each method has a [Theory] attribute. In addition to this primary attribute, each [Theory] attribute is followed by one or more [InlineData] attributes that have sample argument values for each method parameter.

[Theory(DisplayName = "Add New Resource List")]
[InlineData("RL1")]
public async void TestAdd(string expectedName)
{
   ...
}

In the code sample, each occurrence of [InlineData] should reflect the test method’s parameters, e.g.

  • [InlineData(“RL1”)] -> this implies that expectedName = “RL1”
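For a Theory with multiple parameters, each [InlineData] supplies one complete set of arguments. Here is a hypothetical example (not from the NetLearner tests):

[Theory]
[InlineData(2, 3, 5)]   // a = 2, b = 3, expectedSum = 5
[InlineData(0, 0, 0)]   // a = 0, b = 0, expectedSum = 0
public void TestSum(int a, int b, int expectedSum)
{
   Assert.Equal(expectedSum, a + b);
}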

NOTE: If you want to skip a method during your test runs, simply add a Skip parameter to your Fact or Theory with a text string for the “Reason”.

e.g.

  • [Fact(Skip=”this is broken”)]
  • [Theory(Skip=”we should skip this too”)]

Asserts and Exceptions

Back to the 3-step process, let’s explore the TestAdd() method and its method body.

public async void TestAdd(string expectedName)
{
    var options = new DbContextOptionsBuilder<LibDbContext>()
        .UseInMemoryDatabase(databaseName: "TestNewListDb").Options;

    // Set up a context (connection to the "DB") for writing
    using (var context = new LibDbContext(options))
    {
        // 1. Arrange
        var rl = new ResourceList
        {
            Name = "RL1"
        };

        // 2. Act 
        var rls = new ResourceListService(context);
        await rls.Add(rl);
    }

    using (var context = new LibDbContext(options))
    {
        var rls = new ResourceListService(context);
        var result = await rls.Get();

        // 3. Assert
        Assert.NotEmpty(result);
        Assert.Single(result);
        Assert.NotEmpty(result.First().Name);
        Assert.Equal(expectedName, result.First().Name);
    }
}

  1. During the Arrange step, we create a new instance of an object called ResourceList which will be used during the test.
  2. During the Act step, we create a ResourceListService object to be tested, and then call its Add() method to pass along a string value that was assigned via InlineData.
  3. During the Assert step, we compare the expectedName (passed by InlineData) with the returned result (obtained from a call to the Get method in the service being tested).

The Assert.Equal() method is a quick way to check whether an expected result is equal to a returned result. If they are equal, the test method will pass. Otherwise, the test will fail. There is also an Assert.True() method that can take in a boolean value, and will pass the test if the boolean value is true.

For a complete list of Assertions in xUnit.net, refer to the Assertions section of the aforementioned comparison table:

If an exception is expected, you can assert a thrown exception. In this case, the test passes if the exception occurs. Keep in mind that unit tests are for testing expected scenarios. You can only test for an exception if you know that it will occur, e.g.

Exception ex = Assert
    .Throws<SpecificException>(() => someObject.MethodBeingTested(x, y));

The above code tests a method named MethodBeingTested() for someObject being tested. A SpecificException() is expected to occur when the parameter values x and y are passed in. In this case, the Act and Assert steps occur in the same statement.

NOTE: There are some differences in opinion whether or not to use InMemoryDatabase for unit testing. Here are some viewpoints from .NET experts Julie Lerman (popular Pluralsight author) and Nate Barbettini (author of the Little ASP .NET Core book):

Running Tests

To run your unit tests in Visual Studio, use the Test Explorer panel.

  1. From the top menu, click Test | Windows | Test Explorer
  2. In the Test Explorer panel, click Run All
  3. Review the test status and summary
  4. If any tests fail, inspect the code and fix as needed.
Test Explorer in VS2019

To run your unit tests with a CLI Command, run the following command in the test project folder:

> dotnet test

The results may look something like this:

As of xUnit version 2, tests can automatically run in parallel to save time. Test methods within a class are considered to be in the same implicit collection, and so will not be run in parallel. You can also define explicit collections using a [Collection] attribute to decorate each test class. Multiple test classes within the same collection will not be run in parallel.
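For example, two test classes decorated with the same collection name will not run in parallel with each other (a minimal sketch; the class names here are illustrative):

[Collection("Database collection")]
public class LearningResourceTests
{
   [Fact]
   public void TestAdd() { /* ... */ }
}

[Collection("Database collection")]
public class ResourceListTests
{
   [Fact]
   public void TestAdd() { /* ... */ }
}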

For more information on collections, check out the official docs at:

NOTE: Visual Studio includes a Live Unit Testing feature that allows you to see the status of passing/failing tests as you’re typing your code. This feature is only available in the Enterprise Edition of Visual Studio.

Custom Names and Categories

You may have noticed a DisplayName parameter when defining the [Theory] attribute in the code samples. This parameter allows you to define a friendly name for any test method (Fact or Theory) that can be displayed in the Test Explorer. For example:

[Theory(DisplayName = "Add New Learning Resource")]

Using the above attribute on the TestAdd() method will show the friendly name “Add New Learning Resource” in the Test Explorer panel during test runs.

Unit Test with custom DisplayName

Finally, consider the [Trait] attribute. This attribute can be used to categorize related test methods by assigning an arbitrary name/value pair for each defined “Trait”. For example (from the LearningResource and ResourceList tests, respectively):

[Trait("Learning Resource Tests", "Adding LR")]
public void TestAdd() { ... }

[Trait("Resource List Tests", "Adding RL")]
public void TestAdd() { ... }

Using the above attribute for the two TestAdd() methods will categorize the methods into their own named “category”, e.g. Learning Resource Tests and Resource List Tests. This makes it possible to filter just the test methods you want to see, e.g. Trait: “Adding RL”

Filtering Unit Tests by Trait Values

Next Steps: Mocking, Integration Tests and More!

There is so much more to learn with unit testing. You could read several chapters or even an entire book on unit testing and related topics. To continue your learning in this area, consider the following:

  • MemberData: use the MemberData attribute to go beyond isolated test methods. This allows you to reuse test values for multiple methods in the test class (see the sketch after this list).
  • ClassData: use the ClassData attribute to use your test data in multiple test classes. This allows you to specify a class that will pass a set of collections to your test method.
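For example, here is what a [MemberData]-driven Theory might look like, where a static property yields the argument sets (a hypothetical sketch, not from the NetLearner tests):

using System.Collections.Generic;
using Xunit;

public class ResourceNameTests
{
   // Each object[] is one set of arguments for the Theory below
   public static IEnumerable<object[]> ResourceNames =>
      new List<object[]>
      {
         new object[] { "RL1" },
         new object[] { "RL2" }
      };

   [Theory]
   [MemberData(nameof(ResourceNames))]
   public void TestName(string expectedName)
   {
      Assert.False(string.IsNullOrEmpty(expectedName));
   }
}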

For more information on the above, check out this Nov 2017 post from Andrew Lock:

To go beyond Unit Tests, consider the following:

  • Mocking: use a mocking framework (e.g. Moq) to mock external dependencies that you shouldn’t need to test from your own code.
  • Integration Tests: use integration tests to go beyond isolated unit tests, to ensure that multiple components of your application are working correctly. This includes databases and file systems.
  • UI Tests: test your UI components using a tool such as Selenium WebDriver or Selenium IDE, in the language of your choice, e.g. C#. Selenium IDE runs as a Chrome or Firefox extension, which also covers the new Chromium-based Edge browser.

While this article only goes into the shared library, the same concepts carry over into the testing of each individual web app project (MVC, Razor Pages and Blazor). Refer to the following documentation and blog content for each:

Refer to the NetLearner sample code for unit tests for each web project:

In order to set up a shared service object to be used by the controller/page/component being tested, Moq is used to mock the service. For more information on Moq, check out their official documentation on GitHub:
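For example, a controller test might stub a hypothetical resource service like this (a sketch only; the interface, method and action names are placeholders rather than the exact NetLearner types, and the service’s Get() method is assumed to return a Task of a list):

[Fact]
public async Task Index_ReturnsViewResult_WithMockedService()
{
   // Arrange: mock the service dependency instead of using a real database
   var mockService = new Mock<ILearningResourceService>();
   mockService.Setup(s => s.Get())
              .ReturnsAsync(new List<LearningResource>
              {
                 new LearningResource { Id = 1, Name = "ASP .NET Core Docs" }
              });

   // Act: inject the mock into the controller and call the action
   var controller = new LearningResourcesController(mockService.Object);
   var result = await controller.Index();

   // Assert: the action should return a ViewResult
   Assert.IsType<ViewResult>(result);
}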

For the Blazor testing project, the following references were consulted:

NOTE: Due to differences between bUnit beta 6 and 7, there are some differences between the Blazor guide and the NetLearner tests on Blazor. I started off with the Blazor guide, but made some notable changes.

  1. Instead of starting with a Razor Class Library template for the test project, I started with the xUnit Test Project template.
  2. There was no need to change the test project’s target framework from .NET Standard to .NET Core 3.1 manually, since the test project template was already Core 3.1 when created.
  3. As per the bUnit guidelines, the test class should no longer be derived from the ComponentTestFixture class, which is now obsolete: https://github.com/egil/bunit/blob/6c66cc2c77bc8c25e7a2871de9517c2fbe6869dd/src/bunit.web/ComponentTestFixture.cs
  4. Instead, the test class is now derived from the TestContext class, as seen in the bUnit source code: https://github.com/egil/bunit/blob/6c66cc2c77bc8c25e7a2871de9517c2fbe6869dd/src/bunit.web/TestContext.cs

References