November 8, 2021

PBKDF2 Tool in Blazor

I first heard about Blazor a couple of years ago, and it sounded very interesting, especially the part about running .NET via WebAssembly in the browser. Blazor seems to have matured enough now, and it's officially part of ASP.NET. Recently, my team needed a PBKDF2 tool as we integrate with Keycloak, so I tried writing it in Blazor. Here are some of my thoughts on it so far.

You can find the full source in my GitHub repo.

C#

It's actually very refreshing to use just one language – C# (probably my favorite language) – for both the frontend and the backend, in a modern SPA-like paradigm, not WebForms with full-page PostBacks like in the good old days. Of course, you're still using HTML and CSS (with support for SCSS and scoped CSS) for the "view" template. You can also call JavaScript via interop, which is inevitable if you want to use built-in browser functionality such as alert().
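
For example, invoking alert() from a component looks roughly like this (a minimal sketch; the method and message are just placeholders):

@inject IJSRuntime JS

<button @onclick="ShowAlert">Show alert</button>

@code {
    private async Task ShowAlert()
    {
        // Calls the browser's built-in alert() through the JS interop service.
        await JS.InvokeVoidAsync("alert", "Hello from Blazor!");
    }
}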

WebAssembly

Originally, I wanted to build it as a WebAssembly project so that I could host it as a static site and all you'd need to run it is a browser, but sadly, the KeyDerivation library doesn't support the WebAssembly platform, so I had to use the Blazor Server hosting model.
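
For reference, the core hashing call with the KeyDerivation library looks roughly like this (a sketch, not the exact code from the repo; the iteration count, key length, and salt are the same values used in the JavaScript example at the end of this post):

using Microsoft.AspNetCore.Cryptography.KeyDerivation;

// Same parameters as the JavaScript example below:
// HMAC-SHA256, 27,500 iterations, 64-byte (512-bit) output, base64 in and out.
string textToHash = "foobar";
byte[] salt = Convert.FromBase64String("b9Txti27LxJWX9BejPSaAQ==");

byte[] hash = KeyDerivation.Pbkdf2(
    password: textToHash,
    salt: salt,
    prf: KeyDerivationPrf.HMACSHA256,
    iterationCount: 27500,
    numBytesRequested: 64);

string result = Convert.ToBase64String(hash);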

The server hosting model uses WebSockets and runs the logic on the server side, so scalability would need to be considered if you're writing real business applications with it.

JSX..?

I suppose this is more of a Razor template feature than a Blazor one, but I like that it's similar to React's JSX. For example, you can use code blocks within the template, such as wrapping a component in an if statement. (I never liked how Angular templates handle logic, with directives such as ngIf and ngFor added as element attributes.)
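
For example, something like this (the component and field names are made up):

@if (hasResult)
{
    <HashResult Value="@hashedValue" />
}
else
{
    <p>Enter a value and click Hash.</p>
}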

Unit Testing

There's no official Microsoft library for unit testing Blazor, but an open-source project called bUnit seems popular for writing unit tests for Blazor components.

Being able to test the difference between snapshots of a render seems useful, but it might get too complicated when testing a large change. Keeping components small and isolated would help.
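
Here's roughly what a minimal class-based bUnit test looks like (using the Counter component from the default Blazor template as an example; the exact markup to assert against depends on your component):

using Bunit;
using Xunit;

public class CounterTests : TestContext
{
    [Fact]
    public void ClickingButtonIncrementsCount()
    {
        // Render the component under test.
        var cut = RenderComponent<Counter>();

        // Trigger the button's @onclick handler.
        cut.Find("button").Click();

        // Semantic comparison of the rendered markup.
        cut.Find("p").MarkupMatches("<p>Current count: 1</p>");
    }
}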

You can also write tests as a Razor component, which means you can use Razor syntax for the component under test. The Visual Studio editor's auto-formatting support for this still needs some work, though.

DisplayName Validation Message Bug?

There may be a bug when using the DisplayName attribute on the InputText component. For both ValidationSummary and ValidationMessage<T>, setting DisplayName on InputText seems to have no effect: the validation message shows the property name, not the DisplayName.

InputNumber works fine though, as can be seen here:

...
validationErrorMessage = string.Format(ParsingErrorMessage, DisplayName ?? FieldIdentifier.FieldName);
...

Maybe it's not supported yet?

It does work if I use the [Display] attribute on the model itself, but nowadays I feel that's not the right place, since it breaks the layer boundary. If the model is serving as a "View Model", then it'd be okay.
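
In other words, something like this on the model works (simplified; the model and property names are made up):

using System.ComponentModel.DataAnnotations;

public class HashInputModel
{
    // The display name set here shows up in the validation message,
    // unlike the DisplayName attribute set on InputText.
    [Required]
    [Display(Name = "Text to hash")]
    public string TextToHash { get; set; }
}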

Finding out how the [Required] attribute's validation message works was taking longer than I'd like, so I stopped at ValidationAttributeAdapterOfTAttribute.cs. If I get to spend more time on this and can confirm it, perhaps I'll file a bug report. By the way, did you know that the ASP.NET Core solution has 480 projects? It took a very long time to load in VS2019...

Dotnet Watch Hot Reload

It sometimes forces a manual reload, but I like the experience. VS2022 will have this feature, and it will remain in the .NET 6 CLI.

Misc.

RNGCryptoServiceProvider

RNGCryptoServiceProvider is obsolete as of .NET 6 Preview. Be aware that a lot of examples on the internet still use this class for generating the salt.
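
The static RandomNumberGenerator methods are the replacement. For example, generating a 16-byte salt (a sketch; GetBytes is new in .NET 6, and RandomNumberGenerator.Fill works on earlier versions):

using System.Security.Cryptography;

// Generate a random 16-byte salt without the obsolete RNGCryptoServiceProvider.
byte[] salt = RandomNumberGenerator.GetBytes(16);
string saltBase64 = Convert.ToBase64String(salt);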

JavaScript PBKDF2

There's actually a JavaScript library for handling PBKDF2, so yes, I could've just done it all in CodeSandbox:

import "./styles.css";
import { pbkdf2Sync } from "pbkdf2";

export default function App() {
  let textToHash = "foobar";
  let salt = atob("b9Txti27LxJWX9BejPSaAQ==");
  let result = pbkdf2Sync(textToHash, salt, 27500, 64, "sha256").toString(
    "base64"
  );

  return <div className="App">Hash: {result}</div>;
}

November 5, 2021

Azure Artifacts and NuGet

Recently, I had to design a solution where a common module needed to be shared between multiple projects. One way to do this in .NET is with a NuGet package. Since our packages are commercial and proprietary, I can't publish them to nuget.org, so here are some notes on using Azure Artifacts with NuGet.

Building and Publishing NuGet Packages to Azure Artifacts

The first step is getting the feed information from Azure Artifacts for NuGet, then creating a nuget.config file in the solution and adding the feed information:

<?xml version="1.0" encoding="utf-8"?>
<configuration>
	<packageSources>
		<clear />
		<add key="[Custom Name]" value="https://pkgs.dev.azure.com/[Organization Name]/_packaging/[Feed Name]/nuget/v3/index.json" />
		<add key="nuget.org" value="https://api.nuget.org/v3/index.json" />
	</packageSources>
</configuration>

Note that I also had to add the main nuget.org feed.

For building and publishing, here's a sample azure-pipelines.yml:

trigger:
- master

pr:
- master

pool:
  vmImage: ubuntu-latest

steps:

- task: NuGetAuthenticate@0

- script: |
    SHORT_COMMIT_HASH=${BUILD_SOURCEVERSION:0:7}

    dotnet pack [Path to .csproj] -p:PackageVersion=$(PACKAGE_MAJOR_VERSION).$(PACKAGE_MINOR_VERSION).$(Build.BuildId)+$SHORT_COMMIT_HASH -p:Version=$(PACKAGE_MAJOR_VERSION).$(PACKAGE_MINOR_VERSION).$(Build.BuildId).0
  displayName: 'Dotnet Pack'

- script: |
    dotnet nuget push --source "[Custom Name]" --api-key notused [Path to .nupkg from above, e.g., ./src/ProjectName/bin/Debug/ProjectName.$(PACKAGE_MAJOR_VERSION).$(PACKAGE_MINOR_VERSION).$(Build.BuildId).nupkg]
  displayName: 'Dotnet Nuget Push'
  condition: and(succeeded(), ne(variables['Build.Reason'], 'PullRequest'))

For credentials, all you need is the NuGetAuthenticate task for Azure Pipelines. If your Azure Artifacts are in the same organization as the pipeline, then no parameters are needed. Note that using a PAT (Personal Access Token) doesn't seem to work on Azure Pipelines for Azure Artifacts.

I'm using the dotnet CLI to both pack and publish the project as a NuGet package, and using some custom pipeline variables to set the version number.

It's good to know that NuGet supports SemVer 2.0. One SemVer 2.0 feature I'm using is the build metadata: stamping the git commit hash onto the package version. Note that you need to be on NuGet 4.3.0+ to use SemVer 2.0. With .NET 6 coming out soon, the NuGet version will be 6.0 as well. Traditionally, Microsoft's versioning has four parts, so for now I'm keeping the last part of the regular version as 0.
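
For example, with hypothetical values of PACKAGE_MAJOR_VERSION=1, PACKAGE_MINOR_VERSION=2, Build.BuildId=345, and commit hash abc1234, the pack step above would produce:

PackageVersion: 1.2.345+abc1234
Version: 1.2.345.0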

Consuming NuGet Packages from Azure Artifacts

To use the NuGet package that we've published, add the feed to the client project's nuget.config, creating the file if necessary. Then, in Visual Studio, you should be able to browse the feed and install the package. You might be prompted to sign in to Azure DevOps.

For pipelines, if it's an Azure DevOps pipeline, then just add the NuGetAuthenticate task before your build step and you should be good to go. In my case I also had to consume it from other build environments such as Bitbucket pipelines and Jenkins.

Here's a snippet of bitbucket-pipelines.yml for consuming the package and building a client project:

image: mcr.microsoft.com/dotnet/core/sdk:3.1
...
...
  - dotnet nuget update source [Custom Name] --source https://pkgs.dev.azure.com/[Organization Name]/_packaging/[Feed Name]/nuget/v3/index.json -u notused -p $AZURE_ARTIFACTS_PAT --store-password-in-clear-text
  - dotnet restore
  - dotnet build [Project Name] --no-restore

The $AZURE_ARTIFACTS_PAT is a custom pipeline variable that I've created to store the PAT from Azure DevOps, which in my case has only read permission for the Packaging scope.

You might've noticed --store-password-in-clear-text. I couldn't figure out where NuGet stores the password otherwise. It looks like it should update nuget.config with the specified password, but it isn't there after the update source (or even the add source) command. On Windows, it's not in the Credential Manager either.

If you don't specify it, you get the following error when running on mcr.microsoft.com/dotnet/core/sdk:3.1:

error: Password encryption is not supported on .NET Core for this platform. The following feed try to use an encrypted password: '[Feed Name]'. You can use a clear text password as a workaround.
error:   Encryption is not supported on non-Windows platforms.

Debug and Release Versions

Since this package will only be used by our own teams, it would be nice if both the Debug and the Release builds were included in the NuGet package, so that team members could debug through the code, if needed, while using the package. It seems such a feature is not supported, and there's an open issue for it.

It looks like you can at least include symbols.
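
For example, something like this should produce a separate .snupkg symbols package next to the regular package (I haven't verified this against our Azure Artifacts feed):

dotnet pack [Path to .csproj] -p:IncludeSymbols=true -p:SymbolPackageFormat=snupkg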

Multiple DLLs

Related to the above: as I was developing the library, I wanted to organize it into multiple projects – e.g., one main project that exposes the library's public API, plus supporting projects referenced from the main project (via project references) – but it seems a NuGet package is designed to contain only a single DLL. There are workarounds.

Note that any other NuGet packages you reference from your project need to come from a NuGet source that is accessible to the client.

March 1, 2021

Keycloak with Microsoft SQL Server Database (MSSQL)

Keycloak uses the H2 database out of the box, which is not recommended for production use. The Keycloak documentation uses PostgreSQL as an example. Since my team would need to use it with SQL Server, I did some preliminary research on configuring Keycloak with MSSQL.

Create an Empty Database

The first thing to do is to create an empty database in SQL Server for Keycloak to use. The example below assumes a database named KeycloakDB has been created.

If you need to use an existing database, then you can set up a new default schema for the database user to separate Keycloak data from the existing data. You can also specify the schema in the configuration.

JDBC Driver

The next thing to do is to get the Microsoft SQL Server JDBC driver. If you're using Maven and have used SQL Server in the past, you might already have it, e.g., in this folder: %USERPROFILE%\.m2\repository\com\microsoft\sqlserver\mssql-jdbc\9.2.0.jre15\. If not, download it from Microsoft, unzip it, and locate the jar. Then copy the jar file, e.g., mssql-jdbc-9.2.0.jre15.jar, over to the following location, creating subfolders as needed: [Keycloak Root]\modules\system\layers\keycloak\com\microsoft\sqlserver\main. (Note that I'm using JDK 15, which is the latest at the time of this post, but it is not an LTS release.)

Then create the module.xml file in the same folder:

<?xml version="1.0" ?>
<module xmlns="urn:jboss:module:1.3" name="com.microsoft.sqlserver">

  <resources>
    <resource-root path="mssql-jdbc-9.2.0.jre15.jar"/>
  </resources>

  <dependencies>
    <module name="javax.api"/>
    <module name="javax.transaction.api"/>
  </dependencies>
</module>

There's an alternative JDBC driver for MSSQL, the jTDS driver, which is open source. I wouldn't recommend using it, since it hasn't been updated for some time now and may not support newer versions of MSSQL. The official Microsoft driver has been open source for some time as well.

Update Configuration

Next, we need to update the Keycloak XML configuration file. Add the <driver> element to [Keycloak Root]\standalone\configuration\standalone.xml (I'm using standalone mode; for domain mode, refer to the documentation):

<driver name="sqlserver" module="com.microsoft.sqlserver">
  <xa-datasource-class>com.microsoft.sqlserver.jdbc.SQLServerXADataSource</xa-datasource-class>
</driver>

Note that the module matches the driver module name and the path we set up earlier, and the name will be used below. (The xa-datasource-class name was found in the JBoss documentation.)

Then search for KeycloakDS in the file and update it to:

<datasource jndi-name="java:jboss/datasources/KeycloakDS" pool-name="KeycloakDS" enabled="true" use-java-context="true" statistics-enabled="${wildfly.datasources.statistics-enabled:${wildfly.statistics-enabled:false}}">
  <connection-url>jdbc:sqlserver://localhost;instanceName=[INSTANCE NAME];databaseName=KeycloakDB;integratedSecurity=true</connection-url>
  <driver>sqlserver</driver>
</datasource>

Note that I'm using an instance name. I'm also using integrated security, which means SQL Server's authentication DLL, e.g., mssql-jdbc_auth-9.2.0.x64.dll, should be placed in the bin folder of the Java runtime. This DLL won't be in the .m2 folder mentioned earlier, so you'll need to extract it from the driver zip file.

If you need to use username and password, you can specify the following elements in the <datasource>:

<user-name>[Username]</user-name>
<password>[Password]</password>

If you do end up specifying username and password, refer to JBoss/Wildfly documentation on securely storing the password.

Now start Keycloak (or restart it if it was already running). You should see the new tables show up in the Keycloak database. (Keycloak uses JPA/Hibernate.)

After you create the initial admin user in Keycloak, do a select on the USER_ENTITY table, and you should see a new row for the user.

ExampleDS

If you explore the standalone.xml configuration file further, you may notice another datasource named ExampleDS. Per the WildFly documentation, it looks like it can be removed. After removing the <datasource> element for ExampleDS, be sure to remove the datasource attribute from the <default-bindings> element as well. Note that when I pointed it at another SQL Server database, it didn't create any new objects or data. I suppose you could also point it to the Keycloak database, and as long as no other apps running on the same WildFly instance rely on <default-bindings>, it should be okay. There seems to be more information in a Red Hat Knowledgebase thread, but you need a paid subscription to access it... Another option is just leaving it as H2, though that might not be desirable if you want to optimize memory for containerization.

February 7, 2021

Microsoft Teams Notification from GitHub Actions

I was surprised to find out that there is no official GitHub Action for integrating with Microsoft Teams, because, well, Microsoft owns GitHub now. There are third-party actions in the marketplace, but because some of our jobs run on self-hosted runners, I didn't want to take any chances, so I decided to write a quick script that notifies Microsoft Teams of the build status. Having already used the Jenkins Office 365 Connector to integrate with Teams in the past, I had a basic idea of what I wanted to do.

I started out by adding an incoming webhook to the Teams channel we want to send messages to. The webhook URL is stored as an encrypted secret in GitHub, since anyone who knows it can send messages to the channel.

The next step was to define how the message would look in Teams, which is done through cards. Microsoft even has a nifty designer. At first, I created an adaptive card that was more elaborate than the Jenkins plugin's card, but found out the hard way that Teams connectors don't support it. They support the Office 365 connector card, which has fewer features.

The last step was just a matter of getting the build status and calling the webhook with the appropriate content.
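
The card payload is just JSON in the MessageCard format posted to the webhook URL. Here's a rough sketch of the idea in C# (the actual demo uses a script; the environment variable name and card fields here are placeholders):

using System;
using System.Collections.Generic;
using System.Net.Http;
using System.Net.Http.Json;

// Post a minimal Office 365 connector card (MessageCard) to the Teams incoming webhook.
// The webhook URL comes from the encrypted secret mentioned above.
var webhookUrl = Environment.GetEnvironmentVariable("TEAMS_WEBHOOK_URL");

var card = new Dictionary<string, object>
{
    ["@type"] = "MessageCard",
    ["@context"] = "https://schema.org/extensions",
    ["themeColor"] = "36a64f",
    ["summary"] = "Build status",
    ["title"] = "Build succeeded",
    ["text"] = "my-repo: run #123 completed successfully."
};

using var client = new HttpClient();
var response = await client.PostAsJsonAsync(webhookUrl, card);
response.EnsureSuccessStatusCode();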

A working demo can be found at my github-actions-microsoft-teams repository.

I didn't have much time for writing the script, so it isn't as robust or pretty as it could be, but for now it meets our needs. It could probably be converted into a composite steps action for easier reuse. It could also use some refactoring, and more features could be added, such as identifying which job failed, or parsing the commit log to figure out who made the commits and merges and reporting on those as well.

Also, I've been writing more PowerShell lately, and found the PoshCode documentation that explains coding style, such as putting the opening brace on the same line. As far as I can tell, Microsoft doesn't have an official coding style guide for PowerShell. PoshCode doesn't seem to have a style for the switch statement, though...

January 30, 2021

GitHub Actions Sparse Checkout

Sparse Checkout Not Supported in actions/checkout@v2

At the time of writing this post, GitHub's built-in actions/checkout@v2 does not support sparse (partial) checkout, so I decided to just perform the checkout manually with sparse checkout enabled. For reference, there is an open issue about this on GitHub, with some comments offering suggestions.

Note that the script I'm using is PowerShell, since the project I'm working on needs Windows to build.

A working demo can be found at my GitHub repo. The rest of the post goes through some of my findings.
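
At a high level, the manual sparse checkout boils down to something like this (a simplified sketch without the authentication and LFS parts discussed below; the demo repo has the full script):

git init .
git remote add origin https://github.com/$Env:GITHUB_REPOSITORY
git sparse-checkout set folder/subfolder
git fetch --depth=1 origin master
git checkout --force FETCH_HEAD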

Authentication

One tricky part was authentication. A simple way would be just adding the token to the repo URL:

git remote add origin https://${Env:GITHUB_ACTOR}:${{secrets.GITHUB_TOKEN}}@github.com/$Env:GITHUB_REPOSITORY

But that's not what actions/checkout does. Also, it's not a good idea to put credentials in the URL, since they may be logged by the server (though in this case it might not be that critical, since it's the GitHub server itself and the auth token expires after each run). So, to follow what actions/checkout does, I tried using the header. According to the log from when I used actions/checkout, it looked like the header value is just the token itself, since it's masked out:

"C:\Program Files\Git\cmd\git.exe" config --local http.https://github.com/.extraheader "AUTHORIZATION: basic ***"

But when I tried that, it didn't work. After looking at the actions/checkout code, I found out that it actually wraps the token with additional data. Here's the snippet of the job that handles this:

- name: Git - Setup auth extraheaders config
  run: |
    $authToken="x-access-token:${{ secrets.GITHUB_TOKEN }}"
    $bytes=[System.Text.Encoding]::UTF8.GetBytes($authToken)
    $encodedAuthToken=[Convert]::ToBase64String($bytes)

    git config --local http.https://github.com/.extraheader "AUTHORIZATION: basic ${encodedAuthToken}"

Fake Log Output..?

Well, I suppose technically it's not fake, but one very interesting thing I noticed in the actions/checkout code is that the log masking is set explicitly, rather than being delegated to the general token-masking feature of the log output. It executes the git config command with the mask as the actual parameter value. Afterwards, it edits the config file directly with the correct value, to keep the OS from capturing the command line with the base64 token in it. So to people just looking at the log output, it looks as if the git config command was executed with the token value, but in reality, it wasn't.

In addition to the regular token value, GitHub Actions will also automatically mask the encodedAuthToken above.

LFS

To handle sparse checkout for LFS, I looked at how the Jenkins git plugin does it: first by checking the log of the old Jenkins job I had set up before moving to GitHub, then by skimming through the plugin code to confirm. That led me to lfs.fetchinclude:

git config lfs.fetchinclude folder/subfolder

LFS Pull?

Since Git LFS is included in the runner's Git installation, the checkout will actually go through the filters and download the LFS files. But I've seen some cases where it missed a few, so I decided to add git lfs pull origin. I could probably just enable skip-smudge on the checkout and keep lfs pull, for possibly better performance.

Further Improvements

As mentioned above, just like actions/checkout, the job script could be modified to write the authentication header into the config file instead of executing the git config command, so that the OS won't capture it.

The job only handles main/master branch builds. It should probably parse the ref to find out the branch, or use an explicit SHA.

If you're running this on a self-hosted runner, you should also do some cleanup before checkout. For GitHub-hosted runners, it's probably not necessary, since every new run starts with a clean slate.

Self-Hosted Runner

One thing I want to mention about GitHub Actions: compared to Azure DevOps, I like that GitHub Actions' self-hosted runners are free, so I can add multiple self-hosted runners for a repo without worrying about cost. Azure DevOps charges $15 per month for each runner if you want more than one for your organization.

January 6, 2021

Updating Windows File Permission from Command Line

Here's how to update Windows file system permissions from the command line, using the icacls tool that comes with Windows (C:\Windows\System32\icacls.exe):

icacls ".\folder\*.*" /grant BUILTIN\Users:(RX)

In the example above, it adds Read and Execute permissions to BUILTIN\Users.

January 5, 2021

OpenSSL Error While Creating PFX: Expecting: ANY PRIVATE KEY

Recently, I had to install a certificate on IIS and didn't have a PFX file, so I used OpenSSL to generate one from the certificate and the corresponding private key, but got the following error:

$ openssl pkcs12 -export -in domain.crt -inkey privatekey.txt -out domain.pfx
unable to load private key
...:error:0909006C:PEM routines:get_name:no start line:../crypto/pem/pem_lib.c:745:Expecting: ANY PRIVATE KEY

While investigating, I noticed that the private key file I received was saved as UTF-8 with a BOM, and it looks like OpenSSL doesn't like that.

After converting it to plain UTF-8 (removing the BOM), everything worked.
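
If you're in WSL or Git Bash anyway (see the note below), a quick way to strip the BOM is a sed one-liner (this assumes GNU sed):

sed -i '1s/^\xEF\xBB\xBF//' privatekey.txt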

I've had a similar problem when using the authors file with Git LFS.

Note that OpenSSL is not part of Windows, so use WSL. It also works in Git Bash.