OliverScheer.net

Cloud, DevOps, GitHub, Developer Experience & Code


Login to Azure in a GitHub Action

2024-06-06

I’m creating solutions on GitHub for Azure, aiming to deploy them easily via GitHub Actions. To achieve this, you need to authorize GitHub securely, and writing credentials directly in the pipeline is not recommended.

A better approach is to use a Service Principal and store the credentials as a GitHub Secret.

If you prefer using Managed Identities, this is also possible but requires your own build agents. The standard public build agents of GitHub do not support Managed Identities.

Step 1 - Create a Service Principal with Azure CLI

There are several ways to create a Service Principal, but my preferred method is using the Azure CLI tool az.

$subscriptionId='<yoursubscriptionid>'
$appName='<yourAppName>'
$resourceGroup='<yourResourceGroupName>'

az login
az account set -s $subscriptionId
az ad app create --display-name $appName
az ad sp create-for-rbac --name $appName `
    --role contributor `
    --scopes /subscriptions/$subscriptionId/resourceGroups/$resourceGroup

Save the result securely; you will never be able to retrieve the clientSecret value again.

{
  "clientId": "******",
  "clientSecret": "******",
  "subscriptionId": "******",
  "tenantId": "******",
  ...
}

You need exactly these four values; you can remove all others.

Next, add the contributor role to this Service Principal. This allows the principal to create resources in an Azure Resource Group.

az role assignment create --role contributor `
    --subscription $subscriptionId `
    --assignee-object-id $clientId `
    --assignee-principal-type ServicePrincipal `
    --scope /subscriptions/$subscriptionId/resourceGroups/$resourceGroup

Step 2 - Store Azure Credentials in GitHub Secrets

Take the JSON with the four values and, in your GitHub repository, go to Settings –> Secrets and variables –> Actions. Add a new repository secret named AZURE_CREDENTIALS.

You won’t be able to see these values again, but you can completely overwrite them with new values if needed.

Step 3 - Use the Settings in GitHub Actions

Use this secret to log in within your GitHub Action:

    - name: Azure Login
      uses: Azure/login@v2.0.0
      with:
        creds: ${{ secrets.AZURE_CREDENTIALS }}

And that’s it.

More Information

https://github.com/Azure/login

https://learn.microsoft.com/en-us/azure/developer/github/connect-from-azure

https://learn.microsoft.com/en-us/cli/azure/azure-cli-sp-tutorial-1?tabs=bash

Building a Data Driven App with Blazor and Fluent UI

2024-06-05

As some of my colleagues and friends may already know, I’m a live concert enthusiast. I think I’ve been to hundreds of concerts since the age of 14. But as I get older, it becomes more complicated to remember them all. That’s the idea behind this sample application. It might be a bit over-engineered, but it also serves as a demonstration project for using Blazor with ASP.NET Core and the Fluent Design Language.

This project will demonstrate the following Blazor topics:

  • Navigation
  • URLs for pages
  • Displaying data
  • Editing data
  • Dialogs

App Structure

When you start from scratch, as I described in this post, you’ll have a quite simple project structure with a few sample pages.

What I really like about Blazor is that you can structure your folders as you like, without affecting the final URL of the pages. This can be controlled completely independently.

For example, the artist list of my application lives in the file /Components/Pages/Artists/Index.razor.

In the code of this file, the @page attribute defines the route of this page.

Some examples in the Razor file can look like this:

@page "/artists"
@page "/artist/{ItemID:guid}"
@page "/customer/{customerId}/buildinggroup/{buildingGroupId}/calculation/{calculationId}"

This leads to quite simple URLs for my page about Artists, such as https://www.myawesomeconcertdatabase.com/artists or https://www.myawesomeconcertdatabase.com/artist/123456.

The following image describes the structure of the websites I build in this project. I also created some additional folders and files that contain more of the business logic, which we will discuss later.

Project Structure

For navigation, it is quite common to use the hamburger menu with flyouts. The template uses this, and so do I.

Project Structure

The navigation menu on the left side of the app can be configured via NavMenu.razor:

@rendermode InteractiveServer

<div class="navmenu">
    <input type="checkbox" title="Menu expand/collapse toggle" id="navmenu-toggle" class="navmenu-icon" />
    <label for="navmenu-toggle" class="navmenu-icon"><FluentIcon Value="@(new Icons.Regular.Size20.Navigation())" Color="Color.Fill" /></label>
    <nav class="sitenav" aria-labelledby="main-menu" onclick="document.getElementById('navmenu-toggle').click();">
        <FluentNavMenu Id="main-menu" Collapsible="true" Width="250" Title="Navigation menu" @bind-Expanded="expanded">
            <FluentNavLink Href="/" Match="NavLinkMatch.All" Icon="@(new Icons.Regular.Size20.Home())" IconColor="Color.Accent">Home</FluentNavLink>
            <FluentNavLink Href="artists" Icon="@(new Icons.Regular.Size20.BuildingLighthouse())" IconColor="Color.Accent">Artists</FluentNavLink>
            <FluentNavLink Href="concerts" Icon="@(new Icons.Regular.Size20.People())" IconColor="Color.Accent">Concerts</FluentNavLink>
        </FluentNavMenu>
    </nav>
</div>

@code {
    private bool expanded = true;
}

The component <FluentNavLink Href="artists" ...>Artists</FluentNavLink> will generate an <a href> to our artist page, which contains the path defined by @page "/artists".

NavMenu itself is used inside another file called MainLayout.razor. This demonstrates quite well how components are composed in Blazor: the file NavMenu.razor is a component that MainLayout.razor uses via the HTML-like tag <NavMenu/>, which I personally really like. MainLayout.razor:

@inherits LayoutComponentBase

<FluentLayout>
    <FluentHeader>
        Olivers Concert Database
    </FluentHeader>
    <FluentStack Class="main" Orientation="Orientation.Horizontal" Width="100%">
        <NavMenu />
        <FluentBodyContent Class="body-content">
            <div class="content">
                @Body
                <FluentDialogProvider @rendermode="RenderMode.InteractiveServer" />
            </div>
        </FluentBodyContent>
    </FluentStack>
    <FluentFooter>
       <a style="vertical-align:middle" href="https://www.medialesson.de" target="_blank">
            Made with
            <FluentIcon Value="@(new Icons.Regular.Size12.Heart())" Color="@Color.Warning" />
            by Medialesson
        </a>
    </FluentFooter>
</FluentLayout>

<div id="blazor-error-ui">
    An unhandled error has occurred.
    <a href="" class="reload">Reload</a>
    <a class="dismiss">🗙</a>
</div>

Display Data aka The Artists

Please assume that we are using Entity Framework in combination with the repository pattern here. You can see the details of the implementation in the source code that I will reference at the end of this post.
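To make the grid and edit code below easier to follow, here is a minimal sketch of what the Artist entity and the injected repository might look like. It is an assumption for illustration (including the name ConcertDbContext); the actual classes are in the source code linked at the end of this post.

using Microsoft.EntityFrameworkCore;

namespace ConcertDatabase.Entities
{
    // Sketch of the Artist entity, reduced to the fields used in this post
    public class Artist
    {
        public Guid ID { get; set; }              // matches the {ItemID:guid} route parameter
        public string? Name { get; set; }
        public string? Description { get; set; }
    }
}

namespace ConcertDatabase.Repositories
{
    using ConcertDatabase.Entities;

    // Assumed EF Core DbContext behind the repository
    public class ConcertDbContext : DbContext
    {
        public ConcertDbContext(DbContextOptions<ConcertDbContext> options) : base(options) { }

        public DbSet<Artist> Artists => Set<Artist>();
    }

    // Sketch of the ArtistRepository that the pages inject
    public class ArtistRepository
    {
        private readonly ConcertDbContext _context;

        public ArtistRepository(ConcertDbContext context) => _context = context;

        // Used as the data source for the FluentDataGrid
        public IQueryable<Artist> Entities => _context.Artists;

        public async Task AddAsync(Artist item) => await _context.Artists.AddAsync(item);

        public void Update(Artist item) => _context.Artists.Update(item);

        public void Delete(Artist item) => _context.Artists.Remove(item);

        public Task SaveAsync() => _context.SaveChangesAsync();
    }
}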

Components/Pages/Artists/Index.razor:

@page "/artists"
@using ConcertDatabase.Components.Pages.Artists.Panels
@using ConcertDatabase.Entities
@using ConcertDatabase.Repositories
@inject IDialogService dialogService
@inject ArtistRepository repository
@inject NavigationManager navigationManager

@rendermode InteractiveServer

<h3>Artist List</h3>

<FluentButton IconStart="@(new Icons.Regular.Size16.Add())" OnClick="@(() => AddInDialog())">Add</FluentButton>

@if (artists != null)
{
    <FluentDataGrid Items="@artists" TGridItem="Artist" Pagination="@pagination">
        <PropertyColumn Property="@(c => c.Name)" Sortable="true" />
        <PropertyColumn Property="@(c => c.Description)" Sortable="true" />
        <TemplateColumn Title="Actions">
            <FluentButton IconStart="@(new Icons.Regular.Size16.Edit())" OnClick="@(() => EditInDialog(context))" />
            <FluentButton IconStart="@(new Icons.Regular.Size16.DesktopEdit())" OnClick="@(() => EditInPanel(context))" />
            <FluentButton IconStart="@(new Icons.Regular.Size16.Delete())" OnClick="@(() => DeleteItem(context))" />
            <FluentButton IconStart="@(new Icons.Regular.Size16.Glasses())" OnClick="@(() => ShowItem(context))" />
        </TemplateColumn>
    </FluentDataGrid>

    <FluentPaginator State="@pagination" />
}
else
{
    <p><em>Loading...</em></p>
}

@code {
    IQueryable<Artist>? artists;
    PaginationState pagination = new PaginationState { ItemsPerPage = 15 };

    protected override void OnInitialized()
    {
        LoadData();
    }

    private void LoadData()
    {
        artists = repository.Entities.ToList().AsQueryable();
    }
    ... more code  ...
}

Some explanations here:

  1. The code at the top defines the route with @page, imports some namespaces with @using, and injects some dependency services with @inject.
  2. It also defines the render mode. You can have different render modes in Blazor. @rendermode InteractiveServer enables interaction with server code.
  3. The <FluentDataGrid> is the table definition of what we want to render. It contains the data in the Items property and enables pagination.
  4. Several actions are defined to demonstrate some interesting features. These features are triggered through the OnClick event, which calls methods like EditInDialog with the current row’s data.
  5. The @code area is essentially the code-behind. You can create a separate code-behind file if you prefer (a minimal sketch follows right after this list).
  6. In the @code section, I define the variable artists and fill it in the LoadData method with data from a database.
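If you prefer a separate code-behind file, the members of the @code block simply move into a partial class with the same name and namespace as the page. A minimal sketch (the @inject directives can stay in the .razor file; their generated properties remain accessible from the partial class):

// Components/Pages/Artists/Index.razor.cs - hypothetical code-behind variant
using ConcertDatabase.Entities;

namespace ConcertDatabase.Components.Pages.Artists;

public partial class Index
{
    IQueryable<Artist>? artists;

    protected override void OnInitialized()
    {
        LoadData();
    }

    private void LoadData()
    {
        // repository comes from the @inject directive in Index.razor
        artists = repository.Entities.ToList().AsQueryable();
    }
}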

You can see the result of this little code snippet, which looks almost like pure HTML.

Artists

Delete Existing Data

I understand that not everyone is a fan of my music. I can tolerate that, most of the time. :-)

In case you want to delete an entry, you can click the delete symbol in the data grid. The code behind this method is in the @code section of the same file as the “HTML.” You remember the OnClick event in the code above? This event calls the following C# function.

private async Task DeleteItem(Artist item)
{
    // Check if the item is null
    if (item is null)
    {
        return;
    }

    // Create and show a dialog to confirm the delete
    IDialogReference dialog = await dialogService.ShowConfirmationAsync(
        $"Are you sure you want to delete the artist '{item.Name}'?",
        "Yes", 
        "No", 
        "Delete Artist?");
    DialogResult result = await dialog.Result;

    // If cancelled, return
    if (result.Cancelled)
    {
        return;
    }

    // Delete the item
    try
    {
        repository.Delete(item);
        await repository.SaveAsync();
        LoadData();
    }
    catch (Exception exc)
    {
        string errorMessage = exc.InnerException?.Message ?? exc.Message;
        await dialogService.ShowErrorAsync("Error", errorMessage);
    }
}

Some remarks on this code: it is executed on the server, but you don’t need to think about that because you picked the @rendermode InteractiveServer.

Before I delete an artist (you always think twice before deleting the Boss), I open a dialog to ask the user if they really want to delete this brilliant artist.

Delete Dialog

This type of confirmation dialog is a built-in feature of the Fluent library. In the next step, I’ll show you how to build your own dialogs.

Additional remark: You should never, ever delete the Boss, by the way.

Edit or Add Data

If you want to add new artists to the database, you need to enter additional information like name and description. For this scenario, you may need a customized form to enter this data. Like in other frameworks, you can build a new “component” based on other components.

In Blazor, you create a new component and display it in a “dialog,” “flyout panel,” or other components.

Here is the EditArtistPanel.razor that I will use later in different kinds of dialogs:

@using ConcertDatabase.Entities
@implements IDialogContentComponent<Artist>

<FluentDialogHeader ShowDismiss="false">
    <FluentStack VerticalAlignment="VerticalAlignment.Center">
        <FluentIcon Value="@(new Icons.Regular.Size24.Delete())" />
        <FluentLabel Typo="Typography.PaneHeader">
            @Dialog.Instance.Parameters.Title
        </FluentLabel>
    </FluentStack>
</FluentDialogHeader>

<FluentTextField Label="Name" @bind-Value="@Content.Name" />
<FluentTextField Label="Description" @bind-Value="@Content.Description" />

<FluentDialogFooter>
    <FluentButton Appearance="Appearance.Accent" IconStart="@(new Icons.Regular.Size20.Save())" OnClick="@SaveAsync">Save</FluentButton>
    <FluentButton Appearance="Appearance.Neutral" OnClick="@CancelAsync">Cancel</FluentButton>
</FluentDialogFooter>

@code {

    [Parameter]
    public Artist Content { get; set; } = default!;

    [CascadingParameter]
    public FluentDialog Dialog { get; set; } = default!;

    private async Task SaveAsync()
    {
        await Dialog.CloseAsync(Content);
    }

    private async Task CancelAsync()
    {
        await Dialog.CancelAsync();
    }
}

This Razor component is quite simple. It implements the IDialogContentComponent<Artist> interface, which means it exposes a parameter property called Content and receives the cascading parameter Dialog.

The Content property defines the data that is passed to the component and will also be returned when the dialog is closed. The component contains a header, a footer with save and cancel buttons, and fields for the artist’s name and description.

The code only closes the dialog and does nothing more.

Before I show you my implementation of the call to open the dialog, I want to show you two possible ways to open an editor for the artist item.

Option 1: A modal dialog that looks like a classic window

Dialog

Option 2: A flyout panel

Panel

Both methods use the exact same component, but they appear differently.

The following code shows how to call both of them:

// Open the dialog for the item
private async Task EditInDialog(Artist originalItem)
{
    var parameters = new DialogParameters
        {
            Title = "Edit Artist",
            PreventDismissOnOverlayClick = true,
            PreventScroll = true
        };

    var dialog = await dialogService.ShowDialogAsync<EditArtistPanel>(originalItem.DeepCopy(), parameters);
    var dialogResult = await dialog.Result;
    await HandleEditConcertDialogResult(dialogResult, originalItem);
}

// Open the panel for the item
private async Task EditInPanel(Artist originalItem)
{
    DialogParameters<Artist> parameters = new()
        {
            Title = $"Edit Artist",
            Alignment = HorizontalAlignment.Right,
            PrimaryAction = "Ok",
            SecondaryAction = "Cancel"
        };
    var dialog = await dialogService.ShowPanelAsync<EditArtistPanel>(originalItem.DeepCopy(), parameters);
    var dialogResult = await dialog.Result;
    await HandleEditConcertDialogResult(dialogResult, originalItem);
}

// Handle the result of the edit dialog/panel
private async Task HandleEditConcertDialogResult(DialogResult result, Artist originalItem)
{
    // If cancelled, return
    if (result.Cancelled)
    {
        return;
    }

    // If the data is not null, update the item
    if (result.Data is not null)
    {
        var updatedItem = result.Data as Artist;
        if (updatedItem is null)
        {
            return;
        }

        // Take the data from the "edited" item and put it into the original item
        originalItem.Name = updatedItem.Name;
        originalItem.Description = updatedItem.Description;

        repository.Update(originalItem);
        await repository.SaveAsync();
        LoadData();
    }
}

The function EditInDialog calls the ShowDialogAsync method of the dialogService, and EditInPanel calls the ShowPanelAsync function. Both are configured with parameters for visualization.

You may notice that I’m using a variable called dialogService. It was injected at the top of the component with @inject IDialogService dialogService. To make this work correctly, you also need to add the component <FluentDialogProvider @rendermode="RenderMode.InteractiveServer" /> in the MainLayout.razor component (or wherever it is required); otherwise, the dialogs will not show up.

One more remark about the code here: I’m calling originalItem.DeepCopy() to create a copy of the object. I’m doing this because otherwise the dialogs would change the object instantly, and not only when clicking “OK”.

I’m doing this deep copy with a quite simple extension method:

public static class ExtensionMethods
{
    public static T DeepCopy<T>(this T self)
    {
        var serialized = JsonSerializer.Serialize(self);
        var result = JsonSerializer.Deserialize<T>(serialized) ?? default!;
        return result;
    }
}

This is the simplest way to clone an object, regardless of its depth and complexity. It may not be the most efficient way, but it works for me here.

To complete the set of methods, I also want to show the add method:

private async Task AddInDialog()
{
    // Create new empty object
    Artist newItem = new();

    var parameters = new DialogParameters
        {
            Title = "Add Artist",
            PreventDismissOnOverlayClick = true,
            PreventScroll = true
        };
    // show dialog
    var dialog = await dialogService.ShowDialogAsync<EditArtistPanel>(newItem, parameters);
    var dialogResult = await dialog.Result;
    await HandleAddDialogResult(dialogResult);
}

private async Task HandleAddDialogResult(DialogResult result)
{
    if (result.Cancelled)
    {
        return;
    }

    if (result.Data is not null)
    {
        var newItem = result.Data as Artist;
        if (newItem is null)
        {
            return;
        }
        await repository.AddAsync(newItem);
        await repository.SaveAsync();
        LoadData();
    }
}

And What About Concerts?

Each artist I’m tracking in my database has concerts that I’ve visited. These are handled on the Artist Details page.

Concerts
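For reference, here is a rough sketch of the Concert entity, derived only from the fields used on this page and in the edit panel (the ID type is an assumption; the actual class is in the linked repository). The Artist entity additionally carries a List<Concert>? Concerts navigation property.

namespace ConcertDatabase.Entities;

// Sketch of the Concert entity, reduced to the fields used in this post
public class Concert
{
    public Guid ID { get; set; }          // type assumed; used for the /concert/{ID} route
    public string? Name { get; set; }
    public string? Description { get; set; }
    public DateTime? Date { get; set; }
    public string? Venue { get; set; }
    public string? City { get; set; }
    public string? SetList { get; set; }
    public string? Url { get; set; }
}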

The implementation looks like this:

@page "/artist/{ItemID:guid}"
@using ConcertDatabase.Components.Pages.Artists.Panels
@using ConcertDatabase.Components.Pages.Concerts.Panels
@using ConcertDatabase.Entities
@using ConcertDatabase.Repositories
@inject IDialogService dialogService
@inject ArtistRepository repository
@inject NavigationManager navigationManager

@rendermode InteractiveServer

<h3>Artist Details</h3>

@if (artist != null)
{
    <FluentLabel>@artist.Name</FluentLabel>
    <FluentLabel>@artist.Description</FluentLabel>

    <FluentButton IconStart="@(new Icons.Regular.Size16.Delete())" OnClick="@(() => DeleteArtist())">Delete Artist</FluentButton>

    <FluentButton IconStart="@(new Icons.Regular.Size16.Add())" OnClick="@(() => AddConcert())">Add Concert</FluentButton>

    if (artist.Concerts != null)
    {
        <FluentDataGrid Items="@concerts" TGridItem="Concert">
            <PropertyColumn Property="@(c => c.Name)" Sortable="true" />
            <TemplateColumn Title="Date" Sortable="true">
                <FluentLabel>@context.Date?.ToShortDateString()</FluentLabel>
            </TemplateColumn>
            <PropertyColumn Property="@(c => c.Venue)" Sortable="true" />
            <PropertyColumn Property="@(c => c.City)" Sortable="true" />
            <TemplateColumn Title="Actions">
                <FluentButton IconStart="@(new Icons.Regular.Size16.DesktopEdit())" OnClick="@(() => EditInPanel(context))" />
                <FluentButton IconStart="@(new Icons.Regular.Size16.Delete())" OnClick="@(() => DeleteItem(context))" />
                <FluentButton IconStart="@(new Icons.Regular.Size16.Glasses())" OnClick="@(() => ShowConcert(context))" />
            </TemplateColumn>
        </FluentDataGrid>
    }
}
else
{
    <p><em>Loading...</em></p>
}

@code {
    [Parameter]
    public Guid ItemId { get; set; }

    Artist? artist;
    IQueryable<Concert>? concerts;

    protected override async Task OnInitializedAsync()
    {
        await LoadData();
    }

    private async Task LoadData()
    {
        artist = await repository.GetByIdWithConcerts(ItemId);
        concerts = artist?.Concerts?.AsQueryable() ?? null;
    }

    #region Data Methods

    private async Task DeleteArtist()
    {
        if (artist is null)
        {
            return;
        }

        var dialogParameters = new DialogParameters
            {
                Title = "Delete Artist",
                PreventDismissOnOverlayClick = true,
                PreventScroll = true
            };

        var dialog = await dialogService.ShowConfirmationAsync(
            "Are you sure you want to delete this artist?",
            "Yes",
            "No",
            "Delete Artist?");
        var result = await dialog.Result;
        if (!result.Cancelled)
        {
            repository.Delete(artist);
            await repository.SaveAsync();
            navigationManager.NavigateTo("/artists");
        }
    }

    #region Add

    private async Task AddConcert()
    {
        Concert newItem = new();

        var parameters = new DialogParameters
            {
                Title = "Add Concert",
                PreventDismissOnOverlayClick = true,
                PreventScroll = true
            };

        var dialog = await dialogService.ShowDialogAsync<EditConcertPanel>(newItem, parameters);
        var dialogResult = await dialog.Result;
        await HandleAddDialogResult(dialogResult);
    }

    private async Task HandleAddDialogResult(DialogResult result)
    {
        if (result.Cancelled)
        {
            return;
        }

        if (result.Data is not null)
        {
            var concert = result.Data as Concert;
            if (concert is null)
            {
                return;
            }

            if (artist is null)
            {
                return;
            }

            repository.AddConcert(artist, concert);
            await LoadData();
        }
    }

    #endregion 

    #region Edit

    private async Task EditInDialog(Concert originalItem)
    {
        var parameters = new DialogParameters
            {
                Title = "Edit Concert",
                PreventDismissOnOverlayClick = true,
                PreventScroll = true
            };

        var dialog = await dialogService.ShowDialogAsync<EditConcertPanel>(originalItem.DeepCopy(), parameters);
        var dialogResult = await dialog.Result;
        await HandleEditConcertDialogResult(dialogResult, originalItem);
    }

    private async Task EditInPanel(Concert originalItem)
    {
        DialogParameters<Concert> parameters = new()
            {
                Title = $"Edit Concert",
                Alignment = HorizontalAlignment.Right,
                PrimaryAction = "Ok",
                SecondaryAction = "Cancel"
            };
        var dialog = await dialogService.ShowPanelAsync<EditConcertPanel>(originalItem.DeepCopy(), parameters);
        var dialogResult = await dialog.Result;
        await HandleEditConcertDialogResult(dialogResult, originalItem);
    }

    private async Task HandleEditConcertDialogResult(DialogResult result, Concert originalItem)
    {
        if (result.Cancelled)
        {
            return;
        }

        if (result.Data is not null)
        {
            var concert = result.Data as Concert;
            if (concert is null)
            {
                return;
            }

            originalItem.Name = concert.Name;
            originalItem.Description = concert.Description;
            originalItem.Date = concert.Date;
            originalItem.Venue = concert.Venue;
            originalItem.City = concert.City;
            originalItem.SetList = concert.SetList;
            originalItem.Url = concert.Url;

            repository.UpdateConcert(originalItem);
            await repository.SaveAsync();
            await LoadData();
        }
    }

    #endregion

    #region Delete
    
    private async Task DeleteItem(Concert item)
    {
        if (item is null)
        {
            return;
        }

        var dialogParameters = new DialogParameters
        {
            Title = "Delete Concert",
            PreventDismissOnOverlayClick = true,
            PreventScroll = true
        };

        var dialog = await dialogService.ShowConfirmationAsync(
            "Are you sure you want to delete this concert?", 
            "Yes", 
            "No", 
            "Delete Concert?");
        var result = await dialog.Result;
        if (!result.Cancelled)
        {
            repository.DeleteConcert(item);
            await repository.SaveAsync();
            await LoadData();
        }
    }

    #endregion

    private void ShowConcert(Concert item)
    {
        navigationManager.NavigateTo($"/concert/{item.ID}");
    }

    #endregion
}
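The details page above also calls a few repository methods that are not shown in this post: GetByIdWithConcerts, AddConcert, UpdateConcert, and DeleteConcert. Continuing the rough ArtistRepository sketch from earlier in this post (and assuming the DbContext exposes a DbSet<Concert> Concerts and Artist has a Concerts navigation property), they might look like this:

// Additional methods on the ArtistRepository sketch from the beginning of this post
public async Task<Artist?> GetByIdWithConcerts(Guid id) =>
    await _context.Artists
        .Include(a => a.Concerts)
        .FirstOrDefaultAsync(a => a.ID == id);

// The page never calls SaveAsync after AddConcert, so this sketch saves immediately
public void AddConcert(Artist artist, Concert concert)
{
    artist.Concerts ??= new List<Concert>();
    artist.Concerts.Add(concert);
    _context.SaveChanges();
}

public void UpdateConcert(Concert concert) => _context.Concerts.Update(concert);

public void DeleteConcert(Concert concert) => _context.Concerts.Remove(concert);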

More Information

This article documents some (but not all) interesting features and my learnings with Blazor and the Fluent UI. It only took a few hours to set this up. In one of my next posts, I will describe the data infrastructure behind this solution in depth.

🤟 Stay tuned and rock on.

You can find my latest code for the concert database here: https://github.com/oliverscheer/blazor-fluent-ui-demo

Getting Started with Blazor and Fluent UI

2024-06-03

I’ve been away from real UI projects for a while. Recently, I needed to create some simple UIs for several projects to pump data into databases. While almost everyone at Medialesson loves Angular, I wanted to explore something different and revisit my roots. That’s why I chose Blazor as my “new” UI framework of the month. I was surprised at how easy it is to get started with it.

Years ago, I also realized that I’m not particularly talented at building nice UIs, so I wanted to keep the design simple and use existing controls and themes. I’m happy I discovered that Fluent is quite easy to use for decent design, and it brings a lot of good UI controls, like my favorite data grid.

In this post, I want to lay out the base for my upcoming posts about how to build data-driven apps with Blazor and Fluent.

Agenda

  • Why I Like Blazor and Fluent
  • What is Blazor?
  • What is Fluent 2?
  • The First Application

Why I Like Blazor and Fluent

My top (incomplete and still growing) highlights in Blazor are:

  • Pure C#, with no real need for JavaScript/TypeScript, though it’s possible to use them.
  • Real components that can be structured in libraries for reuse.
  • Reuse of almost any other C#/.NET features, like Entity Framework and Dependency Injection.
  • Older code still works seamlessly.
  • Controls, controls, and even more controls.

But before I begin coding in the next posts, I want to highlight some essentials about Blazor and the Fluent design.

What is Blazor?

Blazor is a …

  • Web Framework: Blazor is a web framework developed by Microsoft that allows developers to build interactive web applications using C# instead of JavaScript.

And it brings …

  • .NET Integration: It is part of the ASP.NET Core framework, enabling full-stack web development with .NET, sharing code between server and client.

  • WebAssembly Support: Blazor WebAssembly (WASM) runs client-side in the browser via WebAssembly, allowing for near-native performance and offline capabilities.

  • Component-Based Architecture: Blazor uses a component-based architecture, where UI components are built as reusable pieces of code that can include markup and logic.

  • SignalR Integration: Blazor Server uses SignalR for real-time web functionality, maintaining a constant connection between the client and server to handle user interactions and UI updates.

More information about Blazor: https://blazor.net/

What is Fluent 2?

Fluent is a …

  • Design System: Microsoft Fluent 2 is a design system that provides a comprehensive set of design guidelines, components, and tools to create cohesive, accessible, and high-quality user interfaces.

And it brings …

  • Cross-Platform: Fluent 2 is designed to work across multiple platforms, including web, mobile, and desktop, ensuring a consistent user experience across different devices and applications.

  • Modern Aesthetics: It focuses on modern design principles such as simplicity, clarity, and efficiency, with an emphasis on clean lines, intuitive layouts, and vibrant yet harmonious color schemes.

  • Accessibility: Fluent 2 prioritizes accessibility, providing guidelines and components that help developers create inclusive applications that are usable by people with various disabilities.

  • Customization and Flexibility: The system is highly customizable, allowing developers to tailor the design components to match their brand identity while maintaining a coherent overall look and feel.

More information about Fluent 2: https://fluent2.microsoft.design/

Getting Started

I work with the latest version of the .NET SDK: https://dotnet.microsoft.com/en-us/download.

I always use a mix of Visual Studio and Visual Studio Code for editing code. Visual Studio Code is more straightforward and shows all files, while Visual Studio has a richer editing UI but hides some of the dirty secrets.

I assume you have the Web Development package installed when using Visual Studio.

The Fluent UI features are not part of the default installation of Visual Studio or the .NET SDKs. They are maintained separately on GitHub: https://github.com/microsoft/fluentui-blazor. Fortunately, there are project templates for dotnet, which can be used with the dotnet CLI and/or Visual Studio.

You can also manually add the Fluent UI package to an existing project with the package manager:

dotnet add package Microsoft.FluentUI.AspNetCore.Components

But honestly, you need to add some more files, links, etc., to your project. The complete documentation on how to add Fluent to an existing Blazor app can be found here.
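If you go that route, the most important step is registering the Fluent UI services at startup. A rough sketch of the relevant lines in Program.cs, assuming the package above is installed (see the linked documentation for the complete list of required changes):

using Microsoft.FluentUI.AspNetCore.Components;

var builder = WebApplication.CreateBuilder(args);

builder.Services.AddRazorComponents()
    .AddInteractiveServerComponents();

// Register the Fluent UI for Blazor services
builder.Services.AddFluentUIComponents();

var app = builder.Build();
// ... the usual Blazor middleware and MapRazorComponents<App>() calls go here ...
app.Run();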

For now, I prefer to start fresh on a greenfield project.

To check if you have the project templates already installed:

# list installed templates
dotnet new list 

If you can’t find them in the list, install them from the cli with:

# install blazor fluent templates
dotnet new install Microsoft.FluentUI.AspNetCore.Templates

Create Your First Blazor Fluent App

Create a new project with dotnet cli and start it with:

dotnet new fluentblazor -n ConcertDatabase
cd ConcertDatabase
dotnet run

Create a new project in Visual Studio:

Templates in Visual Studio

Hit F5 in Visual Studio, or click the web link in the CLI output, and you will see the beautiful sample web app.

Templates in Visual Studio

UI Controls

If you are interested in what kind of controls come with Fluent for Blazor, take a look at https://www.fluentui-blazor.net/. That’s where I got my inspiration and sample code from.

FluentUI Blazor Components

What’s next

In my next blog post, I will demonstrate how to build a data-driven application with Blazor and Fluent.

🤟Stay tuned.

Creating Apps from Websites in Microsoft Edge

2024-05-05

In the modern digital age, individuals often find themselves inundated with numerous browser tabs while attempting to navigate through their favorite websites. This influx of tabs can lead to cluttered workspaces, decreased productivity, and increased cognitive load as users struggle to manage their online activities efficiently. Moreover, accessing frequently visited websites typically requires sifting through multiple tabs or bookmarks, further complicating the browsing experience. As a result, there exists a pressing need for a solution that allows users to streamline their workflow by transforming websites into standalone applications, accessible with a single click.

Solution

Enter Microsoft Edge, the cutting-edge web browser developed by Microsoft, equipped with a feature that addresses the aforementioned challenge. With Microsoft Edge, users have the ability to convert their favorite websites into dedicated applications, providing a tailored and streamlined browsing experience. This feature eliminates the need to navigate through cluttered browser tabs or bookmarks, allowing users to access their preferred websites directly from their desktop or application menu.

The process of creating an app from a website in Microsoft Edge is straightforward:

  1. Navigate to the desired website: Open Microsoft Edge and visit the website you wish to convert into an application.
  2. Access the browser menu: Click on the ellipsis (…) icon located at the top-right corner of the browser window to access the menu.
  3. Select the “Apps” option: From the menu, choose the “Apps” option, which contains the functionality for creating applications from websites.
  4. Install the site as an app: Within the “Apps” submenu, select the “Install this site as an app” option to initiate the conversion process.
  5. Confirmation and installation: Confirm your selection, and Microsoft Edge will proceed to install the website as a dedicated application on your device.
  6. Access the app: Once installed, the website-turned-application will be accessible from your desktop or application menu, providing a convenient and efficient means of accessing your favorite online destinations.

Summary

In summary, the ability to create apps from websites in Microsoft Edge offers users a practical solution to the challenge of managing multiple browser tabs and accessing frequently visited websites with ease. By transforming websites into standalone applications, Microsoft Edge streamlines the browsing experience, enhances productivity, and provides users with a tailored and efficient workflow. This feature underscores Microsoft’s commitment to delivering innovative solutions that empower users to make the most of their digital experiences.

How I Taught ChatGPT to Read the Clock: Introducing Semantic Kernel

2024-04-25

This article is a guide to developing your first semantic kernel app using dotnet and C#, enabling you to add dynamic features to your AI solution.

Banner

Challenge and Problem Statement

A common limitation of AI models is their static nature. For instance, when asked “Is the queen still alive?” ChatGPT might respond affirmatively based on outdated information. Such models struggle with dynamically changing information and complex calculations not readily available in public documents.

What Time Is It?

Ever wondered why ChatGPT can’t provide the current date and time? As a text-generating engine, it relies on predictions from existing data. Asking for the current date, time, or day of the week therefore yields no useful answer.

Below is the initial version of my sample application with no additional plugins.

Sample1

To enhance your AI solution’s intelligence, you can leverage the plugin feature of the open-source Semantic Kernel SDK. This enables you to write your own “features” for a large language model.

Requirements

To create your first semantic kernel plugin, I recommend using the latest version of dotnet and Visual Studio Code.

Additionally, you’ll need to install the Semantic Kernel SDK in your project with: dotnet add package Microsoft.SemanticKernel.

You’ll also need an existing Azure OpenAI Service in your Azure Tenant.

Code

The demo application is a basic console chat application offering only rudimentary mathematical and datetime calculation functions.

Configuration

Create a configuration file named appsettings.json and include your model’s name, endpoint, and key in the JSON:

{
  "OpenAIEndpoint": "",
  "OpenAPIKey": "",
  "ModelName": ""
}

Plugin Code

To write a plugin, you only need to add attributes to your methods and parameters. These attributes describe what each method does, so the model knows when to call it and what data it returns.

Create a new file with the name DateTimePlugin.cs.

using Microsoft.SemanticKernel;
using System.ComponentModel;

namespace Oliver.AI.Samples.ChatGPTPlugin.Plugins
{
    public sealed class DateTimePlugin
    {
        [KernelFunction, Description("What date is today?")]
        public static DateTime GetDate()
        {
            return DateTime.Today;
        }

        [KernelFunction, Description("What day of week is today?")]
        public static DayOfWeek GetDay()
        {
            return DateTime.Today.DayOfWeek;
        }

        [KernelFunction, Description("What time is it?")]
        public static DateTime GetTime()
        {
            return DateTime.Now;
        }
    }
}
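The Program.cs below also registers a MathPlugin, which is not shown in the original listing. A minimal sketch of what such a plugin could look like (the operations and method names here are illustrative assumptions):

using Microsoft.SemanticKernel;
using System.ComponentModel;

namespace Oliver.AI.Samples.ChatGPTPlugin.Plugins
{
    public sealed class MathPlugin
    {
        [KernelFunction, Description("Add two numbers")]
        public static double Add(
            [Description("The first number")] double a,
            [Description("The second number")] double b)
        {
            return a + b;
        }

        [KernelFunction, Description("Multiply two numbers")]
        public static double Multiply(
            [Description("The first number")] double a,
            [Description("The second number")] double b)
        {
            return a * b;
        }
    }
}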

The Console Application Code

The magic to add the plugin to the existing ChatCompletionService is just a single line of code:

builder.Plugins.AddFromType<DateTimePlugin>();

Here is the complete code, in a file named Program.cs:

using Microsoft.Extensions.Configuration;
using Microsoft.SemanticKernel;
using Microsoft.SemanticKernel.ChatCompletion;
using Microsoft.SemanticKernel.Connectors.OpenAI;
using Oliver.AI.Samples.ChatGPTPlugin.Plugins;

#region Configuration

// Read the configuration from an appsettings.json file
// to avoid exposing the API key and endpoint in a demo

var configuration = new ConfigurationBuilder()
    .SetBasePath(Directory.GetCurrentDirectory())
    .AddJsonFile("appsettings.json", true)
    .AddJsonFile("appsettings.Development.json", true)
    .Build();

string endpoint = configuration["OpenAIEndpoint"] ?? "";
string modelName = configuration["ModelName"] ?? "";
string apiKey = configuration["OpenAPIKey"] ?? "";

#endregion 

// Create kernel
IKernelBuilder builder = Kernel.CreateBuilder();

// Add the Azure OpenAI chat completion service
builder.Services.AddAzureOpenAIChatCompletion(modelName, endpoint, apiKey);
builder.Plugins.AddFromType<MathPlugin>();
builder.Plugins.AddFromType<DateTimePlugin>();

Kernel kernel = builder.Build();

// Create chat history
ChatHistory history = [];

// Get chat completion service
var chatCompletionService = kernel.GetRequiredService<IChatCompletionService>();

Console.WriteLine("Olivers ChatGPT Plugins");
Console.WriteLine("-----------------------");
Console.WriteLine("Type 'exit' to quit the conversation");
Console.WriteLine();

// Start the conversation
while (true)
{
    // Get user input
    Console.Write("User > ");
    string userInput = Console.ReadLine()!;
    if (userInput.ToLower() == "exit")
    {
        break;
    }
    history.AddUserMessage(userInput);

    // Enable auto function calling
    OpenAIPromptExecutionSettings openAIPromptExecutionSettings = new()
    {
        ToolCallBehavior = ToolCallBehavior.AutoInvokeKernelFunctions
    };

    // Get the response from the AI
    var result = chatCompletionService.GetStreamingChatMessageContentsAsync(
        history,
        executionSettings: openAIPromptExecutionSettings,
        kernel: kernel);

    // Stream the results
    string fullMessage = "";
    var first = true;
    await foreach (var content in result.ConfigureAwait(false))
    {
        if (content.Role.HasValue && first)
        {
            Console.Write("Assistant > ");
            first = false;
        }
        Console.Write(content.Content);
        fullMessage += content.Content;
    }
    Console.WriteLine();
    Console.WriteLine();

    // Add the message from the agent to the chat history
    history.AddAssistantMessage(fullMessage);
}

Console.WriteLine("Goodbye!");

Running the Application

Run the application and ask some of the following questions:

  • Which day is today?
  • What time is it?
  • Which day of the week is today?
  • Welcher Wochentag ist heute?

Image2

Video

I’ve recorded a short video demonstrating this process, which I’ve posted on YouTube.

English Version | German Version

Conclusion

With the Semantic Kernel, you can create scenarios beyond simple questions. You can retrieve data from internal sources, engage in more intensive dialogs, and much more. Stay tuned for further developments.

You can find more information here.

Checking App Settings at Startup

2024-03-26

This article provides a comprehensive sample demonstrating how to effectively utilize app settings in ASP.NET Core applications.

Dog

Problem Statement

In the realm of application development, managing settings efficiently can be a pivotal but often overlooked aspect, especially when collaborating with team members. Imagine a scenario where you or your colleagues add or remove settings during development, such as passwords, connection strings, or keys. These sensitive pieces of information should never find their way into your source code control system.

However, a common occurrence is that someone adds a new setting essential for a feature without communicating it to other team members. Consequently, you might encounter unexpected exceptions or peculiar behavior within your application, leading to time-consuming investigations.

Consider this familiar code snippet:

string openAIKey = Environment.GetEnvironmentVariable("OpenAIKey");

This pattern, while prevalent, is both frustrating and risky when employed within teams.

Solution

To mitigate such issues effectively, I strongly advocate for implementing the following practices:

  1. Define Settings in a Dedicated Class
using System.ComponentModel.DataAnnotations;

namespace Oliver.Tools.Copilots
{
    public class OpenAISettings
    {
        public const string Key = "OpenAISettings";

        [Required(ErrorMessage = "OpenAIKey required")]
        public required string OpenAIKey { get; set; }

        [Required(ErrorMessage = "OpenAIEndpoint required")]
        public required string OpenAIEndpoint { get; set; }
    }
}
  2. Configure Settings at Startup in Program.cs
IServiceCollection services = builder.Services;

IConfigurationSection? openAISettings = builder.Configuration.GetSection(OpenAISettings.Key);
services
    .Configure<OpenAISettings>(openAISettings)
    .AddOptionsWithValidateOnStart<OpenAISettings>()
    .ValidateDataAnnotations();
  3. Run the application

Encountering an exception at the application’s start is both beneficial and intentional.

Exception

The sooner an exception is thrown, the earlier you can fix the problem. It is much harder to track down a missing setting late in the development process, hidden somewhere deep in the code.

  4. Include Settings in your local appsettings.json File
{
  "Logging": {
    "LogLevel": {
      "Default": "Information",
      "Microsoft.AspNetCore": "Warning"
    }
  },
  "OpenAISettings": {
    "OpenAIKey": "1234567890987654321",
    "OpenAIEndpoint": "https://youropenaiendpoint.openai.azure.com/"
  },
  ...
}

Setting names are not case-sensitive by default, but any other minor typo will result in an exception at startup.

  5. Additional Perk: Dependency Injection

This pattern facilitates effortless dependency injection, allowing settings to be readily injected into other classes.

using Microsoft.AspNetCore.Mvc;
using Microsoft.Extensions.Options;
using Oliver.Tools.Copilots;

namespace DataWebApp.Controllers
{
    [ApiController]
    [Route("[controller]")]
    public class ChatController : ControllerBase
    {
        private readonly OpenAISettings _mySettings;

        public ChatController(IOptions<OpenAISettings> mySettings)
        {
            _mySettings = mySettings.Value;
        }

        ...        
    }
}

In Conclusion

Adopting this recommended approach not only streamlines your development process but also saves invaluable time that would otherwise be spent scouring through codebases in search of elusive settings.

All My Posts

2024-03-25

Embed Sample Data in Your Code

2024-03-14

Banner

One of my favorite tricks for data-driven apps is to include sample data during development. This sample data is invaluable for various purposes such as designing UIs, conducting demos, or running tests.

For this reason, I recommend integrating test data into the solution for debug releases. While not suitable for release builds, it proves highly beneficial during debug mode.

Getting Started

First, obtain a sample dataset. For instance, you can use the Titanic dataset available here.

Next, add the CSV file to your file structure. Your Solution Explorer should resemble the following:

Solution Explorer

Particularly important: don’t forget to set the file’s Build Action property to Embedded resource. Otherwise, this will not work.

Embedded Resources File

Code

Create a class to represent a Titanic passenger:

using System.ComponentModel.DataAnnotations;

namespace Common.Models;

public class TitanicPassenger
{
    [Key]
    public int PassengerId { get; set; }
    public bool Survived { get; set; }
    public int Pclass { get; set; }
    [Required]
    public string Sex { get; set; }
    public float Age { get; set; }
    [Required]
    public string Name { get; set; }
    public int SibSp { get; set; }
    public int Parch { get; set; }
    [Required]
    public string Ticket { get; set; }
    public float Fare { get; set; }
    [Required]
    public string Cabin { get; set; }
    public char Embarked { get; set; }
}
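The seeding method at the end of this post writes into a MyCopilotDbContext. Here is a minimal sketch of such a context, inferred from the seed method’s signature (an assumption; the real context in the project may contain more sets and configuration):

using Common.Models;
using Microsoft.EntityFrameworkCore;

namespace DataWebApp.Models;

// Minimal sketch of the DbContext used by the seeding code below
public class MyCopilotDbContext : DbContext
{
    public MyCopilotDbContext(DbContextOptions<MyCopilotDbContext> options) : base(options) { }

    public DbSet<TitanicPassenger> TitanicPassengers => Set<TitanicPassenger>();
}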

Use the following code to read data from the embedded file:

using Common;
using Common.Models;
using System.Reflection;

namespace DataWebApp.Models;

public class TitanicPassengersSeed
{
    // Load CSV file into a List of TitanicPassenger   
    public static List<TitanicPassenger> LoadPassengers()
    {
        // Get the file from embedded resources
        Assembly assembly = Assembly.GetExecutingAssembly();
        string resourceName = "Common.SampleData.TitanicPassengers.csv";
        Stream? stream = assembly.GetManifestResourceStream(resourceName);
        if (stream == null)
        {
            throw new Exception("Cannot find TitanicPassengers.csv");
        }
        StreamReader reader = new StreamReader(stream);
        string[] lines = reader.ReadToEnd().Split('\n');
        List<TitanicPassenger> passengers = new();

        // Read file and create TitanicPassenger objects
        foreach (var line in lines.Skip(1))
        {
            // The Names of the passengers have commas
            // so we need to replace ", " with "__ " to avoid splitting the name
            string lineHelper = line.Replace(", ", "__ ");
            string[] columns = lineHelper.Split(',');
            TitanicPassenger passenger = new()
            {
                Survived = columns[1] == "1",
                Pclass = int.Parse(columns[2]),
                Name = columns[3].Replace("__ ", ", ").Replace("\"", ""),
                Sex = columns[4],
                Age = float.Parse(string.IsNullOrEmpty(columns[5]) ? "0" : columns[5]),
                SibSp = int.Parse(string.IsNullOrEmpty(columns[6]) ? "0" : columns[6]),
                Parch = int.Parse(string.IsNullOrEmpty(columns[7]) ? "0" : columns[7]),
                Ticket = columns[8],
                Fare = float.Parse(string.IsNullOrEmpty(columns[9]) ? "0" : columns[9]),
                Cabin = columns[10],
                Embarked = columns[11][0]
            };
            passengers.Add(passenger);
        }
        return passengers;
    }

    // Seed the database with the List of TitanicPassenger
    public static void SeedPassengers(MyCopilotDbContext context)
    {
        if (context.TitanicPassengers.Any())
        {
            return;   // DB has been seeded
        }
        List<TitanicPassenger> passengers = LoadPassengers();
        context.TitanicPassengers.AddRange(passengers);
        context.SaveChanges();
    }
}
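To keep the sample data out of release builds, you can call the seeding code conditionally at startup. A minimal sketch, assuming a typical ASP.NET Core Program.cs where app and the MyCopilotDbContext registration already exist:

// Sketch: seed the database only in debug builds, e.g. near the end of Program.cs
#if DEBUG
using (var scope = app.Services.CreateScope())
{
    var context = scope.ServiceProvider.GetRequiredService<MyCopilotDbContext>();
    context.Database.EnsureCreated();    // or apply migrations, depending on your setup
    TitanicPassengersSeed.SeedPassengers(context);
}
#endif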

Conclusion

With this approach, you can easily load seed data into your database for testing purposes.

Happy coding!

Bavaria ipsum - A Bavarian Dummy Text Generator

2024-03-13

Screenshot

My Engineering Toolset

2024-03-01

As an engineer, I use some tools quite frequently. This is my (incomplete) current set from the last 12 months.

Draw.io

Draw.io is a free, web-based diagramming application used for creating various types of diagrams such as flowcharts, network diagrams, organizational charts, UML diagrams, and more. It offers a user-friendly interface with a wide range of shapes, icons, and connectors to help users visualize their ideas and concepts. Draw.io allows users to collaborate in real-time, save diagrams to cloud storage services like Google Drive or Dropbox, and export diagrams in various formats such as PNG, JPEG, PDF, or SVG. It is popular among professionals, students, and anyone who needs to create diagrams for presentations, documentation, or brainstorming purposes.

We use it to visualize the architecture diagram, and the application flow.

Postman

Postman is a popular collaboration platform for API development. It provides a user-friendly interface that allows developers to design, test, document, and monitor APIs efficiently. Postman offers a variety of features, including:

  1. API Testing: Developers can create and execute automated tests for APIs using Postman’s testing capabilities. This helps ensure the correctness and reliability of APIs.

  2. API Documentation: Postman allows developers to generate comprehensive documentation for their APIs automatically. This documentation includes details about endpoints, request parameters, response bodies, and more.

  3. API Monitoring: Developers can monitor the performance and availability of their APIs in real-time using Postman’s monitoring features. This helps identify and address issues quickly to ensure optimal API performance.

  4. API Mocking: Postman enables developers to create mock servers for their APIs, allowing them to simulate API responses without implementing the actual backend logic. This is useful for testing and prototyping.

  5. API Collections: Postman allows developers to organize their APIs into collections, making it easy to manage and share them with team members or the broader community.

Overall, Postman is a valuable tool for API development that streamlines the process of designing, testing, and managing APIs, ultimately improving developer productivity and collaboration.

Detailed instructions how to use Postman

Azure Storage Explorer

Upload, download, and manage Azure Storage blobs, files, queues, and tables, as well as Azure Data Lake Storage entities and Azure managed disks. Configure storage permissions and access controls, tiers, and rules.

We use Azure Storage Explorer to check output of the different stages, the configuration, and the reference data.

Azure IoT Explorer

Azure IoT Explorer is a graphical user interface (GUI) tool provided by Microsoft for managing and interacting with Internet of Things (IoT) devices and solutions deployed on the Azure IoT platform. It’s part of Microsoft’s Azure IoT toolkit, designed to simplify the development, deployment, and management of IoT solutions.

Azure IoT Explorer offers various features, including:

  • Device Management: It allows users to register, monitor, and manage IoT devices connected to the Azure IoT Hub.
  • Telemetry Monitoring: Users can view real-time telemetry data generated by IoT devices, enabling them to monitor device status and performance.
  • Message Routing: Azure IoT Explorer facilitates the configuration of message routing rules to route messages between devices and Azure services based on predefined conditions.
  • Device Twin Management: Device twins are JSON documents that store device metadata and configuration information. Azure IoT Explorer provides functionalities to manage device twins, including updating desired properties and querying reported properties.
  • Direct Method Invocation: It enables users to invoke direct methods on IoT devices remotely, allowing for actions such as firmware updates or device reboots.
  • Device Simulation: Users can simulate IoT devices and their behavior within Azure IoT Explorer, which is helpful for testing and development purposes.

Overall, Azure IoT Explorer serves as a convenient tool for developers, administrators, and IoT solution architects to streamline the management and monitoring of IoT deployments on the Azure platform. However, it’s worth noting that features and capabilities may evolve over time, so it’s a good idea to check the latest documentation for any updates.

PowerShell Core for Linux & Mac

sudo apt-get install powershell

Visual Studio Code

Visual Studio Code (VS Code) is a lightweight, cross-platform source code editor developed by Microsoft. It’s designed to be highly customizable and efficient for various programming and scripting languages. VS Code provides features such as syntax highlighting, code completion, debugging support, version control integration (e.g., Git), and an extensive ecosystem of extensions to enhance functionality.

Key features of Visual Studio Code include:

  • Cross-platform: VS Code is available on Windows, macOS, and Linux, providing a consistent development experience across different operating systems.
  • Extensions: VS Code has a vast marketplace of extensions that can be easily installed to customize and extend its functionality. These extensions can add support for new languages, provide new themes, integrate with external tools, and more.
  • Integrated terminal: It includes a built-in terminal, allowing developers to run commands, scripts, and terminal-based tools directly within the editor.
  • Debugging: VS Code offers built-in support for debugging various programming languages, including JavaScript, TypeScript, Python, and more. Debugging can be done directly within the editor using breakpoints, watch variables, and other debugging features.
  • Git integration: VS Code provides seamless integration with Git and other version control systems, allowing developers to perform common version control operations such as committing changes, branching, merging, and resolving conflicts without leaving the editor.
  • Customizable: Users can customize almost every aspect of VS Code, including themes, keybindings, syntax highlighting, and more, to suit their preferences and workflow.

Overall, Visual Studio Code is widely popular among developers for its simplicity, performance, and extensive customization options, making it a versatile choice for various software development projects.

Latest Update: February 2024

Book 1

2023-12-24

Welcome to my world of code.

Book 2

2023-12-24

Welcome to my world of code.

Book 3

2023-12-24

Welcome to my world of code.

2023-12-02
  1. Frustration
  2. One Last Fire
  3. Fire in the Hall
  4. Stop the Train
  5. Primal Call
  6. Long Long Way
  7. All the Things
  8. Money Doesn’t Make You a Man
  9. Ochrasy
  10. Long Before Rock ’n’ Roll
  11. The Band
  12. Rabadam Ching
  13. Down in the Past
  14. Gloria
  15. Get Down
  16. Scream for You
  17. One Two Three
  18. Black Saturday
  19. Wake Up
  20. Get It On
  21. Dance With Somebody
  22. Love Last Forever

Curriculum Vitae / CV

2023-12-01

Principal Software Engineer @ Medialesson

I am an open-minded and curious Senior Software Engineer with more than 20 years of experience in professional software development. I’m experienced in a lot of frontend, backend, and cloud technologies. Currently I’m focusing heavily on cloud technologies.

I am an empathetic developer who enjoys writing code in teams.

I am an architect who solves customer’s technical challenges. I am an advocate of cloud technologies. But most of all, I try to be a pragmatic person who likes to learn something new every day.

Personal Information

Name: Oliver Scheer
E-Mail: oliverscheer@outlook.com
Phone: +49 175 5844505
Address: Selma-Lagerloef-Str. 4, 85375 Neufahrn bei Freising, Germany
LinkedIn: https://www.linkedin.com/in/scheeroliver/
GitHub: https://github.com/oliverscheer
Nationality: German
Languages: German (native), English (full working proficiency)

Professional Experiences

Principal Software Engineer

Medialesson

Since 2023/11

Senior Software Engineer

Microsoft Corporation

2017/07 - 2023/10

Worked successfully on multiple customer projects for 3-12 months using modern cloud and development technologies like Azure, Kubernetes, Azure DevOps, GitHub Actions, SQL and Cosmos DB, .net, Python, node.js, Angular, React, TypeScript/JavaScript and many more.

Coached distributed engineers from customers and Microsoft, with empathy, in engineering fundamentals like DevOps, CI/CD, code reviews, pair programming, testing, agile development (SCRUM, SAFe), retrospectives, and more.

Designed and implemented scaling solutions based on a wide variety of technologies and architectures from edge to cloud.

Technical Evangelist

Microsoft Germany

2006/01-2017/06

Trainer for a wide variety of developer technologies and tools: .net Framework, Mobile, Cloud, Web technologies, Azure, IoT, Desktop

Speaker/Track owner for national and international conferences and PR events

Management of developer communities

Organizer of multiple events (online/offline/distributed) up to 2000 people.

Software Developer

RTL Television, Germany

2002/10-2005/12

Architect and developer of greenfield data warehouse for finance and controlling, including management of immense amounts of data and utilization of BusinessObjects reporting.

Led a greenfield project (European-wide booking system) from inception with a team of six and ensured the long-term sustainability of the system.

Defined and implemented a lifecycle management system for planning, designing, implementing, testing, building, and deploying the European-wide advertising booking system.

Defined engineering requirements and resource planning, and implemented quality assurance mechanisms.

Software Engineer & Consultant

Wettschereck and Partner

01/2000 - 09/2002

Development of custom software solutions based on Microsoft frontend and backend technologies such as ASP, VBScript, IIS, and more.

Network Administration

Education

University of Essen, Germany

Diploma of Business Informatics (D2), 1995-2000

Thesis: Comparison of Internet-Solution Architectures.

Focus: Business and Requirements Engineering, Information technology, Controlling

Teacher at ReDI School

From 09/2022 to 12/2022 I was a teacher at the ReDI School of Digital Integration, teaching students the basics of Python development.

Certifications

  • 2023-04 GitHub Administration
  • 2023-04 GitHub Advanced Security
  • 2023-04 GitHub Actions
  • 2022-04 Certified SCRUM Product Owner
  • 2021-04 Microsoft Certified: Azure Data Scientist Associate
  • 2021-03 Advanced Certified Scrum Master Training Attendance
  • 2021-02 Microsoft Certified: Azure Developer Associate
  • 2021-02 Microsoft Certified: DevOps Engineer Expert
  • 2019-05 Microsoft Certified: Azure Fundamentals
  • 2016 INSEAD: Challenging Customers through Business Model Innovation
  • 2015-06 Microsoft Specialist: Azure Solutions
  • 2013-07 Microsoft Certified Solutions Developer: Windows Store Apps Using C#
  • 2013-07 Microsoft Specialist: Programming in C#
  • 2005 Microsoft Certified Solutions Engineer
  • 2004 Sybase IQ Data Warehouse for Developer
  • 1999 Microsoft Certified Systems Engineer for Windows NT 4.0

Technical Experiences and Skills

Main technologies used in the last twelve months:

Azure Cloud Services

DevOps on Azure DevOps and GitHub

GitHub

Infrastructure as Code: Terraform and Bicep

ASP.NET Core

.net core

C#

React, Angular

node.js

Bash on Ubuntu

Docker

AKS / K8S

Python

Visual Studio & Visual Studio Code

Experienced in Engineering Fundamentals like:

Agile Methodologies: Scrum and SAFe

Testing

Code Reviews

CI/CD - Continuous Integration / Continuous Delivery

Observability

Source Code Control

I have solid experience with many more tools and technologies, collected over the last 25 years. I'm open to learning new technologies as required by the project.

More Experiences

On-the-job coaching of team members and customer developers during code-with projects.

Technology trainings for teams, colleagues, and partners

Experienced speaker at internal, external, national, and international events (100+)

Experienced Community Evangelist

Hello and Servus, I am Oliver

2023-12-01

👋 I’m a seasoned Principal Software Engineer with over two decades of professional experience in software development.

With a keen focus on developer experiences, DevOps, and cloud technologies, I bring a wealth of expertise across various frontend, backend, and cloud platforms.

As an open-minded and empathetic developer, I thrive on collaborative coding endeavors, finding joy in writing code as part of a team.

I also assume the role of an architect, adept at unraveling and solving complex technical challenges for our valued customers.

An enthusiastic advocate of cloud technologies, I am committed to staying abreast of the latest advancements in the field. Above all, I embody pragmatism in my approach, always eager to glean new knowledge and insights each day.

Join me on this journey as we navigate the ever-evolving landscape of software development and cloud technology.

Right now I’m working as a Principal Software Engineer at @medialesson.

My focus topics are:

- ☁️ Azure Cloud
- 🔧 DevOps with GitHub and Azure
- ⌨️ Developer Experiences
- 🧑‍💻 .net

About me:

- 🏠 I'm living close to Munich, Germany, Europe, Planet Earth

Contact 📫

LinkedIn

Oliver Scheer

Selma-Lagerloef-Str. 4

85375 Neufahrn

Germany

Email: oliverscheer@outlook.com

Imprint

2023-12-01

Responsible for content:

Oliver Scheer

Selma-Lagerloef-Str. 4

85375 Neufahrn

Germany

Email: oliverscheer@outlook.com

This blog is a personal blog and serves the purpose of providing information on creating software. The content of this blog is created with great care and attention to detail. However, I do not assume any liability for the correctness, completeness, and topicality of the content.

All texts, images and other works published on this website are subject to copyright. Any duplication, distribution, storage, transmission, broadcast, or reproduction of the content without written permission from me is expressly prohibited.

Despite careful control of the content, I do not assume any liability for the content of external links. The operators of the linked pages are solely responsible for their content.

The use of contact data, such as postal addresses, telephone and fax numbers, and email addresses published in the imprint or comparable information by third parties for the purpose of sending unsolicited information is not permitted. We reserve the right to take legal action against the senders of so-called spam mails in the event of violations of this prohibition.

Privacy

2023-12-01

English version is below.

Datenschutzrichtlinie für oliverscheer.net

Ich lege großen Wert auf den Schutz Ihrer persönlichen Daten und die Einhaltung der geltenden datenschutzrechtlichen Bestimmungen. Nachfolgend erläutere ich Ihnen, welche Daten bei Ihrem Besuch meines Blogs erhoben werden und wie ich diese Daten nutze.

Verantwortlicher für die Datenverarbeitung

Verantwortlicher im Sinne der Datenschutz-Grundverordnung (DSGVO) und anderer nationaler Datenschutzgesetze sowie sonstiger datenschutzrechtlicher Bestimmungen ist:

Oliver Scheer

Selma-Lagerlöf-Str. 4

85375 Neufahrn

Deutschland

oliverscheer@outlook.com

Erhebung und Verarbeitung von Daten

a) Nutzung des Blogs

Wenn Sie meinen Blog besuchen, werden automatisch verschiedene Informationen von Ihrem Browser an meinen Server übermittelt. Diese Daten werden von mir erhoben und automatisch verarbeitet. Dabei handelt es sich um folgende Informationen:

Datum und Uhrzeit des Zugriffs

Ihre IP-Adresse

Browsertyp und -version

Betriebssystem

die Webseite, von der aus Sie uns besuchen (Referrer-URL)

die Unterseiten, die Sie bei uns aufrufen

Die Verarbeitung dieser Daten erfolgt, um Ihnen den Zugriff auf meinen Blog zu ermöglichen und die Sicherheit und Stabilität meines Systems zu gewährleisten. Rechtsgrundlage für die Verarbeitung ist Art. 6 Abs. 1 lit. f DSGVO.

b) Kommentare

Wenn Sie einen Kommentar auf meinem Blog hinterlassen, werden Ihr Name, Ihre E-Mail-Adresse und Ihre IP-Adresse gespeichert. Diese Daten werden benötigt, um unerwünschte Kommentare zu verhindern und gegebenenfalls rechtliche Ansprüche gegen den Verfasser des Kommentars geltend zu machen. Rechtsgrundlage für die Verarbeitung ist Art. 6 Abs. 1 lit. f DSGVO. Die Daten werden gelöscht, sobald sie nicht mehr benötigt werden.

Weitergabe von Daten

Ihre Daten werden von mir nicht an Dritte weitergegeben, es sei denn, ich bin gesetzlich dazu verpflichtet oder Sie haben der Weitergabe ausdrücklich zugestimmt.

Ihre Rechte

Sie haben das Recht, Auskunft darüber zu erhalten, welche personenbezogenen Daten von Ihnen bei mir gespeichert werden. Sie können mich auch auffordern, Ihre Daten zu korrigieren, zu löschen oder die Verarbeitung Ihrer Daten einzuschränken. Darüber hinaus haben Sie das Recht auf Datenübertragbarkeit und das Recht, Beschwerde bei der zuständigen Aufsichtsbehörde einzureichen.

Änderungen dieser Datenschutzrichtlinie

Ich behalte mir das Recht vor, diese Datenschutzrichtlinie jederzeit zu ändern. Die jeweils aktuelle Version finden Sie stets auf meinem Blog.

English Version

Privacy Policy for oliverscheer.net

I attach great importance to the protection of your personal data and compliance with the applicable data protection regulations. Below, I explain what data is collected when you visit my blog and how I use this data.

Controller for data processing

The controller in terms of the General Data Protection Regulation (GDPR) and other national data protection laws as well as other data protection regulations is:

Oliver Scheer

Selma-Lagerloef-Str. 4

85375 Neufahrn

Germany

oliverscheer@outlook.com

Collection and processing of data

a) Use of the blog

When you visit my blog, various information is automatically transmitted from your browser to my server. This data is collected and processed automatically by me. This information includes:

Date and time of access

Your IP address

Browser type and version

Operating system

The website from which you are visiting us (referrer URL)

The subpages you access on our site

The processing of this data is carried out to enable you to access my blog and to ensure the security and stability of my system. The legal basis for the processing is Art. 6 para. 1 lit. f GDPR.

b) Comments

If you leave a comment on my blog, your name, email address, and IP address will be stored. This data is required to prevent unwanted comments and, if necessary, to assert legal claims against the author of the comment. The legal basis for the processing is Art. 6 para. 1 lit. f GDPR. The data will be deleted as soon as it is no longer needed.

Disclosure of data

I do not disclose your data to third parties, unless I am legally obligated to do so or you have expressly consented to the disclosure.

Your rights

You have the right to obtain information about which personal data of yours is stored by me. You can also ask me to correct, delete, or restrict the processing of your data. In addition, you have the right to data portability and the right to lodge a complaint with the competent supervisory authority.

Changes to this privacy policy

I reserve the right to change this privacy policy at any time. The current version can always be found on my blog.

Auto Cleanup Azure Blob Storage

2023-11-14

This article gives you a snippet to clean up your blob storage on a regular schedule, so that data is only kept for a specific amount of time.

Azure Blob Storage offers a brilliant and straightforward solution for storing vast amounts of data. However, when it’s unnecessary to retain all data indefinitely, such as data only needed for a few days, it becomes essential to periodically clean up the storage. This ensures optimal resource management and cost-effectiveness within your Azure environment.

using Microsoft.Azure.Functions.Worker;
using Microsoft.Extensions.Logging;

namespace OliverSamples
{
    public class CleanupFunction(ILoggerFactory loggerFactory)
    {
        private readonly ILoggerFactory _loggerFactory = loggerFactory;
        private readonly ILogger _logger = loggerFactory.CreateLogger<CleanupFunction>();

        [Function("StorageCleanup")]
        public async Task Run([TimerTrigger("0 */2 * * * *")] TimerInfo myTimer)
        {
            _logger.LogInformation($"C# Timer trigger function executed at: {DateTime.Now}");

            StorageService storageService = new(_loggerFactory);
            await storageService.DeleteOldData();

            if (myTimer.ScheduleStatus is not null)
            {
                _logger.LogInformation($"Next timer schedule at: {myTimer.ScheduleStatus.Next}");
            }
        }
    }
}

The logic for cleaning up the storage resides within a small service helper that I’ve personally developed.

using Azure.Storage.Blobs;
using Azure.Storage.Blobs.Models;
using Microsoft.Extensions.Logging;
using Microsoft.WindowsAzure.Storage;
using Microsoft.WindowsAzure.Storage.Blob;
using System.Text.Json;

public class StorageService
{
    private readonly string _blobStorageConnectionString;
    private readonly ILogger<StorageService> _logger;
    private CloudStorageAccount? _storageAccount;
    private CloudBlobClient? _blobClient;
    private int _maxHoursToKeep = 24;

    public StorageService(ILoggerFactory loggerFactory)
    {
        _logger = loggerFactory.CreateLogger<StorageService>();

        string blobStorageConnectionString = Environment.GetEnvironmentVariable(Const.AppSettings.STORAGE_ACCOUNT_CONNECTION_STRING) ?? "";
        if (string.IsNullOrEmpty(blobStorageConnectionString))
        {
            throw new Exception($"Configuration '{Const.AppSettings.STORAGE_ACCOUNT_CONNECTION_STRING}' is not set.");
        }
        _blobStorageConnectionString = blobStorageConnectionString;
    }

    private CloudBlobClient GetBlobClient()
    {
        if (_blobClient != null)
        {
            return _blobClient;
        }
        _storageAccount ??= CloudStorageAccount.Parse(_blobStorageConnectionString);
        _blobClient = _storageAccount.CreateCloudBlobClient();
        return _blobClient;
    }

    public async Task DeleteOldData()
    {
        List<string> containerToClean =
        [
            "MyContainer1", 
            "MyContainer2", 
            "MyContainer3"
        ];

        foreach(var container in containerToClean)
        {
            await CleanContainer(container);
        }
    }

    private async Task CleanContainer(string containerName)
    {
        CloudBlobClient blobClient = GetBlobClient();
        CloudBlobContainer container = blobClient.GetContainerReference(containerName);
        BlobContinuationToken continuationToken = null;
        do
        {
            var resultSegment = await container.ListBlobsSegmentedAsync(null, true, BlobListingDetails.Metadata, null, continuationToken, null, null);
            continuationToken = resultSegment.ContinuationToken;
            foreach (IListBlobItem item in resultSegment.Results)
            {
                if (item is CloudBlockBlob blockBlob)
                {
                    DateTimeOffset? created = blockBlob.Properties.Created;
                    if (created.HasValue && DateTimeOffset.UtcNow.Subtract(created.Value).TotalHours > _maxHoursToKeep)
                    {
                        await blockBlob.DeleteAsync();
                    }
                }
            }
        } while (continuationToken != null);
    }

}

Conclusion

With this Azure Function you clean the containers in your blob storage every two minutes (the schedule of the timer trigger above). Files that are older than 24 hours will be removed.

Authorize User in Azure Functions in Isolated Mode

2023-11-13

Alright, fellow cloud adventurers, let’s talk about Azure Functions and the wild ride that is .NET 8 Isolated Mode. You see, when it comes to authorizing functions for specific user groups, many of us rely on the trusty Authorize-Attribute. It’s been our go-to for granting access to authenticated user groups with ease.

But hold onto your hats, because things take an unexpected turn when you try to wield this power in Azure Functions .NET 8 Isolated Mode. Suddenly, that trusty old Authorize-Attribute seems to have lost its mojo.

What gives, you ask? Well, it seems the way our functions check request headers isn’t quite the same as it used to be. But fear not, intrepid developers! With a dash of brainstorming and a sprinkle of ingenuity, I stumbled upon a solution.

Enter: the DIY token checker. That’s right, folks. When the going gets tough, the tough get coding. I rolled up my sleeves and crafted a nifty little helper to handle token checks for specific user groups.

Because in the ever-evolving world of Azure Functions and .NET 8 Isolated Mode, sometimes you’ve got to take matters into your own hands. So here’s to blazing new trails, overcoming unexpected challenges, and always finding a way to make our functions work for us – no matter what mode they’re in.

using Microsoft.Azure.Functions.Worker.Http;
using System.Security.Claims;
using System.Security.Principal;

namespace OliverS.Helper
{
    public static class ClaimsHelper
    {
        public static bool CheckPrincipalHasClaim(HttpRequestData req, string claimType, string claimValue)
        {
            ClaimsPrincipal? principal = ClaimsPrincipalHelper.ParseFromRequest(req);

            if (principal == null)
            {
                return false;
            }

            if (principal.HasClaim(claimType, claimValue))
            {
                return true;
            }
            return false;
        }

        public static bool ClaimExists(this IPrincipal principal, string claimType)
        {
            if (principal is not ClaimsPrincipal ci)
            {
                return false;
            }

            Claim? claim = ci.Claims.FirstOrDefault(x => x.Type == claimType);
            return claim != null;
        }

        public static bool HasClaim(
            this IPrincipal principal, 
            string claimType,
            string claimValue, 
            string issuer = null)
        {
            if (principal is not ClaimsPrincipal ci)
            {
                return false;
            }

            var claim = ci
                .Claims
                .FirstOrDefault(x => x.Type == claimType && x.Value == claimValue && (issuer == null || x.Issuer == issuer));
            return claim != null;
        }

        public static string GetUserEmail(HttpRequestData req)
        {
            ClaimsPrincipal? principal = ClaimsPrincipalHelper.ParseFromRequest(req);
            if (principal == null)
            {
                return string.Empty;
            }
            string result = principal.FindFirst("unique_name")?.Value ?? string.Empty;
            return result;
        }

        public static string GetUserName(HttpRequestData req)
        {
            ClaimsPrincipal? principal = ClaimsPrincipalHelper.ParseFromRequest(req);
            if (principal == null)
            {
                return string.Empty;
            }
            string result = principal.Identity?.Name ?? string.Empty;
            return result;
        }
    }
}

Now, picture this: a trusty claims helper swoops in to save the day! With this nifty tool, we can determine whether a user possesses a specific claim they’re eager to access.

It’s like having a guardian angel for our authentication process, ensuring that only those with the right credentials can venture forth into the realm of our Azure Functions. So whether it’s a VIP pass to a restricted area or a golden ticket to exclusive features, our claims helper is here to grant access to those who truly deserve it.

[Function("mysamplefunction")]
public async Task<HttpResponseData> GetMyData(
    [HttpTrigger(AuthorizationLevel.Anonymous, "get", Route = "mydata")] HttpRequestData req)
{
    #region Is User Admin

    if (!ClaimsHelper.CheckPrincipalHasClaim(req, Const.RoleConst.RolesClaim, Const.RoleConst.Admin))
    {
        var unauthorizedResponse = new CustomResponse()
        {
            StatusCode = HttpStatusCode.Unauthorized,
            Message = "Unauthorized."
        };
        return unauthorizedResponse.CreateResponse(req);
    }

    #endregion

    
    HttpResponseData response = new CustomResponse()
    {
        StatusCode = HttpStatusCode.OK,
        Message = "Rolls found",
        Result = rolls
    }.CreateResponse(req);

    return response;
}

In this sample, I'm using a helper class to make working with HttpResponseData easier.

using Microsoft.Azure.Functions.Worker.Http;
using System.Net;
using System.Text.Json;
using System.Text.Json.Serialization;

namespace Haehl.IoTRoll.Models.Response
{
    public class CustomResponse
    {
        [JsonIgnore]
        public HttpStatusCode StatusCode { get; set; }

        public string Message { get; set; } = string.Empty;

        [JsonIgnore(Condition = JsonIgnoreCondition.WhenWritingNull)]
        public string[]? ErrorMessages { get; set; } = null;

        [JsonIgnore(Condition = JsonIgnoreCondition.WhenWritingNull)]
        public object Result { get; set; } = default!;

        public HttpResponseData CreateResponse(HttpRequestData req)
        {
            HttpResponseData response = req.CreateResponse(StatusCode);
            response.Headers.Add("Content-Type", "application/json; charset=utf-8");

            try
            {
                var json = JsonSerializer.Serialize(this);
                response.WriteString(json);
                return response;
            } 
            catch (Exception exc)
            {
                response.WriteString(exc.Message);
                return response;
            }
            
        }
    }
}

Conclusion

Checking the claims of a user is simple, but for some reason it is not automatically available in .NET 8 isolated mode.

Azure Stream Analytics just doesn't support Azure KeyVault

2023-11-12

Azure KeyVault is undeniably invaluable for securely storing secrets, and I’ve increasingly relied on it in my projects. However, I encountered a significant roadblock when attempting to integrate it with Azure Stream Analytics Jobs.

Surprisingly, it’s not feasible to utilize KeyVault directly within Azure Stream Analytics Jobs. Attempts to create inputs or outputs with credentials stored in KeyVault proved futile.

This limitation underscores the need for alternative strategies when handling secrets in certain Azure services like Stream Analytics Jobs. While KeyVault remains a cornerstone of secure secret management, it’s crucial to explore alternative approaches to ensure seamless integration across various Azure services.
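A workaround that has worked for me is to resolve the secret at deployment time instead: read it from KeyVault with the Azure CLI and pass it into the job configuration as an ordinary deployment parameter. Below is a minimal PowerShell sketch; the Key Vault name, secret name, Bicep file, and parameter name are placeholders of my own, not values from a real project.

$keyVaultName = 'my-keyvault'              # placeholder Key Vault name
$secretName = 'storage-connection-string'  # placeholder secret name

# Read the secret value at deployment time with the Azure CLI
$secretValue = az keyvault secret show `
    --vault-name $keyVaultName `
    --name $secretName `
    --query value `
    --output tsv

# Hand the value to the IaC deployment that configures the
# Stream Analytics inputs and outputs as a regular parameter
az deployment group create `
    --resource-group myresourcegroup `
    --template-file ./main.bicep `
    --parameters storageConnectionString=$secretValue

This way the secret stays in KeyVault and only lives in the deployment context, even though the Stream Analytics job itself cannot reference it.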

Enable and Disable Authentication With PowerShell

2023-11-10

In a recent project, a unique challenge emerged: the need to temporarily remove authentication from an Azure Function for testing purposes, only to later reinstate it. Surprisingly, finding a straightforward solution proved elusive. Despite extensive exploration, including searching for a Bicep solution or relevant APIs, I encountered obstacles. While some methods disabled authentication, artifacts persisted, preventing a clean removal.

However, amidst this quest for a solution, a breakthrough emerged: the Azure REST API, accessible via Azure CLI, revealed itself as the ultimate tool. Leveraging this powerful API, I devised a pair of PowerShell functions capable of seamlessly managing authentication providers within Azure Functions.

But why is this significant? Consider scenarios where developers need to streamline testing processes or troubleshoot authentication-related issues within Azure Functions. By understanding and harnessing the Azure REST API, developers gain unprecedented control and flexibility, empowering them to tailor authentication settings with precision and efficiency.

Let’s delve into the mechanics behind this solution. The PowerShell functions below exemplify the simplicity and effectiveness of utilizing the Azure REST API to delete and subsequently re-add authentication providers within Azure Functions:

Enable Authentication

param (
  [Parameter(Mandatory=$true)]
  [string]$functionAppName,

  [Parameter(Mandatory=$true)]
  [string]$resourceGroupName,

  [Parameter(Mandatory=$true)]
  [string]$issuer,

  [Parameter(Mandatory=$true)]
  [string]$clientId,

  [Parameter(Mandatory=$true)]
  [string]$subscriptionId
)

$identityProvider = "AzureActiveDirectory"
$resourceProviderName = "Microsoft.Web"
$resourceType = "sites"

$name = $functionAppName + "/config/authsettingsV2"

Write-Host "Enable Authentication"
Write-Host "Resource Group Name               : $resourceGroupName"
Write-Host "Function App Name                 : $functionAppName"
Write-Host "Identity Provider                 : $identityProvider"
Write-Host "Issuer                            : $issuer"
Write-Host "Client Id                         : $clientId"
Write-Host "Resource Provider Name            : $resourceProviderName"
Write-Host "Resource Type                     : $resourceType"
Write-Host "Name                              : $name"

$uri = "/subscriptions/" + $subscriptionId + "/resourceGroups/" + $resourceGroupName + "/providers/Microsoft.Web/" + $resourceType + "/" + $name + "?api-version=2021-03-01"
Write-Host "Uri: $uri"

$body = "{ 'properties': { 'globalValidation': { 'requireAuthentication': 'true', 'unauthenticatedClientAction': 'Return401' }, 'identityProviders': { 'azureActiveDirectory': { 'enabled': 'true', 'registration': { 'openIdIssuer': '$issuer', 'clientId': '$clientId', 'clientSecretSettingName': 'MICROSOFT_PROVIDER_AUTHENTICATION_SECRET' } } } } }"
az rest --method Put --uri $uri --verbose --body $body

Disable Authentication

param (
  [Parameter(Mandatory=$true)]
  [string]$functionAppName,

  [Parameter(Mandatory=$true)]
  [string]$resourceGroupName,

  [Parameter(Mandatory=$true)]
  [string]$subscriptionId
)

$identityProvider = "AzureActiveDirectory"
$resourceProviderName = "Microsoft.Web"
$resourceType = "sites"
$name = $functionAppName + "/config/authsettingsV2"

Write-Host "Disable Authentication"
Write-Host "Resource Group Name               : $resourceGroupName"
Write-Host "Function App Name                 : $functionAppName"
Write-Host "Identity Provider                 : $identityProvider"
Write-Host "Resource Provider Name            : $resourceProviderName"
Write-Host "Resource Type                     : $resourceType"
Write-Host "Name                              : $name"

$uri = "/subscriptions/" + $subscriptionId + "/resourceGroups/" + $resourceGroupName + "/providers/Microsoft.Web/" + $resourceType + "/" + $name + "?api-version=2021-03-01"
Write-Host "Uri: $uri"

$body = "{ 'globalValidation': { 'requireAuthentication': 'false', 'unauthenticatedClientAction': 'AllowAnonymous' }, 'httpSettings': { 'forwardProxy': { 'convention': 'NoProxy' }, 'requireHttps': 'true', 'routes': { 'apiPrefix': '/.auth' } }, 'identityProviders': { 'azureActiveDirectory': { 'enabled': 'true', 'login': { 'disableWWWAuthenticate': 'false' }, 'registration': {}, 'validation': { 'defaultAuthorizationPolicy': { 'allowedPrincipals': {} }, 'jwtClaimChecks': {} } } } }"

az rest --method Put --uri $uri --verbose --body $body
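
Assuming the two snippets above are saved as Disable-Authentication.ps1 and Enable-Authentication.ps1 (the file names are my own choice), a typical test cycle could look like this:

.\Disable-Authentication.ps1 `
    -functionAppName myfunctionapp `
    -resourceGroupName myresourcegroup `
    -subscriptionId "<yoursubscriptionid>"

# ... run the tests against the function without authentication ...

.\Enable-Authentication.ps1 `
    -functionAppName myfunctionapp `
    -resourceGroupName myresourcegroup `
    -issuer "https://login.microsoftonline.com/<yourtenantid>/v2.0" `
    -clientId "<yourclientid>" `
    -subscriptionId "<yoursubscriptionid>"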

Conclusion

These two scripts remove the authentication settings from an Azure Function and re-add them with the help of the Azure CLI and PowerShell.

Enhance Your .NET Console Applications with Spectre.Console

2023-11-10

Console applications in .NET often lack visual appeal and interactivity. However, Spectre.Console emerges as my personal game-changer, revolutionizing the way developers craft command-line interfaces (CLIs). Offering a rich set of features, Spectre.Console elevates user experience and developer productivity.

With Spectre.Console, developers can effortlessly create stylish and dynamic text-based UIs. Its intuitive API enables easy customization of colors, styles, and layouts, breathing life into mundane console applications. From progress bars to tables, and interactive prompts to ASCII art, Spectre.Console empowers developers to build immersive command-line experiences with minimal effort.

Say goodbye to bland console applications and embrace the power of Spectre.Console for vibrant, engaging CLI development.

Documentation: https://spectreconsole.net/

Source: GitHub

Using PowerShell Files in Azure DevOps Pipelines

2023-11-09

My Learning Experience

Navigating environment variables, outputs, parameters, and other variables in pipelines can be a daunting task within Azure DevOps. With various methods for accessing variables, inconsistencies in casing, and numerous other challenges, the process can be far from straightforward.

My advice, learned through rigorous trial and error, is to steer clear of these complexities whenever possible or, at the very least, to encapsulate any code snippets exceeding two lines within PowerShell scripts.

By adopting this approach, not only can you streamline your pipeline processes, but you also gain the flexibility to test individual components separately.

A Practical Example

Executing a Bicep file within pipelines is a common requirement, yet passing parameters to Bicep can often be time-consuming and convoluted. To mitigate these challenges, I strongly recommend crafting a helper script. Such a script can be developed locally and seamlessly integrated into the pipeline once finalized.

param (
  [Parameter(Mandatory=$true)]
  [string]$templateFile1,

  [Parameter(Mandatory=$true)]
  [string]$resourceGroup,

  [Parameter(Mandatory=$true)]
  [string]$projectname,

  [Parameter(Mandatory=$true)]
  [string]$location,

  [Parameter(Mandatory=$true)]
  [string]$environment,

  [Parameter(Mandatory=$true)]
  [string]$iotHubName
)

$deploymentName = "deploy-part-1-$projectname-$environment"

Write-Host "Deploy Infrastructure with Bicep"
Write-Host "- deploymentName : $deploymentName"
Write-Host "- bicepScriptPath: $templateFile1"
Write-Host "- location       : $location"
Write-Host "- projectname    : $projectname"
Write-Host "- resourceGroup  : $resourceGroup"
Write-Host "- env            : $environment"
Write-Host "- iotHubName     : $iotHubName"

$deploymentResult = az deployment group create `
  --resource-group $resourceGroup `
  --template-file $templateFile1 `
  --name $deploymentName `
  --parameters `
    name=$projectname `
    location=$location `
    env=$environment `
    iothubName=$iotHubName 
  | ConvertFrom-Json

$outputs = $deploymentResult.properties.outputs
$powerbiFunctionAppName = $outputs.powerbiFunctionAppName.value
$settingsFunctionAppName = $outputs.settingsFunctionAppName.value
$storageAccountName = $outputs.storageAccountName.value

# Some outputs for the pipeline later
Write-Host "##vso[task.setvariable variable=POWERBI_FUNCTION_APP_NAME;isOutput=true;]$powerbiFunctionAppName"
Write-Host "##vso[task.setvariable variable=SETTINGS_FUNCTION_APP_NAME;isOutput=true;]$settingsFunctionAppName"
Write-Host "##vso[task.setvariable variable=STORAGE_ACCOUNT_NAME;isOutput=true;]$storageAccountName"

Write-Host "- Power Bi Function App Name : $powerbiFunctionAppName"
Write-Host "- Settings Function App Name : $settingsFunctionAppName"
Write-Host "- Storage Account Name       : $storageAccountName"

# Some outputs for local usage
$env:SETTINGS_FUNCTION_APP_NAME = $settingsFunctionAppName
$env:POWERBI_FUNCTION_APP_NAME = $powerbiFunctionAppName
$env:STORAGE_ACCOUNT_NAME = $storageAccountName

if ($null -eq $powerbiFunctionAppName) {
  Write-Error "- Powerbi Function App Name is null"
  Exit 1  
  return
}

if ($null -eq $settingsFunctionAppName) {
  Write-Error "- Settings Function App Name is null"
  Exit 1  
  return
}

if ($null -eq $storageAccountName) {
  Write-Error "- Storage Account Name is null"
  Exit 1  
  return
}

$waitSeconds = 30
write-Host "- Wait $waitSeconds seconds for the function app to be ready"
Start-Sleep -Seconds $waitSeconds
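
Because this is a plain PowerShell file, it can be developed and debugged locally before it is wired into the pipeline. A local call might look like the following; the script name deploy-infrastructure.ps1 and all parameter values are placeholders of my own:

az login
az account set -s "<yoursubscriptionid>"

.\deploy-infrastructure.ps1 `
    -templateFile1 ./bicep/main.bicep `
    -resourceGroup myresourcegroup `
    -projectname myproject `
    -location westeurope `
    -environment dev `
    -iotHubName myiothub

In the pipeline, the same call can then be made from a task; only the values come from pipeline parameters and variables instead.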

Conclusion

By segregating scripts for local testing and subsequent integration within pipelines, you can enhance both the efficiency and reliability of your deployment processes.

Using User Defined Functions in Azure Stream Analytics Job

2023-11-08

Challenge

Azure Stream Analytics exclusively supports functions written in .NET Standard 2.0. Handling JSON data often necessitates tools such as Newtonsoft.Json or features from System.Text.Json, both of which are NOT accessible in .NET Standard 2.0. Another compelling reason to opt out of .NET is the complexity involved in storing and updating the compiled package at a designated path in Azure Blob Storage and then referencing it within the job configuration.

Losing the comfort of C# and .net and switching to JavaScript wasn't as hard as it seemed in the beginning. The biggest challenge was to parse the incoming JSON correctly. Uppercase and lowercase characters can ruin your day.

JavaScript Calculations

The subsequent function provided is greatly simplified, merely adding two values to produce a new result. However, it’s important to note that you can perform highly intricate calculations as well.

function main(incomingData) {
    try {
        var result = calculateValues(incomingData);
        return result;
    }
    catch (err) {
        var result = {
            'newCalculatedValue': 0.0
        }
        return result;
    }
};

function calculateValues(incomingData) {
    var newCalculatedValue = incomingData.value1 + incomingData.value2;
    var result = {
        'newCalculatedValue': newCalculatedValue
    }
    return result;
}

I’ve implemented a catch block as a precautionary measure in case any “incorrect” values are received and cannot be converted accurately. Depending on the job’s settings, an uncaught exception could result in halting the job.

Calling the JavaScript Functions in the Stream Analytics Job Query

The query provided for the job is simplified to illustrate its usage.

WITH iothubstream AS
(
    SELECT 
        EventEnqueuedUtcTime,
        EventProcessedUtcTime,
        [IoTHub].ConnectionDeviceId AS ConnectionDeviceId,
        *
    FROM 
        inputiothub TIMESTAMP BY EventEnqueuedUtcTime
)
, calculateddata AS
(
    SELECT
        UDF.Calc(joineddata) as calculated,
        *
    FROM 
        iothubstream
)
, preparedView AS
(
    SELECT
        calculated.newCalculatedValue as newCalculatedValue,
        *
    FROM calculateddata
)

SELECT * 
INTO 
     outputblobstorage
FROM 
     preparedView

Conclusion

Creating custom values within a stream job using User-Defined Functions is straightforward in JavaScript. However, it’s not advisable to do so in the CLR (Common Language Runtime) way, as it only supports .NET Standard 2.0.

More Information

Updating a running Azure Stream Analytics Job

2023-11-07

Problem Statement

Stream Analytics Jobs are a powerful means of analyzing and distributing incoming data to various target storages or services in Azure and other locations. You can only update the definition of the job, when it is stopped. However, initiating and halting Stream Analytics Jobs can be time-consuming, often requiring several minutes depending on the query’s complexity and the input/output sources.

When it comes to updating the query within an automated process like CI/CD pipelines, waiting for the job to stop, updating it, and then restarting the service can present a significant challenge.

In a recent project, I devised several PowerShell routines to streamline this task.

  1. Stop the Stream Analytics Job
  2. Update the job
  3. Restart the job

Stopping a Stream Analytics Job

[CmdletBinding()]
param (

    [Parameter(Mandatory=$true)]
    [string]$streamAnalyticsJobName,
    
    [Parameter(Mandatory=$true)]
    [string]$resourceGroup
)

# Stop Job
Write-Host "Stop Stream Analytics Job"
Write-Host "- streamAnalyticsJobName: $streamAnalyticsJobName"
Write-Host "- resourceGroup         : $resourceGroup"
Write-Host "- We wait max 5 minutes for job to stop"

$isStopping = $false
$waitSeconds = 5
$counter = 0

# try for 5 minutes to stop
do {
  $counter++
  $seconds = $counter * $waitSeconds
  
  $result=az stream-analytics job list --resource-group $resourceGroup --query "[?contains(name, '$streamAnalyticsJobName')].name" --output table
  $count = ($result | Measure-Object).Count
  if ($count -eq 1) {
    # Job not found, it is new
    Write-Host "- Job not found in list. We will create new one."
    break
  }

  # Current Status
  $resultRaw = az stream-analytics job show --job-name $streamAnalyticsJobName --resource-group $resourceGroup

  if ($? -eq $false) {
    # Job not found, it is new
    Write-Host "- Job not found. We will create new one."
    break
  }

  $result = $resultRaw | ConvertFrom-Json
  if ($null -eq $result) {
    # Job not found, it is new
    Write-Host "- Job not found. We will create new one."
    break
  } 

  # Job already exists, get job state

  $jobstate = $result.Jobstate
  Write-Host "- Current Jobstate: $jobstate"

  if ($jobstate -eq 'Stopped' -or $jobstate -eq 'Created' -or $jobstate -eq 'Failed') {
    break
  }

  # Only send stop command once
  if ($isStopping -eq $true) {
    Write-Host "- Job is already stopping, waiting for it to stop"
  } else {
    $isStopping = $true
    Write-Host "- Job is not stopped, stopping"
    az stream-analytics job stop --job-name $streamAnalyticsJobName --resource-group $resourceGroup
  }

  Write-Host "- Still stopping ($seconds seconds passed)"
  Start-Sleep -Seconds $waitSeconds

} while ($seconds -lt 300)

if ($seconds -gt 290) {
  Write-Error "- Job did not stop after 5 minutes"
  return
}

This script attempts to halt the specified job and will wait for 300 seconds, equivalent to 5 minutes, for a response. If the process exceeds this time frame, it indicates that stopping the job has likely failed.

Restarting the job

[CmdletBinding()]
param (
    [Parameter(Mandatory=$true)]
    [Alias('name')]
    [string]$streamAnalyticsJobName,

    [Parameter(Mandatory=$true)]
    [Alias('rg')]
    [string]$resourceGroup
)

Write-Host "Start Stream Analytics Job"
Write-Verbose "- streamAnalyticsJobName: $streamAnalyticsJobName"
Write-Verbose "- resourceGroup         : $resourceGroup"

az stream-analytics job start --job-name $streamAnalyticsJobName --resource-group $resourceGroup --no-wait

$counter = 0
$maxRetries = 60 # 60 * 10 seconds = 10 minutes

$waitTime = $maxRetries * 10
Write-Host "- Job should start within 120 seconds, we try max $waitTime seconds"

do {
    $counter++

    # Current Status
    $result = az stream-analytics job show --job-name $streamAnalyticsJobName --only-show-errors --resource-group $resourceGroup | ConvertFrom-Json

    if ($null -eq $result) {
        # Job not found, it is new
        Write-Error "- Job not found, we have a problem"
        break
    }

    # Job already exists
    $jobstate = $result.Jobstate
    $seconds = $counter * 10
    Write-Host "- Waiting - Current Jobstate: $jobstate ($seconds seconds passed)"

    if ($jobstate -eq 'Started') {
        Write-Host "- Job started successfully"
        break
    }

    if ($jobstate -eq 'Running') {
        Write-Host "- Job is running and started successfully"
        break
    }

    if ($jobstate -eq 'Failed') {
        Write-Error "- Job failed to start"
        break
    }

    Start-Sleep -Seconds 10

} while ($counter -lt $maxRetries)

if ($counter -ge $maxRetries) {
    Write-Error "- Job did not start successfully after 10 minutes"
    return $false
}

return $true    
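
Before wiring the scripts into the pipeline, they can be tested locally. Using the file names Stop-StreamAnalyticsJob.ps1 and Start-StreamAnalyticsJob.ps1 that the pipeline below references, a local run could look like this (job and resource group names are placeholders):

az login
az account set -s "<yoursubscriptionid>"

.\Stop-StreamAnalyticsJob.ps1 `
    -streamAnalyticsJobName myasajob `
    -resourceGroup myresourcegroup

# ... update the job definition ...

.\Start-StreamAnalyticsJob.ps1 `
    -streamAnalyticsJobName myasajob `
    -resourceGroup myresourcegroup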

Stopping and Starting the Job in a Pipeline

The update of the job is done in an Azure DevOps pipeline:

parameters:
  - name: deploymentName
    type: string
  
  # ...

jobs:
- deployment: ${{ parameters.deploymentName }}
  displayName: ${{ parameters.deploymentTitle }}
  environment: ${{ parameters.environmentName }}
  workspace:
    clean: all
  strategy: 
    runOnce:
      deploy:
        steps:
        - download: current
          displayName: Download Artifacts

        - task: AzureCLI@2
          displayName: Stop ASA Job
          inputs:
            azureSubscription: ${{ parameters.azConnectionName }}
            scriptType: pscore
            scriptPath: $(Pipeline.Workspace)/Stop-StreamAnalyticsJob.ps1
            scriptArguments: >
              -streamAnalyticsJobName ${{ parameters.streamAnalyticsJobName }}
              -resourceGroup ${{ parameters.resourceGroup}}              

        # ... update 
        
        - task: AzureCLI@2
          displayName: Restart ASA Job
          inputs:
            azureSubscription: ${{ parameters.azConnectionName }}
            scriptType: pscore
            scriptPath: $(Pipeline.Workspace)/Start-StreamAnalyticsJob.ps1
            scriptArguments: >
              -streamAnalyticsJobName ${{ parameters.streamAnalyticsJobName }} 
              -resourceGroup ${{ parameters.resourceGroup}}              

Conclusion

With this approach, a pipeline can automatically wait for an Azure Stream Analytics Job to stop, update the service, and start it again.

Export GitHub Pull Requests

2023-11-06

Export Old Pull Requests from GitHub

For various (perhaps less than rational) reasons, I find myself needing to document my past work using Pull Requests. Since this is more about quantity than quality, I sought to automate the task. To simplify the process, I crafted a concise bash script that achieves precisely that, generating Markdown files.

#!/bin/bash
set -e

# Max number of PRs
LIMIT=500

# Check Output Folder
if [[ -z "${OUTPUT_FOLDER}" ]]; then
    # Set to default
    OUTPUT_FOLDER="ghexport"
# else
    # Folder is set
fi
mkdir -p $OUTPUT_FOLDER

PR_LIST_FILE="$OUTPUT_FOLDER/pr_list.txt"
gh pr list --json number --state closed --jq '.[].number' -L $LIMIT > $PR_LIST_FILE

lines=$(cat $PR_LIST_FILE)
for PR_NUMBER in $lines
do
    # Export PR into md file
    echo "Current PR: $PR_NUMBER "
    FILE_NAME="$OUTPUT_FOLDER/$PR_NUMBER.md"

    echo "Filename: $FILE_NAME"

    gh pr view $PR_NUMBER --json number,title,body,reviews,assignees,author,commits \
        --template   '{{printf "# %v" .number}} {{.title}}

Author: {{.author.name}} - {{.author.login}}

{{.body}}

## Commits
{{range .commits}}
- {{ .messageHeadline }} [ {{range .authors}}{{ .name }}{{end}} ]{{end}}

## Reviews

{{range .reviews}}{{ .body }}{{end}}


' > $FILE_NAME

done

# ## Assignees
# {{range .assignees}}{{.login .name}}{{end}}

The code can be accessed on GitHub: oliverscheer/github-export: Export Pull Requests and contributor information from GitHub projects.

https://github.com/oliverscheer/github-export

To be unequivocal, mandating developers to document their work through a set number of Pull Requests is among the least productive tasks managers can impose on their teams.

Creating a Power BI Streaming Dataset from the command line

2023-11-05

Problem Statement

Setting up a Power BI Streaming DataSet in the portal can be a cumbersome task prone to errors due to misspelling and incorrect casing.

In a recent customer project, I encountered this challenge and developed a PowerShell script to automate and streamline the process.

Automate Power BI DataSet Creation With PowerShell


param(
    [Parameter(Mandatory=$true)]
    [string]$newWorkspaceName,
    [Parameter(Mandatory=$true)]
    [string]$user,
    [Parameter(Mandatory=$true)]
    [SecureString]$password
)

# User credentials for the Azure login
$credential = New-Object -TypeName System.Management.Automation.PSCredential -ArgumentList $user, $password

# Connect to AAD
$azureProfile = Connect-AzAccount -Credential $credential

# Get an AccessToken to the Power BI service
$accessTokenDetails = Get-AzAccessToken -ResourceUrl 'https://analysis.windows.net/powerbi/api' -DefaultProfile $azureProfile

$headers = @{
    'Authorization' = "Bearer $($accessTokenDetails.Token)"
}

# Create a new workspace

$workspaceName = "Workspace $newWorkspaceName"

$url = "https://api.powerbi.com/v1.0/myorg/groups?workspaceV2=true"
$body = '{
    "name": "' + $workspaceName + '"
}'
$response = Invoke-RestMethod -Method 'Post' -Headers $headers -Uri $url -Body $body -ContentType 'application/json'

if ($null -ne $response.error) {
    Write-Host "Error creating workspace: $($response.error.code) - $($response.error.message)"
    exit
}

$workspaceId = $response.id
Write-Host "Workspace created: $($workspaceId)"

# Create Internal dataset
$url = "https://api.powerbi.com/v1.0/myorg/groups/"+ $workspaceId + "/datasets"
$body = '{
    "name": "' + $newWorkspaceName + ' (internal)",
    "defaultMode": "Streaming",
    "tables": [
        {
            "name": "roll",
            "columns": [
                {"name": "timestamp", "dataType": "DateTime" },
                {"name": "field1", "dataType": "String" },
                {"name": "field2", "dataType": "Int64" },
                {"name": "field3", "dataType": "Int64" }
            ]
        }
    ]
}'
$response = Invoke-RestMethod -Method 'Post' -Headers $headers -Uri $url -Body $body -ContentType 'application/json'
$internalDataSetId = $response.id
Write-Host "Internal dataset created: $($internalDataSetId)"

Summary

You can utilize this code to automate and replicate the creation of Power BI Datasets directly from the command line, eliminating the need for manual intervention.

Clone your Azure Resource Group with ARM and Bicep

2023-11-04

One common task for cloud engineers is setting up an environment in Azure for solution development. During the development and prototyping phases of an architecture, various manual tasks are performed to test different aspects. However, when it comes to recreating this environment in another resource group, the manual approach becomes impractical. This is where the use of Infrastructure as Code (IaC) becomes crucial, employing tools such as ARM, Terraform, or Bicep.

Retrieving all the settings from your “click-and-test” runs isn’t a straightforward process, and there’s a risk of forgetting crucial elements. To streamline this, I often rely on a simple script. This script allows me to extract the entire ARM/Bicep environment from one resource group and apply it to another for testing purposes. Alternatively, I can extract specific settings directly from the file to incorporate them into new Bicep files.

# Resourcegroup 
$resourceGroup = 'myresourcegroup'

# Path to ARM Template
$armTemplatePath = "./armexport.json"

# Path to bicep file
$bicepOutputPath = "./main.bicep"

az group export --name $resourceGroup --output json > $armTemplatePath

# Decompile the ARM template into Bicep
az bicep decompile --file $armTemplatePath --force > $bicepOutputPath

az deployment group create --resource-group $resourceGroup --template-file $bicepOutputPath --what-if

If you haven't installed the Azure CLI (az cli) before, you can do this quite simply on Windows with the following winget command.

winget install --exact --id Microsoft.AzureCLI

This complex code sample is also on GitHub: https://github.com/oliverscheer/copy-resource-group

Happy environment cloning.

Calling an Azure Functions Function from the command line

2023-11-03

Problem Statement

During a recent project I developed a solution that contains Azure Functions, which are deployed through an Azure DevOps pipeline right after the infrastructure is created with Bicep. For the final test in an Azure DevOps pipeline or GitHub Actions workflow, I aim to execute one of the newly installed Azure Functions to validate the installed/updated solution. If it runs successfully, the deployment is validated and the pipeline can continue to run.

Because all names are dynamically created through bicep, the path/name of the functions and all keys are also dynamic and randomized, and only available as bicep outputs.

Solution

To obtain the URL and key of the Azure Functions Methods for testing purposes, I need to utilize the Azure CLI. The process involves retrieving the URL and key of the specific function to be called.

The following PowerShell script streamlines this task:

[CmdletBinding()]
param (
    [Parameter(Mandatory=$true)]
    [Alias('g')]
    [string]$resourceGroup,

    [Parameter(Mandatory=$true)]
    [Alias('fan')]
    [string]$functionAppName,

    [Parameter(Mandatory=$true)]
    [Alias('fn')]
    [string]$functionName
)

Write-Host "Test Solution"
Write-Host "- resourceGroup         : $resourceGroup"
Write-Host "- functionAppName       : $functionAppName"
Write-Host "- functionName          : $functionName"

# Get Url
$jsonResponse=az functionapp function show `
    --name $functionAppName `
    --resource-group $resourceGroup `
    --function-name $functionName
$decodedObject = $jsonResponse | ConvertFrom-Json
$url = $decodedObject.invokeUrlTemplate

# Get Key
$jsonResponse=az functionapp function keys list `
    --name $functionAppName `
    --resource-group $resourceGroup `
    --function-name $functionName
$decodedObject = $jsonResponse | ConvertFrom-Json
$key = $decodedObject.default

# Invoke
$invokeUrl=$url+"?code="+$key

$response = Invoke-RestMethod -Uri $invokeUrl -Method Post
Write-Host $response
Write-Host $response.StatusCode

return $response

Invoke this function in PowerShell using:

.\Test-Azure-Function.ps1 `
    -resourceGroup <myresourcegroup> `
    -functionAppName <myfunctionapp> `
    -functionName <myfunction>

This script enables me to check the solution by calling an Azure Function that runs some internal checks. I hope that helps you.

There is no Functions runtime available that matches the version in the project

2023-11-02

During a current project I came across this error because my Azure Functions are planned to run on .net 8. By default, only the .net 7 runtime is available, which leads to the following error.

There is no Functions runtime available that matches the version in the project.

Error Dialog

The fix for the problem is hidden in the Visual Studio options dialog.

Error Dialog

Just click Download & Install, and hit F5.

Back online

2023-11-01

It's been a while since I blogged. But it is never too late to start again.

2023-07-23

2023-07-05

2023-06-20

2023-06-09

2023-06-08

1992-07-03
  1. Land of Confusion
  2. No Son of Mine
  3. Driving the Last Spike
  4. Old Medley
  5. Dance on a Volcano
  6. The Lamb Lies Down on Broadway
  7. The Musical Box
  8. (closing section)
  9. Firth of Fifth
  10. I Know What I Like (In Your Wardrobe)
  11. That’s All
  12. Illegal Alien
  13. Follow You Follow Me
  14. Throwing It All Away
  15. Fading Lights
  16. Jesus He Knows Me
  17. Home by the Sea
  18. Second Home by the Sea
  19. Hold on My Heart
  20. Domino
  21. Drum Duet
  22. I Can’t Dance
  23. Tonight, Tonight, Tonight
  24. Invisible Touch
  25. Turn It On Again

Setlist.fm

1992-05-30
  1. Perfect Crime
  2. Mr. Brownstone
  3. Live and Let Die (Wings cover)
  4. Bad Obsession
  5. Attitude (Misfits cover)
  6. Double Talkin’ Jive
  7. Civil War
  8. Patience
  9. (Wild Horses (Intro). Restarted)
  10. Welcome to the Jungle
  11. So Fine
  12. Rocket Queen
  13. November Rain
  14. You Could Be Mine
  15. Drum Solo
  16. Slash Guitar Solo
  17. Speak Softly Love (Love Theme From The Godfather)
  18. Sweet Child o’ Mine
  19. Knockin’ on Heaven’s Door
  20. Don’t Cry
  21. Paradise City

Setlist

1991-07-14
  1. Real Life
  2. Love Song
  3. See the Lights
  4. Travelling Man
  5. East at Easter
  6. Banging on the Door
  7. Book of Brilliant Things
  8. Don’t You (Forget About Me)
  9. Stand by Love
  10. Oh Jungleland
  11. Someone Somewhere in Summertime
  12. King Is White and in the Crowd
  13. Big Sleep
  14. Sanctify Yourself
  15. Let There Be Love
  16. Alive and Kicking
  17. Waterfront
  18. Ghostrider
  19. Belfast Child

Setlist

1989-02-11
  1. Ready or Not
  2. Just the Beginning
  3. Danger on the Track
  4. Let the Good Times Rock
  5. On the Loose
  6. Time Has Come
  7. Carrie
  8. Lights and Shadows
  9. Stormwind
  10. More Than Meets the Eye
  11. Drum Solo
  12. Coast to Coast
  13. Open Your Heart
  14. Sign of the Times
  15. Tower’s Callin'
  16. Guitar Solo
  17. Heart of Stone
  18. Cherokee
  19. Rock the Night
  20. Superstitious
  21. The Final Countdown
  22. A Hard Day’s Night
  23. Hound Dog

Setlist