All posts tagged with cloud

How to disable the Azure AD password expiration policy through PowerShell

June 22, 2020 - Søren Alsbjerg Hørup

We recently encountered a problem with our automated tests of a cloud solution. The solution utilizes Azure AD as identity provider and currently holds several test user accounts used by these tests.

The tests were green for several weeks, but suddenly turned red because the passwords had expired! No problem, we thought: we would simply disable password expiration for the test users in the AD. But after traversing the Azure Portal, we could not find any way to disable or change the password expiration policy (WTF!)

After some Googling, I came to the conclusion that it is not possible to change the policy through the portal, but that it is possible through PowerShelling (is that a term I can use? :-P).

Firstly, the AzureAD module must be installed in PowerShell:

Install-Module AzureAD 

This populates PowerShell with the Azure AD-specific cmdlets.

Next, a connection to the specific tenant must be established:

Connect-AzureAD -TenantId <GUID>

The GUID can be found in the Azure Portal under Tenant ID:

[Screenshot: the Tenant ID shown in the Azure Portal]

Lastly, the following command gets the test user from the AD and sets the password policy to “DisablePasswordExpiration”:

Get-AzureADUser -ObjectId "testuser@XYZ.onmicrosoft.com" | Set-AzureADUser -PasswordPolicies DisablePasswordExpiration

That’s it! The password should no longer expire for the given user!

Hangfire - .NET background processing made easy

April 29, 2020 - Søren Alsbjerg Hørup

The cloud project I am currently working on had the requirement that we needed to ingest, process, and write several gigs of data to a CosmosDB every 15 minutes.

For the processing part, we needed something that could scale, since the amount of data was proportional to the number of customers we have hooked up to the system.

Since the project team consisted mainly of C# .NET Core developers, the initial processing was done in C# using async operations. This worked well but was not really scalable. Someone on the team suggested using Hangfire.io for the processing, which turned out to be a great fit for our use case. (I wish it had been my idea, but it was not…)

Hangfire is an open source .NET Core library that manages distributed background jobs. It does this by starting a server inside the application, to which jobs can then be submitted. Job types include fire-and-forget, delayed jobs, recurring jobs, and continuations.
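
To make these job types concrete, here is a minimal sketch using Hangfire's static API; the job bodies and the “ingest” schedule are made up for illustration and are not from our actual project:

using System;
using Hangfire;

public static class JobExamples
{
    public static void SubmitExamples()
    {
        // Fire and forget: runs once on any available Hangfire server.
        var jobId = BackgroundJob.Enqueue(() => Console.WriteLine("Fire and forget"));

        // Delayed: runs once, but only after the given delay.
        BackgroundJob.Schedule(() => Console.WriteLine("Delayed"), TimeSpan.FromMinutes(5));

        // Recurring: runs on a cron schedule, e.g. every 15 minutes.
        RecurringJob.AddOrUpdate("ingest", () => Console.WriteLine("Recurring"), "*/15 * * * *");

        // Continuation: runs after the parent job completes
        // (named ContinueWith in older Hangfire versions).
        BackgroundJob.ContinueJobWith(jobId, () => Console.WriteLine("Continuation"));
    }
}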

Hangfire uses a database to ensure that information and metadata about jobs are persisted. In our case, we simply use an Azure SQL server. Multiple instances of the application hosting the Hangfire server help with the processing of the jobs.
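
As a rough sketch, wiring this up in an ASP.NET Core application looks something like the following (the connection string is a placeholder; the relevant packages are Hangfire.AspNetCore and Hangfire.SqlServer):

using Hangfire;
using Microsoft.AspNetCore.Builder;
using Microsoft.Extensions.DependencyInjection;

public class Startup
{
    public void ConfigureServices(IServiceCollection services)
    {
        // Persist job state in SQL Server (an Azure SQL database in our case).
        services.AddHangfire(cfg => cfg.UseSqlServerStorage("<hangfire-connection-string>"));

        // Host a Hangfire server in this application instance; every replica
        // running this code becomes a worker that competes for queued jobs.
        services.AddHangfireServer();
    }

    public void Configure(IApplicationBuilder app)
    {
        // The built-in dashboard mentioned further down, served at /hangfire.
        app.UseHangfireDashboard();
    }
}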

This architecture makes it possible to, for example, submit a job for each connected customer, with the jobs processed by one or more nodes. If resources become a problem, we scale the application horizontally to include more instances, which can even be done automatically based on CPU load or another metric. In our case, we use Kubernetes for that.
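
The per-customer fan-out described above could be as simple as the following sketch; ICustomerProcessor and the customer id list are hypothetical names used for illustration:

using System.Collections.Generic;
using Hangfire;

public interface ICustomerProcessor
{
    void Process(string customerId);
}

public static class IngestionScheduler
{
    // Enqueue one background job per connected customer; any application
    // instance hosting a Hangfire server can pick these jobs up. The
    // ICustomerProcessor implementation is resolved when the job runs.
    public static void EnqueuePerCustomer(IEnumerable<string> customerIds)
    {
        foreach (var customerId in customerIds)
        {
            BackgroundJob.Enqueue<ICustomerProcessor>(p => p.Process(customerId));
        }
    }
}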

What I really like about Hangfire is the fact that one can simply start with one instance of the application hosting the Hangfire server, and scale up later if needed.

Oh! and Hangfire comes with its own integrated dashboard that allows one to see submitted jobs. Neat!

Although we are not yet in production, my gut feeling is good on this one. Highly recommended!

Copy microk8s.kubectl config to Windows kubectl

September 10, 2019 - Søren Alsbjerg Hørup

For testing purposes, I recently installed Kubernetes on an Ubuntu Server VM, running on my XEN Server, through the use of the Microk8s package:

https://microk8s.io/

Installation was a breeze, and I quickly got Kubernetes up and running and was able to interact with it using microk8s.kubectl. Microk8s.kubectl is a version of kubectl with its own configuration pointing to the locally installed Kubernetes. This avoids conflicting with the standard kubectl, which would otherwise have had its configuration overwritten by the microk8s package.

On my Windows developer PC, I wanted the ability to access the cluster using kubectl without having to run microk8s.kubectl through an SSH session.

To do this, one first has to find the kubectl config YAML file. It resides in the %userprofile%\.kube directory. If the file is not there, create it.

The file can be made to match microk8s.kubectl by copy-pasting the microk8s.kubectl configuration and replacing localhost with the external IP of the cluster. The configuration can be dumped over SSH using the config view command:

$ microk8s.kubectl config view

apiVersion: v1
clusters:
- cluster:
    server: https://<externalip>:<externalport>
  name: microk8s-cluster
contexts:
- context:
    cluster: microk8s-cluster
    user: admin
  name: microk8s
current-context: microk8s
kind: Config
preferences: {}
users:
- name: admin
  user:
    password: <password>
    username: admin

Now, copy-paste this into the config file on Windows and replace localhost with the external IP of the cluster.

In addition, if the cluster's SSL certificate is untrusted (which it typically is), make sure to add insecure-skip-tls-verify: true under the cluster part.

The final config file should look like this:

apiVersion: v1
clusters:
- cluster:
    insecure-skip-tls-verify: true
    server: https://<externalip>:<externalport>
  name: microk8s-cluster
contexts:
- context:
    cluster: microk8s-cluster
    user: admin
  name: microk8s
current-context: microk8s
kind: Config
preferences: {}
users:
- name: admin
  user:
    password: <password>
    username: admin

If everything is well, executing kubectl get services on Windows should return at least the default kubernetes service.

Mosquitto Performance on low-end VM

July 24, 2019 - Søren Alsbjerg Hørup

I am currently designing a new topology for inter-controller communication using Mosquitto as the broker. For fun, I wanted to see how far I could push Mosquitto, so I started a low-end Ubuntu VM in Azure and wrote a simple .NET test application using MQTTnet to put a bit of load onto it.

I chose the B1s VM size (1 vCPU and 1 GiB of memory), installed Ubuntu, installed Mosquitto, and opened the default MQTT port 1883. This would serve as my broker.

For the benchmarking application I wrote a .NET Core Console app that would:

  • Instantiate N Tasks.
  • Each Task instantiates its own MQTTnet client and connects to the broker.
  • Each Task subscribes to a “ping/{clientid}” topic and each second sends a ping message containing Environment.TickCount plus a 65K payload to the ping topic with its own client id.
  • The ping is returned to the Task thanks to the subscription, so the ping time can be measured (a sketch of this client follows below).

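A rough, untested sketch of that client is shown below, written against the MQTTnet 3.x API that was current at the time; exact namespaces and helper overloads differ between MQTTnet releases, and the broker address is a placeholder:

using System;
using System.Linq;
using System.Threading;
using System.Threading.Tasks;
using MQTTnet;
using MQTTnet.Client;
using MQTTnet.Client.Options;

class Program
{
    const int ClientCount = 200;       // N
    const int PayloadBytes = 65_000;   // ~65K filler payload

    static async Task Main()
    {
        await Task.WhenAll(Enumerable.Range(0, ClientCount).Select(RunClientAsync));
    }

    static async Task RunClientAsync(int clientId)
    {
        var client = new MqttFactory().CreateMqttClient();
        var options = new MqttClientOptionsBuilder()
            .WithTcpServer("<broker-ip>", 1883)   // placeholder for the B1s VM address
            .Build();

        // When our own ping comes back, compute the round-trip time from the
        // tick count stored in the first four bytes of the payload.
        client.UseApplicationMessageReceivedHandler(e =>
        {
            var sentTick = BitConverter.ToInt32(e.ApplicationMessage.Payload, 0);
            Console.WriteLine($"client {clientId}: {Environment.TickCount - sentTick} ms");
        });

        await client.ConnectAsync(options, CancellationToken.None);
        await client.SubscribeAsync($"ping/{clientId}");

        var filler = new byte[PayloadBytes];
        while (true)
        {
            var payload = BitConverter.GetBytes(Environment.TickCount).Concat(filler).ToArray();
            var message = new MqttApplicationMessageBuilder()
                .WithTopic($"ping/{clientId}")
                .WithPayload(payload)
                .Build();

            await client.PublishAsync(message, CancellationToken.None);
            await Task.Delay(TimeSpan.FromSeconds(1));
        }
    }
}
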
With N = 200, 200 msg/s would be transmitted to the broker and back again; with the 65K payload, that amounts to a theoretical bandwidth requirement of roughly 13 MB/s each way, plus overhead.

For the benchmark I decided to run the client on my own dev machine, since I can currently upload 50MB/s using the fiber connection that I have available - plenty of bandwidth for my test purposes.

Starting the client app with N = 200, I saw the ping times fluctuate between 60 ms and 300 ms, indicating a bottleneck somewhere (this needs further analysis before any conclusions can be drawn).

Using Azure Metrics, I could quickly measure the CPU and bandwidth usage of the VM. The CPU usage peaked at 28.74%, and the upload from the VM to my client peaked at 15.61 MB/s, or about 125 Mbit/s.

I did not expect this kind of bandwidth/performance from this class of VM in Azure. Of course, I only tested the burst performance and not the long-running performance of the VM, which might differ since this VM class is a ‘burstable VM’.

Conclusion after this test: if you want an MQTT broker, start with a small burstable VM, see how it performs over time, and upgrade if needed.