PowerToys for Windows 10

I’ve been using Microsoft PowerToys on my work machine for some time and find myself relying on it more and more every day. The PowerToys name is an old one on Windows: the first version of PowerToys shipped with Windows 95 more than two decades ago. It’s a fantastic idea to build the new set of productivity tools on top of a legacy brand. It feels as if the good old days are finally connected with the new era, and the history continues.

I cannot remember exactly when I last used the original PowerToys for Windows. It was probably around the Windows 98 era, when I was still in college, a long time ago. Honestly, I was not a fan of it at that time. Although I’ve forgotten why I didn’t like it, it was never something I had to install on my machine. But PowerToys for Windows 10 has changed my mind, and it has made its way onto my must-install software list.

The two features I use most in PowerToys are FancyZones and File Explorer Preview. As I’m using a 4K monitor, FancyZones helps me better utilize its screen space. It makes the laptop screen feel so redundant that I keep it turned off most of the time. File Explorer Preview is an add-on for Windows File Explorer that lets me preview file content directly in the Explorer window. The best part is that it supports Markdown preview, so now I can read README files without even opening them in a Markdown editor.

Another tool that I am going to use frequently is PowerToys Run, which was just released in version 0.18. I have always wanted a launcher like this to run apps quickly, and I’ve already tried some third-party tools. PowerToys Run looks quite promising.

PowerToys is quite stable; I haven’t hit any problems since I started using it. So if you are on Windows 10 and haven’t tried it, maybe it’s time. 🙂

Return HTTP 405 When the HTTP Method Does Not Match in Azure API Management

In Azure API Management, when the HTTP method of a request doesn’t match the one defined in the corresponding operation, APIM returns HTTP 404 Resource Not Found to the client. For example, the out-of-the-box Echo API defines the Retrieve resource (cached) operation with HTTP GET. If you call it with HTTP POST, you’ll get HTTP 404 Resource Not Found in the response.

The HTTP 404 returned by APIM in this scenario doesn’t strictly follow the HTTP specification. According to the spec, HTTP 405 Method Not Allowed is designated for this situation. Feedback on this issue was filed with the APIM team, and according to their response it will be addressed in the future. Until then, we have to work around the issue ourselves. Here is how you can do it with policies in APIM.

Handle the error

When APIM fails to identify an API or operation, it raises a configuration error which returns HTTP 404. What we need to do is handle this error and change the status code to HTTP 405. This way, you avoid the overhead of creating an extra operation for every HTTP method just to handle this situation. The next question is at which scope the error should be handled. Depending on how your APIM instance is configured, you can handle the error at either the all-operations scope of an API or the all-APIs scope.

The policy code

The following policy is a sample applied at the all-operations scope of the Echo API.

<on-error>
    <base />
    <choose>
        <when condition="@(context.LastError.Source == "configuration" && context.Request.Url.Path == "/echo/resource-cached")">
            <return-response>
                <set-status code="405" reason="Method not allowed" />
                <set-body>@{
                    return new JObject(
                        new JProperty("status", "HTTP 405"),
                        new JProperty("message", "Method not allowed")
                    ).ToString();
                }</set-body>
            </return-response>
        </when>
        <otherwise />
    </choose>
</on-error>

The tricky part is the <when> condition. The first part of the condition checks whether this is a configuration error. If it is, the second part tests whether the request targets the Retrieve resource (cached) operation. The second test avoids turning a genuine HTTP 404 into a 405.

You may wonder why I used context.Request rather than context.Operation to test which operation it is. The reason is that APIM sets context.Operation to null in this case, because it cannot identify the operation (which is exactly why the configuration error happens).

You can use this workaround to return HTTP 405 until APIM fixes the behavior.
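
To verify the workaround, you can send a POST request to the cached-resource operation and check that the response is now HTTP 405. Here is a minimal check using Python’s requests library; the gateway URL and subscription key below are placeholders, not values from my APIM instance.

import requests

# Placeholders: replace with your APIM gateway URL and subscription key.
url = "https://<apim-name>.azure-api.net/echo/resource-cached"
headers = {"Ocp-Apim-Subscription-Key": "<subscription-key>"}

# The operation is defined for GET, so a POST should now hit the on-error
# policy and come back as 405 instead of 404.
resp = requests.post(url, headers=headers)
print(resp.status_code)  # expected: 405
print(resp.text)         # expected: {"status": "HTTP 405", "message": "Method not allowed"}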

Azure Batch – Create a Custom Pool using Shared Image Gallery

If you have a custom image in a Shared Image Gallery and you want to use it to create a pool in Azure Batch, this document, Use the Shared Image Gallery to create a custom pool, provides pretty good guidance. I followed it to test the scenario and hit two minor issues.

  • As mentioned in another doc, AAD authentication is a prerequisite for using a Shared Image Gallery. If you use --shared-key-auth with az batch account login, you will hit an authentication error in the Azure CLI. I raised an issue for the document, and hopefully a note will be added to it.
  • There is no sample code demonstrating how to create a pool with a Shared Image Gallery image in Python.

So I wrote a simple sample in Python. It is based on the latest version (9.0.0) of the Azure Batch package for Python and uses a service principal for the AAD authentication. The custom image I used for the test was built on top of Ubuntu 18.04-LTS, so the node agent SKU is batch.node.ubuntu 18.04. It needs to be changed accordingly if another OS version is used.

# Import the required modules from the
# Azure Batch Client Library for Python
import azure.batch as batch
import azure.batch.models as batchmodels
from azure.common.credentials import ServicePrincipalCredentials

# Specify Batch account credentials
account = "<batch-account-name>"
batch_url = "<batch-account-url>"
ad_client_id = "<client id of the SP>"
ad_tenant = "<tenant id>"
ad_secret = "<secret of the SP>"

# Pool settings
pool_id = "LinuxNodesSamplePoolPython"
vm_size = "STANDARD_D2_V3"
node_count = 1

# Initialize the Batch client
creds = ServicePrincipalCredentials(
    client_id=ad_client_id,
    secret=ad_secret,
    tenant=ad_tenant,
    resource="https://batch.core.windows.net/"
)
client = batch.BatchServiceClient(creds, batch_url)

# Create the unbound pool
new_pool = batchmodels.PoolAddParameter(id=pool_id, vm_size=vm_size)
new_pool.target_dedicated_nodes = node_count

# Configure the start task for the pool
start_task = batchmodels.StartTask(
    command_line="printenv AZ_BATCH_NODE_STARTUP_DIR"
)
# Run the start task with elevated (admin) permissions via an auto user
start_task.user_identity = batchmodels.UserIdentity(
    auto_user=batchmodels.AutoUserSpecification(
        elevation_level=batchmodels.ElevationLevel.admin
    )
)
new_pool.start_task = start_task

# Create an ImageReference which specifies the Marketplace
# virtual machine image to install on the nodes.
ir = batchmodels.ImageReference(
    virtual_machine_image_id="<resource id of the image version in sig>"
)

# Create the VirtualMachineConfiguration, specifying
# the VM image reference and the Batch node agent to
# be installed on the node.
vmc = batchmodels.VirtualMachineConfiguration(
    image_reference=ir,
    node_agent_sku_id="batch.node.ubuntu 18.04"
)

# Assign the virtual machine configuration to the pool
new_pool.virtual_machine_configuration = vmc

# Create pool in the Batch service
client.pool.add(new_pool)
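
After the pool is created, it can take a few minutes for the nodes to be allocated. The following small addition (my own sketch, not part of the original sample) polls the pool until it reaches the steady allocation state.

import time

# Wait until Batch has finished allocating nodes for the pool.
while True:
    pool = client.pool.get(pool_id)
    print("Allocation state: {}".format(pool.allocation_state))
    if pool.allocation_state == batchmodels.AllocationState.steady:
        break
    time.sleep(15)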

Update: I polished the above sample code and pushed it into the document I mentioned at the beginning of this post via a PR. The Python sample code in that document is based on the one in this post.

My dotfiles

Recently I have been spending more and more time in WSL 2. I run Ubuntu 18.04 on WSL 2 and mainly use tools such as zsh, tmux, and vim. To make the environment more fun, I customized them little by little, and now it looks like this:

[Screenshot: my terminal]

As there are several customized dotfiles now, I put them together and wrote a simple script so that a single run gets the environment ready. Now I have my own dotfiles repo, and it is here. I will update it whenever I make further changes to these dotfiles.
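
The idea behind the script is simply to back up any existing dotfiles and symlink the ones from the repo into the home directory. The actual script in my repo is a shell script, but a minimal sketch of the same idea looks like this (the file list is illustrative):

from pathlib import Path

# Illustrative file list; the real repo contains more dotfiles.
dotfiles = [".zshrc", ".tmux.conf", ".vimrc"]
repo = Path(__file__).resolve().parent
home = Path.home()

for name in dotfiles:
    source = repo / name
    target = home / name
    # Back up an existing regular file before replacing it with a symlink.
    if target.exists() and not target.is_symlink():
        target.rename(home / (name + ".bak"))
    if target.is_symlink():
        target.unlink()
    target.symlink_to(source)
    print("Linked {} -> {}".format(target, source))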

I also shared my Windows Terminal profile here. Now I can easily get the Windows Terminal and WSL 2 environment configured on any Windows 10 machine.

CKAD Certified

Earlier this week, I took my second attempt at the CKAD exam and passed it. I scored 89% this time, despite running out of time and not being able to finish the last question completely. Now I am CKAD certified!

I had my first attempt at the exam in August last year without much preparation. As CNCF gives you a free retake if the first attempt is not successful, I treated the first exam as an opportunity to test my Kubernetes knowledge, get familiar with the test environment, and get a sense of how difficult it is. I scored 64% on that attempt, 2% short of the passing score.

Late last year I was busy with work, and only recently did I get some time to properly prepare for and retake the exam. I spent about two weeks polishing my skills with kubectl and other tools, and relied heavily on the resources in two GitHub repos.

Here are a few of my tips about the exam, which I hope will help those who are preparing for the certification.

  • The only tools that you can use in the exam environment are kubectl, vim and tmux. So be very familiar with them.
  • In the exam, you are allowed to open another browser tab to connect to https://kubernetes.io/docs, but you may not have enough time to read the docs in detail. I relied on kubectl explain more than checking the docs.
  • As the exam environment runs in Chrome, a big screen definitely helps.
  • Most importantly, practice a lot. My shell history showed that I typed k/kubectl 1,372 times and vim 429 times in the two weeks before the exam.

Configuring VNET integration of Azure Redis and Web App

To configure Azure Redis with VNET support, we can follow the steps described in this document. To integrate an Azure web app with a VNET, there is a detailed document as well. In this post, I list some of the common issues that one might hit during the configuration.

  1. VNET integration for Azure Redis requires an empty subnet in a VNET created with Resource Manager. This subnet needs to exist before you create the Azure Redis instance; otherwise, the configuration will fail.
  2. The subnet for Azure Redis can be protected with a network security group (NSG). Usually the default NSG rules are good enough to protect the connections. If you need further hardening, you will have to create rules based on the ports listed in the Azure Redis document.
  3. To troubleshoot the connection between the Azure web app and Azure Redis, you can use the web app’s Kudu console. There are two tools built into the web app for network troubleshooting: nameresolver.exe can be used to test DNS resolution, and tcpping.exe can be used to test whether the host and port are reachable. But you cannot test Redis functionality directly from Kudu.
  4. Once the VNET integration is configured, the Redis console in the Azure Portal will not work anymore. To test Redis functionality with tools such as redis-cli, you will have to build a VM in the VNET and connect to Azure Redis from it (see the sketch after this list).
  5. If your web app cannot access Azure Redis even though the network configuration is correct, try syncing the network for the App Service Plan. See this issue for details, and make sure you don’t hit any errors when syncing the network.
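
For the test mentioned in item 4, redis-cli from the VM works well. If you prefer a quick script, here is a sketch using the Python redis package; the cache host name and access key are placeholders, and it assumes the SSL port 6380.

import redis

# Placeholders: replace with your cache host name and access key.
r = redis.StrictRedis(
    host="<cache-name>.redis.cache.windows.net",
    port=6380,            # SSL port; 6379 is the non-SSL port
    password="<access-key>",
    ssl=True,
)

print(r.ping())           # True if the cache is reachable
r.set("test-key", "hello")
print(r.get("test-key"))  # b'hello'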

Move WordPress sites to a new domain

Today I finally decided to move this site from its old domain to this new one, chunliu.me, which I got last year. For a self-hosted WordPress site, changing the domain name is not a straightforward task, especially when you don’t want to break the existing external links that point to the site. Here is how I did it, with a lot of searching on the internet, of course.

Copy the database and WordPress folder

To minimize the impact on the existing site, I duplicated the site’s MySQL database and its WordPress folder. Copying the WordPress folder is easy: simply use the cp command to copy the whole folder. Copying the database is a bit trickier because I had forgotten the password of the MySQL root user. I used the following steps to reset the root password.

  1. Run mysql with --skip-grant-tables.
    $ sudo service mysql stop
    $ sudo mkdir -p /var/run/mysqld
    $ sudo chown mysql:mysql /var/run/mysqld
    $ sudo mysqld --skip-grant-tables --skip-networking &
    $ jobs
    
  2. Update the password of root user with mysql client.
    $ mysql -u root
    mysql> FLUSH PRIVILEGES;
    mysql> ALTER USER 'root'@'localhost' IDENTIFIED BY 'MyNewPass';
    mysql> QUIT;
    
  3. Restart mysql service.
    $ sudo pkill mysqld
    $ jobs
    $ sudo service mysql start
    

With the above steps, the database can be copied with the following command.

mysqldump -u root --password=<pwd> <original db> | mysql -u root -p <new db>

Update the URL of the new site

With the WordPress folder and database copied, the URL needs to be updated in the following places. A sketch of the database updates follows the list.

  • wp-config.php: update DB_NAME and DOMAIN_CURRENT_SITE;
  • wp_options table: update the siteurl and home options;
  • wp_site and wp_blogs tables: update domain.
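
The wp-config.php changes are edited by hand. For the database changes, the following sketch shows the kind of UPDATE statements involved, wrapped in PyMySQL; the credentials, database name, table prefix (wp_), and domains are placeholders rather than my actual values.

import pymysql

# Placeholders: replace with the new database's credentials and the real domains.
conn = pymysql.connect(host="localhost", user="root",
                       password="<pwd>", database="<new db>")
old, new = "blog.olddomain.com", "chunliu.me"

try:
    with conn.cursor() as cur:
        # wp_options: the site address and home URL.
        cur.execute("UPDATE wp_options SET option_value = %s "
                    "WHERE option_name IN ('siteurl', 'home')",
                    ("https://" + new,))
        # Multisite tables: the domain of the site and of each blog.
        cur.execute("UPDATE wp_site SET domain = %s WHERE domain = %s", (new, old))
        cur.execute("UPDATE wp_blogs SET domain = %s WHERE domain = %s", (new, old))
    conn.commit()
finally:
    conn.close()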

Create a URL rewrite rule in the old site

The last step is to create a URL rewrite rule on the old site so that all requests to it are redirected to the new site. This keeps the existing external links from breaking.

Edit the .htaccess file of the old site with the following code.

RewriteEngine On
RewriteCond %{HTTP_HOST} ^olddomain\.com$ [OR]
RewriteCond %{HTTP_HOST} ^blog\.olddomain\.com$
RewriteRule (.*)$ https://newdomain/$1 [R=301,L]
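
A quick way to confirm the rule works is to request an old URL without following redirects and check for a 301 pointing at the new domain. The domains below are the same placeholders used in the rule above.

import requests

# Request an old URL without following redirects.
resp = requests.get("https://blog.olddomain.com/some-post/", allow_redirects=False)

print(resp.status_code)              # expected: 301
print(resp.headers.get("Location"))  # expected: https://newdomain/some-post/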

With that, the site is moved to the new domain successfully.

Build Hexo site with Azure Pipelines

Azure Pipelines has very good integration with GitHub. It provides 10 free parallel jobs and unlimited build minutes for open-source projects, which makes it a very good choice for adding CI/CD to an open-source project.

As I have configured the site to use Hexo’s Git deployment, an Azure build pipeline is enough to carry out the deployment. The build pipeline needs the following three tasks.

  • Install hexo-cli.
  • Install all npm packages used by the site with npm install.
  • Configure git credential and run hexo deploy.

The first two steps are quite straightforward. The tricky part is how to configure Git credentials from a script, because git credential store only accepts input from stdin. After some searching and testing, I ended up with a bash pipe to feed in the credentials. The following is a sample for the credential store.

git config --global user.email "<git email address>"
git config --global user.name "<git user name>"   
git config --global credential.helper 'store --file ~/.my-credentials'
printf "protocol=https\nhost=github.com\nusername=<username>\npassword=%s\n\n" "$GHP" | git credential-store --file ~/.my-credentials store
./node_modules/.bin/hexo clean && ./node_modules/.bin/hexo deploy

As the credential store saves the credentials to a file on disk, it is not a good practice on a shared build agent. I ended up replacing the credential store with the credential cache, and exiting the cache when the deployment is done.

The following is the YAML file of the build definition. $GHP is an environment variable storing my GitHub personal access token; the actual value is stored encrypted in Azure DevOps.

resources:
- repo: self
queue:
  name: Hosted Ubuntu 1604
  demands: npm

steps:
- script: |
   echo "install hexo-cli"
   npm install hexo-cli
  displayName: 'Install hexo-cli'

- task: Npm@1
  displayName: 'npm install'
  inputs:
    workingDir: /
    verbose: false

- script: |
   git config --global user.email "<git email address>"   
   git config --global user.name "<git user name>"   
   git config --global credential.helper cache   
   printf "protocol=https\nhost=github.com\nusername=<username>\npassword=%s\n\n" "$GHP" | git credential-cache store   
   ./node_modules/.bin/hexo clean && ./node_modules/.bin/hexo deploy   
   git credential-cache exit
  displayName: 'hexo deploy'
  env:
    GHP: $(password)

Now when I update the site, I just need to commit the changes and push them to GitHub. Azure Pipelines takes care of the build and deployment automatically.

Spring Boot, Azure Database for MySQL, and Azure App Service – Part 1

I recently played with Java and Azure App Service. What I was trying to find out is what the development experience looks like for Java developers who want to build their applications with Azure App Service and Azure Database for MySQL.

There are some documents on the Microsoft docs site, such as this one. They might be good enough for an experienced Java developer, but for someone like me with limited Java experience, they are not easy to follow, and the sample is too simple to reflect real development. So I decided to try it myself and document my experience here for others to reference. This will be a series of posts, and this is the first one.

Prepare the dev environment

Instead of installing IntelliJ or Eclipse, I chose to use VS Code as my Java IDE. I already had VS Code installed on my computer, and according to this tutorial, I just need to install a JDK and Maven. I am a bit lost with Java terms like Java SE, JDK, and JRE and their versions, but I don’t want to be bothered by them. I chose to install OpenJDK because the Oracle JDK requires a license. Here are the steps to install OpenJDK.

  1. Download OpenJDK from here. The Windows version of OpenJDK is a zip file. Unzip it to C:\Program Files\Java so the root folder of the JDK is something like C:\Program Files\Java\jdk-11.0.1.
  2. Add an environment variable JAVA_HOME and set its value to the root of the JDK, for example C:\Program Files\Java\jdk-11.0.1.
  3. Add C:\Program Files\Java\jdk-11.0.1\bin to the system path.
  4. With the above steps, OpenJDK is fully installed. To test that it works, open a command window and run java -version. It should print the OpenJDK version and runtime information.

When OpenJDK is installed, you can follow the VS Code tutorial to download and install Maven and the Java Extension Pack for VS Code.

Create a MySQL database

Instead of installing MySQL on my local computer, I chose to create an Azure Database for MySQL instance as the dev database environment. Provisioning an Azure Database for MySQL instance is easy; Azure has a quickstart for it. I then ran the following SQL queries in Azure Cloud Shell to configure the database.

CREATE DATABASE tododb; -- Create a database
CREATE USER 'springuser'@'%' IDENTIFIED BY 'Spring1234'; -- Create a database user
GRANT ALL PRIVILEGES ON tododb.* TO 'springuser'@'%'; -- Grant user permissions to the database
FLUSH PRIVILEGES;
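
Before moving on, it is worth confirming that the new user and database are reachable. The sketch below uses Python and PyMySQL purely as a quick sanity check (the Spring Boot app comes in the next post); the server name and the CA certificate path are placeholders, and it assumes the single-server user name format user@servername with SSL enforced.

import pymysql

# Placeholders: replace <mysql-server> and the CA certificate path.
# Azure Database for MySQL (single server) expects the user name in the
# form user@servername and enforces SSL by default.
conn = pymysql.connect(
    host="<mysql-server>.mysql.database.azure.com",
    user="springuser@<mysql-server>",
    password="Spring1234",
    database="tododb",
    ssl={"ca": "<path-to-BaltimoreCyberTrustRoot.crt.pem>"},
)

with conn.cursor() as cur:
    cur.execute("SELECT DATABASE(), CURRENT_USER()")
    print(cur.fetchone())

conn.close()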

With the above preparation, we have a Java development environment and a MySQL database ready for development. In the next post, I will start creating a Spring Boot REST API app with VS Code. Stay tuned.

Upgrade Ubuntu Server From 16.04 to 18.04.1

I have received several notifications from my Ubuntu server running in Azure asking me to upgrade to Ubuntu 18.04.1. When Ubuntu 18.04 was first released, I didn’t upgrade the server; I was afraid there could be compatibility issues and I didn’t want to break it. With the release of 18.04.1, the version seems stable enough for an upgrade, so I decided to go ahead.

Here is what I did.

First of all, I updated the server with apt update && apt upgrade, and then backed it up with Azure VM Backup so that I could restore the VM if the upgrade failed.

Then I ran do-release-upgrade to upgrade the server. The OS kernel seemed to upgrade successfully, but the software package upgrade failed with the following output.

authenticate 'bionic.tar.gz' against 'bionic.tar.gz.gpg' 
extracting 'bionic.tar.gz'

 libpython3.6-stdlib:amd64
 python3.6
 python3-apt
 python3
 python3-cffi-backend
 apt-xapian-index
 python3-xapian
 python3-gi
 mailutils
 python3-markupsafe
 python3-systemd
 python3-gdbm:amd64
 python3-lib2to3
 python-apt
 dh-python
 python3-distutils
 libpython3-stdlib:amd64
 python3-yaml
 python3-pycurl
 python3-dbus

Upgrade complete

The upgrade has completed but there were errors during the upgrade
process.

To continue please press [ENTER]

I did some searching on the internet, and it seems to be a common issue. To solve it, I ran the command sudo mv /usr/share/dbus-1/system-services/org.freedesktop.systemd1.service /usr/share/dbus-1/system-services/org.freedesktop.systemd1.service.bak, as mentioned here.

After the issue was fixed, I ran sudo apt-get dist-upgrade to upgrade all packages and chose to keep the local copies of the configuration files. After that, the upgrade completed successfully, with all software and services running normally.