Early this week, I took my second attempt at the CKAD exam and passed it. I scored 89% this time, even though I ran out of time and couldn't completely finish the last question. Now I am CKAD certified!
I took my first attempt at the exam in August last year without much preparation. As CNCF gives you a free retake if the first attempt is unsuccessful, I wanted to use the first exam as an opportunity to test my Kubernetes knowledge, get familiar with the test environment, and get a sense of how difficult it is. I scored 64% on that first attempt, which is 2% short of the passing score.
Late last year I was busy with work, and only recently did I get some time to properly prepare for and retake the exam. I spent about two weeks polishing my skills with kubectl and other tools, and used the resources in the following two GitHub repos heavily.
Here are some of my tips for the exam, which I hope will help those who are preparing for the certification.
The only tools you can use in the exam environment are kubectl, vim, and tmux, so be very familiar with them.
In the exam, you are allowed to open another browser tab to connect to https://kubernetes.io/docs, but you may not have enough time to read the docs in detail. I relied on kubectl explain more than checking the docs.
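For example, kubectl explain answers most field-level questions right in the terminal, and imperative commands generate YAML skeletons quickly (a small sketch; the resource and file names are just illustrative):
$ kubectl explain pod.spec.containers.livenessProbe
$ kubectl run web --image=nginx --restart=Never --dry-run=client -o yaml > pod.yaml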
As the exam environment runs in Chrome, a big screen definitely helps.
To configure Azure Redis with VNET support, we can follow the steps described in this document. And to integrate an Azure web app with a VNET, there is a detailed document for that as well. In this post, I list some of the common issues that one might hit during the configuration.
The VNET integration for Azure Redis requires an empty subnet in a VNET created with Resource Manager. This subnet needs to be created before you create the Azure Redis instance; otherwise, the configuration will fail.
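With the Azure CLI, that ordering looks roughly like this (a sketch; the resource names and address prefix are illustrative, and VNET injection requires the Premium tier):
$ az network vnet subnet create --resource-group myrg --vnet-name myvnet \
    --name redis-subnet --address-prefixes 10.0.2.0/24
$ subnetid=$(az network vnet subnet show --resource-group myrg --vnet-name myvnet \
    --name redis-subnet --query id -o tsv)
$ az redis create --resource-group myrg --name mycache --location westus2 \
    --sku Premium --vm-size p1 --subnet-id "$subnetid"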
The subnet for Azure Redis can be protected with a network security group (NSG). Usually the default NSG rules are good enough for protecting the connections. If you need further hardening, you will have to create rules based on the ports listed in the Azure Redis document.
To troubleshoot the connection between the Azure web app and Azure Redis, you can use the Kudu console of the web app. There are two tools built into the web app for network troubleshooting: nameresolver.exe can be used to test DNS resolution, and tcpping.exe can be used to test whether a host and port are reachable. But you cannot test Redis functionality directly from Kudu.
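In the Kudu debug console, that looks like the following (the cache host name is illustrative; 6380 is the SSL port of Azure Redis):
nameresolver.exe mycache.redis.cache.windows.net
tcpping.exe mycache.redis.cache.windows.net:6380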
Once the VNET integration is configured, the Redis console in the Azure Portal will no longer work. To test the Redis functions with tools such as redis-cli, you will have to build a VM in the VNET and connect to Azure Redis from there.
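From that VM, a quick smoke test could look like this (a sketch; the cache name and key are illustrative, and it assumes the non-SSL port 6379 is enabled on the cache, since plain redis-cli does not speak TLS):
$ redis-cli -h mycache.redis.cache.windows.net -p 6379 -a "<access-key>" ping
PONG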
If your web app somehow cannot access the Azure Redis even though the network configurations are correct, you can try syncing the network for the App Service Plan. See this issue for details. Make sure you don't hit any errors when syncing the network.
Today I finally decided to move this site from its old domain to this new one, chunliu.me, which I got last year. For a self-hosted WordPress site, changing the domain name is not a straightforward task, especially when you don't want to break the existing external links that point to the site. Here is how I did it, with a lot of searching on the internet, of course.
Copy the database and WordPress folder
To minimize the impact on the existing site, I duplicated the MySQL database of this site and its WordPress folder. Copying the WordPress folder is easy: simply use the cp command to copy the whole folder. Copying the database is a bit trickier because I had forgotten the password of the MySQL root user. I used the following steps to reset the root password.
Run mysqld with --skip-grant-tables.
$ sudo service mysql stop
$ sudo mkdir -p /var/run/mysqld
$ sudo chown mysql:mysql /var/run/mysqld
$ sudo mysqld --skip-grant-tables --skip-networking &
Update the password of the root user with the mysql client.
$ mysql -u root
mysql> FLUSH PRIVILEGES;
mysql> ALTER USER 'root'@'localhost' IDENTIFIED BY 'MyNewPass';
Restart the mysql service.
$ sudo pkill mysqld
$ sudo service mysql start
With the above steps, the root password is reset and the database can be copied.
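One way is a mysqldump pipe (a sketch; the database names here are illustrative):
$ mysqldump -u root -p wordpress > wordpress.sql
$ mysql -u root -p -e "CREATE DATABASE wordpress_new"
$ mysql -u root -p wordpress_new < wordpress.sql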
Azure Pipelines has very good integration with GitHub. It provides 10 free parallel jobs and unlimited build minutes for open source projects, which makes it a very good choice for adding CI/CD capability to an open source project.
As I have configured the site to use hexo's git deployment, an Azure build pipeline is enough to carry out the deployment. In the build pipeline, I need the following three tasks.
Install all npm packages used by the site with npm install.
Generate the static files of the site with hexo generate.
Configure the git credentials and run hexo deploy.
The first two steps are quite straightforward. The tricky part is how to use a script to configure the git credentials. git credential store only accepts input from stdin. I did some searching and testing, and ended up with a bash pipe to feed in the credentials. The following is a sample for the credential store.
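A sketch of the idea (the username is illustrative; $GHP holds the GitHub personal access token):
$ git config --global credential.helper store
$ printf "protocol=https\nhost=github.com\nusername=<github-user>\npassword=%s\n\n" "$GHP" | git credential approve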
As the credential store saves the credential to a file on disk, it is not a good practice on a shared build agent. I ended up replacing the credential store with credential cache, and exiting the cache when the deployment is done.
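The cache variant only differs in the helper used and in the cleanup at the end (again a sketch):
$ git config --global credential.helper cache
$ printf "protocol=https\nhost=github.com\nusername=<github-user>\npassword=%s\n\n" "$GHP" | git credential approve
$ npx hexo deploy
$ git credential-cache exit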
The following is the YAML file of the build definition. $GHP is an environment variable storing the GitHub personal access token; the actual value is saved encrypted in Azure DevOps.
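A build definition along these lines would do it (a sketch; the Node version, the git user settings, and the github.pat variable name are illustrative):
pool:
  vmImage: 'ubuntu-latest'
steps:
- task: NodeTool@0
  inputs:
    versionSpec: '10.x'
- script: npm install
  displayName: 'Install npm packages'
- script: npx hexo generate
  displayName: 'Generate the static files'
- script: |
    git config --global user.name "<github-user>"
    git config --global user.email "<email>"
    git config --global credential.helper cache
    printf "protocol=https\nhost=github.com\nusername=<github-user>\npassword=%s\n\n" "$GHP" | git credential approve
    npx hexo deploy
    git credential-cache exit
  displayName: 'Deploy the site'
  env:
    GHP: $(github.pat)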
I recently played with Java and Azure App Service. What I was trying to find out is what the development experience would look like for Java developers who want to build their applications with Azure App Service and Azure Database for MySQL.
There are some documents on the Microsoft doc site, such as this one. They might be good enough for an experienced Java developer, but for someone like me who has limited Java experience, they are not easy to follow, and the sample is also too simple to make sense for real development. So I decided to try it myself and document my experience here for others to reference. This will be a series of posts, and this is the first one.
Prepare the dev environment
Instead of installing IntelliJ or Eclipse, I chose to use VSCode as my Java IDE. I already had VSCode installed on my computer, so according to this tutorial, I just needed to install a JDK and Maven. I am a bit lost with the Java terms like Java SE, JDK, JRE and their versions, but I don't want to be bothered by that. I chose to install OpenJDK because Oracle JDK requires a license. So here are the steps to install OpenJDK.
Download OpenJDK from here. The Windows version of OpenJDK is a zip file. Unzip it to C:\Program Files\Java, so the root folder of the JDK would be something like C:\Program Files\Java\jdk-11.0.1.
Add an environment variable JAVA_HOME and set its value to the root of the JDK, for example, C:\Program Files\Java\jdk-11.0.1.
Add C:\Program Files\Java\jdk-11.0.1\bin to the system path.
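The two environment variable steps can also be done from an elevated PowerShell, for example (a sketch; the path matches the unzip location above):
[Environment]::SetEnvironmentVariable('JAVA_HOME', 'C:\Program Files\Java\jdk-11.0.1', 'Machine')
$path = [Environment]::GetEnvironmentVariable('Path', 'Machine')
[Environment]::SetEnvironmentVariable('Path', "$path;C:\Program Files\Java\jdk-11.0.1\bin", 'Machine')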
With the above steps, OpenJDK is fully installed. To test that it works, open a command window and run java -version. It should print out the OpenJDK version and runtime information.
Once OpenJDK is installed, you can follow the VSCode tutorial to download and install Maven, as well as the Java Extension Pack for VSCode.
Create a MySQL database
Instead of installing MySQL on my local computer, I chose to create an Azure Database for MySQL instance as the dev database environment. It is easy to provision, and Azure has a quickstart for it. I also ran the following SQL queries in Azure Cloud Shell to configure the database.
CREATE DATABASE tododb; -- Create a database
CREATE USER 'springuser'@'%' IDENTIFIED BY 'Spring1234'; -- Create a database user
GRANT ALL PRIVILEGES ON tododb.* TO 'springuser'@'%'; -- Grant user permissions to the database
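To run these, connect from Cloud Shell with the mysql client first; note that Azure Database for MySQL uses a user@servername login (the server name here is illustrative):
$ mysql -h mydemoserver.mysql.database.azure.com -u myadmin@mydemoserver -p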
With the above preparation, we have a Java development environment and a MySQL database ready for development. In the next post, I will start creating a Spring Boot REST API app with VSCode. Stay tuned.
I have received several notifications from my Ubuntu server running in Azure asking me to upgrade the server to Ubuntu 18.04.1. When Ubuntu 18.04 was first released, I didn't upgrade the server: I was afraid there could be compatibility issues, and I didn't want to break the server. With the release of 18.04.1, the version seems stable enough, so I decided to upgrade the server.
Here is what I did.
First of all, I updated the server with apt update && apt upgrade, and then I backed up the server with Azure VM backup so that I could restore the VM in case the upgrade failed.
Then I ran do-release-upgrade to upgrade the server. The OS kernel seemed to upgrade successfully, but the software package upgrade failed with the following output.
authenticate 'bionic.tar.gz' against 'bionic.tar.gz.gpg'
The upgrade has completed but there were errors during the upgrade
To continue please press [ENTER]
I did some searching on the internet, and it seems to be a common issue. To solve it, I ran the command sudo mv /usr/share/dbus-1/system-services/org.freedesktop.systemd1.service /usr/share/dbus-1/system-services/org.freedesktop.systemd1.service.bak, as mentioned here.
After the issue was fixed, I ran sudo apt-get dist-upgrade to upgrade all packages, and I chose to keep all local copies of the configurations. After that, the upgrade completed successfully, with all software and services running normally.
From a container perspective, Service Fabric is a container orchestrator that supports both Windows and Linux containers. In legacy application lift-and-shift scenarios, we usually containerize the legacy application with minimal code changes, and Service Fabric is a good platform to run these containers.
To deploy a Service Fabric cluster on Azure that is suitable for running containers, we can use an ARM template. I created a template with the following special settings:
1 – An additional data disk is attached to the VMs in the cluster to host the downloaded container images. We need this disk because, by default, all container images are downloaded to the C drive of the VMs, and the C drive may run out of space if several large images are downloaded.
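In the VM scale set resource of the template, attaching such a disk is a dataDisks entry along these lines (a sketch; the LUN, size, and storage type are illustrative):
"dataDisks": [
  {
    "lun": 0,
    "createOption": "Empty",
    "diskSizeGB": 128,
    "managedDisk": {
      "storageAccountType": "Standard_LRS"
    }
  }
]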
I started a side project to create a client package for the Cosmos DB SQL API in Go, so I can try Go in a real project. My plan is to implement something similar to the .NET Core SDK with Go. As this is a project for learning and practice, I will do it little by little, and there is no timeline for when it will be done.
I am building the project on the SQL API via REST. To access resources in Cosmos DB with the SQL API via REST, an authorization header is required for the requests. The value of the authorization header has the following format, as mentioned in this document.
type={typeoftoken}&ver={tokenversion}&sig={hashsignature}
In the above string, the values of type and version are simple: type is either master or resource, while the current version is 1.0. The value of signature is a bit more complex: it is a hash of several other values, using the Cosmos DB access key as the hash key. The document has all the details, and even better, it has a sample written in C#.
So, following the document and the sample, I implemented a Go equivalent as follows. It is a good example for trying out base64 encoding and HMAC hashing in Go.
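Here is a minimal sketch of such an implementation (the key in main is a dummy value for illustration):
package main

import (
	"crypto/hmac"
	"crypto/sha256"
	"encoding/base64"
	"fmt"
	"net/url"
	"strings"
	"time"
)

// utcNow formats the current UTC time as an RFC 7231 HTTP-date,
// e.g. "Sun, 30 Jun 2019 08:12:31 GMT".
func utcNow() string {
	return time.Now().UTC().Format("Mon, 02 Jan 2006 15:04:05 GMT")
}

// authToken builds the value of the authorization header from the request
// verb, the resource type, the resource link, the date, and the master key.
func authToken(verb, resType, resLink, date, key string) (string, error) {
	// The master key is base64 encoded; decode it to get the HMAC key.
	k, err := base64.StdEncoding.DecodeString(key)
	if err != nil {
		return "", err
	}
	// The string to sign: lower-cased verb, resource type and date,
	// plus the resource link, each followed by a newline.
	payload := strings.ToLower(verb) + "\n" +
		strings.ToLower(resType) + "\n" +
		resLink + "\n" +
		strings.ToLower(date) + "\n" +
		"\n"
	// Sign with HMAC-SHA256 and base64 encode the result.
	h := hmac.New(sha256.New, k)
	h.Write([]byte(payload))
	sig := base64.StdEncoding.EncodeToString(h.Sum(nil))
	// The whole header value must be URL encoded.
	return url.QueryEscape("type=master&ver=1.0&sig=" + sig), nil
}

func main() {
	// A dummy key for illustration; use the real Cosmos DB access key.
	key := base64.StdEncoding.EncodeToString([]byte("not-a-real-key"))
	date := utcNow()
	token, err := authToken("GET", "dbs", "dbs/ToDoList", date, key)
	if err != nil {
		fmt.Println(err)
		return
	}
	fmt.Println(date)
	fmt.Println(token)
}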
The date format in the signature is required to be the HTTP-date format defined by RFC 7231. The time package in the Go standard library doesn't seem to support this format out of the box, but it provides a very easy way to create a custom format. The utcNow() function in the above code is what I implemented to format the time in the RFC 7231 format.