Return HTTP 405 When the HTTP Method Does Not Match in Azure API Management

In Azure API Management, when the HTTP method of a request doesn’t match the one defined in the corresponding operation, APIM returns HTTP 404 Resource Not Found to the client. For example, the out-of-the-box Echo API defines the Retrieve resource (cached) operation with HTTP GET. If you call it with HTTP POST, you’ll get HTTP 404 Resource Not Found in the response.

The HTTP 404 returned by APIM in this scenario doesn’t strictly follow the definition of the HTTP status codes. According to the definition, HTTP 405 Method Not Allowed is designated for this situation. Feedback on this issue has been submitted to the APIM team, and according to their response it will be addressed in the future. Until then, we have to use a workaround. Here is how you can do it with policies in APIM.

Handle the error

When APIM fails to identify an API or operation, it raises a configuration error, which returns HTTP 404. What we need to do is handle this error and change the status code to HTTP 405. This way, you avoid the overhead of creating an operation for each HTTP method just to handle this situation. The next question is at which scope the error should be handled. Depending on the configuration of your APIM instance, you can handle the error on either all operations or all APIs.

The policy code

The following policy code is a sample of the on-error section for the Echo API at the all operations scope.

    <on-error>
        <base />
        <choose>
            <when condition="@(context.LastError.Source == &quot;configuration&quot; && context.Request.Url.Path == &quot;/echo/resource-cached&quot;)">
                <return-response>
                    <set-status code="405" reason="Method not allowed" />
                    <set-header name="Content-Type" exists-action="override">
                        <value>application/json</value>
                    </set-header>
                    <set-body>@{
                        return new JObject(
                            new JProperty("status", "HTTP 405"),
                            new JProperty("message", "Method not allowed")
                        ).ToString();
                    }</set-body>
                </return-response>
            </when>
            <otherwise />
        </choose>
    </on-error>

The tricky part is the <when> condition. The first part of the condition checks whether this is a configuration error. If it is, the second part tests whether the request targets the Retrieve resource (cached) operation. The second test avoids masking a real HTTP 404.

You may wonder why I used context.Request rather than context.Operation to test which operation it is. The reason is that APIM sets context.Operation to null in this case, because it cannot identify the operation (which is exactly why the configuration error happens).
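To make the distinction concrete, here is a toy sketch in plain Python (not APIM policy code; the route table and function are invented for illustration) of how a gateway can tell a real 404 from a method mismatch that deserves a 405:

```python
# Illustrative only: a toy router. A path that exists with a different
# method should yield 405; a path that doesn't exist at all yields 404.
routes = {
    ("/echo/resource-cached", "GET"): "Retrieve resource (cached)",
}

def dispatch(path, method):
    if (path, method) in routes:
        return 200                          # operation matched
    if any(p == path for p, _ in routes):
        return 405                          # path known, method not allowed
    return 404                              # path unknown: a real not-found
```

With this sketch, dispatch("/echo/resource-cached", "POST") yields 405, while an unknown path yields 404, which is the behavior the policy emulates.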

You can use this workaround to return HTTP 405 until APIM fixes its behavior.

Azure Batch – Create a Custom Pool using Shared Image Gallery

If you have a custom image in a Shared Image Gallery and you want to use it to create a pool in Azure Batch, this document, Use the Shared Image Gallery to create a custom pool, provides pretty good guidance for it. I followed it to test the scenario and hit two minor issues.

  • As mentioned in another doc, AAD authentication is also a prerequisite for using a shared image gallery. If you use --shared-key-auth with az batch account login, you will hit an authentication error with the Azure CLI. I raised an issue for the document, and hopefully a note will be added to it.
  • There is no sample code demonstrating how to create a pool with a shared image gallery in Python.

So I wrote a simple sample in Python. It is based on the latest version (9.0.0) of the Azure Batch package for Python, and it uses a service principal for AAD authentication. The custom image I used for testing was built on top of Ubuntu 18.04-LTS, so the node agent SKU is ubuntu 18.04. It needs to be changed accordingly if a different OS version is used.

# Import the required modules from the
# Azure Batch Client Library for Python
import azure.batch as batch
import azure.batch.models as batchmodels
from azure.common.credentials import ServicePrincipalCredentials

# Specify Batch account credentials
account = "<batch-account-name>"
batch_url = "<batch-account-url>"
ad_client_id = "<client id of the SP>"
ad_tenant = "<tenant id>"
ad_secret = "<secret of the SP>"

# Pool settings
pool_id = "LinuxNodesSamplePoolPython"
vm_size = "STANDARD_D2_V3"
node_count = 1

# Initialize the Batch client with the service principal credentials.
# The resource must be the Azure Batch service endpoint for AAD auth.
creds = ServicePrincipalCredentials(
    client_id=ad_client_id,
    secret=ad_secret,
    tenant=ad_tenant,
    resource="https://batch.core.windows.net/"
)
client = batch.BatchServiceClient(creds, batch_url)

# Create the unbound pool
new_pool = batchmodels.PoolAddParameter(id=pool_id, vm_size=vm_size)
new_pool.target_dedicated_nodes = node_count

# Configure the start task for the pool. The task runs elevated
# under an auto user so it can access privileged locations.
start_task = batchmodels.StartTask(
    command_line="printenv AZ_BATCH_NODE_STARTUP_DIR"
)
start_task.user_identity = batchmodels.UserIdentity(
    auto_user=batchmodels.AutoUserSpecification(
        scope=batchmodels.AutoUserScope.pool,
        elevation_level=batchmodels.ElevationLevel.admin
    )
)
new_pool.start_task = start_task

# Create an ImageReference which points to the image version
# in the Shared Image Gallery by its resource id.
ir = batchmodels.ImageReference(
    virtual_machine_image_id="<resource id of the image version in sig>"
)

# Create the VirtualMachineConfiguration, specifying
# the VM image reference and the Batch node agent to
# be installed on the node.
vmc = batchmodels.VirtualMachineConfiguration(
    image_reference=ir,
    node_agent_sku_id="batch.node.ubuntu 18.04"
)

# Assign the virtual machine configuration to the pool
new_pool.virtual_machine_configuration = vmc

# Create pool in the Batch service
client.pool.add(new_pool)
Update: I polished the above sample code and pushed it into the document I mentioned at the beginning of this post via a PR. The Python sample code in that document is based on the one in this post.
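One more note: the virtual_machine_image_id placeholder in the sample is the full ARM resource id of the image version in the gallery. If you are unsure of its shape, a small helper like this (my own, not part of the SDK) shows the format:

```python
def sig_image_version_id(subscription, resource_group, gallery, image, version):
    """Build the ARM resource id of a Shared Image Gallery image version."""
    return ("/subscriptions/{}/resourceGroups/{}/providers/Microsoft.Compute/"
            "galleries/{}/images/{}/versions/{}").format(
                subscription, resource_group, gallery, image, version)
```

For example, sig_image_version_id("<sub-id>", "myRG", "myGallery", "myImage", "1.0.0") produces the id you would paste into the sample above.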

Configuring VNET integration of Azure Redis and Web App

To configure Azure Redis with VNET support, we can follow the steps described in this document. And to integrate an Azure web app with a VNET, there is a detailed document as well. In this post, I list some common issues that one might hit during the configuration.

  1. The VNET integration for Azure Redis requires an empty subnet in a VNET that was created with Resource Manager. This subnet needs to exist before you create the Azure Redis instance; otherwise, the configuration will fail.
  2. The subnet for Azure Redis can be protected with a network security group (NSG). Usually the default NSG rules are good enough for protecting the connections. If you need further hardening, you will have to create rules based on the port list in the Azure Redis document.
  3. To troubleshoot the connection between the Azure web app and Azure Redis, you can use the Kudu console of the web app. There are two tools built into the web app for network troubleshooting:
    nameresolver.exe can be used to test the DNS functionality, and
    tcpping.exe can be used to test whether the host and port are reachable. But you cannot test the Redis functions directly from Kudu.
  4. Once the VNET integration is configured, the Redis console in the Azure Portal will no longer work. To test the Redis functions with tools such as redis-cli, you will have to build a VM in the VNET and connect to Azure Redis from it.
  5. If your web app cannot access Azure Redis even though the network configurations are correct, try syncing the network for the App Service Plan. See this issue for details. Make sure you don’t hit any errors when syncing the network.
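If you prefer to script the connectivity check instead of clicking around Kudu, a few lines of Python do roughly the same TCP-level test (this is my own sketch, not an Azure tool):

```python
import socket

def tcp_ping(host, port, timeout=3.0):
    """Return True if a TCP connection to host:port succeeds within timeout."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False
```

For example, running tcp_ping("<cache-name>.redis.cache.windows.net", 6380) from a VM inside the VNET tests whether the SSL port of the cache is reachable; it only proves TCP connectivity, not that Redis itself is healthy.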

Spring Boot, Azure Database for MySQL, and Azure App Service – Part 1

I recently played with Java and Azure App Service. What I was trying to find out is what the development experience looks like for Java developers who want to build their applications with Azure App Service and Azure Database for MySQL.

There are some documents on the Microsoft docs site, such as this one. They might be good enough for an experienced Java developer, but for someone like me with limited Java experience, they are not easy to follow, and the samples are too simple to reflect real development. So I decided to try it myself and document my experience here for others to reference. This will be a series of posts, and this is the first one.

Prepare the dev environment

Instead of installing IntelliJ or Eclipse, I chose to use VSCode as my Java IDE. I already had VSCode installed on my computer. According to this tutorial, I just needed to install a JDK and Maven. I am a bit lost with Java terms like Java SE, JDK, and JRE and their versions, but I didn’t want to be bothered. I chose to install OpenJDK because the Oracle JDK requires a license. Here are the steps to install OpenJDK.

  1. Download OpenJDK from here. The Windows version of OpenJDK is a zip file. Unzip it to C:\Program Files\Java so the root folder of the JDK is something like C:\Program Files\Java\jdk-11.0.1.
  2. Add an environment variable JAVA_HOME and set its value to the root of the JDK, for example, C:\Program Files\Java\jdk-11.0.1.
  3. Add C:\Program Files\Java\jdk-11.0.1\bin to the system path.
  4. With the above steps, OpenJDK is fully installed. To test whether it works, open a command window and run java -version. It should print the OpenJDK version and runtime information.
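The environment-variable steps above can be sanity-checked with a short script. This helper is my own (not from any tutorial); it takes the environment as a dict so it is easy to test:

```python
def check_java_home(env):
    """Return a list of problems with the JAVA_HOME / PATH setup, empty if OK."""
    problems = []
    home = env.get("JAVA_HOME")
    if not home:
        problems.append("JAVA_HOME is not set")
        return problems
    # The bin folder under JAVA_HOME must be on the Windows PATH.
    bin_dir = home.rstrip("\\/") + "\\bin"
    path_entries = env.get("PATH", "").split(";")
    if bin_dir not in path_entries:
        problems.append(bin_dir + " is not on PATH")
    return problems
```

On a correctly configured machine, check_java_home(dict(os.environ)) should return an empty list.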

Once OpenJDK is installed, you can follow the VSCode tutorial to download and install Maven, along with the Java Extension Pack for VSCode.

Create a MySQL database

Instead of installing MySQL on my local computer, I chose to create an Azure Database for MySQL instance as the dev database environment. Provisioning one is easy; Azure has a quick start for it. I also ran the following SQL queries in Azure Cloud Shell to configure the database.

CREATE DATABASE tododb; -- Create a database
CREATE USER 'springuser'@'%' IDENTIFIED BY 'Spring1234'; -- Create a database user
GRANT ALL PRIVILEGES ON tododb.* TO 'springuser'@'%'; -- Grant user permissions to the database
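Looking ahead to the Spring Boot app, the datasource will connect to this database. As a reference, here is a small helper (my own, not from any SDK) that builds the JDBC URL and login name; note the user@server login form that the single-server flavor of Azure Database for MySQL expects:

```python
def mysql_connection_settings(server, database, user):
    """JDBC URL and login name for an Azure Database for MySQL (single server)."""
    return {
        "url": "jdbc:mysql://{}.mysql.database.azure.com:3306/{}?useSSL=true"
               .format(server, database),
        # Single-server Azure MySQL logins take the form user@server.
        "username": "{}@{}".format(user, server),
    }
```

For example, mysql_connection_settings("myserver", "tododb", "springuser") gives the values to put into spring.datasource.url and spring.datasource.username.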

With the above preparation, we have a Java development environment and a MySQL database ready for development. In the next post, I will start to create a Spring Boot REST API app with VSCode. Stay tuned.


Crazyflie 2.0
My Crazyflie 2.0

Azure Marketplace is running a promotion called Super Human to promote the various services in the Azure Marketplace. The campaign offers a number of virtual labs; if you happen to be interested in these services, you can use the labs to learn how to use them in Azure.

But the virtual labs are not really my point. The point is that if you successfully complete one of the virtual labs, you get a reward: you can choose either a three-month Azure Pass or a Crazyflie 2.0 drone. A three-month Azure Pass may be nice, but I suppose everyone would pick the drone, right?


Remote Desktop for Linux VM on Azure

Usually you don’t need remote desktop or VNC on Linux servers running in the cloud. But as I wanted to try some scenarios with a Linux desktop, and I don’t actually have a physical machine with any Linux OS on it, I ended up setting up an Ubuntu server on Azure and enabling remote desktop on it.

Obviously, I am not the first one who wants to use remote desktop on servers running in the cloud. There are plenty of posts on the internet about how to do it. Most of them are about using xrdp + xfce4, including this one for Azure VMs. I am using the Ubuntu 15.10 image. The only gotcha is that running the following command could uninstall the waagent service.

$ sudo apt-get install ubuntu-desktop

This is a known issue that has been reported. To get waagent back, you have to reinstall it with the following:

$ sudo apt-get install --reinstall walinuxagent

I ended up not installing ubuntu-desktop. Without it, you also avoid installing applications that you don’t need, such as the office software.

Although xfce4 is good enough as a lightweight window manager, I am more used to a GNOME-like desktop environment, so I decided to try the MATE desktop. Configuring it is very easy. Just run the following:

sudo apt-get install xrdp
sudo apt-get install mate-desktop-environment
echo mate-session >~/.xsession
sudo service xrdp restart

You may have to reboot the server after installing the desktop environment.

That’s all. I am using it now and so far so good.

The simplest way to copy blobs between different Azure storage accounts

Recently I had to move my Ubuntu VM to another Windows Azure subscription. To do that, I had to move the VHD blobs to a storage account in that subscription. At first I thought it would be simple: just copy the blobs to my local disk and then upload them to the new storage account. But when I tried, it turned out to be impractical, because the download and upload would take more time than I could afford. So I searched the web for a better way to do it.

After searching the web for a while, I think the simplest way is to use AzCopy, a tool from the Windows Azure Storage team. It leverages the Asynchronous Cross-Account Copy feature introduced in Windows Azure in June 2012, and it is a very easy-to-use command-line tool. For example, to copy all blobs in a source container to a destination container, you just need the following command.

AzCopy https://<sourceaccount>.blob.core.windows.net/<sourcecontainer>/ https://<destaccount>.blob.core.windows.net/<destcontainer>/ /sourcekey:<key> /destkey:<key> /S

The best part is that all the copying happens on the Windows Azure server side, so it is very fast; the speed can go above 100 MB/s. It saves a lot of time and effort compared to copying via a local disk. Give it a try if you need to copy or move blobs between different Azure storage accounts.
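If you have several containers to move, you can generate the command lines in a loop. Here is a small sketch (the helper is mine, not part of AzCopy, and it assumes the standard blob endpoint):

```python
def azcopy_command(src_account, src_container, dst_account, dst_container,
                   src_key, dst_key):
    """Build the old-style AzCopy command for a server-side container copy."""
    return ("AzCopy https://{sa}.blob.core.windows.net/{sc}/ "
            "https://{da}.blob.core.windows.net/{dc}/ "
            "/sourcekey:{sk} /destkey:{dk} /S").format(
                sa=src_account, sc=src_container,
                da=dst_account, dc=dst_container,
                sk=src_key, dk=dst_key)
```

For example, printing azcopy_command("oldaccount", "vhds", "newaccount", "vhds", "<key>", "<key>") for each container gives you a ready-to-run script.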

Configuring a Linux Virtual Machine on Windows Azure

As mentioned in a previous post, I have migrated this blog from the old solution based on a Windows Azure web role to a Linux virtual machine deployed in Windows Azure. I spent the last two weekends fiddling with this VM, so here are some notes on the details.

First, deploying a Linux VM in Windows Azure is actually not complicated. The Windows Azure documentation explains it; just follow the instructions and you can create the VM. The only thing that needs attention is the SSH certificate. When the deployment finished, I was even a little surprised that Microsoft’s support for open source systems had become this good.

I created the VM from the Ubuntu 13.04 template, and everything went smoothly. Once the VM was created, I could configure Apache, PHP, and MySQL just as on any ordinary Ubuntu machine; the Ubuntu help has the relevant instructions. The benefit of having my own VM is that I control everything, from the OS down to the applications. Currently I run WordPress, tt-rss, and tt-rss mobile on this VM. I will play with other interesting applications when I have time.


  1. Besides the .vhd running the OS, a Windows Azure VM needs a separately attached .vhd as a data partition. You may notice that a newly created VM has, in addition to the OS partition, a partition /dev/sdb1 mounted at /mnt/resource, and it is quite large, over 100 GB. But like the D: drive of a Windows VM on Azure, this partition is used by the Azure VM for temporary data; its contents are not persisted to your own storage, so you still need to create and attach a .vhd yourself. Only the data on .vhds in your own storage is persistent. I had put a website directory under /mnt/resource and ran into 404 errors yesterday; the logs showed I/O errors.
  2. Windows Azure VMs currently have no feature to back up or snapshot the whole VM. The article about capturing an image in the documentation is not a backup method; capturing an image unprovisions the entire VM. The official backup method is to snapshot the .vhds in storage. When the VM has a problem, you restore the .vhd snapshots and then rebuild the VM. I haven’t taken my snapshots yet; I need to find time to do it.

Update: a free Windows Azure storage explorer tool that can take snapshots of .vhd files: Azure Explorer


Although blogs are no longer popular, and I don’t update this one often, I used the Singapore National Day holiday to migrate the whole blog to a new LAMP virtual machine based on Windows Azure IaaS. I did this partly to see how well Azure virtual machines support open source systems, and partly to solve the stability and upgrade problems; after all, WordPress runs better on a native LAMP stack.


My first impression is that Windows Azure’s support for open source systems is quite good. I picked Ubuntu Server 13.04 directly from the Gallery, installed Apache and PHP with apt-get, configured Apache, MySQL, and WordPress, and everything went smoothly. I even got the SSL certificate I had previously obtained from DigiCert working, and so far everything looks normal. One remaining small issue is managing media files. I prefer to use public media services such as flickr and YouTube rather than storing the files in WordPress. With those services, though, uploading while writing a post becomes cumbersome; I wonder whether there is a good editor plugin that solves the uploading problem. Or maybe I will give Windows Azure Storage for WordPress a try.


I had assumed that putting the blog on Azure would settle things once and for all and that it would never go down again. But last week, while I was in India, the blog suddenly started returning HTTP 500 errors and became inaccessible. At the end of every month, Azure restarts the servers, and the restart redeploys the package. My guess was that something went wrong during the redeployment.

I connected to one of the Azure servers with remote desktop and checked the startup task logs. One error occurred while installing the SQL Driver for PHP: the driver’s installation package could not be downloaded from the original address. I tried to open the download address in a browser, and sure enough it was no longer available. After some searching, it turned out that with the release of WPI 4.0 the installation package had moved to a new address. There is a complete discussion on the IIS forums.


  1. Delete the old WebPICmdLine directory under the resources directory; it contains the command-line program of the old WebPI preview.
  2. Create a new WebPICmd directory and copy the new WebPICmd.exe into it; Microsoft.Web.PlatformInstaller.dll needs to be copied over as well. Their default installation path is “C:\Program Files\Microsoft\Web Platform Installer”.
  3. Update install-php-impl.cmd under the bin directory, changing the original command line:

    “..\resources\WebPICmdLine\webpicmdline” /Products:PHP53,SQLDriverPHP53IIS,

    to:

    “..\resources\WebPICmd\webpicmd” /Install /Products:PHP53,SQLDriverPHP53IIS,

  4. Re-create the package, then choose to update the already-deployed service.
  5. After the package is redeployed, you may also need to reimage the Azure servers.