Control API arguments per team with Azure API Management
Several times now I’ve been asked how an API can be controlled so that only certain parameters are allowed for certain teams. Recently I got this question again, so I decided to write a blog post about it. Let’s set the scene for this case.
There is an Azure Function which can redeploy/restart an Azure DevOps agent. The organization has multiple agents, and the requirement is that access to this function is restricted per project. Developers on these projects might want to redeploy/restart their agent on various occasions, so the goal is to open the function up to everyone while making sure developers can only call it for their own agents.
The first attempt used function access keys on the Azure Function. The idea was to give every team a separate function access key and then check it inside the function. The problem is that retrieving these keys inside the function is not very straightforward. It seems to be possible with the Azure CLI, but that can’t be invoked inside the Azure Function by default. It could also be done with a REST API call, but that requires a fairly complex authentication setup which the development team wanted to avoid.
The solution proposed here is to put API Management in front of the Azure Function. Inside API Management a validation can be created. By using the serverless variant the costs can be kept to a minimum, and the solution can be expanded later if more services need to be secured this way. By using an Azure Key Vault, secrets can be stored securely and retrieved easily. The design looks like this:
The image above shows the flow a request will take. First a request is made to API Management, which queries the Azure Key Vault to validate the provided key. If the key is valid, the request is forwarded to the Azure Function, and its response is sent back to the user who made the request. Users call the function with a body like:
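For example (the key value here is just a placeholder for the key a team would be given):

```json
{
  "Team": "Test-Team",
  "Key": "<the key issued to this team>"
}
```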
The “Team” parameter indicates which team’s agent needs to be redeployed/restarted. The “Key” parameter is the key that proves the caller is allowed to request this for that project.
To test this setup I’ve created a serverless Azure Function on the PowerShell Core runtime which only accepts POST requests. The run.ps1 file looks like this:
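As a minimal sketch, based on the default PowerShell HTTP trigger template (the exact response text is just an example):

```powershell
using namespace System.Net

param($Request, $TriggerMetadata)

# Take the "Team" value from the request body.
$team = $Request.Body.Team

# Normally the redeploy/restart logic would go here; for this post we just
# return the value from the body with some text appended.
Push-OutputBinding -Name Response -Value ([HttpResponseContext]@{
    StatusCode = [HttpStatusCode]::OK
    Body       = "Received a request to redeploy/restart the agent for team: $team"
})
```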
Normally this would contain the code to redeploy/restart the agent. For the purpose of this blog post I’ve just created a simple function which takes the value in the body and returns it with some text appended.
After creating the function I created an API Management instance with the pricing tier set to Consumption (serverless). During the creation I also enabled a system-assigned managed identity for this service, and afterwards I linked the Azure Function to the API Management instance.
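As a rough equivalent with the Az PowerShell module, creating such an instance could look like this (all names and the e-mail address are placeholders):

```powershell
# Create a Consumption (serverless) API Management instance with a system-assigned managed identity.
New-AzApiManagement -ResourceGroupName "rg-apimcheck" `
    -Name "apim-apimcheck" `
    -Location "West Europe" `
    -Organization "MyOrganization" `
    -AdminEmail "admin@example.com" `
    -Sku "Consumption" `
    -SystemAssignedIdentity
```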
Now there should be an API in API Management which goes directly to the Azure Function, and it also uses the function key for extra security. This function key prevents users from calling the Azure Function directly, as the key needs to be known. Linking API Management and the Azure Function stores this key in the named values of API Management, so it’s kept securely.
I’ve also created an Azure Key Vault which uses RBAC. I’ve given myself permission to create secrets and given the managed identity of API Management permission to read secrets.
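As a sketch, assigning these permissions with the Az PowerShell module could look like this (the object ID, account and Key Vault resource ID are placeholders, and the role names are the built-in RBAC roles that match these permissions):

```powershell
# Allow the API Management managed identity to read secret values from the Key Vault.
New-AzRoleAssignment -ObjectId "<object id of the APIM managed identity>" `
    -RoleDefinitionName "Key Vault Secrets User" `
    -Scope "<resource id of the Key Vault>"

# Allow my own account to create and manage secrets in the Key Vault.
New-AzRoleAssignment -SignInName "my.account@example.com" `
    -RoleDefinitionName "Key Vault Secrets Officer" `
    -Scope "<resource id of the Key Vault>"
```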
In API Management I’ve created a named value to store the name of the Key Vault.
In the screenshot above you see two values. One is “fa-apimcheck-key”, which was created automatically when we linked the Azure Function. The other is “keyvaultname”, which I created myself.
Now for the last part. I’ve edited the API which was imported by linking the function and changed the policy code to this:
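A sketch of such a policy is shown below. It assumes the named value “keyvaultname” from earlier and the Team/Key body format; the exact expressions, error messages and Key Vault API version are illustrative:

```xml
<policies>
    <inbound>
        <base />
        <!-- Validate body: parse the (assumed JSON) body and check that Team and Key are provided -->
        <set-variable name="requestBody" value="@(context.Request.Body == null ? new JObject() : context.Request.Body.As<JObject>(preserveContent: true))" />
        <choose>
            <when condition="@(string.IsNullOrEmpty((string)((JObject)context.Variables["requestBody"])["Team"]) || string.IsNullOrEmpty((string)((JObject)context.Variables["requestBody"])["Key"]))">
                <return-response>
                    <set-status code="400" reason="Bad Request" />
                    <set-body>Please provide a JSON body containing both a "Team" and a "Key" property.</set-body>
                </return-response>
            </when>
        </choose>
        <!-- Retrieve key from keyvault: the secret name is the Team value from the body -->
        <send-request mode="new" response-variable-name="responseObj" timeout="20" ignore-error="false">
            <set-url>@("https://{{keyvaultname}}.vault.azure.net/secrets/" + (string)((JObject)context.Variables["requestBody"])["Team"] + "?api-version=7.4")</set-url>
            <set-method>GET</set-method>
            <authentication-managed-identity resource="https://vault.azure.net" />
        </send-request>
        <set-variable name="keyvaultsecret" value="@((string)((IResponse)context.Variables["responseObj"]).Body.As<JObject>()["value"])" />
        <!-- Compare the retrieved key with the given key: return a 401 when they don't match -->
        <choose>
            <when condition="@((string)context.Variables["keyvaultsecret"] != (string)((JObject)context.Variables["requestBody"])["Key"])">
                <return-response>
                    <set-status code="401" reason="Unauthorized" />
                </return-response>
            </when>
        </choose>
    </inbound>
    <backend>
        <base />
    </backend>
    <outbound>
        <base />
    </outbound>
    <on-error>
        <base />
    </on-error>
</policies>
```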
Let’s take a look at the code to see what it does. The first part, after “Validate body”, is purely validation. It checks whether a body is provided and, if so, whether the Key and Team parameters are present. If any of these are missing, a custom error message is returned that explains to the user what is needed to make the request work.
The part after “Retrieve key from keyvault” sends a request to the Azure Key Vault. It uses the REST API to retrieve a secret from the Key Vault: it takes the Team value provided in the body of the request and inserts it into the URL to retrieve the secret with that name. To make sure this works I’ve added two secrets named “Test-Team” and “Test-Team2” to my Key Vault. Their values are set to keys I’ve generated.
The send-request policy in API Management stores the response from the Key Vault in “responseObj”. In the next step we take the secret’s value and store it in a variable. This is mostly done to make the next expression easier to read and could be omitted.
In the next part, “Compare the retrieved key with the given key”, the key retrieved from the Key Vault is compared with the key given in the body of the request. If they are equal, the backend is called; this backend is the Azure Function. If they are not equal, a 401 is returned. The rest of the provided code is standard.
So now we can test the call in a tool like PowerShell:
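For example, a test call from PowerShell could look roughly like this (the API Management hostname and API path are placeholders for your own instance):

```powershell
# Build the request body with the team name and the key stored in the matching Key Vault secret.
$body = @{
    Team = "Test-Team"
    Key  = "<the key stored in the Test-Team secret>"
} | ConvertTo-Json

# Call the API exposed through API Management.
Invoke-RestMethod -Method Post `
    -Uri "https://apim-apimcheck.azure-api.net/fa-apimcheck/HttpTrigger1" `
    -ContentType "application/json" `
    -Body $body
```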
As shown in the screenshot, a body and headers are provided. When sent to the API Management URL, the call returns the value from the Azure Function. Do note that for testing purposes I’ve disabled the “subscription required” setting.
To add even more security, a subscription key could be required and passed along in the headers of the request. Since we already need to pass a key in the body, there is a form of security in place. But for production environments it’s advisable to also enable the subscription key, to make it harder for an attacker to brute force the keys.
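With that setting enabled, the subscription key would go into the Ocp-Apim-Subscription-Key header that API Management expects by default, for example:

```powershell
# Same call as before, but now passing the API Management subscription key as a header.
Invoke-RestMethod -Method Post `
    -Uri "https://apim-apimcheck.azure-api.net/fa-apimcheck/HttpTrigger1" `
    -ContentType "application/json" `
    -Headers @{ "Ocp-Apim-Subscription-Key" = "<your subscription key>" } `
    -Body $body
```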
I hope this helps you set up a check like this too. By using services like API Management, logic can be added to endpoints very easily without adding significant costs. API Management has several pricing tiers; the higher tiers cost more but offer better performance and more features. The serverless variant is very good for situations where costs need to be kept to a minimum and response time is less important. For situations like unattended scripts, or scripts which are run without the expectation of a response within a few seconds, this solution works very well. But when a response is expected very quickly, the serverless option can cause problems. Especially when no requests have been received for a while, the response time can be a bit higher, because the service first needs to be provisioned somewhere (a cold start). This delay is still measured in seconds, but it is noticeable. There are ways to prevent this if needed, but for a solution like this I would strongly recommend the serverless option.
Thanks for reading, and if you have any questions feel free to reply here or contact me on any platform. I’m always happy to help with this kind of thing!