Make your AI more sustainable with API Management

Use Azure API Management to enhance your Microsoft AI Foundry LLM models. By implementing semantic caching, you can use a Managed Redis cache as a vector database to check for similar requests. This way you can reduce the number of inferences performed by your LLM and lower your environmental impact.
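As a rough illustration of the idea, semantic caching in API Management is configured through policies: a lookup policy on the inbound path checks the Redis vector store for a semantically similar prompt, and a store policy on the outbound path saves new completions. The sketch below is based on the APIM semantic-cache policies; the backend id `embeddings-backend` and the threshold/duration values are placeholder assumptions, not values from the post.

```xml
<policies>
  <inbound>
    <base />
    <!-- Check the Redis-backed vector store for a prompt similar to the
         incoming one; embeddings-backend-id (placeholder name) points at
         the embeddings deployment used to vectorize prompts -->
    <azure-openai-semantic-cache-lookup
        score-threshold="0.05"
        embeddings-backend-id="embeddings-backend"
        embeddings-backend-auth="system-assigned" />
  </inbound>
  <outbound>
    <base />
    <!-- Cache the completion (here for 120 seconds, an example value) so
         similar future prompts are answered without a new inference -->
    <azure-openai-semantic-cache-store duration="120" />
  </outbound>
</policies>
```

On a cache hit the request never reaches the model, which is where the inference (and energy) savings come from; the `score-threshold` controls how similar a cached prompt must be before its answer is reused.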

Read More

Parallel processing in Azure Logic Apps

Azure Logic Apps lets you automate processes without writing code, but because a lot happens under the hood, your workflow might corrupt your data. By looking at the different ways you can handle parallel processing of data in Logic Apps, you can prevent that data from being corrupted and perhaps even improve your apps' run time along the way!
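For context, one place this shows up is the for-each action, which runs its iterations in parallel by default; when iterations write to shared state, that can corrupt data. The workflow-definition fragment below is a sketch only: the action name `For_each_row` and the source action `Get_rows` are hypothetical, and `repetitions: 1` forces sequential processing via the loop's concurrency setting.

```json
{
  "For_each_row": {
    "type": "Foreach",
    "foreach": "@body('Get_rows')",
    "actions": {},
    "runtimeConfiguration": {
      "concurrency": {
        "repetitions": 1
      }
    }
  }
}
```

Setting `repetitions` to 1 trades speed for safety; raising it lets iterations run concurrently again, which is fine once each iteration is independent of the others.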

Read More