Guest yodobrin Posted January 23, 2023

The 202 pattern is a way to handle long-running requests in a scalable and resilient manner. The basic idea is to accept a request but, instead of processing it immediately, return a "202 Accepted" response with a Location header that points to a status endpoint. The client can then poll the status endpoint to check the progress of the request. Further reading: Asynchronous Request-Reply pattern - Azure Architecture Center.

What are the use cases?

This pattern is useful in scenarios where:
- the request processing time may vary significantly, or
- the client does not need to wait for the request to complete before performing other tasks.

Why Azure Container Apps?

Azure Container Apps offers several benefits for implementing the 202 pattern:
- Scalability: scales automatically based on workload, making it easier to handle a large number of requests concurrently.
- Reliability: designed to be highly available and resilient, with features such as auto-restart and self-healing.
- Integration: integrates easily with other Azure services, such as Azure Service Bus and Azure Queue Storage, to build a more powerful and flexible solution.
- Monitoring: built-in monitoring and logging capabilities let you track the status of your requests and troubleshoot any issues that arise.
- Flexibility: supports a wide range of languages and frameworks, making it easy to build solutions with the tools and technologies you are most familiar with.

Why an Azure Queue scale rule?

A good document to start with is Scaling in Azure Container Apps | Microsoft Learn.

Configuration

To configure the scale rule, you'll need to set a connection string as a secret and then configure the rule itself. In the Azure portal, navigate to your container app and select Secrets. Select Add, and then enter your secret key/value information; in our case, it's the queue connection string.
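The same secret can also be set with the Azure CLI instead of the portal. A minimal sketch, assuming a container app named my-worker-app in resource group my-rg (the names and the secret value are placeholders):

```shell
# Store the queue connection string as a container app secret
# (app name, resource group, and connection string are placeholders).
az containerapp secret set \
  --name my-worker-app \
  --resource-group my-rg \
  --secrets queue-connection-string="<your-queue-connection-string>"
```

The secret name (here queue-connection-string) is what you later reference from the scale rule's authentication settings.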
Select Add when you're done.

In the Azure portal, there are a few options for editing the scaling rules of a container app. Enter a Rule name and select Azure Queue. Enter your queue name and your desired queue depth. Under the Authentication section, add your secret reference (in this case, your Azure Queue connection string) and the trigger parameter (you can enter 'connection'). Select Add when you're done. The queue length tells the KEDA scaler how many messages warrant scaling out; for example, if your setting is 5 and your queue length is 25, 5 instances will be created. Select Create when you're done. The Azure Queue scale rule uses queue depth to control the number of replicas.

Here are the key learnings for the worker. The implementation should always dequeue a message from the queue:

```csharp
QueueMessage msg = await queueClient.ReceiveMessageAsync();

// Attempt to parse the body into a TaskRequest object.
string body = msg.Body.ToString();
TaskRequest taskRequest;
try
{
    taskRequest = TaskRequest.FromJson(body);
    _logger.LogInformation($"Parsed message {msg.MessageId} with content {body} to TaskRequest object");
}
catch (Exception ex)
{
    _logger.LogError($"Error parsing message {msg.MessageId} with content {body} to TaskRequest object. Error: {ex.Message}, removing message from queue.");
    // Delete the message from the queue.
    await queueClient.DeleteMessageAsync(msg.MessageId, msg.PopReceipt);
    // Send a message to the poison queue here.
}
```

Address all failures, and log both success and failure.
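The worker above is the consumer side of the pattern. The producer side, which accepts the request, enqueues it, and returns 202 Accepted, might look like the following minimal sketch. It assumes ASP.NET Core minimal APIs and Azure.Storage.Queues; the route, configuration keys, and status-endpoint URL are hypothetical, and the TaskRequest helpers mirror those used in the worker:

```csharp
using Azure.Storage.Queues;

var builder = WebApplication.CreateBuilder(args);
var app = builder.Build();

// Placeholder configuration keys; use your own storage connection string and queue name.
string storageCS = builder.Configuration["StorageConnectionString"];
string queueName = builder.Configuration["QueueName"];
QueueClient queueClient = new QueueClient(storageCS, queueName);
await queueClient.CreateIfNotExistsAsync();

// Accept the request, enqueue it for the worker, and return 202 Accepted
// with a Location header pointing at a (hypothetical) status endpoint.
app.MapPost("/tasks", async (TaskRequest taskRequest) =>
{
    taskRequest.TaskId = Guid.NewGuid();
    await queueClient.SendMessageAsync(taskRequest.ToJson());
    return Results.Accepted($"/tasks/{taskRequest.TaskId}");
});

app.Run();
```

The queue depth these POSTs build up is exactly what the Azure Queue scale rule watches, so a burst of requests scales the worker out automatically.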
```csharp
try
{
    // Process the task, then update the blob with the new status.
    await DoSomething(taskRequest);
    BlobContainerClient blobContainerClient = new BlobContainerClient(storageCS, blobContainerName);
    await blobContainerClient.CreateIfNotExistsAsync();
    BlobClient blobClient = blobContainerClient.GetBlobClient($"{taskRequest.TaskId}-202");
    // Update the task status.
    taskRequest.Status = TaskRequest.TaskStatus.Completed;
    Stream stream = new MemoryStream(Encoding.UTF8.GetBytes(taskRequest.ToJson()));
    // Upload and overwrite the blob.
    await blobClient.UploadAsync(stream, overwrite: true);
    _logger.LogInformation($"Completed task processing for message {msg.MessageId}.");
}
catch (Exception ex)
{
    _logger.LogError($"Error updating blob for message {msg.MessageId} with content {body}. Error: {ex.Message}, removing message from queue.");
    // Delete the message from the queue.
    await queueClient.DeleteMessageAsync(msg.MessageId, msg.PopReceipt);
    // Send the message to a poison queue.
}
```

Upon error, send the message to a poison queue.

Example

You can see an example of how this pattern can be used in practice by visiting my repo.

Conclusion

Azure Container Apps and the Azure Queue scale rule are an efficient way to handle long-running requests in a scalable and resilient manner.