An Azure Function is a serverless compute service from Microsoft Azure that lets developers build event-driven applications without provisioning or managing servers. With Azure Functions, developers write small, single-purpose functions that respond to events such as HTTP requests, changes to data in Azure Storage or Azure Cosmos DB, or messages from Azure Service Bus or Azure Event Hubs. Functions can be written in several programming languages, including C#, Java, JavaScript, Python, and PowerShell. Azure Functions scales automatically, from a few instances up to thousands, depending on application demand.
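For instance, here is a minimal sketch of an HTTP-triggered function in JavaScript, using the same Node.js programming model as the examples later in this post; the greeting logic is purely illustrative:
module.exports = async function (context, req) {
    // Read a name from the query string or the JSON request body
    const name = req.query.name || (req.body && req.body.name);

    // Return an HTTP response; the Functions runtime manages the server
    context.res = {
        status: 200,
        body: name ? `Hello, ${name}!` : "Pass a name in the query string or request body."
    };
};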
Tuesday, May 2, 2023
Creating Custom Triggers for Azure Functions with Azure Event Hubs and Azure Service Bus
module.exports = async function (context, eventHubMessages) {
    context.log(`Event hub trigger function called for message array: ${eventHubMessages}`);
    eventHubMessages.forEach(message => {
        // Process each message here
    });
};
module.exports = async function (context, mySbMsg) {
    context.log(`Service bus trigger function called for message: ${mySbMsg}`);
    // Process the message here
};
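These handlers are wired to their event sources through bindings, which in this programming model live in each function's function.json file. A minimal sketch for the Event Hub trigger might look like the following; the hub name and connection setting name are assumptions, and the Service Bus trigger is configured analogously with type "serviceBusTrigger" and a queueName:
{
  "bindings": [
    {
      "type": "eventHubTrigger",
      "direction": "in",
      "name": "eventHubMessages",
      "eventHubName": "my-event-hub",
      "connection": "EventHubConnectionString",
      "cardinality": "many"
    }
  ]
}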
Real-time Image Processing with Azure Functions and Azure Blob Storage
Image processing is a critical component of many applications, from social media to healthcare. However, processing large volumes of image data can be time-consuming and resource-intensive. In this tutorial, we'll show you how to use Azure Functions and Azure Blob Storage to create a real-time image processing pipeline that can handle large volumes of data with scalability and flexibility.
Prerequisites
Before we get started, you'll need to have the following:
1. An Azure account
2. Visual Studio Code
3. Azure Functions extension for Visual Studio Code
4. Azure Blob Storage extension for Visual Studio Code
Creating the Azure Functions App
The first step is to create an Azure Functions app. In Visual Studio Code, select the Azure Functions extension and choose "Create New Project". Follow the prompts to choose your programming language and runtime.
Once your project is created, you can create a new function by selecting the "Create Function" button in the Azure Functions Explorer. Choose the Blob trigger template to create a function that responds to new files added to Azure Blob Storage.
In this example, we'll create a function that recognizes objects in images using Azure Cognitive Services. We'll use the Cognitive Services extension for Visual Studio Code to connect to our Cognitive Services account.
Creating the Azure Blob Storage Account
Next, we'll create an Azure Blob Storage account to store our image data. In the Azure portal, select "Create a resource" and search for "Blob Storage". Choose "Storage account" and follow the prompts to create a new account.
Once your account is created, select "Containers" to create a new container for your image data. Choose a container name and access level, and select "Create". You can now add images to your container through the Azure portal or through your Azure Functions app.
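If you prefer to upload programmatically, here is a minimal sketch using the @azure/storage-blob SDK; the container name and the connection string setting are assumptions that should match your own setup:
const { BlobServiceClient } = require("@azure/storage-blob");

async function uploadImage(filePath, blobName) {
    // Connect using a storage connection string kept in an environment variable
    const blobServiceClient = BlobServiceClient.fromConnectionString(process.env["BlobStorageConnectionString"]);
    const containerClient = blobServiceClient.getContainerClient("mycontainer");

    // Upload the local file as a block blob
    await containerClient.getBlockBlobClient(blobName).uploadFile(filePath);
}

uploadImage("./sample.jpg", "sample.jpg").catch(console.error);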
Connecting the Azure Functions App to Azure Cognitive Services
To connect your Azure Functions app to Azure Cognitive Services, you'll need to add the Cognitive Services extension to your project. In Visual Studio Code, select the Extensions icon and search for "Azure Cognitive Services". Install the extension and reload Visual Studio Code.
Next, open your function code and add the following code to your function:
const { ComputerVisionClient } = require("@azure/cognitiveservices-computervision");
const { ApiKeyCredentials } = require("@azure/ms-rest-js");
const { BlobServiceClient } = require("@azure/storage-blob");

module.exports = async function (context, myBlob) {
    // Authenticate to Computer Vision with the endpoint and key from app settings
    const endpoint = process.env["ComputerVisionEndpoint"];
    const key = process.env["ComputerVisionKey"];
    const credentials = new ApiKeyCredentials({ inHeader: { "Ocp-Apim-Subscription-Key": key } });
    const client = new ComputerVisionClient(credentials, endpoint);

    // Connect to Blob Storage using a connection string from app settings
    const blobServiceClient = BlobServiceClient.fromConnectionString(process.env["BlobStorageConnectionString"]);
    const containerClient = blobServiceClient.getContainerClient("mycontainer");

    // The blob trigger passes the image content in as a buffer
    const buffer = myBlob;

    // Analyze the image and detect any objects in it
    const result = await client.analyzeImageInStream(buffer, { visualFeatures: ["Objects"] });

    // Write the detected object names back to the blob as metadata
    // (metadata values must be strings, so the array is joined)
    const blobName = context.bindingData.name;
    const blobClient = containerClient.getBlockBlobClient(blobName);
    const metadata = { tags: result.objects.map(obj => obj.objectProperty).join(",") };
    await blobClient.setMetadata(metadata);
};
This code connects to your Azure Cognitive Services account and creates a new ComputerVisionClient object. It also connects to your Blob Storage account; the image content itself arrives through the blob trigger binding.
The code then uses the Computer Vision API to analyze the image and extract any objects it detects. It adds these object tags to the image metadata and saves the updated metadata back to Blob Storage. Remember to add the ComputerVisionEndpoint, ComputerVisionKey, and BlobStorageConnectionString values to your function app's Application Settings.
Testing the Image Processing Pipeline
Now that our image processing pipeline is set up, we can test it by uploading an image to our Blob Storage container. The function should automatically trigger and process the image, adding object tags to the metadata.
To view the updated metadata, select the image in the Azure portal and choose "Properties". You should see a list of object tags extracted from the image.
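You can also read the metadata programmatically. Here is a minimal sketch with the @azure/storage-blob SDK, reusing the same assumed container name and connection string setting as above:
const { BlobServiceClient } = require("@azure/storage-blob");

async function printTags(blobName) {
    const blobServiceClient = BlobServiceClient.fromConnectionString(process.env["BlobStorageConnectionString"]);
    const containerClient = blobServiceClient.getContainerClient("mycontainer");

    // getProperties returns the user-defined metadata the function wrote
    const { metadata } = await containerClient.getBlockBlobClient(blobName).getProperties();
    console.log(metadata.tags); // e.g. "person,dog,bicycle"
}

printTags("sample.jpg").catch(console.error);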
Building a Serverless Web App with Azure Functions and Azure Cosmos DB
Serverless computing has revolutionized the way we build and deploy web applications. With serverless, you can focus on writing code without worrying about managing infrastructure, and pay only for the compute resources you use. In this tutorial, we'll show you how to build a serverless web app with Azure Functions and Azure Cosmos DB that provides scalable and cost-effective data storage and processing.
Prerequisites
Before we get started, you'll need to have the following:
- An Azure account
- Visual Studio Code
- Azure Functions extension for Visual Studio Code
- Azure Cosmos DB extension for Visual Studio Code
First, create a new HTTP-triggered function in your Azure Functions app and add the following code:
const { CosmosClient } = require("@azure/cosmos");

module.exports = async function (context, req) {
    // Connect to Cosmos DB with the endpoint and key from app settings
    const endpoint = process.env["CosmosDBEndpoint"];
    const key = process.env["CosmosDBKey"];
    const client = new CosmosClient({ endpoint, key });

    // Query every item in the container
    const database = client.database("mydatabase");
    const container = database.container("mycontainer");
    const querySpec = { query: "SELECT * FROM c" };
    const { resources } = await container.items.query(querySpec).fetchAll();

    // Return the results as the HTTP response body
    context.res = { body: resources };
};
This code connects to your Azure Cosmos DB account and retrieves all data from the specified container. Replace "mydatabase" and "mycontainer" with your database and container names.
Finally, add your Azure Cosmos DB account endpoint and key to your function's Application Settings. In the Azure Functions Explorer, select your function and choose "Application Settings". Add the following settings:
- CosmosDBEndpoint: Your Azure Cosmos DB account endpoint
- CosmosDBKey: Your Azure Cosmos DB account key
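With the settings in place, you can call the function over HTTP. A hypothetical usage against the local Functions host follows; the function name "GetItems" is an assumption, so substitute your own route:
async function main() {
    // Default local URL pattern for the Azure Functions host
    const response = await fetch("http://localhost:7071/api/GetItems");
    const items = await response.json();
    console.log(items);
}

main().catch(console.error);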
Conclusion
In this tutorial, we learned how to build a serverless web app with Azure Functions and Azure Cosmos DB. We created an Azure Functions app and a new function that retrieves data from Azure Cosmos DB using the Cosmos DB extension for Visual Studio Code. We also created an Azure Cosmos DB account and added a new container to store our data. Finally, we connected our Azure Functions app to Azure Cosmos DB by adding the necessary code and application settings. By using Azure Functions and Azure Cosmos DB together, you can build scalable and cost-effective web applications that handle data storage and processing without managing infrastructure.
Friday, April 28, 2023
Maximizing Azure Functions: Use Cases and Limitations for Effective Serverless Computing
Storing an Excel File with an Azure Function
This function listens to HTTP POST requests and stores the Excel file in Blob storage under the "excel-files" container with a random GUID as the file name. Note that this function requires the Microsoft.Azure.WebJobs.Extensions.Storage NuGet package. When you make a POST request to this function with an Excel file in the request body, the function will store the file in Blob storage and return an HTTP 200 OK response with the message "Excel file stored successfully". You can then use this file in other Azure Functions or download it from Blob storage using the Azure Storage SDK or Azure portal.
using Microsoft.AspNetCore.Http;
using Microsoft.AspNetCore.Mvc;
using Microsoft.Azure.WebJobs;
using Microsoft.Azure.WebJobs.Extensions.Http;
using Microsoft.Extensions.Logging;
using System.IO;
using System.Threading.Tasks;

public static class StoreExcelFunction
{
    [FunctionName("StoreExcel")]
    public static async Task<IActionResult> Run(
        [HttpTrigger(AuthorizationLevel.Function, "post", Route = null)] HttpRequest req,
        [Blob("excel-files/{rand-guid}.xlsx", FileAccess.Write)] Stream excelFile,
        ILogger log)
    {
        // Copy the uploaded file from the request body into the output blob
        await req.Body.CopyToAsync(excelFile);

        log.LogInformation("Excel file stored successfully");
        return new OkObjectResult("Excel file stored successfully");
    }
}
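As a hypothetical usage example, the following Node.js snippet posts a local Excel file to the deployed function; the app name, function key, and file name are placeholders to replace with your own values:
const { readFile } = require("node:fs/promises");

async function main() {
    const data = await readFile("report.xlsx");
    const res = await fetch("https://<app-name>.azurewebsites.net/api/StoreExcel?code=<function-key>", {
        method: "POST",
        body: data
    });
    console.log(await res.text()); // "Excel file stored successfully"
}

main().catch(console.error);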
Thursday, April 27, 2023
Handwritten Digit Recognition with OpenCV in Python
This code loads a pre-trained CNN model to recognize the digits, captures the video from the webcam, and analyzes each frame in real-time to recognize the digits. The code uses OpenCV to preprocess the images and extract the digits from the video frames. The recognized digits are printed on the video frames and displayed in real-time.
import cv2
import numpy as np
from keras.models import load_model

# Load the pre-trained CNN model
model = load_model('model.h5')

# Define the size of the image to be analyzed
IMG_SIZE = 28

# Preprocess a region of interest into the shape the CNN expects
def preprocess_image(img):
    img = cv2.resize(img, (IMG_SIZE, IMG_SIZE))
    # Convert to grayscale only if the image still has color channels
    if img.ndim == 3:
        img = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
    img = cv2.threshold(img, 0, 255, cv2.THRESH_BINARY_INV | cv2.THRESH_OTSU)[1]
    img = img.astype('float32') / 255.0
    img = np.reshape(img, (1, IMG_SIZE, IMG_SIZE, 1))
    return img

# Run the CNN on a single digit image and return the predicted class
def recognize_digit(img):
    img_processed = preprocess_image(img)
    # predict_classes was removed from recent Keras versions,
    # so take the argmax of the class probabilities instead
    digit = np.argmax(model.predict(img_processed), axis=-1)[0]
    return digit

# Capture the video from the webcam
cap = cv2.VideoCapture(0)

while True:
    # Read a frame from the video stream
    ret, frame = cap.read()
    if not ret:
        break

    # Convert the frame to grayscale and threshold it
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    ret, thresh = cv2.threshold(gray, 0, 255, cv2.THRESH_BINARY_INV | cv2.THRESH_OTSU)

    # Find the contours in the thresholded image
    contours, hierarchy = cv2.findContours(thresh, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)

    # Loop through all the contours
    for contour in contours:
        # Find the bounding rectangle of the contour
        x, y, w, h = cv2.boundingRect(contour)

        # Ignore contours that are too small
        if w < 10 or h < 10:
            continue

        # Draw the rectangle around the contour
        cv2.rectangle(frame, (x, y), (x + w, y + h), (0, 255, 0), 2)

        # Extract the digit region and recognize it
        digit_img = gray[y:y+h, x:x+w]
        digit = recognize_digit(digit_img)

        # Print the recognized digit on the frame
        cv2.putText(frame, str(digit), (x, y), cv2.FONT_HERSHEY_SIMPLEX, 1, (0, 0, 255), 2)

    # Display the video stream
    cv2.imshow('Handwritten Digit Recognition', frame)

    # If the 'q' key is pressed, exit the loop
    key = cv2.waitKey(1)
    if key == ord('q'):
        break

# Release the resources
cap.release()
cv2.destroyAllWindows()