Introduction
RoomRadar.ai is an AI-powered hotel search application developed by:
Dr Anchit Chandran (PolyAI | LinkedIn)
Matt Peniket: https://www.linkedin.com/in/matt-peniket-6a051318a
and myself, An-Tien Huang: https://www.linkedin.com/in/an-tien-huang/
as part of our MSc Computer Science project at University College London. Under the expert guidance of our supervisors, Dr Yun Fu (https://www.linkedin.com/in/dryunfu/) from UCL and Prof. Lee Stott from Microsoft, we created a cutting-edge solution that leverages Microsoft Azure infrastructure to deliver an exceptional user experience.
Our application utilises Azure OpenAI and Azure AI Search to offer a multimodal interface, enabling users to search for hotels using text, voice, and images. Azure Maps is also integrated to provide an interactive map for a more engaging and intuitive experience.
RoomRadar.ai showcases the potential of AI in enhancing travel planning, combining advanced technology with user-friendly design to revolutionise the way people find their ideal accommodations.
In this blog, I will walk through the development of the Azure Maps integration and the image vector search feature, which were my main contributions to the project.
Demo of image similarity search
Demo of Azure Maps
Outline
- Overview and Objectives
- Technical Details
  - Azure Maps
  - Azure AI Vision and Azure AI Search
- Results and Outcomes
- Future Development
- Conclusion
Overview and Objectives
RoomRadar.ai was designed to enhance the hotel search user experience with features that traditional hotel search applications do not offer. The interactive map and image search are two of these features and their detailed objectives are listed in the following table:
| Feature | Objectives |
| --- | --- |
| Map View | Offer a dynamic, interactive map of search results, with route planning to nearby underground stations. |
| Image Search | Enable users to find hotels with similar visual attributes. |

The diagram below provides a high-level overview of RoomRadar’s architecture:
Technical Details
Azure Maps
Demo of Azure Maps
RoomRadar's map component, which is used in both Map View and the hotel map on the details page, leverages Azure Maps services. The map construction process is as follows:
Step 0: Set Up the Azure Maps Resources in Azure
For detailed setup steps, as well as how to resolve issues related to Azure Maps integration with Next.js and TypeScript, please refer to my tutorial here.
Step 1: Initialise Base Map Using the Microsoft *atlas* Package
Step 1.1: Initialise Map
Code:
import * as atlas from 'azure-maps-control';

const map = new atlas.Map(mapRef.current!, {
  authOptions: {
    authType: atlas.AuthenticationType.subscriptionKey,
    subscriptionKey: 'key' // Your Azure Maps subscription key
  }
});
Step 1.2: Add Custom Icons for Hotels and Underground Stations
`Promise.all` is used to prevent the map from accessing the custom icons before they have finished loading.
Code:
// Load the custom icons into the map's image sprite.
const iconPromises = [
  map.imageSprite.add('underground_icon', '/map_icons/metro.png'),
  map.imageSprite.add('hotel_red_icon', '/map_icons/hotel_red.png'),
];

Promise.all(iconPromises).then(function () {
  // Rest of initialising code omitted for brevity
});
Step 1.3: Add Layers for Displaying Hotels and Underground Stations and for Displaying Route Information
Code:
// Add a symbol layer for hotels and stations
const symbolLayer = new atlas.layer.SymbolLayer(datasource, 'symbolLayer', {
  iconOptions: {
    image: [
      'match',
      ['get', 'type'],
      // For each entity type, specify the icon name to use.
      'tube', 'underground_icon',
      'hotel', 'hotel_red_icon',
      'hotel_red_icon' // Default icon
    ],
    allowOverlap: false,
    size: 0.1
  },
  textOptions: {
    // Additional code omitted for brevity
  },
  filter: ['any', ['==', ['geometry-type'], 'Point'], ['==', ['geometry-type'], 'MultiPoint']]
});
map.layers.add(symbolLayer);

// Add a line layer for displaying routes
const routeLayer = new atlas.layer.LineLayer(datasource, 'routeLayer', {
  strokeColor: '#0059ff',
  strokeWidth: 5,
  lineJoin: 'round',
  lineCap: 'round'
});
map.layers.add(routeLayer, 'labels');
Step 2: Add Hotel Points on the Map
The hotel points are added using the props (`Properties`) passed in from the search results.
Code:
// Create the data source and register it with the map.
const datasource = new atlas.source.DataSource();
map.sources.add(datasource);

// Create one point feature per hotel from the search results.
const hotelPoints = Properties.map((property: Property) =>
  new atlas.data.Feature(new atlas.data.Point([Number(property.longitude), Number(property.latitude)]), {
    hotel_id: property.location_id,
    type: 'hotel',
    name: property.name,
  })
);
datasource.add(hotelPoints);
Step 3: Add Hotel Cards
The hotel card offers a clear and convenient way for users to browse results while receiving visual feedback on the map. To link the hotel cards with their corresponding map elements, a hash map is used to store hotel IDs and their associated shape IDs on the map. This enables an interactive UI experience, such as displaying a popup on the map when a user hovers over a hotel card:
Code:
// Show a popup on the map when the user hovers over a hotel card.
const listItemHover = (id?: string) => {
  const shapeId = id && hotelIdToShapeIdMap.get(id);
  const shape = shapeId && datasource.getShapeById(shapeId);
  if (shape) {
    showPopup(shape);
  }
};
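For completeness, here is a minimal sketch of how `hotelIdToShapeIdMap` could be populated once the hotel points from Step 2 are in the datasource (the population step itself is my illustration, not necessarily how our production code wires it up):

Code:

// Build the hotel-ID-to-shape-ID lookup after the hotel points are added.
const hotelIdToShapeIdMap = new Map<string, string>();
datasource.getShapes().forEach((shape) => {
  const props = shape.getProperties();
  if (props.type === 'hotel') {
    hotelIdToShapeIdMap.set(String(props.hotel_id), String(shape.getId()));
  }
});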
Step 4: Add Nearby Underground Stations When a Hotel is Clicked
The underground station points are added in the same manner as the hotel points, with the exception that their type is set to 'tube' instead of 'hotel'.
Step 5: Call Azure Maps' Get Route Directions API to Calculate the Route Information
When an underground station point is clicked, the Azure Maps Get Route Directions API is called to retrieve the route coordinates and estimated travel time. The result is then displayed on the map using the `routeLayer` and a popup.
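As a rough sketch of this step (the endpoint and response fields follow the public Route Directions REST API; the coordinate variables are illustrative):

Code:

// Request a pedestrian route between the hotel and the clicked station.
const query = `${hotelLat},${hotelLon}:${stationLat},${stationLon}`;
const res = await fetch(
  `https://atlas.microsoft.com/route/directions/json?api-version=1.0` +
  `&query=${query}&travelMode=pedestrian&subscription-key=${subscriptionKey}`
);
const data = await res.json();
const route = data.routes[0];
const travelTimeMinutes = Math.round(route.summary.travelTimeInSeconds / 60);

// Convert the returned points into a LineString. The routeLayer added in
// Step 1.3 renders any LineString geometry in the datasource.
const coordinates = route.legs.flatMap((leg: any) =>
  leg.points.map((p: any) => [p.longitude, p.latitude])
);
datasource.add(new atlas.data.LineString(coordinates));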
Azure AI Vision and Azure AI Search
Demo of image similarity search
The image search feature is implemented using a combination of the following technologies:
- Azure AI Vision: Converts images into vector embeddings. Azure AI Vision was chosen for embedding generation due to its generous free quota, which can accommodate the required image volume.
- MongoDB Atlas: Stores the transformed vector embeddings for efficient retrieval.
- Azure AI Search and MongoDB Atlas Vector Search: Performs vector searches to find and return similar images based on the embeddings.
The overall process is illustrated in the diagram below:
Step 1: Convert images into vectors
The images are transformed by calling the Multimodal Embeddings API from Azure AI Vision, which takes binary image data as input and returns a 1x1024 vector.
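A minimal sketch of this call, assuming the `retrieval:vectorizeImage` endpoint of the Multimodal Embeddings API (the variable names are illustrative):

Code:

// Send raw image bytes to the multimodal embeddings endpoint.
const res = await fetch(
  `${visionEndpoint}/computervision/retrieval:vectorizeImage` +
  `?api-version=2024-02-01&model-version=2023-04-15`,
  {
    method: 'POST',
    headers: {
      'Ocp-Apim-Subscription-Key': visionKey,
      'Content-Type': 'application/octet-stream'
    },
    body: imageBuffer // binary image data
  }
);
const { vector } = await res.json(); // the 1x1024 embedding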
Step 2: Store and Index Vectors
The converted image vectors are stored in MongoDB Atlas and Azure AI Search for indexing. Each record includes three fields: hotel ID, image URL, and image vector. The hotel ID and image URL are included to enable easy retrieval and display of hotels with similar images.
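On the Azure AI Search side, indexing could look roughly like the sketch below, which uses the standard documents-index REST call (the `id` key field and the record shape are assumptions; the other field names match the search code later in this post):

Code:

// Upload one record per image to the Azure AI Search index.
const body = {
  value: images.map((img) => ({
    '@search.action': 'mergeOrUpload',
    id: img.id, // assumed key field
    hotel_location_id: img.hotelId,
    image_url: img.url,
    image_vector: img.vector
  }))
};
await fetch(`${baseUrl}/indexes/${indexName}/docs/index?api-version=${apiVersion}`, {
  method: 'POST',
  headers: { 'Content-Type': 'application/json', 'api-key': token },
  body: JSON.stringify(body)
});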
Step 3: Retrieve the image vector upon search
When the user clicks the "Find Similar" button, the stored image vector is retrieved from the database to perform a vector search. This approach, compared to real-time image conversion, avoids redundant API calls to Azure AI Vision, reducing costs and improving performance.
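The lookup itself is a plain database read; a sketch with Prisma might look like this (the `hotelImage` model name is hypothetical):

Code:

// Fetch the pre-computed embedding for the clicked image.
const record = await prisma.hotelImage.findFirst({
  where: { image_url: imageUrl },
  select: { image_vector: true }
});
const image_vector = record?.image_vector;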
Step 4: Perform vector search
The retrieved vector is used to perform Atlas Vector Search and Azure AI Search in parallel. Atlas Vector Search is implemented using Prisma, which supports this functionality with an out-of-the-box function. Azure AI Search, on the other hand, is implemented through API calls:
Code:
const body = {
  count: true,
  select: 'hotel_location_id, image_url',
  vectorQueries: [
    {
      vector: image_vector,
      k: 6,
      fields: 'image_vector',
      kind: 'vector',
      exhaustive: true
    }
  ]
};

const response = await fetch(`${baseUrl}/indexes/${indexName}/docs/search?api-version=${apiVersion}`, {
  method: 'POST',
  headers: {
    'Content-Type': 'application/json',
    'api-key': token
  },
  body: JSON.stringify(body)
});
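For comparison, the Atlas side might be expressed through Prisma's `aggregateRaw`, since `$vectorSearch` is an Atlas aggregation stage (the model and index names here are assumptions):

Code:

// Run Atlas Vector Search via Prisma's raw aggregation support.
const atlasResults = await prisma.hotelImage.aggregateRaw({
  pipeline: [
    {
      $vectorSearch: {
        index: 'image_vector_index', // assumed Atlas index name
        path: 'image_vector',
        queryVector: image_vector,
        numCandidates: 100,
        limit: 6
      }
    },
    { $project: { hotel_location_id: 1, image_url: 1 } }
  ]
});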
Step 5: Combine and return results
Once the results from both searches are ready, the returned hotel lists are merged and deduplicated before being displayed to users.
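A simple way to express the merge (assuming each result carries a `hotel_location_id`):

Code:

// Merge both result lists, keeping the first occurrence of each hotel.
const merged = [...azureResults, ...atlasResults];
const seen = new Set<string>();
const deduplicated = merged.filter((hotel) => {
  if (seen.has(hotel.hotel_location_id)) return false;
  seen.add(hotel.hotel_location_id);
  return true;
});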
Results and Outcomes
Key accomplishments of this project include:
- The implementation of the Azure Maps API provides a dynamic map view of search results, combined with route planning, further enhancing the user experience.
- The AI-powered image search feature enables users to find hotels with similar visual attributes, addressing complex search needs that traditional platforms do not meet.
The two features also received positive feedback during the user acceptance testing phase. In particular, the 20 friends invited to test the system, none of whom had prior knowledge of the project or had seen any system demos, found the interaction intuitive and user-friendly.
Future Development
The following features could be developed to make RoomRadar even more valuable:
- Expand route planning beyond hotels and underground stations. By integrating a chatbot with Azure Maps, the system could display optimised tourism routes, improving the decision-making process for users.
- Allow users to upload their own images and search for hotels with similar images, which adds a personalised touch to the search process.
Conclusion
RoomRadar.ai demonstrates the transformative potential of AI and cloud computing in travel planning. By leveraging Azure Maps and Azure AI Search, we've created an innovative hotel search platform that offers an interactive map view with route planning and a groundbreaking image-based search feature. The positive feedback from our User Acceptance Testing validates our approach, confirming that RoomRadar.ai addresses real user needs in the hotel search market. As we look to the future, we're excited about the possibilities for expanding our platform's capabilities, continuing to push the boundaries of AI-powered travel planning.
For more information or to get involved:
- Contact me via LinkedIn or email
- Check out my detailed tutorial for Azure Maps
- Check out the Official Azure Examples